
Summary

In this chapter, we explored the foundations of DL, from the basic single perceptron to more complex multilayer perceptron models. We started with the past, present, and future of DL and, from there, built a basic reference implementation of a single perceptron so that we could appreciate the raw simplicity of DL. We then built on that knowledge by combining multiple perceptrons into a multilayer implementation using TF, which allowed us to see how a raw internal model is represented and trained on a much more complex dataset, MNIST. Next, we took a long journey through the math and, although Keras abstracted much of the complexity away from us, looked in depth at how gradient descent and backpropagation work. Finally, we finished off the chapter with another Keras reference implementation, this time featuring an autoencoder. Autoencoding allows us to train a network with multiple purposes and extends our understanding of how network architecture doesn't have to be linear.
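To recap the raw simplicity of that starting point, here is a minimal single-perceptron sketch in plain Python. This is an illustrative example, not the chapter's exact reference implementation: it learns the logical AND function using the classic perceptron update rule with a step activation.

```python
import random

def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single perceptron with the classic learning rule."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # two input weights
    b = 0.0                                            # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Step activation: fire (1) if the weighted sum exceeds 0
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights and bias toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The AND truth table as training data
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a separating line in a finite number of updates; the multilayer models later in the chapter exist precisely because a single perceptron cannot do this for non-separable problems such as XOR.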

For the next chapter, we will build on our current level of knowledge and discover convolutional and recurrent neural networks. These extensions provide additional capabilities to the base form of a neural network and have played a significant part in our most recent DL advances.

After that, we will begin our journey into building components for games when we look at another element considered foundational to DL—the GAN. GANs are like a Swiss Army knife in DL and, as we will see, they offer us plenty of uses.
