
What this book covers

Chapter 1, Getting Started with Deep Learning, covers the evolution of machine intelligence and artificial intelligence and, eventually, deep learning. We'll then look at some applications of deep learning and set up our environment for coding our way through deep learning models.

Chapter 2, Deep Feedforward Networks, covers the evolution history of deep feedforward networks and their architecture. We will also demonstrate how to bring up and preprocess data for training a deep learning network.

Chapter 3, Restricted Boltzmann Machines and Autoencoders, explains restricted Boltzmann machines (RBMs) and their evolutionary path. We will then dig deeper into the logic behind them, implement RBMs in TensorFlow, and apply them to build a movie recommender. We'll then learn about autoencoders and briefly look at their evolutionary path. We will also illustrate a variety of autoencoders, categorized by their architectures or forms of regularization.

Chapter 4, CNN Architecture, covers an important class of deep learning networks for images, called convolutional neural networks (CNNs). We will also discuss the benefits of CNNs over deep feedforward networks. We will then learn about some famous image classification CNNs and build our first CNN image classifier on the CIFAR-10 dataset. Then, we'll move on to object detection with CNNs and the TensorFlow detection model zoo.

Chapter 5, Mobile Neural Networks and CNNs, discusses the need for mobile neural networks to run CNN workloads in real-time applications. We will also talk about the two benchmark architectures introduced by Google: MobileNet and MobileNetV2. Later, we'll discuss the successful combination of MobileNet with object detection networks such as SSD to achieve object detection on mobile devices.

Chapter 6, Recurrent Neural Networks, explains one of the most important deep learning models, recurrent neural networks (RNNs), along with their architecture and evolutionary path. Later, we'll discuss a variety of architectures categorized by the recurrent layer, including vanilla RNNs, LSTM, GRU, and bidirectional RNNs, and apply the vanilla architecture to write our own War and Peace (a bit nonsensical though). We'll also introduce the bidirectional architecture, which allows the model to preserve information from both past and future contexts of the sequence.

Chapter 7, Generative Adversarial Networks, explains one of the most interesting deep learning models, generative adversarial networks (GANs), and their evolutionary path. We will illustrate a variety of GAN architectures with an example of image generation, exploring four in particular: vanilla GANs, deep convolutional GANs, conditional GANs, and information-maximizing GANs.

Chapter 8, New Trends in Deep Learning, talks about a few deep learning ideas that we have found impactful this year and expect to become more prominent in the future. We'll also learn how Bayesian deep learning combines the merits of both Bayesian learning and deep learning.
