Deep Learning Essentials
Aspiring data scientists and machine learning experts who have limited or no exposure to deep learning will find this book to be very useful. If you are looking for a resource that gets you up and running with the fundamentals of deep learning and neural networks, this book is for you. As the models in the book are trained using popular Python-based libraries such as TensorFlow and Keras, it would be useful to have sound programming knowledge of Python.
Table of Contents (274 entries)
- Cover Page
- Title Page
- Packt Upsell
- Why subscribe?
- PacktPub.com
- Contributors
- About the authors
- About the reviewer
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Why Deep Learning?
- What is AI and deep learning?
- The history and rise of deep learning
- Why deep learning?
- Advantages over traditional shallow methods
- Impact of deep learning
- The motivation of deep architecture
- The neural viewpoint
- The representation viewpoint
- Distributed feature representation
- Hierarchical feature representation
- Applications
- Lucrative applications
- Success stories
- Deep learning for business
- Future potential and challenges
- Summary
- Getting Yourself Ready for Deep Learning
- Basics of linear algebra
- Data representation
- Data operations
- Matrix properties
- Deep learning with GPU
- Deep learning hardware guide
- CPU cores
- CPU cache size
- RAM size
- Hard drive
- Cooling systems
- Deep learning software frameworks
- TensorFlow – a deep learning library
- Caffe
- MXNet
- Torch
- Theano
- Microsoft Cognitive Toolkit
- Keras
- Framework comparison
- Setting up deep learning on AWS
- Setup from scratch
- Setup using Docker
- Summary
- Getting Started with Neural Networks
- Multilayer perceptrons
- The input layer
- The output layer
- Hidden layers
- Activation functions
- Sigmoid or logistic function
- Tanh or hyperbolic tangent function
- ReLU
- Leaky ReLU and maxout
- Softmax
- Choosing the right activation function
- How a network learns
- Weight initialization
- Forward propagation
- Backpropagation
- Calculating errors
- Backpropagation
- Updating the network
- Automatic differentiation
- Vanishing and exploding gradients
- Optimization algorithms
- Regularization
- Deep learning models
- Convolutional Neural Networks
- Convolution
- Pooling/subsampling
- Fully connected layer
- Overall
- Restricted Boltzmann Machines
- Energy function
- Encoding and decoding
- Contrastive divergence (CD-k)
- Stacked/continuous RBM
- RBM versus Boltzmann Machines
- Recurrent neural networks (RNN/LSTM)
- Cells in RNN and unrolling
- Backpropagation through time
- Vanishing gradient and LSTM
- Cells and gates in LSTM
- Step 1 – The forget gate
- Step 2 – Updating memory/cell state
- Step 3 – The output gate
- Practical examples
- TensorFlow setup and key concepts
- Handwritten digits recognition
- Summary
- Deep Learning in Computer Vision
- Origins of CNNs
- Convolutional Neural Networks
- Data transformations
- Input preprocessing
- Data augmentation
- Network layers
- Convolution layer
- Pooling or subsampling layer
- Fully connected or dense layer
- Network initialization
- Regularization
- Loss functions
- Model visualization
- Handwritten digit classification example
- Fine-tuning CNNs
- Popular CNN architectures
- AlexNet
- Visual Geometry Group
- GoogLeNet
- ResNet
- Summary
- NLP - Vector Representation
- Traditional NLP
- Bag of words
- Weighting the terms: tf-idf
- Deep learning NLP
- Motivation and distributed representation
- Word embeddings
- Idea of word embeddings
- Advantages of distributed representation
- Problems of distributed representation
- Commonly used pre-trained word embeddings
- Word2Vec
- Basic idea of Word2Vec
- The word windows
- Generating training data
- Negative sampling
- Hierarchical softmax
- Other hyperparameters
- Skip-Gram model
- The input layer
- The hidden layer
- The output layer
- The loss function
- Continuous Bag-of-Words model
- Training a Word2Vec model using TensorFlow
- Using existing pre-trained Word2Vec embeddings
- Word2Vec from Google News
- Using the pre-trained Word2Vec embeddings
- Understanding GloVe
- FastText
- Applications
- Example use cases
- Fine-tuning
- Summary
- Advanced Natural Language Processing
- Deep learning for text
- Limitations of neural networks
- Recurrent neural networks
- RNN architectures
- Basic RNN model
- Training RNN is tough
- Long short-term memory network
- LSTM implementation with TensorFlow
- Applications
- Language modeling
- Sequence tagging
- Machine translation
- Seq2Seq inference
- Chatbots
- Summary
- Multimodality
- What is multimodality learning?
- Challenges of multimodality learning
- Representation
- Translation
- Alignment
- Fusion
- Co-learning
- Image captioning
- Show and tell
- Encoder
- Decoder
- Training
- Testing/inference
- Beam Search
- Other types of approaches
- Datasets
- Evaluation
- BLEU
- ROUGE
- METEOR
- CIDEr
- SPICE
- Rank position
- Attention models
- Attention in NLP
- Attention in computer vision
- The difference between hard attention and soft attention
- Visual question answering
- Multi-source based self-driving
- Summary
- Deep Reinforcement Learning
- What is reinforcement learning (RL)?
- Problem setup
- Value learning-based algorithms
- Policy search-based algorithms
- Actor-critic-based algorithms
- Deep reinforcement learning
- Deep Q-network (DQN)
- Experience replay
- Target network
- Reward clipping
- Double-DQN
- Prioritized experience replay
- Dueling DQN
- Implementing reinforcement learning
- Simple reinforcement learning example
- Reinforcement learning with Q-learning example
- Summary
- Deep Learning Hacks
- Massaging your data
- Data cleaning
- Data augmentation
- Data normalization
- Tricks in training
- Weight initialization
- All-zero
- Random initialization
- ReLU initialization
- Xavier initialization
- Optimization
- Learning rate
- Mini-batch
- Clip gradients
- Choosing the loss function
- Multi-class classification
- Multi-class multi-label classification
- Regression
- Others
- Preventing overfitting
- Batch normalization
- Dropout
- Early stopping
- Fine-tuning
- Fine-tuning
- When to use fine-tuning
- When not to use fine-tuning
- Tricks and techniques
- List of pre-trained models
- Model compression
- Summary
- Deep Learning Trends
- Recent models for deep learning
- Generative Adversarial Networks
- Capsule networks
- Novel applications
- Genomics
- Predictive medicine
- Clinical imaging
- Lip reading
- Visual reasoning
- Code synthesis
- Summary
- Other Books You May Enjoy
- Leave a review – let other readers know what you think