Hands-On Deep Learning for Games
The number of applications of deep learning and neural networks has multiplied in the last couple of years. Neural nets have enabled significant breakthroughs in everything from computer vision, voice generation, and voice recognition to self-driving cars. Game development is also a key area where these techniques are being applied. This book will give an in-depth view of the potential of deep learning and neural networks in game development. We will take a look at the foundations, from multilayer perceptrons through to convolutional and recurrent networks, in applications ranging from GANs that create music or textures to self-driving cars and chatbots. Then we introduce deep reinforcement learning through the multi-armed bandit problem and other OpenAI Gym environments. As we progress through the book, we will gain insights into DRL techniques such as motivated reinforcement learning with curiosity and curriculum learning. We also take a closer look at deep reinforcement learning and, in particular, the Unity ML-Agents toolkit. By the end of the book, we will look at how to apply DRL and the ML-Agents toolkit to enhance, test, and automate your games or simulations. Finally, we will cover your possible next steps and areas for future learning.
Table of Contents (192 chapters)
- Cover Page
- Title Page
- Copyright and Credits
- Hands-On Deep Learning for Games
- Dedication
- About Packt
- Why subscribe?
- Packt.com
- Contributors
- About the author
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Section 1: The Basics
- Deep Learning for Games
- The past, present, and future of DL
- The past
- The present
- The future
- Neural networks – the foundation
- Training a perceptron in Python
- Multilayer perceptron in TF
- TensorFlow Basics
- Training neural networks with backpropagation
- The Cost function
- Partial differentiation and the chain rule
- Building an autoencoder with Keras
- Training the model
- Examining the output
- Exercises
- Summary
- Convolutional and Recurrent Networks
- Convolutional neural networks
- Monitoring training with TensorBoard
- Understanding convolution
- Building a self-driving CNN
- Spatial convolution and pooling
- The need for Dropout
- Memory and recurrent networks
- Vanishing and exploding gradients rescued by LSTM
- Playing Rock Paper Scissors with LSTMs
- Exercises
- Summary
- GAN for Games
- Introducing GANs
- Coding a GAN in Keras
- Training a GAN
- Optimizers
- Wasserstein GAN
- Generating textures with a GAN
- Batch normalization
- Leaky and other ReLUs
- A GAN for creating music
- Training the music GAN
- Generating music via an alternative GAN
- Exercises
- Summary
- Building a Deep Learning Gaming Chatbot
- Neural conversational agents
- General conversational models
- Sequence-to-sequence learning
- Breaking down the code
- Thought vectors
- DeepPavlov
- Building the chatbot server
- Message hubs (RabbitMQ)
- Managing RabbitMQ
- Sending and receiving to/from the MQ
- Writing the message queue chatbot
- Running the chatbot in Unity
- Installing AMQP for Unity
- Exercises
- Summary
- Section 2: Deep Reinforcement Learning
- Introducing DRL
- Reinforcement learning
- The multi-armed bandit
- Contextual bandits
- RL with the OpenAI Gym
- A Q-Learning model
- Markov decision process and the Bellman equation
- Q-learning
- Q-learning and exploration
- First DRL with Deep Q-learning
- RL experiments
- Keras RL
- Exercises
- Summary
- Unity ML-Agents
- Installing ML-Agents
- Training an agent
- What's in a brain?
- Monitoring training with TensorBoard
- Running an agent
- Loading a trained brain
- Exercises
- Summary
- Agent and the Environment
- Exploring the training environment
- Training the agent visually
- Reverting to the basics
- Understanding state
- Understanding visual state
- Convolution and visual state
- To pool or not to pool
- Recurrent networks for remembering series
- Tuning recurrent hyperparameters
- Exercises
- Summary
- Understanding PPO
- Marathon RL
- The partially observable Markov decision process
- Actor-Critic and continuous action spaces
- Expanding network architecture
- Understanding TRPO and PPO
- Generalized advantage estimate
- Learning to tune PPO
- Coding changes required for control projects
- Multiple agent policy
- Exercises
- Summary
- Rewards and Reinforcement Learning
- Rewards and reward functions
- Building reward functions
- Sparsity of rewards
- Curriculum Learning
- Understanding Backplay
- Implementing Backplay through Curriculum Learning
- Curiosity Learning
- The Curiosity Intrinsic module in action
- Trying ICM on Hallway/VisualHallway
- Exercises
- Summary
- Imitation and Transfer Learning
- IL or behavioral cloning
- Online training
- Offline training
- Setting up for training
- Feeding the agent
- Transfer learning
- Transferring a brain
- Exploring TensorFlow checkpoints
- Imitation Transfer Learning
- Training multiple agents with one demonstration
- Exercises
- Summary
- Building Multi-Agent Environments
- Adversarial and cooperative self-play
- Training self-play environments
- Adversarial self-play
- Multi-brain play
- Adding individuality with intrinsic rewards
- Extrinsic rewards for individuality
- Creating uniqueness with customized reward functions
- Configuring the agents' personalities
- Exercises
- Summary
- Section 3: Building Games
- Debugging/Testing a Game with DRL
- Introducing the game
- Setting up ML-Agents
- Introducing rewards to the game
- Setting up TestingAcademy
- Scripting the TestingAgent
- Setting up the TestingAgent
- Overriding the Unity input system
- Building the TestingInput
- Adding TestingInput to the scene
- Overriding the game input
- Configuring the required brains
- Time for training
- Testing through imitation
- Configuring the agent to use IL
- Analyzing the testing process
- Sending custom analytics
- Exercises
- Summary
- Obstacle Tower Challenge and Beyond
- The Unity Obstacle Tower Challenge
- Deep Learning for your game?
- Building your game
- More foundations of learning
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think