- TensorFlow Reinforcement Learning Quick Start Guide
- Kaushik Balakrishnan
Algorithms covered in this book
In Chapter 2, Temporal Difference, SARSA, and Q-Learning, we will look into our first two RL algorithms: Q-learning and SARSA. Both of these algorithms are tabular and do not require the use of neural networks, so we will code them in Python and NumPy. In Chapter 3, Deep Q-Network, we will cover DQN and code the agent in TensorFlow, the framework we will use for the rest of the book; we will then train it to play Atari Breakout. In Chapter 4, Double DQN, Dueling Architectures, and Rainbow, we will cover double DQN, dueling network architectures, and Rainbow DQN. In Chapter 5, Deep Deterministic Policy Gradient, we will look at our first actor-critic RL algorithm, DDPG, learn about policy gradients, and apply them to a continuous-action problem. In Chapter 6, Asynchronous Methods – A3C and A2C, we will investigate A3C, another RL algorithm, which uses a master and several worker processes. In Chapter 7, Trust Region Policy Optimization and Proximal Policy Optimization, we will investigate two more RL algorithms: TRPO and PPO. Finally, in Chapter 8, Deep RL Applied to Autonomous Driving, we will apply DDPG and PPO to train an agent to drive a car autonomously. From Chapter 3, Deep Q-Network, to Chapter 8, Deep RL Applied to Autonomous Driving, we'll use TensorFlow agents. Have fun learning RL.
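As a taste of the tabular approach mentioned above, here is a minimal sketch of a single Q-learning update in Python and NumPy. The environment sizes, state/action indices, and hyperparameter values are illustrative assumptions, not taken from the book; the update rule itself is the standard one, Q(s,a) ← Q(s,a) + α(r + γ max_a' Q(s',a') − Q(s,a)).

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step on the Q-table (in place).

    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    """
    td_target = r + gamma * np.max(Q[s_next])   # bootstrap off the greedy next action
    Q[s, a] += alpha * (td_target - Q[s, a])    # move toward the TD target
    return Q

# Hypothetical toy problem: 5 states, 2 actions, Q-table initialized to zero.
Q = np.zeros((5, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])  # 0.1 * (1.0 + 0.99 * 0.0 - 0.0) = 0.1
```

SARSA differs only in the target: instead of `np.max(Q[s_next])` it uses `Q[s_next, a_next]` for the action actually taken next, which is what makes it on-policy.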