- PyTorch 1.x Reinforcement Learning Cookbook
- Yuxi (Hayden) Liu
How it works...
In this oversimplified study-sleep-game process, the optimal policy, that is, the policy that achieves the highest total reward, is to choose action a0 in every step. However, it won't be that straightforward in most cases. Also, the actions taken in individual steps won't necessarily be the same; they usually depend on the state. So, in real-world cases, we will have to solve the MDP by finding the optimal policy.
The value function of a policy measures how good it is for an agent to be in each state, given the policy being followed. The greater the value, the better the state.
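More precisely, the standard definition of the value of a state s under a policy π is the expected discounted return obtained by starting from s and following π thereafter:

V^π(s) = E[ R_{t+1} + γ * R_{t+2} + γ^2 * R_{t+3} + ... | S_t = s ]

Here, γ is the discount factor that weighs immediate rewards against future ones.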
In Step 4, we calculated the value, V, of the optimal policy using matrix inversion. According to the Bellman equation, the relationship between the value at step t+1 and that at step t can be expressed as follows:

Vt+1 = R + γ * T * Vt

Here, R is the reward vector for each state under the policy, T is the transition matrix under the policy, and γ is the discount factor.
When the value converges, which means Vt+1 = Vt, we can derive the value, V, as follows:

V = (I - γ * T)^(-1) * R
Here, I is the identity matrix with 1s on the main diagonal.
One advantage of solving an MDP with matrix inversion is that you always get an exact answer. The downside is its scalability: since we need to compute the inverse of an m * m matrix (where m is the number of possible states), the computation becomes costly when the number of states is large.
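For reference, here is a minimal sketch of that computation in PyTorch. The helper name, transition matrix, reward vector, and discount factor below are placeholder values for illustration, not the actual numbers from the recipe's study-sleep-game MDP:

```python
import torch

# Placeholder inputs for a fixed policy (illustrative values only).
gamma = 0.5
T = torch.tensor([[0.8, 0.1, 0.1],
                  [0.1, 0.6, 0.3],
                  [0.7, 0.2, 0.1]])        # T[i][j] = P(next state j | state i)
R = torch.tensor([[1.0], [0.0], [-1.0]])   # reward received in each state

def cal_value_matrix_inversion(gamma, trans_matrix, rewards):
    """Solve V = (I - gamma * T)^-1 * R exactly via matrix inversion."""
    identity = torch.eye(rewards.shape[0])
    return torch.mm(torch.inverse(identity - gamma * trans_matrix), rewards)

V = cal_value_matrix_inversion(gamma, T, R)
print(V)   # exact value of each state under the fixed policy
```

Because the solution is exact, there is no iteration or convergence threshold to tune; the trade-off, as noted above, is the cost of inverting the m * m matrix as the state space grows.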