
How it works...

In this oversimplified study-sleep-game process, the optimal policy, that is, the policy that achieves the highest total reward, is to choose action a0 in every step. Most cases won't be that straightforward, however, and the actions taken at individual steps won't necessarily be the same; they usually depend on the state. So, in real-world cases, we have to solve an MDP by finding the optimal policy.

The value function of a policy measures how good it is for an agent to be in each state, given the policy being followed. The greater the value, the better the state.

In Step 4, we calculated the value, V, of the optimal policy using matrix inversion. According to the Bellman Equation, the relationship between the value at step t+1 and that at step t can be expressed as follows:
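V_{t+1} = R + γ P V_t

(Writing R for the reward vector with one entry per state, P for the transition matrix under the policy being followed, and γ for the discount factor.)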

When the value converges, which means V_{t+1} = V_t, we can derive the value, V, as follows:
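V = R + γ P V
(I - γ P) V = R
V = (I - γ P)^{-1} R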

Here, I is the identity matrix with 1s on the main diagonal.

One advantage of solving an MDP with matrix inversion is that you always get an exact answer, but the downside is poor scalability: since we need to invert an m * m matrix (where m is the number of possible states), the computation becomes costly when there is a large number of states.
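As a concrete illustration (this is a minimal sketch, not the code from the recipe itself), the following NumPy snippet solves the Bellman equation by matrix inversion for a made-up three-state policy; the transition matrix P, reward vector R, and discount factor gamma are illustrative placeholders:

import numpy as np

gamma = 0.5                                # discount factor (illustrative)

# Transition matrix under the fixed policy: P[i, j] is the probability
# of moving from state i to state j (made-up numbers, three states).
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.6, 0.3],
              [0.1, 0.2, 0.7]])

# Reward vector: the immediate reward received in each state (made-up).
R = np.array([1.0, 0.0, -1.0])

# Solve V = R + gamma * P @ V  =>  V = (I - gamma * P)^{-1} R
I = np.eye(len(R))
V = np.linalg.inv(I - gamma * P) @ R
print(V)                                   # exact value of each state under this policy

The inversion returns the exact value of every state in one shot, but inverting the m * m matrix costs roughly O(m^3) with standard methods, which is the scalability concern mentioned above.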
