
Defining the actions of the agent

The agent performs actions to explore the environment. Deciding which action to take in each state is the central problem in RL; ideally, you want the agent to learn to take optimal actions.

An action is the decision an agent takes in a certain state, s_t. It is typically denoted a_t, where, as before, the subscript t denotes the time instant. The actions available to an agent depend on the problem. For instance, an agent in a maze can decide to take a step north, south, east, or west. These are called discrete actions, as there is a fixed number of possibilities. For an autonomous car, on the other hand, the actions can be the steering angle, throttle value, brake value, and so on; these are called continuous actions, as they can take real-number values in a bounded range. For example, the steering angle can be 40 degrees from the north-south line, and the throttle can be at 60%.

Thus, actions a_t can be either discrete or continuous, depending on the problem at hand. Some RL approaches handle discrete actions, while others are suited to continuous actions.
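To make the distinction concrete, here is a minimal sketch of the two kinds of action spaces using OpenAI Gym's spaces API (assuming the gym package is installed); the bounds for the car's controls are illustrative, not taken from any particular environment:

    import numpy as np
    import gym

    # Discrete action space: the maze agent picks one of four moves
    # (0 = north, 1 = south, 2 = east, 3 = west).
    maze_actions = gym.spaces.Discrete(4)
    a_t = maze_actions.sample()  # e.g. 2, meaning step east

    # Continuous action space: the car outputs real-valued controls
    # in bounded ranges, here steering angle in [-40, 40] degrees,
    # throttle and brake each in [0, 1] (illustrative bounds).
    car_actions = gym.spaces.Box(
        low=np.array([-40.0, 0.0, 0.0], dtype=np.float32),
        high=np.array([40.0, 1.0, 1.0], dtype=np.float32),
    )
    a_t = car_actions.sample()  # e.g. array([12.3, 0.6, 0.0])

Sampling a discrete space returns an integer index into a fixed set of choices, while sampling a box returns a real-valued vector, which is why different RL algorithms are needed for the two cases.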

A schematic of the agent and its interaction with the environment is shown in the following diagram:

Figure 1: Schematic showing the agent and its interaction with the environment

Now that we know what an agent is, we will look at the policies that the agent learns, what value and advantage functions are, and how these quantities are used in RL.
