
Seeing the world through our agent's eyes

In order to make our AI convincing, our agent needs to be able to respond to the events around it: the environment, the player, and even other agents. Much like real living organisms, our agent can rely on sight, sound, and other "physical" stimuli. However, we have an advantage: we can access far more data within our game than a real organism can gather from its surroundings, such as the player's location (regardless of whether they are in the vicinity), their inventory, the location of items around the world, and any variable you choose to expose to that agent in your code:

In the preceding diagram, our agent's field of vision is represented by the cone in front of it, and its hearing range is represented by the grey circle surrounding it.

Vision, sound, and other senses can be thought of, at their most essential level, as data. Vision is just light particles, sound is just vibrations, and so on. While we don't need to replicate the complexity of a constant stream of light particles bouncing around and entering our agent's eyes, we can still model the data in a way that produces believable results.
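To make this concrete, here is a minimal 2D sketch of the two senses described above: a vision cone defined by a view angle and distance, and a hearing range defined by a radius. All names and parameters here are illustrative, not taken from any particular engine or from this book's own code.

```python
import math

def can_see(agent_pos, agent_facing, target_pos, view_angle_deg, view_distance):
    """Return True if target_pos falls inside the agent's vision cone.

    agent_facing is the agent's heading in radians; view_angle_deg is the
    full width of the cone. Positions are (x, y) tuples. This models vision
    as pure data (angle + distance), not simulated light.
    """
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    if math.hypot(dx, dy) > view_distance:
        return False
    # Smallest angle between the facing direction and the direction to target
    angle_to_target = math.atan2(dy, dx)
    diff = abs((angle_to_target - agent_facing + math.pi) % (2 * math.pi) - math.pi)
    return diff <= math.radians(view_angle_deg) / 2

def can_hear(agent_pos, sound_pos, hearing_radius):
    """Return True if a sound event falls within the agent's hearing circle."""
    return math.dist(agent_pos, sound_pos) <= hearing_radius
```

In a real game, `can_see` would typically be followed by a line-of-sight raycast so that walls block vision, but the angle-and-distance test above is the cheap first filter most implementations start with.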

As you might imagine, we can similarly model other sensory systems, and not just the ones used for biological beings such as sight, sound, or smell, but even digital and mechanical systems that can be used by enemy robots or towers, for example sonar and radar.

If you've ever played Metal Gear Solid, then you've definitely seen these concepts in action—each enemy's field of vision is denoted on the player's mini map as a cone-shaped field of view. Enter the cone and an exclamation mark appears over the enemy's head, followed by an unmistakable chime, letting the player know that they've been spotted.
