
Setting up a bandit problem

A straightforward MABP involves a slot machine with n arms (or, equivalently, a row of n one-armed machines). We have a set amount of money to put into these machines, and we want to maximize our payout. We keep track of which machines pay out on each pull and build up a probability distribution of each machine's payout.

When we start playing, we don't know which of these arms will pay out more than the others; the only way to find out is to play each one and observe, over the long run, how often it pays out. What strategy should we use to decide which arms to pull, when to pull them, and when to prioritize one arm over the others?

For simplicity, let's assume that each time we pull an arm, we get a reward of either $1 or $0 (a bandit with a payout of either 1 or 0 is called a Bernoulli bandit). With a particular 4-armed bandit, we might get results like the ones described below.
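
Before we walk through those results, here is a minimal sketch of how such a trial could be simulated in Python. The reward probabilities in true_probs are hypothetical values used only to drive the simulation; in a real bandit problem, these are exactly the quantities we do not know.

import random

# Hypothetical reward probabilities for the four arms. In a real bandit
# problem these values are unknown; they are assumed here only so that
# the simulation has something to draw rewards from.
true_probs = [0.3, 0.4, 0.2, 0.8]

def pull(arm):
    """Pull one arm of the Bernoulli bandit: the reward is either 1 or 0."""
    return 1 if random.random() < true_probs[arm] else 0

# Try each of the four arms once and print the reward we observe.
for arm in range(4):
    print("Arm", arm + 1, "reward:", pull(arm))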

In this example trial, we start out in exploration mode (meaning that we have a high epsilon value), so we try each of the four arms once. We get no reward from arms 1 and 3, but we do get rewards from arms 2 and 4. We pull arm 2 again hoping for another reward, but we don't get one. We pull arm 4 again and get a reward, and now it looks like arm 4 is a good arm, so we pull it again and get another reward.

By the end of the trial, we have the following results:

  • Arm 1: 2 pulls, 1 win, and 50% success
  • Arm 2: 3 pulls, 1 win, and 33% success
  • Arm 3: 1 pull, 0 wins, and 0% success
  • Arm 4: 3 pulls, 3 wins, and 100% success
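
Keeping this tally in code is straightforward. The short sketch below simply recomputes the success rates from the pull and win counts listed above; the tally dictionary mirrors the example trial and is not part of any particular library.

# (pulls, wins) for each arm at the end of the example trial above
tally = {1: (2, 1), 2: (3, 1), 3: (1, 0), 4: (3, 3)}

for arm, (pulls, wins) in tally.items():
    rate = wins / pulls if pulls else 0.0
    print("Arm {}: {} pulls, {} wins, {:.0%} success".format(arm, pulls, wins, rate))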

Based on these results, we now have some reason to believe that arm 4 is a good arm to pull and that arm 2 is not necessarily very good. Arm 3 has a 0% observed success rate; however, because we have only pulled it once, we should pull it more times to gather more information before making a decision about it.

Similarly, arm 1 has a 50% observed success rate, but since we have only pulled it twice, we probably don't have enough information yet to decide whether it is a good arm to pull. We will need to run the game for more rounds before we can make useful predictions about the arms we don't yet know enough about.

As we continue to play, we keep track of our results, and eventually, with enough observations, we build up a reliable probability distribution of wins for each arm. This becomes the exploitation side of our strategy: we want to play the arms we believe are most likely to win as often as possible, even while we continue to explore the other arms.
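
One simple way to balance the two sides is the epsilon-greedy rule hinted at earlier: with probability epsilon we explore a random arm, and otherwise we exploit the arm with the best observed success rate. The sketch below is only an illustration of that idea; the function name choose_arm and the default epsilon value of 0.1 are assumptions, not something prescribed by the problem.

import random

def choose_arm(pulls, wins, epsilon=0.1):
    """Epsilon-greedy selection: with probability epsilon pick a random arm
    (exploration); otherwise pick the arm with the best observed success
    rate (exploitation)."""
    n_arms = len(pulls)
    if random.random() < epsilon:
        return random.randrange(n_arms)
    rates = [wins[i] / pulls[i] if pulls[i] else 0.0 for i in range(n_arms)]
    return max(range(n_arms), key=lambda i: rates[i])

# Example: with the tally from the trial above, exploitation favours arm 4
# (arms are zero-indexed in the code, so arm 4 is index 3).
print(choose_arm(pulls=[2, 3, 1, 3], wins=[1, 1, 0, 3]))

With a high epsilon we mostly explore, as at the start of the trial above; lowering epsilon as observations accumulate shifts the balance toward exploitation.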
