
MAP learning

When selecting the right hypothesis, a Bayesian approach is normally one of the best choices, because it takes all the factors into account and, as we'll see, even when it's based on a conditional independence assumption, such an approach works well when some factors are partially dependent. However, its complexity (in terms of probabilities) can grow quickly, because all terms must always be taken into account. For example, a real coin is a very short cylinder, so, in tossing a coin, we should also consider the probability that it lands on its edge. Let's say it's 0.001. This means that we have three possible outcomes: P(head) = P(tail) = (1.0 - 0.001) / 2.0 and P(edge) = 0.001. The latter event is obviously unlikely, but in Bayesian learning it must still be considered (even if its contribution is squeezed by the strength of the other terms).
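To make this concrete, here is a minimal sketch in which the hypothesis space, the priors, and the observed counts are illustrative assumptions (only P(edge) = 0.001 comes from the text). It computes the posterior P(h|D) ∝ P(D|h)P(h) over a few coin hypotheses and then builds the full Bayesian prediction by averaging over all of them, so the tiny edge term stays in the sum:

```python
import numpy as np

# Hypothetical hypothesis space for the three-outcome coin.
# Each hypothesis assigns a probability to (head, tail, edge).
hypotheses = {
    'fair': np.array([0.4995, 0.4995, 0.001]),
    'biased_head': np.array([0.699, 0.3, 0.001]),
    'biased_tail': np.array([0.3, 0.699, 0.001]),
}
priors = {'fair': 0.8, 'biased_head': 0.1, 'biased_tail': 0.1}

# Observed (head, tail, edge) counts in 100 tosses
counts = np.array([62, 38, 0])

# Multinomial log-likelihood log P(D|h); the combinatorial constant
# is omitted because it cancels out in the normalization
def log_likelihood(p):
    return float(np.sum(counts * np.log(p)))

# Posterior P(h|D) ∝ P(D|h) P(h), normalized with a max-shift
# for numerical stability
log_post = {h: log_likelihood(p) + np.log(priors[h])
            for h, p in hypotheses.items()}
shift = max(log_post.values())
unnorm = {h: np.exp(lp - shift) for h, lp in log_post.items()}
z = sum(unnorm.values())
posterior = {h: u / z for h, u in unnorm.items()}

# Full Bayesian prediction: average the outcome probabilities over
# ALL hypotheses, weighted by their posteriors. The edge term is
# squeezed by the stronger ones, but it is never discarded.
prediction = sum(posterior[h] * hypotheses[h] for h in hypotheses)

print(posterior)
print(dict(zip(('head', 'tail', 'edge'), prediction)))
```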

An alternative is picking the single hypothesis that is most probable in terms of the a posteriori probability:

h_MAP = argmax_h P(h|D) = argmax_h P(D|h) P(h)

(the evidence P(D) is the same for every hypothesis, so it can be dropped from the maximization).

This approach is called MAP (maximum a posteriori), and it can really simplify the scenario when some hypotheses are quite unlikely (for example, in tossing a coin, a MAP hypothesis will discard P(edge)). However, it still has an important drawback: it depends on the a priori probabilities (remember that maximizing the a posteriori also implies taking the a priori into account). As Russell and Norvig (Russell S., Norvig P., Artificial Intelligence: A Modern Approach, Pearson) pointed out, this is often a delicate part of an inferential process, because there's always a theoretical background that can drive the choice toward a particular hypothesis and exclude others. In order to rely only on data, it's necessary to have a different approach.
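The following sketch, reusing the hypothetical hypothesis space from the previous example, shows both points: MAP keeps only the single hypothesis maximizing P(D|h)P(h), and the selected hypothesis can flip when the priors change, even though the data stays the same:

```python
import numpy as np

# Same hypothetical hypothesis space as in the previous sketch
hypotheses = {
    'fair': np.array([0.4995, 0.4995, 0.001]),
    'biased_head': np.array([0.699, 0.3, 0.001]),
    'biased_tail': np.array([0.3, 0.699, 0.001]),
}

# Observed (head, tail, edge) counts
counts = np.array([62, 38, 0])

def map_hypothesis(priors):
    # h_MAP = argmax_h P(D|h) P(h): only the best hypothesis is
    # kept; unlikely alternatives are discarded, not averaged in
    log_post = {h: float(np.sum(counts * np.log(p))) + np.log(priors[h])
                for h, p in hypotheses.items()}
    return max(log_post, key=log_post.get)

# The same data yields different MAP choices under different priors:
# this is the dependence on a priori knowledge discussed above.
print(map_hypothesis({'fair': 1/3, 'biased_head': 1/3, 'biased_tail': 1/3}))
# -> 'biased_head'
print(map_hypothesis({'fair': 0.9999, 'biased_head': 5e-5, 'biased_tail': 5e-5}))
# -> 'fair'
```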
