
Maximum likelihood estimation

Logistic regression works on the principle of maximum likelihood estimation; here, we will explain it in detail so that we can cover some more fundamentals of logistic regression in the following sections. Maximum likelihood estimation is a method of estimating the parameters of a model, given observations, by finding the parameter values that maximize the likelihood of making those observations. This means finding the parameter values that maximize the probability p of event 1 and the probability (1-p) of non-event 0, since, as you know:

P(event) + P(non-event) = 1
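
To state this a little more formally (a general form written here for completeness rather than taken from this excerpt), for n independent binary observations y_1, ..., y_n, where each y_i equals 1 with probability p, the likelihood that is maximized is:

L(p) = p^(y_1) * (1-p)^(1-y_1) * ... * p^(y_n) * (1-p)^(1-y_n)

which reduces to p raised to the number of 1s, multiplied by (1-p) raised to the number of 0s.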

Example: The sample (0, 1, 0, 0, 1, 0) is drawn from a binomial distribution. What is the maximum likelihood estimate of μ?

Solution: Given that, for the binomial distribution, P(X=1) = μ and P(X=0) = 1 - μ, where μ is the parameter, the likelihood of the drawn sample is:

L(μ) = P(X=0) * P(X=1) * P(X=0) * P(X=0) * P(X=1) * P(X=0) = μ^2 * (1-μ)^4

Here, the log is applied to both sides of the equation for mathematical convenience; also, maximizing the likelihood is the same as maximizing the log of the likelihood:

log(L(μ)) = 2*log(μ) + 4*log(1-μ)

The maximizing value of μ is determined by equating the derivative to zero:

d log(L(μ)) / dμ = 2/μ - 4/(1-μ) = 0

2*(1-μ) = 4*μ, which gives μ = 2/6 = 1/3

However, we need to take the second derivative to determine whether the stationary point obtained by equating the derivative to zero is a maximum or a minimum. If the value of μ is a maximum, the second derivative of log(L(μ)) should be negative:

d^2 log(L(μ)) / dμ^2 = -2/μ^2 - 4/(1-μ)^2

Even without substituting the value of μ into the second derivative, we can see that it is negative, as both denominators are squared and both terms carry a negative sign. Nonetheless, substituting μ = 1/3 gives:

-2/(1/3)^2 - 4/(2/3)^2 = -18 - 9 = -27

Hence it has been proven that the value μ = 1/3 maximizes the likelihood. If we substitute this value into the log-likelihood function, we obtain:

log(L(1/3)) = 2*log(1/3) + 4*log(2/3) ≈ -3.819 (using the natural log)

-2*ln(L) = -2 * (-3.819) ≈ 7.64
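
As a quick numerical check of this result, the following minimal Python sketch (not part of the original text; the variable names are illustrative) evaluates the likelihood of the sample (0, 1, 0, 0, 1, 0) over a grid of μ values and confirms that it peaks at μ = 1/3, which equals the sample mean:

```python
import numpy as np

# Sample from the example: two events (1) and four non-events (0)
sample = np.array([0, 1, 0, 0, 1, 0])

# Likelihood as a function of mu: mu^(number of 1s) * (1 - mu)^(number of 0s)
mu_grid = np.linspace(0.001, 0.999, 999)
likelihood = mu_grid ** sample.sum() * (1 - mu_grid) ** (len(sample) - sample.sum())

# The grid value with the highest likelihood should be close to 1/3
print(mu_grid[np.argmax(likelihood)])  # ~0.333
print(sample.mean())                   # closed-form MLE: 2/6 = 1/3
```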

The reason for calculating -2*ln(L) is that it replicates the metric reported by a proper logistic regression. In fact:

AIC = -2*ln(L) + 2*k

where k is the number of parameters estimated by the model.
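
To tie the numbers together, here is a small Python sketch (illustrative, not from the original text) that reproduces the -2*ln(L) value of roughly 7.64 and the corresponding AIC for this example, assuming k = 1 because only μ is estimated:

```python
import numpy as np

mu_hat = 1 / 3                                        # maximum likelihood estimate from the example
log_l = 2 * np.log(mu_hat) + 4 * np.log(1 - mu_hat)   # log-likelihood at mu_hat, ~ -3.819

minus_two_log_l = -2 * log_l                          # ~7.64, the metric reported by logistic regression
k = 1                                                 # number of estimated parameters (only mu here)
aic = minus_two_log_l + 2 * k                         # AIC = -2*ln(L) + 2*k

print(minus_two_log_l, aic)                           # ~7.64, ~9.64
```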

So, logistic regression tries to find the parameters by maximizing the likelihood with respect to the individual parameters. One small difference is that, in logistic regression, the Bernoulli distribution is utilized rather than the binomial. To be precise, the Bernoulli distribution is just a special case of the binomial with a single trial, as the outcome falls into only two categories, from which all the trials are drawn.
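
The same idea can be sketched in Python for logistic regression itself: the coefficients are found by minimizing the negative Bernoulli log-likelihood, which is equivalent to maximizing the likelihood. The data and variable names below are hypothetical, and scipy.optimize.minimize is used only as one convenient numerical optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical binary data: one feature x and a 0/1 outcome y
x = np.array([0.5, 1.5, 2.0, 2.5, 3.5, 4.0, 4.5, 5.5])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])

def neg_log_likelihood(params):
    """Negative Bernoulli log-likelihood for p = 1 / (1 + exp(-(b0 + b1*x)))."""
    b0, b1 = params
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Maximizing the likelihood is the same as minimizing its negative
result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
b0_hat, b1_hat = result.x

print(b0_hat, b1_hat)  # fitted intercept and slope
print(2 * result.fun)  # -2*ln(L) at the fitted parameters
```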
