
Maximum likelihood estimation

Logistic regression works on the principle of maximum likelihood estimation; here we explain the principle in detail so that we can cover some more fundamentals of logistic regression in the following sections. Maximum likelihood estimation is a method of estimating the parameters of a model from observations by finding the parameter values that maximize the likelihood of making those observations. In other words, it finds the parameter that maximizes the probability p of event 1 and (1-p) of non-event 0, given that, as you know:

probability (event + non-event) = 1
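In general notation (the symbols θ and x_i are introduced here only to summarize the idea; they are not used elsewhere in this section), the likelihood of n independent observations and the maximum likelihood estimate can be written as:

L(\theta) = \prod_{i=1}^{n} P(x_i \mid \theta), \qquad \hat{\theta} = \arg\max_{\theta} \log L(\theta)

Taking the logarithm turns the product into a sum without changing where the maximum occurs, which is exactly the step used in the worked example below.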

Example: The sample (0, 1, 0, 0, 1, 0) is drawn from a binomial distribution. What is the maximum likelihood estimate of μ?

Solution: Given the fact that, for a binomial distribution, P(X=1) = μ and P(X=0) = 1 - μ, where μ is the parameter, the likelihood of the sample (0, 1, 0, 0, 1, 0), which contains two 1s and four 0s, is:

L(μ) = μ^2 * (1-μ)^4

Here, log is applied to both sides of the equation for mathematical convenience; also, maximizing the likelihood is the same as maximizing the log of the likelihood:

log(L(μ)) = 2*log(μ) + 4*log(1-μ)

Determining the maximum value of μ by equating the derivative to zero:

d/dμ log(L(μ)) = 2/μ - 4/(1-μ) = 0, which gives 2*(1-μ) = 4*μ, that is, μ = 2/6 = 1/3

However, we need to apply double differentiation to determine whether the stationary point obtained by equating the derivative to zero is a maximum or a minimum. If the μ value is a maximum, the double differentiation of log(L(μ)) should be a negative value:

d^2/dμ^2 log(L(μ)) = -2/μ^2 - 4/(1-μ)^2

Even without substituting the μ value into the double differentiation, we can determine that it is a negative value, as the denominators are squared and both terms carry a negative sign. Nonetheless, substituting μ = 1/3 gives:

-2/(1/3)^2 - 4/(2/3)^2 = -18 - 9 = -27

Hence it has been proven that the likelihood is maximized at μ = 1/3. If we substitute this value into the log-likelihood function (using natural logarithms), we obtain:

log(L(1/3)) = 2*log(1/3) + 4*log(2/3) ≈ -3.8191, so -2*ln(L) ≈ 7.6382
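As a quick numerical cross-check of this derivation (a minimal sketch using scipy; the function and variable names are illustrative, not from the text), we can maximize the same log-likelihood and confirm that the optimum lands at μ = 1/3 with -2*ln(L) ≈ 7.64:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# The sample from the example: two 1s and four 0s
sample = np.array([0, 1, 0, 0, 1, 0])


def neg_log_likelihood(mu):
    # -log L(mu) = -sum_i [x_i*log(mu) + (1 - x_i)*log(1 - mu)]
    return -np.sum(sample * np.log(mu) + (1 - sample) * np.log(1 - mu))


# Maximize log L(mu) by minimizing its negative over the open interval (0, 1)
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")

print("MLE of mu:", result.x)            # ~0.3333 (= 1/3)
print("log L at the MLE:", -result.fun)  # ~-3.8191
print("-2*ln(L):", 2 * result.fun)       # ~7.6382
```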

The reason for calculating -2*ln(L) is to replicate the metric calculated in a proper logistic regression. In fact:

AIC = -2*ln(L) + 2*k
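Applied to the worked example above, where μ is the only estimated parameter (so k, the number of estimated parameters, equals 1):

AIC = 7.6382 + 2*1 = 9.6382

The 2*k term penalizes models with more parameters, so between models with similar likelihoods, the one with the lower AIC is preferred.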

So, logistic regression tries to find its parameters by maximizing the likelihood with respect to each individual parameter. One small difference is that logistic regression uses the Bernoulli distribution rather than the binomial. To be precise, the Bernoulli distribution is just a special case of the binomial: each trial has only two possible outcome categories, and the Bernoulli distribution describes a single such trial.
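To connect this back to logistic regression, the sketch below (the toy data, the variable names w and b, and the use of scipy.optimize.minimize are illustrative assumptions, not code from the text) writes the Bernoulli log-likelihood of a one-feature logistic model and maximizes it numerically, which is essentially what a logistic regression fitting routine does; it then reports the same -2*ln(L) and AIC quantities discussed above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data (made up for illustration): one feature x and a binary outcome y
x = np.array([0.5, 1.5, 2.0, 3.0, 3.5, 4.5])
y = np.array([0, 0, 1, 0, 1, 1])


def neg_log_likelihood(params):
    w, b = params
    # p_i = sigmoid(w*x_i + b); Bernoulli log-likelihood:
    # sum_i [ y_i*log(p_i) + (1 - y_i)*log(1 - p_i) ]
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0) during optimization
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))


# Maximize the Bernoulli likelihood by minimizing its negative
result = minimize(neg_log_likelihood, x0=np.zeros(2))
w_hat, b_hat = result.x

print("estimated w, b:", w_hat, b_hat)
print("-2*ln(L):", 2 * result.fun)
print("AIC:", 2 * result.fun + 2 * 2)  # k = 2 estimated parameters (w and b)
```

Library routines such as scikit-learn's LogisticRegression perform the same maximization internally, only with more robust optimizers and optional regularization.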
