
Maximum likelihood estimation

Logistic regression works on the principle of maximum likelihood estimation; here, we will explain this principle in detail so that we can cover some more fundamentals of logistic regression in the following sections. Maximum likelihood estimation is a method of estimating the parameters of a model from a set of observations by finding the parameter values that maximize the likelihood of making those observations. This means finding the parameters that maximize the probability p of event 1 and (1 - p) of non-event 0, since, as you know:

probability (event) + probability (non-event) = 1

Example: The sample (0, 1, 0, 0, 1, 0) is drawn from a binomial distribution. What is the maximum likelihood estimate of μ?

Solution: Given that, for the binomial distribution, P(X = 1) = μ and P(X = 0) = 1 - μ, where μ is the parameter, the likelihood of the observed sample is:
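
L(\mu) = (1 - \mu) \cdot \mu \cdot (1 - \mu) \cdot (1 - \mu) \cdot \mu \cdot (1 - \mu) = \mu^{2}(1 - \mu)^{4}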

Here, log is applied to both sides of the equation for mathematical convenience; also, maximizing the likelihood is the same as maximizing the log of the likelihood:
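
\ln L(\mu) = 2 \ln \mu + 4 \ln (1 - \mu)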

We determine the value of μ that maximizes the likelihood by equating the derivative to zero:
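
\frac{d}{d\mu} \ln L(\mu) = \frac{2}{\mu} - \frac{4}{1 - \mu} = 0 \;\Rightarrow\; 2(1 - \mu) = 4\mu \;\Rightarrow\; \mu = \frac{1}{3}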

However, we need to take the second derivative to determine whether the stationary point obtained by equating the first derivative to zero is a maximum or a minimum. If the value of μ is a maximum, the second derivative of ln(L(μ)) should be negative:
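
\frac{d^{2}}{d\mu^{2}} \ln L(\mu) = -\frac{2}{\mu^{2}} - \frac{4}{(1 - \mu)^{2}}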

Even without substituting the value of μ into the second derivative, we can see that it is negative, as the denominators are squared and both terms carry a negative sign. Nonetheless, substituting the value gives:
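
\left. \frac{d^{2}}{d\mu^{2}} \ln L(\mu) \right|_{\mu = 1/3} = -\frac{2}{(1/3)^{2}} - \frac{4}{(2/3)^{2}} = -18 - 9 = -27 < 0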

Hence it has been proven that the value μ = 1/3 maximizes the likelihood. If we substitute this value into the log-likelihood function, we obtain:
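
\ln L(1/3) = 2 \ln \frac{1}{3} + 4 \ln \frac{2}{3} \approx -3.819, \qquad -2 \ln L(1/3) \approx 7.638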

The reason for calculating -2*ln(L) is to replicate the metric calculated in proper logistic regression. In fact:

AIC = -2*ln(L) + 2*k

where k is the number of parameters estimated by the model.

So, logistic regression finds its parameters by maximizing the likelihood with respect to the individual parameters. One small difference is that, in logistic regression, the Bernoulli distribution is utilized rather than the binomial. To be precise, the Bernoulli is just a special case of the binomial, as each individual trial results in only one of two categories.
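The hand calculation above can also be checked numerically. The following is a minimal Python sketch, not taken from the chapter; the sample list and the log_likelihood helper are illustrative names. It maximizes the Bernoulli log-likelihood for the sample (0, 1, 0, 0, 1, 0) over a fine grid of candidate μ values and then prints -2*ln(L) and AIC with k = 1:

import math

sample = [0, 1, 0, 0, 1, 0]

def log_likelihood(mu, data):
    # Bernoulli log-likelihood: log(mu) for every 1 and log(1 - mu) for every 0
    return sum(math.log(mu) if x == 1 else math.log(1 - mu) for x in data)

# Simple grid search over (0, 1); fine enough to recover mu = 1/3
candidates = [i / 10000 for i in range(1, 10000)]
mu_hat = max(candidates, key=lambda mu: log_likelihood(mu, sample))

ll = log_likelihood(mu_hat, sample)
k = 1  # one estimated parameter (mu)
print("MLE of mu:", round(mu_hat, 4))            # ~0.3333
print("-2*ln(L) :", round(-2 * ll, 4))           # ~7.638
print("AIC      :", round(-2 * ll + 2 * k, 4))   # ~9.638

Running this should report μ ≈ 0.3333, -2*ln(L) ≈ 7.638, and AIC ≈ 9.638, which matches the values derived by hand above.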
