
Solution concepts

In the last 50 years, many excellent algorithms have been developed for numerical optimization, and they work particularly well for quadratic functions. As we have seen in the previous section, we only have linear and quadratic functions and constraints, so these methods (which are implemented in R as well) can be used as a worst-case fallback, if there is nothing better.

However, a detailed discussion of numerical optimization is beyond the scope of this book. Fortunately, in the special case of linear and quadratic functions and constraints, these methods are unnecessary; we can use the Lagrange theorem, which dates back to the 18th century.

Theorem (Lagrange)

If $f : \mathbb{R}^n \to \mathbb{R}$ and $g_i : \mathbb{R}^n \to \mathbb{R}$ (where $i = 1, \ldots, m$) have continuous partial derivatives and $x^*$ is a relative extreme point of $f(x)$ subject to the $g_i(x) = b_i$ constraints, where $i = 1, \ldots, m$.

Then, there exist the coefficients $\lambda_1, \ldots, \lambda_m$ such that

$$\nabla f(x^*) = \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*)$$

In other words, all of the partial derivatives of the function $L(x) = f(x) - \sum_{i=1}^{m} \lambda_i g_i(x)$ are 0 at $x^*$ (Bertsekas Dimitri P. (1999)).

In our case, the condition is also sufficient. The partial derivatives of a quadratic function are linear, so the optimization reduces to solving a system of linear equations, which is a high-school task (unlike numerical methods).
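As a minimal illustration (with made-up numbers, not from the book), minimizing an unconstrained quadratic already shows the pattern: the gradient is linear, so the first-order condition is just a linear system.

```python
import numpy as np

# Toy example: minimize f(x) = x' Q x - b' x for a symmetric positive
# definite Q. The gradient 2 Q x - b is linear in x, so setting it to
# zero means solving the linear system 2 Q x = b.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # hypothetical positive definite matrix
b = np.array([1.0, 2.0])

x_star = np.linalg.solve(2 * Q, b)

# At the minimizer the gradient vanishes:
grad = 2 * Q @ x_star - b
print(x_star, np.allclose(grad, 0))
```

The same reduction carries over to the constrained case below, with the Lagrange multipliers appearing as extra unknowns in the linear system.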

Let's see how this can be used to solve the third problem:

$$\min_{w}\; w^T \Sigma w \quad \text{subject to} \quad \mathbf{1}^T w = 1, \quad r^T w = \mu$$

where $\Sigma$ is the covariance matrix of the returns, $r$ is the vector of expected returns, and $\mu$ is the target portfolio return.

It can be shown that this problem is equivalent to the following system of linear equations:

$$\begin{pmatrix} 2\Sigma & -\mathbf{1} & -r \\ \mathbf{1}^T & 0 & 0 \\ r^T & 0 & 0 \end{pmatrix} \begin{pmatrix} w \\ \lambda_1 \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ 1 \\ \mu \end{pmatrix}$$

(Two rows and two columns are added to the covariance matrix, so we have conditions to determine the two Lagrange multipliers as well.) We can expect a unique solution for this system.
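A sketch of this in NumPy, with a hypothetical covariance matrix, expected returns, and target return (all numbers are made up for illustration):

```python
import numpy as np

# Minimum-variance portfolio with target return mu, solved via the
# bordered linear system: two rows/columns are appended to 2*Sigma
# for the two Lagrange multipliers.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # hypothetical covariance matrix
r = np.array([0.05, 0.07, 0.10])          # hypothetical expected returns
mu = 0.08                                  # target portfolio return

n = len(r)
ones = np.ones(n)

# Block matrix [2*Sigma, -1, -r; 1', 0, 0; r', 0, 0]
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2 * Sigma
A[:n, n] = -ones
A[:n, n + 1] = -r
A[n, :n] = ones
A[n + 1, :n] = r

rhs = np.concatenate([np.zeros(n), [1.0, mu]])

sol = np.linalg.solve(A, rhs)
w, lam1, lam2 = sol[:n], sol[n], sol[n + 1]

print(w, w.sum(), w @ r)   # weights satisfy both constraints
```

One call to a linear solver replaces the whole optimization, exactly as the text claims.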

It is worth emphasizing that what we get with the Lagrange theorem is no longer an optimization problem. Just as minimizing a quadratic function in one dimension reduces to taking a derivative and solving a linear equation, here the optimization reduces to a linear system of equations, which is trivial from the point of view of computational complexity. Now let's see what to do with the return maximization problem:

$$\max_{w}\; r^T w \quad \text{subject to} \quad w^T \Sigma w = \sigma^2, \quad \mathbf{1}^T w = 1$$

It's easy to see that the derivative of the Lagrange function with respect to λ is the constraint itself.

To see this, take the derivatives of $L(w, \lambda_1, \lambda_2) = r^T w - \lambda_1 (w^T \Sigma w - \sigma^2) - \lambda_2 (\mathbf{1}^T w - 1)$:

  • $\partial L / \partial w = r - 2\lambda_1 \Sigma w - \lambda_2 \mathbf{1} = 0$
  • $\partial L / \partial \lambda_1 = -(w^T \Sigma w - \sigma^2) = 0$ and $\partial L / \partial \lambda_2 = -(\mathbf{1}^T w - 1) = 0$

So this leads to a system of non-linear equations (the unknown multiplier $\lambda_1$ multiplies the unknown weights $w$), and solving such systems is more of an art than a science.
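One way to see where the non-linearity lives, sketched below with made-up data: for a fixed $\lambda_1$, the stationarity condition and the budget constraint are linear in $(w, \lambda_2)$, so the remaining variance constraint can be matched by a one-dimensional search over $\lambda_1$. This is an illustrative approach, not the book's method.

```python
import numpy as np

# Return maximization under a fixed variance sigma^2 and full
# investment. The non-linearity is confined to the multiplier
# lambda1, which we locate by bisection.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # hypothetical covariance matrix
r = np.array([0.05, 0.07, 0.10])          # hypothetical expected returns
sigma2 = 0.04                              # target variance (sigma = 20%)

n = len(r)
ones = np.ones(n)

def solve_given_lambda1(lam1):
    """Solve r - 2*lam1*Sigma@w - lam2*1 = 0 with 1'w = 1 for (w, lam2)."""
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = 2 * lam1 * Sigma
    A[:n, n] = ones
    A[n, :n] = ones
    rhs = np.concatenate([r, [1.0]])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n]

# Portfolio variance decreases as lambda1 grows, so bisect on lambda1
# until the variance constraint is met.
lo, hi = 1e-4, 1e3
for _ in range(100):
    mid = 0.5 * (lo + hi)
    w, _ = solve_given_lambda1(mid)
    if w @ Sigma @ w > sigma2:
        lo = mid          # too risky: penalize variance more
    else:
        hi = mid
w, lam2 = solve_given_lambda1(0.5 * (lo + hi))
print(w, w.sum(), w @ Sigma @ w)
```

General-purpose non-linear solvers (for example, those available in R) do something similar but with far more machinery, which is why the quadratic-objective formulation of the previous problem is so much more pleasant.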
