Convexity/(quasi-)concavity and nonlinear programming: the rest
Updated with video links and a curriculum clarification for the portfolio separation.
The "20 or 21" videos, which ended up being the beginning of lecture "21":
- the C1 functions characterization, https://youtu.be/v6MeRFSL0YM
- the characterization under continuity assumptions only, in terms of generalized gradients (supergradients for concavity, subgradients for convexity), and, sketched briefly: the supporting-line characterization of quasiconcavity/quasiconvexity (the "price vector"), https://youtu.be/t6JxIaP-UJM
The "fact sheet" lectures
The PDF first: https://folk.universitetetioslo.no/ncf/4140/lectures2021/concavity+quasiconcavity_FACTSHEET.pdf
"Page 0" is just some picture, so the second actual page in the PDF is fact sheet "1". That makes "1" through "4".
- Fact sheet 1: The basics. https://youtu.be/ATzJbd4Wkfw
- Fact sheet 2: General properties, https://youtu.be/rOZCjSx9IAw
There is an example and proof sketches in a separate clip: https://youtu.be/y9udRAkQrHc
- Fact sheet 3: Special cases and properties for (unconstrained) optimization. https://youtu.be/JYif8YR03XY
For one of these, the concavity of positive degree-1 homogeneous functions, see also this Canvas document, and the application to Cobb-Douglas here.
- Fact sheet 4: moved to lecture 22. https://youtu.be/Lpbve9ZK_vY
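As a quick numeric sanity check of that concavity fact (my own illustrative sketch, not part of the lecture): the Hessian of a Cobb-Douglas function x^a y^(1-a) should be negative semidefinite, with one eigenvalue exactly zero because a degree-1 homogeneous function is linear along rays.

```python
import numpy as np

def cobb_douglas_hessian(x, y, a):
    """Analytic Hessian of f(x, y) = x**a * y**(1 - a), degree-1 homogeneous."""
    b = 1.0 - a
    fxx = a * (a - 1) * x**(a - 2) * y**b
    fyy = b * (b - 1) * x**a * y**(b - 2)
    fxy = a * b * x**(a - 1) * y**(b - 1)
    return np.array([[fxx, fxy], [fxy, fyy]])

# Concavity check at a few interior points: all eigenvalues <= 0,
# and the largest is (numerically) zero, reflecting homogeneity.
for (x, y) in [(1.0, 1.0), (2.0, 0.5), (0.3, 4.0)]:
    eigs = np.linalg.eigvalsh(cobb_douglas_hessian(x, y, a=0.3))
    print((x, y), eigs)
```

The zero eigenvalue corresponds to the ray direction (x, y) itself; in the other direction the function is strictly concave.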
More lecture 22: concave programs examples
Three examples; a fourth (exam 2015 no. 2) will be given as a seminar problem. Then, at the very end, the portfolio separation video, where the "textbook version" of the problem will be a concave program.
One example here: https://youtu.be/dCQHvM4GGlU
Two more examples here: https://youtu.be/uv_m8gcxhSw
The final optimization lecture:
- Precise Lagrange (& Kuhn-Tucker) conditions. https://youtu.be/mb9VjyGdBWw
- First part, until 15:45 (plus four more minutes of talk): the main theory.
- From 19:51: nine minutes on scenarios where the constraint qualification fails.
- From 28:45: deducing the (precise, and the "Math 2") Lagrange conditions.
- A portfolio optimization problem (a classic from theoretical finance!): https://youtu.be/T3NJI04ctoo
The risk-averse agent gets a concave program, while the more general agent gets a problem that is not concave up front, and where one of the (not-so-awful) cases of CQ failure materializes in a special case.
Curriculum clarification:
- The finance part of this is not Math 3 syllabus, and neither is the probability part.
- Doing the math is Math 3 syllabus. That includes solving in matrix form, recognizing the concave program and its consequences for sufficient conditions, and also manipulating (again in matrix form) the max [linear] subject to [quadratic] problem that is not ex ante concave.
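A minimal sketch of those matrix manipulations, on a toy problem of my own (not the lecture's exact setup): maximize expected return a'x subject to a variance bound x'Σx ≤ s², with Σ positive definite. The Lagrangian a'x − λ(x'Σx − s²) gives a = 2λΣx, so the optimal x is proportional to Σ⁻¹a, and the binding variance constraint only pins down the scale; that the *direction* Σ⁻¹a does not depend on s is the separation flavor.

```python
import numpy as np

def max_return_portfolio(a, Sigma, s):
    """Maximize a'x subject to x' Sigma x <= s**2 (Sigma positive definite).

    First-order condition a = 2*lam*Sigma x gives x proportional to
    Sigma^{-1} a; the binding constraint x'Sigma x = s**2 fixes the scale.
    """
    w = np.linalg.solve(Sigma, a)      # direction Sigma^{-1} a
    return (s / np.sqrt(a @ w)) * w    # a' Sigma^{-1} a > 0 by pos. def.

# Hypothetical numbers, purely for illustration:
a = np.array([0.08, 0.05])
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
x = max_return_portfolio(a, Sigma, s=0.2)
print(x, x @ Sigma @ x)                # variance hits the bound s**2 = 0.04
```

Note that doubling s doubles x but leaves the direction unchanged, which is the point of the comment above.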
Some considerations beyond Math 3?
Sadly (?), I cannot overload the curriculum with everything an economist should know, even material that is curriculum in compulsory courses: Micro 3 goes beyond Mathematics 3 on a few optimization topics, for example the duality between utility/output maximization subject to a budget, and cost minimization subject to utility/output.
- It is not mathematically obvious that max output s.t. budget and min expenditure s.t. output should give the same allocation and cost. But the conditions for that are related to convex sets.
- The same goes for a Nash equilibrium of a zero-sum game: zero-sum means that I want to minimize your payoff (not out of ill will, but because your loss is my gain), and we are looking at max_x min_y criteria: minimax or maximin. For a Nash equilibrium, it shouldn't matter who "has the last word", because the first player anticipates the second. But that depends on the problem actually having nice properties; lo and behold, the conditions should look familiar, at least for continuous functions on R^n: https://en.wikipedia.org/wiki/Sion%27s_minimax_theorem
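To see minimax = maximin concretely, here is a toy check of my own on matching pennies (a standard zero-sum example, not from the lectures), using mixed strategies over a probability grid:

```python
import numpy as np

# Payoffs to the row player in matching pennies.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])

P = np.linspace(0.0, 1.0, 201)  # grid of mixing probabilities

def value(p, q):
    """Expected payoff when row plays (p, 1-p) and column plays (q, 1-q)."""
    return np.array([p, 1 - p]) @ A @ np.array([q, 1 - q])

# Row's guarantee when column "has the last word", and vice versa.
maximin = max(min(value(p, q) for q in P) for p in P)
minimax = min(max(value(p, q) for p in P) for q in P)
print(maximin, minimax)  # both equal 0, at p = q = 1/2
```

Here value(p, q) = (2p − 1)(2q − 1), which is linear (hence both concave and convex) in each argument separately, so Sion's conditions hold and the order of moves is irrelevant.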
- There are actually minimax properties lurking in the Lagrangian f(x) − λ(g(x) − b): in fact, part of the K-T conditions is that λ minimizes the Lagrangian over nonnegative λ. (The coefficient of λ is b − g(x), which is nonnegative at a feasible x; if it is strictly positive, the Lagrangian is increasing in λ, so λ = 0 minimizes over all nonnegative λ. That is precisely complementary slackness.)
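A small numeric illustration of that saddle-point property, on a toy concave program of my own choosing (not from the lectures): maximize f(x) = −(x − 2)² subject to x ≤ 1. The K-T solution is x* = 1 with λ* = 2, and the grid check below confirms that x* maximizes the Lagrangian in x while λ* minimizes it over λ ≥ 0.

```python
import numpy as np

# Toy concave program: max f(x) = -(x - 2)**2  s.t.  g(x) = x <= 1 (b = 1).
f = lambda x: -(x - 2.0) ** 2
L = lambda x, lam: f(x) - lam * (x - 1.0)   # Lagrangian f(x) - lam*(g(x) - b)

xs = np.linspace(-3.0, 3.0, 6001)
lams = np.linspace(0.0, 5.0, 5001)

x_star, lam_star = 1.0, 2.0   # K-T: constraint binds, f'(x*) = lam*

# Saddle point: max over x of L(x, lam*) and min over lam >= 0 of L(x*, lam)
# are both attained at (x*, lam*), with common value f(x*) = -1.
print(max(L(x, lam_star) for x in xs))
print(min(L(x_star, lam) for lam in lams))
```

Since g(x*) = b here, L(x*, λ) is flat in λ, which is why any λ ≥ 0 weakly minimizes; with a slack constraint the minimizer would be λ = 0, exactly as in the complementary slackness remark above.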
- And as hinted at in a video: "pure" strategies are "either-or". Randomizing between A and B gives a convex opportunity set, the interval [0, 1] of probabilities. You have been using this for a long time already.