Stanford EE364A Convex Optimization I Stephen Boyd I 2023 I Lecture 7

Vector Optimization

  • Vector optimization generalizes the objective of an optimization problem from a scalar to a vector.
  • The semantics of vector optimization are different from scalar optimization because vectors do not have a linear ordering.
  • A Pareto optimal point is a point that is not dominated by any other feasible point.
  • The set of Pareto optimal points is also called the non-dominated set.
  • Multicriterion optimization is a type of vector optimization problem where the objectives are competing.
  • A point is optimal (as opposed to merely Pareto optimal) if it simultaneously minimizes all of the objectives; when the objectives compete, such a point rarely exists.
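
The dominance relation behind Pareto optimality is easy to make concrete. The sketch below (not from the lecture; the helper names `dominates` and `pareto_points` are made up for illustration) finds the non-dominated rows of a matrix of objective vectors, assuming all objectives are minimized:

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a dominates f_b: no worse in every
    component and strictly better in at least one (minimization)."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def pareto_points(F):
    """Return indices of the non-dominated rows of objective matrix F."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi)
                       for j, fj in enumerate(F) if j != i)]

F = np.array([[1.0, 4.0],   # Pareto optimal
              [2.0, 2.0],   # Pareto optimal
              [3.0, 3.0],   # dominated by (2, 2)
              [4.0, 1.0]])  # Pareto optimal
print(pareto_points(F))     # → [0, 1, 3]
```

No candidate here minimizes both objectives at once, which is exactly why the non-dominated set, rather than a single optimum, is the right solution concept.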

Pareto Optimality

  • Pareto optimality is a concept in optimization where a solution is considered non-dominated: no other feasible solution improves at least one objective without worsening another.
  • In multi-objective optimization problems, trade-offs often occur between different objectives.
  • The optimal trade-off curve represents the set of Pareto-optimal solutions; each point on the curve is a different balance between the objectives.
  • Comparing two Pareto-optimal solutions involves evaluating their performance on each objective and considering the trade-offs between them.
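
The shape of the trade-off curve can be verified numerically. In this sketch (illustrative, not from the lecture), we extract the non-dominated points of a random cloud of two-objective values and check that, sorted by the first objective, the second objective strictly decreases — moving along the curve always trades one objective for the other:

```python
import numpy as np

rng = np.random.default_rng(5)
F = rng.random((200, 2))                # 200 random objective pairs (f1, f2)

# Non-dominated points: no other point is <= in both coords and < in one.
nondom = [f for f in F
          if not any(np.all(g <= f) and np.any(g < f) for g in F)]
curve = np.array(sorted(nondom, key=lambda f: f[0]))

# Along the trade-off curve, f2 strictly decreases as f1 increases.
diffs_f2 = np.diff(curve[:, 1])
```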

Applications of Vector Optimization

  • In portfolio construction, Pareto optimality is used to find the optimal mix of investments that balances risk and return.
  • Negative weights in portfolio optimization represent short positions: borrowing assets and selling them immediately, with the obligation to buy them back later.
  • The goal of an investment portfolio is to maximize return and minimize risk.
  • The expected return of a portfolio is the weighted average of the assets' expected returns, while the variance of the portfolio return is a measure of its risk.
  • The risk-return trade-off curve shows the relationship between the risk and return of a portfolio.
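
For portfolio weights w, mean return vector μ, and return covariance Σ, the expected return is μᵀw and the variance is wᵀΣw. A minimal sketch with made-up numbers (the three assets and their statistics are purely illustrative):

```python
import numpy as np

# Toy 3-asset example (made-up numbers): mu holds expected returns,
# Sigma the covariance matrix of returns.
mu = np.array([0.10, 0.07, 0.03])
Sigma = np.array([[0.040, 0.006, 0.000],
                  [0.006, 0.010, 0.002],
                  [0.000, 0.002, 0.001]])

w = np.array([0.5, 0.3, 0.2])      # portfolio weights, summing to 1

expected_return = mu @ w           # mean portfolio return: mu^T w
variance = w @ Sigma @ w           # portfolio risk: w^T Sigma w
risk = np.sqrt(variance)           # standard deviation of return
```

Sweeping over many weight vectors and plotting (risk, expected_return) pairs traces out the risk-return trade-off curve.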

Scalarization

  • Scalarization is a technique for taking a vector objective and mapping it down to a single score, which allows for a linear ordering of portfolios.
  • The vector λ in scalarization can be interpreted as a vector of prices; the point of tangency between a level set of λᵀf₀(x) and the boundary of the achievable set corresponds to a Pareto-optimal point.
  • Scalarization in multicriterion problems allows for the interpretation of lambdas as prices for each objective.
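
Over a discrete set of candidates, scalarization is just a weighted argmin; different price vectors λ select different Pareto-optimal points. A small sketch (illustrative data, helper name `scalarize` is made up):

```python
import numpy as np

# Objective vectors (each row: [f1, f2]); all three rows are Pareto optimal.
F = np.array([[1.0, 4.0],
              [2.0, 2.0],
              [4.0, 1.0]])

def scalarize(F, lam):
    """Pick the candidate minimizing the scalarized score lam^T f."""
    return int(np.argmin(F @ lam))

print(scalarize(F, np.array([1.0, 0.1])))   # pricing f1 heavily selects point 0
print(scalarize(F, np.array([0.1, 1.0])))   # pricing f2 heavily selects point 2
```

Sweeping λ over the positive orthant recovers different points of the trade-off curve, one per choice of prices.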

Regularized Least Squares

  • Regularized least squares can be interpreted as a bi-criterion problem: a fitting-error term and a desire for small parameters.
  • The gamma parameter in regularized least squares represents the exchange rate between large residuals and large parameters.
  • The condition λ ∈ K* (the dual cone) ensures that the scalarized objective λᵀf₀ is convex and that its minimizers are Pareto optimal.
  • For convex problems, varying the weight in regularized least squares sweeps out almost all Pareto optimal points; only certain limit points of the trade-off curve can be missed.
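
The scalarized problem minimize ‖Ax − b‖² + γ‖x‖² has the closed-form solution x(γ) = (AᵀA + γI)⁻¹Aᵀb. The sketch below (random illustrative data) sweeps γ and confirms the trade-off: larger γ shrinks the parameters and grows the residual:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def ridge(A, b, gamma):
    """Minimize ||Ax - b||^2 + gamma * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ b)

# Sweeping gamma traces the trade-off curve: larger gamma means
# smaller parameters and larger fitting error.
gammas = [0.01, 1.0, 100.0]
res = [np.linalg.norm(A @ ridge(A, b, g) - b) for g in gammas]
xnorm = [np.linalg.norm(ridge(A, b, g)) for g in gammas]
```

Each (res, xnorm) pair is one Pareto-optimal point of the bi-criterion problem.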

Risk-Return Tradeoff

  • The risk-return tradeoff can be computed using the risk-adjusted return: the expected return minus γ times the risk, where γ ≥ 0 is the risk aversion parameter.
  • Alternatively, one can sweep a risk limit across a range of values, maximize return subject to each limit, and report the resulting return-risk pairs.
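
Maximizing risk-adjusted return μᵀw − γ·wᵀΣw for different γ selects different portfolios. A toy sketch (made-up two-asset data, candidate weights restricted to a small list for illustration):

```python
import numpy as np

mu = np.array([0.10, 0.03])          # risky asset vs. nearly safe asset
Sigma = np.diag([0.04, 0.001])

# A few candidate weight vectors (rows sum to 1).
W = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])

def best_portfolio(gamma):
    """Maximize the risk-adjusted return mu^T w - gamma * w^T Sigma w
    over the candidate rows of W."""
    scores = W @ mu - gamma * np.einsum('ij,jk,ik->i', W, Sigma, W)
    return int(np.argmax(scores))

print(best_portfolio(0.1))    # low risk aversion: all-in on the risky asset
print(best_portfolio(50.0))   # high risk aversion: the safe asset wins
```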

Lagrangian Duality

  • The video discusses the concept of duality in optimization, specifically Lagrangian duality.
  • The Lagrangian function is a single function that combines the objective function with weighted inequality constraints and weighted equality constraints.
  • Geometrically, for λ ⪰ 0 the Lagrangian lies below the objective on the feasible set, so minimizing it gives an unconstrained under-approximation of the original constrained problem.
  • The dual function is defined as the minimum (infimum) of the Lagrangian over the primal variable x, for fixed values of the Lagrange multipliers.
  • Lagrange multipliers can be interpreted as prices, and the dual function represents the optimal cost as a function of these prices.
  • Duality theory provides a powerful framework for solving constrained optimization problems and has applications in various fields such as economics, engineering, and operations research.
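
In standard notation (for the problem minimize f₀(x) subject to fᵢ(x) ≤ 0, i = 1,…,m, and hᵢ(x) = 0, i = 1,…,p), the definitions above read:

```latex
L(x,\lambda,\nu) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x)
                 + \sum_{i=1}^{p} \nu_i h_i(x)

g(\lambda,\nu) = \inf_x \, L(x,\lambda,\nu),
\qquad g(\lambda,\nu) \le p^\star \ \text{ whenever } \lambda \succeq 0
```

The last inequality is weak duality: evaluating g at any λ ⪰ 0 and any ν yields a lower bound on the optimal value p⋆.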

Lower Bounds

  • The video introduces the concept of a lower bound on a linear programming problem.
  • The lower bound is obtained by evaluating the dual function at any dual-feasible multipliers; the dual function itself is computed by minimizing the Lagrangian over the primal variable x.
  • For a linear program, the dual function is linear on an affine set (and −∞ off it), and like every dual function it is concave.
  • Concretely, for the LP minimize cᵀx subject to Ax = b, x ⪰ 0: if someone provides a vector ν such that Aᵀν + c ⪰ 0, then −bᵀν is a lower bound on the optimal value of the linear program.
  • When solving a linear program numerically, the solver provides an optimal point x⋆ together with a dual certificate: dual variables whose dual-function value is a lower bound certifying (near-)optimality.
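
The LP lower bound can be checked numerically without a solver. In this sketch (random illustrative data), we construct a feasible primal point and a dual-feasible ν by hand, then verify that −bᵀν never exceeds the primal objective value — weak duality in action:

```python
import numpy as np

# LP in standard form: minimize c^T x  s.t.  Ax = b, x >= 0.
# Claim: if A^T nu + c >= 0 componentwise, then -b^T nu is a lower bound.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))

x_feas = rng.random(6)         # a feasible point: entries >= 0
b = A @ x_feas                 # choose b so that x_feas satisfies Ax = b

nu = rng.standard_normal(3)    # an arbitrary dual vector
s = rng.random(6)              # nonnegative slack
c = -A.T @ nu + s              # construct c so that A^T nu + c = s >= 0

lower_bound = -b @ nu
primal_value = c @ x_feas      # >= lower_bound by weak duality
```

The gap primal_value − lower_bound equals sᵀx_feas ≥ 0, which is exactly the complementary-slackness surplus.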

Minimum Fuel Trajectory Generation

  • The dual function for this problem is linear, g(ν) = −bᵀν, provided every entry of Aᵀν has magnitude at most one (that is, ‖Aᵀν‖∞ ≤ 1); otherwise it is −∞.
  • A lower bound for the minimum fuel trajectory generation problem can be obtained by evaluating the dual function at any vector ν satisfying this constraint.
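
Treating minimum fuel as the ℓ₁ problem minimize ‖x‖₁ subject to Ax = b, any ν with ‖Aᵀν‖∞ ≤ 1 certifies that −bᵀν lower-bounds the fuel of every feasible trajectory. A numerical sketch (random illustrative data):

```python
import numpy as np

# Minimum-fuel sketch: minimize ||x||_1 subject to Ax = b.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 8))
x_feas = rng.standard_normal(8)
b = A @ x_feas                           # make x_feas feasible

nu = rng.standard_normal(3)
nu /= np.linalg.norm(A.T @ nu, np.inf)   # scale so ||A^T nu||_inf = 1

lower_bound = -b @ nu                    # dual-function value g(nu)
fuel = np.abs(x_feas).sum()              # ||x_feas||_1, bounded below by g(nu)
```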

Two-Way Partitioning

  • The two-way partitioning problem partitions a set of objects into two groups by minimizing xᵀWx over xᵢ ∈ {−1, +1}; the objective equals the sum of the weights between pairs in the same group minus the sum of the weights between pairs in different groups.
  • The dual function for the two-way partitioning problem is the infimum over x of a quadratic form plus a constant: g(ν) = −1ᵀν if W + diag(ν) ⪰ 0, and −∞ otherwise.
  • A lower bound for the two-way partitioning problem can be obtained by evaluating the dual function at any ν for which W + diag(ν) is positive semidefinite.
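
One simple dual-feasible choice is ν = −λmin(W)·1, which gives the bound n·λmin(W). The sketch below (random small instance) compares this bound with the true optimum found by brute force, which is only feasible because n is tiny:

```python
import numpy as np
from itertools import product

# Two-way partitioning: minimize x^T W x over x_i in {-1, +1}.
rng = np.random.default_rng(3)
W = rng.standard_normal((6, 6))
W = (W + W.T) / 2                       # symmetrize
n = W.shape[0]

lam_min = np.linalg.eigvalsh(W)[0]      # smallest eigenvalue of W
lower_bound = n * lam_min               # g(-lam_min * 1) = n * lam_min

# Brute-force the true optimum over all 2^n sign vectors.
best = min(x @ W @ x
           for x in (np.array(s) for s in product([-1.0, 1.0], repeat=n)))
```

The bound holds because xᵀWx = xᵀ(W − λmin I)x + λmin‖x‖² ≥ n·λmin for any ±1 vector x.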

Spectral Partitioning

  • Spectral partitioning is a technique for partitioning a set of objects into groups based on the eigenvectors of the weight matrix.
  • The speaker solves the problem approximately by relaxing the ±1 constraint, which yields a lower bound on the original problem together with a relaxed solution vector.
  • A heuristic then rounds this vector (for example, by taking componentwise signs) to obtain a feasible partition.
  • The spectral lower bound can be tightened by adjusting the diagonal of the matrix W, since for ±1 vectors a diagonal change only shifts the objective by a constant.
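
The rounding heuristic can be sketched directly: take the eigenvector of W with the smallest eigenvalue and round its entries to ±1 (illustrative random data; this is a heuristic, with no optimality guarantee):

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((6, 6))
W = (W + W.T) / 2                       # symmetrize
n = W.shape[0]

evals, evecs = np.linalg.eigh(W)        # ascending eigenvalues
x_heur = np.sign(evecs[:, 0])           # round the bottom eigenvector to +/-1
x_heur[x_heur == 0] = 1.0               # break any exact-zero ties

heuristic_value = x_heur @ W @ x_heur   # feasible partition's objective
lower_bound = n * evals[0]              # spectral lower bound n * lambda_min
```

The heuristic value always sits above the spectral lower bound, and the gap between the two bounds the suboptimality of the rounded partition.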
