Stanford EE364A Convex Optimization | Stephen Boyd | 2023 | Lecture 17

Interior-Point Methods

  • Barrier method: compute a point on the central path using Newton's method, increase the barrier parameter t, recenter, and repeat until the duality gap m/t is below the desired tolerance.
  • Complexity analysis: classical analysis says the number of Newton steps per centering grows as t increases, but empirically a convex problem with inequality constraints is solved in roughly 20 to 50 Newton steps total, each dominated by a least-squares-type linear solve.
  • Self-concordance analysis: assumes t f0 plus the barrier is self-concordant and bounds the number of Newton steps needed to compute the next central point x+ from x.
  • The number of Newton steps is bounded by a constant times the decrease in function value from x to x+, plus a small additive constant.
  • To upper-bound that decrease, use the central-path identity −t λi fi(x) = 1 (i.e., λi = −1/(t fi(x))) to control the log-barrier terms.
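The steps above can be sketched in code. Below is a minimal numpy-only sketch of the barrier method for an LP (minimize c@x subject to A@x <= b); the function names, the strictly feasible starting point, and the parameter choices (mu = 10, backtracking constants) are illustrative assumptions, not the lecture's exact algorithm.

```python
import numpy as np

def newton_centering(c, A, b, x, t, tol=1e-9, max_iter=100):
    """Damped Newton for the centering problem:
    minimize t*c@x - sum(log(b - A@x)) from a strictly feasible x."""
    def f(x):
        return t * (c @ x) - np.sum(np.log(b - A @ x))
    for _ in range(max_iter):
        d = b - A @ x                        # slacks, strictly positive
        grad = t * c + A.T @ (1.0 / d)
        hess = A.T @ ((1.0 / d**2)[:, None] * A)
        dx = np.linalg.solve(hess, -grad)    # Newton step: one linear solve
        if -grad @ dx / 2 < tol:             # half the squared Newton decrement
            break
        s = 1.0                              # backtracking line search,
        while (np.min(b - A @ (x + s * dx)) <= 0            # stay feasible
               or f(x + s * dx) > f(x) + 0.25 * s * (grad @ dx)):
            s *= 0.5
        x = x + s * dx
    return x

def barrier_method(c, A, b, x0, mu=10.0, eps=1e-6):
    """min c@x s.t. A@x <= b; the duality gap at a central point is m/t."""
    x, t = x0, 1.0
    m = A.shape[0]
    while m / t >= eps:
        x = newton_centering(c, A, b, x, t)  # recenter at current t
        t *= mu                              # increase barrier parameter
    return x

# usage: minimize x1 - x2 over the box [0,1]^2 (optimum at (0, 1))
c = np.array([1.0, -1.0])
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x_opt = barrier_method(c, A, b, x0=np.array([0.5, 0.5]))
```

Each outer iteration multiplies t by mu, trading off more outer iterations (small mu) against more Newton steps per centering (large mu); values of mu between 10 and 100 are typical.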

Generalized Inequalities

  • Problems with generalized (cone) inequalities can be handled with essentially the same machinery, by generalizing the concept of a logarithm to cones.
  • A generalized logarithm for a cone is a concave function, defined on the cone's interior, that behaves like a logarithm along any ray: ψ(s x) = ψ(x) + θ log s for s > 0.
  • The degree θ of a generalized logarithm is the constant multiplying log s in that ray property.
  • The gradient of a generalized logarithm lies in the dual cone, and its inner product with the point itself equals the degree: ⟨∇ψ(x), x⟩ = θ.
  • The logarithmic barrier for generalized inequalities is built as in the scalar case, with −ψ(−fi(x)) replacing −log(−fi(x)).
  • The central path for generalized inequalities is the set of minimizers of t f0(x) plus the logarithmic barrier, as t ranges over positive values.
  • The optimality conditions mirror the scalar case, with the generalized logarithm's gradient in place of the scalar logarithm's.
  • The duality gap at a central point has the same form as in the scalar case, with the sum of the degrees θi replacing the number of constraints m: gap = (Σ θi)/t.
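As a concrete instance: on the positive semidefinite cone, ψ(X) = log det X is a generalized logarithm of degree n, and the properties listed above can be checked numerically. The random test matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
X = M @ M.T + n * np.eye(n)          # a random positive definite matrix

def psi(X):
    # generalized logarithm for the PSD cone
    return np.log(np.linalg.det(X))

grad = np.linalg.inv(X)              # gradient of log det X is X^{-1}

# 1) behaves like a logarithm along rays: psi(s X) = psi(X) + n log s
s = 2.5
assert np.isclose(psi(s * X), psi(X) + n * np.log(s))

# 2) gradient lies in the dual cone (the PSD cone is self-dual): X^{-1} is PD
assert np.all(np.linalg.eigvalsh(grad) > 0)

# 3) <grad psi(X), X> = tr(X^{-1} X) = n, the degree
assert np.isclose(np.trace(grad @ X), n)
```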

Semi-Definite Programming

  • Semi-definite programming (SDP) is a powerful optimization technique that can be used to solve a wide variety of problems.
  • SDP problems can be solved using the barrier method, which is a relatively simple algorithm that converges in a predictable number of steps.
  • Primal-dual interior-point methods are more efficient than the barrier method and are used in commercial solvers.
  • Conic solvers built on these ideas are now the default for a wide range of optimization problems, handling linear programming, quadratic programming, and geometric programming as special cases.
  • Understanding the basics of SDP can help demystify optimization methods and make it easier to read research papers and solver codes.

CVXPY

  • CVXPY forms a sequence of equivalent problems, reduces them, and passes them to a solver.
  • Solvers like SCS and OSQP exploit sparsity in linear algebra.
  • Interior point methods and barrier methods are mostly the same, with interior point being the more common term.
  • CVXPY reduces problems through methods like introducing slack variables and handling sums of the largest entries of a vector.
  • A retrieval step maps the reduced problem's solution back to the original variables, e.g., recovering the solution x by replication and recovering the optimal value of a maximization (solved as a minimization) by flipping its sign.
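One reduction of the kind mentioned above, sketched numerically: the sum of the k largest entries of a vector has the variational form min over λ of (kλ + Σi max(xi − λ, 0)), which is what allows such constraints to be rewritten as an LP with slack variables. Since the objective in λ is piecewise linear and convex with breakpoints at the entries of x, scanning those entries finds the exact minimum. The function names here are illustrative, not CVXPY's internals.

```python
import numpy as np

def sum_largest(x, k):
    """Reference: sum of the k largest entries, by sorting."""
    return np.sort(x)[::-1][:k].sum()

def sum_largest_variational(x, k):
    """Same value via min over lam of k*lam + sum(max(x - lam, 0)).
    The minimum is attained at a breakpoint, i.e., at an entry of x,
    so scanning the entries finds it exactly."""
    vals = [k * lam + np.maximum(x - lam, 0).sum() for lam in x]
    return min(vals)

x = np.array([3.0, -1.0, 4.0, 1.0, 5.0])
assert np.isclose(sum_largest(x, 2), 9.0)                        # 5 + 4
assert np.isclose(sum_largest_variational(x, 2), sum_largest(x, 2))
```

In an LP reduction, the inner maxima become slack variables u with u >= x − λ and u >= 0, and the constraint sum_largest(x, k) <= t becomes the linear constraints t >= kλ + 1@u.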

L1 Norm Methods

  • L1 Norm methods are used for convex cardinality problems, where cardinality is the number of non-zeros in a vector.
  • The cardinality function is not convex; sparse design based on it goes back to aerospace structures in the 1950s and '60s.
  • In aerospace design, L1 Norm is applied to the cross-sectional areas of bars in a structure to achieve sparsity.
  • L1 Norm is used in various fields such as structural engineering, circuit design, statistics, image processing, signal processing, and geophysics.
  • In structural engineering, it is used to design structures that can withstand various forces and stresses.
  • In circuit design, it is used to design filters with sparse coefficients, which can reduce hardware complexity.
  • In statistics, it is used for feature selection, where it helps identify the most relevant features from a large dataset.
  • L1 Norm is also used in image processing, signal processing, and geophysics for various tasks such as total variation reconstruction.
  • The cardinality of a vector is the number of non-zero elements in it.
  • The cardinality function is quasi-concave on the nonnegative orthant, but it has no convexity properties useful for optimization.
  • Convex cardinality problems involve finding the point with the smallest number of non-zeros in a convex set.
  • Feature selection problems involve maximizing a certain objective function (e.g., log likelihood) subject to constraints on the number of non-zero features.
  • Fixing the sparsity pattern of a vector reduces the problem to a convex problem with a smaller number of variables.
  • Brute-force search for solving cardinality problems is not feasible for large problems.
  • The general cardinality problem is NP-hard, but it can be solved globally using branch-and-bound or branch-and-cut methods.
  • Boolean LP problems can be expressed as cardinality problems, and there are heuristic methods for approximately solving them.
  • Sparse design aims to find the simplest solution that satisfies given specifications. It is used in various fields such as circuit design and wire sizing.
  • Sparse modeling and regressor selection involve selecting a subset of features to fit a model. It acts as a regularizer and can be used to find the sparsest model that meets certain performance criteria.
  • Sparse signal reconstruction aims to estimate a signal that is sparse in a given basis. It is commonly used in image and signal processing.
  • Handling outliers using L1 regularization is effective due to its robustness to large values. It can be formulated as a cardinality constraint problem, where a certain number of measurements are assumed to be completely wrong.
  • Minimum number of violations: introduce a slack vector t for the inequality constraints and minimize the cardinality of t to find the fewest inequalities that must be violated.
  • Linear classifier: minimizing the number of misclassifications is a convex cardinality problem; relaxing cardinality to the L1 norm yields a hinge-loss problem closely related to the support vector machine.
  • Identifying mutual infeasibility: Given a set of mutually infeasible inequalities, find the smallest subset of inequalities that are mutually infeasible. This can be achieved by minimizing the cardinality of the subset while ensuring that the certificate of infeasibility is positive.
  • Trading with fees: In a trading scenario with proportional and fixed fees, the problem can be formulated as a convex cardinality problem. The objective is to minimize the total cost, including the linear cost and the fixed fees.
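The basic L1 heuristic behind several of the bullets above can be sketched as follows: minimizing ||x||_1 subject to Ax = b is an LP in variables (x, u) with constraints −u <= x <= u, solved here with scipy's linprog. The problem sizes and random data are illustrative assumptions; for generic random instances with enough measurements this typically recovers the sparse vector exactly, though that is probabilistic, not guaranteed.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                       # m measurements of a k-sparse vector

# minimize ||x||_1  s.t.  A x = b, written as an LP in (x, u):
#   minimize 1@u  s.t.  x - u <= 0,  -x - u <= 0,  A x = b
cost = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * n))
x_hat = res.x[:n]                    # the L1-heuristic estimate
```

Since x_true is feasible, the LP optimum satisfies ||x_hat||_1 <= ||x_true||_1; when the minimizer is unique and sparse, x_hat matches x_true.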
