# Stanford EE364A Convex Optimization I | Stephen Boyd | 2023 | Lecture 16

26 Mar 2024

## Newton's Method for Linear Programming

• The speaker reviews a homework problem from the previous lecture on implementing an infeasible-start Newton method, and emphasizes the importance of ensuring that x + tv stays positive in the line search.
• Sparse linear algebra can be used to solve larger problems efficiently, and the speaker encourages students to implement a sparse-friendly version of the LP solver.
• The speaker discusses the challenges of extending Newton's method to handle inequality constraints and presents two approaches: applying Newton's method directly to the indicator function (which fails) and using a smooth approximation of the indicator function.

## Logarithmic Barrier Function

• The logarithmic barrier is introduced as a smooth approximation of the indicator function, and its properties are discussed.
• As t (the parameter controlling the approximation) grows, the approximation improves, but the resulting smooth problem becomes harder for Newton's method to solve.
• Minimizing the objective plus the log barrier, weighted by t, is a smooth convex problem whose solution approximates the optimal solution of the linear programming (LP) problem.
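As a concrete sketch (inequality-form LP assumed here; this is illustrative code, not the lecture's): for minimize c^T x subject to Ax <= b, the log barrier is phi(x) = -sum_i log(b_i - a_i^T x), and the smoothed problem minimizes t * c^T x + phi(x):

```python
import numpy as np

def log_barrier(A, b, x):
    """Log barrier phi(x) = -sum(log(b - A x)); +inf outside the interior."""
    s = b - A @ x
    if np.any(s <= 0):
        return np.inf
    return -np.sum(np.log(s))

def barrier_objective(c, A, b, x, t):
    """Smoothed objective t * c^T x + phi(x) used in place of the LP."""
    return t * (c @ x) + log_barrier(A, b, x)

# Toy problem: minimize x1 + x2 over the box 0 <= x <= 1.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
x = np.array([0.5, 0.5])
print(barrier_objective(c, A, b, x, t=10.0))  # 10 + 4*log(2)
```

Outside the feasible set the barrier is +inf, which is exactly the indicator-function behavior being approximated; inside, it is smooth, so Newton's method applies.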

## Central Path Method

• The central path is a curve in R^n, parameterized by t, that runs from the analytic center of the polyhedron (at small t) to an optimal point (as t goes to infinity).
• A point on the central path satisfies the optimality conditions of the centering problem, which are a perturbed version of the Karush-Kuhn-Tucker (KKT) conditions for the LP.
• The gap between the objective value of a point on the central path and the optimal value p* is at most m/t, where m is the number of inequality constraints in the LP.
• Solving the smooth unconstrained centering problem yields a primal point and, essentially for free, a dual feasible point.
• The duality gap between this primal-dual pair is exactly m/t, which goes to zero as t increases.
• This method differs from the traditional approach of starting with the Lagrangian of the original problem because it finds dual variables that are guaranteed to be dual feasible and provides a non-trivial lower bound.
• The primal-dual pair obtained from the central path satisfies all the Karush-Kuhn-Tucker (KKT) conditions except complementary slackness, which is replaced by a perturbed condition involving 1/t.
• A force field interpretation can be used to visualize the central path method, where the objective function and constraints are represented as force fields, and the central path is the point where all the forces balance.
• Because every inequality contributes a log term to the barrier, adding a redundant inequality to an LP changes the central path even though it does not change the feasible set; in the force field interpretation, the extra constraint exerts an extra force.
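The "free" dual point can be made concrete with a small sketch (illustrative code, not the lecture's; inequality-form LP min c^T x s.t. Ax <= b assumed): Newton's method solves the centering problem, and at the resulting point the variables lambda_i = 1/(t * s_i), with slacks s = b - Ax, are dual feasible with duality gap exactly m/t:

```python
import numpy as np

def center(c, A, b, x, t, tol=1e-10, max_iter=50):
    """Damped Newton method for the centering problem
    minimize t * c^T x - sum(log(b - A x)), from a strictly feasible x."""
    for _ in range(max_iter):
        s = b - A @ x                               # slacks, must stay positive
        g = t * c + A.T @ (1.0 / s)                 # gradient
        H = A.T @ ((1.0 / s**2)[:, None] * A)      # Hessian
        dx = -np.linalg.solve(H, g)                 # Newton step
        lam2 = -g @ dx                              # Newton decrement squared
        if lam2 / 2.0 < tol:
            break
        step = 1.0                                  # backtrack to stay feasible
        while np.any(b - A @ (x + step * dx) <= 0):
            step *= 0.5
        while (t * (c @ (x + step * dx)) - np.sum(np.log(b - A @ (x + step * dx)))
               > t * (c @ x) - np.sum(np.log(s)) - 0.25 * step * lam2):
            step *= 0.5
        x = x + step * dx
    return x

# Toy LP: minimize x1 + x2 over the box 0 <= x <= 1 (optimal value 0).
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
t = 100.0
x_t = center(c, A, b, np.array([0.5, 0.5]), t)
lam = 1.0 / (t * (b - A @ x_t))                     # dual point from the slacks
gap = lam @ (b - A @ x_t)                           # equals m/t = 0.04
print(x_t, gap)
```

At the exact center the gradient condition t*c + A^T(1/s) = 0 is precisely dual feasibility c + A^T lam = 0, and each product lam_i * s_i is 1/t, so the gap is m/t by construction.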

## Homotopy Methods

• The barrier method solves a constrained problem, such as an LP, by way of a sequence of smooth unconstrained centering problems.
• This is the classical sequential unconstrained minimization technique (SUMT), an instance of a homotopy method.
• Homotopy methods are a class of methods for solving nonlinear equations or optimization problems by introducing a parameter that perturbs or simplifies the original problem.
• In the context of circuit design, a homotopy parameter can be used to find the equilibrium point of a circuit.
• Starting from a known solution at one end of the path, the solution is tracked continuously (for example, by warm-started Newton steps) to the desired solution of the original problem.
• In this specific case, the homotopy method involves gradually increasing the supply voltage of a circuit from zero to the desired voltage while solving for equilibrium points at each step.
• The method converges to the solution, but the number of iterations required depends on how aggressively the homotopy parameter (here, the supply voltage) is increased at each step, the analogue of the parameter mu in the barrier method.
• Classical worst-case analysis predicts that the number of Newton steps required will grow rapidly as the supply voltage increases; the polynomial-time complexity proofs rely on conservative short-step updates, while practical methods take much more aggressive steps and still behave well.
• Near the optimal solution, the last few Newton steps buy accuracy that is not needed, so practical methods do not solve each minimization fully.
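The supply-voltage ramp can be sketched in a few lines (a toy scalar "circuit" equation of my own choosing, not the one from the lecture): the parameter v is increased from 0, where the equilibrium is known, to the target value, with a warm-started Newton solve at each step:

```python
import numpy as np

def solve_equilibrium(g, dg_dx, x, v, tol=1e-12):
    """Newton's method on x -> g(x, v), warm-started at the given x."""
    for _ in range(100):
        step = g(x, v) / dg_dx(x, v)
        x -= step
        if abs(step) < tol:
            break
    return x

def homotopy(g, dg_dx, v_target, n_steps=20):
    """Ramp the parameter v from 0 to v_target, tracking the solution path."""
    x = 0.0                          # known equilibrium at v = 0
    for v in np.linspace(0.0, v_target, n_steps + 1)[1:]:
        x = solve_equilibrium(g, dg_dx, x, v)
    return x

# Toy 'circuit' equation: x + exp(x) - 1 = v (a diode-like nonlinearity).
g = lambda x, v: x + np.exp(x) - 1.0 - v
dg = lambda x, v: 1.0 + np.exp(x)
x = homotopy(g, dg, v_target=5.0)
print(x, g(x, 5.0))
```

Each Newton solve starts from the previous equilibrium, so it converges in a handful of steps; solving directly at v_target from x = 0 would work here too, but on stiff circuit equations the ramp is what keeps Newton in its region of convergence.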

## Centering Step in Interior-Point Methods

• The centering step in interior-point methods for linear programming (LP) provides a significant gain in practice compared to starting with a large barrier parameter and solving the LP directly.
• In theory the gain is bounded, but in practice it is extremely large.
• Numerically, using a large barrier parameter can lead to issues and require a large number of iterations.
• The m/t bound on the suboptimality means the LP can be solved to any desired accuracy by taking t large enough in the centering step.
• Classical analysis predicts that the number of Newton steps required to re-center after increasing t grows as t gets bigger, but empirical evidence shows that the number of steps per centering stays roughly constant.
• The staircase plots show the number of Newton steps per centering and the reduction in the duality gap.
• The choice of the mu parameter affects the number of Newton iterations, with smaller mu values resulting in shorter treads in the staircase plot but slower progress in reducing the gap.
• The plots show that even in high-dimensional problems, interior-point methods can find high-accuracy solutions in a relatively small number of Newton steps.
• The trade-off between the aggressiveness of updating the homotopy parameter (mu) and the number of Newton steps per centering is discussed.
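Putting the pieces together, here is a minimal barrier method sketch (illustrative code, not the lecture's; inequality-form LP assumed) that records the staircase data, i.e., Newton steps per centering as t is multiplied by mu:

```python
import numpy as np

def newton_center(c, A, b, x, t, tol=1e-9, max_iter=100):
    """Newton's method for the centering problem
    minimize t * c^T x - sum(log(b - A x)), from a strictly feasible x.
    Returns the (approximate) central-path point and the Newton step count."""
    for steps in range(max_iter):
        s = b - A @ x
        g = t * c + A.T @ (1.0 / s)                 # gradient
        H = A.T @ ((1.0 / s**2)[:, None] * A)      # Hessian
        dx = -np.linalg.solve(H, g)                 # Newton step
        lam2 = -g @ dx                              # Newton decrement squared
        if lam2 / 2.0 < tol:
            return x, steps
        step = 1.0                                  # backtracking line search
        while np.any(b - A @ (x + step * dx) <= 0):
            step *= 0.5
        f = lambda z: t * (c @ z) - np.sum(np.log(b - A @ z))
        while f(x + step * dx) > f(x) - 0.25 * step * lam2:
            step *= 0.5
        x = x + step * dx
    return x, max_iter

def barrier_method(c, A, b, x0, mu=10.0, eps=1e-4, t0=1.0):
    """Outer loop: center, record the step count, then t *= mu,
    stopping once the guaranteed duality gap m/t is below eps."""
    m = len(b)
    t, x, staircase = t0, x0, []
    while True:
        x, steps = newton_center(c, A, b, x, t)
        staircase.append((t, steps))                # one 'tread' per centering
        if m / t < eps:
            return x, staircase
        t *= mu

# Toy LP: minimize x1 + x2 over the box 0 <= x <= 1 (optimal value 0).
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
x, staircase = barrier_method(c, A, b, np.array([0.5, 0.5]))
print(c @ x, len(staircase))
```

A larger mu means fewer centerings (fewer treads) but more Newton steps in each; a smaller mu means the opposite, which is exactly the trade-off in the bullet above.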

## Barrier Methods for Convex Optimization

• Empirically, barrier methods for convex optimization take between roughly 20 and 50 Newton iterations, almost regardless of the problem size or dimension.
• Each iteration reduces to solving an equality-constrained convex quadratic problem, i.e., a linear system, so the work per iteration is pure linear algebra.
• Advances in solving convex optimization problems come from improvements in linear algebra rather than algorithm design.
• The rediscovery of barrier methods in the late 1980s and early 1990s led to claims that linear programs could be solved in polynomial time, but this had no practical impact as the Simplex method was already widely used and effective.
• Some Fortran code from the 1960s for barrier methods was found to be competitive with the best solvers at the time, highlighting the importance of practical considerations over theoretical advances.

## Phase One Methods

• Phase one methods for finding a strictly feasible point in convex optimization problems are still useful despite the availability of more modern methods that combine phase one and phase two.
• The basic phase one method is used to minimize the maximum constraint violation.
• The sum of infeasibilities method minimizes the sum of the constraint violations.
• The sum of infeasibilities method often results in sparsity, meaning that many of the constraints are satisfied exactly.
• The number of Newton steps required by the basic phase one method grows with the size of the problem and, especially, as the problem approaches the boundary between feasibility and infeasibility.
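The basic phase one problem can be written as an LP in the variables (x, s): minimize s subject to Ax - s*1 <= b; if the optimal s is negative, the corresponding x is strictly feasible for Ax <= b. A sketch using scipy.optimize.linprog (illustrative, not the lecture's code):

```python
import numpy as np
from scipy.optimize import linprog

def phase_one(A, b):
    """Basic phase one: minimize the maximum constraint violation s
    subject to A x - b <= s * 1.  If the optimal s is negative, the
    returned x is strictly feasible for A x <= b."""
    m, n = A.shape
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                               # minimize s
    A_ub = np.hstack([A, -np.ones((m, 1))])      # A x - s <= b
    res = linprog(cost, A_ub=A_ub, b_ub=b, bounds=[(None, None)] * (n + 1))
    x, s = res.x[:-1], res.x[-1]
    return x, s

# Feasible example: the box 0 <= x <= 1; max violation is minimized
# at the deepest interior point, giving a negative optimal s.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x, s = phase_one(A, b)
print(s, np.all(A @ x < b))
```

The sum-of-infeasibilities variant instead introduces one slack per constraint and minimizes their sum, which tends to leave many constraints satisfied exactly, as noted above.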

## Self-Concordant Barriers

• Self-concordance is a property of the barrier function that makes Newton's method provably well-behaved, with a convergence analysis that does not depend on unknown problem constants.
• The practical implications of self-concordant barriers are not yet clear, but they have strong theoretical foundations.
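For reference, the standard scalar definition (not spelled out in the summary above): a convex function f is self-concordant if

```latex
|f'''(x)| \le 2\, f''(x)^{3/2},
```

and f(x) = -log x satisfies this with equality, which is why the log barrier fits the theory so naturally.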

## Analysis of Newton Steps in Interior-Point Methods

• The video discusses the analysis of the number of Newton steps required in the centering steps of an interior-point method for solving linear programming problems.
• The number of Newton steps per centering is bounded using self-concordance; the bound involves the gap between the initial function value and its minimum.
• The embarrassing thing is that evaluating this bound requires knowing the optimal value of the centering problem, which you would normally obtain only by running Newton's method, defeating the purpose of the bound.
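The bound being referred to has, in its standard form, roughly the shape

```latex
\#\{\text{Newton steps}\} \;\le\; \frac{f(x^{(0)}) - p^{\star}}{\gamma} + \log_2 \log_2 (1/\epsilon),
```

where gamma depends only on the line search parameters. The troublesome term is f(x^{(0)}) - p^star: the optimal value p^star of the centering problem is exactly what Newton's method is being run to find.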

## Duality

• A lower bound on the optimal value can be obtained from duality, which will be discussed in the next video.