25089: Numerical Optimization Methods
Course Name: Numerical Optimization Methods
Course Number: 25089
Prerequisite(s): -
Co-requisite(s): -
Units: 3
Level: Postgraduate
Last Revision: Fall 2014

Description
Many engineering problems (including numerous problems encountered by Master's and Ph.D. students in their theses) can ultimately be modeled as the minimization or maximization of a cost function under given constraints. A variety of algorithms exist for carrying out this optimization, such as the steepest descent method, Newton's method, and quasi-Newton methods. The primary goal of this course is to give a comprehensive review of existing optimization algorithms and to familiarize students with these methods and the conditions under which each applies. The course is not limited to convex problems but treats the general case; nevertheless, it discusses how certain theorems simplify in the convex setting, and it also covers some methods specific to convex problems. This course may therefore be referred to as "Non-Convex Optimization," in contrast to a course devoted solely to convex problems, which could be called "Convex Optimization."
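As a small illustration of the kind of algorithm the course covers, the sketch below applies steepest descent with a fixed step size to a convex quadratic cost. The matrix, step size, and tolerance are illustrative choices, not part of the course material:

```python
import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x_{k+1} = x_k - step * grad(x_k) until the gradient is small."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Example cost: f(x) = (1/2) x^T A x - b^T x, with gradient A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])

x_star = steepest_descent(lambda x: A @ x - b, np.zeros(2))
# For this quadratic, the minimizer solves the linear system A x = b.
```

A fixed step size only converges here because the quadratic is well conditioned; line-search and trust-region strategies for choosing the step adaptively are among the topics listed in the syllabus below.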
 
Syllabus:
  • Introduction
    • An Introduction to Real Analysis, with Emphasis on Analysis of Multivariate Functions, Gradient and Hessian Concepts, and Taylor Expansion of Multivariate Functions
    • An introduction to Linear Algebra and Vector Spaces
  • Introduction to Various Concepts of Optimization
    • Continuous and Discrete Optimization, Unconstrained Optimization, Constrained Optimization, Global and Local Optimization, Stochastic and Deterministic Optimization, Concepts of Convexity and Convex Optimization
  • Unconstrained Optimization
    • Basic Concepts of Unconstrained Optimization: Theory, Introduction of Two Strategies (Trust-Region and Line-Search), Concept of Convergence Rate, Steepest Descent and Newton Methods
    • Line-Search Methods
    • Conjugate Gradient Method
    • Quasi-Newton Methods: DFP (Davidon-Fletcher-Powell) Method, BFGS (Broyden-Fletcher-Goldfarb-Shanno) Method, Broyden Family
    • Solving Least-Squares Problems, Levenberg-Marquardt Algorithm
    • Solving Nonlinear Equations
  • A Summary of Evolutionary Methods
    • Genetic, Ant System and Particle Swarm Optimization (PSO) Algorithms
  • Constrained Optimization
    • Theory of Constrained Optimization, Lagrange Multipliers, First-Order Optimality Conditions (Karush-Kuhn-Tucker, KKT), Second-Order Optimality Conditions
    • General Principles of Nonlinear Constrained Optimization Algorithms: Gradient-Projection, Penalty, Barrier, …
    • The Simplex Method for Solving Linear Programming Problems
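
For reference, the first-order (KKT) conditions listed in the constrained-optimization topics above can be stated compactly, in the sign convention of Nocedal and Wright (equality constraints indexed by E, inequality constraints by I):

```latex
% min f(x)  subject to  c_i(x) = 0 (i in E),  c_i(x) >= 0 (i in I)
\begin{aligned}
\nabla f(x^\ast) - \textstyle\sum_i \lambda_i^\ast \nabla c_i(x^\ast) &= 0
  && \text{(stationarity of the Lagrangian)}\\
c_i(x^\ast) &= 0, \quad i \in \mathcal{E} && \text{(equality feasibility)}\\
c_i(x^\ast) &\ge 0, \quad i \in \mathcal{I} && \text{(inequality feasibility)}\\
\lambda_i^\ast &\ge 0, \quad i \in \mathcal{I} && \text{(dual feasibility)}\\
\lambda_i^\ast \, c_i(x^\ast) &= 0, \quad \forall i && \text{(complementary slackness)}
\end{aligned}
```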

References:
  • J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 1999
  • R. Fletcher, Practical Methods of Optimization, Wiley, 1989
  • E. K. P. Chong and S. H. Zak, An Introduction to Optimization, Wiley, 2001
  • D. G. Luenberger, Linear and Nonlinear Programming, Addison-Wesley, 1984
  • D. P. Bertsekas, Nonlinear Programming, Athena Scientific, 1999
Last Update: 2024-07-07