Abstract:
The presentation introduces elements of the Theory of Global Search (TGS) for optimization problems with an objective function and equality and inequality constraints defined by DC functions (differences of convex functions). In such problems, the modern apparatus of convex optimization proves inadequate not only for characterizing and finding a global solution but also for "escaping" from a local extremum.
The report presents the main properties of the linear space of DC functions, in particular, the inclusion $C^2(X)\subset DC(X)$ on a compact set $X\subset\mathbb{R}^n$, and therefore any optimization problem with continuous data can be approximated, to any given accuracy, by a DC optimization problem. The primary focus is initially on canonical DC optimization problems, such as convex maximization and DC minimization over standard sets, for which the apparatus of Global Optimality Conditions (GOC), constituting the core of TGS, is presented.
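As a minimal illustration of the inclusion $C^2(X)\subset DC(X)$ (a standard argument, not necessarily the one given in the talk), assume $X$ is convex and compact and choose $\rho \ge \max_{x\in X}\|\nabla^2 f(x)\|$; then
$$ f(x) \;=\; \Big(f(x)+\tfrac{\rho}{2}\|x\|^2\Big) \;-\; \tfrac{\rho}{2}\|x\|^2 \;=\; g(x)-h(x), $$
where both $g$ and $h$ are convex on $X$, since $\nabla^2 g(x)=\nabla^2 f(x)+\rho I\succeq 0$.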
The main attention is then devoted to the DC optimization problem with DC equality and inequality constraints. Using the Theory of Exact Penalization, this problem is reduced to an unconstrained problem whose objective function turns out to be DC. For the latter problem, the corresponding GOCs are proven, which initiates the construction of a Global Search Scheme (GSS) employing special Methods of Local Search (MLS), within which modern (classical) methods of convex optimization are applied.
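A sketch of the penalized problem (the notation here is ours and may differ from the talk): for the problem $f_0(x)\to\min$, $f_i(x)\le 0$ ($i\in I$), $f_j(x)=0$ ($j\in E$), with all data DC, one considers, for a sufficiently large penalty parameter $\sigma>0$,
$$ \Theta_\sigma(x) \;:=\; f_0(x) \;+\; \sigma\Big[\max\{0,\,f_i(x),\ i\in I\} \;+\; \sum_{j\in E}|f_j(x)|\Big] \;\to\; \min. $$
The penalty term preserves the DC structure: a pointwise maximum of DC functions is DC, and for $f=g-h$ with $g,h$ convex one has $|f|=\max\{f,-f\}=2\max\{g,h\}-(g+h)$, again a DC function; hence $\Theta_\sigma$ is DC.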
In conclusion, applications of the developed approach are presented, such as the numerical search for Nash equilibria in bimatrix games, bilevel optimization, and the classical problem of solving systems of nonlinear equations (SSNE). A (limited) list of publications on mathematical optimization and optimal control is provided.
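For the bimatrix-game application, one classical reduction to a DC (bilinear) optimization problem is the Mangasarian–Stone formulation (the talk may employ a different but related one): for payoff matrices $A,B\in\mathbb{R}^{m\times n}$,
$$ \langle x,(A+B)y\rangle-\alpha-\beta \;\to\; \max_{x,y,\alpha,\beta}, \qquad Ay\le\alpha e_m,\quad B^{\top}x\le\beta e_n,\quad x\in S_m,\ y\in S_n, $$
where $S_k$ is the standard simplex and $e_k$ the vector of ones; a pair $(x^*,y^*)$ is a Nash equilibrium iff it is part of a global solution with optimal value $0$. The bilinear objective is DC, e.g. $\langle x,(A+B)y\rangle=\tfrac14\|x+(A+B)y\|^2-\tfrac14\|x-(A+B)y\|^2$.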