Recent Advances in Nonsmooth Optimization

Nonsmooth optimization covers the minimization or maximization of functions that lack the differentiability properties required by classical methods. The field is significant not only because nondifferentiable functions arise directly in applications, but also because several important methods for solving difficult smooth problems lead directly to nonsmooth problems that are either smaller in dimension or simpler in structure.

This book contains twenty-five papers written by forty-six authors from twenty countries on five continents. It includes papers on theory, algorithms, and applications for problems with first-order nondifferentiability (the usual sense of nonsmooth optimization), second-order nondifferentiability, nonsmooth equations, nonsmooth variational inequalities, and other problems related to nonsmooth optimization.
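To make the description concrete, here is a minimal sketch of subgradient descent, the prototypical first-order method for a nondifferentiable convex objective. The example objective (an l1-distance) and all function names are illustrative assumptions, not drawn from any paper in the volume:

```python
# Subgradient descent on the nonsmooth convex function
# f(x) = ||x - c||_1, whose minimizer is x = c.
# The l1-norm is not differentiable where a component equals
# its target, so a subgradient stands in for the gradient.

def sign(t):
    return (t > 0) - (t < 0)

def f(x, c):
    return sum(abs(xi - ci) for xi, ci in zip(x, c))

def subgradient(x, c):
    # One valid element of the subdifferential of the
    # l1-distance at x (componentwise sign, 0 at kinks).
    return [sign(xi - ci) for xi, ci in zip(x, c)]

def subgradient_descent(c, x0, iters=2000):
    x = list(x0)
    best = f(x, c)
    for k in range(iters):
        step = 1.0 / (k + 1) ** 0.5   # diminishing step sizes
        g = subgradient(x, c)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        best = min(best, f(x, c))     # track the best value seen
    return best

print(subgradient_descent(c=[1.0, -2.0], x0=[0.0, 0.0]))
```

Because the iterates can oscillate around a kink, the method tracks the best objective value seen rather than the last iterate; with diminishing step sizes that best value converges to the minimum.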
Contents
Subdifferential Characterization of Convexity | 18 |
On Generalized Differentiability of Optimal Solutions and its Application | 36 |
Projected Gradient Methods for Nonlinear Complementarity Problems | 57 |
An NCP-Function and its Use for the Solution of Complementarity | 88 |
An Elementary Rate of Convergence Proof for the Deep | 106 |
Solving Nonsmooth Equations by Means of Quasi-Newton Methods | 121 |
Superlinear Convergence of Approximate Newton Methods for LC¹ | 141 |
On Second-Order Directional Derivatives in Nonsmooth Optimization | 159 |
On the Solution of Optimum Design Problems with Variational | 172 |
Monotonicity and Quasimonotonicity in Nonsmooth Analysis | 193 |
Nonunique Multipliers | 215 |
Prederivatives and Second Order Conditions for Infinite | 244 |
Necessary and Sufficient Conditions for Solution Stability | 261 |
Miscellaneous Incidences of Convergence Theories in Optimization | 289 |
Second-Order Nonsmooth Analysis in Nonlinear Programming | 322 |
Characterizations of Optimality for Homogeneous Programming | 351 |
On Regularized Duality in Convex Optimization | 381 |
A Globally Convergent Newton Method for Solving Variational | 405 |
Upper Bounds on a Parabolic Second Order Directional Derivative | 418 |
An SLP Method with a Quadratic Correction Step for Nonsmooth | 438 |
A Successive Approximation Quasi-Newton Process for Nonlinear | 459 |
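Several chapters above concern complementarity problems and NCP-functions. As a hedged illustration of the NCP-function idea (the specific function and algorithms treated in the book are not shown here), the widely used Fischer-Burmeister function recasts a scalar complementarity problem as a nonsmooth equation; the map `F` and the bisection solver below are illustrative assumptions:

```python
import math

# Fischer-Burmeister NCP-function: phi(a, b) = 0 exactly when
# a >= 0, b >= 0, and a*b = 0. The complementarity problem
# "find x >= 0 with F(x) >= 0 and x*F(x) = 0" thus becomes
# the single nonsmooth equation phi(x, F(x)) = 0.
def phi(a, b):
    return math.sqrt(a * a + b * b) - a - b

def F(x):
    # Illustrative one-dimensional map (not from the book);
    # the complementarity solution is x = 1, where F(x) = 0.
    return x - 1.0

def solve_ncp_bisection(lo=0.0, hi=2.0, iters=60):
    # phi(x, F(x)) changes sign on [0, 2], so plain bisection
    # suffices for this scalar sketch; the book's papers study
    # Newton-type methods for the general nonsmooth equation.
    g = lambda x: phi(x, F(x))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(solve_ncp_bisection())
```

Note that `phi` is not differentiable at the origin, which is precisely why semismooth Newton methods of the kind studied in this volume are needed for fast solvers in higher dimensions.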
Other editions
Recent Advances in Nonsmooth Optimization, Ding-Zhu Du, Liqun Qi, Robert S. Womersley, 1995 (limited preview)
Common terms and phrases
applied, approximate Newton, assume, assumptions, Banach spaces, BFGS method, bounded, computing, consider, constraint qualification, convex functions, convex set, defined, Definition, denote, directional derivative, equivalent, Euclidean distance matrix, Example, exists, f(xo, feasible, finite, function f, Gauss-Newton point, given, global convergence, Hence, holds, implies, iteration, K₁, Lemma, linear complementarity problem, Lipschitz, Lipschitz continuous, Lipschitzian, Mathematical Programming, Mathematics of Operations, merit function, minimization, monotone, Newton method, nonempty, nonlinear complementarity problem, nonlinear programming, nonsmooth analysis, nonsmooth equations, Nonsmooth Optimization, normal cone, objective function, obtain, Operations Research, optimal solution, optimality conditions, optimization problems, orthant, parameter, polyhedral, positively homogeneous, programming problems, projected gradient, Proof, properties, Proposition, quasi-Newton methods, quasimonotone, R. T. Rockafellar, result, satisfied, second order, Section, semismooth, sequence, SIAM Journal, step, subdifferential, subgradient, subset, superlinear, Theory, triangulation, variational inequality, vector, zero