Higher-Order Optimality Conditions for Degenerate Unconstrained Optimization Problems
Abstract
In this paper, necessary and sufficient conditions for a minimum in the unconstrained degenerate optimization problem are presented. These conditions generalize the well-known optimality conditions. The new optimality conditions are stated in terms of polylinear forms and the pseudoinverse of the Hessian matrix, and the results are illustrated by examples. The formulation and appearance of these conditions differ from the higher-order optimality conditions obtained by other authors. The suggested representation of the higher-order optimality conditions makes them convenient for evaluating the convergence rate of unconstrained optimization methods in the case of a singular minimum point, for example, in the analysis of Newton's and quasi-Newton methods.
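The situation the abstract describes can be seen on a toy problem. The sketch below is purely illustrative (the function, names, and the pseudoinverse-based step are assumptions, not the paper's construction): f(x, y) = x² + y⁴ attains its minimum at the origin, yet the Hessian there is singular, so the classical second-order test is inconclusive and higher-order terms must decide.

```python
import numpy as np

# Illustration only: a degenerate minimum where second-order conditions fail.
# f(x, y) = x**2 + y**4 has its minimum at the origin, but the Hessian there
# is singular (rank 1), so higher-order optimality conditions are needed.

def grad(v):
    x, y = v
    return np.array([2.0 * x, 4.0 * y**3])

def hess(v):
    _, y = v
    return np.array([[2.0, 0.0], [0.0, 12.0 * y**2]])

x_star = np.zeros(2)
print(np.allclose(grad(x_star), 0.0))       # True: first-order condition holds
print(np.linalg.matrix_rank(hess(x_star)))  # 1 < 2: the Hessian is singular

# A Newton-type step built from the pseudoinverse of the singular Hessian
# corrects only the nondegenerate x-direction and makes no progress along
# the degenerate y-direction -- the slow-convergence effect the paper's
# analysis of Newton and quasi-Newton methods addresses.
v = np.array([0.1, 0.1])
step = -np.linalg.pinv(hess(x_star)) @ grad(v)
print(step)
```

The zero y-component of the step shows why convergence-rate analysis at a singular minimum point needs conditions beyond the second order.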
Keywords
Full text:
PDF (English)
References
W. Ring, B. Wirth, Optimization methods on Riemannian manifolds and their application to shape space, SIAM Journal on Optimization, 22 (2) (2012), 596–627.
N.G. Maratos, M.A. Moraitis, Some results on the Sign recurrent neural network for unconstrained minimization, Neurocomputing, 287 (2018), 1–25.
I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016.
A.R. Sankar, V.N. Balasubramanian, Are saddles good enough for deep learning?, arXiv preprint arXiv:1706.02052 (2017).
D. Mehta, T. Chen, T. Tang, J.D. Hauenstein, The loss surface of deep linear networks viewed through the algebraic geometry lens, arXiv preprint arXiv:1810.07716 (2018).
K.N. Belash, A.A. Tret’yakov, Methods for solving degenerate problems, USSR Comput. Math. and Math. Phys., 28 (4) (1988), 90–94.
E. Szczepanik, A. Prusinska, A. Tret’yakov, The p-Factor Method for Nonlinear Optimization, Schedae Informaticae, 21 (2012), 141–157.
M. Avriel, Nonlinear Programming: Analysis and Methods, Dover Publishing, New York, 2003.
D.H. Li, M. Fukushima, L. Qi, N. Yamashita, Regularized Newton methods for convex minimization problems with singular solutions, Comput. Optim. Appl., 28 (2004), 131–147.
C. Shen, X. Chen, Y. Liang, A regularized Newton method for degenerate unconstrained optimization problems, Optimization Letters, 6 (2012), 1913–1933.
Q. Li, A Modified Fletcher-Reeves-Type Method for Nonsmooth Convex Minimization, Stat., Optim. & Inf. Comput., 2 (3) (2014), 200–210.
T.N. Grapsa, A modified Newton direction for unconstrained optimization, Optimization, 63 (7) (2014), 983–1004.
X. Han, J. Zhang, J. Chen, A new hybrid conjugate gradient algorithm for unconstrained optimization, Bulletin of the Iranian Mathematical Society, 43 (6) (2017), 2067–2084.
X. Li, B. Wang, W. Hu, A modified nonmonotone BFGS algorithm for unconstrained optimization, Journal of Inequalities and Applications, 183 (2017), 1–18.
K. Ghazali, J. Sulaiman, Y. Dasril, D. Gabda, Newton-SOR iteration for solving large-scale unconstrained optimization problems with an arrowhead Hessian matrix, Journal of Physics: Conference Series, 1358:1 (2019), 1–10.
S. Taheri, M. Mammadov, S. Seifollahi, Globally convergent algorithms for solving unconstrained optimization problems, Optimization, 64 (2) (2015), 249–263.
W. Quapp, Searching Minima of an N-Dimensional Surface: A Robust Valley Following Method, Computers and Mathematics with Applications, 41 (2001), 407–414.
J.-P. Penot, Higher-order optimality conditions and higher-order tangent sets, SIAM Journal on Optimization, 27 (4) (2017), 2508–2527.
DOI: http://dx.doi.org/10.15421/142204
Editorial board address: 49050, Ukraine, Oles Honchar Dnipro National University, 18 Kozakova St., building 14, Faculty of Mechanics and Mathematics, Prof. P.I. Kogut, Dr. Sci. (Phys.-Math.)
email: p.kogut@i.ua
This work is licensed under a Creative Commons Attribution 4.0 International License.