A New Method for Unconstrained Optimization Problem
- Zhiguang Zhang
Abstract
This paper presents a new memory gradient method for unconstrained optimization problems. The method uses the current and previous multi-step iteration information to generate a new iterate, and introduces additional free parameters, which makes it suitable for large-scale unconstrained optimization problems. Global convergence is proved under mild conditions. Numerical experiments show that the algorithm is efficient in many situations.
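The abstract does not give the paper's exact update rule, but the general shape of a memory gradient iteration can be sketched as follows: the search direction combines the steepest-descent direction with a few previous directions, weighted by free parameters. The sketch below is a generic, illustrative instance (the combination weight `beta`, memory length `m`, and the Armijo line search are assumptions, not the paper's method).

```python
import numpy as np

def memory_gradient_minimize(f, grad, x0, m=3, beta=0.1, tol=1e-8, max_iter=1000):
    """Generic memory gradient iteration (illustrative sketch only).

    Direction: d_k = -g_k + beta * (sum of the m most recent directions),
    safeguarded so that d_k stays a descent direction.
    Step size: Armijo backtracking.
    """
    x = np.asarray(x0, dtype=float)
    history = []  # previous search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + beta * sum(history[-m:]) if history else -g
        if np.dot(g, d) >= 0:   # safeguard: fall back to steepest descent
            d = -g
        t, fx = 1.0, f(x)       # Armijo backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * np.dot(g, d):
            t *= 0.5
        x = x + t * d
        history.append(d)
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = memory_gradient_minimize(f, grad, np.zeros(2))
```

Because only the last `m` directions are stored, the per-iteration memory cost stays O(m·n), which is why such methods scale to large problems.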
- DOI:10.5539/mas.v4n10p133
This work is licensed under a Creative Commons Attribution 4.0 License.