Requesting help from an English expert
Memory gradient and supermemory gradient methods are generalizations of conjugate gradient methods [9,16,20,30]. They use not only the current iterative information but also previous iterative information to generate a new iterate at each iteration. Moreover, the limited-memory quasi-Newton method is also a useful approach to solving large-scale minimization problems [2,3,11,12,18,25,26,31,39]. It finds a new iterate by memorizing a limited number of gradients at each iteration. Thus, limited-memory quasi-Newton methods are also supermemory gradient methods [19,22,27–29].
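For context while working on the translation, here is a minimal sketch (my own illustration, not the paper's formula) of the idea the paragraph describes: a memory gradient method builds its new search direction from the current gradient plus a few previous directions. The weight `beta`, the two-direction memory, and the fixed step size are placeholder assumptions.

```python
import numpy as np

def memory_gradient_direction(grad, prev_dirs, beta=0.3):
    """Illustrative memory-gradient direction (not the paper's exact formula):
    steepest descent plus a weighted sum of a few previous directions,
    i.e. it uses both current and previous iterative information."""
    d = -grad
    for d_prev in prev_dirs:
        d += beta * d_prev
    return d

# Tiny demo on f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.array([2.0, -1.0])
history = []                                        # previously used directions
for _ in range(20):
    d = memory_gradient_direction(x, history[-2:])  # remember two directions
    x = x + 0.5 * d                                 # fixed step size, illustration only
    history.append(d)
print(np.linalg.norm(x))                            # small: iterates approach the minimizer
```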
In fact, some conjugate gradient methods have no global convergence when minimizing non-quadratic objective functions under certain traditional inexact line searches. Many supermemory gradient methods also lack global convergence in some cases. To guarantee the global convergence of supermemory gradient methods and to simplify computation, new inexact line search and curve search rules have been proposed [1,34–36] for practical computation.
However, some difficulties remain to be overcome in analyzing supermemory gradient methods with inexact line searches. For example, global convergence and the rate of convergence in general cases are interesting and meaningful problems. How to use a trust region approach to guarantee global convergence is another challenge in algorithm design. In fact, we would like to construct a supermemory gradient method with three properties: it converges stably and steadily, so that it can solve ill-conditioned problems; its computation is simple, so that it can solve large-scale problems; and it is self-adaptive, adjusting its parameters automatically.
In this paper we propose a new class of supermemory gradient methods for unconstrained optimization problems. A trust region approach is used in the new algorithms to guarantee global convergence. At each iteration, the new algorithms generate a suitable trust region radius automatically and obtain the next iterate by solving a simple subproblem. These algorithms converge stably and steadily because they use more iterative information, and they reduce to quasi-Newton methods when the iterate is close to the optimal solution. Numerical results show that this new class of supermemory gradient methods is effective in practical computation.
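As a rough companion sketch of the workflow this paragraph describes (the paper's actual subproblem and radius rule are not given here, so the acceptance threshold, update factors, and radius cap below are standard textbook placeholders): the step along the search direction is clipped to the trust region radius, and the radius is adjusted by comparing actual and predicted reduction.

```python
import numpy as np

def trust_region_step(f, grad, x, d, radius):
    """One illustrative trust-region step along direction d (a placeholder
    for the paper's 'simple subproblem'): clip the step to the radius,
    then accept or reject it by comparing actual and predicted reduction."""
    g = grad(x)
    step = d * min(1.0, radius / (np.linalg.norm(d) + 1e-12))
    predicted = -g.dot(step)            # reduction predicted by the linear model
    actual = f(x) - f(x + step)
    rho = actual / (predicted + 1e-12)
    if rho > 0.25:                      # good agreement: accept, expand (capped)
        return x + step, min(2.0 * radius, 10.0)
    return x, 0.5 * radius              # poor agreement: reject, shrink the radius

# Usage on f(x) = 0.5 * ||x||^2 with the steepest-descent direction.
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x, radius = np.array([3.0, 4.0]), 1.0
for _ in range(30):
    x, radius = trust_region_step(f, grad, x, -grad(x), radius)
print(np.linalg.norm(x))                # close to 0: iterates reach the minimizer
```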
The rest of this paper is organized as follows. In Section 2 we describe the algorithm and analyze some of its simple properties. In Sections 3 and 4 we prove its global convergence and convergence rate, respectively. Numerical results are reported in Section 5. Some conclusions are summarized in Section 6.

(It's a bit long, but I really had no choice! Thank you for your trouble!)

Just download Youdao Desktop Dictionary from the 360 software section and translate it with that.
The translation won't be complete, but you can revise it further yourself.