
Armijo Line Search

Posted by Sergey Iskoz on Jul 31, 2004; 1:26pm
URL: http://quantlib.414.s1.nabble.com/Armijo-Line-Search-tp3128.html

Hi,

I have a question about the implementation of the Armijo line search algorithm (see ql/Optimization/armijo.*).

The termination condition in the do ... while loop seems to use the gradient norm qpt_.  However, looking at the description of the algorithm in Polak's book (pp. 30-31) referenced in armijo.hpp, and in Bertsekas's book (p. 25) [Dimitri Bertsekas, Nonlinear Programming, 1995], I see that they use the inner product of the gradient g with the direction vector d, and d will not always be equal to -g.  In addition, Bertsekas recommends setting alpha (which he calls sigma) to a value in [1e-5, 1e-1], while in the current implementation alpha defaults to 0.5.  As an aside, I also notice that one of the terminating conditions differs somewhat between Polak (eq. 20b on p. 30) and Bertsekas (who essentially replaces the RHS of 20b with 0).
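
For what it's worth, here is a minimal sketch of the backtracking Armijo rule as I read it in Bertsekas, using the inner product g.d (the directional derivative) rather than the gradient norm.  The function name armijoStep and the default parameter values are purely illustrative and are not QuantLib's ArmijoLineSearch interface:

// Sketch of a textbook Armijo backtracking rule (Bertsekas-style),
// using the directional derivative g.d instead of the gradient norm.
// Names and defaults are illustrative only, not the QuantLib interface.
#include <vector>
#include <cstddef>

double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Returns a step length t = beta^k satisfying the sufficient-decrease test
//   f(x + t d) - f(x) <= alpha * t * (g . d),
// where d is assumed to be a descent direction, so g.d < 0.
template <class Objective>
double armijoStep(const Objective& f,
                  const std::vector<double>& x,
                  const std::vector<double>& g,   // gradient at x
                  const std::vector<double>& d,   // search direction
                  double alpha = 1e-4,            // Bertsekas suggests [1e-5, 1e-1]
                  double beta  = 0.5,             // step-reduction factor
                  std::size_t maxIter = 50) {
    const double f0 = f(x);
    const double gd = dot(g, d);                  // directional derivative, not -|g|^2
    double t = 1.0;
    std::vector<double> xt(x.size());
    for (std::size_t k = 0; k < maxIter; ++k) {
        for (std::size_t i = 0; i < x.size(); ++i)
            xt[i] = x[i] + t * d[i];
        if (f(xt) - f0 <= alpha * t * gd)         // sufficient decrease achieved
            return t;
        t *= beta;                                // shrink the step and retry
    }
    return t;   // caller should treat a tiny t as a failure rather than underflow
}

Note that when d = -g the inner product is just minus the squared gradient norm, so the two formulations should agree for pure steepest descent, but not for conjugate-gradient directions.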

I do not have much experience with implementing multi-dimensional optimization, so I may very well be missing something.  The current implementation worked when I used it on a very simple problem.  When I tried using the conjugate gradient and steepest descent methods on a slightly more complicated problem (which still has analytic derivatives), they did not work.  They failed because the Armijo procedure produced very small values of the scaling factor t, and I eventually got an underflow error.

I would appreciate any comments/clarifications that people might have.

Thanks,

Sergey Iskoz