Re: optimizers

Posted by frederic.degraeve (Bugzilla) on
URL: http://quantlib.414.s1.nabble.com/optimizers-tp9481p9489.html

Hello,

 

QuantLib's optimization architecture can be re-used, can't it? The main difference between
conjugate gradient and BFGS is only the update of the search-direction vector "lineSearch_->searchDirection()";
BFGS would re-use the current Armijo line search.  However, it is true that this would need more
time for testing, and also that no box constraints have been developed for QuantLib's conjugate gradient.
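
For what it's worth, here is a minimal standalone sketch of that difference (plain std::vector and hypothetical names such as bfgsUpdate/bfgsDirection, not the actual QuantLib classes): the conjugate-gradient update of the search direction would be replaced by an inverse-Hessian update, and d = -H g would then be fed to the existing Armijo line search.

#include <cstddef>
#include <vector>

// Sketch only.  H is the current inverse-Hessian approximation,
// s = x_{k+1} - x_k is the step accepted by the line search,
// y = g_{k+1} - g_k is the corresponding change in the gradient.
typedef std::vector<double> Vec;
typedef std::vector<Vec> Mat;

void bfgsUpdate(Mat& H, const Vec& s, const Vec& y) {
    const std::size_t n = s.size();
    double sy = 0.0;                              // s'y
    for (std::size_t i = 0; i < n; ++i) sy += s[i] * y[i];
    if (sy <= 0.0) return;                        // curvature condition failed: skip update

    Vec Hy(n, 0.0);                               // H y
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            Hy[i] += H[i][j] * y[j];

    double yHy = 0.0;                             // y'H y
    for (std::size_t i = 0; i < n; ++i) yHy += y[i] * Hy[i];

    // H <- H + (1 + y'Hy/s'y) s s'/(s'y) - (H y s' + s y'H)/(s'y)
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            H[i][j] += (1.0 + yHy / sy) * s[i] * s[j] / sy
                     - (Hy[i] * s[j] + s[i] * Hy[j]) / sy;
}

// The new direction fed to the existing (Armijo) line search is d = -H g,
// instead of the conjugate-gradient d = -g + beta * d_old.
Vec bfgsDirection(const Mat& H, const Vec& g) {
    const std::size_t n = g.size();
    Vec d(n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            d[i] -= H[i][j] * g[j];
    return d;
}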
 

Regards,

 

Frédéric Degraeve

 


From: [hidden email] [mailto:[hidden email] ] On Behalf Of DU VIGNAUD DE VILLEFORT FRANCOIS GASAPRD PHI
Sent: Wednesday, 23 May 2007 19:35
To: Bianchetti Marco; [hidden email]
Subject: Re: [Quantlib-dev] optimizers

 

 

Here is a good candidate for a C++ implementation:

 

http://www.alglib.net/optimization/lbfgs.php

 

However, I'm not absolutely sure that its terms of use are compatible with those of QL.

François

 

-----Original Message-----
From: [hidden email] [mailto:[hidden email]] On Behalf Of Bianchetti Marco
Sent: Wednesday, May 23, 2007 6:13 PM
To: [hidden email]
Subject: [Quantlib-dev] optimizers

 

Hello,

 

at the moment the following optimizers are available in QuantLib:

·          Simplex (recently revisited: the Numerical Recipes implementation badly failed to find the minimum of a 1D parabola...)

·          Levenberg-Marquardt

·          Conjugate Gradient

·          Steepest Descent (still to be debugged, work in progress)

and we are currently considering the option of porting the Broyden-Fletcher-Goldfarb-Shanno (BFGS2) algorithm into QuantLib, which GSL declares to be its best multidimensional minimizer (see the text below). So:

·          Any comments on the choice of BFGS2? Does anyone have experience with it?

·          Is anyone aware of an available open-source C++ implementation that could be ported into QuantLib with little effort?

Personally, I would prefer NOT to translate the GSL implementation from C to C++, because of the danger of introducing some tricky bug and because it would require a much more sophisticated test suite (and much work).

 

ciao

Marco

 

---

from:  http://www.gnu.org/software/gsl/manual/html_node/Multimin-Algorithms.html

Minimizer: gsl_multimin_fdfminimizer_vector_bfgs2
Minimizer: gsl_multimin_fdfminimizer_vector_bfgs

These methods use the vector Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. This is a quasi-Newton method which builds up an approximation to the second derivatives of the function f using the difference between successive gradient vectors. By combining the first and second derivatives the algorithm is able to take Newton-type steps towards the function minimum, assuming quadratic behavior in that region.

The bfgs2 version of this minimizer is the most efficient version available, and is a faithful implementation of the line minimization scheme described in Fletcher's Practical Methods of Optimization, Algorithms 2.6.2 and 2.6.4. It supersedes the original bfgs routine and requires substantially fewer function and gradient evaluations. The user-supplied tolerance tol corresponds to the parameter \sigma used by Fletcher. A value of 0.1 is recommended for typical use (larger values correspond to less accurate line searches).
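
As an illustration only (a minimal sketch using the GSL C API; the quadratic objective, the my_f/my_df/my_fdf names, the starting point and the 0.01 initial step size are just assumptions for the example), this is where the tol = 0.1 parameter mentioned above enters:

#include <gsl/gsl_errno.h>
#include <gsl/gsl_multimin.h>
#include <gsl/gsl_vector.h>
#include <cstdio>

// Example objective: a simple 2D quadratic with minimum at (1, 2).
double my_f(const gsl_vector* v, void*) {
    double x = gsl_vector_get(v, 0), y = gsl_vector_get(v, 1);
    return (x - 1.0) * (x - 1.0) + 10.0 * (y - 2.0) * (y - 2.0);
}

void my_df(const gsl_vector* v, void*, gsl_vector* df) {
    double x = gsl_vector_get(v, 0), y = gsl_vector_get(v, 1);
    gsl_vector_set(df, 0, 2.0 * (x - 1.0));
    gsl_vector_set(df, 1, 20.0 * (y - 2.0));
}

void my_fdf(const gsl_vector* v, void* p, double* f, gsl_vector* df) {
    *f = my_f(v, p);
    my_df(v, p, df);
}

int main() {
    gsl_multimin_function_fdf func;
    func.n = 2;
    func.f = &my_f;
    func.df = &my_df;
    func.fdf = &my_fdf;
    func.params = 0;

    gsl_vector* x = gsl_vector_alloc(2);          // starting point (5, 7)
    gsl_vector_set(x, 0, 5.0);
    gsl_vector_set(x, 1, 7.0);

    gsl_multimin_fdfminimizer* s =
        gsl_multimin_fdfminimizer_alloc(gsl_multimin_fdfminimizer_vector_bfgs2, 2);

    // last two arguments: initial step size 0.01, line-search tolerance tol = 0.1
    // (Fletcher's sigma, the value recommended in the manual text above)
    gsl_multimin_fdfminimizer_set(s, &func, x, 0.01, 0.1);

    int status, iter = 0;
    do {
        ++iter;
        status = gsl_multimin_fdfminimizer_iterate(s);
        if (status) break;
        status = gsl_multimin_test_gradient(s->gradient, 1e-8);
    } while (status == GSL_CONTINUE && iter < 100);

    std::printf("minimum near (%g, %g) after %d iterations\n",
                gsl_vector_get(s->x, 0), gsl_vector_get(s->x, 1), iter);

    gsl_multimin_fdfminimizer_free(s);
    gsl_vector_free(x);
    return 0;
}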

