Posted by andrea-110 on Oct 11, 2009; 6:46pm
URL: http://quantlib.414.s1.nabble.com/Update-version-of-experimental-mcbasket-tp9167p9169.html
On 10/10/09 20:53, Klaus Spanderen wrote:
> Hi Andrea,
>
> do you see a chance to replace the original version of the classes
>
> EarlyExercisePathPricer and LongstaffSchwartzPathPricer
>
> in ql/methods/montecarlo with your version?
Ideally yes.
The reasons I did not use EarlyExercisePathPricer are:
1) It only provides methods to get the exercise payment (i.e. operator()) and the state, while I
would also like the value of the payments that are made until the option is cancelled.
2) I first added this third virtual method, but then I realized that what the
LongstaffSchwartzPathPricer caches is not optimal: it caches the paths, while I found it much better
to cache a triplet of vectors (exercise payments, payments, states), so that each path is processed
only once (see the sketch below).
For a trivial case (e.g. an American call option) this makes no difference, but if each payment or
state is path dependent and only the path is cached, the whole path has to be reprocessed each time
a new "time" is needed.
3) So I decided it was better for the LongstaffSchwartzPathPricer to have direct access to the
payoff and to store the relevant information for each path (rather than the path itself).
I have not seen how the current implementations of EarlyExercisePathPricer and
LongstaffSchwartzPathPricer are used in a complex path-dependent case, so maybe there is a more
"standard" way of doing it.
I then added two extra things to LongstaffSchwartzPathPricer (both easily ported to the main version):
1) It currently skips all paths that give a non-positive exercise value, because it assumes the
continuation value will always be positive, so there is no point in accepting a negative exercise
value. That is not always the case, and it is hard to detect this lower bound (0.0) dynamically, so
the payoff has a function that returns the lower bound of the continuation value, or -INF if there
is none (see the sketch after this list).
2) It checks whether either of the two trivial exercise strategies (exercise on all paths, or
never) is better than the LS algorithm; if so, the decision is overridden with one of two special
vectors of coefficients (empty => never, too big => always).
I noticed that if the states are chosen badly, the LS algorithm can be trivially wrong. This might
be bad for sensitivities, since the whole "function" then has a discontinuity. The result of this
check is printed with QL_TRACE.
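In code, the two extras look roughly like this (again a sketch with illustrative names, not the
exact code in my patch):

#include <ql/types.hpp>
#include <limits>
#include <vector>
using namespace QuantLib;

// 1) Hypothetical hook on the payoff: the lower bound of the
//    continuation value. Returning -infinity means "no usable bound",
//    so no path is skipped.
class PayoffSketch {
  public:
    virtual ~PayoffSketch() {}
    virtual Real continuationLowerBound() const {
        return -std::numeric_limits<Real>::infinity();
    }
};

// 2) After calibration, compare the LS value against the two trivial
//    strategies and, if one of them wins, replace the regression
//    coefficients with the corresponding marker vector.
inline void overrideIfTrivialStrategyWins(Real lsValue,
                                          Real alwaysValue,
                                          Real neverValue,
                                          std::vector<Real>& coeff) {
    if (neverValue >= lsValue && neverValue >= alwaysValue) {
        coeff.clear();                      // empty vector => "never exercise" marker
    } else if (alwaysValue >= lsValue) {
        coeff.assign(coeff.size() + 1,      // "too big" vector => "always exercise" marker
                     std::numeric_limits<Real>::max());
    }
    // the real code reports the outcome of this check via QL_TRACE
}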
Andrea