Update version of experimental/mcbasket


Update version of experimental/mcbasket

andrea-110
Hi,

After almost two years I would like to post an updated version of the experimental mcbasket code.
The idea is still the same as in the initial version: have one single engine (here
MCAmericanPathEngine or MCPathBasketEngine) and only have to write a new option/payoff
(PathMultiAssetOption/PathPayoff).

The biggest new feature here is support for American options.

The key piece is this pure virtual method of PathPayoff:

         virtual void value(const Matrix       & path,
                            Array              & payments,
                            Array              & exercises,
                            std::vector<Array> & states) const = 0;

which has to return all the information needed to value a payoff with early exercise, for a single path:

path: the path of all assets/times
payments: all payments made
exercises: if the option is exercised at time i, all payments up to (and including) time i are
preserved and the others are cancelled
states: a vector of financial coordinates used in the Longstaff-Schwartz regression
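To make the contract concrete, here is a minimal standalone sketch of a PathPayoff-style class for a single-asset American call. It is not part of the patch: std::vector stands in for QuantLib's Array/Matrix, the class name is hypothetical, and the path is assumed to be indexed as path[asset][time].

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-ins for QuantLib's Array and Matrix (assumption:
// path[asset][time] indexing, matching the description above).
using SimpleArray  = std::vector<double>;
using SimpleMatrix = std::vector<std::vector<double>>;

// Hypothetical payoff: for each time i, payments[i] holds cash paid at i,
// exercises[i] the value received if exercised at i, and states[i] the
// financial coordinates used in the Longstaff-Schwartz regression.
struct AmericanCallPathPayoff {
    double strike;

    void value(const SimpleMatrix& path,
               SimpleArray& payments,
               SimpleArray& exercises,
               std::vector<SimpleArray>& states) const {
        const std::size_t nTimes = path[0].size();
        payments.assign(nTimes, 0.0);   // no intermediate coupons here
        exercises.resize(nTimes);
        states.resize(nTimes);
        for (std::size_t i = 0; i < nTimes; ++i) {
            const double spot = path[0][i];
            exercises[i] = std::max(spot - strike, 0.0); // intrinsic value
            states[i]    = { spot }; // one regression coordinate: the spot
        }
    }
};
```

A path-dependent payoff (such as the attached American lookback) would fill the same three outputs, only with exercises and states computed from the path history instead of the current spot alone.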

I have attached a diff against the most recent SVN revision and an example of an American lookback option.

I can go through the code in detail if people are interested.

I don't think it makes much sense to compare this to the first version, since that one was very
limited in features and hardly usable at all.

TODO, problems, bugs:

- I had to copy and paste many classes/templates already present in QL (e.g. EarlyExercisePathPricer,
LongstaffSchwartzPathPricer). The problem is that the existing EarlyExercisePathPricer only seems to
handle an option that pays once (i.e. at exercise), while I wanted to allow an option that pays
many times and that can be cancelled at some point

- allow a non-1-to-1 mapping between paths and assets (e.g. stochastic volatility)

- allow non-deterministic interest rates (i.e. replace the discount factors with 1/numeraire)

- find nicer names

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
QuantLib-dev mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/quantlib-dev

Attachments: example.cpp (7K), ql.diff.gz (12K)

Re: Update version of experimental/mcbasket

Klaus Spanderen-2
Hi Andrea

do you see a chance to replace the original versions of the classes

 EarlyExercisePathPricer and LongstaffSchwartzPathPricer

in ql/methods/montecarlo with your version?

best regards
 Klaus


Re: Update version of experimental/mcbasket

andrea-110
On 10/10/09 20:53, Klaus Spanderen wrote:
> Hi Andrea
>
> do you see a chance to replace the original version of the classes
>
>   EarlyExercisePathPricer and LongstaffSchwartzPathPricer
>
> in ql/methods/montecarlo by your version?

Ideally, yes.
The reasons I did not use the EarlyExercisePathPricer are:

1) it only provides a method to get the exercise payment (i.e. operator()) and the state, while I
would like to get the value of all the payments made until the option is cancelled

2) I first added this as a third virtual method, but then I realized that what the
LongstaffSchwartzPathPricer caches is not optimal: it caches the paths, while I found it much better
to cache a triplet of vectors (exercise payments, payments, states), so that each path is processed
only once.
For a trivial case (e.g. the American call option) it makes no difference, but if each payment or
state is path-dependent and only the path is cached, the whole path has to be reprocessed each time
a new "time" is needed.

3) so I decided it was better for the LongstaffSchwartzPathPricer to have direct access to the
payoff and to store the relevant information for each path (and not the path itself)
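The caching idea in points 2 and 3 can be sketched as follows. This is an illustration only, with hypothetical names: each path is reduced once to the triplet the backward pass actually consumes, and the cancellation rule described earlier (payments up to and including the exercise time are preserved) can then be evaluated without revisiting the path.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical cache entry: instead of storing the simulated path, store
// the triplet discussed above. Even when payments and states are
// path-dependent, the path is processed exactly once to build this record.
struct PathRecord {
    std::vector<double> exercises;            // exercise value at each time
    std::vector<double> payments;             // cash flow at each time
    std::vector<std::vector<double>> states;  // regression coordinates
};

// Hypothetical helper: value of one cached path for a fixed exercise time i,
// following the rule that payments up to and including i are preserved and
// later ones are cancelled (discounting omitted for brevity).
double pathValueIfExercisedAt(const PathRecord& rec, std::size_t i) {
    double v = 0.0;
    for (std::size_t t = 0; t <= i; ++t)
        v += rec.payments[t];
    return v + rec.exercises[i];
}
```

The backward Longstaff-Schwartz pass can then compare such values across exercise dates using only the cached records.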

I have not seen how the current implementations of EarlyExercisePathPricer and
LongstaffSchwartzPathPricer are used in a complex path-dependent case, so maybe there is a more
"standard" way of doing it.

I then added two extra things to the LongstaffSchwartzPathPricer (both easily ported to the main version):

1) it currently skips all paths that give a non-positive exercise value. This is because it assumes
the continuation value will always be positive, so there is no point in accepting a negative
exercise. That may not always be the case, and it is hard to detect this lower bound (0.0)
dynamically, so the payoff has a function that returns the lower bound of the continuation value,
or -INF if absent
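The generalized filter amounts to something like the following sketch (names are mine, not from the patch): the classic behaviour corresponds to a bound of 0.0, while a payoff that cannot supply a bound returns negative infinity and every path enters the regression.

```cpp
#include <cassert>
#include <limits>

// Sketch of the generalized path filter: a path is included in the
// Longstaff-Schwartz regression only if exercising could beat the payoff's
// stated lower bound on the continuation value.
bool includeInRegression(double exerciseValue, double continuationLowerBound) {
    return exerciseValue > continuationLowerBound;
}

// The classic assumption is a bound of 0.0 (continuation never negative);
// -infinity means "no bound known", so no path is ever filtered out.
const double kNoBound = -std::numeric_limits<double>::infinity();
```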

2) it checks whether the two trivial exercise strategies (exercise on all paths, or never) are
better than the LS algorithm. If so, the decision is overridden with two special vectors of
coefficients (empty => never, too big => always).
I noticed that if the states are chosen badly, the LS algorithm can be trivially wrong.
This might be bad for sensitivities, since the whole "function" then has a discontinuity.
The result of this check is printed using QL_TRACE.
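The safeguard boils down to a comparison of three Monte Carlo estimates; a minimal sketch with hypothetical names (the actual patch encodes the override through the regression coefficient vectors, as described above):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Sketch of the trivial-strategy check: compare the average per-path value
// under "never exercise", "always exercise", and the Longstaff-Schwartz
// decision. If a trivial strategy dominates, override the regression, which
// protects against badly chosen regression states.
enum class Strategy { Never, Always, LongstaffSchwartz };

double mean(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

Strategy bestStrategy(const std::vector<double>& neverValues,
                      const std::vector<double>& alwaysValues,
                      const std::vector<double>& lsValues) {
    const double n = mean(neverValues);
    const double a = mean(alwaysValues);
    const double l = mean(lsValues);
    if (n >= a && n >= l) return Strategy::Never;
    if (a >= l)           return Strategy::Always;
    return Strategy::LongstaffSchwartz;
}
```

The discontinuity concern for sensitivities comes from this very comparison: a small bump in market data can flip which branch wins, so the priced value is not smooth in the inputs.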

Andrea



Re: Update version of experimental/mcbasket

andrea-110
On 11/10/09 19:46, Andrea wrote:

> 1) it currently skips all paths that give a non-positive exercise value. This is because it assumes
> the continuation value will always be positive, so there is no point in accepting a negative
> exercise. That may not always be the case, and it is hard to detect this lower bound (0.0)
> dynamically, so the payoff has a function that returns the lower bound of the continuation value,
> or -INF if absent

It is actually possible to detect dynamically when an option is out of the money.
Since we go backward, we just remember the lowest payoff seen so far, and never exercise if the
early-termination value is less than or equal to that value.
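The backward rule can be sketched like this (a standalone illustration with hypothetical names, ignoring discounting): the running minimum over later dates acts as a free lower bound on the continuation value, replacing the payoff-supplied bound from the earlier version.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Going backward through the exercise dates, remember the lowest exercise
// payoff observed so far; an early exercise whose value is less than or
// equal to that running minimum can be skipped, since continuing can never
// do worse. The terminal date is excluded (exercise there is just expiry).
// Returns, per date, whether early exercise should be considered at all.
std::vector<bool> exerciseCandidates(const std::vector<double>& payoffs) {
    std::vector<bool> candidate(payoffs.size(), false);
    double lowest = payoffs.back();  // start from the terminal payoff
    for (std::size_t j = payoffs.size() - 1; j-- > 0; ) {
        candidate[j] = payoffs[j] > lowest;  // skip if <= running minimum
        lowest = std::min(lowest, payoffs[j]);
    }
    return candidate;
}
```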

I have implemented this in the attached updated version of the mcbasket experimental patch.
In my (simple) tests it worked properly.


Attachment: mcbasket.diff.gz (13K)

Re: Update version of experimental/mcbasket

Klaus Spanderen-2
Hi

can you do me a favour and send me your ql/experimental/mcbasket directory as
a tarball? (The diff doesn't apply on my machine here.)

thanks

Klaus

On Monday 09 November 2009 14:34:24 Andrea wrote:

> On 11/10/09 19:46, Andrea wrote:
> > 1) it currently skips all paths that give a non-positive exercise value.
> > This is because it assumes the continuation value will always be positive,
> > so there is no point in accepting a negative exercise. That may not always
> > be the case, and it is hard to detect this lower bound (0.0) dynamically,
> > so the payoff has a function that returns the lower bound of the
> > continuation value, or -INF if absent
>
> It is actually possible to detect dynamically when an option is out of the
> money. Since we go backward, we just remember the lowest payoff seen so far,
> and never exercise if the early-termination value is less than or equal to
> that value.
>
> I have implemented this in the attached updated version of the mcbasket
> experimental patch. In my (simple) tests it worked properly.


