re "BS call option price lower than intrinsic value - bug #3417114"


re "BS call option price lower than intrinsic value - bug #3417114"

R Yan
The bug report was invalidated and closed so quickly that I wasn't able to comment on it, so let me post here...
The bug was that the result of the BS() formula, c = BS(F,K,sigma,"call"), for a call option doesn't satisfy c >= F-K for certain K << F (i.e. deep in-the-money strikes).

The issue is caused by floating-point precision; that much is true. But I was hoping the dev team would dig a bit further for the cause. It seems I have to do that myself, so let me lay out the answer here...

The root cause is a flaw in the implementation of the cumulative normal distribution function, such that the computed f(x) is not monotonically increasing in x.
 
I'm not an expert in numerical analysis, so I don't know exactly what's wrong with the implementation. But if we use the cdf function from boost::math, we fix the problem. A sketch of the code is below (assuming a discount factor of 1.0):

  // Compute call value manually
  // (needs <cmath> and <boost/math/distributions/normal.hpp>).
  double s = sigma * std::sqrt(T);
  double d1 = std::log(F/K)/s + 0.5*s;
  double d2 = d1 - s;
  boost::math::normal N;
  double nd1 = boost::math::cdf(N, d1);
  double nd2 = boost::math::cdf(N, d2);
  double c = F * nd1 - K * nd2;

------------------------------------------------------------------------------
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
_______________________________________________
QuantLib-dev mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/quantlib-dev

Re: re "BS call option price lower than intrinsic value - bug #3417114"

Luigi Ballabio
On Mon, 2011-10-03 at 20:31 +0800, R Y wrote:
> The bug report was invalidated and closed too soon that I haven't been
> able to put a comment on that one, so let me post it here...
>
> The bug was that the result of BS() formula, c=BS(F,K,sigma,"call")
> for a call option doesn't satisfy c>=F-K for certain K<<F (i.e. deep
> in the money strike).

Yes.  The returned value was below the lower bound by about 1e-16, which
suggested a floating-point issue.  Indeed, this turned out to be the
case.


> The issue is caused by floating point precision. That is true. But I
> was hoping the dev team could dig a bit further for the cause but it
> seems I have to do that myself and lay out the answer here...
>
> The root cause is that there is a flaw in the implementation of the
> cumulative normal CDF function such that the function f(x) is not
> monotonically increasing in x.

I did dig for the cause.  The root cause is simply that tests for strict
floating-point equality are ill-formed.  The same problem is triggered
by the following program, which is similar to yours, has the same input
data, but does not use a cdf at all:

#include <iostream>

int main() {
        double F = 1.35;
        double K = 0.39;
        double diff = 0.96;
        if (diff < F-K)
        {
                std::cerr << "Error: diff = " << diff << ", F-K = "
                        << (F - K) << std::endl;
        }
}

The above fails when compiled with VC++10 Express.  QuantLib is not even
included.


> I'm not an expert in numerical analysis so I don't know what's wrong
> exactly with the implementation. But if we use the cdf function in
> boost::math, we fix the problem.

We do fix this particular test case, but not the problem.  The
boost::math implementation passes the test with F = 1.35 and K = 0.39,
but fails, for instance, for F = 1.98 and K = 0.74.  The implementation
is different, so it results in different roundings and the error is
triggered by different values; but the problem is not in either
implementation, but underneath both of them.

Unfortunately, the punchline is that floating-point comparisons must
allow for rounding.  A simplistic way to do this would be to write the
test like:

if (c < (F - K) - epsilon)

with epsilon depending on the precision of the floating-point numbers on
one's machine.  A more sophisticated approach can be found by googling
for "Knuth floating-point comparison".
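[Editorially, a minimal sketch of such a tolerance-aware check; the scaling and the factor of 4 are illustrative choices, not QuantLib code:]

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// True if a >= b up to rounding: the tolerance scales with the magnitude
// of the operands, in the spirit of Knuth's approximate comparisons.
bool ge_with_tolerance(double a, double b) {
    const double eps = std::numeric_limits<double>::epsilon();
    const double scale = std::max({std::fabs(a), std::fabs(b), 1.0});
    return a >= b - 4.0 * eps * scale;
}
```

With this check, 0.96 no longer compares as strictly below 1.35 - 0.39, even though the naked < test fires.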

Later,
        Luigi


P.S.  I'm afraid I've rejected at least a couple of suggestions of yours
in the past few days.  That's just an accident---please don't be
discouraged by this and keep them coming.  I might seem a cranky old
conservative, but I do appreciate the effort you're putting into this.


>  A sketch of code below (assuming discount factor 1.0):
>
>   // Compute call value manually.
>   double d1 = std::log(F/K)/(sigma*sqrt(T)) + 0.5*sigma*sqrt(T);
>   double d2 = d1 - sigma*sqrt(T);
>   boost::math::normal N;
>   double nd1 = boost::math::cdf(N, d1);
>   double nd2 = boost::math::cdf(N, d2);
>   double c = F * nd1 - K * nd2;

--

All generalizations are dangerous, even this one.
-- Alexandre Dumas




Re: re "BS call option price lower than intrinsic value - bug #3417114"

R Yan
Thanks for the detailed reply! Indeed, this problem is of more theoretical interest than practical usefulness, so let me wrap up...
 
At first I was trying to test the BSImpliedVol routine. To do that, I fixed the stdev, enumerated a few values of F and K, computed the call option price c and the put option price p, and fed these into the BSImpliedVol routine. I then got a few error messages from the Newton solver complaining that f(xmin)*f(xmax) >= 0 for xmin = 0.0 and xmax = 3.0 (hardcoded stdev bounds). Unsurprisingly, the errors only occur for deep in-the-money strikes, where numerical precision becomes an issue. So I followed the code to see what could be done to mitigate the issue. Quite naturally, I found that if we ensure BS(F,K,"call") >= F-K, we won't get an error from BSImpliedVol. [However, I don't mean that implying vol from deep ITM options is good or even useful practice; for numerical reasons, it's better to compute the OTM option price at the same strike and imply vol from that.]
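[Editorially, the failure mode described here can be reproduced with a toy solver. The sketch below uses my own names and choices: a bisection stand-in for the Newton-based BSImpliedVol, an undiscounted Black formula, and a CDF built on std::erfc; it is not QuantLib code.]

```cpp
#include <cmath>
#include <stdexcept>

// Normal CDF via the complementary error function (C++11 std::erfc).
static double norm_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// Undiscounted Black call as a function of total stdev s = sigma*sqrt(T).
static double black_call_stdev(double F, double K, double s) {
    const double d1 = std::log(F / K) / s + 0.5 * s;
    return F * norm_cdf(d1) - K * norm_cdf(d1 - s);
}

// Bisection for the stdev that reproduces price c; throws, like the
// solver in the thread, when [lo, hi] does not bracket a root.
double implied_stdev(double F, double K, double c) {
    double lo = 1e-12, hi = 3.0;
    double flo = black_call_stdev(F, K, lo) - c;
    const double fhi = black_call_stdev(F, K, hi) - c;
    if (flo * fhi >= 0.0)
        throw std::runtime_error("f(xmin)*f(xmax) >= 0: root not bracketed");
    for (int i = 0; i < 200; ++i) {
        const double mid = 0.5 * (lo + hi);
        const double fmid = black_call_stdev(F, K, mid) - c;
        if (flo * fmid <= 0.0) { hi = mid; } else { lo = mid; flo = fmid; }
    }
    return 0.5 * (lo + hi);
}
```

When c is below intrinsic value, f(xmin) and f(xmax) have the same sign and the bracketing check throws, which is exactly the error reported above.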
 
Now, as for the requirements on the BS(F,K,sigma) function, this is really a bit of a design-goal issue. Of course it would be good to have BS(F,K) >= F-K, but is it worth tweaking numerical tricks to achieve that strictly, when a clean implementation readily achieves it to machine-epsilon precision? Well, I guess the answer is yes, if someone has nothing better to do... like me :-)
 
Your test case, which shows (double)0.96 < (double)1.35 - (double)0.39, demonstrates that testing floating-point values for exact equality is problematic. (Indeed, if the exact result cannot be represented, then either < or > will hold.) But that is a bit different from requiring BS(F,K) >= F-K, because here both sides are computed from the same input doubles.

A simple solution is to use put-call parity to compute the ITM option price from the OTM option price. That will satisfy the boundary constraint, but whether the solution is desirable needs further investigation (in particular, whether the function will be numerically smooth at K = F).

One thing that does seem desirable is to ensure that the normal CDF is monotonically increasing and bounded in [0,1]. This is not always true in the current implementation, e.g. N(10.41) < N(10.29). That's why I suggested using the boost::math library. Using boost is probably good in its own right, and we already have a thread for that, so no need to say more here.
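[Editorially, a cross-check: a normal CDF built on std::erfc (C++11) stays monotone on a grid through the region mentioned above. The names below are mine, not QuantLib's.]

```cpp
#include <cmath>

// Normal CDF via the complementary error function (C++11 std::erfc).
double norm_cdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Scan [lo, hi] with the given step and report whether the CDF
// values ever decrease.
bool cdf_is_monotone(double lo, double hi, double step) {
    double prev = norm_cdf(lo);
    for (double x = lo + step; x <= hi; x += step) {
        const double cur = norm_cdf(x);
        if (cur < prev) return false;
        prev = cur;
    }
    return true;
}
```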
 
Finally, as for your test case F = 1.98, K = 0.74 that fails with my sketch of code using boost::math: it's not surprising, since the last statement in my code, c = F * nd1 - K * nd2, is not robust. That expression is equal to c = (F-K)*nd2 + F*(nd1-nd2). When nd1-nd2 is almost zero and nd2 < 1, c may be less than F-K. That's why I would recommend using put-call parity for ITM options if we really want to ensure the bound.
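[Editorially, a sketch of that parity-based approach, with an undiscounted Black formula; the function names and the clamp are illustrative choices, not QuantLib's API.]

```cpp
#include <algorithm>
#include <cmath>

// Normal CDF via the complementary error function.
static double norm_cdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Undiscounted Black call.  ITM calls (F > K) are priced as the OTM put
// plus (F - K) via put-call parity, so c >= F - K holds by construction.
double black_call(double F, double K, double sigma, double T) {
    const double s = sigma * std::sqrt(T);
    const double d1 = std::log(F / K) / s + 0.5 * s;
    const double d2 = d1 - s;
    if (F <= K)                                   // OTM/ATM: price directly
        return F * norm_cdf(d1) - K * norm_cdf(d2);
    const double p =                              // OTM put, clamped at zero
        std::max(0.0, K * norm_cdf(-d2) - F * norm_cdf(-d1));
    return p + (F - K);                           // put-call parity
}
```

By construction the bound holds for the strikes in the thread, including the one that breaks the direct formula.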
 
Best regards.

 

Re: re "BS call option price lower than intrinsic value - bug #3417114"

Ferdinando M. Ametrano
> A simple solution is to use put-call parity to compute ITM option price from
> OTM option price.

I've always been in love with this approach, and I haven't tackled it just for lack of time (i.e. chronic laziness). As a matter of fact, for 10 years now I have kept advocating calibrating models to OTM (vega-weighted) options, and I cannot understand why such a simple and crucial point is always neglected.

That said, what seems like just a one-line patch might have so many far-reaching consequences that it has to be approached with care.

> One thing that does seem desirable is to ensure the normal CDF is
> monotonically increasing, and bounded by [0,1]. This is not always true in
> the current implementation, e.g. N(10.41) < N(10.29). That's why I suggest
> using the library from boost::math. Now that using boost is probably good in
> its own right and we already have a thread for that, no need to talk more
> here.

The QuantLib implementation is Abramowitz-Stegun. I'm not aware of better algorithms, but if any are available they could be adopted, provided that (a) they are computationally efficient and (b) they play well with Acklam's inverse CDF.

Anyway, my opinion when it comes to (inverse) CDF usage in finance is that robustness and efficiency are much more relevant than extreme tail precision.

ciao -- Nando
