I am a risk manager trying to run a Monte Carlo simulation for a large number of issues. So far I have a 4000 x 4000 correlation matrix. Using the rankReducedSqrt function with Spectral salvaging and 100% retention, the calculation took 3 hours to complete. Am I doing this in the most efficient way? Is there any way to improve performance? Thanks. The code follows:
rankReducedSqrt(*corr_mat, 4000, 1.0, SalvagingAlgorithm::Spectral);
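// arguments: input matrix, maxRank = 4000, componentRetainedPercentage = 1.0
// (i.e. keep 100% of the variance), spectral salvaging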
Hello, apologies for the delay. Did you get any feedback on this? Luigi
I don’t run calculations on large matrices, so take the comments below with some scepticism. Assuming the functions are working as intended:
1) I don't see that you gain anything from calling rankReducedSqrt with 100% retention: effectively, you keep the whole matrix that way. Try experimenting with lower values (but large enough to keep your exercise meaningful). If the eigenvalues of your matrix don't decay rapidly in value, you may have to resign yourself to a lengthy calculation.

2) 4000 x 4000 is a large matrix, and QuantLib calculates all of its eigenvalues, even when computing the rank-reduced square root. I can't remember (or never knew) the cost of the Schur decomposition algorithm it uses, but I am willing to bet that it is expensive. You could profile the Schur decomposition step and see whether it is the bottleneck; if it is, then methods that compute only the largest eigenvalues you are interested in might be competitive. A rough timing sketch follows.

Best regards, Etuka
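A minimal timing sketch along those lines (assuming a QuantLib::Matrix named corr is already populated; the retention levels and the function name timeRetentionLevels are arbitrary choices for this example):

#include <ql/math/matrixutilities/pseudosqrt.hpp>
#include <chrono>
#include <iostream>

using namespace QuantLib;

// Time rankReducedSqrt at a few retention levels. Since the full
// eigendecomposition is performed regardless of the retention level,
// similar timings across levels would confirm that the decomposition
// itself, not the retained rank, is the bottleneck.
void timeRetentionLevels(const Matrix& corr) {
    for (Real retention : {0.90, 0.95, 0.99, 1.0}) {
        auto t0 = std::chrono::steady_clock::now();
        Matrix b = rankReducedSqrt(corr, corr.rows(), retention,
                                   SalvagingAlgorithm::Spectral);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << "retention " << retention
                  << ": rank " << b.columns() << ", "
                  << std::chrono::duration<double>(t1 - t0).count()
                  << " s\n";
    }
}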
Thanks for your comment. The reason I was using 4000 issues is that we are trying to run the Monte Carlo simulation on most traded issues; potentially the number would be much larger than 4000. For the rank reduction, I tried a retention of 0.95, but saw little improvement.
So right now I am considering some dimension reduction, such as PCA.
Hi
I guess the number of significant non-zero eigenvalues is much smaller than 4000. There are good routines available for calculating only the largest n eigenvalues/eigenvectors, and these perform much better than computing all eigenvalues when most of them are (close to) zero. You might want to try LAPACK or ARPACK; as far as I know, QuantLib does not offer these algorithms. A sketch using LAPACK follows. Regards, Klaus
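For illustration, a minimal sketch of this approach using LAPACKE's dsyevr (assuming LAPACK/LAPACKE is installed and linked; the function name largestEigenpairs and the row-major buffer layout are just choices for this example, and the correlation data would first have to be copied out of the QuantLib::Matrix into a plain array):

#include <lapacke.h>
#include <vector>
#include <stdexcept>

// Compute only the k largest eigenvalues (returned in ascending order in w)
// and the matching eigenvectors (columns of z, an n x k row-major array) of
// the symmetric matrix a (n x n, row-major). Note that dsyevr overwrites a.
void largestEigenpairs(std::vector<double>& a, int n, int k,
                       std::vector<double>& w, std::vector<double>& z) {
    w.assign(n, 0.0);
    z.assign(static_cast<std::size_t>(n) * k, 0.0);
    std::vector<lapack_int> isuppz(2 * k);
    lapack_int m = 0;
    lapack_int info = LAPACKE_dsyevr(
        LAPACK_ROW_MAJOR, 'V', 'I', 'U', n, a.data(), n,
        0.0, 0.0,      // vl, vu: unused when range is 'I'
        n - k + 1, n,  // il, iu: select the k largest eigenvalues
        0.0,           // abstol: use the default tolerance
        &m, w.data(), z.data(), k, isuppz.data());
    if (info != 0)
        throw std::runtime_error("LAPACKE_dsyevr failed");
}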
Hello Ian. The QuantLib rank reduction algorithm is no good for you, because it calculates all of the eigenvalues (almost certainly what you don't want), whether or not you discard them when you come to simulate. This is most likely why you see little difference in performance when you apply the rank reduction. It is also rather hard to tell how many eigenvalues are discarded for a given retention factor. As for PCA: it amounts to finding the largest, most contributing eigenvalues such that the explained variance is not too small, which is exactly what QuantLib's calculation is already trying to achieve; on its own it won't change the cost unless the eigenvalues are computed with a partial routine, as Klaus suggests. A sketch of assembling the reduced pseudo-root from the largest eigenpairs follows. Etuka
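To connect the two pieces of advice: once the k largest eigenpairs are available (e.g. from the dsyevr sketch above), assembling the reduced pseudo-root is cheap. A hedged sketch, reusing the hypothetical buffers from that example; note that QuantLib's rankReducedSqrt additionally rescales the rows so that the diagonal of B*B^T matches the original matrix, which is omitted here:

#include <algorithm>
#include <cmath>
#include <vector>

// Build the n x k pseudo-root B = Z * diag(sqrt(lambda)), so that
// B * B^T approximates the original correlation matrix. This is the same
// kind of object rankReducedSqrt returns, obtained without ever computing
// the remaining n - k eigenvalues.
std::vector<double> reducedPseudoRoot(const std::vector<double>& w,  // k eigenvalues
                                      const std::vector<double>& z,  // n x k eigenvectors, row-major
                                      int n, int k) {
    std::vector<double> b(static_cast<std::size_t>(n) * k, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < k; ++j)
            b[i * k + j] = z[i * k + j] * std::sqrt(std::max(w[j], 0.0)); // clip tiny negative eigenvalues
    return b;
}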