Understanding and Managing Model Risk:
A Practical Guide for Quants, Traders and Validators
by Massimo Morini
Copyright © 2011, John Wiley & Sons, Ltd.
Part II
Snakes in the Grass: Where Model Risk Hides
This book is devoted to the management of model risk in the use of valuation models. However, valuation models are not used only for valuation. They are also used for another purpose which is at least equally important for a financial institution, and very difficult: hedging. In spite of this, the predominant part of the literature on financial models regards only pricing, and the limited existing literature on hedging is mainly theoretical, attempting to give a rigorous foundation to hedging but bearing little relation to what hedging is in practice. Thus hedging remains a very heuristic discipline, more art than science, and it is very difficult to find examples of model risk in hedging in the literature.
Notwithstanding this, hedging is a crucial topic when talking of model risk and model validation. Hedging enters model risk management in two ways:
1. Since the birth of the modern option market, observation of the performance of a model in replicating a derivative has been a traditional technique used in the validation of models for valuation. Hedging enters the picture since replicating a derivative means building a hedging strategy that yields the same profits and losses (P&L) as holding the derivative.
2. The real hedging strategy put in place by a trader always makes some use of valuation models, and is an activity that can generate important losses for a bank. Therefore many institutions also include in model validation the validation of the hedging strategies implemented by traders.
While in point 1) hedging enters model validation as a tool for assessing the quality of a
valuation, in 2) the hedging strategy actually implemented by traders with the help of some
models is the focus of a validation that can potentially be separate from the validation of the
same models when used only for pricing. In the following we consider both ways in which
hedging can affect model validation.
The first point, tackled in Section 5.2, relates to what most risk management units call the P&L Explain Test, where the pricing model is applied to a set of historical data to build a hedging portfolio and perform the exercise of replicating a derivative on this dataset. From a less ambitious perspective, this test is performed just to have an example of how the model would have worked in the past, making clearer which market movements have generated the trader's P&L and double-checking that all technicalities for the management of a derivative have been taken into account. In a more ambitious application, this is a backtest that can confirm or belie the correspondence between the pricing model and the market reality.
Here we try to reason on the real meaning of this kind of test, taking into account both the theory and the practice. First we describe how a prototypical P&L exercise works, and see how under ideal market conditions this P&L test, which only requires the observation of the hedging behaviour of a model in the past, could really reveal whether the model is effective for pricing and consistent with reality. This is based on the principle that the true price of a derivative is the cost of its self-financing hedging strategy, as shown by Black and Scholes (1973) in their seminal paper. If one believes that results obtained in the P&L backtesting also apply to the relevant future, model validation may be considered successfully completed when a P&L Explain test is satisfactorily passed.
Then we dash the extremely optimistic expectations just created by showing that the market conditions required for a P&L Explain test to be really successful are so idealized that they do not apply to any real model in any real market, and that the lack of these conditions is sufficient to annul much of this test's theoretical meaning. In spite of this, we show that the test remains interesting and potentially useful, particularly to compare models and to rank them. In fact, a worse performance by a model in a P&L Explain test, compared to an alternative model, may mean that the underperforming model requires more frequent recalibration, is less robust when used in the real world, and is rather unstable, overreacting to small market changes, all features that make a model less desirable for both pricing and hedging.
The analysis of Section 5.2 has an important implication, among others: even when a P&L analysis can be performed and is of some use in assessing a model, the hedging test typical of a P&L analysis bears little relation to a real hedging strategy. The crucial difference is that in a P&L analysis the model must never be recalibrated during the life of a derivative, while in the real world hedging is performed with a model which is regularly recalibrated. Real hedging strategies take this fact into account, and therefore do not appear consistent with the theoretical hedging strategy suggested by the valuation model, which clearly does not consider any recalibration. Thus the assessment of a real hedging strategy, called Hedge Analysis, is related to the validation of the pricing model but different from it.
In Section 5.3 we analyze how a real hedging strategy works, how it relates to the underlying valuation model, and how it can be assessed, by means of a practical case study regarding the behaviour of local vs stochastic volatility models in hedging plain-vanilla options. This may appear a simple issue, but it has been the subject of fierce debate for a long time, and is by no means resolved. Chapter 2 reported one piece of research that shows that there are actual and important differences between local and stochastic volatility models; the results we see here in hedging demystify a little, at least in regard to hedging performances, the debate about which of the two modelling frameworks "is better". In fact, we show that, in contrast to what one may have found in the literature until quite recently, both local and stochastic volatility models have an undesirable hedging behaviour, even when they are assumed to give an acceptable valuation. We also show that, for both models, this undesirable behaviour can be simply corrected with heuristic adjustments which are trivially implemented by most traders.
Confirming what we guessed in the theoretical analysis of Section 5.2, the practical case study shows that the analysis of a real hedging strategy is not trivial, and is not redundant even after the model used in hedging has been validated for pricing. In fact, first of all it is difficult to understand the implications of model assumptions on hedging, a topic on which quants expend much less effort than on the analysis of pricing. Secondly, real hedging strategies put in place by traders have a high likelihood of differing from the model-implied hedging suggested by the theory. Even if traders are happy with the price coming out of a model, they may be unhappy with its dynamic behaviour and apply a hedging strategy based on ad hoc adjustments and tricks that can be in absolute contradiction with the assumptions underlying the model. In this case the hedging strategy must be validated and analyzed as something different from the validation of the pricing model, and we may find that the real hedging strategy implemented uses the model in a way, not implicit in the theory, that makes hedging much more effective.
There is one final lesson to be learned from this story: even the inventors of a model can be misled about its hedging behaviour, even if they are among the best quants that ever trod the boards of a trading floor... a useful warning for all of us, model developers and model users alike.

5.2 Hedging and Model Validation: What is Explained by P&L Explain?
In this section I do not provide any new results. I would just like to spark some reasoning on
quite old issues based on some informal considerations of my own and on many discussions
with practitioners and researchers.
The question to be tackled is: can we use the hedging performance of a model applied to a derivative to assess the goodness of the model itself? We assume that some complex, illiquid derivative is hedged according to some pricing model, and that all risk factors that are considered relevant are hedged. At time $t_0$ the derivative has a price
$$V_{t_0} = f\left(A_{t_0}, \pi\right),$$
where the vector
$$A = \left(A^1, \ldots, A^{N_A}\right)$$
indicates the prices of the underlying instruments depending on the risk factors which affect pricing. These instruments are also the ingredients of the hedging portfolio, while the vector
$$\pi = \left(\pi^1, \ldots, \pi^{N_p}\right)$$
indicates the parameters. The function $f(\cdot)$ is the pricing function for the derivative in the valuation model.
We want to perform a P&L Explain analysis of this model applied to this derivative. We set $A$ at the current market value $A_{t_0}$, we calibrate the parameters and then we compute all the sensitivities of the derivative
$$w^1_{t_0} = \frac{\partial V_t}{\partial A^1_t}\bigg|_{t=t_0}, \quad \ldots, \quad w^{N_A}_{t_0} = \frac{\partial V_t}{\partial A^{N_A}_t}\bigg|_{t=t_0}.$$
We construct a replicating portfolio based on these sensitivities, with value
$$\Pi_{t_0} = \sum_{i=1}^{N_A} w^i_{t_0} A^i_{t_0} + B_{t_0},$$
where $B$ is an amount kept in cash that guarantees equality of the portfolio value $\Pi_{t_0}$ with the model value of the derivative $V_{t_0}$:
$$B_{t_0} = V_{t_0} - \sum_{i=1}^{N_A} w^i_{t_0} A^i_{t_0}.$$
At $t_1$ we compute the price of the derivative $V_{t_1}$, and we compare this change in price
$$\Delta V_{t_1} = V_{t_1} - V_{t_0}$$
with the change in value of the hedging portfolio
$$\Delta \Pi_{t_1} = \sum_{i=1}^{N_A} w^i_{t_0}\, \Delta A^i_{t_1} + r B_{t_0}\, \Delta t_1, \qquad \Delta x_{t_i} = x_{t_i} - x_{t_{i-1}}.$$
We repeat this for all dates $t_i \in \bar{T} = \{t_1, \ldots, t_m\}$. We have negative evidence for the validity of the model when
$$Err = \sum_{t_i \in \bar{T}} \left(\Delta \Pi_{t_i} - \Delta V_{t_i}\right)^2$$
is large, and positive evidence when it is small. What "small" and "large" mean depends on your view about the meaning of this test. You will see that a "fundamentalist view" may consider $Err \approx 0$ a feasible target and be very satisfied with the model only when this target is at least approached, while a "sceptical view" would not expect $Err \approx 0$ to be a feasible target and, if it is reached, will conclude that it was a matter of chance or that it was obtained by construction. A moderate view will probably consider this test good enough for a comparison, so that a model will be considered better than another, limited to this test, when it gets a lower level for Err, with no view on an absolute level of Err to be taken as a target.
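To make the mechanics concrete, the prototypical exercise above can be sketched in a few lines of code. This is only a schematic illustration under simplifying assumptions (a single underlying, model prices and sensitivities already computed at each date, a constant rate); the function name and inputs are illustrative, not taken from any library.

```python
import numpy as np

def pnl_explain_err(V, A, w, r=0.0, dt=1.0 / 252.0):
    """Err of a P&L Explain test for a single-underlying derivative.

    V : model prices V_{t_i} of the derivative on the dates t_0 .. t_m
    A : market values A_{t_i} of the hedging instrument on the same dates
    w : model sensitivities w_{t_i} = dV/dA computed at each date
    """
    V, A, w = map(np.asarray, (V, A, w))
    B = V - w * A                                  # cash making Pi_{t_i} equal V_{t_i}
    dPi = w[:-1] * np.diff(A) + r * B[:-1] * dt    # one-step hedge-portfolio change
    dV = np.diff(V)                                # one-step change of the model price
    return np.sum((dPi - dV) ** 2)

# toy check: if V is exactly linear in A and w is the exact slope, Err = 0
A = [100.0, 101.0, 99.5, 100.25]
V = [2 * a + 5 for a in A]
print(pnl_explain_err(V, A, [2.0] * 4))   # 0.0 for this exactly-linear toy case
```

A small Err in this exercise is, as discussed above, at best a necessary condition for the model to be adequate, never a verification.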
5.2.1 The Sceptical View
A sceptic will tell you that, even if the test works, the results mean nothing. I have heard
this view expressed in the market many times, among both traders and quants. One common
argument is: this test never compares the price $V_t$ from the model with a market price, since it is usually applied to complex derivatives for which there is no liquid market. Everything is computed within the model: the derivative price and the sensitivities. If we have a function $V_t = f(A_t)$ (assuming $N_A = 1$ for simplicity), what is the surprise in finding that
$$dV_t \approx \frac{\partial f(A_t)}{\partial A_t}\, dA_t \qquad (5.1)$$
over a short time interval? According to the sceptic, this is just a "first-order Taylor expansion" and there is no surprise if (5.1) holds approximately. A real validation based on market data can only be obtained by comparing $f(A_t)$ with some real market price for the derivative.
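The sceptic's point can be checked numerically with any smooth pricing function: the first-order term explains the price change up to a residual of order $(\Delta A)^2$, whatever model stands behind $f$. The quadratic toy function below is purely illustrative and has no model content.

```python
def f(A):
    return A ** 2 / 100.0            # toy smooth "pricing function"

def sensitivity(A, h=1e-6):
    return (f(A + h) - f(A - h)) / (2 * h)   # numerical first derivative

A0 = 100.0
for dA in (0.1, 1.0, 10.0):
    residual = f(A0 + dA) - f(A0) - sensitivity(A0) * dA
    print(dA, residual)              # residual grows like dA**2 (here dA**2 / 100)
```

For small daily moves the residual is negligible, exactly as the sceptic predicts, regardless of whether $f$ is a good model of anything.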
Another argument is: a model has the purpose of defining a probability distribution for the possible states of the world $\omega_1, \omega_2, \ldots$ in order to set the pricing probability measure $P$ and be able to compute
$$P(\omega_1), P(\omega_2), \ldots$$
The backtesting of the hedging strategy seen above is instead based only on the observation of one $\omega_i$. What can it tell us about the measure $P$? According to the sceptic, nothing.
In these sceptical positions one element is usually forgotten: that, to be valid, a P&L Explain analysis should take care to ensure that the hedging strategy considered is self-financing.
5.2.2 The Fundamentalist View and Black and Scholes
Now let us look at the test from what we shall call a fundamentalist point of view, where
the fundamentalist here is a hypothetical market operator who strongly believes in the typical
assumptions of the theory of mathematical ?nance, basically those that underlie the Black
and Scholes (1973) seminal result. I have never met a pure fundamentalist but I have met
Hedging and Model Validation: What is Explained by P&L Explain?
some mathematicians so little used to challenge the foundations of our job, and some traders
so much in awe of mathematical assumptions, that they ended up having a fundamentalist
To understand this position, go back to the replication argument on which the Black and Scholes formula is based. Consider a European derivative with an initial cost and a payoff at maturity. If one is able to construct a replication strategy that is self-financing, in the sense that no money is added to it and no money is withdrawn from it until maturity, and the strategy replicates the payoff of the derivative at maturity in all states of the world, the initial cost of the strategy must be the same as the price of the derivative, if we want the market to remain arbitrage-free. Otherwise one could short the strategy and buy the derivative, or the other way around, and get a free lunch.
Saying that the above strategy must be constructed as self-financing does not mean that there are no gains and no losses. There will be gains and losses in the process of readjusting the vector of weights (sensitivities) $w_t$ as time goes on, but these gains and losses will simply not be taken away; they will increase or decrease the cash amount $B$, which may become negative (we borrow cash).
In this framework we build the replicating portfolio as a hedging portfolio based on model sensitivities in order to replicate each instantaneous variation of the value of a derivative (recall that the Black and Scholes framework is considered for continuous processes in continuous time). We never withdraw/add any money from/to the hedging portfolio. If the model we use corresponds to the true process of the underlying risk factors, we expect that, in any state of the world, the cost/benefit at maturity $T$ of closing the positions in all $A^i$ and in $B$ equals the payoff. In an arbitrage-free market this also implies that the value of the replication strategy equals the value of the derivative at any time $0 \le t \le T$; otherwise, again, one could go long/short in the derivative and short/long in the strategy and create an arbitrage opportunity.
This implies
$$\Pi_t - V_t = 0, \quad \forall t,$$
and therefore $Err = 0$. When we are performing the P&L Explain test on some historical data, we are testing if this happens for one specific state of the world. Getting $Err = 0$ is a necessary condition (although not sufficient, since we are assessing only one specific state of the world) for the model to be right, namely to correspond to the true process of the underlying risk factors.
This is what happens when the replicating strategy is based on the sensitivities of the Black and Scholes formula and the underlying is supposed to move as in Black and Scholes,
$$A^1 = S,$$
$$dS = \mu S\, dt + \sigma S\, dW_t,$$
$$V_t = B\&S\left(S_t, K, r, \sigma^2 (T-t)\right),$$
where $r$ is the constant risk-free rate and $B$ is the cash account. In this case a delta-hedging strategy is enough to replicate the option, since the only stochastic factor is the unique underlying. According to Ito's Lemma
$$dV_t = \frac{\partial V_t}{\partial t}\, dt + \frac{\partial V_t}{\partial S}\, dS + \frac{1}{2}\frac{\partial^2 V_t}{\partial S^2}\, \sigma^2 S^2\, dt = \mathrm{Theta} \times dt + \mathrm{Delta} \times dS + \frac{1}{2}\, \mathrm{Gamma} \times \sigma^2 S^2\, dt.$$
To match the sensitivity to the underlying at the first step, the replicating portfolio must contain Delta stocks, and to match the sensitivity to "time" it must contain
$$\left(\mathrm{Theta} + \tfrac{1}{2}\, \mathrm{Gamma} \times \sigma^2 S^2\right) \frac{1}{r}$$
in cash. Is it true that this strategy has the same value as the option? For this to be true we need the amount of cash at time zero to be
$$B\&S\left(S_t, K, r, \sigma^2(T-t)\right) - \mathrm{Delta} \times S,$$
thus we need
$$B\&S\left(S_t, K, r, \sigma^2(T-t)\right) - \mathrm{Delta} \times S = \left(\mathrm{Theta} + \tfrac{1}{2}\, \mathrm{Gamma} \times \sigma^2 S^2\right) \frac{1}{r}.$$
Since in the Black and Scholes formula both sides equal
$$-e^{-r(T-t)} K N(d_2),$$
the condition is satisfied. Rebonato (1998) shows in practice that if one simulates the Black and Scholes model with a very small simulation step and then performs a hedging strategy (or
and Scholes model with a very small simulation step and then performs a hedging strategy (or
equivalently a P&L Explain test) of a call using the Black and Scholes sensitivities with a very
small rebalancing period, the strategy really replicates the payoff in all scenarios and always
has the same value as the option priced with the Black and Scholes formula.
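Rebonato's experiment is easy to reproduce: simulate Black and Scholes paths, delta-hedge a call with the Black and Scholes delta, and check that the dispersion of the terminal replication error shrinks as the rebalancing frequency grows. The sketch below assumes zero rates and purely illustrative parameters; it is not Rebonato's original code.

```python
import math
import numpy as np

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, sigma, tau):
    d1 = (math.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    return S * N(d1) - K * N(d1 - sigma * math.sqrt(tau))

def bs_delta(S, K, sigma, tau):
    d1 = (math.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    return N(d1)

def hedge_error_std(n_steps, n_paths=1000, S0=100.0, K=100.0,
                    sigma=0.2, T=1.0, seed=0):
    """Std dev of the terminal replication error of a delta-hedged call
    (zero rates) when the Black-Scholes world is rebalanced n_steps times."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    errs = np.empty(n_paths)
    for p in range(n_paths):
        S = S0
        delta = bs_delta(S, K, sigma, T)
        cash = bs_call(S, K, sigma, T) - delta * S     # self-financing start
        for i in range(1, n_steps):
            S *= math.exp(sigma * math.sqrt(dt) * rng.standard_normal()
                          - 0.5 * sigma**2 * dt)
            new_delta = bs_delta(S, K, sigma, T - i * dt)
            cash -= (new_delta - delta) * S            # rebalancing cost into cash
            delta = new_delta
        S *= math.exp(sigma * math.sqrt(dt) * rng.standard_normal()
                      - 0.5 * sigma**2 * dt)
        errs[p] = delta * S + cash - max(S - K, 0.0)   # replication error at T
    return float(errs.std())

print(hedge_error_std(4), hedge_error_std(64))  # dispersion shrinks with frequency
```

Along each path the strategy is self-financing by construction: rebalancing only moves money between stock and cash. With real data one cannot expect convergence to zero, but, as argued in the text, the error should at least decrease with the hedging frequency.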
This reveals the meaning of the test a bit better. The test is not trivial, since we are not only verifying the first-order derivatives, as in the sceptic critique, but we also require the self-financing condition. The brilliant element of the P&L Explain test lies in noticing that in the Black and Scholes framework we can actually use one single state of the world, which is given by our historical data, to "falsify" the model, namely to show that the model cannot be the correct one. We have only to perform a P&L Explain test: if even in a single state of the world the model-based self-financing replication strategy does not replicate the derivative price, the model cannot be the right one. If, instead, the test is successful, we can't claim we have "verified" the model because we should test all possible states of the world, but we have certainly obtained strong confirmation of its validity. The question now is: do we really live in a Black and Scholes framework?
5.2.3 Back to Reality
We already know that we do not live in the "wonderful world of Black and Scholes". But if in pricing the discrepancies between such a world and the reality are usually neglected, they seem sufficient to strip the P&L Explain test of a lot of its power:
1. In reality we work in discrete time with transaction costs. Thus we never really observe "dS" or "dV", and in any case we cannot replicate them at no cost.
2. The concept of perfect replication so crucial to Black and Scholes requires the market to
be complete. All risk factors need to be hedgeable, which almost by definition is not the case when we are speaking of a complex derivative, as we saw in Section 1.2.2. Some risk factors may not really be accessible, meaning that there are no liquid derivatives depending on them.
3. Some other risk factors, if not all risk factors, may not be pure diffusions. With jumps we cannot even attempt perfect hedging through standard sensitivities. The perfect replication idea is the privilege of diffusion models in continuous time, like Black and Scholes. Only when we observe continuous changes of the underlying in continuous time can we hope for a real replication. As Cont and Tankov (2004) put it, "when moving from diffusion-based complete market models to more realistic models, the concept of replication does not provide the right framework for hedging and risk management".
These three points show that the interpretation of a P&L Explain test given by a fundamentalist view clashes dramatically with reality. The relevance of a P&L test in the real world is
much lower than it would be if we lived in a Black and Scholes world. This is the answer to
one of the questions we raised at the beginning.
Now let us look at the other question: is the hedging test required by a P&L Explain test
similar to the hedging strategy put in place by real-world traders? This cannot be the case
because of the three points above and, even more importantly, because of the next one:
4. In a rigorous P&L Explain test inspired by the Black and Scholes framework the model should never be recalibrated during the life of an option. One updates the $A_t$ but the set of parameters $\pi$ must remain the same as established at the beginning. Today no model is calibrated to a single point in time; all models are regularly recalibrated, particularly when they are used for dynamic hedging. See Buraschi and Corielli (2005) for interesting results on the justifications for this standard practice. What matters to us now is that the issue of recalibration is the crucial element that shows how the rigorous P&L Explain test is not a test of the model as it is really used in hedging, and that we need a different test called "Hedge Analysis".
The above points suggest that a fundamentalist view of a P&L test is not justified. However, unlike the sceptical view, we have seen that in an ideal world P&L Explain would be a good way to assess model validity. We can add that the measurement and the analysis of the test quantity
$$Err = \sum_{t_i \in \bar{T}} \left(\Delta \Pi_{t_i} - \Delta V_{t_i}\right)^2 \qquad (5.2)$$
may retain a practical interest, and, abandoning the dogmatic requirements of a rigorous P&L test, can still be a useful part of model validation.
Consider the above point 1. There we showed that for an effective replication the theory prescribes hedging to be continuous, while in practice hedging and rebalancing are done in discrete time. Even in a numerical simulation of the test like the one performed by Rebonato (1998), where the "market data" are in reality generated by a Black and Scholes Monte Carlo, there is the same issue, since the Monte Carlo is performed in discrete time. Rebonato tackles the issue as follows: the tests are considered successful if the hedging error reduces as we increase the hedging frequency. Then he applies the idea to the numerically generated data and finds that, along all Monte Carlo paths, namely all possible scenarios of his artificial world, as the hedging frequency increases the derivative and the replication strategy actually converge.
A similar test may also be useful with real market data, even with no illusion about perfect convergence. In real hedging we try at $t_0$ to hedge the future price movement
$$V_{t_1}\left(A^{mkt}_{t_1}\right) - V_{t_0}\left(A^{mkt}_{t_0}\right) \qquad (5.3)$$
by the movement of the hedge:
$$\frac{dV}{dA}\bigg|_{A = A^{mkt}_{t_0}} \left(A^{mkt}_{t_1} - A^{mkt}_{t_0}\right). \qquad (5.4)$$
We know that the real change $A^{mkt}_{t_1} - A^{mkt}_{t_0}$ will surely be different from the infinitesimal change $dA$ that we use in computing our greeks and hedges, but we do not know how different they are. The analysis of Err in (5.2) allows us to measure how robust our sensitivities, theoretically constructed for infinitesimal changes of the underlying, are to real-world daily changes. If Err is too big, it may be an indication that the time-step in our hedging strategy is not adequate: we should reduce the time-step or use higher-order sensitivities in hedging (gamma hedging).
Consider the above point 4. Since traders actually recalibrate the model when they hedge, they are trying to hedge the future price movement
$$V^{mod(t_1)}_{t_1}\left(A^{mkt}_{t_1}\right) - V^{mod(t_0)}_{t_0}\left(A^{mkt}_{t_0}\right)$$
by
$$\frac{dV^{mod(t_0)}}{dA}\bigg|_{A_{t_0} = A^{mkt}_{t_0}} \left(A^{mkt}_{t_1} - A^{mkt}_{t_0}\right),$$
where in the notation I have pointed out that the model actually changes between $t_0$ and $t_1$, since the recalibration of parameters at $t_1$ moves us from the model $mod(t_0)$ to the model $mod(t_1)$. Instead, the sensitivity computed at $t_0$ is fully based on $mod(t_0)$.
If we observe Err in the hedging strategy based on (5.4), obtaining a quantity Err which is small (or smaller than the quantity obtained with an alternative model) usually indicates that we had on average a small difference between $mod(t_0)$ and $mod(t_1)$, since only in such a case is it likely that the sensitivity based on $mod(t_0)$ matches the realized price change. To an extent the model is making "good short-term predictions about future prices". As an experienced trader once told me, we commonly say that valuation models are not used to make predictions. This is not true when they are applied to hedging: here the capability to predict short-term movements is crucial. In other words, the test can show that on average recalibration does not change the model dramatically. This can be seen as an assessment of correct model specification and model stability.
5.2.4 Remarks: Recalibration, Hedges and Model Instability
Some observations are now in order. First, saying that the model requires recalibration implies
that in
Vt = f (A, ? ) ,
there are also some parameters that we do not hedge against but that we need to recalibrate,
thus we have parameters ? that change in time even if the model assumes they do not change.
We should write
= f At0 , ?t0 and Vtmod(t
= f At1 , ?t1 ,
where ?t is not a parameter that the model assumes to be a predictable function of time,
like ? (t) in Black and Scholes with time-dependent volatility, but a model parameter which
is changed, in an unpredictable way, every time the model is recalibrated. In a situation of
this kind, traders often decide also to hedge against movements of $\pi$. This is the case of the real-world use of the Black and Scholes model, where the volatility $\sigma$ can be deterministic in the model but actually changes due to recalibration, so that vega-hedging is performed for immunization from such changes. We need to add one asset $Z$ such that its price is a function of the parameter, $Z_t = g(\pi_t)$, so that
$$\pi_t = g^{-1}(Z_t).$$
In the Black and Scholes example, we would have $\pi_t = \sigma_t$ and the additional asset would be an option, priced with its own Black and Scholes formula, corresponding to $g(\cdot)$. Now we can write the price of the derivative we are hedging as
$$f(A_t, \pi_t) = f\left(A_t, g^{-1}(Z_t)\right) =: \tilde{f}(A_t, Z_t).$$
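In the Black and Scholes case, $g$ is the option price as a function of the volatility parameter and $g^{-1}$ is the familiar implied-volatility inversion, which in practice is done numerically. A minimal sketch, assuming zero rates and using bisection (possible because the call price is increasing in $\sigma$); the strike, maturity and quoted price are illustrative.

```python
import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g(sigma, S=100.0, K=100.0, tau=1.0):
    """Z = g(pi): Black-Scholes call price (zero rates) as a function of sigma."""
    d1 = (math.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * math.sqrt(tau))
    return S * N(d1) - K * N(d1 - sigma * math.sqrt(tau))

def g_inverse(Z, lo=1e-4, hi=5.0, tol=1e-10):
    """pi_t = g^{-1}(Z_t): recover sigma from the observed option price Z
    by bisection, exploiting that g is increasing in sigma (positive vega)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < Z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(g_inverse(g(0.20)))   # recovers 0.20: the daily "recalibration" of pi to Z
```

Performing this inversion every day against the quoted price of $Z$ is exactly the recalibration of a parameter that the model itself treats as constant.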
In this case, should one even try to find a cost of the hedging strategy equal to the price of the option in the model, namely $Err = 0$? Not at all. In fact, according to the model, the parameter $\pi$ is not stochastic and there is no hedge to perform on it. Since the model price is always the price of the self-financing hedging strategy, it cannot embed the cost of this "out-of-the-model" hedging. This has, firstly, practical consequences for risk managers who need to compute reserves and, secondly, can imply instability problems, as we shall see.
Hedges vs Model Reserves
The strategy that also hedges against movements of $\pi$ will have a cost different from the one predicted by the model, and this cost appears a better estimation of the true price of the derivative because it takes into account a material risk neglected by the model, the risk associated with changes of $\pi$. Should this be considered a model reserve? According to experienced risk managers it should not, because the hedging of $\pi$ is an expected cost while model reserves relate to losses which are feared but not expected. Thus it makes sense to charge our estimate of the amount
$$\text{Cost of hedging} - V^{mod(t_0)}_{t_0} \qquad (5.5)$$
to the buyer of the derivative, as we do with expected costs, rather than setting up a model reserve.
Now pay attention to the following consequence of this observation. One natural outcome of the above situation, where there is regular hedging against stochastic movements of a parameter that the model treats as deterministic, is that an alternative model with stochastic $\pi$, which we call $mod^*$, will be considered. For example, if Black and Scholes is our base model we may move to the Heston model where volatility is stochastic. Do we expect to find again a difference
$$\text{Cost of hedging} - V^{mod^*(t_0)}_{t_0}$$
between the cost of hedging and the model price as large as (5.5)? No, because a model with stochastic $\pi$ gives a price $V^{mod^*(t)}_t$ that already incorporates $\pi$ hedging, thus the above hedge charge may disappear completely. But it is also possible that new hedge charges may be needed, even if they will probably be smaller. Consider, for example, the above case of the passage from Black and Scholes to Heston. Now there will be other parameters, such as the volatility of volatility, that are recalibrated even if the model assumes them to be constant... the "true" model does not exist; we have only the possibility to move to "better" models.
Model instability
Additionally, when we use the model $mod$ where $\pi$ should be flat or deterministic, but in practice we hedge against movements of $\pi_t$ by buying and selling $Z_t$, we have to recalibrate $\pi_t$ to $Z_t$ day-by-day. This may not be as simple an exercise as was indicated above,
$$\pi_t = g^{-1}(Z_t).$$
In general recalibration is performed via optimization, so we have
$$\pi_t = \arg\min_{\pi} \left(Z_t - Z^{mod}_t\right)^2,$$
with the further complication that $\pi_t$ and $Z_t$ may actually be vectors. Thus we can have an ill-posed calibration problem, such that even small changes from $Z_{t_i}$ to $Z_{t_{i+1}}$ can lead to large changes between $\pi_{t_i}$ and $\pi_{t_{i+1}}$.
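A toy numerical example makes the ill-posedness tangible. Suppose the model prices of two calibration instruments depend almost identically on two parameters, so that the least-squares calibration amounts to solving a nearly singular linear system. The numbers below are purely illustrative.

```python
import numpy as np

# Toy linear "calibration": two instruments whose model prices depend on two
# parameters in almost the same way, Z^mod = M @ theta. The calibration
# arg min |Z - Z^mod|^2 then amounts to solving an ill-conditioned system.
M = np.array([[1.00, 0.99],
              [0.99, 0.98]])          # nearly collinear sensitivities

def calibrate(Z):
    theta, *_ = np.linalg.lstsq(M, Z, rcond=None)
    return theta

Z_day1 = np.array([1.990, 1.970])
Z_day2 = Z_day1 + np.array([0.001, -0.001])   # tiny move in the quotes

theta1, theta2 = calibrate(Z_day1), calibrate(Z_day2)
print(np.linalg.norm(Z_day2 - Z_day1))    # quote change: about 0.0014
print(np.linalg.norm(theta2 - theta1))    # parameter jump: about four orders larger
```

Day-to-day, such a model cannot explain its own hedges: the parameter jump swamps the price effect the hedge was supposed to capture.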
In this case it is particularly difficult for the model to explain its own hedges, i.e. getting (5.3) close to (5.4). In fact the hedges are computed with $mod(t_i)$ but the realized change of value is also computed using $mod(t_{i+1})$, which can be very different from $mod(t_i)$ due to the calibration problem. A large Err can be an indication of this problem, which is often also detected through the following tests:
1) test of the variability of parameters when the model is calibrated through time. The higher the variability, the more unstable the model.
2) test of the evolution of the derivative prices in the future according to the model. The more these predicted prices differ from those that will be revealed in the market in the future, the more unstable the model.
The issue of model stability is touched on and exemplified in Chapter 9.
5.2.5 Conclusions: from Black and Scholes to Real Hedging
In this section we saw how the P&L Explain test inspired by the Black and Scholes replication would be a solid and general assessment of the correspondence of the model with reality only if the reality conformed to the fundamental assumptions of Black and Scholes, such as continuous-time trading, no transaction costs, a complete market and purely diffusive stochastic processes. Since these do not hold in practice, P&L Explain loses a lot of its power.
Additionally, we have seen that a rigorous P&L Explain test must be performed with a model whose deterministic parameters are never recalibrated. This opens a relevant gap between P&L Explain and the reality of hedging, which is performed with regular recalibration.
Yet, as we showed, it can be useful in model validation to compute the P&L of a hedging exercise, even if this hedging exercise considers recalibration and discrete-time hedging. A lower P&L volatility indicates that the model does not require too much recalibration, which is a sign of good dynamic behaviour, and that the model sensitivities, constructed for instantaneous changes of the underlying, are robust to real-world discrete-time changes. Obtaining a good performance in these tests is a positive feature of a model, both for hedging and for pricing.
However, we are still not speaking of the validation of a real hedging strategy. In fact, for
all the above tests we are still assuming that the traders compute the sensitivities consistently
with the model, without modifying the sensitivity ad hoc. The sensitivity used to construct the
hedging portfolio is still the first derivative (analytic or numeric) of the price computed within the model. In reality, traders know that the model will be recalibrated tomorrow and that this is not taken into account by the first derivative computed today. Therefore they can decide to adjust the sensitivity. This is what we explore and explain in the next section.

5.3 From Theory to Practice: Real Hedging
This analysis takes as its starting point one of the most popular pieces of research in mathematical finance. I refer to the first sections of the Hagan et al. (2002) paper that introduces the SABR model. This paper does more than introduce a model: in its first sections it performs an analysis of the requirements that an option model should satisfy in its hedging behaviour, and of how the most common option models actually behave with respect to these requirements. In particular, it shows analytically the behaviour of local volatility models. This part of the paper had a profound influence on the trading world.
In option markets characterized by smile and skew there is the issue, recognized for
many years, of the shadow-delta. I anticipated it in Remark 9 and I now revisit it brie?y.
When dealing with a classic Black and Scholes market with ?at smile, delta hedging is a
simple exercise. Delta is computed as the sensitivity of the option price to a small change of
the underlying price F, with no change of the implied volatility of options, which remains at the
level σ predicted by Black and Scholes. When, however, traders are using Black and Scholes
as a quotation model in a market with a smile, things are not so simple. Consider a volatility
smile as in Figure 5.1.
Computing the delta of a derivative means repricing the derivative after a small shift of the
underlying F from its current value F0 to F0 + ε. Is it reasonable to assume that, if tomorrow
there occurs such a shift of the underlying, the smile will remain unchanged? This is a strong
assumption, called a sticky-strike assumption and represented by a null movement of the smile
curve when moving from F0 to F0 + ε, such that the continuous line in the figure (today's smile)
coincides with the smile after a movement of the underlying.
Hagan et al. (2002) claim that this assumption is wrong. In real markets the smile would
move to the right following the movement of F, as in the situation represented by the dotted
line.

Figure 5.1 The behaviour of a model in hedging: the shadow delta

Here I do not discuss this initial assumption. In my experience the majority of traders
agree with it; some of them disagree or claim that it depends on many other factors. In any
case, from my limited trading experience I point out that the assumption sounds reasonable.
In fact for many markets there are regularities in the shape of the smile. For example, I have
often heard traders say that in the interest rate market, for short maturities, 'the minimum
of the smile is around the ATM strike'. The Hagan et al. (2002) assumption is the one that
guarantees that in the model, when the underlying moves, these kinds of properties persist, as
they actually persist in the market.
Consequently, Hagan et al. (2002) claim that, when using a model that is not Black and
Scholes but implies a non-flat option smile, if we assess the effect of a change in the level of
the underlying, for example from F to F + ε, it is desirable to observe that the model implied
smile curve moves in the same direction. We call this a comonotonic behaviour. Since this
appears to be the usual pattern observed in the market, if the model is able to predict it the
hedging of derivatives will be less expensive, with reduced rebalancing costs.
For performing this analysis, Hagan et al. (2002) consider the volatility curve σ_F(K),
implied by a model when the underlying forward is F. Namely, σ_F(K) is the function such
that model call and put option prices coincide with Black prices when the Black volatility is
given by σ_F(K):

MOD(K, F, θ) = Black(K, F, σ_F²(K) T).
For assessing the dynamic behaviour of the model in delta-hedging, we must consider the
movement of σ_F(K) caused by variations of F. We would like to observe that

F → F + ε  ⇒  σ_{F+ε}(K) ≈ σ_F(K − ε).   (5.6)
This means, as we said, that if we move the underlying to the right, namely increase it, the
implied volatility curve will also move to the right. In this way, for example, a property such
as 'the minimum of the smile is at the ATM strike' is preserved by changes in the underlying
itself, since (5.6) implies

F → F + ε  ⇒  σ_{F+ε}(F + ε) ≈ σ_F(F).
Hagan et al. (2002), using singular perturbation techniques, show that in a local volatility
model with dynamics

dF_t = LocVol(F_t) F_t dW,   F_0 = F,

i.e. a model with a local volatility function not depending on time, the implied volatility is

σ_F(K) = LocVol((F + K)/2) { 1 + (1/24) [LocVol″((F + K)/2) / LocVol((F + K)/2)] (F − K)² + ··· }.

The first term dominates the second one, so

σ_F(K) ≈ LocVol((F + K)/2).
From Theory to Practice: Real Hedging
After the model has been calibrated, we can assess its predictions for an increase in the
underlying:

F → F + ε  ⇒  σ_{F+ε}(K) ≈ LocVol(((F + ε) + K)/2) = LocVol((F + (ε + K))/2) ≈ σ_F(K + ε).
Comparing with (5.6), we see this is the opposite of the desired behaviour. In particular,
if the forward price F increases to F + ε, ε > 0, the implied volatility curve moves to the
left; if F decreases to F − ε, the implied volatility curve moves to the right. Local volatility
models predict that the market smile/skew moves in the opposite direction to the price of the
underlying asset. This is opposite to typical market behaviour.
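The leading-order rule σ_F(K) ≈ LocVol((F + K)/2) makes this easy to see numerically. A minimal sketch, using a hypothetical smile-shaped local volatility function (an illustrative assumption, not a calibrated one from the text):

```python
import numpy as np

# Hypothetical smile-shaped local volatility, minimum at x = 0.09
# (an illustrative assumption, not taken from the text).
def loc_vol(x):
    return 0.20 + 30.0 * (x - 0.09) ** 2

# Leading-order approximation for a time-homogeneous local volatility model:
def implied_vol(F, K):
    return loc_vol((F + K) / 2.0)

K = np.linspace(0.02, 0.20, 1801)
k_min_before = K[np.argmin(implied_vol(0.08, K))]  # forward at F = 0.08
k_min_after = K[np.argmin(implied_vol(0.10, K))]   # forward moved to F + eps = 0.10

# The forward moved UP by 0.02; the smile minimum moved DOWN by 0.02:
print(k_min_before, k_min_after)  # 0.10 and 0.08 (to grid accuracy)
```

The minimum of the implied smile sits where the midpoint (F + K)/2 hits the minimum of LocVol, so raising F by ε pushes the minimizing strike down by ε: the smile slides opposite to the underlying.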
This can have consequences on hedging performance. Under any model, the delta is

Δ = ∂/∂F Black(K, F, σ_F²(K) T) = ∂Black/∂F + (∂Black/∂σ) · ∂σ_F(K)/∂F.

The first term is the Black delta. The second term is the model correction to the Black delta,
which consists of the Black vega risk multiplied by the predicted change in volatility due to
changes in the underlying forward price. If the sign of this latter term is opposite to what it
should be according to market evidence, the entire correction has the wrong sign. It would
be better, in practice, to hedge with the Black model. According to the above analysis, this is
exactly the situation with local volatility models.
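The decomposition can be verified with finite differences. A sketch, using the standard undiscounted Black call on the forward and a hypothetical midpoint-rule smile (both illustrative assumptions, not the book's calibrated inputs):

```python
import math

def n_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_call(F, K, sigma, T):
    # undiscounted Black call on the forward
    s = sigma * math.sqrt(T)
    d1 = (math.log(F / K) + 0.5 * s * s) / s
    return F * n_cdf(d1) - K * n_cdf(d1 - s)

# Hypothetical implied volatility curve depending on F through the midpoint rule
def sigma_F(F, K):
    return 0.20 + 30.0 * ((F + K) / 2.0 - 0.09) ** 2

F, K, T, h = 0.08, 0.09, 1.0, 1e-6

# full delta: bump F in the price AND in the implied volatility curve
full = (black_call(F + h, K, sigma_F(F + h, K), T)
        - black_call(F - h, K, sigma_F(F - h, K), T)) / (2 * h)

# Black delta (volatility frozen) plus Black vega times the predicted vol move
black_delta = (black_call(F + h, K, sigma_F(F, K), T)
               - black_call(F - h, K, sigma_F(F, K), T)) / (2 * h)
vega = (black_call(F, K, sigma_F(F, K) + h, T)
        - black_call(F, K, sigma_F(F, K) - h, T)) / (2 * h)
dsigma_dF = (sigma_F(F + h, K) - sigma_F(F - h, K)) / (2 * h)

# chain rule: full delta = Black delta + vega * (d sigma / dF)
assert abs(full - (black_delta + vega * dsigma_dF)) < 1e-5
```

With this smile the correction term dsigma_dF is negative, so the model delta sits below the Black delta; if the market actually moves the smile the other way, the whole correction carries the wrong sign, as argued above.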
To resolve this problem, Hagan et al. (2002) introduce a stochastic volatility model which
has become the most popular stochastic volatility model in the market, the SABR model.
5.3.1 Stochastic Volatility Models: SABR
In the SABR model, the forward price F is assumed to evolve under the associated natural
forward measure Q^T according to

dF(t) = V(t) F(t)^β dW_F(t),
dV(t) = ν V(t) dW_V(t),
V(0) = α,

where W_F and W_V are Q^T standard Brownian motions with

E[dW_V dW_F] = ρ dt.
The β exponent makes the dynamics of the underlying analogous to the CEV (constant
elasticity of variance) local volatility model, perturbed by the lognormal stochastic volatility
term V. Setting the local volatility exponent β < 1 allows us to explain typical market skews
(monotonically decreasing, asymmetric smiles). When associated with stochastic volatility it
allows us to fit more general hockey-stick smiles. The skew component of the smile can also
be explained by assuming ρ < 0 (alternatively to β < 1 or jointly with it, as we will see in
Section 7.1). The parameter ν is the volatility of volatility, what we often call the volvol.
Using singular perturbation techniques, Hagan et al. (2002) obtain a closed-form approximation
for the model implied volatility σ_F(K):

σ_F^SABR(K) := α / { (FK)^((1−β)/2) [ 1 + ((1−β)²/24) ln²(F/K) + ((1−β)⁴/1920) ln⁴(F/K) + ··· ] } · (z/x(z))
               · { 1 + [ (1−β)²α² / (24 (FK)^(1−β)) + ρβνα / (4 (FK)^((1−β)/2)) + ((2 − 3ρ²)/24) ν² ] T + ··· },

where

z := (ν/α) (FK)^((1−β)/2) ln(F/K),
x(z) := ln( [√(1 − 2ρz + z²) + z − ρ] / (1 − ρ) ),

so that

E^T[(F(T) − K)^+] = Black(K, F(0), (σ_F^SABR(K))² T).
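The formula above transcribes directly into code. A sketch (variable names are mine; the truncation is the same first-order one quoted above):

```python
import math

def sabr_implied_vol(K, F, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) lognormal implied volatility approximation."""
    logFK = math.log(F / K)
    fkb = (F * K) ** ((1.0 - beta) / 2.0)          # (FK)^((1-beta)/2)
    # series in ln(F/K) appearing in the denominator
    denom = fkb * (1.0 + (1.0 - beta) ** 2 / 24.0 * logFK ** 2
                   + (1.0 - beta) ** 4 / 1920.0 * logFK ** 4)
    z = (nu / alpha) * fkb * logFK
    if abs(z) < 1e-12:                              # ATM limit: z/x(z) -> 1
        z_over_x = 1.0
    else:
        x = math.log((math.sqrt(1.0 - 2.0 * rho * z + z * z) + z - rho)
                     / (1.0 - rho))
        z_over_x = z / x
    correction = 1.0 + ((1.0 - beta) ** 2 / 24.0 * alpha ** 2 / fkb ** 2
                        + rho * beta * nu * alpha / (4.0 * fkb)
                        + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2) * T
    return alpha / denom * z_over_x * correction

# ATM sanity check with beta = 1 (parameters as in the later numerical example)
v = sabr_implied_vol(0.05, 0.05, 5.0, 0.1, 1.0, -0.7, 0.3)
```

At the money with β = 1 the formula collapses to α(1 + (ρνα/4 + (2 − 3ρ²)ν²/24)T); with α = 0.1, ρ = −0.7, ν = 0.3, T = 5 this gives roughly 0.0984, which the function reproduces.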
Hagan et al. (2002) remark that, using this formula and moving from F to F + ε, leaving all
else unchanged, we can assess that the underlying and the implied volatility curve move in the
same direction, consistently with desired dynamic behaviour.
5.3.2 Test Hedging Behaviour Leaving Nothing Out
We start by reviewing a few simple tests, where we use the SABR closed-form formula to
assess model hedging behaviour with a test consistent with the approach detailed above:
observing the calibrated implied volatility curve, changing in the implied volatility curve the
input F for the new input F + ε, and then observing how the implied volatility curve has
moved. In particular, we consider sample tests where we start from F = 0.08 and then move
to F + ε = 0.1.
Simple hedging tests
The first observation we make is trivial. The SABR model, by definition, cannot always have
a dynamic behaviour different from local volatility models. In fact, when ν = 0 and β < 1,
the SABR model is reduced to a local volatility model. This is the case for the model in Figure 5.2.
In Figure 5.2 we see that, as for local volatility models, the smile moves backwards when
increasing F = 0.08 (continuous curve) to F + ε = 0.1 (dotted curve).
However, what is more relevant is that even moving to a real stochastic volatility model the
local volatility part may still dominate, as in the example at Figure 5.3 (parameters are given
below the chart).
Here we have again the undesirable behaviour. If we further increase the volatility of
volatility, as in Figure 5.4, we have a mixed behaviour that is still not consistent with the
desired pattern.
From Theory to Practice: Real Hedging
Figure 5.2 ν = 0, α = 5%, β = 0.25, ρ = 0, F = 0.08
A choice of parameters that gives a behaviour consistent with market patterns is given in
Figure 5.5.
However, this parameter configuration is good for representing an almost symmetric smile,
not the hockey-stick shape with a dominant skew that we often observe in market quotations.
Is there a way to represent a situation where the skew is important in the smile shape, but, in
the above hedging test, we keep the desired behaviour of (5.6)? This is obtained by fitting the
Figure 5.3 ν = 0.25, α = 5%, β = 0.25, ρ = 0
Figure 5.4 ν = 0.50, α = 5%, β = 0.25, ρ = 0 (chart legend: vv = 0.75)
skew through ρ < 0 rather than by β < 1, as most market practitioners know. This is shown
in the example at Figure 5.6, where the continuous curve represents a smile very similar to the
continuous curve of Figure 5.3, but with a totally different set of parameters, and in particular
ρ < 0 and β = 1 rather than ρ = 0 and β < 1. In spite of the strong analogy of the curves
before shifting the underlying, the behaviour after the shift is radically different.
Figure 5.5 ν = 1, α = 5%, β = 0.8, ρ = 0 (chart legend: α = 0.15, vv = 0.75)
Figure 5.6 ν = 0.6, α = 33.24%, β = 1, ρ = −0.41 (chart legend: ρ = −0.45, α = 0.341, vv = 0.64)
The local volatility part of SABR behaves as in local volatility models, such that in case
of a smile with a relevant skew the SABR model behaves as desired in terms of hedging
behaviour only when the skew is determined by the correlation between underlying and
stochastic volatility, possibly with the local exponent β set to 1 as in a standard lognormal
model. So the desired behaviour in the above test is a property of the model only with a specific
parameterization, negative correlation; otherwise the implied model skew continues to move
opposite to the underlying, as in the simplest local volatility models. But now an important
observation is in order.
Hedging when ρ ≠ 0
All the tests seen above, used for assessing the hedging behaviour of both local volatility and
SABR models, require assessing smile dynamics for a shift in the underlying with the other
parameters left unchanged. The question we now raise is the following: is it consistent with
model assumptions to test smile dynamics in this way when using a stochastic volatility model
where the underlying and the stochastic volatility are correlated, with ρ ≠ 0?
The answer is a resounding no. In the above tests one is neglecting an important implication
of ρ ≠ 0. The model predicts that a shift in the underlying is accompanied by a corresponding
expected movement of the stochastic volatility. Let us examine this in more detail. If we
want to perform a hedging test consistent with model assumptions (termed model-consistent
or in-the-model hedging), we need to assume that a shift ΔF is due to a stochastic shock
ΔW_F (there are no other sources of randomness directly affecting the underlying). Recall that
assuming correlation between underlying and stochastic volatility amounts to setting

dW_V = √(1 − ρ²) dZ + ρ dW_F,

where Z ⊥ W_F. In this way

E[dW_V dW_F] = ρ dt,
E[dW_V | dW_F] = ρ dW_F.   (5.11)
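These correlation relations can be sanity-checked with a quick Monte Carlo. A sketch, using ρ = −0.7 purely as an illustrative value (it matches the numerical example further below):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = -0.7
n = 1_000_000

Z = rng.standard_normal(n)                   # dZ, independent of dW_F
dWF = rng.standard_normal(n)                 # dW_F
dWV = np.sqrt(1 - rho ** 2) * Z + rho * dWF  # dW_V = sqrt(1 - rho^2) dZ + rho dW_F

# sample correlation recovers rho; the regression slope of dWV on dWF
# recovers the conditional expectation E[dWV | dWF] = rho dWF
corr = np.corrcoef(dWF, dWV)[0, 1]
slope = np.cov(dWF, dWV)[0, 1] / np.var(dWF)
print(round(corr, 2), round(slope, 2))  # both close to -0.7
```

The regression slope is the point: conditional on a given shock to the forward, the volatility shock is not zero on average but ρ times the forward shock.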
Therefore assuming a shift dW_F in the underlying corresponds to expecting a contemporary
shift ρ dW_F in the volatility, if we are performing model-consistent hedging. Assuming we have
a shock to the underlying forward price with unchanged volatility is not necessarily wrong, but
it corresponds to one of an infinity of possible scenarios, and is not the average or expected
scenario. On average we will have a non-null corresponding volatility shift ρ dW_F.
How does this affect the results of our hedging test? Let us look at a numerical example
where, as in a Monte Carlo simulation, we discretize the dynamics, moving from the
instantaneous dW_F, dW_V to discrete increments ΔW_F, ΔW_V over a short but not
instantaneous interval of time.
In delta-hedging we want to assess the effect of a shift of the underlying, say a swap
rate, from 0.05 to 0.051. Suppose values for the parameters such as α = 0.1, volvol ν = 0.3,
β = 1, ρ = −0.7. This corresponds to a stochastic shock

ΔW_F = ΔF / (α F^β) = 0.001 / (0.1 × 0.05) = 0.2.   (5.12)

Following (5.11), this corresponds to an expected stochastic volatility shock

E[ΔW_V | ΔW_F = 0.2] = ρ ΔW_F = −0.14,   (5.13)

leading to a shock of the initial value of volatility α from α = 0.1 to 0.096.
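The arithmetic of this expected volatility adjustment can be sketched in a few lines, with the numbers of the example above:

```python
# Model-consistent volatility adjustment: a swap rate shifted from 0.05 to
# 0.051 under SABR with alpha = 0.1, nu = 0.3 (volvol), beta = 1, rho = -0.7.
alpha, nu, beta, rho = 0.1, 0.3, 1.0, -0.7
F0, F1 = 0.05, 0.051

dWF = (F1 - F0) / (alpha * F0 ** beta)  # shock implied by the shift: 0.2
dWV = rho * dWF                         # expected shock E[dWV | dWF]: -0.14
alpha_new = alpha * (1.0 + nu * dWV)    # expected new volatility level

print(round(dWF, 3), round(dWV, 3), round(alpha_new, 4))  # 0.2 -0.14 0.0958
```

The expected new volatility level is 0.0958, i.e. the 0.096 quoted above after rounding: a model-consistent test should restart the smile from this lower α, not from the original 0.1.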
If we neglect this expected change in volatility, as was done in the typical hedging
test mentioned above, we have the model behaviour of Figure 5.7 for a shift in F from
0.05 to 0.051.
This is the desired behaviour: for an increase in the forward, the smile moves right. Figure 5.8
tests what happens if we take into account the expected change in volatility implied by the
model.
In this case, the behaviour is not what we desire: the smile has moved back. Dupire (2006)
and Bartlett (2006), in contexts different from ours, reach similar conclusions about the actual
behaviour of the SABR model.
These results are quite surprising. We started by observing that a comonotonic behaviour
of underlying and volatility curve in hedging is the most realistic and desirable behaviour
for a model. We observed that even SABR does not have this behaviour when the skew is
reproduced through β ≠ 1 with ρ = 0; in order to have the comonotonic behaviour we need to
reproduce the market skew through correlation between underlying and volatility, namely ρ ≠ 0.
However, we have assessed this fact through the simplest possible delta-hedging test:
assessing smile dynamics in case of a shift in the underlying with all other parameters left
unchanged. Such a test, although it is meaningful and fully consistent with model assumptions
in the case of local volatility models or stochastic volatility models with ρ = 0, turns out to
be model-inconsistent precisely when ρ ≠ 0, the only case in which the test was giving the
desired result. If, instead, we perform a model-consistent test taking into account the expected
Figure 5.7 SABR: model-inconsistent hedging (β = 1, ρ = −0.7, T = 5, ν = 0.3; continuous: F0 = 0.05, α = 0.1; dotted: F0 = 0.051, α = 0.1)
Figure 5.8 SABR: model-consistent hedging (β = 1, ρ = −0.7, T = 5, ν = 0.3; continuous: F0 = 0.05, α = 0.1; dotted: F0 = 0.051, α = 0.096)
behaviour of stochastic volatility, we find that a model with ρ ≠ 0 also has a smile moving
opposite to the forward.
It appears that both stochastic volatility and local volatility models, when calibrated to the
market skew, assume a negative relationship between underlying and volatility that generates
a non-comonotonic behaviour. In local volatility models this relationship is deterministic and
thus captured even in the simplest delta-hedging test (that is therefore fully model-consistent
for local volatility models), while in correlated stochastic volatility models the relationship
is stochastic and thus missed by the simplest delta-hedging test (that is therefore not model-consistent
for stochastic volatility models). A hedging test that consistently takes into account
the assumption of correlation between underlying and volatility shows that even a correlated
stochastic volatility model has an undesirable dynamic behaviour, while a simple hedging
exercise where only F is altered appears inconsistent with model assumptions but recovers the
behaviour that a trader would desire for being consistent with market patterns. The interesting
fact is that most traders in the market perform exactly this simple, inconsistent exercise when
computing the model sensitivities for hedging.
5.3.3 Real Hedging for Local Volatility Models
These findings require a consistent update of our way of assessing the hedging behaviour of
a model. So far, for local volatility models the dynamic behaviour has been assessed only
by a hedging test which is fully consistent with model assumptions, and the conclusion has
been, as in Hagan et al. (2002), that its dynamic behaviour is opposite to market behaviour.
However, we have seen that stochastic volatility models can also have a similar, undesirable
behaviour when the dynamic behaviour is assessed by a fully model-consistent procedure.
Stochastic volatility models only work in a desirable way when used in a model-inconsistent
but simpler hedging practice (namely shifting F with all other parameters left unchanged in
spite of correlation).
What happens if we reverse the perspective and try a similar inconsistent but effective
hedging for local volatility models? Is anything like that possible and reasonable? We show in
the following that not only is this possible, but it is the most common market practice among
users of local volatility models.
In the previous section we saw that model-consistent hedging is a procedure where one
treats a shift in the underlying as the consequence of a stochastic shock admitted by the model,
taking into account all implications, as we did in (5.12) and (5.13). On the other hand, in
model-inconsistent hedging one just restarts the model with a different level of the underlying.
In model-inconsistent hedging, as we did with SABR changing F without changing ?, the
change in F is not an endogenous product of the model, but an exogenous restart of the model
from a different position.
In order to exemplify this approach for local volatility models, we will look at it as applied
to the simplest possible local volatility model, the shifted lognormal of Rubinstein (1983),
characterized by the dynamics

dF(t) = σ [F(t) + α] dZ(t).   (5.14)
This model is consistent with (5.7) and can calibrate market skews. It is also consistent with
the assumption (see Rebonato (2002b)) that skews, particularly in the interest rates world, can
relate to the fact that the underlying does not react to shocks in a way fully proportional to
From Theory to Practice: Real Hedging
its level (a typical implication of the lognormal assumption) but is in-between a proportional
reaction and an absolute reaction (a typical implication of the normal/Gaussian assumption).
In fact, notice that (5.14) can be derived as a dynamics intermediate between the following
two dynamics: a lognormal model

dF(t) = σ_rel F(t) dZ(t)

and a normal model

dF(t) = σ_abs dZ(t).

The simplest way of obtaining an intermediate behaviour is to combine the above two dynamics
by assuming

dF(t) = A · σ_rel F(t) dZ(t) + (1 − A) · σ_abs dZ(t).   (5.15)
How to choose the parameters σ_rel and σ_abs? If the lognormal model is calibrated to ATM
options, as is typical for lognormal models, we simply have σ_rel = σ_ATM, where σ_ATM is the
market implied volatility for ATM options. Then, how to choose σ_abs in such a way that ATM
options are also calibrated as well as possible in the normal model? Intuitively, the idea is to
make the dynamics (normal) as similar as possible to the calibrated dynamics (lognormal) at
least near time 0, setting

σ_abs = σ_rel F(0) = σ_ATM F(0).

See Marris (1999) for more details on why this trick allows us to get a combined model (5.15)
which is well calibrated to ATM options. We now have
dF(t) = A · σ_ATM F(t) dZ(t) + (1 − A) · σ_ATM F(0) dZ(t),

which, setting

σ = σ_ATM A,   α = ((1 − A)/A) F(0),

can be rewritten as in (5.14),

dF(t) = σ [F(t) + α] dZ(t).

Notice that

α = ((1 − A)/A) F(0)  ⇒  A = F(0) / (F(0) + α).
From this comes the recipe that Marris (1999) gives to calibrate approximately the model to
ATM options for any α: just set

σ = σ_ATM A = σ_ATM F(0) / (F(0) + α).
It is clear that both σ and α in the model definition (5.14) are fixed when the model is first
calibrated. Then, when assessing the effect of a change F(0) = F → F(0) = F + ε, for being
theoretically consistent these parameters should be left unchanged and only the initial value
of the underlying in the dynamics should be altered. Results of this model-consistent hedging
test are shown in Figure 5.9, and are, as expected, opposite to the desired behaviour.
Often, traders in the market do not use local volatility models in this way. In the case of
this simple local volatility model, one would take into account that the value of α comes from
α = ((1 − A)/A) F(0), and that only the latter representation guarantees calibration to ATM implied
volatility. Thus, when assessing a change from F(0) = F to F(0) = F + ε one would also
change α to α′ = ((1 − A)/A) (F(0) + ε), while this is a model parameter that should not be altered.
This is an example of model-inconsistent hedging, where a shift in the underlying is not treated
as an endogenous product of a model shock but rather the model is restarted with this different
level of the underlying, with consequent implicit recalibration to ATM options. Results are
given in Figures 5.9 and 5.10.
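The implicit ATM recalibration can be checked against the numbers in Figures 5.9 and 5.10. A sketch, assuming a mixture weight A = 0.7 (inferred here from the legend value α = 0.021429 with F(0) = 0.05; the weight itself is not stated in the text):

```python
# Implicit recalibration of the shifted lognormal shift parameter alpha
# when the model is restarted from a new forward (model-inconsistent hedging).
A = 0.7                               # assumed mixture weight: alpha = (1 - A)/A * F0
F0, F0_new = 0.05, 0.051

alpha_before = (1 - A) / A * F0       # ~0.021429, as in Figure 5.9
alpha_after = (1 - A) / A * F0_new    # ~0.021857, as in Figure 5.10 (recalibrated)

print(round(alpha_before, 6), round(alpha_after, 6))
```

Restarting the model from F0 = 0.051 while keeping the ratio α/F(0) fixed reproduces exactly the α = 0.021857 shown in the legend of Figure 5.10.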
The behaviour of the model is not comonotonic in the ?rst, model-consistent hedging test,
as we expected. However, we see that if we move to a model-inconsistent (but common and not
unreasonable) delta hedging test, local volatility models also exhibit the desired comonotonic
smile movement.
Thus we have con?rmation of the conclusions in the above section. Traders know that
a model will be recalibrated tomorrow, therefore they do not care overmuch about being
consistent with model assumptions in computing sensitivities. On the contrary, traders build
sensitivities that try to anticipate the effect of tomorrow's recalibration. In the example of local
and stochastic volatility models, they see that the model assumes that the smile tomorrow will
move opposite to the underlying. But they know, if they agree with Pat Hagan, that the opposite
will happen in most cases in the market. Therefore tomorrow the model will need recalibration
to adjust this inconsistency. Traders try to anticipate recalibration, building a sensitivity that, as
we saw in this local volatility hedging test, also incorporates a recalibration to ATM volatilities
Figure 5.9 The shifted lognormal: model-consistent hedging (σ = 0.175, T = 5; continuous: F0 = 0.05, α = 0.021429; dotted: F0 = 0.051, α = 0.021429)
Figure 5.10 The shifted lognormal: model-inconsistent hedging (σ = 0.175, T = 5; continuous: F0 = 0.05, α = 0.021429; dotted: F0 = 0.051, α = 0.021857)
that has the effect of moving the smile in the same direction as the underlying. The sensitivity
is not consistent with model assumptions, but it takes into account the reality of market
movements and the practice of recalibration.
5.3.4 Conclusions: the Reality of Hedging Strategies
It is common market wisdom that local volatility models do not possess a desirable dynamic
behaviour. Hagan et al. (2002) show in particular that, in case of an increase of the underlying,
local volatility models predict the implied volatility curve will move in the opposite
direction, contrary to empirical market behaviour. This is undesirable since it implies wrong
hedges. Stochastic volatility models are instead deemed not to suffer from this shortcoming,
in particular when they fit the market skew through a negative correlation between volatility
and asset price. Here, however, we have provided results suggesting that, if one assesses
the dynamic behaviour taking into account all model assumptions (in particular the correlation
between stochastic volatility and the underlying), stochastic volatility models also have
a dynamic behaviour that is not qualitatively different from the wrong behaviour of local
volatility models.
This may appear discomforting, and lead to the conclusion that almost all common financial
models are inapt for hedging. However, this conclusion is belied by further tests showing
that both local volatility and stochastic volatility models can imply correct hedges when
hedging is performed by techniques that are of simple implementation and widespread in the
marketplace. These techniques are not consistent with the initial model assumptions, but they
are consistent with the reality of markets where models are regularly recalibrated.
In my experience, model-inconsistent hedging is considered a natural choice by many
traders, since they deem common pricing models to be reliable for representing the relative
values of different financial assets but not realistic enough as a representation of the dynamics
of actual market variables. Since hedging also depends on the latter dynamics, it must be
adjusted ad hoc to be effective in practice.
As Li (2006) recalls, 'most derivatives dealers tend to believe there are too few factors in
a model to sufficiently capture the evolution of the underlying', so that the model is used
for valuation purposes but the hedging strategy implied by the model is discarded. In such
an approach a variation of the underlying from F to F + ε is not treated as an endogenous
product of the model, but simply as an exogenous new starting point for re-evaluation, all else
being equal, or with ad hoc adjustments to the other variables that may recover a more realistic
joint dynamics of fundamental market variables. Of course, it is a relevant goal for research in
our field to commit to developing models that reconcile model implications and market practice
in terms of hedging behaviour. In the meanwhile, for the purposes of validation, we must be
able to understand how hedging is done in practice and what are the differences between the
implications of a model and the way they are modi?ed for hedging, assessing the hedging
practice as something related to but different from the pricing model.