TVaR, or not TVaR - that is the question; in the context of portfolio optimisation, at least. The choice of risk measure to be minimised is of great importance, as it has grave consequences for both the performance of potential optimisation algorithms and the ‘smoothness’ of the reduction in the tail.
Risk is defined as the uncertainty (or volatility) of the portfolio rate of return. Volatility in the rate of return distribution translates to volatility in the underlying loss distribution. The natural choice of risk measure would therefore seem to be ‘value at risk’ (VaR(L)) or ‘tail value at risk’ (TVaR(L)), as management are focused on these measures for capital and risk management purposes. However, this doesn’t necessarily mean that either of these should be used as the risk measure for portfolio optimisation purposes!
By analogy: a headache may be symptomatic of the flu. However, it would be better to kill the flu virus and alleviate all of the unwanted symptoms than to take painkillers for the resultant headache. One can think of a particular return period loss as being symptomatic of overall volatility.
Portfolio Optimisation - VaR vs Variance
Let's compare and contrast the results of a hypothetical portfolio optimisation, with the risk measure defined first as 200VaR(L) (the 0.005 exceedance probability value at risk) and second as the variance of the rate of return.
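To make the two risk measures concrete, here is a minimal Python sketch computing the 1 in 200 VaR, the 1 in 200 TVaR and the variance of the rate of return from a simulated loss sample. The lognormal sample and the premium figure are hypothetical stand-ins for real loss model output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated annual aggregate losses; a lognormal sample
# stands in for the output of a real loss model.
losses = rng.lognormal(mean=10.0, sigma=1.2, size=100_000)

# 1 in 200 VaR: the loss exceeded with probability 0.005 (the 99.5th percentile).
var_200 = np.quantile(losses, 0.995)

# 1 in 200 TVaR: the average of the losses at or beyond that level.
tvar_200 = losses[losses >= var_200].mean()

# Variance of the rate of return for a hypothetical premium X:
# return = (X - L) / X, so Var(return) = Var(L) / X**2.
X = 60_000.0
returns = (X - losses) / X
variance = returns.var()

print(f"200VaR(L):  {var_200:,.0f}")
print(f"200TVaR(L): {tvar_200:,.0f}")
print(f"Var(R):     {variance:.4f}")
```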
The pre-optimised aggregate loss distribution (in blue) is associated with $X premium.
A black box optimisation algorithm has altered the contract percent shares, to seek the minimum 1 in 200 return period loss (0.005 AEP) for the same portfolio premium $X. The optimised curve is shown in orange.
The portfolio has also been optimised by selecting the variance of the rate of return as the risk measure. Again, the optimised loss distribution (shown in green) is associated with $X premium.
In each case the AAL is broadly similar, as it is predominantly driven by the shorter return period losses. It can be seen that the 200VaR(L)-minimised curve has a lower 1 in 200 RPL than both the original and the variance-minimised (denoted ‘SD’) loss distributions. However, the extreme tail of the 200VaR(L) distribution has kicked up, and indeed some of the shorter return period losses have also increased.
In choosing VaR(L) as the risk measure, the black box algorithm has in this case exploited a ‘kink’ in the modelled losses, achieving the absolute minimum 1 in 200 RPL possible for $𝑋 premium. However, there have been deleterious consequences for the remainder of the tail.
Minimising the variance has achieved less of a 1 in 200 RPL reduction, yet a much more consistent reduction throughout the entire loss distribution.
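A toy version of the variance-minimising optimisation can be sketched as follows. The per-contract loss simulations, the 10% premium loading and the [0, 1] share bounds are all hypothetical; a real exercise would use actual loss model output and business constraints.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical per-contract simulated losses: n_sims x n_contracts.
n_sims, n_contracts = 20_000, 8
contract_losses = rng.lognormal(mean=8.0, sigma=1.0, size=(n_sims, n_contracts))
contract_premiums = contract_losses.mean(axis=0) * 1.1  # assumed 10% loading

def portfolio_variance(shares):
    """Variance of the portfolio rate of return for the given percent shares."""
    premium = shares @ contract_premiums
    losses = contract_losses @ shares
    returns = (premium - losses) / premium
    return returns.var()

# Hold the total portfolio premium fixed at its starting value (equal shares).
start = np.full(n_contracts, 0.5)
target_premium = start @ contract_premiums
constraint = {"type": "eq",
              "fun": lambda s: s @ contract_premiums - target_premium}

result = minimize(portfolio_variance, start,
                  bounds=[(0.0, 1.0)] * n_contracts,
                  constraints=[constraint], method="SLSQP")

print("Optimised shares:", np.round(result.x, 3))
print("Variance before:", portfolio_variance(start))
print("Variance after: ", portfolio_variance(result.x))
```

Swapping `portfolio_variance` for an empirical quantile objective reproduces the 200VaR(L) experiment, though the sorting step makes that objective far less friendly to the optimiser.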
Justification of Variance as the Choice of Risk Measure
Consistent Risk Reduction
Regulators may well place emphasis on a particular return period for either VaR(L) or TVaR(L), but there is little point in ‘penny pinching’ at this return period if the rest of the distribution potentially suffers as a result. It is more prudent to seek a risk measure that captures the overall volatility of the loss distribution and the variance is indeed an excellent candidate.
After all, what if the 1 in 500 event happens? In the example above the portfolio manager would theoretically suffer a greater loss than before - hardly effective portfolio optimisation.
Reduces the Tail of the Distribution
The largest losses contribute the most to the variance statistic (each contribution being proportional to the square of the deviation from the mean). Hence, by minimising the variance, the largest losses in the tail of the distribution will be the first target for reduction. Minimising the variance will therefore typically reduce VaR(L) and TVaR(L) as well. This is obvious perhaps, but still worth pointing out.
Consider that the computation of the variance of a discrete loss distribution uses every single data point. Unlike VaR(L) and TVaR(L), none of the hard-earned loss model data is wasted in the computation of the variance.
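This difference in sensitivity can be illustrated directly. In the hypothetical sample below, doubling every simulated loss beyond the 1 in 1000 level leaves the empirical 1 in 200 VaR untouched - it is a single order statistic below the stressed region - while the variance registers the heavier tail immediately.

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=10.0, sigma=1.0, size=100_000)

# Double every loss beyond the 1 in 1000 level, leaving the rest untouched.
threshold = np.quantile(losses, 0.999)
stressed = np.where(losses > threshold, 2.0 * losses, losses)

# The 1 in 200 VaR depends only on the order statistics around the
# 99.5th percentile, which are unchanged, so it does not move at all...
var_before = np.quantile(losses, 0.995)
var_after = np.quantile(stressed, 0.995)

# ...whereas the variance weights every point by its squared deviation
# from the mean, and so picks up the heavier tail immediately.
variance_before = losses.var()
variance_after = stressed.var()

print(f"1 in 200 VaR: {var_before:,.0f} -> {var_after:,.0f}")
print(f"Variance:     {variance_before:,.0f} -> {variance_after:,.0f}")
```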
Easy to compute
VaR(L) and TVaR(L) rely on sorting and ranking algorithms. This makes them harder to compute and to successfully incorporate into optimisation algorithms. Variance, on the other hand, is a relatively simple computational operation.
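As a sketch of the computational difference: the variance can be accumulated in a single pass over the losses (Welford's algorithm, below), whereas an empirical VaR requires the whole sample to be held and sorted. The small loss list is purely illustrative.

```python
import math

def one_pass_variance(samples):
    """Welford's one-pass algorithm: variance without storing or sorting data."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / n  # population variance

def empirical_var(samples, p):
    """VaR via sorting: the ceil(p * n)-th smallest loss in the sample."""
    ordered = sorted(samples)  # the whole sample must be held in memory
    return ordered[min(len(ordered) - 1, math.ceil(p * len(ordered)) - 1)]

# Hypothetical loss sample.
losses = [120.0, 45.0, 300.0, 80.0, 95.0, 610.0, 33.0, 150.0]
print(one_pass_variance(losses))
print(empirical_var(losses, 0.875))
```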
It is mathematically elegant to express the risk as the second central moment of the rate of return distribution, and the expected return as its first moment. Mean-variance analysis is the framework that makes use of this mathematical grace.
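In that framework, a portfolio's expected return and risk follow directly from the contract shares, the expected contract returns and their covariance matrix. The figures below are hypothetical.

```python
import numpy as np

# Hypothetical expected rates of return and covariance matrix for three contracts.
mu = np.array([0.08, 0.05, 0.11])
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.010, 0.004],
                [0.010, 0.004, 0.090]])

w = np.array([0.5, 0.3, 0.2])  # contract shares, summing to 1

expected_return = w @ mu   # first moment of the portfolio rate of return
variance = w @ cov @ w     # second central moment: the risk measure

print(f"E[R]   = {expected_return:.4f}")
print(f"Var(R) = {variance:.5f}")
```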
RiskWave minimises the variance of the rate of return for the reasons outlined above. The resultant reductions in key return periods can then be evaluated for all combinations of feasible portfolio premium and efficient return. For further information - please see the white paper.