We are often asked about the volatility of our strategies, and whether it should be increased. The article below explains how we think about both volatility specifically and P&L risk more generally.
There is a truism in investment, and in life, which is that to achieve rewards one must take risk. We can keep cash in a bank or hold “safe” assets like short-term Treasury bills, or we can use our capital for riskier, more speculative investments, like stocks, corporate bonds, or “alternatives”.
A basic question that follows is how much risk should we take? And if we already have a portfolio of investments, how much risk are we currently taking? There are many kinds of risk associated with investment, but here we will focus on one particular kind: the risk that comes from the P&L of our investment strategies.
In the context of systematic quantitative investment, where the portfolio is usually liquid enough to be marked to market daily, it has become common to summarise this risk in terms of a single number – the standard deviation of the daily P&L, usually called the volatility of the portfolio. Often, the volatility of the daily P&L is “annualised” by multiplying by the square root of the number of trading days in a year – a number close to 16, since √252 ≈ 15.9 – and the result expressed as a percentage of the size of the investment. So an annualised standard deviation of 10% is sometimes called a “10-vol” strategy. And investment managers are asked, in the context of risk management: “what is your vol?” or “what vol do you target?”
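To make the convention concrete, here is a minimal sketch of the annualisation arithmetic; the 0.63% daily figure is simply an assumed example chosen to land near a “10-vol” strategy:

```python
import math

def annualised_vol(daily_pnl_std, trading_days=252):
    """Annualise a daily P&L standard deviation by scaling with sqrt(time)."""
    return daily_pnl_std * math.sqrt(trading_days)

# The "number close to 16" is the square root of the trading days in a year:
print(round(math.sqrt(252), 2))  # 15.87

# A strategy with 0.63% daily P&L volatility is roughly a "10-vol" strategy:
print(f"{annualised_vol(0.0063):.1%}")  # 10.0%
```

The square-root scaling itself rests on the assumption that daily P&Ls are independent and identically distributed – the very assumption the rest of this article calls into question.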
This way of thinking is so pervasive in the industry that people often refer to volatility as “risk” and use the two terms interchangeably. As we will show in a sequence of examples below, this is wrong, and it is a perspective that results in poor risk management. By way of an aperitif, consider that US equity prices rose during 2017 and into the following year to all-time highs, while volatility fell to all-time lows. If volatility is risk, then at the end of January 2018 the risk in US equities was as low as it had ever been. Does that sound right?
Before looking at some of the perils of confusing volatility with risk, it is worth thinking about how the conflation arose. One clue is that the two terms are usually kept distinct where markets, rather than quantitative strategies, are concerned. For example, there was no great confidence in equities in early February 2018 just because volatility was low – in fact, worries about extreme valuations were commonplace.
And what did these investors’ worries amount to? Most surely believed that the valuations indicated a risk of a significant downturn in the stock market – a “correction”, perhaps, or even a “crash”, to use industry nomenclature. On reflection, this is the risk we are really trying to measure, control, and target: the risk of a significant loss to capital over a relatively short timescale – a timescale too short for us to react, or at least one that is shorter than the time it takes us to make investment decisions.
Clearly, this definition of risk is vague. A loss that is “significant” for one investor may not be significant for another investor with a greater appetite for risk. And the time over which we make investment decisions varies, too – perhaps it could be five minutes for a day trader, or many months for large institutions. What we are investing in also matters. We may comfort ourselves after a big drawdown in equity markets by theorising about the long-term equity risk premium, but we might be less sanguine if a hedge fund experiences a similar loss.
To develop this point further, let us say for the sake of argument that a 10% loss or greater over one month is representative of the risk that concerns us. This scenario immediately raises a problem: it is very difficult to know what the chance of such a loss is. In the context of a long-only equity investment, if we knew the chance of a 10% loss occurring over a month, we would also know the chance of a stock market “correction” occurring in that period, which is very difficult – if not impossible – to estimate with any confidence. The problem is no easier in less constrained long-short investment strategies such as Winton’s.
If it is impractical to measure the risk that we want to control or target, what can we do? The standard practice is to approach this problem statistically, by assuming that the daily P&L of the strategy obeys a distribution. This does not have to be a normal distribution – we can allow its tails to be “fat” – but the important thing is that the daily P&L obeys some distribution that we can encapsulate in a formula.
We can then use the mathematical properties of the formula to translate the risk of a 10% loss during a month into a particular standard deviation, or volatility, of the distribution. For example, if we assumed a normal distribution with a standard deviation of 1% for the daily P&L, then the chance of a 10% loss or more at some point during the period would be 2.2% if the month contained 21 trading days. One can then, or so the thought goes, simply control the risk of large losses over longer time periods by targeting a particular daily P&L volatility – a far easier task.
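A figure of this kind can be checked with a short Monte Carlo simulation. The sketch below makes the stated assumptions – zero-mean normal daily P&L with a 1% standard deviation over a 21-day month – and counts paths whose cumulative P&L touches -10% at any point; the function name and parameters are ours, purely for illustration:

```python
import random

def prob_large_loss(daily_std=0.01, days=21, barrier=-0.10,
                    n_paths=100_000, seed=42):
    """Monte Carlo estimate of the chance that cumulative P&L
    falls to the barrier at some point during the month."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        pnl = 0.0
        for _ in range(days):
            pnl += rng.gauss(0.0, daily_std)
            if pnl <= barrier:  # barrier breached at some point in the month
                hits += 1
                break
    return hits / n_paths

print(prob_large_loss())  # close to 0.02, in line with the ~2.2% quoted
```

The estimate depends on exactly what counts as “a 10% loss during the month” – a breach at any daily close, as here, or only a loss measured at month-end – which is itself a reminder of how many modelling choices hide inside a single headline number.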
This method of risk management has the advantage of being simple to describe – you just name your volatility target – and simple to implement: estimate your current volatility and scale your portfolio up or down as necessary to hit the target. But, as you will gather from the tone of these remarks, we do not believe that it works! Furthermore, the manner in which it apparently turns a fantastically hard problem into a relatively simple one brings to mind Daniel Kahneman’s description of behavioural heuristics:
‘When the question is difficult … we answer an easier and related question. If the question is “Should I invest in Ford Motor Company stock?”, the easier question to answer is “Do I like Ford cars?”’
For our purposes, the difficult question is “what is my risk?”, with the easier question becoming “what is my vol?”.
We have already seen with the example of equities in early 2018 that volatility can be misleading as a measure of the risk or risks that we care about. But where is the mistake in the statistical method outlined above? The answer is that it neglects a crucial feature of markets and of trading strategies, which is that the distribution of their returns changes over time. In statistical jargon, it is non-stationary. Moreover, the occasions when a strategy experiences a large loss are often when the distribution of P&L changes suddenly, so that its previous volatility was a poor measure of its future volatility.
We can see this by looking at some additional examples of volatility failing to capture the risks of large losses.
In August 2007, several investment firms suffered significant losses in long-short equity strategies that had previously been highly successful. What we believe occurred was that several different participants were unknowingly trading very similar strategies to one another, in some cases using extreme amounts of leverage.¹ One group decided to liquidate their portfolio, perhaps to free up cash to cover losses in different strategies; this was during the period of the credit crisis, when losses in asset-backed securities were becoming apparent.
This liquidation resulted in losses in the long-short equities strategies, forcing some of the more highly leveraged funds to liquidate, which exacerbated the losses and caused others to liquidate in turn. The result was a cascade of liquidations, causing significant losses that were much larger than the typical level of P&L of the strategies. We can see roughly how it played out by backtesting a simple mean-reversion strategy over the period.
There was nothing in the volatility of these strategies that indicated that this event was imminent. Of particular relevance for us in this context was the comment of David Viniar, Goldman Sachs’ Chief Financial Officer at the time: “We were seeing things that were 25-standard deviation moves, several days in a row.” What he was really saying was that the distribution had changed. In the old distribution, these events were extreme outliers, and he clearly did not expect – based on the old distribution – to ever see them. But they happened, nonetheless.
The so-called Taper Tantrum of 2013 is a more widely known event, because it affected almost everyone invested in both stocks and bonds – a larger group than those invested in the long-short equity strategies that suffered during the Quant Quake. But its impact on “vol-targeting” investors was different to that on long-only unlevered investors.
Since both stocks and bonds had been rising, momentum investors had long positions. Nevertheless, there was a considerable negative correlation between the daily returns of stocks and bonds. The precise number depends on the exact definition and look-back window used for the correlation, but reasonable estimates are as low as -0.6. There was significant diversification, therefore, between long stock and bond positions, driving down the volatility of a portfolio with fixed leverage. Accordingly, any volatility-targeting portfolio was required to lever up to achieve its desired volatility, given the degree to which its positions in stocks and bonds offset each other.
This approach of keeping portfolio volatility constant did not keep risk constant, however. The risk lurking in the portfolio was that the stock-bond correlation would rise suddenly from its historic lows, and that both asset classes would drop together – exactly what then happened during the Taper Tantrum.
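The mechanics can be sketched with a two-asset example. The weights, volatilities, and 10% target below are assumed for illustration, not Winton parameters:

```python
import math

def portfolio_vol(w_stocks, w_bonds, vol_s, vol_b, corr):
    """Volatility of a two-asset portfolio from weights, vols, and correlation."""
    var = (w_stocks * vol_s) ** 2 + (w_bonds * vol_b) ** 2 \
          + 2 * w_stocks * w_bonds * vol_s * vol_b * corr
    return math.sqrt(var)

# Illustrative numbers: 15%-vol stocks, 7%-vol bonds, equal weights, 10% target.
target = 0.10
for corr in (-0.6, 0.0, 0.6):
    base_vol = portfolio_vol(0.5, 0.5, 0.15, 0.07, corr)
    leverage = target / base_vol
    print(f"corr={corr:+.1f}  vol={base_vol:.1%}  leverage to hit 10%: {leverage:.2f}x")
```

At a correlation of -0.6 this unlevered portfolio runs at about 6% volatility, so a 10% target demands roughly 1.6x leverage; if the correlation then jumps to +0.6, the same levered book suddenly runs at about 16% volatility – the hidden risk described above.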
The examples above demonstrate that there is more to risk management than merely targeting a chosen level of volatility. So how do we measure and control the risk we are taking? We start by identifying what we view as the most concerning risk: the chance of a significant loss in a relatively short time. By staying focused on this, we avoid the temptation to make oversimplifying assumptions about P&L distributions that would “reduce” the problem to a different one.
The first step is to state the level of risk we are aiming for. In the Winton Fund, our aim is to limit the frequency of monthly losses greater than 4% to, on average, once every two years – one month in 24. This, rather than a particular level of volatility, is the risk level that we attempt to target.
How can we target a monthly loss frequency in practice? We do so by examining a variety of other risk statistics that potentially indicate hidden risks in the portfolio. These include the standard numbers you might expect: betas to stocks, bonds, oil and the US dollar, and of course forecast and realised volatilities. But a variety of other statistics are also useful.
Some are informed by historical events, during which quantitative strategies experienced significant losses. For example, one clear lesson from the Quant Quake is the importance of prudence in the use of leverage. In 2007, the investors who suffered most were the most highly levered. It is vital to consider, therefore, both sector-specific and overall exposure.
Another lesson from August 2007 is that we should be concerned about crowding in our strategies. If there is a wave of position reduction among highly leveraged trend-followers (as in February 2018), we want our losses to be limited in scope. Crowding is difficult to measure, but we look for clues where we can, for example in CFTC Commitments of Traders reports.
The Taper Tantrum suggests a stress test, where we imagine correlations changing such that all markets move against our positions. One of the numbers we calculate every day is the loss that would be experienced if this occurred.
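A crude version of such a stress number can be sketched as follows. The positions and volatilities below are hypothetical, and a real calculation would cover the full set of markets traded:

```python
def adverse_move_loss(positions, daily_vols, n_std=3):
    """Loss if every market moves n_std daily standard deviations against
    the sign of our position -- i.e. correlations all go to one against us."""
    return sum(abs(w) * vol * n_std for w, vol in zip(positions, daily_vols))

# Hypothetical book: long stocks, long bonds, short oil (weights as fractions of NAV)
positions = [0.8, 1.2, -0.3]
daily_vols = [0.010, 0.004, 0.020]
print(f"stress loss: {adverse_move_loss(positions, daily_vols):.2%}")
```

Note that the diversification which flatters the portfolio’s measured volatility contributes nothing here: every exposure adds to the loss, which is precisely the point of the exercise.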
But some of the most useful risk statistics we have identified are not specific to any one historical scenario. These involve taking our current positions and calculating the P&L that they would have experienced historically, using all the data we can collect.² We look at how severe the worst days, weeks, and months would have been.
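In outline, this amounts to applying today’s weights to every day of history and scanning rolling windows for the worst outcome. The toy return history and positions below are made up purely for illustration:

```python
def historical_worst(positions, hist_returns, window):
    """Apply today's positions to historical daily returns and return the
    worst rolling `window`-day P&L. `hist_returns` is a list of per-day
    lists, one return per market, aligned with `positions`."""
    daily_pnl = [sum(w * r for w, r in zip(positions, day)) for day in hist_returns]
    return min(sum(daily_pnl[i:i + window])
               for i in range(len(daily_pnl) - window + 1))

# Toy example: two markets, six days of (made-up) returns.
history = [[0.01, -0.002], [-0.03, 0.001], [0.005, 0.0],
           [-0.02, -0.01], [0.01, 0.002], [0.0, 0.004]]
print(historical_worst([1.0, 2.0], history, window=5))  # worst 5-day P&L
```

Because the calculation uses today’s positions rather than the positions actually held in the past, it asks the question that matters for risk management: how would this portfolio have fared, rather than how did we fare.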
These statistics provide a more nuanced picture of the portfolio’s risks than volatility alone. If any of the numbers reach concerning levels, then it may prompt the Investment Board to make a change, perhaps in terms of portfolio-level gearing, or perhaps a change in weighting in a specific sector or strategy. This may seem less exact than saying that we target 10% volatility. However, our job is to manage the risk of the portfolio, rather than to fixate on a single, inadequate measure. We believe that an approximate answer to the right question is better than a precise answer to the wrong question.
¹ The sequence of events has been approximately reconstructed by conversing with several of the participants and studying the backtests of some of the strategies traded at the time. See What Happened to The Quants in August 2007? by Amir E. Khandani and Andrew W. Lo, 2007.
² Since currencies form a significant part of the Winton Fund, we start these tests in the early 1970s, after the end of the Bretton Woods system.
This information is communicated by Winton Capital Management Limited ("WCM") which is authorised and regulated by the UK Financial Conduct Authority.
WCM is a company registered in England and Wales with company number 03311531. Its registered office address is 20 Old Bailey, London EC4M 7AN, England. The information herein does not constitute an offer to sell or the solicitation of an offer to buy any securities. The value of an investment can fall as well as rise, and investors may not get back the amount originally invested. Past performance is not indicative of future results.