Monday, October 20, 2008

What Does the Markowitz Portfolio Really Mean?

One of my formative physics memories is watching Prof. Donald H. Perkins derive the Rutherford scattering cross-section formula on half a blackboard in about five minutes. His derivation was based on dimensional arguments and physical principles, and came just one week after my graduate class in Oxford had derived the very same formula from first principles, which took several hours and pages of dense algebra. Prof. Perkins' class was about phenomenology, which means it was about what happens in nature, and his lesson to me was that Physics is not Applied Mathematics: what happens in nature is due to the structure of the universe, not to the way the math works out.

When solving the Markowitz Mean-Variance efficient investment problem one is led to the portfolio defined by the product of the inverse of the covariance matrix into the vector of the asset return forecasts. So let's follow Prof. Perkins' lead and ask what this equation tells us about the principles of how we should structure a portfolio.
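As a concrete illustration, here is a minimal numpy sketch of that product. The covariance matrix and forecast vector are invented toy numbers, purely for illustration:

```python
import numpy as np

# Toy symmetric positive definite covariance matrix and forecast vector.
# These numbers are purely illustrative, not real data.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
forecast = np.array([0.02, 0.03])

# The unconstrained mean-variance portfolio is proportional to the product
# of the inverse covariance matrix and the forecast vector. Solving the
# linear system avoids forming the inverse explicitly.
holdings = np.linalg.solve(cov, forecast)
```

Solving the linear system rather than inverting the matrix is the numerically preferred route, but the result is exactly the vector described in the text.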

First of all, remember that the covariance matrix is required to be a symmetric positive definite matrix. What this means is that it can be diagonalized by a similarity transformation and that the diagonal terms of the resultant matrix are positive quantities. (A little linear algebra reveals that the transformation matrix is the matrix of eigenvectors of the covariance matrix and the diagonal terms are the associated eigenvalues.)

From a statistical point of view, what we have done is rotate into a new coordinate system in which our original set of correlated random variables has been replaced with new variables, each formed from a linear combination of the original variables, and all mutually uncorrelated (for jointly Gaussian returns, fully statistically independent). The new, transformed matrix is the covariance matrix of these new variables.
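A short numpy sketch shows the mechanics of this diagonalization, again with an invented two-asset covariance matrix:

```python
import numpy as np

# An illustrative symmetric positive definite covariance matrix.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# eigh handles the symmetric case: real eigenvalues, orthonormal eigenvectors.
eigvals, eigvecs = np.linalg.eigh(cov)

# Rotating by the eigenvector matrix diagonalizes the covariance: the
# off-diagonal terms vanish and the diagonal terms (the eigenvalues) are
# positive, as positive definiteness requires.
diag = eigvecs.T @ cov @ eigvecs
```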

Many authors now declare that these new variables are the actual driving factors behind the variance of our original portfolio, and that each asset has a factor loading onto the factors which are the real sources of portfolio variance.

I would not go so far. Mathematically, any symmetric positive definite matrix, whatever its source, can be decomposed in this manner, so I feel unwilling to add any interpretational overhead where it is not necessary. I am not saying that factor models do not exist; what I'm saying is that all covariance matrices can be diagonalized, with or without the existence of factors, so the fact that a particular matrix can be treated in this manner doesn't actually contain any new information. We will use the common term principal components to refer to the independent variables we have produced, but need go no further than that.

This philosophical diversion notwithstanding, the mathematics is fairly straightforward. When we transform to the principal components coordinate system we move from a system in which each axis represents a particular asset to one in which each axis represents an independent portfolio. The vector of asset forecasts is similarly transformed into a vector of portfolio forecasts. The payoff is that the covariance matrix has become a trivial diagonal matrix and, even more usefully, the inverse of the covariance matrix is simply that matrix with the diagonal elements replaced by their reciprocals. The product of the inverse covariance matrix with the forecast vector then becomes a simple vector where each element is the ratio of the forecast to the variance of each component portfolio.
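The whole chain of reasoning above can be sketched in a few lines of numpy, again with invented numbers:

```python
import numpy as np

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])     # toy covariance matrix
forecast = np.array([0.02, 0.03])  # toy asset forecasts

eigvals, eigvecs = np.linalg.eigh(cov)

# Rotate the asset forecasts into the principal-component coordinate system.
pc_forecast = eigvecs.T @ forecast

# There, the inverse covariance is just the reciprocal diagonal, so each
# component holding is simply forecast / variance, component by component.
pc_holdings = pc_forecast / eigvals

# Rotating back recovers the usual inverse-covariance-times-forecast portfolio.
holdings = eigvecs @ pc_holdings
```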

So the big question is: why is this the right portfolio? The answer comes when we consider the expected profit for each component portfolio. This is just the product of the forecast and the holding, which is the ratio of the square of the component forecast to the component variance. This is interesting because it is dimensionless and structure-free (by which I mean that the formula is the same for every component, independent of the component label). We are diversified because we are treating each component equally --- it's not due to the fancy mathematics, but it is clearly the right answer.
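In code, the expected profit of each component portfolio comes out as the same structure-free formula for every component (the component forecasts and variances below are invented numbers):

```python
import numpy as np

# Illustrative component forecasts and variances in the principal-component
# coordinate system (invented numbers).
pc_forecast = np.array([0.015, 0.025])
pc_variance = np.array([0.035, 0.095])

# Holding = forecast / variance, so expected profit = forecast * holding
# = forecast^2 / variance: identical in form for every component.
pc_holdings = pc_forecast / pc_variance
pc_profit = pc_forecast * pc_holdings
```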

No Evidence for GARCH in the Mean

Here's a first result using the excursion of DJIA daily volatility into the 300-400 points per day region. Does this market show any evidence of GARCH-in-the-Mean type behaviour? That is, does there seem to be a systematic bias to the drift of the market conditioned on the level of volatility? From the naive regression analysis, this hypothesis should clearly be rejected.

The more sophisticated approach of building a GARCH-M model directly also fails.
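The shape of the naive regression can be sketched as follows. Since the DJIA data isn't reproduced here, this version uses simulated returns with time-varying volatility and, by construction, no volatility-in-mean effect; the regression slope then comes out indistinguishable from zero, which is the form of the null result described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns: time-varying volatility, zero conditional drift,
# i.e. no GARCH-in-the-Mean effect is present by construction.
n = 5000
sigma = 0.01 * (1.0 + 0.5 * np.sin(np.linspace(0.0, 20.0, n)))
returns = sigma * rng.standard_normal(n)

# Naive regression of return on the level of volatility (with intercept).
X = np.column_stack([np.ones(n), sigma])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
slope = beta[1]   # statistically indistinguishable from zero here
```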

Thursday, October 16, 2008

Our New Volatility Laboratory

Recent turmoil in the financial markets has been accompanied by daily volatility reaching unprecedented levels. (The chart DJIA Volatility illustrates the daily point volatility for the DJIA estimated from a simple GARCH model. n.b. This chart was prepared during the day, so the "current" levels indicated numerically do not represent the "end of day" levels.)
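For reference, a "simple GARCH model" of the kind mentioned can be sketched as a GARCH(1,1) variance recursion. The parameters below are typical illustrative values, not the ones fitted for the chart, and the returns are fake data:

```python
import numpy as np

def garch_variance(returns, omega=1e-6, alpha=0.1, beta=0.85):
    """GARCH(1,1) recursion: var[t] = omega + alpha*r[t-1]^2 + beta*var[t-1].
    Parameters here are illustrative, not estimated from data."""
    var = np.empty_like(returns)
    var[0] = returns.var()            # seed with the sample variance
    for t in range(1, len(returns)):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    return var

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(250)   # one year of fake daily returns
vol = np.sqrt(garch_variance(r))      # estimated daily volatility path
```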

The level of volatility enters into our trading strategy in several places. Firstly, if we are risk averse, then our asset price change forecasts must be weighed against a risk metric when we decide whether or not to trade. Most likely this risk metric will scale in some way with the level of volatility. If we do not dynamically alter our risk metric to take account of the current level of volatility, then we will fail to maintain the same risk/reward ratio (or signal-to-noise ratio) in volatile times that we have in quiescent times. This will degrade the Sharpe Ratio of the trading strategy. To pick a gaudy metaphor: when one hears the noise of the waterfall ahead, one should start to paddle less swiftly.

Secondly, if our forecasting procedure involves variables such as lagged returns, cross-sectional dispersion measures, implied volatilities, or similar factors, then our alpha itself will scale in some way with the level of volatility, and so will become larger in magnitude during volatile times.

Canonical "Modern Portfolio Theory" explicitly specifies that the ideal portfolio should be linear in the product of the inverse of the covariance matrix into the vector of forecasts. This quantity, whether expressed in price change space, return space, or some other manner, is not dimensionless (it has the dimension of quantity/forecast, e.g. contracts/dollar) and will therefore scale inversely with the level of volatility.
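The inverse scaling can be made concrete with a toy check. Scaling volatility by a factor k multiplies the covariance matrix by k squared and, if the alpha itself scales with volatility as suggested above, multiplies the forecast vector by k; the prescribed holdings then shrink by exactly 1/k (all numbers below are invented):

```python
import numpy as np

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])     # illustrative covariance matrix
forecast = np.array([0.02, 0.03])  # illustrative forecasts

base = np.linalg.solve(cov, forecast)

# Volatility doubles: covariance entries scale by k^2 = 4, and the forecasts
# (assumed to scale with volatility) scale by k = 2. The resulting holdings
# shrink by a factor of k, i.e. inversely with the level of volatility.
k = 2.0
scaled = np.linalg.solve(k**2 * cov, k * forecast)
```

Note that if the forecasts were instead held fixed, the holdings would shrink by 1/k squared; the 1/k scaling depends on the alpha scaling with volatility.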

So theory often tells risk averse traders to take some account of volatility when making their trade decision. However, in practice I've often found it difficult to show the actual benefit of such considerations as an empirical reality. But one problem with econometric analysis of financial markets is that the data does not do a good job of exploring the available ranges of empirically important variables. Interest rates, for example, can stay in a similar range for years. This, as we see from the DJIA chart referenced above, is also true for volatility.

Now, in stark contrast, volatility has broken into a wholly new region of phase space, and we can actually compare decisions made in times of radically high volatility with those made in quieter times. Of course, this analysis still has a temporal bias --- we only have one such region of high volatility, and during that time the markets fell dramatically --- so we must maintain caution as to what we do with this dataset but, nevertheless, we have a new volatility laboratory to work in.