On the application of sound empirical practice to trading
I am now hosting this blog at blog.gillerinvestments.com
You will now be redirected to the new site...
I've inserted code into this Blogger template to automatically redirect posts to my new blog domain blog.gillerinvestments.com.
I have implemented a meta refresh to get search engines, such as Google, to follow the link. For a browser client, however, a slightly different approach is taken: you will be redirected to the search page on my new site, with the results of a search on the document you were attempting to view. This is done via the "onload" event, so you will briefly see this page before seeing the new one. Hopefully, this will help those following deep links reconnect to the page they actually wanted to view.
Published earlier on blog.gillerinvestments.com
This has taken a while, due to teething issues, but my Web 2.0 CTA experiment advances one step further today with a delayed Twitter feed and a real-time Twitter feed. Like the RSS feeds, these Twitter feeds provide a trade blotter for my index futures intraday strategy. There are details on how to subscribe to the real-time feeds on my main company web site.
Published earlier on blog.gillerinvestments.com
With another month of data for the dynamic trading risk factor available, we can look again at how various funds' and companies' performance compares to this factor. As we do not have a great deal more data, and nothing very dramatic has happened since this analysis was last performed, the parameter estimates are unlikely to have changed very much, so I won't report the regression analysis in depth.
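The comparison itself is just a regression of each fund's returns on the factor's returns. As a minimal sketch (the data files are placeholders of my own naming, not the actual fund or factor series):

    import numpy as np
    import statsmodels.api as sm

    # Placeholder inputs: monthly returns of a fund and of the dynamic trading risk factor
    fund = np.loadtxt("fund_returns.csv")
    factor = np.loadtxt("risk_factor_returns.csv")

    model = sm.OLS(fund, sm.add_constant(factor)).fit()
    print(model.params)      # intercept ("alpha") and the loading on the factor
    print(model.rsquared)    # how much of the fund's variance the factor explains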
Published earlier on blog.gillerinvestments.com
When talking about the SPX data, I glibly asserted that the data was evidently not I.I.D. normal. I then proceeded to show how the Generalized Error Distribution can be used to describe the data quite well and to reject the hypothesis that the data is I.I.D. Normal with a reasonable degree of confidence.
Published earlier on blog.gillerinvestments.com
In the previous post we illustrated the evident abnormality of financial data by examining the longitudinal returns of the S&P 500 Index.
I used the Generalized Error Distribution because it can be smoothly transformed from a Normal Distribution into a leptokurtotic distribution, which allowed me to use the Maximum Likelihood Ratio Test to distinguish between the null hypothesis (that the data is I.I.D. Normal) and the alternative hypothesis (that it is not).
I subscribe to the theory that if something is right you should be able to draw the same conclusions via various methods and data sets. So I am going to look again at the likely models for the innovations of financial data (we're taking a GARCH(1,1) model as given); but, this time, I decided to look at the S&P Goldman Sachs Commodity Index and to use a test based on Pearson's χ² Test. (In the following the data is actually based on the first deliverable contract on the GSCI traded at the CME.)
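As a preview of the kind of test I have in mind, here is a minimal sketch (the helper function is my own naming, and simulated fat-tailed data stands in for the GSCI series): bin the standardized residuals into equal-probability bins under a candidate distribution and compare observed with expected counts using Pearson's χ² statistic.

    import numpy as np
    from scipy import stats

    def pearson_chi2(z, dist, n_bins=20):
        """Pearson chi-squared test of standardized residuals z against a
        candidate distribution, using equal-probability bins under dist."""
        edges = dist.ppf(np.linspace(0.0, 1.0, n_bins + 1)[1:-1])   # interior bin edges
        observed = np.bincount(np.searchsorted(edges, z), minlength=n_bins)
        expected = len(z) / n_bins                                  # same count expected in every bin
        chi2_stat = np.sum((observed - expected) ** 2 / expected)
        p_value = stats.chi2.sf(chi2_stat, df=n_bins - 1)
        return chi2_stat, p_value

    # Simulated fat-tailed "returns" standing in for the standardized GARCH residuals
    z = stats.t(df=4).rvs(size=2500, random_state=0)
    z = (z - z.mean()) / z.std()

    # The Normal candidate should be strongly rejected; substituting stats.gennorm
    # (scipy's Generalized Error Distribution) gives the leptokurtotic candidates.
    print(pearson_chi2(z, stats.norm()))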
Before that, however, we should discuss what the possible options are for the PDF of the process innovations. The candidates are:
Published earlier on blog.gillerinvestments.com
This post addresses the question: why use the Generalized Error Distribution? The subject, the evident abnormality of financial data, should be very familiar to the intended audience of this blog, but I'm going to summarize some basic facts here as there have been requests to explain why the GED should be used.
Firstly, longitudinal returns of financial asset prices are evidently not described by the Normal Distribution. Many statements one hears along the lines of "a once in a hundred years" event are made in the context of comparing the scale of a realized event with its expected rate under the Normal Distribution. However, financial data are so clearly non-normal (more specifically, not identically and independently distributed, or I.I.D., normal) that only a naive analyst would even start off an argument by discussing that hypothesis.
Even without doing any statistical tests, a cursory analysis of the time series of daily S&P 500 Index returns (the upper panel in the above figure) would suggest that the returns are not homoskedastic — or constant in variance.
The lower panel shows the best fit of the normal distribution form to a histogram of daily index returns. The fit is clearly poor, and the data shows the pattern typical of leptokurtotic data. There is a deficit of events in the sides of the distribution (in the region around ±1σ) and an excess in the centre and in the tails.
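A quick numerical check of this (a sketch only; the data file is a placeholder for however you store the index closes) is to compute the sample excess kurtosis, which is zero for a Normal distribution and strongly positive for leptokurtotic data:

    import numpy as np
    from scipy import stats

    # Placeholder: a file of daily S&P 500 closing prices, one per line
    prices = np.loadtxt("spx_close.csv")
    returns = np.diff(np.log(prices))

    print("sample std. dev.      :", returns.std())
    print("sample excess kurtosis:", stats.kurtosis(returns))     # 0 under the Normal
    print("Jarque-Bera normality :", stats.jarque_bera(returns))  # tiny p-value for fat-tailed data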
I'm interested in specifying the process distribution correctly because it directly affects the relative weighting of the various data periods in any regression analysis we do. Ordinary least squares is only the correct estimation procedure when the underlying data are I.I.D. normal. This procedure treats deviations at the level of 3σ–5σ, or more, as highly significant, and so the estimated parameters will be chosen to explain these particular realizations more than those in the lower range.
In the case of the data above, the regression will listen strongly to the current period, although the process realization now may not be that characteristic of the entire period. One might argue that we should just replace OLS with generalized least squares which, if we weight with the appropriate covariance matrix, is equivalent to maximum likelihood estimation, a very powerful technique. However, this does not circumvent the problem that estimation based on the normal distribution treats 3σ–5σ residuals as very significant whereas, under a leptokurtotic distribution, they are not particularly so.
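To make the weighting point concrete, compare the negative log-likelihood contribution of a single residual under the Normal and under a fat-tailed GED (a sketch with an illustrative shape parameter, not a fitted one): the Normal penalty grows quadratically with the residual, so a 5σ point dominates the fit, while the GED penalty grows far more slowly.

    import numpy as np
    from scipy import stats

    normal = stats.norm()
    ged = stats.gennorm(beta=1.2)     # scipy's GED; beta < 2 means fatter tails than the Normal

    for r in [1.0, 3.0, 5.0]:         # residuals in units of sigma
        print(f"{r:.0f} sigma residual: -logL Normal = {-normal.logpdf(r):6.2f}, "
              f"-logL GED = {-ged.logpdf(r):6.2f}")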
The GED is useful because it can be smoothly transformed from a Normal distribution into a leptokurtotic distribution ("fat tails") or even into a platykurtotic distribution ("thin tails"). This allows us to use the maximum likelihood ratio test to test whether the GARCH process innovations are I.I.D. normal.
This test convincingly rejects the null hypothesis that the GARCH process innovations are normally distributed (shape=1). The estimated shape parameter, which controls the kurtosis of the distribution, is also approximately 6σ from the null hypothesis value.
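For readers who want to reproduce something like this, here is a minimal sketch using the third-party arch package (not my original code, and the data loading is a placeholder): fit the GARCH(1,1) once with Normal innovations and once with GED innovations, then compare log-likelihoods with a likelihood ratio test. Note that arch parameterizes the GED so that a shape of 2, rather than 1, corresponds to the Normal.

    import numpy as np
    from scipy import stats
    from arch import arch_model

    # Placeholder: daily index returns, in percent, loaded from your own source
    returns = 100.0 * np.loadtxt("spx_returns.csv")

    fit_norm = arch_model(returns, vol="GARCH", p=1, q=1, dist="normal").fit(disp="off")
    fit_ged  = arch_model(returns, vol="GARCH", p=1, q=1, dist="ged").fit(disp="off")

    # The Normal is nested in the GED (one extra shape parameter), so twice the
    # difference in log-likelihood is asymptotically chi-squared with 1 degree of freedom.
    lr_stat = 2.0 * (fit_ged.loglikelihood - fit_norm.loglikelihood)
    print("LR statistic:", lr_stat, " p-value:", stats.chi2.sf(lr_stat, df=1))
    print(fit_ged.params)   # includes the estimated GED shape parameter, "nu"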
In another post I will go into more depth about the various distributional choices that are available once one rejects the Normal.
Published earlier on blog.gillerinvestments.com
The basic problem is that when the prices of a subset of the index's components increase, their weight relative to the rest of the index also increases. The index-tracking investor is then required to buy more of those components, at their new higher price. If their prices should subsequently decline, then the index-tracking investor will be required to sell a little of the investment, by the same reasoning as before, at the new lower price.
Unfortunately, stocks do regularly go up and down relative to each other, and so the logic in the previous paragraph represents an embedded buy high – sell low strategy overlaid on the basic strategy represented by the index. This is one of the defects of cap. weighted indices and will lead a fund manager who attempts to track such an index to underperform through no fault of their own.
The Markowitz Portfolio is constructed to be Mean-Variance efficient and weights components so that the expected risk-adjusted profit from each position is equal. However, cap. weighting doesn't follow any utility driven formalism and it explicitly contradicts known facts about the market (it overweights large cap. stocks whereas academic research by Fama and French indicates that small cap. stocks consistently outperform).
The adverts. caught my attention because I had just tackled a similar buy high – sell low defect in the basket I own to track the Compact Model Portfolio. The portfolio that tracks the CMP Index is equally weighted, meaning that we allocate the same fraction of the overall equity to each individual investment.
Now equal weighting also has an embedded strategy, but in this case it is reversion rather than momentum. With an equal weighted basket, every time returns occur we need to reduce the position in the stocks that outperformed and increase the position in the stocks that underperformed, in order that we maintain the equal weighting. This is an embedded sell high – buy low strategy.
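A tiny worked example of that embedded reversion (made-up numbers, not my actual basket): start two stocks at equal weight, let one rally, and recompute the equal-weight targets.

    import numpy as np

    capital = 100_000.0
    prices = np.array([50.0, 50.0])                  # two stocks, equally weighted to start
    shares = (capital / len(prices)) / prices        # 1,000 shares of each

    prices_next = np.array([55.0, 50.0])             # stock 1 rallies 10%
    capital_next = np.sum(shares * prices_next)      # portfolio is now worth 105,000

    target = (capital_next / len(prices_next)) / prices_next
    print(target - shares)   # negative for the winner (sell high), positive for the loser (buy low)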
I was aware of this, but as I watched my basket I realized that I kept doing the opposite. On the daily rebalance, the strategy would buy some more of a stock that went up at the end of the day and then, the next day, if it lost money, it would sell at a loss. This was repeated again and again.
I finally realized that this was because I was rounding my position into round lots of a given size. The conventional algorithm for rounding positive numbers is to add one half and then truncate to an integer. The number of lots to hold in a given company is the fraction of the capital allocated to that company divided by the product of the price and the lot size. Following conventional ½ rounding, we tend to round up after we've made money and round down after we've lost money. This is an embedded buy high – sell low strategy.
I solved this by rounding against it: I round up on a losing day and round down on a winning day, i.e.

    shares = lotsize × ⌊ capital / (lotsize × price) − ½ sign(δprice) ⌋
This seems to work.
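For completeness, here is a small sketch of that rounding rule (the function name and example numbers are mine, purely for illustration):

    import math

    def lots_to_hold(capital, price, lot_size, price_change):
        """Shares to hold following the rule above:
        lot_size * floor(capital / (lot_size * price) - 0.5 * sign(delta_price))."""
        raw_lots = capital / (lot_size * price)
        sign = (price_change > 0) - (price_change < 0)       # sign of delta-price
        return lot_size * max(math.floor(raw_lots - 0.5 * sign), 0)

    # Example: $25,000 allocated, stock at $61.30, 100-share lots
    print(lots_to_hold(25_000, 61.30, 100, price_change=+0.85))   # up day   -> 300 shares
    print(lots_to_hold(25_000, 61.30, 100, price_change=-0.85))   # down day -> 400 shares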
Note that these are predictions, not measurements, so we'll have to wait and see how they actually do.
Fund Return
================================================
Citadel Kensington Global Strategies Fund +1.15%
IKOS Equity Hedge Fund +0.78%
Renaissance Institutional Equity Fund +1.25%
Millennium International Ltd. +1.06%
------------------------------------------------
Hedge Fund Trading Risk Factor +0.93%



Ticker   St. Dev.
------   --------
CSCO     2.35%
IBM      1.57%

This shows that we should not expect both stocks to typically have the same scale of move on any given day. It casts our data (which is for today, 01/02/2009) into a different light. Statistically speaking, i.e. relative to the typical scale of daily moves, IBM moved more than CSCO (2.3 s.d. vs 1.6 s.d.).
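The standardized figures quoted above are just each stock's move divided by its typical daily standard deviation; a trivial sketch (the raw moves here are hypothetical numbers chosen to be consistent with the quoted z-scores, not the actual prints):

    daily_stdev = {"CSCO": 2.35, "IBM": 1.57}    # % per day, from the table above
    todays_move = {"CSCO": 3.8, "IBM": 3.6}      # % moves today; hypothetical, for illustration

    for ticker, sigma in daily_stdev.items():
        print(f"{ticker}: {todays_move[ticker] / sigma:.1f} standard deviations")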