Category Archives: Programming

Using a simple moving average to time the market has been a successful strategy over a very long period. Nothing to write home about, but it roughly halves the drawdown of buy and hold while sacrificing less than 1% of the CAGR in the process. In short: simple yet effective. Here are the important numbers (using the S&P 500 index from 1994 to 2013, inclusive):
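The timing rule being described is simple enough to sketch in a few lines. The post's own analysis is in R; this is a minimal Python sketch, with the function name and the window length as my own illustrative choices:

```python
def sma_signal(closes, window):
    """Return 1 (invested) or 0 (in cash) for the latest bar:
    long when the close is above its simple moving average.
    A minimal sketch of the timing rule; the window is a free parameter."""
    if len(closes) < window:
        return 0  # not enough history to compute the average
    sma = sum(closes[-window:]) / window
    return 1 if closes[-1] > sma else 0
```

On each rebalancing date the signal is recomputed, so the strategy sits in cash whenever the index is below its moving average, which is where the drawdown reduction comes from.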
There are a lot of “winning” strategies for bull markets floating around, and “buy the pullbacks” is certainly one of them. Does it sound simple enough to implement? While I am no Sheldon Cooper (although I do have a favorite couch seat), I still like to live in a somewhat well-defined world, a world in which there is much more information attached to a tip like “buy the pullbacks”. Let’s start with a chart of the recent history of the S&P 500 ETF:
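To illustrate the kind of precision the tip lacks, here is one hypothetical way to pin down “a pullback”: the price is in a longer-term uptrend but has dipped a few percent off its recent high. All thresholds and names below are my own assumptions, not a definition from the post:

```python
def is_pullback(closes, long_window=200, dip_window=5, dip_pct=0.03):
    """One hypothetical, concrete definition of a 'pullback':
    long-term uptrend plus a short-term dip. closes: most recent last."""
    if len(closes) < long_window:
        return False
    long_sma = sum(closes[-long_window:]) / long_window
    recent_high = max(closes[-dip_window:])
    last = closes[-1]
    # Uptrend: price above its long moving average.
    # Dip: price at least dip_pct below the recent short-term high.
    return last > long_sma and last <= recent_high * (1 - dip_pct)
```

Even this toy version forces answers to the questions the tip leaves open: uptrend relative to what, how deep a dip, measured over how many bars.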
Quantscript is an old project of mine, which was hosted on Google Code. Since Google Code is shutting down, I had to either scrap it or migrate it to GitHub. I no longer use this code on a daily basis, and since the project is relatively small, the natural thing would have been to scrap it. However, a few times over the years I have found myself pulling out the project’s source code to follow as an example of how to do various things in Python. Hence, I thought it better to spend the time migrating the project.
Over the years I have tried to simplify and streamline my access to historical financial data. All of the different solutions I have tried so far (see here, for example) have been unsatisfactory, at least to some degree. That, however, changed after I started using R6. Here is an example of using an R6 class for the same task as before:
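The post's example is an R6 class in R; the underlying idea is a class-based accessor that fetches each series once and serves repeat requests from a cache. A rough Python sketch of that idea, with all names mine:

```python
class HistoryStore:
    """Caching accessor for historical series (a Python sketch of the
    idea; the post itself uses an R6 class). The fetcher is any
    callable that maps a symbol to its data."""

    def __init__(self, fetcher):
        self._fetcher = fetcher
        self._cache = {}

    def get(self, symbol):
        # Fetch each symbol at most once; serve later requests from cache.
        if symbol not in self._cache:
            self._cache[symbol] = self._fetcher(symbol)
        return self._cache[symbol]
```

R6, unlike R's default copy-on-modify semantics, gives reference semantics, which is what makes a mutable cache like this natural to express in R as well.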
In an earlier post, I used mclapply to kick off parallel R processes and to demonstrate inter-process synchronization via the flock package. Although I have been using this approach to parallelism for a few years now, I admit it has some important disadvantages: it works only on a single machine, and it doesn’t work on Windows.
Have you tried synchronizing R processes? I did, and it wasn’t straightforward. In fact, I ended up creating a new package – flock.
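The flock package wraps OS-level advisory file locks for R. The same mechanism is available from Python's standard library, which makes for a compact illustration of what such a lock does (Unix-only; the helper's name and shape are mine):

```python
import fcntl
import os

def with_file_lock(path, fn):
    """Run fn() while holding an exclusive advisory lock on path.
    Other processes locking the same file block until we release it.
    (A Python analogue of what the flock R package provides; Unix-only.)"""
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until the lock is acquired
        return fn()
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

Because the lock lives in the kernel and is keyed by the file, it works across independently started processes, which is exactly the situation with parallel R workers.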
One of the improvements I made not too long ago to my R back-testing infrastructure was to start using a database to store the results. This way I can compute all interesting models once (see the “ARMA Models for Trading” series for an example) and store the relevant information (mean forecast, variance forecast, AIC, etc.) in the database. Then I can test whatever I want without further heavy lifting.
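To make the compute-once-query-many idea concrete, here is a sketch using SQLite via Python's standard library. The schema, column names, and function are illustrative assumptions, not taken from the post (which works in R):

```python
import sqlite3

def store_forecast(conn, model_id, date, mean_fc, var_fc, aic):
    """Persist one model's per-date forecast so later analyses can
    reuse it without refitting. Schema is illustrative only."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS forecasts (
               model_id TEXT, date TEXT,
               mean_forecast REAL, var_forecast REAL, aic REAL,
               PRIMARY KEY (model_id, date))"""
    )
    # INSERT OR REPLACE makes the write idempotent, so a re-run of the
    # back test simply overwrites the same (model, date) row.
    conn.execute(
        "INSERT OR REPLACE INTO forecasts VALUES (?, ?, ?, ?, ?)",
        (model_id, date, mean_fc, var_fc, aic),
    )
    conn.commit()
```

With the forecasts persisted, trying a new trading rule is just a query over the table rather than a multi-hour refit.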
Over the years, there have been a couple of issues I have been trying to address in my daily use of this excellent package. Both are “cosmetic” improvements: they only improve the usability of the package. Let me share them and see whether they can be improved further. :)
Various of my R scripts produce csv files as output. For instance, I run a lengthy SVM back test, and the end result is a csv file containing the indicator together with some additional information. The problem is that over time one loses track of what exactly the file contains and what parameters were used to produce it.
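One way to attack this problem is to make the file self-describing: write the run's parameters as comment lines ahead of the data. A Python sketch of that approach (the scripts in question are R; the comment format and names here are my own):

```python
import csv

def write_csv_with_metadata(f, rows, params):
    """Prepend the run parameters as '#'-prefixed comment lines, so
    the file itself records how it was produced. One possible fix for
    the problem; the format is illustrative."""
    for key, value in sorted(params.items()):
        f.write(f"# {key} = {value}\n")
    writer = csv.writer(f)
    writer.writerows(rows)
```

Many csv readers (R's read.table included, via its comment.char argument) can skip '#' lines, so the data remains loadable while the provenance travels with the file.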
As mentioned earlier, I am currently playing with trading strategies based on Support Vector Machines. At a high level, the approach is quite similar to what I implemented for my ARMA+GARCH strategy. Briefly, the simulation goes as follows: we step through the series one period (day, week, etc.) at a time. For each period, we use a history of pre-defined length to determine the best model parameters. Then, using these parameters, we forecast the next period.
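The walk-forward loop just described is model-agnostic, so it can be sketched with the model left pluggable. The post's actual implementation is in R with an SVM; the skeleton and names below are my own, with a trivial stand-in model for illustration:

```python
def walk_forward(series, window, fit, predict):
    """Step through the series one period at a time: fit a model on
    the trailing window, then forecast the next period. fit/predict
    are pluggable; the post uses an SVM, this skeleton takes anything."""
    forecasts = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        model = fit(history)                       # parameters chosen on history only
        forecasts.append(predict(model, history))  # out-of-sample forecast for period t
    return forecasts

# A trivial stand-in model: forecast the mean of the window.
fit_mean = lambda hist: sum(hist) / len(hist)
predict_mean = lambda model, hist: model
```

The key property is that the forecast for period t only ever sees data up to t-1, which is what keeps the simulation honest.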