i3 Insights – Quants and Market Timing – Part II

This post was first published on i3 Insights: Quants and Market Timing – Part II

 

Reading Time: 6 minutes

 

My last post asked the question: “Are Quants the Chiropractors of Finance?” The analogy of a chiropractor was chosen to illustrate two potential risks:

  1. Over-reliance on a single model.
  2. The model becomes our guide to reality. Instead of merely describing what we see, it influences what we choose to pay attention to and thus changes our experience of the world.

I used the example of growth investing to illustrate the effects of these risks. In summary, growth happens in the future and there’s no data for the future because it hasn’t happened yet. My argument was that this pushes quants to do one of three things:

  1. Ignore it.
  2. Define it in terms of what can be measured – e.g. assume that growth is “expensive” or the opposite of value.
  3. Try to proxy it using a third variable that can be measured and shares a less-than-perfect relationship with growth, such as historical earnings growth, price momentum or profitability.

This post elaborates further on the second risk. The key point is this: using quantitative techniques changes the way that quants formulate research questions, and this might affect their results. Charlie Munger described this risk in a speech presented at the University of California, Santa Barbara in 2003:

You’ve got a complex system and it spews out a lot of wonderful numbers that enable you to measure some factors. But there are other factors that are terribly important [yet] there’s no precise numbering that you can put on these factors. You know they’re important, but you don’t have the numbers. Well, practically (1) everybody overweighs the stuff that can be numbered, because it yields to the statistical techniques they’re taught in academia, and (2) doesn’t mix in the hard-to-measure stuff that may be more important. That is a mistake I’ve tried all my life to avoid, and I have no regrets for having done that.

As always, it’s helpful to use an example: breadth. Greater breadth, or a larger sample size, is like oxygen to a quant for two reasons:

  1. A larger number of observations leads to greater statistical power (i.e. a reduced risk of making Type II errors).
  2. Most of the factors identified by quants have a low signal-to-noise ratio. They require lots of observations before clear evidence of an “edge” emerges (see the sketch below).
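
To make the first point concrete, here is a minimal sketch of how statistical power grows with sample size. It is my own illustration, not drawn from any of the studies discussed here, and it assumes a hypothetical factor with a true annual Sharpe ratio of 0.3, a two-sided 5% significance test and a normal approximation (scipy is used for the normal distribution):

```python
# Approximate power of detecting a low signal-to-noise "factor"
# (hypothetical true annual Sharpe ratio of 0.3) as the number of
# yearly observations grows. Normal approximation throughout.
from scipy.stats import norm

true_sharpe = 0.3                 # assumed signal-to-noise ratio
z_crit = norm.ppf(1 - 0.05 / 2)   # two-sided 5% critical value

for n_years in (10, 20, 40, 80, 160):
    # the t-statistic of the mean excess return scales with sqrt(N)
    noncentrality = true_sharpe * n_years ** 0.5
    power = 1 - norm.cdf(z_crit - noncentrality)
    print(f"{n_years:4d} years -> power ~ {power:.2f}")
```

Under these assumptions, power is still below 50% after 40 years of annual data. That is why breadth is like oxygen to a quant.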

The need for breadth, or a sufficiently large number of observations, affects the way quants frame their research questions and design their studies. Market timing is a good example. Most quantitative studies of market timing show that it is very difficult (if not impossible) to outperform the stock market, net of transaction costs and taxes.

The classic market timing study was performed by Professor William Sharpe and published in his paper Likely Gains from Market Timing (Financial Analysts Journal, 1975).

Professor Sharpe tested the effectiveness of market timing by comparing the returns of a “perfect timing” strategy with a buy-and-hold investment in the Standard and Poor’s (S&P) Composite Index from 1934 through 1972. At the beginning of each year, the perfect timing strategy was 100% invested in whichever asset class (the S&P Composite or US Treasuries) delivered the higher return over the next 12 months.

The perfect timing strategy yielded a return of 14.86% per annum versus 10.64% per annum for the buy-and-hold strategy, with a standard deviation of 14.58% for perfect timing versus 21.06% for buy-and-hold.
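
The mechanics are easy to reproduce on synthetic data. The sketch below is a toy version of the comparison using numpy and hypothetical return parameters, not Sharpe’s actual 1934–1972 dataset, so the numbers will not match his:

```python
# Toy comparison of buy-and-hold vs "perfect timing" on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_years = 38                               # one all-or-nothing bet per year
stocks = rng.normal(0.10, 0.20, n_years)   # hypothetical annual stock returns
bills = np.full(n_years, 0.03)             # hypothetical annual Treasury return

perfect = np.maximum(stocks, bills)        # hold whichever asset wins each year

print(f"buy-and-hold:   mean {stocks.mean():.2%}, sd {stocks.std():.2%}")
print(f"perfect timing: mean {perfect.mean():.2%}, sd {perfect.std():.2%}")
```

As in Sharpe’s study, perfect foresight raises the mean return while lowering volatility, because it sidesteps the worst years entirely.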

To test the potential gains from imperfect market timing ability, various levels of prediction accuracy were tested using a binomial model, assuming that the market timer has equal prediction accuracy in both bull and bear markets.

Professor Sharpe found that the benefits from market timing are only available, on a risk-adjusted basis, to market timers with more than 74% prediction accuracy. He also noted that assuming equal accuracy for predicting bull and bear markets introduces a bias, since historically there have been more good years than bad years.
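
Extending the toy model above to an imperfect timer is straightforward: each year the timer picks the better asset with some probability p, mirroring Sharpe’s equal-accuracy assumption. The parameters remain hypothetical and the output is illustrative only, not a replication of Sharpe’s risk-adjusted 74% threshold:

```python
# An imperfect timer who calls the better asset with probability p each year.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_trials = 38, 5_000
bills = np.full(n_years, 0.03)                     # hypothetical Treasury return

for p in (0.5, 0.6, 0.7, 0.74, 0.9, 1.0):
    mean_returns = []
    for _ in range(n_trials):
        stocks = rng.normal(0.10, 0.20, n_years)   # hypothetical stock returns
        correct = rng.random(n_years) < p          # right call with probability p
        timed = np.where(correct,
                         np.maximum(stocks, bills),
                         np.minimum(stocks, bills))
        mean_returns.append(timed.mean())
    print(f"accuracy {p:.2f}: average annual return ~ {np.mean(mean_returns):.2%}")
```

Even in this toy setup, a coin-flipping timer does far worse than buy-and-hold, and accuracy needs to be well above 50% before timing merely breaks even, let alone covers transaction costs and taxes.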

I don’t know about you, but I don’t know many investors who make a New Year’s resolution to invest 100% of their savings in stocks or bonds for the next 12 months. For starters, most skilled investors know better than to take all-or-nothing, one-way bets. Most experienced investors know that fundamentals, such as valuation, only have predictive power over time horizons much longer than 12 months. And they are aware that fundamental indicators are only meaningful when they are at extreme levels.

An experienced investor would know, as an approximate rule of thumb, that “the stock market moves up roughly a third of the time, sideways a third of the time and downward a third of the time” (Jesse Livermore). Consequently, moving out of stocks and into bonds is not a decision made lightly, as it involves going against the odds. They would also be aware that fees and taxes make the odds even more unfavourable.

Putting all of this together, it’s reasonable to believe that:

  • Most experienced equity investors would stay fully invested unless there were good reasons to make a change.
  • That these reasons wouldn’t come along very often, maybe once every few years.
  • And they would be unlikely to take one-way, all-or-nothing bets.

So why did Professor Sharpe test the perfect timing strategy described above? My guess is that it has to do with breadth. The period from 1934 to 1972 covers 38 years. Suppose that Sharpe had tested a market timing rule that resulted in shifts every three years on average. He would have had only around 13 observations instead of 38, reducing the statistical power of the study.

On the other hand, why didn’t he test rolling 12-month periods from 1934 to 1972? That would have produced 444 observations, significantly boosting the statistical power of the study. But this would have been very time-consuming. Remember, the paper was written in 1975, when computers were not as powerful as they are today.

Sharpe seems to have formulated the question in such a way as to have enough observations to obtain a statistically powerful result, but not so many as to exceed the limitations of the computational tools available at the time.

In other words, the techniques probably determined the way the question was asked, which, at least in part, determined the result. In so doing, Sharpe discovered that market timing, as defined by a strategy that no sane investor would probably ever contemplate, requires an almost impossible level of accuracy to make it work. He also left the more relevant question of whether it’s ever appropriate to reduce a portfolio’s weight to equities largely unanswered.

My point is not to argue that market timing is possible or that investors should try it. Rather, the point is that quantitative analysis isn’t suited to analysing infrequent events or “once in a lifetime” opportunities and risks. Either there just isn’t enough data, or the data is so noisy that it takes a lot of it to be sure that something’s there.

The implication is that investors who rely solely on quantitative analysis are putting themselves in danger of ignoring opportunities and risks that seldom occur but can have enormous impact. Why is this a problem? Because it creates a blind spot where potentially life-changing decisions are missed. Charlie Munger explained why at the 1996 WESCO Annual Meeting:

Experience tends to confirm a long-held notion that being prepared, on a few occasions in a lifetime, to act promptly in scale, in doing some simple and logical thing, will often dramatically improve the financial results of that lifetime. A few major opportunities, clearly recognizable as such, will usually come to one who continuously searches and waits, with a curious mind that loves diagnosis involving multiple variables. And then all that is required is a willingness to bet heavily when the odds are extremely favourable, using resources available as a result of prudence and patience in the past.

At the 2017 Daily Journal Annual Meeting, Munger gave a personal example of how two decisions in 50 years resulted in a gain of $400–500 million.

I read Barron’s for 50 years. In 50 years I found one investment opportunity in Barron’s out of which I made about $80 million with almost no risk. I took the $80 million and gave it to Li Lu who turned it into $400 or $500 million. So I have made $400 or $500 million reading Barron’s for 50 years and following one idea. Now that doesn’t help you very much does it? I’m sorry but that’s the way it really happened. If you can’t do it… I didn’t have a lot of ideas. I didn’t find them that easily, but I did pounce on one.

You’re probably thinking, how do we know that wasn’t just luck? Or how can someone else repeat Munger’s success? Even Munger was aware of this problem. He admitted that his answer wasn’t much help to others hoping to emulate his success.

Again, I’m not advocating that we all take out a subscription to Barron’s. Rather, the point is that quantitative analysis is probably not the best way to analyse low-frequency, high-impact decisions. Investors face these decisions, which is why it’s probably a good idea to use several different problem-solving approaches, including quantitative analysis.

Quantitative analysis is a powerful and useful tool. But like any tool it has its limits. It requires data, which means it may struggle in situations where the future is less than perfectly correlated with the past. It requires breadth, which means that it has difficulty answering questions about events that occur infrequently. And its effectiveness relative to other forms of analysis also depends on the ratio of signal to noise. But that’s a topic for another post.

 


4 comments

  1. I’m liking this series of posts. Any time the role of randomness and ‘luck’ is called out as having more influence than people think, I’m happy. Can we next expect your thoughts on how one might best position themselves to benefit from unlikely and unpredictable events?

    p.s. minor typo near the top – Definite = Define


    • Thanks Ben. It’s an interesting and important topic. How to position yourself for the unexpected really depends on your philosophical viewpoint. But it’s a great question and a good theme for a post. Thanks for the idea!


  2. Daniel,

    In many branches of study quantitative techniques can be immensely useful to establish validity of hypotheses. The techniques are of course fallible. They are just models. Every model is wrong, but some are useful (George Box). The big mistake is not to recognise the weaknesses and strengths of your modelling approach (whether quantitative or otherwise).

    I think your main criticism is with the naive application of quantitative techniques to finance. I agree: an expectation that models (yes, plural) can and should explain or predict all market dynamics is naive. There is no one-size-fits-all model. Some can be designed to forecast/explain extremes; some can be developed to forecast growth.

    I would argue the naive application of quantitative techniques to anything is bad, and a more nuanced approach integrating domain knowledge with sound statistical analysis is always going to be superior to one or the other. Search for “Abraham Wald aircraft” for a terrific example.


    • Anthony, thanks for your comments. I agree that it’s an area where nuance is essential. My goal is to help investors use quantitative tools safely and effectively. That involves exploring the circumstances where they don’t work well.

      For example, in situations where there isn’t any historical information to analyse, where historical data is not applicable to the future, or where the research question has to be framed in a particular way to fit the data and the tools available.

      I come from a psychology background and what most people don’t know about psychology is that it’s largely based on statistics. In fact, many of the statistical tools that we use in finance were discovered by people trying to answer psychological questions.

      For example, a lot of the maths used to discover factors such as value, size etc. was first used to discover common personality traits. There are 3, 4 and 5 factor models of personality just like there are for asset pricing.

      Psychologists have been debating the usefulness and limits of these tools for a lot longer than finance academics and practitioners.

      I think there’s a lot that finance professionals can learn from the debates going on in other fields using statistical approaches.

