Financial research is highly prone to statistical distortion. Academics can choose from many thousands of stocks, bonds and currencies traded across dozens of countries, complete with decades’ worth of daily price data. They can backtest thousands of correlations and pick out the few that appear to offer profitable strategies.
The paper points out that most financial research applies a two-standard-deviation (or “two sigma” in the jargon) test to see if the results are statistically significant. That hurdle is not rigorous enough when thousands of strategies are being tried: at two sigma, roughly one in twenty strategies with no genuine edge will look significant by chance alone.
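To see why, here is a minimal simulation sketch (not from the paper; the figures of 2,000 strategies, 20 years of monthly returns and 5% monthly volatility are illustrative assumptions). Every “strategy” is pure noise, yet a sizeable handful still clear the two-sigma bar:

```python
import numpy as np

rng = np.random.default_rng(0)

n_strategies = 2000   # hypothetical number of backtested strategies
n_months = 240        # 20 years of monthly returns

# Pure noise: every strategy's true expected return is zero.
returns = rng.normal(loc=0.0, scale=0.05, size=(n_strategies, n_months))

# t-statistic of the mean monthly return for each strategy.
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_months))

# At two sigma, roughly 5% of noise strategies pass by luck alone.
false_discoveries = int(np.sum(np.abs(t_stats) > 2))
print(f"{false_discoveries} of {n_strategies} noise strategies look 'significant'")
```

With 2,000 tests, around 100 spurious “winners” emerge, which is why the authors argue for a much higher significance hurdle when many strategies have been tried.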
One way round this problem is to use “out-of-sample” testing. If you have 20 years of data, split them in half. If a strategy works in the first half of the data, see if it also works in the second, out-of-sample half. If not, it is probably a fluke.
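Continuing the illustrative simulation above (same assumed numbers, still pure noise), a rough sketch of the split-sample check: “discover” the best strategy in the first half, then re-test it on the held-out second half.

```python
import numpy as np

rng = np.random.default_rng(1)
n_strategies, n_months = 2000, 240
returns = rng.normal(0.0, 0.05, size=(n_strategies, n_months))  # pure noise

first_half, second_half = returns[:, :120], returns[:, 120:]

def t_stat(r):
    """t-statistic of the mean return of one strategy."""
    return r.mean() / (r.std(ddof=1) / np.sqrt(len(r)))

# "Discover" the strategy with the best-looking in-sample record...
best = max(range(n_strategies), key=lambda i: t_stat(first_half[i]))
print(f"in-sample t = {t_stat(first_half[best]):.2f}")     # impressively large

# ...then re-test it on the held-out half: a fluke's edge typically vanishes.
print(f"out-of-sample t = {t_stat(second_half[best]):.2f}")
```

The in-sample t-statistic of the “best” noise strategy is typically well above 3; out of sample it collapses towards zero, exposing the fluke.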
The problem with out-of-sample testing is that researchers know what happened in the past, and may have designed their strategies accordingly: consciously avoiding bank stocks in 2007 and 2008, for example. In addition, slicing up the data means fewer observations; since the standard error of an estimated return shrinks only with the square root of the sample size, halving the data makes relationships that are truly statistically significant harder to detect.
Campbell Harvey, one of the report’s authors, says that the only true out-of-sample approach is to ignore the past and see whether the strategy works in future. But few investors or fund managers have the required patience. They want a winning strategy now, not in five years’ time.
The authors’ conclusions are stark. “Most of the empirical research in finance, whether published in academic journals or put into production as an active trading strategy by an investment manager, is likely false. This implies that half the financial products (promising outperformance) that companies are selling to clients are false.”
More here – The Economist