
A history of quant

How we got from there to here

18 June 2025

Fast reading

  • It’s now 125 years since the foundations of quantitative investing were laid, with the publication of Louis Bachelier’s Theory of Speculation in 1900.
  • The practical application of quant scholarship took off from the late 1960s, helped by improvements in computing power that facilitated the analysis of large datasets and the back-testing of portfolio strategies.
  • Towards the end of the 20th century, a key development in quant investing was the identification of a number of ‘factors’ that could be used to predict price movements in the markets.
  • As quant moved from theory to practice, it achieved striking successes in both investment returns and asset growth.
  • A key lesson from the post-2000s development of quant is the need for diversification, both between factors such as value, size and momentum and within them: rather than relying on a single formulation of each factor, strategies need several formulations of each.
  • In the 21st century, quant investing has benefited from three closely interconnected revolutions: in computing power, data and algorithms. All three are still underway, allowing quants to harvest – and harness – a dazzling array of information.
  • The combined revolution in computing power, big data and more sophisticated algorithms is extraordinarily potent – with machine learning and other AI-adjacent tools providing quants with insights that were undreamt of in the past.


Quantitative investing (quant) encompasses a broad range of strategies that use data analysis, mathematical modelling and automated transactions to deliver investment returns. In this paper, we discuss how quant strategies have evolved over recent decades; explore how they have been enhanced by advances in technology; and dispel some of the myths that have grown up around them.

Over more than a century, quant has evolved from a purely theoretical concept to a practical approach to investing in financial markets. Ideas that were once confined to the world of academia have been implemented by numerous investment strategies, often with remarkable success.

Along the way, there have been some high-profile failures too. This has led to a degree of skepticism and even cynicism towards quant strategies.

But advances in computing power and an extraordinary abundance of data allow today’s quant managers to achieve insights that were previously unimaginable. Every day seems to provide quant processes with a wealth of new data. And as new datasets achieve sufficient maturity to offer genuine predictive power, quant investing is making extraordinary strides in its reach and scope.

Part one: origins

It’s now 125 years since the foundations of quantitative investing were laid, with the publication of Louis Bachelier’s Theory of Speculation in 1900. In his doctoral thesis at the University of Paris, Bachelier set out a transformative insight: that mathematical principles could be usefully applied to the financial markets.

Bachelier’s work paved the way for theories of quantitative finance and, eventually, full-blown quant investing – the use of mathematical, statistical and modelling techniques with the aim of generating excess investment returns.

Theoretical milestones included the evolution of the ‘efficient-market hypothesis’; Harry Markowitz’s Portfolio Selection, which established the use of mathematical models to optimize portfolios; Fischer Black and Myron Scholes’ The Pricing of Options and Corporate Liabilities, which revolutionized the use of derivatives to reduce risk; and Eugene Fama and Kenneth French’s work on factors, which delivered greater insights into the forces driving stock returns.

The catalysts for quant

What were the drivers behind the evolution of quant investing? When we consider its genesis, we can point to the usual source of innovation in investment theory and practice: the perceived opportunity to identify and exploit novel sources of advantage.

Here, the potential for repeatable, rule-based decision-making to improve portfolio outcomes was one consideration – as was the potential to reduce or eliminate behavioral biases.

Later, as computing power improved, other benefits became apparent too. The use of ever larger and more complete data sets along with AI-driven pattern recognition, for instance, began to provide portfolio managers with an analytical edge. Likewise, faster decision-making now offers the potential to provide a trading advantage.

As in all things, success begets success, and the quant of today bears little resemblance to the quant of even a decade ago.

From theory to practice

The practical application of quant scholarship took off from the late 1960s, helped by improvements in computing power that facilitated the analysis of large datasets and the back-testing of portfolio strategies. Quant pioneers such as Edward Thorp and Victor Niederhoffer moved from theory to market practice, setting up funds that employed the quantitative methods they had developed in academia.

Their successes were followed by those of other pioneering quant funds in the 1980s, such as those managed by Renaissance Technologies and D.E. Shaw. These funds were able to lean on further improvements in computing power that enabled high-frequency trading. Investment banks got in on the act too, with the likes of Goldman Sachs, JP Morgan and Morgan Stanley setting up dedicated quant desks.

The birth of factor-based investing

Towards the end of the 20th century, a key development in quant investing was the identification of a number of ‘factors’ that could be used to predict price movements in the markets. In the 1960s, various academics had developed the capital asset pricing model (CAPM), which relied on a single factor: market risk. But as CAPM rested on the assumption that markets were efficient (the ‘efficient-market hypothesis’), it struggled to explain various aspects of asset-price performance.

Stephen A. Ross’s arbitrage pricing theory, which he set out in 1976, challenged the CAPM standard. Ross proposed the use of a wide range of different factors – although this complexity made the theory difficult to implement.

In the early 1990s, Eugene Fama and Kenneth French proposed a three-factor model, identifying size and value as two factors that could be used alongside market risk to price assets appropriately. The conventional measures of market risk fail to account for the fact that smaller companies tend to outperform bigger ones and that cheaper companies likewise tend to outperform their more expensive peers. So the three-factor model appealed to quant investors looking for a more nuanced way of capturing stock performance.
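
For reference, the three-factor model is usually written as a regression of a stock’s excess return on the market, size and value factors (this is the standard textbook notation, not a quotation from the original paper):

$$R_i - R_f = \alpha_i + \beta_i\,(R_m - R_f) + s_i\,\mathrm{SMB} + h_i\,\mathrm{HML} + \varepsilon_i$$

where SMB (‘small minus big’) and HML (‘high minus low’ book-to-market) are the returns on long–short portfolios capturing the size and value premia, and the coefficients measure the stock’s sensitivity to each factor.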

Others took this further – notably adding momentum as a fourth factor. In 2015, Fama and French would update their model to include five factors, adding operating profitability and investment to their original three.

The fact that Fama had previously been a longstanding advocate of the efficient-market hypothesis was also significant. The undermining of the efficient-market orthodoxy by one of its luminaries opened up graduate research to those with an interest in looking for mispricing in the markets – making broader academic investigation of quant strategies a more viable path.

A selective chronology of quant

  • 1900 – Louis Bachelier’s Theory of Speculation
  • 1952 – Harry Markowitz’s Portfolio Selection
  • 1960s – Development of the capital asset pricing model (CAPM)
  • 1966 – Victor Niederhoffer’s Market Making and Reversal on the Stock Exchange
  • 1969 – Edward O. Thorp launches Convertible Hedge Associates
  • 1970s – Introduction of computerized trading to the New York Stock Exchange
  • 1973 – Fischer Black and Myron Scholes’s The Pricing of Options and Corporate Liabilities
  • 1976 – Stephen A. Ross’s The Arbitrage Theory of Capital Asset Pricing
  • 1980 – Victor Niederhoffer launches the NCZ Commodities fund
  • 1982 – Founding of Renaissance Technologies
  • 1984 – Breiman et al.’s Classification and Regression Trees (CART)
  • Mid-1980s – Major investment banks set up quant desks
  • 1988 – Founding of D.E. Shaw
  • 1992 – Eugene Fama and Kenneth French’s three-factor model
  • 1998 – Collapse of Long-Term Capital Management
  • 2007 – The ‘quant quake’
  • 2008 – The Global Financial Crisis
  • 2000s – Ongoing revolutions in computing power, data storage and algorithms
  • 2010s – Increasing use of machine learning

Part two: growing pains – and lessons learned

As quant moved from theory to practice, it achieved striking successes in both investment returns and asset growth. But there were also some high-profile failures – some of which were so prominent that they have continued to color views on quant to this day.

From the 1990s on, quant strategies were typically based on models incorporating three or four factors. These strategies grew over time, largely through enhanced indexing – whereby quant techniques are used with the aim of amplifying the returns from passive index strategies. This can involve using factor-based data to adjust the allocations of an index-tracking strategy, for example.
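
As a rough illustration of the idea (and not a description of any particular manager’s process), the sketch below nudges hypothetical benchmark weights towards stocks with better factor scores and renormalizes them, so the portfolio stays close to the index while expressing the signal. All tickers, weights and scores are invented.

```python
# Minimal sketch of an enhanced-indexing tilt: benchmark weights are nudged
# towards stocks with better factor scores, then renormalized to sum to one.
# All inputs are hypothetical and purely illustrative.

benchmark_weights = {"AAA": 0.40, "BBB": 0.35, "CCC": 0.25}   # index weights
factor_scores = {"AAA": 0.8, "BBB": -0.2, "CCC": 0.1}         # z-scored factor signal
tilt_strength = 0.10                                          # how far to deviate from the index

# Apply a multiplicative tilt proportional to each stock's score.
tilted = {t: w * (1 + tilt_strength * factor_scores[t])
          for t, w in benchmark_weights.items()}

# Renormalize so the adjusted weights still sum to one.
total = sum(tilted.values())
enhanced_weights = {t: w / total for t, w in tilted.items()}

print(enhanced_weights)  # small, signal-driven deviations from the index
```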

Eventually, these strategies had trillions of dollars in assets under management. They had vast amounts of capacity and were able to deliver consistent returns.

It’s important to note that not all quant strategies involved enhanced indexing. Hedge funds, in particular, often employed quite different quant approaches. A case in point was Long-Term Capital Management (LTCM), a hedge fund that employed a range of complex quant techniques, initially with great success. In 1998, however, following Russia’s default on its sovereign debt obligations, LTCM collapsed as a result of its highly leveraged positions. The event shook the global financial system and necessitated a bailout organized by the Federal Reserve Bank of New York.

The collapse of LTCM had two main causes: the excessive use of leverage (debt) to magnify returns, and reliance on assumptions based on data without sufficient history. These were two key lessons for quants heading into the 21st century. But almost a decade after the collapse of LTCM, another prominent market event was to have a lasting negative impact.

The ‘quant quake’

The so-called ‘quant quake’ occurred in August 2007. Its origins were somewhat mysterious at the time, but a consensus has since emerged that heavy losses at a large quant fund forced it to sell down its holdings rapidly to meet redemptions. As with the collapse of LTCM, leverage played an important part in the crisis. But this time, the effects of excessive leverage were amplified by the herding effect in a certain body of quant funds and the contagion that resulted from this.

The funds in question employed statistical arbitrage – an approach that aims to make money from small deviations in price between similar securities. Statistical arbitrage, or ‘stat arb’, had been highly successful – but its success had made it popular, which resulted in diminishing returns. To amplify these lower returns and keep their strategies viable, fund managers began to use substantial amounts of leverage.

This meant that many very similar strategies were both invested in the same securities and heavily leveraged – which made them especially vulnerable to contagion when one of their peers had to unwind its positions at pace.
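
To make the mechanics of ‘stat arb’ concrete, here is a deliberately simplified sketch of the kind of spread signal such strategies monitor. The prices and the assumed 2:1 price relationship are invented, and real implementations are far more sophisticated (and, as 2007 showed, often heavily leveraged).

```python
import numpy as np

# Toy statistical-arbitrage signal: trade the spread between two similar
# securities when it deviates sharply from its recent average.
# Prices are invented for illustration only.
prices_a = np.array([100.0, 101.0, 102.5, 101.5, 103.0, 108.0])
prices_b = np.array([ 50.0,  50.6,  51.2,  50.9,  51.5,  51.6])

spread = prices_a - 2.0 * prices_b          # assume a roughly 2:1 price relationship
z = (spread[-1] - spread[:-1].mean()) / spread[:-1].std(ddof=1)

if z > 2.0:
    signal = "short A / long B"             # spread unusually wide: bet on convergence
elif z < -2.0:
    signal = "long A / short B"             # spread unusually narrow
else:
    signal = "no trade"

print(f"z-score {z:.2f}: {signal}")
```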

As many funds combined statistical arbitrage with other quant strategies, the run on ‘stat arb’ bled into other parts of the quant universe too. Risk-averse quant strategies started to sell, creating a downward spiral. The event exposed an overleveraged ecosystem and was extremely painful for the managers and investors who were forced to sell at the bottom.

For those who were able to sit tight through the crisis, however, it was little different from any other short-lived market dislocation. Prices had fallen sharply, but they soon snapped back as investors began to buy again. So for many quants, the ‘quake’ amounted to just a couple of nerve-wracking days. The lesson was simply not to rely on leverage to offset reduced returns.

The Global Financial Crisis

By contrast with the quant quake, the Global Financial Crisis (GFC) of 2008 did expose deeper-seated weaknesses in quant processes. While the quake was largely a result of too much leverage, the GFC showed that the factors quants were using were less robust than assumed.

For many quants, the bursting of the internet bubble in 2000 had been relatively painless. Quant strategies tended to have a focus on value, and unloved value stocks had done well when the dotcom bubble burst. But the GFC proved different. Value stocks – such as financials and energy stocks – had been generating strong earnings, which then evaporated as the crisis struck. Rather than proving resilient, as in 2000, value stocks were the worst affected by the GFC.

This was an eye-opening experience for many quant managers. It demonstrated that market crashes don’t always play out in the same way. In the GFC, a reliance on cheap stocks led to bad outcomes. Instead, portfolios needed to be better diversified to be more robust: exposure to the value factor was not a cure-all for every crisis.

Lessons learned

One lesson from the GFC was that diversification was required both between factors such as value, size and momentum and within them. Rather than relying on one formulation for each factor, strategies needed to have several formulations of each.
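
As a minimal sketch of what ‘several formulations’ can mean in practice, the example below standardizes three hypothetical measures of value and averages them into a composite score, so that no single definition of ‘cheap’ dominates. The metrics, tickers and numbers are invented for illustration.

```python
import numpy as np

# Combine several formulations of the value factor into one composite score,
# rather than relying on a single definition of "cheap".
# The metrics and values below are hypothetical.
stocks = ["AAA", "BBB", "CCC", "DDD"]
value_formulations = {
    "earnings_yield":  np.array([0.08, 0.05, 0.02, 0.06]),
    "book_to_price":   np.array([1.10, 0.60, 0.30, 0.90]),
    "cash_flow_yield": np.array([0.09, 0.04, 0.03, 0.07]),
}

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

# Standardize each formulation, then average across them.
composite = np.mean([zscore(v) for v in value_formulations.values()], axis=0)

for ticker, score in sorted(zip(stocks, composite), key=lambda p: -p[1]):
    print(f"{ticker}: composite value score {score:+.2f}")
```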

Another lesson was that quants should look beyond the standard informational inefficiencies when looking for mispricing in the markets. Behavioral factors – which depend on human emotions rather than hard information – are also a powerful force.

Behavioral mispricing explains the momentum factor, for example. Investors are often keen to sell their winners into a rising market because crystallizing a gain feels good; conversely, loss aversion makes them reluctant to sell stocks that have performed poorly. Therefore, many investors will hold onto ‘losing’ stocks for too long, which creates downward momentum by prolonging corrections in share prices.

So quants can benefit by buying stocks from willing sellers when prices still have further to run and by shorting stocks that are gradually falling to appropriate levels. Identifying such behavioral biases allows quants to gain from irrational investor behavior, as well as from the rational behavior that informs factors such as value.
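
By way of illustration only, the sketch below computes a simple cross-sectional momentum signal of the kind studied in the academic literature: stocks are ranked on their trailing 12-month return, skipping the most recent month to sidestep short-term reversal. The tickers and returns are invented.

```python
import numpy as np

# Toy 12-1 month momentum signal: rank stocks on their cumulative return over
# the past 12 months, excluding the most recent month (a common convention
# intended to avoid short-term reversal effects). Returns are invented.
monthly_returns = {
    "AAA": np.array([0.02, 0.01, 0.03, -0.01, 0.02, 0.01, 0.04, 0.00, 0.02, 0.01, 0.03, -0.05]),
    "BBB": np.array([-0.01, 0.00, -0.02, 0.01, -0.01, 0.00, -0.03, 0.01, -0.02, 0.00, -0.01, 0.04]),
}

def momentum_12_1(returns):
    window = returns[:-1]                 # drop the most recent month
    return np.prod(1.0 + window) - 1.0    # cumulative return over the remaining months

ranked = sorted(monthly_returns, key=lambda t: momentum_12_1(monthly_returns[t]), reverse=True)
for ticker in ranked:
    print(f"{ticker}: 12-1 momentum {momentum_12_1(monthly_returns[ticker]):+.1%}")
```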

Part three: revolutions

Many investors had bad experiences during the ‘quant quake’ and the GFC. But the resultant aversion to quant strategies in some quarters overlooks the huge advances that have been made in recent years – and the progress that continues as computing power grows exponentially.

In the 21st century, quant investing has benefited from three closely interconnected revolutions: in computing power, data and algorithms. All three are still underway, allowing quants to harvest – and harness – a dazzling array of information.

The revolution in computing power

One of the key developments in quant in the past couple of decades has been the availability of faster chips and better architecture for servers.

In the late 1990s and early 2000s, quants relied on giant-sized and eye-wateringly expensive servers for simulations and optimizations. These machines had state-of-the-art processors for the time and cost upwards of half a million dollars. But they had less computing power than the smartphone in your pocket. So they ran slowly, which meant that the models they operated were limited and their simulations were relatively simple. Each of their central processing units (CPUs) had to be on its own processor board, and each had only a single core.

But the situation was transformed under Moore’s Law¹ – thanks to continued innovation in the semiconductor industry. This has allowed CPUs to have multiple cores, and it has also enabled higher clock speeds, greater memory density and better energy efficiency. In short, computers can do much more, much faster.

As a result, the computers that quants use today have higher chip speeds, multiple cores per CPU and multiple CPUs on each processing board. These advances mean that thousands of different portfolios can be tested at the same time. Computing in parallel allows quant managers to get the answers they need a great deal faster than in the past – vastly improving the efficiency of their processes.
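
The sketch below gestures at what testing thousands of portfolios in parallel can look like: a toy scoring function is evaluated across many random candidate weightings using Python’s standard process pool. It is a schematic illustration of parallel backtesting, not anyone’s actual research stack.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Evaluate many candidate portfolios in parallel across CPU cores.
# The "score" here is a toy mean/volatility ratio on simulated daily returns.
rng = np.random.default_rng(0)
asset_returns = rng.normal(0.0005, 0.01, size=(252, 20))      # one year of daily returns, 20 assets

def score_portfolio(weights):
    """Return a simple mean/volatility ratio for one weight vector."""
    daily = asset_returns @ weights
    return daily.mean() / daily.std()

def random_weights(seed):
    w = np.random.default_rng(seed).random(20)
    return w / w.sum()

if __name__ == "__main__":
    candidates = [random_weights(i) for i in range(10_000)]
    with ProcessPoolExecutor() as pool:                        # one task batch per core
        scores = list(pool.map(score_portfolio, candidates, chunksize=500))
    best = int(np.argmax(scores))
    print(f"best of {len(candidates)} candidates: score {scores[best]:.3f}")
```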

The revolution in data

There has also been a revolution in data – or, more accurately, an explosion. In the past, storing a gigabyte’s worth of data was expensive. Today, the phone in your pocket can hold a terabyte. And along with much lower storage costs, there’s been a growing appreciation of the importance of data for all facets of the economy. In 2006, the British mathematician Clive Humby said that “data is the new oil”; since then, the increasingly digital manner in which most businesses are run has made data far easier to collect.

These shifts have revolutionized the data business. People are collecting far more data than ever before. While traditional datasets are still available, we now have an ever-expanding range of new and deep datasets. Many of these weren’t even fathomable 20 years ago – real-time records of every credit-card transaction or satellite photographs of every parking lot in the world, for example.

We have also seen rapid advances in computer analysis and machine learning – the subset of AI that allows computer models to adapt to situations without explicit programming. When these technologies are applied to these new datasets, they offer insights that human analysts simply can’t spot. Humans can’t count all the parked cars around the world, for example. But machines can – and they can update their figures and their forecasts every single day. And real-time data – the number of trucks leaving a company’s factories, say – offer spin-free insights that may not be obtainable from company representatives.

One challenge with these new datasets stems from their shorter timeframes. We have at least a century of stock-pricing data, for example, but only 10 or 15 years of the newer datasets like satellite information on parking lots.

For some quant strategies, that is not a problem. Certain processes rely on very short-term signals to identify trading opportunities – signals that may be mere seconds in duration. But many quants have longer time horizons and like to have a lot of data informing their decision-making. They don’t like to assume that statistics covering just a handful of years will deliver a reliable forecast. Instead, they want to see full market cycles. But as time passes, these new datasets are accruing enough history to become genuinely useful. That history will only grow, providing models with a constantly expanding supply of meaningful datasets.

The revolution in algorithms

A third revolution has occurred in algorithms – the computational procedures that turn data into useful insights and predictions. Some of the algorithms that quants rely on today have their intellectual origins in the 1970s or even earlier. But their contemporary equivalents have become much more effective and powerful thanks to continued innovations in algorithmic design and the greater scope that more powerful computers offer. These advances have delivered approaches that are particularly well suited for dealing with large and ‘noisy’ real-world datasets.

Decision trees are a case in point. These algorithms, which have vast potential for predictive modelling, achieved recognizable form with the publication of Classification and Regression Trees (CART) by Leo Breiman et al. in 1984.² But the nascent technology outlined in CART has since been supercharged by advances in computing power. And as ‘compute’ has increased, quants have taken advantage by developing more sophisticated adaptations of the original algorithms. In recent years, decision trees have been successfully applied in a range of industries and have provided a powerful and transparent means of identifying investment opportunities.
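
As an illustration of the family of techniques descended from CART (not of any specific commercial model), the snippet below fits a small regression tree from scikit-learn to synthetic factor data and prints the learned rules – one reason tree-based models are often described as relatively transparent.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Fit a small CART-style regression tree to synthetic data: two made-up
# "factor" inputs and a noisy forward return that depends on them.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))                       # columns: value score, momentum score
y = 0.03 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.01, size=500)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# Unlike many machine-learning models, the fitted tree can be printed as
# explicit if/then rules, which is why it is often described as transparent.
print(export_text(tree, feature_names=["value_score", "momentum_score"]))
```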

Another example is the neural network – a machine-learning model loosely inspired by the human brain that offers highly developed problem-solving capabilities. Although the concepts of neural networks and artificial intelligence were mooted in the 1960s, for decades they failed to move far beyond the conceptual stage. In 2012, however, progress in deep neural networks accelerated substantially, leading to the development of the foundational technology for generative AI in 2017. The resultant neural networks are astonishingly powerful – although, in comparison to decision trees, their inner workings are more opaque.
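
For comparison, the snippet below fits a small feed-forward neural network to the same kind of synthetic data. The point is simply that its fitted parameters are arrays of weights rather than human-readable rules; it is a toy example, not a production model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A small feed-forward neural network on synthetic factor data. Its fitted
# parameters are just arrays of weights - potentially powerful, but far
# harder to read than a decision tree's explicit splits.
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 2))
y = 0.03 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.01, size=500)

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)
print("layer weight shapes:", [w.shape for w in net.coefs_])   # opaque parameters, not rules
```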

Obtaining an edge

The combined revolution in computing power, big data and more sophisticated algorithms is extraordinarily potent – with machine learning and other AI-adjacent tools providing quants with insights that were undreamt of in the past.

Here, it’s important to remember that the managers who oversee quant processes are investors as well as data experts and mathematicians. So they often use data in much the same way as traditional investment managers do – only at much greater speed and on a much greater scale.

On top of this, many of the models employed by quants are based on stock fundamentals or other intuitive aspects of investment. Big data doesn’t necessarily mean the big picture: instead, it can mean focusing on individual securities in extraordinary detail.

The detailed impressions that inform quants’ decisions can be drawn from a vast range of inputs – and some of these can be far removed from traditional numerical data. For example, sentiment can now be gauged from everything from SEC filings to newsfeeds, press releases, company reports and even conference calls. Meanwhile, trends in online activity such as web searches and transactions can provide valuable datasets – as can GPS data on supply chains and satellite imagery of agricultural areas.
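
A deliberately simplified sketch of text-based sentiment scoring follows: it counts hits from small positive and negative word lists in a snippet of invented press-release language. Real systems use far richer natural-language models; this is only meant to make the idea of turning text into a numerical signal concrete.

```python
# Toy lexicon-based sentiment score for a snippet of (invented) company text.
# Production systems use much richer language models; this only illustrates
# the idea of turning text into a numerical signal.
positive_words = {"growth", "record", "strong", "exceeded", "upgraded"}
negative_words = {"decline", "weak", "impairment", "missed", "downgraded"}

press_release = (
    "The company reported record revenue and strong margin growth, "
    "although one division missed its volume target."
)

tokens = [w.strip(".,").lower() for w in press_release.split()]
score = sum(w in positive_words for w in tokens) - sum(w in negative_words for w in tokens)

print(f"net sentiment score: {score:+d}")   # positive score: tone is broadly favorable
```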

With both traditional and alternative datasets, today’s tools allow quants to extract useful information from massive amounts of data. The information can be used to gain a significant edge on a small group of stocks or a small edge on an enormous number of stocks – or anything in between.

Improvements in the quality and quantity of data allow new techniques to come into being. Machine learning allows models to improve as they factor in the successes and failures of the past. This mirrors the experience that a traditional bottom-up manager might acquire through years in the market – with the crucial difference that a quant model doesn’t need to have been around for long if it has access to sufficient quantities of backward-looking data from the appropriate time periods.

Today, there is much less emphasis on a few factors or a particular style. Although some quant managers may still rely on these more rudimentary approaches, the leaders have moved on. Instead, each stock is evaluated on many distinct factors.

MDT: The human touch

There is sometimes a perception that quant is a ‘black box’ – a mysterious process that can’t easily be explained, perhaps with a touch of smoke and mirrors thrown in. And for some quant managers, that may be an accurate description.

Neural networks, which some strategies employ, are indeed inherently opaque: they don’t ‘show their working’ in the way that other quant approaches do. But often, the ‘black box’ perception arises simply because quant managers do not share the precise details of their proprietary processes. In this, they are no different from traditional bottom-up managers, who typically keep their own proprietary stock-picking processes private.

Another aspect is simply the necessary complexity of quantitative approaches. For non-mathematicians, quantitative techniques can often appear daunting. But complexity is not the same as opacity. Many quant models are entirely transparent. MDT’s decision-tree-based models, for example, allow potential investors to see exactly how our managers arrive at their forecasts and investment decisions.

Ultimately, like traditional bottom-up managers, we use bottom-up data, along with technical data (both are ‘fundamental’ to share-price movements). We know what data will affect the stocks in which we invest, and in this we are no different from other investment managers.

Where we differ from quants of old is in the breadth and depth of data we can analyze, the speed at which we can process it, and the range of stocks to which we can apply the resultant insights. In our ever-more complex world, we believe this data-driven approach is increasingly valuable.

