In late April 2017, we noticed a new string of dominoes falling at the fast, automated end of the trading spectrum: With Virtu about to gobble up KCG – not to mention additional consolidations of principal trading groups like RGM Advisors (to DRW), Timber Hill (to Two Sigma) and Chopper Trading (to DRW), among others – it seemed pretty clear that one of the next dominoes to fall would be in the direct-feed market data space. The question was: To what degree? (See: “Nasdaq Under Virtu Market Data Axe,” April 28, 2017)
And yet, when we went back to look – via updating our Nasdaq model – this picture showed up:
As Paul Harvey used to say: “…And now the rest of the story…”
Obviously, this trajectory is the opposite of what was expected. Better yet, somewhere in a dictionary this chart appears – at least of late – next to the words “fairly smooth sailing” or “strong growth.”
Over the last few years, data products (and the growth in data revenues) have become critical business units for many types of companies along the broad financial services spectrum, not the least of which are exchanges. For those with market technology to sell as well – Nasdaq being uniquely strong on that score – data products have been a primary engine of growth, and that growth tends to translate into a strong currency (i.e., stock).
So, with legitimate threats to the lucrative direct-feed market data franchise (given all the consolidation in the prop trading space) – and the need to maintain strong top-line growth in data products revenue – what do you do? You expand the portfolio and add a new, healthy data product to the roster at just the right time to make up for the decline of a legacy product.
Specifically, Alphacution estimates that Virtu was able to carve at least $208 million in savings out of its tech and data spend for 2017, relative to 2016, after the acquisition of KCG. Furthermore, we expect that most of this reduction came out of redundant spending at KCG, among which were direct market data feed expenses with Nasdaq (especially given KCG’s heavy concentration in US equity market-making).
Meanwhile, in order to fill the gaps in the nick of time, Nasdaq acquired eVestment – “a leading content and analytics provider used by asset managers, investment consultants and asset owners to help facilitate institutional investment decisions” – in October 2017 for $744 million. Alphacution estimates that this price represents a multiple of more than 5x eVestment’s 2017 revenues of $140 million. (Note that this revenue estimate is based on context provided by Alphacution’s comprehensive “revenue per employee” [RPE] modeling.)
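As a quick sanity check, the figures above hang together arithmetically. A minimal sketch, using only the purchase price and revenue estimate cited in this post:

```python
# Back-of-envelope check of the eVestment deal figures cited above.
purchase_price = 744e6    # reported October 2017 acquisition price (USD)
est_2017_revenue = 140e6  # Alphacution's RPE-based revenue estimate (USD)

# Price-to-revenue multiple: consistent with "more than 5x"
multiple = purchase_price / est_2017_revenue
print(f"Price-to-revenue multiple: {multiple:.1f}x")  # -> 5.3x

# One quarter of the annual estimate, consistent with a ~$35 million
# post-acquisition contribution for a single quarter
quarterly_revenue = est_2017_revenue / 4
print(f"Estimated quarterly revenue: ${quarterly_revenue / 1e6:.0f} million")  # -> $35 million
```

This is illustration, not model output; the only inputs are the two publicly stated figures above.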
With the eVestment acquisition – and the subsequent inclusion of its revenue beginning in the fourth quarter of 2017, which Alphacution estimates at about $35 million post-acquisition – the previous picture could have looked something like this (which is what we expected to see back in April 2017):
First, why is this story important? There are “dominoes” like these falling all the time, and we believe that each one knocks into another, and so on. Something transformational happens to one company, and all of its stakeholders – vendors, partners, competitors, clients and investors alike – adjust to the impacts, both on the upside and downside.
These impacts show up (somewhere) in the data, much of it public data. Certainly, enough of the impacts show up in public data to produce signals for what we call navigational intelligence. The challenge, however, is that the wide variance of potential impacts means that they cannot be captured in a single model. Instead, there must be a diverse library of models that are pre-designed with the vision to be independent, and yet still fit together. More on this strategy in our growing research archives, and as we move forward…
Second, here are a few points to close with on the process of detecting the impacts of market shifts like this:
- Modeling of this type is a tedious “art form” – particularly in areas where there aren’t a lot of standards around the segmentation of the data – like revenue and expense segmentation. Taxonomies change often – and usually when companies want to increase opacity. (Many of our models are filled with these taxonomy tweaks.) Companies aren’t obligated to serve you the full truth on a platter – whether you can handle it or not. (Just ask Jack…) The good news is that the value of tedious modeling decays quite slowly.
- Without powerful context from contiguous modeling (which is a significant benefit of our 360-degree modeling strategy), it would have taken far more effort to fill in the missing pieces of this puzzle. All we really needed to know was one number: the rough RPE of a data products company, like eVestment. The rest – purchase price and headcount – was in the public data for all to discover.
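The RPE triangulation described in the point above can be sketched in a few lines. Note that the headcount and RPE values here are hypothetical placeholders chosen for illustration, not Alphacution's actual model inputs:

```python
# Sketch of the "revenue per employee" (RPE) triangulation described above:
# estimate a private company's revenue from headcount and a rough RPE,
# then back out the implied deal multiple from the public purchase price.

def estimate_revenue(headcount: int, rpe: float) -> float:
    """Estimate annual revenue as headcount x revenue-per-employee."""
    return headcount * rpe

# Hypothetical inputs: a data-products firm with 400 staff at ~$350k RPE
est_revenue = estimate_revenue(400, 350_000)
print(f"Estimated revenue: ${est_revenue / 1e6:.0f} million")  # -> $140 million

# The purchase price was publicly reported; the multiple falls out directly
purchase_price = 744e6
implied_multiple = purchase_price / est_revenue
print(f"Implied price-to-revenue multiple: {implied_multiple:.1f}x")  # -> 5.3x
```

The point is the shape of the calculation, not the specific numbers: one well-contextualized RPE figure plus public headcount and price data is enough to frame a deal.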
- Lastly, this type of research and modeling lies far beyond the latest AI (artificial intelligence). Multi-dimensional modeling on data where segmentation taxonomies are not standardized limits the level of automation. Translation: this data is too messy for AI to be accurate enough often enough to be credible. Too much risk of false positives. This is not to say that there aren’t opportunities for automation in our workflow – there are – but only that it will be hybridized, at best – given current levels of data standardization.
As always, thanks for your attention. And, if you like it, please share it…