A recurring question of our time is how to regulate artificial intelligence. Among those asking it is the European Commission, which was this week reported to be working on the technicalities of how to make self-driving vehicles safe. In the UK, preparations for legalising autonomous vehicles have recently required an update of the Highway Code.
This is all familiar territory for financial regulators, which for more than half a century have been figuring out how to police machines that pretend to be humans.
Mostly, their regulation is based around two simple principles: nearly every task that can be delegated to a computer will be considered of critical importance; and nothing the computer ever does will be truly autonomous. In combination these principles allow algorithms to be entrusted with money but, if they do something abusive or stupid, a human will always be held accountable. The buck never stops with the laptop.
Blanket principles won’t match the experience of traders and market makers, however. For them, machine-generated disorder remains a daily occurrence that seems to go unchecked.
Occasionally the effect can be obvious, as in May, when a Citigroup trader reportedly added an extra zero to a programme trade and set off a Europe-wide flash crash. More often, disorder is a routine part of the job, as when abusive tactics, such as entering and then immediately cancelling bogus orders, skew an order book.
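The place-then-cancel pattern described above is, in principle, detectable. Below is a minimal sketch of the kind of heuristic a surveillance system might apply: flag a participant whose orders are overwhelmingly cancelled within a fraction of a second of being placed. The function name, thresholds and data shape are illustrative assumptions for this sketch, not any exchange's or regulator's actual test.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    order_id: str
    placed_at: float                 # seconds since session open
    cancelled_at: Optional[float]    # None if filled or still resting

def flag_possible_spoofing(orders, max_lifetime=0.5, min_cancel_ratio=0.9):
    """Crude heuristic (illustrative only): flag an order stream where an
    unusually high share of orders is cancelled almost immediately after
    placement, a proxy for the place-then-cancel pattern."""
    if not orders:
        return False
    quick_cancels = sum(
        1 for o in orders
        if o.cancelled_at is not None
        and o.cancelled_at - o.placed_at <= max_lifetime
    )
    return quick_cancels / len(orders) >= min_cancel_ratio
```

Real surveillance is far harder than this: genuine market makers also cancel most of their quotes quickly, so any workable test must weigh order size, distance from the touch and subsequent trading on the opposite side, which is precisely why the background noise is so difficult to separate from legitimate activity.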
Monitoring by the Financial Conduct Authority and others should be detecting and investigating the most serious incidents, but these checks can only do so much. About 65-75 per cent of overall equity trading volume is executed algorithmically, according to a 2018 SelectUSA study, while in the foreign exchange markets the estimates are 90 per cent-plus.
Such prevalence brings with it a high risk of persistent machine-generated background noise. No one knows for sure how much prices are being subtly skewed by malicious or misprogrammed trading bots, particularly when they are running in concert. Is an obvious failure such as the May flash crash an isolated breakdown, or did it amplify distortions that on an ordinary day are too small to be detected (but still big enough to make someone a profit)? How can we ever be sure that the machines are fit for purpose?
Under European law, trading algorithms must be tested comprehensively before deployment. Firms must certify that their bot fleet won’t contribute to disorder and will keep working effectively “in stressed market conditions”.
And while the buck will always stop with humans, robots can be held to higher standards of conduct. There is no diminished responsibility when a bot acts impulsively or is successfully bluffed. A trader whose algorithm misbehaves when pitted against a manipulative or faulty strategy is also committing market abuse.
European trading venues carry the burden of guaranteeing that participants test to the gold standard, which is to prove that bots won't contribute to market disorder, and it is the trading venues that can be held responsible in the first instance for any failures.
But governance relies on self-certification and there’s no overarching rule book defining what should be tested and by what mechanism. Compliance appears very low as a result: industry consultant TraderServe estimates that fewer than half of firms have stress tested their strategies to the standard required.
And Europe has so far lacked a headline algorithmic trading prosecution to rival the US, where JPMorgan Chase agreed a $920mn settlement in 2020 over spoofing metals markets. The FCA in particular seems to favour a nudge approach, noting in its May 2021 Market Watch bulletin that "as a result of our enquiries", an algorithmic trading firm had changed its formula to help avoid "having an undue influence on the market".
In sum, Europe’s Mifid II regulations are a wide-ranging attempt to control algorithmic intelligence as it is applied to finance, which is probably the world’s most micro-regulated industry. And, more than three years after Mifid II became law, its fundamental tenet of societal safeguarding appears to remain widely ignored. Good luck with those self-driving cars.