Quicker, more accurate business decisions remain a key interest across many industries. The capital markets, of course, wrote the book on this topic! Lots of people talk about big data and analytics -- but the capital markets have been delivering real-time analytics on big data in motion for a long time.
I've long been a fan of technologies that allow real-time analytics and complex temporal decisions to be automated. Now, however, I am seeing a new imperative emerging alongside these real-time systems: self-learning, self-evolving systems.
Traditionally, most trading algorithms work from pre-defined rules. When events occur in the market, the algorithm responds by updating its state or by placing and managing orders. Decisions happen very rapidly, usually at sub-millisecond speeds. This is fine, but is there a way for an algo to come up with new trading ideas? And what about spotting when existing algorithms stop working well in current market conditions?
The markets evolve so quickly that rules designed to trade on market patterns can rapidly become out of date. For instance, several years back the Aite Group reported that the average shelf life of a trading algorithm is three months. I suspect that if the research were done today, that figure would have come down even further. New algorithm ideas might come from quantitative research or from trader intuition.
A key competitive advantage is to discover a new pattern of trading before your competitors do. But it is equally important to spot when a trading pattern is no longer working as intended. Self-learning techniques offer a potential route for smart algorithms both to spot new trading opportunities and to detect when existing strategies are no longer effective.
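One simple way to detect that a pattern is "no longer effective" is to compare a strategy's recent performance against its own historical baseline. The sketch below does this with daily P&L; the window sizes and the decay threshold are my own assumptions for illustration, not an established methodology:

```python
# Illustrative sketch: flag a strategy whose edge appears to have decayed,
# by comparing recent average daily P&L to the longer-run baseline.
# Window sizes and threshold are assumptions made for this example.
from statistics import mean

def edge_has_decayed(daily_pnl: list[float], recent_days: int = 10,
                     min_fraction: float = 0.25) -> bool:
    """True if the recent-window mean P&L has fallen below a fraction
    of the longer-run mean, suggesting the pattern has stopped working."""
    baseline = mean(daily_pnl[:-recent_days])  # older history
    recent = mean(daily_pnl[-recent_days:])    # latest window
    return baseline > 0 and recent < min_fraction * baseline

# A strategy that earned ~100/day for 30 days, then faded to ~10/day.
pnl = [100.0] * 30 + [10.0] * 10
print(edge_has_decayed(pnl))  # → True
```

In practice a real decay monitor would use risk-adjusted measures and statistical tests rather than a raw mean, but the shape of the problem -- learn a baseline, then watch for departure from it -- is the same.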
Machine Learning for Market Surveillance
Along with trade pattern discovery, another application for self-learning systems is market surveillance and risk. New forms of market abuse and risk breaches emerge all the time. It's a bit like viruses on the Internet -- one has to rely on someone spotting a new virus and writing an update to the anti-virus. Self-learning algorithms can be used to figure out what constitutes "normal behavior" for the market, or for a specific trading desk, and then track that over time. If new patterns emerge that depart from normality, they may be abuse and should be investigated.
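The "learn normality, flag departures" idea can be sketched very simply. Here I assume a desk metric such as a daily order-to-trade ratio, model its history as roughly Gaussian, and flag anything outside a 3-sigma band; the metric, window, and threshold are all illustrative assumptions:

```python
# Hedged sketch of normality tracking for surveillance: learn a baseline
# distribution for a desk metric, then flag large deviations.
# The metric, history window, and 3-sigma threshold are assumptions.
from statistics import mean, stdev

def flag_anomalies(history: list[float], new_values: list[float],
                   sigmas: float = 3.0) -> list[bool]:
    """Return True for each new value outside the learned normal band."""
    mu, sd = mean(history), stdev(history)
    return [abs(v - mu) > sigmas * sd for v in new_values]

# 20 days of typical order-to-trade ratios around 5.0, then three new days.
baseline = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3, 5.1, 4.9,
            5.0, 5.2, 4.8, 5.1, 5.0, 4.9, 5.2, 5.1, 4.8, 5.0]
print(flag_anomalies(baseline, [5.1, 4.6, 9.5]))  # → [False, False, True]
```

The flagged value is not proof of abuse -- it is a prompt for a human to investigate, which is the division of labor the rest of this piece argues for.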
There are a number of different self-learning (sometimes called "machine learning" or even "artificial intelligence") techniques in use. Currently, machine learning is not truly "real-time" in that it cannot take into account things that are changing instantly. It is more of a technique to analyze existing behavior to predict what will happen next. However, I see that evolving over the next decade. It's also still a bit of a black art, relying on experts to program and tune it. I see machine learning becoming more accessible to mere mortals over the next decade.
I still believe that the human trader is a critical piece of the puzzle. There's currently no substitute for human intuition. Algorithms are still not as smart as experienced humans. And often the world can get, very suddenly, weirder than any algorithm can cope with. I've seen hedge funds set up to trade with AI algorithms that claim to know, with certainty, whether the market will go up or down tomorrow. Some of these funds have closed down because they didn't get it quite right and conditions changed quickly.
[To read about one hedge fund that is using artificial intelligence to determine its strategy, read: NYC Hedge Fund Trains Computer to Pick Stocks].
A self-learning system may spot complex patterns, but these need to be validated before any action is taken. A trader, quant or risk officer can use tick databases, analytics and other research tools to investigate the self-learning system's observations in more detail to see if they agree. A market expert can then encode real-time rules into algorithms to monitor for these trading, risk and surveillance patterns. Results can be recorded and fed back into the self-learning system to improve its performance.
The human will evolve further into the "validator" (or not) of AI strategies. Upon validation, a strategy could be put into action. Without that validation, an undetected error could cause massive losses. So people are here to stay, at least for now.

Dr. John Bates is a Member of the Group Executive Board and Chief Technology Officer at Software AG, responsible for Intelligent Business Operations and Big Data strategies. Until July 2013, John was Executive Vice President and Corporate Chief Technology Officer at Progress ...