CFTC commissioner Bart Chilton recently proposed a bold idea: Exchanges and regulators should test trading algorithms before they go to market and perhaps award them a "Good Housekeeping Seal of Approval" before launch.
While this may be the first time that a CFTC commissioner has talked about this, the concept is part of the European Commission's MiFID Review from this past December.
Putting a Seal of Approval on a trading algorithm that can spew thousands of orders into the market in a second is a perfectly legitimate idea. But it's not very practical. The problem with the official vetting of all algos is that the majority of executed trades today are generated or managed by computers.
While many auto-generated orders are created by high-frequency firms, many asset managers use non-latency-sensitive quantitative strategies (or machine-generated orders) to determine what to buy and sell.
So unless we want to vet virtually every quantitative trading strategy, we need to manage our definitions with care. Otherwise, virtually everyone short of an individual buying 100 shares of IBM may need to have his or her quantitative models vetted.
Further, who would be responsible for verifying that the algos were vetted? Even if the proposal were limited to HFT-type formulas, algorithms can be hosted at many points in the trading chain - from the exchange, to the broker's data center, to the institutional investor's data center and even a retail investor's desktop.
At a basic level, an investor can obtain an aggregated data feed, develop a trading model and have orders sent to a broker via a FIX gateway. In this instance, it would be nearly impossible for the broker even to know whether the orders were created by an algo or entered manually through a direct-market-access (DMA) platform.
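To see why the broker can't tell, consider what actually arrives at the FIX gateway. The sketch below builds a minimal FIX 4.2 NewOrderSingle message in Python; the session IDs and order details are made up for illustration. Nothing in the message indicates whether an algorithm or a human produced it.

```python
# Minimal sketch of a FIX 4.2 NewOrderSingle message. Session IDs (BUYSIDE,
# BROKER) and order details are assumed for illustration. Note that no field
# in the message reveals whether an algo or a person created the order.

SOH = "\x01"  # FIX tag=value field delimiter

def fix_message(fields):
    """Assemble a FIX message: prepend BeginString/BodyLength, append CheckSum."""
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    raw = head + body
    checksum = sum(raw.encode()) % 256  # standard FIX mod-256 checksum
    return raw + f"10={checksum:03d}{SOH}"

# Hypothetical order: buy 100 shares of IBM at the market.
order = fix_message([
    (35, "D"),        # MsgType: NewOrderSingle
    (49, "BUYSIDE"),  # SenderCompID (assumed)
    (56, "BROKER"),   # TargetCompID (assumed)
    (11, "ORD0001"),  # ClOrdID
    (55, "IBM"),      # Symbol
    (54, "1"),        # Side: buy
    (38, "100"),      # OrderQty
    (40, "1"),        # OrdType: market
])
```

Whether that message came out of a quant model or a trader's keyboard, the broker receives the same bytes.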
Even if the customer was required to register the algo with an exchange or regulator, how would the algo be tested?
Superficial testing might not be a problem: test scripts replaying a series of historical, and even traumatic, days could be assembled fairly easily. Unfortunately, the market does not behave consistently. What works today may not work tomorrow. Which raises another question: Each time the algorithm is tweaked, would it need to be retested? In theory, yes, as once you touch the code, you can mess up the algo. But who would have the time and computing capacity to run so many simulations?
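The kind of replay harness this implies can be sketched in a few lines. Everything here is illustrative: the toy momentum algo, the price series and the order-count limit are assumptions, not any regulator's actual test.

```python
# A sketch of an algo replay harness: run a strategy over historical days,
# including stress days, and flag runaway order flow. The algo, the data and
# the risk limit (MAX_ORDERS_PER_DAY) are all made up for illustration.

MAX_ORDERS_PER_DAY = 1000  # assumed pass/fail threshold for the test

def momentum_algo(prices):
    """Toy algo: emit a buy (+1) or sell (-1) on every price move."""
    orders = []
    for prev, cur in zip(prices, prices[1:]):
        if cur > prev:
            orders.append(+1)
        elif cur < prev:
            orders.append(-1)
    return orders

def replay(algo, days):
    """Replay each historical day; record order count and pass/fail."""
    results = {}
    for name, prices in days.items():
        orders = algo(prices)
        results[name] = (len(orders), len(orders) <= MAX_ORDERS_PER_DAY)
    return results

days = {
    "calm_day": [100.0, 100.1, 100.1, 100.2],
    "flash_crash": [100.0 - i * 0.5 for i in range(50)],  # steep sell-off
}
report = replay(momentum_algo, days)
```

The harness itself is trivial; the hard part is the point of the paragraph above: yesterday's scenarios say little about tomorrow's market, and every tweak to the algo would mean rerunning the whole suite.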
Commissioner Chilton proposed that the exchanges or regulators review the algorithms. Good luck. With more than 50 U.S. equity trading venues, how could an exchange, ECN or dark pool take responsibility for validating an algo that works across so many platforms? Or would the testing occur across all 50 platforms? Talk about a logistical mess.
And how would the regulators vet algos? Neither the SEC nor the CFTC has the time, understanding or technology to manage this enormous task. (And didn't the CFTC just have its technology budget cut and its supervisory powers doubled?)
A much better - if still problematic - solution is the SEC's Rule 15c3-5, "Risk Management Controls for Brokers or Dealers With Market Access," in which the regulator proposes that the orders generated by algorithms be monitored - and not the algorithms.
While this rule is still being challenged, it is a much simpler, cleaner and better solution than testing every algo that gets employed in the market.
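The appeal of monitoring orders is that the checks are simple and sit at one choke point, regardless of what produced the order. The sketch below shows the flavor of such pre-trade controls; the specific limits and function names are assumptions for illustration, not language from the rule.

```python
# A sketch of pre-trade order checks in the spirit of market-access risk
# controls: the broker validates each outgoing order against hard limits,
# without ever inspecting the strategy that produced it. The limits and the
# check_order interface are assumed for illustration.

MAX_ORDER_QTY = 10_000      # assumed per-order share limit
MAX_NOTIONAL = 1_000_000.0  # assumed per-order dollar limit

def check_order(symbol, qty, price):
    """Return (accept, reason) for a single outgoing order."""
    if qty <= 0:
        return False, "non-positive quantity"
    if qty > MAX_ORDER_QTY:
        return False, "quantity exceeds per-order limit"
    if qty * price > MAX_NOTIONAL:
        return False, "notional exceeds per-order limit"
    return True, "ok"
```

A gate like this runs in microseconds per order and never needs to know whether the order came from an HFT engine, a quant model or a human.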
Regulators need a better understanding of how the market works, which would help them avoid industry headaches and create a safer and more effective trading environment. Now that would be a Seal of Approval.

Larry Tabb is the founder and CEO of TABB Group, the financial markets' research and strategic advisory firm focused exclusively on capital markets.