June 15, 2010

Ask any trader, vendor or marketplace operator in the global securities market, “How fast do you need to be in order to be successful?” and the answer will most likely be, “It depends.”

Technological advances move at such a pace, and firms rely on such varying strategies, that the level of latency acceptable to any given party will vary, though none of the intervals is perceptible to the human eye. (Latency here means the time it takes for an order to travel to a marketplace and be executed or canceled, and for a confirmation of that activity to return to its source.) For the past few years, hardware and software providers have cut latency dramatically year after year, and the conversation has moved from milliseconds to microseconds and even nanoseconds.
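The round-trip definition above can be sketched in code. This is an illustrative measurement harness only, not any venue's actual API; `send_order` and `receive_confirmation` are hypothetical stand-ins for a real connection to a marketplace.

```python
import time

def measure_round_trip_us(send_order, receive_confirmation):
    """Round-trip latency as defined above: the time from sending an
    order until confirmation of its execution or cancellation returns."""
    start = time.perf_counter()
    order_id = send_order()          # order travels to the marketplace
    receive_confirmation(order_id)   # confirmation travels back
    return (time.perf_counter() - start) * 1_000_000  # in microseconds

# Hypothetical stand-ins for a real venue connection:
def send_order():
    return "ORD-1"

def receive_confirmation(order_id):
    pass

latency_us = measure_round_trip_us(send_order, receive_confirmation)
print(f"round trip: {latency_us:.1f} microseconds")
```

In practice the two timestamps would be taken at the network boundary by dedicated monitoring hardware, since software clocks alone cannot resolve the microsecond differences discussed below.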

Although it may seem to be accepted wisdom that everyone wants to be "as fast as possible," that's not necessarily true. And as the market saw graphically and frighteningly in the May 6 flash crash, speed, by itself, is not the ultimate goal. In fact, lacking business rules that acknowledge the full implications of instantaneous transactions, speed is dangerous.

According to Adam Honore, senior analyst at Aite Group, the level of latency that is considered acceptable by market participants largely depends on several factors, including:

  • Trading style. If you have an aggressive trading style that relies on opportunistic pricing differentials, you need to be the fastest. If you are a long-only quant fund, speed is not as critical.
  • Instrument class. Generally speaking, equities are the fastest-moving markets, with futures, foreign exchange and fixed income lagging further behind.
  • Venue characteristics. The capabilities offered by each exchange and marketplace will vary — dark pools, intraday matching, live limit-order books, etc. — as will the level of traffic they attract throughout the course of the day.
  • Instrument characteristics. Trading shares of highly liquid Microsoft would have a vastly lower latency requirement than an illiquid OTC Bulletin Board stock, for example.

Certain latency ranges can be instructive, according to Steve Rubinow, CIO at NYSE Euronext. But it is important to remember that these numbers vary greatly with market conditions and that methodologies for measuring latency are not consistent, he adds, stressing that latencies quoted out of context with market traffic are essentially useless.

“Everyone publishes numbers that were generated under the best possible conditions and hopes nobody asks the details, because those details would reveal whether it was comparable or not,” Rubinow says. “Having said all that, to be competitive today, you have to be in the few hundred microseconds of turnaround time.”

When it embarked on its Universal Trading Platform (UTP) program last year, NYSE Euronext stated that it was aiming for 150 microseconds to 400 microseconds of latency per round trip for its European Cash Markets. By comparison, on May 14 NYSE rival Nasdaq OMX published an overall average result of 157 microseconds, while noting that 99.9 percent of orders were completed within 757 microseconds, at a rate of 194,205 orders per second.
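Figures like Nasdaq OMX's 157-microsecond average and 757-microsecond 99.9 percent bound come from summarizing large samples of per-order round trips. A minimal sketch of that kind of summary, assuming the common nearest-rank percentile method (the exchange's actual methodology is not specified here):

```python
import math

def latency_summary(samples_us):
    """Return (average, 99.9th-percentile) latency in microseconds,
    using the nearest-rank method for the percentile."""
    ordered = sorted(samples_us)
    average = sum(ordered) / len(ordered)
    # Smallest value such that at least 99.9% of samples fall at or below it.
    rank = math.ceil(0.999 * len(ordered))
    return average, ordered[rank - 1]

# Illustrative data: 999 fast orders plus one slow outlier.
avg, p999 = latency_summary([100] * 999 + [1000])
```

The gap between the average and the 99.9th percentile is exactly why Rubinow warns against headline numbers: a venue can post a low mean while its slowest orders, the ones that matter to an aggressive strategy, take several times longer.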

To illustrate how quickly the standard moves, the top class was in the tens of milliseconds a year ago, according to Donal Byrne, CEO and founder of Corvil, a Dublin, Ireland-based vendor of latency measurement technology. [Ed. Note: 1 millisecond = 1,000 microseconds.]

About 40 percent of U.S. equities market volume comes from market makers trying to match the latency of the marketplaces they use, notes Shawn Melamed, founder, president and CEO of New York-based Correlix, which also sells latency monitoring devices to exchanges, including Nasdaq OMX. That group needs the fastest response times, he says.

On the other hand, “For someone that is doing index arbitrage, the average latency they would require is really situational,” Melamed comments. “There is no fixed number there. [Because they need to get information from several marketplaces], they will be tolerant to higher latency, so that you can at least get full price information before you make your decision.”

Knowing Latency Is Half the Battle

The emergence of companies such as Corvil and Correlix, which did not exist five years ago, illustrates an important consideration about latency: Often, knowing the level of latency at a given marketplace matters more than how low that level is. Traders, and the algorithms they deploy, now can make decisions about execution venues using latency data, just as they would use fill rate and price quality as decision factors.
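That decision logic can be sketched as a simple weighted score per venue. The weights, the latency normalization and the venue figures below are illustrative assumptions, not any firm's actual routing model:

```python
def score_venue(latency_us, fill_rate, price_improvement_bps,
                w_latency=0.4, w_fill=0.4, w_price=0.2):
    """Toy venue-selection score: lower latency, higher fill rate and
    better price quality all raise the score. Weights are illustrative."""
    # Normalize latency so faster venues score higher: 1.0 at 0 us,
    # falling toward 0 as latency grows past roughly a millisecond.
    latency_score = 1.0 / (1.0 + latency_us / 1000.0)
    return (w_latency * latency_score
            + w_fill * fill_rate
            + w_price * price_improvement_bps)

# Hypothetical figures for two venues:
venues = {
    "Venue A": score_venue(latency_us=300, fill_rate=0.92,
                           price_improvement_bps=0.5),
    "Venue B": score_venue(latency_us=900, fill_rate=0.97,
                           price_improvement_bps=0.4),
}
best = max(venues, key=venues.get)
```

A smart order router could recompute such scores continuously as monitoring feeds from vendors like Corvil or Correlix update each venue's measured latency.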

At Tacoma, Wash.-based Russell Investments, which maintains the Russell 2000 small-cap index, traders rely on this operational transparency to make decisions, bringing the latency data about each execution venue and market-data source right onto trader desktops, relates Jason Lenzo, head of equity and fixed income. "To the extent that we have multiple paths to get to a venue, we can effectively normalize out the native latency within that venue," he says. "We can then optimize the speed to market across specific optical and telephony links in those networks."