The speed of light is about 186.3 miles per millisecond. A photon traveling the 2,787.7 miles from the corner of Sunset and Vine in Los Angeles to Wall Street in lower Manhattan would arrive in 14.96 milliseconds. That's in a vacuum. In the real world, data latency would slow that trip to around 35 milliseconds, according to a new report from TABB Group.
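The arithmetic above can be checked in a few lines. This is a sketch that simply reproduces the article's figures; the distance and the speed-of-light constant are taken from the text.

```python
# Back-of-the-envelope check of the vacuum-latency figure in the article.
C_MILES_PER_MS = 186.282   # speed of light in a vacuum, in miles per millisecond
DISTANCE_MILES = 2_787.7   # Sunset & Vine (LA) to Wall Street, per the article

vacuum_ms = DISTANCE_MILES / C_MILES_PER_MS
print(f"one-way vacuum latency: {vacuum_ms:.2f} ms")  # ~14.96 ms
```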
The difference comes not only from the multiple routers along the way, but also from the inherent slowdown in transmitting light over a fiber-optic cable. "Technology is not going to make that go away," says Dan Tuchler, senior director of product management at Mellanox, a supplier of silicon for ultrahigh-end switching fabrics. "Physics makes it impossible to improve that."
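The fiber slowdown Tuchler describes can be estimated. Light in silica fiber travels at roughly c/n, where the group index n is about 1.468 for standard single-mode fiber; that index, and the assumption that the fiber run matches the straight-line distance, are mine, not the article's.

```python
# Hedged estimate of propagation delay in fiber (index value is an assumption,
# not from the article; real fiber routes are also longer than straight lines).
C_MILES_PER_MS = 186.282   # speed of light in a vacuum
DISTANCE_MILES = 2_787.7   # LA to Wall Street, per the article
FIBER_INDEX = 1.468        # typical group index of single-mode silica fiber

fiber_ms = DISTANCE_MILES * FIBER_INDEX / C_MILES_PER_MS
print(f"fiber propagation alone: {fiber_ms:.1f} ms")  # ~22 ms
```

On these assumptions, fiber propagation alone accounts for roughly 22 of the ~35 milliseconds TABB cites, with routers, switches, and a longer-than-straight-line path plausibly making up the rest.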
That doesn't mean the effort to reduce data latency has ended. As Tuchler notes, most network latency comes from routers, switches and other equipment, not from the cables themselves. Vendors are pouring billions of dollars into reducing latency in their gear, fueling a move from Ethernet to InfiniBand, a high-speed input/output technology that accelerates the transfer of data-intensive files among servers, storage devices and networks. Found mostly inside data centers today, InfiniBand is likely to be extended to customer premises in the next few years.
Beyond that, there is still plenty of work to be done to reduce latency in servers and applications, for example by pairing InfiniBand with a protocol called Remote Direct Memory Access, which writes data straight into a computer's memory without passing it through the operating system, says Peter Lankford, director of the Securities Technology Analysis Center. He points to complex event-processing systems from companies such as Apama, Streambase and Vhayu.
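RDMA itself needs capable network hardware and a verbs library, but the zero-copy idea it relies on can be loosely illustrated in ordinary Python: slicing a buffer duplicates the data, while a memoryview exposes the same underlying memory without copying. This is an analogy of my own, not how RDMA is actually programmed.

```python
# Illustration only: RDMA's core benefit is avoiding intermediate copies.
# A memoryview is a loose analogy -- a view over a buffer with no copy,
# versus slicing, which duplicates the data.
data = bytearray(b"market data payload")

copied = bytes(data[:6])      # copies the first six bytes out of the buffer
view = memoryview(data)[:6]   # zero-copy view over the same six bytes

data[0:6] = b"MARKET"         # mutate the underlying buffer in place
print(copied)                 # b'market' -- the copy is stale
print(bytes(view))            # b'MARKET' -- the view sees the update
```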
"The question is, 'Where do you go next?'" says Mark Akass, CTO for global financial services at BT. "In terms of network infrastructure and connectivity you're pretty much hitting optimum speeds."