Datacenters have long been the centerpiece of many financial services organizations' strategies. However, with a shift to hosted software, cloud computing, and other newer technologies that run outside of the traditional corporate data facility, the future of the datacenter is in question.
Older facilities are being shuttered and companies are consolidating datacenter operations to fewer locations with more computing power per square foot, lower operating costs, and better economics.
Firms that compete on latency continue to buy new datacenter equipment and discard older technology, but most firms are reconsidering continued investment in on-site servers, opting instead for cloud or outsourced systems.
"Bigger banks have pretty large datacenter footprints. Many have been repurposing the area, but it's not as though you can do it overnight."
-- Richard Mitterando, Nasdaq OMX
Optimization is no longer just about the infrastructure. Data has become a commodity, and the means by which it is delivered, whether through virtualized or on-site servers, is increasingly irrelevant to the firm. Terry Keene, CEO at iSys, believes that in the face of commoditization, the cost of data must be measured in a new way. For datacenters, rather than measure value using traditional capital expenditures or total cost of operations, total cost of information (TCI) is more appropriate.
TCI, a potentially complex method of calculating costs, accounts for the way a firm uses its technology to gather and utilize information. Julia Sears, associate VP of Nasdaq OMX FinQloud, says strides are already being made to apply TCI. "We look at total cost of ownership: humans, power, information, management, connection, the security upgrades required to continue having and building product in a traditional datacenter," Sears says. "We ask, what is the end-to-end total cost? How complex is it to upgrade software? We're looking at the full footprint."
Across the board, considering all components, total cost of ownership after switching to a cloud provider could be 40% to 60% lower, according to Sears. Anecdotally, Sears has seen savings as high as 85%.
Future Of The Datacenter

Over the past nine months, Nasdaq OMX has noticed various banks and market participant clients highlight their large datacenter footprints as a point of frustration. Many have overbuilt datacenters in anticipation of volume, but ultimately run their CPUs at only 2% utilization, rather than a more acceptable 50%. With firms looking for cost reductions, IT groups are investigating cloud offerings to better manage peak performance requirements.
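The utilization gap above translates directly into cost: a server that is busy 2% of the time delivers each unit of useful work at 25 times the cost of one busy 50% of the time. A minimal sketch of that arithmetic, using invented figures (the dollar amount and core count are assumptions, not from the article):

```python
# Hypothetical illustration of why low CPU utilization inflates effective cost.
# All dollar figures and hardware sizes are invented for this sketch.

def cost_per_useful_cpu_hour(monthly_server_cost, cpu_hours_available, utilization):
    """Cost of each CPU-hour that actually performs work."""
    useful_hours = cpu_hours_available * utilization
    return monthly_server_cost / useful_hours

MONTHLY_COST = 1_000.0   # assumed $ per server per month
CPU_HOURS = 730 * 16     # 16 cores x ~730 hours in a month

overbuilt = cost_per_useful_cpu_hour(MONTHLY_COST, CPU_HOURS, 0.02)
target = cost_per_useful_cpu_hour(MONTHLY_COST, CPU_HOURS, 0.50)

print(f"at  2% utilization: ${overbuilt:.2f} per useful CPU-hour")
print(f"at 50% utilization: ${target:.2f} per useful CPU-hour")
print(f"overbuilt capacity costs {overbuilt / target:.0f}x more per unit of work")
```

The ratio depends only on the two utilization figures, not the assumed prices, which is why pay-for-use cloud capacity looks attractive to firms sitting on idle hardware.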
Significant migration to the cloud will remain challenging over the next decade, especially as firms start moving over more sensitive information. Continued innovation in data hardware will enable cloud providers to keep pace. After all, clouds are datacenters, too.
Richard Mitterando, managing director of Nasdaq OMX, works with Verizon to run Nasdaq's US datacenters. In an interview, he says Nasdaq is adding density and power to its datacenters, including the capability to handle 20-kilowatt cabinets. Ten years ago, 2 kilowatts was considered a lot.
But will the traditional datacenter ever truly go away? "Bigger banks have pretty large datacenter footprints. Many have been repurposing the area, but it's not as though you can do it overnight," says Mitterando. "Firms have to do this in phases, and they are still holding onto things they want closer to them, like certain applications that are more proprietary to the business. So they are not getting rid of all of it, definitely not."

Becca Lipman is Senior Editor for Wall Street & Technology. She writes in-depth news articles with a focus on big data and compliance in the capital markets.