In financial services, owning and operating a proprietary datacenter has long been a cost of doing business. For decades, Wall Street banks built gleaming facilities that were the pride and joy of CTOs and operations executives.
However, as the economy shifted post-2008 and newer technologies emerged, these facilities have often become a costly burden to banks. With operational costs that continually increase, datacenters are now a topic of discussion for boards of directors and shareholders alike. Energy, cooling, and real estate expenses keep climbing and weigh heavily on the bottom line. Many are asking whether there is a better way to provide computing power for a bank.
Currently, most large financial institutions run multiple datacenters that they either own or lease entirely from a third-party provider. The facilities are custom-built to the needs of the organization at a particular point in time, but as computing needs change, technology advances, or the business climate shifts, the datacenters remain in place. In short, there is little flexibility with existing space. A drastic shift in the marketplace may mean a particular datacenter runs at only partial capacity, and technology, such as the latest servers and networking equipment, needs to be upgraded or replaced at regular intervals.
Across many verticals, such as retail and pharmaceuticals, there has been a push to adopt cloud technology or move applications to third-party cloud-based services. In financial services, regulatory and privacy concerns have slowed the move to the cloud. Financial services organizations are reluctant to move customer data to the cloud, and many groups worry about placing sensitive intellectual property with a third-party provider. This may be changing.
According to Tony Bishop, Chief Strategy Officer of The 451 Group and Co-Head of 451 Advisors, financial services firms are already beginning to rationalize and reorient their datacenter footprints. "Across the datacenter portfolio a rationalization will start to occur that applies decision filters related to ownership of control, placement and sourcing of datacenter and services," Bishop says. "Non critical and sensitive workloads will be placed in lower-cost datacenters out of region that are managed by a third party. SaaS-delivered applications supporting corporate functions with non-regulated data will be shifted to trusted cloud providers such as Salesforce, Workday or Taleo, for instance." Lastly, high-performance workloads, such as trading systems, or applications with sensitive regulated data will be maintained in the firms' existing (controlled) datacenters, although those datacenters will likely be reoriented to meet current demand, Bishop adds.
By "reorientation," Bishop explains, "the datacenter will go through a top down/bottom up re-orientation in how services are built and provisioned, services are delivered and consumed, and physical capacity is transformed to support a virtual assembly fulfilling dynamic and fluctuating workload behaviors." For instance, depending on user demand, datacenter capacity will need to be flexible to handle varying applications and needs, Bishop notes. "User experience and expectations will drive logical and physical configurations. Designs of logical and physical infrastructure will need to support differentiated workload and service requirements along with ability to be re-assembled and re-sized on the fly," he says.
[To hear more from Tony Bishop about how financial firms are managing their complex data architectures, attend the Future of the Financial Services Data Center panel at Interop 2014 in Las Vegas, March 31-April 4.]
On-the-fly resizing of datacenters is not something that current strategies can accommodate, but it is something they will need to do in the future. In fact, Bishop's description of a flexible datacenter strategy sounds nothing like what currently exists at most financial services organizations. "The era of one size fits all datacenter tiering … will be replaced by mixed tiers. Power, cooling and whitespace will be modular, just in time and software will be integrated & managed," Bishop continues. " … Strategies related to memory, compute, I/O, and placement of external networks will be rethought around aggregated hubs of carriers, cloud and mobile providers. Critical applications and data services will be rearchitected to support abstracted, high concurrency, fluctuating load and variable capacity processing."
Such a drastic shift from a static, inflexible model to one that is "just in time," as Bishop calls it, must be managed correctly. "To accommodate and achieve these changes, firms will need to institute a holistic and integrated strategy and planning model." For instance, there will be trade-offs and potential impacts to service by moving to a new model. These impacts "must be transparent and understood" before they occur, he adds. "An implementable and evolving enterprise architecture, married with an engineering discipline -- from use case, to application, to workload, to service configuration, to physical execution -- must become the norm of the business," Bishop continues.
Moreover, the current practice at many firms, where siloed business units each have their own computing demands, needs to end. "Today, the challenge is firms operate siloed and disparate business, application and data, systems and facilities planning. This cannot continue," Bishop maintains. If it does continue, "a crisis of inability to execute will occur."

Greg MacSweeney is editorial director of InformationWeek Financial Services, whose brands include Wall Street & Technology, Bank Systems & Technology, Advanced Trading, and Insurance & Technology.