Why do we need different boxes for servers, storage, and network switches in the datacenter? They're all just computers, says David Reilly, global technology infrastructure executive at Bank of America. Why can't companies fill their datacenters with white-box computers stuffed with x86 chips and a ton of memory, controlled by software that can make that box an in-memory storage device today, a software-defined switch tomorrow, and a server next week?
This radical departure from today's datacenter approach isn't just idle salon chatter. Bank of America, this country's second-largest bank with about $2.1 trillion in assets, has a team of people right now exploring how to reinvent the bank's datacenters using a private cloud architecture.
The hardest part of getting to this kind of total reset of the datacenter, Reilly says, is persuading technologists to throw out their old ways of doing things and think more ambitiously. It's why Bank of America has created a separate team to develop the company's next-generation architecture, so team members can consider big ideas such as having only one type of hardware. "It's not the technical piece. It's: Why stop there, why not go further, why not do more?" Reilly says.
[For more on how David Reilly is transforming enterprise architecture at Bank of America Merrill Lynch, read: IT Is The Business.]
The bank wants that kind of blank-sheet thinking from its tech vendors, too. Reilly won't name vendors it's working with, but he says the team stood up two platforms for its private cloud environment, one proprietary and one based on OpenStack. The vendors it's working with are the ones embracing software-driven architecture and nonproprietary hardware.
"The hardware side of what they would do is something they should begin to let go," Reilly says. The bank is running its pilot on two platforms to keep its vendor options open, while "encouraging our large partners to feel like this is something of a burning platform that we need everyone to respond to."
Bank of America has about 200 workloads running on pilot versions of the new architecture, and it plans to put about 7,000 workloads into production this year. That volume still represents a small part of the bank's computing, but if it delivers strong results, it sets the stage for major adoption in 2015.
The business goal is to dramatically cut costs -- as much as 50% from today's best-case datacenter costs, Reilly says -- and let BofA respond more quickly to changing business needs, such as a spike in demand for network capacity or computing power (or, just as important, drops in demand when the bank wants less capacity).
Chris Murphy is editor of InformationWeek and leader of its Strategic CIO community. He has been covering technology leadership and strategy issues for InformationWeek since 1999. Before that, he was editor of the Budapest Business Journal, a business newspaper in Hungary.