As financial institutions expanded into derivatives and fixed income over the years, they built complex, distributed-computing environments, networking high-end Unix servers to meet the needs of traders and risk managers in the front and middle offices. But times have changed.
Now firms are wrestling with bloated transaction-processing infrastructures and distributed environments that are costly to maintain. Estimates are that only 15 to 20 percent of their compute cycles are being utilized. These facts are causing industry insiders to ask: Why not migrate off heavy iron and run the compute-intensive risk calculations on commodity hardware, at one-tenth the price? And why not exploit those unused processing cycles and optimize application performance by allocating jobs to idle machines?
In a move to cut hardware costs while boosting performance of analytics, Wall Street firms are gravitating toward grid computing - a distributed-processing technique that balances the workload of compute-intensive applications across multiple machines and potentially thousands of processors.
Bank One's interest-rate derivatives-trading business is doing exactly that for its Chicago-based trading floor, where it must process massive risk analytics. "There are really two different ways to handle that," explains Peter Dennis, Bank One's senior vice president and manager of capital markets systems.
One way is to use a giant, super-fast processor; the other is to use many smaller processors and distribute the computing work across them. "We chose the latter approach as being far more cost effective," says Dennis, who has built a server farm of 100 to 150 Intel boxes. The bank is implementing Calypso Technology's transaction-processing system with proprietary analytics and plans to distribute Calypso's risk-analytics processing using software from Data Synapse, a New York-based grid-middleware specialist.
Teaming up with IBM in an R&D project, Charles Schwab was able to speed up a portfolio-rebalancing application from four minutes to 15 seconds by exploiting unused processing cycles on non-dedicated servers. "The economics are so compelling as well as the turnaround," says David Dibble, executive vice president of Charles Schwab's Technology Services Division.
One purpose of the project - sponsored by Schwab's Advanced Technology Group (ATG) - is "to see if we can capture the free cycles available on equipment that's already been built and paid for in an effort to deploy what we're referring to as technologically enabled, high-performance customer-advice products," he says.
To accommodate highs and lows in the trading day, Schwab, like most brokerage firms coping with the Internet trading boom, built capacity across all of its servers to handle the peak-hour processing. "The machines have free cycles not only during that period but a lot of free cycles during slower parts of the trading day," says Dibble.
Though Schwab didn't have a specific business application in mind, says Dibble, "Portfolio rebalancing is a big deal for the firm because understanding all the options across the universe of financial instruments for a particular customer is very complicated, mathematically." If a customer asks a Schwab representative to run different scenarios, "four minutes per scenario is boring," says Dibble, "but 15 seconds is more like a mouse click."
On Wall Street, grid is most applicable to numerical, compute-intensive applications. "There's a ton of analytical computing that occurs in the trade analysis, customer analysis, credit and derivatives analytics, where grid fits in well," says Stephen Resar, vice president for the professional services division at Ontario-based Platform Computing. "Instead of companies having to change their applications, we can put in technology to increase the capacity of their infrastructure," says Resar.
In the case of business analytics, "you're talking about lashing together multiple servers that are at different locations and having them work together to solve a problem," says Al Bunshaft, vice president of grid business development and sales for IBM. On Jan. 27, IBM announced it had designed 10 grid offerings for five key industries, including financial markets.
"We're seeing tremendous activity in the financial-services sector, even today," says Bunshaft. Big Blue is working with all of the world's leading financial-services institutions in two primary areas: accelerating business-analytics applications and scaling up analytical applications. A third area that IBM calls "enterprise optimization" offers Wall Street firms a way to save costs by exploiting unused compute resources.
Firms that are utilizing only 15 to 20 percent of their compute cycles are looking to cut back on demands for new hardware. "Firms are trying to reduce their IT budgets by 20 percent. Those are hundreds of millions of dollars," comments Frank Cicio, chief operating officer of Data Synapse.
Noting that hardware is the second-highest expenditure after compensation, Cicio contends: "If you have the opportunity to consolidate your hardware, you could probably do with half of what you have. To do that, you need the technology to optimize utilization - and that's grid."
Grid software, he says, enables firms to virtualize all compute resources, as if they were one supercomputer. For example, if it's 3:00 p.m. in New York, a trading application can take advantage of dormant capacity in Europe or Asia, says Cicio.
Distributing the Workload
In Bank One's case, Data Synapse is "distributing the jobs as they're fired off from Calypso over the grid," explains Cicio. Calypso is a front-to-back-office transaction-processing application that provides a straight-through-processing (STP) framework.
Farming out a massive amount of processing over 150 Intel-architecture-based servers and "controlling that is a complex process," admits Bank One's Dennis. "To control the distributed process, we'll be using some software (Live Cluster) from Data Synapse," he says.
In the past, Wall Street's investment banks built their own distributed-processing environments, explains Cicio. "Customers that have written their own in-house systems have as many as six products ... They want to get out of the plumbing business," says the chief operating officer.
By breaking up a mathematical problem into smaller pieces, grid computing allows a derivatives-trading shop like Bank One to process the different pieces on less expensive, smaller machines, get back the individual answers, and then aggregate the results.
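The split-process-aggregate pattern described above can be sketched as a scatter/gather loop. This is a toy illustration, not Bank One's system: the "pricing kernel" is a made-up one-liner, and a thread pool on one machine stands in for the grid's worker nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def value_slice(trades):
    # Toy pricing kernel; in production this would be the risk analytic.
    return sum(t["notional"] * t["factor"] for t in trades)

def chunk(seq, n):
    """Split the book into n roughly equal slices."""
    k = max(1, len(seq) // n)
    return [seq[i:i + k] for i in range(0, len(seq), k)]

# A hypothetical book of 400 identical trades.
book = [{"notional": 1_000_000, "factor": 0.01} for _ in range(400)]

# Scatter: farm each slice out to a worker. Gather: sum the partial answers.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(value_slice, chunk(book, 4)))

total = sum(partials)
print(total)  # 4000000.0
```

Grid middleware such as LiveCluster plays the role of the pool here: it decides which physical machine runs each slice and collects the results.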
Besides saving hardware costs, grid computing is expected to boost model accuracy by allowing firms to pursue more paths of analysis. Most firms have to trade off model accuracy for speed - "They may choose a simple Monte Carlo simulation to get a ball-park price, but if they used a more complex model they'd get a sharper price," says Jeremy Dobrick, Bank One's first vice president of interest-rate systems development. "When you have faster compute power available to you, you can step up and use those more accurate models with less worry about the overall impact on your ability to price trades to the market," he adds.
But is this any different from distributed processing, which first came of age in the 1980s?
"What's now being called grid, I see as not much different than what used to be called distributed processing, which is essentially the ability to farm out calculations from a single application over multiple, physically separated CPUs (central processing units)," says Debbie Williams, group vice president of capital markets and corporate banking with Financial Insights.
Though grid-computing draws upon distributed processing and, in that sense, is not completely new, "What has changed, somewhat dramatically, is the industry's appetite for capital expenditures related to large hardware platforms," contends Williams.
Grid computing is also enabling the financial industry's migration to cheaper Intel-based hardware running Linux (a pairing sometimes called Lintel), says Williams. The trend is enticing right now because grid computing facilitates the move to Linux - the free, open-source, Unix-like operating system that poses a threat to Sun Microsystems' Unix servers, the dominant systems used in capital markets. "You have Linux and then you have enormous cost pressure, so institutions who wouldn't have gambled on Linux 24 months ago now think Linux is their savior," she says.
For example, Charles Schwab took an existing wealth-management application that ran on non-IBM hardware and grid-enabled it using the Globus Toolkit (running Red Hat Linux on IBM eServer xSeries 330 machines).
Outside of the grid initiative, Schwab has a stated strategic direction to move toward an Intel-based architecture, says Dibble, noting that in 2003 there is a big push to move applications running on proprietary Unix over to Linux running on Intel.
Cost Factor: Server Farm versus Big Iron
While some firms are adopting grid computing to speed up analytical applications, a speed-versus-cost trade-off is clearly driving adoption decisions.
"We don't go to grid computing for speed, per se. In fact, most grid solutions, because you're doing this distribution, are going to introduce overhead," explains Dobrick. In a perfect world, he says, "I would have an infinitely fast single processor execute my job as quickly as my grid." The decision comes down to trading off performance against compute costs. "I would expect a 20-CPU single machine would perform better than a 20-node grid," he says. But that larger machine is likely to cost four to 10 times the price of the 20-node grid, for a gain of perhaps only 10 to 20 percent in throughput.
In addition to comparing the cost of a large versus a small machine, firms have to look at the overhead of the grid software.
Grid software costs $100,000 a year to license, at the low-end, says Cicio, who adds: "You get what you pay for." Another issue, adds Dobrick, is that if a firm buys a 20-CPU machine for $1 million, and it runs out of capacity, "My next investment upgrade is then another million dollars." Whereas with grids, "I can scale out horizontally with a fairly linear cost structure, keeping costs somewhat constant over time," says Dobrick.
Whereas a proprietary Unix processor, such as a large Sun box, can go for $25,000, a Linux processor can cost $2,500. "That doesn't mean Linux runs as Unix does," says Cicio, noting that Unix has been hardened to industrial strength. But firms that implement commodity hardware are more apt to throw away a $2,500 processor when it breaks than pay to maintain it, he adds.
On the other hand, with a Linux/Intel architecture, the catch is that firms may need more systems management. "Because it's a commodity, and applications are not a commodity, that plays to our strength," says Cicio.
Acknowledging that Linux "is certainly a trend," Bank One's Dennis says: "We're not planning to use it at this stage," explaining that the bank will run Windows on Intel boxes.
Meanwhile, cost seems to be the main driver for both moving to Linux and grid computing. "There are applications that exist today that just aren't fast enough and looking to speed them up on their current platform is more expensive than over a grid of lower-cost platforms," says Williams.
Grid Rivals Gird for a Street Fight
Two companies influencing the financial industry's adoption of grid computing - Data Synapse and Platform Computing - are contending for the Street's business, but they come from opposite sides of the tracks.
Platform is a 10-year-old Canadian company with $50 million in revenues, 500 employees and 1,600 customers, focused on distributed-workload computing and on bringing grid computing into the commercial world, says Platform's Resar.
Meanwhile, Data Synapse, whose founders come out of investment banking, is a five-year-old start-up specializing in grid computing for financial markets. The company's main product, LiveCluster, has been deployed over the past two-and-a-half years and currently has 12 financial-services accounts, including Wachovia and Bank One. Cicio says these accounts include top-10 bulge-bracket firms, which he declined to name.
"Whereas Platform was designed for applications that run in a batch mode and need to be farmed out across a bunch of processors, Data Synapse was designed for the financial-services industry, where the demand is for real-time applications in derivatives processing," says Financial Insights' Williams.
While Platform made its name in manufacturing and life sciences, it's expanding into financial services. Last year, it struck a deal with JPMorgan Chase to jointly develop a grid product, called Symphony, that will compete with Data Synapse in managing applications in real time. But some industry sources are casting doubt on whether Symphony is finished. They also contend that JPMorgan Chase is Platform's only client in financial services. A call to a JPMorgan Chase spokesperson was not returned by press time.
"We are unaware of any major production environment that Platform has with either Symphony or LSF (a workload-management product) within the financial community," says an industry source. We know it's out there with industrial clients, says the source, adding, "That's their genesis."
In response, a Platform spokeswoman says: "Symphony is generally available, we just haven't announced it publicly yet, but we will be doing so in the next few weeks. JPMorgan Chase is using Symphony, although it started with a beta version, and is moving into production," she contends.
The spokeswoman states: "Platform has approximately 70 financial-services customers around the world. Depending on their needs, they use a variety of products including Platform LSF (workload management), JobScheduler, Symphony, Site Assure and Platform Intelligence (IT analytics)."
According to the firm's Web site, Deutsche Morgan Grenfell, now part of Deutsche Bank Group, is a user of Platform LSF MultiCluster software to handle processing for a portfolio-risk-analysis function.