In the last few months, the financial industry has witnessed a growing number of technology failures, ranging from Nasdaq’s botched Facebook IPO to BATS’ failed IPO and Knight Capital’s $400 million software glitch.
In a recent Financial Times column, Andrew Lo, professor at the MIT Sloan School of Management and chief investment strategist at AlphaSimplex Group, pointed out that our 21st century digital economy now has to contend with a new risk: technology risk.
He suggests that the financial system has reached a level of complexity that means only “power users” – highly trained experts with domain-specific knowledge – are able to manage it. Further, since financial markets and technology are constantly evolving and technology advances are increasingly widely adopted, there aren’t enough experts to handle the technology that is driving the markets.
He argues that the solution is not to “forswear financial technology but to develop more advanced technology – so advanced that it becomes foolproof and invisible to the human operator.”
But can there ever be such a thing as foolproof technology? I asked Sang Lee, co-founder of research firm Aite Group, and Dr. Howard Rubin, founder of Rubin Worldwide, a research and advisory firm focused on the economics of business technology, for their thoughts on this thorny issue.
Wall Street & Technology: Do you agree with Andrew Lo’s view that the financial system is now so complex that only highly trained experts are able to manage it, and there aren’t enough of them to go around?
Sang Lee: I would agree, and I could say the same about almost any technology out there. It's the technologist's job to make a system, however complicated it is, easy to use from the user's perspective. But understanding how it works behind the scenes has always been complicated. Look at Microsoft Office. People talk about the gazillions of lines of code in that thing. Only half the people at Microsoft understand how it is built.
Howard Rubin: The technology economy is growing so fast that there will always be a skills shortage; demand will exceed supply. Technology is changing fast, and it's hard to find people with the right skills. But I would separate that from the market failures. Those go back to testing discipline and testing completeness. Obviously, as systems and their interactions get more complex, it raises the question: how do you even test these things? I think testing technology and capabilities lag behind both the demand and the supply. What represents an adequate test before you go to market with something that can impact billions or trillions of dollars in a nanosecond or so? How do you actually engineer the testing? Building systems that behave like markets, just to be able to test against them, is expensive; someone has to figure out the risk-reward of doing that.
Watch WS&T's interview with Dr. Howard Rubin:
WS&T: Do you think technology can ever be foolproof or do firms just need to factor in technology risk, or operational risk, to a greater extent than before?
Sang Lee: My wife recently splashed some oil while she was cooking, and it headed toward her eye. Her eyelid [automatically] closed. She noted that it's amazing how your body reacts without you doing anything. That's what technology is for. If you look at the application of technology, it's taking a human function and automating it. You're doing it repeatedly and using technology to mimic human function. But that doesn't mean it's as seamless as how the human body reacts. Often it takes a lot of code to perform a single human act, which by definition is complicated. There is no such thing as a foolproof machine. We're trying to mimic human function, and human function is not perfect. We break down, sometimes without reason. And there are always unintended consequences.
Howard Rubin: The issue has to go back to risk-reward. What is good enough to manage the risk, consistent with the impact you might have? How do you test to that level, and what does it take to get to the next level? I think engineering the quality of systems becomes key. It's really examining what "good enough" means. Look at data centers: people go for "six nines" availability (Editor's note: 99.9999% uptime). Does that mean you're out no more than thirty seconds a year, or three seconds a year? What happens if there's a major transaction in those thirty seconds or three seconds? It really doesn't matter whether there's "six nines" or "five nines" availability. So it's this very delicate balance between the engineered level of quality, the risk at that level of quality, the cost of taking it to the next level, and the risk-reward.

If you look at software testing techniques, in some ways they really haven't evolved since the beginning of software development. So clearly the demand for, and the options we have with, technology are growing faster than software development and testing methods. It's quite a conundrum. We're getting ahead of ourselves. The concept of zero-defect software has been around in the literature for fifty or sixty years. It's a matter of: at what cost? And that becomes key.
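As a rough check on the "nines" arithmetic Rubin mentions, here is a minimal sketch (assuming a 365.25-day year; the function name is our own illustration, not from any source) of how the allowed annual downtime shrinks by a factor of ten with each additional nine of availability:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_per_year(nines: int) -> float:
    """Maximum downtime in seconds per year at a given number of nines.

    E.g. nines=6 means 99.9999% availability.
    """
    availability = 1 - 10 ** (-nines)
    return SECONDS_PER_YEAR * (1 - availability)

for n in (4, 5, 6, 7):
    print(f"{n} nines: {downtime_per_year(n):,.2f} seconds of downtime/year")
```

Six nines works out to roughly 31.6 seconds of downtime a year, and seven nines to roughly 3.2 seconds, which is the "thirty seconds a year, or three seconds a year" range Rubin describes.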