Over the last several years, investment banks and financial firms have faced declining business. Hit by the crisis in 2008, they have spent the last three years recovering. Within the technology organization, recovery actions include optimizing IT spending, reducing in-house software inventory and improving financial software development processes and teams. Most internal research and "investment" projects have been frozen, and budgets have become very conservative. Even funding for business-critical projects has been withheld: if a project is not urgent, it is not financed.
Much of the shift in spending stems from the Dodd-Frank Act, which introduced over 1,000 pages of guidelines aimed at addressing issues at the heart of the financial crisis. While only some of the guidelines target IT directly, the technology departments at financial institutions have felt the reach of Dodd-Frank, and regulation has become one of the largest drains on IT budgets.
One of the main requirements introduced by the Dodd-Frank Act is increased reporting capability. The SEC is looking for reporting on enterprise-wide exposure to risky counterparties, not just risk metrics at the account level.
For banks, this means they must pinpoint potential risks before the SEC demands reports and implement enterprise-wide position and risk aggregation systems to improve risk management and risk analytic capabilities. They have also been forced to implement compliance workflows to meet the latest requirements.
Big investment banks began investing in risk aggregation systems early in 2009, yet despite the early start, the following challenges have prevented them from implementing these systems effectively.
Experience Needed for 'Big Data'-Scale Systems

Most banks decided to add an additional layer of software on top of their existing risk and position systems. The new layer is responsible for aggregating data from the underlying systems. In most cases, enterprises use it to cache risk and position data in memory (in-memory caches) to achieve appropriate performance, build unified data models and monitor enterprise risk across regions. However, many make the mistake of treating the new system like a traditional relational database (querying data in SQL style rather than using key-value access to achieve predictable performance) and build complicated software layers on top of the in-memory caches, leading to increasingly complex and expensive systems.
Such hastily built systems lack comprehensive disaster recovery, and additional spending should be expected for enhancing disaster recovery, data routing and synchronization across regional data centers.
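To make the key-value point above concrete, here is a minimal Python sketch of an in-memory position cache whose key encodes the read pattern (counterparty) so that an exposure report is a direct lookup rather than a SQL-style scan. All names here (`Position`, `risk_cache`, `counterparty_exposure`) and the figures are hypothetical and not tied to any specific caching product:

```python
# Minimal sketch of key-value access over an in-memory position cache.
# Names and numbers are illustrative, not any vendor's API.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Position:
    account: str
    counterparty: str
    notional: float

# Key-value layout: the cache key encodes how the data will be read
# (here, by counterparty), so a report is a direct lookup, not a scan.
risk_cache: dict[str, list[Position]] = defaultdict(list)

def on_position_update(pos: Position) -> None:
    risk_cache[pos.counterparty].append(pos)

def counterparty_exposure(counterparty: str) -> float:
    # O(1) key lookup plus a sum over that counterparty's positions,
    # giving predictable latency regardless of total cache size.
    return sum(p.notional for p in risk_cache[counterparty])

on_position_update(Position("ACC1", "CPTY_A", 1_000_000.0))
on_position_update(Position("ACC2", "CPTY_A", 250_000.0))
on_position_update(Position("ACC3", "CPTY_B", 500_000.0))
print(counterparty_exposure("CPTY_A"))  # 1250000.0
```

The design choice is that aggregation paths are decided when data is written, not when it is queried; a relational-style ad hoc query over the same cache would scan every entry and its latency would grow with total cache size.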
Unified Symbology and Data Tagging Required

Many large banks have business units in the US, Europe and Asia. In many cases, trading software is developed in one region and used by businesses in another. When it comes to enterprise-level reporting and monitoring, it is often hard to match and reconcile trades coming from systems built on different symbology standards, which usually results in duplicated, invalid and missed trades. Banks suffer from the absence of a unique code, the Legal Entity Identifier (LEI), that would unambiguously identify the parties to trades and events such as corporate actions. Without LEIs, missing inventories and mistaken counterparties prevent regulators from detecting signs of stress. This problem, however, cannot be addressed by financial organizations alone; it must be solved at the G20 level, which will not happen in 2012. Once it is solved, banks and data providers will have to allocate part of their budgets to adopting the new set of LEIs.
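As a toy illustration of the reconciliation problem, the sketch below normalizes trades from two regional books through a symbology mapping table onto a canonical key, then treats trades present in only one book, or with unmapped symbols, as reconciliation breaks. The symbols, canonical keys and trade records are hypothetical stand-ins for an LEI-style identifier:

```python
# Illustrative cross-region trade reconciliation. The mapping table,
# canonical keys and trades are hypothetical, not real market data.
SYMBOL_MAP = {  # regional symbology -> canonical key (LEI-style stand-in)
    "VOD.L": "VODAFONE",
    "VOD LN": "VODAFONE",
    "IBM.N": "IBM",
}

def normalize(trades):
    """Re-key trades by canonical id; unmapped symbols are flagged, not guessed."""
    normalized, unmapped = {}, []
    for trade_id, symbol, qty in trades:
        canonical = SYMBOL_MAP.get(symbol)
        if canonical is None:
            unmapped.append(trade_id)
        else:
            normalized[trade_id] = (canonical, qty)
    return normalized, unmapped

us_book = [("T1", "VOD LN", 100), ("T2", "IBM.N", 50)]
eu_book = [("T1", "VOD.L", 100), ("T3", "XYZ??", 10)]

us_norm, _ = normalize(us_book)
eu_norm, eu_unmapped = normalize(eu_book)
# Trades seen in only one book become reconciliation breaks.
breaks = set(us_norm) ^ set(eu_norm)
print(breaks, eu_unmapped)  # {'T2'} ['T3']
```

With a single mandated identifier, the hand-maintained mapping table disappears and the matching step reduces to a join on one key, which is the point of the LEI effort.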
Performance Issues

New regulations also affect IT departments working with OTC instruments. Requirements around swap clearing and execution on Swap Execution Facilities (SEFs) already demand significant IT work, and the real-time public reporting requirement ("as soon as technologically practicable") is a significant challenge on top of that.
Traditionally, systems responsible for calculating risk metrics on swaps and complex derivatives were designed to run overnight, because the process requires large volumes of data and significant CPU resources. Last year, many IT departments attempted to redesign and optimize existing systems to shorten calculation cycles and deliver not only end-of-day but also intra-day results. Banks continue to improve the performance and horizontal scalability of these systems so they work in "near real-time" mode. Financial institutions face similar performance issues with their proprietary methodologies for testing capital levels and quantifying target capital according to risk profiles. In order to manage toward these risk profiles, banks must run stress test scenarios and tweak economic environment metrics to detect weaknesses. As a result, they are spending IT resources on calculation facilities in data centers and on hardware upgrades (such as replacing hard drives with SSDs or adding extra RAM).
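The move from overnight batches toward intra-day results is essentially a fan-out problem: a stress-scenario grid is split across workers that revalue the book independently. The sketch below shows the shape of that fan-out with Python's standard `concurrent.futures`; the valuation is a toy first-order DV01 approximation with made-up numbers, not a real swap-pricing library, and a production system would use process pools or a compute grid rather than threads:

```python
# Hedged sketch: fan a stress-scenario grid out across workers so an
# overnight risk batch can produce intraday results. Valuation is a toy
# DV01 approximation; all figures are hypothetical.
from concurrent.futures import ThreadPoolExecutor

# Portfolio expressed as per-position DV01s (PV change in dollars per
# one-basis-point move); illustrative values.
dv01s = [1000, 1000, 500]

def revalue(shift_bp):
    # First-order PV change of the whole book under a parallel rate shift.
    return shift_bp, sum(d * shift_bp for d in dv01s)

scenarios = [1, 25, 100, -25]  # parallel curve shifts, in basis points

# Each scenario is independent, so the grid scales horizontally:
# more workers (or grid nodes) means a shorter calculation cycle.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(revalue, scenarios))
print(results[100])  # 250000
```

Because scenarios share no state, the same pattern maps directly onto elastic cloud capacity: the scenario list grows, the worker count grows with it, and the wall-clock window stays fixed.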
As mentioned above, moving through the remainder of 2012 and into 2013, financial institutions face a number of challenges, many of which can be addressed by properly implementing big data technologies:
* New regulatory guidelines demand significant initiatives from banks. Enterprise-level, close-to-real-time reporting is one of the biggest challenges for IT departments. With limited annual budgets and unclear regulations in 2012, banks will continue to spend on compliance systems in 2013.
* New regulations force banks to deal with Big Data volumes, and scaling calculations to that size is beyond the capabilities of traditional systems. At the same time, banks do not have the budget for new equipment, and provisioning delays make the cloud (Infrastructure as a Service, whether private or hybrid) very attractive for elastic scaling that is more time- and cost-effective.
* In the cloud, banks will have to learn to apply Big Data technologies to build horizontally scalable, close-to-real-time, cost-effective systems.
Oleg Komissarov is Vice President, Enterprise Solutions, for DataArt. He is a veteran of the IT industry with more than 15 years of experience in custom software development and enterprise systems architecture. Oleg joined DataArt's St. Petersburg office in 2006 as a senior software developer and advanced to software architect in 2009. In that role he was responsible for implementing enterprise solutions for key financial clients in the United States and Europe. In 2010, he relocated to the New York headquarters and was appointed Vice President of Enterprise Solutions.