The Challenge of Establishing Enterprise Data Consistency

Investment data is the lifeblood of securities operations. Most large institutions not only maintain multiple repositories supporting individual applications, but each of these is likely to have its own vendor data feeds, staff of analysts and unique cleansing protocols. The resulting data inconsistencies stall transactions and workflows until analysts can reconcile them.

Because of this complicated infrastructure, most institutions face some or all of the following data-related issues:

-Lack of data transparency, creating difficulties in restructuring business processes and automation to improve straight-through processing

-Basel II requirements and potential penalties for operational risk related to the quality, control and auditability of investment data

-Capacity bottlenecks caused by highly manual data management in some areas

-Mounting volumes of investment data under management, along with new automation (such as corporate actions processing) that creates new types of data consumption

-The need to bridge unrelated systems to upgrade risk management, research, new product development or application development

-The dilemma of how to address burgeoning data management requirements while cutting overhead costs

Though the problems faced at any given moment may appear to be tactical, such as overcoming obstacles to STP or meeting regulatory requirements for operational risk oversight, the overriding issue is strategic and central. At base, the data infrastructure is either functional in terms of efficiency, economy and capacity, or it is not. Two measures of the quality of the data infrastructure are the costs associated with internal reconciliation among systems (including the projected losses associated with delayed or failed transactions) and the costs of redundancy among independently managed data silos.

Data consistency is the result of an optimized data infrastructure. We define it as the provision of identically cleansed, normalized and modeled data to all automated systems, analytical functions and operational areas, from front to back office. The data elements are named and defined identically. The same processes of quality control are used for equivalent data, no matter how it originates or where it is stored. All data stores are synchronized on a functionally real-time basis for both core processing and analytic needs.
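As a minimal sketch of what this means in practice, consider a simplified golden-copy record and a shared validation routine, written here in Python. The field names and rules below are illustrative assumptions, not a description of any particular platform; the point is simply that every system consumes the same names, types and quality checks.

from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class SecurityMaster:
    # A hypothetical "golden copy" record: every consuming system sees the
    # same field names and types, regardless of which vendor feed
    # originally supplied the values.
    isin: str              # instrument identifier
    issuer_name: str
    currency: str          # ISO 4217 code, e.g. "USD"
    close_price: Decimal
    price_date: date

def validate(record: SecurityMaster) -> list[str]:
    # Apply the same quality-control checks to every record,
    # no matter how it originates or where it is stored.
    errors = []
    if len(record.isin) != 12:
        errors.append("ISIN must be 12 characters")
    if len(record.currency) != 3:
        errors.append("currency must be a 3-letter ISO 4217 code")
    if record.close_price <= 0:
        errors.append("close_price must be positive")
    return errors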

Though the concept of enterprise data consistency is logically appealing, the prospect of implementing it is daunting. Until the recent market downturn forced institutions to begin evaluating the costs of their infrastructure, we seldom saw a data management project begin at the enterprise level.

While virtually all our customers are now implementing enterprise-wide data initiatives, the boundaries of their initial implementations were usually departmental. Today, however, the combined forces of pressure on profits, the industry's commitment to straight-through processing, and the anticipation of Basel II compliance have made enterprise data consistency a strategic priority for many institutions.

In this chapter, we will discuss general challenges in implementing a mandate for enterprise-wide data consistency.

This information represents what Asset Control has learned from its association with the firm's customers as they have built data management initiatives that bridge applications, lines of business and geographies.

The Politics of Consistency

Data consistency is simple in concept, and may be relatively simple for a de novo financial institution, without incumbent systems and processes, to mandate internally. For the vast majority of institutions, however, and particularly those who have extensive and multi-faceted involvement in the marketplace, the concept of data consistency opens a Pandora's box of issues and challenges.

Organizationally, this type of mandate equates to centralization of control of data management. Over time, it will result in obsolescence and closure of previously independent data management operations. These changes affect not only the data management operations but also many end users. Therefore, visible support of management and clear internal communications about objectives, timeline and benefits are necessary components of these projects.

While we find an increasing number of institutions committed today to the concept of enterprise-wide data consistency, it is impossible to overcome a tangled data infrastructure with a "big bang" approach. Progressive assimilation of independent data management areas into a centralized quality-control system is the pragmatic approach, building organizational support through clear illustration of benefits and cumulative return on investment.

Initial foundation work involves creation or purchase of data dictionaries and data models that will support normalization of various data feeds into a unified core data structure to verify, normalize, cleanse and store all incoming data. This structure will house the "golden copy" or organizational reference copy of data that feeds applications, users and global offices.
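A rough sketch of that normalization step, assuming two invented vendor feed layouts mapped onto the core names defined by the data dictionary, might look like this:

# Hypothetical field mappings from two vendor feeds into the core names
# defined by the data dictionary; real feeds have their own layouts.
VENDOR_A_MAP = {
    "Isin": "isin",
    "LongName": "issuer_name",
    "Ccy": "currency",
    "Px_Close": "close_price",
    "PxDate": "price_date",
}
VENDOR_B_MAP = {
    "id_isin": "isin",
    "name": "issuer_name",
    "crncy": "currency",
    "px_last": "close_price",
    "last_update_dt": "price_date",
}

def normalize(raw_record: dict, field_map: dict) -> dict:
    # Rename vendor-specific fields to the unified core names before the
    # record is verified, cleansed and stored as golden copy.
    return {core: raw_record[vendor]
            for vendor, core in field_map.items()
            if vendor in raw_record}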

The initial introduction of data integration platforms is usually in middle-office environments, where benefits are rapidly demonstrated. The need for masses of reliable and fully integrated data is very clear in analytical areas. Risk management, opportunity management and the more complex facets of asset management are points of pain to institutions that are struggling with data management. Inaccessibility of data or excessive manual involvement in data preparation can clearly be linked to money earned or lost.

Organizational resistance to the spread of centralized data management may vary according to functional areas. In our experience, the expansion of platform scope from middle to back office functions is relatively straightforward. The financial rationale is clear for standardizing the investment data that supports accounting and other core processes.

The front office, however, is a different animal. Any new technology is evaluated stringently for its ability to support sales and manage risk in a meaningful timeframe. Front office concerns typically come down to three issues -- the timeliness of the data, server response to a potential barrage of ad hoc queries, and risk of downtime during trading hours.

In one of our customer institutions, where the trading operation had initial concerns about the usability of the central data management service, the situation was resolved by an open attitude to competitive evaluation. The process served as a needs-analysis exercise. The result was a moderate redesign of the system to accommodate traders' specific requirements for on-the-fly tracking of price histories and corporate events.

Local vs. Enterprise Projects, and Other Design Issues

This institution was one that began with a localized risk project and evolved into an enterprise data initiative. Typically, projects that begin with an enterprise mandate have more comprehensive planning and fewer redesign requirements as the initiative progresses. One of our showcase implementations is a UK bank that anticipated the Basel II requirements several years ago. Because their original mandate was reduction of operational risk throughout the institution, their progress in establishing data consistency has been rapid and well supported.

In any extension of centralized management into applications and business areas, the primary technical challenge lies in building interfaces to feeds and applications, although off-the-shelf interfaces or rapid interface-building tools can reduce the work. No matter how comprehensively the core data model accommodates industry standards and organizational protocols, the sources and targets have their own unique data structures. Thus, the data will require transformation. This transformation function may be part of the central data management system or accomplished with middleware.
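A minimal sketch of the outbound side of that transformation, assuming a hypothetical downstream accounting layout (the target field names here are invented for illustration), might look like this:

from datetime import date

def to_accounting_feed(golden: dict) -> dict:
    # Reshape a golden-copy record into the layout a downstream
    # accounting application expects; target field names are illustrative.
    return {
        "SECURITY_ID": golden["isin"],
        "PRICE": str(golden["close_price"]),        # target expects strings
        "PRICE_CCY": golden["currency"],
        "AS_OF": golden["price_date"].isoformat(),  # e.g. "2003-06-30"
    }

# Example: a normalized golden-copy record flowing out to the target system.
print(to_accounting_feed({"isin": "US0378331005", "close_price": 18.50,
                          "currency": "USD", "price_date": date(2003, 6, 30)}))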

"Centralized" data management is not necessarily physically located in a single place. In fact, it is more likely to be distributed but managed centrally to eliminate redundant functions and establish version control of "golden copy" data. Contingency planning for disasters suggests the value of mirrored systems. Likewise the sheer volume of stored investments data, as well as the system stresses created by bulk loading and ad hoc queries, suggest that a two-way replicated infrastructure may enhance both security and responsiveness.

A related issue in development of an enterprise data architecture is the question of whether to outsource or maintain in-house systems. While outsourcing is a low-cost alternative for managing "commodity" data - i.e. the data that is commonly used by all institutions, such as vendor-sourced reference and historical pricing data - an outsourced platform may not be a viable solution for handling proprietary data or developing proprietary analytics. Outsourcing services are, by definition, "vanilla" solutions that offer economy and best-of-breed functionality but not the adaptability and control of an in-house technology base.

Fortunately, vendors now provide hybrid solutions that include both in-house and outsourced technology developed within the same framework. By enabling institutions to distribute data management across identical in-house and outsourced platforms, these hybrids offer the potential to significantly reduce overhead without compromising proprietary data and processes and without the added operational risk of integrating unrelated systems.

Benefits and Limitations

Driving data consistency through centralization of data management offers both tactical advantages in reducing overhead and risk and strategic advantages in supporting better decision-making and development. There is no question that data consistency will reduce general overhead and eliminate the obstacles to straight-through processing.

In addition, the usability of the data increases substantially. Data analysts are relieved from solving the "apples and oranges" conundrums created by similar-but-different data sets from multiple sources. Both product and application developers have access to broader and more reliable information from inside and outside the organization.

The long-term impact on the organization can be dramatic. The data transparency associated with an enterprise "golden copy" enables development of business processes and supporting software tools that naturally integrate into the enterprise system architecture. As various data silos are assimilated by the centralized data management service, their data types and relationships are also included in the enterprise system, enriching the data resources of the entire institution.

Just as the first implementation is typically a high-visibility and quick-return project, every extension of the system must be evaluated for return on investment. The law of diminishing returns holds sway in this field, as in any other attempt to standardize. Data that is infrequently seen or used, such as particularly obscure forms of corporate actions, may not be worth the effort of modeling or cleansing through automated systems.

Conclusion

One good result of the current market downturn is an increased willingness on the part of financial institutions to consider enterprise-level initiatives to improve core infrastructure. In the case of data management, pressures to improve STP, reduce overhead and control operational risk have opened the door to a strategic evaluation of enterprise-wide data consistency.

While such infrastructure overhauls may seem daunting, our customers have demonstrated repeatedly that successful localized projects are likely to be leveraged into enterprise initiatives - usually within the first year. The difference between an initial commitment to enterprise consistency and a delayed one is only the quality of the planning that goes into the initial development. The same methodology of progressive assimilation of decentralized data silos is followed in either case.

With every application that is brought onto the centralized data service, another source of exceptions and reconciliation work is eliminated. Redundant operations are pared away. Both analytics and development gain from greater breadth and transparency of data. These cumulative benefits do more than solve today's data challenges. They derive from a streamlined infrastructure that supports future growth into new areas of business and adaptability to changing markets.

Ger Rosenkamp is the CEO of Asset Control ( www.asset-control.com , www.acdex.com ), a leading provider of in-house and outsourced investment data integration technology to capital markets firms. Asset Control has offices in New York, London and the Netherlands.

Asset Control
One Rockefeller Plaza
14th Floor, Suite 1420
New York, NY 10020
Phone: 212.445.1076

Contact Person: Bridget Piraino
E-mail: [email protected]

www.asset-control.com
