Today, zettabytes (1 zettabyte equals 1 trillion gigabytes) of data are produced every year. In fact, it is projected that approximately 7.2 zettabytes of digital data will be created in 2014 alone. With the world's information doubling every two years, understanding large data sets and making intelligent technology and methodology choices are more important, and more challenging, than ever. How will companies and users weed through the big data deluge and use it effectively to make better decisions?
In 1786, William Playfair, the inventor of core graphical methods of statistics, observed, "When data was less abundant, an understanding of economic structure was both more difficult to formulate and less important for success." He made this observation in the context of increasingly complex eighteenth-century commercial life, and to help organize, analyze and decipher data, he invented the line, bar and pie charts. With the amount of data generated today, however, these tools are no longer sufficient.
Standalone sets of individual data points can appear meaningless. Data visualization, however, turns those points into graphical representations and imagery that can surface insights from big data and bring to the forefront answers to key questions or patterns that enable better decision making. Creating an image of data lets users readily spot, process and develop better insights into its meaning. In recent years, interest in data visualization has grown across sectors, becoming popular in academia, research labs, service and software companies, and publications.
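As a trivial illustration of the point above, consider the sketch below (hypothetical data and code, not from the article): rendering a short series of values as crude text bars makes an outlier obvious at a glance, while the same numbers in a flat list demand careful reading.

```python
# Hypothetical example: a crude text "bar chart" built from a list of
# daily values. The outlier (41) jumps out visually even though it is
# easy to overlook in the raw sequence of numbers.

def render_bars(values, char="#"):
    """Return one text bar per value, scaled one character per unit."""
    return [char * v for v in values]

daily_values = [12, 14, 13, 41, 12, 15]  # made-up data with one outlier

for bar in render_bars(daily_values):
    print(bar)
```

Even this throwaway encoding shows the principle: the eye finds the longest bar instantly, which is exactly the advantage a proper charting tool scales up.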
While data visualization has been around for centuries, its adoption in the financial markets is relatively new. There are no established best practices for creating data visualizations, few ways to leverage existing work, and limited tools for organizing visualizations. Online tools, technologies and programming languages categorize data visualizations into types, traits and subject domains, and a few of the available resources establish limited sets of diagramming patterns. However, none of them provides a methodology for selecting the appropriate data visualization for a defined problem. Selecting the right visualization to clearly analyze the data is the missing step in today's technology offerings.
Many businesses recognize that two of their most important assets are their digital data and employees. Yet they often use compromised methods of gathering and transforming data into meaningful and valuable knowledge. For example, they run standardized or ad hoc reports, create proprietary applications of gridded data and use employee resources to manually review and analyze those reports to then call out any important findings.
Manually analyzing data is time consuming, but it is often done to maintain core business capacity, operational continuity, competitive advantage and compliance. Reviewing stacks of numbers and text is not only error-prone but also makes it difficult to analyze data in order to:
1) Develop or assess a hypothesis: Those managing regulatory compliance may need to consider and assess a hypothesis like Hyman Minsky's financial instability hypothesis to protect their firm's future.
2) Discover errors and outliers: From a risk and compliance standpoint, a firm may want to find a way to easily monitor risk exposure across a portfolio on a trade-by-trade basis and manage outliers or trades that are over certain limits.
3) Map trends: From an investment management perspective, a firm may want to track volatility across sectors or industries to capitalize on market opportunity.
4) Create categories: A valuation and risk group may want to know if it can readily quantify exposure to all counterparties by subsidiaries.
5) Make decisions: A structured products group may want to know if it can create "what if" stress scenarios and decide on optimal product selection.
6) Understand relationships, such as spatial hierarchy and rank: For energy traders, the need may be to determine if a company can manage pipeline operations and portfolio optimization across crude, refined, natural gas and other commodities.
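To make the second concern on this list concrete, a minimal sketch of the trade-by-trade limit check it describes might look like the following. All names, fields and limit values here are hypothetical assumptions for illustration, not a method from the article:

```python
# Hypothetical outlier check: flag trades whose exposure exceeds a
# per-trade limit, the kind of review often done manually today.
# Trade records and the $10M limit are made-up examples.

def flag_outliers(trades, limit):
    """Return the trades whose exposure exceeds the given limit."""
    return [t for t in trades if t["exposure"] > limit]

trades = [
    {"id": "T1", "exposure": 2_500_000},
    {"id": "T2", "exposure": 12_000_000},
    {"id": "T3", "exposure": 750_000},
]

over_limit = flag_outliers(trades, limit=10_000_000)
print([t["id"] for t in over_limit])  # prints ['T2']
```

The output of such a check is exactly the kind of data set a well-chosen visualization (for example, exposure plotted against the limit line) would let a risk team absorb at a glance rather than row by row.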
The need to effectively and efficiently address these concerns, individually or in combination, is a challenge for many firms. Following a thoughtfully crafted method to hone the possible visualization choices is a good way to identify the most appropriate one.
Understanding these underlying concerns is the first step toward a more accurate set of visualizations. The visuals produced let you record, analyze and share data so that it becomes actionable. The data used as input into the process becomes knowledge that can be leveraged by others, expanded upon and cataloged into a repository that helps connect the dots across similar efforts.
About The Authors
Julie Rodriguez is an information architect manager at Sapient Global Markets with experience in user research, analysis and design for complex systems. Within the global markets domain, Julie specializes in wealth management, investment research, securities lending, commodities, and retail and institutional trading platforms.
Francesco Brullo is a senior software engineer with over 16 years of experience, four of which are in user interface design and front-end prototyping. He is currently a senior UI designer and trainer for UI best practices in the WPF and Silverlight space. Prior to this role, he worked as a senior developer and project leader on large-scale enterprise products.