IT "benchmarking" as you have likely practiced for years is dead. Or at least, it should be.
The traditional, near-universal IT benchmarking process is just about irrelevant: use external industry data (collected once a year) as the basis for a Weight Watchers-like "weigh-in," cast judgment (heavy is bad, light is good), then simply wait for next year's weigh-in to measure again. With worldwide technology spending having jumped from just $600 billion for 1980 to 1990, to perhaps $3 trillion for 1990 to 2000, to more than $30 trillion for 2000 to 2010, it makes no sense to use a static scale.
Worse, the scale itself is misleading: as a company accelerates its own efforts, the background benchmarking data is likely changing underneath it. If technology spending increased tenfold in 2000-2010 compared with 1990-2000, which in turn was a fivefold increase over 1980-1990, what will the spending bump look like during the current decade? Will we see twentyfold growth by 2020?
The new benchmark is to gather and use competitive and other external data in a (near) real-time process of market analytics (see the sketch after this list):
• Collect external/competitive data on a continuous, streaming basis.
• Update and monitor internal data on the same continuous basis.
• Aggregate and normalize internal and external data to allow you to "mark to market" on demand.
• Use the results forensically/interpretively to identify opportunities, set targets and focus on high-value/high-return change.
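To make the loop concrete, here is a minimal sketch of one pass through that process. Every name and figure in it (fetch_market_feed, load_internal_metrics, the sample unit costs) is a hypothetical placeholder, not a real feed or API.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    value: float
    unit: str  # comparison requires a shared unit/definition

def fetch_market_feed() -> dict[str, Metric]:
    # Stand-in for a streaming competitive/market data source.
    return {"cost_per_server_usd": Metric(4200.0, "usd/yr")}

def load_internal_metrics() -> dict[str, Metric]:
    # Stand-in for continuously refreshed internal data.
    return {"cost_per_server_usd": Metric(5100.0, "usd/yr")}

def mark_to_market(internal, external) -> dict[str, float]:
    """Gap (internal minus market) for every metric both sides define."""
    return {
        name: internal[name].value - market.value
        for name, market in external.items()
        if name in internal and internal[name].unit == market.unit
    }

# One pass of the loop; in practice this runs continuously.
gaps = mark_to_market(load_internal_metrics(), fetch_market_feed())
for name, gap in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
    # Forensic/interpretive step: rank the largest gaps and investigate.
    print(f"{name}: {gap:+,.0f} vs. market")
```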
Similarly, the new external input vehicle is a technology market data feed (sketched after this list) that includes:
• Known metrics/parameters of importance.
• Precise underlying definitions/standards.
• Clear identification of the competitive space that the data feed represents.
• A "watch list" of other parameters or events not in the feed about which you also want to be alerted.
A Substantive Shift
You may be wondering if all this adds up to a simple semantic shift rather than a real paradigm shift. Certainly, the change would be only semantic if the process and outcomes didn't change. But realizing the full power of this model requires a change in end-user behavior, too. This is where the notion of supply-and-demand analytics enters the picture.
According to InvestorWords.com, "supply" is "the total amount of a good or service available for purchase; along with demand, one of the two key determinants of price." The site defines "demand" as:
"The amount of a particular economic good or service that a consumer or group of consumers will want to purchase at a given price. The demand curve is usually downward sloping, since consumers will want to buy more as price decreases. Demand for a good or service is determined by many different factors other than price, such as the price of substitute goods and complementary goods. In extreme cases, demand may be completely unrelated to price, or nearly infinite at a given price. Along with supply, demand is one of the two key determinants of the market price."
The "supply" side of an IT organization is the essence of what it does and delivers, the associated services it offers. IT "supply" is the infrastructure. "Supply" is the applications development and maintenance community. "Supply" is the applications portfolio itself.
IT management has to exercise control over all economic aspects of supply: obtaining the best prices for hardware, software, labor and space; selecting the right technology, people and locations; and implementing management and governance best practices. Unit cost optimization is clearly within the scope of the supply side of information technology.
However, all of this only has meaning in the context of "demand." Demand takes the form of business needs and requirements -- volumes, service levels, geographic reach, processing specifications, business applications/functionalities and needed talent pools. Demand today and the future of demand are at the core of business-IT interaction; this is the layer across which IT value is created.
By the way, many financial services organizations use the terms RTB, for Run the Bank, and CTB, for Change the Bank, in looking at their IT expenses. RTB is the keep-the-lights-on cost. Total RTB is, of course, driven by demand. But the underlying economics of RTB are driven by supply-side optimization of unit costs under the constraints of demand-side service levels and volumes. The underlying economics of CTB are driven largely by business objectives, or demand-side plans for change.
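As a toy illustration of that split, the sketch below models total RTB cost as demand-side volume times supply-side unit cost; all of the figures are invented for illustration.

```python
# Total run-the-bank cost = demand-side volume x supply-side unit cost.
# Invented figures; units are servers and annual dollars per server.
def rtb_cost(volume: float, unit_cost: float) -> float:
    return volume * unit_cost

baseline = rtb_cost(volume=10_000, unit_cost=5_100)

# Supply-side lever: lower the unit cost at the same demand.
supply_optimized = rtb_cost(volume=10_000, unit_cost=4_200)

# Demand-side lever: reduce volumes (within service-level constraints).
demand_reduced = rtb_cost(volume=7_500, unit_cost=4_200)

print(f"{baseline:,.0f} -> {supply_optimized:,.0f} -> {demand_reduced:,.0f}")
# 51,000,000 -> 42,000,000 -> 31,500,000
```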
Optimizing IT Supply and Demand
The total outcome/value of IT and the total cost of IT are based on the match (and optimization) of supply and demand. Hence, the appropriate analytics need to be defined along these lines. And their use, in essence, defines the technology agenda. Consider a hypothetical example using this model applied to infrastructure expense:
Finco's Investment Bank (FCIB) has an infrastructure expense of about $2 billion annually, which represents a cost level of about 10 percent of net revenue and 15 percent of operating expense.
1. According to IT market analytics, the total of $2 billion on a mark-to-market basis is about $700 million higher than FCIB's most-efficient competitor.
2. Of the $700 million gap, $200 million (again, based on mark-to-market analysis) is driven by high costs for FCIB's distributed environment.
3. The $200 million gap in the distributed environment is driven by higher-than-peer-level data center costs ($130 million of the gap) and staff rates ($30 million), with the remaining $40 million in direct expenses; the higher staff rates are driven by higher occupancy costs and corporate overhead.
4. FCIB's total platform footprint (servers, storage, etc.), mark-to-market, is far larger than that of peers of equivalent size: 33 percent more servers, 15 percent more desktop devices and 12 percent more mainframe capacity.
Using a supply-and-demand view provides high-value insights into FCIB's cost dynamics and what needs to be done to improve efficiency. On the supply side, while $160 million of the distributed-environment gap is driven by factors that are not controllable in the short term (data center costs and staff rates), the remaining $40 million is controllable and can be addressed by IT management with a focus on direct expenses. The $160 million cannot be ignored, but it must be the focus of longer-term, strategic efforts to bring data center space and staff costs into alignment with the industry.
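The arithmetic behind this decomposition can be recapped in a few lines, using the hypothetical FCIB numbers above, in millions of dollars.

```python
# Decomposing FCIB's hypothetical $700M mark-to-market gap (all figures $M).
total_gap = 700                     # vs. most-efficient competitor
distributed_gap = 200               # supply-side: distributed environment
data_center, staff_rates = 130, 30  # not controllable in the short term

controllable = distributed_gap - (data_center + staff_rates)  # direct expenses
demand_gap = total_gap - distributed_gap                      # demand-driven

assert (controllable, demand_gap) == (40, 500)
```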
The demand side accounts for a $500 million mark-to-market gap. The source of this gap must be explored using further analytics/forensics:
• What is driving the need for more computing capacity and storage versus peers? Is it the applications portfolio itself (redundancies, inefficiencies)? Is it the applications architecture? Something else?
• Are there service-level anomalies that are drivers of above-average cost?
• Is there value evident in the business that provides an above mark-to-market return on the gap?
A Continuous Effort
The FCIB example provides a crisp illustration of applying supply-and-demand thinking in interpreting mark-to-market analytics. This example, however, can be a bit misleading in that it represents a point-in-time snapshot and doesn't clearly illustrate the continuous market-analytics/market-data-feed model introduced above.
But imagine doing such analysis continuously. Imagine charging your leadership team and key managers with the mandate to continuously mark-to-market and to use the data to drive the analytics that enable continuous improvement of your IT economy. Imagine using the supply-and-demand model as the basis of articulating the dynamics of your technology economy with your business partners, CFO, CEO and perhaps even your shareholders.
This is the foundation for taking charge of your technology economy. As I have said before, those firms that do it first will likely enjoy an extreme competitive advantage -- until marking to market is taught in the world's business schools. But by then, the world will realize that technology data is the new market data. And every firm will be positioned to be a first mover and act like the "Bloomberg of IT."