How Old Metrics May Strand You Strategically

Ever stop to consider how the ever-present changes going on around you make your own transformation easier?

John Hagel’s relatively recent blog post describes the opposite.

In a world of accelerating change, one of our greatest imperatives is to “unlearn” – to challenge and ultimately abandon some of our most basic beliefs about how the world works and what is required for success.

A few years ago, Accenture noticed that many different companies had shifted their approach to strategy. Perhaps the availability of cheap, powerful computing capacity and Big Data is driving these changes in strategy development, as more organizations using technology find it easier to build consideration of the future into their present planning. Hagel, a long-time fan of scenario planning, would applaud these efforts too.

With the rise of automated business processes, analytics too are incorporated automatically to enhance decision making, and they may simultaneously be compromising management’s capacity to internalize all of these changes or to understand the underlying dynamics that traditional measures mask. Several articles offering case studies from different industries provided the basis of our discussion around transformation (see the bottom of the post for specific article links).

Successful organizations rely on their strategy to put forward action plans and realize new ideas while averting risk. Statesmen and management alike find themselves in precarious positions when they assume a trend will continue without change. Many statistical methods, and decision-makers’ uses of data, remain unchanged from 1954, when Darrell Huff first published How to Lie with Statistics. His timeless book describes very simply the perils of improperly using methods that were designed to capture and explain, if not contextualize, the significance of singular observations, or data. The current transformations enabled by technology have done more to alter behavior than organizations seem to recognize. That’s the path our discussion took.

The capability for insight

Prospective vs. retrospective cohort analysis and data mining techniques are far from new. But the increasingly sophisticated tools, and the ease with which ever greater volumes and speeds of available data can be processed, may help as well as hinder their digestion. Sure, the time to test alternative scenarios may be shorter, but how do you choose the model?

Do you begin with the intended outcome? The scientific method and numerous models from multiple disciplines make it possible to isolate factors, determine their significance, estimate alternative scenarios, and assess how those variations produce changes in impact.

Similarly, the cross-pollination of data modeling from one discipline into multiple industries and use cases continues to shift management beliefs regarding the importance of specific factors and interactions in their processes. The perennial blind spot denies many organizations and their leadership the insight necessary to transform both their internal strategic thinking process and their business operating models. Last month’s discussion of McDonald’s and Coca-Cola illustrated how easily leadership misinterpreted fluctuating performance as temporary issues rather than recognizing structural factors. It’s one thing to balance efficiency and effectiveness, quality and satisfaction; it’s another to manage awareness of change and the insights necessary to your continued survival.

“What else” thinking

“…both the digital world and the physical one are indispensable parts of life and of business. The real transformation taking place today isn’t the replacement of the one by the other, it’s the marriage of the two into combinations that create wholly new sources of value.”

The sudden availability of online data tracking gave many organizations the capability to understand user behavior differently. A whole new industry arose to focus on interpretation, creating new measures while also introducing new thinking about effectiveness in sales, customer service, training, etc. Metrics, once created to prove out a strategy or an idea, now leave many organizations vulnerable until they build up the capacity to understand this new thinking, let alone make the corresponding operational changes necessary to sustain their business.

This is not the story of companies that fail to adapt, such as Kodak, which invented the digital camera only to retain its focus on film; but maybe it is. Reporting dashboards summarize specific indicators or activity associated with managing processes or business-relevant factors. The time and reporting cost savings that result from automatic generation of, and ready access to, information reinforce managers’ and executives’ existing thinking and leave little room for understanding wider changes that may be impacting their business.

It wasn’t long ago that analysts, and teams of them, spent their entire day pulling data and then calculating the critical statistics detailing the effectiveness and efficiency of organizational activities, in order to create reports for senior management. These efforts also made them accountable: ensuring the data was clean, and verifying whether outliers were real or indicative of a model failing to fully capture the wider dynamics. I was once one of those managers. Today, automated reporting has eliminated many of the people capable of deeper data exploration, the people who chose what data, which statistics, and the context necessary to understand the situation. The second problem is that data shared graphically or in tables never tells the whole story, though infographics do try.

A good analyst is taught to review the data and results and to double-check whether the model or the calculated results make sense. Sure, managers and executives may be quicker to detect aberrations and then raise questions, but how many of them have the time, patience, or skills to test their ideas or intuitions? I imagine very few, if any. Where are these analytical resources, and how widely known are they to questioning executives? How might the dashboard provide additional information to help frame the results executives see as they too seek to make sense of them?
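As an illustration of the kind of aberration check a seasoned analyst once performed by hand, the sketch below flags observations that sit far outside a series’ recent behavior, so a human can ask whether the data or the model is at fault. The function name, window size, and threshold are hypothetical choices for illustration, not a prescription.

```python
# A minimal sketch of an analyst's sanity check: flag observations that
# deviate sharply from the recent history of the series, so someone can
# investigate whether the data or the model is at fault.
# All names and thresholds here are illustrative assumptions.
from statistics import mean, stdev

def flag_aberrations(series, window=7, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` observations."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

daily_sales = [100, 102, 98, 101, 99, 103, 100, 97, 250, 101]
print(flag_aberrations(daily_sales))  # → [8]
```

A dashboard could surface such flags alongside the numbers, but, as the discussion above suggests, deciding whether the spike is a data error or a real structural signal still requires a human with context.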

Outside-in thinking

Established data flow processes and automated reporting do deliver great advantages, but they may also explain why outsiders find it easier than insiders to create new business models. Where’s the out-of-the-box thinking? And how can different data help?

Sure, it’s easy to blame regulatory requirements or compensation structures incentivized to focus on effectiveness and efficiency, which leave little latitude to notice opportunity. For example, in the airline industry, route fares were once set by regulation. The minimum fares were intended to cover airlines’ operating expenses in ways that both ensured passenger safety and preserved access to air travel in locations where market forces might lead airlines to cut corners. Deregulation may have given airlines additional freedom, but many manage their business using the same metrics that they report to the Department of Transportation. Likewise, in healthcare, the imposition of new regulatory requirements came with new metrics that forced hospitals to focus on patient outcomes, not just their costs.

When executives’ bottom-line focus limits their thinking to an exercise in how operational corrections might maximize that number, they overlook other contexts. Data quality issues should surface quickly in most organizations, but what if another factor created the data issue? A misplaced data point, or inconsistent treatment of the content of a data field, rarely explains all aberrations in the results. Weather, for example, exemplifies a ubiquitous, exogenous variable. It may be responsible for observable data fluctuations directly, or indirectly by affecting other, more directly connected factors, as when a snowstorm changes people’s activity plans. I’m not familiar with any automated reporting system that will automatically footnote a data point with the arrival of a snowstorm. The reviewer is forced to remember, or, where possible, to manually add the footnote for others.
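The missing feature described above can be sketched simply: join the metrics series against a hand-maintained calendar of exogenous events so the report can footnote affected data points automatically. The event calendar, field names, and numbers below are illustrative assumptions, not a real system’s API.

```python
# A hypothetical sketch: pair each daily observation with any known
# exogenous event on that date, so an automated report can footnote the
# aberration instead of leaving the reviewer to remember it.
# All dates, values, and event descriptions are invented for illustration.
daily_visits = {"2024-01-14": 412, "2024-01-15": 96, "2024-01-16": 405}
exogenous_events = {"2024-01-15": "snowstorm closed area roads"}

def annotate(metrics, events):
    """Attach the matching event note (or an empty string) to each value."""
    return {
        date: (value, events.get(date, ""))
        for date, value in metrics.items()
    }

for date, (value, note) in annotate(daily_visits, exogenous_events).items():
    footnote = f"  [note: {note}]" if note else ""
    print(f"{date}: {value}{footnote}")
```

The hard part is not the join but the discipline of maintaining the event calendar, which is exactly the contextual work the eliminated analysts used to do.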

Bigger transformations to come

Bain believes there are significant implications for every organization from this combination of digital and physical innovations, which they call Digical. It’s not easy to keep up with the corresponding behavioral shifts that result from these rapidly changing technological capabilities.

Focusing exclusively on efficiency and cost data helped management measure impact in the old era; though such measures are still necessary today, they may no longer suffice. Do you know how the social behaviors of your customers impact your bottom line? The technologies that support your business, such as your website or your cash register, miss out on the social behaviors evident on sites like Facebook, Twitter, Yelp, or even your customers’ bank. Mapping the ecosystem and then aligning the digital tracking data can now be supplemented with sensor data that may be anonymous to specific customers but can still inform movement and actions relevant to your engagement.

Naturally, as mentioned earlier, bias plays a role in our inability to notice the significance of new data. The more we automate and configure systems to measure what we always knew mattered, the less likely we are to recognize new data and its significance. What should you, the analyst, and you, the executive, do to counteract these factors?


Monitor the activity of smaller companies as they experiment to learn what’s most relevant.

Don’t make assumptions. Exercise strategic intention to become more open, receptive, and curious about anomalies, and be more creative and persistent in identifying the drivers or possible factors.

Historically, metrics were an output designed to assess the validity of your strategy: did it work and/or deliver value? Now it’s time for strategic thinking to view metrics as an input. Statistics-enabled analysis tools, partnered with business knowledge and acumen, must be part of communicating to higher levels in the business.

Often we measure the wrong things because the incentives are misaligned. Am I paid based on my proven ability to produce widgets at specific levels, or to produce effective, sustainable results for the business, not just my business unit?

Computers are useless; they can only give you answers. For strategy, validating the questions may be important, but so too is taking the time and effort needed to determine even better questions.


Alternative case examples

Bain’s study and understanding of the state of “digical” transformation:
Fast Food



A guest post by:   Willard Zangwill, Ph.D., Professor, University of Chicago, Booth School of Business

Rachel Kaberon, in preparation for the Strategy Management Practices Issues Group discussion of the Chicago Booth Alumni Club, asked me to put together a page or two of thoughts about uncertainty in decision making. Since she had helped me with software I have developed to assist in complex decision making, this was my chance to return the favor. Hence, here are some thoughts that strongly influenced my thinking about uncertainty, and some suggestions for how people might better predict the future and make better decisions.


First is that uncertainty is remarkably uncertain, and our efforts to predict it are likely worse than we often assume. Overconfidence bias is indeed strong. What demonstrated this to me was the outstanding work of Philip Tetlock[i]. He studied how accurate the predictions of experts and pundits in the political and economic arenas were. These people were similar to the prognosticators we see on television, or other experts discussing what might happen in the future. Tetlock examined such predictions for years and studied tens of thousands of them, which was a huge undertaking.

What Tetlock discovered was how bad the predictions were.   They were only slightly better than chance.  Not the result one might expect, but worse.  Too many events seem to unexpectedly occur in the future.

Interestingly, the prognosticators who were most confident and sure of themselves were wrong more often than the more cautious forecasters who hedged and added conditional statements. The confident experts tended to gain more support and attention, as their confidence convinced others, but that did not make them more right.

How could predictions be so faulty? By and large, we tend to think we predict better than we do because, when we are wrong, we give ourselves excuses. We suggest that no one could have forecast what really happened, or that events no one could have foreseen occurred. That process absolves us of blame and provides exoneration. The net result, however, is that the future is harder to predict than most of us are likely to believe.


Given this conundrum, that we have to predict events but are probably not that good at it, what can be done? Here are a couple of experimental findings that I have found useful to build upon.

As Gary Klein[ii] has noted, research conducted in 1989 by Deborah J. Mitchell of the Wharton School, Jay Russo of Cornell, and Nancy Pennington of the University of Colorado found that prospective hindsight—imagining that an event has already occurred—increases the ability to correctly identify reasons for future outcomes by 30%.

The concept  is illustrated by the following.  Consider some upcoming event, say a presidential election.  Then think of reasons why a particular candidate might win.

Now do the following. Assume it is now after the election, and that it has just been announced that the candidate has won by a solid margin. Now think of reasons that triumph occurred. You will likely think of more reasons. In essence, assuming an outcome and carefully imagining it helps you think of reasons why that outcome might occur. Perceiving those additional reasons then helps as you proceed to analyze the situation.

A much different approach, in a study by Armstrong and Green[iii], was also quite helpful for forecasting the future. In brief, they had subjects predict the outcomes of past situations that were unknown to them. Since these were past situations, the actual outcomes were known, although the subjects did not know them. After the subjects made their predictions, the accuracy of those predictions was determined.

At this juncture, the experimenters changed the setup. They required that the subjects first consider several situations analogous to the one they had to predict; these were analogous situations where the subjects knew the outcomes. Once they had considered those analogous situations, the subjects were told to predict the situation in question. The success rate went up substantially. In fact, when a group of subjects was involved and they carefully compared analogous situations, the accuracy of the prediction roughly doubled.

The message seems to be this. When we forecast an event, we tend to do so by thinking of some similar event that we know. That similar event gives us ideas about the outcome of the event we are trying to predict. Now take this one step further. If you consider several events roughly similar to the one you are trying to predict, it is like increasing the sample size, and the accuracy of your prediction should rise. Moreover, just examining how several similar situations turned out is illuminating and, by exposing the complexities of the situation, provides useful insights.
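The sample-size intuition can be sketched numerically. If each analogous case is treated as a noisy observation of the true outcome, then averaging several of them shrinks the expected error roughly in proportion to the square root of their number. The toy simulation below illustrates this standard statistical result; the numbers are invented for illustration and are not data from the Armstrong and Green study.

```python
# A toy illustration of the sample-size intuition behind structured
# analogies: each analogous case is modeled as the true outcome plus
# independent noise, and averaging several cases reduces the error.
# TRUE_OUTCOME, NOISE, and TRIALS are arbitrary illustrative values.
import random

random.seed(0)
TRUE_OUTCOME = 50.0
NOISE = 10.0
TRIALS = 10_000

def predict(n_analogies):
    """Mean absolute prediction error when averaging n analogous cases."""
    total = 0.0
    for _ in range(TRIALS):
        estimate = sum(
            random.gauss(TRUE_OUTCOME, NOISE) for _ in range(n_analogies)
        ) / n_analogies
        total += abs(estimate - TRUE_OUTCOME)
    return total / TRIALS

print(f"1 analogy:   mean error {predict(1):.2f}")
print(f"4 analogies: mean error {predict(4):.2f}")  # roughly half the error
```

Quadrupling the number of analogous cases roughly halves the error, which matches the square-root relationship and gives a rough sense of why comparing several analogies helped the subjects so much.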


Given the difficulty of predicting the future, it might help to broaden our decision-making framework and, in particular, to do more breakthrough thinking, as that might provide us with an advantage. For most people, good breakthrough ideas seem to occur almost randomly: we think about an issue and the exciting idea somehow jumps into our minds. But there do seem to be procedures that help them occur more frequently, and more often when needed. The key insight is to examine where the breakthrough idea is more likely to occur.

To illustrate, suppose you cannot find your car keys and have searched all over the house. In frustration, you ask your spouse, who replies that they are on your dresser. Despite the mess on your dresser (not necessarily yours, but certainly mine), you dash over and, with only a little rummaging, quickly find your keys.

As another example, when companies search for oil, they do not put the exploratory well just anywhere. They first conduct detailed geological and seismological examinations to locate where an oil find is more probable.

The concept for breakthrough ideas is the same. Suppose you have one million possible ideas to search through in order to discover your breakthrough idea. Finding that breakthrough idea among the million possibilities is not likely to be easy. This explains why getting breakthrough ideas is usually a challenge: it requires quite a large search.

On the other hand, now suppose you obtain some clues as to where that exciting idea might be found, narrowing your search down to ten possibilities. You can easily search the ten and, in all likelihood, uncover the breakthrough idea.
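The arithmetic behind that narrowing can be made concrete. With one good idea hidden uniformly among N candidates, examining candidates in random order finds it after (N + 1) / 2 tries on average, so shrinking the pool shrinks the expected search in proportion. This back-of-the-envelope sketch is my own illustration of the argument, not a calculation from the original text.

```python
# Back-of-the-envelope arithmetic for the search-narrowing argument:
# one winning idea hidden uniformly among n candidates is found, on
# average, after (n + 1) / 2 examinations when searching in random order.
def expected_tries(n_candidates):
    """Expected number of candidates examined before finding the winner."""
    return (n_candidates + 1) / 2

print(expected_tries(1_000_000))  # 500000.5 -- an unguided search
print(expected_tries(10))         # 5.5      -- after clues narrow the field
```

The five-orders-of-magnitude gap between the two numbers is the whole case for investing in clues, whether seismological surveys or a spouse who knows where the keys are.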

The insight is to examine where the breakthrough idea is likely. It is like drilling where the oil is likely: you will find it more easily. One of the concepts behind the decision-making software I developed takes advantage of this, seeking to suggest where breakthroughs are more likely and helping you discover them more easily.


The uncertainty of the future is probably far greater than most of us assume. Here I have tried to suggest some means that might help reduce that uncertainty and improve decision-making.  There are other ways as well, and they should help as you proceed to make difficult decisions for the future.

[i] Philip Tetlock, “Expert Political Judgment: How Good Is It? How Can We Know?” (Princeton University Press).

[ii] Gary Klein, “Performing a Project Premortem,” Harvard Business Review, September 2007.

[iii] Kesten C. Green and J. Scott Armstrong, “Structured Analogies for Forecasting,” University of Pennsylvania, September 10, 2004.