Damn lies and statistics (part II)

The Be Excellent blog recently posted an article pointing to an interview with David Norton, of Balanced Scorecard fame. Norton commented that:

There have been a number of studies on the success rate companies have in executing strategy, and they generally conclude that something like nine out of ten organizations that have strategies fail to execute them. That’s even true when those strategies have been well-formulated at the top and they have buy in from senior management. That’s a pretty foreboding statistic.

The problem here is that Norton does not say how success or failure is defined. If Norton says 9 out of 10 strategy implementations fail, it is incumbent on Norton to say what is meant by “failure” – or his statistic is completely meaningless!

One would think that, of the 9 "failed" companies in every 10, most if not all would have made some sort of attempt to implement the strategy – and learnt something from doing so.

It is worth recalling the story (hopefully not apocryphal) of Thomas Watson Sr. at IBM. Watson called a new division head into his office. The division head had made a mistake that cost the company ten million dollars – in the 1930s or 1940s, or sometime when that was a phenomenal amount of money rather than the typical CEO's yearly bonus.

The division head fully expected to be fired on the spot. On entering the room, he ventured: 'I guess you'll be wanting my resignation then?' Watson Sr. replied: 'You can't be serious! We just spent ten million dollars training you!'

Watson's point was this: a clear and objective event (the company lost money) was not a failure unless it was defined as such. What IBM successfully got from the event was learning and improvement, and the ability of a particular divisional head to perform better and avoid similar mistakes in the future. Perhaps in the long term this "failure" led to greater successes down the road that compensated generously, in both financial and strategic terms, for the $10M loss.

If something as clear-cut as financial gain or loss is open to interpretation as to whether it is a success or a failure, on what terms can we say categorically that something as complex as a strategy execution is completely and unequivocally a failure? We can only regard it as a failure if we have clearly defined and stated the terms of success, and if we are additionally closed to the (intangible) benefits of learning and growth. Failing these criteria, assessing success or failure is a qualitative judgement – and therefore not a matter for any simplistic statistical assessment.

In addition, one could point out that presumably a majority of the 9 companies out of 10 that “failed” at executing their strategy did not fail completely. Presumably they got at least some benefits in some areas, made some useful operational process changes, acquired some new customers, and otherwise generated some tangible business value.

I suspect what Norton meant is that the organisations in question did not get exactly the business benefits they were anticipating when formulating the strategy, which would assume that they were, in fact, clear about what business benefits they were expecting.

Personally, I am a fan of the Balanced Scorecard methodology, and of Kaplan and Norton's work with strategy maps and alignment. But I also think it is perhaps too easy to bandy around statistics, as Norton has, that clearly serve his purpose. Without citing the sources for these surveys, and without defining precisely what counts as 'failure' for a strategy implementation, Norton's statistics are essentially useless – and worse, misleading.

It may be interesting to look into the history of the surveys that have purportedly established this "statistic" and see how they have defined "success" and "failure."

2 Responses to Damn lies and statistics (part II)
  1. Abbie Lundberg
    June 25, 2007 | 8:25 PM

    The other problem with this stat is its age. Kaplan and Norton mention it in The Strategy-Focused Organization, citing a survey of management consultants that got written up in a December 27, 1982, article by Walter Kiechel in Fortune called "Corporate Strategists Under Fire."
    Perhaps there have been further studies since that replicate these results, but I haven't been able to find them.

  2. Dr. Lauchlan A. K. Mackinnon
    June 25, 2007 | 9:47 PM

    Hi Abbie,

    Thanks for your comment!

The Strategy-Focused Organization also cites a June 1999 Fortune article, "Why CEOs Fail" by Charan and Colvin, in which the authors "estimate" that in 70% of cases the problem isn't bad strategy but poor execution.

In both cases, the sources for these "studies" are Fortune magazine articles, so in addition to the question of age there is the question of why the reader should take these articles as particularly authoritative sources. I don't mean to impugn the authors, and I think there was an element of truth in what they said, but if Norton wants to cite statistics I'd rather see them sourced from peer-reviewed scholarly studies, or perhaps from industry surveys of companies or CEOs conducted by one of the major consultancies.

Kaplan and Norton's third piece of "evidence" was a 1998 Ernst & Young study ("Measures that Matter") of 275 portfolio managers, reporting that the ability to execute strategy was more important than the strategy itself. From my point of view this is more authoritative and credible than the other two – perhaps this is why Kaplan and Norton led with this result in The Strategy-Focused Organization.

    Kind regards,

    Lauchlan Mackinnon
