
Must Your Scorecard be Balanced?

by

Arthur M. Schneiderman

An edited version of this article appears in strategy+business

Background

Conventional wisdom1 mandates that a scorecard contain a balance of: 

- financial and non-financial,

- lagging (results or retrospective) and leading (process or predictive),

- externally (customer) and internally (processes) focused, and

- short-term and long-term metrics.

It also demands representation within a prescriptive framework; most often financial, customer, internal, and learning and growth.

But is this really necessary?  Let's first look at the origins of the "balanced" part of the scorecard.  

The balanced scorecard resulted from the confluence of three streams of late-1980s management thinking:

- Total Quality Management (TQM) practitioners were discovering that non-financial measures were much more useful in the day-to-day management of their organizations ("you get what you measure") and were struggling to determine the vital few metrics that they should use in steering their organization's limited resources.

- Accountants were losing the eyes and ears of management to the new non-financial measures2 and were failing in their effort to regain their past prominence by reengineering traditional product cost systems3 (Activity Based Costing), in light of the compelling criticism from both internal and external advocates of the Theory of Constraints (TOC).

- IT professionals were desperately seeking non-transactional IT applications that would expand their internal market from operations to management, in the hope that this would forestall their eventual relegation to a part of those operations.

The first balanced scorecard was created in 1987 to address the first of these issues.  Although it was "balanced" in the current sense, its inclusion of financial measures was for pragmatic, not conceptual, reasons (see "How the Scorecard Became Balanced").  Three years later it was discovered by a collaborating accounting professor and IT consultant, who recognized that it also provided a solution to both of their professions' most pressing challenges.

But just like the three blind men (or six, depending on which version you choose) first confronting an elephant, each of these three scorecard proponents approached it from their own parochial perspective.  To the Process Management devotee, it remained a tool for identifying, communicating and tracking the vital few "... measures of those processes whose improvement is critical to the success of the organization."  To the accountant, steeped in double-entry bookkeeping, income statements, balance sheets, and the like, the need for balance and control appeared essential.  And to the IT consultant, the opportunity to create software systems that extract managerially useful data from costly data warehouses became a godsend.

Now I know that some of you are thinking that I'm being grossly unfair to today's promoters of the balanced scorecard, but my point is that each of us views the scorecard from our own often biased perspective.  Right now, the accountant and IT consultant perspectives are dominating the scorecard framework.  As proof, just search the internet using the keywords "balanced scorecard" and "software."  Virtually every major enterprise software company and accounting firm now offers a balanced scorecard product that they can even have certified as conforming to some self-proclaimed "standard."

So to answer the question posed in the title, I'll take the perspective that I've maintained from the scorecard's very beginning: 

The most valuable use of a scorecard is as a driver of a strategically focused improvement process and as such need not and usually should not be "balanced."

Before you decide that my definition is too narrow, keep in mind that strategy creation and deployment can be viewed as processes and in many organizations are themselves in need of significant improvement.

 

Is a Scorecard for Control or Improvement?

Current balanced scorecard practice often mixes measures of control and improvement.  In a nutshell, there just isn't room on a manageable scorecard for control measures.  There are far too many of them.  Furthermore, control measures can only be managed at the process level.  Every process in an organization has the out-of-control potential to significantly damage stakeholder satisfaction, so ALL must be effectively controlled (see Step 4 in my Process Management Model).  Process control is part of every employee's daily job activities and should not be singled out for special attention.

A number of years ago, I had the opportunity to sit in the cockpit simulator of what was then the next generation of commercial aircraft.  Missing was the vast array of instruments that we are used to seeing.  They had been replaced by a very high resolution color LCD display about the size of today's laptop screens.  In the simulation, what appeared on the screen was only the set of virtual instruments that were critical to the particular activity currently going on.  I was told that even that was unnecessary, but it made the pilots feel more comfortable.  The engineers had designed the system to display only anomalous measures: instruments that were outside their acceptable range under the current flight situation.  And the automated system already knew what had to be done and was taking the actions required to bring these measures back to their nominal ranges.

The same approach is appropriate for control measures in process management.  Only a pattern of out-of-control situations that can't be resolved by existing recovery processes should be a candidate for scorecard inclusion.  For example, the number of "serious" out-of-control situations per month, or the time to resolve an out-of-control situation, are potential scorecard metrics, but only if reducing their numbers is an identified strategic priority.

Scorecard metrics should be used to align "non-production related" activities around the vital few improvements (changes from current in-control practice) that can impact achievement of the organization's strategic objectives.  Note that the introduction or improvement of process control (e.g. SPC/SQC) itself does fall into this category of activities.

Some balanced scorecard advocates make use of this dashboard or control panel metaphor.  That metaphor is only useful if, as with modern flight decks, it excludes control measures and limits itself to measures that require process improvements for the organization to be successful.

Let's now look at each of the identified elements of scorecard balance.

 

Financial and Non-financial?

OK, I'll admit it right up front: I still don't understand why we need financial measures on a scorecard at all.  Many view this position as unacceptable heresy.  I do realize all too well the practical need to include them to make the scorecard more palatable and sellable to executives who are often steeped in traditional management by the financial numbers.  And I acknowledge the perceived need to signal to the stockholders that their interests are still paramount.  But to earn a place on a scorecard, a metric should be directly actionable, and I would argue that financial measures simply are not.

Financial results are always dependent variables in the mathematics of metrics.  They are determined by the independent, non-financial metrics, which fall into two categories: controllable and uncontrollable.  Only the independent, controllable metrics are actionable.  There is ongoing and very constructive debate over whether any independent metric is really uncontrollable in the long run (for instance, exchange rates can be hedged, supplier prices contractually smoothed, and risk shed through insurance).

For example, you cannot really manage long-term sales (yes, I know that you can play lots of games with short-term sales numbers); you can only manage (i.e. improve) the controllable processes that cause sales to happen (new product development, marketing, sales force training, answer-getting, etc.).  It's appropriate measures of those processes that belong on a scorecard.  And the same argument holds for unit cost reduction.
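The dependent/independent distinction can be made concrete with a toy sketch.  Everything here is invented for illustration (the function name, the drivers, the coefficients are not from any real model): the financial result is merely computed from its non-financial drivers, so the only available levers are the drivers themselves.

```python
# Toy illustration: a financial result (sales) is a dependent variable,
# computed from independent non-financial metrics.  All names and numbers
# below are hypothetical.

def sales(win_rate, leads_per_month, avg_deal_size, fx_rate):
    """Sales revenue as a function of its drivers.

    win_rate, leads_per_month, avg_deal_size: controllable process metrics.
    fx_rate: a (largely) uncontrollable external factor.
    """
    return win_rate * leads_per_month * avg_deal_size * fx_rate

baseline = sales(win_rate=0.20, leads_per_month=100,
                 avg_deal_size=50_000, fx_rate=1.0)

# Management cannot act on "sales" directly; it can only improve a driver,
# e.g. raise the win rate through sales-force training.
improved = sales(win_rate=0.25, leads_per_month=100,
                 avg_deal_size=50_000, fx_rate=1.0)

print(baseline)  # 1000000.0
print(improved)  # 1250000.0
```

The point of the sketch is simply that "sales" never appears on the left side of an action: a variance in it can only be explained, and corrected, through the independent metrics on the right.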

As proof of this, just listen in on a typical management conversation around an unfavorable variance in a financial measure.  Inevitably, the discussion will first move to "uncontrollable" causes (exchange rates, economic conditions, supplier price increases, tight labor markets, etc.).  Only if that fails will the discussion turn to the underlying processes, and all too often it reverts to finger-pointing instead of real root cause identification.  Now compare this to a similar discussion about a well-conceived non-financial metric, where the process owner has a clear understanding of the root causes of variances from plan as well as credible corrective actions.

I will reluctantly bow to pragmatism, but I can't conceptually defend the mandatory inclusion of financial measures on a scorecard.  Companies that really benefit from a scorecard process will inevitably move the focus of their attention to the non-financial scorecard metrics.  And remember, if you can't make good money after stretch improvements in your most critical business processes, then it's probably time to reassess your strategy.

External and Internal

If you accept my premise that the scorecard's highest and best use is in strategically driven process improvement, then the next question is whether it must have a balance of stakeholder (external) and process (internal) metrics.  Because they are linked to strategic imperatives, scorecards are usually crafted at the top of the organization and deployed down to the action agents who are the only ones who can "make it happen."

Implicit in this cascade of scorecards is that the higher level ones will generally have metrics that steer lower level scorecards to required internal process improvements.  For this reason, high level scorecards tend to be unbalanced toward external or stakeholder metrics, while lower level ones need to focus on internal process metrics.  The actions taken by higher level scorecard owners are principally related to steering and diagnosing, while those of lower level scorecard owners are focused on process improvement.

The hierarchical structure of organizations implies that the number of scorecards increases as you move down the organization.  Consequently, scorecard metrics, taken in their entirety, should be unbalanced in favor of internal or process metrics.  Another way of saying this is that for the improvement of each external scorecard metric, there are usually several internal improvements required, and each of these has a place on someone's scorecard.

 

Leading and Lagging

This "requirement" goes to the very heart of the issue that led to the creation of an instrument that would raise the visibility of non-financial performance measurement in the first place.  You cannot manage lagging indicators ... they are inherently after-the-fact measures.  Certainly spectators are interested in the results.  It's interesting to know who won the World Series; but no aficionado, no participant would be content with that meager information.  To affect the outcome, we need to focus on the leading indicators.  So this issue translates into the basic question of who the BSC is for.  If it's for interested outsiders, lagging indicators have their place.  But if it is intended as a management tool, as a driver of future success, then it must be dominated by leading indicators, principally process metrics ... the only things that CAN be managed (see Selecting Scorecard Metrics for a more detailed discussion of this subject).

 

Short- and Long-term Objectives

Long-term objectives run the high risk of inadvertently incenting short-term inaction.  A distant goal loses its ability to motivate in the press of day-to-day business.  By the time that distant date approaches, the gap between current and desired performance is likely to be insurmountable, thus assuring failure.  A far better approach is to break a long-term objective into intermediate quarterly and/or semiannual milestones (using, for example, the Half-Life Method).  If progress toward the long-term objective is slower than required, more resources can be directed toward its achievement or the long-term goal must be reassessed.  If progress is faster, resources being used for this improvement can be redeployed to other needed areas.
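The milestone idea can be sketched with the half-life model itself: under sustained improvement, the gap to goal halves every fixed period.  The 8% starting defect level and 9-month half-life below are hypothetical inputs, chosen only to show how quarterly targets fall out of a single long-term curve.

```python
# Sketch of breaking a long-term goal into quarterly milestones using a
# constant-half-life improvement curve (the defect level halves every
# `half_life_months` months).  Inputs are illustrative, not real data.

def milestone(defect0, half_life_months, month):
    """Expected defect level after `month` months of sustained improvement."""
    return defect0 * 0.5 ** (month / half_life_months)

# Starting at 8% defects with a 9-month half-life, the quarterly targets:
targets = {m: round(milestone(8.0, 9, m), 2) for m in (3, 6, 9, 12)}
print(targets)  # {3: 6.35, 6: 5.04, 9: 4.0, 12: 3.17}
```

Each quarter, actual performance is compared against the curve: falling behind the 6.35% target at month three is a signal to add resources or reassess long before the distant goal becomes unreachable.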

 

Bottom Line

Well, where does this leave us?  If an organization were to force its scorecard to contain a numerical balance (equal numbers) of measures in each of the above categories, then I would argue that nearly half of them don't really belong there.  My advice is to avoid altogether selecting scorecard metrics based on any prescriptive and arbitrary framework that can clutter the scorecard with un-actionable measures.  Instead, insist that metrics on high level scorecards focus on performance gaps in areas deemed most important to strategically targeted stakeholders.  Then deploy these metrics down the organization to those processes whose improvement will contribute most to their closure (see my e-paper on How to Build a Balanced Scorecard).

In my opinion, most scorecards should be unbalanced toward internal, leading, short-term metrics.  At the highest scorecard level, where management diagnosis rather than process improvement is the main purpose, there is a place for long-term, external metrics, but here the purpose is to trigger a review of the appropriate lower level scorecards which should contain mostly internal, leading, short-term measures.  To accomplish this, those external metrics must be directly linkable to metrics associated with internal drivers.

For example, customer satisfaction indices are poor scorecard metrics at any level unless they are disaggregated into measures of the major drivers of customer dissatisfaction such as poor responsiveness, quality, delivery, or product features.  Once the principal drivers of customer dissatisfaction are known, they become potential high-level scorecard metrics that are then linked to the appropriate internal process drivers.
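A hypothetical sketch makes the disaggregation point: with invented driver weights and scores (none of these numbers come from the article), the aggregate index can look respectable while one driver accounts for most of the dissatisfaction, and only the disaggregated view says where to act.

```python
# Why an aggregate satisfaction index hides the actionable information.
# Weights and scores are invented purely for illustration.

drivers = {  # driver -> (importance weight, current score out of 10)
    "responsiveness": (0.40, 5.0),
    "quality":        (0.30, 9.0),
    "delivery":       (0.20, 8.0),
    "features":       (0.10, 7.0),
}

# The aggregate index: a weighted average that says little by itself.
index = sum(w * s for w, s in drivers.values())
print(round(index, 2))  # 7.0

# Disaggregating into weighted dissatisfaction gaps shows where the
# shortfall actually comes from.
gap = {name: w * (10 - s) for name, (w, s) in drivers.items()}
worst = max(gap, key=gap.get)
print(worst)  # responsiveness
```

In this made-up case a "7 out of 10" index masks the fact that responsiveness alone contributes two of the three missing points, which is the driver that would then be linked to internal process metrics.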

It's worth noting here that the popular strategy maps, used in communicating the balanced scorecard story, may require the inclusion of non-scorecard measures in order to weave a compelling logic path linking strategy to action.  These adjective-like measures, which perform a logic rather than action function, need not meet the same test as effective scorecard metrics.

It's ironic that the first balanced scorecard, created in 1987, was called the "Corporate Scorecard."  But its subsequent renaming often encourages dysfunctional behavior.  When I use the term "balanced scorecard," I'm simply bowing to the current vernacular.  So don't be misled by its name: a Balanced Scorecard need not be balanced.

 

NOTES:

1see for example: The Balanced Scorecard: Translating Strategy into Action, Robert S. Kaplan and David P. Norton, Harvard Business School Press, Boston, Massachusetts, 1996, ISBN 0-87584-641-3, preface.

2see for example: Relevance Lost: The Rise and Fall of Management Accounting, H. Thomas Johnson and Robert S. Kaplan, Harvard Business School Publishing, November 1986, ISBN 0-87584-138-4.

3see for example: Relevance Regained: From Top-down Control to Bottom-up Empowerment, H. Thomas Johnson, Simon & Schuster, June 1992, ISBN 0-02-916555-5.


© 1999-2006 Arthur M. Schneiderman.  All Rights Reserved

Last modified: August 13, 2006