“It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.”

— W. Edwards Deming

“If you can’t measure it, you can’t manage it.”

— Peter Drucker (but maybe not – see Related Reading below)

Who is right – Deming or Drucker?

Two giants of management thought and practice seemingly disagreeing completely with one another on a critically important matter. Which one is right?

I ran into this troublesome contradiction during some research on how to measure leadership performance and leadership potential for purposes of developing and improving them. Leadership development was a $24 billion industry in 2018, according to Training Industry, which bills itself as “… the most trusted source of information on the business of learning”. At that price, it sure must be important – and of course it is.

Leadership assessment, or measurement, is a foundational part of all this spending. A lot of folks out there are certainly measuring something and no doubt charging or spending big bucks for doing so.

Just to complete the confusion here, there is this quote by good old thinker Anonymous (Greek philosopher, I believe).

“If you can’t measure it, it doesn’t exist.”

— Anonymous

Possibly both are right (ignoring Anonymous)?

You can manage almost anything, even if you can’t or don’t want to measure it. We do this all the time. Measuring can be a real hassle and is often quite costly. So we simply manage by doing things that make sense or that everyone agrees seem to be right. If how we are managing or our managing skills are not working as well as hoped, we just change what we are doing to try something else that seems sensible and/or right. Who needs measurement?

So, Deming nailed it, yes?

Well, folks are certainly measuring all manner of things related to leadership and spending billions a year doing so. So, poor old (misquoted) Drucker appears to have the story right on this one. A lot of organizations agree with him enough to shell out big bucks for leadership assessments that drive leadership development programs costing further big bucks.

It appears there is no contradiction between Deming and Drucker after all. Some manage without measuring explicitly. Others measure all kinds of things as the basis for managing. Everybody is happy.

My take? Both are wrong.

If you can’t measure it, you don’t understand it

There are many things that are hard or even impossible to measure. Success. Happiness. Leadership. Leadership? There are thousands of leadership assessment providers out there. They sure are measuring something and getting well paid for doing so.

From what I have read, leadership is quite tricky to measure. Lots of research and approaches have been trying to get a good handle on just what constitutes leadership and how to improve it. Not a lot of agreement on just what works best.

An example is “How to Identify Leadership Potential: Development and Testing of a Consensus Model” by Nicky Dries and Roland Pepermans in Human Resource Management (May 2012). Pretty thorough and rigorous from what I can see. Traits and abilities considered as important include such “readily-quantified” items as emotional intelligence, perseverance, and dedication.

This appears in the end to be not much more than an interesting framework, a starting point for understanding what makes up leadership. I am sure that each one of these essential items can be “measured” in some way but they seem to me to be about as vague as leadership itself.

So, where does this leave us?

Well, we can certainly measure a lot of things that seem to be related to leadership potential. But credible measurements? Maybe, to some people.

Understanding leadership potential and performance

I wonder: Suppose that we could identify a group of people who were considered “successful leaders” by some other people, perhaps even us. Could they serve somehow as leadership models for comparison and assessment purposes?

Just to get started on this one, we would need to define a “successful leader” in a generally accepted way. Good luck with that – just Google “successful leaders” to get an idea of how many views there are on just this basic concept.

What you will see here is a bunch of hard-to-define characteristics similar to those in the figure above. This gets us exactly no place. Even if we put names on leaders that we consider successful intuitively, what is our basis for assessing them as particularly successful?

This question gets us pretty much back to square one.

Pick your favorite set of leadership qualities and run with it. Any set that works for you. Measure a bunch of stuff that can be easily measured and that seems somewhat related to your leadership definition.

That’s where we seem to be today.

Deming’s position – that many important things can’t truly be measured but must be “managed” or “improved” regardless – appears to be our reality. Drucker’s misquote also applies, since we try to measure what we can and do the best with whatever it tells us. Another reality. An alternative reality?

It gets worse: people and contexts are different

I have noticed after decades of working with many “leaders” that they are all different. Big surprise, yes? Many are simply excellent managers rather than “leaders” as I might define them. So, how might I define a leader?

Tough question. My answer, lame though it may be, is that leadership is both person-dependent and context-dependent. A “great leader” in one organizational context may be a weak leader in another.

I worked for some years with a really strong entrepreneurial leader who was amazingly effective in his initial startup context but proved relatively disastrous when his business context changed.

How might I attempt to strengthen him as a leader? I have no idea. He was who he was. His considerable strengths were impressive but only where they worked. This suggests to me that leadership in large measure is innate and suited to only a limited range of business contexts.

Another highly effective “leader”, whose highly-disguised story appears in the Bank CEO coaching example, was an extremely effective executive but a leader only in the narrow context of business growth. He was almost obsessively focused there, yet made several attempts to broaden the business base outside of its core. These all bombed, fortunately early enough that no damage was done.

He was an extremely effective leader within his particular business context but not elsewhere. Could he be “developed” for another context? I don’t believe so. Again, he was who he was.

So, leadership assessment and development is not generally possible?

Context may be the key to measurement?

Leadership may be largely in the eye of the beholder. My definition of a “leader” (vs. a strong manager or executive) has been developed over many different business and organizational contexts, from startups to multi-national enterprises. I’ll bet that your definition of a “leader” is quite different but no less valid.

The key here I think is context. We might agree to at least some extent on what should make a strong leader in a startup or entrepreneurial context, and perhaps even on aspects of large enterprise leadership.

In a recent post, I made an attempt to define leadership broadly enough to be applicable across contexts in general. You’ll have to judge the degree of success, if any, for yourself, but here are the dimensions I came up with:

Vision: Leadership requires vision. Vision motivates. Vision integrates and coordinates. Managers do things right. Leaders do the right things.

Integrity: The personal quality of being honest and having strong moral principles; moral uprightness; someone people will trust.

Courage: Having and demonstrating mental or moral strength to venture, persevere, and withstand danger, fear, or difficulty.

Communication: The ability to use words, sounds, signs, or behaviors to express or exchange information, ideas, thoughts, feelings clearly and understandably.

Passion: Having and demonstrating strong emotions reflecting an intense desire or boundless enthusiasm.

Commitment: The state or quality of being demonstratively dedicated to a cause, activity, project, person, or organization.

Action: Being able to routinely demonstrate the ability to act, to get something done, to show inspiring initiative or enterprise; action-oriented.

There may be a few more that are useful for leadership effectiveness but these appear to me to represent a decent core set.

The problem here, as you will see immediately, is the fundamental old one of measurement. Exactly what do we mean or understand by each of these leadership dimensions? And who is “we”?

Defining “leadership” for assessment purposes

The good news is that there are at least a zillion ways to approach this. Probably more. Pick one – any one that grabs you. Measure away. Then use your measurements to develop or improve a set of leaders. Where does that get you?

The proof is in the pudding, to quote a very old proverb attributed to Cervantes in Don Quixote. Frustratingly, it gets us right back to how we define a leader: results achieved, outcomes. This in turn brings up examples of people who we consider to be strong, successful leaders. Leadership role models but in our working context.

The French emperor Napoleon is widely regarded as one of the greatest leaders in history. Except for minor details, such as the fact that he destroyed the French army in his war on Russia and ended his days in exile on the island of Saint Helena. Good example or bad example?

My sense here is that we must begin with examples of real people who we regard as “leaders” in whatever our organizational context may be. Our very own leader examples. The types of people we want to develop more of, while strengthening those already well along the leadership path.

Now the challenge is understanding why we regard them as desirable leader role models. Again, these will be different for each organization and its managers. My leader role models are likely to be quite different from your leader role models.

Understanding in turn requires that we identify abilities, traits, performance, and such as our set of primary dimensions. Doesn’t this take us right back to square one?

It does not.

A comparative assessment approach

If we can identify a set of leadership role models for our context and needs, then we can compare our development candidates with these role models and assess the candidates as “equal”, “better than”, “not as good as”, etc. In practice, these comparisons would be done along a number of dimensions – the ones that matter most to us, to you. The assessments in this case are likely to be “gut-level” – qualitative – rather than “quantitative” (whatever quantitative means in practice).

Carefully chosen role models will almost certainly be identified along our own leadership assessment dimensions. These people will be truly successful leaders in our estimation even though we can’t really define why in all or even many cases. We just agree that these folks are what we want to develop and strengthen among our leadership candidate pool.
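To make the comparative idea concrete, here is a minimal sketch in Python. The dimension names come from the core set earlier in this post; the ordinal scale, the scoring, and the sample ratings are all illustrative assumptions, not a validated instrument – the point is only that “better than / equal / not as good as” comparisons against a role model can be tallied into a rough development picture.

```python
# Sketch of a comparative, role-model-based leadership assessment.
# Scale values and sample ratings are illustrative assumptions.

# Ordinal scale: how a candidate compares to the role-model benchmark
SCALE = {"better than": 1, "equal": 0, "not as good as": -1}

# Dimensions that matter most to your organization (core set from this post)
DIMENSIONS = ["vision", "integrity", "courage", "communication",
              "passion", "commitment", "action"]

def compare_to_role_model(ratings):
    """Summarize gut-level, per-dimension comparisons against a role model.

    `ratings` maps each dimension to one of the SCALE labels.
    Returns a simple net score and the dimensions needing development.
    """
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"unrated dimensions: {missing}")
    net = sum(SCALE[ratings[d]] for d in DIMENSIONS)
    gaps = [d for d in DIMENSIONS if SCALE[ratings[d]] < 0]
    return net, gaps

candidate = {
    "vision": "equal", "integrity": "better than",
    "courage": "not as good as", "communication": "equal",
    "passion": "better than", "commitment": "equal",
    "action": "not as good as",
}
net, gaps = compare_to_role_model(candidate)
# net of 0 here: two strengths offset two development gaps
```

Note that the net score is deliberately crude; the useful output is the list of gap dimensions, which points a development program at specifics rather than at “leadership” in the abstract.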

Extending this approach to leadership self-assessment

Even an individual like you or me can use this comparative approach to develop a leadership self-assessment. I have quite a few leadership role models, or reference points, that I could use as my self-assessment set. You, I am sure, can do the same with a little thought.

With these role models in mind, the next step is to choose a set of leadership traits, abilities, performance, and qualities that seem most important to you as leadership indicators. You can choose your own or look up what others are using.

The final step in self-assessment is simply, or maybe not quite so simply, to rate yourself along each of your dimensions. Your scale might be something like: “Excellent”, “Above Average”, “Average”, “Below Average”, and “Poor”. There are many other Likert scales to choose from.

Bottom line:

Stating that you have to be able to “measure” something before you can manage, develop, or improve it seems like a truism, but it isn’t. It is a vitally important management challenge, especially when applied to business and organizational leadership development. You need to be able to assess potential to choose from among many available candidates, to assess where each person is strong and weak, and to track individual progress for your leadership development program. Critically important today.

Related Reading

The Drucker Institute weighs in on this management maxim, probably with adequate authority:

“If you can’t measure it, you can’t manage it.”

“This maxim ranks high on the list of quotations attributed to Peter Drucker. There’s just one problem: He never actually said it.”

“Confession: I’m a numbers guy, and so I’ve always loved using this purported Druckerism. After rolling it out at a recent conference to emphasize the importance of measuring outcomes, Zach First of the Drucker Institute, who was also there, kindly informed me of my mistake—not only on the misquote, but regarding Drucker’s broader views on the subject.”

“The fact is, Drucker’s take on measurement was quite nuanced. Yes, he certainly did believe that measuring results and performance is crucial to an organization’s effectiveness. ‘Work implies not only that somebody is supposed to do the job, but also accountability, a deadline and, finally, the measurement of results —that is, feedback from results on the work and on the planning process itself,’ Drucker wrote in Management: Tasks, Responsibilities, Practices.”

“But for all that, Drucker also knew that not everything could be held to this standard. ‘Your first role . . . is the personal one,’ Drucker told Bob Buford, a consulting client then running a cable TV business, in 1990. ‘It is the relationship with people, the development of mutual confidence, the identification of people, the creation of a community. This is something only you can do.’ Drucker went on: ‘It cannot be measured or easily defined. But it is not only a key function. It is one only you can perform.’”

“What a wonderful insight. When it comes to people, not everything that goes into being effective can be captured by some kind of metric. Not enthusiasm. Not alignment with an organization’s mission. Not the willingness to go above and beyond. True, a 360-degree review might pick up on some of these qualities, but often poorly.”

DMAIC is an interesting approach that I recently stumbled across. Measuring something as inherently vague and largely subjective as “leadership potential and status” in a credible way that can guide development programs is a widely recognized challenge. DMAIC places some helpful structure on the overall process (via Wikipedia):

“DMAIC (an acronym for Define, Measure, Analyze, Improve and Control) (pronounced dee-MAY-ick) refers to a data-driven improvement cycle used for improving, optimizing and stabilizing business processes and designs. The DMAIC improvement cycle is the core tool used to drive Six Sigma projects. However, DMAIC is not exclusive to Six Sigma and can be used as the framework for other improvement applications.”

“Steps. DMAIC is an abbreviation of the five improvement steps it comprises: Define, Measure, Analyze, Improve and Control. All of the DMAIC process steps are required and always proceed in the given order.”

“Define. The purpose of this step is to clearly pronounce the business problem, goal, potential resources, project scope and high-level project timeline. This information is typically captured within a project charter document. Write down what you currently know. Seek to clarify facts, set objectives and form the project team. Define the following:

> A problem

> The customer(s), SIPOC

> Voice of the customer (VOC) and Critical to Quality (CTQs) — what are the critical process outputs?”

“Measure. The purpose of this step is to measure the specification of problem/goal. This is a data collection step, the purpose of which is to establish process performance baselines. The performance metric baseline(s) from the Measure phase will be compared to the performance metric at the conclusion of the project to determine objectively whether significant improvement has been made. The team decides on what should be measured and how to measure it. It is usual for teams to invest a lot of effort into assessing the suitability of the proposed measurement systems. Good data is at the heart of the DMAIC process.”

“Analyze. The purpose of this step is to identify, validate and select root causes for elimination. A large number of potential root causes (process inputs, X) of the project problem are identified via root cause analysis (for example a fishbone diagram). The top 3-4 potential root causes are selected using multi-voting or another consensus tool for further validation. A data collection plan is created and data are collected to establish the relative contribution of each root cause to the project metric, Y. This process is repeated until ‘valid’ root causes can be identified. Within Six Sigma, often complex analysis tools are used. However, it is acceptable to use basic tools if these are appropriate. Of the ‘validated’ root causes, all or some can be:

> List and prioritize potential causes of the problem

> Prioritize the root causes (key process inputs) to pursue in the Improve step

> Identify how the process inputs (X’s) affect the process outputs (Y’s). Data are analyzed to understand the magnitude of contribution of each root cause, X, to the project metric, Y. Statistical tests using p-values accompanied by Histograms, Pareto charts, and line plots are often used to do this.

> Detailed process maps can be created to help pin-point where in the process the root causes reside, and what might be contributing to the occurrence.”

“Improve. The purpose of this step is to identify, test and implement a solution to the problem, in part or in whole. This depends on the situation. Identify creative solutions to eliminate the key root causes in order to fix and prevent process problems. Use brainstorming or techniques like Six Thinking Hats and Random Word. Some projects can utilize complex analysis tools like DOE (Design of Experiments), but try to focus on obvious solutions if these are apparent. However, the purpose of this step can also be to find solutions without implementing them.

> Create

> Focus on the simplest and easiest solutions

> Test solutions using Plan-Do-Check-Act (PDCA) cycle

> Based on PDCA results, attempt to anticipate any avoidable risks associated with the “improvement” using the Failure mode and effects analysis (FMEA)

> Create a detailed implementation plan

> Deploy improvements”

“Control. The purpose of this step is to embed the changes and ensure sustainability; this is sometimes referred to as making the change ‘stick’. Control is the final stage within the DMAIC improvement method. In this step: amend ways of working; quantify and sign off benefits; track improvement; officially close the project; gain approval to release resources.

> A Control chart can be useful during the Control stage to assess the stability of the improvements over time by serving as 1. a guide to continue monitoring the process and 2. a response plan for each of the measures being monitored in case the process becomes unstable.

> Standard operating procedures (SOP’s) and Standard work

> Process confirmation

> Development plans

> Transition plans

> Control plan

> Benefit delivery”

“Criticisms. One common criticism of DMAIC is that it is ineffective as a communication framework. Many improvement practitioners attempt to use the same DMAIC process, effective in solving the problem, as a framework for communication, only to leave the audience confused and frustrated. One solution to this problem is to reorganize the DMAIC information using the Minto Pyramid Principle’s SCQA and MECE tools. The result is a framed solution supported by easily followed logic.”