Category Archives: Performance Measures & Scoring

Persistent Joke #5

Similar to the Twelve Days of Christmas, we are placing special emphasis on Number 5!

This one comes to us via Henry Mintzberg at McGill University. Karl Moore has a very popular cover version of this joke.

Here is the joke:

A group of soon-to-be freshly minted MBAs are sent on their final program project, where they leave the world of case studies and simulations. They are to test their skills on a “live patient” as they analyze a real organization and provide strategic counsel to improve its future competitiveness.

Through a family connection, they start to work on a mid-level professional orchestra.

Sound strategic analysis tells the group that one high-potential strategy for “regional service providers” is to focus on efficiency in delivery, which can create the cash flow required to acquire and grow. After careful and thorough analysis, the group reports back with observations and strategic recommendations. Highlights include:

Under the heading Talent Effectiveness:

  • The group seems to be performing mostly on evenings and weekends. Payroll is a major expense for such services and we are concerned that you are paying a premium for work outside of regular business hours.
  • Performances tend to be 2 or 3 hours, several times per month, with summers off. Operations Management thinking around “batch processing” will suggest that grouping performances together can curtail costs for set up and take down.

Under the heading Technology Deployment:

  • Some of the instruments appear very outdated, with one violin being several hundred years old. A comprehensive equipment refresh would take advantage of new materials that require much less maintenance and, in many cases, weigh a great deal less than older instruments.
  • Audio technology to amplify sound could mean that some of the sections that currently employ several people—who, by the way, are often playing EXACTLY the same thing—could be reduced to one player per part.

NOTE: This latter recommendation will allow additional seating for audience members on the stage. (The group attributed this insight to Blue Ocean Strategy.)

Owing to extreme tact, the group was able to access position-level compensation. This led them to create a special subheading, Immediate Next-level Impact:

The conductor’s remuneration is the highest of the group, although the function of this position during performances seemed to be largely to keep time and cue musicians. The group provided contact information for a classmate developer who would create an app that could both keep time and deliver instant messaging cues through small electric shocks so as not to create audio interference.

Needless to say, when leadership of the Orchestra politely rejected ALL of this strategic advice, our group took it as evidence that status quo decision makers often reject “out of the box thinking” only to come to regret that decision later on when they inevitably cease to operate competitively.

Here is the point:

In many workplace interactions, let’s be mindful of three important elements for those participating:

  • Level of understanding of the situation,
  • Level of insight into workable improvement, and
  • Level of confidence to share thoughts.

Misalignment between the last one and the first two creates unfortunate situations where EITHER people speak without knowing what they are talking about OR they do not speak when they really should.

Persistent thought-provoking joke #2

Having shared passengership on an ill-fated cruise, three professionals (a mechanical engineer, a chemist and an economist) find themselves marooned on a desert island with no source of nourishment but cans of tuna. Together, using emergency equipment from a life-raft, they have fashioned a means to capture fresh water from rain and condensation. Having mitigated the risk of dying of thirst, the three now turn their problem-solving skills to avoiding starvation:

Mechanical Engineer: I am going to walk up the beach to see if there are rocks and vines we can use. We may be able to generate enough force to break into those cans.

Chemist: If we can get enough super-salinized water in a receptacle, we may be able to soak the cans and speed up the corrosion that will weaken the cans. I am going to start by digging a hole in the sand.

Economist: I’ll set the table.

Both non-economists in unison: Hey! We can’t eat until we get these cans open!

Economist: Oh, sorry. I should have told you. I am assuming that we have a can opener.

The metaphor here is for the problem: “how do we open these cans of tuna?” The point of the joke is that we all bring our own tools and orientations to any problem. “If all you have is a hammer, everything looks like a nail” is yet another humorous illustration of the weight of our own expertise in driving the action we think is best.

Rather than mocking economists for making assumptions, this joke illustrates that we all make assumptions all the time. In dealing with areas that are complex and have a degree of ambiguity (like managing performance in any business), we have to make some assumptions and make some decisions based on less-than-certain data and evidence.

Perhaps, once our two non-economists gently raise awareness of the fact that the stated assumption does not hold, our economist can engage in a more impactful supportive role. Such moments of redirection require a fairly specific context that includes shared focus, mutual trust and assertive communication.

Persistent thought-provoking joke #1

A new beat police officer patrolling the street one night finds a man squatted down at the base of a lamppost. He appears to be looking for something. The officer learns that the lost object is indeed car keys and, being a kind soul, joins in the search. After a few minutes, and almost apologetically, the young officer probes further, kicking off the following dialogue:

Officer: “Where did you last have the keys?”

Man: “I had them when I went into that alley to, ah, take care of something.”

Officer (curious about the “something”, but feeling the next question is almost patronizing): “Then can I ask you why you are searching for them at the base of this lamppost?”

Man: “Well, you see, the light is so much better out here.”

Funny? Relevant? Yes, indeed!

The joke relates to the area of organizational performance in that people will focus on the data/evidence/findings that they have: the metaphorical circle around the base of the lamppost that is reasonably well lit. The statement, “If you can’t measure it, you can’t manage it,” suggests that we pay attention to what is under the lamppost… or, bring big lights in to illuminate the alley.

The alley is the metaphor for the ambiguity and uncertainty that surround many business decisions. Even with top-flight (and expensive!) lighting, we are not going to see every nook and cranny. In fact, the alley is a misleading metaphor because it has walls and eventually ends. Maybe the joke should reference a recollection of having the keys while in “the abyss.”

Maybe we can wade into the alley/abyss with a flashlight and create more well-lit circles to inform the situation. This should be an area of curiosity about what else we could be measuring.

Maybe we simply venture into the alley and see what we bump into. This may indeed uncover some learning about which parts of the abyss are more interesting.

It is much more comfortable to stay by the lamppost, but this limits our ability to assess the situation and, thereby, monitor and judge performance.

The solvable problem with Self Evaluations

I have a vivid memory of a self-evaluation from my undergrad days at McGill. We had to take a writing course, which must have been a cross-cultural exercise for the Faculty of English instructors who ventured into the Business building for these weekly encounters. There was a self-evaluation at the end of it, which, if I recall correctly, included a preamble that encouraged reflection on your development over the 12 weeks, as well as your ability compared to your classmates. I think I may have been guilted into responding “B+” and admitting that I could really have done more. I talked to classmates afterwards, some of whom had skipped a number of classes. Their responses were “A. Can I give myself an A+?” (Note: A+ was not an option. McGill operated on the U.S. 4-point GPA system.)

This is a very obvious example of the challenges of “self-evaluations.” Self-attribution bias leads us to truly believe that we excel. Self-preservation instincts dampen the guilt of overstating the truth because these results can create positive future options or avoid negative future outcomes. For the undergrad business student, “strong marks = better job opportunities upon graduation,” so go for the A. In a business context, if staffing cuts are looming, do you really want to have a mediocre self-evaluation in the HR file?! I wonder how many of my fellow graduates, decades into their business careers, have grown to learn that actual strength in writing provides a significant advantage in the workplace.

Necessary perspective

The self-evaluation brings the performer’s perspective into the discussion, which is absolutely crucial and applies to an organizational context. In addition to “perspective,” objectivity is also vital, and this is enabled by clear criteria. The healthiest criteria mix features both “what you do” (i.e. somewhat controllable; gets at “how” you get results) and “what you accomplish” (i.e. somewhat more impacted by external factors; focussed on outcomes).

The evaluation becomes less of an assault on the ego if we can validate that someone did “what was expected” even if they did not “achieve expected results.” This demands some time and effort up front to go through the exercise of making logical connections between activity and results. You have to be ready for a reasoned conversation about what drives performance.

“How are we doing?” is a big question

For an organization, the questions “what results do you want to achieve?” and “what do you think gives the best chance of achieving those results?” are really big questions that bring out some deep-seated assumptions. A good strategic discussion will expose these and will explore some of the big decisions behind some of our assumptions. This should surface options to move forward rather than a single clear best way. Imagine interactions where people say, “We said we are trying to reduce T&E, so we can’t fly everyone down for this meeting,” OR “We said that our focus was growing our business with our top-tier accounts, so we can’t get too anxious because we lost some tier-3 business.”

Like anything, involvement breeds acceptance, so it makes sense to have a senior-team conversation to tease out relevant expected outcomes and relevant expected actions. When you are involved in creating your own report card, the evaluation feels less daunting. This may turn down the volume on the “self-attribution bias” and the “self-preservation instinct.”

The Feedback Context – Developing and Evaluating

When it comes to performance, the question “How are you doing?” can start a very rich discussion. Do you really want to know? Do we really have a good way to gauge it (except by historical occurrences or lack thereof)? In typical business education fashion, let’s say it depends.

Feedback fills a really nice space in a working context, and any survey of employees will say that it is much-sought-after information. Hopefully the yearly performance evaluation as the sole source of feedback is a thing of the past that left with the move toward “flat” organizational structures and non-linear career development. There is an important difference between “evaluative feedback” and “developmental feedback.” I argue that they are best kept separate in order for feedback to work both for individuals and for contributing to the performance culture of an organization.

Feedback for evaluation

In the evaluation sphere, the guiding question for feedback seems to be, “I am doing awesome, right?” or “You’re not going to fire me, are you?” The receiver is primed for positive reinforcement or for some peace of mind that their job is not in jeopardy. This can be driven by a number of things, but ego is probably front and centre. Research routinely finds that much more than 50% of a group will think that they are above average. This creates an unworkable situation in many workplaces where we are striving for performance “excellence,” but those delivering “average” think they are going above and beyond the call. If the tick boxes are “meeting” and “exceeding” expectations, most of those you evaluate will be disappointed with the former, even though logic dictates that in a performance culture the expectations are high. To further complicate things, the Dunning-Kruger effect suggests that those who are furthest from “excellence” will think themselves closest to it.

I encounter this in classroom evaluations in the MBA program within which I teach. There is a conundrum created by the university expectation of a B to B+ class average and the reality that the large majority of students think that they should be getting As. This drives a reluctance to accept critical feedback without defending and justifying one’s position (presumably because accepting criticism would set the stage for accepting a lower or “average” evaluation).

If this is the kind of tension that manifests itself in the workplace, it is no wonder that managers find providing feedback a challenge. Who wants to get into a debate about someone’s performance? It becomes so much easier to provide positive feedback, or at least put a bigger emphasis on the positive elements, even if those are not the most relevant. (e.g. Don’t worry about the sales results; you had a lot of really good meetings with some very key people.) One of the knock-on effects for an organization is that the standards get relaxed (e.g. the President’s club gets expanded) and there can be a general inflation of any quantified evaluation (e.g. you see more 5-out-of-5s or 100% ratings). This expands and dilutes your group of top performers. This does not have to be the case, but in many organizations there is a lot of resignation that this is just the way it is.

On the other end of the bell curve, a similar inflation can happen: unsatisfactory performance can get bumped up to “satisfactory” or better (which is actually much worse). If you are after high performance, this situation could not be worse. Your very high performers will be grouped in with the “average,” and the “below average” are convinced that they are doing their job.

Feedback for development

The nuanced difference with this kind of feedback is that everyone can improve: good enough is never good enough. There is an apocryphal anecdote about Prince and his back-up band, the Revolution. When working on a new number, the band members were encouraged to let Prince know if they had mastered their part during the rehearsal period. He would be ready with an extra guitar lick, a percussion part, a dance move, a vocal harmonization, etc., to keep them occupied while the rest of the band worked on their parts. The message being: don’t hide the fact that you can handle more.

With evaluation in place, too often people direct effort based on the location of the goal line. This is why those who are being evaluated complain when we “move the goal posts.” With an evaluative set-up, your most capable performers (especially those who understand the system) know exactly how much effort to expend to meet the given bar and not make the next one any higher. I worked with a sales manager who was surgical about meeting budgets almost to the penny (nickel?) and mysteriously having a bunch of business “just come in” in the early weeks of the new quarter. The fear of having the goal posts moved based on an extraordinary result is a function of the evaluative context. The risk of falling short is enough of an incentive to launch very elaborate gaming of the system.

For lower-level performers, you get the opposite behaviour, where people will sacrifice “next quarter” in order to drive short-term results. Picture the account manager who is pushing a major client to close business to meet the end of their quarter, and goes as far as to extend an exploding offer in the flavour of “If I can’t get your commitment on the renewal before next week, I am not sure that we can extend the same offer.” Picture also the predictable response from the major client: “We will make our decisions based on our financial year, not yours. Thanks very much; we will talk to you in about 6 weeks and will be expecting at least as good a deal as you just described.” It is hard to say whether such an exchange would actually hurt next year’s arrangement, but I suspect there would be some future blowback.

With a developmental mindset, we can entertain stretch goals without worrying about people feeling that they “failed” by achieving 89% of a really challenging goal. The “evaluated outcome” is immaterial; the direction (e.g. forward) is the only thing that matters (so it is crucial to have a shared understanding of what “forward” looks like). The focus is simply what went well and what could be better next time.

Analogy from the world of sport: “Hey, Jason Day, I know that you are number one in the world and just won in a blowout. Your approach shots are phenomenal, but can we talk about your course management off the tee? Twice you were between clubs, and better distance control could have you relying a bit less on feel for those shots.”

In the world of work, this will equate to drawing attention to something that had to “give” to meet a deadline, achieve a client outcome, etc. In the “developmental world,” this won’t be seen as “Great job, but…” There will be a thirst and expectation for some reflection on how this could be even better, or more sustainable, or less contentious, or… some other desirable—even aspirational—attribute. This is the drummer for the Revolution keeping great rhythm, nailing the drum solo, and working on a juggling move for next time.

When you try to do both…

The flavours of feedback have very different intents: evaluative feedback seeks to differentiate performance (separating the wheat from the chaff), while developmental feedback doesn’t care how you stack up (it simply asks how you could improve).

Here are the very predictable outcomes you can expect from not distinguishing between these types of feedback.

The effect on top performers can happen in at least three ways:

  • Just-enoughing: As mentioned above, giving exactly enough effort to attain the prescribed “goal.” A friend of mine joked about the grading at our alma mater, McGill University, saying there are only two grades you should get: 85%, the minimum to get an “A” (this was the highest grade, so any effort beyond it had no impact), and 56%, the minimum to get your credit. The thinking being: you certainly don’t want to fail, but unless you are getting an “A,” the grade doesn’t matter.
  • Sandbagging: In setting the original goal, you can count on sandbaggers to actively negotiate a bar that they are certain they can attain, while convincing you that it is a stretch goal. In client-facing activities, this is called managing expectations. This may be avoided by splitting up the types of feedback.
  • Skinner-boxing: When attempts to motivate involve evaluation and reward, you can create a stimulus-response dynamic where you feel that the only way to maintain performance is to continue offering tangible rewards. Peak performance has to include some intrinsic drive. Any theory of human motivation takes us beyond the gun-for-hire, will-work-for-rewards mindset.

The effect on “average” performers may be that the category ceases to exist. With a reluctance to acknowledge that one is average, the evaluators can be stuck evaluating everyone as “high performing.” JK Simmons in the movie Whiplash has a nice little soliloquy about performance that finishes with: “there are no two words in the English language more harmful than ‘good job’.”

Underperformers are always an interesting group. Jack Welch had an answer for this group (identified as being in the bottom 10%): Fire them. As cold-hearted as that seems, the compassionate view is that the situation is not working for either party. There is something about the context that is not working. You will find a context that is a better fit for you. It’s not you… it’s us (and you). When you are able to separate the results from the development, you can get a cleaner look at what is not working. If it can’t be fixed, maybe the “exit” is best for all involved.

What exactly to keep separate

For the developmental conversation, there is no differentiating. Everyone is tagged for improvement. The evaluation decisions you make have a huge impact on the culture of the organization because you will get the behaviour that you reinforce. Bringing performance out in the open will create pressure to align with existing systems and practices.

The mindset when approaching a top-performer has to be around tapping into intrinsic areas to maintain motivation. What can we do to help you be even better? What can we do to help you develop in ways that you want to?

The approach for the “average” employee should balance the possibility that the person could be performing at a higher level, but chooses not to. As an organization, are we OK with someone phoning it in and delivering adequate results? There will also be those who work really hard for adequate results. Can those two archetypes co-exist?

If performance and accountability are part of the fabric of your organization, healthy churn on the lower end of performance will mitigate churn at the top end because results do matter. Many organizations (but not all) will want to maintain compassion and understanding about underperformance because every industry has external factors. If sales are down because no one is buying, do we really judge by results only? Again, curiosity should be the driving force toward those who are not delivering at the level they should be. Did we not do a good job of assessing potential at onboarding? Has the work environment changed such that your best effort is no longer good enough?

The conversation about the “change in fit” is not limited to under-performers. You may find that churn can be healthy across different levels of the performance spectrum. A former colleague of mine, who was a very long-standing top-performer, talked about diplomatically broaching the subject of a “new horizon” next step in a performance review. Apparently, this had been an elephant in the room and both parties seemed to appreciate the overt acknowledgement. A CEO client routinely states, “No one is working here forever, including me.” These discussions are clearly in the “development” realm and can very quickly tie into the clichés that “everyone can be replaced” and “if you love something, set it free.”

The way in which an organization handles performance has a huge impact on the culture. This is a complex collision of scientific evaluation, individual motivation and the art of collaboration. Drawing a clear divide between the development and the evaluation will give you a better chance at getting and sustaining the desired performance.

Diversity Boxes – ticking and talking

The Schumpeter column of The Economist took a run at diversity this week with the hypothesis that fatigue is a big part of the problem. This fatigue appears to take different forms:

  • We hear about it far too much (Enough already!)
  • We hear about it but nothing changes (Not enough yet!)
  • We hear about it, but what does it really mean? (When is enough enough?!)

A look at the article’s comments section (which is always a dangerous move) reveals everything you need to know about the multitude of issues attached to this surprisingly complex word. Doubts and critiques expose some deep philosophical questions, as well as some statements that one is surprised to see in a written format (or not surprised, if you tend to read the comments sections of publications).

A couple of things are clear about diversity:

  1. This idea has been getting attention of late. (I recall a similar trend bubbled up around the multi-generational workforce in the last decade or so. Maybe this, too, will pass or linger.)
  2. The word has many different interpretations and understandings.
  3. Consistent with 2, ideas vary on whether an organization needs it and, if so, how best to get it.

One of the ideas that the article attacks is diversity as a “tick-the-box” activity. Fittingly, the differing narratives surrounding “diversity” bring one critique stating that box-ticking organizations actually deserve credit because at least they are doing something!

Is it reasonable to say that the merits of box-ticking depend on the contents of the box?

There may be some consensus that filling the ranks with “the token [insert statistically underrepresented group member]” probably doesn’t work for anyone. (But I can imagine being challenged on that statement.) So, we should stay away from those kinds of boxes.

Similarly, awareness building (especially when the topic is on heavy rotation in media) can also wear thin. So, maybe it’s not enough to “tick the box” on the Diversity Lunch & Learns.

If we are trying to prevent an over-reliance on predictable cognitive biases in important decisions, maybe we can tick the box on the presence of such initiatives as:

  • panel interviews for new hires
  • formal meetings of the senior leadership team to discuss and determine merit bonuses for employees above a certain level
  • determining tangible indicators to test the connection between our idea of diversity and our idea of performance

This is by no means an exhaustive list, nor is it a collection of best practices. Well-intended efforts to “do the right thing” can quickly get lost in contentious world-view debates that risk making the situation worse. We are convinced of the merits of digging into an idea like diversity to understand how it fits into the business and finding some clear ways to track the progress of distinct efforts, even if that means ticking some boxes… but only the good boxes.

The Balancing Act of Collaborating

There is lots of talk about “getting on the same page,” but in most work situations some level of conflict persists, varying from subtle differences in opinion to diametrically opposed views. We all know that maintaining cordial working relationships is a must, yet too much focus on appeasing diminishes our results, and too much focus on our own agenda carries the risk of losing status as “a team player.”

It can feel a bit like walking a tightrope and constantly balancing between

  • Being self assured, but not belligerent.
  • Being accommodating, but not spineless.
  • Being ambitious, but staying realistic (Picture a “stretch goal” snapping our rope!)

Sustaining forward momentum while keeping this “balance” is also tricky. There are three large areas of attention that can help:

How am I seeing the situation (and should I look at it differently)?

With reams of data at our disposal, it is very easy to arrive at very different evidence-supported answers to the question “how are we doing?”  Those closest to the situation tend to have a really good read on how things actually work, but once performance measures are imposed, these same people can start to question their gut feelings. Taking time to gather a different perspective on your own may be more effective than simply taking in the perspectives of others. One part confidence; two parts humility.

Who do I have to work with (and how are those existing relationships)?

We have relationships to manage up, down and across. Our stakeholders will vary in the stature they hold in the organization, but individual differences in style almost guarantee interpersonal challenges amidst the organizational politics. In practice, we have to navigate a complex web to get what we want for ourselves and for others. Efforts at building/rebuilding relationships can make the tightrope seem a little wider (or maybe not so high).

What are the real priorities here (or, at least, what should they be)?

Sticking with the “rope” metaphor (why abandon it now?), what happens when tightropes turn into tug-o-wars? Such situations tend to consume lots of effort, but provide disappointingly little in the form of results. Many of us are not in the position to impose our views on the organization, but we all can exert a degree of influence. Even when things are at cross-purposes, speaking truth to power can be scary. Is asking power for a small clarification any better?

Can logic models work for you?

The “logic model” is a tool that is widely used in public and social sector initiatives. Like any tool, there are obvious on-target applications (e.g. hammer for inserting nail) as well as more creative applications (e.g. hammer to open a paint can). In all cases, the user is responsible for picking the right tool for the application. To me, there is relevance for the logic model in the private sector because this tool can expose assumptions (logical or not) and bring rigour to the thinking. Here is a quick primer on logic models, followed by some suggestions on if/how/when to use it for your business.

USEFUL VOCABULARY

Theory of Change: this is a set of fundamental assumptions that underpin a line of reasoning. It is often referred to in solving large social issues like homelessness or poverty. In a private sector context, for example, an ad agency president might believe that to be successful, her team has to know clients’ businesses better than the clients do. She sees her team as “providers of insight” rather than “meeters of needs.”

Logic Model: a framework that allows you to portray the specific linkages of your reasoning from the resources you expend to the final impact that you will have. The model takes into account the linkages between four fundamental components:

  • Inputs – These are resources that we control and choose to deploy toward the end objective. This is usually about money and time. Energy fits in here, too.
  • Outputs – This is what we create or produce or get from expending the “input” resources. This could be a report, the provision of a service, creation of some capacity, etc.
  • Outcomes – The specific ways in which the outputs help us: we are better able to do something, or something is improved, because of the output created from the inputs.
  • Impact – This is the higher order calling of the whole endeavour. What did we set out to address in the first place? This is what we were after all along.

WORKING EXAMPLE

The thing about logic is that it can seem both commonsensical and obvious, while also seeming a bit opaque. To alleviate the latter, here is a quick example: Our agency leader (who believes that “provider of insight” is the way to success) might have the following idea.

Let’s get some of our junior staff to work on developing industry reports that capture both analyst information, as well as “chatter” from social networks. They will create an overview document as a summer project, and monitor/update on an ongoing basis. Our senior account people will refer to these before client meetings, and also share insights gained from the direct client interaction.

Breakdown using Logic Model:

  • Inputs – Junior staff hours in creating foundational document and ongoing monitoring (hours); Senior account staff time in inputting client insights (hours)
  • Outputs – The actual document, once it is created, and the fact that it is actually kept updated.
  • Outcomes – Senior account staff go to meetings with broad industry knowledge that they use to: (1) demonstrate knowledge to clients; (2) share value-adding insights; (3) initiate strategic conversations, etc.
  • Impact – Clients will use us more.

Note: The unspoken qualifier “we hope” gets louder with each step of the model.
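For readers who think in code, the four-part chain above can be captured in a small data structure. This is a hypothetical sketch (the class and field names are my own, not part of the model itself), using the agency example from the text:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """One line of reasoning: resources -> products -> benefits -> purpose."""
    inputs: list    # resources we control and choose to deploy (time, money)
    outputs: list   # what expending those resources creates or produces
    outcomes: list  # the specific ways the outputs help us
    impact: str     # the higher-order goal we were after all along

    def chain(self) -> str:
        """Render the model as a single inputs -> impact chain."""
        return " -> ".join([
            "; ".join(self.inputs),
            "; ".join(self.outputs),
            "; ".join(self.outcomes),
            self.impact,
        ])

# The agency example, encoded in the model
agency = LogicModel(
    inputs=["junior staff hours", "senior account staff hours"],
    outputs=["industry overview document", "ongoing updates"],
    outcomes=["meetings informed by broad industry knowledge"],
    impact="clients will use us more (we hope)",
)
print(agency.chain())
```

Writing the plan down this way forces you to fill in every field; a blank `outcomes` or `impact` is the gap in logic the model is designed to expose.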

USING THE TOOL

Really thinking through these connections demands a good degree of effort and will: what do we want to “impact,” and how will we actually go about getting there? This is the sweet spot of the logic model. To illustrate the difficulty, recall the success of the ALS Ice Bucket Challenge. It was a huge success in gaining awareness (Mel B. did the challenge on America’s Got Talent!), but you may still ask: “So what? Are those afflicted by ALS better off? If so, how?” You can imagine that asking such questions without being labelled a “doubter,” “hater,” “loser,” etc. would be no mean achievement. This is an inherent challenge of such models: people don’t like to have the gaps in their logic exposed.

To use this tool effectively, leadership has to be comfortable explaining their logic (e.g. “provider of insight” beats “meeter of needs”), and the followership has to be comfortable trying it out (even if they don’t believe it in the first place).

Building the connections between the elements is an important exercise. You end up asking really good questions, for example:

Input to output questions: What are we getting for all these hours that we have put into research?

Output to outcomes: Is our new report, tool, capacity, etc. actually contributing to something that we are using, noticing, applying, etc.?

Outcomes to impact: Is our idea of the “means to the end” actually playing out? What do we really want here? What are we trying to achieve anyway?
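The three bridging questions above can also be treated as a checklist, keyed by each link in the model. The sketch below is purely illustrative (the function and dictionary are my own invention): it flags any link whose downstream element has not yet been thought through.

```python
# Diagnostic questions for each link in the logic model
# (the questions themselves are paraphrased from the text above)
LINK_QUESTIONS = {
    ("inputs", "outputs"): "What are we getting for all these hours of research?",
    ("outputs", "outcomes"): "Is the report, tool, or capacity actually being used, noticed, applied?",
    ("outcomes", "impact"): "Is our 'means to the end' playing out? What are we really trying to achieve?",
}

def review(model: dict) -> list:
    """Return the bridging question for every link whose downstream
    element is empty, i.e. where the chain of logic breaks down."""
    gaps = []
    for (src, dst), question in LINK_QUESTIONS.items():
        if not model.get(dst):
            gaps.append(f"{src} -> {dst}: {question}")
    return gaps

# A half-finished plan: inputs and outputs defined, but no outcomes or impact yet
plan = {"inputs": ["staff hours"], "outputs": ["industry report"],
        "outcomes": [], "impact": ""}
for gap in review(plan):
    print(gap)
```

Running the review on the half-finished plan surfaces exactly the two unanswered questions, which is the point of the exercise: the model makes you ask them before someone else does.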

This is the kind of thinking that goes into our “performance playbook” process to help ensure that the measures you are choosing hang together with the logic under which you are operating.

 

Aligning for Performance – Where to start

The Lululemon stories coming out this week illustrate, if nothing else, that running a successful business is a complicated endeavour. There are a number of interests to balance, and something always has to give. Determining exactly what should “give,” and exactly how to implement that decision, introduces an interplay between three dimensions of an organization:

  1. Overall Direction
  2. Measures and Metrics
  3. Rules and Norms

To have a serious look at “performance,” each of these is necessary though no one dimension logically prevails. The result of the interplay is very tangible to those operating in and around the environment. Employees actually live it, and investors, suppliers and other stakeholders are deeply affected by it.

From an organizational development perspective, these dimensions offer distinctly different lenses through which to analyze and evaluate performance. They can also inform opportunities for on-course corrections that can pre-empt a larger “realignment” or “change project.” Here is a quick explanation of what you could see through each lens.

Dimension #1 – Overall Direction (balancing inspiration with reality; clarity with rigidity)

Done well
  • There is alignment toward an overarching purpose.
  • We all know why we are here.
  • We have an obvious shared interest and our conflict is about how to get there not where to go.
Overdone 
  • Attachment to “core values” grows rigid such that an unrealistic zeal drives activity.
  • People are quick to become indignant when others suggest that we would ever compromise or question the direction that has been set.
  • There is talk of “sacred cows.”
Underdone
  • Lack of consistent focus makes it hard for people to assign priority.
  • Lower levels of management feel compelled to check with upper levels.
  • Management shows reluctance to exercise judgement because decision-making criteria are unclear.

Dimension #2 – Measures and Metrics (balancing art and science; means and ends)
Done well
  • There are appropriate and trackable indicators of performance at individual, team and organizational levels.
  • Discussions around performance, including performance reviews, have some objective and tangible criteria.
  • With negative changes in measures and metrics, discussions turn to “what can we do to affect this outcome?”
Overdone 
  • Emphasis on “making the numbers” leads to situations akin to “the operation was a success, but the patient died.”
  • Rampant gaming of the system to make “my numbers,” with complete disregard for overall impact.
  • No concept of “taking one for the team” because there is no opportunity to provide a context or expectation of reciprocity.
Underdone
  • There is no meaningful indication of results and outcomes.
  • Well-intentioned people often feel that although much gets done, little may have been accomplished.
  • There is little perceived connection to, and control over, end-results (positive or negative).


Dimension #3 – Rules and Norms (balancing constraints with restrictions; formal with informal)
Done well
  • There are a few key parameters that people maintain (and don’t need to look at the website for guidance).
  • These are supported in formal policy (e.g. vision, mission and values).
  • There is a “spirit” of the rules not fully captured by the “letter” of the formal statements.
Overdone 
  • Decision-making may be stifled because everything is prescribed and no judgment is required.
  • Rationale for doing something is often replaced with explanation of rules, guidelines and norms that prescribe behaviour (more “we/you can’t” than “why couldn’t we?”)
  • People look for air-cover from a policy or from “so-and-so said we have to do it this way” to justify actions/decisions.
Underdone
  • The walls of the office have signs like: “DO NOT LEAVE FOOD IN THE OFFICE FRIDGE OVERNIGHT.” & “DO NOT LET THIS DOOR SLAM.”
  • The funnel of “policies in progress” is always full.
  • Existing policies are routinely reworked to be clearer. (e.g. Coffee cream is exempt from “Food left in Fridge” policy.)
What now/what next?

An analysis of this nature has to sift through competing perceptions of the situation. If the goal is to improve performance, the first step should be to better understand it. The interplay of these dimensions is similar to the combination of individual life philosophy, personal goals, and code of conduct that form a human being. Some degree of misalignment is inevitable, but very often it is manageable. Large misalignments and inconsistencies will become obvious over time and become more difficult to manage and to hide.

Using these dimensions as a periodic diagnostic within an organization can bring insight to where to focus time and energy to proactively affect future performance. This can also help to prevent large crises that require swift and sudden change.

 

The good news is: I understand your thinking…

Yesterday, I caught a very brief segment on talk radio where a well-intentioned gentleman was explaining a solution that reduced energy consumption by turning off more lights at night. As a good interviewer does, Jerry Agar asked questions about the rationale for this endeavour. I was impressed at how quickly the guest (whom I could not find on the site!) explained the connections he was making. The train of thought is this:

  • If people trust each other, they are more comfortable in the dark.
  • One way to know that it is dark is that you can see stars. (He actually said “the Milky Way.”)
  • So, the ability to see the Milky Way is a great indicator of how much people trust each other.
  • Let’s turn off the lights and start trusting!

Making connections between indicators and such fuzzy concepts as “degree of trust” is a worthwhile, yet very difficult, task. The impressive part of his explanation was not the logical connections, but the comfort that he had in telling another person how he had put it together.

My professional network contains some experts in philosophy whom I will consult for a more technical critique of this reasoning than “Huh?!” In my humble opinion, this gentleman made a horrible argument, but I can’t stress enough the effectiveness of his clarity and willingness to reveal his thinking. A clearly explained wacky argument is easier for everyone to address than an obfuscated description that you have to untangle. The former we know to ignore or, if we like the guy, give some very blunt critique. The latter takes much more time and energy before we get anywhere.

So, full marks for clarity. Let’s work on the logic.