Tag Archives: performance indicators

The solvable problem with self-evaluations

I have a vivid memory of a self-evaluation from my undergrad days at McGill. We had to take a writing course, which must have been a cross-cultural exercise for the Faculty of English instructors who ventured into the Business building for these weekly encounters. There was a self-evaluation at the end of it, which, if I recall correctly, included a preamble that encouraged reflection on your development over the 12 weeks, as well as your ability compared to your classmates. I think I may have been guilted into responding “B+” and admitting that I could really have done more. I talked to classmates afterwards, some of whom had skipped a number of classes. Their responses were “A. Can I give myself an A+?” (Note: A+ was not an option. McGill operated on the U.S. 4-point GPA system.)

This is a very obvious example of the challenges of “self-evaluations.” Self-attribution bias leads us to truly believe that we excel. Self-preservation instincts dampen the guilt of overstating the truth because these results can create positive future options or avoid negative future outcomes. For the undergrad business student, “strong marks = better job opportunities upon graduation,” so go for the A. In a business context, if staffing cuts are looming, do you really want to have a mediocre self-evaluation in the HR file?! I wonder how many of my fellow graduates, decades into their business careers, have grown to learn that actual strength in writing provides a significant advantage in the workplace.

Necessary perspective

The self-evaluation brings the performer’s perspective into the discussion, which is absolutely crucial and applies to an organizational context. In addition to “perspective,” objectivity is also vital, and this is enabled by clear criteria. The healthiest criteria mix features both “what you do” (somewhat controllable; gets at “how” you get results) and “what you accomplish” (somewhat more impacted by external factors; focussed on outcomes).

The evaluation becomes less of an assault on the ego if we can validate that someone did “what was expected” even if they did not “achieve expected results.” This demands some time and effort up front to go through the exercise of making logical connections between activity and results. You have to be ready for a reasoned conversation about what drives performance.

“How are we doing?” is a big question

For an organization, the questions “what results do you want to achieve?” and “what do you think gives the best chance at achieving those results?” are really big questions that bring out some deep-seated assumptions. A good strategic discussion will expose these and will explore some of the big decisions behind some of our assumptions. This should surface options to move forward rather than a clear best way. Imagine interactions where people say, “We said we are trying to reduce T&E, so we can’t fly everyone down for this meeting,” or “We said that our focus was growing our business with our top-tier accounts, so we can’t get too anxious because we lost some tier-3 business.”

Like anything, involvement breeds acceptance, so it makes sense to have a senior-team conversation to tease out relevant expected outcomes and relevant expected actions. When you are involved in creating your own report card, the evaluation feels less daunting. This may turn down the volume on the “self-attribution bias” and the “self-preservation instinct.”

The Feedback Context – Developing and Evaluating

When it comes to performance, the question “How are you doing?” can start a very rich discussion. Do you really want to know? Do we really have a good way to gauge it (except by historical occurrences or lack thereof)? In typical business education fashion, let’s say it depends.

Feedback fills a really nice space in a working context, and any survey of employees will say that it is much sought-after information. Hopefully the yearly performance evaluation as the sole source of feedback is a thing of the past that left with the move toward “flat” organizational structures and non-linear career development. There is an important difference between “evaluative feedback” and “developmental feedback.” I argue that they are best kept separate in order for feedback to work for individuals and to contribute to the performance culture of the organization.

Feedback for evaluation

In the evaluation sphere, the guiding question for feedback seems to be, “I am doing awesome, right?” or “You’re not going to fire me, are you?” The receiver is primed for positive reinforcement or for some peace of mind that their job is not in jeopardy. This can be driven by a number of things, but ego is probably front and centre. Research routinely finds that much more than 50% of a group will think that they are above average. This creates an unworkable situation in many workplaces where we are striving for performance “excellence,” but those delivering “average” think they are going above and beyond the call. If the tick boxes are “meeting” and “exceeding” expectations, most of those you evaluate will be disappointed with the former, even though logic dictates that in a performance culture the expectations are high. To further complicate things, the Dunning-Kruger effect suggests that those who are furthest from “excellence” will think themselves closest to it.

I encounter this in classroom evaluations in the MBA program within which I teach. There is a conundrum created by the university expectation for a B to B+ class average and the reality that the large majority of students think that they should be getting As. This drives a reluctance to accept critical feedback without defending and justifying one’s position (presumably because accepting criticism would be setting the stage for accepting a lower or “average” evaluation).

If this is the kind of tension that manifests itself in the workplace, it is no wonder that managers find providing feedback a challenge. Who wants to get into a debate about someone’s performance? It becomes so much easier to provide positive feedback, or at least put a bigger emphasis on the positive elements, even if those are not the most relevant. (e.g. “Don’t worry about the sales results, you had a lot of really good meetings with some very key people.”) One of the knock-on effects for an organization is that the standards get relaxed (e.g. the President’s club gets expanded) and there can be a general inflation of any quantified evaluation (e.g. you see more 5 out of 5s or 100% ratings). This expands and dilutes your group of top performers. This does not have to be a problem, and in many organizations there is a lot of resignation that this is the way it is.

On the other end of the bell curve, a similar conflation can happen in that unsatisfactory performance can get bumped up to “satisfactory” or better (which is actually much worse). If you are after high performance, this situation could not be worse. Your very high performers will be grouped in with the “average,” and the “below average” are convinced that they are doing their job.

Feedback for development

The nuanced difference with this kind of feedback is that everyone can improve: good enough is never good enough. There is an apocryphal anecdote about Prince and his back-up band, the Revolution. When working on a new number, the band members were encouraged to let Prince know when they had mastered their part during the rehearsal period. He would be ready with an extra guitar lick, a percussion part, a dance move, a vocal harmonization, etc., to keep them occupied while the rest of the band worked on their parts. The message being: don’t hide the fact that you can handle more.

With evaluation in place, too often people direct effort based on the location of the goal line. This is why those who are being evaluated complain when we “move the goal posts.” With an evaluative set-up, your most capable performers (especially those who understand the system) know exactly how much effort to expend to meet the given bar and not make the next one any higher. I worked with a sales manager who was surgical about meeting budgets almost to the penny (nickel?) and mysteriously having a bunch of business “just come in” in the early weeks of the new quarter. The fear of having the goal posts moved based on an extraordinary result is a function of the evaluative context. The risk of falling short is enough of an incentive to launch very elaborate gaming of the system.

For lower-level performers, you get the opposite behaviour, where people will sacrifice “next quarter” in order to drive short-term results. Picture the account manager who is pushing a major client to close business to meet the end of their quarter, and goes as far as to extend an exploding offer in the flavour of “If I can’t get your commitment on the renewal before next week, I am not sure that we can extend the same offer.” Picture also the predictable response from the major client saying: “We will make our decisions based on our financial year, not yours. Thanks very much and we will talk to you in about 6 weeks and will be expecting at least as good a deal as you just described.” It is hard to say whether such an exchange would actually hurt next year’s arrangement, but I suspect there would be some future blowback.

With a developmental mindset, we can entertain stretch goals without worrying about people feeling that they “failed” by achieving 89% of a really challenging goal. The “evaluated outcome” is immaterial; the direction (e.g. forward) is the only thing that matters (so it is crucial to have a shared understanding of what “forward” looks like). The focus is simply what went well and what could be better next time.

Analogy from the world of sport: “Hey, Jason Day, I know that you are number one in the world and just won in a blowout. Your approach shots are phenomenal, but can we talk about your course management off the tee? Twice you were between clubs, and better distance control could get you relying a bit less on feel for those shots.”

In the world of work, this will equate to drawing attention to something that had to “give” to meet a deadline, achieve a client outcome, etc. In the “developmental world,” this won’t be seen as “Great job, but…” There will be a thirst and expectation for some reflection on how this could be even better, or more sustainable, or less contentious, or… some other desirable—even aspirational—attribute. This is the drummer for the Revolution keeping great rhythm, nailing the drum solo, and working on a juggling move for next time.

When you try to do both…

The flavours of feedback have very different intents: evaluative feedback seeks to differentiate performance (separating the wheat from the chaff), while developmental feedback doesn’t care how you stack up (it only asks how you could improve).

Here are the very predictable outcomes you can expect from not distinguishing between these types of feedback.

The effect on top performers can happen in at least three ways:

  • Just-enoughing: As mentioned above, giving exactly enough effort to attain the prescribed “goal.” A friend of mine joked about the grading at our alma mater, McGill University, saying there are only two grades you should get: 85%, which was the minimum to get an “A” (this was the highest grade, so any effort beyond this had no impact), and 56%, which was the minimum to get your credit. The thinking being: you certainly don’t want to fail, but unless you are getting an “A,” the grade doesn’t matter.
  • Sandbagging: In setting the original goal, you can count on top performers actively negotiating to establish a bar that they are certain they can attain, while convincing you that it is a stretch goal. In client-facing activities, this is called managing expectations. This may be avoided by splitting up the types of feedback.
  • Skinner-boxing: When attempts to motivate involve evaluation and reward, you can create a stimulus-response dynamic where you feel that the only way to maintain performance is to continue offering tangible rewards. Peak performance has to include some intrinsic drive. Any theory of human motivation takes us beyond the gun-for-hire, will-work-for-rewards mindset.

The effect on “average” performers may be that the category ceases to exist. With a reluctance to acknowledge that one is average, the evaluators can be stuck evaluating everyone as “high performing.” JK Simmons in the movie Whiplash has a nice little soliloquy about performance that finishes with: “there are no two words in the English language more harmful than ‘good job’.”

Underperformers are always an interesting group. Jack Welch had an answer for this group (identified as being in the bottom 10%): fire them. As cold-hearted as that seems, the compassionate view is that the situation is not working for either party. There is something about the context that is not working, and the person will find a context that is a better fit. It’s not you… it’s us (and you). When you are able to separate the results from the development, you can get a cleaner look at what is not working. If it can’t be fixed, maybe the “exit” is best for all involved.

What exactly to keep separate

For the developmental conversation, there is no differentiating. Everyone is tagged for improvement. The evaluation decisions you make have a huge impact on the culture of the organization because you will get the behaviour that you reinforce. Bringing performance out in the open will create pressure to align with existing systems and practices.

The mindset when approaching a top-performer has to be around tapping into intrinsic areas to maintain motivation. What can we do to help you be even better? What can we do to help you develop in ways that you want to?

The approach for the “average” employee should balance the possibility that the person could be performing at a higher level, but chooses not to. As an organization, are we OK with someone phoning it in and delivering adequate results? There will also be those who work really hard for adequate results. Can those two archetypes co-exist?

If performance and accountability are part of the fabric of your organization, healthy churn on the lower end of performance will mitigate churn at the top end because results do matter. Many organizations (but not all) will want to maintain compassion and understanding about underperformance because every industry has external factors. If sales are down because no one is buying, do we really judge by results only? Again, curiosity should be the driving force toward those who are not delivering to the level they should be. Did we not do a good job of assessing potential at onboarding? Has the work environment changed such that your best effort is no longer good enough?

The conversation about the “change in fit” is not limited to under-performers. You may find that churn can be healthy across different levels of the performance spectrum. A former colleague of mine, who was a very long-standing top-performer, talked about diplomatically broaching the subject of a “new horizon” next step in a performance review. Apparently, this had been an elephant in the room and both parties seemed to appreciate the overt acknowledgement. A CEO client routinely states, “No one is working here forever, including me.” These discussions are clearly in the “development” realm and can very quickly tie into the clichés that “everyone can be replaced” and “if you love something, set it free.”

The way in which an organization handles performance has a huge impact on the culture. This is a complex collision of scientific evaluation, individual motivation and the art of collaboration. Drawing a clear divide between the development and the evaluation will give you a better chance at getting and sustaining the desired performance.

Diversity Boxes – ticking and talking

The Schumpeter column of The Economist took a run at diversity this week with the hypothesis that fatigue is a big part of the problem. This fatigue appears to take different forms:

  • We hear about it far too much (Enough already!)
  • We hear about it but nothing changes (Not enough yet!)
  • We hear about it but what does it really mean (When is enough enough?!)

A look at the article’s comments section (which is always a dangerous move) reveals everything you need to know about the multitude of issues attached to this surprisingly complex word. Doubts and critiques expose some deep philosophical questions, as well as some statements that one is surprised to see in written format (or not surprised, if you tend to read the comments sections of publications).

A couple of things are clear about diversity:

  1. This idea has been getting attention of late. (I recall a similar trend bubbled up around the multi-generational workforce in the last decade or so. Maybe this, too, will pass or linger.)
  2. The word has many different interpretations and understandings.
  3. Consistent with 2, ideas vary on whether an organization needs it and, if so, how best to get it.

One of the ideas that the article attacks is diversity as a “tick-the-box” activity. Fittingly, the differing narratives surrounding “diversity” bring one critique stating that the box-ticking organizations actually deserve credit because at least they are doing something!

Is it reasonable to say that the merits of box-ticking depend on the contents of the box?

There may be some consensus that filling the ranks with “the token [insert statistically underrepresented group member]” probably doesn’t work for anyone. (But I can imagine being challenged on that statement.) So, we should stay away from those kinds of boxes.

Similarly, awareness building (especially when the topic is on heavy rotation in media) can also wear thin. So, maybe it’s not enough to “tick the box” on the Diversity Lunch & Learns.

If we are trying to prevent an over-reliance on predictable cognitive biases in important decisions, maybe we can tick the box on the presence of such initiatives as:

  • panel interviews for new hires
  • formal meetings of the senior leadership team to discuss and determine merit bonuses for employees above a certain level
  • determining tangible indicators to test the connection between our idea of diversity and our idea of performance

This is by no means an exhaustive list, nor is it a collection of best practices. Well-intended efforts to “do the right thing” can quickly get lost in contentious world-view debates that risk making the situation worse. We are convinced of the merits of digging into an idea like diversity to understand how it fits into the business and finding some clear ways to track the progress of distinct efforts, even if that means ticking some boxes… but only the good boxes.

MONEYBALL – The Measure of Success Review

“..the first guy through the wall…it always gets bloody, always.” (John Henry to Billy Beane)

  • How things change
  • Getting people on board
  • Defining performance and changing expectations

Background:

This Michael Lewis story lays out the beginning of the rise of Sabermetrics: a new way of thinking about baseball. Previously, baseball nerd Bill James had a small cult-like following of people who always knew that mainstream baseball thinking and strategy were flawed. This group was enlightened, but their wisdom was confined to the group of believers. The baseball establishment was simply not interested. In the early days of the new millennium, along comes Billy Beane as GM of the Oakland A’s, whose particular problem makes it impossible to “play the game” as it is dictated.

“The problem we are trying to solve is that there are rich teams and poor teams, then there is 50 feet of crap, and then there’s us.” (Billy Beane to his Team Scouts)

The story plays out as Beane and his trusty sidekick Pete try to implement their strategy in collaboration with ownership, team scouts, team management and players. This challenge to an existing status quo and persistence in implementation are both fascinating and insightful, bringing real-world lessons to managers and leaders. Here is how the Moneyball story maps to the “collaboration game” framework.

Direction:

There is a great scene in the movie (quoted above) where Billy Beane lays out the problem for his team of scouts. His expression of the problem is only partial in this scene, where he alludes to the fact that they have to run on a shoestring budget. Earlier in the movie he is very clear to state that rather than just “be competitive” or “not embarrassing,” the objective is to win the World Series. Although “winning the World Series” is a point-in-time accomplishment, the general direction of “be the best” is important here and distinctly different from “be one of the best” or “not be the worst.”

Set-up – Rules and Constraints:

The link between the “be the best” direction and the specific Oakland A’s challenge stems from the small budget. The opportunity here is to create an understanding that “this is a challenge” rather than “this is impossible, why even try?” The former takes on the narrative of the wily underdog taking on the deep-pocketed establishment. Rather than moaning about not having enough money, the group has something to prove to the rest of the baseball world (think KC Royals of 2015).

Set-up – Measures and Metrics:

One measure for a professional sports team is summed up in the movie by the Billy Beane line “[Once you make the playoffs] If you don’t win the last game of the season, nobody gives a shit.” Close doesn’t count for those who want to “be the best.”

Spending within budget could be a constraint attached to a measure. There is at least one negotiation with ownership to release some extra money, so that constraint is apparently a little fluid, conceivably as long as you can make the case for the necessity of the extra money in pursuit of the “be the best” agenda.

The tangible metric that is most revealing of the new logic is in how to evaluate potential. Enter the on-base percentage (replacing the “batting average”), which accounts for any skill in getting a base-on-balls, in addition to that of getting an actual “hit.” The logic flows as follows: you win games by scoring runs; to score, you have to get players on base; so we want players who can get on base. (Sabermetrics has since evolved, and will continue to.)
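
The difference between the two metrics can be made concrete with a quick sketch. The formulas below are the standard baseball definitions; the player statistics are invented purely for illustration:

```python
# Batting average vs. on-base percentage (OBP), the metric the
# Moneyball story highlights. Standard definitions; made-up players.

def batting_average(hits, at_bats):
    """Hits per official at-bat; walks don't count at all."""
    return hits / at_bats

def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    """Times on base per plate appearance; walks and hit-by-pitch count."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# Player A: decent hitter who rarely walks
a_avg = batting_average(150, 550)               # ~0.273
a_obp = on_base_percentage(150, 20, 2, 550, 5)  # ~0.298

# Player B: same batting average, but draws a lot of walks
b_avg = batting_average(150, 550)               # ~0.273
b_obp = on_base_percentage(150, 80, 5, 550, 5)  # ~0.367

# Identical by the traditional metric, clearly different by the new one.
```

Two players the old metric cannot tell apart are separated cleanly by the new one, which is exactly the edge the A’s were exploiting.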

In Sum:

To me, the greatest relevance to the workplace is in the area of change overhauls that come down from the top. The CEO gets an idea in his/her head and tries to roll it out through the organization. There are attempts to “sell and tell,” and there are some constituencies that refuse to buy in to the new logic… and like any logical construct, the new way of thinking always has its flaws.

The Balancing Act of Collaborating

There is lots of talk about “getting on the same page,” but in most work situations some level of conflict persists, varying from subtle differences in opinion to diametrically opposed views. We all know that maintaining cordial working relationships is a must, yet too much focus on appeasing diminishes our results, and too much focus on our agenda carries the risk of losing status as “a team player.”

It can feel a bit like walking a tightrope and constantly balancing between

  • Being self-assured, but not belligerent.
  • Being accommodating, but not spineless.
  • Being ambitious, but staying realistic (Picture a “stretch goal” snapping our rope!)

Maintaining forward momentum while keeping this “balance” is also tricky. There are three large areas of attention that can help:

How am I seeing the situation (and should I look at it differently)?

With reams of data at our disposal, it is very easy to arrive at very different evidence-supported answers to the question “how are we doing?”  Those closest to the situation tend to have a really good read on how things actually work, but once performance measures are imposed, these same people can start to question their gut feelings. Taking time to gather a different perspective on your own may be more effective than simply taking in the perspectives of others. One part confidence; two parts humility.

Who do I have to work with (and how are those existing relationships)?

We have relationships to manage that are up, down and across. Our group of stakeholders will vary in terms of the stature they maintain in the organization, but individual differences in style almost guarantee interpersonal challenges amidst the organizational politics. In practice, we have to navigate a complex web to get what we want for us and for others. Efforts at building/rebuilding relationships can make the tightrope seem a little wider (or maybe not so high).

What are the real priorities here (or, at least, what should they be)?

Sticking with the “rope” metaphor (why abandon it now?), what happens when tightropes turn into tug-o-wars? Such situations tend to consume lots of effort, but provide disappointingly little in the form of results. Many of us are not in the position to impose our views on the organization, but we all can exert a degree of influence. Even when things are at cross-purposes, speaking truth to power can be scary. Is asking power for a small clarification any better?

Can logic models work for you?

The “logic model” is a tool that is widely used in public and social sector initiatives. Like any tool, there are obvious on-target applications (e.g. hammer for inserting nail) as well as more creative applications (e.g. hammer to open a paint can). In all cases, the user is responsible for picking the right tool for the application. To me, there is relevance for the logic model in the private sector because this tool can expose assumptions (logical or not) and bring rigour to the thinking. Here is a quick primer on logic models, followed by some suggestions on if/how/when to use it for your business.

USEFUL VOCABULARY

Theory of Change: this is a set of fundamental assumptions that underpin a line of reasoning. This is often referred to in solving large social issues like homelessness or poverty. For an example of relevance to a private sector context, an ad agency president might believe that to be successful, her team has to know their clients’ business better than the clients do. She sees her team as “providers of insight” rather than “meeters of needs.”

Logic Model: a framework that allows you to portray the specific linkages of your reasoning from the resources you expend to the final impact that you will have. The model takes into account the linkages between four fundamental components:

  • Inputs – These are resources that we control and choose to deploy toward the end objective. This is usually about money and time. Energy fits in here, too.
  • Outputs – This is what we create or produce or get from expending the “input” resources. This could be a report, the provision of a service, creation of some capacity, etc.
  • Outcomes – What we get helps us out in some way; this is the specific way in which it helps us out. We are better able to do something, or something is improved, because of the output created from the inputs.
  • Impact – This is the higher order calling of the whole endeavour. What did we set out to address in the first place? This is what we were after all along.

WORKING EXAMPLE

The thing about logic is that it can seem both commonsensical and obvious, while also seeming a bit opaque. To alleviate the latter, here is a quick example: Our agency leader (who believes that “provider of insight” is the way to success) might have the following idea.

Let’s get some of our junior staff to work on developing industry reports that capture both analyst information, as well as “chatter” from social networks. They will create an overview document as a summer project, and monitor/update on an ongoing basis. Our senior account people will refer to these before client meetings, and also share insights gained from the direct client interaction.

Breakdown using Logic Model:

  • Inputs – Junior staff hours in creating foundational document and ongoing monitoring (hours); Senior account staff time in inputting client insights (hours)
  • Outputs – The actual document, once it is created, and its ongoing updates.
  • Outcomes – Senior account staff go to meetings with broad industry knowledge that they use to: (1) demonstrate knowledge to clients; (2) share value-adding insights; (3) initiate strategic conversations, etc.
  • Impact – Clients will use us more

Note: The understood “we hope” as a qualifier gets louder with each step of the model.
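
The chain above can be sketched as a small data structure. This is just an illustrative way to make each “we hope” explicit; the `Stage` class and field names are my own invention, not a standard logic-model API:

```python
# A minimal sketch of the agency example as a logic-model chain.
# The four stages come from the breakdown above; the class and the
# "assumption" wording are illustrative choices, not a standard API.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str        # Inputs, Outputs, Outcomes, or Impact
    items: list      # what sits at this stage
    assumption: str  # the "we hope" that links this stage to the next

model = [
    Stage("Inputs", ["junior staff hours", "senior account staff hours"],
          "the hours spent actually produce a usable report"),
    Stage("Outputs", ["industry report", "ongoing updates"],
          "senior staff actually consult it before client meetings"),
    Stage("Outcomes", ["demonstrated knowledge", "value-adding insights",
                       "strategic conversations"],
          "clients notice and value the insight"),
    Stage("Impact", ["clients use us more"], ""),
]

# Walking the chain surfaces each assumption that has to hold,
# which is exactly where the logic gets tested.
links = [f"{s.name} -> {n.name}: we hope that {s.assumption}"
         for s, n in zip(model, model[1:])]
for line in links:
    print(line)
```

Each printed link is one of the “we hope” qualifiers from the note above, which makes them available for exactly the input-to-output, output-to-outcome, and outcome-to-impact questions that follow.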

USING THE TOOL

Really thinking through these connections demands a good degree of effort and will: what do we want to “impact”? And how will we actually go about getting there? To illustrate the difficulty, recall the success of the ALS Ice Bucket Challenge. (Remember, this space is the sweet spot of the logic model.) This was a huge success in gaining awareness (Mel B. did the challenge on America’s Got Talent!), but you may still ask: “So what? Are those afflicted by ALS better off? If so, how?” You can imagine that asking such questions without being labelled as “doubter,” “hater,” “loser,” etc., would be no mean achievement. This is an inherent challenge of such models. People don’t like to have the gaps in their logic exposed.

To use this tool effectively, leadership has to be comfortable explaining their logic (e.g. “provider of insight” beats “meeter of needs”) and the followership has to be comfortable trying it out (even if they don’t believe it in the first place).

Building the connections between the elements is an important exercise. You end up asking really good questions, for example:

Input to output: What are we getting for all these hours that we have put into research?

Output to outcomes: Is our new report, tool, capacity, etc. actually contributing to something that we are using, noticing, applying, etc.?

Outcomes to impact: Is our idea of the “means to the end” actually playing out? What do we really want here? What are we trying to achieve anyway?

This is the kind of thinking that goes into our “performance playbook” process to help ensure that the measures you are choosing hang together with the logic under which you are operating.


Results-Based Development (Some Backstory on Goal Setting)

One of my biggest frustrations as an education professional (trainer, instructor, consultant, etc.) is that the standard “measure of success” is “Did the participant like it?” I do not suggest that participant enjoyment is not important, but “did they like it” is only part of the story. I would like to think that the “liking” could align with developing in an intended direction. For example, “I liked it because the skills and awareness were necessary for me to better perform in my role” rather than “I liked it because the facilitator was funny, let us go early, and we had a hot lunch.” Similarly, if participants didn’t “like it” what was the reason? Not relevant? Waste of time? Made me think too much? No clear tools? Sharpening the axe can take some time; maybe an axe isn’t even the right tool…

What to measure becomes so important. In the absence of any other measure, maybe “participant satisfaction as indicated by ‘smile sheets'” is acceptable and maybe we even set a goal accordingly. We could get some help from George Doran and employ the SMART goal framework (Specific, Measurable, Achievable, Relevant, Time-bound). Mr. Doran’s helpful and memorable tool may create some unintended consequences.

Specific: Oversimplifying a situation such that the focus is on the “operation” and not on the “patient,” as in the dark humour of “the operation was a success, but the patient died.” We trained teams separately to keep a friendly atmosphere; participants loved the “team building” sessions, but we still have turf wars between these two groups. Other examples could include delivering a product that met the customer’s specs exactly, but seeing unacceptable margins.

Measurable: This orientation tends to push us toward what can be measured, which can dangerously skew attention toward distracting elements. For example: we wanted to reduce customer complaints, but all we did was encourage front-line staff to accommodate ridiculous requests (which ended up costing us money!).

Achievable: This aspect needs much more context. Achievable according to whom? What are the consequences of success or failure? If the latter has any connection to monetary reward, you can guarantee that “sandbagging” will ensue, more generously known as “managing expectations.”

Relevant: Again, relevant to whom? In trying to increase relevance by attaching rewards to achievement, the sandbagging danger rises.

Time-bound: This tends to encourage treating the stages of the journey as discrete and independent. Winning the Tour de France is not necessarily about leading at every stage.

In an effort to establish goals that align interests, I find myself up against at least three immovable truisms, which I will explain here.

It is a journey not a destination: The long-game can easily get lost because it is so difficult to conceptualize. Let’s pick a direction to move towards and not worry too much about “what happens if we get there?” or exactly where “there” even is.

Anecdote – The Artist Formerly Known as Prince

If pinned down to an overall direction for his live shows, let’s assume that Prince would say he wanted to create an exceptional musical experience. Rumour has it that all musicians and back-up vocalists were encouraged to come and tell Prince when they had nailed their part, at which point Prince would add to their task. The guitarist who mastered the base track would get a dance sequence. The well-rehearsed back-up vocalist would be given a percussion part. And if you nailed that, he had even more for you. The message being: good enough is never good enough.

Everyone games the system: Self interest is part of everyone’s psyche. It will kick in for different people at different times, but even the most principled and well-intentioned people will take advantage of ways to game the system. We must take extreme care in selecting measures because that will directly impact behaviour.

Work is not family (for everyone): Many will use the metaphor of a family or a community to describe an organization that functions with a healthy degree of trust and shared focus. For me, community is more realistic simply because it introduces the responsibility you have as a member of the community, but also leaves the door open to depart if you find another community that is a better fit. The understood permanence of the “family” connection means that your only choice is to make the best of it. This can generate a nice bit of commitment, but can also create resentment and guilt.

This critique of some common approaches to goal-setting and identifying some relevant “truisms” should provide some important rationale behind the “results-based development” approach explained here.

Results-Based Development (Under the hood of Aligning Interests)

In many different contexts, we see examples of competition contributing to higher performance. For competition in business, we can draw an important distinction between “good competition” and “bad competition,” a distinction that is sometimes under-emphasized. As I understand it, “good competition” creates an environment where everyone has to “up their game” to remain competitive. As evidence that “the market works,” we would see customers benefiting from competition because organizations have to work harder and smarter to remain in business. Conversely, “bad competition” creates an environment that destroys long-term value in the name of “winning” or “surviving.” In such scenarios, organizations harm the sector and themselves in a “race to the bottom,” and may engage in ethically questionable behaviour to “win at all costs.”

To start, let’s assume that “good competition” is indeed possible. Let’s further assume that for it to work, it requires that parties share an understanding of what “good” they are trying to accomplish.

For businesses, making money is “good,” but so are other forms of benefit: safer automobile travel (Toyota), or sustainable practices (Unilever). Governments are expected to think more about the greater “good,” and as a specific illustration, let me use community health-care in Ontario. Let’s say that “good” in this context is “efficiency in delivering necessary services to patients,” or something that balances provision of necessary services within fiscal constraints. As is the current practice, the government-funded payment to service providers for some activities can be attached to a result or outcome: a service provider is given a lump sum to achieve a specific outcome (e.g. heal a wound). If they can complete the task more efficiently, the profit is theirs. If it happens to take longer or more resources, the provider spends those resources, but can’t come back to the funder for more money. If this works, taxpayers in Ontario get better bang for their collective buck, and patients get high-quality care; wins all around.
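The incentive structure of this lump-sum arrangement can be sketched in a few lines of Python. All figures here are hypothetical; the point is simply the mechanics described above, in which the provider keeps any savings and absorbs any overrun.

```python
# Minimal sketch of the bundled ("lump sum") payment model described above.
# The dollar amounts are invented for illustration only.

def provider_margin(lump_sum: float, actual_cost: float) -> float:
    """Under a bundled payment, the provider keeps savings and absorbs overruns."""
    return lump_sum - actual_cost

LUMP_SUM = 5000.0  # hypothetical fixed payment to achieve one outcome (heal a wound)

efficient = provider_margin(LUMP_SUM, actual_cost=3800.0)  # faster healing -> profit
overrun = provider_margin(LUMP_SUM, actual_cost=6200.0)    # complications -> loss

print(efficient)  # 1200.0
print(overrun)    # -1200.0
```

The asymmetry is the whole design: because the funder’s payment is fixed, the provider’s self-interest is pointed at efficiency rather than at billing more activity.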

This same type of arrangement could work in a non-government context as long as the service provider is at least partially interested in the same definition of “good.” This creates “good competition,” and efficient organizations that do good work will succeed.

The realm of “bad competition” can be peppered with “perverse incentives,” whereby, for example, a service provider could legitimately want a patient to stay sick, or at the very least, err so far on the side of caution as to go wildly offside of a “fiscal responsibility” effort. This is the potentially very ugly underbelly of the public sector contracting out to the private sector. In a consulting relationship, this can create, for example, an incentive to run up the billable hours.


Setting goals and objectives that promote shared accountability is extremely tricky. From my experience, the real trick is to align activity to a common purpose (e.g. the “good”), and I will go as far as to say that without a shared interest, collaboration of this nature is impossible because the result will actually create “bad” competition.

 

Aligning for Performance – Where to start

The Lululemon stories coming out this week illustrate, if nothing else, that running a successful business is a complicated endeavour. There are a number of interests to balance, and something always has to give. Determining exactly what should “give,” and how exactly to implement that decision, introduces an interplay between three dimensions of an organization:

  1. Overall Direction
  2. Measures and Metrics
  3. Rules and Norms

To have a serious look at “performance,” each of these is necessary though no one dimension logically prevails. The result of the interplay is very tangible to those operating in and around the environment. Employees actually live it, and investors, suppliers and other stakeholders are deeply affected by it.

From an organizational development perspective, these dimensions offer distinctly different lenses through which to analyze and evaluate performance. They can also inform opportunities for on-course corrections that can pre-empt a larger “realignment” or “change project.” Here is a quick explanation of what you could see through each lens.

Dimension #1 – Overall Direction (balancing inspiration with reality; clarity with rigidity)

Done well
  • There is alignment toward an overarching purpose.
  • We all know why we are here.
  • We have an obvious shared interest, and our conflict is about how to get there, not where to go.
Overdone 
  • Attachment to “core values” grows rigid such that an unrealistic zeal drives activity.
  • People are quick to become indignant when others suggest that we would ever compromise or question the direction that has been set.
  • There is talk of “sacred cows.”
Underdone
  • Lack of consistent focus makes it hard for people to assign priority.
  • Lower levels of management feel compelled to check with upper levels.
  • Management shows reluctance to exercise judgement because decision-making criteria are unclear.
Dimension #2 – Measures and Metrics (balancing art and science; means and ends)
Done well
  • There are appropriate and trackable indicators of performance at individual, team and organizational levels.
  • Discussions around performance, including performance reviews, have some objective and tangible criteria.
  • With negative changes in measures and metrics, discussions turn to “what can we do to affect this outcome?”
Overdone 
  • Emphasis on “making the numbers” leads to situations akin to “the operation was a success, but the patient died.”
  • Rampant gaming of the system to make “my numbers,” with complete disregard for overall impact.
  • No concept of “taking one for the team” because there is no opportunity to provide a context or expectation of reciprocity.
Underdone
  • There is no meaningful indication of results and outcomes.
  • Well-intentioned people often feel that although much gets done, little may have been accomplished.
  • There is little perceived connection to and control over end-results (positive or negative).


Dimension #3 – Rules and norms (balancing constraints with restrictions; formal with informal)
Done well
  • There are a few key parameters that people maintain (and don’t need to look at the website for guidance).
  • These are supported in formal policy (e.g. vision, mission and values).
  • There is a “spirit” of the rules not fully captured by the “letter” of the formal statements.
Overdone 
  • Decision-making may be stifled because everything is prescribed and no judgment is required.
  • Rationale for doing something is often replaced with explanation of rules, guidelines and norms that prescribe behaviour (more “we/you can’t” than “why couldn’t we?”)
  • People look for air-cover from a policy or from “so-and-so said we have to do it this way” to justify actions/decisions.
Underdone
  • The walls of the office have signs like: “DO NOT LEAVE FOOD IN THE OFFICE FRIDGE OVERNIGHT.” & “DO NOT LET THIS DOOR SLAM.”
  • The funnel of “policies in progress” is always full.
  • Existing policies are routinely reworked to be clearer. (e.g. Coffee cream is exempt from “Food left in Fridge” policy.)
What now/what next?

An analysis of this nature has to sift through competing perceptions of the situation. If the goal is to improve performance, the first step should be to better understand it. The interplay of these dimensions is similar to the combination of individual life philosophy, personal goals, and code of conduct that form a human being. Some degree of misalignment is inevitable, but very often it is manageable. Large misalignments and inconsistencies will become obvious over time and become more difficult to manage and to hide.

Using these dimensions as a periodic diagnostic within an organization can bring insight to where to focus time and energy to proactively affect future performance. This can also help to prevent large crises that require swift and sudden change.

 

Well, what do/did you expect?

Any discussion regarding performance has to include both outcomes (e.g. what you accomplished) and conduct (e.g. how you accomplished it). These concepts can coexist in statements like “they won fair and square,” but with the current mayoral race in Toronto, many would encourage us to keep them separate.

  • Pro-Forders say: Look what he’s done (e.g. outcomes). So what if he’s not perfect (e.g. conduct).
  • Another camp says: I don’t care about his record (e.g. outcomes); his behaviour is unacceptable (e.g. conduct).

A reasonable response would be to balance the two, which is what I believe is at the heart of John Tory’s code of conduct. One truism of the performance evaluation is: “clarify expectations.” In more practical terms, this quickly becomes an exercise in managing expectations. Unfortunately, the result of that, more often than not, is defining the “barely acceptable.”

Enter the “Code of Conduct.”

Such a well-intentioned document sets the bar for accountability for future actions. It states: “Here is how I am going to go about my business, and please call me out if I conduct myself otherwise.” But that is where the clarity ends, because we are stuck with statements like Tory’s Point #2: “I will show up to work each day to get things done…”

So, John, do you mean that you will show up to work “every day”? “Every workday” (e.g. you will take vacations and weekends)? Or that every day you show up to work, you will try to get things done (e.g. you could indeed be absent, maybe even absent a lot, but when you are there, you are there to get things done)?

Note: If the response is to tighten the wording of the “code,” we will undoubtedly get stuck with unreadable legalese!

Transparency, honesty and integrity are far too conceptual to be prescribed on a code of conduct. That said, I think we have every right to expect these traits in leaders, political or not.

My second problem with defining the “barely acceptable” conduct is that inevitably the code is used to counter any critique of performance. As of April 4, 2014, Rob Ford can factually claim: “I have not been charged with a criminal offence while in office.” The binary distinction of charged or not charged is apparently the expectation here. Do integrity and honesty really come down to “I have not been charged with a crime”? This is akin to Lance Armstrong’s claim that he had “never failed a drug test,” which, in retrospect, was not the best evaluation of his performance.

Even if a candidate for Toronto’s mayor said: “Trust me, I am going to pay better attention to my conduct than the current mayor has been,” some still won’t care. Unfortunately, the outcomes Toronto will have received by 2018 will remain a mystery past voting day.

When it comes to conduct (e.g. the “how you go about doing it”), leaders should give us much more than “barely acceptable,” so why bother defining it? The effort spent defining the “barely acceptable” should go to the outcome side (e.g. “If I have not achieved X by 2018, I will not run again.”). This will demand that leaders accept responsibility for things beyond their individual control, which might create a necessity for people to work together.

I would love to see more leaders clarify the “barely acceptable” outcomes rather than trying to pin down the specifics of “integrity” and “respect.”