Category Archives: Results-based Development

Persistent Joke #5

Similar to the Twelve Days of Christmas, we are drawing special emphasis to Number 5!

This one comes to us via Henry Mintzberg at McGill University. Karl Moore has a very popular cover version of this joke.

Here is the joke:

A group of soon-to-be freshly minted MBAs is sent on their final program project, where they leave the world of case studies and simulations. They are to test their skills on a “live patient” as they analyze a real organization and provide strategic counsel to improve its future competitiveness.

Through a family connection, they start working with a mid-level professional orchestra.

Sound strategic analysis tells the group that one high-potential strategy for “regional service providers” is to focus on efficiency in delivery, which can create the cash flow required to acquire and grow. After careful and thorough analysis, the group reports back with observations and strategic recommendations. Highlights include:

Under the heading Talent Effectiveness:

  • The group seems to be performing mostly on evenings and weekends. Payroll is a major expense for such services and we are concerned that you are paying a premium for work outside of regular business hours.
  • Performances tend to be 2 or 3 hours, several times per month, with summers off. Operations Management thinking around “batch processing” suggests that grouping performances together can curtail costs for set-up and take-down.

Under the heading Technology Deployment:

  • Some of the instruments appear very outdated, with one violin being several hundred years old. A comprehensive equipment refresh would take advantage of new materials that require much less maintenance and, in many cases, weigh a great deal less than older instruments.
  • Audio technology to amplify sound could mean that some of the sections that currently employ several people—who, by the way, are often playing EXACTLY the same thing—could be reduced to one player per part.

NOTE: This latter recommendation will allow additional seating for audience members on the stage. (The group attributed this insight to Blue Ocean Strategy.)

Owing to extreme tact, the group was able to access position-level compensation. This led them to create a special subheading, Immediate Next-level Impact:

The conductor’s remuneration is the highest of the group, although the function of this position during performances seemed to be largely to keep time and cue musicians. The group provided contact information for a classmate developer who would create an app that could both keep time and deliver instant-messaging cues through small electric shocks so as not to create audio interference.

Needless to say, when leadership of the Orchestra politely rejected ALL of this strategic advice, our group took it as evidence that status quo decision makers often reject “out of the box thinking” only to come to regret that decision later on when they inevitably cease to operate competitively.

Here is the point:

In many workplace interactions, let’s be mindful of three important elements for those participating:

  • Level of understanding of the situation,
  • Level of insight into workable improvement, and
  • Level of confidence to share thoughts.

Misalignment between the last one and the first two creates unfortunate situations where EITHER people speak without knowing what they are talking about OR they do not speak when they really should.

The solvable problem with Self-Evaluations

I have a vivid memory of a self-evaluation from my undergrad days at McGill. We had to take a writing course, which must have been a cross-cultural exercise for the Faculty of English instructors who ventured into the Business building for these weekly encounters. There was a self-evaluation at the end of it, which, if I recall correctly, included a preamble that encouraged reflection on your development over the 12 weeks, as well as your ability compared to your classmates. I think I may have been guilted into responding “B+” and admitting that I could really have done more. I talked to classmates afterwards, some of whom had skipped a number of classes. Their responses were “A. Can I give myself an A+?” (Note: A+ was not an option. McGill operated on the U.S. 4-point GPA system.)

This is a very obvious example of the challenges of “self-evaluations.” Self-attribution bias leads us to truly believe that we excel. Self-preservation instincts dampen the guilt of overstating the truth because these results can create positive future options or avoid negative future outcomes. For the undergrad business student, “strong marks = better job opportunities upon graduation,” so go for the A. In a business context, if staffing cuts are looming, do you really want to have a mediocre self-evaluation in the HR file?! I wonder how many of my fellow graduates, decades into their business careers, have grown to learn that actual strength in writing provides a significant advantage in the workplace.

Necessary perspective

The self-evaluation brings the performer’s perspective into the discussion, which is absolutely crucial and applies to an organizational context. In addition to “perspective,” objectivity is also vital, and this is enabled by clear criteria. The healthiest criteria mix features both “what you do” (e.g. somewhat controllable; gets at “how” you get results) and “what you accomplish” (e.g. somewhat more impacted by external factors; focussed on outcomes).

The evaluation becomes less of an assault on the ego if we can validate that someone did “what was expected” even if they did not “achieve expected results.” This demands some time and effort up front to go through the exercise of making logical connections between activity and results. You have to be ready for a reasoned conversation about what drives performance.

“How are we doing?” is a big question

For an organization, the questions about “what results do you want to achieve?” and “what do you think gives the best chance at achieving those results?” are really big questions that bring out some deep-seated assumptions. A good strategic discussion will expose these and will explore some of the big decisions behind some of our assumptions. This should surface options to move forward rather than a clear best way. Imagine interactions where people say, “We said we are trying to reduce T&E, so we can’t fly everyone down for this meeting,” OR “We said that our focus was growing our business with our top-tier accounts, so we can’t get too anxious because we lost some tier-3 business.”

Like anything, involvement breeds acceptance, so it makes sense to have a senior-team conversation to tease out relevant expected outcomes and relevant expected actions. When you are involved in creating your own report card, the evaluation feels less daunting. This may turn down the volume on the “self-attribution bias” and the “self-preservation instinct.”

The Feedback Context – Developing and Evaluating

When it comes to performance, the question “How are you doing?” can start a very rich discussion. Do you really want to know? Do we really have a good way to gauge it (except by historical occurrences or lack thereof)? In typical business education fashion, let’s say it depends.

Feedback fills a really nice space in a working context, and any survey of employees will say that it is much sought-after information. Hopefully the yearly performance evaluation as the sole source of feedback is a thing of the past that left with the move toward “flat” organizational structures and non-linear career development. There is an important difference between “evaluative feedback” and “developmental feedback.” I argue that they are best kept separate in order for feedback to work both for individuals and for the performance culture of the organization.

Feedback for evaluation

In the evaluation sphere, the guiding question for feedback seems to be, “I am doing awesome, right?” or “You’re not going to fire me, are you?” The receiver is primed for positive reinforcement or for some peace of mind that their job is not in jeopardy. This can be driven by a number of things, but ego is probably front and centre. Research routinely finds that much more than 50% of a group will think that they are above average. This creates an unworkable situation in many workplaces where we are striving for performance “excellence,” but those delivering “average” think they are going above and beyond the call. If the tick boxes are “meeting” and “exceeding” expectations, most of those you evaluate will be disappointed with the former, even though logic dictates that in a performance culture the expectations are high. To further complicate things, the Dunning-Kruger effect suggests that those who are furthest away from “excellence” will think themselves closest to it.

I encounter this in classroom evaluations in the MBA program within which I teach. There is a conundrum created by the university expectation for a B to B+ class average and the reality that the large majority of students think that they should be getting As. This drives a reluctance to accept critical feedback without having to defend and justify the position (presumably because accepting criticism would be setting the stage for accepting a lower or “average” evaluation).

If this is the kind of tension that manifests itself in the workplace, it is no wonder that managers find providing feedback a challenge. Who wants to get into a debate about someone’s performance? It becomes so much easier to provide positive feedback or at least put a bigger emphasis on the positive elements, even if those are not the most relevant. (e.g. Don’t worry about the sales results; you had a lot of really good meetings with some very key people.) One of the knock-on effects for an organization is that the standards get relaxed (e.g. the President’s club gets expanded) and there can be a general inflation of any quantified evaluation (e.g. you see more 5 out of 5s or 100% ratings). This expands and dilutes your group of top performers. This does not have to be a problem, and in many organizations there is a lot of resignation that this is just the way it is.

On the other end of the bell curve, a similar conflation can happen in that unsatisfactory performance can get bumped up to “satisfactory” or better (which is actually much worse). If you are after high performance, this situation could not be worse. Your very high performers will be grouped in with the “average,” and the “below average” are convinced that they are doing their job.

Feedback for development

The nuanced difference with this kind of feedback is that everyone can improve: good enough is never good enough. There is an apocryphal anecdote about Prince and his back-up band, the Revolution. When working on a new number, the band members were encouraged to let Prince know if they mastered their part during the rehearsal period. He would be ready with an extra guitar lick, a percussion part, a dance move, a vocal harmonization, etc., to keep them occupied while the rest of the band worked on their parts. The message being: don’t hide the fact that you can handle more.

With evaluation in place, too often people direct effort based on the location of the goal line. This is why those who are being evaluated complain when we “move the goal posts.” With an evaluative set-up, your most capable performers (especially those who understand the system) know exactly how much effort to expend to meet the given bar and not make the next one any higher. I worked with a sales manager who was surgical about meeting budgets almost to the penny (nickel?) and mysteriously having a bunch of business “just come in” in the early weeks of the new quarter. The fear of having the goal posts moved based on an extraordinary result is a function of the evaluative context. The risk of falling short is enough of an incentive to launch very elaborate gaming of the system.

For lower-level performers, you get the opposite behaviour, where people will sacrifice “next quarter” in order to drive short-term results. Picture the account manager who is pushing a major client to close business before the end of their quarter, and goes as far as to extend an exploding offer in the flavour of “If I can’t get your commitment on the renewal before next week, I am not sure that we can extend the same offer.” Picture also the predictable response from the major client saying: “We will make our decisions based on our financial year, not yours. Thanks very much and we will talk to you in about 6 weeks and will be expecting at least as good a deal as you just described.” It is hard to say whether such an exchange would actually hurt next year’s arrangement, but I suspect there would be some future blowback.

With a developmental mindset, we can entertain stretch goals without worrying about people feeling that they “failed” by achieving 89% of a really challenging goal. The “evaluated outcome” is immaterial; the direction (e.g. forward) is the only thing that matters (so it is crucial to have a shared understanding of what “forward” looks like). The focus is simply what went well and what could be better next time.

Analogy from the world of sport: “Hey, Jason Day, I know that you are number one in the world and just won in a blowout. Your approach shots are phenomenal, but can we talk about your course management off the tee? Twice you were between clubs, and better distance control could have you relying a bit less on feel for those shots.”

In the world of work, this will equate to drawing attention to something that had to “give” to meet a deadline, achieve a client outcome, etc. In the “developmental world,” this won’t be seen as “Great job, but…” There will be a thirst and an expectation for some reflection on how this could be even better, or more sustainable, or less contentious, or… some other desirable—even aspirational—attribute. This is the drummer for the Revolution keeping great rhythm, nailing the drum solo, and working on a juggling move for next time.

When you try to do both…

The flavours of feedback have very different intent: evaluative feedback seeks to differentiate performance (e.g. separating the wheat from the chaff), while developmental feedback doesn’t care how you stack up (e.g. how could you improve?).

Here are the very predictable outcomes you can expect from not distinguishing between these types of feedback.

The effect on top performers can happen in at least three ways:

  • Just-enoughing: As mentioned above, giving exactly enough effort to attain the prescribed “goal.” A friend of mine joked about the grading at our alma mater, McGill University, saying there are only two grades you should get: 85%, which was the minimum to get an “A” (this was the highest grade, so any effort beyond this had no impact), and 56%, which was the minimum to get your credit. The thinking being, you certainly don’t want to fail, but unless you are getting an “A,” the grade doesn’t matter.
  • Sandbagging: In setting the original goal, you can count on active negotiation: people establish a bar they are certain they can attain while convincing you that it is a stretch goal. In client-facing activities, this is called managing expectations. This may be avoided by splitting up the types of feedback.
  • Skinner-boxing: When attempts to motivate involve evaluation and reward, you can create a stimulus-response dynamic where you feel that the only way to maintain performance is to continue offering tangible rewards. Peak performance has to include some intrinsic drive. Any theory of human motivation takes us beyond the gun-for-hire, will-work-for-rewards mindset.

The effect on “average” performers may be that the category ceases to exist. With a reluctance to acknowledge that one is average, the evaluators can be stuck evaluating everyone as “high performing.” JK Simmons in the movie Whiplash has a nice little soliloquy about performance that finishes with: “there are no two words in the English language more harmful than ‘good job’.”

Underperformers are always an interesting group. Jack Welch had an answer for this group (identified as being in the bottom 10%): fire them. As cold-hearted as that seems, the compassionate view is that the situation is not working for either party. There is something about the context that is not working. You will find a context that is a better fit for you. It’s not you… it’s us (and you). When you are able to separate the results from the development, you can get a cleaner look at what is not working. If it can’t be fixed, maybe the “exit” is best for all involved.

What exactly to keep separate

For the developmental conversation, there is no differentiating. Everyone is tagged for improvement. The evaluation decisions you make have a huge impact on the culture of the organization because you will get the behaviour that you reinforce. Bringing performance out in the open will create pressure to align with existing systems and practices.

The mindset when approaching a top-performer has to be around tapping into intrinsic areas to maintain motivation. What can we do to help you be even better? What can we do to help you develop in ways that you want to?

The approach for the “average” employee should balance the possibility that the person could be performing at a higher level, but chooses not to. As an organization, are we OK with someone phoning it in and delivering adequate results? There will also be those who work really hard for adequate results. Can those two archetypes co-exist?

If performance and accountability are part of the fabric of your organization, healthy churn on the lower end of performance will mitigate churn at the top end because results do matter. Many organizations (but not all) will want to maintain compassion and understanding about underperformance because every industry has external factors. If sales are down because no one is buying, do we really judge by results only? Again, curiosity should be the driving force toward those who are not delivering to the level they should be. Did we not do a good job of assessing potential at onboarding? Has the work environment changed such that your best effort is no longer good enough?

The conversation about the “change in fit” is not limited to under-performers. You may find that churn can be healthy across different levels of the performance spectrum. A former colleague of mine, who was a very long-standing top-performer, talked about diplomatically broaching the subject of a “new horizon” next step in a performance review. Apparently, this had been an elephant in the room and both parties seemed to appreciate the overt acknowledgement. A CEO client routinely states, “No one is working here forever, including me.” These discussions are clearly in the “development” realm and can very quickly tie into the clichés that “everyone can be replaced” and “if you love something, set it free.”

The way in which an organization handles performance has a huge impact on the culture. This is a complex collision of scientific evaluation, individual motivation and the art of collaboration. Drawing a clear divide between the development and the evaluation will give you a better chance at getting and sustaining the desired performance.

MONEYBALL – The Measure of Success Review

“…the first guy through the wall… it always gets bloody, always.” (John Henry to Billy Beane)

  • How things change
  • Getting people on board
  • Defining performance and changing expectations

Background:

This Michael Lewis story lays out what was the beginning of the rise of Sabermetrics: a new way of thinking about baseball. Previously, baseball nerd Bill James had a small cult-like following of people who always knew that mainstream baseball thinking and strategy were flawed. This group was enlightened, but their wisdom was confined to the group of believers. The baseball establishment was simply not interested. In the early days of the new millennium, along comes Billy Beane as GM of the Oakland A’s, whose particular problem makes it impossible to “play the game” as it is dictated.

“The problem we are trying to solve is that there are rich teams and poor teams, then there is 50 feet of crap, and then there’s us.” (Billy Beane to his Team Scouts)

The story plays out as Beane and his trusty sidekick Pete try to implement their strategy in collaboration with ownership, team scouts, team management and players. This challenge to an existing status quo and persistence in implementation are both fascinating and insightful, bringing real-world lessons to managers and leaders. Here is how the Moneyball story maps to the “collaboration game” framework.

Direction:

There is a great scene in the movie (quoted above) where Billy Beane lays out the problem for his team of scouts. The expression of this is only partial in this scene, where he alludes to the fact that they have to run on a shoestring budget. Earlier in the movie he is very clear to state that rather than just “be competitive” or “not embarrassing,” the objective is to win the World Series. Although “winning the World Series” is a point-in-time accomplishment, the general direction of “be the best” is important here and distinctly different from “be one of the best” or “not be the worst.”

Set-up – Rules and Constraints:

The link between the “be the best” direction and the specific Oakland A’s challenge stems from the small budget. The opportunity here is to create an understanding that “this is a challenge” rather than “this is impossible, why even try?” The former takes on the narrative of the wily underdog taking on the deep-pocketed establishment. Rather than moaning about not having enough money, the group has something to prove to the rest of the baseball world (think KC Royals of 2015).

Set-up – Measures and Metrics:

One measure for a professional sports team is summed up in the movie by the Billy Beane line “[Once you make the playoffs] If you don’t win the last game of the season, nobody gives a shit.” Close doesn’t count for those who want to “be the best.”

Spending within budget could be a constraint attached to a measure. There is at least one negotiation with ownership to release some extra money, so that constraint is apparently a little fluid, conceivably as long as you can make the case that the extra money is necessary in pursuit of the “be the best” agenda.

The tangible metric that is most revealing of the new logic is in how to evaluate potential. Enter the on-base percentage (replacing the “batting average”), which accounts for any skill in getting a base on balls, in addition to that of getting an actual “hit.” The logic flows as follows: you win games by scoring runs, to score you have to get players on base, so we want players who can get on base. (Sabermetrics has since evolved, and will continue to.)
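
To make the arithmetic concrete, here is a minimal sketch of that comparison; the formulas are the standard ones, but the stat lines for “Player A” and “Player B” are invented for illustration and come from neither the book nor the film.

```python
# Illustrative only: invented stat lines, standard formulas.

def batting_average(hits, at_bats):
    # Hits divided by official at-bats (walks don't count at all).
    return hits / at_bats

def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
    # OBP credits every time the batter reaches base, including walks.
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)

# "Player A": the scouts' favourite -- decent average, rarely walks.
# "Player B": the Moneyball target -- modest average, lots of walks.
players = {
    "Player A": dict(hits=150, walks=20, hit_by_pitch=2, at_bats=550, sac_flies=5),
    "Player B": dict(hits=140, walks=90, hit_by_pitch=5, at_bats=540, sac_flies=5),
}

for name, p in players.items():
    avg = batting_average(p["hits"], p["at_bats"])
    obp = on_base_percentage(**p)
    print(f"{name}: AVG {avg:.3f}, OBP {obp:.3f}")
```

The walk-heavy hitter looks ordinary by batting average but clearly better by on-base percentage, which is exactly the undervalued skill the A’s were buying.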

In Sum:

To me the greatest relevance to the workplace is in the area of change overhauls that come down from the top. The CEO gets an idea in his/her head and tries to roll it out through the organization. There are instances of “sell and tell,” and there are some constituencies that refuse to buy in to the new logic… and like any logical construct, the new way of thinking always has its flaws.

Case Study: Results-based Development in Chris’ Golf Game

NOTE: This landing page is the most recent blog post. For background on Measure of Success, visit: OUR APPROACH

What follows here is a real-time description of applying results-based consulting to developing my golf game. Following the “Initial Post” are periodic updates and reflections.

Sept. 6, 2014 – Initial Post

I decided to take my own medicine and approach an area of personal development in this same results-based manner. I will be the “client” in this exchange and Mark Linton, head of instruction at Weston Golf Club, will be the “consultant.” With his permission, I will document our shared success.

This arrangement is based on the model outlined in “Our Approach”:

1. Overall direction:

For this, I am focussing on scoring better in competitive golf. This very likely fits with “play better golf” or “play closer to my potential.” Any of those definitions are indeed, in my opinion, close enough.

2. Aspirational goal:

I have shared this with Mark, but I will not divulge the details of it here. It is important that Mark know what I am shooting for and the relevant deadlines. We agreed that it is difficult, yet attainable, under the right conditions. There is a monetary incentive for Mark should I attain this goal. (I won’t disclose those details either.)

3. Means-to-an-end goal:

If you are not familiar with golf, here is a brief description of the “handicap” system that is widely used with amateur golfers. To oversimplify, your handicap gives you an adjustment to your score (adding or subtracting) to make for an “even game” when playing with someone of differing ability. The idea being that if two golfers of differing ability both play a usual game, a head-to-head contest will be competitive.
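
For those who want to see the arithmetic, here is a minimal sketch of that adjustment; the scores are invented, and the real system layers on more nuance (course rating, slope) that I am ignoring here.

```python
# Illustrative only: the real handicap system also uses course rating
# and slope; this shows just the basic net-score adjustment.

def net_score(gross_score, handicap):
    # Subtract the handicap from the actual ("gross") score.
    return gross_score - handicap

# A 5-handicap shooting 80 vs. an 18-handicap shooting 92:
print(net_score(80, 5))    # 75
print(net_score(92, 18))   # 74 -- the weaker golfer wins the net contest
```

On net score, the weaker golfer can win the match even while shooting twelve strokes more, which is what makes the head-to-head contest competitive.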

I am currently a 5 handicap. I think that if I can get my handicap down to a 2, my aspirational goal will be reachable. Mark has expertise in this context, and is comfortable with being, to an extent, “on the hook” for my development toward those goals. So far, we have been able to establish a “shared goal.” As an experiential comment, I am quite enjoying the confirmation that this set of goals is reachable and that I have an interested party to teach, coach and support. I am sure there is plenty of work to do, so I will have to keep this initial euphoria in mind when the going gets tough.

Periodic Updates and Reflections (reverse chronological order)

Oct 9, 2014

The work that I have done with Mark has got me focussing on specific areas of my game. This is where he is exercising his expertise to guide me in the right direction. I am really enjoying the shared responsibility because I almost feel that it is his job to figure this out for me. Not to bore you with specifics, but short game (close to the green) is an important element of scoring. When I have played recently, even when not scoring that well, I have found solace in the fact that my chipping has improved. Knowing where to look to find indications of improvement is important in maintaining hope and not being disheartened or overwhelmed by the task at hand.

Oct 1, 2014

Probably riding the positivity that came from the confirmation that my goals are achievable, I was pretty quick to tell people what I was doing and what I was trying to accomplish. There is a certain vulnerability with revealing your intentions to people. Now that everyone knows that I am trying to get my handicap down, it will be embarrassing when it does not seem to be tracking in that direction. (This was the case this past week!)

Sept 19, 2014

Simply because the set up is different, I find myself sharing more information with Mark than I would normally have done. Maybe to convey that I was holding up my side of the bargain, I relayed how much I practiced, how it went, what was working and not working. This was quite different from the episodic relationship that I have had with golf pros in the past. I could certainly envision establishing a flow for this information with clients with whom I will be working.

Sept 12, 2014

Getting confirmation from Mark that my goals were achievable felt very good. There was an initial surge, which actually surprised me. I started thinking that because the journey had started, reaching the destination was an eventuality. When I played nine holes with my kids, I found that I was taking it more seriously, and was very disappointed with a score that previously would have been so-so. I found myself thinking that I should be operating at a new level.

 

Results-Based Development (Some Backstory on Goal Setting)

One of my biggest frustrations as an education professional (trainer, instructor, consultant, etc.) is that the standard “measure of success” is “Did the participant like it?” I do not suggest that participant enjoyment is not important, but “did they like it” is only part of the story. I would like to think that the “liking” could align with developing in an intended direction. For example, “I liked it because the skills and awareness were necessary for me to better perform in my role” rather than “I liked it because the facilitator was funny, let us go early, and we had a hot lunch.” Similarly, if participants didn’t “like it” what was the reason? Not relevant? Waste of time? Made me think too much? No clear tools? Sharpening the axe can take some time; maybe an axe isn’t even the right tool…

What to measure becomes so important. In the absence of any other measure, maybe “participant satisfaction as indicated by ‘smile sheets'” is acceptable and maybe we even set a goal accordingly. We could get some help from George Doran and employ the SMART goal framework (Specific, Measurable, Achievable, Relevant, Time-bound). Mr. Doran’s helpful and memorable tool may create some unintended consequences.

Specific: Oversimplifying a situation such that the focus is on the “operation” and not on the “patient,” as in the dark humour of “the operation was a success, but the patient died.” We trained teams separately to keep a friendly atmosphere; participants loved the “team building” sessions, but we still have turf wars between these two groups. Other examples could include delivering a product that met the customer’s specs exactly, but seeing unacceptable margins.

Measurable: This orientation tends to push us toward what can be measured, which can dangerously skew attention toward distracting elements. e.g. We wanted to reduce customer complaints, but all we did was encourage front-line staff to accommodate ridiculous requests (which ended up costing us money!)

Achievable: This aspect needs much more context. Achievable to whom? What are the consequences of success or failure? If the latter has any connection to monetary reward, you can guarantee that “sandbagging” will ensue, more generously known as “managing expectations.”

Relevant: Again, to whom? In trying to increase relevance by attaching rewards to achievement, the sandbagging danger rises.

Time-bound: This tends to drive the behaviour that the stages of the journey are discrete and independent. Winning the Tour de France is not necessarily about leading at every stage.

In an effort to establish goals that align the interests, I find myself up against three (at least) immovable truisms that I will explain here.

It is a journey not a destination: The long-game can easily get lost because it is so difficult to conceptualize. Let’s pick a direction to move towards and not worry too much about “what happens if we get there?” or exactly where “there” even is.

Anecdote – The Artist Formerly Known as Prince

If pinned down to an overall direction for his live shows, let’s assume that Prince would say he wanted to create an exceptional musical experience. Rumour has it that all musicians and back-up vocalists were encouraged to come and tell Prince when they had nailed their part, at which point Prince would add to their task. The guitarists who mastered the base track would get a dance sequence. The well-rehearsed back-up vocalist would be given a percussion part. And if you nailed that, he had even more for you. The message being: good enough is never good enough.

Everyone games the system: Self interest is part of everyone’s psyche. It will kick in for different people at different times, but even the most principled and well-intentioned people will take advantage of ways to game the system. We must take extreme care in selecting measures because that will directly impact behaviour.

Work is not family (for everyone): Many will use the metaphor of a family or a community to describe an organization that functions with a healthy degree of trust and shared focus. For me, community is more realistic, simply because it introduces the responsibility you have as a member of the community, but also leaves the door open to depart if you find another community that is a better fit. The understood permanence of the “family” connection means that your only choice is to make the best of it. This can generate a nice bit of commitment, but can also create resentment and guilt.

This critique of some common approaches to goal-setting and identifying some relevant “truisms” should provide some important rationale behind the “results-based development” approach explained here.

Results-Based Development (Under the hood of Aligning Interests)

In many different contexts, we see examples of competition contributing to higher performance. For competition in business, we can draw an important distinction between “good competition” and “bad competition,” which is sometimes underemphasized. As I understand it, “good competition” creates an environment where everyone has to “up their game” to remain competitive. As evidence that “the market works,” we would see examples of customers benefiting from competition because organizations have to work harder and smarter to remain in business. Conversely, “bad competition” creates an environment that destroys long-term value in the name of “winning” or “surviving.” In such scenarios, organizations harm the sector and themselves in a “race to the bottom.” Such scenarios also have organizations engage in ethically questionable behaviour to “win at all costs.”

To start, let’s assume that “good competition” is indeed possible. Let’s further assume that for it to work, it requires that parties share an understanding of what “good” they are trying to accomplish.

For businesses, making money is “good,” but so are other forms of benefit: safer automobile travel (Toyota), or sustainable practices (Unilever). Governments are expected to think more about the greater “good,” and as a specific illustration, let me use community health-care in Ontario. Let’s say that “good” in this context is “efficiency in delivering necessary services to patients,” or something that balances provision of necessary services within fiscal constraints. As is the current practice, the government-funded payment to service providers for some activities can be attached to a result or outcome: a service provider is given a lump sum to achieve a specific outcome (e.g. heal a wound). If they can complete the task more efficiently, the profit is theirs. If it happens to take longer or more resources, the provider spends those resources, but can’t come back to the funder for more money. If this works, taxpayers in Ontario get better bang for their collective buck, and patients get high-quality care; wins all around.
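
As a rough sketch of that incentive structure (the dollar figures are invented for illustration, not actual Ontario funding numbers):

```python
# Illustrative only: invented figures, not actual Ontario funding amounts.

LUMP_SUM_PER_OUTCOME = 1_000  # hypothetical fixed payment to heal one wound

def provider_margin(cost_to_deliver):
    # The provider keeps whatever is left of the lump sum (or absorbs the loss).
    return LUMP_SUM_PER_OUTCOME - cost_to_deliver

print(provider_margin(800))    #  200 -> efficient delivery, provider profits
print(provider_margin(1_200))  # -400 -> overrun, provider cannot bill the funder for more
```

The provider profits only by delivering the outcome for less than the lump sum, which is exactly where the alignment of interests, or the perverse incentive described below, lives.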

This same type of arrangement could work in a non-government context as long as the service provider is at least partially interested in the same definition of “good.” This creates “good competition,” and efficient organizations that do good work will succeed.

The realm of “bad competition” can be peppered with “perverse incentives,” whereby, for example, a service provider could legitimately want a patient to stay sick, or at the very least, err too much on the side of caution so as to go wildly offside with a “fiscal responsibility” effort. This is the potentially very ugly underbelly of the public sector contracting out to the private sector. In a consulting relationship, this can create, for example, an incentive to run up the billable hours.


Setting goals and objectives that promote shared accountability is extremely tricky. From my experience, the real trick is to align activity to a common purpose (e.g. the “good”), and I will go as far as to say that without a shared interest, collaboration of this nature is impossible because the result will actually create “bad” competition.