The solvable problem with Self-Evaluations

I have a vivid memory of a self-evaluation from my undergrad days at McGill. We had to take a writing course, which must have been a cross-cultural exercise for the Faculty of English instructors who ventured into the Business building for these weekly encounters. There was a self-evaluation at the end of it, which, if I recall correctly, included a preamble that encouraged reflection on your development over the 12 weeks, as well as your ability compared to your classmates. I think I may have been guilted into responding “B+” and admitting that I could really have done more. I talked to classmates afterwards, some of whom had skipped a number of classes. Their responses were “A. Can I give myself an A+?” (Note: A+ was not an option. McGill operated on the U.S. 4-point GPA system.)

This is a very obvious example of the challenges of “self-evaluations.” Self-attribution bias leads us to truly believe that we excel. Self-preservation instincts dampen the guilt of overstating the truth because these results can create positive future options or avoid negative future outcomes. For the undergrad business student, “strong marks = better job opportunities upon graduation,” so go for the A. In a business context, if staffing cuts are looming, do you really want to have a mediocre self-evaluation in the HR file?! I wonder how many of my fellow graduates, decades into their business careers, have grown to learn that actual strength in writing provides a significant advantage in the workplace.

Necessary perspective

The self-evaluation brings the performer’s perspective into the discussion, which is absolutely crucial and applies equally in an organizational context. In addition to “perspective,” objectivity is also vital, and this is enabled by clear criteria. The healthiest criteria mix features both “what you do” (somewhat controllable; gets at “how” you get results) and “what you accomplish” (somewhat more impacted by external factors; focussed on outcomes).

The evaluation becomes less of an assault on the ego if we can validate that someone did “what was expected” even if they did not “achieve expected results.” This demands some time and effort up front to go through the exercise of making logical connections between activity and results. You have to be ready for a reasoned conversation about what drives performance.

“How are we doing?” is a big question

For an organization, the questions about “what results do you want to achieve?” and “what do you think gives the best chance at achieving those results?” are really big questions that bring out some deep-seated assumptions. A good strategic discussion will expose these and will explore some of the big decisions behind some of our assumptions. This should surface options to move forward rather than a clear best way. Imagine interactions where people say, “We said we are trying to reduce T&E, so we can’t fly everyone down for this meeting,” OR “We said that our focus was growing our business with our top-tier accounts, so we can’t get too anxious because we lost some tier-3 business.”

Like anything, involvement breeds acceptance, so it makes sense to have a senior-team conversation to tease out relevant expected outcomes and relevant expected actions. When you are involved in creating your own report card, the evaluation feels less daunting. This may turn down the volume on the “self-attribution bias” and the “self-preservation instinct.”

“That’s a great company name!”

When the prompt of “Company name?” comes my way, “Measure of Success” often gets a reaction. Just this week, my bank paid me the compliment you find in the title of this post.

NOTE: An SEO expert would not share this evaluation: a search for “measure of success” does not easily serve up Measure of Success Inc.

The original naming contained a double entendre referencing “measures/metrics” (e.g. “So, tell me what do you use to measure your success?”) and “small amount” (e.g. “Well, our last foray into that market did generate a measure/modicum of success.”) Clever, no?

The practical focus of this venture is the importance and complexity of measuring performance. Financial measures are great to show past results, and forecasts are great for setting expectations. For current activity, attempts to “measure” cause more problems than they solve.

No one questions the importance of setting goals in order to mobilize a group of people or an organization. These are important gauges for individuals to derive a sense of accomplishment and, perhaps more importantly, set the expectation by which we are evaluated and judged. Choosing the measure shapes activity and can quickly change the focus. Recall the oft-relayed story of the spouse looking for keys in the kitchen. It is later revealed that the keys were indeed last seen in the bedroom, but “the lighting is better here.” (There is also a version involving a drunk, a lamppost and the other side of the street.)

The clarity of metrics and measures gives rise to an emboldened sense of purpose for both current activity and forecasting. I have written about the utility of logic models in isolating what exactly you are measuring. Again, the rigour of this model—and the fervour with which some apply it—suggest that it is all-encompassing, when in the very large picture, the greater impact of any activity will be necessarily ambiguous.

Here is an example that is near to my métier: Evaluation of training and education.

The Kirkpatrick model seems to mirror the logic model in moving from more superficial to more complex:

  • Level 1 – Reaction (Did you… like it? …find it relevant? …etc.)
  • Level 2 – Learning (Did you… remember it? …get it?)
  • Level 3 – Behaviour (Are you using it?)
  • Level 4 – Results (Is it making a difference? What is the ROI?)

Sounds pretty straightforward, yes? Well, imagine you are conducting training on some area of communication, like being assertive, professional and respectful in dealing with workplace conflict.

  • Level 1 – What is the reaction if a participant learns that she/he is not very good at listening? How is “reaction” affected by a hot lunch?
  • Level 2 – Can you test this?
  • Level 3 – What do we look for? What if, during an observation, we get the sense that people are “going through the motions”?
  • Level 4 – Where do you even start? Complaints to managers (conceivably from people who encounter approaches to conflict which are not professional, respectful, etc.)? Retention numbers (because, if we manage conflict better, people will be happier and stay)? Instances of obvious conflict (because people are not shying away from these important conversations)?

The tremendous temptation is to (1) oversimplify or (2) not really even try to measure impact. Temptation #1 would be invoked if the client (internal or external) demands measurement. The real danger here is that the oversimplified metric sways activity. Imagine if we were tracking “visible searching hours” as a Level 3 indicator after we had trained people in “looking for keys.” The camera is in the kitchen. (Recall: the keys are in the bedroom.)

I would love to think that my services can help you avoid Temptation #2.

So, is there irony in a company called “Measure of Success” suggesting that it is impossible to clearly measure your success? Maybe it’s not such a good name…

The Feedback Context – Developing and Evaluating

When it comes to performance, the question “How are you doing?” can start a very rich discussion. Do you really want to know? Do we really have a good way to gauge it (except by historical occurrences or lack thereof)? In typical business education fashion, let’s say it depends.

Feedback fills a really nice space in a working context, and any survey of employees will say that it is much-sought-after information. Hopefully the yearly performance evaluation as the sole source of feedback is a thing of the past that left with the move toward “flat” organizational structures and non-linear career development. There is an important difference between “evaluative feedback” and “developmental feedback.” I argue that they are best kept separate in order for feedback to work both for individuals and for the performance culture of an organization.

Feedback for evaluation

In the evaluation sphere, the guiding question for feedback seems to be, “I am doing awesome, right?” or “You’re not going to fire me, are you?” The receiver is primed for positive reinforcement or for some peace of mind that their job is not in jeopardy. This can be driven by a number of things, but ego is probably front and centre. Research routinely finds that much more than 50% of a group will think that they are above average. This creates an unworkable situation in many workplaces where we are striving for performance “excellence,” but those delivering “average” think they are going above and beyond the call. If the tick boxes are “meeting” and “exceeding” expectations, most of those you evaluate will be disappointed with the former, even though logic dictates that in a performance culture the expectations are high. To further complicate things, the Dunning-Kruger effect suggests that those who are furthest away from “excellence” will think themselves closest to it.

I encounter this in classroom evaluations in the MBA program within which I teach. There is a conundrum created by the university expectation for a B to B+ class average and the reality that the large majority of students think that they should be getting As. This drives a reluctance to accept critical feedback without defending and justifying one’s position (presumably because accepting criticism would be setting the stage for accepting a lower or “average” evaluation).

If this is the kind of tension that manifests itself in the workplace, it is no wonder that managers find providing feedback a challenge. Who wants to get into a debate about someone’s performance? It becomes so much easier to provide positive feedback, or at least put a bigger emphasis on the positive elements, even if those are not the most relevant. (e.g. “Don’t worry about the sales results; you had a lot of really good meetings with some very key people.”) One of the knock-on effects for an organization is that the standards get relaxed (e.g. the President’s Club gets expanded) and there can be a general inflation of any quantified evaluation (e.g. you see more 5-out-of-5s or 100% ratings). This expands and dilutes your group of top performers. This does not have to be a problem, and in many organizations there is a lot of resignation that this is just the way it is.

On the other end of the bell curve, a similar conflation can happen in that unsatisfactory performance can get bumped up to “satisfactory” or better (which is actually much worse). If you are after high performance, this situation could not be worse: your very high performers will be grouped in with the “average,” and the “below average” are convinced that they are doing their job.

Feedback for development

The nuanced difference with this kind of feedback is that everyone can improve: good enough is never good enough. There is an apocryphal anecdote about Prince and his backing band, the Revolution. When working on a new number, the band members were encouraged to let Prince know if they mastered their part during the rehearsal period. He would be ready with an extra guitar lick, a percussion part, a dance move, a vocal harmonization, etc., to keep them occupied while the rest of the band worked on their parts. The message being: don’t hide the fact that you can handle more.

With evaluation in place, too often people direct effort based on the location of the goal line. This is why those who are being evaluated complain when we “move the goal posts.” With an evaluative set-up, your most capable performers (especially those who understand the system) know exactly how much effort to expend to meet the given bar and not make the next one any higher. I worked with a sales manager who was surgical about meeting budgets almost to the penny (nickel?) and mysteriously having a bunch of business “just come in” in the early weeks of the new quarter. The fear of having the goal posts moved based on an extraordinary result is a function of the evaluative context. The risk of falling short is enough of an incentive to launch very elaborate gaming of the system.

For lower-level performers, you get the opposite behaviour, where people will sacrifice “next quarter” in order to drive short-term results. Picture the account manager who is pushing a major client to close business before the end of the quarter, and goes as far as to extend an exploding offer in the flavour of “If I can’t get your commitment on the renewal before next week, I am not sure that we can extend the same offer.” Picture also the predictable response from the major client: “We will make our decisions based on our financial year, not yours. Thanks very much; we will talk to you in about 6 weeks and will be expecting at least as good a deal as you just described.” It is hard to say whether such an exchange would actually hurt next year’s arrangement, but I suspect there would be some future blowback.

With a developmental mindset, we can entertain stretch goals without worrying about people feeling that they “failed” by achieving 89% of a really challenging goal. The “evaluated outcome” is immaterial; the direction (e.g. forward) is the only thing that matters (so it is crucial to have a shared understanding of what “forward” looks like). The focus is simply what went well and what could be better next time.

Analogy from the world of sport: “Hey, Jason Day, I know that you are number one in the world and just won in a blowout. Your approach shots are phenomenal, but can we talk about your course management off the tee? Twice you were between clubs, and better distance control could have you relying a bit less on feel for those shots.”

In the world of work, this will equate to drawing attention to something that had to “give” to meet a deadline, achieve a client outcome, etc. In the “developmental world,” this won’t be seen as “Great job, but…” There will be a thirst and expectation for some reflection on how this could be even better, or more sustainable, or less contentious, or… some other desirable—even aspirational—attribute. This is the drummer for the Revolution keeping great rhythm, nailing the drum solo, and working on a juggling move for next time.

When you try to do both…

The flavours of feedback have very different intent: evaluative feedback seeks to differentiate performance (e.g. separating the wheat from the chaff), while developmental feedback doesn’t care how you stack up (it asks only how you could improve).

Here are the very predictable outcomes you can expect from not distinguishing between these types of feedback.

The effect on top performers can happen in at least three ways:

  • Just-enoughing: As mentioned above, giving exactly enough effort to attain the prescribed “goal.” A friend of mine joked about the grading at our alma mater, McGill University, saying there are only two grades you should get: 85%, which was the minimum to get an “A” (this was the highest grade, so any effort beyond it had no impact), and 56%, which was the minimum to get your credit. The thinking being: you certainly don’t want to fail, but unless you are getting an “A,” the grade doesn’t matter.
  • Sandbagging: In setting the original goal, you can count on top performers actively negotiating to establish a bar that they are certain they can attain, while convincing you that it is a stretch goal. In client-facing activities, this is called managing expectations. This may be avoided by splitting up the types of feedback.
  • Skinner-boxing: When attempts to motivate involve evaluation and reward, you can create a stimulus-response dynamic where you feel that the only way to maintain performance is to continue offering tangible rewards. Peak performance has to include some intrinsic drive. Any theory of human motivation takes us beyond the gun-for-hire, will-work-for-rewards mindset.

The effect on “average” performers may be that the category ceases to exist. With a reluctance to acknowledge that one is average, the evaluators can be stuck evaluating everyone as “high performing.” JK Simmons in the movie Whiplash has a nice little soliloquy about performance that finishes with: “there are no two words in the English language more harmful than ‘good job’.”

Underperformers are always an interesting group. Jack Welch had an answer for this group (identified as being in the bottom 10%): fire them. As cold-hearted as that seems, the compassionate view is that the situation is not working for either party. There is something about the context that is not working. You will find a context that is a better fit for you. It’s not you… it’s us (and you). When you are able to separate the results from the development, you can get a cleaner look at what is not working. If it can’t be fixed, maybe the “exit” is best for all involved.

What exactly to keep separate

For the developmental conversation, there is no differentiating. Everyone is tagged for improvement. The evaluation decisions you make have a huge impact on the culture of the organization because you will get the behaviour that you reinforce. Bringing performance out in the open will create pressure to align with existing systems and practices.

The mindset when approaching a top-performer has to be around tapping into intrinsic areas to maintain motivation. What can we do to help you be even better? What can we do to help you develop in ways that you want to?

The approach for the “average” employee should balance the possibility that the person could be performing at a higher level, but chooses not to. As an organization, are we OK with someone phoning it in and delivering adequate results? There will also be those who work really hard for adequate results. Can those two archetypes co-exist?

If performance and accountability are part of the fabric of your organization, healthy churn on the lower end of performance will mitigate churn at the top end because results do matter. Many organizations (but not all) will want to maintain compassion and understanding about underperformance because every industry has external factors. If sales are down because no one is buying, do we really judge by results only? Again, curiosity should be the driving force toward those who are not delivering to the level they should be. Did we not do a good job of assessing potential at onboarding? Has the work environment changed such that your best effort is no longer good enough?

The conversation about the “change in fit” is not limited to under-performers. You may find that churn can be healthy across different levels of the performance spectrum. A former colleague of mine, who was a very long-standing top-performer, talked about diplomatically broaching the subject of a “new horizon” next step in a performance review. Apparently, this had been an elephant in the room and both parties seemed to appreciate the overt acknowledgement. A CEO client routinely states, “No one is working here forever, including me.” These discussions are clearly in the “development” realm and can very quickly tie into the clichés that “everyone can be replaced” and “if you love something, set it free.”

The way in which an organization handles performance has a huge impact on the culture. This is a complex collision of scientific evaluation, individual motivation and the art of collaboration. Drawing a clear divide between the development and the evaluation will give you a better chance at getting and sustaining the desired performance.

We have a diversity problem? Who says?

Earlier this month, Chris MacDonald wrote about diversity programs and why they fail. The list of reasons includes breeding resentment toward the marginalized group for causing additional work.

Why corporate diversity programs fail, and what to do about it

This very realistic (and wholly unintended) consequence is textbook irony. Those attached to an initiative that goes sideways in this manner will exhaust all credibility in affecting future cultural shifts in their organization. This is the danger when efforts are made to solve a “problem” that has yet to be defined and properly contextualized.

Lots of aspirational words drive efforts to change a culture: innovation, efficiency, collaboration, accountability and, of course, diversity. Each of these aspirational (and metaphorical) sticks has a wrong end that is easily grasped. It is well worth taking a step back to ask some critical questions about the current state before launching your program to increase <<insert aspirational noun>>.

If you think your organization has a diversity problem (or, has an opportunity to improve its diversity), go through the exercise of making the case to someone who says to you: “Problem? What problem?”

Are you…

  • In a knowledge-driven industry? Diversity in approach among your staff will drive better insights.
  • Afraid of not complying with regulations? Get out in front of this one.
  • Embroiled in the war for talent? A focus on diversity might boost your Glassdoor reputation.
  • Seeing well-heeled competitors poach your top talent? Earn loyalty by doing the right thing.

A word of caution:

“The right thing” is in the eye of the beholder. You may run into leaders who feel that, for example, earning loyalty from our employees does not justify the time, energy, dollars, risk, etc. of the investment in your initiative. Such pushback may reveal some pervasive cultural attitudes toward employees. As one of those employees, rather than trying to affect the culture, you may rethink your decision to continue working there.


Re-writing Unwritten Rules – Is this the change we want?

Sports as a metaphor for business fits well for me, so imagine my delight at Chris MacDonald using the Odor/Bautista brouhaha as a starting point for a discussion on unwritten rules. He was equating the competitiveness of sport with competitiveness in the market economy. The parallels are endearing, and he also presents the necessity for self-regulation (according to “the code”), as well as some imposed regulation (from the umpiring squad or regulator). This balance often gets lost in the sports-meets-business mash-up and we are left feeling that some external force (e.g. referees or “the Government”) is supposed to curtail undesirable actions or that “the market” will keep us in check.

In baseball, there is a complex series of “you do this; we do this.” Predictably, people’s interpretation of the Odor/Bautista event is heavily coloured by allegiances: whether you love/hate the Jays, love/hate Texas, love/hate “old school baseball,” etc. Beyond the superficial barbs, what appears to be noteworthy is the first punch by Odor. In baseball’s full unwritten ledger of “this warrants that,” there is no “this” for which the “that” is “punch the opposing player in the face.” When it comes to the retaliatory punch, we are into another rule book where, presumably, equal retaliation is deemed acceptable. (But he punched me first!)

Call it a “code” or “unwritten rules” or “norms” or whatever; in sport and business there is a rich interplay between the enforced formal rules and the understood informal rules. In both cases, the very worst kind of rule making is in reaction to a specific incident. Sports tend to enshrine the original offender in the rule: the Utley rule in baseball, the Avery rule in hockey. Hopefully we don’t see an Odor rule emerge from this, because a reactive rule alters the essence of a game played between competitors who abide by a similar set of beliefs. Do we want to shift from the shared understanding that “we don’t fight in baseball” to crafting a rule that delineates a “scrap” or a “tussle” from a “fight” in order to assign the correct punishment to each (e.g. if the hand stays unclenched, it is at most a tussle)?

When he took over as global CEO of Unilever, Paul Polman countered a conventional unwritten rule by refusing to report quarterly to the analysts who so craved the latest information. Unilever’s stock price hit a low in March 2009 with the first missed quarterly report. Was this a metaphoric “punch” to the information-hungry analysts and the short-term profit seekers they serve? Seven years out, Polman is still CEO and the stock sits at almost 3x what it was at the end of Q1 2009. This metaphoric punch has been described as courageous, which could not be more different from the words used by some to describe Odor’s real-life punch.

For baseball, one big question is whether Odor has tapped into something that changes the fabric of the game. In the ebb and flow of sport, we now have professional golfers sporting beards and “joggers,” both of which can be hailed as a step into the 21st century for a game weighed down by elitist traditions. Is baseball due for a similar shift?

Whether from the sport or business perspective, the question of “what kind of game do we want?” can help shape if and how we challenge the status quo. An Odor/Polman punch can be the catalyst to shake things up, or be the action that begs our response. The change to the written rule can be swift, but changes to the code can be a slower burn. Both can have lingering effects on “the game we get,” which may not be the one we want.


Learning fast and slow – Educating and Credentialing

Earlier this month the Financial Post magazine ran its feature on MBA programs, part of which had MBA alumni commenting on how their education contributed to their success. Ellis Jacob (CEO of Cineplex) and Jennifer Reynolds (CEO of Women in Capital Markets) both provided excellent reinforcement of the benefits of getting a grounding in business education. From my perspective as an instructor in an MBA program, this is heartening reinforcement from the real world.

An additional common theme was a little less comforting to me: both of these leaders talked about the drive to complete the program as quickly as possible. I understand this urgency, and my discomfort is not so much in a student being anxious to get on with their career, but in the temptation to see the gaining of a credential as the secret to success rather than the rigour and thinking skills that one should develop in such a program.

Given their impressive accomplishments, I am convinced that neither of these CEOs thinks that learning to run a business comes in the form of a crash course. Ms. Reynolds heads an organization that advocates for women in a sector that is heavily male-dominated for reasons that can’t easily be explained in this day and age. Mr. Jacob has to deal with an entertainment industry that has seen “Silver Screen” experiences shift from IMAX to iPhone. Likely, the most tangible learning from their respective business programs was in understanding fundamental drivers and how to adapt to change. (This means that an MBA circa 1977 or 1998 sets you up to remain current.)

In Thinking, Fast and Slow, Nobel laureate Daniel Kahneman shares how ill-equipped we are to make reasoned decisions because the part of our brain that houses this competency is lazy and quick to defer to our automatic but less thoughtful brain. Rushing through an MBA program may feel like speed meditating for quick enlightenment.

The somewhat clichéd description of higher education can be “learning how to think.” From my experience with business education (on both sides of the chalk), the real world is a wonderful, yet unforgiving forum to test your thinking and your credentials. As MBAs become more pervasive in the workplace, my hope is that the “slow learning” at the school of the real world further strengthens the educational grounding and helps this particular credential to improve with experience.

Diversity Boxes – ticking and talking

The Schumpeter column of The Economist took a run at diversity this week with the hypothesis that fatigue is a big part of the problem. This fatigue appears to take different forms:

  • We hear about it far too much (Enough already!)
  • We hear about it but nothing changes (Not enough yet!)
  • We hear about it but what does it really mean (When is enough enough?!)

A look at the article’s comments section (which is always a dangerous move), reveals everything you need to know about the multitude of issues attached to the surprisingly complex word. Doubts and critiques expose some deep philosophical questions, as well as some statements that one is surprised to see in a written format (or not surprised, if you tend to read the comments section of publications).

A couple of things are clear about diversity:

  1. This idea has been getting attention of late. (I recall a similar trend bubbled up around the multi-generational workforce in the last decade or so. Maybe this, too, will pass or linger.)
  2. The word has many different interpretations and understandings.
  3. Consistent with 2, ideas vary on whether an organization needs it and, if so, how best to get it.

One of the ideas that the article attacks is diversity as a “tick-the-box” activity. Fittingly, the differing narratives surrounding “diversity” bring one critique stating that box-ticking organizations actually deserve credit because at least they are doing something!

Is it reasonable to say that the merits of box-ticking depend on the contents of the box?

There may be some consensus that filling the ranks with “the token [insert statistically underrepresented group member]” probably doesn’t work for anyone. (But I can imagine being challenged on that statement.) So, we should stay away from those kinds of boxes.

Similarly, awareness building (especially when the topic is on heavy rotation in media) can also wear thin. So, maybe it’s not enough to “tick the box” on the Diversity Lunch & Learns.

If we are trying to prevent an over-reliance on predictable cognitive biases in important decisions, maybe we can tick the box on the presence of such initiatives as:

  • panel interviews for new hires
  • formal meetings of the senior leadership team to discuss and determine merit bonuses for employees above a certain level
  • determining tangible indicators to test the connection between our idea of diversity and our idea of performance

This is by no means an exhaustive list, nor is it a collection of best practices. Well-intended efforts to “do the right thing” can quickly get lost in contentious world-view debates that risk making the situation worse. We are convinced of the merits of digging into an idea like diversity to understand how it fits into the business and of finding some clear ways to track the progress of distinct efforts, even if that means ticking some boxes… but only the good boxes.

In which we string together Golf, Diversity and the Schulich MBA

This week the Globe published an analysis of golf in Canada: more gloom than sunshine, but certainly no outright doom. Perceptions are an important part of assessing any situation, whether it is the subtle “half-full/half-empty” divide or the distinctions you make while applying an analytical tool. Former PGA Tour player Ian Leggatt cited some problems in attracting golfers to the game. In theory there are some interesting questions: At what point does this risk become significant? Does a market change mean an opportunity or a threat? (The symmetry of the SWOT analysis feels a bit like a dirty little secret.) In practice, we have to make decisions.

Leggatt identifies two specific issues behind golf’s problem of attracting new players: (1) it is too expensive and time consuming, and (2) it is too tough, thanks to golf course development in the past two decades that left us with public courses that pose too much of a challenge for the non-expert golfer. Sadly, part of the discussion of the latter issue involves entities like Clublink looking at redeveloping Glen Abbey. (Hey, it happened to Yankee Stadium!)

There is already lots of attention on Problem #1, too: shorter courses, cheaper memberships, bigger holes, and a slew of other innovations. As a golfer and a bit of a traditionalist (and a member of Weston Golf and Country Club), I want to look at what can change without changing the fabric of the game… or redeveloping the land. That is how Iron Lady Golf caught my eye: founder Lindsay Knowlton sees “an opportunity” where others see nothing but threats.

More than a business opportunity, Iron Lady Golf is righting a lingering inequity: the game of golf creates tight networks that seem to be inaccessible to women. The programs and the thinking aim to build individual skills and confidence, as well as to create access to the game and to golf clubs. The “exclusivity” of the private golf course has shifted from being a value-adding differentiator (MBA-speak for “a good thing”) to being something that hinders a club’s existence.

Additionally, this history of “male-centric exclusivity” translates into the wider corporate world. The topic of “diversity” is in heavy rotation among large corporations, which also contain tight male-centric networks, though not restricted to golfers. Will bringing more professional women into the game of golf solve the lack of diversity in corporate Canada? Probably not. Will it help? I would argue, “It could.”

The piece of curriculum in the Schulich MBA program that I deliver encourages thinking about multiple stakeholders and their intersecting interests. Such an orientation creates a rich landscape over which to layer opportunities and threats. In identifying interests, the bigger questions become “opportunities FOR what?” and “threats TO what?” People mobilize quickly around existing self-interest. For me, threats to golf get my attention, as do opportunities to bring more people into a game that I really enjoy.

The Balancing Act of Collaborating

There is lots of talk about “getting on the same page,” but in most work situations some level of conflict persists, varying from subtle differences in opinion to diametrically opposed views. We all know that maintaining cordial working relationships is a must, yet too much focus on appeasing diminishes our results, and too much focus on our own agenda carries the risk of losing status as “a team player.”

It can feel a bit like walking a tightrope and constantly balancing between

  • Being self-assured, but not belligerent.
  • Being accommodating, but not spineless.
  • Being ambitious, but staying realistic (Picture a “stretch goal” snapping our rope!)

Sustaining forward momentum while maintaining this “balance” is also tricky. There are three broad areas of attention that can help:
How am I seeing the situation (and should I look at it differently)?

With reams of data at our disposal, it is very easy to arrive at very different evidence-supported answers to the question “how are we doing?”  Those closest to the situation tend to have a really good read on how things actually work, but once performance measures are imposed, these same people can start to question their gut feelings. Taking time to gather a different perspective on your own may be more effective than simply taking in the perspectives of others. One part confidence; two parts humility.

Who do I have to work with (and how are those existing relationships)?

We have relationships to manage up, down and across. Our stakeholders will vary in the stature they hold in the organization, but individual differences in style almost guarantee interpersonal challenges amid the organizational politics. In practice, we have to navigate a complex web to get what we want for ourselves and for others. Efforts at building/rebuilding relationships can make the tightrope seem a little wider (or maybe not so high).

What are the real priorities here (or, at least, what should they be)?

Sticking with the “rope” metaphor (why abandon it now?), what happens when tightropes turn into tug-o-wars? Such situations tend to consume lots of effort, but provide disappointingly little in the form of results. Many of us are not in the position to impose our views on the organization, but we all can exert a degree of influence. Even when things are at cross-purposes, speaking truth to power can be scary. Is asking power for a small clarification any better?

Can logic models work for you?

The “logic model” is a tool that is widely used in public and social sector initiatives. Like any tool, there are obvious on-target applications (e.g. hammer for inserting nail) as well as more creative applications (e.g. hammer to open a paint can). In all cases, the user is responsible for picking the right tool for the application. To me, there is relevance for the logic model in the private sector because this tool can expose assumptions (logical or not) and bring rigour to the thinking. Here is a quick primer on logic models, followed by some suggestions on if/how/when to use it for your business.

USEFUL VOCABULARY

Theory of Change: a set of fundamental assumptions that underpin a line of reasoning, often invoked in solving large social issues like homelessness or poverty. In a private sector context, for example, an ad agency president might believe that to be successful, her team has to know her clients’ business better than they do. She sees her team as “providers of insight” rather than “meeters of needs.”

Logic Model: a framework that allows you to portray the specific linkages of your reasoning from the resources you expend to the final impact that you will have. The model takes into account the linkages between four fundamental components:

  • Inputs – These are resources that we control and choose to deploy toward the end objective. This is usually about money and time. Energy fits in here, too.
  • Outputs – This is what we create or produce or get from expending the “input” resources. This could be a report, the provision of a service, creation of some capacity, etc.
  • Outcomes – The specific way in which what we get helps us out: we are better able to do something, or something is improved, because of the output created from the inputs.
  • Impact – This is the higher order calling of the whole endeavour. What did we set out to address in the first place? This is what we were after all along.

WORKING EXAMPLE

The thing about logic is that it can seem both commonsensical and obvious, while also seeming a bit opaque. To alleviate the latter, here is a quick example: Our agency leader (who believes that “provider of insight” is the way to success) might have the following idea.

Let’s get some of our junior staff to work on developing industry reports that capture both analyst information, as well as “chatter” from social networks. They will create an overview document as a summer project, and monitor/update on an ongoing basis. Our senior account people will refer to these before client meetings, and also share insights gained from the direct client interaction.

Breakdown using Logic Model:

  • Inputs – Junior staff hours in creating foundational document and ongoing monitoring (hours); Senior account staff time in inputting client insights (hours)
  • Outputs – The actual document, once it is created, and the fact that it is kept updated.
  • Outcomes – Senior account staff go to meetings with broad industry knowledge that they use to: (1) demonstrate knowledge to clients; (2) share value-adding insights; (3) initiate strategic conversations, etc.
  • Impact – Clients will use us more

Note: The understood “we hope” as a qualifier gets louder with each step of the model.
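For readers who think in code, the four-component chain can be captured as a simple data structure. This is an illustrative sketch only: the `LogicModel` class and its field names are my own invention, populated with the hypothetical agency example above.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LogicModel:
    """A minimal sketch of a logic model's four linked components."""
    inputs: List[str]    # resources we control and deploy (money, time, energy)
    outputs: List[str]   # what those resources create or produce
    outcomes: List[str]  # the specific ways the outputs help us
    impact: str          # the higher-order goal we set out to address

    def describe(self) -> str:
        """Walk the chain from inputs to impact, one line per component.
        (Each step down the chain carries an implicit 'we hope'.)"""
        sections = [
            ("Inputs", self.inputs),
            ("Outputs", self.outputs),
            ("Outcomes", self.outcomes),
            ("Impact", [self.impact]),
        ]
        return "\n".join(f"{label}: " + "; ".join(items)
                         for label, items in sections)


# The agency example from the text, expressed in the structure above.
agency_model = LogicModel(
    inputs=[
        "Junior staff hours (foundational document, ongoing monitoring)",
        "Senior account staff hours (inputting client insights)",
    ],
    outputs=["Industry overview document, created and kept up to date"],
    outcomes=[
        "Account staff demonstrate knowledge, share insights, and "
        "initiate strategic conversations in client meetings",
    ],
    impact="Clients will use us more",
)

print(agency_model.describe())
```

Writing the model down this way makes the hand-offs explicit: if an “outcome” cannot be traced back to a listed output, the gap in the logic is visible immediately.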

USING THE TOOL

Really thinking through these connections demands a good degree of effort and will: what do we want to “impact”? And how will we actually go about getting there? To illustrate the difficulty, recall the success of the ALS Ice Bucket Challenge. (Remember, this space is the sweet spot of the logic model.) This was a huge success in gaining awareness (Mel B. did the challenge on America’s Got Talent!), but you may still ask: “So what? Are those afflicted by ALS better off? If so, how?” You can imagine that asking such questions without being labelled a “doubter,” “hater,” “loser,” etc., would be no mean achievement. This is an inherent challenge of such models. People don’t like to have the gaps in their logic exposed.

To use this tool effectively, leadership has to be comfortable explaining their logic (e.g. “provider of insight” beats “meeter of needs”) and the followership has to be comfortable trying it out (even if they don’t believe it in the first place).

Building the connections between the elements is an important exercise. You end up asking really good questions, for example:

Input to output: What are we getting for all these hours that we have put into research?

Output to outcome: Is our new report, tool, capacity, etc. actually contributing to something that we are using, noticing, applying, etc.?

Outcome to impact: Is our idea of the “means to the end” actually playing out? What do we really want here? What are we trying to achieve anyway?

This is the kind of thinking that goes into our “performance playbook” process to help ensure that the measures you are choosing hang together with the logic under which you are operating.