When the prompt “Company name?” comes my way, “Measure of Success” often gets a reaction. Just this week, my bank paid me the compliment you find in the title of this post.
NOTE: An SEO expert would not share this evaluation: a search for “measure of success” does not easily serve up Measure of Success Inc.
The original naming contained a double entendre, referencing both “measures/metrics” (e.g. “So, tell me, what do you use to measure your success?”) and “small amount” (e.g. “Well, our last foray into that market did generate a measure/modicum of success.”). Clever, no?
The practical focus of this venture is the importance and complexity of measuring performance. Financial measures are great to show past results, and forecasts are great for setting expectations. For current activity, attempts to “measure” cause more problems than they solve.
No one questions the importance of setting goals in order to mobilize a group of people or an organization. These are important gauges for individuals to derive a sense of accomplishment and, perhaps more importantly, set the expectation by which we are evaluated and judged. Choosing the measure shapes activity and can quickly change the focus. Recall the oft-relayed story of the spouse looking for keys in the kitchen. It is later revealed that the keys were indeed last seen in the bedroom, but “the lighting is better here.” (There is also a version involving a drunk, a lamppost and the other side of the street.)
The clarity of metrics and measures gives rise to an emboldened sense of purpose for both current activity and forecasting. I have written about the utility of logic models in isolating what exactly you are measuring. Again, the rigour of this model (and the fervour with which some apply it) suggests that it is all-encompassing, when, in the very large picture, the greater impact of any activity will necessarily be ambiguous.
Here is an example that is near to my métier: Evaluation of training and education.
The Kirkpatrick model seems to mirror the logic model in moving from more superficial to more complex:
- Level 1 – Reaction (Did you… like it? … find it relevant? … etc.)
- Level 2 – Learning (Did you… remember it? … get it?)
- Level 3 – Behaviour (Are you using it?)
- Level 4 – Results (Is it making a difference? What is the ROI?)
Sounds pretty straightforward, yes? Well, imagine you are conducting training on some area of communication, like being assertive, professional and respectful in dealing with workplace conflict.
- Level 1 – What is the reaction if a participant learns that she/he is not very good at listening? How is “reaction” affected by a hot lunch?
- Level 2 – Can you test this?
- Level 3 – What do we look for? What if, during an observation, we get the sense that people are “going through the motions”?
- Level 4 – Where do you even start? Complaints to managers (conceivably from people who encounter approaches to conflict that are not professional, respectful, etc.)? Retention numbers (because, if we manage conflict better, people will be happier and stay)? Instances of obvious conflict (because people are not shying away from these important conversations)?
The tremendous temptation is to (1) oversimplify or (2) not really even try to measure impact. Temptation #1 would be invoked if the client (internal or external) demands measurement. The real danger here is that the oversimplified metric sways activity. Imagine if we were tracking “visible searching hours” as a Level 3 indicator after we had trained people in “looking for keys.” The camera is in the kitchen. (Recall: the keys are in the bedroom.)
I would love to think that my services can help you avoid Temptation #2.
So, is there irony in a company called “Measure of Success” suggesting that it is impossible to clearly measure your success? Maybe it’s not such a good name…