Performance-Focused Smile Sheets: Applied

How many training sessions have you gone to where you received a one-sheet evaluation form that asked you to rate your instructor, the course, the room, the chair and the snacks provided at ten a.m.? Chances are you have filled out a few of these over time. In my own experience, I have probably filled out hundreds, and after a while there is a tendency to just tick off "strongly agree" on everything, especially if it's getting close to supper time!

Once in a while, a new method comes along that radically changes the way we do things. Fire, the wheel, smartphones… you get the idea. Have you heard about Dr. Will Thalheimer's book Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form? Will is one of my top go-to guys in evidence-based performance improvement and for busting myths about methods being used in the field that aren't so evidence-based.

Will explains why the current design of end-of-training evaluations is actually counterproductive, and sums it up nicely with this list of nine points:

  1. They are not correlated with learning results.
  2. They don’t tell us whether our learning interventions are good or bad.
  3. They misinform us about what improvements should be made.
  4. They don’t enable meaningful feedback loops.
  5. They don’t support smile-sheet decision making.
  6. They don’t help stakeholders understand smile-sheet results.
  7. They provide misleading information.
  8. They hurt our organizations by not enabling cycles of continuous improvement.
  9. They create a culture of dishonest deliberation (Thalheimer, 2016, Kindle Locations 137-143).

That’s just in the book’s introduction! Will uses the rest of the book to show us all a better way for “creating smile sheets that will actually help us gather meaningful data— data that we can use to make our training more effective and more efficient” (Thalheimer, 2016, Kindle Locations 2646-2647) by targeting training effectiveness and actionable results.

Actionable results are where I am going to focus the rest of my discussion, or this would be a really long post! I recently conducted a session of the International Society for Performance Improvement (ISPI) and Dr. Roger Chevalier's workshop "Improving Workplace Performance" for a group of 20 school board managers. The post-workshop survey of 19 questions was delivered electronically using SurveyMonkey. Within a week I had received 16 responses, for a completion rate of 80%.

Here are the results from question #1, which Will calls "the world's best smile-sheet question." By asking this question, we get a measure of the trainees' potential for improvement back on the job.


37.5% responded “I have GENERAL AWARENESS of the concepts taught, but I will need more training/practice/guidance/experience TO DO ACTUAL JOB TASKS using the concepts taught.”

62.5% answered “I am ABLE TO WORK ON ACTUAL JOB TASKS, but I’LL NEED MORE HANDS-ON EXPERIENCE to be fully competent in using the concepts taught.”

Dr. Thalheimer provides a rubric or set of standards in the book to measure the responses for each question. The standards for question 1 are shown in Table 1 below.

Table 1. Q1 Standards (Thalheimer, 2016)

Not bad, but not something I can accept either. Roughly two-thirds of the respondents (62.5%) felt they would be able to employ the methods taught in the workplace with more practice. The other third (37.5%) have a general awareness but won't yet be able to apply what they learned. This is a one-day workshop that covers a lot of ground, and arguably the goal is to raise awareness of performance improvement. There is also a follow-on "at-work" component that the learners can do to further increase their skills and earn a certification if they choose. My goal is to have all the learners choosing C or D. Clearly, I have some work to do in the design and delivery departments for this offering.
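
For anyone who wants to crunch this kind of data outside of SurveyMonkey, here is a minimal sketch of the tabulation in Python. The option letters and the raw responses are illustrative assumptions on my part (the real options and standards are in Will's book); only the "C or D" goal and the 16 responses come from the discussion above.

```python
from collections import Counter

# Illustrative option letters for the question; the actual wording and
# standards come from Thalheimer's book and are not reproduced here.
OPTIONS = ["A", "B", "C", "D", "E"]
ACCEPTABLE = {"C", "D"}  # my goal: every learner choosing C or D

# Hypothetical raw export from the survey tool, chosen only to mirror the
# 37.5% / 62.5% split reported above (which letters those map to is assumed).
responses = ["B"] * 6 + ["C"] * 10

counts = Counter(responses)
total = len(responses)

for option in OPTIONS:
    print(f"{option}: {100 * counts[option] / total:.1f}%")

acceptable = 100 * sum(counts[o] for o in ACCEPTABLE) / total
print(f"Acceptable (C or D): {acceptable:.1f}%")
```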

The result above made me question whether the learners had enough hands-on practice with the case study and the exercises. That takes us to question 9, shown below.




The average response was 54%. Will believes (and I agree) that the absolute minimum for time devoted to practice is 35%. Given the number of practical exercises, I think this number needs to be higher, in the 65% range, so that gives me a quantifiable result to work with as I make changes to the design before the next session. More practice, less lecture. Check!
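
A few lines of the same sort of script can also flag a result like this against the thresholds. The individual estimates below are invented; only their 54% average, Will's 35% floor and my 65% target come from the discussion above.

```python
# Hypothetical per-respondent estimates of the % of class time spent on practice
practice_estimates = [50, 60, 55, 45, 50, 60, 55, 50,
                      60, 55, 50, 55, 60, 50, 55, 54]

average = sum(practice_estimates) / len(practice_estimates)
MINIMUM = 35  # Will's absolute minimum for time devoted to practice
TARGET = 65   # my own target for this workshop

print(f"Average practice time: {average:.0f}%")
print("Above the 35% floor" if average >= MINIMUM else "Below the 35% floor")
print("Meets the 65% target" if average >= TARGET
      else "Redesign needed: more practice, less lecture")
```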

One final example. Have you heard of spaced learning theory? Casebourne (2015) provides a good overview of the body of research that suggests that by spacing learning over time, people learn more quickly and remember better. Will has designed questions such as #11 below to measure spacing.
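
If it helps to picture why spacing works, the research is often summarized with an idealized exponential forgetting curve. The form below is a common textbook simplification, not a formula taken from Casebourne or Thalheimer:

```latex
% Idealized forgetting curve: retention R at time t after study,
% where S is the "stability" of the memory.
\[
  R(t) = e^{-t/S}
\]
% Spacing reviews over time is typically modeled as increasing S with each
% revisit, so retention decays more slowly after every repetition.
```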



The results were interesting because one respondent apparently went back to the training facility the following day! Overall, it seems that the spacing designed into the workshop was effective, and 69% of the respondents recognized that topics were covered more than once. As noted above, the learners also have the opportunity to apply what they learned back on the job and submit it to ISPI to earn a certification, which is another spacing strategy, but one I have little control over.

In order to get a measure of actual performance improvement for question #1, and to accurately measure the spacing effect, I will need to conduct the survey again after the learners have had sufficient time to apply the skills they were taught on the job. That’s still to come.

If you are still using Level 1 evaluations or smile sheets that ask whether the learning was fun, whether the learner liked the instructor, and whether the facilities were comfortable, it's time to rethink your approach. If you attend a training session and still receive those old-style smile sheets, you might also ask yourself how effective the training design really was. I hope this example has shown you enough evidence to convince you there is a better way. If it has, please share it with your friends and colleagues. Heck, share it with your enemies; they might become your friends!


Casebourne, I. (2015). Spaced learning: An approach to minimize the forgetting curve. Retrieved December 6, 2017.

Thalheimer, W. (2016). Performance-focused smile sheets: A radical rethinking of a dangerous art form. Work-Learning Press. Kindle Edition.


More on Metrics

On the weekends, my wife and I start the day by watching CBC's The National evening news show. The February 12th edition had a segment called The Next: Server Farms (5:03), which provides a glimpse into the power consumption requirements of the Internet.

One statement caught my attention. Around the 2:15 mark, the reporter said:

“…most people who run corporate data centers aren’t responsible for how much energy their IT systems use. They’re judged on reliability and speed.”

As noted in Mentors, Managers and Metrics, these are great metrics, but again they don't tell the whole story! Different metrics are needed to measure the work going on within the system itself! So, a learning moment for me… electrical consumption for the servers and cooling is an important measure of efficiency in this scenario.

In this story it appears that the biggies like Facebook are already learning these lessons, realizing that reduced energy consumption means big savings. It is the small to medium-sized companies that may have more to gain by reducing, or better managing, the use of their idle servers. Food for thought for all my IT friends and colleagues.

When we apply Performance Improvement methods to a problem in the workplace, we try our best to be systemic and systematic in our approach. To do that, we need to “see” the problem (or opportunity) from many different angles and levels inside and outside of the organization. This morning I learned another perspective… and I hadn’t even planned on writing anything!

One more thought popped up while I was in the shower (where I do my best thinking and some average singing): I wonder if an environmental impact assessment has been done for the barge server farm in the story, to determine the effects of warming the water around it to cool the servers. An outside-of-the-organization perspective.

Okay – NOW I am off to hockey. Happy Saturday everyone!

Mentors, Managers and Metrics

I recently learned that one of my mentors and a good friend, Dr. Roger Chevalier, is going to become the latest Honorary Life Member of the International Society for Performance Improvement, or ISPI. That has had me thinking about mentors, managers and metrics.

Roger and I at the 2012 ISPI Conference in Toronto

I met Roger through the Armed Forces Chapter of ISPI, where he took me under his wing, and I ended up following him into a leadership role in the Chapter. There is no better way of learning than by doing! Roger was a student of Ken Blanchard, Paul Hersey and Marshall Goldsmith, all leadership and management gurus in their own right, so I feel very fortunate that we crossed paths and have remained in touch over the years. So that is the mentor in this story. My warmest congratulations to a tireless promoter of our craft!

The vast majority of books that I have read regarding performance improvement are very "text-booky" (my term) and/or aimed at consultants in the field. Roger has long believed that ISPI needs to focus more attention on managers – the folks on the front lines who have to make performance happen. This is a view I share! Roger published a book called A Manager's Guide to Improving Workplace Performance in 2007 to help that management group understand how to apply performance improvement methods in their workplace. In 200 pages, he lays out a pretty straightforward prescription for helping work teams succeed. Now this is NOT an ad for Roger's book, but I DO strongly recommend it for anyone in a managerial position. Don't tell him, but I am hoping that his book sales will skyrocket and he will fly me out to Cali and take me for a ride in his '64 Corvette convertible!

So where do metrics fit in? I recently did a project for a government organization [who shall remain nameless but you know who you are]. The aim of the project was to examine the training system and make recommendations on how it could be improved.

To give you some context, performance improvement is pretty straightforward. It kinda goes like this:

  • There is a problem (or someone thinks there is a problem)
  • You do some analysis of the organization, the environment it exists within, etc., to help understand the context
  • You ask the boss, "If your problem was fixed, what would the world look like?" This is referred to as the "Desired Performance Statement." Some folks call it the "To-Be" state
  • Then you ask, "What is actually happening right now?" This is the "As-Is" state, or the "Current Performance Statement"
  • Comparing the As-Is to the To-Be is called the "Gap Analysis"
  • Then you look for the reasons why you are stuck in the As-Is when you really want to get to the To-Be. This is called "Cause Analysis"
  • Once you know the cause(s) [there is normally more than one], you can look at all the potential ways to reduce or remove those causes… the "interventions"
  • Then you select the intervention(s) that will give you the biggest bang for the buck, figure out how to best implement them, and do it! (There is a rough sketch of the gap and selection steps after this list.)
  • All throughout this process, you should be evaluating what you have done so far and considering change management requirements
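
To make the flow above concrete, here is a minimal sketch of the gap analysis and "bang for the buck" selection steps in Python. The measure, the candidate interventions and every number in it are invented for illustration; a real project would pull these from the organization's own data.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    as_is: float   # current performance
    to_be: float   # desired performance

    @property
    def gap(self) -> float:
        return self.to_be - self.as_is

@dataclass
class Intervention:
    name: str
    expected_benefit: float  # estimated portion of the gap it closes
    cost: float              # estimated cost to implement

    @property
    def bang_for_buck(self) -> float:
        return self.expected_benefit / self.cost

# Hypothetical example: percentage of graduates meeting the performance standard
measure = Measure("Graduates meeting the standard (%)", as_is=70, to_be=95)
print(f"Gap: {measure.gap} percentage points")

candidates = [
    Intervention("More hands-on practice in class", expected_benefit=10, cost=5_000),
    Intervention("Job aids for the workplace", expected_benefit=8, cost=2_000),
    Intervention("Refresher e-learning module", expected_benefit=6, cost=15_000),
]

# Tackle the interventions with the biggest bang for the buck first
for i in sorted(candidates, key=lambda c: c.bang_for_buck, reverse=True):
    print(f"{i.name}: {i.bang_for_buck:.4f} points per dollar")
```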

Click HERE to see ISPI’s Performance Improvement Model

Easy peasy, right? What if there aren't any metrics, or the wrong things are being measured? Roger's book has a great quote at the start of Chapter 6, "Defining the Performance Gap," that has always stuck with me (and has been repeated in different forms by many people).

“I can’t improve it if I can’t measure it”
~William Thomson, Lord Kelvin

So, back to that project I was doing. There are metrics, but they are all about the output of the training system: graduates. That's a good metric, but it doesn't tell the whole story! There is nothing in place to measure the work going on within the system itself! For example… how long does it take to define the job, write the performance standards, and design and develop the training? No idea. If they did the training this way or that way, what is the cost difference? What are the resource implications? There is some data, but not enough to see how the system is working. Now, in fairness, they are developing those metrics, and hopefully someday soon they will have that figured out.
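
If I were instrumenting that training system, even a simple per-course record of the internal steps would be a start. The phases, fields and numbers below are entirely hypothetical; the point is only that each step inside the system gets its own measure.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CourseRecord:
    course: str
    phase_weeks: Dict[str, float] = field(default_factory=dict)  # weeks per internal phase
    delivery_mode: str = "classroom"
    cost: float = 0.0
    graduates: int = 0

    def cycle_time(self) -> float:
        return sum(self.phase_weeks.values())

    def cost_per_graduate(self) -> float:
        return self.cost / self.graduates if self.graduates else float("nan")

# Hypothetical course, with made-up durations and costs
record = CourseRecord(
    course="Widget Maintenance",
    phase_weeks={"define the job": 4, "write performance standards": 6,
                 "design": 8, "develop": 12},
    cost=250_000,
    graduates=120,
)

print(f"Cycle time: {record.cycle_time()} weeks")
print(f"Cost per graduate: ${record.cost_per_graduate():,.2f}")
```

Comparing "this way or that way" of delivering the training would then just mean keeping one record per delivery mode and comparing the cycle times and costs side by side.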

Metrics, then, are tied to organizational goals and the expectations of your workforce. If you are missing any of these three factors, chances are your organization is underperforming.

That’s it! Stay tuned for next time… expectations of the workforce is in the batter’s box!