A Process Model to Improve Performance

Last month I shared V10 of the Performance Improvement Process Model, or PIPM. While I had started writing earlier about the differences between problems and opportunities in “Opportunities vs. Good Ideas” and needs vs. wants in “Putting the NEED in Needs Assessment,” V10 includes some significant changes! Rather than write a long post, it seemed like a better idea to share the recording of a webinar I did last week that covers the changes and includes some stories relating to the main steps. Hope you enjoy it! If you would like copies of the handouts/articles mentioned in the webinar, drop me a line and I will send them to you!

Improving the Performance Improvement Process Model

Each time I present the Performance Improvement Process Model (PIPM), and as it becomes more widely used, I receive feedback from colleagues on how to improve it. Some I accept and apply; others I file away for future reference ’cause you just never know when that suggestion might fit! Looking back, I have written about the model a lot, starting with “Needs Assessment or Needs Analysis” (still my most popular blog post to date), followed by “Just because it says performance doesn’t mean it’s there (sadly),” then “Opportunities vs. Good Ideas,” and finally “Putting the NEED in Needs Assessment” waaayy back in February.

Since then there has been a lot of feedback from a number of presentations, practitioners, and readers, so it is time to officially share version ten of the model!

The Performance Improvement Process Model (V10)

What’s different? I’m glad you asked! First, I always appreciated that Van Tiem, Moseley and Dessinger’s (2012) HPT Model was “wrapped” by Change Management (CM) to indicate that it is something that has to be considered throughout. This was an important improvement on their earlier versions. Enter Change #1: the PIPM needed that too! I also believe that Project Management (PM) is equally important, as all our work is project-based. Like CM and PM, formative evaluation is an activity that occurs throughout. Van Tiem et al. show this by having a box at the bottom of the model that spans all other phases. I added it to the three foundational practices and show summative evaluation as the “Measure Performance” step in the process.

Big Change #2 comes after the intervention analysis step, a portion of the model I always felt was thin. For major interventions, like rolling out a new enterprise customer relationship management (CRM) platform, we need to do a Cost Benefit Analysis (CBA) to verify the ROI and then put it into a business case for sponsor/client approval. Fail to do that before jumping into the design and development phases and you might be doing some serious backpedaling that could have been avoided!
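To make the CBA step concrete, here is a minimal sketch of the arithmetic for a hypothetical CRM rollout. The figures, cost categories, and variable names are mine, purely for illustration; they are not part of the model.

```python
# Hypothetical cost-benefit sketch for a CRM rollout (all figures are made up).
costs = {
    "licences": 250_000,        # annual subscription
    "implementation": 150_000,  # configuration and data migration
    "training": 50_000,         # workshops and job aids
}
benefits = {
    "time_saved": 300_000,      # estimated value of hours recovered per year
    "reduced_churn": 200_000,   # estimated retained revenue per year
}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())
roi = (total_benefit - total_cost) / total_cost

print(f"Total cost: ${total_cost:,}")
print(f"Total benefit: ${total_benefit:,}")
print(f"ROI: {roi:.0%}")  # a positive ROI supports the business case
```

Even a rough calculation like this gives the sponsor something tangible to approve (or challenge) before design and development money is spent.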

Big Change #3: A little more clarity on the split between training and non-training interventions. BOTH require objectives and metrics if we want to be able to measure performance during and after implementation, so defining user requirements was added to the non-training intervention side. Defining user requirements always reminds me of this well-known cartoon:

Source: https://i.pinimg.com/originals/77/2b/4f/772b4f5055ca898a809f8c64903360f4.jpg

Defining requirements is another piece of the puzzle that is crucial to get right. The picture explains it best. Right? Right!? For anyone who hasn’t experienced one of the situations pictured above, please stand up. I see we are all still sitting.

Change #4: Not a huge change, but I added the infinity symbol between “Measure Performance” and the results (improved or not). It is obvious to some that we need to keep measuring, as this creates the systemic feedback loop to monitor performance; not so obvious to others. So it’s a handy little reminder.

Last Change: In earlier versions I had “Mega, Macro, Micro,” as originated by Kaufman (1996), beside the Needs Assessment step and as layers behind “Gaps.” If you aren’t familiar with “Mega Planning” you can get a brief overview here: https://en.wikipedia.org/wiki/Roger_Kaufman#Mega_Planning. I changed it to worker, work, workplace, and world, based on Addison, Haig, and Kearney’s (2009) work, as I felt it was easier for people outside of the field to understand. Don’t get me wrong, I am a Kaufman fan and a believer in Mega… but if you have never heard of it, those terms won’t make much sense.

Okay, let’s wrap this up. I started writing about each of the major steps in the “Opportunities vs. Good Ideas” and “Putting the NEED in Needs Assessment” articles earlier this year before getting sidetracked by that 4-letter word “work.” My near-term schedule is a little lighter, so I am hoping to get back to the deeper explanations of the remaining steps.

If you like it, please pass it along. If you see something missing, have a question or want to talk about fishing, drop me a line. The model continues to improve because of all the fantastic feedback. Thanks!

References

Addison, R., Haig, C., & Kearney, L. (2009). Performance architecture: The art and science of improving organizations. San Francisco, CA: Pfeiffer.

Kaufman, R. (1996). Strategic thinking: A guide to identifying and solving problems. Arlington, VA, and Washington, DC: American Society for Training & Development and the International Society for Performance Improvement.

Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2012). Fundamentals of performance improvement: Optimizing results through people, processes and organizations. San Francisco, CA: Pfeiffer.

Performance-Focused Smile Sheets: Applied

How many training sessions have you gone to where you received a one-sheet evaluation form that asked you to rate your instructor, the course, the room, the chair, and the snacks provided at ten a.m.? Chances are you have filled out a few of these over time. In my own experience I have probably filled out hundreds, and after a while there is a tendency to just tick off “strongly agree” on everything, especially if it’s getting close to supper time!

Once in a while, a new method comes along that radically changes the way we do things. Fire, the wheel, smartphones… you get the idea. Have you heard about Dr. Will Thalheimer’s book Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form? Will is one of my top go-to guys in evidence-based performance improvement and for myth-busting methods being used in the field that aren’t so evidence-based.

Will explains why the current design of end-of-training evaluations is actually counterproductive, and sums it up nicely with this list of nine points:

  1. They are not correlated with learning results.
  2. They don’t tell us whether our learning interventions are good or bad.
  3. They misinform us about what improvements should be made.
  4. They don’t enable meaningful feedback loops.
  5. They don’t support smile-sheet decision making.
  6. They don’t help stakeholders understand smile-sheet results.
  7. They provide misleading information.
  8. They hurt our organizations by not enabling cycles of continuous improvement.
  9. They create a culture of dishonest deliberation (Thalheimer, 2016, Kindle Locations 137-143).

That’s just in the book’s introduction! Will uses the rest of the book to show us all a better way for “creating smile sheets that will actually help us gather meaningful data— data that we can use to make our training more effective and more efficient” (Thalheimer, 2016, Kindle Locations 2646-2647) by targeting training effectiveness and actionable results.

Actionable results are where I am going to focus the rest of my discussion, or this would be a really long post! I recently conducted a session of the International Society for Performance Improvement (ISPI) and Dr. Roger Chevalier’s workshop “Improving Workplace Performance” for a group of 20 school board managers. The post-workshop survey of 19 questions was delivered electronically using SurveyMonkey. Within a week I received 16 responses, for a completion rate of 80%.

Here are the results from question #1, which Will calls “the world’s best smile-sheet question.” By asking this question we are getting a measure of the trainees’ potential for improvement back on the job.

[Chart: Question 1 results]

37.5% responded “I have GENERAL AWARENESS of the concepts taught, but I will need more training/practice/guidance/experience TO DO ACTUAL JOB TASKS using the concepts taught.”

62.5% answered “I am ABLE TO WORK ON ACTUAL JOB TASKS, but I’LL NEED MORE HANDS-ON EXPERIENCE to be fully competent in using the concepts taught.”

Dr. Thalheimer provides a rubric or set of standards in the book to measure the responses for each question. The standards for question 1 are shown in Table 1 below.

Table 1. Question 1 standards (Thalheimer, 2016).

Not bad, but not what I can accept either. Roughly two-thirds of the respondents felt they would be able to employ the methods taught in the workplace with more practice. Roughly one-third have a general awareness but won’t yet be able to apply what they learned. This is a one-day workshop that covers a lot of ground, and arguably the goal is to raise awareness of performance improvement. There is also a follow-on “at-work” component that the learners can do to further increase their skills and earn a certification if they choose. My goal is to have all the learners choosing C or D. Clearly, I have some work to do in the design and delivery departments for this offering.
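For anyone who wants to see how I tallied those figures, here is a minimal sketch of the arithmetic. The counts are reconstructed from the reported percentages, and the option letters and the wording of the “acceptable” answers are my assumptions for illustration; the actual standards are in Will’s book.

```python
# Tally question 1 responses and compare them to a target standard.
# Counts are reconstructed from the reported percentages (16 respondents);
# option letters and the "acceptable" set are illustrative assumptions.
responses = {
    "B: general awareness, need more training to do actual job tasks": 6,
    "C: able to work on actual job tasks, need more hands-on experience": 10,
    "D: able to perform job tasks at a fully competent level": 0,
}
acceptable = {option for option in responses if option.startswith(("C", "D"))}

total = sum(responses.values())
for option, count in responses.items():
    print(f"{option}: {count}/{total} = {count / total:.1%}")

acceptable_share = sum(responses[o] for o in acceptable) / total
print(f"At or above the acceptable standard: {acceptable_share:.1%}")
# Goal for the next offering: 100% of learners choosing C or D.
```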

The result above made me question whether the learners had enough hands-on practice with the case study and the exercises. That takes us to question 9, shown below.

[Chart: Question 9 results]

The averaged response was 54%. Will believes (and I agree) that the absolute minimum for time devoted to practice is 35%. Given the number of practical exercises, I think this number needs to be higher, in the 65% range, so that gives me some quantifiable results and a clear target for design changes before the next session. More practice, less lecture. Check!
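As a quick sanity check on that target, here is a hypothetical sketch of the comparison; the variable names and the 65% design goal are mine, not Will’s.

```python
# Compare reported practice time against the suggested floor and my revised target.
reported_practice = 0.54  # averaged response to question 9
minimum_practice = 0.35   # absolute floor for practice time suggested in the book
target_practice = 0.65    # my revised design goal for this workshop

print(f"Above the minimum? {reported_practice >= minimum_practice}")
shortfall = target_practice - reported_practice
print(f"Shortfall vs. target: {shortfall:.0%} of workshop time to shift from lecture to practice")
```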

One final example. Have you heard of spaced learning theory? Casebourne (2015) provides a good overview of the body of research that suggests that by spacing learning over time, people learn more quickly and remember better. Will has designed questions such as #11 below to measure spacing.

[Chart: Question 11 results]

The results were interesting because one respondent apparently went back to the training facility the following day! Overall, it seems that the spacing designed into the workshop was effective, and 69% of the respondents recognized that topics were covered more than once. As noted above, the learners do have the opportunity to apply what they learned back on the job and submit it to ISPI to earn a certification, which is another spacing strategy, but one I have little control over.

In order to get a measure of actual performance improvement for question #1, and to accurately measure the spacing effect, I will need to conduct the survey again after the learners have had sufficient time to apply the skills they were taught on the job. That’s still to come.

If you are still using level one evaluations or smile sheets that ask if the learning was fun, if the learner liked the instructor, and if the facilities were comfortable, it’s time to rethink your approach. If you attend a training session and still receive those old-style smile sheets, you might also ask yourself how effective the training design really was. I hope this example has shown you enough evidence to convince you there is a better way. If it has, please share it with your friends and colleagues. Heck, share it with your enemies; they might become your friends!

References

Casebourne, I. (2015). Spaced learning: An approach to minimize the forgetting curve. Retrieved December 6, 2017, from https://www.td.org/insights/spaced-learning-an-approach-to-minimize-the-forgetting-curve

Thalheimer, W. (2016). Performance-focused smile sheets: A radical rethinking of a dangerous art form. Work-Learning Press. Kindle Edition.