Each time I present the Performance Improvement Process Model (PIPM), and as it becomes more widely used, I receive feedback from colleagues on how to improve it. Some suggestions I accept and apply; others I file away for future reference, ’cause you just never know when one might fit! Looking back, I have written about the model a lot, starting with “Needs Assessment or Needs Analysis” (still my most popular blog post to date), followed by “Just because it says performance doesn’t mean it’s there (sadly),” then “Opportunities vs. Good Ideas,” and finally “Putting the NEED in Needs Assessment” waaayy back in February.
Since then there has been a lot of feedback from presentations, practitioners, and readers, so it is time to officially share version ten of the model!
What’s different? I’m glad you asked! First, I always appreciated that Van Tiem, Moseley and Dessinger’s (2012) HPT Model was “wrapped” by Change Management (CM) to indicate that it has to be considered throughout. This was an important improvement on their earlier versions. Enter Change #1: the PIPM needed that too! I also believe that Project Management (PM) is equally important, as all our work is project based. Like CM and PM, formative evaluation is an activity that occurs throughout; Van Tiem et al. show this with a box at the bottom of their model that spans all other phases. I added it to the three foundational practices and show summative evaluation as the “Measure Performance” step in the process.
Big Change #2: This all occurs after the intervention analysis step. I always felt this portion of the model was thin. For major interventions, like rolling out a new enterprise customer relationship management platform, we need to do a Cost Benefit Analysis (CBA) to verify the ROI and then put it into a business case for sponsor/client approval. Fail to do that before jumping into the design and development phases and you might end up doing some serious backpedaling that could have been avoided!
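For readers who haven’t built a CBA before, the core arithmetic is simple. Here is a minimal sketch in Python; the CRM figures are made-up illustration numbers, not from any real business case:

```python
# A minimal sketch of the Cost Benefit Analysis (CBA) math behind an
# intervention business case. All figures below are hypothetical.

def roi_percent(total_benefits: float, total_costs: float) -> float:
    """Return ROI as a percentage: net benefit relative to cost."""
    return (total_benefits - total_costs) / total_costs * 100

# Assumed numbers for an enterprise CRM rollout (illustration only):
costs = 250_000      # e.g., licenses, configuration, training
benefits = 400_000   # e.g., projected productivity and retention gains

print(f"ROI: {roi_percent(benefits, costs):.1f}%")  # prints "ROI: 60.0%"
```

A positive ROI alone doesn’t approve the project, of course; it’s one input to the business case the sponsor or client signs off on before design and development begin.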
Big Change #3: A little more clarity on the split between training and non-training interventions. BOTH require objectives and metrics if we want to be able to measure performance during and after implementation, so defining user requirements was added to the non-training intervention side. Defining user requirements always reminds me of this well-known cartoon:
Another piece of the puzzle that is crucial to get right. The picture explains it best. Right? Right!? Anyone who hasn’t experienced one of the scenarios above, please stand up. I see we are all still sitting.
Change #4: Not a huge change, but I added the infinity symbol between “Measure Performance” and “Results Improved (or not).” It’s obvious to some that we need to keep measuring, since this creates the systemic feedback loop for monitoring performance; it’s not so obvious to others. So it’s a handy little reminder.
Last Change: In earlier versions I had “Mega, Macro, Micro,” as originated by Kaufman (1996), beside the Needs Assessment step and as layers behind “Gaps.” If you aren’t familiar with “Mega Planning,” you can get a brief overview here: https://en.wikipedia.org/wiki/Roger_Kaufman#Mega_Planning. I changed these to worker, work, workplace, and world, based on Addison, Haig, and Kearney’s (2009) work, as I felt those terms were easier for people outside the field to understand. Don’t get me wrong, I am a Kaufman fan and a believer in Mega… but if you have never heard of it, those terms won’t make much sense.
Okay, let’s wrap this up. I started writing about each of the major steps in the “Opportunities vs. Good Ideas” and “Putting the NEED in Needs Assessment” articles earlier this year before getting sidetracked by that 4-letter word “work.” My near-term schedule is a little lighter, so I am hoping to get back to deeper explanations of the remaining steps.
If you like it, please pass it along. If you see something missing, have a question or want to talk about fishing, drop me a line. The model continues to improve because of all the fantastic feedback. Thanks!
Addison, R., Haig, C., & Kearney, L. (2009). Performance architecture: The art and science of improving organizations. San Francisco, CA: Pfeiffer.
Kaufman, R. (1996). Strategic thinking: A guide to identifying and solving problems. Arlington, VA, & Washington, DC: Jointly published by the American Society for Training & Development and the International Society for Performance Improvement.
Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2012). Fundamentals of performance improvement: Optimizing results through people, processes, and organizations. San Francisco, CA: Pfeiffer.