“All models are wrong, some are useful.”

Image of George E. P. Box

The statistician George Box (1976) coined the phrase “All models are wrong, but some are useful.” I have used this quote many times: in workshops, in lessons for clients on modeling and simulation, and at cocktail hour. Okay, not at cocktail hour!!

If you are one of the seven avid followers of this blog, you have seen the continual efforts to make the Performance Improvement Process Model (PIPM), now in its tenth iteration, useful. Every time I post an update or use it in an article, I get more feedback that moves me and the model closer to that goal.

Recently I received a comment on LinkedIn about the PIPM from my friend and colleague Dr. Jim Hill, the CEO of Organizational Performance Systems. Jim reminded me that part of the value consultants bring to their clients is the adaptation of “our” models (there are soooo many) to fit their processes and language. So very true!

Another good friend and colleague of mine, Lieutenant Commander Janice Kirk, is doing exactly what Jim described: using the PIPM and a host of other models to inform the creation of a process model for the Canadian Armed Forces (CAF) Materiel Division (the folks who buy the big equipment: tanks, planes, and submarines). Its purpose is to clearly explain to project stakeholders the differences between a Needs Assessment/Analysis and a Training Needs Analysis. With her permission, I am sharing it below.

Janice’s NA to TNA process

I know… hard to read in this form! If you right-click the image and select “Open in a new tab,” you can see a larger version that’s much easier to read. (You’re welcome!) If you are a “keener” and compare this model to the PIPM, you will see that the major steps are the same… it’s the bits in between that:

1. Drill down more,
2. Address the ADM Mat specific requirements, and
3. Link to the military guidance: Canadian Forces Individual Training and Education System (CFITES) policies and processes.

It’s a great example to underscore Dr. Hill’s advice that we need to work within and adjust to the client’s processes and, just as importantly, speak their language. If you did open Janice’s diagram in a new tab, you will have seen it is rife with CAF TLAs (Three-Letter Acronyms) and more! Just for fun, I have included them below under TLAs in case you are REALLY interested 🙂

Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799.


ICT: Initial Cadre Training – training which enables CAF members to perform the tasks associated with new equipment, systems or directives upon their fielding, delivery or initiation. The responsibility for this training rests with the Project Management Office/Contractor for new equipment and systems, unless specified otherwise in the statement of requirements.

JBS: Job Based Specification – the document that describes the tasks, medical requirements, etc., for specific jobs.

MES: Military Employment Structure – The arrangement of CF [Canadian Forces] jobs into structural elements, consisting of military career fields, occupations and sub-occupations that collectively provide the necessary management framework for the personnel life cycle of activities across all components of the CF and throughout the spectrum of conflict.

MOS: Military Occupation Structure – The arrangement of Canadian Armed Forces (CAF) jobs into structural elements, consisting of military career fields, occupations and sub-occupations that collectively provide the necessary management framework for the personnel life cycle of activities across all components of the CAF and throughout the spectrum of conflict. It is a structural arrangement of the work performed by members of the CAF.

MOS ID: Military Occupation Specification Identification Code – Equivalent to a National Occupation Code.

QS: Qualification Standard – Describes how well the job should be done using Performance Objectives (POs).

QSP: Qualification Standard and Plan – a combined QS and TP. Far more efficient.

RQ: Rank Qualification – A qualification, obtained via formal training, that enables a member to perform one or more entry-level occupational jobs, where required to attain a new substantive rank. (RQs replace former occupation qualification levels, such as QL3, QL5, etc.)

SST: Steady State Training – training identified in the QS/RQ for normal career progression.

TP: Training Plan – describes the instructional programme which will enable the learner to achieve, at optimum cost, the performance objectives from the QS.

Improving the Performance Improvement Process Model

Each time I present the Performance Improvement Process Model (PIPM), and as it becomes more widely used, I receive feedback from colleagues on how to improve it. Some I accept and apply; others I file away for future reference ’cause you just never know when that suggestion might fit! Looking back, I have written about the model a lot, starting with “Needs Assessment or Needs Analysis” (still my most popular blog post to date), followed by “Just because it says performance doesn’t mean it’s there (sadly),” then “Opportunities vs. Good Ideas,” and finally “Putting the NEED in Needs Assessment” waaayy back in February.

Since then, there has been lots of feedback from a number of presentations, practitioners, and readers, so it is time to officially share version ten of the model!

The Performance Improvement Process Model (V10)

What’s different? I’m glad you asked! First, I always appreciated that Van Tiem, Moseley and Dessinger’s (2012) HPT Model was “wrapped” by Change Management (CM) to indicate that it is something that has to be considered throughout; this was an important improvement on their earlier versions. Enter Change #1: the PIPM needed that too! I also believe that Project Management (PM) is equally important, as all our work is project based. Like CM and PM, formative evaluation is an activity that occurs throughout. Van Tiem et al. show this with a box at the bottom of the model that spans all other phases. I added it to the three foundational practices and show summative evaluation as the “Measure Performance” step in the process.

Big Change #2: This all occurs after the intervention analysis step. I always felt this portion of the model was thin. For major interventions, like rolling out a new enterprise customer relationship management (CRM) platform, we need to do a Cost Benefit Analysis (CBA) to verify the return on investment (ROI) and then put it into a business case for sponsor/client approval. Fail to do that before jumping into the design and development phases and you might be doing some serious backpedaling that could have been avoided!
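For readers who haven’t built a CBA before, the underlying arithmetic is simple. Here is a minimal sketch; the figures and the three-year horizon are invented placeholders for a hypothetical CRM rollout, not numbers from any real project:

```python
# A minimal cost-benefit sketch for a hypothetical intervention.
# All dollar figures below are illustrative assumptions only.

def roi(total_benefits: float, total_costs: float) -> float:
    """Classic ROI: net benefit expressed as a percentage of cost."""
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical three-year figures for the business case
costs = 250_000 + 3 * 40_000   # implementation + three years of support
benefits = 3 * 180_000         # estimated annual productivity gains

print(f"ROI: {roi(benefits, costs):.1f}%")  # → ROI: 45.9%
```

A positive ROI like this is what goes into the business case for sponsor approval; a negative one is your cue to rethink the intervention before design and development begin.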

Big Change #3: A little more clarity on the split between training and non-training interventions. BOTH require objectives and metrics if we want to be able to measure performance during and after implementation, so defining user requirements was added to the non-training intervention side. Defining user requirements always reminds me of this well-known cartoon:

Source: https://i.pinimg.com/originals/77/2b/4f/772b4f5055ca898a809f8c64903360f4.jpg

Another piece of the puzzle that is crucial to getting right. The picture explains it best. Right? Right!? For anyone who hasn’t experienced one of the pictures above, please stand up. I see we are all still sitting.

Change #4: Not a huge change, but I added the infinity symbol between “Measure Performance” and “Results Improved (or not).” It’s obvious to some that we need to keep measuring, as this creates the systemic feedback loop to monitor performance; not so obvious to others. So it’s a handy little reminder.

Last Change: In earlier versions I had “Mega, Macro, Micro,” as originated by Kaufman (1996), beside the Needs Assessment step and as layers behind “Gaps.” If you aren’t familiar with “Mega Planning,” you can get a brief overview here: https://en.wikipedia.org/wiki/Roger_Kaufman#Mega_Planning. I changed to worker, work, workplace, and world, based on Addison, Haig, and Kearney’s (2009) work, as I felt it was easier for people outside of the field to understand. Don’t get me wrong, I am a Kaufman fan and a believer in Mega… but if you have never heard of it, those terms won’t make much sense.

Okay, let’s wrap this up. I started writing about each of the major steps in the “Opportunities vs. Good Ideas” and “Putting the NEED in Needs Assessment” articles earlier this year before getting sidetracked by that four-letter word “work.” My near-term schedule is a little lighter, so I am hoping to get back to the deeper explanations of the remaining steps.

If you like it, please pass it along. If you see something missing, have a question or want to talk about fishing, drop me a line. The model continues to improve because of all the fantastic feedback. Thanks!


Addison, R., Haig, C., & Kearney, L. (2009). Performance architecture: The art and science of improving organizations. San Francisco, CA: Pfeiffer.

Kaufman, R. (1996). Strategic thinking: A guide to identifying and solving problems. Arlington, VA, & Washington, DC: American Society for Training & Development and International Society for Performance Improvement.

Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2012). Fundamentals of performance improvement: Optimizing results through people, processes, and organizations. San Francisco, CA: Pfeiffer.