The View From the Top

Backward Design for The Kirkpatrick Model

For over seven decades, the Kirkpatrick Model has stood the test of time as an effective tool for measuring training outcomes. In the fast-paced, constantly evolving world of instructional design, it’s essential to employ the most effective methods available. Let’s explore how we can harness the power of the Kirkpatrick Model to maximize our training efforts. 


Kirkpatrick outlines four levels for evaluation: 

1. Reaction – how learners respond to the training
2. Learning – the knowledge, skills, and attitudes acquired
3. Behavior – how learners apply the training on the job
4. Results – the impact on business outcomes

This model is straightforward and effective, but can also give a false sense of security if we approach it the wrong way. 


For example, level one measures the immediate response from learners, but does this reaction tell us much about the effectiveness of the training intervention? Surveys are easy to create, implement, and track. An entertaining facilitator and some good food can prompt positive survey results, but is the problem solved?  


What about level two? Tests and quizzes are pretty easy to draft but tend to lean heavily toward knowledge at the expense of skills or attitudes. The types of questions that are easiest to measure (e.g., multiple choice) often just measure the lowest level of thinking: recall. In too many cases, half the answer options are obviously wrong and freely guide the learner to the correct answer, skewing the results.


Progressing from level two to level three of the Kirkpatrick Model can pose a significant challenge. While the previous levels may have been relatively simple to assess and monitor, measuring behavior can prove much more difficult. As a result, learning and development teams may never strategize past the surveys and tests, limiting the effectiveness of their evaluation process. 


There is a solution! When using Kirkpatrick’s levels, we must avoid treating them as a ladder, climbing from one step to the next from the bottom up. It is widely acknowledged that backward design is the ideal approach for designing effective training, and the same principle ought to apply to the evaluation strategy. By starting with a clear understanding of the desired goals, it is possible to develop a program that is more likely to make a meaningful impact. 


The fourth level of Kirkpatrick’s Model, which measures results, should be the starting point of the evaluation process. 


What does this look like? Start by identifying a clear business goal that the training intervention will help solve. Only once this has been established should the training program be designed and developed, with a focus on measuring and evaluating the root problem and its solution. This approach ensures that the evaluation process is based on meaningful, measurable outcomes that provide insight into the effectiveness of the training. With this foundation, you are primed to consider what behaviors should be adopted and how to measure them. Tests and surveys can be more easily targeted to collect valid and reliable data.


The beauty of starting with level four is that organizations can effectively demonstrate the value of their instructional design teams and showcase their return on investment. This approach enables companies to truly reap the rewards of effective training and development. Begin at the top with the results; the view is pretty good from up here!


From Disaster to Design

Using a Space Shuttle Explosion to Inform Learning Experience Design

Have you heard about the catastrophic space shuttle crash that was attributed to PowerPoint?

In 2003, the space shuttle Columbia was launched into low Earth orbit with seven astronauts on board. They worked in shifts around the clock for 16 days, running roughly 80 experiments. Back on Earth, NASA officials were investigating a problem that had occurred just 82 seconds into the mission: a piece of insulating foam had broken off the external fuel tank and struck the left wing. Despite evidence that the strike may have caused serious damage, NASA officials proceeded with re-entry.

Columbia broke apart during re-entry into Earth’s atmosphere, killing all seven astronauts on board and destroying the scientific payloads.

What was the root cause of this tragic decision? The Columbia Accident Investigation Board (CAIB) included this in its findings:

“As information gets passed up an organization hierarchy, from people who do analysis to mid-level managers to high-level leadership, key explanations and supporting information are filtered out. In this context, it is easy to understand how a senior manager might read this PowerPoint slide and not realize that it addresses a life-threatening situation.

At many points during its investigation, the Board was surprised to receive similar presentation slides from NASA officials in place of technical reports. The Board views the endemic use of PowerPoint briefing slides instead of technical papers as an illustration of the problematic methods of technical communication at NASA.”

Read that last sentence one more time. An iconic space shuttle blown to pieces, almost 80 scientific experiments lost, seven astronauts killed, and the investigative board points to the endemic use of PowerPoint in place of technical papers as a key illustration of the problem. This heartbreaking incident serves as a stark reminder of the critical role that communication tools play in decision-making and behavior.

Take a look at one of the key slides:

Edward Tufte has a fascinating analysis of this slide. He questions the use of the initial VBB (Very Big Bullet), the optimistic title, the overuse of the vague term ‘significant(ly)’, and questionable language throughout. You really need to read his scrutiny yourself.

The only problem I want to explore is the hierarchical structure promoted by PowerPoint. The title is large and centered, signaling to the reader that it is the most important information on the slide. Unfortunately, this leads to the neglect of the less prominent bullet points, which are assumed to simply support the title. This same hierarchy can be observed in the decreasing font size of subsequent bullet points. As a result, vital information buried in the smaller text can be overlooked.

In truth, the test results indicated the possibility of significant structural damage, and the actual flight conditions were far more serious than the tests represented: the foam that struck Columbia was much larger and faster than any piece used in testing. This vital information would have been better communicated in a report drafted in Microsoft Word.

The deliberate selection of tools is critical for instructional designers; neglecting this choice can impede the learning process.

Let me ask you a question – are you putting your learners first when selecting tools for your instructional design? It is easy to fall into the trap of choosing tools that are actually centered around the trainer or developer, making their job easier rather than serving the learners’ needs.

Let’s continue with PowerPoint as an example. It’s easy to drop visuals and text into a presentation, but doing so often leads presenters to simply read off the slides without putting much thought into the organization or delivery of the content. This approach is flawed because it ignores how people actually think and process information – not in bullet points, but in a compelling narrative that captures their attention and keeps them engaged. By relying too heavily on the hierarchical structure of PowerPoint, designers risk misleading their audience about what information is truly important.

As an instructional designer, it is crucial to keep the learner at the front of your mind during each stage of course development. Maybe you won’t cause a space shuttle explosion, but consider the consequences of choosing the wrong tools or failing to design with the learner in mind. Increased workplace accidents? Decreased product quality? Higher attrition? Lower sales?

It is not enough to blame the tool – we must strive to use our tools in the best way possible to create compelling, effective learning experiences.

 

P.S. If you aren’t sure what ‘power points’ are…