Nearly all learning leaders face a common struggle — credibly measuring the business impact of their initiatives. Often, attempts to show the impact of learning investments are met with skepticism. Simply showing a correlation between training and results won’t quell those skeptics. “Yeah, I know sales are up after training, but it’s probably because of the great ad campaign, not the training.” Stakeholders frequently challenge claims of training’s impact by suggesting that the economy, a pandemic, a new product or a good territory might be the true cause of an improvement in performance, not the training. Sound familiar? Faced with arguments like these, how can a learning leader move beyond correlations and make a causal argument that credibly demonstrates and isolates the impact of training?

It’s not for lack of desire. Survey data from Leo Learning and Watershed indicates that 95% of learning leaders want to measure impact. Yet, according to “The Future Is Now: Learning Strategy 2020” report, only 16% believe they are able to do so effectively. The Watershed research goes on to report the reasons learning organizations stop their measurement and evaluation efforts short of Kirkpatrick’s Level 4 (results):

    • Competing priorities.
    • They don’t know how to start.
    • They don’t believe they can get the data.

The second reason, “they don’t know how,” resonates across the learning industry, encompassing everything from not knowing what to measure to not knowing how to make that elusive causal argument. Indeed, making a credible causal argument takes some effort.

Making a Causal Argument

There is no silver bullet for causation. As noted by Khan Academy, “Causation can only be determined from an appropriately designed experiment.” Making a causal argument is really like running a science experiment and testing a hypothesis. In learning and development (L&D), the hypotheses typically center on whether the training had the desired results. Framed as a testable hypothesis, it could look like this:

“Trained salespeople will see a greater gain in sales performance than untrained salespeople.”

At its root, this is a research question, and it requires research principles and a designed experiment to answer.
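As a minimal sketch of what testing that hypothesis could look like, the snippet below compares hypothetical sales gains for a trained group and an untrained group with a two-sample t-test. The figures, group sizes and significance threshold are illustrative assumptions, not data from any real program.

    # Sketch: testing "trained salespeople will see a greater gain in sales
    # performance than untrained salespeople" with hypothetical data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical quarterly sales gains (post minus pre), in thousands of dollars
    trained_gains = rng.normal(loc=12, scale=5, size=40)    # assumed training group
    untrained_gains = rng.normal(loc=8, scale=5, size=40)   # assumed comparison group

    # One-sided Welch's t-test: is the mean gain of the trained group greater?
    t_stat, p_value = stats.ttest_ind(
        trained_gains, untrained_gains, equal_var=False, alternative="greater"
    )

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:  # illustrative significance threshold
        print("The trained group's gain is statistically greater than the untrained group's.")
    else:
        print("No statistically significant difference in gains was detected.")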

The research principles of causal modeling and experimental and observational study design are well-entrenched in many fields (e.g., social sciences and physical sciences), but they are not vigorously practiced in L&D departments. Learning professionals who are serious about measuring business impact need to become familiar with these techniques. Getting started means understanding the specific elements that go into making a credible causal argument.

A causal argument must meet three fundamental requirements:

    1. The cause must precede the effect.
    2. There must be a correlation between the cause and effect.
    3. Other plausible alternative explanations need to be ruled out.

The first two are quite straightforward (as illustrated below). It is the final requirement, ruling out plausible alternatives, that frequently stumps L&D practitioners.

1. The Cause Must Precede the Effect

This first requirement is the easiest to meet. Did the training happen before the change in performance was observed? If it did, as in Example 1, then the customer service training program might be a contributing factor to the increase in customer satisfaction.

Obviously, if customer satisfaction went up before training the customer service representatives, as in Example 2, then clearly the training was not the cause.

 

2. There Must Be a Correlation Between the Cause and Effect

Correlations illustrate how two or more variables move together. For example, charting hours of training to sales volume in a scatterplot helps visualize the presence (or absence) of a correlation (Example 3).

In Example 3 on the left, the two variables (training hours and sales volume) show no correlation — they are not moving together. This indicates that the sales training did not impact sales volume. On the other hand, the example on the right shows a positive correlation between training hours and sales volume — as training hours increase, so does sales volume. It is tempting to proudly pronounce such a correlation as a causal result of training. But something is missing: Ruling out all those other factors, such as tenure, territory and advertising budget.
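As an illustrative sketch, the strength of a relationship like the one on the right of Example 3 can be quantified with a Pearson correlation coefficient. The training-hours and sales-volume figures below are invented for the illustration.

    # Sketch: quantifying the Example 3 relationship with hypothetical data.
    import numpy as np
    from scipy import stats

    training_hours = np.array([2, 4, 5, 6, 8, 10, 12, 14, 16, 18])      # hypothetical
    sales_volume = np.array([50, 54, 60, 58, 66, 70, 75, 80, 83, 90])   # hypothetical, $K

    r, p_value = stats.pearsonr(training_hours, sales_volume)
    print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")
    # A strong positive r shows the two variables move together. On its own it
    # says nothing about whether training, tenure, territory or advertising
    # caused the increase in sales.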

While a correlation is a requirement for causation, the well-known mantra holds: Correlation does not equal causation.

Many fun (and spurious) correlations illustrate how treating a correlation as causation can lead to erroneous conclusions. One of the most famous is the correlation between ice cream consumption and shark attacks.

Claiming this relationship to be causal suggests that shark attacks could be reduced by reducing ice cream consumption. As in the sales training example, when factoring in “other plausible alternative explanations,” it becomes clear that time of year actually drives both ice cream sales and shark attacks. When it’s warm out, people are more apt to go to the beach and swim in the ocean (where sharks live). People are also more apt to eat more ice cream in warmer weather. Stopping consumption of ice cream will not reduce the number of shark attacks.
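A small simulation makes the point. In the sketch below (all numbers are made up), temperature drives both ice cream consumption and shark attacks. The raw correlation between the two looks strong, but the partial correlation, computed after removing temperature’s influence from both variables, is close to zero.

    # Sketch: a spurious correlation vanishes once the confounder is controlled for.
    # All data are simulated; temperature drives both variables.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    temperature = rng.uniform(10, 35, size=200)                     # simulated, degrees C
    ice_cream = 3.0 * temperature + rng.normal(0, 5, size=200)      # driven by temperature
    shark_attacks = 0.5 * temperature + rng.normal(0, 2, size=200)  # also driven by temperature

    print("raw r =", round(stats.pearsonr(ice_cream, shark_attacks)[0], 2))

    # Partial correlation: correlate the residuals left over after regressing
    # each variable on temperature.
    def residuals(y, x):
        slope, intercept, *_ = stats.linregress(x, y)
        return y - (slope * x + intercept)

    partial_r = stats.pearsonr(residuals(ice_cream, temperature),
                               residuals(shark_attacks, temperature))[0]
    print("partial r (controlling for temperature) =", round(partial_r, 2))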

3. Rule Out Other Plausible Alternative Explanations

Ruling out other plausible alternative explanations is clearly critical to making a causal argument, and it is the most difficult of the three requirements to meet. It involves two key elements:

    1. Developing a strong logic model.
    2. Formulating a good study design.

A. A Logic Model for Learning

As defined by the Centers for Disease Control and Prevention (CDC), a logic model is a graphic depiction (road map) that presents the shared relationships among the resources, activities, outputs, outcomes, and impact for your program. It depicts the relationship between your program’s activities and its intended effects.

Creating a logic model for learning follows the same principles as those outlined by the CDC for drug trials (see sidebar). It involves explicitly laying out a causal chain of evidence from the learning intervention to its intended effects — its business impact. Learning practitioners can build such a logic model, often called a Measurement Map®, with business stakeholders, together defining what success would look like, starting with initial training activities and moving through outputs, outcomes and impact — all in measurable terms.

Note that the map expands the typical L&D purview of data, bringing in the business data that is essential to showing business impact. With these key performance indicators (KPIs), the map tells the hypothesized causal story, making the argument that positive results on leading indicator metrics will lead to positive business outcomes and that positive business outcomes lead to an impact on overall organizational goals.

A measurement map helps L&D get aligned with stakeholders, build agreement on the causal chain of evidence, define “what to measure,” and serve as the foundation for a good research study design.

B. A Study Design for Learning

The logic model defines “what to measure.” The measurement plan or study design describes “how to measure.” It includes hypotheses to be tested, data requirements, time parameters, study approach (e.g., observational study using test and control groups) and other influencing variables — those other plausible explanations (e.g., region, tenure, ad campaign and so on) that must be controlled for in the analysis.
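As one possible illustration of such an analysis, the sketch below fits an ordinary least squares regression that estimates the effect of training on sales gain while controlling for region, tenure and advertising spend. The data file and column names are hypothetical stand-ins for whatever the study design actually specifies.

    # Sketch: estimating the training effect while controlling for other
    # plausible explanations. File and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("sales_study.csv")  # one row per salesperson

    # "trained" is 1 for the test group and 0 for the control group.
    model = smf.ols(
        "sales_gain ~ trained + tenure_years + ad_spend + C(region)",
        data=df,
    ).fit()

    print(model.summary())
    # The coefficient on "trained" is the estimated training effect after the
    # influencing variables are accounted for; its p-value indicates whether
    # that effect is statistically distinguishable from zero.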

With all the details laid out and the data in hand, the hypotheses can be tested and the other influencers examined for their effect. The analysis will determine whether an effect of training was found and whether there were any interactions with the influencing variables. Now, when sharing results, L&D is in a strong position: Stakeholders have already bought into the map they helped create, and the research design addressed the other plausible factors. These are the makings of a strong and credible causal argument for the business impact of training.

Summary

The “causation dilemma” really isn’t new. Nor is it unique to training. Today, across industries, scientists and practitioners use experimental and observational study design approaches to test hypotheses and make credible causal arguments. It’s time for the learning industry to follow suit and draw on the study design methods used so successfully in the social and physical sciences.