Applying the principle in practice
At this stage, start with the purpose of the mission in mind. The first question to ask is, “What are the human behaviors that we want to influence to drive the intended results?”
For example, if your company wants to lower its operating costs (the impact) by reducing unnecessary rework (the outcome), there are two ways to do so. The first is to create or improve systems that enable employees to become more efficient, and the second is to create or improve programs that help employees become more effective. The logical starting point in this case would be to develop a hypothesis about how your organization's internal systems or programs could be contributing to unnecessary rework.
Your hypothesis might be that rework is high because employees tend to avoid a particular system that is too complex to use or too difficult to access. Or employees find the quality assurance process too confusing to apply properly. Perhaps it is a combination of both.
Whatever the case, it’s vital to start by understanding the specific human behaviors that may be at play before attempting to create the results that you want.
Your hypotheses are just educated guesses at this point, so the next step is to ask, “How might we influence the behaviors that we want?” In this example, the best approach would be to use a series of controlled experiments based on the scientific method to iteratively test the efficacy of relatively modest changes to internal systems or programs. Such modest changes can often lead to big improvements: 80% of the desired outcome can be derived from changing 20% of the current conditions.
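To make this concrete, here is a minimal sketch of how the result of one such experiment might be evaluated. It assumes a hypothetical scenario in which a simplified QA checklist is trialled with one group of teams while another group keeps the current process, and it uses a standard two-proportion z-test to check whether the observed drop in rework is likely to be more than chance. The group names, sample sizes, and rework counts are all illustrative, not real data.

```typescript
// Hypothetical experiment evaluation: does the simplified QA checklist
// (variant) reduce the rework rate compared with the current process
// (control)? All numbers below are illustrative.

interface GroupResult {
  reworked: number; // tickets that required rework
  total: number;    // total tickets completed
}

// Two-proportion z-test: returns the z statistic for the difference
// in rework rates between the two groups.
function twoProportionZ(control: GroupResult, variant: GroupResult): number {
  const p1 = control.reworked / control.total;
  const p2 = variant.reworked / variant.total;
  const pooled =
    (control.reworked + variant.reworked) / (control.total + variant.total);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.total + 1 / variant.total)
  );
  return (p1 - p2) / standardError;
}

// Illustrative data: 180 of 1,000 tickets reworked under the current
// process vs. 130 of 1,000 under the simplified checklist.
const control: GroupResult = { reworked: 180, total: 1000 };
const variant: GroupResult = { reworked: 130, total: 1000 };

const z = twoProportionZ(control, variant);
// |z| > 1.96 corresponds to roughly 95% confidence that the reduction
// in rework is not due to chance.
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```

If the change passes a test like this, it becomes a candidate to roll out more broadly; if it does not, the next hypothesis in the series is tested instead.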
To get a sense of what that process might look like, we’ve adapted the following diagram from Teresa Torres to show how this works in practice.
The last piece of this process is the measures of success themselves, i.e., “How will we know we are successful?” In this example, the best approach is to design bespoke measures of success using a combination of leading and lagging indicators.
Leading indicators are actionable, and their job is to tell us whether we’re heading in the right direction. Lagging indicators are historical, and they tell us whether we’ve made meaningful progress against our higher-order business objectives so far.
Taken together, our leading and lagging indicators should be able to tell us whether we are likely to be successful with enough time and pressure, or whether we should cancel our investment and develop a new series of tests.
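As a rough illustration of how that decision might be made, the sketch below combines a few indicators into a simple continue-or-cancel signal. The indicator names, targets, and the 50% threshold are assumptions chosen for the example, not a prescribed formula.

```typescript
// Hypothetical sketch: combining leading and lagging indicators into a
// continue-or-cancel signal for a series of experiments.

interface Indicator {
  name: string;
  kind: "leading" | "lagging";
  current: number;
  target: number;
  higherIsBetter: boolean; // some indicators (e.g., rework hours) should go down
}

function onTrack(i: Indicator): boolean {
  return i.higherIsBetter ? i.current >= i.target : i.current <= i.target;
}

function assess(indicators: Indicator[]): "continue" | "cancel" {
  const leading = indicators.filter((i) => i.kind === "leading");
  const lagging = indicators.filter((i) => i.kind === "lagging");
  // Leading indicators are actionable: if most are on track, we keep
  // investing even if the lagging indicators have not moved yet.
  const leadingOnTrack = leading.filter(onTrack).length / leading.length;
  // Lagging indicators confirm that the higher-order outcome has shifted.
  const laggingOnTrack = lagging.some(onTrack);
  return leadingOnTrack >= 0.5 || laggingOnTrack ? "continue" : "cancel";
}

// Illustrative indicators for the rework example.
const indicators: Indicator[] = [
  { name: "Weekly active users of QA system", kind: "leading", current: 62, target: 60, higherIsBetter: true },
  { name: "QA checklist completion rate (%)", kind: "leading", current: 78, target: 75, higherIsBetter: true },
  { name: "Rework hours per sprint", kind: "lagging", current: 140, target: 120, higherIsBetter: false },
];

console.log(assess(indicators)); // "continue"
```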
To help you get started, we’ve provided Table VI below, which demonstrates how we might operationalize this at Rangle.
As you can see, motivating teams to achieve outcomes based on measures of success that correlate to organizational business goals is quite different from directing teams to deliver a set of requirements on time, on scope, and on budget.
In the case of the former, the intent is to optimize for outcomes that drive impact. With the latter, the intent is to follow a fixed and unvalidated plan. Again, shifting to this way of working can be challenging, but the advantages far outweigh the disadvantages.