Experimentation is just good public policy practice

Dan Monafu, Innovation & Experimentation at Treasury Board Secretariat


Nowadays, many policy documents in government start with something like this:

“Highly complex challenges, fundamental demographic, economic and environmental shifts, disruptive technologies, and changing customer and citizen expectations demand increasingly agile, adaptive, data-driven and measurable responses from public and private service providers.”

Translation: everything is changing, and the Public Service needs to be able to not only hang on, but continually adapt to new realities in order to stay relevant in Canadians’ lives.

The trouble is that, over time, we became very good at drafting vision statements with detailed plans for how things ought to go, yet our ability to put those plans into practice has actually weakened, even as public expectations have grown.

So we (as the Government of Canada) are constantly transforming, and because of our size as a large organization, never in one area at a time: our transformation priorities overlap and compound, resulting in acute system tension for all parts of our organization. Take the move from analog to digital (in service delivery), waterfall to agile (in project management), or inputs to outcomes (in grants and contributions) as more well-known examples of what I mean.

Where does my team come in? I work for the Innovation and Experimentation team at the Treasury Board of Canada Secretariat (see box for more information), which has been tasked with promoting experimentation principles across the Government of Canada, in collaboration with colleagues in Privy Council Office’s Impact and Innovation Unit. By experimentation, we mean activities that seek to explore, test and compare the effects of policies, interventions and approaches in order to inform evidence-based decision-making. In short, I like to think of it as trying to answer a (deceptively simple) question: “how do you know what works?”

I think of experimentation as one tool among many; it’s specific in the sense that it won’t answer anything outside the scope of what is being compared. But it will give you an answer. And it works irrespective of area of activity, whether in service delivery, policy, regulatory, or internal services environments. As long as you are gathering insights (in the form of data points, for instance) that allow you to rigorously test and compare interventions to see if one of them works better than another, you are experimenting.

The Innovation and Experimentation team in TBS works on three fronts:
  1. We help build institutional demand for experimentation by showcasing the impacts of successfully run experiments and by leveraging existing policies and processes;
  2. We help teams build experimental capacity via training, engagement, and general support to design and conduct experiments in an effective and ethical way; and
  3. We work with partners to anchor experiments in the machinery of government to trigger system changes in support of experimentation.

I’ve written before on the value of experimenting and who is doing this elsewhere in the world (hint: the Government of Canada is not alone), but I find that when I speak to public servants in various departments about why one should start experimenting, people usually get it when somebody inadvertently mentions recent large-scale IT project failures.

The Innovation and Experimentation team running a workshop entitled “Experiments in government: how to design and implement them?” during the 2019 Innovation Fair on May 22nd, 2019

Thinking from an experimentation perspective, I often wonder if we would have seen different results had specific elements first been rigorously tested at a small scale (e.g. comparing the model in place against a proposed new system on a couple of indicators, say reliability of calculations and system downtime). Experimentation might intuitively seem risky when taken at face value alone, but it actually reduces risk in situations where the outcome is simply not known.

One thing I’ve learned over the past number of years working in the experimentation field is that experimentation quickly becomes theoretical if I don’t provide examples of what I mean in practice. So here’s one such example:

The Mental Health Commission conducted a number of experiments to test Housing First, an approach that focuses on providing housing for homeless people with complex needs as a first priority, before addressing other challenges. Participants were randomly placed into either a Housing First group or a group that continued to receive typical services provided to high-needs homeless persons. These experiments demonstrated that prioritizing housing before other interventions offered better outcomes, in terms of both financial savings for government and improved quality-of-life metrics for participants. The strength of the experimental evidence helped cement Housing First as a favoured practice by governments to tackle homelessness.
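The core logic of a randomized experiment like Housing First can be sketched in a few lines of code. This is purely illustrative and is not the Commission’s actual analysis: the outcome scores are made up, and the participant names and function names are my own invention.

```python
import random
import statistics

def randomize(participants, seed=42):
    """Randomly split participants into a treatment group and a
    control group, as in a randomized controlled trial."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def compare_outcomes(treatment_scores, control_scores):
    """Return the difference in mean outcome (e.g. a quality-of-life
    score) between the two groups."""
    return statistics.mean(treatment_scores) - statistics.mean(control_scores)

# Illustrative (made-up) data: outcome scores measured after the intervention.
treatment = [72, 68, 75, 80, 71]   # received Housing First
control = [60, 65, 58, 63, 62]     # received typical services
print(compare_outcomes(treatment, control))
```

Because assignment to groups is random, any systematic difference in outcomes can be attributed to the intervention itself rather than to pre-existing differences between participants — which is exactly what gives this kind of evidence its strength.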

In order for examples of experimentation such as Housing First to become more pervasive in the Government of Canada, last year our team embarked on a new learning-by-doing model called Experimentation Works, where we paired departments wishing to experiment with existing Government of Canada experts in experimental design methodologies. And we made it open by default, so that everyone could follow along on the journey to understanding what it takes to set up and run experimental projects.

Members of the first Experimentation Works cohort during the Experimentation Works 1 closing event on June 3rd, 2019

We’ve just recently published Impact and Failure reports on our first cohort experience, along with a summary of the project itself, on our Medium.com blog, in case you missed it. What we found is that matching experts with project teams wishing to experiment was indeed integral to the success of the cohort: departments don’t have enough access to experimental design expertise to guide them through the process of experimenting.

But what I find most striking as an insight is that it’s always the project implementation that takes the most time: experimentation is a methodology like all others, and once you have a person trained in it, running the experiment itself is not the main barrier. The most difficult part is ensuring the system is able to coherently modify all that needs to be modified to enable experimentation in the first place (e.g. set up regular collection of data, standardize that data so that it’s readable and consistent), then constantly tweak its processes to gradually get better.

Ensuring all functions are well integrated for the pipeline of experimentation to happen, and making space for the evidence to be integrated into decision-making are two outcomes my team focuses on these days. And it is difficult work, given the number of discrete sectors that typically need to come together to make an experiment happen: from a good research and development function to generate the insights and data (typically found in comms / service environments), all the way to the project managers that decide on function design / redesign (project owners), and finally to those that capture recommendations for continual improvements in quicker and quicker iteration cycles (performance measurement and audit and evaluation functions).

For instance, Health Canada’s project for Experimentation Works involved carrying out real-time randomized A/B testing in order to improve its consumer incident reporting process. The project team needed to make this happen included subject-matter experts from the program itself, IM/IT specialists from both the program and the departmental IM/IT directorate, and advisors from the digital communications branch, in addition to the experimental design experts provided through Experimentation Works. That’s a lot of people who didn’t typically work together!
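To give a flavour of what the analysis behind an A/B test looks like, here is a minimal sketch of a standard two-proportion z-test. The numbers are hypothetical — I don’t have access to Health Canada’s actual data — and the function name is my own.

```python
import math

def ab_test_z(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: does variant B's completion rate differ
    from variant A's? Returns the z statistic; a larger magnitude
    means stronger evidence of a real difference."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis that A and B are the same.
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    standard_error = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / standard_error

# Hypothetical numbers: 120 of 1,000 users completed the existing
# reporting form (A); 150 of 1,000 completed a redesigned form (B).
z = ab_test_z(120, 1000, 150, 1000)
print(round(z, 2))  # |z| above roughly 1.96 is significant at the 5% level
```

The point of running the two versions side by side in real time — rather than simply switching to the redesign — is that the comparison controls for everything else that changes from week to week, so the difference can be credited to the form itself.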

So how do we get to a system that’s sufficiently mature to make such collaborations common practice in the Government of Canada? Our answer is to keep plugging away on a number of fronts, from building a second (hopefully bigger and better) cohort of Experimentation Works, to continually refining what departments need to report on with respect to what they’ve done in this space in order to show growing experimental capacity, to building networks of practitioners inside and outside the Government of Canada.

In the end, I think of experimentation simply as good public policy. It certainly won’t answer every question, but it will tell you what works.
