Supporting Effective Evaluations: A Guide to Developing Performance Measurement Strategies



This guide is designed to support departments, and in particular program managers and heads of evaluation, in developing performance measurement strategies that effectively support evaluation, as required by the Policy on Evaluation and the related Directive on the Evaluation Function. It outlines the key content of performance measurement strategies, recommends a process for developing clear, concise performance measurement strategies, and presents examples of tools and frameworks for that purpose. It also describes the roles of program managers and heads of evaluation in developing performance measurement strategies and highlights key points to consider, including linkages to the Policy on Management, Resources and Results Structures and the Policy on Transfer Payments.

Table of Contents

  1. Introduction
  2. Overview of Performance Measurement Strategy
  3. Components of the Performance Measurement Strategy
  4. Program Profile
  5. Logic Model
  6. Performance Measurement Strategy Framework
  7. Evaluation Strategy
  8. Conclusion


1.0 Introduction

The purpose of this guide is to support program managers and heads of evaluation in meeting the requirements related to Performance Measurement (PM) Strategies as outlined in the Policy on Evaluation (2009), the Directive on the Evaluation Function (2009) and the Standard on Evaluation for the Government of Canada (2009). This guide also aims to support results-based management (RBM) practices across departments and agencies1 by promoting a streamlined and cohesive approach to performance monitoring and evaluation.

This guide is aligned with and complements the Policy on Transfer Payments (2008), the Directive on Transfer Payments (2008) and the Office of the Comptroller General's (OCG) Guidance on Performance Measurement Strategies under the Policy on Transfer Payments.2 While these policy instruments and OCG guidance establish the general requirements for a PM Strategy, this guide provides a more detailed step-by-step process for its development. The guide will also help ensure that the PM Strategies developed are aligned with the Policy on Management, Resources and Results Structures (2008).3

In developing this guide, the Centre of Excellence for Evaluation (CEE) considered the lessons learned from nearly a decade of working with departments on the development, review and implementation of the Results-Based Management Accountability Frameworks that were required under the former Transfer Payment Policy. This guide was also informed by the Independent Blue Ribbon Panel on Grant and Contribution Programs' report From Red Tape to Clear Results (2006).



2.0 Overview of Performance Measurement Strategy

2.1 The Purpose of the Performance Measurement Strategy

The PM Strategy is a results-based management tool that is used to guide the selection, development and ongoing use of performance measures. Its purpose is to assist program managers and deputy heads to:

  • continuously monitor and assess the results of programs as well as the economy and efficiency of their management;
  • make informed decisions and take appropriate, timely action with respect to programs;
  • provide effective and relevant departmental reporting on programs; and
  • ensure that credible and reliable performance data are being collected to effectively support evaluation.

It is important to remember that performance monitoring and evaluation play complementary and mutually reinforcing roles. In the Government of Canada, evaluation is defined as "the systematic collection and analysis of evidence on the outcomes of programs to make judgments about their relevance, performance and alternative ways to deliver them or to achieve the same results" (Policy on Evaluation, Section 3.1). Implementing effective performance measurement, in addition to supporting ongoing program monitoring, can also support and facilitate effective evaluation. Evaluation can help establish whether observed results are attributable (in whole or in part) to the program intervention and provide an in-depth understanding of why program outcomes were (or were not) achieved.

2.2 Programs Requiring a Performance Measurement Strategy


All programs, including transfer payment programs, are required to have a PM Strategy.
Under the Policy on Evaluation, deputy heads are to ensure that "ongoing performance measurement is implemented throughout the department so that sufficient performance information is available to effectively support the evaluation of programs" (Section 6.1.10). The Directive on the Evaluation Function requires that all programs, including transfer payment programs, have a PM Strategy (Section 6.2.1).4

2.3 Roles and Responsibilities


Program managers are responsible for developing, implementing and updating PM Strategies.

Heads of evaluation are responsible for reviewing and providing advice on all PM Strategies.

Heads of evaluation are to report annually to the Departmental Evaluation Committee on the status of performance measurement in support of evaluation.

Under the Directive on the Evaluation Function, program managers are responsible for developing and implementing PM Strategies for their programs (Section 6.2.1)5 and for consulting with the head of evaluation on these strategies (Section 6.2.3). Program managers should also update their PM Strategies to ensure they remain relevant and that credible and reliable performance data are being collected to effectively support evaluation (Directive on the Evaluation Function, Section 6.2.1).

Heads of evaluation are responsible for "reviewing and providing advice on the performance measurement strategies for all new and ongoing direct program spending, including all ongoing programs of grants and contributions, to ensure that they effectively support an evaluation of relevance and performance" and for "submitting to the Departmental Evaluation Committee an annual report on the state of performance measurement of programs in support of evaluation" (Directive on the Evaluation Function, Section 6.1.4).

2.4 When to Develop the Performance Measurement Strategy

For new programs, the PM Strategy should be developed at the program design stage when key decisions are being made about the programming model, delivery approaches, reporting requirements, including those of third parties, and evaluations.6 For ongoing programs for which no PM Strategy exists, one should be developed in a timely manner to ensure the availability of performance data for program monitoring and evaluation activities.

Developing the PM Strategy is only the first step in the performance measurement process. During its implementation, the PM Strategy should be reviewed periodically and revised (if required) to maintain its relevance in supporting effective program monitoring and evaluation activities.

2.5 Defining a Program within the Context of the Performance Measurement Strategy

In both the Policy on Evaluation and the Policy on Transfer Payments, a program is defined as "a group of related activities that are designed and managed to meet a specific public need and are often treated as a budgetary unit" (also in keeping with the definition provided in the Policy on Management, Resources and Results Structures). This may include, for example, a program established and funded through approval of a Treasury Board submission, a transfer payment program or other groupings of programmatic activities.


The PAA is a structured inventory of a department's programs; programs are arranged in a hierarchical manner to depict the logical relationship between each program and the department's strategic outcome(s) to which they contribute.
Based on the Policy on Management, Resources and Results Structures, all programs within a department are expected to be reflected in its Program Activity Architecture (PAA). Because the Policy on Evaluation requires full evaluation coverage of a department's direct spending for programs represented in the PAA, the PM Strategy should be based on how a program is defined (or reflected) in the PAA.

At the same time, it is also important to remember that, for evaluation purposes, the "units" that will be evaluated may not always correspond to a single program represented in the PAA. Units of evaluation could be:

  • a single program
  • a grouping of multiple programs
  • a sub-element7 of a single program
  • a grouping of sub-elements of multiple programs
  • a cross-cutting theme, function or policy embedded in multiple programs
  • other configurations

Depending on the unit of evaluation, there may be efficiencies to be gained in developing common performance measures and systems across programming units. Therefore, it is important to consult the head of evaluation and the Departmental Evaluation Plan when developing a PM Strategy to support evaluation.



3.0 Components of the Performance Measurement Strategy

3.1 Key Components of a Performance Measurement Strategy

In Annex A of the Policy on Evaluation, a PM Strategy is defined as "the selection, development and ongoing use of performance measures for program management and decision making." To support the PM Strategy process and align it with future evaluations, the document outlining the PM Strategy should include the following components:

  • a program profile
  • a logic model
  • a Performance Measurement Strategy Framework
  • an Evaluation Strategy8

These components represent the foundation of the PM Strategy and ensure that the performance measures selected and developed produce useful and comprehensive information for program monitoring and evaluation.

3.2 Determining the Scope and Complexity

The program profile, logic model, PM Strategy Framework and Evaluation Strategy should be succinct and focused. Their length and level of detail should reflect the program's scope and complexity, the challenges associated with producing quality data for monitoring and evaluation, and the risks associated with the program. Risks and issues to consider when establishing the scope and complexity of the PM Strategy include:

  • risks to the health and safety of the public or the environment (including both the degree and magnitude of the consequences that could result from the policy, program or initiative's failure as well as the probability of risk materializing);
  • risks associated with public confidence or political sensitivity (both current and potential), including media, parliamentary or ministerial interests;
  • risks related to size of the population affected or targeted by the program;
  • known problems, challenges or weaknesses in the program (ideally based on previous evaluative assessment);
  • materiality of the program;
  • complexity of the program (e.g. in terms of its components and delivery mechanisms);
  • time available to prevent or mitigate risks; and
  • other factors that are of significance to a particular program.

In the introductory remarks of the PM Strategy, explain how the program's scope and complexity, the challenges associated with producing quality data for monitoring and evaluation, and the risks associated with the program were considered when developing the PM Strategy.



4.0 Program Profile

4.1 Overview of the Program Profile

The Program Profile section of the PM Strategy document should be concise and focused and provide readers (e.g. heads of evaluation, program stakeholders) with key information required to understand the program. Its purpose is to:

  • support the development of the logic model, the PM Strategy Framework and the Evaluation Strategy;
  • serve as a reference for evaluators in upcoming evaluations; and
  • facilitate communication about the program to program staff and other program stakeholders.

4.2 Program Profile Content

The program profile should include the elements shown in Table 1.

Table 1: Elements of a Program Profile
1.1  Need for the program
  • Explain the past and current need for the program and provide supporting evidence, such as demographic studies, research studies or needs assessments.
1.2  Alignment with government priorities
  • State the program's objectives, explicitly linking them to federal government priorities and related departmental strategic outcomes.
1.3 Target population(s)
  • Identify the target population(s) (those intended to be influenced and benefit from the program) and the characteristics that are relevant to program delivery (e.g. geographical area, age group, gender).
1.4  Stakeholders
  • Include a list of key program stakeholders (e.g. delivery partners, other government departments) and a brief description of their roles and responsibilities.
1.5  Governance
  • Identify the role and responsibilities of the federal government in delivering the program;
  • Include the roles and responsibilities of others (e.g. third-party partners) involved in delivering the program; and
  • When there are partners, describe their respective roles and responsibilities for performance monitoring, reporting and evaluation.

Note: More specific responsibilities associated with performance measurement and evaluation should be included in later sections of the PM Strategy.

1.6  Resources
  • Summarize (in a table) annual program resources allocated to the department and each delivery partner, with salaries, operations and maintenance (O&M), transfers to partners and capital costs separated out.
  • The table should also separately display the resources required for implementing performance monitoring of the program and the estimated costs for conducting the evaluation.

4.3 Considerations when Developing the Program Profile

Developing the program profile should not be an onerous process. Typically, most of the information required for the program profile should already have been compiled when the program's initial planning documents were developed. Sources of information for the program profile generally include the program's Treasury Board submission, terms and conditions, business plans and memoranda of understanding.



5.0 Logic Model

5.1 Overview of the Logic Model

The logic model serves as the program's road map. It outlines the intended results (i.e. outcomes) of the program, the activities the program will undertake and the outputs it intends to produce in achieving the expected outcomes. The purpose of the logic model is to:

  • help program managers verify that the program theory is sound and that outcomes are realistic and reasonable;
  • ensure that the PM Strategy Framework and the Evaluation Strategy are clearly linked to the logic of the program and will serve to produce information that is meaningful for program monitoring, evaluation and, ultimately, decision making;
  • help program managers interpret the monitoring data collected on the program and identify implications for program design and/or operations on an ongoing basis;
  • serve as a key reference point for evaluators in upcoming evaluations; and
  • facilitate communication about the program to program staff and other program stakeholders.

5.2 Logic Model Content

There are many ways to present logic models (see Appendix 1 for a sample logic model). While each organization can use the format that best suits its audience, a standard series of components (sometimes referred to as the "results-chain") should be included in order for the logic model to effectively support an evaluation. These components, which are logically linked, are the program inputs, activities, outputs and outcomes. As shown in Figure 1, there are three types of outcomes: immediate, intermediate and ultimate. The key components of the logic model are defined in Table 2.

Figure 1: Main Components of a Logic Model

The six main components of a logic model, in sequence, are:

  • Inputs
  • Activities
  • Outputs
  • Immediate Outcomes
  • Intermediate Outcomes
  • Ultimate Outcomes

An important design feature of logic models is that they are, ideally, contained on a single page. As the logic model is intended to be a visual depiction of the program, its level of detail should be comprehensive enough to adequately describe the program but concise enough to capture the key details on a single page.

Table 2: Descriptions of Logic Model Components

Inputs

Description:
  • Financial and non-financial resources used to deliver activities, produce outputs and accomplish outcomes.
Examples:
  • Funds
  • Personnel
  • Equipment, supplies
  • Physical facilities

Activities

Description:
  • The action(s) that a departmental organization undertakes to produce one or more outputs under the program.
  • Demonstrate the "how" of the program.
  • Activities are sometimes referred to as "processes," "strategies" or "action steps."

Note: For grants and contribution programs, it is recommended that the activities carried out by departments be differentiated from those carried out by the organizations receiving funding.

Examples:
  • Conducting research and analysis
  • Delivering training sessions
  • Consulting, engaging stakeholder opinion
  • Conducting inspections

Outputs

Description:
  • Direct products or services generated from the activities of an organization, policy, program or initiative.
  • Are usually within the control of the organization itself.
  • Typically are tangible and can be counted.
  • Demonstrate the "what" of the program.
  • Outputs are sometimes referred to as "deliverables" or "units of service."
Examples:
  • Pamphlet
  • Water treatment plant
  • Training sessions completed, number of people trained
  • Position papers, research reports or studies

Outcomes

Description:
  • The change(s) or the difference(s) that result from the program outputs.
  • Demonstrate the "why" of the program.
  • Higher-level outcomes (e.g. ultimate outcomes) are not always within the control of a single program; instead, they are within a sphere of the organization's influence.
  • Outcomes are sometimes referred to as "impacts" or "results."

Note: Some identified program outcomes may be identical to the "expected results" in the departmental PAA. In such cases, a footnote explaining this alignment should be provided.

Examples:
  • Improved collaboration and coordination among partners
  • Increased visibility of a certain issue
  • Improved policies

Immediate Outcomes

Description:
  • An outcome that is directly attributable to the outputs delivered.
  • In terms of time frame, these are short-term outcomes.
Example:
  • Change in awareness, knowledge, skills or access of a target population (e.g. increased knowledge of a certain issue)

Intermediate Outcomes

Description:
  • Outcomes that are logically expected to occur once one or more immediate outcomes have been achieved.
  • Often, intermediate outcomes describe behavioural changes that result from increases in a target population's skills, knowledge, awareness and/or access.
  • The change may occur at the individual, group, organizational or community level.
Example:
  • Change in target population's behaviour

Ultimate Outcomes

Description:
  • These are the highest-level outcomes that can be reasonably and causally attributed to a policy, program or initiative.
  • Are a consequence of one or more intermediate outcomes having been achieved.
  • They often contribute to the higher-level departmental strategic outcome(s).

Note: The ultimate outcome should not be at a higher level than the expected results of the PAA element to which the program contributes.

Example:
  • A change of state in a target population, e.g. social impact
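
The results chain described in Table 2 can also be thought of as a simple structured record. The following sketch is illustrative only and is not prescribed by any Treasury Board instrument; the class and field names are hypothetical, and the example entries are drawn from Table 2.

```python
# Illustrative sketch only: one way to record a logic model's results chain.
# The class and field names here are hypothetical, not prescribed by the guide.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    inputs: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    immediate_outcomes: List[str] = field(default_factory=list)
    intermediate_outcomes: List[str] = field(default_factory=list)
    ultimate_outcomes: List[str] = field(default_factory=list)

# A fragment of a hypothetical training program's logic model,
# using examples drawn from Table 2.
model = LogicModel(
    inputs=["Funds", "Personnel"],
    activities=["Delivering training sessions"],
    outputs=["Training sessions completed"],
    immediate_outcomes=["Increased knowledge of a certain issue"],
    intermediate_outcomes=["Change in target population's behaviour"],
    ultimate_outcomes=["A change of state in a target population"],
)
```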

5.3 Logic Model Narrative (Including Theories of Change)


Theory of Change

Every program is based on a "theory of change" - a set of assumptions, risks and external factors that describes how and why the program is intended to work. This theory connects the program's activities with its goals. It is inherent in the program design and is often based on knowledge and experience of the program, research, evaluations, best practices and lessons learned.

A logic model is a visual expression of the rationale behind a program. However, on its own, the logic model does not provide enough detail on how the program activities will contribute to its intended outcomes and how lower-level outcomes will lead to higher-level outcomes. As such, the logic model should be accompanied by a narrative that outlines how certain activities or actions are intended to produce results. This is sometimes referred to as the "theory of change" or "program theory." A good narrative explains the linkages between activities, outputs and outcomes by describing the underlying assumptions of the program, risks and external factors that influence whether or not the outcomes will be achieved.

5.4 Considerations when Developing the Logic Model

There are many different approaches for developing the logic model. A participatory process9 is recommended, as it helps to improve the accuracy of the logic model and provides stakeholders with a common understanding of what the program is supposed to achieve and how it is supposed to achieve it. It is important to remember that the logic model is not static; it is an iterative tool. As the program changes, the logic model should be revised to reflect the changes, and these revisions should be documented.

The following key questions should be considered once the program logic model is completed (a minimal structural check is sketched after the list):

  • Are all activities, outputs and outcomes included?
  • Does each outcome state an intended change?
  • Is it reasonable to expect that the program's activities will lead to the program's outcomes?
  • Are the causal linkages plausible and substantiated by the program theory?
  • Are all the elements clearly stated?
  • Are the outcomes measurable?
  • Do the activities and outcomes address a demonstrated need?
  • Is the final outcome at a lower level than the expected results of the departmental PAA?
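
Most of these questions call for professional judgment, but the first, purely structural, question can be checked mechanically. Below is a minimal sketch of such a check, reusing the hypothetical LogicModel record from the sketch in Section 5.2.

```python
# Minimal structural check mirroring the first key question above:
# are all components of the results chain present?
from typing import List

def missing_components(model: "LogicModel") -> List[str]:
    """Return the names of any empty results-chain components."""
    return [name for name, values in vars(model).items() if not values]

print(missing_components(model))  # [] when every component has at least one entry
```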


6.0 Performance Measurement Strategy Framework

6.1 Overview of the Performance Measurement Strategy Framework

The PM Strategy Framework identifies the indicators required to monitor and gauge the performance of a program. Its purpose is to support program managers in:

  • continuously monitoring and assessing the results of programs as well as the efficiency of their management;
  • making informed decisions and taking appropriate, timely action with respect to programs;
  • providing effective and relevant departmental reporting on programs; and
  • ensuring that the information gathered will effectively support an evaluation.

Program managers should consult with heads of evaluation on the selection of indicators to help ensure that the indicators selected will effectively support an evaluation of the program. See Section 2.3 of this guide for more information on the roles and responsibilities of program managers and heads of evaluation.

6.2 Content of the Program Performance Measurement Strategy Framework

Table 3 summarizes the major components of the PM Strategy Framework. The framework should include the program's title as shown in the departmental PAA as well as the PAA elements that are directly linked to the program, i.e. the program activities, subactivities and/or sub-subactivities. It should also include the program's outputs, immediate and intermediate outcomes (as defined in the logic model), as well as one or more indicators for each output and outcome. For each indicator, provide:

  • the data source(s)
  • the frequency of data collection
  • baseline data
  • targets and timelines for when targets will be achieved
  • the organization, unit and position responsible for data collection
  • the data management system used
Table 3: Sample Table for the Performance Measurement Strategy Framework: PAA Elements Linked to the Program

For each output and outcome, the table carries one row per indicator, with the following columns: Indicator | Data source | Frequency | Baseline | Target | Date to achieve target | Organization and position responsible for data collection | Data management system.

Output 1
  • Indicator 1

Output 2 (rows may be added for additional outputs)
  • Indicator 2
  • Indicator 3

Outcome 1
  • Indicator 4
  • Indicator 5

Outcome 2 (rows may be added for additional outcomes)
  • Indicator 6
  • Indicator 7
  • Indicator 8
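
To make the framework concrete, the sketch below captures one row of Table 3 as a structured record. This is illustrative only; the field names simply mirror the column headings, and all example values are hypothetical.

```python
# Illustrative only: one row of the PM Strategy Framework as a record.
from dataclasses import dataclass

@dataclass
class IndicatorRecord:
    output_or_outcome: str        # the output or outcome from the logic model
    indicator: str                # what will be measured
    data_source: str              # where the data come from
    frequency: str                # how often the data are collected
    baseline: str                 # starting point for comparison
    target: str                   # level to be achieved
    date_to_achieve_target: str   # when the target is to be met
    responsible: str              # organization and position collecting the data
    data_management_system: str   # where the data are stored

# Hypothetical example row.
row = IndicatorRecord(
    output_or_outcome="Immediate outcome: increased awareness of the program",
    indicator="Percentage of the target population aware of the program",
    data_source="Annual client survey",
    frequency="Annual",
    baseline="35% (first survey)",
    target="50%",
    date_to_achieve_target="End of year three",
    responsible="Program directorate, performance measurement officer",
    data_management_system="Departmental performance database",
)
```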

6.3 Performance Measurement Frameworks and the Performance Measurement Strategy Framework

The Policy on Management, Resources and Results Structures (MRRS) requires the development of a departmental Performance Measurement Framework (PMF), which sets out the expected results and the performance measures to be reported for programs identified in the PAA. The PMF is intended to communicate the overarching framework through which a department will collect and track performance information about the intended results of the department and its programs. The indicators in the departmental PMF are limited in number and focus on supporting departmental monitoring and reporting.

The PM Strategy Framework is used to identify and plan how performance information will be collected to support ongoing monitoring of the program and its evaluation. It is intended to more effectively support both day-to-day program monitoring and delivery and the eventual evaluation of that program. Accordingly, the PM Strategy Framework may include expected results, outputs and supporting performance indicators beyond the limits established for the expected results and performance indicators to be included in the MRRS PMF (see Figure 2 below). Unlike the MRRS PMF, the PM Strategy Framework has no imposed limit on the number of indicators that can be included; however, successful implementation of the PM Strategy is more likely if indicators are kept to a reasonable number.

Figure 2: Comparison of Requirements: Performance Measurement Frameworks and Performance Measurement Strategies (the comparison is described in the text below)

As illustrated in Figure 2, the indicators in the PM Strategy Framework focus on supporting ongoing program monitoring and evaluation activities and therefore align with and complement the indicators included in the departmental PMF. In instances where the program is shown as a distinct program in the PAA and indicators have been identified in the departmental PMF, the PM Strategy Framework should include, at a minimum, the indicators reported in the departmental PMF. When a PM Strategy is developed for a new program that is not represented in the departmental PAA, the outcomes, outputs and related indicators developed for the PM Strategy Framework should be considered for inclusion in the departmental MRRS.10

6.4 Accountabilities and Reporting

The PM Strategy Framework should be accompanied by a short text that describes:

  • the reporting commitments and how they will be met, including who will analyze the data, who will prepare the reports, to whom they will be submitted, by when, what information will be included, the purpose of the reports and how they will be used to improve performance; and
  • if relevant, the potential challenges associated with data collection and reporting, as well as mitigating strategies for addressing these challenges (e.g. there may not be a system that can be used for data management).

6.5 Considerations when Developing the Performance Measurement Strategy Framework

The following steps provide guidance on how to develop the PM Strategy Framework.11

Step 1. Start with the MRRS PMF: Review the MRRS PMF that the department developed in accordance with the Policy on MRRS. Include, as appropriate, the performance indicators from the MRRS PMF in the PM Strategy Framework.

The PM Strategy Framework should not be developed in isolation. In accordance with the Policy on MRRS, all programs represented in a department's PAA must contribute to its strategic outcome(s). As such, when developing the PM Strategy Framework, the performance indicators should complement those already established in the departmental PMF.

Step 2. Identify performance indicators: Develop or select at least one performance indicator for each output and each outcome (immediate, intermediate and ultimate) that has been identified in the program logic model.

Keep in mind that, in addition to day-to-day program monitoring, the performance indicators will also be used for evaluation purposes. As such, it is recommended that program managers also consider the core issues12 for evaluation (i.e. relevance and performance) and consult with heads of evaluation when developing the performance indicators.

There are two types of indicators: quantitative and qualitative.

  • Quantitative performance indicators are composed of a number and a unit. The number indicates the magnitude (how much) and the unit gives the number its meaning (what), e.g. the number of written complaints received.
  • Qualitative indicators are expressed in expository form, e.g. assessment of research quality. As much as possible, qualitative indicators should be condensed into a rating scale, e.g. research quality is rated as "excellent," "average" or "below average." This will allow for comparability over time (a small sketch of such a scale follows at the end of this step).
Choosing appropriate performance indicators is essential for effectively evaluating and monitoring a program's progress. Appropriate indicators are characterized as follows:
  • Valid—The indicators measure what they intend to measure.
  • Reliable—The data collected should be the same if collected repeatedly under the same conditions at the same point in time.
  • Affordable—Cost-effective data collection (and analysis) methods can be developed.
  • Available—Data for the indicators are readily and consistently available to track changes in the indicator.
  • Relevant—The indicator clearly links back to the program outcomes.

Keep the number of performance indicators to a manageable size. A small set of good performance indicators is more likely to be implemented than a long list of indicators.
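
As noted in the bullet on qualitative indicators above, condensing an expository assessment into a rating scale allows comparability over time. The following small sketch uses a hypothetical scale and hypothetical yearly assessments.

```python
# Illustrative only: condensing a qualitative assessment into an ordinal
# rating scale so that results can be compared over time.
RATING_SCALE = {"below average": 1, "average": 2, "excellent": 3}

# Hypothetical yearly assessments of research quality.
assessments = {"Year 1": "average", "Year 2": "average", "Year 3": "excellent"}

scores = {year: RATING_SCALE[rating] for year, rating in assessments.items()}
print(scores)  # {'Year 1': 2, 'Year 2': 2, 'Year 3': 3}, comparable year over year
```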

Step 3. Identify data sources: Identify the data sources and the system that will be used for data management.

There are a number of possible data sources:

  • Administrative data—Information that is already being collected through program files or databases or could be collected with adjustments to regular programming processes or funding agreements;
  • Primary performance data—Specialized data collection exercises, such as focus groups, expert panels or surveys, in which information needs to be collected; and
  • Secondary data—Information that has been collected for other purposes, such as national statistics on health or economic status (e.g. Statistics Canada data).

Use readily available information. Take advantage of any available sources of information at your disposal.

Step 4. Define frequency and responsibility for data collection: Identify the frequency of data collection and the person(s) or office responsible for the activity.

Describe how often performance information will be gathered. Depending on the performance indicator, it may make sense to collect data on an ongoing, monthly, quarterly, annual or other basis.

When planning the frequency and scheduling of data collection, an important factor to consider is management's need for timely information for decision making.

In assigning responsibility, it is important to take into account not only which parties have the easiest access to the data or sources of data but also the capacities and system that will need to be put in place to facilitate data collection.

Step 5. Establish baselines, targets and timelines.

Performance data can only be used for monitoring and evaluation if there is something to which they can be compared. A baseline serves as the starting point for comparison. Performance targets are then required to assess the relative contribution of the program and ensure that appropriate information is being collected.

Baseline data for each indicator should be provided by the program when the PM Strategy Framework is developed. Often, and particularly for indicators related to higher-level outcomes, this information will have already been collected by program managers as part of their initial needs assessment and to support the development of their business case.

Targets are either qualitative or quantitative, depending on the indicator. Some sources and reference points for identifying targets include:

  • baseline data based on past performance or previous iterations of the program;
  • the achievements of other, similar programs that are considered leaders in the field (benchmarking);
  • generally accepted industry or business standards; and
  • publicly stated targets (e.g. set by the government, the federal budget).

If a target cannot be established or if the program is unable to establish baseline data at the outset of the program, explicit timelines for when these will be established as well as who is responsible for establishing them should be stated.
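
As a simple arithmetic illustration of how a baseline and a quantitative target frame monitoring data, consider the following sketch; the numbers are hypothetical.

```python
# Illustrative only: expressing an observed value as the fraction of the
# baseline-to-target distance achieved so far.
def progress_toward_target(baseline: float, target: float, actual: float) -> float:
    if target == baseline:
        raise ValueError("Target must differ from baseline for this comparison.")
    return (actual - baseline) / (target - baseline)

# E.g. awareness was 35% at baseline, the target is 50%, and 44% was observed:
print(f"{progress_toward_target(35.0, 50.0, 44.0):.0%}")  # 60% of the way to target
```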

Step 6. Consult and verify: As a final step, program managers should consult13 with heads of evaluation and other specialists in performance measurement to validate the performance indicators and confirm that the resources required to collect data are budgeted for. Consultation with other relevant program stakeholders (e.g. information management personnel) is also encouraged.

  • This step complies with the Directive on the Evaluation Function, which holds heads of evaluation responsible for reviewing and providing advice on the PM Strategies for all new and ongoing direct program spending, including all ongoing programs of grants and contributions, to ensure that they effectively support an evaluation of relevance and performance (Section 6.1.4, subsection a).
  • It is highly recommended that program managers work with the head of evaluation to ensure that the performance indicators selected will provide adequate data to support an evaluation of the program. The choice of consultation mechanism remains at the discretion of the program manager and head of evaluation.

Key questions to be considered during the consultation and verification stage include:

  • Does the selection of indicators align with and complement the indicators in the departmental PMF?
  • Are the indicators valid measures of the outputs and outcomes (i.e. do they measure what they intend to measure)?
  • Are the indicators reliable (i.e. would the data recorded be the same if collected repeatedly under the same conditions at the same point in time)?
  • Is it likely that the required data will be provided in a timely manner?
  • Is the cost reasonable and clearly budgeted for?

If the answer to any of the above questions is "no," adjustments need to be made to the indicators or to the resources allocated for implementation of the PM Strategy.


Implementation

The full benefits of the PM Strategy Framework can only be realized when it is implemented.14 As such, program managers should ensure that the necessary resources (financial and human) and infrastructure (e.g. data management systems) are in place for implementation. Program managers should begin working at the design stage of the program to create databases, reporting templates and other supporting tools required for effective implementation. Following the first year of program implementation, they should undertake a review of the PM Strategy to ensure that the appropriate information is being collected and captured to meet both program management and evaluation needs.

Keep in mind that, according to the Directive on the Evaluation Function, the head of evaluation is responsible for "submitting to the Departmental Evaluation Committee an annual report on the state of performance measurement of programs in support of evaluation" (Section 6.1.4, subsection d). Given that the PM Strategy Framework represents a key source of evidence for this annual report, its implementation is especially important.



7.0 Evaluation Strategy

7.1 Overview of the Evaluation Strategy

In the Government of Canada, evaluation is defined as "the systematic collection and analysis of evidence on the outcomes of programs to make judgments about their relevance, performance and alternative ways to deliver them or to achieve the same results" (Policy on Evaluation, Section 3.1). Evaluations can be conducted in a variety of ways to serve a variety of purposes and, consequently, can be designed to address a multiplicity of issues.

The Evaluation Strategy is a high-level overview of the evaluation plans for a program. The inclusion of an Evaluation Strategy in the PM Strategy is important because it allows program managers and heads of evaluation to:

  • Ensure that the data generated through the PM Strategy Framework can effectively support evaluation and will be available in a timely manner. It also allows for the identification of additional data that may need to be collected to support the evaluation.
  • Identify the rationale and need to undertake an evaluation well in advance of legislated or policy-driven evaluation deadlines (e.g. the FAA, Policy on Evaluation). This is done, in part, to inform the development of the Departmental Evaluation Plan, which deputy heads are required to approve annually in accordance with the Policy on Evaluation (Section 6.1.7).15
  • Engage in early planning for evaluations and develop rigorous cost-effective evaluation approaches and designs (which might not be possible if evaluation planning is left until later in the programming cycle).

Consultation between program managers and heads of evaluation on the Evaluation Strategy also supports heads of evaluation in developing the annual report on the state of performance measurement of programs in support of evaluation that is submitted to the Departmental Evaluation Committee (Directive on the Evaluation Function, Section 6.1.4).

7.2 Content of the Evaluation Strategy

It is recommended that the Evaluation Strategy include details on:

  • the drivers and rationale for the evaluation (e.g. information needs of deputy heads, program renewal, support for expenditure management, FAA requirements, commitments expressed in Treasury Board submissions, or policy commitments under the Policy on Evaluation or Cabinet Directive on Streamlining Regulations, among others);
  • resource commitments in support of evaluation, including resources as described in Treasury Board submissions, and other resource commitments;
  • an initial evaluation framework or the timing and responsibilities for the development of the evaluation framework (along with a rationale for not including the evaluation framework in the PM Strategy); and
  • other details related to the evaluation.

The evaluation framework that is developed as part of the Evaluation Strategy is understood to be a working model based on the best information available at the outset of the program. The head of evaluation and the program manager should work together to revisit and update the framework as appropriate. Ultimately, the head of evaluation will be responsible for working with the program manager to develop the final evaluation framework (sometimes called an evaluation matrix) that will guide the evaluation process.

It is important to note that the program manager is responsible for developing and implementing the PM Strategy and for consulting with the head of evaluation in doing so (Directive on the Evaluation Function, Section 6.2.3), whereas the evaluation framework ultimately falls under the purview of the head of evaluation. While departments are strongly encouraged to develop an initial evaluation framework (even at a rudimentary level) at the outset of a program, there may be instances where the head of evaluation decides (in consultation with the program manager) that, based on the risks and other characteristics of the program, developing an initial evaluation framework is not feasible at the time the PM Strategy is created. For example, the head of evaluation may decide that in a given low-risk program, the data being proposed for collection by the program manager in implementing the PM Strategy is sufficient to support an evaluation that addresses the five core evaluation issues (see Table 5). In such cases, an initial evaluation framework does not need to be included in the Evaluation Strategy. Nevertheless, the rationale for the head of evaluation's decision to forego the early development of a draft evaluation framework at the outset of a program should be documented in the Evaluation Strategy. Furthermore, the Evaluation Strategy should outline a clear plan to develop an initial evaluation framework at the earliest possible time.

7.3 Evaluation Frameworks

An important element to include in the Evaluation Strategy is the draft evaluation framework, as it outlines proposed evaluation questions and identifies the data required to address these questions (see Table 4). The development of the evaluation framework is, ultimately, the responsibility of the head of evaluation. The evaluation framework is an important preparatory tool in the evaluation process because it allows the head of evaluation to give advance consideration to the evaluation approach, to identify data requirements for the evaluation and to determine how these requirements will be met. Certain data requirements for addressing the questions in the evaluation framework will be fulfilled by the indicators identified in the PM Strategy Framework (see Section 6.0 of this guide). Additional data requirements identified in the evaluation framework may require adjustments or additions to the monitoring data being collected on the program. In some cases, the additional data required will be collected by evaluators as part of the evaluation process.

The draft evaluation framework should include:

  • initial evaluation questions covering the five core evaluation issues and other issues as identified by the program manager and others (e.g. the deputy head);
  • indicators;
  • data sources and methods of data collection (Note that in some cases, the data source may be the monitoring data collected on the program as the PM Strategy is implemented.);
  • where applicable, information on what baseline data needs to be collected and timelines for data collection; and
  • where required, a description of simple adjustments that can be made to administrative protocols and procedures (e.g. third-party reporting templates, financial record keeping) by the program area to ensure that the evaluation's data requirements are met.

The head of evaluation should work with the program manager to revisit and revise (as required) the evaluation framework (e.g. on an annual basis) and to develop a final evaluation framework at the start of the evaluation process.

Table 4: Evaluation Framework Template

Columns: Questions | Indicators | Data sources and methods | Baseline data needed | Timelines for data collection | Adjustments to administrative protocols and procedures needed
Note: It is expected that in addressing core evaluation issues 4 and 5 (see Table 5 below), evaluation questions addressing specific program outcomes as well as more general questions about program performance will be developed.

Relevance
  • Question 1: Indicators 1 to n
  • Question 2: Indicators 1 to n
  • Question 3: Indicators 1 to n
  • Question n: Indicators 1 to n

Performance (effectiveness, efficiency and economy)
  • Question 1: Indicators 1 to n
  • Question 2: Indicators 1 to n
  • Question 3: Indicators 1 to n
  • Question n: Indicators 1 to n

The remaining columns (data sources and methods, baseline data needed, timelines for data collection, and adjustments to administrative protocols and procedures) are completed for each question as the framework is developed.

Table 5: Core Evaluation Issues (source: Directive on the Evaluation Function, Annex A)
Relevance
Issue 1: Continued need for program

Assessment of the extent to which the program continues to address a demonstrable need and is responsive to the needs of Canadians

Issue 2: Alignment with government priorities

Assessment of the linkages between program objectives and (i) federal government priorities and (ii) departmental strategic outcomes

Issue 3: Alignment with federal roles and responsibilities

Assessment of the role and responsibilities of the federal government in delivering the program

Performance (effectiveness, efficiency and economy)
Issue 4: Achievement of expected outcomes

Assessment of progress toward expected outcomes (including immediate, intermediate and ultimate outcomes) with reference to performance targets and program reach, program design, including the linkage and contribution of outputs to outcomes

Issue 5: Demonstration of efficiency and economy

Assessment of resource utilization in relation to the production of outputs and progress toward expected outcomes
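
The five core issues in Table 5 suggest a simple completeness test for a draft evaluation framework: every core issue should be covered by at least one evaluation question. Below is a minimal sketch of such a check; the questions and the mapping are hypothetical.

```python
# Illustrative only: checking that draft evaluation questions cover all five
# core issues (Directive on the Evaluation Function, Annex A).
CORE_ISSUES = {
    "continued need for program",
    "alignment with government priorities",
    "alignment with federal roles and responsibilities",
    "achievement of expected outcomes",
    "demonstration of efficiency and economy",
}

# Hypothetical mapping of draft questions to the core issue each addresses.
questions = {
    "Is there a continued need for the program?": "continued need for program",
    "Do program objectives align with federal priorities?": "alignment with government priorities",
    "To what extent were expected outcomes achieved?": "achievement of expected outcomes",
}

uncovered = CORE_ISSUES - set(questions.values())
print(sorted(uncovered))  # core issues still lacking an evaluation question
```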



8.0 Conclusion

This guide is intended to support the development of PM Strategies that will effectively support evaluation. To establish a sound foundation for the PM Strategy and align it with future evaluations, it is important that the document outlining the PM Strategy also include a program profile, a logic model, a Performance Measurement Strategy Framework and an Evaluation Strategy.

The guide will be updated periodically as required. Enquiries concerning this guide should be directed to:

Centre of Excellence for Evaluation
Results-Based Management Division
Treasury Board of Canada Secretariat
222 Nepean Street
Ottawa, Ontario
K1A 0R5

Email: evaluation@tbs-sct.gc.ca



Appendix 1: Sample Logic Model

As stated in Section 5.2, there are many ways to present a logic model; this is simply one example.

Figure 3: Sample Logic Model (text description follows)

The example in Appendix 1 illustrates one of the many ways to present a logic model. As described in further detail under Section 5.2 Logic Model Content, a logic model is one of the four components of the PM Strategy. Its level of detail should be comprehensive enough to adequately describe the program but concise enough to capture the key details on a single page. Appendix 1 provides an illustration of a particular health care program's logic model, featuring its inputs, activities, outputs, immediate outcomes, intermediate outcomes and ultimate outcomes.



Appendix 2: Review Template and Self-Assessment Tool

This review template and self-assessment tool can be used by program managers, performance measurement specialists and evaluation specialists when developing and reviewing the PM Strategy.

Performance Measurement Strategy - Review Template

Program Manager:

Departmental Evaluation Official:

Program Title:

Program Status (new, renewal, other):

Department:

Overall Value and Period of Funding:

Reviewed by:

Date:


For each section below, the reviewer records comments and an overall assessment of Strong, Acceptable or To be improved.

Introduction

1. Program Profile
1.1 Need for the program
1.2 Alignment with government priorities
1.3 Target population(s)
1.4 Stakeholders
1.5 Governance
1.6 Resources

2. Logic Model
2.1 Logic model
2.2 Narrative

3. PM Strategy Framework
3.1 PM Strategy Framework
3.2 Accountabilities and reporting

4. Evaluation Strategy
4.1 Requirements, timelines and responsibilities
4.2 Draft evaluation framework and data requirements

 
Performance Measurement Strategy - Self-Assessment Tool: Introduction
For each element of the PM Strategy, the review criteria are listed below, with space provided for reviewer comments.

Introduction
  • A brief explanation of how the program's scope and complexity, the challenges associated with producing quality data for monitoring and evaluation, and the risks associated with the program were considered when developing the PM Strategy is provided.

 
Performance Measurement Strategy - Self-Assessment Tool: Program Profile
1. Program Profile
1.1 Need for the program
  • Clearly stated and demonstrated past and current need for the program
  • Evidence such as research studies, needs assessment, detailed demographic studies and results of past evaluations provided to support the analysis
 
1.2 Alignment with government priorities
  • Clearly stated objectives
  • Objectives explicitly linked to the department's PAA and related strategic outcome(s)
  • Objectives linked to government priorities, i.e. references to Speech from the Throne or federal budget included
 
1.3 Target population(s)
  • Clearly described target population(s) (those intended to be influenced and benefit from the program) and the characteristics that are relevant to program delivery (e.g. geographical area, age group, gender)
 
1.4 Stakeholders
  • A list of key program stakeholders and a brief description of their roles and responsibilities
 
1.5 Governance
  • Clear description of the role and responsibilities of the federal government in delivering the program
  • Clear description of roles and responsibilities of others (e.g. third-party delivery partners) involved in delivering the program
  • When there are partners, their respective roles and responsibilities for performance monitoring, reporting and evaluation are explained.
 
1.6 Resources
  • The costing table displays annual resources allocated to the department and each delivery partner.
  • All of the following costs are separated out: salaries, O&M, transfers to partners, capital costs, costs for implementing performance monitoring and costs for implementing evaluation activities.
 

 
Performance Measurement Strategy - Self-Assessment Tool: Logic Model
2. Logic Model
2.1 Logic model
  • All steps of the logic model are completed and clearly presented (inputs, activities, outputs and immediate, intermediate and ultimate outcomes).
  • The ultimate outcome is not at a higher level than the expected results of the PAA element to which the program contributes.
  • Program outcomes that are identical to those included as "expected results" in the departmental PAA are accompanied by a footnote that explains how they are aligned.
 
2.2 Narrative
  • The linkages between activities, outputs and outcomes are clearly explained.
  • The external factors that influence whether or not the outcome(s) will be achieved are explained.
  • The narrative is written so that a reader who is unfamiliar with the program will understand it.
 

 
Performance Measurement Strategy - Self-Assessment Tool: PM Strategy Framework
3. PM Strategy Framework
3.1 PM Strategy Framework
  • All required components are included in the table:
      • the PAA elements that are directly linked to the program, including the name of the program as represented in the departmental PAA as well as its associated activities, subactivities and/or sub-subactivities
      • indicators for each output and outcome
      • baselines
      • targets
      • timelines
      • data sources
      • methods
      • frequency of data collection
      • authority responsible for data collection and analysis
      • data management system used
  • If any information is missing, a valid rationale is provided, as are timelines for when the information will be included.
  • The components are consistent with the definitions presented in this guide, and indicators are valid, reliable and relevant.
  • The selection of indicators and the measurement plan are realistic, considering the time and resources available.
 
3.2 Accountabilities and reporting
  • The reporting commitments and how they will be met are clearly described, including who will analyze the data, who will prepare the reports, to whom they will be submitted, by when, what information will be included, the purpose of the reports and how they will be used to improve performance.
  • If relevant, the potential challenges associated with data collection and reporting, as well as mitigating strategies for addressing these challenges are provided.
 

 
Performance Measurement Strategy - Self-Assessment Tool: Evaluation Strategy
4. Evaluation Strategy
4.1 Requirements, timelines and responsibilities
  • The following components have been included and are accurate:
      • if applicable, evaluation commitments as described in Treasury Board submissions, policies or other documents; and
      • timing and responsibilities for the development of the evaluation framework, for the completion of the evaluation and for other evaluation activities (where applicable).
 
4.2 Draft evaluation framework and data requirements (if applicable)
  • The evaluation issues appropriately cover relevance and performance (effectiveness, efficiency and economy).
  • The selection of indicators aligns with the performance measurement indicators identified in the PM Strategy Framework (where appropriate).
  • The draft evaluation framework includes concrete evaluation questions, indicators, data sources and methods.
  • Where applicable, the draft evaluation framework identifies the indicators for which the program manager or head of evaluation will collect baseline data prior to the evaluation and the timelines for data collection.
  • Where required, the draft evaluation framework includes a description of simple adjustments that can be made to administrative protocols and procedures (e.g. third-party reporting templates, financial record keeping) to support the evaluation activities.
 


Appendix 3: Glossary

Activity

An operation or work process that is internal to an organization and uses inputs to produce outputs (e.g. training, research, construction, negotiation, investigation).

Baseline

The starting point against which subsequent changes are compared and targets are set.

Benchmarking

The action of identifying, comparing, understanding and adapting outstanding practices found either inside or outside an organization. Benchmarking is based mainly on common measures and the comparison of obtained results both internally and externally. Comparing results against those of a best practice organization will help the organization to know where it is in terms of performance and to take action to improve its performance.

Cost

A resource expended to achieve an objective.

Delivery partner

A public or private organization that assists the Government of Canada in delivering the program.

Departmental Evaluation Plan

A clear and concise framework that establishes the evaluations a department will undertake over a five-year period, in accordance with the Policy on Evaluation and its supporting directive and standard.

Departmental Evaluation Committee

A senior executive body chaired by the deputy head or senior-level designate. This committee serves as an advisory body to the deputy head with respect to the Departmental Evaluation Plan, resourcing and final evaluation reports and may also serve as the decision-making body on other evaluation and evaluation-related activities of the department. Refer to Annex B of the Policy on Evaluation for a full description of the roles and responsibilities of the committee.

Economy

Minimizing the use of resources. Economy is achieved when the cost of resources used approximates the minimum amount of resources needed to achieve expected results.

Effectiveness

The extent to which a program is achieving expected outcomes.

Efficiency

The extent to which resources are used such that a greater level of output is produced with the same level of input or a lower level of input is used to produce the same level of output. The level of input and output could be increases in quantity or quality, decreases in quantity or quality, or both.

Evaluation

The systematic collection and analysis of evidence on the outcomes of policies and programs to make judgments about their relevance, performance and alternative ways to deliver programs or to achieve the same results.

Expected result

An outcome that a program, policy or initiative is designed to produce.

Indicator

A qualitative or quantitative means of measuring an output or outcome with the intention of gauging the performance of a program.

Input

The financial and non-financial resources used by organizations, policies, programs and initiatives to produce outputs and accomplish outcomes (e.g. funds, personnel, equipment, supplies).

Logic model

A depiction of the causal or logical relationship between the inputs, activities, outputs and outcomes of a given policy, program or initiative.

Management, Resources and Results Structures (MRRS)

Management, Resources and Results Structures provide a common, government-wide approach to the collection, management and public reporting of financial and non-financial information. The MRRS consists of a department's strategic outcome(s), Program Activity Architecture (PAA), supporting Performance Measurement Framework (PMF) and governance structure.

Needs assessment

Policy tool used to shape a new program. A needs assessment involves primary data collection to identify the need for the program and includes a preliminary description of the program intervention required to address that need. It also assesses whether there are any existing services in the federal, provincial or municipal government spheres or in the private sector that are within the scope of the proposed program intervention.

Objective

The high-level, enduring benefit toward which effort is directed.

Outcome

An external consequence attributed, in part, to an organization, policy, program or initiative. Outcomes are not within the control of a single organization, policy, program or initiative; instead, they are within the area of the organization's influence. Usually, outcomes are further qualified as immediate, intermediate, ultimate (or final), expected, direct, etc. There are three types of outcomes related to the logic model, defined as follows:

  1. Immediate outcome: an outcome that is directly attributable to a policy, program or initiative's outputs. In terms of time frame and level of reach, these are short-term outcomes involving an increase in a target population's awareness.
  2. Intermediate outcome: an outcome that is expected to logically occur once one or more immediate outcomes have been achieved. In terms of time frame and level of reach, these are medium-term outcomes and often involve a change in the behaviour of the target population.
  3. Ultimate (or final) outcome: the highest-level outcome that can be reasonably and causally attributed to a policy, program or initiative and is the consequence of one or more intermediate outcomes having been achieved. These outcomes usually represent the raison d'être of a policy, program or initiative. They are long-term outcomes that represent a change in the target population's state. Ultimate outcomes of individual programs, policies or initiatives contribute to the higher-level departmental strategic outcome(s).

Output

Direct products or services that stem from the activities of an organization, policy, program or initiative and are usually within the control of the organization itself (e.g. pamphlet, water treatment plant, training session).

Performance

The extent to which economy, efficiency and effectiveness are achieved by a policy or program.

Performance measure

See indicator.

Performance Measurement Framework (PMF)

A requirement of the Policy on Management, Resources and Results Structures, a PMF sets out an objective basis for collecting information related to a department's programs. A PMF includes the department's strategic outcome(s), expected results of programs, performance indicators and associated targets, data sources and data collection frequency, and actual data collected for each indicator.

Performance Measurement Strategy (PM Strategy)

A PM Strategy is the selection, development and ongoing use of performance measures to guide program or corporate decision making. In this guide, the recommended components of the PM Strategy are the program profile, logic model, Performance Measurement Strategy Framework and Evaluation Strategy.

Program

A group of related activities that are designed and managed to meet a specific public need and are often treated as a budgetary unit.

Program Activity Architecture (PAA)

A structured inventory of all programs of a department. Programs are depicted according to their logical relationship to each other and the strategic outcome(s) to which they contribute. The PAA includes a supporting Performance Measurement Framework (PMF).

Program of grants and contributions / Transfer payment programs

As defined in section 42.1 of the Financial Administration Act, a program of grants or contributions made to one or more recipients that is administered so as to achieve a common objective and for which spending authority is provided in an appropriation Act.

Program theory

An explicit theory of how a program causes the intended or observed outcomes, including assumptions about resources and activities and how these lead to realizing intended outcomes.

Reach

The individuals and organizations targeted and directly affected by a policy, program or initiative.

Relevance

The extent to which a policy or program addresses a demonstrable need, is appropriate to the federal government and is responsive to the needs of Canadians.

Reliability

The degree to which the results obtained by a measurement procedure can be replicated.

Result

See outcome.

Risk-based approach to determining evaluation approach and level of effort

A method whereby risk is considered when determining the evaluation approach for individual evaluations. Departments should determine, as required, the specific risk criteria relevant to their context. Specific risk criteria may include the size of the population that could be affected by non-performance of the program, the probability of non-performance, the severity of the consequences that could result, the materiality of the program and its importance to Canadians. Additional criteria could include the quality of the last evaluation and/or other studies, their findings, when they were conducted and whether these findings remain relevant, and the extent of change experienced in the program's environment.

Strategic outcome

A long-term and enduring benefit to Canadians that stems from a department's mandate, vision and efforts. It represents the difference a department wants to make for Canadians and should be a clear, measurable outcome that is within the department's sphere of influence.

Target

A measurable performance or success level that an organization, program or initiative plans to achieve within a specified time period. Targets can be expressed quantitatively or qualitatively.

Target population

The group of individuals that a program intends to influence and who will benefit from the program.

Value for money

The extent to which a program demonstrates relevance and performance.



Appendix 4: References

Legislation

  • Financial Administration Act

Treasury Board policy instruments

  • Policy on Evaluation (2009)
  • Directive on the Evaluation Function (2009)
  • Standard on Evaluation for the Government of Canada (2009)
  • Policy on Transfer Payments (2008)
  • Directive on Transfer Payments (2008)
  • Policy on Management, Resources and Results Structures (2008)

Guidance documents

  • Office of the Comptroller General, Guidance on Performance Measurement Strategies under the Policy on Transfer Payments

Other relevant publications

  • Independent Blue Ribbon Panel on Grant and Contribution Programs, From Red Tape to Clear Results (2006)

