Outcome Management Guide and Tools
Table of Contents
- 1. Introduction
- 2. Outcome Management Concepts
- 3. The Outcome Management Process
- Appendix A: Outcome Management Readiness Checklist
- Appendix B: Logic Model / Outcomes Map Checklist
- Appendix C: Sample Outcome Types
- Appendix D: Initiative Register
- Appendix E: Outcome Register
- Appendix F: Risk Questionnaire
- Appendix G: Risk Register
- Appendix H: Value Case Template
- Appendix I: Outcome Realization Plan Table of Contents
- Appendix J: Performance Metrics: Traps to Avoid
- Appendix K: Outcome and Risk Reporting Checklist
- Appendix L: Outcome Management Office Checklist
- Appendix M: Harvesting Outcomes Checklist
- Appendix N: Glossary
- Appendix O: References
1. Introduction
1.1 Why Use Outcome Management?
Outcome Management is the discipline of planning, managing and achieving the intended outcomes of an initiative or a program in the public sector. It brings the same focus and discipline to attaining these outcomes that Project Management brings to delivering capabilities and systems on time and on budget.
Focusing on Project Management alone yields only the deliverables of the project; it does not produce the outcomes themselves, or the “big picture” of why we are undertaking the initiative. (In this context, the word “outcomes” can be read as benefits or results.)
In the private sector, outcomes typically mean the financial return on investment that is produced. In recent years, Kaplan and Norton’s Balanced Scorecard has broadened management thinking to include customer satisfaction, the efficiency of internal business processes, and the learning and innovation aspects of the business. In the public sector, outcomes represent the mandate that citizens entrust to government: to deliver outcomes in the social, environmental, political and economic realms effectively.
This public-sector requirement to deliver outcomes, as opposed to financial return, is inherently more complex. There are no standard finance-textbook formulas for calculating outcomes as there are for financial return. Many outcomes are not financial at all; they include social and environmental effects, citizen satisfaction, support for Canadian values, and so on. Other countries (the USA, the UK, France and Australia are leading examples) are well along the adoption curve of Outcome Management in their own ways.
The companion document to this guide, the Outcomes Management: Lessons Learned and Best Practices report, identified several lessons from the initial Outcome Management method that will help focus public servants on the outcomes of the work they do, not just on its cost. This focus also encourages a “whole of government” approach to establishing and attaining outcomes, rather than a “silo” approach that will not maximize the outcomes attainable. It is a shift from managing only outputs and deliverables to proactively managing the outcomes as well.
Outcome Management in the Government of Canada has not yet had a published method supporting the execution of the approach. This document is intended to be the first published “how-to” guide on Outcome Management, with a sequential flow, descriptions of each stage and step, and numerous checklists and templates to follow and use.
1.2 Purpose of this Guide
The purpose of this guide is to provide practitioners of Outcome Management with a process flow, a set of instructions and tools, and tips and pointers on how to conduct an Outcome Management exercise within the context of the Government of Canada (GoC). These practitioners can be public servants within line departments, members of a Centre of Excellence in the Outcome Management Practice being contemplated by Treasury Board Secretariat, or external management consultants assisting GoC departments or agencies.
The primary target audience is new and experienced practitioners in the area of Outcome Management. The guide ensures that a consistent approach is taken, and that a consistent set of tools and deliverables is used, across departments and agencies in the GoC. Secondarily, client departments can use it when getting ready for an Outcome Management exercise, to know how to prepare and what to expect from Outcome Management.
Finally, it can be used by the Treasury Board Secretariat (TBS) to integrate with other existing TBS guides and frameworks, such as the Management Accountability Framework (MAF), Enhanced Management Framework (EMF) and Results-Based Management (RBM). At the time of writing, TBS was looking at revisiting the intersection and integration of a number of frameworks and tools, including Outcome Management.
1.3 When Should You Use This Guide?
This guide should be used by Senior Managers, Project Directors and practitioners when they have identified a need to conduct Outcome Management for projects or initiatives and are preparing to launch the exercise. Outcome Management can be applied at various stages of an initiative’s lifecycle, but it is most successful when conducted early, while the initiative is being conceptualized (i.e. while the tangible and intangible deliverables are being contemplated). Outcome Management is then used to, among other things, articulate the value of the initiative and develop a comprehensive business or value case. It also forces the identification of intermediate outcomes that serve as milestones or leading indicators towards attaining the final outcomes, and permits tracking of progress towards them.
It should then be used at regularly planned intervals (or gates) to determine any variances from the original outcome targets and help to determine corrective action.
1.4 Conventions of the Guide
The Guide uses the following format to describe the Outcome Management process:
Purpose
Short summary of the overall purpose of the Stage.
Step 1
<Step Name>
<Description>
<Tool Name. These are hyperlinks>
<Short description>
Appendix N contains a glossary of terms and Appendix O contains references for the guide.
2. Outcome Management Concepts
2.1 Overview of Outcome Management
What Is An Outcome?
An outcome is “something that follows as a result or consequence.” (Footnote 1)
In the fields of science and medicine, an outcome is the result expected at the end of an experiment or treatment. Similarly, in an organizational context, an outcome involves an intentional change being imposed on the system (people, processes, technology), with a resulting end state that can be measured. Synonyms for outcome include aftermath, consequence, results, and score. The word “benefit” is commonly substituted for outcome; however, the term suggests a positive result, while “outcome” is neutral and can represent a positive or negative effect.
Intermediate vs. End Outcomes
To achieve the end results of an initiative, it is crucial to identify and track intermediate outcomes that can be used as milestones along the road.
Leading indicators allow changes to be detected earlier in the lifecycle, rather than having to wait until the end to discover whether the initiative or program was a success or a failure. There are many types of intermediate outcomes that can be included in the logic model of outcomes. Appendix C contains examples of the various types of intermediate outcomes that can be inserted in the logic model, as appropriate. The Office of the Auditor General of Canada has developed what it refers to as a results chain model, useful in illuminating immediate, intermediate, and ultimate outcomes; it is adapted and described in the figure below.
Quantitative and Qualitative Outcomes
Adopting both quantitative and qualitative measures for outcomes presents a broader view of expected value.
Quantitative outcomes are measured in numeric terms; for example, dollars, hours, or turnover rates. Qualitative outcomes are measured in non-numeric terms, which are often monitored through in-depth interviews, direct observation, and/or written documentation. A common assumption holds that quantitative measures are more solid and reliable than qualitative measures; however, in 1992 Robert S. Kaplan and David Norton demonstrated that relying primarily on financial measures did not sufficiently support strategic management. Financial measures alone, they noted, are inadequate “for guiding and evaluating the journey that the information age companies must make to create future value through investment in customers, suppliers, employees, processes, technology, and innovation.” (Footnote 3) Another combined approach, the Triple Bottom Line, which encompasses financial, environmental, and social measures, was coined in 1987 by the consulting firm SustainAbility in the U.K., and has gained worldwide recognition.
Financial and Non-Financial Outcomes
Quantitative outcomes can be further separated into financial and non-financial outcomes.
Financial outcomes can be measured in dollars and many can be fed into additional evaluative criteria used in cost-benefit analyses, such as Net Present Value (NPV), Present Value Ratio (PVR), Internal Rate of Return (IRR), and Return on Investment (ROI). Non-financial outcomes are measured in non-dollar terms, with examples such as reduced complaints, increased employee satisfaction, and increased throughput. It is possible to extrapolate financial benefits to some of these measures; however, this is not necessary in the Outcome Management process. What matters most in the process is whether or not an outcome measurement makes sense, and if it is possible to affect the outcome through an initiative.
For additional information on financial and non-financial outcomes, see the Treasury Board Secretariat Guide - An Enhanced Framework for the Management of Information Technology Projects: Creating and Using a Business Case for Information Technology Projects.
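To illustrate the arithmetic behind criteria such as NPV and ROI, the following minimal sketch uses invented cash flows and an assumed discount rate; none of the figures are drawn from this guide.

```python
# Hypothetical cash flows for an initiative, in dollars. Year 0 is the
# up-front investment (negative); years 1-4 are realized financial outcomes
# (e.g. labour savings). All figures are invented for illustration.
cash_flows = [-500_000, 100_000, 175_000, 200_000, 225_000]
discount_rate = 0.07  # assumed cost of capital

# Net Present Value: each year's cash flow discounted back to today.
npv = sum(cf / (1 + discount_rate) ** year
          for year, cf in enumerate(cash_flows))

# Return on Investment: total net gain over the initial investment.
investment = -cash_flows[0]
roi = (sum(cash_flows[1:]) - investment) / investment

print(f"NPV: ${npv:,.0f}")  # a positive NPV supports the value case
print(f"ROI: {roi:.1%}")
```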
What Is Outcome Management?
Outcome Management is the set of activities for the planning, managing, and realizing of the desired outcomes from initiatives.
Outcome Management in the Government of Canada currently has two components that, while related, are different. From the perspective of a GoC Program, Outcome Management is the set of activities designed to monitor, and adjust as required, the way in which the Program, and its associated Services, Processes and Activities, contribute to meeting the needs of Canadians. From the perspective of a GoC Project or Initiative, Outcome Management is the set of activities designed to manage and oversee the change in a way that ensures it contributes to improving the capability and/or capacity of a Program to meet the needs of Canadians. The difference is that the former manages meeting the needs of Canadians, while the latter manages the development of capabilities that support meeting those needs.
This guide focuses on Outcome Management for Projects, where a GoC project (or a collection of projects) consists of the set of activities for planning, managing, and realizing the desired outcomes from a change. In other words, it is focused on ensuring that a project contributes to improving the capability of the GoC to deliver Programs and Services that meet the needs of Canadians. To date, the methods and tools of the TBS Outcome Management Practice have focused on supporting these activities. In this context, an Outcome has been defined as “something that follows as a result or consequence” of an action; in other words, an Outcome is the consequence of an intentional change imposed on people, processes, and technology. Outcome Management of Projects brings the same focus and discipline to aligning a project with the achievement of results that Project Management brings to delivering a capability or system on time and on budget. Focusing on Project Management alone yields only the deliverables of the project, not the outcomes themselves or the “big picture” of why we are undertaking the initiative.
A Logic Model (also known as an outcomes map or strategy map) is a visual model that shows how a project (or a set of projects), or all activities within a project, will drive the attainment of outcomes. In other words, it shows how each output of an activity contributes to an immediate outcome, how these immediate outcomes contribute to an intermediate outcome, and how these intermediate outcomes contribute to a final outcome. An Immediate Outcome is the first-level effect of an Output from a Project or a Project Activity. An Intermediate Outcome is a capability delivered by a project, or a business impact resulting from a group of projects within the initiative: the benefits and changes resulting from the outputs. A Final Outcome is the end result expected from an initiative: the final or long-term consequences.
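A logic model is, in effect, a directed graph from outputs to final outcomes. As a minimal sketch of that structure (the node names are invented, and a simple one-parent chain is assumed for readability):

```python
# A logic model as a directed graph: each node maps to the higher-level
# outcome(s) it contributes to. Node names are invented for illustration.
logic_model = {
    "Online filing portal delivered": ["Citizens can file online"],    # output -> immediate
    "Citizens can file online": ["Reduced paper filing volume"],       # immediate -> intermediate
    "Reduced paper filing volume": ["Lower cost per transaction"],     # intermediate -> final
}

def contribution_path(node, model):
    """Trace how one output contributes, level by level, to a final outcome."""
    path = [node]
    while node in model:
        node = model[node][0]  # follow the first contribution link
        path.append(node)
    return " -> ".join(path)

print(contribution_path("Online filing portal delivered", logic_model))
```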
The Outcome Management Process is applied to the entire initiative holistically, and is divided into the following stages:
- Stage 0: Launch Outcome Management
- Stage 1: Develop Outcome Realization Model
- Stage 2: Develop Outcome Realization Plan
- Stage 3: Monitor Delivery of Outcomes
- Stage 4: Realize and Optimize Outcomes
The following diagram outlines the Outcome Management Process:
Figure 2: The Stages of the Outcome Management Process
Stage 0, Launching Outcome Management, involves confirming that the organization is ready to undertake the exercise. Stage 1 involves the identification of desired outcomes and the creation of the comprehensive initiative view, which defines all the projects, activities, and capabilities required to achieve the outcomes. Stage 2 provides the value case for the initiative, as well as the framework for ensuring that the outcomes are properly managed, monitored, and reported. Stage 3 involves the activities to monitor and track the progress of the initiatives and to re-affirm the logic of how the outcomes will be realized. The final stage encompasses the activities in harvesting the benefits towards fully realizing the end results. A more detailed discussion of all the stages can be found in Section 3.
The broad umbrella of techniques and methods to ensure that outcomes are planned and realized is referred to as Value Management. Value in this context means the set of outcomes desired by the organization, which can be a combination of qualitative and quantitative. Management refers to the proactive planning, organizing of activities, tracking of information, and reporting on desired outcomes. Value Management comprises three components: Outcome Management for a single initiative (which this document describes), Portfolio Management for multiple initiatives, and Value Governance to tie these into the organization’s processes and governance framework.
2.2 Principles
To establish a successful Outcome Management process, an organization must expand its focus beyond delivering projects or initiatives on time and on budget to include the delivery of outcomes or value desired from the activities and initiatives, which is a redefinition of success. This expanded focus requires the incorporation of four key principles into an organization’s mindset:
- Begin at the end: focus on outcomes
- Move from a project view to the initiative view
- Manage at the portfolio level
- Impose discipline: governance, measurement, and accountability
Begin at the End: Focus on Outcomes
In Stephen R. Covey’s The Seven Habits of Highly Effective People, the second habit presented is “begin with the end in mind”: the point is achieving the outcomes, not merely doing the activities. In many projects, there is a tendency to measure the value of the solution being delivered, rather than the business results. A project should begin with identifying the business outcomes that the organization is looking to achieve. For example, while “delivering an Intranet” is a technology-driven capability that is difficult to measure and assess, an outcome such as “reducing the time it takes employees to find useful information” is measurable and quantifiable.
Move from a Project View to the Initiative View
Even if all projects could be evaluated based on business results, isolated project views are not sufficient to produce these business outcomes. The initiative view identifies the full set of additional activities and projects that are necessary for, and contribute to, achieving results. This view ensures that all the activities and capabilities required to achieve desired outcomes are identified, which include:
- Business projects (e.g. process redesign, restructuring)
- Technology projects
- Legislative, policy, regulatory, and directive changes
- Organizational change management
- Intermediate outcomes that provide capabilities to be exploited for business outcomes.
The initiative view gives a “big picture” perspective: how the pieces are linked together and how people and/or places in the organization are affected. This allows organizations to thoroughly plan, organize, and manage the initiative because the technology, organization, processes, and people have been evaluated in terms of how they fit together.
Manage at the Portfolio Level
An organization will always identify more initiatives than it can reasonably accomplish. Constraints on time, money, and people hinder an organization’s ability to do everything it wants, in cases such as:
- Not enough appropriately skilled technical resources to deliver the initiatives.
- Not enough business resources to define and implement the changes.
- Insufficient funding levels to afford all proposed initiatives.
- Tight timeframes due to impending deadlines that impose choices to be made, as not all components can be implemented in time.
- Not being able to cope with change. People can manage a large amount of change, but not necessarily all at once. For example, deploying a new financial system at the same time as restructuring the organization may lead to decreased productivity and bring the level and timing of desired outcomes from both initiatives into question.
The portfolio view presents the set of initiatives that the organization has selected to support within given resource constraints. However, since portfolios are not static, they need to be regularly managed to reflect changes in the organization’s strategy, priorities, needs, goals, and opportunities.
Impose Discipline: Governance, Measurement and Accountability
Simply adopting the initiative view of work and portfolio management does not solve all problems related to Outcome Management. Standard methodologies, tools, techniques, and clearly defined responsibilities are required to bring the initiative (and ultimately the portfolio) to fruition, and discipline is needed to ensure they are used. This discipline relies heavily on existing project, operations, and financial management practices, supplemented by the following Outcome Management components:
- A value modeling technique, such as a logic model, to determine the following:
- Projects and initiatives
- Outcomes and capabilities
- Possible paths to reach the outcomes
- How the components contribute to building an outcome
- A rigorous gating process with clearly articulated deliverables and controls for portfolios, initiatives, and projects, to ensure that work is on time, on budget, and continues to be of value.
- The Outcome Management process that ensures that outcomes can be:
- Continuously monitored
- Measured
- Realized
- Roles with defined accountabilities to execute the process.
3. The Outcome Management Process
This section provides a detailed description of the stages in the Outcome Management Process, along with their steps, tools and checklists.
3.1 Stage 0: Launch Outcome Management
Purpose
To confirm that the organization is ready to undertake the Outcome Management exercise, and identify any areas that are not yet ready.
Step 1
Ensure Readiness
Review the Outcome Management Readiness Checklist to confirm that the organization is ready to proceed with the Outcome Management process.
Appendix A Outcome Management Readiness Checklist
Checklist to assist the main organizer / facilitator in determining whether the initiative and stakeholders are ready for the Outcome Management exercise, and in highlighting any deficiencies that need to be corrected before starting Outcome Management.
3.2 Stage 1: Develop Outcome Realization Model
Purpose
To identify the desired outcomes and describe the logic of how the outcomes will be realized. Note that the five steps in this stage are described in their typical sequence, but can be done in parallel, iteratively or in any order.
Step 1
Create Logic Model
The logic model (also known as an outcomes map or strategy map) is a visual model that shows how all projects and activities within an initiative will drive the attainment of outcomes. It is used for defining the scope of an initiative, identifying accountabilities, surfacing assumptions, as well as communicating and managing the Outcome Management process.
The logic model can be built in any one of three ways:
- Left-to-Right: Referred to as "initiative-driven," the organization in this case is taking advantage of an opportunity and attempting to determine whether a proposed initiative will provide desirable outcomes. The model is built by moving left-to-right to understand the capabilities to be delivered by the initiative, the business outcomes that can be achieved, and the strategic goals and priorities to which the outcomes contribute.
- Middle-Out: Referred to as "issue-driven," the organization here seeks to address a problem. The model typically starts in the middle and moves left to work out the required project activities and right to work out the final outcomes and contribution to strategic goals and priorities.
- Right-to-Left: Referred to as "target-driven," the organization in this case has identified a new strategic objective or has re-prioritized an objective. The model is built by starting with the new, prioritized goal (e.g. increased customer service), and then moving right-to-left to complete the logic model.
The outcomes expected from the initiative must be clearly defined as well as the contribution they will make to the organization's strategic goals and priorities. Lack of clarity on final outcomes can be the determining factor in the success or failure of a business (value) case, and of the initiative.
To define the final outcomes, the reasons for undertaking the initiative should first be explored. Often the purpose of the initiative can be to take advantage of an opportunity, address a problem, or the initiative can be mandatory (e.g. resulting from new legislation). Next, the outcomes' contribution to organizational priorities should be considered, which will demonstrate the strategic alignment of the initiative. If the initiative does not positively contribute to strategic goals and priorities, it should be stopped, deferred, re-evaluated, or possibly never even undertaken in the first place.
This very simple logic model / Outcomes Map illustrates a typical "look and feel" of the diagramming technique. It can take any one of a number of forms, but the principles remain the same.
Appendix B Logic Model / Outcomes Map Checklist
Checklist on how to prepare a Logic Model / Outcomes Map.
Appendix C Sample Outcome Types
Contains examples of the various types of intermediate outcomes that may be included in the logic model, as appropriate.
Step 2
Create Initiative Register
The Initiative Register is a list and description of all the initiatives / projects required to deliver the capabilities that translate into the immediate and then the intermediate outcomes. An overview description of each initiative is captured and documented in the Initiative Register, including considerations such as cost, time, schedule and resources.
Appendix D Initiative Register
A template of the Initiative Register that documents key descriptions of the initiatives (projects).
Step 3
Create Outcome Register
The Outcome Register lists the various attributes of the key outcomes from the logic model that have been selected to be measured. All final outcomes should be listed, along with those intermediate outcomes that can be measured with an appropriate level of effort.
Appendix E Outcome Register
A sample Outcome Register that lists the various attributes of the key outcomes from the logic model that have been selected to be measured.
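For practitioners who maintain the register in a simple script or spreadsheet, one row of an Outcome Register might be sketched as follows. The fields shown are a plausible subset of the attributes described in this guide, not the Appendix E template itself, and the entry is invented.

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeEntry:
    """One row of an Outcome Register. Fields are illustrative, not the
    Appendix E template."""
    outcome: str        # outcome as named in the logic model
    outcome_type: str   # "intermediate" or "final"
    owner: str          # a position, not a person (see Stage 2)
    measure: str        # how the outcome will be measured
    baseline: float
    target: float
    target_date: str
    readings: list = field(default_factory=list)  # (period, value) history

register = [
    OutcomeEntry("Reduced time to find information", "final",
                 "Director, Information Management",
                 "Average search time (minutes)",
                 baseline=12.0, target=4.0, target_date="2025-Q4"),
]
```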
Step 4
Assess Risk
Often conducted in parallel with the creation of the logic model, risk assessment is an important component of ensuring the success of an initiative. Two levels of risk management must be considered: risks that hinder an effective Outcome Management process and risks to the initiative itself which, if they materialize, could impact the achievement of the initiative outcomes. Both types of risks must be identified up-front and then managed throughout all phases of the project so that action can be taken to reduce impact on desired and expected outcomes.
There are numerous questionnaires to help identify risks to the Outcome Management process and risks to the initiative itself. Risk mitigation plans must be identified and executed as required. The risk mitigation plan outlines the activities that will be conducted to minimize or eliminate the impact of the risk on the initiative and its expected outcomes. Plans for high-risk items can break down activities into specific tasks, contain a schedule, and assign specific resources to each task. The activities can then be incorporated into the initiative's logic model, plan, and schedule. The mitigation plans are also captured in the Risk Register, along with who is responsible for monitoring each risk as part of the continual initiative / project status reporting process.
Appendix F Risk Questionnaire
A sample risk questionnaire to guide in the identification of risks in achieving effective Outcome Management.
Step 5
Create Risk Register
Information about the risks and the results of the risk assessment in the previous step (including an assessment of probability and impact) is recorded in the Risk Register.
In some cases, the Project Management Office maintains a risk register associated with the execution of the project itself. This register may vary in format, from an Excel spreadsheet to a complex database. The risks to Outcome Management can be integrated into this register or maintained separately, as appropriate.
Appendix G Risk Register
A sample Risk Register that documents the assessment results.
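As one illustration of how such a register can be kept in a simple script or spreadsheet, the sketch below scores each risk as probability times impact. The five-point scale and the entries are assumptions for illustration, not a scheme mandated by this guide.

```python
# A minimal Risk Register sketch. Scoring exposure as probability x impact
# on a 1-5 scale is a common convention, assumed here for illustration.
risks = [
    # (risk, probability 1-5, impact 1-5, owner, mitigation)
    ("Key outcome owner position vacant", 3, 4,
     "OMO Head", "Escalate staffing to the senior management group"),
    ("Baseline data source unreliable", 2, 5,
     "Project Director", "Commission a third-party baseline survey"),
]

# Report risks in descending order of exposure.
for risk, prob, impact, owner, mitigation in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"[exposure {prob * impact:>2}] {risk} "
          f"(owner: {owner}) -> {mitigation}")
```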
3.3 Stage 2: Develop Outcome Realization Plan
Purpose
To develop a framework for ensuring that outcomes expected from a program are monitored and reported, and that the required change is managed successfully.
Step 1
Create Value Case
The overall framework for the initiative is captured and documented in the Value Case, which is an enhanced business case that focuses on the value of the undertaking.
Appendix H Value Case Template
A template for the Value Case that will structure all aspects of the initiative, including non-financial outcomes.
Step 2
Create outcome realization plan
The Outcome Realization Plan is a living document that goes through a series of iterations as more information is known and becomes more detailed as the implementation of the business changes nears. The detailed Outcome Realization plan is prepared for outcome owners to track how change is being accepted and adopted into their organization(s), in addition to the outcomes being realized.
The Outcome Realization Plan may also contain the Change Management Plan, which defines the activities to be undertaken to support the change and meet the requirements of each particular work group in supporting acceptance and adoption of the change.
In establishing the Outcome Realization Plan, the organization must:
- Define which outcomes to track
- Identify outcome owners
- Establish outcome target metrics and timeframes
- Define the outcome reporting process and schedule
- Document reinvestment opportunities.
Define Which Outcomes to Track
While final outcomes are always tracked, in monitoring the progress of Outcome Management it is not necessary to track every single outcome along the way. The key intermediate outcomes and assumptions for tracking progress can be found through analyzing the logic model. These key outcomes can be thought of as the milestones towards realizing full outcomes, while the key assumptions and risks are those that could prevent or significantly hinder outcome attainment. By tracking only the key intermediate outcomes, final outcomes, and assumptions, the administrative overhead of collecting data and preparing and reviewing reports is decreased.
Identify Outcome Owners
The Outcome Owner is the individual who accepts responsibility for actively ensuring that one or more outcomes will be realized, and must therefore have the requisite authority to take action. Since projects and initiatives can span multiple years, and personnel or sections can change, ownership is tied to a position, not a person, and it is important that the responsibility maintains visibility throughout the process.
Outcome ownership resides at the appropriate management level: these managers have identified a need or desire for an outcome and sponsor the initiative to drive business outcomes. Initiative managers, on the other hand, are responsible for completing the tasks on budget, on schedule, and within scope on behalf of the outcome owner; the need or desire for the outcome remains with the Outcome Owner sponsoring the initiative. Moving from tactical to strategic levels, outcome ownership moves upwards through the organization's hierarchy to executive levels. Tying outcome ownership to multiple levels in the organization creates:
- Organization-wide buy-in and endorsement of the initiative
- A greater understanding of the strategic direction of the initiative, and how individuals fit and contribute to realizing that direction
- An increased likelihood of success.
To help support ownership and participation in the Outcome Management process, there must be active accountability, requiring the refinement of position descriptions and job performance metrics for outcome owners. In enterprise-wide programs, it is possible for outcomes to have multiple owners. This is acceptable because a single outcome can have multiple performance metrics and targets. In this scenario, each outcome owner would be responsible and accountable for achieving his or her target. In addition, there is often an additional accountability for reporting on the metric, to distinguish between the two activities and responsibilities.
Establish Outcome Target Metrics and Timeframes
Initially, measures and metrics for outcomes are established in creating the business or value case for the initiative. As the initiative lifecycle proceeds, the baseline and target values can be confirmed or refined to ensure continued relevance and reasonableness. This is important for managing staff expectations and making certain that the business or value case continues to be relevant. Outcome measures can be:
- Binary – yes/no
- Quantitative – measured in numeric terms, such as dollars, hours, or turnover
- Qualitative – measured in non-numeric terms obtained through interviews, documentation, or direct observation
An outcome can have more than one measure.
Increased employee satisfaction, as an example, can have quantitative measures, such as number of complaints and employee retention, in addition to having qualitative measures, such as exit interview notes. This combination of measures can provide a useful context in explaining why progress is ahead, behind, or on schedule.
The measures should make sense and be pertinent to the target audience.
Not every measure needs to be reduced down to a dollar figure to be considered an outcome. For example, a police agency seeking to increase public safety could calculate this back to a dollar saving using insurance industry costs and estimates for types of offences. However, a more meaningful measure may simply be the number of reported offences.
Current operating performance measures can be reused.
For example, if a finance department tracks the number of expense reports processed per month, then this measure can be used to track staff efficiency, assuming the effect of the initiative can be isolated and all other factors remain constant.
Establish Outcome Metrics
Quantitative measures can be fairly simple to identify and agree upon, while qualitative outcomes can have a more descriptive metric and be based more on anecdotal evidence. Since an outcome can have more than one measure, anecdotal metrics can be used in conjunction with quantitative metrics to provide a more complete context around the realized outcome.
Establish Baseline Value
Various techniques can be used in establishing a baseline value, which serves as the comparison point for future measurements. For manual business processes, time and motion studies can be conducted, or working groups can provide estimates of completion times. For automated business processes, business activity monitoring software and audit trails can give insight into current processing and turnaround, or response levels. Surveys can also be conducted to determine employee, customer, or stakeholder satisfaction. If it is a new measure, gathering the baseline from the first timeframe of the initiative is a legitimate approach. These activities should be included in the initiative plan and be conducted by either the project team, a third party (for surveys), or by another organizational group.
Establish Targets
Performance targets for the key intermediate and final outcomes are set in order for "success" to be defined. Project teams can provide these data based on their analysis of business processes and/or system activity and performance levels. To help monitor progress, performance plateaux can be used, which may be linked to implementation and roll-out plans, or be linked to increased capability across the organization. Tolerance levels can also be established, which determine the level of action on deviations from progress towards the targets. These performance targets should be realistic and based on current assessments of risk and capability. The targets should be reviewed during the lifecycle of the initiative, because circumstances, assumptions or risks can change and further analysis, design, and building can yield additional information on the practicality of the measures.
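As a sketch of how a tolerance level might be applied in practice, the function below compares a current reading against the planned plateau value for the reporting period. The plus-or-minus 10% band, the two-tier escalation, and the values are assumptions for illustration; real tolerances are set per outcome in the plan.

```python
def tolerance_status(current, planned, tolerance=0.10):
    """Compare a current measure against the planned plateau value for this
    reporting period, taken from the Outcome Realization Plan. The +/-10%
    band is an assumed example; real tolerances are set per outcome."""
    deviation = abs(current - planned) / planned
    if deviation <= tolerance:
        return "on track"
    if deviation <= 2 * tolerance:
        return "watch: review with the outcome owner"
    return "escalate: corrective action required"

# e.g. the Q2 plateau was 8 minutes average search time; we measured 9.5.
print(tolerance_status(current=9.5, planned=8.0))  # ~19% deviation -> watch
```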
Establish Timeframes for Outcomes
The Outcome Realization Plan should contain a schedule for realizing outcomes, including a start date, an expected end date, and a frequency for collecting and reporting on measures. The timeframe may vary depending on the type of measures. For example, a binary measure (such as installed / not installed) may be tracked and reported on during one reporting period only, while surveys may be done biannually, quarterly, or annually. These timeframes can be adjusted as required, for reasons such as:
- An agreed change in the project or initiative scope that might delay outcomes
- A higher resistance to change than originally estimated, resulting in slower progress towards realizing outcomes
- An opportunity to reinvest (e.g. labour savings may be deferred as staff are redeployed to tackle backlogs in another unit; once backlogs are eliminated, then the labour savings could be realized)
Data Sources
The Outcome Realization Plan should also identify how and where the performance data will be collected, and for each outcome metric the data source should be identified. Data can be gathered from sources such as system statistics, business activity monitoring software, interviews, surveys and questionnaires. Data sources and their integrity are factors in defining measures and metrics. If source systems are not seen to be accurate, then there will not be trust in the reported progress. Third parties are a viable alternative for collecting objective, non-filtered data, and may be used to provide input into the progress report. Focus groups could be used as another means for gathering information regarding Outcome Realization progress.
Define the Outcome Reporting Process and Schedule
The organization must create and implement the reporting procedures and mechanisms that provide direction on how and when to collect data (performance measures), prepare outcome progress reports, and submit, review, and communicate progress. The process defines the roles and responsibilities to complete these activities, and includes positions and names associated with these roles. Determine who will see the report, what they will want to see in it, and what will be done with the information. Keep in mind that too many reports can be burdensome for senior managers. The Outcome Realization Plan will detail the timing, schedule and scope of data to collect for performance measures and create the outcome progress report. Where possible, the progress reporting cycle should align to existing cycles. On a case-by-case basis, sponsors and senior management groups may choose to increase or decrease the reporting frequency, depending on the stability of outcomes realized, positive or negative trends, or increased risks.
Data collection should begin sufficiently in advance of the actual review date, especially if anecdotal evidence is to be gathered or surveys are to be completed. Performance measures and current values are retrieved from the identified data sources, and the values are captured in the Outcome Register to support multi-period trend analyses.
The Outcome Register facilitates the creation of the draft report, containing graphs, tables, and/or textual data. The current performance level is compared against target and previous month(s) performance to determine trends and anomalies in realizing outcomes. If progress is off-course, causes are identified and corrective action(s) are designed and planned with the outcome owner. Causes can be identified by reviewing the logic model, anecdotes provided by data sources, system usage, or through change management feedback loops.
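A minimal sketch of that period-over-period comparison, using the reading history captured in the Outcome Register (all values invented):

```python
# Period-over-period comparison for one outcome measure. Readings are
# invented; lower values are better for this measure (minutes).
history = [("2025-01", 12.0), ("2025-02", 11.0),
           ("2025-03", 10.5), ("2025-04", 10.8)]
target = 4.0

latest, previous = history[-1][1], history[-2][1]
trend = "improving" if latest < previous else "worsening"

print(f"Current: {latest} | vs last period: {trend} | "
      f"gap to target: {latest - target:+.1f}")
# A worsening trend with a large remaining gap would trigger the cause
# analysis and corrective action planning described above.
```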
Analysis findings and follow-up actions/decisions are documented in the outcome progress report, which is finalized and signed-off by the outcome owner for distribution. After the progress / realization analysis and any recommendations have been completed, the review date, results, corrective actions, and issues (as required) are entered into the Outcome Register.
The outcome realization progress report is distributed for review to senior management and other parties. Presentation and discussion of these reports should be a standing agenda item for senior management group meetings.
Separate from the reporting, progress and current activities should be communicated to the organization to recognize the effort and commitment of staff in working towards realizing outcomes and building awareness of the capability of the organization to deliver outcomes. Also, it helps to ensure that the organization is aware of activities that will address ongoing concerns and issues raised during the progress reporting process.
Document Reinvestment Opportunities
The Outcome Realization Plan should document reinvestment opportunities for the organization. If the organization expects to realize significant time savings, then these savings can be redirected towards value-added or other core activities. For example, if a new records administration system was expected to save a police service over one hundred person-hours per day across the force by spending less time on reporting, that time could be redirected to increasing police presence in high-risk areas or increasing the number of intelligence operations being conducted. The areas for reinvestment should be identified in the business or value case and tracked and reported as part of the Outcome Realization Plan.
Appendix I Outcome Realization Plan
A template for the Outcome Realization Plan that will structure all aspects of the program / initiative's outcomes, risks and measures and how they will be managed and attained.
Appendix J Performance Metrics: Traps to Avoid
A description of common issues to be avoided when defining performance metrics as part of the Outcome Realization Plan.
3.4 Stage 3: Monitor Delivery of Outcomes
Purpose
To set up the set of activities to monitor the progress of the initiatives and reaffirm the logic of how outcomes will be realized.
Step 1
Implement Outcome and Risk Monitoring
Outcome and risk monitoring continues throughout the initiative lifecycle and should be done in a regular and consistent manner. The monitoring will begin during the execution phase of the initiative and will become more rigorous once the implementation phase has commenced. During the execution of the initiative, issues, risks, and changes are monitored to assess the impact to desired outcomes, test the continued validity of the logic model, and to reaffirm the initiative's business or value case. The monitoring process is completed only once all the outcomes have been realized and the target performance level(s) has been achieved and is stable. At that point, the performance level may become an operational baseline to be maintained and monitored in the future.
During the execution phase, focus on the outcomes is sometimes lost in the pressures of delivering on schedule and on budget. However, it is important to keep the end goals and outcomes of the initiative in mind so that changes, issues, and risks to the initiative (or its component projects) can be assessed in terms of their impact on the key intermediate and final outcomes. As shown in Figure 3 below, a relatively low level of Outcome Management activity exists during the "build" phase, following Stages 1 and 2. This level of involvement works with project management activities to ensure that focus is maintained, risks are managed, and changes to outcome levels are captured. It is here in Stages 3 and 4 that the level of involvement of Outcome Management rises in order to track and manage the attainment of the outcomes.
Update Logic Model and Outcome Realization Plan
The Logic Model and Outcome Realization Plan are living documents and as such, when new information becomes available, both documents should be updated. This information may appear during the review or in between reviews. Examples include unanticipated outcomes arising from an initiative, the identification of new initiatives requiring the realization of an outcome or benefit, a current assumption or risk being proven true, current assumptions or risks being proven false (these could lead to new initiatives as risk mitigation actions are developed), and the identification of new assumptions or risks. The new versions of the documents should be circulated to the project management team and the senior management group.
Enact Outcome Monitoring and Progress Reporting Process
The reporting process defined in Stage 2 is implemented here. Roles and responsibilities for collecting current outcome values and preparing progress reports are assigned to individuals, who are trained in the process and in any tools supporting it. Monitoring and progress reporting then commences, as per the schedule outlined in the Outcome Realization Plan.
The Reporting Process: Roles and Responsibilities
The organization can implement either a centralized or a distributed outcomes tracking and reporting process.
In a centralized model, a central group or team is responsible for executing the Outcome Realization Plan: gathering data, preparing reports, and working with outcome owners to support Outcome Management. The outcome owner is still accountable for the outcomes, but the responsibility for collecting the data and preparing the reports has been delegated.
In a distributed model, each outcome owner is responsible for ensuring that the reports are prepared. Each department, region, or district may therefore have its own group or team prepare its reports. A centralized group may still exist to consolidate and prepare an initiative- or department-wide consolidated progress report, as well as to provide coaching and advisory services to the project teams and outcome owners.
Regardless of the model selected, the roles and responsibilities must be clearly defined and assigned to a position. The incumbent in the position must be trained in the process and understand the concepts and principles behind the Outcome Realization Plan and Outcome Management.
The reviews conducted must be objective and clear, and must test the logic of the model itself; the purpose is not to lay blame, but to ensure success. If progress is lagging, it can be identified early and a mitigation plan executed. Review and analysis can also hone estimating skills and better position the organization to set realistic targets and goals in the future.
Outcome realization progress reports should be consolidated up to the initiative level, and can be further consolidated to the portfolio level. This will ensure that progress is communicated upwards to the appropriate people. Detailed progress reports remain at the project or initiative level to enable outcome owners to monitor and take action, as required.
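One simple way to perform such a roll-up is to have the initiative inherit the worst status reported by any of its projects. The sketch below assumes a three-value status scheme and invented project names; neither is prescribed by this guide.

```python
from collections import Counter

# Roll project-level outcome statuses up to a single initiative status.
project_statuses = {
    "Process redesign": "on track",
    "System implementation": "watch",
    "Training rollout": "on track",
}

rollup = Counter(project_statuses.values())
# The initiative inherits the worst status reported by any project.
initiative_status = ("escalate" if rollup["escalate"]
                     else "watch" if rollup["watch"]
                     else "on track")

print(f"Initiative status: {initiative_status} ({dict(rollup)})")
# Repeating the same roll-up across initiatives yields the portfolio view;
# detailed reports stay at the project level for outcome owners.
```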
Outcome Review Meetings
Outcome review meetings are scheduled to monitor progress, evaluate risks, and establish risk mitigation activities. The reporting requirements may call for multiple meetings to be held, each focused on a different audience. The number and style of meetings held depends on the significance or criticality of the project and the organization's reporting structure.
At the lowest level, the outcome owner will review the outcome progress report. This is the most detailed review, as the outcome owner will be assessing trends in outcome realization, monitoring acceptance of the change, identifying weaknesses and risks, and defining corrective actions. These meetings are typically held on a monthly basis.
A second review meeting is usually held with the senior management group, also on a monthly basis. This is a summary level and may require the consolidation of all detailed outcome progress reports. This forum is used to communicate ongoing success and inform executives of significant risks and how they are being addressed.
A third review meeting may also be scheduled with an executive advisory group on a quarterly basis. This meeting is typically for information purposes only, though critical risks and issues may be escalated for decision or direction. The purpose of this review meeting is to communicate progress and to assure senior management that value is being realized from its investment in the change initiative. If the initiative no longer delivers value, a decision must be made whether to continue or stop it.
For federal governmental departments and agencies, large or critical projects may need to report to the Treasury Board Secretariat or other organizations. A standing schedule and participant list should be established and the project sponsor should have a standing slot on the meeting agenda to present status (during execution) and progress (post-implementation). This presentation also serves to communicate success and to assure that value is being realized.
Escalation
If necessary, issues, risks, or changes associated with outcome realization can be escalated from outcome owners to the senior management group for decision. As with project issue and risk escalation procedures, escalation to the senior management group should be a last resort. The escalation process may differ depending on the initiative. For example, departmental or program initiatives may appoint the senior management group as the arbiter of last resort, while cross-departmental initiatives may have an executive advisory group above the senior management group.
Implement Tools and Mechanisms for Outcomes and Risk Monitoring
To support data collection and report generation, the Initiative, Outcome and Risk Registers (found in Stage 1 and Appendices D, E, and G) can be helpful in maintaining historical data, preparing reports, and tracking follow-up actions. The registers are tools used to capture and maintain performance-related data for the key initiatives, outcomes and risks / assumptions being tracked through the outcome monitoring and reporting process and the risk management procedure. The Outcome Register identifies who is responsible for ensuring the realization of key outcomes, through tracking the progress of realization, tracking changes to expected key outcomes, and identifying new or emerging outcomes. With respect to assumptions, the Risk Register identifies who is responsible for monitoring the validity of each assumption and documents the impact to the project (i.e., the risk to outcome realization). It also tracks the confirmation of whether the assumptions are valid or invalid, changes to the assumptions, and the identification of new risks. Outcome realization progress report templates should also be constructed and provided to those individuals tasked with preparing the final report.
Some organizations likely already have performance tracking and reporting tools for operating performance. These tools can also be used to track outcome realization, and can be gradually evolved as the capability and degree of reporting grow within the organization. For example, a simple database or spreadsheet could be used as an initial register; as the organization begins tracking more and more projects, a more robust solution or package will likely be required.
Establish Feedback Mechanism
The reporting strategy should also define a feedback loop, so that the efficiency and effectiveness of the reporting process can be monitored and increased over time. As the capability to track and manage outcomes grows within the organization, the process needs to maintain its focus and ease of use.
Appendix K Outcome and Risk Reporting Checklist
The reporting process and schedule is defined in the outcome realization plan. This checklist will help guide on-going outcome and risk reporting to ensure it stays active and on-track.
3.5 Stage 4: Realize and Optimize Outcomes
Purpose
To set up the governance structure to encompass the roles, responsibilities and accountabilities towards realizing outcomes. To actually attain or harvest the outcomes, and look for ways to meet or exceed anticipated targets. To communicate the success of the initiative / program in an outcome oriented manner.
Step 1
Create Outcome Management Office
In defining the roles and responsibilities for the outcome realization process, the governance framework can be put in the context of an Outcome Management Office (OMO). Similar in concept to a Project Management Office (PMO), the Outcome Management Office provides the Outcome Management competency and capability within an organization, supplying services to and working with other organizational units to ensure that initiatives are outcome-focused, and that outcomes are in fact realized.
The Outcome Management Office can be thought of as a Centre of Excellence that administers the outcome realization planning, monitoring, and reporting processes and is responsible for ensuring that:
- Outcome realization plans have been defined and are ready for implementation
- Approved investments are tracked and managed to ensure that business outcomes are achieved and that investment sponsors are provided with reporting for the achievement of those outcomes
- Senior management is advised on the implications of issues across initiatives, thereby supporting their decision-making process
- Progress reporting to various stakeholders is conducted.
Outcome Management has a much broader focus than project management, and as such, while the Outcome Management Office is interested in the project budget, schedule, issues, risks, and scope changes, its interest is focused on how changes to these aspects will impact the realization of desired outcomes. The owner of the initiative typically reports status to the Outcome Management Office as he or she has ultimate responsibility for ensuring that the outcome realization plans are created and monitored, and that the outcome realization reporting process is also executed.
The Outcome Management Office head has overall responsibility for the operations of the Outcome Management Office, including:
- Overseeing the outcome and risk monitoring, and the review process
- Preparing and delivering outcome progress reports
- Maintaining the logic model, Outcome Realization Plan and schedule
- Assisting in the identification and resolution of risks and issues
- Providing coaching and advisory services to the project management team, line management, and other personnel.
Appendix L Outcome Management Office Checklist
This checklist will help set up and guide the ongoing activities that the Outcome Management Office is responsible for establishing and running.
Step 2
Harvest Benefits
To ensure that the desired outcomes are realized, Outcome Owners and other stakeholders must actively participate in the realization process. The Outcome Review meetings are not only for informational purposes, but are designed to provide required information to allow Outcome Owners and Senior Management to determine progress, make decisions, and take needed action. These actions can be remedial or opportunistic, allowing the organization to take advantage of opportunities to realize additional or unanticipated outcomes.
Reinvest as Defined in the Outcomes Realization Plan
Reinvestment of outcomes that have been identified within the Outcomes Realization Plan should be included in the Outcomes Realization Progress Reporting. For example, time savings enabling the redirection of staff effort to other core or value-added activities should be noted in the progress report, as well as the results of those activities.
Identify Opportunities to Increase Outcome Performance Levels
In some instances, unanticipated outcomes may occur that should be captured and included in progress reports. In other instances, there may be opportunities to increase the performance level of expected outcomes. In such cases, the Outcome Owner may choose to undertake additional activities to leverage the outcomes realized. For example, a small change that expands the scope of the project by 2% but leads to a 10% increase in the level of the outcomes should be seriously considered, given its return.
Communicate Success and Manage Change
Progress and the realization of outcomes should be communicated to the broader organizational audience. Honest and clear messages regarding positive or negative trends, corrective actions, or new outcomes realized should be distributed. This will aid in fostering and solidifying continued commitment to the change(s), encouraging others to commit because results are being seen.
For desired outcomes to be realized, change within the organization must occur and be implemented in a sustainable manner. Change management involves progressing staff through increasing degrees of commitment to change, beginning with awareness and ending with a permanent implementation. Active change management relies in part on timely and concise communications, preparation of staff to perform new activities or methods (through training, education and coaching), and creation of the belief that the end result will be beneficial. While much of the change management activity occurs during the execution phase, change must be continually monitored and reinforced throughout implementation to encourage trial, adoption, and, ultimately, incorporation into normal operations.
When to Stop Tracking
The Outcomes Realization Plan indicates the expected timing when outcome target values or levels are to be reached. The outcomes monitoring and reporting process continues until all outcomes are realized and stable, particularly if the benefit pertains to a measurable performance level.
During the reporting process, some intermediate outcomes may "drop off" the report once achieved. For example, an outcome target measure of "100% of personnel have access to the new system" has progress reporting until the target is achieved. Once completed, the outcome no longer needs to be reported. If the measure is "100% of staff members always use the system," then monitoring and reporting would continue until this is achieved and stable.
For performance measures with an achieved and stable target, the measure becomes the new operating standard. Continued achievement of the new standard is managed via the regular organizational performance management process. The measure can then be removed from the outcome progress report, as it becomes part of regular operational performance reporting; this may require an update to a different reporting or communication channel. An organization can also establish an Outcome Management Plan, in which case it would cease reporting at the end of the outcome lifecycle, as per the plan.
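One possible "achieved and stable" test is sketched below. The choice of three consecutive on-target periods is an assumption; the guide leaves the stability criterion to the Outcome Realization Plan.

```python
def can_stop_tracking(readings, target, stable_periods=3, higher_is_better=True):
    """Return True when the target has been met for the last N consecutive
    reporting periods. N=3 is an assumed stability criterion."""
    recent = readings[-stable_periods:]
    if len(recent) < stable_periods:
        return False
    met = (lambda v: v >= target) if higher_is_better else (lambda v: v <= target)
    return all(met(v) for v in recent)

# e.g. "% of staff with access to the new system" over six reporting periods
print(can_stop_tracking([82, 91, 97, 100, 100, 100], target=100))  # True
```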
See Appendix M: Harvesting Outcomes Checklist for guidance on keeping on-going outcome harvesting activities set up correctly, active and on track.
Appendix A: Outcome Management Readiness Checklist
The Outcome Management Readiness Checklist will assist the Outcome Management lead in determining whether or not the initiative is ready, and in identifying any deficiencies that need to be corrected.
Note that when the next steps and roll-out approach are finalized, this checklist will need to be updated to reflect the specifics of these decisions.
Item | Yes | No. We Need to… | Additional Information |
---|---|---|---|
Confirm the Scope | | | |
1. Confirm the objective, rationale, scope, deliverables and budget of the Outcome Management exercise. | | | |
Engage the Team. Outcome Management requires the input and support of a wide variety of groups. These stakeholders need to be ready to participate in an Outcome Management exercise. | | | |
2. Have you made contact with the TBS CIOB to engage their support? | | | Also, consider contacting colleagues who have conducted the Outcome Management process or external providers who are subject matter experts. |
3. Have you contacted the business sponsor and secured his/her buy-in and/or participation in the Outcome Management exercise? | | | |
4. Have you identified the stakeholder groups in the initiative and secured the participation of key representatives for each group? | | | Stakeholders to consider include: |
5. Have you communicated the Outcome Management process and expectations to the participants? | | | For example, you may want to issue an announcement and/or hold a preliminary information session with stakeholders. |
6. Have you engaged a neutral, experienced facilitator (internal or external) who can guide the process? | | | A facilitator is required to conduct the workshops. |
Ready the Tools. Outcome Management uses tools and templates to support the process. | | | |
7. Have you downloaded the latest guide to Creating and Using a Business Case from the Treasury Board Secretariat website? | | | |
8. Have you downloaded the latest version of the TBS Guide to Outcome Management from the Treasury Board Secretariat website? | | | |
9. Are you and the facilitator sufficiently familiar with the guides and tools? | | | |
Gather the Background and Data. Outcome Management leverages existing information from a wide variety of sources where possible. | | | |
10. Have you identified the information that can be used during the Outcome Management exercise that may impact risks, initiatives or outcomes? This information will serve as the starting point for the main or final outcomes. | | | Consider: |
11. Have you and the facilitator familiarized yourselves with the existing information? | | | |
12. Have you distributed background / briefing information to the participants? | | | |
Appendix B: Logic Model / Outcomes Map Checklist
The target audience for this checklist is the main organizer / facilitator of the logic model or outcomes map exercise. It provides the necessary guidance from the first steps of engaging the stakeholders through to completing the workshops.
Item | Yes | No. We Need to… | Additional Information |
---|---|---|---|
Engage the Stakeholders / Prepare the Interviews. Creating a logic model (also known as an outcomes map) is a consensus-building process across a wide variety of stakeholders. It is not intended to be an analytical exercise done by an individual; therefore, much of the process of building the logic model focuses on involving the stakeholders. Interviews are a first step to gather outcome and risk data and engage the stakeholders. Depending upon the size and complexity of the initiative, a minimum of two half-day workshops is required, usually with a week's break between them. Compressed schedules, completing the workshops within a few days, are possible but not recommended, as they do not provide time for stakeholders to consult with colleagues, for new ideas to be introduced, or for "sober second thought" about the impacts of the change. More complex initiatives will likely take several workshop sessions. | | | |
1. Have you confirmed the participation of the initiative sponsor, business sponsor and/or other owners/champions to engage their support and participation? | | | The business sponsor is the person accountable for obtaining the outcomes from the initiative or program. |
2. Have you identified the stakeholder groups in the initiative and secured the participation of key representatives for each group for the logic model interviews and workshops? | | | For additional stakeholders, ask: does any other department or agency do the same (or similar) work as you do? Also look for private sector companies doing similar work or industry association representation. |
3. Have you scheduled the interviews with the appropriate stakeholders? | | | The interviews take place prior to the workshops in order to create the first draft of the logic model to bring to the first workshop. They are typically 30-60 minutes long. |
4. Have you tailored the interview questions to target the specific types of outcomes for the subject matter area? | | | Sample questions include: |
5. For the interviews, did you assign a lead interviewer role and a scribe role for the interview team? | | | |
6. Following the interviews, did the scribe type up the interview notes to capture and highlight the relevant items? | | | |
7. Have you communicated the logic model / outcomes map process, interview questions, workshop dates and expectations to the participants in advance of the interviews and workshops? | | | |
Prepare for the Workshops. Preparation for the logic model workshops involves the creation of a first draft of the model, based upon the interviews, the documentation read and the research done. The workshops validate and complete the logic model, and build consensus across the various stakeholders about the complete set of outcomes. | | | |
8. Do you have logic model tools? | | | |
9. Have you scheduled the workshops at least 2 weeks in advance? | | | Consider having a choice of dates available and confirming which one is best at the interviews. |
10. Have you laid out the main final (or ultimate) outcomes on the first draft logic model? | | | The first draft of the final outcomes comes from item 10 of the Readiness Checklist in Appendix A. |
11. Have you laid out all the known projects / activities that contribute to the outcomes on the first draft logic model? | | | The contribution could have a positive or negative impact. |
12. Have you laid out the main intermediate outcomes on the first draft logic model and arranged them into paths? | | | |
13. Have you planned out how you (as facilitator) will navigate the logic model during the workshop in a planned and orderly fashion? | | | Decide whether you approach the logic model from right-to-left, from left-to-right or middle-out. Select which paths you wish to follow so as to break the model down into manageable sections. |
14. Have you decided which facilitation method and materials you will use (very large sheet of paper, an 11 x 17 sheet for each participant, or an on-screen "live" update of the model using a computer and projector)? | | | A large sheet of paper likely means a plotter output of either 4 or 6 feet high by 6 or 8 feet long. |
15. Do you have the correct material assembled (computer, projector, boardroom, large-sized output, etc.)? | | | |
16. Are you and the facilitator sufficiently familiar with the guides and tools? | | | Ensure you have the necessary facilitation skills as well as the hard Outcome Management skills. |
Conduct the Workshops. Conducting the logic model workshops involves a combination of structured and unstructured discussions, and a variety of facilitation techniques. The objective is to get everyone to participate, and to build consensus around a solid logic model that the participants have created and own. | | | |
17. Have you and the facilitator re-familiarized yourselves with the existing information? | | | |
18. If required, have you prepared a brief introductory presentation on Outcome Management and logic models for the first workshop? | | | |
19. Have you selected a path to start with in the workshop to gain some good momentum? | | | Decide whether you approach the logic model from right-to-left, from left-to-right or middle-out. Select which paths you wish to follow so as to break the model down into manageable sections. |
20. During the workshop, have you avoided vague words and instead encouraged greater precision around the outcomes to be achieved? | | | Vague words to avoid: Precise words to use: |
21. Have you identified as many assumptions and risks around the outcomes as possible and classified them as either manageable or outside our control? | | | |
22. Have you avoided creating any loops in the logic model, which would make it impossible to evaluate? | | | A loop (circle in logic) is unacceptable, as it is not measurable. The logic model is there to document the one-time change in state of the outcomes. |
Following the Workshops. Following the logic model workshops, it is important to capture all changes to the logic model and reconcile any contradictions. Schedule, prepare and conduct as many additional workshops as needed until the logic model is sufficiently complete to satisfy the participants. | | | |
23. Have you captured all changes from the workshops and applied them to the logic model? In doing so, did you ensure that no loops were created and that consistency of terminology and level of detail was maintained? | | | |
24. Have you circulated the updated logic model to participants for review following the workshops? | | | |
25. Have you identified the set of outcomes that should be measured? Have you set up the interviews and workshops for the Outcome Register? | | | |
26. Ask participants to evaluate the workshop and provide feedback. | | | |
27. Conduct a (self) evaluation of the workshops and the process for lessons learned and self-improvement purposes. | | | |
Appendix C: Sample Outcome Types
The following four sections provide examples of the types of benefits or outcomes that an initiative value case could include. These lists are not intended to be exhaustive, but to serve as a reminder and to promote discussion with a business sponsor about possible outcomes.
Direct Quantitative Outcomes - Labour Savings
These savings represent a reduction in the time and effort required to perform given tasks and processes:
- Decreased number of people handling a file
- Eliminated duplication of effort
- Decreased inspection time
- Decreased time to book meetings / appointments
- Decreased time to book travel by using the web
- Decreased material or document distribution time
- Decreased information retrieval time
- Eliminated overtime due to peak demand.
Direct Quantitative Outcomes - Other Direct Savings
These are tangible, easily quantified cost savings that are currently budgeted expenditures and will be either avoided, decreased or possibly replaced by other costs.
- Postage / courier fees
- Bulk purchases of products (lower unit cost)
- Purchasing costs
- Telephone / Telecommunications / Network
- Printing costs
- Equipment rental or repair
- Inventory carrying costs
- Direct travel costs or other transportation costs
- Waste disposal, including hazardous waste
- Energy consumption
- Temporary contract help at peak season
- Environmental benefits - such as contaminated site cleanup
- Increased user fees or other revenue generated.
Indirect Outcomes
These outcomes are tangible but less easily quantified, and relate to costs that are currently budgeted. The cause-and-effect relationship between the initiative and these types of outcomes needs to be well understood and documented in order to be precise.
- Decreased need for training fees and time
- Increased safety leading to fewer on-the-job accidents
- Decreased accommodation costs
- Lower absenteeism
- Increased protection of intellectual property
- Eliminated other administrative costs
- Faster response time to service request
- Created ability to track transactions through the business
- Increased compliance with legislation and policy.
Qualitative Outcomes
Qualitative Outcomes relate to performing given tasks more effectively, or to behavioural or perception changes. These are not easy to quantify, but they should be estimated, documented and included in the initiative value case in order to show the broadest view of the value of the work. If required, they can be measured by survey or another proxy for the actual measure.
- Increased availability of management information
- Increased level of data integrity
- Increased level of corporate knowledge (retention and retrieval)
- Increased client satisfaction
- Increased public confidence
- More effective decision-making
- Better scheduling / workflow
- Lower risk of obsolete processes
- Reduced stress
- Improved working conditions
- Better quality of work life
- Increased productivity
- Experience with newer technologies
- Higher learning rate
- Increased level of staff morale
- Decreased fatigue.
Appendix D: Initiative Register
For each Initiative identified on the logic model, the following attributes should be documented:
ID | Initiative Name | Current Phase | Sponsoring Organization | Business Sponsor | Project Manager | Initiative Dependency | Target Implementation Date | One-time Implementation Cost |
---|---|---|---|---|---|---|---|---|
I-1 | Call Centre Consolidation | Initiation | Program X | Director of Program X (name) | (name) | I-6 | dd-mm-yy | $x.xM |
- ID – Unique identifier for the initiative (from the logic model).
- Initiative Name – The name of the initiative.
- Current Phase – The current phase of the initiative: Initiation; Planning; Project Execution (Build and Implement); or Outcome Realization.
- Sponsoring Organization – The name of the department, unit, etc. of the organization that is leading the initiative.
- Business Sponsor – The title and name of the individual within the sponsoring organization who has overall responsibility for championing the initiative.
- Project Manager – The name of the individual who has overall responsibility for delivering the initiative on time, on budget and within the defined scope.
- Initiative Dependency – The ID number of any other initiatives on which this initiative is dependent.
- Target Implementation Date – The estimated implementation date (completion date) of the initiative.
- One-time Implementation Cost – The sum of all one-time cost components of the initiative needed to get it to implementation, including staff time, vendors, hardware, software, facilities, other materials, etc. Does not include the ongoing maintenance cost of running the operation.
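For teams keeping the Initiative Register in a simple tool rather than a spreadsheet, the attributes above map naturally onto a small record type. The following is a minimal sketch only; the field names and sample values are illustrative, not prescribed by this guide:

```python
# A minimal sketch of an Initiative Register entry using the attributes
# defined above; field names and sample values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InitiativeEntry:
    initiative_id: str      # e.g. "I-1", from the logic model
    name: str               # e.g. "Call Centre Consolidation"
    current_phase: str      # Initiation / Planning / Project Execution / Outcome Realization
    sponsoring_organization: str
    business_sponsor: str
    project_manager: str
    dependencies: List[str] = field(default_factory=list)  # IDs of prerequisite initiatives
    target_implementation_date: str = ""                   # dd-mm-yy
    one_time_cost: float = 0.0                              # excludes ongoing maintenance

register = [
    InitiativeEntry("I-1", "Call Centre Consolidation", "Initiation",
                    "Program X", "Director of Program X", "(name)",
                    dependencies=["I-6"]),
]
```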
Appendix E: Outcome Register
The Outcome Register lists the various attributes of the key outcomes from the logic model that have been selected to be measured. All final outcomes should be listed, along with those intermediate outcomes that are measurable with appropriate effort. Each row in the Outcome Register lists the outcome to be achieved and identifies the measurement criteria, the person accountable for achieving the outcome, the method for collecting the information, and so on.
The person accountable for attaining the outcome is directly involved in the establishment of the Outcome Register for each key outcome identified on the logic model. In the Outcome Register, the following attributes are documented:
ID | Outcome Name | Description | Comments | Metric | Frequency / Duration | Measurement Method | Estimated Cost of Measurement |
---|---|---|---|---|---|---|---|
O-6 | Increased Client Satisfaction | Specifics about any performance indicators | Any additional details | Rating of satisfaction level by target audience | Annually / for 5 years | Client survey | $100K annually |

ID | Baseline Value | Highest Level Target Value / Date | Most Likely Target Value / Date | Profile | Tolerance Limit | Action if outside tolerance | Responsibility for measuring outcome | Accountability for attaining outcome |
---|---|---|---|---|---|---|---|---|
O-6 | Satisfaction level of 3.5 out of 5 | 4.5 / March 31, 2007 | 4.2 / March 31, 2007 | | +/- 0.2 | Take action based on feedback / survey results | Manager, Client Service | DG, Program Delivery |
- Outcome ID / Name / Description / Comments – The outcome name from the logic model, its identifier (O-xx), a brief description of the outcome to be achieved, and any additional comments pertaining to the outcome.
- Metric / Frequency / Duration – The measurement (sometimes called the Key Performance Indicator) that will enable the achievement of the outcome to be tracked. The unit of measure must be made clear. The frequency with which the measurement will be taken and managed (daily, weekly, monthly, annually, etc.) must be specified, as must the duration over which the outcome should be measured to see the effect of the change.
- Measurement Method – The source of the data and/or the process by which the metric data will be collected (e.g. from a monthly report, annual survey, etc.).
- Estimated Cost of Measurement – An order-of-magnitude estimate of the cost of obtaining the measurement data. It may be insignificant if the measurement already exists, but may be large if the measurement is new and no data exists. This should consider the ongoing cost of the measurement, and serves as a "reality check" that the measurement cost is in line with the benefits of measuring in the first place.
- Baseline Value – The current level of the metric: the starting value that the Target Value seeks to improve upon. If there is any difficulty in obtaining the baseline value, this should be noted and, at the very least, the first measurement of the metric becomes the baseline value.
- Target Value (Highest Level and Most Likely) / Date – The desired future state of the metric, as compared with the current or baseline value, together with the target date by which the outcome should be achieved. The Highest Level is the most optimistic level that could be achieved if all conditions hold. The Most Likely Level is the most realistic level that will be achieved; it is what goes into the value case.
- Profile – A graph of the trend or pattern of the change from the Baseline Value to the Most Likely Target Level.
- Tolerance Limit – The variance from the Target Value that is permitted at any point in time without any corrective action being needed. Can be +, - or both.
- Action if Outside Tolerance Limit – The corrective action that should be taken if the ongoing measurement is above or below the Tolerance Limit at any point in time. This anticipates possible problems and corrective actions, and mitigates risk.
- Responsibility for measuring outcome – The title and/or name of the person responsible for collecting the data and reporting the measurement of the outcome.
- Accountability for attaining outcome – The title and/or name of the person responsible for the delivery and attainment of the benefit. This is the person who will "pull" the upstream activities, intermediate outcomes and assumptions in order to achieve the desired outcome. Ideally this outcome should be in the person's Management Accord to ensure accountability for the success of the outcome.
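The Tolerance Limit and its corrective action lend themselves to a simple check at each measurement point. Below is a minimal sketch, assuming a symmetric +/- tolerance band around an expected interim (profile) value; the function name and figures are illustrative:

```python
# A minimal sketch of the tolerance check described above, assuming a
# symmetric tolerance band around the expected (profile) value.
def within_tolerance(measured: float, expected: float, tolerance: float) -> bool:
    """True if the measured value is inside the permitted band."""
    return abs(measured - expected) <= tolerance

# Hypothetical interim reading for outcome O-6 (client satisfaction):
expected, tolerance = 4.0, 0.2   # interim profile value and +/- band
measured = 3.7
if not within_tolerance(measured, expected, tolerance):
    # Trigger the "Action if Outside Tolerance Limit" from the register,
    # e.g. take action based on feedback / survey results.
    print("Outside tolerance: invoke corrective action and notify the owner")
```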
Appendix F: Risk Questionnaire
Two levels of risk management must be considered: risks that hinder an effective Outcome Management process, and risks to the initiative itself which, if they materialize, could impact the achievement of the initiative's outcomes. Both types of risk must be identified up front and then managed throughout all phases of the initiative so that action can be taken to reduce the impact on desired and expected outcomes.
Questionnaire: Identifying the Risks to the Outcome Management Process
The following questions are intended to guide the team in identifying risks to the Outcome Management process. As risks are identified, complete the Risk Register (Appendix G or equivalent) to capture the required information about each risk.
Are the outcomes sought anchored in a clear business vision?
- Are the outcomes clear and specific?
- Is there a clear vision of the organization's final business outcomes?
- Is there a clear and credible logic between the initiative and the final business outcomes?
If the environment changes, will the outcomes still be realized?
- Will outcomes be realized under all reasonably expected conditions?
- Will outcomes be perceived as useful by customers under all reasonably expected conditions?
Is there good, informed organizational support for the initiative?
- Were the operational business areas involved in developing the Outcome Management process?
- Do the operational business areas understand the Outcome Management process?
- Is change supported by all those impacted by the initiative?
- Is there a strong sponsor able to bring about the necessary changes?
- Does the sponsor have the time and resources needed to make the project a success?
Does this initiative carry significant political risk?
- Is this project likely to attract the attention of the media?
- Is this project likely to attract the attention of the unions?
- Is this project likely to attract the attention of any other level of government?
What kind of profile is created in the event that the initiative fails or does not meet milestones?
- Is the media likely to be concerned about failure or underperformance of this initiative?
- Is the government likely to be concerned about failure or underperformance of this initiative?
Is there a clear understanding of the work involved in this initiative in order to achieve the final outcomes?
- Are the key activities of the initiative clearly defined?
- Is the linkage of the key activities to the final outcomes clear?
Has the process for monitoring of outcomes been defined?
- Have all the responsibilities for review of the outcomes been defined?
- Has an Outcome Management Office been established?
Are the activities and tasks necessary to realize and manage the outcomes clearly understood?
- Does a detailed Outcomes Realization Plan with specific management processes exist?
- Are all key outcomes stemming from the initiative measurable?
Is the scope of work necessary to realize outcomes within the “reach” of the likely resources?
- Are all impacts of initiatives within the sponsor's area?
- Is outcomes realization independent of other business programs?
- Is outcomes realization independent of parent organization-wide programs?
How committed are the affected business areas to realizing the outcomes?
- Does the program owner have clear and direct accountability for outcome delivery?
- Has accountability for outcomes been established before the program started?
Can the business cope with the program?
- Does the affected staff have proven capabilities to make necessary change?
- Are there other major changes in the same business area that could divert resources from the program?
- Does the organization have proven capability to implement projects of this complexity?
What is the risk of not completing the initiative?
- What is the impact of cancellation?
- What is the impact of postponement or not delivering on time?
Questionnaire: Identifying the Risks to the Initiative
The GoC has developed strong risk management processes for the identification and on-going management of risk throughout the lifecycle of a project. Please refer to the risk management processes in the TBS's Enhanced Management Framework for detailed project risk management guidance.
Appendix G: Risk Register
This is a sample risk register. It can be used as a stand-alone tool to capture details on each risk identified during the Outcome Management process. Alternatively, the details can be incorporated into an existing risk tracking tool if one already exists.
ID | Owner | Risk Statement | Response | Probability | Impact | Status | Action Item(s) |
---|---|---|---|---|---|---|---|
R-1 | Project Manager | There is a risk that potential performance numbers for Phase III may not support client needs in a production environment. | Mitigate. Compare likely measures and determine gap. | Medium | High | 2005-11-15: No further development in status. | Follow-up with DG. Assigned To: Due Date: |
- ID – Unique identifier for the risk.
- Owner – The title/name of the person responsible for mitigating the risk. A risk should be assigned to the party best able to manage it, not necessarily to the party accountable for the outcome.
- Risk Statement – Statement of the risk and its impact on the environment.
- Response – The risk response strategy. Start with one of the following keywords: Avoid, Control, Assume, Mitigate, Watch, Escalate or Transfer, then add a brief description.
- Probability – Likelihood of occurrence (Low / Medium / High).
- Impact – Degree of impact on affected stakeholders (Low / Medium / High).
- Status – A running status that provides a history of what is being done about the risk and of changes in the risk. Include the date of the most recent update.
- Action Item(s) – Mitigation actions to reduce the likelihood and impact of the risk, including the person/title each item is assigned to and its due date.
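Where the register is kept electronically, the Low / Medium / High scales can support a rough prioritization of risks. The sketch below is illustrative only; the numeric weights assigned to each level are an assumption, not part of this guide:

```python
# A minimal sketch ranking risks by probability x impact; the numeric
# weights assigned to Low/Medium/High are illustrative assumptions.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

risks = [
    {"id": "R-1", "probability": "Medium", "impact": "High"},
    {"id": "R-2", "probability": "Low", "impact": "Medium"},
]

# Highest-exposure risks first, to focus review meetings and action items.
for risk in sorted(risks,
                   key=lambda r: LEVELS[r["probability"]] * LEVELS[r["impact"]],
                   reverse=True):
    score = LEVELS[risk["probability"]] * LEVELS[risk["impact"]]
    print(risk["id"], "exposure score:", score)   # R-1 first (score 6)
```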
Appendix H: Value Case Template
This Value Case focuses the data collection activities for an initiative / project and positions it for a Value Assessment. Initiatives are assessed, measured and scored against a pre-determined set of criteria established by the stakeholder community. The benefit of the Value Case is that it encompasses numerous financial, non-financial tangible and intangible elements, and addresses not only the project description (scope, budget and objectives) but also the value the initiative will provide. By completing the Value Case, the overall benefits and value of the initiative begin to emerge, providing a fuller overall picture. A completed Value Case ensures that initiatives are assessed accurately and consistently within the portfolio, supporting managerial decision-making when determining funding and resource allocation for the highest-value initiatives.
1.0 Owners / Roles
Role | Name | Title | Approval / Signature | Date |
---|---|---|---|---|
Business Sponsor (e.g. DM or ADM responsible for business results) | | | | |
Secondary Sponsor (second department's DM or ADM) | | | | |
Key Business Manager (e.g. DG or Director) | | | | |
Project Director or Manager | | | | |
Subject Matter Expert(s) | | | | |
Technical Representative | | | | |
Other Roles as Required | | | | |
This section should already be part of the governance in the Project Plan / Charter.
2.0 Initiative Business Change Description
2.1 Context: Background or context to the Initiative. | |
---|---|
2.2 Description / Purpose: Brief description of the Initiative. List the business and technical documentation available for the initiative. | |
2.3 Objective(s) / Business Need(s): Describe the objective(s) of the Initiative and the business need(s) it fulfills. Does the Initiative respond to specific legislation or regulations? Does the initiative respond to government priorities as addressed in the Throne Speech or Budget? Be specific. | |
2.4 Scope: Describe the size and constraints of the Initiative. Describe the business / transaction volumes. Describe the timeframe for implementation, including when the benefits are going to be realized. | |
2.5 Users: Describe the main users of the initiative. Users are those who will actually directly use, operate, or touch the initiative. Who is involved? What are their roles? Where are they located? How many users are there? What percentage of the possible user community for the initiative does this represent? | |
2.6 Stakeholders: Other than the users, describe the main stakeholders that benefit from the initiative without actually directly using it. How are these stakeholders involved in the governance of the initiative? | |
2.7 Security / Sensitivity: How sensitive is the initiative and the data? Has a TRA / Sensitivity Analysis been done? Has a PPIA been done? Has a legal opinion been obtained on the initiative? | |
2.8 Roles and Responsibilities: Describe the main roles and responsibilities of the initiative owners and main players. Who has what authorities? | |
2.9 Critical Success Factors: What key business factors need to be in place for the initiative's success? Are there any risks that do not yet have a mitigating action defined? | |
2.10 Skill Sets Required: List the key skill sets required to successfully use and support the initiative. | |
2.11 Achievement of Desired Results: Identify the extent to which the community is able to realize and measure the desired results for target audiences, users, and stakeholders. Include the ability to obtain baseline measurements, establish performance measures, etc. The actual measures should be indicated in sections 4 and 5; this question relates only to the capacity of the initiative to do performance measurement activities. | |
2.12 Technical Solution: Describe the technical solution for the initiative. Identify any interdependencies the solution will have with other systems, both inbound and outbound. What development standard is proposed? How flexible is the solution? | |
3.0 Business and Technical Risk Analysis
Identify the key risks and assumptions associated with achieving the desired business benefits, along with their impacts and probabilities and the associated mitigation strategies. Ensure that the costs of risk mitigation are included in the cost section.
# | Description of Risk/Assumption | Mitigation Strategy |
---|---|---|
3.1 | | |
3.2 | | |
3.3 | | |
3.4 | | |
3.5 | | |
3.6 | | |
3.7 | | |
3.8 | | |
4.0 Key Financial Benefits
The information in this section should be quantifiable and measurable. This section will form part of the Outcome Register for the initiative.
# | Description and Quantification: The nature and magnitude of the benefit (e.g. cost savings (labour, capital, direct, indirect), cost avoidance, revenue generation, etc.) | Measure: Explanation of how this financial benefit is being tracked and measured. How do you know it is occurring? Timeframe, target date. | Assumptions and Factors: Assumptions made about achieving the benefit. List of factors that will have a major effect on the achievement of the benefit. | Accountability: The position accountable for the achievement of the benefit. |
---|---|---|---|---|
4.1 | | | | |
4.2 | | | | |
4.3 | | | | |
4.4 | | | | |
4.5 | | | | |
4.6 | | | | |
Additional rows can be added as required.
5.0 Key Non-Financial Benefits
The information in this section should be quantifiable and measurable, and will form part of the Outcome Register for the initiative. Other, non-quantifiable benefits should also be included here.
# | Description and Quantification: A description of the non-financial benefit expected (e.g. increased customer service, safety, user benefits, staff morale, operational impact, etc.) and its magnitude. | Measure: Explanation of how each non-financial benefit will be tracked and measured. How do you know it is occurring? | Assumptions and Factors: Assumptions made about achieving the benefit. List of factors that will have a major effect on the achievement of the benefit. | Accountability: The position accountable for the achievement of this benefit. |
---|---|---|---|---|
5.1 | | | | |
5.2 | | | | |
5.3 | | | | |
5.4 | | | | |
5.5 | | | | |
5.6 | | | | |
Additional rows can be added as required.
6.0 Key Financial Costs
# | Description and Quantification: The nature of the cost and the dollar value of this financial cost, with, if possible, an indication of when the cost will be incurred. Separate business costs from technical costs, and maintenance costs from implementation costs. Identify any costs associated with performance measurement. | Assumptions and Factors: Assumptions made about the financial costs. | Accountability: Position or role accountable for managing this cost. |
---|---|---|---|
6.1 | | | |
6.2 | | | |
6.3 | | | |
6.4 | | | |
6.5 | | | |
6.6 | | | |
A financial spreadsheet will be used for calculating costs in detail.
7.0 Key Negative Effects
# | Description and Quantification: The nature and magnitude of the negative effects (e.g. increased absenteeism, decreased user/employee satisfaction, decreased staff morale, reduction of visibility, loss of know-how / corporate knowledge). | Measure: Explanation of how this negative effect is being tracked and measured. How do you know it is occurring? | Assumptions: Assumptions made about the negative effect. | Accountability: The position accountable for managing the reduction of this negative effect. |
---|---|---|---|---|
7.1 | | | | |
7.2 | | | | |
7.3 | | | | |
7.4 | | | | |
7.5 | | | | |
Additional rows can be added as required.
8.0 Stakeholder Impact
# | Stakeholder Identification: Identification of the stakeholder group (e.g. users, public, client departments, suppliers). | Nature and Degree of Impact: Description and extent of the impact. | Degree of Stakeholder Support / Opposition: Stakeholder change readiness. How, and to what extent, would they react? Would they support it or not? | Change Management Approach: Culture change strategies that will be used to mitigate the impacts of the initiative. As required, strategies related to the following aspects may be addressed: training, communication, staffing, etc. |
---|---|---|---|---|
8.1 | | | | |
8.2 | | | | |
8.3 | | | | |
8.4 | | | | |
8.5 | | | | |
Additional rows can be added as required.
Appendix I: Outcome Realization Plan Table of Contents
The following is a sample Table of Contents for the Outcome Realization Plan that should be produced as part of Stage 2.
- Outcome Management Deliverables (validated and updated)
  - Outcome Register (Appendix E)
  - Initiative Register (Appendix D)
  - Risk Register (Appendix G)
- Outcome Realization Processes
  - Performance Measurement Process
    - Process Flow
    - Roles and Responsibilities
    - CRUD (Create / Read / Update / Delete) diagram
    - Data Sourcing Issues
    - Baseline Measurement Issues
  - Monitoring and Reporting Process
    - Process Flow
    - Roles and Responsibilities
    - CRUD diagram
- Outcome Realization Plan Schedule and Resources
Appendix J: Performance Metrics: Traps to Avoid
Establishing performance metrics and targets can be challenging. The following sections identify some common issues and ways to deal with them:
Measures without Owners
Outcome performance metrics and targets should always be endorsed by an outcome owner; without an owner, there is no one to take action if outcome realization is lagging. Ideally, the outcome owner has already been identified and is working with the team to define performance measures and metrics. Involving the outcome owner in this process is a change management tactic for eliminating or reducing resistance: the owner builds commitment to the project, and it removes the perception that a target is being imposed on him or her. It also helps dispel the notion that the measures will be used to lay blame, and the owner will likely be more comfortable with his or her ability to achieve the target(s) in question. Measures can still be defined before the outcome owner is identified, but they should be treated as a model and used as an aid in discussions with the potential outcome owner; the measures should then be finalized once the outcome owner agrees to them.
Consolidating Measures: Risk of Double Counting
In situations where there are multiple outcome owners for a given outcome, the initiative team should be careful to ensure that the relative contributions and the consolidated metric make sense. If one owner is claiming a reduction in process time of 60 percent, and a second owner is claiming a reduction of 70 percent, a total reduction of 130 percent is obviously not possible. Therefore, the initiative team should ensure that the consolidated baseline and target metrics are realistic and in line with the supporting measures from the outcome owner.
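One way to sanity-check a consolidated claim is to combine the individual reductions multiplicatively rather than additively, on the assumption that both act independently on the same baseline. A brief sketch using the figures above:

```python
# Sanity check for consolidated reduction claims, assuming the two
# owners' reductions act independently on the same baseline.
r1, r2 = 0.60, 0.70   # each owner's claimed process-time reduction

naive_total = r1 + r2                # 1.30 -> impossible (more than 100%)
combined = 1 - (1 - r1) * (1 - r2)   # 0.88 -> 88% at most

print(f"Additive (wrong): {naive_total:.0%}; multiplicative cap: {combined:.0%}")
# Even under a generous independence assumption, the consolidated
# reduction cannot exceed 88%, so a claim near 130% signals double counting.
```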
Departmental vs. Organizational Metrics
It is important to be cautious when providing target values for outcomes. Cost savings of 20 percent may sound attractive at first glance, but they may be savings only for a department or unit, not for the organization as a whole. For example, where labour contracts are in force, labour savings may be expressed in dollars but are not "bankable": employees may be redirected to resource pools or transferred to other groups or departments. In this context, while the department may realize a cost saving from the initiative, the organization does not. Clarity about where and how outcomes occur will help manage the expectations of senior management.
Mistrust in Qualitative Outcomes
Qualitative measures may be subject to more debate, as they are perceived as being less "solid" than quantitative measures. The soundness of qualitative measures can be assessed according to four criteria: Footnote 4
- Credibility: establishing that the results are believable
- Transferability: the degree to which the results can be generalized or transferred to other contexts or settings
- Dependability: the degree to which the results are repeated or replicated
- Confirmability: the degree to which the results can be corroborated by others.
No Performance Management System in Place
For organizations that do not have a performance management system in place, or that are still developing one, more effort will be required. This can be due to several factors, including:
- Organizational resistance to performance management – a fear of unrealistic targets and inappropriate management practices
- Performance tracking mechanisms, tools, and frameworks do not exist and must be created, which increases (to varying degrees) initial workload and “overhead”
- Distrust of systems to accurately track performance (such as workflow systems, business activity monitoring systems)
- Technological impediments to collecting the data (source data may be in several source systems)
- Debates over appropriate measures to use and the ability (i.e. authority and responsibility) to meet targets.
Appendix K: Outcome and Risk Reporting Checklist
The reporting process and schedule is defined in the Outcome Realization Plan. This checklist will help guide on-going outcome and risk reporting.
Item | Yes | No. We Need to… | Additional Information |
---|---|---|---|
1. Does the Outcome Realization Plan have a sufficient reporting process and schedule? | | | |
2. Is the reporting process and schedule integrated into existing project management reporting processes? | | | Integrating outcome and risk reporting into existing project-level reporting will ensure that effort is not duplicated. |
3. Have the tools supporting the reporting process been identified and customized? | | | |
4. Are the roles and responsibilities for collecting performance data and preparing reports (as defined in the Outcome Realization Plan) assigned to individuals? | | | |
5. Are the individuals involved in the process trained? | | | |
6. Have outcome review meetings been scheduled? | | | |
7. Is the escalation process defined and implemented? | | | |
8. Have the logic model and Outcome Realization Plan been updated if required? | | | |
Appendix L: Outcome Management Office Checklist
This checklist will help set up the Outcome Management Office and guide the on-going activities it is responsible for running.
Item | Yes | No. We Need to… | Additional Information |
---|---|---|---|
1. Does the Outcome Management Office (OMO) have a Terms of Reference and mandate to be able to take action? | | | |
2. Have the linkages between the Project Management Office (PMO) and the OMO been defined? Is there a Memorandum of Understanding (MOU) formalizing this relationship? | | | Reporting frequency, channels and data elements are some of the components of the MOU. |
3. Has the OMO identified the responsibilities for outcome monitoring and outcome progress reporting? | | | A RACI chart could be used to define the responsibilities. |
4. Has the OMO communicated its mandate and role to all the necessary stakeholders to ensure they are aware of the OMO? | | | Ensure that the overall owner of the initiative sends an announcement to all relevant stakeholders with the Terms of Reference of the OMO and the individuals staffing it. |
5. Is the OMO the focal point for all developments and emerging best practices in Outcome Management, so that it can adapt and adopt them into the Outcome Management process? | | | There should be linkages to other OMOs in other departments, central agencies and possibly even internationally for similar types of programs. |
6. Have all OMO deliverables, formats, tools, etc. been defined and created? | | | Deliverables include the logic model, Outcome Realization Plan, Value Case, etc. |
7. Has the OMO worked with the various stakeholders to ensure that outcome measurement processes and sampling points are in place to monitor achievement of outcomes? | | | |
Appendix M: Harvesting Outcomes Checklist
This checklist will help guide on-going outcomes harvesting activities to ensure that they are set up correctly and stay active and on-track.
Item | Yes | No. We Need to… | Additional Information |
---|---|---|---|
1. Have you identified those outcomes that will allow for reinvestment opportunities? | | | A common example of a reinvestment opportunity is freeing up a small portion of staff time; what is to be done with that freed-up time is the reinvestment opportunity. |
2. For each of those reinvestment opportunities, have you defined how much is freed up for reinvestment? Do you have alternate activities / actions to replace the freed-up effort? | | | A common example is for staff to undertake training that could not previously be scheduled due to workload, or to take on new, challenging tasks and activities. |
3. For each key outcome that is being measured, have additional opportunities been identified that would exceed the originally intended target level of the outcome? | | | A common example of such an opportunity is an enhancement suggested by a user that was deemed out of scope during the main project, but would bring significant benefit if implemented. |
4. For each of these opportunities to exceed a target, has an analysis been done to examine the trade-off of time, cost and change in outcomes? | | | |
5. Have the necessary communication vehicles been established for visibly communicating the success of attaining the outcomes? | | | Typical communication vehicles include newsletters, electronic postings, presentations, etc. The outcome owners should be encouraged to author and/or present these messages wherever possible. |
6. Have all messages been brought forward, the negative as well as the positive? | | | Sharing difficulties and failures as well as successes, and the lessons learned from them, can be a powerful message that helps change the culture towards openness in outcome reporting. |
7. Have the durations, or "expiry dates," of outcomes been established? Do the OMO and the outcome owner know when to stop tracking an outcome on an exceptional basis and when to migrate it into normal operations, or stop tracking it entirely? | | | |
8. Is there a transition plan in place for all those outcomes that will move from special-circumstance tracking to steady-state performance measurement and monitoring? | | | |
Appendix N: Glossary
These definitions are based in part on text from the TBS Guide to Realizing Outcomes from Government of Canada Programs Footnote 5 (draft of March 2004) and Evaluation and Aid Effectiveness: Glossary of Key Terms in Evaluation and Results Based Management. Footnote 6
- Accountability for Results: The responsibility to report fairly and accurately on the attainment or non-attainment of outcomes, in addition to demonstrating that work has been conducted according to existing standards and/or agreements.
- Assumption: A condition for the realization of an outcome or of an initiative, over which the organization has no control.
- Benefits: Direct and indirect positive consequences resulting from an action. Includes both financial and non-financial information. Footnote 7
- Benchmark: A reference point or standard against which performance or outcomes can be measured.
- Best practice: A proven and reliable technique or methodology for accomplishing a task, formulated by studying business cases, case studies and highly successful organizations performing specific functions.
- Final outcome: The end result expected from an initiative.
- Initiative: A structured grouping of projects designed to produce clearly identified business results or outcomes.
- Intermediate outcome: A capability delivered by a project, or a business impact resulting from a group of projects within the initiative.
- Lessons learned: Generalizations based on the evaluation of experiences with projects, programs or policies, including strengths and weaknesses, that can apply to broader situations or other initiatives.
- Logic Model: The causal sequence for an intervention that stipulates the necessary sequence to achieve desired objectives, beginning with inputs and ending with outcomes, impacts and feedback.
- Outcome: The expected result at the end of an intervention or change.
- Outcome Management: A management strategy focused on the achievement of results and outcomes.
- Performance: The degree to which an intervention is operating with respect to standards and guidelines, or the extent to which it is achieving results in accordance with stated goals or plans.
- Portfolio: A collection of initiatives, programs or projects.
- Program: A set of initiatives with a broad mandate to deliver value.
- Project: A group of activities concerned with delivering a defined capability based upon an agreed schedule and budget.
- Result: The outcome or impact of an intervention or change. Results can be intended or unintended, and positive and/or negative.
Appendix O: References
Agence pour le Développement de l'Administration Électronique, Guide méthodologique MAREVA : Analyse de la valeur des projets d'ADELE, Ministère du Budget et de la Réforme de l'État, 2005, http://www.adae.gouv.fr/IMG/pdf/050405_MAREVA_GuideMethodologique_vf.pdf
This document covers the e-government program ADELE from France, which sets out the government's online strategy for the period of 2004-2007. To evaluate gains and savings for each e-government project, the French government developed the evaluation methodology called MAREVA, which enables a precise evaluation of financial gains of e-government services for the State and the public sector, as well as of gains and benefits for end users.
Australian Government, Demand and Value Assessment Methodology, Information Management Office, 2004.
The methodology presented in this document represents the culmination of over a year's effort designing and refining a standardized system to forecast and articulate demand and value in any proposed e-government service.
Arveson, P. What is the Balanced Scorecard?, Balanced Scorecard Institute, 1998, http://www.balancedscorecard.org/basics/bsc1.html
This webpage provides an overview of Robert S. Kaplan and David Norton's balanced scorecard management system, a strategic management approach that was developed in the early 1990s. The system suggests viewing an organization from four perspectives (learning and growth, business process, customer, and financial) and developing metrics and collecting data based on these perspectives.
Binnendijk, A., Results Based Management in the Development Co-operation Agencies: A Review of Experience, DAC Working Party on Aid Evaluation, 2000, http://www.oecd.org/dataoecd/secure/14/29/31950852.pdf
This paper is based on a document review of the experiences and practices of selected OECD Member development co-operation agencies with establishing performance or results based management systems. Covered in the review are the experiences of seven donor agencies establishing and implementing their results based management systems, comparing similarities and contrasting differences in approach.
Canadian Transportation Agency, Performance Measurement Framework, 2004, http://www.cta-otc.gc.ca/about-nous/excellence/performance/performance-eng.pdf
This framework's purpose is to provide a consistent approach for systematically collecting, analyzing, utilizing, and reporting on the performance of the Canadian Transportation Agency's programs and activities. This document presents an overview of the framework, as well as performance measurement principles, the program management process, and key steps for measuring performance.
Covey, S., The Seven Habits of Highly Effective People: Restoring the Character Ethic, New York, NY: Simon and Schuster, 1989.
This book contains an integrated approach to solving personal and professional problems by learning principles rather than merely practices. The Seven Habits are a step-by-step pathway to the principles of fairness, integrity, honesty, and human dignity that give security to adapt to change in family and business lives.
Government of Canada, Privy Council Office, Regulatory Affairs and Orders in Council Secretariat: Glossary, 2001, http://www.pco-bcp.gc.ca/raoics-srdc/default.asp?Language=E&Page=glossary
This glossary provides unofficial definitions of Privy Council Office (PCO) terms in order to facilitate users' understanding of PCO documents and information.
Government of South Australia, Triple Bottom Line, Department for Environment and Heritage, 2005, http://www.environment.sa.gov.au/sustainability/triple_bottom_line.html
This webpage provides an overview of Triple Bottom Line (TBL) reporting and its benefits, as well as measures taken by the Government of South Australia and departmental agencies in implementing the practice.
Kaplan, R. and D. Norton, The Balanced Scorecard: Translating Strategy into Action, Boston, MA: Harvard Business School Press, 1996.
The Balanced Scorecard is a management system designed to channel abilities, energy, and knowledge toward achieving long-term strategic goals. Encompassing current and future performance, Kaplan and Norton's method can be used in four categories to meet organizational objectives: financial performance, customer knowledge, internal business processes, and learning and growth.
Lau, E. E-Government and the Drive for Growth and Equity, Organisation for Economic Cooperation and Development, E-Government Project, 2005, http://www.dsg.ae/en/ds_images/conf/05_lau.pdf
This paper presents three broad types of benefits related to e-government: financial, public, and economic, with emphasis on the latter two, arguing that they are the least well understood. The proposed outline of benefits seeks to allow governments to support investment decisions and evaluate results.
Office of the Auditor General of Canada, December 2000 Report of the Auditor General of Canada, 2000, http://www.oag-bvg.gc.ca/domino/reports.nsf/html/0019xe01.html
The December 2000 volume of the Office of the Auditor General (OAG) report contains 18 chapters of audits of governmental departments and progress of reporting performance to Parliament. The chapter on reporting performance contains a results chain diagram that was adapted into section 3.1 of this report.
Office of the Auditor General of Canada, Implementing Results-Based Management: Lessons from the Literature, 2000, http://www.oag-bvg.gc.ca/domino/other.nsf/html/00rbm-eng.html
A follow-up to a review prepared by the Office of the Auditor General in 1996, this report is a concise synthesis of lessons learned from implementing results-based management in a variety of Canadian and international jurisdictions. The first review, summarized in Annex A, focused on implementation, while this update also includes lessons learned on more operational issues such as development of indicators, data collection, analysis, monitoring, and reporting.
Organisation for Economic Co-operation and Development, Evaluation and Aid Effectiveness: Glossary of Key Terms in Evaluation and Results Based Management, DAC Working Party on Aid Evaluation, Development Assistance Committee, 2002, http://www.oecd.org/dataoecd/29/21/2754804.pdf
This document contains a glossary of terms relating to quality assurance, stakeholders, logical framework, results-based management, evaluation tools, and types of evaluations.
Organisation for Economic Co-operation and Development, OECD E-Government Project – Costs and Benefits of E-Government: Identifying Public Benefits, Public Governance and Territorial Development Directorate, Public Governance Committee, 2005.
This report is designed as a scoping paper that looks at different dimensions of the public benefits of e-government, and makes tentative suggestions as to how these might be measured. It provides answers relating to the non-financial public benefits governments can expect from e-government, how these softer indicators can be identified, categorized, and measured, as well as the proxy measures that exist for providing an estimate of public benefits.
Plantz, M. et al., Outcome Measurement: Showing Results in the Nonprofit Sector, United Way of America, 1997, http://national.unitedway.org/outcomes/resources/What/ndpaper.cfm
This article describes the activities of non-profit agencies in relation to Outcome Management initiatives, discussing 30 lessons learned and seven key challenges to be overcome. It also summarizes the history of performance measurement in the non-profit health and human services sector and defines the key concepts of outcome measurement.
Schacter, M., Results-based Management at the Water Cooler: Perspectives from the working level on RBM, Mark Schacter Consulting, 2004.
Based on opinion data gathered from approximately 100 public servants at a series of results-based management (RBM) workshops, this paper provides a window into perceptions of working-level officials about the implementation of RBM in Canada's public service. Analysis of the data suggests that public-sector staff have six types of concerns about RBM implementation, two of which are predominant: high-level leadership for RBM and technical capacity to implement RBM.
Treasury Board of Canada Secretariat, Business Transformation Enablement Program – Strategic Design and Planning Methodology, 2004, /btep-pto/documents/2004/method/method-eng.pdf
This document describes the first release of the BTEP Design and Planning Methodology, which is the overall process methodology for business transformation. It is intended to be used by business transformation teams responsible for producing transformation project deliverables. The goal of BTEP is to enable coherent business design across the government with a formal, standards-based approach that will guide and expedite business transformation to meet the government's high-level business objectives.
Treasury Board of Canada Secretariat, Changing Management Culture: Models and Strategies to Make It Happen, 2003, http://www.tbs-sct.gc.ca/cmo_mfc/Toolkit2/GCC/cmc-eng.pdf
This guide presents a step-by-step approach to managing change, one that deputy ministers, heads of agencies, and their executive teams can follow when undertaking management reforms. For illustration purposes, the guide focuses on Modern Comptrollership, but it is generic in nature and its approach can be applied to any effort to change management culture.
Treasury Board of Canada Secretariat, Companion Guide: The Development of Results-based Management and Accountability Frameworks for Horizontal Initiatives, 2002, http://www.tbs-sct.gc.ca/cee/tools-outils/comp-acc00-eng.asp
This guide was developed to complement the Guide for the Development of Results-based Management and Accountability Frameworks and provide federal managers with practical advice on how to develop effective RMAFs for horizontal initiatives. It addresses the challenges of building an effective team that will draft the RMAF, covers the five main components of an RMAF, and provides a list of additional lessons learned and reference documents.
Treasury Board of Canada Secretariat, An Enhanced Framework for the Management of Information Technology Projects, Project Management Office, Financial and Information Management Branch, 1996, http://www.tbs-sct.gc.ca/emf-cag/abu-ans/itp-pti/itp-pti00-eng.asp
This paper describes a proposed enhanced framework for the management of information technology projects in the federal government. This enhanced framework is designed to ensure that government information technology projects fully meet the needs of the business functions they are intended to support, deliver all expected benefits, and are completed within their approved time, cost, and functionality.
Treasury Board of Canada Secretariat, An Enhanced Framework for the Management of Information Technology Projects: Creating and Using a Business Case for Information Technology Projects, Project Management Office, Chief Information Officer Branch, 1998, http://www.tbs-sct.gc.ca/emf-cag/business-rentabilisation/business-rentabilisation-eng.asp
This document was designed to ensure that federal government IT projects fully meet the needs of the business functions they are intended to support, deliver all expected benefits, and are completed on time and within budget. Moreover, it identifies the need for a business case analysis before a government IT investment can be approved. The guide can be used as a planning tool for users to mark and monitor the factors that are crucial to implementing IT successfully.
Treasury Board of Canada Secretariat, An Enhanced Framework for the Management of Information Technology Projects Part II – Solutions: Putting the Principles to Work, Chief Information Officer Branch, 1998, http://www.tbs-sct.gc.ca/emf-cag/abu-ans/ppw-slp/ppw-slp00-eng.asp
This document is a companion to Part I, which was approved and published in May 1996. The purpose of the document is to facilitate implementation of the Enhanced Framework within federal government departments by providing an overview of the Enhanced Framework, identifying where and how to begin the process of implementation, outlining solutions to assist departments in applying the Framework, describing the roles and responsibilities of the key departmental players in project delivery, and providing guidance on how to get started.
Treasury Board of Canada Secretariat, Enterprise Value Management Outcome Management Practice Implementation Strategy, CIO Branch, Alignment and Stewardship, 2005.
This document outlines the recommended strategy for establishing an Outcome Management Practice within the Alignment and Stewardship Division of the CIO Branch, Treasury Board Secretariat. The key objective of the Outcome Management Practice is to help the Alignment and Stewardship Division further evolve in its role and to support improved decision-making, ensuring that the best value for the government enterprise is paramount in all choices made for Canada's portfolio of initiatives.
Treasury Board of Canada Secretariat, Guide to Realizing Outcomes from Government of Canada Programs (Draft), 2004.
This guide presents the outcomes realization process, which is described as the set of activities for planning, managing, and realizing desired outcomes from initiatives. Through value management, the guide provides a framework involving tools and techniques to proactively plan, manage, and monitor the realization of the outcomes of a change initiative.
Treasury Board of Canada Secretariat, Integrated Measurement Framework: Concept Paper (Draft), Chief Information Officer Branch, 2005.
This document presents ideas towards the development of an Integrated Measurement Framework (IMF) for the Chief Information Officer Branch (CIOB). It presents the IMF vision, initial strategies, the initial IMF design, and the next steps in the IMF development.
Treasury Board of Canada Secretariat, Management Accountability Framework, President of the Treasury Board, 2003, www.tbs-sct.gc.ca/maf-crg/index-eng.asp
This document was developed to provide deputy heads and all public service managers with a list of management expectations that reflect the different elements of current management responsibilities. It is intended to translate the vision of modern public service management into a set of management expectations. The Framework focuses on management results rather than required capabilities, provides a basis of engagement with departments, and suggests ways for departments both to move forward and to measure progress.
Treasury Board of Canada Secretariat, Outcomes Management Realization: Service Canada Policy – Outcomes Sought, prepared for Service Canada, 2005.
This overview consists of an outline of the outcomes sought by Service Canada policy, as well as a summary of the four stages involved in outcomes management realization.
Treasury Board of Canada Secretariat, Privacy Impact Assessment (PIA) E-Learning Tool, 2003, http://www.tbs-sct.gc.ca/pgol-pged/piatp-pfefvp/index-eng.asp
The Privacy Impact Assessment (PIA) e-learning tool is an introductory course that reviews the basic principles of privacy in Canada and discusses the fundamentals of the PIA process. The tool covers key privacy definitions, Canadian privacy legislation and policy, the main features and benefits of PIAs, and the key stakeholders involved in PIAs.
Treasury Board of Canada Secretariat, Privacy Impact Assessment Guidelines: A Framework to Manage Privacy Risks, 2002, http://www.tbs-sct.gc.ca/pubs_pol/ciopubs/pia-pefr/paipg-pefrld-eng.asp
This document contains guidelines that are intended to provide a comprehensive framework for the completion of a Privacy Impact Assessment (PIA). The PIA ensures that privacy principles and legislation are considered and adhered to throughout the lifecycle of a new program, service or initiative and where appropriate, for existing initiatives undergoing service transformation or redesign.
Treasury Board of Canada Secretariat, Results-Based Management and Accountability Framework of the Modern Comptrollership Initiative, 2003, http://www.tbs-sct.gc.ca/cmo_mfc/resources2/RMAF/RMAF.pdf
This Results-Based Management and Accountability Framework (RMAF) for the Modern Comptrollership Initiative (MCI) was developed to provide managers in departments, agencies, and at the Treasury Board Secretariat (TBS) with a single, comprehensive, and reliable instrument to evaluate and report on the performance of this major learning and culture-changing initiative for the Government of Canada. The document contains a profile of the MCI, guidance for ongoing performance measurement, and evaluation and reporting strategies.
Treasury Board of Canada Secretariat, Results-Based Management in Canada: Country Report Prepared for the OECD Outcome-Focused Management Project, Planning, Performance and Reporting Sector, Comptrollership Branch, 2000, http://www.ppx.ca/NewsArchives/PDF/Result_Based_Management.pdf
The objective of this report is to describe how outcome goals are defined and used, and how progress towards them is measured, in the Government of Canada. The report contains sections on how departments and agencies integrate results-based management into policy formulation and implementation, concrete examples illustrating aspects of results-based management, a comparison of the working terminology of the OECD with that of Canada, and questions on results-based management for the annual OECD Survey of Budgeting Development.
Trochim, W., Qualitative Validity, The Research Methods Knowledge Base, 2005, http://www.socialresearchmethods.net/kb/qualval.htm
This webpage provides an overview of four proposed criteria for judging qualitative validity, which are: credibility, transferability, dependability, and confirmability. These four criteria, proposed by Guba and Lincoln, are intended to be analogous to the traditional quantitative criteria, which are: internal validity, external validity, reliability, and objectivity.
U.S. Government, Balancing Measures: Best Practices in Performance Management, National Partnership for Reinventing Government, 1999, http://govinfo.library.unt.edu/npr/library/papers/bkgrd/balmeasure.html
This report represents an extensive undertaking to survey and interview agencies and companies about practices that contribute to improving service as well as business results. The findings show that the process followed was not exactly the same in every instance, but balancing business results with customer, stakeholder, and employee information generally produces marked improvement in performance, service, and overall satisfaction. The study's partners report gains in efficiency, data tied to strategic goals and measurement systems, and improved relationships with employees and customers.
U.S. Government, Rating the Performance of Federal Programs, Office of Management and Budget, 2004, http://www.gpoaccess.gov/usbudget/fy04/pdf/budget/performance.pdf
This document gives background information on the Program Assessment Rating Tool (PART) of the U.S. federal government. The PART is a systematic method of assessing the performance of program activities across the U.S. government. As a diagnostic tool, the main objective of the PART review is to improve program performance. The PART assessments help link performance to budget decisions and provide a basis for making recommendations to improve results.
U.S. Government, Serving the American Public: Best Practices in Performance Measurement, National Performance Review, 1997, http://govinfo.library.unt.edu/npr/library/papers/benchmrk/nprbook.html
This report documents the Performance Measurement Study Team's findings, which are to be used as a tool for public and private leaders and managers in identifying and applying best-in-class performance measurement and performance management practices. This intergovernmental benchmarking study identifies the processes, skills, technologies, and best practices that can be used by government to link strategic planning with performance planning and measurement by establishing and updating performance measures, establishing accountability for performance, gathering and analyzing performance data, and reporting and using performance information.