Evaluation of the Protected Areas Program: chapter 6
3.0 Evaluation Design
3.1 Purpose and Scope
In accordance with the Treasury Board (TB) Policy on Evaluation (2009), the evaluation examined the relevance (i.e., continued need, alignment with government priorities and federal roles and responsibilities) and performance (i.e., achievement of expected outcomes, demonstration of efficiency and economy) of EC’s PA program activitiesFootnote12 over a five-year period from 2008-2009 to 2012-2013.
This evaluation was part of EC’s 2012 Risk-Based Audit and Evaluation Plan. The evaluation responds to requirements of both the Financial Administration Act (i.e., to evaluate 100% of all ongoing programs of grants and contributions every five yearsFootnote13), and the Policy on Evaluation (i.e., to evaluate 100% of direct program spending every five yearsFootnote14).
The scope of the evaluation includes the overall PA program and the IIBA G&C program, but excludes the HOTO, NWT-PAS, and CBRA.Footnote15 The inclusion of the IIBA within the evaluation’s scope fulfills specific evaluation commitments set out at the inception of this agreement.
3.2 Evaluation Approach and Methodology
This section describes the evaluation methodology, which was designed to meet the evaluation’s timeframe and budget while ensuring triangulation of findings for each evaluation question across multiple lines of evidence. The following six core methods were used for the evaluation, divided between:
- Primary data sources, including key informant interviews and case studies; and
- Secondary data sources, including a document review, file review, a review of performance/financial data, and a literature review.
Primary Data Sources
Key Informant Interviews
Key informant interviews generated qualitative data on the views and experiences of both internal and external stakeholders of the PA program. French and English semi-structured interview guides were developed to reflect the experiences of each respondent group, and addressed all relevant issues/questions outlined in the evaluation framework. This semi-structured approach ensured that similar information was collected from all respondents, while allowing respondents to provide information or opinions on issues not specifically identified in the interview protocol. Interviews were conducted with:
- 32 internal stakeholders who have played a significant role in the design and delivery of the PA program, including EC senior management and program managers, as well as regional stakeholders such as regional managers and program staff;Footnote16 and
- 9 external stakeholders who are knowledgeable about the PA program, including NGOs/subject matter experts, IIBA stakeholders (including First Nations representatives), and representatives from international jurisdictions.
Case Studies
Protected Areas site reviews were performed using a multiple-case study design,Footnote17 which allowed for the collection of both qualitative and quantitative data at a high level of detail for specified NWAs or MBSs. Data collection for the case studies included the review of site-level files and data, as well as interviews with three to five internal and external stakeholders per site, for a total of 19 interviews across all five case studies.Footnote18
Case study selection involved the initial identification of ten candidate sites by program staff, which was then narrowed to five sites reflecting as broad a variety as possible across criteria such as type, region, age, and size.Footnote19 Based on these criteria, the following five sites were reviewed:
- Four Protected Areas (2 MBSs and 2 NWAs), including Cap Tourmente (Quebec), CFB Suffield (Alberta), Last Mountain Lake (Saskatchewan), and Long Point (Ontario); and
- One NWA established under the IIBA: Akpait in Nunavut.Footnote20
The review of site-level files involved the systematic analysis of all documents, files and data generated from each site since its inception, with a focus on more recent sources produced during the five-year evaluation period. This information was obtained from EC program representatives. Case study evidence was analyzed to identify patterns and differences across and between site types and to draw out lessons learned.
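The report does not specify the mechanics of the narrowing exercise, but selecting five sites that maximize variety across categorical and numeric criteria resembles a maximum-diversity selection problem. The following is a minimal, hypothetical sketch of one greedy approach; all site names, attribute values and thresholds are illustrative assumptions, not program data.

```python
import itertools

# Hypothetical candidate sites; attribute values are illustrative only.
candidates = {
    "Site A": {"type": "NWA", "region": "Quebec",  "age": 35, "size_ha": 2400},
    "Site B": {"type": "MBS", "region": "Alberta", "age": 70, "size_ha": 45800},
    "Site C": {"type": "NWA", "region": "Ontario", "age": 12, "size_ha": 800},
    "Site D": {"type": "MBS", "region": "Quebec",  "age": 90, "size_ha": 120},
    "Site E": {"type": "NWA", "region": "Nunavut", "age": 3,  "size_ha": 77400},
    "Site F": {"type": "MBS", "region": "Prairie", "age": 55, "size_ha": 15600},
}

def dissimilarity(a, b):
    """Count how many criteria differ; numeric fields use coarse thresholds."""
    return (
        (a["type"] != b["type"])
        + (a["region"] != b["region"])
        + (abs(a["age"] - b["age"]) > 25)
        + (abs(a["size_ha"] - b["size_ha"]) > 10000)
    )

def select_diverse(sites, k):
    """Greedy max-min selection: seed with the most dissimilar pair, then
    repeatedly add the site farthest from everything already chosen."""
    seed = max(itertools.combinations(sites, 2),
               key=lambda p: dissimilarity(sites[p[0]], sites[p[1]]))
    chosen = list(seed)
    while len(chosen) < k:
        best = max((s for s in sites if s not in chosen),
                   key=lambda s: min(dissimilarity(sites[s], sites[c])
                                     for c in chosen))
        chosen.append(best)
    return chosen

print(select_diverse(candidates, 5))
```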
Secondary Data Sources
The document review, file review, performance/financial data review, and literature review all followed similar approaches. For each method, an inventory of relevant documents, files and data was first identified by evaluation committee members, through the scoping interviews, and as part of the evaluation fieldwork. These sources were then systematically reviewed to extract and analyze information relevant to all evaluation questions, which was integrated by indicator and/or evaluation question in an evidence table.
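As an illustration of the evidence-table structure described above, the sketch below files each extracted finding under its evaluation question and indicator, then flags cells supported by fewer than two independent lines of evidence; all question, indicator and source labels are hypothetical, as the report does not publish its internal table.

```python
from collections import defaultdict

# Evidence table: (evaluation question, indicator) -> list of (source, finding).
evidence = defaultdict(list)

def record(question, indicator, source, finding):
    """File one extracted finding under its question and indicator."""
    evidence[(question, indicator)].append((source, finding))

# Hypothetical entries for illustration.
record("Q1: Continued need", "Trends in habitat loss",
       "document review", "Planning documents note ongoing wetland loss.")
record("Q1: Continued need", "Trends in habitat loss",
       "literature review", "Journal articles report similar national trends.")

# Triangulation check: flag question/indicator cells supported by fewer
# than two distinct lines of evidence.
for (question, indicator), findings in evidence.items():
    sources = {src for src, _ in findings}
    if len(sources) < 2:
        print(f"Weakly supported: {question} / {indicator} ({sources})")
```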
Document Review
This review focused on the analysis of information contained in a wide range of policy and planning documents, such as acts and regulations, policies and procedures, program documentation, IIBA documents, documents from NGOs and material from other sources. A detailed list of the documents reviewed is contained in Annex 2.
File Review
The files of a 20% sample of protected areas were reviewed. Protected areas were selected to provide a representative sample across several variables: type (NWA or MBS); region; size in hectares; and year of establishment. The file review sites are outlined in Annex 3. The file review included consideration of contribution agreements, annual/progress reports pertaining to these agreements (including financial files, where available), management plans and other files pertaining to individual sites, such as permits, species inventories, and site visit reports.
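The report does not describe the sampling mechanics, but a representative 20% draw of this kind is commonly implemented as a stratified sample. A minimal sketch under that assumption follows; the site records and stratification choices are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical inventory of protected areas; real records would come from
# program files. Fields mirror the stratification variables in the report.
sites = [
    {"name": "PA-01", "type": "NWA", "region": "Quebec",  "size_ha": 2400,  "established": 1978},
    {"name": "PA-02", "type": "MBS", "region": "Quebec",  "size_ha": 120,   "established": 1936},
    {"name": "PA-03", "type": "NWA", "region": "Prairie", "size_ha": 45800, "established": 2003},
    # ... in practice, the full inventory of NWAs and MBSs
]

def stratified_sample(records, fraction=0.20, seed=42):
    """Draw roughly `fraction` of records from each type/region stratum,
    keeping at least one site per stratum so no group is unrepresented."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[(rec["type"], rec["region"])].append(rec)
    sample = []
    for group in strata.values():
        k = max(1, round(fraction * len(group)))
        sample.extend(rng.sample(group, k))
    return sample

for site in stratified_sample(sites):
    print(site["name"], site["type"], site["region"])
```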
Performance/Financial Data
Performance and financial data were used to evaluate the performance of the PA program and its components, especially its effectiveness (e.g., the achievement of outputs and outcomes), performance management, and efficiency (e.g., the production of outputs in relation to costs, leveraging of G&C). Key external sources of relevant performance and financial data included the Conservation Areas Reporting and Tracking System (CARTS) maintained by the CCEA, the Nature Conservancy of Canada Conservation Blueprints, and Nature Canada reports.
Literature Review and International Comparison
The literature review examined relevant policy and journal literature, as well as protected area strategies, activities and outcomes in two comparable international jurisdictions: the United States and Australia. These countries were selected following an initial scan that considered similarities in federal/national involvement in protected areas (e.g., role, authority, governance model); Aboriginal land rights considerations (e.g., agreements, legislation/policies); and geographical and/or environmental similarities (e.g., size of area, important habitats for birds and/or wildlife).Footnote21 The information analyzed during the literature review and international comparison supported an assessment of alternative program models, as well as of the PA program’s efficiency, by comparing its activities and products with those delivered by similar initiatives.
3.3 Limitations
This section outlines the challenges experienced during the conduct of the evaluation, as well as the related limitations and strategies used to mitigate their impact. While each challenge has the potential to impinge upon the reliability of findings, care was taken to address evaluation questions and issues using multiple lines of evidence wherever possible in order to enhance the robustness of research conclusions. Key challenges for the evaluation of the PA program included:
Inconsistency in File and Financial Information: File content and level of detail are not consistent from one region and/or site to the next, as there is no consistently applied definition of what constitutes a file for a PA. Financial data are likewise limited and inconsistent from one region and/or site to the next with respect to the type and detail of the information. The resulting information gaps were mitigated or filled in part through detailed interviews with CWS regional staff.
Lack of Performance Data: The Performance Management Framework is limited to only four indicators, and there were no consistent performance data on the sites. The only performance data collected from the file review related to the size of sites and their date of establishment. The impact of this paucity of performance data was mitigated to some extent by the use of multiple alternative data sources.
Difficulty Comparing to Other Organizations: The performance data available from outside sources varied in publication date, making it difficult to compare, for example, OECD data with CARTS data and with data from NGOs. In addition, financial data from the PA program were difficult to compare for efficiency purposes with data from other organizations, such as Parks Canada, because the different stakeholders involved in protecting lands do so for different purposes and under different constraints. For example, the Nature Conservancy of Canada and the US National Wildlife Refuge System make extensive use of volunteers, while Parks Canada deals with significantly larger numbers of visitors to its sites, which incurs costs not typically associated with PAs. To enhance the comparability of these sources, only expenses related to site management were examined and compared. Further, the potential shortcomings of this analysis are clearly articulated when presenting these results.
Difficulty Obtaining International Data: International comparison data were sparse and, where not publicly available, difficult to obtain. The latest set of OECD environment indicators, for example, dated from 2008. Further, it was very challenging to identify appropriate contacts in the selected jurisdictions and, once identified, the contacts were very difficult to reach and/or could not provide the required information/data. Use of this line of evidence was limited, as direct comparisons with programs in the US and Australia were only possible at a national level, rather than at the program level examined in this evaluation. Limitations in this line of evidence were mitigated to some extent by outlining the limitations of these comparisons, as appropriate, throughout the report, and by triangulating these findings with other lines of evidence.
Difficulty Conducting Northern Interviews: Northern key informants were difficult to reach given their geographic remoteness and possibly due to interview fatigue, as many of the potential northern interviewees had already been interviewed as part of the 2011/12 IIBA 5-year review. The use of technological solutions (e.g., group teleconferences) was not feasible given limitations in the availability of these technologies and the geographic dispersion of potential respondents. Attempts to gather input from Inuit stakeholders in person proved equally ineffectual, as a plan to hold a focus group with all of the relevant stakeholders at the Meeting of the Parties in Iqaluit was scuttled when the meeting was postponed to occur outside the timelines for this evaluation. Care was taken to identify limitations in the number of northern respondents when reporting on these results.