Evaluation of the Hydrological Service and Water Survey: chapter 4


3.0 Evaluation Design

3.1 Purpose and Scope

The evaluation assessed the relevance and performance (effectiveness, efficiency and economy) of the Hydrological Service and Water Survey and covered the five-year period from 2008-2009 to 2012-2013. An evaluation of the Hydrological Service and Water Survey is part of Environment Canada (EC)’s 2012 Risk-based Audit and Evaluation Plan, which was approved by the Deputy Minister. The evaluation was conducted to meet the coverage requirements of the Treasury Board of Canada Policy on Evaluation, which requires that all direct program spending be evaluated at least once every five years.

The evaluation focused primarily on the Hydrological Service and Water Survey in the context of the Water Survey of Canada (WSC) as the single national organization responsible for the collection, interpretation and dissemination of hydrometric data on behalf of EC, Aboriginal Affairs and Northern Development Canada (AANDC) and the provincial and territorial government partners in the National Hydrometric Program (NHP). The operations of the WSC are jointly funded by EC and the Federal/Provincial/Territorial (F/P/T) partners. The scope also included the WSC’s role in the governance of the NHP, but excluded a detailed assessment of NHP management performed by individual P/T jurisdictions. The evaluation follows two recent audits: Monitoring Water Resources (September 2010), conducted by the Commissioner of the Environment and Sustainable Development (CESD); and the Audit of the National Hydrometric Program (March 2010), conducted by the Internal Audit Directorate, Audit and Evaluation Branch of EC. The evaluation took the findings of these audits into account to avoid duplicating data collection activities and to minimize respondent burden. The nine evaluation questions examined, along with their associated indicators and data sources, are presented in Annex 1.

3.2 Evaluation Approach and Methodology

Data collection and analysis for the evaluation involved:

  • Document Review: A review of documentation relating to the relevance and performance (effectiveness, efficiency and economy) of the Hydrological Service and Water Survey, building on the material contained in the two earlier audit reports published by EC’s Internal Audit Directorate, Audit and Evaluation Branch, and the CESD in 2010. Documents included Canada Water Act annual reports, departmental planning and performance reports, internal planning and operational documents, financial data, and a limited amount of performance data extracted from program databases, in addition to the two 2010 audit reports. The document review addressed all nine evaluation questions.
  • Literature Review: A review of published and grey literature relating to the rationale for, and benefits of, public sector delivery of hydrometric programs, and to the design and performance of comparable programs in selected other jurisdictions. The literature review investigated aspects of the program’s relevance, design and effectiveness (evaluation questions one, three, four and eight).
  • Key informant interviews: A total of 50 individual and 4 joint interviews were conducted, involving a total of 58 participants:
    • Program managers and staff at EC national and regional offices (15 interviews with 16 participants);
    • Program partners representing P/T governments and AANDC (11 interviews with 13 participants);
    • Representatives of hydrological service agencies in the US, Australia and New Zealand (3 interviews, 3 participants); and
    • A sample of secondary users of the WSC’s hydrometric data (25 interviews with 26 participants) from the public and private sectors.
    Potential participants in these interviews were drawn from suggestions provided by WSC managers, as well as additional candidates suggested by interviewees themselves. This line of enquiry addressed all nine evaluation questions, although the user interviews focused on program relevance, design and achievement of intended outcomes (evaluation questions one to four, and six). Interview guides were tailored for each type of participant, drawing on a master set of questions (footnote 9). Interviews were conducted by telephone or, in some cases, in person, and lasted between 45 and 90 minutes. Qualitative analysis of the interview findings was conducted (footnote 10).
  • Analysis and Reporting: Findings from each line of enquiry were reviewed and integrated, and information relating to each evaluation question was synthesized. The synthesis used a triangulation process in which the findings from each line of enquiry were aligned to each evaluation question and compared to identify areas of commonality, areas of divergent findings or opinions, and possible reasons for such variations.

3.3 Limitations

Key informant interviews were the principal source of information on the program’s effectiveness and efficiency. While the program and partner samples were generally representative of their associated populations, it is possible that the user sample was not representative of the overall mix and distribution of secondary user types. This reflected the program’s limited information on its users and the absence of published data or directories of potential users. However, the responses exhibited a high degree of uniformity and can be considered illustrative.
