3. Evaluation Description
3.1 Evaluation Scope
This evaluation was scheduled under the Public Health Agency of Canada/Health Canada approved Five-Year Evaluation Plan 2013-2014 to 2017-2018 and was undertaken to meet the requirements of the Financial Administration Act.
The evaluation focused specifically on activities funded by the Federal Initiative. As such, the following activities were considered out of scope: activities related to tuberculosis, hepatitis B and C and sexually transmitted infections; the Canadian HIV Vaccine Initiative; and any efficiencies gained from integrating HIV/AIDS activities with other communicable disease areas.
Data collection activities took place between April and August 2013. The evaluation examined activities conducted by all four federal partners involved in the Federal Initiative for the period of 2008-09 to 2012-13, including activities implemented through the regional offices of the partner departments.
3.2 Evaluation Issues
The evaluation issues were aligned with the Treasury Board of Canada's Policy on Evaluation (2009) and considered the five core issues under the two themes of relevance and performance, as shown in Appendix 3. For each core issue, specific questions were developed based on program considerations, and these questions guided the evaluation process.
3.3 Evaluation Approach and Design
An outcome-based evaluation approach was used to assess the progress made towards the achievement of expected outcomes and to identify lessons learned.
The Treasury Board's Policy on Evaluation (2009) outlines the core issues of relevance and performance to be addressed in evaluations of federal programs, and the data collection methods were designed to meet the objectives and requirements of the Policy. A non-experimental design was used: there was neither random assignment of sample groups for inclusion in the evaluation nor a control group against which the sample could be compared. Because the design was non-experimental, the evaluation relied on correlation to demonstrate effect and could not establish causation. The evaluation was therefore designed to demonstrate the likely contributions of the program to the expected outcomes, rather than direct causal links between the program and those outcomes.
3.4 Data Collection and Analysis Methods
Evaluators collected and analyzed data from multiple sources:
- program performance data
- document and file review
- literature review
- financial data review
- key informant interviews (see Appendix 4 for more detailed information).
Data were analyzed by triangulating information gathered from the different sources and methods listed above. This included:
- systematic compilation, review and summarization of data to illustrate key findings
- thematic analysis of qualitative data
- trend analysis of comparable data over time
- comparative analysis of data from disparate sources to validate summary findings.
3.5 Limitations and Mitigation Strategies
Challenges are inherent in any evaluation, and limitations can affect evaluation findings. Mitigation strategies were therefore employed to ensure that the data collected would support a credible evaluation report with evidence-based conclusions and recommendations.
The following table outlines the limitations, their impact or potential impact on the evaluation, and the mitigation strategies employed to limit those impacts.
Limitation | Impact/Potential Impact | Mitigation Strategy
---|---|---
Limited primary data collection | |
Limited quality and/or availability of financial data | |
Limitations in performance data | |
Notwithstanding the limitations outlined above, the Evaluation Directorate is confident that the findings accurately reflect and assess the activities of the Federal Initiative over the 2008-09 to 2012-13 period.