ARCHIVED - Integrated Strategy on Healthy Living and Chronic Diseases - Healthy Living Program Component



This formative evaluation focused on the relevance, success, and design and delivery effectiveness of the Program, with evaluation issues identified in each focus area. An evaluation framework had previously been developed for the Program; reviewing and updating that framework was the first step of the evaluation. The evaluation covered the period from November 2006 to September 2008, with some financial information extending to January 2009. PERT data was collected at the midway point (six months) for national projects funded in 2008 through the HLF. Data collection was conducted from September 2008 through February 2009.

2.1 Objectives

The objectives of the Formative Evaluation of the Healthy Living Program are to:

  • assess program design and its implementation;
  • assess success/progress and continuous improvement;
  • assess performance measurement systems in use, including the PERT;
  • identify lessons learned; and
  • make recommendations on key issues.

2.2 Evaluation Issues

The list of evaluation issues outlined below was developed based on the TB Evaluation Policy and the objectives of the assessment, and the issues were confirmed and refined through consultations with staff. The evaluation was guided by the Evaluation Framework contained in Annex B, which outlines the key evaluation issues that drive the more specific evaluation questions. To facilitate the collection of information in response to these questions, indicators, data sources and collection methods were established. All evaluation questions have been addressed within the scope of this report.

The key evaluation issues, as identified by the client, are as follows:

  • Relevance – does the Component continue to be consistent with departmental and government-wide priorities, and does it realistically address an actual need?
  • Success/Progress – is the Component effective, within budget and without unwanted outcomes?
  • Design and Delivery – are the most appropriate and effective means being used to achieve objectives, relative to alternative design and delivery approaches?

2.3 Methodologies

The evaluation used a multiple-lines-of-evidence approach. The key lines of evidence consisted of the following components, with further details outlined below:

  • document and web review;
  • the PERT system reports and documentation;
  • key informant interviews with PHAC staff and management;
  • key informant interviews with stakeholders; and
  • an on-line survey with the Intersectoral Healthy Living Network E-Bulletin subscribers.

The document review focused on three broad categories of documents, namely: Healthy Living Program policy documents; PHAC strategy and planning documentation (PHAC Five-Year Strategic Plan, Integrated Strategy on Healthy Living and Chronic Disease); and Program documentation on the four components (HLF, Social Marketing, IHLN and KD&E). The list of documentation reviewed is contained in Annex C. Key web sites were also reviewed as required.

The performance measurement reports and documentation reviewed included the Project Evaluation and Reporting Tool (PERT) Reports, the 6-month PERT Analysis and Summary Report, the PERT User Guide, and the Program Consultant Orientation Manual.

Key informant interviews were conducted with internal staff and management. Interview lists were provided by the project authority, and the interviews were carried out by the consulting team using semi-structured interview guides (Annex D) approved by the project authority. Interviews were conducted with 18 staff and senior management, as shown in the table below.

Table 3: Interviews - Internal Staff and Management

  Groups                                                             Number
  Healthy Living Program Staff and Management                            11
  Healthy Living Program Regional Managers and Program Consultants        7

Key informant interviews were also conducted with external stakeholders. In total, 22 external stakeholders were interviewed by the consulting team, as shown in the table below. Semi-structured interviews were conducted using the guides (Annex D) approved by the project authority. Interview lists were provided by the project authority, with the exception of unsuccessful applicants, who were selected blindly from the unsuccessful applicant letter file. Each interview lasted approximately one hour.

Table 4: Interviews - External Stakeholders

  Groups                                                      Number   Total Population
  Funding Recipients                                              12                 13
  Unsuccessful Applicants to HLF                                   3                 38
  Provinces and Territories for Bilateral Agreement process        3                 13
  IHLN: Co-Chairs of the HLIG and HLIG members                     3                 20
  Other                                                            1                N/A

An on-line survey was used to collect quantitative information from IHLN subscribers. An invitation was sent to the 700 Healthy Living E-Bulletin subscribers. Of these, 124 initiated the survey, a response start rate of 17.7%, and 71 completed it, a completion rate of 10.1%. Based on the 124 subscribers who initiated the survey, the results carry a confidence interval of 8% at the 95% confidence level; based on the 71 who completed the survey, the confidence interval is 11% at the 95% confidence level.
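The reported confidence intervals appear consistent with a standard margin-of-error calculation for a proportion at the 95% confidence level (z ≈ 1.96, worst-case p = 0.5), applying a finite population correction for the 700 subscribers. A minimal sketch under those assumptions:

```python
import math

def margin_of_error(n, N, z=1.96, p=0.5):
    """Margin of error for a sample proportion, with finite population correction.

    n: sample size; N: population size; z: critical value (1.96 for 95%);
    p: assumed proportion (0.5 gives the widest, most conservative interval).
    """
    se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * se * fpc

N = 700  # E-Bulletin subscribers invited
print(round(margin_of_error(124, N) * 100))  # survey initiators: prints 8
print(round(margin_of_error(71, N) * 100))   # survey completers: prints 11
```

This reproduces the reported 8% and 11% intervals; the function name and parameter defaults are illustrative, not taken from the evaluation's own tooling.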

2.4 Limitations

The interviews were designed to provide qualitative data on specific topics. In the case of the HLF national stream, several interview dimensions were included (funding recipients, internal and external interviewees) to allow for corroboration of findings. Interview results were also compared with document review and survey results where possible.

From a regional stream perspective, only three bilateral process interviews were conducted with provincial and territorial representatives; an additional nine were conducted with PHAC regional and headquarters staff. The small number of P/T consultations limits the reliability of conclusions drawn for that stream.

PERT data covered only the first six months of project implementation and did not provide a full depiction of outcomes, as many of those questions were to be addressed in subsequent reporting periods or at project end. The absence of this type of data was partially compensated for by qualitative information obtained during interviews.

While there is evidence of some performance reporting for all four components, no targets have been set against which to measure performance. As a result, much of the detailed performance information cannot be meaningfully assessed without targets or benchmarks.
