ARCHIVED - Methodology
Two primary sources were used as lines of evidence in this formative evaluation report: a review of key documents and semi-structured key informant interviews with internal and external stakeholders. For reporting on Gs&Cs programs, additional information on project inputs and activities was used to assess progress on outputs and early outcomes.
Qualitative and Quantitative Data Sources
The formative evaluation includes data sources pertaining to the process and immediate outcomes of the implementation of the Surveillance Functional Component. The following qualitative approaches were used in conducting the evaluation:
- A document review
- Key informant interviews
The following quantitative approaches were used:
- Based on the interviews and document review, frequencies and cross-tabulations were calculated for responses regarding the alignment of the Surveillance Functional Component with strategic objectives, and progress made towards immediate and intermediate outcomes.
- Based on the document review, frequencies for coverage of Gs&Cs projects by target groups were also examined.
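The frequencies and cross-tabulations described above can be sketched in a few lines of code. The following is a minimal illustration only; the response records and rating categories are hypothetical, not the evaluation's actual data:

```python
from collections import Counter

# Hypothetical coded responses: each record pairs a respondent group with
# a rating of the Component's alignment with strategic objectives.
# (Illustrative data only; not the evaluation's actual responses.)
responses = [
    {"group": "internal", "alignment": "high"},
    {"group": "internal", "alignment": "high"},
    {"group": "internal", "alignment": "medium"},
    {"group": "external", "alignment": "high"},
    {"group": "external", "alignment": "low"},
]

# Simple frequencies for a single variable.
freq = Counter(r["alignment"] for r in responses)

# Cross-tabulation: alignment rating broken down by stakeholder group.
crosstab = {}
for r in responses:
    crosstab.setdefault(r["group"], Counter())[r["alignment"]] += 1

print(freq["high"])                  # 3
print(crosstab["internal"]["high"])  # 2
```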
The Evaluation Plan, which maps each data source and methodology to the evaluation questions addressed in this formative evaluation of the Surveillance Functional Component of the ISHLCD, can be found in Annex B.
Interview Guide Development
An interview guide was constructed to elicit feedback on the implementation of the Surveillance Functional Component (see Annex C). The interview guide was modified as required for different groups, including external stakeholders. Given the differences between program activities associated with Gs&Cs and other Surveillance Functional Component activities, a modified guide was developed for the PHAC coordinator and Surveillance Director as well as Gs&Cs recipients.
To ensure the qualitative analysis complemented the evaluation questions for the Surveillance Functional Component, each question in the interview guide was linked to the relevant evaluation question outlined in the Surveillance Functional Component Monitoring and Evaluation Plan (Public Health Agency of Canada, 2007a, p. 8). Based on the evaluation questions, indicators and data sources, interview questions and probes were developed using the ISHLCD Operational Matrix and the Surveillance Functional Component Logic Model to help focus the discussion.
For the purposes of this research, the concept of engagement was operationalized based on previous evaluation work done on collaboration. Respondents were asked to assess how effective the Surveillance Component has been in terms of engagement of organizations, sectors and jurisdictions to increase each of the relationship characteristics of collaboration outlined by Frey et al. (2006). Additional validation work on the concept of engagement is clearly required to support any further quantitative operationalization of this approach.
Prior to data collection, the interview guides were piloted with three individuals knowledgeable about the ISHLCD’s implementation and the work completed to date by the various committees associated with chronic disease surveillance.
2.1 Document Review
As part of the evaluation, key documents were reviewed. These documents were identified through a validation of key internal/external documents with stakeholders, where stakeholders were asked to identify gaps in terms of key guiding documents for the Surveillance Functional Component. Documents reviewed included strategic planning documents, meeting minutes, presentations, and progress reports that provided information on the context and processes for implementation of the Component. Key documents included results from the First and Second Implementation Reviews. The First Implementation Review, which examined the ISHLCD between October 2005 and November 2006, addressed the level of success the Strategy had on engaging appropriate stakeholders. The Second Implementation Review, covering December 2006 to December 2007, assessed governance structures and coordination mechanisms of the ISHLCD (Performance Management Network Inc., 2007; Public Health Agency of Canada, 2007d; Public Health Agency of Canada, 2008d; Public Health Agency of Canada, 2008c).
Analysts in the Evaluation Unit examined the documents for relevant information pertaining to issues of integration and engagement, including challenges and facilitators in the development and implementation of the Surveillance Functional Component. These documents, in conjunction with key informant interviews, were used to assess progress and relevance, including early outcomes.
2.2 Key Informant Interviews
Given the need for in-depth descriptive data for this formative evaluation, qualitative key informant interviews were conducted.
Identification of Key Informants
A purposive sampling strategy (Miles & Huberman, 1994, p. 27) was used to identify key informants, selecting individuals most likely to generate productive and in-depth discussions related to chronic disease surveillance. The selection criteria and recruitment strategy were developed in consultation with the Associate Director of the Surveillance Division, the Director, and the Surveillance Division management team. The selection criteria included identifying key informants who were:
- deemed knowledgeable about the Strategy
- able to represent the views of their colleagues in chronic disease-specific areas
Efforts were made to include broad representation across chronic disease surveillance areas, reflecting departmental surveillance reporting categories used in PHAC’s Report on Plans and Priorities (RPP) commitment areas.
A total of 40 respondents were identified as key informants. This included policy leaders and managers among PHAC officials, as well as P/T officials, academics and members of various NGOs. On several occasions, key informants chose to be interviewed together with a supervisor or colleague in order to provide a more complete interview, resulting in a total of 37 separate interviews.
Respondents were categorized according to whether they were internal or external stakeholders, as well as whether their perspective reflected that of a policy leader or program manager. Background information on informants’ surveillance area, committee participation, current roles and/or titles, and the interview date and interviewer were recorded in a chart, ensuring there was appropriate coverage according to the selection criteria listed previously, as well as within chronic disease areas. This process also allowed the interviews to be divided amongst the various interviewers, and progress to be monitored on a continuous basis.
Data Collection Process
Interviews were scheduled for approximately one hour, and ranged in length from just under an hour to two full hours. On average, interviews took 68 minutes to complete, for a total of approximately 38 hours of recorded interviews. The interviews were carried out between November 10th, 2008 and January 12th, 2009.
Key informants were provided with a copy of the interview guide, the Surveillance Functional Component Logic Model, and the ISHLCD Operational Matrix in advance of the interview. Internal key informants were interviewed either in-person or via telephone by analysts from the Evaluation Unit. Analysts from the Evaluation Unit, and/or contractors hired to complete the interviews, interviewed external key informants via telephone. All interviews were digitally recorded and transcribed verbatim.
A total of 35 of 37 (95%) planned interviews were completed as part of this work. Two external key respondents either declined to participate or did not respond to at least four requests to be interviewed. Respondents were fairly well balanced between those who worked at PHAC (57%, n=20) and those who worked outside of PHAC (43%, n=15), particularly considering that the majority of policy leaders among respondents were external to PHAC. See Tables 1 and 2 below. Respondents from PHAC included regional representatives. In terms of the breakdown of the 15 external respondents, 40% (n=6) were representatives from academia/hospitals, 33% (n=5) were from cancer care/registries, 13% (n=2) represented NGOs, and 13% (n=2) were staff of P/T governments.
| Policy Leader/Program Manager | n | % |
| --- | --- | --- |
| Policy Leader (9 external, 3 internal) | 12 | 34% |
| Program Manager (6 external, 17 internal) | 23 | 66% |
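The percentage shares reported for respondents follow directly from the counts given in the text. As a minimal arithmetic sketch (counts taken from the text above; rounding to the nearest whole percent):

```python
# Respondent counts as reported in the text.
total = 35
internal, external = 20, 15

def share(n, total):
    """Percentage share, rounded to the nearest whole percent."""
    return round(100 * n / total)

print(share(internal, total), share(external, total))  # 57 43

# Breakdown of the 15 external respondents.
academia, cancer_care, ngo, pt_gov = 6, 5, 2, 2
print([share(n, external) for n in (academia, cancer_care, ngo, pt_gov)])
# [40, 33, 13, 13]
```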
Among the 35 interviews conducted, two interviewees lacked sufficient background in the Strategy or Surveillance Component to comment in depth, and hence their interviews did not yield data rich enough to warrant transcription. Thus, coding and analyses were carried out on 33 interviews.
Coding and Analysis
PHAC Evaluation Unit staff conducted the data analysis. NVivo 8 (QSR International, n.d.), a qualitative analysis software package, was used to code the data and assist with analysis. The interview transcripts were imported into NVivo 8 as separate sources, and initially segmented based on the questions posed during the interview. ‘Segmenting’ is viewed as an analytic action that can be directly mapped onto certain portions of text, which allows the researchers to define “the boundaries of a narrative or segment” (MacQueen & Guest, 2008, p. 14). This process allowed all responses to be initially segmented according to their interview question, with responses corresponding to the specific areas of inquiry for this evaluation.
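Outside of NVivo, the initial segmenting step (splitting a transcript into responses keyed by the interview question) can be sketched as follows; the transcript format and the `Q<number>:` markers are assumptions for illustration only:

```python
import re

# Hypothetical transcript: each interview question is marked "Q<number>:".
transcript = """Q1: How well does the Component align with strategic objectives?
Respondent discusses alignment with strategic objectives.
Q2: What are the key lessons learned?
Respondent describes lessons learned during implementation."""

# Split on the question markers; MULTILINE anchors ^ at each line start.
parts = re.split(r"^(Q\d+):", transcript, flags=re.MULTILINE)
# re.split keeps the captured markers: ['', 'Q1', '...', 'Q2', '...'].
segments = {parts[i]: parts[i + 1].strip() for i in range(1, len(parts), 2)}

print(sorted(segments))  # ['Q1', 'Q2']
```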
Following the initial segmenting activities, the Evaluation Unit staff analyzed each question posed in the interview (all responses provided by informants) for key themes pertaining to the interview question specifically (e.g., lessons learned). The interview questions each related to the evaluation questions (e.g., design and delivery approaches). Certain key transcripts and/or nodes were examined further for relevant themes, and the first set of categories for data reduction was identified.
Finally, each analyst separately reviewed the textual data within the coded themes, and emerging categories and confirming examples were selected using a consensus validation approach. The analysts examined rival and/or competing themes (Miles & Huberman, 1994, p. 269) as applicable, considering each in the context of the major themes that emerged from the data. The analysts identified illustrative quotations – as applicable – at this stage of analysis. Additional codes were generated as themes or ideas emerged within the collective responses of each item, and by contrast across the group categories (e.g., external/internal and policy leader/program manager).
2.3 Methodological Limitations
One of the challenges raised in qualitative research is the issue of data quality and rigour, including validity and reliability of the data. Criticisms include concerns about self-reporting from the key informants and potential issues of bias. For this evaluation, various measures were taken to ensure the data used were accurate and reliable, including:
- Ensuring an adequate number of interviews were conducted and that they were representative of both stakeholders and disease areas (Miles & Huberman, 1994, p. 264);
- Developing clear guidelines on interview transcription and having the interviews transcribed by one person (Maxwell, 2005, pp. 208, 211);
- Clearly outlining the process used for coding, analysis and making conclusions from the data (Morse, 1995), including outlining an audit trail of steps taken and decisions made (Miles & Huberman, 1994, p. 286);
- Triangulating the data, both in terms of using multiple data sources and having multiple analysts code and analyze the data (Creswell, 2007, p. 208; Miles & Huberman, 1994, p. 267); and
- Making the data accessible for others (internal to government) for additional analyses and potentially to confirm the results (Miles & Huberman, 1994, p. 278).
In terms of scope, this evaluation focuses mainly on internal stakeholders and their perceptions of the implementation of the Surveillance Component. Fifteen (15) external stakeholders were also interviewed; however, few external participants had a sufficient level of awareness of the Strategy. A comprehensive evaluation of external stakeholders is beyond the scope of the current evaluation, as the intent is to gather information to inform the continued implementation and further development of the Surveillance Component. As well, interviews were conducted with key informants who had a certain depth of knowledge about the Strategy and the Surveillance Functional Component. Hence, the evaluation does not capture the full breadth of responses, particularly from informants with less in-depth knowledge of this work. In addition, by emphasizing the ISHLCD Operational Matrix and the Surveillance Logic Model, this evaluation largely concentrates on assessing progress toward integration, engagement and early outcomes.
An essential limitation of this evaluation work was the lack of a performance measurement system for the Surveillance Functional Component, and limited information from the previous Implementation Reviews. As a result, this evaluation, by necessity, takes on the characteristics of a point-in-time examination of implementation, while establishing the possibility of follow-up work in support of the Functional Component synthesis report. Respondents were required to provide retrospective assessments of Strategy performance, which involved a balance of internal and external perspectives.
An additional limitation was that four different interviewers conducted the key informant interviews. However, the potential effect on the administration of the interview protocol was addressed by ensuring consistency through interviewer training, which included participating in an interview, reviewing an annotated interview guide, and reading transcripts from previous interviews.
Internal PHAC evaluators conducted this evaluation; however, these evaluators were not involved in the development or implementation of the Surveillance Functional Component. While some may argue that external evaluators are more “objective” than internal evaluators, this is not supported by the literature (Conley-Tyler, 2005). Hence, the use of internal evaluators is not a limitation of this evaluation.
A final limitation was that no existing valid assessment tool was available to adopt or adapt for this evaluation; hence, an interview guide was developed specifically for this purpose. To help overcome this limitation, the interview guide was pilot tested with three experienced internal respondents.
Hence, while there are a number of limitations in this evaluation, mechanisms were put in place to address these limitations where possible.