Between worst and best: developing criteria to identify promising practices in health promotion and disease prevention for the Canadian Best Practices Portal

Nadia Fazal, MPH (1); Suzanne F. Jackson, PhD (1); Katy Wong, MSc (2); Jennifer Yessis, PhD (2); Nina Jetha, MPH (3)

https://doi.org/10.24095/hpcdp.37.11.03

This article has been peer reviewed.

Author references:

1. Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
2. Propel Centre for Population Health Impact, University of Waterloo, Waterloo, Ontario, Canada
3. Public Health Agency of Canada, Ottawa, Ontario, Canada

Correspondence: Nadia Fazal, University of Toronto, 296 Glebemount Avenue, Toronto, ON  M4C 3V3; Tel: 416-421-3806; Email: nadia.fazal@mail.utoronto.ca

Abstract

Introduction: In health promotion and chronic disease prevention, both best and promising practices can provide critical insights into what works for enhancing the health-related outcomes of individuals and communities, and how/why these practices work in different situations and contexts.

Methods: The promising practices criteria were developed using the Public Health Agency of Canada's (PHAC's) existing best practices criteria as the foundation. They were modified and pilot tested (three rounds) using published interventions. Theoretical and methodological issues and challenges were resolved via consultation and in-depth discussions with a working group.

Results: The team established a set of promising practices criteria, which differ from the best practices criteria on six specific measures.

Conclusion: While a number of complex challenges emerged in the development of these criteria, they were thoroughly discussed, debated and resolved. The Canadian Best Practices Portal's screening criteria allow one to screen for both best and promising practices in the fields of public health, health promotion, chronic disease prevention, and potentially beyond.

Keywords: best practices, promising practices, screening criteria, intervention studies, evaluation, public health, health promotion, chronic disease prevention

Highlights

The criteria for promising practices were developed using an iterative review process. Promising practices differ from best practices on the following six measures:

  1. The program can be reported in grey literature as opposed to only in peer-reviewed articles.
  2. The positive program outcomes can be short-term only (within 6 months of the intervention period) or only during the intervention period.
  3. The program can be low impact in that the positive outcomes affect less than half of the people they were meant to affect, or the positive outcomes are significant at a minimal acceptable level.
  4. The program can be implemented in the field only once, and this can be a pilot test implementation.
  5. The program may require the participation of personnel with specialized skills that are rarely accessible within the intervention context.
  6. The study used to evaluate the program may be of only moderate quality.

Introduction

In 2004, the Public Health Agency of Canada (PHAC) identified a critical need, expressed by health practitioners, for increased access to program-specific evidence to help them make informed decisions when designing, implementing, and evaluating community-based health promotion and chronic disease prevention interventions (1,2). To address this need, PHAC launched the Canadian Best Practices Portal for Health Promotion and Chronic Disease Prevention (the Portal) (3,4), and inclusion and exclusion criteria were developed to identify best practices for inclusion on the Portal (5-8). The Portal became a public, searchable database of best practice interventions in which practitioners could search online by a number of program variables, including topic of interest, target group of focus, and program strategy.

Although PHAC ensured over the years that the Portal focused on the gold standard for best practices in chronic disease prevention and health promotion, promising practices remained an untapped resource of intervention evidence and learning. Numerous public health interventions from across Canada did not qualify as best practices, yet these promising initiatives were generating knowledge that was very useful to public health practitioners. In 2013, PHAC recognized the need to expand the Portal to also include promising practices. This need was identified by the Canadian Best Practices Initiative (CBPI) Advisory Group and was acknowledged more formally in a 2013/14 branch-wide meeting report on CBPI priority setting and in the 2013/14 Knowledge Development and Exchange (KDE) Plan for the Centre for Chronic Disease Prevention. Expanding the Portal to include promising practices allowed PHAC to tap into these rich sources of Canadian and international evidence while still maintaining a focus on high-quality methods and established criteria.

This paper presents the work undertaken to create inclusion and exclusion criteria so that promising practice interventions could also be included on the Portal, and highlights the methodological and practical challenges encountered in developing these criteria. We began this study with the understanding that a promising practice is an intervention, program/service, strategy, or policy that shows potential (or 'promise') for developing into a best practice, and that a best practice is an intervention that has repeatedly demonstrated a positive impact on the desired objectives of the intervention, given the available evidence, and is deemed most suitable for a particular situation or context. To our knowledge, there are no other databases/portals or criteria that distinguish between best and promising practices.

Overall, the main objectives of this study were to: (1) develop clear screening criteria to distinguish between best and promising practices in health promotion and chronic disease prevention; (2) use published interventions to pilot test these screening criteria with the Promising Practices Working Group (the working group) to ensure the criteria work across a range of study designs; and (3) in the interest of transparency, make these screening criteria accessible and easy to understand for all users.

Methods

Phase I: Establishing criteria for promising practices

We (NF and SJ) conducted a review of the related peer-reviewed and grey literature to gain insight into the ways in which promising practices have been understood, defined, classified, and discussed by academics and practitioners in the field of health promotion and chronic disease prevention. We used two major health-related bibliographic databases (MEDLINE and EMBASE) as well as Google Scholar to search for peer-reviewed literature. Key search terms included combinations of: 'promising/emerging/best/innovative practice/intervention,' 'inclusion/exclusion/screening criteria,' 'definition/classification,' 'program(me) evaluation,' and 'health promotion/disease prevention.' We also used Google to conduct internet-based searches for grey literature, and searched for non-academic reports and documents on the websites of selected relevant health-related and research organizations, such as the Canadian Public Health Association (CPHA), the Cochrane Collaboration, the National Collaborating Centre for Methods and Tools (NCCMT), the Evidence for Policy and Practice Information and Coordinating Centre (EPPI-Centre), and the National Institutes of Health (NIH).

Using the Portal's existing best-practices screening criteria as a starting point, we looked specifically for characteristics of interventions and evaluation study designs that would unequivocally distinguish a promising practice from a best practice and an excluded practice (a practice that does not qualify as either a best or promising practice). Since a promising practice is an intervention that may potentially develop into a best practice, we started with the same three pillars as those for the Portal's existing best practices: 1) the overall impact of the intervention; 2) the degree to which the intervention is adaptable and generalizable to other contexts and populations; and 3) the quality and strength of the evidence provided from the intervention evaluation, taking into consideration the strength of various study designs.

After completing this literature review, we synthesized the information into a list of potential definitions and criteria for promising practices. We then shared these criteria with the working group (see Acknowledgements section for a full list of the working group members), and made revisions based on the feedback from this group. Next, the criteria were tested using three pilot tests in a stepwise approach.

Phase II: Pilot tests - Distinguishing between promising and excluded practices

For the first pilot test, seven interventions related to the promotion of positive maternal and infant health (which had previously been rejected from consideration on the Portal as best practices) were re-assessed by NF (first author) using the newly developed promising practices criteria. Based on this pilot, a simpler, all-in-one triage system was introduced by establishing criteria that screen an intervention in or out before moving on to the more time-intensive quality of evidence review. Additional refinements were made to the screening criteria based on the findings from this pilot test and discussions with the working group; these refinements addressed key issues we faced (discussed further below).

For the second pilot test, four best practices reviewers for the Portal (including NF, KW and JY), working in pairs, were asked to review a set of three to four interventions each. For these reviews, eight obesity prevention interventions and five mental illness prevention interventions that had not previously qualified as best practices were reassessed. In order to establish inter-rater reliability, each pair of reviewers compared their notes for each criterion of each intervention. The reviewers noted and discussed any discrepancies between their ratings or interpretations of the criteria. Scoring agreement within and across pairs confirmed the generic qualities of the criteria. When there were disputes, the working group discussed the dilemmas and reached a consensus about revising the criteria (some of the key issues, such as defining cut-off points and defining the significance of impact, are discussed in the Discussion section). Both the first and second pilot tests assessed the screening criteria's ability to distinguish between promising and excluded practices. The next phase was to determine whether the revised criteria were effective in differentiating between best, promising, and excluded practices.

Phase III: Pilot test - Distinguishing among best, promising and excluded practices

For the third pilot test, seven experienced reviewers each assessed four to nine interventions from a pool of 62 interventions that focused on mental illness prevention, injury prevention, violence prevention, tobacco control, maternal-infant health promotion and healthy eating. The focus of this review was to test the ability of the revised criteria to assess new interventions as best, promising, or excluded. Each reviewer independently completed a feedback form, identifying any issues or challenges they encountered in applying the screening criteria. The information from these forms was compiled by NF, and the key themes and issues that emerged were discussed with the working group. Consensus was reached among all group members on all issues that emerged, and the necessary refinements were made to the criteria (some of the key issues at this stage were: capturing changes in context consistently, handling multiple papers about the same intervention, and defining the significance of impact). This pilot resulted in five of the seven reviewers identifying 11 promising practices and one best practice using the new criteria. These interventions were added to the Portal (which can be accessed at: http://cbpp-pcpe.phac-aspc.gc.ca/).

Throughout the pilot phases, any complex challenges and issues related to the criteria that arose were discussed and debated among the working group; consensus was achieved by the group for each decision made to alter the criteria. Each revision also resulted in improvements in the guidelines accompanying each criterion, the scoring system for the quality of evidence assessment, and the content in the Portal's guidebook for reviewers (a step-by-step guidebook to help reviewers use the screening criteria, which includes examples and additional resources and tools for decision-making).  We believe that the most interesting aspects of this work are the issues and challenges we faced in creating these criteria and the definitions we settled on. These issues are presented and discussed in the remainder of this paper.

Results

The final definition of promising practices, based on the pilot test results, is described in Box 1. Table 1 summarizes the key criteria that were developed to distinguish promising practices from best practices, after all the pilot tests. Core criteria, essential for both best and promising practices, are indicated in the merged columns of Table 1.

Box 1. Definition of promising practices for the Portal

An intervention, program, policy or initiative that shows potential (or 'promise') for developing into a best practice. Promising practices may be in the earlier stages of implementation and/or evaluation.

Promising practices demonstrate:

  • medium-to-high impact: positive changes related to the desired goals must be seen; however, given the potential for future adaptation and growth, this standard is slightly lower than for best practices;
  • high potential for adaptability: high potential for producing similar positive results in other contexts and settings; this potential is considerably increased when the intervention has a strong theoretical underpinning or logic model;
  • suitable quality of evidence: as promising practices may be in the earlier stages of evaluation, the required quality of evidence is less stringent than for best practices.

Table 1 presents the differing criteria for best and promising practices. When using the Portal's screening criteria, the reviewer works through each criterion one by one to determine whether the intervention is excluded (in which case the review is terminated immediately), a potential promising practice, or a potential best practice. The last step is to assign numeric scores based on the quality of evidence assessment. Scores vary by study design type (ranging from 6 to 19) and are classified as rigorous, moderate, or limited; the higher the score, the more rigorous the study design.
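
To illustrate the sequential, terminate-early nature of this screening flow, a minimal sketch in Python follows. The field names and the encoding of Table 1's thresholds as simple checks are our own illustrative assumptions, not the Portal's actual tooling.

```python
# Minimal sketch of the Portal's sequential screening flow.
# Field names are hypothetical; thresholds are paraphrased from Table 1.

# Outcome types that can qualify an intervention as a best practice.
BEST_OUTCOME_TYPES = {"long-term", "intermediate", "short-term appropriate"}

def screen(iv: dict) -> str:
    """Return 'excluded', 'promising practice', or 'best practice'."""
    # Core criteria shared by best and promising practices; failing
    # any one terminates the review immediately.
    if not iv["evidence_based_grounding"]:
        return "excluded"
    if iv["positive_primary_objectives_fraction"] < 0.5:
        return "excluded"
    if iv["quality_of_evidence"] == "limited":  # must be at least moderate
        return "excluded"

    # Criteria on which the two designations differ (Table 1).
    is_best = (
        iv["source"] == "peer-reviewed"              # vs. grey literature
        and iv["impact"] in ("moderate", "broad")    # vs. low impact
        and iv["outcome_type"] in BEST_OUTCOME_TYPES
        and iv["implementations"] > 1                # vs. a single pilot
        and not iv["rare_specialized_skills"]        # expertise criterion
        and iv["quality_of_evidence"] == "rigorous"  # vs. moderate
    )
    return "best practice" if is_best else "promising practice"
```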

Table 1. Criteria for distinguishing best and promising practices

General criteria

  • Date (core criterion for both): the primary reference article must have been published within the past 10 years.
  • Intervention focus (core criterion for both): the intervention must address health at a population level; it can include interventions at single or multiple levels, including the individual, community, organizational, and societal levels. Clinical interventions, such as those that focus exclusively on one-on-one treatment recommendations for specific medical diagnoses or drug administration, are excluded.
  • Source: a best practice must come from a peer-reviewed article; a promising practice can come from the grey literature or a peer-reviewed article.

Impact

  • Significance of impact: a best practice must rank as moderate to broad impact; a promising practice can rank as low impact.
  • Positive outcomes (core criterion for both): the intervention must demonstrate positive outcomes for at least half of its primary objectives. A best practice must demonstrate long-term positive outcomes, intermediate positive outcomes, or short-term positive outcomes appropriate for the relevant objectives; a promising practice can instead show short-term positive outcomes inappropriate for the relevant objectives, or positive outcomes during the intervention implementation period.
  • Evidence-based grounding (core criterion for both): the intervention must be based on evidence-based guidelines/models/standards/theory/evidence-based research/literature/past studies.

Adaptability

  • Implementation history: a best practice must have been implemented more than once (the first implementation could have been a pilot); a promising practice may have been implemented only once (this may be a pilot).
  • Expertise required: a best practice cannot require any specialized skills, or must require specialized skills that are easily available within the context, or must provide specialized training as part of the intervention; a promising practice may require specialized skills that are rarely accessible within the context.

Quality of evidence

  • Assessment tool ranking (core criterion for both): the evaluation study of the intervention must rank, at minimum, a moderate score according to the Portal's Quality of Evidence Assessment Tool. A best practice's evaluation study must rank as rigorous; a promising practice's evaluation study can rank as moderate.

Discussion

The pilot testing of the Portal's screening criteria for best and promising practices revealed some key challenges and prompted in-depth methodological debates within the working group. The following is a list of the challenges we faced and the actions and decisions taken to address them.

Defining the cut-off points among best, promising, and excluded practices

When defining the criteria for promising practices, a key challenge was to create a thorough ranking system for each of the pre-existing best practices criteria, and then establish new cut-off points that would distinguish between best, promising and excluded practices. In some cases, we found that there were core criteria essential for both best and promising practices (as shown in the merged columns of Table 1), which resulted in having only one cut-off point that would distinguish between best or promising and excluded practices. For example, a core criterion for both a best and promising practice is that the intervention must be based on evidence-based guidelines/models/standards/theory/evidence-based research/literature/past studies. If the intervention does not have this evidence-based grounding, it is automatically excluded from further review and is no longer in the running for either a best or promising practice. Another example of a core criterion is that the intervention must show positive outcomes for at least half of the primary objectives of the intervention. This is the cut-off point for further review and potential inclusion into the Portal as either a promising or best practice.

However, more specific distinguishing features were needed between best and promising practices, so we delved deeper to understand the different types of positive outcomes that can result from health promotion and chronic disease prevention interventions (i.e. different types of positive short-term, intermediate or long-term outcomes). Although this was a challenging process, in the end we were able to define five types of positive outcomes (described below) that help to distinguish between best and promising practices.

We defined long-term positive outcomes related to primary objectives as those outcomes that persist one year or more beyond the intervention period; these types of outcomes are associated with best practices. A best practice example of this is a smoking cessation program that has long-term goals to reduce the rates of tobacco use for at-risk youth with an outcome evaluation (conducted upon completion of the program) that showed positive results and a follow-up evaluation (conducted 1.5 years after the completion of the program) with sustained, positive results.

We defined intermediate positive outcomes related to primary objectives as those that persist for between six months and one year beyond the intervention period; these types of outcomes are also associated with best practices. A best practice example of this is a healthy eating program that aims to encourage healthy eating patterns among high school students by providing healthier menu options in the school cafeteria, with an outcome evaluation (conducted seven months after the completion of the program) that showed sustained healthier eating patterns among students, with no further follow-up evaluation studies.

We defined short-term positive outcomes appropriate for relevant objectives as those measured within six months beyond the intervention period that are appropriately related to the short-term nature of the primary objectives; these types of outcomes are also associated with best practices. A best practice example of this is a program that aims to reduce the incidence rates of post-partum depression for new mothers, with an outcome evaluation (conducted three months after childbirth) that showed lower incidence rates of post-partum depression for program participants than for the control group. For cases like these, a later follow-up evaluation is not appropriate, as a condition such as post-partum depression can only exist within a certain time period.

In summary, interventions with long-term positive outcomes related to primary objectives, intermediate positive outcomes related to primary objectives, and short-term positive outcomes appropriate for relevant objectives are the different types of outcomes that can qualify as a best practice.

We defined short-term positive outcomes inappropriate for relevant objectives as those that are measured within six months beyond the intervention period, even though the primary objectives of the intervention are long-term; these types of outcomes are associated with promising practices. A promising practice example of this is a tobacco cessation program that has long-term goals to reduce the rates of tobacco use among at-risk youth, with an outcome evaluation that showed positive results one month after the program is completed. Further evaluation data were not collected to ensure the sustained impact of the program, despite the long-term objectives of the intervention, so it can only be listed as a promising practice.

We defined positive outcomes during the intervention implementation period as positive outcomes demonstrated during the intervention period itself, without a post-intervention follow-up study to show any sustained impact; these types of outcomes are also associated with promising practices. A promising practice example is a mental health promotion program that aims to create a more supportive social environment for adults experiencing depression, with an outcome evaluation about the perceptions of friendships formed during the program that showed positive results. This shows there is some potential for this practice, and it can be scored as promising on this criterion.

In summary, interventions with short-term positive outcomes inappropriate for relevant objectives and positive outcomes during the intervention implementation period qualify only as promising practices and not best practices.
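
Under our reading, the timing thresholds above reduce to a simple classification rule, keyed on how long after the intervention period the positive outcomes were measured and whether that timing suits the primary objectives. The following is a hedged sketch only; the helper and its parameters are hypothetical.

```python
from typing import Optional, Tuple

def classify_outcome(months_after_intervention: Optional[float],
                     objectives_are_long_term: bool) -> Tuple[str, str]:
    """Return (outcome type, associated designation)."""
    if months_after_intervention is None:
        # Positive outcomes observed only during implementation itself.
        return ("during implementation period", "promising")
    if months_after_intervention >= 12:
        return ("long-term", "best")
    if months_after_intervention >= 6:
        return ("intermediate", "best")
    # Measured within six months of the end of the intervention period:
    if objectives_are_long_term:
        return ("short-term, inappropriate for objectives", "promising")
    return ("short-term, appropriate for objectives", "best")
```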

Capturing changes in context as part of adaptability in a way that reviewers can understand consistently

In reality, no intervention can ever be replicated (i.e. implemented in exactly the same way, more than once), because there are always contextual realities that shape the way in which a program is implemented (9). Thus, drawing the line between a replicated intervention and an adapted intervention is a challenging and complex issue (10), and it is one that emerged in the development of the adaptability criteria.

The Implementation History criterion examines the adaptability of an intervention by assessing the history of previous implementations. For this criterion, the distinguishing feature between a best and a promising practice is that a best practice has been implemented more than once whereas a promising practice has been implemented only once. In order to meet the best practice criterion, however, each implementation of the intervention must have been substantially the same. We included this additional caveat because, although each implementation does need to adapt to its context to some degree, the changes/adaptations made should not be so extensive that they alter the fundamental objectives and/or activities of the program itself. If the previous implementations of the intervention are not substantially the same as one another, the program is considered to be only in its first implementation, which disqualifies it as a best practice (and qualifies it as a potential promising practice only). While this is a very challenging criterion to apply across a wide range of interventions, it facilitates the review process so that reviewers do not rely solely on personal judgment and interventions are reviewed as consistently as possible across reviewers.

Handling multiple implementations and evaluation papers on a single intervention

In cases where an intervention is implemented or evaluated more than once, it is common that multiple papers will have been written and published about the intervention (in peer-reviewed journals and/or in the grey literature). When assessing an intervention to determine whether it is a best, promising, or excluded practice, reviewing more than one paper against the established criteria is extremely difficult and too onerous for a screening process. By attempting to review multiple papers simultaneously through one set of screening criteria, there is a high risk of reviewers biasing the results by selecting only the positive (or negative) outcomes and characteristics from each of the available studies, and reporting only the most (or least) scientifically sound study design from the available options. This was an important and recurring issue that emerged in the pilot phases, and it was decided that reviews should be based on one primary evaluation study document for the intervention under review.

The working group deemed the most important elements required in the primary evaluation study document to be the intervention objectives and the evaluation design, methods, and outcomes. In the end, it was determined that if there are multiple evaluation papers on the same intervention, reviewers should select a primary evaluation study document by prioritizing (in this order) the following criteria: (1) it is a peer-reviewed paper; (2) it reports results from an outcome evaluation study as opposed to a process evaluation study; (3) it uses stronger methods than the other available papers; and (4) it is a more recent publication.
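
Because these four criteria are strictly ordered, the selection amounts to a lexicographic comparison. A minimal sketch follows; the paper fields, and the numeric "method strength" score a reviewer would assign, are illustrative assumptions rather than part of the Portal's documented process.

```python
def pick_primary_document(papers: list) -> dict:
    """Select the single primary evaluation study document from candidates."""
    return max(
        papers,
        key=lambda p: (
            p["peer_reviewed"],          # (1) peer-reviewed first
            p["is_outcome_evaluation"],  # (2) outcome over process evaluation
            p["method_strength"],        # (3) stronger methods
            p["publication_year"],       # (4) more recent publication
        ),
    )
```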

Defining the significance of impact

Throughout the pilot testing phase, we struggled most with the significance of impact criterion (previously called magnitude of impact), particularly around related concepts such as the magnitude, significance, breadth, and reach of impact. It was challenging to develop a process to assess the level of impact across all types of interventions, especially when intervention target population sizes vary so much from one intervention to another (e.g. community programs versus policies). This type of problem is endemic in that it speaks to the core of the study design, methodology, and reporting conventions of various sub-disciplines and their peers/journals.

In the end, we decided to operationally define this criterion as the proportion of impact, as proportions can be used to effectively gauge the magnitude of impact regardless of the type or size of the target population or study. In cases where the proportion is unknown, we relied on the statistical significance of the primary outcomes as a measure of both the breadth and magnitude of the impact. A best practice intervention is required to show moderate to broad impact for this criterion, meaning that the intervention results in positive outcomes in a medium to high proportion (≥ 50%) of the members of the sample of the target population for which the intervention is designed. In cases where the proportion is unknown, all the primary outcomes must be of medium to large significance (p values < .05). Promising practices show low impact for this criterion, meaning that the intervention results in positive outcomes for a small proportion (< 50%) of the sample of the target population for which the intervention was designed. In cases where the proportion is unknown, at least half (50%) of the primary outcomes need to be significant at a minimal accepted level (p value = .05).
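
As we read it, this operational rule reduces to a proportion check when the proportion is known, and a fallback on statistical significance otherwise. The sketch below is a hypothetical illustration of that reading, not the Portal's scoring code.

```python
def rank_impact(proportion_affected=None, primary_outcome_p_values=None):
    """Return the ranking for the significance-of-impact criterion."""
    if proportion_affected is not None:
        # Known proportion of the target-population sample with positive outcomes.
        return "moderate to broad" if proportion_affected >= 0.5 else "low"
    # Proportion unknown: fall back on the significance of primary outcomes.
    n = len(primary_outcome_p_values)
    if all(p < 0.05 for p in primary_outcome_p_values):
        return "moderate to broad"  # all primary outcomes significant
    if sum(p <= 0.05 for p in primary_outcome_p_values) >= n / 2:
        return "low"                # at least half significant at p = .05
    return "does not meet the criterion"
```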

Identifying an expiry date for best or promising practices

Another question that we faced during the pilot phases was the idea of specifying a cut-off or expiry timeframe for an intervention to be considered as a best or promising practice. For example, if an intervention conducted 20 years ago was a best practice then, would it still be considered a best practice today? Would this timeframe be different for promising practices, given that promising practices may eventually become best practices? Do promising practices need to evolve into best practices within a particular amount of time? Does the evaluation study design influence the expiry date of either a best or promising practice?

In thinking through these issues, we reviewed the methodological literature related to evaluation study design types (11-15), as well as the Portal's Hierarchy of Evidence paper (16), and we consulted with the working group. Given that most study designs inherently include the context of the intervention within their analysis processes (which, as highlighted by the Hierarchy of Evidence paper, is a critical aspect of any program evaluation), it became clear that after a certain amount of time the context has changed too much for an intervention to still be considered a best or promising practice.

After applying the screening criteria during the pilot tests, and after discussions with the working group, it was determined that all best practices, including those that had been evaluated using randomized controlled trials (RCTs), should expire on the Portal after 10 years (counted from the date of the most recent evaluation study conducted). For promising practices, the logic is different. Given that promising practices may eventually evolve into best practices, regardless of their evaluation study design, they should expire on the Portal more quickly. It was determined that after five years as a promising practice, if the intervention has not yet evolved into a best practice (on the basis of more recent evaluation studies), then it would no longer be listed as a promising practice.
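
A small sketch of the expiry rule we settled on (a hypothetical helper; the clock starts at the most recent evaluation study):

```python
def has_expired(designation: str, years_since_last_evaluation: float) -> bool:
    """Best practices expire after 10 years; promising practices after 5."""
    limit = 10 if designation == "best" else 5
    return years_since_last_evaluation > limit
```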

Strengths and limitations

One of the key strengths of this study is that we were able to examine our promising practices screening criteria through three pilot tests, and to debate any complex methodological issues that emerged with the working group. This structured process allowed us to develop criteria that have been vetted and that are consistent, efficient and manageable when implemented by multiple reviewers. After considerable debate, we also considered policies and legislation to be interventions. We applied the promising practices criteria to these types of interventions as well, and were able to include two provincial school-based policies (one in Nova Scotia and one in Prince Edward Island) as promising practices on the Portal. This filled a long-standing gap by bringing promising policy and legislative interventions onto the Portal.

A limitation is that there is (and likely always will be) tension in developing criteria that are fundamentally academic in nature while also ensuring they are applicable to a wide range of population-level health interventions. It is challenging to systematize a review process for interventions that are so diverse in their objectives, have different target population groups and sizes, apply different types of evaluation study designs, and produce a range of overall outcomes. In any standardized review process, it is necessary to make judgment calls for interventions collectively (that fall into certain categories) as opposed to dealing with each one on a case-by-case basis; however, in doing so, some of the most complex and unique grey areas are often not explored or analyzed in as much depth as they could be. While designing these screening criteria, we realized that if we tried to allow for room to explore the grey areas in a systematic way, we would be introducing too much subjectivity and bias into our review process and that our results would vary too much between reviewers. Thus, the decisions that were made in the development and refinement of the promising and best practices criteria reflect this balance between being able to address the unique circumstances of each intervention and the ability to assess interventions consistently and reliably across reviewers.

Conclusion

The process of systematizing a screening assessment to distinguish among best, promising, and excluded practices was a challenge that raised many complex issues that did not always have clear solutions. Because of the debates that arose throughout our study, we believe that we have defined key features of both best and promising practices that are useful for assessing interventions.

This work provides important insights for practitioners and evaluators to think through when designing a new type of intervention or evaluation study, or adapting/replicating an intervention from a different context. Overall, our intention is to allow for more transparency among practitioners about what works well and what shows promise to work (with whom and under what conditions) within the field of health promotion and chronic disease prevention. We believe that these criteria can be adapted for wide use by decision-makers and public health practitioners.

Acknowledgements

We would like to acknowledge the Promising Practices Working Group here, and thank each of them for their important contributions to this work:

  • Nina Jetha (Chair): Manager, Canadian Best Practices Initiative, Public Health Agency of Canada
  • Andrea Simpson: KDE Analyst, Regional Operations (Atlantic), Public Health Agency of Canada
  • Dawne Rennie: Manager, Partnerships and Strategies Division, Public Health Agency of Canada
  • Jennifer Yessis, PhD: Scientist, Propel, University of Waterloo
  • Kathryn Joly: Reg Warren Consulting Inc.
  • Katy Wong, MSc: Senior Manager, Propel, University of Waterloo
  • Kerry Robinson: A/Director, Interventions and Best Practices Division, Public Health Agency of Canada
  • Laurie Gibbons: Senior Policy Analyst, Chronic Disease Strategies Division, Public Health Agency of Canada
  • Lynne Foley: Analyst, Regional Operations (MB/SK), Public Health Agency of Canada
  • Margaret de Groh: Manager, Social Determinants and Science Integration Division, Public Health Agency of Canada
  • Mary-Pat Lambert: Epidemiologist/Policy Analyst, Population Health Promotion and Innovation Division, Public Health Agency of Canada
  • Mélissa Nader, PhD: Evaluator-Analyst, Regional Operations (Québec), Public Health Agency of Canada
  • Nadia Fazal, HBSc, MPH: PhD Candidate, University of Toronto; Reg Warren Consulting Inc.
  • Reg Warren: Reg Warren Consulting Inc.
  • Suzanne F. Jackson, PhD: Associate Professor Emerita, Dalla Lana School of Public Health, University of Toronto.

Conflicts of interest

The authors declare no conflicts of interest.

Authors' contributions and statement

NF, SJ, and NJ all contributed to the study design and the idea for the project. NF took the lead on the interpretation of results, and the writing of the manuscript. SJ provided mentorship to NF during the interpretation of results and writing of the manuscript. All authors (NF, SJ, KW, JY, NJ) informed the data analysis, assisted in the interpretation of results, and critically revised the manuscript and approved the final version.

The content and views expressed in this article are those of the authors and do not necessarily reflect those of the Government of Canada.

References

  1. Hallfors D, Godette D. Will the "Principles of Effectiveness" improve prevention practice? Early findings from a diffusion study. Health Education Research. 2002;17(4):461-70.
  2. Kiefer L, Frank J, Di Ruggiero E, et al. Fostering evidence-based decision-making in Canada. Canadian Journal of Public Health. 2005;96(3):I1-I19.
  3. Jetha N, Robinson K, Wilkerson T, Dubois N, Turgeon V, DesMeules M. Supporting knowledge into action: the Canadian best practices initiative for health promotion and chronic disease prevention. Canadian Journal of Public Health / Revue canadienne de santé publique. 2008;99(5):I1-I8.
  4. Rush B, Robinson K. Best Practices Portal for Health Promotion and Chronic Disease Prevention: report on the 2007-08 evaluation and related knowledge exchange needs assessment. Main report. 2008.
  5. Cameron R, Jolin MA, Walker R, McDermott N, Gough M. Linking science and practice: toward a system for enabling communities to adopt best practices for chronic disease prevention. Health Promotion Practice. 2001;2(1):35-42.
  6. Dubois N, Andrew C, Wilkerson T. Systematic review: best practice programs and related resources. Scotland (ON): prepared by DU B FIT Consulting for the National Best Practices Consortium, Centre for Chronic Disease Prevention and Control, Health Canada; 2004.
  7. Dubois N, Wilkerson T, Hall C. A framework for enhancing the dissemination of best practices. Scotland (ON): Heart Health Resource Centre, Ontario Public Health Association (prepared by DU B FIT Consulting); 2003.
  8. Valente TW. Evaluating health promotion programs. New York: Oxford University Press; 2002.
  9. Bell SG, Newcomer SF, Bachrach C, Borawski E, Jemmott JB III, Morrison D, et al. Challenges in replicating interventions. Journal of Adolescent Health. 2007;40(6):514-20.
  10. Stanton B, Guo J, Cottrell L, Galbraith J, Li X, Gibson C, et al. The complex business of adapting effective interventions to new populations: an urban to rural transfer. Journal of Adolescent Health. 2005;37(2):163.e17-163.e26.
  11. Patton MQ. Qualitative research & evaluation methods: integrating theory and practice. 4th ed. Thousand Oaks (CA): Sage Publications; 2015.
  12. Robeson P, Dobbins M, DeCorby K, Tirilis D. Facilitating access to pre-processed research evidence in public health. BMC Public Health. 2010;10:95.
  13. Evans D. Hierarchy of evidence: a framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing. 2003;12:77-84.
  14. Daly J, Willis K, Small R, Green J, Welch N, Kealy M, et al. A hierarchy of evidence for assessing qualitative health research. Journal of Clinical Epidemiology. 2007;60(1):43-9.
  15. Project STAR (2006). Available from: http://www.pacenterofexcellence.pitt.edu/documents/study_designs_for_evaluation.pdf (accessed 17 June 2016).
  16. Jackson S, Fazal N, Giesbrecht N. Hierarchy of evidence: which intervention has the strongest evidence of effectiveness? Canadian Best Practices Portal for Health Promotion and Chronic Disease Prevention; 2010. Available from: https://www.researchgate.net/profile/Suzanne_Jackson/publications?sorting=newest&page=2 (accessed 8 April 2016).
