Integrating Gender-Based Analysis Plus into Evaluation: A Primer (2019)
Developed in consultation with Women and Gender Equality Canada
On this page
- Example 1. Logic model for a drug awareness campaign
- Example 2. Gender considerations in a drug awareness campaign
- Example 3. Partner considerations in a drug awareness campaign
- Example 4. Intersecting factors in a drug awareness campaign
- Example 5. GBA+ and snow removal
- Example 6. Case Study: Assessment of a performance information profile
- Example 7. Partner and stakeholder engagement in evaluations
- Example 8. How to improve outcomes and indicators: a second look at promoting gender equity in STEM performance information profile
- Example 9. Using qualitative methods to explore GBA+ considerations
- Example 10. Culturally sensitive evaluations of programs for Indigenous people
- Example 11. Descriptive statistics: application to fictional Promotion of Gender Equity in STEM Program
- Example 12. Inferential Statistics: application to fictional Promotion of Gender Equity in STEM Program
- Example 13. Weighting: application to Promotion of Gender Equity in STEM Program
Purpose of this document
Gender-based analysis plus (GBA+) is an analytical process used to assess how diverse groups of women, men and non-binary people may experience policies, programs and initiatives. The “plus” in GBA+ acknowledges that GBA goes beyond biological (sex) and socio-cultural (gender) differences. We all have multiple identity factors that intersect to make us who we are; GBA+ also considers many other identity factors, like race, ethnicity, religion, age, and mental or physical disabilityFootnote 1.
This primer contains advice for evaluators, particularly those at the junior and intermediate levels, on how to integrate GBA+ into every stage of Government of Canada evaluations in order to support commitments and directions. It provides an overview of the main points to consider, practical examples and methodological approaches.
The document is a general discussion of each key stage of an evaluation: planning, conducting and reporting. Particular emphasis is placed on the planning stage. A detailed review of existing tools and evidence (for example, logic model, theory of change, performance measurement information) forms the basis for:
- a systematic assessment of GBA+ implementation in the policy, program or service in question, which in turn, contributes to the informed development of evaluation questions
- the identification of suitable research and data collection methods
In 1995, the Government of Canada committed to using gender-based analysis plus (GBA+) when developing policies, programs and services in all federal departments and agencies. In 2015 and 2017, this commitment was renewed through the Prime Minister’s mandate letter to the Minister of Status of Women Canada (SWC)Footnote 2. As well, the 2016 Report of the Auditor General of Canada, Implementing Gender-Based Analysis, recommended that SWC work with the Privy Council Office and the Treasury Board of Canada Secretariat to implement GBA+ as a rigorous practice across governmentFootnote 3.
In July 2016, the Directive on Results took effect. It requires program officials and evaluators to include government-wide policy considerations, such as GBA+Footnote 4, where relevant, in performance information, as well as in reporting and other government activities (for example, consultations and budget deliberations). It also requires evaluators to plan evaluations to take those same considerations into account.
Applying a GBA+ lens increases the rigour of analyses and advances knowledge on equity across federal programming for Canadians, which increases understanding of how policies, programs and services are developed and of their related outcomes.
Why incorporate GBA+ into evaluation
Evaluation is a way to assess the extent to which gender and other intersecting identity factors were considered in the design, development and implementation of a policy, program or service and the outcomes of those considerations on various genders and subpopulations.
Figure 1 shows the policy-program-service delivery continuum. The body of knowledge developed through numerous evaluations indicates GBA+ should be applied at each point on the continuum. Specifically, questions related to performance, including the gender and other impacts of a program, often reveal different results depending on the point on the continuum at which the analysis is conducted. For instance, a policy can be examined and found to have strong gender implications or other bias. The same policy, when developed as a program, could vary in the extent to which gender and other considerations were incorporated into its design, implementation and management.
Finally, the extent to which gender and other considerations are reflected in the delivery of the program can vary as well. Although it may not always be possible to examine all three elements of the continuum in-depth, it is still important to recognize that each one can have different strengths and challenges with respect to considering gender and other identity factors.
Figure 1. Policy, program, service delivery continuum
(design, implementation, management)
Benefits of integrating GBA+ into evaluations
Integrating GBA+ into evaluations of government policies, programs and services has many benefits. For example, it can help identify:
- biases, stereotypes and assumptions relating to gender (including non-binary gender identity), race and sexual orientation
- access and participation barriers, which affect the achievement of outcomes
- systemic gender inequality barriers experienced by various groups
- data gaps
It can also:
- make evaluation findings, conclusions and recommendations more comprehensive by considering all relevant population groups
- inform the improvement of policies, programs and services
Planning the evaluation
In this section
- Reviewing or developing the logic model, theory of change or both
- Reviewing existing performance information on the policy, program or service
- Formulating evaluation questions
- Defining research methods
- Planning for data collection
Ideally, GBA+ is conducted when policies, programs and services are being designed and developed and when related data collection is being planned. If GBA+ is undertaken at the planning stage, evaluators can help make it part of established processes, and thereby lay the groundwork for a comprehensive analysis during the evaluation. If GBA+ was not conducted at the outset, evaluators may need to gather – often retrospectively – supplementary data and carry out additional analysis to fill gaps.
GBA+ should be done during the evaluation scoping exercise. The scoping exercise, which involves identifying key areas of focus and engaging with policy, program or service leads, is necessary to develop the evaluation framework. During the scoping exercise, identifying target population groups (in other words, partners and stakeholders, as well as sub-populations of interest within these major groups) is key to the analysis.
Some questions that can help ensure that target population and sub-population groups have been accurately and comprehensively identified include:
- Who are the partners and stakeholders?
- Were representatives of the target population groups involved in designing, developing and implementing the policy, program or service?
- Should other target population groups be considered?
- Does the policy, program or service create or perpetuate barriers for certain target population groups?
- Is information about the policy, program or service equally accessible to the various target population and sub-population groups?
- Did the implementation of the policy, program or service have unintended outcomes for particular populations or sub-populations?
In ideal situations, as programs are designed, performance data for reporting and supporting evaluations are incorporated into related agreements, systems, and so on. Data being collected by the program area and service delivery partner are assessed to help determine what gender-based and other analyses could be conducted and to contribute to a data collection strategy for the evaluation. These data also help in making decisions about the scope, budget, period examined, and timing of the evaluation.
The resulting assessment of the existing level of integration of GBA+ into the policy, program or service will help determine the resources (for example, time, capacity, expertise and level of effort) needed to conduct the evaluation. Depending on the complexities and gaps, more time and effort may be needed to, for example, develop appropriate data collection instruments, engage target population groups, and collect and analyze data.
Reviewing or developing the logic model, theory of change or both
Evaluators should assess the degree to which gender and other intersecting identity factors are appropriately integrated into, and outlined in, the policy, program or initiative’s logic model, theory of change, or both. They should look at, for example, whether these factors have been integrated purposefully, as part of the design and development process, or to merely meet the evaluation requirements. If these factors are not specifically highlighted, evaluators should consider where they could be incorporated into the evaluation process.
Example 1 shows a logic model for a drug awareness campaign. It clearly identifies the target population groups that are affected by the campaign activities and at which the outputs are aimed.
- Activities: outreach and engagement activities with youth, parents, teachers, sport coaches, and school administrators
- Outputs: communication products tailored to each target group (for example, pregnant women, children under 18 years of age); prevention activities tailored to each target group (for example, mentally disadvantaged people, counsellors)
- Immediate outcome: target groups are aware of the different risks of drug abuse and how to say no to illegal drug use
- Intermediate outcome: drug abuse among youth decreases
- Ultimate outcome: youth in Canada are drug-free
Evaluators should also consider the context within which the policy, program or service operates and how it intersects with other policies, programs or services. They can assess the interplay between the context, the relationships within various target population groups, and any power imbalances that may exist at the structural levelFootnote 5.
Although departments and agencies are required to consider gender and other factors in the design and development of policies, programs and services, evaluators should review foundational documents, such as Treasury Board submissions, memoranda to Cabinet, Departmental Results Frameworks, program inventories, performance information profiles, and terms and conditions, to determine the extent to which these factors have been incorporated.
Useful questions in reviewing a logic model or a theory of change through a GBA+ lens include:
- Do any aspects of the policy, program or service have potential gender considerations?
- Does the policy, program or initiative have the potential to consider differences within and not just between target population groups?
Differences between men and women are not the only differences that could lead to varying program outcomes. Not all groups are homogenous, so outcomes could vary, for example, among women. Outcomes could also be experienced differently depending on gender identity or expression and sexual orientation identity.
Example 2 shows gender differences in a drug awareness campaign.
Girls and boys have different developmental problems during adolescence and cope with them differently. Research shows that, because of these differences, it is useful to take a gender-specific approach to all aspects of implementing prevention programs, including content, setting and practice. For instance, according to some reports, girls tend to prefer informal exchanges and small workgroups. Evidence also suggests that girls usually prefer that prevention programming be given by a woman, while boys prefer that it be given by a man. Youth aged 15 to 24 have the highest rate of problematic substance use nationally and the highest rate of past year use of illegal drugs. They are also more likely to experience harm related to substance use than are older adultsFootnote 6.
The impacts of program partners can also differ between men and women. Evaluators should consider these impacts by asking (see Example 3):
- Which population groups does the policy, program or service target?
- Which target population groups participated in designing, developing and implementing or delivering the policy, program or service?
- What is the role of these groups in these different stages?
- Do these groups contribute to the policy, program or service, and how are they affected?
- Are the perspectives of these groups accurately captured?
The impact of families, parents, teachers, communities and school officials in shaping attitudes toward illegal drug use in youth has been found to differ between girls and boys. For instance, it has been found that family is a more significant influence for girls than boys.
In some cases, intersecting factors such as ethnicity may have an impact on a policy, program or service. Evaluators should consider intersecting factors:
- Are other identity factors (for example, beliefs, prejudices, assumptions) intersecting with gender?
- Are those factors creating barriers for target population groups to participate in the program or to access the service?
Example 4 shows intersecting factors in a drug awareness campaign.
Ethnicity and gender can be considered together in some programs. For instance, in Belgium, women from the Turkish and Moroccan communities have hosted events in which prevention workers provided information about drug use to the women’s families and friends. In other countries, such as the Netherlands, such pilot projects have failed because of the strong stigma attached to drugs in these cultures: non-using peers did not want to attend for fear of members of their own communities associating them with drugs. Methods of intervention should be adapted to take into account these types of considerationsFootnote 7.
In some cases, gender and other identity factors may not appear to have an important impact on a policy, program or service’s logic model or theory of change. Nonetheless, evaluators should try to determine whether the consideration of those factors could have unintended outcomes (see Example 5).
In conducting an analysis of its environmental policies, the City of Stockholm found that women, children and seniors were more likely to travel by public transportation, walk or bike, and that men were more likely to drive. The City’s snow removal policies had given priority to ploughing roads, which favoured men’s transportation preferences. It was concluded that those policies were not encouraging the use of green modes of transportation, and that they had important gender equity implications that had not been considered.
The City subsequently revised its snow removal policies and made clearing sidewalks, bicycle paths and roads used by buses a priority when there was enough snow to impede pedestrians but not carsFootnote 8.
Reviewing existing performance information on the policy, program or service
Evaluators must assess the degree to which existing performance information for a policy, program or service considers gender and other identity factors. This assessment involves, for example:
- reviewing performance information profiles
- consulting with program management to identify indicators that track information by gender
- gathering the data available
- determining whether the data can be disaggregated so that GBA+ can be conducted
- reviewing data collection methods linked to these indicators to identify potential biases or quality issues
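The disaggregation step described above can be sketched in a few lines of Python. The records, field names and values below are invented purely for illustration; real assessments would work with the program's actual administrative data:

```python
from collections import Counter

# Hypothetical participant records; field names are illustrative only.
participants = [
    {"gender": "woman", "age_group": "15-24", "completed": True},
    {"gender": "woman", "age_group": "25-34", "completed": False},
    {"gender": "man", "age_group": "15-24", "completed": True},
    {"gender": "non-binary", "age_group": "15-24", "completed": True},
    {"gender": "man", "age_group": "25-34", "completed": False},
    {"gender": "woman", "age_group": "15-24", "completed": True},
]

def completion_rate_by(records, *factors):
    """Completion rate disaggregated by one or more identity factors."""
    totals, completed = Counter(), Counter()
    for r in records:
        key = tuple(r[f] for f in factors)
        totals[key] += 1
        if r["completed"]:
            completed[key] += 1
    return {key: completed[key] / totals[key] for key in totals}

# Single-factor view: outcomes by gender alone.
by_gender = completion_rate_by(participants, "gender")

# Intersectional view: outcomes by gender and age group together.
by_gender_age = completion_rate_by(participants, "gender", "age_group")
```

The same function supports both a single-factor breakdown and an intersectional one, which mirrors the "plus" in GBA+: whether such a cross-tabulation is even possible depends on whether each identity factor was captured at collection time.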
If no performance indicators track information by gender, evaluators should aim to address this gap when they are developing the evaluation framework, particularly the evaluation matrix. During the review of existing performance information, evaluators should:
- identify target population groups that may be impacted, whether positively or negatively, by the policy, program or service
- highlight data gaps
- incorporate data collection methods that would assist in measuring outcomes for these groups
Not all policies, programs and services lend themselves to evaluation through a GBA+ lens. Whether they do depends on a variety of factors, including the evaluation needs, priorities and questions, as determined by evaluators in consultation with program officials.
Example 6 is an assessment of a fictional performance information profile. The indicators in the profile measure only the extent to which women graduate or work in sciences, technology, engineering and mathematics (STEM), but not the level of degree they received (for example, bachelor’s versus doctorate) or the type of work they are performing (for example, administration versus research).
Program (fictional): Promotion of Gender Equity in Sciences, Technology, Engineering and Mathematics (STEM)
Description: The program promotes gender equity in the pursuit of STEM education by undertaking information campaigns in high schools, and encourages underrepresented groups of women to pursue careers in STEM fields by providing scholarships for digital, mathematical and computer skills training in colleges.
| Outcome level | Expected outcome | Indicator | Target | Baseline |
| --- | --- | --- | --- | --- |
| Immediate | Girls are aware of opportunities in STEM fields | Number of information sessions in high schools | 250 in fiscal year 2019 to 2020 | 120 in fiscal year 2012 to 2013 |
| Immediate | Girls are aware of opportunities in STEM fields | Number of scholarship applications received from eligible students | 50 in fiscal year 2019 to 2020 | 10 in fiscal year 2012 to 2013 |
| Immediate | Girls have access to opportunities in STEM fields | Number of scholarships awarded | 250 in fiscal year 2019 to 2020 | 20 in fiscal year 2012 to 2013 |
| Immediate | Girls have access to opportunities in STEM fields | Percentage of scholarships in STEM fields awarded to women | 50% in fiscal year 2019 to 2020 | 25% in fiscal year 2012 to 2013 |
| Intermediate | The gender gap in STEM graduates is reduced | Percentage of STEM graduates who are women | 43% by 2030 | 33% in 2011 |
| Ultimate | The gender gap in STEM professions is eliminated | Percentage of STEM positions in universities and in the private and public sectors that are filled by women | 50% by 2035 | 28% in 2011 |
Ways this performance information profile could be improved
The evaluator could consider the following questions:
- Are the existing indicators sufficient to collect the necessary data for a comprehensive GBA+?
- Can the data that is currently available be disaggregated by gender and other identity factors such as race, ethnicity or immigrant status?
- What other intersecting identity factors are important to consider?
- Are other indicators needed to address the outputs and outcomes in the logic model and theory of change?
- What are the barriers (policy or otherwise) that are contributing to these differences?
Although the data for these indicators can be disaggregated by sex, the disaggregated data does not provide sufficient information on the gender gap in STEM. It is not sufficient to measure the quantity of representation; the quality of representation needs to be measured too. For instance, the percentage of graduates who are women does not indicate the level of degrees received (bachelor’s, master’s or doctorate), and the percentage of STEM positions filled by women does not allow for analysis of the quality of those jobs.
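The gap between quantity and quality of representation can be shown with simple arithmetic. The figures below are invented for the fictional program; they illustrate how a near-parity aggregate share can mask a widening gap at higher degree levels:

```python
# Invented figures for the fictional STEM program; for illustration only.
graduates = {
    # degree level: (women, total)
    "bachelor's": (480, 1000),
    "master's": (120, 350),
    "doctorate": (30, 150),
}

# Aggregate share of women across all degree levels combined.
women = sum(w for w, _ in graduates.values())
total = sum(t for _, t in graduates.values())
aggregate_share = women / total  # a single headline figure

# Disaggregated shares by level reveal what the aggregate hides.
share_by_level = {level: w / t for level, (w, t) in graduates.items()}
```

With these invented numbers, the aggregate share is 42%, yet the doctorate-level share is only 20%: an indicator tracking only "percentage of STEM graduates who are women" would report progress while the gap at the highest level persisted.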
For more information on analyzing performance information profiles, see section 6.2.1 Analysis of quantitative data.
Formulating evaluation questions
Depending on the nature of the policy, program or service, and on the type of evaluation (for example, delivery, impact), evaluators can include the following questions in their evaluations as a way to integrate GBA+.
- Is the policy, program or service expected to contribute to promoting equality, diversity and inclusion?
- Does it meet the needs of its target population groups?
- Were the target population groups appropriately identified?
- Is the policy, program or service equally relevant to different target population groups?
- Are there population groups that should be targeted but have been omitted?
- Does the policy, program or service align with or contribute to the achievement of government-wide priorities on gender equalityFootnote 9?
- Does it align with or duplicate the objectives of other policies, programs or services?
- Are there lessons learned from comparable policies, programs or services that include a GBA+ approach in provinces, territories, municipalities or other countries that could be applied?
Design and delivery
- Does the policy, program or service provide equal access to diverse groups of individuals?
- Does the policy, program or service create or perpetuate barriers for certain target population groups?
- What kind of barriers (for example, limited access due to financial barriers) do those target population groups perceive?
- How could the policy, program or service be improved to foster inclusion of target population groups (for example, by enhancing said groups’ feedback on or contribution to development of the policy, program or service)?
- Are there particular target population groups that are not being reached with this policy, program or service?
- To what extent have underrepresented target population groups participated in the policy, program or service?
- To what extent, and in what ways, have expected outcomes had an impact on different target population groups?
- Have outcomes differed across diverse target population groups? What accounts for the differences?
- To what extent have any disparities in outcomes for different target population groups been addressed, if necessary?
- Does the policy, program or service address the needs of various target population groups equally?
- Have there been any unexpected or unintended impacts (positive or negative) on any target population groups? If so, how were they addressed, if at all?
- Is the policy, program or service taking steps that could be considered gender-transformative? For example:
- To what extent has the policy, program or service fostered changes that promote gender equality?
- How have managers of the policy, program or service and representatives of target population groups taken into account relevant gender considerations as a result of activities conducted in relation to the policy, program or service?
- Are results related to equality and diversity likely to be sustained?
- To what extent are effective and efficient means being used to ensure that target population groups are included when implementing this policy, program or service?
- What are the administrative costs of the policy, program, or service for each target population group?
- Can administrative efficiency be improved for specific target population groups?
Partner and stakeholder involvement at different stages of the evaluation can make it more useful and credible, and can help incorporate GBA+. Partners and stakeholders can be involved in a number of ways, including:
- sitting on an external committee to share their views with evaluators
- participating in the development of evaluation questions
- contributing to the development or review of data collection instruments
- contributing to the data collection process
- helping share the evaluation results among different communities as findings become available
Defining research methods
When defining research methods and methodologies, including sampling strategies and analytical plans, evaluators should apply a GBA+ lens to determine whether the methods are appropriate for examining the issues evaluatedFootnote 10. The following sections provide details on quantitative, qualitative and mixed research methods.
Quantitative research methods
These empirical research methods examine the cause-and-effect relationship between independent variables (for example, gender, ethnicity, social class) and dependent variables to produce macro-level population analysisFootnote 11. A quantitative methodology could, for example, use a structured randomized survey designed to be sensitive to intersecting identity factors, such as those stated above, and to the ways these factors may be influenced by systemic issues of accessibility or class.
Many departments and agencies collect administrative or program data as part of their program management and performance measurement responsibilities. These data can be used and analyzed to address some evaluation questions. In certain cases, these data can be linked to other sources of information to answer more advanced evaluation questions. The more advanced the data collection method, the greater the opportunity for using experimental designs, quasi-experimental analyses, cost-benefit analyses, impact and net impact analysesFootnote 12.
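Linking administrative records to other sources, as described above, often amounts to joining datasets on a shared identifier so that outcomes can be analyzed by identity factor. The sketch below uses invented records and field names to illustrate the idea; real linkages would involve privacy safeguards and data-sharing agreements:

```python
# Hypothetical linkage of two administrative datasets by a shared ID.
applications = {
    "A1": {"gender": "woman", "field": "engineering"},
    "A2": {"gender": "man", "field": "mathematics"},
    "A3": {"gender": "woman", "field": "computer science"},
}
outcomes = {
    "A1": {"scholarship_awarded": True},
    "A2": {"scholarship_awarded": False},
    "A3": {"scholarship_awarded": True},
}

# Join the two sources so award outcomes can be analyzed by gender.
linked = {
    app_id: {**applications[app_id], **outcomes.get(app_id, {})}
    for app_id in applications
}

# A question the unlinked datasets could not answer on their own.
awards_to_women = sum(
    1 for r in linked.values()
    if r.get("gender") == "woman" and r.get("scholarship_awarded")
)
```

Neither dataset alone supports a GBA+ question such as "how many awards went to women"; only the linked records do, which is why linkage expands the range of evaluation questions that can be answered.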
Example 8 shows how outcomes and quantitative indicators can be improved by using a GBA+ lens to help fill data gaps and more effectively measure outcome achievement.
Step 1. The evaluator assesses the initial performance information provided by the program and identifies the following shortcomings of the indicators.
- This data, on its own, will not capture unintended consequences.
- The indicators are not disaggregated by other identity factors, such as immigrant status, race, ethnicity, sexual orientation identity or expression, non-binary gender identity, or visible minority group.
- Complex relationships between target population groups cannot be identified.
Step 2. The evaluator suggests additional indicators that could address the shortcomings identified:
| Outcomes | Expected outcomes | Suggested indicators |
| --- | --- | --- |
| Immediate | Girls are aware of opportunities in STEM fields | Number of information campaigns in high schools facilitated by female researchers for target population groups (for example, Indigenous, immigrant status) |
| Immediate | Girls are aware of opportunities in STEM fields | Number of scholarship applications received from eligible women students, by level (bachelor’s, master’s, doctorate) and by target population group (for example, Indigenous, immigrant status) |
| Immediate | Girls have access to opportunities in STEM fields | Number of scholarships awarded to women from target population groups (for example, Indigenous, immigrant status) |
| Immediate | Girls have access to opportunities in STEM fields | Percentage of scholarships in STEM awarded to women from target population groups (for example, Indigenous, immigrant status) |
| Intermediate | The gender gap in STEM graduates is reduced | Percentage of STEM field graduates who are women, by level (bachelor’s, master’s, doctorate) and by target population groups (for example, Indigenous, immigrant status) |
| Intermediate | The gender gap in STEM graduates is reduced | Percentage of STEM field postdoctoral fellowships held by women from target population groups (for example, Indigenous, immigrant status) |
| Ultimate | The gender gap in STEM professions is eliminated | Percentage of research-intensive positions in STEM fields held by women from target population groups (for example, Indigenous, immigrant status) |
| Ultimate | The gender gap in STEM professions is eliminated | Percentage of researchers holding tenure positions in STEM fields who are women from target population groups (for example, Indigenous, immigrant status) |
| Ultimate | The gender gap in STEM professions is eliminated | Percentage of STEM field-related senior positions in private companies held by women from target population groups (for example, Indigenous, immigrant status) |
| Ultimate | The gender gap in STEM professions is eliminated | Percentage of women working in STEM fields who are earning pay equal to the average for men in the same positions and who are from target population groups (for example, Indigenous, immigrant status) |
| Ultimate | The gender gap in STEM professions is eliminated | Percentage of women with a degree or diploma in a STEM field who are working in a STEM field 5 years after graduation and who are from target population groups (for example, Indigenous, immigrant status) |
Qualitative research methods
Qualitative research methods complement quantitative methods that use disaggregated data to examine the level of outcomes achieved. Qualitative research can often help validate patterns identified through quantitative methods by determining whether the analysis resonates with participants. Qualitative data analysis therefore fosters a deeper understanding and more thorough explanation of trends observed in quantitative analyses.
For example, because of the oral culture in many First Nations and Inuit communities, a quantitative approach could completely miss the cumulative knowledge of environmental and cultural practices that is handed down from generation to generation among community members.
Qualitative research methods include interviews, focus groups, expert panel discussions and case studies.
In the fictional Promotion of Gender Equity in STEM program, the evaluator may not be able to collect data on all the additional potential indicators identified in the previous example. To complement the quantitative information, the evaluator could use qualitative research methods, including:
- interviews with program officials on reception of awareness campaigns, and barriers that keep women from applying
- focus groups with professionals who run awareness campaigns or with students who respond to those campaigns
- interviews with STEM professors to explore the underlying causes of a low ratio of women in these areas
- interviews with a stratified sample of women currently enrolled in undergraduate and graduate STEM programs
- interviews with women who have degrees in STEM fields but who are not currently employed in those fields
- case studies of women who hold senior tenure positions in STEM fields
Mixed research methods
Mixed research methods, which combine quantitative and qualitative methods, are standard in evaluation. They help give evaluators a more complete picture of the performance of a policy, program or service, including the social context and how different variables intersect with other operating dimensions, such as the “what,” “why” and “how” of an outcome. Both quantitative and qualitative methods can provide information on the results achieved among different target population groups.
Mixed methods also help evaluators verify information obtained from multiple sources. Relying on multiple sources of evidence allows for the triangulation of findings and helps compensate for biases, which can favour different ways of knowing and communicatingFootnote 13. Triangulation involves drawing conclusions from various data sources; the variety of sources strengthens the rigour of the analysis.
Planning for data collection
Data collection methods need to be planned when indicators are developed. When deciding on how to collect data, evaluators can consider the following GBA+ questions:
- Have the needs and preferences of different genders and populations been considered?
- Will data be collected and analyzed based on gender identity, sex, sexual orientation or other intersecting identity factors, such as race, ethnicity, religion, age, and mental or physical disability?
- Can data collection methods enhance participant contribution?
- Can data be collected by stakeholders (recognizing that this could introduce bias) as a way to recognize some of their rights (for example, the right of First Nations communities to own, control, access and possess information about their people) and, from a gender and women’s rights perspective, to create opportunities for evaluation data collection and analysis processes to be empowering for stakeholders?
Example 10 outlines the principles relating to data that evaluators should consider when evaluating a policy, program or service designed for Indigenous people. When collecting personal data or information from First Nations, Inuit, Métis and other marginalized groups, evaluators must be sensitive and respectful.
When evaluating programs for Indigenous people, evaluators must respect diverse Indigenous practices and address concerns about previous evaluations in the design of research questions. They must also respect commitments relating to reciprocity and knowledge-sharing made with the participating Indigenous communities. The First Nations principlesFootnote 14 of ownership, control, access and possession (OCAP) of data can serve as guidelines:
- Ownership: the right of communities or groups to own their data and information in the same way an individual owns his or her personal information
- Control: the right of First Nations to control all aspects of research and information management processes that have an impact on them (including control of resources, the planning process, and data management)
- Access: the right for First Nations to have access to data about their communities and to make decisions about the right of access to their collective information
- Possession: the right of First Nations to possess and physically control their data as a way to protect its ownershipFootnote 15
Careful planning is necessary to ensure that data is collected in a way that is inclusive, accessible and secure for everyone involved. This is particularly true when marginalized or vulnerable populations are the focus of a policy, program or service. The next section contains more information on data collection.
Conducting the evaluation
Once evaluators have developed an evaluation framework, GBA+ considerations will be tested while carrying out data collection and analysis. The design of data collection tools that reflect the diversity of the target population groups will play an important role in gathering sufficient and reliable data. Intersecting factors should be recognized and addressed to the extent possible. Evaluators should also remain open to the emergence of unexpected interactions and unexpected outcomes. Suitable analysis and triangulation of data that maintain a GBA+ approach improve consistency and reliability.
The importance of available, reliable and high-quality data to undertaking GBA+ cannot be overstated; data is central to advancing understanding of programs and their impactsFootnote 16. When designing data collection methods, evaluators need to consider the context, circumstances and characteristics of the target population groups. During data collection, evaluators are encouraged to include different voices and provide a space for traditionally disempowered groups to be heard, in line with the activities, outputs and outcomes of the policy, program or service being evaluated. When evaluators capture the voices of vulnerable target population groups in a practical and ethical manner, and when the intervention is culturally and contextually valid, the validity and quality of the findings increase. Evaluators should carefully examine how, by whom, when and where data are collectedFootnote 17. For instance, in some cases, surveys may not be the best way to collect data for GBA+ because of the sensitivity of some personal information. Evaluators should also reflect on the regional and demographic differences that exist across Canada, because these differences can affect the level of participation and the type of responses received. Nonetheless, as a basic principle, intersectional analysis should begin with the assumption that it is not known in advance which dimensions of inequality will be relevant when exploring specific issuesFootnote 18.
Table 1 lists some considerations for data collection.
| Consideration | What to consider | Examples |
| --- | --- | --- |
| How | How data are collected should reflect the preferences of different target population groups. | Young people tend to use mobile phones extensively. Surveys that target youth may have higher response rates if they can be answered on a mobile device. |
| How | How data are collected should factor in the privacy and ethical considerations of different target population groups. | A focus group is not generally an appropriate way to gather information on issues related to violence and domestic abuse because of privacy concerns and the sensitivity of the subject matter. In cases where there are known differences between and among men and women in relation to a program (for example, awareness, take-up, completion), having separate focus groups for men and women or Indigenous persons may reveal different findings than if the groups are mixed. One contributing factor is individuals’ comfort level based on the make-up of the focus group, and even on the facilitator. |
| How | Sensitivity to gender identity deserves particular attention. | Demographic questions in surveys usually categorize respondents simply as female or male. Evaluators may consider alternative approaches. One option is to ask respondents to indicate their sex at birth and then to indicate their current gender identity. Another option is to ask respondents to indicate their gender identity directly (“Please select the gender identity that best describes you”). It may also help to include how the information will be used and how it would inform program renewal. |
| Who | Who collects the data can influence the validity of the findings. | The gender and other intersecting identity factors of an interviewer may influence response rates and responses. If respondents feel embarrassed by a topic, they will be less likely to participate in discussions, particularly when the interviewer belongs to another gender. In certain cultures, it is considered unacceptable for a woman to discuss women’s issues with a male interviewer. |
| Who | Who responds to surveys or participates in other data collection methods matters. | The different target population groups should be adequately represented in the data collected. When developing a survey sample, evaluators may use stratified random sampling (also called proportional random sampling), in which the population is divided into groups (strata) that represent different target population groups based on gender and intersectional considerations, with random sampling conducted within each stratum. Evaluators must understand the potential extent of non-response bias, which occurs when respondents and non-respondents differ in meaningful ways, for example, with respect to gender or other intersecting identity factors relevant to the evaluation (see 6.1.1 Non-response bias). When developing a sample for qualitative methods, such as interviews or focus groups, evaluators can intentionally select individuals who are knowledgeable about a subject of interest and who can communicate their experiences and opinions clearly. This technique (known as purposeful sampling) helps ensure that individuals from the target population groups are identified and selected based on their knowledge and familiarity with the subject and that their opinions are taken into consideration. |
| When | When data collection occurs can influence the participation rate of different target population groups. | Target population groups may have responsibilities and activities that affect their availability to respond to surveys and participate in focus groups. Ensuring that data collection happens at different times of the day and week can make responses more representative. For instance, for single mothers asked to participate in focus groups, timing can be a critical factor in whether they participate. Response rates and consent to being interviewed may also be affected by other timing factors, such as religious or traditional holidays observed by the target population groups. |
| Where | Where data collection occurs (for example, physical location, online) should consider the preferences and abilities of respondents. | If people with disabilities are to be interviewed, the venues must be accessible. Online surveys must also be adjusted for use by colour-blind, blind, and visually impaired people if they are included in the sample. In some cultures, holding focus groups in a building that is generally accessed by men only could discourage or even prevent women from attending. |
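Stratified (proportional) random sampling, mentioned in Table 1, can be sketched as follows. The sampling frame, strata and sampling fraction below are invented for illustration only:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Invented sampling frame: each unit is labelled with a gender stratum.
frame = (
    [("woman", i) for i in range(400)]
    + [("man", i) for i in range(500)]
    + [("non-binary", i) for i in range(100)]
)

def stratified_sample(frame, fraction):
    """Draw the same fraction from each stratum (proportional allocation)."""
    strata = {}
    for stratum, unit in frame:
        strata.setdefault(stratum, []).append(unit)
    sample = {}
    for stratum, units in strata.items():
        k = round(len(units) * fraction)
        sample[stratum] = random.sample(units, k)
    return sample

sample = stratified_sample(frame, 0.10)
print({s: len(units) for s, units in sample.items()})
# {'woman': 40, 'man': 50, 'non-binary': 10}
```

Because each stratum is sampled at the same rate, the sample preserves the gender composition of the frame, which is the property that makes the method attractive for GBA+ work.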
An important factor to consider before proceeding with data collection, and one that has implications for the data collection methods chosen, is non-response bias. This bias occurs when respondents and non-respondents differ in meaningful ways from one another (for example, they have different genders or other intersectional identity factors). Non-response bias is defined as ‘the average difference between the imputed estimator and the estimator we would have obtained had complete response been observed’Footnote 19.
Usually, two types of non-response bias can be identified:
- item non-response bias, which happens when respondents opt to not answer certain questions in a survey or form
- unit non-response bias, which happens when individuals invited to participate in a survey refuse to respond
To minimize non-response bias, a survey must:
- be properly designed
- be of appropriate length
- contain clear, concise and culturally sensitive language
Evaluators should be aware of the factors that can contribute to non-response bias. These factors include those linked to:
- data collection tools and approaches (for example, Internet surveys, telephone or in-person interviews, SMS text messages)
- when data collection occurs (for example, time of day or time of year)
The impact of these factors may be attenuated by sending reminders to non-respondents (unit non-response) or by making every question in the survey mandatory (item non-response). If all questions are mandatory, each one must include a neutral response option (for example, not applicable, no response, do not know).
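As a rough illustration of how the two types of non-response can be quantified, the sketch below computes unit and item non-response rates from hypothetical survey counts (all figures are invented):

```python
# Hypothetical survey figures (invented for illustration).
invited = 500      # people invited to the survey
completed = 320    # people who submitted a response (unit response)
answered_q7 = 272  # respondents who answered a sensitive question, Q7

# Unit non-response: invitees who did not respond at all.
unit_nonresponse_rate = 1 - completed / invited

# Item non-response: respondents who skipped a specific question.
item_nonresponse_rate = 1 - answered_q7 / completed

print(f"Unit non-response: {unit_nonresponse_rate:.0%}")  # 36%
print(f"Item non-response: {item_nonresponse_rate:.0%}")  # 15%
```

Computing these rates separately for each gender or identity group can flag whether non-response is concentrated in particular target population groups.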
Analysis of quantitative data
Evaluators can use two main types of statistical analysis to analyze quantitative data related to gender and intersecting identity factors:
- descriptive statistics
- inferential statistics
Descriptive statistics present data disaggregated by gender and other relevant factors that characterize the target population groups. Typical descriptive statistical methods include:
- mode, median and mean, which are used to analyze the central tendencies of the data
- range, variance and standard deviation, which are used to measure the spread of the data
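All of these measures are available in Python’s standard `statistics` module. The sketch below applies them to a small invented set of assessment scores, disaggregated by gender:

```python
import statistics

# Invented assessment scores for two target population groups.
scores = {
    "women": [72, 85, 85, 90, 64, 78],
    "men":   [70, 70, 88, 95, 60, 73],
}

for group, values in scores.items():
    print(group)
    # Central tendency
    print("  mean:  ", round(statistics.mean(values), 1))
    print("  median:", statistics.median(values))
    print("  mode:  ", statistics.mode(values))
    # Spread
    print("  range: ", max(values) - min(values))
    print("  variance (sample):", round(statistics.variance(values), 1))
    print("  std dev (sample): ", round(statistics.stdev(values), 1))
```

Reporting these statistics side by side for each group is the simplest form of gender-disaggregated analysis; it describes the data but does not by itself establish whether any difference is meaningful.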
Referring to the fictional Promotion of Gender Equity in STEM Program, example 11 shows the number of graduates at each level in each STEM field, by gender.
| Field of study | Level of education | Women in STEM | Men in STEM | Percentage difference |
| --- | --- | --- | --- | --- |
| Science and science technology | Bachelor degree | 166,520 | 153,950 | +8.2% |
| Engineering and engineering technology | Bachelor degree | 37,120 | 226,975 | -83.5% |
| Mathematics and computer and information science | Bachelor degree | 41,570 | 104,970 | -60.4% |
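The percentage-difference column is the gap between the two counts, expressed relative to the count for men. A minimal sketch, using the science and science technology row:

```python
# Counts from the science and science technology (bachelor degree) row.
women, men = 166_520, 153_950

# Percentage difference of women relative to men.
pct_diff = (women - men) / men * 100
print(f"{pct_diff:+.1f}%")  # +8.2%
```

The other rows are computed the same way; small discrepancies with published figures can arise from rounding conventions.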
There are differences between the number of women and the number of men earning degrees in STEM. These differences tend to be most pronounced for women at the graduate levels, particularly in engineering and engineering technology. There are slightly more female recipients of a bachelor degree in science and science technology than there are male recipients, although a significant difference persists in this field at the graduate levelsFootnote 20. Gender categories are not homogeneous; there can be considerable differences within groups of men and groups of women. Further exploration might therefore be needed to determine whether the differences in numbers of graduates are associated with gender inequality.
Because most evaluations deal with samples of target population groups, it is usually impossible to draw conclusions about all or individual groups by relying on descriptive statistics alone. The general picture they provide, however, can inform the modelling for the second type of statistics used to analyze quantitative data, inferential statistical analysis.
Inferential statistics are used to examine the cause-and-effect relationship between independent factors (for example, ethnicity, social class, gender) and dependent factors (the subject of study) to produce macro-level population dataFootnote 21. Inferential statistics can help evaluators determine the extent to which, for example, the policy, program or service has had an impact on specific target population groupsFootnote 22. Common inferential statistical analysis methods include:
- chi-square tests, which test the strength of the association between two categorical variables
- t-tests, which are used to look for differences between two dependent or independent variables by comparing their means
- analysis of variance (ANOVA), which is used to look for the difference between group means
- regression analysis, which is used to test predictions of changes in an outcome variable based on changes in a predictor variable
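As a minimal illustration of the first of these methods, the sketch below computes a chi-square statistic by hand for an invented 2×2 table of gender by program completion (in practice a library routine such as `scipy.stats.chi2_contingency` would do this and return a p-value):

```python
# Invented 2x2 contingency table: rows = gender, columns = completed (yes, no).
observed = [
    [40, 60],   # women: 40 completed, 60 did not
    [70, 30],   # men:   70 completed, 30 did not
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 2))  # 18.18
# With 1 degree of freedom, this exceeds the 5% critical value (3.84),
# suggesting completion rates differ by gender in this invented data.
```

The same disaggregated counts used for descriptive statistics can therefore feed directly into a formal test of association.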
The type of inferential statistical analysis chosen will depend on a number of factors, such as:
- the distribution of the data
- the types of variables
- the research design
This document is a primer, so it does not cover more advanced inferential techniques, such as econometrics, impact and net impact analysesFootnote 23.
Referring back to the fictional Promotion of Gender Equity in STEM Program, an evaluator could use a multiple regression analysis to further investigate the impact of different independent variables, such as gender, immigration status and age, on graduation rates in STEM fields (the dependent variable).
Ordinary least squares (OLS) linear multiple regression assumes that the dependent variable is measured at the interval or ratio level. OLS regression can be carried out if the dependent variable is categorical, but it must have at least five categories, and dichotomous variables (for example, male or female and immigrant or non-immigrant) must be entered into the equation by creating dummy variables. In this example, it is assumed that all the other requisites of a linear regression are metFootnote 24.
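The dummy-variable coding described above can be sketched as follows. The records are hypothetical; the point is that each dichotomous variable becomes a single 0/1 column and a k-category variable becomes k−1 indicator columns, with one category left out as the reference:

```python
# Hypothetical individual records with categorical predictors.
records = [
    {"gender": "woman", "immigrant": True,  "age_group": "25 to 34"},
    {"gender": "man",   "immigrant": False, "age_group": "45 to 54"},
    {"gender": "woman", "immigrant": False, "age_group": "65 or older"},
]

# Reference (omitted) categories: man, non-immigrant, "25 to 34".
AGE_GROUPS = ["35 to 44", "45 to 54", "55 to 64", "65 or older"]

def to_dummies(rec):
    """Encode one record as a row of 0/1 dummy variables for OLS regression."""
    row = {
        "woman": int(rec["gender"] == "woman"),
        "immigrant": int(rec["immigrant"]),
    }
    for group in AGE_GROUPS:  # k - 1 dummies for the 5 age categories
        row[f"age {group}"] = int(rec["age_group"] == group)
    return row

for rec in records:
    print(to_dummies(rec))
```

Each coefficient estimated on such columns is then interpreted relative to the omitted reference category (for example, the `woman` coefficient is the estimated difference between women and men, all else equal).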
| Gender | Immigration status | Age | College, CEGEP or other non-university certificate | Bachelor’s | University certificate or diploma above bachelor level | Master’s | Doctorate |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Women | Non-immigrant | 25 to 34 | 22,085 | 56,275 | 2,365 | 13,300 | 2,325 |
| Women | Non-immigrant | 35 to 44 | 30,360 | 43,155 | 2,030 | 12,020 | 3,780 |
| Women | Non-immigrant | 45 to 54 | 36,655 | 28,250 | 1,565 | 6,860 | 2,480 |
| Women | Non-immigrant | 55 to 64 | 31,740 | 20,230 | 1,165 | 4,620 | 1,730 |
| Women | Non-immigrant | 65 or older | 14,065 | 8,430 | 775 | 1,970 | 945 |
| Women | Immigrant | 25 to 34 | 4,265 | 19,875 | 31,100 | 8,305 | 1,765 |
| Women | Immigrant | 35 to 44 | 6,785 | 12,315 | 22,280 | 6,475 | 2,725 |
| Women | Immigrant | 45 to 54 | 8,115 | 8,775 | 15,575 | 4,510 | 1,695 |
| Women | Immigrant | 55 to 64 | 5,430 | 4,875 | 7,765 | 1,665 | 945 |
| Women | Immigrant | 65 or older | 3,485 | 2,255 | 4,310 | 1,165 | 645 |
| Men | Non-immigrant | 25 to 34 | 88,975 | 94,600 | 2,995 | 16,305 | 2,980 |
| Men | Non-immigrant | 35 to 44 | 110,825 | 74,845 | 3,085 | 14,575 | 5,635 |
| Men | Non-immigrant | 45 to 54 | 111,865 | 66,250 | 3,100 | 12,270 | 5,025 |
| Men | Non-immigrant | 55 to 64 | 97,455 | 56,660 | 2,710 | 11,495 | 5,395 |
| Men | Non-immigrant | 65 or older | 55,655 | 46,425 | 3,520 | 10,115 | 5,860 |
| Men | Immigrant | 25 to 34 | 17,770 | 35,710 | 2,030 | 13,035 | 2,690 |
| Men | Immigrant | 35 to 44 | 20,790 | 22,200 | 1,545 | 11,430 | 5,215 |
| Men | Immigrant | 45 to 54 | 22,825 | 19,265 | 1,035 | 9,935 | 5,005 |
| Men | Immigrant | 55 to 64 | 17,265 | 13,595 | 1,035 | 5,075 | 3,490 |
| Men | Immigrant | 65 or older | 13,310 | 11,405 | 1,215 | 5,940 | 4,985 |
Because the coefficient for age is not statistically significant, it is not possible to determine whether age plays a role in determining the likelihood of a person obtaining a degree in a STEM field. There is, however, a strong relationship between gender and graduation in STEM fields: women are less likely than men to graduate in a STEM field, especially if they are immigrants.
When they obtain survey results or administrative data, evaluators should assess the extent to which non-response bias may affect conclusions drawn from the inferential statistical analyses. To do this, evaluators can compare the gender-related demographic variables of respondents with those of the target population group, a sample or another reliable external source, such as Census data. In some cases, conclusions drawn from the analysis with respect to any of the target population groups will not be reliable or representative. In these cases, evaluators can compensate for the unit non-response by adjusting weights.
Example 13 illustrates weighting in the analysis process.
| | Men | Women | Gender not available |
| --- | --- | --- | --- |
| Percentage of population | 40% | 50% | 10% |
| Percentage of responses | 60% | 35% | 5% |
The distribution of responses in this sample differs from the distribution of the population under study. To assign a weight to each group, divide its percentage of the population by its percentage of responses:
- Male: 40.0 / 60.0 = 0.67
- Female: 50.0 / 35 = 1.43
- Not available: 10 / 5 = 2.00
Men are over-represented in the survey, so all men’s responses receive a weight of 0.67. Women are under-represented in the survey, so all women’s responses receive a weight of 1.43.
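The weight calculation above is simply each group’s population share divided by its response share; a minimal sketch using the figures from the example:

```python
# Shares from the example: percentage of population and of survey responses.
population = {"men": 40, "women": 50, "not available": 10}
responses  = {"men": 60, "women": 35, "not available": 5}

# Weight for each group = population share / response share.
weights = {g: round(population[g] / responses[g], 2) for g in population}
print(weights)  # {'men': 0.67, 'women': 1.43, 'not available': 2.0}
```

A weight below 1 down-weights an over-represented group; a weight above 1 up-weights an under-represented one.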
Applying the weight
A survey was used to collect information about the number of men and women who received a scholarship. The following were the original results:

| Received a scholarship? | Men | Women | Gender not available |
| --- | --- | --- | --- |
| Yes | 112 | 56 | 12 |
| No | 234 | 311 | 35 |

When the weights for gender are applied, the results are as follows:

| Received a scholarship? | Men | Women | Gender not available |
| --- | --- | --- | --- |
| Yes | 112 × 0.67 = 75.04 | 56 × 1.43 = 80.08 | 12 × 2 = 24 |
| No | 234 × 0.67 = 156.78 | 311 × 1.43 = 444.73 | 35 × 2 = 70 |
Unweighted results show that roughly two thirds of survey respondents who received scholarships were men and fewer than one third were women. After the weights based on the population and the available responses are applied, the survey shows that, of those who received a scholarship, over 4 in 10 were women and over 4 in 10 were men. About 13% of those who received a scholarship did not reveal their gender.
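The weighted shares quoted above can be reproduced by applying the weights to the “Yes” row and dividing each weighted count by the weighted total:

```python
# Unweighted "received a scholarship" counts and the group weights from the example.
yes_counts = {"men": 112, "women": 56, "not available": 12}
weights    = {"men": 0.67, "women": 1.43, "not available": 2.0}

weighted = {g: yes_counts[g] * weights[g] for g in yes_counts}
total = sum(weighted.values())

for group, value in weighted.items():
    print(f"{group}: {value / total:.0%} of weighted scholarship recipients")
# men: 42%, women: 45%, not available: 13%
```

This is why the weighted picture (over 4 in 10 women) differs so sharply from the unweighted one (fewer than 1 in 3 women).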
Analysis of qualitative data
Depending on the type and number of qualitative methods selected for the evaluation, the information gathered (for example, through in-person interviews, written responses to open-ended questions in surveys, focus group discussions) can result in large amounts of unstructured data. The analysis of this data will involve the identification of themes, categories and patterns. Although the analysis will be guided by the evaluation questions, the following lines of enquiry are examples of GBA+ themes that could be examined:
- Is there evidence that the policy, program or service reduces, maintains or increases the gender inequalities between target population groups?
- Is there evidence of structural barriers to participation? If so, is there evidence that their root causes are based on unequal power dynamics?
- Does the analysis capture any changes that support or contribute to the transformation of existing structures with the aim of reducing discrimination and increasing inclusion and equality of marginalized groups?
- Does the evidence indicate that target population groups are affected differently by the intervention based on factors such as gender, race, ethnicity or culture?
- Does the evidence indicate that the policy, program or service has affected some target population groups negativelyFootnote 25?
The strength of mixed methods is realized during data analysis. Evaluators can tell a more comprehensive story when using information from multiple sources, which reinforces the validity of the findings and strengthens the conclusions.
Reporting evaluation results
Evaluation reports and supporting technical studies, where available, should explain:
- how GBA+ was integrated into the approach and methodology
- how diverse target groups were included
- how gender and other intersecting identity factors were addressed during the evaluation
- any constraints encountered when conducting the evaluation (for example, lack of gender-disaggregated data) and the impact of those constraints
- how any constraints were mitigated to ensure the validity and reliability of findings
Findings and conclusions should discuss, to the extent possible, what was learned related to gender and other intersecting identity factors. Key stakeholders, including the targeted GBA+ community, should be consulted through different mechanisms to validate the factual information and findings. The recommendations should identify any areas requiring action and the impact (positive or negative) that these may have on diverse target population groups.
When assessing the management response and action plan to address any recommendations that contain GBA+ elements, evaluators are encouraged to consider the following:
- How does the response address the recommendation in view of gender and other intersecting identity considerations? What will be its impact?
- Will impacts of the recommendation on different target population groups be monitored? If so, how?
- How does the response align with similar or related efforts by other organizations that have implemented GBA+ in their evaluations (including other government organizations)?
When possible, the evaluation report should be shared with the target population groups and with policy, program or service delivery partners and stakeholders. Evaluators should tailor the communication of findings to various audiences to maximize their utility for decision-making. They should also use different channels of communication (for example, blogs, social media) to meet the needs of the different audiences.
The Government of Canada has committed to placing a greater emphasis on incorporating GBA+ across government. Therefore, if evaluators determined that it was not feasible to integrate GBA+ into an evaluation, they should explain why, in the evaluation report. They should also consider including, in the report or in supporting technical studies, a section on future research or data development that would indicate what data would support improved GBA+ in the future.
GBA+ is an important tool for examining the impact of policies, programs and services on different groups of people, according to their sex, gender and other intersecting identity factors.
Although GBA+ should be considered at all stages of a policy, program or service, the evaluation function provides an opportunity to rigorously examine considerations related to gender and intersecting identity factors. GBA+ can be incorporated into all stages of an evaluation: planning, conducting and reporting. Taking a deliberate and systematic approach to how GBA+ is applied will improve the usefulness of evaluation insights for policy development and decision-making.
At the planning stage, GBA+ should be considered when examining pre-existing information on the policy, program or initiative, including literature and file reviews, particularly to assess existing knowledge and data gaps. For instance, evaluators can help shed light on the extent to which a program included GBA+ considerations in its design, and which aspects of a program can contribute to or benefit diverse target population groups.
As the evaluation is conducted, applying a GBA+ lens will provide useful information on existing or potential gaps related to the design and delivery of a policy, program or service, and ways to further improve its design and implementation. Data collection should be adapted to the circumstances and characteristics of different target population groups to provide them with an adequate space to be heard and include their voices in the evaluation.
It is important for evaluation reports and supporting technical studies, where available, to describe how GBA+ was integrated into the approach and methodology, how diverse target population groups were included, and how gender and other intersecting factors were addressed in the evaluation. Similarly, reports should identify any limitations related to GBA+, the impact of those limitations, and how they could have been mitigated to ensure the validity and reliability of findings. Areas identified for action, and the effect that these may have on diverse target population groups, should be included in the recommendations and addressed through the management response and action plan.
In addition to incorporating GBA+ into their own work, evaluators can support the application of GBA+ in performance measurement, internal audit, research and other assurance functions by, for example:
- developing networks or communities of practice on GBA+
- participating in learning opportunities such as EVALConnex sessions
- developing tools and examples for implementing GBA+ in contexts that might not easily lend themselves to this particular lens
- establishing specialized training to support fuller integration of GBA + into all stages of evaluation work
- implementing participatory approaches with different stakeholder groups that have been marginalized by research and evaluation processes in the past
Appendix A - Aide-mémoire
GBA+ in Evaluations
Reviewing the logic model and theory of change:
- Do any aspects of the policy, program or initiative have potential gender considerations?
- Which population groups are expected to contribute to the policy, program or initiative? What is the role and position of these groups? Are they involved in design/planning, delivery, or in other ways?
- Are other identity factors intersecting with gender? Are there other factors (beliefs, prejudices, assumptions) that may create barriers for participation in the policy, program or initiative by specific gender target groups?
- To what extent do existing performance data take into account GBA+ considerations?
- Are gender-sensitive indicators established/being used?
- Are there potential biases in the way current indicators are framed?
- Are other indicators needed to evaluate with a GBA+ lens?
- Is the policy, program or initiative expected to contribute to promoting equality, diversity, and inclusion?
- Does it respond to the needs of its target population groups? Are there population groups that should be targeted that are not?
- Does it contribute to, or align with, government-wide priorities on gender equality?
- Does it align with or duplicate the work of other policies, programs or initiatives?
- Are there lessons from comparable policies, programs or initiatives that promote equality, diversity, and inclusion at the provincial, territorial or municipal levels or outside of Canada that could be applied?
Design and delivery:
- Does the policy, program or initiative create/perpetuate barriers for certain target population groups?
- What kind of barriers are perceived by target population groups as a result of the policy, program or initiative?
- In what ways can the policy, program or initiative be improved to foster inclusion of target population groups (e.g., by enhancing their feedback/contribution to policy, program or initiative delivery)?
- To what extent and in what ways have expected outcomes had an impact on different population groups?
- Have outcomes differed across different population groups?
- To what extent have any disparities in outcomes for different population groups been addressed, if necessary?
- Have there been any unexpected impacts (positive or negative) for any population groups?
- If so, how were these addressed, if at all?
- Is the program as set out currently gender-transformative?
- For example, to what extent has gender awareness increased among population groups targeted by the policy, program or initiative? Have internal representatives and target population groups learned about relevant gender considerations as a result of activities conducted by the policy, program or initiative?
- Are any results related to equality and diversity likely to be sustained?
- To what extent are effective and efficient means being used to ensure inclusion of target population groups when implementing a policy, program or initiative?
- What are the administrative costs by target population groups? Can administrative efficiency be improved for specific target population groups?
- How data is collected should be influenced by an understanding of the preferences of different population groups
- How data is collected should factor the privacy and ethical considerations of different population groups
- How data is collected on gender identity should receive particular attention
- Who collects the data may affect the validity of findings
- Who responds to surveys or participates in other data collection methods matters
- Different population groups may be available to respond to surveys and interviews at different times, due to different responsibilities and activities
- Evaluators should factor in the preferences and abilities of respondents
- How was GBA+ integrated? What considerations were used? What stakeholder groups participated?
- If GBA+ was not integrated, a rationale should be provided
- How were diverse target groups included?
- How were gender and other intersecting identity factors addressed during the evaluation? What constraints were encountered when conducting the evaluation? How were any constraints mitigated to ensure the validity and reliability of findings?
- Do findings, conclusions and recommendations incorporate learnings from GBA+?
Management Response and Action Plan:
- How does the MRAP address the recommendation in view of gender considerations? What will be its impact?
- Will impacts of the recommendation on different population groups be monitored and, if so, how?
- How does the response align with similar or related efforts by other organizations (including other government organizations)?
Appendix B - Glossary of terms
- Gender:
- the socially constructed roles, behaviours, expressions and identities of girls, women, boys, men and non-binary people. Gender is usually understood as binary (girl or woman, and boy or man), yet there is considerable diversity in how individuals and groups understand, experience and express genderFootnote 26.
- Gender statistics:
- the predetermined collection of statistics that includes gender-specific variables to measure and reflect the sociocultural and economic differences in the lives of women, men, girls and boys. Gender statistics include gender gaps in education, the labour market, earnings and healthFootnote 27. Gender statistics are vital to inform policymakers and to make advances towards achieving gender equality. Gender statistics have an important role in improving how statistical information is collected with the aim to describe more accurately and comprehensively the characteristics of a populationFootnote 28. Table A gives examples that can provide further clarity in the conceptual differences between sex-disaggregated statistics and gender statistics.
- Gender-based analysis plus (GBA+):
- a type of analysis that assesses how diverse groups of women, men and gender-diverse people may experience policies, programs and initiatives differently. The “plus” in GBA+ acknowledges that GBA goes beyond biological (sex) and socio-cultural (gender) differences, and looks at how sex and gender intersect with other identity factors, including race, religion, ethnicity, age, and mental or physical disability, among othersFootnote 29. A core component of GBA+ is examining and challenging assumptions about an issue, population group, or system to ensure that inequality for a particular group of people is not inadvertently perpetuatedFootnote 30.
- Gender equality:
- Gender equality refers to equal rights, responsibilities and opportunities for women, men and non-binary people. Equality refers to the state of being equal while equity refers to the state of being just, impartial or fair. However, equality of opportunity by itself does not guarantee equal outcomes for women, men and non-binary people.
- Gender equity:
- Gender equity refers to fairness, impartiality and justice in the distribution of benefits and responsibilities between women, men and non-binary people. Unlike gender equality, which simply provides for equality of opportunity, gender equity explicitly recognizes and actively promotes measures to address historical and social disadvantages. By ‘levelling the playing field,’ gender equity creates circumstances through which gender equality can be achieved. Gender equity means providing all social actors with the means to take advantage of equality of opportunity.
- Gender-sensitive:
- a term used to describe policies, programs and initiatives that consider gender norms, roles and relations, and how they affect access to and control over resources. These policies, programs and initiatives address the causes of gender-based inequities and include ways to transform harmful gender norms, roles and relations. One of the objectives is to promote gender equality by including strategies to foster progressive changes in power relationships between women and men. Indicators allow for the measurement of change in the relations between women and men in certain areas, and of progress in eliminating the causes of gender-based inequities.
- Health inequalities:
- disparities experienced by different groups in society and attributable to distinctions in gender, age, education, geography, cultural background/beliefs, income and/or ability, among other factors.
- Health inequities:
- inequalities that are systemic and socially produced, arising from differential social circumstances rather than biological determination, and preventable where knowledge exists to reduce these differences.
- Intersectional analysis (also referred to as intersectionality or intersectional approach):
- a type of analysis that explores the interaction of multiple aspects of identity (for example, race/ethnicity, indigeneity, gender, sexual orientation, gender identity and gender expression, class, sexuality, geography, age, disability/ability, migration status, religion). These interactions occur within a context of connected systems and structures of power (for example, laws, policies, state governments and other political and economic unions, religious institutions, media) and contribute to inequality of outcomes for different social groupsFootnote 31.
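In quantitative work, an intersectional analysis can be approximated by cross-tabulating an outcome across more than one identity factor at a time, rather than examining each factor in isolation. The sketch below is a minimal illustration only; the records, group labels and figures are fictional and chosen solely for demonstration.

```python
from collections import defaultdict

# Fictional survey records: (gender, disability_status, employed)
records = [
    ("woman", "with disability", True),
    ("woman", "with disability", False),
    ("woman", "without disability", True),
    ("woman", "without disability", True),
    ("man", "with disability", True),
    ("man", "with disability", False),
    ("man", "without disability", True),
    ("man", "without disability", True),
]

# Single-factor view: employment status grouped by gender only.
by_gender = defaultdict(list)
for gender, disability, employed in records:
    by_gender[gender].append(employed)

# Intersectional view: grouped by gender x disability status together.
by_intersection = defaultdict(list)
for gender, disability, employed in records:
    by_intersection[(gender, disability)].append(employed)

def rate(values):
    """Share of True values (employment rate) in a group."""
    return sum(values) / len(values)

for group, values in sorted(by_intersection.items()):
    print(group, f"{rate(values):.0%}")
```

With this fictional data, the gender-only view shows an identical 75% employment rate for women and men, while the intersectional view reveals a 50-percentage-point gap by disability status within each gender: exactly the kind of pattern a single-factor analysis can mask.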
- Partners:
- organizations that assist in the implementation of the policy, program or initiative, or that have parallel policies, programs or initiatives that deal with similar elements.
- Sex:
- a set of biological attributes in humans and animals that is primarily associated with physical and physiological features including chromosomes, gene expression, hormone levels and function, and reproductive and sexual anatomyFootnote 32. Sex refers to a person’s biological and physiological characteristics. A person’s sex is most often designated by a medical assessment at the moment of birth. This is also referred to as birth-assigned sex.
- Sex-disaggregated statistics:
- the collection of statistics that are presented by sex to show the data for women and men separately.
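As a minimal sketch of the distinction drawn in Table A, a sex-disaggregated statistic splits an existing indicator by sex at tabulation time, while a gender statistic is designed from the outset to capture gendered realities such as unpaid work. The respondent records and field names below are fictional illustrations, not real survey data.

```python
from collections import defaultdict

# Fictional respondent records: (sex, weekly paid hours, weekly unpaid hours)
respondents = [
    ("female", 25, 30),
    ("female", 40, 20),
    ("male", 40, 8),
    ("male", 38, 10),
]

paid = defaultdict(list)
total = defaultdict(list)
for sex, paid_h, unpaid_h in respondents:
    paid[sex].append(paid_h)
    total[sex].append(paid_h + unpaid_h)  # paid + unpaid workload

def mean(xs):
    return sum(xs) / len(xs)

# Sex-disaggregated statistic: average paid work hours, presented by sex.
for sex in sorted(paid):
    print(sex, "paid hours:", mean(paid[sex]))

# Gender statistic: total workload including unpaid caregiving and
# domestic work, an indicator built to reflect gendered divisions of labour.
for sex in sorted(total):
    print(sex, "total workload:", mean(total[sex]))
```

In this fictional sample, disaggregating paid hours by sex shows women working fewer paid hours, while the total-workload indicator that counts unpaid work reverses the picture, echoing the unpaid-work example in the first row of Table A.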
Table A - Differences between sex-disaggregated statistics and gender statistics

| Gender-based analysis lens | Examples of studies that include a gender-based lens | Sex-disaggregated statistics (limited to the binary biological definitions applied in the research of male/female) | Gender-based analysis lens (considers the social and cultural constructs as they apply to gender definitions) |
| --- | --- | --- | --- |
| Reveal inequalities | Study on participation in the economy | Only looking at the employment rate disaggregated by sex overlooks the work of women. “Work” has been defined in conventional statistics as paid activities. Because a high proportion of women perform unpaid work such as caregiving, domestic work and volunteering, the work of women has been significantly overlooked. | The inclusion of gender-sensitive indicators relating to unpaid work in labour force surveys allows for the analysis of gaps revealed by the data. It provides a basis for formulating new questions that consider gender as it relates to the total work performed in the economy. |
| Include diversity in concept definition and data collection | Study on leadership (table A note 1) | Men were found to be more likely to display leadership qualities. However, the definition of leadership was framed in terms of dominance and other styles that emphasized characteristics congruent with male stereotypes. These stereotypes associated with the concept being studied biased the research outcomes. | Recognizing the existence of a range of leadership qualities, including conflict resolution, consideration of others, and negotiation skills, would help correct the gender bias at the outset, before the data is collected. |
| Consider gender biases in data | Study on families (table A note 1) | Survey question contains “fatherless homes”. | Gender statistics are collected with consideration of negative connotations, such as negatively describing behaviours that do not fit traditional gender roles. Survey question contains “one-parent families”. |

Table A Notes
Appendix C. Reference list
Government of Canada
Government of Canada. “Data tables: 2016 Census”, Canada, Statistics Canada, 2016, (accessed 6 August 2019).
Government of Canada. “Definitions of Sex and Gender”, Canada, Canadian Institutes of Health Research, 2015, (accessed 6 August 2019).
Government of Canada. “GBA+ Research Guide”, Canada, Status of Women Canada, 2017, (accessed 6 August 2019).
Government of Canada. “Gender-Based Analysis”, Canada, Treasury Board of Canada Secretariat, 2016, (accessed 6 August 2019).
Government of Canada. “Gender-Based Analysis Plus: Demystifying GBA+: Job Aid”, Canada, Status of Women Canada, 2017, (accessed 6 August 2019).
Government of Canada. “Gender-Based Analysis Plus: Government of Canada’s Approach”, Canada, Status of Women Canada, 2016, (accessed 6 August 2019).
Government of Canada. Scott, Nick and Siltanen, Janet. Gender and Intersectionality: A Quantitative Toolkit for Analyzing Complex Inequalities, Canada, Employment and Social Development Canada, 2012, (accessed 6 August 2019).
Government of Canada. Measuring Impact by Design: A Guide to Methods for Impact Measurement, Canada, Privy Council Office, 2019, (accessed 6 August 2019).
Government of Canada. “Take the GBA+ Course”, Canada, Status of Women Canada, 2017, (accessed 6 August 2019).
Government of Canada. “The Girl Child”, Canada, Statistics Canada, 2017, (accessed 6 August 2019).
European Institute for Gender Equality. “Gender statistics and indicators”, Lithuania, European Institute for Gender Equality, 2018, (accessed 6 August 2019).
European Monitoring Centre for Drugs and Drug Addiction. Annual report 2006 - Selected issues 2: A gender perspective on drug use and responding to drug problems, Belgium, European Monitoring Centre for Drugs and Drug Addiction, 2006, (accessed 6 August 2019).
Oxfam. “Quick Guide to Gender-Sensitive Indicators.” United Kingdom, Oxfam, 2014, (accessed 6 August 2019).
UN WOMEN. Challenges of Integrating Gender Equality in Evaluation, UN WOMEN and Malaysian Evaluation Society, 2014, (accessed 6 August 2019).
UN WOMEN and the Independent Evaluation Office. How to Manage Gender-Responsive Evaluation: Evaluation Handbook, New York, UN WOMEN, 2015, (accessed 6 August 2019).
Anne Stephens, Ellen D. Lewis and Shravanti Reddy. Inclusive Systemic Evaluation for Gender Equality, ISE4GEMs: Environments and Marginalized Voices: A New Approach for the SDG Era, New York, UN Women, 2018, (accessed 6 August 2019).
UNESCO – Office Bangkok and Regional Bureau for Education in Asia and the Pacific. Closing the Gender Gap in STEM: Drawing More Girls and Women into Science, Technology, Engineering and Mathematics, Thailand, UNESCO, 2016, (accessed 6 August 2019).
UNICEF Evaluation Office. How to Design and Manage Equity-Focused Evaluations, New York, UNICEF, 2011, (accessed 6 August 2019).
United Nations Evaluation Group. Integrating Human Rights and Gender Equality in Evaluation: Towards UNEG Guidance, New York, UNEG, 2011, (accessed 6 August 2019).
United Nations Evaluation Group. Integrating Human Rights and Gender Equality in Evaluations, New York, UNEG, 2014, (accessed 6 August 2019).
United Nations Evaluation Group. Integrating Gender Equality and Human Rights in Evaluation - UN SWAP Guidance, Analysis and Good Practices, New York, UNEG, 2014, (accessed 6 August 2019).
United Nations, Gender Statistics Manual, (accessed 6 August 2019).
World Health Organization, “WHO Gender Responsive Assessment Scale: Criteria for Assessing Programmes and Policies”, Gender mainstreaming for health managers: a practical approach, (accessed 6 August 2019).
Better Evaluation. “Feminist Evaluation”, Better Evaluation – Sharing information to improve evaluation, (accessed 6 August 2019).
Bustelo M., “Evaluation from a Gender+ perspective as a Key Element for (Re)gendering the Policymaking Process.” Journal of Women, Politics and Policy. Volume 38, Issue 1: Policymaking: Insights and Challenges from Gender and Other Equality Perspectives, 2017.
Cara Tannenbaum, Lorraine Greaves and Ian D. Graham. “Why Sex and Gender Matter in Implementation Research.” BMC Medical Research Methodology, vol.16, article number 145, 2016, (accessed 6 August 2019).
CBC. “Should Ottawa Adopt Sweden’s Gender-Balanced Snow-Clearing Policies?”, Canada, CBC, 2018, (accessed 6 August 2019).
Centre for Theory of Change. “What is Theory of Change?”, Center for Theory of Change – Setting standards for Theory of Change, (accessed 6 August 2019).
David Fetterman, Shakeh Kaftarian and Abraham Wandersman. Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Sage Publications, 1995.
Davis R.E., Couper M.P., Janz N.K., Caldwell C.H. and Resnicow K., “Interviewer effects in public health surveys,” Health Education Research, 25(1), 2010.
Denice Ward Hood and Denice A. Cassaro, “Feminist Evaluation and the Inclusion of Difference.” New Directions for Evaluation, 96, 27 to 40, 2002.
Denmark F., Russo N.F., Frieze I.H., and Sechzer J.A., “Guidelines for avoiding sexism in psychological research.” American Psychologist, 43(7), 582 to 585. 1988.
Donna M. Mertens, “Inclusive Evaluations: Implications of Transformative Theory for Evaluation.” American Journal of Evaluation, 20, 1 to 14, 1999
Donna M. Mertens, Joanne Farley, Anna-Marie Madison and Patti Singleton, “Diverse voices in evaluation practice: Feminists, minorities, and persons with disabilities.” American Journal of Evaluation, 15, 123 to 129, 1994.
Essays. “The Impact of Gender in Research.” United Kingdom, Essays, November 2018, (accessed 6 August 2019).
Gloria Origgi and Anke Lipinsky, “Unravelling Implicit Biases: Research Evidence.” Workshop on Implicit Biases, European Commission, Belgium.
Jessica Fehringer, Brittany Iskarpatyoti and Mahua Mandal. “Want to integrate gender in your evaluation but don’t know where to start?”. United States, MEASURE Evaluation, 2017, (accessed 6 August 2019).
Kathryn A. Sielbeck-Bowen, Sharon Brisolara, Denise Seigart, Camille Tischler and E. Whitmore. “Exploring Feminist Evaluation: The Ground From Which We Rise.” New Directions for Evaluation, 96, 3 to 8, 2002.
First Nations Information Governance Centre. “The First Nations Principles of OCAP®”, Canada, First Nations Information Governance Centre, 2018, (accessed 6 August 2019).
Leonie Huddy, Joshua Billig, John Bracciodieta, Lois Hoeffler, Patrick Moynihan and Patricia, “The Effect of Interviewer Gender on the Survey Response.” Political Behavior, 19(3), 1997.
Michael Quinn Patton. “Utilization-focused evaluation,” Sage Publications, 4th Edition, 2008.
National Collaborating Centre for Aboriginal Health. “Indigenous Approaches to Program Evaluation”, National Collaborating Centre for Aboriginal Health, 2013, (accessed 6 August 2019).
Olena Hankivsky. “Intersectionality 101”, The Institute for Intersectionality Research & Policy, April 2014, (accessed 6 August 2019).
Palinkas LA., Horwitz SM., Green CA., Wisdom JP., Duan N. and Hoagwood K., “Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.” Administration and Policy in Mental Health and Mental Health Services Research, 42(5): 533 to 544, 2015.
Sharon Brisolara, Denise Seigart and Saumitra Sengupta, “Feminist Evaluation and Research: Theory and Practice”, The Guilford Press, New York, 2014.
Pauly, B., MacDonald, M., Hancock, T., O'Briain, W., Martin, W., Allan, D., Riishede, J., Dang, P., Shahram, S., Strosher, H., & Bersenev, S. on behalf of the ELPH Research Team (2016). Health Equity Tools 2.0. Victoria, BC: University of Victoria, (accessed 6 August 2019).
Statistical Society of Canada. ”Handling Item Nonresponse in Surveys”, Canada, Statistical Society of Canada, 2008, (accessed 6 August 2019).
Marion Werner, Leah F. Vosko, Angie Deveau, Giordana Pimentel and Deatra Walsh. “Conceptual Guide to the Unpaid Work Module.” Gender and Work Database, Canada, (accessed 6 August 2019).