What We Heard report: Indigenous Online Safety

Prepared for:
Department of Canadian Heritage

Prepared by:
Archipel Research & Consulting Inc.

The following individuals from Archipel Research & Consulting Inc. contributed to the writing of this report:

Sophia Bain – Associate Researcher
Graham Paradis – Métis – Associate Researcher and Facilitator
Catherine Stockall – Associate Researcher

We are also grateful for the guidance of Elder Eddie Gardner (Sqwá First Nation - Councillor, Lands and Resources) throughout this research.

Content Warning

The following report contains material that may be harmful or triggering to some readers, including mentions of sexual assault, self-harm, child abuse, and racism.

Introduction

The following brief report was developed by Archipel Research and Consulting Inc. in collaboration with the Department of Canadian Heritage (PCH) to summarize what was heard in a consultation process with 25 Indigenous people from various parts of what is today called Canada in relation to the harms they face online. This consultation process was undertaken to provide advice and input to PCH to develop legislation that incorporates a risk-based approach to online safety anchored in a duty to act responsibly.

This report offers a summary of a focus group as well as one-on-one interviews with Indigenous people who either work with victims and advocacy groups for Indigenous people or have themselves experienced online harm. It begins with an overview of the methodology, recruitment process, analysis protocol, and context of the project. It then provides a summary of the concerns raised by participants relating specifically to the need for an Indigenous-centric approach, the most harmful platforms, the priority to protect the most vulnerable, identifying and mitigating potential harms, the online safety regulator, peace bonds, and hate crimes and hate speech. Each of these topics is explored in the key findings section below. Finally, this report offers a brief recommendations section that provides guidance on how PCH should incorporate feedback from the consultation process.

Methodology

The approach to conducting the outreach, focus group, interviews, and subsequent analysis of feedback prioritized Indigenous research methodologies. The guiding principle was the prioritization of Etuaptmumk. Developed by Mi’kmaq Elders Murdena and Albert Marshall, Etuaptmumk is a methodology and framework also known as Two-Eyed Seeing. Two-Eyed Seeing is explained as learning to see from the strengths of two eyes, as one, where one eye represents Eurocentric ways of knowing and the other eye represents Indigenous ways of knowing. This approach involves starting with Indigenous ways of learning and knowing and combining them with Eurocentric and academic ways of knowing, using both for the benefit of all (Bartlett et al. 2012). From their extensive work with the integration of Two-Eyed Seeing, Elders Murdena and Albert Marshall developed eight lessons:

  1. Acknowledge that we need each other and must engage in a co-learning journey
  2. Be guided by Two-Eyed Seeing
  3. View “science” in an inclusive way
  4. Do things (rather than “just talk”) in a creative, grow forward way
  5. Become able to put our values and actions and knowledges in front of us, like an object, for examination and discussion
  6. Use visuals
  7. Weave back and forth between our worldviews
  8. Develop an advisory council of willing, knowledgeable stakeholders, drawing upon individuals both from within the educational institution(s) and within Aboriginal communities (Bartlett et al., 2012).

Two-Eyed Seeing is not the merging of different perspectives, nor is it adding small, selective elements of Indigenous knowledge into Western institutions. Instead, Two-Eyed Seeing is a “weaving back and forth between knowledges in which each strand is necessary to the process” (Iwama et al. 2009, 5). For the purpose of this project, the use of Indigenous approaches to research was fundamental to ensure the culturally specific and decolonized understanding of the relationship between Indigenous communities and online safety. Further, Etuaptmumk allows for an understanding of the nuances of this topic that are tailored to the unique histories and realities of Indigenous communities.

The virtual focus group was run in the style of a “kitchen table talk” meeting. To provide context for participants, the focus group began with a presentation and question period from the Minister of Canadian Heritage, the Honourable Pablo Rodriguez, and representatives from PCH. The focus group was conducted by an Indigenous facilitator, and most of the notetakers, assistants, and report writers were also BIPOC (Black, Indigenous, and People of Colour). This approach was chosen to encourage participants to join in a relaxed environment and to facilitate a knowledge and energy exchange. Researchers also ensured that during each step of the project a roundtable approach was used, ensuring that diverse perspectives and holistic approaches were incorporated.

Further, focus group and interview data was collected through the conversational method, “a method of gathering knowledge based on oral storytelling tradition congruent with an Indigenous paradigm. It involves a dialogic participation that holds a deep purpose of sharing story as a means to assist others” (Kovach, “Conversational Method in Indigenous Research,” 2010, 40). This design was chosen because it prioritized Indigenous research methodologies, which highlight the importance of dialogic, reciprocal, and storied approaches to research. Margaret Kovach’s (2021) insights on Indigenous methodologies were vital to this project because they encouraged researchers and participants to develop and co-create knowledge through collaboration and dialogue. This method is part of a larger Indigenous research paradigm that centres around reciprocity, responsibility to community, and activism. Researchers were also committed to undertaking this research using a lens of anti-racism and anti-discrimination.

Recruitment

This research included one virtual focus group conducted in November 2022 and a series of one-on-one interviews, which took place from November 2022 to January 2023. A total of 25 participants took part in these engagement activities. The participants represented diverse backgrounds and experiences and included First Nations, Inuit, and Métis people from across Canada. Participants either worked with victims and advocacy groups for Indigenous people or had themselves experienced online harm.

Archipel researchers developed a list of potential participants using their extensive network of Indigenous community members and organizations from across Canada. Outreach was also undertaken through social media posts on Facebook and Instagram.

Demographics

Identity               Focus Group   Interviews   Overall
First Nations          20%           80%          56%
Métis                  30%           0%           12%
Inuit                  40%           20%          28%
Unknown (Footnote 1)   10%           0%           4%

Note: Focus group and interview percentages are of the 10 focus group participants and the 15 interview participants respectively, and the overall column combines both (for example, First Nations overall: 2 + 12 = 14 of 25 participants, or 56%).

Analysis Protocol

Given the time constraints in the focus group, not all questions were addressed. As such, an additional 15 interviews were conducted to uncover more detailed considerations and specifics about the proposed legislation. Although not everything shared in the focus group was in the direct purview of the project guidelines and planned questions, those contributions are still important to include in this report as complementary to the more specific details revealed in the interviews.

The analysis and preparation of this report are based on Indigenous-specific research methodologies rooted in Indigenous ways of knowing (epistemologies) (Footnote 2). In collaboration with PCH, Archipel identified themes used to uncover the concerns important to Indigenous people about online safety. Both explicit and implicit dimensions of the participants’ personal narratives and experiences were captured. The findings were extracted from the data collected, compiled, and analyzed, and then synthesized into this report.

Consultation Fatigue

An important consideration for the findings of this research is the idea of consultation fatigue. Many participants, especially those in the focus group, expressed frustration with being repeatedly asked to participate in consultation processes with no guarantee of any meaningful change for them. Research fatigue “occurs when often marginalized, minority and/or Indigenous groups are repeatedly approached, surveyed, questioned and incorporated into research projects to share part of their understanding of a topic” (Kater 2022). Many of the participants had already participated in numerous engagement opportunities, sharing their experiences countless times. As participants stated:

[Racism] is happening on an everyday basis. We see it every day. What forms of racism? Systemic, all racism. It’s the people who are on the streets. The [2SLGBTQQIA+ community], the non-binary. It’s not like the government doesn’t know the Indigenous issues. They’re asking us what they want to do about it. How long is the fine (Footnote 3) going to take? That person has already been hurt. That’s not good enough. Are we going to always be in a fight for human rights? Indigenous peoples, we live by our holistic traditions. You got to start from the healing. How much more education do we have to give? (Focus Group Participant)

There is so much nice talk especially in the government, but they don’t do anything. (Interview Participant)

Repeatedly sharing their experiences without seeing any tangible improvements can be traumatic, and many Indigenous people are understandably skeptical about whether the government will take their experiences into account to create legislation that accurately reflects what they want to see. Issues of consultation fatigue may affect data collection because many Indigenous people may choose not to participate in consultation processes when there is no guarantee that their concerns will be taken seriously. The failure to act with sensitivity and urgency on the feedback of Indigenous people has led to deep distrust of governments in Indigenous communities. Further, some Indigenous people dealing with other, more immediate challenges, such as housing or food insecurity, may struggle to fully participate in discussions concerning policy or legislation. This is not necessarily indicative of a lack of desire to partake in such discussions, only that more immediate needs hinder their ability to participate.

Participants were also concerned that this engagement process was not being undertaken to genuinely listen to the experiences of Indigenous people:

I’d like to talk about tokenism. When we’re asked to be in situations on this by non-Indigenous folks. They tend to include one or two of us, just to say that we got their voice. That is dangerous. Tokenism is a real threat to our people. They are stealing our ideas and our identities. (Focus Group Participant)

These comments underscore the importance of sincerely reflecting on the experiences shared during the focus group and interviews and adjusting the legislation accordingly.

Context Setting

Approximately 94% of Canadian adults have at least one social media account. Social media allows Canadians to connect with one another in Canada and around the world and participate in their communities. Social media helps activists and civil society organizations to organize and amplify the voices of underrepresented and equity-deserving communities. Social media sites became increasingly important for connection and communication in light of the COVID-19 pandemic.

The use of social media to connect and share stories is especially important for Indigenous communities across Canada (Bascaramurty 2020). Indigenous people across Canada deserve to be safe online. Social media has proven to be an effective tool for Indigenous peoples to connect with one another, participate in their communities, and engage in activism. However, many have grown increasingly concerned over the prevalence of harmful content online, a trend that has disproportionately affected Indigenous peoples.

Indigenous people are particularly vulnerable to harmful content online, including hate speech, child sexual exploitation content, and intimate images shared without consent, to name a few. A growing number of Indigenous peoples are raising concerns about the harms they face online (Arce 2022, APTN 2020). The Government of Canada recognizes that the perspectives of Indigenous people who have experienced online hate, or who work with those who have, are vital as the Government begins to develop policy to ensure a more transparent and accountable online environment.

Archipel held a virtual focus group and one-on-one interviews with First Nations, Métis, and Inuit from November 2022 to January 2023 to discuss and receive feedback on the core elements of a risk-based legislative and regulatory framework for online safety. The focus group and one-on-one interviews were part of the Government’s wider engagement effort on online safety, set out below.

The Government of Canada launched a national online consultation from July to September 2021 on what a legislative and regulatory framework for online harms could look like. Written feedback from this consultation process, including feedback from a few Indigenous victim and advocacy groups, indicated that there was support from most respondents for a national legislative and regulatory framework to confront harmful content online. There were, however, some notable concerns raised by participants in the consultation process, namely those relating to freedom of expression, privacy rights, the potential impact on certain marginalized groups, and compliance with the Canadian Charter of Rights and Freedoms. A summary report of this process can be found on Canadian Heritage’s webpage.

The Government has previously taken steps to make the online environment safer. For example, the Minister of Justice and Attorney General of Canada introduced Bill C-36 in June 2021 to amend the Criminal Code and the Canadian Human Rights Act (CHRA) to better denounce hate propaganda and hate crime and to make related amendments to the Youth Criminal Justice Act. The bill did not pass before Parliament was dissolved ahead of the 2021 election. Please consult the description of Bill C-36 for further information.

The Government analyzed the feedback from the initial consultation process and found a desire for a regime focused on the systems, tools, and approaches that online services could put in place: a risk-based approach, rather than the take-down regime centred on the removal of specific categories of harmful content that was originally proposed in the 2021 consultation noted above.

In response to what was heard, the Government assembled an expert advisory group composed of 12 experts from diverse backgrounds and experiences across Canada. The expert advisory group was mandated to provide advice on how best to design the legislative and regulatory framework to address harmful content online while incorporating the feedback received during the national consultation in 2021. The advisory group participated in 10 workshops and met with subject matter experts from the United Kingdom, European Union, and Australia to hear how similar models are operating in other jurisdictions. Please consult the summary of the expert advisory group’s sessions for more information.

The findings from the expert advisory group can be divided into eight key themes; a brief overview of these themes can be found in Appendix C of this document.

In addition to these measures, the Public Policy Forum, in collaboration with the Department of Canadian Heritage, held the third portion of its Citizens’ Assembly on Democratic Expression from June 15 to 19, 2022, in Ottawa, this time focusing on online safety. The Citizens’ Assembly was intended to gain insights on research and policy development on online harm and disinformation from a group of citizens representative of Canada’s demographics and regions. Finally, a total of 19 roundtables were hosted across Canada: 12 in-person and 1 hybrid, reaching every region of the country, and 7 virtual roundtables organized around themes, equity-seeking groups, and industry, to hear perspectives on online harm from those who would be most affected by online safety legislation.

Upon registration for the focus group, participants in our focus group and interviews were provided with an overview of this information, similar to the participant document used during the Minister’s roundtables, as well as the discussion questions, to provide context for the topic and to promote a more fulsome discussion. A copy of the information packet provided to participants can be found in Appendix C of this document.

Key Findings

The issues and input raised during the focus group and interviews centred around a number of key findings:

  1. the need for an Indigenous-centric approach;
  2. the most harmful platforms;
  3. the priority to protect the most vulnerable;
  4. identifying and mitigating potential harms;
  5. the online safety regulator;
  6. peace bonds; and
  7. hate crimes and hate speech.

These, along with their subthemes, are explored below.

Indigenous-Centric Approach

Participants were clear throughout the data gathering process that Indigenous communities have a unique relationship to the internet and, therefore, the approach to addressing online safety requires an Indigenous-specific lens. Many felt that solutions that may work for the general Canadian population would not work for Indigenous communities: “Making rules that blanket all of Canada might be at risk of harming our individual cultures” (Interview Participant). As one interview participant explained: “there needs to be an Indigenous and racialized lens,” meaning an approach that centres how Indigenous people see themselves and how they interact with the world around them. Therefore, this entire report should be understood through the lens of a specifically Indigenous context, acknowledging the issues that face Indigenous communities, including intergenerational trauma.

Most Harmful Platforms

Participants expressed differing sentiments in terms of which online platforms were the most harmful. Facebook was most frequently cited as a harmful platform because participants felt that the algorithms that deliver content to users foster an echo chamber environment and racist sentiments. Participants also had concerns that Facebook encouraged trafficking and grooming.

Facebook is how they get trafficked. It’s how they send them to johns (Footnote 4) and how they [trafficking victims] meet them. The most racist and discriminatory things said on there. (Focus Group Participant)

Facebook is the most harmful. Anyone can say whatever they want, and it reverberates. Facebook is the worst. (Focus Group Participant)

Facebook was also frequently cited by participants as a platform that harboured a significant amount of racist sentiment, especially in comment sections.

As new platforms gain popularity, youth are using older platforms like Facebook less and less. One participant illustrated this when they noted that teenagers and children tend to favour newer apps like Snapchat and TikTok, and expressed concerns about the dangers young people face and the lack of supervision on these platforms:

Snapchat and TikTok are used by more teenagers. Facebook is more for the age 30 and over. My teenager is more on TikTok and Snapchat. They use every single application for love. It is harmful. You don’t know if who you are talking to is your age or not. (Focus Group Participant)

Another potential harm noted by participants from platforms such as TikTok and Facebook was the spread of misinformation by accounts with large followings, also known as “influencer accounts.” Concerns raised by participants included misinformation related to the COVID-19 pandemic and links to websites that share racist and anti-Indigenous sentiments.

YouTube was also identified as a potentially harmful platform for children, as parents may not be aware of harmful content hidden in videos marketed towards children:

YouTube is dangerous for kids because they hide stuff in videos that as a parent, if you’re not paying attention, you wouldn’t catch because it’s saying it’s geared towards a child. Most parents don’t know about it. Parents think their kids are watching a video on unboxing toys. You think they’re watching something innocent. (Focus Group Participant)

Another platform type discussed was message boards or private forums on specific websites. Websites that publish far-right and white supremacist content often have anonymous message boards that can serve as echo chambers where radicalization can occur and, in extreme circumstances, pose a risk of physical harm. One participant identified the danger in not knowing how many of these platforms exist across the internet or how to keep track of them:

I think online specific sites with private forums are the most dangerous, but that’s terrifying because I don’t think there’s any way you can keep on top of how many are out there and like, who knows them, but for public access? (Interview Participant)

Concerns were shared about how common anti-Indigenous and racist sentiment is across every online platform. One participant noted how Indigenous people are frequently targeted for hate speech across multiple platforms:

There are so many targeted posts towards Indigenous peoples, so much hatred, YouTube, Facebook, Instagram. All these platforms have so many violent posts targeted towards Indigenous people. (Interview Participant)

Overall, participants were concerned with all social media platforms. While participants were more concerned about the racism on Facebook, they felt that children and youth were more susceptible to harm on platforms like TikTok, Snapchat, and YouTube. One participant summarized, “I really think all the platforms are harmful. They’re all connected. When we look at vulnerable people, the rates of mental illness are high amongst our community” (Interview Participant).

Priority to Protect the Most Vulnerable

Concerns about the safety of children, youth, and other vulnerable community members were the most commonly cited concerns amongst participants. Many participants shared disturbing stories of their own individual experiences, or those of their children or other community members, that revealed widespread issues of predatory behaviour online.

The central priority discussed regarding online safety was protecting those who are most vulnerable from predatory intentions and actions. Preventing and removing child sexual exploitation content (CSEC) and any material produced or shared without consent was clearly identified by participants as a priority, as was addressing all grooming and predatory behaviour. Participants did not distinguish between platforms in this portion of the conversation, noting that platforms are frequently connected and share similarities in how people become exploited.

Yes, what I really see is that there’s human trafficking out there. But when we start looking at where it’s coming from, we have the online [platforms] where predators go and they start grooming young people, and they start grooming the most vulnerable people. (Focus Group Participant)

Specific predatory activities discussed included catfishing and befriending young or otherwise vulnerable people under false pretenses. Concerns were also raised about how to enforce legislation on those who commit these offences, specifically CSEC and the non-consensual sharing of intimate images, but do not reside in Canada.

Fraud Against the Elderly

In addition to crimes against children, participants were also concerned about elderly community members falling victim to online fraud. Specifically, participants were concerned about individuals or businesses that target elderly Indigenous people who may not have a full understanding of digital fraud and how to prevent it:

I often see Elders that are lonely being catfished by scammers that don’t have ethical practices often offering something to Elders and taking their money. (Interview Participant)

I’m thinking about like Elders here who barely know how to use the phone and get scammed. It makes me sad. (Interview Participant)

I see so many people also who maybe got some funky message and they clicked on it and now all of a sudden, they’re reposting about Bitcoin, and they’re locked out of their account. People get scammed by posts saying they’re the government or the CRA. (Interview Participant)

Participants shared stories of Elders and Knowledge Keepers in their communities who had fallen victim to fraud when they thought they were buying a car online or had won a free trip to Hawaii. Participants shared further stories of Elders who had been defrauded by having their social insurance numbers or credit card information stolen, as they were not aware of the importance of protecting such information online.

Concerns about Grooming and Trafficking

One of the most common issues that concerned participants was the prevalence of child trafficking and grooming, which threaten the safety of Indigenous people and communities. Participants repeatedly raised concerns about the role that social media plays in the grooming and trafficking of children and vulnerable members of Indigenous communities. As one participant put it, “The violence and racism and sex trafficking are so broad,” referring to how far-reaching and common these issues are. Participants shared that grooming and trafficking behaviours were being undertaken on sites like Facebook, Snapchat, and TikTok, as well as on the “dark web” (Footnote 5).

Many participants felt that Indigenous children and youth were particularly targeted by traffickers because of their vulnerability:

For the online threats, we have to start looking at how they can be identified. How can we make sure that as a society our most vulnerable, the teenagers, the Aboriginal women and girls, the migrant workers, are safe. […] We still have the distrust with the not Aboriginal community. The threats that I see that are happening out there are the people who are recruiting our most vulnerable. How can we identify those rings, it’s a huge business out there. How are we going to educate our young and the most vulnerable? (Focus Group Participant)

Children and youth are especially vulnerable to online harms, including CSEC, grooming, and other predatory behaviours. Given this, many participants felt that there should be stronger mitigation measures against this type of content to protect the most vulnerable. Participants were clear that the Government has a role to play in educating youth to ensure they understand the risks and signs of being trafficked. Simply put, they wanted to see more emphasis placed on prevention, as opposed to just intervention.

Tougher Punishments for Crimes Against Children

Many participants also expressed that there should be tougher punitive measures for those who exploit children online. One participant shared a story about finding CSEC on their partner’s computer. When they approached the police with this information, the police declined to investigate further. Other participants shared comparable stories, noting that perpetrators of CSEC often target youth in low-income and racialized communities:

I went to high school with a higher middle-class boy who was very endeared for his youth involvement, and he only ever did that in low-income neighbourhoods. And later got caught for child pornography. He was only in jail for a couple months. […] The ones who are mainly targeted are Indigenous communities. We need better prevention. I understand that this bill is intervention, but we need better preventions. (Focus Group Participant)

Others felt that the Government needed to play a larger role in preventing and addressing crimes against children:

Something should be done in regard to tougher sentences for child pornography. We should have more curriculum in schools for internet safety. Something should be done on a more preventative level because this is an ongoing situation. It is very important that the curriculum be changed to have that in our schools. Some people are just given a slap on the wrist. (Focus Group Participant)

Although these actions are likely not within the immediate purview of this legislation, they are important to note because so many participants felt strongly about them. Participants felt that there is room for improvement in addressing crimes against children outside the scope of this legislation. Some of these measures, which will be explored in more detail in the recommendations section of this report, include education for children and youth on online safety, removing fake profiles, and having a systems navigator to help in the removal of non-consensual sexual images. They also recommended more punitive measures for those who exploit children.

Identifying and Mitigating Potential Risks

Interview participants overwhelmingly agreed that internet platforms should be treated like any other product when it comes to identifying and mitigating potential risks. Participants made comparisons to other products, like cars, which are required to undergo extensive safety testing before being sold so that buyers can be reasonably confident that the product they are purchasing is safe to operate. Participants wanted to see similar safety standards applied to internet platforms. Many participants saw this as simple common sense. Some participants specifically suggested that this may look like requiring internet platforms to adhere to a set of expectations specifically designed to protect Indigenous users.

Social media platforms are run like an institution, code of conduct, but they don’t have the same repercussions as institutions do. Canada could make better laws to mitigate harms. (Interview Participant)

Despite widespread support for actions to better regulate the safety of internet platforms, some participants remained concerned about how these regulations would apply to international platforms. One focus group participant shared a story of unsuccessfully trying to get non-consensual intimate images, or “revenge porn,” removed from an international website, calling the process “unrealistic and discouraging.” They suggested that the federal government could help Indigenous people in similar positions remove non-consensual intimate images from an online platform, especially an international platform. They further explained that it would be helpful to have a systems navigator to advocate for them and help guide them through the process of having non-consensual intimate images removed from international platforms.

Warnings When Signing Up

Participants raised concerns about the lack of sufficient warnings, during the sign-up and verification process for social media sites, that users may encounter harmful materials on these sites. They wanted social media sites to be required to be more upfront about the potential dangers of using their sites, similar to a “buyer beware” warning. For instance, sites could warn that harmful content, including explicit images, monetary fraud, or human trafficking activity, may be present, and could include guidelines for how to stay vigilant about such behaviours:

It might not be as simple as just accepting the terms and conditions. Maybe users should have to watch a short video on the risks or potential harms of social media and educate themselves on online safety before they sign up for a social media site. (Focus Group Participant)

The age of users on social media sites was also cited as a concern for many participants, who explained that there were many children on these sites despite requirements that users be above a certain age. They acknowledged that while account registration requires users to confirm that they are a certain age, there is no age verification process to ensure this during sign-up.

The platforms aren’t verifying if [users are] actually old enough. How are you supposed to regulate if they’re not doing their part? When you go into a porn site, it asks you if you’re 18. Even though it still only has a stupid little pop up, and you just have to click yes, even if you’re a child. No one is actually checking. (Focus Group Participant)

This portion of the discussion also highlighted that there are existing community guidelines on social media sites and that these could be revised to reflect the unique needs of Indigenous Peoples. This could include guidelines that are available in Indigenous languages, that are clearly written, and that acknowledge and respect the realities Indigenous peoples face regarding intergenerational trauma, increased rates of suicide (Stober 2019; Statistics Canada 2019), and higher risk of sex trafficking (Roudometkina and Wakeford, 2018).

Education and Prevention

Participants in both the focus group and the interviews felt that a core element of identifying and mitigating potential risks for users online was education and prevention. Many felt that the focus of this consultation process and potential legislation was too centred on intervention when it should instead be focused on the prevention of online harm. As one interview participant succinctly explained, “education is the key.” This discussion centred mainly on issues of fraud, human trafficking, and sexualized violence. Participants wanted to see the Government focus more on preventative measures like education to help Indigenous people identify harms and avoid being taken advantage of online. Suggestions for this included working with the leadership of Indigenous communities to provide education and information sessions for community members, especially youth and vulnerable seniors.

I’m not very big on ‘we’re going to shock these people’ but I think a lot of people do need to understand how easily targeted Indigenous people are. And they might think ‘I would never fall for that’ but so many of these victims have thought the same thing. (Interview Participant)

Although outside of the scope of this legislation, participants also wanted to see educational sessions conducted in schools to educate youth on how to safely use the internet, especially because the internet is so widely used.

More information is needed, maybe even in schools like one-hour presentations or like a half day workshop. More awareness about how easy it is to get human trafficked and targeted. I think a lot of people are just so blind to it because people my age group grew up with the internet and the new generation, they don’t know life without internet. So, it’s a matter of adjusting to this new generation. Before we were taught not to get in a car with strangers, but they can talk to strangers all day on the Internet. (Interview Participant)

Not just ‘hey, here’s some random graphic we’re going to put on Facebook.’ No, I think there needs to be actual in class sessions for kids in school, and engagement sessions with the community about just Internet safety in general. (Interview Participant)

Online Safety Regulator

There was also widespread agreement amongst participants that the online safety regulator should be given the power to remove specific pieces of content deemed harmful. Participants especially wanted to see fake profiles removed from social media sites, as these were frequently cited as a problem in issues such as fraud and harassment:

Fake profiles should be removed, people should have to verify who they are when making an account. (Interview Participant)

In addition to having fake profiles removed, participants wanted to see a more robust vetting process to ensure that any new profiles were not being created under false aliases.

However, some participants noted that some of the terms used in the information documents shared with participants during this project, namely ‘hate speech’ and ‘violence,’ needed to be more clearly defined to avoid potential issues of interpretation. They also noted that the terms that would be used in legislation, once developed, should be clearly defined and easy to understand:

I’d really like to see the terms defined more specifically. I know some terms [in the information package] were very broad terms. I think before we start really protecting people against it, we need to figure out what that is first. Maybe those categories need to be divided up into smaller categories, so that we’re able to define them specifically. I think that would be helpful. But appointing somebody to monitor such a general term is really dangerous. (Interview Participant)

This participant felt that not clearly defining terms like ‘hate speech’ and ‘violence’ could leave too much opportunity for misinterpretation, especially if the online regulator was not well-versed in Indigenous issues.

Finally, one specific issue where participants felt that an online safety regulator should be more proactive is around mentions of suicide:

If suicide is being discussed – the regulator should also have the responsibility of trying to get help for the person posting. (Interview Participant)

I know someone who committed suicide based on social media. (Interview Participant)

I know people that have committed suicide, or their mental health has completely been ruined because of what is happening on social media. (Interview Participant)

At least one participant linked issues of suicide to cyberbullying and harassment:

People [are] committing suicide because of cyberbullying. Inuit women are committing suicide because […] the cyberbullying has taken a toll on their mental health. (Focus Group Participant)

We lost a very loved singer because of online bullying, because of revenge porn. She committed suicide on Christmas Eve. She also suffered from mental health issues. So, on top of the mental health issues, she then faced online bullying, cyber-attacks, and people going into her account. […] This is very passionate for me because of the level, the amount of suicides that are in the North. (Focus Group Participant)

This issue is especially important because the rate of suicide is twice as high in Métis communities, three times as high in First Nations communities, and nine times as high in Inuit communities as among non-Indigenous people (Stober 2019; Statistics Canada 2019).

Concerns with Administrative Monetary Penalties

Participants focused part of their discussion in the focus group on potential administrative monetary penalties (AMPs) that could be issued to social media services in response to non-compliance with the obligations under the online safety act. The participants raised questions about who would be penalized financially for the harmful content found on social media services: the creators, the social media services, or both. This concern was less common during the interviews, although one interview participant did note that “monetary penalties are a good deterrent.” The idea of issuing fines to individual users who posted harmful content was discussed further. Some focus group participants expressed concerns about this idea because it would involve tracking users, could slow the speed at which fines are issued, would be too much for the regulator to take on, and could create privacy concerns for users of these social media services.

Participants also brought forward the idea of directing funds collected through AMPs paid by the social media services towards victims of harmful content or toward educational initiatives to mitigate future harms of the same nature.

My understanding was that the money is going to be used to pay for further enforcement of this policy. I get that federal funding has to come from somewhere. But if we’re talking about justice for our communities, I think we do need to discuss how some of this should go into services and programs that directly affect the communities affected. (Focus Group Participant)

Participants further clarified that if financial support were directed to victims who have experienced harmful content online, this must be done with sufficient anonymity to protect victims “from stigma associated with whatever they’ve been victimized by.” Participants also expressed concerns about the time it would take for administrative monetary penalties to be issued to social media services and, in certain cases, for the services to react to the harms on their platforms or to take down certain harmful content. They noted that by that point the harm has already occurred; for the impact of regulation to be felt, certain extremely harmful material must be taken down immediately and AMPs processed promptly.

Concerns about Background of Commissioner

The prospect of a regulating body or person enforcing the policies discussed in this forum was met with several strong concerns. Concerns were raised about the amount of power this body would have and the breadth of expertise it would hold, individually and collectively. Participants worried, for example, that content about traditional seal hunting or products made from sealskin could be flagged or removed, even though it is directly tied to culture and race.

On certain platforms we can’t even write the word seal. It becomes a racial thing and it’s an attack on who I am. It all depends on who is the regulator. Where are they coming from? Do they have enough knowledge? Would they come from diverse backgrounds? (Focus Group Participant)

Another prominent concern was whether a regulatory body would be vulnerable to representing only one perspective. A regulator that allowed only viewpoints condoned by the Canadian government, and that could limit speech offering critiques or alternatives, was viewed as too constraining and dangerous. This theme was also present during the interview portion of data collection, with one participant stating:

Who would be making these decisions? What are their views? (Interview Participant)

Who is going to decide what constitutes hate speech? My best friend calls people Indians, he’s Ojibway. Others can slander using the same words but who is going to discern? (Interview Participant)

Similarly, some participants raised concerns around freedom of speech and expression and the difficulties of balancing the freedom to be able to say or express what they would like with the rights of others, such as Indigenous peoples that may be harmed by the content posted:

People should have freedom of speech unless it advocates criminal activity or hate. (Interview Participant)

It’s tough because you want to be able to have freedom of speech but don’t want hateful content online at the same time. (Interview Participant)

Participants noted that unmitigated powers should not be given to an online safety regulator to compel regulated entities to remove content deemed harmful, as this could result in biases against Indigenous communities, where some content could be incorrectly labelled as harmful. Examples raised by participants included issues concerning protests and resource development. Participants were further concerned that if a new Government were elected that was more adversarial to Indigenous people, content related to Indigenous rights could be flagged as harmful, hateful, or even as terrorist behaviour. Despite these concerns, it is important to note that the proposed regulator would be independent and its decisions would not be influenced by the Government.

One participant noted the unequal power dynamic that having a single safety commissioner may create, even if that person was independent from Government influence. Participants were concerned that personal or unconscious biases could lead to unfair decision making.

It’s too much power for one person to have. There is the danger of bias from regulator. (Interview Participant)

Instead, some participants suggested there should be a committee of commissioners, made of diverse people, instead of just one. This could help to create nuance in their approach and to avoid bias against any particular group.

Peace Bonds

Participants were divided about whether the proposal in the former Bill C-36 for a new peace bond would help to prevent hate propaganda offences and hate crimes. In some cases, participants felt that a peace bond would be a useful tool to address harmful content, especially in instances of harassment:

If someone feels there is that much of a danger from someone making hateful posts, they should be able to have a peace bond placed. (Interview Participant)

I know peace bonds don’t always work but it’s one more step towards action and accountability. (Interview Participant)

Some participants also felt that having the option to obtain a peace bond would be a useful step in tracking harmful behaviour should further measures need to be taken:

It comes down to enforceability, if someone breaks a peace bond there may be grounds to enforce something heavier. (Interview Participant)

However, other participants were more doubtful that a peace bond process would help to address hate crimes. Participants felt this way for several reasons, notably that those who partake in hate speech online will find ways to engage in it regardless of a peace bond, and that they did not believe the Canadian justice system would take hate speech against Indigenous peoples seriously.

People still had these rallies for hate before the internet. What makes you think they won’t find a way around it? Or find a way to disguise it? I think they could try it and it might help but if these people are willing and really believe in like what they’re doing, they're going to find a way, regulated or not and that’s pretty scary. (Interview Participant)

When the Canadian justice system hasn’t historically treated Indigenous people fairly, it’s hard to have faith in the process. (Interview Participant)

In short, participants were open to the idea of a peace bond process to prevent hate propaganda and hate offences, but, for several reasons, some also felt that it alone was too small a step forward.

Hate Crimes and Hate Speech

While there was some support, several participants were skeptical that a hate-speech complaint process in the Canadian Human Rights Act would be useful as an additional and complementary measure for combatting hate speech online:

In my personal experience with the human rights tribunal, it doesn’t provide sufficient retribution to victims. (Interview Participant)

This quote points to the need to center victims of online harm in any approach to address online harm and to ensure that they are being provided with adequate and culturally relevant support.

Nonetheless, other participants did feel that a hate-speech complaint process in the Canadian Human Rights Act could be useful for combatting hate speech online:

It won’t stop everybody, but it will help curb some occurrences. Anything to help prevent reoccurrences is helpful. (Interview Participant)

This would be important and useful, but it is also important that this process is helpful with cultural supports for people that have been victims of online violence. We need to ensure that they aren’t re-traumatized and that the process can be a healing one after what they have gone through. (Interview Participant)

Again, participants were clear that any measures undertaken to introduce a hate-speech complaint process in the Canadian Human Rights Act needed to centre the needs of the victim in a way that was culturally relevant. One participant suggested that “a good way to try to address this is to consider the suggestions from the MMIWG report” (Interview Participant). Of particular relevance in the Calls for Justice from the Final Report of the National Inquiry into Missing and Murdered Indigenous Women and Girls is Call for Justice 5.6 ii.

This Call for Justice echoes a similar Call to Action (40) in the Final Report of the Truth and Reconciliation Commission of Canada, which calls upon “all levels of government, in collaboration with Aboriginal people, to create adequately funded and accessible Aboriginal specific victim programs and services with appropriate evaluation mechanisms” (TRC Calls to Action, 2015, 5). Considering this, there is a need to provide adequate support for victims through the process of reporting harms they may have experienced online.

Reporting to Law Enforcement

Participants shared mixed opinions concerning whether online services should be required to report content that they believe is evidence of a criminal offence to law enforcement agencies. Many felt that requiring online services to report would help to provide necessary evidence for further prosecution, especially in instances of trafficking, sexual violence, or exploitation of minors: “Of course. It should be evidence, 100%” (Interview Participant). Participants generally felt more strongly about mandatory reporting to law enforcement when it involved non-consensual sharing of intimate images or CSEC: “Yes, especially if it’s things like child-exploitation & non-consensual sharing of intimate photos” (Interview Participant).

Many further felt that making it mandatory to provide evidence to law enforcement would help to prevent repeat offences:

Especially if they’re predators online who have gotten in trouble before. I see so many cases of around here some weirdo trying to pick up some girl or whatever and they already aren’t supposed to be on Facebook, but like I said, they can just create a whole like, new account, and new name. I definitely think they should be more mindful of that because these dudes around here will keep trying to pick up young chicks and you wonder how you don’t see them technically on Facebook or anything. But if they’ve done it before, they should be reported or monitored, at least.

Conversely, some participants were concerned that sharing this information with law enforcement could lead to further criminalizing some Indigenous or otherwise racialized people.

[Mandatory reporting] opens up the opportunity for Indigenous and racialized people to be even more criminalized under Canadian law. (Interview Participant)

Most participants were clear that they wanted to see opportunities for people to learn from their mistakes, instead of just criminalization. They wanted to see these opportunities be in line with traditional Indigenous understandings of restorative justice.

What is considered criminal under Western law might not necessarily be criminal in the eyes of sovereign nations. (Interview Participant)

Participants also felt that many of the issues Indigenous people face are caused by a fundamental lack of knowledge and understanding of Indigenous issues among non-Indigenous people.

A lack of education on Indigenous issues perpetuates misunderstanding online. (Interview Participant)

It’s [peace bond] a great idea if you’re allowed to get educated on what you did wrong. With no opportunity to learn, it risks further radicalization to hate. (Interview Participant)

Many participants felt that the federal Government had a responsibility to create learning opportunities for those who engaged in racist behaviour online:

People should have the opportunity to become a better person and not be painted with a certain brush for life. (Interview Participant)

If it leads to education, reform, and restorative justice and not just criminalization. (Interview Participant)

Many participants were interested in a system whereby those who had taken part in hate online would have the opportunity to undergo education instead of immediately entering the criminal justice system. Ultimately, participants were clear that the Government should take a more preventive, rather than intervention-based, approach that prioritizes the lived experiences of Indigenous peoples and offers opportunities for growth and change for all Canadians.

Conclusion and Recommendations

This report offers an overview of the discussion at an online focus group and one-on-one interviews, which engaged a total of 25 Indigenous participants from across Canada between November 2022 and January 2023. The report explores the experiences of Indigenous people online, focusing specifically on issues surrounding the need for an Indigenous-centric approach, the most harmful platforms, the priority to protect the most vulnerable, identifying and mitigating potential harms, the online safety regulator, peace bonds, and hate crimes and hate speech.

Throughout the research process, participants made a number of recommendations that would help to identify, mitigate, and prevent the harms that Indigenous people face online. These are shared below.

  1. New Regulator or Commissioner: Participants were concerned about the breadth of the commissioner’s expertise and the undue hardship that could be caused to Indigenous people if the commissioner’s knowledge about Indigenous communities proves insufficient. They recommended that:
    1. The new regulator and their staff need to be educated on issues and lived experiences that affect Indigenous communities, including intergenerational trauma.
    2. The new regulator needs to remain at arm’s length from the Government, so as to prevent any influence or interference.
    3. There should be a panel of commissioners with diverse Indigenous representation, instead of one commissioner, to help mitigate potential issues of bias.
  2. Administrative Monetary Penalties and their Use: Many participants, especially those in the focus group, were concerned that if monetary penalties were collected from online platforms that refused to adhere to safety standards, the money would not be reinvested in the communities that had experienced harm. Participants recommended that communities should decide what to do with any monetary penalties collected, and that the money be invested directly in communities, with emphasis on healing programs for victims.
  3. Need for Plain Language Warnings and Terms and Conditions: Concerns were raised about the adequacy of warnings to social media users detailing the risks of encountering illegal, abusive, or explicit content when participating in online forums or social activities. Participants acknowledged that internet platforms require registrants to accept terms and conditions prior to registering but explained that the formal language and amount of text meant that most registrants do not read them. Participants therefore recommended that warnings be straightforward and in plain language, or take the form of a video that is mandatory to watch prior to registration.
  4. Mandatory Reporting to Law Enforcement: Many participants were concerned that some measures proposed to address online harm, especially mandatory reporting to law enforcement, could lead to the criminalization of Indigenous people, a demographic that is already overrepresented in the criminal justice system (Canada, Justice Research and Data 2019). Nonetheless, participants felt that requiring online services to report to law enforcement would help to provide necessary evidence for further prosecution, especially in instances of trafficking, sexual violence, or exploitation of minors. However, given the potential that mandatory reporting may cause further harm to Indigenous communities, it is recommended that restorative justice, focusing specifically on traditional Indigenous approaches to justice, be prioritized in all aspects of any online safety legislation relating to Indigenous peoples.
  5. Victim Support: Participants expressed a desire to see more done to support Indigenous victims of online harm. Looking to Call for Justice 5.6 ii in the Final Report of the National Inquiry into Missing and Murdered Indigenous Women and Girls and Call to Action 40 in the Final Report of the Truth and Reconciliation Commission of Canada for guidance, it is recommended that the Government work with Indigenous communities to provide holistic and culturally relevant support for victims of online harm.
  6. Education and Digital Media Literacy: Participants underlined the importance of education as a preventative tool when it comes to harms online, particularly as it related to Indigenous Elders. They recommended that the Government provide support and funding for Indigenous communities to undertake education sessions and campaigns to educate community members on online safety and focus especially on education campaigns in schools and for Elders. They also recommended that the Government work with Indigenous communities to undertake media campaigns, which would appear on television, radio, and online, to educate Indigenous people on online safety, ensuring that all materials are available in Indigenous languages.
  7. Understanding Indigenous Cultures: Many participants expressed that a lack of understanding of Indigenous cultures sometimes meant that content that was not harmful was flagged and removed from social media sites. This concern was frequently cited in relation to the removal of any post which mentioned the sale of products made from seal hunting, a sustainable practice done by the Inuit for thousands of years which is vital to their livelihoods (ICC n.d.). Participants therefore suggested that all major internet platforms should be required to have an Indigenous analyst or committee to avoid such misunderstandings.
  8. Systems Navigator for Victims: Some participants expressed the difficulties they, or those they knew, had experienced trying to have non-consensual intimate images (revenge porn) removed from international platforms. They recommended that the Government aid victims of revenge porn on international platforms by providing them with a systems navigator to advocate for them and help guide them through the process of having these images removed from international platforms.
  9. Clearly Defining Terms: Participants noted that some of the terms used in the accompanying documents for this project needed to be more clearly defined. They recommended that all terms related to online safety legislation and enforcement be clearly defined to avoid potential issues of misinterpretation.
  10. Proportionate Consequences for Engagement with CSEC: Participants underlined the increased vulnerability of many Indigenous youth. They recommended that there be tougher and more proportionate ramifications for engaging with CSEC or any exploitative material, and that legislation focus on prevention as well as punitive and corrective measures when crimes or infractions occur. This is not directly within the purview of this research project or the legislation at hand. Nonetheless, it was a major theme for participants, especially those in the focus group. It is, therefore, vital that it be included.

The feedback gathered during the roundtable and interviews provided valuable information and insights on what Indigenous people in the focus group and interviews are hoping to see included in future legislation. Canadian Heritage will be drawing upon the information collected during the roundtable and interviews in developing policy and legislation. Pursuant to his mandate letter, the Minister of Canadian Heritage continues to work to table legislation as soon as possible.

References

© His Majesty the King in Right of Canada, as represented by the Minister of Canadian Heritage, 2023
Catalogue No. XX0-0/0000
ISBN 0-000-00000-0
