- Delivering on the mandate letter commitment of the Minister of Canadian Heritage, the Department (PCH) is developing a legislative and regulatory framework to address online harms. The goal is to promote a safer and more inclusive online environment so that all Canadians can express themselves without being subject to hateful or threatening attacks.
B. Background and Current Status
- The Department is leading a coordinated approach with other Government departments to confront harmful content online. The Department’s leadership on online harms reflects key programming and resource allocations under the Digital Citizen Initiative, the Minister of Canadian Heritage’s mandate letter, and commitments in the 2020 Speech from the Throne.
- In January 2019, the Minister of Democratic Institutions announced a series of measures to safeguard the 2019 federal election and protect Canada’s democracy. The Digital Citizen Initiative (DCI) was created as part of the “Enhancing citizen preparedness” pillar of this plan. It included a $7 million investment through the Department of Canadian Heritage in citizen-focused activities to strengthen citizens’ critical thinking about online disinformation, their resilience against online harms, and their ability to get involved in democratic processes.
- In the 2019 Mandate Letter to the Minister of Canadian Heritage, the Government committed to introducing new regulations for social media platforms, starting with a requirement that all platforms remove illegal content, including hate speech, incitement to violence, child sexual exploitation, and terrorist propaganda, within 24 hours.
- The Minister’s mandate letter also includes horizontal initiatives relating to online platform governance, such as supporting the Minister of Innovation, Science and Industry in creating new regulations to better protect personal data, as well as creating a Data Commissioner.
- Finally, in the Speech from the Throne delivered on September 23, 2020, the Government pledged to address systemic racism, and committed to do so in a way informed by the lived experiences of racialized communities and Indigenous Peoples. One of the ways identified is taking action on online hate.
- Social media platforms such as Facebook and Twitter are increasingly central to participation in democratic, cultural and public life. An overwhelming majority of adults in Canada (94 per cent) have an account on at least one social media platform, making Canada one of the most connected countries in the world.
- However, social media platforms can also be used to threaten, intimidate, and harass people, or to promote racist, anti-Semitic, Islamophobic, misogynist, and homophobic views that target communities, put people’s safety at risk, and undermine Canada’s social cohesion and democracy.
- The COVID-19 pandemic has exacerbated this problem. There has been a documented increase in COVID-19 related online harms, including racism, hate crimes and verbal and physical harassment, suggesting action should be expedited rather than delayed.
- Several recent news articles have denounced the scourge of child sexual exploitation materials and called for government intervention in this regard. This situation led PornHub, the world’s most popular pornography site, which has offices in Montreal, to pledge to adopt tougher practices to protect its platform from illegal content.
- The following motion, carried over from the last session, was also introduced at the Standing Committee on Canadian Heritage: to “undertake a study of the creation of and implementation of new measures for online media platforms’ internet service providers requiring them to monitor, address and remove content that constitutes hate speech and remove any other content which is illegal in Canada or prohibited by the Criminal Code.”
C. Strategic Considerations
- Public attention to online harms has grown in response to events from around the world that undermine public safety for marginalized communities while also threatening national security. Events such as those at the U.S. Capitol in January 2021, the terrorist attack on the Centre Culturel Islamique de Québec, recent vandalism of Muslim, Jewish and Sikh places of worship, and the growth of white supremacist coordination worldwide are driving calls for action to address the serious impact caused by online harms.
- Public polling has shown that Canadians support Government intervention in this area. A recent poll commissioned for the Canadian Race Relations Foundation found that a majority of Canadians support more government action to combat hate and racism online. The poll also found that 80 per cent of Canadians support a requirement that social media companies remove hateful content within 24 hours.
- PCH is developing a policy approach to deliver on the Minister’s mandate letter commitment by drawing on similar policy developments in international jurisdictions. A number of governments in other jurisdictions, such as Germany, Australia, France, the European Union, the United Kingdom and New Zealand, have proposed or enacted legislation that imposes obligations on online platforms to restrict certain forms of content and activity. The Minister of Canadian Heritage, along with Departmental officials at all levels, has engaged with like-minded countries to identify best practices while developing a made-in-Canada approach.
- The speed, scale and global reach of harmful speech on social media platforms are significant. Today, there is an imbalance between the limited oversight of online platforms and the significant threat harmful speech poses to society. There are no broad regulatory requirements in Canada for social media platforms to identify, manage and reduce harmful content on their services. The sole exception is court-ordered takedowns enabled by the Criminal Code, which leaves social media platforms with little incentive to be proactive.
- The status quo is not sufficient to confront harmful content on social media platforms. Leaving content moderation or self-regulation to the sole discretion of platforms fails to stem the spread of these harms. In addition, law enforcement and security agencies face a number of challenges in addressing online harms, given limitations in resources and capacity to confront the breadth and pace of harms online.