What We Heard: The Government’s proposed approach to address harmful content online


Key Takeaways and Executive Summary

On July 29, 2021, the Government of Canada published a legislative and regulatory proposal to confront harmful content online for public consultation on its website. Interested parties were invited to submit written comments to the Government via email.

Feedback both recognized the proposal as a foundation upon which the Government could build and identified a number of areas of concern.

There was support from a majority of respondents for a legislative and regulatory framework, led by the federal government, to confront harmful content online.

Specifically, respondents were largely supportive of the following elements of the proposed regime:

However, respondents identified a number of overarching concerns, including those related to freedom of expression, privacy rights, the impact of the proposal on certain marginalized groups, and compliance with the Canadian Charter of Rights and Freedoms more generally.

These overarching concerns were connected to a number of specific elements of the proposal. Respondents specifically called for the Government to reframe and reconsider its approach to the following elements:

Though respondents recognized that this initiative is a priority, many voiced that key elements of the proposal need to be re-examined. Some parties explained that they would require more specificity in order to provide informed feedback and that a lack of definitional detail would lead to uncertainty and unpredictability for stakeholders.

Respondents signaled the need to proceed with caution. Many emphasized that the approach Canada adopts to addressing online harms would serve as a benchmark for other governments acting in the same space and would contribute significantly to international norm setting.

Background and Introduction

Harmful content online: a legislative and regulatory proposal

The proposal published for consultation on July 29, 2021, presented a detailed legislative and regulatory framework in order to solicit Canadians’ views.

The policy drivers for the Government of Canada are the pace at which harmful content is propagated online, the negative impacts it has on people in Canada and Canadian institutions, and the growing view among Canadians that more needs to be done to confront harmful content online. Allied countries such as the United Kingdom, Australia, France and Germany are also taking action.

The July 2021 proposal was published by the Ministers of Canadian Heritage, Justice, and Public Safety. It contemplated the creation of rules for how social media platforms and other online services must address harmful content. The proposal set out:

The Government asked for written submissions from Canadians based on two documents:

  1. A discussion guide that summarized and outlined the overall approach (Annex A) and
  2. A technical paper containing elements of a legislative proposal (Annex B).

The public consultation closed on September 25, 2021. It garnered 422 unique responses and 8,796 submissions from open campaigns. Of the 422 unique responses: 350 were from individuals; 39 were from civil society and other organizations; 19 were from industry; 13 were from academics; and 2 were from Government or Government-adjacent organizations.

The Government designed this consultation to allow stakeholders and industry to submit business information in confidence and to allow victims’ groups, equity-deserving communities or other parties to share their experience with harmful content online privately. As such, the submissions were not made public. While some parties decided to make their submissions available to the public on their own, the purpose of this report is to provide an overview of perspectives shared through the consultation, while still protecting the identities of individuals.

Purpose of the report

This report provides an overview of the feedback submitted through the consultation. While it conveys the views and perspectives submitted on the proposed legislative and regulatory framework, it should not be considered a verbatim recitation of all submissions to the consultation.

It is organized into three sections, based on the volume of comments received on each element. The structure of the report is not indicative of the seriousness or importance of the elements discussed. Rather, the first section presents submissions on elements of the framework that most respondents focused on and often discussed at length. The second section describes feedback received on elements commented on by many respondents, though not necessarily most. Finally, the third section covers elements that only a few respondents mentioned.

Public consultation highlights

Almost all respondents commented that Government regulation has a role to play in addressing online harms and in ensuring the safety of Canadians online. Beyond this, there was a blend of positive and negative responses, though there was a predominantly critical perspective from civil society, academia and industry on both the process of the consultation and the design and substance of the framework itself.

Respondents criticized the design and conduct of the consultation. Some commented that the framework was already fully designed, notwithstanding the Government’s request for feedback. Some stakeholders mentioned that the consultation process happened at an inopportune time during a Federal Election period and a pandemic, or that the consultation time was too short, making it difficult to put forward meaningful and comprehensive comments. Others highlighted that there had not been meaningful consultation and engagement on the proposal’s design and elements in the period prior to it being published.

Regarding the substance of the proposal, although multiple individuals and organizations welcomed the Government’s initiative, only a small number of submissions from those stakeholders were supportive, or mostly supportive, of the framework as a whole. Given the breadth of respondents and their unique perspectives, the comments received were wide ranging. However, a number of common perspectives and points emerged. These are organized by volume in the sections below. The first outlines the most prominent issues raised. The second shares elements that received moderate interest. The third includes more specific aspects of the proposal, raised by only a few respondents.

1: Prominent issues raised

Scope of regulated entities

The proposal applied to “online communication service providers” (OCSPs). The concept of an OCSP captured major platforms while excluding products and services such as fitness applications or vacation rental websites. It also exempted private communications, telecommunications service providers, certain technical operators, and encrypted services.

Definition of an Online Communication Service Provider (OCSP)

Many respondents indicated that the definition of OCSP was too broad and advocated for greater clarity from the Government in outlining what services would fall within the scope of the regulatory regime. Some respondents noted that the proposal should focus on services that pose the greatest risk of exposing Canadians to harmful content, such as social media networks and video-sharing platforms, while others noted that a narrower focus would leave out a significant set of intermediaries. In addition to the definition of OCSP, respondents were critical of the authority granted to the Governor in Council to scope in certain services through regulation-making. Some asserted that it was unclear which services would be included in the framework and what the threshold for inclusion would be. Similarly, other stakeholders criticized the proposed regime for including a broad range of entities without providing a clear list of determining or limiting factors. Respondents questioned whether services like personal websites, blogs, or message boards would fall within the scope of regulation. Some recommended that the Government introduce criteria to categorize entities, which would then be used to determine whether they fall within the legislative scope.

Many respondents stated that the definition of an OCSP overlooked the significant disparity in capacity between larger and smaller companies, noting that regulation could put an unnecessary human resource and financial burden on smaller companies to comply with the associated requirements. These respondents explained that the lack of nuanced application toward new and smaller providers could create unnecessary and unintended market consequences. There was a strong desire to have smaller platforms exempted from the framework. Respondents requested that the Government implement platform size thresholds to avoid creating a regime whose obligations only entrenched incumbents could meet.

A small subset of respondents called for broadening the scope of regulated entities to include other types of online services that play a role in making illegal content accessible to users. A few called for the regime to apply to all internet service providers that participate in making content accessible to end users. These respondents were of the view that if not all entities in the communication chain were to be bound by the same terms, there would be a gap in regulatory oversight of harmful content that could be exploited.

Exclusion of telecommunication service providers, private and encrypted communications

Some stakeholders called for greater clarity regarding what constitutes private communication. For instance, they questioned whether the exemption would cover large chat groups.

There was support for the exclusion of private and encrypted communications and telecommunications services from the proposal. Respondents expressed that such exclusions would be necessary to avoid the unintended consequences that would ensue if obligations like proactive monitoring, automated filtering and takedown requirements were applied to these services. For instance, they explained that requiring business-to-business cloud providers to proactively monitor and filter their customers’ networks for harmful content would carry serious privacy and security risks. These providers have customers that operate in the healthcare, banking, and energy sectors. Given the sensitivity of their customers’ data, the providers design their systems to have limited, if any, visibility into the data they host on behalf of their clients.

However, there were a few respondents who argued that the legislation should capture, at least in part, private communication and encrypted services. These respondents advocated that, at minimum, requirements for user notice and transparency reporting should be placed on these services. They asserted that extending the scope of regulated entities in this way would better enable harm reduction and would mitigate the risk of an incentive for companies to create more closed or private platforms as a means of sidestepping obligations.

Types and definitions of harmful content

The proposal targeted five types of harmful content: terrorist content; content that incites violence; hate speech; the non-consensual sharing of intimate images; and child sexual exploitation content. While the definitions proposed would have drawn from existing law, including current offences and definitions in the Criminal Code, they would have been modified in order to tailor them to a regulatory context.

Selection of five types of harmful content

Some respondents asserted that the five categories of harmful content were appropriately selected, as they each pose an imminent risk of harm to persons exposed to such content and are prohibited under the Criminal Code. However, most respondents criticized the regime for introducing types of content that are too diverse to be treated within the same regime. They were concerned that the Recourse Council would not be able to adjudicate such a variety of content, regardless of how well qualified its members might be. Respondents asserted that the proposed regime would be inadequate to the seriousness of the task of content moderation. Some pointed out that a one-size-fits-all approach would be unlikely to address the nuances of each type of content. Many respondents stated that the five types of content were very different and that each required a specialized approach, or perhaps an entirely separate regulatory regime.

Many respondents appreciated the proposal’s focus on areas of speech and conduct already defined in domestic law. Some also welcomed the definition of hate speech aligning with Supreme Court of Canada jurisprudence, asserting that such alignment provided a reliable and consistent measure through which to define online hate in Canada. Some stakeholders felt that the proposal should be limited to clearly defined categories of content that are illegal in Canada. Others questioned why a new regulatory regime would be necessary to address these types of harmful content, considering that they are already offences under Canada’s Criminal Code.

Some respondents appreciated the proposal going beyond the Criminal Code definitions for certain types of content. They supported the decision to include in the definition material relating to child sexual exploitation that might not constitute a criminal offence, but which would nevertheless significantly harm children. A few stakeholders said that the proposal did not go far enough and that the legislation could be broader by capturing content such as images of labour exploitation and domestic servitude of children. Support was also voiced for a definition of the non-consensual sharing of intimate images that recognizes it is sometimes impossible to know whether all individuals depicted gave their consent.

Others cautioned that opening these categories of prohibited speech to re-definition would likely cause confusion, create uncertainty, raise serious Charter concerns, and lead to significant controversy. Additionally, new regulatory definitions were viewed by some as likely to weaken the value that existing jurisprudence could have in helping actors interpret and apply these categories of harmful content. Respondents shared that it was important to carefully define the categories of what is regulated, as they argued that regulatory limitations on harmful speech should align with criminal limitations on harmful speech. Respondents advocated for using existing definitions found in the Criminal Code to ensure alignment, support Charter compliance, and avoid censorship.

Certain respondents requested that specific categories of harmful content be removed from the ambit of the legislation. For instance, some stressed that references to “terrorist content” should be removed, voicing a concern that Muslim Canadians would be disproportionately securitized, criminalized, and demonized under an approach to harmful content that included references to terrorism.

A few respondents stated that additional types of content, such as doxing (i.e., the non-consensual disclosure of an individual’s private information), disinformation, bullying, harassment, defamation, conspiracy theories and illicit online opioid sales should also be captured by the legislative and regulatory framework.

Lack of definitional detail

Many submissions criticized the lack of definitional detail provided for the harms that would be regulated. Respondents noted that the proposal said the definitions would be based on corresponding Criminal Code offences, but did not state precisely which offences would be included. Some mentioned that it was impossible to comment on the particularities of each harm, as none were defined. They asserted that the reader was left guessing what the Government intended, as well as how far-reaching the ultimate definitions would prove to be.

Many questioned how platforms would be expected to assess whether content falls within one of the five categories. They shared that one of the dangers of overly broad definitions would be that a content moderation professional with no legal training could quickly resort to bias in deciding which content to remove. They explained that this type of chilling effect would have a disparate impact on marginalized communities and create a broader trend toward over-censorship of lawful expression writ large. Respondents also expressed that distinguishing between lawful and illegal content is very difficult, even for legal experts. Much of the content that would be captured within the five types of harmful content was labeled by stakeholders as more aptly falling into a grey zone. They questioned how platforms’ human moderators would be able to assess such content accurately, emphasizing that it would be even more difficult, if not impossible, for machine learning and AI to detect context or nuance and accurately categorize content into the five categories.

A few respondents emphasized the importance of providing definitions written in plain language to ensure that all victims could understand what behaviour would fall within the definition and make a complaint. Some stakeholders emphasized the importance of plain language for victims especially, and noted that the definition of hate speech as proposed would be particularly difficult for victims of this type of content to recognize and apply.

Legal yet harmful content

Many cautioned against opening up the categories of harmful content to speech that, though harmful, would nevertheless be lawful. Concerned stakeholders expressed that requiring the removal of speech that would otherwise be legal would risk undermining access to information, limiting Charter rights, namely the freedom of expression, and restricting the exchange of ideas and viewpoints that is necessary in a democratic society. Respondents asserted that legislation imposing content moderation requirements on platforms should be limited to illegal content only. Similarly, some stakeholders cautioned the Government against implementing a regime that would create different legal standards for the online and offline environments, making expression that is legal offline illegal to share online.

Content moderation requirements

The proposal set out a statutory requirement for regulated entities to take all reasonable measures to make harmful content inaccessible in Canada. This obligation required regulated entities to do whatever is reasonable and within their power to monitor for the regulated categories of harmful content on their services, including through the use of automated systems based on algorithms. Regulated entities were also required to respond to flagged content by assessing whether it should be made inaccessible in Canada, according to the definitions outlined in legislation, within 24 hours of the content being flagged. The proposal also included flexibility for the Governor in Council to modify the 24-hour timeframe for content, or sub-types of content, that may require more, or less, time to assess.

Proactive monitoring obligation

A few respondents welcomed the obligation on platforms to take all reasonable measures to identify harmful content that is communicated on their platform and to make it inaccessible to persons in Canada. However, most stakeholders flagged these obligations as extremely problematic. The proactive monitoring obligation was considered by many as being inconsistent with the right to privacy and likely to amount to pre-publication censorship. They explained that in a regulatory context where content would be prevented from being uploaded, the requirement would operate as a de facto system of prior restraint. Stakeholders stated that by forcing platforms to proactively monitor their websites, as opposed to only moderating content by reactively responding to user flags, the regime would effectively force platforms to censor expression, in some instances prior to that expression being viewed by others. Some also mentioned that service providers should not be the entities to decide if particular content is unlawful. Multiple respondents also considered the obligation to be a human rights infringement. Many called for the provision to be removed, or more narrowly scoped.

24-hour inaccessibility requirement

Many respondents called for the removal of the 24-hour “take-down” period included in the proposed framework. A significant majority of respondents asserted that the 24-hour requirement was systematically flawed, because it would incentivize platforms to be over-vigilant and over-remove content, simply to avoid non-compliance with the removal window. Nearly all respondents agreed that 24 hours would not be sufficient time to thoughtfully respond to flagged content, especially when that content requires a contextual analysis. Many stakeholders said that the 24-hour requirement would not allow for judicious, thoughtful analysis that balances the right to freedom of expression in Canada under section 2(b) of the Charter of Rights and Freedoms against the pressing policy objective of countering online harms. They explained that the timeframe would effectively hinder the platforms’ abilities to make nuanced, content and circumstance-specific determinations. Some mentioned that the 24-hour requirement would cause expeditious removals of illegal content and thereby prevent law enforcement from identifying and preventing threats to public safety or investigating criminal activity.

Most stated that the likely outcome of this proposal would be that platforms would over-respond by taking down content that did not fall within the five categories. Others worried that since the proposed framework would not levy penalties on platforms for making the wrong decision, platforms might opt to dismiss flags too quickly and miss actual harmful content. This would allow the content in question to remain accessible to users, with a high probability of negatively impacting Canadians, particularly marginalized, racialized and intersectional groups. In either case, many respondents said that content moderation decisions would be made incorrectly. Some respondents also felt that, faced with a plethora of incorrect moderation decisions, users would likely lose trust in online platforms’ content moderation processes.

By contrast, a few respondents indicated that the 24-hour timeframe would be too long for certain types of content. More specifically, they asserted that illegal sexual content should be automatically removed when flagged, even while a review is being conducted by the hosting service. They asserted that removals conducted at this pace would prioritize the victims who experience trauma from the discovery of this content.

Finally, some stakeholders acknowledged that the 24-hour “take-down” requirement outlined in the proposed framework included allowances for regulatory flexibility, such that the timeline could be amended depending on the type of content addressed.

Use of artificial intelligence and algorithms

The majority of respondents were of the view that both the 24-hour inaccessibility requirement and the proactive monitoring obligations would force platforms to make problematic use of AI tools to fulfill their duties. They explained that, given the volume of content and the timelines associated with moderating it, platforms would only be able to meet their obligations by having automated systems sort through content and make moderation decisions. Respondents asserted that automated filters alone would produce a large volume of false positives, particularly with respect to context-dependent speech such as hate speech, incitement to violence and terrorist content. For example, content could appear to fall into those categories even if it is being used for lawful purposes such as news reporting or education. They further explained that the use of algorithms to make content moderation decisions would be problematic, as these tools are imperfect and have been shown to perpetuate biases. Victim advocacy groups explained that the use of algorithms for content moderation decisions would lead to discriminatory censorship of content produced by certain marginalized communities, in some contexts. Some respondents stated that algorithmic bias would be inevitable, even if platforms were compelled to ensure that the implementation and operation of their procedures and systems do not result in unjustifiably differential treatment.

Marginalized communities

Multiple respondents emphasized that the proposed approach to content moderation would likely hurt certain marginalized communities. Some explained that the flagging tools could be used by malicious actors to silence and abuse innocent individuals, especially from marginalized communities. For instance, users opposed to the rights of sex workers and LGBTQ2+ people could strategically flag adult content to censor legal posts that they deem offensive, and in some cases have already done so. Stakeholders explained that though user flagging is already a mainstay of content moderation, platforms may no longer be able to conduct due diligence in responding to flags if forced to make these determinations within 24 hours. They stressed that automated tools used to detect hate speech and harmful content would be particularly likely to be biased against the posts of marginalized communities. Many respondents explained that the proposal would result in a more harmful experience for certain marginalized communities, as opposed to providing a safe space for sharing viewpoints, debating ideas, and discussion.

Alternative approaches

A number of respondents provided alternatives to the content removal framework proposed. Some stakeholders suggested focusing on the timely removal of only certain types of content. Some called for penalties to be imposed on platforms that over-remove content, or penalties for users who abuse flagging tools. Others recommended the use of a trusted flagger approach. Multiple respondents stressed that instead of proposing a framework that focuses exclusively on the moderation of content, the Government should put forth a proposal that targets the economic factors that drive platform design and corporate decision making. Stakeholders emphasized the importance of focusing on structural factors like advertising practices, user surveillance, and algorithmic transparency when setting out a regulatory regime.

Some respondents encouraged the use of pre-upload screening to improve the content moderation experience for victims. They emphasized that this would force platforms to detect content before it comes to the attention of users. They explained that it would also mitigate the negative effects experienced by victims from harmful content online, particularly those victimized by child sexual exploitation and/or the publishing of intimate images without consent.

Most respondents agreed that the best way to address potentially harmful content is by regulating the processes and systems that platforms have in place. For many, the role of public regulators should be limited to oversight, ensuring that content moderation and content curation systems are sufficiently transparent, and that people have clear and compelling grievance and redress mechanisms available to them.

Finally, some respondents emphasized that any legislative and regulatory obligations must be aligned with Canada’s international trade obligations, and specifically pointed to obligations under the Canada-United States-Mexico Agreement (CUSMA).

New regulators

The proposal envisioned the creation of a new Digital Safety Commission of Canada to support three bodies that would operationalize, oversee, and enforce the new regime: the Digital Safety Commissioner of Canada, the Digital Recourse Council of Canada, and an Advisory Board. The Digital Safety Commissioner of Canada would administer, oversee, and enforce the new regime requirements. The Digital Recourse Council of Canada would provide people in Canada with independent recourse for the content moderation decisions of regulated entities like social media platforms. Finally, the Advisory Board would provide both the Commissioner and the Recourse Council with expert advice to inform their processes and decision-making, such as advice on emerging industry trends and technologies and content-moderation standards.

The necessity of new regulators

The introduction of the regulatory bodies was broadly supported, as many respondents thought the new regulators seemed fit for purpose. With that said, stakeholders had questions about the details of how these new bodies would be implemented, adequately resourced, and how they might use their regulatory authorities. This issue was especially concerning, some said, considering that the proposal would introduce a regulator of expression online, an unprecedented role for a regulatory body in Canada. Others questioned the number of regulatory entities, emphasizing potential overlaps in authority and the sheer size of the proposed bureaucratic structure dedicated to “censoring” online expression. Some respondents said that creating a new separate administrative process under new regulators to adjudicate complaints may prove to be a lengthy and expensive bureaucratic process, with no guarantee of efficiency.

Recourse Council staffing

Many emphasized that the new regulators would need to be staffed with the right expertise to effectively meet their proposed functions. Some highlighted the importance of staffing the Council with experts in human rights law, considering that it would be making decisions and issuing orders that would infringe on the section 2(b) freedom of expression Charter right. Others emphasized the importance of experts having experience in health and social science, as well as the importance of having legal or constitutional experts or counsel on the Digital Recourse Council.

Some emphasized the need for the Council’s staff to receive adequate training on implicit bias, cultural humility, and victim-centred and trauma-informed approaches. Stakeholders stressed that such training should include mechanisms to address the particular needs of and barriers faced by groups disproportionately affected by harmful content.

Many expressed that the number of decision-makers for the Recourse Council would be insufficient. They stated that the volume of complaints would likely be insurmountable for 3 to 5 decision-makers to address. This concern was highlighted by some as especially significant given that complaints must be processed rapidly; otherwise, the regime could risk blocking access to legitimate content for a prolonged period of time. Others emphasized that the number of decision-makers would be insufficient to achieve the legislation’s stated objective of diversity among its members. This was considered especially problematic by some stakeholders, who emphasized that meaningful representation of affected communities is essential to understanding the lived experience of harm. Others also pointed to the need for sufficient expertise to deal with the complaints. Respondents suggested that the number of decision-makers be increased and that the Council include the relevant expertise needed to make these decisions.

Blocking provisions

The proposal introduced an exceptional enforcement power allowing the Digital Safety Commissioner to apply to the Federal Court for an order requiring telecommunications service providers to implement a blocking or filtering mechanism to prevent access in Canada to all or part of a service that has repeatedly refused to remove child sexual exploitation and/or terrorist content.

A few respondents voiced support for the blocking provisions, expressing that the limits on the requirements for internet service providers to block Canadian access to certain content were appropriate and that limiting the power to situations of non-compliance for child sexual exploitation content and terrorist content made sense considering this content is illegal and arguably the most harmful. A select few others felt that the blocking provision was too narrowly circumscribed, advocating for it to be extended to repeated noncompliance with removal orders for the other three types of content as well.

Most respondents questioned whether the blocking provision was necessary, effective, or proportionate. Some highlighted that many platforms are increasingly complying with content removal requirements in other jurisdictions without the threat of blocking. A few stakeholders mentioned that some service providers already voluntarily block customer access to websites hosting child pornography and that creating additional regimes could complicate their process. Other respondents acknowledged the proposal’s integration of language around proportionality, but insisted that more specific safeguards would be necessary given the extreme nature of the blocking power. Multiple respondents criticized the proposal for allowing the blocking of entire platforms, advocating instead for a more targeted and human rights compliant proposal of targeting specific webpages. A few advocates for sex workers explained that the overbreadth of the power was particularly worrisome to them as it would enable the censorship of sites that are crucial for sex workers’ safety. They explained that limitations on access to online platforms result in sex workers facing increased violence and precarity.

Some respondents argued that the blocking provision stood in direct contrast to Canada’s commitment to protecting the principles of net neutrality and an open internet. Others questioned how the Government planned to block individual pages or sections of platforms, emphasizing the technical challenges involved in such an endeavor, such as the inability of telecommunication service providers to disable access to information within an online platform. Other respondents claimed that blocking orders would be ineffective. According to them, such powers lead to a perpetual game of whack-a-mole, as blocked sites often relocate and internet service providers are left playing catch-up.

Some stakeholders asserted that the website blocking provision would incentivize platforms to censor any content related to terrorist content or child sexual exploitation content to avoid non-compliance. They explained that such a power would have a chilling effect on speech and would pose a real threat to an open and safe internet, as platforms would be inclined to take down all questionable content rather than risk being blocked. Others stated that for the provision to be acceptable, the definitions of content to which it applies would need to be perfectly aligned with what is already illegal. This would avoid the problematic situation of a platform being blocked for hosting controversial content that, though perhaps harmful, is not illegal.

Finally, some stakeholders were concerned with the cost associated with site blocking, mentioning that there could be an uneven impact where larger service providers may be at an advantage due to the ease of integrating blocking technologies. The relative burden on smaller platforms could impact their marketplace competitiveness.

Link to law enforcement and national security agencies

The proposal acknowledged that a regulatory framework with content removal requirements may push threat actors beyond the visibility of law enforcement and the Canadian Security Intelligence Service (CSIS). To balance the public interest in protecting Canadians from exposure to harmful content with the need to ensure that law enforcement and CSIS can identify and prevent real-world violence emanating from the online space, the framework required regulated entities to notify law enforcement and/or CSIS of specific types of illegal content falling within the five categories (with national security-related content going to CSIS) to allow for appropriate investigative and preventive action. Two potential options were presented for consideration:

  • One approach was to require that regulated entities notify law enforcement in instances where there are reasonable grounds to suspect that there is an imminent risk of serious harm to any person or to property stemming from illegal content falling within the five categories of harmful content (One Size Fits All Approach).
  • Another approach was to require that regulated entities report public-facing information associated with certain types of illegal content falling within the five categories directly to law enforcement and content of national security concern to CSIS. The Governor in Council would have authority to prescribe the types of illegal content subject to reporting and the legal thresholds for reporting such content (Flexible Approach).

In addition, regulated entities were required to preserve prescribed information that could support an investigation when sought by lawful means (e.g., a production order). The specific nature of the information to be preserved (including basic subscriber information, location data, the content itself, the alleged offence or national security threat) would be determined through regulations issued by the Governor in Council. The reporting obligation was meant to provide law enforcement with public-facing information associated with certain types of illegal content with which to apply for a judicial authorization for production of further preserved information from a platform (user identifying information).

Of note, this element of the proposal was not intended to replace or supersede any existing law enforcement reporting or preservation requirements related to child pornography offences under the Mandatory Reporting Act, which would remain in place under existing legislation.

A select few respondents voiced strong support for the obligation on platforms to preserve information related to the reports they would submit to law enforcement and national security agencies. A few respondents welcomed the proposal’s inclusion of a critical incident protocol in response to the Christchurch Call to Action. Some stated that they understood the legitimate needs of law enforcement and were supportive of platforms making voluntary reports to law enforcement regarding illegal content and assisting law enforcement with judicially authorized production requests. However, the majority of respondents were critical of the newly proposed mandatory reporting and preservation obligations.

Data sharing

Many were critical of the proposal requiring that platforms report information on users to law enforcement and national security agencies without appropriate safeguards (e.g., judicial oversight or notification of affected individuals). Stakeholders explained that the requirements would pose a significant risk to individuals’ right to privacy. They felt that the proposal expanded the legal and technical surveillance capabilities of the state under the rhetoric of safety, without establishing the necessity of such obligations. Many argued that the proposal provided little to no clarity on the limitations to information sharing or time limits on how long agencies would be permitted to store data and information. Respondents also criticized the proposal’s lack of specificity regarding how law enforcement and national security agencies would handle information received about harmful content that is ultimately found not to be illegal (e.g., destruction, records segregation).

Mandatory reporting obligation

The first option for mandatory reporting was better received than the second, even though it, too, was met with criticism. Some considered it an appropriate option. However, most respondents called on the Government to provide more clarity regarding terms like “reasonable grounds to suspect” and “serious harm” in order to mitigate the risks of undermining Charter rights, including freedom of expression and the right to be secure against unreasonable search and seizure. Some also criticized the one-size-fits-all approach as being inconsistent with the differences between the categories of content and the threat environment. Many asserted that proper checks and balances would need to be put in place to protect Canadians.

The second option for mandatory reporting was more heavily criticized. Respondents asserted that it would be too discretionary for platforms to meaningfully carry out and recommended that it be abandoned. They also questioned how platforms would be able to determine whether content is illegal or not. That being said, a few respondents flagged the necessity of reporting and preservation of child sexual exploitation content.

Effects on marginalized communities

Many stakeholders argued that the mandatory reporting obligations would be likely to disproportionately impact certain marginalized communities. Given that content from these communities already receives a disproportionate amount of flagging compared to similar content from other communities, coupled with the 24-hour takedown obligation, some argued that the proposal could result in these groups finding that their posts are excessively forwarded by platforms to law enforcement or CSIS for investigation. Respondents explained that such a proposal could result in the unjustified, disproportionate reporting of content produced by marginalized groups to law enforcement, some of which may not be illegal. Moreover, preservation obligations requiring platforms to retain information related to the content could further produce privacy and confidentiality concerns for these groups. These concerns were especially acute, stakeholders argued, when the mandatory reporting obligations are coupled with the proactive monitoring obligation. According to many respondents, such an approach could result in the creation of a surveillance state, wherein platforms become online monitoring appendages of law enforcement.

Alternative approaches

Respondents presented alternatives to the options included in the proposal. Some suggested that mandatory reporting and preservation obligations should only apply to specific types of harmful content, like child sexual exploitation content and terrorist content. Others recommended that the reporting and preservation obligations be accompanied by due process safeguards to prevent the risks of unwarranted government surveillance and encroaching on users’ Charter rights, including their privacy rights and freedom of expression.

The role of platforms

Multiple submissions asserted that requiring platforms to proactively monitor and share user data would effectively deputize providers that are not democratically accountable to make subjective determinations on criminal issues that they are not qualified to make. Respondents stated that such determinations are best left to law enforcement, which has the requisite expertise.

Feasibility

Some stakeholders indicated that they do not believe the reporting requirements would be workable for law enforcement. They explained that every post available to Canadians on a platform that may constitute a criminal offence would likely be reported. In such a situation, law enforcement and national security agencies could be overwhelmed with the number of reports they receive and may be unable to triage through them to identify the true threats.

Privacy

Multiple respondents worried that the preservation and reporting obligations would lead to improper and excessive surveillance and storage of personal data. They explained that it is crucial to ensure that Canadians’ right to privacy is respected. Others stressed that privacy concerns must be a fundamental part of any new legislation, as personal data is tantamount to currency for online platforms, and Canadians’ identity, data, and money would be vulnerable in such an environment. Some argued that sharing information about foreign persons with Canadian law enforcement or national security authorities could violate the privacy laws of other countries.

Some stakeholders also pointed out that, if the framework were to include demographic reporting requirements, platforms would effectively be forced to start collecting additional sensitive data about Canadian users, contrary to user privacy interests. They also argued that it would create an ongoing privacy risk for Canadians by forcing platforms to indefinitely retain detailed demographic data about all of their Canadian users, some of whom could be harmed if their sensitive demographic data were to become public as a result of a data breach. As such, these stakeholders recommended that any demographic data collection obligation be abandoned.

2: Issues raised with moderate interest

Content moderation requirements

Platform liability

Some respondents supported the proposed framework’s decoupling of platforms’ initial content moderation decisions from liability. These submissions noted that the regime would compel platforms to make decisions about whether content is harmful, but would not impose fines if those decisions are incorrect. Stakeholders considered this element a useful protection for the section 2(b) freedom of expression Charter right and an important safeguard against the incentive for platforms to over-remove content.

Complaints process

Some stakeholders appreciated that platforms would be mandated to respond in a timely fashion to user complaints and compelled to provide a clear appeal process. A few also welcomed the appeal mechanism to the Digital Recourse Council. Stakeholders explained that these mechanisms help ensure procedural fairness for users and may also reduce the likelihood of platform bias towards either the flagger or the poster of the content as both are granted the same recourse.

Burden on smaller platforms

Multiple respondents highlighted how difficult it might be for smaller platforms to meet the proposal’s content moderation obligations. They argued that the regime carried very significant resource implications, as its implementation would require platforms to create or redesign automated AI systems for the Canadian market and engage sufficient human resources to review, assess and respond to a likely abundance of flagged content. Some were concerned that burdensome regulations may force smaller players out of the marketplace. Advocates for sex workers explained that such a phenomenon could be particularly harmful for sex workers in Canada, as it would discourage the kind of autonomous working environments that they use to exercise more agency and self-determination in their careers. They explained that independent sex workers do not have the resources necessary to moderate content on their platforms under strict timelines. Faced with onerous obligations, they would likely be forced to join forces with larger platforms, exacerbating harms associated with the larger oligopolies already predominant in the sex work industry.

Ineffectiveness of the regime

Content posted online is often accessible across national jurisdictions around the globe. Some felt that the “take-down” obligations to make content inaccessible in Canada, while that same content would remain available in other countries, could cause a fragmentation of the internet. These respondents explained that other countries’ regulations on online content have not stopped their citizens from viewing the content in question; instead, people have used VPNs and other technology to view it. According to stakeholders, this fragmentation also harms victims. Victims who have reported harmful content can no longer view it once it has been rendered inaccessible in Canada, yet the content can still be viewed by people in other jurisdictions, or by those who use technology such as VPNs to bypass the restriction. As such, victims continue to suffer because the harmful content remains accessible while they are no longer able to report it. There were also concerns about how the appeal process would work if individuals from outside of Canada wanted to appeal the removal of their content.

Transparency and accountability requirements

The proposal compelled regulated entities to be more transparent in their operations. Under the proposal, entities were required to publish information that they do not currently publish, with baseline transparency requirements set out in statute and further specified in regulation. This included Canada-specific data on the volume and type of content dealt with at each step of the content moderation process, as well as information on how regulated entities develop, implement, and update their guidelines for the kinds of content they prohibit. Regulated entities would also be required to publish transparency reports on the Canada-specific use and impact of their automated systems to moderate, take down, and block access in Canada to harmful content.

Many respondents emphasized the importance of imposing transparency and accountability requirements on online platforms. Some expressed that mandated and audited transparency is among the most powerful platform governance tools available to Government. They considered these obligations as offering important safeguards to mitigating the regime’s potential for over-removal and censorship.

The requirement to report how platforms monetize harmful content was met with contrasting viewpoints. Some supported the notion, labeling it a valuable metric to track and disclose. However, other stakeholders claimed that the requirement was designed to publicly shame companies and would thereby render the data they provide less reliable, considering that platforms would be incentivized to claim that they do not profit from the harmful content posted by their users.

Some respondents provided insight into what they believed should be captured by the transparency obligations. Examples included data that sheds light on the social locations of the communities that are targeted, the posters of the content, and the content itself, to help facilitate the identification of any discriminatory trends. Others emphasized that the obligation to include demographic data in platform reports would be impractical and would undermine users’ privacy rights. They explained that such a requirement would compel platforms that do not already do so to start collecting and retaining additional sensitive data about their users, contrary to privacy interests and data minimization principles. Some stakeholders also mentioned that the potential categories of data to be included in the report would need to be re-examined, in order to compel platforms to release the most useful and relevant information. They also mentioned that some of the proposed categories would be too ambiguous to quantify or impossible to measure.

New regulators

Independence, accountability and oversight

A few respondents considered the regulatory bodies’ independence a positive feature of the proposal. They emphasized that the independence of the three new regulatory bodies with enforcement or adjudicatory functions would be key to ensuring the appropriate functioning of the Act. Some focused their appreciation on the Digital Recourse Council as an independent and impartial regulator, explaining that it would only be appropriate for a regulator to adjudicate content moderation disputes if the body deciding the appeals is free from political and commercial influence.

Other respondents criticized the proposal for not providing sufficient mechanisms for parliamentary oversight and accountability of these new regulatory bodies. According to stakeholders, such oversight would help ensure that the regulators act in the public interest and intervene in markets in a non-arbitrary way. Though respondents acknowledged the proposal’s transparency requirements, these obligations were deemed insufficient to safeguard against potential regulatory abuse.

Digital Safety Commissioner’s mandate

Some respondents expressed appreciation for the Digital Safety Commissioner’s broad mandate. They noted that the regulator’s functions extend beyond the five prescribed types of harmful content in the proposal, welcoming how this feature would hopefully allow the Commission to engage in partnerships and research on broader issues of digital safety not yet in scope for regulatory action (e.g., disinformation harmful to public safety, synthetic media or automated/bot content labelling, ad transparency, doxing, algorithmic transparency, etc.). Some also supported the Commissioner’s authority to engage in partnerships with civil society and international allies. Still others noted that there should be clear checks and balances on the Commissioner’s authority.

Recourse Council decisions

Some criticized the fact that the Digital Recourse Council would only be able to make binding decisions mandating content takedown, not the reinstatement of content. In addition, a few respondents asserted that persons should be able to initiate a complaint to the Council without first exhausting the appeal mechanisms at the platform level. These few stakeholders emphasized that platforms’ reconsideration processes are often ineffective and slow.

Notification and appeal requirements

The proposal compelled regulated entities to establish robust flagging, notice and appeal systems for both authors of content and those who flag content.

Some respondents supported the obligation on platforms to notify users of their content moderation decisions, as well as to allow them the opportunity to seek redress. Others appreciated the fact that such procedural safeguards would be afforded to both the author of the flagged content and the flagging user.

Some called for an appeal mechanism for decisions taken by the Recourse Council and the Commissioner. Respondents emphasized that appeal mechanisms would be crucial, especially considering that Canadians’ rights and freedoms are at play. A few welcomed the fact that compliance orders could be appealed to the Personal Information and Data Protection Tribunal.

A few respondents were critical of this process. They noted that if there was a take-down error, the platform would have the final say over whether the content should be restored based on its own community guidelines – an outcome which would ultimately undermine the kind of recourse envisioned.

Compliance and enforcement measures

The proposal gave the Commissioner compliance and enforcement powers, such as the power to proactively inspect for compliance; to issue compliance orders; and, in specific instances of non-compliance with legislative and regulatory obligations, to recommend Administrative Monetary Penalties of up to 10 million dollars or 3% of an entity’s gross global revenue, whichever is higher, to the Personal Information and Data Protection Tribunal proposed in the Digital Charter Implementation Act, 2020 (Bill C-11).

Adequacy of enforcement toolkit

Most respondents recognized the need for appropriate, reasonable, and proportionate enforcement mechanisms to address platform non-compliance. In that spirit, some welcomed the proposed administrative monetary penalties (AMPs) that would be applied to non-compliant regulatory entities. However, a number of submitters had concerns with the proposed enforcement regime.

A few respondents called for the strengthening of the enforcement tools. Though they welcomed the monetary penalties, they felt that more needed to be done to enforce platform obligations, such as rendering company executives and board members personally liable. In contrast, others felt that enforcement tools should only be applied in cases of systemic failures, rather than individual cases of non-compliance. The concern for respondents was that applying AMPs to specific cases of non-compliance could stifle access to information, free expression, and innovation. They were equally concerned that platforms may be subject to penalties for making mistakes in content-moderation decisions, even when acting in good faith. Some assumed that the regime would impose harsh fines on platforms that fail to remove harmful content within 24 hours. These respondents asserted that such disproportionate sanctions would inevitably lead to platform over-compliance and the harming of free expression and access to information.

A few respondents argued that the regime would be unenforceable, as it would apply obligations to platforms that conduct no operational functions in Canada. According to these respondents, the Government should not enact a regime, especially one so elaborate and expensive, that would be unenforceable for all practical purposes. They explained that though major platforms might voluntarily comply with the regime, platforms that deliberately house harmful content would remain outside the reach of Canadian law.

AMP amounts

A few respondents claimed that the AMP amounts were not sufficiently large for big platforms with high potential earnings. These respondents explained that the enforcement measure may not be effective and may instead result in a pay-to-play system, where platforms treat fines as a cost of doing business and decide not to comply with the regime. Other respondents claimed that the proposed penalties were punitive and disproportionate, especially for smaller platforms. They asserted that penalties should be proportionate to the violation. Some argued that tying penalties to a platform’s gross global revenue was inappropriate. These submitters asserted that this practice would result in penalties that are disconnected from a platform’s activities in Canada and the reality of its presence in the Canadian marketplace.

Inspections and order-making authority

Some respondents highlighted the inspection and order-making authorities as particularly worrisome. While some recognized that the new regulators would require sufficient powers to carry out their functions, most considered the proposed authorities exceedingly broad, intrusive, and open to abuse. They also worried that such powers would set an unnecessary and unfortunate precedent for other regulatory regimes. Respondents recommended significantly circumscribing the authorities.

Education and research

Some respondents requested that the new regulator be equipped with the ability to lead and participate in research and programming aimed at reducing harmful content online. Some also called for legislation to compel platforms to run digital citizenship and public awareness campaigns about harmful content online. Respondents explained that citizen outreach and educational campaigns are especially critical for youth, considering their high usage of social media. Stakeholders emphasized that both research initiatives and citizen outreach are valuable tools that should be used to advance the framework’s goals.

3: Issues raised by a select few respondents

In camera hearings

Some criticized the Digital Safety Commissioner’s ability to conduct hearings in private where privacy, national security, international relations, national defence, or confidential commercial interests are at play. Respondents explained that a regime based on the delicate balance of Canadians’ rights would require an open and active public debate and full transparency with regard to commercial practices. As such, according to some, protecting confidential commercial interests should never by itself be enough to justify an in camera hearing. Others suggested that clear thresholds and criteria be established delineating situations where hearings could be held in camera.

Regulatory charges

Some stakeholders were critical of the proposed regulatory charges, calling them a tax on some, but not all, online platforms. Respondents emphasized that the cost of these charges would be passed on to consumers. They also highlighted that platforms would already incur substantial costs to meet the regime’s other legislative and regulatory obligations. Others requested additional information about the cost of the proposed scheme and how the financial burden would be distributed among the various types of regulated entities. Some respondents questioned whether the larger platforms would have to contribute more, explaining that the charges could create significant inequities, as law-abiding platforms would be forced to pay the cost of policing non-compliant platforms.

Not acting fast enough or not doing enough

A select few respondents criticized the proposed regime for not doing enough, or not moving fast enough. Some called for a formal Bill to be tabled immediately, highlighting concerns that such a framework would take months, if not years, to be fully implemented. These stakeholders emphasized that communities, especially marginalized groups, cannot afford to wait, as they are currently facing irreparable harm. Others emphasized that the regime should hold platforms accountable through an obligation of result, as opposed to an obligation of best effort, claiming that the latter standard is not conducive to adequately addressing the safety of Canadians online.

Tailoring of regulatory requirements

Most respondents appreciated that the proposal allowed for the tailoring of regulatory obligations to different categories of online platforms, taking into account various business models, sizes, and resources of the potentially regulated entities. However, some expressed concern over the ability to tailor obligations, asserting that moderation obligations for content like child sexual exploitation material and the non-consensual sharing of intimate images should be imposed regardless of differing business models or capacity.

Mandatory Reporting Act Amendments

The Government proposed to amend the Mandatory Reporting Act to better enable it to deal with the rapid pace of change and the evolution of how material depicting child sexual exploitation is created and shared online today. Targeted reforms to the Mandatory Reporting Act and its regulations would enhance measures to address online child sexual exploitation, support investigations and assist in rescuing children who are in abusive circumstances. The Government would amend the Mandatory Reporting Act in the following ways:

  • Centralize mandatory reporting of online child pornography offences through the Royal Canadian Mounted Police’s National Child Exploitation Crime Centre (NCECC);
  • Clarify that the Mandatory Reporting Act applies to all types of internet services, including social media platforms and other application-based services;
  • Enhance transparency by requiring an annual report to the Ministers of Public Safety and Emergency Preparedness and Justice from the NCECC;
  • Impose a 12-month preservation requirement for computer data (as opposed to the current 21 days);
  • Designate a person in regulations for the purpose of collecting information to determine the application of the Mandatory Reporting Act; and
  • Require persons who provide an internet service to provide additional information to the NCECC, without judicial authorization, where a child pornography offence is identified.

Most respondents did not address the proposed amendments to the Mandatory Reporting Act. A few were pleased that the framework included amendments to strengthen the Act. They asserted that many of the suggested changes would enhance efforts to curb child pornography. They advocated strongly in favour of amendments that would allow law enforcement to locate offenders faster, such as the requirement for companies to include basic subscriber information (BSI) in their reports to law enforcement. Some acknowledged that removing the requirement for judicial authorization to obtain transmission data or BSI would be a necessary amendment to expedite the police response in cases where an offence is evident. Others expressed concern that, given the proactive and mandatory nature of the reporting, reports may include false positives, and that requiring additional personal identifying information could therefore negatively impact innocent users’ privacy rights. It was noted that requiring the reporting of transmission data only, as opposed to both transmission data and BSI, would be preferable, as it would still expedite the police response while respecting Charter rights, including freedom of expression and privacy considerations.

CSIS Act Amendments

The Government proposed to amend the CSIS Act. The proposed amendments to the judicial authorization process would enable CSIS to identify online threat actors more quickly and to investigate and mitigate the spread of violent extremist narratives that may inspire real-world acts of violence. Currently, CSIS has only one warrant option, which is designed for seeking intrusive powers from the Federal Court. It takes 4 to 6 months to develop the application and seek the Federal Court’s approval. Canadian law enforcement, by contrast, is able to obtain BSI in 8 to 10 days. The potential new authorization for BSI would be issued by an independent judge of the Federal Court and be subject to Ministerial oversight. It would not replace or eliminate CSIS’ requirement to obtain full warrant powers from the Federal Court should further investigation into the threat be necessary. As with all CSIS activities, requests for BSI could be reviewed by the National Security and Intelligence Review Agency and the National Security and Intelligence Committee of Parliamentarians.

Most respondents did not address the proposed CSIS Act amendments, though some did raise concerns. These respondents explained that lowering the legal threshold and associated due process for accessing BSI could result in significant privacy infringements, have chilling effects on expression, disproportionately impact certain marginalized groups, and easily lead to abuse. They stated that, prior to making such amendments, the Government should provide clear evidence demonstrating why the new authorities are necessary and what additional safeguards and oversight could be effective in mitigating such concerns.

Next Steps

The Government continues to consider next steps and will announce further action shortly.

Appendices
