Session Two: Types of Content to Regulate
What is a Worksheet?
Each advisory group session will be supported by a worksheet, like this one, made available to the group in advance. The goal of these worksheets is to support the discussion and organize the feedback and input received. These worksheets will be made public after each session.
Each worksheet will include a set of questions to which group members will be asked to submit written responses. A non-attributed summary of these submissions will be published weekly to help conduct the work in a transparent manner.
The proposed approach in each worksheet represents the Government’s preliminary ideas on a certain topic, based on feedback received during the July-September 2021 consultation. It is meant to be a tool to facilitate discussion. The ideas and language shared are intended to represent a starting point for reaction and feedback. The advice received from these consultations will help the Government design an effective and proportionate legislative and regulatory framework for addressing harmful content online. Neither the group’s advice nor the preliminary views expressed in the worksheets constitute the final views of the Department of Canadian Heritage or the Government of Canada.
Discussion Topic
What should the scope of regulated content be under the legislative and regulatory framework, and how should the regulated content be defined?
Objectives
- Determine what content should be regulated under a legislative and regulatory framework. The paragraphs below outline the categories of harmful content that should be initially scoped in. These categories represent the most egregious forms of content online. However, a wide range of other harmful content exists online. It will be important to establish what can, and should, be regulated as a first step for online content regulation in Canada.
- Assess whether the framework’s legislative and regulatory obligations should differ depending on the category of regulated content. Some content is more harmful than other content. Regulated entities could be required to address the spectrum of harmful content differently, depending on the degree of risk posed by each category of content.
- Determine the best way to scope and define regulated content. Rather than creating one category for ‘illegal’ content and another for harmful content, all regulated content could be scoped into a single category – harmful content. Legislation could set baseline definitions inspired by the Criminal Code, other Canadian legislation, and relevant jurisprudence. It could provide enough specificity that platforms are able to interpret the definitions accurately and create their own more detailed definitional standards in conformity with the more general requirements set out in legislation.
- Determine the appropriate level of flexibility to provide in legislation and regulations. As the regulatory framework matures, and norms and standards develop, legislation could include mechanisms to allow for existing definitions to be updated and additional definitions of content to be introduced over time.
Starting Points
- The intent is to regulate content that Canadians generally recognize as harmful while respecting freedom of expression. When it comes to the scope of content to regulate, the goal is for the legislative framework to capture content that is recognizable and intuitively likely to have harmful effects, and that is seen as being of ‘low value’. Canadian jurisprudence holds that the degree of constitutional protection afforded to expression may vary depending on the nature of the expression at issue, among other factors. In this context, ‘low value’ content would mean content that does little to promote the values underlying freedom of expression, such as child pornography, hate speech, and incitement to violence. The framework would also target content that poses an imminent risk of harm to Canadians. The intention is that the benefit of suppressing such content would outweigh the detrimental effect of restricting its expression.
- The regulatory regime should initially focus on the most egregious forms of harmful content online, with a view to expanding the scope over time. The framework would regulate five categories of harmful content – child sexual exploitation content, the non-consensual sharing of intimate images, terrorist content, content that incites violence, and hate speech. Rooted in Canadian criminal law, these types of content can most easily be identified and scoped into a new legislative and regulatory framework.
- The Criminal Code should be the point of departure, but regulating content is fundamentally different from prosecuting criminal offences. Most offences in the Criminal Code contain two elements: mens rea and actus reus. The mens rea refers to the accused’s intention, while the actus reus refers to their conduct. To commit a criminal offence, a person must generally have both a guilty mind (mens rea) and have committed an overt act in furtherance of a crime (actus reus). The latter requirement could apply in a regulatory context. For instance, certain harmful communications of hate speech would be prohibited by the proposed amendments to the Canadian Human Rights Act regardless of the communicator’s intentions. It is possible, in a regulatory context, to target the communication, or sharing, of a piece of content. However, the former requirement of a mens rea does not translate easily to a regulatory framework. For example, the offence of wilfully promoting hatred against an identifiable group requires the mens rea of will or intentionality. It will not be possible for a regulated online service to know the intention of a person posting a specific piece of content. Neither legislative definitions nor more specific community guidelines from platforms can assess intention on a mass scale. Nor should they have to: the proposed framework would be concerned with monitoring for and managing harmful content, not with why harmful content has been posted. While some have claimed that “illegal content” should be the target for confronting harmful content online, a more nuanced, and indeed unique, approach to defining harmful content is required for the purpose of regulation.
Overview of Proposed Approach
- The informational advantage that online platforms hold regarding the nature and extent of harmful content online is a key fact informing how Canadian legislation would define such content. Policy and regulatory design teams within the federal civil service, and the eventual administrative office of the Digital Safety Commissioner, face a fundamental information disadvantage when it comes to how harmful content manifests on online platforms. Unlike the platforms themselves, the Government does not have access to the relevant information and reporting needed to understand how this content manifests online – nor do other interested parties in academia or civil society. This directly impacts the ability to define the relevant categories of harmful content in legislation.
- Given this issue, legislation should set a baseline standard that both speaks to Canadians’ intuitive sense of what harmful content is and gives direction to platforms on how they should monitor and moderate that content. Ongoing environmental scanning, regular reporting by platforms, and audit and inspection powers would all be designed to help the Government better understand how these harms manifest and how they are monitored and moderated. To the extent that new information reveals gaps or inadequacies, there will likely be a need for regulatory and/or legislative amendments, or new regulations as appropriate, to revise these definitions or even introduce new categories of harmful content.
- Legislation would begin by including five broad definitions for the above-mentioned types of harmful content, drawn from the Criminal Code, jurisprudence, and other Canadian legislation. Adapted for a regulatory context, most of the definitions, unlike their Criminal Code counterparts, focus on the content and its likely effects rather than on the mental state of the person who posted the content.
- Content related to child sexual exploitation: There are numerous offences within the Criminal Code related to the sexual exploitation of children. The regulatory definition, drafted in a more general manner, would capture many of these offences.[Footnote 1] Broadly speaking, it would include a) a visual representation that shows a child engaged in or depicted as being engaged in explicit sexual activity; and b) other content if it is reasonable to suspect that the content is related to the sexual exploitation of children and it is likely that the content will perpetuate harm against children.[Footnote 2]
- Terrorist content and content that incites violence: The Criminal Code creates criminal liability for any person who counsels the commission of any crime in the Code. The definitions of terrorist content and content that incites violence are meant to approximate the concept of counselling in criminal law. To constitute counselling, the criminal law requires two components: 1) active encouragement of a crime, and 2) that a person intended that the crime be committed or was aware of a substantial and unjustified risk that the crime would be committed (in other words, recklessness). The definitions of both terrorist content and content that incites violence modify these requirements for a regulatory context; they include: 1) active encouragement, and 2) the likelihood that the harm being encouraged will take place.
- Content that actively encourages or threatens violence would capture content in which an act of physical violence or substantial property damage is actively encouraged or threatened if the communication of that content is likely to result in an act of physical violence or substantial property damage.
- Terrorist content would capture content that actively encourages or threatens the commission of an act or omission that is likely to result in any of the following harms if it is reasonable to suspect that the content is communicated for a political, religious or ideological purpose and for the purpose of either intimidating the public, or a segment of the public, with regard to its security or compelling a person, a government or a domestic or an international organization to do or to refrain from doing any act:
- (a) causing death or serious bodily harm to a person;
- (b) endangering a person’s life;
- (c) causing a serious risk to the health or safety of the public or any segment of the public;
- (d) causing substantial property damage that is likely to result in the harm referred to in any of paragraphs (a) to (c); or
- (e) causing serious interference with or serious disruption of an essential service, facility or system, whether public or private, other than as a result of advocacy, protest, dissent or stoppage of work that is not likely to result in the harm referred to in any of paragraphs (a) to (c).
- Hate speech would be defined in the same way as under the proposed new section 13 of the amended Canadian Human Rights Act (as in former Bill C-36). First, the definition would describe hate speech in terms of its hateful content, namely any content that expresses detestation or vilification of persons on a prohibited ground of discrimination.[Footnote 3] Second, it would deem hate speech to be harmful content only when it is communicated in a context in which it is likely to foment hatred. This approach follows the guidance of the Supreme Court of Canada.
- Non-consensual sharing of intimate images would draw on the Criminal Code offence of publishing an intimate image without consent. Unlike the other definitions, this definition would not include a likely-effects provision. Instead, it would focus solely on the content. Regardless of the effect of the content, if the person depicted does not consent to the sharing of the intimate image, or if consent cannot be determined, it would be considered harmful content. Whether there was consent prior to the making or sharing of the content is irrelevant; consent would be assessed at the moment of flagging. Once an intimate image is flagged, the platform would be required to determine whether the person depicted gave their consent. If the platform cannot make this determination, the image would be considered harmful. The definition would read along the following lines: a visual recording of a person in which the person is nude, is exposing their genital organs, anal region or breasts, or is engaged in explicit sexual activity, if it is reasonable to suspect that (a) the person depicted in the recording retained a reasonable expectation of privacy both at the time of the recording and at the time the recording was communicated; and (b) the person depicted in the recording did not consent to the communication of the recording.
- The regime would set baseline standards for how harmful content is defined and, in turn, monitored and moderated by regulated services. It would institute transparency, oversight and accountability measures to ensure that platforms have the necessary processes in place to determine whether content is harmful, while granting platforms flexibility and leniency in their own decision-making. So long as platforms are able to demonstrate that they have systems and processes in place to identify, monitor, and moderate content that falls within the definitions above, the intent is that they would not be penalized for arriving at a reasonable conclusion about whether a given piece of content meets the legislated definitions of harmful content.
- Beyond the categories of harmful content themselves, there are different modalities in which harmful content can be transmitted online, which may call for different kinds of regulatory responses. For instance, the live streaming of harmful content would likely necessitate more timely removal than asynchronous content would. Volatile content likely to have imminent and cascading harmful societal effects would necessitate a more severe response than content that, though harmful, may have more distant effects. Categories of obligations could be applied differently to these modalities, such that a set of regulatory obligations could be reserved, for example, for live streamed terrorist content and/or child sexual exploitation content.
Supporting Questions for Discussion
- Determine what content should be regulated under a legislative and regulatory framework.
- Do the categories of content mentioned above cover the full breadth of harmful content that should be regulated? Is there other content that should feature in a regulatory scheme of this nature? If so, is there available evidence to demonstrate that this additional type of content is harmful to Canadian users of online platforms?
- Is there content that is not harmful, but should nevertheless feature in the online harms regulatory scheme? For example, is there content that should be protected from platform moderation (e.g. journalistic content or content of democratic importance)?
- Should there be additional or separate categories of regulation for live streaming or volatile content linked to the types of harm already identified?
- Assess whether legislative and regulatory responses should differ depending on the category of regulated content.
- Considering the content proposed to be regulated, is there some content that is more harmful to Canadian users? Should that content be treated more stringently than other forms of regulated content?
- How do we assess how harmful a given type of content is? What factors should be considered?
- If you identified additional types of content to regulate, where would that content fall on the spectrum of harm? How do you believe it should be treated?
- Determine the best way to scope and define regulated content.
- Given the above-noted considerations, and considering that simply defining harmful content as “illegal” under criminal law is a problematic approach in a regulatory context, what standards and thresholds do you believe should be implemented to scope in harmful content in Canada? Should the Government produce guidance to help interpret these standards and thresholds in the regulatory context?
- Do you still see merit in aiming to differentiate between “illegal” and harmful content in a regulatory scheme? If so, how would you propose to delineate the two categories?
- Determine the appropriate level of flexibility to provide in legislation.
- Do you envision that the definitions of harmful content will require regular updates as precedent regarding content moderation establishes itself? Do you think that the categories of content that require more contextual analysis and that are more subjective, like hate speech, may be more in need of definitional refinement over time?
- Do you envision that the Government will need to regulate new types of harmful content online in the future? If so, which types of content, while not necessarily harmful right now, may be in need of regulation down the road?