Supplementary Worksheet: Objects of Regulation

What is a supplementary worksheet?

The Expert Advisory Group on Online Safety launched by the Minister of Canadian Heritage on March 30, 2022, meets each week to discuss key elements of the legislative and regulatory framework for online safety. As discussions take place, additional questions emerge.

This supplementary worksheet contains follow-up questions to collect more detailed information to inform the design of an effective and proportionate legislative and regulatory framework. Neither the group's advice nor the preliminary views expressed in the worksheets constitute the final views of the Department of Canadian Heritage or the Government of Canada.

Objective

  1. Refine the scope of regulated content

Follow-up questions

  1. In an ex ante systems- and risk-based approach to online safety, are there limits (e.g., legal, practical, or otherwise) on the scope of content that should be regulated?
    1. If so, what are they?
    2. If not, why not?
    3. Relatedly, are there categories of harmful content that should be excluded from this framework?
  2. Should legislation and/or regulation define the regulated content at all?
    1. If so, should legislation and/or regulation define the regulated content on the basis of its characteristics, its likely effects, its actual effects, or some hybrid of these?
  3. What degree of specificity is needed in an ex ante, systems- and risk-based framework:
    1. For regulated services to identify the content for which they are responsible and design the requisite tools and processes to manage it?
    2. To attach different regulatory obligations to different types or degrees of harmful content?
    3. To assess the validity and extent of a regulated service’s risk assessment or plan?
  4. If the legislative and regulatory framework is to define regulated content, should the definition be drawn only from legal definitions and existing jurisprudence, or should it consider other elements?
    1. What are the limits (e.g., legal, practical, or otherwise) that should apply to considering content for regulation?

Summary of the Expert Advisory Group Discussion

Systems-based Approach

Many experts agreed that the framework should adopt a risk-based approach whereby regulated services are compelled to act in a responsible manner. Some insisted that there is real value in articulating what the risk assessments imposed on platforms might look like. They stressed the importance of providing a standard or benchmark against which regulated services' behaviour can be compared, and which would be used to determine whether a service met its obligations.

Some of the experts who advocated for a risk-based approach did so through a product-safety lens. They recommended imposing performance standards on regulated services through regulations and guidelines, and product assessments through transparency reports and audits.

Experts stated that a risk-based approach would allow for an adaptable framework that can keep pace as technology evolves and new harms develop. They highlighted that under such a framework, legislation would not have to be amended every time new harms or services emerged. Experts also asserted that a risk-based model would allow flexibility in both the definitions of content and the obligations imposed. This flexibility is necessary, they stated, as some content, like hate speech, is too opaque to be assessed in real time, whereas child sexual exploitation content and the livestreaming of violence should be removed immediately or prevented from being posted at all.

Some experts argued that the framework would need to compel regulated services to disclose information on the algorithms and other systems, structures and tools they use to distribute content to users. They explained that without this information it would not be possible to assess a service's risk profile or the adequacy of its risk management, a key part of holding services accountable.

Differentiating between Illegal and Legal yet Harmful Content

Many experts recommended that the framework differentiate between illegal content and legal yet harmful content, imposing distinct obligations on regulated services for each type. They argued that illegal content should be removed, and stressed that complications arise when one considers how to address legal yet harmful content. They stated that one of the biggest mistakes the United Kingdom made in its Online Safety Bill was attempting to tackle legal but harmful content through onerous obligations. Some suggested that a softer approach, based on self-regulation or standard setting, be adopted for such legal yet harmful content.

Other experts explained that differentiating between illegal and legal yet harmful content would be difficult, and voiced concern over outsourcing the judicial function of determining the legality of content to private bodies (i.e., online services). They argued that there is value in setting different risk-based obligations for a range of types of content. They observed that different types of content present unique challenges: some are more easily recognizable as harmful (e.g., child pornography), while others require varying degrees of analysis to ascertain whether they are harmful (e.g., hate speech, incitement to violence, disinformation, defamation). Reflecting these realities and challenges, it was proposed that content could be categorized for regulation based on factors beyond legality.

Defining Harmful Content

Many experts agreed that defining the scope of harmful content to be regulated would be challenging. Some argued that a lot of content is not harmful at face value, pointing to child sexual exploitation images as an example. They explained that videos of abuse are sometimes spliced into multiple separate images which, on their own, do not depict an apparent harm; only when put together is the harm clear and apparent. Some experts stressed the importance of relying on existing legal definitions for content like hate speech and not going beyond established law. Other experts highlighted that there are challenges with the current definitions in Canadian legislation for some types of content. They cited concerns with how terrorism, violent extremism and violent radicalization are defined and considered in Canadian criminal law, and explained that by relying on such definitions, the framework would risk the biased censoring of certain types of content.

Many experts stated that it would be important to define harmful content in a way that brings in lived experiences and intersectionality. They explained that many online harms are rooted in issues like colonization and misogyny, and that a regulatory framework would need to recognize these factors.

Range of Harms

Most, if not all, experts asserted that the range of harms should be expanded beyond the five types of content enumerated in the worksheets, which they considered too narrow in scope. Instead, they stated that the framework should cover a broad range of illegal and legal but harmful content. Several experts indicated that a short list of harms would be incompatible with an ex ante, risk-based ‘duty of care’ regulatory scheme. They cautioned against an “encyclopedic” approach that purports to regulate risk and harm adequately through an ever-growing list of harmful content.

Some experts explained that additional types of harmful content would need to be included if the framework were to delineate specific objects of regulation. A range of harmful content was said to be important to scope in, including:

  - fraud;
  - cyberbullying;
  - mass sharing of traumatic incidents;
  - defamatory content;
  - propaganda, false advertising and misleading political communications;
  - content or algorithms that contribute to unrealistic body image or create pressure to conform; and
  - content or algorithms that contribute to isolation or to diminished memory, concentration and ability to focus.

Many experts also explained that it would be important to be specific about what types of harms, if any, would be excluded under a risk-based framework. Some asserted that there may be types of illegal content that the framework does not seek to address, such as counterfeit goods or copyright infringement. The recent creation of a tort for online defamation was cited as an example of just how challenging it would be to extend the scope of regulated harm so broadly.

Harm as Subjective

Some experts provided examples of traumas that society has little vocabulary for and little ability to deal with. They explained that the online world can exacerbate these problems, as users encounter content that may not be harmful to one person but can be triggering for another. Experts insisted that a definition of harmful content must include an understanding of how individualized and contextual such material is. Harm, these experts explained, is perspective-driven. For instance, a racialized person with lived experience of the psychological toll of racism and its systemic impact would likely have a different perspective on what constitutes harmful content than a cis white male. Others emphasized that harm looks different for children than it does for adults. Many agreed that it would be important to consider the variety of users present online and acknowledge that some are more vulnerable than others to specific types of content.

Child Sexual Exploitation Content

Some experts emphasized that particularly egregious content like child pornography, or child sexual exploitation content more generally, may require its own framework. They explained that the equities associated with the removal of child pornography differ from those for other kinds of content, in that context simply does not matter with such material. By comparison, other types of content, like hate speech, may enjoy Charter protection in certain contexts. Experts explained that a take-down obligation with a specific timeframe would likely make the most sense for child pornography.

Misinformation and Disinformation

Many experts voiced concern over misinformation and disinformation and highlighted that they were not included in the proposed five types of harmful content. They explained that such content should be scoped in, as it has serious harmful effects on individual Canadians and on society as a whole. They stressed that Canadians' ability to have conversations about basic policy disagreements has been severely impacted and complicated by the phenomenon of disinformation. They explained that disinformation erodes the foundations of democracy, polarizes people, and reduces social dialogue to confrontational encounters.

Limited vs. Broad Scope of Regulated Content

The expert group disagreed over whether to define specific types of harmful content or use the concept of risk to capture a broad range of content.

Some argued that the need for specificity of content stemmed from the government's previously proposed take-down model. They stated that under a risk-based approach, such specificity was not necessary. They argued that the framework should have no categories or detailed definitions of harmful content; instead, they insisted that focus should be placed on risk-based measures and standard setting. They explained that it would be important for the framework not to predetermine what harmful content regulated services will find on their platforms. They also emphasized that there are harms that cannot be foreseen. As such, they stated, the framework should empower and encourage services to identify and manage harms themselves on an ongoing basis.

Others argued that harmful content should be defined and categorized. They explained that it would be critically important to define what the harm is. They stated that a major advantage of providing definitions for categories of content is that they give online services direction on the risks they are obligated to look for, moderate and manage. They argued that a legislative and regulatory framework could not simply tell online services to suppress harmful content writ large without providing direction and definition. These experts explained that without clarity and specificity in the legislation and regulation, there would be tremendous uncertainty about a platform's obligations, as well as about the rights of victims to seek redress. They insisted that categories of content would be inevitable and necessary, and should be a factor in how the expert group conceptualizes the objects of regulation.
