Summary of Session Two: Objects of Regulation

The Expert Advisory Group on Online Safety held its second session on April 21 from 1:00-4:00 p.m. EDT, on the Objects of Regulation. Eleven members were present. The Advisory Group was joined by Government representatives from the Departments of Canadian Heritage, Justice, Innovation, Science and Economic Development, Public Safety, and the Privy Council Office. Representatives from the Royal Canadian Mounted Police were also present.

This summary provides an overview of the second session. Per the Terms of Reference for the Advisory Group, these sessions operate under the Chatham House Rule. As such, this summary does not attribute the views expressed to any one group member or organization. It outlines the views expressed during the session; reports areas of agreement, disagreement, and discussion; and organizes the discussion under thematic categories. It should not be considered a verbatim recitation of the discussion.

The topic for the workshop was “What should the scope of regulated content be under the legislative and regulatory framework, and how should the regulated content be defined?”

The worksheet for the session included four objectives:

  1. Determine what content should be regulated under a legislative and regulatory framework;
  2. Assess whether the framework’s legislative and regulatory obligations should differ depending on the category of regulated content;
  3. Determine the best way to scope and define regulated content; and
  4. Determine the appropriate level of flexibility to provide in legislation and regulations.

This summary reports on the perspectives raised in relation to these objectives and organizes the discussion points according to issue-specific themes.

Theme A: The Legislative and Regulatory Framework

Objective of the Legislation

Many experts emphasized that it would be important to identify a clear objective for the framework. Some put forth possible objectives such as reducing harmful content online and compelling regulated services to be both proactive and reactive in managing content on their platforms. Several experts reiterated that the goal of the framework cannot be perfection, noting that the regime would necessarily be one of trial and error. Some suggested that an effective legislative framework should strive to be concise, accessible, clear, and flexible. Experts stated that such a framework would have a defined set of principles, introduce standards for what responsible action looks like, and apply to a large range of online services.

Role of Online Services

Many experts underlined the role that online services play in creating a space that enables harmful discussion or the circulation of harmful content. Some experts explained that as private entities, online services are empowered to make their own choices about what and who they allow on their platforms. Some stressed that having a social media account is a privilege, and not a right. Many experts highlighted the importance of creating a framework that incentivizes online services to rethink their business models, which currently promote the harmful content that the regime is seeking to limit. Within this discussion, the question of whether online services can be trusted arose. Some experts reiterated that even when notified of flagrant and egregious content on their platforms, many online services will not remove the content in question. They explained that as private entities motivated by profit, these services act in response to public pressure. The key to a successful framework, they claimed, is to compel regulated services to act in a responsible way through public shaming or profit incentives. Other experts stressed that once platforms commit to action, they innovate quite well and implement unique solutions. Finding a middle ground is key, they explained, and would support a regulatory environment where regulated services can be trusted to think creatively about how to ensure their platforms are safe for Canadians.

Constitutional Constraints

Several experts reiterated that any successful framework would be constrained by legal considerations and would have to operate within these boundaries. Some experts pointed to legislative provisions in online safety frameworks from other jurisdictions as examples of what to avoid. They explained that these provisions obligate regulated services to consider freedom of expression in fulfilling their obligations, but do not provide any detail on how to do so. Experts explained that outsourcing the duty to consider fundamental rights to private companies is problematic in any jurisdiction but is especially concerning in Canada, as, in their view, Canada does not have a clear articulation of what freedom of expression means. They indicated it would be especially important to be as clear as possible in legislation about what regulated services are expected to do in considering their users’ fundamental rights and freedoms.

Many experts stressed that there would be Charter concerns with a framework that seeks to impose obligations on services to remove content that is not illegal. They emphasized that a lot of this content would likely qualify as protected expression under section 2(b) of the Charter and thus could likely not justifiably be part of a regulatory takedown scheme.

Effects of Legislation

Experts discussed what the potential effects of online safety legislation would be both within Canada and abroad. Many emphasized the importance of drafting a framework that is best for a Canadian context and is tied to Canadian values. Some explained that Canadian legislation would set a standard for other countries to emulate. Others cautioned that any legislation introduced must not be susceptible to misuse by future governments. Some experts stressed that this legislation must be easy to comprehend as many average Canadians will be looking to it to understand their rights as users operating in the online space.

Theme B: The Regulatory Approach

Obligations

Many experts agreed that the framework should adopt a risk-based approach whereby regulated services are compelled to act in a responsible manner. Some insisted on defining what a “duty of care” or a “duty to act responsibly” means in practice, and on illustrating the difference between the two concepts. Others voiced that there is real value in articulating what the risk assessments imposed on platforms might look like. They stressed the importance of providing a standard or benchmark against which regulated services’ behaviour can be compared, and which would be used to determine whether a service met its obligations.

Some of the experts who advocated for a risk-based approach did so through a product-safety lens. They recommended imposing performance standards, through regulations and guidelines, and product assessments, through transparency reports and audits, on regulated services.

Advantages

Experts who advocated for a risk-based regulatory approach were specific about its advantages. First, they explained that it would allow regulated services to be creative in fulfilling their obligations. For instance, instead of prescribing what platforms should do to address the livestreaming of harmful content, a risk-based approach would allow platforms to develop their own solutions, which could include creating an incident response team or limiting the livestreaming of content to people in Canada of a certain age. Second, they stated that it would allow for an adaptable framework that can keep pace as technology evolves and new harms develop. They highlighted that under such a framework, legislation would not have to be amended every time new harms, or services, emerged. Third, experts stated that process-oriented obligations are more easily enforceable than results-oriented obligations. They explained that this is a significant advantage of the risk-based framework, especially since there may be difficulty enforcing the regime on foreign regulated services. Fourth, experts asserted that a risk-based model would allow for flexibility in both the definitions of content and the obligations imposed. This flexibility is necessary, they stated, as some content, like hate speech, is too opaque to be assessed in real time, whereas child sexual exploitation content and the livestreaming of violence should be removed immediately or prevented from even being posted.

Some experts emphasized that to be effective, a risk-based regulatory approach would need to be backed by a robust compliance and enforcement toolkit that includes transparency obligations, oversight, and sufficient enforcement to deter and address noncompliance. They explained that the regulator would need to be equipped with audit powers to investigate services’ decision-making in both an ad hoc and a recurring fashion. Some experts went further, advocating that the framework would need to compel information from regulated services on the algorithms and other systems, structures, and tools they use to distribute content to users. They explained that without this information it would not be possible to assess a service’s risk profile or the adequacy of its risk management, and that such information is key to holding services accountable.

Takedown Requirements

Experts disagreed on whether a legislative and regulatory framework should include elements of an ex post, ‘takedown’ regulatory model. On one hand, some experts spoke about how self-regulation does not work and insisted on the necessity of compelling regulated services, through specific obligations and corresponding enforcement measures, to take down egregious content within a specific period of time. These experts worried that without such an obligation, harmful content would not be removed, and victims would continue to suffer. They advocated that the takedown requirement could apply only to egregious content such as child sexual exploitation content or the livestreaming of an attack or homicide, and could be coupled with the availability of an appeal process. Other experts voiced strong disapproval of a takedown obligation. They explained that these regimes are not practical enough to address the breadth, scope, and variability of the content at issue, as it becomes difficult to set a precise timeframe. They stated that twenty-four hours would be too long for the livestreaming of an act of violence to remain in circulation online, but that the same period would not be enough to monitor and assess the risk posed by other, context-specific content like hate speech. They were also concerned about the chilling effects of a takedown model when it comes to adjudication and freedom of expression. Finally, some experts insisted that research shows that takedown obligations do not limit the distribution of harmful content. Instead, they explained, users immediately download the content, and it is quickly distributed through other channels, even when removed swiftly.

Some experts recommended taking a risk-based approach for lawful but harmful content, and adopting a takedown approach for illegal, or more egregious content. They explained that this could ensure that egregious content is immediately removed from circulation while also encouraging the development of unique product safety standards for content that is legal yet harmful.

Multiple experts emphasized that whatever framework is chosen, it would be critically important that it not incentivize a general system of monitoring.

Regulated Services

Some experts argued that a risk-based approach could encapsulate a wide range of services and harms without having to be detailed. They stated that under such an approach it would not be necessary to constrain regulated services by type, size, or design, as all online services can pose risk and, as such, should have a responsibility to manage it. Experts explained that under such a framework, services lower on the internet stack would be obligated to think about the safety of their service in a manner consistent with their functionality. They asserted that such services would be obligated to develop policies and to justify their operational decisions and specific performance metrics, just like top-stack regulated services. It was shared that shorter, concise legislation, on the model of the Canada Health Act when it was introduced, oriented to a risk-based ‘duty of care’ could achieve the goal of scoping in the range of services and harms online without over-defining and overburdening the regime.

Theme C: Defining and Categorizing Harmful Content

Range of Harms

Most, if not all, experts asserted that the range of harms should be expanded beyond the five types of content enumerated in the worksheets. They highlighted that the five types of content previously proposed were too narrow in scope. Instead, they stated that the framework should include a broad range of illegal and legal but harmful content. Several experts indicated that a short list of harms would be incompatible with an ex ante, risk-based ‘duty of care’ regulatory scheme. They cautioned against an “encyclopedic” approach that would purport to adequately regulate risk and harm through the use of a list of harmful content that keeps growing over time.

Some experts explained that additional types of harmful content would need to be included if the framework were to delineate specific objects of regulation. A range of harmful content was said to be important to scope in, including: fraud; cyberbullying; the mass sharing of traumatic incidents; defamatory content; propaganda, false advertising, and misleading political communications; content or algorithms that contribute to unrealistic body image or create pressure to conform; and content or algorithms that contribute to isolation or to diminished memory, concentration, and ability to focus.

Many experts also explained that it would be important to be specific about what types of harms, if any, would be excluded under a risk-based framework. Some asserted that there may be types of illegal content that the framework does not seek to address, such as counterfeit goods or copyright infringement. The recent creation of a tort of internet harassment was cited as an example of just how challenging it would be to extend the scope of regulated harm so broadly.

Limited vs. Broad Scope

The expert group disagreed over whether to define specific types of harmful content or use the concept of risk to capture a broad range of content.

Some argued that the need for specificity of content stemmed from the government’s previously proposed takedown model. They stated that under a risk-based approach, such specificity was not necessary. They argued that the framework should have no categories or detailed definitions of harmful content. Instead, they insisted that focus should be placed on risk-based measures and standard setting. They explained that it would be important for the framework not to predetermine what regulated services will find in terms of harmful content on their platforms. They also emphasized that there are harms that cannot be foreseen. As such, they stated, the framework should empower and encourage services to identify and manage harms themselves on an ongoing basis.

Others argued that harmful content should be defined and categorized. They explained that it would be critically important to define what the harm is. They stated that a major advantage to providing definitions of categories of content is that it gives online services direction on the risk that they are obligated to look for, moderate and manage. They argued that a legislative and regulatory framework could not simply tell online services to suppress harmful content writ large without providing direction and definition. These experts explained that there would be a tremendous amount of uncertainty about a platform’s obligations, as well as the rights of victims to seek redress, without clarity and specificity in the legislation and regulation. They insisted that categories of content would be inevitable and necessary, and should be a factor in how the expert group conceptualizes the objects of regulation.

Defining Harmful Content

Many experts agreed that defining the scope of harmful content to be regulated would be challenging. Some argued that there is a lot of content that is not harmful at face value. They pointed to child sexual exploitation images as an example, explaining that videos of abuse are sometimes spliced into multiple separate images which, on their own, do not depict an apparent harm; only when put together does the harm become clear. Some experts stressed the importance of relying on existing legal definitions for content like hate speech and not going beyond the laws already established. Other experts highlighted that there are challenges with current definitions in Canadian legislation for some types of content. They cited concerns with how terrorism, violent extremism, and violent radicalization are defined and considered in Canadian criminal law. By relying on existing definitions, they explained, the framework would risk leading to the biased censoring of certain types of content.

Many experts stated that it would be important to find a way to define harmful content in a way that brings in lived experiences and intersectionality. They explained that a number of harms online are exemplified by issues like colonization and misogyny, and a regulatory framework would need to recognize these factors.

Child Sexual Exploitation Content

Some experts emphasized that particularly egregious content like child pornography, or child sexual exploitation content more generally, may require its own framework. They explained that the equities associated with the removal of child pornography are different from those associated with other kinds of content, in that context simply does not matter with such material. In comparison, other types of content, like hate speech, may enjoy Charter protection in certain contexts. Experts explained that a takedown obligation with a specific timeframe would likely make the most sense for child pornography.

Misinformation and Disinformation

Many experts voiced concern over misinformation and disinformation and highlighted that they were not included in the proposed five types of harmful content. They explained that such content should be scoped in, as it has serious harmful effects on individual Canadians and on society as a whole. They stressed that Canadians’ ability to have conversations about basic policy disagreements has been severely impacted and complicated by the phenomenon of disinformation. They explained that it erodes the foundations of democracy, polarizes people, and reduces social dialogue to confrontational encounters.

Differentiating between Illegal and Legal yet Harmful Content

Many experts recommended that the framework differentiate between illegal and legal yet harmful content, imposing distinct obligations on regulated services for each type of content. They argued that illegal content should be removed and stressed that complications arise when one considers how to address legal yet harmful content. They stated that one of the biggest mistakes made by the United Kingdom in its Online Safety Bill has been to try to tackle legal but harmful content through onerous obligations. Some suggested that a softer approach be adopted for such legal yet harmful content, one based in self-regulation or standard setting.

Other experts explained that differentiating between illegal and legal yet harmful content would be difficult, and voiced concern over outsourcing the judicial function of determining the legality of content to private bodies (i.e., online services). They argued that there is value in setting different risk-based obligations based on a range of types of content. They observed that different types of content present unique challenges: some are more easily recognizable for their harm (e.g., child pornography), while others require variable degrees of analysis to ascertain whether they are harmful (e.g., hate speech; incitement to violence; disinformation; defamation). Reflecting these realities and challenges, it was proposed that content could be categorized for regulation based on other factors beyond legality.

The Connection between Content and Harm

Experts disagreed on the connection between harmful content and real-world harms. Some experts shared that there is a broad assumption that just because content exists online, it must naturally be causing harmful effects. They stated that there is a lack of research that examines how content is received and how it impacts people, and that it is therefore difficult to make this connection. Other experts stated that there is evidence of a link between exposure to violence online and violent behaviour offline. They argued that research shows that exposure to extremist content online is associated with increases in extremist attitudes. They asserted that there is a catalyzing, though not causal, link between the content and extremist behaviours. They also stated that emerging research demonstrates that the combination of social isolation and high internet consumption of violent content is a risk factor for violent action.

Harm as Subjective

Some experts provided examples of traumas that society has little vocabulary and ability to deal with. They explained that the online world can exacerbate these problems, as users encounter content that may not be harmful for someone else but can be triggering for them. Experts insisted that a definition of harmful content must include an understanding of how individualized and contextual such material is. Harm, these experts explained, is perspective driven. For instance, a racialized person with lived experience of the psychological toll of racism and its systemic impact would likely have a different perspective on what constitutes harmful content compared to a cis-white male. Others emphasized that harm looks different for children than it does for adults. Many agreed that it would be important to consider the variety of users present online and acknowledge that some are more vulnerable than others to specific types of content.

Theme D: Recourse Mechanisms

Most experts expressed that Canadians need a way to voice their grievances regarding platform behaviour. Some insisted that victims must be given the tools to trigger the content removal process. They illustrated that the appeal process on many online services is very difficult for users to navigate. They explained that in many instances legitimate actors and protest movements get suppressed with no avenue for redress. As such, experts emphasized, obligations must require services to provide user-friendly appeal processes that are quick, transparent, and fully functional.

Some experts suggested that each regulated service be required to have its own ombudsperson as part of what it means to be a responsible business. They explained that such a requirement would be no different from mandating that companies have chief privacy officers, as the EU General Data Protection Regulation requires. Some experts stated that an internal ombudsperson would need to be supplemented by an external ombudsperson office independent from both Government and the regulated services. They explained that the office would be staffed with individuals who have the necessary expertise to make assessments about context-specific content. Other experts questioned whether a social media council would be an appropriate way to provide victims with such recourse.

Theme E: External Stakeholder Engagement

Experts also spoke about the upcoming stakeholder engagement process. Some emphasized the need to engage with stakeholders, voicing the benefit of hearing from the lived experiences of different communities and learning from industry players about their capabilities and constraints.

Many experts expressed concern over the short timeframe given to them to meet with external stakeholders. They explained that it will be necessary, but difficult, to engage with a variety of stakeholders including victim groups, civil society, and industry, within this short period of time.

Next Steps

The next workshop for the Expert Advisory Group will take place on Friday, April 29 from 1:00-4:00 p.m. EDT. Experts will discuss the Obligations worksheet at this session.
