Summary of Session Five: Risk-Based Approach

The Expert Advisory Group on Online Safety held its fifth session on May 13 from 1:00-4:00 p.m. EDT, on a Risk-Based Approach. Eleven members were present. The Advisory Group was joined by Government representatives from the Departments of Canadian Heritage, Justice, Innovation, Science and Economic Development, Public Safety, Women and Gender Equality, and the Privy Council Office. Representatives from the Royal Canadian Mounted Police were also present.

This summary provides an overview of the fifth session. Per the Terms of Reference for the Advisory Group, these sessions operate under the Chatham House Rule. As such, this summary does not attribute the views expressed to any one group member or organization. It outlines the views expressed during the session; reports areas of agreement, disagreement, and discussion; and organizes the discussion under thematic categories. It should not be considered a verbatim recitation of the discussion.

This session was added to the initial schedule after the Expert Group expressed an interest in further exploring the components of a risk-based legislative and regulatory scheme.

The supplementary worksheets for the session included three objectives:

  1. Refine the scope of regulated entities;
  2. Refine the scope of regulated content; and
  3. Specify what duties regulated services ought to have, and how prescriptive those duties should be.

This summary reports on the perspectives raised in relation to these objectives and organizes the discussion points according to issue-specific themes.

Theme A: Defining “Duty to Act Responsibly”

Flexibility of the definition

The group opened the discussion on a risk-based approach by exploring the different components of a “duty to act responsibly” and the level of specificity required when defining legislative and regulatory obligations. Experts expressed different views on the terms “duty of care” and “duty to act responsibly”. Some highlighted that “duty of care” could set clearer expectations because it is well established in existing law. Other experts raised concerns about the legal baggage associated with the term “duty of care”, explaining that a new standard would be preferable because it would provide more flexibility.

Most experts agreed that the legislated duty should be flexible enough not to become outdated within a few years, given the changing nature of online harms, but not so vague that online services, especially small platforms, cannot tell what is expected of them. Some experts argued that while ambiguous or ill-defined obligations could be problematic, they could also prompt regulated services to be creative and innovative in finding new ways to fulfill their duty to keep users safe. A few experts also explained that a flexible definition would enable the regulator to adapt obligations to new online services’ business models and features. Some experts also shared the concern that defining the duty to act responsibly too broadly could result in the over-deletion of content from diverse and marginalized voices, groups that already face barriers in access to justice.

Framing the duty to act responsibly around the exercise of positive rights

Some experts raised the importance of framing online safety as being about more than just content removal. According to these experts, online safety is also about the exercise of positive rights, more specifically, fundamental human rights to free expression, privacy, and equality. Some experts argued that one of the framework’s objectives should be to ensure the equal value of users’ fundamental rights online. Other experts emphasized that the framework should consider users’ privacy rights, especially when requiring regulated services to publish reports and share data.

Elements of a duty to act responsibly

Some experts expressed that the duty to act responsibly should be viewed as a “duty to action content”. In their view, actioning content does not necessarily mean removing it; it can also involve disrupting the creation, sharing, and distribution of harmful content through technical measures such as limiting sharing, enabling warnings and flagging, and demoting content. Experts noted that duties to action content could differ based on the type of harmful content at stake. For instance, they explained, content tied to traditionally illegal categories should be subject to stricter obligations to action it. Other experts approached the duty to act responsibly as imposing a “responsibility” on regulated services towards different stakeholders, such as the general public, journalists, civil society, and children.

A few experts expressed a desire to see a strong preamble in the legislation that would outline the balancing of harm reduction with the protection of human rights. According to such experts, the legislation should also clearly define the terms used, acting as a reference guide for individuals who are not trained in law.

Some experts argued that legislative obligations should be supplemented and further detailed by regulations. They suggested that legislation set out the basic obligations attached to a duty to act responsibly, and that further regulations and codes of practice be developed to allow obligations to be tailored to each regulated entity. Many experts stated that the duty to act responsibly should include obligations to be transparent, conduct risk assessments, and adopt mitigation measures for any risks identified.

Most experts agreed that regulated services should be required to be transparent about both their risk assessments and their risk mitigation measures. They also felt that services should be compelled to share a range of data with public officials, journalists, researchers, and civil society. Some experts suggested that regulated services be required to conduct independent self-audits to ensure they are identifying and managing risk in compliance with their duties. Some experts emphasized that regulated services should be mandated to publish their content moderation manuals. Other experts highlighted that there will presumably be information that companies may not wish to make public for legitimate reasons, for instance if there is a risk that such information could be gamed or weaponized. These experts suggested allowing regulated services to protect certain confidential information.

Duty to act responsibly for smaller platforms

A few experts cautioned that it would be important to consider smaller platforms when designing regulatory requirements. They suggested implementing a guide that outlines the steps these services must take to fulfill their duty to act responsibly, along with space for them to detail what they are doing beyond the basic requirements or to further explain their approach. They explained that the regulator could vet the submissions it receives from smaller services to confirm that they are fulfilling their duty or to guide them in the right direction.

Theme B: Differential nature of obligations based on the type of content

Child Sexual Abuse Material and Protection of Children

Some experts raised that a “safety by design” framework could be difficult to reconcile with the protection of children. They also explained that more robust obligations would be necessary to deal with child sexual abuse material (CSAM). They suggested two elements to help promote the protection of children under a risk-based scheme: 1) compelling services to adopt existing hash-matching technology to stop CSAM from resurfacing on their platforms; and 2) compelling services to use content moderation tools to compensate for gaps in AI detection.

Some experts also suggested that there be a general duty to act responsibly and a specific duty of care for children (“duty of special care”) since the latter are entitled to higher protection under law. The duty to act responsibly could encompass separate and stricter obligations regarding CSAM and the protection of children since, according to experts, online services have a fiduciary role. Many experts also supported the idea of adding “likely to be accessed by a child” to the risk framework.

Experts argued that a legislative and regulatory framework should compel differential risk assessments based on the child’s age.

Disinformation

Many experts stressed the importance of addressing disinformation in an online safety framework. However, they recognized the challenges in scoping and defining this type of content, given that it is highly dependent on context as well as on the way individuals interpret and connect facts. These experts explained that while the effects of disinformation are not obvious on a day-to-day basis, it can have serious consequences over time. Some experts suggested requiring regulated services to include in their Digital Safety Plans a crisis management plan for massive disinformation campaigns that pose serious risks to Canadians’ safety, such as during a war or a pandemic. They pointed to the example of the Critical Election Incident Public Protocol, but noted that monitoring of platforms under this protocol takes place only during electoral periods, leaving malign actors free to remain active and influence voters’ perceptions between elections. Experts also referred to other government programs, such as the Digital Citizen Initiative, as potential ways to increase citizens’ digital literacy.

Extremist Content

A few experts raised the issue of extremist content being accessible on platforms in languages other than English. Experts mentioned that ISIS content is usually removed from platforms in English, but that it is still discoverable in other languages such as Arabic. They suggested that the framework include requirements for content moderation in languages other than English, but also ensure that such moderation activities are conducted accurately.

These experts also emphasized that there is a real risk of regulated services over-policing their platforms, labeling activist content like material from Black Lives Matter campaigns as extremist content. They explained that there are biases in the assessments of content by platforms. Experts suggested that social movements be identified in the regulatory scheme as a protected category of content.

These experts also pointed out that the Global Internet Forum to Counter Terrorism (GIFCT) has a Crisis Response framework, under which multiple platforms cooperate to remove content after certain major terrorist events, and that, in their opinion, it has been working fairly well. According to these experts, the regulatory scheme could nudge platforms towards improving their practices under this framework.

Defining Regulated Content

Some experts expressed that there is a real risk in not clearly defining what is considered harmful content under the legislative and regulatory framework. According to them, not defining harmful content clearly enough could lead to regulated services’ content moderation activities being overly broad and infringing users’ freedom of expression. Some experts suggested that the legislative and regulatory regime define harmful content in a non-exhaustive manner to allow for the inclusion of new harms over time.

Other experts also raised that the term “contenu scandaleux” in French is not an accurate translation of “egregious content” in English and that it would likely not pass the reasonableness test as a limit on freedom of expression.

Theme C: A Broader Framework on Online Safety

Some experts emphasized that a legislative framework alone cannot solve the issue of harmful content online. They suggested that the Government explore a three-pronged framework for online safety involving: 1) programs for prevention and education, and connections with community organizations that conduct research in this domain; 2) the protection of children and other vulnerable citizens based on existing legal frameworks; and 3) the legislative and regulatory framework for online services discussed by the Expert Group. A few experts referred to the New Zealand model as one that could serve as inspiration, as it includes a multitude of education and prevention tools.

A few experts suggested providing online services with explanatory models to help them conduct their risk assessments. In their view, the risk assessments should look at the nature of the content and its characteristics, the characteristics of the platform, the population targeted by the content, who is spreading the content, its reach, the harm caused by it, and its impact on human rights.

Theme D: Scoping Regulated Entities

Experts did not reach a consensus on what types of services should be regulated. Most experts agreed that the regulated entities should reflect the Internet ecosystem in which harmful activities take place. Disagreement arose over whether to regulate services across the entire internet stack.

Intermediaries

Some experts suggested that the framework should regulate services that play a mediation role between users. Others recommended that services with oversight capacity over the content circulating on their platforms be regulated. A few experts noted that intermediaries that do not perform an editorial function can only be held liable for content they host once it is demonstrated that they were aware of the illicit nature of that content.

Online Marketplace Services

Many experts referred to platforms like Amazon and Airbnb, explaining that such services offer marketplace features and would be liable for the products they sell and the transactions occurring on their websites. Some experts argued that online marketplace services, given their commercial and transactional nature, are subject to other applicable laws and should not fall under a legislative regime addressing online harms per se. Other experts argued that services that allow comments and reviews on their platforms should be within regulatory scope.

Some experts stressed that online marketplace services use search and recommendation algorithms that prioritize content, which can result in the spread of harmful content. They suggested that these services be regulated, and that obligations regarding algorithmic accountability be imposed on them. Experts explained that such obligations would be beneficial not only for an online safety framework, but also from a wider consumer protection lens. Some experts raised that many platforms will likely be accountable to multiple regulators such as the Competition Bureau, the Privacy Commissioner, and a new online safety regulatory body.

Regulated entities providing multiple online services

Experts also explored how best to address services with multiple functions, such as Amazon, which runs a marketplace platform with review and comment features but also provides cloud services through Amazon Web Services. When discussing whether such entities should be required to complete separate risk assessments, some experts referred to the concept of materiality under Environmental, Social and Governance (ESG) reporting, where reporting relates to material risk. According to those experts, conglomerates use tools like material risk to assess their portfolio and their different lines of business.

Categorization of regulated entities

Some experts cautioned against developing categories of regulated entities, as doing so would move the framework away from a risk-based approach. In their view, attaching specific obligations to categories of regulated services might lead to a rigid framework. They also raised the risk of displacing content from one platform to another. They were of the view that distinguishing between different types of platforms would be appropriate when imposing regulatory obligations regarding services’ risk assessments, but should not be the starting point for setting legislative obligations.

Next Steps

The next session of the Expert Advisory Group will take place on Friday, May 20 from 1:00-4:00 p.m. EDT. Experts will discuss Freedom of Expression and Other Rights at this session.
