Supplementary Worksheet: Subjects of Regulation

What is a supplementary worksheet?

The Expert Advisory Group on Online Safety launched by the Minister of Canadian Heritage on March 30, 2022, meets each week to discuss key elements of the legislative and regulatory framework for online safety. As discussions take place, additional questions emerge.

This supplementary worksheet contains follow-up questions to collect more detailed information to inform the design of an effective and proportionate legislative and regulatory framework. Neither the group’s advice nor the preliminary views expressed in the worksheets constitute the final views of the Department of Canadian Heritage or of the Government of Canada.

Objective

  1. Refine the scope of regulated entities and their obligations

Follow-up questions

  1. Do you believe that an effective and efficient legislative and regulatory framework to address harmful content online should bring the following types of services into scope:
    1. Top of the Stack
      1. Social media platforms (e.g., Facebook, Instagram, Twitter)
      2. Video sharing platforms (e.g., YouTube, TikTok)
        1. Including adult content sites (e.g., Pornhub)
      3. Search engines (e.g., Google)
      4. Online forums (e.g., Reddit, Discord)
      5. Online marketplaces (e.g., Facebook Marketplace, Craigslist)
      6. Dating applications (e.g., Tinder, Bumble)
      7. Blogging services (e.g., Tumblr)
      8. Other websites (e.g., Airbnb, Uber)
      9. Private messaging (e.g., WhatsApp, Facebook Messenger)
      10. Online video games
      11. News websites (e.g., CTV.ca)
      12. Non-marketplace stores (e.g., Ikea.com, or a store selling historical military memorabilia or sex toys)
      13. Other websites that present their own content without user interaction
    2. Lower Stack Services
      1. Web hosting providers (e.g., Shopify or WordPress)
      2. Cloud hosting services (e.g., Amazon Web Services and Microsoft Azure)
      3. Content delivery networks (e.g., Amazon CloudFront, Cloudflare)
      4. Domain name registrars (e.g., Domain.com, Bluehost, Network Solutions)
      5. Internet service providers (e.g., Bell, Rogers, Videotron)
  2. Why do you believe the services you have identified above should be regulated under an Online Safety framework?
    1. Do these services pose a particular risk to Canadians?
    2. Should they have a particular responsibility to ensure the safety of Canadians online?
    3. Will services lower on the internet-stack be able to identify harmful content? Put differently, will they be able to determine whether content is likely to have harmful effects?
  3. Are there limits for the kinds of services that should be regulated under a framework for online safety? If so, what are they?
  4. What types of obligations do you believe should be imposed on these services?
    1. What specific steps would these regulated entities need to take to fulfill their overall “duty of care”, that is, to take reasonable steps to ensure their products are safe for Canadians?

Summary of the Expert Advisory Group Discussion

The Scope of Entities Captured

During the discussion on Subjects of Regulation, experts agreed that the scope of entities captured under the legislative framework should be broad. Many shared that a broad scope of application is a natural consequence of a systems-based, ‘duty of care’ model for regulation. Most experts noted that the scope of regulated entities should be broad enough to incorporate all entities that communicate online, and that duty of care obligations should apply not only to social media platforms but also to other actors within the technology stack, such as intermediary services. A few explained that it would be hard to impose generalized obligations to act responsibly on some platforms (e.g., social media companies) but not on others that operate at the same level of the tech stack (e.g., Airbnb, video gaming platforms). Some experts noted that a broad definition would help capture evolving and emerging technologies and future-proof the legislation.

Many experts mentioned that there is justification for looking more widely at including some interactive services, like Airbnb and gaming platforms, or content delivery networks, like Cloudflare, within the scope of the legislation, but a few experts also indicated that it could be difficult to regulate all of the various business models found across platforms. For example, it may be difficult to regulate platforms that are based on live verbal communication, such as Clubhouse, or gaming platforms. Many experts noted that a broad definition could be the best approach for capturing these different types of business models. A few added that a broader scope would also help to include entities that are successful in the recruitment of violent extremists. These communities adapt quickly and have been moving to video game services, file-sharing sites, and live audio applications like Clubhouse.

A few experts spoke to the concept of intermediary liability. They highlighted that setting clear rules of the road for intermediary liability could, for example, create a regulatory environment wherein a regulated service would not be liable for the content itself, but could be liable for not doing a good enough job of monitoring, moderating and managing harmful content transmitted over its service.

A few experts cautioned against an overly broad scope. They noted that the scope would need to be broad enough to capture entities that are likely to cause harm, while also balancing Charter rights and freedoms, including privacy rights. Some pointed to the challenge of regulating entities that facilitate the hosting and distribution of harmful content, particularly when it comes to child sexual exploitation, and discussed how this could include the regulation of entities deeper in the tech stack (e.g., server operators) that are not as intuitively public-facing as top-of-the-stack applications like social media platforms.

Adult Content Services

Members discussed whether adult content and adult content services should be regulated or treated differently from other content and services.

Many experts suggested that child sexual exploitation content should be treated the same as other types of content and should not constitute its own category. Experts mentioned that there is room for this type of content to be addressed within a risk-based framework. They indicated that treating it differently may create a slippery slope: in practice, regulated services would push for other categories of harmful content to be treated in a unique fashion as well.

A few experts mentioned that, although they do not think child sexual exploitation content should be treated separately, there could be different obligations or mechanisms in place for more immediate action against it and other extremely prejudicial content. For example, automated mechanisms work well with certain types of content, such as child sexual exploitation content, by matching uploads against hashes in extensive databases of known material to enable quick action. Some experts noted that mechanisms that work well for certain types of content may not work as well for others, and that other approaches may be needed to address those types of content.
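
To illustrate the mechanism the experts referred to, below is a minimal, hypothetical sketch of hash-based matching. It uses a plain cryptographic hash (SHA-256) as a stand-in for the perceptual-hashing systems (such as PhotoDNA) that services actually deploy against known child sexual exploitation imagery; the hash value, file name and database are invented for illustration only.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hashes of known harmful material, as might be
# supplied through an industry hash-sharing programme. The value below is
# a placeholder, not a real entry.
KNOWN_HASHES = {
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
}


def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_harmful(path: Path) -> bool:
    """Check an uploaded file against the database of known hashes."""
    return file_hash(path) in KNOWN_HASHES


if __name__ == "__main__":
    upload = Path("incoming_upload.bin")  # hypothetical uploaded file
    if upload.exists() and is_known_harmful(upload):
        print("Match against known-content database: queue for immediate action.")
    else:
        print("No match: route through ordinary moderation.")
```

Real deployments differ in important ways: perceptual hashes tolerate small edits to an image, whereas an exact cryptographic hash does not, and matches are typically escalated to human reviewers and reporting bodies rather than acted on fully automatically.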

Duty of Care Approach and Obligations of Regulated Entities

Most experts noted that an ex ante ‘duty of care’ approach that places responsibility and obligations on platforms would be best suited to dealing with platforms with varying business models.

Members discussed the idea of categorizing regulated entities, such that the obligations for a regulated entity could vary according to its size, volume of content, or risk of harm. The experts shared a range of views on this topic.

A few cautioned against the use of categories. Some mentioned that categories might be useful for adult content services or content such as child sexual exploitation content, because these are comparatively clear-cut, but that it would be difficult to establish clear categories for other types of services and content. It was also stressed that multiple regulatory categories could lead to debates about which entities belong to which categories, especially if certain entities fall into a grey area between categories, and could undermine the efficiency of the regime as a whole.

Others shared that the focus should be on risk first, rather than on arranging services by category or type. They explained that a ‘duty of care’ regulatory framework would not, by definition, be one in which some internet services are regulated and others are not. They shared that a focus on risk assessment would draw out natural fault lines, whereby categories of low, medium and high risk would surface naturally. They mentioned that, in this model, the onus would be on the platforms to demonstrate their risk level. They explained that there could be more stringent obligations on entities with the highest risk of harm, and a lower level of obligation on entities with the lowest risk of harm.

Many experts mentioned that regulated entities should have a ‘duty to act responsibly’, which includes determining the degree of risk on their platforms. They stressed the importance of a clear, broad and accessible definition of risks that would be managed under a prospective 'duty of care' or ‘duty to act responsibly’ model.

Enforcement of Obligations on Regulated Entities

A few experts raised the importance of thinking about implementation. They stressed the need to develop an approach that does not exceed the practical limits of the government’s ability to deliver and execute.

Experts said that a systems-based approach is particularly useful because platforms are also a source of innovation in devising solutions to reduce harm and to address some of the more ‘lawful but awful’ types of content. In this view, regulation should create incentives for platforms to address harmful content on their services, while pairing those incentives with accountability mechanisms.

A few experts mentioned that they do not trust larger platforms to do what they should be doing under a ‘duty of care’ model. They strongly advised against self-regulation or self-monitoring altogether because, in their view, it simply does not work. They emphasized that the tools that large companies already have in place to address harmful content fall short when it comes to protecting minority and vulnerable communities.
