Summary of Session Four: Regulatory Powers

The Expert Advisory Group on Online Safety held its fourth session on May 6 from 1:00 to 4:00 p.m. EDT, on Regulatory Powers. Eleven members were present. The Advisory Group was joined by Government representatives from the Departments of Canadian Heritage, Justice, Innovation, Science and Economic Development, Public Safety, and the Privy Council Office. Representatives from the Royal Canadian Mounted Police were also present.

This summary provides an overview of the fourth session. Per the Terms of Reference for the Advisory Group, these sessions operate under the Chatham House Rule. As such, this summary does not attribute the views expressed to any one group member or organization. It outlines the views expressed during the session; reports areas of agreement, disagreement, and discussion; and organizes the discussion under thematic categories. It should not be considered a verbatim recitation of the discussion.

The topic for the workshop was “What regulatory authorities and powers would be needed for a Digital Safety Commissioner to be effective?”

The worksheet for the session included three objectives:

  1. Determine if the compliance and enforcement tools proposed for the Digital Safety Commissioner would be effective and appropriate.
  2. Explore the elements of an effective, proportionate Administrative Monetary Penalty (AMP) enforcement regime.
  3. Consider how progressive enforcement tools could be included to address persistent and/or gross non-compliance.

This summary reports on the perspectives raised in relation to these objectives and organizes the discussion points according to issue-specific themes.

Theme A: Digital Safety Commissioner and the Powers Required

There was consensus on the need for a regulatory body, which could take the form of a Digital Safety Commissioner. Experts agreed that, if a systems-based approach is taken, the Commissioner should have powers to audit, to inspect, to administer financial penalties, and to launch investigations to seek compliance, but views differed on the extent of these powers. A few mentioned that it would be important to think about what would be practical and achievable for the role of the Commissioner. Some indicated they were reluctant to give too much power to the Commissioner, while others noted that the regulator would need to have “teeth” to force compliance.

It was proposed that there could be two overall models for a regulator to follow:

  1. A Commissioner focused on risk-based matters. In this option, the Commissioner would have powers to audit, powers to inspect, and powers to conduct investigations.
  2. A Commissioner focused on a combined ex-ante and ex-post approach. This option would be broader in scope. The Commissioner would have risk-based/ex-ante approaches like option 1, but there could also be an individual, complaint-based, ex-post component.

Some experts noted that for both options, the Commissioner’s office should include a research component and not just be mandated to regulate and punish platforms. A research component would help the office of the Commissioner keep abreast of the evolving ecosystem.

Theme B: Digital Safety Commissioner’s Office

Ombudsperson and the Digital Safety Commissioner

The Expert Group discussed the idea of an Ombudsperson and how it could relate to a Digital Safety Commissioner. Experts proposed that an Ombudsperson could be more focused on individual complaints ex post, should users not be satisfied with how a given service responded to their concerns, flags and/or complaints. In this scheme, the Commissioner would assume the role of the regulator ex ante, with a mandate devoted to oversight and enforcement powers. Many argued that an Ombudsperson role should be embedded in the Commissioner’s office, and that information sharing between these functions would be useful. A few experts noted that the term ‘Ombudsperson’ would be recognizable across the country, as it is a common term with a shared meaning across other regimes in Canada.

It was mentioned that the Ombudsperson could play more of an adjudicative role, as distinguished from the Commissioner’s oversight role, and would have some authority to have certain content removed from platforms. Some experts noted that this would provide a level of comfort to victims. A few experts raised questions about where the line would be drawn between a private complaint and resolution versus the need for public authorities to be involved.

The Privacy Commissioner and the Digital Safety Commissioner

It was mentioned that the Privacy Commissioner could end up sharing many of the same functions being examined for a Digital Safety Commissioner – and that it might make sense for a Digital Safety Commissioner to be part of the Privacy Commissioner’s office. It was noted that the Privacy Commissioner has a complaint investigation function, an audit and oversight function, a reporting function and a leadership function, all of which could be found in a Digital Safety Commissioner’s regime. It was also clarified that systemic risk and harm could be connected to privacy in a number of ways. A few experts indicated that this could be a good way to reconcile the different equities implicated in online safety and privacy, anchored in values that would extend across the mandate (e.g., freedom of expression; equality rights; protection of information and reputation).

It was mentioned, however, that this idea would run up against the challenges the Privacy Commissioner has already faced in expanding its mandate. It was noted that there is hesitancy to put all regulatory design ambitions in an office that already struggles to increase its mandate, powers and resources.

The EU’s General Data Protection Regulation (GDPR) was proposed as a model to examine. Under this regulation, companies must appoint individuals, such as privacy officers, responsible for implementing the legislation within the company. When there are complaints, these officers receive the complaint and try to resolve the issue privately; if dissatisfaction remains, the issue moves to public recourse. Others noted flaws in the GDPR model, observing that other jurisdictions that have incorporated it have run into capacity and resourcing issues in enforcing it.

Theme C: Recourse and Complaints Mechanism

The Expert Group remained divided over the need for, and the design of, an independent, external recourse mechanism for content moderation decisions. Some members noted that a complaints or recourse mechanism would be needed for individual complaints, while others mentioned that a well-designed systems-based approach would reduce the need for one. It was noted that if both ex-ante and ex-post mechanisms are present, such as a Commissioner and an Ombudsperson, the roles and powers would have to be clearly defined.

Some members explained that a large amount of content is posted online daily, and a recourse mechanism open to all complaints would be unachievable. They noted that use of the recourse mechanism could be reduced if its mandate were limited to dealing with illegal content. Some members mentioned that there could be a multi-step test for complaints to pass through before they reach an Ombudsperson, essentially reducing the number of complaints. They mentioned that steps could include first going through the platform’s complaint mechanism (a systems-based approach could oblige platforms to have one in place) and requiring that the content be illegal. If the complaint meets the outlined steps, an independent third party with quasi-judicial powers of enforcement could have the platform remove the content. It was also suggested that an audit by the Commissioner could be triggered if the Ombudsperson finds that too many complaints concern a single platform.

On the other hand, members were uncertain how such a body would manage legal but harmful content. A few mentioned that, at some point, a decision would be needed on where the line is drawn with a recourse mechanism and what type of content would be filtered through it.

A few experts pointed out that there may be a need for a third element, in addition to the Commissioner and an Ombudsperson – a notice-and-takedown regime. They explained that if a recourse mechanism is adjudicating content, there would need to be a regime for issuing notices and takedown requests to platforms. They mentioned that this element could perhaps be limited to child sexual abuse material and clearly violent content, where faster or harsher mechanisms are needed to manage the content.

Some experts said that it would be important to ensure the regulatory scheme does not lead to increased privatized enforcement. They elaborated that there should be a proper balance of power, whereby neither the platforms nor the Government hold too much power in deciding what content is taken down. Some experts said that it would be helpful to look to other international jurisdictions, such as Australia, Germany and New Zealand, when designing an effective recourse mechanism.

A few members mentioned that capacity and funding would be needed on a large scale to support a recourse mechanism. This mechanism would need to be fully funded and have the resources and expertise needed to adjudicate properly and act swiftly.

Theme D: Enforcement Mechanisms

The Expert Advisory Group agreed that penalties should be imposed on platforms for systemic failures and non-compliance. A few experts believed that there could be greater platform compliance if the penalties outlined in Canada’s regulatory framework matched those of international jurisdictions.

Administrative Monetary Penalties (AMPs)

There was consensus on the use of Administrative Monetary Penalties against platforms for non-compliance. Experts mentioned that AMPs are used in other countries, with varying amounts, methods and grounds for issuing them; for example, the UK can issue an AMP of up to 6% of a company’s global revenues, while the proposal suggests 3% of global revenues. Experts noted that the amount should be high enough to have “teeth” and to force compliance.

Some experts mentioned that there could be two different levels of AMPs, but there were mixed views on how to determine those levels. A few mentioned that a company could be issued an AMP at a lower level for failure to comply with a duty to act responsibly, and at a higher level for more egregious violations. Others mentioned that there could be disputes over how to determine whether an entity falls into the lower or higher level. Others mentioned that the amount should be flexible, on a sliding scale, and left to the Commissioner based on how egregious the violations are. Many experts mentioned that there could be lower fines for single offences and higher fines for repeated behaviour or more egregious offences, with the Commissioner having the ability to determine whether the fine should be on the lower or higher end.

Many experts suggested that AMPs should scale based on a number of factors, and a few suggested that AMP revenues be reinvested to support victims and smaller platforms. They mentioned that the scaling of AMPs should be based on the size of the platform, its ability to pay, and the seriousness of the breach. It was mentioned that AMPs collected from platforms for non-compliance could be reinvested in victims’ groups or in smaller platforms to help them develop resources to better manage their content. This would make companies not only legally but also socially responsible for their actions.
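For illustration only, the sketch below shows one hypothetical way a sliding-scale AMP could be computed from the factors named above (platform size, ability to pay, and seriousness of the breach). All rates, tiers, thresholds and function names are invented for this illustration and do not reflect any figure proposed by the Expert Group or the Government.

    # Hypothetical sketch of a sliding-scale AMP calculation.
    # All rates, tiers and thresholds are invented for illustration and
    # do not reflect figures proposed by the Expert Advisory Group.

    def calculate_amp(global_revenue: float, seriousness: str, repeat_offence: bool) -> float:
        """Return an illustrative penalty scaled to revenue and severity."""
        # Base rate as a share of global revenue, keyed by seriousness (invented tiers).
        base_rates = {"low": 0.01, "medium": 0.02, "high": 0.03}
        rate = base_rates[seriousness]

        # Higher rate for repeated non-compliance, echoing the session's
        # suggestion of higher fines for repeated behaviour.
        if repeat_offence:
            rate *= 1.5

        penalty = global_revenue * rate

        # Reduce the penalty for smaller platforms so a percentage-based
        # fine does not force them to close (hypothetical threshold).
        if global_revenue < 50_000_000:
            penalty *= 0.5

        return penalty

    # Example: a repeated, high-seriousness breach by a mid-sized platform.
    print(f"AMP: ${calculate_amp(200_000_000, 'high', True):,.0f}")  # AMP: $9,000,000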

It was mentioned that the courts may need to be involved in collecting large amounts. Experts explained that payment of an AMP by a company could be unlikely, drawn out, or produce unintended effects. It was clarified that other mechanisms would also need to be in place to ensure compliance.

A few experts pointed to the importance of considering smaller companies while developing the AMPs regime. A percentage-based penalty would likely hit smaller companies harder than larger companies such as Facebook, and could cause the closure of these smaller companies.

Liability

Many experts disagreed with the idea of executive liability, whereby an executive from a regulated service could be charged criminally for non-compliance. It was mentioned that the deployment of this scheme and the designation of an executive could be determined unfairly within a company. It was clarified that this scheme could lead to little change within the company, because the executive could simply be dismissed from the role and the company in order to “get rid of the bad apples” and protect the company’s reputation. There were also questions about the residency of that employee and whether the appointee would need to live in Canada. A few mentioned that there are already regulatory schemes in place in Canada that designate an executive for liability. Others indicated that there could instead be an obligation for regulated services to appoint a compliance officer, which could drive compliance without introducing liability penalties.

The notion of carrying over intermediary liability protections was also mentioned. Intermediary liability protections reflect the idea that platforms are generally not responsible for harms caused by user-generated content, and that the primary locus of responsibility rests with the person who posts it. Another expert mentioned the idea of issuing fines to the individuals who upload the content, but noted that this could be a difficult bar to meet.

Website Blocking

Some experts mentioned that there could be a small set of extreme circumstances in which a blocking power could be used as a last resort. In these cases, the platform could only be blocked for a limited period of time, and the blocking power should only be used in extreme circumstances where fundamental human rights are affected. It was also noted that blocking powers should only be used after judicial review. It was mentioned that the courts already have inherent jurisdiction to issue an injunction.

An example shared suggested that website blocking is already taking place in Canada and has been for a number of years. For example, Project Cleanfeed Canada, in partnership with Cybertip.ca, blocks access to hundreds of child pornography websites and content, in particular foreign-based websites hosting the most egregious child sexual abuse material. This project was developed through an intensive consultation process and is supported by major TSPs.

Publishing Instances of Non-Compliance

The idea of publishing instances of non-compliance for public consumption was brought forward by a few experts. It was noted that damage to a company’s corporate image could drive compliance. For example, the Commissioner could make public instances of child pornography found on a company’s platform. Public reporting may not be effective in every case, but could have some teeth.

Next Steps

The next session of the Expert Advisory Group will take place on Friday, May 13 from 1:00-4:00 p.m. EDT. Experts will discuss a risk-based approach at this session.
