Concluding Workshop Summary
The Expert Advisory Group on Online Safety held its final session on June 10 from 9:00 am until 2:00 pm EDT. Ten members were present. The session was conducted as a hybrid workshop, with eight members participating in person and two participating over videoconference. The Advisory Group was joined by Government representatives from the Departments of Canadian Heritage, Justice, Innovation, Science and Economic Development, Public Safety, Women and Gender Equality, and the Privy Council Office. Representatives from the Royal Canadian Mounted Police were also present.
This summary provides an overview of the concluding session. Per the Terms of Reference for the Advisory Group, these sessions operate under Chatham House Rule. As such, this summary does not attribute the views expressed to any one group member or organization. It outlines the views expressed during the session; reports areas of agreement, disagreement, and discussion; and organizes the discussion under thematic categories. It should not be considered a verbatim recitation of the discussion.
The objectives for the session were to:
- Canvass the main takeaways from each workshop discussion;
- Consider how a risk-based approach to online safety might work in practice;
- Delve deeper into some remaining core questions; and
- Receive final advice regarding next steps.
The objective of this session was to surface the range of opinions on the key issues involved in a regulatory framework and gain clarity on both points of agreement and disagreement. It was not to reach consensus on every aspect of a legislative and regulatory framework for online safety.
This summary reports on the perspectives raised in relation to this objective and organizes the discussion points according to issue-specific themes (see Footnote 1).
Footnote 1: The Expert Advisory Group limited its outreach to meetings with representatives of the United Kingdom, the European Union, and Australia, which took place on June 8, 2022.
Theme A: Introducing a Risk-Based Framework
Experts advised that a risk-based approach to regulation, anchored in a duty to act responsibly, would be most appropriate. They explained that the duty to act responsibly ought to obligate regulated services to fulfill three key steps. The first step would have regulated services identify and assess the risks posed by their service. In the second step, services would mitigate these risks. Third, services would report on their identification and mitigation tools to ensure accountability. On this latter step, experts emphasized the need for rigorous, significant and sophisticated transparency requirements. They explained that the information gathered through the accountability step would then feed into the services’ next round of risk assessment. This cycle would promote continuous improvement of services’ systems and tools aimed at reducing the risk they pose to users.
Experts emphasized that the legislative and regulatory framework should strive for meaningful improvement, not perfection. They explained that the risk-based approach would not remedy all incidents of harmful content online. Rather, they stated, this model would improve online services’ overall practices in addressing risk. Experts emphasized that the approach should involve regulated services continually testing the effectiveness of certain tools and adapting them based on their findings. For example, experts explained that services could be compelled to adopt ‘Safety by Design’ considerations when developing and implementing new features and updating and/or modifying older features. This approach, experts emphasized, would provide data and information that other platforms, the regulator, and researchers could use to build an improved approach to mitigating risk online.
Experts stressed that a regulatory regime should put equal emphasis on managing risk and protecting human rights. They explained that regulated services must both minimize the risk posed by their services and maximize fundamental rights and freedoms including the rights of Indigenous Peoples under section 35 of the Constitution Act, privacy rights, the freedom of expression, and equality rights, among others. Experts stressed the need to balance Charter rights and emphasized that such balancing ought to be included in the preamble of any new legislation on online safety.
Experts stated that legislative and regulatory obligations would need to be flexible and future-proof to ensure they do not become outdated. They emphasized that the legislation should not be rendered obsolete by the next technological advancement. Experts stressed that the legislation should be technology neutral and focus on outcomes and principles instead of specific tasks that regulated entities would need to perform.
Experts asserted that it would be necessary to impose a special duty to protect children. They explained that children must have enhanced protections because of their inherent vulnerability, an area where existing laws governing digital spaces fall short. They stressed that online services must be obligated to assess and mitigate any risk that their services pose to children specifically. Experts discussed obligations such as safety by design tools, user-reporting mechanisms, an independent avenue for user appeal, and timeframes for removing content specifically harmful to children. Some experts mentioned that Australia’s safety by design approach could be a good departure point for Canada.
Experts affirmed the need for non-legislative tools. They emphasized that prevention is essential and stressed that public education ought to be a fundamental component of any framework. They also suggested implementing programs to improve media literacy and developing a concept of e-citizenship through outreach programs in schools and communities.
Experts emphasized the need for a whole-of-Government approach to addressing the risks and challenges that come with digital services. They noted that competition law, privacy reform, regulations around user data, artificial intelligence, algorithms, and the online safety framework must all work together to impose an overall duty to act responsibly on online services. Experts explained that treating these matters as separate silos would be inefficient and lead to confusion for both users and regulated services. They stressed the need for Government departments and agencies to work in tandem through a networked approach to online governance.
Experts disagreed on whether legislation should be specific or broad in setting out obligations for regulated services. Some argued for a broad approach, explaining that the duty to act responsibly would change over time and that flexibility would therefore be needed regarding what fulfilling the duty means. They explained that one of the benefits of a duty to act responsibly is that it could be calibrated depending on the type of service regulated, with each service determining how best to fulfill the duty based on its own business structure. They stated that precise obligations would undermine the flexibility of this approach. Instead, specific obligations could be set through codes of practice, informed by regulated services’ due diligence and the cycle of the risk-based approach discussed earlier. On the other hand, other experts argued that specific legislated obligations are necessary for clarity and certainty. Without specific obligations, these experts argued, services might not know what they are expected to do, nor would the regulator know whether regulated services were fulfilling their duty.
Experts diverged on whether the legislation should compel services to remove content. Some experts voiced concern over mandating removal of any form of content, except perhaps content that explicitly calls for violence and child sexual exploitation content. Other experts voiced a preference for obligations to remove a wider range of content. They explained that it would be better to err on the side of caution, expressing a preference for over-removing content rather than under-removing it. Other experts stressed that an obligation for content removal is not compatible with an approach focused on risk management and could risk disproportionately affecting marginalized groups. They explained that instead of being mandated to take down content, services would be obligated to manage their risk, which they could do through takedowns as well as through other tools. Many experts emphasized that there is a range of options between simply deciding whether to leave content up or take it down.
Experts emphasized that particularly egregious content like child sexual exploitation content would require its own solution. They explained that the equities associated with the removal of child pornography differ from those associated with other kinds of content, in that context simply does not matter with such material. In comparison, other types of content, like hate speech, may enjoy Charter protection in certain contexts. Some experts explained that a takedown obligation with a specific timeframe would make the most sense for child sexual exploitation content.
Experts differed on whether to require entities to proactively monitor content on their services. Some experts cautioned against requiring or even incentivizing proactive or general monitoring by regulated services. They stated that it is very challenging to justify such a scheme as it introduces risks to fundamental rights and freedoms under the Charter. Other experts emphasized that services should be compelled to proactively monitor their platforms, as such monitoring, in many cases, could effectively prevent a violent attack. Some experts also emphasized that proactive monitoring for certain types of egregious content, like child sexual exploitation content, would be justified.
Theme B: Regulated Services
Experts agreed that the regulatory regime should apply to a broad range of services operating online, including those lower in the ‘tech stack’. Experts explained that the regime ought to capture services that host third-party content. They stressed that the framework should not regulate an individual with a personal online blog, but rather a service that hosts several blogs, targeting the “intermediary” role of such services. They explained that this would include services like App Stores, Web Hosting services, and Content Delivery Networks.
Experts diverged on whether legislative intermediary liability protection should be part of the legislation, but those who supported the protection emphasized that it should be coupled with the duty to act responsibly imposed on regulated services. Some explained that this would mean that the legal responsibility for content would be on the person who posted it, not the service that hosts the content. However, experts stated, regulated services acting as intermediaries would also have a duty to act responsibly. They stressed that the regime would not be about addressing hate between one user and another user, but rather about injecting transparency and accountability into how online services operate. Some experts also stated that it would be necessary to align any federal laws on liability with existing provincial laws on liability to ensure constitutionality regarding jurisdiction. Other experts emphasized that online services should be held liable for content posted on their platforms, especially for child sexual exploitation content.
Experts disagreed on whether, and how, to include private communications in a regulatory framework. Some experts stated that there is a place for the regulation of private communication services in this framework, wherein services would provide tools for users to shape their experience online instead of monitoring and removing content in private conversations. Some experts specified that private communication services that harm children should be regulated and should, at the very least, be compelled to take steps to prevent the dissemination of known CSAM via private messaging. Other experts emphasized concerns regarding user privacy rights in explaining why the regulation of private communication services would be difficult to justify from a Charter perspective.
Experts differed on whether to tailor regulatory obligations based on the risk posed by the service. Some experts emphasized that the duty to act responsibly should apply to all online intermediaries, but that the specific obligations should be proportionate to the risk posed by the service. In terms of assessing risk, experts stressed that obligations should not be tailored based on size. Instead, they explained that risk should be based on the service’s business model. In contrast, other experts emphasized that the same broad obligations in terms of fulfilling a duty to act responsibly should apply to all regulated entities regardless of their business model or size. They explained that under a broad duty to act responsibly, each regulated entity could calibrate their own approach without the need for specific obligations tied to their business model.
Theme C: Defining Harms
Experts stressed that some forms of harm are extremely difficult to define but should nonetheless be captured by a regulatory regime. Some experts brought up the concept of ‘slow violence’: content, like conspiracy theories, that becomes dangerous over time when placed within a certain social context. They explained that on its own this type of content does not violate terms of service, but that collectively, within a broader context, it can radicalize users and incite hatred and even violence. Others explained that there is a lot of content that is not harmful at face value, pointing to child sexual exploitation images as an example. They explained that videos of abuse are sometimes spliced into multiple images which, on their own, do not depict an apparent harm, but which make the harm clear when put together. Experts emphasized that it would be difficult to define such content in legislation, but that the harm associated with it is worthy of attention in a legislative and regulatory framework.
Experts stated the importance of framing the legislation in a positive way, focusing on promoting human rights, amplifying “good” content, protecting children and other vulnerable groups, and empowering users. They argued that the ultimate goal of the regime ought to be a shift in the balance of content online in favour of “positive communication” over “harmful communication”.
Experts advised that the framework must consider harm not necessarily associated with content. Some experts emphasized that not all harm is related to content – harm can also come from user behaviour and systemic biases. They explored how actors can manipulate platforms to spread harmful, deceptive, or manipulative narratives through the use of bots, for example. They also cited research showing that women do not see online advertisements for jobs in science as much as men do. Experts stressed that systemic harms of this nature go beyond content and are imperative to consider. They explained that online services use algorithms and other artificial intelligence tools that perpetuate biases and harm users. They suggested that obligations regarding algorithmic accountability be introduced.
Experts disagreed on the usefulness of the five categories of harmful content previously identified in the Government’s 2021 proposal: hate speech, terrorist content, incitement to violence, child sexual exploitation, and the non-consensual sharing of intimate images. Many experts emphasized that child sexual exploitation content and content that incites terrorism are particularly harmful to Canadians and must be addressed in an unambiguous manner by future legislation. Some experts suggested setting out specific responsibilities for these five types of content, coupled with less specific obligations for content beyond these categories. Other experts viewed the five categories as deeply problematic, arguing that they perpetuate biases found in their corresponding Criminal Code provisions. For instance, they expressed that there are issues with the definition of terrorism, as it deals almost exclusively with Islamic terror and omits other forms of terrorism.
Experts diverged on how to define harm under a risk-based regulatory framework. Some experts argued that legislation should move away from an approach that identifies specific types of harmful content. Instead, they argued that harm could be defined in a broader way, such as harm to a specific segment of the population, like children, seniors, or minority groups. They explained that if the regime were to specify types of content, these types should be used as examples of what harmful content could be, not as defined categories. Experts also emphasized that new categories of harm may surface in the future and that specifying types of harmful content risks the legislation quickly becoming outdated. Other experts disagreed, stating that regulated services must be given some indication of what harm means in order to know what they must do to comply with new rules. They also explored how the question of defining harmful content fits within the context of law enforcement. They emphasized that, if services are obligated to preserve and report data to law enforcement, law enforcement agencies would need clear definitions in order to know when content is being reported or preserved for their investigative purposes. Among the experts who advocated for defining harm in a specific way, some suggested a focus on what harmful content does, or could do, as opposed to a set definition. Put differently, they suggested focusing on the likely effects of a given piece of content instead of providing a specific definition.
Experts emphasized that something must be done about disinformation, but acknowledged that it is challenging to scope and define. They agreed that disinformation has serious immediate, medium-term, and long-term consequences. They discussed how disinformation can be used to incite hatred and violence, undermine democracy and democratic discourse, reduce trust between citizens, and threaten national security and public health. However, they also expressed extreme caution against defining disinformation in legislation for a number of reasons, including that doing so would put the Government in a position to distinguish between what is true and false – which it simply cannot do. Instead of defining disinformation in legislation, some experts argued that legislation could focus on the harmful effects of disinformation or certain behaviours associated with disinformation, like coordinated manipulation using bots and bot networks.
Theme D: Regulatory Powers
Experts stressed that the regulator should be equipped with robust audit and enforcement powers. They agreed that the regulator should be able to audit services, levy administrative monetary penalties (AMPs) and issue compliance orders. They argued that without strong enforcement powers, it would be very difficult, if not impossible, for a regulator to promote compliance. Experts insisted that clear consequences ought to be set out for regulated services that do not fulfill their obligations.
Experts emphasized that the regulator needs to be well-resourced to be successful. Some experts argued that in addition to resources for compliance and enforcement, a new regulator needs sufficient capacity to intake, understand, and analyze complex data coming from regulated services. As a way to build this capacity, some experts suggested that the framework leverage the research community to also assess data and information collected through transparency reporting. They proposed that international coordination among regulators would be another way to build regulatory capacity. Experts also argued that if the regulator is tasked with developing innovative ways to address harm, it will need the resources to do so.
Experts stated that Codes of Practice are essential, and some said they ought to be co-developed with industry and other stakeholders. Many experts emphasized the importance of a multi-stakeholder approach to developing obligations placed on regulated services. They explained that this approach could be accomplished through codes of practice or guidelines co-developed by the regulator and stakeholders, outlining how regulated services could fulfill their regulatory obligations. Many experts also agreed on the use of table-top discussions where a group of industry representatives, community organizations, victim advocacy groups and government officials would come together on a regular basis to discuss the transparency reports received, consider scenarios, and help promote norms and best practices. Experts explained that such table-top exercises would help strengthen the system of collaboration between these actors and promote a regulatory approach that would be consistently improving and evolving. Other experts spoke about an advisory body that would help build best practices, which could then inform codes of practice. Some experts highlighted the differing motives and interests of industry and expressed skepticism over having industry at the table for important decisions impacting safety given those differences and the power imbalance between victim advocacy groups and industry. They stressed the importance of an independent regulator and the need for that regulator to establish minimum standards free from industry influence.
Theme E: Victim Support
Experts affirmed the need to impose a content review and appeal process at the platform level. Some experts agreed on requiring regulated services both to have an appeal process for content moderation decisions and to establish an internal ombudsperson to help support victims who need guidance concerning problematic content or platform behaviour. Experts agreed that, as part of services’ duty to act responsibly, these appeal mechanisms should be effective, efficient, and user-friendly, and that an internal ombudsperson should support and guide users through the appeals process. Other experts expressed concern over the independence of an “internal ombudsperson” and highlighted the need for an independent digital advocate.
Experts stressed that historically marginalized groups need to be protected from any unintended consequences of a new regulatory framework. They insisted that any obligations imposed on regulated services, and any tools available to users, consider how best to protect marginalized communities. They explained that such tools could have unintended consequences for, or be biased against, marginalized and diverse communities if sufficient care is not taken. Other experts noted that historically marginalized communities, such as children, women and LGBTQ2 communities, are also the ones who have been disproportionately harmed by the absence of a regulatory framework to date.
Experts recommended the introduction of an ombudsperson for victim support, independent from Government, online services, and law enforcement. They explored how an ombudsperson could play a useful intermediary role between users and the regulator. They emphasized that an independent ombudsperson could support users by advising them, providing technical support, and walking them through how to file a complaint. They explained that this ombudsperson could collect data on the most pressing issues for users and victims, and that this information could help the regulator in its oversight and investigation roles and inform the development of codes of practice. Some suggested that the regime could begin with an ombudsperson as a hub for victim support, and grow into a body that adjudicates disputes later, if deemed necessary. Experts explained that this ombudsperson would be independent from the Digital Safety Commissioner. They emphasized that the ombudsperson would provide information to the Digital Safety Commissioner on complaints received, and the Commissioner would be equipped with audit and enforcement powers to investigate further and ensure compliance where necessary.
Experts voiced a preference for the term ombudsperson, or public advocate, instead of “recourse council”. They emphasized that “ombudsperson” is a term people largely understand and feel comfortable with. Other experts suggested the term “public advocate” and put forth the idea of introducing a public advocate for digital matters in Canada, a body to take up complaints, hear victims, and write up periodic reports publicizing such complaints. In either case, experts agreed that if a recourse body was to be created, it would be necessary to use a term that Canadians can understand and feel comfortable with.
Experts disagreed on the need to introduce independent recourse to revisit platform decisions. Some experts emphasized that the issue is not whether to impose an external body to make takedown decisions, but rather how best to support users who have nowhere else to turn. They explained that under the framework, services would have their own internal complaint and transparency mechanisms. However, they stated, victims often feel that the platforms do not listen to their complaints, that they are still being harmed, and that the content they are flagging is not being addressed. They explained that victims need a venue to express their concerns. Experts emphasized that this support can be achieved without creating a content removal authority. They stressed that creating an independent body to make takedown decisions would be a massive undertaking, akin to creating an entirely new quasi-judicial system, with major constitutional issues related to both federalism and the Charter. Other experts argued that users need more meaningful recourse and sufficient access to justice. These experts argued that there must be a body to make takedown decisions in order to effectively help victims. On this latter point, some experts suggested that there be a connection between the ombudsperson and the courts, where the former could bring claims on behalf of victims to the latter. Some experts also suggested that the new legislation encourage courts to conduct their business online so they can provide efficient and timely solutions to victims.
Experts who advocated for a body to make content removal decisions disagreed on the thresholds for making those decisions. Some experts were in favour of removal decisions in very limited circumstances. They suggested instituting separate recourse bodies for certain types of content, like child sexual exploitation content and the non-consensual sharing of intimate images. Some experts stated that there should be two classes of harm: the more severe, criminal category would be subject to independent recourse, while the other category of harm would not be. Other experts suggested that the threshold for independent recourse should be whether the regulated entity was enforcing its own code of conduct. At that point, experts explained, a user could go to a recourse body that would render its own interpretation of whether the service was acting consistently with its code of conduct.
Experts diverged on whether a content removal decision-making body would be workable. Some experts voiced concern over the volume of content an independent body would have to deal with. They explained that the body would not be able to make decisions at a speed commensurate with the expectations of users. Other experts argued that the Government cannot expect platforms to deal with high volumes of content while at the same time asserting that a regulator cannot address such a high volume. Other experts recognized that volume is a real impediment to setting up effective independent recourse and acknowledged that no real solution has emerged to effectively and efficiently adjudicate the high volume of content that arises in the online space. They emphasized, however, that the idea requires further development and testing rather than being abandoned.
Experts disagreed on whether a recourse body should also have the power to compel the reinstatement of content. Some experts stated that such a power would be necessary. Others emphasized, however, that such a tool should only be provided if it were legal; they explained that compelling speech may raise Charter concerns and that more analysis on this topic would be needed before granting such a power.
Theme F: Law Enforcement and Crisis Response
Experts emphasized that there is a low level of trust in law enforcement among some marginalized communities, which fuels a concern that requirements to report certain content to law enforcement, or to preserve such content for criminal investigations, could lead to over-policing of these communities. Experts voiced concern over issues like marginalized communities being disproportionately targeted by police, their overrepresentation in the carceral system, and existing biases within Criminal Code offences, for instance in the definition of terrorism. Experts explained that mandatory notification obligations have the potential to increase harmful police interactions with marginalized communities when no crime has been committed. They explained that setting a threshold of imminent risk of harm for reporting and preservation obligations could lead to the over-policing of content. Some suggested that any reporting or preservation requirements would need to be based on actual knowledge in order to avoid over-policing and privacy concerns, especially for marginalized communities. Some experts expressed that mandatory reporting of child pornography should continue to be handled separately under An Act respecting the mandatory reporting of Internet child pornography by persons who provide an internet service, which should be amended to reflect the evolution of how such content is produced and disseminated online.
Experts stressed that a critical incident protocol ought to be part of a regulatory framework, and that content falling under this protocol must be carefully managed following the incident itself. They emphasized the importance of preventing copies of certain videos, such as live-streamed atrocities and child sexual abuse, from being shared again. Experts stressed that many file sharing services allow content to spread very quickly. They voiced that regulating the sharing of specific Dropbox links and other URLs would be an impactful way of addressing this problem.
Experts disagreed on whether mandatory reporting or preservation to law enforcement should form part of the online safety framework. Some raised the concern that, if not done thoughtfully, these obligations could have spill-over effects on the rest of the framework. They argued that broad requirements for reporting and data preservation, as well as corresponding automated detection mechanisms, could pose a risk to users’ privacy and incentivize a general system of monitoring. Other experts emphasized that law enforcement must be given the tools necessary to ensure they are able to successfully conduct their investigations.