Summary of Session Eight: Disinformation
The Expert Advisory Group on Online Safety held its eighth session, on disinformation, on June 3 from 1:00 to 4:00 p.m. EDT. Eleven members were present. The Advisory Group was joined by Government representatives from the Departments of Canadian Heritage, Justice, Innovation, Science and Economic Development, Public Safety, Women and Gender Equality, and the Privy Council Office. Representatives from the Royal Canadian Mounted Police were also present.
This summary provides an overview of the eighth session. Per the Terms of Reference for the Advisory Group, these sessions operate under the Chatham House Rule. As such, this summary does not attribute the views expressed to any one group member or organization. It outlines the views expressed during the session; reports areas of agreement, disagreement, and discussion; and organizes the discussion under thematic categories. It should not be considered a verbatim recitation of the discussion.
The worksheet for the session included two objectives:
- Obtain views on the Government’s role in addressing disinformation.
- Explore new ways to address and mitigate the effects of disinformation.
This summary reports on the perspectives raised in relation to these objectives and organizes the discussion points according to issue-specific themes.
Theme A: Understanding the Magnitude of the Challenge
Experts asserted that disinformation is not a new problem, but that the emergence of online services has amplified it to an unprecedented degree. Experts noted that deliberately misleading or false information has always been used to advance political, social, or economic interests. However, experts stated that in the last few years especially, disinformation has become far easier to create and share through online services and social media.
The Expert Advisory Group agreed that the problem has grown to become one of the most pressing and harmful forms of malicious behaviour online. Experts agreed that disinformation has serious immediate, medium-term, and long-term consequences. They discussed how disinformation can be used to incite hatred and violence, undermine democracy and democratic discourse, reduce trust between citizens, and threaten national security and public health. They pointed to how disinformation was used in the context of the COVID-19 pandemic and to undermine democracy in the United States as particular examples of the serious and immediate threats disinformation poses. They went on to explain that the effects of disinformation are insidious: they may not be readily apparent, but they slowly erode trust and social inclusion.
Some experts introduced the notion that disinformation undermines the rights of users. They asserted that by polluting the information environment with false, deceptive, and/or misleading information, disinformation undermines citizens’ rights to form their own informed opinions. Some experts stressed that disinformation undermines ‘freedom of attention’ by crowding and diverting citizens’ attention and focus toward intentionally misleading or deceptive information.
Members considered how disinformation disproportionately affects children. They noted that children are more vulnerable to disinformation because they are more impressionable than adults and their exposure to online environments is near-constant. Members also cited examples where disinformation was used as a way to abuse children. They explained how disinformation is used to lure and groom children, and to revictimize, bully, and harass minors and justify abuse against them.
Most experts agreed that something must be done, but that the Government’s role must be carefully circumscribed to protect fundamental rights. Given the serious and urgent nature of the harms created by disinformation, experts argued that legislation on online safety should consider disinformation in some capacity. These experts argued that by not including disinformation in the approach, the Government would signal that it is less important than other harms, a notion that experts widely disagreed with. However, most experts agreed that the Government cannot be in the business of deciding what is true or false online, or of determining the intent behind creating or spreading false information. Nor can the Government censor content based on its veracity, no matter how harmful. Doing so would undermine fundamental rights enshrined in the Charter of Rights and Freedoms.
Theme B: Whether to Define Disinformation in Legislation
Most experts urged extreme caution against defining disinformation in legislation. Experts argued that the very process of defining disinformation in legislation is problematic for a number of reasons. First, defining disinformation would put the Government in the position of distinguishing between what is true and what is false, which it simply cannot do. Second, experts noted that most definitions of disinformation contain an element of intent. They stressed that determining intent is problematic, both practically and as a matter of principle. Third, experts pointed to the troubled attempts in the United States to address disinformation through a Disinformation Board as an example of how Government-created definitions of disinformation cannot withstand public scrutiny. Finally, experts noted that the term disinformation is neither static nor absolute. They pointed out how the nomenclature used to refer to this problem is still evolving, as demonstrated by the fluid definitions of ‘disinformation’, ‘misinformation’, ‘malinformation’, and ‘fake news’. These experts argued that trying to codify such a dynamic definition in legislation would risk the legislation quickly becoming outdated.
Some experts noted that there may be certain cases where disinformation is easier to conceptualize and address. These experts pointed to disinformation campaigns by foreign state actors as an example where the Government could more easily identify and address disinformation in a justifiable way. In these cases, where the actors and intent behind disinformation are clear and national security threats are at play, experts explained that the Government may justifiably act through legislation.
Theme C: Approaches to Addressing Disinformation and its Effects through a Risk-Based Approach
Experts explored how a risk-based legislative approach could deal with disinformation. They asserted that the legal underpinning for addressing disinformation is the same as for other online harms: formalizing a duty for services to act responsibly. This would require services to address harmful content online, including disinformation, by conducting risk assessments of content that can cause significant physical or psychological harm to individuals.
Experts agreed that risk-based legislation could address disinformation by targeting certain behaviours. They surveyed forms of ‘coordinated inauthentic behaviour’ that are leveraged to create, spread, and amplify disinformation, citing the use of bots, bot networks, inauthentic accounts, and ‘deepfakes’ as examples. Experts asserted that a system-focussed approach to regulation could set out rules or standards targeting these practices, with an eye towards limiting the tools available to malicious actors. By focussing on behaviours through a system-based approach, experts explained, legislation would not need to determine what constitutes disinformation or what is true or false.
However, a few experts argued that a systems-based approach focussed on behaviour still poses risks. These experts argued that in order to target the behaviours and mechanisms used to create and spread disinformation, services would still have to identify disinformation, and therefore make a determination of falsehood. They questioned how online services would know which behaviours to address without detecting and judging the veracity of content in the first place.
Other experts suggested that legislation should target inauthentic behaviour by focussing on its effects rather than on the veracity of the content involved. They suggested that any regulated inauthentic behaviour should be accompanied by an actual or foreseeable negative effect on, for example, public health, the protection of minors, civic discourse, electoral processes, or public security. These experts argued that a focus on effects would not require services to make a determination of truth or falsehood.
A few experts considered how providing users more control over what they see online could help alleviate the problem. These experts suggested that requirements to improve user controls and users’ right to shape their own experiences online could help to address the spread and impact of disinformation. With such tools, users could filter known sources of deceptive or manipulative information out of their feeds. Conversely, some experts suggested that user controls are a double-edged sword: users could also curate their feeds toward content or narratives laden with false, misleading, and/or deceptive information.
Some experts highlighted the need to address the financial and economic elements behind disinformation. These experts asserted that disinformation can be lucrative when used in marketing and advertising practices. They suggested that efforts to demonetize disinformation could also play a part in a systemic, risk mitigation approach. However, they acknowledged that this avenue may lie beyond the purview of online safety legislation if advertising practices are not covered. Some suggested that advertising and marketing law are more appropriate tools to demonetize disinformation.
Some members asserted that the harms posed by disinformation could be addressed not by defining disinformation itself, but by targeting the harms it creates or amplifies. Experts noted that disinformation is used as a tool to inflict other forms of harm, and that legislation should focus on these harmful effects rather than focussing on disinformation itself. For example, disinformation can be used to incite violence or hatred – some experts argued that by requiring services to address violence or hatred, legislation already addresses the effects of disinformation. These experts suggested that legislation consider what other harmful effects arise from disinformation and explore ways to address those rather than address disinformation as a ‘standalone’ harm. They also noted that some forms of disinformation are dealt with already in other laws, citing fraud, defamation, and election interference as examples.
Some argued that disinformation and its related harms could be dealt with through Codes of Practice rather than legislation. These experts explained how non-binding Codes of Practice, developed collaboratively between online services, civil society, and a regulator, could address harms without undermining fundamental rights. They mentioned how Codes of Practice aimed at harms related to disinformation, including hate speech and electoral integrity, could help mitigate many of the risks posed by disinformation. Experts questioned whether a specific Code of Practice for disinformation would be desirable. If Codes of Practice could address all or many of the harms created by disinformation, these experts argued, then a specific Code of Practice for disinformation might not be necessary.
Looking to the Digital Services Act in Europe, some experts considered whether stronger action should be available to address disinformation in times of crisis. These experts considered how the Digital Services Act creates a mechanism by which the European Commission can take stronger action to deal with disinformation in times of crisis. Experts noted how this provision is related to the ongoing events in Ukraine and efforts by Russia to spread false claims in an attempt to justify its aggression. Experts considered whether such a provision would be necessary or desirable in Canada, and whether stronger action should be available to the Government during elections or public health crises.
Experts expressed concern over how any measures to address disinformation could be replicated or abused by Governments that do not respect fundamental rights. There was widespread agreement that any legislation targeting disinformation needs to be ‘democracy-proofed’. This means that the Government needs to ensure that provisions of the legislation cannot be abused or misused by either future Governments in Canada or by authoritarian regimes in other countries to justify censorship of journalism and legitimate criticism. However, some experts noted that if new rules for online services are only meant to apply existing law to the online environment, the problem of ‘democracy-proofing’ is less of a risk.
Experts revisited the issue of whether online services should be held liable for the content they host. While a few experts stressed that liability is an important tool to combat harmful content online, most experts agreed that imposing it is neither practical nor justifiable. Proponents of liability asserted that online services, or those seeking to launch an online service, need to think carefully about the potential for harm; liability would serve as a powerful deterrent for those that would allow harmful content to spread on their services. However, most experts agreed that liability would undermine the core principles of the internet and run counter to international trade agreements. They also argued that liability may be incompatible with a risk-based approach that relies on open, good-faith management of harm between regulated entities and a regulator. If services are held liable, experts argued, they would be less inclined to share and examine the risks they pose for fear of legal action.
A few experts suggested that online safety legislation should consider provincial law. These experts noted that there may be potential overlap with provincial mechanisms that concern civil liability in online environments. They asserted that new legislation should be compatible and work together with these other systems. They noted that this could be especially important for child sexual abuse material and the non-consensual sharing of intimate images, where provincial civil mechanisms might already exist. They considered what other mechanisms might exist in relation to disinformation in both provincial law and federal law, citing libel and provisions related to hate speech in the Criminal Code.
Some experts argued that when it comes to risk assessments, children need more protection. These experts stressed that the harms posed to children are different in both nature and severity from the harms posed to adults. They mentioned how certain behaviours and mechanisms, including anonymity and inauthentic behaviour, pose higher degrees of risk to children. They called for requirements on online services to identify and assess specific risks to children, including risks related to disinformation.
Theme D: The Role of Transparency Reporting and Auditing
Experts discussed transparency and audit requirements at both a general level and in the context of disinformation. Overall, experts agreed that more transparency is beneficial, but legislation needs to compel the right information to have a meaningful impact. According to these experts, online services are not sharing crucial information about their processes and how they deal with harms including disinformation, something that legislation should compel them to do. They also stressed that any transparency requirements need to take special care to respect the privacy of users.
Experts stressed the need for online services to share qualitative information in addition to quantitative data. Qualitative information, experts asserted, should include the thought-processes and justification for decision-making and changes to services, organizational charts, and an explanation of how the rules on online services have changed over time. Such qualitative information, experts argued, would provide much-needed context for understanding how online services operate beyond data points.
Experts agreed that current transparency reporting by online services is not detailed enough. They pointed out how online services state what percentage of harmful content they take down, but they do not include information on how long it took to take that content down. They also noted that in many cases, online services commit to taking certain action to address problems identified in transparency reports but are not held accountable for taking this action. As a general concern, experts argued that transparency reports are too often used as a tool to convince governments and the public that services are doing something about harms. Instead, experts argued, transparency reports should inform users of the processes that go into creating the online environments they interact with and should explore the risks associated with these services in an honest way.
Experts expressed concern about whether and how smaller companies would be able to comply with transparency requirements. They asserted that some smaller services may not have the resources, technology, or expertise to compile and publish detailed transparency reporting. For this reason, experts generally agreed that obligations related to transparency need to be flexible to account for the function and scale of regulated services. They also considered what role a regulator could play in supporting smaller services in this respect.
Experts discussed how transparency requirements could inform the ongoing operation of the legislative and regulatory framework, especially in the context of disinformation. They argued that both qualitative and quantitative information from online services could be used to learn about the current and evolving mechanisms and behaviours related to disinformation. This is important, some experts argued, because lawmakers and researchers have severely limited information about how disinformation is created, spread, and amplified online. With the information and lessons gleaned through detailed transparency reporting, legislation or regulation could develop specific targeted mechanisms to address different forms and effects of disinformation. A few experts pointed to the European Digital Media Observatory as a model for using data to glean insights on how to address these problems.
However, experts identified potential adverse impacts of transparency reporting. Experts discussed three ways in which transparency reporting could lead to negative effects. First, they pointed out how malicious actors can use transparency reports to learn how services identify certain forms of harmful content with an eye towards circumventing or exploiting these methods. Second, they reasserted that any requirements, even transparency reporting, should be ‘democracy-proofed’. It would be important, these experts argued, that requirements could not be used with malicious intent or co-opted by authoritarian governments to undermine fundamental rights. Finally, they expressed concerns about the incentive structure of transparency reporting. It is important, experts agreed, that when transparency reporting leads to new findings about potential harms, online services should not be disincentivized to share these findings.
Experts explored the question of who should have access to what information from online services. They considered what information should be shared with the regulator, with researchers, and with the public. Experts discussed how risk assessments and mitigation plans should be shared with the regulator, but perhaps not the public. Experts warned that if risk assessment and mitigation plans are published openly, they could be used by malicious actors to circumvent risk-mitigation measures. For both the regulator and researchers, experts explored the notion of a privileged access regime that provides these parties with useful data while accounting for the commercial sensitivity and technical complexity of this data. Finally, some experts stressed that information made publicly available should inform users about the processes put in place to determine what they see online and should include data and information related to advertising.
Experts explored how best to calibrate and structure the regulator’s audit function. Some experts saw audit powers as a form of transparency and argued that auditing powers should be used when transparency requirements either fall short or are not being fulfilled. They considered the broad auditing powers provided through the European Union’s Digital Services Act, whereby authorities can audit “anything they see necessary”, and questioned whether a similar approach would be appropriate for Canada. A few experts also expressed concern over new legislation inadvertently creating a new ‘auditing industry’ in Canada for online services.
Some experts argued that transparency reporting and auditing are essential in the fight against child sexual abuse. A few experts asserted that the current performance of online services in removing child sexual abuse material is unacceptably poor, and that audits are sorely needed to understand the extent of the problem. They noted that there is clear evidence that some online services are not removing child sexual abuse material fast enough or are not removing it at all. They argued that publicizing these failures could pressure services into being more responsive and acting quickly. These experts reiterated that requirements in this space need to be backed by significant financial penalties commensurate with the severity of harm posed to children.
Theme E: Non-Legislative Tools to Address Disinformation
Experts explored tools to incentivize cooperation and action beyond legislation. They considered how a regulator could encourage services to build upon what they are already doing to address the problem, and how the Government could incentivize cooperation between services, academia, and civil society. Some experts cautioned against relying on self-regulation, though. They argued that relying on services to do the right thing on their own has failed thus far.
Experts expressed widespread agreement that education and prevention play a key role in addressing disinformation and its effects. Experts agreed that Government action to address disinformation should include an educational component focused on literacy and prevention. They argued that even though education is squarely within provincial jurisdiction, an online safety regulator could have an education mandate that involves working with all levels of government, civil society organizations, and academia.
Next Steps
The next and final session of the Expert Advisory Group will take place on Friday, June 10 from 9:00 a.m. - 12:00 p.m. EDT. There is no specific topic for this final session. Experts will share their overall thoughts and takeaways from the process and provide some final advice for how the Government should proceed.