Executive Summary

This report is based on the views expressed during, and short papers contributed by speakers at, a workshop organised by the Canadian Security Intelligence Service as part of its Academic Outreach and Stakeholder Engagement (AOSE) and Analysis and Exploitation of Information Sources (AXIS) programs. Offered as a means to support ongoing discussion, the report does not constitute an analytical document, nor does it represent any formal position of the organisations involved. The workshop was conducted under the Chatham House rule; therefore no attributions are made and the identity of speakers and participants is not disclosed.

The threats posed by disinformation to security and democracy have been assessed as a significant and ongoing, if not perennial, concern. Spurred by advances in artificial intelligence (AI), deepfakes are viewed as a modern evolution of disinformation that poses new challenges for governments, individuals, and societies. Safeguarding the integrity of the information ecosystem is a fundamental priority not only for democracy, but for society as a whole.

Technological Advancements and Prosocial Applications

The term deepfake, originally a portmanteau of deep learning and fake media, is now used more broadly to refer to any impersonating media created or edited by deep learning algorithms. Manipulated videos, images, audio/voice, and text created using generative AI techniques have quickly evolved to become increasingly accessible and realistic. In many ways, these advancements pose exciting opportunities.

Threats to Society and Security

As the capacity to generate synthetic media becomes more widely available and more precise, the potential for misuse grows. Among the primary concerns with deepfakes is their capacity to spread disinformation and manipulate political discourse, leading to confusion, distrust, and social instability in democratic societies.

While deepfakes are more likely to advance existing security threat-related activities than to generate entirely new concerns, it is important to recognize the risks they pose and to develop robust technological solutions, ethical guidelines, and legal frameworks that address these challenges and mitigate their negative consequences.


Deepfakes are designed to deceive, and the human mind cannot consistently identify the outputs of sophisticated technologies. While tech giants have begun flagging deepfake content as disinformation, detection systems integrating both human and model predictions are of greater value. Governments have a role to play in facilitating the application of deepfake technologies that both benefit and protect citizens and democracy, and individual citizens have agency in protecting themselves and their communities.
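The idea of a detection system that integrates human and model predictions can be illustrated with a minimal sketch. Everything below is hypothetical: the weighting scheme, the function name, and the example values are illustrative assumptions, not a description of any deployed system.

```python
# Illustrative sketch (hypothetical): blend an automated detector's
# fake-probability with human reviewer judgements via a weighted average.

def combined_fake_score(model_prob, human_votes, model_weight=0.6):
    """Blend a model's fake-probability (0.0-1.0) with the fraction of
    human reviewers who flagged the item as fake (votes of 1).
    The 0.6/0.4 weighting is an illustrative assumption."""
    human_prob = sum(human_votes) / len(human_votes)
    return model_weight * model_prob + (1 - model_weight) * human_prob

# Example: the model is fairly confident (0.8) and two of three
# human reviewers flag the item as fake.
score = combined_fake_score(0.8, [1, 1, 0])
flagged = score > 0.5  # decision threshold, also an assumption
```

In practice such systems are far more elaborate, but the sketch captures the core point made above: neither the model score nor human judgement alone decides; the two signals are combined before a threshold is applied.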

That deepfake technology will continue accelerating towards producing more realistic content more efficiently and more cost-effectively is a certainty. Considering deepfakes from a global perspective allows for comprehensive approaches to maximize the benefits of the evolving technology while addressing the associated individual and national security risks, upholding privacy rights, and maintaining public trust in media and information sources.
