Line of Effort 3: Ethics, Safety, and Trust

The challenge

Canadians expect us to procure, develop, and implement AI that is legal, inclusive, ethical, and safe. The use of AI within Defence, particularly in applications related to the use of force, will be closely scrutinized to ensure it is in line with our shared values and with Canadian and international law. Our legitimacy in the use of military force comes from the consent of the people we serve, and we must ensure we retain this legitimacy in the use of AI. Failing to do so risks loss of public trust, reputational damage, the perpetuation of discrimination and bias, and poor morale among personnel.

AI may involve risks to human rights, privacy, and safety. While DND/CAF faces real risks if it fails to keep pace with adversaries in AI, it must also remain cognizant of the potential human rights risks of this technology. AI is a human product, with human biases and flaws embedded in its data, algorithms, models, and accompanying processes. A growing body of incidents demonstrates that AI can fail or cause harm through these flaws, producing discriminatory decisions that cannot be explained or audited. AI can also cause harm through a lack of testing, evaluation, and oversight, and can create new data protection and privacy risks. In a national defence context involving highly classified information, the risks from AI also include leaks, cyber-attacks, inadvertent exposure of intelligence equities, manipulation, and bias.

Ethical, safe AI will be central to ensuring our people trust and use it. The willingness of members and employees to use AI applications, especially in the battle space, will depend on their confidence in the safety and ethics of these technologies, their effects, and their decisions. Consequently, widespread adoption will require that we demonstrate this safety and ethical soundness to our people. AI decisions, behaviours, and performance must be as consistent, reliable, and trustworthy as possible.

What we must do

We must accord equal weight to ethical, security, and technical concerns. Identifying, mitigating, and addressing sources of both unintended harm and malicious activity must be part of the lifecycle of AI systems, and must be accorded the same importance as the resolution of technical problems.

We must embed ethical, equity, and security requirements into every stage of project and system lifecycles, from the design, development, validation and certification, procurement, and deployment of AI systems to their eventual decommissioning. Decisions to augment human decision-making or human judgement-based tasks must be justified and documented, with mechanisms in place to ensure the ultimate decisions are traceable and explainable, and with appropriate accountability measures. All personnel involved in developing, procuring, and using AI must clearly understand their role and their level of responsibility and authority with respect to these projects and systems. Projects must also incorporate GBA Plus throughout the AI lifecycle to ensure solutions respond to the needs of diverse groups and experiences, contribute to positive outcomes, and do not create harms resulting from algorithmic or data bias.
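As one illustration of what a traceability mechanism could look like, the sketch below records each AI-assisted decision together with the accountable human reviewer and the rationale surfaced to them. It is a minimal sketch only; the record fields, names, and values are assumptions for illustration, not a prescribed DND/CAF schema or system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: one auditable record per AI-assisted decision.
# Field names and structure are illustrative assumptions, not a
# prescribed DND/CAF schema.
@dataclass(frozen=True)
class DecisionRecord:
    system_id: str       # which AI system produced the recommendation
    model_version: str   # exact model version, for reproducibility
    inputs_ref: str      # pointer to the archived input data
    recommendation: str  # what the system recommended
    rationale: str       # explanation surfaced to the human reviewer
    reviewer: str        # accountable person who made the final call
    final_decision: str  # the decision actually taken
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example usage: the final decision and its justification remain
# traceable to both the system and the accountable human.
record = DecisionRecord(
    system_id="triage-assist",
    model_version="2.4.1",
    inputs_ref="archive://case-1138/inputs",
    recommendation="escalate",
    rationale="anomaly score above the validated threshold",
    reviewer="duty-officer-042",
    final_decision="escalate",
)
print(record)
```

Capturing the exact model version and a pointer to the archived inputs is what makes a decision reproducible, and therefore auditable, after the fact.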

We must make AI ethics clear, consistent, and actionable. Many organizations have created AI ethics principles, but most have failed to communicate how to put them into practice. Those involved in AI system design, development, and delivery need clear steps to follow to integrate ethical approaches into their processes. We must create tools that enable AI project teams to identify risks and mitigation strategies and to adopt sound ethical operating practices at every stage of the project lifecycle.
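One possible shape for such a tool is a stage-gated checklist that blocks progression through the project lifecycle until the ethical practices for that stage are signed off. The stages and checks in the sketch below are illustrative assumptions, not an approved DND/CAF operating practice.

```python
# Hypothetical sketch of a stage-gated ethics checklist for an AI
# project lifecycle. Stage names and checks are illustrative
# assumptions, not an approved DND/CAF process.
LIFECYCLE_CHECKS = {
    "design": [
        "GBA Plus considerations documented",
        "intended use and operating limits defined",
    ],
    "development": [
        "training data audited for known bias",
        "explainability approach selected",
    ],
    "validation": [
        "performance tested across affected groups",
        "failure modes and mitigations recorded",
    ],
    "deployment": [
        "accountable authority assigned",
        "monitoring and incident reporting in place",
    ],
}

def gate(stage: str, completed: set[str]) -> bool:
    """Return True only if every check for the stage is signed off."""
    missing = [c for c in LIFECYCLE_CHECKS[stage] if c not in completed]
    for item in missing:
        print(f"[{stage}] outstanding: {item}")
    return not missing

# Example: a project cannot pass the design gate with one check open.
print(gate("design", {"GBA Plus considerations documented"}))  # False
```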

How we will do this

  1. Ensure that any new AI or AI-enabled technology is developed and implemented in accordance with applicable laws, policies and guidelines. These include Canadian and international law, applicable regulations, and Government of Canada policies such as the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, the Guide on the use of Generative AI, and the Directive on Automated Decision-Making. They also include DND/CAF commitments to integrate GBA Plus into operations, policies, and programs; risk management practices; and other related DND/CAF instruments such as the targeting cycle, rules of engagement, and security and information management policies and guidelines. DND/CAF will also respond to guidance from external review bodies such as the National Security and Intelligence Committee of Parliamentarians (NSICOP), the National Security and Intelligence Review Agency (NSIRA), and the Office of the Privacy Commissioner (OPC).
  2. Develop AI ethics principles, risk frameworks and operating practices for the AI lifecycle. Drawing on federal and international best practice, DND/CAF, led by the DCAIC, will develop a set of ethics principles, operating practices, and a risk framework for AI to embed best practice across the entire AI lifecycle. This framework will identify risks and impacts so they can be mitigated, and will ensure a level of transparency, comprehension, and human involvement appropriate to the risk and impact involved. In alignment with the Directive on Automated Decision-Making and its Algorithmic Impact Assessments, it will consider risks to the rights, health or well-being, and economic interests of individuals or communities affected by the system, including discriminatory effects arising from data or algorithmic bias, and any risks to the ongoing sustainability of an ecosystem (a simplified illustration follows this list). As technology and best practices evolve, we will continue to update this framework and these practices to ensure they remain evergreen.
  3. Integrate standards and develop requirements for ethical, safe, inclusive, and trustworthy AI systems in defence and security. These include existing international and domestic ethical standards as well as applicable standards on data, digital trust, and identity management. We must also encourage the adoption and operationalization of AI principles by third-party vendors, and collaborate with key allies and partners to continue to develop and integrate national and international standards for data and AI ethics, such as the NATO Principles of Responsible Use of Artificial Intelligence in Defence.
  4. Collaborate with internal and external partners on ethical, safe, and trusted AI. DND/CAF will leverage Canada’s public sector and civil society leadership in AI ethics, along with internal expertise from the Defence Ethics Programme, Director Gender Equality and Intersectional Analysis (DGEIA), and others, to support the responsible, safe, and inclusive use of AI technologies. We will work with other Government of Canada departments and agencies and with our security partners to continue to advance military AI ethics, safety, and trust. With other nations, we will develop standards, norms, and confidence-building measures for AI; open channels for communication about accidents, unexpected system behaviour, cyber-attacks, and emergent effects arising from system interaction; and encourage responsible AI development and use.
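To make the risk framework described in item 2 concrete, the sketch below shows, very loosely, how an impact-screening questionnaire in the spirit of the Algorithmic Impact Assessment might map answers about a system to a coarse impact level. The factors, weights, and thresholds here are invented for illustration and do not reflect the actual published assessment or its scoring.

```python
# Hypothetical sketch of impact-level screening, loosely inspired by
# the Algorithmic Impact Assessment under the Directive on Automated
# Decision-Making. Factors, weights, and thresholds are invented for
# illustration and do not reflect the real questionnaire.
RISK_FACTORS = {
    "affects_rights_or_freedoms": 3,
    "affects_health_or_wellbeing": 3,
    "affects_economic_interests": 2,
    "potential_for_bias_in_data": 2,
    "decision_is_hard_to_reverse": 3,
}

def impact_level(answers: dict[str, bool]) -> str:
    """Map yes/no risk answers to a coarse impact level (I-IV)."""
    score = sum(w for f, w in RISK_FACTORS.items() if answers.get(f))
    if score == 0:
        return "Level I (little to no impact)"
    if score <= 3:
        return "Level II (moderate impact)"
    if score <= 7:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

# Example: a reversible decision with possible data bias screens at a
# moderate level, triggering proportionate oversight requirements.
print(impact_level({"potential_for_bias_in_data": True}))
```

The design point the sketch illustrates is proportionality: the higher the assessed impact, the more transparency, human involvement, and oversight the framework would require.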
