Responsible use of artificial intelligence (AI)
Exploring the future of responsible AI in government
Artificial intelligence (AI) technologies offer promise for improving how the Government of Canada serves Canadians. As we explore the use of AI in government programs and services, we are ensuring it is governed by clear values, ethics, and laws.
Information and services
- Our guiding principles for ensuring the effective and ethical use of AI
- Follow the evolution of our work to support the responsible use of AI in the Government of Canada
- See how we ensure that the government's automated decision-making systems are used responsibly
- See how the Algorithmic Impact Assessment (AIA) helps designers understand and manage the impacts of their AI solutions from an ethical perspective
- Explore our guide, which supports federal institutions in the responsible use of generative AI
- Section 4.5 provides additional guidance on the responsible and ethical use of automated decision systems
- Current list of businesses looking to sell AI solutions to the Government of Canada
Our guiding principles
To ensure the effective and ethical use of AI, the government will:
- understand and measure the impact of using AI by developing and sharing tools and approaches
- be transparent about how and when we are using AI, starting with a clear user need and public benefit
- provide meaningful explanations about AI decision making, while also offering opportunities to review results and challenge these decisions
- be as open as we can by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defence
- provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better
Release of the Guide on the use of generative artificial intelligence (September 6, 2023)
- Provides guidance to federal institutions in their use of generative AI
- Includes an overview of generative AI, identifies limitations and concerns about its use, puts forward “FASTER” principles for its responsible use, and includes policy considerations and best practices
Updates to the Directive on Automated Decision-Making (April 25, 2023)
- The Directive was amended following the third review of the instrument
- Key changes include an expanded scope and new measures for explanation, bias testing, data governance, GBA+, and peer review
- The Algorithmic Impact Assessment was updated to support changes to the directive. This includes new questions concerning the reasons for automation and impacts on persons with disabilities
Stakeholder engagement on the third review of the Directive on Automated Decision-Making (April – November, 2022)
- Engagement with over 30 stakeholder groups, including federal institutions, universities, civil society organizations, governments in other jurisdictions, and international organizations
- Engagement included roundtables with the GC Advisory Council on AI, Canadian Human Rights Commission, Digital Governance Council, bargaining agents, networks for equity-seeking federal employees, and representatives from relevant GC functional communities
Updates to the Directive on Automated Decision-Making (April 1, 2021)
- The Directive was amended based on feedback received from stakeholders
Compliance with the Directive on Automated Decision-Making (April 1, 2020)
- All new automated decision systems must now comply with the Directive
Launch of the Directive on Automated Decision-Making (March 4, 2019)
- Official launch of the Directive during the Second AI Day
Lunch and Learn with GC Entrepreneurs group (October 12, 2018)
Consultations in Toronto and Montreal on the Directive and Algorithmic Impact Assessment
- External stakeholders included UQAM, CIFAR, Osgoode Law, and AI Impact Alliance (AiiA)
Consultation with the Office of the Privacy Commissioner of Canada (September 18, 2018)
Justice AI taskforce session (June 12, 2018)
- Justice AI taskforce created to provide input and direction on legal issues
- 25 representatives, including experts in human rights, IP, and commercial law, as well as IRCC, ESDC, and TBS
AI Day (May 28, 2018)
- 120 participants from industry, academia, and government
AI policy working group kick-off (February 16, 2018)
- Hosted by GAC to develop departmental policies on AI
Policy Horizons Directive Design Session (February 13, 2018)
- Interdepartmental workshop to discuss the development of the Directive
- Participants included TBS, IRCC, ISED, and ESDC
Kick-off session with Departments (January 22, 2018)
- Organized workshop with over 100 participants
- Participants included TBS, IRCC, DFO, AAFC, CBSA, Funding Councils, GAC, ESDC, NRC, PCH, HC, NRCAN, Canada Council for the Arts, CRA, ISED, Policy Horizons, and SSC
Drafting of the Directive (October, 2017 – March, 2019)
- TBS binding policy focused on the automation of decisions
Drafting of the AI whitepaper (October, 2016 – October, 2017)
- Developed in the open with several academic, civil society, and government subject matter experts