Responsible use of artificial intelligence (AI)
Exploring the future of responsible AI in government
Artificial intelligence (AI) technologies offer promise for improving how the Government of Canada serves Canadians. As we explore the use of AI in government programs and services, we are ensuring it is governed by clear values, ethics, and laws.
Information and services
Learn how we ensure the effective and ethical use of AI.
Follow the evolution of our Directive on Automated Decision-Making.
Current list of businesses looking to sell AI solutions to the Government of Canada.
See how the Algorithmic Impact Assessment (AIA) helps designers assess the ethical and human impact of their AI solutions.
See how we ensure that the government's automated decision-making systems are used responsibly.
Section 4.5 provides additional guidance on the responsible and ethical use of automated decision systems.
Our guiding principles
To ensure the effective and ethical use of AI, the government will:
- understand and measure the impact of using AI by developing and sharing tools and approaches
- be transparent about how and when we are using AI, starting with a clear user need and public benefit
- provide meaningful explanations about AI decision making, while also offering opportunities to review results and challenge these decisions
- be as open as we can by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defence
- provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better
AI procurement for a digital world
Algorithmic Impact Assessment
Updates to the Directive on Automated Decision-Making (April 1, 2021)
- The Directive was amended based on feedback received from stakeholders
Compliance with the Directive on Automated Decision-Making (April 1, 2020)
- All new automated decision systems must now comply with the Directive
Second AI Day (March 4, 2019)
- Directive on Automated Decision-Making officially launched
Lunch and Learn with GC Entrepreneurs group (October 12, 2018)
Consultations on the Directive and Algorithmic Impact Assessment
- Toronto and Montreal
- External stakeholders including UQAM, CIFAR, Osgoode Law, AI Impact Alliance (AiiA), and others
Office of the Privacy Commissioner (September 18, 2018)
- Consultation and feedback
Legal - Justice session (June 12, 2018)
- Creation of Justice AI taskforce to provide input and direction
- 25 multi-sectoral representatives
- Human rights, IP, commercial, IRCC, ESDC, TBS, and others
- Changes were made based on their comments
AI Day (May 28, 2018)
- 120 participants from industry, academia, and government
AI policy working group kick-off (February 16, 2018)
- Hosted by GAC to develop departmental policies on AI
Policy Horizon's Directive Design Session (February 13, 2018)
- Interdepartmental workshop to discuss the development of the Directive
- IRCC, ISED, ESDC were present
Kick-off session with Departments (January 22, 2018)
- Organized workshop with over 100 participants
- IRCC, DFO, Agriculture, CBSA, Funding Councils, GAC, ESDC, NRC, PCH, HC, NRCAN, Canada Council for Arts, CRA, ISED, Policy Horizons, and SSC all participated
Drafting of the Directive (October 2017 - present)
- TBS binding policy focused on automated decision-making
Drafting of the whitepaper (October 2016 - October 2017)
- Built in the open with several academic, civil society, and government subject matter experts