Responsible use of artificial intelligence (AI)

Exploring the future of responsible AI in government

Artificial intelligence (AI) technologies offer promise for improving how the Government of Canada serves Canadians. As we explore the use of AI in government programs and services, we are ensuring it is governed by clear values, ethics, and laws.

Information and services

Our guiding principles

Principles to ensure the effective and ethical use of AI.

Our timeline

Follow the evolution of our Directive on Automated Decision-Making.

List of qualified Artificial Intelligence (AI) suppliers

Current list of businesses looking to sell AI solutions to the Government of Canada.

Algorithmic Impact Assessment (AIA)

See how the AIA helps designers understand and manage the impacts of their AI solutions from an ethical perspective.

Directive on Automated Decision-Making

See how we ensure that the government's automated decision-making systems are used responsibly.

Guideline on Service and Digital

Section 4.5 provides additional guidance on the responsible and ethical use of automated decision systems.

Our guiding principles

To ensure the effective and ethical use of AI, the government will:

  1. understand and measure the impact of using AI by developing and sharing tools and approaches
  2. be transparent about how and when we are using AI, starting with a clear user need and public benefit
  3. provide meaningful explanations about AI decision making, while also offering opportunities to review results and challenge these decisions
  4. be as open as we can by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defence
  5. provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better

AI procurement for a digital world

AI procurement for a digital world - Transcript

The Government of Canada is starting to use Artificial Intelligence to inform decision-making, be more efficient, and provide better services to Canadians.

While AI is a powerful tool, it must be used responsibly. We have to eliminate bias, be open about how AI is informing decisions, and ensure potential benefits are weighed against unintended results. That’s why we build responsible use into everything we do, including our first AI procurement process.

Here’s how the process works:

  1. First, interested suppliers must apply and demonstrate that they can deliver AI solutions in a responsible manner. 
  2. The Government will then present them with challenges.
  3. Interested bidders will need to specify which challenges they’d like to work on.
  4. From this group, the Government will pick three suppliers and randomly select another seven. These suppliers will be eligible to submit proposals.
  5. Finally, the Government will evaluate bids and award contracts.
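The shortlisting step above (pick three suppliers, then randomly draw another seven) can be sketched in code. This is an illustrative sketch only: the function name, the ordering of bidders, and the assumption that the three picked suppliers come from the top of a ranked list are not specified in the source.

```python
import random

def shortlist_suppliers(ranked_bidders, picked_n=3, random_n=7, seed=None):
    """Hypothetical helper: shortlist bidders for a challenge by taking
    the first `picked_n` from a ranked list, then randomly drawing
    `random_n` more from the remainder. Only the counts (3 + 7) come
    from the source; everything else is an assumption."""
    picked = ranked_bidders[:picked_n]
    remainder = ranked_bidders[picked_n:]
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    drawn = rng.sample(remainder, min(random_n, len(remainder)))
    return picked + drawn

# Example: 15 interested bidders yield a shortlist of 10 eligible suppliers
bidders = [f"supplier_{i}" for i in range(1, 16)]
shortlist = shortlist_suppliers(bidders, seed=42)
print(len(shortlist))  # 10
```

A random draw alongside the picked suppliers gives smaller firms a realistic chance of reaching the proposal stage, which fits the stated goal of facilitating collaboration with small and medium-sized enterprises.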

This simpler, faster process will not only facilitate collaboration between Government and small and medium-sized enterprises, it will also ensure that we build ethics and responsibility into projects from start to finish.

Agile, transparent, collaborative: that’s procurement for a digital world. Find out more at

Algorithmic Impact Assessment

Algorithmic Impact Assessment - Transcript

Artificial Intelligence can help us do great things, like preserving Indigenous languages or helping Canadians do their taxes and access benefits. However, as with any new disruptive technology, we need to ensure it is used correctly, with the best interests of Canadians in mind.

That’s why rooting out bias and inequality in AI design has become a top priority. We need to shape how AI is built, monitored and governed from the get-go. The Government of Canada’s Algorithmic Impact Assessment (AIA) aims to do just that.

The AIA provides designers with a measure to evaluate AI solutions from an ethical and human perspective, so that they are built in a responsible and transparent way. For example, the AIA can ensure economic interests are balanced against environmental sustainability.

The AIA also includes ways to measure potential impacts to the public, and outlines appropriate courses of action, like behavioral monitoring and algorithm assessments.
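The AIA works as a questionnaire whose answers roll up into an impact score. The sketch below shows the general shape of that idea; the question names, point values, and level thresholds here are illustrative assumptions, not the official AIA bands.

```python
def impact_level(raw_score, max_score):
    """Map a raw questionnaire score to an impact level (I-IV).
    The quartile thresholds below are illustrative only, not the
    official AIA scoring bands."""
    pct = raw_score / max_score
    if pct <= 0.25:
        return "Level I"    # little to no impact
    elif pct <= 0.50:
        return "Level II"   # moderate impact
    elif pct <= 0.75:
        return "Level III"  # high impact
    return "Level IV"       # very high impact

# Hypothetical answers: higher points indicate greater risk to the public
answers = {
    "decision_hard_to_reverse": 2,
    "affects_rights_or_freedoms": 3,
    "uses_personal_information": 1,
}
score = sum(answers.values())
print(impact_level(score, max_score=12))  # Level II
```

Tying the level to a transparent score is what lets designers see, early in development, which mitigation measures (such as the behavioural monitoring and algorithm assessments mentioned above) a system would trigger.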

Visit to find out how Canada is leading the way in responsible and ethical use of AI.

Our timeline

  • Updates to the Directive on Automated Decision-Making (April 1, 2021)

    • The Directive was amended based on feedback received from stakeholders
  • Compliance to the Directive on Automated Decision-Making (April 1, 2020)

    • All new automated decision systems must now comply with the Directive
  • Second AI Day (March 4, 2019)

    • Directive on Automated Decision-Making officially launched
  • Lunch and Learn with GC Entrepreneurs group (October 12, 2018)

    • Consultations on the Directive and Algorithmic Impact Assessment
    • Toronto and Montreal
    • External stakeholders including UQAM, CIFAR, Osgoode Law, AI Impact Alliance (AiiA), and others
  • Office of the Privacy Commissioner (September 18, 2018)

    • Consultation and feedback
  • Legal - Justice session (June 12, 2018)

    • Creation of Justice AI taskforce to provide input and direction
    • 25 multi-sectoral representatives
      • Human rights, IP, commercial, IRCC, ESDC, TBS, and others
    • Changes were made based on their comments
  • AI Day (May 28, 2018)

    • 120 participants from industry, academia, and government
  • AI policy working group kick-off (February 16, 2018)

    • Hosted by GAC to develop departmental policies on AI
  • Policy Horizon's Directive Design Session (February 13, 2018)

    • Interdepartmental workshop to talk about the development of the Directive
    • IRCC, ISED, ESDC were present
  • Kick-off session with Departments (January 22, 2018)

    • Organized workshop with over 100 participants
    • IRCC, DFO, Agriculture, CBSA, Funding Councils, GAC, ESDC, NRC, PCH, HC, NRCAN, Canada Council for Arts, CRA, ISED, Policy Horizons, and SSC all participated
  • Drafting of the Directive (October 2017 - present)

    • TBS binding policy focused on automated decisions
  • Drafting of the whitepaper (October 2016 - October 2017)

    • Built in the open with several academic, civil society, and government subject matter experts