Guide on the Scope of the Directive on Automated Decision-Making

Introduction

The Directive on Automated Decision-Making applies to departments that use automated decision systems to fully or partially automate an administrative decision. Automated decision systems include those that rely on artificial intelligence (AI) and other technologies. The directive applies to systems developed or procured after April 1, 2020.

Not all AI used in the federal public service falls under this directive; only AI systems involved in administrative decision-making are required to comply. For AI systems not used in administrative decision-making, other requirements, such as those concerning security, privacy and information management, still apply.

The purpose of this guidance is to explain the situations in which the directive applies. Understanding the boundaries of the directive is essential to complying with it. Departments should also consider voluntarily complying with the directive when developing or deploying AI systems that fall outside its scope, such as systems supporting policy decisions.

Purpose of the directive

The purpose of the directive is to ensure that departments are transparent, accountable and fair in automated decision-making. It requires departments to:

  • assess the impacts of automated decision systems
  • be transparent
  • ensure quality
  • provide recourse on decisions
  • report publicly on system effectiveness and efficiency

The directive helps departments:

  • identify, assess and mitigate the risks of automated decision systems to people and departments
  • comply with principles of administrative law such as transparency, accountability, legality and procedural fairness

Five key elements of the scope

Five key elements must all be met for a system to fall within the scope of the directive: use by a department; development or procurement after April 1, 2020; use within an administrative decision-making process; replacement or assistance of human judgment; and use in a production environment. Each element is explained below.
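
As a rough illustration only (this sketch is not part of the directive), the scope test can be read as a conjunction of the five elements. The Python sketch below uses hypothetical field and function names:

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical representation of a system under assessment; the
    # field names are illustrative and not taken from the directive.
    @dataclass
    class SystemProfile:
        used_by_fa_department: bool          # department under s. 2 of the Financial Administration Act
        developed_or_procured_on: date       # or the date of the latest significant modification
        used_in_admin_decision_making: bool  # makes or supports an administrative decision
        replaces_or_assists_judgment: bool   # replaces or assists human judgment
        in_production: bool                  # deployed or piloted, not research or experimentation

    def directive_applies(system: SystemProfile) -> bool:
        """All five elements must hold for the directive to apply."""
        return (
            system.used_by_fa_department
            and system.developed_or_procured_on > date(2020, 4, 1)
            and system.used_in_admin_decision_making
            and system.replaces_or_assists_judgment
            and system.in_production
        )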

Use by a department

The directive applies to departments as defined in section 2 of the Financial Administration Act: departments listed in Schedule I, Schedule I.1 and Schedule II. Approximately 97 federal institutions meet this definition.

Excluded institutions

Some institutions are excluded by virtue of their enabling legislation, such as the Canada Revenue Agency (subsection 30(2) of the Canada Revenue Agency Act) and Invest in Canada Hub (subsection 8(2) of the Invest in Canada Act). Other institutions, such as agents of Parliament, are excluded in full or in part under section 9.1.1 of the directive.

The Canadian Armed Forces is not subject to the directive because it does not fall within the definition of department in the Financial Administration Act. However, the Department of National Defence is subject to the directive, as it is listed in Schedule I.

Developed after April 1, 2020

The directive applies:

  • to automated decision systems developed or procured after April 1, 2020
  • when a system developed or procured before April 1, 2020, is significantly modified

In other words, an existing system whose scope or functionality is modified in a way that could impact decisions is to be treated as a new system and must comply with the directive.

Modifications to systems that have already been deployed

Modifications that would require an existing system to comply with the directive include but are not limited to:

  • upgrading analytical capabilities or functioning
  • changing the population of clients affected
  • using the system to deliver a related service
  • using the system to deliver a different service (this may include one department giving the system to a different department to run on its service)

Modifications that would not require an existing system to comply with the directive include but are not limited to:

  • regular system maintenance
  • model retraining that does not significantly impact the analytical capabilities or functioning
  • security patches
  • changes in the hosting location of the system, such as an as-is migration from on-premises to cloud-based hosting

Ensuring compliance of modified systems

For systems developed or procured after April 1, 2020, that undergo significant modifications, departments must verify that the modified system is compliant with the directive. Practically speaking, this verification involves a new or updated Algorithmic Impact Assessment (AIA), which will highlight whether new requirements apply.

Verification may also involve updating existing documentation to reflect the changes, such as the privacy impact assessment, security assessment, legal advice and Gender-based Analysis Plus (GBA Plus). Additional or modified mitigation measures may also be required.

Use within an administrative decision-making process

An administrative decision is defined in Appendix A of the directive. In short, it is any decision that has the potential to affect legal rights, privileges or interests.

An automated decision system is within the scope of the directive if it makes, or assists an officer in making, an administrative decision or a related assessment at any point in the decision-making process. The directive applies to both full and partial automation of decisions, including cases where there is a human in the loop or an officer makes the final decision.

Examples of administrative decisions

Departments typically make administrative decisions when they deliver services to clients. Clients are individuals or businesses (as defined in Appendix A of the Policy on Service and Digital). The directive covers individuals within and outside government. This broad inclusion ensures consistent safeguards for decisions that impact members of the public (Canadians and non-Canadians located within or outside Canada) and federal employees.

Examples of administrative decisions include but are not limited to:

  • assessing applications for permits and benefits, and information in support of such applications
  • assessing eligibility and admissibility for entry to Canada
  • appointing individuals to a position in the public service
  • granting market authorization for a pharmaceutical product
  • launching an investigation into an individual’s conduct

When the directive applies in administrative decision-making

The directive applies when the administrative decision involves full or partial automation.

Full automation

A decision is fully automated when the system makes the decision about a client on its own, without an officer's involvement.

Partial automation

A decision is partially automated when the system contributes to making the decision. This contribution could occur when the system:

  • provides information to an officer
  • makes an assessment or other analysis that influences, augments or assists in an officer’s decision
  • makes one or more decisions in a series of decisions that contribute to the final decision

Partial automation refers to any automation at any point in the decision-making process; the automated component does not have to produce the final decision itself. For example, the directive applies in the following situations (a brief sketch follows the list):

  • the final decision is made by the automated system
  • the final decision is made by the automated system, and an officer reviews the decision
  • an intermediate decision or determination is made by the system, but an officer makes the final decision
  • an assessment, recommendation, score, summary or other information is generated by the system and is presented to the officer during the decision-making process
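
As a minimal sketch of partial automation, assuming a hypothetical benefits-style application and invented scoring criteria, the system below generates a score and recommendation that an officer reviews before deciding; every name here is illustrative:

    from dataclasses import dataclass

    # Hypothetical application record; the fields and scoring criteria
    # are invented for illustration only.
    @dataclass
    class Application:
        applicant_id: str
        documents_complete: bool
        years_of_experience: int

    def system_assessment(app: Application) -> tuple[float, str]:
        """Generates a score and recommendation presented to the officer.

        Because this output influences the officer's decision, the
        process is partially automated and within scope.
        """
        score = 0.5 * app.documents_complete + 0.5 * (app.years_of_experience >= 2)
        recommendation = "recommend approval" if score >= 1.0 else "refer for closer review"
        return score, recommendation

    def final_decision(app: Application, officer_approves: bool) -> str:
        """The officer makes the final decision, informed by the system."""
        score, recommendation = system_assessment(app)
        print(f"System output: score={score:.1f}, {recommendation}")
        return "approved" if officer_approves else "denied"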

When the directive does not apply in decision-making

Not all government decisions are administrative. Many decisions that the government makes do not fit the criteria of an administrative decision under the directive because they do not directly affect the legal rights, privileges or interests of clients.

The automation of non-administrative decisions is not subject to the directive. Consult your departmental legal services when you need help determining whether a decision is administrative.

Replaces or assists judgment

The directive applies to a broad category of automated systems, which could range from deterministic rules-based systems to advanced AI systems.

The directive applies to more than just artificial intelligence systems

Automated decision systems could rely on rules, regression, advanced analytics, AI, generative AI, machine learning, neural networks, robotic process automation, natural language processing and many other techniques.

The directive is therefore not limited to AI systems. When the automated system is used to make or assist in making an administrative decision or a related assessment about a client, that system falls within the scope of the directive.

How to know if a system is an automated decision system

To determine whether a system is an automated decision system as defined in Appendix A of the directive, consider whether the system assists or replaces human judgment, discretion or critical thought. Humans usually apply rules with reasoning and an understanding of the context in which the decision is being made. If the system replaces or augments that process, it is in the scope of the directive.

Replacing or assisting judgment

Systems, even seemingly simple ones, could be within scope when they:

  • are designed to replace or automate judgment, whether they make the final decision or assist in the decision process at any point
  • rely on rules or criteria that were developed by humans using their experience, judgment, discretion and critical thought
  • develop the rules or criteria on their own, or use a combination of system-developed and human-developed rules or criteria

Not replacing or assisting judgment

Systems are not within scope when they:

  • automate calculations
  • implement straightforward criteria as defined in laws or regulations
  • follow legislatively defined rules in a manner that does not replace judgment

An example of a system that does not fall within scope is one that limits eligibility for a program to individuals 18 years of age or older, where this is a clear requirement under the regulations.
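
To make the contrast concrete, here is a minimal sketch of such a rule; apart from the age threshold above, the names are hypothetical. A system limited to mechanical checks like this does not replace judgment:

    MINIMUM_AGE = 18  # clear eligibility criterion set out in the regulation

    def meets_age_requirement(age_in_years: int) -> bool:
        """Applies the legislated age threshold mechanically.

        A system limited to straightforward checks like this does not
        replace or assist human judgment, so it falls outside scope.
        """
        return age_in_years >= MINIMUM_AGE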

Use in a production environment

The directive applies only to systems that will be deployed or are in production.

Research and experimentation are out of scope so that the directive's requirements do not unduly burden researchers or discourage exploration of technological capabilities in the federal government.

Deployment of systems in scope of the directive

Systems are considered in production when they are performing their intended purpose on client information in the real world, such as when they are:

  • fully deployed in a production environment to support departmental operations
  • deployed in a production environment at a smaller scale, such as in a pilot (including beta testing and client experience testing) where outputs contribute to decisions that impact some clients

Deployment of systems outside the scope of the directive

Systems are not in scope when they are used solely for research and experimentation purposes and are not intended to be used to automate operations, such as when they are:

  • run on sample data as part of the development, testing or proof of concept phases
  • being explored in a sandbox or research lab where results are not used in decisions that impact clients

Although systems used solely for research and experimentation purposes do not fall within the scope of this directive, there are other requirements that must still be followed, such as those relating to privacy, security, information management, and values and ethics.

Ensuring compliance

When departments develop and test systems that will fall within the scope of the directive, they must plan for compliance. They should familiarize themselves with the requirements of the directive and take steps to meet them.

For example, departments should test not only the functionality, efficacy, security and validity of the system, but also the mitigation measures that address risks identified from early AIAs, privacy impact assessments, security assessments, GBA Plus and so on.

Examples of system activities that are in or out of scope of the directive

In scope
  • Triaging client applications based on their complexity as determined through machine-defined criteria
  • Examining a financial transaction to estimate the probability of fraud
  • Generating an assessment, score or classification about the client
  • Generating a summary of relevant client information for officers to determine eligibility to a program
  • Presenting information from multiple sources to an officer (such as by data matching and fuzzy matching)
  • Using facial recognition or other biometric technology to target subjects for additional scrutiny
  • Recommending one or multiple options to the decision maker
  • Using an AI resumé-screening tool or skills-based assessment tool to filter top-performing candidates to the interview stage in a recruitment process
  • Reviewing client applications for benefits and recommending approval or denial to an officer
  • Chatbot that recommends a course of action to officers
Out of scope
  • Triaging client applications based on legislatively defined rules that do not require judgment or discretion to interpret (such as age and province of residence)
  • Repetitive tasks of a clerical nature such as checking that all mandatory fields are filled in before moving the form on to the next step in processing
  • Chatbot that directs a client to government websites
  • Automatically sending an email to an employee or inbox based on defined keywords
  • Using generative AI to produce a summary of an unclassified news article for dissemination to team members
  • Email spam filters
  • Automating cost calculations based on published prices and formulas
  • Actions that do not impact the legal rights, privileges or interests of individuals or employees such as research, program evaluation, brainstorming and drafting documents

Seeking an exception to the directive

Under subsections 4.1.1.2 and 4.1.1.2.3 of the Directive on Service and Digital, exceptions to requirements under directives in the service and digital policy suite must be sought through the Government of Canada’s Enterprise Architecture Review Board (GC EARB). Identify the requirement(s) at issue and develop a rationale explaining why you are unable to comply with them. Once approved by the assistant deputy minister responsible for the automated decision-making project and the departmental chief information officer, email the exception request to the GC EARB at EA.AE@tbs-sct.gc.ca.

Approach to national security

TBS’s direction on how to apply the directive to National Security Systems (NSSs) is evolving. NSSs constitute a broad category covering a wide range of systems that support a variety of security goals and carry varying levels of risk and classification.

Although NSSs are scoped out of the directive as set out in section 6.3 of the Policy on Service and Digital, complying with the directive when using NSSs offers benefits to departments and their clients. Compliance helps build public trust and ensures the proper operation of systems through measures such as bias testing and monitoring of outcomes.

Departments using NSSs in administrative decision-making should comply with as many requirements of the directive as possible without compromising national security. For example, a department could comply with all requirements of the directive except specific transparency requirements that would result in the inappropriate disclosure of sensitive information. Completion and publication of an AIA is possible in some cases, as the tool is designed to avoid the release of sensitive information.

Departments can perform case-by-case evaluations of automated decision systems in national security contexts to understand their policy implications. Departments should comply with all requirements except for specific ones where compliance can be reasonably expected to be injurious to national security.

Contact us

TBS’s Responsible Data and AI team is available at ai-ia@tbs-sct.gc.ca to advise departments on how to apply the directive and help complete AIAs.

© His Majesty the King in Right of Canada, as represented by the President of the Treasury Board, 2024
ISBN: 978-0-660-72712-7
