Guide on the use of generative artificial intelligence

On this page

Overview

Generative artificial intelligence (AI) tools offer many potential benefits to Government of Canada (GC) institutions. Federal institutions should explore potential uses of generative AI tools for supporting and improving their operations, but they should not use these tools in all cases. Institutions must be cautious and evaluate the risks before they start using them. They should also limit the use of these tools to instances where they can manage the risks effectively.

This document provides guidance to federal institutions on their use of generative AI tools. This includes instances where federal institutions are deploying these tools. It provides an overview of generative AI, identifies challenges relating to its use, puts forward principles for using it responsibly, and offers policy considerations and best practices.

This guide also seeks to raise awareness and foster coordination among federal institutions. It highlights the importance of engaging key stakeholders before deploying generative AI tools for public use and before using them for purposes such as service delivery.

Stakeholders include:

  • legal counsel
  • privacy and security experts
  • the Office of the Chief Information Officer at the Treasury Board of Canada Secretariat (TBS)
  • bargaining agents
  • advisory groups
  • clients of GC services

The guide complements and supports compliance with many existing federal laws and policies, including those in the areas of privacy, security, intellectual property, and human rights.

This second version of the guide incorporates feedback from internal stakeholders and external experts.

It will be updated regularly to keep pace with regulatory and technological change.

To support public servants considering the use of these tools in their daily work, a concise summary of this guide offering do’s and don’ts is also available.

What is generative AI?

The Directive on Automated Decision-Making defines AI as information technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours or solving problems.

Generative AI is a type of AI that produces content such as text, audio, code, videos and images. Footnote 1 This content is produced based on information the user inputs, called a “prompt,” which is typically a short instructional text.

Examples of generative AI tools:

  • large language models (LLMs) such as ChatGPT, Copilot and LLaMA
  • GitHub Copilot and FauxPilot, which produce code based on text prompts
  • DALL-E, Midjourney and Stable Diffusion, which produce images from text or image prompts

These examples include both proprietary and open-source models. Both types have their own benefits and drawbacks in terms of cost, performance, scalability, security, transparency and user support.

In addition, generative AI models can be fine-tuned, or custom models can be trained and deployed to meet an organization’s needs. Footnote 2

Many generative AI models have been trained on large volumes of data, including publicly accessible data from the Internet. Based on the training data, these models generate content that is statistically likely in response to a prompt, Footnote 3 for example, by predicting the next word in a sentence. Techniques such as human supervision and reinforcement learning can also be applied to further improve the outputs, Footnote 3 and users can provide feedback or modify their prompt to refine the response. Generative AI can therefore produce content that looks as though a human produced it.

Generative AI can be used to perform or support tasks such as:

  • writing and editing documents and emails
  • generating images for presentations
  • coding tasks, such as debugging and generating templates and common solutions
  • summarizing information
  • brainstorming
  • research, translation and learning
  • providing support to clients (for example, answering questions, troubleshooting)

Challenges and opportunities

Before federal institutions start using generative AI tools, they must assess and mitigate certain ethical, legal and other risks. For example, these tools can generate inaccurate content; amplify biases; and violate intellectual property, privacy and other laws. Further, some tools may not meet federal privacy and security requirements. When institutions use these tools, they must protect personal information and sensitive data. As well, because these tools generate content that can look as though a human produced it, people might not be able to tell whether they are interacting with a person or a tool. The use of these tools can also affect the skill and judgment of public servants and can have environmental costs. The development and quality assurance practices of some generative AI models have also been associated with socio‑economic harms such as exploitative labour practices. Footnote 4 For example, data‑labelling or annotation requires extensive manual input, and this work is often outsourced to countries where workers are paid very low wages. Footnote 5

Generative AI tools rely on models that pose various challenges, including limited transparency and explainability. They also rely on training data that is difficult to access and assess. These challenges stem in part from large model sizes, high volumes of training data, and the proprietary nature of many tools. In addition, the outputs of the models are constrained by the prompts users enter and by the training data, which may lack context that is not publicly accessible on the Internet.

Training data could also be outdated. For example, ChatGPT-3.5 is trained on data up to early 2022, so it has a limited ability to provide information on events or developments after that. Footnote 6 Footnote 7 Training data can also be biased and lack a diversity of views, given that the Internet is frequently the data source. These biases can then be reflected in the outputs of the tools.

The performance of these tools can also vary from language to language. Models in English and other languages that are well represented in the training data often perform better than models in languages that are less well represented. Footnote 8 As well, these tools have limitations that reduce their utility for certain purposes; for example, they tend to perform inconsistently on tasks related to emotional or nuanced language. Footnote 9 Footnote 10

Generative AI could also pose risks to the integrity and security of federal institutions, given its potential misuse by threat actors. Federal institutions should be aware of these risks and consider the best practices recommended by the Canadian Centre for Cyber Security in their guidance Generative Artificial intelligence (AI) - ITSAP.00.041.

Although these tools present challenges and concerns, they also offer potential benefits to public servants and federal institutions. For example, they can enhance productivity through increased efficiency and quality of outputs in analytical and writing tasks in several domains. Footnote 11 Footnote 12 More analysis is needed to determine the most appropriate and beneficial uses of these tools by federal institutions. Experimentation, coupled with performance measurement and analysis, is needed to better understand potential gains and trade-offs and to inform the government’s approach to the use of these tools.

Recommended approach

In this section

Federal institutions should explore how they could use generative AI tools to support their operations and improve outcomes for Canadians. Given the challenges and concerns relating to these tools, institutions should assess and mitigate risks and use them only for activities where they can manage the risks effectively. With the growing adoption of these technologies in different sectors and by the public, exploration by federal institutions will help the government understand the risks and opportunities of these tools and keep pace with the evolving digital landscape.

The risks of using these tools depend on what they will be used for and on what mitigation measures are in place.

Examples of low‑risk uses:

  • writing an email to invite colleagues to a team‑building event
  • editing a draft document that will go through additional reviews and approvals

Examples of higher‑risk uses (such as uses in service delivery):

  • deploying a tool (for example, a chatbot) for use by the public
  • generating a summary of client information

Federal institutions should experiment with low-risk uses before they consider higher‑risk uses. They should always tailor best practices and risk‑mitigation measures to each use.

When deciding whether to use generative AI tools, public servants should refer to the guide to ethical decision-making (section 6 of Values Alive: A Discussion Guide to the “Values and Ethics Code for the Public Sector”).

To maintain public trust and ensure the responsible use of generative AI tools by federal institutions, institutions should align with the “FASTER” principles that TBS has developed:

  • Fair: ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations; engage with affected stakeholders before deployment
  • Accountable: take responsibility for the content generated by these tools and the impacts of their use. This includes making sure generated content is accurate, legal, ethical, and compliant with the terms of use; establish monitoring and oversight mechanisms
  • Secure: ensure that the infrastructure and tools are appropriate for the security classification of the information and that privacy and personal information are protected; assess and manage cyber security risks and robustness when deploying a system
  • Transparent: identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; provide information on institutional policies, appropriate use, training data and the model when deploying these tools; document decisions and be able to provide explanations if tools are used to support decision-making
  • Educated: learn about the strengths, limitations and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs
  • Relevant: make sure the use of generative AI tools supports user and organizational needs and contributes to better outcomes for clients; consider the environmental impacts when choosing to use a tool; identify appropriate tools for the task; AI tools aren’t the best choice in every situation

For assistance in determining the appropriate use of these tools, public servants should consult relevant stakeholders such as:

  • their institution’s legal services, privacy and security experts
  • the offices of the chief information officer and chief data officer for their institution
  • their institution’s diversity and inclusion specialists

The following can also provide support:

  • the Canadian Centre for Cyber Security
  • Statistics Canada
  • the Office of the Chief Information Officer of Canada (part of TBS)

Responsibilities for federal institutions

Federal institutions should evaluate generative AI tools for their potential to help employees, not replace them. Institutions are encouraged to responsibly explore uses and to enable employees to optimize their work while ensuring that all uses of these tools are ethical, align with the FASTER principles and comply with policies and laws.

In evaluating these tools and exploring how they could use them, institutions have a number of responsibilities including:

  • ensuring that employees can access and take training on the effective and responsible use of generative AI tools
  • supporting employees in improving their knowledge of topics such as detecting biased and inaccurate content
  • providing access to secure generative AI tools that meet government information, privacy and security requirements
  • enabling access to online generative AI tools, in alignment with the Policy on Service and Digital (requirement 4.4.2.5) and the Directive on Service and Digital Appendix A: Examples of Acceptable Network and Device Use (Non-Exhaustive List)
  • implementing oversight and performance management processes to monitor the impacts of these tools and to make sure both the tools themselves and their uses comply with applicable laws and policies and align with the FASTER principles, particularly during deployment
  • engaging with employees to understand their needs
  • consulting with stakeholders such as end-users, client representative groups and bargaining agents before high-risk deployments

Institutions should have effective change management practices so that they can help employees improve their current skills and develop new ones.

Managers need to understand what these tools can and can’t be used for and should have realistic expectations of how the tools might help improve employees’ productivity.

Institutions should also evaluate the risks and opportunities associated with using these tools and develop guidance for their institution that aligns with this guide and is tailored to their organization’s context and needs.

Policy considerations and best practices

In this section

Does the Directive on Automated Decision-Making apply?

The Directive on Automated Decision-Making applies to automated systems that are used to support or make administrative decisions, including systems that rely on AI. Like other AI systems, generative AI systems have capabilities that allow them to make assessments or determinations about clients as part of the service delivery process. For example, a generative AI system could be used to summarize a client’s data or to determine whether they are eligible for a service. Footnote 13 These administrative uses can affect how an officer views and decides on a case, which has implications for the client’s rights, interests and privileges.

The directive applies to generative AI systems when they are used to make or inform administrative decisions. Institutions must make sure they meet the requirements of the directive, which include completing the Algorithmic Impact Assessment (AIA) and other requirements that support transparency, quality assurance and procedural fairness.

However, generative AI may not be suited for use in administrative decision-making. The design and function of generative models can limit institutions’ ability to ensure transparency, accountability and fairness in decisions made by these systems or informed by their outputs. As well, the terms of use for the generative AI products of many technology companies prohibit using their products to make high-impact decisions. For example, OpenAI does not allow the use of their models for decisions about health conditions, credit, employment, educational institutions, or public assistance services; law enforcement and criminal justice; and migration and asylum. Footnote 14 Similarly, Google prohibits use of their generative AI services from making “automated decisions in domains that affect material or individual rights or well-being.” Footnote 15 These limitations underscore the importance of complying with the directive’s requirement to consult legal services during the design phase of an automation project. This requirement helps make sure federal institutions understand the legal risks of administrative uses of generative AI systems both for themselves and for their clients.

Not all uses of generative AI are subject to the directive. For example, using generative AI tools in research or to brainstorm, plan, or draft routine correspondence falls outside the scope of the directive. However, such non-administrative uses are still subject to the laws and policies that govern federal institutions.

Privacy considerations

All personal information handled by federal institutions is subject to the requirements of the Privacy Act and related policy instruments. Personal information is defined as information about an identifiable individual that is recorded in any form. The act and the privacy policy suite include requirements for when and how personal information is collected, created, used or disclosed using a generative AI system.

The privacy risks will depend on how the generative AI system is used, how it processes information about individuals, and whether it is publicly accessible online or is deployed on the government’s secure network.

Public servants must not input personal information into publicly available online generative AI tools. Doing so would constitute an unlawful disclosure of the information because the supplier might store a copy of the information. Public servants may, however, input personal information into generative AI systems that are controlled or configured by the government when the appropriate privacy and security controls are in place. When using a generative AI tool controlled by the institution, employees are responsible for following all privacy and protection of personal information requirements as they are for any other system.

If the output of a generative AI tool results in the creation of new personal information, the institution must manage the new information according to privacy requirements. For example, if an institution deploys a generative AI tool on its network to assist in assessing risk, the risk level attributed to individual applications would constitute personal information. As another example, a summary of an application for a service or benefit produced by a generative AI tool could constitute new personal information. In both of these examples, the risk level and the summary would require the appropriate privacy protections and be subject to other requirements for handling personal information.

When institutions are considering procuring, developing or deploying a generative AI tool, privacy officials must be consulted to determine whether a Privacy Impact Assessment is needed to identify and mitigate privacy risks ahead of deployment. When institutions are building IT solutions that use generative AI, they must make sure they meet privacy requirements. The Digital Privacy Playbook contains more information on these requirements and on how to incorporate privacy guidance into IT solutions that use generative AI.

De-identification and the use of synthetic data can help institutions reduce the impact and likelihood of privacy breaches when developing, training, using and evaluating the outputs of generative AI tools. Privacy Implementation Notice 2023‑01: De-identification contains more information about these privacy preserving techniques. Other safeguards such as administrative controls, access rights, and auditing are also important to reduce the risk of inadvertent disclosure or unauthorized access, re‑identification or inference, and to generally preserve the privacy of individuals.

Documentation requirements

Under the Directive on Service and Digital, GC employees are responsible for documenting their activities and decisions of business value. For example, records of decisions to develop or deploy a generative AI tool and steps taken to ensure that the tool produces appropriate and accurate outputs; and when the generative AI tool is used in a way that would trigger the Directive on Automated Decision Making. Appendix E (Identifying and Recognizing Information and Data of Business Value) of the Guideline on Service and Digital can assist institutional officials with this process. Library and Archives Canada’s Generic Valuation Tools (GVT) may also be helpful when they cover a business activity within the scope of which generative AI is used.

The context in which generative AI is used must be considered when evaluating the relative business value of the associated information and data. The business value will inform documentation requirements. For example, the use of generative AI tools to assist with daily tasks such as the following is more likely to yield transitory information:

  • drafting emails or documents
  • generating images or assisting in creating presentations
  • supporting coding activities by computer programmers
  • brainstorming ideas or research information
  • translating text between languages

Under the Directive on Service and Digital, the institutional chief information officer, in collaboration with other institutional officials, as necessary, is responsible for the open and strategic management of information and data and for documenting corresponding institutional information and data life cycle management practices. The Guideline for Employees of the Government of Canada: Information Management (IM) Basics, Guidance on Data Quality and Guidance on Metadata Life Cycle Management offer related guidance and best practices.

When institutions deploy generative AI tools to support or make administrative decisions, additional documentation requirements related to transparency, quality assurance and reporting apply. These requirements are set out in subsections 6.2, 6.3 and 6.5 of the Directive on Automated Decision-Making.

All documentation surrounding the use, development and deployment of generative AI systems under the control of a government institution is subject to the Access to Information Act and the Library and Archives of Canada Act and must be retained and disposed of accordingly.

Potential issues and best practices

This section provides an overview of several areas of risk and sets out best practices for the responsible use of generative AI in federal institutions. In addition to the best practices for users of generative AI in the federal government, there are best practices specific to federal institutions that develop or deploy these tools to ensure that risks are appropriately assessed and mitigated, and to distinguish between the responsibilities of users and developers. Not all best practices apply in every situation, so federal institutions are encouraged to tailor their approach to each use.

a. Protection of information

Issue: some generative AI tools do not meet government information security requirements

The protection of personal, classified, protected and proprietary information is critical when using generative AI systems. The suppliers of some generative AI tools may inspect input data or use it to further train their models, which could result in privacy and security breaches. Risks can also arise from input data being stored on servers not controlled by the GC, where data might be retained for longer than necessary, made accessible, further distributed, or vulnerable to a data breach. Footnote 16 Some tools, public or otherwise, may not meet the privacy and security requirements set out in federal law and policy. Developing or deploying generative AI systems on institutions’ secure networks may mitigate some risks, and institutions should seek approval from their internal officials, such as their chief security officer, designated official for cyber security and chief information officer. As part of these approvals, a formal risk assessment and acceptance is recommended because additional mitigation measures may be needed to lower risks to an acceptable level.

Best practices for all users of generative AI in federal institutions
  • Don’t enter sensitive or personal information into any tools not managed by the GC.
  • Don’t submit queries on non-GC managed tools that could undermine public trust if they were disclosed. See Appendix B of the Directive on Service and Digital for examples of unacceptable network and device uses.
  • Understand how a system uses input data (for example, whether it’s used as training data and whether it’s accessible to suppliers).
  • Ask legal services, the institutional chief security officer, and the privacy team to review a supplier’s terms of use, privacy policy and other legal documents before using any system to process sensitive or proprietary information.
  • Use infrastructure and tools that are appropriate for the security classification of the information, in accordance with the Directive on Security Management.
  • Seek the approval of the institutional chief security officer before using, procuring or deploying generative AI for protected or other sensitive information.
  • Consider the requirements for information and data residency in the Directive on Service and Digital and related guidance in the Guideline on Service and Digital.
  • Use the “opt-out” feature, where possible, to ensure that prompts are not used to train or further develop an AI system.
Additional best practices for federal institutions deploying a generative AI tool
  • Conduct regular system testing before and during the operation of a system to ensure that risks of potential adverse impacts, as well as risks associated with inappropriate or malicious use of the system, are identified and mitigated.
  • Apply more in-depth testing methods to identify and mitigate vulnerabilities in instances where systems will be made publicly available. This should include penetration testing, adversarial testing or red teaming.
  • Plan independent audits for assessing generative AI systems against risk and impact frameworks. Leverage existing risk management frameworks, when appropriate. See the guidance on the Risk management page.
  • Develop a plan to document and respond to cyber security events and incidents, aligned with the Government of Canada Cyber Security Event Management Plan (GC CSEMP). Update the plan as needed to address the evolving threat environment.
  • Consider best practices such as the National Cyber Security Centre’s Guidelines for Secure AI System Development and the National Security Agency’s Deploying AI Systems Securely.

b. Bias

Issue: generated content may amplify biases or other harmful ideas that are dominant in the training data

Generative AI tools can produce content that is discriminatory or not representative, or that includes biases or stereotypes (for example, biases relating to multiple and intersecting identity factors such as gender, race and ethnicity). Footnote 17 Footnote 18 Footnote 19 Many generative models are trained on large amounts of data from the Internet, which is often the source of these biases. For example, training data is likely to reflect predominant historical biases and may not include perspectives that are less prevalent in the data or that have emerged since the model was trained. Footnote 17 Other sources that may contribute to biased content include data filtering, which can amplify the biases in the original training set, Footnote 20 framing of the prompt, Footnote 21 and model bias. Widespread use of these technologies could amplify or reinforce these biases and dominant viewpoints, and lead to less diversity in ideas, perspectives and language, Footnote 17 Footnote 22 as well as potential harms.

Best practices for all users of generative AI in federal institutions
  • Learn about bias, diversity, inclusion, anti-racism, and values and ethics to improve your ability to identify biased, non-inclusive or discriminatory content.
  • Review generated content to make sure it aligns with GC commitments, values and ethics, and meets legal obligations. This review includes assessing for biases and stereotypical associations.
  • Formulate prompts to generate content that provides holistic perspectives and minimizes biases.
  • Strive to understand the data that was used to train the tool, for example, where it came from, what it includes, and how it was selected and prepared.
  • Clearly indicate when content has been produced by generative AI.
Additional best practices for federal institutions deploying a generative AI tool
  • Consider potential biases and approaches to mitigating them from the planning and design stage, including using gender-based analysis plus (GBA Plus) to understand how deploying a generative AI tool might impact different population groups. Don’t deploy the tool if you can’t manage the risk of biased outputs.
  • Consult GBA Plus experts in your organization and consult people who would be directly impacted by the deployment of the tool (such as clients) in the planning and design stages, as well as in the evaluation and auditing stages, to identify impacts of the use of these tools on different population groups and to develop measures to address any negative impacts.
  • Test for biases in the data, model and outputs before deploying a tool, and on an ongoing basis.
  • Regularly monitor the system for adverse impacts after deployment.
  • Reflect population diversity when deploying virtual assistants, for example, by varying the gender of the assistant.

c. Quality

Issue: generated content may be inaccurate, incoherent or incomplete

Generative AI technologies can produce content that appears to be well developed, credible and reasonable but that is in fact inaccurate, nonsensical or inconsistent with source data. Footnote 23 Footnote 6 This content is sometimes referred to as a “hallucination.” Also, content generated by AI tools may not provide a holistic view of an issue. Instead, it may focus on prevalent perspectives in the training data. Footnote 17 Content might be out of date, depending on the time period the training data covers and whether the system has live access to recent data. There may also be differences in the quality of the outputs in English and French, depending on the model, task and prompt. The output quality in each language should be evaluated to ensure compliance with official language requirements.

The risks associated with inaccurate content will vary depending on the context and should be assessed. For example, using generative AI tools to learn about a topic may produce incorrect information or non‑existent sources. Footnote 24 If the results were to be used in decision-making, they could lead to unfair treatment of some people or to misguided policy. As well, when considering the use of these tools in public‑facing communications, it is critical that the government not share inaccurate information, which would contribute to misinformation and erode public trust.

Best practices for all users of generative AI in federal institutions
  • Clearly indicate that you have used generative AI to develop content.
  • Don’t consider generated content as authoritative. Review it for factual and contextual accuracy by, for example, checking it against information from trusted sources or by having a colleague with expertise review the response.
  • Verify personal information created using generative AI to make sure it is accurate, up to date and complete.
  • Assess the impact of inaccurate outputs. Don’t use generative AI when factual accuracy or data integrity is needed.
  • Strive to understand the quality and source of training data.
  • Consider your ability to identify inaccurate content before you use generative AI. If you can’t confirm the quality of the content, don’t use it.
  • Learn how to create effective prompts and provide feedback to refine outputs to minimize the generation of inaccurate content.
  • Don’t use generative AI tools as search engines unless sources are provided so that you can verify the content.
Additional best practices for federal institutions deploying a generative AI tool
  • Test performance and robustness across a variety of uses before deployment. This includes making sure the quality of tools and outputs meets official languages requirements.
  • Assess training data quality when refining models.
  • Use grounding and prompt engineering so the models build responses from only the information you provide and control.
  • Notify users that they are interacting with generative AI.
  • When content is generated by AI, include links to authoritative sources to provide users with additional context and foster transparency.
  • Provide information about the appropriate use of the tools, capabilities and limitations of the system, risk mitigation measures, source of training data and how models were developed.
  • Monitor the performance of generative AI tools on an ongoing basis to understand potential impacts and make sure they are meeting performance targets. Document problems, pause deployment, and update the tool if performance levels are not being achieved.

d. Public servant autonomy

Issue: overreliance on AI could unduly interfere with judgment, stifle creativity and erode workforce capabilities

Overreliance on generative AI tools could interfere with individual autonomy and judgment. For example, some users may be prone to uncritically accept system recommendations or other outputs, which could be incorrect. Footnote 25 Footnote 26 Overreliance on AI systems can be a sign of automation bias, which is the tendency to favour results generated by automated systems, even in the presence of contrary information from non-automated sources. Footnote 25 As well, confirmation bias could contribute to overreliance Footnote 25 because the outputs of generative AI systems can reinforce a user’s preconceptions, especially when prompts are written in a way that reflects their assumptions and beliefs. Footnote 27 Excessive dependence on AI systems may result in a decline in critical thinking. This could limit diversity in thought, stifle creativity and innovation and lead to partial or incomplete analyses. As such, overreliance on AI may impede employees’ abilities to build and maintain the skills they need to perform tasks that are assigned to generative AI systems, which could potentially erode workforce capabilities.

Best practices for all users of generative AI in federal institutions
  • Consider whether you need to use generative AI to meet user and organizational needs.
  • Consider the abilities and limits of generative AI when assigning tasks and reviewing system outputs.
  • Build your AI literacy so that you can critically assess these tools and their outputs.
  • Use generative AI tools as aids, not as substitutes. Do not outsource a skill that you do not understand or possess.
  • Form your own views before seeking ideas or recommendations from AI tools.
  • Use neutral wording when formulating prompts to minimize biased outputs.
  • Always review content generated by AI, even if the system seems to be reliable in providing accurate responses.

e. Legal risks

Issue: generative AI poses risks to human rights, privacy, intellectual property protection, and procedural fairness

The government’s use of generative AI systems poses risks to the legal rights and obligations of federal institutions and their clients. These risks arise from the data used to train AI models, the way systems process input data, and the quality of system outputs.

The use of copyright-protected materials like articles, books, code, paintings or music by suppliers or federal institutions to train AI models may infringe on intellectual property rights. The use or reproduction of the outputs generated by these models could also infringe on such rights if they contain material that is identical or substantially similar to a copyright-protected work. Further, the ownership of content created by or with the help of generative AI is uncertain. Privacy rights could also be at risk because data used to train generative AI models could include unlawfully collected or used personal information, such as personal information obtained from publicly accessible online sources.

Risks could also arise from the opacity of generative AI models and their potential for producing inaccurate, biased or inconsistent outputs. This opacity makes it difficult to trace and understand how the AI system produces outputs, which can undermine procedural fairness in instances where a federal institution is obliged to provide clients with reasons for administrative decisions, such as decisions to deny benefits. The quality of AI outputs can also impact individuals’ legal rights. For example, biased outputs could lead to discrimination in services, potentially violating human rights.

These risks extend beyond decision-making scenarios. When federal institutions use generative AI tools to help the public find information (for example, chatbots on institutional websites) or to produce public communications, there’s a risk that these tools will generate inappropriate content or misinformation that could contribute to or cause harm for which the government could be liable.

Best practices for all users of generative AI in federal institutions
  • Consult your institution’s legal services about the legal risks of deploying generative AI tools or using them in service delivery. The consultation could involve a review of the supplier’s terms of use, copyright policy, privacy policy and other legal documents.
  • Comply with the Directive on Automated Decision-Making when using generative AI in administrative decision-making.
  • Check whether system outputs are identical or substantially similar to copyright-protected material (look at the Frequently asked questions for more information on how to do this). Give proper attribution, where appropriate, or remove the problematic material to minimize the risk of intellectual property infringement.
  • Consult designated officials on the licensing and administration of Crown copyright if you are planning to include outputs in public communications, in accordance with the Procedures for Publishing.
  • Evaluate system outputs for factual inaccuracies, biases or harmful ideas that may conflict with GC values.
  • Keep up to date on legal and policy developments related to AI regulation.
Additional best practices for federal institutions deploying a generative AI tool
  • Verify the legality of the method used to obtain data for training AI models and make sure you have permission to use the data for this purpose.
  • Document the data provenance and that you have authorization from the copyright owner.
  • Where feasible, train your model using open‑source data that you have confirmed you may use in this way.
  • Be transparent when you use generative AI. For example, notify users if they are interacting with a system rather than a human. Where relevant, include a disclaimer to minimize liability risks and increase transparency.

f. Distinguishing humans from machines

Issue: people may not know that they are interacting with an AI system, or they may wrongly assume that AI is being used

Conversational agents or chatbots that use generative AI can produce responses that are so human‑like that it may be difficult to distinguish them from those of a real person. Footnote 28 As a result, clients may think they are interacting with a human. Similarly, clients may think an email they have received was written by a person when it was actually generated by an AI tool. On the other hand, clients might think they are interacting with an AI tool when they are actually dealing with a real person. Transparency about whether a client is interacting with a person or a chatbot is essential to ensure that the client is not misled and to maintain trust in government.

Best practices for all users of generative AI in federal institutions
  • Clearly communicate when and how you are using AI in interactions with the public.
  • Inform users when text, audio or visual messages addressed to them are generated by AI.
Additional best practices for federal institutions deploying a generative AI tool
  • Offer alternative non-automated means of communicating.
  • Use watermarks to identify content that is generated by AI.
  • Publish information about the system, such as a plain-language description of how it works, why your institution is using it and what steps were taken to ensure the quality of the outputs.

g. Environmental impacts

Issue: the development and use of generative AI systems can have significant environmental costs

The development and use of generative AI systems can be a significant source of greenhouse gas (GHG) emissions and water usage. These emissions come not only from the compute used to train and operate generative models but also from the production and transportation of servers that support AI programs. In addition, data centres are energy-intensive and consume vast quantities of water for on-site cooling and off-site electricity generation. Footnote 29 Although generative AI has the potential to help combat climate change, its use must be balanced against the need for swift and drastic action to reduce global greenhouse gas emissions and avert irreversible damage to the environment. Footnote 30

Best practices for all users of generative AI in federal institutions
  • Use generative AI tools hosted in net-zero or carbon-neutral data centres.
  • Use generative AI tools only when relevant to program objectives and desired outcomes.
  • Understand that the environmental impact of generative AI comes not only from its use but also from the training of the models and the manufacturing of the hardware components that run data centres, as well as the mining, manufacturing, assembly, transportation, and disposal of them.
Additional best practices for federal institutions deploying a generative AI tool
  • Find out whether your AI supplier has set any greenhouse-gas reduction targets. Footnote 31
  • Conduct an environmental impact assessment as part of the proposal to develop or procure generative AI tools. Make sure any decision to procure these tools is made in accordance with the Policy on Green Procurement.
  • Encourage developers to be transparent about sustainability by giving preference to those that clearly communicate the environmental impacts of their AI systems through reports on compliance with GHG Protocols. Footnote 32
  • Use carbon‑awareness tools to estimate the carbon footprint of systems before training or production. Footnote 33

Use of this guide

Federal institutions are encouraged to use this guide as they continue to develop their own guidance on the use of generative AI. This guide and the community will continue to evolve.

Additional support

Information and guidance on specific uses of generative AI

Courses and events

The Canada School of Public Service offers different courses and events on AI, including Using Generative AI in the Government of Canada.

Additional resources

Federal institutions can also contact the following for additional support:

  • Communications Security Establishment (including the Canadian Centre for Cyber Security)
  • Statistics Canada

Frequently asked questions

   
Can I use generative AI to draft emails or briefing notes?

Yes. Depending on the context. If you use a generative AI tool to draft an email or briefing note, you are responsible for making sure that:

  • the data you input into the tool doesn’t include personal, protected, classified or other sensitive information, unless you have confirmed that the tool is appropriate for the security classification of the information
  • generated content is accurate, non-partisan, unbiased, and doesn’t violate intellectual property laws
  • you inform management that you used a generative AI tool in the drafting process
Can I use generative AI to develop content for public communications (for example, web posts, social media)?

Use caution.

When you use generative AI to develop content, you are responsible for making sure that:

  • the content is accurate, clear, non‑partisan and unbiased
  • you have the necessary permissions to reproduce, adapt, translate or publish third‑party material
  • the content complies with intellectual property laws
  • you have informed the public of any significant use of generative AI in the production of content
  • the outputs are trusted, given the potential reach and impact of public communications
Can I use generative AI for programming tasks?

Yes, but you must consider the security classification of the code. Also, when it comes to code generation, some generative AI tools may produce content that violates the open‑source licences of the source code they were trained on. To address this issue, use tools to identify potential matches in public code repositories or limit the use of generative AI to tasks like debugging or code explanation.

Can I use generative AI when developing policy?

Yes, but be mindful of the strengths and limitations of generative AI tools, and tailor the tasks you assign to them accordingly. You can use these tools to help with brainstorming during policy development, but use caution and validate the generated content if you will be using it as evidence. Don’t use generative AI tools to recommend, make or interpret policy.

When deciding on policy positions, make your own value judgments, in consultation with the relevant stakeholders and consistent with applicable laws. Be transparent and vigilant about any significant use of generative AI during the policy process, including in research and stakeholder engagement. Make sure the prompts used in such contexts don’t include any information that would pose legal or reputational risks to the government.

Can I use generative AI to automate assessments, recommendations or decisions about clients?

Use caution.

The use of generative AI to make or inform decisions must comply with the Directive on Automated Decision‑Making, which seeks to ensure transparency, accountability and fairness in decisions made or informed by automated systems such as those that use generative AI.

If you are considering using generative AI in administrative decision‑making, consult with stakeholders in your institution, including the following, at the planning and design stage:

  • the legal services team
  • the offices of the chief security officer, chief information officer and chief data officer
  • GBA+ experts
  • the privacy team

They will help you make sure you can use generative AI for the identified purpose and will help you identify and mitigate risks.

As well, make sure you:

  • understand how the tool produces its outputs and can find the data it relied on
  • assess outputs for factual accuracy and undue bias toward clients
  • consider potential variation in outputs produced in response to similar prompts, which could lead to inequalities in the treatment of clients
  • consider how you will meet transparency and explainability requirements
How do I check whether system outputs are identical or substantially similar to copyright-protected material?

You can check this by doing an Internet search to compare the output from the generative AI tool with already published material. The risk of copyright issues depends on what you are asking the generative AI system to do, the subject matter, and how you will use the output.

You can lower the risk of a having problem by:

  • using generative AI for tasks that have a lower likelihood of including copyright-protected information, such as editing content that you have drafted yourself as opposed to generating new content
  • using materials where copyright issues are unlikely to be present, for example, a publicly available or paid-for stock image, rather than using generative AI

Your level of diligence in making sure you don’t have a copyright issue also depends on what you are asking the generative AI system to do, the subject matter, and how you will use the output. If the output will be publicly available, you must be extremely vigilant in checking that there are no copyright issues. You may want to consult your institution’s legal services to get advice tailored to your operational requirements.

What do I include when I’m notifying people that I used a generative AI system?

Exactly what you tell people will depend on the context. Consider, for example:

  • who the audience is
  • how the system is being used
  • what format the generated content is (such as text, images, or code)
  • how much of the generative AI content was included in the final product

At a minimum, tell people which system and version you used. In addition, you may want to also include the following:

  • the purpose for using it
  • whether and to what extent you reviewed, validated or modified the output

When you use generative AI in your work, inform your manager.

The following table contains a few scenarios where you might use a generative AI tool and suggestions for information you might provide.

Scenario Information you could provide

Using a generative AI tool to summarize information for your director to prepare them for a meeting

A quick note at the bottom of the email saying:

  • which generative AI tool you used to help you summarize the information
  • that you reviewed the content for accuracy

Using a generative AI tool in building a presentation

A note on a slide saying which generative AI tool you used and that you used it to organize the slides, generate images, and suggest next steps

Deployment of a public‑facing generative AI tool such as a chatbot

A note up front telling clients:

  • that they aren’t interacting with a human
  • how to use the tool
  • what types of responses to expect
  • a reminder to not input personal or sensitive information

In addition to informing clients up front, you should take other transparency measures such as publication of information on:

  • the tool’s capabilities and limitations
  • what risk mitigation measures have been taken
  • the source of training data
  • what model is being used and how it was developed

Using a generative AI tool to summarize articles for a monthly newsletter posted on a collaborative site

  • which generative AI tool you used to create the summary
  • how much of the final summary came from the AI tool
  • what steps you took to make sure the final content was accurate, up-to-date, complete, valid, unbiased and non‑partisan

Using a generative AI tool when writing a research paper

  • follow style guides and journal publication requirements to determine whether these tools can be used and the format of such citations
Do I need to record my use of generative AI tools?

It depends. If your use of these tools is an activity of business value, you must document it. The context in which you are using the tools will inform whether documentation is needed and how long the documentation should be retained.

If you are considering using these tools to make administrative decisions or related assessments, you should comply with the documentation requirements in the Directive on Automated Decision-Making.

If you are using these tools for daily tasks such as brainstorming, translation or drafting and editing documents, you may not need to document their use because the information generated will likely be transitory.

Even if the tools produce transitory information, you should still notify your manager of any substantive use.

Your institution may also have guidance on what should be documented. Check with the office of the chief information officer or office of the chief data officer for your institution to find out if there is institution-specific guidance.

Should I use a personal or work email address to register for AI tools?

When using generative AI tools for GC business, you should use an official work email address. Using a work email address:

  • helps ensure transparency and accountability
  • helps ensure that decisions are properly recorded and transferred to corporate repositories, as appropriate

Other recommendations when registering for AI tools:

  • Use a strong, unique password that is different from and not a variant of passwords you use on a corporate device or other service
  • When possible, use two‑factor authentication on third‑party services
How do I write effective prompts?

How you ask a generative AI system to produce content affects the generated outputs.

When writing prompts, be as clear and specific as possible and provide context.

For example, you can:

  • provide background information
  • identify the audience
  • specify the length and tone of the desired output

You can even give the tool a few examples of possible input and the corresponding output you’re seeking, and then ask it to replicate the approach for new inputs. Once you have an output, you can reply with additional prompts to help refine the initial output.

When developing prompts, consider your own views and biases and write prompts that provide holistic and unbiased outputs.

You may also want to ask the system to only generate responses based on certain sources or input that you provide it. For example, you could copy information into the prompt and ask the system to generate an executive summary of it for your senior leader. Or you may want to include the content of an upcoming presentation and ask the system to create engaging introductory speaking points based on the presentation.

There are many other prompting techniques that are not covered here, and best practices may vary depending on the model used.

Experiment with your prompts and language and take training on effective prompt techniques. The Canada School of Public Service offers learning products on AI, including Using Generative AI in the Government of Canada and Working with Artificial Intelligence Series: Writing Inclusive Prompts. Whatever prompt technique you use, follow the FASTER principles and the best practices provided in this guide.

Page details

Date modified: