Guide on Departmental AI Responsibilities
Purpose
The purpose of this guide is to:
- clarify recommended functions and responsibilities for artificial intelligence (AI) adoption and experimentation
- ensure effective and consistent governance and management of departmental AI projects and their related challenges and opportunities
Authorities
This guide is issued by the Chief Information Officer (CIO) of Canada pursuant to section 4.1.2 of the Policy on Service and Digital.
Application
This guide applies to the departments listed in section 6.1 of the Policy on Service and Digital.
Other departments or separate agencies that are not subject to this guide are encouraged to follow this guidance as good practice.
Context
Most forms of AI consume or generate large amounts of data. Machine learning, the most frequently used AI method, requires data throughout its life cycle (see the sketch after this list) to:
- serve as inputs for analysis
- train, test and validate models
- update or evaluate models during deployment
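To make this concrete, the minimal Python sketch below (not part of this guide) illustrates where data enters the life cycle: a dataset is split to train and validate a model, and additional data is used to evaluate the model again after deployment. The scikit-learn library and the synthetic dataset are assumptions chosen purely for illustration.

```python
# Illustrative only: how data is used to train, validate and later re-evaluate
# a model across its life cycle. Uses a synthetic dataset (no real information).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for departmental data.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)

# Data as inputs for training and for testing/validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"validation accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")

# Data is also needed after deployment to evaluate or update the model,
# for example by re-scoring it on newly collected (here, simulated) records.
X_new, y_new = make_classification(n_samples=200, n_features=10, random_state=1)
print(f"post-deployment accuracy: {accuracy_score(y_new, model.predict(X_new)):.2f}")
```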
Even off-the-shelf AI solutions built using vendor data will generate large quantities of data when in use. This data must be appropriately governed to ensure that its collection, use and retention are consistent with law and policy.
As departments pursue an increasing number and variety of AI approaches and solutions, investments in data fundamentals and management will be essential to their success. The 2023–2026 Data Strategy for the Federal Public Service lays out the key steps that the Government of Canada (GC) needs to take to improve how it designs, manages and uses data. This foundational work will help enable successful AI adoption. It will also unlock many other opportunities for departments and the GC to derive value for Canadians from the sharing and analysis of their data, for example to:
- better protect their data security and privacy
- advance broader organizational goals such as Indigenous self-determination and inclusive services
Building on this foundation, the implementation of the AI Strategy for the Federal Public Service will provide guidance on:
- implementing departmental AI governance consistently
- ensuring that AI adoption and experimentation are managed responsibly and effectively and in alignment with the strategy’s objectives and principles
Guidance
Functions of departmental chief data officers and chief information officers
Successful AI implementation requires a multidisciplinary approach, bringing together technology, data, policy and subject matter expertise. Because of their role in data analytics and data science, including AI, departmental chief data officers (CDOs) should have a lead function in AI adoption and departmental AI governance and policy.
The Directive on Service and Digital states that CIOs are responsible for the strategic management of IT, information and data. This guide does not alter or replace that directive but reinforces the importance of the departmental CDO and their involvement in responsible AI deployment. Where a department does not have a CDO, this guidance applies to the official responsible for the management of the department’s data governance and analysis or digital enablement.
Departments should seek to develop their own AI strategies to guide their direction for AI adoption and management. These strategies must align with GC direction and outline a vision to transform their departments into AI-enabled organizations where employees are empowered to leverage AI responsibly to generate insights, drive operational efficiency, and improve programs and services.
Business opportunities and return on investment
Before implementing AI, departments should ensure that AI is the right tool to meet their needs. Departmental CDOs can help policy, program and service delivery experts identify opportunities where AI would be an appropriate technology to meet business needs and ensure that all preconditions for success can be met.
Opportunities for AI adoption must:
- clearly provide business value by generating improvements to departmental efficiency, effectiveness or productivity
- be feasible
- be assessed for scalability
The performance of implemented AI solutions should be measured throughout their life cycles.
Read more
- Artificial Intelligence Is Here: Deciding When and How to use AI in Government, Canada School of Public Service
Maturity
Departments considering AI adoption will be at differing levels of data, technological and policy maturity. Before beginning an AI project, departments should review their existing data governance and assets and consult their departmental CDO on the department’s data maturity.
Relevant data assets will need to be well organized and of good quality for use with AI solutions. Where appropriate, AI tools can responsibly support data maturity by generating synthetic data to augment datasets, ensuring privacy and diversity without compromising sensitive information. They can also enhance data quality by:
- automating cleaning processes
- detecting anomalies that fall outside acceptable ranges
- identifying and correcting errors to maintain data integrity
Additionally, AI-driven insights can support better decision-making, enabling organizations to leverage their data assets more effectively.
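As a minimal illustration (not a prescribed method), the Python sketch below shows the kind of rule-based data-quality checks described above: flagging values outside an acceptable range, duplicates and missing entries for human review. The pandas library, the column names and the acceptable range are hypothetical assumptions.

```python
# Illustrative only: simple, rule-based data-quality checks.
# Column names and the acceptable range are hypothetical examples.
import pandas as pd

records = pd.DataFrame({
    "case_id": [101, 102, 102, 103, 104],
    "processing_days": [12, 430, 430, -5, 18],   # expected range: 0-365
    "region": ["ON", "QC", "QC", None, "BC"],
})

# Detect anomalies that fall outside an acceptable range.
out_of_range = ~records["processing_days"].between(0, 365)

# Identify duplicates and missing values that would undermine data integrity.
duplicates = records.duplicated(subset="case_id", keep="first")
missing_region = records["region"].isna()

# Flag (rather than silently correct) problem rows for human review.
records["needs_review"] = out_of_range | duplicates | missing_region
print(records)
```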
Where data maturity is not sufficient for AI, departments should consider leveraging tools that are less demanding of data, such as robotic process automation, until the conditions for AI are in place.
Governance and risk management
Poorly governed AI can amplify risks to fairness, equity, privacy, security and legal compliance, leading to biased results and reputational harm.
Before adopting AI, departments should familiarize themselves with the Directive on Automated Decision-Making and the Guide on the Use of Generative AI to understand the risks involved and ensure the responsible use of AI solutions. They must also review privacy and security policies and consult departmental CDOs and privacy, security, legal, data ethics and equity teams to determine what assessments and mitigations are required.
Considerations and the degree of consultation needed will vary according to the AI tool or use case. In general, a high-risk, high-impact project, such as predictive modelling using personal information, will require significant consultation through the project life cycle with the departmental CDO and with privacy and security teams.
Departments should implement robust tracking, assessment and monitoring processes to advance promising AI use cases and maximize AI’s benefits across the policy and service continuum.
Departments should commit to responsible and ethical AI development that reflects the diverse experiences and needs of their clients, stakeholders and workforce. Using a human-centric approach to developing AI initiatives:
- ensures fairness, inclusivity and transparency
- helps ensure that underserved populations are not adversely impacted
Strong AI governance builds in guardrails to ensure that AI decisions take essential elements into account, including human rights, privacy, ethics, security and the use of trusted data, while aligning with departmental priorities.
Capacity
Departments will need to attract, develop, retain or re-skill employees with the skills needed for successful AI adoption. These skills will vary depending on whether AI is developed in-house or outsourced, but they may include:
- technical skills such as programming, modelling, data management, and data science and analytics
- complementary non-technical skills such as service delivery, project design, management, ethics and procurement
To ensure success, departments should not invest in technology unless they have the capacity needed to understand, develop and maintain the solution. Until then, departments should prioritize capacity-building and employee upskilling while experimenting with free and low-cost options, such as online generative AI tools, following a risk-based approach.
As a first step, departments should look to the GC’s 2024 Application Hosting Strategy to support these efforts. The Canada School of Public Service, departments and third-party providers also offer courses, communities of practice and events on AI that can help develop AI literacy.
It is also important to consider a department’s capacity to integrate AI solutions into its broader enterprise IT architecture. Departments should align with the Government of Canada Enterprise Architecture Framework to ensure that their digital initiatives are consistent with enterprise architectures across the business, information, application, technology and security domains.
AI should not be regarded as a means to replace employees. Instead, departments should evaluate AI tools and solutions for their potential to enable employees, departments and Canadians to do more, faster and better, and to free employees from repetitive tasks so that they can take on the kind of challenging, complex and creative work at which humans excel. Over time, some of these repetitive tasks will be replaced by higher-value work, which will require changes to GC business models and to how the GC’s workforce operates.
Read more
- Learning catalogue, Canada School of Public Service
- The Government of Canada Digital Talent Strategy
- Directive on Digital Talent
- Skills library, GC Digital Talent
Infrastructure
Issues such as legacy systems and vendor lock-in can impede AI integration by limiting system interoperability, data access or modifications. Departments with aging infrastructure will need to assess whether AI tools can be integrated into existing systems without excessive costs, workarounds or system destabilization. Should integration prove too costly or complex, departments could consider software as a service (SaaS) solutions that require minimal system integration until their infrastructure has been modernized, but they must ensure that these solutions meet requirements for security, privacy and data sovereignty.
Enquiries
For advice on this guidance and other AI policy, contact the Government of Canada Responsible Data and AI team at ai-ia@tbs-sct.gc.ca.
The Office of the Chief Information Officer of Canada at the Treasury Board of Canada Secretariat (TBS) published the AI Strategy for the Federal Public Service: 2025–2027 in spring 2025. The strategy outlines key priorities and opportunities for departments and supports the coordination of AI efforts across government in ways that are responsible, transparent and aligned with public service values.
References
- Policy on Service and Digital
- Guideline on Service and Digital
- Directive on Automated Decision-Making
- 2023–2026 Data Strategy for the Federal Public Service
- Canada’s Digital Ambition 2023-24
- List of interested artificial intelligence (AI) suppliers
- Digital Standards
- Learning catalogue, Canada School of Public Service