Roadmap to Scale AI Projects in the Public Sector

The Roadmap to Scale AI Projects in the Public Sector proposes key actions to identify artificial intelligence (AI) solutions used in one organization, and reuse or expand them across others. It is organized according to the five stages of an AI project and four pillars that apply at each stage. The roadmap fulfills the commitment made in the G7 Leaders’ Statement on AI for Prosperity and was developed in collaboration with the G7 AI Network (GAIN).

AI project stages

The roadmap is structured around governing questions for each stage of an AI project.

  1. Problem and planning: Is the problem accurately identified? Is there an existing solution that can be reused or scaled?
  2. Piloting and observation: Is it feasible to reuse or scale the AI project across organizations in a way that is responsible, safe and generates value?
  3. Adoption and enablement: Are we ready to operate and scale this AI project across the organization so that it is used effectively?
  4. Monitoring and optimization: Is the AI project operating as intended across organizations? If not, what needs to change, based on evidence?
  5. Decommissioning or renewal: Should the AI solution be renewed, replaced or retired? If so, how do we do that responsibly?

Pillars

The decision to reuse or scale a solution requires assessment and action across four recurring pillars.

  • Design and engagement

    Users and communities

    What it is: Incorporating collaborative, cross-organizational work and user and community feedback to align problem definitions and inform design choices.

    Importance: Ensures that real user and community needs are reflected and keeps the solution responsive to changing contexts as it scales.

  • Governance and responsible AI

    Ethics, privacy, security, accountability

    What it is: A continuous set of practices, safeguards and oversight to ensure that AI is designed, tested, deployed and managed in line with governance and responsible AI principles.

    Importance: Sets accountabilities and responsibilities, ensuring that solutions are safe, lawful, trustworthy and aligned with organizational values throughout their life cycle, and that funding is available and well managed.

  • Capacity-building

    Infrastructure, change management, skills development

    What it is: Assessment of workforce size, expertise, skills and training needs; workflow impacts; and infrastructure readiness to support the system as it grows.

    Importance: Ensures that users can use the system responsibly and efficiently, and that teams are established and empowered to deliver. Ensures that infrastructure is available at scale and that the system remains safe, secure and compliant across the enterprise.

  • Measurement and continuous improvement

    Metrics and return on investment

    What it is: Ongoing monitoring of system performance, comparison against baselines, identification of drift or issues, and targeted improvements based on evidence.

    Importance: Ensures that issues are detected early, that value is demonstrated, and that improvements keep the system effective as it expands across organizations.

Roadmap

The roadmap is organized across AI project stages, with a desired outcome identified for each stage. Not all actions are required for each project.

Stage 1: Problem and planning

Design and engagement

  • Define the problem (establish context; identify use cases, users, communities and interests; determine whether the problem is a short- or long-term one)
  • Understand the scope of the problem, and decide whether it would benefit from an enterprise or cluster approach
  • Engage pilot users to understand needs
  • Map governance considerations at the federal, regional and municipal levels
  • Identify constraints that could affect cross-jurisdiction scaling
  • Identify whether any AI or non-AI solutions exist in the organization that could be reused, adapted or scaled in part or in full to address the problem
  • Map out the impacts and risks related to the problem and the potential positive and negative effects of the existing solution
  • Map out the timeline of how the problem has progressed and the likely timeline impact if a solution were introduced

Governance and responsible AI

  • Identify policy and legal requirements, including those specific to operating enterprise or cluster solutions
  • Identify governance gates for cross-organizational, enterprise or cluster solutions
  • Identify security and privacy risks, including those specific to operating cross-organizational, enterprise or cluster solutions
  • Evaluate the quality and availability of data across organizations, and establish a plan to address any gaps
  • Determine the operating model, the data-governance model and the costing model needed to deploy the solution across organizations (for example, a cluster or enterprise solution)
  • Identify enterprise authorities for data stewardship, custodianship and system operation; obtain authorities, if needed
  • Determine the procurement model (enterprise, cluster or joint procurement)
  • Assess whether scaling requires retendering or revised contracting
  • Clarify the funding model and budget authority across participating entities; discuss funding commitments with participating organizations
  • Confirm authorities, accountabilities, roles and responsibilities

Capacity-building

  • Assess enterprise or cross-organizational scalability readiness
  • Assess IT infrastructure readiness, including cloud architecture and compute capacity
  • Assess IT security posture and identify cross-organizational security constraints
  • Consider developing reference architectures to standardize implementation approaches
  • Encourage employees to enhance their AI literacy

Measurement and continuous improvement

  • Identify and define metrics and return-on-investment targets
  • Measure the current state of the existing business process without solution implementation

Outcome

The problem is clearly defined; reuse options are assessed; and procurement, governance, infrastructure and IT security considerations are understood. The decision to scale is made.

Stage 2: Piloting and observation

Design and engagement

  • Collaborate with clients and client groups to co-design the approach for scalability and replicability
  • Consider public engagement (if appropriate) for systems with potential social, legal, ethical, economic and environmental implications
  • Determine phased piloting and onboarding approach based on organizational needs and readiness
  • Obtain feedback from users on early pilots and use the feedback to make updates to the system

Governance and responsible AI

  • Formalize funding commitments across participating organizations
  • Establish a common set of rules to be applied across organizations
  • Consider establishing consistent transparency measures for AI systems, such as proportional approaches based on risk
  • Validate contracting mechanisms for enterprise or cross-organizational scaling
  • Assess vendor scalability, interoperability and long-term flexibility
  • Complete required privacy impact, algorithmic impact, and security assessments at the enterprise level, if possible, to reduce duplication of assessments across organizations
  • Develop a framework for ethical alignment, bias, explainability and human oversight
  • Set expectations through enterprise standards, templates and guardrails that align with organizational values and policies
  • Establish continuity management and emergency response plans

Capacity-building

  • Build a network of champions and advocates to promote adoption of the AI solution and support change
  • Identify workforce groups affected by the AI solution
  • Develop training materials for pilot users
  • Train pilot users
  • Test that the solution is appropriate for use across organizations
  • Test IT infrastructure performance at scale beyond pilot users
  • Validate IT security alignment across organizations

Measurement and continuous improvement

  • Define metrics and return-on-investment expectations, considering whether different expectations are needed across organizations
  • Define clear performance thresholds that must be met before full deployment
  • Measure performance across different organizations
  • Develop a monitoring plan and consider setting up a dashboard to track and monitor system performance across the enterprise
  • Use measurement results to inform scaling

Outcome

Contracts, vendors, infrastructure and IT security have been validated for scale; measurable performance criteria are defined; and responsible AI governance frameworks and relevant standards are formally established.

Stage 3: Adoption and enablement

Design and engagement

  • Coordinate internal and external communications to raise awareness

Governance and responsible AI

  • Implement scalable enterprise or cross-organizational onboarding and approval processes
  • Ensure alignment of procurement and compliance across jurisdictions
  • Ensure ongoing compliance with cross-organizational rules
  • Implement governance, monitoring and oversight consistently across organizations

Capacity-building

  • Communicate role changes, expectations, and escalation paths
  • Train all users
  • Make sure communities of practice participate in onboarding and peer learning; establish communities of practice if they do not exist yet
  • Embed continuous AI literacy development across user groups
  • Train a specialized workforce to detect and respond to adverse events

Measurement and continuous improvement

  • Measure performance against baseline metrics and expectations for return on investment; document results
  • Identify unique and common issues when scaling across the enterprise

Outcome

Funding, compliance and governance structures are operational, and users are supported through communities of practice and continuous AI literacy efforts.

Stage 4: Monitoring and optimization

Design and engagement

  • Communicate and review performance results with users and clients

Governance and responsible AI

  • Update risk controls based on monitoring insights
  • Adapt IT security and cyber-risk controls as adoption expands
  • Adapt to organizational rule changes
  • Monitor system outcomes across user groups and address any issues
  • Validate the continued financial sustainability of the solution across organizations

Capacity-building

  • Support continuous change management and training as workflows evolve and adapt to organizational needs
  • Consider mandating or incentivizing use of the tool to drive adoption
  • Implement response plans as critical issues and events are detected

Measurement and continuous improvement

  • Review monitoring results with oversight bodies
  • Measure system metrics when appropriate for the risk level of the system
  • Measure system integration for replicability and scalability
  • Track return on investment and service impact against baseline metrics
  • Monitor cyber resilience, infrastructure stability, and service impact at scale
  • Make changes to improve the system when it does not meet performance targets

Outcome

Return on investment, operational performance, system outcomes, risk indicators, and cyber resilience are continuously monitored. Structured feedback mechanisms are in place to optimize value, manage impacts, and sustain performance.

Stage 5: Decommissioning or renewal

Design and engagement

  • Determine whether the problem still exists, and confirm any changes to problem scoping
  • Engage with users to understand the ongoing value of the tool and to identify any new user needs
  • Consider options for addressing the problem at this stage, such as renewing, updating or replacing the existing system

Governance and responsible AI

  • Determine and implement data-retention, data-disposition, and archiving procedures
  • Review renewal terms and vendor-dependency risks across organizations
  • Plan a secure transition or exit strategy where required; tailor approaches to specific organizations as needed
  • Ensure secure shutdown or transition procedures across organizations
  • Consider implications of system shutdown or renewal across organizations

Capacity-building

  • Decide whether the cross-organizational or enterprise AI project should be renewed, replaced or retired
  • Communicate the timeline and impacts to the organizations that are using the tool

Measurement and continuous improvement

  • Record lessons learned for enterprise or cross-organization scaling
  • Develop business-case options for or against renewal, considering the needs in different organizations
  • Assess the long-term cost sustainability and enterprise value added
  • Assess the scope of decommissioning or renewal based on measured performance trends and on continuous improvement insights

Outcome

Decisions on renewal, replacement or retirement are made through a disciplined lifecycle review that assesses proven value, sustained performance, risk exposure, and continued strategic alignment with organizational priorities.

Contact

For more information, contact the TBS Responsible Data and AI team at ai-ia@tbs-sct.gc.ca.

Date modified: 2026-03-31