Compendium of best practices for a human-centered development and use of Artificial Intelligence in the world of work
On this page
- Introduction
- 1. Fully leveraging the potential of AI in the labour market through skills development
- 2. Navigating automation, productivity and fairness in the workplace
- 3. Protecting privacy and advancing non-discrimination in the world of work
- 4. Strengthening occupational safety and health, autonomy, agency and dignity
- 5. Strengthening transparency, explainability and accountability
- 6. Leveraging social dialogue
- Conclusion
- References
Introduction
This compendium of best practices to advance the implementation of the 2024 G7 Action Plan for a human-centered adoption of safe, secure and trustworthy artificial intelligence (AI) in the world of work was developed under the G7 Employment Working Group during Canada’s Presidency of the G7 in 2025, with contributions from G7 countries and engagement groups. The compendium supports the G7 Leaders’ Statement on AI for Prosperity, adopted in Kananaskis in June 2025, which emphasized the need to build resilient future workforces and prepare workers for transitions related to AI.
AI is a transformative technology with the potential to reshape jobs, workplaces and the lives of workers. According to International Labour Organization (ILO) estimates, among G7 countries, 6.5 per cent of jobs - equivalent to 25 million jobs - are highly exposed to generative AI technology, with most tasks at high potential of automation. Furthermore, an additional 28 per cent of employment (109 million jobs) is likely to be transformed as AI becomes increasingly incorporated into day-to-day tasks.Footnote 1 At the same time, AI will create new jobs and can generate significant benefits for employers and employees alike. The AI surveys of employers and workers conducted by the Organisation for Economic Co-operation and Development (OECD), for instance, show that around 80 per cent of workers using AI report improved performance, while only 8 per cent report negative effects (Lane, Williams and Broecke 2023). Managing this change requires future-ready and resilient labour markets that ensure all workers and their families reap the benefits of the opportunities created by technological change. For this reason, the G7 countries agreed in 2024 on the Action Plan for a human-centred development and use of safe, secure and trustworthy AI in the world of work (henceforth the "G7 Action Plan"), and in 2025 G7 Leaders agreed to advance implementation of the Action Plan. The G7 Action Plan identifies policy measures that G7 countries can use to manage this change, covering the areas of skills development; automation, productivity and fairness; privacy and non-discrimination; occupational safety and health, autonomy, agency and dignity; transparency, explainability and accountability; and social dialogue. In the Action Plan, G7 countries asked the ILO and the OECD to support them and to report on country progress. 
This compendium, prepared jointly by the two organizations, describes policy measures on AI in the world of work reported under the policy areas of the G7 Action PlanFootnote 2 based on a dedicated questionnaire on policy highlights. The compendium, therefore, does not represent an exhaustive list of initiatives.
1. Fully leveraging the potential of AI in the labour market through skills development
Current practices
- Establishing dedicated centres to monitor skills trends and deliver training
- Updating certification and skills frameworks to reflect AI
- Expanding financial support for AI-related training
- Creating programmes to equip workers and jobseekers with the skills needed to use AI
The use of AI in the world of work is rapidly changing the tasks of many workers and the types of skills that are in demand. Equipping workers with the right skills is essential to reap the benefits of AI at both the aggregate and the individual level. While some occupations will require specific expertise in AI design, development and maintenance, most workers will need improved digital literacy, problem-solving skills and soft skills to complement AI systems (ILO, forthcoming). Recent evidence shows that in occupations with high AI exposure, 72 per cent of vacancies demand at least one management skill, 67 per cent at least one skill from the business processes skill grouping, and over 50 per cent at least one skill from the social, emotional or digital skill groupings (Green 2024). The G7 Action Plan puts forward actions that countries can consider to ensure that education and training institutions, together with adult learning programmes, can respond rapidly to these shifts and can provide accessible, high-quality opportunities for all groups of workers, including those in small and medium-sized enterprises (SMEs), as well as older and lower-skilled workers.
G7 countries have adopted a wide range of measures to build the skills needed to leverage AI's potential in the labour market. Some have created dedicated centres that analyse skills trends and provide training or tools, such as Canada's Future Skills Centre, the United Kingdom's AI Skills Hub and Skills England, and France's Centres of Excellence in AI and LaborIA. Some countries are updating certification and skills frameworks to reflect the impact of AI: for example, France's Decree No. 2025-500 of 6 June 2025 requires AI to be integrated into the skills reference framework for workers; the United Kingdom is revising its Essential Digital Skills Framework; the European Union will update its Digital Competence Framework at the end of 2025 to more fully integrate AI-related competences; and the United States is working on identifying high-quality AI training and certification pathways.
Financial support is another tool that some G7 countries are using to help people develop the skills needed for the AI era. In Japan, for example, individuals can access educational training benefits to cover part of the costs of designated courses (which include digital and AI courses); separately, under a different scheme, employers receive human resources development subsidies when they provide training to their workforce. The United Kingdom offers multiple scholarships and fellowships to support people entering AI professions, such as the Turing AI Fellowships. In the United States, federal grants are available to help organizations create and expand programs that teach workers the skills needed for AI jobs.
G7 countries have launched various programmes to equip workers and jobseekers with the skills needed to use AI effectively. For instance, the United States recently launched America's AI Action Plan, which includes a focus on advancing a priority set of actions to expand AI literacy and skills development and to rapidly retrain workers and help them thrive in an AI-driven economy. This is reinforced in America's Talent Strategy. Italy's 2024-26 AI Strategy includes training and webinars to promote the safe and effective use of AI. Some programmes target all workers and jobseekers (for example, Canada's Skills for Success programme and Italy's Digital Literacy for Work and New Skills Fund programmes), while others are tailored to specific groups, such as public servants (Canada School of Public Service), or to priority sectors (United Kingdom's Skills Bootcamps). In Italy, several national-level sectoral collective agreements include digital skills training. Work-based learning is another avenue for skills development, with apprenticeships and work placements including Canada's IT Apprenticeship Program for Indigenous Peoples, the United Kingdom's Sector-Based Work Academy Programme, and the United States' plan to expand access to and strengthen apprenticeships. Governments are also reforming existing programmes, for example through Japan's reforms of public vocational training, France's increased incorporation of AI at all degree levels, and the European Union's Centres of Vocational Excellence projects, supported by Erasmus+. The United States intends to create direct career pathways from high school into technology industries. Funding instruments such as the European Union's Digital Europe Programme and the United States' Workforce Innovation and Opportunity Act funding will also support the design of new training initiatives.
Furthermore, governments are partnering with the private sector, as in the UK Government's collaboration with 11 large companies to train 7.5 million workers in essential AI skills, and the United States' proposed industry-driven training programmes for priority AI infrastructure occupations. G7 countries are also using AI itself to deliver and target training. Germany, for example, is using virtual reality headsets that let employees self-assess their own AI skills (AI literacy and Meta AI Literacy Scale).
2. Navigating automation, productivity and fairness in the workplace
Current practices
- Monitoring labour market impacts
- Targeting upskilling and reskilling for jobseekers, people affected by mass lay-offs and those most exposed to AI
- Supporting research and experimentation to increase AI adoption
- Supporting SMEs in AI adoption
- Establishing AI adoption guidelines for the workplace aligned with international standards and human-centred principles
The ILO estimates that 6.5 per cent of jobs in the G7 are at high risk of automation from generative AI technology.Footnote 3 In addition, the OECD estimates that 27 per cent of employment in OECD countries is in occupations with a high risk of automation from all automating technologies (OECD 2023). While some of these occupations will transform gradually to perform other functions, some will decline and others will be created; given this, it is critical for countries to monitor developments and provide employment support. The transformation of occupations, as well as the new occupations generated, can lead to important benefits, including higher productivity. Not all groups of workers and firms, however, are well placed to take advantage of these benefits. For example, adoption of AI has been uneven: in 2024, 40 per cent of firms in the OECD with 250 or more employees were using AI, compared with 20 per cent of medium-sized firms (50 to 249 employees) and only 12 per cent of small firms (10 to 49 employees) (OECD, 2025). The G7 Action Plan presents measures that countries can use to give workers and employers opportunities to deploy and use AI tools that are human-centred and have the potential to increase productivity and alleviate labour shortages, while ensuring fairness. The skills policies mentioned in the previous section are central to these efforts.
To better understand the labour market impacts of AI, G7 countries are monitoring developments through dedicated observatories. For instance, Germany has established the Observatory on Artificial Intelligence in Work and Society, and Italy is developing the National Observatory on the Adoption of AI in the World of Work to analyse how AI technologies affect businesses and workers, support policy development through research, and monitor AI's impact on the labour market. The United States plans to launch the AI Workforce Hub. France hosted the AI Action Summit in Paris on 10 and 11 February 2025, launching the Network of Observatories on AI and Work and introducing the Pledge for a Trustworthy AI in the World of Work.
To support workers and jobseekers most affected by AI and automation, G7 countries are using measures targeted at jobseekers (such as the United Kingdom's Jobcentres and Get Britain Working reforms and Italy's Employment Transition Agreements), targeted upskilling and reskilling for people affected by mass lay-offs (such as Canada's Retraining and Opportunities Initiative funding program and the retraining initiatives under the United States' AI Action Plan), as well as specific programmes to support workers in sectors most exposed to AI (such as the United States' piloting of rapid retraining and new workforce models to address AI-related labour market shifts). Public employment services (PES) are also being improved by making use of AI tools such as France Travail's ChatFT, MatchFT and QualiFT, and Italy's virtual coach AppLI, which helps young NEETs seek personalized training and suitable job offers on the virtual job matching platform SIISL. In the European Union, the PES Network supports cooperation, mutual learning and exchange of experience in key areas of PES responsibility, including the digitalization and digital transformation of PES organizations and, increasingly, the use of AI in PES. Additionally, Canada will integrate AI into the Job Bank platform to improve job matching and launch a national tool to help adults find short, skills-based training by location and format.
As for measures that promote the safe and human-centred use of AI, G7 countries are supporting research and experimentation to increase AI adoption. In Canada, for example, the AI for Productivity Challenge programme supports AI uptake in the clean technology, agriculture and manufacturing sectors, and the Government is collaborating with the private sector to strengthen Canada's commercial AI capabilities. Germany has established Regional Competence Centres for Work Research, which connect science, industry and social partners to develop and transfer AI knowledge to shape the future of work. The United States is working on streamlining regulations, in consultation with businesses and the public at large, to encourage AI development and deployment. Countries are also supporting adoption in businesses, particularly SMEs. For instance, in Canada, the AI Assist programme, an initiative of the National Research Council of Canada Industrial Research Assistance Program, supports SMEs in developing generative AI and deep learning solutions safely and ethically. France launched Dare AI in July 2025 to accelerate uptake among SMEs. Through LaborIA, France also provides a guide and self-diagnostic tool for deploying AI at work. At the EU level, the AI Innovation Package, launched in 2024, supports AI start-ups and SMEs and invests in strengthening the generative AI talent pool.
Several G7 countries have also issued (or will issue) guidelines for AI adoption that apply to the world of work. Japan's Act on Promotion of Research & Development and Utilization of Artificial Intelligence-related Technology (2025) states that the Government shall establish guidelines in accordance with the purport of international norms, such as the Hiroshima AI Process International Guiding Principles and International Code of Conduct. Italy has published for public consultation draft Guidelines on AI in the World of Work and Guidelines on AI in the Public Administration, among others. In Canada, the Guide on the Use of Generative AI offers guidance for public servants on the responsible, effective and fair use of generative AI tools. In Germany, the Observatory on Artificial Intelligence in Work and Society provides voluntary guidelines for human-centred AI adoption in companies, tools for upskilling workers and software for human oversight of AI systems.
3. Protecting privacy and advancing non-discrimination in the world of work
Current practices
- Establishing safeguards for data protection and privacy
- Implementing measures to improve data quality for reliable and responsible AI deployment
- Updating existing frameworks
- Issuing practical guidance on trustworthy AI at work
As stated in the G7 Action Plan, the greater collection and analysis of data on workers and job applicants resulting from AI's growing integration in the world of work presents opportunities to improve performance and fairness in decision-making. Nevertheless, if AI systems are poorly designed or trained on selective, incomplete or biased datasets, or if management has a limited understanding of how to use and interpret the data, legal, ethical and practical risks for firms and their workers may arise (Berg and Johnston, 2025). The G7 Action Plan presents measures that countries can use to promote privacy and non-discrimination in the workplace.
Data protection and privacy laws within the EU, such as the General Data Protection Regulation (GDPR), set requirements for lawful and fair processing of personal data. In Canada, the Personal Information Protection and Electronic Documents Act (PIPEDA) governs the use of personal information in the course of commercial activities, while the Voluntary Code of Conduct on generative AI commits signatories to mitigate risks; as of 2025, 46 organizations have joined, ranging from SMEs to global multinational corporations. These instruments establish safeguards such as limiting the collection, use and disclosure of data to lawful and clearly defined purposes, requiring consent and mandating measures to prevent unauthorized data access, loss or misuse. In addition, the Canadian Artificial Intelligence Safety Institute (CAISI) has been established to advance scientific understanding of the risks associated with the most advanced AI systems and to provide tools to address those risks. CAISI also engages with the international network of AI safety institutes to advance joint projects on AI safety. Italy's Act on Artificial Intelligence sets transparency, proportionality, security, personal data protection, privacy, accuracy, non-discrimination, gender equality and sustainability as the general principles for the research, experimentation, development, adoption, application and use of AI systems and models.
Measures to improve data quality include Germany's KITQAR project to develop quality standards for test and training data for the application of AI in companies and organizations. Germany also has the KIDD - AI in the Service of Diversity project, in which the Government, together with corporate partners, developed a standardized process to enable companies to purchase or develop and introduce fair software applications.
Some countries are updating their frameworks to address privacy and non-discrimination linked to AI systems in the world of work. For instance, Germany has announced a forthcoming draft law on employee data protection. In France, the Commission Nationale de l'Informatique et des Libertés (CNIL), the data protection authority, is consulting stakeholders to develop sector-specific recommendations that provide greater legal certainty on privacy issues. The United Kingdom is using its existing regulatory framework, the Equality Act 2010, to prohibit discrimination in employment when AI is used in hiring or management. In addition, Article 22 of the UK GDPR sets rules for decisions based solely on automated processing with legal or similarly significant effects: individuals must be informed, and can obtain human intervention, challenge decisions and make representations. The EU AI Act uses a risk-based approach, classifying AI systems used in employment and worker management as high risk and requiring risk assessments, quality data, transparency and oversight to prevent discriminatory outcomes.
Several frameworks address privacy and non-discrimination at the same time. For instance, Canada's PIPEDA and the EU GDPR grant individuals rights to access their personal data, request corrections and, in some jurisdictions, request deletion or restriction of processing. They also require accessible channels for complaints or inquiries about personal data handling, and require that substantiated complaints be investigated and addressed. Privacy and impact assessments are required in certain contexts, for example under the EU GDPR and Canada's Directive on Automated Decision‑Making and Policy on Privacy Protection.
Additionally, G7 countries have issued practical guidance on trustworthy AI at work. Italy's Guidelines on AI in the World of Work, for example, recommend assessing impacts on employment, privacy and workers' rights. The United Kingdom's 2024 guide, Responsible AI in Recruitment, sets out assurance mechanisms to support fair hiring practices. Both Italy's and the United Kingdom's guidelines also recommend taking steps to ensure the accuracy and quality of data and to identify and address potential biases or unfair impacts caused by AI models. The United States, in its recently launched America's AI Action Plan, has reiterated its commitment to respecting individual rights, civil liberties, privacy, and confidentiality in the creation of AI-ready scientific datasets, as well as its commitment to avoiding overly burdensome AI regulation that would hamper the realization of AI’s benefits for workers, including AI’s potential to reduce discrimination in the workplace.
4. Strengthening occupational safety and health, autonomy, agency and dignity
Current practices
- Supporting the development of AI and robotics tools to enhance occupational safety and health
- Embedding AI into labour and safety frameworks
- Encouraging employers to conduct risk and impact assessments to safeguard occupational safety and health in AI-enabled workplaces
- Monitoring and raising awareness of AI implications for occupational safety and health
The use of AI in the workplace can strengthen occupational safety and health by preventing work-related injuries and diseases, for example through the automation of hazardous tasks or the deployment of safety equipment that monitors fatigue and other risk factors. In the OECD AI surveys, for example, 56 per cent of workers in financial services and manufacturing reported that AI improved their physical health and safety at work, and 63 per cent reported improved enjoyment of work (Lane, Williams and Broecke 2023). At the same time, the integration of AI can introduce new risks for occupational safety and health or alter existing ones (ILO, 2025; Rani et al., 2024; Milanez, 2023).
G7 countries are supporting the development of AI and robotics tools that improve occupational safety and health. For example, in Germany, the Federal Ministry of Labour and Social Affairs funded the development of an AI system at the BG BAU, the German accident insurance institution for the construction sector, which analyses accident reports, past violations, training records and company data to help target inspections more effectively. As a result, the rate of visits uncovering violations has increased by 29 per cent to 64 per cent. In Italy, the Istituto Italiano di Tecnologia is developing AI-based robotics, such as exoskeletons, to reduce workplace health and safety risks.
G7 countries are also embedding AI issues into existing labour and safety frameworks while issuing additional guidance. For example, the Health and Safety Executive (HSE) in Great Britain (GB) has clarified that the goal-setting nature of the GB regime means that existing legislation applies to the use of AI in the workplace (HSE's regulatory approach to AI). The general prevention principles of the EU Framework Directive on Safety and Health at Work (1989) remain applicable to workers using AI systems, while the EU Strategic Framework on Health and Safety at Work 2021-2027 highlights emerging occupational safety and health challenges. Italy's Guidelines on AI in the World of Work recommend safeguards such as breaks to avoid "automation stress". In Canada, the Guide on the Use of Generative AI addresses risks to public servants' autonomy and agency.
Most G7 countries use risk and impact assessments to promote occupational safety and health when AI is used in the workplace. For instance, in Canada, the Federal Directive on Automated Decision‑Making requires an algorithmic impact assessment that evaluates implications for equality, dignity, privacy and autonomy of workers. In the UK, employers must conduct risk assessments for uses of AI which impact on health and safety and ensure appropriate controls are put in place (HSE's regulatory approach to AI). France requires companies to assess AI-related risks, including mental overload, loss of meaning and isolation (Article L. 4121-1).
Additionally, G7 countries are monitoring and raising awareness about the implications of AI systems for occupational safety and health. For example, in the United Kingdom, the HSE in GB has undertaken research to increase understanding of AI use across sectors, noting impacts in maintenance systems, health and safety management, equipment control and occupational monitoring (Understanding How AI is Used in HSE Regulated Sectors). At the EU level, the European Agency for Safety and Health at Work runs outreach initiatives, including the 2023-25 Healthy Workplaces - Safe and Healthy Work in the Digital Age campaign.
5. Strengthening transparency, explainability and accountability
Current practices
- Promoting transparency, human oversight and traceability measures
- Providing self-assessment tools to support businesses in strengthening AI governance practices
- Promoting risk management frameworks to identify, assess and mitigate AI-related risks proportionate to their impact
In order to promote trustworthy adoption of AI in the world of work, AI systems should be designed in ways that support appropriate transparency and provide workers with clear information about their use, including through informed consent. Decisions based on AI should be explainable, and redress should be available when mistakes occur. Evidence shows that managers and employees have concerns about their ability to understand how AI systems reach decisions and about the absence of clear channels for redress. For instance, an OECD study found that 28 per cent of managers reported unclear accountability in cases where algorithmic management tools make a wrong decision, and 27 per cent pointed to the lack of explainability as a concern (Milanez, Lemmens and Ruggiu 2025). At the same time, AI systems could also improve transparency and accountability through features such as complete audit records, consistent decision logs and post hoc explanation tools that are not available in human-based processes. Challenges with both AI and human decision-making systems, including unclear accountability, can create difficulties for regulatory enforcement. The G7 Action Plan proposes measures that countries can take to promote transparency, explainability and accountability.
Many G7 countries have developed targeted initiatives to promote transparency, explainability and accountability which apply to the use of AI in the world of work. Japan's AI Guidelines for Business (Version 1.1) emphasize transparency, explainability and accountability. Canada's Voluntary Code of Conduct on generative AI includes commitments on transparency and accountability, such as adequate human oversight and risk management. Canada also established the Guide on the Use of Generative Artificial Intelligence and the Directive on Automated Decision-Making for the Federal public service. The Directive requires departments to record decisions and publish information on the performance and fairness of AI systems, thereby supporting accountability. The United States revised its policies on the acquisition of AI by Federal agencies in April 2025. Germany is developing software solutions to make the central functions of AI systems understandable for employees and enable them to check and approve decisions or stop the systems if necessary as part of the AI Cockpit project.
In the European Union, the GDPR contains provisions on automated decision-making, while the 2024 Platform Work Directive sets rules on transparency and accountability in algorithmic management for platform workers. The EU AI Act mandates strict transparency requirements, human oversight and traceability measures for auditing AI operations. National initiatives complement these European frameworks, such as a February 2025 French court ruling strengthening workers' rights to transparency in relation to AI in the workplace. In Germany, 20 administrations developed Guidelines for the Use of AI in the Administrative Work of Employment and Social Protection Services. In Italy, the Guidelines on the Use of AI in the Public Administration and the specific Guidelines on AI in the World of Work have been subject to public consultation and are now under review. In addition, Italy's collective agreements (for example, Assogrocery and Everli) establish binding transparency requirements for algorithmic management systems, and the Konecta call-centre agreement sets ethical rules on the use of AI tools.
Many G7 countries promote risk management approaches, which entail establishing clear frameworks to identify, assess and mitigate risks proportionate to the scale and impact of AI systems, as done, for example, in the EU AI Act (Article 9) and Canada's Voluntary Code of Conduct on generative AI. Algorithmic impact assessments are also commonly required to identify potential harms and ensure that mitigation strategies are in place. Impact assessments are, for example, required by the EU AI Act (Article 27) and Canada's Directive on Automated Decision-Making (Algorithmic Impact Assessment tool), and are included as a measure under Canada's Voluntary Code of Conduct. Some countries have made self-assessment tools available to provide guidance and practical instruments to help businesses, particularly SMEs, assess and improve AI governance practices, such as the United Kingdom's AI Management Essentials Tool, subject to public consultation in 2025.
6. Leveraging social dialogue
Current practices
- Implementing national strategies that embed social dialogue as a principle in AI adoption and use
- Encouraging consultation with social partners at different levels, from national-level consultations to workplace practices
Consulting social partners can help ensure that AI technologies in the workplace are introduced and used in ways that are innovative, widespread and secure. Strong participation rights and laws have supported advances in consultation and negotiation over AI (Doellgast et al. 2025), which, in turn, support positive outcomes for job quality and workplace trust. The G7 Action Plan puts forward measures that countries can use to promote engagement in social dialogue to harness the potential of new technologies while advancing job quality.
Many G7 countries have adopted national strategies that embed social dialogue as a principle in AI adoption and use. For instance, Canada's AI Strategy for the Federal Public Service 2025-2027 commits the Government of Canada to early and meaningful public and stakeholder engagement on AI initiatives of significant public interest. In the United Kingdom, the Government's Plan to Make Work Pay sets out proposals to strengthen worker consultation in the deployment of new technologies. In Italy, social partners are involved in the National Observatory on the adoption of AI in the World of Work, established by Law No. 132 of 23 September 2025.
G7 countries are also encouraging consultation with social partners at different levels, from national-level consultations to workplace practices. For instance, France's Labour Code (Article L. 2312-8) requires mandatory consultation with employee representatives (comité social et économique) at the company level before introducing any technology that could significantly affect working conditions. Also in France, ANACT, the national agency for the improvement of working conditions, has tested ways to involve workers from the design phase onwards, anticipate organizational and psychosocial impacts, and safeguard worker autonomy. In the United Kingdom, the Plan to Make Work Pay proposes that the introduction of surveillance technologies in the workplace be subject to consultation and negotiation with trade union or employee representatives. In Germany, the Works Constitution Act gives works councils rights to participate in workplace organization and occupational health and safety, including through digitalization committees that review the introduction or major modification of technologies (German Federal Office of Justice, 2024). Additionally, in Germany, the AI Studios initiative, part of the AI Observatory project, aims to explain AI technologies and their implications, enabling workers, representatives and social partners to participate actively in designing workplace AI use. Social partners in many G7 countries have negotiated collective bargaining agreements that address AI's integration into the workplace, at both the enterprise and sectoral levels.
Conclusion
This compendium has presented examples of policy measures to promote AI in the world of work that are aligned with the G7 Action Plan, based on countries' responses to a dedicated questionnaire. It does not present an exhaustive list of measures; however, some trends do emerge. G7 countries have been actively promoting measures to seize the benefits of AI in the world of work while addressing the risks. Examples submitted by G7 countries span a wide range of interventions, from investment in AI tools and skills training to measures on data privacy, transparency, non-discrimination and occupational safety and health. These interventions are pursued through both existing and new policy measures, often with the support of social partners. Going forward, it will be important to continue monitoring policy measures aligned with the G7 Action Plan, to help G7 countries identify which approaches are most effective in practice. Evaluations of the policies put in place by countries can also provide insights on the effective design of AI-oriented policies.
Beyond the six policy areas of the G7 Action Plan, countries are also pursuing other initiatives to shape the use of AI in the world of work. For instance, France highlighted in the questionnaire that, as a follow-up to the G7 Action Plan and as a contribution to the G7 agenda under the French Presidency in 2026, its labour minister announced a tripartite working group with researchers and experts, which has produced a webinar series on the themes of the Action Plan to exchange with social partners on key AI-related issues in the workplace. In September 2025, Italy's Parliament approved the "AI Act" ("Provisions and delegations to the Government regarding Artificial Intelligence"), a bill that covers many of the G7 Action Plan's policy areas.
Continued policy action, as well as evaluations of the measures G7 countries have put in place, will be needed to ensure that the benefits of AI in the world of work are fully realized while risks are effectively managed. Ongoing monitoring of progress under the G7 Action Plan will help identify effective approaches and highlight areas where further action may be needed.
References
Berg, Janine, and Hannah Johnston. 2025. "AI in Human Resource Management: The Limits of Empiricism". ILO Working Paper 154.
Doellgast, Virginia, Shruti Appalla, Dina Ginzburg, Jeonghun Kim and Wen Li Thian. 2025. "Global Case Studies of Social Dialogue on AI and Algorithmic Management". ILO Working Paper No. 144. https://doi.org/10.54394/VOQE4924.
Gmyrek, Pawel, Janine Berg, Karol Kamiński, Filip Konopczyński, Agnieszka Ładna, Balint Nafradi, Konrad Rosłaniec and Marek Troszyński. 2025. "Generative AI and Jobs: A Refined Global Index of Occupational Exposure". ILO Working Paper No. 140. https://doi.org/10.54394/HETP0387.
Green, Andrew. 2024. "Artificial Intelligence and the Changing Demand for Skills in the Labour Market". OECD Artificial Intelligence Papers No. 14, https://doi.org/10.1787/88684e36-en.
ILO. 2025. Revolutionizing Health and Safety: The Role of AI and Digitalization at Work. https://doi.org/10.54394/KNZE0733.
ILO. Forthcoming. World of Work Series: Lifelong Learning and Skills Dynamics.
Lane, Marguerita, Morgan Williams and Stijn Broecke. 2023. "The Impact of AI on the Workplace: Main Findings from the OECD AI Surveys of Employers and Workers". OECD Social, Employment and Migration Working Papers No. 288. https://doi.org/10.1787/ea0a0fe1-en.
Milanez, Anna. 2023. "The Impact of AI on the Workplace: Evidence from OECD Case Studies of AI Implementation". OECD Social, Employment and Migration Working Papers No. 289. https://doi.org/10.1787/2247ce58-en.
Milanez, Anna, Annikka Lemmens and Carla Ruggiu. 2025. "Algorithmic Management in the Workplace: New Evidence from an OECD Employer Survey". OECD Artificial Intelligence Papers No. 31. https://doi.org/10.1787/287c13c4-en.
OECD. 2023. OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market. https://doi.org/10.1787/08785bba-en.
OECD. 2024. "Using AI in the Workplace: Opportunities, Risks and Policy Responses". OECD Artificial Intelligence Papers No. 11. https://doi.org/10.1787/73d417f9-en.
OECD. 2025. "Businesses Using Artificial Intelligence (AI)". OECD ICT Access and Usage by Businesses Database. Retrieved September 2025. https://goingdigital.oecd.org/datakitchen/#/cover/5/ict/indicator/explore/en.
Rani, U., A. Pesole and I. Gonzalez Vazquez. 2024. Algorithmic Management Practices in Regular Workplaces: Case Studies in Logistics and Healthcare. Luxembourg: Publications Office of the European Union. https://op.europa.eu/en/publication-detail/-/publication/bff25994-cfc2-11ee-b9d9-01aa75ed71a1/language-en.