Challenges of using Artificial Intelligence in the Public Employment Domain – Part 2

6 min read


In Part 1 of this blog, we explored the opportunities that the use of Artificial Intelligence (AI) can bring to Public Employment Services (PES). However, the growing prevalence of AI also comes with various challenges and risks. In this blog, we analyze the challenges of using AI for Public Employment Services. Most of these risks are not specific to PES but apply to the use of AI in general; nonetheless, they can affect PES as public organizations and their work with the citizens they serve.

These potential challenges and risks include:  

Accountability, Transparency, and Explainability

Accountability, transparency, and explainability are the key pillars for the responsible and trustworthy use of AI, as highlighted by the OECD in a recently published paper. The AI Act enacted in the European Union (EU) reinforces these principles by mandating the responsible application of AI, particularly in high-risk domains such as employment. This legislation places accountability on all AI actors—including organizations and individuals involved in designing, deploying, and operating AI systems—to ensure the results of their models can be explained and interpreted effectively.

While some AI stakeholders prioritize results over the process, public organizations like PES cannot afford to do so. For PES, explainability is vital, as their systems significantly impact jobseekers’ lives. Ensuring transparent and accountable AI processes is therefore not just a regulatory obligation but a moral imperative in fostering trust and fairness in employment services. 
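To make this concrete, the sketch below shows one common way to inspect which inputs drive a model's decisions: permutation feature importance applied to a mock profiling classifier. The features, data, and model are invented for illustration and do not reflect any specific PES system.

```python
# Minimal explainability sketch (illustrative only): report which input
# features drive a hypothetical jobseeker-profiling classifier, using
# permutation importance from scikit-learn. Feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["months_unemployed", "education_level", "prior_jobs", "age"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```

A caseworker-facing tool would go further, but even a ranking like this helps explain why a model flags one jobseeker and not another.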

Data quality and privacy

AI models rely heavily on data as their primary input, and Public Employment Services hold substantial volumes of administrative data gathered through interactions with jobseekers, employers, and service providers. However, the quality and readiness of this data for AI integration should not be taken for granted—especially when merging multiple datasets from diverse sources. It is essential for PES to continuously ensure the accuracy, reliability, and relevance of their data, enabling them to detect and address errors or biases proactively.

While PES often work with large datasets, “big data” does not necessarily equate to high-quality data. Beyond refining input data for AI training, PES are responsible for establishing robust frameworks for effective data collection and validation. Given the personal and sensitive nature of the data PES manage, privacy concerns are paramount. PES are required to comply with data privacy and protection regulations, such as the General Data Protection Regulation (GDPR) in the EU, and align with both national and international legal standards.
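As an illustration of what basic data validation can look like in practice, the sketch below runs a few simple checks (duplicate identifiers, missing values, implausible values) on a mock set of administrative records. The column names and thresholds are hypothetical and do not refer to any real PES schema.

```python
# Minimal data-quality sketch (illustrative only): basic checks a PES team
# might run before using administrative records for AI training.
import pandas as pd

records = pd.DataFrame({
    "jobseeker_id": [1, 2, 2, 4],
    "age": [34, 29, 29, 151],          # 151 is an obvious data-entry error
    "registration_date": ["2024-01-10", "2024-02-03", "2024-02-03", None],
})

issues = {
    "duplicate_ids": int(records["jobseeker_id"].duplicated().sum()),
    "missing_registration_date": int(records["registration_date"].isna().sum()),
    "implausible_age": int((~records["age"].between(15, 99)).sum()),
}

for check, count in issues.items():
    status = "OK" if count == 0 else f"{count} record(s) flagged"
    print(f"{check}: {status}")
```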

Furthermore, the use of third-party generative AI tools presents additional ethical and privacy challenges. These tools are frequently trained on data scraped from the web without the consent of individuals, potentially exposing sensitive or private information. To address these concerns, PES must commit to transparent and ethical data practices, ensuring robust safeguards for data protection and privacy in every aspect of their AI implementation.
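One common safeguard when free text may reach an external generative AI service is to redact obvious identifiers first. The sketch below is a deliberately simple illustration of that idea; the patterns are placeholders and far from a complete PII detector.

```python
# Minimal redaction sketch (illustrative only): strip obvious personal
# identifiers from free text before it is sent to an external generative
# AI service. These patterns are simplistic placeholders and would need to
# be extended considerably for real use.
import re

REDACTION_PATTERNS = {
    "NATIONAL_ID": re.compile(r"\b\d{9}\b"),  # placeholder format; country-specific in practice
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Jobseeker Jane Doe, id 123456789, reachable at jane@example.com or +31 6 1234 5678."
print(redact(note))
# -> "Jobseeker Jane Doe, id [NATIONAL_ID], reachable at [EMAIL] or [PHONE]."
```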

Ethical concerns and risk of bias

In AI, bias refers to models that generate systematically unfair outcomes, in particular by giving preferential treatment to some groups over others. Bias in AI models is mostly derived from human decisions and historical patterns, which the AI then replicates. In the public employment domain, PES have experienced biases in the labor market matching tools designed to match jobseekers with vacancies: historical and unrepresentative training data fed into the models led to recommendations that disadvantaged certain groups. Mitigating bias and discrimination in AI systems is fundamental for PES, given the sensitivity of their work and the implications for jobseekers and other stakeholders. Fairness must therefore be prioritized at every stage of the lifecycle of the AI tools that PES utilize, develop, and implement, and post-deployment output monitoring is needed to ensure these systems remain reliable and do not develop biases over time.
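As a simple illustration of post-deployment bias checking, the sketch below compares recommendation rates between two groups and applies the widely cited four-fifths rule of thumb. The groups, figures, and threshold are hypothetical and do not constitute a full fairness audit.

```python
# Minimal bias-check sketch (illustrative only): compare the rate at which a
# hypothetical matching model recommends candidates across two groups and
# flag large gaps. Data and the 0.8 threshold are assumptions for illustration.
import pandas as pd

outcomes = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "recommended": [1, 1, 0, 1, 0, 0, 0, 1],
})

rates = outcomes.groupby("group")["recommended"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Warning: recommendation rates differ substantially between groups.")
```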


Resistance or lack of skills among PES staff and clients

The introduction of AI in organizations, including the public sector, often encounters resistance and skepticism from staff, clients, and the general public. This hesitation stems from various factors, including resistance to change and fears of being replaced by advanced technologies. For Public Employment Services (PES), such negative sentiments pose significant challenges, potentially hindering the adoption of AI tools and limiting their effectiveness. Staff avoidance, rejection of AI systems, or low adoption rates could jeopardize the transformative potential of these technologies.

To address these concerns, PES must take proactive steps to build trust and foster acceptance of AI. This includes engaging employees in decision-making processes around AI solutions, promoting open communication, and ensuring transparency about the purpose and function of AI tools. As PES staff will work alongside systems like profiling or job matching tools, tailored training programs are crucial. These programs should demystify AI technologies, demonstrate their practical benefits, and equip staff with the necessary skills to use them effectively. 

By addressing resistance and focusing on skill development, PES can create an environment where employees feel empowered rather than threatened by AI, paving the way for successful integration and meaningful outcomes. 

Need for ongoing monitoring and evaluation

AI systems and tools require ongoing monitoring and evaluation to ensure they perform as intended, particularly within Public Employment Services. Regular assessments are crucial for measuring the impact and output of AI models, improving their integration with PES operational processes, and analyzing whether the costs of innovation are justified by the benefits. These efforts help ensure that AI systems deliver reliable performance and meaningful results. 

By maintaining a rigorous approach to monitoring, PES can identify algorithms that fail to meet expectations or deliver desired outcomes. This allows for timely adjustments, whether through refining underperforming algorithms or replacing them altogether. Such continuous evaluation ensures that AI operations evolve sustainably, supporting PES in achieving their goals while maintaining trust and efficiency in their AI-powered processes. 
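As a simple illustration of what ongoing monitoring can look like, the sketch below compares a model's monthly precision against a baseline and flags months that drift beyond a tolerance. The metric, figures, and tolerance are invented for illustration; a real monitoring setup would track multiple indicators and feed into a review process.

```python
# Minimal monitoring sketch (illustrative only): track a model's monthly
# placement precision against a baseline and flag months where performance
# drifts below a tolerance. All figures here are invented.
BASELINE_PRECISION = 0.72
TOLERANCE = 0.05

monthly_precision = {
    "2024-06": 0.71,
    "2024-07": 0.70,
    "2024-08": 0.64,   # noticeably below baseline
    "2024-09": 0.73,
}

for month, precision in monthly_precision.items():
    drift = BASELINE_PRECISION - precision
    flag = "REVIEW" if drift > TOLERANCE else "ok"
    print(f"{month}: precision={precision:.2f} (drift={drift:+.2f}) {flag}")
```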

WCC’s expertise in utilizing AI for the Public Employment Domain

To conclude this two-part blog series: the adoption of AI by Public Employment Services holds immense potential to enhance efficiency, decision-making, and service delivery. However, as this blog highlights, there are also significant challenges in using AI for Public Employment Services, ranging from ensuring accountability and fairness to addressing biases, data privacy, and resistance among stakeholders. By proactively addressing these concerns—through robust frameworks, ethical practices, staff training, and ongoing monitoring—PES can responsibly integrate AI into their operations. Striking the right balance between innovation and trust will be key to maximizing the benefits of AI while upholding their commitment to public service and inclusivity.

At WCC, thanks to almost three decades of experience in supporting PES across the world, the responsible use of AI has become second nature to us. We have the expertise to face the challenges that come with applying AI in the labor market, and to stay compliant with emerging legislation by applying the right types of algorithms in the right way. We continuously work to maintain our frontrunner role through partnerships such as the Nederlandse AI Coalitie (NL AIC) and the EU AI Pact.

Read more about our solutions for Public Employment Services here.

 

Article by: WCC Community

Published on: November 26, 2024
