Contents:

  • How can AI help in recruitment?
  • When should a recruitment business use AI and when should it rely on humans?
  • How can AI reduce bias and support diversity and inclusion in the recruitment process?
  • What are the potential pitfalls of using AI in recruitment?
  • Should recruitment businesses develop their own AI or get it from a specialist vendor?
  • What should recruitment businesses look for when choosing an AI solution?
  • If developed in-house, what should recruitment businesses do when training their AI?
  • Will AI impact the recruitment industry by causing job losses?
  • What is the guidance around the use of Artificial Intelligence?

 

Artificial Intelligence (AI) has seen significant growth and development in recent years. AI is being widely adopted across a variety of industries and is having a profound impact on many aspects of our lives. AI technologies, such as machine learning and natural language processing, have improved greatly and are now being used in areas like healthcare, finance, and customer service. They are designed to tackle complex tasks and to make our lives easier and more efficient.

In the UK, the DCMS (Department for Digital, Culture, Media & Sport) appointed EY to conduct an evidence analysis and primary market research to assess the extent of data foundations and AI adoption. The research findings, “Data foundations and AI adoption in the UK private and third sectors”, were published in August 2021, and the overwhelming majority of participants indicated that data is seen as important to the success and growth of organisations across the private and third sectors.

The research found that adoption is significantly higher in the UK private sector, with 70% of private-sector organisations planning or already using AI, which compares with 42% in the third sector. Within the UK private sector, 90% of large organisations have planned or already adopted AI, compared with 48% of SMEs. From an industry perspective, organisations operating in Finance and Technology, Media and Telecom (TMT) report the highest levels of adoption.

In the recruitment industry, the rise of AI has also been substantial. AI technologies are being increasingly adopted to streamline and optimise various aspects of the recruitment process. AI algorithms can be used to automate many tasks, some of which are described below. This has resulted in increased efficiency and productivity for recruiters, as well as a better candidate experience. AI has also been able to tackle issues such as unconscious bias and has the potential to enhance diversity in the hiring process.

As with any transformation, AI adoption also has its hazards. There are a variety of concerns regarding this technology, ranging from job displacement to ethical questions, so guidelines around its use have started to be developed. This article looks at the growing use of AI in recruitment, how the industry can benefit from it, and what it can do to avoid potential pitfalls. We’ll also touch on the guidance emerging around its use in the UK and other parts of the world.

 

How can AI help in recruitment?

When trained and used correctly, AI can increase efficiency, reduce manual work, and provide valuable insights and predictions, allowing recruiters to make data-driven decisions to grow their business and improve the candidate experience. Some of the uses of AI in recruitment are:

  • Automated Resume Screening: AI can quickly sort through large volumes of resumes, identifying the most relevant candidates based on specific job requirements and saving recruiters time and effort (a minimal sketch of this idea follows this list).
  • Predictive Analytics: AI can analyse data on job market trends, salary benchmarks, and other factors to make predictions on talent supply and demand, allowing recruitment companies to make better decisions on where to focus their efforts.
  • Predictive hiring: AI algorithms can analyse data from past hiring practices to identify patterns and make predictions about which candidates are most likely to be successful.
  • Personalised job matching: AI can match candidates with job opportunities based on their skills, experience, and preferences.
  • Chatbots: AI-powered chatbots can handle repetitive tasks such as answering frequently asked questions and scheduling interviews, freeing up recruiters to focus on more strategic tasks.
  • Video Interviewing: AI can provide a more objective and consistent evaluation of candidates through the use of video interviewing, reducing the risk of human bias and improving the accuracy of the selection process.
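
To make the first point above concrete, here is a minimal sketch of keyword-based resume screening in Python. The job requirements, candidate data and scoring rule are hypothetical and purely illustrative; production screening tools typically rely on far more sophisticated natural-language models.

```python
# Minimal sketch of automated resume screening: score each resume by how many
# of the job's required skills it mentions. All data below is hypothetical.

from dataclasses import dataclass

@dataclass
class Resume:
    candidate_id: str
    text: str

def screen_resumes(resumes, required_skills, top_n=5):
    """Rank resumes by the number of required skills found in the text."""
    scored = []
    for resume in resumes:
        text = resume.text.lower()
        matched = {skill for skill in required_skills if skill.lower() in text}
        scored.append((resume.candidate_id, len(matched), sorted(matched)))
    scored.sort(key=lambda item: item[1], reverse=True)  # most matches first
    return scored[:top_n]

if __name__ == "__main__":
    resumes = [
        Resume("cand-001", "Senior Python developer with SQL and AWS experience."),
        Resume("cand-002", "Marketing specialist with strong communication skills."),
        Resume("cand-003", "Data analyst: Python, SQL, Tableau, stakeholder reporting."),
    ]
    for candidate_id, score, skills in screen_resumes(resumes, ["Python", "SQL", "AWS"]):
        print(f"{candidate_id}: {score} matched skill(s) {skills}")
```

In practice, this ranking step would sit behind an applicant tracking system and use semantic matching rather than exact keyword hits, but the principle of scoring candidates against explicit job requirements is the same.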

When should a recruitment business use AI and when should it rely on humans?

The predominant view from thought leaders and industry experts is that recruiters should use a combination of AI and human involvement in the recruitment process to leverage the strengths of both. AI can be used to handle repetitive and simple tasks, while human recruiters can provide the critical thinking, empathy, and judgement necessary to make well-informed hiring decisions.

When deciding whether to use AI or rely on humans in the recruitment process, recruiters should consider the following:

  • Complexity of the task: AI can handle repetitive and simple tasks quickly and accurately, while human recruiters are better suited to handle more complex and nuanced tasks that require critical thinking and judgement.
  • Candidate experience: While AI can automate certain aspects of the recruitment process, human recruiters are often better equipped to provide a personalized and empathetic candidate experience.
  • Bias: AI algorithms can minimise unconscious bias in the recruitment process, but they can also perpetuate existing biases in the data they are trained on. Human recruiters have the ability to recognise and counteract bias in the recruitment process.
  • Data privacy: AI technologies can handle and process large amounts of candidate data, but there are concerns about the privacy and security of this data. Human recruiters are better equipped to ensure the appropriate handling of sensitive candidate information.

How can AI reduce bias and support diversity and inclusion in the recruitment process?

AI can help reduce bias and support diversity in the recruitment process in several ways:

  • Resume screening: AI algorithms can be trained to screen resumes objectively and eliminate human biases from the selection process. By removing names, addresses, and other personal information from resumes, AI tools can help to prevent unconscious bias (see the sketch after this list).
  • Predictive hiring: AI systems can analyse vast amounts of data to identify patterns and make predictions about which candidates are likely to be successful in a particular role. By using a data-driven approach, AI can reduce the impact of human biases in the hiring process.
  • Skill-based matching: AI systems can match candidates to job openings based on their skills and qualifications, rather than demographic information or other biases.
  • Diversity monitoring: AI systems can analyse recruitment data to identify areas where bias may be present, such as underrepresented groups being less likely to be invited to interview or offered a job. Recruiters can then take steps to address these biases and promote diversity in the recruitment process.
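
To make the resume screening point above concrete, below is a minimal sketch of how personal details might be redacted from a CV before it reaches a screening model. The regular expressions, placeholder tags and example CV are illustrative assumptions; real anonymisation tools generally use trained named-entity-recognition models rather than simple patterns.

```python
# Minimal sketch of CV anonymisation: strip common personal identifiers
# (emails, phone numbers, and names from a supplied list) before screening.
# The patterns and the example CV text are purely illustrative.

import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def anonymise_cv(text, known_names):
    """Replace emails, phone numbers and listed names with neutral placeholders."""
    redacted = EMAIL_PATTERN.sub("[EMAIL]", text)
    redacted = PHONE_PATTERN.sub("[PHONE]", redacted)
    for name in known_names:
        redacted = re.sub(re.escape(name), "[NAME]", redacted, flags=re.IGNORECASE)
    return redacted

if __name__ == "__main__":
    cv = ("Jane Smith, jane.smith@example.com, +44 20 7946 0958. "
          "Five years' experience as a data analyst using Python and SQL.")
    print(anonymise_cv(cv, known_names=["Jane Smith"]))
    # -> "[NAME], [EMAIL], [PHONE]. Five years' experience as a data analyst using Python and SQL."
```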

However, it is important to note that AI algorithms can only be as unbiased as the data they are trained on. Therefore, organisations that either purchase an AI system or develop one in-house should ensure that their algorithms are fit for purpose, that their AI is trained correctly, and that their recruitment data is diverse and free from bias.
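
Building on the diversity monitoring point above, one simple, commonly cited check is to compare selection rates across candidate groups and flag any group whose rate falls below four-fifths of the best-performing group's rate. The sketch below uses hypothetical counts and is a monitoring aid, not legal or compliance advice.

```python
# Minimal sketch of diversity monitoring: compare interview-invitation rates
# across candidate groups and flag any group whose rate is below 80% of the
# highest group's rate (the "four-fifths" rule of thumb). Counts are hypothetical.

def selection_rates(outcomes_by_group):
    """outcomes_by_group maps group -> (selected, total)."""
    return {group: selected / total
            for group, (selected, total) in outcomes_by_group.items()}

def flag_adverse_impact(outcomes_by_group, threshold=0.8):
    rates = selection_rates(outcomes_by_group)
    best_rate = max(rates.values())
    return {group: rate / best_rate
            for group, rate in rates.items()
            if rate / best_rate < threshold}

if __name__ == "__main__":
    invited_to_interview = {
        "group_a": (45, 100),  # 45 of 100 applicants invited to interview
        "group_b": (30, 100),  # 30 of 100 applicants invited to interview
    }
    print(flag_adverse_impact(invited_to_interview))
    # -> {'group_b': 0.666...}: group_b's rate is below 80% of group_a's
```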

What are the potential pitfalls of using AI in recruitment?

While AI has the potential to improve recruitment processes, it is essential to be aware of possible downsides and to take appropriate measures to mitigate them.

Such risks could include:

  • Bias: AI algorithms may perpetuate existing biases in the recruitment process, such as gender, race, or age bias. As mentioned above, it is critical to ensure that the data used to train the AI is diverse and unbiased and that the AI is regularly tested for fairness.
  • Lack of transparency: AI-driven recruitment processes can sometimes be opaque, making it difficult for candidates and recruiters to understand how decisions are being made. This can result in a lack of trust and may negatively impact the candidate experience.
  • Ethical concerns: AI raises a number of ethical questions, including issues around privacy, accountability, and control.
  • Privacy concerns: The use of AI in recruitment requires access to personal data, which can raise privacy concerns. It is essential to ensure that the AI solution is secure and that the vendor has appropriate data protection measures in place.
  • Unreliability: AI algorithms are only as good as the data they are trained on. If the data used for training is inaccurate or outdated, the AI’s predictions may be unreliable.
  • Displacement of human recruiters: AI-driven recruitment may result in the displacement of human recruiters, potentially leading to job losses and causing negative social and economic impacts.
  • Technical issues: AI algorithms are complex and can be prone to technical issues, such as bugs or system failures. It is important to ensure that the AI solution has robust backup and recovery systems in place.
  • Security risks: AI systems can be vulnerable to cyber attacks, which can have serious consequences, particularly in critical systems such as healthcare, finance, and transportation.
  • Unintended consequences: The deployment of AI systems can have unexpected and sometimes undesirable consequences, particularly when they are used in complex real-world settings.
  • Dependence on algorithms: Relying too heavily on AI can result in a loss of critical thinking and decision-making skills, as well as a reduced ability to understand and interpret the world around us.

From academics to businesses, ChatGPT’s meteoric rise is causing concern across many fields. The misuse of AI and automation tools in recruitment has become a growing concern, as these tools can be used to manipulate and fabricate data to give an unrealistic view of a candidate’s qualifications and experience, or of their cultural fit with the hiring organisation. Recruiters therefore need to be aware of such trends and be extra vigilant when reviewing CVs, to ensure that they are receiving genuine applications from qualified candidates.

Should recruitment businesses develop their own AI or get it from a specialist vendor?

Both developing an AI in-house and obtaining it from a vendor have their own advantages and disadvantages. Companies should carefully consider their specific needs, resources, and goals before making a decision.

The decision to develop an AI in-house or obtain it from a specialist AI vendor depends on several factors, including:

  • In-house expertise: If the company has the necessary in-house expertise, such as data scientists, machine learning engineers, and software developers, it may be more cost-effective to develop the AI in-house.
  • Budget: Developing an AI in-house can be a significant investment, both in terms of time and money. If the company has limited resources, it may be more cost-effective to obtain an AI solution from a vendor.
  • Time to market: Obtaining an AI solution from a vendor is generally faster than developing it in-house, as the vendor has already done the development work and can provide a ready-made solution.
  • Customisation needs: If the company requires a highly customised solution, it may be more effective to develop the AI in-house, as vendors may not be able to provide exactly what the company needs.
  • Maintenance and support: Developing an AI in-house requires ongoing maintenance and support, which can be time-consuming and expensive. Obtaining an AI solution from a vendor usually includes ongoing maintenance and support as part of the package.

 

What should recruitment businesses look for when choosing an AI solution?

When considering externally developed AI solutions, recruitment businesses should take the following factors into account:

  • Functionality: Ensure that the solution meets their specific requirements and can perform the tasks that are necessary for their recruitment process.
  • Accuracy and fairness: Look for a solution that has been tested and validated to ensure that it is making accurate predictions and avoiding bias.
  • Integration with existing systems: Consider the compatibility of the solution with the company’s existing recruitment software and HR systems.
  • User-friendliness: Ensure that the solution is easy to use and that the vendor provides adequate training and support.
  • Scalability: Consider the vendor’s ability to support the company’s growth and increase in volume of recruitment processes.
  • Data privacy and security: Ensure that the vendor has adequate measures in place to protect candidate data and comply with relevant data protection regulations.
  • Cost: Consider the vendor’s pricing model and ensure that it fits within the company’s budget.
  • Vendor reputation and track record: Look for a vendor with a good reputation and a proven track record of delivering high-quality AI solutions in the recruitment industry.

 

If developed in-house, what should recruitment businesses do when training their AI?

We recommend focusing on the following steps:

  • Define clear objectives and use cases for the AI: What tasks should the AI be able to perform and what problem should it solve?
  • Choose appropriate data sets: Ensure that the data used for training the AI is diverse, relevant, and reflects the target population.
  • Develop and implement ethical guidelines: Consider the ethical implications of AI-driven recruitment, such as avoiding bias in decision-making, protecting candidates’ privacy, and ensuring fairness and transparency.
  • Validate the model: Use appropriate evaluation metrics to ensure that the AI is making accurate predictions, and adjust the model as needed (a minimal validation sketch follows this list).
  • Continuously monitor and improve the AI: Regularly assess the performance of the AI and make improvements as needed to maintain its accuracy and usefulness.
  • Foster a culture of collaboration and transparency: Encourage open communication and collaboration between recruiters and the AI team to ensure that everyone is working towards the same goals.
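
As a hedged illustration of the “validate the model” step above, the sketch below evaluates a screening classifier on a held-out test set and reports precision and recall, overall and per candidate group. The synthetic features, labels and group attribute are stand-ins; it assumes scikit-learn is available and is a starting point rather than a complete validation process.

```python
# Minimal sketch of model validation for an in-house screening model:
# hold out a test set, report precision/recall, and compare recall per group.
# The synthetic arrays below stand in for real (carefully governed) data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                                     # hypothetical features
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)   # hypothetical outcome
groups = rng.choice(["group_a", "group_b"], size=500)             # protected attribute

X_train, X_test, y_train, y_test, _, g_test = train_test_split(
    X, y, groups, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)

print("precision:", precision_score(y_test, preds))
print("recall:   ", recall_score(y_test, preds))
for group in ["group_a", "group_b"]:
    mask = g_test == group
    print(group, "recall:", recall_score(y_test[mask], preds[mask]))
```

A large gap between groups on a metric such as recall would be a prompt to revisit the training data and features, in line with the monitoring and improvement steps above.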

 

Will AI impact the recruitment industry by causing job losses?

AI has the potential to impact job creation both positively and negatively. On one hand, it can automate certain tasks, which may lead to job losses in some industries. On the other hand, it also has the potential to create new jobs and industries, such as those related to developing, deploying, and maintaining AI systems. Additionally, it can improve efficiency and productivity, leading to economic growth and the creation of new job opportunities. The net impact of AI on job creation will depend on various factors, including the speed of adoption, government policies, and the ability of workers to acquire new skills as the job market changes. If you’d like to read a recruitment expert’s views on AI, see our interview here with Louise Triance from UK Recruiter, a leading news and networking platform.

 

What is the guidance around the use of Artificial Intelligence?

Many governments have issued guidance around the use of AI in the private sector. The specifics can vary greatly depending on the jurisdiction, but generally governments aim to ensure the responsible and ethical development and deployment of AI. This often includes guidelines on transparency, fairness, safety, and accountability. Several countries have national AI strategies that outline their plans for ensuring responsible development. The European Union and organisations such as the Organisation for Economic Co-operation and Development (OECD) have also developed principles for the ethical use of AI.

 

The UK announced a national strategy that outlines its plans to ensure responsible and ethical development and deployment of AI. This includes guidelines on transparency, fairness, safety, and accountability. The UK’s Information Commissioner’s Office (ICO) has issued guidance on AI and data protection, which includes recommendations on data privacy, security, and human rights considerations when using AI. Additionally, the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, provides guidance and research on responsible AI.

 

In The Netherlands, the Dutch data protection authority (‘AP’) published a document on artificial intelligence and algorithms in February 2020. The document emphasised the need for supervision of such technologies, and underlined that the principles of lawfulness, fairness, and transparency provide the appropriate basis for allowing the use of AI and algorithms.

 

 

In conclusion, Artificial Intelligence certainly looks like it’s here to stay. In 2021, the USA’s International Trade Administration estimated that the UK’s AI market is set to add $880bn to the economy by 2035, with investment in artificial intelligence having reached record highs and UK AI scaleups raising almost double the amount raised in France, Germany and the rest of Europe combined. The UK was ranked third in the world (after the U.S. and China) for investment in AI. Moving forwards, it will certainly be a technology every business should keep in mind when planning for growth and for additions to its tech stack. Yet this should not be done without also keeping in mind the privacy, security and ethical considerations described above. If you are contemplating using AI in your operations, it’s best to take into account all the pros and cons and to consult the relevant authorities in your jurisdiction to determine the specific guidance available.