AI in recruitment: is it helping or hindering your drive for DE&I?

99% of Fortune 500 companies use AI recruitment software to bolster hiring - but it can be problematic

We are all familiar with the role artificial intelligence (“AI”) plays in our everyday lives. We have grown used to unlocking our phones with facial recognition, talking to Alexa and receiving personalised marketing.

However, the practical applications of AI extend far beyond predicting consumer purchasing habits, and it is increasingly being used as a tool to optimise the recruitment process. In 2019, Unilever reported that using AI in recruitment saved the business nearly £1 million, as well as roughly 100,000 hours of interviewing time. It is therefore unsurprising that 99% of Fortune 500 companies rely on talent-sifting AI recruitment software (Forbes) as a means of targeting higher-quality candidates, speeding up the hiring process and freeing up employee time.

AI, recruitment and discrimination

At face value, it is easy to see the appeal of AI technology – but it can become problematic when AI is relied upon to reduce bias in the hiring process.

It is well known that a diverse workforce gives employers access to a greater range of talent, driving creativity and innovation and delivering commercial advantages. With growing pressure to meet diversity and inclusion goals and to establish meritocratic cultures, many companies are now relying on AI in recruitment to ensure candidates are assessed “objectively” and without reference to characteristics such as gender and race.

When implemented correctly, AI tools can search for candidates in an unbiased manner, and removing human decision-making from at least part of the recruitment process seems like a no-brainer for reducing unconscious bias in hiring. Unfortunately, a recent study by the University of Cambridge has shown that relying on AI to remove bias and discrimination can be counterproductive, and is a dangerous example of “techno-solutionism” (turning to technology to provide quick fixes for deep-rooted societal issues).

Sourcing

As an example, AI is commonly used to market job vacancies to the individuals most likely to apply for them. This can provide cost efficiencies by reducing the scope of an advertising campaign without necessarily reducing its impact.

To do this, a data set of historically successful candidates and/or current employees is given to the AI technology, which uses machine learning to develop an algorithm that targets prospective candidates resembling those who succeeded before. While this may sound effective, it means that any historical bias in previous recruitment processes or within the existing workforce will be learned and replicated in the algorithm’s pre-selection of prospective applicants. This type of practice could fall foul of discrimination laws.
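
By way of illustration, the short sketch below (written in Python with scikit-learn; the data set, groups and feature names are all invented) shows how a model trained on historically biased hiring outcomes can rediscover that bias through an apparently neutral proxy feature, even when the protected characteristic itself is withheld from the model:

```python
# Hypothetical sketch: a model trained on historically biased hiring
# outcomes rediscovers the bias through a "neutral" proxy feature.
# All data, groups and feature names here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected characteristic - deliberately withheld from the model.
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B

# An innocuous-looking CV feature (a postcode, hobby or society) that
# happens to correlate with group membership.
proxy = ((group + rng.normal(0, 0.5, n)) > 0.5).astype(float)

# Genuine ability signal, identically distributed across both groups.
skill = rng.normal(0, 1, n)

# Historical hiring decisions favoured group A regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 1.0

# Train only on skill and the proxy - 'group' is never an input.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The proxy picks up a clearly negative weight: the historical bias
# survives even though the protected characteristic was removed.
print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))
```

Dropping the protected characteristic from the inputs is not enough: the model simply routes the same signal through whichever correlated features remain.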

So why not just tweak the algorithm? Unfortunately, changing the algorithm to counter any bias discovered could then amount to positive discrimination, and regardless, the technology provider often will not understand the algorithm well enough to confidently correct it (as the algorithm has been developed by machine learning, not the provider).

Screening and selection

AI can also be used to conduct an initial sift of the applicants for a role. However, using historical data sets, AI can quickly learn to reject applicants who do not fit the profile of previously successful candidates. Amazon fell victim to this in 2015 when it discovered that its recruitment algorithm had learned to downgrade CVs which featured the word “women” (e.g. “women’s chess club”) after the vast majority of CVs in its data set belonged to men.
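
The same failure mode can be reproduced in miniature. In the hypothetical sketch below (again Python with scikit-learn; the CVs and hiring labels are invented, not Amazon’s data), a simple text classifier trained on gender-skewed outcomes ends up penalising gendered vocabulary rather than anything to do with skills:

```python
# Hypothetical sketch of the screening failure mode: when training
# outcomes are skewed by gender, a CV classifier attaches negative
# weight to gendered vocabulary rather than to skills. The CVs and
# labels below are invented, not Amazon's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of mens chess club, python developer",
    "mens rugby team, experienced java engineer",
    "python developer, hackathon winner",
    "captain of womens chess club, python developer",
    "womens coding society, experienced java engineer",
]
hired = [1, 1, 1, 0, 0]   # historical decisions skewed against women

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The gendered token is among the most heavily penalised features -
# the sift has "learned" the historical skew, not anything about skill.
weights = sorted(zip(vectoriser.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1])
for token, weight in weights[:3]:
    print(f"{token}: {weight:+.2f}")
```

A human reviewer would never treat membership of a women’s society as a reason to reject a CV, but a model trained only on skewed outcomes has no way of knowing that.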

Similar issues arise in automated video interviewing. Individuals who present differently from those in the given data set, for example due to their race, gender or physical disability, are statistically more likely to be rejected. The University of Cambridge study revealed that AI technology outcomes are also affected by a number of irrelevant variables, including lighting, background, clothing and facial expressions, and that the technology draws “spurious correlations between personality and apparently unrelated properties of the image, like brightness”. Recruitment decisions made using this technology are therefore more likely to be unfair. There is also a risk that these “irrelevant variables” could be manipulated or learned by individuals to increase their success rate in progressing through AI-assessed recruitment stages – undermining the very fairness the technology was implemented to achieve.

Legal developments

It is becoming increasingly important for businesses to take steps to avoid bias and discrimination when using AI. Regulators are calling for tighter AI laws, and recruitment looks to be under particular scrutiny. The AI Act, soon to be implemented by the EU, specifically classifies “employment” and “management of workers” as “high risk” areas, and any AI technology used in “high risk” areas will need to comply with strict obligations before it can be placed on the market.

Not only will this affect UK businesses whose operations extend to the EU, but the AI Act is also likely to serve as a template for other countries looking to regulate AI more effectively. In the UK, the Government has published an “AI Regulation Policy”, and further proposals for AI regulation are expected in the coming months. Meanwhile, the UK’s Information Commissioner’s Office has already begun investigating allegations of bias and discrimination in AI recruitment systems where algorithms have been used “to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds”.

What should businesses be doing?

Until there are industry-wide best practices, the responsibility to ensure that AI systems are used in non-biased ways falls upon those developing the technology and the employers using it. When implemented and used correctly, AI can be a fantastic resource to improve efficiency in the hiring process, but we recommend:

  • Establishing clear and transparent policies and practices around the use of AI to make recruitment decisions.
  • Casting a wide net when advertising vacancies, including using different methods of advertising across a variety of different platforms to ensure advertisements reach a wide pool of potential applicants.
  • Asking suppliers to explain how AI is being used in their systems so that you can make your own assessment as to whether it fits in with your diversity and inclusion objectives.
  • Training HR teams and recruiters in understanding how AI recruitment tools work and their potential limitations.
  • Considering using AI to improve efficiencies in less problematic stages of the recruitment process, such as reference checking, scheduling and communicating decisions.
  • Using AI only to assist with specific elements of the recruitment process, rather than relying on it throughout. For example, after an initial sift by AI, consider asking HR teams and recruiters to take a second look at applications from underrepresented groups to mitigate any potential bias in the AI sift.
  • Routinely testing and verifying the AI systems used – for example, by monitoring selection rates across different groups, as the sketch below illustrates.
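
One commonly cited benchmark for such testing is the “four-fifths rule” used in adverse-impact analysis: a recruitment stage merits review if any group’s selection rate falls below 80% of the highest group’s rate. The sketch below (in Python; the group names and counts are invented) shows the calculation:

```python
# Hypothetical sketch of a routine adverse-impact check using the
# "four-fifths rule": flag any group whose selection rate is below
# 80% of the highest group's rate. Group names and counts are invented.
def adverse_impact_ratios(applied: dict, selected: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applied = {"group_a": 400, "group_b": 300}
selected = {"group_a": 120, "group_b": 54}

for group, ratio in adverse_impact_ratios(applied, selected).items():
    status = "review this stage" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```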

There are obvious benefits to AI in recruitment, but it should be deployed with a full understanding of its drawbacks and in a more nuanced manner than may be the case in other areas of modern life.

Emily Hocken is an Associate at Stevens & Bolton.
