AI-powered recruitment can be racist or sexist – and here’s why

With over 30 years of experience and as an advisor to the UK and Indian Governments, Was Rahman knows a thing or two about the use of AI and data

Addressing racism and sexism in the workforce is now on the agenda of most successful organisations, as the business benefits of a more diverse workforce have become increasingly accepted. However, leaders can face formidable challenges putting these principles into practice. Policies, strategies and education are important tools for achieving a more diverse workplace, but in today’s world of applicant tracking systems (ATS) and online job postings, technology is also key.

One of the most important technologies used in HR today is AI (Artificial Intelligence), particularly in recruitment. Its primary benefits are recruitment efficiency and candidate quality, but AI can also reduce sexism and racism in hiring processes. However, as is periodically reported in the press, AI can also make these problems worse, leading to concerns about its use among both recruiters and candidates.

HR professionals can play a significant role in ensuring AI-powered recruitment has a positive impact on workplace diversity. Achieving this requires understanding how potential biases can be introduced so that appropriate practices and safeguards can be put in place.


How AI is used in recruitment

There are many forms of AI, often described with confusing labels. One form used in recruitment systems is called intelligent matching. This is similar to the AI behind movie recommendations and dating apps, and works by analysing large quantities of data to make forecasts and recommendations.

In recruitment, the AI processes data about candidates and job vacancies to suggest applicants to shortlist, interview and even select. This extends the keyword matching between job descriptions and applications that recruitment software has used for many years. AI does this in a more sophisticated way to achieve more accurate matches, and then learns over time to improve the quality of those matches further.
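To make the idea concrete, here is a deliberately simplified Python sketch of the kind of keyword matching that intelligent matching builds on. The function, sample text and scoring method are purely illustrative; real recruitment AI uses far richer models than a simple word overlap.

```python
# A minimal, illustrative sketch of keyword matching between a job
# description and a CV. Real "intelligent matching" systems are far more
# sophisticated; this simply scores the overlap of shared terms.

def keyword_overlap_score(job_description: str, cv_text: str) -> float:
    """Return the fraction of job-description keywords found in the CV."""
    job_terms = set(job_description.lower().split())
    cv_terms = set(cv_text.lower().split())
    if not job_terms:
        return 0.0
    return len(job_terms & cv_terms) / len(job_terms)

# Invented sample text for illustration only.
job = "python data analysis sql stakeholder reporting"
cv = "experienced analyst skilled in python sql and reporting"
print(f"Match score: {keyword_overlap_score(job, cv):.2f}")
```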

Intelligent matching is only one type of AI used in recruitment. Another can tailor recommendations to an organisation, not just a job vacancy. To do this, the AI reviews historical data about existing employees in the hiring organisation to understand what makes a successful employee there. This goes well beyond features of a role such as skills or education. Instead, it identifies broader types of fit, such as organisational culture.

AI determines this from past employee data, using complex statistical models and mathematical equations to learn how to recognise successful employees’ characteristics. This is known as “training” the AI system, and the historical data is referred to as “training data”.

The set of equations it uses to do this is known as an “algorithm” and allows the AI to predict how well a new candidate is likely to fit the organisation if hired. As the system is used and new data becomes available, the AI refines its own algorithm to improve its accuracy.
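As an illustration of what “training” and an “algorithm” mean in practice, here is a minimal Python sketch using a simple statistical model. The features, data and choice of a scikit-learn logistic regression are assumptions made purely for illustration, not a description of any specific recruitment product.

```python
# Illustrative sketch only: training a simple model on historical employee
# data to predict "organisational fit". Column meanings and data are invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per past employee, e.g.
# [years_experience, has_degree, prior_roles].
X_train = [
    [5, 1, 3],
    [2, 0, 1],
    [8, 1, 4],
    [1, 0, 2],
]
y_train = [1, 0, 1, 0]  # 1 = judged a "successful" employee historically

model = LogisticRegression()
model.fit(X_train, y_train)          # "training" the algorithm on past data

candidate = [[4, 1, 2]]              # a new applicant, encoded the same way
fit_probability = model.predict_proba(candidate)[0][1]
print(f"Predicted organisational fit: {fit_probability:.2f}")
```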

Today, AI-powered recruitment systems generally include both of these types of AI to assess how well an applicant matches an open position and how likely they are to succeed in the organisation if hired. Other forms of AI can be used elsewhere in the hiring process, but it’s in these two that the risk of introducing racism and sexism is greatest.


Racism and sexism from biased data

The first and most well-known source of racism and sexism in AI-powered recruitment is the training data. This is because data about existing employees reflects past hiring and promotion practices, including any previous diversity issues.

For example, if an organisation is in a historically male-dominated industry, it is likely that the existing workforce is mostly male, and therefore data about existing employees is inherently gender-biased. Thus, if an AI system is trained using this data, it could conclude that male employees are more successful than female ones – because there are more of them – and therefore favour male candidates in its recommendations.

This issue goes beyond simply whether gender and ethnicity data is analysed, because factors leading to racism and sexism may be masked. For example, if an AI system analyses data about where candidates live, it may find a correlation between postcode and organisational fit. However, further investigation may uncover that different residential areas have different ethnic concentrations, and some postcodes are, in effect, indirect indicators of employee ethnicity. If the organisation has historically had an issue with ethnic diversity, then training an AI algorithm with postcode data to assess organisational fit may inadvertently repeat that issue.
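One way such a hidden relationship can be surfaced is by cross-checking candidate features against protected characteristics in the historical data. The Python sketch below uses invented postcode, ethnicity and fit-score data purely to illustrate the idea.

```python
# Hedged sketch: surfacing a possible proxy relationship in training data.
# The column names and values are invented for illustration.
import pandas as pd

employees = pd.DataFrame({
    "postcode_area":  ["AB1", "AB1", "AB1", "CD2", "CD2", "CD2"],
    "ethnicity":      ["white", "white", "white", "asian", "asian", "white"],
    "high_fit_score": [1, 1, 0, 0, 0, 1],
})

# If fit scores and ethnicity both vary sharply by postcode area, postcode
# is likely acting as an indirect (proxy) signal for ethnicity.
print(pd.crosstab(employees["postcode_area"], employees["ethnicity"], normalize="index"))
print(employees.groupby("postcode_area")["high_fit_score"].mean())
```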

Diversity and algorithm design

The other major source of bias in AI recruitment systems is algorithm design, specifically the choices made by the people responsible for designing these algorithms.

HR experts and data scientists work together to specify which factors to consider and how to account for them. They typically do this based on how a person would perform an activity, but simply replicating that can lead to problems. This is because people may apply unconscious judgements which can be overlooked when designing a computer system to do the same thing. For example, a human recruiter may look at a candidate’s address to decide whether they are within commuting distance of the firm. But having an AI system do the same may also inadvertently introduce a proxy for ethnicity.

Data scientists select the most appropriate algorithm to start with and configure how it uses the selected data. Training data and machine learning will refine and improve the algorithm over time, but if data scientists choose the wrong base algorithm or configure it poorly, it may end up containing inherent biases regardless of how accurate it appears.


Avoiding biased data

An obvious way of preventing bias in AI recruitment is to exclude from the training data any details of ethnicity, gender or other characteristics relevant to diversity.

Doing this properly involves understanding how related data could create indirect bias. For example, career breaks, for reasons such as maternity leave, are more common among women, so they will appear rarely in male-dominated training data. Unless appropriately handled, data about continuity of employment is therefore a potential source of gender bias.

Examples such as this are straightforward to deal with once recognised, and data scientists have statistical tools to help investigate and address potential problems. But using them effectively requires HR professionals to understand the issue and know how to work with AI teams to ensure training data is unbiased.
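As a rough illustration of the kind of check data scientists can run, the Python sketch below removes protected attributes from a training set and flags remaining features that still correlate strongly with them. The column names, data and the 0.5 threshold are invented for illustration, not a recommended standard.

```python
# Sketch under assumptions: drop protected attributes from the training set
# and flag remaining features that still correlate strongly with them.
import pandas as pd

data = pd.DataFrame({
    "gender":             [1, 1, 1, 0, 0, 0],   # 1 = male, 0 = female (illustrative)
    "career_break_years": [0, 0, 0, 1, 2, 0],
    "years_experience":   [5, 7, 3, 6, 4, 8],
})

protected = ["gender"]
features = data.drop(columns=protected)

# Features highly correlated with a protected attribute may act as proxies.
for column in features.columns:
    correlation = data[column].corr(data["gender"])
    if abs(correlation) > 0.5:   # illustrative threshold only
        print(f"Potential proxy for gender: {column} (corr={correlation:.2f})")
```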

Ensuring unbiased algorithms

AI systems meet requirements set by humans, and humans also make key decisions about how those systems work – even if computers then refine and improve them. For AI-powered recruitment systems to be free of bias, the people defining requirements and designing algorithms need to be conscious of the risk of building discriminatory factors into those algorithms. Techniques exist to assist, but HR professionals need to ensure data scientists use them effectively.

Another responsibility for HR professionals is understanding how to test that bias has been prevented. This is a form of user testing that should happen with any IT system, but is not always addressed sufficiently. These tests should also be repeated periodically, in case ongoing machine learning leads to algorithm changes that introduce bias later.
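One simple, widely used form of such testing compares selection rates across groups, sometimes judged against the “four-fifths” rule of thumb. The Python sketch below uses invented shortlisting outcomes purely to illustrate the idea; genuine bias testing needs proper statistical review and should be repeated as the system keeps learning.

```python
# A hedged sketch of one common bias check: comparing shortlisting rates
# across groups. Data and the 0.8 threshold are illustrative only.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates in a group who were shortlisted (1 = yes)."""
    return sum(outcomes) / len(outcomes)

# Invented outcomes for two groups of applicants.
shortlisted_men = [1, 1, 0, 1, 1, 0, 1, 1]
shortlisted_women = [1, 0, 0, 1, 0, 0, 1, 0]

rate_men = selection_rate(shortlisted_men)
rate_women = selection_rate(shortlisted_women)
ratio = min(rate_men, rate_women) / max(rate_men, rate_women)

print(f"Men: {rate_men:.2f}, Women: {rate_women:.2f}, impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: selection rates differ markedly - investigate for bias.")
```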

Was Rahman is an expert in the ethics of artificial intelligence, the CEO of AI Prescience and the author of AI and Machine Learning.
