Companies worldwide are acknowledging the need for diversifying their workforce. Can AI-driven assessments help?
From increased financial performance to better decision-making, companies are seeing a whole host of benefits associated with diverse workforces.
Unfortunately, numerous studies have shown that human bias is still prevalent in the hiring process. An extensive review of studies on hiring practices found no significant reduction in racial discrimination in hiring over the past thirty years, a finding that clearly calls for urgent action to increase diversity in the workplace.
Organisations must begin considering new tools and practices to supplement diversity training programmes. We need to see innovation and more active measures to reduce discriminatory practices in hiring.
AI-driven tech
One solution that organisations are looking to is the use of AI-driven technology. Lack of consistency in assessment practices, reliance on indicators of past achievement, and implicit or unconscious human bias are some of the ways in which bias creeps into the recruitment process. By leveraging technology to make these steps more consistent and data-driven, organisations can reduce the influence of such bias.
A clear example is the use of assessments driven by artificial intelligence (AI). This technology applies the same evaluation process to each candidate, focuses on measuring indicators of future job performance rather than past achievement, and enables organisations to sort through high volumes of candidates using objective criteria.
The use of assessments in hiring
Assessments have been used as part of recruitment processes for decades. As they focus on determining and measuring the key indicators of high job performance, they have been shown to be good predictors of future job success.
They provide a much fairer way of assessing candidates based on potential, rather than on past achievements such as exam results or which university a candidate attended. Pre-hire assessments offer standardised processes for evaluating a candidate’s likelihood to perform well in a given role. Each focuses on measuring the competencies, traits and abilities relevant to success in a specific role, and assessments can be created for a variety of roles and job levels.
However, traditional assessments, which often consist of dozens to hundreds of multiple-choice questions, can be lengthy and stressful for candidates, and large amounts of data are required to ensure they are predictive. Some candidates may also experience higher levels of test anxiety as a result of stereotype threat, a phenomenon whereby candidates believe their performance on a test may confirm inaccurate stereotypes about their group, which can lead to lower scores. Together, these factors typically result in high candidate dropout rates and a reduction in the predictive validity of the assessments.
By leveraging AI, the length of these assessments can be greatly reduced. Organisations can offer a broader assessment of the “whole candidate” while reducing any potential bias against certain groups of candidates.
AI is not as scary as it seems
AI has gained a bad reputation, with plenty of media articles claiming that AI algorithms are carriers of bias and discrimination. A recent example is the algorithm developed by Amazon for CV screening. Indeed, without deliberate action to audit these algorithms and eliminate any bias originating in their training data or introduced by their creators, AI algorithms are at risk of leading to biased decisions.
However, when algorithms are built on ethical design principles and continuously audited, AI has the potential to help recruiters make fair decisions and to improve the quality of their hires in ways that can actually increase diversity within organisations.
Firstly, AI-driven assessments enable data-driven decision-making, which can help overcome any unconscious bias that recruiters may have. They also allow candidates to be evaluated on a much larger set of competencies and abilities, in a much shorter testing time, when compared to traditional assessments. Secondly, due to the richness of data used in scoring AI-driven assessments, it becomes much more feasible to find the source of bias in the algorithm and remove it.
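To make this last point concrete, here is a minimal sketch, in Python, of one way a team might look for a feature-level source of bias before removing it from a scoring model. It is purely illustrative: the feature names, the threshold and the approach are assumptions for the example, not a description of any specific vendor’s pipeline.

```python
# Illustrative sketch only: flag assessment features whose scores differ
# sharply between two groups, as candidate sources of bias to review and
# potentially exclude from the scoring model. Names and threshold are
# hypothetical.
import numpy as np
import pandas as pd


def standardized_mean_differences(features: pd.DataFrame, group: pd.Series) -> pd.Series:
    """Cohen's-d-style gap per feature between the two groups."""
    labels = group.unique()
    a, b = features[group == labels[0]], features[group == labels[1]]
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd


def flag_biased_features(features: pd.DataFrame, group: pd.Series, threshold: float = 0.4) -> list:
    """Return features whose absolute group gap exceeds the chosen threshold."""
    gaps = standardized_mean_differences(features, group).abs()
    return gaps[gaps > threshold].index.tolist()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    group = pd.Series(rng.choice(["A", "B"], size=n))
    features = pd.DataFrame({
        # a competency measured the same way for both groups
        "problem_solving": rng.normal(0, 1, n),
        # a feature that happens to differ by group and could leak bias into scores
        "response_speed": rng.normal(0, 1, n) + (group == "A") * 0.8,
    })
    print(flag_biased_features(features, group))  # ['response_speed']
```

In practice, a flagged feature would be reviewed with organisational psychologists before being dropped, since a score gap on its own does not prove bias.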
Making AI more human
Crucially, AI-driven assessments are not there to replace people. They are intended to serve as decision-making aids for human recruiters, and are built by humans. The hiring process for most organisations includes person-to-person interviews following the assessment or screening stage, with hiring decisions ultimately made by people.
AI-driven assessments combine the expertise of organisational psychologists and data scientists. Organisational psychologists design the assessments of competencies relevant to job success in specific roles, while data scientists build the machine learning algorithms that score these assessments. The two work together, from the design phase through bias testing and validation, to ensure the validity and fairness of the assessments at every stage of the process.
Implicit bias
Auditing the quality of these algorithms is a key stage in the development of AI-driven assessments. Critically, it is at this stage that developers need to check whether a protected group is being discriminated against during the selection process. If evidence of adverse impact is found, addressing the root cause of bias is necessary. For AI-driven assessments, this entails looking for any implicit bias that may exist in the algorithm’s training data and – most importantly – removing it from the prediction.
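To illustrate what this kind of audit can look like, the sketch below applies the “four-fifths rule”, a common first screen for adverse impact, to hypothetical selection outcomes. The data and group labels are invented for the example, and real audits go well beyond this single check.

```python
# Illustrative sketch only: a four-fifths-rule screen for adverse impact.
# A ratio below 0.8 between the lowest and highest group selection rates
# is commonly treated as evidence worth investigating. Data are invented.
import pandas as pd


def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Proportion of candidates selected within each group."""
    return df.groupby(group_col)[selected_col].mean()


def adverse_impact_ratio(df: pd.DataFrame, group_col: str, selected_col: str) -> float:
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(df, group_col, selected_col)
    return float(rates.min() / rates.max())


if __name__ == "__main__":
    candidates = pd.DataFrame({
        "group":    ["A"] * 100 + ["B"] * 100,
        "selected": [1] * 40 + [0] * 60 + [1] * 30 + [0] * 70,
    })
    ratio = adverse_impact_ratio(candidates, "group", "selected")
    print(f"adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.40 = 0.75, below 0.8: investigate
```

If the ratio falls below 0.8, the next step described above is to trace the gap back to the training data or features driving it and remove that bias from the prediction.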
It is important to remember that AI-driven assessments are not intended to remove the human element from recruitment. Instead, scores derived through these assessments represent objective evaluations of a candidate’s likelihood to perform well on the job. These scores are then used as tools to support recruiters’ hiring decisions.
AI-driven assessments support both organisations and candidates, helping to ensure a process that is as fair as possible, is legally defensible, and objectively identifies the best talent. Organisations can sift through large volumes of applications using a systematic, objective and fair evaluation process. This gives the right candidates a better chance of being hired and, as a result, can lead to an increase in workforce diversity.
by Sonia Codreanu, a Business Psychologist at HireVue.