The Institute for the Future of Work has found that companies may be inadvertently breaking UK equality law by relying on American tools to check their automated systems for bias, discrimination and inequality.
In a new report, AI in hiring: assessing impacts on equality, the Institute finds that the auditing tools intended to support companies in complying with equality legislation are not fit for purpose.
Companies are having to respond at speed to the shock caused by the COVID-19 pandemic and are increasingly using automated systems in their operations.
The Institute’s report has found that the checks companies use to prevent bias are often inadequate and operate from a US perspective.
Breaking UK equality law
Anna Thomas, the director of the Institute for the Future of Work, says companies are at risk of breaking UK equality law and damaging their businesses: “Equality matters. Research has shown how reducing barriers to employment for women and people from BAME groups has fuelled both innovation and growth in productivity since the 1960s. Companies recognise this. But too often they are using systems to audit their AI processes that simply aren’t up to the job.
“This means they could be damaging their businesses by unwittingly hampering the development of a diverse workforce and even breaking equality legislation. Ultimately this undermines confidence in new technologies and could damage a company’s reputation in the eyes of its employees.”
Where AI systems are used to determine access to work, regular auditing for equality, together with appropriate adjustments where inequality is identified, is needed to avoid breaches of the Equality Act.
But the Institute’s report finds that these tools are rarely explicit about how they understand fairness or bias and are often built to meet legal requirements in the United States, making them unfit for use in the UK.
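To see why provenance matters, consider a minimal Python sketch of the kind of check a US-built auditor typically encodes: the EEOC’s “four-fifths rule”, which flags adverse impact when one group’s selection rate falls below 80% of the highest group’s. This is a hypothetical illustration with made-up data, not code taken from the report or any specific tool.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, hired) pairs."""
    applied = Counter(group for group, _ in outcomes)
    hired = Counter(group for group, was_hired in outcomes if was_hired)
    return {group: hired[group] / applied[group] for group in applied}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    top group's rate. The 0.8 cut-off is a US EEOC convention; UK equality
    law defines no such numeric threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical screening outcomes: (group label, hired?)
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)

print(selection_rates(outcomes))    # {'A': 0.4, 'B': 0.25}
print(four_fifths_check(outcomes))  # {'B': 0.25}: 0.25 < 0.8 * 0.4
```

The point is not the arithmetic but the assumption baked into it: the 80% threshold comes from US enforcement guidance, and importing it does not answer the distinct questions the Equality Act 2010 asks about direct and indirect discrimination.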
Key recommendations
Among a series of recommendations, the Institute proposes that companies should integrate technical auditing of their AI systems into wider “Equality Impact Assessments” to help them identify threats to equality and act appropriately.
It also calls on computer scientists and policymakers to work together to develop a greater understanding of the risks posed by AI.
This will help to identify patterns of systemic inequality that emerge over time and go beyond statistical bias, and to develop approaches for addressing bias, discrimination or inequality detected in AI systems.
Report findings
• Auditing tools routinely embed US assumptions about the requirements of equality law, which differ from those of the UK
• Companies creating auditing tools are rarely clear about their purpose or methods – users need to understand what they are evaluating and why
• Auditing tools offer a ‘snapshot’ of bias in an AI system but do not evaluate its impact over time (see the sketch after this list)
• The effects of AI systems on equality are not adequately considered, or prioritised, within existing approaches to auditing
• Auditing tools are not equipped to address or mitigate many forms of bias, discrimination and inequality when they are detected
• Unless auditing tools are focused on relevant equality questions, and their use is integrated into a wider equality impact assessment, their usefulness for promoting equality is limited
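The “snapshot” limitation can be made concrete. The sketch below, again hypothetical with invented numbers, tracks a simple disparity ratio across successive hiring rounds: a one-off audit at round one would pass, while monitoring over time reveals a widening gap.

```python
def disparity_ratio(rates):
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Illustrative per-round selection rates for two groups over six rounds.
rounds = [
    {"A": 0.40, "B": 0.38},  # round 1: near parity -- a snapshot audit passes
    {"A": 0.41, "B": 0.36},
    {"A": 0.42, "B": 0.33},
    {"A": 0.43, "B": 0.30},
    {"A": 0.44, "B": 0.27},
    {"A": 0.45, "B": 0.24},  # round 6: a substantial disparity has emerged
]

for i, rates in enumerate(rounds, start=1):
    ratio = disparity_ratio(rates)
    flag = "  <-- widening gap" if ratio < 0.8 else ""
    print(f"round {i}: disparity ratio = {ratio:.2f}{flag}")
```

Only the later rounds are flagged, which is exactly the pattern a single audit at deployment would miss.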
Equality Taskforce
The report was written to support the work of the Institute’s Equality Taskforce. Its chair, Helen Mountfield QC, said: “Use of Artificial Intelligence and Machine Learning in the workplace risks super-charging existing inequalities in society – if we let it. Ensuring good work in the age of artificial intelligence depends on making good social policy.
“The Institute for the Future of Work’s proposals on conducting meaningful equality impact assessments [and audits] for artificial intelligence offer employers, policymakers and programmers practical, important and timely advice on how to identify, and avoid reproducing, unfair patterns of discrimination in the new world of work.”
The report recommends a series of actions for technology companies, industry bodies, businesses, unions, policymakers and legislators:
• Equality should be recognised as a guiding principle in the deployment of AI and auditing systems, alongside the principles of Fairness, Accountability, Sustainability and Transparency, and data protection principles.
• Professional and industry standards for auditing tools, including auditing for equality, are urgently required to maintain high, consistent standards.
• Regular auditing for equality, and taking steps to make appropriate adjustments where inequality is identified, are required to avoid breaches of the Equality Act where AI systems are used to determine access to work.
• Companies should integrate technical auditing into a wider equality impact assessment to help understand the different types of impact on equality and take action in response.
• To promote legal compliance, good governance and best practice, this wider equality impact assessment should aim to exceed the requirements of national equality legislation, as well as data protection and employment legislation.
• Auditing must fit within a broader approach to evaluating the impact of AI systems on equality. It should consider the effects on equality of opportunity and outcome, and focus companies on finding ways to tackle problems.
• Before deploying automated hiring tools, companies should consult their workforce and any affiliated union to discuss potential effects.