How to break biometric bias for gender equality

Digital access has the power to level the playing fields

Our digital identity holds a lot of power. We use it every day to prove who we are to our banks, insurers, entertainment providers, healthcare providers, the Government and more.

Our digital identities will be at the heart of post-pandemic economic revival, driving the adoption of the sprawling sharing economy and improving safety and security in this digital revolution. But the value of what digital identities can offer goes far beyond this.

Making digital identities mainstream will make it possible to open up a new world of inclusion for billions of women. While the possibilities are huge, there is one major issue to contend with: according to UN Women, 3.7 billion people do not have access to the internet, and half of them are women. This digital exclusion means that women, in particular, are missing out on the innovation that comes with inclusivity, as well as the obvious economic and financial benefits.

That said, mobile technologies possess tremendous power to accelerate change. The technology industry bears the responsibility to build and offer technologies that can connect digitally underserved communities. Creating technologies has an inherently moral dimension, to “shape how the people using them can realise their potential, identities, relationships, and goals”, according to the World Economic Forum.

Questioning the design process

Race and gender equality are probably the most widely discussed issues and must be urgently addressed. Facial recognition systems are under scrutiny and, in some cases, have already been shown to exhibit racial bias.

It’s important to note that biometrics themselves are not actually biased, as they are not making any decisions based on human values. Bias and inequality in biometric technologies are caused by a lack of diverse demographic data, bugs, and inconsistencies in the algorithms. For example, if the training data primarily includes information related to just one demographic, the learning models will disproportionately focus on the characteristics of that demographic.
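The mechanism described above can be illustrated with a toy simulation. In this hypothetical sketch, a face-matching model produces lower genuine-match scores for an underrepresented group (all distributions, group labels, and numbers are invented for illustration), and the accept threshold is tuned on a training set that is 95% the majority group. The result is a much higher false rejection rate for the minority group, even though the threshold itself encodes no human values:

```python
import random

random.seed(0)

def genuine_score(group):
    # Hypothetical assumption: the embedding model represents group "A"
    # better, so genuine matches for group "B" score lower on average.
    mean = 0.80 if group == "A" else 0.60
    return random.gauss(mean, 0.08)

def impostor_score():
    # Non-matching pairs score low for both groups in this toy model.
    return random.gauss(0.35, 0.08)

# Skewed training data: 95% group A, 5% group B genuine pairs, plus impostors.
train_genuine = [genuine_score("A") for _ in range(950)] + \
                [genuine_score("B") for _ in range(50)]
train_impostor = [impostor_score() for _ in range(1000)]

# Pick the accept threshold that maximises accuracy on this skewed set.
def accuracy(t):
    true_accepts = sum(s >= t for s in train_genuine)
    true_rejects = sum(s < t for s in train_impostor)
    return (true_accepts + true_rejects) / (len(train_genuine) + len(train_impostor))

threshold = max((t / 100 for t in range(20, 90)), key=accuracy)

# False rejection rate: how often genuine users are wrongly turned away.
def frr(group, n=2000):
    return sum(genuine_score(group) < threshold for _ in range(n)) / n

print(f"threshold chosen on skewed data: {threshold:.2f}")
print(f"false rejection rate, group A: {frr('A'):.1%}")
print(f"false rejection rate, group B: {frr('B'):.1%}")
```

Because the threshold was optimised against data dominated by one demographic, group B's genuine users fall disproportionately below it — a simplified version of how real-world misidentification harms arise without any deliberate bias in the algorithm itself.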

The inability to identify people within these groups has consequences in the real world. The lack of accuracy of these technologies can lead to people being mistreated, from being denied financial products and services to facing issues with the Government or police due to misidentification. People who aren’t represented lose the ability to be acknowledged by the wider society.

Eliminating biometric bias with trust

When it comes to diversity in identity technologies, there is also an urgent business need. Gartner recently found that most companies see minimising bias and discrimination as a key driver behind their selection of identity technologies. With the adoption of these technologies rising rapidly since the onset of the COVID-19 pandemic and identity verification from home becoming the norm, there is no better time to act.

Trust is everything in the technology sector, specifically in the identity space. Open disclosure on status and intentions will set organisations apart. Companies that consistently report on the outcomes of their diversity initiatives, and even their gaps and failures, will earn trust.

Identity verification is all about enabling trust in the digital economy. This means we need to be transparent in explaining the technology’s impact: for example, how AI and machine learning are used to protect consumers while simultaneously meeting consumer demand for convenience, accuracy, and quality. Being open about how this works and why it’s necessary will be crucial.

Looking to the future

Putting these practices in place will be a step forward in gender equality. To succeed long-term, technology firms should prioritise access for everyone, including women.

Inclusion means equal access, but we aren’t there yet. We still have a long way to go, even with some of the world’s most widely adopted technologies. Collectively we have a responsibility to ensure digital identity technologies are truly inclusive. That means not misrepresenting the underrepresented through racist and sexist facial recognition and artificial intelligence (AI) algorithms, apps, and mobile devices that don’t consider women and people of colour.

For identity technology providers, this means solutions need to enable inclusive access, whether geographic or biometric, free from bias and discrimination. We can create technology solutions that offer digital access to everyone and work with governments and policymakers to make equal access a reality.

Gender equality is the key to making our world a better place, and technology has its role to play.

By Cindy White, CMO at Mitek, a global leader in digital identity verification.
