If the digital revolution is to benefit everyone, we need diversity in tech
The blind spots of the mainly white, male tech industry are becoming increasingly clear. Time to accelerate diversity in tech?
When I was a student at an all-women’s college, my visiting male friends and relatives used to protest about the terrible design of the men’s toilets. It was perhaps one of the few times they encountered facilities designed with women, rather than men, in mind, and they noticed. And none of us considered how the toilets might have appeared to someone in a wheelchair or someone who was transgender.
Hopefully, thanks to some high-profile campaigns, anyone designing a toilet these days would bear all these groups in mind. But while physical spaces may be becoming more accessible, we are now just as likely to socialise, learn, shop and do business online. Just as physical architecture has long had blind spots, so too does the architecture of the online world. And when tech firms make these oversights, they not only exclude people from a vital part of 21st-century living; they lose out on potential customers too.
Design blind spots
There are plenty of ways people can be alienated from tech, and often those shaping the internet will barely be aware of the consequences. Take picture captions. Many of those uploading content to the internet see them as just another box to fill in, or to leave blank where possible. But as the blind veteran Rob Long has eloquently explained, a detailed, descriptive picture caption allows internet users with limited vision to illustrate an article with their imagination.
“In this brave new world of pets in tutus, cats who resemble dictators and wardrobe malfunctions of political candidates, blind users can be left wondering how bad can a hairpiece be to warrant this level of anger,” he wrote in the New Statesman. “After all, a picture that has caused such furious debate may just be read out as ‘box contains image’.” While some social media platforms automatically generate descriptions, others simply leave it up to the consideration of users.
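Publishers do not have to wait for platforms to solve this for them: missing descriptions are easy to detect automatically. As a minimal sketch (the sample markup and class name are invented for illustration), Python's standard-library HTML parser can flag every image a screen reader could only announce as "image":

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Flag images with no alt attribute, or an empty one
            if not (attrs.get("alt") or "").strip():
                self.missing.append(attrs.get("src", "<no src>"))

# Invented sample markup: only the first image is usable to a blind reader
page = """
<img src="tutu-cat.jpg" alt="A cat in a pink tutu, mid-leap across a sofa">
<img src="chart.png" alt="">
<img src="banner.png">
"""

audit = AltTextAudit()
audit.feed(page)
print(audit.missing)  # ['chart.png', 'banner.png']
```

A check like this could run whenever content is uploaded, prompting the author before publication rather than leaving the gap for blind users to discover.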
Even more of us may be affected by the blind spots in artificial intelligence. Many pioneers were optimistic about AI’s ability to learn from the wisdom of the crowd – but certain experiments suggested it could instead end up reinforcing stereotypes. In 2016, Microsoft created Tay, a bot designed to learn through conversation, and unleashed it on the web. Perhaps Microsoft’s mainly white and male engineers were hopelessly optimistic, or simply unaware of the misogyny and racism that courses through social media channels. Either way, within 24 hours the bot had become an offensive troll and had to be retired.
And then there’s facial recognition, which is being rolled out everywhere from passport control to online security. But while the software is already accurate enough to identify a white male 99% of the time, the MIT Media Lab found in 2018 that it was less accurate at identifying white women, and strikingly bad at identifying women with darker skin.
A 2019 study by the information systems researcher Lauren Rhue found that when facial recognition software is combined with emotional analysis technology, it interprets black male faces as having more negative emotions than white male faces. “There is good reason to believe that the use of facial recognition could formalize preexisting stereotypes into algorithms, automatically embedding them into everyday life,” Rhue concluded.
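Disparities of the kind the MIT Media Lab and Rhue describe are straightforward to surface once results are broken down by demographic group. A minimal sketch of such an audit in Python, using invented counts rather than either study's actual figures:

```python
def accuracy_by_group(results):
    """Compute per-group accuracy from (correct, total) counts."""
    return {group: correct / total for group, (correct, total) in results.items()}

# Invented counts for illustration -- not data from the MIT or Rhue studies
results = {
    "lighter-skinned men":   (990, 1000),
    "lighter-skinned women": (930, 1000),
    "darker-skinned women":  (660, 1000),
}

rates = accuracy_by_group(results)
gap = max(rates.values()) - min(rates.values())

print(rates)             # accuracy for each group
print(f"gap: {gap:.0%}") # the headline number an audit should report
```

The point is less the arithmetic than the habit: a headline accuracy figure can hide a large gap between groups, and only a breakdown like this makes the gap visible before a product ships.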
See also: Digital divides in the workplace
Countering bias in tech
It seems fair to assume that most of the people designing these new technologies do not get out of bed determined to alienate their black female customers. But that doesn’t mean the choices they make along the way don’t have unintended consequences. So how can the tech industry counter this tendency?
It should start with itself. While women are, if anything, more engaged than men in buying new tech products, the industry that serves this customer base is overwhelmingly white and male. The figures are even worse when it comes to the technology in the firing line, artificial intelligence: according to research conducted by Wired magazine with the firm Element AI in 2018, just 12% of machine learning researchers are women.
As I wrote just before Easter, some tech giants are trying to change the situation, with outreach programmes, targeted recruitment drives and high-profile appointments of women and ethnic minorities. But even with the best will in the world, it is hard to keep up with the pace of technological development. Just as the potency of political campaigning on Facebook caught politicians and tech bosses alike by surprise, so too could the first AI scandal.
For this reason, it’s also important that, rather than simply relying on a future utopian workforce, tech bosses invest now in educating their existing one to recognise implicit biases. We know, thanks to Harvard’s Implicit Association Tests, that the majority of people hold biases against others based on their features, even if they are not conscious of them. In the UK, assumptions based on accents are also commonplace: 28% of Brits believe they have been discriminated against because of the way they talk. (For an amusing take on what this could mean, watch the 2011 BBC sketch in which two Scotsmen trapped in a lift hopelessly shout for floor “eleven” at an uncomprehending piece of American voice recognition software.)
Examples of inclusive technology
As well as raising awareness among their own workforces, employers may also be able to borrow from the examples of campaigners for more equal tech. These range from academics such as the late David MacKay, who invented Dasher, a system that lets users type simply by tracking their eye movements, to Josie Young, who has designed a blueprint for creating feminist chatbots. But there are also examples set by large corporations. Apple’s software supports Braille, while the iPhone camera app announces to visually impaired users when someone is in the shot. Google has appointed a head of ethical machine learning to test and scrutinise new technologies. Facebook is now analysing the way its algorithm affects certain groups through a tool called Fairness Flow.
Ultimately, there are hard business reasons for countering bias in tech: no one wants to lose customers. But as anyone who has witnessed the privacy battles and political upheavals of the last decades knows, it is also no ordinary industry. Those employers that fail to acknowledge the impact of their blind spots now may find they are haunted by them later.
See also: How Tim Cook has used Apple to turn diversity into a priority for tech firms
Julia Rampen is the digital night editor at the Liverpool Echo, a former digital news editor at the New Statesman, and a former financial journalist.