How to Close Technology’s Race Gap

In 2017, a soap dispenser went viral. The video, uploaded by Twitter user Chukwuemeka Afigbo, shows a white man waving his hand underneath the dispenser and soap coming out as normal. When a black palm is placed underneath, nothing happens. The offending machine, a product of Rubbermaid Commercial Products, is tested again as the two men are heard chatting to one another. One directs a question into the air: “What happened to your hand?” The other answers, “Too black.”

He’s not entirely wrong. The soap dispenser sends out an invisible light from an infrared LED bulb and activates when a hand reflects light back to the sensor. Darker colors absorb light rather than bounce it back, preventing the soap from being released.
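For the technically minded, the failure is easy to model. Here is a minimal Python sketch of that trigger logic; the intensity, threshold, and reflectance figures are invented for illustration, not measurements from any real dispenser:

```python
# Toy model of an infrared proximity trigger: the dispenser releases soap
# when the reflected IR signal clears a fixed threshold. All numbers here
# are illustrative assumptions, not measured values.

EMITTER_INTENSITY = 100.0   # arbitrary units of emitted infrared light
TRIGGER_THRESHOLD = 40.0    # hypothetically calibrated on high-reflectance hands

def reflected_signal(skin_reflectance: float) -> float:
    """Light reaching the sensor: emitted intensity scaled by reflectance."""
    return EMITTER_INTENSITY * skin_reflectance

def dispenser_fires(skin_reflectance: float) -> bool:
    return reflected_signal(skin_reflectance) > TRIGGER_THRESHOLD

# Darker skin reflects less of the emitted light back to the sensor.
for label, reflectance in [("lighter skin", 0.65), ("darker skin", 0.30)]:
    print(label, "->", "soap" if dispenser_fires(reflectance) else "nothing")
```

A threshold tuned only against high-reflectance hands silently excludes low-reflectance ones; testing across skin tones during development would have surfaced the failure.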

The video was shared globally, racking up hundreds of thousands of shares and plenty of outrage, and illustrating a basic truth along the way. For all the long-held narrative that technology propels us forward above all else, technology has a race gap.

The idea of technology failing in apps designed for fun is one thing, but look around and you’ll notice that this race gap feeds into most of the tech you use from day to day. If you suspect that the world is designed with white people in mind, nothing makes the point stronger than looking at the new wave of technology omitting people of color from its early development.

Most roads lead back to developers, and tech startups are trying to fill the gaps that the “pale male” developers of Silicon Valley and elsewhere have left wide open. That developers make things for themselves is hardly a new observation, but making technology that omits huge portions of the global population is, beyond anything else, bad business.

In 2017, there were reports (with accompanying video) about a Chinese boy known as Liu whose iPhone X’s facial recognition lock software seemingly couldn’t differentiate between him and his mum. The failure posed security risks based solely on issues of race.

It reminded me of attending a tech conference in Portugal that year, where a company named Furhat showcased what it called “the world’s most customizable ‘social robots.’” Asking the question of how to represent skin tones, however, was apparently beyond the extent of its customizable abilities, and Furhat admitted that the robots “had yet to be developed into different skin tones.”

Skin tone takes on a more sinister edge when applied to facial recognition software’s inability to rid itself of unconscious bias in the context of black faces and law enforcement. In May, when Representative Alexandria Ocasio-Cortez posed a line of questioning to the House Oversight Committee about Amazon and Immigration and Customs Enforcement (ICE), she proposed that “these algorithms are effective to different degrees.” She asked whether the algorithms were effective on women, people of color, people of different gender expressions. When the replies all came in negative, she asked a pointed question: Who are these algorithms mostly effective on? The answer was short: “Definitely white men.”

This year, perhaps most alarmingly, a report from Georgia Tech looked at self-driving cars and analyzed the effectiveness of various “machine vision” systems at recognizing pedestrians with different skin tones. The results showed that A.I. systems were better at identifying pedestrians with lighter skin tones than darker, and a white person was 10% more likely to be correctly identified as a pedestrian than a black person.
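The study’s headline figure comes from exactly the kind of per-group audit sketched below: compare detection recall across skin-tone groups. The records here are invented for illustration; the real audit ran labeled pedestrian images, binned by skin tone, through several detection models:

```python
# Per-group audit sketch: recall (share of real pedestrians detected)
# computed separately for each skin-tone group. Data is made up.

from collections import defaultdict

# (skin_tone_group, was_this_pedestrian_detected)
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", True), ("darker", False),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected

for group in totals:
    print(f"{group}: recall = {hits[group] / totals[group]:.0%}")
# The gap between the two recall figures is the disparity the study reports.
```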

Another recent report showed that wearable heart rate trackers, like the Fitbit, were less reliable for people of color because the tech uses a green light that is readily absorbed by melanin.
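The mechanism here is optical. These trackers use photoplethysmography: a green LED shines into the skin, and a photodiode watches for the tiny pulse-driven fluctuation in the light that comes back. A rough sketch of the effect, using a toy Beer-Lambert attenuation model with invented coefficients:

```python
import math

# Toy Beer-Lambert model: the light surviving a round trip through a
# pigmented layer falls off exponentially with melanin content.
# All coefficients below are illustrative assumptions.

PULSE_MODULATION = 0.02  # fraction of returned light that varies with pulse
SENSOR_NOISE = 0.005     # fixed noise floor of the photodiode (invented)

def surviving_light(melanin_index: float, absorption: float = 1.0) -> float:
    """Fraction of green light that makes it back through the skin."""
    return math.exp(-absorption * melanin_index)

for label, melanin in [("lighter skin", 0.5), ("darker skin", 2.0)]:
    signal = surviving_light(melanin) * PULSE_MODULATION
    print(f"{label}: pulse signal {signal:.4f}, SNR {signal / SENSOR_NOISE:.1f}")
```

As melanin absorbs more of the green light, the pulse-driven signal sinks toward the sensor’s noise floor and the reading becomes less reliable.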

There are countless examples that show just how modern these design omissions are. People of color historically have not had the world designed with them in mind, and as we innovate toward A.I. and automation, it appears that the future won’t be either. But while automated cars and facial recognition are a long way off from being globally scaled, there is, at this very moment, universal tech in our day-to-day lives sending a message about racial privilege from our pockets.

“The only thing I use frequently is Siri on my iPhone for basic things and sentences like ‘Take me home,’” says Julian Mendel over FaceTime. “But there have been times where it doesn’t work for me or understand what I’m saying.” He’s explaining his experience of the efficacy of Siri with a thick Jamaican accent (to a British ear) merged with an American twang, thanks to recent years lived in the United States after growing up in Jamaica.

“I think I do put on a different voice to speak to it, because there’s no way it would understand patois. I don’t think any such thing exists in Jamaica, and I’m pretty sure that’s a long way off. For a lot of people, it would be almost useless. There’s a ton of people who could probably afford the technology but can’t use it because in their daily life they speak patois.”

It’s not a new phenomenon in communities of color to adopt a clearer (read: “whiter”) phone voice in order to be understood, but the idea that people adopt linguistic affectations to speak to new tech is depressing, to say the least. After all, just how effective is technology that shrinks its own market based on accent?

Mendel is not alone. I spoke to English speakers with thick Indian, Korean, and Spanish accents who all said similar things. And this is all before we even consider other languages. So, are there solutions to this phenomenon that renders technology useless to chunks of the global population?
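Before solutions, measurement: the gap Mendel describes can be put in numbers by auditing word error rate (WER) separately for each accent group. The sketch below uses the open-source `jiwer` library with invented transcripts; a real audit would feed thousands of utterances through the system under test:

```python
# Accent-gap audit sketch: word error rate (WER) per accent group over an
# ASR system's transcripts. The sample transcripts below are invented.

from collections import defaultdict
from jiwer import wer

# (accent_group, reference_text, asr_hypothesis) -- illustrative only
samples = [
    ("us_english", "take me home", "take me home"),
    ("us_english", "call my brother", "call my brother"),
    ("jamaican", "take me home", "take the phone"),
    ("jamaican", "call my brother", "call my mother"),
]

refs, hyps = defaultdict(list), defaultdict(list)
for accent, ref, hyp in samples:
    refs[accent].append(ref)
    hyps[accent].append(hyp)

for accent in refs:
    print(f"{accent}: WER = {wer(refs[accent], hyps[accent]):.0%}")
# A system that is near-perfect for one accent group and error-prone for
# another has, in effect, shrunk its own market.
```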

In Toronto, on the stage at Collision, one of North America’s largest tech conferences, Katharina Borchert, chief innovation officer at Mozilla, is addressing a crowd of at least 200 people. She’s aiming to provide a solution to a gap in the market: voice data. Smartly dressed, with brunette hair, glasses, and a German twang to her accent, Borchert is explaining how the “pale, stale, male” monopolies of tech companies have been slow to serve what she calls people from “emerging markets.”

She flags something that many people already know: Voice recognition software does not serve you well if you have an accent or native language other than English. (Even dominant languages like Mandarin Chinese, she notes, are dominated by male voices.) She is an advocate of Common Voice, a project that aims to recruit your voice as a dataset to be used to better diversify A.I. speech. Click on Common Voice, and you are invited to donate your voice by recording phrases like “The houses are all made of wood,” or “Various anti-national phobias and prejudices operate with ethnic stereotypes.”
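The recordings Common Voice collects are released as open datasets. As a sketch of what using them might look like, the snippet below streams a few Welsh clips through the Hugging Face `datasets` library; the dataset id, version, and field names are assumptions that can change between releases, and the hosted copy requires accepting Mozilla’s terms first:

```python
# Sketch only: dataset id, version, and field names below are assumptions
# that may differ between Common Voice releases; the hosted copy also
# requires accepting Mozilla's terms before it can be downloaded.

from datasets import load_dataset

cv = load_dataset(
    "mozilla-foundation/common_voice_11_0",  # assumed hub id for a CV release
    "cy",                                    # Welsh, one language mentioned below
    split="train",
    streaming=True,                          # stream instead of downloading all
)

for sample in cv.take(3):
    # Each record pairs an audio clip with the prompt text the contributor
    # read, plus optional self-reported metadata such as accent.
    print(sample["sentence"], "| accent:", sample.get("accent"))
```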

“We realized that the ecosystem is very closed and locked down, because it’s the usual same big companies that own all the training data,” Borchert tells me over tea after she comes off stage. “Apple, Amazon, Google, Nuance, Microsoft. You can license their datasets, or you can work with Amazon Alexa skills, but it doesn’t really scale very well if you’re a mission-driven company like Mozilla who cares about the open web and creating an ecosystem of opportunity.

“Something we learned early on about companies that started years ago with voice recognition is that they often took datasets that came from public radio or things like that, because they didn’t have to worry so much about copyright issues. And those tended to be male, native speakers with really trained voices, so you had people articulating very clearly, because that was the largest part of the training data.

“That automatically led to a biased result, because that is all the machine has. There’s not a lot of female voices, and it doesn’t have people with crazy accents. That is why the early version had real issues understanding women, because it’s a different pitch. So, the larger the diversity of speakers, the greater the quality over the long haul.”
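A crude illustration of her point: if a corpus skews heavily toward one kind of speaker, the most basic corrective is to rebalance it so every group contributes equally to training. The group labels and counts below are invented:

```python
import random

random.seed(0)

# Invented corpus skewed toward trained male voices, echoing the
# public-radio provenance described above.
clips = (
    [{"group": "male_trained", "id": i} for i in range(900)]
    + [{"group": "female", "id": i} for i in range(80)]
    + [{"group": "accented", "id": i} for i in range(20)]
)

def rebalance(clips, per_group):
    """Resample each speaker group to the same size (with replacement,
    so small groups are oversampled rather than dropped)."""
    by_group = {}
    for clip in clips:
        by_group.setdefault(clip["group"], []).append(clip)
    balanced = []
    for members in by_group.values():
        balanced += random.choices(members, k=per_group)
    return balanced

balanced = rebalance(clips, per_group=100)
# Each group now contributes equally to what the model "hears", though
# oversampling 20 real voices is no substitute for collecting more.
```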

There are some concerns — the size of the dataset is based on the willingness of the speakers, and there are sensitivities around who is asking for it. Borchert recently worked with communities doing hackathons everywhere from Berlin to Rwanda, and there is a natural skepticism of white “innovation officers” asking East African locals to lend their voices.

“Of course, it’s much easier to find people who are willing to speak in English than in Kinyarwanda because of numbers,” she explains. “So that makes it easier to scale than others, but I think it really comes down to community activism and engagement.”

Borchert also recognizes people’s reticence to give up their voice data without knowing exactly where it will end up being used. (Borchert assures me that the terms and conditions act as a safeguard against unethical use, and that voice data is anonymized, preventing personal identification.)

“We don’t want to own all of that,” she emphasizes. “We don’t want to build all of that. I want us to be a catalyst for the crazy next wave of innovation of voice and speech as an interface. My mother has a strong German accent, and she can use the German versions of Amazon Echo, but she can’t use the English version, or Siri… My dad can’t talk to Siri in English.”

She argues that the data could be used not only in elderly care, but also, ideally, in mobile health, asking simple questions that you don’t have to type in. It’s clear how a project like this, aside from redressing the balance of voices, can also be appropriated as a tool of political activism. Borchert recalls a recent instance where activists and universities harvested Catalan- and Welsh-language voices for the project in a political climate where there are fears of those languages dying out.

Common Voice is a long and arduous project, but it provides an interesting corporate solution, one with the potential to radically change how huge portions of an aging global population engage with technology: you protest by literally lending your voice. While there must be obvious changes in companies’ structural makeup for us to see real change, the future must be redesigned with urgency. A future world will rely on tech woven into the fabric of everyday life.

For now, recognizing the gap is just the first step. It may be a bitter pill to swallow, but if you are a person of color in a thriving technological space with access to home systems and A.I., the world will still be just that bit more difficult to navigate. Putting on a “white” voice, presenting a white napkin under soap dispensers, being policed based on confused algorithms, losing the lottery on elderly care, and having less safety as a roadside pedestrian is what the future may look like for a while yet.

-By Kieran Yates

(This article was published on onezero.medium.com on August 8, 2019 and has been reproduced here in full.)

(Featured Image Credit: Steffi Loos/Getty Images)

