Financial regulators are investigating Apple’s new credit card for discriminating against women in what is just the latest example of bias in AI systems.
Artificial intelligence is the capability of a machine to imitate intelligent human behaviour. To imitate such behaviour, AI systems are trained on datasets, and just like humans, they are what they eat.
According to a report by Element AI, only 12 percent of AI researchers are women. This means that almost half of the human population is barely represented in the creation of such a life-changing technology. Similarly, according to MIT Technology Review, women account for only 18 percent of authors at leading AI conferences, 20 percent of AI professorships, and 15 percent and 10 percent of research staff at Facebook and Google, respectively.
On Friday, software developer David Heinemeier Hansson tweeted that the tech giant’s new credit card offers him “20x the credit limit she does,” referring to his wife.
The couple, said Hansson, files joint tax returns, lives in a community-property state and has been married for a long time.
In a long thread, Hansson explained that although Apple’s customer service manually raised his wife’s credit limit, its representatives had no idea how the algorithm reached its decision, nor could they do anything to permanently change it.
Other users reported similar problems, including Apple co-founder Steve Wozniak, who tweeted that he had received ten times the credit limit his wife did.
The same thing happened to us. I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets. Hard to get to a human for a correction though. It’s big tech in 2019.
— Steve Wozniak (@stevewoz) November 10, 2019
Shortly after the news came out, Linda A. Lacewell, Superintendent of the New York State Department of Financial Services, explained that the DFS would examine whether the algorithm “violates state laws that prohibit discrimination on the basis of sex.”
In a statement, Goldman Sachs, the banking giant that issues the Apple Card, said they “have not and will not make decisions based on factors like gender.”
— GS Bank Support (@gsbanksupport) November 11, 2019
Wozniak commented on the issue in an interview with Bloomberg and said, “These sorts of unfairnesses bother me and go against the principle of truth. We don’t have transparency on how these companies set these things up and operate. Our government isn’t strong enough on the issues of regulation. Consumers can only be represented by the government because the big corporations only represent themselves.”
Back in April, US Senators Cory Booker and Ron Wyden proposed a bill that introduces a framework requiring organisations to assess and “reasonably address in a timely manner” any biases found in their algorithms. The Algorithmic Accountability Act of 2019 is, however, still a draft, and has been criticised for “holding algorithms to different standards than humans, not considering the non-linear nature of software development, and targeting only large firms despite the equal potential for small firms to cause harm.”
To date, the examples of sexist bias in AI are more than anyone would like to admit, with Apple being just the latest in a long list.
Last year, Amazon had to scrap an AI recruitment tool that discriminated against female candidates. The system was trained on CVs submitted by applicants over a 10-year period, mostly men. Because of this data, the tool penalised any CV that contained a reference to the candidate being a woman.
Voice assistants – mostly “female”, i.e. Siri, Cortana, Alexa – have long been criticised for reinforcing gender stereotypes and portraying an idea of women as servants. The choice, according to different studies, leads back to human preference and historical bias: when people need help, they prefer to have it delivered by a female voice, while male voices are preferred for authoritative statements.
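The mechanism behind this failure is simple enough to sketch. The toy data and scoring rule below are purely illustrative, not Amazon's actual system: when past hiring decisions favoured men, any token correlated with female applicants (such as “women’s”) ends up weighted against the candidate.

```python
from collections import Counter

# Toy historical data: (CV text, hired outcome). Because most past
# hires were men, tokens correlated with female applicants appear
# mostly in rejections -- the model inherits that pattern.
history = [
    ("software engineer chess club", 1),
    ("software engineer football captain", 1),
    ("software engineer women's chess club", 0),
    ("data analyst women's coding society", 0),
    ("data analyst robotics team", 1),
]

hired = Counter()
rejected = Counter()
for cv, outcome in history:
    for token in cv.split():
        (hired if outcome else rejected)[token] += 1

def score(cv):
    """Naive score: how often each token appeared in hires vs rejections."""
    return sum(hired[t] - rejected[t] for t in cv.split())

# Two otherwise identical CVs differ only by one gendered token:
print(score("software engineer chess club"))          # -> 2
print(score("software engineer women's chess club"))  # -> 0
```

The model never sees a “gender” field; the bias rides in on a proxy word, which is exactly why it went unnoticed until audited.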
A female paediatrician in Cambridge found herself locked out of the women’s changing room at her gym because the algorithm assumed anyone with the title “Dr” was a man.
@PureGym dont understand the concept THAT WOMEN CAN BE DOCTORS.If you have the title of DR yr pin no ONLY LETS YOU IN THE MALE CHANGING ROOM
— Lou (@louselby) March 14, 2015
The issue, however, runs far deeper into our reality than the biased results of any single algorithm. Although technological innovation moves faster than ever, the share of women in STEM positions is growing far too slowly: on average, women hold only 28 percent of roles in the field, as opposed to 72 percent held by men.
Automation, according to a recent study by McKinsey, will only worsen the situation: between 40 and 160 million women may need to transition between occupations due to this new technology.
“We live in a time when human behaviour and skills have a direct impact on technology and vice versa – which means, it is crucial that both men and women play a fundamental role in the evolution of automation and digitisation,” said Shukri Eid, Managing Director of the East Region at Cisco Middle East. “The correlation between emerging technologies and the decline in women’s participation in the workforce largely depends on the automation of the roles traditionally held by women, and the skills gap which prevents women from innovating at work. This realisation presents new responsibilities for organisations when it comes to taking adequate steps for the future of inclusive work.”
Although the future looks direly biased as women find themselves excluded from AI development and suffer the results of automation, a recent study by UNESCO suggests it is not too late to right the wrongs of sexist AI systems as these are still in their infancy – but the clock is ticking.
“There is nothing predestined about technology reproducing existing gender biases or spawning the creation of new ones. A more gender-equal digital space is a distinct possibility, but to realize this future, women need to be involved in the inception and implementation of technology,” explains the study. “This, of course, requires the cultivation of advanced digital skills.”
For example, Eid tells us that Cisco established a foundational dialogue called “The Future of Fairness”, which they consider to be the fuel that strengthens the power of teams and accelerates participation and harmony in the workplace, and added, “As the nature of work transitions, organisations need to foster inclusion by internally building a culture of tolerance, education and openness about this cause.”
Similarly, Tatiana Labaki, Senior Manager – Revenue & Analytics at Emaar Hospitality Group, believes we will never reach the results we hope for if organisations don’t move quickly enough to solve the problem.
“Unless women become an equal player with equal influence and impact in the field, despite the numbers, AI will remain a reflection of a society and barriers we have long fought to break as women,” explained Labaki. “Numerous studies have proven that the unconscious bias towards women and how they are represented are strengthened by the scarcity of women leaders who are in decision-making positions in the Machine Learning field, yielding in technologies that represent assistants as females and change-making robots as males, unfortunately.”
Although we are making strides towards solving the issue (thankfully, Siri no longer responds “I’d blush if I could” when called a “b*tch”), there is still a long way to go before AI is fully representative of all groups and free of bias.
Josie Young, a feminist AI researcher, advocates for designing AI products and systems using ethical and feminist principles. In a TED Talk, she argues that “assigning a gender to a voice bot or chatbot is poor design” as this reinforces gender stereotypes that society has been trying to eradicate for the past 50 years. With this in mind, she has created a practical tool for teams to use when building a bot, prompting developers to question their own bias and training them to address these issues themselves in the future.
The aforementioned research by UNESCO offers a series of suggestions to improve the situation, including “performing ‘algorithmic audits’ to map and label the sources of gender bias in AI technology” and building more gender-equal teams in which women can assume leadership roles.
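In its simplest form, an algorithmic audit compares a model’s outcomes across demographic groups. The sketch below uses invented numbers echoing the Apple Card reports, and the 0.8 threshold is borrowed loosely from the “four-fifths” rule used in US employment law; neither is a prescribed standard for credit decisions.

```python
# Hypothetical audit data: approved credit limits by applicant gender.
decisions = [
    {"gender": "F", "limit": 5_000},
    {"gender": "M", "limit": 48_000},
    {"gender": "F", "limit": 6_500},
    {"gender": "M", "limit": 52_000},
]

def average_limit(group):
    """Mean approved limit for one demographic group."""
    limits = [d["limit"] for d in decisions if d["gender"] == group]
    return sum(limits) / len(limits)

# A ratio far below 1.0 means women receive much lower limits on average.
ratio = average_limit("F") / average_limit("M")
print(f"F/M average-limit ratio: {ratio:.3f}")

if ratio < 0.8:  # illustrative "four-fifths"-style threshold
    print("Potential disparate impact: review the model's inputs and training data")
```

A real audit would control for income, assets and credit history before flagging the model, but even this crude check would have surfaced the pattern Hansson and Wozniak described.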