Apple CEO Tim Cook presents the Apple Card during an event at Apple headquarters in Cupertino, California, March 25, 2019.
Noah Shepherd | AFP | Getty Images
When tech entrepreneur David Heinemeier Hansson recently took to Twitter saying the Apple Card gave him a credit limit 20 times higher than his wife's, even though she has a higher credit score, it may have been the first major headline about algorithmic bias you had come across in your daily life. It was not the first overall – there have been major stories about potential algorithmic bias in child welfare and insurance – and it will not be the last.
Hansson, the chief technology officer of project management software company Basecamp, was not the only tech figure to speak out about algorithmic bias and the Apple Card. In fact, Apple co-founder Steve Wozniak reported a similar experience. Presidential candidate Elizabeth Warren joined the fray, criticizing Apple and Goldman Sachs, and regulators said they were launching a probe.
Goldman Sachs, which administers the Apple Card, denied the allegations of gender-based algorithmic bias and said it would re-examine credit assessments on a case-by-case basis for applicants who believe the card's determination is unfair.
Goldman spokesperson Patrick Lenihan said algorithmic bias is an important issue, but the Apple Card is not an example of it. “Goldman Sachs has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining creditworthiness. There is no ‘black box,’” he said, referring to a term often used to describe algorithms. “For credit decisions we make, we can identify which factors from an individual’s credit report, issued by a credit bureau, or stated income contribute to the outcome. We welcome a discussion of this topic with policymakers and regulators.”
As AI and the algorithms underlying the technology become an ever larger part of daily life, it is important to learn more about how it works. One of the main claims made by tech companies that use algorithms in decisions like credit scoring is that algorithms are less biased than human beings. That argument is being applied in areas such as hiring: the state of California recently adopted a rule encouraging the development of more algorithm-based hiring tools to eliminate human bias from the hiring process. But it is far from scientifically proven that AI, which is built on code written by humans and on the data fed to it as a learning mechanism, will not reflect the existing biases of our world.
Here are key points about AI algorithms to keep in mind as you read future headlines.
1. AI is already widely used in key areas of life
As Hansson and his wife discovered, AI systems are becoming more and more common in areas that everyday people rely on.
The technology is being introduced not only into credit and hiring decisions, but also into insurance, mortgages and child welfare.
In 2016, Allegheny County, Pennsylvania, introduced a tool called the Allegheny Family Screening Tool. It is a predictive risk modeling tool used to help screen child welfare calls when concerns about child abuse are raised to the county's social services department.
The system collects data on each person in a referral and uses it to create an “overall family score.” That score predicts the likelihood of a future event.
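To make the mechanics concrete, here is a minimal, hypothetical sketch of how a predictive risk score of this general kind can be produced: a model trained on historical case data outputs a probability, which is then converted into a score band. The feature names, numbers and thresholds below are invented for illustration and are not taken from the Allegheny tool.

```python
# Hypothetical sketch of a predictive risk score (not the Allegheny tool).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented historical records: each row is [prior_referrals, household_size,
# years_in_system], and y marks whether a follow-up event occurred.
X = rng.integers(0, 10, size=(500, 3))
y = (X[:, 0] + rng.normal(0, 2, 500) > 6).astype(int)  # synthetic outcome

model = LogisticRegression().fit(X, y)

# Score a new referral: the model's probability is scaled to a 1-20 band,
# mirroring how such tools often report a screening score.
new_referral = np.array([[4, 3, 2]])
probability = model.predict_proba(new_referral)[0, 1]
score = int(np.ceil(probability * 20))
print(f"risk probability={probability:.2f}, screening score={score}/20")
```

The key point of the sketch is that the score is only as good as the historical records behind it: whatever patterns sit in the training data, fair or not, flow straight into the number a caseworker sees.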
The Allegheny tool faced backlash, but one conclusion was that it created “less bad bias.” Other places, including Los Angeles, have used similar technology in attempts to improve child welfare. It is an example of how AI systems are being used in ways that can affect people significantly, which is why it is important to understand how these systems can be flawed.
2. AI can be biased
Most AI is created through a process called machine learning, which teaches a computer something by feeding it thousands of pieces of data, so that it learns to draw information from the data set itself.
An example would be giving an AI system thousands of photos of dogs in order to teach it what a dog is. From there, the system should be able to look at a photo and decide whether or not it is a dog, based on that past data.
But what if the data you feed the system is 75% golden retrievers and 25% Dalmatians?
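A toy sketch of what that skew does, using made-up feature values rather than real image data: a simple classifier trained on a 75/25 split of breeds will lean toward the majority breed whenever examples are ambiguous.

```python
# Illustrative only: a toy "breed classifier" trained on a skewed data set.
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Fake 2-feature "images": 750 golden retrievers, 250 Dalmatians,
# drawn from overlapping distributions so some examples are ambiguous.
golden = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(750, 2))
dalmatian = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(250, 2))
X = np.vstack([golden, dalmatian])
y = np.array(["golden"] * 750 + ["dalmatian"] * 250)

model = KNeighborsClassifier(n_neighbors=15).fit(X, y)

# Score a balanced test set: errors fall disproportionately on the
# under-represented breed, because the training data skews the model.
test = np.vstack([rng.normal([0, 0], 1.0, (100, 2)),
                  rng.normal([1, 1], 1.0, (100, 2))])
print(Counter(model.predict(test)))  # predictions lean toward "golden"
```

Nothing in the algorithm is "prejudiced" against Dalmatians; the imbalance in the training data alone is enough to tilt its answers.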
Dr. Sarah Myers West, a postdoctoral researcher at the AI Now Institute, says these systems are built to reflect the data they are fed, and that data can be built on bias.
“These systems are trained on data that reflects our wider society,” said West. “Thus, AI is going to reflect and amplify past forms of inequality and discrimination.”
A real-world example: while a hiring process run by human managers can certainly be biased, there is still debate over whether algorithmic job application technology actually removes human bias. The AI's learning process could incorporate the biases of the data it is fed – for example, the resumes of top-performing candidates at leading companies.
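One way auditors quantify that kind of skew is the “four-fifths rule” from U.S. employment guidelines: a model's selection rate for any group should be at least 80% of the rate for the most-selected group. Below is a small, self-contained sketch; the group labels and counts are invented for illustration.

```python
# Illustrative check of the "four-fifths rule": a model's selection rate for
# one group should be at least 80% of the rate for the most-selected group.
selections = {
    # group: (candidates screened in by the model, total candidates)
    "group_a": (90, 200),   # invented numbers
    "group_b": (54, 200),
}

rates = {g: hired / total for g, (hired, total) in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "potential disparate impact"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {ratio:.2f} -> {flag}")
```

A check like this needs only the model's outputs, not its code – which matters given how closely guarded these algorithms are, as discussed below.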
3. People who program AI can be biased
The AI Now Institute has also found bias among the people who create AI systems. In an April 2019 study, it found that only 15% of the AI staff at Facebook are women, and only 4% of the company's total workforce is black. Google's workforce is even less diverse, with only 10% of its AI staff being women and 2.5% of its workers black.
Joy Buolamwini, an MIT computer scientist, discovered during her research on a project that would project digital masks onto a mirror that the generic facial recognition software she was using would detect her face only if she wore a white-colored mask.
She found the system could not detect the face of a black woman because the data set it had been trained on skewed heavily toward lighter-skinned faces.
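A minimal sketch of the kind of check that can catch this before deployment: tally how each group is represented in the training data. The labels and counts below are invented for illustration, not drawn from any real face data set.

```python
# Illustrative data-set audit: tally how each skin-type group is represented.
from collections import Counter

# Invented labels standing in for a face data set's metadata.
training_labels = (["lighter_skin"] * 940) + (["darker_skin"] * 60)

counts = Counter(training_labels)
total = sum(counts.values())

for group, n in counts.items():
    print(f"{group}: {n} images ({n / total:.0%} of training data)")
# A group this under-represented predicts poor accuracy for that group.
```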
“Very clearly, this is not a solved problem,” said West. “It is actually a very real problem that continues to resurface in AI systems on a weekly, almost daily, basis.”
4. Algorithms are not public information
AI algorithms are entirely owned by the companies that created them.
“Researchers face really significant challenges in understanding where there is algorithmic bias, because many of these systems are opaque,” said West.
Even if we could see them, that does not mean we would understand them, explains Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project and Shorenstein Fellow at Harvard University.
“It is hard to draw conclusions based on source code,” said Ghosh. “Apple's proprietary creditworthiness algorithm is something that Apple can't easily pin down and say, ‘OK, here is the code for that,’ because it probably involves many different data sources and many different implementations of code to analyze that data in different siloed areas of the company.”
Furthermore, companies like Apple write their code to be readable by their own employees, and it may not make sense to people outside the company.
5. There is limited government oversight of AI
Currently, there is little government oversight of AI systems.
“When AI systems are being used in areas of incredible social, political and economic importance, we have a stake in understanding how they are affecting our lives,” said West. “We currently don't have the avenues for the kind of transparency we would need for accountability.”
One presidential candidate is trying to change that. New Jersey Senator Cory Booker sponsored a bill earlier this year titled the Algorithmic Accountability Act.
The bill would require companies to examine flawed algorithms that could create unfair or discriminatory outcomes for Americans. Under the bill, the Federal Trade Commission would be able to create regulations to “conduct impact assessments of highly sensitive automated decision systems.” The requirement would apply to new and existing systems under the FTC's jurisdiction.
The description of the bill on Booker's website directly cites algorithmic failures by Facebook and Amazon in recent years.
Booker is not the first politician to call for better regulation of AI. In 2016, the Obama administration called for development in the algorithmic auditing industry and external testing of big data systems.
6. Algorithms can be audited, but it's not required
While government oversight is scarce, a growing practice is third-party auditing of algorithms.
The process involves an outside entity coming in and analyzing how an algorithm is made without revealing trade secrets – a big reason why algorithms are kept private.
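One common black-box approach is matched-pair testing: submit inputs that are identical except for one protected attribute and compare the model's outputs, no source code required. The sketch below uses an invented scoring function as a stand-in for a proprietary model; it is not any real lender's algorithm.

```python
# Illustrative black-box audit via matched pairs: probe a model we cannot
# inspect by varying only the protected attribute and comparing outputs.

def opaque_credit_model(income: int, credit_score: int, gender: str) -> int:
    """Stand-in for a proprietary model an auditor can only query."""
    limit = income * 0.2 + (credit_score - 600) * 10
    if gender == "female":          # the hidden flaw this audit should expose
        limit *= 0.5
    return int(limit)

applicant = {"income": 90_000, "credit_score": 720}

# Same applicant, differing only in the protected attribute.
for gender in ("male", "female"):
    limit = opaque_credit_model(**applicant, gender=gender)
    print(f"{gender}: credit limit ${limit:,}")
# A large gap between otherwise-identical applicants flags possible bias.
```

Because the auditor only queries the model and observes outputs, trade secrets stay protected, which is what makes this approach palatable to companies.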
Ghosh says this is happening more frequently, but not all of the time.
“It happens in cases where companies feel compelled, by public opinion or public outcry, to do something, because they don't want to be called out for not having done any audits,” said Ghosh.
Ghosh also said regulatory action can prompt audits, as seen in the many FTC investigations into Google and Facebook. “If a company is shown to have engaged in harmful discrimination, then you could have a regulatory agency step in and say, ‘Hey, we're either going to take you to court, or you're going to do X, Y and Z. Which do you want to do?’”


This story has been updated to include comment from Goldman Sachs saying that it has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining creditworthiness.