
As AI-driven computers make more decisions, there is growing concern about biased algorithms


May 30, 2021



Source: MSN

By Shara Tibken

Translation: W3Cschool

When the United States began distributing the COVID-19 vaccine late last year, an important question arose: who should be vaccinated first? Many medical institutions and health officials decided to prioritize staff in close contact with infected patients, including health care and security personnel. Stanford University, one of the country's top universities, built an algorithm to determine the order.

The only problem with letting a computer decide who gets vaccinated first was that its "very complex algorithm" (which turned out to be not so complex after all) was built on faulty assumptions and data. In other words, the algorithm prioritized medical staff of a certain age, regardless of whether older doctors actually saw patients regularly. Of the 5,000 doses in Stanford Medicine's first round of COVID-19 vaccines, only seven were allocated to front-line resident physicians. The vast majority went to senior faculty and doctors who worked from home or had little contact with COVID-19 patients. Stanford quickly scrapped the algorithm and vaccinated its front-line workers.
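To make the failure mode concrete, here is a minimal, purely hypothetical sketch, not Stanford's actual algorithm: if a priority score rewards age and seniority but has no term for direct patient contact, front-line residents sort to the bottom. The names, weights and data below are invented for illustration.

# Hypothetical illustration only -- not Stanford's actual algorithm.
# A priority score built from age and seniority, with no term for
# direct patient contact, ranks home-office faculty above residents.

def priority_score(person):
    score = person["age"]              # older staff score higher
    if person["senior_faculty"]:
        score += 50                    # assumed seniority bonus
    # Missing entirely: any measure of exposure to COVID-19 patients.
    return score

staff = [
    {"name": "Resident on the COVID-19 ward", "age": 29, "senior_faculty": False},
    {"name": "Senior faculty working from home", "age": 64, "senior_faculty": True},
]

for person in sorted(staff, key=priority_score, reverse=True):
    print(f"{person['name']}: {priority_score(person)}")

Running this ranks the work-from-home faculty member far above the resident on the COVID-19 ward, which is the pattern the article describes.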

Tim Morrison, director of Stanford University's outpatient care team, said in a video posted to Twitter in mid-December: "Our algorithm, which ethicists and infectious disease specialists worked on for weeks, used age, high-risk work environments, positivity rates in those environments... and so on, and it clearly did not work right."

Stanford University's vaccine debacle is just one of many examples of algorithmic bias, a problem that is becoming increasingly apparent as computer programs replace human decision makers. Algorithms are supposed to make decisions based on data, without emotional influence: decisions made faster, more fairly, and more accurately. But algorithms are not always built on ideal data, and that shortcoming is magnified in life-or-death decisions, such as allocating critical vaccines.

The impact is even broader because computers can determine whether someone gets a home loan, who gets hired, and how long a prisoner is held, according to a report released Tuesday by the Greenlining Institute, an Oakland, California, nonprofit dedicated to racial and economic justice. Debra Gore-Mann, Greenlining's chief executive, says algorithms often carry the same racial, gender and income-level biases as human decision makers.

"You're seeing these tools being used in criminal justice assessments, housing assessments, financial credit, education, job hunting," Gore Mann said in an interview. I t has become so common that most of us don't even know that some kind of automation and data assessment is taking place.

The Greenlining report examines how poorly designed algorithms threaten to exacerbate systemic racism, sexism, and prejudice against low-income people. Because the technology is created and trained by people, these algorithms, intentionally or not, reproduce patterns of discrimination and prejudice that people often don't realize are happening. Facial recognition is one technical area where racial bias has already been demonstrated. Fitness bands have struggled to accurately measure the heart rates of people of color.

"The same technology used for super-target global advertising is also being used to charge people different prices for products that are critical to economic health, such as mortgage insurance, and less important products such as shoes," said Vinhcent Lecent, Greening's technical equity counsel.

In another example, Greenlining flags an algorithm created by Optum Health that can be used to prioritize patients' medical care. One factor is how much a patient has spent on health care, on the assumption that the sickest patients spend the most. Used on its own, that parameter ignores the fact that people without much money sometimes have to choose between paying rent and paying medical bills, which can disproportionately hurt Black patients.
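Here is a small, hypothetical sketch of that proxy problem, not Optum's actual model: when past spending stands in for medical need, a patient who skipped care to pay rent ranks below a less sick patient who could afford regular care. The names and figures are made up for illustration.

# Hypothetical illustration only -- not Optum's actual model.
# Ranking by past spending, used as a proxy for medical need, puts the
# less sick but wealthier patient first.

patients = [
    {"name": "Patient A (could afford regular care)", "severity": 6, "past_spending": 12000},
    {"name": "Patient B (skipped care to pay rent)", "severity": 8, "past_spending": 3000},
]

by_spending = sorted(patients, key=lambda p: p["past_spending"], reverse=True)
for p in by_spending:
    print(f"{p['name']}: severity {p['severity']}, past spending ${p['past_spending']}")
# Patient A is prioritized even though Patient B is sicker.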

Optum Health says health care providers tested using the algorithm this way, but ultimately did not use it to determine care.

"There is no racial bias in this algorithm," Optum said in a statement. T he tool is designed to predict possible future costs based on an individual patient's past medical experience and not to lead to racial bias when used for this purpose -- a fact that the study authors agree with.

There is no easy solution

Greenlining offers governments and companies three ways to build better technology: implement algorithmic transparency and accountability, develop race-aware algorithms where meaningful, and actively include vulnerable groups when making algorithmic assumptions.

The responsibility for ensuring that this happens rests with legislators.

" (The significance of this report) is to build the political will to start regulating artificial intelligence," Le said. ”

In California, the state legislature is considering Assembly Bill 13, also known as the Automated Decision Systems Accountability Act of 2021. Introduced on December 7 and sponsored by Greenlining, it would require businesses using an "automated decision system" to test it for bias and for its impact on marginalized groups. If there is an impact, the organization must explain why the differential treatment is not illegal. "You can treat people differently, but it's illegal if it's based on protected characteristics, such as race, gender, and age," Le said.
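As a rough idea of what testing an automated decision system for bias and impact could look like in practice, here is a hypothetical sketch using the common "four-fifths" disparate-impact rule; it is not language from AB 13, and the groups and numbers are invented.

# Hypothetical sketch of a disparate-impact check -- not taken from AB 13.
# Flags any group whose approval rate falls below 80% of the best-off group's.

approvals = {
    "group_a": (620, 1000),   # (approved, applicants), made-up numbers
    "group_b": (410, 1000),
}

rates = {group: approved / total for group, (approved, total) in approvals.items()}
baseline = max(rates.values())

for group, rate in rates.items():
    ratio = rate / baseline
    status = "review for disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {status}")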

In April 2019, Democratic Senators Cory Booker of New Jersey and Ron Wyden of Oregon, along with Representative Yvette D. Clarke of New York, introduced the Algorithmic Accountability Act, which would require companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased, or discriminatory decisions. A month later, New Jersey introduced a similar algorithmic accountability bill. Neither bill made it out of committee.

Le said that if California's AB 13 passes, it would be the first law of its kind in the United States, but it could fail because it is too broad as currently written. Instead, Greenlining wants to narrow the bill's mandate to focus first on algorithms created by the government, in the hope that the bill would set an example for national efforts.

"Most of the problems with algorithms aren't because people are biased against goals," Le said. T hey're just taking shortcuts when developing these programs. A s far as the Stanford vaccine program is concerned, algorithm developers don't take the consequences into account.

Le added: "No one is really sure of everything that needs to change. But [we] do know that the current system doesn't handle AI well."