AI needs oversight, but people trust algorithms too much

Artificial intelligence [AI] is convenient, but it requires human oversight. Government reports and experts around the world emphasize that human decision makers must stay in the loop when AI is used.

"Human agency and oversight" is the first key requirement set out in a white paper on AI regulation published by the European Commission in early February. The UK government's Committee on Standards in Public Life has likewise recommended that oversight be established across the entire AI process. Most recently, Metropolitan Police Commissioner Cressida Dick reiterated her assurance that new technology would not displace police officers, and that a human being will always make the final decision.

In the United States, the Pentagon published ethical guidelines on the military use of AI in 2019. The document states that deploying an autonomous system always requires an "appropriate" level of human judgment.

It is right to encourage people to keep watch over decisions made by AI systems, especially in critical areas such as warfare and security. But in practice, can humans actually catch the flaws in AI systems?

Hannah Fry, associate professor in the mathematics of cities at University College London, says that won't be enough. Speaking at a London conference hosted by the UK-based tech company Fractal, Fry explained that human oversight of AI systems does not fully solve the problem, because it does little to overcome our innate flaws. According to Fry, humans place too much trust in AI systems, and that misplaced trust can have serious consequences.

"I can say with certainty that humans cannot be trusted. We are lazy and prone to cognitive shortcuts. Any of us can end up trusting a machine without meaning to."

For example, several years ago, three Japanese tourists traveling in Australia drove into the Pacific Ocean while trying to reach North Stradbroke Island. Their GPS system had not indicated that nine miles of water lay between the island and the mainland.

It is easy to laugh, but Fry said we are all far more like those Japanese tourists than we think. In that case, the main cost of over-relying on GPS was a damaged rental car; with self-driving cars, for example, overconfidence in technology becomes far more expensive.

Fry explained that humans are poor at paying attention, staying aware of their surroundings, and acting under pressure while driving. Yet, she pointed out, the basic premise of self-driving cars is that a human monitors the system, intervenes at the last, most dangerous moment, and performs at their best.

Humans overriding an autonomous vehicle's decisions at the critical moment? "That doesn't happen very often," Fry warned.

She does not reject algorithms outright. Quite the opposite: Fry is a self-described AI advocate who believes AI can deliver significant benefits in areas such as healthcare. But she says one simple rule should apply to every AI system: use an algorithm only if humans can shut it down when needed.

In a study published in 2018, the Pew Research Center surveyed about 1,000 technology experts and compiled their insights into the future of humans in the AI era. One of its main findings echoed Fry's concern: as humans grow more dependent on algorithms, they will ultimately lose the ability to think for themselves.

Fry's solution is a "human-centered" approach to developing new technology: in short, an approach that assumes humans are flawed. She proposed a "partnership" that combines the strengths of humans and machines, leaving room for humans to question an algorithm's decisions at every point.

One area where such partnerships look promising is healthcare. In diagnosing cancer, for example, doctors need the sensitivity not to miss signs of cancer, and the judgment to avoid overdiagnosis.

While human sensitivity is "rubbish" and algorithms are "super sensitive," Fry said, humans are far stronger on specificity [the probability that a test correctly comes back negative when the patient is not ill]. Combining the two skills, she concluded, could bring tremendous benefits to healthcare.
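Sensitivity and specificity are standard quantities computed from a diagnostic test's confusion matrix. As an illustration of the two terms (the example data and function are hypothetical, not from the article), a minimal sketch:

```python
def sensitivity_specificity(actual, predicted):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from parallel lists of 0/1 labels,
    where 1 = ill and 0 = healthy."""
    pairs = list(zip(actual, predicted))
    tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # ill, flagged
    fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # ill, missed
    tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # healthy, cleared
    fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # healthy, flagged
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening results: 4 ill patients, 6 healthy
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
sens, spec = sensitivity_specificity(actual, predicted)
print(sens, spec)  # 0.75 and roughly 0.667
```

A "super sensitive" screener drives the false-negative count toward zero (high sensitivity) but tends to raise false positives, which is exactly where a human's higher specificity can complement it.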

"This is the future I want," she said: one in which, when we deploy new technologies, we recognize that flaws exist not only in machines but also in humans.

This article originally appeared on CBS Interactive overseas and was edited by Asahi Interactive for Japan.
