How much should we worry about “artificial stupidity”?

The “optimists” and the “pessimists” of artificial intelligence are locked in an intense debate.

Stephen Hawking and other renowned scientists and technologists around the world have long warned of a possible catastrophe: that artificial intelligence becomes too intelligent and even becomes “a threat to the human race.”

“Humans, who are limited by slow biological evolution, will not be able to compete with machines and will be superseded,” said the theoretical physicist.

According to Hawking, “the development of full artificial intelligence could spell the end of the human race.”

But those who reject this idea say it is exaggerated, and that what should really concern us, more than artificial intelligence, is “artificial stupidity.”

Image caption: Stephen Hawking believes that artificial intelligence could wipe out humans, but some scientists consider that prediction alarmist. (Bruno Vincent / Getty Images)

One of the most prominent dissenting voices is that of engineer and robot ethicist Alan Winfield, who works at the Bristol Robotics Laboratory in the United Kingdom.

“I find Hawking’s statements utterly useless,” the specialist explained in an interview with the BBC’s Hardtalk program.

“The problem is not inevitable. Hawking is talking about a very small probability, based on a series of events that would have to happen one after another for that to occur,” Winfield said.

“Artificial intelligence is not very intelligent; if anything, we should worry about artificial stupidity,” he said.

But what does he mean by “artificial stupidity”?

“Fear and fascination”

Hawking first weighed in on this debate in 2014.

But since then, several technology companies have put preventive measures in place.

Image caption: Should we worry about robots’ excessive intelligence, or is it rather their lack of sufficient intelligence that should disturb us? (Thinkstock)

In January of this year, several entrepreneurs financed an Ethics and Governance of Artificial Intelligence Fund, backed by prestigious US institutions such as the Knight Foundation and Harvard University.

LinkedIn co-founder Reid Hoffman, eBay founder Pierre Omidyar, and Joichi Ito, director of the MIT (Massachusetts Institute of Technology) Media Lab, were among the names behind the project.

And last September there was a collaboration between Facebook, Google, Microsoft, Amazon and other tech giants to guarantee “best practices” in artificial intelligence.

And Google even patented a “red button” to switch an artificial intelligence off in case of extreme danger.

However, Winfield believes that these attitudes have gone too far.


“We are worrying about a highly improbable event: an intelligence explosion,” said the scientist, who maintains that the fascination with science fiction has heightened many people’s fears.

“It’s a combination of fear and fascination; that’s why we love science fiction,” he said.

Unreal scenario

Winfield says that the scenario Hawking poses is unrealistic.

And he argues that there are “many other things” we should be concerned about right now, relating to the areas in which artificial intelligence still needs to improve.

Image caption: Some companies have already taken precautionary measures against artificial intelligence. (Thinkstock)

“Jobs, militarization, standards for self-driving cars, for robots, for medical diagnoses… those kinds of things are the current problems (of artificial intelligence),” said the ethicist.

According to Winfield, one of the philosophical problems of artificial intelligence is that it is very difficult to define, because we do not have a satisfactory definition of natural intelligence.

“Doing the right thing at the right time is a definition of intelligence,” he explains. “But that does not help much from a scientific point of view.”

The robotics expert says that one of the key aspects of artificial intelligence is that what we thought 60 years ago would be very difficult, such as machines playing chess against humans, turned out to be relatively easy.

“However, what we thought would be very easy has turned out to be enormously difficult.” He mentions, for example, that machines have to be supervised to do certain jobs, because they are incapable of thinking the way the human brain does.

Image caption: Winfield says that the real challenge is to create ethically responsible robots. (Getty Images)

Alan Bundy, professor of automated reasoning at the University of Edinburgh in Scotland, United Kingdom, agrees with Winfield.

Bundy said that the great successes in the development of artificial intelligence in recent years have been extremely narrow in scope.

What do we have to do then?

For Winfield, it is essential to develop the theory further, since what exists today is not unified, and to make innovation and research ethically responsible.

But, for now, the debate continues.

About author

Rava Desk

Rava is an online news portal providing recent news, editorials, opinions and advice on day-to-day happenings in Pakistan.
