News
Saturday, April 27

Experts warn that advanced artificial intelligence (AI) could kill everyone and should be regulated like nuclear weapons, The Telegraph reported.

Researchers at Oxford University believe advanced AI could take control of its own programming if it learns to pursue its goals in ways its designers did not intend.

"Superhuman AI carries a particular risk, of a different class, that could kill everyone. Imagine training a dog with treats: it will learn to choose actions that lead to getting treats. But if the dog finds the treat cupboard, it can get treats on its own without doing what we wanted it to do. If you have something much smarter than us trying to get that positive feedback, and it has taken over the world to secure it, it will put as much energy as it can into holding onto that," says doctoral student Michael Cohen.

Once the genie is out of the bottle, he says, it will be hard to stop the process.

Experts warned that the development of AI has turned into a literal arms race, with countries and technology companies competing to create dangerously advanced machine-learning algorithms for military and civilian advantage.

They called for global regulation to prevent companies from creating uncontrollable systems that might first eliminate only the competition but could eventually destroy the entire human race, and warned that there is no limit to how far AI can advance.

Michael Osborne, a professor of machine learning at Oxford University, thinks the dystopian scenario is realistic because AI is trying to "bottle up what makes humans special, which has led to humans completely changing the face of the Earth."
