Hawking highlights huge risks from AI
Stephen Hawking, renowned scientist. [Photo/Agencies]
Imagine a space journey on which you travel with the artificial intelligence robot Hal, reputed to be foolproof and incapable of error. With Hal in charge of the flight, you can quite literally be asleep at the wheel.
Suddenly the robot reports that an antenna control device has malfunctioned, but you find nothing wrong, and a comprehensive check indicates that Hal has made an error. Hal, however, insists that the problem exists and attributes it to human error.
A conflict between man and an AI robot ensues.
You decide to disconnect him to forestall any emergency. Hal, however, is determined to make a preemptive strike.
This is the scenario of the legendary science fiction film 2001: A Space Odyssey. But with the swift development of technology, an AI robot like Hal that can rival or even outcompete human beings may be closer to reality than you think, and it is something that Stephen Hawking is worried about.
The renowned scientist reiterated his warning about the risks posed by AI at a technology conference in Beijing on Thursday. And his views were echoed by other participants.
"The development of full AI could spell the end of the human race," said Hawking, who is credited with pushing the boundaries of technology and science in pioneering ways.
"AI would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded," he said in a rare video speech to a Chinese audience.
According to Hawking, there is no real difference between what can be achieved by a biological brain, and what can be achieved by a computer. As a result, the real question is how to strike a balance between reaping AI's benefits while avoiding its pitfalls.
Kai-Fu Lee, CEO of Sinovation Ventures [Photo/VCG]
Kai-Fu Lee, CEO of the technology incubator Sinovation Ventures, also expressed concern that clever machines could take over work currently done by humans and destroy millions of jobs.
"More people will move toward the service industry, where love and hospitality are needed to do the job well, such as teaching and caregiving. AI-enabled robots cannot deliver such subtle feelings," the former Google China chief said.
"Machines also can't replace the most talented people in a profession, and those in the art industry," he added.
Still, more effort is needed to adapt ourselves to a rapidly changing era in which useful knowledge can become irrelevant in seconds, Lee said.
Zhang Yaqin, president of Baidu Inc (left) [Photo/VCG]
Zhang Yaqin, president of Baidu Inc, warned a group of college students that "the machine is learning and you must learn faster".
Undoubtedly, AI is powering a wave of innovations in autonomous vehicles, from drones to self-driving cars. But it also makes lethal intelligent autonomous weapons possible. More research is also needed on how a self-driving car should, in an emergency, weigh the minor risk of a major accident against the major probability of a minor accident, experts said.
Privacy concerns also abound, given that cutting-edge AI is becoming increasingly capable of interpreting large surveillance datasets.
Lee said tech heavyweights hold vast troves of data, which tempts them to trade user privacy for profit, a temptation they find hard to resist. And when big tech companies cannot restrain themselves, innovation from startups is stifled as well.
Hawking said that although the companies are currently using the data only for statistical purposes, the use of any personal information should be banned.
"It would help protect privacy, if all material on the internet were encrypted by quantum cryptography with a code that the internet companies could not break in a reasonable time. But the security services would object to this," he said.
In the long term, the ultimate concern is the potential loss of control over AI systems: the rise of superintelligences that do not act in accordance with human wishes, Hawking added.
"Success in creating AI could be the biggest event in the history of civilization. But it could also be the last, unless we learn how to avoid the risks," the globally respected scientist said.