"Science" post!25 top scientists call: In the rapid development of AI


Cailian Press, May 22 (Editor Zhou Ziyi) — In the face of artificial intelligence that may advance rapidly, the world's capacity to respond to the risks does not appear to be on track.

As the second AI Safety Summit (May 21-22) convened in Seoul, South Korea, 25 of the world's leading artificial intelligence scientists jointly published an expert consensus in the journal Science, arguing that the world is doing far too little to protect humanity from the risks the technology is creating.

In the paper, they outline urgent policy priorities that global leaders should adopt to address the threats posed by artificial intelligence.

Key recommendations

The paper's authors are 25 of the world's leading academic experts in artificial intelligence and its governance, including Geoffrey Hinton, Andrew Yao, Dawn Song, and Daniel Kahneman. They come from the United States, China, the European Union, the United Kingdom, and other AI powers, and include Turing Award winners, a Nobel laureate, and authors of the standard artificial intelligence textbook.

The scientists point out that world leaders must take seriously the possibility that highly powerful general artificial intelligence (AGI) systems, capable of surpassing humans in many critical domains, will be developed within the current or the next decade.

They also note that although governments around the world have been discussing frontier AI and have made some attempts to introduce preliminary guidelines, these efforts fall far short of what the rapid, transformative progress that many experts anticipate would require.

Philip Torr, a professor in the Department of Engineering Science at the University of Oxford, said, "At the last artificial intelligence summit, the world agreed that action was needed, but now it is time to move from vague proposals to concrete commitments."

The paper sets out a number of key recommendations for companies and governments to commit to, including:

Establishing fast-acting expert bodies to oversee artificial intelligence and providing them with far greater funding.

Mandating much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.

Requiring AI companies to prioritize safety and to demonstrate that their systems cannot cause harm, placing that responsibility on AI developers.

Implementing risk-mitigation standards commensurate with the level of risk posed by an AI system. Appropriate policies should automatically trigger stricter requirements when AI reaches certain capability milestones, and relax those requirements accordingly if progress slows.

It is worth noting that at the summit on Tuesday (May 21) local time, Microsoft, Amazon, OpenAI, and other technology giants reached a landmark international agreement.

Under the agreement, the companies made a series of commitments, including publishing safety frameworks that set out potential risks; if such an extreme situation arises and the risk cannot be contained, the companies will activate an "emergency stop switch" and halt development of the AI model in question.

Governments to lead on oversight

According to the authors, governments must be prepared to take the lead in regulating exceptionally capable future AI systems. This includes licensing the development of such systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures strong enough to withstand state-level hackers.

Stuart Russell OBE, professor of computer science at the University of California, Berkeley, and author of the world's standard artificial intelligence textbook, said, "This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry."
