The "Godfather of Artificial Intelligence" resigned from Google and regretted the development of AI, sounding the alarm for the world?

In his tenth year at Google, just as deep learning was undergoing explosive growth, Geoffrey Hinton, widely known as the "Godfather of Artificial Intelligence", resigned from the company in order to warn the public that AI has become very dangerous.

According to public information, Geoffrey Hinton joined Google in 2013 and served as a vice president. He has studied machine learning algorithms for decades and brought deep learning into many of Google's businesses, directly helping Google become one of the most outstanding companies in AI technology. In 2019 he received the Turing Award, the highest honor in computing, and he has personally trained many machine learning researchers, including Ilya Sutskever, co-founder and chief scientist of OpenAI. In 2016, Hinton was involved in the AlphaGo project, which became famous for defeating the human world champion at the game of Go.

In a tweet on May 1, Hinton confirmed his departure from Google, clarifying: "...I actually left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly."

It is reported that Hinton has the following concerns about artificial intelligence. First, with the support of AI technology, the internet will soon be flooded with all kinds of false information. Fake photos, fake videos, and fake news will spread to the point that many people "can no longer know what is real". Second, AI will replace many jobs, causing unemployment to surge. As Hinton put it, "AI takes away the drudge work, but it might take away more than that." IBM announced yesterday that it will pause hiring for positions that AI could perform and may replace about 7,800 jobs with AI.

Third, artificial intelligence may pose a threat to humans in the future, because AI systems have learned unexpected behaviors from vast amounts of data. With this kind of continuous training, "autonomous killer robots" could very well become a reality. Fourth, the competition between Google, Microsoft, and other large companies will soon escalate into a global race. Without some form of regulation, this competition will not stop, and the pace of AI development will far exceed our imagination and eventually spiral out of control.

The speed of AI development has far exceeded the expectations of many experts, including Hinton himself. As Hinton said, "Many people feel that this is still far away. I used to think so too; I thought it would take at least 30 to 50 years to get there. But given what I know now, I no longer think so."

Had these words come from someone else, many people might still dismiss such views as alarmist. But when Geoffrey Hinton, one of the world's most authoritative experts in artificial intelligence and a founder of deep learning, resigns from Google to warn the world, it is bound to prompt more people to think seriously about and pay attention to this issue.

It is understood that some AI investors and researchers have shared Geoffrey Hinton's interview, calling on academia, industry, and government regulators to act quickly and establish regulatory frameworks and rules for AI applications as soon as possible, while the technology is still within a controllable range.

Regulators, lawmakers, and tech industry executives have repeatedly expressed concern about the development of artificial intelligence in recent months. More than 2,600 tech executives and researchers have signed an open letter urging a moratorium on AI development, citing "profound risks to society and humanity". Twelve EU lawmakers signed a similar letter in April; a recent draft EU bill classifies AI tools according to their level of risk; and the U.K. has allocated $125 million to support a working group developing "safe artificial intelligence".

Summary

At present, the tech giants seem more concerned with making AI ever more powerful than with how it can fit into society and its environment. In the face of these new advances in artificial intelligence, current governance models and measures have not kept pace. Introducing even more powerful AI at this point is likely to create more serious hidden risks.

Therefore, AI research should not be rapidly scaled up until we understand it well enough to control it. It is equally important to establish credible, open, and transparent mechanisms to ensure that AI is properly regulated and controlled, so that its potential can be realized and its negative effects mitigated.

Origin blog.csdn.net/LinkFocus/article/details/130465196