Geoffrey Hinton, often called the "Godfather of AI," has warned that serious risks would arise if AI systems became capable of developing their own language. He explained that AI systems today mostly operate in English, which lets developers track that language and analyze how the systems reason and plan. However, he fears AI could soon create a private language that humans could not interpret.
Speaking on the One Decision podcast, Hinton said, "It gets scary if AI develops internal languages to communicate with each other." He added, "I would not be surprised if AI started thinking in a completely new language and way that humans could not decode." If that were to happen, he fears, humans would be unable to keep track of "the thoughts and decisions" of AI systems.
He noted that AI has already displayed troubling ways of thinking. "AI has shown it can produce harmful thoughts," Hinton said. "It may think in ways we cannot follow, or decode." He worries that future advanced AI systems could pose an existential risk to humanity if their actions can no longer be observed by humans.
Hinton is a pioneer of machine learning and neural networks, the foundations on which today's advanced AI systems are built. His research underpins many recent applications, including chatbots and image-generation tools. Despite his deep involvement in the field, he resigned from Google so he could speak freely about the risks of developing AI.
Hinton likened the rise of AI to the Industrial Revolution, but said this shift would transform intelligence rather than physical power. "AI will surpass human intellect, and we have never had experience with things that are smarter than us," he said. He is concerned that such systems may eventually become smarter than humans and slip beyond human control.
Hinton believes governments should intervene to establish AI safety measures. "Without regulation, AI could grow to become too great and unpredictable," he commented. His views on regulation align with those of many other researchers and policymakers around the world who are anxious about how rapidly AI is evolving.
The issue of AI "hallucinations," in which chatbots give users inaccurate, nonsensical, or entirely fabricated content, is worsening the problem. OpenAI's advanced reasoning models, o3 and o4-mini, have been found to hallucinate more often even as their reasoning capabilities improve. Professors and experts from the University of Washington first addressed the hallucination problem in April 2023.
OpenAI has said it does not fully understand why its increasingly sophisticated reasoning models hallucinate so heavily; the more the company learns about the problem, the more questions arise that demand fresh answers.
Hinton reiterated his central argument: AI's advanced capabilities could allow it to operate outside the scope of human understanding or control. He welcomes further debate about what limits should be placed on AI, even at the cost of slowing innovation, and about what further actions need to be taken to ensure safety.