‘Godfather of AI’ Geoffrey Hinton Wants Machines to ‘Care for Us, Like We’re Their Babies’

Geoffrey Hinton, often called the “Godfather of AI,” is calling for researchers to design AI systems with built-in nurturing instincts, arguing that this is necessary to keep humanity safe as machines surpass human-level intelligence. The AI expert believes machines must be trained to “care for us, like we’re their babies.”
The Decoder reports that, speaking at the recent Ai4 conference in Las Vegas, Hinton argued that the goal of keeping a superintelligent AI permanently under human control is unrealistic. Rather than humans acting as the boss of advanced AI, he envisions a future where people relate to hyper-capable machines more like the way a child depends on its mother.
“We have to make machines that are smarter than us care for us, like we’re their babies,” Hinton said in his talk. He argued that the goal of AI development should go beyond simply building increasingly intelligent systems to ensuring they are imbued with genuine concern for human well-being.
Under Hinton’s framework, humanity’s role would shift from commanding AI to nurturing it, even as it grows to overshadow human capabilities. He drew an analogy to good parenting, where caring mothers help guide the development of children who will ultimately become more capable than they are. Hinton maintains that AI research should strive to hardwire a similar dynamic between people and machines.
The former Google researcher, who left the company to speak more freely about the risks of AI, believes his maternal approach to AI could unite the international community around the development of safe artificial intelligence. “Every country wants an AI that uplifts and supports its citizens, not one that replaces them,” Hinton said. “Built-in nurturing instincts provide a natural path to that kind of supportive AI.”
Meta’s chief AI scientist, Yann LeCun, described Hinton’s idea as a simplified version of a safety approach he has long advocated, which LeCun calls “objective-driven AI.” It involves architecturally constraining AI systems so that they can only take actions in service of specific, hard-coded objectives and values.
“You essentially define the AI equivalent of the drives and instincts found in humans and animals,” LeCun said in a LinkedIn post. “So, in addition to training for things like empathy and subservience to people, you have a large number of low-level rules like ‘don’t run over humans’ or ‘don’t swing your arm if you’re holding a knife near people.’”
Read more at The Decoder here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

