Before modern robots, a 1962 B-movie and 1990s MIT research reshaped machine intelligence


Long before killer cyborgs stalked Sarah Connor or sentries patrolled dystopian skies, the low-budget 1962 film The Creation of the Humanoids (which can be found on YouTube) posed a worrying question that seems even more relevant today: what if machines not only served humanity, but replaced it?
Set in a post-nuclear world, the film imagines a society dependent on robots. A scientist develops a “thalamic transplant”, transferring human memories into synthetic bodies connected to a “huge central computer”.
But centralization of knowledge alone is not enough. It is only when machines gain a lived sensory experience that they begin to transcend their programming and threaten humanity.
The low-budget production explored surprisingly modern ideas: memory transfer, synthetic embodiment, centralized computing, and machine self-replication. These themes would be taken up decades later by more famous film franchises, but at the time they were strikingly original.
More than thirty years after the film’s release, Popular Mechanics revisited its ideas in a July 1995 article that shared its title. The article referenced the film and then explored what was happening with robots in the real world at the time, focusing on work in the robotics lab of MIT researcher Rodney Brooks.
Although framed around The Creation of the Humanoids, the article actually opened with a different, better-known cinematic reference: “In ‘2001: A Space Odyssey,’ an artificial intelligence named HAL controlled the spacecraft Discovery bound for Jupiter.”
If you’ve seen the movie, you may remember that HAL became operational on January 12, 1992 (it was 1997 in Arthur C. Clarke’s source novel). When that date passed in the real “HAL-free” world, Brooks decided to pursue a different vision of artificial intelligence.
“Instead of imbuing a spaceship with a human soul,” the magazine wrote, Brooks was determined to “put the mind of a human in the body of a robot.”
This robot was Cog.
Not a GOFAI
By mid-1995, Cog was both ambitious and unimpressive: “a collection of computer chips, motors, joints, rods, cables, wires, and video cameras hung together on a black anodized aluminum frame,” as Popular Mechanics described it. It had a head, neck, shoulders, chest, and waist, but no legs, skin, or fingers.
Philosophically, however, it represented a departure from what researchers at the time called GOFAI – Good Old-Fashioned Artificial Intelligence – the “brain in a box” model exemplified by systems like Deep Thought.
In GOFAI, intelligence meant building a complete internal representation of the world and reasoning about it.
Brooks disagreed.
“The idea is that the complexity of the world occurs in the world, not in the creature,” he said. Rather than building massive internal maps, Cog relied on “parallel behaviors” – simple routines driven by sensors working together. Insect-like robots built in Brooks’ lab had used the same principle to navigate obstacles without a blueprint.
“Cog represents the same principle,” Brooks explained. “We just skipped a few evolution levels.”
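The layered, sensor-driven arbitration Brooks describes can be sketched roughly like this. This is an illustrative toy in Python, not Cog’s actual code; the sensor names and the three behaviors are invented for the example:

```python
# Sketch of subsumption-style "parallel behaviors" (illustrative only).
# Each behavior reads the sensors and either returns a command or defers;
# behaviors earlier in the list suppress those below them.

def avoid(sensors):
    # Highest priority: back away when an obstacle is too close.
    if sensors["range_cm"] < 20:
        return "reverse"
    return None

def seek_light(sensors):
    # Middle priority: turn toward the brighter side.
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    # Lowest priority: default action when nothing else fires.
    return "forward"

BEHAVIORS = [avoid, seek_light, wander]  # ordered by priority

def arbitrate(sensors):
    """Return the command of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(arbitrate({"range_cm": 10, "light_left": 5, "light_right": 9}))  # reverse
print(arbitrate({"range_cm": 80, "light_left": 5, "light_right": 9}))  # turn_right
```

The point is that no behavior consults a world model; each reacts directly to sensor values, and the complexity of the resulting behavior comes from the world itself.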
Alan Turing’s thought experiment
Where the film imagined intelligence emanating from a central machine, Brooks bet on embodiment.
He was inspired by Alan Turing’s 1950 thought experiment. Popular Mechanics quoted him: “Turing argued that one should make a robot like a human and let it wander the countryside and experiment with what humans do.” Brooks added: “Putting it all together – and not entirely without inspiration from Star Trek’s Commander Data – I decided to build a human.”
As limited as Cog was, it was built around forward-looking ideas. “Each of Cog’s eyes has a wide-angle camera and a narrow-field camera, and each camera can pan and tilt.”
It had to “learn to relate what it sees in the camera to its own head movements.” This developmental framework – learning like an infant – anticipated modern self-supervised learning approaches, in which robots construct representations through exploration rather than explicit programming.
Brooks also speculated about how the machine would feel, describing ways to control “the amount of current flowing through Cog’s motors” to simulate fatigue or pain, and proposing future “skin with sensors so it can learn by touch.”
Three decades later, touch-sensing networks and compliant actuators have become the standard in collaborative robots.
Today’s most powerful AI systems are trained in vast data centers, with their “brains” spread across racks of GPUs rather than patch cables in a lab.
Large language models and multimodal systems undergo enormous centralized training runs before being deployed to countless devices. In robotics, cloud systems let machines offload computation to remote servers – an echo of that fictional “huge central computer.”
Brooks’ emphasis on embodiment has proven enduring, with modern humanoids from Boston Dynamics, Tesla, Figure, and Agility Robotics relying heavily on real-time sensor fusion.
Ordinary cameras, force sensors, and encoders feed neural networks that learn through physical interaction. Policies trained with reinforcement learning in simulation are refined in the physical world, and the world remains its own best model.
Run before you can walk
Although Cog was given arms, legs – and walking – were considered too difficult at the time. Popular Mechanics noted, however, that another MIT robotics researcher, Marc Raibert, had taken on the challenge with his own walking and balancing robots. “We believe that running is easier than walking,” he said. “That’s why one of our mottos is: ‘You have to run before you can walk.’”
Today’s bipeds can run, jump, and recover from a push, although large-scale autonomy remains limited; many humanoids still operate under human supervision.
The anxieties that The Creation of the Humanoids captured – replacement, identity, loss of control – persist today. But rather than robots supplanting biological humanity, the emphasis is on cognitive displacement: algorithms write, design, diagnose, and compose.
The “huge central computer” is now a cloud AI service. The “thalamic transplant” is training data drawn from billions of human artifacts.
Brooks once described his goal as “bridging the gap between HAL’s boxed brain and Data’s embodied, quasi-human mind.” He conceded: “The end result will be, yes, like Commander Data. But it’s still a long way off.”
That distance has narrowed over the years, but it has not disappeared. Humanoids can navigate warehouses, and AI systems can converse and generate images. Yet none really wanders the countryside and experiments with what humans do, in the open-ended way Turing imagined.
More than sixty years after The Creation of the Humanoids issued its warning, we are still exploring the space between centralized intelligence and lived experience, without having reached either extreme.




