Videos: Musculoskeletal Robot Dogs, Robot Snails, and More


Video Friday is your weekly selection of awesome robotics videos, collected by your friends on IEEE Spectrum robotics. We are also publishing a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
ICRA 2026: June 1-5, 2026, VIENNA
Enjoy today’s videos!
Suzumori Endo Lab, Science Tokyo has developed a musculoskeletal robot dog driven by fine McKibben artificial muscles. The robot mimics the flexible “hammock-like” structure of the canine shoulder to study the biomechanical functions of the dog’s musculoskeletal system.
[ Suzumori Endo Robotics Laboratory ]
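For background on the actuators involved: a McKibben muscle is a rubber bladder inside a braided sleeve that contracts and thickens when pressurized. A standard idealized static model (after Chou and Hannaford, and Tondu and Lopez) relates tension to pressure, contraction, and the resting braid angle. The short Python sketch below implements that textbook model with illustrative parameters; it is not the lab’s code, and real muscles deviate from it because of friction and end effects.

```python
import math

def mckibben_force(pressure_pa: float, contraction: float,
                   r0_m: float = 0.004,
                   theta0_rad: float = math.radians(23.0)) -> float:
    """Idealized static tension of a McKibben pneumatic muscle.

    pressure_pa -- gauge pressure inside the bladder, in pascals
    contraction -- (L0 - L) / L0, fraction of resting length (0 = at rest)
    r0_m        -- resting radius, in meters (illustrative value)
    theta0_rad  -- resting braid angle, in radians (illustrative value)
    """
    a = 3.0 / math.tan(theta0_rad) ** 2
    b = 1.0 / math.sin(theta0_rad) ** 2
    return math.pi * r0_m ** 2 * pressure_pa * (a * (1.0 - contraction) ** 2 - b)

# Tension is highest at rest and falls to zero as the muscle shortens,
# which is why antagonistic muscle pairs are used to drive a joint.
for eps in (0.0, 0.1, 0.2, 0.3):
    print(f"contraction {eps:.1f}: {mckibben_force(500e3, eps):7.1f} N")
```

At 5 bar with these illustrative dimensions, the model predicts roughly 250 N at rest, dropping steeply with contraction, which matches the qualitative behavior that makes these muscles attractive for biomimetic limbs.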
SNAIL HOLES!!!
[ Freeform Robotics ]
We present a system that transforms speech into physical objects using 3D generative AI and discrete robotic assembly. By leveraging natural language, the system makes design and manufacturing more accessible to people without expertise in 3D modeling or robotics programming.
[ MIT ]
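To make the “discrete robotic assembly” stage concrete, here is a toy Python sketch of the tail end of such a pipeline. None of this is MIT’s code: the speech-recognition and text-to-3D stages are faked by a hard-coded shape, and the resolution and ordering heuristic are illustrative. The idea shown is turning a continuous model into unit blocks and sequencing them so each block is placed on support.

```python
def voxelize_pyramid(height: int) -> set[tuple[int, int, int]]:
    """Stand-in for the speech -> text -> 3D-model stages: a step pyramid
    of unit blocks, as if a generative model had produced it."""
    voxels = set()
    for z in range(height):
        half = height - 1 - z          # layers shrink as we go up
        for x in range(-half, half + 1):
            for y in range(-half, half + 1):
                voxels.add((x, y, z))
    return voxels

def assembly_order(voxels: set[tuple[int, int, int]]) -> list[tuple[int, int, int]]:
    """Sequence blocks bottom-up, layer by layer, so each block rests on
    the build plate or on blocks already placed in the layer below."""
    return sorted(voxels, key=lambda v: (v[2], v[0], v[1]))

blocks = voxelize_pyramid(4)
plan = assembly_order(blocks)
print(f"{len(blocks)} blocks; first placements: {plan[:3]}")
```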
Discover the next generation of cutting-edge AI: a fully autonomous vision system designed for real-world robotics, automation, and intelligence. Learn how OAK 4 brings 3D computing, sensing, and perception together in a single device.
[ Luxonis ]
Thanks, Max!
Inspired by the winding tenacity of vines, engineers at MIT and Stanford University have developed a robotic gripper capable of snaking around and lifting a variety of objects, including a glass vase and a watermelon, offering a gentler approach than conventional gripper designs. A larger version of the tendril robot can also safely lift a person out of bed.
[ MIT ]
[ Paper ]
Thanks, Bram!
Autonomous driving is the ultimate challenge for AI in the physical world. At Waymo, we solve this problem by prioritizing demonstrably safe AI, where safety is at the heart of how we design our AI models and ecosystem from the ground up.
[ Waymo ]
Built by Texas A&M engineering students, this AI-powered robotic dog reinvents the way robots work in disaster zones. Designed to climb through rubble, avoid hazards, and make autonomous decisions in real time, the robot uses a personalized multimodal large language model (MLLM) combined with visual memory and voice commands to see, remember, and plan its next move like a first responder.
[ Texas A&M ]
In this audio clip, generated from data recorded by the SuperCam microphone aboard NASA’s Perseverance rover, the sound of an electrical discharge can be heard as a Martian dust devil passes overhead. The recording was collected on October 12, 2024, the 1,296th Martian day, or sol, of the Perseverance mission to the Red Planet.
[ NASA Jet Propulsion Laboratory ]
In this episode, we open the archives of host Hannah Fry’s visit to our California robotics lab. Filmed earlier this year, the visit shows Hannah interacting with a new set of robots, ones that don’t just see but also think, plan, and act. Watch the team go behind the scenes to test the limits of generalization, challenging robots to autonomously manipulate objects they have never seen before.
[ Google DeepMind ]
This GRASP Robotics Seminar features Parastoo Abtahi of Princeton University, presenting “When Robots Disappear – From Haptic Illusions in VR to Object-Oriented Interactions in AR.”
Advances in audiovisual rendering have led to the commercialization of virtual reality (VR); haptic technology, however, has not kept up with these advancements. While a variety of robotic systems aim to fill this gap by simulating the sensation of touch, many hardware limitations make realistic touch interactions in VR difficult. In my research, I explore how, by understanding human perception through the lens of sensorimotor control theory, we can design interactions that not only overcome the current limitations of robotic hardware for virtual reality but also extend our capabilities beyond what is possible in the physical world.
In the first part of this talk, I will present my work on redirection illusions that exploit the limits of human perception to improve the perceived performance of encountered-type haptic devices in VR, such as the positional accuracy of drones and the resolution of shape displays. In the second part, I will share how we apply these illusory interactions to physical spaces and use augmented reality (AR) to facilitate situated, two-way human-robot communication, linking users’ mental models and robotic representations.
[ University of Pennsylvania GRASP Laboratory ]


