Scientists taught an AI-powered ‘robot dog’ how to play badminton against humans — and it’s actually really good

Scientists have trained a four-legged robot to play badminton against a human opponent, and it dashes around the court to play rallies of up to 10 shots.
By combining whole-body movement with visual perception, the robot, called "ANYmal," learned to adapt the way it moved to the shuttlecock and successfully return it over the net, thanks to artificial intelligence (AI).
This shows that four-legged robots can serve as athletes in "complex and dynamic sports scenarios," the researchers wrote in a study published May 28 in the journal Science Robotics.
ANYmal is a dog-shaped, four-legged robot that weighs 110 pounds (50 kilograms) and stands approximately 1.5 feet (0.5 meters) tall. Having four legs allows ANYmal and similar quadruped robots to move over difficult terrain and climb up and down obstacles.
Researchers have previously added arms to these dog-shaped machines and taught them to retrieve specific objects or open doors by grasping the handle. But coordinating limb control and visual perception in a dynamic environment remains a challenge in robotics.
"Sports is a good application for this kind of research because you can gradually increase the competitiveness or difficulty," study co-author Yuntao Ma, a robotics researcher formerly at ETH Zürich and now with the startup Light Robotics, told Live Science.
Teaching a new dog new tricks
In this research, Ma and his team attached a dynamic arm holding a badminton racket at a 45-degree angle to the standard ANYmal robot.
With the arm added, the robot stood 5 feet, 3 inches (1.6 m) tall and had 18 joints: three in each of its four legs and six in the arm. The researchers designed an integrated control system that coordinated the movements of the arm and legs.
The team also added a stereo camera, which has two lenses stacked on top of each other, just right of center on the front of the robot's body. The two lenses allowed the robot to process visual information about incoming shuttlecocks in real time and determine where they were heading.
The robot then learned to become a badminton player through reinforcement learning. With this type of machine learning, the robot explored its environment and used trial and error to learn to spot and track the shuttlecock, navigate toward it and swing the racket.
To do this, the researchers first created a simulated environment consisting of a badminton court with the robot's virtual counterpart at its center. Virtual shuttlecocks were served near the center of the opponent's half of the court, and the robot was tasked with tracking their position and estimating their flight trajectory.
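The study doesn't detail the estimator itself, but the idea of predicting where a shuttlecock will land can be sketched with a purely ballistic model (a strong simplification, since real shuttlecocks decelerate sharply from air drag; the function name and numbers below are hypothetical, not from the paper):

```python
def predict_landing(pos, vel, g=9.81, dt=0.002):
    """Integrate a simple ballistic model forward in time until the
    shuttle reaches ground height (z = 0); returns (x, y, time)."""
    x, y, z = pos
    vx, vy, vz = vel
    t = 0.0
    while z > 0.0:
        x += vx * dt
        y += vy * dt
        z += vz * dt
        vz -= g * dt  # gravity only; a real shuttle also slows from drag
        t += dt
    return x, y, t

# e.g. a shuttle 3 m up, moving toward the robot at 5 m/s
print(predict_landing((0.0, 2.0, 3.0), (0.0, -5.0, 1.0)))
```

Running such a predictor on every camera frame would refine the landing estimate as new stereo measurements arrive.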
Then, the researchers devised a strict training regimen to teach ANYmal how to hit the shuttlecocks, with a virtual coach rewarding the robot for a variety of characteristics, including racket position, racket-head angle and swing speed. Crucially, swing rewards were based on timing to encourage safe and well-timed strikes.
The shuttlecock could land anywhere on the court, so the robot was also rewarded for moving across the ground efficiently and for not accelerating unnecessarily. ANYmal's goal was to maximize its total reward across all trials.
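The article doesn't give the actual reward formula, but a toy sketch of how such a shaped reward might combine those ingredients looks like this (every weight and feature name below is a made-up illustration, not the paper's function):

```python
import math

def swing_reward(racket_pos, shuttle_pos, head_angle_err, swing_speed,
                 base_accel, time_to_impact):
    """Toy shaped reward: racket close to the shuttle, racket head
    well oriented, a fast swing only near the moment of impact, and
    a penalty for unnecessary base acceleration. All weights made up."""
    position_term = math.exp(-sum((a - b) ** 2
                                  for a, b in zip(racket_pos, shuttle_pos)))
    angle_term = 0.5 * math.exp(-head_angle_err ** 2)
    timing_gate = 1.0 if abs(time_to_impact) < 0.05 else 0.0  # time-based gating
    speed_term = timing_gate * min(swing_speed / 12.0, 1.0)   # 12 m/s cap from the article
    accel_penalty = 0.1 * base_accel ** 2
    return position_term + angle_term + speed_term - accel_penalty

# perfect contact exactly at the moment of impact
print(swing_reward((0, 0, 1.5), (0, 0, 1.5), 0.0, 12.0, 0.0, 0.0))  # 2.5
```

The timing gate echoes the article's point that swing rewards were time-based: a fast swing only pays off if it happens when the shuttle is actually there.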
After 50 million trials of this simulated training, the researchers had a neural network that could control the movement of all 18 joints to move toward and strike the shuttlecock.
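As a rough illustration of what such a controller is, here is a tiny policy network that maps an observation vector to 18 joint commands. The layer sizes, random weights and two-layer structure are placeholder assumptions, not the network from the study:

```python
import math
import random

random.seed(0)
OBS_DIM, HIDDEN, N_JOINTS = 10, 16, 18

# randomly initialized weights stand in for whatever training produced
w1 = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(OBS_DIM)]
w2 = [[random.gauss(0, 0.1) for _ in range(N_JOINTS)] for _ in range(HIDDEN)]

def policy(obs):
    """Map one observation vector (e.g. shuttle and joint state) to
    18 joint commands, each squashed into [-1, 1] by tanh."""
    hidden = [math.tanh(sum(o * w for o, w in zip(obs, col)))
              for col in zip(*w1)]
    return [math.tanh(sum(h * w for h, w in zip(hidden, col)))
            for col in zip(*w2)]

commands = policy([0.5] * OBS_DIM)
print(len(commands))  # 18
```

In reinforcement learning, the weights of a network like this are what get adjusted over the millions of trials so that the commands it emits earn more reward.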
A quick learner
After the simulations, the scientists transferred the neural network to the robot, and ANYmal was tested in the real world.
Here, the robot was trained to find and track a bright orange shuttlecock served by another machine, which let the researchers control the speed, angle and landing spot of the shuttlecocks. ANYmal had to dash around the court to hit each shuttlecock at a speed that would send it back over the net and into the center of the court.
The researchers found that, after extensive training, the robot could track shuttlecocks and return them accurately with swing speeds of up to about 39 feet per second (12 meters per second), roughly half the swing speed of an average amateur badminton player, the researchers noted.
ANYmal also adjusted its movement patterns depending on how far it had to travel to the shuttlecock and how much time it had to get there. The robot didn't need to travel at all when the shuttlecock was set to land only about 1.6 feet (0.5 m) away, but at about 5 feet (1.5 m), ANYmal dashed to reach the shuttlecock, moving all four legs. At about 7 feet (2.2 m) away, the robot galloped toward the shuttlecock, producing a flight phase that extended the reach of its 3-foot (1 m) arm toward the target.
"Controlling the robot to look at will is not so trivial," Ma said. If the robot is looking at the shuttlecock, it can't move very fast. But if it doesn't look, it won't know where it needs to go. "This trade-off has to happen in a somewhat intelligent way," he said.
Ma was surprised by how well the robot figured out how to move its 18 joints in a coordinated way. This is a particularly difficult task because each joint's motor learns independently, yet the final movement requires them all to work in tandem.
The team also found that the robot spontaneously began returning to the center of the court after each hit, similar to how human players position themselves for incoming shuttlecocks.
However, the researchers noted that the robot didn't take its opponent's movements into account, which is an important way human players predict shuttlecock trajectories. Incorporating estimates of the human's pose would help improve ANYmal's performance, the team wrote in the study. They could also add a neck joint to let the robot track the shuttlecock for longer, Ma noted.
Ma thinks this research will ultimately have applications beyond sports. For example, it could support debris removal during disaster-relief efforts, he said, because the robot would be able to balance dynamic visual perception with agile movement.