Google set up two robotic arms for a game of infinite table tennis


In the early evening of June 22, 2010, American tennis star John Isner began a grueling Wimbledon match against Frenchman Nicolas Mahut that would become the longest in the history of the sport. The marathon battle lasted 11 hours and stretched across three consecutive days. Although Isner ultimately prevailed 70 to 68 in the fifth set, some onlookers wondered at the time whether the two men might be trapped on that court for eternity.

A similarly open-ended contest is currently underway just an hour’s drive south of the All England Club, at Google DeepMind. Known for its pioneering AI models that have surpassed the best human players at chess and Go, DeepMind now has a pair of robotic arms engaged in a kind of infinite game of table tennis. The goal of this ongoing research project, which began in 2022, is for the two robots to continuously learn from each other through competition. Just as Isner eventually adapted his game to beat Mahut, each robotic arm uses AI models to shift strategies and improve.

But unlike the Wimbledon example, there is no final score the robots can reach to end their slugfest. Instead, they compete indefinitely, trying to improve with every swing along the way. And though the robotic arms are easily beaten by advanced human players, they have been shown to dominate beginners. Against intermediate players, the robots win roughly half the time, placing them, according to the researchers, at “solidly amateur human-level performance.”

Meet our AI-powered robot that’s ready to play table tennis. 🤖🏓

It’s the first agent to achieve amateur human-level performance in this sport. Here’s how it works. 🧵 pic.twitter.com/axwbrqwyib

– Google DeepMind (@GoogleDeepMind) August 8, 2024

All of this, as two researchers involved noted this week in an IEEE Spectrum blog post, is underway in the hope of creating an advanced, general-purpose model that could serve as the “brain” of humanoid robots that may one day interact with people in real-world factories, homes, and beyond. Researchers at DeepMind and elsewhere hope that this learning method, if scaled up, could trigger a “ChatGPT moment” for robotics: the field’s rapid transformation from stumbling, clumsy hunks of metal into genuinely useful assistants.

“We are optimistic that continued research in this direction will lead to more capable, adaptable machines that can acquire the diverse skills needed to operate effectively and safely in our unstructured world,” a DeepMind senior staff engineer wrote in IEEE Spectrum.

Related: [Robots could now understand us better with some help from the web]

How DeepMind trained a table tennis robot

The initial inspiration for the racket-wielding robots came from a desire to find better, more scalable ways of training robots to perform many types of tasks. Although hulking humanoids like Boston Dynamics’ Atlas have been able to perform impressively acrobatic feats for the better part of a decade, many of those feats were scripted, the result of meticulous coding and fine-tuning by human engineers. That approach works for a tech demo or a narrow single-use case, but it falls short when designing a robot meant to operate alongside people in dynamic environments such as warehouses. In those settings, it is not enough for a robot to simply know how to load a box; it must also adapt to people and to an environment that constantly introduces new and unpredictable variables.

It turns out that table tennis is a fairly effective way to test for that unpredictability. The sport has been used as a benchmark in robotics research since the 1980s because it combines speed, responsiveness, and strategy all at once. To succeed, a player must master a range of skills. They need fine motor control and the perceptual ability to track the ball and intercept it, even when it arrives at varying speeds and with varying spin. At the same time, they must make strategic decisions about how to outmaneuver their opponent and when to take calculated risks. DeepMind researchers describe the game as a “constrained, yet highly dynamic environment.”

DeepMind started the project by using reinforcement learning (in which an AI is rewarded for making the right decision) to teach a robotic arm the basics of the sport. At first, the two arms were trained simply to engage in cooperative rallies, so neither had a reason to try to win points. Eventually, with fine-tuning by engineers, the team developed two robotic agents capable of sustaining long rallies.
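To give a sense of the reward loop that reinforcement learning relies on, here is a deliberately tiny sketch. It is purely illustrative and is not DeepMind’s actual system: a simulated agent picks among three hypothetical paddle actions (the action names and their success probabilities are invented for the example) and gradually learns which one earns the most reward.

```python
import random

random.seed(0)

ACTIONS = ["flat", "angled", "chop"]                   # hypothetical paddle actions
HIT_PROB = {"flat": 0.3, "angled": 0.8, "chop": 0.5}   # true odds, unknown to the agent

q = {a: 0.0 for a in ACTIONS}  # agent's estimated value of each action
alpha = 0.1                    # learning rate
epsilon = 0.1                  # exploration rate

for step in range(5000):
    # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Reward of 1 when the simulated shot lands, 0 otherwise.
    reward = 1.0 if random.random() < HIT_PROB[action] else 0.0
    # Nudge the value estimate toward the observed reward.
    q[action] += alpha * (reward - q[action])

best = max(q, key=q.get)
print(best)  # the agent settles on the action that is rewarded most often
```

The key idea mirrors the article’s description: no one tells the agent which swing is correct; it simply tries actions, receives rewards, and shifts its behavior toward whatever earned the most. DeepMind’s real setup learns over continuous motor commands and camera input rather than three discrete choices, which is vastly harder, but the feedback loop is the same in spirit.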

Learning from humans on the road to an infinite game

From there, the researchers adjusted the parameters and tasked the arms with trying to win points. The process, they wrote, quickly overwhelmed the still-inexperienced robots. The arms would take in new information during a point and learn new tactics, only to forget some of the moves they had made before. The result was a steady stream of short rallies, often ending with one robot slamming an unreturnable winner.

Interestingly, the robots showed a notable spike in improvement when they were tasked with playing against human opponents. Early on, humans of various skill levels were simply better at keeping the ball in play. That turned out to be crucial to improving the robots’ performance, because it exposed them to a greater variety of shots and playing styles to learn from. Over time, the two robots improved, not only becoming more consistent but also playing more sophisticated points, mixing defense, offense, and greater unpredictability. In total, the robots won 45 percent of the 29 games they played against humans, including beating intermediate-level players 55 percent of the time.

Since then, the now-veteran robots have gone back to facing off against each other. The researchers say they are constantly improving. Part of that progress has come through a new kind of AI coaching. DeepMind used Google’s Gemini vision-language model to watch videos of the robots playing and generate feedback on how to win points more effectively. Videos of “Coach Gemini” in action show a robotic arm adjusting its game in response to AI-generated commands like “hit the ball as far as possible” and “hit a shallow ball near the net.”

Longer rallies could one day lead to useful robots

The hope at DeepMind and other companies is that agents training against each other will help improve general-purpose AI software in a way that more closely resembles how humans learn to navigate the world around them. Although AI can easily outperform most humans at tasks such as basic coding or chess, even the most advanced robots struggle to walk with the stability of a toddler. Tasks that are intrinsically easy for humans, such as tying a shoe or typing a letter on a keyboard, pose monumental challenges for robots. This dilemma, known in the robotics community as Moravec’s paradox, remains one of the biggest obstacles to creating a Jetsons-style “Rosie” robot that could actually be useful around the house.

But there are early signs that these roadblocks may be starting to crumble. Last year, DeepMind finally managed to teach a robot to tie a shoe, a feat once considered years away. (Whether or not it tied the shoe well is another story.) This year, Boston Dynamics released a video showing its newer, lighter, autonomous Atlas robot adjusting in real time to mistakes it made while loading materials in a manufacturing simulation.

These may seem like baby steps, and they are, but researchers hope that generalized, versatile AI systems like the one the table tennis robots are helping to train could make such advances more frequent. Meanwhile, DeepMind’s robots will keep volleying away, continuing their endless fifth-set odyssey.


Mack DeGeurin is a technology journalist who has spent years investigating where technology and politics collide. His work has previously appeared in Gizmodo, Insider, New York Magazine, and Vice.

