Google’s DeepMind has just shown off its new robot, and it’s probably not what you expect. DeepMind, which previously developed superhuman AIs for games like Go and chess, has now turned its attention to table tennis. This robot isn’t good enough to beat the pros, but it can hold its own against skilled amateur players.
A robot to play ping pong
Table tennis is a sport played at a rapid pace where you constantly need to adapt to the spin, strength, and placement of your opponent. It’s a challenging arena for both humans and machines. For humans, reaching a competitive level typically requires years of dedicated training. For robots, the speed, precision, and adaptability required make it an ideal testbed for control algorithms, which must make fast, accurate decisions in real time.
Unlike other games such as chess or Go, where strategy is paramount and the physical component is absent, table tennis demands a combination of both. A player must not only strategize but also execute physically demanding movements with split-second timing. However, in terms of strategy, there’s a key difference.
“A strategically suboptimal — but confidently executable — low-level skill might be a better choice. This sets table tennis apart from purely strategic games such as Chess, Go, etc,” notes the DeepMind team.
The ping pong robot has a hierarchical and modular architecture. Its decision-making is divided into two levels: a low-level controller that manages specific physical actions, and a high-level controller that orchestrates those actions depending on the game context, the opponent, and other cues.
The low-level controller executes individual physical actions, such as a forehand topspin attack, a backhand cut, or any other shot in the robot’s arsenal. Each of these actions is quantified with detailed descriptors that outline its strengths and limitations. For instance, one skill may work well against a particular type of ball spin, while another may excel at countering faster balls. One shot may also carry a higher risk of going out than another.
This detailed analysis enables the robot to choose the most appropriate skill — but it needs to decide fast. Decisions are made by the high-level controller, which picks which move to use and when. It does this by analyzing how the point and the match are going, looking at both its own performance and the opponent’s behavior and style. It constantly updates its assessment of the opponent throughout the match, adapting to new challenges accordingly.
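To make the two-level idea concrete, here is a minimal sketch of how a high-level controller might score skill descriptors and pick a shot. All names, descriptor fields, and numbers here are illustrative assumptions, not details from DeepMind’s actual system:

```python
from dataclasses import dataclass

# Hypothetical skill descriptor: each low-level skill is annotated with
# its strengths and limitations, as the article describes.
@dataclass
class SkillDescriptor:
    name: str
    spin_effectiveness: dict   # estimated return rate vs. each incoming spin type
    max_ball_speed: float      # fastest incoming ball (m/s) the skill handles well
    out_risk: float            # estimated probability the return lands out

def choose_skill(skills, incoming_spin, incoming_speed, opponent_weakness_bonus):
    """Toy high-level controller: pick the skill with the best expected
    value for the current ball, penalizing risky shots and rewarding
    shots that exploit the opponent's observed weaknesses."""
    best, best_score = None, float("-inf")
    for s in skills:
        if incoming_speed > s.max_ball_speed:
            continue  # this skill cannot reliably handle such a fast ball
        score = (s.spin_effectiveness.get(incoming_spin, 0.0)
                 + opponent_weakness_bonus.get(s.name, 0.0)
                 - s.out_risk)
        if score > best_score:
            best, best_score = s, score
    return best

skills = [
    SkillDescriptor("forehand_topspin", {"underspin": 0.4, "topspin": 0.8}, 12.0, 0.15),
    SkillDescriptor("backhand_push",    {"underspin": 0.7, "topspin": 0.3},  8.0, 0.05),
]

# Against an underspin ball at 6 m/s with no known opponent weaknesses,
# the safer push scores higher (0.65 vs. 0.25).
pick = choose_skill(skills, "underspin", 6.0, {})
print(pick.name)  # prints "backhand_push"
```

The `opponent_weakness_bonus` dictionary stands in for the online opponent model the article mentions: as the match progresses, the high-level controller would update these bonuses to favor shots the opponent handles poorly.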
This adaptability is a key feature in competitive environments, where the robot cannot rely on pre-programmed responses to every possible scenario. Instead, it must analyze its opponent’s tactics on the fly and adjust its strategy accordingly. All of this must be calculated fast and in real-time, making it particularly challenging for the algorithm.
Putting it to the test
The robot was tested against 29 table tennis players of varying skill levels, from beginner to “advanced+” as determined by a professional table tennis coach. The results were promising: the robot won 45% of the matches and 46% of the games overall. It was particularly successful against beginners, winning all of its matches, and it also managed to defeat intermediate players in 55% of the games. However, against advanced players, the robot struggled, losing all of its matches.
These results indicate that the robot has reached a solid amateur level of play, capable of holding its own against most human opponents but still facing challenges against highly skilled players.
Perhaps more importantly, these games provided valuable data for further training, highlighting areas where the robot could improve. The most skilled human players noted that the robot struggled most with underspin. The researchers checked the robot’s performance data and confirmed that it made many mistakes when facing underspin, making this a key area to improve.
Humans enjoyed playing against the robot
Playing chess or Go against a robot can be notoriously frustrating — but against the table tennis bot, people actually had fun.
Advanced players, who were able to exploit some of the robot’s weaknesses, still found the matches enjoyable and saw the potential for the robot as a more dynamic practice partner than traditional ball-throwing machines. The robot’s ability to adapt to different playing styles and provide a challenging, yet fair, game was particularly appreciated.
Barney J. Reed, a professional table tennis coach who was involved in the project, said it was “truly awesome” to see the robot play against different players.
“Going in, our aim was to have the robot be at an intermediate level. Amazingly it did just that, all the hard work paid off. I feel the robot exceeded even my expectations. It was a true honor and pleasure to be a part of this research. I have learned so much and am very thankful for everyone I had the pleasure of working with on this.”
Not just about table tennis
The DeepMind scientists note that no prior research in robotic table tennis has addressed the challenge of a robot playing a full competitive game against previously unseen humans. Although it has a way to go before reaching Olympic-level play, this research marks a significant milestone on the journey toward human-level performance in robotics.
The potential applications of this technology are not limited to table tennis. The hierarchical and modular approach used in this study could be adapted to other sports or physical activities. Robots could serve as training partners or even competitors. Moreover, the ability to achieve human-level performance in dynamic, real-world environments could have significant implications for industries such as manufacturing, healthcare, and service robotics, where robots are increasingly being integrated.
Read the entire DeepMind study here.