Toyota’s newest humanoid, the CUE7, stunned a packed arena in Tokyo by rising from a seat, dribbling and sinking a free throw under its own control. That halftime show was really a live demonstration of a bigger shift: robots learning through experience. This article walks through the robot’s public debut, the engineering choices that set CUE7 apart from earlier models, how reinforcement learning changes the game, and why those same methods matter beyond sports. Expect clear descriptions of the motion systems, sensing suite, and the practical uses Toyota is exploring for adaptive robots. The takeaway is not the spectacle but the way embodied AI is moving from programmed motion toward machines that learn physical skills dynamically.
At Toyota Arena Tokyo, roughly 8,400 people watched a 7-foot-2 robot named CUE7 stand from a seat, dribble and make a free throw with no human commands. For most of the crowd it was a mix of awe and amusement, a vivid display of where robotics is headed when you mix high-performance hardware with modern AI. The scene looked like halftime theater, but for engineers it was a carefully staged test of learning-driven control in front of a noisy, unpredictable crowd.
The CUE program did not appear overnight. It began as a voluntary side project in 2017 and grew into an official research effort, with earlier iterations already claiming Guinness records for repeated free throws and long-distance shots. Those past successes came from meticulous human programming and clever mechanics, but Toyota’s team decided to rethink the software foundation rather than keep iterating the old way. That rethink led to a shift from handcrafted motion plans to systems that develop behavior through trial and error.
“We made full use of AI, and we discarded everything we had built up and started again from scratch,” said Tomohiro Nomi, research leader for humanoid robots at Toyota’s Frontier Research Center. This admission signals more than a marketing line; it describes a philosophical change where experience and adaptation replace step-by-step human scripting. The result is a machine that can refine its own motions instead of requiring engineers to specify every joint movement.
Earlier CUE versions relied on model predictive control: engineers defined the motions in advance, and the controller's job was to track that reference precisely. CUE7 layers reinforcement learning on top of that foundation, so the robot acts as an autonomous agent that experiments, observes outcomes and adjusts its approach over many attempts. That learning loop produces behaviors that are more resilient to unexpected conditions, like an uneven court surface or a stray bounce during a live show.
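That trial-and-error loop can be sketched in miniature. The snippet below is an illustrative toy, not Toyota's system: a single "policy" parameter (release force) is perturbed on each attempt, every shot is scored, and better-scoring perturbations are kept. The names, numbers and the hill-climbing update are all assumptions for illustration; real reinforcement learning on a full-body humanoid uses far richer policies, but the experiment-observe-adjust cycle has the same shape.

```python
import random

def shot_reward(force, ideal=7.1):
    """Toy environment: reward grows as the release force
    approaches an ideal value the agent never sees directly."""
    return -(force - ideal) ** 2

def train(episodes=500, seed=0):
    """Stochastic hill-climbing sketch of learning by trial and
    error: perturb the current policy, keep the perturbation if
    the shot scored better than the best attempt so far."""
    rng = random.Random(seed)
    force = 4.0                  # initial guess at release force
    best = shot_reward(force)
    for _ in range(episodes):
        trial = force + rng.gauss(0, 0.3)   # explore
        reward = shot_reward(trial)         # observe the outcome
        if reward > best:                   # adapt the policy
            force, best = trial, reward
    return force

learned_force = train()
```

After a few hundred simulated attempts the learned force sits close to the ideal value, without the "correct" motion ever being specified in advance.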
The team calls the control approach hybrid: reinforcement learning handles adaptation and higher-level decision making, while model predictive control supplies stability and fine-grained motion planning. Think of it as combining a smart, adaptive player who reads the environment with a steady technical coach that keeps the mechanics safe and consistent. That mix lets the robot juggle fluid tasks such as dribbling with precise, timed actions like releasing a shot.
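The division of labor can be sketched as two cooperating layers. Everything below is hypothetical, the class, the helper names and the numbers are illustrative stand-ins, not Toyota's code; it only shows the split the team describes: a learned policy chooses what to attempt, and a model-based tracking layer constrains how it is executed.

```python
from dataclasses import dataclass

@dataclass
class HighLevelAction:
    release_angle: float   # degrees
    release_speed: float   # metres per second

def learned_policy(observation):
    """Stand-in for the RL layer: maps sensed state (e.g. hoop
    distance from lidar and stereo camera) to a high-level action.
    The linear rule here is purely illustrative."""
    distance = observation["hoop_distance"]
    return HighLevelAction(release_angle=50.0,
                           release_speed=5.0 + 0.45 * distance)

def tracking_controller(action, max_speed=9.0):
    """Stand-in for the MPC layer: clamps the request to what the
    actuators can safely deliver; a real implementation would also
    plan joint torques over a short horizon to track it."""
    safe_speed = min(action.release_speed, max_speed)
    return {"angle": action.release_angle, "speed": safe_speed}

command = tracking_controller(learned_policy({"hoop_distance": 4.6}))
```

The design point is the separation itself: the adaptive layer can change with experience while the tracking layer keeps every executed motion within safe mechanical limits.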
On the hardware side, CUE7 is notably lighter and more streamlined than its predecessor, dropping from roughly 265 pounds to around 163 pounds by simplifying the structure and reducing the number of axes. Toyota also swapped a four-wheel base for a two-wheel configuration and tuned the actuation for faster, more humanlike movement, which helped when the robot rose from a seated position and drew audible reactions from the crowd. Sensors include lidar in the torso for spatial awareness and a stereo camera in the head for distance and angle calculations, all powered by high-performance batteries borrowed from racing technology.
Training used human motion data so CUE7 moves in ways that feel natural instead of mechanical, and that matters for fluid tasks that require coordination and balance. The robot measures distance to the hoop, computes trajectory and force, releases the ball, then updates its policy based on success or failure in that attempt. Over enough repetitions, it refines a surprisingly humanlike shot without having been told each motion in advance.
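The measure-compute-release step has a classical core. Under the simplifying assumptions of no air drag and no spin, the release speed needed to carry the ball a given horizontal distance to a hoop at a given height offset follows from projectile motion; the figures used below (free-throw distance, release height, launch angle) are illustrative assumptions, not Toyota's parameters. Learning then corrects for everything this idealized model leaves out.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def release_speed(distance, height_diff, angle_deg):
    """Release speed (m/s) for a drag-free ballistic arc through a
    point `distance` metres away and `height_diff` metres above the
    release point, launched at `angle_deg` degrees. Derived from
    y = x*tan(theta) - g*x^2 / (2*v^2*cos^2(theta))."""
    theta = math.radians(angle_deg)
    denom = 2 * math.cos(theta) ** 2 * (distance * math.tan(theta) - height_diff)
    if denom <= 0:
        raise ValueError("target unreachable at this launch angle")
    return math.sqrt(G * distance ** 2 / denom)

# Illustrative free throw: hoop 4.6 m away, rim about 0.5 m above
# the release point, 50-degree launch angle.
speed = release_speed(4.6, 0.5, 50.0)
```

In this sketch the physics supplies a first-cut answer of roughly 7 m/s, and the policy update described above would then nudge that answer attempt by attempt to absorb drag, spin and actuator imperfections.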
“We believe it is an exceptionally valuable opportunity to validate a reinforcement-learning-based robot in the inherently uncertain environment of a basketball arena,” Tomohiro Nomi, Head of Humanoid Robotics Research Unit, Frontier Research Center, Toyota Motor Corporation, told CyberGuy. “Moving forward, we will continue developing robots that inspire and bring joy to people.” Those lines make clear Toyota sees the arena not as spectacle alone but as a controlled stress test for embodied intelligence.
The practical implications go beyond applause. Basketball tests target identification, distance gauging, trajectory computation, coordinated motion and calibrated force control all at once, and those are the same challenges factory robots and advanced vehicles face in the real world. Toyota treats CUE7 as a research platform for vision systems, motion control and adaptive behavior that can transfer into manufacturing, automotive systems and assistive robots in homes and care settings.
Watching CUE7 shoot free throws is entertaining, but the deeper story is a technology shift: robots that learn physical skills through experience instead of strictly following human-coded scripts. As reinforcement learning and hybrid control mature, the line between programmed machines and adaptive, embodied AI will keep blurring, and applications spanning production floors to daily-life helpers stand to benefit from that change.
