Figure AI has just released the documentation and demos for its latest humanoid robot, Helix. Helix is built on a Vision-Language-Action (VLA) framework designed to let humanoid robots reason and operate with human-like capability. The approach aims to tackle the challenge of scaling robotics from controlled industrial environments to the unpredictable, varied settings of homes. Below is a breakdown of everything known about Helix so far.
Helix is the first VLA model to provide high-rate, continuous control over an entire humanoid upper body, including the torso, head, wrists, and individual fingers. With 35 degrees of freedom (DoF), it represents a significant leap in robotic dexterity and autonomy. Unlike traditional robotic systems that require extensive manual programming or thousands of task-specific demonstrations, Helix allows robots to perform complex, long-horizon tasks dynamically using natural language prompts. This capability is a major step toward making robots practical for home environments, where they must handle diverse objects and adapt to unpredictable situations.
Helix employs a dual-system architecture inspired by human cognitive models, particularly Daniel Kahneman’s “Thinking, Fast and Slow” framework:
System 2 serves as the “big brain” – a 7-billion-parameter Vision-Language Model (VLM) pretrained on internet-scale data. It handles high-level reasoning, language comprehension, and visual interpretation. This system enables Helix to process abstract commands (e.g., “Pick up the desert item”) and translate them into actionable steps by identifying relevant objects and contexts.
System 1 is an 80-million-parameter visuomotor policy optimized for fast, low-level control. It executes precise physical actions, such as grasping or manipulating objects, based on directives from System 2. Its smaller size ensures rapid response times for real-time robotic operations.
Both systems run on embedded GPUs with low power consumption, making Helix commercially viable for deployment without reliance on external computing resources. This self-sufficient processing capability is crucial for real-world applications.
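To make the System 1 / System 2 split concrete, here is a minimal Python sketch of the architecture as described above. Figure has not published Helix's code, so every name here (`SlowVLM`, `FastPolicy`, the latent size, the planning rate) is a hypothetical stand-in; only the 35-DoF action space and the 200Hz control rate come from Figure's own material.

```python
import numpy as np

LATENT_DIM = 64    # hypothetical size of the latent goal passed between systems
ACTION_DIM = 35    # upper-body action space: 35 degrees of freedom (from Figure)
CONTROL_HZ = 200   # System 1 control rate reported by Figure
PLAN_HZ = 8        # hypothetical System 2 update rate (slower "thinking")

class SlowVLM:
    """Stand-in for System 2: a large VLM that reads the camera image and
    language prompt and emits a low-rate latent goal for System 1."""
    def plan(self, image: np.ndarray, prompt: str) -> np.ndarray:
        # A real 7-billion-parameter VLM forward pass would go here.
        return np.zeros(LATENT_DIM)

class FastPolicy:
    """Stand-in for System 1: a small visuomotor policy that maps the latest
    observation plus the latent goal to joint targets at every control tick."""
    def act(self, obs: np.ndarray, latent: np.ndarray) -> np.ndarray:
        # A real 80-million-parameter policy network would go here.
        return np.zeros(ACTION_DIM)

def control_loop(vlm: SlowVLM, policy: FastPolicy, steps: int = 400) -> None:
    latent = np.zeros(LATENT_DIM)
    for t in range(steps):
        obs = np.zeros((224, 224, 3))         # placeholder camera frame
        if t % (CONTROL_HZ // PLAN_HZ) == 0:  # refresh the plan a few times per second
            latent = vlm.plan(obs, "Pick up the desert item")
        action = policy.act(obs, latent)      # 35-DoF joint targets every tick
        # send `action` to the robot's joint controllers here

control_loop(SlowVLM(), FastPolicy())
```

The design point this sketch mirrors is decoupling: the slow system refreshes the goal only a few times per second, while the fast system keeps producing actions every tick, so the robot can move smoothly even while it is "thinking."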
Figure has released several videos showcasing Helix in action:
The first video features two Figure robots, both controlled by a single Helix neural network, working together to store groceries. The items are novel (the robots have never encountered them before) and include objects with diverse shapes, sizes, and materials (e.g., bags of cookies, cans, or produce).
The robots demonstrate coordination, such as handing items to each other and placing them into drawers or containers, all based on natural language prompts like “Hand the bag of cookies to the robot on your right” or “Place it in the open drawer.” This showcases Helix’s ability to manage multi-robot collaboration and zero-shot generalization (performing tasks without prior training on specific objects).
The second video emphasizes Helix's control over a 35-degree-of-freedom (DoF) action space at 200Hz. The robot manipulates household items while coordinating its entire upper body: torso, head, wrists, and individual fingers. For example, it tracks its hands with its head for visual alignment and adjusts its torso for optimal reach, all while maintaining precise finger movements to grasp objects securely. This demonstrates the model's real-time dexterity and stability, overcoming historical challenges like feedback loops that destabilize high-DoF systems.
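To put the 200Hz figure in perspective, the sketch below shows what a fixed-rate control loop looks like in plain Python: 200Hz leaves a 5 ms budget per tick for one policy inference plus a joint-command write. This is purely illustrative, with `step_fn` as a hypothetical placeholder, and is not Figure's actual control stack.

```python
import time

CONTROL_HZ = 200
PERIOD = 1.0 / CONTROL_HZ  # 5 ms budget per control tick

def run_fixed_rate(step_fn, duration_s: float = 1.0) -> None:
    """Call `step_fn` at a fixed 200 Hz for `duration_s` seconds."""
    next_tick = time.perf_counter()
    end = next_tick + duration_s
    while next_tick < end:
        step_fn()                  # one policy inference + joint-command write
        next_tick += PERIOD
        sleep_for = next_tick - time.perf_counter()
        if sleep_for > 0:          # if the step overran its 5 ms budget,
            time.sleep(sleep_for)  # skip sleeping to catch back up

# Example: an empty step keeps the loop on time; a real step must fit in 5 ms.
run_fixed_rate(lambda: None, duration_s=0.1)
```

A budget this tight is one reason a compact 80-million-parameter policy makes sense for System 1: the whole perception-to-action pass has to complete within each 5 ms window.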
The third video shows Helix turning high-level commands into precise actions. Prompted with "Pick up the desert item," the robot identifies a toy cactus among a variety of objects, selects the appropriate hand, and grips it securely. This demonstrates Helix's ability to link broad language understanding to motor control, reasoning about abstract concepts and acting without prior demonstrations.
Helix, Figure's in-house AI, is a groundbreaking Vision-Language-Action model that gives humanoid robots human-like reasoning and dexterity. Its dual-system architecture, generalized object handling, and onboard processing make it a key advancement in robotics, one especially suited for home environments. Helix lets robots understand natural language, reason through tasks, and manipulate almost any household item without prior training, fulfilling Figure's promised "step change" in robotics.