Gamma & Figure 02's AI News Shock Entire Humanoid Robot Industry (NVIDIA COSMOS)

Spread the Truth


  

📰 Stay Informed with Truth Mafia!

💥 Subscribe to the Newsletter Today: TruthMafia.com/Free-Newsletter


🌍 My father and I created a powerful new community built exclusively for First Player Characters like you.

Imagine what could happen if even a few hundred thousand of us focused our energy on the same mission. We could literally change the world.

This is your moment to decide if you’re ready to step into your power, claim your role in this simulation, and align with others on the same path of truth, awakening, and purpose.

✨ Join our new platform now—it’s 100% FREE and only takes a few seconds to sign up:

👉 StepIntoYourPower.com

We’re building something bigger than any system they’ve used to keep us divided. Let’s rise—together.

💬 Once you’re in, drop a comment, share this link with others on your frequency, and let’s start rewriting the code of this reality.


🌟 Join Our Patriot Movements!

🤝 Connect with Patriots for FREE: PatriotsClub.com

🚔 Support Constitutional Sheriffs: Learn More at CSPOA.org


❤️ Support Truth Mafia by Supporting Our Sponsors

🚀 Reclaim Your Health: Visit iWantMyHealthBack.com

🛡️ Protect Against 5G & EMF Radiation: Learn More at BodyAlign.com

🔒 Secure Your Assets with Precious Metals: Get Your Free Kit at BestSilverGold.com

💡 Boost Your Business with AI: Start Now at MastermindWebinars.com


🔔 Follow Truth Mafia Everywhere

🎙️ Sovereign Radio: SovereignRadio.com/TruthMafia

🎥 Rumble: Rumble.com/c/TruthmafiaTV

📘 Facebook: Facebook.com/TruthMafiaPodcast

📸 Instagram: Instagram.com/TruthMafiaPodcast

✖️ X (formerly Twitter): X.com/Truth__Mafia

📩 Telegram: t.me/Truth_Mafia

🗣️ Truth Social: TruthSocial.com/@truth_mafia


🔔 TOMMY TRUTHFUL SOCIAL MEDIA

📸 Instagram: Instagram.com/TommyTruthfulTV

▶️ YouTube: YouTube.com/@TommyTruthfultv

✉️ Telegram: T.me/TommyTruthful


🔮 GEMATRIA FPC/NPC DECODE! $33 🔮

Find Your Source Code in the Simulation with a Gematria Decode. Are you a First Player Character in control of your destiny, or are you trapped in the Saturn-Moon Matrix? Discover your unique source code for just $33! 💵

Book our Gematria Decode VIA This Link Below: TruthMafia.com/Gematria-Decode


💯 BECOME A TRUTH MAFIA MADE MEMBER 💯

Made Members Receive Full Access To Our Exclusive Members-Only Content Created By Tommy Truthful ✴️

Click On The Following Link To Become A Made Member!: truthmafia.com/jointhemob

 


Summary

➡ Three advanced AI robots are being prepared for sale, the first being the Figure 02 humanoid, which can autonomously perform complex tasks like folding laundry. This robot uses an end-to-end neural network for seamless integration of perception, decision-making, and fine motor control. The second robot, Gamma, is being prepared by 1X for a wide range of household tasks. Lastly, a collaboration between Xhumanoid and Giga AI has produced a breakthrough in how robots perceive and understand their environments, which is crucial for autonomous task planning and navigation.

Transcript

Today on AI News, three AI robots are coming up for sale, but which will be first? Number one is the Figure 02 humanoid, which just demoed its Helix system autonomously working and learning. In fact, just a few weeks ago, Figure 02 was learning logistics with its two-part Helix AI system. But now it has also begun folding laundry by itself, one of the most complex manipulation tasks for humanoid robots. This is because towels and clothes are not rigid; instead they bend, bunch, wrinkle, and even tangle unpredictably. And unlike handling boxes, there is no fixed shape or single correct way to pick them up.

This means that even minor slips can derail the entire process. Because of this, success requires the ability to adapt instantly: trace edges, pinch corners, unravel knots, and smooth out surfaces, all while maintaining a delicate and coordinated touch. But what really sets Helix apart in its latest achievement is its use of an end-to-end neural network, allowing for seamless integration of perception, decision-making, and fine motor control. Notably, this milestone marks the first time a humanoid robot with multi-fingered hands has folded laundry fully autonomously, relying on the same underlying artificial-intelligence architecture previously tested in logistics.

What's even more impressive is that the shift from packages to towels didn't require any changes to Helix's model or its training parameters, only new data specific to laundry tasks. On top of this, Helix's capabilities extend beyond manipulation. For example, during its laundry-folding sessions, the robot also exhibited natural multimodal interaction, maintaining eye contact, directing its gaze, and using expressive hand gestures while engaging with nearby humans. Furthermore, Helix's process is remarkable not just for its adaptability but also for its resilience: the robot successfully picked towels from mixed piles, dynamically adjusted its folding technique based on how each item was presented, and even demonstrated error recovery, such as returning extra items when multiple pieces were unintentionally picked up.

Additional fine manipulation skills that Helix put on display included tracing along an edge with a thumb, carefully pinching corners, and untangling balled-up fabric before folding. And the twist is that Helix achieved all of this without relying on brittle object-level representations, which often fail with highly deformable items like towels. Instead, its end-to-end approach links visual and language inputs directly to precise motor actions in real time, allowing the robot to adapt and generalize across tasks and data more gracefully. Looking to the future, as more data is incorporated, Helix's dexterity, speed, and generalizability will continue to improve, but tell us in the comments what this humanoid should learn to do next.
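The end-to-end idea described above, mapping camera features and a language instruction straight to motor commands with no hand-built object model in between, can be sketched as a toy policy. Everything here (dimensions, layer sizes, class and variable names, the random weights) is purely illustrative and is not Figure's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class EndToEndPolicy:
    """Toy end-to-end policy: fuses a visual feature vector and a text
    instruction embedding, then maps them directly to continuous motor
    actions, with no intermediate object-level representation."""

    def __init__(self, vis_dim=512, txt_dim=128, hidden=256, action_dim=35):
        # Random weights stand in for a trained network.
        self.w1 = rng.standard_normal((vis_dim + txt_dim, hidden)) * 0.01
        self.w2 = rng.standard_normal((hidden, action_dim)) * 0.01

    def act(self, visual_features, instruction_embedding):
        x = np.concatenate([visual_features, instruction_embedding])
        h = np.tanh(x @ self.w1)    # shared fusion layer
        return np.tanh(h @ self.w2)  # bounded joint/finger targets

policy = EndToEndPolicy()
action = policy.act(rng.standard_normal(512), rng.standard_normal(128))
```

The point of the sketch is the data flow: one network, one forward pass per control tick, with perception and motor output never separated into hand-engineered stages.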

As for the robot's price and release date, it is currently selling only to commercial clients, at a price tag rumored to be over $100,000 per robot, with a wider rollout eventually expected to bring the price down to a target of $20,000 per robot. But competition is only intensifying, as 1X is also preparing a commercial release to get its newest Gamma humanoid into homes as soon as possible. In fact, the company just consolidated its headquarters in Palo Alto, California to speed up its release timeline, and when it comes to real-world demonstrations, the company just revealed new footage of Gamma holding and carrying what appears to be a bag of dog food.

Additionally, the robot knows to use both arms when handling such a large object. As for the robot's release, the company still hasn't specified a timeline, with the internet expecting the robot to cost between $30,000 and $70,000. For now, the company is still in testing as 1X prepares Gamma to generalize in homes across an increasingly wide range of household tasks, so tell us in the comments how much you'd pay for your own Gamma. Meanwhile, artificial general intelligence is closing in, with a new collaboration between Xhumanoid and Giga AI producing a breakthrough in how robots perceive and understand their environments.

The team has unveiled a generalized multimodal occupancy perception system that blends hardware and software innovations, including integrated data-acquisition devices and a comprehensive annotation pipeline. At the core of this system is its advanced multimodal fusion approach, which combines different types of sensory data to create grid-based occupancy outputs. These outputs not only capture the presence or absence of objects but also assign semantic labels, enabling truly holistic environmental understanding. This advancement is critical for downstream applications like autonomous task planning and navigation, which are essential for the next generation of humanoid robots. As for the sensor array, it is carefully engineered for robust perception, including six standard RGB cameras, one at the front, one at the back, and two on each side, giving a horizontal field of view of 118° and a vertical field of view of 92°.
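A grid-based occupancy output that also carries semantic labels can be illustrated with a minimal voxel grid: each cell stores a class id, so one structure answers both "is something there?" and "what is it?". The class names, grid size, and resolution below are invented for illustration and do not reflect the actual system's representation:

```python
import numpy as np

# Toy semantic occupancy grid: each voxel holds a class id
# (0 = free/unknown), so the output encodes occupancy and semantics at once.
FREE, FLOOR, TABLE, PERSON = 0, 1, 2, 3

class SemanticOccupancyGrid:
    def __init__(self, size=(40, 40, 16), resolution=0.1):
        self.resolution = resolution             # metres per voxel
        self.grid = np.zeros(size, dtype=np.uint8)

    def _index(self, xyz):
        # World coordinates (metres) to voxel indices.
        return tuple(int(c / self.resolution) for c in xyz)

    def mark(self, xyz, label):
        self.grid[self._index(xyz)] = label

    def query(self, xyz):
        label = self.grid[self._index(xyz)]
        return label != FREE, label

grid = SemanticOccupancyGrid()
grid.mark((1.0, 2.0, 0.5), TABLE)
occupied, label = grid.query((1.0, 2.0, 0.5))
```

A planner can then treat any non-free voxel as an obstacle while using the label (table vs. person, say) to choose different avoidance behaviour.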

Complementing these cameras is a 40-line, 360° omnidirectional lidar (light detection and ranging) unit with a vertical field of view of 59°, offering comprehensive spatial awareness. For data acquisition, the researchers use a wearable device outfitted with the same sensor suite intended for final robot integration. Human data collectors approximately 160 cm tall wear the device directly on their heads to mirror the robot's sensor position. To ensure consistently high-quality data, a neck stabilizer minimizes movement, and collectors' walking speed is limited to no more than 1.2 meters per second, with a maximum turning speed of 0.4 radians per second.
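The motion limits quoted for the collection protocol (walking at most 1.2 m/s, turning at most 0.4 rad/s) could be enforced by checking consecutive pose samples from the wearable rig. The pose format and function name below are assumptions made for illustration, not part of the published pipeline:

```python
import math

MAX_SPEED = 1.2       # metres per second, per the stated protocol
MAX_TURN_RATE = 0.4   # radians per second

def check_collector_motion(poses):
    """poses: list of (t, x, y, heading) samples from the wearable rig.
    Returns True only if walking speed and turning rate stay within
    the collection limits between every pair of consecutive samples."""
    for (t0, x0, y0, h0), (t1, x1, y1, h1) in zip(poses, poses[1:]):
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        turn_rate = abs(h1 - h0) / dt
        if speed > MAX_SPEED or turn_rate > MAX_TURN_RATE:
            return False
    return True

# 1.0 m/s walk with a 0.3 rad/s turn: within both limits.
ok = check_collector_motion([(0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 0.0, 0.3)])
```

Flagged segments would simply be dropped or re-recorded, keeping the dataset consistent with the robot's own motion envelope.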

On top of this, the team addressed challenges unique to humanoid robots, such as kinematic interference and sensory occlusion, by refining the sensor-layout strategy. Furthermore, the project produced the first panoramic occupancy dataset tailored specifically to humanoid robots, creating a critical benchmark for future research. And finally, NVIDIA just expanded its robotics ecosystem even further with the launch of several advanced AI models and infrastructure tools designed to streamline and enhance the development of embodied AI and robotic systems. The key highlight is Cosmos Reason, a 7-billion-parameter vision-language model purpose-built to enable robots and artificial-intelligence agents to understand, plan, and reason about actions in the physical world.

Announced at the SIGGRAPH conference, Cosmos Reason is engineered with advanced memory and physics understanding, allowing it to serve as a planning model capable of suggesting the next steps an embodied agent should take. According to NVIDIA, this translates directly into improved capabilities for data curation, robot planning, and even video analytics, making it a versatile foundation for a wide range of robotics applications. On top of this, NVIDIA introduced Cosmos Transfer 2, a model optimized for accelerating the generation of synthetic data from three-dimensional simulation scenes or spatial control inputs. There is also a distilled, more lightweight version of Cosmos Transfer tailored for speed, enabling faster iteration without significantly compromising data quality.
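The "reasoning model as planner" idea, a model that proposes the next step an embodied agent should take given an observation and a goal, can be sketched with a stub scorer standing in for the real vision-language model. Nothing below uses NVIDIA's actual API; the scoring function and all names are toy stand-ins:

```python
def mock_vlm_score(observation, goal, action):
    """Hypothetical stand-in for a large vision-language model's judgment:
    here it just counts action keywords that appear in the goal text.
    (The observation is ignored by this toy scorer.)"""
    return sum(word in goal for word in action.split())

def plan_next_step(observation, goal, candidates):
    """Pick the candidate action the (mock) planner rates highest."""
    return max(candidates, key=lambda a: mock_vlm_score(observation, goal, a))

step = plan_next_step(
    observation="robot at shelf",
    goal="pick up the box and place it on the conveyor",
    candidates=["open gripper", "pick up box", "rotate base"],
)
```

In a real system the scorer would be the planning model itself, and the loop would re-plan after each executed step as the observation changes.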

Together, these models are intended to help developers create synthetic datasets spanning text, image, and video, which are crucial for training robust robotic systems and artificial-intelligence agents. Furthermore, NVIDIA rolled out new neural reconstruction libraries, including tools for rendering three-dimensional real-world environments from sensor data. These capabilities are being integrated into open-source platforms such as CARLA, a simulator widely used by robotics researchers. The company also announced updates to its Omniverse software development kit and new server infrastructure, including the RTX Pro Blackwell Server for on-premises robotics workloads and DGX Cloud. Anyways, like and subscribe for more AI news.

Thanks for watching, and check out this video here.



 
