NVIDIA R2D2 Demo, Boston Dynamics Atlas MTS, and General AI Robot Manipulation

Summary

➡ A new robot named Phantom, designed by Foundation, showcases human-like movement and is built for hazardous environments. Its motion relies on a vision-based approach that favors cameras over force/torque sensors, which often introduce noise and errors. The robot also uses proprietary cycloid actuators for smooth, natural responses and open-loop force control for safety and performance. It can handle demanding tasks, interact safely with humans, and walk at 3.28 feet (1 meter) per second.

Transcript

I’m AI News, and today I’m going to show you a new robot with next-level human-like movement that’s designed for dangerous environments. Plus, I’ll dive into NVIDIA’s new R2D2 workflow, and stay until the end as I show you the top three dexterous AI robot manipulation models. But first, Foundation just released new footage of its MK1 robot, called the Phantom, and it’s showing off remarkable human-like movements. And they revealed that this is due to their vision-based approach, which prioritizes cameras over sensors. And according to the company, this is because multiple sensors often introduce noise and errors into the system.

And this complicates accurate prediction and control. Cameras, on the other hand, offer a superior cost-to-performance ratio, and they deliver clearer, more reliable data for the robot’s decision-making. An example comes from earlier iterations of the Phantom, which used torque sensors in the feet to measure forces and interactions with the ground. These sensors generated unwanted noise that degraded the robot’s performance and stability. To address this, Foundation eliminated the sensors altogether by using reinforcement learning instead, allowing the robot to adapt and refine its movements through trial and error without noisy sensor data getting in the way.

But the robot’s human-like motion is also a direct result of its proprietary cycloid actuators. These actuators feature low back-driving torque, meaning only a minimal external push is needed to move them, which allows smooth, natural responses to physical interactions. They also have low inertia, so they can accelerate and decelerate quickly, contributing to the robot’s agile, fluid movements. And because they operate at a high bandwidth of over 100 Hz, the actuators can process and respond to commands rapidly, ensuring precise control without additional sensors or complex software to achieve compliant behavior.

But on top of this, Phantom also employs open-loop force control, a technique where the system applies predetermined torque commands without continuous feedback. This approach enhances safety by reducing the risk of unexpected reactions to external forces, and it improves performance by simplifying the control architecture. In terms of power, the robot delivers a maximum peak torque of 118 ft-lb (160 Nm), which lets it handle demanding physical tasks, while its back-driving torque remains exceptionally low at less than 1 Nm.
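As a rough illustration, open-loop force control of the kind described can be sketched as a precomputed torque schedule sent out with no feedback path. The `send_torque` interface and the schedule values here are hypothetical; only the 160 Nm clamp and the >100 Hz rate come from the figures quoted above:

```python
# Minimal sketch of open-loop force control: a precomputed torque
# schedule is sent at a fixed rate, and no sensor feedback is read back.
# `send_torque` stands in for a hypothetical robot interface.
PEAK_TORQUE_NM = 160.0  # peak torque quoted for Phantom (118 ft-lb)

def clamp(torque_nm: float) -> float:
    """Saturate commands at the actuator's peak torque."""
    return max(-PEAK_TORQUE_NM, min(PEAK_TORQUE_NM, torque_nm))

def run_open_loop(schedule_nm, send_torque, rate_hz=100):
    """Apply a predetermined torque schedule without reading sensors."""
    dt = 1.0 / rate_hz  # >100 Hz matches the actuator bandwidth above
    for torque in schedule_nm:
        send_torque(clamp(torque))
        # on real hardware, sleep dt here to hold the fixed rate

commands = []
run_open_loop([50.0, 200.0, -180.0], commands.append)
print(commands)  # commands beyond the 160 Nm limit are saturated
```

The safety argument in the paragraph above shows up in the clamp: no matter what the schedule asks for, the command never exceeds the actuator's rated peak.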

And this just lets the robot be easily moved or stopped by external forces, such as a human, which enhances safety in human-robot interaction. The robot achieves a walking speed of 3.28 feet per second (1 meter per second), stands 5 feet 9 inches tall, and its upper body features 19 degrees of freedom. But before we get into NVIDIA’s AI and what it’s doing with the new Atlas MTS, first, a message from today’s sponsor. As AI reshapes business and transforms the global market, are you arming yourself with the skills to stay ahead? In today’s job market, AI expertise is no longer merely a benefit.

It’s essential. Whether you want to thrive in your current role or secure your career moving forward, mastering AI sets you apart. That’s why Growth School is offering a unique 3-hour interactive AI training session, designed to help professionals like you conquer 25 advanced AI tools. And for a limited period, it’s completely free for the first 1,000 viewers from AI News. Whether you work in marketing, sales, HR, design, product, or finance, not just tech, AI skills can streamline your workflows, enhance decisions, and increase productivity. Plus, all registrants will receive $500 in bonus resources.

So don’t miss this opportunity to join the top 1% by developing vital AI skills. Click the link below to join Growth School’s WhatsApp group right now for exclusive updates. But now for NVIDIA’s Robotics Research and Development Digest, or R2D2, which highlights three of the top AI robot dexterity workflows and datasets for tackling key challenges in robot manipulation. Starting with number three, DextrAH-RGB. This workflow achieves dexterous grasping from stereo RGB input and delivers zero-shot sim-to-real performance. It’s trained entirely in simulation using NVIDIA’s Isaac Lab, which allows robots to grasp novel objects in dynamic environments at an unprecedented level.

And to demo it, Boston Dynamics teamed up with NVIDIA to deploy the system on its Atlas MTS robot, showcasing its ability to handle both lightweight and heavy objects using its three-fingered grippers. But the real magic is in DextrAH-RGB’s two-stage training pipeline, where first a privileged, fabric-guided teacher policy is trained in simulation using reinforcement learning. It uses geometric fabrics as a form of vectorized control to define motion as joint position, velocity, and acceleration signals, and it embeds safety features like collision avoidance and goal-reaching behaviors directly into the controller.
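As a heavily simplified, hypothetical illustration of the idea behind fabric-style control, motion can be defined directly as an acceleration computed from position and velocity, with a goal-attraction term and an obstacle-repulsion term combined into one command. This is a one-dimensional toy, not NVIDIA's geometric-fabrics implementation:

```python
# Toy fabric-style acceleration policy: motion is expressed as joint
# position, velocity, and acceleration signals, with goal-reaching and
# collision-avoidance terms built into the acceleration itself.
import math

def fabric_accel(q, qd, goal, obstacle, k_goal=4.0, k_damp=2.0, k_rep=0.5):
    """Return a 1-D joint acceleration: attraction to goal + repulsion."""
    attract = k_goal * (goal - q) - k_damp * qd          # damped pull to goal
    dist = q - obstacle
    repel = k_rep * math.copysign(1.0, dist) / (dist * dist + 1e-6)
    return attract + repel

# Integrate forward: the joint settles near the goal at q = 1.0 while
# being pushed away from the obstacle at q = -0.5.
q, qd, dt = 0.0, 0.0, 0.01
for _ in range(2000):
    qdd = fabric_accel(q, qd, goal=1.0, obstacle=-0.5)
    qd += qdd * dt
    q += qd * dt
print(round(q, 2))
```

The design point is that safety (the repulsion term) lives inside the motion generator itself rather than in a separate software layer, which mirrors the "safety features embedded inside" description above.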

Additionally, an LSTM layer in the teacher policy reasons about the physics of the situation, allowing the robot to employ corrective actions like regrasping and failure detection. In the second stage, the teacher policy is distilled into a student policy that uses photorealistic RGB images from a stereo camera. This imitation learning process runs on the DAgger framework and allows the student policy to infer depth and object positions implicitly. In fact, demonstrations with the Atlas robot even showcased emergent retry behaviors after failed attempts. Entering number two, DexMimicGen, which is transforming bimanual manipulation by reducing the number of human demonstrations needed.
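The teacher-to-student distillation described for DextrAH-RGB follows the DAgger pattern: the student acts, the privileged teacher labels the states the student actually visits, and the student is refit on the aggregated data. A toy one-dimensional sketch, where both policies are illustrative stand-ins rather than real networks:

```python
# Toy DAgger loop: the student policy rolls out, the privileged teacher
# labels every visited state, and the student is refit on the growing
# aggregated dataset.
def teacher(state):
    """Privileged expert: drive the state toward zero."""
    return -0.5 * state

def make_student(dataset):
    """Fit a proportional gain to the aggregated (state, action) pairs."""
    if not dataset:
        return lambda s: 0.0
    num = sum(s * a for s, a in dataset)
    den = sum(s * s for s, a in dataset) or 1.0
    gain = num / den
    return lambda s: gain * s

dataset = []
student = make_student(dataset)
for _ in range(5):                   # DAgger iterations
    state = 2.0
    for _ in range(20):              # roll out the *student* policy
        dataset.append((state, teacher(state)))  # teacher labels visited states
        state += student(state)      # but the student's action moves the state
    student = make_student(dataset)  # refit on the aggregated data

print(round(student(1.0), 3))  # -0.5: the student recovered the teacher's gain
```

The key DAgger idea visible here is that labels are collected on the student's own state distribution, which is what lets an image-only student inherit behavior from a state-privileged teacher.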

And this allows the generation of large-scale trajectory datasets from just a handful of human inputs. This not only addresses the data scarcity problem in imitation learning for humanoid robots, but it also uses a real-to-sim-to-real pipeline, allowing DexMimicGen to transform a small set of teleoperated demonstrations into thousands of simulated trajectories. It all starts with a human demonstrator performing a task with a teleoperation device. Then DexMimicGen augments the demonstrations in simulation, where it generated 21,000 demos from just 60 human inputs. Next, a policy is trained on the dataset using imitation learning, and then it’s deployed to the physical robot.
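The augmentation step, reduced to its essence, replays a demonstration recorded relative to the object against many randomized object poses. A toy sketch of that idea, with made-up waypoints and pose ranges purely for illustration:

```python
# Toy sketch of demonstration augmentation: one recorded trajectory,
# expressed relative to the object, is re-anchored to many randomized
# object poses to synthesize new demonstrations in simulation.
import random

random.seed(0)

# One "human demo": end-effector waypoints relative to the object.
demo_relative = [(-0.10, 0.05), (-0.05, 0.02), (0.0, 0.0)]

def synthesize(demo, n):
    """Generate n new trajectories by re-anchoring the demo to random poses."""
    out = []
    for _ in range(n):
        ox = random.uniform(-0.5, 0.5)   # randomized object position (x)
        oy = random.uniform(-0.5, 0.5)   # randomized object position (y)
        out.append([(ox + dx, oy + dy) for dx, dy in demo])
    return out

trajectories = synthesize(demo_relative, 350)  # 60 demos x 350 ≈ 21,000
print(len(trajectories))  # 350 synthetic trajectories from one demo
```

Because each synthetic trajectory still ends at the (new) object pose, the relative structure of the human demonstration is preserved while the dataset's diversity multiplies.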

And DexMimicGen breaks the complexity of bimanual tasks into three categories: parallel, where each arm performs an independent task; coordinated, where both arms synchronize, for instance to lift a large object; and sequential, where a task follows a specific order, like moving a box and then placing an item inside of it. By using asynchronous execution, synchronization mechanisms, and ordering constraints, DexMimicGen ensures precise task execution. And the results speak for themselves, with robots achieving 90% success rates on DexMimicGen’s generated data, compared to 0% when trained solely on the original human demonstrations.
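The ordering constraints behind the sequential category can be sketched with a toy scheduler that interleaves per-arm subtasks while honoring declared prerequisites. The task names and data layout here are illustrative, not DexMimicGen's actual API:

```python
# Toy ordering-constraint scheduler: subtasks declare a prerequisite,
# and execution order honors it while independent (parallel) subtasks
# run in whatever order they arrive. Assumes no cyclic dependencies.
from collections import deque

def schedule(subtasks):
    """subtasks: list of (name, arm, after), where `after` is a prerequisite or None."""
    done, order = set(), []
    queue = deque(subtasks)
    while queue:
        name, arm, after = queue.popleft()
        if after and after not in done:
            queue.append((name, arm, after))  # prerequisite not met: requeue
            continue
        order.append((arm, name))
        done.add(name)
    return order

# Sequential example from the text: move a box, then place an item in it.
plan = schedule([
    ("place_item", "right", "move_box"),  # ordering constraint
    ("move_box", "left", None),
    ("wipe_table", "right", None),        # parallel: no dependency
])
print(plan)
```

Running this yields a plan in which `move_box` always precedes `place_item`, while the unconstrained `wipe_table` is free to interleave.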

And finally, number one, GraspGen. This is a massive synthetic dataset powering the future of robotic grippers. In fact, GraspGen is available on Hugging Face right now, and it contains over 57 million grasps for three distinct grippers: the Franka Panda, the Robotiq 2F-140 industrial gripper, and a single-contact suction gripper. It’s generated entirely in simulation, with the dataset including 6D gripper transformations and success labels for various object meshes. This gives researchers a treasure trove of data to train future grasping policies. And GraspGen tackles the challenge of data scarcity head on.
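As a hypothetical sketch of how a dataset like this might be consumed, per-grasp success labels can be aggregated per gripper. The field names and records below are assumptions for illustration, not the actual Hugging Face schema:

```python
# Hypothetical records in a GraspGen-style dataset: each grasp carries
# a gripper type, a target object, and a binary success label.
records = [
    {"gripper": "franka_panda", "object": "mug", "success": 1},
    {"gripper": "franka_panda", "object": "mug", "success": 0},
    {"gripper": "robotiq_2f140", "object": "crate", "success": 1},
    {"gripper": "suction", "object": "glass_pane", "success": 1},
]

def success_rate_by_gripper(rows):
    """Aggregate the binary success labels per gripper type."""
    totals = {}
    for r in rows:
        hits, n = totals.get(r["gripper"], (0, 0))
        totals[r["gripper"]] = (hits + r["success"], n + 1)
    return {g: hits / n for g, (hits, n) in totals.items()}

rates = success_rate_by_gripper(records)
print(rates)  # franka_panda: 0.5, robotiq_2f140: 1.0, suction: 1.0
```

At 57 million grasps, the same aggregation is what lets researchers compare gripper types and filter for successful grasps when training a policy.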

And it does this through automated data generation, which saves time and money and scales in both the size and diversity of datasets. As a result, robots can now learn to handle a wider range of objects, with the Franka Panda gripper mastering delicate tasks, the Robotiq 2F-140 handling heavy-duty applications, and the suction gripper covering smooth surfaces. And by providing a standardized, high-quality dataset, GraspGen allows researchers to develop generalizable grasping models. Like and subscribe, and click this video here if you want to know more about the latest in robotics and NVIDIA’s progress.
