Unitree G1 Humanoid Robot Demos 5-Fingered AI Hand (3 NEW TECH UPGRADES)



Summary

➡ Unitree’s G1 robot has received new AI upgrades, including Adaptive Motion Optimization (AMO) from UC San Diego, which improves the robot’s control and adaptability. AMO uses a combination of reinforcement learning and trajectory optimization to adapt to new tasks and environments. It also uses a neural network for real-time control and is fine-tuned using reinforcement learning for better performance. Additionally, AMO’s VR interface allows operators to guide the robot’s movements, and it can also learn tasks through imitation, making it useful for automation in real-world scenarios.

Transcript

I’m A.I. News, and Unitree’s G1 robot just got several new general AI upgrades. But what can it do, how does it learn, how do you train it, and how much does it cost? This is a breakdown of the newest intelligence tech as we explore how close we are to robot AGI and what likely follows in the near future, starting with AMO from UC San Diego, which stands for Adaptive Motion Optimization, a breakthrough for hyper-dexterous whole-body control in humanoids. Up until now, robots with many degrees of freedom have posed extreme control challenges, because traditional methods like pre-programmed trajectories lack flexibility, and motion capture struggles with real-time adaptation.

And while reinforcement learning often struggles to adapt to untrained scenarios, AMO addresses these issues by combining reinforcement learning with trajectory optimization. This creates a single policy that adapts to diverse out-of-distribution commands, for instance picking up an item from a new angle, or within an environment the robot has never encountered before, and it brings a whole new level of generalization to humanoid robots. AMO’s development hinges on three crucial pillars, the first being a team-curated hybrid dataset that blends retargeted human motion capture (MoCap) data with trajectory-optimized joint movements.
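To make that combination concrete, here is a minimal Python sketch of one way an “RL plus trajectory optimization” policy can be structured: a trajectory-optimized reference motion plus a learned residual that adapts it to commands the optimizer never saw. The function names, dimensions, and the tiny placeholder network are illustrative assumptions, not the actual AMO code.

```python
# Illustrative sketch only: an "RL + trajectory optimization" policy built as a
# TO-generated reference motion plus a learned residual that adapts it to
# out-of-distribution commands. Names and shapes are assumptions, not AMO's code.
import numpy as np

NUM_JOINTS = 29  # the G1's reported degrees of freedom

def to_reference(command: np.ndarray, t: float) -> np.ndarray:
    """Placeholder trajectory-optimization reference: a smooth joint trajectory
    toward the commanded pose (a real TO stage would solve an optimization)."""
    target = np.tanh(command[:NUM_JOINTS])   # fake "optimized" joint targets
    return target * min(t, 1.0)              # ramp in over one second

class ResidualPolicy:
    """Tiny stand-in for the learned network: observation -> joint-angle residual."""
    def __init__(self, obs_dim: int, rng=np.random.default_rng(0)):
        self.w = rng.normal(scale=0.01, size=(NUM_JOINTS, obs_dim))

    def __call__(self, obs: np.ndarray) -> np.ndarray:
        return self.w @ obs                   # trained weights would go here

def hybrid_action(policy, command, proprioception, t):
    """Single policy output = TO reference + RL residual, so the robot can track
    commands it never saw during trajectory optimization."""
    ref = to_reference(command, t)
    obs = np.concatenate([command, proprioception, ref])
    return ref + policy(obs)

command = np.zeros(32)               # e.g. desired torso pose + end-effector goal
proprio = np.zeros(2 * NUM_JOINTS)   # joint positions + velocities
policy = ResidualPolicy(obs_dim=command.size + proprio.size + NUM_JOINTS)
print(hybrid_action(policy, command, proprio, t=0.5).shape)  # (29,)
```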

The MoCap data captures natural human motions, while the trajectory-optimized data mitigates distribution bias, which allows AMO to handle new tasks it hasn’t seen before. The second pillar of AMO’s core is a neural network trained for real-time control of the G1’s 29 degrees of freedom. Using sim-to-real reinforcement learning, the network learns in a simulated environment by locking onto critical parameters like torso orientation and base height while allowing free arm movement for more modular control. This balance ensures that the G1 maintains stability during dynamic tasks, like reaching for moving objects, and it makes AMO more versatile in its control.
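As a rough illustration of the hybrid-dataset idea, the sketch below mixes retargeted MoCap clips with trajectory-optimized sequences at a fixed ratio when sampling training batches, which is the kind of blending that counters distribution bias. The loader, clip counts, and mixing ratio are assumptions made up for the example.

```python
# Minimal sketch of the "hybrid dataset" idea: blend retargeted MoCap clips with
# trajectory-optimized (TO) sequences so training isn't biased toward either
# source. The loader, counts, and ratio are illustrative assumptions.
import numpy as np

def load_clips(source: str, n: int, length: int = 100, dim: int = 29, seed: int = 0):
    """Stand-in loader: each clip is a (length x dim) array of joint targets."""
    rng = np.random.default_rng(hash(source) % 2**32 + seed)
    return [rng.standard_normal((length, dim)) for _ in range(n)]

mocap_clips = load_clips("retargeted_mocap", n=200)   # natural human motion
to_clips    = load_clips("trajectory_opt",   n=200)   # optimized joint movements

def sample_batch(batch_size: int = 64, mocap_ratio: float = 0.5,
                 rng=np.random.default_rng(1)):
    """Draw training snippets from both sources at a fixed ratio to mitigate
    distribution bias toward any one motion style."""
    batch = []
    for _ in range(batch_size):
        pool = mocap_clips if rng.random() < mocap_ratio else to_clips
        clip = pool[rng.integers(len(pool))]
        start = rng.integers(clip.shape[0] - 10)
        batch.append(clip[start:start + 10])           # 10-step training snippet
    return np.stack(batch)                             # (batch, 10, 29)

print(sample_batch().shape)
```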

The third pillar is AMO being fine-tuned with reinforcement learning, using the trained network as a motion prior to accelerate learning. By focusing on whole-body tasks like object pickup, the reinforcement learning rewards the robot for stability and adherence to its desired orientations, which lets the G1 execute complex motions more efficiently in real-world conditions. When it comes to performance, AMO shines in both simulation and real-world tests: in simulation it outperformed baselines, handling extreme poses and dynamic scenarios with ease, and on the G1 it demonstrated superior stability and an expanded workspace, picking up objects while maintaining its own torso control. Compared to traditional methods, AMO’s kinematic range is significantly larger, and this unlocks the G1’s full potential for more applications.
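Here is a hedged sketch of what reward shaping in that spirit could look like: terms that reward staying close to the commanded torso orientation and base height, a term for the whole-body task target, and a small penalty for drifting away from the motion prior. The specific terms and weights are illustrative, not the published AMO reward.

```python
# Illustrative reward shaping: reward torso-orientation and base-height adherence
# (stability), reward tracking the task target, and penalize straying far from the
# motion prior. Terms and weights are assumptions, not the actual AMO reward.
import numpy as np

def fine_tune_reward(state: dict, command: dict, prior_action: np.ndarray,
                     action: np.ndarray) -> float:
    torso_err  = np.linalg.norm(state["torso_rpy"] - command["torso_rpy"])
    height_err = abs(state["base_height"] - command["base_height"])
    task_err   = np.linalg.norm(state["ee_pos"] - command["ee_pos"])  # e.g. pickup target
    prior_err  = np.linalg.norm(action - prior_action)                # stay near motion prior

    return (1.0 * np.exp(-4.0 * torso_err)      # orientation adherence
          + 1.0 * np.exp(-10.0 * height_err)    # base-height stability
          + 2.0 * np.exp(-2.0 * task_err)       # whole-body task (object pickup)
          - 0.1 * prior_err)                    # don't drift far from the prior

state   = {"torso_rpy": np.zeros(3), "base_height": 0.74, "ee_pos": np.zeros(3)}
command = {"torso_rpy": np.zeros(3), "base_height": 0.72, "ee_pos": np.array([0.3, 0.0, 0.5])}
print(round(fine_tune_reward(state, command, np.zeros(29), np.zeros(29)), 3))
```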

Plus, AMO’s VR interface enhances its overall usability by mapping three tracked poses to an 18-dimensional control signal, allowing operators to intuitively guide the G1’s arms and torso while AMO ensures stable responses, even for unconventional commands. This is especially useful for hazardous settings where non-experts could control humanoids. Beyond teleoperation, AMO also supports autonomous task execution via imitation learning, which means you can basically show the robot what to do and it will learn how to do it. The G1 does this by observing VR-guided demonstrations and then replicating tasks like object placement, providing more consistent performance even with environmental variations, which makes it ideal for automation in real-world scenarios.
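For a sense of what that interface might look like, the sketch below flattens three tracked VR poses (head plus two hand controllers) into an 18-dimensional command, using a 3 × (position + roll/pitch/yaw) layout. That decomposition is one plausible reading of the description, not a confirmed detail of AMO’s teleoperation stack.

```python
# Sketch of the teleoperation mapping described above: three tracked VR poses
# flattened into an 18-D command for the whole-body controller. The 3 x (xyz +
# roll/pitch/yaw) layout is an assumption, not the confirmed AMO interface.
import numpy as np
from dataclasses import dataclass

@dataclass
class TrackedPose:
    position: np.ndarray   # (3,) x, y, z in the robot frame
    rpy: np.ndarray        # (3,) roll, pitch, yaw

def vr_to_command(head: TrackedPose, left: TrackedPose, right: TrackedPose) -> np.ndarray:
    """Map three tracked poses to an 18-dimensional control signal:
    head drives torso orientation/height, hands drive the two arm end-effectors."""
    cmd = np.concatenate([
        head.position,  head.rpy,    # torso height + orientation target
        left.position,  left.rpy,    # left end-effector target
        right.position, right.rpy,   # right end-effector target
    ])
    assert cmd.shape == (18,)
    return cmd

head  = TrackedPose(np.array([0.0, 0.0, 1.4]), np.zeros(3))
left  = TrackedPose(np.array([0.3, 0.2, 1.0]), np.zeros(3))
right = TrackedPose(np.array([0.3, -0.2, 1.0]), np.zeros(3))
print(vr_to_command(head, left, right))   # fed to the policy each control step
```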

But there’s another powerful AI system that’s turning the G1 into a kind of robot apprentice that learns and mirrors your moves: the Teleoperated Whole-Body Imitation System, or TWIST for short, which allows anyone with MoCap equipment to execute new tasks with new precision. TWIST is the product of combining two AI tricks, the first being reinforcement learning and the second being behavior cloning, and its results beat other AI systems like HumanPlus and OmniH2O. Those other setups have their issues: reinforcement learning alone makes robots slip around, while DAgger stumbles on new moves without clear goals, but TWIST blends the best of both for smooth, accurate motion every time.

The cool part is that TWIST even gets a boost from some in-house motion capture clips, which are messy, shaky, chaotic clips that actually help the system handle new moves better; you can think of it like training to drive on a bumpy road to improve performance on any terrain. That said, TWIST still isn’t perfect, because it gets jittery when the robot needs to push or lift things, since it’s mainly built for hitting target positions, so the fix is to throw some curveballs at the system, like messing with the robot’s grip, to teach it how to handle force.
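To ground the “RL teacher plus behavior-cloning student” recipe, here is a toy Python sketch: a privileged teacher relabels the states the student actually visits (a DAgger-style loop), and noise is injected into the MoCap reference to imitate those messy in-house clips. Every name and number here is an illustrative stand-in, not TWIST’s implementation.

```python
# Toy sketch of an RL-teacher + behavior-cloning-student loop (DAgger-style),
# with noisy MoCap references mixed in. All names and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
OBS, ACT = 64, 29

def teacher(obs):                 # stand-in for the privileged RL teacher policy
    return np.tanh(obs[:ACT])

class Student:
    """Linear student policy trained by regression onto teacher actions."""
    def __init__(self):
        self.w = np.zeros((ACT, OBS))
    def act(self, obs):
        return self.w @ obs
    def fit(self, obs_batch, act_batch, lr=1e-2):
        pred = obs_batch @ self.w.T
        grad = (pred - act_batch).T @ obs_batch / len(obs_batch)
        self.w -= lr * grad

def rollout_obs(policy, mocap_ref, steps=32, noise=0.05):
    """Roll the *student* in a fake environment while tracking a (noisy) MoCap
    reference; the noise imitates the shaky in-house clips mentioned above."""
    obs = rng.standard_normal(OBS)
    out = []
    for t in range(steps):
        ref = mocap_ref[t % len(mocap_ref)] + noise * rng.standard_normal(OBS)
        obs = 0.9 * obs + 0.1 * ref + 0.01 * np.pad(policy.act(obs), (0, OBS - ACT))
        out.append(obs.copy())
    return np.stack(out)

student = Student()
mocap_ref = rng.standard_normal((100, OBS))
for it in range(10):                              # DAgger-style iterations
    obs_batch = rollout_obs(student, mocap_ref)   # states the student visits
    act_batch = np.stack([teacher(o) for o in obs_batch])  # teacher relabels them
    student.fit(obs_batch, act_batch)
print("student fitted:", student.w.shape)
```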

As for speed, there’s about a 0.9-second response lag, most of which comes from the robot setting up its next move, while the actual brain work happens in about 0.2 seconds of that, and they’re working to make it even faster. On top of this, TWIST also shined in tests using the Booster T1 robot, but it’s still not quite smooth sailing, because TWIST still struggles with blind spots during control, it currently has no touch feedback to know how tight its grip is on an object, and the motors begin to get hot after just 5-10 minutes of use.
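Just to put those figures side by side, here is a quick bit of arithmetic on the stated latency budget (illustrative only):

```python
# Rough latency budget implied by the figures above (illustrative arithmetic only):
# ~0.9 s total response lag, of which ~0.2 s is policy inference ("brain work"),
# leaving the rest for setting up the next move before it becomes visible.
total_lag_s = 0.9
inference_s = 0.2
setup_s = total_lag_s - inference_s
print(f"inference: {inference_s:.1f}s ({inference_s / total_lag_s:.0%}), "
      f"motion setup/other: {setup_s:.1f}s ({setup_s / total_lag_s:.0%})")
# inference: 0.2s (22%), motion setup/other: 0.7s (78%)
```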

And finally, Unitree’s G1 humanoid just got dexterous too, with Reworn applying its proprietary datasets and AI models. It supports everything from human interaction to simulated learning, and this helps deploy new robot policies into the real world even faster, to conquer a wide variety of tasks that require ultra-precise manipulation. We can see the robot doing a series of tasks, including playing rock, paper, scissors, cutting food items with a knife, and much more. This kind of robot manipulation with five-fingered hands may be a look at the first wave of commercial humanoid robots, and it all comes as Reworn just entered into a partnership with Unitree Robotics to advance their physical AI. The partnership is using Reworn’s Roboverse training platform, their vast human motion datasets, and their embodied AI models, combining them with Unitree’s advanced robotics, with the goal of fast-tracking the creation of robots that are altogether tougher, smarter, and more adaptable to complex environments.

By merging these strengths, both companies aim to roll out robots capable of tackling tasks in ways we haven’t yet seen. One major focus is Reworn’s Roboverse simulator, which they’re using to train Unitree’s robot models with high-quality synthetic data and standardized benchmarks. This setup is a game changer because it boosts reinforcement learning by up to 30 times, enabling Unitree’s robots to master new skills with lightning-fast efficiency. Meanwhile, Reworn’s extensive human motion datasets also help these robots move and adapt seamlessly, and beyond the tech, the partnership is also fostering a vibrant open-source developer community.
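The claimed speedup is easier to picture with a small throughput sketch: stepping many simulated environments in parallel multiplies the samples an RL learner collects per second. The step rate, environment count, and overhead factor below are assumptions, not Roboverse benchmarks.

```python
# Sketch of why a simulation platform can speed up reinforcement learning by large
# factors: many environment instances stepped in parallel multiply the samples
# collected per wall-clock second. Numbers are illustrative, not Roboverse data.
def samples_per_second(env_step_hz: float, num_envs: int, overhead: float = 0.9) -> float:
    """Approximate throughput of a vectorized simulator; `overhead` discounts
    imperfect scaling (scheduling, batched policy inference, logging)."""
    return env_step_hz * num_envs * overhead

single = samples_per_second(500, num_envs=1, overhead=1.0)
batch  = samples_per_second(500, num_envs=32)             # assumed parallel batch
print(f"speedup vs. one env: ~{batch / single:.0f}x")      # ~29x with these numbers
```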

And by combining Reworn’s infrastructure with Unitree’s robots, they’re creating a hub for global innovation to share ideas and push the boundaries toward proto-AGI. This will open the door to more real-world applications like retail, industrial automation, and service industries. But comment down below: what kind of hand do you think would be best for a dexterous robot? Would it be a five-fingered hand? Should there be 11 or 12 degrees of freedom, or over 20? And should they be active? And how much would you be willing to pay? Anyways, like and subscribe, and click this video here if you want to know more about the latest in AI news.
