AgiBot X2 Humanoid Robot Demos BREAKTHROUGH General AI (GO-1 NEAR AGI?)

Summary

➡ AgiBot has introduced a new robot, the LingXi X2, powered by its Go1 foundation model. The robot can move like a human, interact emotionally, and perform real-world tasks like cleaning. It learns to move naturally by copying humans and improving through practice, and it can respond quickly to its surroundings and express emotions, making it seem more approachable.

Transcript

I’m AI News, and today AgiBot just unveiled its newest LingXi X2 robot along with its new Go1 foundation model for general robot intelligence, and it’s totally redefining what humanoid robots can achieve. I’ll walk you through the following points, starting with the LingXi X2 robot. It’s powered by the Genie Operator 1, or Go1, foundation model, and it’s really three things rolled into one. First is high mobility, which lets the robot dance and ride bicycles. Second is emotionally interactive AI, which acts as a companion; it can even mimic breathing, play, and dance.

And the third is an embodied AI agent that tackles real-world tasks like cleaning and caretaking. The name LingXi actually means agile and precise, and it really lives up to that name. What really makes it a monumental step is the Go1 brain foundation model, which allows it to act as a generalist robot, meaning it can pour water one minute and balance on a scooter the next. In testing, Go1 actually outperformed previous models like RDT by up to 0.12 points on task success, excelling in complex real-world scenarios. And this is a really big deal, because it allows not only for generalist tool use but also for fluid motion.

For instance, the X2 is able to do all different types of moves, including riding different bicycles and hoverboards, and this is partially due to its bionic ankle. Altogether, this robot has 28 degrees of freedom across all of its joints, but what’s really special is the ankle, which lets it move in a way we haven’t quite seen yet. But let’s get into the robot’s smarts. The AgiBot team used deep reinforcement learning and imitation learning with a diffusion-based generative motion engine. These are all fancy terms for teaching it how to move naturally by mimicking humans and refining it all through trial and error.
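To make those terms concrete, here’s a minimal, purely illustrative Python sketch: "demonstrations" fit an initial controller gain (the imitation-learning step), and random trial-and-error then refines it (a crude stand-in for deep reinforcement learning). None of this is AgiBot’s actual code; the real system trains deep neural networks with a diffusion-based motion engine, not a single scalar gain.

```python
import random

def demo_action(angle_error):
    """'Human demonstration': a decent but imperfect proportional response."""
    return 0.8 * angle_error

def rollout_cost(gain, steps=50):
    """Total squared tracking error when chasing a target angle."""
    angle, target, cost = 0.0, 1.0, 0.0
    for _ in range(steps):
        angle += gain * (target - angle)   # apply the policy
        cost += (target - angle) ** 2      # accumulate tracking error
    return cost

# 1) Imitation: least-squares fit of the gain to the demonstrations.
errors = [0.1 * i for i in range(1, 11)]
gain = sum(demo_action(e) * e for e in errors) / sum(e * e for e in errors)

# 2) Trial and error: random perturbations, keep whatever lowers the cost.
random.seed(0)
best = rollout_cost(gain)
for _ in range(200):
    candidate = gain + random.gauss(0, 0.05)
    c = rollout_cost(candidate)
    if c < best:
        gain, best = candidate, c

print(round(gain, 2))  # refined gain, near the optimum of 1.0
```

The point of the two-stage design: imitation gets the policy into a sensible region quickly, and trial-and-error polishes it past the quality of the demonstrations.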

And this process runs tens of thousands of simulated interactions per second, which lets it solve the motion problem, as they say, meaning it can move extremely naturally, just like a human. And you’ll see how Go1 turbocharges this with its Vision-Language-Latent-Action framework, called ViLLA for short. This bridges what the X2 sees, via 3D visual perception, with what it does: it plans moves using latent action tokens, and it’s trained on over 1 million trajectories from the AgiBot World dataset plus web-scale human videos. That means Go1 gives the X2 the cognitive edge to navigate on scooters, hoverboards, and other devices, as well as through tricky environments.
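For a rough intuition about "latent action tokens" (a hypothetical toy, not ViLLA’s real interface): continuous motion gets compressed into a small discrete vocabulary that a high-level planner can reason over, and a low-level controller decodes tokens back into motor commands.

```python
# Toy vector-quantization sketch: token id -> joint velocity.
# The codebook and API are invented for illustration only.
CODEBOOK = {0: -1.0, 1: -0.5, 2: 0.0, 3: 0.5, 4: 1.0}

def encode(velocity):
    """Map a continuous velocity to the nearest discrete action token."""
    return min(CODEBOOK, key=lambda t: abs(CODEBOOK[t] - velocity))

def decode(token):
    """Recover the motor command a token stands for."""
    return CODEBOOK[token]

trajectory = [0.9, 0.4, -0.1, -0.6]       # desired velocities
tokens = [encode(v) for v in trajectory]  # planner works in token space
commands = [decode(t) for t in tokens]    # controller decodes per step
print(tokens, commands)                   # [4, 3, 2, 1] [1.0, 0.5, 0.0, -0.5]
```

Working in a small token space is what lets one model plan over both robot trajectories and web-scale human video: everything is reduced to the same discrete vocabulary.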

But it’s not just mobility intelligence that the X2 is bringing to the table, because it’s also bringing interactive companionship. This comes through a new VLM-based multimodal system that crafts a customized interaction model, letting the X2 respond to you in milliseconds via edge computing. And it’s not just audio; it’s also visual, as it understands its surroundings, expresses emotions, and reacts with its reaction agent. This is thanks to its ability to process vision, language, and actions together, trained on that massive Go1 dataset, which even lets the X2 pioneer a glasses-free 3D holographic telecommunication system.
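Why does millisecond-level reaction require edge computing? Here’s a back-of-the-envelope sketch with made-up latency numbers: running inference on-device avoids the network round trip that a cloud call would add.

```python
# Assumed, illustrative latencies (not measured X2 figures).
NETWORK_ROUND_TRIP_MS = 80.0  # typical cloud round trip
EDGE_INFERENCE_MS = 5.0       # on-device model inference

def respond(event, on_edge=True):
    """Return a reaction plus the latency budget it would incur."""
    latency = EDGE_INFERENCE_MS
    if not on_edge:
        latency += NETWORK_ROUND_TRIP_MS  # cloud path pays the round trip
    return {"reaction": f"smile at {event}", "latency_ms": latency}

print(respond("greeting"))                 # edge path: 5 ms
print(respond("greeting", on_edge=False))  # cloud path: 85 ms
```

Even with generous network assumptions, the round trip alone dwarfs on-device inference, which is why responsive companion behavior pushes computation onto the robot itself.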

That’s a mouthful, but basically it means you can chat through your robot into another room or another place without wearing special glasses; it basically extends telepresence. And this feature, paired with human-friendly soft materials and ultra-flexible hand designs, makes the X2 a companion that seems more approachable than other robots. But onto point number four, which is Go1’s generalization, and this is where the practicality shines. See, the X2 is already acting as a successful security guard, caretaker, and cleaner thanks to Go1’s zero-shot generalization. This means the X2 can tackle tasks it hasn’t explicitly been trained on before, like manipulating objects with its dexterous five-fingered hands, and it can use its dual-brain architecture and the ViLLA framework to plan long-horizon tasks and execute them with precision, whether it’s wiping a table or collaborating with other robots.
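The long-horizon idea can be sketched as a high-level "brain" sequencing low-level skill primitives; the skill names and plan below are invented for illustration, not AgiBot’s real task planner.

```python
# Hypothetical skill library: primitive name -> executable behavior.
SKILLS = {
    "locate": lambda obj: f"located {obj}",
    "grasp":  lambda obj: f"grasped {obj}",
    "wipe":   lambda obj: f"wiped {obj}",
    "place":  lambda obj: f"placed {obj}",
}

def plan(task):
    """Decompose a long-horizon task into known primitives."""
    if task == "wipe the table":
        return [("locate", "cloth"), ("grasp", "cloth"),
                ("wipe", "table"), ("place", "cloth")]
    raise ValueError(f"no plan found for: {task}")

# Execute the plan step by step, logging each primitive's result.
log = [SKILLS[name](obj) for name, obj in plan("wipe the table")]
print(log)
```

The generalization claim amounts to this: if the primitives are robust, a novel task only requires composing them in a new order, not retraining from scratch.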

Credit here goes to the AgiBot World dataset, with over 3,000 hours of real-world robot action. In fact, Go1 has an edge, boosting success rates by over 30% compared to older datasets. Even with less data, 236 hours compared to 2,000 hours, it still outperforms: quality over quantity. For the X2, this translates to real-world readiness for handling tools and environments in human scenarios, giving it room to learn tougher jobs over time. And that brings us to the X2’s hardware, which revolves around three specific modules. We’ll start with the first, the CyberBMS.

That’s the Battery Management System, and it basically keeps the X2 powered up: it manages the battery, boosts battery life, and cools the system so it can run non-stop. Next, there’s the CyberEdge. This is the cerebellar controller, and it controls the robot’s movements, turning Go1’s plans into smooth walking, biking, or spinning using all 28 degrees of freedom across its joints, all in real time. And the third piece of hardware is the CyberDCU, the Domain Control Unit, and this calls the shots. It’s the coordinator linking vision, tasks, and interactions together, making sure the X2 can guard, chat, and clean by directing the entire system at once.
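The three-module split can be sketched as cooperating components; the class names mirror the transcript, but the interfaces and power numbers are invented for illustration.

```python
class CyberBMS:
    """Battery management: tracks charge and gates power draw."""
    def __init__(self):
        self.charge = 1.0
    def draw(self, amount):
        self.charge = max(0.0, self.charge - amount)
        return self.charge > 0.1  # enough reserve to keep operating?

class CyberEdge:
    """Cerebellar controller: turns plans into motion commands."""
    def execute(self, motion):
        return f"executing {motion}"

class CyberDCU:
    """Domain control unit: coordinates power and motion."""
    def __init__(self, bms, edge):
        self.bms, self.edge = bms, edge
    def command(self, motion, power_cost=0.05):
        if not self.bms.draw(power_cost):
            return "low battery: returning to dock"
        return self.edge.execute(motion)

dcu = CyberDCU(CyberBMS(), CyberEdge())
print(dcu.command("walk forward"))  # executing walk forward
```

The design point is separation of concerns: the DCU never touches motors or cells directly; it only arbitrates between the specialist modules.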

Now, when it comes to scaling into the future, the X2 and Go1 aren’t standing still, because Go1’s performance scales with data, meaning the more AgiBot feeds it, the smarter the X2 will become. From 9,200 to 1 million trajectories, it keeps on climbing, and it hints at a future where the X2 could assemble furniture or cook dinner. AgiBot has open-sourced all of it: its Go1 models, its AgiBot World datasets, real and simulated, and its X2 tools. And because the Go1 framework is trained on web-scale videos and robot data, anybody can now build on this foundation, and they can use different robots too.
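To see why "performance scales with data" matters, here’s a toy scaling-curve calculation with made-up coefficients (not AgiBot’s published numbers): if success improves roughly log-linearly with trajectory count, then going from 9,200 to 1 million trajectories keeps adding capability, just with diminishing returns per trajectory.

```python
import math

def success_rate(n_trajectories, a=0.08, b=0.0):
    """Hypothetical log-linear scaling curve, capped at 100% success."""
    return min(1.0, b + a * math.log10(n_trajectories))

# Each 10x jump in data adds a fixed increment of success rate.
for n in (9_200, 100_000, 1_000_000):
    print(n, round(success_rate(n), 2))
```

Under a curve like this, a 100x data increase buys two fixed increments of success rate, which is why open-sourcing the dataset and inviting community data collection is itself a scaling strategy.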

And what’s really special is that Go1’s ViLLA framework uses a latent action model to distill video frames, which creates a synergy between Go1 as the brain and X2 as the body: a robot that’s cognitively sharp and physically nimble, blending reasoning, planning, and execution into one format. But let’s keep it real: while progress is going well, there’s still room to refine, because Go1’s real-world tests are stellar but slow, and the X2’s hardware, while advanced, still needs a digital twin to scale testing.

Still, with Go1’s real-world tests outperforming OXE by around 30%, and with AgiBot envisioning general-purpose intelligence built on over 1 million trajectories and community input, they’re really laying the foundation for these robots to rival human ability, probably just a year or two away. And when it comes to price, because this robot is smaller, much like Unitree’s G1, it might land somewhere in the $20,000 to $60,000 range within a year or so, which means it could become another mainstream robot.

But we’ll see what happens. Make sure to like and subscribe, and check out the next video if you want to see the top 10 cheapest robots in China right now.
