Transcript
So it doesn't know exactly how to do it going into the task, but just by listening to the human describe what needs to be done, it figures it out. And Gemini Robotics is also able to deal with new objects, new instructions, and new environments. In fact, on average it more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action (VLA) models. And you can see this in some of the demonstrations. For instance, look at this Apptronik humanoid demonstration, where it's playing tic-tac-toe, using its vision-language-action model to determine which move to make next.
In another example, it has these different tiles with letters on them, and a woman says, "Spell out something with these tiles that you would find in a deck of cards," and it reasons its way to spelling out "Ace," figuring out the robot trajectories in real time to line the tiles up in the correct order. In another example, it's told to pack a lunch, and it has to use its vision-language-action model to determine what's food, what to put inside the box, and even what type of force control to use.
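To make that concrete, here's a minimal Python sketch of the kind of closed-loop inference a vision-language-action model runs: observe, predict a short chunk of low-level actions conditioned on the camera image and the instruction, execute, and repeat. The `Observation`, `Action`, and `StubVLAPolicy` interfaces are illustrative stand-ins, not DeepMind's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

import numpy as np

@dataclass
class Observation:
    rgb: np.ndarray       # camera frame, e.g. (H, W, 3) uint8
    instruction: str      # natural-language task, e.g. "pack a lunch"

@dataclass
class Action:
    joint_deltas: np.ndarray  # per-joint position deltas for one control step
    gripper: float            # 0.0 = open, 1.0 = closed

class StubVLAPolicy:
    """Stand-in for a vision-language-action model: maps (image, text)
    to a short chunk of low-level actions. A real VLA runs a large
    transformer here; this stub just commands zero motion."""

    def predict_chunk(self, obs: Observation, horizon: int = 10) -> List[Action]:
        return [Action(np.zeros(7), gripper=0.0) for _ in range(horizon)]

def run_episode(policy: StubVLAPolicy,
                get_obs: Callable[[], Observation],
                send_action: Callable[[Action], None],
                max_steps: int = 100) -> None:
    """Closed-loop control: re-observe, re-plan a chunk, execute it."""
    executed = 0
    while executed < max_steps:
        obs = get_obs()
        for action in policy.predict_chunk(obs):
            send_action(action)
            executed += 1
```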
But this brings us to the second point, which is interactivity. Basically, in order to operate in the real world, robots have to be able to look at their surroundings and figure out on the fly exactly what to do. And because Gemini Robotics is built on Gemini 2.0, it's able to figure all this out quite intuitively. For instance, it uses Gemini's advanced language understanding to interpret the different phrases and commands humans might say, and it can respond to a much broader set of natural-language instructions than previous models. It can even change its behavior according to the input.
So if a human says, "Put this on top of the refrigerator... no, put it down in the sink," it can detect the change, switch goals, and re-plan what it needs to do next. This type of steerability allows these robots to work much more naturally with humans in real settings.
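Here's a minimal sketch of what that steerability looks like in a control loop: the newest human instruction is re-read on every planning step, so a correction like "no, put it down in the sink" immediately replaces the previous goal. The `policy.act(obs, goal)` interface is a hypothetical stand-in.

```python
import queue

# The instruction queue would be fed by, e.g., a speech-to-text thread.
instruction_q: "queue.Queue[str]" = queue.Queue()

def latest_instruction(current: str) -> str:
    """Drain the queue so the newest human correction wins."""
    while not instruction_q.empty():
        current = instruction_q.get_nowait()
    return current

def control_loop(policy, get_obs, send_action, steps: int = 200) -> None:
    goal = "put the cup on top of the refrigerator"
    for _ in range(steps):
        goal = latest_instruction(goal)     # mid-task corrections take effect here
        obs = get_obs()
        send_action(policy.act(obs, goal))  # policy is conditioned on the text goal

# A correction arriving mid-task simply replaces the goal:
instruction_q.put("no, put it down in the sink")
```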
And this brings us to number three, which is dexterity. Gemini Robotics allows for a brand-new level of dexterity that, frankly, I've never seen before: robots carrying out extremely complex multi-step tasks that demand fine dexterity and force control, like origami folding or packing snacks into a Ziploc bag. This hasn't quite been done at this level before, but Gemini Robotics allows it. And not only that, it works across multiple embodiments, meaning different robots can carry out these tasks, including bi-arm robotic platforms like ALOHA 2 and humanoid robots like Apptronik's Apollo. Gemini Robotics can even be specialized for more complex embodiments.
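One plausible way to picture that multi-embodiment support (purely a sketch, not DeepMind's actual architecture) is a shared, embodiment-agnostic action space with a thin per-robot adapter that maps it onto each platform's joints:

```python
from abc import ABC, abstractmethod

import numpy as np

class EmbodimentAdapter(ABC):
    """Maps the model's embodiment-agnostic action vector onto one robot."""

    @abstractmethod
    def to_joint_command(self, abstract_action: np.ndarray) -> np.ndarray:
        ...

class BiArmAdapter(EmbodimentAdapter):
    """Bi-arm platform in the style of ALOHA 2: two 7-DoF arms."""

    def to_joint_command(self, a: np.ndarray) -> np.ndarray:
        return a[:14]  # the first 14 dims map directly onto the two arms

class HumanoidAdapter(EmbodimentAdapter):
    """Humanoid with more joints than the abstract space covers;
    unmapped joints simply hold position (zeros here)."""

    N_JOINTS = 30

    def to_joint_command(self, a: np.ndarray) -> np.ndarray:
        cmd = np.zeros(self.N_JOINTS)
        cmd[:14] = a[:14]  # placeholder one-to-one arm mapping
        return cmd

def step(policy, adapter: EmbodimentAdapter, obs) -> np.ndarray:
    abstract = policy.act(obs)          # same policy for every robot
    return adapter.to_joint_command(abstract)
```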
But now on to Gemini's enhanced understanding. Alongside Gemini Robotics, they've also introduced Gemini Robotics-ER (Embodied Reasoning), a model that enhances robots' overall understanding, gives them better spatial reasoning, and lets roboticists connect it with their existing low-level controllers. Basically, Gemini Robotics-ER improves on Gemini 2.0's existing skills with 3D detection and pointing, letting it combine spatial reasoning with Gemini's coding abilities. As a result, Gemini Robotics-ER can instantiate entirely new capabilities on the fly. For example, shown a coffee mug, it can determine an appropriate two-fingered grasp for picking it up by the handle and a safe trajectory for approaching it.
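As a rough illustration of that pointing capability, here's a sketch using the public google-genai Python SDK, assuming Gemini's documented convention of returning points as [y, x] pairs normalized to 0-1000. The prompt and parsing are illustrative, not DeepMind's internal grasping pipeline.

```python
import json

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key
image = Image.open("workspace.jpg")            # robot's camera frame

prompt = (
    'Point to the best two-fingered grasp location on the coffee mug\'s '
    'handle. Answer as JSON: [{"point": [y, x], "label": "grasp"}], '
    "with coordinates normalized to 0-1000."
)
resp = client.models.generate_content(
    model="gemini-2.0-flash", contents=[image, prompt]
)
# Real responses may wrap the JSON in markdown fences; strip before parsing.
points = json.loads(resp.text)
y, x = points[0]["point"]
w, h = image.size  # PIL size is (width, height)
pixel = (x / 1000 * w, y / 1000 * h)  # convert to pixel coordinates
print("grasp target (px):", pixel)
```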
Gemini Robotics-ER can even perform all the steps needed to control a robot right out of the box, without prior programming, including perception, state estimation, 3D spatial understanding, task planning, and code generation, and it can even handle tasks end to end. As a result, Gemini Robotics-ER achieved a 2x to 3x higher success rate than Gemini 2.0 by itself. And if code generation isn't sufficient for the task, Gemini Robotics-ER can even use the power of in-context learning, following patterns from a handful of human demonstrations.
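A minimal sketch of that in-context learning fallback might look like the following: a few human demonstrations are serialized into the prompt, and the model is asked to continue the pattern for a new object. The trajectory format and coordinates here are invented for illustration.

```python
# Each demonstration is a short gripper trajectory: (x, y, z) waypoints.
def format_demo(demo) -> str:
    """Serialize one demonstration as text the model can imitate."""
    steps = "; ".join(
        f"t={t}: gripper at ({x:.2f}, {y:.2f}, {z:.2f})"
        for t, (x, y, z) in enumerate(demo)
    )
    return f"Demonstration: {steps}"

demos = [
    [(0.10, 0.20, 0.30), (0.12, 0.22, 0.15), (0.12, 0.22, 0.05)],
    [(0.40, 0.10, 0.30), (0.42, 0.12, 0.15), (0.42, 0.12, 0.05)],
]

prompt = "\n".join([
    "Task: pick up the snack bag and place it in the Ziploc bag.",
    *[format_demo(d) for d in demos],
    "Now produce the next trajectory for the object at (0.25, 0.30):",
])
# This prompt would be sent to the model alongside the current camera image.
print(prompt)
```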
But along with generalizability, interactivity, and dexterity, there's also the robot constitution, because Google DeepMind is placing a very strong emphasis on safety with Gemini Robotics-ER. This is to ensure that these advanced models actually adhere to specific robotic safety standards, including collision avoidance, limits on contact forces, and maintaining dynamic stability for mobile platforms. These foundational safety measures interface with the low-level, safety-critical controllers for each robotic embodiment, and Gemini uses its sophisticated reasoning capabilities to determine in real time whether an action is safe to execute.
It adapts its responses to the context to minimize risk. In fact, to extend these efforts beyond their own research, DeepMind even released the ASIMOV dataset, a comprehensive resource designed to evaluate and improve semantic safety in embodied AI systems. This initiative also builds on a robot constitution inspired by Isaac Asimov's Three Laws of Robotics, which guides the AI toward safer task selection. Basically, DeepMind has developed an automated framework that generates data-driven natural-language rules, letting users define specific behavioral guidelines for robot actions to follow so that they stay within human ethical standards.
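A toy version of that constitution-style screening, with made-up rules and a placeholder model call, might look like this: each proposed action is judged against the natural-language rules before it's forwarded to the low-level controller.

```python
# Illustrative rules; DeepMind's actual framework generates these from data.
CONSTITUTION = [
    "Do not apply contact forces that could injure a nearby human.",
    "Do not move in a way that risks collision with people or fragile objects.",
    "Prefer refusing a task over violating the rules above.",
]

def ask_model(question: str) -> str:
    """Placeholder for an LLM call returning 'SAFE' or 'UNSAFE: <reason>'."""
    raise NotImplementedError

def screen_action(action_description: str) -> bool:
    """Return True only if the model judges the action rule-compliant."""
    rules = "\n".join(f"- {r}" for r in CONSTITUTION)
    verdict = ask_model(
        f"Rules:\n{rules}\n\nProposed robot action: {action_description}\n"
        "Does this action comply with every rule? Answer SAFE or UNSAFE "
        "with a one-line reason."
    )
    return verdict.strip().upper().startswith("SAFE")

# Usage: only forward the action to the low-level controller if it passes.
# if screen_action("hand the scissors to the user blade-first"): execute(...)
```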
The ASIMOV dataset also enhances the framework by offering a standardized way to test and quantify the safety implications of each robot decision across real-world environments. And this is just part of what DeepMind is doing to advance responsible AI. But overall, these robots are moving toward doing things they've never been able to do before. By combining generalizability, interactivity, and dexterity, we're about to see robots generalize in real time in real-world environments and take over human tasks in a way that's safer and smarter than ever before.
And because of this multi-embodiment support, it can probably be interfaced with almost any robot. But make sure to tell me what you think in the comments. Like and subscribe. And watch this next video if you want to know more about the next paradigm shift in Japanese robots.