Transcript
You can see it cleaning the table with the Dual World model, which we’ll get into in just a second. But it’s important to note that it features 31 distributed pressure sensors throughout its body, and it can lift up to 3 kilograms of weight per hand. That’s what you’re seeing here as it opens the drawer and puts the cup and plate into it one by one. And it’s doing this using its Dual World model: beyond just predicting the future before acting, this embodied AI can also generate and complete 6-second visual scripts of the task.
It then executes its predictions in real time at 30 frames per second. It’s like watching a movie of itself succeeding, and then it makes it real. You can see this robot picking up the book from the lower shelf and putting it on the top shelf; to carry out these commands, it actually uses two brains working together. Here it’s using a spoon to stir the vegetables in the pot, although tell me in the comments if you think this is a thorough stir. But back to the Dual World model and its two brains.
The first is the slow-thinker planning brain, which uses WAN 2.2, a 5-billion-parameter model. It generates 33 frames, covering about 6.6 seconds of the future, and it creates a short film of the manipulation task, like what you’re seeing here. It maintains visual consistency across the entire horizon, so objects keep a coherent appearance, motions follow physically plausible trajectories, and, most importantly, spatial relationships stay stable. This provides reliable visual guidance for the whole sequence, but it takes about 900 milliseconds, which is too slow for control on its own. That’s where the fast-reasoner action brain comes in.
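To make those timing figures concrete, here is a minimal back-of-the-envelope sketch in Python. It only uses the numbers quoted above (33 frames, 6.6 seconds of horizon, ~900 ms per plan, 30 Hz control); it is illustrative arithmetic, not an official spec of the system.

```python
# Illustrative arithmetic only -- the constants are the figures quoted in the video.
PLAN_FRAMES = 33          # frames produced per plan by the slow planner (WAN 2.2, ~5B params)
PLAN_HORIZON_S = 6.6      # seconds of future the plan covers
PLAN_LATENCY_S = 0.9      # ~900 ms to generate one plan
CONTROL_HZ = 30           # the fast action brain ticks at 30 Hz

plan_fps = PLAN_FRAMES / PLAN_HORIZON_S                     # ~5 predicted frames per second
control_period_s = 1.0 / CONTROL_HZ                         # ~33 ms between motor commands
ticks_per_planning_call = PLAN_LATENCY_S / control_period_s # ~27 control ticks per plan

print(f"plan rate: {plan_fps:.1f} fps, control period: {control_period_s * 1000:.0f} ms")
print(f"one planning call spans about {ticks_per_planning_call:.0f} control ticks")
```

In other words, if the robot waited for a fresh plan before every action, it would stall for roughly 27 control cycles at a time, and that is the gap the fast action brain exists to fill.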
This is the second part of the Dual World model, and it uses V-JEPA, a 600-million-parameter model. Here it’s picking up chips from the box and putting them in the upper cabinet. What you’re seeing is its motion-aware encoder for real-time action extraction: it processes the predicted frames plus the current observation and runs at 30 hertz, or about every 33 milliseconds. It’s pre-trained on Ego4D human manipulation videos, so it can understand motion dynamics, affordances, and occlusions, meaning when something is blocking its view, like when it’s picking these clothes out of the washing machine and some of the other clothes are hidden from view.
Despite this, it’s still able to translate visual guidance into motor commands in just 40 milliseconds per action chunk. The key innovation here is asynchronous streaming, which means both of these brains work concurrently: the slow model runs in a background thread at around 900 milliseconds per plan, while the fast model runs in the main control thread at roughly one-millisecond latency. The predictions populate a queue system, with each prediction being reused four times before a new one is requested. The advantage is that it not only amortizes the computational cost but also affords five billion parameters of reasoning without bottlenecking control.
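Here is a minimal Python sketch of what that asynchronous streaming pattern could look like, under the assumptions stated in the video (a ~900 ms planner, a 30 Hz action loop, and each plan reused four times before a refresh). The `slow_planner` and `fast_actor` functions are hypothetical stand-ins, not the actual models.

```python
import queue
import threading
import time

PLAN_LATENCY_S = 0.9      # slow world-model planner: ~900 ms per plan (per the video)
CONTROL_HZ = 30           # fast action brain ticks at 30 Hz
REUSE_PER_PLAN = 4        # each plan is reused four times before a fresh one is requested

plan_queue = queue.Queue(maxsize=1)

def slow_planner(observation):
    """Hypothetical stand-in for the ~5B-parameter video planner."""
    time.sleep(PLAN_LATENCY_S)                    # simulate generation latency
    return {"frames": 33, "made_at": time.time()}

def fast_actor(plan, observation):
    """Hypothetical stand-in for the ~600M-parameter action extractor."""
    return f"action chunk from plan made at {plan['made_at']:.2f}"

def planner_thread(get_observation):
    # Background thread: keep the queue topped up with the latest plan.
    while True:
        plan = slow_planner(get_observation())
        try:
            plan_queue.put(plan, timeout=PLAN_LATENCY_S)
        except queue.Full:
            pass                                  # control loop hasn't consumed the last plan yet

def control_loop(get_observation, send_to_motors, steps=90):
    # Main thread: 30 Hz control, reusing each plan REUSE_PER_PLAN times.
    plan, uses_left = plan_queue.get(), REUSE_PER_PLAN
    for _ in range(steps):
        if uses_left == 0 and not plan_queue.empty():
            plan, uses_left = plan_queue.get_nowait(), REUSE_PER_PLAN
        send_to_motors(fast_actor(plan, get_observation()))
        uses_left = max(uses_left - 1, 0)         # keep reusing the old plan if no new one yet
        time.sleep(1.0 / CONTROL_HZ)

if __name__ == "__main__":
    get_obs = lambda: "camera + proprioception snapshot"
    threading.Thread(target=planner_thread, args=(get_obs,), daemon=True).start()
    control_loop(get_obs, send_to_motors=print)
```

The point is the one made in the transcript: the 30 Hz control loop never waits on the planner, and the planner’s cost is amortized across several action chunks.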
So it runs at a 30-hertz control frequency while still preserving long-horizon consistency. But the real question is, how well does it recognize these objects? The command says to grasp the moldy bread, pass it to the right hand, and throw it into the trash bin. It’s using two hands here, but can it really identify how well something is cooked or how moldy something is? Tell us in the comments down below if you would trust this robot to prepare your food and know what is appropriate for the human body. Now it’s time to compare against Figure 03’s Helix 2.
This uses a three-system approach with Systems 0, 1, and 2; if you want to know more about that, I’ll link the video in the corner here. We can see that this robot is handling glasses now, whereas in the last demo it was handling what looked to be plastic. It’s using bimanual manipulation with its Helix 2 vision-language-action model. Let’s see how it does picking up these cups from the table with both hands and working them into the dishwasher, and whether it gets the fit on the first try.
It’s not too bad; it’s about what you could expect from a human, probably, but it is much slower, that’s for sure. So it looks like, in terms of precision, this robot is roughly to the point of being able to serve in the home, but how long would you actually have to wait for it to carry out a long-horizon task like creating a meal from scratch? Could it take hours? Meanwhile, Google DeepMind is taking a major step toward accessible world models today with Project Genie, an experimental prototype now rolling out to Google AI Ultra subscribers in the United States.
It’s built on Genie 3, Google’s general-purpose world model that they previewed in August, and it lets web-app users create, explore, and remix interactive environments that generate in real time as you move through them. Unlike static 3D snapshots, Genie 3 generates the path ahead dynamically as you interact with the world. It simulates physics and interactions while maintaining breakthrough consistency across the experience, and it enables everything from robotic simulations to historical recreations to pure fiction. Google is saying that this is essential infrastructure for AGI: systems that will need to navigate the real world, not just master chess or Go.
Project Genie centers on three core capabilities for now. First is world sketching, where you prompt with text and images to create living environments, defining not just the world but your character and how you’ll explore it, whether walking, riding, flying, driving, or anything else. The integration with Nano Banana Pro lets you preview and fine-tune your world before entering, including choosing a first-person or third-person perspective. Second is world exploration: as you navigate, the environment generates ahead in real time based on your actions, with adjustable camera controls throughout. And third is world remixing.
You can build on existing worlds, explore curated examples in a gallery, or even use the randomizer for inspiration, and then download videos of your explorations. But Google is transparent about the current limitations: they say that generated worlds may not look photorealistic or perfectly follow prompts and physics, characters can experience control issues or latency, and sessions are capped at 60 seconds. In terms of Genie’s release, the rollout has already happened for Google AI Ultra subscribers 18 and older, with plans to expand to more territories soon. By opening Genie 3 to their most advanced AI subscribers, Google is aiming to understand how people will actually use these world models across AI research and generative media.
Anyways, like and subscribe, and check out this video here for more of the latest in AI news.
