Transcript
This approach, known as zero-shot sim-to-real transfer, means the system had to perform in conditions that it had never physically encountered before. And the fact that the hand succeeded repeatedly suggests a high degree of fidelity between Sanctuary AI’s simulated training environment and the actual dynamics of the real world. The manipulation itself took place entirely at the fingertips, with the palm playing no supportive role. This requires the hand to do two things simultaneously: maintain a stable grasp on the cube while also making incremental progress toward the goal orientation. And fingertip-only manipulation is considered more difficult than palm-assisted approaches because it demands precise coordination across multiple fingers with limited surface contact.
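To make that trade-off concrete, here’s a minimal sketch of the kind of reward shaping such a policy might be trained with, balancing grasp stability against rotation progress. The terms and weights are illustrative assumptions, not Sanctuary AI’s published objective.

```python
# Illustrative sketch (not Sanctuary AI's actual objective): a reward for
# fingertip-only in-hand reorientation typically balances two terms --
# keeping the object grasped and rotating it toward the goal orientation.
import numpy as np

def quat_angle(q_current, q_goal):
    """Smallest rotation angle (radians) between two unit quaternions."""
    dot = abs(np.dot(q_current, q_goal))
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

def reorientation_reward(q_current, q_goal, fingertip_contacts, dropped,
                         w_rot=1.0, w_contact=0.1, drop_penalty=10.0):
    """Reward simultaneous grasp stability and progress toward the goal.

    fingertip_contacts: number of fingertips currently touching the cube.
    dropped: True if the cube left the hand (the palm gives no support here).
    """
    if dropped:
        return -drop_penalty
    rot_term = -w_rot * quat_angle(q_current, q_goal)   # progress toward goal
    contact_term = w_contact * fingertip_contacts       # reward stable grasp
    return rot_term + contact_term
```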
According to Sanctuary AI, capabilities like these form a foundation for more complex tasks such as precise insertion and tool use. And simulating anthropomorphic hands at this level of detail remains a significant technical challenge: these hands involve high degrees of freedom, with complex contact dynamics and mechanical interactions that are difficult to model accurately. Plus, small errors in simulation can compound quickly, causing policies that work in virtual environments to fail when transferred to the physical system. So the fact that Sanctuary AI’s policy transferred successfully indicates that the company has made meaningful progress in building realistic simulation environments capable of capturing the nuances of dexterous hand movements.
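Domain randomization is the standard technique for keeping those compounding simulation errors from breaking transfer. A minimal sketch of the idea, with assumed parameter names and ranges rather than Sanctuary AI’s actual values, might look like this:

```python
# A minimal domain-randomization sketch: each training episode samples new
# physics parameters, so the learned policy can't overfit to any single
# simulated world. All names and ranges here are illustrative assumptions.
import random

def randomize_sim_params():
    """Sample fresh simulation parameters at the start of each episode."""
    return {
        "cube_mass_kg":      random.uniform(0.03, 0.12),
        "friction_coeff":    random.uniform(0.5, 1.2),
        "actuator_gain":     random.uniform(0.85, 1.15),  # actuation response
        "sensor_noise_std":  random.uniform(0.0, 0.02),
        "action_latency_ms": random.uniform(0.0, 40.0),
    }

# Training-loop outline: every episode sees a slightly different "world",
# so small modeling errors no longer compound into transfer failure.
for episode in range(3):
    print(f"episode {episode}: {randomize_sim_params()}")
```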
On top of this, Sanctuary’s robotic hands are hydraulically actuated, which the company says provides advantages in strength, speed, and control compared to other actuation methods. And the hands’ active degrees of freedom enable finger abduction, the ability to spread the fingers apart, along with other sophisticated motions that aren’t commonly available in commercial robotic hands. Importantly, in-hand manipulation has long been considered one of the more demanding problems in the field, as it requires a system to manage unstable contact points, adapt to shifting object dynamics in real time, and execute fine motor adjustments, all without any external support.
So achieving repeated success at this task, particularly with a policy that’s never been exposed to the real world, reflects both the effectiveness of the learned control strategy and the mechanical capability of the hardware platform. But another company says that its robots have essentially stopped failing. Generalist AI just announced Gen 1, an embodied foundation model that the company says has crossed into production-level reliability on a set of simple physical tasks, averaging a 99% success rate across several dexterous operations. These tasks include folding boxes, packing phones, folding t-shirts, and servicing robot vacuums, each performed hundreds of times in succession with near-zero failures.
Box folding, for example, was completed over 200 runs at a 99% success rate in roughly 12 seconds per cycle, which is about 2.8 times faster than the company’s previous Gen 0, which took approximately 34 seconds on identical boxes. With Gen 1, the company reports that further scaling, combined with algorithmic improvements, has pushed certain tasks past what it considers the threshold of commercial viability. And each task shown required only about one hour of robot-specific data to fine-tune. What’s important to note is that the base model is pre-trained without any robot data.
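As a quick sanity check on those figures, the quoted speedup and the implied failure count follow directly:

```python
# Checking the numbers quoted above: the cycle-time speedup and what a
# 99% success rate implies over 200 box-folding runs.
gen0_cycle_s, gen1_cycle_s = 34.0, 12.0
speedup = gen0_cycle_s / gen1_cycle_s
print(f"speedup: {speedup:.1f}x")  # ~2.8x, matching the claim

runs, success_rate = 200, 0.99
expected_failures = runs * (1 - success_rate)
print(f"expected failures over {runs} runs: {expected_failures:.0f}")  # ~2
```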
Instead, the base model draws from what Generalist AI describes as over half a million hours of real-world interaction data captured through low-cost wearable devices worn by humans performing everyday physical activities. This approach sidesteps the expense and scalability limitations of traditional teleoperation-based data collection. And beyond reliability and speed, the company highlighted what it calls improv intelligence, the model’s ability to recover from unexpected situations without predefined instructions. In one example, during an automotive parts-kitting task, the robot autonomously regrasped a displaced washer using multiple recovery strategies, including bimanual coordination, none of which were explicitly part of the training data.
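Purely as an illustration, a recovery behavior like that regrasp could be organized as a simple fallback chain at the control level. The robot interface and strategy names below are hypothetical, not Generalist AI’s disclosed design:

```python
# Hypothetical sketch of "recovering without predefined instructions":
# try progressively broader regrasp strategies until one succeeds.
# The robot methods here are placeholder names for illustration only.
def recover_grasp(robot, part):
    strategies = [
        robot.regrasp_single_hand,   # cheapest recovery attempt first
        robot.nudge_and_regrasp,     # reposition the part, then retry
        robot.bimanual_regrasp,      # two-handed coordination, as in the demo
    ]
    for attempt in strategies:
        if attempt(part):
            return True
    return False  # escalate to a human or abort the task
```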
But robots aren’t the only AI systems acquiring abilities that they weren’t explicitly trained for. Alibaba’s newly released Qwen 3.5 Omni, an omni-modal model that processes text, images, audio, and video, just developed an unexpected capability during training: it can write functional code from spoken instructions and video input, a skill that the team never specifically built in. The Qwen team calls it audio-visual vibe-coding, and describes it as the emergent byproduct of scaling native multimodal pre-training across more than 100 million hours of audio-visual data. In one demonstration, the model builds a working Snake game from a verbal description paired with a video clip.
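For a sense of what invoking such a capability might look like, here’s a hypothetical sketch. The client, model identifier, and parameter names are placeholder assumptions, since the model is API-only and its real interface isn’t described here:

```python
# Hypothetical sketch of "audio-visual vibe-coding": prompting an omni-modal
# model with spoken instructions plus a video clip and asking for runnable
# code. OmniClient is a placeholder, not a real SDK.
from my_omni_client import OmniClient  # assumed client for illustration

client = OmniClient(api_key="...")
response = client.generate(
    model="qwen3.5-omni",                # assumed model identifier
    audio="describe_snake_game.wav",     # spoken description of the task
    video="gameplay_reference.mp4",      # visual reference clip
    prompt="Write a complete, runnable implementation of what is described.",
)
print(response.text)  # expected output: working game code, per the demo
```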
And beyond the surprise of its coding ability, Qwen 3.5 Omni Plus claims state-of-the-art results on 215 audio and audio-visual subtasks. But unlike previous Qwen releases, model weights haven’t yet been published, and Qwen 3.5 Omni is currently only available as an API service. While Alibaba is keeping its latest model weights behind an API, Google is going the opposite direction: it just released Gemma 4, its most capable open model family to date, and it comes with a fully permissive Apache 2.0 license, a direct response to developer feedback requesting fewer restrictions.
More importantly, Gemma 4 arrives in four sizes: an effective 2B (E2B), an effective 4B (E4B), a 26B mixture-of-experts variant, and a 31B dense model. It’s built from the same research underpinning Google’s proprietary Gemini 3, a family designed for advanced reasoning and agentic workflows. The 31B model currently ranks as the third-highest open model on the Arena AI text leaderboard, with the 26B model sitting just behind it, and both outperform models up to 20 times their size. As for the smaller E2B and E4B models, they’re engineered for on-device use, running offline on phones, Raspberry Pi units, and even edge hardware with near-zero latency.
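As a rough sketch of that on-device use, loading a small open checkpoint through the widely used Hugging Face transformers API could look like the following. The model ID is a placeholder assumption; substitute whatever checkpoint name Google publishes for the E2B/E4B variants:

```python
# Minimal local-inference sketch using the Hugging Face transformers API.
# The model ID below is a placeholder for illustration, not a real release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-e2b-placeholder"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain mixture-of-experts in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```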
And all four models natively process images and video, support context windows of up to 256,000 tokens, and handle over 140 languages. Plus, the E2B and E4B variants also accept native audio input. Since the first generation, the Gemma series has already been downloaded over 400 million times, spawning more than 100,000 community-built variants. And finally, here’s a bonus from Blockbusterly. This is a preview of their BBLI-01, a humanoid robot that they’re planning to build, with key features including autonomous setup for tripod and lighting positioning, heavy lifting for carrying gear, smart power for autonomous battery monitoring and induction charging, and instant backup for real-time on-set cloud redundancy.
But these visuals are AI-generated, so tell us in the comments if you think this will actually be brought to production or if it’s just some clever marketing. Anyways, like and subscribe, and check out this video here for more on the latest in AI news.
