Transcript
So it’s like the robot daydreaming the solution, then just doing it. In fact, the next robot I’ll show you goes even further than Neo. But first, 1X’s world model is pre-trained on web-scale video of human interactions, then post-trained on robot data to ground it in real physics and in Neo’s body. And the key here is that human knowledge transfers easily because Neo also has a human-like form. But how smooth and fast is this robot really, compared to human movements? Let’s take a look at a demo of true zero-shot generalization.
This is where Neo packs an orange into a lunchbox. Then in another demo, Neo operates a toilet seat with zero prior data on anything similar. So no pick-and-place equivalent, no pure transfer of human knowledge. But can you trust that this is all autonomous? After all, 1X previously admitted to using remote teleoperation by human operators. Now, Neo’s world model can instead apply human knowledge to navigate these dynamic environments, which allows it to handle extreme or unpredictable surroundings. And here’s where the self-learning explosion happens, because Neo can now create unlimited data of itself doing real-world tasks.
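That “collect your own data, score it, retrain” flywheel can be sketched as a toy loop. Everything here, the `Policy` class, the probabilistic success model, and the update rule, is a hypothetical illustration of the general idea, not 1X’s actual training system.

```python
import random

class Policy:
    """Toy policy: a single success probability that 'fine-tuning' nudges up."""
    def __init__(self, p_success=0.2):
        self.p_success = p_success

    def attempt_task(self, rng):
        # The robot tries the task; the outcome is stochastic.
        return rng.random() < self.p_success

    def finetune(self, episodes):
        # Each self-labeled success nudges the policy up a little; a real
        # system would instead run gradient updates on logged trajectories.
        self.p_success = min(0.95, self.p_success + 0.05 * sum(episodes))

def flywheel(policy, rounds=5, attempts_per_round=20, seed=0):
    """Collect rollouts, self-label them, fine-tune, repeat."""
    rng = random.Random(seed)
    success_rates = []
    for _ in range(rounds):
        episodes = [policy.attempt_task(rng) for _ in range(attempts_per_round)]
        policy.finetune(episodes)  # the robot learns from its own outcomes
        success_rates.append(sum(episodes) / attempts_per_round)
    return success_rates

rates = flywheel(Policy())
```

Each round’s rollouts become the next round’s training data, which is why the per-task success rate tends to climb without any human labeling.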
So based on its own success or failure, it can learn and improve autonomously over time, which is the flywheel effect of self-learning. But what if another embodied-intelligence company has already one-upped them? Because Skild AI isn’t just teaching one robot with video, they’re teaching any robot with video, and you won’t believe how quickly they can do it. But before we get into that, here’s why this is so hard: video doesn’t show forces, torques, or tactile feedback. And the bigger problem is that a human hand, a seven-degree-of-freedom arm, and a quadruped all move completely differently.
So mapping a human grasp, for instance, to a robot actuation is actually a massive translation problem. And this is exactly the breakthrough that Skild AI just made, because their newest model finally bridges the embodiment gap as its core capability. Plus, it pre-trains on video demonstrations alone. And in terms of data, it needs less than one hour of actual robot data to learn new skills, working across different robot types, which is also known as omni-bodied learning. So with less than one hour of robot data, everything now changes. In fact, this could fundamentally break the robotics data bottleneck, as it would finally make foundation-model scaling feasible for robots, as well as potentially transforming any human video into omni-bodied robot training data.
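A minimal sketch of that translation problem: mapping one human pinch measurement onto two very different grippers by normalizing through a shared openness fraction. The constants and gripper descriptions are illustrative assumptions, not Skild AI’s actual method.

```python
def retarget_pinch(human_aperture_m, gripper):
    """Map a human thumb-index aperture (meters) to a gripper command.

    Normalizing through a shared [0, 1] 'openness fraction' lets the same
    human demonstration drive grippers with very different geometries.
    """
    HUMAN_MAX_APERTURE = 0.10  # assumed ~10 cm maximum pinch span
    frac = max(0.0, min(1.0, human_aperture_m / HUMAN_MAX_APERTURE))
    closed, opened = gripper["closed_cmd"], gripper["open_cmd"]
    return closed + frac * (opened - closed)

# Two embodiments, one shared command space (both hypothetical):
parallel_jaw = {"closed_cmd": 0.0, "open_cmd": 0.08}  # jaw gap in meters
finger_joint = {"closed_cmd": 1.2, "open_cmd": 0.1}   # joint angle in radians

half_pinch_jaw = retarget_pinch(0.05, parallel_jaw)   # ~0.04 m gap
half_pinch_hand = retarget_pinch(0.05, finger_joint)  # ~0.65 rad
```

The same 5 cm human pinch lands in the middle of each gripper’s own range, which is the essence of retargeting one demo across embodiments; real systems of course map full arm and hand trajectories, not a single scalar.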
And while Skild AI’s omni-bodied learning approach is a massive leap forward, there’s still one catch: you need at least some robot data to bridge the gap between what a human does and what a robot can physically execute. But what if you could eliminate even that? For instance, what if a robot could watch you do something just once, then figure out how to do it all by itself, with zero teleoperation? Well, that’s exactly what Mentee Robotics just demonstrated, and their process is almost unsettlingly simple. They call it real-to-sim-to-real, and it breaks down into three specific stages.
Observe, train, and perform. No motion-capture suits, no markers, no human operator puppeteering the robot, just a robot watching a person do a task, then teaching itself how to replicate it. So it starts with the observe stage, where MenteeBot watches a human mentor perform a task from its own viewpoint, not from a third-party camera angle but from the robot’s actual perspective. It captures the full-body motion of the mentor, how they interact with objects, as well as the overall flow of the task. And the entire demonstration is recorded as one simple video. Then comes the training stage, where things get interesting.
This is because the recorded video gets fed into Mentee’s simulation environment, where the foundation model interprets the visual demonstration and reconstructs the entire task inside the simulator: object relationships, motion trajectories, task structure, all of it. And once it’s instantiated in the simulation, the robot trains at scale using self-play and reinforcement learning. The simulator generates thousands of task variations, each with small but meaningful differences, covering edge cases that would be slow, expensive, or even dangerous to collect in the real world. And this is where the marginal cost of learning drops to near zero.
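That variation-generation step is essentially domain randomization. Here’s a minimal sketch; the specific parameters (object pose, mass, friction) and their ranges are illustrative assumptions, not Mentee’s actual simulator.

```python
import random

def randomize_scene(base, rng):
    """Perturb one demo-reconstructed scene into a new training variation."""
    x, y = base["object_xy"]
    return {
        "object_xy": (x + rng.uniform(-0.05, 0.05),   # shift the object slightly
                      y + rng.uniform(-0.05, 0.05)),
        "object_mass": base["object_mass"] * rng.uniform(0.8, 1.2),
        "friction": base["friction"] * rng.uniform(0.7, 1.3),
    }

def make_variations(base, n=1000, seed=42):
    """Fan one demonstration out into n slightly different training scenes."""
    rng = random.Random(seed)
    return [randomize_scene(base, rng) for _ in range(n)]

base_scene = {"object_xy": (0.4, 0.0), "object_mass": 0.3, "friction": 0.9}
variations = make_variations(base_scene)
```

Each variation is nearly free to generate, which is the point about marginal cost: one real video seeds thousands of simulated practice runs, including edge cases you would never risk on hardware.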
Because right now, this process runs offline, but Mentee says they’re working towards making it instantaneous. And finally, we get to the perform stage. Using Mentee’s proprietary sim-to-real transfer tech, the trained behavior moves back onto the physical robot, and MenteeBot then executes the task autonomously in the real world, preserving the intent of the original human demo. So basically, the human shows it once, and the robot figures out the rest. And what really makes Mentee’s approach different from Skild and 1X is the full pipeline. Because Skild cracked omni-bodied learning from video, and 1X built a world model that lets Neo imagine and execute tasks.
But Mentee is claiming something even more streamlined: one human video, fully automated simulation training, and real-world execution with no teleoperation anywhere in the loop. And if this scales, the robot-training bottleneck wouldn’t just be reduced, it could be eliminated altogether. And before we get to the final robot that I’m about to show you, we’re going to look at another piece of the puzzle that often gets overlooked: how does a robot actually think while it’s moving? Well, most systems treat cognition and motion as separate problems. The brain plans, then the body executes. But LimX Dynamics is taking a different approach with something they call COSA, the Cognitive Operating System of Agents.
And COSA is what they’re calling a physical-world-native agentic OS. The key idea here is unification: instead of high-level thinking happening in one system and motion control happening in another, COSA merges them into a single architecture. So the robot isn’t planning first and then moving; instead, it’s thinking while acting, in real time, in real environments. And their humanoid robot, Oli, is the first to run on COSA, with LimX Dynamics claiming that it’s the first humanoid with both loco-manipulation and high-level autonomous cognition working together. And that’s significant, because loco-manipulation means the robot can walk, balance, and manipulate objects simultaneously, not sequentially.
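The difference between plan-then-execute and thinking-while-acting can be sketched with a toy one-dimensional world whose goal drifts while the robot moves. Every name and number below is an illustrative assumption, not COSA’s architecture.

```python
def make_world():
    # A 1-D toy task: the robot at `pos` must reach `goal`,
    # but the goal drifts while the robot is moving.
    return {"pos": 0, "goal": 5}

def plan_fn(world):
    """Cognition: a straight-line plan of unit steps toward the current goal."""
    direction = 1 if world["goal"] >= world["pos"] else -1
    return [direction] * abs(world["goal"] - world["pos"])

def act_fn(world, step):
    """Control tick: move one step; meanwhile the environment changes."""
    world["pos"] += step
    world["goal"] = min(world["goal"] + 1, 12)  # the goal keeps drifting

def plan_then_execute(world, steps=10):
    # Baseline: freeze the plan up front, then run it open-loop.
    plan = plan_fn(world)
    for step in plan[:steps]:
        act_fn(world, step)

def think_while_acting(world, steps=10, replan_every=3):
    # Unified loop: control runs every tick, while cognition periodically
    # refreshes the plan from the *current* state, so the robot adapts
    # mid-motion instead of stopping to recalculate.
    plan, cursor = plan_fn(world), 0
    for t in range(steps):
        if t % replan_every == 0:
            plan, cursor = plan_fn(world), 0
        if cursor < len(plan):
            act_fn(world, plan[cursor])
            cursor += 1
```

Run both for the same number of ticks and the replanning loop ends much closer to the drifting goal; interleaving cognition with motion is what buys that adaptivity.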
And combined with autonomous cognition, Oli can now theoretically adapt to dynamic situations without even stopping to recalculate. And finally, while most companies are focused on how robots learn, Matrix Robotics is asking a different question: what should a robot actually feel like? Their third-generation humanoid, Matrix 3, isn’t just an upgrade; it’s a systematic reconstruction from the ground up, and it starts with something surprisingly human: its skin. In fact, Matrix 3 features a 3D-woven biomimetic skin covering its entire chassis. And it’s not just for aesthetics, because this flexible fabric actually embeds a distributed sensing network that cushions contact and detects impact force in real time.
And when you combine it with fingertip tactile sensors that can detect pressure as low as 0.1 newtons, Matrix 3 doesn’t just grab objects, it can now feel them. It does this using a visual-tactile feedback loop to judge material, shape, and grip stability, which means it can handle fragile or flexible objects without crushing them. Anyways, like and subscribe, and check out this video here for more of the latest in AI news.
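The grip-without-crushing behavior can be sketched as a force-feedback loop around that 0.1-newton detection threshold. The sensor model, crush limit, and step size below are assumptions for illustration, and this sketch covers only the tactile half of the loop, not the visual half.

```python
CONTACT_THRESHOLD_N = 0.1  # minimum detectable fingertip force, per the spec above
CRUSH_LIMIT_N = 2.0        # assumed safe force ceiling for a fragile object

def close_gripper(read_force, step=0.05, max_steps=100):
    """Tighten gradually; stop as soon as contact is felt.

    `read_force(cmd)` stands in for a real fingertip sensor: it returns the
    force measured at commanded grip effort `cmd`.
    """
    cmd = 0.0
    for _ in range(max_steps):
        force = read_force(cmd)
        if force >= CONTACT_THRESHOLD_N:
            return cmd, force  # contact sensed: hold, don't squeeze further
        cmd = min(cmd + step, CRUSH_LIMIT_N)
    return cmd, read_force(cmd)

# Toy object: no resistance until the gripper closes past 0.3, then linear.
def toy_sensor(cmd):
    return max(0.0, cmd - 0.3)

cmd, force = close_gripper(toy_sensor)
```

The loop stops at the first reading above the contact threshold, far below the crush limit, which is the point of sub-newton tactile sensing: the robot feels contact long before it deforms the object.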
