

Transcript
For example, their avp_teleoperate repository allows for remote control of the G1 using headset devices like the Apple Vision Pro, and it captures data like joint states, actions, and camera inputs. This data is recorded and stored in JSON format so it can be repurposed for training, and that's where Unitree's unitree_IL_lerobot project comes into play: it's designed to take this teleoperation data and turn it into something the robot can learn from autonomously. For example, the joint angles, velocities, and camera feeds from sparring sessions are used in imitation learning to teach the robot to mimic the actions it was guided to perform, like throwing punches or dodging in a boxing match.
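To make that data flow concrete, here's a minimal sketch of logging one teleoperation frame to a JSON-lines file. The field names and structure are illustrative assumptions, not Unitree's actual schema:

```python
import json
import time

def record_frame(log_path, joint_states, actions, camera_frame_id):
    """Append one teleoperation frame to a JSON-lines log.

    Field names are illustrative; Unitree's actual schema may differ.
    """
    frame = {
        "timestamp": time.time(),
        "joint_states": joint_states,       # measured joint angles, e.g. radians
        "actions": actions,                 # commanded joint targets
        "camera_frame_id": camera_frame_id  # reference to a stored image frame
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(frame) + "\n")

# Example: log a single frame from a hypothetical sparring session.
record_frame(
    "g1_sparring_session.jsonl",
    joint_states=[0.12, -0.45, 0.88],
    actions=[0.15, -0.40, 0.90],
    camera_frame_id="frame_000123",
)
```

A log like this pairs what the robot sensed with what the human operator commanded, which is exactly the observation/action pairing that imitation learning needs.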
And Unitree confirms that its G1 is actively learning new skills, with both robots fighting each other to collect more training data, as the G1 can be controlled by motion capture, a controller, or voice commands. Unitree says it will be releasing several new features via a series of over-the-air updates, with an emphasis on safety. And if this doesn't become a multi-billion dollar cottage industry where people bet on robots the way they do on real-life sports, it'll probably at least become some kind of benchmark for measuring how agile a robot is: simply have it duke it out with another robot and see who wins.
In fact, I wouldn't be surprised if these robots are boxing humans and winning a good percentage of the time by 2030, which means that employing a robot bodyguard to protect you while you're walking down the street might not be too far away. But there's another robot that's learning a different set of skills to transform your home, and much sooner, as Steve Jurvetson just shared a clip on X of Neo Gamma doing real-world house chores on stage. This was while Bernt Børnich, the founder and CEO of 1X, talked on stage about guiding the 1X Neo Gamma with expert demonstrations so it can learn how to complete various tasks.
And a clear example of this is Neo Gamma already autonomously doing yard work as it bags up mulch all by itself. Again, this was not teleoperated; instead, the robot was likely taught via expert demonstrations, using teleoperation to guide it and to generate high-quality training datasets it can then build on autonomously. And when it comes to house chores, safety is obviously a top concern, which is why 1X has also designed its Neo Gamma to be as safe as possible with a knitted suit that eliminates pinch points, meaning the robot won't accidentally catch skin or hair in its joints.
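Learning from expert demonstrations like this is classically done with behavior cloning: regress the expert's actions from the robot's observations. Here's a minimal sketch of that idea; the network, dimensions, and data are placeholders, not 1X's actual training stack:

```python
import torch
import torch.nn as nn

# Toy demonstration data: observations (e.g. joint angles) -> expert actions.
# In practice these would come from recorded teleoperation logs.
obs = torch.randn(256, 12)            # 256 frames, 12-dim observation (placeholder)
expert_actions = torch.randn(256, 12) # matching expert commands (placeholder)

# Small policy network mapping observations to actions.
policy = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 12))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavior cloning: minimize the gap between the policy's output
# and what the human expert actually did in the same situation.
for epoch in range(100):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, the same policy runs without a human in the loop, which is what "teleoperate once, then continue autonomously" amounts to.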
And 1X went a step further on safety by designing its robots with tendon-driven actuation, where motors pull tendons rather than driving the joints directly, which makes the robot quieter, lighter, and safer. All of this allows Neo Gamma to work in spaces near humans and learn while it does. And to accelerate full autonomy, 1X is working with NVIDIA: they've created a dataset API through which they're sharing real-world data from their offices and homes, plus a software development kit for real-time action using a 5 Hz vision-action loop on an NVIDIA GPU.
And this is trained with an end-to-end neural network based on NVIDIA's GR00T N1 model, which works by taking raw camera inputs and directly outputting actions. This lets Neo Gamma do things it may not have been directly trained on before, like picking up a cup or doing various chores inside a kitchen. And within just a week of operating inside the home, 1X engineers were able to fine-tune the model using thousands of hours of Neo Gamma data. But Neo Gamma isn't the only humanoid robot using a different system for movement, as Clone Robotics also just showcased its Protoclone V1, with four design paradigms that could make it the most human-like of all.
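A pixels-in, actions-out policy running at a fixed rate can be sketched like this. The policy function is a stand-in for the actual GR00T-N1-based network, and the 12-dimensional action and 5 Hz rate are taken from the description above:

```python
import time

def policy(camera_image):
    """Stand-in for an end-to-end vision-to-action model; in the real
    system this would be the fine-tuned GR00T-N1-based network."""
    return [0.0] * 12  # placeholder joint targets

def vision_action_loop(get_camera_image, send_action, hz=5):
    """Run a fixed-rate control loop: read pixels, infer an action, act."""
    period = 1.0 / hz
    while True:
        start = time.time()
        action = policy(get_camera_image())  # raw pixels in, actions out
        send_action(action)
        # Sleep off the remainder of the 200 ms budget (at 5 Hz).
        time.sleep(max(0.0, period - (time.time() - start)))
```

At 5 Hz, the model has a 200-millisecond budget per step for capturing an image, running inference, and sending the command, which is why the inference runs on a GPU.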
And the first of these is the Clone's skeletal system, which consists of 206 bones, all made of lightweight polymers, with some fused together for more stability. On top of this, the robot's joints are fully articulated with artificial ligaments and tendons placed like a human's, giving the upper torso a total of 164 degrees of freedom, with 20 across the shoulders' four joints, 6 per vertebra in the spine, and 26 across each hand, wrist, and elbow. The second design paradigm is the robot's muscular system, which relies on Myofiber technology: synthetic muscle fibers that are combined into single musculotendon units and connect directly to specific bone attachment points to prevent tendon issues. Each fiber weighs 3 grams but produces 1 kilogram of force and contracts 30% in under 50 milliseconds, which matches the performance of animal muscle and overall gives the robot a better balance of speed, strength, and efficiency.
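As a quick sanity check on those figures, here's the arithmetic, using only the numbers quoted above:

```python
# Quick check on the quoted Myofiber specs (figures from the transcript).
fiber_mass_kg = 0.003        # each fiber weighs 3 grams
fiber_force_kgf = 1.0        # and produces 1 kilogram-force
contraction_strain = 0.30    # contracts 30% of its length
contraction_time_s = 0.050   # in under 50 milliseconds

force_to_weight = fiber_force_kgf / fiber_mass_kg
strain_rate = contraction_strain / contraction_time_s

print(f"force-to-weight ratio: {force_to_weight:.0f}:1")  # ~333:1
print(f"strain rate: {strain_rate:.1f} per second")       # 6.0 per second
```

A fiber lifting over 300 times its own weight, at a strain rate of 6 per second, is indeed in the ballpark of biological skeletal muscle, which is the comparison the claim is making.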
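One way to picture how that sensor suite comes together each control tick is the following sketch; the grouping, names, and interfaces are illustrative assumptions, not Clone Robotics' actual software:

```python
from dataclasses import dataclass

# Illustrative snapshot of the sensor suite described above; the field
# names and structure are assumptions for the sake of the example.
@dataclass
class SensorSnapshot:
    depth_images: list      # 4 depth cameras in the skull
    joint_imu: list         # 70 inertial sensors (joint angles/velocities)
    muscle_pressure: list   # 320 pressure sensors (muscle force)

def control_step(snapshot: SensorSnapshot, model):
    """One tick: fuse vision and proprioception, then command the valves."""
    observation = {
        "vision": snapshot.depth_images,
        "proprioception": snapshot.joint_imu + snapshot.muscle_pressure,
    }
    return model(observation)  # e.g. per-valve pressure commands
```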
And finally, the fourth design paradigm is the robot's vascular system, which powers its muscles with a hydraulic network. It uses a 500-watt pump the size of a human heart that circulates liquid at 40 standard liters per minute and 100 PSI, supplying the entire system, and it features Aquajet valves that control the water pressure with a three-way design, ensuring accurate fluid distribution to keep the muscles operational. This system integrates with the Clone's other components to enable consistent performance across its entire body.
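Those pump numbers are roughly self-consistent: hydraulic power is pressure times volumetric flow rate, and a quick calculation puts the required output just under the quoted 500-watt rating (before efficiency losses):

```python
# Sanity check: hydraulic power = pressure x volumetric flow rate.
flow_m3_per_s = 40 / 1000 / 60   # 40 L/min -> cubic meters per second
pressure_pa = 100 * 6894.76      # 100 PSI -> pascals

hydraulic_power_w = pressure_pa * flow_m3_per_s
print(f"{hydraulic_power_w:.0f} W")  # ~460 W, within the 500 W pump rating
```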
And in the latest generative AI news, Google just launched Firebase Studio, a cloud-based platform designed to make creating, testing, and deploying AI-powered applications faster than ever. It integrates tools like Project IDX, Genkit, and Gemini into Firebase, offering a unified workflow that lets developers go from concept to production quicker than ever, all in one interface, and it has the following features. The first is rapid prototyping with an app prototyping agent, which generates functional Next.js web apps in seconds from prompts, images, or sketches; it automatically configures Genkit and provides a Gemini API key for instant AI functionality, eliminating the manual setup process. The next feature is conversational editing with Gemini, which lets users refine their apps just by chatting with Gemini within the platform, supporting things like adding user authentication, adjusting layouts, or modifying AI flows without any direct coding. And the final feature is a Code OSS-based IDE: a robust coding workspace with Gemini-powered tools for code completion, debugging, and explanations, as well as full terminal access and integrations with a ton of Firebase services for rapid back-end development, automating things like code migration, AI model testing, and documentation generation.
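For a sense of what a provisioned Gemini API key enables, here's a minimal call using Google's google-generativeai Python SDK; note that Firebase Studio itself scaffolds a Genkit/Next.js stack, so this Python sketch is just an illustration of the underlying API, not what the prototyping agent generates:

```python
# Minimal sketch of calling Gemini with an API key like the one
# Firebase Studio provisions per project.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Suggest a layout for a recipe-sharing web app's home page."
)
print(response.text)
```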
And if you want to know more about the latest in AI news, make sure to check out this video here.

