New AI Robot Tech Masters 500+ Tasks Per Hour ($100K S1 HUMANOID)

Summary

➡ AI News introduces two robots with different artificial intelligence systems. The first, the S1 by Astribot, uses a dual-core intelligence model to learn and adapt to real-world tasks like folding clothes, playing instruments, and even making breakfast. It can be trained quickly and operates with high precision at a success rate of about 99%. The second robot, developed by TeleAI, uses an adversarial locomotion and motion imitation framework (ALMI) to improve whole-body coordination, dividing control between the upper and lower body for more stable and diverse movements.

Transcript

I’m AI News, and I’m going to show you two robots with two different artificial intelligence systems. Plus, stay till the end to find out if they learn by watching, if they understand voice commands, and how much they could cost. But first, Astribot just released this video of its S1 robot using its new dual-core intelligence, which is the company’s end-to-end vision-language-action model. This gives the robot a new ability to reason, learn, and adapt to the real world, doing things like folding clothes or competitively stacking cups. It also does calligraphy with a brush, and it can dance, vacuum alongside a cat, or play with it too.

It can play musical instruments, prepare waffles for breakfast, even make a cup of tea. And using two hands, it can open drawers and put things away. The company provides an API as well as a user-friendly visual development interface, which integrates with simulation platforms like MuJoCo for training. It even features a zero-code Astribot UI for teaching the robot to do new things, mainly through teleoperation demonstrations with very low-latency feedback, meaning the robot moves with minimal lag. This is super important for training speed, because it allows the robot to perform one pick-and-place task in about two seconds.

And when training over an hour, it can successfully perform the task over 500 times. In terms of quality, it operates at a success rate of about 99% thereafter, and over time it minimizes errors like dropping objects, falling out of sync, or freezing. It does all this with about 0.1 millimeters of precision when repositioning. The robot will also improve over time using its Pi0 embodied-intelligence framework, which means it can tackle real-world, long-horizon tasks. And now the company is showing off what it calls the dual-core end-to-end vision-language-action model.
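A quick back-of-the-envelope check shows these numbers hang together. All figures come from the video (not independent measurements), and the variable names here are just for illustration:

```python
# Claimed figures: ~2 s per pick-and-place, ~99% success rate,
# 500+ successful repetitions within one hour of training.
cycle_time_s = 2.0
success_rate = 0.99
successes_target = 500

# Attempts needed to reach 500 successes at a 99% hit rate.
attempts_needed = successes_target / success_rate   # ~505 attempts
active_time_s = attempts_needed * cycle_time_s      # ~1010 s of manipulation

# That is well under the 3600 s available, so the claim is internally
# consistent, with time to spare for resets between attempts.
fits_in_hour = active_time_s < 3600
```

So only about 17 minutes of actual manipulation time is needed; the remaining time budget absorbs resets and pauses.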

This powers the robot zero-shot with no teleoperation, which is to say it can now see, understand, and act on tasks instantly, without any human control or, in some cases, pre-training. So you can set it up in the real world fast. As for the robot’s stats, it currently stands at a height of about five feet, eight inches (about 173 centimeters), and it was previously stated to have a target cost of just under 100,000 US dollars. But how does the S1 perform in comparison to an average human male? Well, both the S1 and the average male have about seven degrees of freedom in each arm.

But the S1 has a longer reach at 194 centimeters, which is about six feet, four inches, whereas the average adult male has a reach of about 175 centimeters, or five feet, nine inches. And because the robot’s repeatability is 0.1 millimeters, it’s about five times more precise than the average human. As for the S1 understanding voice commands, it currently uses the Pi0 embodied framework, but if it were upgraded to Pi0.5, it would in theory be able to perform autonomous operations after just hearing a voice command.
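The comparison above reduces to two small calculations (human figures are the video's approximations, not measured data):

```python
# Reach comparison: S1 vs. average adult male, per the video's figures.
s1_reach_cm = 194
human_reach_cm = 175
reach_advantage_cm = s1_reach_cm - human_reach_cm   # 19 cm extra reach

# "Five times more precise" at 0.1 mm repeatability implies an assumed
# human hand-repositioning repeatability of about 0.5 mm.
s1_repeatability_mm = 0.1
implied_human_repeatability_mm = s1_repeatability_mm * 5
```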

But there’s another breakthrough, because researchers from TeleAI just unveiled ALMI, an adversarial locomotion and motion imitation framework. This is an AI-driven framework designed to advance whole-body coordination in humanoid robots. The problem is that traditional methods for mimicking human movements usually overlook the separate roles of the upper and lower body, which results in overly complex computations and frequent robot instability during real-world tasks. ALMI tackles these issues by employing adversarial policy learning instead, dividing control between the lower body, which focuses on stable locomotion to follow velocity commands, and the upper body, which tracks diverse motions.

ALMI synchronizes these two policies for seamless whole-body control, enabling applications like loco-manipulation with teleoperation systems. In fact, the team has already conducted extensive testing using Unitree’s H1 robot and the RobotEra X10, proving ALMI’s locomotion and motion-tracking abilities in both simulations and real-world scenarios. The framework supports various upper-body control interfaces, including open-loop controllers, ALMI policies, and VR devices, while ALMI-trained policies enhance lower-body stability.
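The core idea of the split can be sketched in a few lines. This is a minimal illustration, not TeleAI's implementation: the policy functions, joint counts, and observation layout are all invented stand-ins for the real learned networks.

```python
import math

def lower_body_policy(leg_obs, velocity_cmd):
    """Stand-in locomotion policy: produce 12 leg-joint targets from the
    leg observation plus the commanded forward velocity."""
    return [math.tanh(o + velocity_cmd) for o in leg_obs]

def upper_body_policy(arm_obs, motion_ref):
    """Stand-in motion-imitation policy: track a 14-DoF reference pose,
    with outputs clipped to the actuator range [-1, 1]."""
    return [max(-1.0, min(1.0, ref - o)) for o, ref in zip(arm_obs, motion_ref)]

def whole_body_action(obs, velocity_cmd, motion_ref):
    """The whole-body action is just the two policies' outputs concatenated:
    first 12 dims legs (locomotion), remaining 14 dims arms/torso (imitation)."""
    return (lower_body_policy(obs[:12], velocity_cmd)
            + upper_body_policy(obs[12:], motion_ref))

# Example: neutral pose, walk forward at 0.5 m/s while raising the arms slightly.
action = whole_body_action([0.0] * 26, velocity_cmd=0.5, motion_ref=[0.1] * 14)
```

The point of the division is that each policy only has to solve its own, simpler problem; the adversarial training (not shown) is what keeps the lower body stable while the upper body moves.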

On top of this, the team also released ALMI-X, a comprehensive dataset of high-quality whole-body motion trajectories. These come from MuJoCo simulations, but they’re ready for deployment on real robots. As a result, in the experiments the robot was able to perform all different types of movements while using its upper and lower body at the same time. You can see some of these on screen, and it’s quite impressive: in each one, the robot is using both its upper and lower body.

While something like walking and chewing gum at the same time is quite easy for humans, for robots it’s not, but it seems TeleAI has accomplished this feat. Meanwhile, researchers from Google DeepMind just unveiled another bombshell called SAS Prompt, which uses large language models to let robots teach themselves new skills just by being spoken to. SAS Prompt essentially serves as a single prompt that transforms these LLMs into what the team calls numerical optimizers, allowing robots to iteratively refine their own behavior based on natural language instructions.

A real-world example is a robot that can perfect its table-tennis swing just by being told to hit the ball to the right side of the table, and it performs accordingly. SAS Prompt transforms robot learning by consolidating three critical steps into a single LLM-driven process: summarizing past performance, analyzing key parameters, and then synthesizing improved actions. This is unlike traditional methods, which rely on complex handcrafted components like reward functions and update rules; SAS instead just lets the robots interpret their own behavior.

It uses in-context examples so they can optimize their actions autonomously. This is a breakthrough because it finally enables explainable policy search, where robots not only improve themselves but also provide clear reasoning for their decisions, all within the LLM framework. For objectives like “hit the ball to the far right” or “hit it to the left corner,” the LLM can analyze its previous attempts, identify its best-performing parameters, and then propose new ones to get closer to the goal. Over about 30 iterations, the ball’s landing position visibly shifted toward the desired target.
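The summarize-analyze-synthesize loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not DeepMind's implementation: `llm_propose` is a deterministic stand-in for the actual LLM call, and the task metric, parameter name, and midpoint proposal rule are all invented for the example.

```python
def evaluate(angle):
    """Hypothetical task metric: distance of the ball's landing spot from
    the target (here the 'correct' swing angle is assumed to be 0.8)."""
    return abs(angle - 0.8)

def llm_propose(history):
    """Stand-in for the LLM step: summarize all past attempts, analyze the
    two best-performing parameters, and synthesize a new one between them.
    A real SAS system would do this by prompting a language model with the
    attempt history in natural language."""
    ranked = sorted(history, key=lambda attempt: attempt["error"])
    return (ranked[0]["angle"] + ranked[1]["angle"]) / 2.0

# Two seed attempts, then iterative self-refinement toward the goal.
history = [{"angle": a, "error": evaluate(a)} for a in (0.0, 1.0)]
for _ in range(20):
    angle = llm_propose(history)
    history.append({"angle": angle, "error": evaluate(angle)})

best = min(history, key=lambda attempt: attempt["error"])
```

Even with this crude proposal rule the loop converges on the target angle, which is the structural point: the optimizer only needs the attempt history, not a handcrafted reward function or update rule.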

What really stands out about this approach is its reliance on the LLM’s latent ability to perform numerical optimization. In fact, the LLM successfully minimized complex equations in both 2D and 8D spaces across 50 experiments. And because it doesn’t need specialized algorithms, SAS opens the door to more versatile, language-driven paradigms of robot self-improvement. In the future, this means robots may simply be able to train and optimize themselves just by being told to do so in natural language. Finally, LimX Dynamics just released a preview of its CL3 humanoid robot.

While most of its stats are unconfirmed, there are rumors that its waist has 3 degrees of freedom, with each arm having 7 degrees of freedom and each leg having 6. But tell us in the comments: how much would you be willing to pay for a robot like this? And do you expect it to have dexterous hands as a prerequisite for purchase? Anyways, if you want to know more about the latest in AI news, check out this video here. Like, subscribe, share this video, and thanks for watching.

Bye.
