Summary

➡ The Neo Humanoid robot by 1X has shown impressive abilities in a two-day experiment living with humans, performing tasks like making coffee and assisting with cooking. It learns and adapts to its environment, gathering data to improve its intelligence. Safety measures are in place to prevent accidents and protect user privacy. The robot can be leased, with users paying a deposit and monthly subscription fees for ongoing intelligence updates. Meanwhile, Disney Research has developed a method for robots to learn complex movements, and researchers from TU Munich have trained a piano-playing AI using internet demonstrations. Lastly, a new AI coding assistant, Replit Agent, can handle the entire software development process, creating apps and automating cloud deployment.

Transcript

In a first-ever experiment, 1X’s Neo Humanoid spent two days living with a human, revealing a series of unexpected results. This is partly because Neo demonstrated several intelligent abilities that shocked observers, but just how smart is it already? To start, Neo demonstrated a wide array of capabilities while doing chores around the house. From making a cup of coffee to assisting with cooking, it executed various real-world tasks that require both dexterity and planning. On top of this, Neo can also fetch ingredients, hand over utensils, and even engage in simple conversations, including telling jokes. But Neo’s accelerating ability to learn and adapt to its environment on the fly is what really raises eyebrows.

This is because, by living among humans, Neo constantly gathers data from anyone and anything around it to grow its hive mind intelligence. This hive mind connects robots around the world through a distributed swarm brain. It will allow them to manufacture themselves and form self-organizing groups of robots that monitor and respond to the needs of humans more effectively, on a person-by-person basis. And as for autonomy, Neo uses a combination of AI models to learn and gradually increase its autonomous functionality over time. On top of this, Neo can also be teleoperated to help expedite its learning process through imitation.

Importantly, this approach allows for continuous improvement and over-the-air updates, so that the same hardware becomes increasingly useful over time. And concerning safety, the robot is designed to use only low kinetic energy in order to minimize the risk of accidentally injuring humans. Moreover, Neo is programmed to recognize and avoid dangerous situations, respect physical boundaries, and self-recover from potential failures. Furthermore, Neo is designed to prevent bad actors from gaining control of it and making it go rogue inside your home. Specifically, privacy concerns are addressed through user-controlled settings, with owners specifying which areas or objects Neo can interact with and what information it can access.
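1X hasn’t published the exact interface for these settings, but conceptually they amount to an owner-defined permission policy. Below is a minimal sketch of that idea in Python; every field and function name (allowed_rooms, action_permitted, and so on) is a hypothetical illustration, not 1X’s actual configuration.

```python
# Hypothetical sketch of an owner-defined permission policy for a home robot.
# None of these field names come from 1X; they only illustrate the idea of
# user-controlled settings that gate where the robot may go, what it may
# touch, and what data it may record or upload.

NEO_POLICY = {
    "allowed_rooms": ["kitchen", "living_room"],         # rooms Neo may enter
    "restricted_objects": ["medicine_cabinet", "safe"],  # objects it must not touch
    "camera_recording": False,                           # no video stored or uploaded
    "data_upload": "aggregated_only",                    # raw sensor data stays on-device
}

def action_permitted(policy: dict, room: str, target: str) -> bool:
    """Return True only if the requested room and object are allowed by the owner."""
    return room in policy["allowed_rooms"] and target not in policy["restricted_objects"]

# Example: fetching a utensil in the kitchen is allowed,
# opening the medicine cabinet is not.
print(action_permitted(NEO_POLICY, "kitchen", "utensil_drawer"))    # True
print(action_permitted(NEO_POLICY, "kitchen", "medicine_cabinet"))  # False
```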

This level of customization prevents the robot from recording, accessing, downloading, reporting, augmenting, or withholding data that it shouldn’t. And recognizing the high-stakes nature of giving these robots access to your home while you sleep, 1X has implemented security measures and conducted extensive testing to ensure Neo remains safe from cyber threats and state actors. This focus on cybersecurity is intended to prevent swarms of robots from suddenly being hijacked and going rogue en masse. Furthermore, 1X is offering 24-hour support so that any control issues involving the robot, its operations, or its access can be resolved as quickly as possible.

And when it comes to Neo’s expected price per unit, 1X will likely institute some kind of leasing strategy. This would involve users first paying a few thousand dollars upfront as a kind of safety deposit to take delivery of the robot. Then they’d continue to pay monthly subscription fees for the robot’s ongoing intelligence. On top of this, there could be extra services, like giving your robot the ability to download knowledge from other robots, upgrading your robot’s security software, and other value-added extras. And as tens of thousands of these Neo humanoids are manufactured through the end of 2025, humans will experience another ChatGPT moment as robots suddenly transform the way people live and work.
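1X hasn’t announced pricing, but the deposit-plus-subscription structure described above is easy to reason about with placeholder numbers. The figures in this quick sketch are purely illustrative assumptions, not 1X’s prices.

```python
# Back-of-the-envelope lease cost under the deposit-plus-subscription model
# described above. All dollar amounts are illustrative assumptions, not 1X pricing.

deposit = 2_000      # hypothetical upfront safety deposit ($)
monthly_fee = 300    # hypothetical monthly intelligence subscription ($)
months = 24          # hypothetical two-year lease term

total_cost = deposit + monthly_fee * months
print(f"Total over {months} months: ${total_cost:,}")  # Total over 24 months: $9,200
```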

As this shift unfolds, those who adapt to this new way of doing things will likely thrive as operators of robot swarms as this brand-new business vertical grows. Meanwhile, engineers at Disney Research have just shown how robots can learn complex movements from unstructured motion data, and they’ve demonstrated it with dancing. This two-stage method promises to enhance the future of animation, with robots seamlessly executing complex routines, and here’s how it works. The first stage controls a character using full-body kinematic motion, starting by training a variational autoencoder to create a latent-space encoding. By processing short motion segments from unstructured data, the variational autoencoder captures the essence of all the movements.

Then, in the second stage, this encoding guides a conditional policy, transforming the kinematic inputs into dynamics-aware outputs. The policy conditions on both the current kinematic state and the latent code, aligning new inputs with learned motions, and this separation of stages significantly improves the quality of the latent codes and circumvents issues like mode collapse. The framework includes rewards for tracking accuracy, maintaining balance, and smoothness, alongside domain randomization to boost robustness and prevent overfitting. Additionally, the method’s efficiency and robustness have been demonstrated both in simulation and on a bipedal robot, where dynamic motions have been brought to life with striking accuracy.
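For a more concrete picture, the two-stage structure described here, a motion VAE trained on short kinematic segments followed by a policy conditioned on the resulting latent codes, can be sketched roughly as follows. Layer sizes, dimensions, and class names are assumptions for illustration, not Disney Research’s published implementation.

```python
# Rough sketch of the two-stage setup described above: (1) a variational autoencoder
# learns a latent code from short full-body kinematic segments, (2) a separate policy
# conditions on that latent code plus the current state and is trained with rewards
# for tracking, balance, and smoothness. Sizes and names are illustrative assumptions.

import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Stage 1: encode short kinematic motion segments into a latent space."""
    def __init__(self, segment_dim=60 * 8, latent_dim=32):  # e.g. 8 frames of a 60-dim pose
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(segment_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, segment_dim))

    def forward(self, segment):
        h = self.encoder(segment)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

class LatentConditionedPolicy(nn.Module):
    """Stage 2: map (current kinematic state, latent motion code) -> dynamics-aware actions."""
    def __init__(self, state_dim=70, latent_dim=32, action_dim=24):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, action_dim))

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

# The RL objective would combine weighted terms for tracking accuracy, balance, and
# action smoothness, with domain randomization applied in simulation during training.
```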

This technique handles unseen inputs adeptly, maintaining high fidelity in motion control for both virtual and robotic characters. While it scales effectively with motion diversity and training complexity, it faces challenges with movements requiring long-term planning, such as acrobatics. Additionally, although the method excels at tracking kinematic references, its generative potential remains largely unexplored. By demonstrating expressive motions on robotic hardware, Disney’s research bridges advances in computer graphics and robotics. The success of this project suggests that self-supervised and reinforcement learning techniques could pave the way for universal control policies in animation. And robots are getting huge dexterity upgrades too, as researchers from TU Munich just introduced PianoMime, a framework for training piano-playing AI agents using internet demonstrations.

In fact, because the internet offers such a vast source of large-scale video demonstrations for training robot agents, the researchers used YouTube to feed the AI videos of professional pianists performing a wide variety of songs. As a result, the team was able to train a generalist piano-playing agent capable of playing any song. Their framework consists of three parts: a data preparation phase to extract informative features from the videos, a policy learning phase to train song-specific expert policies, and a policy distillation phase to merge those policies into a single generalist agent. The researchers explore different policy designs and evaluate the effect of training data volume on the agent’s ability to generalize to novel, unseen songs.
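As a rough picture of how those three phases fit together, here is a schematic sketch of the pipeline. Every function here is a placeholder stub with invented names; only the overall structure follows the description above, not the actual PianoMime code.

```python
# Schematic of the three-phase pipeline described above: data preparation,
# song-specific policy learning, and policy distillation into one generalist.
# Every function body is a placeholder stub; names are invented for illustration.

def extract_features(video):
    # Placeholder: the real system would recover features such as fingertip
    # trajectories and pressed-key sequences from a YouTube performance video.
    return {"name": video, "trajectories": [], "keys": []}

def train_song_policy(song_data):
    # Placeholder: train an expert policy that can replay this one song.
    return {"song": song_data["name"], "weights": None}

def distill(expert_policies):
    # Placeholder: supervised distillation of all experts into a single
    # generalist agent that can attempt songs it was never trained on.
    return {"experts": list(expert_policies), "weights": None}

def pianomime_pipeline(youtube_videos):
    songs = [extract_features(v) for v in youtube_videos]       # phase 1: data preparation
    experts = {s["name"]: train_song_policy(s) for s in songs}  # phase 2: per-song experts
    return distill(experts)                                     # phase 3: distillation

generalist = pianomime_pipeline(["fur_elise.mp4", "clair_de_lune.mp4"])
print(generalist["experts"])  # ['fur_elise.mp4', 'clair_de_lune.mp4']
```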

Their results show the policy achieves up to a 56% F1 score on unseen songs. And finally, Replit just introduced a new AI coding assistant called Replit Agent, designed to handle the entire software development process from start to finish. To demonstrate its capabilities, the agent was prompted to create an app that shows local points of interest based on the user’s location. As a result, it produced a fully editable map that correctly displayed all relevant points of interest. On top of this, it even gives users the option to provide feedback for further tweaks and additional features.

In fact, the agent operates much like a human developer within an integrated development environment. It edits code, installs dependencies, and uses standard tools. Additionally, it automates cloud deployment, eliminating the need for manual server or database setup. In another example, Replit Agent created an app that sends Slack notifications when customers sign up or cancel, at a significantly lower cost than the similar functionality previously handled through Zapier. Replit Agent also built a working Wordle clone in just under three minutes. Remarkably, it developed a live website for sharing LLM prompts, using Flask, Postgres, and vanilla JavaScript, in less than 10 minutes, all without a human writing a single line of code.
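For a sense of how little code that last stack actually requires, here is a generic, minimal Flask-plus-Postgres backend for sharing LLM prompts; a vanilla-JavaScript front end would simply fetch the /prompts endpoint. This is an illustrative sketch under assumed table and route names, not the app Replit Agent actually generated.

```python
# Minimal example of the kind of stack mentioned above: Flask + Postgres for
# sharing LLM prompts, served as a small JSON API that a plain-JavaScript
# front end can call with fetch(). Generic illustration only.

import os
from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)
DB_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/prompts")

def get_conn():
    return psycopg2.connect(DB_URL)

@app.route("/prompts", methods=["GET"])
def list_prompts():
    # Return all shared prompts as JSON, newest first.
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute("SELECT id, title, body FROM prompts ORDER BY id DESC")
        rows = cur.fetchall()
    return jsonify([{"id": r[0], "title": r[1], "body": r[2]} for r in rows])

@app.route("/prompts", methods=["POST"])
def add_prompt():
    # Insert a new prompt submitted by the front end.
    data = request.get_json()
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute("INSERT INTO prompts (title, body) VALUES (%s, %s) RETURNING id",
                    (data["title"], data["body"]))
        new_id = cur.fetchone()[0]
    return jsonify({"id": new_id}), 201

if __name__ == "__main__":
    app.run(debug=True)
```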


Tags

adaptive learning robots, AI training through internet demonstrations, data gathering robots, Disney Research robot movements, leasing humanoid robots, monthly subscription for robot updates, Neo Humanoid robot 1X abilities, robot assisting with cooking, robot making coffee, robot performing daily tasks, robot safety measures, TU Munich piano-playing AI, two-day experiment living with humans, user privacy in robotics
