Summary

➡ Unitree is launching its $16,000 G1, a humanoid AI robot for home use. The G1 can perform complex movements, handle delicate objects, and even prepare food. It uses advanced vision systems and a large battery for power. Although it’s not yet ready for household tasks, it’s designed to learn by imitation, making it a useful tool for robotics researchers.

Transcript

Unitree just announced it will begin selling its $16,000 G1 AI robot, making it one of the first capable humanoid robots available for home use. So what tech is inside? Importantly, the G1 builds on the foundations of Unitree’s H1 humanoid robot, featuring an unnamed 8-core high-performance CPU while offering up to 43 degrees of freedom through powered joints in its arms, legs, and torso. This allows the G1 to perform complex movements such as jumping, walking at over seven miles per hour (3.3 meters per second), and climbing debris-covered stairs with ease. On top of this, the robot’s hands have evolved from simple stumps to three-finger grippers capable of dexterous tasks like soldering wires, handling delicate objects, and even preparing food.
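As a quick sanity check on the quoted walking speed, the conversion from meters per second to miles per hour works out as follows:

```python
# Verifying the quoted walking speed: 3.3 m/s expressed in miles per hour.
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

speed_ms = 3.3  # the figure quoted in the article
speed_mph = speed_ms * SECONDS_PER_HOUR / METERS_PER_MILE  # ~7.4 mph

print(f"{speed_ms} m/s is about {speed_mph:.1f} mph")
```

So 3.3 m/s is roughly 7.4 mph, consistent with the "over seven miles per hour" claim.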

As for the robot’s head, it houses a sophisticated vision system with a Livox Mid-360 LiDAR and an Intel RealSense D435 depth camera for 3D perception of its environment. As for portability, the robot stands just 4.3 feet (1,320 millimeters) tall, with the ability to fold down for easier transport and storage. And while the G1 weighs a total of 35 kilograms (77 pounds) including its battery pack, it packs an impressive 9,000 milliamp-hour battery to power it for two hours at a time. The G1 isn’t quite ready to do household tasks just yet, but the company says it’s designed to learn through imitation.
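The article gives the battery capacity (9,000 mAh) and runtime (two hours) but not the pack voltage, so any power estimate requires an assumed voltage. The 48 V figure below is a hypothetical nominal value for illustration, not a published spec:

```python
# Back-of-the-envelope average power draw from the stated battery specs.
CAPACITY_AH = 9.0         # 9,000 mAh, as stated in the article
RUNTIME_H = 2.0           # two hours per charge, as stated
ASSUMED_VOLTAGE_V = 48.0  # hypothetical nominal pack voltage (not published here)

energy_wh = CAPACITY_AH * ASSUMED_VOLTAGE_V  # pack energy in watt-hours
avg_power_w = energy_wh / RUNTIME_H          # implied average draw

print(f"Estimated pack energy: {energy_wh:.0f} Wh")
print(f"Implied average draw:  {avg_power_w:.0f} W")
```

Under that assumption, the pack would hold about 432 Wh and the robot would average a bit over 200 W of draw; a different real pack voltage would scale both numbers proportionally.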

This learning process is facilitated by the Unitree Robot Unified Large Model, or UnifoLM for short, which makes the G1 an appealing platform for robotics researchers to build on. Beyond its artificial intelligence systems, the G1 boasts some impressive physical specifications too, including 43 degrees of freedom for intricate, human-like movements and a maximum joint torque of 120 newton-meters to handle a wide variety of tasks. For computer vision, the robot’s 360-degree perception incorporates both 3D LiDAR and depth cameras, ensuring a comprehensive understanding of its surroundings. The robot also features a 5-watt speaker and a microphone array equipped with noise and echo cancellation for clear communication.

As for flexibility, each arm is articulated with 3 degrees of freedom in the shoulder, 2 in the elbow, and 2 in the wrist, while each leg has 3 in the hip, 1 in the knee, and 2 in the ankle. Plus, hollow joint wiring streamlines the robot’s internal structure to minimize pinch points and increase safety when working around humans. Altogether, if Unitree can deliver this robot for $16,000, it will certainly give Tesla and Boston Dynamics a run for their money. And in another groundbreaking collaboration, a team of roboticists from MIT and the University of California just unveiled a novel remote control system for robots known as Open Television.
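Tallying the per-limb joint counts just quoted gives 26 degrees of freedom across the four limbs. The article doesn’t break down where the rest of the 43-DoF total sits (presumably the torso and grippers), so the remainder computed below is just the arithmetic gap, not a published spec:

```python
# Tallying the per-limb degree-of-freedom counts quoted in the article.
arm_dof = 3 + 2 + 2  # shoulder + elbow + wrist = 7 per arm
leg_dof = 3 + 1 + 2  # hip + knee + ankle = 6 per leg

limb_total = 2 * arm_dof + 2 * leg_dof  # 26 DoF across all four limbs

TOTAL_DOF = 43  # the "up to 43 degrees of freedom" figure quoted earlier
remainder = TOTAL_DOF - limb_total  # unaccounted-for DoF (torso, hands, etc.)

print(f"Limbs account for {limb_total} DoF; {remainder} remain elsewhere")
```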

It works by drawing inspiration from VR technology commonly used in gaming, where players wear VR helmets and use controllers to navigate and interact within a virtual environment. The researchers adapted this technology to enable anyone to interact with the real world through robotic proxies. With Open Television, an operator wears a VR helmet that offers a stereoscopic view through the robot’s cameras. In this way, the pilot can control the robot’s head movements by simply turning their own head. Plus, the system extends this intuitive control to the robot’s limbs, arms, hands and fingers.

And instead of being encumbered by multiple body-worn sensors, the operator’s movements are captured using remote sensors akin to those found in a Kinect system. This setup creates a sensation that feels like being physically present at the robot’s location. As the operator moves, the robot mirrors these actions, performing tasks like picking up objects or manipulating tools. The experience is so seamless that it can feel like an extension of one’s own body. One of the key demonstrations of Open Television’s capabilities involved a team member at MIT controlling a robot stationed at UCSD. The operator reported a profound sense of presence, as if they were physically performing tasks at the remote site.
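The mirroring behavior described above can be sketched as a simple sense-map-act loop: read the operator's head pose, apply it to the robot, and stream the robot's stereo view back to the headset. The class and method names here are hypothetical stand-ins, not Open Television’s actual API:

```python
# Minimal sketch of a head-tracking teleoperation loop, assuming
# hypothetical VRHeadset/RobotHead interfaces (not Open Television's real API).

class VRHeadset:
    """Stand-in for a VR helmet that reports the operator's head pose."""
    def read_orientation(self):
        # In a real system this pose comes from the headset's tracking/IMU.
        return {"yaw": 0.0, "pitch": 0.0}

class RobotHead:
    """Stand-in for the robot's pan/tilt head and stereo cameras."""
    def __init__(self):
        self.pose = {"yaw": 0.0, "pitch": 0.0}

    def move_to(self, pose):
        # Mirror the operator's head orientation on the robot.
        self.pose = dict(pose)

    def stereo_frames(self):
        # Stereoscopic pair streamed back to the operator's VR display.
        return ("left_image", "right_image")

def teleop_step(headset, robot):
    """One iteration of the mirror loop: sense the operator, act on the robot."""
    robot.move_to(headset.read_orientation())
    return robot.stereo_frames()
```

A real implementation would run this loop at the camera frame rate and extend the same pattern to arm, hand, and finger tracking.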

And the implications of such systems are vast. Open Television could transform industries that require precise remote manipulation. For instance, in the medical field, this technology could enable surgeons to perform complex procedures from afar, offering expertise in areas lacking specialized medical professionals. In search and rescue operations, robots could navigate hazardous environments without risking human lives. Furthermore, Open Television opens the door to interplanetary exploration, allowing scientists to control robots on distant planets with unmatched precision. And in yet another AI breakthrough, Runway just introduced its newest Gen 3 Alpha Turbo as its leading text-to-video AI model that’s now seven times faster and costs half as much as its predecessor, all while maintaining similar quality.

And with such bold claims, users were quick to point out the advantages and drawbacks of the model. To start, Runway claims Gen 3 Alpha Turbo sets a new standard for efficient, high-resolution video production and even enables near-real-time interactivity, with the Turbo model being included in all Runway plans including free trials for new users. And users sounded off online as they compared the models on X, noting that while the basic model better handles dynamic movements, it’s more prone to distortions. However, the Turbo model excels with simpler, more stable movements, with fewer motion artifacts.

On top of this, users also found that the basic model significantly outperforms the Turbo version for complex movements and effects. For example, when prompted for a dragon breathing fire from its mouth, the normal model produced a more impressive fire effect than the Turbo model. Finally, users noted the Turbo model stays closer to the original image, while the basic model is more creative, with some even suggesting the Turbo model may be preferable for specific scenarios due to its greater stability. Overall, the Turbo model is a valuable addition to Gen 3, excelling in shots requiring simple movements, stability, and closer adherence to the original image, while the basic model remains better for complex movements and creativity.

But that’s not the only advance this week in text-to-video, because Luma AI also just unveiled version 1.5 of its Dream Machine video generator, which is aimed at producing high-quality, realistic videos from both text and images. But what new features and improved functionality can be expected? Released just two months after the original version, Dream Machine 1.5 introduces several advantages, starting with higher-quality text-to-video generation, a smarter understanding of prompts, and custom text rendering. Additionally, the image-to-video capabilities have been significantly upgraded, promising more seamless and realistic transformations. Another standout feature of this update is Extend Video.

This allows users to lengthen videos based on prompts while maintaining contextual coherence. With this feature, videos can now be extended to a maximum length of 1 minute and 20 seconds, a substantial increase from the previous 5-second limit. This enables more comprehensive storytelling and dynamic content creation. Additionally, Luma AI has also focused on enhancing the user experience. Standard, Pro and Premiere users now have the ability to remove watermarks, offering a cleaner final product. The company has announced plans to introduce a suite of editing features and intuitive prompt controls, further empowering users to fine-tune their creations.
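For scale, the new maximum clip length works out to sixteen times the old limit:

```python
# Comparing Dream Machine 1.5's new maximum video length with the old limit.
new_max_s = 1 * 60 + 20  # "1 minute and 20 seconds" -> 80 seconds
old_max_s = 5            # the previous 5-second limit

factor = new_max_s / old_max_s  # how many times longer videos can now run

print(f"New limit: {new_max_s} s ({factor:.0f}x the old {old_max_s} s limit)")
```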

Upcoming updates are expected to include more sophisticated editing options and user-friendly interfaces, making it easier for creators to bring their visions to life.
