Summary

➡ Uni X AI has launched Wanda, the first widely available, full-sized robot that can do a range of tasks, from cooking and cleaning to providing health advice and emotional support. Wanda uses four key technologies, including a tactile AI model for handling objects and a central platform for understanding its environment. The company is also working on wheeled and bipedal versions of the robot, which they plan to release in the next few years. As more countries focus on robot development, we can expect to see these robots used in manufacturing, services, and homes.

Transcript

Uni X AI has just introduced Wanda, billed as the world’s first mass-produced, consumer-grade, full-sized, general-purpose humanoid robot. To start, Wanda can perform delicate tasks such as handling tender tofu and assisting with making tofu soup. It can identify clothes that need washing and interact with people in real time. In the household, Wanda can wash dishes, clean tables and even coordinate with robotic sweepers to keep a home spotless. But Wanda isn’t limited to household chores: it can also take on a variety of operational tasks and provide health diagnoses and advice, with the aim of freeing humans from complex labour and enhancing quality of life.

And one of the ultimate goals for humanoid robots like Wanda is to provide family care services. Wanda can offer family education, medical monitoring and companionship, addressing emotional needs and providing support to family members. Plus, Uni X AI’s efforts have already yielded several impressive tech achievements. Here’s a rundown of the four critical intelligence technologies that Wanda combines. Number one. The Uni Touch tactile large model, a breakthrough in tactile perception, makes the robot’s gripper highly adaptive to different grasping and releasing scenarios and enables Wanda to handle various objects accurately.

This model solves issues related to data scarcity and long sequences in the generalization process. Number two. The robot’s central platform covers multi-layer semantic expression, 3D environmental perception, fusion positioning and mobile obstacle avoidance. Number three. The bionic arm includes a core harmonic reducer and a permanent magnet brushless DC servo motor. Number four. Generalizable motion primitives allow for flexible and precise movement. Additionally, Uni X AI’s core components are modularized and self-developed, providing cost advantages and laying a solid foundation for large-scale mass production. But its most important tech integration is its brain center that makes use of its tactile AI model.

This allows Wanda to perceive, recognize and identify different objects, enabling precise task execution. Wanda can handle objects of various shapes and materials, safely transport fragile and dangerous items and efficiently complete household tasks like placing tableware, tidying rooms and delivering items. In terms of the robot’s future, its team is focused on the research, development and production of both a wheeled and a bipedal universal humanoid robot, which they intend to introduce at the 2024 World Robot Conference. Their wheeled model is set to enter production later this year, and the bipedal model is expected to hit the market sometime in 2025.

And as the race for humanoid robot development heats up globally, many countries are prioritizing humanoid robots at a strategic level. This means that humanoids will soon find applications in intelligent manufacturing, commercial services and even everyday households, with intelligent manufacturing poised to be the first large-scale use case. Meanwhile, a team of researchers from Google Research and Stanford University has just introduced a remarkable new AI system named StreetScapes. This cutting-edge technology can generate uncannily realistic street views of entire cities from scratch, simulating a drive-through experience that can be exported in 3D via neural radiance fields, or NeRF for short.

The secret sauce is StreetScapes’ use of diffusion models, which are prominent in image and video generation. These are trained on millions of real images from Google Street View, so the models have already learned the nuances of typical street scenes. The AI then takes a street map, a height map of buildings, and a desired camera path through a virtual city as inputs and generates extremely realistic video sequences step by step, complete with detailed elements like windows, cobblestones, and vegetation. Impressively, light and shadows are also rendered naturally, enhancing the realism.
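The layout-conditioned generation described above can be illustrated with a toy denoising loop: start from pure noise and repeatedly pull the sample toward an image consistent with the scene layout. Everything here is an illustrative stand-in, not the actual StreetScapes architecture — the `generate_frame` name, the linear schedule, and the naive fusion of layout inputs are all invented for the sketch, and a real system would replace `predicted_clean` with a trained denoising network.

```python
import numpy as np

def generate_frame(street_map, height_map, camera_pose=None, steps=50, seed=0):
    """Toy sketch of conditional diffusion sampling: begin with pure noise
    and iteratively denoise toward an image conditioned on scene layout.
    The 'network prediction' is a stand-in that pulls toward the layout."""
    rng = np.random.default_rng(seed)
    # Hypothetical fusion of the layout inputs into one conditioning array
    # (camera_pose is accepted but unused in this toy version).
    cond = np.asarray(street_map, dtype=float) + np.asarray(height_map, dtype=float)
    x = rng.standard_normal(cond.shape)  # pure noise at the final timestep
    for t in range(steps, 0, -1):
        alpha = t / steps                # simple linear "noise level" schedule
        predicted_clean = cond           # stand-in for a trained denoiser's output
        x = alpha * x + (1 - alpha) * predicted_clean
    return x
```

After enough steps, the residual noise is scaled away and the sample converges on the conditioning layout, which is the intuition behind guiding generation with maps and camera paths.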

But the real standout feature of StreetScapes is its motion module, which ensures movement and temporal consistency between consecutive images. This is complemented by a technique called temporal imputation, where each new image is generated with consideration of previous images, allowing for longer and more cohesive video sequences. StreetScapes can generate sequences up to 100 frames long, with camera movements covering more than 170 meters. And while StreetScapes uses an architecture that has since been surpassed by models like OpenAI’s Sora, its underlying diffusion model is designed to be easily interchangeable, meaning that future iterations of StreetScapes are expected to deliver even better results.
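The temporal-imputation idea — each new image generated with previous images held fixed as context — can be sketched as a simple autoregressive rollout. The function name, the context window of two frames, and the noise-blending "generator" are all hypothetical placeholders, not the paper's method:

```python
import numpy as np

def autoregressive_rollout(first_frame, n_frames, context=2, seed=0):
    """Toy sketch of temporal imputation: each new frame is produced with
    the most recent frames held fixed as conditioning, keeping long
    sequences temporally consistent. The 'generator' is a stand-in that
    blends the context frames with a little fresh noise."""
    rng = np.random.default_rng(seed)
    frames = [np.asarray(first_frame, dtype=float)]
    for _ in range(n_frames - 1):
        # Previous frames act as the imputed (fixed) part of the sequence.
        ctx = np.mean(frames[-context:], axis=0)
        noise = 0.1 * rng.standard_normal(ctx.shape)
        frames.append(ctx + noise)  # each new frame stays close to its context
    return frames
```

Because every frame is anchored to its predecessors, the sequence drifts smoothly rather than jumping, which is why the technique supports long rollouts like the 100-frame sequences mentioned above.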

And one of the most exciting aspects of StreetScapes is its ability to be controlled through text prompts. Users can specify different times of day or weather conditions, and even mix city layouts and architectural styles, such as rendering Parisian streets in the style of New York City. This opens up a world of creative applications, from urban planning to entertainment. The StreetScapes researchers view it as a significant step towards AI systems capable of generating entire, unlimited scenes. Their future plans include improving control over moving objects like cars and further enhancing the consistency between consecutive images.

This continuous development aims to push the boundaries of what AI can achieve in realistic scene generation, paving the way for increasingly immersive virtual experiences. And finally, researchers unveiled Audio Flamingo, a groundbreaking model that augments large language models with audio understanding, including non-speech sounds and non-verbal speech. The following is a video demo of their work. Impressively, Audio Flamingo can also be adapted to new tasks via in-context few-shot learning and retrieval-augmented generation, without the need for task-specific fine-tuning. Additionally, Audio Flamingo can borrow keywords from retrieved samples for audio captioning, or disregard retrieved samples that are noisy and ineffective.
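The retrieve-or-disregard behavior described for retrieval-augmented captioning can be illustrated with a minimal embedding-similarity sketch. The function name, the cosine-similarity scoring, and the `min_sim` cutoff are invented for illustration and are not taken from the Audio Flamingo paper:

```python
import numpy as np

def retrieve_examples(query_emb, db_embs, db_captions, k=2, min_sim=0.5):
    """Toy sketch of retrieval-augmented generation for audio captioning:
    fetch the stored clips whose embeddings are most similar to the query
    and reuse their captions as few-shot context, but disregard retrievals
    whose similarity falls below a threshold (noisy, unhelpful matches)."""
    query_emb = np.asarray(query_emb, dtype=float)
    db_embs = np.asarray(db_embs, dtype=float)
    # Cosine similarity between the query clip and every stored clip.
    sims = db_embs @ query_emb / (
        np.linalg.norm(db_embs, axis=1) * np.linalg.norm(query_emb))
    top = np.argsort(sims)[::-1][:k]
    return [db_captions[i] for i in top if sims[i] >= min_sim]
```

The threshold is what lets the system fall back to ignoring retrieved samples entirely when nothing in the database resembles the query audio.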

Audio Flamingo even supports multi-turn dialogue, holding multiple rounds of conversation with the user while understanding complex context and capturing correlations between rounds. Altogether, Audio Flamingo can understand what happens in audio: the order of sounds, the clarity of the recording, changes in loudness over time, and even faint sounds from far away. This advancement is set to revolutionize real-world applications, enabling AI to engage more naturally and effectively across diverse contexts. But what do you think is the best application for this tech?
