Summary

➡ Cyan’s Orca-1 humanoid robot and Pudu Robotics’ Pudu DH-11 hand are advancing robotic capabilities. Orca-1 can walk, climb, spin, and adapt to outdoor environments, while also interacting naturally with humans and performing complex tasks. The Pudu DH-11, modeled on the human hand, can handle tools, complete intricate tasks, and improve human-robot interactions. Meanwhile, Meta’s AI research team is developing touch perception and dexterity in robots with systems like Sparsh and Digit360, and Microsoft researchers are working on 3D geometry extraction and motion capture methods to further enhance robot capabilities.

Transcript

Are robots about to become more dexterous than humans? Cyan just demoed its Orca-1 humanoid robot at an event that showcased several new locomotion capabilities, including walking, climbing, spinning, and adapting to outdoor environments with a straight-knee posture. But what can it do in the real world? To start, Orca-1 can carry on natural-language interactions as well as execute bimanual manipulation, all of it supported by a large language model for emotional expression in an attempt at full anthropomorphism. Additionally, the robot’s design and hardware enhance motion control, dialogue, and dual-arm operations by allowing it to perform movements like splits while maintaining balance.

Furthermore, the robot’s motion control algorithm integrates with its hardware for effective terrain navigation. In fact, Cyan says it has completely optimized the robot’s hardware-to-software integration for a more natural human gait. And then there’s Orca-1’s water-drop facial design and language model, which enable emotional expression during interactions and give the humanoid a social use case too. And while the pricing of this robot will likely fall between $20,000 and $150,000, the company’s roboticists continue to refine their hardware and motion control implementations as they race to develop the most stable and adaptable home robot for the commercial market.

But when it comes to next-level humanoid dexterity, Pudu Robotics is revealing its newest Pudu DH-11, an 11-degree-of-freedom dexterous hand engineered to enhance robotic manipulation for both semi-humanoid and humanoid robots. This key upgrade in dexterity builds on the success of the Pudu D7, but how much is it going to cost? To start, the Pudu DH-11 emulates a human hand, with a five-finger design offering 11 degrees of freedom and 12 tactile sensing areas. This configuration enables it to perform tasks much as human hands do. It is both lightweight and agile, incorporating durable touch surfaces on the palm, fingers, and thumb.

These attributes allow robots equipped with the DH-11 to handle a variety of tools, complete complex tasks, and improve interactions between humans and robots. Furthermore, the DH-11’s design focuses on being both lightweight and flexible: its biomimetic hand structure uses a cable-driven system and underactuated mechanisms to substantially reduce weight and size. Along with providing even greater flexibility, this optimization also facilitates smoother gripping and lifting actions. Additionally, the DH-11 offers extensive touch coverage with a total of 1,018 sensor pixels, delivering extremely detailed feedback for precise and safe interactions.
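
To make that feedback loop concrete, here is a minimal Python sketch of how per-pixel tactile readings might modulate grip force. The array size matches the 1,018 sensor pixels mentioned above, but the threshold, gain, and interface are hypothetical illustrations, not Pudu’s actual API.

```python
import numpy as np

# Hypothetical tactile layout: the DH-11 reportedly exposes 1,018 sensor
# pixels across the palm, fingers, and thumb; everything else here is assumed.
NUM_PIXELS = 1018
CONTACT_THRESHOLD = 0.05     # normalized pressure that counts as contact
TARGET_CONTACT_RATIO = 0.15  # fraction of pixels we want in contact

def grip_adjustment(tactile_frame: np.ndarray, current_force: float) -> float:
    """Return an updated grip force from one frame of tactile readings.

    tactile_frame holds normalized pressures in [0, 1], one per pixel.
    A simple proportional rule: squeeze harder while too few pixels
    register contact, and back off when too many do (risk of crushing).
    """
    contact_ratio = np.mean(tactile_frame > CONTACT_THRESHOLD)
    error = TARGET_CONTACT_RATIO - contact_ratio
    gain = 2.0  # proportional gain; tuned per gripper in practice
    return float(np.clip(current_force + gain * error, 0.0, 1.0))

# Example: only ~5% of pixels feel contact, so the force increases.
frame = np.zeros(NUM_PIXELS)
frame[:50] = 0.2
print(grip_adjustment(frame, current_force=0.3))  # ~0.50
```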

The DH-11 is also extremely durable: it’s constructed with multi-strand steel cables offering high wear resistance and tensile strength to lift up to 40 kilograms (88 pounds), and it’s designed to protect itself from both water and dust to minimize maintenance needs. This comes after the release of Pudu’s D7 humanoid, with the DH-11 offering another upgrade to enable humanoids to autonomously execute intricate tasks across various applications and environments. With this, Pudu Robotics continues to study customer needs to drive its next steps of innovation in mobility, operation, and artificial intelligence, with the company aiming to accelerate the deployment of service robots across sectors like food and beverage, retail, healthcare, and more.

But that’s only the beginning, because robots just reached the next level of cognition, with Meta’s Fundamental AI Research (FAIR) team having unveiled several breakthroughs to push the boundaries of touch perception, robot dexterity, and human-robot interaction. All this is to say that AI is quickly developing a sense of touch, the ability to interpret physical nuances, and the ability to navigate changing environments effectively. This technological leap starts with Meta Sparsh, a general-purpose touch representation system that translates various types of tactile signals from numerous sensors, allowing AI to finally interpret touch in much the same way humans do.

Named after the Sanskrit word for touch, Sparsh leverages self-supervised learning to process tactile information across diverse sensors without requiring task-specific models, making it versatile and scalable. This could be crucial for future AI applications in complex fields like healthcare and manufacturing, where machines will inevitably be required to perform delicate and dexterous tasks that previously required a human touch.
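
The transcript doesn’t detail Sparsh’s architecture, but the self-supervised family it belongs to can be sketched: hide part of a tactile frame and train a model to reconstruct it, so useful touch representations emerge without task-specific labels. The toy PyTorch example below illustrates that general recipe on assumed 32x32 pressure maps; it is not Meta’s released code.

```python
import torch
import torch.nn as nn

class TinyTouchAutoencoder(nn.Module):
    """Toy masked autoencoder over tactile frames (assumed 32x32 pressure maps)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, dim), nn.ReLU())
        self.decoder = nn.Linear(dim, 32 * 32)

    def forward(self, x, mask):
        # Zero out masked pixels; the model must infer them from context.
        z = self.encoder(x * mask)
        return self.decoder(z).view_as(x), z

model = TinyTouchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

batch = torch.rand(8, 32, 32)                 # stand-in tactile frames
mask = (torch.rand(8, 32, 32) > 0.5).float()  # keep roughly half the pixels

recon, embedding = model(batch, mask)
# Score reconstruction only on pixels the encoder never saw;
# that prediction task is the self-supervised signal.
loss = ((recon - batch) ** 2 * (1 - mask)).mean()
loss.backward()
opt.step()
```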

But Meta also unveiled Digit360, a fingertip sensor equipped with human-level multimodal sensing capabilities, allowing it to capture tactile details with remarkable accuracy. With over 18 sensing features, Digit360 processes minute forces, surface textures, and even thermal properties, offering AI systems a more nuanced understanding of physical interactions. This could be transformative for fields such as prosthetics, virtual reality, and robotics, where detailed tactile feedback is essential for realistic interaction. Amazingly, by capturing omnidirectional tactile data through its unique optical lens and on-device AI processing, Digit360 allows robots to respond instantly to touch stimuli, enabling them to handle delicate tasks with unprecedented precision. To further support AI research in touch perception and dexterity, Meta also introduced the Meta Digit Plexus platform. This standardized hardware-software framework integrates various tactile sensors, such as Digit and Digit360, onto a single robotic hand. By facilitating data collection and analysis across multiple touch sensors, Digit Plexus provides a cohesive system for developing more capable and coordinated robotic hands.
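
One way to read Digit Plexus is as an interface problem: heterogeneous tactile sensors presented to the hand’s controller behind one uniform API. The Python sketch below is a hypothetical illustration of that idea; the class names and readings are invented and do not reflect the actual Digit Plexus SDK.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class TouchReading:
    sensor_id: str
    pressure: float                        # normalized contact pressure
    temperature_c: Optional[float] = None  # not every sensor senses heat

class TactileSensor(ABC):
    """Uniform interface so the hand controller never cares which sensor it reads."""
    @abstractmethod
    def read(self) -> TouchReading: ...

class DigitLikeSensor(TactileSensor):     # pressure-only fingertip (hypothetical)
    def read(self) -> TouchReading:
        return TouchReading("digit-0", pressure=0.12)

class Digit360LikeSensor(TactileSensor):  # multimodal fingertip (hypothetical)
    def read(self) -> TouchReading:
        return TouchReading("digit360-0", pressure=0.12, temperature_c=24.5)

def poll_hand(sensors: list[TactileSensor]) -> list[TouchReading]:
    """Collect one snapshot across every sensor on the hand."""
    return [sensor.read() for sensor in sensors]

print(poll_hand([DigitLikeSensor(), Digit360LikeSensor()]))
```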

Digit Plexus is expected to accelerate research into robotic dexterity and AI’s ability to interact with physical objects in a sophisticated manner, particularly as AI systems are increasingly deployed in real-world applications that require both skill and adaptability. And by partnering with industry leaders, Meta aims to make these innovations widely accessible, fostering a global research community that can build on these advancements to refine and expand robotic applications. But another significant initiative from Meta’s researchers is the PARTNR benchmark, short for Planning And Reasoning Tasks in humaN-Robot collaboration, designed to advance research in human-robot collaboration.

This provides a standardized framework to evaluate how well AI models collaborate with humans on tasks within home-like environments, and it’s built on a high-speed simulator called Habitat 3.0, allowing PARTNR to replicate realistic settings with over 5,800 unique objects and 100,000 natural-language tasks across simulated homes. All of this is toward the goal of enabling scalable training for embodied AI models, offering researchers the tools to study collaborative intelligence in robots under safe, controlled conditions.
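
A benchmark like this reduces to an episode loop: sample a natural-language task, step a language-conditioned policy in simulation, and score completion. The Python sketch below shows that generic structure with stand-in classes; it is a hypothetical outline, not the actual Habitat or PARTNR API.

```python
import random

class FakeSim:
    """Stand-in for a Habitat-style simulator (hypothetical API)."""
    def reset(self, task: str) -> None:
        self.steps = 0
    def step(self, action: str) -> None:
        self.steps += 1
    def task_done(self) -> bool:
        return self.steps >= 10 and random.random() < 0.3

class FakePolicy:
    """Stand-in for a language-conditioned robot policy."""
    def act(self, task: str, step: int) -> str:
        return random.choice(["navigate", "pick", "place", "hand_over"])

def evaluate(tasks: list[str], max_steps: int = 200) -> float:
    """Return the policy's success rate across natural-language tasks."""
    sim, policy, successes = FakeSim(), FakePolicy(), 0
    for task in tasks:
        sim.reset(task)
        for t in range(max_steps):
            sim.step(policy.act(task, t))
            if sim.task_done():
                successes += 1
                break
    return successes / len(tasks)

tasks = ["put the mug in the sink", "help set the table", "tidy the living room"]
print(f"success rate: {evaluate(tasks):.2f}")
```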

Meanwhile, Microsoft researchers just unveiled MoGe, an innovative AI model designed to extract 3D geometry from single open-domain images. Unlike traditional methods, MoGe generates a 3D point map using an affine-invariant representation, which makes it adaptable to variations in scale and position. This approach avoids ambiguous feedback during training, allowing the model to learn geometry more effectively. On top of this, MoGe employs both global and local geometry supervision to enhance its learning capabilities, with its key innovations including a highly accurate point-cloud alignment solver for shaping global structures and a multi-scale geometry loss that sharpens local details.
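
The affine-invariant idea is easy to make concrete: before comparing a predicted point map to the ground truth, solve in closed form for the scale and shift that best align them, so the network is never penalized for the global scale and position it cannot recover from a single image. The NumPy sketch below shows the standard least-squares form of that alignment; it illustrates the general technique rather than MoGe’s exact solver.

```python
import numpy as np

def affine_invariant_loss(pred: np.ndarray, gt: np.ndarray) -> float:
    """MSE between point maps after the optimal scale s and shift t are applied.

    pred, gt: (N, 3) arrays of 3D points. We minimize ||s * pred + t - gt||^2
    over scalar s and vector t; centering both clouds gives the closed-form
    solution s = <pred_c, gt_c> / <pred_c, pred_c>, t = mean(gt) - s * mean(pred).
    """
    pred_c = pred - pred.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    s = float((pred_c * gt_c).sum() / (pred_c * pred_c).sum())
    t = gt.mean(axis=0) - s * pred.mean(axis=0)
    residual = s * pred + t - gt
    return float((residual ** 2).mean())

# Sanity check: a prediction that differs from the ground truth only by
# scale and shift incurs (numerically) zero loss, as intended.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
print(affine_invariant_loss(cloud, 2.5 * cloud + np.array([1.0, -2.0, 0.5])))
```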

But that’s not all, because Microsoft researchers also introduced a new motion capture method to accurately capture face, body, and hand movements simultaneously, all without markers, manual setup, or specialized hardware. This new method, called “Look Ma, no markers,” overcomes traditional limitations by using machine learning trained on synthetic data, combined with advanced models of human shape and movement. Impressively, it works across different environments, camera setups, and clothing, enabling versatile and stable 3D reconstructions of the entire human body, including finer details like eyes and tongue.
