Summary

➡ Fourier Intelligence and Kepler Robot have unveiled advanced humanoid robots, GR2 and Forerunner K2, respectively. These robots are capable of performing complex tasks with high precision and sensitivity, thanks to their numerous degrees of freedom, tactile sensors, and powerful AI frameworks. Meanwhile, researchers have developed a new system, IDP3, that allows robots to operate autonomously in unpredictable environments. Lastly, Microsoft is automating desk jobs with new agent capabilities in Copilot Studio, which can perform tasks ranging from simple responses to fully autonomous business processes.

Transcript

Fourier Intelligence just revealed a new demo of its GR2 humanoid robot doing several tasks and movements to flaunt its dexterity and strength, but how capable is it? In fact, the new GR2 stands at 175 centimeters and weighs 63 kilograms with an impressive 53 degrees of freedom for fluid, human-like movements. Plus, its single-arm load capacity of three kilograms broadens its industrial applications. But one of this robot’s most impressive features is its dexterous hand with 12 degrees of freedom and six array-type tactile sensors, providing exceptional sensitivity and precision for tasks requiring fine motor skills, such as delicate assemblies or surgical procedures.

Additionally, the GR2 is powered by Fourier’s smart actuators, which deliver peak torque of over 380 newton-meters for efficient movement, as well as a dual-encoder system that enhances control accuracy, ensuring smooth and precise actions. And further complementing the hardware is the Fourier Toolkit, which serves as a suite of optimized tools for frameworks like NVIDIA Isaac Lab, MuJoCo, and the Robot Operating System. But there’s another AI-powered humanoid that’s pushing the envelope of hardware, as Kepler Robot has just introduced its latest Forerunner K2. And what stands out in particular is its integrated limb design, providing enhanced rigidity and ease of manufacturing and maintenance.

Each arm boasts an impressive 11 degrees of freedom, combining active and passive movements and a single-hand load capacity of up to 15 kilograms. This flexibility is complemented by sophisticated fingertip technology, featuring 96 flexible sensors per fingertip for precise manipulation and interaction with all shapes of objects. Furthermore, the robot’s Starline wiring method simplifies the internal wiring, contributing to its robust and efficient design. And powering the K2 is a 2.3 kilowatt-hour battery, ensuring up to eight hours of uninterrupted operation. Plus, this battery includes both direct and automatic charging interfaces to enhance its usability across various environments, making it capable of executing extended tasks both indoors and outdoors.
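As a back-of-envelope check on the stated runtime, dividing the battery’s energy by the claimed eight hours gives the average power budget the whole robot (compute, actuators, sensors) must stay within. The sketch below just restates the article’s figures; it is not based on any measured data from Kepler.

```python
# Back-of-envelope power budget from the article's stated figures.
battery_wh = 2.3 * 1000  # 2.3 kWh battery, expressed in watt-hours
runtime_h = 8            # claimed hours of uninterrupted operation

avg_power_w = battery_wh / runtime_h
print(f"Average power budget: {avg_power_w:.1f} W")  # → 287.5 W
```

In other words, an eight-hour claim implies the K2 averages under 300 watts across everything it does, which is why duty-cycled actuators and efficient onboard compute matter so much for humanoids.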

And when it comes to intelligence, Kepler’s Forerunner K2 operates with a sophisticated AI framework, combining a cloud-based large model with a unique brain and embodied cerebellum architecture. This design enhances the robot’s ability to autonomously complete tasks in specific scenarios, thanks to a blend of imitation learning and reinforcement learning techniques. And for its brains, the robot’s onboard hardware packs an impressive 100 trillion operations per second of computing power, allowing it to process complex data and make real-time decisions effectively. Furthermore, it’s also equipped with high-performance GPU motherboards and advanced visual recognition and navigation systems, giving the K2 the ability to execute fully autonomous planning and precision control.

And the Forerunner K2 is already undergoing real-world testing, performing various tasks to showcase its ability to adapt to and excel in complex work environments. And as the K2 robot learns, its deployment advances smoothly, with Kepler’s algorithms facilitating seamless integration of preset actions and autonomous decision-making capabilities. Meanwhile, researchers have developed a breakthrough system that brings humanoid robots closer to real-world autonomy using the improved 3D diffusion policy, which is a brand new method that enables robots to perform complex tasks in unpredictable environments without constant supervision. And at the heart of this breakthrough is IDP3, the new visual motor policy designed for general-purpose robots.

Unlike traditional methods requiring precise camera setups, IDP3 uses egocentric 3D visual data and relies on the robot’s own camera frame. This approach removes many constraints, allowing robots to function flexibly in dynamic environments and learn to interact with the world from their own perspective, a crucial step toward true autonomy. Plus, the system also includes a humanoid robot platform as a robust data collection mechanism for real-world deployment strategies. For training, researchers use Apple Vision Pro to teleoperate the robot’s upper body in order to track human movements with precision. This data enables real-time mimicry using relaxed inverse kinematics, and the inclusion of waist movement enhances the robot’s range of motion and dexterity, with the Apple Vision Pro also streaming the robot’s vision back to the operator to create a seamless interaction loop.
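The teleoperation mimicry described above boils down to an inverse-kinematics tracking loop: at each tick, the tracked hand pose becomes a target, and the solver nudges the joint angles toward it. The sketch below shows one damped-least-squares IK step on a toy 2-link planar arm; the arm model, link lengths, and damping value are illustrative assumptions, not the actual relaxed-IK solver used with IDP3.

```python
import numpy as np

# Toy 2-link planar arm used to illustrate an IK tracking step.
L1, L2 = 0.3, 0.25  # illustrative link lengths in meters

def forward_kinematics(q):
    """End-effector (x, y) position for joint angles q = [q0, q1]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian of the end-effector position w.r.t. joint angles."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_step(q, target, damping=0.05):
    """One damped-least-squares update toward a tracked hand position.

    Solves (J^T J + lambda^2 I) dq = J^T e; the damping term keeps the
    step well-behaved near kinematic singularities.
    """
    error = target - forward_kinematics(q)
    J = jacobian(q)
    dq = np.linalg.solve(J.T @ J + damping**2 * np.eye(2), J.T @ error)
    return q + dq

# Track a single hand-position target, as a teleop loop would each tick.
q = np.array([0.5, 0.5])
target = np.array([0.35, 0.25])
for _ in range(50):
    q = ik_step(q, target)
print(forward_kinematics(q))  # converges to approximately [0.35, 0.25]
```

In a real teleop pipeline this step runs continuously against the streamed headset pose, and a "relaxed" formulation adds soft objectives (smoothness, joint limits) rather than demanding an exact solution every tick.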

But overcoming deployment challenges is also a crucial component for robots operating in uncontrolled environments, where adapting to new views, objects, and scenes is essential. This is important because traditional robots often fail when faced with slight variations from their training conditions. However, IDP3 excels in three types of generalization: view, object, and scene. In terms of view generalization, IDP3 adeptly grasps objects from various camera angles, handling even drastic perspective changes with ease and surpassing previous systems like the diffusion policy, which struggled with minor shifts. And when it comes to object generalization, IDP3 manages unfamiliar objects effortlessly, outperforming older methods reliant on data augmentation techniques like color jitter.
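For context on what IDP3 sidesteps, the sketch below shows the kind of color-jitter augmentation that older image-based policies lean on to cope with appearance variation. This is an illustrative stand-in, not IDP3’s code; the point of the contrast is that IDP3 operates on 3D points rather than color features, so it does not need this trick.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_jitter(image, brightness=0.2, contrast=0.2):
    """Randomly scale contrast and shift brightness of an RGB image in [0, 1].

    A simplified version of the augmentation 2D visuomotor policies use to
    generalize across object appearances (illustrative parameters).
    """
    c = 1.0 + rng.uniform(-contrast, contrast)     # random contrast factor
    b = rng.uniform(-brightness, brightness)       # random brightness offset
    mean = image.mean(axis=(0, 1), keepdims=True)  # per-channel mean
    return np.clip((image - mean) * c + mean + b, 0.0, 1.0)

image = rng.random((4, 4, 3))   # toy 4x4 RGB image
augmented = color_jitter(image)
print(augmented.shape)          # (4, 4, 3)
```

Augmentations like this only paper over appearance shifts; a policy built on 3D geometry is invariant to them by construction, which is the argument the passage above is making.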

This emphasis on 3D representations, rather than color features, allows for precise manipulation of diverse objects. And for scene generalization, IDP3 adapts seamlessly to new environments, maintaining consistent and fluid movements. It even effectively interacts with untrained objects, unlike diffusion policies, which often faltered in real-world settings. And the implications of this new technology are vast. Robots equipped with IDP3 can autonomously adapt on factory floors, handling different parts and troubleshooting in real time. This system is therefore a serious leap forward in robotics, bridging the gap between controlled labs and the unpredictable real world.

And by enabling robots to generalize skills across views, objects and environments, IDP3 paves the way for truly autonomous machines capable of complex tasks. And finally, Microsoft is automating desk jobs, as it just announced new agent capabilities to further transform business operations with Copilot Studio, which will be entering public preview next month, to enable organizations to create autonomous agents. These agents, viewed as new apps for an AI-driven world, will perform tasks ranging from simple responses to fully autonomous business processes, handling everything from lead generation to supply chain automation. Here’s how it works.

Agents will draw data from Microsoft 365 Graph and other systems, supporting functions like IT help desks and employee onboarding. And already, organizations like Clifford Chance and McKinsey & Company are utilizing these agents to increase revenue and reduce costs. For instance, Pets at Home has already developed an agent that can save millions of dollars annually by compiling cases for human review more efficiently. But Microsoft also just unveiled 10 new autonomous agents in Dynamics 365 as well, designed to enhance productivity across sales, service, finance, and supply chain teams. Some notable examples include the sales qualification agent, which prioritizes sales opportunities, and the supplier communications agent, which optimizes supply chains by monitoring supplier performance.

And currently, Microsoft itself is already using Copilot and agents to transform internal processes, achieving remarkable improvements in sales, customer service, and marketing. With all of this progress toward automation, Microsoft is now also inviting businesses to explore further capabilities as partners start building agents in Copilot Studio.
