Google's NEW AI Robot Stuns the Entire Tech Industry
Summary
➡ Google DeepMind and Stanford University introduced a low-cost, open-source robot called Mobile Aloha which can do a variety of tasks usually done by humans.
➡ This robot can make food and do complex tasks with almost 90% success, showing an important step in robot learning.
➡ Mobile Aloha uses a new learning method that copies human actions, allowing it to learn and do tasks by itself, accurately and reliably.
➡ Mobile Aloha is steady, fast, and can navigate different places. It’s built with a self-powered system and can be remotely controlled by human operators for detailed tasks.
➡ Although the robot isn’t as fast as humans, it can do tasks very well. Future plans include reducing the size of the robot and increasing the range of tasks it can do, extending its use in different places.
➡ Mobile Aloha costs around $32,000, making it affordable for research or business use. By making it open-source, access to advanced robotics is expanded, potentially paving the way for cheaper and more efficient robot solutions.
Transcript
Google DeepMind and Stanford University just unveiled one of the world's most dexterous and cost-effective open-source robots, with the AI to match, bringing about a new standard of automation and machine intelligence. But what can it do? In fact, Mobile Aloha's debut on Twitter was nothing short of shocking, as it dexterously and masterfully prepared a three-course meal, a task previously deemed an exclusively human skill. This remarkable feat not only captured widespread attention but also demonstrated a new tier of robot dexterity and generalizability.
The project is the brainchild of Zipeng Fu, Tony Z. Zhao, and Chelsea Finn, a dynamic team blending robotics expertise from both Stanford University and Google DeepMind, with this collaboration having birthed a machine that's redefining the limits of general robotics. At the heart of Mobile Aloha's success lies an innovative training approach: the robot has been taught through imitation learning, closely mimicking human actions. This method has been enriched with supervised behavior cloning and co-training, enabling Mobile Aloha to perform a wide array of tasks autonomously with impressive accuracy.
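The recipe described above, imitation learning via supervised behavior cloning, co-trained on more than one demonstration set, can be sketched in miniature. The snippet below is a hypothetical toy, not the team's actual code: a linear "policy" fit by least squares on pooled mobile and static demonstration data stands in for the real neural network trained on teleoperated trajectories.

```python
import numpy as np

# Behavior cloning treats imitation as supervised learning:
# observations in, human-demonstrated actions out.
rng = np.random.default_rng(0)
true_w = np.array([[0.5], [-1.0]])  # hidden "expert" mapping (toy stand-in)

# Two demonstration sets, as in co-training: a small task-specific mobile
# set plus a larger static dataset sharing the same action space.
obs_mobile = rng.normal(size=(50, 2))
obs_static = rng.normal(size=(500, 2))
act_mobile = obs_mobile @ true_w
act_static = obs_static @ true_w

# Co-training: pool both datasets and fit a single policy on the union.
obs = np.vstack([obs_mobile, obs_static])
act = np.vstack([act_mobile, act_static])
w, *_ = np.linalg.lstsq(obs, act, rcond=None)  # least-squares "policy"

# The cloned policy should reproduce expert actions on fresh observations.
test_obs = rng.normal(size=(5, 2))
print(np.allclose(test_obs @ w, test_obs @ true_w))  # True
```

The pooling step is the essence of the idea: the large static dataset teaches general manipulation structure, while the small mobile dataset grounds the policy in the target setting.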
And witnessing Mobile Aloha in action is nothing short of breathtaking. The robot has demonstrated up to a 90% success rate in autonomously executing complex tasks like cooking shrimp and operating elevators. Its ability to learn and adapt to tasks beyond its initial programming highlights a significant leap in robotic learning capabilities. But it’s not just about task execution. Mobile Aloha’s design focuses on stability and mobility, crucial elements in a dynamic environment.
Equipped with a self-powered system and the Tracer mobile base commonly used in warehouses, this robot is as fast and steady as a walking human, yet versatile enough to navigate various terrains. Furthermore, Mobile Aloha includes a teleoperation mode, allowing human operators to remotely control it, an essential feature for tasks requiring intricate interaction. Moreover, its operation extends into the virtual realm with VR teleoperation, showcasing the robot's adaptability across diverse applications, from rescue operations to intricate manufacturing processes.
But what sets Mobile Aloha furthest apart in the world of robotics is its affordability and open-source nature. Priced at around $32,000 including peripherals, its cost-effectiveness not only makes it a viable option for research and commercial use but also challenges the traditional view of advanced robotics as prohibitively expensive. The secret to Mobile Aloha's efficiency lies in its co-training learning process. By learning from two distinct sets of examples, the robot has mastered tasks that require both precise hand coordination and mobility, a feature that significantly enhances its functionality and application scope.
Real-time demonstrations of the robot offer a glimpse into the current state and future potential of robotics. Although it may not yet match human speed, the robot's effectiveness in task execution is undeniably impressive, providing a clear roadmap for future enhancements. Looking ahead, the team aims to scale down the robot's size and increase its range of movement to make it suitable for a wider range of environments. They also plan on broadening the robot's learning abilities to include a more diverse array of task examples. The implications of this robot's development are even more far-reaching, as it opens up conversations about the integration of robotics into everyday life, the potential displacement of jobs, and the role of autonomous robots in simplifying human tasks.
By making the robot platform open source, the team is also effectively democratizing access to advanced robotics, setting a new standard in the research community, paving the way for more cost-effective and efficient robotics solutions, and challenging the notion that cutting-edge robotics are out of reach. Meanwhile, LimX Dynamics has just unveiled its latest CL-1 humanoid robot in a new video that demonstrates the robot's remarkable ability to navigate complex terrain autonomously.
The video shows the CL-1 climbing stairs and walking down slopes, with the company explaining that the robot successfully closes the loop between real-time perception, gait planning, locomotion control, hardware, and the data stream. This integration allows the CL-1 to perceive its terrain in real time and proactively adjust its motion, enabling it to navigate curbs, climb stairs dynamically, and walk down slopes. To ensure the robustness and reliability of the CL-1, LimX Dynamics conducted extensive tests both indoors and outdoors, and from afternoon to dusk.
This rigorous testing regime was designed to evaluate the robot's performance under different environmental conditions, confirming its ability to adapt to various scenarios. Finally, Google Research and DeepMind just unveiled yet another innovation known as the Articulate Medical Intelligence Explorer, or AMIE. This model is transforming medical diagnostics as a specialized chatbot. AMIE steps beyond traditional healthcare AI, focusing on the complex task of differential diagnosis. Unlike typical healthcare AIs that create medical summaries or answer questions, AMIE is uniquely tailored for diagnostic assistance.
Developed on Google's powerful PaLM 2 model and trained on extensive medical data, AMIE employs an innovative self-play-based diagnostic dialogue system involving simulated conversations with an AI patient simulator. These interactions, reviewed and evaluated for effectiveness, serve as a basis for continuous refinement. This iterative learning process, complemented by chain-of-thought reasoning, has significantly elevated AMIE's conversational quality. In fact, in a double-blind study, AMIE matched or surpassed human doctors in text-based medical consultations.
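The self-play idea can be illustrated with a deliberately tiny toy, entirely hypothetical and unrelated to AMIE's actual implementation: a "doctor" routine questions a simulated patient and prunes a differential diagnosis down to the conditions consistent with every answer. The condition table and symptom names below are invented for illustration.

```python
# Toy self-play diagnostic dialogue: a doctor agent interrogates a
# simulated patient whose condition is hidden, narrowing the differential.
CONDITIONS = {
    "flu":     {"fever": True,  "rash": False, "cough": True},
    "measles": {"fever": True,  "rash": True,  "cough": True},
    "allergy": {"fever": False, "rash": True,  "cough": False},
}

def simulated_patient(condition: str, symptom: str) -> bool:
    """Plays the patient: answers truthfully from a hidden condition."""
    return CONDITIONS[condition][symptom]

def diagnose(condition: str) -> set:
    """Plays the doctor: asks about each symptom, prunes the differential."""
    candidates = set(CONDITIONS)
    for symptom in ("fever", "rash", "cough"):
        answer = simulated_patient(condition, symptom)
        candidates = {c for c in candidates
                      if CONDITIONS[c][symptom] == answer}
    return candidates

print(diagnose("measles"))  # {'measles'}
```

In the real system both roles are played by language models and the transcripts are graded and fed back for refinement; the toy keeps only the structural loop of question, answer, and elimination.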
Tested with trained patient actors and compared against real physicians, AMIE displayed higher diagnostic accuracy and stronger performance on critical aspects of consultation quality. Although evaluated in a controlled environment, these findings, graded by medical specialists, highlight AMIE's potential as a reliable diagnostic aide. Despite its achievements, AMIE's current format, which focuses on text consultations, may not entirely replicate the depth of face-to-face human interactions in medicine. The study's design, emphasizing text-based interactions and rarer illnesses, suggests room for further development to realize AMIE's full potential in diverse clinical scenarios.
Google's AMIE stands at the forefront of integrating AI into healthcare diagnostics. With ongoing research focusing on fairness, privacy, and real-world applicability, AMIE is poised to become a vital tool in modern healthcare, enhancing the accuracy and efficiency of medical diagnostics as it evolves. AMIE promises to be a valuable asset in supporting healthcare professionals and improving patient outcomes. Altogether, the unveiling of Google DeepMind and Stanford University's Mobile Aloha is a significant open-source leap toward cost-effective robots capable of intricate tasks like cooking and operating machinery, while the CL-1 robot demonstrates advanced terrain navigation, both marking a significant advancement in autonomous robotics.
Furthermore, Google's AMIE is revolutionizing medical diagnostics with a next generation of AI for enhanced quality and accuracy of medical consultations. These innovations are quickly paving the way for a future where the boundaries between humans, AI, and robots will either become blurred or disappear entirely, with robots becoming ever cheaper and AI models surpassing human abilities by the day. It stands to reason that the introduction of these intelligent sidekicks will at first result in more net benefits than net negatives.
But will this remain the case moving into the distant future? In the end, only time will tell whether or not this trust was one of our last mistakes.