A breakthrough in bimanual robot dexterity has just been unveiled with the introduction of the Autonomous Domestic Ambidextrous Manipulator, or ADAM for short, a robot capable of intelligently managing household activities, and likely much more soon. ADAM is designed for a wide range of indoor tasks, featuring a vision system plus two arms equipped with grippers. Even more importantly, it is designed to learn and improve its own tasks through an advanced imitation learning method.

ADAM comes in response to a growing need for assistance, particularly among the elderly, with the research team identifying a gap in the market for a robot that could not only perform cognitive tasks such as memory training and dementia symptom alleviation, but also handle physical tasks within the home. ADAM stands out from other personal robots with its modular design, comprising a base, cameras, arms, and hands that provide multiple sensory inputs. This modular approach allows components to operate independently or cooperatively at various levels, making ADAM a versatile tool for both research and practical care. Its arms are designed for collaboration, moving in response to the immediate environment and ensuring safety by continuously monitoring for the presence of people.

At 160 cm tall, ADAM resembles a petite human adult, with arms that extend up to 50 cm and handle loads of up to 3 kg. This humanlike design lets ADAM integrate seamlessly into domestic settings, navigating spaces and interacting with environments designed for human use. Batteries in its base power its movements, cameras, and 3D LiDAR sensors for up to 4 hours on a single charge. The robot's computational power is divided between two internally connected computers, with Wi-Fi modules enabling external communication. Advanced sensory equipment, including an RGB-D camera and LiDAR sensors, facilitates navigation and object interaction, further enhanced by a dual gripper system in its hands for precise object manipulation. From navigating through doors to sweeping floors, moving furniture, setting tables, preparing simple meals, and even pouring water, ADAM can adapt to homes of various sizes while ensuring both safety and optimal performance.

Impressively, ADAM's effectiveness was demonstrated in the Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People project, where it worked alongside a second robot. This collaboration resulted in a remarkable 93% satisfaction rate, showcasing the potential of using multiple robots for enhanced care. Future improvements will focus on enhancing ADAM's perception system, bimanual capabilities, and task-planning strategies, with the ultimate goal of creating a more complete robotic companion for elderly care. Its researchers, from Robotnik and UC3M, emphasize the need for real-world testing in authentic domestic environments to fully assess its effectiveness and user satisfaction.
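The team has not published ADAM's learning code, so the following is only a minimal sketch of the imitation-learning idea described above: record state-action pairs from human demonstrations and train a policy network to reproduce them (behavioral cloning, one common form of imitation learning). The dimensions, network sizes, and data here are illustrative assumptions, not ADAM's actual implementation.

```python
# Hypothetical behavioral-cloning sketch of the imitation-learning idea.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

STATE_DIM = 14   # assumed: joint angles + gripper state for two arms
ACTION_DIM = 14  # assumed: target joint velocities

# A small policy network mapping observed state -> action.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in demonstration data: in a real system these would be (state, action)
# pairs recorded while a human teleoperates or physically guides the arms.
demo_states = torch.randn(512, STATE_DIM)
demo_actions = torch.randn(512, ACTION_DIM)

for epoch in range(100):
    pred = policy(demo_states)          # actions the policy would take
    loss = loss_fn(pred, demo_actions)  # penalize deviation from the demonstrator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the demonstration data would come from teleoperation or kinesthetic teaching, and the trained policy would then be refined as the robot improves its own tasks over time.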
On top of this dexterity, researchers from Carnegie Mellon University and Google DeepMind just unveiled RoboTool, a groundbreaking system designed to enable robots to use tools creatively and imbue them with a stunning level of problem-solving prowess. Creative tool use is a concept familiar to humans and a select few animal species, involving the ability not just to use tools for their intended purposes, but to employ them in novel ways to achieve desired outcomes. This advanced intelligence is characterized by the capacity to predict outcomes and innovate solutions. The challenge primarily lies in teaching robots to navigate all of the unknown unknowns of creative tool use without direct demonstrations.

The solution proposed by the researchers leverages large language models to give robots the ability to brainstorm solutions by drawing on knowledge extracted from the Internet. RoboTool uses LLMs to process natural language descriptions of a robot's environment and task, generating directly executable Python code. This code serves as a plan for the robot to complete tasks using whatever tools are at hand. Importantly, instead of giving robots concrete directions, RoboTool provides a high-level objective, leaving the specifics of tool use to the robot's discretion; a minimal sketch of this pipeline appears below.

The power of RoboTool was demonstrated through a series of tests involving two robots tasked with tool selection, sequential tool use, and tool manufacturing. Tasks ranged from retrieving a milk carton and navigating from one sofa to another, to more complex challenges such as using a series of tools in a specific order and crafting tools from available materials. These tests showcased the robots' ability to understand object properties, analyze their relevance to the task at hand, and creatively manipulate tools to achieve their goals.

Looking ahead, the research team plans to integrate vision models into RoboTool to further enhance the system's perception and reasoning capabilities. Efforts are also underway to develop more interactive ways for humans to guide and participate in the robot's creative tool use. This development not only pushes the boundaries of what robots can achieve, but also opens up new possibilities for their application across fields. By enabling robots to creatively utilize tools, RoboTool paves the way for them to tackle a broader range of tasks, some of which may previously have been deemed impossible for robotic systems: a monumental stride towards more autonomous, intelligent, and adaptable robotic assistants capable of solving complex problems in innovative ways.
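The authors describe the pipeline only at a high level: a natural-language description of the environment and goal goes in, executable Python comes out. A minimal sketch of that loop might look like the following; the model choice, prompt, and robot functions (move_to, grasp, push, release) are assumptions for illustration, not RoboTool's released interface.

```python
# Hypothetical sketch of a RoboTool-style pipeline: describe the scene and
# goal in natural language, ask an LLM for a Python plan, then hand that plan
# to a (stubbed) robot API. Not the authors' actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCENE = """Objects: milk_carton (on counter), stool (near counter), robot_arm.
The milk carton is 20 cm beyond the arm's reach."""
GOAL = "Retrieve the milk carton, using any object in the scene as a tool."

API_DOC = """Available robot functions (assumed for this sketch):
move_to(obj), grasp(obj), push(obj, direction, distance_cm), release()"""

prompt = (
    "You control a robot via Python. Consider which objects can serve "
    "as tools, then output only executable Python code.\n\n"
    f"{API_DOC}\n\nScene:\n{SCENE}\n\nGoal: {GOAL}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the paper used a GPT-4-class LLM
    messages=[{"role": "user", "content": prompt}],
)
plan_code = response.choices[0].message.content

# In a real system the generated code would be sandboxed and validated
# before being executed on the robot's control stack.
print(plan_code)
```

Note that the prompt supplies only an objective and an API, never step-by-step directions; that is what leaves the choice and use of tools to the model, in the spirit of the paper.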
Robot dexterity is also being flaunted by a new humanoid robot capable of real-time sketching, mirroring the nuanced approach of human artists. This advancement, detailed in Cognitive Systems Research, marks a significant leap beyond traditional AI-generated art, bringing the creative process into the tangible world through robotic innovation. Traditionally, AI-generated art has been the domain of digital algorithms, but the team sought to transcend this by enabling a robot to engage directly in the act of creation. Through a deep reinforcement learning model, the robot learns to produce sketches stroke by stroke, resembling the way humans create art. The project differs from existing robotic art creators that operate more like printers, simply reproducing predesigned images. Instead, this humanoid robot uses advanced learning techniques to decide how to draw, introducing a level of creativity and spontaneity into its artwork.

This approach was inspired by prior research into robotic painters trained on the Quick, Draw! dataset with deep Q-learning to execute complex, emotionally expressive trajectories. Key to the method is a pretraining step involving a random stroke generator, which enhances the robot's ability to mimic human sketching techniques. The model also integrates additional information on distances and tool positions, guiding the robot's movements to achieve more lifelike drawings. The researchers tackled the challenge of translating AI-generated images to a physical canvas by creating a virtual space within it, allowing the robot to follow the model's instructions accurately; a rough sketch of that digital-to-physical mapping follows below. This method enables the direct translation of digital sketches into real-world artwork.

As technology continues to evolve, this intersection of art and machine opens new avenues for exploring creativity, blurring the lines between human and robotic artistry. With these advances, robot dexterity is set to quickly outpace human dexterity.
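As a closing illustration, the digital-to-physical translation described above can be sketched as a simple affine mapping from a normalized virtual canvas onto the robot's physical drawing surface. The canvas origin and size below are assumed values for illustration, not figures from the paper.

```python
# Hypothetical sketch of the digital-to-physical translation step: strokes
# planned in a normalized virtual canvas are mapped onto the robot's physical
# drawing surface. Canvas placement values are assumptions.
import numpy as np

# Physical canvas: corner position (metres, in the robot's base frame) and size.
CANVAS_ORIGIN = np.array([0.30, -0.15])  # assumed x, y of the canvas corner
CANVAS_SIZE = np.array([0.30, 0.21])     # assumed A4-ish drawing area

def virtual_to_physical(stroke):
    """Map an (N, 2) array of virtual points in [0, 1]^2 to base-frame metres."""
    stroke = np.clip(stroke, 0.0, 1.0)   # keep the pen on the canvas
    return CANVAS_ORIGIN + stroke * CANVAS_SIZE

# A stroke the sketching model might emit: a short diagonal line.
virtual_stroke = np.array([[0.10, 0.10], [0.25, 0.30], [0.40, 0.55]])
waypoints = virtual_to_physical(virtual_stroke)
print(waypoints)  # end-effector targets for the drawing motion
```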