Summary
Transcript
Number one. Perception. This robot’s 360-degree multimodal perception features a head design with integrated panoramic fisheye cameras and 3D stereoscopic vision. This allows the S1 to perceive its environment comprehensively using all four of its RGBD cameras, making it adept at navigating and understanding complex industrial environments. Number two. Manipulation. The Walker S1 is equipped with a total of 41 servo joints and two dexterous hands, each featuring six tactile sensor arrays for pressure sensing. This full-stack dexterous manipulation, combined with advanced strategy management, ensures that the robot can handle delicate and intricate tasks with human-like precision.
And for power, the robot’s custom-designed integrated joints and new rotary drive offer a peak torque of 250 Nm to further enhance its manipulation capabilities. Number three. Software. Another core strength of the Walker S1 is its onboard artificial intelligence, which runs on the ROSA 2.0 robotics operating system and makes the robot capable of sophisticated task planning and execution. Number four. Navigation. The Walker S1 employs a novel semantic visual SLAM navigation system, which integrates semantic perception information with traditional V-SLAM technology. This two-stage semantic navigation enhances the robot’s spatial understanding and addresses visual positioning challenges, ensuring reliable performance under varying lighting and environmental conditions.
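To make the "two-stage" idea concrete, here is a minimal toy sketch of a semantic-then-geometric localisation step, loosely inspired by the description above. Every label, region name and score here is a hypothetical illustration, not UBTech's actual implementation.

```python
# Stage 1: coarse localisation — shortlist the map region whose stored
# semantic labels best overlap what the cameras currently detect.
# (Hypothetical map and labels, for illustration only.)
semantic_map = {
    "assembly_line": {"conveyor", "robot_arm", "pallet"},
    "loading_dock":  {"pallet", "truck", "shutter_door"},
    "office":        {"desk", "chair", "monitor"},
}

def coarse_localise(detected_labels):
    # Score each region by label overlap (Jaccard similarity).
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(semantic_map, key=lambda r: jaccard(detected_labels, semantic_map[r]))

detected = {"conveyor", "pallet"}
region = coarse_localise(detected)
print(region)  # assembly_line

# Stage 2 (not shown): refine the pose inside that region with
# conventional V-SLAM feature matching, which is more robust once the
# search space has been narrowed semantically.
```

The design point is that semantic labels change far less under varying lighting than raw image features do, which is why a semantic first stage can stabilise visual positioning.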
Number five. Motion. The S1 also incorporates a learning-based whole-body motion control framework, integrating perception and control through end-to-end learning. This approach allows the robot to develop dexterous manipulation skills and stable walking abilities, essential for executing complex, unstructured tasks in industrial settings. Number six. Physique. The Walker S1 stands 172 cm tall, weighs 76 kg and has an impressive carrying capacity of 15 kg, allowing it to handle common workplace tasks with ease. Number seven. Solutions. Importantly, this robot also excels at solving three critical industrial application challenges. The first is visual positioning: the robot adapts to different lighting and environmental conditions to ensure accurate visual positioning.
Second, it solves the problem of motion control by maintaining robust control even under dynamic high-load conditions. And finally, it solves component overheating by managing joint cooling during prolonged high-load operations. It’s already working in several real-world factories. But that’s only the beginning, because in another advancement, Robot Era just ran its new Star 1 humanoid robots in a unique race through the Gobi Desert, outpacing humans on foot. The race featured two Star 1 humanoids: one sported a pair of sneakers, while the other ran without any footwear at all, and the results were shocking.
As for specifications, the Star 1 stands 171 cm tall and weighs 65 kg, giving it a very distinctive running style. Robot Era’s focus on robotic limb control and dynamic centre-of-gravity adjustment is crucial for autonomous movement on rugged terrain. This involves managing soft, uneven surfaces with flexible joints that can absorb impacts, while traversing hard surfaces conversely requires precise joint control for stability. Despite its late start, the sneaker-wearing humanoid quickly overtook its opponent, reaching speeds of up to 3.6 metres per second over a 34-minute run.
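As a rough sanity check on those figures (my arithmetic on the quoted numbers, not data from Robot Era): if the robot had held its 3.6 m/s peak for the entire 34 minutes, the run would be bounded as follows.

```python
# Upper bound on distance, assuming (hypothetically) the quoted 3.6 m/s
# peak speed was sustained for the whole 34-minute run.
peak_speed_mps = 3.6                          # quoted peak, metres per second
run_minutes = 34                              # quoted run duration
distance_m = peak_speed_mps * run_minutes * 60

print(f"{distance_m / 1000:.2f} km upper bound")   # 7.34 km
print(f"{peak_speed_mps * 3.6:.1f} km/h")          # 13.0 km/h, i.e. a brisk human jog
```

Since 3.6 m/s was the peak rather than the average, the actual distance covered was somewhat less than this bound.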
Both robots were powered by the same proprietary 400 Nm joint motors, featuring precision planetary reducers, high-precision encoders and high-speed communication modules. The Star 1 also integrates advanced AI and large language model technologies, allowing it to learn new skills rapidly and adapt to various tasks. This flexibility enables the robot to switch between running, walking and jumping modes across different terrains, including roads, grass and deserts. The built-in AI model supports both imitation learning and reinforcement learning. Additionally, Robot Era claims to have developed the first denoising world model, enabling the humanoid to predict and extract key environmental information from simulations.
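The "denoising" idea can be illustrated with a deliberately simple stand-in: corrupt a clean signal the way simulated sensors get corrupted, then recover it. Real denoising world models are learned neural networks; the moving-average filter below is purely an illustrative sketch, not Robot Era's method.

```python
import numpy as np

# Toy illustration: recover an idealised sensor trace from noisy
# simulated observations. (Hypothetical data; illustration only.)
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 200))      # idealised sensor trace
noisy = clean + rng.normal(0, 0.3, clean.shape)     # simulated sensor noise

kernel = np.ones(9) / 9                             # crude smoothing "denoiser"
denoised = np.convolve(noisy, kernel, mode="same")

print(f"noisy error:    {np.abs(noisy - clean).mean():.3f}")
print(f"denoised error: {np.abs(denoised - clean).mean():.3f}")
```

The point of a learned world model is the same in spirit: strip noise from simulated observations so that the policy trained on them extracts only the key environmental information.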
And finally, Adobe just introduced its new Firefly AI video model alongside several new AI features for Photoshop. First, the Firefly AI video model allows users to create or modify videos using simple text prompts, making video production faster and more accessible. Its generated outputs are 5-second video clips at 720p resolution and 24 frames per second, using either text descriptions or image input. This feature is intended for filmmakers and marketers to quickly create B-roll, filler footage or visualize special effects before shooting. And then there’s Adobe Premiere Pro with its new Extend feature that allows users to add up to 2 seconds of footage to existing clips, available in 720p or 1080p at 24 frames per second.
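For a sense of scale, the quoted specs imply the following frame counts (simple arithmetic on the source's numbers, nothing more).

```python
# Frame counts implied by the quoted Firefly and Premiere Pro specs.
fps = 24                  # quoted frame rate
clip_seconds = 5          # Firefly text/image-to-video clip length
extend_seconds = 2        # Premiere Pro Extend limit

print(clip_seconds * fps)    # 120 frames per generated clip
print(extend_seconds * fps)  # 48 extra frames per extended clip
```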
Additionally, Firefly can also generate atmospheric effects, such as fire, water, smoke and reflections, which can be integrated into video projects in Premiere Pro or After Effects using black or green screen overlays. Alongside the video model, Adobe introduced several new AI-powered tools for Photoshop, beginning with the improved Remove tool, which can now eliminate unwanted elements like people or wires with just a click to significantly speed up the editing process for clean images. Plus, the Generative Fill tool has also been enhanced, producing more realistic and context-aware results when filling gaps in images or expanding content.
And paired with the new Generative Expand feature, it lets users extend images beyond their original borders. Perhaps the most exciting addition to Photoshop is the new Generative workspace, currently in beta, which enables users to explore multiple AI-generated concepts and iterations simultaneously. Anyways, like and subscribe for more AI news and check out these bonus clips. Thank you.