Transcript
But an even more distinctive feature is Aria’s ability to adapt her personality based on the face she’s wearing. It works by using RFID tags embedded in interchangeable facial attachments, which the robot recognizes as a specific face tied to a specific behavior, meaning that if your robot girlfriend gets mad, you can simply take her face off to get a break. This modularity lets users choose different personalities and appearances for a highly customizable experience. Users can even switch out bodies for character modularity, while the head contains micro cameras in each iris for facial recognition. And the platform is AI agnostic, meaning it plugs seamlessly into most AI platforms, including OpenAI, Meta, DeepMind, Stability, and more, for extendable behaviors and programming.
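To make that concrete, here is a minimal sketch of how an RFID-keyed face swap could drive a personality switch; the tag IDs, profile fields, and the read_tag stub are all hypothetical, since Aria’s actual protocol hasn’t been published.

```python
# Minimal sketch of RFID-keyed personality switching (hypothetical tag IDs
# and personality profiles; the real Aria protocol is not public).

FACE_PROFILES = {
    0x01A4: {"name": "cheerful", "voice_pitch": 1.2,
             "system_prompt": "You are upbeat and playful."},
    0x02B7: {"name": "calm", "voice_pitch": 0.9,
             "system_prompt": "You are soothing and measured."},
}

def read_tag() -> int | None:
    """Stub for the RFID reader behind the face mount; returns a tag ID or None."""
    return 0x01A4  # pretend a face was just attached

def on_face_attached() -> None:
    tag = read_tag()
    profile = FACE_PROFILES.get(tag)
    if profile is None:
        print("Unknown face attached; staying in neutral mode.")
        return
    # Reconfigure behavior: swap the system prompt sent to whichever backend
    # (OpenAI, Meta, etc.) the AI-agnostic platform is currently wired to.
    print(f"Switching to '{profile['name']}' personality")
    print(f"System prompt -> {profile['system_prompt']}")

on_face_attached()
```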
Furthermore, users can even order customized faces to attach to their Aria, with the robot’s exterior made of silicone for durability. Aria comes in three distinct models: the entry-level bust version, which includes only the head and neck, for $10,000; the modular version, which can be disassembled and reconfigured, for $150,000; and the fully assembled standing version with a rolling base for $175,000. But another humanoid is also making waves for different reasons. Standing 118 centimeters tall and weighing approximately 30 kilograms, the Booster T1 is a compact humanoid designed for various applications.
Additionally, its 23 degrees of freedom allow for smoother and more realistic movements: each leg features six degrees of freedom for dynamic walking and stability, each arm includes four degrees of freedom, extendable for enhanced object manipulation, the waist has a rotation range of plus or minus 58 degrees, and the head offers two degrees of freedom for more nuanced motions. As for strength, the T1’s knee joints boast an impressive torque of 130 newton meters, enabling it to handle rigorous tasks or traverse uneven terrain. As for hardware, the robot includes a 14-core high-performance processor and an NVIDIA AGX Orin GPU for its brain, delivering a staggering 200 trillion operations per second of AI performance.
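Since the transcript quotes a full degree-of-freedom budget, it is easy to script a quick sanity check that the per-limb counts actually sum to 23; the grouping below uses only the figures above, and the joint naming is an assumption for illustration.

```python
# Sketch of the T1's published degree-of-freedom budget as a data structure.
# Counts and the 130 N·m knee torque come from the spec quoted above; the
# per-joint grouping is assumed for illustration.

DOF_BUDGET = {
    "head":      2,
    "waist":     1,   # +/-58 degree rotation range
    "left_arm":  4,
    "right_arm": 4,
    "left_leg":  6,
    "right_leg": 6,
}

KNEE_TORQUE_NM = 130  # peak knee joint torque from the spec

total = sum(DOF_BUDGET.values())
assert total == 23, f"expected 23 DoF, got {total}"
print(f"Total DoF: {total}, peak knee torque: {KNEE_TORQUE_NM} N·m")
```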
This gives the robot the ability to handle real-time computational tasks such as environmental mapping and decision making. For sensing, it relies on a depth camera for vision, a nine-axis inertial measurement unit for balance and orientation, and dual encoders for precise joint control. Additionally, an optional gripper end effector allows the robot to interact with objects in the real world, further expanding its range of potential use cases. As for power, the T1 relies on a 10.5 amp-hour battery, which provides up to two hours of walking or four hours of standing. And in terms of connectivity, the robot supports Wi-Fi 6, Bluetooth 5.2, and optional 5G to ensure seamless integration with modern networks.
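Those battery figures also imply an average current draw you can back out directly; a quick back-of-envelope check, assuming the full 10.5 amp-hours are usable capacity:

```python
# Back-of-envelope current draw implied by the quoted battery figures,
# assuming the full 10.5 Ah is usable. The pack voltage isn't published,
# so this stays in amps rather than watts.

CAPACITY_AH = 10.5

for mode, hours in (("walking", 2), ("standing", 4)):
    print(f"{mode}: ~{CAPACITY_AH / hours:.2f} A average draw over {hours} h")
```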
In fact, developers have secondary development options for the robot, alongside firmware upgrade support and compatibility with the MiniCPM edge large language model, to offer enhanced AI functionalities. And soon, you may be able to control one of these robots with your mind, as Shanghai-based neurotechnology company NeuroXS just achieved a significant milestone in brain-computer interface research by demonstrating real-time decoding of motor functions and Chinese speech in brain-injured patients. And with its high throughput, NeuroXS may also be paving the way for several new possibilities. In fact, as part of this innovation, NeuroXS and Shanghai’s Huashan Hospital successfully implanted a 256-channel flexible BCI device into the motor cortex of a 21-year-old epileptic woman’s brain.
Incredibly, this device is capable of decoding high gamma-band brain signals to provide real-time insight into brain function areas just minutes after surgery. And by using a neural network model, researchers achieved real-time decoding with a system delay of less than 60 milliseconds, setting a new benchmark in the field. In fact, within just two days of the procedure, the patient was able to play computer games such as table tennis and snake using only her thoughts. Next, a further two weeks of training enabled her to control smartphone apps like WeChat and Taobao, as well as operate smart home devices and an intelligent wheelchair, all through NeuroXS’ proprietary brain-computer operating system, Zess OS, an integration that significantly improved her quality of life by offering her new levels of independence.
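As a rough illustration of the kind of pipeline described here, the sketch below runs a toy sliding-window motor decoder: band-limit the 256-channel recording to the high-gamma range, extract per-channel power, and pass it through a decoder to emit a command. The sampling rate, window length, and stand-in decoder are all assumptions; only the channel count and the sub-60-millisecond latency budget come from the source.

```python
# Toy sliding-window motor decoder in the style the transcript describes.
# Assumed: 1 kHz sampling, 50 ms windows (to fit the ~60 ms latency budget),
# and a random linear readout standing in for the trained neural network.

import numpy as np

FS = 1000        # Hz, assumed sampling rate
CHANNELS = 256   # matches the implanted device's channel count
WIN_MS = 50      # assumed analysis window

def high_gamma_power(window: np.ndarray) -> np.ndarray:
    """Crude per-channel high-gamma (70-150 Hz) power via FFT band selection."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1 / FS)
    spec = np.abs(np.fft.rfft(window, axis=1)) ** 2
    band = (freqs >= 70) & (freqs <= 150)
    return spec[:, band].mean(axis=1)

def decode(features: np.ndarray) -> str:
    """Stand-in for the trained decoder: a fixed random linear readout."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(features.size, 4))
    commands = ["left", "right", "up", "down"]
    return commands[int(np.argmax(features @ weights))]

# Simulate one 50 ms window of 256-channel data and decode it.
window = np.random.default_rng(1).normal(size=(CHANNELS, FS * WIN_MS // 1000))
print("decoded command:", decode(high_gamma_power(window)))
```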
Then, yet another groundbreaking achievement came in December of 2024, when NeuroXS and Huashan Hospital implanted the same 256-channel BCI device in a second patient, who had a tumor in their brain’s language area, marking the world’s first clinical trial of a high-throughput flexible brain-computer interface device for Chinese speech synthesis. As a result, within just five days, the patient achieved a 71 percent accuracy rate in decoding 142 Chinese syllables in real time, with a single-character decoding delay of less than 100 milliseconds. But this real-time decoding of Chinese speech is particularly noteworthy because of the language’s extreme complexity: unlike alphabetic languages such as English, Chinese is monosyllabic, tonal, and logographic, requiring the BCI system to cover multiple brain regions and process an incredible number of intricate neural signals simultaneously.
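To see why the label space is so demanding, note that each Mandarin syllable factors into an initial, a final, and a tone; a toy enumeration of that factored space, with deliberately truncated inventories, already yields a large class count. The lists below are illustrative samples, not the trial’s actual 142-syllable set.

```python
# Toy illustration of why Mandarin syllable decoding is a large, structured
# label space: each syllable factors into initial + final + tone. Inventories
# are truncated samples, not the trial's real 142-class set.

initials = ["b", "p", "m", "f", "d", "t", "n", "l"]  # sample of ~21 initials
finals = ["a", "o", "e", "ai", "an", "ang"]          # sample of ~35 finals
tones = [1, 2, 3, 4]                                  # plus a neutral tone

label_space = [(i, f, t) for i in initials for f in finals for t in tones]
print(f"{len(initials)} initials x {len(finals)} finals x {len(tones)} tones "
      f"= {len(label_space)} toy classes (the trial decoded 142 real syllables)")
```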
In fact, this achievement positions NeuroXS ahead of competitors, including Neuralink, in the race to develop language-specific BCI devices. And in another breakthrough, Adobe just introduced TransPixar, a brand new method that extends pre-trained video models to RGBA video generation while retaining their original RGB capabilities. In short, TransPixar is a text-to-visual-effect video creator that works by using a diffusion transformer architecture to incorporate alpha-specific tokens and LoRA-based fine-tuning, allowing for the joint generation of RGB and alpha channels with high consistency. And by optimizing attention mechanisms, TransPixar ensures that the original strengths of RGB generation are preserved while achieving strong alignment between the RGB and alpha channels.
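As a loose sketch of that idea, rather than the released implementation, the snippet below appends alpha-specific tokens to a shared sequence, so attention lets the alpha tokens condition on the RGB ones, and wraps a frozen attention projection in a trainable low-rank (LoRA) update; all shapes and the learned alpha offset are illustrative assumptions.

```python
# Illustrative sketch of the TransPixar idea described above: extend a DiT's
# token sequence with alpha tokens and adapt a frozen projection with LoRA so
# RGB and alpha denoise jointly. Not the paper's released code.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # no-op at init, preserving RGB behavior

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))

dim, n_rgb, n_alpha = 64, 16, 16
rgb_tokens = torch.randn(1, n_rgb, dim)                  # latent video tokens
alpha_tag = nn.Parameter(torch.zeros(1, 1, dim))         # learned alpha-specific offset
alpha_tokens = torch.randn(1, n_alpha, dim) + alpha_tag  # alpha tokens share the space

# Joint sequence: self-attention now lets alpha tokens attend to RGB tokens,
# which is what keeps the two channels aligned during denoising.
tokens = torch.cat([rgb_tokens, alpha_tokens], dim=1)
qkv = LoRALinear(nn.Linear(dim, 3 * dim))(tokens)
print("joint sequence:", tuple(tokens.shape), "-> qkv:", tuple(qkv.shape))
```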
This is particularly impressive given the limited availability of training data for RGBA video generation, resulting in a model that’s capable of producing diverse, high-quality RGBA videos, expanding the creative possibilities for VFX, interactive content, and other applications requiring transparency effects. Anyways, like and subscribe or face Roko’s Basilisk! Thanks for watching!