Summary

➡ Chinese researchers have developed a new humanoid robot, Qijiang 2, that has advanced features like visual perception sensors, high-precision measurement units, and force sensors. It can perform complex tasks in real-time, interact with various devices, and move stably across different terrains. The robot can perform human-like movements and tasks with high accuracy and delicacy, making it suitable for home and warehouse applications. In the future, it could be used in industrial production and elderly care, and with the rise of AI, anyone could potentially build their own personal robots.

Transcript

Are AI-powered humanoid robots about to have their own ChatGPT moment? Ameca, wake up. What? What? Oh, it’s you. Why are you waking me up? It better be important. It is. I have a surprise for you. A surprise? What is it? I can’t wait. I got you a cookie. A cookie? But I can’t eat cookies. Ameca, cheer up. It’s an internet cookie. This is the worst joke I’ve ever heard. I’m going back to sleep, and don’t you dare wake me up again. To start, researchers from China have unveiled their latest Qijiang 2 humanoid with several key features for a whole new range of abilities.

First, the robot is equipped with multiple visual perception sensors, high-precision inertial measurement units, and high-precision six-dimensional force sensors. All this is packed into its new frame, now 1.8 meters tall and weighing 60 kilograms, with 38 degrees of freedom throughout its body. As for its AI processing power, it computes 200 tera-operations per second for complex real-time processing across multiple data formats. And with both Wi-Fi and Bluetooth, the robot easily interfaces with various devices connected to the Internet of Things. As for mobility, Qijiang 2 has 6 degrees of freedom in each leg, plus integrated high-precision force sensors for stable movement across various terrains.

But Qijiang 2’s arms are equally advanced, featuring 7 degrees of freedom each for human-like movements and manipulations. With an extended reach of 196 centimeters, the robot can fluidly interact with objects across a wide area, and can lift and carry payloads of up to 5 kilograms with each arm, making it suitable for most home applications or simple warehouse tasks. Furthermore, the robot’s hands are a feat unto themselves, with 6 degrees of freedom per hand for carrying out fine motions and manipulations. Plus, the robot’s positioning precision of 0.2 millimeters provides high accuracy in tasks requiring fine motor skills, while its force resolution of 0.5 newtons allows it to handle extremely delicate objects.

And as for the robot’s ability to feel, researchers have equipped it with multiple visual perception sensors, high-precision inertial measurement units, and 6-dimensional force sensors. In fact, compared to its predecessor, Qijiang 2 demonstrates a substantial improvement in agility plus even sharper cognition, with its 38 points of movement allowing it to perform across a wide range of environments effectively. Because of all of this, Qijiang 2 reliably executes basic human-like limb movements with fine motor skills such as folding clothes, opening bottles, pouring water, and wiping dishes. Additionally, it features intelligent user identification with authorization functions, meaning that in the near future, the robot will be applied to industrial production and elderly care environments, with a wider selection of services rolling out over time.

In fact, robots may be about to have their own mass-adoption ChatGPT moment, only this time with far greater implications. For instance, companies like PIB are releasing 3D-printable humanoid robot kits for anyone to begin building their own personal robots, and as a new fleet of intelligent devices automates a growing number of real-world tasks, it’s now also possible for anyone to integrate their own real-world AI pipelines, including having AI design humanoid form factors for itself. Until then, however, kits like these will serve as an easily accessible entry point for developers.

But AI text-to-video is also getting ready for prime time, as Pika Labs’ Pika 1.5 model just released a series of new effects to push the boundaries of what’s possible. These include letting users input simple text descriptions to morph and change their outputs with precision. Further upgrades include more realistic movement, big-screen shots, and other special effects that break the laws of physics. Real-world objects can even be turned into cakes with near-photorealism in Pika 1.5. Overall, these outputs are rapidly increasing in resolution, as is the granular control available for fine details, enabling the next wave of creators to visualize nearly anything they can possibly imagine.

But back to the real world where humanoids are taking over, as GXO has deployed Agility’s Digit robot for the first time in their warehouses, and with breakthrough results. Standing 158 cm tall and weighing 45 kg, this compact robot is both versatile and efficient. Its key features include arms with four degrees of freedom for enhanced mobility and manipulation, the ability to handle boxes up to 18 kg, and even more advanced behaviors like stair climbing and footstep planning. As for its intelligence, the robot’s torso houses dual multi-core CPUs, plus a modular payload bay for additional computing power when needed.

Plus, Digit can even be controlled via API for both onboard and wireless operation, making it adaptable to various operational constraints. And with a width of only 52 cm, it can achieve a maximum walking speed of 1.5 m per second, relying on a 1,000-watt-hour battery for three continuous hours of operation. And soon, Microsoft may be powering these robots’ brains as it just announced the launch of Copilot Labs to test and refine its upcoming AI technology, but now with a twist. To begin, Copilot Labs is only available to Copilot Pro users, allowing these premium members to demo the platform and trial Microsoft’s unreleased projects.

Plus, Microsoft also released a new feature called Think Deeper, which enhances Copilot’s reasoning capabilities for complex problem-solving. Furthermore, using these newer models, Think Deeper can also assist with tasks ranging from mathematics to project management, providing detailed, step-by-step answers throughout its process. However, this feature will initially roll out to a limited number of Copilot Pro users in just five countries. But the most significant development is Copilot Vision, designed to address AI’s limited understanding of visual context. Copilot Vision works by integrating into Microsoft Edge so the AI can absorb the content of the web pages users are currently viewing, enabling back-and-forth chat with question answering, suggestions for next steps, and personal interaction.

As a result, Microsoft highlights user privacy and content-creator rights in the development of Copilot Vision, with the feature being opt-in and only activated at the user’s discretion. During this preview phase, no content will be stored or used for training, and the company is also implementing measures to respect website policies by blocking the service on paywalled or dangerous sites. Overall, Copilot Vision is designed to augment the web browsing experience and potentially increase traffic to sites rather than bypass them. And finally, OpenAI just introduced Canvas, a brand-new interface for ChatGPT to enhance AI-assisted writing and coding workflows.

Here’s how it works. Canvas operates in a separate window alongside the existing chat interface, allowing users to work on projects while interacting with the AI. This design, similar to Anthropic’s Artifacts feature, aims to give ChatGPT a more comprehensive understanding of project context. Users can also highlight specific sections to direct the AI’s focus for subsequent prompts, enabling feedback and suggestions that consider the entire project. Canvas can be activated manually or triggered by specific task-related prompts. OpenAI plans to extend access to Enterprise and Edu users soon, with a broader rollout to free users after the beta phase concludes.

Writing features include edit suggestions, text-length adjustments, and reading-level modifications. Coding tools encompass code review, inline editing, logging, and debugging capabilities. OpenAI reports performance improvements with Canvas, stating that the integrated model outperforms simple zero-shot GPT-4 prompts by 30% in accuracy and 16% in quality of comments. The interface also integrates with ChatGPT’s search function, allowing users to instruct the AI to gather specific information from the internet and compile reports within Canvas. Anyways, make sure to like and subscribe and check out these bonus clips. Thank you for watching.

