Transcript
What’s more, its bio-inspired anatomy replaces traditional rigid hydraulics, giving it a previously unattainable level of precision and fluidity in motion, whether manipulating fragile items or lifting heavy ones weighing up to 44 pounds. Neo’s lower body is just as capable as its upper body, with a walking speed of 4 km per hour and a running speed of up to 12 km per hour. On top of this, Neo’s approach to communication is unique: beyond its OpenAI-powered chat system, the Neo Beta also appears to use body gestures to communicate with humans.
This kind of nonverbal interaction for home use could allow humans to communicate their intentions via subtle cues, like a nod or a glance, or simply issue commands. The robot also has a pair of five-fingered hands, engineered with 20 degrees of freedom each, for a level of dexterity that may be world-class: Neo Beta picks up objects of varying size and fragility with an ease not yet matched by other home robots launched this year. That’s especially impressive because handling objects with differing properties is a strong indicator of an AI system’s generalizability.
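The idea of adapting grip strength to an object’s fragility can be pictured as a simple closed-loop controller. The sketch below is purely illustrative and is not 1X’s actual control stack; the `grasp_force_cap` mapping and all numbers are assumptions invented for the example:

```python
# Hypothetical sketch of fragility-aware grasping: ramp commanded grip
# force upward in fixed steps until it reaches a cap derived from an
# estimated fragility score. Not 1X's actual controller.

def grasp_force_cap(fragility: float, max_force_n: float = 50.0) -> float:
    """Map a fragility estimate in [0, 1] (1 = most fragile) to a force cap in newtons."""
    return max_force_n * (1.0 - 0.9 * fragility)

def close_gripper(fragility: float, step_n: float = 2.0) -> float:
    """Ramp the commanded force until the next step would exceed the cap."""
    cap = grasp_force_cap(fragility)
    force = 0.0
    while force + step_n <= cap:
        force += step_n
    return force
```

Under these made-up numbers, a sturdy object (fragility 0) is gripped at the full 50 N, while a maximally fragile one is capped near 5 N.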
But under the hood, what drives Neo is its embodied intelligence: the robot’s AI system, which allows it to learn and adapt continuously. It does this using information from its surroundings as well as its past experiences, applying reinforcement learning, computer vision, and other forms of AI to recreate human locomotion and manipulation policies. The robot also builds on the experiences of its predecessor, Eve, meaning these robots share a kind of hive-mind awareness distributed across all of the robots rather than residing in just one. As for price and availability, 1X is currently rolling out the robot to real-world home settings for testing and feedback to further optimize and refine the technology.
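The notion of learning from past experience can be sketched in miniature with tabular Q-learning, the simplest form of reinforcement learning. This toy example replays stored (state, action, reward, next state) transitions the way a replay buffer would; a real locomotion policy would use deep RL over continuous states and actions, so the states and rewards here are invented for illustration:

```python
# Minimal tabular Q-learning sketch of "learning from past experience".
# Replaying stored transitions drives the action-value estimates toward
# their fixed point, mimicking how experience improves a policy.

from collections import defaultdict

def q_update(q, transition, alpha=0.5, gamma=0.9):
    """One Q-learning update from a single experience tuple."""
    state, action, reward, next_state = transition
    best_next = max(q[next_state].values(), default=0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

q = defaultdict(lambda: defaultdict(float))
# Two invented transitions, replayed repeatedly like a replay buffer.
experience = [("stand", "step", 1.0, "walk"), ("walk", "step", 1.0, "walk")]
for _ in range(50):
    for tr in experience:
        q_update(q, tr)
```

With a reward of 1 per step and a discount of 0.9, the value of stepping from "walk" converges toward 1 / (1 − 0.9) = 10, showing the estimates stabilizing as experience accumulates.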
Upon mass production, the cost to produce Neo Beta will likely drop enough for its price to land around what you’d pay for a new or used car. The next step for humanoid robotics companies is not only to sell but also to lease their hardware in exchange for monthly subscription payments tied to usage, data, models, and more. To this end, 1X has already raised $100 million from various backers, including OpenAI, with the goal of expediting its release to market for a chance at a first-mover advantage. In fact, the last consumer trend with this kind of price tag was probably the automobile, which explains the surge in robot demos from across the world.
As for the future, 1X plans to further refine Neo Beta through real-world testing to prepare for mass production and a mainstream release sometime early next year, which could prove to be the pivotal ChatGPT moment for humanoid robots. Meanwhile, Google’s newest game-engine AI, GameNGen, just changed gaming forever by simulating the classic game Doom in real time, but the implications of this tech go much deeper. Developed by researchers from Google Research, Google DeepMind, and Tel Aviv University, this pioneering system transforms AI-assisted game development in more ways than one. To start, GameNGen’s ability to simulate Doom at over 20 frames per second on a single Google TPU chip is a significant breakthrough.
This is because it achieves a peak signal-to-noise ratio of 29.4 dB, rivaling the quality of lossy JPEG compression, and human evaluators struggled to distinguish the simulation from actual gameplay during testing. The system achieved this through a two-stage training process: first, an AI agent learned to play Doom and recorded its own gameplay; then, a diffusion model was trained to generate each next image conditioned on previous frames and actions. But it’s not just frames, because GameNGen also excels at complex game-state updates, such as tracking health and ammo, picking up weapons, attacking enemies, and interacting with the environment.
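The PSNR figure is easy to make concrete: for 8-bit images it is computed from the mean squared error between two images as 20·log10(255) − 10·log10(MSE), so higher values mean a closer match. A minimal version over flat pixel lists:

```python
import math

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images,
    given as flat lists of pixel values. Higher = closer match."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_val) - 10 * math.log10(mse)

# Two nearly identical 4-pixel "images" differ by one pixel value:
print(round(psnr([100, 120, 140, 160], [101, 120, 140, 160]), 1))  # → 54.2
```

At 29.4 dB, the simulated frames are noticeably further from the originals than this near-identical pair, but still close enough that, per the paper, raters often could not tell them apart.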
In fact, it can maintain these game states for up to 2.5 minutes at a time, enabling continuous gameplay by chaining states together. Despite these advancements, though, GameNGen does have limitations, such as a memory that spans only about three seconds at a time, which restricts its ability to respond to long-term events in the game and makes extended gameplay relationships hard to track. Overall, though, GameNGen is set to transform game-engine technology, having already surpassed previous models like GameGAN and GAN Theft Auto in complexity, speed, stability, and visual quality.
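The chaining of states and the short memory window both fall out of the same autoregressive loop: each new frame is predicted from a sliding window of recent frames and player actions, and anything older than the window is forgotten. The sketch below illustrates that structure only; `predict_frame` is a trivial stand-in for the conditional diffusion model, and the window length is an arbitrary placeholder:

```python
# Sketch of the autoregressive loop behind a neural game engine: each
# frame is predicted from a short sliding window of past frames and
# actions, then fed back in as context ("chaining states"). Memory
# beyond the window length is lost, mirroring the model's limitation.

from collections import deque

WINDOW = 3  # frames of context; the real model's window spans ~3 seconds

def predict_frame(frames, actions):
    """Trivial stand-in for the conditional diffusion model."""
    return f"frame({actions[-1]})"

def run(action_stream):
    frames = deque(["frame(start)"], maxlen=WINDOW)
    actions = deque(maxlen=WINDOW)
    out = []
    for action in action_stream:
        actions.append(action)
        frame = predict_frame(list(frames), list(actions))
        frames.append(frame)  # the output becomes part of the next context
        out.append(frame)
    return out
```

Because `deque(maxlen=WINDOW)` silently evicts the oldest entries, events older than the window can no longer influence the prediction, which is exactly the long-horizon limitation described above.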
Amazingly, researchers even see this as a step toward neural models that can autonomously generate entire games. And while GameNGen opens new possibilities, questions remain about optimizing neural game engines and effectively incorporating human input. Nevertheless, as this technology evolves, it will redefine how games are developed and experienced, ushering in an era of generative gaming where no experience is the same. And finally, OpenAI is on the brink of releasing two new AI models. The first, named Strawberry, is crafted to tackle complex math and programming challenges, while the second, named Orion, is expected to be the next step after GPT-4o.
As for release dates, OpenAI may unveil a chatbot version of Strawberry as early as this fall, possibly integrating it into ChatGPT. Strawberry’s purpose is to solve novel math problems and optimize programming tasks with enhanced logic, addressing language-related challenges more effectively. In internal tests, Strawberry even successfully tackled the New York Times puzzle Connections, an achievement hinting at its potential to serve as a foundation for advanced AI systems that generate content and execute actions. And according to OpenAI’s internal documents, the company plans to use Strawberry for autonomous internet searches to enable strategic planning and in-depth research.
As for release timelines, the exact launch date for Strawberry remains uncertain. The released version will likely be a streamlined variant of the original model, mirroring OpenAI’s strategy with its GPT-4 variants: delivering strong performance with reduced computational demands, in a methodology echoing Stanford’s Self-Taught Reasoner (STaR), which focuses on enhanced reasoning. Project Orion, by contrast, is OpenAI’s flagship language model, intended to surpass its GPT-4 series, with Strawberry likely playing a crucial role in generating training data for Orion, potentially reducing errors compared to previous models. Google DeepMind is also making strides in AI with advanced mathematical models like AlphaProof and AlphaGeometry 2, which achieved silver-medal standard at the International Mathematical Olympiad.
But the scalability and generalization of these models remain to be seen, and so far, OpenAI holds a clear advantage in overall performance and user adoption.
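The Self-Taught Reasoner methodology mentioned earlier follows a simple loop: sample step-by-step rationales from a model, keep only those that arrive at the correct answer, and fine-tune the model on the keepers. The toy sketch below shows just the sample-and-filter skeleton; `generate_rationale` is a random stand-in for model sampling, the fine-tuning step is omitted, and the questions are invented:

```python
# Toy skeleton of one STaR round: sample a rationale and answer for each
# question, keep only the pairs whose answer matches the gold label. In
# the real method, the model is then fine-tuned on the kept rationales.

import random

def generate_rationale(question, rng):
    """Stand-in for model sampling; sometimes right, sometimes wrong."""
    answer = rng.choice([question["gold"], "wrong"])
    return f"step-by-step reasoning about {question['q']}", answer

def star_round(questions, rng):
    kept = []
    for q in questions:
        rationale, answer = generate_rationale(q, rng)
        if answer == q["gold"]:  # the correctness filter is the key idea
            kept.append((q["q"], rationale, answer))
    return kept

rng = random.Random(0)
data = [{"q": "2+2", "gold": "4"}, {"q": "3*3", "gold": "9"}]
kept = star_round(data, rng)
```

The correctness filter means only verified reasoning traces feed the next training round, which is also why a model like Strawberry could plausibly serve as a data generator for a successor like Orion.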