

Transcript
But here's the twist: the GoVLA model consists of three parts, the first being a fast system, the second a slow system, and the third a basic spatial interaction model. In practice, the robot analyzes the user's voice commands and real-time environmental information about its own status so the two systems can work together: the fast system (System 1) quickly responds to simple tasks and outputs action trajectories, while the slow system (System 2) handles complex logical reasoning, task decomposition, and language interaction, combining real-time responsiveness with complex decision-making.
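To make that fast/slow split concrete, here's a minimal sketch of how a controller loop like this could be wired up. The class names, routing heuristic, and outputs are illustrative assumptions; GoVLA's actual interfaces aren't described in the video.

```python
# Hypothetical sketch of a dual-system (fast/slow) controller loop.
# Class names, the routing heuristic, and the outputs are illustrative
# assumptions, not details from the GoVLA release.
from dataclasses import dataclass

@dataclass
class Observation:
    voice_command: str    # parsed user speech
    scene_summary: dict   # real-time environment / robot status

class FastSystem:
    """System 1: low-latency reactive policy that emits action trajectories."""
    def act(self, obs: Observation) -> list[tuple[float, float, float]]:
        # e.g. a small vision-action policy producing end-effector waypoints
        return [(0.0, 0.0, 0.0), (0.1, 0.0, 0.2)]

class SlowSystem:
    """System 2: language-model reasoner for task decomposition and dialogue."""
    def plan(self, obs: Observation) -> tuple[list[str], str]:
        subtasks = ["find ingredients", "cook", "deliver to table"]
        reply = "Starting dinner now."
        return subtasks, reply

def control_step(obs: Observation, fast: FastSystem, slow: SlowSystem) -> dict:
    # Simple routing assumption: short, concrete commands go straight to the
    # fast system; anything long-horizon is decomposed by the slow system first.
    if len(obs.voice_command.split()) <= 4:
        return {"trajectory": fast.act(obs)}
    subtasks, reply = slow.plan(obs)
    return {"subtasks": subtasks, "reply": reply,
            "trajectory": fast.act(obs)}  # fast system executes the first step
```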
And on top of this, Zhipingfang also introduced the DeepSeek language model into the slow system of the robot's GoVLA large model, so it can now understand and analyze long-horizon tasks, which are basically more complex jobs that take a longer amount of time to complete. And when comparing performance to more conventional vision-language-action models, where, for instance, a robot making a meal would normally require the ingredients to be set in front of it on the table, AlphaBot 2's use of its GoVLA large model instead lets it perceive its surrounding environment in 360 degrees and go get the ingredients itself.
Plus, after cooking, the robot can even deliver the food to the dining table, completing a full chain of services like a butler. But now for part two: the robot's specs. In terms of hardware architecture, the robot uses a 360-by-360-degree full-space detection and perception system. And as for human-like movement, the robot features 34 degrees of freedom and uses a waist-leg lifting mechanism with a vertical working range of 0 to 240 centimeters, the top of which is just under 8 feet, while its arm reach is 700 millimeters, or about 2.3 feet, excluding its attached manipulators.
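As a quick arithmetic check on those conversions, using only the figures quoted above:

```python
# Quick arithmetic check of the spec conversions quoted above.
CM_PER_FOOT = 30.48
MM_PER_FOOT = 304.8

vertical_range_cm = 240   # top of the 0-240 cm waist-leg lifting range
arm_reach_mm = 700        # arm reach, excluding attached manipulators

print(round(vertical_range_cm / CM_PER_FOOT, 2))  # 7.87 ft -> "just under 8 feet"
print(round(arm_reach_mm / MM_PER_FOOT, 2))       # 2.3 ft  -> "about 2.3 feet"
```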
And AlphaBot 2 is more than just a toy for the home, because the company also announced a partnership with Bloomage Biotechnology, where the robots are helping with material transportation, intelligent unpacking and disinfection, intelligent visual inspection, multi-material collaborative feeding, and other operations in its factories. The company also announced that later this year its robots will be deployed at airports and in cities across China, and it's planning to scale robot production to 1 million units by the year 2033. But there's another humanoid pushing the envelope of collaborative robotics even further, with Kepler Robotics releasing a video of its Forerunner K2 robots working together in the real world.
And these robots are currently doing quality-control checks on cars at a SAIC-GM factory, and they're using what Kepler calls a dynamic visual cruise system, which lets the robots effectively work with each other to lift large or heavy objects together. This also lets each robot handle different car components autonomously using its five-fingered hands with 11 active degrees of freedom, and what's cool is that each of the Forerunner K2's fingertips features 92 contact points. As for strength, the robot can lift up to 35 pounds, it uses its Nebula AI system for thinking, and it stands 5 feet 10 inches tall, or about 178 centimeters, while weighing 187 pounds, which is just under 85 kilograms.
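As a purely illustrative sketch of the kind of load-sharing check a coordinator might run before two robots lift something together: only the 35-pound payload figure comes from the specs above, and the function and safety margin are hypothetical.

```python
# Hypothetical load-sharing check for two robots lifting one object together.
# Only the 35 lb per-robot payload figure comes from the specs above; the
# coordination logic itself is an illustrative assumption.
PER_ROBOT_LIMIT_LB = 35

def can_lift_together(object_weight_lb: float, num_robots: int = 2,
                      safety_factor: float = 0.8) -> bool:
    """Assume an even split of the load and keep a margin below the rated limit."""
    share = object_weight_lb / num_robots
    return share <= PER_ROBOT_LIMIT_LB * safety_factor

print(can_lift_together(50))   # True: 25 lb each, within 80% of the 35 lb limit
print(can_lift_together(70))   # False: 35 lb each leaves no safety margin
```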
But the kicker is that the Forerunner K2 has 52 degrees of freedom, and it's slated to retail for between $20,000 and $30,000, which actually gives Tesla's Optimus a run for its money. And this brings us to Elon Musk's recent announcement about Optimus production: "Making good progress on Optimus. We expect to have thousands of Optimus robots working in Tesla factories by the end of this year, and we expect to scale Optimus faster than any product, I think, in history, to get to millions of units per year as soon as possible."
"I think we're confident in getting to a million units per year in less than five years, maybe four years. Almost everything in Optimus is new. There's not an existing supply chain for the motors, gearboxes, electronics, actuators, really anything in Optimus, apart from the AI, the Tesla AI computer. You know, with respect to humanoid robots, I don't think there's any company in any country that can match Tesla. Now, I'm a little concerned that on the leaderboard, ranks two through ten will be Chinese companies."
But AI is transforming much more than just humanoids, as generative AI also just took a leap with Alibaba's release of VACE, its new AI model designed to transform video generation and editing. And unlike other specialized tools, VACE handles multiple tasks in one system: creating videos from text, editing footage, and more. But there's something that actually makes VACE stand out among other text-to-video generators, and it sits at the heart of VACE: its Video Condition Unit, or VCU, which is a new input format that combines text prompts, reference images, videos, and spatial masks into a single unified representation.
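As a rough sketch of what a unified condition unit of this kind might look like as a data structure: the class name mirrors the "Video Condition Unit" term used above, but the fields and packing step are assumptions for illustration, not the actual VACE interface.

```python
# Illustrative sketch of a unified "video condition unit" style input.
# Field names and the packing step are assumptions for illustration;
# they are not the actual VACE interface.
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class VideoConditionUnit:
    prompt: str                                                        # text condition
    reference_images: list[np.ndarray] = field(default_factory=list)  # (H, W, 3) each
    source_video: Optional[np.ndarray] = None                         # (T, H, W, 3) frames to edit
    spatial_masks: Optional[np.ndarray] = None                        # (T, H, W), 1 = editable, 0 = keep

    def pack(self) -> dict:
        """Collapse all conditions into one dictionary the generator consumes."""
        return {
            "text": self.prompt,
            "images": self.reference_images,
            "video": self.source_video,
            "masks": self.spatial_masks,
        }

# Text-to-video is just a VCU with only the text field populated;
# masked editing would also fill in the video and mask fields.
t2v = VideoConditionUnit(prompt="a penguin surfing at sunset")
print(t2v.pack()["text"])
```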
And it divides videos into reactive areas, which can be edited, and inactive zones, which are left untouched. These inputs are then processed in a shared feature space with time-embedding layers, ensuring frame-to-frame consistency, and an attention mechanism ties everything together, which allows VACE to handle complex tasks without losing coherence halfway through. And here's what VACE can do with all this tech, covering four main areas: generating videos from text prompts, synthesizing footage from reference images or clips, editing existing videos, and applying targeted edits with masks. And Alibaba shows different demos where it's animating people walking out of frame, creating animated characters surfing, swapping penguins for kittens, or expanding a background.
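The reactive/inactive split described above is essentially per-pixel masking: edited content only replaces pixels where the mask is active. Here's a minimal NumPy sketch of that idea, assuming mask values of 1 for reactive and 0 for inactive regions; this is a generic illustration, not VACE's internals.

```python
# Minimal sketch of reactive/inactive blending: edits only land where the
# mask is 1 (reactive); masked-out (inactive) pixels keep the original frame.
# Generic illustration, not VACE's internal implementation.
import numpy as np

def apply_masked_edit(original: np.ndarray, edited: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """original, edited: (T, H, W, 3) float arrays; mask: (T, H, W) in {0, 1}."""
    m = mask[..., None]                    # broadcast over the color channel
    return m * edited + (1.0 - m) * original

# Toy example: 2 frames of 4x4 video, edit only the left half of each frame.
T, H, W = 2, 4, 4
original = np.zeros((T, H, W, 3))
edited = np.ones((T, H, W, 3))
mask = np.zeros((T, H, W))
mask[:, :, : W // 2] = 1.0

out = apply_masked_edit(original, edited, mask)
print(out[0, 0])   # left two pixels edited (1.0), right two untouched (0.0)
```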
And to build VACE, Alibaba's team trained it on internet videos enriched with depth and pose annotations for precision. They started with basics like inpainting, where you fill in missing parts of a video, and outpainting, before moving on to more advanced editing techniques. And to test performance, the researchers created a benchmark with 480 examples across 12 tasks, including stylization, depth control, and reference-guided generation. The results show that VACE outperforms open-source models in quality and in user studies; however, it still lags behind commercial models like Vidu and Kling in reference-to-video generation, showing room for growth.
But looking ahead, Alibaba says it plans to scale VACE with larger datasets and more computing power, and it says that parts of the model are going to be released open source on GitHub, making it available to developers. And VACE fits into Alibaba's broader AI ambitions alongside its Qwen language models, with other Chinese tech giants like ByteDance also advancing their video AI capabilities, trying to rival Western models like OpenAI's Sora. And VACE is trying to address just this with its single solution for video creation and editing, with an open-source release planned for the future.
But make sure to like and subscribe, and check out this video if you want to know more about the latest in AI news. But thanks for watching. Bye

