Google Launches Gemini Enterprise AI Agent Platform with Built-in Memory Bank and Simulation Testing
2026-04-23 09:53

en.Wedoany.com Reported - Google announced a series of tools for building AI agents at the Google Cloud Next 2026 conference in Las Vegas on April 22 local time, aiming to help businesses automate tasks. Google Cloud CEO Thomas Kurian stated in his keynote that Gemini Enterprise has evolved into "an end-to-end system for the agent era," specifically built for AI agents capable of executing complex, multi-step workflows. This release upgrades and renames the original Vertex AI developer platform to the Gemini Enterprise AI Agent Platform. Centered around four pillars—Build, Scale, Govern, and Optimize—it provides enterprises with a unified system for AI agent development, deployment, and monitoring.

The new platform's core functions target two major pain points for AI agents: long-term memory and reliability testing. The Memory Bank feature gives agents persistent memory that survives across sessions, so they retain context from past interactions with a user instead of starting from scratch each time. Memory Profile refines that memory into user-manageable personal profiles, helping agents operate with appropriate context. The Agent Simulation tool lets developers stress-test agents with synthetic interactions before deployment, simulating real-world scenarios to evaluate the stability of agent behavior and reduce deployment risk. Google also launched the Projects collaboration platform, which integrates data sources such as Workspace, Microsoft OneDrive, and internal corporate chat logs so that agents can collaborate with employees on tasks with full contextual awareness.
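To make the cross-session memory idea concrete, here is a minimal sketch of the pattern the article describes: an agent rebuilds its context from a persistent store at session start rather than beginning cold. All class and method names below are invented for illustration; this is not Google's actual Memory Bank API.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBank:
    """Hypothetical persistent store keyed by user, surviving across sessions."""
    _store: dict = field(default_factory=dict)

    def remember(self, user_id: str, fact: str) -> None:
        self._store.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str) -> list:
        return self._store.get(user_id, [])

class Agent:
    """Toy agent whose context is rebuilt from the bank at session start."""
    def __init__(self, bank: MemoryBank, user_id: str):
        self.bank = bank
        self.user_id = user_id
        self.context = list(bank.recall(user_id))  # no cold start

    def handle(self, message: str) -> str:
        self.bank.remember(self.user_id, message)  # persist for later sessions
        self.context.append(message)
        return f"ack ({len(self.context)} facts in context)"

bank = MemoryBank()
session1 = Agent(bank, "alice")
session1.handle("prefers weekly summary reports")

# A brand-new session still sees the earlier context.
session2 = Agent(bank, "alice")
print(session2.context)  # → ['prefers weekly summary reports']
```

The point of the sketch is the separation of concerns: the agent is ephemeral, the memory store is not, which is what removes the "start from scratch" problem the article mentions.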

To address enterprise governance needs, Google equipped the AI Agent Platform with several security and management mechanisms. Agent Identity assigns each agent a unique cryptographic identity, attaches clear authorization policies, and produces a complete audit trail. Agent Gateway serves as the secure entry point to the agent ecosystem, guarding against risks such as prompt injection, tool poisoning, and data leaks. The AI Agent Anomaly Detection system flags suspicious actions by analyzing agent behavior and intent, giving administrators a chance to intervene before behavior gets out of control. On the employee side, the platform adds a dedicated inbox where agents post updates and progress reports. Notifications are sorted into "Requires Your Input," "Error," and "Completed," letting employees track the status of all agent tasks in one place.
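The inbox triage described above can be sketched as a simple routing function. The three bucket names come from the article; the task fields, routing rules, and default bucket are assumptions for illustration, not Google's actual schema.

```python
def categorize(task: dict) -> str:
    """Map an agent task update onto the inbox buckets named in the article."""
    if task.get("error"):
        return "Error"
    if task.get("needs_input"):
        return "Requires Your Input"
    if task.get("done"):
        return "Completed"
    return "In Progress"  # assumed default for still-running tasks

inbox: dict = {}
updates = [
    {"id": 1, "done": True},
    {"id": 2, "needs_input": True},
    {"id": 3, "error": "tool timeout"},
]
for t in updates:
    inbox.setdefault(categorize(t), []).append(t["id"])

print(inbox)  # → {'Completed': [1], 'Requires Your Input': [2], 'Error': [3]}
```

Checking error state before "needs input" is a deliberate ordering choice in this sketch: a failed task should surface as an error even if it was also waiting on the user.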

Non-technical employees can also build their own AI agents with the Agent Designer low-code tool: by describing requirements in natural language, they can create automated workflows without writing code. For developers, the platform provides the Agent Studio creation interface and an upgraded agent development kit, featuring a new graph-based framework for orchestrating multi-agent collaboration. The platform's built-in Model Garden offers prioritized access to over 200 models, including Google's first-party models such as Gemini 3.1 Pro and Lyria 3 as well as third-party models such as Anthropic's Claude, so users can choose models to match their agents' task requirements.
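Graph-based multi-agent orchestration, as the article describes it, means modeling agents as nodes and their dependencies as edges, then executing the graph in dependency order. The following is a minimal sketch of that idea using Python's standard-library topological sorter; the agent names and steps are invented, and the real framework's API is not shown here.

```python
from graphlib import TopologicalSorter

# Toy "agents": each reads from and writes to a shared context dict.
def research(ctx): ctx["notes"] = "raw findings"
def draft(ctx):    ctx["draft"] = f"draft from {ctx['notes']}"
def review(ctx):   ctx["final"] = ctx["draft"] + " (reviewed)"

# Node -> set of predecessors it depends on (a small DAG).
graph = {"research": set(), "draft": {"research"}, "review": {"draft"}}
agents = {"research": research, "draft": draft, "review": review}

ctx: dict = {}
for node in TopologicalSorter(graph).static_order():
    agents[node](ctx)  # run each agent only after its dependencies

print(ctx["final"])  # → draft from raw findings (reviewed)
```

A real orchestrator would add parallel branches, retries, and message passing, but the core contract is the same: the graph, not hand-written control flow, decides execution order.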

Google also disclosed ecosystem partnerships and customer adoption figures. It announced expanded strategic collaborations with companies including Accenture, Atlassian, and Valeo, covering areas such as creative production, automotive technology, and team collaboration. Nearly 75% of Google Cloud customers are using Google's AI products to drive business development. In the past 12 months, 330 Google Cloud customers each processed over 1 trillion tokens, with 35 customers reaching the 10 trillion token milestone. Google CEO Sundar Pichai revealed that 75% of new code generated internally at Google is now produced by AI agents and reviewed by engineers, a significant increase from 50% last fall.

Google also released the eighth-generation TPU chip, split into two specialized architectures: the TPU 8t training chip and the TPU 8i inference chip. TPU 8t clusters can scale to 9,600 chips with 2PB of shared high-bandwidth memory, delivering 121 ExaFlops of compute, nearly triple the performance of the previous Ironwood generation. The TPU 8i carries 288GB of high-bandwidth memory and 384MB of on-chip SRAM, three times the capacity of its predecessor; inter-chip communication takes at most 7 hops, and the chip delivers an 80% improvement in price-performance.

In the competitive landscape, Google is pursuing a vertical integration strategy, from custom chips and large models to the AI agent development platform and Workspace distribution channels, to compete with OpenAI and Anthropic. Its cloud business grew 50% year-over-year in Q4 2025, the fastest growth rate among the three major cloud providers. The company expects capital expenditures to reach $175 billion to $185 billion in 2026, nearly six times the $31 billion spent in 2022.
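As a quick sanity check on the figures quoted above (all inputs are from the article; the per-chip value is a derived estimate, not an official spec):

```python
# Derived per-chip compute for the TPU 8t cluster figures quoted above.
cluster_exaflops = 121
chips = 9_600
per_chip_pflops = cluster_exaflops * 1_000 / chips  # 1 ExaFlop = 1,000 PetaFlops
print(round(per_chip_pflops, 1))  # → 12.6 PFLOPs per chip (estimate)

# Capex growth: midpoint of 2026 guidance vs. 2022 spend, in billions USD.
capex_2026_mid = (175 + 185) / 2
capex_2022 = 31
print(round(capex_2026_mid / capex_2022, 1))  # → 5.8, i.e. "nearly six times"
```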

This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there is any infringement or other issues, please notify us promptly, and we will modify or delete it accordingly. Email: news@wedoany.com