April 17, 2026 ChainGPT

Brookings: Federal AI Surge Faces Talent, Trust Gaps — Crypto Community Should Watch

Federal agencies have raced into AI over the last two years, but the surge is bumping up against talent shortages, institutional inertia, and growing public mistrust, putting the promise of smarter government at risk, a new Brookings Institution report finds.

Brookings analyzed agency AI inventories from 2023–2025, federal job listings, OMB guidance, and interviews with current and former technologists across eight agencies. The topline: adoption exploded in 2025, when 41 agencies documented more than 3,600 distinct AI use cases, a 69% jump from 2024 and roughly five times the count in 2023. Use cases span the gamut: over half of Social Security Administration entries support service delivery and benefits processing, while a majority of Department of Justice entries back law enforcement activities.

But that growth is concentrated. For three years running, five large agencies have been responsible for more than half of reported AI activity, and large agencies accounted for 76% of the 2025 inventory. Smaller agencies lag: the 11 small agencies that reported in 2025 combined for only 60 use cases, about 2% of the total.

The report flags several chokepoints slowing broader, responsible deployment:

- Talent gaps. Of more than 56,000 federal technical job postings since 2016, only about 1,600 (under 3%) explicitly mentioned AI skills. A Biden-era hiring push boosted AI hiring, but workforce cuts in early 2025 may have undercut those gains: at least 25% of AI-specific listings were posted from 2024 onward, meaning many hires were recent and vulnerable to layoffs.
- Risk-averse culture and slow experimentation. Nearly 60% of AI use cases are still in pilot or pre-deployment stages, signaling a phase of learning and testing that agencies struggle to fund and protect. The report also says the Trump administration's linkage of AI deployment to workforce reductions via the Department of Government Efficiency (DOGE) may be reinforcing caution among agency leaders.
- Accountability gaps. Despite OMB requirements, more than 85% of high-impact deployed AI use cases in 2025 lacked some required documentation on risk mitigation.
- Public skepticism. Pew Research Center polling shows rising unease: roughly half of Americans are now more concerned than excited about AI (up from 37% four years ago), and only 17% expect AI to positively affect the U.S. over the next two decades. That is unfolding against historically low public trust in Washington: just 16% of Americans say they trust the federal government to do what's right most of the time.

Brookings warns the stakes are real: botched AI deployments could erode trust further, while well-designed applications that improve services could help rebuild confidence in government institutions. To move forward responsibly, the report urges policy and operational fixes, including expanding AI literacy training across agencies, overhauling procurement rules built for static software, improving transparency for high-risk systems, and prioritizing public-facing use cases that deliver clear benefits.

For the crypto and tech community watching how governments adopt emerging tech, Brookings' findings are a reminder that deployment momentum alone isn't enough. Talent, governance, and public buy-in will determine whether government AI becomes a trusted service improvement or a reputational liability.