Insights about
Culture
-

Why AI Assistance Destroys the Skills You Need to Supervise AI
Developers using AI assistants scored lower on skill assessments with zero productivity gain. They shipped code they couldn’t explain, bypassed the errors where learning happens, and built dependency instead of capability. Research reveals six interaction patterns—three that preserve learning, three that destroy it. We’re training people to supervise AI while undermining the exact skills supervision requires.
-

Organizations Drawing the Hardest Lines Around AI Protect the Smallest Territory
Award eligibility, exhibition space, platform curation—prestigious institutions banned AI content throughout 2025. But most creators make their living in the commercial middle: stock imagery, background music, copywriting, design templates. Bans protect the showcase while displacement happens in the warehouse. Here’s where durable protections are actually coming from—and why they look nothing like categorical prohibition.
-

The Company That Ignored Genocide Is Building Your AI
Meta’s Reality Labs burned $73B. But that’s not the real cost. Internal researchers documented genocide amplification, teen mental health crises, and sex trafficking—then watched leadership ignore the warnings. The AI pivot changes the technology, not the incentives.
-

Why the Companies Building AI Can’t Use It in Their Own Codebases
Meta and Google can’t effectively use standard AI coding tools in their own codebases. A recent study showed experienced developers predicted 24% gains but got 19% slower instead. The gap between AI marketing and enterprise reality reveals something important about deployment strategies.
-

What 100 Trillion Tokens Tell Us About What People Actually Want from AI
Open-source AI usage is 52% creative roleplay, not productivity. The industry is building for the wrong problem. Analysis of 100 trillion tokens reveals a massive gap between stated priorities and actual behavior. Here’s what users really want from AI—and what it means for your product strategy.
-

Why Are Teams Using AI Tools They Fundamentally Don’t Trust?
Teams adopt AI at record rates but trust it at half that speed. Uncover the hidden “shadow work,” backlash patterns, and what separates lasting implementations from failed experiments.
-

When Your Voice Becomes Someone Else’s Weapon
A single phone call cost Jaguar Land Rover £1.9 billion and devastated 5,000 supply chain organizations when an employee shared credentials with what sounded like a trusted voice. Vishing attacks surged 442% while AI voice cloning requires just 30 seconds of audio. Voice is no longer proof of identity. Do not trust what you hear.
-

Why AI Won’t Save Your Broken System
Google Cloud’s 2025 DORA research surveyed 5,000 technology professionals and revealed AI as an amplifier, not a transformer—magnifying organizational strengths and dysfunctions equally. The uncomfortable truth: organizations celebrating individual productivity while ignoring foundational systems are funding failure at scale.
-

When Humans and AI Teams Actually Work Together
Three researchers from MIT, the University of Calgary, and the University of Tennessee reveal the complex dynamics of human-AI teams, but the deeper question is whether these partnerships enhance human capability or create technological dependency. Their insights suggest that the teams appearing most successful with AI are actually generating hidden inefficiency while organizations chase the wrong metrics.
-

Our Most Productive Employees Are Getting Penalized for Working Remotely
Remote workers clock one hour less per day than in 2019, yet productivity remains steady or improves. Despite this, they are 31% less likely to be promoted and 35% more likely to be laid off—described by Stanford economist Nicholas Bloom as “discrimination.” Organizations must recognize this hidden career penalty.