The Science Fiction and Fantasy Writers Association tried to split the difference. In December 2025, they proposed letting works created with AI language models stay eligible for the Nebula Awards—one of the most prestigious prizes in the field—as long as authors disclosed they’d used them. Voters could make their own calls about whether that mattered.
The backlash was immediate. Three days later, SFWA’s board issued a formal apology and reversed course entirely. The new rule: any work “written, either wholly or partially, by generative large language model tools” is ineligible. Period.
This wasn’t a policy debate. This was panic dressed up as principle.
I get why they did it. When you’re protecting something as culturally significant as the Nebula Awards, you want clear lines. You want to be able to say “this is human work” with confidence. But the speed of that reversal suggests SFWA hadn’t actually worked through what “AI use” means in practice. They drew a hard line, then immediately had to figure out where it actually falls.
And they’re not alone. San Diego Comic-Con updated its art show rules in January 2025 to ban AI-generated art outright, a shift from earlier rules that had allowed display but not sales. Two weeks ago, on January 13th, Bandcamp announced it was banning AI-generated music from the platform entirely. The pattern is striking: major cultural institutions choosing categorical prohibition over nuanced policy, and the pace is accelerating.
These feel decisive. They’re also the easy part.
The Organizations Drawing Lines
When you look at who’s implementing outright bans, you see a pattern. SFWA protects award eligibility. Comic-Con controls exhibition space. Bandcamp curates a platform. All three are gatekeepers for prestige and visibility—they control who gets seen, not who makes a living.
That distinction matters. If you’re an emerging science fiction writer, Nebula eligibility is huge for your career. Same for getting your art into Comic-Con or your music featured on Bandcamp. These policies protect the scarcity and signaling value of those credentials. They say “human achievement still means something here.”
But these are showcase windows, not economic reality. Most writers don’t make their living from award-eligible novels—they make it from freelance articles, copywriting, content work. Most visual artists don’t survive on gallery exhibitions—they do commercial illustration, design templates, stock imagery. Musicians? Background music, advertising, licensing deals.
None of these bans touch that commercial work. And that’s where AI is actually displacing people.
So when I see headlines celebrating these policies as “taking a stand,” I have a more complicated reaction. Yes, they’re protecting something real. But they’re protecting the smallest territory while the economic ground shifts underneath everyone.
Why Bans Are Harder Than They Look
Here’s where things get genuinely messy: what does “AI use” even mean anymore?
SFWA’s rule prohibits work where “LLMs were used at any point in its creation.” Okay. But if you use Google to research your novel, you’re arguably using an LLM: Google Search now puts model-generated summaries above its results. Grammarly’s suggestions run on language models. Microsoft Word’s Editor? Same. Your citation manager, your research database, probably your email client… all of them embed AI components now.
Jason Sanford, who tracks these issues in his Genre Grapevine newsletter, put it directly: “If you use any online search engines or computer products these days, it’s likely you’re using something powered by or connected with an LLM.”
So what’s actually prohibited? SFWA hasn’t published detailed guidance distinguishing between AI that generates creative content (prohibited) and AI-assisted tools for non-creative tasks (presumably permitted). The line exists in theory. In practice, it’s blurry as hell.
And that creates a problem: when policies have this much definitional ambiguity, you’re essentially running on an honor system. Authors disclose what they think counts as “AI use.” Voters make individual judgments. The policy works as long as everyone shares the same unstated understanding of where the line falls.
That works in small, coherent communities. It breaks down under pressure.
The Detection Problem
Let’s talk about enforcement, because this is where the technical reality collides with policy intent.
Stanford’s Academic Integrity Working Group spent serious time evaluating AI detection tools—software designed to identify whether text was AI-generated. Their conclusion? These tools are “unsuitable for high-stakes situations, especially as evidence in academic misconduct cases.”
The problems are fundamental:
- High false positive and false negative rates
- Can’t handle mixed human-AI content reliably
- Show differential accuracy across demographic groups (non-native English writers get flagged disproportionately often)
In other words: detection doesn’t work. You can’t programmatically distinguish AI-written text from human-written text with enough confidence to base disciplinary action on it. Especially when most real-world content involves some iterative mix of human and AI input.
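The base-rate arithmetic is worth making concrete. A minimal sketch in Python, using hypothetical numbers rather than figures from the Stanford report:

```python
# Hypothetical numbers, chosen only to illustrate the base-rate problem;
# these are not figures from the Stanford working group.
honest_submissions = 10_000   # essays written entirely by humans
ai_submissions = 500          # essays that actually used a generative model

false_positive_rate = 0.01    # a detector marketed as "99% accurate" on human text
true_positive_rate = 0.80     # and it still misses some AI text outright

false_accusations = honest_submissions * false_positive_rate   # 100 human writers flagged
caught = ai_submissions * true_positive_rate                    # 400 AI essays flagged

# Of everyone the detector flags, roughly 1 in 5 is a human writer facing a
# misconduct case over work they actually wrote.
print(false_accusations / (false_accusations + caught))         # 0.2
```

The fewer people who actually cheat, the worse that ratio gets, which is exactly the position an honest community is in.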
So universities pivoted. Instead of trying to detect AI use after the fact, they’re shifting to in-person assessment—oral exams, proctored writing, synchronous evaluation. They’re also implementing disclosure requirements where students explain what tools they used and how.
This matters for understanding organizational AI policies more broadly: if you can’t enforce a policy, you’re just creating incentives to hide behavior.
Comic-Con and Bandcamp rely on community reporting and curator vigilance. SFWA depends on author honesty. All three are essentially saying “we’re trusting people to comply, and we’ll respond if violations get flagged.” That approach can work when you have tight-knit communities and limited submission volumes.
But as AI capabilities improve and incentives grow to circumvent restrictions, manual enforcement becomes unsustainable. You end up with a policy that signals values but can’t actually ensure compliance at scale.
When Corporations Drew the Line
Some of the earliest organizational AI restrictions came from places you wouldn’t expect—major tech companies and financial institutions.
JPMorgan Chase, Apple, Samsung, Amazon, Verizon, and Northrop Grumman all restricted employee access to ChatGPT and similar public AI tools. A 2024 Cisco survey found that 27% of companies had banned generative AI entirely, while 61% controlled which tools employees could use.
But their reasoning was completely different from the creative industry bans. They weren’t worried about authenticity or artistic integrity. They were worried about proprietary data accidentally getting fed into AI training pipelines.
Samsung found out the hard way in 2023 when engineers uploaded sensitive source code to ChatGPT. Apple restricted access after concerns about product roadmaps leaking through model interactions. When you use public AI tools, you’re effectively transferring company data outside your security perimeter—and OpenAI’s terms of service historically permitted using that data for model training.
Most of these companies have since moved beyond blanket bans to managed internal deployment. Samsung now has secured AI environments. Microsoft offers enterprise Copilot with data isolation guarantees. The question shifted from “should we use AI?” to “how can we deploy it safely?”
I mention this because it illustrates something: organizational resistance to AI isn’t monolithic. Creative organizations are fighting for recognition and compensation. Enterprises are fighting for data security. They’re both drawing lines, but the lines protect completely different things.
What’s Actually Working
When I look at organizational responses that seem more durable, they share something: they don’t try to ban AI categorically. They focus on specific harms and build accountability around those.
The Union Model
The Writers Guild of America and SAG-AFTRA didn’t ban AI. They negotiated contractual protections through collective bargaining. After their 2023 strikes—which effectively halted Hollywood production for months—both unions secured groundbreaking agreements.
WGA’s contract: Studios can’t use AI to write or rewrite scripts without explicit writer approval and compensation. AI outputs must be evaluated alongside human work. Usage requires disclosure.
SAG-AFTRA’s contract: Actors must provide explicit written consent for any digital replica of their voice or likeness. They can refuse consent without career penalty. The terms include suspension rights during strikes.
These are specific, enforceable protections tied to defined work. They don’t try to draw abstract lines around “AI use”—they establish who controls deployment, who gets compensated, and what requires consent.
The approach acknowledges that AI is part of the production environment. The question isn’t “is AI present?” but “who has power over how it’s used?”
But—and this is the gap that troubles me—both union agreements left unresolved whether AI companies can train on existing WGA or SAG-AFTRA work without compensation. They secured protections for work going forward. The millions of scripts and performances that already trained current models? Still up for grabs legally.
That matters because if training on existing work is free and legal, then these protections only apply to new creation. The economic leverage creators had—their existing catalog—is already incorporated into the models. The unions won the battle for future work while the war over historical use continues in courts.
The Litigation Track
The New York Times didn’t implement a ban. They sued OpenAI and Microsoft for copyright infringement in December 2023, alleging the companies trained ChatGPT on millions of Times articles without permission or compensation. The suit claims damages in the billions.
Getty Images sued Stability AI in both US and UK courts for training Stable Diffusion on Getty-owned images without licensing.
And here’s a case that hasn’t gotten enough attention: three visual artists—Sarah Andersen, Kelly McKernan, and Karla Ortiz—filed a class action in January 2023 against Stability AI, Midjourney, DeviantArt, and Runway AI. They’re not just fighting for themselves. They’re representing potentially millions of artists whose work was scraped into training datasets without permission.
In August 2024, a federal judge allowed their core copyright claims to proceed, denying defendants’ motions to dismiss. That’s significant—it means a court is seriously examining whether storing copyrighted images in training datasets and using them to train style-replication models constitutes infringement. The legal question isn’t settled, but it’s being treated as legitimate rather than frivolous.
Then in September 2025, Anthropic—the company behind Claude—agreed to pay authors $1.5 billion to settle claims over the pirated books in its training library. The settlement awaits final court approval, but the number alone signals that courts are taking these claims seriously.
This is a different organizational response entirely: using litigation to establish that unauthorized training creates legal liability. If courts consistently determine that training constitutes infringement, AI companies face damages that incentivize licensing agreements and compensation frameworks.
The strategic advantage here is that legal precedent binds across organizations and industries. A court ruling that AI training requires licenses affects every AI company, not just one platform’s submission policy.
But litigation has real limitations. These cases take 3-5 years to resolve. Discovery costs millions. Most creators can’t afford to sue. And the big cases might settle before establishing precedent—that $1.5B Anthropic settlement? If approved, the trial that would’ve created binding precedent never happens.
Still, I’m seeing a shift: major publishers and media organizations are betting that copyright law, not platform policies, will establish the real boundaries around AI training.
In December 2025, the Times escalated again by suing Perplexity—an AI-powered search startup—for systematically extracting Times content and offering it to users as direct competition to Times journalism. The litigation track is expanding from training practices to deployment practices. Every new business model built on copyrighted content faces potential liability.
The Netflix Framework
This is the approach I find most interesting because it acknowledges reality without surrendering to it.
In August 2025, Netflix published comprehensive guidelines for AI use in content production. Instead of asking “is AI present?” they ask “how is AI being used, by whom, and with what oversight?”
The framework distinguishes:
Low-risk uses (approved without escalation): Ideation, brainstorming, exploring visual concepts—AI generating exploratory material not intended for final delivery.
High-risk uses (requiring executive approval): Creating character designs that could impact legal rights, using AI on Netflix proprietary data, altering performer likenesses, replacing union work without consent.
Categorical prohibitions: AI cannot replace unionized talent without explicit consent and compensation. AI-generated material can’t form final deliverables without approval. Production data can’t be stored or reused for AI training.
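Operationally, a framework like this reduces to a small decision table rather than a detector. Here’s a minimal sketch; the tier labels echo Netflix’s categories, but the use-case names and the `classify_use` function are my own illustrative assumptions, not anything Netflix has published:

```python
from enum import Enum

class Tier(Enum):
    LOW_RISK = "approved without escalation"
    HIGH_RISK = "requires executive approval"
    PROHIBITED = "not permitted"

# Illustrative mapping only; a real policy would be far more granular.
POLICY = {
    "ideation_and_brainstorming": Tier.LOW_RISK,
    "exploratory_concept_art": Tier.LOW_RISK,
    "character_design_with_rights_impact": Tier.HIGH_RISK,
    "use_of_proprietary_production_data": Tier.HIGH_RISK,
    "altering_performer_likeness": Tier.HIGH_RISK,
    "replacing_union_work_without_consent": Tier.PROHIBITED,
    "final_deliverable_without_approval": Tier.PROHIBITED,
}

def classify_use(use_case: str) -> Tier:
    """Unknown uses default to the most restrictive tier, not the least."""
    return POLICY.get(use_case, Tier.PROHIBITED)

print(classify_use("ideation_and_brainstorming").value)    # approved without escalation
print(classify_use("something_nobody_anticipated").value)  # not permitted
```

The design choice doing the work is the default: any use the policy doesn’t explicitly name falls to the most restrictive tier.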
Netflix wants to keep using AI for speed and exploration—they’re not pretending it doesn’t exist in their workflows. But they’re drawing bright lines around talent consent, copyrighted inputs, and anything that could mislead viewers.
This framework aligns with WGA and SAG-AFTRA protections rather than conflicting with them. It implements union agreements operationally. And because it targets specific risks rather than AI categorically, it reduces the false-positive burden on creators using legitimate tools.
I think this model might prove more durable than categorical bans for a simple reason: it doesn’t require believing you can eliminate AI from creative workflows. It requires believing you can govern how it’s used.
The Opt-Out Trap
Opt-out systems are burden-shifting dressed up as creator empowerment.
In October 2025, LinkedIn announced that starting November 3rd, the platform would use member data and content to train AI models by default. Users could opt out. But opting out only prevents future use—data already uploaded stays available for training.
OpenAI promised to deliver a “Media Manager” tool by end of 2024 that would let creators specify whether their work should be included in AI training. They failed to deliver it. No revised timeline has been published.
Creators cannot opt out of systems they don’t know exist, from AI companies they’ve never heard of, while maintaining visibility in online spaces.
If you’re a photographer, your work is on Instagram, Flickr, portfolio sites, maybe licensed through stock agencies. Are you supposed to track down every AI company that might scrape those platforms? Monitor new startups? Check privacy policies quarterly? That’s not protection—that’s requiring full-time vigilance to prevent unauthorized use of your own work.
The Copyright Alliance put it clearly in their November 2025 analysis: “Opt-out systems are ineffective unless there’s transparency…Whatever shred of utility an opt-out system may have is rendered completely useless if there are no accompanying transparency standards or obligations to enforce the opt-out.”
OpenAI has stated publicly that “it would be impossible to train competitive AI models” without copyrighted data. So when they promise opt-out tools, they’re essentially saying “we’ll give you the option to exclude yourself from something we’ve already determined we need to remain competitive.”
That’s not consent. That’s post-hoc notification with an impossible choice: participate in the system that’s training on your work, or lose visibility in the online spaces where your work needs to exist.
I’ve spent 25 years building technology that’s supposed to empower people, not trap them in false choices. When organizations implement opt-out rather than opt-in systems, they’re shifting the burden of enforcement from companies with resources to individuals without leverage. It’s the policy equivalent of saying “it’s your responsibility to stop us.”
That violates something fundamental about how technology should work. You don’t get to take something, then offer people the “courtesy” of asking you to stop.
The Collective Response
In February 2025, over 1,000 musicians released an album called “Is This What We Want?”—recordings of vacant recording studios and empty performance venues. Deliberate silence as protest.
Kate Bush, Billy Ocean, Annie Lennox, Damon Albarn, Hans Zimmer, Imogen Heap, Ed O’Brien—artists across generations and genres. The album was direct opposition to proposed UK copyright law changes that would allow AI companies to train on copyrighted music by default, requiring artists to opt out.
Their argument: opt-out places impossible burden on creators, and the proposed changes would provide no compensation for training use. Paul McCartney, Elton John, and Simon Cowell added their support. The UK music industry contributes £7.6 billion to the economy annually. This wasn’t fringe protest—this was the industry itself saying no.
Separately, over 200 musicians including Billie Eilish, Nicki Minaj, and the Arkells signed an open letter in April 2024 calling on AI companies to stop “predatory use” of AI to steal professional artists’ voices and likenesses.
These collective actions differ from institutional bans—they’re grassroots, artist-initiated resistance campaigns leveraging visibility and political influence. They’ve proven effective at raising consciousness and shaping policy debates, but they lack enforcement mechanisms beyond legislative advocacy.
Ed Newton-Rex, who organized the silent album protest, founded Fairly Trained—a nonprofit that certifies AI companies based on whether they obtained creator consent for training. It’s not a ban, it’s a transparency mechanism. Companies using only consented training data can claim certification, potentially creating competitive advantage in creator-conscious markets.
But Fairly Trained faces the same limitation all voluntary systems face: it depends on companies choosing to participate and honestly disclosing their training sources. Most don’t.
Artists are refusing to wait for institutions to protect them. They’re organizing, making noise, demanding accountability. That matters even when the mechanisms are imperfect.
What This Means If You’re Making Decisions Now
If you’re running an organization trying to figure out your AI policy:
Bans work in enclosed ecosystems. If you control both submission and publication gates—like Paizo does with Pathfinder, or Bandcamp does with music distribution—you can enforce categorical restrictions. Your curator judgment is the enforcement mechanism.
Bans break down in open platforms. The larger your scale, the harder enforcement becomes. The more AI embeds in routine tools, the harder definition becomes.
Better question: What specific harms are you trying to prevent? Talent replacement without consent? Copyright infringement? Data leakage? Audience deception? Name the actual risk, then build policy around that.
If you’re a creator trying to figure out where you stand:
Prestige protections matter for visibility, not livelihood. Award eligibility and exhibition bans protect the credential value of those recognitions. That’s real—especially for emerging creators building reputations. But it doesn’t address economic displacement in commercial work.
The economic threat is in routine creation. Stock imagery, background music, copywriting, design templates, commodity illustration. That’s where AI deployment is fastest because the use case is clear and the quality bar is “good enough.”
Watch the litigation track. If courts establish that unauthorized training constitutes infringement, that creates pressure for licensing agreements. That could build toward compensation frameworks. Platform bans won’t do that.
If you’re building policy:
Detection doesn’t work. Disclosure is fragile. You can’t programmatically identify AI use reliably. You can request disclosure, but that depends on honesty and shared understanding of what counts.
Think in terms of consent, transparency, and compensation. Not binary presence/absence of AI. Who controls deployment? Who gets paid? What requires disclosure?
Managed deployment beats prohibition. Unless you’re in an enclosed ecosystem where you control all access points, you’re better off governing how AI is used than trying to prevent use entirely.
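Since detection fails and disclosure is what remains, it helps to pin down what a disclosure record actually needs to capture. A minimal sketch built around those three questions; the schema and field names are hypothetical, not anything an organization has published:

```python
from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    """One record per AI-assisted step. Hypothetical schema, for illustration only."""
    tool: str                    # e.g. "LLM editing assistant", "image model"
    purpose: str                 # what it was used for
    in_final_work: bool          # did generated material end up in the deliverable?
    affects_someone_else: bool   # performer likeness, another creator's work, union-covered tasks
    consent_and_pay_settled: bool

def needs_escalation(d: AIUseDisclosure) -> bool:
    # The reviewer's question is not "was AI present?" but
    # "does this use touch consent, compensation, or the final product?"
    return d.in_final_work or (d.affects_someone_else and not d.consent_and_pay_settled)

example = AIUseDisclosure(
    tool="LLM editing assistant",
    purpose="grammar and phrasing passes on a human-written draft",
    in_final_work=False,
    affects_someone_else=False,
    consent_and_pay_settled=True,
)
print(needs_escalation(example))  # False: disclosed, low-risk, no escalation needed
```

Nothing here detects anything; it just makes the consent, compensation, and final-output questions explicit enough that a reviewer can act on them.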
The Uncomfortable Truth
Multiple tracks are developing simultaneously, and they’re not consistent with each other.
Litigation is establishing copyright liability. Unions are negotiating contractual protections. Awards and platforms are implementing categorical bans. Enterprises are building managed deployment frameworks. Universities are shifting to disclosure and in-person assessment.
These approaches contradict each other in some cases. The Netflix framework assumes AI is part of production workflows; SFWA’s policy attempts to exclude it entirely. Copyright litigation treats unauthorized training as infringement; opt-out systems treat it as default-permissible.
Different sectors face different vulnerabilities, operate under different economic models, have different institutional structures. There’s no reason to expect uniform policy.
But the organizations drawing the hardest lines are protecting the smallest territory.
Award eligibility, exhibition space, platform curation—these matter for prestige and visibility. But most creative professionals make their living in the vast commercial middle: routine work that’s valuable but not prestigious, skilled but not unique, professional but not award-worthy.
That’s where AI deployment is accelerating. Not because AI is better at creative work, but because the economics favor “good enough at near-zero marginal cost” over “excellent at human rates.”
Bandcamp can ban AI music. That doesn’t stop restaurants from using AI-generated background music instead of licensing from artists. Comic-Con can ban AI art. That doesn’t stop advertising agencies from using Midjourney for concept sketches instead of hiring illustrators.
The bans protect the showcase. The economic displacement is happening in the warehouse.
What I’m Watching
I’ve shifted on some of this while researching it. I used to think categorical bans were mostly performative. I now think they serve real functions for specific organizations—particularly those controlling prestige credentials in tight-knit communities.
But I’m more convinced that the durable solutions will come from three places:
Copyright litigation that establishes training liability. If courts consistently rule that unauthorized training constitutes infringement, that creates economic pressure for licensing. Companies can either pay for training access upfront or risk damages later. That’s a sustainable framework.
Union-style collective bargaining. Individual creators lack leverage against technology companies. Collective negotiations through unions, guilds, or industry associations can establish baseline protections for consent, compensation, and replacement restrictions.
Principled governance frameworks like Netflix’s. Policies that acknowledge AI presence in workflows while governing specific uses—talent replacement, copyright violations, audience deception. These align with union protections and address actual harms rather than trying to prohibit technology categorically.
What I’m genuinely uncertain about:
Will authenticity premiums persist? The bet these organizations are making is that human authorship retains distinctive value even as AI quality approaches parity. That might be true. Or audiences might not care once they can’t tell the difference. We’ll find out.
Can licensing frameworks be established before synthetic training makes them irrelevant? Gartner has predicted that most AI training data will be synthetic by 2026—models training on AI-generated data rather than human-created work—and that synthetic data will completely overshadow real data by 2030. If that happens, the copyright leverage creators currently have disappears. The window for establishing compensation frameworks might be shorter than we think.
What happens to mid-career creators? Emerging creators benefit from prestige protections—awards and exhibitions matter most when you’re building reputation. Established creators often have leverage for individual negotiations. Mid-career professionals who rely on steady commercial work but lack star power? I don’t see clear protection mechanisms for them yet.
I’ve worked with enough mid-career creatives to know they’re the ones who keep industries running. They’re the reliable professionals taking on diverse projects, mentoring emerging talent, building the connective tissue that makes creative ecosystems function. If we lose that middle tier because AI displaces routine commercial work faster than we can build compensation frameworks, we’re not just losing individual livelihoods. We’re losing the infrastructure that develops the next generation of creative talent.
The question isn’t whether AI is coming—it’s already here. The question is which organizational responses actually protect people, and which ones just make us feel better about drawing lines in places we can still control.
I don’t have definitive answers. But I think asking the right questions matters: Who has power over AI deployment? Who gets compensated for training data? What consent mechanisms actually function? How do we distinguish between governing AI use and prohibiting it entirely?
These are messier questions than “should we ban AI?” But they’re the ones that’ll determine whether creators have agency in this transition or just the consolation of principled policies that don’t protect livelihoods.
