On January 13, 2026, Meta started cutting loose more than 1,000 Reality Labs employees—roughly 10% of the division that was supposed to be building the future. The layoffs fell hardest on the teams working on VR headsets and on Horizon Worlds, Meta’s virtual social network that never quite convinced people to show up.
The timing wasn’t subtle, but then again, $73 billion in losses doesn’t leave much room for subtlety. That’s the cumulative damage since late 2020, when Mark Zuckerberg bet the company’s identity on immersive virtual worlds. Q3 2025 alone: $4.4 billion operating loss on $470 million revenue. At some point you have to ask whether anyone internally was allowed to say “this isn’t working,” or whether the growth imperative just steamrolled those conversations into irrelevance.
But Meta found its exit. Ray-Ban Meta smart glasses have sold more than 2 million units since October 2023, with sales tripling in Q2 2025. They look like regular Ray-Bans, cost $299 to $799, and pair cameras with an AI assistant. Unlike Quest headsets gathering dust in closets, people actually wore them. So Meta did what Meta does: followed the growth signal and pivoted the entire narrative.
Vishal Shah, who’d been leading metaverse initiatives, was moved over to become vice president of AI products. The company that spent five years trying to convince people to escape reality decided instead to augment it.
Here’s what bothers me about this: it’s not really a pivot. It’s the same playbook. Scale toward whatever shows traction, ignore internal warnings when they conflict with growth metrics, defer external costs until they become unavoidable. The warnings about teen mental health, algorithmic amplification of hate speech, sex trafficking on the platform—they were all there, documented in internal research and whistleblower testimony. They were also all ignored when action would have meant slowing down.
As Meta rushes into AI, I keep coming back to a simple question: has anything changed about how the company makes decisions when growth and safety collide? The evidence suggests it hasn’t. And that matters more than the technology itself.
When Warnings Get Ignored
The employee communications obtained by Congress and the SEC tell a story that’s hard to reconcile with public statements about user safety. More than that—they document what happens when evidence of harm runs headlong into growth targets.
When the Data Told the Truth
Frances Haugen testified before the Senate Commerce Subcommittee in October 2021 after copying tens of thousands of pages of internal Facebook documents. She’d worked nearly two years as a data scientist specializing in algorithmic product management. What she revealed wasn’t a few problematic posts slipping through moderation. It was systematic prioritization of engagement over everything else.
The Facebook Files—published by The Wall Street Journal—included research showing 13.5% of teen girls said Instagram made suicidal thoughts worse, and 17% said it worsened eating disorders. The company knew this. Employees had described themselves in internal communications as “drug pushers” designing for addiction. Infinite scroll wasn’t an oversight—it was engineered specifically to keep users engaged.
Haugen testified that Facebook consistently chose growth over safeguards and hid research from the public and government. The documents she filed with the SEC showed Zuckerberg’s public statements contradicted internal research. In March 2020, he told Congress: “We have removed content that could lead to imminent real-world harm.” Internal documents revealed the company lacked algorithms to detect hate speech in Hindi and Bengali—languages spoken by hundreds of millions of users.
Two years later, Arturo Béjar sat in front of the Senate Judiciary Subcommittee. Béjar had been Facebook’s director of engineering for Protect and Care from 2009 to 2015, reporting directly to the CTO. He came back as a consultant to Instagram in 2019 to work on user wellbeing. What shocked him: the safety tools his team had built for teenagers had been dismantled.
His testimony in November 2023 was grounded in research on 13- to 15-year-olds using Instagram. The numbers: 13% reported receiving unwanted sexual advances in the previous seven days. 22% were targets of bullying. Nearly 40% experienced negative social comparison. A quarter felt worse about their bodies and social relationships.
Béjar sent these findings to Zuckerberg, Sheryl Sandberg, Instagram CEO Adam Mosseri, and Chief Product Officer Chris Cox on October 5, 2021—the same day Haugen testified in Congress. Sandberg expressed sympathy. Mosseri asked for a follow-up meeting.
Zuckerberg didn’t respond at all.
Let that sit for a second. The CEO receives data showing 13% of young teens are getting sexually harassed on his platform weekly, and he doesn’t even send a reply. Two years later, there was still no way for minors to flag conversations containing unwanted sexual advances. Béjar called it “the largest-scale sexual harassment of teens to have ever happened.”
I don’t know how else to read that silence except as a decision. Not responding is a response.
The 17-Strike Policy
If Béjar’s testimony revealed negligence, Vaishnavi Jayakumar’s allegations suggested something harder to explain away. Jayakumar was Instagram’s head of safety and wellbeing until 2022. In depositions unsealed in November 2025, she testified that when she joined in 2020, she was “shocked” to discover the “17x” strike policy.
Accounts trafficking humans for sex could rack up 16 violations for prostitution and sexual solicitation before suspension on the 17th strike. “By any measure across the industry,” Jayakumar testified, this was “a very, very high strike threshold.”
She raised it multiple times. Building a better reporting system, she was told, would require “too much work.” Meanwhile Instagram already let users report spam, intellectual property violations, and firearm promotions directly within the app. Apparently those violations merited simpler tools than sex trafficking.
Meta disputed the allegations in media statements, saying the company now enforces a “one strike” policy and immediately removes accounts involved in human exploitation. What Meta didn’t dispute: the 17-strike system existed.
I keep trying to construct a charitable explanation for this and I can’t get there. Sixteen chances to traffic people before consequences? What does that say about what the company valued versus what it was willing to ignore?
The VR Cover-Up
In September 2025, Jason Sattizahn and Cayce Savage testified before the Senate Judiciary Subcommittee about their work on user safety for Meta’s VR products. Their allegations went beyond ignoring warnings—they described active suppression of evidence.
Sattizahn worked at Meta from 2018 to 2024. When his research uncovered underage children in Germany being subjected to demands for sex acts and nude photos in Meta VR, he testified that “Meta demanded that we erase any evidence of such dangers that we saw.” When he continued researching harms to women experiencing sexual solicitation in VR, Meta’s legal team told him to change future research protocols “to not gather this data.”
Savage, who specialized in youth user experience research for VR, testified that researchers were discouraged from studying risks to children—allowing the company to claim ignorance. Her work uncovered bullying, sexual assault, and demands for nude images targeting minors in virtual environments.
Both alleged that after Haugen’s testimony, Meta’s legal department imposed new protocols on research into “sensitive” topics including children, gender, race, and harassment. Internal work groups were locked down. Researchers were “directed how to write reports to limit risk to Meta.”
Sattizahn testified that when he raised concerns about violating the Children’s Online Privacy Protection Act in October 2023, Meta fired him six months later—after six years, multiple promotions, and positive performance reviews.
Here’s the pattern that emerges: internal research reveals harm. Employees raise alarms. Leadership either ignores the warnings or—and this is where it gets darker—actively suppresses the data. By the time issues reach the public, millions of users have already been exposed to documented risks.
The question I can’t shake: how many times does this pattern have to repeat before we stop treating each incident as an isolated failure and start seeing it as the operating system?
The 41-State Lawsuit
In October 2023, attorneys general from 41 states and the District of Columbia filed lawsuits alleging Meta knowingly designed Instagram and Facebook features to addict children while falsely assuring the public these features were safe. The lawsuits drew heavily on whistleblower disclosures and Haugen’s internal documents.
According to the complaints, Meta designed features like infinite scroll and constant notifications “with the express goal of hooking young users.” Algorithms pushed users into rabbit holes to maximize engagement. Internal employee chats described Instagram as “a drug” and employees as “basically pushers.”
The states alleged violations of COPPA for collecting data on users under 13 without parental consent, plus state consumer protection violations. They cited Meta’s own research showing the platforms undermined sleep, promoted body dysmorphia through filters and likes, and contributed to what the U.S. Surgeon General called a “youth mental health crisis.”
Colorado Attorney General Phil Weiser said at the filing: “This is not an action we take lightly. This is not a case that we know is going to be decided very quickly. But it’s of the utmost importance.”
This lawsuit remains the primary legal threat Meta faces in the U.S. on teen safety. Unlike the FTC antitrust case—dismissed in November 2025—this action targets the core business model: maximizing time on platform to sell more targeted advertising.
The Global Cost of Lethal Negligence
The whistleblowers documented what happened when warnings were ignored inside the company. The international record documents what happened when those same patterns played out in countries Meta designated as “low priority”—places where the revenue didn’t justify the investment in safety.
How Neglect Gets Prioritized
Sophie Zhang worked at Facebook from 2018 to 2020 as a data scientist on the fake engagement team, identifying bot accounts. What she found were political manipulation networks in more than 25 countries—Honduras, Azerbaijan, Afghanistan, India—where fake accounts artificially inflated authoritarian leaders’ popularity and harassed opponents.
In October 2021, Zhang testified before the UK Parliament. She told lawmakers that Facebook’s resource allocation created a structural problem: 99% of resources went to fighting spam, while proposals to expand work against political manipulation were rejected for lack of staff. Issues were prioritized by volume, so manipulation in smaller countries got discounted despite its real-world impact.
In Honduras, Zhang found thousands of fake pages boosting posts by President Juan Orlando Hernández, whose 2017 reelection is widely viewed as fraudulent. It took 11 and a half months for Facebook to start investigating.
When Facebook fired Zhang in September 2020, she declined a $64,000 severance that included a non-disparagement agreement. Instead she posted a 7,800-word memo to Facebook’s internal message board detailing the company’s failures. Facebook suppressed the post, then contacted her web hosting service to force her personal backup offline.
That response tells you everything about what the company feared more: the manipulation networks themselves, or people finding out about them.
Myanmar and the Path to Genocide
In 2018, the UN Independent International Fact-Finding Mission on Myanmar concluded that Facebook played a “determining role” in what became the genocide of the Rohingya people.
The mechanics were straightforward. Starting around 2014, Myanmar’s military ran a coordinated campaign using fake accounts to flood Facebook with anti-Rohingya propaganda. Posts called the Muslim minority “dogs,” “maggots,” and “rapists” who should be “fed to pigs” and “exterminated.”
Facebook’s response in mid-2014: one Burmese-speaking content moderator. Based in Dublin, Ireland. Monitoring 1.2 million active users. By early 2015, that number had increased to two.
I’ve read those numbers multiple times and I still can’t make them make sense. Two people. For a million users. In a country civil society groups were explicitly warning about genocide risk.
Technical challenges made it worse. Many users posted using Zawgyi, a local font encoding, rather than Unicode. Facebook’s Burmese-to-English translation tool for moderation relied on Unicode, so it often mistranslated content. The algorithm couldn’t read what people were posting.
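To make the failure concrete, here is a minimal sketch of the pre-processing step any Unicode-based pipeline needs before translating or classifying Burmese text. It assumes the open-source myanmar-tools package (which Google released years after the events described here, for exactly this problem) and PyICU; the class and method names belong to that library, not to anything Facebook is documented to have run.

```python
# Illustration of the Zawgyi problem, not Facebook's actual moderation stack.
# Assumes the open-source `myanmar-tools` package (pip install myanmar-tools)
# and PyICU (pip install PyICU); both postdate the events described here.
from myanmar_tools import ZawgyiDetector
import icu

detector = ZawgyiDetector()
zawgyi_to_unicode = icu.Transliterator.createInstance("Zawgyi-my")

def normalize_burmese(text: str) -> str:
    """Convert probable Zawgyi input to standard Unicode before any analysis."""
    # Zawgyi reuses Myanmar-block code points with different meanings, so a
    # translator or classifier trained on Unicode text sees scrambled input
    # unless posts are detected and converted first.
    if detector.get_zawgyi_probability(text) > 0.95:
        return zawgyi_to_unicode.transliterate(text)
    return text
```

Without a step like this, every downstream tool, from translation to hate-speech classification, is effectively reading a different alphabet than the one users wrote in.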
Civil society groups tried to help. In February 2015, Susan Benesch of the Dangerous Speech Project gave a presentation at Facebook headquarters explaining how anti-Rohingya speech was spreading. In March 2015, researcher Matt Schissler traveled to Menlo Park to meet with Facebook employees about the dangers. The warnings were documented, acknowledged, and largely ignored.
By 2017, when Myanmar’s military launched what the UN would call a genocidal campaign, Facebook had become the primary information source in a country where “for most users, Facebook is the internet.” The military had 700 people working on propaganda. Facebook promised to have 100 Burmese moderators by year’s end.
More than 700,000 Rohingya fled to Bangladesh. Thousands were killed. In March 2022, the United States formally determined Myanmar had committed genocide.
The Gambia, prosecuting Myanmar at the International Court of Justice, sought access to Facebook’s preserved data on military accounts—data that could demonstrate genocidal intent. Facebook refused, calling the request “extraordinarily broad.”
I struggle with this one. The company’s algorithms amplified content that contributed to genocide. The UN documented it. And when asked to help prosecute the perpetrators, Facebook cited privacy concerns and the burden of compliance.
The Pattern Repeats in Ethiopia
When civil war broke out in Ethiopia’s Tigray region in November 2020, Facebook had already been through Myanmar. The company had issued public apologies, commissioned human rights reports, promised to do better. What happened next was a test of whether those promises meant anything.
According to a 2023 Amnesty International investigation, Facebook’s algorithmic systems supercharged the spread of harmful rhetoric targeting Tigrayans, while moderation systems failed to detect and respond appropriately.
On November 3, 2021, two Facebook posts targeted Professor Meareg Amare, a chemistry professor at Bahir Dar University. The posts included his name, photo, workplace, and home address. They falsely claimed he supported the Tigray People’s Liberation Front and had stolen money.
His son Abrham reported both posts to Facebook. Neither was removed.
That same day, men in Amhara special forces uniforms followed the professor home from work and killed him.
I don’t know how to process that sequence. Posts targeting someone by name and address. Family member reports them. Platform doesn’t act. Person gets murdered. Where in that chain was there room for the outcome to be different?
By late 2021, Meta could monitor content in only four of the eighty-five languages spoken in Ethiopia—potentially leaving 25% of the population uncovered. The company had hired twenty-five content moderators in the country.
In June 2022, Global Witness tested the system: they submitted twelve advertisements containing egregious hate speech—content previously removed from Facebook as policy violations. Facebook approved all twelve.
Internal Meta documents reviewed by Amnesty revealed the company knew its mitigation measures were inadequate and knew Ethiopia was at high risk of violence. The UN Special Adviser on the Prevention of Genocide warned of heightened genocide risk in Tigray, Amhara, Afar, and Oromia regions.
The warnings, once again, were documented and ignored.
Weaponizing the Platform in the Philippines
The Philippines offered a different model—not genocide, but systematic weaponization of Facebook to elect and protect an authoritarian leader.
When Rodrigo Duterte ran for president in 2016, his campaign hired paid trolls and built networks of fake accounts to spread propaganda and attack critics. Duterte admitted to it publicly. After winning with 39% of the vote, the networks didn’t disband. They became instruments of state power.
By April 2017, clear links to the state emerged, particularly through the Presidential Communications Operations Office under Secretary Martin Andanar. Rappler traced a sample network of 26 fake accounts that influenced up to three million users. In November 2016, they documented more than 50,000 accounts under direct control of the propaganda network.
Facebook’s own global elections policy director, Katie Harbath, called the Philippines “patient zero” in a 2018 talk about disinformation in politics. Christopher Wylie, the Cambridge Analytica whistleblower, testified that the company tested strategies for spreading propaganda and manipulating voters in the Philippines before using them for Brexit and Trump.
Duterte’s “war on drugs” killed an estimated 12,000 to 30,000 people in extrajudicial killings. The International Criminal Court opened investigations into crimes against humanity. Throughout, Facebook served as the primary platform for propaganda justifying the killings and harassment campaigns against journalists documenting them.
In March 2019, Facebook removed 200 pages linked to Nic Gabunada, who’d led Duterte’s 2016 social media campaign, for “coordinated inauthentic behavior.” First time Facebook publicly named an individual behind such a network. By then the troll infrastructure had already achieved its purpose and spread globally.
Cambridge Analytica and Delayed Action
In March 2018, the world learned Cambridge Analytica had harvested data from up to 87 million Facebook users without consent. Researcher Aleksandr Kogan created a personality quiz app in 2014. About 270,000 people downloaded it. Because of how Facebook’s API worked then, the app collected data from those users plus their entire friend networks.
The data included public profiles, page likes, birthdays, current cities, and sometimes News Feeds, timelines, and private messages. Cambridge Analytica used this to build psychographic profiles and target political ads, including for Trump’s 2016 campaign.
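As a rough sketch of the mechanism: the code below shows what a v1.0-era app’s collection looked like in spirit. The endpoint version, field names, and permissions are approximations of long-deprecated Graph API behavior, included only to illustrate the consent gap; none of it works against today’s API.

```python
# Hypothetical sketch of pre-2015 Graph API friend harvesting; the endpoint
# version, fields, and permissions approximate deprecated behavior and are
# shown only to illustrate the scale of the consent gap.
import requests

GRAPH = "https://graph.facebook.com/v1.0"
FIELDS = "id,name,birthday,location,likes.limit(100)"

def harvest(user_token: str) -> list[dict]:
    """One consenting quiz-taker exposes data about their entire friend list."""
    # The quiz app requested extended permissions (e.g. friends_likes,
    # friends_birthday), so the friends never saw a consent prompt of their own.
    me = requests.get(f"{GRAPH}/me",
                      params={"fields": FIELDS, "access_token": user_token}).json()
    friends = requests.get(f"{GRAPH}/me/friends",
                           params={"fields": FIELDS, "access_token": user_token}).json()
    return [me] + friends.get("data", [])
```

The asymmetry is the point: one token, granted by one person, unlocked structured data about hundreds of people who never interacted with the app, which is how roughly 270,000 installs reached tens of millions of profiles.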
The critical fact: Facebook learned of the misuse in 2015. The SEC’s complaint alleged the company then failed to correct its existing risk disclosures for more than two years. Facebook asked Kogan and Cambridge Analytica to delete the data, received certifications they’d done so, and took no further action. The public didn’t learn until March 2018, after The Guardian and The New York Times published investigations.
Facebook agreed to pay the SEC $100 million for “misleading investors about the risks it faced from misuse of user data.” The FTC imposed a $5 billion fine. Cambridge Analytica declared bankruptcy.
More than two years passed between discovering the misuse and the public finding out, and even then the disclosure came from journalists, not the company. I keep trying to construct a scenario where that delay was anything other than hoping the problem would stay buried.
The pattern across Myanmar, Ethiopia, Philippines, and Cambridge Analytica is identical to the pattern internal whistleblowers documented: evidence of harm, warnings from employees or external groups, minimal action until public or regulatory pressure became unavoidable. The difference is scale. In these cases, the harm wasn’t measured in compromised data points or mental health impacts. It was measured in bodies.
Buying Political Protection
In December 2024, Meta donated $1 million to Donald Trump’s inauguration fund. Zuckerberg personally directed the donation, according to The Wall Street Journal. It came two weeks after Zuckerberg dined with Trump at Mar-a-Lago, where he demonstrated Ray-Ban Meta smart glasses and discussed the company’s relationship with the incoming administration.
The shift was stark. In January 2021, Zuckerberg had banned Trump from Facebook and Instagram following the Capitol attack, writing that Trump “intends to use his remaining time in office to undermine the peaceful and lawful transition of power.” Three years later, Zuckerberg called Trump’s response to a July 2024 assassination attempt “one of the most badass things I’ve ever seen in my life.”
Meta wasn’t alone—Amazon, Google, Apple, Microsoft, and OpenAI all made similar seven-figure donations as Trump prepared to return. The total raised by Trump’s inaugural committee hit a record $239 million, more than the previous three inaugurations combined. For Meta, the donation bought access: donors giving at least $1 million received tickets and face time with Trump, Vice President JD Vance, and Cabinet officials.
The timing mattered. The FTC’s suit to break up Meta—seeking to force divestiture of Instagram and WhatsApp—was still pending. Meta faced potential regulatory action on AI, data privacy, and content moderation. The Trump administration would set policy on all of it for four years.
Here’s what’s interesting about Meta’s political spending: it doesn’t match its employees’ preferences. Individual Meta employees gave nearly $2 million to Kamala Harris during the 2024 campaign—86% of employee political giving went to Democrats. But the corporate PAC split its donations more evenly, leaning toward Republicans in recent cycles.
That suggests calculation, not ideology. Employees lean Democratic. The company hedges by directing PAC money toward Republicans who now control both chambers and the White House. In 2024, the corporate PAC gave $30,000 each to Republican and Democratic senatorial campaign committees and spread cash to leaders in both parties, emphasizing incumbents.
The China Allegations
In April 2025, Sarah Wynn-Williams testified before the Senate Judiciary Subcommittee on Crime and Counterterrorism. She’d been Meta’s director of global public policy from 2011 to 2017. Her allegations were specific: Meta worked “hand in glove” with the Chinese Communist Party to build censorship tools, briefed Chinese officials on AI technology, and deleted the account of a Chinese dissident living in the United States at Beijing’s request.
Wynn-Williams testified that when Beijing demanded Facebook delete the account of Guo Wengui, a prominent Chinese dissident living on American soil, the company complied and then misled Congress when questioned at a Senate hearing. Meta responded that Guo’s account was removed for sharing personally identifiable information, including passport numbers, social security numbers, and home addresses, in violation of Facebook’s rules.
According to Wynn-Williams, Meta started briefing the Chinese Communist Party as early as 2015 on critical emerging technologies including AI. She alleged Meta built custom censorship tools tested not only in mainland China but also in Hong Kong and Taiwan. The tools included a “virality counter” that flagged posts with over 10,000 views for review by a “chief editor”—Senator Richard Blumenthal called it “an Orwellian censor.”
Meta strongly denied the allegations, calling Wynn-Williams’ testimony “divorced from reality and riddled with false claims.” The company noted Zuckerberg had been public about Meta’s interest in the Chinese market but emphasized: “We do not operate our services in China today.”
Senator Josh Hawley revealed that Meta had attempted to prevent the hearing, threatening Wynn-Williams with $50,000 in punitive damages every time she mentioned Facebook in public, even if the statements were true. “Facebook is attempting her total and complete financial ruin,” Hawley said.
I don’t know if Wynn-Williams’ allegations are accurate. What I do know is Meta’s response—threatening financial destruction of a former employee for testifying to Congress—doesn’t suggest a company confident in its version of events.
Hardball Tactics
When governments try to force Meta to pay news publishers, the company responds with blunt force.
In February 2021, Australia passed the News Media Bargaining Code requiring tech platforms to negotiate payment deals with news publishers. Meta’s response: it blocked Australian users from sharing or viewing news content on Facebook. The ban lasted a week. After negotiations with the government, Meta agreed to voluntary payment deals with major publishers and restored news access.
In June 2023, Canada passed the Online News Act with similar provisions. Meta responded by permanently blocking news content for Canadian users—a ban still in effect as of January 2026. Canadian users cannot share news links. News publishers cannot reach Canadian audiences through Facebook or Instagram. Meta calculated that the Canadian market—37 million users—was less valuable than maintaining the principle that governments cannot force it to pay for links.
In January 2026, Meta took its most aggressive action yet: blocking approximately 550,000 accounts in Australia as the country moved to require age verification on social media and bar children under 16 from the platforms.
The pattern is consistent: when a government threatens Meta’s business model or imposes regulations the company opposes, Meta demonstrates it will cut off entire populations. The message to other governments is clear—regulatory pressure will be met with retaliation, even if millions lose access.
The contrast is instructive. When genocide allegations emerged from Myanmar, Meta commissioned human rights reports and apologized. When Canada threatened revenue, Meta cut off an entire country’s access to news for more than two years and counting. The company’s responsiveness appears calibrated to financial and political leverage, not human cost.
Fines and Lawsuits as Operating Costs
For a company that generated $164 billion in revenue in 2024, regulatory fines function less as punishment than as line items on quarterly earnings.
The European Union has been Meta’s most aggressive enforcer. In May 2023, Ireland’s Data Protection Commission imposed a record €1.2 billion fine for illegally transferring EU user data to the United States in violation of GDPR. The fine—largest ever under GDPR—reflected what regulators called “systematic, repetitive and continuous” violations affecting millions.
Earlier that same year, in January 2023, the Irish DPC had fined Meta €390 million for improperly requiring users to accept personalized advertising to use Facebook, Instagram, and WhatsApp. September 2024: €91 million for storing user passwords in plaintext rather than with cryptographic protection. December 2024: €251 million related to a 2018 data breach.
Total from Ireland alone since 2023 exceeds €2 billion. Add fines from other EU countries and the cumulative cost approaches €2.9 billion—less than 2% of Meta’s 2024 revenue.
The company’s largest pending liability is domestic. The IRS is seeking approximately $16 billion in back taxes related to how Meta structured subsidiaries in Ireland to minimize U.S. tax obligations. Meta contests the assessment. The case remains in litigation.
On the legal front, Meta secured its biggest victory in November 2025 when U.S. District Judge James Boasberg dismissed the FTC’s antitrust lawsuit. The FTC had sued in December 2020, seeking to force Meta to divest Instagram and WhatsApp on grounds the acquisitions in 2012 and 2014 were anticompetitive. After five years of litigation and a six-week trial, Boasberg ruled the FTC failed to prove Meta currently holds monopoly power, noting TikTok and YouTube are now fierce competitors.
The dismissal removed the existential threat. Instagram and WhatsApp represent hundreds of billions in combined market value. Losing them would have fundamentally restructured the company.
The 41-state attorneys general lawsuit over teen mental health remains the primary active threat. Unlike the antitrust case—which turned on market definition and economic theory—the teen safety lawsuit rests on internal documents showing the company knew its products harmed children and chose growth over safeguards. Those documents, leaked by Frances Haugen and corroborated by subsequent whistleblowers, are harder to dismiss.
Meta also won a U.S. case in 2025 regarding AI training on copyrighted books, though new lawsuits in France and the EU continue challenging the company’s use of copyrighted material for AI development.
The pattern holds: when fines can be absorbed and legal battles won through resources and time, Meta proceeds. When compliance would require fundamental changes to the business model—ending targeted advertising, limiting data collection—the company fights or finds jurisdictional workarounds.
The AI Pivot With Unchanged Incentives
In January 2026, as Reality Labs employees received layoff notices, Meta was scaling a different bet. The company positions itself as a leader in “open source” AI through its Llama model family, released publicly with fewer restrictions than competitors like OpenAI or Google.
The “open source” framing serves multiple purposes. It appeals to developers, generates goodwill in the tech community, and positions Meta as a counterweight to what Zuckerberg calls the “closed” approach of rivals. Critics call it “open washing”: the Llama license prohibits use by competitors with more than 700 million monthly active users and restricts certain applications. That’s not open source by traditional definitions. It’s strategic distribution with conditions.
What troubles me more: Meta’s AI development raises identical questions the company faced—and failed to answer—in previous technology shifts. Will algorithms optimized for engagement amplify harmful content in languages Meta doesn’t monitor? Will safety teams be adequately resourced in “low priority” markets? Will internal research showing potential harms be acted upon or suppressed?
Sarah Wynn-Williams alleged in April 2025 testimony that Meta’s Llama model “has contributed significantly to Chinese advances in AI technologies like DeepSeek.” If accurate, it suggests Meta is repeating the pattern: prioritize distribution and market share, address geopolitical and safety implications later if they become unavoidable.
Meta has survived the metaverse collapse, the genocide accusations, Cambridge Analytica, and the FTC breakup attempt. The company is profitable, growing, and positioned as a major player in the AI race. Zuckerberg—who faced calls for resignation after Myanmar and testified before Congress nine times—remains firmly in control with majority voting shares.
The $73 billion Reality Labs loss now looks like what it probably was: a failed bet, absorbed by a company with sufficient scale to survive it. The regulatory fines have been absorbed. The legal threats, except the ongoing teen safety lawsuit, have been neutralized or settled.
What hasn’t changed is the operating philosophy. When growth and safety conflict, growth wins. When evidence of harm emerges, the response is calibrated to regulatory and financial pressure, not human cost. When governments attempt meaningful restrictions, the company demonstrates it will cut off access rather than comply.
As Meta deploys AI across its platforms—using it to generate content, moderate posts, target ads, shape what 3 billion users see daily—these patterns matter. The technology is different. The scale is larger. But the incentives driving decisions remain unchanged.
I’m left with a question I don’t know how to answer: At what point do we stop treating each incident as an isolated failure and recognize we’re looking at the system working exactly as designed? Meta has shown us, repeatedly, what it prioritizes when trade-offs get hard. The question for regulators, users, and the public isn’t whether Meta’s AI will be transformative. It’s whether we’re willing to believe the company will treat warnings about AI harms any differently than it treated warnings about genocide.
The documented evidence suggests we shouldn’t.
