When I first saw the numbers, I tried to rationalize them. NVIDIA spends roughly $6,400 to manufacture a B200 GPU and sells it for $30,000 to $40,000. That’s an 80% gross margin on what’s rapidly becoming as essential as electricity for the global economy.
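The arithmetic behind that figure is simple to check. A minimal sketch, using the article's cost and price estimates (these are reported estimates, not audited numbers):

```python
# Gross margin implied by the article's figures: an estimated ~$6,400
# manufacturing cost against a $30,000-$40,000 sale price.
def gross_margin(price: float, cost: float) -> float:
    """Gross margin as a fraction of the sale price."""
    return (price - cost) / price

COST = 6_400  # estimated unit manufacturing cost (USD), per the article
for price in (30_000, 40_000):
    print(f"${price:,}: {gross_margin(price, COST):.0%} gross margin")
# prints "$30,000: 79% gross margin" and "$40,000: 84% gross margin"
```

At the low end of the price range the margin is just under 80%; at the high end it is 84%.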
My instinct was to reach for the usual defenses—R&D costs, supply constraints, the natural economics of cutting-edge technology. Then I stopped myself and asked a harder question: Why was I making excuses for a company extracting these margins on infrastructure that determines which nations can participate in the defining technology of our era?
This isn’t a story about profit margins. It’s about three interconnected moral failures that should concern anyone building or leading in the AI space: monopolistic extraction that’s creating technological apartheid, partnerships with authoritarian regimes that reveal corporate values in action, and a pattern of behavior that consistently chooses profit over ethics, legality, and human dignity.
The decisions we make now about who controls AI infrastructure will shape power dynamics for decades. We need to look clearly at what’s actually happening.
Essential infrastructure priced for exclusion
The $30,000 price tag for a B200 GPU isn’t just expensive—it’s exclusionary by design, and the human consequences are stark and measurable.
In Kenya, the price of a single GPU represents 75% of GDP per capita. In Senegal, it’s 69%. These aren’t edge cases. University of Auckland and University of Otago economists warn that 60% of developing nations lack the digital infrastructure required to deploy AI systems at scale, risking “AI poverty traps” that could lock entire economies into permanent technological dependence.
The numbers tell a brutal story. Africa holds less than 1% of global data center capacity despite housing 18% of the world’s population. The continent has just over 100 data centers total. Meanwhile, AI is expected to contribute $13 trillion to global GDP by 2030, but this growth will be captured almost entirely by wealthier economies with advanced infrastructure.
Internet connectivity stands at just 27% in low-income countries compared to 93% in high-income nations. Fixed broadband costs 31% of monthly income in low-income countries versus 1% in high-income countries. Africa would need $2.6 trillion in investment by 2030 just to bridge the infrastructure gap.
We’re watching the emergence of technological colonialism, where entire continents are locked out of the defining technology of our era—not by capability, but by monopolistic pricing. The UNDP warns that unmanaged AI could reverse the long trend of narrowing development inequalities between countries.
This matters for business strategy. When you’re building AI systems, you’re building on infrastructure that’s creating structural exclusion at a global scale. That’s not just an ethical problem—it’s a strategic risk. Markets you might serve, talent you might hire, innovations that might emerge—all constrained by who can afford access to the foundational layer.
Meanwhile, 118 countries—mostly in the Global South—are absent from major AI governance discussions even as AI is expected to reach $4.8 trillion in market value by 2033. The people most affected by these infrastructure decisions have no seat at the table.
When profit meets principle in Riyadh
The economics are only half the story. Who NVIDIA chooses to partner with reveals even more about their values in action.
NVIDIA announced partnerships with HUMAIN, a subsidiary of Saudi Arabia’s Public Investment Fund, to deploy up to 5,000 Blackwell GPUs initially, with several hundred thousand more planned over five years. This represents an estimated $15-20 billion in revenue flowing through an entity directly controlled by Crown Prince Mohammed bin Salman.
That name should matter to you. U.S. intelligence agencies assessed that Crown Prince Mohammed bin Salman approved an operation to capture or kill Saudi journalist Jamal Khashoggi; by November 2018, the CIA had concluded that he ordered the assassination outright. Khashoggi was lured into the Saudi Consulate in Istanbul, where he was killed and dismembered.
The Public Investment Fund—NVIDIA’s direct partner—has facilitated and benefited directly from serious human rights abuses linked to the Crown Prince, including the 2017 anti-corruption crackdown that consisted of arbitrary detentions, abusive treatment of detainees, and extortion of property from Saudi Arabia’s elite.
The regime’s record is documented and extensive. Saudi authorities executed 196 people in 2022—the highest annual total recorded in the country in three decades—and at least 172 people in 2023. Between March 2022 and June 2023, Saudi border guards killed hundreds of Ethiopian migrants and asylum seekers who tried to cross the southern border with Yemen—killings that would amount to crimes against humanity if committed as part of Saudi government policy.
Saudi authorities forcibly evicted members of the Huwaitat community from the planned NEOM area, arrested those who protested their evictions, and killed one protesting resident. Two residents received sentences of 50 years in prison, and three received death sentences for resisting forced evictions. People have been sentenced to decades in prison for tweets. Salma al-Shehab and Nourah al-Qahtani were sentenced to 34 and 45 years respectively based solely on their peaceful social media activity, with al-Shehab’s posts related to support for women’s rights.
Jensen Huang stood next to Mohammed bin Salman and praised his “incredible vision.” The same man U.S. intelligence concluded ordered a journalist’s dismemberment. That’s not pragmatic business—that’s active complicity in legitimizing brutality for profit.
Here’s the strategic question that should concern every technical leader: What does it mean for AI governance when the company controlling 70-95% of AI infrastructure has no qualms about empowering authoritarian surveillance states? The AI capabilities NVIDIA is providing aren’t just for economic development—they will inevitably enhance the regime’s surveillance and control capabilities.
The PIF uses investments in high-profile sports and entertainment events to whitewash Saudi Arabia’s human rights record. NVIDIA’s AI partnership serves the same purpose—helping legitimize MBS internationally while generating billions in revenue.
A pattern of choices over time
Maybe you’re thinking: “That’s geopolitics—companies have to work with imperfect actors.” I understand that instinct. Except NVIDIA’s own track record suggests this isn’t an anomaly. It’s a pattern visible across two decades of corporate decisions.
In 2008, laptop users worldwide reported screen distortion and outright failures caused by defective NVIDIA GPUs whose solder joints cracked under thermal stress, affecting HP, Dell, and Apple products. NVIDIA refused to acknowledge responsibility, and Apple bore the full cost of replacement parts itself. The episode ended NVIDIA’s partnership with Apple. When faced with defective products harming customers, they chose denial over accountability.
In 2014, NVIDIA released the GTX 970 graphics card advertised with 4GB of memory. Users discovered that only 3.5GB was fast memory while the remaining 0.5GB was about one-seventh the speed, causing massive performance hits. This was deliberate misrepresentation of specifications. It led to 15 different class-action lawsuits that were grouped together, with NVIDIA settling for only $30 per affected customer—a cost of doing business, not a deterrent.
In 2018, NVIDIA launched the GeForce Partner Program, accused of anticompetitive practices that forced partners to choose between NVIDIA branding or competitor products. They canceled it only after significant backlash. The FTC later sued to block NVIDIA’s proposed $40 billion acquisition of ARM over concerns about vertical integration and monopoly power, and NVIDIA abandoned the deal. Regulators recognized it would have given NVIDIA near-total control over AI and computing infrastructure.
In 2020, NVIDIA told YouTube tech reviewer Hardware Unboxed that it would no longer supply review units because they were focusing on rasterization instead of ray tracing, saying they would revisit this decision should the reviewer’s editorial direction change. This was a blatant attempt to control the narrative by punishing honest reviewers. Currently, NVIDIA is allegedly trading access to its new GeForce RTX 5060 GPU for friendly reviews, offering certain media outlets early access under the premise of publishing preview content while denying access to qualified reviewers who wouldn’t promise positive coverage.
The SEC charged NVIDIA with inadequate disclosures about cryptomining’s impact on gaming revenue in fiscal 2018, finding the company failed to disclose that cryptomining was a significant element of material revenue growth. NVIDIA agreed to pay a $5.5 million penalty without admitting or denying wrongdoing—another slap on the wrist for misleading investors.
Most recently, NVIDIA is alleged to have downloaded astronomical amounts of video from YouTube and Netflix without authorization to train its Cosmos AI model—the equivalent of 80 years of video per day—using virtual machines that rotated IP addresses to avoid being blocked. Internal documents show NVIDIA employees raised ethical and legal concerns about the practice but were reportedly told by managers that it had been approved at the highest levels of the company. NVIDIA also used HD-VG-130M, a dataset whose license does not permit commercial use.
Former employees describe working seven days a week with hours extending until 1-2am, with a toxic work culture characterized by frequent arguments, shouting, and heated disputes during meetings. One former marketing employee reported attending up to ten meetings a day with more than 30 people, with meetings often involving shouting and fighting. NVIDIA’s 3,000%+ stock surge since 2019 has made employees millionaires, but they’re often too busy at the office to enjoy it, with few willing to leave if it means forfeiting unvested shares—buying complicity with wealth.
Despite public commitments, NVIDIA’s own diligence processes revealed supplier non-compliance issues including hiring fees, document and passport retention, excessive working hours, and penalties for leaving employers prior to specified time periods—all indicators of forced labor conditions in their supply chain.
At every decision point—consumer fraud, defective products, anticompetitive practices, copyright theft, labor exploitation, partnership with authoritarian regimes—NVIDIA chooses extraction over ethics. The Saudi partnership isn’t an outlier. It’s the pattern made explicit.
Regulation arrives after the damage compounds
Multiple governments are investigating, but the question is whether action will arrive in time to matter.
The U.S. Department of Justice has issued subpoenas investigating whether NVIDIA’s 70-95% market dominance constitutes monopoly power used anticompetitively. Regulators are concerned that NVIDIA could be implementing illegal tying agreements by promoting exclusive use of its chips and complementary AI services.
China’s State Administration for Market Regulation found that NVIDIA violated terms of the regulator’s conditional approval of its acquisition of Israeli chip designer Mellanox Technologies. French regulators have reportedly raided NVIDIA offices gathering evidence of anticompetitive practices.
Critics argue that CUDA creates a closed, proprietary system that makes it difficult for competitors to challenge NVIDIA’s preeminence, as rebuilding applications for other processors requires costly code rewrites. This isn’t just technical lock-in—it’s strategic moat-building that prevents market competition from functioning.
Even if regulators act decisively, the damage compounds daily. Every month of 80% margins deepens global inequality. Every new authoritarian partnership entrenches surveillance infrastructure. Every researcher priced out of GPU access represents innovation that never happens. Regulation that arrives after monopoly consolidation is like applying a bandage after the patient has bled out.
NVIDIA has supreme pricing power right now, with H100 gross margins exceeding 85%, because buyers have no alternative but to accept whatever terms NVIDIA sets. This isn’t a market—it’s a chokepoint.
What this means for how we build and lead
This isn’t just about one company behaving badly. It’s about three interconnected strategic realities that affect how we think about building with AI:
Infrastructure determines participation. Who controls AI infrastructure determines who can participate in the AI economy. When you’re making build-versus-buy decisions, evaluating cloud providers, or planning technical roadmaps, you’re working within constraints created by monopolistic pricing. Those constraints aren’t neutral—they encode values and priorities that may not align with yours.
Ethics become optional without competition. When monopolies face no meaningful competitive pressure, ethical behavior becomes optional. NVIDIA’s pattern of choosing profit over principle at every decision point isn’t aberrant—it’s rational within a system that rewards monopolistic extraction and punishes nothing else. Markets are supposed to discipline bad actors. This market can’t.
Democratic deficit in governance. The company shaping global AI access through infrastructure control operates without meaningful accountability to the 118 countries absent from AI governance discussions. The decisions being made now about AI infrastructure will determine power dynamics for decades, and most of the world has no voice in those decisions.
I’ve spent my career bridging technical and strategic domains, trying to connect engineering decisions to business outcomes and human impact. What worries me most isn’t NVIDIA’s greed—corporations are designed to maximize profit. It’s our collective willingness to accept “that’s just business” as an adequate response to structural injustice.
If we wouldn’t accept one company controlling 90% of electricity infrastructure while partnering with murderous regimes and exploiting workers, why do we accept it for AI? The usual answer is that AI chips aren’t essential infrastructure. That answer becomes less convincing every month.
What accountability might look like in practice
I’m not interested in naive idealism or performative outrage. I’m interested in what changes are actually possible and what they would require.
Antitrust enforcement with real teeth. Not $5.5 million settlements that function as operating expenses. Forced licensing of the CUDA ecosystem to enable genuine competition. Regulated pricing for infrastructure recognized as essential. Breakup considerations if monopoly power persists despite other interventions. These aren’t radical proposals—they’re how we’ve historically handled natural monopolies on essential infrastructure.
International AI infrastructure investment. Public funding for alternative chip architectures. Open-source AI accelerator projects that aren’t controlled by a single vendor. Direct support for Global South infrastructure that doesn’t flow through monopolistic chokepoints. The market has failed to provide competitive alternatives at scale. Public investment can change that.
Corporate accountability mechanisms with consequences. Mandatory human rights due diligence before partnerships with authoritarian regimes. Supply chain transparency requirements with actual enforcement. Penalties that hurt enough to change behavior—calculated as percentages of revenue, not fixed amounts that function as the cost of doing business.
Support for alternative development models. AMD, Intel, and custom silicon development need to become viable competitive options, not just theoretical alternatives. Open hardware initiatives need funding and adoption. Regional AI infrastructure cooperatives could provide alternatives to corporate monopolies. None of this happens without deliberate support from both public and private sectors.
The radicalism isn’t in these proposals. The radicalism is pretending AI chips aren’t essential infrastructure while one company controls access and partners with authoritarian regimes.
The choice we’re making right now
The markup from $6,400 to $30,000 isn’t just economics—it’s a statement about values. It says that access to transformative technology should be determined by ability to pay monopolistic prices, not by capability or need. It says that partnering with regimes that murder journalists and sentence people to decades in prison for tweets is acceptable if the revenue is large enough. It says that patterns of consumer fraud, copyright theft, and labor exploitation are just the cost of doing business.
Every time I use AI tools built on NVIDIA infrastructure—which is most of them—I’m implicated in this system. That’s uncomfortable. Discomfort is the price of awareness, and awareness is the prerequisite for change.
We’re at an inflection point. The decisions we make now about AI infrastructure will shape which nations can participate in the AI economy, which innovations can happen, and whether foundational technology serves broad human flourishing or concentrated profit extraction.
This matters for your technical strategy. When you choose cloud providers, evaluate GPU options, or plan infrastructure investments, you’re making choices within a system that encodes specific values. Understanding those values—and their consequences—is part of building responsibly.
This matters for your organizational ethics. If your company has stated commitments to human rights, sustainability, or responsible AI, those commitments exist in tension with reliance on infrastructure built through the patterns I’ve documented. That tension is worth examining explicitly rather than leaving it implicit.
This matters for the future we’re building together. Technology doesn’t have inherent values—people embed values through the choices they make about how technology gets built, priced, and governed. Right now, the values being embedded are clear: monopolistic extraction, partnership with authoritarian regimes, and profit over principle.
You can pressure regulators—specific agencies like the DOJ Antitrust Division and FTC have active investigations. You can support alternative AI infrastructure initiatives through procurement decisions and partnership choices. You can demand transparency from organizations about where their AI infrastructure comes from and who profits from it. You can refuse to accept “legal but unethical” as an adequate standard for essential infrastructure.
The moral cost of silicon is measured in excluded nations, empowered authoritarians, and a future where access to transformative technology is determined by wealth rather than need. That cost is too high. And unlike NVIDIA’s margins, it’s not negotiable.
