AI’s Scarcest Input Is Reshaping Capital Flows, Supply Chains, and Geopolitics
By Priscilla Chan
OpenAI just locked in unprecedented, long-term access to the world’s scarcest input — AI compute — by striking two landmark infrastructure deals. In late September, Nvidia signed a letter of intent with OpenAI to invest up to $100 billion and deploy at least 10 gigawatts of Nvidia systems. Two weeks later, OpenAI struck a separate pact with another California-based chipmaker, Advanced Micro Devices, Inc. (AMD), to deploy 6 gigawatts of Instinct GPUs on a staged schedule with milestone-based share purchases.
The two deals secure OpenAI long-term access to AI compute: the hardware resources, such as graphics processing units (GPUs), tensor processing units (TPUs), memory, and power, used to train and run AI models. Simply put, an AI model is like a brain that learns from experience by processing data, while AI compute provides the "muscles" and "power" to perform the calculations. The deals not only shifted the AI landscape toward a more diverse competitive field, they also introduced new geopolitical dynamics.
No matter how quickly AI algorithms improve, advances in AI compute dictate the scale and capability of modern AI. Companies like OpenAI, Nvidia, and AMD face an emerging synchronization challenge: aligning the pace of hardware and software innovation. Software innovation often outpaces the slower, more complex cycles of hardware development. The big players have therefore centered their competition and collaboration on the specialized hardware and data centers needed to build advanced AI systems.
The two megadeals aim squarely at this synchronization challenge. AI companies like OpenAI are racing to secure scarce compute resources, and competition among suppliers is intensifying: Nvidia, the long-dominant chipmaker, is now challenged by AMD, which has secured OpenAI’s backing. By partnering with more than one market leader, OpenAI is shifting away from sole dependence on Nvidia; diversifying its supply chain lowers scaling risk and gives the company leverage.
The Nvidia-OpenAI $100 billion deal is mutually beneficial: it gives Nvidia a massive, long-term customer and keeps its hardware central to a leading AI player. For OpenAI, the deal helps finance the enormous cost of building its own data centers to meet surging demand for AI computing.
While markets have reacted swiftly to AI’s rapid proliferation, governments have been slower to adapt. The AI and chip race is not just a technology race but a contest of state power, particularly between the U.S. and China. In October 2022, Washington tightened export controls to slow Beijing’s access to frontier AI chips, turning Nvidia’s leading technology into leverage in its trade dispute with China.
China, historically reliant on Western semiconductors, has responded by accelerating its own AI hardware industry in pursuit of self-sufficiency. The constraints have spurred the Chinese government to invest massively in homegrown chipmakers and to push domestic firms to buy local chips. Rather than deterring China’s ambitions, U.S. export controls have strengthened Beijing’s resolve to build a resilient semiconductor ecosystem.
The impact is evident: Nvidia’s CEO Jensen Huang reported that the company’s China market share for advanced AI accelerators fell from 95% to 50%. Although surging global AI demand offsets U.S. chipmakers’ losses, U.S. policy is fundamentally reshaping investment strategies for the semiconductor industry, further incentivizing China to replace U.S. chips.
U.S. policymakers cannot neglect China's AI market and talent pool. Currently, China commands 50% of the world’s AI researchers and 30% of the technology market. As Beijing bolsters strength in open source models and generative AI, disengaging from the Chinese market risks reducing U.S. influence over how AI technologies diffuse and evolve globally.
Just off the coast of China lies the third major player in the race to AI dominance: Taiwan. The island produces over 90% of the world’s most advanced chips, giving it enormous strategic leverage, and it sits at the intersection of global semiconductor production and geopolitical tension, a combination that heightens the vulnerability of the global chip supply chain.
Amid long-standing China-Taiwan tensions, U.S. military aid and presence in the region play a deterrent role. To Beijing, U.S. military support for Taiwan is interference in China’s sovereignty. Taiwan’s dominance in semiconductor manufacturing gives the island bargaining “chips” with both the U.S. and China in the broader geopolitical standoff. The chip supply chain is no longer merely an industrial question; it has become a central negotiating tool in the emerging arms race over AI compute.
OpenAI’s back-to-back partnerships mark a new playbook for the AI industry: secure compute through circular financing, forge strategic alliances globally, and reduce dependence on any single vendor or jurisdiction. The playbook’s effects ripple well beyond business, reshaping capital flows, supply chains, and geopolitical dynamics.
Whichever nations, and the companies within their borders, can integrate the full “factory” fastest — chips, memory, racks, networking, cooling, and megawatts — will win this race. OpenAI’s agreements with Nvidia and AMD are more than procurement contracts; they are industrial strategies with term sheets attached.
Who will win the AI race? It is too early to tell. Chinese companies are innovating rapidly toward self-sufficiency under hardware constraints, the U.S. is reinforcing its leadership through massive investment and strategic controls, and Taiwan is leveraging its manufacturing base to bolster political stability. The eventual winner will hinge on companies’ ability to secure and stabilize the supply of vital computational infrastructure amid these shifting forces.