Chip race: Microsoft, Meta, Google, and Nvidia battle it out for AI chip supremacy

The rise of generative AI has been powered by Nvidia and its advanced GPUs. As demand far outstrips supply, the H100 has become highly sought after and extremely expensive, making Nvidia a trillion-dollar company for the first time.

It’s also prompting customers like Microsoft, Meta, OpenAI, Amazon, and Google to start working on their own AI processors. Meanwhile, Nvidia and other chipmakers like AMD and Intel are locked in an arms race to release newer, more efficient, and more powerful AI chips.

As demand for generative AI services continues to grow, it’s evident that chips will be the next big battleground for AI supremacy.

  • Leading-edge chipmakers have requested $70 billion in CHIPS Act grants.

    Having received over 600 statements of interest, Commerce Secretary Gina Raimondo acknowledged today that the amount requested is more than twice the $28 billion the government has budgeted to invest. As she put it:

    We have decided to prioritize projects that will be operational by 2030. There are worthy proposals with plans to come online after 2030 that we say no to in order to maximize our impact in this decade... We anticipate that our investments in leading-edge logic chip manufacturing will put us on track to produce roughly 20 percent of the world’s leading-edge logic chips by 2030, up from the zero percent we produce today.

    The CHIPS Act originally had $52 billion in subsidies to boost US semiconductor manufacturing, but it’s not nearly enough to catch up by itself — industry leader Taiwan Semiconductor Manufacturing Company (TSMC) earmarked $44 billion in 2022 just to expand its existing capacity.


  • Emma Roth

    Feb 23

    Nvidia’s role in the AI wave has made it a $2 trillion company


    Nvidia has officially hit a $2 trillion market capitalization, making it the first chipmaker to cross the threshold. It’s also only the third US company to reach a $2 trillion valuation, behind Apple ($2.83 trillion) and Microsoft ($3.06 trillion).

    The California-based company has seen rapid growth over the past year thanks to its leadership in the AI chip market. Nvidia’s market cap reached $1 trillion less than a year ago, and it left both Amazon and Alphabet in the rearview mirror when it became a $1.83 trillion company earlier this month.

  • Wes Davis

    Feb 22

    Microsoft and Intel strike a custom chip deal that could be worth billions


    Intel will be producing custom chips, designed by Microsoft for Microsoft, as part of a deal that Intel says is worth more than $15 billion. Intel announced the partnership during its Intel Foundry event today. Although neither company specified what the chips would be used for, Bloomberg noted today that Microsoft has been planning in-house designs for both processors and AI accelerators.

    “We are in the midst of a very exciting platform shift that will fundamentally transform productivity for every individual organization and the entire industry,” said Microsoft CEO Satya Nadella in the official press release.

  • “Generative AI has hit the tipping point.”

    As Nvidia reports its Q4 2023 earnings, CEO Jensen Huang says:

    Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations.

    That demand, and Nvidia’s dominance in AI chips, powered the company’s record revenue of $60.9 billion for the full year — a 126 percent increase from the year before. Its Q4 revenue of $22.1 billion is up an astounding 265 percent over the same period last year.


  • Nvidia lets Google’s Gemma AI model loose on its GPUs.

    The open-source Gemma models are optimized for “the installed base of over 100 million Nvidia RTX GPUs” in PCs around the world, in addition to Nvidia’s ubiquitous AI chips like the H100.

    The models will also be part of Nvidia’s Chat with RTX demo, which lets AI models run locally and access users’ files to generate answers to prompts.


  • Intel announces bleeding-edge Intel 14A, targeting 2027 with High-NA EUV.

    Intel has said 2025 is the year it leads the world in chips again (TSMC begs to differ).

    Beyond that lies Intel 14A — the company’s smallest node yet, thanks to High-NA EUV. But Intel is hedging its bets with a tick-tock successor to 18A, where the P in “18A-P” stands for a performance bump. How big a leap will 14A be? Intel didn’t say. “We think this is the next leadership technology and we don’t want to give anyone something to shoot at,” Intel VP Craig Orr tells me.


  • SoftBank founder Masayoshi Son wants $100 billion for a new AI chip venture.

    It’s code-named Izanagi and is designed to take on Nvidia, sources tell Bloomberg. OpenAI’s Sam Altman had already been courting SoftBank and other companies to potentially bankroll a chipmaking arm — but the Izanagi project is reportedly not connected to that effort.

    If successful, Izanagi could become the largest investment in AI since Microsoft’s rumored $10 billion bet on OpenAI.


  • Nvidia is now worth more than Amazon and Alphabet


    Nvidia surpassed Alphabet in market capitalization on Wednesday, only a day after it overtook Amazon. Bloomberg reports that the chipmaker is now worth $1.83 trillion, beating the Google owner’s $1.82 trillion market cap by a hair. That makes Nvidia the world’s fourth most valuable company in the wake of the AI boom, after Microsoft ($3.04T), Apple ($2.84T), and Saudi Aramco. The company currently makes the H100 chip, which powers most of the LLMs in use today, including OpenAI’s ChatGPT and most of the AI projects at Microsoft, Meta, and Amazon.

    The world’s largest tech companies are locked in an AI chip arms race, each hoping to build its own chips and topple Nvidia’s virtual monopoly. Ironically, as Bloomberg notes, Nvidia’s top AI chip sales come from those same companies. The Santa Clara-based firm is close to releasing a more powerful AI chip, the H200, which has more memory capacity and bandwidth than its predecessor. Earlier this month, Reuters reported that Nvidia has set up a unit devoted to helping other companies make their own custom AI chips — a market reportedly worth up to $30 billion. That means even if its customers opt to build their own AI chips, Nvidia could still get a piece of the action.

  • AI expert Andrej Karpathy confirms he’s left OpenAI.

    As first reported by The Information, which says he’d been working on an AI assistant. Karpathy’s exit comes a year after he rejoined OpenAI, where he was a founding member, with a stint leading Tesla’s AI efforts in between.

    Karpathy’s path has taken him from Google’s If I Had Glass competition to obtaining a Vision Pro, and tonight, he tweeted, “My immediate plan is to work on my personal projects and see what happens.”


  • Biden administration says it's investing $5 billion in research to boost US semiconductor manufacturing.

    The outlay is part of the CHIPS and Science Act signed in 2022. The CHIPS Manufacturing USA Institute will see investments of “at least $200 million” in experiments to lower the cost of US-based chip production and development.

    And around $300 million will fund research into advanced substrate packaging, which can help improve performance, avoid shortages, and enable new uses for semiconductors, like chips that process data with light instead of electrons.


  • Nvidia plans to help companies make custom versions of its expensive AI chips.

    Reuters reports on Nvidia’s new unit that will design custom AI chips, even as its customers plan for a future less reliant on Nvidia’s dominant H100. Those chips start at around $16,000 each and power the majority of large language models in use today, demand that has pushed Nvidia’s market value well beyond $1 trillion.

    Nvidia representatives have reportedly met with OpenAI, Meta, Google, Amazon, and Microsoft, as well as businesses in other industries like gaming and telecom, to pitch the company’s “bespoke” AI chip services.


  • The latest rumor about Sam Altman’s AI chip-building dream: it could require up to $7 trillion.

    For context, here’s how the Wall Street Journal describes OpenAI’s once-again leader’s multitrillion-dollar effort to “reshape the global semiconductor industry”:

    Such a sum of investment would dwarf the current size of the global semiconductor industry. Global sales of chips were $527 billion last year and are expected to rise to $1 trillion annually by 2030.

    The money is needed to fuel AI’s growth and solve the scarcity of expensive AI chips required to train the large language models that underpin systems like ChatGPT. According to the WSJ, Altman is pitching a chip-making partnership to investors from the UAE, SoftBank CEO Masayoshi Son (again), and TSMC.


  • Huawei just retasked a factory to prioritize AI over its bestselling phone


    Reuters reports that Huawei will focus on increasing the manufacturing of its AI chip, the Ascend 910B, at the expense of production of its Mate 60 phones in at least one facility. 

    Huawei makes both its Ascend AI chip and the Kirin chip that powers the Mate 60 in the same facility. However, production at the plant has been low, people familiar with the matter told Reuters, so the company now plans to prioritize the AI chip. Domestic demand for Ascend chips, which are used to train AI models, has been growing. By delaying production of Mate 60 chips, Huawei can focus on increasing the number of usable, sellable chips the facility turns out. The Mate 60 helped Huawei beat Apple’s phone sales in China in 2023, writes the South China Morning Post, which makes slowing its production an interesting bet on AI’s importance to the company.

  • Meta’s reportedly working on a new AI chip it plans to launch this year.

    Reuters reports that “Artemis” will complement the hundreds of thousands of Nvidia H100 chips Meta has bought. Like the MTIA chip Meta announced last year, Artemis is focused on inference — running a trained model to make predictions — rather than training AI models, and it’s already a little late to the game.

    Google introduced a second-generation TPU in 2017 that could do both, and so can Microsoft’s recently announced Maia 100. And AMD claims its MI300X chip performs better than H100s on the inference side.
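
    If the training/inference split is unfamiliar, here’s a minimal sketch of the difference in PyTorch, using a toy model and dummy data (purely illustrative, not Meta’s or anyone else’s actual workload): training runs the model forward, computes a loss, and updates the weights, while inference just runs the trained model forward.

        import torch
        import torch.nn as nn

        model = nn.Linear(128, 10)  # toy stand-in for a production model
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()

        inputs = torch.randn(32, 128)         # dummy batch of inputs
        labels = torch.randint(0, 10, (32,))  # dummy targets

        # Training: forward pass, loss, backward pass, weight update.
        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()

        # Inference: forward pass only; no gradients, no weight updates.
        model.eval()
        with torch.no_grad():
            predictions = model(inputs).argmax(dim=1)

    The distinction matters for hardware: an inference-focused chip like Artemis only needs to run that forward pass at scale, while training chips also have to handle the backward pass and weight updates.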


  • AMD says its MI300 AI accelerator is “now tracking to be the fastest revenue ramp of any product in our history.”

    While that doesn’t quite tell us how well AMD’s competing with Nvidia in the AI gold rush, AMD CEO Lisa Su says she’s not sitting back: “The demand for compute is so high that we are seeing an acceleration of the roadmap generations here.”

    She confirmed Zen 5 CPUs are still on track for this year, with server chips coming in the second half. Acer, Asus, HP, Lenovo, MSI, and “other large PC OEMs” will begin putting Ryzen 8000 notebooks on sale in February.


  • Nvidia’s AI partners are also its competition.

    Nvidia powers most of the AI projects at Microsoft, OpenAI, Amazon, and Meta, but those companies are also trying to lessen their dependence on its limited supply. The New York Times explains that they want to make switching between Nvidia’s chips and others (including their own) “as simple” as possible.

    As The Verge reported, OpenAI CEO Sam Altman is interested in building chips. Microsoft’s AI-focused chip Maia 100 is expected to arrive this year, and Amazon announced the latest version of its Trainium chip.


  • Wes Davis

    Jan 20

    OpenAI CEO Sam Altman is talking to TSMC about fabricating AI chips.

    That’s according to a Financial Times story this morning, putting a name to yesterday’s Bloomberg report on Altman’s search for investors to realize an AI chip venture.

    TSMC, or Taiwan Semiconductor Manufacturing Co., is the massive chip fabricator responsible for chips like those you’d find in Apple’s laptops and phones, along with many Arm and AMD devices.


  • OpenAI CEO Sam Altman is still chasing billions to build AI chips


    A new report from Bloomberg says that once-again OpenAI CEO Sam Altman’s efforts to raise billions for an AI chip venture are aimed at building a “network of factories” for fabrication that would stretch around the globe, in partnership with unnamed “top chip manufacturers.”

    A major cost and bottleneck for running AI models is having enough chips to handle the computations behind bots like ChatGPT or DALL-E as they answer prompts and generate images. Nvidia’s value rose above $1 trillion for the first time last year, partly thanks to its virtual monopoly: GPT-4, Gemini, Llama 2, and other models all depend heavily on its popular H100 GPUs.

  • Emma Roth

    Dec 14, 2023

    Intel’s Core Ultra CPUs are here — and they all come with silicon dedicated to AI


    Intel is taking the wraps off its next generation of CPUs. During its AI Everywhere event on Thursday, Intel revealed all the details on the Core Ultra — no longer Core “i” — mobile processors that will be part of its Meteor Lake lineup, promising better power efficiency and performance thanks to a new setup that splits tasks across different chiplets.

    Intel says its Core Ultra 7 165H chip offers an 11 percent improvement in multithreaded performance compared to competing laptop processors like the AMD Ryzen 7 7840U, Qualcomm Snapdragon 8cx Gen 3, and Apple’s in-house M3 chip. It also offers a 25 percent reduction in power consumption compared to the previous Intel Core i7-1370P and draws up to 79 percent less power than AMD’s Ryzen 7 7840U “at the same 28W envelope for ultrathin notebooks.”

  • Emilia David

    Dec 6, 2023

    AMD releases new chips to power faster AI training


    AMD wants people to remember that Nvidia isn’t the only company selling AI chips. It has announced the availability of new accelerators and processors geared toward running large language models, or LLMs.

    The chipmaker unveiled the Instinct MI300X accelerator and the Instinct MI300A accelerated processing unit (APU), which the company said can both train and run LLMs. The company said the MI300X has 1.5 times more memory capacity than the previous MI250X. Both new products have more memory capacity and are more energy-efficient than their predecessors, said AMD.

  • Alex Heath

    Dec 4, 2023

    The GPU haves and have-nots.

    This chart from Omdia Research estimating Nvidia’s largest customers this year has been making the rounds in my social media feeds.

    As I wrote in an earlier issue of Command Line, these H100s are essentially the tech industry’s new gold, since they are the preferred workhorse for powering generative AI. The gap in shipment volume between Meta, Microsoft, and everyone else is quite something, and it tracks with what I’ve heard from sources in recent months.


    [Chart: Nvidia H100 GPU shipments this year, by customer. Source: Omdia Research]
  • Wes Davis

    Nov 19, 2023

    About that new venture.

    Former (and future?) OpenAI CEO Sam Altman has been pitching custom, Nvidia-rivaling tensor processing unit (TPU) chips, according to a report in The New York Times. He has reportedly also sought funding from SoftBank founder Masayoshi Son for his rumored AI hardware collaboration with Jony Ive.

    Today, Bloomberg reported that the TPU project is code-named “Tigris” and that “a number of prominent venture firms,” along with Microsoft, are ready to back, or are interested in backing, Altman’s future projects.


  • Tom Warren

    Nov 15, 2023

    Microsoft is finally making custom chips — and they’re all about AI


    The rumors are true: Microsoft has built its own custom AI chip that can be used to train large language models and potentially avoid a costly reliance on Nvidia. Microsoft has also built its own Arm-based CPU for cloud workloads. Both custom silicon chips are designed to power its Azure data centers and ready the company and its enterprise customers for a future full of AI.

    Microsoft’s Azure Maia AI chip and Arm-powered Azure Cobalt CPU are arriving in 2024, on the back of a surge in demand this year for Nvidia’s H100 GPUs, which are widely used to train and operate generative image tools and large language models. Demand for these GPUs is so high that some have fetched more than $40,000 on eBay.

  • Jacob Kastrenakes

    Nov 13, 2023

    Nvidia is launching a new must-have AI chip — as customers still scramble for its last one


    Nvidia is introducing a new top-of-the-line chip for AI work, the HGX H200. The new GPU upgrades the wildly in-demand H100 with 1.4x more memory bandwidth and 1.8x more memory capacity, improving its ability to handle intensive generative AI work.

    The big question is whether companies will be able to get their hands on the new chips or whether they’ll be as supply constrained as the H100 — and Nvidia doesn’t quite have an answer for that. The first H200 chips will be released in the second quarter of 2024, and Nvidia says it’s working with “global system manufacturers and cloud service providers” to make them available. Nvidia spokesperson Kristin Uchiyama declined to comment on production numbers.

  • Jay Peters

    May 19, 2023

    Meta is working on a new chip for AI


    Meta is building its first custom chip specifically for running AI models, the company announced on Thursday. As Meta increases its AI efforts — CEO Mark Zuckerberg recently said the company sees “an opportunity to introduce AI agents to billions of people in ways that will be useful and meaningful” — the chip and other infrastructure plans revealed Thursday could be critical tools for Meta to compete with other tech giants also investing significant resources into AI.

    Meta’s new MTIA chip, which stands for Meta Training and Inference Accelerator, is its “in-house, custom accelerator chip family targeting inference workloads,” Meta VP and head of infrastructure Santosh Janardhan wrote in a blog post. The chip apparently provides “greater compute power and efficiency” than CPUs and is “customized for our internal workloads.” With a combination of MTIA chips and GPUs, Janardhan said that Meta believes “we’ll deliver better performance, decreased latency, and greater efficiency for each workload.”
