Edge of Power

The Untold Story Behind Nvidia's $20B Groq Deal: Who's Really Holding the Cards

From a $10M seed to the largest chip acquisition ever: the investor chain reveals what Jensen Huang really bought

Edge Of Power
Dec 26, 2025

Nvidia’s $20 billion acquisition of Groq assets most closely resembles Facebook’s $19 billion purchase of WhatsApp for one simple reason: in both cases, an unprofitable startup with unclear prospects is being acquired. We’ll dig into why Jensen Huang is paying an astronomical sum, approximately 40x forward revenue, for a company that couldn’t even get its chips into mass production.

This deal has two stories. The first is economic: why pay 40x revenue for a failing hardware company? The second is political: who's on the cap table, and what does that reveal about power consolidation in AI infrastructure?

What is Groq

Groq is a California-based startup founded in 2016 by former Google engineers who worked on the Tensor Processing Unit (TPU). The company was led by Jonathan Ross, one of the key architects of the first-generation TPU. Over eight years, Groq raised $1.8 billion in funding and reached a valuation of $6.9 billion in September 2024.

The company’s flagship product is the LPU (Language Processing Unit), a specialized chip for AI inference. Unlike Nvidia’s GPUs, which dominate model training, the LPU is optimized for running already-trained models (inference), exactly what’s needed for ChatGPT, Claude, and other real-time AI applications.

Key technical distinction: The LPU uses a deterministic architecture with large amounts of embedded SRAM memory instead of external HBM, providing predictable latency and, according to Groq, a 10x advantage in speed and energy efficiency compared to GPUs. The company demonstrated generation speeds of up to 1,345 tokens per second on Llama-3 8B, impressive figures for real-time applications.
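A quick back-of-envelope check puts that throughput figure in latency terms. The tokens-per-second number is from the article; the 500-token reply length below is an illustrative assumption, not a Groq benchmark:

```python
# Back-of-envelope: convert Groq's published throughput figure
# (1,345 tokens/s on Llama-3 8B, per the article) into per-token
# latency and end-to-end time for a hypothetical chat reply.
tokens_per_second = 1345
per_token_ms = 1000 / tokens_per_second          # milliseconds per generated token
response_tokens = 500                            # assumed reply length (illustrative)
response_seconds = response_tokens / tokens_per_second

print(f"{per_token_ms:.2f} ms/token")            # ~0.74 ms/token
print(f"{response_seconds:.2f} s for a {response_tokens}-token reply")
```

At that rate a full paragraph-length answer streams back in well under half a second, which is why the figure matters for real-time applications.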

Financial Reality: Why 40x Revenue is Insane

But behind the impressive technology lies a rather modest business reality:

2024-2025 Revenue:

  • Target revenue for 2024-2025: ~$500 million

  • This is down from an initial projection of $2+ billion due to data center scaling challenges

  • Actual 2024 revenue: ~$90 million

Deal Math:

  • $20B / $500M = 40x forward revenues

  • $20B / $90M = 222x trailing 2024 revenues

For context: typical tech acquisition multiples are in the 5-15x revenue range. Nvidia is paying 3-8 times the norm.
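The multiples above follow directly from the figures in the article; a minimal sketch of the arithmetic:

```python
# Deal math using the revenue figures cited in the article.
deal_value = 20e9          # $20B purchase price
forward_revenue = 500e6    # 2024-2025 target revenue
trailing_revenue = 90e6    # actual 2024 revenue

forward_multiple = deal_value / forward_revenue    # 40x forward
trailing_multiple = deal_value / trailing_revenue  # ~222x trailing

# Typical tech-acquisition revenue multiples run roughly 5-15x.
typical_low, typical_high = 5, 15
premium_low = forward_multiple / typical_high      # vs. top of typical range
premium_high = forward_multiple / typical_low      # vs. bottom of typical range

print(f"{forward_multiple:.0f}x forward, {trailing_multiple:.0f}x trailing")
print(f"{premium_low:.1f}x-{premium_high:.1f}x the typical range")
```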

Where the $500M projection comes from:

  1. Saudi Arabia: $1.5 billion commitment to deploy LPU infrastructure in a new data center in Dammam, the company’s largest contract

  2. GroqCloud: The cloud platform gained 356,000 developers in its first year (now over 2 million)

  3. Enterprise contracts: 75% of Fortune 100 companies have accounts on the platform

But here’s the problem: $2 billion turned into $500 million in just a few months. Why?

Problem #1: Manufacturing Hell

Groq ran into the classic hardware-startup problem: the gap between promise and scale:

  1. Dependence on GlobalFoundries: All chips manufactured on a 14nm process with a single supplier

  2. No proprietary fabs: Like Nvidia, Groq is fabless, but without Nvidia’s priority access to TSMC capacity

  3. Samsung problems: Contract for 4nm chip production in Texas was signed in 2023, but no serial deliveries materialized

  4. Data center shortage: The company can’t find enough capacity to house its chips

This is exactly why revenue projections collapsed from $2B to $500M: Groq physically can’t sell more because it can’t produce and deploy the hardware.

Problem #2: Predators at the Gates

Jonathan Ross, Groq’s founder, publicly accused Nvidia of predatory practices. According to WSJ, potential customers were afraid to work with Nvidia competitors for fear of losing access to scarce GPUs.

“A lot of people that we meet with say that if Nvidia were to hear that we were meeting, they would disavow it,” he said. “The problem is you have to pay Nvidia a year in advance, and you may get your hardware in a year or it may take longer, and it’s, ‘Aw shucks, you’re buying from someone else and I guess it’s going to take a little longer.’”

This created a vicious cycle:

  • Customers depend on Nvidia GPUs for training

  • They’re afraid to buy Groq for inference, fearing they’ll lose GPU supplies

  • Groq can’t scale without customers

  • Nvidia dominates both markets

Now that threat is eliminated: Ross and the entire team are moving to Nvidia. Game over.

The Technology Story: Why Pay 40x Revenue?

1. The War for the $255 Billion Inference Market

By 2030, the AI inference market is projected to reach $255 billion, roughly 5x the training market. Nvidia dominates training, but inference is a completely different game.

Key difference:

  • Training: Need raw performance, can wait hours

  • Inference: Need predictability, <10ms latency, energy efficiency

GPUs excel at training, but for inference they’re overkill and inefficient. Groq’s LPUs are built specifically for inference and, by the company’s own benchmarks, outperform GPUs by up to 10x in speed and power consumption for these tasks.

Bank of America analyst Vivek Arya noted: the deal “implies Nvidia’s recognition that while GPUs dominated AI training, the rapid shift toward inference may require more specialized chips”.

2. Neutralizing an Existential Threat

If Groq had managed to:

  • Establish production on Samsung 4nm

  • Scale GroqCloud to millions of developers

  • Establish itself in the inference market before Nvidia

...then in 3-5 years, Groq could have become to Nvidia what the iPhone became to Nokia. A technologically superior product at the right time with the right strategy.

Nvidia paid $20 billion to guarantee this doesn’t happen.

3. World-Class Talent Acquihire

This isn’t just a technology purchase. Nvidia gets:

The team:

  • Jonathan Ross, creator of Google’s first-generation TPU

  • Sunny Madra, Groq’s president

  • Dozens of engineers who know inference better than anyone in the world

IP and assets:

  • Non-exclusive license to LPU technology

  • All of Groq’s physical assets (except the GroqCloud business)

  • Patents on deterministic architecture

BUT doesn’t buy:

  • Groq the company itself (it will continue operating)

  • Exclusive rights to IP

  • GroqCloud (will continue operating under Simon Edwards)

This is a classic “reverse acquihire,” a structure designed to sidestep antitrust review. Microsoft paid roughly $650M to license Inflection’s models and hire most of its team, CEO included; Google cut a similar deal for Character.AI’s founders. Nvidia just raised the stakes.

4. Architectural Integration: GPU + LPU = Dominance

In an email to employees, Jensen Huang wrote:

“We plan to integrate Groq’s low-latency processors into the NVIDIA AI factory architecture, extending the platform to serve an even broader range of AI inference and real-time workloads.”

Analysts envision a future where GPUs and LPUs work together in the same rack via NVLink:

  • GPU - for massive parallel training workloads

  • LPU - for ultra-low latency real-time inference

Imagine a data center where a model trains on GPUs, then deploys instantly to LPUs for inference: the complete cycle on a single Nvidia platform. That is a monopoly on the entire AI stack.
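The division of labor the analysts describe can be sketched as a simple workload router. Everything below is hypothetical and illustrative; none of these names reflect a real NVIDIA or Groq API:

```python
# Hypothetical sketch of the "GPU + LPU in one rack" idea: route
# training jobs to GPUs and latency-sensitive inference to LPUs.
# All class and function names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    kind: str                                # "training" or "inference"
    latency_budget_ms: Optional[float] = None

def route(w: Workload) -> str:
    """Pick a device class for a workload (illustrative policy)."""
    if w.kind == "training":
        return "GPU"   # raw parallel throughput; latency-tolerant
    if w.latency_budget_ms is not None and w.latency_budget_ms < 10:
        return "LPU"   # deterministic, sub-10ms real-time inference
    return "GPU"       # batch inference can still run on GPUs

print(route(Workload("training")))                        # GPU
print(route(Workload("inference", latency_budget_ms=5)))  # LPU
```

The point of the sketch is the policy, not the API: once both device classes sit on one fabric, the platform owner decides where every workload lands.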
