In a landmark move that underscores the intensifying race for supremacy in artificial intelligence hardware, Nvidia has agreed to buy key assets and license technology from AI chip startup Groq for approximately $20 billion in cash, according to investment firm Disruptive’s CEO Alex Davis, whose company led Groq’s latest funding round in September.
The agreement, described as a “non-exclusive licensing deal,” will bring Groq’s high-performance AI inference technology under Nvidia’s expanding portfolio while transferring core members of Groq’s leadership — including founder and CEO Jonathan Ross, president Sunny Madra, and senior engineers — to Nvidia. Despite this major transfer, Groq says it will continue operating independently, with Simon Edwards stepping up as its new CEO and GroqCloud continuing “without interruption.”
A Strategic Deal Without a Full Buyout
While early reports characterized the transaction as a $20 billion acquisition, Nvidia emphasized that it is not acquiring Groq as a company, but rather licensing its intellectual property and hiring its top talent. Nvidia CEO Jensen Huang, in an internal memo, explained that the company intends to integrate Groq’s low-latency AI processors into the Nvidia AI Factory architecture. This integration will enable Nvidia to address a wider spectrum of real-time and inference workloads — inference being the stage of AI operation where a trained model processes and responds to user requests.
Huang’s message highlighted Nvidia’s goal to maintain its leadership even as the AI industry shifts focus from training models, where Nvidia dominates, to inference, where competition from AMD, Groq, and Cerebras Systems is heating up.
The Rise of Groq and Its Distinctive Technology
Founded in 2016 by ex-Google engineer Jonathan Ross, one of the architects behind Google’s Tensor Processing Unit (TPU), Groq quickly made a name for itself by designing ultra-fast AI accelerator chips optimized for inference tasks. Its distinctive approach — using on-chip SRAM memory instead of external high-bandwidth memory — allows for faster data access and lower latency, boosting chatbot and AI model responsiveness. However, because SRAM offers far less capacity per chip than high-bandwidth memory, this design also constrains the size of the models a single system can handle, with larger models needing to be spread across many chips.
Groq’s most recent funding round raised $750 million at a valuation of $6.9 billion, with backing from BlackRock, Neuberger Berman, Samsung, Cisco, and 1789 Capital, the latter co-led by Donald Trump Jr. The startup was targeting around $500 million in revenue for 2025 before Nvidia’s approach.
Though not initially seeking buyers, Davis said the deal came together swiftly as Nvidia recognized Groq’s potential to strengthen its grip on the AI inference market.
Nvidia’s Expanding AI Empire
This transaction represents Nvidia’s largest deal to date, far surpassing its 2019 acquisition of Mellanox for $7 billion. It follows a growing trend among tech giants — including Microsoft, Meta, and Amazon — of structuring large-scale “acqui-hire” and licensing deals to secure top AI talent while avoiding regulatory hurdles tied to outright acquisitions.
For Nvidia, this is not the first move of its kind. In September, it reportedly paid over $900 million to bring in Enfabrica’s CEO Rochan Sankar and license that startup’s technology. Earlier this year, Nvidia also announced plans to invest up to $100 billion in OpenAI and $5 billion in Intel as part of its broader AI expansion.
Industry analysts note that Nvidia’s non-exclusive structure with Groq could help it sidestep potential antitrust concerns, as Groq will remain a separate entity. Still, Wall Street observers, including Bernstein’s Stacy Rasgon, caution that regulators may revisit such creative deal structures as they blur the line between partnerships and acquisitions.

The Market Shift Toward AI Inference
The deal arrives as the global AI landscape transitions from a training-centric phase — where Nvidia’s GPUs reign supreme — to one increasingly focused on inference performance. Huang has repeatedly emphasized that sustaining leadership in this new era will depend on developing faster, more efficient hardware for edge and data center AI processing.
Groq’s inference engines and Nvidia’s GPU-backed infrastructure appear to form a powerful combination aimed at precisely that goal. Together, they could enable faster real-time responses and more cost-effective scalability for globally deployed AI systems.
Meanwhile, Groq’s primary rival, Cerebras Systems, continues forging its own path, having delayed its IPO earlier this year but signaling renewed plans to go public in 2026. Both Groq and Cerebras have recently secured major AI infrastructure deals in the Middle East, highlighting the surging global demand for advanced inference solutions.
A Transformative Moment for Nvidia and AI Hardware
With $60.6 billion in cash and short-term investments as of late 2025, Nvidia has both the financial muscle and strategic intent to pursue transformative deals like this. The Groq licensing partnership — blending a multi-billion-dollar investment with a leadership transfer — reflects the evolving playbook of modern tech giants: absorb innovative minds and proprietary technology without formal mergers.
As AI competition intensifies across inference and real-time computing, the integration of Groq’s cutting-edge processors into Nvidia’s vast ecosystem signals a new chapter for the company. It is one aimed at consolidating Nvidia’s influence across every stage of AI development, from training supercomputers to the devices and services that deliver instant intelligence to users around the world.