Nvidia’s Move to Open-But-Owned AI Chips Could Cement Its AI Dominance

Nvidia’s chief Jensen Huang dropped multiple announcements at last week’s Computex 2025 gathering in Taiwan. One of the most eye-opening was the launch of NVLink Fusion, which will allow the AI chip leader’s customers and partners to use its products alongside non-Nvidia central processing units (CPUs) and graphics processing units (GPUs).

That’s a big change. NVLink – which enables faster data exchange between GPUs and CPUs – has always been strictly for Nvidia silicon. Now it’s going to play well with others.

Analysts say the move is calculated to sustain the firm’s dominance in high-compute AI systems as demand shifts to bespoke applications.

With Big Tech firms like Amazon building their own custom silicon, will opening up NVLink help the world’s second biggest company stay top of the chip heap?

Key Takeaways

  • Nvidia released a new program last week called NVLink Fusion that will allow customers to use third-party chips alongside Nvidia’s.
  • The move represents a big change for the world’s second biggest company. Instead of locking competitors out, it lets customers and partners bring them in.
  • That ecosystem approach might seem risky, but Nvidia’s chief, Jensen Huang, is betting it will keep the company at the core of AI computing, while letting others innovate on top.
  • Nvidia also wants to fend off moves by tech giants like Google and Microsoft to develop alternative CPUs and GPUs of their own.

A Bigger Sandbox

Nvidia’s NVLink Fusion allows building semi-custom AI infrastructure. Source: Nvidia

NVLink Fusion gives AI data centers more freedom to use Nvidia’s processors with third-party CPUs, GPUs, and high-powered application-specific integrated circuits (ASICs).

Huang told Computex attendees that it will mean “you can build semi-custom AI infrastructure, not just semi-custom chips. You get the benefit of using the NVLink infrastructure and the NVLink ecosystem.”

On the day of Huang’s keynote, Nvidia said Alchip, Astera Labs, Cadence, Marvell, MediaTek, and Synopsys would be the inaugural program partners.

Customers like Qualcomm and Fujitsu will also be able to use their proprietary CPUs with Nvidia’s GPUs in data center environments, the company said.

Flexibly Sustaining Its Lead

Customers may be clamoring for more options to tailor their AI infrastructure, but analysts say Nvidia’s embrace of openness is strategic, not reactive.

While the company has pole position in GPUs for general AI training, rivals have eyed an opportunity in processors built for more specific applications. Some of Nvidia’s biggest competitors in AI computing are also some of its biggest customers – Amazon, Google, and Microsoft – all of whom build their own custom silicon.

Holger Mueller, an analyst at Constellation Research, said:

“Nvidia is acknowledging the importance of the network to AI rollouts. The speed and efficiency of how data is served to precious and expensive GPUs is what really matters. By allowing customers and partners to interoperate with NVLink, Nvidia is prioritizing the network over its own in-house designs, staying true to its roots as a component vendor.”

He added that Nvidia will continue to offer a fully integrated AI stack while disaggregating it in parallel. The company is betting that GPUs, platforms, and ecosystems will matter more to customers in the long run.

In a note to investors published after the announcement, BofA analyst Vivek Arya reaffirmed the bank’s ‘buy’ rating on Nvidia, saying the new program may help power the coming “multi-trillion-dollar AI factory market.”

Eye on ASICs

Opening up NVLink could also create opportunities for Nvidia to extend its footprint into data centers built around ASICs, purpose-built chips used for workloads like cryptocurrency mining and sometimes deployed in place of Nvidia products.

For decades, standard computing models allowed data centers to be built on general-purpose CPUs. With the arrival of trillion-parameter AI models and real-time agentic inference, those older architectures are being tested on a daily basis. Part of the solution is to install more high-performance GPUs like Nvidia’s Blackwell, but AI needs more.

High-speed, bidirectional data transfers between GPUs, data plane processors, and CPUs are crucial to LLMs’ fast processing requirements. That’s where NVLink Fusion could strengthen Nvidia’s position.

Nvidia’s NVLink technology addresses AI’s robust data communication needs head-on. Opening it up means third-party options like custom CPUs and accelerators can be coupled with Nvidia GPUs at the board and rack level.

NVLink Fusion Snapshot

First announced in 2014, Nvidia’s NVLink provides high-performance data interconnection between GPUs and CPUs, enabling rapid data exchanges at AI scale.

  • According to the company website, the platform’s fifth generation delivers 800 Gb/s of network throughput and a total bandwidth of 1.8 TB/s per GPU.
  • Nvidia says NVLink uses ConnectX-8 SuperNICs, Quantum-X800 InfiniBand, and Spectrum-X switches.
  • Support for co-packaged optics is ‘coming soon’.
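To put that per-GPU figure in perspective, here is a rough back-of-the-envelope sketch of what it implies at rack scale. The 1.8 TB/s number comes from the article; the 72-GPU rack size is an assumption for illustration (matching Nvidia’s NVL72 rack configuration), not something the article states.

```python
# Back-of-the-envelope: aggregate NVLink bandwidth across a rack of GPUs.
# 1.8 TB/s per GPU is the fifth-generation NVLink figure cited above;
# 72 GPUs per rack is an assumed configuration for illustration only.

PER_GPU_TB_S = 1.8    # total NVLink bandwidth per GPU, in terabytes/second
GPUS_PER_RACK = 72    # assumed rack size (hypothetical)

total_tb_s = PER_GPU_TB_S * GPUS_PER_RACK
print(f"Aggregate NVLink bandwidth: {total_tb_s:.1f} TB/s")  # prints 129.6 TB/s
```

Under those assumptions, a single rack’s NVLink fabric would move on the order of 130 TB every second, which is the scale of interconnect that trillion-parameter models depend on.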

Until now, NVLink worked only with Nvidia GPUs. Fusion enables those GPUs to communicate with third-party CPUs and ASICs, widening the options for bespoke AI infrastructure while opening a new revenue stream for Nvidia.

NVLink Fusion uses an Nvidia chiplet that can be added to any CPU or ASIC. That should give cloud hyperscalers like Google greater flexibility to scale up their AI data centers to millions of GPUs, using Nvidia’s rack-scale systems and end-to-end networking platform alongside any ASIC.

Re-Asserting Dominance

From a strategic standpoint, the decision to open up NVLink to outsiders is a forward-thinking move that could solidify Nvidia’s place in the emerging AI ecosystem. Rather than digging a deeper moat around its products, Nvidia is lowering the drawbridge — at least partly.

Core computing still depends on its GPUs and interconnect technology, yet NVLink Fusion offers a balanced approach with a number of potential upsides.

  • For customers, it could accelerate innovation by sharing designs and tools, or lead to lower development costs thanks to shared resources and expertise. And no matter how good Nvidia’s products are, no one wants to be utterly dependent on a single vendor.
  • For Nvidia, the benefits calculation is simpler. An ‘open but owned’ footing could allow it to maintain control of the AI industry standard platform while letting others innovate on top of it.

As Amazon and Microsoft look to chip away at Nvidia’s dominance with their own in-house silicon, the market leader is offering a third way: bring your own processors, plug them into Nvidia’s systems, and continue to reap the benefits of Nvidia-driven performance.

The Bottom Line

As he unveiled NVLink Fusion, Huang said, “A tectonic shift is underway.” The company appears to be signaling a new willingness to operate within a broader AI ecosystem and even embrace co-opetition.

Huang also announced a new Taiwan office for Nvidia’s joint project with Foxconn to build an AI supercomputer, and launched a new AI collaboration platform called NVIDIA DGX Cloud Lepton. It boasts a marketplace that will connect AI developers with ‘tens of thousands’ of GPU products from a network of providers.

“We are delighted to support leading companies as they advance innovation in the age of AI and robotics,” Huang added.

Mark de Wolf
Technology Journalist

Mark is a tech journalist specializing in AI, FinTech, CleanTech, and Cybersecurity. He graduated with honors from the Ryerson School of Journalism and studied under senior editors from The New York Times, BBC, and The Globe and Mail. His work has appeared in Esports Insider, Energy Central, Autodesk Redshift, and Benzinga.
