Scalable Link Interface

What Does Scalable Link Interface Mean?

Scalable Link Interface (SLI) is a technology developed by Nvidia that enables multiple graphics cards to work in conjunction to produce a single output. It is an application of parallel processing, and it greatly increases performance in graphics-intensive applications such as games and 3D rendering. SLI enables multiple graphics processing units (GPUs) to share the workload of rendering a scene, making processing faster.
The setup needs the following:

  • An SLI-compliant motherboard
  • At least two SLI-compliant Nvidia graphics cards of the same model
  • An SLI bridge connector

An SLI-compliant board already provides the two required PCIe x16 slots. The two cards are connected to each other via the special SLI bridge connector.

Techopedia Explains Scalable Link Interface

SLI works by giving both GPUs the same scene to render, but different portions of it. The master card is usually assigned the top half of the scene while the slave renders the lower half. When the slave finishes its half, the result is passed to the master GPU, which combines the two halves before sending the frame to the display.
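The split-frame idea described above can be sketched in a few lines. This is a toy illustration of the concept only, not Nvidia's actual driver logic; the framebuffer, GPU names, and functions here are all hypothetical.

```python
# Toy sketch of split-frame rendering (SFR): two "GPUs" each render
# half of the scanlines, and the master combines both halves for display.
WIDTH, HEIGHT = 8, 4  # tiny framebuffer for demonstration

def render_rows(gpu_name, rows):
    """Pretend-render: each pixel simply records which GPU produced it."""
    return {y: [gpu_name] * WIDTH for y in rows}

def render_frame_sfr():
    top = range(0, HEIGHT // 2)          # master is given the top half
    bottom = range(HEIGHT // 2, HEIGHT)  # slave is given the lower half
    master_half = render_rows("master", top)
    slave_half = render_rows("slave", bottom)
    # The master combines both halves before the frame is sent out
    frame = {**master_half, **slave_half}
    return [frame[y] for y in range(HEIGHT)]

frame = render_frame_sfr()
```

In a real system the two halves are rendered in parallel on separate hardware; the sketch only shows the division of labor and the final combine step.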

When SLI was first released in 2004, it was supported by very few motherboard models, and setting up a system was a tedious experience. Motherboard designs at the time did not provide enough PCIe lanes, so SLI-compliant boards came with a "paddle card" inserted between the two PCIe slots; depending on its orientation, it either channeled all lanes into the primary slot or split them evenly between the two slots. As the technology matured, the paddle card was no longer needed. SLI can now even be achieved with a single graphics card by placing two separate GPUs on one board, eliminating the need for two PCIe slots, or for an SLI-compliant motherboard at all. Using two of these dual-GPU cards on an SLI motherboard yields Quad SLI.

SLI is the result of our ever-growing need for graphics processing power. Because hardware technology cannot advance fast enough to keep up with processing demands, the best option is to apply parallel processing to current technology and have multiple GPUs work together to speed up rendering. The result is a large boost in performance, which comes at a price, since you need at least two identical cards.

However, since the two cards do not work independently of each other, the performance boost is not 100 percent. The master card must wait for the slave to finish and then combine the two halves before the frame can be displayed, which is the system's bottleneck. The extra time spent combining the renders means the real-world performance gain is typically 60-80 percent, still a very considerable increase.
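The gap between the ideal 2x speedup and the real-world 60-80 percent gain can be modeled with simple arithmetic. The numbers below are illustrative assumptions, not measured figures: a frame that takes 16 ms on one card, with a hypothetical 2 ms of combine/synchronization overhead.

```python
# Rough model of why two GPUs don't double performance: the two halves
# render in parallel, but combining them adds a fixed overhead.
def sli_speedup(render_time_ms, combine_overhead_ms):
    single_gpu = render_time_ms
    dual_gpu = render_time_ms / 2 + combine_overhead_ms
    return single_gpu / dual_gpu

# 16 ms frame, 2 ms combine overhead: 16 / (8 + 2) = 1.6x, i.e. a 60% gain
speedup = sli_speedup(16.0, 2.0)
```

With zero overhead the model gives the ideal 2x; as the combine step grows relative to the render time, the gain shrinks toward the lower end of the 60-80 percent range.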


Margaret Rouse
Technology Expert
