Scalable Link Interface (SLI)
Definition - What does Scalable Link Interface (SLI) mean?
Scalable Link Interface is a technology developed by Nvidia that enables multiple graphics cards to work in conjunction to produce a single output. It is an application of parallel processing and greatly increases performance in graphics-intensive applications such as games and 3D rendering. SLI enables multiple graphics processing units (GPUs) to share the workload of rendering a scene, making processing faster.
The setup needs the following:
- An SLI-compliant motherboard
- At least two SLI-compliant Nvidia graphics cards of the same model
- An SLI bridge connector
An SLI-compliant motherboard already provides the two required PCIe x16 slots, and the two cards are linked to each other through the special SLI bridge connector.
Techopedia explains Scalable Link Interface (SLI)
SLI works by giving both GPUs the same scene to render, but different portions of it. The master card is usually assigned the top half of the frame while the slave renders the lower half. When the slave finishes its half, it sends the result to the master GPU, which combines the two halves before sending the finished frame to the display.
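The split-frame scheme described above can be sketched in a few lines of Python. This is a toy simulation, not Nvidia's actual driver logic: two "GPUs" each render half the scanlines of a frame in parallel, and the master then stitches the halves together.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "scene": a 2-D grid of pixels, rendered as (row, col) labels.
HEIGHT, WIDTH = 8, 4

def render_rows(gpu_name, rows):
    """Pretend-render a slice of scanlines; a real GPU would shade pixels here."""
    return {r: [f"{gpu_name}:{r},{c}" for c in range(WIDTH)] for r in rows}

# Master takes the top half of the frame, slave the bottom half.
top = range(0, HEIGHT // 2)
bottom = range(HEIGHT // 2, HEIGHT)

with ThreadPoolExecutor(max_workers=2) as pool:
    master_half = pool.submit(render_rows, "master", top)
    slave_half = pool.submit(render_rows, "slave", bottom)
    halves = {**master_half.result(), **slave_half.result()}

# The master combines both halves into the final frame before display.
frame = [halves[r] for r in range(HEIGHT)]
```

The combining step at the end is exactly where the real hardware's bottleneck sits: the final frame cannot be displayed until both halves have arrived.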
When SLI was first released in 2004, it was supported by very few motherboard models, and setting up a system was tedious. Motherboard chipsets of the time did not provide enough PCIe lanes, so SLI-compliant boards shipped with a "paddle card" inserted between the two PCIe slots; depending on its orientation, it either channeled all lanes to the primary slot or split them evenly between the two slots. As the technology matured, the paddle card was no longer needed. SLI can now even be achieved with a single graphics card that places two separate GPUs on one board, eliminating the need for two PCIe slots, or for an SLI-compliant motherboard at all. Using two of these dual-GPU cards on an SLI motherboard yields Quad SLI.
SLI is the result of our ever-growing need for graphics processing power. Because hardware technology cannot advance fast enough to keep up with processing demands, the best alternative is parallel processing: making multiple GPUs work together to render faster. The result is a large boost in performance, which also comes at a price, since you need at least two of the same card.
However, because the two cards do not work independently of each other, the performance boost is not 100%. The master card must wait for the slave to finish, then combine both halves before the frame can be displayed, and this combining step is the bottleneck of the system. The extra time it consumes means a real-world performance gain of roughly 60-80%, still a very considerable increase.
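The arithmetic behind that 60-80% figure can be made concrete. In the sketch below, each GPU renders half the frame in parallel, and the master then pays a fixed combine/synchronization overhead; the timing numbers are illustrative assumptions, not measured figures.

```python
def sli_speedup(render_time_ms, combine_overhead_ms):
    """Idealized two-GPU SLI speedup: each card renders half the frame
    in parallel, then the master spends extra time combining the halves."""
    single_gpu = render_time_ms
    dual_gpu = render_time_ms / 2 + combine_overhead_ms
    return single_gpu / dual_gpu

# Hypothetical numbers: a 20 ms frame with 2.5 ms of combine overhead.
speedup = sli_speedup(20.0, 2.5)          # 20 / (10 + 2.5) = 1.6x
gain_percent = (speedup - 1) * 100        # 60% faster than one GPU
print(f"{gain_percent:.0f}% faster")
```

Shrinking the overhead toward zero pushes the gain toward the theoretical 100%, which is why the combining step, not the rendering itself, limits SLI scaling.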