NVidia's SLI: What Were They Thinking?
Author: Michael Ahlf
Date: June 28th 2004
NVidia, as most of us have found out by now, have reintroduced SLI as a way to get more graphics card power into a machine, bolstered by the new PCI-Express architecture that is replacing both PCI and AGP. They weren't alone, of course; earlier in the year there were hints about ATi doing the same, and Alienware had already unveiled its own dual-card Video Array solution.

A historical perspective: when it came out, the SLI (Scan Line Interleave) architecture from the now-defunct 3dfx (since absorbed by NVidia) was touted as the ultimate in graphics card power. Two PCI-based video boards functioned as one, each rendering every other scan line (half the image), the two halves interleaved to give the whole picture.

In the 16-bit graphics card days, when a few MB of memory per card was the norm, this was wonderful. Well, sort of. When the Voodoo2s were in their prime, they cost $300+ each. The upside was that cards from any manufacturer worked together, provided they used the same Voodoo2 chip and amount of RAM. The downside was that the full cost of an initial rig was $700+: $100 for the 2D card, plus $300 for each of the two Voodoo2s. That was a steep price tag, especially since it was an add-on for PCs that, even in the value range, still cost $1500+ for the basic computer and an el-cheapo 2D card.

And people still paid it, all just to run Quake faster and faster.

Today, NVidia's got a different challenge for us, with SLI no longer meaning what we thought it meant (it now stands for Scalable Link Interface, not Scan Line Interleave). Tom's Hardware has a decently full examination of how it works; the question is, what will it do in the marketplace?

The Upside:

The new SLI design, unlike the old one with its pretty lousy pass-through cables, renders everything digitally, splitting each frame in roughly top-half/bottom-half fashion. It incorporates load balancing, so each card's performance ought to stay close to maxed out; this is how NVidia claims up to a 90% speed improvement over a single GeForce 6800 card.

In theory, a two-card solution could give you a 100% speed increase (twice the speed). Unfortunately, reality is never that nice. Textures and other data have to be duplicated on each board separately, which consumes memory and bandwidth. And even though load balancing should (theoretically) keep performance up, the load-balancing software can't react instantaneously; at some point, like it or not, one card is going to be doing more work than the other. Both boards also have to share the data sent to and from the CPU, which again can be a bottleneck. I'm very tempted to split the difference between NVidia's current reality - a reliable 77% increase - and their claimed max of 90%, and give them an easy shot at roughly 85%. The rest will be theirs only with a LOT of work, driver optimizations, and luck.
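The arithmetic behind that estimate can be sketched with a tiny model. The 77% and 90% figures above are NVidia's; the load-balance efficiency and overhead numbers below are purely illustrative assumptions, not measurements:

```python
# Toy model of two-card SLI scaling. An ideal pair gives a 2.0x
# speedup; real losses come from the fraction of each frame spent
# on serial work (duplicating textures, shared CPU traffic) and
# from the load balancer not splitting the rest perfectly.

def sli_speedup(balance_efficiency: float, overhead_fraction: float) -> float:
    """Estimated speedup of two cards over one.

    balance_efficiency: useful work the second card contributes
        (1.0 = a perfect 50/50 split of the parallel portion).
    overhead_fraction: fraction of each frame that stays serial.
    """
    parallel = 1.0 - overhead_fraction
    return 1.0 / (overhead_fraction + parallel / (1.0 + balance_efficiency))

print(round(sli_speedup(1.0, 0.0), 2))     # perfect world: 2.0x
print(round(sli_speedup(0.82, 0.035), 2))  # plausible losses: ~1.77x
```

With even small assumed losses the model lands in NVidia's reported 77% range, which is why the remaining distance to 90% is mostly a driver-optimization fight.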

Also, with everything digital, pass-through cables won't be necessary, nor will an independent 2D board. That's a good thing; back in the 3dfx days, threads upon threads on message boards were dedicated to methods of eliminating flicker and static caused by interference over the pass-through cable, by the inactive (but not quite) 2D board while the Voodoo cards were in use, or on the desktop when the 2D board was active but the Voodoos weren't. A few enterprising, aggravated gamers even went with the homo sapiens solution (i.e. forgoing the 2D pass-through and just plugging the monitor into one card or the other as appropriate).

The Downside:

Problem number one is price, pure and simple. ONE new NVidia 6800-series PCI-Express board may cost upwards of $600 when they first come out, especially for 256-MB boards. And the 6800 series is the only card line that can currently do this - and only with two identical cards from the same manufacturer, according to NVidia. You're welcome to try it with differing models of the same chipset, but since many makers use slightly different clock speeds and RAM timings, good luck.

There are currently 6800-series boards for less on Pricewatch, but they're all AGP-based, and the new system is for PCI-Express motherboards only. Those won't be available at anything below prohibitive cost for a few months yet, except from system builders like Alienware, where the unit price can quickly fly past $4000.

Plus, upgrading to this spec, unless you're buying a computer that already has it, will require a new PCI-Express motherboard with at least two x16 PCI-Express slots. Functionally, this means you're likely to need a new processor, new RAM (to take advantage of the recent upgrades to the DDR spec), and the motherboard itself. It won't, thankfully, require chucking all your old PCI devices (like a sound card or RAID controller), but if you get low on slots, it might require changing a few out. And PCI-Express slots come in a variety of widths, not all of them x16; finding a board with two x16 slots may be hard for a while.

Problem number two is the sheer amount of space this setup's going to take. Remember what I said about running low on PCI/PCI-Express slots?

Consider the following: ever since the dustbuster-like cooling system of the GeForce FX series became their MO, NVidia have set up their cards to require not only the AGP slot but also the PCI slot below it (where some enterprising motherboard makers stick the AMR, on the assumption it'll never be used anyway). If you're really unfortunate, you might mail-order a new PCI-Express motherboard with two x16 slots, only to find that they've stuck the two x16s right next to each other and you're screwed.

So that's four slots gone. Most motherboards, even on the HIGH end for end-users, have 6-7 slots total (1 AGP, 1 AMR, 4-5 PCI). Granted, modems and Ethernet (assuming you're on a standard 10/100 setup) are now built into many motherboards, so those are possible to excuse if you're a careful shopper and find a board with them built in. If you're using 802.11b/g wireless or Gigabit Ethernet for your network, of course, you may be entirely out of luck.

Still, this setup is going to eat FOUR of your precious slots. Until PCI-Express is the only interface available (most motherboards seem to be leaning toward PCI-Express/PCI hybrids with just one or two x16 slots to start, much as PCI/ISA hybrids were big when the Voodoo2s first came out), you may not be able to fit ANY other PCI-Express devices into the system. Which means no even-higher-speed PCI-Express SATA RAID controller cards, for instance.

Granted, more things can also be done over USB. Still, a video setup that takes up four slots is pushing your luck - just adding a RAID controller card and a sound card can max out your motherboard completely. And many motherboard makers ship the USB 2.0 headers on a slot bracket that attaches to the back, in case your PC's case doesn't have a USB 2.0 plug set in the front.

Going this way also completely ignores NVidia's past strategies and arguments as to why the Voodoo2 and SLI were bad (and for that matter, why SLI itself is a bit of a hack). Way back in our archives, all the way back on May 1st of 1999, our founder Khalid had something to say on the matter in response to an interview the now-defunct RivaZone did with Nick Triantos of NVidia. Nick's statement was as follows:


A: Don't count chips. More chips on a board usually means more cost when you buy the board. RIVA 128 can render 100 million textured pixels per second. We do that with one chip. The fact that the chip also does great 2D is almost as cool as when you find two toys in your cereal box. The nerve of those other companies, making you pay extra money for less performance and quality... 

Obviously, this was concerning the Riva128 and Voodoo2 - but the TNT came out a very short time later, with extra features like 32-bit color (the Voodoo2 could only handle 16-bit), again all inside one chip. Khalid's take on the Voodoo2 was as follows:

The reason I call the Voodoo2 a hack is because it is essentially the Voodoo1 chipset (which is a TMU) with an additional Voodoo1 TMU and a triangle setup engine. The triangle setup "chip" just calculates the slopes of lines and allows the TMUs, the rasterizers, to perform the task of filling pixels to the screen. Obviously the rasterizer interpolates dozens of variables such as z values, color, u, v coordinates and all that (essentially quite a simple task).

The reason the Voodoo2 is a hack is because a proper, true engineer, who follows the discipline would simply modify the Verilog or VHDL code to have both TMUs and setup engine on one chip (not that simple to do). This greatly reduces the cost of the chip since what you are actually paying for is not the intellectual property on the chip, but the cost of the die (eg. Nintendo 64 cartridges cost a lot versus playstation CDs). It seems that everyone's "cost of chip" or "cost of board" is based on the die. The die is basically the "chip" inside all that ceramic that you see on your CPU. It is this incredibly small 1cm/1cm square (just an approx) which is basically what you  pay $$$ for.

Up until this point - when NVidia and Alienware both introduced a variant on SLI - this has been pretty much design law: everything, function-wise, in a graphics board has been part of ONE chip. The only exception that survived was ATi's All-In-Wonder series, where the tuner and some video-processing functions are external - but even there, the graphics capabilities all remain in one chip.

This situation is also why the 3dfx Voodoo5 series did so poorly; the idea was that the Voodoo4 would have one chip, the Voodoo5 two, and the ultra-high-end model (the never-released Voodoo5 6000) a whopping FOUR chips on board.

Unfortunately, just as with the new SLI setup, this comes at great cost: for the arrangement to work, the memory of each board or processor has to hold duplicated data. I can't imagine that the NVidia setup will be different, because when you're doing dynamic load balancing you've got to keep as much data local as possible so the numbers can be crunched, and much of the information a graphics card processes - like textures - has to be duplicated anyway. Either way, the Voodoo5 6000 was a 128MB board with the effective memory of a 32MB board, thanks to the fact that each chip required its own 32 MB of memory.
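The memory penalty is easy to see with a little arithmetic. The Voodoo5 figures below are from the text; the two-card 6800 case is an assumed example for illustration:

```python
# When every chip must carry its own full copy of textures and
# frame data, usable capacity is the per-chip amount, not the sum.

def effective_memory(total_mb: int, num_chips: int) -> int:
    # Total memory divided evenly among chips; since each slice
    # holds duplicated data, the duplicates buy no extra capacity.
    return total_mb // num_chips

print(effective_memory(128, 4))  # Voodoo5 6000: 128 MB acts like 32 MB
print(effective_memory(512, 2))  # assumed pair of 256 MB 6800 boards: 256 MB
```

You pay for every megabyte on the box, but you only ever get to address one chip's worth of unique data.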

Memory is expensive, especially on video boards, which HAVE to use the latest and greatest designs in the pursuit of more power. When DDR boards came out, for instance, they blew SDR boards away - much to the chagrin of recent SDR-board owners and of those who bought GeForce MX-series boards expecting DDR-style performance. All the extra memory required by the Voodoo5 series made those boards prohibitively expensive for what they did, especially since they lacked the T&L features that the competing GeForce series had made so popular (and thus necessary). If, at the end of 2000, less than 6% of readers in a FiringSquad poll said they'd pay $500+ for a video board, what makes NVidia think readers are going to shell out $1100+ to do SLI?

Obviously, they don't - not really. They may be expecting users to get one, and then look at getting another (when the price comes down) later on as an upgrade. Or they may be expecting users to shell out $600 for two low-end models, on the assumption that it'll work a bit better than that singular $500 high-end model.

Or it could all be a grab for the performance crown, which has gone back and forth between NVidia and ATi ever since ATi introduced the Radeon chipset. ATi has been having a good run, with lower-heat chips that still often manage to outperform NVidia's while not requiring NVidia's ridiculously oversized cooling solutions. Even if the boost were a mere 30%, that would be more than enough for NVidia's double-card solution to beat any single-chip ATi solution (barring an ace up ATi's sleeve); with 77% increases already reported, the likelihood of ATi magically pulling out a faster single-chip part that quickly is exceptionally low. For a short time, at least, this new breed of SLI ought to net NVidia the performance crown, though doubters and partisans will likely stick to single-board specs, or at worst the performance of price-equivalent setups built around the lowest rank of GeForce 6800s, for what they will term a "proper" comparison.

Oh the Noise, Noise, Noise, Noise.

Sorry to rag on NVidia for this one again, but come on, guys - you KNEW this day was coming. You knew it back when some of your own employees pilloried the GeForce FX, in their own SPOOF PROMOTIONAL VIDEO, for how loud the damn thing was.

Now, even though you've made the contraptions somewhat quieter, you want us to put in TWO of them? Yikes. I like being able to hear my speakers over my PC, thanks.

Yeah, I know. NVidia fans (no pun intended) are going to cry foul on me for this one. But let's face it: ever since the GeForce FX, ATi's solutions have run quieter and cooler, and on those two points alone they're simply better suited to a dual-card design (2x the noise from an already quiet card is a lot nicer than 2x the noise from a leaf blower).

Word is that there's also a workstation-based solution that could put up to 4 of these NVidia cards into a rendering machine of some sort, or a REALLY REALLY expensive gaming rig. Fine and dandy if you want to do it; you'll just have to invest in a completely wireless video/input system so that you can run your monitor, keyboard, speakers, and mouse from the bedroom while the computer sits in the soundproof radiation bunker under your basement.

What's Coming Next?

NVidia has offered up its solution, but Alienware has already come up with one that allows ANY two boards of the same make and model (ATi's as well as NVidia's) to work together, so the obvious step would be for ATi to license Alienware's setup, either standalone or as an add-on that people could use in their systems. Alternatively, ATi could develop its own SLI substitute in-house (rumors claim they already have) and offer it as an alternative to NVidia's.

In terms of the performance crown, it's highly unlikely (given that Alienware will soon offer ATi-based systems using its VideoArray technology) that the NVidia SLI setup will go unchallenged; by the time both solutions are on the market, GeForce 6800 Ultra SLI will be compared to ATi X800 XTs in some dual-board mode, not to ATi's single-board solution. If ATi can get the X800 series (or its successor) to the point where one of those is more cost-effective than two lower-level NVidia boards, and it still beats NVidia's high-end board one-on-one, NVidia could be boxed into a corner.

The decision between the two will likely be made (barring one gigantic tech leap from one company or the other) not by the individual strengths of the boards, but by how efficiently each SLI system boosts performance. Other ATi improvements could also come into play: 3Dc, the newer texture-compression technology, could help keep traffic between boards down, and ATi's OVERDRIVE dynamic overclocking suite (introduced with the Radeon 9600XT/9800XT line) could help as well; enabled by default, it could let the ATi products run faster than a standard-speed GeForce 6800 in a similar setup.

The one thing I can safely say is that the video wars have returned. This time it's "upstart" ATi, the former underdog, against reigning champ NVidia, who seem to have remade a lot of the old champ's (3dfx's) mistakes after taking its crown and the contents of its locker.

Who will win? Only time will tell. But it's certain to be fun to watch.

Got Comments? Send 'em to Michael (at)!
Or better yet, make a username and post them right here, for everyone to see and discuss!
