(We assume Shego is reading this thread with intense interest...)
Saw the story when the Hydra 100 was announced. Basically, the reason it is so awesome is this:
Current SLI/CrossFire relies on the drivers (software) to split the graphics load between however many cards are installed in your system. This means the software has to be tuned for the cards in question, which a) is something of a (load) balancing act (one that eats CPU resources, besides) and b) generally means you need matching cards (same vendor, same series, etc.).
The Hydra chip does away with all that and simply hands the next frame in the render queue to the next graphics card waiting in line, regardless of manufacturer, chip series, etc. Each card doesn't need to care what other card(s) are installed in the system; it just takes one frame, renders it, outputs it, and waits for another. It's almost like RAID0 for GPUs. Best of all, so long as the Hydra chip can keep up*, you can either get incredible speed by ganging a couple of high-performance cards (GTX 285 or 5870) or go with 3 or 4 mid-level, lower-priced cards (possibly even of mixed parentage!) to get the same or similar level of performance.
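To make the "next card in line" idea concrete, here's a rough sketch of that round-robin dispatch in Python. This is purely illustrative (the function and names are made up, not Lucid's actual interface): each GPU takes one whole frame, so no card ever needs to know what the others are.

```python
from collections import deque

# Hypothetical sketch of round-robin frame dispatch, Hydra-style.
# Each GPU renders one complete frame on its own, then rejoins the line.
def dispatch_frames(frames, gpus):
    """Hand each frame to the next card waiting in line."""
    idle = deque(gpus)                 # cards waiting for work
    schedule = []                      # (gpu, frame) assignments, in order
    for frame in frames:
        gpu = idle.popleft()           # next card in line gets the frame
        schedule.append((gpu, frame))
        idle.append(gpu)               # card rejoins the line afterward
    return schedule

# Mixed "parentage" is fine, since each card only ever sees its own frame:
print(dispatch_frames(range(4), ["GTX 285", "HD 5870"]))
# → [('GTX 285', 0), ('HD 5870', 1), ('GTX 285', 2), ('HD 5870', 3)]
```

Compare that with driver-side SLI/CF, which has to split a single frame's workload across specific, matched cards and rebalance it on the CPU.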
One thing that's been left out of the whole discussion: one of the reasons you need Vista/Win7 for really impressive gaming performance is that XP won't handle more than 2 video cards. If the Hydra chip can be made to work under WinXP, you might be able to run up to 4 video cards under, say, a special version of WinXP tuned for high performance. An interesting possibility, wouldn't you say?
--Patrick
*This is the real sticking point. Otherwise, as the previous poster says, you've just traded a software bottleneck for a hardware one.