NVIDIA has been preparing the dual-GPU GeForce GTX 590 for a few months now, ready to do battle with the AMD Radeon HD 6990. There was no question of letting the competition parade at the top of the graphics card rankings, especially after being vanquished by the Radeon HD 5970 last time out. With less effective power management, NVIDIA faced a tough task, but they were nevertheless able to take advantage of successive AMD delays to hone their drivers and adapt the positioning of the GeForce GTX 590 to the performance of the Radeon HD 6990. Will there be a clear-cut winner in this duel? To answer this question we put the cards through our tests at standard resolutions, in surround and 3D Vision, without forgetting heat management and noise levels…
Beyond the 300W barrier
Although the PCI Express 2.0 spec doesn’t provide for cards that consume over 300W, both AMD and NVIDIA decided to ignore this, a first for reference cards. Beyond the fact that these cards won’t be validated by the PCI-SIG, there are no real consequences, on condition that you have a big enough power supply.
While it was hard to see how NVIDIA could design a card with two GF110 GPUs within a 300W envelope, given the consumption levels of the GF110 and the lack of advanced power management, it wasn’t outside the bounds of possibility that AMD would manage it, thanks to PowerTune technology.
NVIDIA probably came to the same conclusion and, to ensure a crushing victory, went for a thermal envelope of 450W, presented as a ‘gaming TDP of 365W’. However, AMD also decided to up the ante as much as possible by bringing out its Radeon HD 6990 at 375W or 450W, depending on the BIOS used. On discovering these specs, NVIDIA must have understood that the battle would be tougher than they’d thought.