The stock GeForce GTX 580

For this test, NVIDIA supplied us with two stock GeForce GTX 580s.
For the GeForce GTX 580, NVIDIA has used a PCB similar to that of the GeForce GTX 480, although it no longer has holes in it as, in practice, they brought no gain in terms of channelling cool air. The new PCB has been optimised to be more reliable under the extreme power draw of the GeForce GTX 580, which is identical to that of the GeForce GTX 480.
Connectivity is also identical, with two dual-link DVI outputs, a mini-HDMI output, two SLI connectors and 8-pin and 6-pin power connectors.
The most significant development comes with the GeForce GTX 580's cooling system. NVIDIA uses a slightly different turbine, which makes less noise. What's more, the external part of the radiator, which accumulated a lot of heat and dispersed it within the casing, has been replaced with a more standard model. Efficiency has been improved here thanks to the use of a vapour chamber, in the same way as AMD has done on the Radeon HD 5970 for example.
NVIDIA quotes a TDP of 244W in games, but it is actually closer to 300W in the stress tests. To try to contain energy consumption, NVIDIA has added components that allow the drivers (but not the GPU itself) to monitor power draw, or more precisely the current flowing through each of the 12V supply channels (the PCI Express slot and the power connectors).
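To give an idea of how such per-rail monitoring adds up to a board power figure, here is a small sketch of our own (not NVIDIA's driver code); the rail names and readings are purely hypothetical:

```python
# Illustrative sketch: estimating board power from per-channel current readings
# on the 12 V supplies, as reported by the card's monitoring circuitry.
# Rail names and example readings are hypothetical.

RAIL_VOLTAGE = 12.0  # all monitored channels are 12 V supplies

def board_power(currents_amps: dict) -> float:
    """Sum P = V * I over each monitored 12 V channel."""
    return sum(RAIL_VOLTAGE * amps for amps in currents_amps.values())

# Example readings: PCI Express slot plus the 6-pin and 8-pin connectors.
readings = {"pcie_slot": 5.2, "6pin": 6.0, "8pin": 9.5}
print(f"Estimated board power: {board_power(readings):.0f} W")  # ~248 W
```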
NVIDIA hasn't, however, activated overall monitoring and, to avoid reducing performance in games or 3DMark, only brings its system into action in the latest versions of Furmark and OCCT, namely the extreme load tests used by reviewers to measure energy consumption. In these applications, if the driver measures power consumption beyond a certain limit, it halves the clocks, and halves them again if that hasn't been sufficient.
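The behaviour described above can be summarised with the following sketch, assuming the driver only acts on a short list of flagged applications; the limit value, application list and function names are our own illustration, not NVIDIA's code:

```python
# Minimal sketch of the throttling behaviour: halve the clock once if the
# measured power exceeds the limit, and halve it again if that isn't enough.
# POWER_LIMIT_W and FLAGGED_APPS are assumptions for illustration.

POWER_LIMIT_W = 300.0
FLAGGED_APPS = {"furmark.exe", "occt.exe"}

def throttled_clock(app: str, measured_power_w: float, base_clock_mhz: float) -> float:
    """Return the clock the driver would apply for a given power reading."""
    if app.lower() not in FLAGGED_APPS:
        return base_clock_mhz              # games and 3DMark are left untouched
    clock = base_clock_mhz
    for _ in range(2):                     # at most two halving steps
        if measured_power_w <= POWER_LIMIT_W:
            break
        clock /= 2
        measured_power_w /= 2              # rough assumption: power scales with clock
    return clock

print(throttled_clock("furmark.exe", 350.0, 772.0))  # -> 386.0 MHz
```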
In practice, the NVIDIA system ends up preventing the use of such software rather than keeping energy consumption within a well-defined thermal envelope. NVIDIA told us that it is committed to extending the system to more similar applications. This doesn't mean, however, that it is moving towards permanent monitoring of energy consumption.
In terms of overclocking, we managed to increase the GPU clock of our two samples to 875 MHz, tested stable in a version of Furmark that hadn’t been slowed down by the NVIDIA mechanism. This represents an increase of 13%.
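For reference, the gain can be checked quickly, assuming the card's reference 772 MHz core clock:

```python
# Quick check of the overclocking headroom quoted above,
# assuming the reference 772 MHz core clock of the GTX 580.
stock_mhz, oc_mhz = 772, 875
gain = (oc_mhz - stock_mhz) / stock_mhz
print(f"Overclock gain: {gain:.1%}")  # -> 13.3%
```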
Lastly, we should say that although all the cards supply the GPU at 0.962V at idle, the voltage in load varies from model to model, which allows NVIDIA to maximise the number of chips that qualify for a given spec. Thus, in load, one of our test models ran at a GPU voltage of 1.050V while the second ran at just 1.025V.