Review: NVIDIA GeForce GTX 680
by Damien Triolet
Published on April 9, 2012

GPU Boost
The GeForce GTX 680 is the first graphics card with a turbo mode, which allows the GPU clock to be increased as long as power consumption remains within a defined envelope.

The NVIDIA approach is however different to what we've seen with turbo on CPUs or with AMD's PowerTune. CPUs and the Radeons include a unit that estimates their consumption from the usage rates of their different blocks and a table of corresponding currents. This table is fixed for each model and determined conservatively on the basis of a 'bad' sample, say one with high leakage. In practice, energy consumption is therefore overestimated to a greater or lesser degree, but all samples of a given model deliver identical performance.
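To make the difference clearer, here is a minimal sketch of how such an estimation unit works. The block names and current values are purely illustrative assumptions on our part, not figures from AMD or Intel.

```python
# Illustrative sketch of power estimation from activity counters
# (the CPU / PowerTune approach described above).
# All block names and current values are made up for the example.

# Fixed per-model table of worst-case currents (amps at full activity),
# characterised on a high-leakage 'bad' sample so that real consumption
# is never underestimated.
WORST_CASE_CURRENT_A = {
    "shader_cores": 40.0,
    "memory_controller": 12.0,
    "fixed_function": 8.0,
}

def estimated_power_w(usage_rates, voltage_v):
    """Estimate power from per-block usage rates (0.0 to 1.0) and core voltage."""
    current_a = sum(WORST_CASE_CURRENT_A[block] * rate
                    for block, rate in usage_rates.items())
    return current_a * voltage_v

# Example: a partially loaded chip at 1.0 V
print(estimated_power_w({"shader_cores": 0.7, "memory_controller": 0.5,
                         "fixed_function": 0.2}, 1.0))  # 35.6 W
```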

The GK104 uses a different system. NVIDIA has placed on the PCB the same small shunt circuits we saw on the GeForce GTX 500 cards, which take a reading directly at the 12V power inputs. This data is reported to the driver, which then reacts based on this real measurement of the card's consumption.

In contrast to the GeForce GTX 500 cards (with the exception of the GTX 590), this power monitoring system is always on. It is also more reactive here: it can adapt the GPU clock every 100 ms, both upwards and downwards, in steps of 13 MHz, adjusting the voltage at the same time. If power consumption exceeds a certain threshold, the clock is progressively reduced; where consumption is low, the GPU clock can be increased.
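As a rough illustration of the behaviour described above, here is a minimal sketch of such a driver-side loop, assuming a single power threshold. The function names and the threshold variable are our own placeholders, not NVIDIA's implementation.

```python
import time

STEP_MHZ = 13        # clock granularity described above
INTERVAL_S = 0.1     # the clock is re-evaluated every 100 ms
BASE_MHZ = 1006      # GTX 680 base clock
CEILING_MHZ = 1110   # maximum boost clock we observed on our sample (see below)
POWER_LIMIT_W = 170  # placeholder threshold; the real target is covered below

def boost_loop(read_board_power_w, apply_clock_mhz):
    """Crude model: power below the threshold lets the clock step up,
    power above it steps the clock back down, one 13 MHz step per 100 ms."""
    clock = BASE_MHZ
    while True:
        power = read_board_power_w()   # shunt reading at the 12V inputs
        if power < POWER_LIMIT_W and clock < CEILING_MHZ:
            clock += STEP_MHZ
        elif power > POWER_LIMIT_W and clock > BASE_MHZ:
            clock -= STEP_MHZ
        apply_clock_mhz(clock)         # voltage is adjusted along with the clock
        time.sleep(INTERVAL_S)
```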

The approach is however non-deterministic. Depending on the quality of the GPU sample, its temperature and the efficiency of its power stage, energy consumption will vary, and these factors will impact the GPU clock.

And here we come to the thing that disturbs us: NVIDIA has refused to quantify this variation and claims not to know what it is. Moreover, NVIDIA is refusing to give the true clock limit for GPU Boost and will only say that every GeForce GTX 680 will be able to reach a GPU Boost clock of at least 1058 MHz in some cases. At the same time it says that its engineers have often observed higher clocks in the lab. In other words, NVIDIA's engineers don't really work to fixed specs but just sit back and observe the magic of clocks going up all on their own!

According to our observations on our sample, the maximum GPU Boost clock is 1110 MHz and the base clock 1006 MHz, with eight 13 MHz steps in between. The final step up to 1110 MHz is however harder to reach once the GPU has heated up; in most of the games tested our card managed up to 1097 MHz.
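A quick calculation confirms that these figures line up: eight 13 MHz steps above the 1006 MHz base clock give exactly 1110 MHz, and both the guaranteed 1058 MHz and the 1097 MHz we usually reached fall on this ladder.

```python
# The clock ladder our sample exposes: base clock plus eight 13 MHz steps.
base_mhz = 1006
ladder = [base_mhz + n * 13 for n in range(9)]
print(ladder)
# [1006, 1019, 1032, 1045, 1058, 1071, 1084, 1097, 1110]
```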

Note that GPU Boost uses a power target lower than the TDP. While the GeForce GTX 680's TDP is 195W, GPU Boost uses a 170W target for clock increases; the clock isn't reduced, however, until the TDP has been reached.


NVIDIA has worked with the author of RivaTuner to illustrate how GPU Boost functions and to allow modification of the power target as well as overclocking. EVGA's Precision X is thus already fully functional (others will follow).

The GPU Boost power target is expressed as a percentage, with the 170W target corresponding to 100% and the TDP to 115%. It can be varied between 71% and 132%, which corresponds to a range of 121 to 224W on the GTX 680. Increasing the target lets you make the most of GPU Boost and is also required for overclocking.
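As a sanity check, here is the conversion of those percentages back into watts, taking the 170W GPU Boost target as 100%; the rounding to 121 and 224W is ours.

```python
# Power target percentages converted to watts for the GTX 680,
# with 100% corresponding to the 170W GPU Boost target.
TARGET_W = 170
for pct in (71, 100, 115, 132):
    print(f"{pct:3d}%  ->  {TARGET_W * pct / 100:5.1f} W")
# 71% -> 120.7 W (~121 W), 115% -> 195.5 W (~the 195W TDP), 132% -> 224.4 W (~224 W)
```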

Note that you can't turn GPU Boost off, nor can you change how it works. If you increase the GPU base clock from 1006 to 1106 MHz, GPU Boost will still offer the same eight 13 MHz steps.
