Review: Nvidia GeForce GTX 670
by Damien Triolet
Published on July 4, 2012



Following on from the exceptional GeForce GTX 690, Nvidia has started rolling out versions of its Kepler architecture further down the range. The GeForce GTX 670 has thus made its appearance with its rather strange design, reduced energy consumption and levels of performance very close to the GeForce GTX 680. Will it manage to break the energy efficiency record?


GPU Boost: a good or a bad feature?
Like the GTX 680, the GeForce GTX 670 is based on the GK104 GPU, with no fewer than 3.5 billion transistors fabricated on TSMC's 28-nanometre process. Here Nvidia has prioritised gaming performance over GPU computing, setting the GTX 670/680 up to challenge the Radeon HD 7900s and their Tahiti GPU, which carries no fewer than 4.3 billion transistors.


The GK104 and its 3.5 billion transistors.

To take on the Radeon HD 7970, Nvidia had to push its GPU hard on the GeForce GTX 680. The GK104 was originally designed for the segment just beneath, but with the Radeon HD 7970 delivering lower performance than expected, Nvidia didn't have to wait for the big GPU in the family, the GK110, to target the top spot.

With energy consumption nicely under control on the GTX 600s, Nvidia has been able to introduce the first GPU turbo, GPU Boost, designed to maximise the GPU clock so as to make full use of the available thermal envelope. We aren't fully convinced by Nvidia's approach because GPU Boost is non-deterministic: in contrast to CPU turbos, it is based on real energy consumption, which varies from one GPU sample to another according to manufacturing quality and the current leakage affecting it. Moreover, Nvidia doesn't validate all samples of a given version of the GPU (the GK104-400 for the GTX 680 and the GK104-325 for the GTX 670) at the same maximum turbo clock: it simply specifies a guaranteed Boost clock, and a GPU that has been validated higher is allowed to clock a good deal above it. The problem is that the press rarely receive medium-level samples, and as a result the performance levels we report are somewhat higher than what you may find with samples from stores.
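To illustrate why two samples of the same card can end up at different clocks, here is a minimal toy model in Python. It is not Nvidia's actual algorithm: the power model, the leakage factors and the validated maximum clocks are invented for the sake of the example, and while the 980 MHz guaranteed Boost clock, the 13 MHz clock step and the 170 W limit echo the GTX 670's commonly quoted figures, they are used here purely as assumptions:

# Toy model of a power-limited GPU turbo, loosely in the spirit of GPU Boost.
# All figures are illustrative assumptions, not Nvidia's actual algorithm or limits.

def settle_clock(guaranteed_mhz, validated_max_mhz, leakage_factor,
                 power_limit_w=170.0, step_mhz=13):
    """Raise the clock in fixed steps until either the board power limit
    or the clock this particular sample was validated for is reached."""
    clock = guaranteed_mhz
    while clock + step_mhz <= validated_max_mhz:
        # Crude power model: power grows with clock, scaled by chip leakage.
        projected_power = 0.16 * (clock + step_mhz) * leakage_factor
        if projected_power > power_limit_w:
            break
        clock += step_mhz
    return clock

# Two hypothetical GTX 670 samples with the same guaranteed Boost clock (980 MHz)
# but different leakage and different validated maximum clocks.
good_sample = settle_clock(980, 1110, leakage_factor=0.95)
poor_sample = settle_clock(980, 1020, leakage_factor=1.05)
print(good_sample, poor_sample)

With these invented figures, the low-leakage sample reaches its validated 1110 MHz while the high-leakage one is stopped by the power limit at around 1006 MHz, mirroring the sample-to-sample spread described above.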

Nvidia justifies itself by explaining that it aims to allow each individual card its maximum performance, and points out that while the variation in the maximum GPU Boost clock can be significant, the variation in the average GPU Boost clock observed in practice is lower: in the more demanding games the energy consumption limit, along with the temperature limit, stops the GPU from clocking up very high.

What Nvidia fails to say is that we as testers also slightly overestimate performance levels, as our testing is carried out under ideal conditions: brief runs on an open test bench. Although it might make our work more fun, unfortunately we can't play for an hour to heat up the GPU before taking each performance reading! We previously compared the performance of two GeForce GTX 680s: without looking for the worst case, we observed a difference of 2% in theory and 1.5% in practice. This isn't enormous, but it is a nuisance when the gap with the competition is so tight. With the margin for manoeuvre given to GPU Boost increasing to 15% on the GeForce GTX 670, we do see the difference as problematic.
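To put that 15% figure in perspective, here is a rough back-of-the-envelope calculation in Python. The 980 MHz guaranteed Boost clock comes from the GTX 670's specifications; the assumption that game performance scales at roughly 50-70% of the clock gain is ours, purely for illustration:

# Back-of-the-envelope look at the clock headroom mentioned above.
# The 15% margin and the 980 MHz guaranteed Boost clock come from the text and
# the GTX 670 specifications; the scaling factors are assumptions for illustration.

guaranteed_boost_mhz = 980
margin = 0.15                                          # headroom given to GPU Boost
best_case_mhz = guaranteed_boost_mhz * (1 + margin)    # ~1127 MHz

print(f"best-case Boost clock ~{best_case_mhz:.0f} MHz vs "
      f"{guaranteed_boost_mhz} MHz guaranteed")

# Games rarely scale 1:1 with core clock; assume ~50-70% scaling as a rough guide.
for scaling in (0.5, 0.7):
    perf_gap = margin * scaling
    print(f"clock headroom 15%, scaling {scaling:.0%} -> perf gap ~{perf_gap:.0%}")

In other words, even with imperfect scaling, the gap between a guaranteed-clock sample and a very favourable one can be well above the 1.5 to 2% we measured between two GTX 680s.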

What's the solution? Ideally Nvidia would allow testers to limit cards to the least favourable level, with the GPU Boost clock capped at the officially guaranteed value. That isn't currently possible, so we have opted to simulate such a sample ourselves by juggling with the overclocking settings. We are thus able to give you both the guaranteed performance levels and the performance you can expect to obtain from a more favourable sample.
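For the record, the arithmetic behind this simulated sample is roughly the following, sketched in Python. The figures are hypothetical (the 1084 MHz sample and the 13 MHz clock bin are illustrative), and in practice the cap is applied through the overclocking tools rather than computed like this:

# Sketch of how a "least favourable" sample can be approximated: apply a negative
# clock offset so that the card's maximum Boost clock matches the guaranteed value.
# Figures are hypothetical; the actual offset depends on each individual sample.

def offset_for_guaranteed_clock(sample_max_boost_mhz, guaranteed_boost_mhz,
                                bin_mhz=13):
    """Return the (negative) offset, rounded to the GPU's clock bin size,
    that caps this sample at the guaranteed Boost clock."""
    raw_offset = guaranteed_boost_mhz - sample_max_boost_mhz   # negative or zero
    bins = raw_offset // bin_mhz        # floor division keeps us at or below target
    return bins * bin_mhz

# Hypothetical press sample that boosts to 1084 MHz against a 980 MHz guarantee:
print(offset_for_guaranteed_clock(1084, 980))   # -104 MHz, i.e. 8 bins of 13 MHz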





