Nvidia GeForce GTX 690: review of a €1000 card! - BeHardware

Written by Damien Triolet

Published on May 3, 2012

URL: http://www.behardware.com/art/lire/864/


Page 1

Introduction, specifications



While we were expecting AMD to be first out with a new ultra high-end bi-GPU graphics card, Nvidia has in fact drawn first with the GeForce GTX 690. So what do you get from a €1000 graphics card?


The GeForce GTX 690 looks to be very much along the lines of the GeForce GTX 590 and comes in the same format. Nvidia however wanted to offer more. As power consumption is better controlled on its new GPU, performance levels can be pushed higher within the available thermal envelope. This card also has an exemplary finish, with Nvidia choosing exclusively high quality materials to give us a very exclusive model, on sale at no less than €1000!

This is very expensive, a record for a reference card. As you'll no doubt have gathered, beyond the small market actually interested in such a product, the GeForce GTX 690 has above all been designed for the shop window, to get enthusiasts dreaming.


The GeForce GTX 680 in SLI, or almost
The GeForce GTX 690 uses two GK104 GPUs based on the Kepler architecture, which has significantly better energy efficiency than the Fermi architecture. Given that the thermal envelope is the most important factor in what a bi-GPU card can do, the improved efficiency means Nvidia can offer higher specs here. This is particularly true when the best GPUs are selected. Each GPU used in the GTX 690 is thus very similar to the one used in the GeForce GTX 680.


Nvidia is using full GK104s here, each with 1536 processing units. Their base clock is lower than the GTX 680's, at 915 MHz vs 1006 MHz, but their official GPU Boost clock is similar (1019 MHz on the GTX 690 vs 1058 MHz on the GTX 680).

In practice, this GPU Boost clock isn’t actually part of the spec but corresponds to the minimum clock at which the GPU can be validated. On the GeForce GTX 680, this clock can go up to at least 1123 MHz and the two GPUs in our GeForce GTX 690 sample are able to go up to 1071 MHz.

As for memory, the two cards each have 2 GB of GDDR5 clocked at 1503 MHz on a 256-bit bus. Performance levels are thus much closer between two GTX 680s in SLI and a GTX 690 than they were between the comparable set-ups of the previous generation.


Page 2
The GeForce GTX 690, GPU Boost

Two GK104s in a luxury casing
For this test, Nvidia supplied us with a GeForce GTX 690:





The GeForce GTX 690 is structured in the same way as the GeForce GTX 590. The cooling system and PCB on the two cards are thus very similar in terms of design, although the cooler is of better quality on the new arrival.

On both cards the PCB is 28 cm long and has a GPU at each end, with the power stages in the middle along with the PCI Express switch which links them. This design gives optimum cooling thanks to a central fan which channels cool air towards the two radiators, each of which comes with an integrated vapour chamber for its GPU.

Moreover, both graphics cards have a similar power stage with five phases per GPU, require two 8-pin connectors and offer the same video outputs: three dual-link DVI ports and a mini-DisplayPort, which is enough to facilitate the installation of a Surround or 3D Vision Surround system.

A small difference here is that Nvidia has abandoned its in-house PCI Express 2.0 switch, the NF200, in favour of a PCI Express 3.0 switch, the PLX PEX8747. This PLX switch supports three PCI Express 3.0 x16 ports, which means each GPU can be linked to the system while a direct link is also established between the two GPUs.

The main difference on the 690 is that Nvidia wanted to make it an exclusive product and has therefore pushed the boat out in terms of the finish of its cooler, abandoning the much-abused plastic for nobler materials. We'll spare you the technical terms the industry gives these materials to impress you: applied to the cheapest plastic out there, such descriptions would have the same 'no doubt used by NASA' effect.

Nevertheless, Nvidia has made an effort to use quality materials along the same lines as, say, high-end ultraportable laptops. The result is a very nice, robust design that isn't too heavy. The finish really is remarkable, both visually and to the touch. Aesthetically, it's perfect.





GPU Boost
To recap, with the GeForce GTX 680 Nvidia introduced a turbo technology, GPU Boost, which doesn't behave the same way from one sample of the same GPU to the next. Because it uses real power consumption, it's non-deterministic, in contrast to the turbos of CPUs and AMD GPUs, which use an estimation. What's more, Nvidia doesn't validate all GPUs at the same maximum turbo clock.

Nvidia simply communicates an official GPU Boost clock, which is the minimum you'll find on sale. In the case of the GeForce GTX 690, this value is 1019 MHz, as opposed to the 1071 MHz we got with our test sample. This value actually corresponds to the number of 13 MHz notches (bins) the GPU can climb above its base clock: 8 bins minimum for the GTX 690, 12 bins for our test sample. Remember, the GTX 680 has a minimum of 4 bins but this can go up to at least 10 depending on the sample.
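
To make the bin arithmetic concrete, here's a minimal sketch (Python, purely illustrative) using the clocks quoted above:

    # GPU Boost clock = base clock + (number of validated bins x 13 MHz).
    BIN_MHZ = 13

    def boost_clock(base_mhz, bins):
        """Official or per-sample GPU Boost clock for a given bin count."""
        return base_mhz + bins * BIN_MHZ

    # GTX 690: 915 MHz base, 8 bins minimum, 12 bins on our sample.
    print(boost_clock(915, 8))    # 1019 MHz -> official GPU Boost clock
    print(boost_clock(915, 12))   # 1071 MHz -> our test sample

    # GTX 680: 1006 MHz base, 4 bins minimum.
    print(boost_clock(1006, 4))   # 1058 MHz -> official GPU Boost clock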

The increased turbo range for the GeForce GTX 690 is linked to the fact that Nvidia had to reduce the TDP as far as possible. It is 300W. However GPU Boost targets an energy consumption of 265W, which corresponds to what the monitoring tools report as 100%. When energy consumption goes over 265W, the clock won't increase. However GPU Boost won’t go into protection mode as long as the TDP of 300W hasn't been reached.

GPU Boost on the GTX 690 actually doesn't measure the total energy consumption of the card but rather that consumed by each GPU. The current readings aren't taken at the 12V sources, as they are on other cards, but at the inputs of the four main power stages (GPU1, GPU2, MEM1, MEM2). As we thought, and as Nvidia confirmed to us, the energy consumption of certain circuits and the PLX switch therefore isn't measured. This is negligible in comparison to the rest and therefore not very important, but Nvidia does nevertheless take it into account, using a constant that represents an estimate of the energy consumption that would otherwise have been measured.
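
To summarise this behaviour, here's a simplified sketch of the boost decision (our interpretation in Python, not Nvidia's actual firmware; the rail readings and the value of the fixed offset are illustrative assumptions, and the real card tracks each GPU individually rather than a single card total):

    BOOST_TARGET_W = 265   # above this, GPU Boost stops raising the clock
    TDP_W = 300            # above this, the card throttles to protect itself
    UNMEASURED_W = 10.0    # hypothetical constant standing in for the PLX
                           # switch and other circuits that aren't measured

    def card_power(rails):
        """Measured draw at the four main power stage inputs (GPU1, GPU2,
        MEM1, MEM2) plus the fixed estimate for unmeasured circuits."""
        return sum(rails.values()) + UNMEASURED_W

    def boost_decision(rails):
        power = card_power(rails)
        if power > TDP_W:
            return "throttle"     # protection mode beyond the 300W TDP
        if power > BOOST_TARGET_W:
            return "hold clock"   # no further boost, but no throttling either
        return "raise clock"      # headroom left for additional boost bins

    # Illustrative readings: ~115W per GPU rail, ~25W per memory rail.
    print(boost_decision({"GPU1": 115, "GPU2": 115, "MEM1": 25, "MEM2": 25}))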


Page 3
Noise levels and GPU temperature

Noise
To observe the noise levels produced by the various solutions, we put the cards in a Cooler Master RC-690 II Advanced casing and measured noise at idle and in load. We used an SSD, and all the fans in the casing, as well as the CPU fan, were turned off for the reading. The sonometer was placed 60 cm from the closed casing and ambient noise was measured at around 21 dBA.


The GeForce GTX 690 is very quiet at idle, though both GPUs remain powered on. In load, it is less noisy than all other bi-GPU systems, whether based on one or two graphics cards.


Temperatures
Still in the same casing, we took a reading of the GPU temperature with the internal sensor:


The GeForce GTX 690 cooling system is pretty effective, especially at idle.

Note that here we give the temperature of the hottest GPU, which is always the main GPU. With the GTX 690, we recorded the second GPU at 35/84 °C (idle/load) and with the GTX 680s in SLI, we got 39/83 °C for the second GPU.


Page 4
Readings and infrared thermography

Readings and infrared thermography
For this test, we used the new protocol described here.

First of all, here’s a summary of all the readings:


At idle, the CPU temperature is slightly higher with the GTX 690, which doesn’t expel all the hot air out of the casing.


In load this difference is accentuated and most components present in the casing heat up more with the GTX 690. This is particularly the case with the hard drive which is situated just behind the GeForce GTX 690. It shouldn’t be placed here, otherwise its life expectancy will be seriously reduced.

Here’s what the infrared thermography images show:


[Infrared thermography of the cards: reference GeForce GTX 680, GeForce GTX 690 and GeForce GTX 680 SLI, each at idle and in load]

While the GTX 690's GPUs are well cooled, their power stages tend to heat up.



[Infrared thermography, second series: reference GeForce GTX 680, GeForce GTX 690 and GeForce GTX 680 SLI, each at idle and in load]

As the GeForce GTX 690 only expels a small part of the hot air out of the casing, the internal temperatures increase.


Page 5
Energy consumption

Energy consumption
We used the test protocol that allows us to measure the energy consumption of the graphics card alone. We took these readings at idle on the Windows 7 desktop, as well as with the screen in standby so as to check the impact of ZeroCore Power on the Radeon HD 7000s. In load, we opted for readings in Anno 2070, at 1080p with all options pushed to maximum, as well as in Battlefield 3, at 1080p with the High quality preset:


As the GeForce GTX 690 doesn’t have any equivalent to ZeroCore Power, its two GPUs still receive current during idle on the Windows desktop and in standby. Energy consumption has however been halved in comparison to the GeForce GTX 590.

In load it slightly exceeds the limit up to which GPU Boost can increase clocks, but it remains under the TDP. Note that in Battlefield 3, GPU Boost increases the clock in spite of the fact that the 265W limit has been exceeded. This is because some of the circuits aren't measured directly; Nvidia represents them with a fixed estimate instead. In practice this doesn't make a significant difference.


Page 6
Test protocol

The test
For this test, we used the protocol used in the report on the GeForce GTX 680 which includes Alan Wake.

We have decided to no longer use the level of MSAA (4x and 8x) as the main criterion for segmenting our results. Many games with deferred rendering offer other forms of antialiasing, the most common being FXAA, developed by NVIDIA. It therefore no longer makes sense to organise an index around a given level of antialiasing, which in the past allowed us to judge a card according to its effectiveness with MSAA, something that can vary according to implementation.

At 2560x1600, we carried out the tests at two different quality levels, extreme and very high, both of which automatically include a minimum of antialiasing (either MSAA 4x or FXAA/MLAA/AAA). We also ran the tests at the first quality level at 1920x1080, as well as at the second quality level at surround resolution, 5760x1080.

We no longer show decimals in game performance results so as to make the graphs more legible. We nevertheless record these values and use them when calculating the index. If you're observant you'll notice that the size of the bars reflects the unrounded values.

The Radeons were tested with Catalyst 12.4 drivers and the GeForces were tested with the 301.11 drivers.

The main GeForce GTX 680 tested is a model with a maximum GPU Boost clock of 1084 MHz. The second, used for the SLI configuration, however has a maximum GPU Boost clock of 1097 MHz. Both GeForce GTX 690 GPUs max out at 1071 MHz.

With this report, we've taken the opportunity to introduce an X79 platform and a Core i7 3960X to our test system so as to be able to benefit from PCI Express 3.0 which was manually enabled on the GeForce.

Test configuration
Intel Core i7 3960X (HT off, Turbo 1/2/3/4/6 cores: 4 GHz)
Asus P9X79 WS
8 GB DDR3 2133 G.Skill
Windows 7 64 bits
GeForce beta 301.11
Catalyst 12.4







Page 7
Benchmark: Alan Wake

Alan Wake

Alan Wake is a pretty well executed title, ported from console and based on DirectX 9.

We used the game’s High quality levels and added a maximum quality level with 8x MSAA. We carried out a well defined movement and measured performance with Fraps.


The GeForce GTX 690 is at a similar performance level to the GeForce GTX 680 in SLI here. The Radeon HD 7970 however does a good deal better than a single GTX 680, and the Radeon HD 7970s in CrossFire X lead the field.


The Radeons dominate quite easily.


Page 8
Benchmark: Anno 2070

Anno 2070

Anno 2070 uses a development of the Anno 1404 engine which includes DirectX 11 support.

We used the very high quality mode on offer in the game and then, at 1920x1080, pushed anisotropic filtering and post-processing to the max to make them very resource hungry. We carried out a movement on a map and measured performance with Fraps.


While the GeForce GTX 500s struggled in this game, the GeForce GTX 680 showed much better form, though it couldn't keep up with the Radeon HD 7970 when maximum quality was applied. SLI scales better, however, which allows the bi-GPU Nvidia solutions to make up ground and take the lead at 1080p.



At very high resolutions, the GeForce GTX 680 and Radeon HD 7970 were on a par.


Page 9
Benchmark: Batman Arkham City

Batman Arkham City

Batman Arkham City was developed with a recent version of Unreal Engine 3 which supports DirectX 11. Although this mode suffered a major bug in the original version of the game, a patch (1.1) has corrected this. We used the game benchmark.

All the options were pushed to a maximum, including tessellation which was pushed to extreme on part of the scenes tested. We measured performance in Extreme mode (which includes the additional DirectX 11 effects) with FXAA High (high resolution) and MSAA 8x.


The good news is that after 5 months AMD has finally corrected the performance problems with MSAA that affected the Radeons. This puts the Radeon HD 7970 ahead of the GeForce GTX 680. The bad news is that CrossFire X still doesn't work.



At high resolutions with FXAA, the Radeon HD 7970 is left trailing.


Page 10
Benchmark: Battlefield 3

Battlefield 3

Battlefield 3 runs on Frostbite 2, probably the most advanced graphics engine currently on the market. A deferred rendering engine, it supports tessellation and calculates lighting via a compute shader.

We tested High and Normal modes and measured performance with Fraps on a well-defined route. Note that a patch designed to improve performance on the Radeon HD 7000s came out on the 14th February. Naturally we installed it and noted a gain of between 1 and 2%.


Here the GeForce GTX 690 gives a similar level of performance to the Radeon HD 7970 in CrossFire X.



You can enjoy the Ultra quality mode at 5760x1080 on three screens with the GeForce GTX 690.


Page 11
Benchmark: Bulletstorm

Bulletstorm

Although Bulletstorm only runs in DirectX 9 mode, the rendering is pretty nice, based on version 3.5 of Unreal Engine.

All the graphics options were pushed to a max (high) and we measured performance with Fraps, with MSAA 4x and then 8x.


At MSAA 8x, the Radeons outdo the GeForces.



This lead disappears at high res with MSAA 4x.


Page 12
Benchmark: Civilization V

Civilization V

Pretty successful visually, Civilization V uses DirectX 11 to improve quality and optimise performance in the rendering of terrain (thanks to tessellation) and to implement a special texture compression (thanks to compute shaders), which allows it to keep the scenes of all the leaders in memory. This second use of DirectX 11 doesn't concern us here, however, as we used the game's built-in benchmark run on a game map. We zoomed in slightly so as to reduce the CPU limitation, which has a strong impact in this game.

All settings were pushed to a max and we measured performance with shadows and reflections. The latest patch was installed.


While the GeForce GTX 680 outdoes the Radeon HD 7970 here, CrossFire X scales better than SLI, which gives the bi-GPU Radeon set-up the edge.

Although the game itself supports surround gaming, this isn’t the case in our test scene.


Page 13
Benchmark: Crysis 2

Crysis 2

Crysis 2 uses a development of the Crysis Warhead engine optimised for efficiency, but adds DirectX 11 support via a patch, and this can be quite demanding. Tessellation, for example, is implemented excessively, in collaboration with NVIDIA, with the aim of causing Radeon performance to plummet. We have already exposed this issue here.

We measured performance with Fraps on version 1.9 of the game.


The GeForce GTX 680 in SLI is almost exactly on a par with the GeForce GTX 690. The Radeons are however in front at 2560x1600.



At very high res, performance levels between the cards are similar once again.


Page 14
Benchmark: F1 2011

F1 2011

The latest Codemasters title, F1 2011 uses a slight development of the F1 2010 and DiRT 3 engine, which retains DirectX 11 support.

We pushed all the graphics options to the max and used the game's own test tool on the Spa-Francorchamps circuit with a single F1 car.


While the Radeon HD 7970 is slightly ahead of the GeForce GTX 680 here, CrossFire X performance is significantly limited by the CPU. You’ll note that the AMD solutions are limited by the CPU more quickly than the NVIDIA solutions.




At very high res, the GTX 680’s lead is cut, but it still has the advantage.


Page 15
Benchmark: Metro 2033

Metro 2033
Still one of the most demanding titles, Metro 2033 brings all recent graphics cards to their knees. It supports GPU PhysX, but only for the generation of particles during impacts, a rather discreet effect that we therefore didn't activate during the tests. In DirectX 11 mode, performance is identical to DirectX 10 mode but with two additional options: tessellation for characters and a very advanced, very demanding depth of field feature.

We tested it in DirectX 11, at maximum quality (including DoF and MSAA 4x), very high quality and with tessellation on.


No mono-GPU card allows you to play Metro 2033 comfortably at maximum quality. Moreover, the GeForce GTX 680 struggles in this mode, which is very demanding in terms of memory bandwidth. The Radeon HD 7970s in CFX do best.



Things are better for the GeForces at high res but not enough to beat the Radeons.


Page 16
Benchmark: Total War Shogun 2

Total War Shogun 2

Total War Shogun 2 has a DirectX 11 patch, developed in collaboration with AMD. Among other things, it gives tessellation support and a higher quality depth of field effect.

We tested it in DirectX 11 mode, at max quality, with MSAA 4x and MLAA.


Since the release of the patch at the end of March, the GeForces have been suffering from a performance problem, probably because the optimisations specific to this game no longer work. As a result the Radeons open up a lead.



At very high resolution with MLAA, the Radeon HD 7970 jumps ahead of the GeForce GTX 680.


Page 17
Performance recap

Performance recap
Although individual game results are obviously worth looking at when you want to gauge performance in a specific game, we have also calculated a performance index based on all tests, with the same weight given to each game. The GeForce GTX 680 in SLI is set at an index of 100:
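
In other words, the index construction boils down to something like the following sketch (Python, for illustration; the fps figures are made up and the exact averaging method is our assumption — only the equal per-game weighting and the reference at 100 come from our protocol):

    def performance_index(results, reference="GTX 680 SLI"):
        """Average each card's per-game fps ratio against the reference
        configuration, with equal weight for every game, scaled to 100."""
        ref = results[reference]
        index = {}
        for card, fps in results.items():
            ratios = [fps[game] / ref[game] for game in ref]
            index[card] = 100 * sum(ratios) / len(ratios)
        return index

    # Illustrative numbers only, not our measured results.
    results = {
        "GTX 680 SLI": {"Anno 2070": 80.0, "Battlefield 3": 95.0},
        "GTX 690":     {"Anno 2070": 78.0, "Battlefield 3": 92.5},
    }
    print(performance_index(results))  # the reference lands exactly at 100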


[Performance index graphs: 1920x1080 / 2560x1600 / 5760x1080]

On average, the GeForce GTX 690 finishes 3% behind the GeForce GTX 680 SLI at 2560x1600 and 2% behind at 1920x1080. This is an excellent result for a bi-GPU card. The average performance of the Radeon HD 7970s in CrossFire X is similar.

Note that the positions of the GeForce GTX 680 and the Radeon HD 7970 have been reversed compared to the results in our initial test of the GeForce GTX 680. There are several reasons for this:

- Here we used a retail GeForce GTX 680, which had a maximum clock of 1084 MHz as against 1110 MHz on the press sample.
- AMD has finally (after 5 months!) corrected a performance problem in Batman AC with MSAA.
- Nvidia cards suffer from a performance problem in the latest version of Total War Shogun 2.
- The 1920x1080 index only includes the extreme quality benches, because of our ultra high-end focus here, and the GeForce GTX 680 fades a bit at extreme quality.


Page 18
GPU Boost performance and overclocking

GPU Boost performance and overclocking
We wanted to observe the gains GPU Boost gives in practice, and we managed to neutralise it in order to measure them.

Moreover, we managed to overclock the GeForce GTX 690 by increasing the GPU clock by 156 MHz (it therefore varies between 1071 MHz and 1227 MHz) and the memory clock by 300 MHz, taking it to 1800 MHz (7.2 Gbps). The energy consumption limit was then pushed to its maximum, +35% or 358W.
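
The figures check out with some simple arithmetic (Python, using only the numbers quoted above):

    # GTX 690 overclock: +156 MHz on the GPU clock, +300 MHz on the memory.
    base, sample_boost = 915, 1071      # MHz: stock base / max boost of our sample
    mem = 1503                          # MHz: stock GDDR5 clock
    gpu_offset, mem_offset = 156, 300   # MHz: offsets applied

    print(base + gpu_offset)               # 1071 -> new minimum GPU clock
    print(sample_boost + gpu_offset)       # 1227 -> new maximum boost clock
    print(mem + mem_offset)                # 1803 -> ~1800 MHz, or 7.2 Gbps effective
    print(round(1.35 * 265))               # 358  -> power limit at +35%
    print(round(100 * gpu_offset / base))  # 17   -> turbo margin (%) of our sample
    print(round(100 * mem_offset / mem))   # 20   -> memory overclock (%)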

These results were obtained at 2560x1600 at an extreme quality level:


GPU Boost gives the GeForce GTX 690 an extra 5% in performance, which is modest in comparison to the 17% turbo clock margin on our sample card. This can be explained in part by the fact that the TDP constraint is relatively tight, which prevents the GPUs from attaining their maximum clock in all games. We should also say that at very high res with an extreme quality level, particularly where antialiasing is concerned, memory bandwidth, a weak spot of the GK104, plays an important role.

The good news is that you can overclock the memory on the GeForce GTX 690 by quite a bit, as you can on the GeForce GTX 680: +20%. When we also increased the GPU clock by 14% and the energy consumption limit by 35%, we got an average gain of 19% in games, which is not bad at all for a card of this calibre!


Page 19
Conclusion

Conclusion
With the GeForce GTX 690, Nvidia has put a technological beast in its shop window that will get pulses racing even if it doesn't stand up all that well to rational judgement. As is often the case in the ultra high-end segment, and even more so in the present case, such a model doesn't really come into the reckoning for mere mortals, who need neither such graphics power nor such a finish on what is, after all, a product hidden away in a PC casing. And even then, who's willing to shell out €1000 on a graphics card alone?

Yet it’s difficult to remain indifferent in the face of what’s an excellent showing from this GeForce GTX 690, which offers a level of performance that’s very close to what you get with a pair of GeForce GTX 680s in SLI. The GTX 690 also makes less noise and takes up less room. To underline the exclusiveness of the card and justify the outsized price tag, Nvidia has worked on the design detail to an extent unheard of up until now. The luxurious finish will delight those who like nice looking objects and is something that we’d like to see more of at the high end.


If you're in search of some rational sense before going for this GeForce GTX 690, just tell yourself that a pair of GeForce GTX 680s offers a similar level of performance at the same price but without this exclusive touch. You will however need a well-cooled casing and a screen at 2560x1600/1440 (or a surround system) to fully make the most of its graphics power.

If on the other hand you’re looking for a way of avoiding succumbing to temptation and you’re out of favour with your banker, you can tell yourself that 2 GB per GPU isn’t really enough for such a monster and Nvidia should have pushed things further and gone for 4 GB. And in any case, one day or another a bigger GPU will eventually appear and will probably represent a more reasonable option.

What about the competition? Will the AMD Radeon HD 7990 be able to rival the GeForce GTX 690? In terms of performance it probably will. In terms of the price/performance ratio, it definitely will. It is however very doubtful whether the GTX 690 will be equalled in terms of finish. Unfortunately we haven’t yet seen the back of the plastic era!

