Report: NVIDIA responds to AMD with the GeForce GTX 590 - BeHardware

Written by Damien Triolet

Published on March 24, 2011

URL: http://www.behardware.com/art/lire/825/


Page 1

Introduction



NVIDIA has been preparing the bi-GPU GeForce GTX 590 for a few months now, ready to do battle with the AMD Radeon HD 6990. There was no question of letting the competition parade at the top of the graphics card rankings, especially after being beaten by the Radeon HD 5970 last time around. With less successful handling of energy consumption, NVIDIA’s task was a tough one, but they were nevertheless able to take advantage of successive AMD delays to hone their drivers and adapt the positioning of the GeForce GTX 590 to the performance of the Radeon HD 6990. Will there be a clear-cut winner in this duel? To answer this question we put the cards through our tests at standard resolutions, in surround and in 3D Vision, without forgetting heat management and noise levels…


Beyond the 300W barrier
Although the PCI Express 2.0 spec doesn’t provide for cards that consume over 300W, both AMD and NVIDIA decided to ignore this, a first for reference cards. Beyond the fact that these cards won’t be validated by the PCI-SIG, there are no real consequences, on condition that you have a big enough power supply.


While it was hard to see how NVIDIA could come up with a design housing two GF110 GPUs within a 300W envelope, given the consumption levels of the GF110 and the lack of advanced energy consumption management, it wasn’t outside the bounds of possibility that AMD would manage it, using its PowerTune technology.

NVIDIA probably came to the same conclusion and, to ensure a crushing victory, went for a thermal envelope of 450W, presented as a ‘gaming TDP of 365W’. However, AMD also decided to up the ante as much as possible by bringing out its Radeon HD 6990 at 375W or 450W, depending on the bios used. On discovering these specs, NVIDIA must have understood that the battle would be tougher than they’d thought.


Page 2
Specifications, the GeForce GTX 590

Specifications

The GeForce GTX 590 uses two full GF110s, the same GPU as used on the GeForce GTX 580. Their clocks have however been revised downwards, from 772 to 607 MHz for the GPU and from 2004 to 1707 MHz for the memory. Two GeForce GTX 580s in SLI therefore have 27% higher processing power and 17% more memory bandwidth.
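As a quick sanity check on those percentages, the arithmetic follows directly from the clocks quoted above (a minimal sketch in Python):

# Relative advantage of two GeForce GTX 580s in SLI over the GTX 590,
# derived purely from the clock figures quoted above.
gtx580_gpu, gtx590_gpu = 772, 607      # MHz, GPU clocks
gtx580_mem, gtx590_mem = 2004, 1707    # MHz, memory clocks

gpu_gain = (gtx580_gpu / gtx590_gpu - 1) * 100
mem_gain = (gtx580_mem / gtx590_mem - 1) * 100

print(f"Processing power advantage: {gpu_gain:.0f}%")   # ~27%
print(f"Memory bandwidth advantage: {mem_gain:.0f}%")   # ~17%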


The reference GeForce GTX 590
For this test, NVIDIA supplied us with a reference GeForce GTX 590:





NVIDIA has managed to fit two enormous GF110s on a relatively compact PCB. The GeForce GTX 590 is thus "only" 28cm long, a little more than 2cm shorter than the Radeon HD 6990 and only 1cm longer than the GeForce GTX 580/570. In terms of connectivity, NVIDIA has gone for three DVI Dual-Link outs as well as a mini-DisplayPort out. This opens the way for a Surround or 3D Vision Surround setup.

The card is very well finished overall, with an illuminated GeForce logo and the option of removing the cooler hood without having to dismantle the cooler entirely, something which would void the guarantee. All you need to do is remove four small screws and release a latch to access the fan and radiators and get rid of any dust.

Overall, the NVIDIA design is similar to the AMD one: the GPUs are placed at the extremities of the PCB, the power stage is positioned in the middle and each GPU has its own radiator based on a vapour chamber. NVIDIA has however gone for a fan, in contrast to AMD who use a blower. In both cases only half of the hot air is expelled from the casing.

The power stage is made up of 5 phases per GPU as well as two additional phases for the memory. Although it’s only clocked at 854 MHz (1707 MHz for data), 1.25 GHz certified Samsung GDDR5 is used. Lastly, a traditional NF200 PCI Express switch is used to connect it all.

Two 8-pin power connectors are required for the GeForce GTX 590. To recap, here’s the maximum power defined for each channel:

- PCIe bus: 75W
- 6-pin connector: 75W
- 8-pin connector: 150W

With two 8-pin connectors, the GeForce GTX 590 can thus draw up to 375W, which roughly corresponds to NVIDIA’s 365W gaming TDP. This figure doesn’t mean very much, however, and seems to be given merely to avoid quoting the considerably higher real maximum: NVIDIA itself says that maximum energy consumption can be some 25% higher than this figure, i.e. a real TDP of 450W.
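The arithmetic behind these figures is simple enough; the sketch below just adds up the per-channel maximums listed above and compares NVIDIA’s two TDP figures:

# Power budget of the GeForce GTX 590, from the per-channel maximums above.
pcie_slot = 75            # W, drawn through the PCI Express slot
eight_pin = 150           # W, per 8-pin connector

board_limit = pcie_slot + 2 * eight_pin
print(f"Connector budget: {board_limit} W")               # 375 W

gaming_tdp, real_tdp = 365, 450   # W, NVIDIA's stated figures
headroom = (real_tdp / gaming_tdp - 1) * 100
print(f"Real TDP above gaming TDP: {headroom:.0f}%")      # ~23%, roughly the 25% NVIDIA quotes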

As with the GeForce GTX 580, 570 and 560 Ti, NVIDIA has included OCP to protect the GeForce GTX 590 from overconsumption. An important difference with the other cards is that OCP is active all the time on the GTX 590 and not only for a handful of pieces of software such as Furmark. This is crucial as we noted after our first sample card gave up the ghost.

Due to a bug, Windows can sometimes automatically install a driver that doesn’t support the GTX 590 and therefore doesn’t activate OCP across the board. In our test, a heavy load pushed energy consumption up to dangerous levels and blew the card. After any installation of a GeForce GTX 590, we therefore advise you to make sure that any old drivers have been deleted. It goes without saying that we also advise against using any drivers other than versions which officially support the GeForce GTX 590!


Page 3
Energy consumption and noise

Energy consumption
We measured the energy consumption of the graphics cards on their own. We took these readings at idle, in 3DMark 06 and in Furmark. Note that we used a version of Furmark that isn’t detected by NVIDIA’s drivers. On the GeForce GTX 580/570/560 Ti, OCP is only enabled when such a piece of software is detected; on the GeForce GTX 590, however, OCP is always on, without the application having to be detected.

NVIDIA has however added an additional tweak which consists of reducing clocks, and therefore performance, when software such as Furmark is detected, whatever the energy consumption level. NVIDIA justifies this as an additional level of security. We suspect, however, that it’s also a way of preventing excessive consumption from showing up in tests… In any case, the version we used wasn’t subject to this tweak.


Strangely, PowerTune seems to behave differently from one bios to the other on the Radeon HD 6990, which we imagine may be linked to GPU temperature. The higher the temperature gets, the more current leakage there is, which has a direct effect on energy consumption. We can therefore imagine that AMD hasn’t fully taken this higher temperature into account and hasn’t been particularly conservative in defining the activity-to-energy-consumption estimation ratios in 450W mode.

This leaves more of a margin, either for overclocking the GPU or in situations of extreme load, as is the case with Furmark. Thus in 375W mode, we measured the GPU clock varying from 535 to 830 MHz, with an average of 560 MHz, while in 450W mode the GPU clock oscillated between 670 and 880 MHz, with an average of 765 MHz. The resulting gain in performance in Furmark was a little more than 20%!
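For perspective, here is the ratio between the average throttled clocks we measured in the two modes (a minimal sketch; note that the raw clock ratio is higher than the measured Furmark gain of a little over 20%, which suggests throughput doesn’t scale fully linearly with the throttled clock here):

# Average PowerTune-throttled GPU clocks measured in Furmark (Radeon HD 6990).
avg_375w = 560   # MHz, average with the 375W bios (range: 535-830 MHz)
avg_450w = 765   # MHz, average with the 450W bios (range: 670-880 MHz)

clock_gain = (avg_450w / avg_375w - 1) * 100
print(f"Average clock advantage of 450W mode: {clock_gain:.0f}%")  # ~37%
# The measured Furmark performance gain was only a little over 20%.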

The GeForce GTX 590 consumes a similar amount of energy to the Radeon HD 6990 with the 450W bios. It consumes 370W in 3DMark, which roughly corresponds to NVIDIA’s gaming TDP.


Noise levels
We placed the cards in an Antec Sonata 3 casing and measured noise levels at idle and in load, with the sonometer placed 60 cm from the casing.


NVIDIA has worked hard to keep noise levels reasonable on what is a monster of a card, the GeForce GTX 590.

Although the Radeon HD 6990 is quiet at idle, in load, it beats the record held by the GeForce GTX 480s in SLI with a reading of over 59 dB. Here the blower is running at full speed (4900 RPM).

Note that in 375W mode, noise levels vary cyclically between 52.8 dB and 59.4 dB. The blower runs at 3600 RPM at first and then, as the temperature goes up, accelerates progressively to 4900 RPM; the temperature then drops after several tens of seconds, the fan gradually falls back to 3600 RPM, the temperature starts to climb again and the cycle repeats. We therefore imagine that if your casing is well cooled, you should be able to keep the fan running at the lower speed. In any case, it’s a long way from being quiet!


Page 4
Temperatures

Temperatures
Still in the same casing, we took a temperature reading of the GPU using internal sensors:


The GPU temperatures climb relatively high in load, which makes us think that NVIDIA has prioritised lower noise levels when calibrating its cooling system.

Here’s what the infrared thermography imaging shows:


GTX 590 at idle


GTX 590 in load


These shots show that the whole of the GTX 590 heats up with the power stage getting particularly hot.

Note that the photos use a scale that is well adapted to comparing a wide range of graphics cards but which flattens out readings over 80°C. Here’s the same shot with a different scale which better represents the differences between the high temperatures:


With the GeForce GTX 590, half the hot air stays inside the casing and the components around the card heat up, particularly the hard drives, as you can see in our report on the thermal characteristics of graphics cards, which has been fully updated with the results for the GeForce GTX 590.


Page 5
Surround, 3D Vision, the test

Surround gaming
Given the rather high-end status of the graphics solutions we’re covering in this report, we wanted to take a look at how they respond in surround gaming. To do this we used three screens at 1920x1080 for a total resolution of 5760x1080.

On the AMD side, all Radeon HD 5000s and 6000s support surround gaming via Eyefinity technology, as do all the CrossFire X solutions. Note however that there’s a limitation in terms of DVI/HDMI/VGA connectivity: the Radeons can only feed two of these outputs. For any additional screens, you have to use the DisplayPorts, with an active adaptor (supplied with the Radeon HD 6990s) if your screens don’t support DisplayPort natively.

In the NVIDIA camp, things are somewhat different as any one GeForce can only drive two screens, whatever connectivity you’re using. For surround gaming you therefore need an SLI system or a bi-GPU card. You can then connect two screens to one of the GPUs and the third to the other.


AMD and NVIDIA offer similar features to create a large display area, which is seen as a single large screen by Windows and by games, facilitating support for surround gaming. Both drivers allow you to modify the position of the screens (so you don’t have to fiddle around with the outputs to get the screens in the right order) and introduce bezel correction to avoid any gaps caused by screen borders. We didn’t activate bezel correction as it increases the resolution, from 5760x1080 to 5920x1080 for example, and introduces an additional load, as quantified below.
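That additional load is easy to quantify: using the example figures above, bezel correction increases the number of pixels to render by a little under 3% (a minimal sketch):

# Extra rendering load introduced by bezel correction, using the example
# resolutions quoted above (5760x1080 without, 5920x1080 with correction).
base_pixels = 5760 * 1080
corrected_pixels = 5920 * 1080

overhead = (corrected_pixels / base_pixels - 1) * 100
print(f"Additional pixels to render: {overhead:.1f}%")   # ~2.8%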

Note that while the Catalyst Control Center has given us a few problems recently, the Eyefinity page has never caused any trouble at all, in contrast to the NVIDIA drivers, which seem very capricious on this point. Once the large surround surface is set up with the NVIDIA solutions, everything works perfectly, as long as you get that far!

What with all the system crashes, driver control panel crashes, display issues and so on, we had quite a time of it! After numerous attempts and changes of connectors, we finally managed to get surround gaming going, without finding any logic or similarity across the three systems tested. We imagine these problems are linked to the use of HDMI (used for two or three of the screens) and to models customised by partners, as the connectivity then differs from the reference boards. In any case, it looks as if NVIDIA didn’t check enough combinations in its validation process.


3D Vision
For this test, we looked at 1080p 3D stereo performance. Given that the AMD ecosystem is still very poor (even nonexistent) here, we tested exclusively with 3D Vision, excluding any games that pose a problem: H.A.W.X., S.T.A.L.K.E.R. Call of Pripyat and Crysis Warhead.


Note also that to enjoy 3D Vision in F1 2010, we had to settle for DX9 as NVIDIA has been dragging its feet on putting a driver profile adapted to DX11 into place. Is this a way of expressing their annoyance with Codemasters over the fact that development was done in partnership with AMD? Whatever the case, GeForce users are the ones who suffer!

In 3D Vision mode you’re limited to 60 fps. We nevertheless deactivated vertical synchronisation in all games so as to avoid any ceiling effect.


The test
For this test, we introduced the excellent Bulletstorm to our protocol. The tests were carried out at 1920x1200 with MSAA 4x and MSAA 8x, as well as at 2560x1600 with MSAA 4x, except for Crysis and Far Cry 2 which were tested at several additional resolutions. We also tested all these solutions at 5760x1080 with and without MSAA 4x. All settings were pushed to a maximum. The most recent updates were installed and all the cards were tested with the most recently available drivers.
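To give an idea of the relative GPU load at each of these test resolutions, here are the raw pixel counts (a minimal sketch, ignoring antialiasing and scene complexity):

# Raw pixel counts of the resolutions used in the test protocol.
modes = {
    "1920x1200": 1920 * 1200,   # standard test resolution
    "2560x1600": 2560 * 1600,   # 30-inch panel
    "5760x1080": 5760 * 1080,   # three 1080p screens in surround
}
for name, pixels in modes.items():
    print(f"{name}: {pixels / 1e6:.1f} Mpixels")
# Surround pushes ~2.7x the pixels of 1920x1200 and ~1.5x those of 2560x1600.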

We decided to stop showing decimals in the game performance results so as to make the graphs more readable. We nevertheless recorded these values and used them when calculating the index. If you’re observant you’ll notice that the sizes of the bars also reflect them.

The Radeons and the GeForces were tested with texture filtering at the high quality setting. All the Radeons were tested with the Catalyst 11.4 beta driver (8.84.3 beta 2). All the GeForces were tested with the beta 267.71 drivers, which give small gains in most games, particularly for the GeForce GTX 580, on its own and in SLI.


Test configuration
Intel Core i7 980X (HT deactivated)
Asus Rampage III Extreme
6 GB DDR3 1333 Corsair
Windows 7 64-bit
Forceware 267.71 beta
Catalyst 11.4 beta (8.84.3 beta 2)


Page 6
Starcraft 2

Starcraft 2

To test Starcraft 2, we launched a replay and measured performance while following one player’s view.

All graphics settings were pushed to a maximum. The game doesn’t support antialiasing, which we therefore activated in the AMD and NVIDIA driver control panels. Patch 1.0.3 was installed.


A problem that appeared with the latest drivers affected performance in CrossFire mode.

To avoid any cheating, the Starcraft 2 developers have prevented the use of surround gaming. 3D stereo is however supported:


You have to push antialiasing up to 8x before the highest performance solutions display a significant advantage.


Page 7
Mafia II

Mafia II

The Mafia II engine hands physics over to NVIDIA’s PhysX libraries and takes advantage of this to offer high physics settings, which can be partially accelerated by the GeForces.

To measure performance we used the built-in benchmark and all graphics options were pushed to a maximum, first without activating the PhysX effects accelerated by the GPU:


The Radeon HD 6990 has the advantage here at 2560x1600 while the GeForce GTX 590 takes the lead at 1920x1200.

Next, we set all PhysX options to high:


With PhysX effects pushed to a maximum, performance levels dive. Note that they are in part limited by the CPU, as not all additional PhysX effects are accelerated. The Radeons are of course a long way behind.


Mafia II can detect when a large display area is made up of three monitors and registers them as 3 x 1920x1080 and not 5760x1080, which enables it to adapt the user interface automatically. For reasons unknown however, this detection system fails with a Radeon HD 6990 + Radeon HD 6970 pairing (triple CrossFire X) as well as with the Radeon HD 5970. The interface is then unusable and the rendering of the benchmark mode is only displayed on one screen.

On the GeForce side, activating antialiasing has no effect. We imagine that NVIDIA must have blocked antialiasing in its drivers to avoid the impact of running out of memory.


The GeForce GTX 590 steals a march on the GTX 580 when antialiasing is activated, but remains behind the GeForce GTX 570s in SLI.


Page 8
Crysis Warhead

Crysis Warhead

Crysis Warhead replaces Crysis and has the same resource-heavy graphics engine. We tested it in version 1.1 and in 64-bit mode as this is the main innovation. Crytek has renamed the different graphics quality modes, probably so as not to upset gamers disappointed at being unable to activate the highest quality mode because of its excessive demands on system resources. The high quality mode has been renamed “Gamer” and very high is now called “Enthusiast”. This is the mode we tested.



The Radeon HD 6900s do pretty well here and benefit from having 2 GB of memory at 2560x1600 AA8x.


The GeForce GTX 560s in SLI haven’t got enough memory to launch Crysis in these conditions. The same goes for the Radeons in CrossFire when they are only equipped with 1 GB of memory, but only when antialiasing is activated. Triple CrossFire comes in very useful here!

Crysis isn’t very well adapted to 3D stereo and NVIDIA’s driver isn’t very efficient here, which means too many concessions have to be made, with pretty average 3D results even then. We therefore consider Crysis Warhead unplayable in 3D stereo.


Page 9
Bulletstorm

Bulletstorm

Bulletstorm is one of the best of the current crop of games. Although it only runs in DirectX 9 mode, the rendering is pretty nice, based on version 3.5 of the Unreal Engine.

All the graphics options were pushed to a max (high) and we measured performance with Fraps.


The Radeons do very well in this game and the Radeon HD 6990 has a significant lead over the GeForce GTX 590.


Although the GeForces do pretty well in surround gaming, they are more impacted by activation of antialiasing.


The GeForce GTX 590 is on a par with the GeForce GTX 570s in SLI once 3D Vision is activated. These systems are a long way ahead of the GeForce GTX 580 when antialiasing is activated.


Page 10
Far Cry 2

Far Cry 2

This episode of Far Cry isn’t really a Crytek development: as owner of the licence, Ubisoft handled it, with Crytek, maker of the first episode, working on Crysis. It was no easy thing to inherit the graphics revolution that accompanied Far Cry, but the Ubisoft teams have done pretty well, even if the graphics don’t go as far as those in Crysis. The game is also less resource-heavy, which is no bad thing. It has DirectX 10.1 support to improve performance on compatible cards. We installed patch 1.02 and used the “ultra high” quality graphics mode.



Although the GeForces do particularly well in Far Cry 2, the CrossFire system does slightly better than the SLI, which allows the Radeons to position themselves better. Strangely, the Radeon HD 6950 1 GB cards give significantly higher performance than the Radeon HD 6950 2 GBs.


For reasons unknown to us, the Radeons are greatly limited by the CPU in Far Cry 2 in surround gaming, which gives the GeForces an advantage. The CrossFire solutions stay over 60 fps however and allow you to enjoy Far Cry 2 without any problem in these conditions.


All the solutions are able to give 60 fps per eye in this test.


Page 11
H.A.W.X.

H.A.W.X.

H.A.W.X. is a flying action game. It uses a graphics engine that supports DirectX 10.1 to optimise performance. Among the graphics effects it supports, note the presence of ambient occlusion, which we pushed to a max along with all the other options. We used the built-in benchmark and patch 1.2 was installed.


The GeForces are more efficient here and the GTX 560 Ti pairing is on a par with the Radeon HD 6990.


All the solutions tested here allow you to enjoy surround gaming in this title, with the exception of the single GeForce GTX 580 of course, as it doesn’t support this mode. Note that at this resolution H.A.W.X. no longer offers antialiasing beyond 2x.

You can’t play H.A.W.X. in stereo 3D without significant annoyance.


Page 12
BattleForge

BattleForge

As the first game to support DirectX 11, or more precisely Direct3D 11, we couldn’t not test BattleForge. An update added in September 2009 brought support for Microsoft’s new API.

The developers use Compute Shaders 5.0 to accelerate SSAO (ambient occlusion) processing. Compared to the standard implementation via pixel shaders, this technique makes more efficient use of the available processing power by saturating the texturing units less. BattleForge offers two SSAO levels: High and Very High. Only the second, called HDAO (High Definition AO), uses Compute Shaders 5.0.

We used the game’s built-in benchmark and installed the latest available update (1.2 build 304941).


The GeForce GTX 560 Tis finish here on a level with the Radeon HD 6990, which only has a lead at 2560x1600 AA4x. Activation of antialiasing 8x has a bigger impact on the AMD cards here.


Activation of antialiasing 4x has less of an impact on the GeForces.


The GeForce GTX 590 is on a par with the GTX 570s in SLI.


Page 13
Civilization V

Civilization V

Pretty successful visually, Civilization V uses DirectX 11 to improve quality and optimise performance in the rendering of terrain, thanks to tessellation, and to implement a special texture compression, thanks to the compute shaders. This compression allows it to keep the scenes of all the leaders in memory. This second use of DirectX 11 doesn’t concern us here, however, as we used the built-in benchmark run on a game map. We zoomed in slightly so as to reduce the CPU limitation, which has a strong impact in this game.

All settings were pushed to a max and we measured performance with shadows and reflections. Patch 1.2 was installed.


The GeForces do very well here and CPU limitation doesn’t kick in as quickly as for the Radeons.

Although Civilization V supports surround gaming, its built-in benchmark tool unfortunately doesn’t.


Strangely, all the cards were limited to around 30 fps and there’s a lot of jumpiness when you move around the game map.


Page 14
S.T.A.L.K.E.R. Call of Pripyat

S.T.A.L.K.E.R. Call of Pripyat

This new S.T.A.L.K.E.R. sequel is based on a new development of the graphics engine, which moves up to version 01.06.02 and supports Direct3D 11. The new API is used both to improve performance and quality, with the option of more detailed lighting and shadows as well as tessellation support.

Maximum quality mode was used and we activated tessellation. The game doesn’t support 8x antialiasing. Our test scene is 50% outdoors and 50% indoors, with several characters around in the indoor part.


The Radeon HD 6900s are very efficient here, especially at 2560x1600.


The Radeons are also very efficient in surround mode, especially with antialiasing where they benefit from their 2 GB memory. The 1 GB solutions are clearly limited by their memory.

3D Vision is only supported in S.T.A.L.K.E.R. at ridiculously low graphics options. We therefore consider it incompatible.


Page 15
F1 2010

F1 2010

The latest Codemasters title, F1 2010 uses the same engine as DiRT 2 and supports DirectX 11 via patch 1.1, which we installed. As this patch was developed in collaboration with AMD, NVIDIA told us that they had only received it late in the day and hadn’t yet had the opportunity to optimise their drivers for the DirectX 11 version of the game.

We pushed all the graphics options to a max and used the game’s own test tool on the Spa-Francorchamps circuit with a single F1 car.


In F1 2010, the Radeons are particularly at ease. Note that triple CrossFire X doesn’t bring any gain here.


For us, F1 2010 is the game in which surround gaming adds most to the experience, notably improving immersion and giving more of a sense of speed.

Although triple CrossFire X still doesn’t bring any gain in this game, all the 2 GB bi-GPU Radeon solutions allow you to play F1 2010 with antialiasing in surround gaming mode. Things aren’t quite as good for the GeForce GTX 580s in SLI but they nevertheless run with good fluidity. There probably isn’t any point activating antialiasing with the GeForce GTX 570s in SLI or with the GeForce GTX 590, however.


3D Vision is only supported in DirectX 9, where the rendering doesn’t require too much processing power, with all the cards easily managing 60 fps per eye.


Page 16
Metro 2033

Metro 2033
Probably the most demanding title right now, Metro 2033 brings all recent graphics cards to their knees. It supports GPU PhysX but only for the generation of particles during impacts, a rather discreet effect that we therefore didn’t activate during the tests. In DirectX 11 mode, performance is identical to DirectX 10 mode but with two additional options: tessellation for the characters and a very advanced, very demanding Depth of Field feature.

We tested it in DirectX 11 mode, at a very high quality level with tessellation activated, first with MSAA 4x alone and then with MSAA 4x plus Depth of Field.


The cards equipped with just 1 GB of memory can’t cope when Depth of Field is activated. The GeForce GTX 590 is on a par with the GeForce GTX 570s in SLI but remains behind the Radeon HD 6990.


The very heavy Metro 2033 has all the graphics solutions on their knees in surround gaming. You need 3 GPUs to enjoy good quality here, each with 2 GB of memory if you’re going to activate antialiasing.


The same goes for 3D Vision, where you either need 2 GPUs or need to revise graphics options down.


Page 17
Performance recap

Performance recap
Although individual game results are worth looking at, especially as high-end solutions are liable to be levelled by CPU limitation in some games, we have calculated a performance index based on all the tests, with the same weight given to each game. Mafia II is included with the scores obtained without GPU PhysX effects.
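For clarity, here’s how such an equal-weight index can be computed (a minimal sketch; the fps figures below are hypothetical, purely to illustrate the calculation, and not our actual data): each game’s result is normalised against the reference card, then the normalised scores are averaged with equal weight.

# Equal-weight performance index: normalise each game's fps against the
# reference card, then take the unweighted average. The fps values below
# are hypothetical, purely to illustrate the calculation.
results = {                      # game -> (card fps, reference card fps)
    "Game A": (72.4, 60.1),
    "Game B": (55.0, 58.3),
    "Game C": (90.2, 71.7),
}

index = sum(card / ref for card, ref in results.values()) / len(results) * 100
print(f"Performance index: {index:.0f}")   # the reference card scores 100 by construction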

We attributed an index of 100 to the Radeon HD 580 2GB at 1920x1200 with 4xAA:



As we suspected, the Radeon HD 6990 finishes 3 to 8% up in our index with its 375W bios. With the 450W bios in place, this advantage varies between 6 and 12%.

The GeForce GTX 570s in SLI are also slightly in front of the GeForce GTX 590 while the GeForce GTX 580 pairing is up by 20 to 23%.


We have also put the results in surround gaming in an index, excluding Mafia II which poses a few problems in this mode:



Here too, the Radeon HD 6990 has a small advantage over the GeForce GTX 590 and this is extended once antialiasing is enabled. Note that the cards equipped with just 1 GB of memory suffer a good deal under these conditions.

Finally, we grouped the 3D Vision results into an index. This index is not comparable with the two others as it is an unweighted mean; these results instead show the propensity of a card to deliver 60 fps per eye:


The GeForce GTX 590 is on a par with the GeForce GTX 570s in SLI here, benefiting from its larger memory in some cases.


Page 18
Conclusion

Conclusion
Taking on a Radeon on an equal footing is no easy thing for NVIDIA. At the same excessive maximum energy consumption, the Radeon HD 6990 has an advantage from the start with its slightly better performance-per-watt ratio. As we expected, it retains the top spot on the podium opposite the GeForce GTX 590.

Looking only at raw performance would however be too reductive for extreme graphics cards such as these, which aren’t required for gaming in full HD under standard conditions. What is important is to make sure they can handle extreme usage too, which is why we carried out more tests than usual, including surround gaming and 3D stereo results.

The Radeon HD 6990 benefits here from having 2 GB of video memory per GPU for use with Eyefinity, giving excellent results in surround gaming. In spite of its processing power, however, we did find it wasn’t powerful enough in the most demanding games, for which you either have to lower graphics quality or move up to a tri- or even quad-GPU solution! Unfortunately the Radeon HD 6990 suffers from truly unbearable noise levels, which are likely to put off even the most hardened performance enthusiasts.


The GeForce GTX 590 does honourably well in surround gaming, although it struggles more quickly when antialiasing is activated as it has only 1.5 GB of memory per GPU. It does however have the advantage of the 3D Vision ecosystem, which is really the only solution available for gaming in 3D stereo in full HD. Where it stands out most from its direct competitor is in noise levels, which are much better managed on the GeForce GTX 590.

Although generally slightly behind a pair of GeForce GTX 570s in SLI, which come in at a similar price of €650, the GTX 590’s larger memory gives it an advantage in a few cases. Where the two solutions differ most, however, is in cooling and size. The GeForce GTX 570 pairing occupies four slots but expels most of the hot air out of the casing, while the GeForce GTX 590 is relatively compact but only expels half the hot air from the casing, the rest staying inside and heating up the hard drives, which therefore need to be well cooled if you value your data!

Getting the GeForce GTX 590 up and running was of course quite a challenge and, in view of its energy consumption levels and the lack of any real management technology in this respect, NVIDIA had to cobble together software security limitations to stop the card or the system being damaged by excessive loads. The relative fragility of such a solution and the lack of room for manoeuvre mean that overclocking and modding aficionados won’t be able to get much out of the card, depriving NVIDIA of part of what is already a very small niche for such a product.

At the end of the day, beyond its main justification, which was to compete with the Radeon HD 6990, the GeForce GTX 590 will only be of practical use in a very few cases: firstly in a compact 3D stereo system, and then in quad SLI configurations for systems designed for surround gaming.


Copyright © 1997-2014 BeHardware. All rights reserved.