Report: Nvidia GeForce GTX 570 - BeHardware
Written by Damien Triolet
Published on December 7, 2010
A week before the arrival of the Radeon HD 6970 and 6950, NVIDIA is getting its arsenal ready with a new derivative of the excellent GeForce GTX 580. On the menu for the GeForce GTX 570: performance on a par with the GeForce GTX 480, with reduced energy consumption and noise levels.
The cut down GF110
For this GeForce GTX 570, NVIDIA has of course worked with the latest revision of its high-end GPU, the GF110 A1. To recap, this is in fact a “simple” revision of the GF100 A3 used in the GeForce GTX 470 and 480. Usually this revision would have been called the GF100 B1, but for marketing reasons NVIDIA has preferred to go for a code name that inspires a feeling of innovation rather than seeming simply to be making corrections to the design of its 3 billion transistor GPU.
The GPU used in the new board is partly cut down, with one of its 16 blocks of execution units disabled as well as one of its 6 memory controllers. In terms of internal configuration this gives something of a mix of the GeForce GTX 480 and the GeForce GTX 470: as many processing units as the former, but a reduced memory bus like the latter. Higher clocks and the various small corrections made to the GPU should allow the GeForce GTX 570 to equal the GeForce GTX 480.
Specifications, the GeForce GTX 570
With a slightly higher GPU clock, the GeForce GTX 570 gives 5% higher processing and texturing power and fillrate than the GeForce GTX 480. Memory bandwidth is however down 14%. Moreover, although the reduced number of ROPs won’t limit fillrate, they will somewhat limit performance at high levels of antialiasing.
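As a rough cross-check of that 14% figure, memory bandwidth is simply bus width times the effective data rate. A minimal sketch, assuming the reference memory clocks (950 MHz GDDR5, i.e. 3.8 Gbps per pin, on the GTX 570 and 924 MHz, i.e. ~3.7 Gbps, on the GTX 480):

```python
# Rough memory bandwidth cross-check; clocks are the assumed reference values.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps  # bytes per pin group x rate

gtx570 = bandwidth_gb_s(320, 3.800)  # ~152.0 GB/s
gtx480 = bandwidth_gb_s(384, 3.696)  # ~177.4 GB/s
print(f"GTX 570: {gtx570:.1f} GB/s, GTX 480: {gtx480:.1f} GB/s")
print(f"Deficit: {1 - gtx570 / gtx480:.0%}")  # ~14%
```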
The stock GeForce GTX 570
For this test, NVIDIA supplied us with a stock GeForce GTX 570:
For the GeForce GTX 570, NVIDIA has used the same PCB as on the GeForce GTX 580. Given that the memory bus has been reduced from 384 to 320 bits, 2 of the 32-bit memory chip locations remain unpopulated. The same goes for 2 of the 6 phases of the power stage, which aren’t needed now that energy consumption has been revised downwards somewhat. The GeForce GTX 570 requires only two 6-pin power connectors, meaning that energy consumption shouldn’t exceed 225 watts, in line with the “gaming TDP” announced at 219 watts. Connectivity is identical, with 2 dual-link DVI outputs, one mini-HDMI output and two SLI connectors.
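As a reminder, that 225 watt ceiling follows directly from the PCI Express power budget; a minimal sketch of the arithmetic:

```python
# PCI Express power budget for a card with two 6-pin connectors.
PCIE_SLOT_W = 75   # maximum draw from the x16 slot
SIX_PIN_W = 75     # maximum draw per 6-pin connector
print(PCIE_SLOT_W + 2 * SIX_PIN_W)  # 225 W, above the 219 W "gaming TDP"
```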
The cooler is similar to the one used on the GeForce GTX 580, but not identical in spite of what its exterior might lead you to believe. Although still using vapour chamber technology for improved efficiency, it’s smaller, which makes sense, as there’s less heat to disperse.
GeForce GTX 570 and GeForce GTX 580 vapour chambers.
To try and contain energy consumption, NVIDIA has built components onto the PCB which allow the drivers (but not the GPU itself) to monitor the card’s energy consumption, or rather the current drawn on each of the three 12V supply lines (PCI Express bus and the two connectors).
NVIDIA hasn’t however activated global monitoring and, to avoid reducing performance in games or 3DMark, only brings its system into action in the latest versions of Furmark and OCCT, namely the extreme load tests used by reviewers to measure energy consumption. In these applications, if the driver measures energy consumption beyond a certain limit, it lowers the clocks by half, and then by half again if this hasn’t been sufficient.
In practice, the NVIDIA system ends up preventing the use of such software more than it maintains energy consumption within a well-defined thermal envelope. NVIDIA told us it was committed to adding more such applications over time. This doesn’t mean, however, that it is moving towards permanent monitoring of energy consumption.
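Based on NVIDIA’s description, the driver-side logic amounts to something like the sketch below. The detection-list gating and the successive clock halvings are as described above; the limit value and the power-scales-with-clock assumption are ours:

```python
# Hedged sketch of the GeForce GTX 500 driver-side power limiter described
# above. POWER_LIMIT_W is hypothetical; NVIDIA doesn't publish the value.
POWER_LIMIT_W = 300

def throttled_clock(clock_mhz, measured_power_w, app_is_on_detection_list):
    # The mechanism only engages for detected stress tests
    # (recent Furmark and OCCT builds); games and 3DMark run unthrottled.
    if not app_is_on_detection_list:
        return clock_mhz
    for _ in range(2):  # halve once, then once more if still insufficient
        if measured_power_w <= POWER_LIMIT_W:
            break
        clock_mhz /= 2
        measured_power_w /= 2  # assume power roughly proportional to clock
    return clock_mhz
```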
Energy consumption, noise
Energy consumption
We did of course use our new test protocol that allows us to measure the energy consumption of the graphics card alone. We took these readings at idle, in 3DMark 06 and in Furmark. Note that we use a version of Furmark that isn’t detected by the stress-test energy consumption limitation mechanism put in place by NVIDIA in the GeForce GTX 500 drivers.
Although the GeForce GTX 580 is more economical at idle than the GeForce GTX 480, it draws just as much power under load, with energy consumption at 300W. Energy consumption for the GeForce GTX 570 at idle is a little lower, on a par with the Radeon HD 5870 2 GB. Under load in 3DMark 06, which corresponds most closely to what you see in gaming, the GeForce GTX 570 drew exactly the same amount of power as the GeForce GTX 470. In Furmark, however, energy consumption climbed to 250W.
Like the GeForce GTX 580 and 480, then, it can go beyond the PCI Express standard in terms of power consumption under extreme loads. Although this isn’t a problem for the power drawn through the PCI Express power connectors, it can become one for the 12V power drawn from the bus, as the 5.5A limit fixed by the standard (66W) is then exceeded. In practice this isn’t a problem on mid or high-end motherboards, though some entry-level models are more sensitive here and can even blow at this load if they have a fixed fuse. It’s about time that NVIDIA, and AMD on some models, were more rigorous here!
Noise levels
We placed the cards in an Antec Sonata 3 case and measured noise levels at idle and under load, with the sound level meter positioned 60 cm from the case.
NVIDIA has obviously worked hard on the noise levels of the GeForce GTX 500 cooler, which shows itself to be relatively quiet given the power it has to dissipate. We’re a long way from the disastrous noise levels experienced with the stock GeForce GTX 480 and 470.
Temperatures
Still in the same case, we took a reading of the GPU temperature using the internal sensors:
The GeForce GTX 500s, in addition to being quieter, also cool their GPUs much better than the GeForce GTX 480 and 470, which demonstrates, if it were still necessary, that the GTX 480/470 cooler was really badly designed.
Here’s what the infrared thermographic imaging shows:
GeForce GTX 570 at idle
GeForce GTX 570 under load
Theoretical tests: pixels
Texturing performance
We measured performance during access to textures of different formats with bilinear filtering. Here are the results for standard 32-bit (4xINT8), 64-bit “HDR” (4xFP16) and 128-bit (4xFP32). We have also added performance for 32-bit RGB9E5, a new HDR format introduced with DirectX 10 which allows HDR textures to be stored in 32 bits, with a few compromises.
As announced by NVIDIA, the GeForce GTX 500s can filter FP16/11/10 and RGB9E5 textures at full speed and are 2.35x and 2.58x faster here than the GeForce GTX 480 and 470 respectively.
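To recap what RGB9E5 trades away: the three channels share a single 5-bit exponent and give up the sign bit and the implicit leading 1 of regular floats, which is how HDR range fits in 32 bits. A minimal decoder sketch following the standard bit layout (R in the low bits, shared exponent in the top 5, bias 15):

```python
# Minimal RGB9E5 decoder sketch: three 9-bit mantissas share one 5-bit
# exponent (bias 15); no sign bit and no implicit leading 1.
EXP_BIAS, MANTISSA_BITS = 15, 9

def decode_rgb9e5(packed: int) -> tuple[float, float, float]:
    r = packed & 0x1FF            # bits 0-8
    g = (packed >> 9) & 0x1FF     # bits 9-17
    b = (packed >> 18) & 0x1FF    # bits 18-26
    e = (packed >> 27) & 0x1F     # bits 27-31, shared exponent
    scale = 2.0 ** (e - EXP_BIAS - MANTISSA_BITS)
    return (r * scale, g * scale, b * scale)
```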
Fillrate
We measured the fillrate without and then with blending, with different data formats:
In terms of fillrate, the Radeons have a big advantage over the GeForce GTX 400s/500s, especially with FP10, a format they process at full speed whereas it’s handled at half speed on the GeForces. Given the limitation of the GeForces in terms of datapaths between the SMs and ROPs, it’s a shame that NVIDIA hasn’t given its GPU the possibility of benefiting from the FP10 and FP11 formats.
The GeForces nevertheless still have a few advantages. First of all, they can process single-channel FP32 at full speed without blending. Next, with blending they retain maximum efficiency with INT8, whereas the Radeons suffer.
Theoretical tests: triangles
Triangle throughput
Given the architectural differences between the various GPUs in terms of geometry processing, we obviously wanted to take a closer look at the subject. First of all we looked at triangle throughput in two situations: when all triangles are drawn, and when all triangles are removed with back-face culling (because they aren’t facing the camera):
The GeForce GTX 580 gives very high performance here, leading the GeForce GTX 480 by 30% when the triangles are rendered, while theoretically the gain should have been somewhere between 10 and 18%. Given that the GeForces are limited artificially at this level in comparison to the Quadros, we suppose that NVIDIA has left the GF110 a little more headroom than the GF100, artificially introducing a difference between the two revisions of the GPU. It may also be that the incomplete GPUs lose some of their efficiency.
When it comes to rejecting triangles from the rendering via culling, none of the Radeon GPUs comes close to the high-end GeForces.
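As a reminder of what this rejection test involves, a back-facing triangle can be detected from the winding of its projected vertices alone, before any pixel work is done. A minimal sketch, assuming a counter-clockwise front-face convention:

```python
# Back-face culling sketch: with a counter-clockwise front-face convention,
# a non-positive signed area in screen space means the triangle faces away
# and can be rejected before rasterisation, costing no fillrate at all.
def signed_area(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def is_culled(p0, p1, p2):
    return signed_area(p0, p1, p2) <= 0

print(is_culled((0, 0), (1, 0), (0, 1)))  # False: front-facing (CCW)
print(is_culled((0, 0), (0, 1), (1, 0)))  # True: back-facing (CW)
```

A GPU’s peak culling rate measures how fast it can run this kind of test and discard triangles without spending any time rendering them.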
Next we carried out a similar test but using tessellation:
The advantage of the GeForces over the Radeons is there for all to see. AMD and NVIDIA have very different approaches and the throughput of the GeForces varies according to the number of processing units on each card, while for the Radeons this only varies according to changes in the clock.
The architecture of the Radeons means that they are rapidly overloaded with the quantity of data generated, which then drastically reduces their throughput here. Doubling the size of the buffer for the tessellation unit in the Radeon HD 6800s’ GPU gives them a significantly higher level of performance than the Radeon HD 5800s, though they don’t rival the GeForces.
Strangely, the GeForce GTX 580 only posts a very slight gain on the GeForce GTX 480, and is even behind it in this test when triangles are removed from the rendering via culling. This may be down to a different software configuration which favours certain cases (probably the most important ones) but penalises others, something we have observed with certain profiles on the Quadros. In any case, the GeForce GTX 570 isn’t subject to this: it behaves here like a GeForce GTX 480 with higher clocks. Note that we did of course retest the GeForce GTX 580 with the same driver as the GeForce GTX 570.
Displacement mapping
We tested tessellation with an AMD demo that is part of Microsoft’s DirectX SDK. This demo allows us to compare bump mapping, parallax occlusion mapping (the most advanced bump mapping technique used in gaming) and displacement mapping, which uses tessellation.
Basic bump mapping.
Parallax occlusion mapping.
Displacement mapping with adaptive tessellation.
By creating true additional geometry, displacement mapping displays clearly superior quality. Here we activated the adaptive algorithm, which avoids generating useless geometry and too many small triangles that don’t fill any quads and waste a lot of resources.
We also measured the performance obtained with the different techniques:
It is interesting to note that tessellation doesn’t only improve rendering quality but also performance! Parallax occlusion mapping is in fact very resource-heavy, as it uses a complex algorithm that attempts to simulate geometry realistically. Unfortunately it generates a lot of aliasing, noticeable on the edges of objects or surfaces that use it.
Note however that in the present case the displacement mapping algorithm is aided by the fact that it is dealing with a flat surface. If it has to smooth geometry contours and apply displacement mapping at the same time, demands are of course much higher.
The GeForce GTX 400s handle the load associated with tessellation much better than the Radeons, although there is a significant gain with the 6800 series. The use of an adaptive algorithm which regulates the level of tessellation according to how detailed each area needs to be (depending on distance, screen resolution and so on) gives significant gains across the board and is more representative of what developers will put into place. The gap between the GeForces and the Radeons is then reduced and the Radeons even move in front in certain cases.
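To illustrate what such an adaptive algorithm does, a common approach scales each patch edge’s tessellation factor with its projected size on screen, so distant or low-resolution geometry isn’t needlessly subdivided. A minimal sketch under a simple pinhole projection; all constants are illustrative:

```python
# Illustrative adaptive tessellation factor: subdivide so triangles stay
# around a target size in pixels. Constants are arbitrary examples; real
# engines use more elaborate metrics.
def tess_factor(edge_len_world, distance, screen_height_px,
                target_px=8.0, max_factor=64.0):  # 64 = D3D11 hardware cap
    projected_px = edge_len_world / max(distance, 1e-6) * screen_height_px
    factor = projected_px / target_px
    return min(max(factor, 1.0), max_factor)

print(tess_factor(1.0, 5.0, 1200))    # near: heavily subdivided (30.0)
print(tess_factor(1.0, 200.0, 1200))  # far: barely subdivided (1.0)
```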
Is AMD sacrificing quality?
With the arrival of the Radeon HD 6800s, AMD decided to review Catalyst A.I., the driver option which handles activation of certain generic optimisations as well as optimisations specific to each game.
Previously it was only possible to activate certain more aggressive filtering optimisations or to completely disable Catalyst A.I. and all of AMD’s optimisations. Now texture filtering optimisations can be set independently. A “quality” mode is enabled by default; it integrates slightly more aggressive optimisations than before and introduces more shimmering in some textures, which results in lower quality.
In contrast to some German websites, we didn’t pick up on this in the GeForce GTX 580 test. Although we did see it on the Radeon HD 6800s, our quality comparisons were compromised, without our noticing, by a bug in the Catalyst Control Center interface: up to version 10.10d, it took into account the quality level defined on the Radeon HD 6800s but didn’t display it on the Radeon HD 5800s. In other words, after we had tested the Radeon HD 6800s in high quality mode, which disables these aggressive optimisations, the setting remained active while we thought we were testing the Radeon HD 5870 in default mode. Catalyst 10.10e corrected this problem and now also displays the filtering quality setting for the Radeon HD 5800s. After more in-depth tests, we were able to confirm that AMD had reduced quality across the board, at least on all its high-end 5000 and 6000 series cards, which is obviously something we can only regret.
To observe the gain given by these optimisations, which we judge too aggressive and which reduce quality in an area where NVIDIA already had a slight advantage, we measured performance right across our test protocol with the Catalyst 10.10e drivers and the two texture filtering quality modes, both on a Radeon HD 5870 2 GB and on a Radeon HD 6870:
Depending on the game, the gain in performance varies between 0 and 8% with an average gain of 1 to 2%. Note that the average gain corresponds to the impact of the optimisation on our performance index. It is therefore relatively modest and doesn’t affect our conclusions in terms of these products as a difference of 2% in performance isn’t going to alter whether we recommend one model over another.
Note that the Radeon HD 6870 is slightly more affected than the Radeon HD 5870, which may be because its filtering units are correcting another issue which has a small impact in terms of performance.
Note also that during our tests we use the maximum filtering setting available in the games themselves and don’t force 16x anisotropic filtering in the driver control panel. Had we done so, the impact of these optimisations on results would no doubt have been more significant.
In 2010, there is no reason to sacrifice default graphics quality in this way and we hope that AMD will correct this with the arrival of the Radeon HD 6900. In any case, as you can see, we used the high quality mode in this test.
The test
For this test, we used the same protocol as in the GeForce GTX 580 test, though we haven’t retained H.A.W.X. 2. After experiencing the game, disappointing both in terms of gameplay and graphics, we didn’t see the point in keeping it. Except for its detailed terrain, rendering is poorer and less successful overall than in the first version. No reason, then, to encumber our protocol with this game and its partisan technical choices which, at the end of the day, tell us no more than theoretical tessellation tests.
The tests were carried out at 1920x1200, without FSAA, with 4x MSAA and with 8x MSAA. Note that we made sure to test true 8x MSAA on the GeForces, which isn’t always easy to work out: in the NVIDIA drivers, 8x antialiasing is in fact 4x MSAA with 8x CSAA, which doesn’t give the same quality as 8x MSAA, itself called 8xQ antialiasing. The latter is therefore what we tested.
Now one year after its release, we’ve decided that we can no longer treat DirectX 11 separately, especially as all the cards tested here are compatible with this API. DirectX 11, 10 and 9 games are therefore mixed together and we opted for very high settings even in the most demanding games. The most recent updates were installed and all the cards were tested with the most recent available drivers.
We decided to stop showing decimals in game performance results so as to make the graph more readable. We nevertheless note these values and use them when calculating the index. If you’re observant you’ll notice that the size of the bars also reflects this.
The Radeons were tested with texture filtering set to high quality, whereas in our previous two tests we used the “quality” mode. The Catalyst 10.10e drivers do however give small gains in many games, which mostly makes up for the small dip in performance linked to deactivating the overly aggressive optimisations.
Test configuration
Intel Core i7 975 (HT and Turbo deactivated)
Asus Rampage III Extreme
6 GB DDR3 1333 Corsair
Windows 7 64-bit
Forceware 263.09 beta
Need for Speed Shift
To test the most recent title in the Need for Speed series, we pushed all the options to a max and carried out a well-defined movement. Patch 1.1 was installed.
Note that AMD has implemented an optimisation that replaces certain 64-bit HDR render targets with 32-bit ones. NVIDIA has naturally jumped on this, but it doesn’t spoil quality: in reality AMD is taking advantage of a particularity of its architecture, which can process these 32-bit HDR formats at full speed. In fact the GeForces can also do so, and NVIDIA has supplied the press with a tool which allows this to be set up and comparable performance to be obtained.
In this first test, the GeForce GTX 570 is slightly ahead of the GeForce GTX 480 as well as all available mono-GPU Radeons.
ArmA 2
To test ArmA 2, we carry out a well-defined movement on a saved game. We used the high graphics setting in the game (visibility at 2400 m) and pushed all the advanced options to very high.
ArmA 2 allows you to set the 3D rendering resolution independently of the display interface, with the rendering then scaled to the display via a filter. We used identical resolutions for both.
The antialiasing settings offered in this game aren’t clear and differ between the AMD and NVIDIA cards. AMD offers 3 modes: low, normal and high, corresponding to 2x, 4x and 8x MSAA. With NVIDIA things get more complicated:
- low and normal = MSAA 2x
- high and very high = MSAA 4x
- 5 = MSAA 4x + CSAA 8x (called 8x in the NVIDIA drivers)
- 6 = MSAA 8x (called 8xQ in the NVIDIA drivers)
- 7 = MSAA 4x + CSAA 16x (called 16x in the NVIDIA drivers)
- 8 = MSAA 8x + CSAA 16x (called 16xQ in the NVIDIA drivers)
Patch 1.5 was installed.
Here again the GeForce GTX 570 is slightly ahead of the GeForce GTX 480 and is up with the Radeon HD 5870s at 4xAA.
Starcraft 2
To test Starcraft 2, we launched a replay and measured performance following one player’s view.
All graphics settings were pushed to a maximum. The game doesn’t support antialiasing, which therefore has to be activated in the control panels of the AMD and NVIDIA drivers. Patch 1.0.3 was installed.
In Starcraft 2, the GeForce GTX 570 is somewhat disadvantaged by having lower memory bandwidth than the GeForce GTX 480. Note that with 8xAA the Radeons are more efficient.
Mafia II
The Mafia II engine hands physics over to NVIDIA’s PhysX libraries and takes advantage of this to offer high physics settings which can be partially accelerated by the GeForces.
To measure performance we used the built-in benchmark, with all graphics options pushed to a maximum, first without activating the PhysX effects accelerated by the GPU:
The GeForce GTX 570 is on a par with the GeForce GTX 480 here.
Next, we set all PhysX options to high:
With PhysX effects pushed to a maximum, performance levels dive. Note that they are in part limited by the CPU, as not all additional PhysX effects are accelerated. Of course the Radeons remain a long way behind.
Crysis Warhead
Crysis Warhead replaces Crysis and uses the same resource-heavy graphics engine. We tested it in version 1.1 and in 64-bit mode, as this is the main innovation. Crytek has renamed the different graphics quality modes, probably so as not to upset gamers who might be disappointed at being unable to activate the highest quality mode because of excessive demands on system resources. The high quality mode has been renamed “Gamer” and this is the one we tested.
In Crysis Warhead, the GeForce GTX 570 does slightly better than the GeForce GTX 480.
Far Cry 2
Far Cry 2 isn’t really a direct descendant of the first episode, which was developed by Crytek. As owner of the licence, Ubisoft handled its development while Crytek worked on Crysis. It was no easy thing to inherit the graphics revolution that accompanied Far Cry, but the Ubisoft teams have done pretty well, even if the graphics don’t go as far as those in Crysis. The game is also less resource-heavy, which is no bad thing. It supports DirectX 10.1 to improve the performance of compatible cards. We installed patch 1.02 and used the ultra high quality graphics mode.
The GeForces do particularly well in Far Cry 2. The GeForce GTX 570 is on a par with the GeForce GTX 480 at 4xAA, slightly ahead without AA and slightly behind at 8xAA, a mode in which the additional ROPs and bandwidth of the GTX 480 come into their own.
H.A.W.X.
H.A.W.X. is a flying action game. It uses a graphics engine that supports DirectX 10.1 to optimise performance. Among the graphics effects it supports, note the presence of ambient occlusion, which we pushed to a max along with all the other options. We used the built-in benchmark and patch 1.2 was installed.
The GeForce GTX 570 has a significant lead on the GeForce GTX 480 here.
BattleForge
As the first game to support DirectX 11, or more precisely Direct3D 11, BattleForge was a must in our protocol. An update added in September 2009 gave it support for Microsoft’s new API.
Compute Shaders 5.0 are used by the developers to accelerate the SSAO (ambient occlusion) processing. Compared to the standard implementation via pixel shaders, this technique allows more efficient use of the available processing power as it saturates the texturing units less. BattleForge offers two SSAO levels: High and Very High. Only the second, called HDAO (High Definition AO), uses Compute Shaders 5.0.
We used the game’s benchmark and installed the latest available update (1.2 build 298942).
Without antialiasing and in 4x mode, the GeForce GTX 570 does slightly better than the GeForce GTX 480, but is slightly behind in 8x mode.
Civilization V
Pretty successful visually, Civilization V uses DirectX 11 both to improve quality and optimise performance in the rendering of terrain, thanks to tessellation, and to implement a special compression of textures, thanks to the compute shaders, which allows it to keep the scenes of all the leaders in memory. This second use of DirectX 11 doesn’t concern us here, however, as we used the benchmark built into a game map. We zoomed in slightly so as to reduce the CPU limitation, which is very significant in this game.
All settings were pushed to a max and we measured performance with shadows and reflections. Patch 1.2 was installed.
The GeForce GTX 570 is 10 to 12% behind the GeForce GTX 480 here.
S.T.A.L.K.E.R. Call of Pripyat
This new episode in the S.T.A.L.K.E.R. series is based on a development of the graphics engine, which moves up to version 01.06.02 and supports Direct3D 11, used both to improve performance and quality, with the option of more detailed lighting and shadows as well as tessellation support.
Maximum quality mode was used and we activated tessellation. The game doesn’t support 8x antialiasing. Our test scene is 50% outdoors and 50% indoors, with the indoor part featuring several characters.
Without antialiasing, the GeForce GTX 570 does slightly better than the GeForce GTX 480.
F1 2010
The latest Codemasters title, F1 2010, uses the same engine as DiRT 2 and supports DirectX 11 via patch 1.1, which we naturally installed. As this patch was developed in collaboration with AMD, NVIDIA told us that it had only received it late in the day and hadn’t yet had the opportunity to optimise its drivers for the game in DirectX 11 mode.
We pushed all the graphics options to a max and used the game’s own test tool on the Spa-Francorchamps circuit with a single F1 car.
In F1 2010, the Radeons are particularly at ease and the GeForce GTX 570 is slightly ahead of the GeForce GTX 480, except at 8xAA.
Metro 2033
Probably the most demanding title right now, Metro 2033 brings all recent graphics cards to their knees. It supports GPU PhysX, but only for the generation of particles during impacts, a rather discreet effect that we therefore didn’t activate during the tests. In DirectX 11 mode, rendering is identical to DirectX 10 mode but with 2 additional options: tessellation for the characters and a very advanced, very demanding depth of field feature.
We tested it in DirectX 11 mode, at a very high quality level and with tessellation activated, both with and without 4x MSAA. Next we measured performance with MSAA 4x and Depth of Field.
Here there’s a tie between the GeForce GTX 570 and GTX 480.
Performance recap
Although individual game results are worth looking at, especially as high-end solutions are susceptible to being levelled by CPU limitation in some games, we have calculated a performance index based on all the tests, with the same weight given to each game. Mafia II is included with the scores obtained without GPU PhysX effects.
We gave an index of 100 to the GeForce GTX 480 at 1920x1200 with 4xAA:
The index is a good general representation of the results we got in the majority of games: the GeForce GTX 570 has a slight advantage over the GeForce GTX 480 without antialiasing and with 4x antialiasing and equals it at 8x antialiasing.
Note also that the difference between the GeForce GTX 570 and GTX 580 is smaller than between the GeForce GTX 470 and GTX 480: we measured 10 to 17% compared to 19 to 26%! As a result the GeForce GTX 570 is between 20 and 26% ahead of the GeForce GTX 470: a relatively big gain!
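For those who want to reproduce the index, the calculation is no more than an equal-weight average of per-game results normalised to the reference card. The exact averaging method is our assumption, as we only specify equal weights; a minimal sketch with invented numbers:

```python
# Equal-weight performance index sketch, normalised so the reference card
# (here the GTX 480 at 1920x1200 4xAA) scores 100. The arithmetic mean of
# per-game ratios is an assumption; the fps values below are invented.
def perf_index(card_fps, reference_fps):
    ratios = [card_fps[game] / reference_fps[game] for game in reference_fps]
    return 100.0 * sum(ratios) / len(ratios)

ref = {"Far Cry 2": 60.2, "Metro 2033": 31.5, "Crysis Warhead": 40.1}
card = {"Far Cry 2": 63.0, "Metro 2033": 31.6, "Crysis Warhead": 41.7}
print(f"{perf_index(card, ref):.1f}")  # ~103.0 with these invented numbers
```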
Conclusion
With a new version of its flagship GPU, the manufacture of which is less problematic, NVIDIA has more room for manoeuvre, and this is immediately noticeable with this new derivative of the GeForce GTX 580. NVIDIA hasn’t had to reduce the clocks on the GeForce GTX 570 as much as it did with the GeForce GTX 470, which makes this cut-down version more attractive.
In spite of having a smaller memory bus, NVIDIA has managed to get the GeForce GTX 570 on a par with the GeForce GTX 480 in terms of performance, which means we’re now looking at a quieter and less power-hungry card, although it still sits towards the high end.
In comparison with the competition, as with all recent high-end GeForces, the GeForce GTX 570 benefits from GPU PhysX support, the 3D Vision ecosystem and excellent performance at high levels of tessellation. It is moreover ahead of all the mono-GPU Radeons, at least the models currently available. At the time of writing, we have finally received the Radeon HD 6900s, and with their launch slated for this week, it would seem wise to wait a few more days before buying so as to have all the information at your fingertips. What’s more, although the launch price of the GeForce GTX 570 (€350) is justifiable given its performance, it could of course be adjusted.
Lastly, we want to show a red card to AMD for reducing default graphics quality as a result of the overly aggressive texture filtering optimisations introduced with the arrival of the Radeon HD 6800s and applied across the whole high-end range. In 2010, this is no longer acceptable!