Report: GeForce GTX 560 Ti vs Radeon HD 6950 1 GB - BeHardware
>> Graphics cards
Written by Damien Triolet
Published on January 25, 2011
At the beginning of the year, NVIDIA decided to take on the €200/250 segment once again with an evolution of the GeForce GTX 460. With a fully enabled GPU, the GeForce GTX 560 Ti offers 40% more processing power. What does this translate to in terms of practical performance? AMD has responded with a cut-price Radeon HD 6950 1 GB. A tough fight is in prospect!
Is the GF114 a new GPU?
In the same way as NVIDIA’s GF100 became the GF110 following a new revision, the GF104 used in the GeForce GTX 460 has become the GF114 for the GeForce GTX 560 Ti. The idea behind renaming the GPU was to give an impression of innovation and mark a break with the GF100, which had a doubtful reputation.
Does this second renaming truly represent a new revision or is it simply designed to confuse us? It’s hard to say. NVIDIA rather vaguely says that it has slightly reworked the chip to facilitate the increased clock and reduce energy consumption, but the name change stops us from knowing if the GF114 A1 really corresponds to a GF104 A2 or if it’s simply a renaming of the GF104 with a different chip selection. The architecture is in any case identical for these GPUs, which have 1.95 billion transistors. You can study our analysis here.
With the GeForce GTX 460s, NVIDIA chose to remain fairly conservative on clocks and also disabled 1 of the 8 blocks of processing units on all the GF104s, which is unusual. This might be because of manufacturing issues but also due to the fact that NVIDIA probably wanted to avoid competing with the GeForce GTX 480 and 470, large stocks of which still remained to be sold.
This time, whether because of different strategic choices or a new revision, NVIDIA has let its GPU off the leash: all the units are active and the GPU clock is much higher, which has increased processing power by 40%! This is enough to move it up into the next segment and match up to the Radeon HD 6870 and even the 6950.
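The 40% figure follows directly from the published reference specifications (GeForce GTX 460 1 GB: 336 enabled cores at a 1350 MHz shader clock; GeForce GTX 560 Ti: all 384 cores at 1644 MHz). A quick back-of-the-envelope check, counting two operations per core per cycle:

```python
# Peak shader throughput = cores x 2 ops (FMA) x shader clock.
# Reference specs: GTX 460 1 GB runs 336 cores at 1350 MHz,
# the GTX 560 Ti enables all 384 cores at 1644 MHz.
def gflops(cores, shader_mhz):
    return cores * 2 * shader_mhz / 1000.0

gtx460 = gflops(336, 1350)        # ~907 GFLOPS
gtx560ti = gflops(384, 1644)      # ~1263 GFLOPS
gain = gtx560ti / gtx460 - 1      # ~0.39, i.e. the ~40% quoted above
```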
Note however that at some point there will be several versions of the GeForce GTX 560, something confirmed by NVIDIA, though without giving any details. This launch focusses on the GTX 560 Ti, a suffix already used by NVIDIA at the time of the GeForce 3/4s, and which designates what should be the most powerful of the GeForce GTX 560s. NVIDIA has specified that this suffix should be printed in smaller lettering than the rest of the name, a strategy for extending the positive aura expected for the GeForce GTX 560 Ti across the smaller models down the line. Should we expect a renamed GeForce GTX 460? A different spec? In any case, we hope not to find too significant a gap in terms of performance.
Specifications, the cards
The reference GeForce GTX 560 Ti
NVIDIA supplied us with a reference GeForce GTX 560 Ti:
The GeForce GTX 560 Ti is designed similarly to the GeForce GTX 460, though at 23 cm it is 2 cm longer. Looking more closely, however, you can see that its cooling system has been revised and is closer to the systems used on higher-end models: a casing fixed to a plate that covers the PCB, with the radiator between them.
The radiator has the same radial format as on the GeForce GTX 460, but is larger and has a third heatpipe. The GeForce GTX 560 Ti will therefore only expel some of the hot air from the casing. The memory modules and the sensitive components on the power stage benefit from contact with the plate covering the PCB.
The PCB is longer to make room for a beefed-up power stage. It is structured similarly to the power stage on the GeForce GTX 460, but goes from three to four phases for the GPU, which protects it from the sort of overheating we noted on significantly overclocked GeForce GTX 460s.
Our GeForce GTX 560 Ti sample overclocked up to 950 MHz (GPU), with graphics bugs appearing beyond this, a pretty good gain of 15%. The card therefore still has plenty of overclocking potential, although not as much as the exceptional headroom we noted on the GeForce GTX 460. Clocking it up to 1 GHz should be possible with the best samples, but will require a reworked PCB with a sturdier power stage to avoid overheating.
The reference card comes with Samsung GDDR5 memory certified at 1.25 GHz.
The reference Radeon HD 6950 1 GB
AMD supplied us with a reference Radeon HD 6950 1 GB:
Although this Radeon HD 6950 1 GB certainly feels like a test prototype that has been rushed out, with no sticker for the product name, a red PCB and a manufacture date given as 15/06/2010 (!), AMD assured us that this would be the design on offer from several of their partners.
Overall, this 1 GB model is similar to the 2 GB model. It uses the same PCB and the same cooler equipped with a fan and a vapour chamber. To reduce costs, the back plate is no longer there and only 4 of the 6 GPU phases are used.
In place of Hynix 2 Gb memory certified at 1.5 GHz (3 GHz for data), we have Hynix 1 Gb memory certified at 1.25 GHz, which is what it’s clocked at on this Radeon HD 6950.
Energy consumption, noise
Energy consumption
We measured the energy consumption of the graphics card on its own. We took these readings at idle, in 3DMark 06 and in Furmark. Note that we use a version of Furmark that isn’t detected by the stress-test energy consumption limitation mechanism put into place by NVIDIA in the GeForce GTX 580 drivers.
At idle, the GeForce GTX 560 Ti behaves like the GeForce GTX 460 but our Radeon HD 6950 1 GB card records a higher energy consumption.
In load, the GeForce GTX 560 Ti draws 30 to 40 W more than the GeForce GTX 460, an increase of 20 to 26%. Strangely, the Radeon HD 6950 1 GB draws 20 W more than the 2 GB model. In contrast to the GeForce GTX 570 and 580, we didn’t note any drop in the clock of the GeForce GTX 560 Ti in Furmark 1.7.0; it has a similar control circuit but a bigger margin.
Note however that in both 3DMark 06 and Furmark, the Radeon HD 6900s reduced their clocks (varying for example between 570 and 700 MHz for the Radeon HD 6970 instead of 880 MHz). Energy consumption readings are nevertheless a good deal under the announced limits of 200 W for the Radeon HD 6950 and 250 W for the Radeon HD 6970. Although we’re still waiting for more detail from AMD on this, the GPU doesn’t actually measure its own energy consumption. It simply observes the level of utilisation of its numerous blocks. These levels are then matched to power consumption values drawn up by AMD to correspond to a worst-case scenario: i.e. a GPU with lots of current leakage and running at very high temperatures. In other words, when a Radeon HD 6950 1 GB reduces its clock, it isn’t because it is drawing 200 W but rather because, under certain conditions, certain samples would consume 200 W. This is an important nuance and explains why all cards behave in the same way whatever the conditions.
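The mechanism described above can be sketched as follows. All block names, wattages and the clock-scaling rule are hypothetical illustrations of the principle, not AMD's actual tables: the GPU estimates power from block activity against fixed worst-case figures, and throttles on that estimate rather than on any measured draw.

```python
# Hypothetical sketch of utilisation-based power capping: the GPU never
# measures real power; it maps block activity levels onto worst-case
# per-block wattages fixed in advance, and throttles on the estimate.
WORST_CASE_W = {"shaders": 140.0, "memory": 40.0, "rops": 25.0, "other": 15.0}  # made-up figures
POWER_LIMIT_W = 200.0     # announced limit for the Radeon HD 6950
BASE_CLOCK_MHZ = 800.0

def estimated_power(utilisation):
    """utilisation: dict mapping block name -> activity level in [0, 1]."""
    return sum(WORST_CASE_W[b] * u for b, u in utilisation.items())

def throttled_clock(utilisation):
    est = estimated_power(utilisation)
    if est <= POWER_LIMIT_W:
        return BASE_CLOCK_MHZ
    return BASE_CLOCK_MHZ * POWER_LIMIT_W / est   # scale the clock down

# A Furmark-like load saturates every block, so the *estimate* exceeds
# the limit even when the actual card draws far less than 200 W.
furmark = {"shaders": 1.0, "memory": 1.0, "rops": 1.0, "other": 1.0}
```

This is why every sample behaves identically: the lookup table, not the individual card's leakage or temperature, decides when to throttle.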
Noise levels
We placed the cards in an Antec Sonata 3 casing and measured noise levels at idle and in load, with the sound level meter 60 cm from the casing.
Although the Radeon HD 6900s are quiet at idle, in load they’re noisier than the previous generation cards. Their fans are also set up to increase in speed in steps, which is a more annoying system. Here are the variations we noted in our tests:
HD 6950 2 GB: 44.6 dB -> 47.1 dB
HD 6970: 47.6 dB -> 48.9 dB
Of course we retained the highest value across the spread.
The GeForce GTX 560 Ti produces slightly more noise than the GeForce GTX 460, both at idle and in load where it remains relatively discreet, without being silent.
Temperatures
Still in the same casing, we took a temperature reading of the GPU using the internal sensors:
NVIDIA has apparently calibrated its cooler in the same way for all the GeForce GTX 500s in load. The GeForce GTX 560 Ti GPU is very well cooled at idle.
Here’s what the infrared thermography shows:
GTX 560 Ti at idle
GTX 560 Ti in load
2 GB vs 1 GB
Radeon HD 6950: 2 GB vs 1 GB
Of course we wanted to look at the difference in performance between the Radeon HD 6950 2 GB and 1 GB cards. Here are the results we obtained at 1920x1200 across all the games in our test protocol:
The extra 1 GB gives a significant gain in StarCraft II and in Crysis Warhead with AA8x. It also gives a big gain in Metro 2033 when AA4x is combined with Depth of Field, which is very demanding in terms of memory. With just 1 GB, the Radeon doesn’t have enough memory and performance drops right away.
In most other cases however, we noted an advantage of almost 1% for the 1 GB model, going up to as much as 10% in ArmA 2, which actually makes the Radeon HD 6950 1GB a higher performance model than the Radeon HD 6950 2 GB on our index.
How can this be? There are several possible reasons: slightly lower performance from high density memory modules (latency) and, very probably, more efficient use of the memory with 1 GB, a size for which drivers are now very well optimised. Proper handling of the memory means the four memory controllers benefit to a maximum from appropriate distribution of data.
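The "appropriate distribution of data" point can be illustrated with a toy model. Striping addresses across the four controllers in fixed-size chunks keeps sequential accesses balanced; the stride value and function names here are purely illustrative, not the actual driver logic:

```python
# Illustrative only: striping addresses across four memory controllers in
# fixed-size chunks, the kind of balanced distribution the driver aims for.
NUM_CONTROLLERS = 4
STRIDE = 256   # bytes per chunk before rotating to the next controller (made up)

def controller_for(address):
    """Which controller services a given byte address under simple striping."""
    return (address // STRIDE) % NUM_CONTROLLERS

# Sequential chunk-sized accesses rotate through all four controllers,
# so no single controller becomes a bandwidth bottleneck.
hits = [controller_for(a) for a in range(0, 4 * STRIDE, STRIDE)]   # [0, 1, 2, 3]
```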
Theoretical tests: pixels
Texturing performance
We measured performance during access to textures of different formats with bilinear filtering. Here are the results for standard 32-bit (4x INT8), 64-bit “HDR” (4x FP16) and 128-bit (4x FP32). We have also added performance for 32-bit RGB9E5, a new HDR format introduced in DirectX 10 which allows HDR textures to be stored in 32 bits, give or take a few compromises.
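The compromise RGB9E5 makes is a single shared 5-bit exponent for all three 9-bit colour mantissas (and no sign bit), which is why an HDR texel fits in 32 bits. A minimal pack/unpack sketch following the standard shared-exponent rules:

```python
import math

N, E, BIAS = 9, 5, 15                                   # mantissa bits, exponent bits, bias
MAX_VAL = (2**N - 1) / 2**N * 2**(2**E - 1 - BIAS)      # largest representable value (~65408)

def encode_rgb9e5(r, g, b):
    """Pack three non-negative floats into one 32-bit RGB9E5 word."""
    r, g, b = (min(max(c, 0.0), MAX_VAL) for c in (r, g, b))
    max_c = max(r, g, b)
    # tentative shared exponent, per the shared-exponent encoding rules
    exp = max(-BIAS - 1, math.floor(math.log2(max_c)) if max_c > 0 else -BIAS - 1) + 1 + BIAS
    scale = 2.0 ** (exp - BIAS - N)
    if int(max_c / scale + 0.5) == 2**N:                 # mantissa overflow: bump exponent
        exp += 1
        scale *= 2.0
    rm, gm, bm = (int(c / scale + 0.5) for c in (r, g, b))
    return rm | (gm << 9) | (bm << 18) | (exp << 27)

def decode_rgb9e5(word):
    """Unpack a 32-bit RGB9E5 word back into three floats."""
    scale = 2.0 ** ((word >> 27) - BIAS - N)
    return tuple(((word >> s) & 0x1FF) * scale for s in (0, 9, 18))
```

The shared exponent is chosen for the brightest channel, so dim channels sitting next to a bright one lose precision; that is the "few compromises" mentioned above.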
As announced by NVIDIA, the GeForce GTX 500s can filter FP16/11/10 and RGB9E5 textures at full speed and are 2.35x and 2.58x faster here than the GeForce GTX 480 and 470 respectively. This capability was introduced with the GeForce GTX 460, so the GeForce GTX 560 Ti of course has it too and, at equal clocks, has the same texturing power as the GeForce GTX 580. Note that while the yield of the GeForce GTX 460’s texturing units was at first lower than that of the other NVIDIA GPUs, this was corrected after some time via the drivers.
The Radeon HD 6900s have such superior filtering power that, even though they filter FP16 textures at half speed, they don’t trail far behind the GeForces.
Note that we had to raise the energy consumption limit on the Radeon HD 6900s to a maximum, as otherwise clocks were reduced in this test. These new Radeons seem, then, incapable of fully using all their texturing power by default!
Fillrate
We measured the fillrate first without and then with blending, using different data formats:
In terms of fillrate, the Radeons have a big advantage over the GeForce GTX 400s/500s, above all with FP10, a format they process at full speed while the GeForces handle it at half speed. Given the limitation of the GeForces in terms of datapaths between the SMs and ROPs, it’s a shame that NVIDIA hasn’t given its GPUs the possibility of benefitting from the FP10 and FP11 formats.
Like the GeForces, the Radeon HD 6900s can process single-channel FP32 at full speed without blending, but unlike them they retain this speed with blending.
Theoretical tests: geometry
Triangle throughput
Given the architectural differences between the various GPUs in terms of geometry processing, we obviously wanted to take a closer look at the subject. First of all we looked at triangle throughput in two different situations: when all triangles are drawn and when all the triangles are removed with back-face culling (because they aren’t facing the camera):
AMD has doubled triangle throughput with its Radeon HD 6900s, which easily outdo the GeForces when the triangles are actually displayed – NVIDIA has limited the GeForces here to give an advantage to the Quadros. When it comes to ejecting triangles from the rendering via culling, however, no Radeon comes close to the high-end GeForces. With half as many dedicated units as the high-end GeForces, the GeForce GTX 560 Ti has a lower triangle throughput than the Radeon HD 6870 without culling, but a similar one to the Radeon HD 6950 with it. Overall the Radeon HD 6950 offers superior geometry processing.
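The culling test above relies on a very cheap check the GPU performs per triangle: in screen space, the sign of the cross product of two edges gives the winding, and triangles wound away from the camera are discarded before rasterisation. A minimal sketch, assuming the common counter-clockwise-is-front convention:

```python
# Back-face culling: a triangle whose screen-space winding is clockwise
# faces away from the camera and can be discarded before rasterisation.
# The 2D cross product of two edges gives the winding.
def signed_area(p0, p1, p2):
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])

def is_back_facing(p0, p1, p2):
    # Convention assumed here: counter-clockwise = front-facing.
    return signed_area(p0, p1, p2) <= 0

front = is_back_facing((0, 0), (1, 0), (0, 1))   # CCW -> kept
back  = is_back_facing((0, 0), (0, 1), (1, 0))   # CW  -> culled
```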
Next we carried out a similar test but using tessellation:
The advantage of the GeForces over the Radeons is there for all to see. AMD and NVIDIA have very different approaches and the throughput on the GeForces varies according to the number of processing units on the card.
The architecture of the Radeons means that they are rapidly overloaded by the quantity of data generated, which drastically reduces their throughput in this case. Doubling the size of the GPU tessellation unit buffer on the Radeon HD 6800s means they give significantly higher performance than the Radeon HD 5800s, and the parallelisation of geometry processing allows the Radeon HD 6900s to close on the high-end GeForces slightly, though without getting on a par with them. They do however get very close to the GeForce GTX 560 Ti.
Strangely, the GeForce GTX 580 is only very slightly up on the GeForce GTX 480 and even performs worse in this test when triangles are ejected from the rendering via culling. This may be because of a different software configuration which favours certain cases (probably the most important ones) but can penalise others, something we’ve observed with certain profiles on the Quadros. In any case, the GeForce GTX 570 isn’t subject to this: it behaves here like a GeForce GTX 480 with higher clocks. Note that we did of course retest the GeForce GTX 580 with the same driver as the GeForce GTX 570.
Displacement mapping
We tested tessellation with an AMD demo that is part of Microsoft’s DirectX SDK. This demo allows us to compare bump mapping, parallax occlusion mapping (the most advanced bump mapping technique used in gaming) and displacement mapping, which uses tessellation.
Basic bump mapping.
Parallax occlusion mapping.
Displacement mapping with adaptive tessellation.
By creating true additional geometry, displacement mapping displays clearly superior quality. Here we activated the adaptive algorithm, which avoids generating useless geometry and too many small triangles that will not fill any quads and thus waste a lot of resources.
We also measured the performance obtained with the different techniques:
It is interesting to note that tessellation doesn’t only improve rendering quality but also performance! Parallax occlusion mapping is in fact very resource-heavy as it uses a complex algorithm that attempts to simulate geometry realistically. Unfortunately it generates a lot of aliasing, noticeable on the edges of objects or surfaces that use it.
Note however that in the present case the displacement mapping algorithm is aided by the fact that it is dealing with a flat surface. If it has to smooth geometry contours and apply displacement mapping at the same time the demands are of course much higher.
The GeForces do a lot better with high tessellation load than the Radeons and strangely, the Radeon HD 6900s only give a moderate gain, similar to what you get with the Radeon HD 6800s.
The use of an adaptive algorithm, which regulates the level of tessellation according to how much detail an area needs (depending on distance, screen resolution and so on), gives significant gains across the board and is more representative of what developers will put into place. The gap between the GeForces and the Radeons is then reduced and the Radeons even move in front in certain cases.
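A distance-based heuristic of the kind an adaptive algorithm uses can be sketched as below. The constants and the linear fall-off are illustrative assumptions, not the demo's actual formula; only the 64x ceiling is fixed by Direct3D 11:

```python
# Hypothetical distance-based tessellation heuristic: patches close to the
# camera get a high tessellation factor, distant ones get little or none.
MAX_FACTOR = 64.0        # Direct3D 11 ceiling
MIN_FACTOR = 1.0
NEAR, FAR = 5.0, 200.0   # distances (scene units) over which detail fades (made up)

def tess_factor(distance):
    t = (distance - NEAR) / (FAR - NEAR)
    t = min(max(t, 0.0), 1.0)
    return MAX_FACTOR + t * (MIN_FACTOR - MAX_FACTOR)   # lerp from 64 down to 1
```

Spending the 64x budget only where it is visible is exactly why the adaptive mode improves performance without visibly reducing quality.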
The drivers, the test
For this test, we used the same protocol as in the Radeon HD 6900s test. The tests were carried out at 1920x1200, without FSAA, with MSAA 4x and with MSAA 8x, at maximum settings except in Metro 2033.
The new Catalysts
With the Catalyst 11.1a hotfix, and as of the standard 11.2s, AMD has introduced two innovations.
The first concerns texture filtering and the associated optimisations. As you no doubt now know, a few months ago AMD took the controversial decision of reviewing the quality offered by default, so as to benefit from a modest gain in performance but with the result that there was additional flickering in textures.
AMD has revised its thinking and put a new compromise into place (Quality mode). This brings the Quality setting back up to what was the original default quality for the Radeon HD 5800s, although this time AMD has not disabled the optimisation of limiting trilinear filtering to areas around mipmap level transitions. This is a worthwhile optimisation, one that NVIDIA also uses, and it can be disabled by moving to HQ mode.
In spite of this, filtering is still slightly cleaner on the GeForces, with less flickering. If this default setting is a problem for you, AMD also has an oddly named ‘Performance’ mode, which is identical to the Quality setting but with a modification to LOD biases, reducing both texture sharpness and flickering.
We definitely approve of these developments which allow a return to the initial status quo. Of course the gain in performance linked to the new Quality setting, which we use here in our performance measurements, is lower than with previous versions, which themselves only give a 1 to 1.5% gain on average. At the end of the day, then, playing with the reputation of its products in terms of quality, only resulted in a minimum gain for AMD. Wouldn’t it have been better to deploy its resources elsewhere? To improve filtering on forthcoming GPUs for example?
AMD has taken the opportunity offered by these drivers to introduce a new tessellation optimisation. A new setting allows you to define the maximum level of tessellation allowed, fixed at 64 by Direct3D 11. The AMD Optimized setting automatically uses the application profile information implemented by AMD. As things stand, no tessellation factors have been limited in AMD’s profiles, but it does mean AMD is armed against any future games that push tessellation beyond useful levels. We nevertheless disabled this setting in our tests, going for ‘Use application settings’.
Note that you can also fix the maximum level of tessellation manually: 1 (no tessellation), 2, 4, 6, 8, 16, 32 or 64 (no limit). We also wanted to look at the impact of this setting in HAWX 2, which gives an enormous advantage to the GeForces, at 1920x1200 AA4x with a Radeon HD 6970:
Use application settings: 92 fps
Limited to 64: 92 fps
Limited to 32: 92 fps
Limited to 16: 93 fps
Limited to 8: 101 fps
Limited to 6: 101 fps
Limited to 4: 107 fps
Limited to 2: 108 fps
Limited to 1: 109 fps
Tessellation deactivated in the game: 130 fps
For comparison, here’s what we got with the GeForce GTX 570:
Tessellation activated in the game: 152 fps
Tessellation deactivated in the game: 199 fps
The test
We made sure to test true 8x antialiasing on the GeForces, which isn’t always easy to work out. In the NVIDIA drivers, 8x antialiasing is in fact MSAA 4x with CSAA 8x, which doesn’t give the same quality as MSAA 8x, itself called 8xQ antialiasing. The latter is therefore what we tested.
Now one year after its release, we’ve decided that we can no longer treat DirectX 11 separately, especially as all the cards tested here are compatible with this API. DirectX 11, 10 and 9 games are therefore mixed together and we opted for very high settings even in the most demanding games. The most recent updates were installed and all the cards were tested with the most recently available drivers.
We decided to stop showing decimals in game performance results so as to make the graph more readable. We nevertheless note these values and use them when calculating the index. If you’re observant you’ll notice that the size of the bars also reflects this.
The Radeons and the GeForces were tested with the "quality" texture filtering setting. All the Radeons were tested with the Catalyst 11.1a hotfix (8.82.2) driver. All the GeForces were tested with beta 266.56 drivers.
Test configuration
Intel Core i7 975 (HT and Turbo deactivated)
Asus Rampage III Extreme
6 GB Corsair DDR3 1333
Windows 7 64-bit
Catalyst 11.1a hotfix
Need for Speed Shift
To test the most recent in the Need for Speed series, we pushed all options to a max and carried out a well-defined movement. Patch 1.1 was installed.
Note that AMD has implemented an optimisation that replaces certain 64-bit HDR render targets with 32-bit ones. NVIDIA naturally jumped on this, but it doesn’t spoil quality. In reality AMD is taking advantage of a particularity of its architecture, which can process 32-bit HDR formats at full speed. The GeForces can in fact also do so, and NVIDIA has supplied the press with a tool that enables this and yields comparable performance.
In this first test, the GeForce GTX 560 Ti is easily ahead of the GeForce GTX 470 but the Radeon HD 6950 has a small lead.
To test ArmA 2, we carry out a well-defined movement on a saved game. We used the high graphics setting in the game (visibility at 2400 m) and pushed all the advanced options to very high.
ArmA 2 allows you to set the display interface differently to 3D rendering which is then aligned with the display via a filter. We used identical rendering for both.
The antialiasing settings offered in this game aren’t clear and differ between the AMD and NVIDIA cards. Three modes are offered on the AMD cards: low, normal and high. These correspond to MSAA 2x, 4x and 8x. With NVIDIA things get more complicated:
- low and normal = MSAA 2x
- high and very high = MSAA 4x
- 5 = MSAA 4x + CSAA 8x (called 8x in the NVIDIA drivers)
- 6 = MSAA 8x (called 8xQ in the NVIDIA drivers)
- 7 = MSAA 4x + CSAA 16x (called 16x in the NVIDIA drivers)
- 8 = MSAA 8x + CSAA 16x (called 16xQ in the NVIDIA drivers)
Patch 1.5a was installed.
The Radeon HD 6900s finish a long way behind the Radeon HD 6870 here. Strangely the Radeon HD 6950 1 GB gives a 10% gain on the 2 GB version and is hot on the heels of the Radeon HD 6970.
To test StarCraft II, we launched a replay and measured performance following one player’s view.
All graphics settings were pushed to a maximum. The game doesn’t support antialiasing which is therefore activated in the control panels of the AMD and NVIDIA drivers. Patch 1.0.3 was installed.
Note the improved efficiency of the Radeons at AA8x.
The Mafia II engine hands physics over to NVIDIA’s PhysX libraries and takes advantage of this to offer high physics settings, which can be partially accelerated on the GeForces.
To measure performances we used the built-in benchmarks and all graphics options were pushed to a maximum, first without activating PhysX effects accelerated by the GPU:
Without antialiasing the GeForce GTX 560 Ti is ahead of the Radeon HD 6950 here, but loses some ground with it activated.
Next, we set all PhysX options to high:
With PhysX effects pushed to a maximum, performance levels dive. Note that they are in part limited by the CPU, as not all additional PhysX effects are accelerated. Of course the Radeons remain a long way behind.
Crysis Warhead replaces Crysis and has the same resource-heavy graphics engine. We tested it in version 1.1 and in 64-bit mode, as this is the main innovation. Crytek has renamed the different graphics quality modes, probably so as not to dismay gamers who would be disappointed at being unable to activate the highest quality mode because of excessive demands on system resources. The high quality mode has been renamed “Gamer” and this is the one we tested.
The Radeon HD 6900s do pretty well in comparison to the GeForce GTX 500s here.
Far Cry 2
This version of Far Cry isn’t really a sequel in the usual sense, as Crytek made the first episode. As the owner of the licence, Ubisoft handled development itself, while Crytek worked on Crysis. It’s no easy thing to inherit the graphics revolution that accompanied Far Cry, but the Ubisoft teams have done pretty well, even if the graphics don’t go as far as those in Crysis. The game is also less resource-heavy, which is no bad thing. It supports DirectX 10.1 to improve performance on compatible cards. We installed patch 1.02 and used the ultra high graphics quality mode.
The GeForces do particularly well in Far Cry 2. Here the GeForce GTX 560 Ti is a good way ahead of the GTX 470.
H.A.W.X. is a flying action game. It uses a graphics engine that supports DirectX 10.1 to optimise performance. Among the graphics effects it supports, note the presence of ambient occlusion, which we pushed to a max along with all the other options. We used the built-in benchmark and patch 1.2 was installed.
The GeForce GTX 560 Ti is on a par with the Radeon HD 6970 in this game.
BattleForge was the first game with DirectX 11, or more precisely Direct3D 11, support, so we couldn’t not test it. An update added in September 2009 gave it support for Microsoft’s new API.
The developers use Compute Shaders 5.0 to accelerate SSAO (ambient occlusion) processing. Compared to the standard implementation via pixel shaders, this technique makes more efficient use of the available processing power by saturating the texturing units less. BattleForge offers two SSAO levels: High and Very High. Only the second, called HDAO (High Definition AO), uses Compute Shaders 5.0.
We used the game’s bench and installed the latest available update (1.2 build 304941).
Penalised by its fillrate, the GeForce GTX 560 Ti is slightly behind the GeForce GTX 470 here. It is however up on the Radeon HD 6900s once antialiasing is activated.
Pretty successful visually, Civilization V uses DirectX 11 to improve quality and optimise performance: tessellation improves the rendering of terrain, and a special compression of textures, via the compute shaders, allows it to keep the scenes for all the leaders in memory. This second use of DirectX 11 doesn’t concern us here however, as we used the benchmark built into the game, run on a game map. We zoomed in slightly so as to reduce the CPU limitation, which has a strong impact in this game.
All settings were pushed to a max and we measured performance with shadows and reflections. Patch 1.2 was installed.
The GeForces do very well here and the Radeons only come back into the reckoning with antialiasing 8x activated.
S.T.A.L.K.E.R. Call of Pripyat
This new S.T.A.L.K.E.R. episode is based on an evolution of the graphics engine, which moves up to version 01.06.02 and supports Direct3D 11. The API is used both to improve performance and quality, with the option of more detailed lighting and shadows as well as tessellation support.
Maximum quality mode was used and we activated tessellation. The game doesn’t support 8x antialiasing. Our test scene is 50% outdoors and 50% indoors, and the indoor part includes several characters.
The Radeon HD 6900s are very efficient here and the GeForce GTX 560 Ti is closer to the Radeon HD 6870.
The latest Codemasters title, F1 2010 uses the same engine as DiRT 2 and supports DirectX 11 via patch 1.1, which of course we installed. As this patch was developed in collaboration with AMD, NVIDIA told us that they had only received it late in the day and hadn’t yet had the opportunity to optimise their drivers for the DirectX 11 version of the game.
We pushed all the graphics options to a max and used the game’s own test tool on the Spa-Francorchamps circuit with a single F1 car.
In F1 2010, the Radeons are particularly at ease and the GeForce GTX 560 Ti is on a par with the Radeon HD 6850.
Probably the most demanding title right now, Metro 2033 brings all recent graphics cards to their knees. It supports GPU PhysX, but only for the generation of particles during impacts, a quite discreet effect that we therefore didn’t activate during the tests. In DirectX 11 mode, performance is identical to DirectX 10 mode but with two additional options: tessellation for characters and a very advanced, very demanding depth of field feature.
We tested it in DirectX 11 mode, at a very high quality level and with tessellation activated, both with and without 4x MSAA. Next we measured performance with MSAA 4x and Depth of Field.
Although the GeForce GTX 560 Ti and Radeon HD 6950 are at a similar level without antialiasing, with it activated, the Radeon HD 6950 has a significant advantage. With the Radeon 1 GB however, performance drops right away when Depth of Field is activated.
Performance recap
Although individual game results are worth looking at, especially as high-end solutions are susceptible to being levelled by CPU limitation in some games, we have calculated a performance index based on all the tests, with the same weight given to each game. Mafia II is included with the scores obtained without GPU PhysX effects.
We attributed an index of 100 to the GeForce GTX 460 1 GB at 1920x1200 with 4xAA:
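The equal-weight index described above can be sketched as follows: each card's result in each game is normalised against the reference card, the per-game ratios are averaged with equal weight, and the result is scaled so the reference scores 100. The card names are from the review, but every fps figure below is made up purely to illustrate the calculation:

```python
# Equal-weight performance index, scaled so the reference card scores 100.
results = {                          # fps per game (hypothetical figures)
    "GTX 460 1 GB": {"game A": 40.0, "game B": 60.0, "game C": 30.0},
    "GTX 560 Ti":   {"game A": 52.0, "game B": 78.0, "game C": 39.0},
}
REFERENCE = "GTX 460 1 GB"

def index(card):
    ref = results[REFERENCE]
    ratios = [results[card][g] / ref[g] for g in ref]   # equal weight per game
    return 100.0 * sum(ratios) / len(ratios)
```

Averaging ratios rather than raw fps stops one very high-framerate game from dominating the index.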
Neither the GeForce GTX 560 Ti nor the Radeon HD 6950 1 GB has an advantage, at least without antialiasing. With antialiasing enabled, the Radeon has a small advantage (4 to 5%), apparently benefitting from higher memory bandwidth.
With an advantage of 30% over the GeForce GTX 460 at 1920 AA4x, the GeForce GTX 560 Ti easily outdoes the GeForce GTX 470 as well as the Radeon HD 6870.
As we saw before however, the Radeon HD 6950 1 GB is slightly up on the 2 GB model, except where its additional memory comes into play.
Conclusion
Once again, the competition between AMD and NVIDIA is fierce, with neither willing to give up any ground to the other. Good news for gamers then, with great price/performance ratios on the new cards!
NVIDIA has freed up its GF104/114 GPU, which now has more active processing units and higher clocks. AMD has reduced production costs for the Radeon HD 6950 and has been able to bring pricing down.
On sale from €230, the GeForce GTX 560 Ti and Radeon HD 6950 1 GB both give a very good level of performance. They’re similar overall but each has its own strong points. These cards will be ideal for gaming at 1920x1200 or 1080p, just as the GeForce GTX 460 1 GB and the Radeon HD 6850, at €180 or less, are perfect for gaming at 1680x1050.
This fierce battle has however led to casualties. As far as we can see, the remaining stocks of the GeForce GTX 470 and the Radeon HD 6870 and 6950 2 GB are pretty much going to fall by the wayside, except for specific usage purposes requiring extra memory for the 6950 2 GB. The Radeon HD 6870 will have to be repositioned to take account of the new standings.
As usual, you’ll need to make your choice between the GeForce GTX 560 Ti and the Radeon HD 6950 1 GB, or the GeForce GTX 460 1 GB and the Radeon HD 6850, based on factors other than performance. The GeForce GTX 560 Ti is more compact and noise levels are better controlled but the Radeon HD 6950 has richer connectivity thanks to Eyefinity.
It remains to be seen what the spec on the non-Ti GeForce GTX 560s will be. They will in any case be revised downwards but we don’t yet know whether this will be significant or not. In the meantime, we advise you to remain attentive as to the presence of the Ti suffix!
Copyright © 1997-2014 BeHardware. All rights reserved.