Review: AMD Radeon HD 7970 GHz Edition - BeHardware
>> Graphics cards
Written by Damien Triolet
Published on June 22, 2012
Introduction, GHz Edition
To celebrate the half-year anniversary of the Tahiti GPU and the Radeon HD 7970, AMD has decided to revisit its specs. The objective with the new GHz Edition model is clear: to outdo the GeForce GTX 680.
GHz Edition
At the end of 2011, the Tahiti GPU had to deal with teething problems of TSMC's 28 nm process. As the first GPU to be made on this process, its specs naturally had to be determined with limited perspective. Things were complicated by relatively high production variability, which meant that AMD had to be quite conservative when setting the specs of the Radeon HD 7970, settling for 925 MHz for the GPU.
Cape Verde and Pitcairn, the other two GPUs in the family, went on sale at 1 GHz at launch, and Cape Verde has recently moved up to 1.1 GHz, with production quality also improving over time. Having been first to reach this clock on a GPU, AMD took the opportunity to launch the GHz Edition brand and, following the arrival of the GeForce GTX 680 in March, made no secret of its ambition to bring a Radeon HD 7970 GHz Edition to market.
This it has now done. Thanks to the better quality production that has come with several refinements made to the PowerTune technology, AMD has been able to come up with a Radeon HD 7970 with a GPU clocked at 1050 MHz and the memory at 1500 MHz, giving gains of 13.5% for processing power and 9% for memory bandwidth. This is enough to give a 10% gain in performance (the minimum required to justify a new product).
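As a quick check of those percentages, here's the arithmetic, assuming the original Radeon HD 7970's reference clocks of 925 MHz for the GPU and 1375 MHz for the memory:

```python
# Spec-change arithmetic for the GHz Edition. The 925/1375 MHz reference
# clocks are the original Radeon HD 7970's; the new clocks are AMD's
# figures for the GHz Edition.
old_gpu, new_gpu = 925, 1050    # GPU clock, MHz
old_mem, new_mem = 1375, 1500   # memory clock, MHz

gpu_gain = (new_gpu / old_gpu - 1) * 100
mem_gain = (new_mem / old_mem - 1) * 100

print(f"Processing power: +{gpu_gain:.1f}%")   # +13.5%
print(f"Memory bandwidth: +{mem_gain:.1f}%")   # +9.1%
```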
AMD could just as well have called its new model the Radeon HD 7980 but given the fact that there are numerous overclocked Radeon HD 7970s on the market, choosing to call it the GHz Edition was probably the best solution. All the current partner cards with clocks at least at the level of this new model will moreover be able to adopt the GHz Edition brand.
We won't be going back over the reference design of this Radeon HD 7970 GHz Edition in detail: physically, it's identical to the original model and you can find all the info on the PCB and cooling system in our test here.
Specifications, PowerTune developments
The higher GPU clock brings the Radeon HD 7970 GHz Edition up alongside the GeForce GTX 680 with respect to fillrate and texturing power (two points on which it was trailing somewhat), and it presses home a 50% memory bandwidth advantage thanks to its 384-bit bus.
PowerTune
To recap, PowerTune is the technology that allows AMD to keep the energy consumption of its cards under control. It serves both to hold them within the TDP they were designed for, to prevent damage, and to maximise performance within this TDP.
Like CPU turbo technologies, it estimates consumption using a multitude of internal sensors on the different blocks that make up the chip. A relatively complex formula then transposes these usage levels into power consumed, taking account of the parameters that correspond to the least favourable case: a GPU with significant current leakage which is in a very hot environment.
AMD has made two refinements to the technology. The first consists in estimating the GPU temperature by integrating the estimated power consumption over time. This estimated temperature can then replace the constant representing the worst case, giving more of a margin in standard situations. It's partly this that has allowed AMD to increase the GPU clock on the Radeon HD 7970 GHz Edition whilst retaining the original TDP of 250W. The real temperature is still measured and used both as a higher level of protection and to regulate the fan.
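AMD hasn't published its formula, but the idea of deriving a temperature estimate from consumption over time can be sketched as a low-pass filter: brief activity spikes barely move the estimate, while sustained load pushes it toward the worst-case assumption. All names and constants below are illustrative, not AMD's:

```python
# Hypothetical sketch of a temperature estimate derived from estimated
# power draw: an exponential moving average, mapped onto degrees. The
# ambient offset, thermal coefficient k and smoothing factor alpha are
# invented for illustration.
def estimate_temperature(power_samples, ambient=45.0, k=0.4, alpha=0.02):
    """Low-pass filter the power estimate and convert it to a temperature."""
    smoothed = 0.0
    for p in power_samples:
        smoothed = (1 - alpha) * smoothed + alpha * p
    return ambient + k * smoothed

# A brief 250W spike after light load barely raises the estimate...
spike = [100] * 50 + [250] * 3
# ...while sustained 250W load approaches the worst-case figure.
sustained = [250] * 200
print(round(estimate_temperature(spike), 1))      # stays well under 80
print(round(estimate_temperature(sustained), 1))  # climbs above 140
```

The benefit over a fixed worst-case constant is exactly what the article describes: headroom is reclaimed in standard situations without removing protection under sustained stress.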
This development to PowerTune isn't reserved for the new model and will be rolled out across the Radeon HD 7900 range. In practice this won't make any difference in games, outside of major overclocking, but it will allow these cards to retain a higher clock in stress tests. In the future, we might see AMD taking advantage of this new capability to allow the GPU to exceed its TDP for a few seconds, as Intel does with its latest CPUs.
The second innovation is the introduction of a turbo feature known as Boost, a mode it has become difficult to avoid. In concrete terms, Boost represents the capacity of PowerTune to modify the GPU voltage in addition to its clock. This innovation is reserved for the Radeon HD 7970 GHz Edition, as the BIOS must contain a table of clock/voltage pairs and there's a more complex GPU validation process. The Radeon HD 7970 GHz Edition has thus been validated up to 1000 MHz at a fixed standard voltage, but also up to 1050 MHz with a voltage that climbs progressively. PowerTune currently supports up to 256 steps (P-states) with a granularity of 4 MHz.
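To make the clock/voltage table concrete, here is a hypothetical sketch of the boost slice of such a table: fixed voltage up to 1000 MHz, then 4 MHz steps with a progressively rising voltage toward 1050 MHz. The voltage figures are purely illustrative; AMD doesn't publish them.

```python
# Illustrative slice of a PowerTune P-state table covering the boost range.
# STEP_MHZ matches the 4 MHz granularity mentioned above; v_base/v_boost
# are invented voltages, not AMD's actual values.
STEP_MHZ = 4

def build_pstate_table(base=1000, boost=1050, v_base=1.10, v_boost=1.20):
    clocks = list(range(base, boost, STEP_MHZ)) + [boost]
    table = []
    for clk in clocks:
        # Interpolate voltage linearly across the boost range (an assumption).
        t = (clk - base) / (boost - base)
        table.append((clk, round(v_base + t * (v_boost - v_base), 3)))
    return table

table = build_pstate_table()
print(table[0])    # (1000, 1.1) -- fixed standard voltage
print(table[-1])   # (1050, 1.2) -- top boost step
```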
In practice, as the TDP remains oversized in comparison to energy consumption in games (with some rare exceptions), Boost can be seen as a way of safely validating the GPU at a higher clock that will be applied constantly across almost all games, as opposed to the usual turbos, which bring a variable gain under much stricter conditions.
AMD has managed to maximise use of the available TDP by combining Boost with a more accurate estimate of energy consumption and its influence on GPU temperature. As Boost brings an increase in voltage, energy consumption rises more than proportionally (dynamic power scales roughly with the square of the voltage), which means this approach is not designed to increase energy efficiency.
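The disproportionate cost of a voltage-assisted clock bump follows from the classic dynamic-power approximation P ∝ f·V². The voltages below are illustrative, not AMD's actual figures:

```python
# Relative dynamic power under the P ~ f * V^2 approximation, normalised
# to the original HD 7970 (925 MHz at an assumed 1.10 V).
def relative_power(f_mhz, volts, f0=925, v0=1.10):
    return (f_mhz / f0) * (volts / v0) ** 2

# 925 -> 1050 MHz at fixed voltage: power tracks the ~13.5% clock gain.
print(round(relative_power(1050, 1.10), 3))  # ~1.135
# The same clock reached with a voltage increase costs far more power
# for the same performance, hence the efficiency loss the graphs show.
print(round(relative_power(1050, 1.20), 3))  # ~1.351
```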
This particularity is shared with Nvidia's GPU Boost technology, which doesn't aim to improve performance per watt but to take advantage of each available watt to give a few extra percent. This is however the only point the two technologies have in common, with AMD taking pains to point out that its technology remains entirely deterministic: under identical conditions, all Radeon HD 7970 GHz Editions will behave in the same way.
This isn't the case with the GeForce GTX 600s, whose performance levels vary from one sample to the next for two reasons: Nvidia uses the actual power consumption, which differs from one GPU to the next, and validates its GPUs not at a common clock but at the maximum clock at which they can pass certain tests. In addition to a turbo, GPU Boost thus amounts to an automatic overclock, making it difficult to estimate card performance.
Energy consumption and performance/watt
Energy consumption
We used the test protocol that allows us to measure the energy consumption of the graphics card alone. We took these readings at idle on the Windows 7 desktop as well as with the screen in standby so as to observe the impact of ZeroCore Power. Under load, we took our readings in Anno 2070, at 1080p with all the settings pushed to their max, and in Battlefield 3, at 1080p in High mode:
While our Radeon HD 7970 GHz Edition is particularly economical at idle, which means that it is only very slightly affected by current leakage, its energy consumption under load is significantly higher than that of the original Radeon HD 7970.
We've shown the energy consumption readings graphically, with fps per 100W to make the data more legible:
[ Anno 2070 1080p Max ] [ Battlefield 3 1080p High ]
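The efficiency metric used in these graphs is simply frames per second per 100 watts, computed from the readings above; the fps and wattage figures below are made up for illustration:

```python
# fps per 100 W: the efficiency figure plotted in the graphs above.
def fps_per_100w(fps, watts):
    return fps / watts * 100

# Hypothetical card rendering 60 fps while drawing 250 W:
print(round(fps_per_100w(60, 250), 1))  # 24.0
```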
These graphs highlight the reduction in energy efficiency that results when the TDP is maximised using voltage increases. Its efficiency is a good deal better than the previous generation's, but the Radeon HD 7870 and above all the GeForce GTX 670 remain more efficient.
Note however that each game represents a particular case and that efficiency varies from one card sample to the next: on the Radeons because their energy consumption varies, and on the GeForces because their maximum clock, and therefore their performance, varies. Here, the GeForce GTX 680 went up to 1110 MHz and the GeForce GTX 670 up to 1084 MHz.
Noise levels and GPU temperature
Noise
To observe the noise levels produced by the various solutions, we put the cards in a Cooler Master RC-690 II Advanced case and measured noise at idle and under load. We used an SSD, and all the fans in the case, as well as the CPU fan, were turned off for the reading. The sonometer was placed 60 cm from the closed case and ambient noise was measured at +/- 20 dBA. Note that for all the noise and heat readings, we used the real reference design of the Radeon HD 7950, rather than the press card supplied by AMD.
While the GHz Edition is as quiet as the standard Radeon HD 7970 at idle, under load it gets louder, even exceeding the GeForce GTX 690 and drawing level with the GeForce GTX 590.
Temperatures
Still in the same case, we took a reading of the GPU temperature as reported by the internal sensor:
The Radeon HD 7970 GHz Edition is well cooled.
Readings and infrared thermography
Infrared thermography
For this test we used the new protocol described here.
First of all, here's a summary of all the readings:
The difference between the Radeon HD 7900s at idle is almost nothing.
Under load, while the Radeon HD 7970 GHz Edition's higher energy consumption results in higher noise levels, it doesn't have any impact on internal temperatures, as most of the hot air is expelled from the case.
Finally, hereís what the thermal imaging shows:
Strangely, in spite of the additional watts, the Radeon HD 7970 GHz Edition's power stage doesn't heat up any more than on the standard card. As the components and cooling system are identical, it's possible that the increased airflow compensates for the additional heat.
Overclocking and test protocol
Overclocking
Our test Radeon HD 7970 GHz Edition GPU clocked up to 1150 MHz and the memory up to 1700 MHz, increases of 9.5% and 13% respectively. These maximum clocks are in line with what standard Radeon HD 7970s are capable of attaining, with AMD drawing on their overclocking headroom in designing this GHz Edition.
It will be interesting to see how the new PowerTune does in overclocking situations, as this is something we weren't able to look at before publishing this report.
Test protocol
For this test, we used the protocol from our report on the GeForce GTX 670, which includes: Alan Wake, Anno 2070, Batman Arkham City, Battlefield 3, The Witcher 2 Enhanced Edition and Total War Shogun 2. We have however replaced F1 2011 with DiRT Showdown. All these games were tested with their latest patches, most of them being maintained via Steam/Origin.
We have decided to no longer use the level of MSAA (4x and 8x) as the main criterion for segmenting our results. Many games with deferred rendering offer other forms of antialiasing, the most common being FXAA, developed by Nvidia. There's therefore no longer any point in drawing up an index based on a given antialiasing level, which in the past allowed us to judge MSAA efficiency (something that can vary according to the implementation). At 1920x1080, we therefore carried out the tests at two different quality levels: extreme and very high, which automatically includes a minimum of antialiasing (either MSAA 4x or FXAA/MLAA/AAA).
We also no longer show decimals in game performance results, so as to make the graphs more readable. We nevertheless note these values and use them when calculating the index. If you're observant, you'll notice that the size of the bars also reflects this.
All the Radeons were tested with the Catalyst 12.7 beta drivers and all the GeForces with the 304.48 beta drivers, with the exception of the GeForce GTX 590 in Civilization V, following a bug between these new drivers and the latest patch of the game, which seems to affect the Fermi-generation GeForces. These new drivers give small gains in many of the games tested and a significant gain in Total War: Shogun 2 on the Nvidia side, finally correcting a performance problem that appeared with the latest patches.
We made sure to test the GeForce GTX 690, 680 and 670 at their minimum guaranteed specs in terms of GPU Boost. To do this, we played with the overclocking settings to reduce the base clock and slightly increase the power limit, so that the clock in practice would correspond to that of a card whose maximum turbo clock equals the official GPU Boost clock. Note that this isn't the same as turning GPU Boost off!
To recap, we took the opportunity of the report on the GeForce GTX 690 to introduce the X79 platform and a Core i7 3960X into our test system so as to benefit from PCI Express 3.0. Note that the activation of PCI Express 3.0 isn't automatic on the GeForce GTX 600s and requires a registry modification, which we of course made and which gives an average gain of +/- 2%.
Test configuration
Intel Core i7 3960X (HT off, Turbo 1/2/3/4/6 cores: 4 GHz)
Asus P9X79 WS
8 GB DDR3 2133 G.Skill
Windows 7 64-bit
GeForce beta 304.48 drivers
Catalyst 12.7 beta
Benchmark: Alan Wake
Alan Wake is a pretty well-executed title ported from console and based on DirectX 9.
We used the game's High quality level and added a maximum quality level with 8x MSAA and 16x anisotropic filtering. We carried out a well-defined movement and measured performance with Fraps. The game is updated via Steam.
The Radeon HD 7000s do pretty well in this game, easily leading the GeForce GTX 600s which suffer particularly at MSAA 8x. The Radeon HD 7970 GHz Edition gives a gain of 13% over the standard model at max quality.
Benchmark: Anno 2070
Anno 2070 uses a development of the Anno 1404 engine which includes DirectX 11 support.
We used the very high quality mode on offer in the game and then, at 1920x1080, pushed anisotropic filtering and post-processing to the max to make them very resource hungry. We carried out a movement on a map and measured performance with Fraps.
With its latest drivers, Nvidia has gradually gained on the Radeons which previously had a significant advantage in maximum quality mode. The Radeon HD 6990 performs very poorly here, which seems to be due to its very high energy consumption, pushing PowerTune to reduce the clock of its GPUs.
Benchmark: Batman Arkham City
Batman Arkham City was developed with a recent version of Unreal Engine 3 which supports DirectX 11. Although this mode suffered a major bug in the original version of the game, a patch (1.1) has corrected this. We use the game benchmark.
All the options were pushed to a maximum, including tessellation which was pushed to extreme on part of the scenes tested. We measured performance in Extreme mode (which includes the additional DirectX 11 effects) with MSAA 4x and MSAA 8x. The game is updated via Steam.
With the Catalyst 12.4s, AMD has finally, after more than four months, corrected a bug which affected the performance of the Radeons with MSAA in this game, allowing the HD 7970 to overtake the GTX 680 in 8x mode. With its latest drivers Nvidia has however improved performance with MSAA 4x. Note that AMD still doesn't support CrossFire in this title.
Benchmark: Battlefield 3
Battlefield 3 runs on Frostbite 2, probably the most advanced graphics engine currently on the market. A deferred rendering engine, it supports tessellation and calculates lighting via a compute shader.
We tested High and Normal modes and measured performance with Fraps, on a well-defined route. The game is updated via Origin.
The GeForce GTX 600s are particularly efficient at 1080p in Battlefield 3, allowing the GTX 680 to outdo the Radeon HD 7970 GHz Edition at high quality and to equal it at ultra quality. At 2560x1600 however the GHz Edition takes the lead.
Although only in DirectX 9 mode, the rendering is pretty nice, based on version 3.5 of Unreal Engine.
All the graphics options were pushed to a max (high) and we measured performance with Fraps, with MSAA 4x and then 8x.
With MSAA 8x, the GeForce GTX 680 and 670 struggle to measure up to the Radeons, particularly the GTX 680 which seems to lack memory bandwidth.
Benchmark: Civilization V
Pretty successful visually, Civilization V uses DirectX 11 to improve quality and optimise performance in the rendering of terrain thanks to tessellation, and to implement a special compression of textures via compute shaders, which allows it to keep the scenes of all the leaders in memory. This second usage of DirectX 11 doesn't concern us here, however, as we used the benchmark included with the game. We zoomed in slightly so as to reduce the CPU limitation, which has a strong impact in this game.
All settings were pushed to the max and we measured performance with shadows and reflections enabled. The game is updated via Steam.
Although the performance issue the Radeon HD 7000s were suffering from in this game has been corrected, the GeForce GTX 600s do even better, benefitting among other things from the new 300 series drivers, which bring a significant gain here. The Radeon HD 7970 GHz Edition thus trails the GeForce GTX 680.
Benchmark: Crysis 2
Crysis 2 uses a development of the Crysis Warhead engine optimised for efficiency, but adds DirectX 11 support via a patch, and this can be quite demanding. Tessellation, for example, was implemented abusively, in collaboration with Nvidia, with the aim of causing Radeon performance to plummet. We have already exposed this issue here.
We measured performance with Fraps on version 1.9 of the game.
The GeForce GTX 680 enjoys a small improvement with the latest drivers, which allows it to position itself between the Radeon HD 7970s.
Benchmark: DiRT Showdown
Codemasters' latest game, DiRT Showdown benefits from a slight development of the in-house DirectX 11 engine. In partnership with AMD, the developers have introduced advanced lighting which takes numerous sources of direct and indirect light into account to simulate global illumination. These additional options were introduced with the first patch of the game, which we used on our system. The game is updated via Steam.
To measure performance, we pushed all the graphics options to maximum and used Fraps on the game's internal benchmark tool.
Although the GeForce GTX 680 equals the Radeon HD 7970 at 1080p without advanced lighting, once this is turned on, its performance takes a dive, as Nvidia didn't have access to this patch early enough to offer specific optimisations for it. This should be corrected very soon.
Benchmark: Metro 2033
Still one of the most demanding titles, Metro 2033 brings all recent graphics cards to their knees. It supports GPU PhysX, but only for the generation of particles on impacts, a rather discreet effect that we therefore didn't activate during the tests. In DirectX 11 mode, performance is identical to DirectX 10 mode, but with two additional options: tessellation for characters and a very advanced, very demanding depth of field feature.
We tested it in DirectX 11, at maximum quality (including DoF and MSAA 4x), very high quality as well as with tessellation on.
No mono-GPU card allows you to play Metro 2033 at 1080p comfortably at maximum quality. The GeForce GTX 600s suffer from a lack of memory bandwidth in this mode, which limits their performance.
Benchmark: The Witcher 2 Enhanced Edition
The Witcher 2 graphics engine has been worked on gradually over time, giving us the current version in the recent Enhanced Edition. Although it's based on DirectX 9, it's relatively demanding once all the graphics options are pushed to the maximum, one of them being particularly heavy: UberSampling. In reality it's a 4x supersampling type of antialiasing with a few optimisations.
We tested the game at maximum quality with and without UberSampling. Performance was measured with Fraps.
The latest AMD drivers give something of a boost in performance in this game: the GeForce GTX 670 falls behind the Radeon HD 7870, and the Radeon HD 7970s easily outdo the GeForce GTX 680.
Benchmark: Total War Shogun 2
Total War Shogun 2 has a DirectX 11 patch, developed in collaboration with AMD. Among other things, it gives tessellation support and a higher quality depth of field effect.
We tested it in DirectX 11 mode, with a maximum quality, MSAA 4x and MLAA. The game is updated via Steam.
After the update made at the end of March, the performance of the GeForce 600s dropped right off, with certain specific optimisations probably no longer working. This problem has been corrected with the 304.48 drivers, allowing Nvidia to move ahead with MSAA 4x.
Performance recap
Although individual game results are obviously worth looking at when you want to gauge performance in a specific game, we have also calculated a performance index based on all the tests, with the same weight for each game. We set the Radeon HD 7870 at an index of 100:
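The equal-weight index described above can be sketched as follows; each game's fps is expressed relative to the reference card and the ratios are averaged. The fps figures are invented for illustration:

```python
# Equal-weight performance index: average of per-game fps ratios against
# a reference card, scaled so the reference scores 100.
def performance_index(card_fps, reference_fps):
    ratios = [c / r for c, r in zip(card_fps, reference_fps)]
    return 100 * sum(ratios) / len(ratios)

hd7870 = [40, 55, 32]       # reference fps per game (hypothetical)
ghz_edition = [62, 80, 50]  # tested card fps per game (hypothetical)
print(round(performance_index(ghz_edition, hd7870), 1))  # 152.2
```

Weighting every game equally means a single outlier title (such as Total War: Shogun 2 before the driver fix) shifts the index by at most its one share of the average.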
[ 1920x1080 ] [ 2560x1600 ]
The Radeon HD 7970 GHz Edition gives an overall gain of 10% over the standard Radeon HD 7970, whether at 1080p or at 2560x1600. It has a 12% lead over the GeForce GTX 680 at 2560x1600.
This performance gain gives the Radeon HD 7970 GHz Edition the lead over the GeForce GTX 590 by a short head on average and allows it to close the gap on the Radeon HD 6990, consigning the previous gen bi-GPU to history.
Note that at 1080p, the GeForce GTX 680 is on equal terms with the Radeon HD 7970, while a few weeks ago, during our test of the GeForce GTX 670, the HD 7970 had the advantage. This improvement can be explained by the fact that Nvidia has finally corrected its performance problem in Total War: Shogun 2.
Conclusion
Not being able to proclaim victory for its Tahiti GPU against the GeForce GTX 680, which is based on a less complex GPU, was probably a source of frustration for AMD. In bringing out the Radeon HD 7970 at the end of 2011 and inaugurating TSMC's 28nm fabrication process, its engineers stole a four-month march on the competition, but weren't able to make the most of Tahiti's potential in terms of clocks.
This is what AMD has sorted out with the Radeon HD 7970 GHz Edition, taking advantage of the production know-how built up over the last six months and the refined PowerTune technology to up the clocks, improve performance by around 10% and reclaim the top performance spot for Tahiti. This version of the Radeon HD 7970 should be available at the beginning of July for between €470 and €500, a price/performance ratio similar to that of the original model and superior to that of the GeForce GTX 680, which is good news.
Not all is rosy, however. To make this increase in clock possible, AMD has maximised the use of the available TDP and increased the GPU voltage, causing energy consumption to go up more than proportionally. Nevertheless, this is only a problem indirectly. That a GPU in the very high-end segment uses 30 to 40W extra doesn't worry us particularly… as long as noise levels don't get too high. Unfortunately this is the main black mark against this Radeon HD 7970 GHz Edition, with AMD using the same cooling system as on the standard model. Noise levels were already rather high with this cooler, so they were bound to increase further here…
We would have liked AMD to put a sturdier cooling system in place to accompany the increased load, and to revisit the power stage components at the same time, as these tend to make some noise on far too many samples; this is annoying even if the sound readings aren't really affected.
At the end of the day then, to enjoy the highest-performance mono-GPU currently on the market in good conditions, we'll have to wait for the designs brought out by AMD's various partners. We don't however know what they'll do with this new variant of the Tahiti GPU, as they will also have the option of simply renaming certain overclocked products they already have on the market.
Copyright © 1997-2014 BeHardware. All rights reserved.