H.264, also known as AVC (Advanced Video Coding) or MPEG-4 Part 10, is one of the three main formats chosen for most HD video. Very resource-hungry, decoding this format is a challenge for most current computers, which are quickly brought to their knees by high-quality HD videos with very high bitrates. ATI and NVIDIA have announced hardware acceleration of this format on their latest cards. For ATI, it is only available from the Radeon X1000 series onward (with some limitations for the X1300 and X1600), whereas for NVIDIA it is available on all GeForce 6 and 7 cards (except the 6800, which is based on the NV40). The level of acceleration varies from card to card (variable level of assistance, bitrate or resolution restrictions, etc.).
March 31st also saw the release of the final version of a 100% software decoder that is supposed to be significantly more efficient: CoreAVC. Using 1080p H.264 videos, we compared this codec to the Cyberlink and Nero codecs. We installed the final versions of these codecs, which support both ATI and NVIDIA GPUs, in order to avoid the specific builds supplied to video player developers by ATI and NVIDIA to show off the performance of their latest GPUs. Those builds do not produce reliable results, as they deactivate some of the competitor's GPU functions, come from different development branches, etc. The test computer was equipped with an Athlon FX 55. Let's now take a look at the performance obtained. For the record, video quality was identical across the board.
The first video comes from Apple: the trailer for the movie Serenity.
CoreAVC's codec is clearly the most efficient compared to Cyberlink and Nero, at least as long as the latter two aren't GPU-accelerated. Nero's codec accelerated by ATI or NVIDIA delivers performance similar to Cyberlink accelerated by the Radeon X1900 XTX, but these solutions still consume more CPU than CoreAVC. Only one solution does better: the Cyberlink codec paired with the GeForce 7900 GTX, with a record-low CPU usage of 25%.
The second video is the Click trailer, which is far more demanding than the first.
CoreAVC is this time clearly ahead of Cyberlink and Nero. It is important to point out that these two are unable to render the video correctly, that is, at the right framerate and without stuttering. Once accelerated, playback is smooth with Nero's codec, and performance is identical on the Radeon X1900 XTX and the GeForce 7900 GTX. Cyberlink's codec accelerated by the Radeon X1900 XTX is slightly more efficient. Accelerated by the GeForce 7900 GTX, however, it takes a clear lead: this is the only combination faster than CoreAVC's 100% software solution.
ATI told us that the decompression engine used by Cyberlink's codec is still the one released in December, and that in the meantime (four months) a more efficient version has been integrated into the drivers. As it exposes a different interface, it cannot be used by Cyberlink's codec. The older acceleration engine has disappeared from the recently released Catalyst 6.4 (we used 6.3 for this test), which means those drivers no longer accelerate H.264 decoding at all. We will have to wait for Catalyst 6.5 or 6.6 for the new engine to be exploited and/or exploitable. Cyberlink apparently doesn't want to change its codec, so ATI will have to find a way to route calls intended for the previous engine to the new one.
Last but not least, we also measured each solution's power consumption:
First thing to note: the 7900 GTX's idle power consumption is a few watts higher than the X1900 XTX's. We therefore had to measure software-decoding power consumption with each of the two cards installed in order to obtain reliable results.
CoreAVC's power consumption is 16 watts lower than Cyberlink's. Once Cyberlink is accelerated by the X1900 XTX, overall power consumption increases by 7 watts. With NVIDIA, the situation is reversed: power consumption drops by 23 watts once acceleration is activated! In relative terms, NVIDIA thus allows the lowest power consumption.
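The per-card baseline correction can be sketched as follows. The absolute wattages in this example are hypothetical, chosen only for illustration; the +7 W and -23 W deltas are the ones measured here.

```python
# Sketch of the measurement method described above: since the two cards
# differ in idle draw, software-decode power is measured separately with
# each card installed, and only the delta (accelerated minus software)
# is compared between vendors.

def acceleration_delta(software_watts: float, accelerated_watts: float) -> float:
    """Positive: GPU acceleration raises total system draw; negative: lowers it."""
    return accelerated_watts - software_watts

# Hypothetical software-decode baselines (assumed absolute readings):
sw_with_x1900 = 180.0   # system with the X1900 XTX installed
sw_with_7900 = 184.0    # the 7900 GTX idles a few watts higher

# Hypothetical accelerated readings, consistent with the measured deltas:
accel_x1900 = 187.0     # ATI acceleration: +7 W overall
accel_7900 = 161.0      # NVIDIA acceleration: -23 W overall

print(acceleration_delta(sw_with_x1900, accel_x1900))  # 7.0
print(acceleration_delta(sw_with_7900, accel_7900))    # -23.0
```

Comparing the two absolute readings directly would unfairly penalize the card with the higher idle draw; the deltas are what make the vendors comparable.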
To finish, we must emphasize that this is just a preview based on trailers and that things may evolve with driver and codec updates. Results will only become truly representative once real HD content is available: the actual bitrates aren't yet precisely known, and they will strongly influence decoding performance. The performance of CoreAVC and of the Cyberlink / NVIDIA pairing is in any case excellent, and a future version of CoreAVC with GPU acceleration is a really promising prospect.