H.264 encoding - CPU vs GPU: Nvidia CUDA, AMD Stream, Intel MediaSDK and x264
by Guillaume Louel
Published on August 18, 2011

It’s difficult to provide a succinct conclusion to so many tests, but one point does stand out: GPU acceleration of H.264 transcoding isn’t on a par with encoding carried out by CPUs.

What these solutions bring most of all is frustration. Whether from NVIDIA, AMD or Intel, speed has been prioritised to the detriment of quality. It's rather surprising to see that with software such as Arcsoft's MediaConverter or Cyberlink's MediaEspresso, the CPU encoders systematically give better quality results than the integrated GPU encoders!

It's also extremely annoying to note that in the case of GeForce and Radeon encoding, there’s no difference in speed between graphics cards costing 100, 170 or 330 euros. Quality is strictly identical from one card to another – except in the case of bugs – and encoding times are no different from one card to another either. In no case was the GPGPU power of our graphics cards fully used.

The use of graphics cards in such tasks still requires the help of a CPU. Even when GPU decoding and encoding are both used, with Cyberlink for example, CPU core occupation is 100% for one core with the Radeons or the Intel HD 3000, 100% for two cores with the GeForces and as many as four cores at 100% occupation with Arcsoft.

Another important point is that decoding carried out by the GPU often puts a brake on the performance of GPU encoders. Once again, this is logical: H.264 decoding isn't carried out by the GPU's processing units but by a dedicated ASIC designed to decode videos such as Blu-rays. This decoding doesn't need to be done extremely fast, as playback of such media is in real time. In spite of its faults, MediaCoder proves that to get the most out of GPU encoders in terms of speed (without, for all that, creating any variation between our differently priced cards…), you have to use a multithreaded CPU decoder.

It's difficult to recommend any one of the three GPU transcoding solutions over the others. The Arcsoft application offers the encoder which, apart from x264, gets the best scores, and it is also extremely fast. Visually, results with the Arcsoft encoders are blurred, but they may be sufficient for mobile devices if you're not too fussy. These results were however obtained solely with the Media Converter CPU encoder: its CUDA version is literally a nightmare, and its Radeon version, though higher quality, is limited (like the rest) to the baseline profile, which can't compete with the higher H.264 profiles. Moving on to the Intel/MediaSDK version, although it obtained high SSIM and PSNR scores, it doesn't measure up to the naked eye, and important parts of the image (faces and so on) are very blurred.
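As a reminder of what the objective metrics used in these tests measure, PSNR compares encoded frames to the source pixel by pixel: higher is better, and the score drops quickly as pixels diverge. A minimal pure-Python sketch (the four-pixel "frames" below are made-up toy data, not values from our tests):

```python
import math

def psnr(ref, enc, max_val=255):
    """Peak Signal-to-Noise Ratio in dB between two equally sized
    8-bit frames, given as flat lists of pixel values."""
    assert len(ref) == len(enc)
    mse = sum((a - b) ** 2 for a, b in zip(ref, enc)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10(max_val ** 2 / mse)

# Toy example: a four-pixel "frame" and a slightly distorted encode
ref = [100, 150, 200, 250]
enc = [101, 149, 202, 248]
print(round(psnr(ref, enc), 1))  # → 44.2
```

Note that, as observed with the Intel/MediaSDK encoder, a high PSNR (or SSIM) score doesn't guarantee that the image looks good to the naked eye; these metrics average distortion over the whole frame and can miss localised blurring on faces and other important details.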

The Cyberlink application rules itself out by offering only baseline-profile H.264 encoding, which isn't up to handling action scenes. The numerous implementation bugs in the Cyberlink and MediaSDK software don't help either (white and black squares make the encoded files unusable), and the fact that the encoders are unable to insert an I-frame when a new scene starts creates disagreeable flickering. The quality of the NVIDIA and AMD renderings is okay, which does at least represent some progress on the Arcsoft application for the GeForces. In practice however, the use of the baseline H.264 profile remains a handicap that is impossible to compensate for.

MediaCoder is the fastest, the most configurable and the most efficient of the GPU encoding applications. You'll have to put up with the advertising however, and the fact that the files it produces drop frames is a more serious issue. At least it's free. From a qualitative point of view, the NVIDIA encoder has the advantage here over the Intel, which tends to blur frames a little too much.

The StaxRip/x264 combination wins hands down for quality. With an equal number of passes, the 'faster'/'fast' modes generally do as well as, if not better than, the rest of the encoders tested here. If you only retain one thing from this article, make sure you remember that the simplest way of increasing quality is simply to add a second pass. It is all the more annoying that NVIDIA, AMD and Intel could easily offer this second pass in their development kits and thus miraculously homogenise quality throughout their encoded videos.
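For reference, the two-pass workflow recommended above maps directly onto the x264 command line (the file names and the 4000 kbps bitrate here are illustrative examples, not settings from our tests): the first pass only gathers statistics about the video, and the second uses them to distribute bits where the picture needs them.

```shell
# Pass 1: analyse the video and write statistics; the output is discarded
x264 --preset fast --pass 1 --bitrate 4000 --stats avatar.stats \
     -o /dev/null input.y4m

# Pass 2: encode for real, using the statistics to allocate bitrate
x264 --preset fast --pass 2 --bitrate 4000 --stats avatar.stats \
     -o output.264 input.y4m
```

This is exactly what StaxRip drives under the hood in its 2p modes, and it is the mechanism the GPU vendors' SDKs lack.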

Of course, you pay for this superior quality with higher encoding times, roughly equivalent to real time at 1080p (around one hour of encoding for one hour of film in 'veryfast' 2p mode with Avatar, for example). You can also trade quality for speed: some users will find the 'faster' 1p modes a more attractive compromise, more or less halving encoding time, but below this quality really does start to suffer.

In trying to sum up what AMD, NVIDIA and Intel are offering via these third-party applications, we do owe it to ourselves to make a few remarks. Firstly, the AMD encoder is anything but stable. The applications using it crashed on numerous occasions, something we were able to reproduce by launching the AMD transcoding interface manually (accessible via the CCC control panel). By limiting the encoder to the baseline profile, here again AMD isn't giving itself any real chance of decent overall quality. This is particularly regrettable as the quality in static scenes is often pretty good. Implementing the high H.264 profile would be a good idea.

NVIDIA possibly has the most advanced SDK and its results are often the best when it comes to GPU acceleration. Nevertheless, the visual quality remains poor, to the point where the pure CPU solutions are often able to give an equivalent quality/encoding time ratio. What’s more, power consumption of these graphics cards is very high.

The Intel offer is the most surprising of the lot. Video encoding is managed by dedicated units added to the GPU, which also seem to accelerate a large part of the decoding. With very low CPU occupation, its energy consumption is by far the lowest. On the downside, you need an H67 or Z68 motherboard to run it, which greatly reduces the potential user base. Even if you do have one, however, the visual quality of the encoded files frankly leaves too much to be desired for the solution to be really usable.

At the end of the day, the marketing promises made for GPGPU transcoding haven't been kept. The manufacturers highlight the speed of their solutions as an answer to a very real problem: the excessive amount of time CPUs need to encode video on their own. But by offering rapid encoding with quality that leaves too much to be desired, GPGPU H.264 encoding remains, as yet, a poor answer to a real problem.


Copyright © 1997- Hardware.fr SARL. All rights reserved.