AMD A8-3850 and A6-3650: staking it on the APU
by Guillaume Louel
Published on August 28, 2011

GPU architecture

The GPU used in the Llano APUs isn’t new either: it’s the same as the one used in the Radeon HD 5570 (Redwood). It uses the traditional Radeon HD vec5 architecture that we’ve covered many times here, and it is of course a DirectX 11 graphics chip.

The A8-3850 GPU uses all 5 of the SIMD groups that equip the Redwood chip, which gives us 80 vec5 units (counted as 400 shaders by AMD), clocked at 600 MHz. The GPU in the other APU launched alongside it, the A6-3650, has been slightly cut down to 4 SIMD groups, for a total of 64 vec5 units (320 shaders), with the clock dropping to 443 MHz. Compared to Redwood, this new “Sumo” adds UVD3, which brings accelerated decoding of MPEG-4 Part 2 (XviD) and MVC (Blu-ray 3D).
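The figures above can be turned into peak theoretical throughput. This is a back-of-the-envelope sketch: the GFLOPS numbers assume one multiply-add (2 FLOPs) per shader per cycle, which is the usual way AMD quotes peak throughput for this architecture.

```python
# Peak throughput for the two Llano GPUs described above.
# Unit counts and clocks come from the article; the 2-FLOPs-per-cycle
# assumption (one MAD per shader per clock) is how AMD rates these chips.

def peak_gflops(vec5_units, clock_mhz):
    shaders = vec5_units * 5                  # AMD counts each vec5 unit as 5 shaders
    return shaders * 2 * clock_mhz / 1000.0   # 2 FLOPs (MAD) per shader per cycle

a8_3850 = peak_gflops(80, 600)    # 5 SIMD groups -> 80 vec5 units at 600 MHz
a6_3650 = peak_gflops(64, 443)    # 4 SIMD groups -> 64 vec5 units at 443 MHz

print(f"A8-3850: {a8_3850:.0f} GFLOPS")   # 480 GFLOPS
print(f"A6-3650: {a6_3650:.1f} GFLOPS")   # 283.5 GFLOPS
```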

In terms of memory, on desktop only main memory, shared with the CPU (up to 512 MB allocated), is used. In contrast to the 890GX motherboards, the “sideport” option, which allowed a dedicated memory chip to be added to the motherboard, has disappeared.

The GPU is directly linked to Llano’s 128-bit memory controller, which is shared between the CPU and the GPU. Llano also has 32 PHY lanes. 24 of them can be configured as PCI Express lanes, allowing one x16 port (or two x8) plus an x4 port, with the last x4 serving as the interconnect with the chipset. The remaining 8 lanes are reserved for display outputs.
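The lane budget described above can be written out explicitly. The names in this bookkeeping sketch are ours, not AMD's; only the lane counts come from the article.

```python
# Llano's 32 PHY lanes, as allocated in the article. Names are illustrative.
phy_lanes = {
    "pcie_gpu": 16,       # one x16 port (or two x8) for a discrete graphics card
    "pcie_gpp": 4,        # general-purpose x4 PCI Express port
    "chipset_link": 4,    # x4 interconnect with the chipset
    "display": 8,         # lanes set up for screen outputs
}

assert sum(phy_lanes.values()) == 32   # the full PHY budget
print(phy_lanes)
```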

Indeed, this is where things get complicated because there are numerous options in terms of screen connectivity. In theory, Llano can support up to 6 screens, but only two can be on at any one time.

Configuring the motherboard outputs varies from one manufacturer to another, however. Not all of them offer a DVI port with Dual Link mode (required for 2560x1600 and 120 Hz/3D screens). On most boards you can’t use the DVI and HDMI outputs at the same time, a restriction linked to the switches used, which are often shared between these two outputs, though this isn’t an obligation (as some Gigabyte products show). So if you’re envisaging a dual-screen set-up on this type of platform, we strongly advise you to download the manual of the motherboard you’re considering before buying.

Shared memory, CPU <-> GPU bandwidth

We tried to measure the impact of memory bandwidth on CPU <-> GPU transfers, which are in fact transfers between main memory and video memory, two different spaces within the same shared memory. For this we used the OpenCL application already used in our PCI Express article.

You can compare the scores to those for a discrete graphics card on this page. They're pretty good in absolute terms, a little above what you get via an x8 port. Memory speeds upwards of 1333 MHz do have an impact, but a relatively modest one. Note that at AFDS, AMD brought up preventing useless memory copies via a mechanism in the driver, something we'll come back to in another article.
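The measurement principle is simple: time a large copy and divide the number of bytes by the elapsed time. The article's real test used an OpenCL application moving data between host and device buffers; the sketch below substitutes plain host-to-host copies, so it only illustrates the timing method, not the actual CPU <-> GPU path.

```python
# Minimal bandwidth-measurement sketch: bytes copied / best elapsed time.
# Host-to-host copies stand in for the OpenCL host<->device transfers
# used in the article's actual test.
import time

def copy_bandwidth_mb_s(size_mb=64, repeats=8):
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)                       # one full copy of the buffer
        best = min(best, time.perf_counter() - t0)
    assert len(dst) == len(src)
    return size_mb / best                      # MB transferred per second

print(f"{copy_bandwidth_mb_s():.0f} MB/s")
```

Taking the best of several repeats filters out scheduler noise, which is also standard practice for this kind of micro-benchmark.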

Shared memory, the impact in practice

We looked in practice at the impact of shared memory bandwidth on the performance of the chip used on the A8-3850 APU.

[Interactive graph: gaming performance at different memory speeds, indexed]

Whatever the game used, the gains are relatively constant. Moving from DDR3-1333 to DDR3-1600 gives around a 10% performance increase, and DDR3-1866 can give gains of up to 20%, depending on the title. This shows once again what a difference the new Llano memory controller makes, even though, at the time of writing, DDR3-1866 costs around €20 more than standard memory.
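The indexed scale used in the graphs takes DDR3-1333 as the 100 baseline and expresses faster memory relative to it. In this sketch the frame rates are hypothetical; only the ~10% and ~20% ratios come from the results above.

```python
# Indexed performance as used in the graphs: DDR3-1333 = 100.
# The fps figures below are illustrative, not measured values.

def indexed(fps, baseline_fps):
    return round(100 * fps / baseline_fps, 1)

baseline = 30.0                        # hypothetical fps at DDR3-1333
print(indexed(33.0, baseline))         # 110.0 -> ~10% gain, as with DDR3-1600
print(indexed(36.0, baseline))         # 120.0 -> ~20% gain, as with DDR3-1866
```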


As in the Sandy Bridge test, we wanted to check how graphics performance varies under CPU load, a weak point of the Intel architecture (see here for the Core i3).

[Interactive graph: indexed graphics performance under CPU load]

Only CPU performance drops during gaming, which is to be expected: priority is given to the game running in the foreground, and game performance is virtually unchanged. AMD, we suppose, gives the GPU priority in its memory controller, in contrast to the way the Intel controller works (we’re still waiting for a response from Intel on the problem discovered with the Core i7s and then the i3s; we simply know that they were able to reproduce it and have confirmed its existence internally).


Copyright © 1997- Hardware.fr SARL. All rights reserved.