AMD A8-3850 and A6-3650: staking it on the APU - BeHardware
Written by Guillaume Louel
Published on July 22, 2011
After the launch of the first mobile A Series APUs, AMD has now officially launched the desktop processor models. Only two have been launched for now, the A8-3850 and the A6-3650, and we've tested them both for you.
For AMD, APUs are first and foremost the realisation of the Fusion strategy that followed the buyout of graphics card manufacturer ATI. The central idea behind Fusion was to create APUs: chips bringing together a standard x86 CPU (supplied by AMD) and a graphics processor (the ATI heritage). After the launch of the Brazos platform at the beginning of the year (designed for netbooks and their desktop equivalents), the A-Series APUs target a much higher volume market, the entry level segment.
32 nm, SOI, HKMG
The A-Series APUs also represent a technical challenge, as this is the first time that AMD and Global Foundries have come together to produce 32 nanometre chips (32nm-SHP, which combines Silicon on Insulator, immersion lithography and high-k dielectric materials).
The A8-3850 in CPU-Z
The AMD Llano project was however delayed on numerous occasions and by the time it had come to market, Intel was also offering processors with an integrated graphics core. Does the APU recipe make sense for desktops? This is what we're going to find out!
CPU architecture, new memory controller
On paper there doesn't seem to be much new in the Llano processor architecture. The x86 cores are the same as those used in the K10.5s, namely the architecture used for the Athlon II and the Phenom II. Like the Athlon IIs, the APUs have no third level cache, so a comparison between the two seemed apposite.
The memory system is what has evolved most, with changes to the memory cache. Firstly, the second level cache has been increased in size to 1 MB per core (instead of 512 KB for the Athlon IIs).
In terms of cache performance, we measured the Athlon IIs as identical to the A8s: both scored reads of 92 GB/s and writes of 46 GB/s for the L1, and reads of 23 GB/s and writes of 18 GB/s for the L2. Note that the latency of the enlarged L2 on the A8 has gone from 3.2 to 3.6 ns.
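As a quick back-of-the-envelope conversion (our own arithmetic, not a figure from AMD), those latencies can be expressed in core clock cycles at the 2.9 GHz the two chips share:

```python
# Convert cache latency from nanoseconds to core clock cycles.
# The nanosecond figures are the ones measured above; the cycle
# counts are our own conversion, not vendor specifications.

def latency_cycles(latency_ns: float, clock_ghz: float) -> float:
    """A cycle at f GHz lasts 1/f ns, so cycles = latency_ns * f."""
    return latency_ns * clock_ghz

print(round(latency_cycles(3.2, 2.9), 1))  # Athlon II L2: ~9.3 cycles
print(round(latency_cycles(3.6, 2.9), 1))  # A8 enlarged L2: ~10.4 cycles
```

In other words, the doubled L2 costs the A8 roughly one extra cycle per access.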
A new memory controller?
The changes don't stop there however, as although AMD only mentions developments to the prefetchers, in practice the changes go much further, with a new 128-bit (dual channel 64-bit) memory controller used for the A-Series APUs. This is a well-hidden little secret, as it's reported to be the same memory controller that AMD is using for the FX processors (Zambezi).
So what's new? First of all let's look at the impact of the memory speed used on latency, comparing the A8-3850 to the Athlon II X4 635. Both processors are clocked at the same frequency (2.9 GHz) and, remember, have no third level cache. The A8-3850 controller supports memory clocked up to 1866 MHz, as long as only one memory module is used per channel.
The Athlon II has a slight advantage but the A8-3850 does perfectly okay. Next we measured single-threaded bandwidth using Aida64:
In terms of reads there's a small dip in efficiency with the slower memory, but performance is up. At 1600 MHz, the A8-3850 takes the lead.
Historically writes were the weak point for the K10.5s and the jump in performance is very much in evidence here. Great news! What about the multithreaded scores?
Here we used RightMark Memory Tester and the read gains are there for all to see. Beyond 1066 MHz the bandwidth rises fast, which is good news in theory!
We used the RMMT Write NT mode which allowed us to bypass the cache system to maximize memory bandwidth. While the Athlon II hits a wall, the A8-3850 takes off.
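Aida64 and RightMark are closed Windows tools, but the principle behind these read/write bandwidth measurements can be sketched in a few lines. This is a crude stand-in of our own: real benchmarks use hand-tuned assembly, non-temporal stores for the cache-bypassing case and pinned threads, so treat it only as an illustration of the method.

```python
import time

def copy_bandwidth_gbps(size_mb: int = 256, repeats: int = 5) -> float:
    """Time copies of a buffer far larger than the caches and
    report the best observed bandwidth in GB/s."""
    src = bytearray(size_mb * 1024 * 1024)
    best = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)  # one full read pass plus one full write pass
        dt = time.perf_counter() - t0
        # Count both the read and the write traffic.
        best = max(best, 2 * len(dst) / dt / 1e9)
    return best

if __name__ == "__main__":
    print(f"~{copy_bandwidth_gbps():.1f} GB/s")
```

Taking the best of several runs, as above, is standard practice: it filters out interference from the OS scheduler.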
Impact of memory in practice
What does this translate to in practice? We looked at three applications on which memory speed has widely varying impact: Cinebench (little impact), Dirt 3 (somewhat impacted) and 7-Zip (big impact).
The applications which aren't much impacted by an increase in memory speed are fairly easy to spot. It's mainly the lower latency, when clocks are increased, that creates the difference here.
Memory bandwidth, especially during reads, already has more of an impact on a game like Dirt 3 (we used an additional graphics card for this test). The percentage gains are already a little higher with the A8.
7-Zip was the most demanding application used in terms of memory. Here the gains are more noticeable again. Beyond 1333 MHz the Athlon II hits a ceiling, with just a 7.7% gain from DDR3 1066 to 1600 MHz. With the A8 thereís a 24.2% gain from 1066 to 1600 MHz!
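For clarity, the percentage gains we quote are simple relative differences between benchmark scores. The raw scores below are hypothetical, chosen only to reproduce the two figures above:

```python
def gain_pct(slow: float, fast: float) -> float:
    """Relative gain, in percent, going from the slower to the faster score."""
    return (fast / slow - 1) * 100

# Hypothetical raw scores reproducing the DDR3 1066 -> 1600 gains quoted:
print(round(gain_pct(1000, 1077), 1))  # Athlon II X4 635: 7.7 (%)
print(round(gain_pct(1000, 1242), 1))  # A8-3850: 24.2 (%)
```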
GPU architecture, shared memory
The GPU used in the Llano APUs isn't new either: it's the same as the one used for the Radeon HD 5570 (Redwood). It uses the traditional Radeon HD vec5 architecture that we've covered many times here. We are of course talking about a DirectX 11 graphics chip.
The A8-3850 GPU uses the 5 groups of SIMD units that equip the Redwood chip, which gives us 80 vec5 units (counted as 400 shaders by AMD). It's clocked at 600 MHz. The GPU in the other APU launched alongside the A8, the A6-3650, has been slightly cut down to just 4 SIMD groups, giving a total of 64 vec5s (320 shaders), with the clock dropping to 443 MHz. In comparison to Redwood, this new "Sumo" now includes UVD3, adding accelerated decoding of MPEG-4 Part 2 (XviD) and MVC (Blu-ray 3D).
In terms of memory, on desktop only the main shared memory (up to 512 MB) will be used. In contrast to the 890GX motherboards, the "sideport" option, which allowed for the addition of a memory chip to the motherboard, has disappeared.
The GPU is directly linked to the 128-bit Llano memory controller, which is common to and shared by the CPU and the GPU. There are also 32 PHY lanes: 24 of them can be configured as PCI Express lanes, allowing support for one 16x port (or 2x 8x) plus a 4x port, with the last 4x serving as the interconnect with the chipset. The 8 remaining lanes are set aside for display outputs.
Indeed, this is where things get complicated because there are numerous options in terms of screen connectivity. In theory, Llano can support up to 6 screens, but only two can be on at any one time.
Configuring the motherboard outputs varies from one manufacturer to another however. They don't all offer a DVI port with Dual-Link mode (required for 2560x1600 and 120 Hz/3D screens). With most you can't use the DVI and HDMI outs at the same time, a restriction linked to the switches used, which are often shared between these two outs, though this isn't strictly obligatory (as some Gigabyte products show). So if you're envisaging a dual screen set-up on this type of platform, we strongly advise you to download the manual for the motherboard you're thinking of buying first.
Shared memory, CPU <-> GPU bandwidth
We tried to measure the impact of memory bandwidth on CPU <-> GPU transfers, which are in fact transfers between central memory and video memory that live in different parts of the same physical memory. We used an OpenCL application already used in our PCI Express article for this.
You can compare the scores to those for a discrete graphics card on this page. They're pretty good in absolute terms, a little above what you get via an 8x port. Although memory speeds upwards of 1333 MHz do have an impact, it's relatively modest. Note that at AFDS, AMD brought up the question of avoiding useless memory copies via a mechanism in the driver, something we'll come back to in another article.
Shared memory, the impact in practice
We looked in practice at the impact of shared memory bandwidth on the performance of the chip used on the A8-3850 APU.
Whatever the game used, the gains are relatively constant. Moving up from DDR3 1333 to DDR3 1600 gives around a 10% performance increase. Using DDR3 1866 can give a gain of up to 20%, depending on the title you're playing. This shows once again what a difference the new Llano memory controller makes, even though, at the time of writing, DDR3 1866 costs around 20 euros more than more standard memory.
CPU vs IGP
As in the Sandy Bridge test, we wanted to check variations in graphics performance against CPU load, a weak point on the Intel architecture (see here for the Core i3).
Only CPU performance drops during gaming, which is to be expected: priority is given to the game running in the foreground, and game performance is virtually unchanged. AMD gives the GPU priority in its memory controller, in contrast, we suppose, to the way the Intel controller works (we're still waiting for a response from Intel to the problem discovered with the Core i7s and then the i3s; we simply know that they were able to reproduce it and have confirmed the existence of the problem internally).
The platform, Dual Graphics
The desktop version of the AMD Llano platform is called Lynx. It of course brings together the APUs with new motherboards and a new chipset (southbridge).
Only two processor models have as yet been launched, the A8-3850 and the A6-3650. They both have a TDP of 100 watts.
Note the absence of Turbo mode, reserved for the A8-3800 and A6-3600 versions with a TDP of 65 watts, which have been announced but which arenít yet available.
Placing a graphics chip in the processor requires a new socket, something AMD has had to reconcile itself to. Named the FM1, it has 905 pins and can be fairly easily distinguished by the square hole in the middle of the socket, where there are no pins, which is reminiscent of older sockets. From a physical point of view, the processors are identical in size and AM3 heatsinks are still perfectly compatible with FM1, as the fixture system is the same.
Two southbridge type chipsets are available for the platform, the A75 and A55. There are two notable differences between these versions, the A75 having 6 x 6Gbps Serial ATA ports (6 x 3Gbps ports for the A55) as well as 4 native USB 3.0 ports and 10 USB 2.0 ports (against 14 USB 2.0 ports for the A55). A bus AMD calls the UMI serves as the interface between the processor and the chipset. In practice this is a PCI Express 2.0 4x link, with the same method also used by Intel for its Sandy Bridge platforms.
AMD supplied us with an Asrock A75-Pro4 motherboard and we used it for most of the testing. Asus also sent us an F1A75-V PRO a bit later and this is what we used for the overclocking tests. Note that the BIOSes were particularly capricious on these boards, particularly when it comes to USB. We experienced problems getting the 3.0 ports to recognise USB keys and external hard drives, as well as keyboard problems. And this wasn't all…
Dual Graphics: the return of Hybrid Crossfire
AMD has brought Hybrid Crossfire back with the release of the A-Series APUs. In contrast to traditional CrossFire, which only lets you combine identical or near-identical GPUs (where necessary the driver cuts the clocks of cards with a common GPU but different clocks: a Radeon HD 5850 and HD 5870 in CrossFire, for example, are set up as 2x HD 5850s), Hybrid Crossfire lets you combine different GPUs with differing performance. This mainly concerns entry level cards, as the option was originally introduced for ATI graphics chipsets combined with a discrete entry level graphics card.
This is now known as Dual Graphics. As we've already said, for marketing reasons OEMs that use Dual Graphics may give the name of a different, virtual card to highlight the fact that they're using such a combination. This table summarizes the combinations on offer for desktop APUs.
Note that these names are given for purely marketing purposes as in practice, both GPUs (the 6550D from the A8-3850 and the discrete graphics card) appear in Windows. In games, the name of the graphics card that appears (and its deviceId) will be that of the card to which you have connected the screen.
This is the other issue we ran into in the BIOS with this platform. Ideally, for maximum efficiency in Hybrid Crossfire, you plug the screen into the graphics card. Unfortunately, things aren't that simple with current BIOSes: you have to activate Dual Graphics mode in the BIOS and then only the motherboard video out is functional when you start the system up. The procedure recommended by AMD to activate Dual Graphics is as follows:
- Plug the screen into the motherboard, activate Dual Graphics in the BIOS
- Install the drivers in Windows and turn on Crossfire
- Reboot, turn Crossfire off, turn off the system
- Plug the screen into the graphics card, boot Windows blind (the screen only comes on when Windows has started)
- Turn on Crossfire, reboot (still blind, for access to the BIOS you have to plug the screen into the motherboard again). Finally the machine is ready.
AMD says, thankfully, that it's working to simplify the process and, most importantly, to allow you to plug the video out into the graphics card when you boot in Dual Graphics mode. As things stand, OEMs selling such machines cannot decently advise you to plug the screen in anywhere other than the motherboard. We measured the difference in performance between a Radeon HD 6670 alone, and an HD 6550D + HD 6670 pairing plugged first into the motherboard and then into the graphics card:
For now we won't go into the fact that Dual Graphics can be slower than a graphics card on its own; we'll come back to this. In practice, the difference can be quite big: you get a gain of over 26% in Far Cry 2 at 1920x1080 when you use the 'right' out.
We carried out two distinct series of tests to measure, firstly, processor performance with an additional graphics card, and then graphics performance. We used the following platforms:
- AM3: Gigabyte GA-890GPA-UD3H
- FM1: Asrock A75-Pro4
- 1155: Asus P8Z68-V Pro
- Nvidia GeForce GTX 480
- Western Digital Velociraptor WD3000HLFS 300 GB hard drive
- 2 x 4 GB DDR3 G.Skill 2133 MHz (at 1600 MHz CL9)
- Windows 7 64-bit
We installed the latest graphics drivers, a beta 11.6 version was supplied by AMD for Llano and we used it for all the AMD GPUs.
We tested the Intel Core i3 2100 and Core i3 2120, which bracket the new AMD arrivals in price (110 and 130 euros respectively). On the AMD side, for comparison, we added the Athlon II X4 635 clocked at 2.9 GHz. This processor is no longer on sale, but its spec is very close to that of the A8-3850 (clock, no L3, architecture) and gives us an interesting comparison. The current AMD offer in this price range takes the form of the Phenom IIs, with the 955, 965 and 975 costing around 100, 125 and 150 euros.
On the graphics side we tested, other than Llano performance, various entry level graphics solutions from AMD, namely the Radeon HD 5450, 6570 and 6670, the last two alone and in Dual Graphics mode. We also added the 890 GX (the current graphics chipset used with AM3) as well as the Intel HD 2000 and HD 3000 graphics cores, tested using the Core i3 2100 and 2105. We also added a Radeon HD 5570 clocked at the frequencies of the HD 6550D (600/800), so as to measure the impact of integration.
CPU: Cinebench, x264
Letís begin our CPU tests with Cinebench, firstly in single-threaded mode:
No surprises here: while the A8-3850 is slightly up on the Athlon II X4 635, which is, remember, clocked at the same frequency, the higher-clocked Phenom IIs dominate on the AMD side. With their slightly more evolved architecture, the Core i3s are in the lead: they give higher performance on a single core. The A6 brings up the rear here (remember, it's on sale at the same price as the Core i3 2100).
Now letís move on to multithreaded performance.
The A8-3850 and A6-3650 outdo the equivalently priced Intel offer here. The Core i3s, to recap, are dual-core processors with HyperThreading technology: although there are four threads, performance isn't identical to a true quad core. The Phenom IIs dominate in this test.
Let's now move on to an encoding test in x264. We used the Staxrip frontend on the 720p Avatar scene from our x264 encoder comparison, with 2 passes and the "Medium" profile.
The A8-3850 once again outperforms the Intel Core i3s, but once again the Phenom IIs dominate in terms of processor performance in this price range. To recap, the Phenom II 965 is available at the same price as the A8-3850 and the Core i3-2120 (around 130 euros). The A6-3650 is on an equal footing with its direct competitor: its two additional cores don't make up for the clock and architecture differences.
CPU: 7-Zip, MinGW
We used the 7-Zip LZMA2 compression mode to compress the different types of file (source code, images, raw data). This test is multithreaded and highly dependent on the memory used (DDR3 1600 MHz CL9 for all our platforms).
The AMD APUs are neck and neck with the Intel offer, with the A8-3850 slightly in front of the Core i3 2120, and the A6-3650 slightly behind the Core i3 2100. The domination of the Phenom IIs is however there for all to see, in spite of their inefficient use of the memory bandwidth.
Now to compilation with MinGW. Here we compiled the MAME source code, version 142u6, with the 64-bit MinGW environment (GCC 4.4.7). As the source code and tools have developed significantly since version 132, usually used in our CPU tests, the scores aren't comparable with our previous tests.
The Phenoms once again do best in this highly threaded test, but it's worth noting that the A6 does better than the Core i3s here. Let's now move on to the performance of these processors in games with an additional graphics card.
CPU: Arma 2, Dirt 3, GTA IV
To recap, we used a GeForce GTX 480 to isolate processor performance; we'll come back to the graphics performance of the AMD APUs on the next page. We tested the games at 1920x1200, without anti-aliasing, at high settings. For Arma 2 we used patch 1.10.
The Core i3s turn the tables on AMD here, dethroning even the best, more expensive Phenom IIs. The two APUs are significantly down in this test, even in comparison to the bottom-ranking Phenom II, the 955, which nevertheless costs the same as the A6.
Dirt 3 was tested in Ultra mode, with patch 1 installed.
The scores are higher but the trend is interesting as it's indicative of what would happen with a weaker graphics card. The A8-3850 opens up a gap on the Athlon II X4 635 here, but once again the Intel Core i3s have the last word. The A6 comes halfway between the A8 and the Athlon II.
GTA IV: Episodes from Liberty City
Patch 1.1.2 of the additional GTA IV content was installed.
The Phenom II 975 saves AMD's honour in this test, but at the same price, the Core i3 2120 is better placed. Once again, the APUs are significantly behind.
GPU: Far Cry 2, Dirt 3
Our GPU tests were carried out on different platforms but we have tried to keep CPUs with performance levels as close as possible to prevent them from having too much of an impact on our graphics tests.
Far Cry 2
We started with what is today a fairly old graphics engine, the one used in Far Cry 2. The latest patch was installed, as with all the other titles unless otherwise indicated. We used "Medium" mode in DirectX 9, as Low is too limited graphically speaking.
There are several points to make here. First of all, take the very poor 890GX performance: the chip built into this motherboard, a 'Radeon HD 4250', is a dated DirectX 10.1 GPU which didn't really evolve in the move to the 800 series generation. Next come the HD 2000 and 3000, still under the threshold of what you need to make the game playable; they make the HD 6550D look like a rocket in comparison. Even though its scores are rather low, you can play smoothly at 1280x720 without anti-aliasing, but not beyond, even though this could hardly be called a demanding graphics mode. Under the same conditions the A6-3650 offers 20% lower performance, with the game on the limit of what's playable. Dual Graphics mode gives you around ten frames per second more than the graphics card alone, which can be significant.
We used DirectX 11 Low mode for Dirt 3. The Intel HD 2000/3000 and the 890GX don't support this API and therefore don't feature in this test.
The first good news is that although Dirt 3 wasn't perfectly smooth, it was at least playable at 1920x1080 on the A8 and almost playable on the A6, which was however much more at ease at 1280x720. Note that while Dual Graphics does have an impact at 1920x1080, at 1280x720 performance is down on what you get with the graphics card on its own. Given that performance levels are already very high there, this isn't too much of a problem.
GPU: F.E.A.R 3, Duke Nukem Forever
Recently released, the latest opus in the F.E.A.R. series uses an improved version of the F.E.A.R. 2 engine. The game doesn't have any simple graphics presets and we set it up as follows: shadows low, textures medium, ambient occlusion off, transparency low.
While the game is playable at 1920x1080 on the A8-3850, it isn't really smooth unless you reduce the resolution. The A6 is around 20% down on the A8 once again and you have to go down to 1280x720. The HD 3000 is crushed by a factor of 2 in this test. Note that Dual Graphics continues to threaten to slow graphics performance.
Duke Nukem Forever
Finally released, DNF doesn't have any graphics presets either. We limited the world shadows and textures to medium.
Again, you can play at 30 frames per second on the A8-3850 APU but it is only really smooth once you reduce the resolution. The A6 is only 15% down on the A8 here: this title is less limited by processing power than by memory bandwidth. The HD 3000 has a big advantage over the HD 2000 here.
GPU: Metro 2033, Crysis 2
We finished our testing with two particularly demanding engines, firstly Metro 2033. We set the game to "low" DirectX 10 mode.
Things get significantly tougher here because even at minimum options and reduced resolution, Metro 2033 is barely playable on the AMD APUs.
Crysis 2 is the last benchmark in our comparison. We used patch 1.8, at the high quality option, which is actually the lowest available!
No miracles here. Even at low resolution, playability is strongly compromised on the APUs, and the Intel solutions aren't even in the picture.
Energy consumption, overclocking
We measured the energy consumption of the respective platforms with a wattmeter at idle, with Prime 95 and then with Prime 95 and Furmark.
The first positive is that AMD has done very well on energy consumption at idle. More than just the processor, the platform as a whole is economical at idle, giving AMD the edge over Intel. No surprises on the processor side under load, with the i3s and their two cores remaining the most economical. The HD 3000 and HD 6550D have pretty much identical energy consumption under load. The energy consumption of the A6-3650 is high, almost equivalent to that of the A8. This is down to the default voltages set by AMD: our A8 sample runs at 1.3875 V, while our A6-3650 was measured at 1.4125 V by default! We were able to bring the voltage of the A6 down to 1.075 V, which reduces energy consumption (CPU loaded with Prime95, 4 threads) to 86 watts.
We also measured energy consumption with 1, 2 and 4 threads on the A8-3850, A6-3650 and the Core i3 2105 using Prime95:
Loading the A8 with 1 thread adds 22W to the AMD A8 configuration, as against 20W for the i3. With 2 threads another 20W is added to the A8 but just 13 to the i3. With 4 threads, energy consumption only increases slightly on the i3, a dual core with HyperThreading, while another 35W is added to the AMD A8. Under lighter loads (1 and 2 threads) the AMD A8 therefore remains competitive with the Core i3 in terms of instantaneous energy consumption, but as the Intel processor is faster with a low number of threads, it returns to idle more rapidly. Energy consumption for the A6 is in line with that of the A8 because of its higher voltage.
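Summing the per-step deltas gives the total load power over idle for the A8-3850 (our own arithmetic on the wattmeter readings above, not an additional measurement):

```python
# Cumulative power above idle on the A8-3850, from the per-step
# wattmeter deltas quoted above: +22 W at 1 thread, +20 W more at
# 2 threads, +35 W more at 4 threads.
deltas = {1: 22, 2: 20, 4: 35}

total = 0
for threads, delta in deltas.items():
    total += delta
    print(f"{threads} thread(s): +{total} W over idle")
# With all 4 threads loaded the A8 draws 77 W more than at idle.
```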
Overclocking of the Llano platform can officially only be effected via the bus frequency. The motherboard BIOS nevertheless lets you increase or reduce the multiplier, up to 36x (instead of 28x).
In practice however, anything beyond 29x has no performance impact, even though Windows and CPU-Z then read a clock of 3.6 GHz, as you can see in the screenshot below. HWiNFO uses a different mechanism to read the clock (the clock is determined by a performance measurement, then the FSB is deduced by dividing the clock by the multiplier), which is why the value in our screenshot below is so strange.
In the strange 36x mode, the processor runs at 2.9 GHz in spite of indications to the contrary. A vestige of a disabled Turbo?
To overclock, you therefore have no choice but to increase the bus clock. Unfortunately all the other clocks (GPU, RAM, northbridge and PCI Express) are indexed to it, which can pose a certain number of problems. Without increasing the Vcore, we managed to reach a clock of 3.4 GHz (FSB at 116 MHz) on our A8-3850. By increasing the voltage to 1.4 V, you get a stable 3.6 GHz.
133 MHz seems to be the ideal speed for the FSB if you're to keep the rest of the system running smoothly, as new dividers kick in on some motherboards according to AMD (though they don't say which). With the default multiplier of 29x, that gives a clock of 3.86 GHz, which wasn't stable in our tests even when we increased the voltage to 1.5 V with air cooling.
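As a sanity check on these figures (our own arithmetic, not a tool's readout), the core clock is simply the bus clock times the multiplier, and HWiNFO's odd FSB value comes from running that relation backwards:

```python
def core_clock_mhz(fsb_mhz: float, multiplier: float) -> float:
    """Llano core clock = bus (FSB) clock x multiplier."""
    return fsb_mhz * multiplier

print(core_clock_mhz(100, 29))  # stock A8-3850: 2900 MHz
print(core_clock_mhz(116, 29))  # without extra Vcore: 3364 MHz (~3.4 GHz)
print(core_clock_mhz(133, 29))  # 3857 MHz (~3.86 GHz), unstable in our tests

# HWiNFO divides the measured clock by the reported multiplier, hence
# the strange FSB readout in the bogus 36x mode:
print(round(2900 / 36, 1))  # ~80.6 MHz
```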
The A6-3650 was relatively capricious in terms of overclocking, with a 133 MHz bus clock being what allowed us to overclock it most easily. The processor remained stable even when we increased the voltage with its default multiplier.
We're circumspect about the release of the A8-3850 and A6-3650 APUs on several points. Naturally, there's some good news. Firstly, the arrival, finally, of 32nm processors from AMD, and also the new memory controller, which is far more efficient and able to make proper use of fast DDR3 memory. The very low energy consumption of the FM1 platform at idle, lower than the equivalent Intel platform, is also something to be happy about.
In spite of this, when it comes to pure processor performance, the APUs have relatively low clocks (2.9 and 2.6 GHz), no Turbo and a K10.5 architecture that is on its last legs. Against the Intel offer, AMD can compete in very highly multithreaded applications, where the four 'real' cores generally do better than Intel's dual core chips with HyperThreading. When more power on fewer cores is required however, in gaming for example, the Intel offer dominates.
Of course these APUs aren't only CPUs, and in terms of integrated graphics AMD now has a big advantage: the HD 6550D often does twice as well as the HD 3000 used on the Core i3-2105, currently the best Intel graphics offer. This is good of course, but for many titles it isn't necessarily enough for playing at the native resolution of a 1920x1080 screen (the resolution of most screens on sale today), or even at 1280x720 in some, even with the options turned right down. The HD 6530D used in the A6-3650 is between 15% and 20% down on the A8, and this is often significant for gaming, limiting smoothness to less than what is acceptable. So, while there has been some clear progress, and anything that hastens the death of entry level graphics cards with their often shameful levels of performance is to be praised (what with the NVIDIA and AMD partners sacrificing already modest specs to bring prices down as low as possible), the performance of the new APUs doesn't really justify the "discrete class graphics" label yet and will continue to limit anyone who wants to use them for gaming.
The lack of OpenCL applications, in spite of AMD's calls at AFDS, doesn't work in favour of these APUs: although delayed, they are still ahead of a software ecosystem that is for the moment confined to some rather lightweight applications (see our report on accelerated video encoding).
The yield of Dual Graphics, or Hybrid Crossfire, for its part, very much depends on the games used and, as always, on driver profile updates (the CAP profiles are thankfully the same as those used for standard Crossfire). It's no surprise to see that the bigger the difference in performance between the IGP and the graphics card, the lower the yield can be, sometimes even going negative. The complexity entailed in plugging the screen correctly into the graphics card is something AMD simply has to correct, and quickly.
It's worth asking what the target group for these new APUs, in particular these two models (the A8-3850 and A6-3650), is. While we can see how an A4 (the dual-core models, not yet launched) would be perfectly suited to an entry level or home cinema PC, the A8-3850's target looks less well defined. With a price tag announced at 130 euros, it faces competition from AMD itself in the form of the much faster Phenom II X4 965 (3.4 GHz, L3 cache). Certainly, that CPU has no integrated graphics, but it can be paired with an 890GX motherboard if you're not looking to do any gaming. Because for gaming, even modest gaming, you still need to invest in a discrete card. The A6-3650 is in a similar quandary, competing with the Phenom II 955, whose price has been slashed by AMD over the last few months. Priced slightly too high and compromised on both the processor and graphics sides, the A8-3850 and A6-3650 are somewhat jacks of all trades, but should nevertheless be attractive to OEMs. We're impatient to see how the rest of the Llano APU range will be rolled out.
What about the longer term future? AMD has taken quite a risk with this APU in pioneering the combination of such a powerful GPU with a CPU and, as such, we salute the development. As things stand, these APUs look more convincing for the laptop market than for desktops; however, the range is likely to develop rapidly. As of 2012, we should see the arrival of Trinity, which will include Bulldozer cores on the CPU side as well as a GPU that has been announced as 50% more powerful. Combined with what we hope will be more numerous OpenCL applications, will Trinity make 2012 the year of the APU?
Copyright © 1997-2014 BeHardware. All rights reserved.