Report: AMD’s Radeon HD 3800 - BeHardware
Written by Damien Triolet
Published on November 15, 2007
The month of November is a very important period for AMD, which for the first time is launching a complete platform, Spider, including new CPUs (the Phenom), chipsets (the AMD 7 series), and graphics cards (the Radeon HD 3800). This last product is being introduced on November 15, a few days before the rest of the platform.
If you have been following closely, you probably know that competition between Nvidia and AMD is fierce in the performance market segment. Moreover, new video game releases coincide with the opportunity for these two manufacturers to offer more cost-efficient versions of their architectures, enabling them to sell a large number of high-performance graphics cards. As you may have realized, it's a very interesting market, and for this reason products and prices have been modified on several occasions in both camps.
Nvidia wanted to be sure to be the first to offer a high-performance product and released the GeForce 8800 GT, equipped with more active scalar processors than planned. AMD doesn't play with redundancy in the same way and couldn't enable extra functional units similarly. However, the manufacturer came up with a much smaller, and consequently cheaper and more easily produced, GPU, which enables it to rival Nvidia with very aggressive prices. AMD's prices varied a number of times and in fact were raised once more, as we write this article, at the product launch's press conference.
To counter the GeForce 8800 GT at 220-250 €, AMD is relying on the Radeon HD 3800, which will be offered from 180 to 230 €. This is less expensive than Nvidia's solution, but more than what AMD originally announced (160-200 €).
As we already mentioned in the GeForce 8800 GT test, the end of 2007 offers great deals to gamers, who will get high-end performance at lower prices. With the Radeon HD 3870, AMD will try to come close to the Radeon HD 2900 XT at half the price. Will it be enough to get back in the race, so easily dominated by Nvidia for a year now?
RV670 in 55nm and DirectX 10.1
RV670 in 55nm
As much as AMD is very conservative with the fabrication processes of its CPUs, the firm is aggressive in adopting TSMC's very latest manufacturing processes for its GPUs. Thus, the RV670, which powers the Radeon HD 3800, ushers in the 55 nanometer process. As is their habit, after 65 nanometers TSMC offers a "half node" process, an intermediate step between two full generations of production. The 55 nanometer process is therefore based on their 65 nanometer technology, updated for finer feature sizes.
As usual, the interest here is to lower production costs as well as power use – a point upon which AMD strongly insists.
The RV670 can be seen as a more efficiently designed R600 (Radeon HD 2900 XT). It thus inherits the R600's entire shader core, in other words 64 vec5 units to process shaders (AMD speaks of 320 scalar processors, as Nvidia does, which isn't really accurate) and 16 texturing units. You can take a look at our article on the Radeon HD 2900 XT to brush up on the details.
So what is different? Mainly the memory bus, now reduced to 256 bits from the R600's 512 bits. Next, there is the (supposed) improvement of the ROPs, which is only to be expected given that those of the R600 clearly suffered from problems that limited performance with antialiasing. AMD has always denied a problem of this type, responding that they opted for more flexibility from the start. However, you'll probably agree with us that voluntarily choosing to cripple a GPU's performance in an area that always put the Radeon brand ahead seems a bit unlikely. Whatever the case, we can assume AMD worked on this point, which would compensate somewhat for the reduction in bus width.
There are a few innovations. First of all, there is PCI Express 2.0 whose interest is yet to be proven even if AMD announces gains of 10% on the Radeon HD 3850 256 MB version. Next, there is UVD on a high performance card. And finally, AMD’s Radeon HD 3800 is the first card to support DirectX 10.1.
In the end, the 55 nanometer RV670 should be almost equivalent to the 80 nanometer R600 while having 34 million fewer transistors (666 versus 700 million) and half its die size. Indeed, the RV670 measures only 192 mm² versus 408 mm² for the R600! This is quite an improvement and should allow AMD to be more aggressive on prices.
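As a quick sanity check, the figures quoted above can be plugged into a couple of lines (our own back-of-the-envelope calculation, using only the numbers from the text):

```python
# Sanity check on the die figures quoted above (values from the text;
# the variable names are ours).
r600_transistors_m, r600_area_mm2 = 700, 408    # R600, 80 nm
rv670_transistors_m, rv670_area_mm2 = 666, 192  # RV670, 55 nm

area_ratio = rv670_area_mm2 / r600_area_mm2
# about 0.47: the RV670 is indeed roughly half the size of the R600

density_gain = (rv670_transistors_m / rv670_area_mm2) / \
               (r600_transistors_m / r600_area_mm2)
# about 2.0x more transistors per mm²

print(f"area ratio: {area_ratio:.2f}, density gain: {density_gain:.2f}x")
```

In other words, the shrink roughly doubles transistor density, which is exactly what makes the GPU so much cheaper to produce.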
DirectX 10.1
This light evolution of DirectX 10, like Shader Model 3.0 introduced by DirectX 9.0c, is an important argument for AMD because it's the first to offer its support. DirectX 10.1 and Shader Model 4.1 add small improvements in different areas, correcting deficiencies of the first version of DirectX 10, adding a little more flexibility, and giving developers control over antialiasing. If you are interested in the details, here they are:
So is DirectX 10.1 support important today? It's definitely a bonus, but developers are only starting to use DirectX 10 and will not be able to take advantage of this update for some time. This is all the more true given that we will have to wait for Windows Vista SP1 to see it appear.
Of course, Nvidia is loudly proclaiming that DirectX 10.1 is useless. The contrary would have been surprising, since they are late integrating it into their GPUs. In their telling, DirectX 9.0c was THE revolution, but DirectX 10.1 is of no interest. As usual, the reality is somewhere in between. However, in the GeForce 8800 GT's presentation, the manufacturer managed to have the head of Microsoft's gaming division affirm quite resolutely that this update to the API would be useless, almost anticipating the arrival of its support from AMD. A good dose of hypocrisy on Microsoft's part…
Whatever the case, it’s obvious we will have to wait to see if developers take interest before it will become a large advantage. In the meantime, this is still a positive point.
Specifications, the cards, the test
In terms of specifications, the GeForce 8800 GT is close to the GeForce 8800 GTX in calculation power and texture filtering; however, it is far behind in memory bandwidth, as its bus is limited to 256 bits. It's the same situation for the Radeon HD 3800 compared to the Radeon HD 2900 XT. The HD 3870 is slightly ahead and the Radeon HD 3850 slightly behind in terms of raw power.
The cards
AMD provided us with a reference Radeon HD 3870 card:
This card uses a double-slot cooling system which proved to be very quiet in 2D mode. It's slightly noticeable under load but still very satisfactory. The 512 MB of 0.8 ns GDDR4 memory comes from Samsung.
Powercolor provided us with a Radeon HD 3850:
The card is equipped with a fan that differs from the reference model. It is quieter in 3D mode but noisier in 2D, an unfortunate point. Overall, noise levels are acceptable and we are still more or less in the "low noise" category. Our HD 3850 was overclocked from 670/830 MHz to 720/900 MHz, overclocks of roughly 7 and 8% respectively. Of course, for the tests we set the card back to reference frequencies.
Good news for video fans: the Powercolor Radeon HD 3850 512 MB, announced at a price of 179 € by the manufacturer, is equipped with an HDMI connector.
Note that we encountered a problem with this card and certain power supplies: sometimes the card refuses to boot and reports a poor PCI Express power connection. This is probably a small bug preventing proper detection of this connection.
Power consumption
We evaluated the power consumption of the different cards. Measurements were taken at the wall socket in order to capture the total power draw through the power supply, in this case an Enermax Galaxy 850W.
A large advantage of the 65 nanometer process is greatly reduced power use. While consumption is still relatively high and a PCI Express power connector is still required, we are very far from the extremes of the GeForce 8800 Ultra and Radeon HD 2900 XT.
On a positive note, power consumption at idle is significantly decreased; the GeForce 8800 Ultra uses 50 watts more than this 8800 GT!
It's the same for the Radeon HD 3800. Here we only measured the consumption of the 3870, because the 3850 used in this article was an overclocked model from Powercolor reset to its original frequencies, which could have affected results.
At idle, the consumption of the Radeon HD 3870 is extremely low, even lower than that of the GeForce 8800 GT. This is great news: an idle Radeon HD 2900 XT wastes 60 watts more! To obtain this result, AMD brought Powerplay to desktop GPUs. 2D and 3D frequencies are thus different, and the GPU dynamically switches between them depending on the graphics load. If you are watching a game's introduction at 30 fps, the GPU doesn't need to run at the same frequency demanding gamers need to play at 100 fps, for example. We can only ask why this wasn't done earlier.
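The idea behind Powerplay can be sketched as a simple load-driven clock selector (a minimal illustration only, not AMD's actual algorithm; the threshold and both clock values are invented for the example):

```python
# Minimal sketch of load-driven clock selection in the spirit of Powerplay.
# The threshold and clock values are invented for illustration; the real
# mechanism lives in AMD's driver/firmware and is far more sophisticated.

CLOCK_2D_MHZ = 300   # hypothetical low-power desktop clock
CLOCK_3D_MHZ = 775   # hypothetical full-speed 3D clock

def select_core_clock(gpu_load: float) -> int:
    """Pick a core clock for a GPU load expressed in [0.0, 1.0]."""
    if gpu_load < 0.5:       # desktop work, video, or a lightly loaded game
        return CLOCK_2D_MHZ
    return CLOCK_3D_MHZ      # demanding 3D rendering

# A game intro capped at 30 fps barely loads the GPU...
print(select_core_clock(0.2))   # 300
# ...while full-speed gameplay pushes it to the 3D clock.
print(select_core_clock(0.9))   # 775
```

The point of the mechanism is simply that clock (and therefore power) follows load rather than staying pinned at the 3D maximum.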
The test
For this test, we used 10 games, four of which support DirectX 10. All tests were carried out in 1920x1200 and 1280x1024, with anisotropic filtering activated whenever the game offers the option. HDR was activated whenever available.
Finally, transparency/adaptive antialiasing was activated in multisampling mode, as Nvidia has just implemented a new and more efficient version in its drivers. For AMD, it is forced in many games and its activation is now possible via the control panel.
Of course, all updates to Windows Vista relative to performance were installed.
Configuration
Intel Core 2 Extreme QX6850
nForce 680i SLI EVGA
2 GB DDR2
Forceware 169.04 (169.05 for Crysis)
Catalyst 7.10 beta
While the first cards of any given generation are generally poorly equipped for HD video, AMD and Nvidia luckily take the time to correct this point in the following versions. Thus, while the GeForce 8800 GTS/GTX/Ultra and Radeon HD 2900 XT are not very efficient at accelerating HD video decoding, the GeForce 8800 GT and Radeon HD 3800 correct this defect. As you may recall, AMD has the advantage of fully handling VC-1 decoding in addition to H.264, while Nvidia only fully covers the latter.
We measured the performance of the different solutions when playing two HD DVDs, one in VC-1 (King Kong) and the other in H.264 (Babel). The test configuration changed here and was based on a Core 2 Duo E6400. On the software side, we remained in Windows Vista and used PowerDVD in version 3104, and in version 3501 for the Radeon HD 3800, which is incompatible with the previous version.
The new cards live up to their expectations here.
We also took a look at video quality and based our evaluations on the analysis tests of the HD HQV suite (go here for the details of these tests).
The GeForce 8800 GT behaves the same as the GeForce 8600 GTS. The rest of the GeForce line has problems with inverse telecine and doesn't manage to reproduce these videos in high enough quality.
The four Radeon HDs obtained a score of 100, the difference being in terms of digital noise reduction. For Nvidia, this is activated in the drivers' control panel with a setting between 0 and 100; set too high, it can produce visual defects. We settled on 51%. Note that this test is subjective and that for some videos it's best to be able to deactivate this effect, since grain can benefit the look of a movie. With the latest AMD beta drivers used for this test, you can also adjust the noise algorithm via Catalyst.
Enemy Territory: Quake Wars
While Quake Wars is based on the Doom 3 engine, it has undergone some evolution, such as megatexturing, which facilitates the work of artists; however, there is an additional cost in terms of decoding and accessing megatextures. In the end, Quake Wars is a little more resource heavy than Doom 3 or Quake 4.
We recorded a demo in a sequence against 4 bots. Given that artificial intelligence is not calculated during the timedemo, results are less affected by the CPU than in actual gameplay, or at least than when facing our bot adversaries.
All parameters were set to a maximum in the game, including 16x anisotropic filtering. Patch 1.2 was used.
In this first test, the GeForce 8800 GT finds itself halfway between the GTS and GTX. The Radeon HD 3870 is equivalent to the Radeon HD 2900 XT and therefore on the same level as the 8800 GT in 1280x1024 and the 8800 GTS in 1920x1200. The Radeon HD 3850 is 15 to 20% behind, Quake Wars being very dependent on memory bandwidth. On the other hand, it is far ahead of the GeForce 8600 GTS and Radeon HD 2600 XT.
Once antialiasing was activated, the performance of the GeForce 8800 GT falls, in theory because of its reduced memory bandwidth; however, this could also be a lack of memory, as is the case with the GeForce 8800 GTS 320. For the Radeons, the reduction in performance is more significant than for the GeForce line.
Half Life 2 - Episode 2
Still based on the Source Engine, Half Life 2 Episode 2 doesn't really bring anything new on the technological level. It simply optimizes and more heavily relies on the engine's capabilities, making the game more resource heavy than its predecessors. We play back a demo with all game options set to a maximum, including 16x anisotropic filtering.
The GeForce 8800 GT is on the heels of its bigger sibling and clearly puts some distance between itself and the 8800 GTS / Radeon HD 3850.
It's the exact same situation once 4x antialiasing is activated, except that Radeon HD performance is even more affected. In this case, the Radeon HD 3870 is equivalent to the GeForce 8800 GTS.
S.T.A.L.K.E.R.
We carry out an identical movement and measure the framerate with Fraps. The test was done in high quality, full dynamic lighting, and high detail, followed by another session in medium quality, full dynamic lighting, maximum details (16x anisotropic filtering) and foliage shadows. S.T.A.L.K.E.R. uses an engine based on deferred rendering, which is fundamentally incompatible with MSAA and makes the use of antialiasing impossible. A form of edge filtering carried out with a shader can be activated, but results are mixed. The 1.00004 patch is used.
The GeForce 8800 GT is situated between a GTS and GTX. The GeForce 8800 GTS 320 is far behind, 320 MB being insufficient for this game. Overall, the Radeons are behind and this game seems to prefer the GeForce. In this test, the Radeon HD 3870 is equivalent to the Radeon HD 2900 XT.
Rainbow Six: Vegas
The first PC game based on Unreal Engine 3.0, Rainbow Six: Vegas remains a very resource heavy game. We measure performance in the introductory scene. The HDR mode is activated as it is more or less obligatory: without it, banding is very noticeable. Shadows are set to "low" because higher quality in this domain lowers performance too much in certain areas.
Originally designed for the Xbox 360, this game seems to have a natural affinity for the Radeon HD, whose architecture is similar to the console's graphics chip. The Radeon HD 2900 XT and HD 3870 finish first for once. The GeForce 8800 GT is very close to the 8800 GTX, as calculation power is of high importance here.
The game does not support antialiasing, but Nvidia has implemented it in its drivers, as it did with Oblivion. This is in contrast to AMD, which unfortunately didn't make this effort. This time the GeForce 8800 GT sits halfway between the GTS and GTX. The GTS 320 does not have enough memory to launch the game in 1920x1200.
Oblivion
We recorded a specific movement so that it is always identical and the test reproducible. Of course, HDR was activated.
Given that Oblivion is very dependent on calculation power in HDR mode, the GeForce 8800 GT places close to the GTX and the Radeon HD 3870 finishes level with the Radeon HD 2900 XT.
Once 4x FSAA is activated, the two Radeon HD 3800s surpass the Radeon HD 2900 XT, all the more so the 3870, which finishes ahead of the GeForce 8800 GTX.
Colin McRae DIRT
To test Colin McRae's latest opus, which is very resource heavy, we carried out a well-defined sequence in high quality mode. Note that activating antialiasing is highly recommended given the way menus are rendered and because post-processing effects amplify aliasing. Patch 1.2 was used.
Once again, the GeForce 8800 GT places between the GTS and GTX. The Radeon HD 2900 XT and Radeon HD 3870 are on the same level as the GeForce 8800 GTS, and the HD 3850 is slightly behind.
No big change with antialiasing except that the 320 MB of the GeForce 8800 GTS 320 aren’t enough.
Bioshock
The first game based on Unreal Engine 3.0 to support DirectX 10, Bioshock has great graphics even in DirectX 9 mode while being less resource heavy than Rainbow Six: Vegas. We carry out a well-defined sequence of movement with all options pushed to a maximum.
In DirectX 9 mode, the Radeon HD 2900 XT is rather fast, very close to the GeForce 8800 Ultra. The Radeon HD 3800s display lower performance here, but the HD 3870 is nevertheless still ahead of the GeForce 8800 GT. The Radeon HD 2600 XT is ahead of the GeForce 8600 GTS, but these two cards remain largely behind.
Like with Rainbow Six: Vegas, Nvidia allows the activation of antialiasing for this game, which doesn't normally support it. AMD doesn't offer this option. For some unknown reason, it was impossible to activate antialiasing on the GeForce 8800 and 8600 GTS.
In DirectX 10 mode, GeForce 8 performance is similar to what we obtained in DirectX 9, while performance mysteriously plummets for the Radeon HD 2900 XT, and more moderately for the Radeon HD 3800. The 3870 therefore surpasses the HD 2900 XT.
Note that it is not yet possible to activate antialiasing in DirectX 10 mode, for either Nvidia or AMD.
Company of Heroes
Given that Company of Heroes received a DirectX 10 patch that brings genuine graphical improvements, we decided to add it to our test protocol. All graphics settings were pushed to a maximum except terrain details, which remained on High (Ultra mode is reserved for DirectX 10). Textures were also limited to High, because with Ultra textures the game reports a lack of system memory on the GeForce in DirectX 9 mode.
We run the integrated test on version 1.71.
The GeForce 8800 GT is just barely ahead of the Radeon and largely surpasses the 8800 GTS. The Radeon HD 3800s finish between the GeForce 8800 GTS and GT.
Once 4x antialiasing is activated, the performances of the Radeon HD 2900 XT plummet while those of the GeForce 8 line show consistent results.
In DirectX 10, performances are clearly inferior as the engine displays more advanced graphics. The Radeon HD 3870 surpasses the Radeon HD 2900 XT and places next to the 8800 GTS.
The standings are identical with antialiasing and a comparatively larger reduction in performance for the Radeons. Note that the GeForce 8600 GTS and Radeon HD 2600 XT quickly show their limits and are very far behind.
World in Conflict
Very resource heavy and with nice graphics, it's only natural that World in Conflict joins our test suite. We run the internal test with patch 1.0002. All game options are pushed to a maximum.
The Radeon HD 2900 XT and HD 3870 finish level with the GeForce 8800 GTS, at least in its 640 MB version. The 320 MB model is left behind due to insufficient memory.
With antialiasing 4x, the GeForce 8800 GT seems to be only slightly limited by its memory bandwidth and is not that far from the 8800 GTS. Radeon performances plummet here.
The performance of all cards is slightly reduced in DirectX 10, except for the Radeon HD 2900 XT, which is more affected. For this reason the two Radeon HD 3800s are ahead.
This is the one and only time that the GeForce 8800 GTS 640 MB surpasses the 8800 GT. A problem seems to affect Radeon performance, which is very low in these conditions and often similar at the two resolutions, which suggests a driver bug.
Crysis - SP demo
Although we don't usually use demos in our test suite, it's hard to pass up Crysis, a game that is so eagerly awaited and that redefines modern graphics quality. We run the internal test in DirectX 9 High mode (forced by adding –dx9 after the executable) and in DirectX 10 Very High mode.
Note that there was some controversy over a bug in Nvidia's 169.04 driver related to deformed reflections on water surfaces, which weren't updated often enough. Because the reflection was calculated less often, performance improved, at least in certain conditions, such as 1920x1200 with 4x AA in DirectX 10 mode (+30%). At first we thought this was a bug in the game itself; however, renaming the executable so that the Nvidia driver could no longer detect it corrected the problem. For this reason, the bug looked a bit suspicious.
We contacted Nvidia about this and they confirmed the presence of the bug and the performance gain it could add. Nvidia provided us with a driver that no longer has this problem (169.05), which of course we used here. They also gave precise information on the bug in question so that there would be no further confusion on the subject.
Crysis' engine tries to avoid recalculating water reflections as much as possible, which is only logical. To do this, three pieces of information are used: the time since the last update, the movement of the camera, and the number of "water" pixels visible on screen. Of course, it would be ridiculous to recalculate an entire reflection for a handful of pixels. It is this last value that is the source of the problem.
To determine the number of visible "water" pixels, the engine carries out an occlusion test on the GPU and returns the result to the system, which uses it to decide whether the reflection should be updated. To avoid wasting time (the CPU waiting for the result from the GPU) on any given image, the engine relies on the occlusion test result from the previous image, which can therefore be one image behind. In SLI this can be a problem, as a one-image delay can mean that GPU2 has to wait for GPU1 to carry out the occlusion test. There is therefore no performance gain with SLI. The demo (and most likely the game) will be updated to take multi-GPU systems into account by synchronizing the use of occlusion tests so as to avoid blocking part of the system.
In the meantime, and so that gains can still be seen in SLI (potentially causing other bugs), Nvidia in a way falsifies the occlusion test by always reporting the same thing to the engine: the water is visible, but only on a very small number of pixels. When the test actually indicates that water is not visible, the GeForce 8 needlessly processes the water (though without updating the reflection). When the test detects a large amount of visible water and a reflection to be updated, the GeForce 8 still reports that very little is visible, and the bug appears. In the end, water can cover a significant part of the screen in the demo's test, and the GeForce 8 benefits from this.
How is an SLI optimization related to the problem in question? The effect is indirect: a bug in the driver applied the SLI profile even with a single GPU, and this is what Nvidia corrected with its 169.05 driver. The problem therefore remains in SLI, and will remain until the game is updated to handle it properly.
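The delayed-query scheme described above can be modeled in a few lines (a simplified illustration; the class name and threshold are ours, and the real engine issues actual GPU occlusion queries):

```python
# Simplified model of the delayed occlusion-query scheme: frame N decides
# whether to update the water reflection using the visible-pixel count the
# GPU returned for frame N-1, so the CPU never stalls waiting on the GPU.
# The threshold and all names here are invented for this illustration.

UPDATE_THRESHOLD_PIXELS = 1000

class WaterReflection:
    def __init__(self) -> None:
        self.last_query_result = 0  # occlusion result from the previous frame

    def render_frame(self, visible_water_pixels: int) -> bool:
        # Decide using LAST frame's result rather than waiting for this one.
        update = self.last_query_result >= UPDATE_THRESHOLD_PIXELS
        # Issue this frame's occlusion query; its result is consumed next frame.
        self.last_query_result = visible_water_pixels
        return update

water = WaterReflection()
print(water.render_frame(50_000))  # False: previous frame reported 0 pixels
print(water.render_frame(50_000))  # True: last frame saw plenty of water
```

The one-frame lag visible here is exactly what breaks with alternate-frame SLI: the "previous" frame's query belongs to the other GPU, so one GPU ends up waiting on the other.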
Having pointed all this out, we can now move on to test results.
The GeForce 8800 GT is close to the GTX and just ahead of the Radeon HD 2900 XT and HD 3870. The HD 3850 is equivalent to a GeForce 8800 GTS. Note that only the GeForce 8800 GT/GTX/Ultra allow playing in High mode at 1280x1024, and even then just barely. This game is very demanding on all levels, including memory: the 320 MB of the GeForce 8800 GTS 320 is already insufficient.
With antialiasing, a number of cards start to fall off. While the GeForce 8800 GTX and Ultra manage to hold up, the GT plummets in 1920x1200. As for the Radeon HD 2900 XT and HD 3800, performance is cut in half!
With all details pushed to their limits and in DirectX 10, the GeForce 8800 GT has trouble putting distance between itself and the 8800 GTS 640 MB in 1920x1200, but it is still ahead of the Radeon HD 2900 XT and HD 3870, which finish level with the 8800 GTS.
And what about playing with all settings set to a maximum in 1280x1024 with the current biggest high end solution? Impossible and it’s a humbling experience for all cards. We can’t wait for the GeForce 9800 in SLI!
Recap
Although individual game results are interesting, we calculated a performance index based on all tests, with the same weight for each game. A score of 100 was given to the GeForce 8800 GT in 1280x1024.
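The index construction can be reproduced in a few lines (our own sketch of the stated method: equal weight per game, normalized so that the reference card scores 100; the framerate numbers below are purely illustrative, not our measurements):

```python
def performance_index(fps: dict[str, float],
                      reference_fps: dict[str, float]) -> float:
    """Equal-weight index: mean of per-game ratios to the reference, times 100."""
    ratios = [fps[game] / reference_fps[game] for game in reference_fps]
    return 100.0 * sum(ratios) / len(ratios)

# Illustrative numbers only: reference is the GeForce 8800 GT in 1280x1024.
reference = {"Quake Wars": 80.0, "Crysis": 30.0}
card = {"Quake Wars": 72.0, "Crysis": 33.0}

print(performance_index(reference, reference))  # 100.0 by construction
print(performance_index(card, reference))       # 100*(0.9 + 1.1)/2 = 100.0
```

Note that averaging per-game ratios (rather than raw framerates) is what gives every game the same weight, regardless of how high its absolute framerates are.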
On average, the GeForce 8800 GT is 10% behind the GeForce 8800 GTX, which is quite good considering the card sells for half the price. It places closer to the GTX than to the GTS and is very far ahead of Nvidia's previous upper mid-range card, the GeForce 8600 GTS. The GeForce 8800 GT also finishes ahead of the Radeon HD 2900 XT, with the Radeon HD 3870 right on its heels. The "little" Radeon HD 3850 is equivalent to the GeForce 8800 GTS.
With 4x antialiasing activated, the GeForce 8800 GT varies slightly in position between the GTS and GTX depending on the game. Either way, performance is still very good. The GeForce 8800 GTS 320 MB is clearly limited by its memory here. As for the Radeon HD 2900 XT, the antialiasing bug in its ROPs forced AMD to use an alternative solution which is very flexible but stifles performance. For this reason, the Radeon HD 3800 does relatively better despite a memory bus half the size; on average, the 3870 is ahead of the Radeon HD 2900 XT. Nevertheless, the GeForce line always does better, and the 8800 GT holds a significant advantage over the Radeon HD 3800 in this area.
Note that the average scores of some cards were reduced because they aren't capable of antialiasing in Rainbow Six: Vegas and Bioshock. In our opinion, however, excluding these games would not change the standings. You can consult this possibility here.
In DirectX 10, the GeForce 8800 GT isn't disappointing: it is still between the GTS and GTX and ahead of the Radeon HD 2900 XT. The latter is in turn surpassed by the Radeon HD 3870.
In DirectX 10 with 4x FSAA, the GeForce 8800 GT is most likely limited by its memory bandwidth; however, it is still equivalent to a GeForce 8800 GTS 640 MB, the two finishing with more or less the same scores. In these extreme conditions, the GeForce 8800 GTX and Ultra hold an advantage while the Radeon HDs are behind.
Conclusion
It's quite exceptional that the two GPU manufacturers are simultaneously offering excellent products. While ATI, and then AMD, suffered successive delays compared to Nvidia, for the first time in a while they are back on track and on time. The consequence is clear and direct: competition causes a price and performance war, with each trying to vie for a slightly better position.
In fact, AMD and Nvidia are more or less aware of what the other is preparing, or at least have some idea of it. This pushes them to aggressively anticipate prices as well as final specifications for the moment the two eventually meet head to head. This is pure benefit for users, and it comes at a time when a number of very good, graphics-heavy games are arriving.
On the performance level, Nvidia has the advantage. The GeForce 8800 GT delivers the best performance, and does so consistently with and without antialiasing, in DirectX 9 and DirectX 10. However, it is also the most expensive, with a price starting at around 250 €.
The Radeon HD 3870 is a step below but more economical at 230 € (even a little less), and it offers full DirectX 10.1 support, a more universal video engine, and lower power consumption. One downside is that it takes up two slots.
The 512 MB version of the Radeon HD 3850 is of great interest because it will be found for less than 200 €. Of course, it has slightly lower performance, but the performance/price ratio is very attractive given its position under the symbolic 200 euro mark. Powercolor's card, offered for 180 € (according to the manufacturer) with 512 MB, light overclocking and an HDMI connector, should attract numerous buyers.
It is difficult to definitively choose between these three cards at the current time. They are all good and have rather similar performance; however, their prices are unfortunately not yet clear. Overly aggressive pricing, last-minute changes, and supplies much lower than demand can prolong this vague situation. Your budget could also be a determining factor, because between a GeForce 8800 GT at 250 € and a Radeon HD 3850 at 180 € there is a significant difference within this product range.
Finally, a last word on the 256 MB versions of these cards. We think that in some situations, and this will only become more common, 256 MB is not enough. For this reason, we would not advise opting for one of these cards with so "little" memory before their limitations are clearly known, so that you don't get a bad surprise.
Copyright © 1997-2013 BeHardware. All rights reserved.