17 entry and mid level graphic cards - BeHardware

Written by Damien Triolet

Published on July 14, 2005

URL: http://www.behardware.com/art/lire/579/


Page 1

Introduction, TurboCache & HyperMemory



Announced a few days ago, the NVIDIA GeForce 7800 GTX is the new performance leader. The downside is its price of $500, something only a few gamers and computer users are willing to spend on their graphic card. At the other end of the spectrum, there is a whole range of less expensive products costing from $50 to $150.

Of course, you often get what you pay for. These cards offer lower performance, but how big is the difference? Can you play games with them? Do they bring a significant improvement over Intel’s integrated graphic solutions, which dominate the market?

Reducing costs
As GPUs have evolved, they have increased in complexity. As a consequence, they need more memory space and more bandwidth to access larger and more varied data (textures etc.) for more complex and less repetitive environments. On the other hand, to remain competitive against integrated chipset solutions, GPU and graphic card manufacturers have to keep production costs low. To resolve this conflict and make the GeForce 6200 a competitive product, NVIDIA released its TurboCache technology in December, which allows the card to extend its onboard video memory into system memory. Consequently, NVIDIA avoids increasing memory, and can even reduce it significantly.

Is using system memory as video memory new? Integrated GPUs do it, as does AGP texturing. Nothing new then? Yes and no. The difference is that PCI Express enables much better efficiency thanks to a bigger (or even much bigger) bandwidth for sending data from the GPU to system memory. So it’s possible to use system memory as a rendering area and not only as a place to read textures from. Also, the GPU can access the two memory areas in parallel. This all comes mainly from PCI Express and new drivers efficiently supporting two memory spaces, even if NVIDIA speaks of in-depth GPU modifications to make TurboCache possible. Of course, there probably have been a couple of small modifications, but TurboCache is 99% PCI Express plus optimised drivers.
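The driver’s side of the job can be pictured as a simple two-pool allocator: serve allocations from the fast local memory first and spill into a reserved chunk of system RAM when it runs out. A minimal sketch of the idea, purely hypothetical and not NVIDIA’s or ATI’s actual code:

```python
class VideoMemory:
    """Toy model of a split video-memory pool (the TurboCache/HyperMemory
    idea): allocate from fast local memory first, spill to system memory."""

    def __init__(self, local_mb, system_mb):
        self.free = {"local": local_mb, "system": system_mb}

    def alloc(self, size_mb):
        for pool in ("local", "system"):   # always prefer local memory
            if self.free[pool] >= size_mb:
                self.free[pool] -= size_mb
                return pool
        raise MemoryError("out of addressable video memory")

# A "128 MB" TurboCache card: 16 MB onboard + 112 MB borrowed from system RAM.
vram = VideoMemory(local_mb=16, system_mb=112)
print(vram.alloc(8))    # small buffer fits onboard -> 'local'
print(vram.alloc(32))   # larger allocation spills over -> 'system'
```

The real drivers of course juggle placement per resource type (render targets, textures, vertex data), but the fallback principle is the same.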

TurboCache allows NVIDIA to release cards with 16 MB, 32 MB and 64 MB and still keep (this is what we are going to verify) decent performance. As many users only differentiate graphic cards by the amount of embedded memory, the marketing department came up with announcing the quantity of addressable memory (graphic card memory + system memory). 16 and 32 MB cards are introduced as 128 MB, and 64 MB cards as 256 MB. At the TurboCache launch, NVIDIA assured us that the real memory quantity would be clearly displayed, as it recommended its partners to do. Six months later, nothing has been done. The objective of many manufacturers and stores is that consumers notice the 128 or 256 MB but not the 16 MB!

ATI is next
When one of the two manufacturers releases an innovation, you can be sure the other will follow. ATI released a technology similar to TurboCache called HyperMemory. ATI hasn’t tried to convince us that major architectural modifications were made. They simply optimised their drivers, removed a couple of memory chips, and a new product appeared as the Radeon X300 SE HyperMemory. Just like NVIDIA, the adopted strategy is to indicate the addressable memory and not the real embedded memory: 128 MB for the 32 MB version and 256 MB for the 128 MB version.

As the technology is linked to drivers and PCI Express, why not use it to increase performance instead of reducing costs? ATI has visibly already answered that question, as HyperMemory will be activated on all graphic cards with 128 MB or less with the next drivers! Gains will be variable and depend on the graphic card and game, but even if they are small it is still good to have them. Why not activate it on 256 MB graphic cards? Because it would probably reduce performance: the memory space and memory bandwidth would no longer be homogeneous. It’s preferable to stick to 256 MB rather than trying to address 512 MB with HyperMemory, even if in specific cases it might help a little.


Page 2
Integrated, evolution

Extreme Graphics
We often ask ourselves whether ATI or NVIDIA dominates the 3D accelerator market. This isn’t the right question, because the answer is Intel!

With more than 50% of the market, the processor manufacturer is the undisputed leader. This domination is due to integrated chipsets, which are far from being unanimously accepted because of their reduced performance. The previous graphic core name, Extreme Graphics, was even ironic: given their slowness, their technological delay and the number of games that were not supported, they weren’t exactly extreme. Of course, this only concerns 3D, and not everybody needs it. Why use a complex GPU for a computer only dedicated to office use? Integrated graphic cores fit this role perfectly.


But Intel doesn’t only attack the "2D" card market; it clearly aims for more, targeting versatile machines and even gaming computers, raising its prices while adding millions of transistors for 3D support. Today’s Intel integrated cores are called GMA (for Graphics Media Accelerator); they fully support DirectX 9 and Shader 2.0. Of course, this support will be useful for office use once Longhorn is available, because the new interface will use 3D and will need Shader 2.0. But for the time being, the message is clear: when Intel sells this chipset, it clearly indicates that gaming is possible. Is this really the case? We will see with the latest generation of Intel chipsets, the GMA 900 and 950.

Entry level evolution ?
Here’s a graph with the evolution of fillrate of entry level and high end products. We included all NVIDIA chips sold since 1999 from the TNT2 to the GeForce 7800 GTX. Results would have been similar with ATI graphic cards.



The difference between the two extremes of a GPU range keeps widening. It’s important to know that these graphs only report raw data, without taking into account architectural optimisations (and possible reductions due to filtering quality via drivers) which can significantly increase performance. It’s also important to specify that high end GPUs include more improvements than their entry level equivalents.

Entry level bandwidth hasn’t evolved much over the years. Even if it increases with the GeForce 6200 TurboCache, less memory is available at full speed. High end bandwidth, on the other hand, went through the ceiling in 2003.

Fillrate follows a similar path. For a long time the difference between the two extremes remained between 1.5x and 3x; with the massive parallelisation of tasks in the latest high end GPUs, it has increased to 6-7x. In short, if high end GPUs had evolved at the same rate as entry level GPUs, the current high end product would be a 6600 GT. And if entry level products had evolved at the same rate as the high end, the current entry level would also be a 6600 GT.
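The widening ratio is simple arithmetic on raw fillrate (pixel pipelines × core clock). The figures below are round illustrative values, not exact specifications of any particular card:

```python
# Raw fillrate sketch: fillrate ≈ pixel pipelines × core clock.
# The pipeline counts and clocks are round illustrative values.
def fillrate_mpix(pipelines, clock_mhz):
    """Raw fillrate in Mpixels/s (ignores architectural optimisations)."""
    return pipelines * clock_mhz

entry = fillrate_mpix(4, 350)      # e.g. a 4-pipeline entry level GPU
high_end = fillrate_mpix(24, 430)  # e.g. a 24-pipeline high end GPU

print(high_end / entry)  # a ratio around 7, versus 1.5-3 a few years earlier
```

The same arithmetic with the 2-4 pipeline parts of a few years ago gives ratios of 1.5-3, which is the trend the graph shows.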

What does this tendency mean? That reducing the cost of entry level products is very difficult and slows down their evolution. To compete with Intel’s integrated solutions, and to try and place a GPU in every computer, ATI and NVIDIA released products whose performance, compared to high end graphic cards, is lower and lower. In the end, the two GPU manufacturers probably helped integrated solutions gain market share. With the continuous reduction in performance of entry level products, the gap between entry level graphic cards and integrated solutions shrinks. A computer manufacturer has to spend approximately $25 for an entry level graphic card, but only $5 to change from a standard chipset to an integrated one. That makes quite a big difference on paper when both products can run games.

Wouldn’t it have been better for ATI and NVIDIA not to release such cheap and low performance 3D solutions? To not try and place a $25 graphic card in every computer, but instead to convince gamers to spend $100 on a 3D graphic card, and to convince buyers that versatile computers should be equipped with such a card?

Developers could then have pushed game graphic quality without worrying about the 50% of computers equipped with integrated graphic chipsets, or about the number of computers equipped with low performance graphic cards.

Let’s be clear on this point: ATI’s and NVIDIA’s solutions are better (you will see that in the next pages) than Intel’s integrated chipsets. But is the gap big enough? Intel is currently working hard on improving its graphic solutions’ performance at the driver and architectural levels. If Intel had a product capable of directly competing with ATI and NVIDIA in performance and quality/reliability, these two would have something to worry about.

Isn’t it time for ATI and NVIDIA to develop an architecture fully adapted to entry level products? Today the “easy solution” is to avoid an explosion of hardware and software production costs by using the same architecture for the whole product range. The cost of this architecture can, however, be a problem for entry level products. Fewer useless options and higher performance: this is the eternal debate.


Page 3
GPU, cards, AGP

Specifications
5 GPUs and 1 integrated core currently make up the PCI Express entry level market: the ATI RV370, RV380 and RV410; the NVIDIA NV44 and NV43; and the Intel GMA 9x0.

RV410 : Radeon X700

The current middle range of the Canadian manufacturer supports Shader 2.0b like the X800 and features 6 vertex shader pipelines and 8 complete pixel pipelines (shader + ROP). This 110 nm GPU includes 120 million transistors on a 150 mm² surface.



RV380 : Radeon X600

A previous middle range product, this is simply the RV350/360 PCI Express version (Radeon 9600 Pro/XT). It supports Shader 2.0 and features 2 vertex shader pipelines and 4 complete pixel pipelines. This 130 nm GPU has 75 million transistors on a 98 mm² surface.



RV370 : Radeon X300

Functionally identical to the RV380, this 110 nm GPU is cheaper to produce as it only has an 83 mm² surface. The downside is that frequency increases aren’t as easy.




NV43 : GeForce 6600, 6200

This GPU is used by NVIDIA for a wide range of products from entry level to middle range. It supports Shader 3.0 and HDR and features 3 vertex shader pipelines, 8 pixel shader pipelines and 4 ROPs. The low number of ROPs generally isn’t restrictive, because in practice memory bandwidth prevents all the ROPs (which mainly write data to memory) from working simultaneously. This 110 nm GPU has 146 million transistors on 160 mm².
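A back-of-the-envelope check illustrates why 4 ROPs rarely limit this class of card. The clock and bus figures below are round values for a 6600 GT class configuration, and we assume 4 bytes of colour plus 4 bytes of Z written per pixel:

```python
# Keeping all 4 ROPs busy every clock would saturate the memory bus on
# writes alone, before any texture reads. Round illustrative figures.
rops = 4
core_clock_hz = 500e6        # e.g. a 6600 GT class core clock
bytes_per_pixel = 4 + 4      # 32-bit colour + 32-bit Z per pixel written

write_need = rops * core_clock_hz * bytes_per_pixel  # bytes/s for writes
bus_bandwidth = 128 / 8 * 500e6 * 2                  # 128-bit DDR @ 500 MHz

print(write_need / 1e9)      # 16.0 GB/s needed for writes alone
print(bus_bandwidth / 1e9)   # 16.0 GB/s total available
```

Since texture reads compete for the same bus, the ROPs can never all run flat out anyway, so cutting their number costs little.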

Only the 6600 and 6600 GT have a “full option” NV43. The 6600 LE only has 4 functional pixel shader pipelines. The same goes for the GeForce 6200, which furthermore doesn’t support data compression (Z and colour) or HDR.

NV44 : GeForce 6200 TurboCache

NVIDIA’s latest entry level GPU, the NV44, supports Shader 3.0 but not HDR. It also lacks the data compression system (Z and colour) that significantly improves other GPUs’ performance once FSAA is activated. It features 3 vertex shader pipelines, 4 pixel shader pipelines and 2 ROPs. The 4 pixel shader pipelines are not grouped together but separated into two groups of two, so NVIDIA has the possibility of deactivating 2 of them for some products. The GPU surface is 105 mm².

GMA 900/950 : Intel i915G/i945G

This integrated core supports DirectX 9 and Shader 2.0. It includes 4 pixel shader pipelines and 4 ROPs but no vertex shader pipelines: all geometric calculations must be made by the CPU. It doesn’t support FSAA. It is hard to know exactly how many transistors this core requires, probably around 30 million.

One important detail is that the GMA 9x0 features 2 MAD units per pipeline, just like the G70 of the GeForce 7800. This is, of course, the only thing they have in common! The GMA 9x0 pipelines are relatively short and can’t mask texture access latency efficiently; instead, accesses are shifted forward to hide their latency. This works very well except for complex texture accesses (which are more and more common), where it results in a serious performance reduction.

The GMA 900, integrated in the i915G, is clocked at 333 MHz compared to 400 MHz for the i945G’s GMA 950. Intel’s integrated cores don’t have any local memory and have to share the entire system memory bandwidth with the rest of the system.

Graphic cards

We selected a wide range of entry level graphic cards and added the current ATI and NVIDIA middle range for comparison.



* TurboCache and HyperMemory cards can access local and system memory in parallel, so effective bandwidth can be slightly higher than this figure. The theoretical maximum (never reached in practice) adds the total PCI Express bandwidth to it.
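That theoretical maximum is a straight addition; assuming the usual 250 MB/s per PCI Express lane per direction, the calculation looks like this (the 2.8 GB/s local figure is an arbitrary placeholder, not a specific card):

```python
# Theoretical peak bandwidth of a TurboCache/HyperMemory card:
# local memory bandwidth plus PCI Express in both directions.
# PCIe 1.x offers 250 MB/s per lane per direction, so x16 ~ 4 GB/s each way.
def theoretical_peak_gbs(local_gbs, pcie_lanes=16):
    per_direction = pcie_lanes * 0.25        # GB/s one way
    return local_gbs + 2 * per_direction     # local + upstream + downstream

# Hypothetical card with 2.8 GB/s of local bandwidth:
print(theoretical_peak_gbs(2.8))  # 10.8 GB/s on paper, never seen in practice
```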

Official specifications not always respected
When a new NVIDIA or ATI graphic card hits the market, it comes with quite strict specifications for GPU and memory frequencies and memory bus width. Unfortunately, not all manufacturers respect these specifications, and we regularly find products with crippled performance. (Yes, it is possible to cripple even entry level products!) For example, we found a GeForce 6200 built by Point of View with 64 bit memory clocked at 166 MHz instead of 128 bit at 200 MHz. The card’s price was even higher than that of other cards with the correct specifications. Of course, performance is clearly lower than it should be.

TurboCache and HyperMemory cards also have problems. Some TC 32 MB cards use a single 32 MB chip instead of 2x16 MB chips. As one chip only has a 16 bit data bus, the bandwidth is divided by 2! The X300 SE HM cards are supposed to use memory clocked at 300 MHz, against 200 MHz for a standard X300 SE, but some manufacturers apparently decided not to raise the memory frequency with the move to HyperMemory, and it is still 200 MHz on some X300 SE HM cards. In both cases, performance is significantly reduced. Of course, these differences are never clearly indicated on the product box or specification sheet…
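Both penalties follow from the usual DDR bandwidth formula (bus width × clock × 2 transfers per cycle). The TC memory clock below is an arbitrary placeholder, since it cancels out of the ratio:

```python
# DDR memory bandwidth: bus_width (bits) / 8 bytes × clock × 2 transfers/cycle.
def ddr_bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits / 8 * clock_mhz * 2 / 1000   # GB/s

spec = ddr_bandwidth_gbs(128, 200)      # reference GeForce 6200: 6.4 GB/s
cut_down = ddr_bandwidth_gbs(64, 166)   # the Point of View card: ~2.7 GB/s
print(spec, cut_down)

# One 16-bit chip instead of 2x16 MB chips on a TC 32 MB card (the 350 MHz
# clock is a placeholder; it cancels out): bandwidth is exactly halved.
print(ddr_bandwidth_gbs(16, 350) / ddr_bandwidth_gbs(32, 350))  # 0.5
```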

What about AGP ?
You have probably noticed that we only included PCI Express solutions. The ageing AGP 8x isn’t dead yet, and several interesting low budget products we previously tested are still on the market.

For NVIDIA, the 6600 GT and 6600 exist with this interface thanks to the HSI bridge. Be careful: GeForce 6200 graphic cards based on an NV44 variant (the NV44A) which natively supports AGP also exist. This GPU has a 64 bit memory bus and, consequently, much lower performance.

For ATI we will also mention the 9600 Pro, the AGP equivalent of the X600 Pro, but also and mainly the 9800 Pro. It is as expensive as the 6600 and provides higher performance. Unlike the 6600, however, it doesn’t support Shader Model 3.0, HDR, or WMV9 decompression.


Page 4
Chipsets, 2D, Video, test


The chipset
With the TurboCache and HyperMemory technologies, the chipset has an important role to play: the graphic card uses it to access system memory. To be accurate, the graphic card has to go through two busses to reach system memory, the PCI Express bus and then the system memory bus. For HyperMemory and TurboCache to be efficient, you need enough bandwidth on both busses.

We tested UT2004 and Far Cry with 3 different chipsets: the i915G, i945G and the nForce 4 Ultra:


The nForce4 is slightly more efficient, but the gap is so small that it isn’t really significant.

Of course, we used Dual Channel with all 3 platforms, as it’s very useful when the graphic card partly uses system memory. For any computer equipped with a TurboCache or HyperMemory graphic card, you should therefore use two memory modules rather than a single module of double the capacity.

2D
Is it still possible to find performance differences in 2D between current solutions? Yes, it is. In most cases the difference is insignificant, but we noticed a visible gap between graphic cards and Intel’s integrated cores. These cores have some difficulty rendering certain effects of Windows’ interface: for example, the blending under menus isn’t fluid. It is only a small detail, but it would be best to completely deactivate blending to avoid this unpleasant feeling of a lack of fluidity in 2D.

In their drivers, NVIDIA and ATI have a function which substitutes the GPU’s scaling for the flat panel’s interpolation when connected via DVI. It’s interesting to look at when testing entry level products, as chances are they won’t be able to run games at the monitor’s native resolution (usually 1280 x 1024). For ATI the option is purely decorative as it doesn’t have any effect (bug?). For NVIDIA the option is functional (but well hidden in the driver: nView display -> Monitor parameters -> Monitor adjustments) and the result is conclusive.

Instead of sending an 800 x 600 image to the TFT monitor, which would resize it to 1280 x 1024, the GPU does the operation before sending it. The monitor receives an image at its native resolution and doesn’t have to resize it. What’s the point? Don’t we obtain the same result? Yes and no. Most TFT monitors (especially entry level ones) use poor quality interpolation algorithms, whereas NVIDIA’s graphic cards use an excellent one. Of course, using the native resolution is always the best thing to do, but when it’s not possible, this option is more than welcome! Icing on the cake: the performance impact is negligible.
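The kind of filtering involved is classic image interpolation. As a toy illustration (plain bilinear filtering, not the actual algorithm of either the GPU scaler or any monitor), here is a minimal version:

```python
# Minimal bilinear upscale of a 2D grayscale image, the simplest of the
# filtering families a GPU scaler can apply before output (toy example).
def bilinear_upscale(img, new_w, new_h):
    old_h, old_w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        fy = y * (old_h - 1) / (new_h - 1) if new_h > 1 else 0
        y0, ty = int(fy), fy - int(fy)
        y1 = min(y0 + 1, old_h - 1)
        row = []
        for x in range(new_w):
            fx = x * (old_w - 1) / (new_w - 1) if new_w > 1 else 0
            x0, tx = int(fx), fx - int(fx)
            x1 = min(x0 + 1, old_w - 1)
            # blend the four surrounding source pixels
            top = img[y0][x0] * (1 - tx) + img[y0][x1] * tx
            bot = img[y1][x0] * (1 - tx) + img[y1][x1] * tx
            row.append(top * (1 - ty) + bot * ty)
        out.append(row)
    return out

# 2x2 grayscale image scaled to 3x3: corners preserved, edges interpolated.
print(bilinear_upscale([[0, 100], [100, 200]], 3, 3))
```

Cheap panel scalers cut corners on exactly this step (or use nearest-neighbour sampling), which is where the visible quality difference comes from.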

Unfortunately we won’t be able to show you the quality difference, as it’s not possible to take a screenshot of the image once it has been interpolated, and photographed results aren’t presentable.

Video
While MPEG-2 decompression is no longer a problem for GPUs, that isn’t necessarily the case for the latest HD codecs such as WMV. How do the different graphic cards and integrated cores perform during WMV playback? We conducted this test with a computer equipped with a P4 3.8 GHz and deactivated Hyper-Threading to obtain a more representative CPU usage figure; with Hyper-Threading enabled, the reported CPU usage wasn’t always reliable because of the presence of the second logical core.


In 720p, acceleration worked correctly with all NVIDIA graphic cards except the 6200 TC 16 MB, which doesn’t support it. The Radeon X700 Pro and X700 SE also accelerate 720p playback, but the X300, X550 and X600 do not. In the end, all solutions can play 720p content without much difficulty.

It is a little more difficult with 1080p. The GeForce 6600 and 6200 provide nice results thanks to their WMV HD acceleration. Note that although NVIDIA indicates that the TurboCache 32 and 64 MB only support acceleration in 720p, it is also functional in 1080p. The GeForce PCX 5750 plays the content without much difficulty, even though it doesn’t have any acceleration and CPU usage is very high. CPU usage is also very high with the TurboCache 16 MB; with this card, however, several frames are dropped, which causes small stutters here and there.

For ATI, only the X700 has hardware acceleration. No graphic card, however, plays the video correctly: an undetermined problem makes the video run at 17 to 20 FPS instead of 24 FPS, i.e. slower than normal. We also found the problem on an nForce 4 Athlon 64 4000+ platform. With the X800, however, it disappears. We asked ATI, and they told us that they were indeed able to reproduce the problem, but only under certain still undetermined circumstances (a problem due to regionalisation, to the drivers, to Microsoft’s patch activating the acceleration, or a lack of power of entry and middle level graphic cards?). Let’s hope that they will soon find a solution via new drivers. In any case, for the moment this doesn’t work.

The test
For this test we used a different protocol. To find out if it was possible to play games with these graphic cards… we played with them! In addition to the usual benchmarks, we played 19 games with all the graphic cards to determine the maximum quality settings allowing comfortable play at an acceptable framerate.

For each game we selected a scene we felt was representative. The chosen games cover a wide range and were released from 2003 to 2005. Not all have the same needs as Doom 3 or Far Cry; some use less advanced engines. We felt it was important not to evaluate entry level graphic cards only on their capacity to run the latest FPS.

In order to stick to a realistic entry level configuration, we equipped the test computer with only 512 MB. Under those circumstances, loading times were awfully long and disk swapping was annoying with recent games, becoming even worse with TurboCache / HyperMemory and integrated chipsets. Playing with 1 GB is more than recommended!

Test configuration:
- Leadtek / Foxconn nForce 4 Ultra
- AMD Athlon 64 4000+
- 2x256 MB of PC-3200 memory
- ForceWare 71.89
- Catalyst 5.5

- ASUS P5GDC-V (i915G)
- Intel D945GTP (i945G)
- Intel Pentium 4 670 (3.8 GHz, 2 MB)
- 2x256 MB of PC2-4200 memory.


Page 5
Benchs UT2004, Far Cry

Unreal Tournament 2004

Under UT2004, NVIDIA dominates overall, but the performance gap with ATI isn’t huge. The 6200 TC 32 MB provides performance equivalent to the 6200 128 bit, proving the technology’s efficiency. The TC 64 MB is also very close, just like the X300 SE HM 32 MB is close to the X300 128 bit. The TC 16 MB is, however, slightly behind, but remains in front of Intel’s integrated solutions.

The Radeon X550 provides performance equivalent to the X600 Pro, as UT isn’t restricted by memory bandwidth. NVIDIA’s 6600 LE does OK and lands right in the middle of the X700 range.


Far Cry

In Far Cry, ATI and NVIDIA remain very close, with a slight advantage for NVIDIA. This time the TurboCache and HyperMemory cards are far from reaching the performance of their 128 bit equivalents: Far Cry is more memory hungry than UT. Intel’s two cores are far behind.


Page 6
Benchs Doom3, Sims2

Doom 3

NVIDIA’s domination is obvious here! The three TurboCache cards are in front of all cards based on the RV370, including the X550. The 6600 LE is easily in front of the X700 SE, which has twice its fillrate. Intel’s integrated cores are completely left behind. The performance ratio across the NVIDIA GPUs tested here reaches 1 to 7, and as the 7800 GTX is 3 times faster still, the gap widens to 1 to 20 across the current NVIDIA line.


Sims 2
In a completely different style from Doom 3, Sims 2 gives the advantage to ATI, except for the most powerful cards. Intel’s graphic cores provide pretty good results; they can thank the benchmark scene, which is not too heavy on geometry. It puts them in front of the TurboCache and HyperMemory cards at 1280.


Page 7
Benchs Colin 05, IL2 FB

Colin McRae 2005

ATI seems to like car games, with Radeons dominating this test. TurboCache and HyperMemory cards provide nice results, but only the TC 32 MB reaches the performance of its 128 bit equivalent. Intel’s cores are once again last.


IL-2 Forgotten Battles

IL-2 is very dependent on memory bandwidth. TurboCache, HyperMemory and 64 bit cards, like Intel’s cores, are logically behind. Overall, NVIDIA provides better results in this test, which is logical as its cards’ memory is clocked higher than that of their ATI equivalents.


Page 8
Benchs anti aliasing 4x

UT 2004

With most cards, the race between ATI and NVIDIA is tight, but as the 6200s don’t have the data compression technology that limits the very high memory bandwidth needs once AA is activated, performance collapses with these graphic cards. The RV370 has a great advantage here thanks to Z-Buffer compression: the X300/X550 easily dominate the 6200. The 6200 uses an NV43 core, which does integrate data compression, but this support is deactivated simply to make the 6200 range uniform. This creates quite an important gap compared to the 6600 LE, which uses the same core but with compression activated.

Intel cores don’t support FSAA.


Colin McRae 2005

The situation is identical to UT2004, with an even bigger gap.


Sims 2

The situation is again identical, except that the 6600 GT provides quite good results. A bug seriously affected the performance of the TurboCache 32 and 64 MB cards and led to incoherent scores.


In the end, it seems that the GeForce 6200s, TurboCache or not, aren’t made for antialiasing; ATI’s X300s provide much better results. Of course, the question remains whether it is really possible, and really worthwhile, to use antialiasing with this type of graphic card. Generally, with entry level graphic cards we would raise the resolution first rather than activate antialiasing.


Page 9
Practical tests: UT2004, Far Cry, D3, Riddick

UT 2004


Comfortable :
1280 x 1024 : 6600 GT, 6600, X700 Pro
1024 x 768 : 6600 LE, 6200, 6200 TC 32 MB, PCX 5750, X700 SE, X600 Pro, X550
800 x 600 : 6200 TC 64 MB, 6200 TC 16 MB, X300, X300 SE, X300 SE HM, i945G
640 x 480 : i915G

Acceptable :
1600 x 1200 : 6600 GT, X700 Pro
1280 x 1024 : 6600, 6600 LE, PCX 5750, X700 SE, X600 Pro, X550
1024 x 768 : 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, X300, X300 SE HM
800 x 600 : X300 SE, i945G, i915G


Far Cry


Comfortable :
1280 x 1024 Medium : 6600 GT, X700 Pro
1024 x 768 Medium : 6600, 6600 LE, X700 SE, X600 Pro, X550
800 x 600 Medium : 6200 TC 64 MB, 6200 TC 32 MB, PCX 5750, X300, X300 SE HM
1024 x 768 Low : X300 SE
800 x 600 Low : 6200 TC 16 MB
Impossible : i945G, i915G

Acceptable :
1280 x 1024 High : 6600 GT, X700 Pro
1024 x 768 High : 6600, 6600 LE, X700 SE, X600 Pro, X550
1024 x 768 Medium : 6200, PCX 5750, X300, X300 SE HM
800 x 600 Medium : 6200 TC 64 MB, 6200 TC 32 MB, X300 SE
1024 x 768 Low : 6200 TC 16 MB
800 x 600 Low : i945G, i915G

Intel’s integrated chipsets aren’t powerful enough to play Far Cry comfortably; the lack of geometric calculation units contributes a lot to this. It’s important to note that despite all the noise made about Shader 3.0 and 2.0b in Far Cry, they are only used in Very High quality mode. In High and Medium, Far Cry only uses Shader 1.1, and in Low mode shaders aren’t used at all, as it corresponds to DirectX 7 class rendering. It would be useless to include that result, as it isn’t very pretty…


Doom 3


Comfortable :
1280 x 1024 Medium : 6600 GT
1024 x 768 Medium : 6600, X700 Pro
800 x 600 Medium : 6600 LE, X700 SE
640 x 480 Medium : 6200, 6200 TC 64 MB, 6200 TC 32 MB, PCX 5750, X600 Pro, X550, X300
640 x 480 Low : 6200 TC 16 MB, X300 SE HM
Impossible : X300 SE, i945G, i915G

Acceptable :
1600 x 1200 High : 6600 GT
1280 x 1024 Medium : 6600
1024 x 768 High : X700 Pro
1024 x 768 Medium : 6600 LE, X700 SE
800 x 600 High : PCX 5750
800 x 600 Medium : 6200, 6200 TC 64 MB, 6200 TC 32 MB, X600 Pro, X550
800 x 600 Low : 6200 TC 16 MB, X300, X300 SE HM
640 x 480 Low : X300 SE, i945G, i915G

It isn’t possible to play Doom 3 comfortably with Intel’s integrated cores or the Radeon X300 SE (64 bits).


Chronicles of Riddick


Comfortable :
1024 x 768 Aniso 8x : 6600 GT
800 x 600 : 6600, X700 Pro
640 x 480 : 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, X700 SE
640 x 480 Low : PCX 5750
Impossible : 6200 TC 16 MB, X600 Pro, X550, X300, X300 SE, X300 SE HM, i945G, i915G

Acceptable :
1280 x 1024 Aniso 8x : 6600 GT
1024 x 768 : 6600, X700 Pro
800 x 600 : 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, X700 SE, X600 Pro, X550
800 x 600 Low : PCX 5750
640 x 480 : 6200 TC 16 MB, X300, X300 SE HM
Impossible : X300 SE, i945G, i915G

Chronicles of Riddick is one of the “hungriest” games, as the results show perfectly well. It isn’t possible to play comfortably with 8 of the cards, even at 640 x 480. Even worse, it isn’t possible to play at all with 3 of them! For the X300 SE it’s a performance problem. For the i915G and i945G, in addition to the performance issue, the image is displayed upside down. Needless to say, it’s difficult to play like that.


Page 10
Practical tests: XIII, Splinter 3, Swat 4, Max Payne 2

XIII


Comfortable :
1600 x 1200 : 6600 GT, 6600, PCX 5750, X700 Pro, X700 SE
1280 x 1024 : 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, X600 Pro, X550, X300, X300 SE HM
1024 x 768 : 6200 TC 16 MB, X300 SE, i945G, i915G

Acceptable :
1600 x 1200 : 6600 GT, 6600, 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, PCX 5750, X700 Pro, X700 SE, X600 Pro, X550, X300, X300 SE HM
1280 x 1024 : 6200 TC 16 MB, X300 SE, i945G, i915G

This game has a simple and efficient engine and graphics. With some of the older games, high resolutions are possible even with entry level cards.


Splinter Cell Chaos Theory


Comfortable :
1024 x 768 Medium + Parallax Mapping : 6600 GT
1024 x 768 Medium : X700 Pro
800 x 600 Medium + Parallax Mapping : 6600, 6600 LE
800 x 600 Medium : X700 SE
640 x 480 Medium + Parallax Mapping : 6200, 6200 TC 64 MB, 6200 TC 32 MB
640 x 480 Medium : PCX 5750, X600 Pro, X550, X300
640 x 480 Low + Parallax Mapping : 6200 TC 16 MB
640 x 480 Low : X300 SE, X300 SE HM
Impossible : i945G, i915G

Acceptable :
1024 x 768 High + Parallax Mapping + Soft Shadows : 6600 GT
1024 x 768 High : X700 Pro
1024 x 768 Medium + Parallax Mapping : 6600
1024 x 768 : X700 SE
800 x 600 Medium + Parallax Mapping : 6600 LE
800 x 600 Medium : X600 Pro
800 x 600 Low : X550, X300
640 x 480 High + Parallax Mapping : 6200, 6200 TC 32 MB
640 x 480 High : PCX 5750
640 x 480 Medium + Parallax Mapping : 6200 TC 64 MB, 6200 TC 16 MB
640 x 480 Medium : X300 SE, X300 SE HM
640 x 480 Low : i945G, i915G

Shader 3.0 support gives graphic cards based on NVIDIA GPUs access to additional quality options like parallax mapping (improved bump mapping) and soft shadows. Once again, Intel’s graphic cores aren’t capable of providing perfect fluidity.


SWAT 4


Comfortable :
1024 x 768 High : 6600 GT, X700 Pro
1024 x 768 Medium : 6600, X700 SE, X600 Pro
800 x 600 Medium : 6600 LE, 6200, 6200 TC 32 MB, X550, X300, X300 SE HM
800 x 600 Low : 6200 TC 64 MB, 6200 TC 16 MB, PCX 5750, X300 SE, i945G, i915G

Acceptable :
1280 x 1024 High : 6600 GT
1280 x 1024 Medium : 6600, X700 Pro
1024 x 768 High : 6600 LE, X700 SE, X600 Pro, X550
1024 x 768 Medium : 6200, 6200 TC 32 MB, 6200 TC 64 MB, X300, X300 SE HM, i945G
800 x 600 Medium : 6200 TC 16 MB, PCX 5750
1024 x 768 Low : X300 SE, i915G


Max Payne 2


Comfortable :
1280 x 1024 AA 4x : 6600 GT, X700 Pro, X700 SE
1280 x 1024 : 6600, 6600 LE, X600 Pro, X550, X300
1024 x 768 : 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, PCX 5750, X300 SE, X300 SE HM
800 x 600 Bug : i945G, i915G

Acceptable :
1600 x 1200 AA 4x : X700 Pro
1600 x 1200 AA 2x : 6600 GT, X700 SE
1280 x 1024 AA 2x : 6600, 6600 LE, X600 Pro
1280 x 1024 : 6200, 6200 TC 64 MB, 6200 TC 32 MB, PCX 5750, X550, X300, X300 SE HM
1024 x 768 : 6200 TC 16 MB, X300 SE
1024 x 768 Bug : i945G, i915G

Max Payne 2 isn’t recent, but its graphic quality is far from outdated. High resolutions are accessible to many of the graphic cards. Several effects such as mirror reflections aren’t rendered with the i915G and i945G.


Page 11
Practical tests: Freedom Fighters/Force, Lego SW, IL2 FB

Freedom Fighters


Comfortable :
1600 x 1200 : 6600 GT, X700 Pro
1280 x 1024 : 6600, 6600 LE, PCX 5750, X700 SE, X600 Pro, X550
1024 x 768 : 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, X300, X300 SE HM
800 x 600 : X300 SE, i945G, i915G

Acceptable :
1600 x 1200 : 6600 GT, 6600, PCX 5750, X700 Pro
1280 x 1024 : 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, X700 SE, X600 Pro, X550, X300, X300 SE HM
1024 x 768 : 6200 TC 16 MB, X300 SE, i945G, i915G


Freedom Force


Comfortable :
1280 x 1024 High : 6600 GT
1280 x 1024 Medium : 6600, X700 Pro
1024 x 768 High : X700 SE, X600 Pro
1024 x 768 Medium : 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, PCX 5750, X550, X300, X300 SE HM, i945G
1024 x 768 Low : 6200 TC 16 MB, X300 SE, i915G

Acceptable :
1280 x 1024 High : 6600 GT, 6600, 6600 LE, PCX 5750, X700 Pro, X700 SE, X600 Pro, X550
1280 x 1024 Medium : 6200, 6200 TC 64 MB, X300, X300 SE HM
1280 x 1024 Low : 6200 TC 32 MB, X300 SE, i945G, i915G
1024 x 768 Medium : 6200 TC 16 MB


LEGO Star Wars


Comfortable :
1280 x 1024 : 6600 GT, 6600, 6600 LE, 6200 TC 32 MB, X700 Pro, X700 SE, X600 Pro, X550
1024 x 768 : 6200, 6200 TC 64 MB, 6200 TC 16 MB, PCX 5750, X300, X300 SE HM
800 x 600 : X300 SE
Impossible : i945G, i915G

Acceptable :
1280 x 1024 : 6600 GT, 6600, 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, PCX 5750, X700 Pro, X700 SE, X600 Pro, X550, X300, X300 SE, X300 SE HM
1024 x 768 : X300 SE
Impossible : i945G, i915G

LEGO Star Wars doesn’t support 1600 x 1200, which is quite surprising for a recent game. The game refused to start with the i915G and i945G, reporting the absence of several required functions. As the game is DirectX 9 class, which Intel’s cores fully support, this is probably due to a game bug.


IL-2 Forgotten Battles


Comfortable :
1280 x 1024 : 6600 GT, X700 Pro
1024 x 768 : 6600, 6600 LE, PCX 5750, X700 SE, X600 Pro
800 x 600 : 6200, 6200 TC 64 MB, 6200 TC 32 MB, X550, X300, X300 SE HM
640 x 480 : 6200 TC 16 MB, X300 SE
Impossible : i945G, i915G

Acceptable :
1600 x 1200 : 6600 GT, X700 Pro
1280 x 1024 : 6600, 6600 LE, X600 Pro
1024 x 768 : 6200, 6200 TC 32 MB, PCX 5750, X700 SE, X550, X300
800 x 600 : 6200 TC 64 MB, 6200 TC 16 MB, X300 SE, X300 SE HM
Impossible : i945G, i915G

Just like with Riddick, the image is rendered upside down on the i915G and i945G.


Page 12
Practical tests: Colin 05, GTR, NFSU2, FIFA 05

Colin McRae 2005


Comfortable :
1024 x 768 AA 2x : 6600 GT, X700 Pro
800 x 600 AA 2x : 6600
800 x 600 : 6600 LE, X700 SE, X600 Pro, X550
640 x 480 : 6200, X300, X300 SE HM
Impossible : 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, PCX 5750, X300 SE, i945G, i915G

Acceptable :
1280 x 1024 AA 2x : 6600 GT, X700 Pro
1280 x 1024 : X700 SE
1024 x 768 AA 2x : 6600
1024 x 768 : 6600 LE, X600 Pro, X550
800 x 600 : 6200, 6200 TC 64 MB, 6200 TC 32 MB, PCX 5750, X300, X300 SE HM, i945G, i915G
640 x 480 : 6200 TC 16 MB, X300 SE

Seven cards can’t run the game at all. The frame rates of the entry level NVIDIA cards don’t reflect the game’s real fluidity, as stuttering is frequent.


GTR


Comfortable :
1280 x 1024 : 6600 GT, X700 Pro
1024 x 768 : X700 SE
800 x 600 : 6600, 6600 LE, 6200, 6200 TC 32 MB, PCX 5750, X600 Pro, X550, X300, X300 SE HM, i945G
640 x 480 : 6200 TC 64 MB, 6200 TC 16 MB, X300 SE, i915G

Acceptable :
1280 x 1024 AA 2x : 6600 GT, X700 Pro
1280 x 1024 : X700 SE
1024 x 768 AA 2x : 6600, 6600 LE, X600 Pro
1024 x 768 : 6200, 6200 TC 64 MB, 6200 TC 32 MB, PCX 5750, X550, X300, X300 SE HM, i945G, i915G
800 x 600 : 6200 TC 16 MB, X300 SE


Need for Speed Underground 2


Comfortable :
1024 x 768 Medium : 6600 GT, X700 Pro
800 x 600 Medium : X700 SE
1024 x 768 Low : 6600, 6600 LE, PCX 5750, X600 Pro, X550, X300, X300 SE, X300 SE HM
800 x 600 Low : 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, i945G, i915G

Acceptable :
1280 x 1024 Medium : 6600 GT, X700 Pro
1024 x 768 Medium : 6600, 6600 LE, X700 SE, X600 Pro, X550
800 x 600 Medium : 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, PCX 5750, X300, X300 SE, X300 SE HM, i945G, i915G

To play this game under comfortable conditions, most graphic cards have to use the low quality mode. You then have the impression of being sent back three years in graphic quality.


FIFA 2005


Comfortable :
1024 x 768 : 6600 GT, 6600, 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, PCX 5750, X700 Pro, X700 SE, X600 Pro, X550, X300, X300 SE, X300 SE HM
800 x 600 : i945G, i915G

Acceptable :
1024 x 768 : 6600 GT, 6600, 6600 LE, 6200, 6200 TC 64 MB, 6200 TC 32 MB, 6200 TC 16 MB, PCX 5750, X700 Pro, X700 SE, X600 Pro, X550, X300, X300 SE, X300 SE HM, i945G, i915G

FIFA 2005 uses a peculiar engine, open to criticism on numerous points. The 1280 x 1024 mode doesn’t work because of a bug, and 1600 x 1200 is unplayable with any card, whether it runs at 10 or 300 FPS: movements seem to be updated at a fixed interval (for example 25 times per second). This isn’t too obvious at low resolution, but at high resolution it gives a bothersome impression of a lack of fluidity. You have to use 1024 x 768.


Page 13
Practical tests: Sims2, Empire Earth 2, WoW

Sims 2


Comfortable :
1280 x 1024 Medium : 6600 GT
1280 x 1024 Low : X700 Pro
1024 x 768 Medium : X700 SE
1024 x 768 Low : 6600, 6600 LE, X600 Pro, X550
800 x 600 Low : 6200, 6200 TC 64 MB, 6200 TC 32 MB, X300, X300 SE HM
Impossible : 6200 TC 16 MB, PCX 5750, X300 SE, i945G, i915G

Acceptable :
1280 x 1024 High : 6600 GT
1280 x 1024 Medium : 6600, X700 Pro, X700 SE, X600 Pro, X550
1024 x 768 Medium : 6600 LE, 6200
800 x 600 Medium : 6200 TC 64 MB, 6200 TC 32 MB, X300, X300 SE HM
800 x 600 Low : 6200 TC 16 MB, PCX 5750, X300 SE
Impossible : i945G, i915G

While we used a simple scene for the benchmarks (because it was easy to reproduce), we tested playability with a more complex one. Intel’s integrated cores don’t allow comfortable play, and graphic artefacts are numerous.


Empire Earth 2


Comfortable :
1024 x 768 Medium : X700 Pro, X700 SE, X600 Pro, X550, X300, X300 SE HM
800 x 600 Medium : X300 SE
1280 x 1024 Low : 6600 GT
1024 x 768 Low : 6600, 6600 LE, 6200, PCX 5750, i945G
800 x 600 Low : 6200 TC 64 MB, 6200 TC 32 MB, i915G
Impossible : 6200 TC 16 MB

Acceptable :
1600 x 1200 Medium : 6600 GT
1280 x 1024 Medium : X700 Pro, X700 SE, X600 Pro, X550, X300, X300 SE HM
1024 x 768 Medium : PCX 5750, X300 SE, i945G, i915G
1280 x 1024 Low : 6600, 6600 LE
1024 x 768 Low : 6200, 6200 TC 64 MB
800 x 600 Low : 6200 TC 32 MB, 6200 TC 16 MB

Just like with Colin McRae, stuttering is frequent, especially with NVIDIA cards, as soon as we chose the Medium mode.


World of Warcraft


Comfortable :
1600 x 1200 : 6600 GT
1280 x 1024 : X700 Pro
1024 x 768 AA 2x : 6600, 6600 LE
1024 x 768 : 6200, 6200 TC 32 MB, X700 SE, X600 Pro, X550
800 x 600 : 6200 TC 64 MB, 6200 TC 16 MB, PCX 5750, X300, X300 SE, X300 SE HM, i945G, i915G

Acceptable :
1600 x 1200 AA 4x : 6600 GT
1600 x 1200 AA 2x : X700 Pro
1280 x 1024 AA 2x : 6600
1280 x 1024 : 6600 LE, 6200, 6200 TC 32 MB, X700 SE, X600 Pro, X550
1024 x 768 : 6200 TC 64 MB, X300, X300 SE, X300 SE HM, i945G, i915G
800 x 600 : 6200 TC 16 MB, PCX 5750

The WoW engine, and what it displays, isn’t too complex. All cards provide nice results, even if with some of them you have to stay at 800 x 600 to keep optimum fluidity.


Page 14
In a nutshell

Benchmarks in a nutshell
We calculated the average of all benchmark results and gave the same weight to all games:
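As an illustration, an equal-weight average of this kind can be sketched as follows. The card names, fps figures and the choice of normalizing to the fastest card in each game are hypothetical, shown only to make the calculation concrete; the article doesn’t detail its exact normalization:

```python
# Hypothetical sketch of an equal-weight benchmark average.
# Per-game fps scores are first normalized (here to the fastest
# card in each game) so every game weighs the same, then averaged.

def equal_weight_average(results):
    """results: {game: {card: fps}} -> {card: average score in %}."""
    totals = {}
    for game, scores in results.items():
        best = max(scores.values())
        for card, fps in scores.items():
            # each game contributes one percentage per card
            totals.setdefault(card, []).append(100.0 * fps / best)
    return {card: sum(v) / len(v) for card, v in totals.items()}

# Illustrative numbers only, not the article's measurements:
sample = {
    "Max Payne 2": {"6600 GT": 90.0, "X700 Pro": 85.0, "i915G": 15.0},
    "GTR":         {"6600 GT": 60.0, "X700 Pro": 62.0, "i915G": 10.0},
}
avg = equal_weight_average(sample)
```

Because every game is rescaled before averaging, a game with very high absolute fps can’t dominate the overall ranking.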


On average, performance varies by as much as a factor of 5 in 1280 x 1024, but only by a factor of 3 in 800 x 600, as in this mode the most efficient graphic cards are restricted by the CPU. For mid range products, pitting the 6600 GT, 6600 and 6600 LE against the X700 Pro, X700 SE and X600 Pro, NVIDIA wins. The performance gap isn’t huge, but it is there, with as much as a 15% advantage for NVIDIA.

It is also possible to see the X600 Pro as a direct opponent of the PCX 5750, as was the case a year ago for the 9600 Pro and FX 5700. At that time, NVIDIA was working hard on drivers to reach an acceptable performance level. This is no longer the case today, as the results show: the PCX 5750 is completely left behind by the X600 Pro.

For entry level products it’s more complicated, as the products aren’t exactly on the same level, except for the TurboCache 32 MB and HyperMemory 32 MB, which provide equivalent performances with a slight advantage to NVIDIA. These two cards are very close to their 128 MB / 128 bit equivalents, proving the value of the TurboCache and HyperMemory technologies. The TurboCache 16 MB is one step behind, as 16 MB is really insufficient. The TurboCache 64 MB is just slightly behind the 32 MB version, showing the importance of bandwidth over the quantity of memory.

The 6200 and X300 provide similar performances with a slight advantage to ATI this time. The Radeon X550 is one step above with performances close to the X600 Pro.

The i915G and i945G performance level is lower than that of all ATI and NVIDIA graphic cards, except for the 6200 TurboCache 16 MB, which is surpassed by the i945G. The integrated solutions thus bring no real value in terms of 3D performance.


Practical tests in a nutshell
To calculate the average graphic quality reached in the 19 games at comfortable playability, we gave 1 point per quality level. For example, if there are 5 resolutions between the best and the worst graphic card, the cards obtain from 1 to 5 points according to their capability (and 0 if play wasn’t possible). We then rescaled the results of each game so that every game has the same weight, and computed an average:
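The scoring described above can be sketched as follows. The games, cards and point values are hypothetical examples, shown only to make the per-game rescaling concrete:

```python
# Hypothetical sketch of the practical-test scoring described above:
# each playable quality level is worth 1 point (0 if unplayable),
# per-game scores are rescaled so every game has the same weight,
# and the rescaled scores are averaged.

def practical_score(levels):
    """levels: {game: {card: points}} -> {card: average score in %}."""
    totals = {}
    for game, pts in levels.items():
        top = max(pts.values()) or 1  # guard against an all-zero game
        for card, p in pts.items():
            totals.setdefault(card, []).append(100.0 * p / top)
    return {card: sum(v) / len(v) for card, v in totals.items()}

# Illustrative numbers only: 5 quality levels between the best and
# worst card in each game, 0 meaning the game couldn't be played.
sample = {
    "LEGO Star Wars": {"6600 GT": 5, "X300 SE": 2, "i915G": 0},
    "Max Payne 2":    {"6600 GT": 5, "X300 SE": 3, "i915G": 1},
}
scores = practical_score(sample)
```

An unplayable game counts as zero, which is why cards that fail to start several titles fall well behind in this average even when their raw benchmark numbers look closer.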


First of all, these results confirm the standard benches for all middle range graphic cards and partly so for entry level products. Results are similar between ATI and NVIDIA products.

For the bottom entry level products there are a couple of differences. The X300 SE and TurboCache 16 MB provide similar results, a step below the other entry level graphic cards; play wasn’t possible in 4 out of 19 games. Here, the Intel integrated chipsets are clearly behind, whereas in the benches they were closer to the entry level graphic cards. Why such a difference? Because several games can’t be played at all, which resulted in several zero scores: of the 19 games, 8 were unplayable, half of these because of bugs.


Page 15
Conclusion

Conclusion
Is it possible to play games with an entry level graphic card? Yes, we did it. Of course, this answer has to be put in perspective with the various test results. Not all games can be played comfortably, meaning with perfect fluidity, on all graphic cards. Some cards can’t run every game even at low graphic quality settings, and others have bugs.

Also, while it is possible to play games that are 1 or 2 years old at good resolutions, this isn’t the case for recent games using advanced graphics. Newer games may require very low resolutions of 640 x 480 or 800 x 600. It is up to you to decide whether such resolutions are acceptable at a time when a basic monitor is a 17" 1280 x 1024 TFT. From our point of view, it’s far from ideal.

So we would advise gamers to use at least an X700 Pro or 6600 GT, or even an X700 SE or 6600 if you are on a tighter budget. At each level NVIDIA is slightly ahead in terms of capabilities and performance, but keep in mind that this advantage will cost you $20 to $30 more than the ATI solutions, an amount that is far from negligible for these types of products.


Below this range, you will have to make several concessions in terms of graphic quality. Of course, if you can’t spend more, you will have to choose between the lower budget graphic cards. If you really want to save some money, the Radeon X300 SE HyperMemory 32 MB and the GeForce 6200 TurboCache 32 and 64 MB provide similar 3D performances overall. Also, and this is rather a good surprise, in practice they are much closer to the Radeon X300 and GeForce 6200, which feature 128 MB of memory on a 128 bit bus, than to the same chips with a 64 bit memory bus. Without being miraculous, these two technologies are a plus for entry level products, as they can significantly improve performances at lower production costs.

The 6200 TC takes the lead over the X300 HM thanks to hardware WMV HD video decompression and small extras, such as the option of replacing the low quality interpolation of most TFT monitors with better GPU processed scaling, at a slightly higher price.

The GeForce 6200 TurboCache 16 MB has to be treated separately, and we still wonder if this card is really useful. Compared to integrated solutions it does have higher quality drivers with fewer bugs, but performances aren’t better and are even slightly inferior. We can’t even count on HD video acceleration, because it isn’t supported. $50 for fewer bugs in games at low performance is quite a high price!

During the past few months Intel has made some serious progress with its integrated solutions. Of course, performances are still far from sufficient for serious play, but drivers are improving fast, even if there is still work to do. Four of the games tested didn’t even start with the i915G and i945G. These GPUs are handicapped in certain games by the absence of hardware T&L and Vertex Shaders, which adds another load to the CPU. Intel’s dual core CPUs might work around this problem by redirecting these calculations to the otherwise unused core! This could further reduce the gap between integrated solutions and entry level graphic cards, pressuring ATI and NVIDIA to improve the performances of their entry level products. This wouldn’t be a bad thing!


Copyright © 1997-2008 BeHardware. All rights reserved.