Released in October 2004, the nForce 4 became the reference chipset for AMD platforms. It held this position for so long largely thanks to several evolutions, such as the nForce 4 SLI and the nForce 4 SLI X16. With the release of Socket AM2, NVIDIA has decided to launch not simply another version of this chipset, but a new nForce 5 product line based on a new MCP. The SPP is apparently identical for the chipset equipped with two PCI Express x16 slots.
Among the improvements, we noted native support for 6 SATA devices instead of 4, but only 2 PATA devices instead of 4. Thanks to these six ports, NVIDIA can extend RAID 5 to 6 disks. This RAID 5 remains 100% software, and its performance makes it of limited interest for standard use.
On the network side, two Gigabit Ethernet connections are now natively supported. NVIDIA improved its implementation so that the two Ethernet ports can be combined to double the transfer rate. For most users, a 2 Gbps (250 MB/s) connection isn't really useful, because only a few have gigabit networks. This feature, named Teaming, also works at 100 Mbps, where it increases the connection from 12.5 to 25 MB/s.
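The throughput figures above are simple bits-to-bytes arithmetic. A quick sketch (our own illustrative helper, nothing from NVIDIA's driver):

```python
# Rough arithmetic behind the teaming figures quoted above (illustrative only).

def link_throughput_mb_s(link_mbps: int, ports: int = 1) -> float:
    """Theoretical throughput in MB/s: bits -> bytes, times the number of teamed ports."""
    return link_mbps / 8 * ports

# Two teamed gigabit ports: 2 Gbps in total, i.e. 250 MB/s.
print(link_throughput_mb_s(1000, ports=2))  # 250.0

# Teaming also works at 100 Mbps: from 12.5 MB/s to 25 MB/s.
print(link_throughput_mb_s(100, ports=1))   # 12.5
print(link_throughput_mb_s(100, ports=2))   # 25.0
```

These are theoretical link rates; real-world throughput is lower once protocol overhead is accounted for.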
The chipset/driver combination is now also able to act on packet priority thanks to the FirstPacket technology. To benefit from it, you have to specify in the drivers which application should be prioritized. The point is, for example, to keep a good ping in a game while a data transfer runs in the background.
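FirstPacket itself lives inside NVIDIA's closed driver, but the general idea of letting a user-flagged application's packets jump ahead of bulk traffic can be sketched with a priority queue. Everything below (the `PRIORITIZED_APPS` set, the `TxQueue` class) is a hypothetical illustration, not NVIDIA code:

```python
import heapq

# Hypothetical sketch of FirstPacket-style prioritization: outgoing packets from a
# user-designated application are sent before bulk-transfer packets.

PRIORITIZED_APPS = {"game.exe"}  # apps the user flagged in the driver panel

class TxQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority class

    def enqueue(self, app: str, payload: bytes):
        priority = 0 if app in PRIORITIZED_APPS else 1  # lower = sent first
        heapq.heappush(self._heap, (priority, self._seq, app, payload))
        self._seq += 1

    def dequeue(self):
        _, _, app, payload = heapq.heappop(self._heap)
        return app, payload

q = TxQueue()
q.enqueue("ftp.exe", b"bulk data chunk")
q.enqueue("game.exe", b"position update")
print(q.dequeue()[0])  # game.exe: sent first despite arriving second
```

The real mechanism operates at the driver level on the network controller's transmit path, but the queuing principle is the same.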
TCP/IP acceleration is still there, but compared to the nForce 4, the hardware firewall has been removed.

Overclocking for everyone
With the release of the Radeon Xpress 3200, ATI strongly emphasized the overclocking capacities of its chipset, which apparently irritated NVIDIA considerably: it has developed several overclocking features for the nForce 5. EPP, or SLI Memory, is part of this initiative, and we already mentioned it in a previous news post. In practice, EPP consists of giving the BIOS more information about the memory; the BIOS then has to make use of this additional information. EPP behaves differently from one motherboard to another because the manufacturer chooses what to do with the additional data: optimizing timings, overclocking the memory, overclocking the entire system, etc. The system works, but it is always possible to go further manually. NVIDIA chose to limit EPP (which only concerns the memory and the BIOS, not the chipset) to the nForce 590 SLI. This is rather strange because, still according to NVIDIA, this chipset targets the DIY market. So why not implement it on other motherboards? All the more so since NVIDIA speaks of an open technology... for the nForce 590 SLI only?
NVIDIA's second overclocking initiative is LinkBoost, which automatically overclocks by 25% the graphics PCI Express bus and the bus between the MCP and the nForce 590 SLI SPP when LinkBoost-certified graphics cards are detected. It is a way to increase the bandwidth between the two graphics cards, or to be more accurate between two 7900 GTXs, since these are the only cards supported. If you remember the release of ATI's Xpress 3200, you will recall that ATI strongly insisted on the bandwidth between the two cards, thanks to both PCI Express x16 buses being handled by a single chip. ATI claimed a much higher practical bandwidth than NVIDIA's implementation, which prevents fully using the available bandwidth. On this last point, ATI is right. Using two chips to manage the two PCI Express ports prevents using the buses' entire bandwidth, because of the HyperTransport link between these two chips: during a transfer, data is first converted from PCI Express to HyperTransport and then converted back the other way. This restricts the maximum practical transfer rate and increases latency.
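To put rough numbers on that 25% boost, here is the nominal PCIe 1.x arithmetic (a back-of-the-envelope sketch using the public per-lane figure; `pcie_bandwidth_gb_s` is our own helper, not vendor data):

```python
# Nominal PCIe 1.x bandwidth arithmetic behind LinkBoost (illustrative figures).

LANE_MB_S = 250  # 2.5 GT/s with 8b/10b encoding = 250 MB/s per lane, per direction

def pcie_bandwidth_gb_s(lanes: int, overclock: float = 0.0) -> float:
    """Per-direction bandwidth in GB/s of a PCIe 1.x link, with optional overclock."""
    return lanes * LANE_MB_S * (1 + overclock) / 1000

print(pcie_bandwidth_gb_s(16))        # 4.0 GB/s stock, the theoretical x16 maximum
print(pcie_bandwidth_gb_s(16, 0.25))  # 5.0 GB/s with LinkBoost's +25%
```

Keep in mind these are theoretical ceilings; as the measurements below show, practical transfer rates sit well under them.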
The thing is, other factors can also restrict transfer rates, such as the GPU and how it interacts with the chipset. Is NVIDIA's implementation less efficient in the end? This is what we measured with a very simple homemade test, which consists of observing the maximum quantity of pixels that can be transferred from one GPU to the other via the PCI Express bus. We used two Radeon X1600 XTs on ATI's chipsets and two 7900 GTXs on NVIDIA's chipsets. As the results are normalized, the difference in the graphics cards' computing power isn't taken into account.
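Normalizing results so that raw GPU power drops out simply means expressing each chipset's score relative to the best score obtained with the same pair of cards. A hypothetical sketch with made-up numbers (not our actual measurements):

```python
# Hypothetical sketch of score normalization: each result is expressed as a
# percentage of the best result measured with the same pair of graphics cards,
# so absolute GPU power cancels out. Scores below are invented for illustration.

def normalize(scores: dict[str, float]) -> dict[str, float]:
    best = max(scores.values())
    return {name: round(100 * s / best, 1) for name, s in scores.items()}

fake_scores = {"chipset A (x16)": 820.0, "chipset B (x8)": 410.0}
print(normalize(fake_scores))  # {'chipset A (x16)': 100.0, 'chipset B (x8)': 50.0}
```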
The results in practice are far from ATI's announcement... While CrossFire results really do double when going from PCI Express x8 to x16, unlike with NVIDIA, the bandwidth remains much lower than what SLI is capable of. We speak here of CrossFire and SLI because it is the chipset/GPU/driver combination that defines transfer performance. Even if neither implementation reaches the maximum transfer rate (4 GB/s), NVIDIA's is more efficient than ATI's. LinkBoost does indeed increase bandwidth, by 10%. The thing is, we couldn't find any current game that benefits from this additional bandwidth.

An efficient chipset...
The nForce 5 is a complete chipset, and even if the evolutions compared to the nForce 4 aren't enormous in practice (who needs two gigabit ports? RAID 5 across 6 disks?), they are present. NVIDIA also worked on overclocking in order to simplify it. The thing is that these simplifications are only available on very high-end products aimed at experienced users, who will most of the time prefer to do it manually. Why not implement them on other products in the line, where their usefulness seems a little more obvious?
An efficient chipset... which pays the price of an AM2 platform that isn't really attractive. NVIDIA must be eagerly awaiting the Core 2 Duo, as the nForce 5 will also be available in an Intel version.