Serial ATA and RAID
IOMeter is used to simulate the load of a multi-user environment, in this case with a file-server type workload composed of 80% reads and 20% writes, all 100% non-sequential. In this type of situation, NCQ can be particularly useful because multiple commands can be outstanding at once. In this test we measured performance, expressed in inputs/outputs per second (IO/s), with 1, 2, 4, 8, 16, 32, 64 and 128 simultaneous commands. The two chipsets were tested with Raptor 150 GB drives in the following configurations: a single drive, two drives in RAID 0 and three drives in RAID 5.
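To make the access pattern concrete, here is a minimal single-threaded sketch of that file-server load: random block offsets, 80% reads and 20% writes, reported as IO/s. It is only an illustration of the pattern; the real IOMeter test keeps several commands outstanding at once (the queue depth that exercises NCQ), which a simple loop like this does not.

```python
import os
import random
import tempfile
import time

def run_file_server_load(path, file_size=4 * 1024 * 1024,
                         block_size=4096, ops=2000, read_ratio=0.8):
    """Issue a random mix of reads (80%) and writes (20%) at random
    block offsets, mimicking IOMeter's file-server access pattern,
    and return the achieved rate in I/Os per second."""
    blocks = file_size // block_size
    payload = os.urandom(block_size)
    with open(path, "r+b") as f:
        start = time.perf_counter()
        for _ in range(ops):
            # 100% non-sequential: every access lands on a random block
            f.seek(random.randrange(blocks) * block_size)
            if random.random() < read_ratio:
                f.read(block_size)
            else:
                f.write(payload)
        f.flush()
        elapsed = time.perf_counter() - start
    return ops / elapsed

# Usage: create a scratch file, then run the mixed workload against it.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\0" * 4 * 1024 * 1024)
iops = run_file_server_load(tmp.name)
os.unlink(tmp.name)
```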
While performance is close with a single drive, in RAID 0 and RAID 5 the nForce 7 plateaus at four simultaneous commands. Beyond that point performance stagnates, to the extent that with 128 commands a single drive gives similar results! The X38, on the other hand, continues to improve.
We now move on to file copying. We measured read and write speeds, as well as the copying of three sets of files: 2 large files totaling 4.4 GB, 2,620 files totaling 2 GB, and finally 16,046 smaller files totaling 733 MB. Depending on whether we are reading from or writing to the tested drive, the other end (source or target) is a RAID 0 array of two Raptor 74 GB drives.
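The copy measurement itself is straightforward: time the transfer and divide by the size. A minimal sketch of that throughput calculation (not the actual test harness used in the article) could look like this:

```python
import os
import shutil
import tempfile
import time

def copy_throughput(src, dst):
    """Copy src to dst and return the achieved throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size / elapsed / 1e6

# Usage: copy an 8 MB scratch file and report the rate.
with tempfile.NamedTemporaryFile(delete=False) as src:
    src.write(os.urandom(8 * 1024 * 1024))
dst = src.name + ".copy"
mbps = copy_throughput(src.name, dst)
os.remove(src.name)
os.remove(dst)
```

Note that small, cached copies like this one mostly measure the OS cache; the article's multi-gigabyte file sets are what force the drives themselves to work.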
With a single drive the nForce 7 is slightly ahead, but it's the opposite in RAID 0. The performance of both chipsets in RAID 5 is rather catastrophic whenever writing is significant. This doesn't seem to have changed much since our first tests of the nForce 4, when we noticed that this "software" RAID 5 struggled in this area.
NVIDIA pioneered the integration of Gigabit networking into the chipset with the nForce3 250Gb; at the time it was mostly competing with solutions using the "good old" PCI bus. Since then, motherboards based on rival chipsets have more often integrated PCI Express chips such as the Marvell 88E8056 controller, whose bandwidth is better suited to this type of network. Besides performance, we also wanted to take a look at CPU usage. To test speeds, we used the program PCATTCP with a buffer size of 65536 bytes.
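The principle behind a PCATTCP-style test is simply to stream a known number of bytes over a TCP connection in fixed-size buffers and time it. A minimal loopback sketch of that idea (our illustration, not PCATTCP itself) could look like this:

```python
import socket
import threading
import time

BUF = 65536  # same buffer size as used with PCATTCP in the article

def sink(server, total):
    """Accept one connection, receive `total` bytes and discard them."""
    conn, _ = server.accept()
    received = 0
    while received < total:
        data = conn.recv(BUF)
        if not data:
            break
        received += len(data)
    conn.close()

def measure_tcp_throughput(total=16 * 1024 * 1024):
    """Stream `total` bytes over a loopback TCP connection in
    BUF-sized chunks and return the throughput in MB/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    t = threading.Thread(target=sink, args=(server, total))
    t.start()
    client = socket.create_connection(server.getsockname())
    chunk = b"\0" * BUF
    start = time.perf_counter()
    sent = 0
    while sent < total:
        client.sendall(chunk)
        sent += len(chunk)
    client.close()
    t.join()
    server.close()
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

net_mbps = measure_tcp_throughput()
```

Over loopback this measures the TCP stack rather than the NIC, which is precisely why real Gigabit tests run between two machines.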
In terms of speed the two solutions are close, but the advantage still goes to NVIDIA for transmission. That said, with the Marvell solution topping out around 100 MB/s, we are most likely limited more by the drive subsystem than by the network. The nForce 7 also comes out ahead in processor usage, which is noticeably lower; the biggest difference is seen during transmission. These measurements were carried out with a Q6600, and the Marvell solution consumes roughly 84% of one of the four cores when transmitting at full speed!