SSD product review: Intel, OCZ, Samsung, Silicon Power, SuperTalent
by Marc Prieur
Published on October 6, 2008

SuperTalent, Intel: Varying performance levels
Before looking at performance levels, we should examine a fairly specific issue we noticed on three of the SSDs: a drop in performance, in certain cases, after the drive has been used for the first time.

With SuperTalent, this came about on both models when writing small and medium-sized files to the drive. While write speed for large files was stable at 58 MB/s and 38 MB/s for the DX and MX versions respectively, the situation was quite different for medium-sized files: 28 MB/s and 23 MB/s on first use, but then 19.6 MB/s and 14.7 MB/s on the second. For small files we measured 9.9 and 5.4 MB/s the first time round, then 6.7 and 3.2 MB/s! We don't really have an explanation for this, especially as it only happened in this precise case.

More problematic, however, is a situation on the Intel drive, also limited to write speeds. As you can see below, the Intel drive does indeed give very impressive performance. In certain cases, however, it suddenly dips. This is particularly, though not exclusively, the case for server-type operations involving random workloads.

A "new" drive gives an impressive 37,235 points in the HDD section of PCMark Vantage. You can run this test on its own several times and the score remains stable. But after filling the drive with the files used in our copy tests, on both partitions so that virtually 100% of the drive is in use, we get only 25,579 points in PCMark Vantage.

The same thing happens when, on a new drive, we run our IOMeter test (itself not particularly stable, as you will see below) and then PCMark Vantage: we get 26,728 points. Worse still, if we redo our file copying test afterwards and then run PCMark Vantage again, the score drops to 21,427. After various combinations we even got a score as low as 16,202 points, which is nevertheless 2.5x the score of a VelociRaptor. Repeating the test on its own afterwards does give a slightly higher score each time, but without ever reaching the original performance again (22,000 points after 20 repetitions).

File copying is also affected by repeated usage, even though this test can be run several times on its own with the same results. After running PCMark Vantage, our IOMeter test several times, and file copying on a new drive, write speeds fall from 70.4 to 43 MB/s for large files, from 47.2 to 12.7 MB/s for medium-sized files and from 20.2 to 7.5 MB/s for small files.

This drop in performance can go a good deal further still; at one point we measured 6 MB/s when writing large files. This seems to have been an exceptional result that we were not able to reproduce, and it should be noted that in any case only write speeds are affected in this way. Have a look at the h2bench (sequential write speed) graph for a comparison of an unused drive and the same drive after running IOMeter for 30 minutes with a 100% random workload:


Intel explains the situation thus:
SSDs all have what is known as an "Indirection System" – aka an LBA allocation table (similar to an OS file allocation table). LBAs are not typically stored in the same physical location each time they are written. If you write LBA 0, it may go to physical location 0, but if you write it again later, it may go to physical location 50, or 8.567 million, or wherever. Because of this, the performance of all SSDs will vary over time and settle to some steady-state value. Our SSD dynamically adjusts to the incoming workload to get the optimum performance for the workload. This takes time. Other lower-performing SSDs take less time as they have less complicated systems. HDDs take no time at all because their mapping of logical to physical locations is fixed, so their performance is immediately deterministic for any workload IOMeter throws at them.
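For illustration only (this is a hypothetical toy model, not Intel's actual implementation), the indirection table described above can be sketched in a few lines of Python: every write goes to the next free physical page, so rewriting the same LBA lands somewhere new and only the mapping table is updated.

```python
# Toy flash translation layer (FTL): maps logical block addresses (LBAs)
# to physical pages, writing out-of-place as Intel describes.
class TinyFTL:
    def __init__(self, num_pages):
        self.mapping = {}        # LBA -> physical page currently holding its data
        self.next_free = 0       # next erased page to use
        self.num_pages = num_pages

    def write(self, lba):
        # Out-of-place write: never overwrite a page in place; take the
        # next erased page and update the indirection table instead.
        page = self.next_free
        self.next_free = (self.next_free + 1) % self.num_pages
        self.mapping[lba] = page
        return page

ftl = TinyFTL(num_pages=1024)
first = ftl.write(0)   # LBA 0 lands on physical page 0
ftl.write(7)           # an unrelated write consumes page 1
second = ftl.write(0)  # rewriting LBA 0 now lands on page 2
print(first, second)   # 0 2
```

The point of the model is simply that the same LBA maps to different physical locations over time, which is why an SSD's behaviour depends on its write history in a way a hard drive's does not.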

The Intel® Performance MLC SSD is architected to provide the optimal user experience for client PC applications; the SSD will adapt and optimize its data location tables to obtain the best performance for any specific workload. This is done to provide the ultimate user experience, but it occasionally makes it challenging to obtain consistent benchmark results when changing from one specific benchmark to another, or when benchmarks are not run long enough to allow stabilization. If any benchmark is run for sufficient time, its scores will eventually approach a steady-state value; however, the time to reach such a steady state is heavily dependent on the previous usage case. Specifically, highly random heavy-write workloads or periodic hot-spot heavy-write workloads (which appear random to the SSD) will condition the SSD into a state which is uncharacteristic of client PC usage, and require longer usage under characteristic workloads before the drive adapts to provide the expected performance.

When a benchmark test or IOMeter workload has put the drive into this state, which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt. The drive will therefore provide inconsistent (and likely low) benchmark results for that and possibly subsequent tests, and can occasionally show extremely long latencies. The old HDD concept of defragmentation applies, but in new ways; standard Windows defragmentation tools will not work.

SSD devices are not aware of the files written within, but are rather only aware of the Logical Block Addresses (LBAs) which contain valid data. Once data is written to a Logical Block Address (LBA), the SSD must now treat that data as valid user content and never throw it away, even after the host “deletes” the associated file. Today, there is no ATA protocol available to tell the SSDs that the LBAs from deleted files are no longer valid data. This fact, coupled with highly random write testing, leaves the drive in an extremely fragmented state which is optimized to provide the best performance possible for that random workload. Unfortunately, this state will not immediately result in characteristic user performance in client benchmarks such as PCMark Vantage, etc. without significant usage (writing) in typical client applications allowing the drive to adapt (defragment) back to a typical client usage condition.
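The "no way to signal deletion" problem can be illustrated with a short, purely hypothetical sketch: a file deletion only updates the filesystem's own metadata, nothing reaches the SSD, so from the drive's perspective the set of LBAs it must preserve only ever grows.

```python
# Toy model of the pre-TRIM situation Intel describes: the drive tracks
# which LBAs hold "valid user data", and nothing ever shrinks that set.
valid_lbas = set()

def host_write(lba):
    # A write is visible to the drive: it must preserve this LBA forever.
    valid_lbas.add(lba)

def host_delete_file(file_lbas):
    # Deleting a file only changes filesystem metadata on the host side.
    # No ATA command of the era told the drive these LBAs are free again,
    # so this function deliberately does nothing drive-side.
    pass

host_write(100)
host_write(101)
host_delete_file([100, 101])
print(len(valid_lbas))  # 2 -- still "valid" from the SSD's point of view
```

This is exactly why heavy random-write testing leaves the drive in a fragmented state it cannot simply discard: every previously written LBA still counts as live data it must manage.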

In order to reset the state of the drive to a known state that will quickly adapt to new workloads for best performance, the SSD’s unused content needs to be defragmented. There are two methods which can accomplish this task.

One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1 second long sequential read test on the SSD drive with a blank NTFS partition installed on it. In this case, IOMeter will “Prepare” the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file, before running the 1 second long read test. This is the most “user-like” method to accomplish the defragmentation process, as it fills all SSD LBAs with “valid user data” and causes the drive to quickly adapt for a typical client user workload.
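IOMeter aside, the "prepare" phase described above boils down to filling all free space with one large, sequentially written file. A minimal hand-rolled equivalent might look like the sketch below; the function name and parameters are illustrative assumptions, and `total_bytes` would be the partition's free space.

```python
import os

def sequential_fill(path, total_bytes, chunk_bytes=16 * 1024 * 1024):
    """Sequentially write total_bytes of zeroes to path in large chunks,
    mimicking IOMeter filling free space with its IOBW.tst file."""
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            n = min(chunk_bytes, total_bytes - written)
            f.write(b"\x00" * n)   # large sequential writes, never random
            written += n
        f.flush()
        os.fsync(f.fileno())       # ensure the data actually reaches the drive
    return written

# Hypothetical usage: fill the target partition, then delete the file.
# sequential_fill("D:/IOBW.tst", free_space_bytes)
```

Sequential writes touch every LBA in order, which is what lets the drive's allocation tables settle back into a layout suited to typical client usage.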

An alternative method (faster) is to use a tool to perform a SECURE ERASE command on the drive. This command will release all of the user LBA locations internally in the drive and result in all of the NAND locations being reset to an erased state. This is equivalent to resetting the drive to the factory shipped condition, and will provide the optimum performance.

