SSD product review: Intel, OCZ, Samsung, Silicon Power, SuperTalent - BeHardware
Written by Marc Prieur
Published on September 8, 2008
Introduction, for and against
Flash memory SSD drives have been around for over two years now. With ridiculously low capacities and poor performance at first, they have since progressed on many fronts and, with the arrival of 60 GB drives at less than €300, are now carving out a real advantage across the board. And this is only the start!
SSD drives - in favour
Compared to classic hard disks, the main advantage of the SSD drive is of course the absence of mechanical parts. This means that they are completely silent and far more resistant to being knocked about. Seek times are also incredibly fast: while hard disks still have seek times of anywhere between 7 and 17 ms, SSD drives measure averages of less than a millisecond, which is what makes for such a difference in performance in certain operations.
In terms of power consumption, SSD drives are of course very impressive, but it is important to put this in context. 2.5" hard drives are also very economical with, say, a Samsung HM160HI 5400, idling at 0.9W and reading/writing at 2.3W. For anyone who wants to know, from a system point of view, SSD drives function in exactly the same way as a standard hard disk: there is no difference in how they address the SATA controller.
SSD drives - against
SSD drives are unfortunately far from perfect. Using NAND flash memory, they suffer its disadvantages, although these are attenuated by the controller. The first limitation of NAND flash memory is that it does not provide a random-access external address bus: data must be read on a block-wise basis.
Each memory chip is divided into blocks, which are in turn divided into pages. A 2 GB chip is typically divided into blocks of 128 KB, each made up of 2 KB pages. Reads are carried out a page at a time, but writes must be done a whole block at a time. So an SSD takes as much time to read 1 KB as 2 KB, which isn't a great problem, but it also needs as much time to write 4 KB as 128 KB, which clearly is.
It is possible to get around this problem by having the controller cache incoming data and only commit it to flash in full 128 KB blocks. Otherwise, quite apart from the performance cost in this particular case, a proportion of memory cells is worn out for nothing.
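The arithmetic above can be sketched in a few lines of Python. The block size is the 128 KB quoted above; the two controller models (naive block-per-write versus a coalescing cache) are hypothetical simplifications, not any vendor's actual firmware.

```python
BLOCK_KB = 128  # erase/program granularity of the NAND described above

def flash_cost_kb(writes_kb, coalesce):
    """Total KB actually programmed to flash for a sequence of host writes.

    coalesce=True models a controller cache that gathers small writes into
    full 128 KB blocks before committing; coalesce=False models a naive
    controller that erases and reprograms at least one block per write."""
    if coalesce:
        total = sum(writes_kb)
        return -(-total // BLOCK_KB) * BLOCK_KB  # ceiling to whole blocks
    return sum(-(-max(w, BLOCK_KB) // BLOCK_KB) * BLOCK_KB for w in writes_kb)

small_writes = [4] * 32  # 32 host writes of 4 KB = 128 KB of user data
assert flash_cost_kb(small_writes, coalesce=False) == 32 * 128  # 4 MB of wear
assert flash_cost_kb(small_writes, coalesce=True) == 128        # one block
```

The 32x difference between the two totals is exactly the cell wear that a write-back cache in the controller avoids.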
Because, yes, flash memory cells do wear out! And worse, the rate depends on the technology. SLC memory, which stores one bit per cell, is rated for around 100,000 erase cycles as opposed to 10,000 for MLC memory, which in return offers double the capacity for the same physical size.
SSD drives - against (cont)
With the aim of catching write errors and minimizing cell wear, SSD drives have built-in ECC correction: if the ECC algorithm detects an error, the block is marked as defective and the data is written elsewhere. Of course, it's even better to limit cell wear in the first place.
To achieve this, SSD drives use a technique known as wear leveling. The controller keeps, for each block, a count of writes and the date of the last write. This allows it to arrange data so that erases and rewrites are distributed evenly, sharing transistor wear across the drive, even if this means moving the oldest data onto the most worn blocks. This can of course hurt performance during intensive writing on a drive that has already seen a fair amount of wear.
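A toy model of the idea, assuming a simple "pick the least-worn block" policy (real controllers are far more sophisticated and their exact algorithms are not public):

```python
class WearLeveler:
    """Hypothetical wear-leveling allocator: every logical write is
    directed to the physical block with the fewest erases so far."""

    def __init__(self, n_blocks):
        self.erases = [0] * n_blocks  # erase count per physical block

    def write(self):
        # choose the least-worn block as the target of this write
        target = min(range(len(self.erases)), key=self.erases.__getitem__)
        self.erases[target] += 1
        return target

wl = WearLeveler(8)
for _ in range(800):
    wl.write()
# wear ends up spread evenly: no block is erased more than any other
assert max(wl.erases) - min(wl.erases) <= 1
```

Without such a policy, hot data (a filesystem journal, say) would hammer the same few blocks until they exceeded their rated erase cycles while the rest of the drive stayed fresh.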
The other particularity of SSD drives concerns how long memory cells retain data. Data is not stored permanently, and the retention span varies with how worn a cell is. According to the JEDEC specifications, a "new" cell should retain data for 10 years, whereas at the end of a cell's life retention drops to around a year.
All this means that the life expectancy of an SSD drive can vary enormously depending on the technology used. In the best-case scenario, an MLC drive could be completely filled every day for 27 years before wearing out. In practice we're far from this figure, and Intel, new to the market and naturally claiming to be on the cutting edge of reliability, gives a 5-year life expectancy at 100 GB written per day. This isn't bad, but it remains to be seen what the actual life span of SSD drives will be and, above all, how models will differ, something that is very difficult to know in advance.
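The 27-year figure follows directly from the erase-cycle ratings above, assuming perfect wear leveling and no write amplification (one full-drive fill consumes exactly one erase cycle per block):

```python
def lifetime_years(erase_cycles, fills_per_day=1):
    """Best-case drive endurance: if the whole drive is rewritten once
    per fill, each block survives erase_cycles fills. Assumes perfect
    wear leveling and no write amplification."""
    return erase_cycles / (fills_per_day * 365)

# MLC at 10,000 cycles, filled once a day: roughly the 27 years quoted
assert round(lifetime_years(10_000), 1) == 27.4
# SLC at 100,000 cycles lasts ten times longer under the same workload
assert round(lifetime_years(100_000)) == 274
```

Real-world figures are much lower precisely because write amplification and imperfect leveling make each host write cost more than one flash write.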
OCZ, Silicon Power, Samsung SSDs
OCZ Core V1 and V2 64/60 GB
OCZ put these affordable, €300, 60 GB SSD drives on sale several weeks ago. In practice the drives use the same casing and PCB, except that the V2 has a mini-USB port for flashing firmware.
In both cases the PCB carries a JMicron JMF602 controller and 8 x 8 GB Samsung MLC chips. The two don't run at the same speed however, with the V1 rated at a read speed of 143 MB/s and a write speed of 93 MB/s, compared to 170 MB/s and 98 MB/s for the V2.
It should be noted that the OCZ Core V1, sold as a 64 GB drive, in fact offers 60 GB, or 56.3 GB if you count 1 KB as 1024 bytes. The OCZ Core V2 does have 64 GB of storage, or 59.7 GB as reported by our OS, although it is sold as a 60 GB drive.
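Most of the gap between stated and reported capacities is simply the decimal/binary unit mismatch; the small remaining differences come from the exact raw capacity of each drive. A quick sketch:

```python
def decimal_gb_as_reported(gb):
    """Convert a marketing capacity (1 GB = 10^9 bytes) into what an OS
    counting in binary units (1 'GB' = 2^30 bytes) will display."""
    return gb * 10**9 / 2**30

# a "64 GB" drive shows up as roughly 59.6 GB in the OS
assert round(decimal_gb_as_reported(64), 1) == 59.6
# and a true 60 GB of storage displays as about 55.9 GB
assert round(decimal_gb_as_reported(60), 1) == 55.9
```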
Silicon Power "64" GB
Silicon Power, like OCZ, prices its SSD drives quite cheaply, at least as far as the MLC model tested here goes. The casing of this SSD is in fact very similar to the OCZ Core V1's, and it is identical inside, with Samsung MLC NAND flash and a JMicron controller.
Like the OCZ Core V1, it is sold as a 64 GB drive but in fact has only 60 GB of usable space (or rather 56.33 GB). The two are in fact the same SSD, but they are described quite differently: Silicon Power gives a read speed of 119 MB/s and a write speed of 67 MB/s, whereas OCZ claims 143 MB/s and 93 MB/s. Silicon Power is the more accurate.
Samsung 64 GB
The largest flash memory manufacturer in the world is naturally a fervent defender of SSD drives and offers numerous models. Among them is the MCCOE64G5MPP, a 64 GB model using SLC chips. Though it has a good reputation for performance, very few are actually on sale. The OCZ SATA II is in fact an exact copy … but costs more than €800. Samsung officially states a read speed of 100 MB/s and a write speed of 80 MB/s.
On opening it up, we found 8 x 8 GB Samsung SLC chips, a Samsung controller (in fact an ARM-type CPU) as well as 32 MB of SDRAM … also from Samsung.
SuperTalent MX and DX 60 GB
Although the MX is affordable at €400, the DX is much less so (€1000), for the good and simple reason that it uses SLC memory. For all that, the stated performances are far from amazing: a read speed of 120 MB/s for both, and write speeds of 40 MB/s for the MX and 70 MB/s for the DX. We weren't able to open the drives.
Intel X25-M 80 GB
Intel has arrived on the SSD drive market with a great deal of noise. Its first model, the 80 GB X25-M, uses MLC memory and is priced around €500. The specs given by the manufacturer are pretty breathtaking: a read speed of 250 MB/s, though with a more modest write speed of 70 MB/s. This is achieved thanks to a controller that addresses 10 flash channels at once.
What's more, Intel uses a very effective wear leveling and write amplification control system (write amplification being the ratio between the data actually written to flash and the data the host asked to write), allowing it to claim a long life expectancy for the drive: more than 100 GB written per day for 5 years.
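Write amplification can be expressed as a simple ratio; the figures below are purely illustrative, not Intel specifications:

```python
def write_amplification(nand_bytes_written, host_bytes_written):
    """Write amplification factor: bytes the controller actually programs
    to flash per byte the host asked to write. 1.0 is the ideal; naive
    block-granularity controllers can be far higher."""
    return nand_bytes_written / host_bytes_written

# worst case: a 4 KB host write that costs a full 128 KB block program
assert write_amplification(128 * 1024, 4 * 1024) == 32.0
# a well-managed controller keeps the factor close to 1 (illustrative)
assert write_amplification(110, 100) == 1.1
```

The lower the factor, the more of each cell's rated erase cycles goes to actual user data, which is what lets a manufacturer quote higher daily write volumes for the same flash.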
There is a 20 chip PCB (10 each side) inside carrying Intel and Micron logos (the two companies are partners in flash memory production), an Intel controller and a 16 MB SDRAM chip by Samsung.
The test
For this test we sourced various SSD drives: two OCZ Cores (a V1 and a V2), a Samsung, a Silicon Power, two SuperTalents (one SLC, one MLC) and the most recent Intel SSD, the 80 GB X25-M. For reference, we also give performance results for a VelociRaptor, a 3.5" Samsung SpinPoint F1 640 GB and a 2.5" Samsung SpinPoint M5 160 GB.
Various measurements were carried out. First of all, we looked at each drive's "synthetic" performance: cache and sequential speeds and average access time. Next came more practical tests: an application performance index based on PC Mark Vantage, then a server-type file load simulated with IOMeter. This was followed by measurements of writing, reading, near copying (on the same partition) and far copying (to a partition starting at 50% of the drive) with various groups of files. These groups were composed of:
- A collection of large files: 6 files (on average 2.2 GB) totalling 13.2 GB
- Medium sized: 7.96 GB of 10,480 files (each averaging 796 KB)
- Small sized: 2.86 GB of 68,184 files (each averaging 44 KB)
The source or target of reading or writing on the drive was a RAID of two Raptor 150 GB drives so as to make sure we weren’t limited. This type of measurement is worthwhile because, while the sequential speed gives us an idea of the performance in copying large files, things can be different with smaller ones.
The test machine was based on an ASUSTeK P5E motherboard (X38 chipset), with the Serial ATA ports configured in AHCI (Advanced Host Controller Interface) mode in the BIOS so that NCQ could be used, all running Vista SP1.
SuperTalent, Intel: varying performance levels
Before looking at performance levels, we ought to look at a fairly specific issue we noticed on three of the SSDs: a drop in performance in certain cases after first use.
With SuperTalent, this came about on both models when writing small and medium files to the drive. While write speed for large files was stable at 58 MB/s and 38 MB/s for the DX and MX versions respectively, the situation was quite different for medium-sized files: 28 MB/s and 23 MB/s on first use, but then 19.6 MB/s and 14.7 MB/s on the second. For small files we measured 9.9 and 5.4 MB/s the first time round, then 6.7 and 3.2 MB/s! We don't really have an explanation, especially as it only happened in this precise case.
More problematic however is a situation on the Intel drive, also limited to write speeds. As you can see below, the Intel drive does indeed give very impressive performances. Nevertheless, in certain cases, this does suddenly dip. This is particularly, but not only, the case for server load type operations involving random workloads.
The "new" drive gives an impressive 37,235 points in the PC Mark Vantage HDD test. You can run this test on its own several times and the score remains stable. But after filling the drive with the file groups used in our copy tests, on both partitions so that virtually 100% of the drive is used, we only get 25,579 points.
The same thing happens on a new drive when we run our IOMeter test (itself not particularly stable, as you will see below) and then PC Mark Vantage: we get 26,728 points. Even worse, if we then redo our file copying test and run PC Mark Vantage again, the score drops to 21,427. After multiple combinations we even managed a score of 16,202 points – which is nevertheless 2.5x a VelociRaptor. Repeating the test on its own after that does give a slightly higher score each time … but without ever reaching the original performance again (22,000 points after 20 repetitions).
File copying is also affected by prior usage, even if this test on its own can be repeated several times with the same results. After running PC Mark Vantage, IOMeter and file copying several times on a new drive, write speeds fall from 70.4 to 43 MB/s for large files, from 47.2 to 12.7 MB/s for medium-sized files and from 20.2 to 7.5 MB/s for small files.
This drop in performance can go a good deal further: at one point we measured 6 MB/s writing large files. This seems to have been an exceptional result that we were unable to reproduce, and it should be noted that in any case only write speeds are affected. Have a look at the h2bench sequential write graph, first for an unused drive and then after 30 minutes of IOMeter at 100% random workload:
Intel explains the situation thus:
SSDs all have what is known as an “Indirection System” – aka an LBA allocation table (similar to an OS file allocation table). LBAs are not typically stored in the same physical location each time they are written. If you write LBA 0, it may go to physical location 0, but if you write it again later, it may go to physical location 50, or 8.567 million, or wherever. Because of this, all SSDs performance will vary over time and settle to some steady state value. Our SSD dynamically adjusts to the incoming workload to get the optimum performance for the workload. This takes time. Other lower performing SSDs take less time as they have less complicated systems. HDDs take no time at all because their systems are fixed logical to physical systems, so their performance is immediately deterministic for any workload IOMeter throws at them.
The Intel ® Performance MLC SSD is architected to provide the optimal user experience for client PC applications, however, the performance SSD will adapt and optimize the SSD’s data location tables to obtain the best performance for any specific workload. This is done to provide the ultimate in a user experience, however provides occasional challenges in obtaining consistent benchmark testing results when changing from one specific benchmark to another, or in benchmark tests not running with sufficient time to allow stabilization. If any benchmark is run for sufficient time, the benchmark scores will eventually approach a steady state value, however, the time to reach such a steady state is heavily dependant on the previous usage case. Specifically, highly random heavy write workloads or periodic hot spot heavy write workloads (which appear random to the SSD) will condition the SSD into a state which is uncharacteristic of a client PC usage, and require longer usages in characteristic workloads before adapting to provide the expected performance.
When following a benchmark test or IOMeter workload that has put the drive into this state which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt to the new workload, and therefore provide inconsistent (and likely low) benchmark results for that and possibly subsequent tests, and can occasionally cause extremely long latencies. The old HDD concept of defragmentation applies but in new ways. Standard windows defragmentation tools will not work.
SSD devices are not aware of the files written within, but are rather only aware of the Logical Block Addresses (LBAs) which contain valid data. Once data is written to a Logical Block Address (LBA), the SSD must now treat that data as valid user content and never throw it away, even after the host “deletes” the associated file. Today, there is no ATA protocol available to tell the SSDs that the LBAs from deleted files are no longer valid data. This fact, coupled with highly random write testing, leaves the drive in an extremely fragmented state which is optimized to provide the best performance possible for that random workload. Unfortunately, this state will not immediately result in characteristic user performance in client benchmarks such as PCMark Vantage, etc. without significant usage (writing) in typical client applications allowing the drive to adapt (defragment) back to a typical client usage condition.
In order to reset the state of the drive to a known state that will quickly adapt to new workloads for best performance, the SSD’s unused content needs to be defragmented. There are two methods which can accomplish this task.
One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1 second long sequential read test on the SSD drive with a blank NTFS partition installed on it. In this case, IOMeter will “Prepare” the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file, before running the 1 second long read test. This is the most “user-like” method to accomplish the defragmentation process, as it fills all SSD LBAs with “valid user data” and causes the drive to quickly adapt for a typical client user workload.
An alternative method (faster) is to use a tool to perform a SECURE ERASE command on the drive. This command will release all of the user LBA locations internally in the drive and result in all of the NAND locations being reset to an erased state. This is equivalent to resetting the drive to the factory shipped condition, and will provide the optimum performance.
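The first of the two methods amounts to sequentially filling all free space with one file and then deleting it. A rough Python sketch, assuming an ordinary mounted filesystem; the file name, chunk size and the max_bytes test hook are our own choices, not part of Intel's procedure:

```python
import os

def sequential_fill(dirpath, max_bytes=None, chunk_bytes=1 << 20):
    """Write one file sequentially until the volume is full (or until
    max_bytes, for testing), then delete it, so that every free LBA has
    held valid user data at least once."""
    fill_path = os.path.join(dirpath, "fill.tst")
    chunk = b"\0" * chunk_bytes
    written = 0
    try:
        with open(fill_path, "wb") as f:
            while max_bytes is None or written < max_bytes:
                f.write(chunk)
                written += chunk_bytes
    except OSError:
        pass  # disk full: the whole free area has been written sequentially
    finally:
        if os.path.exists(fill_path):
            os.remove(fill_path)
    return written
```

The secure-erase route is faster because it resets the NAND directly instead of rewriting it through the filesystem, but it destroys all data on the drive.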
Momentary disconnections with OCZ/Silicon Power?
At such an attractively low price, the OCZ SSD has been selling well, but some users have complained about latency when performing multiple tasks on it. As you will see below, the IOMeter tests underline a random access problem with this model. We tried to see if there is a link.
We used IOMeter with 4KB operations and more or less random access on the OCZ Core V2 and the Samsung SLC SSD as well as on a VelociRaptor. First, here are the read results:
The SSD drives show a great superiority over classic hard drives here. When it comes to write results however, the figures are not so positive:
Thanks to its cache, the VelociRaptor falls off less here than it did in reading, but it's the SSD results that are significant. The Samsung SSD holds up far better than the OCZ Core V2: beyond 25% random access, the OCZ can no longer deliver more than 19 operations a second in response to what IOMeter throws at it, and at 100% random access this drops to 4!
Worse, while the OCZ SSD's average write response time varies between 0.26 and 224 ms (the inverse of the number of I/Os per second), its maximum write response time is around 900 ms whatever the percentage of random access. From these scores we can conclude that the higher the proportion of random access, the more often this worst-case response time occurs. This explains the saw-tooth pattern obtained in sequential write tests with applications such as HDTune and HDTach.
With the Samsung SSD, the maximum response time is 80ms and the average response varies between 0.14 and 8.8ms. The average for the VelociRaptor varies between 0.09ms and 4.59ms (long live the cache), with a maximum of 40ms.
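The average response times quoted here are simply the inverse of the IOMeter rates; the 3,846 IO/s figure below is back-calculated from the 0.26 ms average and is an assumption on our part:

```python
def avg_latency_ms(iops):
    """Average service time implied by an IOMeter rate:
    one second divided by the I/Os completed per second."""
    return 1000.0 / iops

# the OCZ falling to 4 I/Os per second at 100% random writes means
# a quarter of a second per operation on average
assert avg_latency_ms(4) == 250.0
# and 0.26 ms average corresponds to roughly 3,846 IO/s (back-calculated)
assert round(avg_latency_ms(3846), 2) == 0.26
```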
A cache is supposed to mask this significant write latency but, in contrast to the Samsung and the Intel, the OCZ Core has no SDRAM at all. You therefore have to rely on the JMicron controller's internal buffers, which hardly seem up to the job.
You will no doubt have understood that this is a significant limitation with lower end SSDs, especially with the OCZ Cores and the Silicon Power, which gives exactly the same results.
Access time, sequential speed
Measured with h2benchw, access time is THE strong point of SSDs. While SATA hard drives have seek times of 7 ms at best (the VelociRaptor) and can be as slow as 17.7 ms (the 2.5" Samsung), SSDs come in under a millisecond. There are significant differences from model to model however, with the Intel measuring 0.1 ms compared to 0.5 ms for the SuperTalent.
Sequential speed
Continuing with sequential speed, still with h2benchw. In contrast to other benchmarking software such as HDTune or HDTach, h2benchw carries out a true sequential test: it reads or writes the entire drive, whereas the others jump between zones to reduce test time.
In terms of read performance, the Intel SSD is truly exceptional, at double the speed of the others or of the VelociRaptor. Another advantage of SSDs is that they deliver a constant speed across the entire drive, whereas hard disk throughput drops towards the end of the disk.
Write speeds are, however, significantly lower. The OCZ, Silicon Power and SuperTalent models perform at well below spec and there does seem to be an issue with h2bench because, as you can see below, speeds observed for writing large files are significantly down. The Intel and the Samsung do however measure up to speeds stated.
The graph for the OCZ/Silicon Power drives shows a great deal of oscillation for this test.
PC Mark Vantage
We now move on to less synthetic tests, starting with the hard drive performance index in PC Mark Vantage. FutureMark replays a series of recorded read/write operations covering diverse tasks: Vista startup, application loading (Word, Photoshop, IE, Outlook), manipulation of multimedia files (photos, videos, music), a game (loading in Alan Wake) and a drive scan with Windows Defender.
The Intel literally crushes the competition here, even if, as indicated in the section "Varying performance levels", its score should be put in perspective: it can drop significantly in some circumstances, as low as 16,202 in our tests. In any case the SSD drives show a clear advantage in tests where reading dominates. The read percentage of each test is as follows:
- Windows Defender: 99.38%
- Gaming: 99.95%
- Photo Gallery: 84.09%
- Vista Startup: 84.67%
- Movie Maker: 53.41%
- Media Center: 50.12%
- Media Player: 77.93%
- Application Loading: 87.17%
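Averaging these traces (unweighted, since FutureMark does not publish the suite's internal weighting) confirms how read-heavy the index is:

```python
# read share of each PC Mark Vantage HDD trace, from the list above
read_pct = {
    "Windows Defender": 99.38, "Gaming": 99.95, "Photo Gallery": 84.09,
    "Vista Startup": 84.67, "Movie Maker": 53.41, "Media Center": 50.12,
    "Media Player": 77.93, "Application Loading": 87.17,
}

avg_read = sum(read_pct.values()) / len(read_pct)
# roughly 80% of the suite's I/O is reads, which favours SSDs
assert round(avg_read, 1) == 79.6
```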
File copying
This brings us to file copying. We measure read and write speeds for various groups of files, as well as copying within the same partition and copying to a partition which starts at 50% of the drive.
These groups of files are composed of the following:
- Large sized files: 6 files (on average 2.2 GB) totaling 13.2 GB
- Medium sized: 7.96 GB of 10,480 files (each averaging 796 KB)
- Small: 2.86 GB of 68,184 files (each averaging 44 KB)
The source or target for reading or writing to the drive is a RAID of two 150 GB Raptors, so as not to be limited by it. Nevertheless, with Intel SSD performance being what it is, this does become a factor when reading large files!
In read terms, SSDs outperform everything else for large files, even though even the Intel doesn't reach its full sequential speed here because of the RAID source's limits. For small and medium-sized files, the speeds correspond to hard disk level performance. Very poor write speeds were noted for small and medium-sized files on the SuperTalents. The OCZ Core and Silicon Power are at more or less the level of the 5400 rpm 2.5" drive, while the Intel gives a good level of performance on small and medium-sized files. The Samsung drive, a bit behind OCZ and Intel in reading, shows much better results with medium and large files. It should be noted that manufacturers' specs should normally be attainable with large files: the OCZ Core V1 and V2, rated at 143/93 MB/s and 170/98 MB/s read/write respectively, are nowhere near.
Whether file copying is on the same partition or not obviously doesn’t have much impact when it comes to SSDs, seeing as how they function. The Intel and Samsung are out in front, then comes the OCZ/Silicon Power, followed by the SuperTalent, way behind.
IOMeter
IOMeter is used to simulate the load of a multi-user environment with a server-type file workload: 80% reads and 20% writes, 100% random across the drive. In this type of situation NCQ can be particularly useful given the multiple concurrent commands. We measured performance in inputs/outputs per second (IO/s) with 1, 2, 4, 8, 16, 32, 64 and 128 simultaneous commands. Of course, with a single command NCQ has no effect.
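The workload pattern can be reproduced in a few lines; the LBA range and random seed below are arbitrary choices for illustration, not IOMeter's actual parameters:

```python
import random

def server_workload(n_ops, read_ratio=0.8, lba_count=1 << 20, seed=42):
    """Generate an IOMeter-style server pattern: each operation targets a
    uniformly random LBA and is a read with probability read_ratio."""
    rng = random.Random(seed)
    return [("read" if rng.random() < read_ratio else "write",
             rng.randrange(lba_count))
            for _ in range(n_ops)]

ops = server_workload(10_000)
reads = sum(1 for kind, _ in ops if kind == "read")
# the mix comes out close to the configured 80/20 split
assert abs(reads / len(ops) - 0.8) < 0.02
```

It is precisely the 20% of random writes in this mix that punishes drives with weak controllers, as the results below show.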
Of course SSDs, and MLC SSDs above all, are not designed to make good file servers. Nevertheless, such a fundamentally random workload gives insight into the performance of these drives. For reasons of scale, the Intel SSD scores are given separately.
The first thing we notice is the dip in SuperTalent MX performance: although strong with few simultaneous commands, it collapses when more is asked of it, though it does remain ahead of the Samsung SpinPoint F1. The SuperTalent DX is stable whatever the number of commands. The OCZ and Silicon Power SSDs score horribly here whatever the command count; the unreliability of these models in writing really devalues them. The Samsung SSD is very comfortable and is unmatched by the VelociRaptor, even with the help of NCQ.
The Intel SSD is in another world, with at one point over 12,000 I/O per second, that is 16 times better than any other MLC SSD. Performances are nevertheless very variable a) between the first and second execution of the test and b) when the test is carried out after having used the SSD for all the other tests in the comparison. Nevertheless, performances are in any case very good.
Power consumption
All the SSDs tested here are completely silent and run at low temperatures. We therefore concentrated on measuring consumption at idle and while reading/writing with IOMeter. Here are the results obtained:
Compared to a 5400 rpm notebook drive, the gains are at best small, and we cannot really say an SSD significantly increases notebook battery life. It has to be said that at 2.3 W while reading/writing, the SpinPoint M5 160 GB is already very economical, and only the Samsung SSD does noticeably better.
Of course the SpinPoint M5 160 GB does have quite low performance levels and a comparison with desktop hard drives is clearly in favour of the SSDs. The absence of any noise is naturally appreciated far more than the low power consumption levels.
Conclusion
Although the advent of affordable SSD drives is a good thing, they are far from perfect. More than the wear on MLC chips and its consequences for life expectancy, and more than raw write speeds, it is the controllers that most need improvement before the advantages of flash memory can be fully exploited.
Of course wear on flash cells still needs to be taken into account but the different mechanisms integrated within SSD drives seem to be equal to the situation, at least on paper, over a fairly long period. It is however difficult to sort the wheat from the chaff when it comes to SSDs, especially as manufacturers are not particularly revealing when it comes to the methods used inside their SSDs: Intel can moreover be congratulated on the detail it gives. In all cases, and this is also true for classic hard drive users, don’t forget to back up!
Otherwise, SSDs are completely silent, in contrast to hard disk drives and are therefore an obvious choice for anyone who’s looking for discretion in their PC. This makes them ideal for, for example, a living room PC which is used to read videos stored in a NAS. They aren’t however particularly more economical in terms of energy than a notebook hard drive and there would be no point investing in one purely for power economy.
For anyone looking purely for performance, these drives excel as system disks running the most common applications, as the PC Mark Vantage scores show (unequalled read speeds and seek times). They improve the responsiveness of any system, particularly when launching applications. Compare, for example, 6 seconds to launch Photoshop CS3 on a VelociRaptor with 2 or 3 seconds on an SSD!
SSDs do however have some weaknesses on other tasks. SuperTalent SSDs give very low write speeds, while OCZ Core and Silicon Power SSDs are as cheap as they are unreliable on random writing, so much so that latency can be felt during use. If you add to this the false claims on capacity (64 GB against a real 60 GB for Silicon Power and Core V1 SSDs) and overinflated speeds given on spec sheets, these SSDs are the runt of this particular SSD litter.
And what is there to say about the Intel solution? A real treat on paper, the X25-M literally knocked us for six in the first tests! However, the drop in performance recorded when the SSD is subjected to varying workloads is quite worrying, even though Intel says this phenomenon is to be "expected". A performance dip may be acceptable as long as the starting point remains so far ahead of competing SSDs. The problem is that in certain, admittedly very specific, cases, performance falls below that of a 5400 rpm disk, particularly during file copying. It would be great news if Intel could find a way to retain high performance whatever the workload, even at the cost of some peak performance.
For now only Samsung really manages this. Whether in PC Mark Vantage, file copying or IOMeter, its performance does not disappoint, without, it's true, attaining the Intel X25-M at its best. Unfortunately there is a price for all this, with the Samsung SLC (or its exact OCZ copy) coming in at €800 for 64 GB. The icing on the cake is that it is also the most economical of all the SSDs tested.
There is then, no perfect solution and given current prices, SSDs will as yet only be used in fairly limited circumstances and by a limited number of users. Recent evolutions show that the price of SSDs will fall quickly but there is still some way to go in terms of performance exploitation, as Intel has shown with its incredible X25-M. The classic hard drive is not yet dead but it is on its way out!
Copyright © 1997-2015 BeHardware. All rights reserved.