OCZ's Vertex 2 Pro Preview: The Fastest MLC SSD We've Ever Tested
by Anand Lal Shimpi on December 31, 2009 12:00 AM EST - Posted in Storage
Enter the SandForce
OCZ actually announced its SandForce partnership in November. The companies first met over the summer, and after giggling at the controller maker’s name the two decided to work together.
Use the SandForce
Now this isn't strictly an OCZ thing, far from it. SandForce has inked deals with some pretty big players in the enterprise SSD market. The public ones are clear: A-DATA, OCZ and Unigen have all announced that they'll be building SandForce drives. I suspected that Seagate might be using SandForce as the basis for its Pulsar drives back when I was first briefed on those SSDs. I won't be able to confirm for sure until early next year, but based on some of the preliminary performance and reliability data, I'm guessing that SandForce is a much bigger player in the market than its small list of public partners would suggest.
SandForce isn't an SSD manufacturer; rather, it's a controller maker. SandForce produces two controllers: the SF-1200 and SF-1500. The SF-1200 is the client controller, while the SF-1500 is designed for the enterprise market. Both support MLC flash, while the SF-1500 also supports SLC. SandForce's claim to fame is that, thanks to its extremely low write amplification, MLC-based drives can be used in enterprise environments (more on this later).
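Write amplification is simply the ratio of what actually gets written to the NAND versus what the host asked to write, and it's the lever that decides whether MLC can survive an enterprise workload. As a rough, hypothetical illustration (none of these figures are SandForce's numbers), here's how the ratio feeds into an endurance estimate:

```python
# Back-of-the-envelope endurance math: how write amplification eats into
# MLC lifespan. All numbers here are hypothetical, not SandForce specs.

def lifetime_years(capacity_gb, pe_cycles, write_amp, host_gb_per_day):
    """Years until the rated program/erase cycles are used up,
    assuming perfectly even wear leveling."""
    total_nand_writes_gb = capacity_gb * pe_cycles      # total writes the flash can absorb
    nand_gb_per_day = host_gb_per_day * write_amp       # what actually lands on the NAND daily
    return total_nand_writes_gb / nand_gb_per_day / 365

# Hypothetical 100GB MLC drive, 5,000 P/E cycles, 50GB of host writes per day:
for wa in (10.0, 1.1):
    print(f"write amplification {wa:>4}: ~{lifetime_years(100, 5000, wa, 50):.1f} years")
```

Drop the amplification by an order of magnitude and the same flash lasts roughly ten times longer, which is the whole pitch behind putting MLC in the enterprise.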
Both the SF-1200 and SF-1500 use a Tensilica DC_570T CPU core. As SandForce is quick to point out, the CPU honestly doesn't matter - it's everything around it that determines the performance of the SSD. The same is true for Intel's SSD: Intel licenses the CPU core for the X25-M from a third party; it's everything else that makes the drive so impressive.
SandForce also exclusively develops the firmware for the controllers. There's a reference design that SandForce can supply, but it's up to its partners to buy Flash, lay out the PCBs and ultimately build and test the SSDs.
Page Mapping with a Twist
We talked about LBA mapping techniques in The SSD Relapse. LBAs (logical block addresses) are used by the OS to tell your HDD/SSD where data is located in a linear, easy-to-look-up fashion. The SSD is in charge of mapping those LBAs to locations in Flash. Block level mapping is the easiest to do and requires very little memory to track; it delivers great sequential performance but sucks hard at random access. Page level mapping is a lot more difficult and requires more memory, but delivers great sequential and random access performance.
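To make the difference concrete, here's a toy sketch of the two schemes (deliberately simplified; no shipping controller stores its tables this way). With page level mapping a random 4KB write lands in the next free page and updates one table entry; with block level mapping the same write drags the whole erase block along with it:

```python
# Toy illustration of block level vs. page level LBA mapping (simplified).

PAGE_SIZE = 4 * 1024          # 4KB flash page
PAGES_PER_BLOCK = 128         # 128 pages -> 512KB erase block

page_map = {}                 # page level: LBA -> physical page
block_map = {}                # block level: logical block -> physical block

def page_level_write(lba, free_page):
    """A random 4KB write touches exactly one page; the old copy just
    goes stale and gets cleaned up later by garbage collection."""
    page_map[lba] = free_page
    return 1                  # pages physically written

def block_level_write(lba, free_block):
    """A random 4KB write forces a read-modify-write of the entire
    512KB block the LBA falls into."""
    block_map[lba // PAGES_PER_BLOCK] = free_block
    return PAGES_PER_BLOCK    # 128x the flash writes for the same 4KB

if __name__ == "__main__":
    print("page level :", page_level_write(1000, free_page=42), "page written")
    print("block level:", block_level_write(1000, free_block=7), "pages written")
```

The price of the finer granularity is a much larger mapping table, which is where the DRAM discussion below comes in.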
Intel and Indilinx use page level mapping. Intel uses an external DRAM to cache page mapping tables and block history, while Indilinx uses it to do all of that plus cache user data.
SandForce's controller implements a page level mapping scheme, but forgoes the use of an external DRAM. SandForce believes it's not necessary because its controllers simply write less to the flash.
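Skipping the DRAM is more surprising than it sounds, because a full page level table is big. The arithmetic below is purely illustrative (the article doesn't disclose how large SandForce's tables are or where they're kept); it just shows why Intel and Indilinx bother with the external chip at all:

```python
# Rough size of a full page level mapping table for a hypothetical 100GB drive.
capacity_bytes = 100 * 10**9
page_size      = 4 * 1024        # 4KB mapping granularity
entry_size     = 4               # assume a 4-byte physical pointer per entry

entries    = capacity_bytes // page_size
table_size = entries * entry_size
print(f"{entries:,} entries -> ~{table_size / 2**20:.0f} MB of table")
# ~24.4 million entries, roughly 93MB - far too much for on-chip SRAM,
# hence the external DRAM on Intel and Indilinx designs.
```

If the controller writes (and therefore remaps) less data in the first place, the table churns less and more of the hot entries fit in whatever on-chip memory is available, which is presumably part of how SandForce gets away without the external chip.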
Comments
Shark321 - Monday, January 25, 2010 - link
Kingston has released a new SSD series (V+) with the Samsung controller. I hope AnandTech will review it soon. Other sites are not reliable, as they test only sequential reads/writes.
Bobchang - Wednesday, January 20, 2010 - link
Great article! It's awesome to have an SSD with new features, and I like the performance. But regarding your test, I don't get the same random read performance from IOMeter.
Can you let me know what version of IOMeter and configuration you used for the result? I never get more than around 6000 IOPS.
AnnonymousCoward - Wednesday, January 13, 2010 - link
Anand, your SSD benchmarking strategy has a big problem: there is no real-world-applicable comparison data. IOPS and PCMark are stupid. For video cards, do you look at IOPS or FLOPS, or do you look at what matters in the real world: framerate?
As I said in my post here (http://tinyurl.com/yljqxjg), you need to simply measure time. I think this list is an excellent starting point for what to measure to compare hard drives:
1. Boot time
2. Time to launch applications
_a) Firefox
_b) Google Earth
_c) Photoshop
3. Time to open huge files
_a) .doc
_b) .xls
_c) .pdf
_d) .psd
4. Game framerates
_a) minimum
_b) average
5. Time to copy files to & from the drive
_a) 3000 200kB files
_b) 200 4MB files
_c) 1 2GB file
6. Other application-specific tasks
What your current strategy lacks is the element of "significance"; is the performance difference between drives significant or insignificant? Does the SandForce cost twice as much as the others and launch applications just 0.2s faster? Let's say I currently don't own an SSD: I would sure like to know that an HDD takes 15s at some task, whereas the Vertex takes 7.1s, the Intel takes 7.0s, and the SF takes 6.9! Then my purchase decision would be entirely based on price! The current benchmarks leave me in the dark regarding this.
rifleman2 - Thursday, January 14, 2010 - link
I think the point made is a good one as an additional data point for the buying decision. Keep all the great benchmarking data in the article and just add a couple of time measurements so people can get a feel for how the benchmark numbers translate into time spent waiting in the real world, which is what everyone really wants to know at the end of the day.
Also, Anand, did you fill the drive to its full capacity with already-compressed data? If not, what happens to performance and reliability when the drive is filled with already-compressed data? From your report it doesn't appear to have enough spare flash capacity to handle a worst-case 1:1 ratio and still deliver decent performance or an acceptable endurance lifetime.
AnnonymousCoward - Friday, January 15, 2010 - link
Real-world top-level data should be the primary focus and not just "an additional data point". This old article could not be a better example:
http://tinyurl.com/yamfwmg
In IOPS, RAID0 was 20-38% faster! Then the loading *time* comparison had RAID0 giving equal and slightly worse performance! Anand concluded, "Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance."
AnnonymousCoward - Friday, January 15, 2010 - link
Icing on the cake is this latest Vertex 2 drive, where IOPS don't equal bandwidth. It doesn't make sense not to measure time. Otherwise what you get is results that don't reflect real usage, and no grasp of how significant the differences are.
jabberwolf - Friday, August 27, 2010 - link
The better way to test, rather than hopping on your Mac and thinking that's the end-all be-all of the world, is to throw this drive into a server, VMware or XenServer... and create multiple VD sessions. See how many you can boot up at the same time and run heavy loads.
The boot-ups will take the most IOPS.
Sorry, but IOPS do matter so very much in the business world.
For standalone drives, your reads/writes will be what you are looking for.
Wwhat - Wednesday, January 6, 2010 - link
This is all great, finally a company that realizes the current SSDs are too cheap and have too much capacity and that people have too much money. Oh wait..
Wwhat - Wednesday, January 6, 2010 - link
Double post was caused by AnandTech saying something had gone wrong, prompting me to retry.