The Drive

The Black2 consists of a 120GB SSD and a 1TB dual-platter 5400rpm hard drive. It's not a hybrid drive (or SSHD) like the Momentus XT because there's no caching involved: the SSD and hard drive appear as separate partitions, giving the end user the power to decide what data goes to the SSD and what doesn't. WD calls the Black2 a dual-drive, which is a logical name because the product is fundamentally two drives in one.

(Sorry for the poor quality photos -- I no longer have access to the DSLR I used before)

WD Black2 Specifications
Interface: SATA 6Gbps
Sequential Read: 350MB/s
Sequential Write: 140MB/s
Power Consumption: 0.9W (idle/standby) / 1.9W (read/write)
Noise: 20dBA (idle) / 21dBA (seek)
Warranty: 5 years
Price: $299

Included in the retail package are a USB 3.0 to SATA adapter and Acronis True Image WD Edition (via download) for easy data migration. To my surprise there is no driver disc, but there is a small USB drive that, when plugged in, opens WD's download page (i.e. the actual drivers still have to be downloaded).

Internally the drive is rather unusual. The hard drive itself is the same as WD's Blue Slim model (a 7mm dual-platter 5400rpm drive), but in addition to the hard drive there are two PCBs. The bigger PCB carries the SSD components (controller, NAND, DRAM) and the smaller one is home to Marvell's bridge chip, which presents the SSD and hard drive to the host as a single device sharing one partition table.

WD went with a relatively rare JMicron JMF667H controller in the Black2. It's a 4-channel design built around an ARM9 core, but as usual JMicron doesn't provide much in the way of public details.

JMicron used to be a fairly big player in the consumer SSD space back in ~2009, but the lack of a SATA 6Gbps controller pushed SSD OEMs to other manufacturers. The JMF667H isn't JMicron's first SATA 6Gbps controller, although it seems that all the members of the JMF66x family are mostly the same with a few tweaks. I've seen JMF66x controllers used in some Asian brand SSDs (e.g. the Transcend SSD740), but the biggest demand for the JMF66x has been on the industrial SSD side.

As for the NAND, WD has only disclosed that the NAND is 20nm MLC, suggesting that we're dealing with IMFT NAND (Micron or Intel). I tried googling the part numbers but it appears that the NAND is custom packaged as there was no data to be found. However, I'm guessing we're dealing with 64Gb dies, meaning eight dies (64GB) per package. There's also a 128MB DDR3-1600 chip from Nanya, which acts as a cache for the JMF667H controller.
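
If my guess about the die density is right, the raw NAND and spare area work out roughly as follows. This is only a back-of-the-envelope sketch: the sixteen-die total (two eight-die packages) and the resulting spare-area figure are my assumptions, not anything WD has disclosed.

    # Rough sanity check of the assumed NAND configuration (not confirmed by WD)
    die_gbit = 64                    # assumed die density in gigabits
    dies = 16                        # assumed total: two packages of eight dies
    raw_gib = die_gbit * dies / 8    # 128 GiB of raw NAND
    raw_gb = raw_gib * 2**30 / 1e9   # ~137.4 GB in decimal units
    user_gb = 120                    # advertised user capacity
    spare = 1 - user_gb / raw_gb     # ~12.7% of the raw NAND left as spare area
    print(raw_gib, round(raw_gb, 1), round(spare * 100, 1))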

Setting Up the Black2

When the Black2 is first connected, it appears as a 120GB drive; gaining access to the 1TB hard drive portion requires installing WD's driver. The reason a driver is needed comes down to the limits of the SATA protocol. Presenting two drives behind a single SATA port would require a port multiplier, which isn't supported by all SATA controllers because it's an optional part of the specification. Most modern SATA controllers do support port multipliers, but older Intel and NVIDIA chipsets, for instance, don't. It's always safer not to impose any specific hardware requirements, especially as most people have no idea what chipset is in their system.

Once the drivers have been installed, the Black2 shows up as a single drive with two partitions. The way this works is pretty simple. Operating systems use Logical Block Addresses (LBAs) in their read/write commands, which keep the data seen by the OS and the data on the drive in sync. As OSes have been designed with hard drives in mind, they use linear addressing: the LBAs start from 0 (i.e. the outermost track of the hard drive) and increase linearly as more data is written. Partitions are simply LBA ranges, and as some of you might remember (and may still do), splitting a hard drive into two partitions was a way to increase performance because the first partition got the lowest LBAs with the highest performance.

In the Black2, the lowest LBAs (i.e. the first 120GB worth) are assigned to the SSD, whereas the rest belong to the hard drive. The Marvell chip keeps track of the LBAs and routes each command to the SSD or the hard drive accordingly.
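
The routing can be illustrated with a few lines of code. The sketch below only shows the concept, not WD's actual firmware logic; the 512-byte logical sector size and the exact placement of the boundary are my assumptions.

    # Conceptual sketch of LBA routing in a dual-drive (not WD's firmware)
    SECTOR_BYTES = 512                          # assumed logical sector size
    SSD_BYTES = 120 * 10**9                     # 120GB SSD portion
    SSD_LBA_LIMIT = SSD_BYTES // SECTOR_BYTES   # first LBA that falls on the HDD

    def route(lba: int) -> str:
        """Return which device would service a given LBA."""
        return "SSD" if lba < SSD_LBA_LIMIT else "HDD"

    print(route(0))                  # SSD: the lowest LBAs map to the SSD partition
    print(route(SSD_LBA_LIMIT))      # HDD: everything past the 120GB boundary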

Out of interest, I also tried creating a single 1120GB volume to see how the drive reacts. It certainly works and I was able to read and write data normally, but the issue is that you lose control over what goes to the SSD and what doesn't. As the lowest LBAs are assigned to the SSD, the SSD is filled first, and once 120GB has been written the drive moves on to the hard drive, meaning that you are pretty much left with the hard drive for anything write related. If you delete something that resides on the SSD, the next writes will go to those freed SSD LBAs, so in theory you could use the Black2 as a single-volume drive, but it wouldn't be efficient in any way.

Unofficial Mac Support

The drivers WD provides are Windows-only, but there is a way, at least in theory, to use the drive in OS X. You need access to Windows for this: set up the partitions in Windows and then use OS X's Disk Utility to reformat them from NTFS to HFS+.

Unfortunately I don't have a Mac with a USB 3.0 port to thoroughly test the Black2 in OS X, so this is merely a heads up that it may work. I was able to read and write to the drive normally, but without a faster interface I can't verify that writes are indeed going to the SSD when they should be. In theory they are, but it's possible that the drivers include more than just an automated partition setup.

Test System

CPU: Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard: ASRock Z68 Pro3
Chipset: Intel Z68
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card: Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers: NVIDIA GeForce 332.21 WHQL
Desktop Resolution: 1920 x 1080
OS: Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

Comments

  • apoe - Friday, February 7, 2014 - link

    People think US internet speeds are slow? When I lived in China, 50 kb/s was considered really fast. In the US, with a standard ISP, I can download Steam games at 6 MB / s, 120 times faster. According to Forbes, the US is in the top 10 for fastest internet speeds, but nothing tops South Korea. Helpful that plenty of larger datacenters are located here too.
  • xKrNMBoYx - Tuesday, February 4, 2014 - link

    Try downloading a game or program that is 20-50+ GB regularly. That eats bandwidth and data caps. Without an optical drive you're left having to download everything or do a multi-step transfer from disc to computer to another computer. There are external optical drives, but that is another story. Optical drives won't become obsolete until ISPs invest more in speed and reliability at lower prices. Then there's the group of people that wouldn't use the internet either.
  • JMcGrath - Sunday, February 9, 2014 - link

    @Morawka -

    Forget Blu-ray, it will be left far behind in the (fairly) near future. I won't go as far as saying that BD is obsolete or going to be any time soon; BD has far too much influence in the current movie industry, even with 4K already hitting the market a lot faster than people ever thought possible.

    However, as other people have stated, BD is simply not a feasible solution going forward. It has served its purpose for many years now, but just like CD and DVD, it will be replaced by larger and faster storage mediums.

    I think it's too hard to say what will become the dominant technology in the near future, and hopefully we won't have to go through another BD vs HD-DVD type war again(!) but there are a number of different technologies in the works, many of which have already shown working prototypes to replace the aging BD tech.

    Most of these technologies have gone with either smaller track widths and new laser technologies, additional layers, or a combination of the two. However, there is one new technology that sounds very promising, and one I believe (and hope) will become the adopted standard: holographic discs!

    As the focus shifts to ultra high resolutions, "retina" type displays, deeper color depths and shading, and higher true refresh rates (4K/60 or 4K/120 for example), new technologies will be needed. Most internet connections - even the fastest available in most areas - won't support these extreme bitrates, and BD simply can't keep up either.

    I have seen demos of everything from true 24-bit color panels, 60Hz and 120Hz 4K via HDMI 2.0 or DP 1.2+, to multi-panel / multi-head displays de-multiplexed (demuxed) showing true 23:9 content at 11280x4320 @ 120Hz using multiple DP/HDMI connections.

    When talking about just the current standard 4K/30 on an RGB 4:4:4, 12-bit panel, you're talking about:

    3840 * 2160 * 36 * 30 = 8,957,952,000 bps / 8 = 1,119,744,000 bytes/s...
    = 1.04GB/s, that's 3.67TB/hour (uncompressed, true 2160P)!!

    That's 62.6GB / minute or just over 7.33TB for a 2 hour long movie, and this is excluding audio!!

    Now, add in new technologies (coming very soon) like 24-bit color, 60FPS, and the *real* widescreen aspect and you're looking at closer to 367.6GB/minute and 43TB for a 2 hour movie!

    I haven't kept a real close eye on the holographic disc technology lately, I know it was originally created by GE (who actually had a working, but smaller 4TB/Layer, proto ~3 years ago!) The discs themselves look identical to a CD/DVD/BD, but rather than using a single laser on 1 linear track, the drive uses multiple lasers at different angles. The possibilities are really endless considering the technology itself is no different than current media, just add 2 more lasers @ 45 degrees and you increase density by 300%, add 2 more @ 30 degrees and you've increased it 500%, 2 more lasers @ 15 degrees... you get the idea.

    The last I remember reading about the technology was that they had the working 4TB/Layer model that I mentioned, but were also working on using additional lasers and a finer track which would allow them as much as 40-80TB in the future!!

    BD won the last round because Sony had such a large influence on the market, especially with the PS3 hitting the market at the same time as BD/HD-DVD players and HDTV's becoming mainstream. It remains to be seen what will be the driving factor this time around, but with a company as large as GE behind the wheel and the demand in large data centers for backup I think holographic discs stand a good chance at winning the next round.

    For everyone out there that works in a large DC using automated tape backups or cloud based backups, imagine being able to not only store 80/160/320TB on a single disc the size of a CD but being able to do it in less than 2 hours!! Considering you could write 80TB in 2 hours and assuming they release PC writers @ 2X, 4X, 8X, etc you could backup an entire enterprise data center in less than an hour, throw it in a small fire/water proof safe, and you're done!
  • patrickjchase - Thursday, January 30, 2014 - link

    This is tangential, but...

    I have similar backup needs, and faced similar issues with OD unreliability (and also with HDD failures for that matter). I ended up developing my own archiver that stripes backup files across multiple drives/disks (optical or HDD). It calculates and embeds strong block-level checksums, and provides RAID6-style Reed-Solomon-code based redundancy within each block-sized stripe. In particular it can tolerate up to 2 block-checksum failures in each stripe (for example, if I stripe across 7 Blu-Ray disks I can tolerate read errors from any 2 within any given block-sized stripe), which means that it can tolerate a *lot* of optical disk read errors. I intentionally degraded (read: scratched up) a backup set such that every disk yielded a very large number of read errors, but the backup payload as a whole was recoverable.

    With that in mind, I find that optical (Blu-Ray) media remain very useful for backups due to their superior shock/vibration/environmental tolerance as compared to hard drives. If I were using them without my archiver I'd be pretty worried, though :-).
  • Navvie - Friday, January 31, 2014 - link

    I'd be very, very interested in seeing this software!
  • Solandri - Friday, January 31, 2014 - link

    We did that on Usenet in the 1990s. When posting a big binary (e.g. a TV show episode) you had to break it up into multiple parts to fit within the Usenet post length limit. So you might break the TV show into 50 compressed archive files (usually RAR). The problem was Usenet would frequently fail to propagate a file. So even though you posted 50, many sites might only get 49 or 47. The solution was to add parity files. So you'd post the original 50 archive files (RAR) and 5 parity files (PAR).

    Any 50 of those 55 files would allow you to recreate the original video file. You could vary the number of parity files, but about 10% was typical.

    When I was backing up stuff to DVD, I found and downloaded newer versions of the old parity programs. I broke up my backups into enough archive files and parity files that I could lose large portions of several disks, or even an entire disk, and still recover my backup. Your block-level parity/checksum scheme sounds like it would be more robust and transparent, but I only had to use freely downloadable tools.

    http://en.wikipedia.org/wiki/Parity_file
  • Navvie - Monday, February 3, 2014 - link

    When I read patrickjchase's comment my first thought was "that's exactly like usenet."
  • peter64 - Friday, January 31, 2014 - link

    Yes, thank you Dell for making devices that are easily user upgradeable. I hate all these other notebooks being completely sealed. You can't even replace the battery.
  • peter64 - Friday, January 31, 2014 - link

    I bet if Dell hadn't put that removable optical drive in there, your notebook wouldn't have a 2nd hard drive at all.

    Thanks Dell for giving people options and post-purchase upgradeability in these times of sealed, non-user-upgradeable devices.
  • Johnmcl7 - Thursday, January 30, 2014 - link

    I think it would have been the ideal solution for a single drive bay perhaps last year, but it's too late and too expensive now that the Crucial M500 960GB is under £330. While that's still a bit more expensive than this drive, it's much neater (one drive instead of two) and I assume power consumption and heat would be better as well. That's the option I'd go for on a machine now. If they'd managed a 2TB drive for this price it would be a lot more attractive, as that would put it beyond what's affordable with SSDs at the moment. I realise there are technical difficulties with 2TB 2.5in drives (I don't know if there are any standard drives available in this capacity) but they have to move forward at some point.
