We haven't even had time to cover everything we saw at CES last week, but there are already more product announcements coming in. Fusion-io launched their new ioScale product line at the Open Compute Summit; the Open Compute Project was originally started by a few Facebook engineers looking for the most efficient and economical way to scale Facebook's computing infrastructure. Fusion-io's aim with the ioScale is to make building an all-flash datacenter more practical, with data density and pricing as the key benefits.

Before we look more closely at the ioScale, let's talk briefly about its target market: hyperscale companies. The term hyperscale may not be familiar to everyone, but in essence it means a computing infrastructure that is highly scalable. Good examples of hyperscale companies are Facebook and Amazon, both of which must constantly expand their infrastructure to handle ever-increasing amounts of data. Not all hyperscale companies are as big as Facebook or Amazon, though; there are plenty of smaller companies that may need just as much scalability.

Since hyperscale computing is all about efficiency, it's also common to use commodity designs instead of pricier blade systems, and to skip expensive RAID arrays, specialized networking gear, and redundant power supplies. The idea is that high availability and scalability should be the result of smart software, not of expensive and, even worse, complex hardware. That keeps the cost of infrastructure investment and management as low as possible, which is crucial for a cloud service when a big portion of the income is often generated through ads or low-cost services. Software plays a huge role in hyperscale computing, and to help developers improve it, Fusion-io also provides an SDK called ioMemory that assists in optimizing software for flash memory based systems. For example, the SDK allows SSDs to be treated as DRAM, which cuts costs even further since less DRAM is needed.
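Fusion-io didn't share details of the ioMemory interface with us, but the general idea of treating flash as memory can be illustrated with an ordinary memory-mapped file over a block device. The sketch below is purely illustrative; the device path is hypothetical and this is not the actual SDK API:

```c
/* Illustrative only: map a flash block device into the process's address
 * space so it can be read and written like ordinary memory. The device
 * node below is hypothetical; the real ioMemory SDK API is not public. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 1UL << 30;            /* map 1GB of the device */
    int fd = open("/dev/flashdev", O_RDWR);  /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }

    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* From here on, the application treats 'mem' as if it were DRAM. */
    memcpy(mem, "hello, flash", 13);
    msync(mem, len, MS_SYNC);                /* flush dirty pages to flash */

    munmap(mem, len);
    close(fd);
    return 0;
}
```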

The ioScale comes in capacities from 400GB up to 3.2TB in a single half-length PCIe slot, making it one of the highest-density drives commercially available. Compared to traditional 2.5" SSDs, the ioScale provides significant space savings as you would need several 2.5" SSDs to build a 3.2TB array. The ioScale doesn't need RAID for parity as there is built-in redundancy similar to SandForce's RAISE: some of the NAND dies are reserved for parity data, so the data can be rebuilt even if one or more NAND dies fail.
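Fusion-io didn't detail its exact redundancy scheme, but the principle behind RAISE-style die-level parity is the same XOR construction RAID 5 uses: reserve one die's worth of parity, and any single failed die can be rebuilt from the survivors. A minimal sketch of the idea (illustrative only, not Fusion-io's actual implementation):

```c
/* XOR parity across NAND dies, RAID-5 style: the parity page is the XOR
 * of all data pages, so any one failed die can be reconstructed. This is
 * only a demonstration of the principle, not Fusion-io's implementation. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_DIES  4
#define PAGE_SIZE 8

/* Parity page = XOR of every data page. */
static void compute_parity(uint8_t data[NUM_DIES][PAGE_SIZE],
                           uint8_t parity[PAGE_SIZE])
{
    memset(parity, 0, PAGE_SIZE);
    for (int d = 0; d < NUM_DIES; d++)
        for (int i = 0; i < PAGE_SIZE; i++)
            parity[i] ^= data[d][i];
}

/* Rebuild a failed die by XORing the parity with the surviving dies. */
static void rebuild_die(uint8_t data[NUM_DIES][PAGE_SIZE],
                        uint8_t parity[PAGE_SIZE],
                        int failed, uint8_t out[PAGE_SIZE])
{
    memcpy(out, parity, PAGE_SIZE);
    for (int d = 0; d < NUM_DIES; d++)
        if (d != failed)
            for (int i = 0; i < PAGE_SIZE; i++)
                out[i] ^= data[d][i];
}

int main(void)
{
    uint8_t data[NUM_DIES][PAGE_SIZE] = {
        "die0...", "die1...", "die2...", "die3..."
    };
    uint8_t parity[PAGE_SIZE], recovered[PAGE_SIZE];

    compute_parity(data, parity);
    rebuild_die(data, parity, 2, recovered);  /* pretend die 2 failed */
    printf("recovered: %s\n", (char *)recovered);
    return 0;
}
```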

The ioScale is all MLC NAND based, though Fusion-io couldn't specify the process node or the manufacturer because they source their NAND from multiple manufacturers (which makes sense given the volumes Fusion-io requires). Different grades of MLC are also used, but Fusion-io promises that all of their SSDs will meet the same specifications regardless of the underlying components.

The same applies to the controller: Fusion-io uses multiple controller vendors, so they couldn't specify the exact controller used in the ioScale either. One reason is the extremely short design intervals, as the market and technology are evolving very quickly. Most of Fusion-io's drives are sold to huge data companies or governments, who are obviously very deeply involved in the design of the drives and also do their own validation and testing, so it makes sense to provide a variety of slightly different drives. In the past I've seen at least Xilinx FPGAs used in Fusion-io's products, so it's quite likely that the company stuck with something similar for the ioScale.

What's rather surprising is that the ioScale is a single-controller design, even at up to 3.2TB. Usually such high-capacity drives take a RAID approach, where multiple controllers are put behind a RAID controller to make the drive appear as a single volume. That approach has its benefits too, but a single controller often means lower latency (no overhead added by the RAID controller), lower cost (fewer components needed), and a smaller footprint.

The ioScale has previously been available only to clients buying in big volumes (think tens of thousands of units), but starting today it will be available in minimum order quantities of 100 units. Pricing starts at $3.89 per GB, which puts the 400GB model at $1,556. For Open Compute platforms, Fusion-io is offering an immediate 30% discount, which brings the ioScale down to just $2.72/GB. For comparison, a 400GB Intel SSD 910 currently retails for $2,134, so the ioScale is rather competitive on price, which is one of Fusion-io's main goals. Volume discounts obviously play a major role, so the quoted prices are just a starting point.
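If you want to play with the numbers yourself, the arithmetic is trivial; here's a throwaway snippet with the figures quoted above (our own math, not Fusion-io's pricing tool):

```c
/* Back-of-the-envelope check of the pricing quoted above. */
#include <stdio.h>

int main(void)
{
    const double price_per_gb = 3.89;   /* launch price, $/GB */
    const double capacity_gb  = 400.0;  /* smallest ioScale model */
    const double ocp_discount = 0.30;   /* 30% Open Compute discount */

    printf("400GB list price: $%.0f\n", price_per_gb * capacity_gb);           /* $1556 */
    printf("OCP price per GB: $%.2f\n", price_per_gb * (1.0 - ocp_discount));  /* $2.72 */
    return 0;
}
```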

Comments

  • IanCutress - Thursday, January 17, 2013 - link

    Is the speed limitation going to be the controller or the PCIe bus in this?
    Does an FPGA-type controller handle random data requests better than a normal SSD controller?
    Would the 3.2TB model be a dual-sided PCB due to the single FPGA and NAND sizes? I don't see anything for additional PCBs, à la OCZ RevoDrive.

    I spend too much time in the consumer world :) I have a friend who works as a VE and he recently got to test a few 80-core systems (8P x 10-core Intel w/HT). Totally envious.
  • Kristian Vättö - Thursday, January 17, 2013 - link

    Fusion-io didn't release any performance specs, so honestly I don't know for sure. We should at least be very close to the 4GB/s barrier that PCIe 2.0 x8 provides.

    Custom silicon is usually better because it's designed solely for a specific purpose, whereas an FPGA is more like an all-purpose chip (it's obviously programmed to behave like a self-fabbed chip, though).

    The 3.2TB (at least) is a dual-PCB design but still a single-controller. David Flynn, the CEO of FIO, showed the card live here: http://new.livestream.com/ocp/winter2013/videos/95... (at around 18 minutes).
  • wolrah - Thursday, January 17, 2013 - link

    "A custom silicon is usually better because it's solely designed for a specific purpose, whereas an FPGA is more like an all-purpose chip (it's obviously programmed to behave like a self-fabbed chip, though)."

    Whether custom silicon is better depends on why the FPGA is being used. In some devices it's used for its design purpose: the field programmability means that a firmware update can bring new features "in hardware" by reconfiguring the FPGA.

    Others just use it because the complexity and cost of making custom silicon for a low-volume device can outweigh the cost of simply including an FPGA compatible with the ones used for development.

    If we assume the custom silicon is perfect and won't need to be updated, it'll generally be cheaper in sufficient volume and will certainly clock faster, but it's nice to be able to fix bugs or add features in the field.
  • liquan45688 - Thursday, March 21, 2013 - link

    I don't see any DRAM on the product. Is it possible to reach such high data throughput without any cache?
  • JPForums - Thursday, January 17, 2013 - link

    Is the speed limitation going to be the controller or the PCIe bus in this?
    Does an FPGA-type controller handle random data requests better than a normal SSD controller?

    Hard to say. It really depends on the specific FPGA and the controller design implemented therein. Assuming Fusion-io are experts capable of both selecting and fully exploiting an appropriate FPGA, I'd lean towards the PCIe bus. However, the "budget" nature of this card may dictate that the FPGA used is less capable. That said, I still think they'd be close given their track record.

    Does an FPGA-type controller handle random data requests better than a normal SSD controller?

    Yes and no. A normal SSD controller is an ASIC and therefore more specifically purposed. Given the same architecture, it will generally have lower latency, die area, and power consumption than an FPGA. An FPGA is more general purpose, which gives it more flexibility. Most current SSD controllers use 8 or 10 channels in an attempt to extract more speed from the SSD. However, once all channels are populated, adding more flash chips will increase capacity but not performance. To increase performance from a controller perspective, you must either create a new controller design with more channels or use multiple controllers in parallel (RAID). This is where the FPGA's flexibility allows it to surpass standard controllers. With an FPGA, the same chip may be reprogrammed to utilize as many channels as makes sense for a given capacity; the limitation then becomes how many pins are available. Also, an FPGA can be reprogrammed later in the design cycle to implement helpful features and relieve bottlenecks, whereas it is far rarer to respin an ASIC design to address bottlenecks; those would normally wait for the next generation.

    An ASIC running the same architecture would definitely perform better than its FPGA counterpart; however, the expense of fabricating a chip for each capacity you want to offer is prohibitive (especially if you offer many capacities). An FPGA also makes more sense for low-quantity runs. While the cost per chip is lower for an ASIC (mostly due to smaller size), the upfront design and initial fabrication costs are much higher than for an FPGA. Thus, you have to ship quite a few chips to make up the cost of ASIC design and fabrication.
  • blanarahul - Thursday, January 17, 2013 - link

    It will be very interesting to see how this compares to the Micron P320h.
    In my opinion the P320h should fare better because Micron is making all the components themselves.
  • Kevin G - Thursday, January 17, 2013 - link

    I'd love to see that comparison as well. Though Micron makes the controller and NAND, that is no guarantee that it'll perform better.
  • JPForums - Thursday, January 17, 2013 - link

    In my opinion the P320h should fare better because Micron is making all the components themselves.

    I wouldn't count Fusion-io out just yet. Micron definitely has an advantage in the cost and quality of flash chips given they essentially get first pick. However, if implemented properly, Fusion-io's FPGA-based single-chip controller will be able to extract more parallelism with less latency. This is potentially a far greater advantage. I can't wait for a review to see if they can pull it off.
  • blanarahul - Thursday, January 17, 2013 - link

    Another one: how does the ioScale compare to Fusion-io's ioDrive2?
  • Guspaz - Thursday, January 17, 2013 - link

    "Compared to traditional 2.5" SSDs, the ioScale provides significant space savings as you would need several 2.5" SSDs to build a 3.2TB array."

    2.5" 9.5mm SSDs come up to 2TB, and this PCIe card is definitely bigger, on a volumetric space consumed basis, than two 2.5" drives.
