Synology DS1812+ 8-bay SMB / SOHO NAS Review
by Ganesh T S on June 13, 2013 4:00 PM EST - Posted in
- NAS
- Storage
- Synology
- Enterprise
Testbed Setup and Testing Methodology
The Synology DS1812+ was evaluated on the SMB / SOHO NAS testbed we built last year. Performance evaluation is done under both single and multiple client scenarios. In all cases, the two network ports are teamed with 802.3ad dynamic link aggregation. Our rackmount NAS reviews typically use SSDs, but for desktop form factor units (typically based on ARM / PowerPC SoCs or the Atom series) we use hard drives. Even though our review unit came bundled with 1 TB Seagate drives, we chose to go with the Western Digital RE (WD4000FYYZ) drives that have been used in our other NAS reviews. This allows us to keep benchmark figures consistent across different NAS units.
| AnandTech NAS Testbed Configuration | |
| --- | --- |
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPU | 2 x Intel Xeon E5-2630L |
| Coolers | 2 x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Tertiary Drive | OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND) |
| Network Cards | 6 x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Other Drives | 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS) |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
| Network Switch | Netgear ProSafe GSM7352S-200 |
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the RevoDrive Hybrid
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
- Thanks to Western Digital for the eight WD RE hard drives (WD4000FYYZ) to use in the NAS under test.
In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on the CIFS share in the NAS. iSCSI support evaluation was also done in a similar manner with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. For evaluation of multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how the performance changed with the number of clients / access pattern. Without further digression, let us move on to the performance numbers.
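The multi-client runs boil down to summing per-client IOMeter throughput at each client-count step to produce a total-throughput-vs-clients curve. A minimal sketch of that aggregation, using purely illustrative numbers (not measured results from this review):

```python
# Sketch of aggregating per-client IOMeter readings into the
# "total throughput vs. number of clients" curves used in the
# multi-client NAS tests. All figures below are illustrative.

def aggregate_throughput(per_client_mbps):
    """Sum per-client throughput (MBps) for each client-count step.

    per_client_mbps: list of lists; entry i holds the individual
    client readings from the run with i+1 simultaneous clients.
    """
    return [round(sum(run), 1) for run in per_client_mbps]

# Hypothetical sequential-read readings for 1, 2 and 4 client runs.
runs = [
    [110.0],                      # 1 client
    [60.0, 58.0],                 # 2 clients
    [30.0, 29.0, 31.0, 28.0],     # 4 clients
]
print(aggregate_throughput(runs))  # totals approach the teamed-link ceiling
```

In practice the interesting part of the curve is where the total flattens out: that plateau is the network or NAS bottleneck.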
93 Comments
SirGCal - Friday, June 14, 2013 - link
UPDATE: After looking carefully over these screenshots, I think their review might be SERIOUSLY lacking... I see a RAID 6 option in the setup for the box, but it's greyed out. Probably because they didn't have enough drives in it at the time is my guess; you need 4-5 drives minimum to start one. But with this many drives, even testing a RAID 5 is honestly a bit stupid. It should have been tested in RAID 6, and in that situation it might actually be a more attractive option if it is capable and performs. Then again, RAID 5 generally is faster than RAID 6 due to the added calculations for the extra parity... and its RAID 5 performance was pretty weak unless I'm reading the numbers wrong. That is, if RAID 6 is actually activatable within this device and not just an option in their software that is disabled on this device altogether. But I would have thought this review would have tested that mode, since that is what an 8-drive setup should have been set up for.
ganeshts - Friday, June 14, 2013 - link
The benchmarks were done with all 8 bays filled with WD RE drives in RAID 5. The screenshots show that we can have disk groups. So, for example, you could allocate 4 disks to one disk group and run a RAID 5 volume on it. Then the other 4 disks could be in another group, and you could run a RAID 6 volume in that group.
What is the problem with performance that you are seeing? These Atom-based NAS units basically saturate the network link (accounting for overheads). Remember, two teamed links is 2 Gbps in this case, and that translates to a maximum of 250 MBps. Accounting for overhead, I see units saturate between 210 - 230 MBps, and I have never had any unit go above that unless I am teaming 4 ports or more (as you can see in our QNAP TS-EC1279U-RP review).
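The 2 Gbps / 250 MBps figure above is simple unit conversion; the ~10-15% protocol overhead (Ethernet/IP/TCP/SMB framing) is what brings the observed ceiling down to the 210 - 230 MBps range. A quick sketch of that arithmetic, with the overhead fraction as an assumed round number rather than a measured value:

```python
# Back-of-the-envelope throughput ceiling for an 802.3ad team of
# gigabit links. The 10% overhead figure is an assumption standing
# in for protocol framing losses, not a measured constant.

def teamed_link_ceiling_mbps(links, gbps_per_link=1.0, overhead=0.10):
    raw_mbps = links * gbps_per_link * 1000 / 8  # Gbps -> MBps
    return raw_mbps * (1 - overhead)

print(teamed_link_ceiling_mbps(2, overhead=0.0))  # raw ceiling: 250.0 MBps
print(teamed_link_ceiling_mbps(2))                # ~225 MBps after overhead
```

With 10% overhead the 2-port team lands at 225 MBps, right in the middle of the 210 - 230 MBps band reported above.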
I will take your feedback about RAID-6 evaluation into consideration in the next round of benchmarks.
Jeff7181 - Monday, June 17, 2013 - link
How is single client, 1.5 MB/s throughput at about 100 ms latency "stellar"? That sounds absolutely abysmal to me. I'm curious to know how you set up IOMeter... I'd like to repeat the test on my own box and see how it fares.
mitchdbx - Saturday, June 15, 2013 - link
There comes a time in your life when you just want things to work without the hassle of them breaking every time you turn around. I OWN the 5-bay unit (for over a year now) and can say that the UX is wonderful on these. You can configure them to let you know when something goes wrong (send email, beep, send SMS, etc.) so you can fix the issue. Please look at the product before you conclude that they are only "dumb" boxes. You can run Plex and many other media servers, in addition to DNS, DHCP, a web server with PHP and various CMS installs, photo management, surveillance, etc.
On another note, an inexperienced individual commented that an issue will arise when a drive fails and the array must rebuild. If you are using quality drives and constantly spinning the drives, the chance of a two-drive failure is very low. As anyone with years of experience with computers knows, keep the drives spinning and things will be fine; it is when you shut down and start up that issues come into play.
mitchdbx - Saturday, June 15, 2013 - link
More FYI about the RAID levels: http://forum.synology.com/wiki/index.php/What_is_S...
Micke O - Monday, June 17, 2013 - link
Synology aren't using some "nonstandard RAID" with SHR. They are using mdadm. This is how to restore an array on a standard PC using Linux if your DiskStation were to fail:
http://www.synology.com/support/faq_show.php?lang=...
I'd say that's even better than using some H/W RAID controller. Good luck replacing one of those with anything other than an identical controller with the very same firmware, etc.
Insomniator - Thursday, June 13, 2013 - link
Wow, great timing! Been looking for a NAS with huge storage capabilities to transfer data offsite. Haven't seen many around... the Buffalo TeraStation looks good, but I haven't seen reviews for those or any other modern NAS systems. Thanks for the review!
SirGCal - Thursday, June 13, 2013 - link
Did I miss it, or does it not support RAID 6? RAID 5, ESPECIALLY with large drives, is just asking for failure. I personally have one 8-drive array and am building my 2nd now; the first with 2TB drives, the new one with 4TB drives. Both are RAID 6. The old one is 12TB, the new one will be 24TB. Yeah, you lose 2 drives of usable space, but that creates 3-drive failure protection. Or basically, when a drive fails and you're rebuilding, you have protection from another drive failing. Cause THAT is what will happen... But I didn't see anything in the whole thing about RAID 6 at all. I would NEVER build an 8-drive system with RAID 5, especially with consumer-grade hardware... Without RAID 6, it's just not worth it for a large array...
Gigaplex - Thursday, June 13, 2013 - link
No, it only creates 2-drive failure protection. Lose 3 drives in RAID 6, and you're toast.
SirGCal - Friday, June 14, 2013 - link
3-drive failure, as it takes 3 to kill the array. Point is, you can be repairing one, and if another one fails, you're not dead yet... as you would be with RAID 5...
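The disagreement in this thread is just the standard RAID arithmetic: RAID 5 keeps one drive's worth of parity and survives one failure; RAID 6 keeps two and survives two (a third concurrent failure loses the array, which is the distinction Gigaplex is drawing). A minimal sketch of the numbers for an 8 x 4 TB array like the one under discussion:

```python
# Usable capacity and failure tolerance for the RAID levels argued
# over above, using the standard definitions: RAID 5 dedicates one
# drive's worth of space to parity, RAID 6 dedicates two.

def raid_summary(level, drives, tb_per_drive):
    parity = {5: 1, 6: 2}[level]  # parity drives' worth of capacity
    return {
        "usable_tb": (drives - parity) * tb_per_drive,
        "failures_tolerated": parity,
    }

print(raid_summary(5, 8, 4))  # 28 TB usable, survives 1 failed drive
print(raid_summary(6, 8, 4))  # 24 TB usable, survives 2 failed drives
```

So an 8 x 4 TB RAID 6 array yields the 24 TB SirGCal quotes, and it can continue rebuilding through one additional failure; it cannot survive three failed drives.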