Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate the NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOZone as the benchmark for this purpose. In order to standardize the testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries.

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
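
With these entries in place, the shares can be brought up without a reboot and the resulting mounts verified before benchmarking. A minimal sketch (run as root or via sudo; the mount-point paths are the placeholders from the fstab entries above):

# Mount everything listed in /etc/fstab that is not already mounted
mount -a

# Confirm both shares are up and check the options that were actually negotiated
mount | grep -E 'type (cifs|nfs)'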

The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
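
For reference, the switches used here break down roughly as follows (per the IOZone documentation; the paths and output file names are placeholders):

# -a           full automatic mode: run all tests over a range of file and record sizes
# -c           include close() in the timing, so data is actually flushed to the share
# -z           used with -a to also test small record sizes against large files
# -R           generate an Excel-compatible report on standard output
# -g 2097152   cap the maximum file size at 2097152 KB (2 GB)
# -U <mount>   unmount and remount the mount point between tests
# -f <file>    temporary test file created on the share
# -b <file>    write a binary Excel-format copy of the report to this file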

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side, since test files small enough to fit in the client's page cache are largely served from RAM rather than from the NAS. This is evident in some of the graphs in the gallery below.
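
Though not part of the standardized procedure above, one way to keep these client-side effects out of manual spot checks is to flush the Linux page cache between runs. A minimal sketch (must be run as root on the client):

# Flush dirty pages to the NAS, then drop the page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches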

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; referring to the actual CSV outputs linked above makes the affected entries obvious.

Asustor AS7008T - Linux Client Performance (MBps)

IOZone Test        CIFS     NFS
Init Write           82      82
Re-Write             83      81
Read                 46     122
Re-Read              48     122
Random Read          27      56
Random Write         82      78
Backward Read        26      44
Record Re-Write    1690*   1637*
Stride Read          44     104
File Write           82      81
File Re-Write        84      81
File Read            33      90
File Re-Read         33      91

*: Benchmark number skewed due to caching effect
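
The per-test averages in the table above can be recomputed from the redirected IOZone output: each report in the -R output (e.g. "Writer report") is a matrix of throughput values in KB/s, with one row per file size and one column per record size. A minimal sketch for the write test, assuming the CIFS output file name from the command above and the usual report labels (these can vary between IOZone versions):

# Average every cell of the "Writer report" matrix and convert KB/s to MBps
awk '/"Writer report"/ {in_report = 1; next}
     in_report && NF == 0 {in_report = 0}
     in_report {for (i = 1; i <= NF; i++) if ($i ~ /^[0-9]+$/) {sum += $i; n++}}
     END {if (n) printf "Average write: %.0f MBps\n", sum / n / 1024}' <NAS_NAME>_CIFS_CSV.csv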

Comments

  • buxe2quec - Monday, December 1, 2014 - link

    I have a ZFS-based home server that acts as a NAS and also runs a mail server, an IMAP server (so mail isn't kept on each client; less stuff to back up), and so on. I use a Xeon E3-1220 with 32 GB RAM and RAID10 (4x3TB).
    I used OmniOS (based on illumos, a Solaris derivative) as the operating system.
    I would like to perform tests like the ones in this review to compare my home-built system with standard offerings (I know what I get with ZFS; I would like to know how much performance I lose), and also to compare the performance of a ZFS-based server with standard offerings that always use Linux mdadm (software RAID).
    However, with 32 GB of (ECC) RAM (overkill, I know), doing reliable tests that are not affected by the (aggressive) ZFS caching is difficult.
    Could anyone give me suggestions, or could AnandTech test a similar setup? After all, the product in this review uses an i3 that may support ECC (as some other i3 chips do), so it would be a good choice for home builds whenever the desired filesystem is ZFS (OmniOS, but also FreeNAS or NAS4free).

    Thanks.
  • PrimozR - Monday, December 1, 2014 - link

    Maybe they should test an HP MicroServer running FreeNAS?

    As far as I can see, the MicroServer series is by far the best option when it comes to a cheap NAS build, if you want to run ZFS. The system with no drives costs 200 €; you then have to add 8 GB of ECC RAM, but you still stay under 300 € for an ECC-enabled, ready-made NAS case for 4 drives. Just a Xeon motherboard will cost you 140 € on the low end (for LGA-1150 CPUs supporting ECC). With the Gen8 MicroServer you even get the ability to swap out LGA-1150 CPUs; Gen7 uses AMD's offerings.
  • buxe2quec - Monday, December 1, 2014 - link

    The Gen8 MicroServer would make sense, but the Gen7 is too weak and is CPU limited. Concerning the other alternative you mention, keep in mind that Xeons are not the only option: if you can find an ECC-enabled mobo, any i3 with ECC support will do fine at a very low price. Check here for a configuration: https://forums.freenas.org/index.php?threads/ecc-v...

    Concerning FreeNAS (or NAS4free, they are both good): they may not achieve the full performance for ZFS-related tasks compared to illumos kernels (like OmniOS or Nexenta), but it would still be interesting.
  • DanNeely - Monday, December 1, 2014 - link

    What does it need a 350W PSU for? None of the tests shown went above 135W. Even adding some margin for more power-hungry drives and a bit of headroom to avoid efficiency/power-quality penalties from running near full load, it seems a 175W or 200W PSU would be more than sufficient.
  • KAlmquist - Monday, December 1, 2014 - link

    Some hard drives are specified to draw 2 amps on the 12 volt line when spinning up. Multiply 2 amps by 12 volts by 8 disks, and you have the disk drives alone drawing 192 watts while the system is powering up. In theory a user could install a 25 watt PCIe card and plug in USB devices that draw 18.5 watts. Add in power for the CPU and motherboard, and you are getting close to 300 watts.

    350 watts is overkill, but the cost difference between a 300 watt power supply and a 350 watt power supply is pretty minimal.
  • DanNeely - Monday, December 1, 2014 - link

    That's what staged/sequential power-up is for: turn on your HDDs and USB devices (if you support the higher-power USB modes) sequentially instead of all at once. Higher-end storage servers have done this for years; I'm not sure how far down the market it's gotten.
  • hjones - Friday, December 19, 2014 - link

    If you go to the Asustor website, the model names alone make me think these are re-badged Synology equipment... at the very least they are OEMing some of the technology.
    The ADM config & management interface not only looks very similar, albeit differently themed... it's even using the same underlying technology - Sencha ExtJS. The app store is remarkably similar too.
    Anyone know more about this company? What is their relationship with Synology?
  • hjones - Friday, December 19, 2014 - link

    From AnandTech's own article (http://www.anandtech.com/show/7887/asustor-as304t-...):
    "Asustor, Synology and Thecus were touted as partners building NAS units based on this platform"
  • jeepcrazy - Wednesday, January 28, 2015 - link

    I feel completely ripped off by Asustor. Avoid them at ALL costs. My 608T operated fine for about a month and then the network ports died. Won't take an IP manually or via DHCP. After two weeks of back and forth email (one per day since they respond at 3am) they finally provided an RMA. I sent it to them, they kept it two weeks and sent it back, supposedly with a new main board. It has the EXACT same issue. So incredibly unacceptable for a business class NAS to have such terrible, slow, and ineffective support. If you have an outage, expect ZERO sympathy from Asustor. They have no cross shipment capability and no advance replacement offering. I wish I had bought a Synology or built my own. This cost me 12TB of data.
