GIGABYTE Server MD60-SC0 Conclusion

The MD60-SC0 has a clear market: servers with directed airflow that prioritize CPU performance and storage over serious GPU compute, as the latter is only possible with PCIe riser configurations. Due to the orientation of the sockets, the use of narrow sockets and the SSI EEB form factor, its use as a workstation board for OEMs or as a DIY board for home use is likely to be limited unless the builder is already geared up for this scenario.

As always with GIGABYTE Server motherboards, most of the extra controls are provided through the ASPEED management console, accessed via the network interface. The BIOS and software bundled with the system are basic at best, especially if a user wants to adjust fan speeds depending on workload. That being said, the management console will be familiar to those with experience in this field.
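
For readers new to out-of-band management, fan and thermal data on a board like this is normally read through the BMC over the network rather than through the operating system. As a minimal sketch (assuming the ASPEED BMC exposes standard IPMI-over-LAN, with a placeholder address and credentials), the fan sensors can be polled with ipmitool from a small Python wrapper:

    #!/usr/bin/env python3
    """Minimal sketch: poll BMC fan sensors over the network via ipmitool.

    Assumes the ASPEED BMC speaks standard IPMI-over-LAN and that ipmitool
    is installed; the host and credentials below are placeholders.
    """
    import subprocess

    BMC_HOST = "192.168.0.120"   # placeholder BMC address
    BMC_USER = "admin"           # placeholder credentials
    BMC_PASS = "password"

    def read_fan_sensors() -> str:
        """Return the raw 'sdr type Fan' listing from the BMC."""
        cmd = [
            "ipmitool", "-I", "lanplus",
            "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS,
            "sdr", "type", "Fan",
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        for line in read_fan_sensors().splitlines():
            # Typical line: "CPU0_FAN | 30h | ok | 29.1 | 5400 RPM"
            print(line)

Setting fan speeds, as opposed to reading them, generally requires vendor-specific raw commands, which is exactly where the basic bundled software shows its limits.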

For those more used to consumer products, it might sound odd for a motherboard manufacturer to include ports on the board that are only enabled when an add-in card is installed. The MD60-SC0 does this by supporting a RAID card via the Type-T mezzanine connector to drive the SAS ports beside it. This has some utility, allowing users to decide which RAID card is relevant to them and to upgrade it without much fuss, although the card GIGABYTE recommends is based on the LSI SAS 2308 chip. It is worth noting that this card gets very hot to the touch without airflow.
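
As a quick sanity check that the mezzanine card, and therefore the SAS ports, is actually active, the controller should be visible on the PCIe bus. Below is a minimal sketch for a Linux host, assuming lspci is available, that the controller sits under the LSI/Broadcom vendor ID 0x1000, and that lspci describes it with a "SAS2308" string:

    #!/usr/bin/env python3
    """Minimal sketch: check whether the mezzanine SAS controller is visible.

    Assumes a Linux host with lspci installed; 0x1000 is the LSI/Broadcom
    vendor ID, and the "SAS2308" match string is an assumption about how
    lspci names the Fusion-MPT SAS-2 controller.
    """
    import subprocess

    def lsi_devices() -> list[str]:
        """Return lspci lines for devices with vendor ID 0x1000."""
        out = subprocess.run(["lspci", "-nn", "-d", "1000:"],
                             capture_output=True, text=True, check=True).stdout
        return [line for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        devices = lsi_devices()
        if not devices:
            print("No LSI controller found - mezzanine card absent or not detected.")
        for dev in devices:
            marker = "  <-- SAS 2308 HBA" if "SAS2308" in dev else ""
            print(dev + marker)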

With respect to the system benchmarks, as is usual with management-controlled server motherboards, the time from initial power on to the system actually starting is over 30 seconds. POST times were cut significantly when the extra controllers or the RAID card connectors were not needed and could be disabled. DPC latency results were a little unusual, with the motherboard preferring the E5-2697 V3 processors over the other models we had at hand.
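
For anyone wanting to quantify that delay themselves, on a UEFI Linux install systemd records how long the firmware phase took before handing over to the boot loader. A minimal sketch that pulls that figure out of systemd-analyze (assuming the "(firmware)" field is reported, which it is not on legacy BIOS installs):

    #!/usr/bin/env python3
    """Minimal sketch: extract the firmware/POST portion of boot time.

    Assumes a UEFI Linux install where 'systemd-analyze time' reports a
    "(firmware)" figure; on legacy BIOS installs the field is absent.
    """
    import re
    import subprocess

    def firmware_time() -> str | None:
        out = subprocess.run(["systemd-analyze", "time"],
                             capture_output=True, text=True, check=True).stdout
        # Example: "Startup finished in 34.2s (firmware) + 3.1s (loader) + ..."
        match = re.search(r"in\s+([^+]+?)\s*\(firmware\)", out)
        return match.group(1).strip() if match else None

    if __name__ == "__main__":
        fw = firmware_time()
        print(f"Firmware/POST time: {fw}" if fw else "No firmware time reported.")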

As this is our first LGA2011-3 dual socket motherboard review, it is a little difficult to place its benefits against the competition beyond the big obvious checkbox of QSFP. GIGABYTE Server's range of motherboards at launch included several models, a few focused on workstations but most aimed at rack servers. The MD60-SC0 falls into the latter category, especially with this network configuration, and it ran most of our benchmark suite without issue. The main selling point of the motherboard will be that QSFP port.

Comments

  • PCTC2 - Wednesday, December 3, 2014 - link

    Coming from the HPC space, seeing 512GB-1TB of RAM was pretty regular, and 1.5TB-2TB was rare but did occur. Now, however, systems being able to take 6TB of RAM in a single 4U rack server (4P servers with 96 DIMMs, Intel E7 v2 support) is pretty incredible.

    However, there are a few odd things about this board. For one, the QSFP+ is totally unnecessary, as it only supports 2x10GbE and is neither 1) InfiniBand nor 2) 40GbE. Sure, with LACP you could have bonded 20GbE, but you either need a splitter cable (QSFP+ to 4x SFP+, with 2 SFP+ unusable) or a switch that supports multiple links over QSFP+ (a 40GbE switch with 10GbE breakout capabilities). Also, the decision to use the SFF-8087 connectors for the SATA ports and individual ports for SAS confounds me, as you lose the sideband support with individual cables, and onboard SATA doesn't support the sideband, thus losing some functionality with some backplanes. Finally, the card Gigabyte advertises with this board is an LSI 2308, an HBA and not a full hardware RAID controller.

    Some of Gigabyte's B2B systems have intrigued me, especially their 8x Tesla/Phi system in 2U, but this board just doesn't seem completely thought out.
  • jhh - Wednesday, December 3, 2014 - link

    I suspect the QSFP was designed to support a Fortville, but they didn't get it qualified in time. That would give them a true 40 Gig port, or 4x10G.
  • fackamato - Friday, December 5, 2014 - link

    What's fortville?
  • Cstefan - Friday, December 5, 2014 - link

    Intel's 40GbE QSFP+ controller.
    Nothing the consumer needs to worry about for a long time yet.
  • Klimax - Sunday, December 7, 2014 - link

    With some results already available:
    http://www.tweaktown.com/reviews/6857/supermicro-a...
  • Cstefan - Friday, December 5, 2014 - link

    I run multiple database servers with 2TB of RAM. My next round is slated for 4TB. And absolutely no joke, they reversed the SAS and SATA connectors in a monumentally stupid move.
  • ddriver - Wednesday, December 3, 2014 - link

    Well, surprisingly no gaming benchmarks this time, but what's with the "professional performance" benches? How many professionals out there make their money running Cinebench? How about some real workstation workloads for a change?
  • JeffFlanagan - Wednesday, December 3, 2014 - link

    This isn't a workstation, or a gaming machine.
  • ddriver - Wednesday, December 3, 2014 - link

    I actually applauded the absence of gaming benchmarks this time. As for whether this is for a workstation machine, I'd say it is far more suited to a workstation than to running WinRAR and image viewing software.

    And just to note, this "review" of a "server" motherboard doesn't have a single server benchmark whatsoever...
  • mpbrede - Wednesday, December 3, 2014 - link

    My usual gripe about acronyms that are not accompanied by an explanation when the term is first used. This time it is aggravated by a typo, I'm sure.

    "The system is based on the C612 chipset, which is similar to the consumer based X99 but with 2P related features, such as MTCP over PCIe."

    I'm pretty sure you meant to type MCTP (Management Component Transport Protocol) and not mTCP (microTCP?) or MTCP (Malaysian Technical Cooperation Programme, or something to do with Transport Layer Support for Highly Available Network Services).
