Ever since GIGABYTE’s Server team and I first started discussing reviews, it has been interesting to see what a purely B2B (business to business) unit could do. Since then, GIGABYTE Server has expanded, catering to both the B2B and B2C (business to consumer) markets and selling direct to end users. With the release of Haswell-EP we reported on GIGABYTE's large launch at the time, and the company has since sent us the MD60-SC0 for review.

GIGABYTE Server MD60-SC0 Overview

For those not versed in 2P workstation and server culture, the MD60-SC0 looks a bit different from consumer-line products. The DRAM and CPU sockets are aligned for airflow: air first passes over the power delivery, then the socket, and out the rear panel in one straight line, with the DRAM acting as a baffle to channel it. This means that the PCIe slots sit in a somewhat awkward position in the middle of the motherboard, limiting the length of PCIe devices, so without riser cables the platform is focused more on CPU power and storage than on being a GPU powerhouse.

The sockets might also confuse some users. Unlike the standard square LGA2011-3 sockets we see on most Haswell-E or Haswell-EP motherboards, the MD60-SC0 uses the narrow ILM socket configuration. This requires different coolers due to the different screw-hole placement, and these coolers are typically not sold at retail, coming OEM only. GIGABYTE also supplied us with a pair of Dynatron R14 heatsinks for our review, which certainly looked the part, although the noise under high load is something I wouldn't wish on anyone. This system combination belongs in a server room for sure.

The system is based on the C612 chipset, which is similar to the consumer X99 but with 2P-related features such as MCTP over PCIe. We were also supplied with a Type-T LSI RAID mezzanine card to enable the onboard SAS ports. So while the ports are part of the motherboard, the RAID card is a separate purchase that leverages the Type-T slot configuration, which also allows for potential RAID card upgrades over time if needed.

As with most server motherboards, the MD60-SC0 does not have any form of onboard audio but does offer two gigabit network ports alongside an Aspeed management interface. The board also comes with a QSFP+ port for added network connectivity.

Benchmark-wise, as this is the first Haswell-EP motherboard we have tested, it is a little hard to place. As usual with management-oriented motherboards, POST times are long and power consumption is in the upper echelons. DPC latency with the 2697 v3 CPUs was quite reasonable though, despite the other two CPU combinations giving much higher peaks.

One of the main crux points of server design is the orientation and size of the board, so the MD60-SC0 has to fit into your design paradigm to make the shortlist. With its narrow sockets and PCIe orientation, it is clearly aimed at OEMs and server builders rather than consumers.

Visual Inspection

Two things strike you as the motherboard comes out of the box: the size of the sockets, and the fact that the PCB is not a simple rectangle but has a cutout. The sockets stand out simply because I had not encountered the Narrow ILM LGA2011 socket before, let alone its LGA2011-3 iteration. Using the socket is easy enough, although instead of the hooks we get with the larger socket, the narrow socket has a flattened lever on one side and a raised lever on the other. The usage is exactly the same.

Similar to the narrow socket, Type-T mezzanine connectors are a new concept for this reviewer. The gap in the PCB is there so that the add-in card can fit, as the card is built to a certain 1U/2U standard. I would assume that non-rectangular PCBs are harder to manufacture, requiring a cutout, but then again all the motherboards we have reviewed at AnandTech have cutouts for screw holes, so this is probably not the oddest thing we have encountered.

Each socket uses all four memory channels at two DIMMs per channel. Combine this with the narrow sockets and there is space left for SATA ports on the edge of the board. Either way you cut it, it is a very tight squeeze, and as such the narrow CPU coolers have to conform to Intel's specifications exactly. The 16 DRAM slots will accept up to 128GB of UDIMM memory, 512GB of RDIMMs, or 1024GB of LRDIMMs. The thought of 1TB of DRAM in a single system is mind-boggling.
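
Those capacity ceilings follow directly from the slot count and the largest modules each memory type supports. Below is a quick sanity check of the arithmetic; the 8GB/32GB/64GB per-module figures are the maximums implied by the totals above, assumed for illustration rather than quoted from GIGABYTE's spec sheet.

```python
# Maximum memory capacity of the MD60-SC0's DIMM slots by module type.
# Per-module sizes are assumptions implied by the review's totals.
SLOTS = 2 * 4 * 2  # 2 sockets x 4 channels x 2 DIMMs per channel = 16

module_size_gb = {"UDIMM": 8, "RDIMM": 32, "LRDIMM": 64}

for dimm_type, size in module_size_gb.items():
    print(f"{dimm_type}: {SLOTS} x {size} GB = {SLOTS * size} GB")
# UDIMM:  16 x 8 GB  = 128 GB
# RDIMM:  16 x 32 GB = 512 GB
# LRDIMM: 16 x 64 GB = 1024 GB
```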

Each socket is supplied by six-phase power delivery with an appropriate heatsink. For those more accustomed to the mid-to-high-end consumer market, this combination might not look like enough, especially when the motherboard has to deal with 160W CPUs. One thing C612 motherboards have in their favor is the lack of overclocking, meaning that power draw is a known quantity. Also, with the sockets and DRAM aligned with the rear panel, this motherboard is aimed at systems with high-pressure fans blowing in a single direction across the board, which will aid cooling, especially at full tilt.
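
To put six phases into context, a rough back-of-the-envelope estimate of the per-phase load is shown below. This is a sketch under stated assumptions: Haswell-EP's on-die FIVR takes a single input rail (VCCIN) that we assume sits around 1.8 V, and VRM losses are ignored.

```python
# Rough per-phase current estimate for a 160W Haswell-EP CPU fed by a
# six-phase VRM. The 1.8 V VCCIN figure and 100% efficiency are assumptions
# for illustration only.
TDP_W = 160       # per-socket TDP mentioned in the review
VCCIN_V = 1.8     # assumed nominal FIVR input voltage
PHASES = 6

total_a = TDP_W / VCCIN_V
per_phase_a = total_a / PHASES
print(f"~{total_a:.0f} A total input current, ~{per_phase_a:.0f} A per phase")
# ~89 A total input current, ~15 A per phase
```

Around 15 A per phase at steady state is comfortably within what typical server-grade power stages are rated for, which is why six phases without any overclocking headroom is a sensible fit here.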

The PCIe mezzanine Type-T arrangement is a full PCIe 3.0 x8 affair, with each side of the connector handling power and data. One important thing to note with Type-T is the ability to cope with height restrictions. PCIe devices are notoriously tall and are rarely mounted upright in server systems below 4U. Type-T allows lower-height arrangements, and as this motherboard is geared more towards storage with its SATA breakout connectors and eight SAS RAID ports, Type-T is a good fit for low-profile systems. It is worth noting that the red SATA ports on the motherboard are basic SATA 6 Gbps ports for testing the system, or for OS/storage when the breakout cables are not in use.

With our memory right up against the SAS ports, there might be a slight conflict if locking cables are used here, especially at the edges near the DRAM latches. Then again, one would imagine that in a server the cables stay fixed and only the drives are moved when they need replacing.

In this area of the motherboard we also find the fan header arrangement. Beside each socket is a four-pin fan header, although these are SYS headers. There are five SYS headers on board: four on the right-hand side of the board and one at the top. The two CPU fan headers are found to the left of the sockets.

The PCIe arrangement affords two possibilities: either x8/x16/x8/x8 with the Type-T slot at PCIe 3.0 x8, or x8/x16/-/x16, again with the Type-T at PCIe 3.0 x8. Either way, due to the location of the sockets, large PCIe co-processors can only be used with a riser card or cable. For our testing we typically equip the system with a GTX 770 Lightning; that was not possible here, so we used an R7 240 instead and were unable to perform our normal GPU-based testing.

Above the PCIe slots is the meat of the IO and control, with the Aspeed management controller paired with 256MB of Samsung flash, and the Intel 82599ES controller close by.

Also in this area of the motherboard are a USB 3.0 header, a TPM header, a COM header and a Thunderbolt header (for use with a Thunderbolt add-in card). The QSFP+ Ethernet controller requires its own heatsink, and the port extends some way onto the motherboard.

Perhaps a little surprising are the power connectors. They earn bonus points for being located on the edge of the motherboard, although we typically see them closer to the CPUs, especially the 8-pin connectors. This might have implications for power arrangement and delivery through the PCB, although given how the board squeezes in two sockets with 2DPC DRAM support, it seems a reasonable compromise.

The rear panel is networking focused, giving the QSFP+ port alongside two Intel I350 ports and the server management port. All six USB ports on the rear are USB 3.0 standard, with a combination PS/2 port, a COM port and a VGA port (from the Aspeed) also in the mix.

Board Features

GIGABYTE MD60-SC0
Price: US (Newegg)
Size: SSI EEB
CPU Interface: LGA2011-3, Narrow ILM
Chipset: Intel C612
Memory Slots: Sixteen DDR4 DIMM slots
  Up to 128GB UDIMM, 512GB RDIMM, 1024GB LRDIMM
  Up to Quad Channel, 2133 MHz
Video Outputs: VGA (via Aspeed)
Network Connectivity: Intel 82599ES (QSFP+)
  2 x Intel I350
  10/100 Management Port
Onboard Audio: None
Expansion Slots: 2 x PCIe 3.0 x16
  2 x PCIe 3.0 x8
  1 x PCIe 3.0 x8 Type-T
Onboard Storage: 2 x SATA 6 Gbps, RAID 0/1/5/10
  4 x SATA 6 Gbps via mini-SAS
  4 x S_SATA 6 Gbps, no RAID, via mini-SAS
  8 x SAS/SATA via Type-T RAID card
USB 3.0: 6 x USB 3.0 via Rear Panel
  2 x USB 3.0 via Onboard Header
Onboard: 2 x SATA 6 Gbps
  2 x mini-SAS Breakout Connectors
  8 x SAS RAID Ports
  1 x USB 3.0 Header
  1 x COM Header
  1 x TPM Header
  7 x Fan Headers
  1 x Thunderbolt Header
  Front Panel Server Header
Power Connectors: 1 x 24-pin ATX
  2 x 8-pin CPU
Fan Headers: 2 x CPU (4-pin)
  5 x SYS (4-pin)
IO Panel: 1 x Combination PS/2 Port
  6 x USB 3.0 Ports
  1 x COM Port
  1 x VGA Port
  1 x QSFP+ Port (via Intel 82599ES)
  2 x 1Gbit RJ-45 Ports (via Intel I350)
  1 x 10/100 Network Management Port (via Aspeed)
Warranty Period: 3 Years
Product Page: Link

The big cost here will be the QSFP+ port, although we are not sure of the exact cost between manufacturer and end-user; it could be in the region of $50 to $300. The narrow LGA2011-3 sockets will also require different CPU coolers than normal, and the mezzanine Type-T RAID arrangement needs an additional purchase to get the SAS ports working.

Comments

  • macwhiz - Wednesday, December 3, 2014 - link

    I'm not surprised that there's no temperature data in the BIOS. Server admins don't look at the BIOS after they complete initial setup (or a major overhaul). It's accessible from the BMC, where it's useful in a server environment. When a server overheats, the admin is usually not in the same room, and often not in the same building, or even the same state. The important question is how the BMC firmware does at exposing that data for out-of-band management via IPMI, SNMP, or another standard solution. Does it play well with an Avocent UMG management device, for instance? As a server admin, I couldn't care less about seeing the temperature in the BIOS. What I care about is that my chosen monitoring solution can see if the temperature is going up, or any hardware fault is detected, and page me, even if the operating system isn't running. That's what BMCs are for!

    Don't apologize for using 240VAC power. Chances are very good that, even in a U.S. data center, it'll be on 240VAC power. Given the current needs of most servers, it's impractical to use 120VAC power in server racks—you'll run out of available amperage on your 120VAC power-distribution unit (power strip) long before you use all the outlets. Keep going down that road and you waste rack space powering PDUs with two or three cords plugged into them. It's much easier and more efficient all the way around to use 240VAC PDUs and power in the data center. Comparing a 20-amp 120V circuit to a 20-amp 240V circuit, you can plug at least twice as many of a given server model into the 240V circuit. Because the U.S. National Electrical Code restricts you to using no more than 80% of the rated circuit capacity for a constant load, you can plug in 16A of load on that 20A circuit. If the servers draw 6A at 120V or 3A at 240V, you can plug in two servers to the 120V power strip, or five servers into the 240V strip, before you overload it. So, once you get beyond a handful of computers, 240V is the way to go in the datacenter (if you're using AC power).
  • leexgx - Wednesday, December 3, 2014 - link

    Mass server racks are pure DC in some cases, or 240V. (I would have thought there would be some very basic temperature monitoring in the BIOS, but I guess most of this is exposed elsewhere.)

    So I agree with this post.
  • jhh - Thursday, December 4, 2014 - link

    208V 3-phase is probably more popular than 240V, as most electricity is generated as 3-phase, and using all 3 phases is important for efficiently using the power without being charged for a poor power factor.
  • mapesdhs - Thursday, December 4, 2014 - link

    Ian, you're still using the wrong source link for the C-ray test. The Blinkenlights site is a mirror over which I have no control; I keep the main c-ray page on my SGI site. Google for "sgidepot 'c-ray'" and the first hit will be the correct URL.

    Apart from that, thanks for the review!

    One question: will you ever be able to review any quad-socket systems or higher?
    I'd love to know how well some of the other tests scale, especially CB R15.

    Ian.
  • fackamato - Friday, December 5, 2014 - link

    No 40Gb benchmarks?
  • sor - Monday, December 8, 2014 - link

    I was excited to see the QSFP+ port, but it seems like it's not put to use. I've been loving our Mellanox switches; they have QSFP and you can run 40GbE or 4 x 10GbE with a breakout cable on each port. It provides absolutely ridiculous port density at a great price. You can find SX1012s (12-port QSFP) for under $5k and have 48 10G ports in 1/2U at about $100/port. No funny business with extra costs to license ports. The twinax cable is much cheaper than buying 10G optics too, but you have to stay close. Usually you only need fibre on the uplinks anyway.
  • dasco - Saturday, March 9, 2019 - link

    Does it support UDIMMs? The documentation says that it supports only RDIMMs or LRDIMMs.
    Is the G.Skill RAM used in this test UDIMM or RDIMM ECC RAM?
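
To illustrate macwhiz's point above about out-of-band management: the Aspeed BMC on this board exposes its sensor readings over IPMI, so a monitoring host can poll temperatures and fault states even when the operating system is down. Below is a minimal sketch using ipmitool over the network; the BMC address and credentials are placeholders, and in practice this job belongs to a proper monitoring stack (IPMI/SNMP pollers with pager integration) rather than a standalone script.

```python
# Minimal sketch: poll a BMC's sensors out-of-band with ipmitool and flag
# anything whose status is not healthy. Host and credentials are placeholders.
import subprocess

BMC_HOST = "192.168.0.120"   # placeholder BMC address
BMC_USER = "admin"           # placeholder credentials
BMC_PASS = "password"

def read_sensors():
    """Return (name, reading, status) tuples parsed from `ipmitool sensor`."""
    output = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "sensor"],
        capture_output=True, text=True, check=True,
    ).stdout
    sensors = []
    for line in output.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 4:
            name, reading, status = fields[0], fields[1], fields[3]
            sensors.append((name, reading, status))
    return sensors

if __name__ == "__main__":
    for name, reading, status in read_sensors():
        if status not in ("ok", "na"):
            # A real deployment would page someone here instead of printing.
            print(f"ALERT: {name} reads {reading} (status: {status})")
```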
