IPMI

I often find that dual-socket motherboards need some extra love and care to get working properly, especially because the normal troubleshooting aids we get on consumer motherboards (like a two-digit debug display) aren't present. This is where the baseboard management controller (BMC) comes in, providing remote access to the system over the network.

One of the new features that Supermicro has implemented here, due to California law, is that no system can be shipped with a default admin/password combination any more. The H11DSi still uses ADMIN as the main administrator account, but the password is unique to each board and printed on a sticker on the motherboard – you'll find it in the area just below the DRAM slots. Ours was a 10-letter password in all caps.

By default the IPMI interface will accept an IP address over DHCP, although this can be changed to a static configuration. Once the address is entered into a browser, we get Supermicro's latest web interface.
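For headless deployments, the same network settings can also be inspected or changed from the host OS with the open-source `ipmitool` utility. The channel number and addresses below are assumptions for illustration (channel 1 is the usual LAN channel on Supermicro boards):

```shell
# Show the BMC's current network configuration (LAN channel 1)
ipmitool lan print 1

# Switch from DHCP to a static address (example values - adjust for your network)
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.50
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.1
```

Run locally this requires the kernel IPMI drivers to be loaded; the same commands work remotely by adding `-I lanplus -H <bmc-ip> -U <user> -P <password>`.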

Within the first few pages is the system as detected, and we can see here that it identifies the two processors as well as the installed memory and the BIOS and BMC versions. Users can update both the BIOS and the BMC firmware through this interface.
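The same inventory can be pulled from the command line. As a sketch, assuming a stock `ipmitool` build:

```shell
# BMC firmware revision, manufacturer ID, and supported IPMI version
ipmitool mc info

# FRU data: board part number, serial number, and manufacture date
ipmitool fru print
```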

There are 54 sensors on the motherboard, relating to temperature, voltages, and fan speeds. Through the IPMI, users can set high and low limits for any of these sensors. Any discrepancy from the expected values is recorded in the health log.
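Those thresholds map onto standard IPMI sensor commands, so they can also be scripted. The sensor name "FAN1" below is an assumption for illustration; check the `ipmitool sensor list` output for the names your board actually reports:

```shell
# Dump every sensor with its reading and current thresholds
ipmitool sensor list

# Set the upper thresholds for a fan sensor, in RPM:
# upper <non-critical> <critical> <non-recoverable>
ipmitool sensor thresh "FAN1" upper 1800 2000 2200

# Set the lower thresholds: lower <non-recoverable> <critical> <non-critical>
ipmitool sensor thresh "FAN1" lower 300 450 600

# Threshold excursions are recorded in the system event log
ipmitool sel list
```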

The big area for the IPMI is the configuration tab, which offers access to networking and server controls.

Ports can also be set for the various use cases.

One thing to note with this motherboard is the fan speeds. While there are eight different 4-pin fan headers on the board, the amount of control offered to the end-user is pitiful. There is nothing in the BIOS to allow users to control the fan speed – instead a user has to access the IPMI, and even then the options are limited to four:

By default this is set to Optimal Speed, but a modern system should be able to support proper fan curves. It seems odd that consumer motherboards are so far ahead of the curve here, as fine-grained fan control might be required for a server board depending on its environment.
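For what it's worth, the four presets can at least be toggled without the web UI. These OEM raw commands are community-documented for Supermicro BMCs rather than anything from an official manual, so the mode byte values are an assumption that may vary by board and firmware revision:

```shell
# Read the current fan mode (returns a single byte)
ipmitool raw 0x30 0x45 0x00

# Set the fan mode; commonly reported values:
# 0x00 = Standard, 0x01 = Full, 0x02 = Optimal, 0x04 = Heavy IO
ipmitool raw 0x30 0x45 0x01 0x02
```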

Like most server motherboards, there is also a service log to show what was changed and when.

For remote control/iKVM, the interface supports only an HTML5 client, which is how we accessed the system. The interface allows for full power control, including a software shutdown mode.
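The same power controls are exposed over the standard IPMI LAN interface, so a stuck system can be recovered without the web UI at all. The address and credentials below are placeholders:

```shell
# Current chassis power state
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'sticker-password' chassis power status

# Software shutdown: asks the OS to power down gracefully
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'sticker-password' chassis power soft

# Hard power cycle for when the OS is unresponsive
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'sticker-password' chassis power cycle
```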

Gallery: H11DSi IPMI


36 Comments


  • bryanlarsen - Wednesday, May 13, 2020 - link

    > the second CPU is underutilized.

    This is common in server boards. It means that if you don't populate the second CPU, most of your peripherals and slots are still usable.
  • The_Assimilator - Wednesday, May 13, 2020 - link

    It's almost like technology doesn't exist for the board to detect when a second CPU is present, and if so, switch some of the PCIe slots to use the lanes from that CPU instead. Since Supermicro apparently doesn't have access to this holy grail, they could have opted for a less advanced piece of manual technology known as "jumpers" and/or "DIP switches".

    This incredible lack of basic functionality on SM's part, coupled with the lack of PCIe 4, makes this board DOA. Yeah, it's the only option if you want dual-socket EPYC, but it's not a good option by any stretch.
  • jeremyshaw - Wednesday, May 13, 2020 - link

    For Epyc, the only gain of dual socket is more CPU threads/cores. If you wanted 128 PCIe 4.0 lanes, single socket Epyc can already deliver that.
  • Samus - Thursday, May 14, 2020 - link

    The complexity of using jumpers to reallocate entire PCIe lanes would be insane. You'd probably need a bridge chip to negotiate the transition, which would remove the need for jumpers anyway since it could be digitally enabled. But this would add latency - even if it wasn't in use since all lanes would need to be routed through it. Gone are the days of busmastering as everything is so complex now through serialization.
  • bryanlarsen - Friday, May 15, 2020 - link

    Jumpers and DIP switches turn into giant antennas at the 1GHz signalling rate of PCIe3.
  • kingpotnoodle - Monday, May 18, 2020 - link

    Have you got an example of a motherboard that implements your idea with PCIe? I've never seen it, and as bryanlarsen said, this type of layout where everything essential is connected to the first CPU is very standard in server and workstation boards. It allows the board to boot with just one CPU; adding the second CPU usually enables additional PCIe slots.
  • mariush - Wednesday, May 27, 2020 - link

    At the very least they could have placed a bunch of M.2 connectors on the motherboard, even double stacked... or make a custom (dense) connector that would allow you to connect a pci-e x16 riser cable to a 4 x m.2 card.
  • Pyxar - Wednesday, December 23, 2020 - link

    This would not be the first time I've seen that. I remember playing with the first-gen Opterons; the nightmares of prosumer motherboard design shortcomings were numerous.
  • Sivar - Wednesday, May 13, 2020 - link

    This is a great article from Ian as always. Quick correction though, second paragraph:
    "are fairly numerate" – that's not really what numerate means.
