One of the driving forces in the high-end desktop space is the creator community: the need for fast CPUs and fast storage is strong, regardless of cost. Rendering video, working with large 8K datasets, and mixing and matching hardware to hit the required performance is in itself an exciting area to delve into. To meet the needs of the most demanding creators, ASUS is updating the quad M.2 card it put on the market last year to support PCIe 4.0 SSDs on the latest AMD systems.

The card is essentially a mounting point that takes a full x16 PCIe slot and bifurcates it into four separate PCIe 4.0 x4 links, each paired with an M.2 connector. Each drive should therefore be able to achieve full speed – to help ensure this, the card also comes with a full aluminium heatsink and fan, which operates at a reasonably low RPM. The fan can be enabled or disabled via a switch on the PCIe bracket, and the bracket also carries an activity LED for each of the four drives.
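As a rough illustration of the one-x4-link-per-drive arrangement, here is a minimal sketch (assuming Linux, which exposes the standard current_link_speed and current_link_width attributes in sysfs; nothing here is specific to this card) that reports the negotiated link of each NVMe drive:

```python
#!/usr/bin/env python3
# Minimal sketch: print the negotiated PCIe link for each NVMe controller.
# Assumes Linux and the kernel's standard PCI sysfs attributes.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.join(ctrl, "device")  # the controller's PCI device dir
    try:
        with open(os.path.join(pci_dev, "current_link_speed")) as f:
            speed = f.read().strip()
        with open(os.path.join(pci_dev, "current_link_width")) as f:
            width = f.read().strip()
    except OSError:
        continue  # attribute missing, e.g. not a PCIe-attached controller
    # On a bifurcated x16 card, each drive should report a width of 4
    # at the slot's speed (16.0 GT/s for PCIe 4.0).
    print(f"{os.path.basename(ctrl)}: x{width} @ {speed}")
```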

One of the big issues with the older PCIe 3.0 version was support for the card on different systems. The card worked well on AMD systems, but had issues with Intel systems, because Intel's PCIe solution did not support multiple endpoints in the same way. With this new card, that problem ultimately disappears, because Intel has no PCIe 4.0 solution right now.

We expect the Hyper M.2 x16 Gen 4 card to be available soon, aimed mainly at Threadripper and EPYC systems. Pricing should be equivalent to the PCIe 3.0 version.

Comments

  • ingwe - Thursday, January 9, 2020

    Yup glad I read to the bottom for that :D
  • Jorgp2 - Friday, January 10, 2020

    Except it's completely irrelevant as this is a passive device.

    It will work on any system that supports bifurcation, be it PCI-E 1 or 4.
  • GreenReaper - Thursday, January 9, 2020

    "Creator community" is already my most-hated phrase of 2020. >_<
  • eek2121 - Friday, January 10, 2020

    Artificial market segmentation at its finest.
  • Dug - Thursday, January 9, 2020

    Great cheap solution without the slowdown you get from running multiple NVMe drives off the chipset.
  • Kevin G - Friday, January 10, 2020

    I was kinda hoping that this would have a MicroSemi PCIe 4.0 bridge chip to solve the bifurcation issues. Right now PCIe 4.0 M.2 cards can't saturate the 4-lane link, so stuffing this into an 8-lane PCIe 4.0 slot or a 16-lane PCIe 3.0 slot wouldn't be that detrimental currently.

    The other bonus of a bridge chip is that some permit things like RAID1 where the mirroring is done on the bridge so performance is maintained while not sacrificing PCIe lanes from the host. Nice for LGA 115X systems with so few lanes to begin with.
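To make the contrast concrete: on a passive card like this one, mirroring has to be done by the host, which issues one transfer per member drive over its own PCIe lanes, whereas a bridge with on-chip RAID1 would duplicate the write downstream of the host. A conceptual sketch only (Python; the device paths are hypothetical examples, and real software RAID would of course use md/mdadm or similar):

```python
# Conceptual sketch: host-side RAID1 mirroring with a passive card.
# MEMBERS paths are hypothetical; this is illustration, not a RAID tool.
import os

MEMBERS = ["/dev/nvme0n1", "/dev/nvme1n1"]  # hypothetical mirror members

def mirrored_write(data: bytes, offset: int) -> None:
    """Write the same block to every member: the host pays for each copy."""
    for path in MEMBERS:
        fd = os.open(path, os.O_WRONLY)
        try:
            os.pwrite(fd, data, offset)  # one host-side transfer per member
        finally:
            os.close(fd)

# A bridge/switch with built-in RAID1 would instead receive the data once
# and duplicate it downstream, preserving host PCIe bandwidth.
```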
  • MenhirMike - Friday, January 10, 2020

    Problem with a bridge is that it drives up the cost significantly. The existing PCIe 3.0 version costs about $50, compared to ~$200 for ones with a bridge chip (which will surely be made by someone at some point).
  • Billy Tallis - Friday, January 10, 2020

    What you're referring to is usually called a PCIe switch. Bridge chips are generally doing some kind of conversion between protocols, so that they can connect two devices that would otherwise not be able to communicate.

    As far as I am aware, Marvell is the only one with an NVMe switch that operates at the level of the NVMe protocol rather than just PCIe. That's how it can do RAID 0/1, virtualization, and other SSD-specific stuff, but it hasn't been updated for PCIe 4.0 yet. Broadcom/PLX and Microchip/Microsemi PCIe switches cannot do RAID for NVMe devices.

    All of those switch products are priced to only be accessible to enterprise customers. ASMedia has some relatively small PCIe 3.0 switches (up to 8 lane upstream, 16 lane downstream), and even that makes a quad-M.2 card into a ~$250 product.
  • eek2121 - Friday, January 10, 2020

    I haven't looked at individual drives, but many are claiming numbers that are exactly double previous gen. What do you mean they aren't saturating the link?

    I also disagree with others who have stated that we don't need faster storage. Storage remains a huge bottleneck, even today. PCIe could also benefit from lower latency.
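For reference, the link-level numbers behind this exchange, as a back-of-the-envelope calculation (per-lane throughput after line-encoding overhead only; real-world protocol overhead takes a further cut):

```python
# Back-of-the-envelope PCIe bandwidth: per-lane throughput after
# line-encoding overhead, scaled to the x4 link each M.2 slot gets.
GENS = {
    # generation: (transfer rate in GT/s, line-encoding efficiency)
    3: (8.0, 128 / 130),   # PCIe 3.0: 8 GT/s, 128b/130b encoding
    4: (16.0, 128 / 130),  # PCIe 4.0: 16 GT/s, same encoding
}

for gen, (gts, eff) in GENS.items():
    per_lane_gbs = gts * eff / 8       # GB/s per lane (8 bits per byte)
    print(f"PCIe {gen}.0 x4: ~{4 * per_lane_gbs:.2f} GB/s")

# Prints ~3.94 GB/s for PCIe 3.0 x4 and ~7.88 GB/s for PCIe 4.0 x4.
# Early Gen4 drives quoting ~5 GB/s sequential reads are roughly double a
# saturated Gen3 x4 drive (~3.5 GB/s) yet still short of the Gen4 x4
# ceiling, which is consistent with both comments above.
```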
