Last year IBM presented details about its new Power10 family of processors: eight threads per core, 15 cores per chip, and two chips per socket, with a new core microarchitecture, all built on Samsung’s 7nm process with EUV. New technologies such as PCIe 5.0 for add-in cards, PowerAXON for chip-to-chip interconnect, and OpenCAPI for super-wide memory support made Power10 sound like a beast, but the question was always about time to market – when could customers get one? Today IBM’s Power10 E1080 Servers are being announced, aimed squarely at the cloud market.

Power10: A Brief Summary

IBM’s Power series of processors has seen steady progression over the last couple of decades, often leaning on specialized manufacturing processes to eke out extra frequency right at the bleeding edge. The new Power10 processor is likewise built for performance, with the 602 mm² die carrying 16 cores running at over 4 GHz, with 8 threads per core. For yield reasons one core per chip is disabled, but a full 16-socket system can scale to 1,920 logical threads.
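As a quick sanity check on those figures, the thread count works out as follows (a back-of-envelope sketch, assuming single-chip modules with one 15-core chip per socket):

```python
# Back-of-envelope check of the logical-thread figure quoted above.
cores_per_chip = 15    # 16 cores on die, one disabled for yield
threads_per_core = 8   # SMT8
chips = 16             # one chip per socket in a full 16-socket system

logical_threads = cores_per_chip * threads_per_core * chips
print(logical_threads)  # 1920
```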

IBM built Power10 to ship either as a single-chip module (SCM) with one piece of silicon, or as a dual-chip module (DCM) with two. Where the chip really shines is in the two multi-protocol connectivity interfaces around the edges of the silicon.

PowerAXON at the corners and OMI (the OpenCAPI Memory Interface) along the edges are remarkably flexible interfaces. Each runs at 1 TB/sec: PowerAXON can be used for chip-to-chip communication, storage, regular DRAM, ASIC/FPGA connections, and clustered memory, while OMI can drive storage, main DRAM, or high-bandwidth GDDR/HBM. Together these technologies allow for up to 8 TB of memory per system, or 2048 TB of addressable memory across a networked cluster of systems. There is also a PCIe 5.0 x32 interface for add-in cards.
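The cluster figure follows directly from the per-system number; a minimal sketch of the arithmetic (the system count is my inference from the two stated capacities, not an IBM-quoted figure):

```python
# Addressable-memory arithmetic from the figures quoted above.
per_system_tb = 8     # up to 8 TB of memory per system
cluster_tb = 2048     # addressable across a networked cluster

# Inferred: number of 8 TB systems needed to reach the cluster total.
systems_in_cluster = cluster_tb // per_system_tb
print(systems_in_cluster)  # 256
```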

IBM compares Power10 against Power9: +20% single-thread performance, +30% per-core performance, and an overall 3x performance per watt against the previous 14nm processor. Also on board is a new AI compute layer with four 512-bit matrix engines and eight 128-bit SIMD engines per core, providing a 20x or greater uplift in INT8 performance per socket.
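For a sense of scale, the register widths above imply the following INT8 lane counts per engine (a sketch only; actual per-cycle throughput depends on microarchitectural details IBM has not broken out here):

```python
# Lane-count arithmetic for the per-core AI engines described above.
int8_bits = 8
mma_width_bits = 512   # each of the four matrix engines
simd_width_bits = 128  # each of the eight SIMD engines

mma_int8_lanes = mma_width_bits // int8_bits    # 64 INT8 values per matrix register
simd_int8_lanes = simd_width_bits // int8_bits  # 16 INT8 values per SIMD register
print(mma_int8_lanes, simd_int8_lanes)  # 64 16
```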

For more slides and details, check our Live Blog from last year’s Hot Chips.

IBM Power10 E1080

The stated E1080 design is an eight-socket system, supporting transparent memory encryption, 2.5x faster encryption, and new RAS features for advanced recovery, self-healing, and diagnostics. IBM’s materials focus on a couple of benchmarks in particular: beating other options in a two-tier SAP HANA SD standard benchmark with only half the sockets, and matching an Oracle benchmark result with only 20% of the power and number of Oracle licenses.

Multiple times in the release IBM mentions ‘instant scaling, pay-per-use consumption’, especially as it pertains to Red Hat’s OpenShift technology in the cloud. This all ties into IBM’s ‘Hybrid Cloud’ strategy, under which a business runs some private internal cloud resources while also using ‘public’ resources from a cloud service provider, and it is the public element that drives the cost. IBM runs its own cloud service, of which Power10 will be a part.

IBM was relatively light on details about the exact SKUs on offer, memory options, or whether these systems will be available for direct purchase and deployment. A lot of discussion went into the new AI accelerators with ONNX frameworks, as well as operating system support for enterprise features such as protection against side-channel attacks, intrusion detection, compliance reporting, and full-stack encryption with support for ‘quantum-safe cryptography’.

IBM is taking orders now with shipments expected to begin before the end of the month.

23 Comments

  • kgardas - Wednesday, September 8, 2021 - link

    And btw, here is vSMP Epyc: 8 chips: 2250 -- result nearly 2 years old by now.

    https://spec.org/cpu2017/results/res2020q1/cpu2017...
  • RedGreenBlue - Wednesday, September 8, 2021 - link

    SPEC does low-level benchmarks. It is not the same as testing the performance of enterprise software. These are great or better chips, for their purpose, but they aren’t meant to compete directly with Epyc or Xeon. They are meant to avoid being in the same target market by specializing in certain environments where they shine and they are designed with those workloads in mind, not others.
  • RedGreenBlue - Thursday, September 9, 2021 - link

    Also just saying, there’s a reason nobody is out there bragging about how well their POWER chip runs Crysis. The “any workload you throw at it” argument is only ever going to be true if you spend (waste) billions of dollars to build an entire software ecosystem around the hardware. Hence, why Steve Jobs, in talking about their hardware development quoted a famous engineer and said something like, “people really serious about software need to make their own hardware.”
  • FunBunny2 - Friday, September 10, 2021 - link

    " Steve Jobs"

    near as I can tell, all Apple has ever done to "make their own hardware.” is take an existing ISA, and make the various bits and pieces wider and/or fatter. that doesn't require much imagination or skill.
  • nubie - Sunday, September 12, 2021 - link

    Better let Cosworth and Prodrive know all they do is soup up Fords and Subarus /s. If it was so easy why doesn't everyone do it?
  • Oxford Guy - Sunday, September 12, 2021 - link

    Apple pioneered quite a bit, such as the software floppy controller and the Lisa system. Sometimes the quest to develop innovative things in-house ended in failure, such as the Twiggy minifloppy.

    It was a good idea overall but the mistake was not using a hard protective shell like Sony did. Had Apple done that its floppies would have had more than twice the capacity of the Sony microfloppy.

    Jobs is also the originator of the NeXT Cube, which was the first system of note to ditch floppies (long before the iMac), replacing them with magneto-optical drives.

    Jobs was hardly a salesman of generic/vanilla products. If you want that you can look at Apple during the Performa era when he wasn’t around.
  • FreckledTrout - Monday, September 13, 2021 - link

    While not as much skill as a new ISA and a new design this still is an excellent approach. As time goes by Apple having the OS and hardware will show some serious merits. Some of it is soft things like the hardware and software folks physically speaking to each other.
  • abufrejoval - Wednesday, September 8, 2021 - link

    It seems quite safe to guess that it will suck much more power per second than ARM or EPYC....

    Power savings or exploiting any bit of idling to lower power consumption is very low priority on these, while the ARM chips are really concentrating on that and x86 is a compromise.

    Just like the z/Arch chips you better make sure that these chips stay loaded to have them pay for the architecture luxury tax and the juice they take.
  • milli - Wednesday, September 8, 2021 - link

    This is not really meant for compute but rather throughput. That's why IBM is targeting the cloud market. Even if you would compare the two, the IBM chip would only require 15 CPU licenses per chip while the EPYC would require 64..... aaaaaaand all your money is gone to licensing.
  • Dolda2000 - Thursday, September 9, 2021 - link

    Is there any chance that Anandtech might get one in for testing?
