When the PCI Special Interest Group (PCI-SIG) first announced PCIe 4.0 a few years back, the group made it clear that they were not just going to make up for lost time after PCIe 3.0, but that they were going to accelerate their development schedule to beat their old cadence. Since then the group has launched the final versions of the 4.0 and 5.0 specifications, and now, with 5.0 only weeks old, the group is announcing today that they are already hard at work on the next version of the PCIe specification, PCIe 6.0. True to form for PCIe development, the forthcoming standard will once again double the bandwidth of a PCIe slot – a x16 slot will now be able to hit a staggering 128GB/sec – with the group expecting to finalize the standard in 2021.

As with the PCIe iterations before it, the impetus for PCIe 6.0 is simple: hardware vendors are always in need of more bandwidth, and the PCI-SIG is looking to stay ahead of the curve by providing timely increases in bandwidth. Furthermore in the last few years their efforts have taken on an increased level of importance as well, as other major interconnect standards are building off of PCIe. CCIX, Intel’s CXL, and other interfaces have all extended PCIe, and will in turn benefit from PCIe improvements. So PCIe speed boosts serve as the core of building ever-faster (and more interconnected) systems.

PCIe 6.0, in turn, is easily the most important and most disruptive update to the PCIe standard since PCIe 3.0 almost a decade ago. To be sure, PCIe 6.0 remains backwards compatible with the five versions that have preceded it, and PCIe slots aren’t going anywhere. But with PCIe 4.0 & 5.0 already imposing very tight signal requirements that have resulted in ever-shorter trace length limits, simply doubling the transfer rate yet again isn’t necessarily the best way to go. Instead, the PCI-SIG is going to upend the signaling technology entirely, moving from the Non-Return-to-Zero (NRZ) signaling used since the beginning to Pulse-Amplitude Modulation 4 (PAM4).

At a very high level, what PAM4 does versus NRZ is to take a page from the MLC NAND playbook, and double the number of electrical states a single cell (or in this case, transmission) will hold. Rather than traditional 0/1 high/low signaling, PAM4 uses four signal levels, so that each symbol can encode one of four possible two-bit patterns: 00/01/10/11. This allows PAM4 to carry twice as much data as NRZ without having to double the transmission bandwidth, which for PCIe 6.0 would have otherwise resulted in a frequency around 30GHz(!).
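To make the idea concrete, here is a minimal Python sketch of the difference (illustrative only; the actual PCIe 6.0 symbol mapping is defined by the specification). The Gray-style ordering of the level map is a common design choice so that a one-level slicer error corrupts only a single bit:

```python
# Illustrative sketch (not the PCIe 6.0 spec): pack a bit stream onto
# PAM4 symbols. Gray coding (00, 01, 11, 10) is typical so that a
# one-level receiver error corrupts only one bit of the pair.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map pairs of bits onto PAM4 voltage levels (-3, -1, +1, +3 units)."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def nrz_encode(bits):
    """NRZ: one bit per symbol, two voltage levels."""
    return [+1 if b else -1 for b in bits]

bits = [0, 1, 1, 1, 0, 0, 1, 0]
print(len(nrz_encode(bits)))   # 8 symbols for 8 bits
print(len(pam4_encode(bits)))  # 4 symbols -- same data in half the symbols
```

The payoff is visible in the symbol counts: the same eight bits take eight NRZ symbols but only four PAM4 symbols, so the line can run at half the symbol rate for the same bit rate.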


NRZ vs. PAM4 (Base Diagram Courtesy Intel)

PAM4 itself is not a new technology, but up until now it’s been the domain of ultra-high-end networking standards like 200G Ethernet, where the amount of space available for more physical channels is even more limited. As a result, the industry already has a few years of experience working with the signaling standard, and with their own bandwidth needs continuing to grow, the PCI-SIG has decided to bring it inside the chassis by basing the next generation of PCIe upon it.

The tradeoff for using PAM4 is of course cost. Even with its greater bandwidth per Hz, PAM4 currently costs more to implement at pretty much every level of the stack, from the PHY on up. Which is why it hasn’t taken the world by storm, and why NRZ continues to be used elsewhere. The sheer mass deployment scale of PCIe will of course help a lot here – economies of scale still count for a lot – but it will be interesting to see where things stand in a few years once PCIe 6.0 is in the middle of its ramp-up.

Meanwhile, not unlike the MLC NAND in my earlier analogy, because of the additional signal states a PAM4 signal itself is more fragile than an NRZ signal. And this means that along with PAM4, for the first time in PCIe’s history the standard is also getting Forward Error Correction (FEC). Living up to its name, Forward Error Correction is a means of correcting signal errors in a link by supplying a constant stream of error correction data, and it’s already commonly used in situations where data integrity is critical and there’s no time for a retransmission (such as DisplayPort 1.4 w/DSC). While FEC hasn’t been necessary for PCIe until now, PAM4’s fragility is going to change that. The inclusion of FEC shouldn’t make a noticeable difference to end-users, but for the PCI-SIG it’s another design requirement to contend with. In particular, the group needs to make sure that their FEC implementation is low-latency while still being appropriately robust, as PCIe users won’t want a significant increase in PCIe’s latency.
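PCIe 6.0's actual FEC is a lightweight code designed by the PCI-SIG specifically for low latency, but the underlying principle can be illustrated with the classic Hamming(7,4) code: transmit extra parity bits alongside the data so the receiver can locate and flip a single corrupted bit on its own, without waiting for a replay.

```python
# Illustrative only: PCIe 6.0 defines its own lightweight FEC, but the
# principle matches this classic Hamming(7,4) code -- send extra parity
# bits so the receiver can fix a single bit error without a retransmission.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the bad bit; 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[3] ^= 1                          # simulate a single transmission error
print(hamming74_decode(word))         # [1, 0, 1, 1] -- error corrected
```

The decoder fixes the error purely from the received word, which is exactly why FEC suits links where a retransmission would cost too much latency; the price is the constant parity overhead on the wire.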

The upshot of the switch to PAM4 then is that by increasing the amount of data transmitted without increasing the frequency, the signal loss requirements won’t go up. PCIe 6.0 will have the same 36dB loss budget as PCIe 5.0, meaning that while trace lengths aren’t officially defined by the standard, a PCIe 6.0 link should be able to reach just as far as a PCIe 5.0 link. Which, coming from PCIe 5.0, is no doubt a relief to vendors and engineers alike.
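The arithmetic behind this is straightforward: it is the symbol (baud) rate, not the bit rate, that sets the Nyquist frequency the traces have to carry. A quick sketch:

```python
# Back-of-the-envelope: why PAM4 holds PCIe 6.0's channel requirements
# at PCIe 5.0 levels. The symbol (baud) rate, not the bit rate, sets
# the Nyquist frequency the board traces must support.

def nyquist_ghz(gigatransfers_per_sec, bits_per_symbol):
    symbol_rate = gigatransfers_per_sec / bits_per_symbol  # GBaud
    return symbol_rate / 2

print(nyquist_ghz(32, 1))  # PCIe 5.0, NRZ: 16.0 GHz
print(nyquist_ghz(64, 1))  # PCIe 6.0 if it had kept NRZ: 32.0 GHz
print(nyquist_ghz(64, 2))  # PCIe 6.0 with PAM4: 16.0 GHz, same as 5.0
```

At two bits per symbol, PCIe 6.0's 64 GT/sec works out to the same 16GHz Nyquist frequency as PCIe 5.0, whereas staying with NRZ would have pushed it to 32GHz, in line with the ~30GHz figure mentioned earlier.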

Even with these changes, however, as previously mentioned PCIe 6.0 is fully backwards compatible with earlier standards, and this will go for both hosts and peripherals. This means that to a certain extent, hardware designers are essentially going to be implementing PCIe twice: once for NRZ, and again for PAM4. This will be handled at the PHY level, and while it’s not a true doubling of logic (what is NRZ but PAM4 with half as many signal levels?), it does mean that backwards compatibility is a bit more work this time around. Though discussing the matter in today’s press conference, it doesn’t sound like the PCI-SIG is terribly concerned about the challenges there, as PHY designers have proven quite capable (e.g. Ethernet).

PCI Express Bandwidth (Full Duplex)

Slot Width | PCIe 1.0 (2003) | PCIe 2.0 (2007) | PCIe 3.0 (2010) | PCIe 4.0 (2017) | PCIe 5.0 (2019) | PCIe 6.0 (2021)
x1         | 0.25GB/sec      | 0.5GB/sec       | ~1GB/sec        | ~2GB/sec        | ~4GB/sec        | ~8GB/sec
x2         | 0.5GB/sec       | 1GB/sec         | ~2GB/sec        | ~4GB/sec        | ~8GB/sec        | ~16GB/sec
x4         | 1GB/sec         | 2GB/sec         | ~4GB/sec        | ~8GB/sec        | ~16GB/sec       | ~32GB/sec
x8         | 2GB/sec         | 4GB/sec         | ~8GB/sec        | ~16GB/sec       | ~32GB/sec       | ~64GB/sec
x16        | 4GB/sec         | 8GB/sec         | ~16GB/sec       | ~32GB/sec       | ~64GB/sec       | ~128GB/sec

Putting all of this in practical terms then, PCIe 6.0 will be able to reach anywhere between ~8GB/sec for a x1 slot up to ~128GB/sec for a x16 slot (e.g. accelerator/video card). For comparison’s sake, 8GB/sec is as much bandwidth as a PCIe 2.0 x16 slot, so over the last decade and a half, the number of lanes required to deliver that kind of bandwidth has been cut to 1/16th the original amount.
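As a sanity check, the per-lane figures in the table above can be reproduced from each generation's line rate and encoding overhead (a rough sketch; PCIe 6.0's FLIT and FEC overheads are ignored here, so its figure is the raw rate):

```python
# Reproduce the per-lane bandwidth figures in the table above.
# Line rates (GT/s) and encoding efficiencies per generation; PCIe 6.0's
# FLIT/FEC overhead is omitted for simplicity, so its figure is raw.
GENS = {
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
    "6.0": (64.0, 1.0),       # PAM4; overhead ignored in this sketch
}

def lane_gbps(gen):
    """GB/sec per lane, per direction."""
    rate, eff = GENS[gen]
    return rate * eff / 8  # 8 bits per byte

for gen in GENS:
    print(f"PCIe {gen}: x1 = {lane_gbps(gen):.2f} GB/s, "
          f"x16 = {16 * lane_gbps(gen):.1f} GB/s")
```

Running this recovers the table's progression, including the 0.25GB/sec PCIe 1.0 lane and the ~128GB/sec PCIe 6.0 x16 slot.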

Overall, the PCI-SIG has set a rather aggressive schedule for this standard: the group has already been working on it, and would like to finalize the standard in 2021, two years from now. This would mean that the PCI-SIG will have improved PCIe’s bandwidth eight-fold in a five-year period, going from PCIe 3.0 and its 8 GT/sec rate (still the current standard in 2016) to 4.0 and 16 GT/sec in 2017, 5.0 and 32 GT/sec in 2019, and finally 6.0 and 64 GT/sec in 2021. That would be roughly half the time it took to achieve a similar increase going from PCIe 1.0 to 4.0.

As for end users and general availability of PCIe 6.0 products, while the PCI-SIG officially defers to the hardware vendors here, the launch cycles of PCIe 4.0 and 5.0 have been very similar, so PCIe 6.0 will likely follow in those same footsteps. 4.0, which was finalized in 2017, is just now showing up in mass market hardware in 2019, and meanwhile Intel has already committed to PCIe 5.0-capable CPUs in 2021. So we may see PCIe 6.0 hardware as soon as 2023, assuming development stays on track and hardware vendors move just as quickly to implement it as they have with earlier standards. For client/consumer use, however, it bears pointing out that with the rapid development pace for PCIe – and the higher costs that PAM4 will incur – just because the PCI-SIG develops 6.0 doesn't mean it will show up in client devices any time soon; economics and bandwidth needs will drive that decision.

Speaking of which, as part of today’s press conference the group also gave a quick update on PCIe compliance testing and hardware rollouts. PCIe 4.0 compliance testing will finally kick off in August of this year, which should further accelerate 4.0 adoption and hardware support. Meanwhile PCIe 5.0 compliance testing is still under development; like 4.0, once 5.0 compliance testing becomes available, it should open the floodgates to much faster adoption there as well.

Source: PCI-SIG

Comments
  • Kevin G - Wednesday, June 19, 2019 - link

    I'm curious if PCIe 6.0 will permit PAM4 data transmission but at the reduced clocks of PCIe 4.0/3.0 etc. in a low power mode. This would still be additional bandwidth vs. the previous standards and likely saves some complexity by not having to toggle between PAM4 and NRZ as often. Otherwise I'm curious what the turnaround time for a PAM4-to-NRZ transition would be on the bus and how much energy could be spent thrashing across that transition.
  • SaturnusDK - Wednesday, June 19, 2019 - link

    The PCIe specification dictates a maximum 36dB signal loss for PCIe 4.0, 5.0, and 6.0 alike. That alone should tell you that a multi-bit signal, i.e. a signal with several discrete voltage states, uses more power than increasing the frequency: the loss budget has to be met at the lowest discrete voltage state, meaning the transmitter needs to run at a higher voltage, which incurs a higher power loss. The reason for even using a multi-bit signal is that with PCIe 5.0 we're already at 16GHz, with a corresponding maximum PCB trace length of about 80mm between repeaters or signal conditioners. Doubling the speed again, and thereby halving the trace length again, isn't a feasible option. That is the only reason to go multi-bit. The disadvantages are too great to implement it at lower speeds.
  • mode_13h - Wednesday, June 19, 2019 - link

    It would also be interesting to use 6.0 signalling at lower clock rates, for higher-noise environments that might benefit from FEC.
  • bharatlagali - Monday, June 24, 2019 - link

    I think PCIe 5.0+ will be great for eGPUs. Currently there are only 4 PCIe 3.0 lanes available over TB3. With 5.0, even sticking with just 4 lanes would give as much bandwidth as PCIe 3.0 x16. So, apart from a little protocol overhead, nearly the full potential of the GPU could be utilized, as opposed to the current 10%-40% performance hit.
  • thomasg - Monday, June 24, 2019 - link

    The PCIe improvements don't necessarily translate to Thunderbolt.
    Thunderbolt isn't just limited by PCIe; it's explicitly designed to reach PCIe speeds at as little cost as possible.

    The main issue is guaranteeing signal integrity at long distances in thin, cheap cables.
    This means you can't just put current Thunderbolt hardware on a faster PCIe connection and expect it to get faster.

    Over copper, cross-talk at those enormous clock rates is a significant issue - and PAM4 coding makes this much worse.
    There's a lot of catching up for Thunderbolt to do to achieve PCIe 5.0 signalling over copper, and I don't think the tech will be there for years to come.

    Thunderbolt 3 is already severely limited: while the maximum specified copper cable is 3 meters, PCIe 3-speed signalling only works over 0.5 meters.

    In all likelihood, even with future technology in active cables, the USB Type-C connector will itself become a major issue, and I find it highly unlikely that it could support PCIe 5 signalling.

    Thus, a future, faster Thunderbolt will be an entirely new technology.
    I don't see the eGPU concept itself surviving into that future.

    After all, what's the point? An eGPU box is barely smaller than a full-blown PC that would easily outperform any notebook.
    You might as well make it a full-blown computer and just do data transfer over Thunderbolt, so all your data is available on the small gaming box you connect to your notebook.
  • mode_13h - Tuesday, July 2, 2019 - link

    > a future, faster Thunderbolt will be an entirely new technology.

    Optical. Is there any reason you couldn't cram that kind of bandwidth over something like a Toslink cable?
  • mode_13h - Saturday, July 6, 2019 - link

    After reading the comments on the DisplayPort 2.0 announcement article, I would like to retract this Toslink reference. There's much discussion of optical cabling in that thread.

    Maybe someday, optical cabling will return.
  • JayNor - Sunday, June 7, 2020 - link

    Cooper Lake ended up not having PCIE4.
    Looks like some Tiger Lake chips and Ice Lake Server chips will have PCIE4 in second half of 2020, as well as in some Optane SSD drives.

    PCIE5/CXL is on the roadmap for 2021, both in the Xe HPC GPUs and in the Sapphire Rapids Xeon chips being used in the Aurora exascale project.

    So, as it is turning out, Intel isn't skipping PCIe 4 or PCIe 5. Both were demonstrated in Agilex FPGA chiplets in 2019, and a Stratix 10 DX chip also had PCIe 4, sampled in 2019.
