The Modular PC: Intel’s New Element Brings Project Christine to Life
by Dr. Ian Cutress on October 7, 2019 3:55 PM EST
Way back at CES 2014, Razer’s CEO introduced a revolutionary concept design for a PC with one main backplane, into which users could insert a CPU, GPU, power supply, storage, and anything else in a modular fashion. Fast forward to 2020, and Intel is aiming to make this idea a reality. Today at a fairly low-key event in London, Intel’s Ed Barkhuysen showcased a new product, known simply as an ‘Element’ – a CPU/DRAM/storage combination on a dual-slot PCIe card, with Thunderbolt, Ethernet, Wi-Fi, and USB, designed to slot into a backplane with multiple PCIe slots and be paired with GPUs or other accelerators. Behold, Christine is real, and it’s coming soon.
‘The Element’ from Intel
Truth be told, this new concept device doesn’t really have a name. When we specifically asked what we should call this thing, we were told to simply call it ‘The Element’ – a product that acts as an extension of the Compute Element and Next Unit of Computing (NUC) family of devices. In actual fact, ‘The Element’ comes from the same team inside Intel: the Systems Product Group, responsible for the majority of Intel’s small form factor devices, has developed this new ‘Element’ in order to break out of its iterative design cycle and into something truly revolutionary.
(This is where a cynic might say that Razer got there first… Either way, everyone wins.)
What was presented on stage wasn’t much more than a working prototype of a small dual-slot PCIe card powered by a BGA Xeon processor. Also on the card were two M.2 slots, two SO-DIMM slots for LPDDR4 memory, a cooler sufficient for all of that, plus additional controllers for Wi-Fi, two Ethernet ports, four USB ports, an HDMI video output from the Xeon’s integrated graphics, and two Thunderbolt 3 ports.
The M.2 slots and SO-DIMM slots are end-user accessible by removing a couple of screws from the front. This is in no way a final design, just a working prototype. The exact cooler, styling, and even the product name are not final yet, but the concept is solid.
The product shown used a BGA Xeon processor; however, it was clear that this concept could move to consumer processors as well. As with the current NUC family, it would likely use mobile processors rather than BGA versions of desktop processors, and the Thunderbolt 3 ports on the side would hint towards 10th Generation Ice Lake. That said, Intel stated that all options are still open at this stage of the design.
The card connects through a standard PCIe edge connector, which we believe at this time to be PCIe 3.0. It stands to reason that if this Element becomes a generational product, it would migrate to PCIe 4.0 and PCIe 5.0 / CXL as and when Intel moves its product families onto those technologies. Intel is planning to bundle the card to partners with a backplane – a PCB with multiple PCIe slots. One slot would be designated the master host slot, and the CPU/DRAM/storage combination would go in that slot. Discrete GPUs, professional graphics cards, FPGAs, or RAID controllers are examples of cards that could fit into the other slots.
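To make the topology concrete, here is a minimal sketch in Python of how the host slot and device slots relate. The slot count, names, and occupants are entirely hypothetical illustrations; nothing here reflects a published Intel specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Slot:
    index: int
    role: str                      # "host" for the Element card, "device" for everything else
    occupant: Optional[str] = None

@dataclass
class Backplane:
    slots: List[Slot] = field(default_factory=list)

    def host(self) -> Slot:
        # One slot is designated the master/host slot for the Element card
        return next(s for s in self.slots if s.role == "host")

# A hypothetical four-slot backplane: the Element (CPU/DRAM/storage) in the
# host slot, the remaining slots free for GPUs, FPGAs, RAID controllers, etc.
board = Backplane(slots=[
    Slot(0, "host", "Intel Element (BGA Xeon + SO-DIMMs + M.2)"),
    Slot(1, "device", "Discrete GPU"),
    Slot(2, "device", "FPGA accelerator"),
    Slot(3, "device", None),
])

print(board.host().occupant)       # -> Intel Element (BGA Xeon + SO-DIMMs + M.2)
```

The point of the model is simply that the compute card always owns the host role, while every other slot is an ordinary downstream PCIe device.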
In these configurations the CPU compute card is always the host, rather than an attached device. Intel does already offer CPUs-on-a-card as a device in the form of its Visual Compute Accelerator (VCA), which puts three Xeon E3 CPUs onto a slave card accessed from the host. We asked if Intel has plans for its Element cards to be used as slave cards in this way, but Intel stated there are no current plans to do so.
The backplane would also be the source of power. Power fed into the backplane would supply 75W to each of the PCIe slots, as well as to any other features such as system fans or additional on-backplane controllers. This could come from a standard PSU or from a 19V input, depending on the exact configuration of the system. The Element card we saw also had an 8-pin PCIe power connector, suggesting another 150W could be delivered to the card, for a total of 225W for the CPU, DRAM, and storage, which raises the question of whether the card could support something like a Core i9-9900KS.
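As a quick sanity check on that budget, here is a back-of-the-envelope calculation. The 75W and 150W figures are the standard PCIe allowances mentioned above; the component power draws are rough assumptions for illustration, not Intel numbers.

```python
# Back-of-the-envelope power budget for the Element card.
SLOT_POWER_W = 75          # PCIe x16 slot allowance
EIGHT_PIN_W  = 150         # one 8-pin PCIe auxiliary connector

budget_w = SLOT_POWER_W + EIGHT_PIN_W
print(f"Total card budget: {budget_w} W")          # 225 W

cpu_tdp_w     = 127        # e.g. a Core i9-9900KS at its rated TDP (assumption)
memory_ssd_w  = 15         # two SO-DIMMs plus two M.2 drives (rough estimate)
controllers_w = 10         # Ethernet, Wi-Fi, Thunderbolt 3, USB (rough estimate)

headroom_w = budget_w - (cpu_tdp_w + memory_ssd_w + controllers_w)
print(f"Headroom at rated TDP: {headroom_w} W")    # 73 W under these assumptions

# Note: an all-core turbo load on a 9900KS-class part can draw well beyond its
# rated TDP, which is why whether the card could sustain it is not clear-cut.
```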
On the topic of cooling, the demo unit shown had a very basic cooling setup. As stated, Intel said that this is in no way the final version of what it is trying to do here. When asked if it would be easy enough for users to liquid cool the CPU, the Intel spokesperson said the design would be customizable, though it would be up to component manufacturers to enable that themselves.
For board partners, Intel stated that it does not see the Element form factor as something partners would create themselves. In essence, there would be no AIB partners as in the GPU market, but OEMs that build pre-built systems could take the Element card, customize on top of the Intel design, and develop their own backplanes and such.
Ultimately, with the Element Intel wants to make integrated system upgrades easier. Customers keep the chassis, keep the system setup, keep the backplane, and all they change is the Element card to get the latest performance and features. This was the ultimate goal of something like Razer’s Project Christine, and it is certainly something to work towards. However, keeping the storage on the Element rather than on a separate add-in card is somewhat limiting, as an upgrade would require swapping the drives over. This might not be much of an issue if one of the PCIe slots on the backplane were used for M.2 drives (or if drives were mounted on the backplane itself).
Intel stated that the plan is for the Element to see daylight in the hands of OEMs sometime in Q1 2020, likely towards the back end of the quarter. Our spokesman said that exact CPUs and configurations are still in flux, and as one might expect, so is pricing. Exactly what the Element will be called is still a mystery, as is how it will be packaged for end users or OEMs.
Given that this is a product from the same group as the NUC, I fully expect it to follow the same roll-out procedure as other NUC products. Personally, I think this form factor would be great if Intel could standardize it and open it up to motherboard partners. I imagine that we might see some board partners do copy-cat designs, similar to how we have several variations of NUCs on the market. Intel stated that it has a roadmap for the Element, which is likely to extend over multiple generations. I theorised a triple-slot version with an Xe GPU, and the idea wasn't immediately dismissed out of hand.
We asked about RGB LEDs. The question received a chuckle, but it will be interesting to see whether Intel limits the Element to a professional environment or opens it up to more run-of-the-mill users.
We’ve politely asked Intel to let us know when the Element is ready so we can test it. Our Intel spokesman was keen to start sampling at that point, stating that the sampling budget in this context is not a problem. I think we’ll have to hold them to that.
Related Reading
- Intel Launches the NUC Compute Element for Modular Computing Systems
- Intel's Bean Canyon (NUC8i7BEH) Coffee Lake NUC Review - Ticking the Right Boxes
- EGlobal's NUC-Like PC Packs Intel’s Unlocked Hex-Core i7-8750HK CPU
- Shuttle’s X1 Now Available: NUC-Like PC With a GeForce GTX 1060 GPU
- Memory Frequency Scaling on Intel's Skull Canyon NUC - An Investigation
86 Comments
doggface - Monday, October 7, 2019 - link
I am kind of interested in this concept for a dual-PC build. If you could mount your NAS + drives in the same chassis, and via PCIe have power + a 10Gb Ethernet connection to the host system as well as external connections, you could externalise the load without needing a new PSU/case.
UpSpin - Tuesday, October 8, 2019 - link
It's already there, just Google for Dell VRTX.
Kevin G - Monday, October 7, 2019 - link
Kinda weird that Intel is going through this idea, as this isn't their first time. As pointed out, the VCA cards throw several CPUs onto a card. But Intel did the same thing to a degree with the first commercial Xeon Phi. There is also the Open Pluggable Specification (OPS), designed for more commercial video applications, but it's a similar concept where the host PC is on a removable modular card.
I will say that a card like this is interesting if Intel would permit the usage of a 'sub host' for virtualization environments. It'd provide some physical separation in terms of execution domains while permitting IO to be shared or dedicated based upon the card. With the recent spate of Intel security issues, this would provide another layer of protection for guest OSs. It'd also let VM farms mix latency-sensitive/serial workloads that benefit from high clocks (which benefit from the >4 GHz consumer parts more) into the wider, more throughput-oriented architecture of traditional servers.
For the average mass market consumer, the only niche I see this fulfilling is the video streamer who wants to game and encode on the same box. This would certainly help there, but it is difficult to imagine another scenario where this would be ideal.
I also see the usage of a normal PCIe slot as only beneficial to the consumer market. Realistically, Intel should be leveraging SFF-TA-1002 with a high-power connector for server usage. Being able to pop these out of server hot-swap bays simplifies things greatly, at least in terms of node expansion. The high-power connector at +48V can provide some truly insane amounts of power, around 1 kW in a single slot, and ~650W at +12V. These are also rated up to 112 Gbit per serial lane using PAM4 encoding (see PCIe 6.0).
mode_13h - Tuesday, October 8, 2019 - link
> Intel did the same thing to a degree with the first commercial Xeon Phi.
No, I don't think so. Did it talk to other peripherals over PCIe? I highly doubt that.
The Phi add-in cards were just self-contained accelerators that happened to be built around x86-64 CPUs.
alufan - Tuesday, October 8, 2019 - link
Not weird but calculated. Intel and other similar suppliers do this for repeat sales and to lock you into a deal; think Gillette and razors. Once you buy the basic idea you're trapped into the whole system unless you spend big to leave, and of course you have a nice Intel account manager to keep you in line with offers and discounts to further reduce your horizon. That's why Intel has a whole ecosystem for you to buy into. AMD really is bringing the fear.
escksu - Tuesday, October 8, 2019 - link
Oh, it's not a new concept. CPU backplanes have been around for a long time. They're still here today but not really popular. Below is an example of a modern one:
https://www.ieiworld.com/_upload/news/images/PEMUX...
danielfranklin - Tuesday, October 8, 2019 - link
Isn't this really more of a blade server? Or does the PCI-E interface enable some sort of inter-system transport for a purpose I'm not thinking about?
Gadgety - Tuesday, October 8, 2019 - link
I can see the advantage to Intel of coming up with new products. I'd rather see the industry establish a new form factor that enables smaller-footprint (than M-ATX) dual PCI-E motherboards. I would have liked to have seen some specific dimensions for the Element.
mode_13h - Tuesday, October 8, 2019 - link
The only way I'd support this is if it had some advantage over current PCs, such as cooling.
Otherwise, I see it as just a way for parts makers to boost margins at the expense of weight, bulk, and reduced selection, only to benefit a relatively small number of users who are unwilling to open a standard PC case.
PCs are modular enough. This is just wasteful.
edzieba - Tuesday, October 8, 2019 - link
It's an oooooold system layout. Blade servers, the ol' CompactPCI setup, etc.
Not sure why Razer's fancy render got singled out for praise, they didn't even build one!