59 Comments

  • trparky - Friday, April 5, 2019 - link

    Whoa, those are eye-watering prices right there.
  • Targon - Friday, April 5, 2019 - link

    And where are the benchmarks that show that these actually make sense for anyone?
  • Ian Cutress - Friday, April 5, 2019 - link

    Currently being run
  • p1esk - Friday, April 5, 2019 - link

    https://arxiv.org/abs/1903.05714
  • skavi - Friday, April 5, 2019 - link

    Thanks for the link!
  • Diogene7 - Friday, April 5, 2019 - link

    @Ian Cutress: If Intel Optane DC Persistent Memory is successful, how many years do you think consumers will have to wait before they can buy laptops with processors that support Intel Optane PM (or an equivalent Storage Class Memory (SCM))?

    I am really, really looking forward to the day when consumers can buy laptop computers that use Storage Class Memory (SCM) on the memory bus as the main data store instead of an NVMe SSD, chiefly to lower data access latency: I think it could improve the overall user experience (better responsiveness) far more than a faster processor would, a bit like the jump from an HDD to an SSD.
  • Valantar - Monday, April 8, 2019 - link

    Would there be any significant changes beyond boot times and cold application launches? Windows' caching system works very well, and most application launches these days happen very quickly. Other than that, consumers don't work with datasets big enough for SCM to make a noticeable difference. This _might_ make a difference in very heavy applications (video editors, complex 3D modelling), but again, only for the initial loading of the app and/or data in question. And frankly, the difference between waiting, say half a second or three seconds for a file to load is negligible (whereas the difference from HDDs to SSDs was often in the minutes-to-seconds range). In day-to-day usage, the difference would be negligible (and the added latency compared to RAM could even mean a slower overall user experience).

    IMO, this kind of tech has value in the consumer space for making computers turn on (pretty much) instantly and similar quality-of-life improvements, but other than that, I don't see much value in this. Moving from a fast SATA SSD to NVMe is barely noticeable - and of course both suffer from the poor low-queue-depth random performance common to all NAND, but when things are as fast as they are, even a 10x improvement is unlikely to be very noticeable.
  • azazel1024 - Tuesday, April 9, 2019 - link

    That is what I'd be looking for as the slightly-above-average consumer. If I could slot in a 64GB module that stored the operating system in persistent memory, so cold boot times were as fast as resuming from standby (or faster), that would be awesome. Same thing with moving around within the OS: extremely responsive. It would also reduce power budgets (I assume), with the option of S4 hibernation allowing resume basically as fast as or faster than S3, at lower power than S3.

    Now a 512GB module to store all my applications on would be nifty AF, but if the price were decent (not nearly $7 a GB, more like ~$1.50-2 a GB) then having the OS stored in it would be great.
  • Diogene7 - Friday, April 5, 2019 - link

    Looking forward to read the results of the benchmark :).

    I don’t know if it was already planned, but would it be possible to include in the list some tests related to consumer usage, like:
    1. Time to boot full windows
    2. Time to wake from sleep
    3. Time to launch Microsoft Word
    4. Time to launch a big game
    ...
    And to compare them with the same platform using:
    1. A Samsung NVMe SSD 970Pro
    2. An Intel Optane NVME SSD

    It would give an idea of how much of a latency improvement a consumer might hope to get in the future if/when Intel Optane Persistent Memory becomes available on consumer platforms... Just curious to get an idea...
  • twtech - Saturday, April 6, 2019 - link

    I suppose those are the sort of things many Anandtech readers would be interested in, but none of those would be factors the people who would consider buying a server with Optane memory would care about. :)

    I can think of several different servers we are using at work that could potentially benefit from Optane because of heavy disk usage. The data is recreate-able - so it's not critical to be backed up, and there isn't that much of it; less than a TB - but it is read from and/or written to extremely frequently.

    If we had servers with persistent memory, we'd just keep the affected data in persistent memory instead of storing it on a SSD.
  • duploxxx - Monday, April 8, 2019 - link

    add a ram drive, marginal cost.
  • Toadster - Monday, April 8, 2019 - link

    great example of using persistent memory as cache in a 2-tier storage HCI cluster https://blogs.technet.microsoft.com/filecab/2018/1...
  • name99 - Sunday, April 28, 2019 - link

    It's not clear quite what that proves. The Optane is used as "cache", which to me, suggests the persistence is not important, all that's important is that a large direct address space is provided. Which in turn suggests that some sort of alternative like I suggested (lots of DRAM behind a controller that re-interpreted the JEDEC commands) would work just as well IF connected to a memory controller that understood this.

    I keep raising this point because Optane seems like a very striking case of a technology that's being pushed by a company in the face of alternatives that are just as feasible. We see this a LOT in the software space, especially when it comes to standards --- everyone wants THEIR way of solving a problem to be in the spec, but the Huawei solution, the Qualcomm solution, and the Ericsson solution are all basically just as good; it's politics that determines which one goes in.

    We've seen it also at the consumer level, with things like Blu-Ray vs HD-DVD.

    But it's rare that we've seen this at such a fundamental eco-system level. In the past a particular tech choice (DDR-n, or flash, or using a GPU) was so obviously superior to the alternatives that the basic idea was agreed upon by everyone; it was only minor details that differed.
    Optane feels, however, very much more like Beta vs VHS, there ARE tech alternatives to Optane that could be pushed if someone wanted, and the supposed superiorities of Optane to such alternatives have melted away (rather than increasing!) every year as we keep getting more details of the tech.
  • IntelUser2000 - Saturday, April 6, 2019 - link

    Diogene7:

    It doesn't matter right now. The Optane PMM(Persistent Memory Module) devices can act like RAM, but persistent. You can do lots of cool things with persistent memory that's really fast.

    However, nothing in the consumer space supports it. It just arrived in the server space, so it's a start, but how many years away is it for consumers? Then you need a Windows that supports it, and more time for applications as well.

    Again, very cool stuff, but nothing you'll see benefits from yet.
  • Diogene7 - Saturday, April 6, 2019 - link

    @IntelUser2000: Thanks for the clarification :). I know that, as of 2019, there is very little support (if any) in the consumer space, which is quite frustrating for me.

    When I went from a computer with an HDD to one with an SSD, I noticed tremendous improvements in user experience thanks to lower-latency data access.

    On a personal basis, my interest would be to completely replace an NVMe SSD with Storage Class Memory (SCM) on the memory bus to substantially lower data latency.

    At first, I would only need a consumer operating system (OS) like Microsoft Windows or Apple iOS to present SCM on the memory bus as a « traditional » SSD; since it sits on the memory bus, it would have much lower data access latency (hopefully at least a 10x improvement).

    Then, at a later stage, applications could certainly be optimized, but as each individual application would need to be optimized by its developer, that will take much longer.

    From my point of view, this is exactly the kind of innovation smartphone manufacturers should be investing in and focusing on, instead of, as of 2019, smartphones with flexible displays...

    In 2019, the first smartphones with flexible displays will cost 500€ to 1000€ more than a premium smartphone, and I would be ready to pay 500€ more for a smartphone that used SCM on the memory bus in place of UFS NAND flash storage.

    I think this is exactly what Apple should be investing in and working on before others, as it is something that brings real, visible, tangible value to the end user: the sooner, the better!!!
  • IntelUser2000 - Saturday, April 6, 2019 - link

    You can get Optane NVMe SSDs to get 7-10x latency advantage over the best, non-Optane NVMe SSDs, today.

    But you won't get anywhere near a 7-10x advantage, because your OS and software aren't built to take advantage of it. In loading, Optane SSDs are maybe 40% faster at best in super heavy games. Most of the time, it'll be 10-20% at best.

    You got big gain from going to SSD from HDD because HDDs were that much slower and software was far away from being a bottleneck.

    Even SSDs are maybe 2-4x as fast at loading as HDDs, despite the latency being 100x better.

    Another reason you don't get benefit is because for *decades* software has been built to minimize using HDDs.

    Optane as a storage device on the memory bus will get you even less benefit because of that bottleneck. Optane as RAM has the potential to revolutionize things, but it'll take 3-5 years to see real changes (once it's available) and 10-15 years after that to fully take advantage of it, maybe more.
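    The software-bottleneck point above is essentially Amdahl's law. A quick sketch, where the 20% storage share of load time and the 10x device speedup are purely illustrative assumptions, not benchmarks:

```python
# Amdahl-style sketch of why a 10x faster storage device rarely makes an
# application load 10x faster. The numbers below are illustrative
# assumptions, not measurements.

def app_speedup(storage_fraction, storage_speedup):
    """Overall speedup when only the storage share of the work gets faster."""
    return 1 / ((1 - storage_fraction) + storage_fraction / storage_speedup)

# If storage I/O is only 20% of a game's load time, a 10x faster device
# speeds up the whole load by only ~22%.
print(round(app_speedup(0.20, 10), 2))
```

    With those assumptions the overall gain is about 1.22x, the same ballpark as the 10-20% figures quoted above.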
  • IntelUser2000 - Saturday, April 6, 2019 - link

    So to clarify, there isn't "little to no support" for Optane on the memory bus for consumers like you and me. There's nothing. Zero. Zip. Nada. There's "little support" in servers, and that's what the launch enables.

    If you are a subscriber to Anandtech forums I can explain it to you in detail in PM.
  • twtech - Sunday, April 7, 2019 - link

    I'm guessing the basic/simple implementation will be to have the Optane memory show up as a hard drive in Windows. That would be sufficient for our needs for the time being.

    In the future, it should be possible to more or less eliminate the loading step for some data, and access it directly by address from where it resides - that's where the real benefits will come in.

    But for the time being, it sounds like it can be effective as an alternative to running a RAM drive.
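    The "access it directly by address" model described above can be sketched with an ordinary memory-mapped file; on a DAX-capable filesystem backed by persistent memory, the same mapping would bypass the page cache and touch the media directly. The file name and record layout here are invented for illustration:

```python
# Sketch of "access data by address instead of loading it": map a file
# and index into it like memory. An ordinary temp file stands in for a
# DAX persistent-memory mapping.
import mmap, os, struct, tempfile

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    f.write(struct.pack("<1000q", *range(1000)))   # 1000 little-endian int64s

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)
    # No read() call: record 42 is just the bytes at offset 42 * 8.
    value, = struct.unpack_from("<q", m, 42 * 8)
    print(value)   # 42
    m.close()
```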
  • tuxRoller - Monday, April 8, 2019 - link

    Err, other than Linux? And anything that uses dax filesystems?
    Definitely not optimized but still a good deal better than anything else that's remotely consumer oriented.
  • ilt24 - Saturday, April 6, 2019 - link

    @Diogene7..."Time to boot full windows"

    What I think would be very interesting would be to see recovery times (from a crash) for several types of servers (DB, VM Host, Web...) with and without the Persistent Memory.
  • duploxxx - Monday, April 8, 2019 - link

    you forgot the biggest issue... it's the software.
    the laptop is not slow because of the CPU, the laptop is not slow because of RAM or SSD/NVMe...
    the laptop is slow because of stupid Windows
  • 0ldman79 - Tuesday, April 9, 2019 - link

    ^^This

    If you want to see something fast, install Windows 98, XP, Vista or 7 on a machine that runs 10 decently.

    It is stupid fast.

    The problem is that the people making the OS decide to use *all* available resources just for a stupid interface.

    The OS should be thin and light and you only notice it when something breaks. Right now Windows 10 has higher resource requirements than most of the games I play. Admittedly most of my games aren't terribly new, but still, I can run any of the Crysis games perfectly on a machine that is absolutely slow as hell *just* running Windows 10 with nothing else installed.

    How that became the norm I'll never understand. Windows 3.11 was Microsoft's last small and light OS, *maybe* Windows 2000.
  • Kevin G - Saturday, April 6, 2019 - link

    The test I'm interested in is where the standard storage stack is removed entirely and the OS sees no line between a data file and memory consumed by a running application. I'm not sure what out there actually exists that follows this so-old-it-is-now-new-again paradigm for OS and applications. The concept of separate application storage (RAM) and persistent storage started in the 1960's and as an idea has stuck for half a century. That idea made sense due to costs and the technology available.

    Otherwise I'd set my expectations for Optane DIMMs to be the fastest storage medium short of a real DRAM based RAM disk. For applications that can see and address this memory segment directly as memory but aware of its persistent nature, I would expect performance to be even more impressive.

    Using Optane DIMMs as memory, I would expect performance to generally tank due to the higher latency and lower bandwidth than DRAM, given the same aggregate memory capacities. However, looking at it based upon price, i.e. having several times more Optane DIMM capacity than DRAM, the performance gap will close based upon data set size. In other words, a pure Optane DIMM solution should be faster than a DRAM + traditional storage tier structure when the data set exceeds the DRAM but can fit into the Optane DIMM capacity.

    All told, I'd expect a mix of results which falls into what we know about the technology ahead of time. The question is what scenarios are going to be tested.
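    The "gap closes with data set size" argument above reduces to a weighted-average latency calculation. A back-of-envelope sketch; the latency figures (DRAM ~100 ns, Optane DIMM ~350 ns, NVMe SSD ~80 µs) and the 90% hit rate are rough ballpark assumptions, not measurements:

```python
# Average access latency of a two-tier memory setup, in nanoseconds.
# All figures are illustrative assumptions.
DRAM_NS, OPTANE_NS, NVME_SSD_NS = 100, 350, 80_000

def avg_latency(dram_hit_rate, miss_ns):
    """Weighted average: DRAM hits plus misses served by the next tier."""
    return dram_hit_rate * DRAM_NS + (1 - dram_hit_rate) * miss_ns

# With 90% of accesses served from DRAM and 10% spilling over:
print(round(avg_latency(0.9, OPTANE_NS), 1))    # Optane DIMMs behind DRAM
print(round(avg_latency(0.9, NVME_SSD_NS), 1))  # NVMe SSD behind DRAM
```

    Under these assumptions the Optane-backed tier averages ~125 ns per access versus ~8 µs for the SSD-backed tier, which is the sense in which extra Optane capacity beats a DRAM + storage structure once the working set spills out of DRAM.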
  • FunBunny2 - Tuesday, April 9, 2019 - link

    "The test I'm interested in is where the standard storage stack is removed entirely and the OS sees no line between a data file and memory consumed by a running application."

    since I posted on 7 April, I've found a number of research and development efforts to do exactly that for RDBMS. just type 'RDBMS persistent memory' into your favorite search engine. both linux (4.7) and windows (?) have added hooks for this stuff. the database engines have to be tweaked to know that they're talking to this stuff. on the whole, databases run faster on fewer resources.
  • Schmov17 - Friday, April 5, 2019 - link

    Is it though? CompSource has the 128GB module for ~$900. Traditional 1x 128GB DIMMs on the same site go for anywhere between $2000-4500.
  • trparky - Friday, April 5, 2019 - link

    When you compare the prices to a standard SSD they're eye-watering.
  • dullard - Friday, April 5, 2019 - link

    Why would you compare memory pricing per GB to a SSD pricing per GB? Completely different needs and applications. Yes, Optane is not as fast as DDR memory, but when used in the right ratio you can have some data in DDR and the rest in Optane and not have a significant slowdown. Try that with a typical SSD and your application comes to a crawl.
  • alfalfacat - Friday, April 5, 2019 - link

    It's a dimm though. It looks like memory, not a block device. Why would you compare to a standard SSD? I mean, that would be as silly as saying the cost is high when you compare it to a stack of dvds.
  • saratoga4 - Friday, April 5, 2019 - link

    5400 RPM hard drives are even cheaper!
  • emn13 - Saturday, April 6, 2019 - link

    I'd argue that there is no such thing as a traditional 1x 128GB DIMM - DDR doesn't usually come in that density, so you're looking at a niche product right there. But at lower densities, this is approximately equal to the price of DRAM.

    And that means this price is very, very high.

    You're paying for persistence and high density with money and performance (unless there's some upset, it appears very unlikely Optane will be perf-competitive with DRAM). That sounds like a liveable niche, but a pretty damn small one. And although I can't find pricing for Samsung's SZ985 - that may well undermine even that niche, by probably offering higher densities and lower prices (at the cost of lower perf, but still not that much lower).
  • Flunk - Saturday, April 6, 2019 - link

    Server tech is expensive.
  • name99 - Sunday, April 28, 2019 - link

    Remember the good old days when one of the Optane bullet points was
    - cheaper than DRAM
    ?

    Ah, good times.
    The 128GB price pretty much exactly matches DRAM today: 4x32GB is also about $890.

    In retrospect, we should have expected this when the marketing slides went from
    https://www.extremetech.com/extreme/211087-intel-m...
    to
    https://www.extremetech.com/extreme/270270-intel-a...

    Notice how "cheaper" and "lower power" have been replaced by "improved memory capacity"...
    Which is more or less true --- but also, to some extent, reflects a deliberate choice by Intel not to allow the support of more DRAM as pure DRAM, even in some sort of alternative slower configuration.

    So what's the ACTUAL win right now?
    - cheaper if you just want capacity. No.
    - lower power? Unclear on the exact numbers, but it doesn't seem to be less than DRAM. It probably depends on your exact usage pattern (read-heavy is lower power, write-heavy is higher power) relative to equivalent DRAM
    - lower spatial volume? Yes, but not dramatically so. My guess is that alternative models (think something like densely populated DIMMs of stacked DRAM dies, with a controller that remapped the standard JEDEC commands to route on the DIMM) would have been a feasible alternative. (We don't need HBM levels of sophistication because we're not competing with DRAM, but with Optane performance)

    What you DO get are
    + it works in Intel servers, and the alternatives do not. (Unless someone decides to build something like I suggested; and even then Intel may not support it.) So if you NEED that much random access memory attached to your chip, it's the only game in town.
    + it's persistent. The value of this remains unclear to me. Intel seems to be trying hard to push both stories: that it's super cool if you need persistence, but hey, it's also super cool if you just need more RAM. I honestly don't know which fraction of the market will use persistence, and which considers it more of a hassle (it is, after all, one more security attack surface that has to be treated as such).

    It's certainly POSSIBLE that n years from now the price will drop a lot more than DRAM, and/or that the capacity in the same spatial volume will rise a lot more than DRAM. But I think anyone who believes Intel's promises in this respect right now is taking one heck of a gamble.

    It will be interesting to see if a company like IBM fires back in disgust with a solution like I'm suggesting, a way to allow chips to see a much larger (and slightly slower) pool of RAM than is available through straight JEDEC, and at essentially Optane prices.
  • CharonPDX - Friday, April 5, 2019 - link

    So, assuming the 512 GB module is exactly double the 256 GB module (*EXTREMELY* doubtful) that would be an eye-watering $33,000+ for the platform-maximum 3 TB per CPU.
  • Ian Cutress - Friday, April 5, 2019 - link

    2 TB of 128GB LRDIMMs will set you back around $64k. So by that logic, Optane gives you 1.5x the capacity for about half the price.
  • IntelUser2000 - Friday, April 5, 2019 - link

    Pre-order prices are always higher than MSRP, and they decrease toward MSRP as the launch date approaches.

    The 128GB version in particular is similar in GB to the P4800X when you include the Memory Drive feature.

    Considering the much better endurance, along with advantages that come in a DIMM form factor, I think for most cases it can replace the P4800X completely. Which is what it should do as the technology doesn't fit so well as an SSD.
  • IntelUser2000 - Friday, April 5, 2019 - link

    I meant to say
    "The 128GB version in particular is similar in price per GB to the P4800X when you include the Memory Drive feature."
  • Kevin G - Saturday, April 6, 2019 - link

    The thing about an Optane SSD is that it should be able to exceed the capacities found in the DIMM format and be relatively cross platform. So conceptually Intel could make an 8 TB PCIe form factor card given the proper NVMe controller. Such a product does make sense on the technical side. I do think the cross platform nature of NVMe is a reason why Intel isn't pushing capacity for their data center Optane SSD's as they can be used by AMD Epyc and IBM POWER platforms. Intel's move here is more of a business politics decision than lack of a niche for their product.
  • IntelUser2000 - Saturday, April 6, 2019 - link

    I don't think so. The P4800X at 1.5TB is already very, very expensive. At 8TB, it would cost $30-40K.

    There's also the thing where only 128Gbit(16GB) dies are being manufactured. You can stack multiple of them in a package, but as of now the largest they have is 8 per stack, so 128GB per IC.

    They said their goal is using QLC to offer capacious storage, and Optane to offer new capabilities and performance levels. Rob Crooke, the guy leading the storage group also said Optane will be progressively higher performance, and NAND higher capacities and lower costs.
  • tuxRoller - Monday, April 8, 2019 - link

    The issue is that nvme itself has more overhead (~500ns, iirc) than 3dxp has latency (variable, but appears to always be less than 500ns).
  • IntelUser2000 - Monday, April 8, 2019 - link

    500ns? Think 10x that. It's 5us or more.
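    Whichever figure is right, the point of the dispute comes down to one sum: once the protocol and software stack costs more than the media itself, the media's latency stops mattering. Both overhead figures from the exchange above are shown; the ~350 ns media latency is a commonly cited ballpark, not a measured value:

```python
# How much of an Optane NVMe access is spent outside the media itself,
# for the two stack-overhead figures debated above. All numbers are
# rough assumptions in nanoseconds.
MEDIA_NS = 350   # ballpark 3D XPoint media read latency

for stack_ns in (500, 5_000):
    total = MEDIA_NS + stack_ns
    print(f"stack {stack_ns} ns -> total {total} ns, "
          f"{stack_ns / total:.0%} spent outside the media")
```

    Either way the stack dominates, which is the case for putting the media on the memory bus instead of behind NVMe.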
  • zodiacfml - Friday, April 5, 2019 - link

    If that is the case, it is poor value, better to spend more on DRAM. However, can be of value with servers that have limited DIMM slots.
  • twtech - Saturday, April 6, 2019 - link

    For some applications, the fact that Optane is persistent is potentially a nice advantage.

    Basically, if you had an application where you would have considered a custom DRAM-based storage solution with battery backup - the custom nature of which adds significant cost on top of the already expensive memory - Optane is a potentially cost-effective solution.

    What sort of scenarios would that apply to? A lot, actually. If you have a server that stores a (comparatively) small amount of data - but that data is being read from and/or written to extremely frequently - Optane could be an ideal solution.
  • Death666Angel - Saturday, April 6, 2019 - link

    "Basically, if you had an application where you would have considered a custom DRAM-based storage solution with battery backup"
    I don't really get that. Are there applications that provide battery power to just the DRAM when the whole system is offline (and that is not called "suspend to DRAM")? No business I know runs anything server-like on non-battery-backed-up systems. So the only advantage of Optane to DRAM I see is that it saves the seconds it takes the PCIe SSD (3GB/s) to copy the old DRAM data from and to DRAM. Considering server boot times are in the order of minutes already, shaving off that little bit doesn't seem like a huge benefit, especially if general performance suffers. And servers are rarely shut down as well.

    "small amount of data - but that data is being read from and/or written to extremely frequently"
    So it's a small amount of data and the server is never offline? How is the persistent thing of Optane an advantage in that scenario? And MSRP wise, Optane isn't cheaper than DDR4 ECC memory. And that is significantly faster still.

    Now, if Optane memory allows TBs of semi-high-speed DRAM like storage instead of just hundreds of GBs (to 1.5TB?) like DRAM does, it might have a niche, especially if it is cheaper. But your argument doesn't hold much water in my experience.
  • Kevin G - Saturday, April 6, 2019 - link

    There are edge cases where nonvolatile memory is still helpful. They generally revolve around the data center monkey pulling the wrong power cable, or a failed power supply on a running system.

    This also opens up interesting possibilities like hot swapping CPUs and resuming where they left off (though some select systems already have this capability to an extent with normal DRAM). A slightly different scenario would be interrupting execution and then moving the memory to a different system to resume after a different hardware failure/upgrade. Kind of niche but I can see some utility considering how arcane server grade software licensing is.
  • IntelUser2000 - Saturday, April 6, 2019 - link

    Initially the persistence feature won't be used much, because it requires optimizing the application to take advantage of it. However, the expanded memory capacity of Memory Mode doesn't. At higher capacities, DRAM is significantly more expensive.

    VMs, in-memory databases, and even cloud servers are areas where it can be used. Enterprise would probably also be happy to get larger memory capacities. On Ars Technica, this guy was talking about being able to use larger datasets to speed up machine learning applications.

    You can also use the DIMMs as a block storage device, so it becomes a really fast SSD. The price/GB of the 128GB DIMM is about on par with P4800X, so it can displace them.
  • Threska - Sunday, April 7, 2019 - link

    Feeding an SIMD architecture.
  • Kevin G - Saturday, April 6, 2019 - link

    Just to point out, early talk of these Optane DIMMs pointed toward capacities of 2 TB per module, just not initially at launch. The release of Optane DIMMs is roughly 18 months behind schedule, as they were originally targeted as a Skylake-SP feature. It is odd that we are not seeing 1 TB Optane DIMM modules, as Intel should have had time to build up inventory or migrate their 3D XPoint production to a newer node.

    Right now Samsung is offering 256 GB LR-DIMMs for those seeking maximum capacity, and they didn't exist when Intel initially announced their Optane DIMM initiative. So releasing an Optane DIMM with only twice the capacity of a DRAM option is kind of disappointing. It makes it more difficult for Intel to make their case, since the capacity side of the equation isn't as strong. (i.e. 128 GB vs. 1 TB makes a better case for Optane DIMMs than a 256 GB vs. 512 GB match-up does.)
  • Valantar - Monday, April 8, 2019 - link

    TechPowerUp reports 512GB modules for sale at CompSource for $7,816. So more like $47,000 :)
  • olafgarten - Friday, April 5, 2019 - link

    If that price comes down a bit and these come to Desktop Platforms (Or at least HEDT) this could be very useful for machine learning on large datasets.
  • shahmanish - Friday, April 5, 2019 - link

    ColfaxDirect is offering all the 3 modules at great prices https://colfaxdirect.com/store/pc/viewCategories.a...
  • Toadster - Monday, April 8, 2019 - link

    that's ~$5.43/GB
  • FunBunny2 - Saturday, April 6, 2019 - link

    I think I hear, in the distance, the death rattles of client-side coders of RDBMS. "We prefer to do transactions in our clients". the InterTubes client machine reduced to VT-220 status. my heart goes flutter flutter.
  • mrvco - Saturday, April 6, 2019 - link

    At those prices, Apple must be involved somehow.
  • Valantar - Monday, April 8, 2019 - link

    I don't think you're quite familiar with enterprise-grade hardware prices ...
  • Supercell99 - Sunday, April 7, 2019 - link

    What's the use case for this?
  • Xajel - Sunday, April 7, 2019 - link

    While the idea of having 128GB of semi-RAM is awesome, today's consumer-grade NVMe drives are very competitive with Optane: they offer higher bandwidth at lower cost. The only things a consumer Optane NVMe drive provides are better IO and better performance consistency, which in the real consumer world don't count for much; consumers will mostly prefer higher capacity at the same price as regular NVMe drives.

    What could those PM DIMMs do for consumers, then?
    Maybe for prosumers, mainly those who do content creation or other computing hobbies that can benefit from this. But it's still hard to justify: they can always get a larger standard NVMe drive and use it as a cache drive. A lot of applications already support that kind of thing, while almost nothing yet supports PM DIMMs.
  • FunBunny2 - Sunday, April 7, 2019 - link

    the most important use for persistent "RAM" is the RDBMS. one gets rid of all those sync points between caches, DRAM, SSD, HDD and so on. transactions are only written once, to one place. yes, the engines will need to know when they're writing to such machines, and I suspect the OSs as well. but, man, what a difference there'll be.

    all industrial RDBMS use memory buffers, often under their engine's control rather than the OS, in addition to all those other stops along the way to persistence. just getting rid of those makes a huge difference, as well.
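    The "written once, to one place" idea can be sketched with a plain memory-mapped file standing in for a DAX persistent-memory mapping. The record layout and file name are invented for illustration, and on real pmem the flush would be a cache-line flush plus fence (as in PMDK's libpmem) rather than an msync-style call:

```python
# Sketch of committing a record with one in-place write plus a flush,
# instead of a journey through engine buffer -> OS cache -> SSD.
# An ordinary mmap'd file stands in for a DAX persistent-memory mapping.
import mmap, os, struct, tempfile

path = os.path.join(tempfile.mkdtemp(), "table.pmem")
with open(path, "wb") as f:
    f.truncate(4096)                    # pre-size one "page" of 16-byte slots

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)
    # Commit transaction 7: write (txn_id, value) at its slot, then flush.
    struct.pack_into("<qq", m, 7 * 16, 7, 99)
    m.flush()                           # on real pmem: cache-line flush + fence
    txn, val = struct.unpack_from("<qq", m, 7 * 16)
    m.close()
```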
  • vaibhav24 - Wednesday, April 10, 2019 - link

    #MemoryModule Market Analysis - http://bit.ly/2D5NHwp
    The Memory Module industry concentration is not high; there are more than a hundred and twenty manufacturers in the world, and high-end products come mainly from the U.S. and Western Europe.
    Global giant manufacturers are mainly located in China and Taiwan. The manufacturers in China have a long history and unshakable status in this field; manufacturers such as Kingston (Shanghai) and Ramaxel (Suzhou) have a relatively higher level of product quality. As for Taiwan, ADATA has become the Asia leader.
  • Damatrino - Friday, January 28, 2022 - link

    These sound like nice modules, but on Intel's site it seems they only plug into Intel server motherboards, so they may still be a few years away until something moves them down into common use. HP's 128 GB persistent memory modules were selling for CA$1,253.75 at Newegg, but the silly link wouldn't lead anywhere with more info or a way to purchase one. Much good that was! I'll keep hunting and searching, and maybe one day I'll find out more about this stuff than general info and talk.
