23 Comments

  • Rick83 - Monday, November 11, 2013 - link

    So running memory on a 22nm CPU at 1.65 volts (or more, for the OC) has suddenly become acceptable again? Last I heard, everyone was clamoring to only get 1.5V memory, so as not to fry the IMC before its time.
    At $200, the key point is that taking a $100 kit and putting the other $100 toward more memory or toward extra CPU performance would probably be better. Going with IB-E instead of Haswell could probably be done with that extra money - and you get double the memory channels to play with as a result.
  • IanCutress - Monday, November 11, 2013 - link

    Most DDR3 memory past 1866 C9 is at 1.65 volts. These IMCs are sturdy enough; almost all will take 2x8 GB at 2933 C12 without breaking a sweat. When did it ever become unacceptable? I've never seen any issues except taking Sandy above 2400 MHz, because the IMC wasn't particularly built for it. Ivy kicked it up a notch, and Haswell accepts most of what I throw at it as long as you're reasonable and the memory itself can handle it.
  • owan - Monday, November 11, 2013 - link

    There was a LOT of talk when SB was released about using 1.5V RAM instead of 1.65V due to the IMC supposedly not tolerating higher voltages well. I don't know how true it was, but I thought this was common knowledge.
  • hoboville - Monday, November 11, 2013 - link

    Yes, there has been (and still is) concern that over-volting RAM can have a negative impact on the memory controller, because it is on the CPU die. RAM voltages and power do have an impact on the memory controller, of that there is no doubt. In fact, registered memory (often just called buffered memory, and related to but distinct from Fully Buffered memory) was a design that came about when the memory controller had to interface with large amounts of RAM (and power), particularly in servers where 8+ slots are not uncommon.

    http://en.wikipedia.org/wiki/Registered_memory
  • The Von Matrices - Monday, November 11, 2013 - link

    Well, according to Intel (http://www.intel.com/support/processors/sb/CS-0299...

    "Intel recommends using memory that adheres to the Jedec memory specification for DDR3 memory which is 1.5 volts, plus or minus 5%. Anything more than this voltage can damage the processor or significantly reduce the processor life span."

    However, I have not seen anyone who had a processor fail explicitly due to 1.65V memory. Granted, this might be hard to tell because many of the failed processors with 1.65V memory also have core overclocking and overvolting, and separating the actual cause of failure is impossible without an electron microscope.

    I run my Haswell system at 1.65V DDR3-2400, and I am not worried about 1.65V killing the processor. What's more concerning to me is that my Mushkin Blackline memory's XMP profile adjusts the system agent voltage by +0.3V, which is far too much for me. I forced it back to the default voltage and the memory works fine.
  • jabber - Tuesday, November 12, 2013 - link

    It may be that Intel's research determined that running at 1.65v could reduce the life of the CPU from 30 years to 28 years.
  • freedom4556 - Tuesday, November 12, 2013 - link

    Yeah, I love that there is a huge difference between the statistical and colloquial meanings of the word "significant", which always seems to be abused by marketers and misused by the media.
  • kishorshack - Monday, November 11, 2013 - link

    This is an Anandtech Review
  • hoboville - Monday, November 11, 2013 - link

    A quick suggestion: could you do a ranking of performance index as related to price, displaying performance per dollar?
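
    A minimal sketch of what such a ranking could compute (kit names and prices below are made up purely for illustration):

        # Hypothetical kits: (name, data rate in MT/s, CAS latency, price in $)
        kits = [
            ("DDR3-1600 C9",  1600,  9,  80.0),
            ("DDR3-1866 C9",  1866,  9,  95.0),
            ("DDR3-2400 C11", 2400, 11, 150.0),
            ("DDR3-2933 C12", 2933, 12, 300.0),
        ]
        # Rank by performance index (data rate / CAS latency) per dollar, best value first
        for name, rate, cl, price in sorted(kits, key=lambda k: (k[1] / k[2]) / k[3], reverse=True):
            pi = rate / cl
            print("%-14s  PI %.0f  $%.0f  PI per $ %.2f" % (name, pi, price, pi / price))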

    For gamers, the biggest point is how much time the GPU spends asking the RAM for data. Games that are more heavily CPU bound will probably see some benefit from faster RAM. It is worth noting that Dirt 3 seems to benefit the most from lower timings, as the lowest timings see the highest FPS. Undoubtedly, each GPU is waiting for information from RAM, and in turn, longer RAM latency means that each GPU has to wait for its chunk of data. Better titles will rely less on the CPU and more on the GPU; maybe Mantle will have some effect on this with reduced draw calls?

    Anyway, the price scaling on these "performance" RAM kits is so steep that I couldn't in good conscience recommend anyone buy them when they would be better off spending the money on a dGPU, a better dGPU, or a second dGPU.
  • freedom4556 - Monday, November 11, 2013 - link

    "Games that are more heavily CPU bound will probably see some benefit from faster RAM."
    Not according to nearly every review I've ever read on memory. Most reviews have all results within about 5 fps of each other regardless of game. Only synthetics really benefit. See articles like:
    http://anandtech.com/show/7364/memory-scaling-on-h...
    http://www.techpowerup.com/reviews/Avexir/Core_Ser...
    http://www.tomshardware.com/reviews/low-voltage-dd...
  • The Von Matrices - Monday, November 11, 2013 - link

    The silly part is that this is marketed as "gaming" memory while its advantages in gaming on a discrete GPU are minimal. It should be marketed as accelerating applications, which would be a much more reasonable statement. I bought 2400MHz memory not because I play games but because I perform encoding and file compression on my PC, and that is a situation where fast memory makes a difference.

    As far as making a recommendation on value, Ian stated (and I agree) that memory prices are very volatile. It's basically impossible to make a lasting value comparison on memory because of this. What is a great deal today could be eclipsed next week by a dramatic price decrease of a faster, better product. I agree with Ian omitting a value comparison because it would be pointless a month after the article is posted. However, the performance comparisons of different memory speeds and timings are still of value.

    I think the general conclusion he stated is still of value - buy something faster than DDR3-1600 but don't spend too much money because the performance increase is minimal beyond that.
  • DanNeely - Monday, November 11, 2013 - link

    Are any of your planned reviews going to look at the impact of the timing relaxation needed to run 4 DIMMs instead of 2? Having bumped up against 12 GB a few times, I'm now running 18 GB in my aging i7-920 box; and with both my browsers (Opera, FF) having multi-process upgrades forthcoming that will let them expand beyond the 4 GB barrier, I've decided on 4x8 GB for my new system.
  • The Von Matrices - Monday, November 11, 2013 - link

    I don't understand why you're creating a new term "performance index" instead of just using the more standard time to first word (in ns). It would behave exactly inversely to your "performance index", with lower times being better, but otherwise the comparison would be the same.
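
    A minimal sketch of that equivalence (assuming the article's index is the data rate in MT/s divided by the CAS latency; the kits below are generic examples):

        def first_word_ns(data_rate, cas):
            # CAS cycles * clock period; for DDR the clock runs at half the data
            # rate, so one cycle is 2000 / data_rate nanoseconds
            return cas * 2000.0 / data_rate

        def performance_index(data_rate, cas):
            return data_rate / cas

        for rate, cl in [(1600, 9), (1866, 9), (2400, 11), (2933, 12)]:
            print(rate, cl, performance_index(rate, cl), first_word_ns(rate, cl))
        # first_word_ns is always 2000 / performance_index, so the two metrics rank
        # kits identically, just with the scale inverted (lower is better for ns).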
  • ShieTar - Tuesday, November 12, 2013 - link

    I agree. It's not only more standard, it is also physically more meaningful, and it can be adapted to describe the performance of software with known algorithms. For example, if your ramdisk is reading 512-byte sectors from memory, its performance will scale with the "time to get a full sector".
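
    A rough sketch of that "time to get a full sector" idea (assumptions: a single 64-bit channel, one streaming burst for the whole sector, and no refresh or bank-conflict overhead):

        def sector_time_ns(data_rate, cas, sector_bytes=512):
            first_word = cas * 2000.0 / data_rate     # ns until the first word arrives
            bytes_per_ns = data_rate * 8 / 1000.0     # 64-bit bus moves 8 bytes per transfer
            return first_word + sector_bytes / bytes_per_ns

        print(sector_time_ns(2400, 11))   # roughly 36 ns
        print(sector_time_ns(1600, 9))    # roughly 51 ns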

    But of course, frequency is also a much more useful parameter than wavelength for distinguishing electromagnetic signals, and you still can't get anybody who learned their field on wavelength to give it up. Once people start to think in certain terms, they are very stubborn about changing definitions.
  • whyso - Monday, November 11, 2013 - link

    If you run IGP benchmarks, can you please run them at settings that are relevant? 11 fps is not relevant.
  • cmdrdredd - Monday, November 11, 2013 - link

    Still with these big heatsinks on the memory? I almost have to use the low-profile Samsung stuff because my Noctua cooler doesn't allow much clearance.
  • meacupla - Monday, November 11, 2013 - link

    I think these unnecessarily tall RAM heatsinks are still being made because the manufacturers think people will use CLC CPU coolers instead of a dual-tower heatsink.

    Or maybe they think the only people who will buy this type of RAM are people with real water cooling loops.
    Or maybe they are for LN2 overclocking contests or something.

    Either way, if the customer is sensible enough to buy a tower heatsink in the first place, I'm sure they would also be sensible enough to buy some lower-profile 1600 MHz or 1866 MHz CAS 8 or CAS 9 RAM instead of overkill 2400 MHz.
  • DanNeely - Tuesday, November 12, 2013 - link

    Giant ramsinks long predate CLCs. For that matter, I'm fairly sure they predate tower-style heatsinks as well.
  • Hood6558 - Wednesday, November 13, 2013 - link

    Overkill is best; sensible decisions are for Grandma's email machine...
  • Kamus - Tuesday, November 12, 2013 - link

    Some Battlefield 4 tests would've been nice... According to Corsair, 2400 memory was giving up to 20% better performance than 1333, but I've yet to see another test like that one to corroborate it.
  • IanCutress - Tuesday, November 12, 2013 - link

    BF4 will hopefully be part of my 2014 test bed; I'm still getting equipment arranged to make it relevant and trying to decide on a consistent benchmark. Running through an empty server atm is the only consistent way, but it might not be considered a true representation of what's possible.
  • d9ssk02md - Tuesday, November 12, 2013 - link

    Well, on my Sandy Bridge it only took a year or two of running memory at 1.65V to develop random freezes. Lowering the voltage (and the speed) to 1.5V made the issue completely disappear.
  • Gen-An - Tuesday, November 12, 2013 - link

    A bit surprising these couldn't go higher, considering they are likely using Hynix H5TQ4G83MFR ICs. I have some sticks of the same bin (2400 C11, 2x8 GB) but a different brand (Silicon Power), and I've been able to push them to 2933.
