Original Link: https://www.anandtech.com/show/7565/adata-axdu1600gw8g9b-2g-review
ADATA XPG V1.0 Low Voltage Review: 2x8 GB at DDR3L-1600 9-11-9 1.35 V
by Ian Cutress on December 6, 2013 2:00 PM EST

For the next in our series of memory reviews on Haswell, we have another ADATA kit to test: this time a low voltage 2x8 GB kit featuring DDR3-1600 C9 timings, which comes in at £125. Being lower down the chain on a SKU list, the heatsinks are also smaller than the ones previously tested. Previously, in our big roundup of Haswell testing, we suggested 1866 C9 as the minimum people should consider: does going 1600 C9 LV matter that much in the results?
ADATA XPG V1.0 2x8GB DDR3L-1600 C9 1.35V Overview
ADATA’s line away from the more extreme (2400+ MHz) memory setups is aimed at more conservative buyers. If the same DRAM chips are used on board, ICs that run 2400 MHz at C10 might hit lower voltages or lower CL numbers when moved down to 1600 MHz, so there is some potential in diversifying the product line around the same ICs as long as the market is there.
The AXDU1600GW8G9B-2G kit we are testing today is very unassuming next to those higher end modules – a simple small metallic based heatsink. When using a large air cooler like my TRUE Copper, they were easy enough to fit in unlike some of the larger modules.
The lower voltage element of memory intrigues me, most likely for the wrong reasons. The difference between 1.50 volt and 1.35 volt memory, in an overall system build, is not going to affect power draw in a measurable way. If you were building a 42U rack of servers, then yes, multiply out the DRAM modules you need across all the sockets and it can make sense if they are going to be active all the time. But at this juncture, even when a system draws sub 30W on idle and under 100W load, I cannot see a picture where moving from 1.50V to 1.35V makes a sizeable difference to a yearly electricity bill. Others may argue this point, but finding consistent data that converts to a decent power saving is hard to produce or come by. It would be more applicable to buy a lower TDP CPU (such as the i7-4765T, or E3-1230L V3) instead. That leaves a bit of e-peen for being low powered and green as positives, and the kit obviously has to stand on its own two feet when we push through the benchmarks.
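As a sanity check on that claim, here is a back-of-the-envelope sketch in Python. All the wattage and tariff figures are illustrative assumptions, not measurements, and active DRAM power is taken to scale roughly with the square of the supply voltage:

```python
# Rough estimate of the yearly saving from DDR3L (1.35 V) vs DDR3 (1.50 V).
# Every figure here is an illustrative assumption, not a measured value.

def yearly_saving_usd(module_watts=3.0, modules=2, hours_per_day=24,
                      price_per_kwh=0.12):
    """Estimate the cost difference of running the kit at 1.35 V vs 1.50 V.

    Active DRAM power scales roughly with V^2, so the low-voltage kit
    draws about (1.35/1.50)^2 = 81% of the standard kit's power.
    """
    scale = (1.35 / 1.50) ** 2          # ~0.81
    delta_watts = module_watts * modules * (1 - scale)
    kwh_per_year = delta_watts * hours_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh

print(round(yearly_saving_usd(), 2))    # ~1.2: about a dollar a year
```

Even with the system running 24/7, the saving is on the order of a dollar a year for a two-module desktop, which is the point being made above.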
In our big memory round-up for Z87 and Haswell, our cautionary tale was that slower MHz (under 1866) kits, rather than being restrictive, have a few holes in their performance on certain benchmarks, causing a 10-20% performance drop compared to the asymptotic limit hit when you go beyond 2133-2400 MHz. This ADATA 1600 C9 Low Voltage kit hits a couple of pot holes in that regard: WinRAR could be faster, as well as some of the minimum frame rates on a few games.
For Overclocking, our kit comes out of the bag with a Performance Index (PI) of 178: we pushed this to a PI of 200 (2000 10-12-10) with very little effort. This is a bit different from the PI of 240-260 which we see on the higher end kits.
Price wise, 2x8 GB 1600 C9 memory kits can be found for under $150 – the cheapest on Newegg today is actually a 1.35V kit for $129, followed by a 1.65V kit for $130. ADATA do not list this kit on Newegg, but in the UK the pricing is around £125 – take off our 20% sales tax and do the conversion, and that works out nearer $170. ADATA’s XPG V2 and V1 normal voltage kits are at $165, with another at $150. This is still above some of the competition, and a price nearer $150 would make it a competitive choice for this segment.
Specifications
| | ADATA | ADATA | Corsair | Patriot | ADATA | G.Skill |
|---|---|---|---|---|---|---|
| Speed | 1600 | 2400 | 2400 | 2400 | 2800 | 3000 |
| ST | 9-11-9-27 | 11-13-13-35 | 10-12-12-31 | 10-12-12-31 | 12-14-14-36 | 12-14-14-35 |
| Price (at review) | £125 | $200 | - | $92 | $316 | $520 |
| XMP | Yes | Yes | Yes | Yes | Yes | Yes |
| Size | 2 x 8GB | 2 x 8GB | 2 x 8GB | 2 x 4GB | 2 x 8GB | 2 x 4GB |
| Performance Index | 178 | 218 | 240 | 240 | 233 | 250 |

| | ADATA | ADATA | Corsair | Patriot | ADATA | G.Skill |
|---|---|---|---|---|---|---|
| MHz | 1600 | 2400 | 2400 | 2400 | 2800 | 3000 |
| Voltage | 1.35 V | 1.65 V | 1.65 V | 1.65 V | 1.65 V | 1.65 V |
| tCL | 9 | 11 | 10 | 10 | 12 | 12 |
| tRCD | 11 | 13 | 12 | 12 | 14 | 14 |
| tRP | 9 | 13 | 12 | 12 | 14 | 14 |
| tRAS | 27 | 35 | 31 | 31 | 36 | 31 |
| tRC | 38 | 46 | 43 | 49 | | |
| tWR | 12 | 20 | 16 | 16 | | |
| tRFC | 280 | 315 | 301 | 391 | | |
| tRRD | 5 | 6 | 7 | 7 | | |
| tWTR | 6 | 10 | 10 | 12 | | |
| tRTP | 6 | 10 | 10 | 12 | | |
| tFAW | 24 | 33 | 26 | 29 | | |
| CR | - | 2 | 3 | 2 | | |
As you can imagine, this being our 1600 C9 kit for testing, the sub-timings are lower than those of all the other kits we have tested. With a PI of 178 out of the box, there should hopefully be some room to grow.
Visual Inspection
Thankfully ADATA have avoided using annoying plastic packaging that can be a pain to get into – there is a simple tab on the back to help open their XPG line of memory. The packaging is simple enough, just a thin molded plastic to hold the memory in place. Out the memory modules come, barely taller than memory without heatsinks.
There is a slight z-height addition, but it should not affect many (if any) builds:
Market Positioning
As mentioned before, at current prices these modules will have a tough time in the turbulent memory market. On 12/4, the current prices for similar 2x8GB DDR3-1600 C9 memory kits were as follows (prices taken from Newegg):
$129: Crucial Ballistix Sport DDR3L-1600 C9 2x8 GB 1.35V
$130: Silicon Power XPower DDR3-1600 C9 2x8 GB 1.65V
$140: Patriot Viper 3 DDR3-1600 C9 2x8 GB 1.50 V
$143: Crucial Ballistix Sport DDR3-1600 C9 2x8 GB 1.50V
$145: Team Dark DDR3-1600 C9 2x8GB 1.50V
$145: Team Vulcan DDR3-1600 C9 2x8GB 1.50V
$145: AMD Radeon RE1600 DDR3-1600 C9 2x8GB 1.50V
$150: Mushkin Enhanced Blackline DDR3L-1600 C9 2x8 GB 1.35V
$150: G.Skill RipjawsX DDR3-1600 C9 2x8 GB 1.50V
$150: Mushkin Enhanced Stealth DDR3-1600 C9 2x8GB 1.35V
$150: ADATA XPG V1.0 DDR3-1600 C9 2x8GB 1.50V
$150: G.Skill Ares DDR3-1600 C9 2x8GB 1.50V
$150: Apotop Altair ProOC DDR3-1600 C9 2x8GB 1.50V
$155: Crucial Ballistix Sport XT DDR3-1600 C9 2x8GB 1.50V
and so on.
If we filter out the low voltage kits:
$129: Crucial Ballistix Sport DDR3L-1600 C9 2x8 GB 1.35V
$150: Mushkin Enhanced Blackline DDR3L-1600 C9 2x8 GB 1.35V
$150: Mushkin Enhanced Stealth DDR3L-1600 C9 2x8GB 1.35V
Or other low voltage kits:
$140: G.Skill Aegis DDR3L-1600 C11 2x8GB 1.35V
$140: G.Skill Aegis DDR3L-1333 C9 2x8GB 1.35V
$150: Mushkin Enhanced Blackline DDR3L-1866 C11 2x8GB 1.35V
$157: Crucial Ballistix Tactical DDR3L-1600 C8 2x8GB 1.35V
$165: Kingston HyperX DDR3L-1600 C9 2x8GB 1.35V
The main competition is from the Crucial $129 kit, which seems to be a discounted offer right now. The $157 Crucial 1600 C8 kit looks tempting, so this 1600 C9 kit from ADATA ideally needs to leave no doubt when users are looking for a LV kit and aim at the $140 price point.
Test Bed
Test Setup | |
Processor |
Intel Core i7-4770K Retail @ 4.0 GHz 4 Cores, 8 Threads, 3.5 GHz (3.9 GHz Turbo) |
Motherboards | ASRock Z87 OC Formula/AC |
Cooling |
Corsair H80i Thermalright TRUE Copper |
Power Supply | Corsair AX1200i Platinum PSU |
Memory |
ADATA XPG V2 DDR3-2400 C11-13-13 1.65V 2x8 GB Patriot Viper III DDR3-2400 C10-12-12 1.65V 2x4 GB ADATA XPG V1.0 DDR3L-1600 C9-11-9 1.35V 2x8 GB |
Memory Settings | XMP |
Discrete Video Cards |
AMD HD5970 AMD HD5870 |
Video Drivers | Catalyst 13.6 |
Hard Drive | OCZ Vertex 3 256GB |
Optical Drive | LG GH22NS50 |
Case | Open Test Bed |
Operating System | Windows 7 64-bit |
USB 3 Testing | OCZ Vertex 3 240GB with SATA->USB Adaptor |
Many thanks to...
We must thank the following companies for kindly donating hardware for our test bed:
Thank you to OCZ for providing us with 1250W Gold Power Supplies.
Thank you to Corsair for providing us with an AX1200i PSU, and Corsair H80i CLC
Thank you to ASUS for providing us with the AMD GPUs and some IO Testing kit.
Thank you to ECS for providing us with the NVIDIA GPUs.
Thank you to Rosewill for providing us with the 500W Platinum Power Supply for mITX testing, BlackHawk Ultra, and 1600W Hercules PSU for extreme dual CPU + quad GPU testing, and RK-9100 keyboards.
Thank you to ASRock for providing us with the 802.11ac wireless router for testing.
‘Performance Index’
In our Haswell memory overview, I introduced a new concept of ‘Performance Index’ as a quick way to determine where a kit of various speed and command rate would sit relative to others where it may not be so obvious. As a general interpretation of performance in that review, the performance index (PI) worked well, showing that memory kits with a higher PI performed better than those with a lower PI. There were a few circumstances where performance was MHz or CL dominated, but the PI held strong for kit comparisons.
The PI calculation and ‘rules’ are fairly simple:
- Performance Index = MHz divided by CL
- Assuming kit size and installation location are the same, the memory kit with the higher PI will be faster
- Memory kits similar in PI should be ranked by MHz
- Any kit 1600 MHz or less is usually bad news.
That final point comes about due to the law of diminishing returns – in several benchmarks in our Haswell memory overview, the low end MHz kits performed very poorly (20% worse or more). In that overview we suggested 1866 C9 or 2133 C10 as the minimum, whereas 2400 C10 covers the sweet spot should any situation demand good memory.
With this being said, the results for our kits are as follows:
From the data in our memory overview, it was clear that any kit with a performance index of less than 200 was going to have issues on certain benchmarks. The ADATA kit has a PI of 178, and thus in principle might drop back in some benchmarks.
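As an illustration, the PI rule can be applied to the kits in the specification table with a few lines of Python (kit names here are shorthand for the table columns):

```python
# Performance Index (PI) = rated MHz / CAS latency.
# Rule of thumb from our Haswell memory overview: kits below PI 200
# tend to show holes in certain benchmarks.

kits = [
    ("ADATA 1600 C9 LV", 1600, 9),
    ("ADATA 2400 C11",   2400, 11),
    ("Corsair 2400 C10", 2400, 10),
    ("Patriot 2400 C10", 2400, 10),
    ("ADATA 2800 C12",   2800, 12),
    ("G.Skill 3000 C12", 3000, 12),
]

for name, mhz, cl in sorted(kits, key=lambda k: k[1] / k[2]):
    pi = mhz / cl
    flag = "may struggle" if pi < 200 else "ok"
    print(f"{name}: PI {pi:.0f} ({flag})")
```

The 1600 C9 kit is the only one in the table that lands under the PI 200 threshold.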
IGP Gaming
The activity cited most often for improved memory speeds is IGP gaming, and as shown in both of our tests of Crystalwell (4950HQ in CRB, 4750HQ in Clevo W740SU), Intel’s version of Haswell with the 128MB of L4 cache, having big and fast memory seems to help in almost all scenarios, especially when there is access to more and more compute units. In order to pinpoint where exactly the memory helps, we are reporting both average and minimum frame rates from the benchmarks, using the latest Intel drivers available. All benchmarks are also run at 1360x768 due to monitor limitations (and produces more relevant frame rate numbers).
Bioshock Infinite:
Tomb Raider:
Sleeping Dogs:
Single dGPU Gaming
For our single discrete GPU testing, rather than the 7970s which normally adorn my test beds (and were being used for other testing), I plumped for one of the HD 6950 cards I have. This ASUS DirectCU II card was purchased pre-flashed to 6970 specifications, giving a little more oomph. Discrete GPU setups are not often cited as growth areas for memory testing; however, we will let the results speak for themselves.
Dirt 3:
Bioshock Infinite:
Tomb Raider:
Sleeping Dogs:
Tri-GPU CrossFireX Gaming
Our final set of GPU tests are a little more on the esoteric side, using a tri-GPU setup with a HD5970 (dual GPU) and a HD5870 in tandem. While these cards are not necessarily the newest, they do provide some interesting results – particularly when we have memory accesses being diverted to multiple GPUs (or even to multiple GPUs on the same PCB). The 5970 GPUs are clocked at 800/1000, with the 5870 at 1000/1250.
Dirt 3:
Bioshock Infinite:
Tomb Raider:
Sleeping Dogs
CPU Real World
As mentioned previously, real world testing is where users should feel the benefits of spending more on memory. A synthetic test exacerbates a specific type of loading to get peak results in terms of memory read/write and latency timings, most of which are not indicative of the pseudo random nature of real-world workloads (opening email, applying logic). There are several situations which might fall under the typical scrutiny of a real world loading, such as video conversion/video editing. It is at this point we consider if the CPU caches are too small and the system is relying on frequent memory accesses because the CPU cannot be fed with enough data. It is these circumstances where memory speed is important, and it is all down to how the video converter is programmed rather than just a carte blanche on all video converters benefitting from memory. As we will see in the IGP Compute section of this review, anything that can leverage the IGP cores can be a ripe candidate for increased memory speed.
Our tests in the CPU Real World section come from our motherboard reviews in order to emulate potential scenarios that a user may encounter.
USB 3.0 Copy Test with MaxCPU
We transfer a set size of files from the 120GB OCZ Vertex3 connected via SATA 6 Gbps on the motherboard to the 240 GB OCZ Vertex3 SSD with a SATA 6 Gbps to USB 3.0 converter via USB 3.0 using DiskBench, which monitors the time taken to transfer. The files transferred are a 9.2 GB set of 7539 files across 1011 folders – 95% of these files are small typical website files, and the rest (90% of the size) are precompiled installers. In an update to pre-Z87 testing, we also run MaxCPU to load up one of the threads during the test which improves general performance up to 15% by causing all the internal pathways to run at full speed.
Results are represented as seconds taken to complete the copy test, where lower is better.
The Copy Test shows little difference - only 0.5 seconds between 2400 C11 and 1600 C9.
WinRAR 4.2
With 64-bit WinRAR, we compress the set of files used in the USB speed tests. WinRAR attempts to use multithreading when possible, and provides a good test for when a system has a variable threaded load; version 4.2 does this much better than the 3.93 build we used previously. If a system has multiple speeds to invoke at different loadings, the switching between those speeds will determine how well the system does.
WinRAR is usually the benchmark that shows the biggest difference between the memory kits, and the ADATA LV is some 6 seconds off the pace. The hole of 1333 C9/1600 C11 is avoided.
FastStone Image Viewer 4.2
FastStone Image Viewer is a free piece of software I have been using for quite a few years now. It allows quick viewing of flat images, as well as resizing, changing color depth, adding simple text or simple filters. It also has a bulk image conversion tool, which we use here. The software currently operates only in single-thread mode, which should change in later versions of the software. For this test, we convert a series of 170 files, of various resolutions, dimensions and types (of a total size of 163MB), all to the .gif format of 640x480 dimensions. Results shown are in seconds, lower is better.
FastStone shows memory indifference.
Xilisoft Video Converter 7
With XVC, users can convert any type of normal video to any compatible format for smartphones, tablets and other devices. By default, it uses all available threads on the system, and in the presence of appropriate graphics cards, can utilize CUDA for NVIDIA GPUs as well as AMD WinAPP for AMD GPUs. For this test, we use a set of 33 HD videos, each lasting 30 seconds, and convert them from 1080p to an iPod H.264 video format using just the CPU. The time taken to convert these videos gives us our result in seconds, where lower is better.
XVC shows WinRAR-like differentiation on a smaller scale, with <5% spread across all memory speeds. That might matter for large conversion projects.
Video Conversion - x264 HD Benchmark
The x264 HD Benchmark uses a common HD encoding tool to process an HD MPEG2 source at 1280x720 at 3963 Kbps. This test represents a standardized result which can be compared across other reviews, and is dependent on both CPU power and memory speed. The benchmark performs a 2-pass encode, and the results shown are the average frame rate of each pass performed four times. Higher is better this time around.
Same story with x264 as with XVC, except we see a reversal of fortunes with pass 2. As pass 2 is optimised after pass 1, it all depends if a second pass is needed. In the comments of some reviews, it seems some of our readers only perform the single initial pass, where the high end memory is ~3% faster than the ADATA LV.
TrueCrypt v7.1a AES
One of Anand’s common CPU benchmarks is TrueCrypt, a tool designed to encrypt data on a hard-drive using a variety of algorithms. We take the program and run the benchmark mode using the fastest AES encryption protocol over a 1GB slice, calculating the speed in GB/s. Higher is better.
CPU Compute
One side I like to exploit on CPUs is compute: whether a variety of mathematical loads can stress the system in a way that real-world usage might not. For these benchmarks we use ones developed for testing MP servers and workstation systems back in early 2013, such as grid solvers and Brownian motion code. Please head over to the first of such reviews where the mathematics and small snippets of code are available.
3D Movement Algorithm Test
The algorithms in 3DPM employ uniform random number generation or normal distribution random number generation, and vary in various amounts of trigonometric operations, conditional statements, generation and rejection, fused operations, etc. The benchmark runs through six algorithms for a specified number of particles and steps, and calculates the speed of each algorithm, then sums them all for a final score. This is an example of a real world situation that a computational scientist may find themselves in, rather than a pure synthetic benchmark. The benchmark is also parallel between particles simulated, and we test the single thread performance as well as the multi-threaded performance. Results are expressed in millions of particles moved per second, and a higher number is better.
N-Body Simulation
When a series of heavy mass elements are in space, they interact with each other through the force of gravity. Thus when a star cluster forms, the interaction of every large mass with every other large mass defines the speed at which these elements approach each other. When dealing with millions and billions of stars on such a large scale, the movement of each of these stars can be simulated through the physical theorems that describe the interactions. The benchmark detects whether the processor is SSE2 or SSE4 capable, and implements the relative code. We run a simulation of 10240 particles of equal mass - the output for this code is in terms of GFLOPs, and the result recorded was the peak GFLOPs value.
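To show the kind of work this benchmark does, here is a readable Python sketch of direct-summation pairwise gravity; it is not the benchmark's actual SSE2/SSE4 implementation, just the same O(N²) technique in miniature:

```python
# Minimal direct-summation N-body acceleration step: every particle
# feels the gravity of every other particle (O(N^2) pairwise sums).
import math

def accelerations(pos, mass, soft=1e-3):
    """Return per-particle acceleration from pairwise Newtonian gravity.

    'soft' is a softening length to avoid singularities at zero distance;
    G is folded into the mass units for simplicity.
    """
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0]**2 + dx[1]**2 + dx[2]**2 + soft**2
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)   # 1 / r^3
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc

# Two equal masses attract each other symmetrically along x:
a = accelerations([[0, 0, 0], [1, 0, 0]], [1.0, 1.0])
print(a[0][0] > 0 and a[1][0] < 0)  # True
```

Because each particle's inner loop streams over every other particle's position, the working set and memory access pattern make this class of code sensitive to cache and memory behaviour as N grows.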
Grid Solvers - Explicit Finite Difference
For any grid of regular nodes, the simplest way to calculate the next time step is to use the values of those around it. This makes for easy mathematics and parallel simulation, as each node calculated is only dependent on the previous time step, not the nodes around it on the current calculated time step. By choosing a regular grid, we reduce the levels of memory access required for irregular grids. We test both 2D and 3D explicit finite difference simulations with 2^n nodes in each dimension, using OpenMP as the threading operator in single precision. The grid is isotropic and the boundary conditions are sinks. We iterate through a series of grid sizes, and results are shown in terms of ‘million nodes per second’ where the peak value is given in the results – higher is better.
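A minimal Python sketch of one such explicit step on a 2D grid follows; the review's own solver is compiled OpenMP code, so this just illustrates the 5-point stencil and sink boundaries:

```python
# One explicit finite-difference time step on a regular 2D grid.
# Each new node depends only on the previous time step, so the loop
# parallelises trivially (the review's version uses OpenMP in C).

def step(grid, alpha=0.2):
    """Advance the grid one time step; boundary nodes are left
    untouched (held at 0 here, acting as sinks)."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # 5-point stencil: each node relaxes toward its neighbours
            new[i][j] = grid[i][j] + alpha * (
                grid[i-1][j] + grid[i+1][j] + grid[i][j-1] + grid[i][j+1]
                - 4 * grid[i][j]
            )
    return new

g = [[0.0] * 5 for _ in range(5)]
g[2][2] = 1.0                      # point source in the middle
g = step(g)
print(round(g[2][1], 3), round(g[2][2], 3))  # 0.2 0.2: heat spreads out
```

Each node read touches four neighbours, so as the grid size grows past the CPU caches, throughput becomes increasingly dependent on memory bandwidth, which is exactly why this test responds to memory speed.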
Grid Solvers - Implicit Finite Difference + Alternating Direction Implicit Method
The implicit method takes a different approach to the explicit method – instead of considering one unknown in the new time step to be calculated from known elements in the previous time step, we consider that an old point can influence several new points by way of simultaneous equations. This adds to the complexity of the simulation – the grid of nodes is solved as a series of rows and columns rather than points, reducing the parallel nature of the simulation by a dimension and drastically increasing the memory requirements of each thread. The upside, as noted above, is the less stringent stability rules related to time steps and grid spacing. For this we simulate a 2D grid of 2^n nodes in each dimension, using OpenMP in single precision. Again our grid is isotropic with the boundaries acting as sinks. We iterate through a series of grid sizes, and results are shown in terms of ‘million nodes per second’ where the peak value is given in the results – higher is better.
IGP Compute
One of the touted benefits of Haswell is the compute capability afforded by the IGP. For anyone using DirectCompute or C++ AMP, the compute units of the HD 4600 can be exploited as easily as any discrete GPU, although efficiency might come into question. Shown in some of the benchmarks below, it is faster for some of our computational software to run on the IGP than the CPU (particularly the highly multithreaded scenarios).
Grid Solvers - Explicit Finite Difference on IGP
As before, we test both 2D and 3D explicit finite difference simulations with 2^n nodes in each dimension, using OpenMP as the threading operator in single precision. The grid is isotropic and the boundary conditions are sinks. We iterate through a series of grid sizes, and results are shown in terms of ‘million nodes per second’ where the peak value is given in the results – higher is better.
On our IGP compute the 1600 C9 LV takes a bit of a knock in both 2D and 3D simulations when compared to the more powerful kits available - by as much as 10% in 3D.
N-Body Simulation on IGP
As with the CPU compute, we run a simulation of 10240 particles of equal mass - the output for this code is in terms of GFLOPs, and the result recorded was the peak GFLOPs value.
3D Particle Movement on IGP
Similar to our CPU Compute algorithm, we calculate the random motion in 3D of free particles involving random number generation and trigonometric functions. For this application we take the fastest true-3D motion algorithm and test a variety of particle densities to find the peak movement speed. Results are given in ‘million particle movements calculated per second’, and a higher number is better.
Matrix Multiplication on IGP
Matrix Multiplication occurs in a number of mathematical models, and is typically designed to avoid memory accesses where possible and optimize for a number of reads and writes depending on the registers available to each thread or batch of dispatched threads. Here we have a crude MatMul implementation, and iterate through a variety of matrix sizes to find the peak speed. Results are given in terms of ‘million nodes per second’ and a higher number is better.
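For illustration, a crude (unblocked) matrix multiply looks like the following Python sketch; the actual test iterates over matrix sizes and measures throughput, but the access-pattern point is the same:

```python
# A crude, readable matrix multiply in the spirit of the review's MatMul
# test. Optimised implementations block for cache and registers to cut
# memory traffic; even here, the i-k-j loop order keeps B's accesses
# row-major instead of striding down columns as the naive i-j-k does.

def matmul(a, b):
    """C = A x B for dense row-major lists of lists."""
    n, m, p = len(a), len(b), len(b[0])
    assert len(a[0]) == m
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            for j in range(p):
                c[i][j] += aik * b[k][j]
    return c

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```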
Overclocking Results
When it comes to memory overclocking, there are several ways to approach the issue. Typically memory overclocking is rarely required - only those attempting to run benchmarks need worry about pushing the memory to its uppermost limits. It also depends highly on the memory kits being used - memory is similar to processors in that the ICs are binned to a rated speed. The higher the bin, the better the speed - however if there is demand for lower speed memory, then higher bin parts may be declocked to increase supply of the lower clocked component. Similarly, for the high end frequency kits, less than 1% of all ICs tested may actually hit the speed of the kit, hence the prices for these kits increase exponentially.
With this in mind, there are several ways a user can approach overclocking memory. The art of overclocking memory can be as complex or as simple as the user would like - typically the dark side of memory overclocking requires deep in-depth knowledge of how memory works at a fundamental level. For the purposes of this review, we are taking overclocking in three different scenarios:
a) From XMP, adjust Command Rate from 2T to 1T
b) From XMP, increase Memory Speed strap (e.g. 1333 MHz -> 1400 -> 1600)
c) From XMP, test a range of sub-timings (e.g. 10-12-12 to 13-15-15 to 8-10-10) and find the best MHz at which each is stable
There is plenty of scope to overclock beyond this, such as adjusting voltages or the voltage of the memory controller – for the purposes of this test we raise the memory voltage to the ‘next stage’ above its rated voltage (1.35V to 1.5V, 1.5V to 1.65V, 1.65V to 1.72V). As long as a user is confident with adjusting these settings, then there is a good chance that the results here will be surpassed. There is also the fact that individual sticks of memory may perform better than the rest of the kit, or that one of the modules could be a complete dud and hold the rest of the kit back. For the purpose of this review we are seeing if the memory out of the box, and the performance of the kit as a whole, will work faster at the rated voltage.
In order to ensure that the kit is stable at the new speed, we run the Linpack test within OCCT for five minutes as well as the PovRay benchmark. This is a small but thorough test, and we understand that users may wish to stability test for longer to reassure themselves. However, for the purposes of throughput, a five minute test will catch immediate errors from overclocking the memory.
With this in mind, the kit performed as follows:
| Test | PovRay | OCCT |
|---|---|---|
| XMP | 1603.85 | 76C |
| XMP, 2T to 1T | Already 1T | Already 1T |
| 1800 9-11-9 | 1598.21 | 76C |
| 1866 9-11-9 | 1593.88 | 76C |
| 2000 9-11-9 | No POST | No POST |
Off the bat our 1600 kit will jump to 1866 MHz in its stride, but 2000 at the same timings is a no-go.
| Subtimings | Peak MHz | PovRay | OCCT | Final PI |
|---|---|---|---|---|
| 7-9-7 | 1400 | 1613.60 | 77C | 200 |
| 8-10-8 | 1600 | 1610.20 | 77C | 200 |
| 9-11-9 | 1866 | 1623.81 | 78C | 207 |
| 10-12-10 | 2000 | 1596.91 | 78C | 200 |
| 11-13-11 | 2133 | 1620.29 | 78C | 194 |
| 12-14-12 | 2200 | 1619.96 | 77C | 183 |
| 13-15-13 | 2200 | 1609.89 | 77C | 169 |
A base-line PI of 200 is a good result (1400 C7 through 2000 C10), showing that there is some headroom from the basic settings of around 10%.
I mentioned on the first page of this review that low voltage memory never really made sense to me. In a server, it can make sense: if you have rows upon rows of memory banks in a 1U/2U chassis, followed by racks upon racks of servers, at the top end you might see as much as 1 kW energy difference. But in home systems using 2-4 modules, even at a total power draw of 35W, it only really makes sense in ultra-low power devices that end up using SO-DIMMs. So when it comes to regular sized modules for a normal PC build, the ‘it saves power’ argument is almost immeasurable unless the device is on 100% of the time. Does that mean that LV memory has a place in the regular DDR3 market? I am not so sure. The pricing seems to suggest almost no difference, at least financially:
$129: Crucial Ballistix Sport DDR3L-1600 C9 2x8 GB 1.35V
$130: Silicon Power XPower DDR3-1600 C9 2x8 GB 1.65V
$140: Patriot Viper 3 DDR3-1600 C9 2x8 GB 1.50 V
$143: Crucial Ballistix Sport DDR3-1600 C9 2x8 GB 1.50V
$145: Team Dark DDR3-1600 C9 2x8GB 1.50V
$145: Team Vulcan DDR3-1600 C9 2x8GB 1.50V
$145: AMD Radeon RE1600 DDR3-1600 C9 2x8GB 1.50V
$150: Mushkin Enhanced Blackline DDR3L-1600 C9 2x8 GB 1.35V
$150: Mushkin Enhanced Stealth DDR3L-1600 C9 2x8GB 1.35V
$150: ADATA XPG V1.0 DDR3-1600 C9 2x8GB 1.50V
As for the kit we are testing today, the ADATA AXDU1600GW8G9B-2G (2x8 GB DDR3L-1600 9-11-9 1.35V), it ideally needs to sit under the $150 mark to be considered by most builders. When there are so many modules around this price point, it all comes down to marketable features and whether it matches the color scheme of the build.
Going for 1600 C9 has some pitfalls on Haswell at 4 GHz: WinRAR and certain game frame rates are not at their peak, which usually occurs around 1866 C9/2133 C10/2400 C10. But if these are not your intended purpose, the ADATA kit can fit quite nicely into any build. Memory is all in the pricing these days, and due to the volatile market it really pays to look on the day you buy to see what is what. Reviews like this are just a snapshot in time, but it is safe to say we did not come across any crippling issues with this memory kit, and there was a good 10% (2000 C10) overclocking headroom at 1.50 volts.