G.Skill on Friday announced its new top-of-the-range DDR4 memory kit for dual-channel PCs running Intel’s Kaby Lake processors. The new Trident Z kit operates at 4333 MT/s (DDR4-4333), though it requires a bit of extra voltage to get there. In addition, the company said that it is working on even faster DDR4 DIMMs.

The new G.Skill Trident Z memory modules are based on Samsung’s 8 Gb DRAM ICs (B-die, 20 nm), which also power other high-end DIMMs from the company. The 16 GB kit (F4-4333C19) consists of two 8 GB modules rated for DDR4-4333 operation with CL19 19-19-39 timings at 1.4 V, which is above the standard high-performance voltage setting for DDR4 (1.35 V) and considerably higher than the JEDEC specification of 1.2 V. Just like the rest of the Trident Z family, the new DDR4-4333 kit comes with aluminum heat spreaders (so, no RGB LEDs just yet) as well as SPDs with XMP 2.0 settings.

G.Skill is positioning the Trident Z DDR4-4333 kit for use with Intel’s Z270 platforms and Kaby Lake processors. So far, G.Skill has only validated its F4-4333C19-8GTZ modules on the ASUS ROG Maximus IX Apex motherboard and the Intel Core i5-7600K processor, but expect the list of compatible mainboards to expand over time.

In addition to announcing the new DDR4-4333 kit, G.Skill also teased DDR4-4400 (F4-4400C19-8GTZ) and even DDR4-4500 (F4-4500C19-8GTZ) 8 GB memory modules working in dual-channel mode. The company is still finalizing the specifications of these DIMMs (timings, voltages, etc.), so do not expect them on store shelves for a while yet.

It should be noted however that this week's announcement is just that: an announcement. The actual product release will come later. G.Skill has not yet announced when that will be; presumably the company is still binning chips and building up a launch supply. We'd also expect the retail price of the kit to be announced at that time. At present, a dual-channel 16 GB DDR4-4266 kit costs around $250, so it's reasonable to assume the new DDR4-4333 kit will be priced above that.

Source: G.Skill

  • BrokenCrayons - Wednesday, April 19, 2017 - link

    Excellently put Strunf!
  • JoeyJoJo123 - Wednesday, April 19, 2017 - link

    >People don't care about ECC cause people don't really get the problems ECC is supposed to address.

    No, people don't care about ECC because they're under the assumption that they never see cosmic ray events and that all faults seen when using their computer are just software based. They then mistakenly assume that MOAR speed and MOAR capacity beyond what is necessary offers tangible benefits when, depending on the workload, you hit a plateau where more speed or capacity is not inherently useful to your workload. 24/7 stability provided by ECC can be useful to anyone, even grandma and grandpa who only use Internet Explorer to send/receive pictures of grandkids to friends and family, where a cosmic ray event induced memory error can cause system instability.

    Cosmic ray events aren't as rare as you might think, and even though you may not _see_ cosmic ray events, doesn't mean they don't have an effect on your system.

    https://en.wikipedia.org/wiki/Cosmic_ray#Effect_on...

    "Studies by IBM in the 1990s suggest that computers typically experience about one cosmic-ray-induced error per 256 megabytes of RAM per month", attributed to this source: https://www.scientificamerican.com/article/solar-s...

    A PC equipped with 8GB of memory running 24/7 for a month will incur, on average, about 32 cosmic ray events that result in a bit-flip in a DRAM cell. If you run the PC 12 hours a day, then ~16 events will happen, or if you run a 16GB memory PC for 12 hours a day, ~32 events will happen.

    If you're noticing the correlation, the higher the memory capacity that the PC has and the more often it's running, the more cosmic ray events will incur a bit-flip, and therefore affect the system's performance and stability over a given measure of time, such as a month.

    Likewise, you may think, "But hey! That's a 20 year old statistic, that can't still hold today, given how much denser and more electrically stable modern DRAM modules and CPUs are." Well no, because according to this article, Intel states the problem increases in prevalence the smaller the integrated circuit becomes, so the 20 year old statistic is actually more of a best-case scenario: https://www.newscientist.com/blog/technology/2008/...

    So yes, it very much is possible that the time your browser completely locked up randomly, even though you weren't doing anything, was due to a cosmic ray event flipping a bit in memory where your browser was cached; when the processor proceeds to fetch this information, the process locks up and looks "stuck" to the user. It might look like a software error, but if the exact same scenario isn't reproducible, then chances are that YES it was a memory error, that YES it's possible this memory error was caused by a cosmic ray event, and that YES ECC (registered) memory may have prevented this issue from ever occurring.

    And then we go back to the initial topic at hand: yes, ECC support on the platform gives users the CHOICE to purchase ECC memory at a ~25% premium over non-ECC memory of the same capacity and speed. AMD offers that CHOICE to users. Intel does NOT.

    What's your Intel apologist excuse for Intel not providing that CHOICE for mainstream platform users for over a decade?
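The back-of-the-envelope arithmetic in the comment above can be sketched in a few lines. This is just an illustration of the quoted IBM-era figure (~1 cosmic-ray-induced error per 256 MB of RAM per month of continuous operation); the function name and scaling assumptions are ours, not from any cited source.

```python
# Sketch of the soft-error estimate quoted above, assuming the 1990s IBM
# figure of ~1 cosmic-ray-induced error per 256 MB of RAM per month of
# continuous operation, and assuming the rate scales linearly with both
# capacity and uptime.

ERRORS_PER_256MB_MONTH = 1.0

def expected_bit_flips(ram_gb, hours_per_day):
    """Expected bit-flip events per month for a given RAM size and duty cycle."""
    blocks_of_256mb = ram_gb * 1024 / 256
    duty_cycle = hours_per_day / 24
    return blocks_of_256mb * ERRORS_PER_256MB_MONTH * duty_cycle

print(expected_bit_flips(8, 24))   # 8 GB running 24/7   -> 32.0
print(expected_bit_flips(8, 12))   # 8 GB, 12 h per day  -> 16.0
print(expected_bit_flips(16, 12))  # 16 GB, 12 h per day -> 32.0
```

The three calls reproduce the 32 / 16 / 32 events-per-month figures given in the comment.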
  • twtech - Monday, April 24, 2017 - link

    Not that Windows 10 crashes all that often anyway, but there is still a noticeable difference between workstation-grade hardware - which includes ECC memory - and a regular consumer-grade platform.

    Not that the consumer grade hardware crashes often these days, but workstation/server grade crashes noticeably less. As in, pretty much never. I don't think my home workstation has ever gotten a BSOD.

    I don't know how much of that is specifically attributable to the ECC memory, but it does seem like memory errors are responsible for a lot of crashes when they do occur.
  • austinsguitar - Saturday, April 15, 2017 - link

    ryzen uses a strange block multiplier to overclock memory to xmp speeds. this in turn means it is more difficult by default to get memory working on ryzen. they even have a different type of south bridge interface! yikes. it will take about a year or so to get 3600mhz to run normally with micro code. just moar waitin.
  • Lolimaster - Saturday, April 15, 2017 - link

    They should just release fast memory only for Ryzen, the only cpu that actually makes use of the faster speed because of their CCX design.
  • ddriver - Sunday, April 16, 2017 - link

    They should have spent some extra transistors connecting the modules at the L3 cache level rather than at the MC level. The MC is very slow compared to L3 cache; they did it that way because it was easier, as both modules are connected to the MC, so it can additionally act as an inter-module communication hub. But it is very slow, which has a pronounced effect on stuff like games. Number crunching is not really affected, as core affinities do not fluctuate so much, plus saturating the pipeline helps to mask out inter-module access latency.
  • Lolimaster - Sunday, April 16, 2017 - link

    Ryzen still performs better in games using Win7 ("unsupported") vs Win10.

    Aside from not having DX12 the effect on games is greatly minimized.
  • JasonMZW20 - Sunday, April 16, 2017 - link

    It isn't slow; there's a higher average latency penalty for crossing CCXs, but if you use the wider bus width properly, you can fit more data with less penalty if you need cache coherency between CCX modules for dependent datasets. The NB does run at memory speed, but is 256-bit wide, which is how it provides good bandwidth and excellent power savings.

    If you want actual numbers: 40ns to communicate with cores within a CCX through L3, 120ns to communicate with cores in a different CCX, which averages out to 80ns across the two cases.
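The 80 ns figure in the comment above is just the mean of the two quoted latencies, weighted by how often a core talks across a CCX boundary. A minimal sketch of that averaging, assuming the 40 ns / 120 ns numbers quoted (the function and probability parameter are illustrative, not from AMD documentation):

```python
# Expected core-to-core latency given the quoted figures: 40 ns within a
# CCX (through L3), 120 ns across CCXs. p_cross_ccx is the assumed
# probability that a given transfer crosses the CCX boundary.

INTRA_CCX_NS = 40
INTER_CCX_NS = 120

def avg_core_to_core_latency(p_cross_ccx):
    """Expected latency in ns for a given cross-CCX transfer probability."""
    return (1 - p_cross_ccx) * INTRA_CCX_NS + p_cross_ccx * INTER_CCX_NS

print(avg_core_to_core_latency(0.5))  # equal mix of both cases -> 80.0
```

With an equal mix of intra- and inter-CCX transfers this gives the 80 ns average the comment cites; real workloads with good thread affinity would sit closer to 40 ns.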
  • MrSpadge - Wednesday, April 19, 2017 - link

    80 additional ns.. that would be longer than an Intel IMC needs to access main memory, which should be far slower than inter-chip communication!
  • NeatOman - Sunday, April 16, 2017 - link

    That's a high CL right there :-/

    Bragging rights at that point, I'll wait for a sensible kit with much tighter timings.. but ima want me some 4GHz ram doe.
