Among many of Intel’s announcements today, a key one for a lot of users will be the launch of Intel’s 9th Generation Core desktop processors, offering up to 8 cores on Intel's mainstream consumer platform. These processors are drop-in compatible with current Coffee Lake and Z370 platforms, but are accompanied by a new Z390 chipset and associated motherboards as well. The highlight of this launch is the 8-core Core i9 parts, which include the 5.0 GHz-turbo Core i9-9900K, rated at a 95W TDP.

One, Two, Three, Four, Five, Six, Seven, Eight. Eight Cores.

I won’t beat about the bush – the 9th Gen processors Intel is announcing today are as follows:

Intel 9th Gen Core
AnandTech      Price  Cores/   TDP   Base/Turbo  L3     L3 Per  DDR4  iGPU    iGPU Turbo
                      Threads        (GHz)              Core                  (MHz)
Core i9-9900K  $488   8 / 16   95 W  3.6 / 5.0   16 MB  2.0 MB  2666  GT2     1200
Core i7-9700K  $374   8 / 8    95 W  3.6 / 4.9   12 MB  1.5 MB  2666  GT2     1200
Core i5-9600K  $262   6 / 6    95 W  3.7 / 4.6   9 MB   1.5 MB  2666  GT2     1150
8th Gen
Core i7-8086K  $425   6 / 12   95 W  4.0 / 5.0   12 MB  2.0 MB  2666  24 EUs  1200
Core i7-8700K  $359   6 / 12   95 W  3.7 / 4.7   12 MB  2.0 MB  2666  24 EUs  1200
Core i5-8600K  $258   6 / 6    95 W  3.6 / 4.3   9 MB   1.5 MB  2666  24 EUs  1150
Core i3-8350K  $179   4 / 4    91 W  4.0         8 MB   2.0 MB  2400  24 EUs  1150
Pentium G5600  $93    2 / 4    54 W  3.9         4 MB   2.0 MB  2400  24 EUs  1100
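
Taking the list prices and core counts from the table above, a quick cost-per-core comparison shows how the new stack sits against the outgoing flagship (a simple sketch using only the figures in the table):

```python
# Price-per-core comparison using the list prices and core counts from the table above.
skus = {
    "Core i9-9900K": (488, 8),
    "Core i7-9700K": (374, 8),
    "Core i5-9600K": (262, 6),
    "Core i7-8700K": (359, 6),  # previous-generation flagship, for reference
}

price_per_core = {name: price / cores for name, (price, cores) in skus.items()}
for name, ppc in price_per_core.items():
    print(f"{name}: ${ppc:.2f} per core")
```

Notably, the hyperthreaded i9-9900K carries a small per-core premium over the 8700K, while the i7-9700K's eight threadless cores work out considerably cheaper per core.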

Leading from the top of the stack is the Core i9-9900K, Intel’s new flagship mainstream processor. This part has eight full cores with hyperthreading, a base frequency of 3.6 GHz at a 95W TDP, and a turbo of up to 5.0 GHz on two cores. Memory support is up to dual-channel DDR4-2666. The Core i9-9900K builds upon the Core i7-8086K from the 8th Generation product line by adding two more cores and extending that 5.0 GHz turbo from one core to two. The all-core turbo is 4.7 GHz, so it will be interesting to see what the power consumption is when the processor is fully loaded. The Core i9 family will have the full 2 MB of L3 cache per core.

Also featuring 8 cores is the Core i7-9700K, but without the hyperthreading. This part will also have a 3.6 GHz base frequency at the same 95W TDP, but can turbo up to 4.9 GHz only on a single core. The i7-9700K is meant to be the direct upgrade over the Core i7-8700K, and although both chips have the same underlying Coffee Lake microarchitecture, the 9700K has two more cores and slightly higher turbo frequencies, but less L3 cache per core at only 1.5 MB.

The other important overclocking-focused processor is the Core i5-9600K, with six cores and no hyperthreading. Users will see a lot of similarity between this part and the previous-generation Core i5, but with added frequency.


At the time of writing, we are still awaiting pricing, although the Core i9-9900K was accidentally listed on Amazon for $582.50 last week. What we also saw from this accidental listing was the packaging: Intel would appear to be experimenting with a dodecahedron (a 12-sided figure) in an attempt to compete against AMD's elaborate packaging for its high-end CPUs. This is the first time in recent memory that Intel has expanded its packaging beyond the simple box, so it will be interesting to see how far the new packaging extends beyond the Core i9.

Edit: Pricing has been updated.

Per Core Turbo Ratios

Also included in the information we have received are the per-core turbo ratios for each of the three overclocking-capable CPUs.

Users who read our Core i7-8086K review, which was Intel’s first 5.0 GHz processor, may remember that it offered its 5.0 GHz turbo on only a single core, and the knock-on effect was that we rarely saw the magic 5.0 GHz number in practice. Because of background processes and whatnot, most workloads land in the 2-4 core turbo range on most processors. This time around, at least with the Core i9, Intel has set the first two cores at the peak turbo frequency.
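
Based on the figures Intel has disclosed for the Core i9-9900K (5.0 GHz on up to two active cores, 4.7 GHz all-core), a per-core turbo lookup can be sketched as follows. The intermediate 3-7 core values are placeholders for illustration, not confirmed ratios:

```python
# Sketch of a per-core turbo table for the Core i9-9900K.
# Intel has confirmed 5.0 GHz for 1-2 active cores and 4.7 GHz all-core;
# the intermediate (3-7 core) values below are assumptions, not official figures.
TURBO_GHZ_9900K = {1: 5.0, 2: 5.0, 3: 4.8, 4: 4.8, 5: 4.7, 6: 4.7, 7: 4.7, 8: 4.7}

def turbo_for_active_cores(active_cores: int) -> float:
    """Return the maximum turbo frequency (GHz) for a given active-core count."""
    if not 1 <= active_cores <= 8:
        raise ValueError("the Core i9-9900K has 8 physical cores")
    return TURBO_GHZ_9900K[active_cores]

print(turbo_for_active_cores(2))  # 5.0 - the headline two-core turbo
print(turbo_for_active_cores(8))  # 4.7 - the all-core turbo
```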

TIM: Soldered Down Processors

Intel has officially confirmed that the new 9th generation processors will feature a layer of solder as the thermal interface material (TIM) between the die and the integrated heat spreader (IHS). The soldered processors include the Core i9-9900K, the Core i7-9700K, and the Core i5-9600K.

In recent generations Intel has opted for a cheaper thermal interface, a polymer paste, which has a lower thermal conductivity than solder; this led to a number of aftermarket solutions that let users delid the IHS and replace the cheaper material themselves. The switch back to a solder-based TIM means that, clock for clock, the new processors on the 14++ process should in effect run cooler, potentially letting Intel and users push overclocks a little further.
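
As a rough illustration of why the interface material matters, the temperature drop across the TIM layer scales as ΔT = P·t/(k·A). The thicknesses and conductivities below are generic textbook-style assumptions, not Intel's actual materials, but they show the order-of-magnitude gap between paste and solder:

```python
# Rough estimate of the temperature drop across the TIM layer: dT = P * t / (k * A).
# All material values here are illustrative assumptions, not Intel's actual specs.
P = 95.0          # heat flowing through the TIM, W (the rated TDP)
A = 177e-6        # die area, m^2 (~177 mm^2, the estimated 8-core die)
t = 100e-6        # TIM layer thickness, m (~100 um, assumed equal for both materials)

k_paste = 5.0     # W/(m*K), typical polymer thermal paste (assumed)
k_solder = 80.0   # W/(m*K), indium-based solder (assumed)

dT_paste = P * t / (k_paste * A)    # temperature drop across a paste layer
dT_solder = P * t / (k_solder * A)  # temperature drop across a solder layer
print(f"paste: {dT_paste:.1f} K, solder: {dT_solder:.1f} K")
```

With these assumed values the paste layer alone accounts for roughly ten degrees of the die-to-IHS gradient, versus under one degree for solder, which is why delidding became popular in the first place.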

Coffee Lake Refresh: Learning from the GPU Companies

Intel’s 9th Generation Core family is built around the Coffee Lake platform, and as the processors have not had any microarchitectural changes, they are refreshes of the 8th generation parts but with the product stack laid out a little differently. For those keeping track, Coffee Lake was already a rehash of Kaby Lake, which was an update to Skylake. So we are on Skylake Refresh Refresh Refresh.

Intel's Core Architecture Cadence
Core Generation  Microarchitecture      Process Node  Release Year
2nd              Sandy Bridge           32nm          2011
3rd              Ivy Bridge             22nm          2012
4th              Haswell                22nm          2013
5th              Broadwell              14nm          2014
6th              Skylake                14nm          2015
7th              Kaby Lake              14nm+         2016
8th              Kaby Lake-R            14nm+         2017
                 Coffee Lake-S          14nm++        2017
                 Kaby Lake-G            14nm+         2018
                 Coffee Lake-U/H        14nm++        2018
                 Whiskey Lake-U         14nm**        2018
                 Amber Lake-Y           14nm**        2018
                 Cannon Lake-U          10nm*         2018
9th              Coffee Lake Refresh    14nm**        2018
Unknown          Ice Lake (Consumer)    10nm?         2019?
                 Cascade Lake (Server)  14nm**        2018-2019
                 Cooper Lake (Server)   14nm**        2019
                 Ice Lake (Server)      10nm          2020
* Single CPU For Revenue
** Intel '14nm Class'

Intel has promised that its 10nm manufacturing process will ramp through 2019, and has already announced that it will introduce Ice Lake for servers on 10nm in 2020, after another run of 14nm with Cooper Lake in 2019. On the consumer side, the status is still in limbo – with any luck, the next generation of consumer parts will be a proper update to the microarchitecture, regardless of the process node.

Hardware and Software Security Fixes

What makes this a little different are the eight-core products. In order to make these, Intel had to create new die masks for the manufacturing line, as their previous masks only went up to six cores (and before that, four cores). This would, theoretically, give Intel a chance to implement some of the hardware mitigations for Spectre/Meltdown. As of the time of writing, we have not been given any indication that this is the case, perhaps due to the core design being essentially a variant of Skylake in a new processor. We will have to wait until a new microarchitecture comes to the forefront to see any changes.

What Intel has been doing, however, is optimizing its manufacturing process. Many have reported that Intel’s 14nm family of technologies is the most profitable manufacturing node in the history of the company. It should be noted that Intel has now eschewed names such as ‘14+’ and ‘14++’ in official marketing, choosing instead to call it the ‘14nm family’ and to highlight optimizations in particular products. The results of these optimizations, whatever they might be, are usually bumps in frequency and performance at iso-power or lower power, often at the expense of a little die area. That would mean fewer dies per wafer, naturally increasing the cost of the product.

The processors announced so far are 8-core and 6-core parts, with leaks suggesting Intel will also produce 4-core and 2-core processors at a later date. There is no word on whether some of these parts share the same die (e.g. whether a 6-core is a native 6-core, a cut-down 8-core, or both, depending on the part). Making new die masks is expensive; however, it can be worthwhile depending on the volume of processors in each segment. Either way, Intel will want to do something with the 8-core dies that do not make the grade – a tactic it already uses in its enterprise parts.

Edit: We have since received information about the security updates. You can read about it here:

More Coffee, Less Caffeine: HyperThreading and L3 Cache

All this aside, it would appear that Intel is also forgoing HyperThreading on most of its processors. The only Core processors to get HyperThreading will be the Core i9 parts, with perhaps the Pentiums keeping it as well. This is partly to make the product stack more linear, so that cheaper chips do not tread on the toes of the more expensive ones (e.g. a quad-core with HyperThreading might outperform a 6-core without). The other angle, I suspect, relates to the side-channel attacks that become possible when HyperThreading is active: by disabling HyperThreading on the volume production chips, that security issue is no longer present. It also ensures that the threads on those chips are not competing for per-core resources.

One of the more interesting dissections of the new 9th Generation product stack is in the L3 cache per core for the different models. In previous generations, the Core i7 parts had 2 MB of L3 cache per core, the Core i5 had 1.5 MB per core, and the Core i3 was split between some models with 2 MB and others with 1.5 MB. This time around, Intel is only putting the full cache on the highest Core i9 parts, and reducing the Core i7 to 1.5 MB of L3 per core. This will have a slight knock-on effect on performance, which will be an interesting metric to test when we get the processors in hand.
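
The per-core figures fall straight out of the totals in the spec table above; a quick sanity check:

```python
# L3-per-core arithmetic for the announced 9th Gen parts (totals from the table above).
parts = {
    "Core i9-9900K": {"cores": 8, "l3_mb": 16},
    "Core i7-9700K": {"cores": 8, "l3_mb": 12},
    "Core i5-9600K": {"cores": 6, "l3_mb": 9},
}

l3_per_core = {name: p["l3_mb"] / p["cores"] for name, p in parts.items()}
print(l3_per_core)  # only the i9 keeps the full 2.0 MB per core
```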

I’ve had an 8-Core for Years!

Depending on where you draw the line for ‘consumer’ processors, we have technically had 8-core Intel CPUs in the high-end desktop space for a number of years. The Core i7-5960X was released in August 2014, featuring eight Haswell cores on the HEDT platform, with quad-channel DDR4-2133 memory and 44 PCIe lanes in a 140W TDP. Back then, on Intel’s 22nm process, the die size was around 355.52 mm2.

Back when Intel launched the first Coffee Lake processors, the 6+2 die design of the i7-8700K was around ~151 mm2, an increase of ~26mm2 over the 4+2 design of the i7-7700K (~125mm2). That also took place during a jump from Intel’s official 14+ to 14++ manufacturing nodes, which due to a relaxed fin pitch made everything a bit bigger anyway.

But if we take ~26mm2 as the high-end cost of adding a pair of cores, then we can predict that the 8+2 design of the Core i9-9900K should come in at around ~177 mm2, a 17% larger die. At 177mm2, this would be half the size of the Core i7-5960X, albeit with only half the memory controllers and PCIe lanes. Even so, it’s a sizeable decrease.

This is a mockup of what a 9900K might look like

This is where I point out that in order for Intel to keep profit margins the same on its highest parts, that 17% increase in die area might translate directly into a 17% increase in price. At the time of writing, Intel had not announced pricing, but with a tray price of $359 for the last-generation 8700K 6+2 chip, a 17% increase puts the new part in the region of $420. Anything more than that, and Intel is either increasing its margins or compensating for how many chips actually bin to the required frequency. But as shown in the Amazon listing above, $582 is a rather bigger increase.
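
The back-of-the-envelope numbers in the last few paragraphs can be checked in a couple of lines:

```python
# Back-of-the-envelope die size and price scaling, using the estimates above.
die_8700k = 151.0          # ~mm^2, the 6+2 Coffee Lake die
two_core_cost = 26.0       # ~mm^2 added per extra pair of cores (high-end estimate)

die_9900k_est = die_8700k + two_core_cost          # ~177 mm^2 for the 8+2 die
growth = die_9900k_est / die_8700k - 1             # ~17% larger die

price_8700k = 359.0                                # 8700K tray price, USD
price_if_margin_held = price_8700k * (1 + growth)  # ~$421 if margins stay flat

print(f"{die_9900k_est:.0f} mm^2, +{growth:.0%}, ${price_if_margin_held:.0f}")
```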

Integrated Graphics

One topic that Intel has not focused on much for several generations (since Broadwell, really) is integrated graphics. All the chips announced for the 9th generation family have the same GT2 configuration as the 8th generation, including the new Core i9 parts; officially these come under the 8+2 designation. Intel still believes that having some form of integrated graphics on these high-end, overclockable processors adds value to the platform. The only downside is the performance, which won’t be winning any awards soon.

The graphics will still be labelled as UHD Graphics 630, and use the same drivers as the 8th gen family.

Motherboards and the Z390 Chipset

One of the worst kept secrets this year has been Intel’s Z390 chipset. If you believe everything the motherboard manufacturers tell us, most of them have been ready for this release for several months, which is why we’re going to see 55+ new motherboard models hit the market over the next few weeks.

The Z390 chipset is an update to Z370, and both types of motherboards will support 8000-series and 9000-series processors (Z370 will need a BIOS update). The updates are similar to those seen with B360: native USB 3.1 Gen 2 (10 Gbps) ports, and an integrated Wi-Fi MAC on the chipset.

Intel Z390, Z370 and Z270 Chipset Comparison
Feature                               Z390         Z370         Z270
Max PCH PCIe 3.0 Lanes                24           24           24
Max USB 3.1 (Gen2/Gen1)               6/10         0/10         0/10
Total USB                             14           14           14
Max SATA Ports                        6            6            6
CPU PCIe Config                       x16,         x16,         x16,
                                      x8/x8,       x8/x8,       x8/x8,
                                      x8/x4/x4     x8/x4/x4     x8/x4/x4
Memory Channels                       2            2            2
Intel Optane Memory Support           Y            Y            Y
Intel Rapid Storage Technology (RST)  Y            Y            Y
Max Rapid Storage Technology Ports    3            3            3
Integrated 802.11ac WiFi MAC          Y            N            N
Intel Smart Sound                     Y            Y            Y
Integrated SDXC (SDA 3.0) Support     Y            N            N
DMI                                   3.0          3.0          3.0
Overclocking Support                  Y            Y            Y
Intel vPro                            N            N            N
Max HSIO Lanes                        30           30           30
ME Firmware                           12           11           11

The integrated Wi-Fi uses CNVi, which allows the motherboard manufacturer to use one of Intel’s three companion RF (CRF) modules as a PHY, rather than a potentially more expensive MAC+PHY combination from a different vendor (such as Broadcom). I have been told that implementing a CRF adds about $15 to the retail price of the board, so we are likely to see some vendors experiment with mid-priced models with and without Wi-Fi using this method.

One of the more impressive motherboards with TB3 - the ASRock Z390 Phantom Gaming-ITX/ac

For the USB 3.1 Gen 2 ports, Type-A ports are supported natively, and motherboard manufacturers will have to use re-driver chips to support Type-C reversibility. These come at extra cost, as one might expect. It will be interesting to see how manufacturers mix and match the Gen 2, Gen 1, and USB 2.0 ports on the rear panels now that they have a choice. I suspect it will come down to the signal integrity of the traces on the motherboard.

For the Z390 chipset and motherboards, we will have our usual every-board overview posted today, covering every model the manufacturers would tell us about. Interestingly, there is going to be a mini-ITX board with Thunderbolt 3, and one board with a PLX chip! There are also some motherboards with Realtek’s 2.5G Ethernet controller, which consumes only 1.6 W – now if only we also had consumer-grade switches.

Timeline: Full Launch on 19th October, with Reviews

From here, Intel will have CPUs on shelves on October 19th in most (if not all) major markets. The review embargo for the three K processors is also for the same date, at 9am Eastern Time. I’ll be running my new scripts on as many processors as possible. Let me know what comparisons you want to see in the comments below.

Comments

  • ziofil - Monday, October 8, 2018

    Also, screen resolution used to be less demanding.
  • nathanddrews - Tuesday, October 9, 2018

    Not really - people using CRTs during that time could run circles around LCD resolutions and refresh rates. Until the more common gaming-centric LCDs (1440p 120Hz) arrived, LCD had been a major regression in display capability. I played an awful lot of UT99, TF, and CS at resolutions and refresh rates over 1080p60 on my CRTs over the years.
  • mapesdhs - Wednesday, October 10, 2018

    I used 2048x1536 for a long time on a 22" Dell P1130 CRT, was great for Stalker, Far Cry 2, etc. When I finally switched (the Dell was losing its colour), it had to be IPS (the off-axis colour shifting of TN was just too annoying) and the lower vertical height of 1080p really grated my brain, so I bought a 1920x1200 24" IPS (HP LP2475W), which was pricey but good, though I did miss the extra height from 1536 which was handy for documents, web pages, etc. I had looked at 2560x1600 (the standard "high end" gaming resolution of the day) but the prices were way too high. People forget, but back then the typical tested gaming resolutions were 1024x768, 1280x1024, 1600x1200 or 1920x1200, and 2560x1600. I think the first newer aspect ratio to come along was 1680x1050. How it's changed, nowadays almost everyone is used to screens with a wider aspect ratio, and a decent IPS 27" 1440p is only 200 UKP:
  • Flying Aardvark - Wednesday, October 10, 2018

    You may have played at over 60hz, but you didn't play at a higher resolution than 1080P and especially not higher than 1080P with a refresh rate over 60hz. And the refresh rate isn't apples to apples. LCD refreshes per pixel.
  • willis936 - Tuesday, October 9, 2018

    I paid less than $300 for my Q6600 in 2007.
  • Hrel - Monday, October 8, 2018

    Moore's law is dead, computers are lasting WAY longer than ever before. These combine to shrink both the market and the margins.

    $1000 for a good computer is going to become a pipe dream.
  • abufrejoval - Tuesday, October 9, 2018

    If you are ready to aim just a little lower: I bought an Acer 17" notebook with a GTX 1050 Ti and a 35 W TDP non-HT i5 Kaby Lake for €800. It turned out to be quite a solid gaming machine, even at 1920x1080.

    The 2080 Ti is about 4K, inference, and a novel way to do more realistic graphics: simply another ball game.
  • mapesdhs - Wednesday, October 10, 2018

    It's not though remotely about 4K *and* realistic graphics; in that regard NVIDIA has very much pushed the focus back down to 1080p and sub-60Hz frame rates, because the tech just isn't fast enough for anything better. A lot of gamers who spend that kind of money are far more likely to just turn off the relevant features in exchange for better frame rates, especially if they're gaming with 1440p/4K and/or high-frequency monitors (the latter being something that's very hard to pull back from if one has gotten used to it, the brain's vision system gets retrained over time, New Scientist had an article about this last year). As for more realistic, visually speaking yes (though NVIDIA's demos were grud awful and often highly misleading), but not a single thing they talked about in any way implied improved game worlds with respect to world/object functionality which IMO is far more important.

    For example, many people moaned about the changed water effect in the Spiderman game, but my first thought was, is it wet? Can I use it in the manner implied by its appearance? Drink it? Use it to fill a bottle? Put out a fire? Ditto the terrible looking fire effect in the NVIDIA demo, is it hot? Does it radiate heat? Does it make the gun barrel hot to touch afterwards? Does it melt the glass of the carriage nearby if too close? (no) Similar questions about the reflective glass, can I break it, use a shard as a weapon? Do the pieces persist, acting as an audible warning if enemies step on them? Can I throw a grenade into the carriage, make the glass be a weapon against passing enemies via the explosion? Can I cut down trees to block a road? Dig a pit as a trap? Remove a door? All of these questions relate to interactivity with the world, and it's this which makes for a genuinely more engaging and immersive environment, not pure visual realism. Our environment is interesting because of the properties of materials and objects around us and how they can interact, with us and with each other. If anything, the more visually realistic a game world appears to be, all the more jarring it is when an object cannot be used in a manner implied by how it looks. At least years ago a fake door was kinda obvious, one could see the way the texture was part of the frame surround, but these days it's all too easy to see something, assume one can interact with it, only to then be disappointed. Nope, can't open that door, can't pick up that object (or if you can it's just a generic object, it has no real functionality), the ground surface cannot be changed (oh how I miss the original Red Faction), substances are just visual effects, they don't affect the world in a meaningful manner, etc.

    I want functional game worlds; if they look great as well then that's icing on the cake, but less important overall. Elite Dangerous can look amazing, but it lacks functional depth, whereas Subnautica has far more basic visuals but is functionally fascinating and very engaging indeed.

    Put bluntly, Turing isn't a gaming technology at all, it's a table scrap spinoff from Enterprise compute, packaged in a manner designed to make gamers believe there's this great New Thing they never knew they always wanted, launched using highly deceptive PR, with demos that looked generally terrible anyway (I mean really, the fire effect was just a bunch of sprites, the box scene was b.s. from start to finish, etc.) The demos made out like without RTX these effects would look awful, but games already do things that look really good without RTX, in many cases better than the way the demos looked. Their focus on such irrelevance as the reflections in a soldier's eye was also stupid; who cares? Just make the bullets go into the bad guy! Too damn busy to think about anything else; oh wow, look at that amazing raytraced effect, it looks just - oh dear, I got shot.

    NVIDIA helped create the market for high-res, high-frequency gaming and VR, yet now they've about-faced and are trying to drop back to 1080p because their baseline Enterprise tech isn't pushing for higher rasterisation anymore. These are not gaming cards now, they're spinoffs from compute.

    As I've posted before, a teapot can look as amazing as you like, with all sorts of fancy reflective surface effects & suchlike, but unless I can use it to make tea then it isn't a teapot, a concept explored in more detail from the following old but now even more relevant article:

    All of this, along with the crazy pricing, RAM capacity stagnation and other issues (what's the point of preorders if they're delivered late?) shows very clearly where this industry is headed. I think gamers would be far better served if devs just made better, more engaging, more interactive and more functional game worlds. Sure there'll always be a market for the FPS titles where people largely couldn't give a hoot about such things, but then again, Crysis was so impressive at launch partly because it was also functionally interactive in a manner that hadn't been done before.

  • eddman - Wednesday, October 10, 2018

    What you are asking for has nothing to do with graphics and video cards though. These are game logic related functionalities that run on the CPU. I don't think CPUs are yet powerful enough to run something that is basically a reality simulator.

    Besides, coding such games would require so much money and time that I don't think any game studio would even want to do it. They want to create a game and not a simulator.
  • cjl - Monday, October 8, 2018

    A QX6800 and 8800 Ultra would set you back almost $2000 around that time
