Thanks Ian! One initial observation: that slide or picture from Dell showing processor options for their 2in1 has the i7 with 4 MB of cache; my guess is Dell needs a proofreader in their marketing department (:
It's likely those cache numbers are the other way round, i.e. quad core i7 with 8MB, dual core i3 with 4MB. That would align with what we would expect.
Agree. Looks a lot like somebody at Dell didn't check after cut-and-paste. Has Dell announced an expected shipping date? As a launch partner, they're likely to be among the first to ship finished product.
Not yet, as far as I know.
Now we know the cause of Intel's manufacturing hell. Ian's been biting their wafers... :)
iirc Dell said NDA was lifting August 1st on the ICL XPS 13
Page 3, the table about In-Flight Stores and Loads: the values are switched around, or else the paragraph after it is wrong. Otherwise it looks like a great article. Still reading :)
I'm a bit surprised to see pen support on the Athena requirement list. Everything else seems reasonable as an evolution of mainstream designs, but pens have always been very niche, something I don't see changing. Even if pen support is baked into Intel's <1W LCDs, including a pen is going to drive up system costs. And if it's loose, it's just going to end up lost; if a pen holster is required, that's a decent amount of internal volume in increasingly thin and dense designs.
There are already tons of laptops with pen support but no included pen or internal holster.
Why did they not go with HDMI 2.1 and PCIe 4.0?
AMD's newly released 5700 (XT) doesn't support HDMI 2.1, so it's not surprising Intel doesn't support it either. And PCIe 4.0 would be a power hog.
The 5700 cards don't support VirtuaLink either, despite AMD belonging to the consortium since the beginning like nvidia and the RTX cards having it for about a year.
First generation Navi cards are just very, very late.
PCI-E 4 currently needs chipset fans on desktop parts; the power needed isn't suitable for 15-28W mobile yet.
Because Intel product releases have been a mess since the 10nm trainwreck began. Icelake was originally supposed to be out a few years ago. I suspect PCIe4 is stuck on whatever upcoming design was supposed to be the 7nm launch part.
HDMI 2.1 is probably even farther down the pipeline; NVidia and AMD don't have 2.1 support on their discrete GPUs yet. Intel has historically been a lagging supporter of new standards on their IGPs, so that's probably a few years out.
This whole argument that "real world" benchmarks equate to "most used" is rather dumb anyway. We don't need benchmarks to tell us how much faster Chrome opens Reddit, because the answer is always the same: fast enough to not matter. We need benchmarks at the fringes for those reasons brought up in the room: measuring extremes in single/multi threaded scenarios, power usage, memory speeds; finding weaknesses in hardware and finding flaws in software; and taking a large enough sample to be meaningful across the board.
Intel wants to eat its cake and still have it - to be fair - who doesn't? But let's get real, AMD is kicking some major butt right now and Intel has to spin it any way they can. What's funny is that the BEST arguments that I've heard from reviewers to go AMD actually have nothing to do with performance, but rather with the Zen platform as a whole in terms of features, upgradeability, and cost.
I say this as a total Intel shill, too. The only AMD systems running in my house right now are game consoles. All my PCs/laptops are Intel.
Interesting to read what Intel suggested some of their arguments in the server space would be: lower TCO like the old Microsoft argument against Linux, and having to revalidate all your stuff to use an AMD platform. Some quotes (from a story in their internal newsletter; the full thing is floating around out there, but couldn't immediately find):
https://www.techspot.com/news/80683-intel-internal...
I mean, they'll be fine long term, but trying to change the topic from straightforward bang-for-buck, benchmark results, etc. is an approach you only take in a...certain sort of situation.
Unfortunately, your average IT infrastructure guy no longer knows how fast a Xeon Platinum 8168 is vs an AMD EPYC 7601. They just ask OEMs like Dell or HP to sell them a solution. I've even seen cases where faster solutions were replaced with slower solutions because they were more expensive and the numbers looked bigger. It turns out that the numbers that looked bigger were not the numbers that they should have been paying attention to.
One company I worked at almost bought a $100,000 (yeah I know, small change, but it was a small company) pre-built system. We, as software developers, talked them into letting us handle it instead. We knew a lot about hardware and as a result? We spent around $15,000 in hardware costs. Yes there were labor costs involved in setting everything up, but it only took about 2 weeks for 4 guys, 2 of whom were juniors. Had we gone with the blade system, there would have been extensive training needed, which would have cost about the same in labor. Our solution was fully redundant and a hell of a lot faster (the blade system used hardware that was slower than our solution, and it was also a proprietary system that we would be locked into, so there was an additional service contract that cost $$$ and would have to be signed).
During my entire time there, we had very few issues with the solution we built, outside the occasional hard drive (2 drives in 4 years IIRC) dying and having to pop it out, pop in a new one, and let the RAID rebuild. Zero downtime. In addition, our wifi solution allowed roaming all over a giant building without dropping the signal. Speeds were lightning fast and QoS allowed us to keep someone from taking up too much bandwidth on the guest network. The entire setup worked like a dream.
We also wanted to use a different setup for the phone system, but they opted to work with a vendor instead. They paid a lot of money for that, and constantly had issues. The administration software was buggy, sometimes the entire system would go down, even adding a user would take down the entire system until things were updated. IIRC after I left they finally switched to the system we wanted to use and had no issues after that.
Uh, I would not be putting cobalt anywhere near my mouth
Real men aren't scared of a few toxic chemicals entering their digestive systems! Clearly you and I are not real men, but we now have a role model to emulate over the course of our soon-to-be-shortened-by-cancer lives.
Are those RAM/SSD targets really "greater than" 8GB/256GB or is it supposed to be "greater than or equal to"?
Either way I would love to see an end to companies selling >$1000 machines with pathetically low RAM/storage and then charging 500%+ markups to upgrade them to something decent. Like Microsoft's $1200 to go from 4/128 to 16/512.
I can't believe Microsoft has been using 4 GB as their base amount for the last six years. At some point it becomes insulting.
“Intel uses the ‘U’ designation for anything from 9W to 28W TDP, which covers all the mobile ultra-premium devices.”
No they don’t. 9W are Y Series, 15 and 28W are U Series. This is all clearly stated in Intel’s publicly available product briefs for 10th Generation Core processors.
I'd be curious for more information on the Y processors - what the performance difference is between Y and U. It looks like these Ice Lake chips are designed for ultraportable machines and not designed to replace the higher end ones - even like my Dell XPS 15 2in1 - I am really curious about that replacement - its GPU is probably short lived, possibly replaced by an updated higher voltage Ice Lake with Gen 11 graphics or a new version with Xe graphics. I also have a Dell XPS 13 2in1 with a Y processor - I am actually bringing it to a meeting today - it is lightweight and does not really need that much power for what I am using it for. I think it will be very interesting to compare this new XPS 13 2in1 and the existing XPS 15 2in1 - yes, the 15 2in1 has a faster processor - but it is not Ice Lake and that could make a huge difference.
4.2% annual IPC growth doesn't sound great but it is better than anything we've seen since Sandy Bridge.
And that should make people question the claims about performance increases. Mind you, how much performance has been lost on Intel chips due to the security issues? Intel may be comparing theoretical performance increases, without disclosing the fact that first through 9th generation have actually lost performance since launch due to security mitigations.
So, +18% IPC, but -20% due to security mitigations for issues that are still there. Has Intel actually fixed the problems with the memory controller and other problems that allow for Meltdown and the other problems, rather than mitigating around the problem? If a problem has existed since first generation Core processors that turns out to be a design flaw, that also shows that the fundamental core design hasn't changed all THAT much.
Meltdown and some of the first Spectre mitigations are going to be fixed in the hardware. Later Spectre variants are probably only fixed in microcode and software.
Where that line is drawn is going to be determined by when they froze the physical design for tapeout.
I'm not knocking Intel on the IPC growth. If they had an 18% increase, great for them! However, mobile Intel CPUs of any variant (U, HK, Y, etc.) are much slower than their desktop counterparts. My Core i7 2600k absolutely destroys the 6700HK in my laptop. Laptops in general are designed to be low power devices, so performance is never the number one factor in designing a laptop, even on the high end. The only exception to this is the so called 'desktop replacements' that weigh a ton, have desktop class hardware, and basically need to be plugged in to not have the battery die after an hour.
That's also the reason I take this announcement with a grain of salt. 18% on mobile is one thing. 18% on the desktop is something else. As I've mentioned to people here and elsewhere, the smaller the process, the harder it is to maintain high clock speeds. Also, from reading certain documentation, it seems that part of that 18% is counting the addition of AVX-512. I could be mistaken though.
Wow, really? That has not been my experience at all. My 6700hq has generally been (usually significantly) better performing than my 2600k for the vast majority of tasks I've thrown at it.
Any task that requires sustained compute will of course suffer on the lower power budget on mobile. But tasks which require short bursts of activity will do better thanks to vastly improved turbo since the 2600k. So depending on what you do, your impression might very well be accurate.
“Each CPU has 16 PCIe 3.0 lanes for external use, although there are actually 32 in the design but 16 of these are tied up with Thunderbolt support.”
This isn’t quite right. The ICL-U/Y CPU dies do not expose any PCIe lanes externally. They connect to the ICL PCH-LP via OPI and the PCH-LP exposes up to 16 PCIe 3.0 lanes in up to 6 ports via HSIO lanes (which are shared with USB 3.1, SATA 6Gbps, and GbE functions). So basically no change over the 300 Series PCH.
The integrated Thunderbolt 3 host controller may well have a 16-lane PCIe back end on-die, and I’m sure the CPU floorplan can accommodate 16 more lanes for PEG on the H and S dies, but that’s not what’s going on here.
The SoC architecture shows a direct path for the Thunderbolt 3 PCIe lanes to the CPU, with only USB 2 going across OPI. Whatever PCIe lanes are available on the PCH are in addition to those available via TB3.
https://images.anandtech.com/doci/14514/Blueprint%...
The Thunderbolt 3 controller is part of the CPU die. There are four PCIe 3.0 x4 root ports connected to the CPU fabric that feed the Thunderbolt protocol converters connected to the Thunderbolt crossbar switch (the Converged I/O Router block in that diagram). The CPU exposes up to three (for Y-Series) or four (for U-Series) Thunderbolt 3 ports. The only way you can leverage the PCIe lanes on the back-end of the integrated Thunderbolt 3 controller is via Thunderbolt.
The PCH is a separate die on the same package as the CPU die. The two are connected via an OPI x8 link operating at 4 GT/s which is essentially the equivalent of a PCIe 3.0 x4 link. The PCH contains a sizable PCIe switch internally which connects to the back-ends of all of the included controllers and also provides up to 16 PCIe 3.0 lanes in up to 6 ports for connecting external devices. These 16 lanes are fed into a big mux which Intel refers to as a Flexible I/O Adapter (FIA) along with all the other high-speed signals supported by the PCH including USB 3.1, SATA 6Gbps, and GbE to create 16 HSIO lanes which are what is exposed by the SoC. So there are up to 16 PCIe lanes available from the Ice Lake SoC package, all of which are provided by the PCH die, but they come with the huge asterisk that they are exposed as HSIO lanes shared with all of the other high-speed signaling capabilities of the PCH and provisioned by a PCIe switch that effectively only has a PCIe 3.0 x4 connection to the CPU.
This is not at all what Ian seemed to be describing, but it is the reality.
And the USB 2.0 signals for the Thunderbolt 3 ports do indeed come from the PCH, but they do not cross the OPI, they're simply routed from the SoC package directly to the Thunderbolt port. The Thunderbolt 3 host controller integrated into the CPU includes a USB 3.1 xHCI/xDCI but does not include a USB 2.0 EHCI.
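To put rough numbers on that OPI x8 link being "essentially the equivalent of a PCIe 3.0 x4 link", here's a quick back-of-the-envelope check (a sketch only; treating OPI's encoding as PCIe-like 128b/130b is an assumption, and packet/protocol overheads are ignored):

```c
#include <stdio.h>

/* Raw link bandwidth in GB/s: lanes x transfer rate x encoding
 * efficiency / 8 bits. 128b/130b is PCIe 3.0's encoding; applying
 * it to OPI is an assumption for illustration. */
static double link_gb_per_s(int lanes, double gt_per_s) {
    return lanes * gt_per_s * (128.0 / 130.0) / 8.0;
}

int main(void) {
    printf("PCIe 3.0 x4        : %.2f GB/s\n", link_gb_per_s(4, 8.0));   /* ~3.94 */
    printf("OPI x8 @ 4 GT/s    : %.2f GB/s\n", link_gb_per_s(8, 4.0));   /* ~3.94 */
    printf("16 downstream lanes: %.2f GB/s\n", link_gb_per_s(16, 8.0));  /* ~15.75 */
    return 0;
}
```

Both links work out to ~3.94 GB/s, while the PCH's 16 downstream lanes could in theory move ~15.75 GB/s - hence the roughly 4:1 oversubscription asterisk described above.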
I was looking at buying Dell's XPS 15.6" (7590 model), but with Project Athena laptops a few months away, I think I'll wait. Intel parts for solid reliability and unified drivers, and "4 hours of battery life with <30min of charging" - those 2 on their own make the wait worth it for me!
“The connection to the chipset is through a DMI 3.0 x4 link...”
Should be OPI x8 for U/Y Series.
“...Ice Lake will support up to six ports of USB 3.1 (which is now USB 3.2 Gen 1 at 5 Gbps)...”
They’re USB 3.1 Gen 2 ports, so it’s six USB 3.2 Gen 2 x 1 (10 Gbit/s) ports.
Well, for one, it is certainly not realistic to run single thread benchmarks on applications that support multithreading. Realistically, most (all?) people will run the application multithreaded.
As a developer for many years: multiple threads are useful for handling utility threads and such - but IO is typically an area which still has to be single threaded. Unless the APIs have changed significantly, it is very difficult to multi-thread the actual screen. And similarly for disk IO as a resource.
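As a rough sketch of that pattern - compute fanned out across threads, with the shared I/O resource serialized behind a single lock - something like this (pthreads; the workload and names are invented for illustration):

```c
#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 4

static pthread_mutex_t io_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    long sum = 0;
    for (long i = 0; i < 1000000; i++)   /* the parallelizable compute */
        sum += i ^ id;
    pthread_mutex_lock(&io_lock);        /* the single-threaded I/O path */
    printf("worker %ld: %ld\n", id, sum);
    pthread_mutex_unlock(&io_lock);
    return NULL;
}

int main(void) {
    pthread_t t[N_WORKERS];
    for (long i = 0; i < N_WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

The compute scales with cores; the serialized section is the part that stays effectively single threaded no matter how many workers you add.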
"Our best guess is that these units assist Microsoft Cortana for low-powered wake-on voice inference algorithms ..."
Our best guess is that these are designed for use by assorted three-letter agencies.
Open mics are totally okay. There is absolutely no privacy risk to you at all and you should never give it a second thought.
With 4x TB3 connections available, I wonder if the maker of an external GPU box could develop a multiplexer that combined two TB3 connections into a PCIe 3.0 x8.
This would significantly decrease some problems that eGPU owners are having due to relatively low CPU-GPU bandwidth.
Are there really that many eGPUs out there though?
Just do a search on Amazon for eGPU and you find 3 pages full of them. ASUS, Gigabyte, and Dell are examples, plus many 3rd party.
Not for sale. Actually in use by people.
2.
One of the reasons that eGPU adoption rates are low is precisely because of the limitation mentioned above - huge performance drop (anywhere from 30-50% I think) compared to a direct PCI-E connection, due to bandwidth limitations.
The performance issue is TB3 overhead. Running a GPU on an internal PCIe3 x4 link will come within a few percent of an x16.
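Some rough numbers behind that (a sketch; the ~22 Gb/s ceiling for PCIe tunneled over TB3 is the commonly reported figure, not an official spec value):

```c
#include <stdio.h>

int main(void) {
    /* PCIe 3.0: 8 GT/s per lane with 128b/130b encoding */
    double lane = 8.0 * (128.0 / 130.0) / 8.0;            /* ~0.985 GB/s */
    printf("TB3 PCIe tunnel : ~%.2f GB/s\n", 22.0 / 8.0); /* reported cap */
    printf("PCIe 3.0 x4     : %.2f GB/s\n",  4 * lane);
    printf("PCIe 3.0 x8     : %.2f GB/s\n",  8 * lane);
    printf("PCIe 3.0 x16    : %.2f GB/s\n", 16 * lane);
    return 0;
}
```

On those figures, even two tunnels combined (~5.5 GB/s) would fall short of a true x8 (~7.9 GB/s) unless the tunneling overhead could also be avoided.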
"There is still some communication back and forth with the chipset (PCH), as the Type-C ports need to have USB modes implemented."
Just to add to that, all of the high-speed signaling (Thunderbolt, DisplayPort, SuperSpeed USB) is handled by the CPU die, and the PCH only transacts in USB 2.0 and the low-speed, out-of-band management channel for Thunderbolt (LSx).
I fully understand why you (Ian) included the asterisk, because many OEMs won't bother implementing Thunderbolt 3 due to the additional expense / complexity, but to be fair to Intel, they integrated Thunderbolt as much as they possibly could. It's really not feasible to include all of the power and analog signal conditioning necessary for Thunderbolt 3 on the CPU package.
The numbers of the iGPU don't really add up. They are comparing their best new GPU to last gen's UHD 620. I don't see the performance gain.
Probably:
CML-U: i7-10510U 4C 1.8GHz, TB 4.9/4.8/4.3GHz; i5-10210U 4C 1.6GHz, TB 4.2/4.1/3.9GHz
ICL-U: i7-1065G7 4C 1.3GHz, TB 3.9/3.8/3.5GHz; i5-1034G1 4C 0.8GHz, TB 3.6/3.6/3.3GHz
You do know pure Silicon is highly toxic? I wouldn't even hang it in your house unless it was plated in a clear lacquer. It degrades when exposed to open air. And it's been shown to cause cancer. I know you wouldn't actually bite it. But be sure to wash your hands.
A lot of the chemicals used in wafer processing are quite nasty. A bare wafer itself is pretty harmless unless you grind it up and inhale it. Solid elemental silicon or silicon dioxide is safe to handle.
I understand that there are embargos which must be respected and that Anandtech does not like to trade in unsubstantiated rumors, but much of what is presented regarding packaging and power seems a bit wishy-washy.
Wikichip has had photos of both sides of the Ice Lake U and Y packages posted for some time now. Furthermore, Intel's product briefs are very clear on the power for each series:
Ice Lake Y: Nominal TDP 9 W, cTDP Down 8 W on Core i3 only, cTDP Up 12 W but N/A on Core i3
Ice Lake U: Nominal TDP 15 W, cTDP Down 12 W (13 W for some UHD parts), cTDP Up 25 W
AFAIK, no 28 W Ice Lake-U parts have been announced by Intel yet, but they most likely are in the works.
And you can cite whatever reasons you care to, but by all reports Intel was initially targeting a 5.2 W TDP for Ice Lake-Y 4+2, and that entire platform has been solidly shifted into the 8-12 W range.
Also, it should be noted that the 14nm 300 Series chipsets that Intel has been shipping for some time now are all Cannon Point, which was originally designed to complement Cannon Lake, and are almost identical in terms of capabilities to the 400 Series. And the particular designation for the Ice Lake PCH-LP according to Intel is "495 Series".
Ian, you either have your graph or your paragraph about the store/load performance increases reversed. The graph says 72 -> 128 stores, 56 -> 72 loads. The paragraph below it says 72 -> 128 loads, 56 -> 72 stores.
While I do enjoy and mostly want to read Dr Ian Cutress' articles, I seriously don't want to read Intel's marketing hype. Actions and results speak louder than PowerPoint slides. Ship it, let Anandtech test it. And we make an opinion on it.
The Ryzen 7 3700U is a Zen+ part on 12nm, without the big IPC plus clock speed improvements seen with the desktop CPUs. As a result, Intel is doing a comparison against the previous generation products for laptops.
In laptops, getting max turbo or boost for more than one second is rare. Yea, Intel can put a laptop chip on a board on a bench without any enclosure to show the chip, but real world speeds will be quite a bit lower. That is true for both AMD as well as Intel, and it is up to the OEMs to come up with a design to keep the chips cool enough to run faster than the competition.
AMD knows what is going on, so if I am correct, AMD will move up the release of the next generation of laptop chips to November. If AMD does the right thing, AMD will call the new chips the 3250U, 3400U, 3600U, and 3800U to bring consistency with desktop naming conventions. These new chips would be 7nm with either Vega or Navi, for an APU it is less important than going 7nm for both.
Keep in mind, the only comparison they did with Ryzen (I think) was graphics, not CPU. I'd imagine the Ice Lake chips have a solid CPU lead against quad core Ryzen based on Zen/Zen+. Zen 2 will certainly help close that gap, but it should still be roughly 15-20% behind Ice Lake in IPC, and it certainly won't be ahead that much on frequency.
I think in Q4 19 they'd release the Ryzen 4000 series (based on Zen 2) and call it a day, like last year or two years ago.
They'll be 4300U, 4500U, and 4700U for the U-series and 4350H, 4550H, and 4750H for the high-performance parts, with integrated graphics based on Navi.
But since Zen 2 now has 8 cores on each chiplet, they'd probably also sell 6-core and 8-core parts, but I don't know if they'll release those in the U-series.
Thanks Ian! So how does Ice Lake purportedly stand next to Apple's A12X in the iPad Pro, based on the Spec scores?
More importantly, how does Ice Lake taste?
I haven't seen anything that successfully compares x86 based CPUs with ARM based CPUs.
But one thing that makes all this MacBook ARM stuff meaningless to me is one sheer fact - Apple has yet to release development tools for iOS on actual iOS. It might be Apple trying to force Macs for development, but Apple's own development tools don't run on iOS.
That’s an idiotic chain of reasoning. ARM Macs will ship with macOS, not iOS. To believe otherwise only reveals that you know absolutely nothing of how Apple thinks.
As for comparison, the rough number is A12X gets ~5200 on GB4, Intel best (non-OC’d) gets ~5800. That’s collapsing lots of numbers down to one, but comparing benchmark by benchmark you see Apple does very well (almost matching Intel) across an awful lot.
If Apple can maintain its past pace (and there is no reason why not...) we can expect A13X to be anywhere from 20% to 35% faster, which puts it well into “fastest [non-OC’d] CPU on earth” territory for most single-threaded use cases. Can they achieve this? Absolutely. Just process improvement can get them 10% frequency. I expect A13X to clock around 2.8GHz. Then there is LPDDR5 which I expect they will be using, so substantially improved memory bandwidth. Then I expect they'll have SVE (2x256) and accompanying that basically double the bandwidth all the way out from L1 to DRAM. These are just the obvious basics. There are a bunch of things they can still do that represent “fairly easy” improvements to get to that 25% or so. (These include more aggressive fusion, a double-pumped ALU, attached ALUs to load/store to allow load-ok and op-store fusion, a micro-op cache, long-term-parking, criticality prediction, ...)
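For what it's worth, the arithmetic behind that claim is easy to check against the rough scores quoted above (illustrative only - the scores themselves are ballpark figures):

```c
#include <stdio.h>

int main(void) {
    double a12x = 5200.0, intel_best = 5800.0;     /* rough GB4 scores */
    printf("A13X at +20%% : %.0f\n", a12x * 1.20); /* 6240 */
    printf("A13X at +35%% : %.0f\n", a12x * 1.35); /* 7020 */
    /* gain needed just to match the quoted Intel score */
    printf("break-even   : +%.1f%%\n", (intel_best / a12x - 1.0) * 100);
    return 0;
}
```

Anything past roughly +11.5% would clear the quoted Intel number, so even the low end of that 20-35% range would be enough.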
So, if it’s so easy, why doesn’t Intel also do it? Why indeed? That’s why I occasionally post my alternative rant about how INTC is no longer an engineering company, it is now pretty much purely a finance company...
Sorry, but both these comments seem mighty uninformed. The MacBooks Air and Pro currently and in the foreseeable future all run on Intel CPUs. The Apple Chips A12/13 are used in iPhone, iPad and the likes.
And regarding your prediction, your enthusiasm seems way over the top. What are you even talking about? Micro-op cache on a RISC processor? Think again. Aren't RISC commands all micro ops already?
Strong the Dunning-Kruger is with this one... Dude, seriously, learn something about MODERN CPU design, more than just buzz-words from the 80s. To get you started, how about you read https://www.anandtech.com/show/14384/arm-announces... and concentrate on understanding EVERY aspect of what's being added to the CPU and why. Note in particular that 1.5K Mop cache...
More questions to ask yourself:
- Why was 80s RISC obsessed with REDUCED instructions?
- Why was ARM (especially ARMv8) NOT obsessed with that? Look at the difference between ARMv8 and, say, RISC-V.
- Why is op-fusion so important a part of modern high performance CPUs (both x86 and ARM [and presumably RISC-V if they EVER ship a high-performance part, ha...])?
- Which are the fast (shallow logic, even if it's wide) and which are the slow (deep logic) parts of a MODERN pipeline?
Oh my, this is so entertaining you should charge for the reading.
You demand to go beyond just buzzwords (which would be good) while your posts look like entries to a contest on how many marketing phrases can be fit into a paragraph. Then you even manage to combine this with highly rude idiom. Plus you name a psychological effect but fail to apply it to self-reflection. And as the cherry on top you obviously claim for yourself to understand "EVERY aspect" of a CPU (an unimaginably complex bit of engineering) but even manage to confuse micro- and macro-op caches and the conceptual differences between them.
I'm really impressed by your courage. Publicly posting so boldly on such a thin basis is brave. Your comments add near zero information but are definitely worth the read. Pure comedy gold!
Please see this as an invitation to reply. I'm looking forward to some more of your attempts to insult.
"The high-end design with 64 execution units will be called Iris Plus, but there will be a ‘UHD’ version for mid-range and low-end parts, however Intel has not stated how many execution units these parts will have."
Ah, but they have: Ice Lake-U Iris Plus (48EU, 64EU) 15 W, Ice Lake-U UHD (32EU) 15 W. So their performance comparisons may even be to the 15 W Iris Plus with 64 EUs, rather than the full fat 28 W version.
"On display pipes, Gen11 has access to three 4K pipes split between DP1.4 HBR3 and HDMI 2.0b. There is also support for 2x 5K60 or 1x 4K120 with a 10-bit color depth."
The three display pipes are not limited to 4K, and are agnostic of transport protocol—each of them can be output via the eDP 1.4b port, one of the 3 DDI interfaces which can support either DisplayPort 1.4 or HDMI 2.0b, or one of the up to 4 Thunderbolt 3 ports. Both HDMI and DP support HDCP 2.2, and DisplayPort also supports DSC 1.1. The maximum single pipe, single port resolution for HDMI is 4K60 10bpc (4:2:2), and for DisplayPort it's 4K120/5K60 10bpc (with DSC).
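A quick calculation shows why the DisplayPort 4K120 10bpc case needs DSC (a sketch; blanking overhead is ignored here, which only strengthens the conclusion):

```c
#include <stdio.h>

int main(void) {
    double pixels  = 3840.0 * 2160.0 * 120.0;  /* pixels per second */
    double payload = pixels * 30.0 / 1e9;      /* 10 bpc RGB, Gb/s */
    double hbr3    = 4 * 8.1 * (8.0 / 10.0);   /* 4 lanes, 8b/10b, Gb/s */
    printf("4K120 10bpc payload : %.1f Gb/s\n", payload); /* ~29.9 */
    printf("HBR3 x4 effective   : %.1f Gb/s\n", hbr3);    /* ~25.9 */
    return 0;
}
```

Raw pixel data alone already exceeds the effective link rate, so DSC's roughly 3:1 compression is what makes 4K120 and 5K60 fit.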
Thunderbolt 3 integration for Ice Lake-Y is only up to 3 ports.
What I personally liked most about the GT3e (48 EU) and GT4e (72 EU) Skylake variant SoCs was that they didn't cost the extra money they should have, especially when you consider that the iGPU part completely dwarfs the CPU cores (which Intel makes you bleed for) and is much better than everything else combined (have a look at the WikiChip layouts: https://en.wikichip.org/wiki/intel/microarchitectu...).
Of course, a significantly better graphics performance is never a bad thing, especially when it also doesn't cost extra electrical power: The bigger iGPUs might have actually been more energy efficient than their GT2 brethren at a graphics load that pushed the GT2 towards its frequency limits. And in any case if you don't crunch it on graphics, the idle consumption is near perfect: One of the reasons most laptop dGPU designs won't even bother to run 2D on the dGPU any more but leave that to Intel.
The biggest downside was that you couldn't buy them outside an Apple laptop or Intel NUC.
But however much Intel goes into Apple mode (the major customer for these beefier iGPUs) in terms of "x times faster than previous", the results aren't going to turn ultrabooks with this configuration into "THD gaming machines".
To have a good feel as to where these could go and whether they are worth the wait, just have a look at the Skull Canyon nuc6i7kyk review on this site: That SoC uses 72 EUs and 128MB of eDRAM and should put a pretty firm upper limit to what a 64 EU Ice Lake can do: Most of the games in that review are somewhat dated yet fail to reach 20FPS at THD.
So if you want to game on the device, you'd be much better off with a dGPU, however small, and choosing the smallest iGPU variant available. No reason to wait, Whiskey + Nvidia will do better.
If you want real gaming performance, you need to put real triple digit Watts and the bandwidth only GDDR5/6 or HBM can deliver to work, even at THD, but with remote gaming perhaps it doesn't have to be on your elegant slim ultrabook. There again anything but the GT2 configuration is wasted, because you only need the VPU part for decoding Google Stadia (or Steam Remote) streams, which is the same for all configurations.
For some strange reason, Intel has been selling GT3/4 NUCs at little or no premium over GT2 variants, and in that case I have been seriously tempted. And only once did I even manage to find a GT3e laptop for a GT2 price (while the SoC is literally twice as big and the die carrier even adds eDRAM at zero markup), which I still cherish.
But if prices are anywhere related to the surface area of the chip (as they are for the server parts), these high powered GTs are something that only Apple users would buy.
That's another reason I (sadly) don't expect them to be sold in anything but Macs and some NUCs, no ChuWi notebooks or Mini-ITX boards.
Judging from the first 10nm generation, GPUs were the part where obtaining economically feasible yields didn't work out. Unless they have really, really fixed 10nm, it's not hard to imagine that Intel could be selling high-EU-count SoCs to Apple below cost, to keep them for another generation as flagship customer and perhaps due to long-term contractual obligations.
But maintaining GT2/3/4 price parity for the rest of the market seems suicidal even if you have a fab lead.
Not that I expect we'll ever be told: in near monopoly situations the so called market economy becomes surprisingly complex.
"It stands to reason then that the smaller package is for lower performance and low power options, despite being exactly the same silicon."
I know the die floorplans are the same, but have Intel ever actually confirmed that U and Y (or H and S series for that matter) are the exact same silicon? Is it strictly binning and packaging that separates the platforms, or is there a slight tweak to the manufacturing process to target lower power / higher frequencies? Intel production roadmaps would seem to indicate this isn't just a binning situation, but I've never been entirely certain on that point.
And isn't Comet Lake-U 6+2 more likely to be 25 W, with Whiskey Lake-U 4+2 continuing to pull 15 W duty alongside Ice Lake-U 4+2?
Those goals for Athena are OK, but my old Dell XPS 12 with a carousel frame hit all of those except biometrics and wake from sleep in <1 sec... well, and the bezel... but that was due to the carousel design, which I would LOVE to come back in a more modern form. Not saying these goals are bad... but if a 6 year old midrange laptop can hit almost all of them, then this isn't exactly aiming for something amazing.
Security (and by inference the performance overhead required to implement proper security) is not important according to Anandtech/Ian Cutress. Which is obvious nonsense, so the only logical conclusion is that Anandtech are now a thoroughly biased outfit incapable of any critical reporting, which is quite sad particularly as it means all their articles (particularly when they relate to Intel) have to be read with a very heavy dose of cynicism.
If Ice Lake-U has a ~3.5% higher single core performance (and, assuming the "multi-core overhead" is the same, multi-core performance as well) than Whiskey Lake-U despite having a 20% lower single core boost clock, then Sunny Cove must be an extremely impressive μarch. Or, er, that might not actually be the case: Ice Lake-U has an 18% higher IPC than the *original* Skylake of 2015, not Whiskey Lake. While Whiskey Lake is basically the same design, it must have a somewhat higher IPC due to its much more mature process node and other optimizations.
Let's be conservative and assume that Ice Lake-U (more specifically Sunny Cove) has a nice round 15% higher IPC than Whiskey Lake-U, with both at 15W. In that case, at a 20% lower boost clock Ice Lake-U should have a 5% lower performance than Whiskey Lake-U. Where is that +3.5% performance derived from then?
Even if we assumed that Ice Lake-U's 18% IPC edge is over Whiskey Lake-U (highly unlikely, otherwise Intel would not have dug out the original Skylake from its computing grave) that would still translate to Ice Lake-U having a 1.5% lower single core performance than Whiskey Lake-U, rather than being 3.5% faster than it.
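One note on the method: the scaling is multiplicative, relative performance ≈ relative IPC × relative clock, which gives slightly lower figures than the additive shortcut used above (a quick check, using the comment's own 20% clock deficit as the assumption):

```c
#include <stdio.h>

int main(void) {
    double clock = 0.80;   /* the assumed 20% lower boost clock */
    /* relative performance = relative IPC x relative clock */
    printf("+15%% IPC: %+.1f%%\n", (1.15 * clock - 1.0) * 100.0); /* -8.0% */
    printf("+18%% IPC: %+.1f%%\n", (1.18 * clock - 1.0) * 100.0); /* -5.6% */
    return 0;
}
```

Either way the sign of the conclusion is unchanged: on these assumptions Ice Lake-U lands below, not above, Whiskey Lake-U in single core performance.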
Maybe, just maybe, this is why Intel used just a single synthetic benchmark (surely compiled with aggressive flags and optimized for Intel CPUs) for that graph and avoided disclosing other synthetic benchmarks and real world benchmarks? Is this also why they avoided talking about the CPU performance of Ice Lake in their Computex presentation, and instead focused on iGPU, WiFi and AI performance?
Based on the disclosed clocks and the "disclosed" (in obfuscated form) IPC of Ice Lake-U I just cannot see it being in any way faster than Whiskey Lake-U. It will probably also have worse power efficiency, since it has the same or higher TDP range at a much lower clock.
Getting Thunderbolt on-die is huge for adoption. While I doubt many laptop manufacturers will enable more than a single TB port, desktop is an entirely different kettle of fish.
This honestly is looking like the worst architecture refresh since Prescott. IPC increases are getting almost completely washed out by loss in frequency. I wonder if this would have happened if Ice Lake came out on 14nm. Is the clock loss from uArch changes, process change, or a mix of both?
Performance of an individual transistor has been decreasing since 45nm, but overall circuit performance kept improving due to interconnect capacitance decreasing at a faster rate at every node change. It looks like at Intel 10nm, and TSMC 7nm that this is no longer true, with transistor performance dropping off a cliff faster than interconnect capacitance reduction. 5nm and 3nm should be possible, but will anyone want to use them?
"...with a turbo frequency up to 4.1 GHz" This is the highest number I have come across for the new 10th generation processors, and according to SemiAccurate (which is accurate more often than not), this is likely not an error.
If this value is close to desktop CPU limitations, the low clock speed all but erases the 18% IPC advantage -- an estimate likely based on a first-gen Skylake. Granted, the wattage values are low, so higher-wattage units should run at least a bit faster.
I’m a bit confused by the naming scheme. Ian, you say: “The only way to distinguish between the two is that Ice Lake has a G in the SKU and Comet Lake has a U”
But that’s not what’s posted in several places throughout the article. The ICL processors are named Core iX-nnnnGn where CML are Core iX-nnnnnU. Comet lake is using 5 digits and Ice Lake only 4 (1000 vs 10000 series).
Is this a typo or will ICL be 1000-series Core chips?
Regarding AI on the desktop. The place where desktop AI will shine is NLP. NLP has lagged behind vision for a while, but has acquired new potency with The Transformer. It will take time for this to be productized, but we should ultimately see vastly superior translation (text and speech), spelling and grammar correction, decent sentiment analysis while typing, even better search.
Of course this requires productization. Google’s agenda is to do this in the cloud. MS’ agenda I have no idea (they still have sub-optimal desktop search). So probably Apple will be first to turn this into mainstream products.
Relevant to this article is that I don’t know the extent to which instructions and micro-architectures optimized for CNNs are still great for The Transformer (and the even newer and rather superior Transformer-XL published just a few months ago). This may all be a long time happening on the desktop if INTC optimized too much purely for vision, and it takes another of their 7 years to turnaround and update direction...
It seems that Ice Lake / Sunny Cove will have hardware fixes for Spectre and Meltdown. I would like to see some more information on this, such as how much speed gain, whether the patch is predictive (so as to block ALL such OOE / BP exploits) etc.
A month or so ago, we heard a few rumors that the CPUs were ahead ~18% in IPC (I see that number again in this article), but are down ~20+% in clock speed; it would be nice to see at least one or two performance metrics/comparisons on a shipped product. :)
Unlike Ryzen mobile, intel’s “upto” 64 EUs part will probably only ship in like 2 laptops. Therefore amd has more designs in my book. I don’t understand people who buy expensive 4K laptops with intel integrated gfx which can’t even render windows 10 ui smoothly. Looking forward to Zen2 + navi based 7nm APU.
> it can be very effective: a dual core system with AVX-512 outscored a 16-core system running AVX2 (AVX-256).
it's obviously wrong - since ice lake has only one avx-512 block but two avx2 blocks, it's not much faster in avx-512 mode compared to avx2 mode
the only mention of HEDT cpus at the page linked is "At a score of 4519, it beats a full 18-core Core i9-7980XE processor running in non-AVX mode". Since AVX-512 can process 16 32-bit numbers in a single operation, no wonder that a single avx-512 core matches 16 scalar cores
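To make the width argument concrete: one 512-bit operation covers 16 floats, while the same work takes two 256-bit operations, so a core with one AVX-512 unit against two AVX2-width units has similar peak width either way (an illustrative snippet, not from the article; build with e.g. gcc -mavx512f):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[16], out[16];
    for (int i = 0; i < 16; i++) a[i] = (float)i;

    /* One 512-bit op: all 16 floats doubled at once */
    __m512 v = _mm512_loadu_ps(a);
    _mm512_storeu_ps(out, _mm512_add_ps(v, v));

    /* The same work as two 256-bit (AVX2-width) ops */
    __m256 lo = _mm256_loadu_ps(a);
    __m256 hi = _mm256_loadu_ps(a + 8);
    _mm256_storeu_ps(out,     _mm256_add_ps(lo, lo));
    _mm256_storeu_ps(out + 8, _mm256_add_ps(hi, hi));

    printf("out[15] = %.1f\n", out[15]);   /* 30.0 either way */
    return 0;
}
```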
Too bad the article doesn't state any further details about the HEVC encoders. Would be interesting to hear if Intel only improved the speed or if they also worked on compression and quality.
I bought a Gemini Lake system last year to try the encoding in hardware and have had very mixed feelings about Intel's Quick Sync since. The encoding speed is impressive with the last generation already, all the while the CPU and GPU are practically idle. On the downside, the image quality and compression ratio are highly underwhelming and nowhere near usable for "content creation" or mere transcoding. It suffices for video calls at best. Even encoding h264 in software achieves far better compression efficiency while being not much slower on a low end CPU.
IIRC Intel promised some “quality mode” for their upcoming encoders, but I can't remember if that was for the gen11 graphics.
These improvements in serial performance are great; it's awesome to have bigger buffers and more execution units. But on the clock side there seems to be a big drawback.
I'm sure clock issues are the reason we won't have any Ice Lake on desktop, and Comet Lake on laptops in the same generation. But why no 6C Ice Lake? That raised a big alert sign for me.
But what also caught my attention is its iGPU power. Most mid range and above laptops are using nVidia GPUs. That's sad for those of us who want performance but won't game on it, because mid range laptops are already all coming with an nVidia GPU, which makes them more expensive.
Now I hope to have these segments using Intel's iGPU and not an nVidia GPU anymore. Good for us in having less money wasted on hardware we don't need, bad for nVidia.
Some thoughts: 1) If anyone believes that Intel was left behind on process nodes, while it was ahead by at least 2 generations and was the definition of expertise in this field, good for them.
2) Likewise, Intel is fully capable of building 1-3 watt platforms. If it had done so when the mobile "devolution" began some years ago, it would have bulldozed everything in its way. It still can. Or do you think that somehow Intel is stuck at 6 watts minimum?
3) This obsession with staying just above the power requirements for true mobility is to protect mommy ARM and its children. It knows that building "APU"s for "premium" products would still generate profit from the suckers that would buy them.
There are a number of plausible scenarios for this weird behavior from Intel. Lack of competence, in engineering or management, is certainly not included in that list.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
107 Comments
Back to Article
eastcoast_pete - Tuesday, July 30, 2019 - link
Thanks Ian! One initial observation: that slide or picture from Dell showing processor options for their 2in1 has the i7 with 4 MB of cache; my guess is Dell needs a proofreader in their marketing department (:Ian Cutress - Tuesday, July 30, 2019 - link
It's likely those cache numbers are the other way round, i.e. quad core i7 with 8MB, dual core i3 with 4MB. That would align with what we would expect.eastcoast_pete - Tuesday, July 30, 2019 - link
Agree. Looks a lot like somebody at Dell didn't check after cut-and-paste. Has Dell announced expected shipping date? As a launch partner, they're likely be among the first who ship finished product.Ian Cutress - Tuesday, July 30, 2019 - link
Not yet, as far as I know.close - Tuesday, July 30, 2019 - link
Now we know the cause of Intel's manufacturing hell. Ian's been biting their wafers... :)tipoo - Tuesday, July 30, 2019 - link
iirc Dell said NDA was lifting August 1st on the ICL XPS 13FloconDeNeige - Tuesday, July 30, 2019 - link
Page 2, the table about In-Flight Stores and Loads: The values are switched around, or else the paragraph after is wrong.Otherwise looks like a great article. Still reading :)
FloconDeNeige - Tuesday, July 30, 2019 - link
Page 3 sorryDanNeely - Tuesday, July 30, 2019 - link
I'm a bit surprised to see pen support on the Athena requirement list. Everything else seems reasonable as an evolution of mainstream designs; but pens have always been very niche something I don't see changing. Even if pen support is baked into Intels <1W LCDs; including a pen's going to drive up system costs. And if loose is just going to end up lost; if a pen holster is required that's a decent amount of internal volume in increasing thin and dense designs.notashill - Tuesday, July 30, 2019 - link
There are already tons of laptops with pen support but no included pen or internal holster.vFunct - Tuesday, July 30, 2019 - link
Why did they not go with HDMI 2.1 and PCIe 4.0?bug77 - Tuesday, July 30, 2019 - link
AMD'd newly released 5700(XT) doesn't support HDMI 2.1, it's not surprising Intel doesn't support it either.And PCIe 4.0 would be power hog.
ToTTenTranz - Wednesday, July 31, 2019 - link
The 5700 cards don't support VirtuaLink either, despite AMD belonging to the consortium since the beginning like nvidia and the RTX cards having it for about a year.First generation Navi cards are just very, very late.
tipoo - Tuesday, July 30, 2019 - link
PCI-E 4 currently needs chipset fans on desktop parts, the power needed isn't suitable for 15-28W mobile yet.DanNeely - Tuesday, July 30, 2019 - link
Because Intel product releases have been a mess since the 10nm trainwreck began. Icelake was originally supposed to be out a few years ago. I suspect PCIe4 is stuck on whatever upcoming design was supposed to be the 7nm launch part.HDMI 2.1 is probably even farther down the pipeline; NVidia and AMD don't have 2.1 support on their discrete GPUs yet. Intel has historically been a lagging supporter of new standards on their IGPs, so that's probably a few years out.
nathanddrews - Tuesday, July 30, 2019 - link
This whole argument that "real world" benchmarks equate to "most used" is rather dumb anyway. We don't need benchmarks to tell us how much faster Chrome opens Reddit, because the answer is always the same: fast enough to not matter. We need benchmarks at the fringes for those reasons brought up in the room: measuring extremes in single/multi threaded scenarios, power usage, memory speeds; finding weaknesses in hardware and finding flaws in software; and taking a large enough sample to be meaningful across the board.Intel wants to eat its cake and still have it - to be fair - who doesn't? But let's get real, AMD is kicking some major butt right now and Intel has to spin it any way they can. What's funny is that the BEST arguments that I've heard from reviewers to go AMD actually has nothing to do with performance, but rather the Zen platform as a whole in terms of features, upgradeability, and cost.
I say this as a total Intel shill, too. The only AMD systems running in my house right now are game consoles. All my PCs/laptops are Intel.
twotwotwo - Tuesday, July 30, 2019 - link
Interesting to read what Intel suggested some of their arguments in the server space would be: lower TCO like the old Microsoft argument against Linux, and having to revalidate all your stuff to use an AMD platform. Some quotes (from a story in their internal newsletter; the full thing is floating around out there, but couldn't immediately find):https://www.techspot.com/news/80683-intel-internal...
I mean, they'll be fine long term, but trying to change the topic from straightforward bang-for-buck, benchmark results, etc. is an approach you only take in a...certain sort of situation.
eek2121 - Wednesday, July 31, 2019 - link
Unfortunately, your average IT infrastructure guy no longer knows how fast a Xeon Platinum 8168 is vs an AMD EPYC 7601. They just ask OEMs like Dell or HP to sell them a solution. I've even seen cases where faster solutions were replaced with slower solutions because they were more expensive and the numbers looked bigger. It turns out that the numbers that looked bigger were not the numbers that they should have been paying attention to.One company I worked at almost bought a $100,000 (yeah I know, small change, but it was a small company) pre-built system. We, as software developers, talked them into letting us handle it instead. We knew a lot about hardware and as a result? We spent around $15,000 in hardware costs. Yes there were labor costs involved in setting everything up, but it only took about 2 weeks for 4 guys, 2 of which were juniors. Had we gone with the blade system, there would have been extensive training needed, which would have costed about the same in labor. Our solution was fully redundant, a hell of a lot faster (the blade system used hardware that was slower than our solution, and it was also a proprietary system that we would be locked into, so there was an additional service contract that costed $$$ and would have to be signed). During my entire time there, we had very few issues with the solution we built outside the occasional hard drive (2 drives in 4 years IIRC) dying and having to pop it out, pop in a new one, and let the RAID rebuild. Zero downtime. In addition, our wifi solution allowed roaming all over a giant building without dropping the signal. Speeds were lightning fast and QoS allowed us to keep someone from taking up too much bandwidth on the guest network. The entire setup worked like a dream.
We also wanted to use a different setup for the phone system, but they opted to work with a vendor instead. They paid a lot of money for that, and constantly had issues. The administration software was buggy, sometimes the entire system would go down, even adding a user would take down the entire system until things were updated. IIRC after I left they finally switched to the system we wanted to use and had no issues after that.
wrkingclass_hero - Tuesday, July 30, 2019 - link
Uh, I would not be putting cobalt anywhere near my mouthPeachNCream - Tuesday, July 30, 2019 - link
Real men aren't scared of a few toxic chemicals entering their digestive systems! Clearly you and I are not real men, but we now have a role model to emulate over the course of our soon-to-be-shortened-by-cancer lives.notashill - Tuesday, July 30, 2019 - link
Are those RAM/SSD targets really "greater than" 8GB/256GB or is it supposed to be "greater than or equal to"?Either way I would love to see an end to companies having >$1000 machines with pathetically low RAM/storage and then charging 500%+ markups to upgrade them to something decent. Like Microsoft's $1200 to go from 4/128 to 16/512.
mkozakewich - Wednesday, July 31, 2019 - link
I can't believe Microsoft has been using 4 GB as their base amount for the last six years. At some point it becomes insulting.repoman27 - Tuesday, July 30, 2019 - link
“Intel uses the ‘U’ designation for anything from 9W to 28W TDP, which covers all the mobile ultra-premium devices.”No they don’t. 9W are Y Series, 15 and 28W are U Series. This is all clearly stated in Intel’s publicly available product briefs for 10th Generation Core processors.
HStewart - Tuesday, July 30, 2019 - link
I be curious for more information on the Y processors - what the performance difference between Y and U. But it looks like these Ice Lake chips are designed for Ultraportable machines and not designed to replace to higher end ones - even like my Dell XPS 15 2in1 - I am really curious about that replacement - it's GPU is probably short lived possibly in updated higher voltage Ice Lake with Gen 11 graphics or new version with Xe graphics. I also have a Dell XPS 13 2in1 with Y processor - I am actually bringing it to meeting today - it is lightweight and does not really need that much power for what I using it for. I think it will be very interesting to compare this new XPS 13 2in1 and the existing XPS 15 2in1 - yes 15 2in1 has faster processor - but it not Ice Lake and that could make a huge difference.Hixbot - Tuesday, July 30, 2019 - link
4.2% annual IPC growth doesn't sound great but it is better than anything we've seen since SandyBridge.Targon - Tuesday, July 30, 2019 - link
And that should make people question the claims about performance increases. Mind you, how much performance has been lost on Intel chips due to the security issues? Intel may be comparing theoretical performance increases, without disclosing the fact that first through 9th generation have actually lost performance since launch due to security mitigations.So, +18% IPC, but -20% due to security mitigations for issues that are still there. Has Intel actually fixed the problems with the memory controller and other problems that allow for Meltdown and the other problems, rather than mitigating around the problem? If a problem has existed since first generation Core processors that turns out to be a design flaw, that also shows that the fundamental core design hasn't changed all THAT much.
rahvin - Wednesday, July 31, 2019 - link
Meltdown and some of the first spectre mitigations are going to be fixed in the hardware. Later spectre variants are probably only fixed in microcode and software.Where that line is drawn is going to be determined by when they froze the physical design for tapeout.
eek2121 - Wednesday, July 31, 2019 - link
I'm not knocking Intel on the IPC growth. If they had an 18% increase, great for them! However, mobile Intel CPUs of any variant (U, HK, Y, etc.) are much slower than their desktop counterparts. My Core i7 2600k absolutely destroys the 6700HK in my laptop. Laptops in general are designed to be low power devices, so performance is never the number one factor in designing a laptop, even on the high end. The only exception to this is the so called 'desktop replacements' that weigh a ton, have desktop class hardware, and basically need to be plugged in to not have the battery die after an hour.That's also the reason I take this announcement with a grain of salt. 18% on mobile is one thing. 18% on the desktop is something else. As I've mentioned to people here and elsewhere, the smaller the process, the harder it is to maintain high clock speeds. Also, from reading certain documentation, it seems that part of that 18% is counting the addition of AVX-512. I could be mistaken though.
erple2 - Wednesday, July 31, 2019 - link
Wow, really? That has not been my experience at all. My 6700hq has generally been (usually significantly) better performing than my 2600k for the vast majority of tasks I've thrown at it.jospoortvliet - Monday, August 5, 2019 - link
Any task that requires sustained compute will of course suffer on thr lower power budget on mobile. But tasks which require short bursts of activity will do better thanks to vastly improved turbo since the 2600k. So depending on what you do your impression might very well be accurate.repoman27 - Tuesday, July 30, 2019 - link
“Each CPU has 16 PCIe 3.0 lanes for external use, although there are actually 32 in the design but 16 of these are tied up with Thunderbolt support.”This isn’t quite right. The ICL-U/Y CPU dies do not expose any PCIe lanes externally. They connect to the ICL PCH-LP via OPI and the PCH-LP exposes up to 16 PCIe 3.0 lanes in up to 6 ports via HSIO lanes (which are shared with USB 3.1, SATA 6Gbps, and GbE functions). So basically no change over the 300 Series PCH.
The integrated Thunderbolt 3 host controller may well have a 16-lane PCIe back end on-die, and I’m sure the CPU floorplan can accommodate 16 more lanes for PEG on the H and S dies, but that’s not what’s going on here.
voicequal - Friday, August 2, 2019 - link
The SoC architecture shows a direct path for the Thunderbolt3 PCIe lanes to the CPU, with only USB2 going across OPI.. Whatever PCIe lanes are available on the PCH are in addition those available via TB3.https://images.anandtech.com/doci/14514/Blueprint%...
repoman27 - Tuesday, August 6, 2019 - link
The Thunderbolt 3 controller is part of the CPU die. There are four PCIe 3.0 x4 root ports connected to the CPU fabric that feed the Thunderbolt protocol converters connected to the Thunderbolt crossbar switch (the Converged I/O Router block in that diagram). The CPU exposes up to three (for Y-Series) or four (for U-Series) Thunderbolt 3 ports. The only way you can leverage the PCIe lanes on the back-end of the integrated Thunderbolt 3 controller is via Thunderbolt.The PCH is a separate die on the same package as the CPU die. The two are connected via an OPI x8 link operating at 4 GT/s which is essentially the equivalent of a PCIe 3.0 x4 link. The PCH contains a sizable PCIe switch internally which connects to the back-ends of all of the included controllers and also provides up to 16 PCIe 3.0 lanes in up to 6 ports for connecting external devices. These 16 lanes are fed into a big mux which Intel refers to as a Flexible I/O Adapter (FIA) along with all the other high-speed signals supported by the PCH including USB 3.1, SATA 6Gbps, and GbE to create 16 HSIO lanes which are what is exposed by the SoC. So there are up to 16 PCIe lanes available from the Ice Lake SoC package, all of which are provided by the PCH die, but they come with the huge asterisk that they are exposed as HSIO lanes shared with all of the other high-speed signaling capabilities of the PCH and provisioned by a PCIe switch that effectively only has a PCIe 3.0 x4 connection to the CPU.
This is not at all what Ian seemed to be describing, but it is the reality.
And the USB 2.0 signals for the Thunderbolt 3 ports do indeed come from the PCH, but they do not cross the OPI, they're simply routed from the SoC package directly to the Thunderbolt port. The Thunderbolt 3 host controller integrated into the CPU includes a USB 3.1 xHCI/xDCI but does not include a USB 2.0 EHCI.
poohbear - Tuesday, July 30, 2019 - link
I was looking at buying Dell's XPS 15.6" (7590 model), but with Project Athena laptops a few months away, i think i'll wait. Intel parts for solid reliability and unified drivers, and "4 hours of battery life with <30min of charging", those 2 on their own make the wait worth it for me!repoman27 - Tuesday, July 30, 2019 - link
“The connection to the chipset is through a DMI 3.0 x4 link...”Should be OPI x8 for U/Y Series.
“...Ice Lake will support up to six ports of USB 3.1 (which is now USB 3.2 Gen 1 at 5 Gbps)...”
They’re USB 3.1 Gen 2 ports, so it’s six USB 3.2 Gen 2 x 1 (10 Gbit/s) ports.
Roel9876 - Tuesday, July 30, 2019 - link
Well, for one, it is certainly not realistic to run single thread benchmarks on application that support multi threading. Realistically, most (all?) people will run the application multi threaded?HStewart - Tuesday, July 30, 2019 - link
As developer for many years, multiple threads are useful for handling utility threads and such - but IO is typically area which still has to single thread. Unless it has significantly change in API, it is very difficult to multi-thread the actual screen. And similar for disk io as resource.Arnulf - Tuesday, July 30, 2019 - link
"Our best guess is that these units assist Microsoft Cortana for low-powered wake-on voice inference algorithms ..."Our best guess is that these are designed for use by assorted three-letter agencies.
PeachNCream - Tuesday, July 30, 2019 - link
Open mics are totally okay. There is absolutely no privacy risk to you at all and you should never give it a second thought.ToTTenTranz - Tuesday, July 30, 2019 - link
With 4x TB3 connections available, I wonder if the maker of an external GPU box could develop a multiplexer that combined two TB3 connections into a PCIe 3.0 8x.This would significantly decrease some problems that eGPU owners are having due to relatively low CPU-GPU bandwidth.
PeachNCream - Tuesday, July 30, 2019 - link
Are there really that many eGPUs out there though?HStewart - Tuesday, July 30, 2019 - link
Just do a search on amazon for eGPU and you find 3 pages full of then. ASUS, Gigibyte, and Dell are examples plus many 3rd party.PeachNCream - Tuesday, July 30, 2019 - link
Not for sale. Actually in use by people.The_Assimilator - Wednesday, July 31, 2019 - link
2.Retycint - Wednesday, July 31, 2019 - link
One of the reasons that eGPU adoption rates are low is precisely because of the limitation mentioned above - huge performance drop (anywhere from 30-50% I think) compared to PCI-E connection, due to bandwidth limitations.DanNeely - Wednesday, July 31, 2019 - link
The performance issue is TB3 overhead. Running a GPU on an internal PCIe3 x4 link will come within a few percent of an x16.repoman27 - Tuesday, July 30, 2019 - link
"There is still some communication back and forth with the chipset (PCH), as the Type-C ports need to have USB modes implemented."Just to add to that, all of the high-speed signaling (Thunderbolt, DisplayPort, SuperSpeed USB) is handled by the CPU die, and the PCH only transacts in USB 2.0 and the low-speed, out-of-band management channel for Thunderbolt (LSx).
I fully understand why you (Ian) included the asterisk, because many OEMs won't bother implementing Thunderbolt 3 due to the additional expense / complexity, but to be fair to Intel, they integrated Thunderbolt as much as they possibly could. It's really not feasible to include all of the power and analog signal conditioning necessary for Thunderbolt 3 on the CPU package.
Galatian - Tuesday, July 30, 2019 - link
The numbers of the iGPU don’t really add up. They are comparing their best new GPU to last Gens UHD 420. I don't see the performance gain.at8750 - Tuesday, July 30, 2019 - link
Probably,CML-U
i7-10510U 4C 1.8GHz TB:4.9/4.8/4.3GHz
i5-10210U 4C 1.6GHz TB:4.2/4.1/3.9GHz
ICL-U
i7-1065G7 4C 1.3GHz TB:3.9/3.8/3.5GHz
i5-1034G1 4C 0.8GHz TB:3.6/3.6/3.3GHz
digitalgriffin - Tuesday, July 30, 2019 - link
Ian,You do know pure Silicon is highly toxic? I wouldn't even hang it in your house unless it was plated in a clear lacquer. It degrades when exposed to open air. And it's been shown to cause cancer. I know you wouldn't actually bite it. But be sure to wash your hands.
Billy Tallis - Wednesday, July 31, 2019 - link
A lot of the chemicals used in wafer processing are quite nasty. A bare wafer itself is pretty harmless unless you grind it up and inhale it. Solid elemental silicon or silicon dioxide is safe to handle.repoman27 - Tuesday, July 30, 2019 - link
I understand that there are embargos which must be respected and that Anandtech does not like to trade in unsubstantiated rumors, but much of what is presented regarding packaging and power seems a bit wishy-washy.Wikichip has had photos of both sides of the Ice Lake U and Y packages posted for some time now. Furthermore, Intel's product briefs are very clear on the power for each series:
Ice Lake-Y: Nominal TDP 9 W, cTDP Down 8 W on Core i3 only, cTDP Up 12 W but N/A on Core i3.
Ice Lake-U: Nominal TDP 15 W, cTDP Down 12 W (13 W for some UHD parts), cTDP Up 25 W.
AFAIK, no 28 W Ice Lake-U parts have been announced by Intel yet, but they most likely are in the works.
And you can cite whatever reasons you care to, but by all reports Intel was initially targeting a 5.2 W TDP for Ice Lake-Y 4+2, and that entire platform has been solidly shifted into the 8-12 W range.
Also, it should be noted that the 14nm 300 Series chipsets that Intel has been shipping for some time now are all Cannon Point, which was originally designed to complement Cannon Lake, and are almost identical in terms of capabilities to the 400 Series. And the particular designation for the Ice Lake PCH-LP according to Intel is "495 Series".
James5mith - Tuesday, July 30, 2019 - link
Ian, you either have your graph or your paragraph about the store/load performance increases reversed.
The graph says 72 -> 128 stores, 56 -> 72 loads. The paragraph below it says 72 -> 128 loads, 56 -> 72 stores.
ksec - Tuesday, July 30, 2019 - link
While I do enjoy and mostly want to read Dr. Ian Cutress's articles, I seriously don't want to read Intel's marketing hype. Actions and results speak louder than PowerPoint slides. Ship it, let AnandTech test it, and we'll form an opinion on it.
Targon - Tuesday, July 30, 2019 - link
The Ryzen 7 3700U is a Zen+ part on 12nm, without the big IPC plus clock speed improvements seen with the desktop CPUs. As a result, Intel is doing a comparison against the previous generation products for laptops. In laptops, getting max turbo or boost for more than one second is rare. Yes, Intel can put a laptop chip on a board on a bench without any enclosure to show off the chip, but real world speeds will be quite a bit lower. That is true for both AMD and Intel, and it is up to the OEMs to come up with a design that keeps the chips cool enough to run faster than the competition.
AMD knows what is going on, so if I am correct, AMD will move up the release of the next generation of laptop chips to November. If AMD does the right thing, it will call the new chips the 3250U, 3400U, 3600U, and 3800U to bring consistency with desktop naming conventions. These new chips would be 7nm with either Vega or Navi; for an APU, the graphics choice matters less than getting both onto 7nm.
Drumsticks - Tuesday, July 30, 2019 - link
Keep in mind, the only comparison they did with Ryzen (I think) was graphics, not CPU. I'd imagine the Ice Lake chips have a solid CPU lead against quad core Ryzen based on Zen/Zen+. Zen 2 will certainly help close that gap, but it should still be roughly 15-20% behind Ice Lake in IPC, and it certainly won't be ahead by that much on frequency.
Fulljack - Wednesday, July 31, 2019 - link
I think in Q4 '19 they'll release the Ryzen 4000 series (based on Zen 2) and call it a day, like last year and the year before. They'll be 4300U, 4500U, and 4700U for the U-series and 4350H, 4550H, and 4750H for the high-performance parts, with integrated graphics based on Navi.
But since Zen 2 now has 8 cores per chiplet, they'd probably also sell 6-core and 8-core parts, though I don't know if they'll release those in the U-series.
Apple Worshipper - Tuesday, July 30, 2019 - link
Thanks Ian! So how does Ice Lake purportedly stand next to Apple's A12X in the iPad Pro, based on the SPEC scores?
PeachNCream - Tuesday, July 30, 2019 - link
More importantly, how does Ice Lake taste?
HStewart - Tuesday, July 30, 2019 - link
I haven't seen anything that successfully compares x86-based CPUs with ARM-based CPUs. But one thing that makes all this MacBook-on-ARM stuff meaningless to me is one sheer fact: Apple has yet to release development tools for iOS on actual iOS. It might be Apple trying to force Macs for development, but Apple's own development tools don't run on iOS.
name99 - Wednesday, July 31, 2019 - link
That's an idiotic chain of reasoning. ARM Macs will ship with macOS, not iOS. To believe otherwise only reveals that you know absolutely nothing of how Apple thinks.
As for comparison, the rough number is A12X gets ~5200 on GB4, Intel best (non-OC’d) gets ~5800. That’s collapsing lots of numbers down to one, but comparing benchmark by benchmark you see Apple does very well (almost matching Intel) across an awful lot.
If Apple can maintain its past pace (and there is no reason why not...) we can expect A13X to be anywhere from 20% to 35% faster, which puts it well into “fastest [non-OC’d] CPU on earth” territory for most single-threaded use cases. Can they achieve this? Absolutely.
Just process improvement can get them 10% frequency. I expect A13X to clock around 2.8GHz.
Then there is LPDDR5, which I expect they will be using, so substantially improved memory bandwidth. Then I expect they'll have SVE (2x256) and, accompanying that, basically double the bandwidth all the way out from L1 to DRAM.
These are just the obvious basics. There are a bunch of things they can still do that represent "fairly easy" improvements to get to that 25% or so. (These include more aggressive fusion, a double-pumped ALU, ALUs attached to load/store to allow load-op and op-store fusion, a micro-op cache, long-term-parking, criticality prediction, ...)
So, if it’s so easy, why doesn’t Intel also do it? Why indeed? That’s why I occasionally post my alternative rant about how INTC is no longer an engineering company, it is now pretty much purely a finance company...
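For concreteness, a quick sketch of that projection using the rough GB4 numbers above (the uplift range is speculation, not measurement):

```python
# Speculative A13X single-core Geekbench 4 projection from the figures above.
a12x_score = 5200   # rough A12X GB4 single-core
intel_best = 5800   # rough best non-overclocked Intel GB4 single-core

for uplift in (0.20, 0.35):
    print(f"+{uplift:.0%}: {a12x_score * (1 + uplift):.0f}")
# +20%: 6240, +35%: 7020 -- both clear of the ~5800 Intel figure quoted above
```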
ifThenError - Friday, August 2, 2019 - link
Sorry, but both these comments seem mighty uninformed. The MacBook Air and Pro currently, and for the foreseeable future, all run on Intel CPUs. The Apple chips A12/A13 are used in the iPhone, iPad and the like. And regarding your prediction, your enthusiasm seems way over the top. What are you even talking about? A micro-op cache on a RISC processor? Think again. Aren't RISC instructions all micro-ops already?
name99 - Sunday, August 4, 2019 - link
Strong the Dunning-Kruger is with this one... Dude, seriously, learn something about MODERN CPU design, more than just buzz-words from the 80s.
To get you started, how about you read
https://www.anandtech.com/show/14384/arm-announces...
and concentrate on understanding EVERY aspect of what's being added to the CPU and why.
Note in particular that 1.5K Mop cache...
More questions to ask yourself:
- Why was 80s RISC obsessed with REDUCED instructions?
- Why was ARM (especially ARMv8) NOT obsessed with that? Look at the difference between ARMv8 and, say, RISC-V.
- Why is op-fusion so important a part of modern high performance CPUs (both x86 and ARM [and presumably RISC-V if they EVER ship a high-performance part, ha...])?
- which are the fast (shallow logic, even if it's wide) and which are the slow (deep logic) parts of a MODERN pipeline?
ifThenError - Monday, August 5, 2019 - link
Oh my, this is so entertaining you should charge for the reading. You demand going beyond mere buzzwords (which would be good) while your posts look like entries in a contest for how many marketing phrases can be fit into a paragraph.
Then you even manage to combine this with highly rude idiom. Plus you name a psychological effect but fail to apply it to your own self-reflection. And as the cherry on top, you obviously claim to understand „EVERY aspect“ of a CPU (an unimaginably complex bit of engineering) yet manage to confuse micro- and macro-op caches and the conceptual differences between them.
I'm really impressed by your courage. Publicly posting so boldly on such a thin basis is brave.
Your comments add near zero information but are definitely worth the read. Pure comedy gold!
Please see this as an invitation to reply. I'm looking forward to some more of your attempts at insults.
Techgeek43 - Tuesday, July 30, 2019 - link
Fantastic article, Ian. I for one cannot wait for Ice Lake laptops. Wonderful in-depth analysis, with an interesting insight into the Intel brand.
repoman27 - Tuesday, July 30, 2019 - link
"The high-end design with 64 execution units will be called Iris Plus, but there will be a ‘UHD’ version for mid-range and low-end parts, however Intel has not stated how many execution units these parts will have."Ah, but they have: Ice Lake-U Iris Plus (48EU, 64EU) 15 W, Ice Lake-U UHD (32EU) 15 W. So their performance comparisons may even be to the 15 W Iris Plus with 64 EUs, rather than the full fat 28 W version.
I know you have access to the media slide decks, but Intel has also posted product briefs for the general public that contain a lot of this info: https://www.intel.com/content/www/us/en/products/d...
"On display pipes, Gen11 has access to three 4K pipes split between DP1.4 HBR3 and HDMI 2.0b. There is also support for 2x 5K60 or 1x 4K120 with a 10-bit color depth."
The three display pipes are not limited to 4K, and are agnostic of transport protocol—each of them can be output via the eDP 1.4b port, one of the 3 DDI interfaces which can support either DisplayPort 1.4 or HDMI 2.0b, or one of the up to 4 Thunderbolt 3 ports. Both HDMI and DP support HDCP 2.2, and DisplayPort also supports DSC 1.1. The maximum single pipe, single port resolution for HDMI is 4K60 10bpc (4:2:2), and for DisplayPort it's 4K120/5K60 10bpc (with DSC).
Thunderbolt 3 integration for Ice Lake-Y is only up to 3 ports.
abufrejoval - Tuesday, July 30, 2019 - link
What I personally liked most about the GT3e (48 EU) and GT4e (72 EU) Skylake variant SoCs was that they didn't cost the extra money they should have, especially when you consider that the iGPU part completely dwarfs the CPU cores (which Intel makes you bleed for) and is much bigger than everything else combined (have a look at the WikiChip layouts: https://en.wikichip.org/wiki/intel/microarchitectu...)
Of course, a significantly better graphics performance is never a bad thing, especially when it also doesn't cost extra electrical power: The bigger iGPUs might have actually been more energy efficient than their GT2 brethren at a graphics load that pushed the GT2 towards its frequency limits. And in any case if you don't crunch it on graphics, the idle consumption is near perfect: One of the reasons most laptop dGPU designs won't even bother to run 2D on the dGPU any more but leave that to Intel.
The biggest downside was that you couldn't buy them outside an Apple laptop or Intel NUC.
But however much Intel goes into Apple mode (Apple being the major customer for these beefier iGPUs) in terms of "x times faster than previous", the results aren't going to turn ultrabooks with this configuration into "THD gaming machines".
To get a good feel for where these could go and whether they are worth the wait, just have a look at the Skull Canyon NUC6i7KYK review on this site: that SoC uses 72 EUs and 128MB of eDRAM and should put a pretty firm upper limit on what a 64 EU Ice Lake can do. Most of the games in that review are somewhat dated yet fail to reach 20FPS at THD.
So if you want to game on the device, you'd be much better off with a dGPU, however small, and choosing the smallest iGPU variant available. No reason to wait; Whiskey + Nvidia will do better.
If you want real gaming performance, you need to put real triple-digit Watts and the bandwidth only GDDR5/6 or HBM can deliver to work, even at THD; but with remote gaming perhaps it doesn't have to be on your elegant slim ultrabook. There again anything but the GT2 configuration is wasted, because you only need the VPU part for decoding Google Stadia (or Steam Remote Play) streams, which is the same for all configurations.
For some strange reason, Intel has been selling GT3/4 NUCs at little or no premium over GT2 variants, and in those cases I have been seriously tempted. And once I even managed to find a GT3e laptop for a GT2 price (while the SoC is literally twice as big and the die carrier even adds eDRAM at zero markup), which I still cherish.
But if prices are at all related to the surface area of the chip (as they are for the server parts), these high-powered GTs are something that only Apple users would buy.
That's another reason I (sadly) don't expect them to be sold in anything but Macs and some NUCs; no ChuWi notebooks or Mini-ITX boards.
abufrejoval - Tuesday, July 30, 2019 - link
...(need edit) Judging from the first 10nm generation, GPUs were the part where obtaining economically feasible yields didn't work out. Unless they have really, really fixed 10nm, it's not hard to imagine that Intel could be selling high-EU-count SoCs to Apple below cost, to keep them for another generation as a flagship customer and perhaps due to long-term contractual obligations.
But maintaining GT2/3/4 price parity for the rest of the market seems suicidal even if you have a fab lead.
Not that I expect we'll ever be told: in near-monopoly situations the so-called market economy becomes surprisingly complex.
willis936 - Wednesday, July 31, 2019 - link
What the hell is a THD in this context?
jospoortvliet - Monday, August 5, 2019 - link
Probably Full HD (True HD)?
repoman27 - Tuesday, July 30, 2019 - link
"It stands to reason then that the smaller package is for lower performance and low power options, despite being exactly the same silicon."I know the die floorplans are the same, but have Intel ever actually confirmed that U and Y (or H and S series for that matter) are the exact same silicon? Is it strictly binning and packaging that separates the platforms, or is there a slight tweak to the manufacturing process to target lower power / higher frequencies? Intel production roadmaps would seem to indicate this isn't just a binning situation, but I've never been entirely certain on that point.
And isn't Comet Lake-U 6+2 more likely to be 25 W, with Whiskey Lake-U 4+2 continuing to pull 15 W duty alongside Ice Lake-U 4+2?
CaedenV - Tuesday, July 30, 2019 - link
Those goals for Athena are OK, but my old Dell XPS 12 with a carousel frame hit all of them except biometrics and wake from sleep in <1 sec... well, and the bezel... but that was due to the carousel design, which I would LOVE to see come back in a more modern form. Not saying these goals are bad... but if a 6-year-old midrange laptop can hit almost all of them, then this isn't exactly aiming for something amazing.
AshlayW - Tuesday, July 30, 2019 - link
Quad core for 179 USD? What is this, 2015? No thanks.
HStewart - Tuesday, July 30, 2019 - link
You do realize these are ultra-portable low-power CPUs and not desktop chips.
Samus - Tuesday, July 30, 2019 - link
Intel is a mess right now; the execution of this, along with the naming scheme, is ridiculous.
shabby - Tuesday, July 30, 2019 - link
18% IPC gain and 20% clock loss. Place your bets on how Intel will spin this.
CHADBOGA - Tuesday, July 30, 2019 - link
I'm quite disappointed the issue of security mitigations in hardware was not addressed. o_O
CityBlue - Saturday, August 3, 2019 - link
Disappointed, but not surprised. Security (and by inference the performance overhead required to implement proper security) is not important according to AnandTech/Ian Cutress. Which is obvious nonsense, so the only logical conclusion is that AnandTech are now a thoroughly biased outfit incapable of any critical reporting, which is quite sad, particularly as it means all their articles (particularly when they relate to Intel) have to be read with a very heavy dose of cynicism.
eek2121 - Wednesday, July 31, 2019 - link
That picture of you biting a wafer is priceless.
Santoval - Wednesday, July 31, 2019 - link
If Ice Lake-U has ~3.5% higher single core performance than Whiskey Lake-U (and, assuming the "multi-core overhead" is the same, multi-core performance as well) despite having a 20% lower single core boost clock, then Sunny Cove must be an extremely impressive μarch. Or, er, that might not actually be the case: Ice Lake-U has an 18% higher IPC than the *original* Skylake of 2015, not Whiskey Lake. While Whiskey Lake is basically the same design, it must have a somewhat higher IPC due to its much more mature process node and other optimizations.
Let's be conservative and assume that Ice Lake-U (more specifically Sunny Cove) has a nice round 15% higher IPC than Whiskey Lake-U, with both at 15W. In that case, at a 20% lower boost clock, Ice Lake-U should have around 5% *lower* performance than Whiskey Lake-U. Where is that +3.5% performance derived from then?
Even if we assumed that Ice Lake-U's 18% IPC edge is over Whiskey Lake-U (highly unlikely, otherwise Intel would not have dug up the original Skylake from its computing grave), that would still translate to Ice Lake-U having a 1.5% lower single core performance than Whiskey Lake-U, rather than being 3.5% faster.
Maybe, just maybe, this is why Intel used just a single synthetic benchmark (surely compiled with aggressive flags and optimized for Intel CPUs) for that graph and avoided disclosing other synthetic benchmarks and real world benchmarks? Is this also why they avoided talking about Ice Lake's CPU performance in their Computex presentation, and instead focused on iGPU, WiFi and AI performance?
Based on the disclosed clocks and the "disclosed" (in obfuscated form) IPC of Ice Lake-U I just cannot see it being in any way faster than Whiskey Lake-U. It will probably also have worse power efficiency, since it has the same or higher TDP range at a much lower clock.
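For reference, the same numbers treated multiplicatively (a sketch only; combining the factors this way gives somewhat larger deficits than adding them):

```python
# Relative performance when an IPC uplift meets a clock regression.
def relative_perf(ipc_gain: float, clock_change: float) -> float:
    return (1 + ipc_gain) * (1 + clock_change)

# Assumed +15% IPC over Whiskey Lake-U at a -20% boost clock:
print(f"{relative_perf(0.15, -0.20):.3f}")  # 0.920 -> ~8% slower
# Even granting the full +18% IPC against Whiskey Lake-U:
print(f"{relative_perf(0.18, -0.20):.3f}")  # 0.944 -> ~5.6% slower
```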
The_Assimilator - Wednesday, July 31, 2019 - link
Getting Thunderbolt on-die is huge for adoption. While I doubt many laptop manufacturers will enable more than a single TB port, desktop is an entirely different kettle of fish.
umano - Wednesday, July 31, 2019 - link
I am afraid I cannot consider a 4-core CPU premium.
Khenglish - Wednesday, July 31, 2019 - link
This honestly is looking like the worst architecture refresh since Prescott. IPC increases are getting almost completely washed out by the loss in frequency. I wonder if this would have happened if Ice Lake had come out on 14nm. Is the clock loss from μarch changes, the process change, or a mix of both?
Performance of an individual transistor has been decreasing since 45nm, but overall circuit performance kept improving due to interconnect capacitance decreasing at a faster rate with every node change. It looks like at Intel 10nm and TSMC 7nm this is no longer true, with transistor performance dropping off a cliff faster than the interconnect capacitance reduction. 5nm and 3nm should be possible, but will anyone want to use them?
Sivar - Wednesday, July 31, 2019 - link
"...with a turbo frequency up to 4.1 GHz"This is the highest number I have come across for the new 10th generation processors, and according to SemiAccurate (which is accurate more often than not), this is likely not an error.
If this value is close to desktop CPU limitations, the low clock speed all but erases the 18% IPC advantage -- an estimate likely based on a first-gen Skylake.
Granted, the wattage values are low, so higher-wattage units should run at least a bit faster.
Farfolomew - Wednesday, July 31, 2019 - link
I'm a bit confused by the naming scheme. Ian, you say: "The only way to distinguish between the two is that Ice Lake has a G in the SKU and Comet Lake has a U."
But that's not what's posted in several places throughout the article. The ICL processors are named Core iX-nnnnGn whereas CML are Core iX-nnnnnU. Comet Lake is using 5 digits and Ice Lake only 4 (1000 vs 10000 series).
Is this a typo or will ICL be 1000-series Core chips?
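For what it's worth, the two SKU shapes are at least machine-distinguishable. A toy sketch (the patterns just encode the formats cited in the article, nothing official):

```python
import re

def classify_sku(sku: str) -> str:
    # Ice Lake: 4-digit model ending in G plus graphics level, e.g. i7-1065G7
    if re.fullmatch(r"i[357]-10\d{2}G[147]", sku):
        return "Ice Lake (1000 series)"
    # Comet Lake: 5-digit model ending in U, e.g. i7-10510U
    if re.fullmatch(r"i[357]-10\d{3}U", sku):
        return "Comet Lake (10000 series)"
    return "unknown"

print(classify_sku("i7-1065G7"))  # Ice Lake (1000 series)
print(classify_sku("i7-10510U"))  # Comet Lake (10000 series)
```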
name99 - Wednesday, July 31, 2019 - link
Regarding AI on the desktop: the place where desktop AI will shine is NLP. NLP has lagged behind vision for a while, but has acquired new potency with the Transformer. It will take time for this to be productized, but we should ultimately see vastly superior translation (text and speech), spelling and grammar correction, decent sentiment analysis while typing, even better search. Of course this requires productization. Google's agenda is to do this in the cloud. MS' agenda I have no idea about (they still have sub-optimal desktop search). So probably Apple will be first to turn this into mainstream products.
Relevant to this article is that I don't know the extent to which instructions and micro-architectures optimized for CNNs are still great for the Transformer (and the even newer and rather superior Transformer-XL published just a few months ago). This may all take a long time to arrive on the desktop if INTC optimized purely for vision, and it takes them another of their 7-year turnarounds to update direction...
croc - Thursday, August 1, 2019 - link
It seems that Ice Lake / Sunny Cove will have hardware fixes for Spectre and Meltdown. I would like to see some more information on this, such as how much speed gain, whether the patch is predictive (so as to block ALL such OOE / BP exploits), etc.
MDD1963 - Thursday, August 1, 2019 - link
A month or so ago, we heard a few rumors that the CPUs were ahead ~18% in IPC (I see that number again in this article), but are down ~20+% in clock speed... it would be nice to see at least one or two performance metrics/comparisons on a shipped product. :)
isthisavailable - Thursday, August 1, 2019 - link
Unlike Ryzen mobile, Intel's "up to" 64 EU part will probably only ship in like two laptops, so AMD has more designs in my book. I don't understand people who buy expensive 4K laptops with Intel integrated graphics that can't even render the Windows 10 UI smoothly. Looking forward to Zen 2 + Navi based 7nm APUs.
Bulat Ziganshin - Thursday, August 1, 2019 - link
> it can be very effective: a dual core system with AVX-512 outscored a 16-core system running AVX2 (AVX-256).
That's obviously wrong - since Ice Lake has only one AVX-512 block but two AVX2 blocks, it's not much faster in AVX-512 mode compared to AVX2 mode.
The only mention of HEDT CPUs on the linked page is "At a score of 4519, it beats a full 18-core Core i9-7980XE processor running in non-AVX mode". Since AVX-512 can process 16 32-bit numbers in a single operation, it's no wonder that a single AVX-512 core matches 16 scalar cores.
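The lane math behind that, for anyone following along (port counts as assumed in the comment above for client Ice Lake):

```python
# Peak 32-bit lanes per cycle under each mode (illustrative).
lanes_avx512 = (512 // 32) * 1  # one 512-bit unit  -> 16 lanes/cycle
lanes_avx2   = (256 // 32) * 2  # two 256-bit units -> 16 lanes/cycle
lanes_scalar = 1                # one element per scalar op

print(lanes_avx512, lanes_avx2)      # 16 16 -> similar peak vector throughput
print(lanes_avx512 // lanes_scalar)  # 16 -> why one AVX-512 core can
                                     # plausibly match 16 scalar cores
```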
s.yu - Thursday, August 1, 2019 - link
"Charge 4+hrs in 30 mins"...Ok, I think "4+hrs battery life under 30 min. charging" sounds better, or just Intel's version.
29a - Thursday, August 1, 2019 - link
"Should Intel go ahead with the naming scheme, it is going to offer a cluster of mixed messages."
I believe the word you are looking for there is clusterfuck.
ifThenError - Friday, August 2, 2019 - link
Too bad the article doesn't state any further details about the HEVC encoders. It would be interesting to hear if Intel only improved the speed or if they also worked on compression and quality. I bought a Gemini Lake system last year to try the encoding in hardware and have had very mixed feelings about Intel's Quick Sync since. The encoding speed is impressive with the last generation already, all the while the CPU and GPU stay practically idle. On the downside, the image quality and compression ratio are highly underwhelming and nowhere near usable for "content creation" or even mere transcoding; it suffices for video calls at best. Even encoding h264 in software achieves far better compression efficiency while being not much slower on a low end CPU.
IIRC Intel promised some “quality mode” for their upcoming encoders, but I can't remember if that was for the gen11 graphics.
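For anyone who wants to reproduce the comparison, this is roughly how the Quick Sync HEVC encoder can be driven (a sketch using ffmpeg's hevc_qsv encoder; file names are placeholders and it assumes an ffmpeg build with QSV support):

```python
import subprocess

# Hardware HEVC encode through Quick Sync; lower global_quality = higher quality.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "hevc_qsv",
    "-global_quality", "25",
    "qsv_out.mkv",
], check=True)

# Software x265 encode at a comparable quality target, as a reference point.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx265", "-crf", "25",
    "x265_out.mkv",
], check=True)
```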
intel_gene - Friday, August 2, 2019 - link
There is some information on GNA available. It is accessed through Intel's OpenVINO:
https://docs.openvinotoolkit.org/latest/_docs_IE_D...
https://github.com/opencv/dldt/tree/2019/inference...
There is some background information here:
https://sigport.org/sites/default/files/docs/Poste...
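To save a click, targeting GNA through OpenVINO looks something like this (a minimal sketch; the Python API names vary between OpenVINO releases, so treat them as indicative rather than exact):

```python
from openvino.inference_engine import IECore

ie = IECore()
# "model.xml"/"model.bin" are placeholders for an IR produced by the Model
# Optimizer; GNA only accepts a restricted set of layer types.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="GNA")

# Inference then looks the same as on the CPU or GPU plugins:
# results = exec_net.infer(inputs={"input": input_tensor})
```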
urbanman2004 - Friday, August 2, 2019 - link
I wonder what happens to Project Athena if none of the products released by the vendor partners/OEMs meet the criteria that Intel's established.
GreenReaper - Saturday, August 3, 2019 - link
Plagues of snakes, owls, eagles, Asari, etc.gambita - Monday, August 5, 2019 - link
Nice of you to do Intel's bidding and promote and help their PR.
HikariWS - Sunday, August 11, 2019 - link
These improvements in serial performance are great; it's awesome to have bigger buffers and more execution units. But on the clock front there seems to be a big drawback. I'm sure clock issues are the reason we won't have any Ice Lake on desktop, and Comet Lake on laptops in the same generation. But why no 6C Ice Lake? That raises a big warning flag for me.
But what also caught my attention is its iGPU power. Most mid-range and above laptops are using Nvidia GPUs. That's sad for those of us who want performance but won't game on them, because mid-range laptops are already all coming with Nvidia GPUs, which makes them more expensive.
Now I hope to see these segments using the Intel iGPU and dropping the Nvidia GPU. Good for us, who'll waste less money on hardware we don't need; bad for Nvidia.
nils_ - Wednesday, August 14, 2019 - link
Can you please stop eating the chips? Yield must be bad enough as it is!
fizzypop1 - Saturday, August 17, 2019 - link
What does Ice Lake taste like? 👍
fizzypop1 - Saturday, August 17, 2019 - link
You need to add salt and vinegar to make the chip taste great, or have ketchup.
fizzypop1 - Saturday, August 17, 2019 - link
We need to see 35W and 45W parts for all-in-ones and small form factors.
IUU - Sunday, September 1, 2019 - link
Some thoughts:
1) If anyone believes that Intel has been left behind on process nodes, when it was ahead by at least two generations and was the definition of expertise in this field, good for them.
2) Likewise, Intel is fully capable of building 1-3 watt platforms. If it had done so when the mobile "devolution" began some years ago, it would have bulldozed everything in its way. It still can. Or do you think that somehow Intel is stuck at 6 watts minimum?
3) This obsession with staying just above the power requirements for true mobility is to protect mommy ARM and its children. Intel knows that building "APUs" for "premium" products would still generate profit from the suckers who would buy them.
There are a number of plausible scenarios for this weird behavior by Intel. Lack of competence, in engineering or management, is certainly not on that list.
Coldblackice - Tuesday, February 4, 2020 - link
I know I'm late to this party, but do you mind expanding on this? As far as the plausible scenarios for this weird behavior?