Bill Kircos, Intel’s Director of Product & Technology PR, just posted a blog on Intel’s site entitled “An Update on our Graphics-Related Programs”. In the blog Bill addresses future plans for what he calls Intel’s three visual computing efforts:

The first is the aforementioned processor graphics. Second, for our smaller Intel Atom processor and System on Chip efforts, and third, a many-core, programmable Intel architecture and first product both of which we referred to as Larrabee for graphics and other workloads.

There’s a ton of information in the vague but deliberately worded blog post, including a clear stance on Larrabee as a discrete GPU: “We will not bring a discrete graphics product to market, at least in the short-term.” Kircos goes on to say that Intel will increase funding for integrated graphics and pursue Larrabee-based HPC opportunities, effectively validating both AMD’s and NVIDIA’s strategies. As different as Larrabee appeared when it first arrived, Intel appears to be going with the flow after today’s announcement.

My analysis of the post as well as some digging I’ve done follows.

Intel Embraces Fusion, Long Live the IGP

Two and a half years ago Intel put up this slide that indicated the company was willing to take 3D graphics more seriously:

By 2010, on a 32nm process, Intel’s integrated graphics would be at roughly 10x the performance of what it was in 2006. Sandy Bridge was supposed to be out in Q4 2010, but we’ll see it shipping in Q1 2011. It’ll offer a significant boost in integrated graphics performance. I’ve heard it may finally be as fast as the GPU in the Xbox 360.

Intel made huge improvements to its integrated graphics with Arrandale/Clarkdale. This wasn’t an accident; the company is taking graphics much more seriously. The first point in Bill’s memo clarifies this:

Our top priority continues to be around delivering an outstanding processor that addresses every day, general purpose computer needs and provides leadership visual computing experiences via processor graphics. We are further boosting funding and employee expertise here, and continue to champion the rapid shift to mobile wireless computing and HD video – we are laser-focused on these areas.

There’s a troublesome omission in this statement: gaming. A laser focus on mobile wireless computing and HD video sounds a lot like an extension of what Intel integrated graphics does today, not what we hope it will do tomorrow. Intel does have a fairly aggressive roadmap for integrated graphics performance, so perhaps leaving out the word gaming was intentional, downplaying the importance of the market its competitors play in, at least for now.


The current future of Intel graphics

The second point is this:

We are also executing on a business opportunity derived from the Larrabee program and Intel research in many-core chips. This server product line expansion is optimized for a broader range of highly parallel workloads in segments such as high performance computing. Intel VP Kirk Skaugen will provide an update on this next week at ISC 2010 in Germany.

In a single line Intel completely validates NVIDIA’s Tesla strategy. Larrabee will go after the HPC space much like NVIDIA has been doing with Fermi and previous Tesla products. Leveraging x86 can be a huge advantage in HPC. If both Intel and NVIDIA see so much potential in HPC for parallel architectures, there must be some high dollar amounts at stake.

NVIDIA Tesla   Seismic   Supercomputing   Universities   Defence   Finance
GPU TAM        $300M     $200M            $150M          $250M     $230M

NVIDIA's calculated TAM for HPC applications for GPUs
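
For what it’s worth, those five segments alone add up to roughly $1.13 billion of addressable market by NVIDIA’s own math.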

The third point is the one that drives the final nail into the coffin of the Larrabee GPU:

We will not bring a discrete graphics product to market, at least in the short-term. As we said in December, we missed some key product milestones. Upon further assessment, and as mentioned above, we are focused on processor graphics, and we believe media/HD video and mobile computing are the most important areas to focus on moving forward.

Intel wasn’t able to make Larrabee performance competitive in DirectX and OpenGL applications, so we won’t be getting a discrete GPU based on Larrabee anytime soon. Instead, Intel will be dedicating its resources to improving its integrated graphics. We should see a nearly 2x improvement in Intel integrated graphics performance with Sandy Bridge, and a greater-than-2x improvement once more with Ivy Bridge in 2012.

All isn’t lost, though. The Larrabee ISA, specifically the VPU extensions, will surface in future CPUs and integrated graphics parts. And Intel will continue to toy with the idea of using Larrabee in various forms, including a discrete GPU. However, the primary focus has shifted away from producing a discrete GPU to compete with AMD and NVIDIA, and toward integrated graphics and a Larrabee derivative for HPC workloads. Intel is effectively stating that it sees a potential future where discrete graphics isn’t a sustainable business and that integrated graphics will become good enough for most of what we want to do, even from a gaming perspective. In Intel’s eyes, discrete graphics would only serve the needs of a small niche if we reach this future where integrated graphics is good enough.
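
To put the “VPU extensions” in more concrete terms, here is a minimal sketch in plain C (purely illustrative, not Intel’s code) of the kind of data-parallel loop a wide vector unit is built for. On a 512-bit, Larrabee-style VPU, a vectorizing compiler could in principle retire 16 of these single-precision multiply-adds per instruction rather than one at a time.

```c
#include <stddef.h>

/* Illustrative only: single-precision AXPY, a staple HPC kernel.
 * Every iteration is independent, so the loop maps naturally onto a
 * wide vector unit; a 512-bit register holds 16 floats, allowing
 * 16 multiply-adds to complete per vector instruction. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```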

Just as cache controllers and FPUs were eventually integrated into the CPU, Intel expects the GPU to follow the same path; the days of discrete coprocessors have always been numbered. One benefit of a tightly coupled CPU and GPU is the bandwidth available between the two, an advantage game consoles have exploited for years (a discrete card is limited to its PCIe link, roughly 8 GB/s each way for PCIe 2.0 x16, while an on-die GPU can tap the CPU’s memory subsystem directly).

This does conflict (somewhat) with AMD’s vision of a functional Holodeck within 6 years, but that’s why Intel put the “at least in the short-term” qualifier on its statement. I believe Intel plans on making integrated graphics, over the next 5 years, good enough for nearly all 3D gaming. I’m not sure AMD’s Fusion strategy is much different.

For years Intel made a business case for delivering cheap, barely accelerated 3D graphics on aging process technologies. Intel has apparently recognized the importance of the GPU and is changing direction. Intel will commit more resources (both in development and in actual transistor budget) to the graphics portion of its CPUs going forward. Sandy Bridge will be the beginning, and the ramp from there will probably mimic what we saw ATI and NVIDIA do with their GPUs over the years: a constant doubling of transistor count. Intel has purposefully limited the GPU transistor budget in the past. From what I’ve heard, that limit is now gone. It will start with Sandy Bridge, but I don’t think we’ll be impressed until 2013.

What About Atom & Moorestown?

Anything can happen, but by specifically calling out the Atom segment I get the impression that Intel is trying to build its own low power GPU core for use in SoCs. Currently the IP is licensed from Imagination Technologies, a company Intel holds a 16% stake in, but eventually Intel may build its own integrated graphics core here.

Previous Intel graphics cores haven’t been efficient enough to scale down to the smartphone SoC level. I get the impression that Intel has plans (if it isn’t doing so already) to create an Atom-like GPU team to work on extremely low power graphics cores. This would ultimately eliminate the need to license third-party graphics IP for Intel’s SoCs. Planning and succeeding are two different things, so only time will tell whether Imagination has a long-term future at Intel. The next 3 years are pretty much guaranteed to be full of Imagination graphics, at least in Intel’s smartphone/SoC products.

Final Words

Intel cancelled plans for a discrete Larrabee graphics card because it could not produce one that was competitive with existing GPUs from AMD and NVIDIA in current games. Why Intel lacked the foresight to avoid getting to this point in the first place is tough to say. The company may have been too optimistic, or it genuinely lacked the experience of building discrete GPUs, something it hadn’t done in more than a decade. Maybe it truly was Pat Gelsinger's baby.

This also validates AMD’s and NVIDIA’s strategies and their public responses to Larrabee. Those two often said that the most efficient approach to 3D graphics was not through x86 cores but through their own specialized, yet programmable, hardware. The x86 tax would effectively always put Larrabee at a disadvantage. When running Larrabee-native code this would be less of an issue, but DX and OpenGL performance is another situation entirely. Intel executed too poorly; NVIDIA and most definitely AMD executed too well. Intel couldn’t put out a competitive Larrabee quickly enough, and it fell too far behind.

A few years ago Intel attempted to enter the ARM SoC race with an ARM-based chip of its own: XScale. Intel admitted defeat and sold off XScale, stating that it was too late to the market. Intel has since focused on the future of the SoC market with Moorestown. Rather than compete in a maturing market, Intel is now attempting to get a foot in the door on the next evolution of that market: high-performance SoCs.

I believe this may be what Intel is trying with its graphics strategy. Seeing little hope for a profitable run at discrete graphics, Intel is now turning its eye to unexplored territory: the hybrid CPU-GPU. Focusing its efforts there, if successful, would be far easier and far more profitable than struggling to compete in the discrete GPU space.

The same goes for using Larrabee in the HPC space. NVIDIA is the most successful GPU company in HPC and even its traction has been limited. It’s still early enough that Intel could show up with Larrabee and take a big slice of the pie.

Clearly AMD sees value in the integrated graphics strategy: it spent over $5 billion acquiring ATI in order to bring Fusion to market. Next year we’ll see the beginnings of that merger come to fruition. Not only does Intel’s announcement today validate NVIDIA’s HPC strategy, but it also validates AMD’s acquisition of ATI. While Larrabee as a discrete GPU cast a shadow of confusion over the future of the graphics market, Intel focusing on integrated graphics and HPC is much more harmonious with AMD’s and NVIDIA’s roadmaps. We used to not know who had the right approach; now we have one less approach to worry about.

Whether Intel is committed enough to integrated graphics remains to be seen. NVIDIA has no current integrated graphics strategy (unless it can work out a DMI/QPI license with Intel). AMD’s strategy is very similar to what Intel is proposing today, and has been for some time, but AMD at least has far more mature driver and ISV compatibility teams behind its graphics cores. Intel has a lot of catching up to do in this department.

I’m also unsure what AMD and Intel see as the future of discrete graphics. Based on today’s trajectory you wouldn’t have high hopes for discrete graphics, but as today’s announcement shows, anything can change. Plus, I doubt the Holodeck will run optimally on an IGP.

Comments

  • Doormat - Tuesday, May 25, 2010

    That's how I read it, but remember you have to take into account how fast the graphics industry as a whole is moving. If the mainstream Sandy Bridge IGP is only as fast as a 9400M was back in 2008/2009, and a higher-performance version is 80% faster (around 4500 in 3DMark06), where will mobile GPUs be in January 2011? If TSMC can get its head out of its butt and get 28nm fabrication online late this year, maybe in Q2 we'll see GPUs from Nvidia that are 2x as powerful as the 300-series we see now with the same power/heat footprint.
  • KalTorak - Tuesday, May 25, 2010

    Hm - can't edit my previous post to add the proper disclosure:

    [Yep, I work for Intel. That said, anything I happen to write here's completely my own thought and may or may not be reflective of Intel's positions.]
  • Hotdog3c - Tuesday, May 25, 2010

    This has been coming for a long time; it's obvious that Intel can't mix it with Nvidia or ATI. Plus, with the problems Nvidia has been having with its fab process, it's time for Intel to buy Nvidia just like AMD did with ATI.
  • zodiacfml - Tuesday, May 25, 2010

    I think the importance of selling discrete GPUs was to have consumers pay for the development of Larrabee while Intel started taking share from Nvidia in the HPC market.
    Unfortunately, that wasn't the case, since their GPU design wouldn't stand a chance against Nvidia's or AMD's GPUs. They wouldn't sell enough GPUs if they pushed through with the plan.

    They will just focus on creating a Fusion-like CPU before AMD beats them to it, which is not that far removed from creating good SoCs. Meanwhile, they can let Nvidia take the HPC market for now.
  • bentherdunthat - Tuesday, May 25, 2010

    Intel's days of capable innovation and execution are behind it. Future MBA candidates, take copious notes: you are witnessing the decline of a monopoly by its own doing.
  • nycromes - Thursday, May 27, 2010

    I don't know about that. Intel is taking a calculated risk based on what they see going on in the graphics sector. I believe more and more people are choosing integrated graphics and you will see discrete graphics become somewhat of a niche market mostly for gamers and folders. I don't think discrete graphics are going anywhere in the short term (probably not the long term either).

    PCs are becoming more and more of a disposable device to people. Most people buy one, use it till it breaks, then go back to Best Buy or some other store and buy their next one. Those PCs generally have integrated graphics. I think Intel realized it was late to the game in discrete graphics and was entering a market that at best probably won't experience much growth. I really have to wonder about the performance of Larrabee though; the people who want discrete graphics will go with whatever is best at the time in terms of performance, power, etc. Maybe Larrabee just couldn't cut it.
  • iwodo - Wednesday, May 26, 2010

    In a perfect world, the GPU would use only a few watts when I'm browsing or idle and scale up to 100+W when I'm gaming. But of course, we know that unless there is some major transistor tech breakthrough, this is not going to happen in the next 5+ years.

    So back to a realistic ideal: the IGP should be VERY low power, with superior 2D rendering performance, a programmable DSP that lets many if not all of the FFmpeg codecs get hardware acceleration, and 3D performance focused on UI, effects workloads, browser canvas and other vector acceleration, with gaming-related work as a final consideration.

    As it currently stands, the IGP is fairly large yet provides performance we don't need 90% of the time, and when we do need performance it isn't capable anyway. So why waste transistors on it? Intel could give us an extra core or more L2 cache for CPU performance instead.

    I don't understand why a previous poster said Optimus is not the way forward. I see Optimus as the future, at least in the near term.

    And a final note: the worst thing about Intel HD is not the hardware itself. It's the fact that the drivers for Intel HD are poor, slow to update, and the main cause of poor gaming performance. Nvidia has more software engineers working on its drivers than it has hardware engineers. It just shows that the GPU and CPU are completely different beasts.
  • KaarlisK - Wednesday, May 26, 2010

    There's one more reason why Intel's IGP/IPG products can't be relied upon.
    They discontinue driver support very early, even for architectures they are still developing.

    A lot of bugs have been discovered in Intel's GMA 950 and GMA X3100 drivers, but Intel has stated on its support forums that they will not be fixed. It's more than weird considering that GMA 950 is the same basic architecture as GMA 3150, and GMA X3100 is the same basic architecture as HD Graphics (Clarkdale).

    If these were Nvidia or ATI products, they would still be supported and I would be able to rely on my applications being able to run - albeit slowly - on all of my hardware generations; now, to have that reliability, I am forced to purchase discrete graphics cards, even though the performance of Intel's products might be enough for me.
  • silverblue - Wednesday, May 26, 2010

    Two areas I'd like to touch on:

    1) "Intel is effectively stating that it sees a potential future where discrete graphics isn’t a sustainable business and that integrated graphics will become good enough for most of what we want to do, even from a gaming perspective. In Intel’s eyes, discrete graphics would only serve the needs of a small niche if we reach this future where integrated graphics is good enough."

    Intel should recognise that they are partly to blame for the lack of progress with integrated graphics. ATi and nVidia would have released far more powerful solutions had it become apparent that Intel was serious about providing a better gaming experience.

    2) "Anything can happen, but by specifically calling out the Atom segment I get the impression that Intel is trying to build its own low power GPU core for use in SoCs. Currently the IP is licensed from Imagination Technologies, a company Intel holds a 16% stake in, but eventually Intel may build its own integrated graphics core here."

    I'd rather Intel stopped developing its own solutions right now and just pumped money into Imagination Technologies. Everyone knows how good Kyro was despite its clock speed disadvantage, and despite being told it couldn't be done, they paired a T&L unit with their deferred rendering system. Every phone worth its salt uses PowerVR graphics and it can't be difficult to scale these up, plus you don't need to throw the most powerful components at them due to their unmatched efficiency.

    Besides which, I'd LOVE to see PowerVR back where it belongs. I still wonder how powerful Kyro III (PowerVR Series 4) would have been compared to the competitors of the time (the Radeon 9800 and GeForce FX 5900).
  • IntelUser2000 - Wednesday, May 26, 2010

    Once they get their graphics architecture right, there will be advantages to making them in-house, and I bet it doesn't end at licensing costs either. Whether they'll get that on their LPIA products is another problem altogether.

    We'll see if PowerVR can scale up again, but other than papers claiming they can, there hasn't been a PC-centric part since the Kyro days. Whether it's hardware or drivers, there is indeed some merit to the claim that TBDR is hard to implement for modern shader architectures.
