Bill Kircos, Intel’s Director of Product & Technology PR, has posted an entry on Intel’s blog titled “An Update on our Graphics-Related Programs”. In it, Kircos addresses future plans for what he calls Intel’s three visual computing efforts:

The first is the aforementioned processor graphics. Second, for our smaller Intel Atom processor and System on Chip efforts, and third, a many-core, programmable Intel architecture and first product both of which we referred to as Larrabee for graphics and other workloads.

There’s a ton of information in the vague but deliberately worded post, including a clear stance on Larrabee as a discrete GPU: “We will not bring a discrete graphics product to market, at least in the short-term.” Kircos goes on to say that Intel will increase funding for integrated graphics and pursue Larrabee-based HPC opportunities, effectively validating both AMD’s and NVIDIA’s strategies. As different as Larrabee appeared when it first arrived, Intel now appears to be going with the flow.

My analysis of the post as well as some digging I’ve done follows.

Intel Embraces Fusion, Long Live the IGP

Two and a half years ago Intel put up this slide that indicated the company was willing to take 3D graphics more seriously:

By 2010, on a 32nm process, Intel’s integrated graphics would deliver roughly 10x the performance it offered in 2006. Sandy Bridge was supposed to be out in Q4 2010, but we’ll see it shipping in Q1 2011. It’ll offer a significant boost in integrated graphics performance; I’ve heard it may finally be as fast as the GPU in the Xbox 360.

Intel made huge improvements to its integrated graphics with Arrandale/Clarkdale. This wasn’t an accident; the company is taking graphics much more seriously. The first point in Bill’s memo clarifies this:

Our top priority continues to be around delivering an outstanding processor that addresses every day, general purpose computer needs and provides leadership visual computing experiences via processor graphics. We are further boosting funding and employee expertise here, and continue to champion the rapid shift to mobile wireless computing and HD video – we are laser-focused on these areas.

There’s a troublesome lack of any mention of the gaming market in this statement. A laser focus on mobile wireless computing and HD video sounds a lot like an extension of what Intel integrated graphics does today, not what we hope it will do tomorrow. Intel does have a fairly aggressive roadmap for integrated graphics performance, so perhaps omitting the word gaming was intentional, downplaying the importance of the market its competitors play in, for now.


The current future of Intel graphics

The second point is this:

We are also executing on a business opportunity derived from the Larrabee program and Intel research in many-core chips. This server product line expansion is optimized for a broader range of highly parallel workloads in segments such as high performance computing. Intel VP Kirk Skaugen will provide an update on this next week at ISC 2010 in Germany.

In a single line Intel completely validates NVIDIA’s Tesla strategy. Larrabee will go after the HPC space much like NVIDIA has been doing with Fermi and previous Tesla products. Leveraging x86 can be a huge advantage in HPC. If both Intel and NVIDIA see so much potential in HPC for parallel architectures, there must be some high dollar amounts at stake.

NVIDIA Tesla GPU TAM:  Seismic   Supercomputing   Universities   Defence   Finance
                       $300M     $200M            $150M          $250M     $230M

NVIDIA's calculated TAM for HPC applications for GPUs
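Part of the x86 advantage is practical: HPC shops can keep the code and toolchains they already have. Below is a minimal sketch of the kind of source they already run, a plain C loop parallelized with OpenMP. On a many-core x86 part descended from Larrabee this could in principle scale across cores with a recompile, while porting the same kernel to a GPU means rewriting it in CUDA or OpenCL. The kernel is purely illustrative, not anything from Intel’s product line.

```c
/* Minimal sketch: a parallel dot product in plain C with OpenMP.
   Nothing here is Larrabee-specific; that's the point. On a many-core
   x86 chip the same source scales across cores with a recompile.
   Build with: gcc -std=c99 -fopenmp dot.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static float a[N], b[N];

int main(void) {
    double sum = 0.0;

    for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Each core takes a slice of the loop; the reduction combines
       per-thread partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += (double)a[i] * b[i];

    printf("dot = %.0f, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}
```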

The third point is the one that drives the final nail into the coffin of the Larrabee GPU:

We will not bring a discrete graphics product to market, at least in the short-term. As we said in December, we missed some key product milestones. Upon further assessment, and as mentioned above, we are focused on processor graphics, and we believe media/HD video and mobile computing are the most important areas to focus on moving forward.

Intel wasn’t able to make Larrabee performance competitive in DirectX and OpenGL applications, so we won’t be getting a discrete GPU based on Larrabee anytime soon. Instead, Intel will be dedicating its resources to improving its integrated graphics. We should see a nearly 2x improvement in Intel integrated graphics performance with Sandy Bridge, and greater than 2x improvement once more with Ivy Bridge in 2012.

All isn’t lost, though. The Larrabee ISA, specifically the VPU extensions, will surface in future CPUs and integrated graphics parts. And Intel will continue to toy with the idea of using Larrabee in various forms, including a discrete GPU. However, the primary focus has shifted from producing a discrete GPU to compete with AMD and NVIDIA to integrated graphics and a Larrabee for HPC workloads. Intel is effectively stating that it sees a potential future where discrete graphics isn’t a sustainable business and integrated graphics becomes good enough for most of what we want to do, even from a gaming perspective. In Intel’s eyes, if that future arrives, discrete graphics would serve only the needs of a small niche.
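For a sense of what those VPU extensions are about, here is an illustrative sketch in plain C rather than actual Larrabee intrinsics (which Intel hasn’t committed to for future CPUs): the same SAXPY kernel written scalar, then in 16-float strips matching the width of Larrabee’s 512-bit vector unit. Each inner strip is roughly one vector instruction’s worth of work.

```c
/* Illustrative only: what a 16-wide (512-bit) VPU changes. The scalar
   loop retires one multiply-add per iteration; a Larrabee-class vector
   unit would retire 16 of them per instruction. Plain C, no real LRBni. */
#include <stddef.h>

void saxpy_scalar(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];                   /* one float per iteration */
}

void saxpy_strips(size_t n, float a, const float *x, float *y) {
    size_t i = 0;
    /* Main loop in 16-float strips, mirroring how a compiler would map
       the work onto 16 vector lanes. */
    for (; i + 16 <= n; i += 16)
        for (size_t lane = 0; lane < 16; lane++)  /* ~one VPU instruction */
            y[i + lane] = a * x[i + lane] + y[i + lane];
    /* Remainder that doesn't fill a full vector is handled scalar. */
    for (; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```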

Just as cache controllers and FPUs were eventually pulled into the CPU, Intel expects the GPU to follow the same path; the days of discrete coprocessors have always been numbered. One benefit of a tightly coupled CPU and GPU is the bandwidth available between the two, an advantage game consoles have exploited for years.
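Some back-of-the-envelope math shows why that bandwidth matters. The sketch below assumes a 64MB working set shuttled between CPU and GPU each frame, PCIe 2.0 x16’s roughly 8GB/s per direction, and a purely hypothetical 64GB/s for an on-die path; the figures are illustrative, not measured.

```c
/* Back-of-the-envelope: time to move a 64MB working set between CPU and
   GPU over a discrete card's PCIe link vs. a shared on-die path. The
   on-die bandwidth is an assumption for illustration, not an Intel spec. */
#include <stdio.h>

int main(void) {
    double set_gb     = 64.0 / 1024.0;  /* 64MB working set, in GB       */
    double pcie_gbps  = 8.0;            /* PCIe 2.0 x16, ~8GB/s per dir  */
    double ondie_gbps = 64.0;           /* assumed on-die bandwidth      */

    printf("PCIe transfer:   %.2f ms\n", set_gb / pcie_gbps  * 1000.0);
    printf("On-die transfer: %.2f ms\n", set_gb / ondie_gbps * 1000.0);
    /* At 60 fps the frame budget is ~16.7 ms; ~7.8 ms of copying over
       PCIe eats nearly half of it, while on-die it shrinks to under 1 ms. */
    return 0;
}
```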

This does conflict (somewhat) with AMD’s vision of a functional Holodeck in six years, but that’s why Intel put the “at least in the short-term” qualifier on its statement. I believe Intel plans on making integrated graphics good enough, over the next five years, for nearly all 3D gaming. I’m not sure AMD’s Fusion strategy is much different.

For years Intel made a business case for delivering cheap, barely accelerated 3D graphics on aging process technologies. Intel has apparently recognized the importance of the GPU and is changing direction. It will commit more resources (both development effort and actual transistor budget) to the graphics portion of its CPUs going forward. Sandy Bridge is the beginning; the ramp from there will probably mimic what ATI and NVIDIA did with their GPUs over the years, with a steady doubling of transistor count. Intel has purposefully limited the GPU transistor budget in the past. From what I’ve heard, that limit is now gone. It will start with Sandy Bridge, but I don’t think we’ll be impressed until 2013.

What About Atom & Moorestown?

Anything can happen, but by specifically calling out the Atom segment, Intel gives the impression that it wants to build its own low-power GPU core for use in SoCs. Currently that IP is licensed from Imagination Technologies, a company Intel holds a 16% stake in, but eventually Intel may build its own integrated graphics core here.

Previous Intel graphics cores haven’t been efficient enough to scale down to the smartphone SoC level. I get the impression that Intel has plans (if it is not doing so already) to create its own Atom-like GPU team to work on extremely low power graphics cores, which would ultimately eliminate the need to license 3rd party graphics IP for Intel’s SoCs. Planning and succeeding are two different things, so only time will tell whether Imagination has a long-term future at Intel. The next three years are pretty much guaranteed to be full of Imagination graphics, at least in Intel’s smartphone/SoC products.

Final Words

Intel cancelled plans for a discrete Larrabee graphics card because it could not produce one that was competitive with existing GPUs from AMD and NVIDIA in current games. Why Intel lacked the foresight to avoid reaching this point is tough to say. The company may have been too optimistic, or it may have genuinely lacked the experience of building discrete GPUs, something it hadn’t done in more than a decade. Maybe it truly was Pat Gelsinger’s baby.

This also validates AMD’s and NVIDIA’s strategies and their public responses to Larrabee. Both companies often said that the most efficient approach to 3D graphics was not through x86 cores but through their own specialized, yet programmable, hardware. The x86 tax would effectively always put Larrabee at a disadvantage. When running Larrabee-native code this would be less of an issue, but DirectX and OpenGL performance is another situation entirely. Intel executed too poorly; NVIDIA, and most definitely AMD, executed too well. Intel couldn’t put out a competitive Larrabee quickly enough, and it fell too far behind.

A few years ago Intel attempted to enter the ARM SoC race with an ARM-based chip of its own: XScale. Intel admitted defeat and sold off XScale, stating that it was too late to the market. Intel has since focused on the future of the SoC market with Moorestown. Rather than compete in a maturing market, Intel is attempting to get a foot in the door of the next evolution of that market: high-performance SoCs.

I believe this may be what Intel is trying with its graphics strategy. Seeing little hope for a profitable run at discrete graphics, Intel is turning its eye to unexplored territory: the hybrid CPU-GPU. If successful, focusing its efforts there would be far easier and far more profitable than struggling to compete in the discrete GPU space.

The same goes for using Larrabee in the HPC space. NVIDIA is the most successful GPU company in HPC and even its traction has been limited. It’s still early enough that Intel could show up with Larrabee and take a big slice of the pie.

Clearly AMD sees value in the integrated graphics strategy; it spent over $5 billion acquiring ATI in order to bring Fusion to market, and next year we’ll see the beginnings of that merger come to fruition. Not only does Intel’s announcement today validate NVIDIA’s HPC strategy, it also validates AMD’s acquisition of ATI. While Larrabee as a discrete GPU cast a shadow of confusion over the future of the graphics market, Intel focusing on integrated graphics and HPC is much more harmonious with AMD’s and NVIDIA’s roadmaps. We used to not know who had the right approach; now we have one less approach to worry about.

Whether Intel is committed enough to integrated graphics remains to be seen. NVIDIA has no current integrated graphics strategy (unless it can work out a DMI/QPI license with Intel). AMD’s strategy is very similar to what Intel is proposing today, and has been for some time, but AMD at least has far more mature driver and ISV compatibility teams behind its graphics cores. Intel has a lot of catching up to do in this department.

I’m also unsure what AMD and Intel see as the future of discrete graphics. Based on today’s trajectory you wouldn’t have high hopes for it, but as today’s announcement shows, anything can change. Plus, I doubt the Holodeck will run optimally on an IGP.

Comments

  • v12v12 - Tuesday, May 25, 2010

    I can't wait till the day the overpriced, loud, space-hogging, heat-producing, LOW-tech single-core garbage GPU card goes extinct...

    Then maybe Anandtech and Tom's will stop doing all these waste-of-time gamer reviews about cards that have ruined the hardware market. The current GPU "technology" is a farce; it's the MOST metered market in the PC world... ATI and Nvid LOVE metering out worthless "next-gen" cards that aren't next-gen anything. They've long developed a product line based off one core and then chopped it up 5-6-7-8-9 times. The stupid gaming masses endlessly fuel this low-tech market (really, single cores in 2010 are still the majority... vs CPUs' quad and hex cores? lmfao)

    The GPU industry is a pure JOKE... about as bad as Apple and their ULTRA rigid control of Intel hardware and their goof-ball attempt at a GUI-based OS.

    GPU cards need to be modular like a mobo; we should be able to drop in a single, multi-core GPU as an "upgrade" vs the endless giant PCB chase. All of the major CPU advances in power savings and multi-core load sharing should have LONG been implemented in GPU manufacturing... But that's another 10yrs away according to the METERED profit plan.
  • JackNSally - Tuesday, May 25, 2010

    Single core? Fermi has 480 and 448 "cores". ATI has 1600, 1440 and 1120 (at the top end). They also have dual-die/chip cards. They long ago got past the single-core chip. SLI/Crossfire is like dual Intel or AMD processors.

    I do think they could make bigger jumps in performance from generation to generation, though. I agree with you on that.
  • bobvodka - Tuesday, May 25, 2010

    wait... GPUs are low tech? I'm sorry, what?

    The complexity behind a GPU these days is frankly mind-boggling at times; from the hardware point of view, the stuff behind keeping so many threads in flight is just madness. While they haven't quite got the flexibility of a CPU, at what they do (batch processing of floating-point data) they blow it away.

    The discrete graphics product isn't going to die any time soon; on the hardware end the maths simply doesn't work (memory bandwidth, memory access patterns, heat, power, the ability to scale). Heck, if you want high performance then you are just shifting the heat and power consumption somewhere else.

    The 'future' is divided into two groups going forward:
    - those who will 'make do' with on-chip 'fusion' style GPUs, i.e. those who could live with the current crop of onboard GPUs which come on motherboards (such as AMD's solutions)
    - those who want high resolution graphics at high framerates, but who will also benefit from a 'fusion' GPU/CPU combo as it will allow offloading of tasks to a 'close' FPU array

    So, yeah, your crazy, misinformed and misguided wish isn't going to happen... or if it does it won't be for a long, long time, at least not until we work out the competing memory access patterns and fights over bandwidth which CPUs and GPUs are going to have.
  • Wilberwind - Tuesday, May 25, 2010

    Intel entered the discrete graphics market too late. Larrabee might be a good GPGPU, but it's still impractical for the average consumer if it doesn't have optimized drivers for 3D games and other video software. Sound cards and NICs are a lot simpler than graphics cards. I still don't believe discrete graphics will decline in the next 5 years; users who need discrete graphics today will favor a more powerful graphics card over a moderate integrated solution.
  • dagamer34 - Tuesday, May 25, 2010

    Comparing audio and video demands is a farce at best and mostly disingenuous. Audio hasn't been a major hog of CPU cycles in YEARS, if not decades. And there's no adequate way to compare audio quality without paying through the nose for good speakers (besides, space for audio on a disc is not limitless). Its demands grew linearly.

    Graphics demands, on the other hand, are already outpacing next-gen GPUs like nobody's business. The power of a video card will ALWAYS be a limiting factor. Believing that IGPs will one day be "good enough" is like thinking an IGP today can run Doom 3 flawlessly at 60fps at high settings, and that's a game that came out SIX YEARS AGO!
  • spathotan - Tuesday, May 25, 2010

    This little pet project had vaporware written all over it from the beginning. This should come as no surprise.
  • MFK - Tuesday, May 25, 2010

    When was Larrabee first announced? August 2008?
    Come on Intel, WTF were you doing trying to put out a discrete GPU for this long???
  • Doormat - Tuesday, May 25, 2010

    The rumors I've heard for SB IGP performance have the 6-core IGP performing on par with an NVIDIA 9400M and the 12-core version performing a little worse than the new 320M. Anyone else hear anything?

    It's still not impressive, but the 12-core version is good enough for an IGP, plus Optimus would make it workable.
  • anandtech02148 - Tuesday, May 25, 2010

    Kinda sad to see Intel lose focus, but a company like ARM is marginalizing Intel and that's dangerous. Probably a smart move to focus on mobile devices and try to bring at least a decent product, else we'll get Apple setting ever lower standards for years to come.
  • KalTorak - Tuesday, May 25, 2010

    "We should see a nearly 2x improvement in Intel integrated graphics performance with Sandy Bridge, and greater than 2x improvement once more with Ivy Bridge in 2012."

    Then, talking about increasing transistor budgets, "It will start with Sandy Bridge, but I don’t think we’ll be impressed until 2013."

    Am I reading correctly that you believe Sandy Bridge is 2x of Clarkdale/Arrandale IGP performance, Ivy Bridge is 2x or more of Sandy Bridge, but you won't be impressed till past Ivy Bridge?
