The Future: Competition, Secrecy, & the Unexpected

Finally, while Apple developing their own GPU is not unexpected given the company’s interests and resources, the ramifications of that move may very well be. There hasn’t been a new, major GPU vendor in almost a decade – technically Qualcomm’s team would count as the youngest, though it’s a spin-off of what’s now AMD’s Radeon Technologies Group – and in fact, like the overall SoC market itself, the market for GPU vendors has been contracting as costs go up and SoC designers settle around fewer, more powerful GPU suppliers. So for someone as flush with cash as Apple to join the GPU race is a very big deal; just by virtue of starting development of their own GPU, they are now the richest GPU designer.

Of course, once Apple starts shipping their custom GPU, this will also open them up to patent challenges from those other players. While it has largely stayed on the back burner of public attention, this decade has seen a few GPU vendors take SoC vendors to court: NVIDIA sued Samsung and Qualcomm (a case NVIDIA lost), while AMD’s case against LG, MediaTek, Sigma, and Vizio is still ongoing.

GPU development is a lot more competitive than CPU development, due to the fact that developers and compiled programs aren’t tied to a specific architecture – the abstraction of the graphics APIs insulates software from individual architectures. However, it also means that there are a lot of companies developing novel technologies, and all of those companies are moving in the same general direction with their designs. This potentially makes it very difficult to develop an efficient GPU, as the best means of achieving that efficiency have often already been patented.

What exists then is an uneasy balance between GPU vendors, and a whole lot of secrets. AMD and NVIDIA keep each other in check with their significant patent holdings, Intel licenses NVIDIA patents, etc. And on the flip side of the coin, some vendors like Qualcomm simply don’t talk about their GPUs, and while this has never been stated by the company, the running assumption has long been that they don’t want to expose themselves to patent suits. So as the new kid on the block, Apple is walking straight into a potential legal quagmire.

Unfortunately, I suspect this means that we’ll be lucky to get any kind of technical detail out of Apple on how their GPUs work. The company can’t fully hide how their CPUs work due to how program compilation works (which is why we know as much as we do), but the abstraction provided by graphics APIs makes it very easy to hide the inner workings of a GPU and turn it into a black box. Even when we know how something works, features and implementation details can be hidden right under our noses.

Ultimately, today’s press release is a bit bittersweet for everyone involved in the industry. On the one hand, it absolutely puts Imagination, a long-time GPU developer, on the back foot; that’s not to spell doom and gloom, but the company will have to work very hard to make up for losing Apple. On the other hand, with a new competitor in the GPU space – albeit one we’ve been expecting – it’s a sign that things are about to get very interesting. If nothing else, Apple enjoys throwing curveballs, so expect the unexpected.

144 Comments

  • renz496 - Monday, April 3, 2017 - link

    That's only for old patents, right? But if Apple needs to make a competitive modern GPU supporting all the latest features, that's still going to touch much more recent Imagination IP.
  • trane - Monday, April 3, 2017 - link

    > Alternatively, Apple may just be tired of paying Imagination $75M+ a year

    Yeah, that's all there is to it.

    Even with CPUs, they could easily have paid companies like Qualcomm or Nvidia to develop a custom wide CPU for them. Heck, isn't that what Denver is anyway? The first Denver was comfortably beating the Apple A8 at the time. Too bad there's no demand for Tegras anymore; Denver v2 might have been good competition for the A10. Maybe someone could benchmark a car using it...
  • TheinsanegamerN - Monday, April 3, 2017 - link

    Denver could only beat the A8 in software coded for that kind of CPU (VLIW). Any kind of spaghetti code left Denver choking on its own spit, and it was more power hungry to boot.

    A good first attempt, but Nvidia seems to have abandoned it. The fact that Nvidia didn't use Denver in their own tablet communicated that it was a failure in Nvidia's eyes.
  • tipoo - Monday, April 3, 2017 - link

    Parker will use Denver 2, I believe, but paired with stock big ARM cores as well, probably to cover for its weaknesses.
  • fanofanand - Monday, April 3, 2017 - link

    That's what I read as well: they will go 2 + 4, with 2 Denver cores and 4 ARM cores (probably A73), letting the ARM cores handle the spaghetti code and the Denver cores handle the VLIW code.
  • tipoo - Monday, April 3, 2017 - link

    >The first Denver was comfortably beating Apple A8 at the time

    Eh, partial truth at best there. Denver's binary translation architecture worked well for straight, predictable code, but as soon as things got unpredictable it would choke up. So it suffered a fair bit on simple user-facing multitasking, for example, or any benchmark with an element of randomness.

    Denver 2 with a doubled far cache could have been interesting (I guess we'll see), but Denver didn't exactly light the world on fire.
  • dud3r1no - Monday, April 3, 2017 - link

    I'd be curious if this is a sort of power play.
    This announcement has tanked the Imagination stock (down like 60% this morning). Acquire IMG cheap. Get all that IP and block other corps from access at the same time?
  • Ultraman1966 - Monday, April 3, 2017 - link

    Antitrust laws say they can't do that.
  • melgross - Monday, April 3, 2017 - link

    What antitrust laws? Is Apple the biggest GPU manufacturer around?
  • Eden-K121D - Monday, April 3, 2017 - link

    Market manipulation. I think the SEC and FSA won't be pleased.
