The Future: Competition, Secrecy, & the Unexpected

Finally, while Apple developing their own GPU is not unexpected given their interests and resources, the ramifications of it may very well be. There hasn’t been a new, major GPU vendor in almost a decade – technically Qualcomm’s team would count as the youngest, though it’s a spin-off of what’s now AMD’s Radeon Technologies Group – and in fact like the overall SoC market itself, the market for GPU vendors has been contracting as costs go up and SoC designers settle around fewer, more powerful GPU vendors. So for someone as flush with cash as Apple to join the GPU race is a very big deal; just by virtue of starting development of their own GPU, they are now the richest GPU designer.

Of course, once they start shipping their custom GPU, this will also open them up to patent challenges from those other players. While it has largely stayed on the backburner of public attention, this decade has seen a few GPU vendors take SoC vendors to court: NVIDIA sued Samsung and Qualcomm (a case NVIDIA lost), and AMD's case against LG/MediaTek/Sigma/Vizio is still ongoing.

GPU development is a lot more competitive because developers and compiled programs aren't tied to a specific architecture – the abstraction of the APIs insulates them from individual architectures. However, it also means that there are a lot of companies developing novel technologies, and all of those companies are moving in the same general direction with their designs. This potentially makes it very difficult to develop an efficient GPU, as the best means of achieving that efficiency have often already been patented.
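To make that abstraction concrete, here is a minimal sketch using OpenGL ES 2.0 as one example of such an API (the shader source and function name are illustrative, not tied to any particular vendor): the application hands the driver shader source at runtime, and the vendor's compiler targets whatever instruction set its GPU actually uses, so the same application binary runs unmodified on PowerVR, Adreno, Mali, and so on.

/*
 * Illustrative sketch: the app ships shader *source*; the GPU vendor's
 * driver compiles it to the native instruction set at run time. The
 * application binary itself never contains GPU machine code.
 */
#include <GLES2/gl2.h>
#include <stdio.h>

static const char *frag_src =
    "precision mediump float;\n"
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint compile_fragment_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &frag_src, NULL); /* hand source to the driver */
    glCompileShader(shader);                    /* driver targets its own ISA */

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}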

What exists then is an uneasy balance between GPU vendors, and a whole lot of secrets. AMD and NVIDIA keep each other in check with their significant patent holdings, Intel licenses NVIDIA patents, etc. And on the flip side of the coin, some vendors like Qualcomm simply don’t talk about their GPUs, and while this has never been stated by the company, the running assumption has long been that they don’t want to expose themselves to patent suits. So as the new kid on the block, Apple is walking straight into a potential legal quagmire.

Unfortunately, I suspect this means that we'll be lucky to get any kind of technical details out of Apple on how their GPUs work. They can't fully hide how their CPUs work due to how program compilation works (which is why we know as much as we do), but the abstraction provided by graphics APIs makes it very easy to hide the inner workings of a GPU and turn it into a black box. Even when we know how something works, features and implementation details can be hidden right under our noses.

Ultimately, today's press release is a bit bittersweet for all involved in the industry. On the one hand, it absolutely puts Imagination, a long-time GPU developer, on the back foot. That's not to spell doom and gloom, but the company will have to work very hard to make up for losing Apple. On the other hand, with a new competitor in the GPU space – albeit one we've been expecting – it's a sign that things are about to get very interesting. If nothing else, Apple enjoys throwing curveballs, so expect the unexpected.


144 Comments

  • Meteor2 - Wednesday, April 5, 2017

    Not many people 'work' on a phone -- now. But with Continuum and Dex, that number is going to rise.
  • raptormissle - Monday, April 3, 2017

    You forgot the second part, where Android SoCs shame Apple's SoCs in multi-core performance. If you're going to selectively bring up single-core, then you should have also mentioned multi-core.
  • ddriver - Tuesday, April 4, 2017

    Yeah, and if only phones were all about single-threaded performance, or even performance in general. I still run an ancient Note 3, and despite being much slower than current flagship devices, it is still perfectly usable for me, and I do more things with it than you do on a desktop PC.
  • cocochanel - Tuesday, April 4, 2017

    You're looking at it from the wrong angle. The numbers speak for themselves, and it all comes down to how much a company can spend on R&D. Plus how important components like GPUs have become to computing in general.
    With such small annual revenue, how much can IMG spend on R&D? 10 million? 20?
    Apple can easily spend 10-20 times that amount and not even feel a scratch. All things being equal, how much you put into something is how much you get out. You want a top-of-the-line product? Well, it's going to cost you. If Apple is to stay at the top, their GPUs need to be on the same level as their CPUs.
    Plus, GPUs these days are used for all kinds of things other than graphics. Look how lucrative the automotive business is for Nvidia, not to mention GPU-based servers.
    As for litigation and patents, gimme a break. What, just because a company bought a GPU from another for a long time, now they are supposed to do it forever? When Apple started to buy them from IMG 10-15 years ago, it made sense at the time. Now the market is different and so are the needs. Time to move on.
    When a company doesn't spend much on R&D, they get accused of complacency. When they want to spend a lot, now they get threatened with lawsuits. How is that for hypocrisy?
  • Meteor2 - Wednesday, April 5, 2017

    Possibly. But there's only so many ways to do something, especially if you want to do it well. Imagination have lots of patents, as the article explains. I expect to see a lot of lower-level IP being licensed.
  • BedfordTim - Monday, April 3, 2017

    I suspect CAD and video production programmers would beg to differ. My experience in image processing certainly contradicts your assertion.
    Apple has also apparently abandoned the prosumer market, so as long as it can run the Starbucks app and PowerPoint, most of their customers will be happy.
  • prisonerX - Monday, April 3, 2017

    You don't have a clue what you're talking about. The CPU does most of the work because it's the best solution. Specialized hardware only makes sense for very static requirements and very high performance demands. GPUs exist only because graphics primitives don't change much and they are very compute intensive. Also throw in that they are easily parallelized.

    I have no idea of the point you're trying to make, and neither do you.
  • ddriver - Monday, April 3, 2017

    There hasn't been a single ARM chip suitable for HPC. The A10 has decent scalar performance, making it a good enough solution for casual computing. But for rendering, processing, encoding and pretty much all intensive number crunching its performance is abysmal.

    That being said, there is nothing preventing ARM vendors from extending the throughput of their SIMD units. NEON is still stuck at 128-bit but could easily be expanded to match what we have in x86 – 256 and 512 bits. But then again, transistor count and power usage would rise proportionally, so it would not really have an advantage over x86 efficiency-wise.
  • lilmoe - Monday, April 3, 2017

    Apple doesn't seem to be interested in anything outside of the scope of consumer/prosumer computing. Latest MacBooks, anyone?

    "But for rendering, processing, encoding and pretty much all intensive number crunching its performance is abysmal."

    Rendering, encoding, etc. can be offloaded to better/faster dedicated co-processors that run circles around the best core design from Intel or AMD.

    The very fact that Apple are designing their own GPUs now supports my argument that they want to build more functionality into those GPUs beyond the current GPGPU paradigm.
  • psychobriggsy - Monday, April 3, 2017

    ARM offer SVE (IIRC) that allows 512-2048-bit wide SIMD for HPC ARM designs.

    It has been suggested that Apple's GPU may in fact be more Larrabee-like, but using SVE with Apple's small ARM cores.
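To put some numbers on the SIMD width discussion in the two comments above: NEON operates on fixed 128-bit registers, while SVE is vector-length agnostic, with implementations allowed anywhere from 128 to 2048 bits in 128-bit steps. Below is a hypothetical saxpy (y = a*x + y) kernel sketched both ways in C intrinsics; the function names are illustrative and nothing here reflects Apple's, ARM's, or anyone else's actual designs.

#include <stddef.h>
#include <stdint.h>
#include <arm_neon.h>

/* Fixed-width NEON: every iteration processes exactly 4 floats (128 bits). */
void saxpy_neon(float a, const float *x, float *y, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t vx = vld1q_f32(x + i);
        float32x4_t vy = vld1q_f32(y + i);
        vy = vmlaq_n_f32(vy, vx, a);    /* y += a * x */
        vst1q_f32(y + i, vy);
    }
    for (; i < n; i++)                  /* scalar tail for leftovers */
        y[i] += a * x[i];
}

#if defined(__ARM_FEATURE_SVE)
#include <arm_sve.h>

/* Vector-length-agnostic SVE: the same binary uses whatever width the
 * hardware implements (svcntw() = floats per vector, anywhere from 4 to 64). */
void saxpy_sve(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i += svcntw()) {
        svbool_t    pg = svwhilelt_b32((uint64_t)i, (uint64_t)n); /* predicate masks the tail */
        svfloat32_t vx = svld1_f32(pg, x + i);
        svfloat32_t vy = svld1_f32(pg, y + i);
        vy = svmla_n_f32_x(pg, vy, vx, a);      /* y += a * x */
        svst1_f32(pg, y + i, vy);
    }
}
#endif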
