Imagination: Patents & Losing an Essential Contract

As for Imagination, the news is undoubtedly grim, but not necessarily fatal. Imagination has never hidden the fact that Apple is their most important customer – going so far as to label the relationship an “Essential Contract” in their annual report – so it’s no secret that if Apple were to leave, it would be painful for Imagination.

By the numbers, Apple’s GPU licensing and royalties accounted for £60.7M in revenue for Imagination’s most recent reporting year, which ran from May 1st, 2015 to April 30th, 2016. The problem for Imagination is that this was fully half of their revenue for that reporting year; the company only booked £120M to begin with. And if you dive into the numbers, Apple accounts for 69% of Imagination’s GPU revenue. Consequently, in being dropped by Apple, Imagination stands to lose the bulk of their GPU revenue roughly two years down the line, once Apple’s current payments wind down.

Imagination Financials: May 1st, 2015 to April 30th, 2016

                        Company Total    GPUs Total    Apple
Revenue (Continuing)    £120M            £87.9M        £60.7M
Operating Income        -£61.5M          £54.7M        N/A
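As a quick sanity check, the revenue shares quoted in the text can be verified directly from the figures in the table (a back-of-the-envelope calculation using only the reported numbers):

```python
# Figures from Imagination's FY2015/16 results, as reported above (in £M).
total_revenue = 120.0   # company-wide continuing revenue
gpu_revenue   = 87.9    # PowerVR (GPU) division revenue
apple_revenue = 60.7    # licensing + royalties attributed to Apple

apple_share_of_company = apple_revenue / total_revenue  # "fully half"
apple_share_of_gpu     = apple_revenue / gpu_revenue    # "69% of GPU revenue"

print(f"Apple share of company revenue: {apple_share_of_company:.1%}")
print(f"Apple share of GPU revenue:     {apple_share_of_gpu:.1%}")
```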

The double-whammy for Imagination is that as an IP licensor, the incremental cost to the company of serving a single customer is virtually nil. Imagination has to engage in R&D and develop their GPU architecture and designs regardless, so revenue from each additional customer is nearly pure profit. But by the same token, losing a customer means the lost revenue comes almost entirely out of those same profits. For the 2015/2016 reporting year, Apple’s royalty & licensing payments to Imagination were greater than the profits their PowerVR GPU division generated for the year. Apple is just that large of a customer.
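The arithmetic behind that double-whammy is simple to sketch. Assuming – and this simplification is ours, not Imagination’s – that the GPU division’s cost base stays fixed when a customer leaves, removing Apple’s £60.7M from the division’s books flips its £54.7M operating income negative:

```python
# Simplified model of an IP licensor's P&L: costs are (nearly) fixed,
# so each customer's revenue drops straight through to operating income.
# Figures (£M) come from the FY2015/16 table above; the fixed-cost
# assumption is an illustrative simplification.
gpu_revenue   = 87.9
gpu_op_income = 54.7
gpu_costs     = gpu_revenue - gpu_op_income        # ~£33.2M implied cost base

apple_revenue = 60.7
income_without_apple = (gpu_revenue - apple_revenue) - gpu_costs
print(f"GPU operating income without Apple: £{income_without_apple:.1f}M")
# Under these assumptions, the division swings from +£54.7M to a ~£6M loss.
```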

As a result, losing such a large source of revenue places Imagination in a perilous position. The good news for the company is that their fortunes appear to be improving – if slowly – and they have been picking up more business from other SoC vendors. The problem is that they’ll need a drastic uptick in customers by the time Apple’s payments end just to pay the bills, never mind turn a profit. Growing their business alone may not be enough.

Which is why Imagination’s press release, and the strategy it outlines, is so important. The purpose of the release isn’t to tell the world that Apple is developing a new GPU, but to lay out to investors and others how the company intends to proceed. And that path rests on continued negotiations with Apple to secure a lesser revenue stream.

The crux of Imagination’s argument is that it’s impractical for Apple to develop a completely clean-room GPU devoid of any of Imagination’s IP, for a few reasons. The most obvious is that Apple’s engineers already know how Imagination’s GPUs work. Even though Apple wouldn’t be developing a bit-for-bit compatible GPU – thankfully for Apple, the code app developers write for GPUs operates at a higher level and generally isn’t tied to Imagination’s architecture – those engineers hold confidential information about those GPUs that they may carry forward. Meanwhile, on the more practical side of matters, Imagination holds a significant number of GPU patents (they’ve been at this for over 20 years), so developing a GPU that doesn’t infringe on those patents would be difficult, especially in the mobile space. Apple couldn’t implement Imagination’s Tile Based Deferred Rendering technique, for example, which has been the heart and soul of the company’s GPU designs.
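To give a feel for what makes that technique distinctive, here is a purely illustrative sketch of the “tile-based” half of the idea: geometry is first binned into small screen tiles so that each tile can later be rasterized entirely in fast on-chip memory. (The “deferred” hidden-surface-removal step is omitted, and this toy binning is our simplification, not Imagination’s actual design.)

```python
# Toy tile binning: assign each triangle, via its screen-space bounding
# box, to every 32x32-pixel tile it may overlap. A tile-based GPU then
# processes one tile at a time out of on-chip memory, deferring shading
# until per-tile visibility is resolved.
TILE = 32

def bin_triangles(triangles, width, height):
    """triangles: list of ((x0,y0),(x1,y1),(x2,y2)) in pixel coordinates."""
    bins = {}  # (tile_x, tile_y) -> list of triangle indices
    for i, tri in enumerate(triangles):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        # Conservative bound: every tile touched by the bounding box.
        for ty in range(max(0, min(ys)) // TILE, min(height - 1, max(ys)) // TILE + 1):
            for tx in range(max(0, min(xs)) // TILE, min(width - 1, max(xs)) // TILE + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

# One small triangle in the top-left tile, one spanning several tiles.
tris = [((2, 2), (10, 2), (2, 10)), ((0, 0), (100, 0), (0, 100))]
bins = bin_triangles(tris, 128, 128)
print(sorted(bins))
```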

However, regardless of the architecture used and how it’s designed, the more immediate problem for Apple – and the reason that Imagination is likely right, to an extent – is replicating all of the features available in Imagination’s GPUs. Because Apple’s SoCs have always used GPUs from the same vendor, certain vendor-specific features like PowerVR Texture Compression (PVRTC) are widely used in iOS app development, and Apple has long recommended that developers use that format. For their part, Apple is already in the process of digging themselves out of that hole by adding support for the open ASTC format to their texture compression tools, but the problem remains of what to do with existing apps and games. If Apple wants to ensure backwards compatibility, then they need to support PVRTC in some fashion (even if it’s just converting the textures ahead of time). And this still doesn’t account for any other Imagination-patented features that have become canonized into iOS over time.
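A hedged sketch of what that ahead-of-time fallback could look like: an asset pipeline that emits ASTC where the target GPU supports it and keeps PVRTC only for older hardware. The function, the capability table, and the device names are all hypothetical illustrations – Apple’s actual tooling and GPU capabilities are not public:

```python
# Hypothetical asset-pipeline helper: pick a texture format per target GPU.
# The capability table below is illustrative, not a real device database.
GPU_FORMATS = {
    "imagination_series6": {"PVRTC", "ASTC"},  # newer PowerVR: supports both
    "imagination_sgx":     {"PVRTC"},          # older PowerVR: PVRTC only
    "apple_custom_gpu":    {"ASTC"},           # assumed: no PVRTC license
}

def choose_texture_format(gpu, preferred=("ASTC", "PVRTC")):
    """Return the first preferred format the target GPU supports.

    A real pipeline would also transcode legacy PVRTC assets ahead of
    time for targets that only support ASTC, preserving compatibility
    with existing apps and games.
    """
    supported = GPU_FORMATS[gpu]
    for fmt in preferred:
        if fmt in supported:
            return fmt
    raise ValueError(f"no supported texture format for {gpu}")

print(choose_texture_format("apple_custom_gpu"))  # ASTC
print(choose_texture_format("imagination_sgx"))   # PVRTC
```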

Consequently, Imagination’s best move is to get Apple to agree to patent indemnification or some other form of licensing for their new GPU. For Apple it would ensure that nothing they do violates an Imagination patent, and for Imagination it would secure at least a limited revenue stream from Apple. Otherwise Imagination would be in a very tight spot, and Apple would face the risk of patent lawsuits (though Imagination isn’t making transparent threats, at least not yet).




  • lilmoe - Monday, April 3, 2017 - link

    In the short term? No. Eventually? Highly possible. Nothing's stopping them.
  • psychobriggsy - Monday, April 3, 2017 - link

    RISC-V would be a potential free-to-license ISA that has had a lot of thought put into it.

    But maybe for now ARM is worth the license costs for Apple.
  • vFunct - Monday, April 3, 2017 - link

    Thing is, Arm is already Apple originated, being funded by Apple for their Newton.

    But, given the rumors of Apple buying Toshiba's NAND flash fabs, it seems more likely that Apple is going all in on in-house manufacturing and development of everything, including ISA and fabs.
  • vladx - Monday, April 3, 2017 - link

    Apple owning their own fabs? Seriously doubt it, the investment is not worth it for just in-house manufacturing.
  • Lolimaster - Monday, April 3, 2017 - link

    And if your sales kind of plummet, the fab costs will make you sink.
  • FunBunny2 - Monday, April 3, 2017 - link

    -- That's a moderately large undertaking.

    that's kind of an understatement. the logic of the ALU, for instance, has been known for decades. ain't no one suggested an alternative. back in the good old days of IBM and the Seven Dwarves, there were different architectures (if one counts the RCA un-licenced 360 clone as "different") which amounted to stack vs. register vs. direct memory. not to mention all of the various mini designs from the Departed. logic is a universal thing, like maths: eventually, there's only one best way to do X. thus, the evil of patents on ideas.
  • Alexvrb - Monday, April 3, 2017 - link

    The underlying design and the ISA don't have to be tightly coupled. Look at modern x86, they don't look much like oldschool CISC designs. If they're using a completely in-house design, there's no reason they couldn't start transitioning to MIPS64 or whatever at some point.

    Anyway I'm sad to see Apple transitioning away from PowerVR designs. That was the main reason their GPUs were always good. Now there might not be a high-volume product with a Furian GPU. :(
  • FunBunny2 - Tuesday, April 4, 2017 - link

    -- Look at modern x86, they don't look much like oldschool CISC designs.

    don't conflate the RISC-on-the-hardware implementation with the ISA. except for 64 bit and some very CISC extended instructions, current Intel cpu isn't RISC or anything else but CISC to the C-level coder.
  • willis936 - Wednesday, April 5, 2017 - link

    "Let's talk about the hardware. Now ignore the hardware."
  • name99 - Monday, April 3, 2017 - link

    I think it's perhaps too soon to analyze THAT possibility (apple-specific ISA). Before that, we need to see how the GPU plays out. Specifically:

    The various childish arguments being put forth about this are obviously a waste of time. This is not about Apple saving 30c per chip, and it's not about some ridiculous Apple plot to do something nefarious. What this IS about, is the same thing as the A4 and A5, then the custom cores --- not exactly *control* so much as Apple having a certain vision and desire for where they want to go, and a willingness to pay for that, whereas their partners are unwilling to be that ambitious.

    So what would ambition in the space of GPUs look like? A number of (not necessarily incompatible) possibilities spring to mind. One possibility is much tighter integration between the CPU and the GPU. Obviously computation can be shipped from the CPU to the GPU today, but it's slower than it should be because of getting the OS involved, having to copy data a long distance (even if HSA provides a common memory map and coherency). A model of the GPU as something like a sea of small, latency tolerant, AArch64 cores (ie the Larrabee model) is an interesting option. Obviously Intel could not make that work, but does that mean that the model is bad, that Intel is incompetent, that OpenGL (but not Metal) was a horrible target, that back then transistors weren't yet small enough?

    With such a model Apple starts to play in a very different space, essentially offering not a CPU and a GPU but latency cores (the "CPU" cores, high power and low power) and throughput cores (the sea of small cores). This sort of model allows for moving code from one type of core to another as rapidly as code moves from one CPU to another on a multi-core SoC. It also allows for the latency tolerant core to perhaps be more general purpose than current GPUs, and so able to act as more generic "accelerators" (neuro, crypto, compression --- though perhaps dedicated HW remains a better choice for those?)

    Point is, by seeing how Apple structure their GPU, we get a feeling for how large scale their ambitions are. Because if their goal is just to create a really good "standard" OoO CPU, plus standard GPU, then AArch64 is really about as good as it gets. I see absolutely nothing in RISC-V (or any other competitor) that justifies a switch.

    But suppose they are willing to go way beyond a "standard" ISA? Possibilities could be VLIW done right (different model for the split between compiler and HW as to who tracks which dependencies) or use of relative rather than absolute register IDs (ie something like the Mill's "belt" concept). In THAT case a new instruction set would obviously be necessary.

    I guess we don't need to start thinking about this until Apple makes bitcode submission mandatory for all App store submissions --- and we're not even yet at banning 32-bit code completely, so that'll be a few years. Before then, just how radical Apple are in their GPU design (ie apparently standard GPU vs sea of latency tolerant AArch-64-lite cores) will tell us something about how radical their longterm plans are.

    And remember always, of course, this is NOT just about phones. Don't you think Apple desktop is as pissed off with the slow pace and lack of innovation of Intel? Don't you think their data-center guys are well aware of all that experimentation inside Google and MS with FPGAs and alternative cores and are designing their own optimized SoCs? At least one reason to bypass IMG is if IMG's architecture maxes out at a kick-ass iPad, whereas Apple wants an on-SoC GPU that, for some of their chips at least, is appropriate to a new ARM-based iMac 5K and Mac Pro.
