Imagination: Patents & Losing an Essential Contract

As for Imagination, the news is undoubtedly grim, but not necessarily fatal. Imagination has never hidden the fact that Apple is their most important customer – even labeling them as an “Essential Contract” in their annual report – so it’s no secret that if Apple were to leave Imagination, it would be painful.

By the numbers, Apple’s GPU licensing and royalties accounted for £60.7M in revenue for Imagination’s most recent reporting year, which ran from May 1st, 2015 to April 30th, 2016. The problem for Imagination is that this was fully half of their revenue for that reporting year; the company only booked £120M to begin with. Dive deeper into the numbers and Apple accounts for 69% of Imagination’s GPU revenue (£60.7M of £87.9M). Consequently, in being dropped by Apple, Imagination stands to lose the bulk of their GPU revenue starting roughly two years down the line, once Apple’s current payments wind down.
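Those shares are simple to verify from the reported figures (a quick sketch; the £ amounts are the ones cited above):

```python
# Revenue shares from Imagination's FY2015/16 results (GBP millions).
total_revenue = 120.0   # continuing operations, company-wide
gpu_revenue = 87.9      # PowerVR GPU division
apple_revenue = 60.7    # Apple licensing + royalties

apple_share_of_total = apple_revenue / total_revenue  # ~0.506
apple_share_of_gpu = apple_revenue / gpu_revenue      # ~0.691

print(f"Apple share of total revenue: {apple_share_of_total:.0%}")  # 51%
print(f"Apple share of GPU revenue:   {apple_share_of_gpu:.0%}")    # 69%
```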

Imagination Financials: May 1st, 2015 to April 30th, 2016

                       Company Total   GPUs Total   Apple
Revenue (Continuing)   £120M           £87.9M       £60.7M
Operating Income       -£61.5M         £54.7M       N/A

The double-whammy for Imagination is that as an IP licensor, the incremental cost of serving any single customer is virtually nil. Imagination has to fund R&D and develop their GPU architectures and designs regardless, so each additional customer is almost pure profit. But by the same token, losing a customer means the lost revenue comes almost entirely out of profit. For the 2015/2016 reporting year, Apple’s royalty & licensing payments to Imagination were greater than the profits their PowerVR GPU division generated for the year. Apple is just that large of a customer.

As a result, losing such a large source of revenue puts Imagination in a perilous position. The good news for the company is that their fortunes appear to be improving – if slowly – and they have been picking up more business from other SoC vendors. The problem is that they’ll need a drastic uptick in customers by the time Apple’s payments end just to pay the bills, never mind turn a profit. Growing their business alone may not be enough.

Which is why Imagination’s press release and the strategy it outlines are so important. The purpose of the release isn’t to tell the world that Apple is developing a new GPU, but to lay out to investors and others how the company intends to proceed. And that path rests on continued negotiations with Apple to secure a smaller, ongoing revenue stream.

The crux of Imagination’s argument is that it’s impractical for Apple to develop a completely clean GPU devoid of any of Imagination’s IP, and this is for a few reasons. The most obvious is that Apple already knows how Imagination’s GPUs work. Even though Apple wouldn’t be developing a bit-for-bit compatible GPU – fortunately for Apple, the code app developers write for GPUs operates at a higher level and generally isn’t tied to Imagination’s architecture – Apple’s engineers have confidential information about those GPUs that they may carry forward. Meanwhile, on the more practical side of matters, Imagination holds a significant number of GPU patents (they’ve been at this for over 20 years), so developing a GPU that doesn’t infringe on those patents would be difficult, especially in the mobile space. Apple couldn’t implement Imagination’s Tile Based Deferred Rendering technique, for example, which has been the heart and soul of Imagination’s GPU designs.

However, regardless of the architecture used and how it’s designed, the more immediate problem for Apple – and the reason that Imagination is likely right, to an extent – is replicating all of the features available in Imagination’s GPUs. Because Apple’s SoCs have always used GPUs from the same vendor, certain vendor-specific features like PowerVR Texture Compression (PVRTC) are widely used in iOS app development, and Apple has long recommended that developers use the format. For their part, Apple is already digging themselves out of that hole by adding support for the open ASTC format to their texture compression tools, but that still leaves the question of what to do with existing apps and games. If Apple wants to ensure backwards compatibility, then they need to support PVRTC in some fashion (even if it’s just converting the textures ahead of time). And this still doesn’t account for any other Imagination-patented features that have become canonized into iOS over time.
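For a sense of the storage tradeoff between these formats, their published bit rates tell the story: PVRTC encodes at a fixed 2 or 4 bits per pixel, while ASTC stores every block in 128 bits, so its bit rate depends on the chosen block dimensions. A generic sketch of the arithmetic (not tied to Apple’s actual tooling):

```python
# Storage footprint of a 1024x1024 texture under different formats.
# PVRTC: fixed 2bpp or 4bpp modes.
# ASTC: 128-bit (16-byte) blocks; bpp = 128 / (block_w * block_h).
width = height = 1024
pixels = width * height

rgba8888 = pixels * 4                          # 32bpp uncompressed
pvrtc_4bpp = pixels * 4 // 8                   # 4 bits per pixel
astc_4x4 = (width // 4) * (height // 4) * 16   # 8bpp equivalent
astc_8x8 = (width // 8) * (height // 8) * 16   # 2bpp equivalent

for name, size in [("RGBA8888", rgba8888), ("PVRTC 4bpp", pvrtc_4bpp),
                   ("ASTC 4x4", astc_4x4), ("ASTC 8x8", astc_8x8)]:
    print(f"{name:10s} {size // 1024:5d} KiB")
```

At the 8x8 block size, ASTC actually undercuts PVRTC’s 4bpp mode, which is part of why it makes a credible vendor-neutral replacement.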

Consequently, Imagination’s best move is to get Apple to agree to patent indemnification or some other form of licensing covering the new GPU. For Apple it would ensure that nothing they do violates an Imagination patent, and for Imagination it would secure at least a limited revenue stream from Apple. Otherwise Imagination would be in a very tight spot, and Apple would face the risk of patent lawsuits (though Imagination isn’t making overt threats, at least not yet).



Comments

  • Meteor2 - Tuesday, April 4, 2017 - link

    What name99 said. Which is awfully like what Qualcomm is doing, isn't it? A bunch of conceptually-different processor designs in one 'platform'. Software uses whichever is most appropriate.
  • peevee - Tuesday, April 18, 2017 - link

    It is certainly easier to design your own ISA than to build your own core for somebody else's ISA. And ARM64 is FAR from perfect. So 1980s.
  • quadrivial - Monday, April 3, 2017 - link

    Very unlikely. They gave up that chance a couple of years ago (ironically, to Imagination).

    Consider, it takes 4-5 years from initial architecture design to final shipment. No company is immune to this timeframe no matter how large. Even more time is required for new ISAs because there are new, unexpected edge cases that occur.

    Consider, ARM took about 4 years to ship from the time the ISA was announced. Most of their licensees took closer to 5 years. Apple took a bit less than 2 years.

    Consider, Apple was a front-runner to buy MIPS so they could have their own ISA, but they backed out instead. The new ARM ISA is quite similar to MIPS64.

    My thought: Apple started designing a uarch that could work well with either MIPS or their new ARMv8. A couple of years in (about the time nailing down the architecture would start to become unavoidable), they showed ARM a proposal for a new ISA and recommended ARM adopt it, or else they would buy MIPS and switch. ARM announced a new ISA and immediately had teams start working on it, but Apple had a couple-year head start. Apple won big because they shipped an extremely fast CPU a full two years before their competitors, and it took their competitors even more years to catch up.

    Maybe imperfect, but it's the best explanation I can come up with for how events occurred.
  • TheMysteryMan11 - Monday, April 3, 2017 - link

    Computing still heavily relies on the CPU for all the things that matter to power users. ARM is a long way away from being powerful enough to actually be useful for power users and creators.
    It is good enough for consumption, and getting better.

    But then again, Apple hasn't been doing well catering to creators anyway. Still no refresh for the Mac Pro. So you might be right. But that means Apple is OK with ignoring that segment, which they probably are.
  • lilmoe - Monday, April 3, 2017 - link

    Single-purpose equipment isn't mainly CPU dependent. This is my point. Relying on the CPU for general-purpose functionality is inherently the least efficient approach, especially for consumer workloads.

    Outside the consumer market, applications such as engineering and video production software are still very CPU dependent because the software isn't written efficiently. It's written that way for the sole purpose of supporting the widest range of currently available hardware. I'd argue that if a CAD program were re-written from the ground up to be hardware dependent and GPU accelerated ONLY, then it would run faster and more fluidly on an iPad than on a Core i7 with integrated graphics, if the storage speed were the same.

    This leaves only niche applications that are inherently dependent on a CPU and can't be offloaded to hardware accelerators. With more work on efficient multi-threaded coding, Apple's own CPU cores, in a quad/octa configuration, can arguably suffice. Single-threaded performance is also arguably good enough, even on A72/A73 cores.

    Again, this conversation is about consumer/prosumer workloads. It's evident that Apple isn't interested in Server/corporate workloads.

    This has been Apple's vision since inception. They want to do everything in-house as a single package for a single purpose. They couldn't in the past, and almost went bankrupt, because they weren't _big enough_. This isn't the case now.

    The future doesn't look very bright for the personal computing industry as we know it. There has been talk and rumors that Samsung is taking a VERY similar approach. Rumors started circulating 2 years ago that they were also building their own in-house GPU, and are clashing with Nvidia and AMD graphics IP in the process. It also led Nvidia to sue Samsung for reasons only known behind the scenes.
  • ddriver - Monday, April 3, 2017 - link

    Yeah let's make the x chip that can only do one n task. And while we are at it, why not scrap all those cars you can drive anywhere there is an open road, and make cars which are best suited for one purpose. You need a special car to do groceries, a special car to go to work, a bunch of different special cars when you go on vacation, depending on what kind of vacation it is.

    Implying prosumer software isn't properly written is laughable, and a clear indication you don't have a clue. That's the only kind of software that scales well with cores, and can scale linearly to as many threads as you have available.

    But I'd have to agree that crapple doesn't really care about making usable hardware, their thing is next-to-useless toys, because it doesn't matter how capable your hardware is, what matters is how much of that lacking and desperately needed self esteem you get from buying their branded, overpriced toy.

    Back in the day apple did good products and struggled to make profit, until genius Jobs realized how profitable it can be to exploit dummies and make them even dumber, giving rise to crapple, and the shining beacon of an example that the rest of the industry is taking, effectively ruining technology and reducing it to a fraction of its potential.
  • lilmoe - Monday, April 3, 2017 - link

    Chill bro.
    I said current software is written to support the most amount of hardware combinations possible. And yes, that's NOT the most efficient way to write software, but it _is_ the most accessible for consumers.

    I wasn't implying that one way is better than the other. But it's also true that a single $200 GPU runs circles around a $1500 10-core Intel CPU in rendering.
  • steven75 - Monday, April 3, 2017 - link

    How amazing that an "overpriced toy" still shames all Android manufacturers in single-threaded performance. The brand new S8 (with a price increase, no less) can't even beat the nearly 2-year-old iPhone 6S.

    I wish all "toys" were superior like that!
  • fanofanand - Monday, April 3, 2017 - link

    How many people are working (like actual productivity) on an S8? Cell phones are toys 99% of the time.
  • FunBunny2 - Monday, April 3, 2017 - link

    -- How many people are working (like actual productivity) on an S8? Cell phones are toys 99% of the time.

    I guess Mao was right.
