Qualcomm has had an incredible year. It wasn’t too long ago that I was complaining about Qualcomm’s release cadence; the lull between Scorpion and Krait allowed competitors like NVIDIA, Samsung and TI to get a foothold in the market. Since the arrival of Krait, the move to 28nm and the launch of monolithic AP/LTE solutions, no competitor has been able to come close to touching Qualcomm. These days the choice of mobile silicon really boils down to which Snapdragon variant an OEM wants to go with. TI is out of the business, NVIDIA hasn’t seen much traction with Tegra 4 and even Samsung will ship Qualcomm silicon in many of its important markets.
Qualcomm’s Snapdragon 600 was the SoC of choice at the beginning of the year, with Snapdragon 800 taking over that title more recently. Earlier this week, Qualcomm announced the successor to the 800: the Snapdragon 805. 
We expect devices based on the Snapdragon 805 to ship in the first half of 2014, so Snapdragon 800 will still enjoy some time at the top of the charts.
The 805 starts by integrating four Krait 450 cores. Krait 450 appears to be an evolutionary upgrade over Krait 400, with no changes to machine width, cache sizes or pipeline depth. Qualcomm claims to have improved power and thermal efficiency, as well as increased maximum frequency from 2.3GHz to 2.5GHz. I suspect the design is quite similar to Krait 400, perhaps with some bug fixes and other minor tweaks. Qualcomm is likely leveraging yield and 28nm HPM process tech improvements to get the extra 200MHz over Krait 400. Krait 450 also adds 36-bit LPAE (Large Physical Address Extension) support to enable memory configurations above 4GB. This is a similar path to what we saw desktop PCs take years ago, although I'd expect the transition to 64-bit ARMv8 to happen for Qualcomm next year.
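For a sense of scale, here's a quick back-of-the-envelope sketch (not Qualcomm's numbers, just the arithmetic implied by the address widths) of how far 36-bit physical addressing moves the memory ceiling:

```python
# Addressable physical memory for a given physical address width.
# A plain 32-bit physical address space tops out at 4 GiB; 36-bit LPAE
# raises that ceiling to 64 GiB.
def addressable_gib(phys_addr_bits: int) -> int:
    """Return addressable physical memory in GiB for an address width in bits."""
    return 2 ** phys_addr_bits // 2 ** 30

print(addressable_gib(32))  # 4  -- the classic 32-bit / 4GB limit
print(addressable_gib(36))  # 64 -- the 36-bit LPAE ceiling
```

No phone will ship with anywhere near 64GB of DRAM, of course; the practical point is simply clearing the 4GB barrier ahead of the ARMv8 transition.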
The GPU sees the bigger upgrade this round. The Snapdragon 805 features Qualcomm’s Adreno 420, designed 100% in-house at Qualcomm. Adreno 420 brings a D3D11-class feature set to Qualcomm’s mobile graphics, adding support for hull, domain and geometry shaders, and includes dedicated tessellation hardware. Full-profile OpenCL 1.2 is now supported. Texture performance improves by over 2x per pipe, and the GPU gains ASTC texture compression support.
Adreno 420 is more efficient at moving data around internally. The GPU has a new dedicated connection to the memory controller, whereas in previous designs the GPU shared a bus with the video decoder and ISP. 
Qualcomm insists on obscuring details like shader unit counts, so all we have to report today is a 40% increase in shader-bound benchmarks (implying a 40% increase in shader hardware and/or more efficient hardware).
Snapdragon 805 also features hardware accelerated decode of H.265 content. Hardware encode is still limited to H.264, but this is an awesome first for Qualcomm.
The Snapdragon 805 brings a much-improved ISP. Qualcomm claims more than a 50% increase in ISP throughput (1 GPixel/s class) compared to 640 MPixel/s for Snapdragon 800. The 805 leverages its Hexagon DSP to deliver this level of performance. Qualcomm lists no change in DSP architecture between the 805 and 800.
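The "more than 50%" claim checks out against the two throughput figures Qualcomm quotes; a one-liner to verify the arithmetic:

```python
# Sanity-check Qualcomm's ISP claim: 1 GPixel/s (805) vs 640 MPixel/s (800).
s800_mpix_per_s = 640
s805_mpix_per_s = 1000  # "1GPixel/s class"

increase_pct = (s805_mpix_per_s / s800_mpix_per_s - 1) * 100
print(increase_pct)  # 56.25 -- indeed "more than 50%"
```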
Lastly, we see Qualcomm move to a 128-bit wide LPDDR3 memory interface for Snapdragon 805.  With support for LPDDR3-1600, the Snapdragon 805 features up to 25.6GB/s of peak memory bandwidth. It’s interesting to see Qualcomm go this wide just as Apple moved back down to a 64-bit wide interface. Qualcomm and Intel will be the only two shipping such a wide memory interface in the ultra mobile space come next year (although I do expect Apple to return to a wider memory bus at some point).
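The 25.6GB/s figure falls straight out of the interface width and transfer rate. A minimal sketch of the calculation, with a 64-bit interface at the same speed included for comparison:

```python
# Peak memory bandwidth = bus width (in bytes) * transfer rate.
# LPDDR3-1600 performs 1600 million transfers per second.
def peak_bandwidth_gbps(bus_width_bits: int, mega_transfers_per_s: int) -> float:
    """Return peak bandwidth in GB/s for a given bus width and transfer rate."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * mega_transfers_per_s * 1e6 / 1e9

print(peak_bandwidth_gbps(128, 1600))  # 25.6 -- Snapdragon 805
print(peak_bandwidth_gbps(64, 1600))   # 12.8 -- a 64-bit interface like Apple's A7
```

Doubling the interface width is an expensive way to buy bandwidth (more pins, more package cost), which is presumably why it's reserved for the flagship part.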
All of this makes for one beefy SoC, and a continuation of Qualcomm’s leadership in this space. I doubt we’ll see any slowing of Qualcomm’s roadmap after the 805 though. TSMC expects to be shipping 20nm wafers by the end of next year, and I wouldn’t be surprised to find a 20nm successor to the 805 in late ’14/early ’15. Remember that the last process node shift brought us Krait; I wonder what we’ll get this time.

  • Suneater - Monday, November 25, 2013 - link

I'm waiting for any examples of Nvidia lying with their numbers (in Nvidia's favor)...
  • GTX420 - Thursday, December 26, 2013 - link

Given NV is still new to the mobile scene, I think they're doing quite well. Also, using vanilla ARM CPUs will have its drawbacks compared to QC. However, T4 is FASTER than S800 on the CPU side. I think T5 will be awesome; this is probably why QC has to come out with S805 without a modem. How else will they compete with T5's Kepler? I honestly think S805 won't even be able to compete with T5's Kepler. T5 probably still has A15 cores; I hope it's 64-bit, but if it's A15, I'm sure they'll be updated cores with more efficiency and power.
  • fteoath64 - Friday, November 29, 2013 - link

This is provided Nvidia actually DELIVERS Logan in order to get any design wins. The T4 was plagued with delays left and right so badly that it missed the boat, and the Tegra 4i hasn't even shipped in any product to date. That is supposedly just an improved A9 core and newer GPU cores limited to 60. If Logan can deliver and live up to its power/performance gains, then it will clearly win. But until then, the 805 seems likely to reach market sooner than Logan ever will! Besides, Qualcomm can evolve this into several models within the 8xx range to address different price points and power requirements of their target devices. A very clever move, as they have done with Krait from the get-go. No wonder Qualcomm gets all the design wins and yet keeps reasonable supply and potentially good price points to attract many OEMs to build on their chips. The Taiwanese competitors on the mid to low end are catching up fast with different evolutions and very low price points. I would like to see Qualcomm match a few of these with more powerful GPU cores at the expense of CPU cores, just to ensure the target phones/tablets remain competitively priced for the market.
  • sutamatamasu - Friday, November 22, 2013 - link

To AnandTech: please use 'dual-channel memory' instead of '128-bit memory'. After Apple launched the A7 SoC, some Android users who don't realize the A7 is a new ARMv8 chip only know that Apple made a 'wasteful' 64-bit chip. I know this '128-bit' figure is not related to the ARM cores, but please avoid using 'bit' here; many new users still think 128-bit refers to the cores themselves.
  • frostyfiredude - Friday, November 22, 2013 - link

That wouldn't be correct; these SoCs don't use the channel approach that DDR1/2/3 uses. DDR4 is moving to the same 128-bit wide interface rather than a pair of 64-bit channels as well, so these "android noob users" will need to learn the difference either way, as it'll be everywhere come 2014.
  • extide - Friday, November 22, 2013 - link

As mentioned in the other comment, specifying the bit width is the best way to go. It's 128 bits wide; it could be 4x 32-bit, or 2x 64-bit, or whatever. The most correct and clear thing to say is the width of the interface. I kinda wish desktop CPUs would stop with the single/dual/triple/quad channel crap, honestly, although it is pretty well known that desktop memory is 64 bits wide per channel.
  • Suneater - Friday, November 22, 2013 - link

A 40% increase in graphics horsepower, you say? And what about the 300% increase of Tegra 5 (Logan), which is going to be available this winter? This 805 is going to be nothing compared to Nvidia's Tegra 5!
  • A5 - Friday, November 22, 2013 - link

    Yes, Nvidia is well noted for delivering on their marketing BS in the Tegra line.
  • Suneater - Friday, November 22, 2013 - link

What BS are you talking about? The performance of Tegra 4 is essentially the same as that of Snapdragon 800, and Qualcomm was bullshitting us that the 800 would be much faster.
  • guidryp - Friday, November 22, 2013 - link

Funny, when I Google I only see early reports about how Tegra 4 was going to be the fastest SoC, but then I see benchmark tests like this:

Tegra 4 is lagging again, just like Tegra 3, just like Tegra 2, just like Tegra 1.

5th time's the charm? I'll wait and see on that. It could happen, but Nvidia has cried wolf too long for me to ever believe them again.
