They really need to get more RTX 3080 FE cards into the hands of mere mortals. I started unsubscribing from the YouTubers showing off their RTX cards. It's really NOT in my interest to pay them any attention whatsoever. It will just translate into them getting more cards at the next launch and even fewer being left for paying customers.
That's dumb. It's not false scarcity; the more the card is advertised (i.e., shown off on YouTube), the higher demand potentially is. So a more correct way to say it is: do not create scarcity by driving up demand. In other words, by not watching those YouTube channels, you don't give them any profit motive to drive up demand, and you don't become enamored of wanting one yourself.
If no one wants one, demand falls and scarcity falls.
None of the manufacturers share their numbers; online sources are market research firms like JPR, or retailers like Mindfactory sharing their own sales.
Hell, if you believe they're lying and artificially creating scarcity, why would you believe any numbers they give you anyway? They could say they sold a million, and then at their earnings call list it as a million allocated to vendors/AIBs in Q4. Either way, what you're asking for makes no sense given your theory.
Creating a paper launch with an artificial price target on a card a consumer can't buy would support my theory, as would the number of FE cards produced.
I'm saying that if you think they're lying to you already, how exactly would another number prove anything? There's no way for them to prove anything to you, since you've already assumed they're liars. From reading through your conspiracy theory, it's pretty obvious you got it from the same YouTuber everyone got this dumb conspiracy from. I guess that's not surprising, considering RTG enables this behavior by retweeting that garbage.
Paper launch is a desperate gamble when you don't know what your competitor is getting ready to release. If Nvidia is trying to do a paper launch and AMD hits a home run with Radeon they will burn bridges with their customers and shift demand to their competitor.
I don't buy the paper launch theory. I think their supply chain is compromised, either by COVID impacts or by yield issues.
Why do you think the FE numbers matter? NVIDIA isn't an OEM and doesn't book the revenue separately. If their revenue were significantly impacted by the FE cards I imagine they would be making a much bigger deal out of them.
Essentially, the FE cards are going to be out first, sold out first, and be a minuscule part of the market, especially if the partners are charging $100 to $200 more.
As for the real answer, I imagine it's because there is no need to publish information that could be used competitively against them.
I thought that was a point of FE cards? Limited numbers for the fanboys, everybody else waits for partner cards to see which offer the best balance of features/performance/price.
Very high-demand market, brand-new product, high-mobility product channels. That means it typically takes time to ramp up production to hit demand targets, and with COVID messing up so many metrics this year it's hard to get a good number. It also requires foundries to produce 'x' number of chips when nobody knows exactly how many will be needed. If yields aren't good, you'll be waiting, and I haven't found any info on Ampere's yield rates.
Any time I see a 'new product' launch, I expect it to take 1-3 months after launch before it's available, at least here in Canada. You need to adjust your expectations ;)
The FE cards this go-round are loss leaders designed to pump up initial review scores. The cooling solution is too expensive for the MSRP (i.e., NV is making razor thin margins on them), and the AIBs likely aren't allowed to use the FE cooling design at any price.
They are "beating" the FE, but with plenty of conceits. They beat it by designing larger-volume cards with vertical fin stacks, dumping 320W(!) worth of heat almost exclusively inside the case. And none can match MSRP, with Nvidia footing the bill for early adopters before Oct 15th (this has been disclosed to reviewers, but curiously the YouTubers don't like talking about it). Getting the FE at $699 was the best deal you could have made this launch. It might continue to be, depending on whether Nvidia plans to keep restocking once a week going forward.
That's just silly. I see nothing different here from other paper launches of NV's new cards, and you can bet they're making decent money off it all. Slim profit margins? That's a joke. The only time NV ever slashes prices is when AMD is beating them. Anyway, it's very similar to the 600 series launch, where they released the Titan(-ish) cooler for the first time and people were actually interested in getting the NV offering over partner cards.
I think 320W is telling that NVIDIA is pushing Ampere too hard. I would be curious to know what the performance would be if the power budget was still at 20 series levels.
This is silly. There are maybe 10 to 20 reputable social media influencers and press outlets who get access to early hardware. That is not even going to remotely influence the supply when they are capable of producing thousands a day / week.
It is now 2 weeks after the RTX 3080 launch, and available units are limited to scalper sales on eBay. I am not sure how much a 2-week delay will help, especially for the slightly more mainstream RTX 3070.
It's good to have such demand, and nVidia deserves to sell every unit they can get. I just hope I can get one within the next few months.
You might not want one of the first batch of 30 series cards anyhow; they've been having stability problems at clocks above 2GHz. OEMs are looking at new board revisions, and Nvidia has released drivers that keep boost clocks under 2GHz.
There are almost no consumer cards that were released with six SP-CAPs, and as we've been seeing, a good portion of the issues have been driver-related. It's honestly pretty ridiculous that, now that the stability issues have been solved, people are going to complain about potentially losing 30MHz on an already very high overclock. Nobody is guaranteed a card that will hit 2050-2100MHz, and whether overclocking stress breaks down first in the GPU, VRAM, or capacitors really doesn't make much difference.
That's a little too short of a summary. People have reported the same issues with FE cards, just not as frequent.
The initial blame was laid at the feet of AIBs - something Nvidia seems happy to allow to happen - but in reality they've all followed Nvidia's guidance. Updated drivers seem to have taken the edge off it, so the sensible conclusion is that Nvidia's boost algorithms were provoking issues with voltage regulation that were exacerbated by certain designs.
Does seem to be a fairly minor issue in the grand scheme of things, but it's redolent of a rushed launch. As such, it makes me more interested to see what AMD will come out with.
Hoping RDNA 2 is amazing is a fool's errand. RDNA is more bandwidth-hungry than Turing, and we expect a 256-bit GDDR6 RDNA 2 card to go toe-to-toe with Ampere?
Right.
Safe bet: the Navi cards will compete with the 3070. If you want 3080 performance, you're safe just buying a 3080; if the 1080 Ti is any indication, it will take AMD till 2024 to catch up.
Why do you assume RDNA 2 will be equally as bandwidth-hungry as RDNA (itself a notable improvement over GCN, which was notoriously bad in that regard)?
Furthermore: what possible benefits could AMD hope to reap from releasing an expensive-to-manufacture, large-die 7nm GPU with high clock speeds and then crippling it with an insufficient memory bus?
I'm not sure why you're simultaneously trusting that the 256bit rumours are accurate whilst also assuming that they will have done nothing at all to compensate for that. It just doesn't make any sense - anything they'd save from the narrower bus would be more than lost on reduced margins from the hobbled performance. It would be a *bizarre* decision.
Personally I think it's a pity, because I'll need some time and reading to decide between the 3070 and 3060 Ti. But better launch availability for a product that is probably highly sought after is certainly good.
So it's amazing, right: Apple manages to fill the channel for millions of iPhones, but Nvidia pawns you off on their AIB shortcutters for more than the retail launch price of the OG card. WTH is going on with that company? I can't even back-order an FE card.
To be fair, Apple iPhone SoCs are typically around 100 mm², whereas GA102 is 628 mm². This yields approximately 600 dies per wafer for Apple and 89 for nVidia. This factor of 7 is not enough to get from millions down to tens of thousands, but it certainly makes it harder to get enough chips from the foundry. We also don't know how well Samsung works out for nVidia as a foundry.
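The factor-of-7 claim checks out against the standard first-order die-per-wafer approximation. This is a back-of-envelope sketch only; real counts also depend on scribe lines, edge exclusion, and reticle layout:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """First-order approximation: wafer area / die area,
    minus an edge-loss term for partial dies around the rim."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

apple_soc = gross_dies_per_wafer(100)   # ~640 candidate dies
ga102 = gross_dies_per_wafer(628)       # ~85 candidate dies
print(apple_soc, ga102, round(apple_soc / ga102, 1))  # roughly a 7.5x gap
```

Note these are gross (candidate) dies before any yield loss; defects hit a 628 mm² die far harder than a 100 mm² one, so the real gap in sellable chips is even wider.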
It's not the AIBs that were at fault. Nvidia pushed for an early launch and didn't allow them enough time for testing, so all they could do was follow Nvidia's instructions.
In other words all roads lead back to Nvidia. As per usual, they're not exactly rushing to take the bullet for their own mistakes.
Not everyone who eyes a 3070 will buy a 3080 if the 3070 is not (yet) available. And not everyone who looks forward to buying a 3080 will suddenly switch to purchasing a 3070 if the 3070 becomes available.
Also, manufacturing costs vs. sales volume/revenue: the GA104 die is substantially smaller than GA102 (392 mm² vs. 628 mm², or so). Given that both are produced on the same process in the same fab, one can assume the same wafer size and the same defect rate per unit area for both GPUs. As such, for the same manufacturing cost per wafer (which stays the same regardless of how many GA102 and/or GA104 dies are on the wafer), you produce a lot more GA104 dies. It could be that profit/revenue per wafer favours the more expensive GA102, but it could very well be that the calculation favours GA104 instead. I suggest you get a job at Nvidia, work your way up the hierarchy, and in no time you will know exactly how much Nvidia makes on each of their GPU dies...
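The per-wafer comparison above can be sketched with a simple Poisson yield model. Note that the defect density and the per-chip prices below are entirely invented for illustration; neither Samsung's 8nm defect rate nor Nvidia's chip ASPs are public:

```python
import math

def dies_per_wafer(area_mm2, wafer_d=300):
    # Gross candidate dies: wafer area / die area, minus an edge-loss term.
    wafer_area = math.pi * (wafer_d / 2) ** 2
    return int(wafer_area / area_mm2 - math.pi * wafer_d / math.sqrt(2 * area_mm2))

def good_dies(area_mm2, d0_per_cm2):
    # Poisson yield model: P(zero defects on a die) = exp(-D0 * A)
    return dies_per_wafer(area_mm2) * math.exp(-d0_per_cm2 * area_mm2 / 100)

D0 = 0.1                             # defects/cm^2 -- invented for illustration
asp = {"GA102": 300, "GA104": 180}   # hypothetical per-chip prices, also invented

for die, area in (("GA102", 628), ("GA104", 392)):
    good = good_dies(area, D0)
    print(f"{die}: ~{good:.0f} good dies/wafer, ~${good * asp[die]:,.0f}/wafer")
```

Under these made-up numbers the smaller GA104 comes out ahead per wafer (more gross dies *and* better yield), which is the commenter's point: without Nvidia's real D0 and prices, you can't say which die is more profitable.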
The 3070 is based on the same die as the 3080, just with parts disabled. Early in a manufacturing process, those disabled parts are generally defects. You take the failed 3080s and they become 3070s. It's a way to increase the usable yield of what was made.
How do you know that AMD won't have anything compelling?
Or if you are paying attention and watching their pathetic attempts at drumming up hype with the "we won't have a paper launch, but we probably won't have stock either" tweets.
Short of AMD leaking out performance benchmarks, they have little in the pipeline to be excited about.
I'm not fussed about stock at launch, so I couldn't give a rat's ass about any tweets in that regard.
Nvidia really, *really* pushed their chips hard - to the point where they burned all their PPW savings. Why would they do that if they knew they had no competition? Nvidia also quite clearly rushed to launch - same question as above. All we really know about the AMD GPUs is the likely CU config (enough to beat a 3080 at the high-end), the 50% PPW gain claim, and rumblings about tweaks to the memory interface. AMD aren't repeating the clamour of the Vega launch and, tbh, I see that as a positive sign.
I'm working off that information - I don't know how reliable those sources are yet, though, so I'm filing it all under "rumour" until I see them proven right at least once with an AMD GPU launch.
They seem to be OK; RedGamingTech was talking about the cache a few days ago and said his "sources" are calling it Infinity Cache, and it seems AMD put a trademark on it: https://www.tomshardware.com/news/amds-infinity-ca... But yeah, I'm looking at these leaks the same way, waiting till the 28th to see how much of it pans out.
I don't quite understand how Nvidia managed 2.5x the number of CUDA cores, 1.5x more ROPs, and an increased number of Tensor cores (based on throughput), and still used fewer transistors (I am comparing GA104 vs. TU106). Some VERY big optimization?
Mainly because you can't compare the CUDA core counts vs. Turing, but also because the chart is wrong, that's the TU102 transistor count.
For Turing they added an INT core for each CUDA core but didn't count that core anywhere. If you look at the block diagrams, sure, it SEEMS like they took half of the cores in each SM and turned them into INT cores, but they actually doubled the SM count at the same time. I.e., 1080 = 2560 CUDA cores across 20 SMs; 2070S = 2560 CUDA cores across 40 SMs, plus another 2560 INT32 cores.
For Ampere they also added FP32 capability to that secondary INT core, so now they're counting it as a full CUDA core. Makes sense, but as you can tell from the game performance, it doesn't really work out like that. Part of Turing's advantage over Pascal was that those "hidden" INT cores added concurrency for the roughly 36:100 INT:FP instruction mix in games (per the Turing architecture whitepaper). Now that's gone, and each Ampere CUDA core is more comparable to a Pascal CUDA core for performance-scaling purposes. Granted, there's much more granularity on Ampere: Pascal could only do FP or INT operations on a whole group of CUDA cores (i.e., all 128/SM had to be INT for that clock, even if there was just 1 INT operation), but those previously uncounted INT32 cores are now counted.
As for marketing, of course the FLOPS number looks great compared to Turing, because a million INT32 cores can't do even 1 FLOP. For that reason it looks 2x+ as powerful at pure FP math, because it is. But that doesn't matter in real applications.
In Turing they had parity between FP32 and INT32 resources, which resulted in some inefficiency (those workloads are not equal). With Ampere the INT32 resources can also perform FP32 calculations, so if all you're doing is FP32 then there's double the capacity - but for graphics workloads it doesn't work that way.
In short: some things that were "invisible" in their quoted specs have now been made visible in a manner that, for gamers at least, is very misleading.
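The effect described above can be illustrated with a toy issue-rate model. Treat a Turing SM as 64 FP32-only lanes plus 64 INT32-only lanes, and an Ampere SM as 64 FP32-only lanes plus 64 lanes that can do either, then feed both the ~36:100 INT:FP mix the Turing whitepaper cites for games. This is a deliberate simplification that ignores scheduling, occupancy, and memory stalls:

```python
def sm_throughput(fp_lanes, flex_lanes, int_lanes, int_per_100_fp):
    """Max instructions/clock for one SM at a fixed INT:FP instruction mix.
    fp_lanes do FP32 only, int_lanes do INT32 only, flex_lanes do either."""
    int_frac = int_per_100_fp / (100 + int_per_100_fp)
    best, eps = 0.0, 1e-9
    total = fp_lanes + flex_lanes + int_lanes
    for step in range(total * 100 + 1):   # scan throughput in 0.01 steps
        t = step / 100
        int_ops, fp_ops = t * int_frac, t * (1 - int_frac)
        spill = max(0.0, int_ops - int_lanes)   # INT work sent to flex lanes
        if spill <= flex_lanes + eps and fp_ops <= fp_lanes + (flex_lanes - spill) + eps:
            best = t
    return best

turing = sm_throughput(64, 0, 64, 36)    # 64 dedicated FP32 + 64 dedicated INT32
ampere = sm_throughput(64, 64, 0, 36)    # 64 dedicated FP32 + 64 FP32-or-INT32
print(turing, ampere, ampere / turing)   # ~87 vs 128 per clock: ~1.5x, not 2x
```

At a games-like mix, "doubling the CUDA cores" buys roughly 1.5x per SM per clock in this model, not 2x, which lines up with the gap between Ampere's paper FLOPS and its game results.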
Interesting. Looks like they don't yet know AMD's pricing for the RX 6000 series but recently found out the performance. I reckon the RTX 3070 will be within 5% of one of the AMD cards on performance (including RT).
Unless you're lucky enough to get a pre-order, the mainstream public will need to wait till 2021 to actually get an RTX 3070. Look at the past 20 years of Nvidia launching their cards. They deliberately do this to drive up demand and profit from it. Google "scalping". This is what Nvidia does. They should be prosecuted for this; they are a $100 billion company that basically gets away with everything but murder. But then, who knows about that.
That's a bit much. They're certainly manipulating launches to maximise profit, but it's down to consumers not to buy into the hype and they just... fail. Over and over.
This. Same with gaming. I can't be mad at companies pushing heavy monetization, because consumers just fall head over heels to dump money into games these days. Pre-order culture only evolved into this monster because people kept buying into the BS.
The consumer is always right, and the consumer is a total idiot.
I agree with this line of thinking, particularly on the software side. Pre-ordering software is lunacy. On the hardware front, the enthusiast market is being roasted by companies announcing nominal parts at nominal prices that are completely unavailable for the first 3 months after launch.
I think they meant persecuted, but they are definitely abusing the market by paper-launching a card no one can buy at that price point. They actually have extreme fans who have waited to build a machine around a GPU that can't reasonably be purchased.
I think the Radeon 6900 will likely land between 3070 and 3080 performance, and that's why they're delaying the launch of the 3070: to make sure they stay competitive at that price point. I was really hoping for team AMD to pull out a big win, but it looks like Vega all over again.
It's pretty much guaranteed at this point that if the hype is around an "NVIDIA killer" then the card isn't going to meet expectations. It has been a long time since AMD/ATI has held the performance crown in the GPU space.
All indications are that a card with the most consistently projected 6900 specs should *at least* reach performance parity with the 3080, and if that's all it does then it will likely also be more efficient.
Bear in mind that the 3080 performs like twice an RX 5700, and the 6900 is reputedly twice a 5700 XT with improved performance-per-watt, and the 3080 has roughly the same PPW as the RX 5700. It would be really, really difficult for AMD to miss this target.
I'm not saying they haven't; just saying that everybody assuming they have based purely on the Fury / Vega launches is missing a trick.
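For what it's worth, the "twice a Navi 10" arithmetic above can be sanity-checked with some rough numbers. Everything here is an assumption: the performance indices are approximate review composites (RX 5700 = 100), and the scaling and clock factors are guesses, not leaks:

```python
# Back-of-envelope check on the "double Navi 10" reasoning.
perf = {"RX 5700": 100, "RX 5700 XT": 112, "RTX 3080": 200}  # 3080 ~= 2x RX 5700

cu_scaling = 0.85   # assume CU-doubling yields only 85% of linear scaling
clock_gain = 1.10   # assume ~10% higher sustained clocks

projected_6900 = perf["RX 5700 XT"] * 2 * cu_scaling * clock_gain
print(round(projected_6900))                               # ~209
print(round(projected_6900 / perf["RTX 3080"], 2))         # ~1.05x a 3080
```

Even with scaling efficiency well short of perfect, a doubled 5700 XT lands around 3080 parity, which is why missing that target entirely would be surprising.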
RDNA 1 was more bandwidth-hungry than Turing. Are we to expect that RDNA 2 is such a huge redesign that it can now feed twice as many cores with the SAME BUS as a 5700 XT?
Yeah, no. Nvidia couldn't pull that off; AMD certainly can't pull that off. They don't have the money or engineering capability for that big of a jump.
Expect nothing more than 3070/2080 Ti performance at best.
I refuse to believe (without further information) that AMD would deliberately design a GPU with such a lopsided design, especially given their historic tendency to overegg the pudding WRT memory bandwidth. It doesn't make sense economically or from an engineering perspective.
Whether it's because the leaked memory specs are wrong or because of some "secret sauce", I wouldn't really want to place any bets - but AMD have previous with using novel design innovations to upend the market, both in the CPU and GPU arenas.
Yes, but this website doesn't run on the internet, it runs on the WWW, and that's neither American, nor was it invented by an American!
So this website runs on a British invention, but like all good British inventions, it was pi**ed away by private companies and venture capitalists, none of whom understood what they had!
British Telecom invented the hyperlink that allows you to move freely from one web page/site to another. But that innovation happened when it was a state-owned company, and when it was privatised, the new directors didn't realise what they had. In 2002 BT tried to assert ownership of the hyperlink and took a US ISP to court for royalties, and lost because it had waited too long to assert ownership... You can Google the case; the ISP was Prodigy Communications.
So no, it was never a US invention! The internet, however, is a US invention, and many people like yourself get confused between the Internet and the WWW, so you're not alone!
29th, eh? I can't help but think they wanted to rain on AMD's parade. If the demand is there, and it is there, it won't matter if they wait two weeks to "stockpile".
"29th, eh? I can't help but think they wanted to rain on AMD's parade"
Not only that, but by waiting until a day later and delaying the launch of the 3070, AMD now can't use the 3070 in comparison charts at its RDNA 2 launch. They will only be able to compare RDNA 2 to the 3080/90 cards, and 'gamers' will see that RDNA 2 is not much better than a 3080, so they'll still buy the 3070, not AMD. It's about bull**it marketing, nothing more!
Nvidia gets another one over the average consumer, and still coins it in with deceptive marketing. And yet 'gamers' fall for it time-and-time again. LOL!
" and 'gamers' will see that RDNA II is not much better than a 3080, so will still buy the 3070 " oh ? if rdna2 competes as well as the rumors and leaks seems to indicate, and if it is priced lower then the equiv. rtx 3000, how do you figure gamers will still buy a 3070 ? as it stands, the 3080/3090 is priced put of reach in canada, the 3080 starts @ around $900. the 3070, could be with in reach of some, but, those with more important financial obligations, will probable pass on that as well. no know i know is even considering the 3080, forget about the 3090. they might consider the 3070, but cause it has been delayed til the 29th, they are now waiting to see what rdna2 is like and decide.
TEAMSWITCHER - Friday, October 2, 2020 - link
They really need to get more RTX 3080 FE cards into the hands of mere mortals. I started unsubscribing to the YouTubers showing off their RTX cards. It's really NOT in my interest to pay them any attention - what-so-ever. It will just translate int them getting more cards on the next launch and having even fewer for paying customers.raywin - Friday, October 2, 2020 - link
This!!! do not support ppl that are helping nvidia with their false scarcity.michael2k - Friday, October 2, 2020 - link
That's dumb. It's not false scarcity; the more the card is advertised (ie, shown off on YouTube), the higher demand potentially is. So a more correct way to say it is; Do not create scarcity by driving up demand. Or in other words, by not watching the YouTube channels, you don't give them any profit motive to drive up demand, and you don't become enamored of wanting one yourself.If no one wants one, demand falls and scarcity falls.
raywin - Friday, October 2, 2020 - link
why don't they share their FE numbers?whatthe123 - Friday, October 2, 2020 - link
None of the manufacturers share their numbers, online sources for numbers are from market research like JPR or mindfactory sharing their retail sales.Hell if you believe they're lying and artificially creating scarcity why would you believe the numbers they give you anyway? They could say they sold a million and then at their earnings call they could list it as a million allocated to vendors/AIB in Q4. Either way what you're asking for makes no sense considering your theory.
raywin - Saturday, October 3, 2020 - link
creating a paper launch with an artificial price target on a card a consumer can't buy would support my theory, as would the number of FE cards producedwhatthe123 - Saturday, October 3, 2020 - link
i'm saying if you think they're lying to you already, how exactly would another lie prove anything? There's no way for them to prove anything to you as you've already assume they are liars. From reading through your conspiracy theory it's pretty obvious you got it from the same youtuber everyone got this dumb conspiracy from. I guess it's not surprising considering RTG enables this behavior by retweeting that garbage.raywin - Sunday, October 11, 2020 - link
conspiracy theory? i think this is just their business practiceGruntboyX - Monday, October 19, 2020 - link
Paper launch is a desperate gamble when you don't know what your competitor is getting ready to release. If Nvidia is trying to do a paper launch and AMD hits a home run with Radeon they will burn bridges with their customers and shift demand to their competitor.I don't buy the paper launch theory. I think its their supply chain is comprised either with Covid impacts or they have yield issues.
michael2k - Friday, October 2, 2020 - link
Why do you think the FE numbers matter? NVIDIA isn't an OEM and doesn't book the revenue separately. If their revenue were significantly impacted by the FE cards I imagine they would be making a much bigger deal out of them.Essentially, the FE cards are going to be out first, sold out first, and be a miniscule part of the market, especially if they are charging $100 to $200 more.
As for the real answer, I imagine it's because there is no need to publish information that could be used competitively against them.
raywin - Saturday, October 3, 2020 - link
because I think nvidia is producing an extremely limited number of FE cardsSpunjji - Monday, October 5, 2020 - link
I thought that was a point of FE cards? Limited numbers for the fanboys, everybody else waits for partner cards to see which offer the best balance of features/performance/price.raywin - Sunday, October 11, 2020 - link
I think the point of producing the FE cards was controlling the reviewer narrative with a false sampleDrkrieger01 - Friday, October 2, 2020 - link
Very high demand market, brand new product, high mobility product channels. This means it typically takes time to ramp up production to hit demand targets, and with COVID messing up so many metrics this year it's hard to get a good number. It also requires foundries to product 'x' number of chips when they don't know exactly how much they need. If yields aren't good, you'll be waiting, and I haven't found any info on Ampere's yield rates.Any time I see a 'new product' launch, I expect it to take up to 1-3 months after launch before it's available, at least here in Canada. You need to adjust your expectations ;)
nathanddrews - Friday, October 2, 2020 - link
I've never liked FE cards. The aftermarket versions are generally superior: cooling, dual BIOS, OC, etc.jtd871 - Saturday, October 3, 2020 - link
The FE cards this go-round are loss leaders designed to pump up initial review scores. The cooling solution is too expensive for the MSRP (i.e., NV is making razor thin margins on them), and the AIBs likely aren't allowed to use the FE cooling design at any price.raywin - Saturday, October 3, 2020 - link
This seems to be the consensus, FE were for reviewers, a miniscule number were made for the general public. Some estimates are less than 30kwhatthe123 - Saturday, October 3, 2020 - link
AIBs are already beating the FE cooler in thermals, mostly because they're willing to slap 3+ slot coolers on the 3080.Elusi - Monday, October 5, 2020 - link
They are ”beating” the FE but with plenty of conceits. They beat it by designing larger volume cards with vertical fin stacks, dumping 320W(!) worh of heat almost exclusively inside the case. And none can match MSRP, with Nvidia footing the bill for early adopters before oct 15th (this has been disclosed to reviewers but curiously the youtubers don’t like talking about it). Getting the FE at 699 was the best deal you could have made this launch. It might continue to be, depending on if nvidia plans to restock once o week going forward as well.raywin - Sunday, October 11, 2020 - link
someone is paying attentionjust4U - Tuesday, October 13, 2020 - link
That's just silly.. I see nothing different here from other paper launches of NVs new cards and you can bet their making decent money off of it all. Slim profit margins? That's a joke.. The only time NV ever slashes prices is when Amd is beating them. Anyway, It's very similar to the 600 series launch where they released the titan (ish) cooler for the first time and people were actually interested in getting the NV offering over partner cards.GruntboyX - Monday, October 19, 2020 - link
I think 320W is telling that NVIDIA is pushing Ampere too hard. I would be curious to know what the performance would be if the power budget was still at 20 series levels.GruntboyX - Monday, October 19, 2020 - link
This is silly. There are maybe 10 to 20 reputable social media influencers and press outlets who get access to early hardware. That is not even going to remotely influence the supply when they are capable of producing thousands a day / week.Sivar - Friday, October 2, 2020 - link
It is now 2 weeks after the RTX 3080 launch, and available units are limited to scalping sales on eBay.I am, not sure how much a 2-week delay will help, especially for the slightly more mainstream RTX 3070.
It's good to have such demand, and nVidia deserves to sell every unit they can get. I just hope I can get one within the next few months.
Mr Perfect - Friday, October 2, 2020 - link
You might not want one of the first batch of 30 series cards anyhow, they've been having stability problems at clocks above 2GHz. OEMs are looking at new board revisions and Nvidia has release drivers that keep boost clocks under 2GHz.https://videocardz.com/newz/manufacturers-respond-...
Flunk - Friday, October 2, 2020 - link
If you want the TLDR there, it only affects some 3rd party designs. Not all 3rd party and not the founders edition.raywin - Friday, October 2, 2020 - link
yep, good luck finding out which ones are the bad onesDizoja86 - Friday, October 2, 2020 - link
There are almost no consumer cards that have been released with 6 sp-caps, and as we've been seeing, a good portion of the issues have been driver-related. It's honestly pretty ridiculous that--now that the stability issues have been solved--are going to complain about potentially losing 30mhz on an already very high overclock. Nobody is guaranteed a card that is going to hit 2050-2100mhz, and whether the overclocking stress breaks down first in the GPU, VRAM, or capacitors really doesn't make much difference.Spunjji - Monday, October 5, 2020 - link
That's a little too short of a summary. People have reported the same issues with FE cards, just not as frequent.The initial blame was laid at the feet of AIBs - something Nvidia seems happy to allow to happen - but in reality they've all followed Nvidia's guidance. Updated drivers seem to have taken the edge off it, so the sensible conclusion is that Nvidia's boost algorithms were provoking issues with voltage regulation that were exacerbated by certain designs.
Gigaplex - Saturday, October 3, 2020 - link
The cards aren't advertised as supporting over 2GHz so it shouldn't matter if they don't support it.TheinsanegamerN - Monday, October 5, 2020 - link
The cards in ideal conditions can push 2 GHz on boost.It doesnt matter if they are *advertised* to do so, the card ARE doing it, and it IS causing problems. This reeks of lack of QC testing.
Spunjji - Monday, October 5, 2020 - link
100%Does seem to be a fairly minor issue in the grand scheme of things, but it's redolent of a rushed launch. As such, it makes me more interested to see what AMD will come out with.
raywin - Friday, October 2, 2020 - link
you might get a broke af aib, but you'll never get a FE at retail pricing. those were only for reviewers, our microcenter got 15 for launch. fifteennandnandnand - Friday, October 2, 2020 - link
Take a hard look at RDNA 2 before you do that.TheinsanegamerN - Monday, October 5, 2020 - link
Hoping rDNA2 is amazing is a fools errand. rDNA is more bandwidth hungry then turing, and we expect that a 256 bit GDDR6 rDNA 2 is going to go toe-to-toe with Ampere?Right.
Safe bet: the navi cards will compete with the 3070. If you want 3080 performance you're safe just buying a 3080, if the 1080ti is any indications it will take AMD till 2024 to catch up.
Spunjji - Monday, October 5, 2020 - link
Why do you assume RDNA 2 will be equally as bandwidth-hungry as RDNA (itself a notable improvement over GCN, which was notoriously bad in that regard)?Furthermore: what possible benefits could AMD hope to reap from releasing an expensive-to-manufacture, large-die 7nm GPU with high clock speeds and then crippling it with an insufficient memory bus?
I'm not sure why you're simultaneously trusting that the 256bit rumours are accurate whilst also assuming that they will have done nothing at all to compensate for that. It just doesn't make any sense - anything they'd save from the narrower bus would be more than lost on reduced margins from the hobbled performance. It would be a *bizarre* decision.
brunosalezze - Friday, October 2, 2020 - link
The Table.. the last collumn says 2070 but launch date and MSRP are from the 2080tiRyan Smith - Friday, October 2, 2020 - link
Thanks!austinsguitar - Friday, October 2, 2020 - link
smartest thing nvidia has ever done honestly.MrSpadge - Friday, October 2, 2020 - link
Personally I think it's a pitty because I'll need some time & reading to decide between 3070 and 3060Ti. But better launch availability for a product which is probably highly sought after is certainly good.raywin - Friday, October 2, 2020 - link
so it's amazing, right, apple manages to fill the channel for millions of iPhones, but nvidia pawns you off on their aib shortcutters for more than the retail launch price of the og card. wth is going on with that company, i can't even back order an FE card.
MrSpadge - Saturday, October 3, 2020 - link
To be fair, Apple iPhone SoCs are typically around 100 mm², whereas GA102 is 628 mm². This yields approximately 600 dies per wafer for Apple and 89 for nVidia. This factor of 7 is not enough to get from millions down to tens of thousands, but it certainly makes it more difficult to get enough chips from the foundry. We also don't know how well Samsung works for nVidia as a foundry.
Spunjji - Monday, October 5, 2020 - link
It's not the AIBs that were at fault. Nvidia pushed for an early launch and didn't allow them enough time for testing, so all they could do was follow Nvidia's instructions.
In other words, all roads lead back to Nvidia. As per usual, they're not exactly rushing to take the bullet for their own mistakes.
raywin - Sunday, October 11, 2020 - link
i think that is fair, unprecedented demand may work as an excuse for one launch, but 2 in a row? this time it is a supply issue
Wreckage - Friday, October 2, 2020 - link
Why release the 3070 when they are selling every 3080 they can make? Especially when it sounds like AMD won't have anything compelling.
MrVibrato - Friday, October 2, 2020 - link
Not everyone who eyes a 3070 will buy a 3080 if the 3070 is not (yet) available. And not everyone who looks forward to buying a 3080 will suddenly switch to purchasing a 3070 if the 3070 becomes available.
Also, manufacturing costs vs. sales volume / revenue: The GA104 die is substantially smaller than GA102 (392 mm² vs. 628 mm², or so). Given that both are produced with the same process in the same fab, one can assume the same wafer size and the same defect rate per area unit for both GPUs. As such, for the same manufacturing cost per wafer (which stays the same regardless of how many GA102 and/or GA104 dies are on the wafer) you produce a lot more dies. It could be that the profit/revenue per wafer favours the more expensive GA102, but it could very well also be that the calculation favours the GA104 over the GA102. I suggest you get a job at Nvidia, work your way up the hierarchy, and in no time you will know exactly how much Nvidia makes with each of their GPU dies...
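MrVibrato's per-wafer economics can be sketched in a few lines. This is a minimal back-of-envelope sketch: it assumes 300 mm wafers, the classic dies-per-wafer approximation, a Poisson yield model, and a made-up defect density of 0.1 defects/cm²; only the die areas (392 and 628 mm²) come from the comment above.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Classic approximation: wafer area over die area, minus an
    edge-loss term for partial dies at the wafer's circumference."""
    r = wafer_diameter_mm / 2
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def good_dies(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Scale the gross die count by a Poisson yield model:
    yield = exp(-defect_density * die_area)."""
    yield_rate = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)
    return dies_per_wafer(die_area_mm2) * yield_rate

D0 = 0.1  # defects/cm^2 -- an assumed figure, not a published one
ga104 = good_dies(392, D0)  # the 3070's die
ga102 = good_dies(628, D0)  # the 3080/3090's die
print(f"good GA104 dies/wafer: {ga104:.0f}")
print(f"good GA102 dies/wafer: {ga102:.0f}")
```

Because yield decays exponentially with die area, the smaller GA104 gets more than double the good dies per wafer here, which is why the revenue comparison isn't as one-sided as the GA102's higher selling price suggests.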
raywin - Friday, October 2, 2020 - link
because they never intended to sell the 3080 FE to consumers, they intended to push you off to aib vendors and then push out a newer better++ version
Meteor2 - Saturday, October 3, 2020 - link
Yield, of course.
schujj07 - Saturday, October 3, 2020 - link
The 3070 is based on the same die as the 3080, just with parts disabled. Early in a product or manufacturing process these disabled parts are generally failures. You then take the failed 3080s and they become the 3070. It is a way to increase yields of what was made.
How do you know that AMD won't have anything compelling?
haukionkannel - Saturday, October 3, 2020 - link
3080 uses GA102, 3070 uses GA104... so how is it the same die?
schujj07 - Saturday, October 3, 2020 - link
You are correct, my mistake on the same die.
Spunjji - Monday, October 5, 2020 - link
"Especially when it sounds like AMD won't have anything compelling"
Only if you're not paying attention.
TheinsanegamerN - Monday, October 5, 2020 - link
Or if you are paying attention, and watching their pathetic attempts at drumming up hype with the "we won't have a paper launch but we probably won't have stock either" tweets.
Short of AMD leaking out performance benchmarks, they have little in the pipeline to be excited for.
Spunjji - Monday, October 5, 2020 - link
I'm not fussed about stock at launch, so I couldn't give a rat's ass about any tweets in that regard.
Nvidia really, *really* pushed their chips hard - to the point where they burned all their PPW savings. Why would they do that if they knew they had no competition?
Nvidia also quite clearly rushed to launch - same question as above.
All we really know about the AMD GPUs is the likely CU config (enough to beat a 3080 at the high-end), the 50% PPW gain claim, and rumblings about tweaks to the memory interface. AMD aren't repeating the clamour of the Vega launch and, tbh, I see that as a positive sign.
I guess we'll see soon enough.
Qasar - Monday, October 5, 2020 - link
spunjji, YouTube for redgamingtech, mooreslawisdead and coretec. Seems RDNA 2 has a huge cache, 128 megs of it.
Spunjji - Tuesday, October 6, 2020 - link
I'm working off that information - I don't know how reliable those sources are yet, though, so I'm filing it all under "rumour" until I see them proven right at least once with an AMD GPU launch.
Qasar - Tuesday, October 6, 2020 - link
they seem to be ok, redgamingtech was talking about the cache a few days ago, and said his "sources" are calling it infinity cache, and well, seems they put a trademark on it: https://www.tomshardware.com/news/amds-infinity-ca... but yea, i'm looking at these leaks the same way, waiting till the 28th to see how much of it pans out.
ss96 - Friday, October 2, 2020 - link
I don't quite understand how nvidia managed 2.5x the number of CUDA cores, 1.5x more ROPs, an increased number of Tensor cores (based on throughput), and still uses fewer transistors (I am comparing the GA104 vs TU106). Some VERY big optimization?
MrVibrato - Friday, October 2, 2020 - link
You are mistaken. GA104: 17.4 bln transistors. TU106: 10.8 bln transistors
Dolda2000 - Friday, October 2, 2020 - link
Actually, Wikipedia and other sources quote TU106 as having 10.8 billion transistors, not 18.6. Did AnandTech misquote the transistor count?
MrVibrato - Friday, October 2, 2020 - link
Oi. I just noticed that there is actually the wrong transistor count for the TU106 in the table. Oops...
MrVibrato - Friday, October 2, 2020 - link
(The TU102 has 18.6 bln transistors. TU102 is in the 2080 Ti)
drexnx - Friday, October 2, 2020 - link
Mainly because you can't compare the CUDA core counts vs. Turing, but also because the chart is wrong - that's the TU102 transistor count.
For Turing they added an INT core for each CUDA core, but didn't count that core anywhere. If you look at the block diagrams, sure, they SEEM like they took half of the cores in each SM and turned them into INT cores, but they actually doubled the SM count at the same time.
i.e. 1080 = 2560 CUDA cores across 20 SMs; 2070S = 2560 CUDA cores across 40 SMs, plus another 2560 INT32 cores.
For Ampere they also added FP32 capability to that secondary INT core, so now they're counting it as a full CUDA core. Makes sense, but as you can tell by the game performance, it's really not working like that. Part of the advantage of Turing vs. Pascal was that those "hidden" INT cores added concurrency for the 36:100 (INT:FP) mix of game instructions per the Turing architecture whitepaper. Now that's gone, and each Ampere CUDA core is more comparable to a Pascal CUDA core for performance-scaling purposes - granted there's much more granularity on Ampere, as Pascal could only do FP or INT operations on a group of CUDA cores (i.e. all 128/SM had to be INT for that clock, even if there was just 1 INT operation), but those previously uncounted INT32 cores are now counted.
as for marketing, of course the FLOPs number looks great compared to Turing, because a million INT32 cores can't even do 1 FLOP. For that reason it looks 2.x+ as powerful at pure FP math, because it is. But that doesn't matter in real applications.
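drexnx's point about the headline FLOPs number follows directly from the standard peak-throughput formula (counted FP32 datapaths × 2 ops per FMA × clock). A quick sketch; the boost clocks below are nominal reference values, and the core counts follow each generation's own counting convention:

```python
def fp32_tflops(fp32_datapaths: int, boost_clock_ghz: float) -> float:
    """Peak theoretical FP32 throughput: each counted datapath
    retires one FMA (two floating-point ops) per clock."""
    return fp32_datapaths * 2 * boost_clock_ghz / 1000.0

# Turing (2070 Super): only the 2560 dedicated FP32 cores count;
# its 2560 INT32 cores contribute zero FLOPs by definition.
turing = fp32_tflops(2560, 1.77)
# Ampere (3070): the dual-use INT32/FP32 datapaths are counted too,
# doubling the headline "CUDA core" figure to 5888.
ampere = fp32_tflops(5888, 1.725)
print(f"2070S: {turing:.1f} TFLOPS, 3070: {ampere:.1f} TFLOPS")
# → 2070S: 9.1 TFLOPS, 3070: 20.3 TFLOPS
```

The 2x+ gap in the headline number comes almost entirely from the change in what gets counted, which is exactly why it doesn't translate into 2x game performance.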
Meteor2 - Saturday, October 3, 2020 - link
That's a great explanation. Thanks.
Spunjji - Monday, October 5, 2020 - link
Didn't see you'd answered this already before putting in my own answer... and yours was better. 😅
Gigaplex - Saturday, October 3, 2020 - link
Also don't forget the architecture is different, so the definition of what makes up a CUDA core is different.
Spunjji - Monday, October 5, 2020 - link
In Turing they had parity between FP32 and INT32 resources, which resulted in some inefficiency (those workloads are not equal). With Ampere the INT32 resources can also perform FP32 calculations, so if all you're doing is FP32 then there's double the capacity - but for graphics workloads it doesn't work that way.
In short: some things that were "invisible" in their quoted specs have now been made visible in a manner that, for gamers at least, is very misleading.
Sychonut - Friday, October 2, 2020 - link
Right on time for winter to keep your house warm and cozy.
Gigaplex - Saturday, October 3, 2020 - link
Halloween isn't winter. Depending on the hemisphere, it's either spring or autumn.
catavalon21 - Sunday, October 4, 2020 - link
By the time many are available at retail, it will no longer be spring/fall. Unless it's the one six months from now.
zodiacfml - Saturday, October 3, 2020 - link
interesting. looks like they don't know AMD's pricing for the RX 6000 series yet, but recently found out the performance. I reckon the RTX 3070 will be within 5% of the performance (including RT) of one of the AMD cards.
AndrewJD - Saturday, October 3, 2020 - link
Unless you're lucky enough to get a pre-order, the mainstream public will need to wait till 2021 to actually get an RTX 3070. Look at the past 20 years of Nvidia launching their cards. They deliberately do this to drive up demand and profit from it. Google scalping. This is what Nvidia does. They should be prosecuted for this; they are a 100 billion dollar company who basically gets away with everything but murder. But then who knows about that.
Spunjji - Monday, October 5, 2020 - link
That's a bit much. They're certainly manipulating launches to maximise profit, but it's down to consumers not to buy into the hype, and they just... fail. Over and over.
TheinsanegamerN - Monday, October 5, 2020 - link
This. Same with gaming. I can't be mad companies push heavy monetization, because consumers just fall head over heels to dump money into games these days. Pre-order culture only evolved into this monster because people kept buying into the BS.
The consumer is always right, and the consumer is a total idiot.
raywin - Sunday, October 11, 2020 - link
I agree with this line of thinking, particularly on the software side. Preordering software is lunacy. On the hardware front, the enthusiast market is being roasted by the companies creating nominal parts at nominal prices that are completely unavailable in the first 3 months after launch.
TheinsanegamerN - Monday, October 5, 2020 - link
Prosecuted because you couldn't buy a $700 card? Jesus dude. It's just a video card. Calm TF down.
raywin - Sunday, October 11, 2020 - link
i think they meant persecuted, but they are definitely abusing the market by paper launching a card no one can buy at that price point. They actually have extreme fans who have waited to build a machine around a gpu that can't reasonably be purchased.
bcronce - Saturday, October 3, 2020 - link
3090 has 15/20% more compute/ROPs, but only 10% more TDP at the same frequency.
Gigaplex - Sunday, October 4, 2020 - link
Compared to the 3070, which is what this article is about? No.
mrvco - Saturday, October 3, 2020 - link
Good on AMD for at least keeping this interesting; these certainly aren't the choices that Nvidia would be making if they were the only game in town.
Sivar - Saturday, October 3, 2020 - link
I am still hoping for an AnandTech RTX 3080 review. It's a bit late. :(
DejayC - Saturday, October 3, 2020 - link
I think the Radeon 6900 will likely land between 3070 and 3080 performance, and that's why they're delaying the launch of the 3070: to make sure that they stay competitive at that price point. I was really hoping for team AMD to pull out a big win, but it looks like Vega all over again.
Gigaplex - Sunday, October 4, 2020 - link
It's pretty much guaranteed at this point that if the hype is around an "NVIDIA killer" then the card isn't going to meet expectations. It has been a long time since AMD/ATI has held the performance crown in the GPU space.
Spunjji - Monday, October 5, 2020 - link
They haven't managed it since GCN launched. Coincidentally, they're about to be on the second iteration of their first post-GCN architecture.
Not saying this will get them the crown; just saying this may not be the moment to assume they'll repeat those past mistakes.
TheinsanegamerN - Monday, October 5, 2020 - link
You forget the 290X; that was 2 years after launch.
Spunjji - Monday, October 5, 2020 - link
Didn't that one essentially tie with the Titan? I'd certainly not call it a win worth having, not for the power draw it required 😬
Spunjji - Monday, October 5, 2020 - link
All indications are that a card with the most consistently projected 6900 specs should *at least* reach performance parity with the 3080, and if that's all it does then it will likely also be more efficient.
Bear in mind that the 3080 performs like twice an RX 5700, the 6900 is reputedly twice a 5700 XT with improved performance-per-watt, and the 3080 has roughly the same PPW as the RX 5700. It would be really, really difficult for AMD to miss this target.
I'm not saying they haven't; just saying that everybody assuming they have based purely on the Fury / Vega launches is missing a trick.
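Spunjji's argument above reduces to simple relative arithmetic. A sketch of that reasoning; everything is rumour-derived and relative (baseline = RX 5700 at 1.00), and the 10% XT uplift and 91% scaling-efficiency figures are illustrative assumptions, not measurements:

```python
# Relative performance sketch; baseline = RX 5700 at 1.00.
rx5700 = 1.00
rx5700xt = 1.10               # assumption: XT roughly 10% ahead of the 5700
rtx3080 = 2.0 * rx5700        # "the 3080 performs like twice an RX 5700"
big_navi = 2.0 * rx5700xt     # rumour: double the 5700 XT's CUs, ideal scaling
print(f"3080 ~= {rtx3080:.2f}x, rumoured 6900 ~= {big_navi:.2f}x")
# Even at only ~91% scaling efficiency, the rumoured part still reaches a 3080:
print(0.91 * big_navi >= rtx3080)  # prints True
```

The point of the sketch is how much slack the rumoured configuration has: scaling would have to fall well short of ideal before the part misses 3080 parity.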
TheinsanegamerN - Monday, October 5, 2020 - link
The 6900 *specs* don't inspire confidence. RDNA 1 was more bandwidth-hungry than Turing. Are we to expect that RDNA 2 is such a huge redesign that it can now feed twice as many cores with the SAME BUS as a 5700 XT?
Yeah, no. Nvidia couldn't pull that off; AMD certainly can't pull that off. They don't have the money or engineering capability for that big of a jump.
Expect nothing more than 3070/2080 Ti performance at best.
Qasar - Monday, October 5, 2020 - link
theinsanegamern, have you even seen the YouTube videos by redgamingtech, coretek, or moorselawisdead? if not, go look them up.
Spunjji - Monday, October 5, 2020 - link
I refuse to believe (without further information) that AMD would deliberately design a GPU with such a lopsided design, especially given their historic tendency to overegg the pudding WRT memory bandwidth. It doesn't make sense economically or from an engineering perspective.
Whether it's because the leaked memory specs are wrong or because of some "secret sauce", I wouldn't really want to place any bets - but AMD have previous with using novel design innovations to upend the market, both in the CPU and GPU arenas.
a94ra - Tuesday, October 6, 2020 - link
Well, do you think the engineers are dumb enough to starve their hard-researched, powerful cores?
dans2530 - Sunday, October 4, 2020 - link
Where are AnandTech's Ampere reviews? Usually you guys are the first out of the gate.
Qasar - Sunday, October 4, 2020 - link
AT has stated that due to the fires in California, it's been delayed
vol.2 - Sunday, October 4, 2020 - link
So close to the election, most people won't be thinking about video cards. They should have waited another week.
MrVibrato - Sunday, October 4, 2020 - link
Wait... are you assuming only US Americans are people?
MisterAnon - Sunday, October 4, 2020 - link
This is an American website.
MisterAnon - Sunday, October 4, 2020 - link
And the internet itself is American, but that's another matter altogether.
Spunjji - Monday, October 5, 2020 - link
🤡
Notagaintoday - Thursday, October 8, 2020 - link
Yes, but this website doesn't run on the internet, it runs on the WWW, and that's neither American, nor was it invented by an American!
So this website runs on a British invention, but like all good British inventions, it was pi**ed away by private companies and venture capitalists, none of whom understood what they had!
British Telecom invented the hyperlink that allows you to move freely from one web page/site to another. But that innovation happened when it was a state-owned company, and when it was privatised, the new directors didn't realise what they had. In 2002 BT tried to assert ownership of the hyperlink and took a US ISP to court for royalties, and lost because it had waited too long to assert ownership... You can google the case; the ISP was "Prodigy Communications".
So no, it was never a US invention! The internet, however, is a US invention, and many people like yourself get confused between the Internet and the WWW, so you're not alone!
Alexvrb - Sunday, October 4, 2020 - link
29th, eh? I can't help but think they wanted to rain on AMD's parade. If the demand is there, and it is there, it won't matter if they wait two weeks to "stockpile".
Spunjji - Monday, October 5, 2020 - link
I'd guess that's the most likely motivation - an attempt at a spoiler launch to steal press attention away.
Notagaintoday - Thursday, October 8, 2020 - link
"29th, eh? I can't help but think they wanted to rain on AMD's parade"
Not only that, but by waiting a day later and delaying the launch of the 3070, AMD now can't use the 3070 in comparison charts at its launch of RDNA 2, so they will only be able to compare RDNA 2 to the 3080/90 cards, and 'gamers' will see that RDNA 2 is not much better than a 3080, so will still buy the 3070, not AMD. It's about bull**it marketing, nothing more!
Nvidia gets another one over the average consumer, and still coins it in with deceptive marketing. And yet 'gamers' fall for it time and time again. LOL!
Qasar - Saturday, October 10, 2020 - link
"and 'gamers' will see that RDNA II is not much better than a 3080, so will still buy the 3070"
oh? if rdna2 competes as well as the rumors and leaks seem to indicate, and if it is priced lower than the equiv. rtx 3000, how do you figure gamers will still buy a 3070? as it stands, the 3080/3090 is priced out of reach in canada; the 3080 starts @ around $900. the 3070 could be within reach of some, but those with more important financial obligations will probably pass on that as well. no one i know is even considering the 3080, forget about the 3090. they might consider the 3070, but because it has been delayed til the 29th, they are now waiting to see what rdna2 is like and decide.
nadim.kahwaji - Monday, October 5, 2020 - link
Does anyone know when the reviews of the RTX 3080/3090 will be published?
TacoR0sado - Friday, October 16, 2020 - link
4 weeks later, my guess is it isn't coming.