New NVIDIA GPU Variant Found at Supercomputing 2019: Tesla V100S
by Dr. Ian Cutress on November 22, 2019 3:00 PM EST
NVIDIA announced a number of things at Supercomputing, such as an Arm server reference design. Despite the show being the major hub event for high-performance computing/supercomputers, it isn’t often the location where NVIDIA launches a GPU. Nonetheless, we saw a new model of NVIDIA’s high-performance Tesla V100 at multiple booths at Supercomputing.
The new GPU we saw was called the V100S (or V100s). Firstly, the name: I didn’t realise it was new/unannounced until it was pointed out to me. The way it was written on a few of the billboards looks like it is just referring to ‘multiple V100 units’, but a couple of companies confirmed to me that it is a new product. These vendors had actually been told before the show that NVIDIA was planning to announce it there, and were surprised that CEO Jensen Huang did not mention it in his off-site two-hour presentation to press and partners.
Nonetheless, NVIDIA’s partners had printed the billboards, built the displays, built the systems, and hadn’t been told *not* to show it off. So they did. I was told to look out for the gold shroud at one particular booth – they were differentiating by giving the standard V100 a green shroud and the V100S a gold shroud. This is despite the gold shroud units also just saying ‘V100’, which is meant to signify the family of the card.
Finding out what is different about this card has actually been a task – none of my usual contacts seem to know exact numbers, although a couple confirmed it was ‘faster memory’, referring to the on-package HBM2. I’m still looking into exact frequency changes, and presumably the knock-on effects on TDP, but as it stands ‘faster memory’ is the only information I have. There might also be a price difference for anyone interested in these variants.
One thought is that NVIDIA might not actually announce the V100S as a separate model, but simply as a higher-memory-speed version of the V100, and customers will just have to check exactly what the memory frequency is when they purchase – just as different consumer cards can have different memory speeds. No one was discussing exact launch timing, but it seemed NVIDIA’s partners were deep into validation, if not already offering them to select customers.
UltraWide - Friday, November 22, 2019
just stabilizer padding.
mode_13h - Saturday, November 23, 2019
So, the V100 came out just 1 year after the P100. Now, it's been with us for about 2.5 years... what's up with that? I expected some big announcement...
I guess they're trying really hard to let AMD catch up. Maybe when AMD announces Arcturus, that's when we'll finally hear about Nvidia's next datacenter chip (note: I didn't say GPU).
extide - Saturday, November 23, 2019
I mean you almost gotta wonder if Nvidia got tripped up somehow ... You would think they'd have a 7nm line out by now, but no, they did the Turing Super refreshes instead. I mean they are probably humming along just fine but ... it ALMOST seems a little fishy.
Santoval - Saturday, November 23, 2019
It's the same as Intel when they had no competition; Nvidia is well ahead of AMD, particularly at the top end consumer and the professional market. When you are the market leader you have little incentive to innovate and/or switch to a cutting edge process node. Of course that's how market leaders stop being market leaders, but when they realize that it's already too late.
Nvidia *will* innovate (eventually), with Ampere next year. By that time AMD will have released RDNA2 based graphics cards, though as of yet it's unknown if they will be able to surpass Nvidia. They probably won't, not even in ray-tracing. It also depends on whether Samsung's 7nm process node (that Ampere will be fabbed with) will turn out to be better or worse than TSMC's 7nm+ process node.
Morawka - Sunday, November 24, 2019
Perhaps Nvidia can't get access to TSMC's 7nm line in the volumes they need. I'm always reading about Apple, Huawei and Samsung eating up all the capacity.
rahvin - Monday, November 25, 2019
It's more likely cost. Nvidia, as one of TSMC's first big clients, had preferential access to new processes and even a deal for the first 5000 wafers being free. That deal expired about 2 years ago.
With AMD moving to TSMC and others I'm willing to bet nVidia wouldn't have been able to afford the move to 7nm due to the margin impact, and with them being ahead made a strategic decision to stay on the older process and make more money. The last 3 quarters or so have seen them boost their margins about 5% (probably at least part of that came from holding off on 7nm).
But if AMD offers competition at the high end soon, it could hurt them badly on the margin side in future quarters, as they'd be forced to spin up on 7nm while wafer prices are still high. AMD has had problems focusing on CPU and GPU at the same time. If they focus on CPU their GPU side tends to slip, and the reverse if they focus on GPU. It's one of the things Lisa Su needs to fix at AMD. AMD needs strong division leads that can move forward aggressively in both product segments. Until they can perform strongly in both divisions the company isn't fixed. Lisa has done a great job on the CPU side but the GPU side is still lagging, and that's leaving nVidia room to elevate prices, reduce innovation and milk the segment.
yannigr2 - Saturday, November 23, 2019
In other words
AshlayW - Saturday, November 23, 2019
Full GV100? All 5376 CC?
Kjella - Saturday, November 23, 2019
Neat, but I wish they'd offer a budget deep learning card. So many models assume you'll have 11+ GB of memory and will crash if they go OOM, making the 1080 Ti / 2080 Ti the low bar for entry. Something like the RTX 2060 but with 12GB RAM instead of 6GB would be a perfect training box.
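As a rough illustration of why training memory fills up so quickly, here is a back-of-envelope sketch (my own, not from the comment) estimating the VRAM needed just for a model's weights, gradients, and optimizer state. The FP32 and Adam assumptions are hypothetical defaults; activations, which scale with batch size and often dominate, are deliberately ignored, so this is a lower bound only.

```python
# Hypothetical back-of-envelope VRAM estimate for training a model.
# Assumes FP32 (4 bytes/parameter) and an Adam-style optimizer that
# keeps 2 extra state tensors per parameter; activations are ignored.
def training_vram_gb(num_params, bytes_per_param=4, optimizer_states=2):
    """Lower-bound training memory in GiB: weights + gradients + optimizer state."""
    tensors = 1 + 1 + optimizer_states  # weights, gradients, Adam m and v
    return num_params * bytes_per_param * tensors / 1024**3

# A 700M-parameter model already wants ~10.4 GiB before any activations,
# which is one reason 11 GB cards became a practical floor for many models.
print(round(training_vram_gb(700e6), 1))
```

Under these assumptions, even a mid-sized model exceeds a 6 GB card before a single activation is stored, which matches the commenter's point that memory capacity, not compute, is the entry barrier.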
Rudde - Saturday, November 23, 2019
Quadro RTX 5000 is basically a Geforce RTX 2080 Super with 16GB memory.
The T4 accelerator is a Geforce RTX 2070 super with 16GB memory that is downclocked to 600 MHz.
The P6 accelerator is a Geforce GTX 1070 ti with 16GB memory downclocked to 1000 MHz.
Quadro P5000 is a Geforce GTX 1080 with 16GB memory.
I'm not saying that these are cheaper though, as I am unaware of their prices.