New NVIDIA GPU Variant Found at Supercomputing 2019: Tesla V100S
by Dr. Ian Cutress on November 22, 2019 3:00 PM EST
Posted in: Trade Shows, Supercomputing 19
NVIDIA announced a number of things at Supercomputing, such as an Arm server reference design. Despite the show being the major hub event for high-performance computing and supercomputers, it isn't often the location where NVIDIA launches a GPU. Nonetheless, we saw a new model of NVIDIA's high-performance Tesla V100 at multiple booths at Supercomputing.
The new GPU we saw was called the V100S (or V100s). Firstly, the name: I didn't realise it was new/unannounced until it was pointed out to me. The way it was written on a few of the billboards looks like it is just referring to 'multiple V100 units', but a couple of companies confirmed to me that it is a new product. These vendors were actually told before the show that NVIDIA was planning to announce it there, and were surprised that CEO Jensen Huang did not mention it in his off-site two-hour presentation to press and partners.
Nonetheless, NVIDIA's partners had printed the billboards, built the displays, built the systems, and hadn't been told *not* to show it off. So they did. I was told to look out for the gold shroud at one particular booth – they were differentiating by giving the standard V100 a green shroud, while their V100S will have a gold shroud. This is despite the gold shroud units also just saying 'V100', which is meant to signify the family of the card.
Finding out what is different about this card has actually been a task – none of my usual contacts seem to know exact numbers, although a couple confirmed it has 'faster memory', referring to the on-package HBM2. I'm still looking into exact frequency changes, and presumably the knock-on effects on TDP, but as it stands 'faster memory' is the only information I have. There might also be a price difference for anyone interested in these variants.
One thought is that NVIDIA might not actually announce the V100S as a separate model, but simply as a faster-memory version of the V100, and customers will just have to check exactly what the memory frequency is when they purchase – just as different consumer cards can have different memory speeds. No-one was discussing exact launch timing, but it seemed NVIDIA's partners were deep into validation, if not already offering units to select customers.
p1esk - Sunday, November 24, 2019
All these pro cards you mentioned are significantly more expensive than the RTX 2080 Ti. The problem is that even 11 GB of memory is too little to train any decent model. Most serious DL research today is done on 8x V100 servers.
brucethemoose - Monday, November 25, 2019
One option is to rent a multi-GPU rig from someone like vast.ai, or one of the many other services out there. It makes financial sense if you aren't training 24/7.
But yeah, you're right. It would be awesome if Nvidia's board partners had the wiggle room to make their own double-capacity cards, like the old 4GB GTX 680s or the 8GB 290X. But their hands are obviously tied for whatever reason, as otherwise there would be double-capacity RTX and Pascal cards everywhere.
CiccioB - Monday, November 25, 2019
"The new GPU we saw was called the V100S (or V100s)."
Apparently this is not a new GPU but simply a new card.
The GPU name is GV100, the card is V100.
Having a V100S means a new board, not necessarily a new ASIC AFAIK.
marxxx - Monday, November 25, 2019
Official specifications: https://www.nvidia.com/en-us/data-center/tesla-v10...
CiccioB - Monday, November 25, 2019
Unfortunately it just states that they are using faster memory, but not whether there's a different cut of GV100 or whether they achieved the higher performance simply by increasing the frequencies.
jabbadap - Monday, November 25, 2019
Well, the datasheet says 5120 CUDA cores, but whether that is really so is another question. 16.4 TFLOPS implies a core clock of about 1.6 GHz, which is not out of the question. But achieving that within the same TDP as the Tesla V100 PCIe sounds a bit odd. Could newer, faster HBM2 be much more power efficient, thus freeing up more power budget for the GPU, or is it just some marketing boost-clock trick?
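The clock inference in the comment above is easy to check. A minimal sketch, assuming the conventional peak-FP32 formula of 2 FLOPs per CUDA core per clock (one fused multiply-add); the 1.6 GHz figure is the inferred value discussed here, not an official specification:

```python
# Back-of-the-envelope check: peak FP32 TFLOPS = cores * 2 * clock (GHz) / 1000
cores = 5120                   # CUDA cores per the datasheet
flops_per_core_per_clock = 2   # one FMA = 2 FLOPs (standard peak-rate assumption)
clock_ghz = 1.6                # hypothetical boost clock inferred from 16.4 TFLOPS

tflops = cores * flops_per_core_per_clock * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # 16.38 TFLOPS, matching the quoted 16.4 figure
```

Working backwards, 16.4 TFLOPS ÷ (5120 × 2) ≈ 1.60 GHz, which is how the comment arrives at that clock estimate.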
CiccioB - Monday, November 25, 2019
Oh, I see, they are 5120 CUDA cores for all variants.
So this is not a new GPU at all, just a new board with new HBM2e (Aquabolt?) and increased boost frequencies.
AshlayW - Monday, November 25, 2019
Full GV100 silicon has 5376 CUDA cores. I wonder if we will ever see it fully enabled? Probably never, because of yields on that absolutely enormous chunk of silicon.