28 Comments
nathanddrews - Thursday, March 19, 2015 - link
Deadmau5 has been using a pair of them in SLI for the past week... lucky jerk.
http://www.pcper.com/news/Graphics-Cards/NVIDIA-Qu...
rpg1966 - Thursday, March 19, 2015 - link
My reading comprehension is failing me... apart from some slightly different clocks, how is this card different from the new Titan X?

And in double-slot SLI or VCA-style arrangements, how do the packed-in cards get air into their blower?
hypergreatthing - Thursday, March 19, 2015 - link
Sounds like someone will figure out a way to hardmod a Titan X to become a Quadro M6000 and save themselves $4000
ShieTar - Thursday, March 19, 2015 - link
Hardmodding normal memory into ECC memory would be very impressive.
extide - Thursday, March 19, 2015 - link
There is no such thing as "ECC memory"... ECC is just a calculation, done like RAID 5.
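The RAID-5 comparison can be made concrete with a toy sketch. Note this is a simplification: real ECC DRAM uses a SECDED Hamming-style code that can correct single-bit errors and detect double-bit ones, not plain XOR parity; the values below are made-up illustrative data.

```python
# Toy RAID-5-style parity: one parity word lets you rebuild any single
# lost data word by XORing the survivors.
data = [0b1011, 0b0110, 0b1110]   # three hypothetical "data chips"
parity = 0
for word in data:
    parity ^= word                 # "parity chip" = XOR of all data words

# Pretend data[1] was lost, then rebuild it from parity plus the rest:
rebuilt = parity
for i, word in enumerate(data):
    if i != 1:
        rebuilt ^= word

print(bin(rebuilt))                # matches data[1]
```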
Kevin G - Thursday, March 19, 2015 - link
There is a difference in channel width when it comes to desktops, servers and DIMMs: 72 bit vs. 64 bit. There is also an extra memory chip or two that holds the ECC information on each DIMM, though they're ordinary memory chips. Oddly, this extra memory is not marketed as part of memory capacity as it is not directly addressable for program usage (a 16 GB ECC DIMM actually contains 18 GB of memory). This extra hardware also means that ECC calculations do not significantly impact performance.

nVidia didn't widen the memory bus, so enabling ECC on the M6000 will reduce usable memory as well as decrease performance. That 12 GB of memory drops to 10.5 GB usable with ECC enabled. Due to the bus width and how the ECC algorithm works, expect a 20% to 25% drop in memory-bound tests.
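The capacity figures in this comment are simple arithmetic; a quick sketch (the 1/8 reservation is inferred from the 12 GB to 10.5 GB figure quoted above, not from any NVIDIA documentation):

```python
# DIMM-style ECC: the channel widens from 64 to 72 bits, so an ECC DIMM
# carries 9 chips' worth of DRAM for every 8 chips of advertised capacity.
advertised_gb = 16
physical_gb = advertised_gb * 72 / 64
print(physical_gb)                 # 18.0: a "16 GB" ECC DIMM holds 18 GB

# GPU-style soft ECC: no extra chips, so the ECC data is carved out of the
# existing memory; a 1/8 reservation matches the 12 GB -> 10.5 GB figure.
gpu_gb = 12
usable_gb = gpu_gb * (1 - 1 / 8)
print(usable_gb)                   # 10.5
```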
RazrLeaf - Thursday, March 19, 2015 - link
I think the cost difference is in the amount of time and testing that goes into component durability and compatibility to make this a workstation-grade product instead of a consumer-grade one.
r3loaded - Thursday, March 19, 2015 - link
ECC memory support, software compatibility guarantee, higher grade reliability testing, drivers optimised for pro applications rather than gaming, and business-grade tech support.

That stuff means nothing to consumers but means a lot to businesses who don't blink at dropping $100k on high-performance hardware, which is why it's priced accordingly.
Murloc - Thursday, March 19, 2015 - link
The cards in the VCA are different from the single one. Also, that thing can and will make all the noise it wants since it's in the server room.
dragonsqrrl - Thursday, March 19, 2015 - link
Three very important differences: ECC support, binning, and driver optimizations and validations for a wide range of content creation and productivity software. Can't overstate the importance of the last point, particularly in my line of work. Drivers can make a huge difference in terms of viewport performance and rendering accuracy in certain 3D packages.
Dorek - Thursday, March 19, 2015 - link
Performant isn't a word. Try "capable" there.
tipoo - Thursday, March 19, 2015 - link
The first recorded use was in 1847. Plus it's widely used in IT vernacular. Every word is made up, after all, so the detractors will have to get over it :P
xthetenth - Thursday, March 19, 2015 - link
If it's not a word, then there sure are a lot of people coincidentally making the same series of mouth noises across the tech industry when expressing the same concept.
ddriver - Thursday, March 19, 2015 - link
1/32 DP performance? No thanks, neeeext!
tipoo - Thursday, March 19, 2015 - link
What was your planned use case, out of curiosity?
Intervenator - Thursday, March 19, 2015 - link
Lol
JarredWalton - Thursday, March 19, 2015 - link
Trolling most likely. :-)
tipoo - Thursday, March 19, 2015 - link
Everyone knows you need at least 1/2 to full DP performance for trolling!
ddriver - Thursday, March 19, 2015 - link
And you need to be clueless and born yesterday for your conception of professional GPUs to boil down to running games. This is a professional product, compute performance is important, and professional applications use double precision, unlike games.
tipoo - Thursday, March 19, 2015 - link
If you can quote where I mentioned games - I'll buy you one. I'm just curious as to what exactly your use case for double precision compute is, which you still have not provided, since you seem to need it.

That was a complete non-sequitur and straw man, you're not fooling anyone with saying that.
Yes, I know this is a professional product. I also know that there are many professional uses for single precision, which you didn't seem to know; that's to be expected, given you have probably never used a pro card. Double precision is required for certain scientific work, but it's still a niche within a niche.
The proof of that is self-evident, as Nvidia thought it was worth cutting DP out in favor of squeezing as much SP out of it as they could per unit die area.
Now, for you to give us that very specific use case you have that uses double precision. You surely have one, right, and weren't just talking out your butt?
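Side note on the precision argument in this exchange: single-precision rounding error does accumulate measurably under naive sequential summation, though whether it matters is entirely workload-dependent. A minimal NumPy demonstration, not tied to any particular pro application:

```python
import numpy as np

# Accumulate 0.1 one million times; the exact answer is 100,000.
n = 1_000_000
acc32 = np.float32(0.0)
acc64 = 0.0                        # Python floats are 64-bit doubles
step32 = np.float32(0.1)
for _ in range(n):
    acc32 += step32                # rounding error compounds on every add
    acc64 += 0.1

print(acc32)                       # drifts visibly away from 100000
print(acc64)                       # correct to several decimal places
```

Compensated algorithms (Kahan summation, pairwise summation) can recover most of that accuracy in single precision, which is part of why 1/32-rate DP cards are still usable for a lot of professional work.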
ddriver - Friday, March 20, 2015 - link
Simulations for the architecture and automotive industries. DP is important if you don't want buildings to collapse and people to die.

And just because you talk out of your butt doesn't mean it is common practice everywhere; that's just you, and it isn't normal...
ddriver - Friday, March 20, 2015 - link
Also, you clearly have no idea of the degree of approximation error that creeps in after just a few calculations at 32-bit precision. Even for sound or image processing you are far better off with 64-bit precision; anything less is plain unprofessional.
Evarin - Thursday, March 19, 2015 - link
Can someone explain to me beyond just saying "servers" what this is used for? What would be a specific task you would assign to a rig like the Quadro VCA?LukaP - Thursday, March 19, 2015 - link
It's never for servers. This is a render farm node. Basically, you have a bunch of these nodes, and you send rendering data to them from your lousy 1-card, 1-CPU workstation, and they render it way faster for you :)
WithoutWeakness - Thursday, March 19, 2015 - link
A company like Pixar might be looking to build a new render farm to crank out their next animated film. They would be looking to buy something like these VCA nodes and stuff a ton of them in a bunch of racks and connect them all together. A desktop computer with one of these cards might take an hour to render a scene. A rack with 8 VCAs with 8 cards in each unit could take less than a minute to render the same scene. If they buy a few racks' worth of VCAs and interconnect them then they would be able to crank out video even faster or go the other way and render more complex things in less time.

Things like Merida's hair in Brave or Sully from Monsters Inc are ridiculously difficult to render and even with a rendering farm can take hours to render a single frame. Rendering something like that with a regular workstation would be totally impractical.
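The scaling claim above is just arithmetic, assuming near-linear scaling across GPUs (real farms lose some efficiency to scheduling and data transfer, so treat this as an upper bound):

```python
# Back-of-envelope: 60 minutes on one card, 8 VCAs x 8 GPUs per rack.
single_card_minutes = 60
cards_per_vca = 8
vcas_per_rack = 8
cards = cards_per_vca * vcas_per_rack    # 64 GPUs in the rack

rack_minutes = single_card_minutes / cards
print(rack_minutes)                      # 0.9375, just under a minute
```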
Kevin G - Thursday, March 19, 2015 - link
I wonder when the M4000 and M2000 are going to be released. They should be GM204 based, so nVidia isn't necessarily waiting around to finish a new chip for them.
HisDivineOrder - Thursday, March 19, 2015 - link
Oh, I get it. People assume they built these Big Maxwell boards with an emphasis on gaming because of gamers buying $1k cards (or cut-down variants for $650-750). No. They built these cards to slap into GRID centers everywhere and provide more vGPUs with less space.

That explains why they left prosumers behind. GRID is a more important need atm.
Mikemk - Thursday, March 19, 2015 - link
When I saw the M, I thought this was a mobile card.