
  • smn198 - Friday, January 28, 2005 - link

    It does do RAID-5!
    http://www.nvidia.com/object/IO_18137.html

    w00t!
  • smn198 - Friday, January 28, 2005 - link

    #18
    It can do RAID-5 according to http://www.legitreviews.com/article.php?aid=152

    Near bottom of page:
    "Update: NVIDIA contacted us to let us know that RAID 5 is also supported on the 2200 and 2050. They also didn't hesitate to point out that when the 2200 is matched with three 2050's, the RAID array can be spanned across 16 drives!"

    However, NVIDIA's site does not mention it! http://www.nvidia.com/object/feature_raid.html

    I wonder. It would be nice!
  • DerekWilson - Friday, January 28, 2005 - link

    #50,

    each lane in PCIe consists of a serial up link and down link. This means that x16 actually has 4GB/s up and down at the same time (thus the 8GB/s number everyone always quotes). Saying 8GB/s bandwidth without saying 4 up and 4 down is a little misleading, because all of that bandwidth can't move in one direction when needed.

    #53,

    4x SATA 3Gb/s -> 12Gb/s -> 1.5GB/s, + 2x GbE -> 0.25GB/s, + USB 2.0 ~-> 0.5GB/s = 2.25GB/s ... so this is really manageable bandwidth, especially as it's unlikely for all of this to be moving while all 5GB/s up and down of the 20 PCIe lanes are moving at the same time.

    It's more likely that we'll see video cards setting aside 30% of the PCI Express bandwidth to sit nearly idle (as, again, the upstream direction is often not used). Unless you're using the 2 x16 SLI ... We're still not quite sure how much bandwidth this will use over the top connector and through the PCIe bus, but one card is definitely going to send data back upstream.

    Each MCP has a 16x16 HT link @ 1GHz to the system... Bandwidth is 8GB/s (4 up and 4 down) ...
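
    A quick sketch of that arithmetic (illustrative only; the per-port figures are the theoretical peaks quoted above, not measured throughput, and the HT number assumes the 16x16 @ 1GHz link from the article):

    ```python
    # Back-of-the-envelope check of the peak I/O figures quoted above.
    # All numbers are theoretical maximums, not measured throughput.

    GBIT = 1 / 8  # 1 Gb/s expressed in GB/s

    peak_io_gbps = {
        "4x SATA 3Gb/s":       4 * 3,     # 12 Gb/s
        "2x Gigabit Ethernet": 2 * 1,     # 2 Gb/s
        "8x USB 2.0":          8 * 0.48,  # ~3.84 Gb/s
    }

    total_gb_per_s = sum(v * GBIT for v in peak_io_gbps.values())
    ht_per_direction_gb_per_s = 4.0  # 16x16 HT @ 1GHz: ~4GB/s each way, 8GB/s total

    for name, gbps in peak_io_gbps.items():
        print(f"{name:22s} {gbps:6.2f} Gb/s = {gbps * GBIT:5.2f} GB/s")
    print(f"Total storage/network/USB peak: {total_gb_per_s:.2f} GB/s")
    print(f"HyperTransport link, per direction: {ht_per_direction_gb_per_s:.2f} GB/s")
    ```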
  • guyr - Thursday, January 27, 2005 - link

    Can anyone explain how these MCPs work regarding throughput? What kind of clock rate do they have? 4 SATA II drives alone are 12 Gbps. Add 2 GigE and that's 14. Throw in 8 USB 2.0 and that's almost an additional 4 Gbps. So if you add everything up, it looks to be over 20 Gbps! Oops, sorry, forgot about 20 lanes of PCIe. Anyway, has anyone identified a realistic throughput that can be expected? These specs are wonderful, but if the chip can only pass 100 MB/s, it doesn't mean anything.
  • jeromechiu - Thursday, January 27, 2005 - link

    #12, if you have a gigabit switch that supports port trunking, then you could use BOTH of the gigabit ports for faster intranet file-transfer. Hell! Perhaps you could add another two 4-port gigabit adaptors and give your PC a sort-of-10Gbps connection to the switch! ;)
  • philpoe - Wednesday, January 26, 2005 - link

    Being a newbie to PCI-E, if I read a PCI-Express FAQ correctly, aren't the x16 slots in use for graphics cards today one-way only? Too bad the lanes can't be combined, or you could get to a one-way x32 slot (apparently in the PCI-E spec). In any case, 4 x8 full-duplex cards would be just the ticket for InfiniBand (making all that GbE worthless?) and 4 x2 slots for good measure :). Just think of 16 SATA-300 drives attached and RAID. Talk about a throughput monster.
    Imagine Sun, with the corporate-credible Solaris OS selling such a machine.
  • DerekWilson - Tuesday, January 25, 2005 - link

    #32 henry, and anyone who saw my wrong math :-)

    You were right in your setup even though you only mentioned hooking up 4 x1 lanes -- 2 more could have been connected. Oops. I've corrected the article to reflect a configuration that actually can't be done (for real this time, I promise). Check my math again to be sure:

    1 x16, 2 x4, 6 x1

    That's 9 slots with only 8 physical connections, still with 10 lanes left over. In the extreme I could have said you can't do 9 x1 connections on one board, but I wanted to maintain some semblance of reality.

    Again, it looks like the nForce Pro is able to throw out a good deal of firepower ....
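
    For what it's worth, a rough sketch of the packing check being done by hand in these comments, assuming (per the article) 20 PCIe lanes and at most 4 physical connections per MCP, and that a slot can't span two MCPs:

    ```python
    # Rough feasibility check for PCIe slot layouts on a dual-MCP (2200 + 2050)
    # nForce Pro board: each MCP is assumed to offer 20 lanes spread over at
    # most 4 physical connections, and a single slot cannot span two MCPs.

    from itertools import product

    LANES_PER_MCP = 20
    CONNECTIONS_PER_MCP = 4

    def fits(slots, mcp_count=2):
        """Return True if the slot widths can be packed onto the MCPs."""
        # Try every assignment of slots to MCPs (fine for small slot counts).
        for assignment in product(range(mcp_count), repeat=len(slots)):
            lanes = [0] * mcp_count
            conns = [0] * mcp_count
            ok = True
            for width, mcp in zip(slots, assignment):
                lanes[mcp] += width
                conns[mcp] += 1
                if lanes[mcp] > LANES_PER_MCP or conns[mcp] > CONNECTIONS_PER_MCP:
                    ok = False
                    break
            if ok:
                return True
        return False

    print(fits([16, 4, 4, 1, 1, 1, 1, 1, 1]))  # 1 x16, 2 x4, 6 x1 -> False: 9 slots, only 8 connections
    print(fits([16, 4, 1, 1, 1, 1, 1]))        # 1 x16, 1 x4, 5 x1 -> True: the split henry describes (#32)
    ```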
  • ceefka - Tuesday, January 25, 2005 - link

    Plus I can't wait to see a rig like this doing benchies :-)
  • ceefka - Tuesday, January 25, 2005 - link

    In one word: amazing!

    Some of this logic eludes me, however.

    There's no board that can fully exploit the theoretical connectivity of a 4-way Opteron config with these chipsets?
  • SunLord - Tuesday, January 25, 2005 - link

    I'd pay up to $450 for a dual CPU/chipset board as long as it gave me 2 x16, 1 x4, and 1-3 x1 connectors... as I see no use for PCI-X when PCI-E cards are coming out... Would make for one hell of a workstation to replace my aging Athlon MP on a Tyan Thunder K7 Pro board. Even if the onboard RAID doesn't do RAID 5, I can use the x4 slot for a SATA II RAID card with little to no impact! Though 2 gigabit ports is kinda overkill. mmm 8x74GB(136GB) Raptor RAID 0/1 and 12x500GB(6TB) RAID 5 3Ware/AMCC controller.

    I can dream can't I? No clue what I would do with that much diskspace though... and still have enough room for 4 dvd-+rw dual layer burners hehe
  • Googer - Tuesday, January 25, 2005 - link

    From what I have gathered, TCQ and NCQ are similar but not the exact same thing. Kind of like SCSI and IDE HDD's are similar but not the same.
  • tumbleweed - Monday, January 24, 2005 - link

    I've read before that NCQ as implemented by SATA is equivalent to the 'simple mode' of SCSI's TCQ, rather than being the same thing.
  • DerekWilson - Monday, January 24, 2005 - link

    #30:

    You cannot run 32-bit 33MHz cards at 66MHz ... There are 32-bit PCI cards that can be dropped into 64-bit 33MHz PCI slots. Not 64-bit/66MHz, and not PCI-X.
  • DerekWilson - Monday, January 24, 2005 - link

    When using two separate displays, two x2 PCIe connections are fine for two graphics cards. The system can't saturate the graphics cards.

    The fact that NVIDIA uses both the over-the-top connector and PCIe to send data for SLI means that bandwidth does impact SLI to a point. We haven't yet seen the impact of two x16 SLI slots, but the article I linked to about NF4 Ultra modding that Wes wrote shows that x16 + x2 and x8 + x8 are close, but there is a difference.

    We'll be sure to test as much as we can -- hopefully someone will put PCIe lane configuration controls in their BIOS.
  • Googer - Monday, January 24, 2005 - link

    #31 TCQ has been a feature of Hitachi/IBM PATA drives for many years now, since the 120GXP, and the only controller that supports PATA TCQ is Pacific Digital's "Discstaq" ATA 100 controller with proprietary cables.
  • Googer - Monday, January 24, 2005 - link

    #26 SoundStorm lives! Chaintech nForce4

    http://www.newegg.com/app/ViewProductDesc.asp?desc...
  • Googer - Monday, January 24, 2005 - link

    #12 Why wouldn't you want PCI-X for your existing PCI cards? Since it can run legacy 32-bit PCI at 66MHz instead of 33MHz, you are doubling your bandwidth. It is something (PCI-X) I am looking for on my next motherboard, along with an x16 and an x4 PCI-E slot for socket 939.

    Here is an AnandTech article on the motherboard you were referring to.
    http://www.anandtech.com/news/shownews.aspx?i=2370...
  • Jeff7181 - Monday, January 24, 2005 - link

    Jesus... 2 16X PCI Express slots... that's nutty! Yay to AMD and nVidia for building in more parallelism!
  • Dubb - Monday, January 24, 2005 - link

    Kris: x2 is sufficient? I thought things started to drop off around x4...coulda sworn I saw that somewhere.

    I have a question though, does the scenario change if you're running separate cards as opposed to SLI? If I had the funds, I'd be looking to power a couple 9MP displays (or a 9 MP + 30" cinema) off separate 3400s or 4400s

    I'm pretty sure the 2895 (K8WE) was confirmed 16 + 16... their website claims "two x16 slots...with x16 signals"

    If I was actually looking to buy though, I'd be looking to the tyan regardless - I like the layout and features better.
  • KristopherKubicki - Monday, January 24, 2005 - link

    Dubb: I do not even believe that the Tyan is a "true" dual 16-lane configuration, but I sent them an email and am waiting on a response.

    Of course - to be honest - it doesn't matter. Two two-lane (x2) solutions are enough for modern SLI to scrape by; dual x4 or dual x8 are more than enough bandwidth for symmetric vector processing. I have a feeling full saturation of 16-lane PCIe, particularly for graphics, is a long way away.

    Kristopher
  • Dubb - Monday, January 24, 2005 - link

    You should probably specify that the Iwill DK8ES is NOT a dual x16 board. It's x16 + x2, with the x2 on an x16 connector. The DK8EW that will be released in a few months is x8 + x8.

    The Tyan is the only x16 + x16 I know of so far...

    feel free to correct me if I'm wrong, but the folks at 2cpu.com are pretty sure of this.
  • henry - Monday, January 24, 2005 - link

    > #32 ... heh ... that's only 4 x1 lanes not 5 ;-) the config i mentioned is not possible.

    Check this: 1x16 + 3x1 / 1x4 + 2x1 (+ 1x8 for the fun ;-)
  • DerekWilson - Monday, January 24, 2005 - link

    #32 ... heh ... that's only 4 x1 lanes, not 5 ;-) The config I mentioned is not possible.

    And the Intel PCI-X idea is definitely funky :-) I suppose that would work. Rather than use an HT link for AMD's tunnel, that could be interesting in a pinch. No matter how unlikely :-)
  • henry - Monday, January 24, 2005 - link

    Hi Derek

    Just two remarks:

    > On the flip side, it's not possible to put 1 x16, 1 x4, and 5 x1 PCIe slots on a dual processor workstation.

    Why shouldn't this be possible? Just partition the PCIe lanes this way: 1x16 + 3x1 on the first nForce (one lane wasted) and 1x4 + 1x1 on the second chip (still 15 lanes and two controllers left)

    Regarding PCI-X: As you said mainboard makers can choose the obvious way and directly attach AMD's PCI-X tunnel chips.

    Nevertheless there is a more insane option: Use a spare x4 or x8 PCIe link to hook up a PCI-X bridge chip (e.g. Intel 41210).

  • DerekWilson - Monday, January 24, 2005 - link

    NCQ is Native Command Queuing for SATA ... TCQ is Tagged Command Queuing for SCSI. WD called the Raptor's initial support TCQ because they just pulled their SCSI solution over. This served to confuse people. SATA command queuing is NCQ. People call it TCQ sometimes, and maybe that's fine. Really, they may as well be the same thing except that one is for SCSI.

    #25, SDA

    I meant PCI-X -- NVIDIA didn't build legacy PCI-X support into their MCPs. In order to support it, the MCP must be paired with the AMD-8000 series. Intel has PCI-X support off the MCH. If many PCI-X slots are required, the Intel solution must sacrifice some of its PCIe lanes for the 6700PXH 64-bit PCI Hub. This hub hooks into the E75xx through either a x4 or x8 PCIe link to provide additional PCI/PCI-X buses. I know, it's a lot of PCI/PCIe/PCI-X ... sorry for the confusion.
  • Cygni - Monday, January 24, 2005 - link

    btw, i was kidding about the windows thing...
  • Cygni - Monday, January 24, 2005 - link

    Nvidia is also releasing a new videocard that does all of that, plus the GPU can run windows!

    Countdown to the point where the video card becomes everything and the motherboard is a tiny piece of plastic that holds everything in place....
  • tumbleweed - Monday, January 24, 2005 - link

    #26 - rumour has it that SS will be showing up in future NV 'video' cards, rather than on motherboards. With the ridiculous bandwidth overkill that is PCIe x16, that's a good place to put it, IMO. Save a slot, save mobo space, and put unused bandwidth to use.
  • tumbleweed - Monday, January 24, 2005 - link

    Derek - Dissonance over at TR says he specifically asked NV about it, and was told it supported TCQ as well as NCQ, so somebody is confused. :)
  • AbRASiON - Monday, January 24, 2005 - link

    I've made myself a little saying which I now apply to nvidia motherboards,...

    It's "no soundstorm, no sale"

    Until they re-implement it, I'm not buying one, period.
  • SDA - Monday, January 24, 2005 - link

    Thanks, Kris, but I do know that PCI-X != PCI-Express.. a lot of people use it to mean that by mistake, though, so I'm not sure what the author meant by PCI-X on the last page of the article.

    Also, technically, PCI-X isn't quite 64-bit PCI. 64-bit PCI is, well, 64-bit PCI; the main difference between it and PCI-X is that PCI-X also runs at a faster clock (133MHz, or 266MHz for 2.0). Obsolete PC technology is one of the few things I have any knowledge about, heh.
  • REMF - Monday, January 24, 2005 - link

    my mistake Derek, got the diagram muddled up with those hideous dual boards that connect all the memory through CPU0 and route it via HT to CPU1.

    mixed up memory with IO, silly me.
  • DerekWilson - Monday, January 24, 2005 - link

    The nForce Pro supports NCQ and not TCQ ...

    I also updated the article ... MCPs are more flexible than I thought, and NVIDIA has corrected me on a point --

    One 2200 and two 2050s can connect to an Opteron 150. Dual and quad servers are able to connect to 4 MCPs total (2 per processor for dual and 1 per processor for quad).

    With 8-way servers, it's possible to build even more I/O into the system. NVIDIA says they're mostly targeting 2 and 4 way, but with 8 way systems, there are topologies that essentially connect two 4-way setups together. In these cases, 6 MCPs could be used, giving even more I/O ...

    #21 ---

    Every Opteron has 3 HT links ... the difference between a 1xx, 2xx, and 8xx is the number of coherent HT links. In a dual core setup, AMD could either use one of the 3 HT links for core-to-core communication, or they could add an HT link for it.
  • pio!pio! - Monday, January 24, 2005 - link

    If I'm reading this correctly... with all those PCI Express slots, multiple MCPs, and multiple processors, the number of traces in the mobo should be astronomically high. I wonder how expensive the motherboards will be.
  • jmautz - Monday, January 24, 2005 - link

    Please correct my memory/misunderstanding...

    I thought the reason AMD could make a dual-core Opteron so easily was because they attached both cores via the unused HyperTransport connector. Doesn't that mean there are no available HyperTransport connectors left to attach the 2050? (at least on the 2xx models).

    Thanks.
  • DerekWilson - Monday, January 24, 2005 - link

    #18

    capable of RAID 0, 1, 0+1 ... same as NF4. The overhead of RAID 5 would require a much more powerful processor (or performance would be much slower).

    #15

    Quad and 8-way scientific systems with 4 video cards in them doing general purpose scientific computing (or any vector fp math app) come to mind as a very relevant app ... I could see clusters of those being very effective in crunching large science/math/engineering problems.

    #12/#13

    NUMA and memory bandwidth have nothing to do with NVIDIA's nForce 4 or nForce Pro, or even AMD's chipsets.

    Each Opteron has its own on-die memory controller, and the motherboard vendor can opt to implement a system that allows or disallows NUMA as they see fit. What's required is a BIOS that supports ACPI 2.0, does no node interleaving, and can build an SRAT. Also, the motherboard must allow physical memory to be attached to each processor's memory controller. It's really a BIOS and physical layout issue.

    The NVIDIA core logic does do a lot for being a single chip. But we should remember that it doesn't need to act as a memory controller as Intel's northbridge must. The nForce has no effect on memory config.
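
    For the curious, a minimal sketch of the parity arithmetic behind the RAID 5 overhead mentioned in the #18 reply above (illustrative only; a real implementation in driver or firmware also handles striping, partial-stripe writes, and rebuilds):

    ```python
    # RAID 5 parity is the XOR of the data blocks in a stripe; every write
    # means recomputing or updating it, which is where the CPU cost comes from.

    def xor_blocks(blocks):
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
    parity = xor_blocks(data)

    # If any one block is lost, it is recoverable by XORing the survivors:
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == data[1]
    ```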
  • tumbleweed - Monday, January 24, 2005 - link

    The Tech Report mentioned that the nForce Pro supports TCQ instead of just NCQ - is that wrong, or was that just not mentioned here?
  • Doormat - Monday, January 24, 2005 - link

    Perhaps I missed it, but what RAID modes is it capable of? 0/1/5? I'd love to have a board with 8 SATA II ports and dual Opteron processors and run RAID 5 as a file server (with 64-bit Linux, of course). Let the CPUs do the parity calcs (since that'd be the only thing they're used for). Mmmm... 8x400GB in RAID 5.
  • jmautz - Monday, January 24, 2005 - link

    Thanks, I see that now. When I missed it the first time, I went back and looked at the summary specs on page 3 and didn't see it listed.

    Thanks again.
  • ProviaFan - Monday, January 24, 2005 - link

    #14 / jmautz:

    On page 2 of the article, there is this statement:
    "NVIDIA has also informed us that they have been validating AMD's dual core solutions on nForce Professional before launch as well. NVIDIA wants its customers to know that it's looking to the future, but the statement of dual core validation just serves to create more anticipation for dual core to course through our veins in the meantime. Of course, with dual core coming down the pipe later this year, the rest of the system can't lag behind."
  • ProviaFan - Monday, January 24, 2005 - link

    For a long time, making a quad CPU workstation was pretty much not an option, because there was no way to connect an AGP graphics card for good 3D performance (yes, there is a PCI-X Parhelia, and no, that doesn't count). The only one I can remember was an SGI desktop system with 4 PIII's, though maybe the graphics on that were integrated (though of course they weren't bad, unlike Intel's).

    Now, with a quad CPU system and PCI-E, it will be possible to do whatever you want with those x16 slots, including using a high-performance graphics card (or two, which is something that used to be reserved for Sun systems with their proprietary graphics connectors). Or, with dual core, you could have a virtually 8-way workstation, though I'm not sure what the benefit of that would be outside of complex scientific calculations or 3D rendering.

    The sad part is there's no freakin way that I'll be able to afford that... :(
  • jmautz - Monday, January 24, 2005 - link

    I may have missed it, but I didn't see anything about support for dual-core processors. Was this mentioned? I would love to get a dual-core dual Opt board with all PCIe slots (2x16, 1x4, 4x1 would be nice).
  • R3MF - Monday, January 24, 2005 - link

    update on the Abit DualCPU board:

    Chipset

    * nVidia CrushK8-04 Pro Chipset

    > It does appear to use the nForce4 chipset, so one immediate question springs to mind: why, if they can get NUMA memory on dual CPU boards with the nForce4, can they not do the same with the nForce Pro?


  • R3MF - Monday, January 24, 2005 - link

    what does this mean for the new Abit DualCPU board:
    http://forums.2cpu.com/showthread.php?s=ef43ac4b9b...

    one core-logic chip, yet with NUMA memory; presumably this means it is not an nForce Pro board, if I understand AnandTech's diagrams correctly...?

    i like the sound of the Abit board:
    2x CPU
    2x NUMA memory per CPU
    2x SLI
    4x SATA2 slots
    1x GigE with Active Armour (my guess)

    best of all i am not paying for stuff i will never use like:
    second GigE socket
    PCI-X
    registered memory

    the only thing it lacks is a decent sound solution, but then every nForce4 board suffers the same lack. Hopefully someone will come out with a decent PCI-E Dolby Digital soundcard...
  • R3MF - Monday, January 24, 2005 - link

    @10 - I don't think so. Active Armour is a DSP that does the necessary computations to run the firewall, as opposed to letting the CPU do the grunt work.
  • Illissius - Monday, January 24, 2005 - link

    Isn't the TCP offload engine thingy just ActiveArmor with a different name?
  • R3MF - Monday, January 24, 2005 - link

    two comments:

    the age of cheap dual Opteron speed demons is not yet upon us, because although you only need one 2200 chip to have a dual CPU rig, the second CPU connects via the first, so you only get 6.4GB/s of bandwidth as opposed to 12.8GB/s. Yes, you can pair a 2200 and a 2050 together, but I bet they will be very pricey!

    the article makes mention of SLI for Quadro cards, presumably driver-specific to accommodate data sharing over two different PCI-E bridges as opposed to one PCI-E bridge, as is the case with nForce4 SLI. This would seem to indicate that regular PCI-E 6xxx series cards will not be able to be used in an SLI configuration on nForce Pro boards, as the ability will not be enabled in the driver. Am I right?
  • DerekWilson - Monday, January 24, 2005 - link

    The Intel way of doing PCI Express devices is MUCH simpler: 3 dedicated x8 PCI Express ports on their MCH. These can be split in half and treated as two logical x4 connections or combined into a x16 PEG port. This is easy to approach from a design and implementation perspective.

    Aside from NVIDIA having a more complex crossbar that needs to be set up at boot by the BIOS, allowing 20 devices would mean NVIDIA would potentially have to set up and stream data to 20 different PCI Express devices rather than 4 ... I'm not well versed in the issues of PCI Express, or NVIDIA's internal architecture, but this could pose a problem.

    There is also absolutely no reason (or physical spec) to have 20 x1 PCI Express lanes on a motherboard ;-)

    I could see an argument for having 5 physical connections in case there was a client that simply needed 5 x4 connections. But that's as far as I would go with it.

    The only big limitation is that the controllers can't span MCPs :-)

    Meaning it is NOT possible to have 5 x16 PCI Express connectors on a quad Opteron mobo with the 2200 and 3 2050s. Nor is it possible to have 10 x8 slots. The max bandwidth config would be 8 x8 slots and 4 x4 slots ... or maybe throw in 2 x16, 4 x8, 4 x4 ... That's still too many connectors for conventional boards ... I think I'll add this note to the article.

    #7 ... There shouldn't be much reason for MCPs to communicate explicitly with each other. It was probably necessary for NVIDIA to extend the RAID logic to allow it to span multiple MCPs, and it is possible that some glue logic was necessary to allow a boot device to be located on a 2050, for instance. I can't see it being 2M transistors' worth of MCP-to-MCP stuff, though.
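
    A back-of-the-envelope check of those quad-Opteron limits (same assumptions as before: 20 lanes and at most 4 physical connections per MCP, no slot spanning two MCPs):

    ```python
    # Sanity check of the quad setup (2200 + three 2050s = 4 MCPs).
    MCPS = 4
    LANES, CONNS = 20, 4

    # An x16 slot needs 16 of one MCP's 20 lanes, so at most one x16 per MCP:
    print("max x16 slots:", MCPS * (LANES // 16))   # 4 -> five x16 connectors impossible

    # An x8 slot fits twice per MCP (2 * 8 = 16 lanes, 2 connections):
    print("max x8 slots:", MCPS * (LANES // 8))     # 8 -> ten x8 slots impossible

    # Two x8 per MCP leave 4 lanes and 2 connections, i.e. room for one x4:
    print("max-bandwidth mix:", MCPS * 2, "x8 +", MCPS, "x4")  # 8 x8 + 4 x4
    ```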
  • ksherman - Monday, January 24, 2005 - link

    I agree with #6... I think the extra transistors would be used to allow all the chips to communicate.
  • mickyb - Monday, January 24, 2005 - link

    One has to wonder what the 2 million extra transistors are for. I would be surprised if it was "just" to allow multiple MCPs. Sounds like a lot of logic. I am also surprised about the 4 physical connector limit. I didn't realize the PCI-E lanes had to be partitioned off like so. I assumed that if there were 20 lanes, they could create up to 20 connectors.
  • miketheidiot - Monday, January 24, 2005 - link

    I'm impressed.
  • ksherman - Monday, January 24, 2005 - link

    All I can say is... DAMN! I wish I had the money to get one of these myself :(
  • KristopherKubicki - Monday, January 24, 2005 - link

    SDA: PCI-X is 64-bit PCI. PCI-Express, also known as PCIe, is a totally different animal. PCI-X is old, PCIe is new.

    Kristopher
  • SDA - Monday, January 24, 2005 - link

    PCI-X is supposed to mean PCI-E, right? PCI-X != PCI-Express, now I'm confused...

    Anyway, looking good. Wonder what performance will be like.
  • CBone - Monday, January 24, 2005 - link

    Sweet. Can't wait.
