Experience Testing

Because we couldn't perform as many useful repeatable tests as we wanted, we did quite a bit of just plain gaming, both with and without the hardware. We tested EVE Online and Team Fortress 2: Bigfoot reports that Team Fortress 2 sees some of the highest benefit from their technology, and we included EVE to gauge the impact on network games and MMOs that Bigfoot had not singled out. We played around with WoW for a while, but we don't have a high enough level character to do anything where latency could really matter (large parties playing end-game content). These tests were done the way we normally game: with nothing running in the background and no downloads going on.

Playing on our Core i7 965 system with an NVIDIA GeForce GTX 280 and 6GB of RAM, we spent a couple of hours with each game: half of our time with onboard networking and the other half with the Killer Xeno Pro. Both games were run at their highest quality settings and resolution on our 30" panel.

In EVE we ran some missions and got into a little PvP action. While we made more ISK (EVE's in-game currency) playing with the Killer Xeno Pro, this was just the result of the missions we were handed. Neither PvE nor PvP situations felt any different with the onboard NIC versus the Killer Xeno Pro. Action was just as smooth and the UI was just as responsive no matter what was going on. We felt the same sort of loading hiccups when changing areas with both networking solutions as well: the Killer Xeno Pro just didn't deliver any tangible benefit in EVE Online.

Our Team Fortress 2 testing consisted of lots of different games played on both the on-board NIC and the Killer Xeno Pro.

We do need to preface this by acknowledging that none of us are really twitch shooter experts. Sure, we all played and loved Counter-Strike and CS:S, Unreal Tournament in all its incarnations, and many other FPS games, but we aren't the kind of people who run moderate resolutions with 16-bit color and most of the options turned as low as possible in order to get every single possible advantage. We are also not professional gamers, but we do love to game.

That being said, we really didn't notice any difference in our gaming experience with or without the Killer Xeno Pro. I tend to like sniping in games, and typically even non-twitch gamers can tell if they're being screwed out of kills by network issues. I didn't experience this sort of frustration with either solution. Gameplay was smooth, not jerky or problematic, even in larger firefights when there were no other issues at play. Both with and without the Killer Xeno Pro, the only problems we ran into were on servers that were themselves having problems.

The fact is that the most important factor is finding a game where you and all the other players have a low latency connection to the server. A minimally reduced client-side latency is not going to outweigh any other network issue in play.

In other words (and to sum up), when you have a bad connection, the Killer Xeno Pro is not going to fix it; when you have a good connection, the Killer Xeno Pro is not going to make the experience any better.
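
To put that in rough perspective, consider the arithmetic. The numbers below are illustrative assumptions, not measurements from our testing:

    # Back-of-the-envelope: what share of a round trip is the client's
    # network stack? Both figures are assumed for illustration.
    stack_ms = 0.5          # assumed client-side stack processing time
    internet_rtt_ms = 40.0  # assumed round trip to a decent game server

    total = stack_ms + internet_rtt_ms
    print(f"Client stack share of total latency: {stack_ms / total:.1%}")
    # ~1.2% -- even eliminating the client stack entirely is lost in
    # the noise of ordinary Internet jitter.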

Comments

  • mindless1 - Saturday, July 4, 2009 - link

    The thing is, even with a slower CPU and the PCI bottleneck, network processing still isn't a substantial percentage of CPU time, and gaming traffic isn't bottlenecked by the PCI bus.

    Even a lowly 500MHz Celeron isn't much of an issue if jumbo frames are used, though the CPU still has to be seen as a bottleneck for the gaming itself.
  • has407 - Sunday, July 5, 2009 - link

    Jumbo frames won't do squat in this case, and will likely cause worse problems. Even in a well-managed, closed environment, expect very little gain unless you're using a very fast SAN, fast switches, and a network admin who knows what they're doing.

    Do the math: even for 1GbE networks, the efficiency gain for most apps using jumbo frames is noise. For 10GbE you might notice it if you've got enough CPU and I/O bandwidth; for the typical home network, it's not worth considering.
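
    To make that arithmetic concrete, here's a quick sketch using the standard Ethernet framing numbers (38 bytes of fixed per-frame wire overhead; 40 bytes of IPv4+TCP headers):

        # Bulk-transfer efficiency: standard 1500-byte MTU vs. 9000-byte
        # jumbo frames. Fixed wire overhead per frame: preamble+SFD (8) +
        # Ethernet header (14) + FCS (4) + inter-frame gap (12) = 38 bytes.
        def efficiency(mtu, wire_overhead=38, ip_tcp_headers=40):
            payload = mtu - ip_tcp_headers   # application bytes per frame
            on_wire = mtu + wire_overhead    # bytes actually on the wire
            return payload / on_wire

        print(f"1500 MTU:   {efficiency(1500):.1%}")   # ~94.9%
        print(f"9000 jumbo: {efficiency(9000):.1%}")   # ~99.1%
        # A few points of bulk-transfer efficiency -- and game traffic is
        # tiny packets that never fill a standard frame, let alone a jumbo one.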
  • davecason - Saturday, July 4, 2009 - link

    PCIe, not PCI.
  • Theunis - Saturday, July 4, 2009 - link

    I wonder if it would be possible to use this board with my Linux x86_64 machine. LOL

    Wouldn't it be cool to run applications compiled for PPC on this board? Does it come with its own RAM?
  • ShawnD1 - Saturday, July 4, 2009 - link

    If they're trying to market this thing as something to reduce CPU usage, it doesn't really make much sense to test it with the fastest processor you can find. Try it with a CPU that has no speed at all, maybe a Celeron or Sempron.

    Of course that's not a real world test, but are any tests on Anandtech realistic? I don't run my games at 3000x2000 resolution, but ridiculous tests like that show us what a video card can do. For CPU tests we're looking at Phenom II and Core 2 Quad systems running games at 800x600 and getting 200fps. It's a ridiculous test, but it isolates the hardware being tested.

    The methodology in the article, in my opinion, is like testing a bunch of video cards at 800x600. Seeing that every video card is getting 200fps (the CPU bottleneck), the conclusion would be that upgrading the video card is a waste of money. Similarly, testing something that reduces CPU bottlenecking should not be done with a CPU that isn't bottlenecked by any game in existence.
  • Gannon - Saturday, July 4, 2009 - link

    This product has no real market; it's just an excuse to charge more money to the clueless among the gaming population.

    What they should really do with this card is make it multi-port and a router. I would love to ditch my piece of shit router that requires constant reboots because of someone's WiFi dropping (it works fine for wired connections). If they could build a wireless router network add-in card + network stack offloading + opening up the card to developers, then I'm in. Screw the "gaming" portion of it; how about building a quality product gamers would want?

    Such as: bandwidth control, so users who flog the connection can have their bandwidth limited and don't fsk_up your ping in an online twitch game like Quake 3. Other routers have attempted this (like D-Link's "GameFuel") so one can run torrents and game at the same time, but no one has really done it well.

    The networking stack is the least of a gamer's worries on a modern computer. There is a reason everything has become more integrated over time (audio + network): with the rise of the internet, NOT having an ethernet port on a computer is stupid, and most onboard NICs are so good nowadays that unless you are doing some serious file transferring you don't need anything more.

    Anyone claiming to see a performance benefit is shitting you; the real problem lies in input/output latencies to devices, RAM, and hard drives.

    I'll take audio + NIC integrated on future CPUs with an integrated memory controller over add-in shite that is just going to fade away over the next 5-10 years.
  • Gannon - Saturday, July 4, 2009 - link

    Also, the next real major speedup for games is in solid state disks. What we really need is:

    - Newer, faster memory technology; the CPU spends most of its time waiting on RAM (and RAM is damn fast compared to hard drives and even solid state drives).
    - Newer, faster permanent storage (SSDs and beyond).

    When solid state disks mature and a chipset finally comes out that can take full advantage of SSDs and the bandwidth they offer, we'll see a lot more performance improvement.

    Try playing a game that chugs on an old hard drive, then put it on an SSD and notice it's not as choppy when things get hairy. I noticed this when I moved many of my games from an older 320GB drive to my 1TB drive: games that were slow/choppy suddenly got a speed boost because drive I/O was a severe bottleneck. I can't wait to get an SSD once capacities expand and prices come down to saner levels.
  • navilor - Saturday, July 4, 2009 - link

    I have the Bigfoot KillerNIC M1 in my old machine (a Core 2 Duo E6600). I thought it was a complete crock of [expletive deleted] until a buddy of mine bought one and reported a much better gaming experience.

    And what exactly did it do?

    He played EverQuest and it lowered his latency a lot. This means his character is a lot more responsive to what is happening around him. Yes, you can disable Nagle's algorithm and do something similar, but that wasn't the end of it.

    He also reported being able to run with higher graphical settings enabled, and it removed a stutter that he previously hadn't noticed.

    So I sucked it up and decided to blow some cash on that product.

    Excellent investment. No longer did my CPU have to manage TCP/IP packets. UDP packets were invisible in Wireshark yet passed through to the OS without issue (there is a Game setting and an Application setting for those who need it).

    This network card offloaded work so my CPU could be used to manage other things. Now, you might think my CPU was lame, but World of Warcraft, which is the game I play, barely touched the CPU at all. No matter how powerful your CPU is, it still has to deal with networking.

    Unless it doesn't because you have a KillerNIC.

    Now, did it lower my latency? No, but it did for my friend. Did disabling Nagle's algorithm help? Yes, but it didn't smooth out my frame rate. Combine the two and you will notice a difference.

    On my new rig (a Core i7 920 with a GTX 295 and 12GB of RAM), disabling Nagle did jack for me. I am considering either purchasing the new Xeno Pro or stripping the M1 out of my old system. The card can run a firewall (iptables), so I don't have to burden Windows with that overhead.

    Oh, and high-end servers run network cards that do TCP offloading. I'm fairly certain those cards are there for a reason.

    You can prioritize your packets all you want on the network, too. That is always a good first step. What the network cannot do is reduce the time it takes for your application to:

    1. Generate a packet
    2. Have it go through the Windows networking stack
    3. Go through the Windows network driver
    4. And then send it out the cable

    The Killer products remove step two and the overhead of step three.

    Windows, by default, likes to lump small packets together before transmission; see Nagle's algorithm. That can be disabled via a few registry hacks. Removing the overhead of Windows coalescing those packets and shoving them through the driver smooths things out. If you have a KillerNIC, you can still manually disable Nagle (which lightens the workload of packet management a small amount) and let the KillerNIC worry about the rest.
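
    For what it's worth, an application can also opt out of Nagle on a per-socket basis with TCP_NODELAY, no registry hacks or special hardware required. A minimal sketch (the host and port are placeholders):

        import socket

        # TCP_NODELAY disables Nagle for this socket only: small writes go
        # out immediately instead of being coalesced into larger segments.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.connect(("game.example.com", 5000))  # placeholder host/port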

    So what this means is that the Killer products prioritize the packets INSIDE your machine BEFORE they hit the home network to be prioritized again by your internet-facing router.
  • DerekWilson - Sunday, July 5, 2009 - link

    I get the things that the Killer is doing, and those things are real ...

    But if your friend had taken the $120 for the Killer Xeno Pro (or the likely much higher cost of the M1 at the time) and spent it on a faster CPU, the benefits would have extended to much more than just faster packet processing. It would have benefited many other applications in addition to delivering the performance needed for smooth network play.
  • has407 - Saturday, July 4, 2009 - link

    I won't argue with your experience, as I haven't used one of these NICs. However, is it a cost-effective or appropriate way to solve the problem? Color me skeptical; the evidence is at best inconclusive.

    1. Nagle applies to TCP, not UDP.
    = You aren't going to see any improvement disabling Nagle for apps that use UDP.

    2. TCP_NODELAY is a way for apps to bypass Nagle.
    = Apps with time-sensitive needs and that use TCP should use it. Nothing you can do about this, but I would hope and expect game developers would be cognizant of it and use it appropriately (or use UDP).

    3. An old 2.2GHz Core 2 can drive >150KB/s of 1-byte packets with TCP_NODELAY; >500KB/s using 1-byte UDP packets; >225MB/s with 16KB packets; and it peaks at ~300K segments/sec. (A rough way to sanity-check this yourself is sketched after this list.)
    = For modern CPUs, CPU time is noise given typical Internet bandwidth.
    = Latency/CPU cost due to the network stack is noise.

    4. Some NICs have features which you may want to disable. E.g.:
    - Interrupt coalescing. This reduces CPU load by not generating an interrupt for every packet. That may be counterproductive for games.
    - Large send offload. Removes the CPU overhead of segmenting large packets into smaller ones and moves it to the NIC. Doubtful there'd be much difference unless the game is sending large packets (which I expect isn't the case).
    - Jumbo frames. Don't. In this scenario they're at best a NOP and at worst will degrade performance.
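
    As a rough way to sanity-check the packet rates in point 3 on your own machine, here is a loopback sketch. It ignores the NIC and wire entirely and only measures OS stack cost, so treat the numbers as a ballpark:

        import socket, time

        # Bounce tiny UDP packets through the local stack and measure the
        # per-packet cost. Loopback only: no NIC or wire effects involved.
        sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sink.bind(("127.0.0.1", 0))   # let the OS pick a free port
        src = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        src.connect(sink.getsockname())

        N = 50_000
        start = time.perf_counter()
        for _ in range(N):
            src.send(b"x")            # 1-byte payload
            sink.recv(16)             # drain so the buffer never fills
        elapsed = time.perf_counter() - start
        print(f"{N / elapsed:,.0f} packets/sec through the local stack")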

    In short, will the Killer NIC perform any better than a properly tuned system? I doubt it. Is the price premium worth more than the equivalent amount spent on a faster CPU or GPU, or the time required to tune the system? Your call, but again I doubt it.


    p.s. No, disabling Nagle does not reduce "the workload of packet management a small amount". Nagle exists to reduce per-packet overhead by coalescing small messages into larger packets.
