Bigfoot’s Killer-N 1102 Wireless Networking vs. the World
by Jarred Walton on August 10, 2011 10:38 AM EST

Testing Overview
In total, we used three different testing locations: an “ideal” setup with the wireless router less than ten feet from the test laptop(s), and two different “distant” locations where the laptop sits several rooms away, with closed doors and several sheetrock walls in between. We also tested with two different routers, a Netgear WNR3500L and a Linksys E4200. At each location, we ran four different networking-related tasks. For all of the tests, the server PC was connected directly to the Gigabit router via a six-foot CAT6 Ethernet cable, and the test PCs were otherwise idle (no extra processes, services, etc. were running).
The first task consists of copying files from the server to the test laptop. Here we have two different scenarios; the first is a single large file, a 2.2GiB HD movie. We time how long it takes to copy the file and then calculate the throughput in Mbps (1Mb = 1,000,000 bits). The second scenario consists of numerous small files. We use the contents of the Cinebench R10 and Cinebench R11.5 folders, which mix a few larger files with many small files; in total, this test has 8780 files and 511 folders, with the total size coming to 440MiB. Again we time the copy process and then calculate Mbps. Copying files between PCs is a completely real-world scenario, and with SSDs present in all of the test systems we should be bottlenecked by the networking performance (even a moderate HDD should be able to outpace WiFi speeds, but for Gigabit Ethernet you’ll want SSDs).
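For those who want to check the math, here’s a quick Python sketch of the conversion from a timed copy to Mbps. This is an illustration rather than our actual test harness—the file names and the 90-second figure in the comment are placeholders, not results.

import shutil
import time

def copy_throughput_mbps(src, dst, size_bytes):
    # Time the copy and convert to megabits per second (1Mb = 1,000,000 bits).
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_bytes * 8 / (elapsed * 1_000_000)

# For reference, a 2.2GiB file that takes 90 seconds works out to roughly 210Mbps.
# The file names here are placeholders.
print(f"{copy_throughput_mbps('movie.mkv', 'movie_copy.mkv', 2.2 * 1024**3):.0f} Mbps")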
The remaining three tests are more theoretical than real-world. First up, we have the NTttcp utility. Unlike file copying, this test transfers data from memory between PCs, so the storage subsystem is out of the loop. We run two different scenarios, first with the laptop acting as the “client” and the desktop as the “server”, and then with the roles reversed. This tests the maximum theoretical throughput of the laptop in both transmit and receive modes; most of the wireless devices do better at receiving than sending, but in a few instances the reverse holds. We used the following commands on the client/server:
ntttcpr.exe -m 6,0,[Server IP] -a 4 -l 64000 -n 4000
ntttcps.exe -m 6,0,[Client IP] -a 4 -l 64000 -n 4000
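NTttcp is a Microsoft utility, but the memory-to-memory idea is easy to illustrate. The Python sketch below is our own illustration (not NTttcp code; the port number is arbitrary, and the 64000/4000 values simply mirror the numbers in the commands above): it pushes buffers of in-memory data over a TCP socket and reports throughput, which is exactly why storage never enters the picture.

import socket
import sys
import time

PORT = 5001
CHUNK = 64000          # 64000 and 4000 mirror the -l and -n values above
COUNT = 4000

def server():
    # Receive everything the client sends and report throughput in Mbps.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"{received * 8 / (elapsed * 1_000_000):.1f} Mbps")

def client(host):
    # Send COUNT buffers of in-memory data; no disk access is involved.
    payload = b"\x00" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, PORT))
        for _ in range(COUNT):
            cli.sendall(payload)

if __name__ == "__main__":
    # Run "python tcp_blast.py server" on one machine and
    # "python tcp_blast.py client <server IP>" on the other.
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])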
Our third test is very similar to the above, only we use Netperf instead of NTttcp and we conduct both TCP and UDP testing. We use a precompiled version of Netperf that Bigfoot sent, but a Windows build available from Chris Wolf shows essentially the same results. One thing to be aware of is that Netperf has a variety of options; we tested with the “stock” options, though Bigfoot has a second set of recommended command line parameters. The throughput can be very different depending on what options you specify and what wireless card you’re using. Several of the systems we tested with the Bigfoot parameters performed horribly on the UDP test, so we decided to stick with the stock Netperf command (though we’ll show the performance with the Bigfoot settings on the E4200 Ideal page as a reference point). The default options simply specify the IP address of the host, along with “-t UDP_STREAM” for UDP testing; the Bigfoot recommended commands for TCP and UDP are:
netperf.exe -l 20 -t TCP_STREAM -f m -H [Server IP] -- -m 1460 -M 1460 -s 4000000,4000000 -S 4000000,4000000
netperf.exe -l 20 -t UDP_STREAM -f m -H [Server IP] -- -m 1472 -M 1472 -s 4000000,4000000 -S 4000000,4000000
The key difference between the stock options and the Bigfoot parameters is the selection of a 1472-byte packet. The default UDP packet size is 1000 bytes, so that’s what most manufacturers optimize for, but larger packets are possible. Bigfoot states that video streaming often uses larger packets to improve throughput, so they’ve worked to ensure their drivers perform well with many packet sizes. Skip to the “ideal” testing with the Linksys router for more details on our Netperf UDP testing with the Bigfoot parameters.
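The 1472-byte figure isn’t arbitrary: it’s the largest UDP payload that fits in a standard 1500-byte Ethernet frame without fragmentation. A few lines of Python show the math, along with why larger payloads mean fewer packets (and less per-packet overhead) for a given data rate; the 300Mbps rate is just an example value.

MTU = 1500           # standard Ethernet frame payload
IP_HEADER = 20       # IPv4 header without options
UDP_HEADER = 8

max_udp_payload = MTU - IP_HEADER - UDP_HEADER
print(max_udp_payload)   # 1472 -- the -m/-M value in the Bigfoot UDP command

# Packets per second needed to sustain 300Mbps at each payload size
for size in (1000, 1472):
    print(size, round(300_000_000 / (size * 8)), "packets/sec")   # 37500 vs. ~25476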
Our final test is a measure of latency, and you might be tempted to take the results with a grain of salt as we’re using a utility provided by Bigfoot called GaNE (Gaming Network Efficiency). The purpose of this utility is to measure real latency between a client and a server, down to the microsecond, while simulating a network gaming workload. Most games don’t support any logging of network latency, so GaNE is an easy-to-use substitute. It sends a 100-byte UDP packet every 200ms, with a timestamp included in the packet. The client receives the packet and sends it back, and by looking at the timestamp the server can determine roundtrip latency. According to Bigfoot, the majority of network games send ~100-byte packets at intervals ranging from every 50ms to a few seconds apart, so they chose a value that represents a large number of titles. While not all games are the same, there are long-established “best practice” coding standards for network gaming, so a lot of titles should behave similarly to GaNE. For the test, two laptops connect to the server at the same time and GaNE reports average latency along with the average latency of the worst 10% of packets; the latter is a better measurement of jitter on your network connection.
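GaNE itself isn’t publicly available, but the basic idea is straightforward to sketch. The Python below is our own illustration (not Bigfoot’s code; the port number is arbitrary): one side bounces packets straight back, while the other sends a timestamped ~100-byte UDP packet every 200ms and reports the average round trip along with the worst-10% average.

import socket
import struct
import sys
import time

PORT = 6001
INTERVAL = 0.2      # 200ms between probes, matching the GaNE description above
PACKET_SIZE = 100   # ~100-byte payload, in line with typical game traffic

def reflector():
    # Client side: bounce every packet straight back to the sender.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        while True:
            data, addr = sock.recvfrom(2048)
            sock.sendto(data, addr)

def prober(host, count=100):
    # Server side: send timestamped packets and measure the round trip.
    samples = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(1.0)
        for _ in range(count):
            payload = struct.pack("d", time.perf_counter()).ljust(PACKET_SIZE, b"\x00")
            sock.sendto(payload, (host, PORT))
            try:
                echoed, _ = sock.recvfrom(2048)
            except socket.timeout:
                continue  # treat a lost packet as a dropped sample
            sent = struct.unpack("d", echoed[:8])[0]
            samples.append((time.perf_counter() - sent) * 1000.0)  # milliseconds
            time.sleep(INTERVAL)
    samples.sort()
    worst10 = samples[-max(1, len(samples) // 10):]
    print(f"average: {sum(samples) / len(samples):.2f} ms")
    print(f"worst 10% average: {sum(worst10) / len(worst10):.2f} ms")

if __name__ == "__main__":
    # Run "python latency_probe.py reflect" on the laptop and
    # "python latency_probe.py probe <laptop IP>" on the wired server.
    if sys.argv[1] == "reflect":
        reflector()
    else:
        prober(sys.argv[2])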
Besides GaNE, we conducted some informal testing of latency with other utilities. First up is the Windows ping command; we saw latency spikes similar to those in GaNE, so the results of both utilities appear valid. Ping doesn’t work like most games, however: it sends smaller packets less frequently and uses the ICMP protocol instead of UDP. That brings us to the third latency test. About the only games to include support for latency logging are Source Engine titles, so we tested latency with a local server running Counter-Strike: Source using the “net_graph 3” console command. We’re unable to capture raw latency numbers this way, but we could clearly see periods of higher latency on all of the Intel WiFi cards. If anything, the latency blips tended to occur more frequently in CS:S than in GaNE, though interestingly enough the Atheros controller basically tied the Bigfoot in our experience (1ms ping times for most of the match, with no discernible spikes). The Realtek card had a higher average latency of around 6ms, but at least there wasn’t a lot of jitter.
In short, latency measurements with GaNE, ping, and CS:S all show similar results, at least when measured over the same local network. Despite concerns about using a Bigfoot-provided utility, it doesn’t appear that Bigfoot is doing anything unusual. The bigger question is what the results mean in the world of Internet gaming. Since we’re running on a local network, latency is already an order of magnitude (or two) lower than what you’d generally experience over the Internet. For online gaming, the latency we’re reporting is effectively additive, on top of the latency between your router and the Internet server. In other words, if you’re playing a game where it takes 60ms for your data packets to get from the server to your router, what GaNE reports is how long it takes those packets to get from the router to your PC. The bigger issue with latency isn’t the 3-5ms average that you’ll get over the local network; it’s the jitter, where 100+ms spikes occur, and as noted GaNE provides an indication of jitter with the “worst 10% average latency” result.
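To see why the worst-10% average is the more telling number, consider a contrived set of samples (the values below are purely illustrative, not measured results): a handful of 100+ms spikes barely move the overall average, but they dominate the worst-10% figure.

# 95 "clean" samples at 4ms plus five 120ms spikes -- illustrative numbers only
samples = [4.0] * 95 + [120.0] * 5

samples.sort()
worst10 = samples[-len(samples) // 10:]
print(f"overall average: {sum(samples) / len(samples):.1f} ms")     # 9.8 ms: barely affected
print(f"worst 10% average: {sum(worst10) / len(worst10):.1f} ms")   # 62.0 ms: dominated by the spikes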
We ran each of the above tests at least five times at each test location on each laptop, and most of the tests were run dozens of times. The reason is that we were trying to measure best-case performance at each location, so we didn’t bother trying to average all of the results. We did notice a distinct lack of consistency across all wireless cards, especially on 2.4GHz connections, but as noted in the introduction, interference from other sources is nearly impossible to avoid. We report the best result for each test, which was never completely out of line with other “good” results. If we averaged the best five runs on each device, we’d be within 3% of our reported result, but we skipped that step in order to keep things manageable.
With the test parameters out of the way, let’s move to our first testing location and see how the various wireless devices perform.