Just when your organization finally implements all of the necessary infrastructure for a gigabit Ethernet local area network, you’re hit with the realization that maybe all of the time, money, and planning spent on the upgrade was for naught. Sure, the configuration of the new Ethernet switching infrastructure made for some insightful training, but maybe that’s all it was - training.

But rather than waiting idly for your organization's top decision makers to start peppering you with questions about your lack of foresight or research skills, take solace in the fact that the soon-to-be-released 802.11ac standard (gigabit Wi-Fi) may be a few years away from widespread enterprise implementation. (For background reading, see 802.What? Making Sense of the 802.11 Family.)

What is 802.11?

The Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard (along with its amendments) defines the implementation of wireless local area network technology. IEEE 802.11 is commonly referred to as Wi-Fi. Within IEEE 802.11, there are several related specifications, such as 802.11a, 802.11b, 802.11g and 802.11n. These "sub-standards" (technically referred to as amendments) are typically differentiated by their throughput rate and/or the frequency range in which their respective wireless signals are transmitted. For example, 802.11g operates within the 2.4 - 2.485 GHz range. With these characteristics as the baseline, it becomes clear that changes to transmission and reception techniques play a vital role in the development of new amendments within the overall IEEE 802.11 standard.
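These distinctions can be captured in a small lookup table. This is an illustrative sketch rather than anything from the standard's text; the bands and peak rates are the commonly cited textbook figures for each amendment, not measured real-world throughput.

```python
# Commonly cited bands and peak PHY rates for the major 802.11 amendments.
# Illustrative textbook figures, not measured real-world throughput.
AMENDMENTS = {
    "802.11a": {"bands_ghz": (5.0,),     "peak_mbps": 54},
    "802.11b": {"bands_ghz": (2.4,),     "peak_mbps": 11},
    "802.11g": {"bands_ghz": (2.4,),     "peak_mbps": 54},
    "802.11n": {"bands_ghz": (2.4, 5.0), "peak_mbps": 600},
}

def operates_on(amendment: str, band_ghz: float) -> bool:
    """Return True if the given amendment transmits in the given band."""
    return band_ghz in AMENDMENTS[amendment]["bands_ghz"]
```

For instance, `operates_on("802.11g", 2.4)` is true, while `operates_on("802.11b", 5.0)` is false.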

So now that some of the differentiating factors within the IEEE 802.11 standard have been established, how is 802.11ac different from its predecessors? To answer this question, we must dig into some details.

With the creation of the IEEE 802.11n standard, a concept known as multiple-input multiple-output (MIMO) was introduced. Simply put, MIMO means that two or more antennas are used on the sending side of a wireless link, and two or more antennas are used on the receiving side. The reasoning behind the multiple-antenna idea is the need for greater throughput without consuming extra bandwidth within the frequency range. All of this is made possible through a concept known as spatial multiplexing. Within the 802.11n standard, four spatial streams are available for transmitting and receiving, which partially helped developers of the standard achieve speeds as high as 200 Mbps - although it should be noted that such speeds were achieved under absolutely pristine lab conditions.
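The payoff of spatial multiplexing can be sketched as simple arithmetic: each independent stream adds roughly one stream's worth of rate without widening the channel. The 50 Mbps per-stream figure below is purely hypothetical, chosen only so that four streams line up with the roughly 200 Mbps lab result mentioned above.

```python
def mimo_peak_rate_mbps(per_stream_mbps: float, spatial_streams: int) -> float:
    """Idealized spatial multiplexing: independent streams reuse the same
    channel, so their rates add instead of requiring more spectrum."""
    return per_stream_mbps * spatial_streams

# Hypothetical 50 Mbps per stream; four 802.11n streams reach ~200 Mbps.
print(mimo_peak_rate_mbps(50, 4))  # 200
```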

The 802.11ac standard is said to support eight spatial streams, which is what has allowed researchers to achieve gigabit speeds under ideal lab conditions. So now that gigabit WLAN speeds have been achieved, enterprise environments will be completely saturated in gigabit transmission signals, right? Furthermore, shouldn't the network architect who recently recommended the purchase of an all-new gigabit Ethernet infrastructure just place his head on the chopping block right now? Not so fast.

Potential for the Enterprise

The 802.11n standard implemented a concept known as channel bonding, which is similar to interface bonding in that it takes two actual channels and combines them into one bigger channel. According to G.T. Hill, a director of technical marketing at Ruckus Wireless, the result is a bigger pipe, which translates into higher throughput speeds. The drawback is that 802.11n commonly operates on the 2.4 GHz frequency band, and in North America, this particular band has only three non-overlapping channels - typically 1, 6, and 11. The end result is that each node transmitting through the same wireless access point has to wait its turn prior to transmission. In a nutshell, this means more nodes - and more waiting.
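Channel bonding itself is easy to sketch: adjacent channels are merged into one wider pipe. The 20 MHz base channel width below is a simplifying assumption for illustration.

```python
BASE_CHANNEL_MHZ = 20.0  # assumed base channel width for illustration

def bonded_width_mhz(channel_count: int) -> float:
    """Channel bonding: combine adjacent channels into one wider pipe,
    e.g. two 20 MHz channels become a single 40 MHz channel."""
    return channel_count * BASE_CHANNEL_MHZ

# Typical non-overlapping 2.4 GHz channels in North America.
NON_OVERLAPPING_2_4GHZ = [1, 6, 11]
```

Note that bonding two channels in the 2.4 GHz band consumes two of its three non-overlapping channels, which is part of why the practice is so costly there.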

The 802.11ac standard operates on the 5 GHz frequency band, which offers two apparent advantages. First, the 5 GHz frequency band within North America is relatively empty compared to the 2.4 GHz band. Second, and perhaps more importantly, more channels are available within the 5 GHz band.

So this is a win all the way around, right? Maybe not. The problem lies in the fact that more channels on a higher band typically translate into less throughput per channel. Furthermore, the solution offered is exactly what is currently practiced within the 802.11n standard - channel bonding. So each node accessing a given wireless access point will still have to wait its turn prior to transmission. All of a sudden, gigabit speeds on the WLAN don't seem so achievable in the enterprise when one considers the sheer number of nodes that will be competing for access on each wireless access point. Add in the cost of purchasing 5 GHz-compatible end devices, and the decision to focus on Ethernet begins to make much more sense for enterprise environments.
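The turn-taking problem can be made concrete with a crude airtime-sharing model: if an access point's aggregate rate is divided fairly among its contenders, each node sees roughly the aggregate divided by the node count. The 1,000 Mbps aggregate and the node counts below are hypothetical, and the model ignores protocol overhead.

```python
def per_node_mbps(aggregate_mbps: float, node_count: int) -> float:
    """Crude fair-share model: nodes on one access point take turns,
    so each sees roughly aggregate / nodes (protocol overhead ignored)."""
    return aggregate_mbps / node_count

# A hypothetical gigabit access point, lightly vs. heavily loaded:
home_share = per_node_mbps(1000, 5)     # 200.0 Mbps per node
office_share = per_node_mbps(1000, 50)  # 20.0 Mbps per node
```

The same arithmetic is what favors the home over the enterprise: fewer contending nodes means a far larger share per node.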

Gigabit Wireless in the Home

The home is likely the venue where IEEE 802.11ac will make its biggest strides initially. The reasoning behind this assertion is actually quite simple: homes typically have far fewer wireless nodes than an enterprise environment does, and fewer nodes competing for a channel will invariably result in higher throughput speeds. Add to this the larger number of non-overlapping channels within the 5 GHz frequency band, and the likelihood that the neighbors will be operating on the same channel decreases dramatically.

What the Future Holds

Hill suggests that gigabit Wi-Fi will begin making inroads into the enterprise by 2013, and it will most likely begin making headway into homes even earlier. One of the primary concerns involves something that 802.11n had to overcome as well - backward compatibility. As of today, most enterprise wireless access points are 2.4 GHz/5 GHz capable, but the problem lies in wireless endpoints. Hill states that because of the eight-spatial-stream functionality within 802.11ac, new chips will have to be built into wireless devices in order to be compatible with the new standard. Hill goes on to state that chip manufacturers typically take approximately two years before they're ready to begin selling chips that can support additional spatial streams. So even if all of the kinks in the new standard were ironed out, a minimum two-year window would be needed to account for manufacturing realities.

Lawson suggests that a likely forecast for mass implementation of the new standard within the enterprise will be 2015. Lawson cites a study conducted by In-Stat that estimates that nearly 350 million routers, client devices, and attached modems with 802.11ac compatibility will ship annually by this date.

Trade Up or Stick With the Status Quo?

Organizations that currently support an Ethernet infrastructure would be wise to stick with the status quo. When one considers the advantages in throughput and security, taking the road most traveled may actually yield the greatest benefits. But does it have to be an either/or debate? Not necessarily; another wise move may be to dabble in the world of wireless while continuing to rely on Ethernet as the primary medium of choice. This approach allows organizations to move full speed ahead on their operational networks without being left behind on technological advances. (To read more about networking, check out Virtual Private Network: The Branch Office Solution.)