November 05, 2015

Living in a virtualized world …

Gamers are used to living in a virtualized world, battling imaginary villains and taking castle towers. But that is not the only virtualized world that exists today: our computer-addicted world is going virtual inside virtual machines. As with any new technology shift, a new vocabulary has emerged to describe the entities that "live in this world". With the advent of companies like VMware, applications are now created and run on "virtual" computers, letting businesses leverage their investment in VM software and minimize spending on physical hardware.

With virtual solutions becoming entrenched in businesses, other ways to use this technology have emerged. One important evolutionary step is Network Functions Virtualization (NFV). NFV takes the basic virtual computer concept one step further, adding design flexibility by decoupling major network application functions and allowing them to operate in independent VM contexts.


This "application body" disassociation results in a network deployment flexibility that has never before been possible. VM solutions freed businesses from physical hardware restrictions. NFV can compound that by freeing businesses from physical location restrictions. Depending upon the application and specific customer requirements, decoupled VM-NFV elements can be deployed either distributed or centralized and still be viewed as a cohesive service used to meet user needs. Additionally, the flexibility of NFV also enables better scaling of network components across a network and can have a direct impact on lowering CAPEX and TCO. There are a plethora of examples of how NFV can impact your virtual world. When taken into consideration at design time, applications can be developed to fit a segmented deployment model across multiple VM systems. If a VM system maxes out its current resources, expansion only requires deploying additional NFV-VM resources as a business or network grows. The whole solution may be co-resident in the same facility or be distributed but can be expanded seamlessly.

One obvious example where NFV can play a vital role in optimizing a network is in managing user data flows. There are two major classes of data streams in all computer networks:

  1. Control - the information used to configure, provision, monitor, and troubleshoot the operation of the network itself. It has nothing to do with the applications that are used on the network.
  2. Data - network traffic received or transmitted by network nodes in support of applications.

Often user data becomes the predominant traffic on the network, and whether distributed or centralized, such data may require special handling for security and QoS. Traditionally this was achieved by managing flows as VLANs and provisioning switches and routers to direct the flows to the correct target. This approach works but can be tedious for IT teams and limits the classes of service that can be implemented and sustained. Client nodes and switches/routers have to be configured, which becomes increasingly cumbersome as the network grows. One way to simplify this problem is to create an NFV service that eliminates the complexity of configuring VLANs for clients and network infrastructure. Such a virtual service can aggregate user data based on SSIDs, apply encryption and policies to the data, and route that stream to the designated receivers. A natural example in a business context would be to collect and securely route all "guest" traffic to the Internet with a minimum of management overhead. The NFV approach requires only that special SSIDs be created at the APs; clients no longer need to be VLAN tagged. Traffic is aggregated at the access point and then forwarded to the NFV service for ultimate delivery.
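To make the idea concrete, here is a minimal sketch (not a description of any specific Ruckus product) of how an aggregation point running Linux could bridge a guest SSID into a GRE tunnel toward a centralized NFV gateway. The interface names and addresses are hypothetical, and a real deployment would layer encryption and policy on top.

    # Hypothetical addresses: 192.0.2.10 is the local aggregation point,
    # 198.51.100.1 is the remote NFV gateway that forwards guest traffic.
    ip link add gre-guest type gretap local 192.0.2.10 remote 198.51.100.1
    ip link set gre-guest up

    # Bridge the guest SSID interface (here called wlan0-guest) into the
    # tunnel so guest frames reach the NFV service without per-client VLANs.
    ip link add br-guest type bridge
    ip link set br-guest up
    ip link set wlan0-guest master br-guest
    ip link set gre-guest master br-guest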

An NFV approach can expand deployment options, lower costs through proper resource scaling, and improve performance within a network. Ruckus sees real value in such an approach and has begun implementing unique NFV solutions in our virtualized SmartZone product portfolio.


October 29, 2015

Wi-Fi’s Whipping Boy Complex

If you’ve ever attended a large conference or exhibition, chances are everyone whined about the Wi-Fi. But the truth is, a lot of the time, it’s not Wi-Fi’s fault at all.

While there is a litany of Wi-Fi-specific deployment issues that can cause problems in increasingly crowded Wi-Fi networks – too many or too few APs, improper channel planning, haphazard AP placement, or too many SSIDs – even when all of these are handled perfectly, Wi-Fi still tends to get the blame when anything goes haywire.

This isn’t an exhaustive list of every possible networking problem, but here are some of the more common culprits that make Wi-Fi everyone’s whipping boy, especially in highly dense wireless conditions.

More Broadband Please!

The most frequent and obvious problem for which Wi-Fi is castigated is lousy or slow broadband connectivity. The purpose of almost all Wi-Fi networks is to provide local connectivity for clients to get to the Internet. Even the fastest Wi-Fi networks on the planet, which can now deliver local connection speeds of hundreds of megabits per second to clients, come to a crawl if there isn’t enough backhaul to the Internet. Even a 100Mbps Internet connection is too slow when you have thousands of clients served by dozens of APs capable of near-gigabit speeds. This makes Wi-Fi appear slow or unreliable.
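A quick back-of-the-envelope calculation (with purely illustrative numbers) shows how badly the math works out: divide the Internet uplink by the number of active clients and the per-client share collapses, no matter how fast the local Wi-Fi links are.

    # Illustrative numbers, not measurements from any particular venue.
    uplink_mbps = 100          # Internet backhaul
    active_clients = 2000      # clients pulling data at the same time

    per_client_mbps = uplink_mbps / active_clients
    print(f"Each client gets roughly {per_client_mbps:.2f} Mbps of backhaul")
    # ~0.05 Mbps per client, even though the Wi-Fi link itself may run at hundreds of Mbps.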

Another major problem, not directly related to Wi-Fi, is simply poor wired network design. Switching, routing and higher-layer functions such as DHCP and DNS that are not configured to support the explosion of Wi-Fi connections can wreak havoc on the network, yet it still appears to be a Wi-Fi problem.

Addressing Users

The Dynamic Host Configuration Protocol (DHCP) is a method for automatically configuring TCP/IP network settings on computers, printers, and other network devices. There are a number of ways that setting up DHCP improperly will cause problems that, to most people, look like broken Wi-Fi.

A common problem is a DHCP lease that is too long. The lease is the amount of time a device is allowed to retain an IP address, and in a standard network configuration it can be hours or even days. Active devices ask the DHCP server to renew their lease when it is halfway up. An inactive device simply loses its lease, and the address is released and becomes available for assignment to another device.

In a high-density network with lots of churn it is possible to run out of IP addresses. It’s sort of like a train station where people come and go all day long: when the lease is too long, the DHCP server runs out of assignable addresses, again giving the impression that Wi-Fi is broken. Shorter leases generate a slight bit of additional renewal traffic, but that is worth the tradeoff versus depleting the available IP addresses.
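As a hedged example, an ISC DHCP server (dhcpd) lets you shorten leases for a high-turnover guest subnet with a couple of directives. The subnet, range, and timer values below are illustrative; the right lease length depends on your pool size and visitor churn.

    # dhcpd.conf fragment for a hypothetical guest subnet
    subnet 10.20.0.0 netmask 255.255.0.0 {
      range 10.20.1.1 10.20.250.254;
      option routers 10.20.0.1;
      default-lease-time 1800;   # 30 minutes instead of hours or days
      max-lease-time 3600;       # hard cap of one hour
    }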

Lost in Translation

The Domain Name System (DNS) is a vital part of any network. Whenever a device needs to know what address to use when passing traffic, the DNS server translates a name, or URL, into an actual IP address.

If a DNS server is underpowered, in a busy or dense Wi-Fi environment it can fall way behind, trying to provide name resolution for more devices than it has the processing power to handle. And if a DNS server crashes or clients can’t reach it, users are effectively dead in the water. Devices can only sporadically pass traffic, which gives the impression of an overloaded Wi-Fi network even though every client is properly connected.

DNS redundancy is a helpful fix here, especially in highly dense Wi-Fi conditions. A properly designed network has redundancy built in, providing multiple DNS servers to support large numbers of users.
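One simple way to provide that redundancy is to hand out more than one resolver via DHCP so clients can fail over on their own. The addresses below are placeholders.

    # dhcpd.conf fragment: advertise a primary and a secondary DNS server
    option domain-name-servers 10.20.0.53, 10.20.0.54;

    # A Linux host would then end up with both in /etc/resolv.conf:
    #   nameserver 10.20.0.53
    #   nameserver 10.20.0.54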

The Big MAC Attack

Every device has a unique media access control (MAC) address used by network switches to move traffic around. Different types of switches have different limitations on the number of MAC addresses of which they can keep track. 

A core switch typically has a large MAC table that lets it track a lot of devices, while an edge switch is more limited. When that limit is reached, the switches lose the ability to properly pass traffic where it needs to go and end up flooding all ports in an attempt to find the correct path. By the time this happens there is already quite a lot of traffic on the network, and the result is a lot of dropped packets.

If a large number of devices are attempting to access the network at the same time, DHCP requests and ARPs become affected and we once again see a problem that looks like the Wi-Fi is broken even though the problem has nothing to do with Wi-Fi.                       

A more devious limitation than the number of MAC addresses a switch can handle is the number it can handle on any one virtual LAN or subnet. At an event with a very large number of attendees, the guest network is generally configured as a single VLAN, but edge switches are often limited to a smaller number of MAC addresses per VLAN than for the switch as a whole. In that case every edge switch ends up seeing the MAC address of every guest connected to the network, possibly exceeding the per-VLAN limit of those switches. Correctly sizing the edge switches, controlling broadcast domains, using multiple VLANs where possible, or tunneling traffic to beefier core switches will help avoid this problem.
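A quick sanity check at design time can catch this before the event does. The sketch below compares an expected guest count against a per-VLAN MAC table limit; both figures are made up for illustration, so check your switch datasheets for the real numbers.

    # Illustrative capacity check, not vendor data.
    per_vlan_mac_limit = 1024      # hypothetical edge-switch limit per VLAN
    expected_guests = 4000         # devices expected on the guest WLAN
    guest_vlans = 4                # VLANs the guest traffic is spread across

    macs_per_vlan = expected_guests / guest_vlans
    if macs_per_vlan > per_vlan_mac_limit:
        print("Per-VLAN MAC limit exceeded: add VLANs or tunnel guest traffic to the core")
    else:
        print("Within the per-VLAN limit")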

Now Broadcasting

When broadcast packets (typically UDP) are sent by a device over Wi-Fi, they are sent at much lower speeds than if they were sent directly to the end receiving device (web server, VPN, etc.). Broadcast traffic has no expectation of an acknowledgement, so the sender doesn’t always know if the packet was received. Because of this, broadcast packets are typically sent multiple times.

The effect is that broadcasts take up a lot more airtime than unicast traffic. Because Wi-Fi is a shared medium where users contend for access and wait for the network to be available before they can transmit or receive, too many broadcasts will bring a network to its knees. But certain types of broadcast, such as DHCP requests and ARPs (the Address Resolution Protocol used to map IP addresses to the MAC addresses of devices on the network), are necessary. Simply turning off broadcast traffic is not an option.

Good network design always accommodates broadcasts but limits them as much as possible. A large, flat Layer 2 network, typical for an event like a trade show or football game, is a perfect opportunity for broadcasts to kill the network. Every device sees every other device’s broadcasts – whether it needs to or not. Worse yet, while a broadcast is on the air, no other device can send real data.

Too many broadcasts within a wired network are just as deadly as too many broadcasts over the air. Packets get dropped at the switches when a packets-per-second limit is reached, and the result looks like an overloaded Wi-Fi network when that is not the case at all.
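The airtime penalty is easy to see with a little arithmetic: a broadcast frame sent at a low basic rate occupies the channel far longer than the same payload sent as unicast at a high data rate. The frame size and rates below are illustrative, and per-frame overhead (preambles, interframe spacing, retries) is ignored to keep the sketch simple.

    # Illustrative only: ignores preambles, contention, and aggregation.
    frame_bits = 300 * 8               # a 300-byte frame

    broadcast_rate_mbps = 6            # typical low basic rate used for broadcasts
    unicast_rate_mbps = 300            # a healthy 802.11n/ac unicast rate

    broadcast_airtime_us = frame_bits / broadcast_rate_mbps   # bits / (Mbit/s) = microseconds
    unicast_airtime_us = frame_bits / unicast_rate_mbps

    print(f"Broadcast: {broadcast_airtime_us:.0f} us, unicast: {unicast_airtime_us:.0f} us")
    print(f"One broadcast costs roughly {broadcast_airtime_us / unicast_airtime_us:.0f}x the airtime")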

On the Wi-Fi side, client isolation can help reduce the effect and also adds security for wireless devices. It’s also necessary to control broadcasts on the switched side of the network, using VLANs to reduce broadcast domains. Switches that allow VLANs to be dynamically assigned to a single device or group of devices help solve this problem.

Got Perspective?

Ultimately, Wi-Fi often gets a bad rap that is completely undeserved. Yes, Wi-Fi is not perfect, but at the end of the day Wi-Fi also depends on the wired network that connects everything together and can never exceed its capabilities. These common wired pitfalls only touch the surface of the many non-Wi-Fi challenges that impact Wi-Fi, but hopefully they give Wi-Fi whiners some much-needed perspective.

The Path for Cloudpath = Multi-Vendor Wi-Fi Security

Author: Greg Beach, Ruckus VP, Product Management

Looks like we created a Ruckus last week with our Cloudpath acquisition.

Many customers and partners jumped on board to praise the deal, which we believe will simplify Wi-Fi onboarding and security for the industry. Cloudpath customer and blogger Lee Badman called it a “force multiplier for Wi-Fi support” – reducing support tickets and expediting users onto secure WLANs. Ruckus channel partner Gary Berzack (CTO of eTribeca) said the deal gives Ruckus “a stake in making a whole onboarding solution” and “is going to help us sell into the mid-market more easily”.

Not surprisingly, our competitors weren’t so generous in their commentary. Some jumped in to scare customers and partners with assertions that Ruckus Cloudpath will stop supporting networks that use competitive WLAN equipment. While this is a predictable competitive response, it couldn’t be farther from the truth.  

Let me make it clear that Ruckus intends to continue multi-vendor support in Cloudpath. We know it’s one of the reasons Cloudpath customers love this solution, which delivers secure and user-friendly policy management for both BYOD and IT devices. Our customers likewise rely on Wi-Fi for their business, and they want to make it easier and more pervasive – exactly what Cloudpath has been doing for nine years.

Why does this help Ruckus? Well, simply put, our goal is to deliver the best wireless experience in the industry. It’s not all about access points and controllers. It is all about the wireless experience – and that experience requires more performance, great reliability, easier onboarding and better security. That’s who we are – #SimplyBetterWireless.

It’s worth noting that Cloudpath is built entirely on standards-based protocols including 802.1X (an IEEE security framework), RADIUS (an IETF standard AAA protocol), EAP-TLS (an IETF authentication protocol utilizing certificates) and X.509 certificates (a standard for public key infrastructure).


Competitive products are RADIUS-centric, though Cloudpath is designed to be smarter – using the standard functions present in all RADIUS servers while leaving the choice of RADIUS server to the customer. New customers can use the RADIUS server that comes with Cloudpath, or if they have an existing RADIUS server, we can extend their investment to deliver certificate-based security across their infrastructure. This contrasts with our competitors, whose solutions revolve around their RADIUS server and are priced accordingly—even if “their” RADIUS server is actually the open-source FreeRADIUS.

Cloudpath software can actually act as a single point of policy control across all wired, wireless and remote infrastructure. Cloudpath also offers a cloud-hosted solution for customers who prefer to abstract away the complexity of an on-premises solution – an offering our competitors lack. Cloudpath founder and CEO Kevin Koster – who continues to lead the Cloudpath team within Ruckus – says it best: “Our core architectural principle is to avoid dictating a security architecture to our customers. This is what enables us to provide increased security using certificates, without increasing cost and complexity.”

Finally, we’re well aware that Ruckus’ position as the #1 pure-play wireless infrastructure company is due in part to our open standards commitment. This commitment enables us to continue expanding our ecosystem of hardware, services and software partners across the enterprise, carriers, education, hospitality and other verticals.

The bottom line: We bought Cloudpath because we believe the Wi-Fi authentication market is ripe for disruption – and Cloudpath has the easiest-to-use and most secure software in the market. Cloudpath’s architectural approach provides interesting ways for access points, controllers, devices and applications to become even smarter about how they use certificates. Ruckus is committed to leading the way in certificate-based Wi-Fi security – building new features and capabilities into our Wi-Fi infrastructure that take advantage of Cloudpath capabilities, while helping our customers improve the wireless experience regardless of what badge is on the access point.

# # #

September 16, 2015

Securing the World's Most Hostile Wi-Fi Network

Wi-Fi security recently took on the ultimate challenge at the infamous Black Hat USA 2015 security conference, held this year at Mandalay Bay in Las Vegas.

One of the world’s premiere hacking events, Black Hat attracts some 10,000 security super geeks who like to break stuff.

Wi-Fi has always been a prime security target at Black Hat, which describes its network as “the most hostile Wi-Fi network in the world.” And this year was no different. So Black Hat wanted to do something unique, something better.

So we took the challenge, teaming with RG Nets, a little-known but super sophisticated Wi-Fi application gateway innovator, to create a virtually unhackable network.

The Black Hat Wi-Fi network is infamous as a playground for attendees to try out the latest hacking tools against not only the greater Internet, but each other. Black Hat faced two fundamental challenges:

  1. providing high-speed, high-density Wi-Fi connectivity to delegates, while
  2. ensuring bullet-proof Wi-Fi security that could prevent attendees from using the Wi-Fi to compromise the entire network and each other.

Historically, Black Hat used a WPA2 pre-shared key (PSK) to provide hardened encryption that keeps Wi-Fi data secure by neatly tucking it away in a cozy AES encrypted shell. But that just wasn’t enough for this crowd.

When the bad guys already know the PSK, simply having an encrypted SSID is not good enough. Devices on the network are still able to communicate with each other, creating a ripe environment for ARP spoofing attacks, broadcast storms, DoS, and exploit scanning, not to mention visibility of unsecured services such as file shares and remote desktops.

Wireless client isolation somewhat mitigates this problem, but only within a single access point. Traditional network segmentation techniques such as implementing a handful of VLANs fail to create enough isolation between clients. Even "modern" VLAN pooling systems often fail to sufficiently minimize the number of devices that can talk to each other in high density environments, due to lazy assignment algorithms and a limited number of supported VLANs. The ideal solution for maximum security between users is to tag each device's traffic with its own unique VLAN ID, which effectively places each user in his or her own "sandbox" network. This per-device VLAN strategy prevents a would-be attacker from harming the network infrastructure and other users by making ARP spoofing, IP address conflicts, rogue DHCP servers, network scans, and other attacks and exploits irrelevant against anyone but themselves.

So Ruckus teamed up with RG Nets to create a safer and more reliable high-density Wi-Fi network by utilizing a fancy dynamic VLAN assignment and routing engine to provide thousands of isolated networks for Black Hat attendees.

The real goal was not only to try and provide secure, high-speed Wi-Fi, but also find a way to automate an 802.1X framework that provides AES-level encryption and authentication while dynamically assigning each device or a group of devices to a discrete VLAN.  

This would require close interworking between a cluster of Ruckus WLAN controllers (Ruckus SCGs in this case) and the RG Nets rXg Wireless Application Gateway system.

 RG Nets configured its system to act as a firewall in between the Ruckus SCG and rXg clusters. The wired network to which the Ruckus APs were connected was completely locked down. The MAC address OUIs of the Ruckus APs were programmed into the rXg system.

RG Nets rXg Dashboard

This ensured that only authorized Ruckus APs could utilize the wired network and communicate with the rXg's RADIUS server. Routing out to the Internet was completely disabled on the wired network. This was particularly important for Black Hat because it would have been very easy to sit on the floor in the Mandalay Bay conference area, unplug an AP, and instead connect a laptop via Ethernet to the same wired fabric. Disconnected APs were actively monitored throughout the event, and any "missing" AP MAC addresses were blacklisted to prevent someone from spoofing the MAC address of an AP and gaining access to the management network.


When any Black Hat user associated with the Ruckus Wi-Fi network, a RADIUS 802.1X request would be sent from the Ruckus controller to the in-line rXg system that would then dynamically assign each user, or a small group of users, to a unique VLAN that would follow them wherever they roamed.

The rXg is able, among a myriad of sophisticated packet processing chores, to support thousands of dynamic VLAN assignments, allowing each user, if needed, to have their own logical network, while keeping track of each user and their VLAN assignment.

With 802.1X MAC authentication configured on the Ruckus WLAN controllers, when a client tried to access the Wi-Fi network using a pre-shared key, the rXg system would receive a RADIUS access request from the WLAN cluster.

That access request contains the client’s MAC address and some other information used by the rXg to assign a VLAN tag.  The rXg then responded to the Ruckus controllers with a RADIUS Access-Accept response that contains the VLAN ID for each client or group of clients. 
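The VLAN itself rides in standard RADIUS tunnel attributes (RFC 3580), so the mechanism is not specific to the rXg. A FreeRADIUS-style illustration of a per-MAC reply might look like the entry below; the MAC address and VLAN ID are placeholders, and the rXg computes these assignments dynamically rather than reading them from a static file.

    # Hypothetical MAC-auth entry returning a per-device VLAN
    "0011223344ff" Cleartext-Password := "0011223344ff"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-Id = 2047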

Ruckus SCG Dashboard

Using this information from the rXg, the Ruckus WLAN controllers accepted the connection from the client and each AP would then tag client traffic with the assigned VLAN ID.

With all the traffic trunked to the Ruckus WLAN cluster and rXg system, the architecture proved to be extremely secure and successful, drastically reducing the attack “surface area” at Black Hat.

Even if a client were compromised, a hacker would be able to “see” users and services only within a particular VLAN and not the entire network. So each user or user group effectively had their own virtual network that could follow them around, coupled with AES encryption on the airlink. Wow.

And to ensure consistent, high-speed connectivity and fair use of the available wired and wireless bandwidth, the rXg was also configured to provide per-device bandwidth queuing of 20Mbps down and 10Mbps up.
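On a generic Linux gateway, per-device shaping of this kind can be approximated with tc and HTB classes. The sketch below caps downstream traffic to one hypothetical client IP at 20 Mbps; the rXg uses its own queuing implementation, so this is purely illustrative.

    # eth1 faces the Wi-Fi network; 203.0.113.57 is a hypothetical client.
    tc qdisc add dev eth1 root handle 1: htb default 30
    tc class add dev eth1 parent 1: classid 1:10 htb rate 20mbit ceil 20mbit
    tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
        match ip dst 203.0.113.57/32 flowid 1:10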

What’s more, the rXg cluster was also used to provide DHCP to Black Hats from a pool of public IPs, while controlling the routing of traffic to the Internet and preventing delegates from attacking the Ruckus controllers or APs.

A double SSH tunnel (in effect, a VPN) through the rXg cluster was used to securely access the RG Nets and Ruckus management consoles over the Wi-Fi network. SSH was otherwise blocked on the network except from specific laptops, and SSH and HTTPS anomaly detection was enabled on the rXg just to be safe.
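A "double" tunnel of that sort can be built by chaining two port forwards, hopping through an outer gateway before reaching the management console. The hostnames below are placeholders, not the systems actually used at the event.

    # First hop: forward a local port to the inner bastion through the outer gateway
    ssh -f -N -L 2222:inner-bastion:22 ops@outer-gateway

    # Second hop: ride the first tunnel to reach the management console over HTTPS
    ssh -f -N -L 8443:scg-mgmt:443 -p 2222 ops@localhost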


Ultimately, hundreds of malicious network scans and malware proliferation attempts were detected by the rXg, and a variety of malicious events that would have turned into greater problems for network stability were blocked. These events typically involved a wide range of attacks, particularly DoS, ARP spoofing, and traffic storms.

Behavioral connection IPS, using fancy heuristics, was used to block all sorts of malicious activity. The rXg's DPI engine was configured with emerging-threat signatures to detect intrusion attempts, malware, and the like. Before the event was over, nearly 1,000 instances of threat signatures had fired on the network, far more than in other conference environments.


The AP network, consisting of some 80 Ruckus 802.11ac Smart Wi-Fi access points managed by a cluster of Ruckus SCG controllers, was never compromised, and no notable attacks or exploits were reported between Wi-Fi end-users due to the implementation of VLAN client isolation.

 Data use at Black Hat was higher than average for a typical conference. Over 3 terabytes of traffic was routed over the Ruckus Wi-Fi network during the event. SSL traffic made up over half of all data usage, as many of the delegates who were brave enough to connect to the Wi-Fi encrypted their connections through an external VPN.

During the conference, the network operations team saw concurrent Wi-Fi client connections peak at over 2,300, with some APs handling 300 simultaneous users with no performance compromises.

So yes, we’ve been asked back next year.          

August 03, 2015

Small Cells Getting Big Attention


The LTE small cell market has generated a great deal of interest from MNOs and RAN vendors over the past few years.  The goals of this effort are to greatly increase cellular capacity in high traffic locations and to improve indoor coverage.  The primary applications will be packet voice and other real-time applications, along with high value data traffic.  However, the bulk of the data traffic in high-density locations will continue to be handled by Wi-Fi (check out the chart below).

Industry pundits have forecasted rapid growth in LTE small cell deployments, but that growth has been slow to materialize.  In some cases there were technical problems that needed resolution, like interference mitigation with the macro cellular layer, but the real challenges are around the business model.  At the heart of the business model discussion is the issue of “who pays” to deploy the network.  With macro cellular deployments, the operator always pays, but with small cell deployments in hotels, shopping malls, hospitals, and schools the burden starts to shift, for the most part, to the venue.


 In many ways, the enterprise LTE small cell market is much more like Wi-Fi than it is like the outdoor macro cellular market.  With Wi-Fi the venue almost always pays, the network can be installed by a VAR (value added reseller), and the equipment is inexpensive and easy to operate.  This is the trifecta for a successful enterprise deployment.

Looking at the LTE small cell opportunity, it is important to focus on indoor deployments, as that is where most data traffic is consumed and where cellular services will sometimes have coverage issues. When looking at indoor wireless services, two technologies have been very successful: DAS and Wi-Fi. DAS (distributed antenna systems) are generally deployed in high-density locations of more than a few hundred thousand square feet.

These include stadiums, airports, convention centers, and the like.  The high cost of DAS deployments has limited them to these very large, heavily utilized locations.  The primary use for DAS is in providing good voice coverage in these large venues, while Wi-Fi networks handle the heavy data load.  Both technologies have a big advantage when it comes to deploying indoors and that is neutral host support.  Large venues will not typically allow a radio technology into their building unless it can support ALL users at that venue.  Any other arrangement isn’t to their benefit.  DAS systems are usually deployed by neutral host service providers, which resell access to the major MNOs.  If the venue is desirable enough (airports and convention centers) the neutral host service provider also pays a hefty site rental fee to the venue.

Wi-Fi networks are also neutral host and are typically installed by the venue to provide data services in their facility.   This is an area where Ruckus has been very successful.  In some cases the venue owns the network and in other cases they purchase a managed service.

Say Hello to Small Cells

 The purpose of LTE small cells is to provide enhanced cellular services in venues of all types.  In really large venues (those suitable for DAS) the mobile operator is more than willing to pay for the small cell deployment along with any site rental fees.  These are high-capacity venues that attract tens of thousands, or even hundreds of thousands of people per day.  When going to smaller venues, the subject of who pays for the small cell gets a lot more complicated.  In some cases the MNO might pay, but more often than not the venue has to pay.  There just isn’t enough traffic to justify the expense for the operator and they will fall back on the outside-in approach to cellular coverage.  For some buildings and in some locations, the outside-in approach just doesn’t do the job. 

The figure on the left shows that to hit the “hockey stick” projections that many industry analysts have for LTE small cells, a neutral host model is essential.

To get the venue to pay, the solution almost always has to be neutral host, especially in a Bring Your Own Device (BYOD) world. So how might an LTE small cell be neutral hosted?  There are a couple of options that come to mind.  It all starts with spectrum, and more to the point, whose spectrum. 

1)  National roaming is one way to solve this problem.  With this approach the LTE small cell network is installed using spectrum of one of the national operators who then sets up roaming arrangements with all their competitors such that all their subscribers can now access the network.  This gives the venue what they need which is an indoor cellular network that anyone can access.  This is not a technical solution to the neutral host problem, but a business solution.

2)  Another option is to again use the spectrum of one of the national operators, but instead of having to roam with the host operator, the small cell can support the PLMN IDs of the other MNOs who can then tunnel traffic back to their Evolved Packet Cores (EPC).  This essentially creates a shared small cell in much the same way that a Wi-Fi access point can be shared using separate SSIDs (service set identifiers).  This is better known as MOCN (multiple operator core network) and it does add a great deal of complexity, as the small cell ends up talking to multiple EPCs instead of just one.

3)   A 3rd option is for a neutral host service provider (VAR, DAS vendor, tower company, etc.) to deploy the network using its own licensed spectrum.  These neutral host service providers might also handle DAS deployments as well.  So what spectrum might a neutral host service provider use to deploy a network and what does the business model look like?  One obvious place to look is in the low power bands that are part of the FCC’s Citizens Broadband Radio Service (CBRS) at 3.5GHz. 

The FCC has done something very interesting with the 3.5 GHz band and we can only hope that other regulatory jurisdictions follow suit.  The key elements here are a very unique approach to the sharing of spectrum where the incumbent user, in this case the U.S. Navy, has first rights to the spectrum. If they aren’t using it, then a business can get a Priority Access License (PAL) to operate a small cell network, and if there is no PAL user, or Navy user, it can be used for General Authorized Access (GAA), which usually means Wi-Fi.  This hierarchy of license priorities is depicted in the following figure.



This new 3.5 GHz spectrum allocation and management scheme is very interesting in that the coverage areas match census tracts, which track population density and of which there are 74,000 in the U.S. The transmit power levels are also limited. In fact, they closely track Wi-Fi power levels of 24-30 dBm, which makes them ideal for small cell usage.

Basically anyone that controls real estate can operate a neutral host LTE small cell network using this spectrum management approach.  These neutral host operators would roam with the major MNOs.  They need not have any customers of their own and would strictly be acting as a visited network.  The neutral host service providers would be paid by the venue to install and operate the network so as to provide greatly enhanced cellular service in that facility.  The venue recovers this cost because a strong cellular service helps them to sell whatever it is that drives their business.  It could be hotel rooms, hospital beds, tuition, etc.

The most fascinating aspect of the LTE small cell business is not in the technology but in the business models that will emerge to support and pay for this greatly enhanced cellular connectivity.

Talk about a ruckus!

July 14, 2015

LTE's Move into the Unlicensed Spectrum Continues

A lot has been going on with the proposals for LTE operation in unlicensed spectrum. The FCC recently requested industry input on LTE-U (LTE Unlicensed) and LAA (LTE Licensed-Assisted Access), 3GPP formally moved both the LAA and LWA (LTE/WLAN Aggregation) programs forward, and a standalone (fully unlicensed) version of the technology was announced. And those are just a few of the goings-on.

For some useful background, some of our previous posts detail these worlds colliding, Digging deeper into how it all works and the public love affair between LTE and WiFi.

FCC Requests Public Notice for Comments 

On May 5th, the FCC Office of Engineering and Technology and Wireless Telecommunications Bureau opened a Public Notice (PN 15-105) requesting input from interested parties on the topics of LTE-U and LAA technology. Initial filings were due by June 11th, and reply comments by June 26th.

A total of 57 filings were made under this PN, and the FCC definitely got a variety of wide-ranging input. Ruckus provided our input in this filing.

This PN was expected, as Chairman Tom Wheeler had committed to it in the remarks he made after the commission issued its recent order on the 3.5 GHz Band (the commission received so much input on LTE-U and LAA during the 3.5 GHz proceedings that they decided to have a separate PN dedicated to the topic). 

In the PN, the FCC made the same distinction Ruckus has been using, classifying LTE-U as a pre-standard technology and using LAA to describe the technology development program within 3GPP. It posed 10 specific questions about these proposals, which can be summarized into the following categories:

  • Distinctions between LTE-U and LAA
  • Timelines for development and deployment
  • Coexistence with WiFi
  • Coordination of LAA development between 3GPP and IEEE 802
  • Support for a standalone (unlicensed only) version

Perhaps not surprisingly, the majority of the filings can be classified as representing one of two camps, skeptics and advocates, which typically took opposing positions on the key questions.


A few sample excerpts on the question of industry coordination highlight the degree of polarization:


There has been extensive coordination between 3GPP and IEEE 802.11 on appropriate sharing characteristics to ensure coexistence between LTE-U/LAA and 802.11/WiFi.

IEEE 802:

There has been no coordination between IEEE 802 and any standards body associated with LTE-U, because LTE-U was not developed by a standards body. And:

 There has been no coordination between 3GPP and IEEE 802 on LAA.

It will be interesting to see how the FCC reconciles these types of conflicting statements – there were many.

Both Ruckus and the WiFi Alliance (WFA) took advantage of this PN to note their belief that LWA should also be under consideration as another proposal for LTE in Unlicensed Spectrum, especially as it pertains to the overall assessment of LTE and WiFi coexistence.

It’s unclear at this time exactly what next steps we might see from the FCC, but they certainly have a lot of information to digest. It does seem apparent that the Commission would like to see many of these disputed points resolved through interaction between 3GPP and IEEE, and it wouldn't be surprising to see them call for tighter coordination and agreement on LAA/WiFi coexistence between those bodies.

What happened in Malmö?

3GPP held a significant meeting in Malmö, Sweden last month. A number of important things occurred at that meeting: 

  • 3GPP considered all of the LAA simulation and testing results that had been generated during the Study Item (SI) period, and advanced LAA to the status of a Work Item (WI) for Release 13 (expected to be finalized in the March 2016 timeframe). The WI specifies a single global framework operating in 5 GHz, that the Rel-13 version will be Supplemental DownLink (SDL) only, and that a standalone mode will not be supported. (LAA Work Item)
  • 3GPP also approved LWA as a formal WI for Release 13. Unlike LAA, the LWA program was initiated as a WI, so there were no study results to consider. The WI did not specify SDL operation, so presumably LWA could support both downlink and uplink LTE data augmentation. (LWA Work Item)
  • 3GPP declined a request by IEEE 802 for both organizations to jointly co-host a workshop on LAA/Wi-Fi coexistence coincident with IEEE’s meeting this month. Instead, 3GPP announced that it will host an LAA workshop in late August open to interested industry organizations. The announcement was sent to IEEE 802, WFA, WBA, GSMA, ETSI, FCC, OfCom, and CCSA.

    The stated goal of this workshop is “to exchange views and information on LAA”. The WFA and NCTA both noted in their reply comments to the FCC PN, that the wording of this workshop announcement, and the minutes from the Malmö discussion, indicate that the interactions at this workshop will not constitute the “coordination and agreement” that the Commission enquired about. 

MuLTEfire – A Video Game or What?

You’ve got to give some props to the Qualcomm marketing folks: amidst the existing alphabet soup of LTE-in-unlicensed proposals – LTE-U, LAA, and LWA – it’s refreshing to have a cool term in the mix.

 MuLTEfire is the name given to Qualcomm’s recently announced standalone version of LTE in Unlicensed. Standalone means that, unlike LTE-U or LAA, MuLTEfire will implement the entire LTE air interface (control, downlink and uplink data, paging, etc…) in unlicensed spectrum. This will help overcome one of the principal objections to LTE-U and LAA – that they require a licensed spectrum ‘anchor’, effectively precluding anyone but an existing cellular operator from deploying these technologies. 

At this point there are very few technical details available about MuLTEfire, and it remains to be seen if 3GPP will consider it for future standardization. Some have questioned the timing of the MuLTEfire announcement – it was unveiled on June 11th, the deadline for initial filings in the FCC PN, which had specifically requested information on a standalone version.

Now What?

The next major event that is already scheduled is the 3GPP-hosted workshop with various other industry organizations on August 29th.

It’s possible that the FCC could issue some type of statement based on the PN filings prior to the workshop, or they may decide to wait and see if the workshop provides new information to consider. And, of course, there may be other developments that aren't on the public radar at this time. So stay tuned.

May 18, 2015

Understanding Wi-Fi Signal Strength vs. Wi-Fi Speed

The relationship between Wi-Fi signal strength and the speed at which data can be transferred over that signal is essential to understand when it comes to Wi-Fi performance.

One question we constantly get is this: 

When I connect my computer to a wireless network, does a stronger signal always imply faster webpage loading, downloads, etc.?

The answer, like all answers to Wi-Fi questions, can be difficult to get a grip on. So here's a good, fairly simple explanation from one of our rocket-scientist founders, Bill Kish, that should help clarify things.

All other factors (of which there are many) being equal, stronger signal strength is correlated with higher data transfer speeds, with a couple exceptions and assuming an optimal physical layer data rate selection algorithm. The super detailed, professional and technical diagram below shows a typical relationship for any modern wireless system with adaptive modulation:


The data transfer speed increases up to a point as signal strength increases since higher signal strengths enable the use of higher PHY (PHYsical layer data) rates, also known as MCS (Modulation and Coding Scheme) in modern WiFi. (One gross oversimplification is to think of different MCS as being somewhat like different gears on a bike or car.)

Once there is sufficient signal strength to operate reliably in the maximum supported MCS rate, additional signal strength does not produce additional throughput gains. In fact at some point (usually a few cm away from the AP) you can eventually run into a signal strength so high that the receiver's front-end is unable to process it, at which point throughput will drop precipitously.

All of the details (especially the scale) of this graph are highly dependent on the capabilities of the transmitting radio, the receiving radio and the environment. Variability in the environment and in the radios themselves makes real-world wireless throughput a random variable that can only be assessed accurately via statistical methods.

The physical layer data rate selection algorithm is critical to achieving the monotonically increasing relationship shown here up to saturation. There have been many examples of poor rate control algorithms loose in the wild (in both popular APs and common client devices) that do not actually achieve this monotonic performance, especially when subject to unexpected environmental inputs or certain radio degradations.
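A toy model of adaptive modulation reproduces the shape of that curve: pick the highest MCS whose SNR threshold is met, and throughput climbs in steps until it saturates at the top rate. The thresholds and rates below are rough illustrative values, not numbers from any particular chipset.

    # Illustrative (snr_threshold_db, phy_rate_mbps) pairs, lowest to highest MCS.
    MCS_TABLE = [(5, 7.2), (10, 21.7), (15, 43.3), (20, 65.0), (25, 86.7), (30, 96.3)]

    def select_rate(snr_db):
        """Return the highest PHY rate whose SNR requirement is satisfied."""
        rate = 0.0
        for threshold, mbps in MCS_TABLE:
            if snr_db >= threshold:
                rate = mbps
        return rate

    for snr in range(0, 45, 5):
        print(f"SNR {snr:2d} dB -> {select_rate(snr):5.1f} Mbps")
    # Throughput rises with signal strength, then flattens once the top MCS is reached.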

So What To Do?  Get Smart.

Finding the right balance between optimum performance and reliability with adaptive data rate algorithms is what separates the great Wi-Fi systems from those that are merely good enough. This previous post from a while back helps explain some of the details.

Rate adaptation is the function that determines how and when to dynamically change to a new data rate. When it’s tuned properly, a good adaptation algorithm finds the right data rate that delivers peak AP output in current RF conditions –unstable as they are. Though often ignored, rate adaptation is a critical component to any high performance system.

Wi-Fi engineers have been led to believe, and—for better or worse—site survey software validates the belief, that data rates can be reliably predicted based on a metric like RSSI or SNR. And some product manufacturers use simple metrics like these to determine the right rate.

Ruckus approaches rate selection with a unique focus. Instead of using unreliable signal measurements to hope for the best data rate, we focus on the math. Our rate selection algorithms are statistically optimized, which is our engineer-chic way of saying that we pick the best data rate based on historical, statistical models of performance for each client.
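As a hedged sketch of what "statistically optimized" can mean in practice (the actual Ruckus algorithm is proprietary, and it also steers antenna patterns), a rate controller can keep a running per-rate delivery estimate and pick the rate with the best expected throughput rather than the highest raw rate:

    # Minimal per-client rate selector driven by observed delivery statistics.
    class RateStats:
        def __init__(self, rates_mbps, alpha=0.1):
            self.alpha = alpha                              # EWMA smoothing factor
            self.p_success = {r: 1.0 for r in rates_mbps}   # estimated delivery probability

        def record(self, rate, delivered):
            """Update the success estimate for a rate after each transmission."""
            observed = 1.0 if delivered else 0.0
            self.p_success[rate] = (1 - self.alpha) * self.p_success[rate] + self.alpha * observed

        def best_rate(self):
            """Pick the rate with the highest expected throughput, not the fastest rate."""
            return max(self.p_success, key=lambda r: r * self.p_success[r])

    stats = RateStats([7.2, 21.7, 43.3, 86.7])
    for _ in range(20):
        stats.record(86.7, delivered=False)   # the top rate keeps failing for this client
        stats.record(43.3, delivered=True)    # a middle rate delivers reliably
    print(stats.best_rate())                  # settles on the reliable rate, not the fastest one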

Without the right algorithm, the optimal rate for any client at any given moment in time is a crapshoot. And when you're guessing, the safest guess is to err on the side of reliability, which sacrifices throughput and capacity and causes other unwanted problems.

At Ruckus, we believe in the importance of stable client connections in an unstable RF environment. In fact, our algorithms jointly adapt both the data rate and antenna pattern together to maximize reliability and throughput.

But don’t take our word for it; test it for yourselves!  You'll definitely see a big difference and create a Ruckus (a good one) with your users.

May 01, 2015

Making the Most of Multi-User MIMO

With the first Wave 2 802.11ac access points hitting the market, Wi-Fi goes gigabit in a big way. Yet realizing such speeds means maximizing a number of sophisticated new RF capabilities that the vast majority of today’s Wi-Fi access points are simply ill-equipped to exploit. Two of these key features are transmit beamforming (TxBF) and multi-user MIMO (MU-MIMO), which effectively requires TxBF to work. Meanwhile, MU-MIMO clients are coming fast. We’ll get a little geeky, so strap yourself in. It’s important.

With Wave 2, multi-user MIMO with transmit beamforming is designed to boost network capacity and strengthen Wi-Fi signals. That’s especially important as Wi-Fi clients explode and quickly take on new features and functions that the 802.11ac standard brings.

Getting a Good Grip on Chip-Based Transmit Beamforming

TxBF uses phasing of multiple signals to create a virtual beam.

Transmit beamforming, or TxBF, remains an optional feature in 802.11ac but is essential for making MU-MIMO work. So vendors must deploy it in their Wave 2 products. Here’s how it works. 

Basically, TxBF allows an AP to concentrate RF energy in the direction of a particular client by using signal processing techniques at the baseband chip level. TxBF requires feedback from the client to allow the AP to synchronize the transmissions from multiple chains so they end up in phase when they are received. This results in some link budget gain.

Depending on the number of transmit chains available, TxBF can provide signal gain of up to 3 or 4dB in ideal conditions. However, with adaptive or smart antenna technology, a complement to TxBF, there are no such limits and the gains are cumulative.



With TxBF, the 11ac access point sends out a test transmission to the nearby client devices, in effect instructing each client to “tell me what you just heard.”  The client replies with specific metrics that reveal how well it “hears” the AP’s signal.

This feedback is used by the AP to effectively decide whether and how to manipulate the transmission of the Wi-Fi signal (through phased timing) via its several antennas. With transmit beamforming all antennas participate in the process. 

Think of it like throwing two rocks into a pond. When each rock hits the water, waves emanate in all directions out from where the rock entered. Where those waves come together, they form a stronger or bigger wave (in this case a “beam”). Throwing the rocks into the pond at different times effectively changes the location of the strongest wave. In the Wi-Fi world, the timing of these signals coming out of multiple antennas is all controlled on the Wi-Fi chip through sophisticated software.

There are two parts to this manipulation: one is improving the signal to the intended client, helping that client hear it better; the other is minimizing that signal to all other in-range clients to reduce noise and interference. Wi-Fi signals have to thrive despite two countervailing influences. One is noise, the background, omnipresent ambient blend of static and distortion that exists naturally and as a result of electromagnetic devices. The second is interference, other signals created by active transmitters such as the Wi-Fi radios in other access points.

TxBF manipulates the phase of the signal coming out of each antenna so that these signals all add up (or cancel out) at the client location. This results in a higher signal-to-noise ratio (SNR), and a stronger, clearer signal to a given client.
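The phase-alignment effect is easy to see numerically: when the per-chain signals arrive in phase at the client they add as voltages, while random phases mostly cancel. This is a toy model with idealized, equal-amplitude chains and a fixed total transmit power, so the gain it shows is an upper bound rather than the 3 or 4dB typically achieved in practice.

    import cmath, math, random

    def received_power(phases, amp):
        """Power at the client of the sum of equal-amplitude signals with the given phases."""
        total = sum(amp * cmath.exp(1j * p) for p in phases)
        return abs(total) ** 2

    chains = 4
    amp = 1 / math.sqrt(chains)      # split the same total transmit power across 4 chains

    aligned = received_power([0.0] * chains, amp)    # TxBF: phases matched at the client
    random.seed(7)
    scattered = received_power([random.uniform(0, 2 * math.pi) for _ in range(chains)], amp)

    baseline = 1.0                   # a single chain carrying all of the power
    print(f"Phase-aligned gain:  {10 * math.log10(aligned / baseline):.1f} dB")   # 6 dB in this ideal case
    print(f"Random phases:       {10 * math.log10(scattered / baseline):.1f} dB") # typically near or below 0 dB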

More important, to help each client hear the signal intended for it, TxBF does something similar to noise-canceling headphones: it changes the phasing of the signal to create nulls (or what are called “notches” in the RF wave patterns) that cancel out the signal at the other clients. This helps reduce radio noise and interference so each client only “hears” the signal intended for it. That, too, results in a higher SNR.

Another Primer on MIMO

 MIMO, for multiple input multiple output, was first introduced in 802.11n. It added multiple transmit and receive antennas on both sides of the radio link. The chipset sends information in two or more spatial streams, each one transmitting via a separate antenna. The corresponding antennas on the receiving radio collect all the signals arriving from different paths and at different times, and the RF chipset recombines them, essentially increasing the signal-capturing power of the receiver.

The multiple spatial streams pack in more data, and the physically separate antennas create what’s called spatial diversity, which introduces slight differences in timing and other characteristics that differentiate each signal and make it easier to extract the information. The easier it is for a client to reduce the correlation between signals, the more data can be extracted, and the more data can be pumped into the sending side of the link. This is a hugely important aspect of getting MIMO to work the way it was designed.

But until now, Wi-Fi access points could talk to only one client at a time, one after another, using time-slicing to grant fair access to all the clients connected to that access point. An AP with four transmit and four receive antennas and four spatial streams (4x4:4) now has four discrete spatial streams to transmit to different clients simultaneously.

MU-MIMO and RF efficiency

MU-MIMO stands for multi-user MIMO, a brand new, required feature in Wave 2. It too is implemented in hardware, in the 11ac radio chipset, and both the access point and client are required to support it. The concept is breathtakingly simple: with MU-MIMO, that same 4x4:4 access point can talk at the same time to three or four MU-MIMO clients. For example, with Wave 2 WLAN capacity is effectively doubled with single stream clients.

With MU-MIMO a given access point can now handle a much larger number of concurrent clients because it’s serving clients in parallel, in batches of three, for example, and because of the higher 11ac data rates that allow clients to get off the air much faster, leaving more airtime available for other clients.

Even without MU-MIMO clients, a Wave 2 WLAN can realize benefits. Just replacing a heavily burdened 11n AP with a Wave 2 11ac AP will boost the network’s capacity, serving more clients while increasing throughput even for 11n clients (as is the case with Wave 1 access points). As Wave 2 clients hit the market, MU-MIMO will kick in automatically under the covers.

Transmit beamforming is applied to each spatial stream from access point to client in a MU-MIMO configuration, simultaneously optimizing the signal to the target client and minimizing the noise level with regard to other neighboring clients.

The several antennas in MIMO are physically separate and the signal from each travels along different paths, but the spatial streams “mix” in the air on their way to the receiving antennas. This is why separating these spatial streams on the receiving side, often called decorrelation, is critically important.  Any antenna system that can focus or direct these signals to each client or client group to ensure they appear, or are heard, differently by each client is the key to maximizing MU-MIMO performance.

Apart from the physical differences of the antenna locations and the optimizations created by chip-level beamforming, is there another way to increase this differentiation, and thereby increase or sustain MU-MIMO throughput?  There is.  Hello smart antennas.

Value-Added MU-MIMO with Smarter Antennas

Ruckus adaptive directional antenna technology, marketed under the name “BeamFlex,” despite sounding somewhat similar to “beamforming,” is quite different. Adaptive antennas continually shape the “physical” antenna patterns by changing the antenna structure electronically (watch this).

With adaptive antennas there are three important gains. One is simple antenna gain, achieved by focusing more energy in the direction of a given client. Another is the gain from interference mitigation, as smart antennas are not forced to constantly send and receive signals in all directions at all times. But perhaps most important to MU-MIMO is the ability of adaptive antennas to control multipath transmission. Smart antenna systems can effectively steer one spatial stream in one direction and a separate spatial stream in a completely different direction, so decorrelation and spatial multiplexing are maximized. This is critical to ensuring proper MU-MIMO operation and maximizing MU-MIMO performance.

While beamforming relies on manipulating signal timing to alter the signal’s phase, BeamFlex is all about manipulating the antenna pattern that transmits the beamformed signal.

A sophisticated (and patented) best-path selection algorithm within each access point lets the AP automatically try different combinations of antenna elements, with thousands of possibilities, to create focused signals that yield the highest possible data rates. In effect, BeamFlex creates a custom, optimized antenna tuned for a specific spatial stream intended for a given client device or group of clients.

The uniqueness of each antenna pattern means “more signal” can be sent to the target client, and “less signal” to neighboring clients. Depending on the specific situation, BeamFlex can create up to 6dB improvement in the signal-to-noise ratio, and up to 15dB of improvement through reduced interference. The combination means higher data rates, longer range, and higher sustained data rates over those distances.

With multi-user MIMO networks, BeamFlex can create these custom RF patterns for each antenna and each simultaneous MU-MIMO client group so Wi-Fi signals can be better distinguished by clients.  This simply makes MU-MIMO work better. 

That’s pretty smart Wi-Fi.

April 15, 2015

Clearing FUDDY Waters

Wave goodbye to slow Wi-Fi.

Wave 2 of 802.11ac is here and now, adding new capabilities that improve overall Wi-Fi system performance and capacity.

So don’t be put off by naysayers spewing FUD that Wave 2 APs won’t add immediate value to existing Wi-Fi infrastructures. They already have.

Wave 2 802.11ac-capable access points make more efficient use of the RF spectrum by getting clients on and off the medium faster, leaving more airtime for clients, even those that don't support Wave 2 capabilities. Because Wi-Fi is a shared medium, reducing the time to serve even some clients will benefit all clients.

And as multi-user MIMO clients hit Wi-Fi networks this year, Wave 2 is capable of serving those clients simultaneously, allowing others the opportunity to access the RF spectrum sooner. It’s like carpooling: if you can get people to carpool, even those who don’t carpool benefit because there are fewer cars on the road.

Having more spatial streams available also provides incremental value in the form of spatial diversity, regardless of whether clients have one, two, or three spatial streams. More antennas improve MIMO by increasing reliability and signal quality, pushing data throughput closer to data rates.

The other obvious and BIG benefit that Wave 2 provides is simple: investment protection. Customers are tired of having to architect and re-architect their Wi-Fi networks every couple of years to accommodate the barrage of new devices with new features and functions that can’t benefit from their existing networks. Wave 2 effectively mitigates this risk, extending Wi-Fi refresh cycles.

But maybe you’re still hearing the same tired message from companies that want you to buy Wave 1 instead of Wave 2: “Wave 1 is good enough; no need for Wave 2.” To help demystify a lot of the fear, uncertainty and doubt (FUD) that vendors are belching, here are some more detailed radio truths to help you in your buying decision.

Increased Wi-Fi Capacity with MU-MIMO

Looking closer, if there’s only one reason why Wave 2 makes sense now (and there’s much more) it’s this: MU-MIMO allows an AP to send downlink frames to multiple stations at the same time. This increases capacity compared with single user MIMO. 


Historically, Wi-Fi was only capable of serving clients one-at-a-time. Slow devices consume extra airtime, and all devices served by that AP suffer as a result. This is especially true in mobile-rich deployments. And what networks aren’t packed with smart mobile devices today?   

Better Transmit and Receive Performance 

There may not be many 4x4 clients on the market this year, but adding radio chains helps improve reliability even if you have 1x1, 2x2, or 3x3 clients.

Adding more transmit radio chains improves downlink performance, especially for MU-MIMO. That extra transmitter provides more signal steering control and higher data rates with less interference.

Adding more receive radio chains also improves uplink performance. Using maximal ratio combining (MRC), the AP has the ability to better hear signals on multiple antennas and in different polarizations (if the AP supports dual polarization), combining those signals to ensure better reception. This is especially useful for single- or dual-stream clients with small antennas and weak transmit power (e.g. smartphones).
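For the curious, maximal ratio combining has a tidy property: with ideal channel knowledge, the post-combining SNR is the sum of the per-antenna SNRs. A small sketch of that bookkeeping, with made-up per-chain numbers, looks like this:

    import math

    def mrc_combined_snr_db(per_branch_snr_db):
        """With ideal MRC weighting, combined SNR is the sum of branch SNRs in linear units."""
        linear = [10 ** (snr / 10) for snr in per_branch_snr_db]
        return 10 * math.log10(sum(linear))

    # Hypothetical uplink from a weak single-antenna phone heard on four AP receive chains.
    branches_db = [8.0, 5.0, 9.0, 3.0]
    print(f"Combined SNR: {mrc_combined_snr_db(branches_db):.1f} dB")
    # About 12.9 dB here, noticeably better than the best single branch (9 dB).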

Legacy Clients Benefit

If you’re having a hard time seeing the benefit of MU-MIMO because some portion of your client devices won’t support MU, realize that every MU-capable client in your network ultimately benefits legacy clients (single-user, or non-MU) as well.

With 2-3x greater efficiency from MU, every extra bit of productivity gained is added to the airtime pool for other clients (especially legacy clients that need the boost) to utilize.



More Spatial Streams Help Everyone

The number of spatial streams and the transmission bandwidth together indicate potential throughput performance and number of devices supported. Initial Wave 2 radio chips are 4x4:4 (4 transmit and 4 receive radio chains with support for 4 spatial streams), while most Wave 1 chips were 3x3:3.

While we all wait for four-stream Wi-Fi devices, more spatial streams provide unique benefits, particularly for wireless meshing. Wi-Fi meshing has always suffered from multi-hop throughput loss. With additional, higher-bandwidth streams, APs should now be able to connect wirelessly at true gigabit speeds.
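As a rough illustration, here's how the stream count moves the ceiling, using the nominal per-stream PHY rate from the 802.11ac spec at 80 MHz, MCS 9, short guard interval (real throughput after MAC overhead lands well below these numbers):

    # Nominal 802.11ac PHY rates (80 MHz, MCS 9, short guard interval) per the spec.
    PER_STREAM_80MHZ_MCS9 = 433.3  # Mbps per spatial stream

    for streams in (1, 2, 3, 4):
        print(f"{streams} stream(s): {streams * PER_STREAM_80MHZ_MCS9:.0f} Mbps PHY rate")

    # A 3x3:3 Wave 1 radio tops out around 1300 Mbps; a 4x4:4 Wave 2 radio reaches
    # roughly 1733 Mbps, which is what lets a mesh backhaul hop land near gigabit
    # throughput even after MAC overhead.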

Investment Protection

MU-MIMO client support is happening this year. In fact, MU-capable clients are already on the market. Many of the mobile device chipsets in devices used today are actually "multi-user ready" with a firmware upgrade. So don't be surprised if software upgrades this year enable widespread MU support with no need to buy new devices. And yes, MU-MIMO does require client support, so not all 11ac clients can use it. But MU-MIMO support in clients is a near-term reality.

MU-MIMO is a long-term investment; it's simple myopia to defer Wave 2 because "no MU clients exist today." And even a short-term AP investment spans three years, so why would we focus on client support in the market RIGHT NOW instead of forecasting client feature support six months from now? With that perspective, MU-ready APs make even a 4- or 5-year AP investment plan very reasonable.

MU-MIMO also adds margin for imperfect designs. A small contingent of Wi-Fi consultants and administrators are true experts at maximizing spectral efficiency (proper channel reuse, AP placement, Tx power, antenna choice, etc.). Given the budget, time, building layout, and business requirements, they can fine-tune until Wi-Fi Zen is reached. For the rest of us, any performance feature that offers margin to offset "best effort" designs is a huge help for maximizing the investment, and for making network admins look like experts, even if they aren't.

Newer Chipsets Bring Efficiency and Performance Gains

Every new generation of Wi-Fi chips comes with efficiency and performance improvements. Every new AP hardware revision is an opportunity to improve radio components, fine-tune the layout, enhance antenna subsystems, and generally improve performance. If you remember back when the first 11ac APs were coming out, the industry as a whole saw a marked performance increase even for 11n clients (specs didn’t change, but performance did). For all clients, expect new APs to enhance speed.

Impressive Power Efficiency

Unfortunately, when you add more radio chains, APs require more power.

With Wave 2, the Ruckus R710 is designed to provide full gigabit 802.11ac functionality on 802.3at power, while offering a pretty sweet 802.3af "efficiency mode." We simply reduce the 2.4 GHz radio output power to 25 dBm and disable the USB and second Ethernet ports. That's it.

And you won’t have to think about it. The new ZoneFlex R710 is smart enough to detect how it’s being powered. Whether by DC, 802.3at PoE, or 802.3af PoE, it automatically makes the necessary adjustments to maximize 802.11ac performance.
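Conceptually, the decision the AP makes at boot looks something like the sketch below. The PoE budgets are the standard 802.3af/at figures available at the powered device; the AP draw numbers are hypothetical placeholders, not R710 specs.

    # Rough PoE budget check; the AP draw figures here are placeholders, not R710 specs.
    POE_BUDGET_W = {"802.3af": 12.95, "802.3at": 25.50}   # power available at the device

    def pick_mode(detected_source: str, full_draw_w: float, efficiency_draw_w: float) -> str:
        """Return which feature set fits the detected power source."""
        budget = POE_BUDGET_W[detected_source]
        if full_draw_w <= budget:
            return "full mode: all radio chains, USB, and second Ethernet port enabled"
        if efficiency_draw_w <= budget:
            return "efficiency mode: lower 2.4 GHz Tx power, USB and second port disabled"
        return "insufficient power"

    # Hypothetical draws just to show the decision the AP makes automatically at boot.
    print(pick_mode("802.3at", full_draw_w=22.0, efficiency_draw_w=12.5))
    print(pick_mode("802.3af", full_draw_w=22.0, efficiency_draw_w=12.5))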

Other Considerations

Wave 2 will be slightly more expensive than current Wave 1 APs, so you can still buy Wave 1 if you're budget conscious. It just may not take you as far.

And if you’re waiting around for Wave 2 because of the data rates promised by 160 MHz channels, don’t be fooled. Wide channels are the enemy of spectral efficiency in the enterprise. Most client devices won’t support 160 MHz, so there’s really no reason to want it…other than for suspect marketing claims like “fastest AP ever.”   

And if you’re worrying about 802.11ac stabbing you in the backhaul, don’t.

For an AP to require more than 1 Gbps of backhaul, the situation would need to be highly unusual, if not completely unlikely: a four-spatial-stream 802.11ac Wi-Fi client running 80 MHz channels and a three-spatial-stream 802.11n client (on a 40 MHz wide channel) all saturating the AP at the same time. Keep in mind that four-spatial-stream Wi-Fi clients don't exist yet (but they ARE coming), and given the limited channels available, you'd never want to set the 2.4 GHz radio to 40 MHz wide channels. So given the real-world device and traffic mix, you'll rarely need more than 1 Gbps uplinks for Wave 2 APs. Even if you do, link aggregation is there to help.
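Here's that worst case worked out in a few lines of Python. The PHY rates are the nominal spec maximums; the ~65% MAC-efficiency factor is a rough rule of thumb, not a measurement.

    # Worked version of the "worst case" above. PHY rates come from the 802.11 specs;
    # the ~65% MAC efficiency factor is a rough rule-of-thumb assumption.
    MAC_EFFICIENCY = 0.65

    phy_rates_mbps = {
        "5 GHz: 4-stream 11ac, 80 MHz (MCS 9)": 1733,
        "2.4 GHz: 3-stream 11n, 40 MHz (MCS 23)": 450,
    }

    total = sum(rate * MAC_EFFICIENCY for rate in phy_rates_mbps.values())
    for name, rate in phy_rates_mbps.items():
        print(f"{name}: ~{rate * MAC_EFFICIENCY:.0f} Mbps of real throughput")
    print(f"combined: ~{total:.0f} Mbps vs. a 1000 Mbps uplink")
    # Only this unlikely mix of top-end clients, all saturating the AP at once,
    # pushes the wired uplink past 1 Gbps.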

The Net-Net of it All

If we knew that we’d be really late to market, we’d probably be saying things like “wait on Wave 2 until clients are ready.” What we’d really mean is “please don’t buy Wave 2 from our competitors…we will be late to market.”  But we didn’t say that.  Instead we just thought we’d cause another Ruckus.  Mission accomplished with much more to come. 

April 07, 2015

Getting Engaged: LTE and Wi-Fi Fall in Love

Wi-Fi and cellular are the two most successful wireless technologies in existence and have complemented each other for years. Now they seem to be getting engaged. And it couldn't come at a better time, as demand for wireless capacity is at an all-time high. But how this all plays out is another matter altogether.

Wi-Fi’s great strength is that it runs in unlicensed spectrum, can be deployed by anyone, and it is supported on almost every smart handheld or IoT device you can think of. Its real sweet spot is high capacity, high-density indoor applications with low mobility. 

In contrast, cellular technology, which has swept across the globe over the last few decades helping to create a multi-trillion dollar telecommunications industry, is ideal for its ubiquitous outdoor coverage, seamless mobility, and support for real-time applications like voice and streaming multimedia.

Combining these technologies offers great promise for the entire industry. But how they come together remains a big question.

There's simply no doubt that these two technologies will continue to converge with the goal of giving users an "always best connected" experience. Ultimately, users don't really care about what wireless technology is used as long as it is fast, reliable and affordable.

A variety of different approaches to Wi-Fi/cellular convergence are being considered by various industry groups. As these worlds collide (see previous post), understanding the distinctions between these different approaches is important, realizing that there’s no right or wrong answer, just different choices (depending on your frame of reference). Like everything, the market will ultimately decide what works best and when.

LTE in Unlicensed Bands (LTE-U and LAA-LTE)

One such option that has received a lot of attention recently is LTE-U. Promoted by Qualcomm and other radio access network (RAN) vendors, LTE-U is an approach to running LTE directly over the 5 GHz unlicensed bands. While it isn't so much convergence as it is a way to obtain additional wireless spectrum for mobile services, this concept is now under development by 3GPP (3rd Generation Partnership Project) for standardization in Release 13 as LAA-LTE (license assisted access).


LAA continues to run the LTE control channels and primary uplink/downlink channels in the licensed bands, using LTE-A Carrier Aggregation (CA) to do channel bonding between the licensed and unlicensed downlinks, and possibly the uplinks in follow-on releases. The purpose of the unlicensed bands is to provide additional data plane performance, a data plane boost in effect. The great challenge with this approach revolves around getting LTE to peacefully coexist with Wi-Fi in the unlicensed bands, because sharing spectrum is not in LTE's DNA.



Proponents say that LTE-U can easily coexist with and protect Wi-Fi operations in unlicensed spectrum, similar to the way different Wi-Fi networks share the band today. Others worry that the scheduled nature of LTE could cause it to push Wi-Fi out of these bands.

LTE assumes that it has full control over the frequency bands in which it operates and was never really designed to contend for access to the medium, unlike Wi-Fi, which is a first-come, first-served contention-based access model.

Wi-Fi employs a listen-before-talk (LBT) mechanism. Any device wishing to use the band must listen to see if it is occupied. If the band isn't busy, the device can seize it and start transmitting. The band can only be held for a maximum of 10 milliseconds, after which it must be released and the LBT process repeated. This assures fair access to the medium and has proven to be a very effective way of sharing unlicensed spectrum. The challenge for using LTE in unlicensed bands is how best to implement LBT, as it will require changes to LTE's media access control (MAC) layer.
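In rough pseudocode terms, the LBT discipline looks like the Python sketch below. The clear-channel check and per-frame "airtime" are stand-ins invented for illustration; the point is the listen, hold-briefly, then release-and-recontend loop.

    import random
    import time

    # Conceptual listen-before-talk loop, not any vendor's implementation.
    MAX_OCCUPANCY_S = 0.010   # the ~10 ms maximum hold time described above

    def channel_busy() -> bool:
        # Stand-in for a real clear channel assessment (CCA); here it's just random.
        return random.random() < 0.3

    def lbt_transmit(frames):
        while frames:
            if channel_busy():
                time.sleep(0.001)              # back off briefly, then listen again
                continue
            start = time.monotonic()
            # Hold the medium only up to the occupancy limit, then release and re-contend.
            while frames and time.monotonic() - start < MAX_OCCUPANCY_S:
                frames.pop(0)                  # "transmit" one queued frame
                time.sleep(0.0005)             # pretend each frame takes ~0.5 ms of airtime
            # Releasing here is what gives every other device a fair shot at the band.

    lbt_transmit(list(range(100)))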

Failure to correctly implement listen-before-talk will likely limit the viability of LTE-U technology, as public venue owners and other businesses will be reluctant to deploy anything that might negatively impact the unlicensed bands. Public venues include hotels, conference centers, stadiums and transportation hubs. These are highly desirable locations with heavy data demands, where high quality Wi-Fi service now plays an essential role in bringing customers into buildings and keeping them there.

 This effectively causes public venues to put a premium on protecting the unlicensed bands.  Many venues now even employ staff to keep track of how these bands are being used.  This makes it essential that any LAA-LTE standard coming out of 3GPP support LBT per IEEE specifications.  

LTE + Wi-Fi Link Aggregation (LWA)

An alternative to using LTE in unlicensed spectrum that could be much more palatable to the broader industry is LTE + Wi-Fi Link Aggregation (LWA).

This approach, being strongly promoted by Qualcomm, achieves a very similar result to LTE-U and LAA-LTE, but with some big differences. With LWA, the LTE data payload is split: some traffic is tunneled over Wi-Fi and the rest is sent natively over LTE. This can greatly enhance the performance of an LTE service. It's expected that LWA will proceed rapidly through the standards process and emerge in 3GPP Release 13 in the summer of 2016.

LWA centers on using a Wi-Fi access point to augment the LTE RAN by tunneling LTE in the 802.11 MAC frame so it will look like Wi-Fi to another network even though it is carrying LTE data.

With LWA, Wi-Fi runs in the unlicensed bands and LTE runs in the licensed bands, and the two radio technologies are combined to offer a compelling user experience. Both technologies are allowed to do what they do best, and LTE no longer needs to perform any unnatural acts.

Unlike the deployment of LTE in unlicensed spectrum, which requires all-new network hardware and all-new smartphones, LWA could be enabled with a straightforward software upgrade, allowing smartphones to power up both radios and split the data plane traffic so that some LTE traffic is tunneled over Wi-Fi and the rest runs natively over LTE. The traffic that flows over Wi-Fi is collected at the Wi-Fi access point and then tunneled back to the LTE small cell, which effectively anchors the session. The flows are combined at the LTE small cell and then sent on to the evolved packet core (EPC) and from there to the Internet.
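A purely conceptual sketch of that split-and-recombine flow is below. The function names, the Wi-Fi/LTE split ratio, and the packet format are all illustrative inventions, not anything from a 3GPP spec.

    from collections import deque

    def split_bearer(packets, wifi_share=0.6):
        """Send a fraction of the bearer over the Wi-Fi tunnel; the rest goes natively over LTE."""
        wifi_leg, lte_leg = deque(), deque()
        for i, pkt in enumerate(packets):
            (wifi_leg if (i % 10) < wifi_share * 10 else lte_leg).append(pkt)
        return wifi_leg, lte_leg

    def recombine_at_small_cell(wifi_leg, lte_leg):
        """The anchoring LTE small cell merges both legs back into order before the EPC."""
        return sorted(list(wifi_leg) + list(lte_leg), key=lambda pkt: pkt["seq"])

    packets = [{"seq": n, "payload": f"segment-{n}"} for n in range(20)]
    wifi_leg, lte_leg = split_bearer(packets)
    merged = recombine_at_small_cell(wifi_leg, lte_leg)
    assert [p["seq"] for p in merged] == list(range(20))   # session looks whole to the EPC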

The big advantage of this approach is that all Wi-Fi traffic can benefit from the services provided by the mobile operator's EPC. These services include billing, deep packet inspection, lawful intercept, policy, authentication, and the list goes on. If the LTE signal is lost, this service will drop and the user can reinitiate an Internet connection over Wi-Fi. This approach is somewhat similar to multi-link or multi-path TCP, except that the traffic is combined in the cellular RAN rather than back in the Internet.

LTE + Wi-Fi Link Aggregation would require that LTE small cells be deployed in the venue, and that any Wi-Fi APs in the venue be software-upgraded to support LWA. The Wi-Fi APs can also continue to support non-LWA traffic on a separate SSID, potentially making it the best of both worlds: more upside than using LTE in unlicensed bands, with none of the downside. As such, LWA becomes a solution that doesn't impact the unlicensed band while leveraging existing Wi-Fi access points and improving indoor cellular performance.


LTE and Wi-Fi Aggregation         


Now What?

The convergence of Wi-Fi and LTE small cell technology will play out over the remainder of the decade.  The end result will be to enable an always best-connected experience for the user.  LTE-U, LAA-LTE, LWA, and multi-link TCP are all options for converging these two great radio technologies and there are others as well.  The future looks bright for carrier grade Wi-Fi technology and LTE small cells.