WiGig in mobile devices

Qualcomm has acquired Wilocity, a maker of chipsets for the 60 GHz wireless band – IEEE 802.11ad. Qualcomm plans to enable its mobile device chipsets in this band, so we should see rapid growth in the inclusion of 60 GHz (WiGig) in mobile devices.

Short range, high throughput, and out of band with existing WiFi, WiGig creates new opportunities. WiGig data transport from cheap, low power, small infrastructure equipment (see our thinking on Myrmidon access points) will be a great enabler for ubiquitous high throughput wireless connectivity. As consumer devices provide a rapid rollout of WiGig endpoints, the ROI for this kind of access point becomes much better. Expect to see this new kind of access point coming to market in the short to medium term. A network built using them will make a form of ‘fog computing’ more viable, because high bandwidth wireless connectivity to proximate processing and storage services will have significant advantages over longer, more contended network paths.

Dual band radios in mobile devices have been around for some time, and tri-band radios will arrive soon. At some point it will become feasible to provide two or three concurrent radios in mobile devices, with the obvious associated advantages. The question is: when will such radio arrays arrive? Power consumption is probably the main constraint on this kind of connectivity. Battery technology is the subject of intense research and we should expect impressive improvements to come to market soon. Nonetheless, concurrent multi-radio solutions will need a rapid way to bring radios in and out of service to reduce power consumption.

MU-MIMO soon and trends

In April Qualcomm announced their forthcoming 802.11ac MU-MIMO chipsets. These include the QCA 9990 and QCA 9992 chipsets for business grade access points, with 4 and 3 stream radios respectively. Their client device chipsets provide 1 and 2 streams. All these MU-MIMO chipsets provide up to 80 MHz channel width, not 160 MHz. Their highest link speed is then 1.73 Gbps on 4 stream access point and ‘home router’ chipsets, while their client device chipsets with 2 streams have a highest link speed of 867 Mbps. So, for an all-Qualcomm setup the upper limits for access points and ‘home routers’ are more usefully considered as aggregate capacity limits, e.g. two 2 stream clients could in theory transfer at 1.73 Gbps in total. In practice, of course, it is more likely to be about half of that or less. As these chipsets were “expected to sample in the second quarter of 2014” we can expect them in products in the second half of 2014, along with some of their competitors – Broadcom and Quantenna have made similar announcements.
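As a minimal sketch of where these headline figures come from, the arithmetic below applies 802.11ac PHY parameters from the amendment (data subcarriers per channel width, MCS modulation and coding, OFDM symbol time plus guard interval). The function and its names are mine, for illustration only.

```python
# A minimal 802.11ac link speed calculator (parameters from the 802.11ac
# amendment; names and structure are illustrative).

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # per channel width (MHz)
MCS = {7: (6, 5/6), 8: (8, 3/4), 9: (8, 5/6)}  # index -> (bits/subcarrier, coding rate)

def phy_rate_mbps(mcs, streams, width_mhz, short_gi=True):
    """Theoretical 802.11ac link speed in Mbps."""
    bits, coding = MCS[mcs]
    symbol_us = 3.2 + (0.4 if short_gi else 0.8)  # OFDM symbol plus guard interval
    data_bits_per_symbol = DATA_SUBCARRIERS[width_mhz] * bits * coding * streams
    return data_bits_per_symbol / symbol_us  # bits per microsecond == Mbps

print(phy_rate_mbps(9, 4, 80))  # ~1733 Mbps: the 4 stream 1.73 Gbps figure
print(phy_rate_mbps(9, 2, 80))  # ~867 Mbps: the 2 stream client figure
```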

With MU-MIMO, access points can service multiple stations simultaneously, so the available streams can be more fully utilised. The most important effect of this is to increase the effective capacity of the spectrum. Obviously this is good news for WLAN owners and managers whose spectrum is operating around capacity. Although MU-MIMO does not make an individual connection faster than before, it does provide more uncontended air time to clients, so they should also feel the benefit as better transfer times.
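To illustrate why clients should feel the benefit, here is a back-of-the-envelope comparison under idealised assumptions (no protocol overheads, four single stream clients saturating a 4 stream, 80 MHz, MCS 9 access point); the numbers are illustrative only.

```python
# Idealised airtime arithmetic: four single stream clients on a 4 stream AP.
rate_per_client = 433.3  # Mbps per stream at 80 MHz, MCS 9, short GI

su_throughput_each = rate_per_client / 4  # SU-MIMO: clients take turns on the air
mu_throughput_each = rate_per_client      # MU-MIMO: all four served in parallel
print(su_throughput_each, mu_throughput_each)  # ~108 vs ~433 Mbps per client
```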

As MU-MIMO is computationally expensive, we are going to see more PoE+ equipment. As more channels become available in the 5 GHz band – and more are being added – it makes sense to deploy access points with two or more omnidirectional-antenna radios where spectrum is highly utilised. This will add further to power requirements, so we may see a growing market for mid-span PoE+ injectors.

802.11ac and MU-MIMO are coming at a good time, as expectations and use of WiFi are soaring; a trend that will continue as the Internet of Things and wearable devices gain traction. If rumours are correct, the ever growing bandwidth needs of static and moving images will soon be joined by the demands of holographic displays. Obviously, with all this data aggregating over WiFi onto Ethernet, we need 10 GbE at a sensible price soon.

WiFi networks and crowdsourcing an IT strategy

In a recent survey, 1 in 3 British workers said they rely on WiFi to do their jobs effectively, and 61% of those believe their home WiFi to be better than their workplace WiFi. That survey of 2,004 randomly selected wireless-reliant UK workers aged 18+ was commissioned by Aerohive Networks – a US based maker of premium business grade WiFi equipment. Their report contains a number of observations on productivity problems in the workplace, with unreliable connectivity considered the most disruptive, power cuts second, and ‘wireless temporarily down’ third. Aerohive report that “Up to 40% have missed deadlines and opportunities at work due to poor [wireless] connectivity”. This negative experience of WiFi in the workplace relative to the home supports the widely held view that WiFi use in the workplace is led by employees, not by IT department strategists. From this report one might infer that workplace WiFi connectivity is behind the expectations and needs of some employees, who cannot work as productively as they would like.

Modern mobile working practices are more typical of younger people. It is probably significant that they have a more technologically aware mind-set, along with higher expectations of their working environment developed in technologically rich educational settings. Recently we upgraded WiFi in some student accommodation. While we were investigating network issues, two students in a common area, both using WiFi, were asked if they also used a wired connection. Only one of the two did, even though the one using only WiFi was working on a laptop with a port for a wired connection. Obviously this is a vanishingly small sample, but the scenario is typical, and there are two important points to draw from it. Firstly, both students were spending time working in a communal area using WiFi, but that was their choice, not a requirement. Secondly, one did not even take the trouble to use a wired connection when it was available and provided a better service – the reason we were there.

Mobility is at least in part about a more social and collaborative style of working. It allows people to take what they are doing with them. They are controlling the technology rather than the technology controlling them. In science fiction movies nothing is ever plugged in; everything is wireless, because that is how we like to see ourselves: free to move and in control of powerful technology. In science fiction’s problem scenarios, technology takes control, even if it is wielded by other people. Wireless connectivity, then, is an essential enabler of expectations in working practices, and currently wireless connectivity is dominated by WiFi.

A recent webinar by LogMeIn reported on their survey of almost 1,400 IT and non-IT professionals globally, concerning modern trends in IT that could collectively be described as crowdsourcing an IT strategy. They simplify their findings into four macro IT trends:

Firstly – use of personal devices for business; the so called bring your own device (BYOD) trend. Employees chose the technology and IT departments provided WiFi connectivity. This was the start of significant employee contributions to the IT strategy, i.e. crowdsourcing the IT strategy.

Secondly – an empowered, connected, and mobile workforce. These employees (who as discussed above are generally younger) expect mobility and ubiquitous wireless connectivity. This group are probably the strongest drivers of WiFi expectations in the Aerohive Networks survey above.

Thirdly – applications sourced and managed by employees; the so called bring your own application (BYOA) trend. Employees report they do not always feel the need to seek the approval of the IT department, particularly to address problems localised to themselves and their small groups. This has resulted in a strong move away from enterprise grade software to the cloud ‘app’ based approach (cloud based processing and storage), which has perceived advantages described in terms like convenience, ease of use, agility, speed, less hassle, and flexibility. However, this piecemeal approach has no overarching strategy and little or no appreciation of broader consequences.

Fourthly – business data is increasingly in the cloud. A major advantage of the cloud is location anonymity, but that can also be a concern for some data.

IT professionals see the consequence of these four macro trends as a less secure and less controlled IT world, with 42% expecting the trend to continue and 35% expecting it to remain at about the same level. The main concern, for 54% of IT professionals, is a lack of security for business data in the cloud. The survey also indicated that 29% of IT departments monitor and modulate use of apps, accepting their inevitability while trying to capture their advantages; 39% broadly ignore them, not yet knowing how to react; and 30% actively suppress use of apps not sanctioned by the IT department. This last reaction persists despite strong anecdotal evidence that employee productivity is improved by these four trends.

We can see from these two surveys that IT strategies in the workplace are now partially emerging from employee decisions. At this time no coherent response to the crowdsourcing of IT strategy has been established among IT professionals. However, it is accepted that a strong WiFi network is a key enabling technology for the modern mobile working practices expected by an empowered, connected, and mobile workforce. The way forward will likely be found in technologies being developed to modulate these trends, gaining the best from them while minimising problems. Certainly, while it is still possible, the old arrangement of IT departments totally controlling IT use and strategy in the workplace looks increasingly outdated and likely to hold back productivity.

‘Bring your own access’ will accelerate the trend for IT strategy crowdsourcing. Personally controlled mobile Internet connectivity can circumvent corporate Internet connectivity, so IT departments will then be unaware of the data moving in and out of the business. As data prices fall, coverage and speeds improve, and employees become more technologically enabled, this trend will accelerate.

8x8x8 MU-MIMO WiFi in 2015

Quantenna says it plans to release 8x8x8 MU-MIMO chipsets in 2015. This will be a very important development for anyone owning a WiFi network, and of course for WLAN/LAN professionals. 8 stream MU-MIMO can provide very high aggregate throughput to the LAN, making more efficient use of the WiFi infrastructure but requiring a 10 GbE LAN to make full use of it; a rough sizing sketch follows below.
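As a rough, hedged sizing exercise (assuming MCS 9 with a short guard interval, and the per-stream rates derived earlier), an 8 stream radio's theoretical aggregate comfortably exceeds a single GbE uplink:

```python
# Theoretical aggregate ceiling for an 8 stream 802.11ac radio (MCS 9, short GI).
per_stream_80mhz = 433.3   # Mbps per stream at 80 MHz
per_stream_160mhz = 866.7  # Mbps per stream at 160 MHz

print(8 * per_stream_80mhz)   # ~3.5 Gbps
print(8 * per_stream_160mhz)  # ~6.9 Gbps, well beyond a single GbE uplink
```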

The significance of the ‘MU’ in 802.11ac MU-MIMO

Firstly, what is the ‘MU’ feature in 802.11ac MU-MIMO? Put simply, it allows multiple Wi-Fi client devices (e.g. mobile phones, tablets, and laptops) to exchange data with an access point radio in parallel. Previously, only one Wi-Fi client device at a time could exchange data with an access point radio. An important consequence is that the aggregate throughput of access points can spend longer at higher levels, making more efficient use of network resources. Another consequence is that traffic analysis will be more difficult when there are multiple simultaneous talkers.

The number of Wi-Fi client devices that can exchange data simultaneously with an access point radio is limited by the number of spatial streams each supports. The 802.11ac amendment to the 802.11 standard allows for radios with up to eight spatial streams, although only recently have four stream MU-MIMO processors become available. Each spatial stream is a distinct stream of data that requires an antenna of its own linked to one radio. A connection between an access point and a Wi-Fi client device will use one or more streams. In practical terms this means a four stream 802.11ac MU-MIMO processor in an access point can communicate in parallel with four single stream client devices, or two single stream client devices and one two stream client device, or two client devices each using two streams, or of course one four stream client device (the sketch below enumerates the possibilities).
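A small illustrative sketch (the function is mine, not from any standard) that enumerates the ways a 4 stream access point could split its streams across simultaneous clients; note it also finds the one single stream plus one three stream grouping, which is equally valid.

```python
# Enumerate the ways an AP's spatial streams can be split across clients,
# each client using between 1 and max_per_client streams.

def groupings(streams, max_per_client=4, prefix=()):
    """Yield non-decreasing partitions of the AP's streams across clients."""
    if streams == 0:
        yield prefix
        return
    start = prefix[-1] if prefix else 1  # non-decreasing avoids duplicate orderings
    for n in range(start, min(streams, max_per_client) + 1):
        yield from groupings(streams - n, max_per_client, prefix + (n,))

for g in groupings(4):
    print(g)  # (1, 1, 1, 1), (1, 1, 2), (1, 3), (2, 2), (4,)
```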

At this time a typical 802.11ac setup may use an 80 MHz channel width and an 800 ns guard interval, with connections perhaps achieving MCS 7. If that setup were fully MU-MIMO enabled it would then have a theoretical aggregate throughput of 4 × 292.5 Mbps, i.e. 1.17 Gbps. Out of interest, I performed a test as I wrote this, in very good RF conditions, using a Sony Xperia Z Ultra and a Samsung Galaxy NotePRO 12.2 connected to a D-Link DAP-2695. I used them for no other reason than that they happen to be sitting on the next desk and all are very current. All are 802.11ac devices, but not MU-MIMO. The Sony device achieved a link speed of 325 Mbps with RSSI at -42 dBm; it delivered 205.7 Mbps up and 207.95 Mbps down. The Samsung device achieved a link speed of 866 Mbps with RSSI also at -42 dBm; it delivered 208.87 Mbps up and 413.89 Mbps down. These were the best figures from among a handful of tests on each client device. Some test results achieved only half of these rates or less, but most were similar. These links are clearly 80 MHz, 400 ns, at MCS 7 and MCS 9 for the Sony and Samsung respectively, with one and two streams respectively.

Anyway, if these devices were MU-MIMO then my best aggregate download throughput for two Xperias and one NotePRO (for example) would be 2 × 207.95 + 413.89 = 829.79 Mbps. Add a client on a 600 Mbps 2.4 GHz radio and we can see it is possible for an access point to make full use of a GbE link. The theoretical throughput of GbE is 118,660,598 data bytes per second (about 949 Mbps) using a 1460 data byte Maximum Segment Size in a normal Ethernet frame of 1518 bytes containing a Maximum Transmission Unit (MTU) of 1500 bytes. Using a 9K MTU improves this to about 123,916,800 data bytes per second, i.e. about 991 Mbps. In practice, of course, these theoretical GbE maximums cannot be achieved, and Wi-Fi transfer rates are likely to be about half of the link speed.
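For completeness, a minimal sketch of the GbE goodput arithmetic above. It assumes TCP/IPv4 with no header options (40 bytes of headers per segment) and counts the full on-wire cost of each frame, including the 8 byte preamble and 12 byte inter-frame gap; the jumbo frame result lands within a rounding error of the ~991 Mbps quoted above, as the exact value depends on the assumptions used.

```python
# GbE goodput: TCP payload bytes per second after Ethernet and TCP/IP overheads.

def gbe_goodput_bytes_per_s(mtu=1500):
    mss = mtu - 40                       # TCP payload per frame (IPv4 + TCP headers)
    on_wire = mtu + 18 + 8 + 12          # Ethernet header/FCS + preamble + IFG
    return 125_000_000 * mss // on_wire  # GbE carries 125,000,000 bytes per second

print(gbe_goodput_bytes_per_s(1500))  # 118,660,598 bytes/s, about 949 Mbps
print(gbe_goodput_bytes_per_s(9000))  # ~123,921,221 bytes/s, about 991 Mbps
```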

Let us consider how multiple SSIDs relate to this ‘MU’ feature. An access point radio operates on one logical channel at a time. That logical channel may in fact be composed of multiple contiguous or discontiguous ‘bonded’ channels that behave as one large channel. These techniques increase the amount of spectrum used by a radio for a logical channel, and so its bandwidth; they do not provide distinct parallel streams of data. As SSIDs are configured to a band, and thence to a radio, they all share the same logical channel of their radio. Consequently all SSID traffic has to take a turn on its radio’s configured logical channel, unless that radio is MU-MIMO enabled, in which case SSID traffic might travel over one or more spatial streams, depending on Wi-Fi client device MU-MIMO capability, and so could travel in parallel with other SSID traffic. So, SSIDs provide no innate transmission parallelism; that can only come from MU-MIMO enabled 802.11ac radios.

The balance of wired and wireless

Recently I installed a 4G LTE router at a site that has a poor wired Internet service with no plans for improvement, but a choice of proximate 4G LTE base stations. The resulting wireless throughput is better, the service more reliable, and the prospect of further improvements imminent – partly because of the increasing competition between the wireless Internet service providers (WISPs) offering 4G. According to its ISP, the wired infrastructure is not economically viable to upgrade. This is a surprising statement given that the area is very densely populated with consumers and wired infrastructure. Perhaps what they mean is that not enough disgruntled customers are leaving for 4G to justify spending on upgrading the service yet. This is not the first area where I have come across that attitude from an ISP. The first time I was told this was also in a built-up area, but it had fewer consumers and more businesses, which are probably paying for leased lines anyway, so it was easier to see why there.

Anyway, this attitude made me wonder where it is economically viable to put in at least fibre to the cabinet. Obviously the WISP base stations that serve this recent site need to aggregate a lot of data, and at least one of them has no wireless backhaul antennas, so I suspect it is using fibre for backhaul. I think this is a case where wired infrastructure can more easily make money: it has the throughput advantage (at the moment) that can justify the cost of digging in a heavily developed area with strong property laws. I expect ISPs to continue to cede customers to WISPs, and wired infrastructure to further retrench and focus on highly aggregated throughput.

Now suppose some clever researcher finds some scrap of information intrinsic to electromagnetic radiation that allows distinct transceivers, or even just groups of them, to be identified. This would make a dramatic difference to wireless communication, because spectrum would become less contended. In fact something like this has already been announced, in the shape of pCells. I hope for and expect more innovations of this kind. When they arrive they will have a profound effect on wireless communication, and wires will retrench further.

Wi-Fi site surveys and two stream Wi-Fi for mobile phones

I recently asked the maker of the Ekahau site survey tool if they could add a feature that allows visualisation of different qualities of client transceiver. The reason I asked is that the range of Wi-Fi client ability continues to expand. Very soon the new Broadcom BCM4354 chip will deliver two-stream 802.11ac Wi-Fi to smartphones, while some very old clients remain in use, along with clients of poor design and/or build quality. Some websites report that the Samsung Galaxy S5, using the BCM4354, should be available in April, and that the iPhone 6 will also use it. So the Broadcom BCM4354 will rapidly expand the range of Wi-Fi client ability that WLANs are expected to manage. I think site survey tools should allow me to deliver reports that visualise this diversity of connection quality. If you have to provide a certain level of Wi-Fi service to a diverse set of clients, you need to know what they will experience, not just what can be obtained with the high end equipment that WLAN professionals use.

Wireless data transport growth

This is a link to Cisco’s latest interesting and comprehensive set of statistics and predictions about mobile data traffic from 2013 to 2018. Note that the mobile data traffic discussed in it is traffic passing through mobile operator macrocells. This post is more broadly interested in the growth of wireless data transport, especially as it concerns Wi-Fi.

In their study Cisco note that “globally, 45 percent of total mobile data traffic was offloaded onto the fixed network through Wi-Fi or femtocell in 2013.” They add that “without offload, mobile data traffic would have grown 98 percent rather than 81 percent in 2013.” The study predicts 52% mobile offload by 2018. Nonetheless, it still predicts a 61% compound annual growth rate in global mobile data traffic from 2013 to 2018 (i.e. an increase of 10.6 times), from 1.5 exabytes per month at the end of 2013 to 15.9 exabytes per month by 2018. The study further predicts that “the average smartphone will generate 2.7 GB of traffic per month by 2018, a fivefold increase over the 2013 average of 529 MB per month”, i.e. roughly a 38% compound annual growth rate.

In more nascent areas, the study projects that “globally, M2M connections will grow from 341 million in 2013 to over 2 billion by 2018, a 43 percent CAGR”. It does not estimate traffic volumes for M2M, but does note its overlap with wearables. The study estimates that “there will be 177 million wearable devices globally, growing eight-fold from 22 million in 2013 at a CAGR of 52 percent”, but “only 13 percent will have embedded cellular connectivity by 2018, up from 1 percent in 2013”. The study also considers that “globally, traffic from wearables will account for 0.5 percent of smartphone traffic by 2018” and will “grow 36-fold from 2013 to 61 petabytes per month by 2018 (CAGR 105 percent)”. It also states that “globally, traffic from wearable devices will account for 0.4 percent of total mobile data traffic by 2018, compared to 0.1 percent at the end of 2013”.

The study projects no significant change in the ordering of mobile data traffic shares by 2018: mobile video 69.1%, mobile web/data 11.7%, mobile audio 10.6%, mobile M2M 5.7%, and mobile file sharing 2.9%. It expects the following data consumption by device type by 2018 (all MB per month): laptop 5,095; 4G tablet 9,183; tablet 5,609; 4G smartphone 5,371; smartphone 2,672; wearable device 345; M2M module 451; non-smartphone 45.
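As a quick sanity check of the growth rates quoted above (2013 to 2018 spans five growth years), the standard formula cagr = (end/start)^(1/5) − 1 reproduces the study’s figures:

```python
# Sanity-checking the quoted compound annual growth rates (2013 -> 2018).

def cagr(start, end, years=5):
    return (end / start) ** (1 / years) - 1

print(f"{cagr(1.5, 15.9):.0%}")  # total mobile traffic: ~60% (study quotes 61%)
print(f"{cagr(529, 2672):.0%}")  # per-smartphone traffic: ~38%
print(f"{cagr(22, 177):.0%}")    # wearable devices: ~52%
```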

As we can see from Cisco’s predictions, Wi-Fi is set to become even more significant for offloading. To be clear, in their study offloading pertains to devices enabled for both cellular and Wi-Fi connectivity, excluding laptops. The study says that “offloading occurs at the user/device level when one switches from a cellular connection to Wi-Fi/small cell access”. Eventually offloading should be substantially simplified by Hotspot 2.0 enabled equipment, although it will take years for Wi-Fi CERTIFIED Passpoint equipment deployments to reach significant levels. The impact of Hotspot 2.0 is not mentioned in Cisco’s study.

Obviously Wi-Fi is far more than just an offloading adjunct for mobile operators. It also provides adaptability to local needs and containment of data, and it will facilitate the Internet of Things and the Internet of Everything. Ownership of wireless network infrastructure allows wireless data transport to be better matched to local needs; for example providing more control of costs, throughput, latency, and service reliability, along with competitive advantages through differentiating functionality. Wireless network infrastructure ownership also allows data to remain local, circumventing the security concerns and compliance requirements associated with data passing through equipment owned by others. Finally, ownership is the only viable approach for connecting massive numbers of diverse M2M and wearable devices to the Internet of Things and the Internet of Everything. ‘Fog computing’ promotes hyper local data processing, which its proponents argue is necessary to manage the rapid growth in transported data expected from the Internet of Everything. Naturally this makes no sense without hyper local connectivity, which is currently dominated by Wi-Fi. Data cabling is clearly not adaptable enough to handle massive, transient, mobile, and rapidly scaling connectivity. So Wi-Fi is destined to continue its rapid growth, not just on its own merits as a general purpose wireless data transport that will continue to gain new uses, but also as a convenient offloading platform for mobile operators and a network edge for the Internet of Things and the Internet of Everything.

Many organisations, recognising the significance of Wi-Fi, have plans to expand its abilities with improved standards, more license free electromagnetic spectrum, and enhanced functionality and technology. Others are developing wireless data transport systems with more specialised uses to accompany Wi-Fi, such as Bluetooth, Zigbee, WirelessHD, and NFC. However, wireless data transport for the Internet of Things and Internet of Everything needs wireless access points to be low cost, so they can be deployed in large quantities. They need to handle very high numbers of transient and mobile connections, and provide high throughput for uses such as video. They also need to operate at short range, to make better use of their license free spectrum in a space, and they should operate out-of-band with other transceivers to maintain service levels. These requirements are difficult to address coherently.

We have previously suggested the concept of a ‘myrmidon’ access point: in essence a simple (and therefore low cost) short range access point, operating out-of-band to Wi-Fi, that would specialise in handling very high numbers of connections and high throughput. Myrmidons would defer all or most other functionality to proximate, far fewer, more intelligent (and so more expensive) access points and/or other specialist ‘orchestration devices’. WiGig is an obvious choice for myrmidons as it is out-of-band to Wi-Fi, has short range and high throughput, and is controlled by the Wi-Fi Alliance. Certainly Cisco’s predictions concerning the numbers of connections from M2M and wearable devices give pause for thought, especially in light of how few are predicted to have their own cellular connectivity. Not using Wi-Fi is an expensive and slow-to-deploy route. This is why we believe the myrmidon access point concept is the most natural approach, as it can be more easily integrated with Wi-Fi. Nonetheless, other approaches using Wi-Fi as it currently exists are possible, especially when more spectrum is made available.

Fog computing

Cisco says its vision for fog computing is to enable its devices to do non-network-specific data processing, using its new IOx capability. It describes this processing as “at the network edge”. Cisco argues that a shorter distance travelled over the network by data should reduce total network load and latency, and so ultimately the cost of transporting the increasing amounts of data arising at the network edge from developments like the Internet of Things (IoT).

Development of Cisco’s scheme is very involved for the benefits it can deliver. Much data processing involves fetching data from network-distributed sources such as databases, file servers, and the Internet, so for some data processing tasks network edge processing may even make the situation worse. Consequently, while the scheme can do what Cisco says, it is by no means a general purpose solution.

If Cisco’s concern is reducing network load, latency, and so data transport costs, then it is worth pointing out that network equipment much more capable than that typically in use has been available for many years. The problem with it is affordability. No doubt innovation that enables processing in network equipment will find uses, but for handling increasing network load and providing acceptable latency it is simpler, cheaper, and more generally applicable to innovate around making better performing network equipment more affordable.

Of more concern is that Cisco’s approach introduces the potential for rogue and badly written software to negatively impact network infrastructure. It will also probably lead to a fragmented market in programming for network equipment. Even if all vendors agree to support at least one particular programming language and runtime environment, vendors will inevitably provide APIs that make use of their specific equipment features. Once these are used, vendor lock-in tends to follow.

Enabling general data processing in network equipment is unnecessary, a suboptimal interpretation of the otherwise good vision of fog computing, and likely to create more problems than it solves.

Processing in fog computing could instead be handled by many specialised processing assets, managed by policy and running a single standardised runtime environment, with the ability to move running processes between assets on demand as directed by automated policy based controllers. With this approach processes are not managed as associated with a device, but with a policy; each policy selects appropriate devices from those available in the pool of assets (a sketch of this idea follows below). Such a network would significantly simplify managing processing, and processing assets, in complex and dynamic systems. It is important that these assets exist at a broad range of price points, particularly low prices, as that allows processing to be scaled in fine grained increments.

The need for a specialised class of device is concomitant with the general trend in IT of functionality being progressively devolved to more specialised devices; something Cisco’s proposition goes against. For example, we are now at the beginning of a boom in wearable processing, where specialised devices aggregate into wireless personal area networks (WPANs), and intra-body processing, where specialised devices aggregate into body area networks (BANs). These kinds of devices are not just networked sensors; they do specialised data processing but are still networked. Obviously this proposition is not without concerns, but its processing mobility would address some of the aims that fog computing is proposed to address. Intel’s Next Unit of Computing could be seen as a forerunner of this kind of asset, but the ideal class of device needs more development directed at these specialised roles, rather than being sold as low power versions of existing classes of devices like desktop computers.
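A hypothetical sketch of what policy driven placement over a pool of processing assets could look like; the names (Asset, Policy, place) and the selection criteria are mine for illustration, not any vendor’s API.

```python
# Policy based placement: pick a processing asset from a pool according to
# a policy, rather than binding processing to a particular device.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    latency_ms: float  # network distance from the data source
    free_cores: int

@dataclass
class Policy:
    max_latency_ms: float
    cores_needed: int

def place(policy, pool):
    """Select the nearest asset that satisfies the policy, if any."""
    candidates = [a for a in pool
                  if a.latency_ms <= policy.max_latency_ms
                  and a.free_cores >= policy.cores_needed]
    return min(candidates, key=lambda a: a.latency_ms, default=None)

pool = [Asset("edge-node-1", 2.0, 1), Asset("campus-dc", 8.0, 16),
        Asset("cloud-region", 45.0, 512)]
print(place(Policy(max_latency_ms=10.0, cores_needed=4), pool))  # campus-dc
```

A controller built along these lines could re-run the placement as assets join, leave, or load up, moving processes accordingly; the policy, not the device, becomes the unit of management.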

Press release by Cisco on their vision for fog computing

802.11ac approved

The IEEE has finally announced approval of the IEEE 802.11ac amendment to the 802.11 standard. It seems like we waited forever. Understandably, makers were keen to sell us products that take advantage of it before it was finalised; that created the expectation. As long as buyers are made aware of the risks associated with early adoption, I am happy to have that choice. 802.11ac is expected to be an important step for Wi-Fi, but as it operates only in the 5 GHz band, its shorter range will increase system costs where more access points are required for coverage.