As the use of wireless connectivity grows across the globe – and grows at a startling rate – bandwidth is being squeezed. Bart Saelets, solution architect, F5, investigates whether we really are heading towards a ‘capacity crunch.’
When you work for a tech company, you’re always on call for support. I remember fixing a family member’s Wi-Fi connection that had slowed to a crawl. “Maybe our Wi-Fi’s nearly full?” they’d suggested.
A full Wi-Fi network wasn’t their problem, but it wasn’t such an outlandish suggestion. Hitting bandwidth capacity is an issue the industry is slowly facing up to. By 2020, there will be 4.6 billion mobile subscribers across the globe, according to the GSMA, and those with more than one mobile service – say, one for work and one for personal use – will push the total number of subscriptions even higher. Over the same period, demand for spectrum is expected to triple.
For service providers, the reality is that at the end of the communication path there is less bandwidth available. As a result, overall network performance can suffer and average latency rises, degrading the user experience. Slow data transfer rates and high end-to-end latency are a real thorn in the side for service providers: they disappoint customers, drive subscriber churn and cost revenue.
Traditionally, methods such as video compression and caching have helped, but the rise of encrypted connections and developments such as HTTP/2 have made these techniques less effective.
Reliable and consistent data delivery is achievable, however, even as more and more subscribers are added to the network. The key is to optimise the TCP connections that deliver data to and from subscribers, using a solution that sits between the Internet and the wireless network.
This way it can manage both sides of the connection – the wireless network on the subscriber side and the wide area network on the internet side – ensuring the different performance characteristics of these network technologies are accounted for.
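The idea of managing both sides of the connection independently can be illustrated with a minimal split-TCP relay. This is only a sketch of the general technique, not F5’s implementation – the function names are illustrative, and real deployments handle many connections concurrently with far more care:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while (chunk := src.recv(65536)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass

def relay(subscriber_sock, origin_host, origin_port):
    """Terminate the subscriber's TCP session locally and open a
    separate session towards the origin server.  Each leg then has
    its own congestion window, buffers and retransmission state, so
    loss on the radio side does not throttle the internet side."""
    origin_sock = socket.create_connection((origin_host, origin_port))
    threading.Thread(target=pipe, args=(subscriber_sock, origin_sock)).start()
    threading.Thread(target=pipe, args=(origin_sock, subscriber_sock)).start()
```

Because the relay terminates each TCP session itself, the two legs can be tuned with entirely different settings, which is what makes the split worthwhile.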
There’s a reason this is an important step: the Internet side of the connection doesn’t generally suffer from much congestion, latency or packet loss, and therefore requires different TCP settings than the wireless side of the network, which does suffer from those issues. Standard TCP stacks in web servers are often fine-tuned for internet-side characteristics only, and can therefore perform poorly over radio networks.
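To make the contrast concrete, here is what per-side tuning might look like using Linux socket options. These are illustrative settings under assumed conditions – they assume a Linux kernel with the CUBIC and BBR congestion-control modules available, and the specific values are examples, not anyone’s production configuration:

```python
import socket

def tune_radio_side(sock):
    """Example settings for the lossy, high-latency wireless leg
    (Linux-specific; values are illustrative only)."""
    # BBR estimates the bottleneck bandwidth rather than treating
    # every packet loss as congestion, which suits radio links
    # where loss is often unrelated to congestion.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    # A larger send buffer keeps the pipe full despite high RTT.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)

def tune_internet_side(sock):
    """Example settings for the low-loss, low-latency wired leg."""
    # Loss-based CUBIC behaves well on clean wired paths.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
    # Disable Nagle's algorithm to avoid delaying small writes.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

Applying a different profile to each leg of a split connection is the essence of the approach described above: the server-facing socket is tuned for the wired path, the subscriber-facing socket for the radio path.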
A more modern approach to TCP optimisation is one that adapts to both sides of the connection. At F5 we are facing up to service provider demands through TCP Express, which is part of our BIG-IP platform. One advantage of this approach is that the enhancements can be applied to each individual connection, meaning every one is tuned to get the maximum benefit.
And it’s not just bandwidth that can be improved with TCP optimisation. For example, network monitoring technologies can be used to improve the stability of the connection. Additionally, because the solution optimises the TCP protocol itself rather than inspecting the content of the connection, encrypted traffic can be optimised as well.
A TCP optimisation solution of this nature can offer anywhere between a 15 and 100 per cent improvement in broadband data transfer rates. More than ever, the pressure is on to future-proof with the user experience in mind. Those that delay making the leap with solutions like TCP optimisation will soon feel the heat.