Monday, October 27, 2014

HTTP 2.0, SPDY, encryption and wireless networks

I had mused, three and a half years ago, at the start of this blog, that content providers might decide to encrypt and tunnel traffic in the future in order to retain control of the user experience.

It is remarkable that wireless browsing is increasingly becoming the medium of choice for access to the internet, yet the technology it relies on is still designed for fixed, high-capacity, lossless, low-latency networks. One would think a technology should be designed for its primary (and most challenging) use case and then adapted to more forgiving conditions, rather than the other way around... but I am ranting again.

We are now definitely seeing this prediction accelerate since Google introduced SPDY and proposed it as the basis for HTTP 2.0.
While the latest HTTP 2.0 draft is due to be completed this month, many players in the industry are quietly but decidedly committing resources to the battle.

SPDY, in its current version, does not enhance, and in many cases degrades, user experience in wireless networks. Its reliance on TCP leaves it too dependent on round-trip time, which in turn creates race conditions in lossy networks. SPDY can actually contribute to congestion in wireless networks rather than reduce it.
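To make that round-trip-time sensitivity concrete, here is a minimal back-of-the-envelope sketch using the simplified Mathis approximation for TCP throughput (throughput ≈ MSS / (RTT · √loss), constant factor dropped); the RTT and loss figures are illustrative assumptions, not measurements of any particular network:

```python
# Back-of-the-envelope estimate of single-connection TCP throughput using
# the simplified Mathis approximation: throughput ~ MSS / (RTT * sqrt(loss)).
# The RTT and loss figures below are illustrative assumptions only.
from math import sqrt

MSS_BITS = 1460 * 8  # typical maximum segment size, in bits

def tcp_throughput_bps(rtt_s: float, loss_rate: float) -> float:
    """Rough upper bound on the throughput of one TCP connection."""
    return MSS_BITS / (rtt_s * sqrt(loss_rate))

fixed = tcp_throughput_bps(rtt_s=0.020, loss_rate=0.0001)  # fixed-line-like
mobile = tcp_throughput_bps(rtt_s=0.150, loss_rate=0.01)   # cellular-like

print(f"fixed-line-like : {fixed / 1e6:5.1f} Mbps")
print(f"cellular-like   : {mobile / 1e6:5.2f} Mbps")
```

The same connection that sustains tens of Mbps on a fixed line collapses to well under 1 Mbps under cellular-like RTT and loss, and because SPDY multiplexes every object over a single TCP connection, one loss event stalls all of them at once.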

On one side, content providers are using net neutrality arguments to further their case for encryption. They are conflating security (the NSA leaks...), privacy (the Apple iCloud leaks) and net neutrality (equal, and if possible free, access to networks) concerns.

On the other side, network operators and vendors are trying to argue that net neutrality does not mean non-intervention, and that the good of users overall is subverted when some content providers and browser/client vendors use aggressive, predatory tactics to monopolize bandwidth in the name of QoE.

At this point, things are still fairly fluid. Google is proposing that most or all traffic be encrypted by default, while network operators are trying to introduce the concept of trusted proxies that can decrypt and re-encrypt traffic under certain conditions and with the user's assent.

Both attempts are, in my mind, short-sighted and doomed to fail; they are the result of aggressive strategies to establish market dominance.

In a perfect world, the device, the network and the content provider would negotiate service quality based on device capabilities, the subscriber's data plan, network capacity and content quality. Technologies such as adaptive bit rate could have been tremendously effective here, but the operative word in the previous sentence is "negotiate", which assumes collaboration, discovery and access to the relevant information to make decisions.
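As a thought experiment, here is a minimal sketch of what such a negotiation could look like if the three parties actually shared their constraints; the function, parameter names and numbers are hypothetical illustrations, not an existing protocol or API:

```python
# Hypothetical negotiation: pick the highest rendition acceptable to the
# device, the subscriber's plan and the network. Names and values are
# illustrative assumptions, not a real protocol or API.

def negotiate_bitrate_kbps(device_max_kbps: int,
                           plan_cap_kbps: int,
                           cell_share_kbps: int,
                           ladder_kbps: list[int]) -> int:
    """Return the best rendition no party objects to."""
    ceiling = min(device_max_kbps, plan_cap_kbps, cell_share_kbps)
    candidates = [r for r in sorted(ladder_kbps) if r <= ceiling]
    return candidates[-1] if candidates else min(ladder_kbps)

# A 720p-capable phone, a capped plan, a loaded cell, a typical bitrate ladder.
ladder = [400, 800, 1500, 3000, 6000]
print(negotiate_bitrate_kbps(device_max_kbps=3000,
                             plan_cap_kbps=2000,
                             cell_share_kbps=1200,
                             ladder_kbps=ladder))  # -> 800
```

The selection logic is trivial; the point is that two of those three inputs are invisible to the client today, which is precisely what makes genuine negotiation impossible.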

In the current state of affairs, adaptive bit rate is often subverted in order to seize as much network bandwidth as possible, which results in devices and service providers aggressively competing for bits and bytes.
Network operators, for their part, try to improve or control user experience by deploying DPI, transparent caches, pacing technology, traffic-shaping engines, video transcoding, etc.

Content providers assume that the highest content quality (HD, for video) equals the best experience for the subscriber, and therefore try to capture as much network resource as possible to deliver it. Browser, app and phone manufacturers likewise assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. The flaw is the assumption that the global optimum is the product of many local maxima, self-regulated by an equal and fair apportioning of resources. It betrays a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

This behaviour leads to a network where all resources are perpetually in contention and all end-points vie for priority and maximum resource allocation. From this perspective, one can understand that there is no such thing as "net neutrality", at least not in wireless networks. When network resources are over-subscribed, decisions are made as to who gets more capacity, priority or speed... The question becomes who should be in a position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When in contention, resources manage traffic according to obscure rules buried in load balancers, routers, base stations, traffic-management engines... This approach is the result of lazy, surface-level thinking. Net neutrality should be the opposite of non-intervention. Its rules should apply equally to networks, devices/apps/browsers and content providers if what we want to enable is fair and equal access to resources.

Now, who said access to wireless should be fair and equal? Unless the networks are nationalized and become government assets, I do not see why private companies, in a competitive market, could not manage their resources in order to optimize their utilization.

If we transport ourselves into a world where all traffic becomes encrypted overnight, networks lose the ability to manage traffic beyond allowing or blocking it and pinning high-level QoS settings to specific services. Network operators would be forced to charge exclusively for traffic, and everyone would pay per byte transmitted. The cost to users would become prohibitive as more and more video, at ever higher resolutions, flows through the networks. It would also mean that video providers could asphyxiate other services... More importantly, user experience would become the fruit of a fight between content providers over who can monopolize network capacity, which would go against any "net neutrality" principle. A couple of content providers could end up dominating not only the services but access to those services as well.
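A rough, purely illustrative calculation shows why a pay-per-byte model quickly becomes untenable for video (the price per GB is an arbitrary assumption, not any operator's tariff):

```python
# Illustrative cost of video under a pure pay-per-GB model.
# The price per GB is an arbitrary assumption for the sake of the argument.
price_per_gb_usd = 5.00      # assumed overage-style price
hd_bitrate_mbps = 5          # typical HD stream
hours_per_month = 30         # one hour of video per day

gb_per_hour = hd_bitrate_mbps * 3600 / 8 / 1000   # megabits -> gigabytes
monthly_gb = gb_per_hour * hours_per_month
print(f"{gb_per_hour:.2f} GB per hour, {monthly_gb:.1f} GB per month, "
      f"~${monthly_gb * price_per_gb_usd:.0f}/month for video alone")
```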

The best rationale against this scenario is commercial. Advertising is the only common business model that supports pay TV and many web services today. The only way to have an efficient, high-CPM ad model in wireless is to make the advertising relevant and contextual, and the only way that happens is if it is injected as close to the user as possible. That means collaboration. Network operators cannot provide subscriber data to third parties, so they have to anonymize and exploit it themselves. Which means that encryption, where needed, must occur after ad insertion, and ad insertion needs to occur at the network edge.

The most commercially efficient model for all parties involved is collaboration built around advertising, but current battle plans show adversarial models, where obfuscation and manipulation are used to reduce the opponents' margin of maneuver. Complete analysis and scenarios are in my video monetization report here.

Tuesday, October 21, 2014

Report from SDN / NFV shows part II

Today, I would like to address what is, in my mind, a fundamental issue with the expectations raised by SDN/NFV in mobile networks.
Two weeks ago I was in Dallas, speaking at SDN NFV USA and the Telco Cloud forum.

While I was busy avoiding the exchange of bodily fluids with everyone at the show, I got the chance to keynote a session (slides here) with Krish Prabhu, CTO of AT&T Labs.

Krish explains that the main driver for the creation and implementation of Domain 2.0 is that the company's CAPEX, while a staggering $20 billion per year, is not likely to increase significantly, whereas traffic (used here as a proxy for cost) will grow at a minimum of 50% compounded annual growth rate for the foreseeable future.
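For scale, a quick compounding sketch (the figures are purely illustrative, not AT&T's actual numbers) shows how fast the budget per bit shrinks under those assumptions:

```python
# Flat ~$20B/year CAPEX against traffic growing at 50% per year:
# after five years, traffic is ~7.6x today's volume and the budget per unit
# of traffic has shrunk to ~13% of what it is today. Illustrative only.
traffic = 1.0  # today's traffic volume, normalized to 1
for year in range(1, 6):
    traffic *= 1.5
    print(f"year {year}: traffic x{traffic:.2f}, "
          f"capex per unit of traffic at {1 / traffic:.0%} of today's")
```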
Krish then laments:
"Google is making all the money, we are making all the investment, we have no choice but to squeeze our vendors and re architect the network."
Enter SDN / NFV.
Really? Are these the only choices? I am a little troubled by the conclusion. My understanding is that Google, Facebook and Netflix, in short the OTT providers, have usually focused on creating services and value for their subscribers and then, when faced with runaway success, had to invent new technologies to meet their growth challenges.

Most of the rhetoric surrounding operators' reasons for exploring SDN/NFV these days seems to be about cost reduction. It is extremely difficult to get an operator to articulate what type of new service they would launch if their network were fully virtualized and software-defined today. You usually get the salad of existing network functions newly adorned with a "v": vBRAS, vFirewall, vDPI, vCPE, vEPC...
While I would expect these network functions to lend themselves to virtualization, they do not create new services or, necessarily, more value. A cheaper way to create, deploy and manage a firewall is not a new service.

The problem seems to be that our industry is once again tremendously technology-driven rather than customer-driven. Where are the marketers and service managers who will invent, for instance, real-time voice translation services by virtualizing voice-processing and translation functions in the phone and at the edge? There are hundreds of new services to be invented, and I am sure SDN/NFV will help realize them. I would bet Google is closer to enabling that use case than most mobile network operators. That is a problem, because operators can still provide value if they innovate, but innovation must come first from services, not technology. We should focus on the what first and the how after.
End of the rant, more techno posts soon. If you like this, don't forget to buy the report.

Monday, October 20, 2014

Report from SDN / NFV shows part I

Wow! Last week was a busy week for everything SDN/NFV, particularly in wireless. My in-depth analysis of the segment is captured in my report; here are a few thoughts on the latest news.

First, as is now almost traditional, network operators released a third white paper on Network Functions Virtualization. Notably, the original group of 13 operators who co-wrote the first manifesto, which spurred the creation of the ETSI ISG NFV, has now grown to 30. The Industry Specification Group now counts 235 member companies (including yours truly) and has seen 25 proofs of concept initiated. In short, the white paper announces another two-year term of effort beyond the initial timeframe. This new phase will focus on multi-vendor orchestration interoperability and on integration with legacy OSS/BSS functions.

MANO (orchestration) remains a point of contention, and many are starting to recognise the growing threat, and opportunity, that the function represents. Some operators (such as Telefonica) seem to have reached the same conclusions as I did in this blog and are starting to look deeply into what implementing MANO means for the ecosystem.

Today I will go a step further. I believe that MANO in NFV has the potential to evolve the same way app stores did in wireless, and it is probably an apt comparison: both are used to safekeep, reference, inventory and manage the propagation and lifecycle of software instances.

In both cases, the referencing of apps/VNFs is a manual process, with arbitrary rules that can lead to a dominant position if not caught early. It would be relatively easy, in this nascent market, for an orchestrator to integrate as many VNFs as possible, with some "extensions" to lock in the segment, as Apple and Google did with mobile.
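To illustrate the lock-in concern, here is a minimal, hypothetical sketch of a vendor-neutral VNF catalogue entry and an onboarding check that flags proprietary extensions; the field names and rules are my own assumptions, not the ETSI MANO model or any product's descriptor format:

```python
# Hypothetical, vendor-neutral VNF catalogue entry plus an onboarding check
# that flags proprietary extensions. Field names and rules are illustrative
# assumptions, not the ETSI descriptor model or any vendor's format.
from dataclasses import dataclass, field

@dataclass
class VnfDescriptor:
    name: str
    vendor: str
    version: str
    image_url: str
    vcpus: int
    memory_gb: int
    extensions: list[str] = field(default_factory=list)  # vendor-specific hooks

def onboard(catalogue: dict[str, VnfDescriptor], vnf: VnfDescriptor) -> bool:
    """Accept a VNF into the catalogue only if it declares no proprietary extensions."""
    if vnf.extensions:
        print(f"rejected {vnf.name}: proprietary extensions {vnf.extensions}")
        return False
    catalogue[f"{vnf.name}:{vnf.version}"] = vnf
    print(f"onboarded {vnf.name} {vnf.version} from {vnf.vendor}")
    return True

catalogue: dict[str, VnfDescriptor] = {}
onboard(catalogue, VnfDescriptor("vFirewall", "acme", "1.2.0",
                                 "http://example.com/vfw.qcow2", 2, 4))
onboard(catalogue, VnfDescriptor("vDPI", "acme", "0.9.1",
                                 "http://example.com/vdpi.qcow2", 4, 8,
                                 extensions=["acme-only-scaling-api"]))
```

The check itself is trivial; the question is who gets to write the rules, which is the app-store dynamic all over again.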

I know, "Open" is the new "Organic", but for me there is a clear need to create an open source MANO project. Let's call it "OpenHand"?

You can view below a mash-up of the presentations I gave at the show last week and at SDN & NFV USA in Dallas the week before.



More notes on these past few weeks soon. Stay tuned.