Community Blog

Net Neutrality

Net neutrality has been debated for many years now, but came back into focus last week when the United States Federal Communications Commission (FCC) voted to reclassify broadband as a Title II service, meaning it is classified as a “telecommunications service” rather than an “information service”.  The new rules, among other things, forbid activities such as blocking, throttling, or discriminating against lawful content, as well as any kind of paid prioritization.  While the political, business, and ethical aspects of this change continue to be hotly debated, few have taken the time to think about what it means from a technical perspective.

First of all, how do we decide what exactly is and is not lawful content?  File-sharing protocols such as BitTorrent, which now has over 150 million users, can account for anywhere from 43% to 70% of all Internet traffic.  Some BitTorrent traffic is completely legitimate – Facebook and Twitter, for example, both use it to distribute updates to their servers.  Other BitTorrent activity is definitely illegal – The Pirate Bay, for example, an online indexing service that facilitates the sharing of digital content, was found guilty of copyright infringement in 2009.  In the US alone, over 200,000 people have been sued for file sharing over BitTorrent.  Enforcing the rules will mean a slow, arduous process in which organizations and individuals known to generate unlawful content can gradually be blocked or throttled; it is not going to happen in real time.  Furthermore, individual users can easily create and hide behind new identities.  A simple rule such as “block all traffic from this IP address”, or “that application”, or “some combination of the two” might mean that other users sharing the same IP address, but for legitimate purposes, are unfairly discriminated against – meaning that enforcing the rules might quickly lead to actually breaking the rules.  I don’t see any easy way out of this one.
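To make the collateral-damage problem concrete, here is a minimal sketch of a naive per-IP blocklist. The addresses and flow descriptions are illustrative assumptions (documentation-range addresses, not real data), not any actual enforcement system:

```python
# Hypothetical sketch: naive per-IP blocking and its collateral damage.
# The blocklist entry and flows below are invented for illustration.

BLOCKLIST = {"198.51.100.7"}  # an IP once flagged for unlawful sharing

def is_blocked(src_ip: str) -> bool:
    """Return True if traffic from src_ip would be dropped."""
    return src_ip in BLOCKLIST

# Many subscribers can sit behind a single IP (e.g. carrier-grade NAT),
# so one rule punishes lawful and unlawful users alike:
flows = [
    ("198.51.100.7", "lawful video call"),
    ("198.51.100.7", "infringing torrent"),
    ("203.0.113.9",  "lawful web browsing"),
]
for ip, purpose in flows:
    print(ip, purpose, "-> blocked" if is_blocked(ip) else "-> allowed")
```

The lawful video call from the shared address is dropped along with the infringing traffic, which is exactly the kind of discrimination the rules forbid.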

Second, the problem of “no paid prioritization” is even tougher. The Internet has had prioritization built into it at least since 1998, when RFC 2474 defined Differentiated Services, allowing “DiffServ Code Point” (DSCP) bits to be set in packet headers.  The purpose of this work was to enable new kinds of services (like VoIP) to work on the Internet, and to enable service level agreements (SLAs) – which, to my mind, sounds exactly like paid prioritization.
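For the curious, marking traffic this way is a one-liner on most systems. The sketch below sets the RFC 2474 code point on a UDP socket; the DSCP occupies the upper six bits of the old IPv4 TOS byte, so the value handed to the socket option is the code point shifted left by two (behavior verified on Linux; other platforms may differ):

```python
import socket

EF = 46  # DSCP "Expedited Forwarding" class, commonly used for VoIP

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP sits in the top six bits of the TOS byte, hence the shift by 2.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(tos))  # 0xb8: DSCP 46 in the top six bits
sock.close()
```

Whether any router along the path honors the marking is another matter entirely, but the knob has been there for users, applications, and carriers to turn for over fifteen years.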

But let’s say the problem of classifying lawful vs. unlawful content were solved cleanly, in a way that did not unfairly discriminate.  And let’s also say that all the existing uses of IP SLAs (which appear to violate the rules) somehow magically went away.  How would we know that I, as a broadband subscriber, am not being discriminated against?  Conversely, how would we know that a service provider is faithfully compliant with the rules?

This forces us to imagine what a test solution would look like.  Do we use traffic generators to simulate lawful and unlawful traffic?  Do we generate different kinds of lawful traffic and see how well each propagates through the network?  On the receive side, what would the metrics of fairness be?  And how can we automate testing into a consistent, repeatable process?  Should the testing of fairness be a standard itself?  Can NTAF play a role in defining standardized, automated tests?  These are just a few of the questions that the new rules bring to mind.
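As one answer to the “metrics of fairness” question, a test tool could compute Jain’s fairness index over per-flow throughputs measured on the receive side. Using this particular index is my assumption, not anything the rules prescribe; it ranges from 1/n (one flow gets everything) up to 1.0 (perfectly equal shares):

```python
# Sketch: Jain's fairness index as a candidate receive-side fairness metric.
# J = (sum x)^2 / (n * sum x^2) over per-flow throughputs x.

def jain_index(throughputs: list[float]) -> float:
    """Return Jain's fairness index for a list of per-flow throughputs."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_index([10.0, 10.0, 10.0, 10.0]))  # 1.0  - equal shares
print(jain_index([40.0, 0.0, 0.0, 0.0]))     # 0.25 - one flow starves the rest
```

A repeatable test run could then amount to generating identical lawful flows of different types, measuring their delivered throughput, and flagging any run where the index falls below an agreed threshold.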

Todd Law, Spirent Communications, NTAF President
