Deep Packet Inspection – Hide your shame!
A company called Procera today announced the availability of a 12U rack system that can perform deep packet inspection on 80 Gbps of data in real time with 96% accuracy. In a world where Internet bandwidth demand increases daily, ISPs are embracing technologies such as DPI as they potentially offer an answer to this and other challenges ISPs face, such as copyright and intellectual property protection.
But what is deep packet inspection? It is a process that allows for the identification and characterisation of packets (Internet traffic) by content and purpose. It can distinguish between innocuous HTTP, FTP and VoIP traffic and less-welcome high-bandwidth traffic like BitTorrent (and other P2P protocols) as well as streaming. Armed with this information, ISPs or Internet backbones could then opt to throttle bandwidth to services or users in real time based on the time of day, the services they are using or simply how much they are paying.
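To make the idea concrete, here is a toy sketch of signature-based classification in Python. Real DPI engines use stateful, far more sophisticated matching (and increasingly statistical methods for encrypted traffic); the byte patterns below are just the best-known fingerprints of each protocol, shown for illustration.

```python
# Toy DPI classifier: match a packet payload against known protocol
# fingerprints. Illustrative only -- real engines track flow state.

SIGNATURES = {
    # BitTorrent handshake: byte 0x13 followed by the protocol string
    "BitTorrent": (b"\x13BitTorrent protocol",),
    # Plain-text HTTP requests and responses
    "HTTP": (b"GET ", b"POST ", b"HEAD ", b"HTTP/"),
    # FTP server banner / client login command
    "FTP": (b"220 ", b"USER "),
}

def classify(payload: bytes) -> str:
    """Return the first protocol whose signature matches the payload start."""
    for proto, sigs in SIGNATURES.items():
        if payload.startswith(sigs):
            return proto
    return "unknown"
```

For example, `classify(b"\x13BitTorrent protocol" + b"\x00" * 8)` returns `"BitTorrent"`, while an encrypted or unrecognised payload falls through to `"unknown"` — which is exactly why throttling decisions based on such matching are error-prone (hence Procera's 96% figure rather than 100%).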
Whilst throttling high-bandwidth services such as file sharing and movie streaming might seem like a good idea, this brings us to the idea of net neutrality. Net neutrality is the principle that ISPs and top-tier providers should treat all traffic equally, rather than slowing or blocking specific services or websites based on their bandwidth usage or any other criterion of their choosing. Take Skype, for example: if an ISP decided Skype was taking up too much bandwidth, or worse, was competing with the ISP's own telephony offering with its VoIP service, it could opt to slow the traffic an end user (you or I) has with Skype's service. This could restrict the usability of Skype to the point where it might no longer be functionally or financially viable. The ISP or provider could then ask Skype to pay a premium for its bandwidth to be restored. It works the other way as well: let's say another VoIP company decided it wanted the fastest bandwidth / lowest latency (compared with other VoIP providers) to an ISP's users; it could pay the ISP to prioritise its packets over others. As you can see, the scales of services and content on the Internet, once promoted as a source of free and equal speech and services, become tipped in favour of corporations, stifling both creativity and innovation.
Throttling is not the answer to the long-term (or even short-term) bandwidth explosion the Internet has seen in recent years (thank you YouTube 😛 ), and at $800,000 per machine, I can’t help wondering if the money would be better spent upgrading existing capacity.
UPDATE: I just read another related article which touched on something I had not considered: privacy. Whilst most information about a packet can be gleaned from its routing header, there is nothing to stop this technology parsing gigabits per second of traffic for any (and all) information, which could be stored for later examination. At 80 Gbps, that is 10 GB of data every second, which would fill a petabyte (PB) of storage roughly every 28 hours. The only real limits would be the computational power and storage available to the ISP or backbone operator.
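The storage figure checks out with some quick back-of-the-envelope arithmetic (using decimal units, 1 PB = 10^15 bytes):

```python
# How long does a full 80 Gbps capture take to fill one petabyte?
line_rate_gbps = 80
bytes_per_second = line_rate_gbps / 8 * 1e9    # 80 Gbit/s = 10 GB/s
petabyte = 1e15                                # bytes (decimal PB)
seconds_to_fill = petabyte / bytes_per_second  # 100,000 seconds
hours = seconds_to_fill / 3600
print(round(hours, 1))                         # → 27.8
```

So a petabyte fills in just under 28 hours of sustained full-rate capture, before any compression or filtering.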