Internet System Design
Overview of the design of the Internet
The original concept was to make the network resistant to a nuclear war. One or several nodes could be destroyed without devastating consequences for the network as a whole, thanks to its distributed character. The big innovation was that data transmission was based on ‘packet switching’. This technique divides the information content of a message into small electronic packets of equal length, each equipped with an address tag. Each packet could be routed along different paths through the network, a very practical feature in the case of bottlenecks or breakdowns (Skyttner, 2005, pg. 440).
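The mechanics described above can be sketched in a few lines of code. The following is a minimal, illustrative model (not any real protocol implementation) of dividing a message into equal-length packets, each carrying an address tag and a sequence number, and reassembling them even when they arrive out of order:

```python
# Illustrative sketch of packet switching: a message is split into
# equal-length packets with an address tag and sequence number, and can
# be reassembled regardless of arrival order. All names are hypothetical.

from dataclasses import dataclass

PAYLOAD_SIZE = 8  # bytes per packet; short payloads are padded to equal length

@dataclass
class Packet:
    dest: str       # address tag
    seq: int        # sequence number, used for reassembly
    payload: bytes

def packetize(message: bytes, dest: str) -> list[Packet]:
    """Divide a message into fixed-length, addressed packets."""
    packets = []
    for seq, start in enumerate(range(0, len(message), PAYLOAD_SIZE)):
        chunk = message[start:start + PAYLOAD_SIZE]
        packets.append(Packet(dest, seq, chunk.ljust(PAYLOAD_SIZE, b"\x00")))
    return packets

def reassemble(packets: list[Packet]) -> bytes:
    """Restore the original message from packets in any order."""
    ordered = sorted(packets, key=lambda p: p.seq)
    return b"".join(p.payload for p in ordered).rstrip(b"\x00")

msg = b"packets may take different routes"
pkts = packetize(msg, dest="10.0.0.7")
pkts.reverse()  # simulate out-of-order arrival along different routes
assert reassemble(pkts) == msg
```

Because each packet carries its own address and sequence number, the network is free to route packets independently, which is exactly what makes the scheme resilient to bottlenecks and node failures.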
Skyttner characterizes the Internet as a “typical” peer-to-peer (P2P) network (pg. 442). A P2P system is characterized by decentralization (Rodrigues & Druschel, 2010, pg. 72): no node is essential or central to the network's operation. Nodes contribute bandwidth (the rate of data transfer, usually expressed in bits per second (bps)), CPU capacity, and storage to the network. P2P networks are self-organizing, and the efficiency of their connections evolves over time. Nodes join the system voluntarily and remain under the control of independent individuals or organizations. Because resources are supplied by the individual nodes, the network is relatively inexpensive to operate, compared with a typical client-server network, which requires infrastructure upgrades to handle increased usage. The system design of the Internet as a whole is robust against “faults and attacks,” since there are no nodes that are essential to the operation of the network (Rodrigues & Druschel, 2010, pg. 73–74).
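The robustness claim can be made concrete with a toy overlay. The sketch below uses a small hypothetical topology (the node names and links are invented for illustration) in which every node keeps only its own peer list; a flood of the peer lists shows that connectivity survives the loss of any single node, since no node is a central hub:

```python
# Toy sketch of P2P robustness: nodes keep local peer lists, there is no
# central hub, and removing any one node leaves the rest reachable.
# The topology below is hypothetical and chosen to have redundant links.

def reachable(peers: dict[str, set[str]], start: str) -> set[str]:
    """Nodes reachable from `start` by flooding hop-by-hop through peer lists."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbor in peers.get(node, set()) - seen:
            seen.add(neighbor)
            frontier.append(neighbor)
    return seen

def drop(peers: dict[str, set[str]], dead: str) -> dict[str, set[str]]:
    """Simulate a fault or attack that removes one node from the overlay."""
    return {n: nbrs - {dead} for n, nbrs in peers.items() if n != dead}

# Redundantly linked overlay: no single node is a cut point.
peers = {
    "A": {"B", "C"},
    "B": {"A", "D", "E"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"B", "D"},
}

assert reachable(peers, "A") == {"A", "B", "C", "D", "E"}
# Losing node B still leaves A connected to every surviving node via C.
assert reachable(drop(peers, "B"), "A") == {"A", "C", "D", "E"}
```

Contrast this with a client-server topology, where every client's peer list contains only the server: removing that one node disconnects everything.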
The Future of Internet System Design
An ongoing discussion centers on improving the Internet architecture. The two camps are defined by a clean-slate approach versus an evolutionary approach (Rexford & Dovrolis, 2010). Arguments for a clean-slate approach point to the fact that ARPANET itself was a clean-slate approach to a failure-resistant global network. The evolutionary camp, however, points out that ARPANET was one of many alternatives considered, and that it won out through its ability to evolve.
Whenever the Internet faces new challenges, from the fears of congestion collapse in the late 1980s to the pressing cybersecurity concerns of today, new patches are introduced to (at least partially) address the problems. Yet, we do not yet have anything approaching a discipline for creating, analyzing, and operating network protocols, let alone the combinations of protocols and mechanisms seen in real networks. Networking is not yet a true scholarly discipline, grounded in rigorous models and tried-and-true techniques to guide designers and operators (Rexford & Dovrolis, 2010, pg. 37).
A salient observation by Rexford is that these problems are a natural consequence of the original design of the Internet. The Internet was never designed to do what it has since accomplished: individual innovators created new technologies that use the Internet for purposes that could not have been imagined when it was originally designed. Dovrolis points out:
Evolutionary Internet research aims to understand the behavior of the current Internet, identify existing or emerging problems, and resolve them under two major constraints: first, backward compatibility (interoperate smoothly with the legacy Internet architecture), and second, incremental deployment (a new protocol or technology should be beneficial to its early adopters even if it is not globally deployed) (pg. 38).
A hybrid approach to the “Future Internet” will likely prevail. It is easy to envision a scenario in which backward compatibility is maintained up to a critical point, at which adoption of a new system architecture, one that especially addresses security, becomes inevitable. For example, network virtualization is a technique that allows networks with differing protocols to exist simultaneously and in parallel (Martin, Völker, & Zitterbart, 2011).
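The core idea of network virtualization, in the sense of multiple networks with different protocols sharing one substrate, can be illustrated with a minimal sketch. The "protocols" and identifiers below are entirely hypothetical; the point is only the multiplexing mechanism, in which each frame carries a virtual-network tag (much as VLAN tagging works) so that a legacy protocol and a clean-slate protocol can coexist on the same link:

```python
# Minimal sketch of network virtualization: frames from several virtual
# networks share one physical substrate and are demultiplexed by a
# virtual-network identifier. Protocol names and handlers are hypothetical.

substrate: list[tuple[int, str]] = []  # shared physical link of tagged frames

def send(vnet_id: int, payload: str) -> None:
    """Tag a frame with its virtual-network identifier and put it on the link."""
    substrate.append((vnet_id, payload))

# Each virtual network runs its own (stand-in) protocol handler.
handlers = {
    1: lambda p: f"legacy-ipv4:{p}",   # virtual net 1: legacy protocol
    2: lambda p: f"clean-slate:{p}",   # virtual net 2: experimental protocol
}

def deliver() -> list[str]:
    """Demultiplex: route each frame to its own network's protocol handler."""
    return [handlers[vid](payload) for vid, payload in substrate]

send(1, "hello")
send(2, "hello")
assert deliver() == ["legacy-ipv4:hello", "clean-slate:hello"]
```

This is what makes virtualization attractive for a hybrid migration path: the legacy network keeps running unmodified while a new architecture is deployed in parallel on the same infrastructure.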
Martin, D., Völker, L., & Zitterbart, M. (2011). A flexible framework for Future Internet design, assessment, and operation. Computer Networks, 55(4), 910–918. doi:10.1016/j.comnet.2010.12.015
Rexford, J., & Dovrolis, C. (2010). Point/Counterpoint: Future Internet Architecture: Clean-Slate Versus Evolutionary Research. Communications of the ACM, 53(9), 36–38. doi:10.1145/1810891.1810906
Rodrigues, R., & Druschel, P. (2010). Peer-to-Peer Systems. Communications of the ACM, 53(10), 72–82. doi:10.1145/1831407.1831427
Skyttner, L. (2005). General systems theory: problems, perspectives, practice. Hackensack, NJ: World Scientific Publishing Co. Pte. Ltd.