

Internet System Design

Overview of the design of the Internet

The original concept was to make the network resistant to a nuclear war. One or several nodes could be destroyed without devastating consequences for the network as a whole, thanks to its distributed character. The big innovation was that data transmission was based on ‘packet switching’. This technique divides the information content of a message into small electronic packages of equal length, each equipped with an address tag. Each package could be routed on different ways in the network, a very practical feature in the case of bottlenecks or breakdowns (Skyttner, 2005, pg. 440).
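Packet switching as described above can be sketched in a few lines: a message is cut into equal-length packets, each tagged with a destination address and a sequence number, and the receiver reassembles them even when they arrive out of order. This is an illustrative toy (the packet size, field names, and address are made up), not a real protocol implementation:

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (toy value, not a real MTU)

def packetize(message: bytes, dest: str) -> list[dict]:
    """Split a message into equal-length packets with address tags."""
    packets = []
    for seq, start in enumerate(range(0, len(message), PACKET_SIZE)):
        packets.append({
            "dest": dest,                              # address tag
            "seq": seq,                                # position in the message
            "payload": message[start:start + PACKET_SIZE],
        })
    return packets

def reassemble(packets: list[dict]) -> bytes:
    """Rebuild the message even if packets took different routes
    through the network and arrived out of order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

msg = b"Packets may take different routes through the network."
pkts = packetize(msg, dest="10.0.0.42")
random.shuffle(pkts)  # simulate out-of-order arrival
assert reassemble(pkts) == msg
```

Because each packet carries its own address and sequence number, the network is free to route around a failed node, which is exactly the resilience the original design was after.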

Skyttner characterizes the internet as a “typical” peer-to-peer (P2P) network (pg. 442). A P2P system is characterized by decentralization (Rodrigues & Druschel, 2010, pg. 72). Network nodes are typically neither essential nor centralized in nature. The nodes contribute bandwidth (the rate of data transfer, usually expressed in bits per second (bps)), CPU capability, and storage reserves to the network. P2P networks are self-organizing and evolve toward more efficient connections. Nodes voluntarily join the system and remain under the control of an independent individual or organization. Since resource allocation is handled by the contributions of individual nodes, operating the network is relatively inexpensive compared to a typical client-server network, which requires infrastructure upgrades to handle increased usage. The system design of the internet as a whole is robust regarding “faults and attacks”, since no single node is essential to the operation of the network (Rodrigues & Druschel, 2010, pg. 73–74).
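The fault tolerance that comes from having no essential nodes can be illustrated with a toy graph experiment: removing a node from a mesh-like (P2P-style) topology leaves the remaining nodes connected, while removing the hub of a star (client-server-style) topology partitions it. The topologies and names below are invented purely for illustration:

```python
from collections import deque

def connected_after_removal(adjacency: dict, removed: str) -> bool:
    """BFS over the graph with `removed` deleted; True if every
    remaining node is still reachable from every other."""
    nodes = [n for n in adjacency if n != removed]
    if not nodes:
        return True
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        cur = queue.popleft()
        for nb in adjacency[cur]:
            if nb != removed and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(nodes)

# Mesh: every node links to two neighbours in a ring (toy P2P topology).
mesh = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
        "D": ["C", "E"], "E": ["D", "A"]}
# Star: every client depends on one central hub (toy client-server topology).
star = {"hub": ["c1", "c2", "c3"], "c1": ["hub"],
        "c2": ["hub"], "c3": ["hub"]}

assert connected_after_removal(mesh, "C")        # mesh survives a lost node
assert not connected_after_removal(star, "hub")  # star does not survive its hub
```

Losing any one peer in the mesh still leaves a path between the survivors; losing the hub of the star disconnects every client, which is the single point of failure a decentralized design avoids.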

The Future of Internet System Design

A very current and ongoing discussion centers on improving the Internet architecture. The two camps are defined as the clean-slate approach versus the evolutionary approach (Rexford & Dovrolis, 2010). Arguments for a clean-slate approach point to the fact that ARPANET itself was a clean-slate approach to a failure-resistant global network. The evolutionary camp, however, points out that ARPANET was one of many alternatives considered, and that it won out through its ability to evolve.

Whenever the Internet faces new challenges, from the fears of congestion collapse in the late 1980s to the pressing cybersecurity concerns of today, new patches are introduced to (at least partially) address the problems. Yet, we do not yet have anything approaching a discipline for creating, analyzing, and operating network protocols, let alone the combinations of protocols and mechanisms seen in real networks. Networking is not yet a true scholarly discipline, grounded in rigorous models and tried-and-true techniques to guide designers and operators (Rexford & Dovrolis, 2010, pg. 37).
A salient observation by Rexford is that these problems are a natural consequence of the original design of the Internet. The Internet was never designed to do what it has since accomplished: individuals innovated and created new technologies on top of it for purposes that could not have been imagined when it was originally designed. Dovrolis points out:

Evolutionary Internet research aims to understand the behavior of the current Internet, identify existing or emerging problems, and resolve them under two major constraints: first, backward compatibility (interoperate smoothly with the legacy Internet architecture), and second, incremental deployment (a new protocol or technology should be beneficial to its early adopters even if it is not globally deployed) (pg. 38).

Conclusion

It is likely that a hybrid approach to the “Future Internet” will prevail. It is easy to envision a scenario where backward compatibility is maintained up to a critical point, after which adoption of a new system architecture, one that especially addresses security, becomes inevitable. For example, network virtualization is a technique that allows the simultaneous and parallel existence of networks with differing protocols (Martin, Völker, & Zitterbart, 2011).
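As a rough illustration of the idea (not of the framework in the cited paper), network virtualization can be pictured as a shared substrate dispatching tagged frames to whichever virtual network's protocol stack they belong to, so a legacy protocol and a hypothetical new secure protocol coexist in parallel. All names here are made up:

```python
# Two protocol stacks standing in for "legacy" and "clean-slate"
# architectures; the prefixes are just markers for the demo.
def legacy_ip(payload: str) -> str:
    return f"legacy-ip:{payload}"

def secure_new_arch(payload: str) -> str:
    return f"secure:{payload}"

# The shared substrate maps virtual-network IDs to protocol stacks
# that run side by side on the same physical infrastructure.
substrate = {"vnet-legacy": legacy_ip, "vnet-future": secure_new_arch}

def deliver(frame: dict) -> str:
    """Dispatch a tagged frame to its virtual network's protocol."""
    handler = substrate[frame["vnet"]]
    return handler(frame["payload"])

assert deliver({"vnet": "vnet-legacy", "payload": "hello"}) == "legacy-ip:hello"
assert deliver({"vnet": "vnet-future", "payload": "hello"}) == "secure:hello"
```

The tag on each frame, not the substrate itself, decides which protocol handles it, which is how backward-compatible and new architectures could run in parallel during a transition.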

References

Martin, D., Völker, L., & Zitterbart, M. (2011). A flexible framework for Future Internet design, assessment, and operation. Computer Networks, 55(4), 910–918. doi:10.1016/j.comnet.2010.12.015

Rexford, J., & Dovrolis, C. (2010). Point/Counterpoint: Future Internet Architecture: Clean-Slate Versus Evolutionary Research. Communications of the ACM, 53(9), 36–38. doi:10.1145/1810891.1810906

Rodrigues, R., & Druschel, P. (2010). Peer-to-Peer Systems. Communications of the ACM, 53(10), 72–82. doi:10.1145/1831407.1831427

Skyttner, L. (2005). General systems theory: Problems, perspectives, practice. Hackensack, NJ: World Scientific Publishing Co. Pte. Ltd.


L. Ball