The Internet is a rather loose assemblage of individual networks; there is little in the way of overall administration. The individual networks are owned by a huge number of independent operators. Some of these are major corporations with large, high-capacity networks; others are private individuals operating tiny networks of two or three computers in their homes. Between them these networks employ just about every networking technology yet invented. The great strength of the Internet is that it allows these diverse networks to act together to provide a single global network service.
The interactions between a network and its neighbours are, in essence, both simple and robust. This makes for easy extendibility and fuelled the early growth of the Internet. New participants needed only to come to an agreement with an existing operator and set up some fairly simple equipment to become full players. This was in great contrast to the situation within the world of telephone networks, where operators were mostly large and bureaucratic and where adding new interconnections required complex negotiation and configuration and, possibly, international treaties.
What is the Internet architecture?
It is by definition a meta-network, a constantly changing collection of thousands of individual networks intercommunicating with a common protocol. The Internet’s architecture is described in its name, a short form of the compound word “inter-networking”. This architecture is based on the specification of the standard TCP/IP protocol, designed to connect any two networks which may be very different in internal hardware, software, and technical design. Once two networks are interconnected, communication with TCP/IP is enabled end-to-end, so that any node on the Internet has the near magical ability to communicate with any other no matter where they are. This openness of design has enabled the Internet architecture to grow to a global scale.
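This end-to-end property can be sketched with a minimal TCP exchange in Python. As an assumption for runnability, both endpoints live on localhost here; on the real Internet the two ends could sit on entirely different networks, with TCP/IP hiding every network boundary in between.

```python
import socket
import threading

# One endpoint listens; the other connects. Neither needs to know anything
# about the networks between them -- only the remote address.

def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)          # receive the peer's message
        conn.sendall(b"echo: " + data)  # and echo it back

listener = socket.create_server(("127.0.0.1", 0))  # port 0 = any free port
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# The connecting end supplies only an (address, port) pair.
with socket.create_connection(("127.0.0.1", port)) as peer:
    peer.sendall(b"hello")
    reply = peer.recv(1024)

print(reply.decode())  # echo: hello
```

The same two calls, `create_server` and `create_connection`, work unchanged whether the peers are in the same room or on different continents; that uniformity is what the end-to-end design buys.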
In practice, the Internet technical architecture looks a bit like a multi-dimensional river system, with small tributaries feeding medium-sized streams feeding large rivers. For example, an individual’s access to the Internet is often from home over a modem to a local Internet service provider who connects to a regional network connected to a national network. At the office, a desktop computer might be connected to a local area network with a company connection to a corporate Intranet connected to several national Internet service providers. In general, small local Internet service providers connect to medium-sized regional networks which connect to large national networks, which then connect to very large bandwidth networks on the Internet backbone.
Most Internet service providers have several redundant network cross-connections to other providers in order to ensure continuous availability. The companies running the Internet backbone operate very high bandwidth networks relied on by governments, corporations, large organizations, and other Internet service providers. Their technical infrastructure often includes global connections through underwater cables and satellite links to enable communication between countries and continents. As always, a larger scale introduces new phenomena: the number of packets flowing through the switches on the backbone is so large that it exhibits the kind of complex non-linear patterns usually found in natural, analog systems like the flow of water or the development of the rings of Saturn.
Each communication packet goes up the hierarchy of Internet networks as far as necessary to get to its destination network, where local routing takes over to deliver it to the addressee. In the same way, each level in the hierarchy pays the next level for the bandwidth it uses, and then the large backbone companies settle up with each other. Large Internet service providers price bandwidth by several methods, such as a fixed rate for constant availability of a certain number of megabits per second, or by a variety of usage-based methods that amount to a cost per gigabyte. Due to economies of scale and efficiencies in management, bandwidth cost drops dramatically at the higher levels of the architecture.
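The relationship between the two pricing methods can be worked out with simple arithmetic. The sketch below, using hypothetical rates rather than any real provider's pricing, converts a fixed monthly price per committed megabit per second into the equivalent cost per gigabyte transferred, assuming the link is fully used.

```python
# Hypothetical conversion between the two pricing methods described above:
# a fixed rate per Mbps per month versus a cost per gigabyte transferred.
# All figures are made up for illustration.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 seconds in a 30-day month

def fully_used_cost_per_gb(price_per_mbps_month: float) -> float:
    """Effective cost per gigabyte if a committed rate is used 24/7."""
    # One Mbps sustained for a month moves 10^6 * seconds bits;
    # divide by 8 for bytes, then by 10^9 for gigabytes -- about 324 GB.
    gb_per_mbps_month = 1_000_000 * SECONDS_PER_MONTH / 8 / 1_000_000_000
    return price_per_mbps_month / gb_per_mbps_month

# At a made-up $1.00 per Mbps/month, a fully used link costs ~$0.003 per GB.
print(round(fully_used_cost_per_gb(1.00), 4))  # 0.0031
```

The result is independent of link size, which is the point: the per-gigabyte figure only falls at higher tiers because the per-Mbps price itself drops with scale, as the paragraph above notes.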
The network topology page provides information and resources on the real-time construction of the Internet network, including graphs and statistics.
The following references provide additional information about the Internet architecture:
Internet Architecture and Innovation
“Many people have a pragmatic attitude toward technology: they don’t care how it works, they just want to use it. With regard to the Internet, this attitude is dangerous. As this book shows, different ways of structuring the Internet result in very different environments for its development, production, and use. If left to themselves, network providers will continue to change the internal structure of the Internet in ways that are good for them, but not necessarily for the rest of us — individual, organizational or corporate Internet users, application developers and content providers, and even those who do not use the Internet.
If we want to protect the Internet’s usefulness, if we want to realize its full economic, social, cultural, and political potential, we need to understand the Internet’s structure and what will happen if that structure is changed.” The Internet’s remarkable growth has been fuelled by innovation. New applications continually enable new ways of using the Internet, and new physical networking technologies increase the range of networks over which the Internet can run. In this path-breaking book, Barbara van Schewick argues that this explosion of innovation is not an accident, but a consequence of the Internet’s architecture – a consequence of technical choices regarding the Internet’s inner structure made early in its history. Building on insights from economics, management science, engineering, networking and law, van Schewick shows how alternative network architectures can create very different economic environments for innovation.
The Internet’s original architecture was based on four design principles – modularity, layering, and two versions of the celebrated but often misunderstood end-to-end arguments. This design, van Schewick demonstrates, fostered innovation in applications and allowed applications like e-mail, the World Wide Web, eBay, Google, Skype, Flickr, Blogger and Facebook to emerge.
Today, the Internet’s architecture is changing in ways that deviate from the Internet’s original design principles. These changes remove the features that fostered innovation in the past. They reduce the amount and quality of application innovation and limit users’ ability to use the Internet as they see fit. They threaten the Internet’s ability to spur economic growth, to improve democratic discourse, and to provide a decentralized environment for social and cultural interaction in which anyone can participate. While public interests suffer, network providers – who control the evolution of the network – benefit from the changes, making it highly unlikely that they will change course without government intervention.
Given this gap between network providers’ private interests and the public’s interests, van Schewick argues, we face an important choice. Leaving the evolution of the network to network providers will significantly reduce the Internet’s value to society. If no one intervenes, network providers’ interests will drive networks further away from the original design principles. With this dynamic, doing nothing will not preserve the status quo, let alone restore the innovative potential of the Internet. If the Internet’s value for society is to be preserved, policymakers will have to intervene and protect the features that were at the core of the Internet’s success. It is on all of us to make this happen.