Tip: Sniffing the Ether for Vapor Trails of Vaporware

What’s Who is NEXT?*

So you have plundered the High Seas of the Internet and the Ethernet?

When you swivel your chair around and look out the window toward the horizon, what is next to Plunder?

Maybe you need to be looking UP!

That’s right… the Interplanetary Internet! ‘Ha!’ you say? Think I’m blowing a little vaporware vapor trail up your butt? Swivel that chair back around and take notes. The IPN is coming, driven by the current work on DTN funded by DARPA, kicked in the ass by Homeland Security, and augmented by work being done under the auspices of NASA. What?

First, let us not forget that the current Internet was conceived, designed, and implemented by the US military and paid for by US taxpayers. Then the technology and some early infrastructure were turned loose on the users of the world. Nice of them, eh? Little did DARPA dream of the extent to which the Internet would drill down into the basic fabric of everyone’s data handling.

The future vision of communications goes to space. Yes, the Interplanetary Internet. But, and there are a lot of buts: the speed of light, transmission speeds versus distance delays, temporary network failures due to cosmic events or man-made sabotage and interference, along with an unbreakable network routing/encryption scheme that for now exists only in the vapor of the vapor. This new frontier demands a complex combination of skills we are only now conceiving and developing. Space-based communication delays and interruptions will no doubt require simplex, packet-style burst transmissions through cloud-like way-stations and relay stations, instead of the almost instantaneous, duplex-emulating transmissions enjoyed in today’s version of the Internet. The situation is further complicated by an interruptible data-stream supply-and-demand phenomenon, combined with the complexities of network packet routing and top-to-bottom encryption.
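To put the speed-of-light problem in perspective, here is a back-of-the-envelope sketch (approximate Earth–Mars distances, nothing official) of how long a signal takes to make the trip one way:

```python
# Rough one-way radio delay between Earth and Mars (illustrative figures only).
C = 299_792_458           # speed of light, m/s
AU = 149_597_870_700      # one astronomical unit, m

# Approximate Earth-Mars separation at closest and farthest approach.
distances = {
    "closest (~0.37 AU)": 0.37 * AU,
    "farthest (~2.67 AU)": 2.67 * AU,
}

for label, meters in distances.items():
    minutes = meters / C / 60
    print(f"One-way delay at {label}: {minutes:.1f} minutes")

# Roughly 3 minutes and 22 minutes one way -- a round trip at the far end
# tops 40 minutes, so chatty request/response protocols simply fall apart.
```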

We have already taken many of the initial baby-steps that will lead to the next ‘Leap of Mankind’ into tomorrow’s ‘taken-for-granted’ fundamental form of electronic communication.

The major building blocks on the road to the Interplanetary Internet will include the following concepts (to name only a few): packet radio, cloud computing, delay-tolerant networking, and the IPN itself.

Let’s pirate some intel from Wikipedia, take a closer look-see at these concepts, and see where the work is being done along the way:

Packet Radio

Packet radio is a form of packet switching technology used to transmit digital data via radio or wireless communications links. It uses the same concepts of data transmission via datagrams that are fundamental to communications via the Internet, as opposed to the older techniques of dedicated or switched circuits.
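To make the datagram idea concrete, here is a minimal sketch of wrapping a payload with source, destination, and a checksum so every packet can travel and be verified on its own. The field layout, callsigns, and helper names are invented for illustration; this is not the AX.25 frame format actually used on the air.

```python
import struct
import zlib

def build_datagram(src: str, dst: str, payload: bytes) -> bytes:
    """Pack a self-contained datagram: fixed header + payload + CRC32 trailer.

    Each datagram carries its own addressing, so no circuit has to be set up
    or torn down between the two stations (illustrative format only).
    """
    header = struct.pack("!8s8sH", src.encode().ljust(8), dst.encode().ljust(8), len(payload))
    body = header + payload
    return body + struct.pack("!I", zlib.crc32(body))

def parse_datagram(frame: bytes):
    """Verify the CRC and unpack the header; raise if the frame was corrupted."""
    body, (crc,) = frame[:-4], struct.unpack("!I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("CRC mismatch: frame corrupted in transit")
    src, dst, length = struct.unpack("!8s8sH", body[:18])
    return src.rstrip(b"\x00 ").decode(), dst.rstrip(b"\x00 ").decode(), body[18:18 + length]

frame = build_datagram("N0CALL", "KA9QRP", b"hello via packet radio")
print(parse_datagram(frame))
```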

Packet Node Controller

Packet radio is the fourth major digital radio communications mode. Earlier modes were telegraphy (Morse Code), teleprinter (Baudot) and facsimile. Like those earlier modes, packet was intended as a way to reliably transmit written information. The primary advantage was initially expected to be increased speed, but as the protocol developed, other capabilities surfaced.

By the early 1990s, packet radio was not only recognized as a way to send text, but also to send files (including small computer programs), handle repetitive transmissions, control remote systems, etc.

The technology itself was a leap forward, making it possible for nearly any packet station to act as a digipeater, linking distant stations with each other through ad hoc networks. This makes packet especially useful for emergency communications. In addition, mobile packet radio stations can automatically transmit their location, and check in periodically with the network to show that they are still operating.
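Here is a simplified sketch of the decision a digipeater makes when it hears a frame: repeat it only if it is the next station named in the sender’s requested path. The frame dictionary and callsigns are made up for illustration; real AX.25 path handling is more involved.

```python
def digipeat(frame: dict, my_call: str):
    """Decide whether this station should retransmit a frame it just heard.

    frame["path"] lists the relay callsigns the sender asked for, and
    frame["used"] counts how many of those hops have already repeated it.
    """
    used, path = frame["used"], frame["path"]
    if used >= len(path):        # every requested hop has already been taken
        return None
    if path[used] != my_call:    # it is not our turn in the path yet
        return None
    return dict(frame, used=used + 1)   # mark our hop consumed; caller keys up

frame = {"src": "N0CALL", "dst": "KA9QRP",
         "path": ["RELAY1", "RELAY2"], "used": 0, "info": "position report"}
print(digipeat(frame, "RELAY1"))   # relays the frame with used=1
print(digipeat(frame, "RELAY2"))   # None: RELAY1 has not repeated it yet
```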

Since radio circuits inherently possess a broadcast network topology (i.e., many or all nodes are connected to the network simultaneously), one of the first technical challenges faced in the implementation of packet radio networks was a means to control access to a shared communications channel. Professor Norman Abramson of the University of Hawaii developed a packet radio network known as ALOHAnet and performed a number of experiments around 1970 to develop methods to arbitrate access to a shared radio channel by network nodes. This system operated on UHF frequencies at 9600 baud. From this work the ALOHA multiple access protocol was derived. Subsequent enhancements in channel access techniques made by Leonard Kleinrock et al. in 1975 would lead Robert Metcalfe to use carrier sense multiple access (CSMA) protocols in the design of the now commonplace Ethernet local area network (LAN) technology.
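The shared-channel problem ALOHAnet wrestled with is easy to see in a toy slotted-ALOHA simulation (a sketch of the idea, not Abramson’s actual system): when two stations pick the same time slot, both packets are lost.

```python
import random

def slotted_aloha_throughput(n_nodes=50, p_tx=0.02, n_slots=100_000, seed=1):
    """Fraction of time slots that carry exactly one (successful) transmission.

    With many uncoordinated stations the offered load is G = n_nodes * p_tx,
    and the classic result says throughput approaches G * e**(-G), peaking
    near 37% of channel capacity at G = 1.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        talkers = sum(rng.random() < p_tx for _ in range(n_nodes))
        if talkers == 1:          # exactly one talker: the packet gets through
            successes += 1        # zero talkers waste the slot; two or more collide
    return successes / n_slots

print(slotted_aloha_throughput())              # ~0.37 at offered load G = 1.0
print(slotted_aloha_throughput(p_tx=0.005))    # ~0.20 at the lighter load G = 0.25
```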

In 1977, DARPA created a packet radio network called PRNET in the San Francisco Bay area and conducted a series of experiments with SRI to verify the use of ARPANET (a precursor to the Internet) communications protocols (later known as IP) over packet radio links between mobile and fixed network nodes. This system was quite advanced, as it made use of direct sequence spread spectrum (DSSS) modulation and forward error correction (FEC) techniques to provide 100 kbps and 400 kbps data channels. These experiments were generally considered to be successful, and also marked the first demonstration of internetworking, as in these experiments data was routed between the ARPANET, PRNET, and SATNET (a satellite packet radio network) networks. Throughout the 1970s and 1980s, DARPA operated a number of terrestrial and satellite packet radio networks connected to the ARPANET at various military and government installations.
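Forward error correction is the trick of adding redundancy before transmission so the receiver can repair bit errors without asking for a resend. A toy threefold repetition code (far cruder than the FEC PRNET actually used) shows the principle:

```python
import random

def fec_encode(bits):
    """Triple every bit before transmission (toy repetition code)."""
    return [b for b in bits for _ in range(3)]

def noisy_channel(bits, ber=0.05):
    """Flip each transmitted bit independently with probability `ber`."""
    return [b ^ (random.random() < ber) for b in bits]

def fec_decode(bits):
    """Majority-vote each group of three received bits back into one bit."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = noisy_channel(fec_encode(message))
# True unless two flips happen to land in the same group of three.
print(fec_decode(received) == message)
```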

To get a small glimpse of the networking complexities involved, take a look at the list of IP protocols now in use.

Cloud Computing

The term “cloud” is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.

Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, autonomic computing, and utility computing. Details are abstracted from end users, who no longer have need for expertise in, or control over, the technology infrastructure “in the cloud” that supports them.

The underlying concept of cloud computing dates back to the 1960s, when John McCarthy opined that “computation may someday be organized as a public utility.” Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government, and community forms, were thoroughly explored in Douglas Parkhill’s 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing’s roots go all the way back to the 1950s when scientist Herb Grosch (the author of Grosch’s law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.

The actual term “cloud” borrows from telephony in that telecommunications companies, who until the 1990s offered primarily dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to utilize their overall network bandwidth more effectively. The cloud symbol was used to denote the demarcation point between that which was the responsibility of the provider and that which was the responsibility of the user. Cloud computing extends this boundary to cover servers as well as the network infrastructure.

After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving “two-pizza teams” could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.

In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. Also in early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds and for the federation of clouds. In the same year, efforts were focused on providing QoS guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By mid-2008, Gartner saw an opportunity for cloud computing “to shape the relationship among consumers of IT services, those who use IT services and those who sell them” and observed that “organizations are switching from company-owned hardware and software assets to per-use service-based models,” so that the “projected shift to cloud computing … will result in dramatic growth in IT products in some areas and significant reductions in other areas.”

In July 2010, OpenStack was announced, attracting nearly 100 partner companies and over a thousand code contributions in its first year, making it the fastest-growing free and open source software project in history.

Nebula is a Federal cloud computing pilot under development at NASA Ames Research Center in Silicon Valley, California. The project began in 2008 under the direction of Chris C. Kemp.

The Ames Internet Exchange, which hosts the Nebula Cloud, was formerly MAE-West, one of the original nodes of the Internet, and is a major peering location for Tier 1 ISPs, as well as being the home of the “E” root name servers. Nebula also connects to CENIC and Internet2, with 10GigE connections.

Nebula is an open-source project and uses a variety of open-source components, including OpenStack, Lustre and RabbitMQ.
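As a small taste of the message-queue glue in a stack like that, here is a minimal RabbitMQ publish-and-fetch sketch using the Python `pika` client. It assumes a broker running on localhost, and the `telemetry` queue name is made up for illustration.

```python
import pika

# Connect to a RabbitMQ broker assumed to be running locally.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue (idempotent) and drop one message into it.
channel.queue_declare(queue="telemetry", durable=True)
channel.basic_publish(
    exchange="",                  # default exchange routes by queue name
    routing_key="telemetry",
    body=b"sensor reading 42",
    properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
)

# Pull the message back off the queue and acknowledge it automatically.
method, properties, body = channel.basic_get(queue="telemetry", auto_ack=True)
print(body)

connection.close()
```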

Delay-tolerant networking (DTN)

Delay-tolerant networking (DTN) is an approach to computer network architecture that seeks to address the technical issues in heterogeneous networks that may lack continuous network connectivity. Examples of such networks are those operating in mobile or extreme terrestrial environments, or planned networks in space.

Recently, the term disruption-tolerant networking has gained currency in the United States due to support from DARPA, which has funded many DTN projects. Disruption may occur because of the limits of wireless radio range, sparsity of mobile nodes, energy resources, attack, and noise.

Interplanetary Internet (IPN)

The Interplanetary Internet (IPN) is a conceived computer network in space, consisting of a set of network nodes that can communicate with each other. Communication would be greatly delayed by the vast interplanetary distances, so the IPN needs a new set of protocols and technologies that are tolerant of large delays and errors. While the Internet as we know it tends to be a busy network of networks with high traffic, negligible delay and errors, and a wired backbone, the Interplanetary Internet is a store-and-forward network of internets that is often disconnected, has a wireless backbone fraught with error-prone links, and suffers delays ranging up to tens of minutes, even hours, even when there is a connection.
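The store-and-forward idea at the heart of it can be sketched in a few lines (a toy model, not the actual DTN Bundle Protocol, and the node names are invented): each node holds data until a contact window to the next hop opens, then passes everything along.

```python
from collections import deque

class DtnNode:
    """Toy delay-tolerant node: store bundles until a contact opens, then forward."""

    def __init__(self, name):
        self.name = name
        self.storage = deque()    # bundles waiting for the next contact window

    def receive(self, bundle):
        print(f"{self.name}: stored {bundle!r}")
        self.storage.append(bundle)

    def contact(self, next_hop):
        """A contact window to `next_hop` opens: flush everything we stored."""
        while self.storage:
            next_hop.receive(self.storage.popleft())

# Mars rover -> orbiting relay -> Earth ground station, with no end-to-end path.
rover, orbiter, earth = DtnNode("rover"), DtnNode("orbiter"), DtnNode("earth")

rover.receive("image-001")        # data generated while no link exists at all
rover.receive("image-002")
rover.contact(orbiter)            # later: the orbiter passes overhead
orbiter.contact(earth)            # later still: the orbiter sees the ground station
print(list(earth.storage))        # both bundles eventually arrive intact
```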

Concurrently with (but separate from) the MANET activities, DARPA had funded NASA, MITRE and others to develop a proposal for the Interplanetary Internet (IPN). Internet pioneer Vint Cerf and others developed the initial IPN architecture, relating to the necessity of networking technologies that can cope with the significant delays and packet corruption of deep-space communications. In 2002, Kevin Fall started to adapt some of the ideas in the IPN design to terrestrial networks and coined the term delay-tolerant networking and the DTN acronym. A paper published at the 2003 SIGCOMM conference gives the motivation for DTNs. The mid-2000s brought about increased interest in DTNs, including a growing number of academic conferences on delay and disruption-tolerant networking, and growing interest in combining work from sensor networks and MANETs with the work on DTN. This field saw many optimizations on classic ad-hoc and delay-tolerant networking algorithms and began to examine factors such as security, reliability, verifiability, and other areas of research that are well understood in traditional computer networking.

HAL-9000

What’s Who is NEXT? = IPN

Do not be in the least surprised if, when they get the Interplanetary Internet up, they name it HAL.

Oh yeah, in the meantime, don’t forget to look up.

*[This article is in no way intended to be a comprehensive discussion of the matters involved. This is only a Teaser, and perhaps a starting point for those interested in the subject of tomorrow.]
