
 

Advanced Network Technology
June 1993
OTA-BP-TCT-101
NTIS order #PB93-203735

 

Recommended Citation:

U.S. Congress, Office of Technology Assessment, Advanced Network Technology--Background Paper, OTA-BP-TCT-101 (Washington, DC: U.S. Government Printing Office, June 1993).

For sale by the U.S. Government Printing Office
ISBN 0-16-041805-4

 


Foreword

Computer networks are having dramatic impacts on our lives. What were once esoteric tools used only by scientists and engineers are becoming more widely used in schools, libraries, and businesses. At the same time, researchers are working to develop even more capable networks that promise to change fundamentally the way we communicate. This background paper analyzes technologies for tomorrow's information superhighways. Advanced networks will first be used to support scientists

in their work, linking researchers to supercomputers, databases, and scientific instruments. As the new networks are deployed more widely, they will be used by a broader range of users for business, entertainment, health care, and education applications. The background paper also describes six test networks that are being funded as part of the High Performance Computing and Communications Program. These test networks are a collaboration of government, industry, and academia, and allow researchers to try new approaches to network design and to attack a variety of research questions. Significant progress has been made in the development of technologies that will help achieve the goals of the High-Performance Computing Act of 1991. This is the third publication from OTA's assessment on information

technology and research, which was requested by the House Committee on Science, Space, and Technology and the Senate Committee on Commerce, Science, and Transportation. The first two background papers, High Performance Computing & Networking for Science and Seeking Solutions: High-Performance Computing for Science, were published in 1989 and 1991, respectively. OTA appreciates the assistance of the National Science Foundation, the Advanced Research Projects Agency, the Department of Energy, the National Aeronautics and Space Administration, and many experts in industry and academia who reviewed or contributed to this document. The contents of this paper, however, are the sole responsibility of OTA.


Roger C. Herdman, Director

 


 

Reviewers

Rick Adams, CEO, UUNET Technologies
Robert Aiken, Department of Energy
Raymond Albers, Assistant Vice President, Technology Planning, Bell Atlantic
Alan Baratz, Applications Solutions Director, High Performance Computing and Communications, IBM
Adam Beguelin, Research Scientist, School of Computer Science, Carnegie Mellon University
Richard Binder, Principal Scientist, Corporation for National Research Initiatives
John Cavallini, Deputy Associate Director, Office of Scientific Computing, Department of Energy
Bruce Davie, Member of Technical Staff, Broadband Packet Switching Research, Bellcore
Darleen Fisher, Associate Program Manager, Division of Networking and Communications Research and Infrastructure, National Science Foundation
Linda Garcia, Senior Associate, Office of Technology Assessment
Tom Hausken, Analyst, Office of Technology Assessment
Milo Medin, Deputy Project Manager, NASA Science Internet Office, NASA
Paul Messina, Director, Caltech Concurrent Supercomputer Facility, California Institute of Technology
Craig Partridge, Senior Scientist, Bolt Beranek and Newman
Daniel Stevenson, Director, Communications Research, MCNC
Richard Thayer, Director, Federal Government Affairs, AT&T
Bo Thomas, Senior Federal Account Manager, Sprint
Philip Webre, Principal Analyst, Congressional Budget Office
Allan Weis, President, Advanced Network & Services
Joan Winston, Senior Analyst, Office of Technology Assessment

NOTE: OTA appreciates and is grateful for the valuable assistance and thoughtful critiques provided by the reviewers. The reviewers do not, however, necessarily approve, disapprove, or endorse this background paper. OTA assumes full responsibility for the background paper and the accuracy of its contents.


 

Project Staff

ALAN BUZACOTT, Project Director

Administrative Staff

Liz Emanuel, Office Administrator
Barbara Bradley, Secretary
Karolyn St. Clair, PC Specialist

John Andelin, Assistant Director, OTA Science, Information, and Natural Resources Division

James W. Curlin, Program Manager, OTA Telecommunication and Computing Technologies Program

 

Contents

1 Introduction and Summary, 1
  Federal Support for Gigabit Networking, 1
  Summary, 8

2 The Internet, 15
  Applications, 21
  Protocols, 24
  Network Components, 26
  The Internet and the Public Switched Network, 31

3 Broadband Network Technology, 35
  Broadband Applications, 35
  Fast Packet Networks, 42
  Network Component Development-Current Status, 44
  Application of Broadband Technologies, 47

4 Gigabit Research, 51
  Research Objectives, 52
  Testbed Progress, 56

5 Application of Testbed Research, 65
  Application to the NREN, 65
  Application to Other Networks, 70

INDEX, 75


 

 

1
Introduction and Summary

The vision of the Nation's future telecommunications system is that of a broadband network (see box 1-A) that can support video, sound, data, and image communications. Toward this end, the High-Performance Computing Act of 1991 called for the Federal computer networks that connect universities and Federal laboratories to be upgraded to "gigabit networks" (see box 1-B) by 1996.1 This background paper reviews technologies that may contribute to achieving this objective, and describes the six prototype gigabit networks or "testbeds" that are being funded as part of the Federal High Performance Computing and Communications Program. These prototype networks are intended to demonstrate new communications technologies, provide experience with the construction of advanced networks, and address some of the unresolved research questions.

FEDERAL SUPPORT FOR GIGABIT NETWORKING

The High Performance Computing and Communications Program (HPCC) is a multiagency program that supports research on advanced supercomputers, software, and networks.2 In part, these technologies are being developed to attack the "Grand Challenges": science and engineering problems in climate change, chemistry, and other areas that can only be solved with powerful computer systems. Network research is one of four components of the HPCC program, and represents about 15 percent of the program's annual budget of close to $1 billion.3

1 High-Performance Computing Act of 1991 (HPCA), PL 102-194, Sec. 102(a).
2 Office of Science and Technology Policy (OSTP), "Grand Challenges 1993: High Performance Computing and Communications," 1992.
3 Ibid., p. 28.

 


Box 1-A–Broadband Networks

Figure 1-A-1—Digital Data
[Figure: an electrical or optical signal switching between high and low values, and the corresponding binary representation as 1s and 0s. SOURCE: Office of Technology Assessment, 1993.]

Computers and networks handle information as patterns of electronic or optical signals. Text, pictures, sound, video, and numerical data can then be stored on floppy disks, used in computations, and sent from computer to computer through a network. In digital computers or networks, the electronic or optical signals that represent information can take on one of two values, such as a high or a low voltage, which are usually thought of either as a "1" or a "0" (figure 1-A-1). These 1s and 0s are called bits. Different patterns of 1s and 0s are used to represent different kinds of data. In most computers, the letter "A" is represented by the pattern of electronic signals corresponding to "01000001." To represent images, different patterns of bits are used to represent different shades (from light to dark) and colors. Sound is represented in much

the same way, except that the patterns of bits represent the intensity of sound at points in time. The number of bits required to represent information depends on a number of factors. One factor is the quality of the representation. A good quality, high-resolution image would require more bits than a low-resolution image. Also, some kinds of information inherently require more bits in order to be represented accurately. A page of a book with only text might contain a few thousand characters, and could be represented with a few tens of thousands of bits. A page of image data, on the other hand, could require millions of bits. Because images and video, which is a sequence of images, require many more bits to be represented accurately, they have strained the capabilities of computers and networks. Images take up too much space in a computer's memory, and take too long to be sent through a network to be practical. The new high-capacity network technologies described in this background paper have the ability to support two-way digital image and video communications in a more efficient manner.
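The arithmetic in Box 1-A can be made concrete with a short sketch. The Python snippet below is an illustration added to this edition; the page and image sizes are assumptions chosen only to match the rough magnitudes quoted in the box.

    # Illustrative sketch: representing information as bits.
    # The sizes below are assumptions, not figures from the report.

    def char_to_bits(ch: str) -> str:
        """Return the 8-bit pattern for one character, e.g. 'A' -> '01000001'."""
        return format(ord(ch), "08b")

    print("Letter 'A' as bits:", char_to_bits("A"))

    # A page of text: a few thousand characters, 8 bits each.
    text_page_bits = 3000 * 8
    print("Text page:", text_page_bits, "bits")      # tens of thousands of bits

    # A page of image data: assume 1,000 x 1,000 picture elements, 8 bits each.
    image_page_bits = 1000 * 1000 * 8
    print("Image page:", image_page_bits, "bits")    # millions of bits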

Digital Networks

In the past, networks designed for video or sound used analog transmission. In the old analog telephone network, for example, the telephone's microphone converted the spoken sounds into an electrical signal whose strength corresponded to the loudness of the sounds. This signal then traveled through the network's wires until it reached its destination, where it was used to make the telephone's speaker vibrate, recreating the spoken sounds. Digital networks transmit information in digital form, as a series of bits. Digital networks are required for high-speed communications between computers; computers work with digital data. However, digital networks can also transmit real-world information such as sounds and pictures if special digital telephones or video cameras are used to represent the information in digital form. A digital telephone, for example, generates a series of patterns of 1s and 0s, corresponding to the loudness of the sounds. At the destination, these 1s and 0s are interpreted by the digital telephone and used to recreate the original sounds.

Digital networks are quickly replacing analog networks. They are needed to transmit the growing amount of computer data. They also transmit voice and video information more cleanly, without interference and distortion. More importantly, digital networks allow a single network to carry all types of information. Today, separate networks are used for voice traffic (the telephone network), computer communications (data networks such as the Internet), and video (broadcast or cable television or other specialized networks). Because these different kinds of information can all be represented in digital form, a single digital network can potentially be used to transmit all types of information. This is not the only requirement, however (see ch. 2 and ch. 3).

Broadband Networks

The capacity of a digital network is often described in terms of the number of bits that the network can transmit from place to place every second. A digital telephone network can transmit 64,000 bits every second. This is sufficient capacity to carry a telephone conversation with acceptable quality, but is not enough to carry video. Although some videotelephones can use regular telephone lines, users of videoconferencing systems usually prefer to use special services that can transmit at 384,000 bits per second or more. VCR-quality television needs about 1.5 million bits per second, and high-definition television needs about 20 million bits per second, about 300 times the capacity of a digital telephone line. The capacity of a network, measured as the number of bits it can transmit every second, is called "bandwidth." Engineers often talk about "narrowband" networks, which are low bandwidth networks, and "broadband" networks, which are high bandwidth networks. The dividing line between the two is not always clear, and changes as technology evolves. Today, any kind of network that transmits at more than 100 million bits per second would definitely be considered a broadband network. Chapter 3 describes fiber optics and other technologies that will be used to build broadband networks.

SOURCE: Office of Technology Assessment, 1993.

The other three components of the program target supercomputer design, software to solve the Grand Challenges, and research in computer science and mathematics.

The HPCC program is the most visible source of Federal funds for the development of new communications technology. The networking component of the program is divided into two parts: 1) research on gigabit network technology, and 2) developing a National Research and Education Network (NREN). The gigabit research program supports research on advanced network technology and the development of the six testbeds. The NREN program supports the deployment of an advanced network to improve and broaden network access for the research and education community. The High-Performance Computing Act of 1991 specifies that the NREN should operate at gigabit speeds by 1996, if technically possible.4

4 HPCA, op. cit., footnote 1.
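The rates quoted in Box 1-A can be checked with a few lines of arithmetic. The sketch below is an added illustration; it simply divides each service's bandwidth by the 64,000 bits per second of a digital telephone channel.

    # Bandwidths quoted in Box 1-A, in bits per second.
    TELEPHONE = 64_000           # digital telephone channel
    VIDEOCONF = 384_000          # typical videoconferencing service
    VCR_TV = 1_500_000           # VCR-quality television
    HDTV = 20_000_000            # high-definition television

    for name, rate in [("videoconferencing", VIDEOCONF),
                       ("VCR-quality TV", VCR_TV),
                       ("HDTV", HDTV)]:
        print(f"{name}: about {rate // TELEPHONE} telephone channels")
    # HDTV works out to roughly 300 telephone channels, as the box states.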

Broadband networks such as the NREN will both improve the performance of existing applications and accommodate new types of applications. There will likely be a shift to image- and video-based communications, which are not adequately supported by currently deployed network technology. "Multimedia" applications that use images and video, as well as text and sound, look promising in a number of areas, e.g., education, health care, business, and entertainment. Broadband networks will also allow a closer coupling of the computers on a network; as the network is removed as a bottleneck, the computers will be able to form an integrated system that performs as a single, more powerful, computer.

Broadband networks will require a fundamental rethinking of network design. Several new concepts have been proposed and are being investigated by the testbeds. Fiber is a highly touted technology for constructing broadband networks, but it alone is not sufficient. Switches (see box 1-C) and the components that link computers to the network will have to be upgraded at the same time in order to keep pace with

 


Box 1-B-Gigabit Networks

Much of the research described in this background paper is aimed at the development of gigabit networks, broadband networks that can transmit data at one billion bits per second or more (a "gigabit" is one billion bits; "gigabit per second" is abbreviated as Gb/s or Gbps). This represents a 20-fold increase over the most capable links in the networks that currently serve the research and education community. The current National Science Foundation network uses links that transmit data at 45 million bits per second (megabits per second or Mb/s), and even this capacity has not been fully utilized because of bottlenecks in the network's switches. The development of a gigabit network is an ambitious target; most current industry technology planning targets broadband networks with lower bandwidths, in the 150 million bits per second range. The basic outlines of the technology evolution of the DOD, NASA, DOE, and NSF networks that serve research and education were established in 1987 and 1989 reports issued by the Office of Science and Technology Policy. In the late 1980s, link bandwidths in the Federal networks were 1.5 Mb/s or less. The OSTP reports outlined a three-stage plan for the evolution of these networks to gigabit networks by the mid-to-late 1990s (see figure 1-B-1). The gigabit target was also specified by the High-Performance Computing Act of 1991. The OSTP report envisioned that each generation of technology would move from an experimental phase in the

Federal networks to commercial service.

Figure 1-B-1—Timetable for the National Research and Education Network
[Figure: a three-stage timeline running from 1989 to 1996. Stage 1 (1.5 Mb/s) and Stage 2 (45 Mb/s) are operational networks undergoing evolutionary changes; Stage 3 (Gbit/s) consists of experimental networks and research and development, with revolutionary technology changes moving to commercial services.]
SOURCE: Office of Science and Technology Policy, "The Federal High Performance Computing Program," September 8, 1989.

Currently, the Federal agency networks are in the middle phases of the second stage, the operation of networks with 45 Mb/s links. At the same time, research and development for the third stage, the deployment of gigabit networks, is underway. In practice, the network capacity will not jump directly from 45 Mb/s to gigabit rates. The next step will be to 155 Mb/s, then to 622 Mb/s, and then to greater than one gigabit per second. The bandwidths used in computer networks (1.5 Mb/s, 45 Mb/s, 155 Mb/s, and 622 Mb/s) correspond to standards chosen by manufacturers of transmission equipment.

SOURCES: Office of Science and Technology Policy (OSTP), "A Research and Development Strategy for High Performance Computing," Nov. 20, 1987; OSTP, "The Federal High-Performance Computing Program," Sept. 8, 1989; High-Performance Computing Act of 1991 (HPCA), Public Law 102-194, Sec. 102(a).
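To give a feel for what the stages in Box 1-B mean in practice, the sketch below estimates how long a large data set would take to move at each of the link rates mentioned above. It is an added illustration, and the 1-gigabyte data set is an assumed example rather than a figure from the report.

    # Link rates from Box 1-B, in bits per second.
    rates = {
        "Stage 1 (1.5 Mb/s)": 1.5e6,
        "Stage 2 (45 Mb/s)": 45e6,
        "155 Mb/s": 155e6,
        "622 Mb/s": 622e6,
        "Gigabit (1,000 Mb/s)": 1e9,
    }

    data_set_bits = 1e9 * 8      # an assumed 1-gigabyte data set

    for name, rate in rates.items():
        print(f"{name}: about {data_set_bits / rate:,.0f} seconds")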


 


Box 1-C-Computer Network Components

A computer network has three main components: computers, links, and switches (figure 1-C-1). The web of links and switches carries data between the computers. Links are made of copper (either "twisted pair" or "coaxial cable") or fiber optics. Transmission equipment at each end of the fiber or copper generates the electrical or optical signals. There are also satellite and microwave links that send radio waves through the air. Fiber has several advantages over other types of links, most notably its very high bandwidth. The fiber-optic links needed for gigabit networks are already commercially available. However, gigabit networks will not be deployed until research issues in other network components are addressed.

Figure 1-C-1—A Simple Computer Network
[Figure: computers connected to one another by links and switches.]
SOURCE: Office of Technology Assessment, 1993.

For example, new high-capacity switches are needed to keep pace with the higher bandwidth of fiber optic links. Just as railroad switches direct trains from track to track, the switches in computer networks direct information from link to link. As the information travels through the network, the switches decide which link it will have to traverse next in order to reach its destination. The rules by which the switches and users’ computers coordinate the transmission of information through the network are called protocols. While most computer networks are limited in their ability to carry high-bandwidth signals such as video, cable television networks are widely used to distribute television signals to homes. However, cable networks usually do not have switches. For this reason, they only permit one-way communications: the signal is simply broadcast to everyone on the network. Much of the network research today is devoted to the development of switches that would allow networks to support two-way, high-bandwidth communications. SOURCE: Office of Technology Assessment, 1993.
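Box 1-C describes a switch as a device that directs information from link to link according to its destination. The fragment below is a minimal sketch of that idea, added for illustration; the destination and link names are hypothetical.

    # Minimal sketch of a switch's forwarding decision.
    # The destinations and link names are made-up examples.

    forwarding_table = {
        "host-a": "link-1",
        "host-b": "link-2",
        "host-c": "link-2",
    }

    def forward(destination: str) -> str:
        """Choose the outgoing link for data headed to `destination`."""
        return forwarding_table[destination]

    print(forward("host-b"))   # -> link-2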

the faster flow of data. Broadband networks will be more than simply higher bandwidth versions of today's networks, however. Networks will also be redesigned so that a single type of network can carry video, sound, data, and image services. The existing telephone and data networks do not have sufficient flexibility to carry all types of information efficiently.

The NREN

One objective for the NREN is that it serve as an enabling technology for science and engineering researchers.5 The gigabit NREN will be able to handle the very large data sets generated by supercomputers. Scientists could use the gigabit NREN to support "visualization," the use of a computer-generated picture to represent data in image form. For example, ocean temperatures computed by a climate model could be represented by different colors superimposed on a map of the world, instead of a list of numbers.
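As a simple illustration of the visualization idea described above, the sketch below maps a computed ocean temperature to a color rather than presenting it as a number. It is an added example; the temperature range and color scale are arbitrary assumptions.

    # Toy sketch: map a temperature value to a color for display on a map.
    # The range and the blue-to-red scale are arbitrary choices.

    def temperature_to_color(temp_c: float, low: float = -2.0, high: float = 30.0):
        """Return an (R, G, B) triple: blue for cold water, red for warm water."""
        frac = max(0.0, min(1.0, (temp_c - low) / (high - low)))
        return (int(255 * frac), 0, int(255 * (1 - frac)))

    for t in (-2.0, 14.0, 30.0):
        print(t, "deg C ->", temperature_to_color(t))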

Visualization is an essential technique for understanding the results of a simulation. Currently, much of the data generated by experiments and computed by simulations goes unused because of the time needed to compute images on conventional computers. Supercomputers could perform the computations more quickly, but few laboratories have supercomputers. With a high-speed network, a scientist could send the data to a distant supercomputer, which would be able to quickly compute the images and send them back through the network for display on the scientist's computer.

A second objective for the NREN program is that it demonstrate and test advanced broadband communications technologies before they are deployed in commercial networks. The NREN program will upgrade federally supported networks such as the National Science Foundation's NSFNET, the Department of Energy's Energy Sciences Network (ESnet), and the National Aeronautics and Space Administration's NASA Science Internet (NSI).6 These networks form the core of the "Internet," a larger collection of interconnected networks that provides electronic mail services and access to databases and supercomputers for users in all parts of the United States and around the world.7 During 1992, Federal agencies announced plans for upgrading their current networks as part of the NREN program.8

The NREN program can be viewed as a continuation and expansion of the Federal support that created the Internet. The Internet's technology evolved from that of the Arpanet, a research project of the Advanced Research Projects Agency. Beginning in 1969, the Arpanet served to demonstrate the then-new technology of "packet switching." Packet switched networks were able to support computer communications applications that could not be efficiently accommodated by the telephone network's "circuit switched" technology (see ch. 2, p. 29). Packet switched networks are now widely deployed, Internet services are being offered by the private sector, and the Internet protocols are becoming world standards. In much the same way, the NREN program is intended to catalyze the deployment of a new generation of network technology.

Past government programs have also been successful in broadening access to networks for the larger research and education community. The Internet is increasingly essential to users in the academic community beyond the original core group of users in engineering and computer science. It is now estimated that over 600 colleges

5 For a description of the goals and characteristics of the NREN, see HPCA, op. cit., footnote 1, Sec. 102(a)-(c); OSTP, op. cit., footnote 2, p. 18; U.S. Congress, Office of Technology Assessment, High Performance Computing & Networking for Science, OTA-BP-CIT-59 (Washington, DC: U.S. Government Printing Office, September 1989), p. 25.
6 OSTP, op. cit., footnote 2, p. 18; Office of Science and Technology Policy, "The National Research and Education Network Program: A Report to Congress," December 1992, p. 2.
7 Robert E. Calem, "The Network of All Networks," The New York Times, Dec. 6, 1992, p. F12.
8 National Science Foundation, "Public Draft: Network Access Point Manager/Routing Authority and Very High Speed Backbone Network Services Provider for NSFNET and the NREN Program," June 12, 1992; James F. Leighton, Manager of Networking and Engineering, National Energy Research Supercomputer Center, Lawrence Livermore National Laboratory, "ESnet Fast-Packet Services Requirements Specification Document," Feb. 20, 1992.

 


and universities and an estimated 1,000 high schools are connected to the Internet.9 As the Internet user community becomes more diverse, there is a growing need for simplifying the applications and their user interfaces.

This background paper primarily describes gigabit NREN applications and network technologies. There are, however, several controversial policy issues related to the NREN program.10 First, the scope of the NREN is uncertain. As a key component of the HPCC program, a clear role of the NREN is to serve scientists and engineers at Federal laboratories, supercomputer centers, and major research universities. This objective will be met primarily by upgrading the networks operated by the National Science Foundation (NSF), Department of Energy (DOE), and the National Aeronautics and Space Administration (NASA). However, there are several different visions of the extent to which the NREN program should also serve a broader academic community, such as libraries and schools.

A second major issue concerns the "commercialization" of the NREN. The NREN will develop from the current Internet, which is increasingly used by government and businesses, not only by the research and education community. Several new commercial providers have emerged to offer Internet services to this market, which is not served by Federal agency networks. One of the goals of the NREN program is to continue this commercialization process, while at the same time achieving the science and network research goals of the NREN program. There has been considerable uncertainty about the mechanisms by which this objective is to be achieved. The High-Performance Computing Act does not clearly specify the scope of the NREN or the mechanism for commercialization. NSF has had to address these issues in the course of developing a plan for the development of its network, which will be a central component of the NREN. These debates have slowed considerably the process by which NSF will select the companies that will operate its network. NSF's original plan, released in the summer of 1992, is undergoing significant revisions (see box 5-A). As of May 1993, a new plan had not been issued. It is increasingly unlikely that NSF will be able to deploy its next-generation network by the spring of 1994, as was originally planned.

In addition, the growing commercial importance of networking is leading to greater scrutiny of the agencies' choices of contractors to operate their NREN networks. DOE selected a contractor for its component of the NREN in the summer of 1992, planning to deploy the new network in mid-1993. However, a losing bidder protested DOE's selection to the General Accounting Office (GAO). In March 1993, GAO overturned DOE's choice of contractor and recommended that DOE revise its solicitation, conduct discussions with potential contractors, and allow contractors a new opportunity to bid.11 DOE has

9 Darleen Fisher, Associate Program Manager, National Science Foundation, personal communication, Feb. 11, 1993.
10 For issues related to the NREN program, see Hearings before the House Subcommittee on Science, Space, and Technology, Mar. 12, 1992, Serial No. 120.
11 The dispute concerned the parties' interpretation of certain provisions in DOE's Request for Proposals (RFP). AT&T protested DOE's selection of Sprint to be the contractor for the DOE network, arguing successfully that the RFP had specified more fully-developed switches than had been proposed by Sprint as part of its bid. GAO ruled that the switches that Sprint planned to use did not comply with a provision in the RFP that proposals had to "conclusively demonstrate current availability of the required end-to-end operational capability." DOE, by contrast, was satisfied that the switches had been developed to the level envisioned by the RFP and were appropriate to a program designed to explore leading-edge technology. DOE's RFP had specified the use of "cell relay" technology, which is the basis for both Asynchronous Transfer Mode (ATM) and Switched Multimegabit Data Service (SMDS) services. ATM is expected to play an important role in the future development of computer networking and the telecommunications industry, while SMDS is viewed primarily as an intermediate step towards ATM. DOE selected Sprint in large part because Sprint proposed to begin ATM services immediately, while AT&T bid a service based on SMDS and evolving to ATM in 1994. Early deployment of ATM would have provided a valuable opportunity to evaluate and demonstrate a key telecommunications technology. Comptroller General of the United States, Decision in the Matter of AT&T, File B-250516.3, March 30, 1993.

 


asked GAO to reconsider its decision. The DOE example raises questions about the effect of government procurement procedures on the ability of Federal agencies to act as pioneers of leading-edge network technology. The additional time that would be required to comply with GAO's recommendations, added to the seven-month GAO process, would delay deployment of DOE's network by over a year.

The Testbeds

The HPCC program's six gigabit testbeds (table 1-1) are intended to demonstrate emerging high-speed network technologies and address unresolved research questions. While each testbed involves a different research team and is emphasizing different topics, there is similarity in their approach. The testbeds typically consist of a high-speed network connecting three or four sites (universities, industry laboratories, supercomputer centers, and Federal laboratories) with high-bandwidth optical fiber. Located at each of the testbed sites are computers, prototype switches, and other network components. Each research group has both network and applications researchers; the applications will be used to test different approaches to network design.

The testbed program is administered by NSF and the Advanced Research Projects Agency (ARPA).12 Five of the testbeds are jointly funded for 3 years by NSF and ARPA under a cooperative agreement with the Corporation for National Research Initiatives (CNRI). The principals of CNRI, a nonprofit organization, played significant roles in the development of both the Arpanet and the Internet.13 CNRI is responsible for organizing the testbeds and coordinating their progress. Funding for the testbeds is modest, when compared to their visibility and the overall HPCC budget. The cooperative agreement with CNRI is for $15.8 million over 3 years. Most of the cost of building the networks has been borne by industry, in the form of contributions of transmission capacity, prototype switches, and research personnel.

The testbeds are investigating the use of advanced network technology to match the needs of the NREN. There is an emphasis on delivering the highest bandwidths possible to the users and demonstrating the range of applications that would be used by leading-edge users of the NREN. Most of these applications are supercomputer-based. For example, some applications use the network to link several supercomputers, allowing their combined processing power to compute complex simulations more rapidly. Many of the applications being investigated also use the network to enable visualization of the results of simulations or experiments. Initially, only a few users would have computers powerful enough to need a gigabit network. However, the processing power of lower cost workstations and ordinary desktop computers is likely to continue to increase rapidly, as a result of advances in microprocessor technology. Gigabit networks and the lessons learned from the testbeds will then be used more widely.

SUMMARY

Progress

Significant progress has been made toward the development of gigabit network technology since 1987, when the Office of Science and Technology Policy (OSTP) noted that considerable research would be needed to determine the design of

12 Formerly the Defense Advanced Research Projects Agency (DARPA).
13 Dr. Robert E. Kahn is President of CNRI; Dr. Vinton G. Cerf is Vice President.

 

Table 1-1—Gigabit Testbed Participants

AURORA
  Location: Northeast
  Industry: IBM, Bellcore, Bell Atlantic, NYNEX, MCI
  Universities & other: MIT, University of Pennsylvania

BLANCA
  Location: Nationwide
  Industry: AT&T
  Federal laboratories: Lawrence Berkeley Laboratory
  Supercomputer centers: National Center for Supercomputing Applications
  Universities & other: University of Illinois, University of Wisconsin, University of California-Berkeley

CASA
  Location: Southwest
  Industry: MCI, Pacific Bell, U.S. West
  Federal laboratories: Jet Propulsion Laboratory, Los Alamos National Laboratory
  Supercomputer centers: San Diego Supercomputer Center
  Universities & other: California Institute of Technology

NECTAR
  Location: Pittsburgh
  Industry: Bellcore, Bell Atlantic
  Supercomputer centers: Pittsburgh Supercomputer Center
  Universities & other: Carnegie Mellon University

VISTAnet
  Location: North Carolina
  Industry: BellSouth, GTE
  Supercomputer centers: North Carolina Supercomputer Center (at MCNC)
  Universities & other: University of North Carolina-Chapel Hill, North Carolina State University, MCNC

MAGIC
  Location: South Dakota, Kansas, Minnesota
  Industry: Sprint, MITRE, Digital Equipment Corp., Southwestern Bell, Northern Telecom, Split Rock Telecom, SRI International
  Federal laboratories: U.S. Army Future Battle Laboratory, U.S. Army High-Performance Computing Research Center, U.S. Geological Survey, Lawrence Berkeley Laboratory
  Supercomputer centers: Minnesota Supercomputer Center
  Universities & other: University of Kansas

SOURCE: Corporation for National Research Initiatives (CNRI), Advanced Research Projects Agency (ARPA).

 


gigabit networks.14 There has been growing consensus within the technical community on many issues, and the development of the optical fiber links, switches, and other network components is underway. The testbeds represent the next step in the research: integrating the hardware and software components into a working network system and testing it with applications.

The basic characteristics of the design of broadband networks began to emerge in the mid-1980s, supported by the results of simulations and small-scale experiments. Researchers' objective was to develop networks that could support high bandwidths and were also sufficiently flexible to support a range of services. One characteristic of these networks is the use of optical fiber links, which have the necessary capacity to support many new services, including bandwidth-intensive video- and image-based applications. The second major characteristic of the proposed designs for advanced networks is the use of "fast packet switches," a new type of switch that has both the processing power to keep up with increases in link bandwidth and the flexibility to support several kinds of services. As these ideas began to emerge, computer and telecommunications companies initiated the development of the network components required for broadband networks.

There appear to be no significant technological barriers to the development of the components required for the gigabit NREN. Transmission equipment of the type that would be required for the gigabit NREN is already becoming available commercially and is being used in the testbeds. Some fast packet switches are also becoming commercially available. Versions of these switches that operate at gigabit rates are in prototype form and will be incorporated in the testbeds over the coming year.

The testbeds are looking to the next step in the research: the development of test networks. This is a systems integration task; developing the individual components is only part of the process of building an advanced network. There is often much to be learned about making the components work together and solving unforeseen problems in the implementation. In addition, there are research questions that can only be investigated with a realistic test network. The testbeds will provide a way to test various proposed approaches to network design.

Progress on the testbeds has been slower than expected, due to delays in making the transmission equipment available and in completing work on the switches and other components. Switches are complex systems, requiring the fabrication of numerous electronic circuits. It was originally hoped that the optical fiber links could be deployed and the gigabit switches and other components finished in time to have a year to experiment with the working testbed networks before the end of the program in mid-1993. It now appears that the testbeds will not be operational until the third quarter of 1993. The testbed program has been extended to permit a year's research on the testbed facilities once they become operational.

 

Testbed Concept

The testbeds have established a useful model for network research. The design and construction of a test network fills a gap between the earlier stages of the network research-small scale experiments and component development—and the deployment of the technology in production

14 Office of Science and Technology Policy, "A Research and Development Strategy for High Performance Computing," Nov. 20, 1987, p. 21.

 


networks. The testbed networks model the configuration in which the technology is expected to be deployed—the test sites are separated by realistic distances and the networks will be tested with applications of the type expected to be used in the gigabit NREN. In addition, the participants in the testbeds will play important roles when the networks are deployed. The testbed research contributes in a number of  ways to a knowledge base that reduces the risks involved in deploying advanced network technology. First, there are a number of research issues that are difficult to address without a working

network that can be used to try different approaches. Second, the systems integration process provides experience that can be applied when the production network is constructed. In many ways the experience gained in the process of getting the testbeds to work will be as valuable as any research done with the operational testbeds. Third, the testbeds serve to demonstrate the utility of the technology, which serves to create interest among potential users and commercial network providers.

The relatively small amount of government money invested has been used primarily to organize and manage the testbeds and to encourage academic involvement. The testbeds have mainly drawn on other government and industry investment. The organization of the testbeds as a collaborative effort of government, academic, and industry groups is essential, because of the many disciplines required to build and test a network. Industry has contributed expertise in a number of areas. For example, it would be too difficult and expensive for academic researchers to develop the high-speed electronics needed for the switches and other components. Academic researchers are involved in the Internet community, and have contributed ideas for new protocols and applications. Other applications work has come from a number of scientific disciplines and the supercomputer community.

One of CNRI's main contributions was to encourage the involvement of the telecommunications carriers in the testbeds. The transmission facilities required for the testbeds are expensive because of the long distances between the testbed sites and the demands for very high bandwidth. Most experimental work in the past was on small scale networks in a laboratory, due to the prohibitive cost of linking distant test sites. However, the carriers are installing the required transmission capacity and making it available to the testbeds at no cost. All three major interexchange carriers (AT&T, MCI, and Sprint), and most of the Regional Bell Operating Companies (RBOCs), are playing a role in the testbeds.

The testbed research overlaps with industry priorities in some areas and not in others. The basic design of the networks (the types of switches and transmission equipment) reflects emerging industry concepts. However, much of the research agenda focuses on higher bandwidths and more specialized applications than will be used with commercial broadband networks in the near term. Only a few users will use the types of supercomputer-based applications being emphasized by the testbeds. Of greater near-term commercial importance to industry are medium-bandwidth "multimedia" applications that require more bandwidth than can be supported by current networks, but significantly less than the gigabit speeds required by the supercomputer community.

Application of Testbed Research

The testbed research is applicable both to the NREN and to other networks. The NREN will serve only the research and education community and is best viewed as only part of the broader national information infrastructure.15 The scope of the national information infrastructure will

15 For one view of the relationship between the NREN and the "National Information Infrastructure," see Michael M. Roberts, "Positioning the National Research and Education Network," EDUCOM Review, vol. 26, No. 3, summer 1991, pp. 11-13.

 


include both the United States' part of the Internet and a wide array of other services offered by the computer and information industries, the carriers, the cable television industry, and others.

APPLICATION TO NREN

During 1992, DOE, NASA, and NSF published plans for the future development of their networks, a key component in the evolution to the gigabit NREN.16 Some aspects of these plans are still unclear; for example, NSF has left to prospective bidders the choice of switching technology, from among those being investigated by the testbeds and elsewhere. However, the agency plans appear to be consistent with the target established by the testbeds. Initially, the agency networks will operate at lower bandwidths than the testbed networks, but they will incorporate more of the testbed technology as they evolve over time to meet the goal of the gigabit NREN. Today, the highest bandwidth of the agency networks is 45 Mb/s; it appears that they will move to 155 Mb/s in 1994, with 622 Mb/s the highest rate that is realistically achievable by 1996. The rate of evolution is less dependent on technology issues than on delays in the process by which the Federal agencies select suppliers of NREN network services. Because agency choices of technologies and suppliers have broad implications for the Internet and the national information infrastructure in general, there have been several disputes over agency plans (see p. 7). While the NREN program has created a high level of interest in advanced networks, further delays in the deployment of agency networks may reduce the degree to which they will play the role of technology pioneers.

The agency networks' evolution depends in part on the timely deployment of the necessary high bandwidth transmission infrastructure by the telecommunications carriers. Computer networks generally use links supplied by the carriers; the network operators do not normally put their own fiber in the ground. The carriers' networks already have gigabit-capacity fiber installed, but today the capacity is usually divided among thousands of low-bandwidth channels used for telephone calls. New transmission equipment, the electronics at each end of the fiber, is required to allow the fiber's capacity to be divided into the high-bandwidth channels needed by the gigabit NREN. This equipment is being used in the testbeds and is becoming available commercially, but is very expensive.

The testbed applications research helps researchers to understand how the NREN would be used to achieve the science goals of the overall HPCC program. For example, some of the testbed applications show how networks can be used to bring greater computer power to bear on complex simulations such as the Grand Challenge problems. They may also show how networks can be used to help researchers collaborate; the Grand Challenge teams are expected to involve scientists at widely separated locations. In 1992, the NSF supercomputer centers proposed the concept of a "metacenter," which uses a high-speed network to link the computing power of the four NSF supercomputer centers.

The testbeds do not address all of the technology issues that are key to the future development of the NREN. Because the NREN will develop from the federally funded segment of the current Internet, it is affected by issues related to the growing number of users of the Internet. This growth in the number of users is straining some of the Internet protocols, and their future development is a topic of intensive study and debate

16 NSF, op. cit., footnote 8; Leighton, op. cit., footnote 8.

 


within the Internet community. Also, the testbeds are not looking at applications that would be used by a broad range of users in the near term, or at issues related to making the Internet applications easier to use.

OTHER NETWORKS

One of the roles of the NREN is to serve as a testbed in itself, demonstrating technology that will then be deployed more broadly in the national information infrastructure. The testbed program will also impact the evolution of the national information infrastructure more directly, bypassing the intermediate stage of deployment in the NREN. This is because the network technology used in the testbeds reflects near-term industry planning. While the testbeds have emphasized higher bandwidths and more specialized applications than are of immediate commercial importance, the testbed networks reflect ideas that figure prominently in industry plans and, wherever possible, use equipment that conforms to emerging standards.

For example, many of the testbeds use a switching technology called Asynchronous Transfer Mode or ATM. This technology has become central to telecommunications industry planning because it is designed to support many different kinds of services; today's telephone network switches are limited mainly to carrying ordinary telephone calls. ATM can support Internet-type services such as will be used in the NREN, and also video, voice, and other data communications services; the carriers plan to use ATM to enter a variety of markets. Although ATM has been widely accepted by the telecommunications industry and progress has been made towards its implementation, there are a number of unresolved research issues. The testbeds are providing a large-scale opportunity to test this technology and possibly provide input to the standards process.
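ATM carries all traffic in small, fixed-length cells; in the standard, each cell is 53 bytes, of which 48 carry payload and 5 carry header information. That cell size is standard ATM practice rather than something stated in this background paper, and the sketch below is only an added illustration of the segmentation step, with a made-up connection identifier standing in for the real header fields.

    # Illustrative sketch of ATM-style segmentation into fixed-length cells.
    # Standard ATM cells carry 48 bytes of payload; the "header" here is a
    # stand-in connection identifier, not the real ATM header format.

    CELL_PAYLOAD = 48

    def segment(message: bytes, connection_id: int):
        """Split a message into (header, payload) cells, padding the last cell."""
        cells = []
        for i in range(0, len(message), CELL_PAYLOAD):
            chunk = message[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
            cells.append((connection_id, chunk))
        return cells

    cells = segment(b"A video frame or an Internet packet..." * 10, connection_id=42)
    print(len(cells), "cells of", CELL_PAYLOAD, "payload bytes each")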

 


2
The Internet

The gigabit National Research and Education Network (NREN) is to develop from the current Internet, a "network of networks" that connects users in all parts of the United States and around the world. The Internet allows users to communicate using electronic mail, to retrieve data stored in databases, and to access distant computers. The network began as an Advanced Research Projects Agency research project to investigate computer networking technology, and in slightly over 20 years has grown into an essential infrastructure for research and education. The NREN initiative and associated research programs are intended to support the further evolution of research and education networking, broadening access to the network and enabling new applications through the deployment of advanced technologies.

Federal support to further the development of networks that support research and education communications is directed primarily at upgrading the Federal "backbone" networks that have formed the core of the Internet.1 These networks include the National Science Foundation's NSFNET backbone, the NASA Science Internet (NSI) (figure 2-1), the Department of Energy's Energy Sciences Network (ESnet), and the Department of Defense's DARTnet and Terrestrial Wideband Network (TWBnet). The NASA and DOE networks are primarily intended for traffic related to the mission of the supporting agency, while the current NSFNET backbone serves users in a broader range of disciplines in universities, supercomputer centers, and industry research laboratories. The DOD networks support research and development of new communications technologies.

1 Office of Science and Technology Policy (OSTP), "Grand Challenges 1993: High Performance Computing and Communications," p. 18.

 

Federal agency networks will form the core of the gigabit NREN.

[Figure 2-1—The NASA Science Internet (NSI).]

 


Figure 2-2—Regional Network: NYSERNet Logical Topology
[Figure: map of the NYSERNet backbone connecting sites across New York State (Plattsburgh, Potsdam, Rome/Utica, Oswego, Rochester, Saratoga, Buffalo, Fredonia, Syracuse, Geneseo, Alfred, Troy, Albany, Ithaca, Olean, Corning, Binghamton, Kingston, New York City, White Plains/Yorktown, Garden City), with T1 (1.5 Mb/s) backbone links, core points-of-presence (POPs) and planned future POPs, dialup services, gateways to the NSFNET/Internet, and connections to Germany, Israel, and CIXnet.]
SOURCE: NYSERNet.

 

The Federal networks are interconnected at FIXes (Federal Internet Exchanges) at NASA's Ames Research Center in California and at the University of Maryland.

Upgrading the agency-supported backbones is not the only thing needed to improve research and education networking. The majority of users in universities, schools, and libraries do not have direct access to one of the backbone networks. These users rely on thousands of other networks that, together with the Federal agency backbones, form the Internet. These networks are interconnected, and information typically travels through several networks on its way from one user to another. In order to provide good performance end-to-end, all of the Internet's networks will need to evolve in a coordinated fashion, matched in capability and performance.

Most of the Internet's networks are "campus" or "corporate" networks, connecting users within a university or a company. Campus and corporate networks may in turn be interconnected by "regional" networks. For example, NYSERNet (New York State Education and Research Network) connects campuses and industrial customers in New York State (figure 2-2) and BARRNET (Bay Area Regional Research Network) does the same in northern California.


[Figure 2-3—The three-tier structure of campus, regional, and backbone networks.]
SOURCE: National Science Foundation (NSF).

Regional networks also provide the connection between campus networks and the national NSFNET backbone that carries traffic to other regions.2 The regional networks, and the resulting three-tier structure of campus, regional, and backbone networks (figure 2-3), evolved with support from the National Science Foundation.3
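To illustrate the three-tier structure, the sketch below traces the networks a message might cross between two campuses. It is an added example; the campus names and the regional assignments are hypothetical.

    # Hypothetical sketch of the campus / regional / backbone hierarchy.
    campus_to_regional = {
        "campus-a": "NYSERNet",
        "campus-b": "NYSERNet",
        "campus-c": "BARRNET",
    }

    def networks_crossed(src: str, dst: str):
        """List the networks a message traverses between two campuses."""
        if src == dst:
            return [src]
        if campus_to_regional[src] == campus_to_regional[dst]:
            return [src, campus_to_regional[src], dst]
        return [src, campus_to_regional[src], "NSFNET backbone",
                campus_to_regional[dst], dst]

    print(networks_crossed("campus-a", "campus-b"))   # stays within one regional
    print(networks_crossed("campus-a", "campus-c"))   # crosses the backbone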

The Internet also includes several networks that provide service on a for-profit basis.4 The government investment in developing and demonstrating Internet technology during the 1970s and 1980s has created opportunities for the private sector to sell Internet services. The effectiveness of the Internet technology has been proven, and a

2 NASA and DOE sites are connected directly to the agency networks. However, NASA and DOE rely on the regional networks and the NSFNET backbone to connect to university researchers participating in NASA and DOE projects.
3 For a description of the evolution of the regional networks and the three-tier structure, see Richard A. Mandelbaum and Paulette A. Mandelbaum, "The Strategic Future of the Mid-Level Networks," Brian Kahin (ed.), Building Information Infrastructure (New York, NY: McGraw Hill Primis, 1992).
4 Eric Arnum, "The Internet Dilemma: Freeway or Tollway?" Business Communications Review, December 1992, vol. 22, No. 12, p. 31.



growing number of companies are now using the Internet to conduct business. Even though the NREN program continues government funding for the agency backbone networks, in order to upgrade them to gigabit speeds, government support is becoming less central to the Internet as a whole. New commercial providers of nationwide Internet services have emerged. In addition, NSF has been reducing subsidies to the regional networks, which are increasingly being asked to recover costs from users.

The availability of commercial services is leading to a change in the makeup of the users of the Internet. Until recently, corporate use of the Internet was restricted to scientists and engineers in research laboratories or engineering departments. In part, this was due to the history of the Internet as an experimental network. The limited use of the Internet by the private sector was also due to an "Acceptable Use" policy that reserved the federally supported backbones for research and education traffic.5 The new commercial providers have no traffic restrictions, allowing the Internet to serve a wider range of users. Today's Internet users can have different security requirements,6 their technical sophistication varies, and the demands they place on the network's capacity differ.

One of the goals of the NREN program is to continue the trend towards provision of Internet services on a commercial basis, rather than solely as the result of a government subsidy.7 The NREN program continues government support for networking, but the emergence of commercial providers is leading to changes in the mechanisms by which this support is provided. NSI and ESnet will continue to support agency missions, but the next-generation NSFNET backbone will be considerably different from the current NSFNET backbone. As part of its NREN plans, NSF has decided that much of the traffic that is currently carried by its NSFNET backbone will in the future be handled by commercial providers, encouraging the further development of this segment of the Internet.

The next-generation NSFNET backbone will support a narrower range of users and serve fewer sites. Today the NSFNET backbone serves many sites nationwide, connecting regional networks and supercomputer centers (figure 2-4). It is a "general purpose" backbone, carrying traffic ranging from ordinary electronic mail to advanced supercomputer applications. In the future, the backbone will primarily be used by the NSF supercomputer centers, in Ithaca, New York, Pittsburgh, Pennsylvania, San Diego, California, and Champaign, Illinois.8 Other users, with more routine applications, will use services available from commercial providers. Without the current national backbone, the regional networks will have to make new arrangements for their interconnection (see ch. 5, p. 67).

The next-generation NSFNET backbone will continue to contribute to the objective of developing advanced network technology. The new backbone, together with the next-generation NSI and ESnet, will be one of the first networks to use the technologies studied by the gigabit testbeds described in chapter 4. The Federal networks will provide "experimental" services, not yet available from commercial providers. They will demonstrate and test new network

5 For issues related to NSF's Acceptable Use Policy, see Hearings before the House Subcommittee on Science, Space, and Technology, Mar. 12, 1992, Serial No. 120.
6 Gary H. Anthes, "Internet Security Risks," Computerworld, vol. 26, No. 48, Nov. 30, 1992, p. 55.
7 "[T]he NREN Program has a series of synergistic goals [including] stimulating the availability, at a reasonable cost, of the required services from the private sector," Office of Science and Technology Policy (OSTP), "The National Research and Education Network Program: A Report to Congress," December 1992, p. 2.
8 For a description of the NSF supercomputer centers, see U.S. Congress, Office of Technology Assessment, High Performance Computing and Networking for Science, OTA-BP-CIT-59 (Washington, DC: U.S. Government Printing Office, September 1989), pp. 9-10.


This two-part strategy—agency operation of advanced networks combined with subsidies for Internet access for certain groups of end users—represents a more detailed framework than the general NREN concepts and goals outlined in the High Performance Computing Act of 1991. It is expected to form the basis of NSF's forthcoming solicitation for the operation of its component of the NREN. It is also outlined in recently introduced legislation, the High Performance Computing and High Speed Networking Applications Act of 1993 (H.R. 1757), which would amend the High Performance Computing Act of 1991. However, there is concern in parts of the user community most affected by the change to an environment in which there is no longer a general purpose government operated network about the cost of commercial services and about the timing and management of the transition.

The remainder of this chapter describes the technology used in the current Internet. Chapter 3 provides an overview of emerging concepts that address some of the limitations of current network technology and might be used to construct gigabit networks. Chapter 4 describes the gigabit testbeds, NSF- and ARPA-funded prototype networks that are investigating these new technologies. Chapter 5 outlines NSF, NASA, and DOE plans for the deployment of the testbed technologies in their networks.

APPLICATIONS

From the users' perspective, an "application" is a task that the combination of the computer and the network enables them to perform. For example, a science teacher might use the Internet to locate information that can be used in a class, such as images stored in NASA databases, or databases containing tailored educational materials. Researchers use the Internet to track developments in their field, by exchanging information or drafts of papers and collaborating with other scientists.10 In the business world, networks are increasingly used to track inventory or manage activities throughout a large company. In the future, networks may be used to help provide medical services to distant locations.

From a network engineering perspective, an "application" is a computer program that builds on the basic network service to allow a user to perform tasks. The application program provides interaction with the user; it does not handle the details of moving a message through the network to its destination. These functions are performed by communications software—a second program running on the computer—and specialized hardware that converts the computer's digital data to the format used by the network. When an applications program wants to send information to another computer, it hands the message to the communications software, which then formats the message and sends it over the network.

There are four major Internet applications—electronic mail (e-mail), file transfer, remote login, and news. Electronic mail is used to send messages to other users of the Internet, and for most users it is probably the application they use the most frequently. File transfer (File Transfer Protocol or FTP) is used to retrieve a "file" from another computer; a file could be a computer program, an article, or information from a commercial database. "Remote login" (Telnet) is used to control a distant computer; this is the application used to access a supercomputer or one of the other specialized computing resources on the Internet. "News" is a kind of bulletin board or discussion group—thousands of "newsgroups" address a wide range of different topics.

The current Internet applications are difficult to use. For example, it is difficult to find information resources on the network. First, the user has to know that the information exists somewhere reachable on the network, then where to find it, and, having found the database, how to locate the information in the database. A number of new

10 For an overview of the wide range of uses for the Internet, see Daniel P. Dern, "Applying the Internet," Byte, February 1992, p. 111.

 


applications assist this process by acting as indexes or catalogues. Second, the user interface for most applications is often difficult to use, requiring a user to recall obscure commands. The difficulty in use is partly due to the Internet's heritage as an experimental network used mainly by scientists and engineers who were comfortable with arcane computer languages.

The existing Internet applications programs are beginning to be replaced by more sophisticated versions.11 Today, for example, the Internet file transfer program, FTP, is used to retrieve a file from a distant computer, but a different program is used to retrieve a file stored on the "home" computer. Newer versions of these applications are "transparent," so that the user will not know whether a file is located on a distant computer, or that a program is executed on a different machine. These new applications are the beginnings of a foundation for "distributed computing," in which the computers on a network form an integrated system that performs as a single computer.

Applications and Network Technology

Some limitations of current applications are due to the applications software itself, but other limitations are due to the underlying network technology. One problem with current network technology is a shortage of bandwidth. Bandwidth is a measure of the amount of data that can be moved through the network in a given period of time, and is typically specified in terms of "bits per second." Because of the limited capacity of today's network, it is often impractical to move large amounts of data across the network—examples of large files are images (see box 2-A) and the data sets used in supercomputer applications.

A second limitation of current Internet technology is that it is best suited for applications that handle text or numerical data. The Internet is less effective when supporting applications that make use of "real-time" media such as video and sound.12 In the case of video, this is due in part to the bandwidth limitation—high-quality video needs to move large amounts of data, and the necessary bandwidth is not available throughout the Internet. Support for video and sound is also limited because the performance of the Internet is highly variable. Because video creates the illusion of motion by sending a "stream" of pictures at regular intervals, a longer delay in the time it takes one of the pictures to get through the network interrupts the video information that is being displayed on the user's computer.13 A new technology called "fast packet switching," discussed in detail in chapter 3, may provide the more consistent network performance that video applications need. Digital transmission and high bandwidth alone are not always sufficient to enable a network to carry video.

The limited capacity of the current Internet and the variability of its performance also constrain the use of sophisticated "distributed computing" applications. In distributed computing, one is able to treat the computers on a network as a single, more powerful computer. For example, two computers, exchanging data through the network as necessary, might be able to complete a computation in half the time needed by one computer working alone. If data takes too long to travel between the computers, however, the advantages of dividing a computation among several computers are lost. In the current Internet, the local area network (LAN) technology used in campus networks often performs better than wide area network (WAN) technology used in the

11 For example, "distributed file systems" are beginning to replace the traditional File Transfer Protocol (FTP) application.
12 Jeffrey Schwartz, "A Push for Packet Video," CommunicationsWeek, Aug. 3, 1992, p. 1.
13 The problem is being addressed in a number of ways. New network architectures, described in chapter 3, try to reduce the variation in network performance. Other researchers are investigating mechanisms that would compensate for the variable performance. For example, the receiving computer could "even out" some of the variation before the data is displayed to the user.


Box 2-A—Images and Video

Images

The screen of a computer's display is made up of many individual picture elements or "pixels," like the little dots that can be seen on television screens. By displaying each pixel with a different shade and different color, the computer forms an image on the screen. The greater the density of pixels, the higher the "resolution" of the image. The displays used for ordinary desktop computers usually have a few hundred pixels in both the horizontal and vertical directions, while a high-definition television display would have about 1,000 pixels vertically and about 2,000 horizontally. Even higher resolution displays are being developed for specialized medical, publishing, and defense-related applications.

The use of high-resolution images places considerable demands on computers and networks. Typically, each pixel on a screen is represented by 24 bits. A high-resolution display with 2,000 pixels horizontally and 2,000 pixels vertically has 4 million pixels (2,000 x 2,000 = 4,000,000). This means that 96 million bits are needed to represent the image (4 million x 24 = 96 million). In the telephone network, voice conversations are sent through links that transmit 64 thousand bits per second. Using these links, an image represented by 96 million bits would take 25 minutes to send through the network. By contrast, it would take less than one-tenth of a second to send the same image through a gigabit network.

Video

Video is a series of images, sent many times a second at regular intervals in order to create the illusion of motion. Typically, 30 or 60 images are sent every second. In a low bandwidth network, in order to send this many images every second, the images have to be of very low resolution.

Two strategies have been adopted for accommodating image and video transport in networks. The first is to use compression techniques that reduce the number of bits needed for each image. Often, some parts of a scene do not have to be shown in great detail. Compression schemes for videotelephones sometimes rely on the fact that users are only interested in the "talking head," not the background. Sometimes little changes from one image to the next (if there is no movement in the scene), in which case the image data does not need to be sent again. These techniques are being applied to the new high-definition television systems that are being studied by the Federal Communications Commission for selection as a U.S. standard. An uncompressed high-definition television signal that sends 30 images or "frames" every second, with a resolution of 1,000 pixels vertically and 2,000 pixels horizontally, needs about 1.5 gigabits per second. By contrast, new compression algorithms support high-definition television at bandwidths of 30 Mb/s or less, one-fiftieth the bandwidth required for the uncompressed signal.

The second strategy for accommodating video or images is to increase network capacity. Fiber optic technology can transport many more bits every second than the "twisted pair" copper wires that are used for today's telephone service. This background paper outlines some of the research being done on very high-capacity networks that can carry high-resolution video and images. However, even a "gigabit network" is not sufficient for certain kinds of very high-resolution video, and compression techniques might still be used.

SOURCES: Peng H. Ang et al., "Video Compression Makes Big Gains," IEEE Spectrum, vol. 28, No. 10, October 1991, pp. 15-19; Bernard Cole, "The Technology Framework," IEEE Spectrum, vol. 30, No. 3, March 1993, pp. 32-39; J. Bryan Lyles and Daniel C. Swinehart, "The Emerging Gigabit Environment and the Role of Local ATM," IEEE Communications, vol. 30, No. 4, April 1992, pp. 52-58.
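The arithmetic in box 2-A can be restated as a short calculation. The sketch below simply recomputes the figures given in the box (a 2,000 by 2,000 pixel image at 24 bits per pixel, a 64 kb/s voice channel, and a gigabit link); it is purely illustrative and introduces no new data.

```python
# Illustrative restatement of the arithmetic in box 2-A.
# All figures are taken from the box; nothing here is new data.

pixels = 2000 * 2000                  # high-resolution display: 4 million pixels
bits_per_pixel = 24
image_bits = pixels * bits_per_pixel  # 96 million bits per image

voice_channel_bps = 64_000            # telephone voice channel, 64 kb/s
gigabit_bps = 1_000_000_000           # gigabit network link

print(image_bits / voice_channel_bps / 60)  # about 25 minutes over a voice channel
print(image_bits / gigabit_bps)             # about 0.1 second over a gigabit link

# Uncompressed high-definition video: 30 frames per second at 1,000 x 2,000 pixels.
hdtv_bps = 30 * (1000 * 2000) * bits_per_pixel
print(hdtv_bps / 1e9)                       # roughly 1.4 Gb/s, close to the 1.5 Gb/s cited
```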

 


Box 2-B—Massively Parallel Computers

The conventional computers found on most desktops use a single processor. Programs for these computers consist of a list of instructions, to be executed one after another by the processor. Parallel computers are based on the idea that a computer with several processors can solve a problem more quickly than a computer with a single processor. Much of the HPCC Program's supercomputer design research focuses on the development of "massively parallel" computers with thousands of processors.

Supercomputers are expensive, high-performance machines that have been used mainly for numerical simulations in science and engineering. The first commercially important supercomputer, the CRAY-1, was first sold in 1976. It used a single processor, and achieved its high performance by careful attention to processor design and the use of specialized electronics. Over the next decade, supercomputer designers followed this basic model, trying to achieve the highest possible performance with a single processor.

By the mid-1980s, however, it became increasingly difficult to squeeze better performance out of traditional supercomputer designs, even as more exotic technologies were applied to the task. As a result, supercomputer designers began trying a different route to improved performance: the use of several processors. One approach involved a relatively small number of traditional high-performance supercomputer processors. For example, in 1983, Cray shipped a supercomputer that used four processors to speed up performance. By contrast, the massively parallel approach to supercomputer design uses hundreds or thousands of low-cost microprocessors (processors that fit on a single semiconductor chip). The greater the number of processors, the more powerful the computer. In many cases, the microprocessors are the same as those used in high-end workstations. The performance of microprocessors increases every year, creating the potential for even more powerful massively parallel supercomputers. Supercomputer centers and Federal laboratories have purchased several massively parallel supercomputers and are exploring their use in a number of applications.

A major challenge for users of massively parallel supercomputers lies in the area of software. Massively parallel computers have to be programmed in new ways, because programs can no longer be thought of as a simple list of instructions. New algorithms, efficient ways of solving numerical problems, will have to be developed. Research on algorithms and software tools that take advantage of the potential of massively parallel supercomputers is one focus of the HPCC program.

SOURCES: Glenn Zorpette, ed., "Special Report: Supercomputers," IEEE Spectrum, vol. 29, No. 9, September 1992, pp. 26-41; Office of Science and Technology Policy, "Grand Challenges 1993: High Performance Computing and Communications," 1992, pp. 13-17; Carl S. Ledbetter, "A Historical Perspective of Scientific Computing in Japan and the United States," Supercomputing Review, vol. 3, No. 12, December 1990, pp. 48-58.

put a short code in the header to tell the receiving computer that the data belongs to an electronic mail message—this allows the receiving computer to process the data appropriately after receiving the packet. Once the packets have been formatted they are sent out of the computer and through the network's web of links and switches.

Switches receive packets coming in on one link and send them out on the next link in the path to their destination (figure 2-7). When a packet arrives at a switch, the switch scans the destination address and determines which link the packet should transit next. The Internet packet switches or "routers" are special computers that have been provided with connections to a number of links and programmed to carry out the switching functions. The software in the routers and the users' computers implement "protocols," the rules that determine the format of the packets and the actions taken by the routers and networked computers. The Internet protocols are often referred to as TCP/IP (the acronyms refer to the two

 


Figure 2-6—Packet
The figure shows a packet made up of a header ("To: Computer #—; From: Computer #—"), the application's data, and a trailer.
A packet is a block of digital data, consisting of data from the user's application and extra information used by the network or receiving computer to process the packet. For example, the "header" might contain the "address" of the destination computer. A real packet would be several thousand bits long. SOURCE: Office of Technology Assessment, 1993.
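The packet structure sketched in figure 2-6 can be illustrated with a small data structure. The field names and sizes below are invented for the illustration and do not correspond to any actual Internet protocol format.

```python
# Illustrative sketch of a packet along the lines of figure 2-6.
# Field names and sizes are invented; this is not a real protocol format.

from dataclasses import dataclass

@dataclass
class Packet:
    destination: int   # "To: Computer #" address carried in the header
    source: int        # "From: Computer #" address carried in the header
    type_code: str     # short code telling the receiver how to handle the data (e.g., "mail")
    data: bytes        # the application's data

    def size_in_bits(self) -> int:
        # assume a fixed 96-bit header: two 32-bit addresses plus a 32-bit type code
        return 96 + 8 * len(self.data)

packet = Packet(destination=12, source=7, type_code="mail", data=b"Hello from an application")
print(packet.size_in_bits())   # a real packet would typically be several thousand bits long
```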

most important Internet protocols, the Transmission Control Protocol and the Internet Protocol.) Special protocols called "routing protocols" are used by the routers to keep a current map of the Internet and to determine the best path to a destination computer—for example, to choose a path that avoids heavily loaded networks.

One of the most important characteristics of the Internet is that the thousands of linked networks are independently operated; there is no central control of the Internet. However, by sharing the Internet protocols, the networks are able to exchange traffic. One of the functions of the Internet protocols is to mask differences in the technology used by the networks that make up the Internet. The campus networks' local area network technology differs from the wide area network technology used in the regional and national backbone networks, and there are many different local area network standards. The term "Internet" is short for "internetworking," the practice of linking technologically different and independently operated networks.

The future of the current Internet protocols is the subject of considerable debate in the Internet community. The most significant problem is that today's routing technologies are being strained by rapid growth in the number of connected networks and users.18 The management of a complex and growing network has been one of the major challenges faced by the current NSFNET. A number of different proposals that would simplify the routers' task of finding paths through today's more complex Internet are being considered. The effect of increases in bandwidth on TCP/IP has also been debated in the technical community, and new protocols have been proposed. Many now believe that TCP/IP can continue to provide good service over gigabit networks, but internetworking in high bandwidth networks is a research topic in itself.

NETWORK COMPONENTS

A network is a complex system, consisting of many computer programs and hardware components such as links, computers, and switches. The overall performance of the network depends on how well these components work together. There are a number of potential bottlenecks—the rate at which data can be transferred from the computer's memory to the network, the rate at which data can be transmitted through the links, and the amount of time the switches need to decide where to send data next. Simply removing one of these bottlenecks does not guarantee that the overall performance of the network will improve. The emergence of fiber optics has removed the links as a bottleneck for the foreseeable future; the research projects described in chapter 4 show that this has exposed research issues in other parts of the network.

18 CommunicationsWeek International, Aug. 10, 1992, p. 1.
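The path computation performed by routing protocols can be suggested with a standard shortest-path calculation over a small, invented network map in which link costs might stand for load; this is a generic illustration, not the algorithm used by any particular Internet routing protocol.

```python
# Generic illustration of computing least-cost paths over a network map.
# The topology and link costs are invented; real routing protocols differ in detail.

import heapq

topology = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 1), ("D", 5)],
    "C": [("A", 4), ("B", 1), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}

def least_cost_paths(source):
    """Dijkstra's algorithm: cost of the cheapest path from source to every node."""
    cost = {source: 0}
    queue = [(0, source)]
    while queue:
        c, node = heapq.heappop(queue)
        if c > cost.get(node, float("inf")):
            continue                      # stale queue entry, already found a cheaper path
        for neighbor, link_cost in topology[node]:
            new_cost = c + link_cost
            if new_cost < cost.get(neighbor, float("inf")):
                cost[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return cost

print(least_cost_paths("A")["D"])  # 3: the path A-B-C-D avoids the more costly direct links
```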

 


Figure 2-7—Packet Switching
(a) Packet-switched communication. As a packet travels through the network, the switches decide where to send the packet next.
(b) The links in a packet network are shared by several users. Network designers choose the link capacity or bandwidth to match the expected amount of traffic.
SOURCE: Office of Technology Assessment, 1993.
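The per-packet decision shown in figure 2-7 amounts, in the simplest case, to a table lookup in each switch. A minimal sketch follows; the forwarding table and addresses are invented, and real routers use more elaborate lookup procedures.

```python
# Minimal sketch of a packet switch choosing an outgoing link for each packet.
# The forwarding table and the addresses are invented for illustration.

forwarding_table = {
    "computer-17": "link-2",
    "computer-42": "link-3",
}

def forward(packet):
    """Scan the packet's destination address and pick the next link in the path."""
    next_link = forwarding_table.get(packet["destination"], "default-link")
    print(f"packet for {packet['destination']} forwarded on {next_link}")

forward({"destination": "computer-17", "data": b"..."})  # forwarded on link-2
forward({"destination": "computer-99", "data": b"..."})  # unknown destination, default-link
```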

Computers

Many different kinds of computers are attached to the Internet, ranging from desktop personal computers costing a few hundred dollars to supercomputers that cost millions of dollars. Among scientists and engineers, the type of computer that is most widely used is the "workstation," a powerful desktop computer with enough processing power to support graphical user interfaces and high-resolution displays. For most of today's applications, almost any computer has enough processing power to attach to the Internet. The low bandwidth of the current Internet places few demands on computers for handling the communications functions, leaving much of the processing power free to run the applications.

 


One of the reasons for the creation of the NSFNET backbone was to provide access to NSF's four supercomputer centers. Recently, these supercomputer centers have begun to install "massively parallel" supercomputers. This new type of supercomputer attempts to achieve very high processing speeds by combining the processing power of thousands of smaller processors. Other supercomputers use a more traditional design, and are referred to as "vector" supercomputers. Each design may work best with certain kinds of computations; one of the objectives of the gigabit testbed research is to explore the use of networks to divide up problems in a way that takes advantage of the strengths of both vector and massively parallel supercomputers.19

Links

The digital links in computer networks usually use copper or fiber, but satellite and microwave links are also used. At each end of the copper or fiber is the transmission equipment, electronics that convert data into the optical or electrical signals that travel through the network. The capacity of the wires or strands of fiber depends on the characteristics of the material used and on the capabilities of the transmission equipment. Today's Internet uses both low bandwidth links that operate over copper at a few thousand bits per second (kilobits per second or kb/s), and high bandwidth links that operate over fiber with a data rate of about 45 million bits per second (megabits per second or Mb/s). The test networks described in chapter 4 will use links that operate at a rate of one billion bits per second (a gigabit per second or Gb/s).

Typically, a single wire or strand of fiber carries many links at the same time. Through a process called "multiplexing," several low-bandwidth links can be aggregated into a higher bandwidth link. Gigabit-capacity fiber, for example, can be used either to carry several thousand low-bandwidth links used for telephone calls, or a single high-bandwidth link needed for a gigabit network.

The required link bandwidth depends on both the bandwidth requirement of each user and on the number of users sharing the link. One of the main reasons for upgrading the links in the NSFNET backbone from 1.5 Mb/s to 45 Mb/s in 1991 was to accommodate growth in the number of users. Growth in the use of routine applications can also be supported by simply adding more low-bandwidth links. However, new applications that need very large amounts of bandwidth to themselves require the deployment of higher bandwidth links. By increasing the link bandwidth to gigabit rates, the gigabit NREN will be able to support new classes of advanced applications, not just growth in the number of users.

Operators of wide area computer networks, such as the regional networks and the agency backbones, typically lease their links from the telephone companies. The telephone companies have already obtained the rights-of-way and have installed the transmission facilities for use in their core business, voice telephone service. Because of the reliance on telephone company facilities, discussions of computer network link bandwidth often use telecommunications industry designations of link capacity. For example, the current NSFNET backbone is often referred to as a "T3" network, after the industry designation of 45 Mb/s links. "T1" links, which operate at 1.5 Mb/s, are used in the current Department of Energy and NASA networks and in the regional networks. As the Federal networks are upgraded to bandwidths above the 45 Mb/s T3 rate, they will use a new family of transmission standards designed for high-capacity fiber optic links, called Synchronous Optical Network (SONET) (see table 2-1).

Universities and corporations install their own links in their buildings for use in local area networks. Local area networks can provide users with higher bandwidth than wide area networks—

19 IEEE Spectrum, vol. 28, No. 6, June 1991, p. 18.

 


Figure 2-8—Access Link
SOURCE: Office of Technology Assessment, 1993.

Table 2-1—Transmission Rates

Industry designation                    Transmission rate
DS0 . . . . . . . . . . . . . . . . . . 64 kb/s
T1 . . . . . . . . . . . . . . . . . .  1.5 Mb/s
T3 . . . . . . . . . . . . . . . . . .  45 Mb/s
SONET OC-3 . . . . . . . . . . . . . .  155 Mb/s
SONET OC-12 . . . . . . . . . . . . . . 622 Mb/s
SONET OC-48 . . . . . . . . . . . . . . 2.4 Gb/s

SOURCE: Office of Technology Assessment, 1993.

this is due in large part to the high cost of high bandwidth wide area links. Because of the higher bandwidth available on local area networks, they have been used for experimentation with high-bandwidth distributed computing and video applications. In the future, however, users will want wide area networks that match the performance of local area networks; one of the objectives of the testbed project outlined in chapter 4 is to investigate high-speed wide area networking.

When campus networks arrange to be connected to the closest regional or national network, they obtain an "access" link (figure 2-8). This is usually leased from the telephone company, just as the links inside wide area networks are leased from the telephone company. The cost of the Internet service depends on the access bandwidth; high bandwidth access is extremely expensive. It is common to find local area networks operating at 10 Mb/s or 100 Mb/s, while the access link to the rest of the Internet operates at 56 kb/s or less (some organizations have 1.5 Mb/s access links, but these are considerably more expensive). Most individuals, schools, and small businesses are required to use their ordinary analog telephone line to access Internet services—a device called a "modem" is needed to send digital computer data over these lines, usually at 14.4 kb/s or less.

Switches

Packet switches in the Internet, also known as routers, direct packets to the next link in the path to their destination. Packet switched networks emerged to handle data communications, services not well supported by the "circuit switches" used for ordinary telephone calls (figure 2-9). Packet networks are more efficient for typical computer communications traffic—short transactions or "bursts" separated by periods of no traffic (box 2-C). In a packet network, several users share the same link—during the period in which one group of users is not using the link, other users can send their packets. In a circuit switched network, by contrast, each communication gets its own link. For this reason, circuit switches are most efficient when a communication involves a relatively long, steady stream of data such as video or voice.

While the Internet networks use telephone company links, the packet switches are usually not operated by the telephone companies. Instead, a second organization plans the network and installs the packet switches at the sites it has chosen—the involvement of the telephone company is usually limited to providing the links between the sites. From the perspective of the

 


Figure 2-9—Circuit Switching
(a) Telephone network. In the telephone network, circuit switches are interconnected by several links. No communication can take place until a "circuit" is established.
(b) Circuit-switched call. First, a number is dialled by one of the users. The network then checks to make sure that there are unused links in the path between the two users. If there are unused links, the switches establish connections between each of the links in the path, thereby creating a "circuit." A circuit has to be established for each pair of users. Network designers try to ensure that the number of available links matches the expected level of usage.
SOURCE: Office of Technology Assessment, 1993.

 


telephone company, the computer network traffic is just "bits" traveling over its links—the telephone company's equipment does not make decisions about where to send the packets. Beginning in the mid-1970s, the telephone companies began installing some packet switches in their networks in order to support the growing data communications market, but their efforts to enter this market were considered to be unsuccessful.

The processing power required of a packet switch depends on the link bandwidth and the complexity of the network. As the link bandwidth increases, switches must be able to process packets more quickly. The processing power needed will also increase as the network gets larger and more complex, because it becomes more difficult to determine the best path through the network. Currently, the NSFNET backbone's router technology does not allow the use of applications that need more than 22.5 Mb/s, half the potential maximum of a 45 Mb/s T3 network.20 This shows how the overall performance of the network depends on many different components; increasing the link bandwidth is not the only requirement for an advanced network.

THE INTERNET AND THE PUBLIC SWITCHED NETWORK

In some ways, the Internet and the "public switched network" that is operated by common carrier telephone companies are separate. They differ in the services they provide—the telephone network mainly provides ordinary voice communications services, while the Internet provides data communications services such as electronic mail and access to remote computers. They also differ as to the communities that they serve—almost everyone has a telephone, while the Internet and other computer networks primarily serve users in the academic community or in industry. Finally, they differ in their network technology—the Internet is a packet-switched network, while the telephone network is a circuit-switched network. However, the Internet and the telephone network are related in a number of ways. Any discussion of the evolution of networking has to consider both the traditional telecommunications companies and the Internet community.

First, the Internet and the public switched network are related in that the links in wide-area computer networks are usually supplied by the telephone companies—computer network operators do not usually put their own fiber in the ground. As a result, the availability of new computer network capabilities can depend on the extent to which the telephone companies deploy advanced transmission facilities, and on the cost of leasing the links. The availability of advanced transmission facilities varies,21 depending on whether a computer network will operate over the telephone network's "interoffice" or "local loop" segments. Most of the links required for a wide area network such as the NSFNET backbone operate over the interoffice core of the telephone network, which has largely been upgraded to optical fiber and digital transmission. The telephone companies upgraded this part of their networks in part to achieve operational savings, even when delivering existing services. However, access links, such as those between a campus and a regional network, need to use local loop facilities. For the most part, this segment of the telephone network still consists of copper, analog lines. Large users are able to avoid this bottleneck by making special arrangements with the local exchange carrier for higher bandwidth digital lines. However, individuals, schools, and small businesses generally have to

20 Alan Baratz, IBM, personal communication, Feb. 3, 1993.
21 National Telecommunications and Information Administration, Department of Commerce, "Telecommunications in the Age of Information," October 1991, pp. 97-109.

 


Box 2-C—Packet Switching and Circuit Switching

Computer networks such as the Internet use packet switches, which direct packets from link to link through a network. Today's telephone network, by contrast, uses circuit switches. Each type of switching technology works best with different kinds of communications. Packet switching is more efficient for the transfer of typical computer communications traffic such as files of text or numerical data (figure 2-C-1). Circuit switching, on the other hand, can provide the consistent performance needed by video or voice traffic (figure 2-C-2). One of the objectives of the research described in chapter 3 and chapter 4 is to develop switches that combine the efficiency and flexibility of packet switching with the consistent performance of circuit switching.

Figure 2-C-1—Packet Switching More Efficient for Data
(a) Data communications does not use circuits fully. Circuit switching can be used for computer communications. Here, circuits have been set up between two pairs of computers. However, computer communications often have a "bursty" character—periods in which data is sent followed by periods of "silence." When no data is sent, the circuit's capacity goes unused. The capacity is used more efficiently when the communications involve a steady flow of information, such as video or voice transmission.
(b) Link sharing makes packet networks more efficient for data. In a packet-switched network, several users' traffic shares the same link. If one user is not using the link's capacity, it can be used by others. The figure shows bursts of data assembled into packets and travelling through the network on the same link. Here, one link's capacity is sufficient to handle communications between both pairs of users, freeing the second link for other uses.
SOURCE: Office of Technology Assessment, 1993.
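The efficiency argument made in box 2-C and figure 2-C-1 can be put in rough numbers. The sketch below uses invented traffic figures to show how bursty users leave a dedicated circuit mostly idle, how several such users can share one packet-switched link, and the queueing delay that sharing can introduce.

```python
# Rough numerical illustration of box 2-C; all traffic figures are invented.

link_bps = 1_500_000          # a 1.5 Mb/s link
burst_bits = 150_000          # each user sends a 150-kilobit burst of data...
bursts_per_second = 2         # ...about twice per second, and is otherwise silent

average_demand = burst_bits * bursts_per_second     # 300 kb/s per user on average
print(average_demand / link_bps)                    # 0.2: a dedicated circuit sits 80 percent idle

users_per_shared_link = link_bps // average_demand
print(users_per_shared_link)                        # about 5 such bursty users can share one link

# The price of sharing: if two bursts reach a switch at the same moment and need
# the same outgoing link, the second must queue behind the first.
send_time = burst_bits / link_bps                   # 0.1 second to transmit one burst
print(send_time)                                    # first packet delayed 0.1 s
print(2 * send_time)                                # queued packet delayed 0.2 s
```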

use the combination of a modem and their telephone line to access computer networks. The bandwidth of such an arrangement is relatively low—only a few kb/s—and is clearly a bottleneck that limits widespread use of sophisticated services.

The telephone network and computer networks are also related in the sense that the traditional telecommunications providers are beginning to offer data communications services, including Internet services. In the past, efforts by the industry to enter this market have not been successful. This has been attributed to a "culture clash"—a lack of understanding of computer network technology and of the needs of users of computer networks. However, the telephone

 


Figure 2-C-2—Circuit Switching Better for Voice or Video
(a) Variable performance due to packet network link sharing. If two packets arrive at a switch at the same time and need to use the same outgoing link (i), one of the packets will have to wait (ii). It is difficult for a user to know in advance what the network performance will be. The packet may experience no delay (the dark gray packet), or it may have to wait at each switch (the light gray packet). This variation in delay has limited the use of packet networks for time-sensitive communications such as video or voice.
(b) Circuit switched performance is predictable. In a circuit-switched network, each communication has its own circuit. Users' information travels through the network without being affected by the characteristics of other communications (i)-(ii). The time needed for information to travel through the network will always be the same.
SOURCE: Office of Technology Assessment, 1993.

companies hope to play a more active role in this market. The telephone companies have two main competitors in this venture. First, there are already a number of commercial providers of Internet services and other data communications services. These providers lease lines from the telephone companies, install packet switches, and operate their network without any further involvement from the telephone companies, sharing their network's capacity among different groups of users for a fee. The current T3 NSFNET backbone is obtained as a service from one of these commercial Internet providers. Second, many users choose to operate "private networks"—they build a network of their own

 


using leased lines and bypass the public network. Most corporations use this strategy to interconnect local area networks at different sites within their organization. Equipment used in private networks is provided by computer companies and others, who have taken advantage of the telephone companies' lack of success in providing data communications services. United States firms that specialize in the development of routers and other equipment for private networks are world leaders and are among today's fastest growing companies.22

The telephone companies have introduced a number of new packet-switched services that are intended to encourage users to abandon their private networks.23 One of these services is called Switched Multimegabit Data Service (SMDS);24 another is called Frame Relay. The SMDS and Frame Relay switches do not understand the Internet protocols, but they can still be used to carry Internet traffic. The Internet packets are "encapsulated," or put inside an SMDS or Frame Relay "envelope," and sent through the network; at the other end the Internet packet is extracted and delivered to the computer. The carriers view SMDS and Frame Relay as transitional steps to a new technology called Asynchronous Transfer Mode (ATM), described in chapter 3. They can potentially be used to provide data communications services up to 45 Mb/s.

Because of the interrelationship between the Internet and the public switched network, the evolution of the Internet is affected by two different sets of standards committees. The telecommunications industry standards affect mainly low level issues, such as transmission standards, but some of the standards for new telecommunications industry packet switched services may play a role as well. The most important international standards group is the CCITT (International Telegraph and Telephone Consultative Committee). The CCITT is a technical committee of the International Telecommunications Union (ITU), a specialized agency of the United Nations that is headquartered in Geneva.25 United States telecommunications standards are the responsibility of Committee T1, which is accredited by the American National Standards Institute (ANSI) and sponsored by the Exchange Carriers Standards Association (ECSA).26 Telecommunications industry standards setting has often been criticized as excessively bureaucratic and slow.

By contrast, the Internet standards community, which addresses higher level issues related to routing, the TCP/IP protocols, and applications, is more informal. Much of the work is done by electronic mail, and there is a greater emphasis on proving that something works before it is standardized.27 The two groups responsible for Internet standards are the Internet Engineering Task Force (IETF) and the Internet Activities Board (IAB). The IETF has a number of different working groups, each looking at a different aspect of the Internet's operation.
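The "encapsulation" of Internet packets inside an SMDS or Frame Relay "envelope," described above, can be sketched in a few lines. The envelope layout below is invented purely for illustration; it is not the actual SMDS or Frame Relay frame format.

```python
# Illustrative sketch of encapsulation: an Internet packet rides inside a carrier
# "envelope" and is extracted unchanged at the far edge of the carrier's network.
# The envelope layout here is invented; it is not the real SMDS or Frame Relay format.

def encapsulate(internet_packet: bytes, connection_id: int) -> bytes:
    """Wrap the packet in a simple envelope; the carrier never looks inside the payload."""
    header = connection_id.to_bytes(2, "big") + len(internet_packet).to_bytes(2, "big")
    return header + internet_packet

def extract(envelope: bytes) -> bytes:
    """Recover the original Internet packet at the receiving end."""
    length = int.from_bytes(envelope[2:4], "big")
    return envelope[4:4 + length]

packet = b"original TCP/IP packet bytes"
frame = encapsulate(packet, connection_id=17)
assert extract(frame) == packet   # the Internet packet is delivered unchanged
```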

22 G. Pascal Zachary, "U.S. High-Tech Firms Have Begun Staging Little-Noticed Revival," Wall Street Journal, Dec. 14, 1992, p. 1; G. Pascal Zachary and Stephen Kreider Yoder, "Computer Industry Divides Into Camps of Winners and Losers," Wall Street Journal, Jan. 27, 1993, p. 1; Alan Deutschman, "America's Fastest-Growing Companies," Fortune, vol. 126, No. 7, Oct. 5, 1992, p. 58.
23 A target market for these services is the smaller companies that cannot currently use a private network.
24 Tim Wilson, "Local Carriers Lay Out Data Service Agendas," CommunicationsWeek, May 25, 1992.
25 G.A. Codding and A.M. Rutkowski, The International Telecommunications Union in a Changing World (Dedham, MA: Artech House, 1982).
26 I.M. Lifchus, "Standards Committee T1—Telecommunications," IEEE Communications, vol. 23, No. 1, January 1985, pp. 34-37.
27 Carl Malamud, Stacks: Interoperability in Today's Computer Networks (Englewood Cliffs, NJ: Prentice Hall, 1992), p. 223.

Broadband Network Technology   3

Advances in computer technology are driving the requirements for broadband networks. Because of increases in the processing power of computers, there is a need for higher bandwidth networks. Computers are increasingly able to execute "multimedia" applications, so it is expected that future networks must be able to carry several kinds of traffic. Broadband networks will lead to applications that are used for a wider range of problems, with more emphasis on image-based communications.

The computer and telecommunications industries have conceived broadband network designs for these requirements. Fiber optic links are a key component of these networks. However, replacing the smaller capacity links in current networks with higher bandwidth fiber optic links is not all that is needed: improvements in protocol and switch design must also follow. Future switches will have more processing power, in order to keep pace with the faster flow of traffic through the links. They will also be designed in a way that allows them to handle different types of traffic. Today's switching technologies do not have this capability—packet switches only handle text and numerical data efficiently, the telephone network's circuit switches are best suited to voice traffic, and special networks are needed for video. The "integrated services" concept envisions networks that use the same links and switches for all types of traffic, instead of different technologies for video, data, and voice.

BROADBAND APPLICATIONS

The new high bandwidth integrated services networks would improve the performance of existing applications and enable new applications. Existing applications, such as electronic mail or

 


Broadband networks use new switch technologies.

databases, could be augmented through the use of image files and video clips; higher bandwidth networks would also allow the faster transfer of large files of supercomputer data. Support for real-time high-resolution video would expand possibilities further, allowing videoconferencing or the display of output from a scientific instrument, such as a telescope. More generally, the combination of more powerful computers and integrated services networks will permit wider use of two new categories of applications—multimedia applications and distributed computing.

Multimedia Applications

Multimedia applications take advantage of the capability of high-bandwidth integrated services networks to deliver many different kinds of data—video, image, audio, and text and numerical data. They also take advantage of the processing power of advanced workstations and other devices attached to the network, allowing users to edit, process, and select data arriving from a variety of sources over the network.1 Multimedia applications have been suggested for a large number of areas,2 including education and health care. There are many different concepts for delivering multimedia services to the home, such as multimedia catalogues for home shopping, information services, entertainment video, and videotelephone services. Many segments of both service and manufacturing industries are increasingly using image-based applications—for example, computers are widely used in the publishing and advertising industries to compose pages using high-resolution images.

Multimedia is also the foundation for a new category of applications that use the combination of computing and communications to create a "collaborative" work environment in which users at a number of scattered sites are able to work together on the same project.3 For example, an application might allow several researchers to work on the same set of experimental data at the same time—any processing done by one researcher would automatically be shown on the other researchers' displays. Videoconferencing and collaborative applications might allow closer interaction between researchers in different places. It is expected, for example, that the teams working on the Grand Challenges will include scientists at many locations.

For researchers, "visualization" provides a way to represent large amounts of data in a more understandable form; it uses images and video to show the results of simulations or experiments (box 3-A).4 For example, the results of a simulation of a city's air quality could be shown as an image, with the concentration of a particular chemical indicated by different colors and color intensity. If a researcher wanted to review the evolution of the air quality over time, a series of images could be used to create a video segment showing the change in pollutant concentration. Other programs running on the workstation could be used to process the data further, perhaps by examining one part of an image more closely or by comparing the simulation data to experimental data.

In education, multimedia could be used in computer-based instructional materials. Multimedia databases would give students and teachers access to image and video data. Videoconferencing and collaborative applications could enable closer interaction between teachers and students at multiple locations. For example, it might

1 Special Issue: Multimedia Communications, IEEE Communications, vol. 30, No. 5, May 1992.
2 Michael L. Dertouzos, Director, MIT Laboratory for Computer Science, testimony at hearings before the Joint Economic Committee, June 12, 1992.
3 Sara A. Bly et al., "Media Spaces: Bringing People Together in a Video, Audio, and Computing Environment," Communications of the ACM, vol. 36, No. 1, January 1993.
4 Matthew Arrott and Sara Latta, "Perspectives on Visualization," IEEE Spectrum, vol. 29, No. 9, September 1992, pp. 61-65.

 


As part of the CASA testbed research described in chapter 4, a gigabit network will be used to combine data from a variety of sources, such as satellites and digital elevation models, to create three-dimensional views.
