CHAPTER 22

Testbeds: Bridges from Research to Infrastructure
Charlie Catlett and John Toole

Developing, testing, and refining a technology (or set of technologies) are the functions of a testbed. In some cases, the desired capabilities are simply expansions—bigger, faster, easier—of currently available systems. In other instances, what is envisioned goes against established thinking or cannot be accomplished by mere extensions of current technology. In either case, for building computational grids, testbeds are critical in at least three ways:

- Scale of integration: Diverse technologies must be integrated, deployed, and tested in radically new ways. The technologies described in the preceding chapters are maturing at different rates, demanding a conscious, evolutionary testbed approach to scale both the magnitude and the breadth of the experiments.

- Building communities: Distributed computing enables new communities of users and developers to form as computational resources are linked with data and people. Testbeds provide a way to accelerate the formation of mutually agreeable but strategically chosen communities. The evolution of these distributed testbeds over time will allow rapid prototyping of future visions of the grid.

- Mitigating risk: In addition to building new communities and users on a rapidly changing technology base, we also face the challenge of quantifying and qualifying the evolutionary results in ways that help new users understand the wealth of new opportunities and the corresponding risks. Consequently, testbeds must support measurements and carefully chosen goals, while also providing incentives for new opportunities that may be discovered.

This chapter examines the role, application, and development of testbeds, looking to both past and present systems to learn how testbeds can provide insight and capability as national and international computational grids evolve. Some of the testbeds that we consider have already led to significant infrastructure, while others were crucial steps along the way. Still others are part of today's landscape, and their effects are not yet known. In the process, contributions are noted from the standpoint of both technical achievement and "best practices." In the conclusion we discuss the salient characteristics of "good" testbeds and the common challenges successful testbeds overcome. In all cases, the testbeds employed a variety of the technologies discussed in preceding chapters, focused on developing new communities that advanced the field, and resulted in solid, measurable progress that has proven useful to the next generation.

22.1  INTRODUCTION: DECIBITS TO KILOBITS

In 1843, the U.S. Congress approved funding for what today might be called the "Decibit testbed" to examine the merits of a new technology: the telegraph. The following year, Congress was treated to a demonstration of this technology in the form of an electronic message sent over the testbed from Baltimore to Washington, D.C. [217]. For several decades prior to this, inventors in Europe and in the United States had been working on the technology (Samuel Morse applied for a patent in 1837). By 1843 it appeared that the technology would in fact work, but it was not clear how the technology would scale in distance or complexity, and it was even less clear what the applications of this technology would be, or how useful those applications might prove.

About 125 years later, U.S. federal funding was approved for another experiment in long-distance communications: the ARPANET. In 1972, Washington, D.C., was the venue for the ARPANET's first major demonstration as well. The network was extended into the conference hotel of the International Conference on Computers and Communications (ICCC) to show how ARPANET could support remote computer access, file transfer, and remote control of peripheral devices (from printers to robots).

In both of these "testbed" examples, the technology being examined ran somewhat crosswise to current practice or state of the art. The telegraph came at a time when long-distance communication was done by moving paper,
with delays ranging from days to months. It was unclear what benefits would emerge if communication over distances took place in minutes or hours. Further, the existing communications industry, such as it was (essentially the U.S. postal service and private pony express enterprises such as Wells Fargo), would be threatened by this new technology. Similarly, ARPANET and the research behind it advocated two ideas that did not fit with established practice or current thinking: packet-switched networks and the use of computers as communication devices to augment human interaction. The notion of a packet-switched network was deemed infeasible by the existing communications industry, whose infrastructure model was based on circuit switching. The use of computers as communications devices to assist human collaboration [343] did not mesh with the view of the computer industry, which saw the computer as an arithmetic calculation device.

Both the telegraph and the ARPANET eventually led to global infrastructure. The telegraph and its follow-on, the telephone, resulted in the telecommunications infrastructure we use in everyday life today. The ARPANET led to today's Internet, an infrastructure that is rapidly approaching the same scale of ubiquity.

These two "testbeds"—the early telegraph trial and the ARPANET—also provide several lessons regarding the transition of research into viable infrastructure. Both proposed models that were not necessarily consistent with current practice and were generally considered impractical or outlandish. Both put in place, at great expense, facilities whose application (much less benefit) was as yet unknown and certainly unproven. Both involved some combination of stable infrastructure beneath experimental devices and algorithms. In the case of the telegraph, the device was somewhat experimental: while the stringing of iron wires was a common practice (though generally the wires were used for fencing, not telegraphy), the telegraph's encoding system (Morse code) was essentially a new protocol. In the case of the ARPANET, the leased telephone circuits and Honeywell minicomputers (used as IMPs) were current infrastructure, while the software, interface devices, applications, and protocols were new and unproven. In fact, at the start of the project only a proposed design for the IMP systems existed, and there were no proposals, much less designs, in place for protocols, interfaces, or applications.

22.1.1  Testbeds

What is a testbed? In one sense, any project experimenting with new capabilities is a testbed. Generally speaking, the collection of users trying out a new software application program is a testbed aimed at determining the utility of such a program. Here we will use the term to describe a broader effort in which cooperating project teams attempt to provide a particular community of users with new capabilities, by both developing and combining multiple underlying components of technology. Some testbeds are aimed at specific communities of users; others strive for more ambitious scale. The most successful testbeds have tended to strike the right balance of scale, component technologies, and coupling between the envisioned capabilities and the needs of the target communities.

Testbeds are complex combinations of technology and people. Thus, it is important to point out the organizational as well as the technical contributions of testbeds. Perhaps the best way to illustrate the concept of a testbed is to begin with a look at a well-known example: the ARPANET.

22.1.2  ARPANET

During the early 1960s, researchers in the United States and Great Britain began to develop the concept of a communications network that would send information in discrete packages, or packets. Such a network could, in theory, provide redundant paths from one point to another in order to route information around infrastructure failures. After nearly a decade of incubation of these ideas, the U.S. Department of Defense Advanced Research Projects Agency (DARPA) launched ARPANET—a project to determine whether packet-switched networks might be useful. At the time, DARPA was funding expensive time-shared computers at several computer science laboratories, and a computer network interconnecting them might prove useful in facilitating resource sharing among projects [256]. Below we discuss the broad issues of ARPANET's contributions as a testbed combining people, new ideas, and technology (see Chapter 21 for a review of the evolution of ARPANET in terms of networking technology).

J. C. R. Licklider, who directed DARPA's Information Processing Techniques Office (IPTO) during the 1960s, set the stage for the ARPANET with a rather radical vision that saw computers not as merely arithmetic engines but as tools that might one day augment human intelligence and interaction. Licklider's vision proposed a new model in which humans interact with computers in a symbiotic relationship and interact with one another through networks of computers [343]. Throughout most of the 1960s, the state of the art was batch processing, followed by time-shared use toward the end of the decade. However, computers did not interact with one another, and users needed a separate, custom terminal for each mainframe they used. Dial-up
or dedicated phone lines connected these terminals to remote computers. To compute on two systems at two labs, a user needed two separate terminals and phone lines. Even exchanging data manually was impractical because of differences in physical media formats and information representation (different character set encodings, etc.).

The ARPANET project began in 1968 under the direction of Larry Roberts, who had succeeded Licklider as director of IPTO. A contract was awarded to Bolt Beranek and Newman (BBN) to build an Interface Message Processor (IMP) that would be the building block of a packet-switched network. IMPs at multiple sites would be interconnected with leased telephone circuits and would serve as packet-switching, or routing, nodes (see Figure 22.1). Each site with an ARPA-funded computer was required to build a custom interface between its computer and the IMP.

Part of the challenge, however, was that no standards or example systems existed: the entire system architecture was essentially wide open. How would information on one computer be transmitted to a distant computer? What would be the division of labor between software, hardware, hosts, and IMPs? What applications would run on one computer or the other to take advantage of such a network? The goal of sharing resources between locations led to an initial application that would allow a teletype terminal at one location to act as a user interface to a host at a distant location—that is, an application that allowed the packet-switched network to function in place of a dial-up connection to a distant computer.

Layered Protocols
In 1968, a group of graduate students at the participating universities began to meet and discuss the potential architecture for this network. The first task of this group (which became known as the network working group) was to develop the interface technology that would allow a host computer to connect to an IMP. The group’s development of mechanisms for getting data from one host to another through the network essentially represents the first notion of network protocols. The first end-to-end protocol, called “host-to-host,” was devised by the network working group along with a program to interface with the host operating system, called the Network Control Protocol (NCP, also generally used to refer to the host-to-host protocol; see Chapter 21). Like testbeds today, one of the obstacles to be overcome in the context of heterogeneity was integrating separate “worlds.” Steve Crocker describes this best in his introduction to RFC 1000 [454]: “Systems of that era tended to view themselves as the center of

the universe; symmetric cooperation did not fit into the concepts currently available within these operating systems."

FIGURE 22.1  The initial ARPANET connecting four sites: #1 UCLA (Sigma 7), #2 SRI (940), #3 UCSB (360), #4 Utah (PDP-10). Redrawn from Bob Kahn and Vint Cerf, "ARPANET Maps 1969–1990," Computer Communications Review, October 1990. (Original sketch by J. Postel.)

In 1973, TCP was proposed by Vint Cerf, then on the faculty at Stanford University (formerly a UCLA graduate student in the network working group), and Bob Kahn, who had designed the host-to-IMP interface specification while working on the BBN IMP team prior to moving to DARPA. Although it did not become the ARPANET's official protocol until the January 1, 1983, "flag day" transition, TCP evolved steadily over its first decade. Its evolution represents one of the most significant concepts to arise from the ARPANET testbed. As the community began to look at interconnecting multiple networks, the distribution of work between hosts and IMPs and between applications and protocols began to move toward a model of functional layers.

Under Bob Kahn's leadership in the 1970s and early 1980s, IPTO initiated several packet-switching testbeds in addition to the ARPANET. These included SATNET, a satellite-based packet-switched network between the United Kingdom and the United States, and the packet-radio testbed, which used terrestrial radio transmission to allow mobile devices to be interconnected with packet-switched
networks. During a discussion about interconnecting these networks with vastly different properties, Cerf, Kahn, and Jon Postel came up with the idea that TCP ought to be split into two separate pieces. The pieces that dealt with addressing and forwarding messages through the network became IP, while the functions dealing with guaranteed delivery (sequence numbers, retransmission, multiplexing separate streams) went to TCP. And thus, layered protocols and TCP/IP were born.

Steve Crocker captures these early days in his introduction to RFC 1000:

    We envisioned the possibility of application specific protocols, with code downloaded to user sites, and we took a crack at designing a language to support this. . . . With the pressure to get something working and the general confusion as to how to achieve the high generality we all aspired to, we punted and defined the first set of protocols to include only Telnet and FTP functions. In particular, only asymmetric, user-server relationships were supported. In December 1969, we met with Larry Roberts in Utah, and suffered our first direct experience with "redirection." Larry made it abundantly clear that our first step was not big enough, and we went back to the drawing board. Over the next few months we designed a symmetric host-host protocol, and we defined an abstract implementation of the protocol known as the Network Control Program. ("NCP" later came to be used as the name for the protocol, but it originally meant the program within the operating system that managed connections. The protocol itself was known blandly only as the host-host protocol.) Along with the basic host-host protocol, we also envisioned a hierarchy of protocols, with Telnet, FTP and some splinter protocols as the first examples. If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.
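The split described above, addressing and forwarding in IP and reliable delivery in TCP, can be pictured as nested encapsulation. The following sketch is purely illustrative; the class and field names are invented for this example and are not drawn from the RFCs or from any ARPANET software.

```python
# Illustrative sketch of the TCP/IP split: IP carries addressing/forwarding
# information, TCP carries reliability information (sequence numbers, ports
# for multiplexing separate streams). Class and field names are hypothetical,
# chosen for clarity rather than taken from the protocol specifications.
from dataclasses import dataclass

@dataclass
class TCPSegment:            # reliability layer
    src_port: int            # multiplexing of separate streams
    dst_port: int
    seq: int                 # sequence number, enables reordering/retransmission
    payload: bytes

@dataclass
class IPPacket:              # addressing/forwarding layer
    src_addr: str
    dst_addr: str
    data: TCPSegment         # the segment rides inside the packet

def send(app_data: bytes) -> IPPacket:
    """Encapsulate application data: TCP segment first, then IP packet."""
    segment = TCPSegment(src_port=1023, dst_port=23, seq=0, payload=app_data)
    return IPPacket(src_addr="10.0.0.1", dst_addr="10.0.0.2", data=segment)

# A router inspects only IPPacket.src_addr/dst_addr to forward the packet;
# only the end hosts look inside the TCPSegment to reorder, acknowledge,
# and retransmit.
print(send(b"hello, ARPANET"))
```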

Applications
While most of the effort during the initial period of the ARPANET project went toward creating the technology for the network, the driving force remained: to use interconnected computer systems to support human collaboration. Initially, applications were developed to transfer data from one computer to another (file transfer protocol) and for remote log-in (telnet). Already there were visions of applications such as sending programs to be executed on remote computers or accessing large databases from across the country, but these future applications had to begin with simpler capabilities.

One of the first applications to move the network testbed toward using computers to enable human collaboration was electronic mail. Initially there were multiple mail systems, each with its own user interface and message format. Eventually, the internetworking community’s culture of idea refinement through open discussions resulted in standard message formats to make it easier for people to exchange mail between various email user interfaces (i.e., client programs on the different hosts running different operating systems). The “finger” program followed this—a simple query program that allowed a user on one host to find out whether a colleague on another host (perhaps across the country) was logged in.
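At the protocol level a finger query was, and remains, very simple: open a TCP connection to port 79 on the remote host, send a user name followed by CRLF, and print whatever the remote finger daemon returns. Below is a minimal sketch in modern terms; the user and host names are placeholders, and it assumes the remote machine actually runs a finger daemon, which few do today.

```python
# Minimal sketch of a finger-style query: connect to TCP port 79, send a
# user name terminated by CRLF, and print the server's reply. Assumes the
# remote host runs a finger daemon (rare today); names are placeholders.
import socket

def finger(user: str, host: str, port: int = 79, timeout: float = 5.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(user.encode("ascii") + b"\r\n")   # the entire request
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:                               # server closes when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

if __name__ == "__main__":
    print(finger("alice", "example.org"))  # hypothetical user and host
```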

22.1.3  Organizational Testbed Issues in the ARPANET

The initial ARPANET community, starting with the graduate students working on host interfaces, began to record its progress and discuss new ideas in the form of "Request for Comments" documents (RFCs). The RFCs put in place a process for wide discussion of proposed protocols, standards, and other ideas. Equally important, the fact that RFCs could be submitted by anyone set the stage for the open and cooperative environment that shaped and crafted the Internet over the next two decades.

Another important concept of a testbed is the use of time-forcing functions to drive research and development toward working systems. Just as release deadlines and end-of-quarter profit statements drive product development, large-scale demonstrations can be useful in driving research and development toward producing working prototypes. At least two events during the ARPANET project can be considered such drivers. The first was a "bake-off" held in October 1971, when all of the ARPANET site participants gathered at MIT to try to log into one another's sites over the network (all but one succeeded). The second, more public, demonstration was held in conjunction with the ICCC in Washington, D.C., the following October. An IMP was installed in the conference hall and connected to the ARPANET, with each participating site bringing terminal equipment and peripherals to be hooked to the IMP. With live demonstrations of remote log-in, email, remote printing, and even remote control of robots, the ARPANET community was able to show representatives from funding agencies, the computer and communications industries, and other researchers that packet-switched networks were indeed viable. At this point it can be argued that the ARPANET reached a major milestone in the transition from a packet-switched testbed to infrastructure.

22.2  POST-ARPANET NETWORK TESTBEDS

During the late 1970s and early 1980s, ARPANET testbed results made their way into infrastructure through technology transfer to the Department of Defense as well as to industry. MILNET was created to interconnect defense sites using ARPANET technology. Schlumberger, a multinational oil industry services company, used ARPANET technology for its worldwide corporate network during the late 1970s and early 1980s. In addition, new companies formed to commercialize the technology: GTE Telenet (not to be confused with the telnet protocol) and Tymnet, which created packet-switched networks to provide network services for corporations and universities, are two examples. Finally, the ARPANET testbed concepts and technologies led to other government and academic network projects such as CSNET, BITNET, MFENET, ESnet, NSI, and NSFNET.

Packet-switched networks, having been proven viable by ARPANET (see Chapter 21), also influenced the computer industry to create products that could be interconnected by local area (and later wide area) networks. Digital Equipment Corporation and IBM created their own sets of layered protocols for interconnecting their products. AT&T's UNIX operating system was further developed with DARPA funding at the University of California–Berkeley, and DARPA funding to Stanford resulted in startups such as Sun Microsystems, which delivered workstations running the UNIX operating system. DARPA's funding of the Berkeley UNIX work also came with encouragement to include the TCP/IP protocol stack in the operating system. Thus, with the explosion of desktop UNIX workstations in the early 1980s came a wide deployment of the TCP/IP protocols.

The U.S. university computer science community, much of which was not included in the ARPANET, created CSNET as well as USENET (based on AT&T's UUCP data transfer program, which was also included with UNIX) to exchange electronic mail and documents. U.S. federal agencies began creating their own packet-switched networks as well. The Department of Energy initially created MFENET in the mid-1970s using its own protocols and later evolved it into the multiprotocol (including IP) ESnet in the late 1980s to network its research laboratories; NASA used Digital Equipment's DECNET protocols, largely because of the popularity of the DEC VAX computers among its researchers; and the National Science Foundation created NSFNET to interconnect its supercomputer centers. Here we will discuss NSFNET in more detail as illustrative of these second-generation networks. Each contributed significantly as an intermediate step between the ARPANET testbed and today's global Internet infrastructure.

Following the discussion of NSFNET, we examine some of the high-performance network testbeds of the late 1980s and early 1990s to see how they helped scale the ARPANET technology to the point that it could support the global Internet infrastructure we see today.

22.2.1  NSFNET

Between 1985 and 1986 the NSF created five national supercomputer centers to provide access to advanced computational capabilities for researchers in U.S. universities. Initially there were several thousand users, growing to over ten thousand in the first five years of the program. During the first two years of the program, most users accessed the centers via dial-up lines, GTE Telenet X.25 service, or (for file transfer) BITNET. None of these networks, however, was sufficient in capacity or functionality to support some of the advanced needs of supercomputer users, such as remote visualization.

The NSFNET program began with a 56 Kb/s backbone network between the five supercomputer centers and the NSF-funded National Center for Atmospheric Research (NCAR), coupled with a funding program to assist universities in forming collaborative "regional" networks (see Figure 22.2). A three-layer model was developed, consisting of a backbone network (between the six supercomputer centers), mid-level networks (the regionals), and campus networks. To receive NSF funding support for connecting a campus to a mid-level network, a campus was required to show commitment to installing a campus network that would extend the NSFNET to individual researchers' desktops.

Although the original goals of the NSFNET program were to use ARPANET technology to provide infrastructure for supercomputer users, a significant amount of additional research was nonetheless necessary for its success as infrastructure. The decision was made early on that the network would use the Internet protocols (IP, TCP), even though several successful networks were already in place demonstrating that other protocol choices were valid: NASA's network relied on the DECNET protocols, and the Department of Energy's network used protocols developed by its laboratories as well as DECNET and later IP. While the original ARPANET architecture was a subnetwork of IMPs, each having one or more directly attached hosts, it had evolved by the early 1980s to interconnect local area networks as well, requiring IMP-like devices (later called routers) to interconnect LANs. The selection of IP and the goal of interconnecting local area networks resulted in a need for routers in the NSFNET backbone project as well, although no commercial routers were available at the time.

FIGURE 22.2  NSFNET backbone and mid-level networks circa 1990. Backbone nodes are shown as circles, with small boxes attached to backbone nodes showing entry points for mid-level networks and supercomputer centers.

Borrowing again from the ARPANET community, the NSFNET backbone used minicomputer-based routers called "fuzzballs" [79]. Each fuzzball consisted of an Ethernet LAN interface and multiple serial interfaces that were used to interconnect the fuzzballs over leased 56 Kb/s phone circuits. A community of technical staff began to develop among the fuzzball sites, as well as at the University of Michigan and the University of Delaware, where NSF was funding research and development of the fuzzball systems and network routing protocols.

In the course of providing the NSF supercomputer center users with a network infrastructure, many important research and development issues were identified and resolved. As noted earlier, the NSFNET's three-layer network architecture (backbone/mid-level/campus) resulted in a wide deployment of IP routers. As the network grew to include complex mid-level and campus routing, protocols had to take into account issues of scale that had not been encountered in previous networks. For example, early routing protocols assumed a maximum network diameter (number of hops from source to destination) of
less than 15; thus, any packet with a hop count of 15 or more was considered to be lost in a routing loop and discarded. As mid-level and campus networks grew, there were instances where hop counts from one end of NSFNET to the other exceeded the routing protocol's limits, forcing changes in these protocols. Commercial IP routers deployed in mid-level networks also used different routing protocols from those used in the core backbone (for various reasons, including scale). The interaction between these different routing protocols resulted in both improved routing protocols and new strategies and software such as GATED [186], which is used to translate between multiple routing protocols.

By 1987 the 56 Kb/s NSFNET backbone network had grown more and more congested, but no routers were available that could handle T1 (1.5 Mb/s) circuits, the next logical upgrade in bandwidth. At the same time, ARPANET was being decommissioned, and all of its traffic was beginning to flow over the NSFNET backbone. In a controversial move, researchers working on the fuzzball software adjusted the algorithms used to handle congestion so that interactive performance improved at the expense of file transfer throughput. During congestion, buffers in routers fill and packets are discarded. The fuzzballs would normally discard packets on a first-come, first-served basis, but this was changed so that packets associated with interactive sessions (e.g., telnet) were favored over packets associated with file transfer sessions (e.g., ftp). This meant that interactive users saw an improvement in service, while file transfers took a bit longer.

Network management prior to the late 1980s was done using techniques that had developed during the ARPANET era, whereby a network operations center could monitor network devices (e.g., IMPs, hosts) and keep track of errors or outages, in many cases even using trend data to predict problems. During the NSFNET project, as the number of routers in the networks increased to hundreds (including the backbone network, mid-level networks, and campus networks), Dave Mills wrote a simple query mechanism that allowed certain console commands to be executed over the network without logging into the fuzzballs. This allowed simple network monitoring software such as the "ping monitor" from MIT (which kept track of host reachability by periodically pinging a list of hosts) to be augmented to keep track of interfaces within routers. Members of the community of mid-level and backbone network managers and researchers (specifically Martin Schoffstall from NYSERnet and Jeffrey Case from the University of Tennessee) expanded this notion with a specification for a Simple Gateway Monitoring Protocol (SGMP), which grew into the now common device management protocol, the Simple Network Management Protocol (SNMP).
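The fuzzball congestion change described above amounts to a drop-preference policy: when the outbound queue is full, discard a bulk-transfer packet before an interactive one. The sketch below illustrates the idea only; the port-based classification, queue limit, and data structures are assumptions for this example, not the actual fuzzball code.

```python
# Illustrative drop-preference policy in the spirit of the fuzzball change:
# when the output queue is full, prefer to discard bulk-transfer (ftp-like)
# packets so that interactive (telnet-like) packets get through. Port-based
# classification and the queue limit are assumptions for this sketch only.
from collections import deque

INTERACTIVE_PORTS = {23}        # telnet
QUEUE_LIMIT = 8                 # packets the outbound queue can hold

def is_interactive(pkt: dict) -> bool:
    return pkt["dst_port"] in INTERACTIVE_PORTS

def enqueue(queue: deque, pkt: dict) -> None:
    """Add pkt to the outbound queue, evicting a bulk packet if full."""
    if len(queue) < QUEUE_LIMIT:
        queue.append(pkt)
        return
    # Queue is full: if the arriving packet is interactive, evict a queued
    # bulk-transfer packet to make room for it.
    for i, queued in enumerate(queue):
        if not is_interactive(queued) and is_interactive(pkt):
            del queue[i]        # drop the bulk packet
            queue.append(pkt)   # keep the interactive packet
            return
    # Otherwise fall back to dropping the arriving packet (tail drop).

queue = deque()
for n in range(12):
    enqueue(queue, {"dst_port": 21, "seq": n})   # ftp traffic fills the queue
enqueue(queue, {"dst_port": 23, "seq": 99})      # telnet packet arrives last
print([p["dst_port"] for p in queue])            # the telnet packet survives
```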

By 1994 it was clear that Internet technology had created a viable commercial marketplace, widely deployed in the form of private corporate IP networks and supporting a large number of equipment suppliers. Many of the mid-level networks, originally funded by NSF and/or consortium member dues, had already been commercialized. The NSFNET backbone was turned off in mid-1995 in favor of a new architecture proposed by NSF. Specifically, network access points (NAPs) would be used to interconnect mid-level networks rather than a national backbone, and inter-NAP traffic would be carried by commercial Internet service providers (many of which had begun as NSFNET mid-level networks). The NAP concept was crucial to the development of the Internet, not only because it provided peering points for network providers, but also because it provided the capability for further evolution of the infrastructure by allowing piecemeal technology upgrades.

22.2.2  Gigabit Testbeds Program

As noted in the NSFNET discussion earlier, the evolution of the Internet through the late 1980s revealed that commercial Internet technology was not quite keeping up with the demand for higher-capacity networks. Further, because of the technical limitations as well as the cost of higher-capacity networks, the development of distributed applications was somewhat stifled.

Shortly after leaving DARPA, Bob Kahn had formed the Corporation for National Research Initiatives (CNRI) to coordinate large-scale infrastructure projects involving cooperation between the public and private sectors. Watching the growth of the Internet in terms of participation and capacity demand, coupled with rapid increases in computer processor speeds and memory sizes, Kahn proposed a major program in high-speed networks and applications that was eventually funded jointly by DARPA and NSF. The initiative would attempt to answer two questions: How would a gigabit-per-second network be architected? And what would its utility be to end users?

After a national solicitation yielded nearly 100 proposals, CNRI, DARPA, and NSF began to organize testbeds based on a number of factors, including synergy between research projects and whether the research required a high-performance testbed. With substantial cost sharing and cooperation from a number of regional and long-distance telecommunications carriers (MCI, AT&T, BellSouth, USWest, PacBell, Bell Atlantic), five testbeds were formed in 1990 (see Figure 22.3). Roughly a year later the first testbed, a metropolitan area ATM testbed in North Carolina, was operational. The remaining four testbeds became operational over the next 18 months.

FIGURE 22.3  The five gigabit testbeds (AURORA, BLANCA, CASA, NECTAR, and VISTANET) coordinated by CNRI with funding and support from DARPA, NSF, and industry are shown with MAGIC, a gigabit testbed funded separately by DARPA.

It is instructive to examine the state of technology and the prevailing questions that existed at the outset of any testbed initiative. In the late 1980s, "broadband" was 1.5 Mb/s (T1), and high-performance networks running at 45 Mb/s were just beyond reach. ATM was debated widely, with some saying it was ideal for integrating video/data/voice and others saying that its 53-byte cells would produce too much overhead (in headers and in segmentation/reassembly) to support high speeds. Protocol processing was also an issue, with claims that TCP overhead was too high to support gigabit-per-second rates and that lightweight protocols, or perhaps protocol processing in hardware, were therefore needed.

The five testbeds supported in the CNRI gigabit testbed initiative each had a unique blend of applications research and networking and computer science research. Below are brief overviews of these five testbeds along with the MAGIC testbed, separately funded by DARPA after the first five testbed projects had been launched:

- CASA (Caltech, SDSC, LANL, JPL, MCI, USWest, Pacific Bell) focused primarily on distributed supercomputing applications, attempting to achieve "superlinear speedup" by strategically mapping application components to the supercomputers best suited to the computation. The CASA network was constructed by using HIPPI switches interconnected by HIPPI-over-SONET at OC-12 (622 Mb/s).

- BLANCA (NCSA, University of Illinois, UC-Berkeley, AT&T, University of Wisconsin) applications included virtual environments, remote visualization and steering of computation, and multimedia digital libraries. BLANCA network research included distributed virtual memory, real-time protocols, congestion control, and signaling protocols, using experimental ATM switches from AT&T Bell Laboratories running over 622 Mb/s and 45 Mb/s circuits provided by the AT&T Bell Laboratories Experimental University Network (XUNET) project.

- The VISTANET testbed (MCNC, UNC, BellSouth) supported the development of a radiation treatment planning application that allowed medical personnel to plan radiation beam orientation using a supercomputer and a visualization application, extending the planning process from two beams in two dimensions to multiple beams and three dimensions. Using an ATM network at OC-12 (622 Mb/s) interconnecting HIPPI local area networks, the VISTANET application involved a graphics workstation at the UNC Medical Center, special-purpose graphics hardware at UNC's Computer Science Department across campus, and a supercomputer several miles away at MCNC.

- NECTAR (CMU, Bell Atlantic, Bellcore, PSC) was a metropolitan area testbed with OC-48 (2.4 Gb/s) links between the PSC supercomputer facility just outside of Pittsburgh at Westinghouse and the downtown campus of Carnegie Mellon University. The primary application work involved coupling supercomputers running chemical reaction dynamics, and computer science research included both distributed software environments and development of HIPPI-ATM-SONET conversion devices.

- AURORA (MIT, IBM, Bellcore, Penn, MCI) research focused primarily on network and computer science issues. An OC-12 (622 Mb/s) network interconnected the four research sites and supported the development of ATM host interfaces, ATM switches, and network protocols. AURORA research included telerobotics, distributed virtual memory, and operating system issues such as reducing the overhead in network protocol implementation.

- The MAGIC [540, 476] testbed (U.S. Army Battle Laboratory, Sprint, University of Kansas, Army High Performance Computing Center, University of Minnesota, Lawrence Berkeley Laboratory) was funded separately by DARPA after the CNRI initiative had already begun. MAGIC used an OC-12 (622 Mb/s) network to interconnect ATM-attached hosts, developing remote vehicle control applications as well as high-speed access to databases for terrain visualization and battle simulation.

The gigabit testbeds initiative set out to achieve nearly a 700-fold increase in bandwidth relative to the 1.5 Mb/s circuits typically in use on the Internet in 1989. While local area HIPPI networks were supporting peak data transfer between supercomputers at between 100 and 400 Mb/s, typical high-end workstations were capable of under 25 Mb/s. Even so, applications themselves rarely saw more than 25% of these peak numbers. Applications researchers in the testbeds were for the most part hoping to achieve 300-400 Mb/s actual throughput, or a 200-fold increase in performance. The design point in 1990, then, for applications was to target capabilities that might be supported in the 1992-93 timeframe. In the end, transmission rates of 622 Mb/s were supported, and memory-to-memory throughput between computers was demonstrated at 300-400 Mb/s. Thus, technically there were demonstrations showing success in reaching the original goals.

At the same time, the fact that the testbeds combined research at multiple layers—from hardware to network protocols to middleware to applications—brought significant challenges. Research at any particular layer generally requires the layers below to be predictable, if not stable. For some of the testbeds, by the time a facility was presented to an application developer (much less an end user), the constraints in terms of availability and stability were actually quite restrictive. This situation, unfortunately, reduced the number of end users who participated in the testbeds (as opposed to software developers).

The selection of end points for the testbeds was partially constrained by the fact that some locations simply did not have the fiber infrastructure in place to support deployment of a gigabit testbed. This meant that at some testbed sites there existed a critical mass of computer science researchers and/or application developers, but there were no high-performance computing resources with which to construct high-end applications. The applications seen on the testbeds reflected not only the strengths and interests of the participating sites but also the constraints of the resources available. For example, the BLANCA testbed applications focus was remote visualization and control of supercomputer applications, but work in distributed applications between supercomputers was limited by the fact that there were supercomputers at only one site on the testbed.

The testbed initiative did, in fact, provide a wealth of answers to the original two questions asked: What alternatives are there for architecting a gigabit
network, and what utility would it provide to end users? The testbeds also delivered what they proposed, but the fact that they remained small "islands" of infrastructure prevented a large number of users from joining in. Part of the lesson here relates to the interdependency between end-user applications and technology development. Technology development is largely justified by the hope for improved and/or new application capabilities for end users. However, in order for end users to fully benefit from technology development testbeds, the testbeds must result in infrastructure upon which those applications can support the work of the end users. In the case of the gigabit testbeds, the finite lifetime of the testbeds without any follow-on infrastructure prevented scientific use of the facilities and capabilities that were developed. It was not until two years later, when the vBNS was deployed (discussed below), that the testbed applications work was resumed in wide area networks.

A number of important technology developments came from the testbeds initiative. Several testbeds (CASA, BLANCA) demonstrated wide area heterogeneous supercomputer applications, achieving between 300 and 600 Mb/s. The participation of multiple carriers in providing testbed switching and transmission facilities resulted in the first multivendor SONET interoperation, and AT&T's deployment of optical amplifiers in BLANCA was the first deployment on a service basis. The AURORA testbed produced the first demonstration of striping of data over multiple OC-3 channels and the first ATM host interfaces for workstations operating above OC-3 speeds.

Despite heated debates early in the project about the feasibility of using ATM in a high-performance network, several of the participating telecommunications carriers had begun to deploy ATM in their commercial networks by the end of the project. While many of the telecommunications carriers initially asked why 45 Mb/s was insufficient for the foreseeable future, several had directly participated in successful demonstrations of OC-3 (155 Mb/s), OC-12 (622 Mb/s), and OC-48 (2.4 Gb/s) by the conclusion of the project. Thus, technology deployment within the telecommunications industry was notably accelerated by the gigabit testbeds initiative as well.
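The striping demonstration mentioned above rests on a simple idea: split a byte stream round-robin across several parallel channels and reassemble it in order at the far end, so that N channels approximate N times the bandwidth of one. The sketch below is schematic only; the chunk size and channel count are arbitrary choices, and real striping over OC-3 circuits is of course performed in hardware and framing layers rather than in Python lists.

```python
# Schematic sketch of striping a byte stream across multiple channels
# (e.g., several OC-3s) and reassembling it in order. Chunk size and the
# number of channels are arbitrary choices for illustration.
from typing import List

def stripe(data: bytes, channels: int, chunk: int = 4) -> List[List[bytes]]:
    """Distribute fixed-size chunks round-robin over `channels` queues."""
    queues: List[List[bytes]] = [[] for _ in range(channels)]
    for i in range(0, len(data), chunk):
        queues[(i // chunk) % channels].append(data[i:i + chunk])
    return queues

def reassemble(queues: List[List[bytes]]) -> bytes:
    """Interleave chunks back in their original round-robin order."""
    out = []
    longest = max(len(q) for q in queues)
    for round_no in range(longest):
        for q in queues:
            if round_no < len(q):
                out.append(q[round_no])
    return b"".join(out)

payload = b"gigabit testbeds stripe data across channels"
striped = stripe(payload, channels=3)
assert reassemble(striped) == payload   # order is preserved end to end
```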

22.2.3  Other Testbeds

The number of important networking testbeds would easily fill a chapter in itself. For example, the multiagency ATDnet (Advanced Technology Development Network, also occasionally known as the Washington Area Bitway, or WABitway), a 2.4 Gb/s SONET/ATM testbed in the Washington, D.C., area, has directly influenced the network architecture of vast sectors of the military and of the government in general. Today ATDnet is
part of a larger effort that includes satellite technology from NASA's Advanced Communications Technology Satellite (ACTS) network, commercial ATM technology from Sprint's Interim Defense Research and Engineering Network (I-DREN), and advanced network research on the CAIRN (Collaborative Advanced Internet Research Network) network (see below). Efforts among universities and high-tech companies in the San Francisco area, such as BAGNET (Bay Area Gigabit Network) and NTON (National Transparent Optical Network), have had similar effects in industry and academia.

In Canada, CANet was one of the first continental-scale ATM networks running at 45 Mb/s to 155 Mb/s, demonstrating interoperability of multiple commercial ATM switches interconnecting dozens of research and commercial laboratories as well as advanced regional networks such as WURCNET in Alberta. Its follow-on network, CANet*2, expands the number of connected sites and increases transmission speeds in some cases to 622 Mb/s. A similar testbed in the United Kingdom, SuperJANET, interconnects universities at 140 Mb/s.

Finally, network testbeds have been formed in industry as well. The ARIES network, aimed at using ATM technology to address the needs of the petroleum industry, combined technologies ranging from T1 satellite connections to 155 Mb/s terrestrial networks. ARIES involved participants from the petroleum industry (Geko Prakla, AMOCO, and others), telecommunications (Sprint), government (NASA ACTS testbed), and academia (Minnesota Supercomputer Center). Its goal was to demonstrate the use of ATM network technology in a heterogeneous (satellite, terrestrial, various bandwidths) network environment to support applications such as processing of seismic data collected on ships and transmitted directly to supercomputers.

22.3  SYSTEM TESTBEDS

Testbeds have been used in the development of hardware, software, and systems beyond the networking examples given above. The evolution of computing has a rich history of what many would call testbeds, although there is not sufficient space to report on it here. Two testbed efforts of the past several years (now concluded) illustrate the advantages of using existing network technology to support innovative applications. These are significantly different from network testbeds, whose primary focus is on improving the underlying networking and software technology. The first effort, the I-WAY project, attempted to exploit the multiple ATM testbeds in place in the United States and Canada in 1995 to
support high-performance applications in science and engineering [155]. The second testbed, ARIES, was aimed at exploiting existing ATM services from telecommunications carriers and the NASA ACTS satellite to support applications important to the oil industry. Here we examine the I-WAY because of its multinational scope involving dozens of organizations.

I-WAY: The Importance of Middleware
Seeking to exploit the soon-to-be-deployed 155 Mb/s NSF vBNS testbed as well as DOE’s and NASA’s OC-3 networking infrastructure, organizers for IEEE Supercomputing ’95 released a call for proposals in the fall of 1994. The goal of this solicitation was to find teams of developers and researchers who would demonstrate innovative scientific applications, given the availability of computing resources at dozens of laboratories, interconnected with broadband national networks, and accessible from high-end graphics workstations and virtual reality environments to be made available at Supercomputing ’95. A select jury of leaders from universities, corporations, and government reviewed the more than 60 proposals received, selecting roughly 40 for support.

Featured Applications
I-WAY applications were classified into five general categories: distributed supercomputing, remote visualization and virtual environments, collaborative environments (particularly those using virtual reality technology and techniques; see Chapter 6), distributed supercomputing coupled with collaborative environments, and video [155]. The applications teams represented over 50 research institutions, laboratories, companies, and federal agencies. Several example applications illustrate the various application types.

An NSF-funded Grand Challenge team working on cosmology coupled multiple supercomputers to compute an n-body galaxy simulation, displaying the results in the CAVE at Supercomputing '95. The code was a message-passing code, coupling supercomputers from Cray, SGI, Thinking Machines, and IBM [423].

A team from Argonne National Laboratory, working with the commercial firm Nalco/Fueltech, demonstrated a teleimmersive collaborative environment for the design of emission control systems for industrial incinerators [158]. This application coupled a supercomputer in Chicago with CAVEs in San Diego and in Washington, D.C.

The University of Wisconsin Space Science and Engineering Center's Vis5D software was adapted to the CAVE to support a simulation of the Chesapeake Bay ecosystem [567], allowing researchers at Supercomputing '95 to explore the virtual Chesapeake Bay while interacting with a running simulation on a Thinking Machines CM-5 at NCSA in Illinois. Another group used a network-enabled version of Vis5D to explore remote climate modeling datasets [267].

MCI and equipment supplier Netstar experimented with video and quality of service using the vBNS. The experiment demonstrated the use of priority queuing in routers to improve the quality of video streams in the presence of congestion in the network.

Another key application area covered by I-WAY was remote control and visualization of experiments using network-attached instruments (see Chapter 4), including the use of immersive virtual environments and voice control. Instrument output was sent to supercomputers for near-real-time conversion to three-dimensional imagery displayed in the CAVE. This particular set of applications emphasized both high bandwidth and bounded delay, the latter due to human factors in virtual environments [456]. A group from the Aerospace Corporation demonstrated a system that acquired networked computing resources to process data downloaded from a meteorological satellite, and then made the enhanced data available in real time to meteorologists at the conference [332].

I-WAY Human and Technology Infrastructure
Staff from Argonne National Laboratory, the University of Illinois–Chicago Electronic Visualization Laboratory, and the National Center for Supercomputing Applications (NCSA) formed a leadership team to coordinate I-WAY. This coordination entailed working out details regarding the network connections, soliciting laboratories and computing centers to volunteer their resources to the effort, and developing and deploying the software infrastructure necessary to support the application teams.

The network infrastructure for I-WAY required working with multiple agencies and telecommunications carriers to connect multiple networks (including vBNS, AAI, ESnet, ATDnet, CalREN, NREN, MREN, MAGIC, and CASA) and to install DS3 and OC-3 connections into the Supercomputing '95 show floor network. Multiple equipment vendors volunteered equipment to be used for demonstrations, ranging from high-end graphics workstations to a fully immersive virtual environment CAVE. Each participating computing center worked with
the I-WAY team to provide resource allocations for application teams and to deploy a standard workstation system at their sites, called an I-WAY Point of Presence (IPOP) [200]. The IPOP was used for authentication of distributed applications, for distribution of associated libraries and other software, and for monitoring the connectivity of the I-WAY virtual network. A scheduling system was developed and deployed on the IPOP systems, and the scheduler software was ported to each type of computing resource by staff at the participating center. Applications could use the IPOP-based software infrastructure, which provided single authentication and job submission across multiple sites, or they could work directly with the end resources.

For most of 1995, development teams worked on the I-WAY software, the network deployment plans, and the logistics of making resources available at several dozen computing centers. During the course of the year, many centers withdrew because of an inability to dedicate staff to port software or integrate their systems with the I-WAY software "cloud." During the four months prior to the Supercomputing '95 demonstrations, teams of staff at the participating sites worked with applications teams to prepare their applications for demonstration over the I-WAY. These teams debugged the applications, tuned them to take into account the longer delays in the wide area networks, and in many instances ported the user interfaces into the CAVE environment using the associated libraries.

The I-WAY project also involved software from the gigabit testbeds, as well as the expertise of a number of researchers who had participated in them. Vis5D, software developed on the BLANCA gigabit testbed by the University of Wisconsin, was adapted to several environmental applications, and BLANCA's Data Transfer Mechanism (DTM) communications library provided a communications API for the CAVE. Many of the principles learned during CASA's experiments in coupling multiple supercomputers were also employed to hide latency for distributed supercomputing applications in I-WAY.
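The convenience the IPOP infrastructure provided can be caricatured as "authenticate once, then submit work to any participating site." The sketch below is hypothetical pseudocode for that workflow; none of the class names, methods, or credentials correspond to the actual I-WAY scheduler or its interfaces.

```python
# Hypothetical sketch of the "single authentication, multi-site submission"
# pattern the IPOP infrastructure provided. None of these classes or calls
# correspond to the real I-WAY software; they only illustrate the workflow.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Site:
    name: str
    resource: str                       # e.g., "IBM SP", "Paragon"
    jobs: List[str] = field(default_factory=list)

    def submit(self, credential: str, job: str) -> str:
        if credential != "valid-token":          # trust the shared credential
            raise PermissionError(f"{self.name}: not authenticated")
        self.jobs.append(job)
        return f"{self.name}: accepted '{job}' on {self.resource}"

class PointOfPresence:
    """Authenticate once, then fan a job out to several sites."""
    def __init__(self, sites: Dict[str, Site]):
        self.sites = sites
        self.credential = None

    def login(self, user: str, passphrase: str) -> None:
        # Stand-in for a real authentication exchange.
        self.credential = "valid-token" if passphrase else None

    def run_everywhere(self, job: str) -> List[str]:
        if self.credential is None:
            raise PermissionError("log in first")
        return [site.submit(self.credential, job) for site in self.sites.values()]

pop = PointOfPresence({"ANL": Site("ANL", "IBM SP"), "SDSC": Site("SDSC", "Paragon")})
pop.login("alice", "secret")
for line in pop.run_everywhere("nbody-simulation"):
    print(line)
```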

22.4  THE LANDSCAPE IN 1998

Network testbeds have become components of broader "infrastructure" testbeds that attempt to deliver total solutions. Within the United States, new efforts such as the NSF Partnerships for Advanced Computational Infrastructure (PACI), the DOE Accelerated Strategic Computing Initiative (ASCI) and DOE2000 programs, the NASA Information Power Grid (IPG) initiative, and the Globus project are aimed at computational science and engineering while riding on top of networks and network testbeds such as the vBNS, ESnet, NREN,
and AAInet. Even network testbeds such as CAIRN rely partly on infrastructure from other ATM/SONET testbeds. European ACTS projects use networks such as SuperJANET and other European testbeds, as well as complex concatenations of networks within Europe, across the Atlantic, and into the vBNS via CANARIE and the STAR-TAP. Several important network testbeds are worth examining today as well, including the vBNS, AAInet, CAIRN, SuperJANET, and CANet*2. While there is not sufficient space to cover all of the major efforts, we examine a number of representative ones.

22.4.1  ACTS ATM Internet (AAInet)

One of the most aggressive ATM testbeds under way in the United States is ARPA's ACTS ATM Internetwork (AAInet), a testbed interconnecting NASA's ACTS satellite network; the Washington, D.C., area ATDnet; MAGIC; and the Defense Research and Engineering Network (DREN). AAInet is addressing network signaling (e.g., virtual circuit setup between switches), congestion control, multicast, and interoperability with non-ATM networks. Constructed of several separate and autonomous networks, AAInet is an ideal testbed not only for these technical issues but also for issues of scale and interoperability among multiple equipment vendors and multiple separately managed networks.

22.4.2  DARTnet/CAIRN

The CNRI-coordinated gigabit testbeds were not the only network testbed activities taking place during the late 1980s and early 1990s. At lower speed (T1), DARPA was also funding DARTnet, whose research contributed to many of today’s capabilities in multimedia and multicast protocols. CAIRN (Collaborative Advanced Internet Research Network) is the present-day DARPA-funded network testbed follow-on to DARTnet. Today, the DARTnet-II network interconnects 18 sites at T1 and is a subset of the CAIRN infrastructure that adds 45 Mb/s and 155 Mb/s ATM links to several of the sites. Unlike many network testbeds that limit the scope of network research in favor of stability for applications projects, the sole purpose of the DARTnet/CAIRN infrastructure is to provide a “breakable” testbed for network research. DARTnet/CAIRN’s comprehensive network research agenda has resulted in major contributions in terms of Internet capabilities, including integrated service models, multicast routing protocols, QoS schemes (e.g., RSVP), network time protocols (including the use of Global Positioning System clocks for unidirectional delay measurement), IP security, and practical experience with IPv6.
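One of those contributions, unidirectional delay measurement, is worth unpacking: a round-trip time can be measured with a single clock, but a one-way delay is the difference between a receive timestamp and a send timestamp taken on two different machines, so the two clocks must agree, which is why GPS-disciplined clocks mattered. The numbers in the sketch below are invented purely to show how an unknown clock offset corrupts the measurement.

```python
# Sketch of why one-way delay measurement needs synchronized clocks.
# All timestamps and the clock offset below are invented for illustration.
def one_way_delay(t_sent_sender_clock: float,
                  t_recv_receiver_clock: float,
                  receiver_clock_offset: float = 0.0) -> float:
    """Apparent one-way delay; exact only when the offset is truly zero."""
    return t_recv_receiver_clock - t_sent_sender_clock - receiver_clock_offset

true_delay = 0.030                       # 30 ms path delay
offset = 0.012                           # receiver clock runs 12 ms fast
t_sent = 100.000                         # sender clock reading at transmission
t_recv = t_sent + true_delay + offset    # what the receiver's clock reads

print(one_way_delay(t_sent, t_recv))            # 0.042: wrong by the offset
print(one_way_delay(t_sent, t_recv, offset))    # 0.030: correct once clocks agree
# GPS-disciplined clocks, as used on DARTnet/CAIRN, drive the offset toward zero.
```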

22.4.3  NSF PACI and vBNS

As a follow-on to the supercomputer centers program that began in 1985 and initiated the NSFNET backbone project, NSF created the PACI program to fund several large-scale infrastructure development efforts to "prototype the 21st century computational environment." Two consortia were funded in 1997: the National Computational Science Alliance (NCSA Alliance), centered at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, and the National Partnership for Advanced Computational Infrastructure (NPACI), centered at the University of California–San Diego and the San Diego Supercomputer Center (SDSC). In addition to over 120 principal investigators at roughly 80 universities in 27 states, the NCSA and NPACI consortia involve partnerships with other agency laboratories. Argonne National Laboratory, for example, is a partner in the NCSA effort, while Lawrence Berkeley Laboratory is involved in the NPACI team.

The PACI program, just under way as of late 1997, relies heavily on the use of testbeds at multiple levels. Systems or software from enabling technology teams will be stress-tested by application technology teams in order to determine their usefulness as well as their suitability for general infrastructure. Many of these testbeds will take place on the vBNS network, a cooperative program between NSF and MCI. As of late 1997 the vBNS backbone was running at OC-12 (622 Mb/s), with full OC-12 connectivity between SDSC and NCSA and interconnections to several dozen locations at speeds ranging from 45 Mb/s to 155 Mb/s. By the end of 1998 the NSF expects over 100 locations to be connected to the vBNS, including most of the PACI consortium members and all of the major resource centers in the consortia.

The participants in the PACI consortia will take part in a complex, multilayer set of testbeds (e.g., see [469, 515, 525]). Application technology teams cover more than a dozen fields, from astronomy to biology to nanotechnology, and involve teams of 6 to 12 principal investigators at as many institutions. These applications teams provide driving applications to multiple teams of computer scientists and engineers (the enabling technology teams) in order to influence the development of underlying infrastructure capabilities. The two consortia, NPACI and NCSA, are complementary in terms of application areas as well as computer science and engineering. For example, while the NPACI-led consortium has a strong concentration of work aimed at data-intensive computing, the NCSA-led effort emphasizes visual supercomputing and teleimmersion. Both centers provide a complementary suite of high-performance computing platforms, providing the community of more
than 6,000 users across the country with the ability to select the environment that is ideal for their work.

22.4.4

Globus
In 1998 there are many other testbeds that range in scale from half a dozen institutions to multitestbed initiatives. The Globus project (see Chapter 11), a DOE- and DARPA-funded effort based at Argonne National Laboratory and the Information Sciences Institute (ISI) at the University of Southern California, involves many of the same institutions participating in the NSF PACI program. Globus is in many ways an outgrowth of work done in scheduling, security, and other distributed systems areas during the I-WAY project in 1995 [155, 200]. Globus components are rapidly becoming some of the first pieces of infrastructure within the wide area prototype activities of the PACI program (see Plate 17), partly because of the Globus emphasis on interoperability of a variety of underlying component systems.
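To suggest what "interoperability of a variety of underlying component systems" can mean in practice, the following is a generic sketch of the adapter idea: one job description submitted through site-specific adapters to dissimilar local resource managers. This is an illustration of the general pattern only, not the Globus API; all class names, commands, and identifiers here are hypothetical.

```python
# Generic sketch of uniform job submission over dissimilar local systems.
# This is NOT the Globus API; class names, commands, and identifiers are hypothetical.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class JobRequest:
    executable: str
    arguments: list[str] = field(default_factory=list)
    node_count: int = 1


class ResourceAdapter(Protocol):
    def submit(self, job: JobRequest) -> str:
        """Submit a job and return a site-specific job identifier."""
        ...


class BatchQueueAdapter:
    """Adapter for a hypothetical batch scheduler driven by a submit command."""

    def submit(self, job: JobRequest) -> str:
        cmd = f"qsubmit -n {job.node_count} {job.executable} {' '.join(job.arguments)}"
        print(f"[batch site] would run: {cmd}")
        return "batch-job-42"


class InteractiveHostAdapter:
    """Adapter for a host that simply executes the program directly."""

    def submit(self, job: JobRequest) -> str:
        print(f"[interactive site] would exec: {job.executable} {' '.join(job.arguments)}")
        return "pid-1234"


def run_everywhere(job: JobRequest, sites: dict[str, ResourceAdapter]) -> dict[str, str]:
    """Submit the same job description through each site's adapter."""
    return {name: adapter.submit(job) for name, adapter in sites.items()}


if __name__ == "__main__":
    job = JobRequest(executable="/bin/hostname")
    print(run_everywhere(job, {"siteA": BatchQueueAdapter(), "siteB": InteractiveHostAdapter()}))
```

The value of the pattern is that application code sees one submission interface regardless of how each participating site schedules and runs work.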

22.4.5

NLANR
The National Laboratory for Applied Network Research (NLANR), a distributed laboratory with staff at several NSF-sponsored supercomputer centers, supports a variety of testbed activities as well as research efforts. For example, NLANR uses the vBNS as a national "backplane" to interconnect an experimental Web caching system aimed at improving the performance of the Web while reducing Internet load. NLANR comprises three complementary functions: a distributed applications support center (at NCSA), a network measurement research and tools development effort (at UCSD), and a network engineering resource center (at CMU). (More information on NLANR can be found at www.nlanr.net.)
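As a concrete illustration of the hierarchical caching idea, the sketch below shows how a campus cache can satisfy repeated requests locally and forward misses to a parent cache before touching the origin server, reducing repeated transfers over wide-area links. This is a simplified, hypothetical example, not NLANR's actual caching software; the names and single-parent topology are ours.

```python
# Simplified sketch of a hierarchical Web cache: serve hits locally, forward
# misses to a parent cache, and fetch from the origin only at the top level.
from typing import Optional


class HierarchicalCache:
    def __init__(self, name: str, parent: Optional["HierarchicalCache"] = None):
        self.name = name
        self.parent = parent
        self.store: dict[str, str] = {}  # URL -> cached response body

    def fetch_from_origin(self, url: str) -> str:
        # Placeholder for an HTTP request to the origin server.
        print(f"{self.name}: fetching {url} from origin")
        return f"<content of {url}>"

    def get(self, url: str) -> str:
        if url in self.store:                      # local hit
            print(f"{self.name}: HIT {url}")
            return self.store[url]
        if self.parent is not None:                # miss: try the parent cache
            print(f"{self.name}: MISS {url}, asking parent {self.parent.name}")
            body = self.parent.get(url)
        else:                                      # top of the hierarchy
            body = self.fetch_from_origin(url)
        self.store[url] = body                     # keep a copy for future requests
        return body


if __name__ == "__main__":
    national = HierarchicalCache("national-root")
    campus = HierarchicalCache("campus-cache", parent=national)
    campus.get("http://example.org/index.html")   # miss propagates up, then fills caches
    campus.get("http://example.org/index.html")   # second request is a local hit
```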

22.4.6

ACTS
The Advanced Communications Technologies and Services (ACTS) program in Europe is one of the world's most comprehensive "testbeds of testbeds," involving universities, government laboratories, and private corporations under various funding arrangements with the European Union. Several dozen consortia, each involving as many as 20 participating organizations, have been formed since 1995 to address a variety of communications, media, and computational developments. For example, the Distributed Virtual Prototype project involves both government laboratories and universities in multiple European Community nations. Private corporations such as Caterpillar and its subsidiary operation in Belgium are developing techniques for collaborative engineering over transatlantic ATM networks interconnecting virtual environments facilities in the United States (NCSA) and Germany (GMD).

22.5

TESTBEDS FOR THE FUTURE: CHALLENGES AND OPPORTUNITIES
The preceding sections show clearly that testbeds can be a mechanism for technology development, technology transfer, and community building, and that their results often grow into mainstream information technology. At the time, however, the outcomes and transfer paths were rarely as clear as they appear in hindsight. In the dynamic world of information technology, the accelerating pace of industry and of user adoption opens the potential for a new generation of testbeds that will be critical to building the computational grids of the future.

22.5.1

Evolution and Revolution
Ironically, the testbeds cited here demonstrate that evolution is one of the most important ingredients in producing revolutionary results in computing and information technology. The move from research to usable prototype to a viable, industrially supported base is often stimulated by carefully chosen testbeds. The families of computing, networking, and information technology that have had such revolutionary effects on our lives, economies, and jobs evolved over long periods of time [411]. Moreover, testbeds focused on the revolutionary technologies of their day (such as all-optical networking) eventually evolved into support for broad classes of applications. Thus, technology push and application pull often meet precisely at the testbed.

Computational grids will require a particular sensitivity to this experience. On the one hand, revolutionary software and middleware are needed to make it all happen. On the other hand, very evolutionary policies and procedures will also affect the nature and growth of grid testbeds. Since these testbeds will initially be perceived as competing directly with the production facilities of many resource providers, careful attention to scale and risk will be needed to ensure successful evolution.

One challenge facing the creation and support of testbeds will be to achieve concurrent support for production network capabilities for applications as well as for network research and bleeding-edge applications, on as much of the same infrastructure as possible. If achieved, such coexistence will contain costs, make the best use of knowledgeable personnel, avoid unnecessary duplication of infrastructure, and provide the means for applications to migrate between production and testbed modes with minimal effort.

22.5.2

Getting Real Users Involved
A major advantage of testbeds is that they get real users involved in applying advanced technologies to their problems early in the development cycle. This involvement, in turn, provides a real focus for the technologists in the application of their technology. This marriage of user and technology is often the key to success and can become a test of whether the new technology holds potential value for an even wider class of users.

The vision demonstrated in the I-WAY experiments is a precursor of what a computational grid could become. As the national computational grid is built, testbeds of real users emerging from the application teams are expected to form successively linked networks of communities tied together by computational resources. Success or refinement in any given testbed will lead to the definition of the next wave of testbeds, which can bring new users, technologies, and communities together in productive ways. An important challenge for these computational grid testbeds will be to productively construct new multidisciplinary user communities, which in turn may routinely discover new applications (see Chapters 3 through 6).

22.5.3

Funding and Organization
Sustained funding for a series of testbeds is essential to realize their potential and, even more important, to attract the high-quality users, experiments, and researchers needed to drive the testbeds to meet their aggressive goals. Fortunately, government agencies have initiated several programs (NSF PACI, DOE2000, NASA IPG, etc.), but to fully realize the potential of computational grids, existing and new programs will have to become grid-enabled and participate in the grid's evolution. This situation could lead to political, organizational, and technical fragmentation unless approached from a systems perspective. The testbeds developed could be applied across agencies, bringing new communities on board and delivering both capabilities and additional resources to the grid.

Long-term funding ensures that applications researchers and developers will have something to use at the end of the technology development stages. Similarly, technologists will have the opportunity to see how their systems behave under persistent use over time. Persistence is key to delivering the technology back to the applications that helped develop it.

22.6

CONCLUSIONS
The understanding, organization, and experience developed in both large and small testbeds have been key contributors to today's core networking, software, and computing infrastructure. The successful construction of the national-scale grid environments envisioned in this book will require careful choices as we select the technologies and communities that will form the testbeds of tomorrow. In making these choices, we must seek to balance the potentially conflicting requirements of technologists and users, while mitigating risk and leaving room to exploit new opportunities as they are discovered. Significant commitments of time, money, and talent will be required to complement the ongoing evolution of technology and applications. However, history and our view of the future suggest that these investments will be very worthwhile.

FURTHER READING
For more information on the topics covered in this chapter, see www.mkp.com/grids and also the following references:

! Books by Hafner and Lyon [256] and Lynch and Rose [354] discuss the history of the Internet and Internet technologies, respectively.

! A paper by Catlett [108] contains a comprehensive discussion of applications investigated and developed on the gigabit testbeds.

! A special issue of IEEE Annals of the History of Computing on "Time-Sharing and Interactive Computing at MIT" [334] describes the development of interactive timeshared computing during the 1960s.

! A book by Kaufmann and Smarr [304] provides a brief history of supercomputing as well as a rich discussion of supercomputing applications.
