White paper: cybersecurity and computer science
February, 2007

1 Introduction

This white paper discusses some of the issues of cybersecurity as they relate to computer science. Computer science is a large field, so this paper is of necessity limited in scope. We have chosen to emphasize what we consider the most important area, namely, the current set of assumptions surrounding how our systems are built, where the security boundary lies, and why that model is deeply flawed.

The systems we use today have evolved over the last six decades. The underlying assumptions – the foundations – of our current systems can be traced back to the earliest systems. Those systems were not networked, and security was not an issue; just getting the systems to work at all was the real problem. The common thread of the systems, then and now, is capability: storing files, communicating with humans, communicating with each other, and so on. Where security was embedded in the system, it was often built in as an afterthought, or as the result of accidental discoveries or malice. This trend continues to this day, as recent security problems in Linux and Windows have shown.

All our measurements – e.g. megabytes per second, floating point operations per second, megabytes of memory, and so on – are oriented to capability. We do not have measurements that are security-oriented and, in fact, cannot even agree on the units. There are no universally agreed-on measures of security. There are only qualitative descriptions – "pretty secure", "I don't think I've been hacked", "When did I buy a new car?", and so on.

We need to rethink our systems, with an emphasis on security orientation as opposed to the current capability orientation. As we show later in this paper, we cannot assume that any part of our computers is safe. The computers we build today consist, themselves, of multiple computers, each running an insecure operating system.
1 Prepared by Sandia National Laboratories, Livermore, California 94550; and Pacific Northwest National Laboratory. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. Approved for public release; further dissemination unlimited. SAND-2008-1708P


Figure 1: Here there be dragons

We do not include theoretical computer science among the areas needing change, even though a great deal of theoretical computer science was used in the creation of the systems we use today. In a very real sense, theoretical computer science is neutral with respect to capability-oriented vs. security-oriented systems design. The distinction is in how we use it, not in whether it can be used. Thus, this paper necessarily takes a more practical view of computer science issues, since the theoretical tools of computer science will, we expect, be as applicable to security-oriented systems as they have been to capability-oriented systems.

2 Overview

The fundamental problem, going back nearly 60 years, is that our systems are designed from the ground up with a capability orientation. We are primarily concerned with whether the system can read and write disks, drive graphics, and communicate over a network. Once these criteria are met, we start to consider security. We think of security as "what goes on outside the machine" or "what goes on outside our network/organization/nation". We have made an implicit, and incorrect, assumption that the dragons reside outside some arbitrary boundary, and that our control inside the boundary is absolute. If we were thinking in terms of a map, it might look like the one shown in Figure 1.


Figure 2: The IT view of the boundary

The security boundary
A useful notion for thinking about security is the security boundary. Outside the boundary, there be dragons. Inside the boundary, all is well. All organizations and people that use and manage computers have an internalized notion of a boundary. IT organizations and government agencies think of the security boundary as lying outside the organization – hence firewalls, air gaps, and the perpetual surprise that neither of these is sufficient. Computer users think of it as lying outside their computer – hence their perpetual surprise at the ease with which their machines are violated. We show these ideas below: first the IT view, in Figure 2, and then the user view, in Figure 3. The user view is less naive, and considerably more accurate, in its lack of trust.

Where is the boundary, really?
The views shown above are completely obsolete. They hark back to a time when there was a single computational element – the CPU – in the system, and the hardware consisted of limited-function elements that were poked and prodded by the CPU into performing their tasks, idle otherwise, incapable of autonomous action. As we show below, the device hardware has more than enough capability to behave autonomously, and in fact many devices are already running an insecure operating system.

Figure 3: The User view of the boundary: trust no one, especially the IT staff

Another faulty assumption is that, once the computer has loaded an operating system (e.g. Windows, Linux, or Mac OS), that operating system is the only occupant of the CPU. In fact, nowadays, the CPU is literally time-shared with other software, which the user – and most IT staff – do not even know is there. This other software is a completely parallel operating system, which has no security whatsoever, and which, in the vendors' plans for our future, cannot be turned off. This software is fertile ground for viruses, and some are now being found.

We show a simplified, but much more accurate, picture in Figure 4. It represents a very common laptop configuration. There are actually at least four operating systems running on this hardware. Three and one-half of them are completely insecure. All four of them process user data without any limitation, in clear text. Thus, we cannot even assume that the systems inside the computer are safe.

The operating system the user sees, as untrustworthy as it is, is at least more trustworthy than the other, unknown operating systems. Unfortunately, this primary operating system is designed to trust all the other hardware systems implicitly. It assumes, for example, that the disk will not hide user data or try to send user data out on the network without the primary operating system's knowledge – a naive assumption at best.

If our castles are built on such fragile foundations; if the dragons live inside our computers, not outside; if they freely roam our organization – is it any surprise that our systems are so easily penetrated and co-opted? It should be a surprise, rather, that we have any security at all.

Figure 4: The dragons in our systems

As mentioned, our systems are built for capability, not security. Security is added on after capability is designed and built. The boundaries come almost as an afterthought – hence the firewall. The boundaries are maintained by a reactive, finger-in-the-dike approach, composed of service patches and CERT bulletins, which is one reason that our networks and individual systems keep springing leaks. The systems were not built with security in mind; as noted, we cannot even define the terms of security.

The users and IT staff believe that the whole system is inside the boundary, dragons safely outside. In fact, the user lives with a pet dragon – the operating system, which we assume has not been compromised too completely – and real dragons, in the form of the other, completely unprotected operating systems running on other hardware inside each and every computer.

Our computers are constructed of complex, untrustworthy components. At the very lowest level, we need to solve the fundamental problem of building a secure system from insecure components. If we require some trusted components, we must determine what those are. We must find a way to determine the minimum security boundary that enables us to build a trusted computer system – one that we believe is secure. We currently do not know what that boundary is.
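The disk-trust assumption can be made concrete by contrast. A host that does not fully trust its storage can at least verify that what it reads back is what it wrote, for example by comparing cryptographic hashes. The sketch below is illustrative only (the function name and test data are our own, not part of any system described here), and it deliberately shows the limits of such checks:

```python
import hashlib
import os
import tempfile

def write_and_verify(data: bytes) -> bool:
    """Write data to a file, push it toward the device, then read it
    back and compare SHA-256 hashes. This can catch silent corruption
    or tampering on the read path, but it cannot catch a disk that
    returns faithful copies while also leaking the data elsewhere --
    which is the deeper problem described above."""
    expected = hashlib.sha256(data).hexdigest()
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ask the OS to flush past its own cache
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        return actual == expected
    finally:
        os.remove(path)
```

Note that even a passing check only narrows the trust placed in the device; it does not establish trust, since the firmware mediates every byte of the exchange.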

We need to rethink our systems from the ground up
Our computer systems were designed in an era in which there were no networks; where memory and disk were scarce; where computer cycles were precious. Nowadays, we are drowning in an abundance of memory, disk, and computing cycles. We are certainly not living in a secure world, however. Our networks and computers are providing opportunity for criminals of all types: organized criminals, from all countries, now have a major presence on the Internet.

This cybersecurity effort has to offer more than patches, bandaids, and improved procedures. We are going to need to develop ways of describing security that are observable, quantifiable, measurable, and reproducible – in short, we must apply the mindset we have developed for capability to the needs of security. We have no such foundation on which to build at present. Absent this development, we will continue trying to keep the dike from failing; at some point, we are going to run out of fingers.
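To give one hedged illustration of what "observable, quantifiable, reproducible" could mean in practice, even a crude countable proxy for attack surface – say, the number of setuid-root executables under a directory tree – is already more reproducible than "pretty secure". The metric below is a toy of our own devising, not one proposed by this paper; choosing *what* to count is precisely the open problem:

```python
import os
import stat

def setuid_root_count(root: str) -> int:
    """Toy attack-surface metric: count regular files under `root`
    that have the setuid bit set and are owned by uid 0. The number
    is observable and reproducible, unlike qualitative descriptions
    -- though it measures only one narrow slice of exposure."""
    count = 0
    for dirpath, _, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)  # lstat: do not follow symlinks
            except OSError:
                continue  # unreadable entries simply do not count
            if (stat.S_ISREG(st.st_mode)
                    and st.st_mode & stat.S_ISUID
                    and st.st_uid == 0):
                count += 1
    return count
```

Tracking such a number over time would at least make a change in exposure visible – which is the capability-style discipline the text argues security currently lacks.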

