Security Policies

Ross Anderson <[email protected]>
University of Cambridge Computer Laboratory

Frank Stajano <[email protected]>, <[email protected]>
University of Cambridge Computer Laboratory / AT&T Laboratories Cambridge

Jong-Hyeon Lee <[email protected]>
Filonet Corporation

Abstract
A security policy is a high-level specification of the security properties that a given system should possess. It is a means for designers, domain experts and implementers to communicate with each other, and a blueprint that drives a project from design through implementation and validation. We offer a survey of the most significant security policy models in the literature, showing how "security" may mean very different things in different contexts, and we review some of the mechanisms typically used to implement a given security policy.
Contents

1 What is a Security Policy?
  1.1 Definition
  1.2 Origins

2 The Bell-LaPadula Policy Model
  2.1 Classifications and clearances
  2.2 Automatic enforcement of information flow control
  2.3 Formalising the policy
  2.4 Tranquility
  2.5 Alternative formulations

3 Examples of Multilevel Secure Systems
  3.1 SCOMP
  3.2 Blacker
  3.3 MLS Unix, CMWs and Trusted Windowing
  3.4 The NRL Pump
  3.5 Logistics systems
  3.6 Purple Penelope
  3.7 Future MLS systems
  3.8 What Goes Wrong
    3.8.1 Technical issues
    3.8.2 Political and economic issues

4 The Biba Integrity Model

5 The Clark-Wilson Model

6 The Chinese Wall Model

7 The BMA Policy

8 Jikzi

9 The Resurrecting Duckling

10 Access Control
  10.1 ACLs
  10.2 Capabilities
  10.3 Roles
  10.4 Security state

11 Beyond Access Control
  11.1 Key management policies
  11.2 Corporate email

12 Automated Compliance Verification

13 A Methodological Note

14 Conclusions

15 Acknowledgements
1 What is a Security Policy?
Security engineering is about building systems to remain dependable in the face of malice as well as error and mischance. As a discipline, it focuses on the tools, processes and methods needed to design, implement and test complete systems, and to adapt existing systems as their environment evolves.

In most engineering disciplines, it is useful to clarify the requirements carefully before embarking on a project. Such a comment may sound so obvious as to border on the useless, but it is of special relevance to computer security. First, because it is all too often ignored [9]: diving straight into the design of crypto protocols is more fascinating for the technically minded. Second, because security is a holistic property — a quality of the system taken as a whole — which modular decomposition is not sufficient to guarantee. (We shall see in section 3.8.1 below that connecting secure components together does not necessarily yield a secure system.) It is thus important to understand clearly the security properties that a system should possess, and state them explicitly at the start of its development. As with other aspects of the specification, this will be useful at all stages of the project, from design and development through to testing, validation and maintenance.

A top down representation of the protection of a computer system might consist of the three layers shown in figure 1.
POLICY
MIDDLEWARE
MECHANISMS
Figure 1: Layers of protection in a computer system
• At the highest level of abstraction, the whole system is represented by a concise and formalised set of goals and requirements: the policy.

• At the bottom level, the system is composed of mechanisms such as the computing hardware, the cryptographic primitives, tamper resistant enclosures and seals, as well as procedural items such as biometric scanning of individuals (iris, fingerprint, voiceprint...) for purposes of authentication.

• Between those two extremes there will be some middleware that connects together the available mechanisms in order to build the system that conforms to the policy. This may include access control structures — whether or not enforced by the operating system — and cryptographic protocols.

The security policy is a set of high-level documents that state precisely what goals the protection mechanisms are to achieve. It is driven by our understanding of threats, and in turn drives our system design. Typical statements in a policy describe which subjects (e.g. users or processes) may access which objects (e.g. files or peripheral devices) and under which circumstances. It plays the same role in specifying the system's protection properties, and in evaluating whether they have been met, as the system specification does for general functionality. Indeed, a security policy may be part of a system specification, and like the specification its primary function is to communicate.
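To make this concrete, here is a minimal sketch (our illustration; the subjects, objects and actions are invented) of policy statements expressed as explicit subject/object/action rules that a reference monitor could check:

```python
# A minimal sketch of policy statements as subject/object/action rules.
# All names here are hypothetical illustrations, not from the paper.
POLICY = {
    ("alice", "payroll.db", "read"),
    ("alice", "payroll.db", "write"),
    ("bob",   "payroll.db", "read"),
}

def allowed(subject: str, obj: str, action: str) -> bool:
    """Ask the policy whether this access is permitted."""
    return (subject, obj, action) in POLICY

assert allowed("bob", "payroll.db", "read")
assert not allowed("bob", "payroll.db", "write")
```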
1.1 Definition
Many organisations use the phrase security policy to mean a collection of content-free statements. Here is a simple example:

Megacorp Inc security policy

1. This policy is approved by Management.

2. All staff shall obey this security policy.

3. Data shall be available only to those with a "need-to-know".
4. All breaches of this policy shall be reported at once to Security.
This sort of thing is common but is of little value to the engineer.
1. It dodges the central issue, namely 'Who determines "need-to-know" and how?'

2. It mixes statements at a number of different levels. Organizational approval of a policy should logically not be part of the policy itself (or the resulting self-reference makes it hard to express it formally).

3. The protection mechanism is implied rather than explicit: 'staff shall obey' — but what does this mean they actually have to do? Must the obedience be enforced by the system, or are users 'on their honour'?

4. It's unclear how breaches are to be detected, and who has a duty to report them.

Because the term 'security policy' is so widely abused to mean a collection of platitudes, there are three more precise terms that have come into use to describe the specification of a system's protection requirements.

A security policy model is a succinct statement of the protection properties that a system, or generic type of system, must have. Its key points can typically be written down in a page or less. It is the document in which the protection goals of the system are agreed with an entire community, or with the top management of a customer. It may also be the basis of formal mathematical analysis.

A security target is a more detailed description of the protection mechanisms that a specific implementation provides, and of how they relate to a list of control objectives (some but not all of which are typically derived from the policy model).

A protection profile is like a security target but expressed in an implementation-independent way to enable comparable evaluations across products and versions. This can involve the use of a semi-formal language, or at least of suitable security jargon. It is a requirement for products that are to be evaluated under the Common Criteria [61] (a framework used by many governments to facilitate security evaluations of defence information systems, and which we'll discuss below). The protection profile forms the basis for testing and evaluation of a product.

When we don't have to be so precise, we may use the phrase security policy to refer to any or all of the above. We will never use the term to refer to a collection of platitudes. We will also avoid a third meaning of the phrase – a list of specific configuration settings for some protection product. We will refer to that as configuration management in what follows.
1.2 Origins
Sometimes we are confronted with a completely new application and have to design a security policy model from scratch. More commonly, there already exists a model; we just have to choose the right one, and develop it into a protection profile and/or a security target. Neither of these tasks is easy. Indeed one of the main purposes of this chapter is to provide a number of security policy models, describe them in the context of real systems, and examine the engineering mechanisms (and associated
constraints) that a security target can use to meet them. Let us then introduce, in chronological order, the three major waves of security policy models that have been presented in the open literature. We shall review them individually in greater detail in subsequent sections.

Historically, the concept of a security policy model came from the military sector. The first one to appear, Bell-LaPadula [14], was introduced
in 1973 in response to US Air Force concerns over the confidentiality of data in time-sharing mainframe systems. This simple yet influential model is based on restricting information flow between labelled clearance levels such as "Confidential" and "Top Secret". Its conceptual framework also forms the basis of other derived models such as Biba [17], which deals with integrity instead of confidentiality.

A second wave of policy models emerged in the 1980's from formalising well-established practices in the business sector. An abstraction of the double entry bookkeeping systems used in accounting and banking gave rise in 1987 to the Clark-Wilson security policy model [24]. Then Brewer and Nash, in 1989, introduced the Chinese Wall model [21] to represent the internal confidentiality constraints of a professional firm whose partners may be serving competing customers, and must avoid conflicts of interest.

A third wave came from the development of policy models for applications in various other fields — an activity that our group at Cambridge has pursued extensively in recent years. Case studies include the BMA (British Medical Association) security policy model [10], concerned with the confidentiality and accessibility of a patient's medical records; Jikzi [11], which describes the requirements of electronic publishing; and the Resurrecting Duckling [68], for secure transient association among, for example, wireless devices.

The lesson that can be drawn by observing such a wide spectrum of policy models is that security means radically different things in different applications. But whether we develop the system's security target using an established policy model or draw up a new model from scratch, a thorough understanding of the application environment and of established work patterns is essential, both to decide on a suitable model and to check that no threats have been overlooked. Consultation with domain experts is highly advisable. The wisdom provided by experience has few worthy substitutes, and a careful study of the history of past attacks on similar systems is the best way to turn the ingenuity of yesterday's crooks to the advantage of today's designers [9].
2 The Bell-LaPadula Policy Model
By the early 1970's, people had realised that the protection offered by commercial operating systems was poor, and was not getting any better. As soon as one operating system bug was fixed, some other vulnerability would be discovered. Even unskilled users would discover loopholes and use them opportunistically.

A study by James Anderson led the US government to conclude that a secure system should do one or two things well, and that these protection properties should be enforced by mechanisms that were simple enough to verify and that would change only rarely [2]. It introduced the concept of a reference monitor – a component of the operating system that would mediate access control decisions and be small enough to be subject to analysis and tests, the completeness of which could be assured. In modern parlance, such components – together with their associated operating procedures – make up the Trusted Computing Base (TCB). More formally, the TCB is defined as the set of components (hardware, software, human, etc.) whose correct functioning is sufficient to ensure that the security policy is enforced — or, more vividly, whose failure could cause a breach
TOP SECRET
SECRET
CONFIDENTIAL
OPEN

Figure 2: A classification hierarchy.
of the security policy. The goal was to make the security policy so simple that the TCB could be amenable to careful verification. But what are these core security properties that should be enforced above all others?
2.1 Classifications and clearances
The Second World War, and the Cold War that followed, led NATO governments to move to a common protective marking scheme for labelling the sensitivity of documents. Classifications are labels such as Unclassified, Confidential, Secret and Top Secret, as in figure 2. The original idea was that information whose compromise could cost lives was marked 'Secret' while information whose compromise could cost many lives was 'Top Secret'. Government employees have clearances depending on the care with which they've been vetted. The details change from time to time but a 'Secret' clearance may involve checking fingerprint files, while 'Top Secret' can also involve background checks for the previous five to fifteen years' employment [71].

The access control policy was simple: an official could read a document only if his clearance was at least as high as the document's classification. So an official cleared to 'Top Secret' could read a 'Secret' document, but not vice versa. The effect is that information may only flow upwards, from confidential to secret to top secret, but it may never flow downwards unless an authorized person (known as a trusted subject) takes a deliberate decision to declassify it.

There are also document handling rules; thus a 'Confidential' document might be kept in a locked filing cabinet in an ordinary government office, while higher levels may require safes of an approved type, guarded rooms with control over photocopiers, and so on. (The NSA security manual [58] gives a summary of the procedures used with 'Top Secret' intelligence data.)

The system rapidly became more complicated. The damage criteria for classifying documents were expanded from possible military consequences to economic harm and political embarrassment. The UK has an extra level, 'Restricted', between 'Unclassified' and 'Confidential'; the USA used to have this too but abolished it after the Freedom of Information Act was introduced. America now has two more specific markings: 'For Official Use Only' (FOUO) refers to unclassified data that can't be released under FOIA, while 'Unclassified but Sensitive' includes FOUO plus material that might be released in response to a FOIA request. In the UK, 'Restricted' information is in practice shared freely, but marking low-grade government documents 'Restricted' allows journalists and others involved in leaks to be prosecuted. (Its other main practical effect is that when an unclassified US document is sent across the Atlantic, it automatically becomes 'Restricted' in the UK and then 'Confidential' when
shipped back to the USA. American military system people complain that the UK policy breaks the US classification scheme; Brits complain about an incompatible US refinement of the agreed system.)

There is also a system of codewords whereby information, especially at Secret and above, can be further restricted. For example, information which might reveal intelligence sources or methods — such as the identities of agents or decrypts of foreign government traffic — is typically classified 'Top Secret Special Compartmented Intelligence' or TS/SCI, which means that so-called need to know restrictions are imposed as well, with one or more codewords attached to a file. Some of the codewords relate to a particular military operation or intelligence source and are available only to a group of named users. To read a document, a user must have all the codewords that are attached to it. A classification level, plus a set of codewords, makes up a security label or (if there's at least one codeword) a compartment. Section 2.3 below offers a slightly more formal description, while a more detailed explanation can be found in [8].

Allowing upward only flow of information also models wiretapping. In the old days, tapping someone's telephone meant adding a physical tap to the wire. Nowadays, it's all done in the telephone exchange software and the effect is somewhat like making the target calls into conference calls with an extra participant. The usual security requirement is that the target of investigation should not know he is being wiretapped. What's more, a phone can be tapped by multiple principals at different levels of clearance. If the FBI is investigating a complaint that a local police force conducted an unlawful wiretap on a politician in order to blackmail him, they will also tap the line and should be able to see the police tap (if any) without the police detecting their presence.

Now that wiretaps are implemented as conference calls with a silent third party, care has to be taken to ensure that the extra charge for the conference call facility goes to the wiretapper, not to the target. (The addition of ever new features in switching software makes the invisible implementation of wiretapping ever more complex.) Thus wiretapping requires almost exactly the same information flow policy as does traditional classified data: High can see Low data, but Low can't tell whether High is reading any and if so what.
2.2 Automatic enforcement of information flow control

The next problem was how to enforce this information flow policy in a computer system. The seminal work here was the Bell-LaPadula (BLP) model of computer security, formulated in 1973 [14]. It is also known as multilevel security, and systems that implement it are often called multilevel secure or MLS systems. Their principal feature is that information can never flow downwards.

The Bell-LaPadula model enforces two properties:

• The simple security property: no process may read data at a higher level. This is also known as no read up (NRU);
• The *-property: no process may write data to a lower level. This is also known as no write down (NWD).

The *-property was Bell and LaPadula's critical innovation. It was driven by the fear of attacks using malicious code. An uncleared user might write a Trojan and leave it around where a system administrator
cleared to 'Secret' might execute it; it could then copy itself into the 'Secret' part of the system and write secret data into unclassified objects that the attacker would later retrieve. It's also quite possible that an enemy agent could get a job at a commercial software house and embed some code in a product which would look for secret documents to copy. If it could then write them down to where its creator could read it, the security policy would have been violated. Information might also be leaked as a result of a bug, if applications could write down.

Vulnerabilities such as malicious and buggy code are assumed to be given. It is therefore necessary for the system to enforce the security policy independently of user actions (and by extension, of the actions taken by programs run by users). So we must prevent programs running at 'Secret' from writing to files at 'Unclassified', or more generally prevent any process at High from signalling to any object at Low. In general, when systems are built to enforce a security policy independently of user actions, they are described as having mandatory access control, as opposed to the discretionary access control in systems like Unix where users can take their own access decisions about their files. (We won't use these phrases much as they traditionally refer only to BLP-type policies and don't include many other policies whose rules are just as mandatory.)

It should also be noted that another significant contribution of the work of Bell and LaPadula is not at the level of the security policy model itself but at the meta-level of talking about security policies in a formal way. Their presentation is based on a simple mathematical formalism that captures the security properties of interest and allows one to derive proofs about the security, or insecurity, of a given system.
2.3 Formalising the policy
Each subject and object in the system is assigned a security label or protective marking, which consists of a number of sub-markings, of which the most important is a classification level (e.g. Top Secret, Confidential etc) and a set of further sub-markings (categories), corresponding to the codewords or compartments described in section 2.1. A binary relation called "dominates" is then defined between any two security labels a and b in the following way:

∀a, b ∈ labels: a dominates b ⇔ level(a) ≥ level(b) ∧ categories(a) ⊇ categories(b)

This relation is a partial order, since it is reflexive, antisymmetric and transitive. Given an appropriate set of security labels over which the two auxiliary functions join() and meet() can be defined, it forms a mathematical structure known as a lattice¹ [27], an example of which is shown in figure 3.

¹ The operators join() and meet() each take two elements from the set of labels and return another one, the least upper bound or the greatest lower bound respectively. Note that not all sets of labels give rise to a lattice under dominates: there may be sets of labels where one pair of elements does not have a least upper bound (or a greatest lower bound). For an example, remove the node (Top Secret, {Crypto, Foreign}) from figure 3.
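To make the footnote concrete, here is a sketch of join() and meet(), assuming levels are totally ordered and a label is a (level, categories) pair; this is our illustration, not from the original text:

```python
# Sketch of join() and meet() over (level, categories) labels.
# Levels are assumed totally ordered; the labels are those of figure 3.
LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]

def join(a, b):
    """Least upper bound: the higher level, the union of the categories."""
    return (max(a[0], b[0], key=LEVELS.index), a[1] | b[1])

def meet(a, b):
    """Greatest lower bound: the lower level, the intersection."""
    return (min(a[0], b[0], key=LEVELS.index), a[1] & b[1])

# The join of the two incomparable middle labels of figure 3 is the top
# node; remove that node from the label set and no least upper bound
# remains, which is the footnote's example of a non-lattice.
assert (join(("TOP SECRET", {"CRYPTO"}), ("SECRET", {"CRYPTO", "FOREIGN"}))
        == ("TOP SECRET", {"CRYPTO", "FOREIGN"}))
```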
(TOP SECRET, {CRYPTO, FOREIGN})

(TOP SECRET, {CRYPTO})      (SECRET, {CRYPTO, FOREIGN})

(TOP SECRET, {})            (SECRET, {CRYPTO})

(SECRET, {})

(UNCLASSIFIED, {})

Figure 3: The "dominates" relation on a lattice of security labels.
Someone with a 'Top Secret' clearance isn't allowed to read a document marked (Secret, {Crypto}), despite it being at a lower level; he also needs a 'Crypto' clearance, which will place him at (Top Secret, {Crypto}), and this dominates the document's classification. (Note that, to reduce clutter on such diagrams, it is customary to omit any arrows that can be deduced by transitivity.)

A predicate (i.e. a boolean-valued function) called "allow()", taking as arguments a subject, an object and an action, may then be defined in this framework. Stating a particular security policy is then equivalent to defining this function, which is in fact a complete formal specification for the behaviour of the reference monitor. The two rules of the BLP model can then be expressed as follows.

1. No read up (simple security property):

∀s ∈ subjects, o ∈ objects: allow(s, o, read) ⇔ label(s) dominates label(o)

2. No write down (star property):

∀s ∈ subjects, o ∈ objects: allow(s, o, write) ⇔ label(o) dominates label(s)

A state machine abstraction makes it relatively straightforward to verify claims about the protection provided by a design. Starting from a secure state, and performing only state transitions allowed by the rules of the chosen policy model, one is guaranteed to visit only secure states for the system. This is true independently of the particular policy, as long as the policy itself is not inconsistent. As we said, this idea of how to model a security policy formally was almost as important as the introduction of the BLP policy model itself.

This simple formal description omits some elaborations such as trusted subjects – principals who are allowed to declassify files. We'll discuss alternative formulations, and engineering issues, below.
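The two rules above translate almost directly into code. Here is a minimal sketch of such a reference monitor (our own illustration; the labels are hypothetical):

```python
# Minimal sketch of a BLP reference monitor.  A label is a pair
# (level, categories); levels are mapped to integers for comparison.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """a dominates b iff level(a) >= level(b) and categories(a) ⊇ categories(b)."""
    return LEVELS[a[0]] >= LEVELS[b[0]] and a[1] >= b[1]

def allow(subject_label, object_label, action):
    """The two BLP rules: no read up, no write down."""
    if action == "read":
        return dominates(subject_label, object_label)
    if action == "write":
        return dominates(object_label, subject_label)
    return False

analyst = ("TOP SECRET", {"CRYPTO"})
report  = ("SECRET",     {"CRYPTO"})
assert allow(analyst, report, "read")       # read down is permitted
assert not allow(analyst, report, "write")  # write down is forbidden
```

Note how the write rule simply reverses the direction of dominates; this is what stops the Trojan of section 2.2 from copying 'Secret' data into an 'Unclassified' object.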
2.4 Tranquility
The introduction of BLP caused some excitement: here was a straightforward security policy that appeared to be clear to the intuitive understanding yet still allowed people to prove theorems. But McLean [54] showed that the BLP rules were not in themselves enough. He introduced the conceptual construction of System Z, a system that suffered from blatant disclosure problems even though it officially complied with the letter of the BLP model. In System Z, a user can ask the system administrator to temporarily declassify any file from High to Low. So Low users can legitimately read any High file.

Bell's argument was that System Z cheats by doing something the model doesn't allow (changing labels isn't a valid operation on the state), and McLean's argument was that BLP didn't explicitly tell him so. The issue is dealt with by introducing a tranquility property. The strong tranquility property says that security labels never change during system operation, while the weak tranquility property says that labels never change in such a way as to violate a defined security policy.

The motivation for the weak property is that in a real system we often want to observe the principle of least privilege and start off a process at the uncleared level, even if the owner of the process were cleared to 'Top Secret'. If she then accesses a confidential email, her session is automatically upgraded to 'Confidential'; and in general, her process is upgraded each time it accesses data at a higher level (this is known as the high water mark principle). Such upgrades would not normally break a sensible security policy.

The practical implication of this is that a process acquires the security label or labels of every file that it reads, and these become the default label set of every file that it writes. So a process that has read files at 'Secret' and 'Crypto' will thereafter create files marked (at least) 'Secret Crypto'. This will include temporary copies made of other files. If it then reads a file at 'Top Secret Daffodil' then all files it creates after that will be labelled 'Top Secret Crypto Daffodil', and it will not be able to write to any temporary files at 'Secret Crypto'. The effect this has on applications is that most application software needs to be rewritten (or at least significantly modified) to run on MLS platforms.

Finally it's worth noting that even with this refinement, BLP still doesn't deal with the creation or destruction of subjects or objects, which is one of the hard problems of building a real MLS system.
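The label floating just described can be sketched as follows; this is an illustration of the high water mark idea under weak tranquility, not production code:

```python
# Sketch of high water mark label floating under weak tranquility.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

class Process:
    def __init__(self):
        # Least privilege: start at the uncleared level with no categories.
        self.level = "UNCLASSIFIED"
        self.categories = set()

    def read(self, file_level, file_categories):
        # The process label floats up to dominate everything it has read.
        if LEVELS[file_level] > LEVELS[self.level]:
            self.level = file_level
        self.categories |= file_categories

    def label_for_writes(self):
        # Everything the process writes now carries (at least) this label.
        return (self.level, self.categories)

p = Process()
p.read("SECRET", {"CRYPTO"})
p.read("TOP SECRET", {"DAFFODIL"})
assert p.label_for_writes() == ("TOP SECRET", {"CRYPTO", "DAFFODIL"})
```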
2.5 Alternative formulations
System Z was one of several criticisms questioning the adequacy of the BLP model: this prompted research into other ways to describe multilevel secure systems, and by now there are a number of competing models, some of which have been used to build real systems. We will now take a brief tour of the evolution of multilevel models, and without loss of generality we shall limit the discussion to two security levels, High and Low.

The first multilevel security policy was a version of high water mark written in 1967–8 for the ADEPT-50, a mandatory access control system developed for the IBM S/360 mainframe [75]. This used triples of level, compartment and group, with the groups being files, users, terminals and jobs. As programs (rather than processes) were subjects, it was vulnerable to Trojan horse compromises, and it was more complex than need be.
Nonetheless, it laid the foundation for BLP, and also led to the current IBM S/390 mainframe hardware security architecture.

The second was the lattice model that we mentioned above. A primitive version of this was incorporated into the Pentagon's World Wide Military Command and Control System (WWMCCS) in the late 1960s, but this did not have the *-property. The realization that a fielded, critical system handling Top Secret data was vulnerable to attack by Trojans caused some consternation [66]. Three improved lattice models were produced in the early 1970s: by Schell, Downey and Popek of the US Air Force in 1972 [67]; a Cambridge PhD thesis by Fenton, which managed labels using a matrix, in 1973 [36]; and by Walter, Ogden, Rounds, Bradshaw, Ames and Shumway of Case Western University, who also worked out a lot of the problems with file and directory attributes, which they fed to Bell and LaPadula [73, 74]². Finally, the lattice model was systematized and popularized by Denning from 1976 [28].

² Walter and his colleagues deserve more credit than history has given them. They had the main results first [73] but Bell and LaPadula had their work heavily promoted by the US Air Force. Fenton has also been largely ignored, not being an American.

Noninterference was introduced by Goguen and Meseguer in 1982 [41]. In a system with this property, High's actions have no effect on what Low can see. Nondeducibility is less restrictive and was introduced by Sutherland in 1986 [70]. Here the idea is to try to prove that Low cannot deduce anything with 100% certainty about High's input. Low users can see High actions, just not understand them; a more formal definition is that any legal string of high level inputs is compatible with every string of low level events. So for every trace Low can see, there is a similar trace that didn't involve High input. But different low-level event streams may require changes to high-level outputs or reordering of high-level/low-level event sequences.

The motive for nondeducibility is to find a model that can deal with applications such as a LAN on which there are machines at both Low and High, with the High machines encrypting their LAN traffic. (Quite a lot else is needed to do this right, from padding the High traffic with nulls so that Low users can't do traffic analysis, to ensuring that the packets are the same size — see [65] for an early example of such a system.)

Nondeducibility has historical importance since it was the first nondeterministic version of Goguen and Meseguer's ideas. But it is hopelessly weak. There is nothing to stop Low making deductions about High input with 99% certainty. There are also many problems when we are trying to prove results about databases, and have to take into account any information that can be inferred from data structures (such as from partial views of data with redundancy) as well as considering the traces of executing programs.

Improved models include Generalized Noninterference and Restrictiveness. The former is the requirement that if one alters a high-level input event in a legal sequence of system events, the resulting sequence can be made legal by, at most, altering subsequent high-level output events.
The latter adds a further restriction on the part of the trace where the alteration of the high-level outputs can take place. This is needed for technical reasons, to ensure that two systems satisfying the restrictiveness property can be composed into a third which also does. See [53], which explains these issues.

The Harrison-Ruzzo-Ullman model [43] tackles the problem of how
to deal with the creation and deletion of files, an issue on which BLP is silent. It operates on access matrices and verifies whether there is a sequence of instructions that causes an access right to leak to somewhere it was initially not present. This is more expressive than BLP, but more complex and thus less tractable as an aid to verification.

Woodward proposed a Compartmented Mode Workstation (CMW) policy, which attempted to model the classification of information using floating labels, as opposed to the fixed labels associated with BLP [42, 78]. It was ultimately unsuccessful, because information labels tend to float up too far too fast (if the implementation is done correctly), or they float up more slowly (but don't block all the opportunities for unapproved information flow). However, CMW ideas have led to real products – albeit products that provide separation more than information sharing.

The type enforcement model, due to Boebert and Kain [20] and later extended by Badger and others [13], assigns each subject to a domain and each object to a type. There is a domain definition table (DDT) which acts as an access control matrix between domains and types; a sketch appears at the end of this section. This is a natural model in the Unix setting as types can often be mapped to directory structures. It is more general than policies such as BLP, as it starts to deal with integrity as well as confidentiality concerns.

Finally, the policy model getting the most attention at present from researchers is role-based access control (RBAC), introduced by Ferraiolo and Kuhn [37]. This sets out to provide a more general framework for mandatory access control than BLP, in which access decisions don't depend on users' names but on the functions which they are currently performing within the organisation. Transactions which may be performed by holders of a given role are specified, as are mechanisms for granting membership of a role (including delegation). Roles, or groups, had for years been the mechanism used in practice in organisations such as banks to manage access control; the RBAC model starts to formalise this. It can deal with integrity issues as well as confidentiality, by allowing role membership (and thus access rights) to be revised when certain programs are invoked. Thus, for example, a process that calls untrusted software (which has, for example, been downloaded from the Net) might lose the role membership required to write to sensitive system files. We'll discuss this kind of engineering problem further below.

We won't go into the details of how to express all these properties formally. We will remark though that they differ in a number of important ways. Some are more expressive than others, and some are better at handling properties such as composability — whether a system built out of two components that are secure under some model is itself secure. We shall discuss this in section 3.8.1 below; for now, we will merely remark that two nondeducibility secure systems can compose into one that is not [52]. Even the more restrictive noninterference can be shown not to compose.
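As promised above, here is a sketch of the type-enforcement idea: a hypothetical domain definition table and its lookup. The domains, types and access modes are invented for illustration:

```python
# Sketch of a type-enforcement check: the domain definition table (DDT)
# is an access control matrix between domains and types.  All entries
# here are hypothetical.
DDT = {
    ("mail_daemon", "mail_spool"): {"read", "write"},
    ("mail_daemon", "system_log"): {"append"},
    ("user_shell",  "mail_spool"): {"read"},
}

def ddt_allows(domain: str, obj_type: str, mode: str) -> bool:
    return mode in DDT.get((domain, obj_type), set())

assert ddt_allows("mail_daemon", "mail_spool", "write")
assert not ddt_allows("user_shell", "mail_spool", "write")
```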
3 Examples of Multilevel Secure Systems

The enormous influence of BLP and its concept of multilevel security is perhaps best conveyed by a more detailed look at the variety of actual systems that have been built according to its principles.

Following some research products in the late 1970's (such as KSOS [16], a kernelised secure version of Unix), products that implemented multilevel
security policies started arriving in dribs and drabs in the early 1980's. By about 1988, a number of companies started implementing MLS versions of their operating systems. MLS concepts were extended to all sorts of products.
3.1 SCOMP
One of the most important products was the secure communications processor (SCOMP), a Honeywell derivative of Multics launched in 1983 [39]. This was a no-expense-spared implementation of what the U.S. Department of Defense believed it wanted: it had formally verified hardware and software, with a minimal kernel and four rings of protection (rather than Multics' seven) to keep things simple. Its operating system, STOP, used these rings to maintain up to 32 separate compartments, and to allow appropriate one-way information flows between them.

SCOMP was used in applications such as military mail guards. These are specialised firewalls that allowed mail to pass from Low to High but not vice versa [29]. (In general, a device that does this is known as a data diode.) SCOMP's successor, XTS-300, supports C2G, the Command and Control Guard. This is used in a Pentagon system whose function is to plan U.S. troop movements and associated logistics. Overall military plans are developed at a high classification level, and then distributed at the appropriate times as orders to lower levels for implementation. (The issue of how high information is deliberately downgraded raises a number of issues. In this case, the guard examines the content of each record before deciding whether to release it.)

SCOMP has had wide influence – for example, in the four rings of protection used in the Intel main processor line – but its most significant contribution to the security community was to serve as a model for the U.S. Trusted Computer Systems Evaluation Criteria (the Orange Book) [72]. This was the first systematic set of standards for secure computer systems, being introduced in 1985 and finally retired in December 2000. Although it has since been replaced by the Common Criteria, the Orange Book was enormously influential, and not just in America. Countries such as Britain, Germany, and Canada based their own national standards on it, and these national standards were finally subsumed into the Common Criteria [61].

The Orange Book allowed systems to be evaluated at a number of levels, with A1 being the highest, and moving downwards through B3, B2, B1 and C2 to C1. SCOMP was the first system to be rated A1. It was also extensively documented in the open literature. Being first, and being fairly public, it set the standard for the next generation of military systems. This standard has rarely been met since; in fact, the XTS-300 is only evaluated to B3 (the formal proofs of correctness required for an A1 evaluation were dropped).

3.2 Blacker
Blacker was a series of encryption devices designed to incorporate MLS technology [15]. Previously, encryption devices were built with separate processors for the ciphertext, or Black, end and the cleartext, or Red, end. There are various possible failures that can be prevented if one can coordinate the Red and Black processing. One can also provide greater operational flexibility, as the device is not limited to separating two logical
networks, but can provide encryption and integrity assurance selectively, and interact in useful ways with routers. However, a high level of assurance is required that the Red data won't leak out via the Black. (For an actual example of such a leak, see [79].)

Blacker entered service in 1989, and the main lesson learned from it was the extreme difficulty of accommodating administrative traffic within a model of classification levels [76]. As late as 1994, it was the only communications security device with an A1 evaluation. So like SCOMP it influenced later systems. It was not widely used though, and its successor (the Motorola Network Encryption System), which is still in use, has only a B2 evaluation.
3.3 MLS Unix, CMWs and Trusted Windowing
Most of the available MLS systems are modified versions of Unix, and they started to appear in the late 1980's. An example is AT&T's System V/MLS [1]. This added security levels and labels, initially by using some of the bits in the group id record and later by using this to point to a more elaborate structure. This enabled MLS properties to be introduced with minimal changes to the system kernel. Other products of this kind included SecureWare (and its derivatives, such as SCO and HP VirtualVault), and Addamax.

Compartmented Mode Workstations (CMWs) allow data at different levels to be viewed and modified at the same time by a human operator, and ensure that labels attached to the information are updated appropriately. The initial demand came from the intelligence community, whose analysts may have access to 'Top Secret' data, such as decrypts and agent reports, and produce reports at the 'Secret' level for users such as political leaders and officers in the field. As these reports are vulnerable to capture, they must not contain any information that would compromise intelligence sources and methods. CMWs allow an analyst to view the 'Top Secret' data in one window, compose a report in another, and have mechanisms to prevent the accidental copying of the former into the latter (so cut-and-paste operations work from 'Secret' to 'Top Secret' but not vice versa). CMWs have proved useful in operations, logistics and drug enforcement as well [44].

For the engineering issues involved in doing mandatory access control in windowing systems, see [33, 34], which describe a prototype for Trusted X, a system implementing MLS but not information labelling. It runs one instance of X Windows per sensitivity level, and has a small amount of trusted code that allows users to cut and paste from a lower level to a higher one. For the specific architectural issues with Sun's CMW product, see [35].

3.4 The NRL Pump

It was soon realised that simple mail guards and crypto boxes were too restrictive, as many more internet services were developed besides mail.
Traditional MLS mechanisms (such as blind write-ups and periodic read-downs) are inefficient for real-time services. The US Naval Research Laboratory therefore developed the Pump – a one-way data transfer device using buffering and randomization to allow one-way information flow while limiting backward leakage [45, 47]. The attraction of this approach is that one can build MLS systems by using pumps to connect separate systems at different security levels.
HIGH
PUMP
LOW
Figure 4: The pump.

As these systems don't process data at more than one level, they can be built from cheap commercial-off-the-shelf (COTS) components [46]. As the cost of hardware falls, this becomes the preferred option where it's possible.

The Australian government has developed a product called Starlight, which uses pump-type technology married with a keyboard switch to provide an MLS-type windowing system (albeit without any visible labels), using trusted hardware to connect the keyboard and mouse with High and Low systems [3]. There is no trusted software. It has been integrated with the NRL Pump [46]. A number of semi-commercial data diode products have also been introduced.
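The pump's essential mechanism, buffered one-way data flow with acknowledgements whose timing is randomized to limit what High's behaviour can leak back to Low, might be sketched as follows. This is a drastic simplification of [45, 47]; in particular, the real pump adapts the acknowledgement delay to a moving average of High's service rate, which we omit here:

```python
import queue
import random
import time

# Drastically simplified sketch of the NRL Pump: data flows Low -> High
# through a buffer; the only signal back to Low is an ACK whose timing
# is randomized, throttling the backward covert timing channel.
class Pump:
    def __init__(self, capacity: int = 64, mean_ack_delay: float = 0.01):
        self.buffer = queue.Queue(maxsize=capacity)
        self.mean_ack_delay = mean_ack_delay  # fixed here; adaptive in reality

    def send_from_low(self, message) -> None:
        self.buffer.put(message)  # blocks only when the buffer is full
        # ACK after a randomized delay rather than at High's actual pace.
        time.sleep(random.expovariate(1.0 / self.mean_ack_delay))

    def receive_at_high(self):
        return self.buffer.get()
```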
3.5 Logistics systems
Military stores, like government documents, can have different classification levels. Some signals intelligence equipment is 'Top Secret', while things like jet fuel and bootlaces are not; but even such simple commodities may become 'Secret' when their quantities or movements might leak information about tactical intentions. There are also some peculiarities: for example, an inertial navigation system classified 'Confidential' in the peacetime inventory might contain a laser gyro platform classified 'Secret'. The systems needed to manage all this seem to be hard to build, as MLS logistics projects in both the USA and UK have been expensive disasters.

In the UK, the Royal Air Force's Logistics Information Technology System (LITS) was a 10 year (1989–99), £500m project to provide a single stores management system for the RAF's 80 bases [57]. It was designed to operate on two levels: 'Restricted' for the jet fuel and boot polish, and 'Secret' for special stores such as nuclear bombs. It was initially implemented as two separate database systems connected by a pump to enforce the MLS property. The project became a classic tale of escalating costs driven by creeping requirements changes. One of these changes was the easing of classification rules with the end of the Cold War. As a result, it was found that almost all the 'Secret' information was now static (e.g., operating manuals for air-drop nuclear bombs that are now kept in strategic stockpiles rather than at airbases). In order to save money, the 'Secret' information is now kept on a CD and locked up in a safe.
Logistics systems often have application security features too. The classic example is that ordnance control systems alert users who are about to breach safety rules by putting explosives and detonators in the same truck or magazine [56].
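Such a compatibility rule is straightforward to express in code. A hypothetical sketch (the store names and the incompatibility list are invented):

```python
# Sketch of an ordnance compatibility check: refuse to co-locate stores
# that the safety rules declare incompatible.  Entries are hypothetical.
INCOMPATIBLE = [{"explosives", "detonators"}]

def safe_to_load(truck_contents: set, new_item: str) -> bool:
    proposed = truck_contents | {new_item}
    return not any(pair <= proposed for pair in INCOMPATIBLE)

assert safe_to_load({"explosives"}, "bootlaces")
assert not safe_to_load({"explosives"}, "detonators")
```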
3.6 Purple Penelope
In recent years, the government infosec community has been unable to resist user demands to run standard applications (such as MS Office) that are not available for multilevel secure platforms. One response is 'Purple Penelope'. This software, from a UK government agency, puts an MLS wrapper round a Windows NT workstation. It implements the high water mark version of BLP, displaying in the background the current security level of the device and upgrading it when necessary as more sensitive resources are read. It ensures that the resulting work product is labelled correctly.

Rather than preventing users from downgrading, as a classical BLP system might do, it allows them to assign any security label they like to their output. However, if this involves a downgrade, it requires the user to confirm the release of the data using a trusted path interface, thus ensuring no Trojan Horse or virus can release anything completely unnoticed. Of course, a really clever malicious program can piggy-back classified material on stuff that the user does wish to release, so there are other tricks to make that harder. There is also an audit trail to provide a record of all downgrades, so that errors and attacks (whether by users, or by malicious code) can be traced after the fact [63].
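The downgrade ceremony described above might be sketched as follows (our illustration, not the product's actual code; user_confirms stands in for the trusted path dialogue):

```python
# Sketch of a Purple-Penelope-style controlled downgrade: releasing data
# below its current label needs explicit confirmation over a trusted
# path, and every downgrade is recorded in an audit trail.
LEVELS = {"UNCLASSIFIED": 0, "RESTRICTED": 1, "CONFIDENTIAL": 2, "SECRET": 3}
audit_trail = []

def release(data_level: str, target_level: str, user_confirms) -> bool:
    if LEVELS[target_level] >= LEVELS[data_level]:
        return True  # not a downgrade, so no ceremony needed
    if not user_confirms():  # in reality, a trusted path dialogue
        return False
    audit_trail.append(("downgrade", data_level, target_level))
    return True

assert release("RESTRICTED", "SECRET", lambda: False)   # upgrade: allowed
assert not release("SECRET", "RESTRICTED", lambda: False)  # refused downgrade
```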
3.7 Future MLS systems
The MLS industry sees an opportunity in using its products as platforms for firewalls, Web servers and other systems that are likely to come under attack. Thanks to the considerable effort that has often gone into finding and removing security vulnerabilities in MLS platforms, they can give more assurance than commodity operating systems can that even if the firewall or Web server software is hacked, the underlying operating system is unlikely to be. The usual idea is to use the MLS platform to separate trusted from untrusted networks, then introduce simple code to bypass the separation in a controlled way. In fact, one of the leading firewall vendors (TIS) was until recently a developer of MLS operating systems, while Secure Computing Corporation, Cyberguard and Hewlett-Packard have all offered MLS-based firewall products. The long tradition of using MLS systems as pumps and mail guards means that firewall issues are relatively well understood in the MLS community. A typical design is described in [22].

3.8 What Goes Wrong

In computer security, as in most branches of engineering, we learn more from the systems that fail than from those that succeed. MLS systems have been an effective teacher in this regard; the large effort expended in building systems to follow a simple policy with a high level of assurance has led to the elucidation of many second- and third-order consequences of information flow controls.
Figure 5: Insecure composition of secure systems with feedback.
3.8.1 Technical issues
One of the most intractable technical issues is composability. It is easy to design systems that are each secure in themselves but which are completely insecure when connected together. For example, consider a simple device (figure 5) that accepts two High inputs H1 and H2; multiplexes them; encrypts them by xor'ing them with a one-time pad (i.e., a random generator); outputs the other copy of the pad on H3; and outputs the ciphertext. Being encrypted with a cipher system giving perfect secrecy, this is considered to be Low (output L). In isolation this device is provably secure. However, if feedback is permitted, then the output from H3 can be fed back into H2, with the result that the High input H1 now appears at the Low output L.

This trivial example highlighted problems in connecting two devices of the same type, but things become significantly more difficult when dealing with heterogeneous systems. If the systems to be composed obey different policies, it is a hard problem in itself even to establish whether the policies can be made to be compatible at all! Lomas [50], for example, describes the difficulty of reconciling the conflicting security policies of different national branches of the same investment bank. In fact, establishing the conditions under which security policies compose is a long standing area of research.

There are many other technical problems. We will summarise them briefly here; for a fuller account, the reader should consult Anderson [8].

Covert channels arise when a high process can signal to a low process by affecting some shared resource. For example, it could position the disk head at the outside of the drive at time t_i to signal that the i-th bit in a Top Secret file was a 1, and position it at the inside to signal that the bit was a 0. A typical modern operating system has many such channels, which provide a means for a virus that has migrated up to 'High' to signal back down to 'Low'.
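To make the feedback problem of figure 5 concrete, here is a toy simulation in Python. It interprets the multiplexer as an XOR combiner, which is one plausible reading of the figure; the names and bit-level framing are ours.

```python
import secrets

def device(h1: int, h2: int) -> tuple[int, int]:
    """One clock tick of the figure 5 device: returns (h3, low)."""
    pad = secrets.randbits(1)         # RAND: a fresh one-time-pad bit
    low = h1 ^ h2 ^ pad               # ciphertext: perfectly secret in isolation
    return pad, low                   # the pad itself goes out on H3

h3, low = device(1, 0)                # in isolation, low reveals nothing about h1

def device_with_feedback(h1_bits):
    """Wire H3 back into H2: the High input reappears at Low."""
    leaked = []
    for h1 in h1_bits:
        pad = secrets.randbits(1)     # RAND
        h2 = pad                      # the feedback wire: H2 := H3 = pad
        leaked.append(h1 ^ h2 ^ pad)  # = h1, since the two pad bits cancel
    return leaked

secret_bits = [1, 0, 1, 1, 0]         # a High input H1
assert device_with_feedback(secret_bits) == secret_bits   # H1 leaks to L
```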
Polyinstantiation refers to the problem of maintaining data consistency when users at different clearance levels work with different versions of the data. Some systems conceal the existence of High data by inventing cover stories, and problems can arise if Low users then rely on these: it is easy to end up with multiple inconsistent copies of a database.
Aggregation refers to the fact that a collection of Low facts may enable an attacker to deduce a High one. For example, we might be happy to declassify any single satellite photo, but declassifying the whole collection would reveal our surveillance capability and the history of our intelligence priorities.
Overclassification is common. Because processes are automatically upgraded as they see new labels, the files they use have to be too. New files default to the highest label belonging to any possible input. The result of all this is a chronic tendency for objects to migrate towards the highest level of classification in the system.

Downgrading is a huge problem. An intelligence analyst might need to take a satellite photo classified at TS/SCI, and paste it into a 'Secret' assessment for field commanders. This contravenes the BLP model, and so has to be handled by a trusted subject (that is, trusted code). But often the most difficult parts of the problem have to be solved by this code, and so the MLS mechanisms add little. They can provide very high quality data separation, but the real problems are more likely to lie in the controlled sharing of data.

Application incompatibility is often the worst problem of all; in many cases it is the show-stopper. For example, a process that reads a High file and is upgraded will automatically lose the ability to write to a Low file, and many applications simply cannot cope with files suddenly vanishing. The knock-on effects can be widespread. For example, if an application uses a license server and is upgraded, then the license server must be too, so it vanishes from the ken of other copies of the application running at Low, whose users get locked out.

These technical problems are discussed at greater length in Anderson [8]. They are important not just to builders of multilevel secure systems but because variants of them surface again and again in other systems with mandatory access control policies.

3.8.2 Political and economic issues
The most telling argument against MLS systems is economic. They are built in small volumes, and often to high standards of physical robustness, using elaborate documentation, testing and other quality control measures driven by military purchasing bureaucracies. Administration tools and procedures are usually idiosyncratic, which adds to the cost; and many applications have to be rewritten to cope with the MLS functionality.

One must never lose sight of the human motivations that drive a system design, and the indirect costs that it imposes. Moynihan provides a critical study of the real purposes and huge costs of obsessive secrecy in US foreign and military affairs [55]. Following a Senate enquiry, he discovered that President Truman was never told of the Venona decrypts because the material was considered 'Army Property' — despite its being the main motive for the prosecution of Alger Hiss. As his book puts it, "Departments and agencies hoard information, and the government becomes a kind of market. Secrets become organizational assets, never to be shared save in exchange for another organization's assets." He reports, for example, that in 1996 the number of original classification authorities decreased by 959 to 4,420 (following post-Cold-War budget cuts) but the total of all classification actions reported increased by 62% to 5,789,625. Yet despite the huge increase in secrecy, the quality of intelligence made available to the political leadership appears to have degraded over time.
Effectiveness is undermined by inter-agency feuding and refusal to share information, and by the lack of effective external critique. So a case can be made that MLS systems, by making the classification process easier and controlled sharing harder, actually impair operational effectiveness.
4 The Biba Integrity Model
The usual formal definition of mandatory access control is that information flow restrictions are enforced independently of user actions. Although this is often taken to mean BLP, the majority of fielded systems that enforce such controls do so to protect integrity properties. The typical example comes from an electricity utility, where the main operational systems such as power dispatching and metering can feed information into the customer billing system, but not vice versa. Similar one-way information flows are found in banks, railroads, hospitals and a wide range of other commercial and government systems.

The first security policy model to deal with such integrity protection was due to Biba [17] and is often referred to as 'Bell-LaPadula upside down'. Its key observation is that confidentiality and integrity are in some sense dual concepts — confidentiality is a constraint on who can read a message, while integrity is a constraint on who may have written or altered it. In BLP, information cannot flow down towards levels of lower confidentiality, since this would cause a leak. In Biba, conversely, information cannot flow up towards levels of higher integrity, or the "impure" data from the lower-integrity levels would contaminate the "pure" data held in the higher levels. This may be formulated in terms of a No Read Down and a No Write Up property that are the exact duals of the corresponding ones in BLP. Further applications in which Biba is often applied (without the system builders even being aware of its existence) include the following.
• An electronic medical device such as an ECG may have two separate modes: calibration and use. The calibration data must be protected from being corrupted by normal users, who will therefore be able to read it but not write to it. When a normal user resets the device, it will lose its current user state (i.e., any patient data in memory) but the calibration will remain unchanged.

• In computer supported cooperative work, some of the authors may be very careful in noting all the precise details of their bibliographic citations from first hand references, while others may content themselves with less complete records, perhaps only cross-checked on the Web rather than on the actual articles. In such a case, the more meticulous authors will probably refrain from copying citations from their colleagues' files (No Read Down) and not grant their colleagues permission to modify their own bibliography files (No Write Up).

The duality between Biba and BLP means that the System Z objections apply here too, and the introduction of tranquility properties becomes necessary. The obvious interpretation is a low water mark policy in which the integrity label of an object defaults to the lowest label read by the process that created it.

An example implementation is LOMAC, an extension to Linux with a low water mark policy [40]. This is designed to deal with the problem of
malicious code arriving over the Net. The system provides two levels — high and low integrity — with system files at High and the network at Low. As soon as a program (such as a demon) receives traffic from the network, it is automatically downgraded to Low. Thus even if the traffic contains an attack that succeeds in forking a root shell, this shell won't have the ability to write to the password file, for example, as a normal root shell would. As one might expect, a number of system tasks (such as logging) become tricky and require trusted code. However, these mechanisms still cannot stop a virus that has infected Low from replicating and sending copies of itself out over the network.

As mentioned above, integrity concerns can also be dealt with by the type enforcement and RBAC models. However, in their usual forms, they revise a principal's privilege when an object is invoked, while low watermark revises it when an object is read. The latter policy is more prudent where we are concerned with attacks exploiting code that is not formally invoked but simply read (examples include buffer overflow attacks conducted by 'data' read from the Internet, and 'documents' that actually contain macros — currently the most popular medium for virus propagation).

An interesting problem is that of combining the apparently contradictory requirements of Biba and BLP, as would be needed in a system for which both confidentiality and integrity were equally important goals. The trivial approach based on a single set of security labels for both confidentiality and integrity leads to an extremely restrictive system in which information cannot flow either up or down, but only sideways, among items at the same security level; this does comply with both policies simultaneously, but probably does not yield a very useful system. A more intriguing solution is to assign different labels for confidentiality and integrity, and in particular to make high integrity correspond to low confidentiality and vice versa. Depending on the context, this may not be as absurd as it may sound at first. Consider for example that systems software needs extremely high integrity, but very low confidentiality; whereas this may be reversed for data items such as user preferences. This approach has the advantage that both policy models end up dictating information flow in the same direction [1]. Researchers are now starting to build more complex models that accommodate both confidentiality and integrity to observe their interaction [48].
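A minimal sketch of the low water mark mechanism that LOMAC applies may make this concrete; the class and the file levels below are our own hypothetical illustration, since the real LOMAC enforcement lives inside the kernel.

```python
# Two integrity levels, as in LOMAC: system files High, the network Low.
HIGH, LOW = 1, 0

class Process:
    def __init__(self, name: str):
        self.name, self.level = name, HIGH   # starts at high integrity

    def read(self, source_level: int) -> None:
        # Low water mark: the label drops to the lowest source ever read.
        self.level = min(self.level, source_level)

    def write(self, target_level: int) -> None:
        # No Write Up: a demoted process cannot touch high-integrity files.
        if self.level < target_level:
            raise PermissionError(f"{self.name} may not write High")

daemon = Process("httpd")
daemon.read(LOW)                      # receives traffic from the network
daemon.write(LOW)                     # still fine at Low
try:
    daemon.write(HIGH)                # e.g. the password file
except PermissionError as e:
    print(e)                          # denied, even for a forked root shell
```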
5 The Clark-Wilson Model
Most mandatory integrity policies used in real systems are somewhat more complex than Biba, and the most influential of them is Clark-Wilson (CW). This model distills a security policy out of the centuries-old practice of double-entry bookkeeping (arguably one of the most significant ideas in finance after the invention of money). Its main goal is to ensure the integrity of a bank's accounting system and to improve its robustness against insider fraud.

The idea behind double-entry bookkeeping is, like most hugely influential ideas, extremely simple. Each transaction is posted to two separate books, as a credit in one and a debit in the other. For example, when a firm is paid $100 by a debtor, the amount is entered as a debit in the accounts receivable (the firm is now owed $100 less) and as a credit in the cash account (the firm now has $100 more cash). At the end of the
day, the books should balance, that is, add up to zero; the assets and the liabilities should be equal. (If the firm has made some profit, then this is a liability the firm has to the shareholders.) In all but the smallest firms, the books will be kept by different clerks, and have to balance at the end of every month (at banks, every day). By suitable design of the ledger system, we can see to it that each shop, or branch, can be balanced separately. Thus most frauds will need the collusion of two or more members of staff; and this principle of split responsibility is complemented by audit.

Similar schemes had been in use since the Middle Ages, and had been fielded in computer systems since the 1960's; but a proper model of their security policy was only introduced in 1987, by Clark and Wilson [24]. In their model, some data items are constrained so that they can only be acted on by a certain set of transactions known as transformation procedures. More formally, there are special procedures whereby data can be input — turned from an unconstrained data item, or UDI, into a constrained data item, or CDI; integrity verification procedures (IVPs) to check the validity of any CDI (e.g., that the books balance); and transformation procedures (TPs), which may be thought of in the banking case as transactions that preserve balance. In the general formulation, they maintain the integrity of CDIs; they also write enough information to an append-only CDI (the audit trail) for transactions to be reconstructed. Access control is by means of triples (subject, TP, CDI), which are so structured that a dual control policy is enforced. Here is the formulation found in [1].

1. The system will have an IVP for validating the integrity of any CDI.
2. Application of a TP to any CDI must maintain its integrity.
3. A CDI can only be changed by a TP.
4. Subjects can only initiate certain TPs on certain CDIs.
5. CW-triples must enforce an appropriate separation of duty policy on subjects.
6. Certain special TPs on UDIs can produce CDIs as output.
7. Each application of a TP must cause enough information to reconstruct it to be written to a special append-only CDI.
8. The system must authenticate subjects attempting to initiate a TP.
9. The system must let only special subjects (i.e., security officers) make changes to authorisation-related lists.

One of the historical merits of Clark and Wilson is that they introduced a style of security policy that was not a direct derivative of BLP. In particular, it involves the maintenance of application-level security state — firstly, in the audit log, and secondly to track the shared control mechanisms. These can be quite diverse.
They can operate in parallel (as when two bank managers are needed to approve a transaction over a certain amount) or in series (as when different people in a company are responsible for raising an order, accepting delivery, paying an invoice and balancing a departmental budget). Although the details can be highly application specific, this new approach provided an overall framework for reasoning about such systems, and fuelled research into security policies that were not based on label-based classification.

Despite being very different from the rules of the BLP model, the Clark-Wilson rules still fall into the general pattern of first defining a
subset of the states of the system as "secure", and then defining transition rules that, when applied to secure states, are guaranteed to lead into further secure states, thus preserving the fundamental invariant of the system. Insofar as a security policy is an abstract description of the desired behaviour of the Trusted Computing Base, the above pattern captures fairly well the concept of "security policy" as we defined it in the introduction.
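The following Python sketch shows how rules 1, 2, 4 and 7 of the formulation above might fit together for the bookkeeping example; the subject names, the ledger structure and the triple set are hypothetical, chosen only to make the (subject, TP, CDI) triples concrete.

```python
from collections import defaultdict

ledger = defaultdict(int)             # the CDI: account -> signed balance
audit_log = []                        # append-only CDI (rule 7)
triples = {("alice", "post_payment", "ledger")}   # rule 4: permitted triples

def ivp_books_balance() -> bool:
    """Rule 1: an IVP checking that debits and credits sum to zero."""
    return sum(ledger.values()) == 0

def post_payment(subject: str, amount: int) -> None:
    """A TP: posts a balance-preserving double entry."""
    if (subject, "post_payment", "ledger") not in triples:
        raise PermissionError(subject)
    ledger["cash"] += amount                   # one book
    ledger["accounts_receivable"] -= amount    # the other book
    audit_log.append((subject, "post_payment", amount))   # rule 7
    assert ivp_books_balance()        # rule 2: the TP preserved integrity

post_payment("alice", 100)
print(dict(ledger), audit_log)
```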
6 The Chinese Wall Model
The Chinese Wall security policy, introduced by Brewer and Nash in 1989 [21], models the constraints of a firm of professionals — such as computer consultants, advertising agents or investment bankers — whose partners need to avoid situations where conflicts of interest or insider dealing might become possible.

Suppose the firm consults for a variety of companies, for example three oil companies, four banks and two computer companies. Any partner consulting for a company of a given type, say "oil company", would face a conflict of interest if she were also to consult for any other company of that type. But nothing should stop her from simultaneously consulting for a company of another type, such as a bank. As long as the consultant has not yet interacted with companies of a given type, she is free to choose any company of that type for a new assignment. However, as soon as she consults for one of them, a closed 'Chinese Wall' is erected around her, with that company inside and all the other companies of the same type outside. So the consultant's personal Chinese Wall changes whenever she consults for a company of a new type.

The authors explicitly compare their policy with BLP and describe it using a similar formalism based on a simple security property and a star property. These two rules are somewhat more complicated than their BLP equivalents. Given a data object o, for example the payroll file of Shell, y(o) indicates the company to which o refers, namely Shell, and x(o) denotes the type of company, in this case "oil company", which may also be seen as the set of companies among which there is a conflict of interest from the point of view of an analyst who accesses o. The critical difference from BLP is that the Chinese Wall model needs to retain state in order to keep track of the objects (and therefore companies) with which analysts have been "contaminated". The state is kept in a two-dimensional matrix of Boolean values, N, indexed by subject and object: N_{s,o} is true if and only if subject s has previously accessed object o.

The simple security property says that each subject can access objects from at most one company of any given type. In particular, subject s can access object o only if one of the two following circumstances is verified. Either s has never dealt with a company of type x(o), i.e., there is no object p such that N_{s,p} is true and x(p) = x(o); or s is already committed to the specific company y(o), i.e., for each object p such that N_{s,p} is true and x(p) = x(o) we also have that y(p) = y(o).

This still leaves scope for information leaks through indirect routes. Analyst Alice might be consulting for oil company Shell and bank Citicorp, while analyst Bob might be consulting for Exxon and Citicorp. Nothing as yet prevents Alice from writing Shell-related financial information in
a Citicorp object that Bob might later read, thus causing a conflict with Bob's allegiance to Exxon. The star property covers this case. Subject s is only allowed to write to object o if the simple property is satisfied and if, for every object p that s has previously read, either y(p) = y(o) or p is a "sanitised" object. To sanitise an object o is to transform it in such a way that
no conflict of interest will occur if the sanitised object is disclosed to companies belonging to x(o). This may be achieved through a trusted subject applying appropriate de-identification or other data laundering mechanisms. Sanitised objects can be elegantly included in the model by introducing an artificial company type "sanitised" containing only one company. Since the cardinality of the type is 1, such objects may be accessed by all analysts without any conflict of interest.

The Chinese Wall model made a seminal contribution to the theory of access control. It also sparked a debate about the extent to which it is consistent with the MLS tranquility properties, and some work on the formal semantics of such systems (see, for example, Foley [38] on the relationship with non-interference). There are also some interesting new questions about covert channels. For example, could an oil company find out whether a competitor that used the same consultancy firm was planning a bid for a third oil company, by asking which specialists were available for consultation and noticing that their number had dropped suddenly?
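A compact Python rendition of the two properties follows; representing objects as (type, company) pairs and the matrix N as a set of (subject, object) pairs is our own encoding, not Brewer and Nash's notation.

```python
accessed = set()                      # the matrix N as a set of (s, o) pairs

def x(o): return o[0]                 # conflict-of-interest class, e.g. "oil"
def y(o): return o[1]                 # company, e.g. "Shell"

def may_read(s, o):
    """Simple security property: one company per class for each subject."""
    return all(y(p) == y(o) for (s2, p) in accessed
               if s2 == s and x(p) == x(o))

def may_write(s, o):
    """Star property: block indirect flows between companies of one class."""
    return may_read(s, o) and all(y(p) == y(o) or x(p) == "sanitised"
                                  for (s2, p) in accessed if s2 == s)

def read(s, o):
    if not may_read(s, o):
        raise PermissionError(f"{s}: conflict of interest on {o}")
    accessed.add((s, o))              # N_{s,o} := true

read("alice", ("oil", "Shell"))
read("alice", ("bank", "Citicorp"))
print(may_read("alice", ("oil", "Exxon")))       # False: inside the wall
print(may_write("alice", ("bank", "Citicorp")))  # False: could leak Shell data
```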
7 The BMA Policy
The healthcare sector offers another interesting scenario in which confidentiality requirements are paramount, but radically different from those in the military context. Medical privacy is a legal right in many countries, and frequently a subject of controversy. As the information systems in hospitals and medical practices are joined together by networks, potentially large numbers of people have access to personal health information, and this has led to some abuses. The problem is likely to get worse as genetic data become widely available. In Iceland a project to build a national medical database that will incorporate not just medical records but also genetic and genealogical data, so that inherited diseases can be tracked across generations, has caused an uproar [7, 6].

The protection of medical information is also a model for protecting personal information of other kinds, such as the information held on individual customers by banks, insurance companies and government agencies. In EU countries, citizens have rights to data protection. In broad terms, this means that they must be notified of how their personal data may be used, and in the case of especially sensitive data (affecting health, sexual behaviour and preferences, political and trade union activity and religious belief) they either must give consent to information sharing or have a right of veto. This raises the issue of how one can construct a security policy in which the access control decisions are taken not by a central authority (as in Bell-LaPadula) or by the system's users (as in discretionary access control) but by the data subjects.

In a 1996 project for which one of us was responsible, the British Medical Association developed a security policy for clinical information systems [10]. This model focuses on access control, patient privacy and confidentiality management. It has a horizontal structure of access control
rather than a vertical hierarchy as used in the BLP model, although its confidentiality properties bear some relation to BLP as applied to compartments. The goals of the BMA security policy were to enforce the principle of patient consent, and to prevent too many people getting access to too large databases of identifiable records. It did not try to do anything new, but merely to codify existing best practice. It also sought to express other security features of medical record management such as safety and accountability. The policy consists of nine principles:

1. (Access control) Each identifiable clinical record shall be marked with an access control list naming the people or groups of people who may read it and append data to it. The system shall prevent anyone not on the access control list from accessing the record in any way.

2. (Record opening) A clinician may open a record with herself and the patient on the access control list. Where a patient has been referred, she may open a record with herself, the patient and the referring clinician(s) on the access control list.

3. (Control) One of the clinicians on the access control list must be marked as being responsible. Only she may alter the access control list, and she may only add other health care professionals to it.

4. (Consent and notification) The responsible clinician must notify the patient of the names on his record's access control list when it is opened, of all subsequent additions, and whenever responsibility is transferred. His consent must also be obtained, except in emergency or in the case of statutory exemptions.

5. (Persistence) No-one shall have the ability to delete clinical information until the appropriate time period has expired.

6. (Attribution) All accesses to clinical records shall be marked on the record with the subject's name, as well as the date and time. An audit trail must also be kept of all deletions.

7. (Information flow) Information derived from record A may be appended to record B if and only if B's access control list is contained in A's.

8. (Aggregation control) There shall be effective measures to prevent the aggregation of personal health information. In particular, patients must receive special notification if any person whom it is proposed to add to their access control list already has access to personal health information on a large number of people.

9. (Trusted computing base) Computer systems that handle personal health information shall have a subsystem that enforces the above principles in an effective way. Its effectiveness shall be subject to evaluation by independent experts.
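Principle 7 in particular reduces to a subset test on access control lists; a short sketch (with invented clinician names) makes this explicit.

```python
def may_append_derived(acl_a: set, acl_b: set) -> bool:
    """BMA principle 7: derived data may flow from record A to record B
    only if B's ACL is contained in A's, so exposure can never widen."""
    return acl_b <= acl_a

acl_a = {"Dr Jones", "Nurse Smith", "the patient"}   # record A's ACL
acl_b = {"Dr Jones", "the patient"}                  # record B's ACL
print(may_append_derived(acl_a, acl_b))  # True: all of B's readers already see A
print(may_append_derived(acl_b, acl_a))  # False: Nurse Smith would learn B's data
```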
This policy is strictly more expressive than Bell-LaPadula (it contains a BLP-type information flow control mechanism in principle 7, but also contains state). A fuller discussion from the point of view of access control, aimed at a technical audience, can be found at [4].

The fundamental innovation of the BMA model is not at first sight obvious. Previous models had tried to produce a security policy for the
‘electronic patient record’ — a phrase that has come to mean the entirety of a person’s health information, from conception through autopsy. This turned out to be an intractable problem, because of the different groups of people who had access to different subsets of the record. So the solution adopted by the BMA model was to define the record as the maximal set of health information about a person that shared the same access control list. A system now deployed in a number of British hospitals, which works along these lines and broadly complies with the BMA policy, is described in [26].
8 Jikzi
In the last decade of the twentieth century the World Wide Web staked a plausible claim to be the most significant event in publishing since the invention of the printing press. Anyone can now access, from anywhere in the world, a vast multifaceted and continuously updated hypertextual network of documents.

The problem with this new medium, compared with the established world of paper-based publishing, is its ephemeral nature. There is no guarantee that the interesting and useful Web page we are consulting now will be there tomorrow — or, more subtly, that it won't have been edited to deny something that it today asserts. There are few guarantees about the identity of the author of a page and any text can be repudiated at any time by withdrawing or modifying it. These properties make the new medium questionable for applications that need better guarantees of integrity, such as drugs databases, company business records and newspaper archives3.

To address the requirements of secure publishing, we implemented a system based on the idea of retaining all versions of the relevant documents forever, without ever deleting any of them. We called it Jikzi, after the first ever book published using a movable type printing press (this is a Buddhist text printed in Korea in 1377, some 63 years before Gutenberg).

The security policy model of the Jikzi system is as follows. As a foundation, we assume the domain D of published documents to be partitioned into two sets: the controlled documents, CD, whose integrity and authenticity are guaranteed by the system, and the uncontrolled documents, UD, on which no constraints are placed. The policy proper is stated as six principles:

1. Neither deletion nor replacement is allowed within CD.

2. The creator of a document defines its revision access condition and only authorised principals with respect to the condition are allowed to revise it; all revisions in CD must be stored and browsable.

3. Authenticity validation procedures must be available for validating the authenticity of CD members.

4. Any action to CD members must maintain the authenticity of the document.
5. Authentication of a member of CD can be performed by any user.

3 George Orwell's famous novel 1984 depicts a world in which even paper-based newspaper archives were retroactively changed so that history would suit the current government.
6. Transformation from UD to CD must be one-way and the principal who transformed a document becomes the creator of the document in CD.

The policy protects controlled documents by preventing any destructive modifications to them. If we need to modify a controlled document, we may produce a revised copy of it, but all the previous versions will stay archived for the lifetime of the system.

Principle 6 assumes a transformation from an uncontrolled document to its controlled version. The one-wayness of the transform means that a document, once controlled, cannot become uncontrolled, since Principle 1 does not allow deletions from CD. It is of course possible to take an uncontrolled copy of a controlled document, and from then on the life of the copy will no longer be controlled; but the previous history of the document in CD remains unchanged. In the paper-based commercial world, write-once documents have existed for a long time (e.g. business ledgers). One of us is using the above policy to build an online data repository service.

Conversely, sometimes our data will not require persistence, but rather volatility. As we shall discuss in section 11.2, there are plausible scenarios in which we may wish all controlled documents to disappear after a fixed delay since their creation.
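As a sketch of principles 1, 2 and 6, consider the following toy append-only store in Python; the class and method names are our own, and the real Jikzi system is of course richer than this.

```python
# A toy append-only versioned store: publication fixes the creator
# (principle 6), revisions accumulate and are browsable (principle 2),
# and nothing is ever deleted or replaced (principle 1).
class ControlledStore:
    def __init__(self):
        self._versions = {}          # name -> list of immutable versions

    def publish(self, name, content, creator):
        # UD -> CD transformation: the transformer becomes the creator.
        self._versions[name] = [(creator, content)]

    def revise(self, name, content, principal):
        creator, _ = self._versions[name][0]
        if principal != creator:     # stands in for the revision access condition
            raise PermissionError(principal)
        self._versions[name].append((principal, content))  # never replaces

    def history(self, name):
        return list(self._versions[name])   # all revisions stay browsable

store = ControlledStore()
store.publish("drug-db", "v1", creator="alice")
store.revise("drug-db", "v2", principal="alice")
print(store.history("drug-db"))      # both versions remain archived
```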
9 The Resurrecting Duckling
Authentication of principals in a distributed system is a well studied problem with established solutions. The traditional solutions, however, rely on the existence of an online server, either for the distribution of "tickets", as in Kerberos4, or to check whether a public key has been revoked, as in the various public key infrastructures5. Where such a server is not available, a new strategy must be found.

The problem becomes apparent in the example of a universal remote control that needs to be configured so as to control a new DVD player that its owner just bought. We want the DVD player to obey this remote control but not any other, so as to prevent accidental (or malicious) activation by our neighbour's remote control. We also want to be able to rescind this association, so that we may resell or give away the player once a better model comes out; but this facility should be restricted, to prevent a thief who steals the player from using it. The goal may be summarised as "secure transient association", and the abstract problem has much wider applicability than just consumer electronics: one may conceive further instances as diverse as the temporary binding of an e-wallet to an ATM (nobody should be able to interfere while I am performing my transaction, but the ATM should be ready to bind to another customer's e-wallet as soon as I go away) or as the temporary binding of a hospital thermometer to a doctor's PDA.
4 Kerberos [60] is an online authentication protocol based on Needham-Schroeder [59] and developed at MIT for the Athena project in the late 1980s. The general idea is as follows. Client A wishes to access resource server B, and therefore needs to authenticate itself to B as a valid user. A and B don't know each other, but they each know (i.e. share a secret with) the authentication server S. So A contacts S and receives from it a time-limited "ticket" that it can show to B to prove its identity. A variant of Kerberos is used in Windows 2000.

5 See section 11.1.
A metaphor inspired by biology will help us describe the security policy model we developed to implement secure transient association [68]. As Konrad Lorenz beautifully narrates [51], a duckling emerging from its egg will recognise as its mother the first moving object it sees that makes a sound, regardless of what it looks like: this phenomenon is called imprinting. Similarly, our device (whose egg is the shrink-wrapped box that encloses it as it comes out of the factory) will recognise as its owner the first entity that sends it a secret key through an electrical contact. As soon as this 'imprint key' is received, the device is no longer a newborn and will stay faithful to its owner for the rest of its life. If several entities are present at the device's birth, then the first one that sends it a key becomes the owner: to use another biological metaphor, only the first sperm gets to fertilise the egg.

We can view the hardware of the device as the body, and the software (and particularly its state) as the soul. As long as the soul stays in the body, the duckling remains alive and bound to the same mother to which it was imprinted. But this bond is broken by death: thereupon, the soul dissolves and the body returns to its pre-birth state, with the resurrecting duckling ready for another imprinting that will start a new life with another soul. Death is the only event that returns a live device to the pre-birth state in which it will accept an imprinting. We call this process reverse metempsychosis. Metempsychosis refers to the transmigration of souls as proposed in a number of religions; our policy is the reverse of this as, rather than a single soul inhabiting a succession of bodies, we have a single body inhabited by a succession of souls.

With some devices, death can be designed to follow an identifiable transaction. A hospital thermometer can be designed to die (lose its memory of the previous key and patient) when returned to the bowl of disinfectant at the nursing station. With others, we can arrange a simple timeout, so that the duckling dies of old age. With other devices (and particularly those liable to be stolen) we will arrange for the duckling to die only when instructed by its mother: thus only the currently authorised user may transfer control of the device. In order to enforce this, some level of tamper resistance will be required: assassinating the duckling without damaging its body should be made suitably difficult and expensive. (Of course, there will be applications in which one wishes to protect against accidental death of the mother duck — such as if the remote control breaks. In such cases, we can make a 'backup mother' duck by squirrelling away
a copy of the imprinting key.)

Some systems, such as Bluetooth, grant control to whoever has the ability to manipulate the device: if you can touch it, you can control it. The Duckling policy is different, because it specifies an element of tamper resistance to protect the crucial state transition of re-imprinting; that is, resurrection control. It follows that a passer-by cannot take ownership of an unattended duckling. So an imprinted car stereo is useless to a thief.

After a narrative illustration of the policy, we now list its four principles for reference.
1. (Two States) The entity that the policy protects, called the duckling, can be in one of two states: imprintable or imprinted (see figure 6). In the imprintable state, anyone can take it over. In the imprinted state, it only obeys another entity called its mother duck.

2. (Imprinting) The transition from imprintable to imprinted, known as imprinting, happens when the mother duck sends an imprinting
key to the duckling. This may be done using physical contact, or by some other channel whose confidentiality and integrity are protected adequately.

3. (Death) The transition from imprinted to imprintable is known as death. It may only occur under a specified circumstance, such as death by order of the mother duck (default), death by old age (after a predefined time interval), or death on completion of a specific transaction.

4. (Assassination) The duckling must be uneconomical for an attacker to assassinate (which means to cause its death in circumstances other than as prescribed by the Death principle of the policy).

Figure 6: A state diagram for the Resurrecting Duckling.
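The state machine of figure 6 is small enough to sketch directly; the Python below is our own illustration, with the imprint key standing in for the secret sent over the physical-contact channel.

```python
# A toy Duckling: imprintable until it receives a key, then obedient
# only to its mother duck until death by the mother's order (default).
class Duckling:
    def __init__(self):
        self.imprint_key = None      # imprintable (unborn)

    def imprint(self, key: bytes):
        if self.imprint_key is not None:
            raise PermissionError("already imprinted: only death resets")
        self.imprint_key = key       # first key received becomes the mother

    def command(self, key: bytes, action: str):
        if key != self.imprint_key:
            raise PermissionError("obeys only its mother duck")
        if action == "die":          # death by order of the mother duck
            self.imprint_key = None  # back to imprintable: the soul dissolves

player = Duckling()
player.imprint(b"owner-secret")          # electrical contact at first power-up
player.command(b"owner-secret", "die")   # owner rescinds the association
player.imprint(b"new-owner-secret")      # resurrection: a new life, new soul
```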
10 Access Control
As we have seen, security policies started primarily as coherent sets of constraints describing who could access what, and when. Even if our last few examples have shown more general scenarios, controlling access to resources tends to be the primary goal of many security policy models.

In the general case, given the set S of all the subjects in the system, and the set O of all the objects, we may build a classical two-dimensional access matrix with a row for each subject and a column for each object, where each cell of the matrix contains a Boolean value specifying whether that subject is allowed to access that object. In practice there will also be a third dimension to this matrix, indexed by the type of access (create, read, write, execute, browse, delete, etc.).

10.1 ACLs

Centralised administration of such a vast collection of individually significant security bits is difficult; to make the task more manageable, the
matrix is usually split by columns or by rows. When it is split by columns, to each object is associated a list of the subjects that are allowed to access it. This, appropriately enough, is called an Access Control List, or ACL (pronounced "ackle"). An example of this is given by the string of permission bits that Unix associates with each file. The sequence "rwxrwxrwx" represents three sets of "read, write, execute" permission bits, the sets respectively mapping to "user" (the owner of the
file), "group" (any subject in a designated group that has been associated with the file) and "other" (any other subject). If the permission is granted, the corresponding bit is set and is represented by its letter in the string; otherwise, the bit is reset and is represented by a hyphen instead of the letter.

It is important to note that these permission bits may be set by the owner of the file at her own discretion — hence the term "discretionary access control" to denote this situation, as opposed to the "mandatory access control" of BLP and related multilevel secure systems in which no user can override the stated rules about information flow between clearance levels.

In general, mandatory access control makes it easier to guarantee that a stated security policy will be enforced. Central administration of a complex access matrix in constant flux as files and directories are created and destroyed is a very hard task, so mandatory access control usually implies a simplified view of the world, as in the BLP model, where users cannot specify all the individual access bits independently. Where finer control is necessary, discretionary access control is a manageable way to achieve greater flexibility. The two strategies may even be combined, as in Unix System V/MLS, where a base layer of mandatory access control is complemented by discretionary access control to further regulate accesses that are not constrained by the multilevel security rules.
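As a concrete illustration, the following small Python helper (our own sketch, not a system call) evaluates a nine-character permission string in the way just described.

```python
def may_access(perms: str, want: str, is_owner: bool, in_group: bool) -> bool:
    """perms: a 9-character string such as 'rwxr-x---'; want: 'r', 'w' or 'x'."""
    offset = {"r": 0, "w": 1, "x": 2}[want]
    if is_owner:
        triad = perms[0:3]           # "user" bits
    elif in_group:
        triad = perms[3:6]           # "group" bits
    else:
        triad = perms[6:9]           # "other" bits
    return triad[offset] != "-"      # a hyphen means the bit is reset

print(may_access("rwxr-x---", "w", is_owner=False, in_group=True))  # False
print(may_access("rwxr-x---", "r", is_owner=False, in_group=True))  # True
```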
10.2 Capabilities
If the access matrix is instead split by rows, we obtain for each subject a list of the objects to which she has access. The elements of such a list are called capabilities, meaning that if an object o is in the list for subject s, then s is capable of accessing o. In some systems, such as Windows 2000, there is also the concept of a "negative capability", to explicitly indicate that the given subject is not allowed to access the given object. There may be hierarchies of users, groups and directories through which a subject obtains both positive and negative capabilities for the same object, and there will be rules that state which ones override the others.

To compare the interplay between the different approaches, observe the case of the BMA policy (section 7). Although it was naturally expressed in terms of access control lists, when it came to implementing it in a hospital the most natural expression was in capabilities or certificates. The majority of access control rules could be expressed in statements of the form 'a nurse may read, and append to, the records of all patients who have been in her ward during the previous 90 days'.

It should be noted that the concept of capability as a row of the access matrix is subtly different from the original one of a capability as "a bit string that you either know or don't know", as introduced by the Cambridge CAP machine [77] which implemented capabilities in hardware. There, each object was associated with an unguessable bit string (the capability) generated by the creator of the object; any subject wishing to access the object had to prove that it knew the capability6. Any subject with the right to access a given object could extend this right to another
6 To clarify the difference between this idea of capability and the previous one, note that in this context a negative capability can't work. A negative capability would have to be a bit string that, if known, denies access to a resource (instead of granting it). Since it is up to the client to exhibit the capabilities he knows in order to be granted access, anyone given a negative capability might obviously find it convenient to just 'forget' it!
subject simply by telling it the capability. Even more interesting constructions were possible with the introduction of proxies (intermediate objects, acting as a layer of indirection, that among other things make revocation possible): instead of giving subject s the capability c(o) of object o, the creator of o makes a proxy object p and gives c(o) to p; it then gives s the capability c(p) of the proxy, through which s can indirectly access o without knowing c(o). The twist is that now the creator of o can revoke s's right to access o by simply deleting the proxy object — all without affecting the workings of any other subjects that might have been legitimately accessing o at the time.

Capabilities fell into disuse in the 1980s and early 1990s, and were used only in a small number of systems, such as the IBM AS/400 series. They are now making a comeback in the form of public key certificates, which act as credentials for access to a set of resources. We'll discuss certificates below.
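A toy Python model of such unguessable-string capabilities, with a one-level proxy, may make the revocation trick clearer; the dictionary registry is our own simplification of what CAP did in hardware.

```python
import secrets

objects = {}                          # capability string -> object (or capability)

def create(obj):
    cap = secrets.token_hex(16)       # an unguessable bit string
    objects[cap] = obj
    return cap

def access(cap):
    target = objects[cap]
    # follow one level of indirection if the capability names a proxy
    return objects[target] if target in objects else target

c_o = create("payroll file")          # c(o): the capability for o itself
proxy = create(c_o)                   # proxy p holding c(o)
print(access(proxy))                  # s reaches o via c(p), never learning c(o)
del objects[proxy]                    # revocation: delete the proxy only;
                                      # other holders of c(o) are unaffected
```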
10.3 Roles
We also mentioned role based access control. Instead of assigning access rights (or, more generally, "privileges") to individual subjects such as Joe and Barbara, we assign them to roles such as "receptionist" and "personnel manager", and then give one or more roles to each subject. This is a very powerful organisational tool, in that it is much more meaningful to express a security target in terms of roles, which can be made to have well-defined and relatively stable semantics within the company, rather than in terms of individuals, whose functions (and employment status) may change over time.

To obtain the full benefits of such a scheme it is essential to maintain the distinction between subjects and roles, particularly when it comes to auditing. As an example of detrimental blurring of the two, consider the common case of the Unix "root" account on a multiuser system with several administrators. Conceptually, "root" is a role, with which several subjects (the system administrators, who by the way have individual user accounts as well) are endowed. At the operating system level, however, "root" is only another subject, or user account, albeit a privileged one. This means that when, by malice or incompetence, one of the system administrators moves a dangerous file to an inappropriate place, the record of ownership and permissions will only say that this was done by "root", not by "Joe acting as root". Oracle in essence reimplements an entire user management and access control system on top of that provided by the underlying operating system, and seems to have got this right, with a clear separation between roles and users.
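The following Python fragment sketches the distinction with invented roles and privileges; the point is that the audit record keeps both the subject and the role, giving "Joe acting as root" rather than just "root".

```python
role_privileges = {"receptionist": {"read_appointments"},
                   "personnel_manager": {"read_hr", "write_hr"}}
subject_roles = {"joe": {"receptionist"},
                 "barbara": {"personnel_manager", "receptionist"}}
audit_log = []

def act(subject: str, role: str, privilege: str) -> None:
    if role not in subject_roles[subject]:
        raise PermissionError(f"{subject} does not hold role {role}")
    if privilege not in role_privileges[role]:
        raise PermissionError(f"role {role} lacks {privilege}")
    audit_log.append((subject, role, privilege))  # subject AND role recorded

act("barbara", "personnel_manager", "write_hr")
print(audit_log)    # [('barbara', 'personnel_manager', 'write_hr')]
```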
10.4 Security state

It must be noted that pure access control is not the best mechanism when the policy requires state to be retained. A subtle example of this comes from Clark-Wilson models that specify dual control, i.e., the fact that certain data items must be approved by two principals from different groups (say a yellow manager and a blue one) before becoming valid. Due to workflow constraints it is impractical to impose atomicity on the validation, since this would force the yellow and blue managers always to look at the transaction slips together. What is done instead is that the transaction slip is placed in the inbox of the yellow manager, gets approved by her at some point, then it moves to the inbox of the blue
manager, obtains its second approval and finally becomes valid. The system must therefore keep track of additional state for each Unconstrained Data Item to represent the approvals so far collected on the way to becoming a Constrained Data Item. Implementing a solution using only pure access control mechanisms (e.g. by creative use of intermediate files and directories with carefully chosen permissions and groups) is theoretically possible, but convoluted and error-prone. For example one has to prevent the blue manager from moving into the ‘valid’ directory a transaction that was never approved by yellow.

The problem is even more evident in the Chinese Wall case, where there is explicit mention of a Boolean matrix of state bits to indicate whether a consultant has ever interacted with any companies of a given type. Here too it is theoretically possible to pervert the file system’s access control mechanisms into some means of artificially retaining the required state, such as by giving a consultant a positive capability for a company he decides to work for and simultaneously a negative capability for all the other companies in its conflict-of-interest class.

The cleaner alternative to these programming tricks is to keep the security state in a data structure explicitly devoted to that purpose. But where will such a data structure reside? Since it no longer implements a general purpose facility, like the permission bits of the file system, it is likely to migrate from the operating system into the application. The problem of encapsulating this data structure (in the object oriented sense) then arises: no other program must be able to modify the security state, except by using the methods provided by the application that is enforcing the policy. This is not always trivial to ensure.
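As an illustration, here is a minimal sketch (with invented names) of an explicit security-state structure for the dual control example: the slip records which groups have approved it so far, and validity is derived from that state rather than from file permissions. In a real system such a class would have to be encapsulated inside the application enforcing the policy, as just discussed.

```python
# Explicit security state for dual control: a transaction slip (an
# Unconstrained Data Item) becomes valid (a Constrained Data Item) only
# after approval by both a yellow and a blue manager.
MANAGER_GROUP = {"alice": "yellow", "bob": "blue"}

class TransactionSlip:
    def __init__(self, slip_id):
        self.slip_id = slip_id
        self.approvals = {}          # group -> approving manager

    def approve(self, manager):
        """Record one group's approval; a group may approve only once."""
        group = MANAGER_GROUP[manager]
        if group in self.approvals:
            raise ValueError(f"already approved by a {group} manager")
        self.approvals[group] = manager

    @property
    def valid(self):
        """Valid only once both groups have approved."""
        return {"yellow", "blue"} <= self.approvals.keys()

slip = TransactionSlip(42)
slip.approve("alice")        # approved in the yellow inbox at some point
assert not slip.valid        # one approval is not enough
slip.approve("bob")          # later, the blue manager approves
assert slip.valid            # only now does the slip become valid
```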
11 Beyond Access Control
Not all security policies are elaborate sets of rules about access control. There are many contexts in which the aspect of greatest interest in the system is not access control but authentication, or delegation, or availability — or perhaps a combination of those and other properties. Biba and Jikzi are examples where integrity matters more than access control. These are not just a matter of controlling write access to files, as they bring in all sorts of other issues such as reliability, concurrency control and resistance to denial-of-service attacks.

On a more general level, we may speak of “security policy” whenever a consistent and unambiguous specification is drawn up, stating the required behaviour of the system with respect to some specific security properties. Although we have presented a gallery of policy models and we have insisted on their strengths and general applicability, it is not necessary for a policy target to be derived as a specialisation of a model.

To clarify these points, let’s examine a couple of examples of policies that are neither devoted to access control nor derived from established models.
11.1 Key management policies
Public key cryptography, as readers will know, is the technique introduced by Diffie and Hellman [30] whereby each principal is endowed with a pair of keys, one public and one private, whose associated cryptographic transformations are the inverse of each other. The public key is widely
disseminated while the private key is kept secret. This can be used for encryption or for digital signature. By publishing an encryption key and keeping the corresponding decryption key private, anyone can, using the public key, encrypt messages that only the holder of the private key will be able to decrypt. By publishing a signature verification key and keeping the corresponding signature creation key private, a principal can generate signatures that anybody can verify using the public key but which no other principal could have produced.

For such a system to work on a large scale, a way to manage and distribute public keys must be deployed. In particular, one must avoid the “man in the middle” attacks that become possible if malicious principals can convince their unsuspecting victims to accept forged public keys as those of their intended correspondents.

The CCITT X.509 recommendation [23], published in 1988, was the first serious attempt at such an infrastructure. It was part of the grander plan of X.500, a global distributed directory intended to assign a unique name to every principal (person, computer, peripheral, etc.) in the world — so called Distinguished Names. In this context, X.509 used certificates (i.e., signed statements) that bound unique names to public keys. Originally this was meant to control which principals had the right to modify which subtrees of X.500, but soon its use as an identity instrument became prevalent and it is used today to certify the public keys used with SSL, the protocol used for secure access to Web sites. Web sites wishing to accept credit card transactions typically have an encryption key certified by a company such as Verisign whose public key is well known; customers entering credit card numbers or other sensitive data can check the certificate to ensure that the public key with which the data will be encrypted is certified by Verisign to belong to the intended destination. X.509 is thus an example of a hierarchical public key infrastructure with a small number of global roots — master certification authorities on which all name certificates ultimately depend.

However the software that did most to bring public key cryptography into the mainstream was Zimmermann’s Pretty Good Privacy — better known as simply PGP [80] — which has become the de facto standard for email encryption. One of PGP’s conceptual innovations was the rejection of this hierarchical infrastructure of certification authorities in favour of a decentralised “web of trust” in which all the users, as peers, mutually certify the validity of the keys of their interlocutors. Users may thus obtain uncertified keys over insecure channels, as long as they can build “chains of trust” starting from people they know and leading to those keys.

There have been at least two attempts to get the best of both worlds.
Ellison’s Simple Public Key Infrastructure (SPKI), later to join forces with Rivest and Lampson’s Simple Distributed Security Infrastructure (SDSI) [31, 32, 64], also rejected the concept of a single global certification authority. They bind keys directly to capabilities rather than via names. One of the core concepts is that of local names — identifiers that do not have to be globally unique as long as they are unique in the context in which they are used. Global names can be reintroduced as needed by placing a local name in the relevant context. So “Microsoft’s public key” becomes “DNS’s .com’s Microsoft’s public key”, with DNS being a privileged context.

Without a single root, a user of the system must repeatedly make decisions on the validity of arbitrary keys and may at times be requested
to express a formal opinion on the validity of the key of another principal (by “signing” it). For consistency it is desirable that these actions be governed by a policy. Let us examine some examples — we shall refer to PGP for concreteness, since this is the most widely deployed system among end users.

The “signature of Alice on Bob’s key” is actually a signature on the combination of Bob’s public key and Bob’s name. It means: “I, Alice, solemnly certify that, to the best of my knowledge, this key and this name do match”. To Charlie, who must decide on the validity of Bob’s key, this statement is only worth as much as Alice’s reputation as an honest and competent introducer; in fact, PGP lets you assign a rating (denoted as “trust level”) to each introducer, as well as a global confidence threshold that must be reached (by adding up the ratings of all the introducers) before a key can be considered as valid. For example you may request two signatures from marginally trusted introducers, or just one from a fully trusted one; but someone with a higher paranoia level might choose five and two respectively.

Such a rule would amount to a policy stating which keys to accept as valid. But the interesting aspects, as usual, come up in the details. A fundamental but easily neglected element of this policy would be a precise operational definition of when to classify an introducer as untrusted, marginally trusted or fully trusted.

The dual problem, equally interesting and probably just as neglected by individual users of PGP, is that of establishing a policy to govern one’s signing of other people’s keys. This is important if one wishes to be considered as a trustworthy introducer by others. One possible such policy might say:

1. I shall only certify a key if I have received or checked it in a face-to-face meeting with its owner.

2. I shall only certify a key if I have personally verified the passport of its owner.

3. Whenever I sign a key, I shall record date, key id and fingerprint in a signed log that I keep on my Web page.

Such a policy is known as a certification practice statement, and can offer some procedural guarantees about the quality of the certifications that one has performed. It gives an independent observer a chance to assess the relative quality of the certification offered by different introducers (assuming that their claims about compliance can be believed).
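The validity rule described above (two signatures from marginally trusted introducers, or one from a fully trusted one) is easy to write down precisely. The sketch below is our own illustration rather than PGP’s actual code; the numeric ratings and the default threshold are assumptions chosen to reproduce that example rule, and a more paranoid user would simply raise the threshold.

```python
# Toy web-of-trust validity check: introducer ratings are summed and
# compared against the user's confidence threshold.
RATING = {"untrusted": 0, "marginal": 1, "full": 2}

def key_is_valid(signers, trust_level, threshold=2):
    """signers: introducers who signed the (key, name) binding;
    trust_level: introducer -> 'untrusted' | 'marginal' | 'full'."""
    score = sum(RATING[trust_level.get(s, "untrusted")] for s in signers)
    return score >= threshold

trust = {"alice": "full", "dave": "marginal", "eve": "marginal"}
print(key_is_valid({"alice"}, trust))               # True: one fully trusted
print(key_is_valid({"dave", "eve"}, trust))         # True: two marginal
print(key_is_valid({"dave"}, trust))                # False: below threshold
print(key_is_valid({"dave"}, trust, threshold=5))   # a more paranoid setting
```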
An observer could for example remark that the three-point policy above, while apparently very strict, does not actually ascertain whether the named principal controls the private key corresponding to the public key being signed. Alice might follow the above policy and still sign a public key that Bob presents as his, but which he instead just copied off Charlie’s Web page. This would not allow Bob to read Alice’s (or anybody’s) correspondence to Charlie, but it would enable him to damage Alice’s
reputation as a trustworthy introducer (“Look, she signed that this key belongs to Bob, but it’s not true! She’s too gullible to be an introducer!”). We might try to fix this hole by adding a challenge-response step to the policy: Alice shall only sign Bob’s key if Bob is able to sign a random number chosen by Alice.

One lesson from all this is that policies, like ideas, tend to only become clear once we write them down in detail. It will be much harder to
spot a methodological flaw if the de facto policy has never been explicitly stated. This even applies to the above “fix”: without a more explicit description of how to perform the challenge-response, it is impossible to say whether the proposed exchange is safe or still vulnerable to a “middleperson” attack. For example, Bob might offer to certify Charlie’s key and simultaneously present it to Alice as his own for her to certify. She gives him a random challenge, he passes it to Charlie and Charlie provides the required signature. Bob now sends this signature to Alice who mistakenly certifies the key as Bob’s.

Certification practice statements are even more important when we are dealing with a commercial or government certification authority rather than with private individuals using PGP. Such statements typically also set out the precise circumstances in which certificates will be revoked, and what liability (if any) will be borne for errors.
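To illustrate how much the details matter, here is one possible way of making the challenge-response step explicit: rather than signing a bare random number, which Bob could relay verbatim to Charlie, the prover signs a message that binds the claimed name, the public key and the nonce together, so that a relayed challenge would ask Charlie to endorse the claim that the key belongs to Bob. The sketch uses Ed25519 from the Python cryptography package; the message format is an illustrative assumption, not a standard protocol, and we do not claim it closes every middleperson variant.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def certification_challenge(claimed_name: str, public_key_bytes: bytes) -> bytes:
    """Build a challenge bound to the claimed identity and key, not a bare nonce."""
    nonce = os.urandom(16)
    return b"certify|" + claimed_name.encode() + b"|" + public_key_bytes + b"|" + nonce

def proves_possession(public_key, message: bytes, signature: bytes) -> bool:
    """True iff the signature over the bound message verifies under the key."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

# Honest run: Bob really holds the private half of the key he presents.
bob_priv = ed25519.Ed25519PrivateKey.generate()
bob_pub = bob_priv.public_key()
pub_bytes = bob_pub.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
msg = certification_challenge("Bob", pub_bytes)
assert proves_possession(bob_pub, msg, bob_priv.sign(msg))
# If Bob relays msg to Charlie, Charlie is asked to sign a statement naming
# Bob as the key's owner, which an honest Charlie should refuse.
```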
11.2 Corporate email
Another scenario calling for a policy unrelated to access control and not derived from one of the classical models is offered by corporate email. As the Microsoft trial demonstrated, a company can easily make the case against itself through the informal (but often revealing) messages that its executives exchange via email while discussing the campaign against their competitors.

A company aware of this precedent, and concerned that its off-the-record internal messages could be used against it, might decide to get rid of them at the first sign of trouble. But this could be very risky for a company already under investigation, as a court could punish it for contempt or obstruction. So a company may establish a policy to delete all email messages older than, say, one year. If the company has information in its archives, a court might force its disclosure; if the company deletes the information once the investigation has started, it is at risk; but if the company has an explicit established policy of not archiving old email, then it cannot be faulted for not being able to produce old correspondence.

Another example comes from our university. Examiners are required to destroy exam papers, working notes and other documents after four months. If they were kept too long, students could acquire the right to see them under data protection law, and this would violate our confidentiality policy; but destroying them too soon might prejudice appeals.

A policy of timely destruction has to address a number of practical issues (such as system backups and data on local discs), which makes its implementation nontrivial. A simple technique might involve encrypting all messages before storing them, using a company-wide limited-lifetime key held in a tamper resistant box that deletes it after the specified time interval. Efficient purging of unreadable messages is left as a garbage collection task for the system administrator.

Of course none of this stops people from taking copies of messages while they are still readable, but:
1. If we wanted to stop that, we would need further and more complicated mechanisms to implement the policy.

2. It would never be possible to implement such a policy in a completely watertight manner: after all, a determined employee could always print out the mail or, if even that were forbidden, photograph the screen.
3. We probably don’t want that anyway: it would be wrong to see the legitimate user reading email as the potential enemy when the real one is his carelessness. Taking a permanent copy of an important message should be allowed, as long as this exceptional action requires explicit confirmation and is audited.

At any rate, apart from the implementation details, the point we want to emphasise here is the use of a policy as a legal defence. The security property of interest in this case is not access control but plausible deniability. (It’s not really a matter of “no-one may read X after time T” but “even the system administrator will not be able to create a user who can”.)
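The limited-lifetime key technique sketched above can be illustrated in a few lines. This is a toy model, assuming one symmetric key per retention period (Fernet, from the Python cryptography package) and a class standing in for the tamper-resistant box; in a real deployment the key would never leave the box.

```python
from cryptography.fernet import Fernet

class RetentionKeyBox:
    """Toy stand-in for the tamper-resistant box: one key per retention period."""
    def __init__(self):
        self._keys = {}                       # period id -> symmetric key

    def key_for(self, period: str) -> bytes:
        return self._keys.setdefault(period, Fernet.generate_key())

    def expire(self, period: str) -> None:
        self._keys.pop(period, None)          # key destroyed; ciphertexts dead

box = RetentionKeyBox()

def archive(message: bytes, period: str) -> bytes:
    """Store only ciphertext; readability depends on the period key surviving."""
    return Fernet(box.key_for(period)).encrypt(message)

stored = archive(b"off-the-record remarks", "2000-Q1")
print(Fernet(box.key_for("2000-Q1")).decrypt(stored))   # still readable
box.expire("2000-Q1")                                   # the box deletes the key
# Any later key_for("2000-Q1") mints a fresh key that cannot decrypt
# 'stored': purging the ciphertext is now mere garbage collection.
```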
12 Automated Compliance Verification
Once policy is refined from a general model to a specific target, there is interest in a system that would automatically verify whether any given proposed action is acceptable.

Blaze, Feigenbaum and Lacy introduced the concept of trust management [19]: a unified approach to specifying security policies and credentials and to verifying compliance. The system they proposed and built, PolicyMaker, includes an application-independent engine whose inputs are policy assertions, security credentials (i.e., “certificates”) and the proposed action, and whose output is a series of “acceptance records” that say which assertions, if any, authorise the action. The idea is for this generic engine to be configured by an appropriate application-specific policy. Requests for security-critical actions must be accompanied by appropriate credentials in order to satisfy the system that the principal issuing the request has the authority to do so.

Related work includes SPKI and SDSI, which address not only authorisation but also naming, i.e. the association of identities to public keys. PolicyMaker explicitly refuses to deal with the problem of naming; its authors argue that it is orthogonal to authorisation and therefore irrelevant to the issue of compliance checking.

The successor to PolicyMaker, called KeyNote, is now RFC 2704 [18]. PolicyMaker was designed for generality, as a framework in which to explore trust management concepts, perhaps at the expense of efficiency: for example, the assertions could be arbitrary programs. KeyNote is instead designed for simplicity, competitiveness and efficiency, with the aim of being fielded in real applications. Popular open source projects including the Apache-SSL Web server and the OpenBSD operating system already use KeyNote.

13 A Methodological Note

As a final exhibit in this gallery of examples it is worth mentioning our study of the security requirements for a computer-based National Lottery system [5]. More than the security policy model in itself, what is most instructive in this case is the methodology employed for deriving it.

Experienced software engineers know that perhaps 30% of the cost of a software product goes into specifying it, 10% into coding, and the remaining 60% on maintenance.
Specification is not only the second most expensive item in the system development life cycle, but is also where the most expensive things go wrong. The seminal study by Curtis, Krasner and Iscoe of large software project disasters found that failure to understand the requirements was mostly to blame [25]: a thin spread of application domain knowledge typically led to fluctuating and conflicting requirements, which in turn caused a breakdown in communication. They suggested that the solution was to find an ‘exceptional designer’ with a deep understanding of the problem who would assume overall responsibility.

But there are many cases where an established expert is not available, such as when designing a new application from scratch or when building a competitor to a closed, proprietary system whose behaviour can only be observed at a distance. It therefore seemed worthwhile to see if a high quality security specification could be designed in a highly parallel way, by getting a lot of different people to contribute drafts in the hope that most of the possible attacks would be considered in at least one of them.

We carried out such an experiment in 1999 by recruiting volunteers from a captive audience of final year undergraduates in computer science at the University of Cambridge: we set one of their exam questions to be the definition of a suitable security policy for a company planning to bid for the licence to the British National Lottery.

The model answer had a primary threat model that attackers, possibly in cahoots with insiders, would try to place bets once the result of the draw was known, whether by altering bet records or forging tickets. The secondary threats were that bets would be placed that had not been paid for, and that attackers might operate bogus vending stations that would pay small claims but disappear if a client won a big prize.

The security policy that follows logically from this is that bets should be registered online with a server that is secured prior to the draw, both against tampering and against the extraction of sufficient information to forge a winning ticket; that there should be credit limits for genuine vendors; and that there should be ways of identifying bogus vendors. Once the security policy has been developed in enough detail, designing enforcement mechanisms should not be too hard for someone skilled in the computer security art.

Valuable and original contributions from the students came at a number of levels, including policy goal statements, discussions of particular attacks, and arguments about the merits of particular protection mechanisms. At the level of goals, for example, one candidate assumed that the customer’s rights must have precedence: “All winning tickets must be redeemable!
So failures must not allow unregistered tickets to be printed.” Another candidate assumed the contrary, and thus the “worst outcome should be that the jackpot gets paid to the wrong person, never twice.” Such goal conflicts are harder to identify when the policy goals are written by a single person.

As for attacks, some candidates suggested using the broadcast radio clock signal as an authentication input to the vending terminals; but one candidate correctly pointed out that this signal could be jammed without much difficulty. This caused some consternation to the auditor of a different online gaming system, which appears to be vulnerable to time signal spoofing.

There was a lot of discussion not just on how to prevent fraud but how to assure the public that the results were trustworthy, by using
techniques such as third party logging or digital signatures. The candidates’ observations on protection mechanisms also amounted to a very complete checklist. Items such as ‘tickets must be associated with a particular draw’ might seem obvious, but a protocol design that used a purchase date, ticket serial number and server-supplied random challenge as input to a MAC computation might appear plausible to a superficial inspection. The evaluator might not check to see whether a shopkeeper could manufacture tickets that could be used in more than one draw. Experienced designers appreciate the value of such checklists.

The lesson drawn from this case study was that requirements engineering, like software testing and unlike software development, is susceptible to parallelisation. When developing the threat analysis, security requirements and policy model for a new system, rather than paying a single consultant to think about a problem for twenty days, it will often be more efficient to pay fifteen consultants to think about it for a day each and then have an editor spend a week hammering their ideas into a single coherent document.
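The flaw is easy to exhibit in code. In the sketch below (key and field names are invented), the plausible-looking MAC omits the draw identifier, so an identical ticket would verify in any draw; including the draw number in the MAC input is the one-line fix corresponding to the checklist item.

```python
import hashlib
import hmac

SERVER_KEY = b"illustrative server-side MAC key"

def mac(*fields: bytes) -> bytes:
    """MAC over '|'-separated fields (field encoding kept naive for brevity)."""
    return hmac.new(SERVER_KEY, b"|".join(fields), hashlib.sha256).digest()

def flawed_ticket(date: bytes, serial: bytes, challenge: bytes) -> bytes:
    return mac(date, serial, challenge)            # nothing ties it to a draw

def sound_ticket(date: bytes, serial: bytes, challenge: bytes, draw: bytes) -> bytes:
    return mac(date, serial, challenge, draw)      # bound to exactly one draw

t42 = sound_ticket(b"2000-05-01", b"12345", b"nonce", b"draw-42")
t43 = sound_ticket(b"2000-05-01", b"12345", b"nonce", b"draw-43")
assert not hmac.compare_digest(t42, t43)   # the same ticket fails in draw 43
```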
14 Conclusions
A security policy is a specification of the protection goals of a system. Many expensive failures are due to a failure to understand what the system security policy should have been. Technological protection mechanisms such as cryptography and smartcards may be more glamorous for the implementer, but technology-driven designs have a nasty habit of protecting the wrong things.

At the highest level of abstraction, a security policy model has little if any reference to the mechanisms that will be used to implement it. At the next level down, a protection profile sets out what a given type of system or component should protect, without going into implementation detail, and relates the protection mechanisms to threats and environmental assumptions. A security target gives a precise statement of what a given system or component will protect and how. Especially at the highest levels, the policy functions as a means of communication. Like any specification, it is a contract between the implementer and the client — something that both understand and by which both agree to be bound.

Our historical perspective has shown how security policies were first formally modelled in the 1970s to manage disclosure threats in military systems. They were then extended to issues other than confidentiality and to problems other than access control. We have also seen a spectrum of different formulations, from the more mathematically oriented models that allow one to prove theorems to informal models expressed in natural language. All have their place. Often the less formal policies will acquire more structure once they have been developed into protection profiles or security targets and the second- and third-order consequences of the original protection goals have been discovered.
We now have a sufficiently large gallery of examples, worked out in varying levels of detail, that when faced with a project to design a new system, the security engineer should first of all assess whether she can avoid reinventing the wheel by adopting one of them. If this is not possible, familiarity with previous solutions is always helpful in coming up with an appropriate new idea. Finally, the methodological issues should not be underestimated: security always benefits from peer review, and many
heads are better than one.
15 Acknowledgements
The authors are grateful to Jeremy Epstein, Virgil Gligor, Paul Karger, Ira Moskowitz, Marv Schaefer, Rick Smith, Karen Spärck Jones and Simon Wiseman for helpful discussions. Portions of this chapter will appear in Ross Anderson’s book Security Engineering [8], to which the reader should refer for more detail. Other portions have appeared in Jong-Hyeon Lee’s PhD dissertation [49] and in other publications by the authors that were cited in the relevant sections [10, 11, 12, 68, 69, 5].
References

[1] Edward Amoroso. Fundamentals of Computer Security Technology. Prentice-Hall, Englewood Cliffs, New Jersey, 1994. ISBN 0-13-305541-8.

[2] J. Anderson. “Computer Security Technology Planning Study”. Tech. Rep. ESD-TR-73-51, AD-758 206, ESD/AFSC, Hanscom AFB, Bedford, MA, Oct 1972.

[3] M. Anderson, C. North, J. Griffin, R. Milner, J. Yesberg and K. Yiu. “Starlight: Interactive Link”. In “12th Annual Computer Security Applications Conference”, pp. 55–63. IEEE, 1996. ISBN 0-8186-7606-X.

[4] Ross Anderson. “A Security Policy Model for Clinical Information Systems”. In “Proceedings of the IEEE Symposium on Research in Security and Privacy”, pp. 30–43. IEEE Computer Society Press, Oakland, CA, May 1996.

[5] Ross Anderson. “How to Cheat at the Lottery (or, Massively Parallel Requirements Engineering)”. In “Proceedings of the Annual Computer Security Applications Conference 1999”, Phoenix, AZ, USA, 1999. URL http://www.cl.cam.ac.uk/~rja14/lottery/lottery.html.

[6] Ross J. Anderson. “The DeCODE Proposal for an Icelandic Health Database”. Læknabladhidh (The Icelandic Medical Journal), 84(11):874–875, Nov 1998. URL http://www.cl.cam.ac.uk/users/rja14/iceland/iceland.html. The printed article is an excerpt from a document produced for the Icelandic Medical Association; the full text of the latter is available online.

[7] Ross J. Anderson. “Comments on the Security Targets for the Icelandic Health Database”, 1999. URL http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/iceland-admiral.pdf.

[8] Ross J. Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley, 2001. ISBN 0-471-38922-6.

[9] Ross John Anderson. “Why Cryptosystems Fail”. Communications of the ACM, 37(11):32–40, 1994.

[10] Ross John Anderson. “Security in Clinical Information Systems”. BMA Report, British Medical Association, Jan 1996. ISBN 0-7279-1048-5.
[11] Ross John Anderson and Jong-Hyeon Lee. “Jikzi: A New Framework for Secure Publishing”. In “Proceedings of Security Protocols Workshop ’99”, Cambridge, Apr 1999.

[12] Ross John Anderson and Jong-Hyeon Lee. “Jikzi – A New Framework for Security Policy, Trusted Publishing and Electronic Commerce”. Computer Communications, to appear.

[13] L. Badger, D. F. Sterne, D. L. Sherman, K. M. Walker and S. A. Haghighat. “Practical Domain and Type Enforcement for UNIX”. In “Proceedings of the 5th USENIX UNIX Security Symposium”, pp. 66–77. Oakland, CA, May 1995.

[14] D. Elliot Bell and Leonard J. LaPadula. “Secure Computer Systems: Mathematical Foundations”. Mitre Report ESD-TR-73-278 (Vol. I–III), Mitre Corporation, Bedford, MA, Apr 1974.

[15] T. Benkart and D. Bitzer. “BFE Applicability to LAN Environments”. In “Seventeenth National Computer Security Conference”, pp. 227–236. NIST, Baltimore, Maryland, 11–14 Oct 1994.

[16] T. Berson and G. Barksdale. “KSOS-Development Methodology for a Secure Operating System”. In “Proc. NCC”, vol. 48, pp. 365–371. AFIPS Press, Montvale, NJ, Jun 1979.

[17] Ken Biba. “Integrity Considerations for Secure Computing Systems”. Mitre Report MTR-3153, Mitre Corporation, Bedford, MA, 1975.

[18] Matt Blaze, Joan Feigenbaum, John Ioannidis and A. Keromytis. “The KeyNote Trust-Management System Version 2”. IETF RFC 2704, Internet Engineering Task Force, Sep 1999. URL http://www.cis.ohio-state.edu/htbin/rfc/rfc2704.html.

[19] Matt Blaze, Joan Feigenbaum and Jack Lacy. “Decentralized Trust Management”. In “Proceedings of the IEEE Symposium on Research in Security and Privacy”. IEEE Computer Society Press, Oakland, CA, May 1996.

[20] W. E. Boebert and R. Y. Kain. “A Practical Alternative to Hierarchical Integrity Policies”. In “Proceedings of the 8th National Computer Security Conference”, pp. 18–27. NIST, 1985.

[21] David F. C. Brewer and Michael J. Nash. “The Chinese Wall Security Policy”. In “1989 IEEE Symposium on Security and Privacy”, pp. 206–214. Oakland, CA, 1989.
[22] C. Cant and S. Wiseman. “Simple Assured Bastion Hosts”. In “13th Annual Computer Security Applications Conference”, pp. 24–33. IEEE Computer Society, 1997. ISBN 0-8186-8274-4.

[23] CCITT. “Data Communications Networks Directory”. Tech. Rep. 8, CCITT, Melbourne, Nov 1988. Recommendations X.500–X.521, IXth Plenary Assembly.
[24] David D. Clark and David R. Wilson. “A Comparison of Commercial and Military Computer Security Policies”. In “1987 IEEE Symposium on Security and Privacy”, pp. 184–194. Oakland, CA, 1987.

[25] Bill Curtis, Herb Krasner and Neil Iscoe. “A Field Study of the Software Design Process for Large Systems”. Communications of the ACM, 31(11):1268–1287, Nov 1988.
[26] I. Denley and S. Weston-Smith. “Privacy in clinical information systems in secondary care”. British Medical Journal, 318:1328–1331, May 1999.

[27] Dorothy E. Denning. “A Lattice Model of Secure Information Flow”. Communications of the ACM, 19(5):236–243, May 1976. ISSN 0001-0782. Papers from the Fifth ACM Symposium on Operating Systems Principles (Univ. Texas, Austin, Tex., 1975).

[28] Dorothy E. Denning. “A Lattice Model of Secure Information Flow”. Communications of the ACM, 19(5):236–243, May 1976. ISSN 0001-0782.

[29] Dorothy E. R. Denning. Cryptography and Data Security. Addison-Wesley, Reading, 1982. ISBN 0-201-10150-5.

[30] Whitfield Diffie and Martin E. Hellman. “New directions in cryptography”. IEEE Transactions on Information Theory, IT-22(6):644–654, 1976.

[31] Carl Ellison. “The nature of a useable PKI”. Computer Networks, 31(8):823–830, May 1999.

[32] Carl M. Ellison, Bill Frantz, Butler Lampson, Ron Rivest, Brian M. Thomas and Tatu Ylonen. “SPKI Certificate Theory”. IETF RFC 2693, Internet Engineering Task Force, Sep 1999. URL http://www.cis.ohio-state.edu/htbin/rfc/rfc2693.html.

[33] J. Epstein, H. Orman, J. McHugh, R. Pascale, M. Branstad and A. Marmor-Squires. “A High Assurance Window System Prototype”. Journal of Computer Security, 2(2–3):159–190, 1993.

[34] J. Epstein and R. Pascale. “User Interface for a High Assurance Windowing System”. In “Ninth Annual Computer Security Applications Conference”, pp. 256–264. IEEE, Orlando, Florida, USA, 6–10 Dec 1993. ISBN 0-8186-4330-7.

[35] Glenn Faden. “Reconciling CMW Requirements with Those of X11 Applications”. In “Proceedings of the 14th Annual National Computer Security Conference”, Washington, DC, USA, Oct 1991. Architecture of the windowing portion of Sun’s CMW.

[36] J. S. Fenton. Information Protection Systems. PhD dissertation, Cambridge University, 1973.

[37] D. Ferraiolo and R. Kuhn. “Role-Based Access Controls”. In “15th NIST-NCSC National Computer Security Conference”, pp. 554–563. Oct 1992.

[38] Simon N. Foley. “Aggregation and separation as noninterference properties”. Journal of Computer Security, 1(2):158–188, 1992.

[39] L. J. Fraim. “SCOMP: A Solution to the Multilevel Security Problem”. Computer, 16(7):26–34, Jul 1983.

[40] T. Fraser. “LOMAC: Low Water-Mark Integrity Protection for COTS Environments”. In “Proceedings of the 2000 IEEE Symposium on Security and Privacy”, pp. 230–245. IEEE Computer Society Press, 2000.

[41] J. A. Goguen and J. Meseguer. “Security Policies and Security Models”. In “Proceedings of the 1982 Symposium on Security and Privacy (SSP ’82)”, pp. 11–20. IEEE Computer Society Press, Los Alamitos, CA, USA, Apr 1990.
[42] R. D. Graubart, J. L. Berger and J. P. L. Woodward. “Compartmented Mode Workstation Evaluation Criteria, Version 1”. Tech. Rep. MTR 10953 (also published by the Defense Intelligence Agency as document DDS-2600-6243-91), The MITRE Corporation, Bedford, MA, USA, Jun 1991. Revised requirements for the CMW, including a description of what they expect for Trusted X.

[43] Michael A. Harrison, Walter L. Ruzzo and Jeffrey D. Ullman. “Protection in Operating Systems”. Communications of the ACM, 19(8):461–471, Aug 1976. ISSN 0001-0782.

[44] G. Huber. “CMW Introduction”. ACM SIGSAC, 12(4):6–10, Oct 1994.
[45] M. H. Kang and I. S. Moskowitz. “A Pump for Rapid, Reliable, Secure Communications”. In “Fairfax 93: 1st ACM Conference on Computer and Communications Security, 3–5 November 1993, Fairfax, Virginia”, pp. 118–129. ACM Press, New York, NY, USA, 1993. ISBN 0-89791-629-8.

[46] M. H. Kang, J. N. Froscher and I. S. Moskowitz. “An Architecture for Multilevel Secure Interoperability”. In “13th Annual Computer Security Applications Conference”, pp. 194–204. IEEE Computer Society, San Diego, CA, USA, 8–12 Dec 1997. ISBN 0-8186-8274-4.

[47] M. H. Kang, I. S. Moskowitz, B. Montrose and J. Parsonese. “A Case Study of Two NRL Pump Prototypes”. In “12th Annual Computer Security Applications Conference”, pp. 32–43. IEEE, San Diego, CA, USA, 9–13 Dec 1996. ISBN 0-8186-7606-X.

[48] P. A. Karger, V. A. Austell and D. C. Toll. “A New Mandatory Security Policy Combining Secrecy and Integrity”. Tech. Rep. RC 21717 (97406), IBM, Mar 2000.

[49] Jong-Hyeon Lee. “Designing a reliable publishing framework”. Tech. Rep. 489, University of Cambridge Computer Laboratory, Apr 2000.

[50] Mark Lomas. “Auditing against Multiple Policies (Transcript of Discussion)”. In “Proceedings of Security Protocols Workshop 1999”, No. 1796 in Lecture Notes in Computer Science, pp. 15–20. Springer-Verlag, Apr 1999.

[51] Konrad Lorenz. Er redete mit dem Vieh, den Vögeln und den Fischen (King Solomon’s ring). Borotha-Schoeler, Wien, 1949.
[52] Daryl McCullough. “A Hookup Theorem for Multilevel Security”. IEEE Transactions on Software Engineering, 16(6):563–568, Jun 1990. ISSN 0098-5589. Special Section on Security and Privacy.

[53] J. McLean. “Security Models”. In “Encyclopedia of Software Engineering”, John Wiley & Sons, 1994.

[54] John McLean. “A comment on the ‘basic security theorem’ of Bell and LaPadula”. Information Processing Letters, 20(2):67–70, Feb 1985. ISSN 0020-0190.
[55] D. P. Moynihan. Secrecy — The American Experience. Yale University Press, 1999. ISBN 0-300-08079-4.

[56] Paul Mukherjee and Victoria Stavridou. “The Formal Specification of Safety Requirements for Storing Explosives”. Formal Aspects of Computing, 5(4):299–336, 1993.
[57] M. Nash and R. Kennett. “Implementing Security policy in a Large Defence Procurement”. In “12th Annual Computer Security Applications Conference”, pp. 15–23. IEEE, San Diego, CA, USA, 9–13 Dec 1996. ISBN 0-8186-7606-X.

[58] National Security Agency. “The NSA Security Manual”. Tech. rep., NSA. URL http://www.cl.cam.ac.uk/ftp/users/rja14/nsaman.tex.gz. (Leaked copy.)

[59] Roger Michael Needham and Michael Schroeder. “Using Encryption for Authentication in Large Networks of Computers”. Communications of the ACM, 21(12):993–999, 1978.

[60] B. Clifford Neuman and John T. Kohl. “The Kerberos Network Authentication Service (V5)”. IETF RFC 1510, Internet Engineering Task Force, Sep 1993.

[61] NIST. “Common Criteria for Information Technology Security, Version 2.1”. Tech. Rep. ISO IS 15408, National Institute of Standards and Technology, Jan 2000. URL http://csrc.nist.gov/cc/.

[62] Public Record Office. “Functional Requirements for Electronic Record Management Systems”, Nov 1999. URL http://www.pro.gov.uk/recordsmanagement/eros/invest/reference.pdf.

[63] B. Pomeroy and S. Wiseman. “Private Desktops and Shared Store”. In “Computer Security Applications Conference”, pp. 190–200. IEEE, Phoenix, AZ, USA, 1998. ISBN 0-8186-8789-4.

[64] Ronald L. Rivest and Butler W. Lampson. SDSI – A Simple Distributed Security Infrastructure, Apr 1996. URL http://theory.lcs.mit.edu/~cis/sdsi.html. V1.0 presented at USENIX 96 and Crypto 96.

[65] J. Rushby and B. Randell. “A Distributed Secure System”. In “IEEE Computer”, pp. 55–67. IEEE, Jul 1983.

[66] R. R. Schell. “Computer Security: The Achilles’ Heel of the Electronic Air Force?” Air University Review, 30(2):16–33, Jan–Feb 1979.

[67] R. R. Schell, P. J. Downey and G. J. Popek. “Preliminary notes on the design of secure military computer systems”. Tech. Rep. MCI-73-1, Electronic Systems Division, Air Force Systems Command, 1 Jan 1973. URL http://seclab.cs.ucdavis.edu/projects/history/papers/sche73.pdf.

[68] Frank Stajano and Ross Anderson. “The Resurrecting Duckling: Security Issues in Ad-Hoc Wireless Networks”. In Bruce Christianson, Bruno Crispo and Mike Roe (eds.), “Security Protocols, 7th International Workshop Proceedings”, Lecture Notes in Computer Science. Springer-Verlag, 1999. URL http://www.cl.cam.ac.uk/~fms27/duckling/. Also available as AT&T Laboratories Cambridge Technical Report 1999.2.
[69] Frank Stajano and Ross Anderson. “The Resurrecting Duckling: Security Issues in Ad-Hoc Wireless Networks”. In “Proceedings of 3rd AT&T Software Symposium”, Middletown, New Jersey, USA, Oct 1999. URL http://www.cl.cam.ac.uk/~fms27/duckling/. Abridged and revised version of the Security Protocols article by the same name. Also available as AT&T Laboratories Cambridge Technical Report 1999.2b.
[70] David Sutherland. “A Model of Information”. In “Proc. 9th National Security Conference”, pp. 175–183. Gaithersburg, Md., 1986.

[71] US Department of Defense. “Technical Rationale behind CSC-STD-003-85: computer security requirements”. Tech. Rep. CSC-STD-004-85, US Department of Defense, 1985.

[72] US Department of Defense. “Trusted Computer System Evaluation Criteria”. Tech. Rep. 5200.28, US Department of Defense, 1985.

[73] K. G. Walter, W. F. Ogden, W. C. Rounds, F. T. Bradshaw, S. R. Ames and D. G. Shumway. “Models for Secure Computer Systems”. Tech. Rep. 1137, Case Western Reserve University, 31 Jul 1973. Revised 21 Nov 1973.

[74] K. G. Walter, W. F. Ogden, W. C. Rounds, F. T. Bradshaw, S. R. Ames and D. G. Shumway. “Primitive Models for Computer Security”. Tech. Rep. ESD-TR-74-117, Case Western Reserve University, 23 Jan 1974. URL http://www.dtic.mil.

[75] Clark Weissman. “Security Controls in the ADEPT-50 Time-Sharing System”. In “Proc. Fall Joint Computer Conference, AFIPS”, vol. 35, pp. 119–133. 1969.

[76] Clark Weissman. “BLACKER: Security for the DDN, Examples of A1 Security Engineering Trades”. In “Proceedings of the 1992 IEEE Computer Society Symposium on Security and Privacy (SSP ’92)”, pp. 286–292. IEEE, May 1992. ISBN 0-8186-2825-1.

[77] M. V. Wilkes and R. M. Needham (eds.). The Cambridge CAP Computer and its Operating System. North-Holland, New York, 1979. ISBN 0-444-00357-6.

[78] J. P. L. Woodward. “Security Requirements for System High and Compartmented Mode Workstations”. Tech. Rep. MTR 9992, Revision 1 (also published by the Defense Intelligence Agency as document DDS-2600-5502-87), The MITRE Corporation, Bedford, MA, USA, Nov 1987. The original requirements for the CMW, including a description of what they expect for Trusted X.

[79] P. Wright. Spycatcher – The Candid Autobiography of a Senior Intelligence Officer. William Heinemann Australia, 1987. ISBN 0-85561-098-0.

[80] Philip R. Zimmermann. The Official PGP User’s Guide. MIT Press, Cambridge, MA, 1995. ISBN 0-262-74017-6.