Automatic Verification of Confidentiality Properties of Cryptographic Programs

Nabil El Kadhi, L.E.R.I.A. Lab. Head Manager, 14-16 rue Voltaire, 94270 Le Kremlin-Bicêtre, el-kad [email protected]

April 20, 2004

Abstract

We propose a method to statically analyze cryptographic programs, based on the theory of abstract interpretation, enabling us to verify a policy of preservation of confidentiality for sensitive data. This method rests on an intruder model notably used in cryptographic protocol analysis by D. Bolignano, but differs from cryptographic protocol analysis in three ways. First, the analyzer is automatic. Second, the analyzer has only one program available, that is, only one of the roles of a cryptographic protocol, itself left unspecified. Third, the analyzer must work on a real program, not an abstraction of it, as is customary. All this makes verification more difficult than in the case of protocols. Nevertheless, the techniques that we propose, which rest on a calculus on symbolic constraints, can also prove useful in the setting of cryptographic protocols. We argue that our method is practical by investigating experimental evidence.

Keywords: static analysis, abstract interpretation, cryptographic protocols, security, confidentiality, constraints.

1 Introduction

Many protocols [1, 2] and standards have been proposed in order to ensure network security. A cryptographic protocol describes a set of actions and messages to be exchanged between legitimate protocol participants. Almost all communication security tools are based on the use of cryptographic algorithms (both symmetric [3] and asymmetric [4]) and hash functions [5, 6]. Verifying whether a given protocol really offers the required security is not just related to the algorithms used. Many other aspects have to be taken into account, such as implementation details and the format and contents of exchanged messages. This is one reason why it is interesting to verify actual cryptographic programs instead of mere abstract, simplified models of cryptographic protocols.

However, our original motivation for engaging in static analysis of cryptographic programs is somewhat different, and consists in enlarging the bag of tricks available to security officers in the presence of mobile code. A typical situation is in smart card security. Modern smart cards include JavaCard virtual machines, which allow card application providers to upload new applications onto client cards, or upgrade old applications on cards. Such smart cards typically handle sensitive data—PIN numbers for banking, transaction logs for loyalty applications, private health records for social security purposes—which uploaded applications will compute on, transmit to servers (hopefully in encrypted form), and so on. Uploading new applications first undergoes a certification process, involving format checking and bytecode verification among other activities. One crucial step of this certification process will be to extend these verifications to semantic verifications, notably to assess the security properties of the applications to upload. Checking that sensitive data as above will not be disclosed to untrusted third parties, although they can be sent (encrypted) over a network, is of paramount importance.

In general, verifying security properties of actual programs is our goal. We start with confidentiality properties in this paper—confidentiality is not only required for security, it is also a required basic block to establish other properties like authentication. We also deal with freshness, as confidentiality and freshness are interdependent concepts. From the technical standpoint, one source of inspiration comes from the particular brand of formal methods that have been used to verify properties of cryptographic protocols [7, 8, 9]. Our tack is indeed similar, but it differs sufficiently to have warranted more research.

Compared to cryptographic protocol verification, static analysis of cryptographic programs:

• should be automatic: relying on user input is infeasible for large upload volumes—all the more so if the upload architecture (servers, clusters hosting distributed application databases) has to work without human intervention;

• is harder than cryptographic protocol verification, as it has to work on actual programs, not on simplified models—since this simplification activity usually involves some cleverness from human experts;

• must work on a partial view of security protocols: the program to analyze is, in the best of all cases, one implementation of one role of a cryptographic protocol; the static analyzer must provide relevant security information without access to the rest of the specification of the protocol that the analyzed program participates in. In fact, the analyzed program may call cryptographic primitives without actually participating in any recognized cryptographic protocol at all—why should a pirate obey any particular standard?

Our paper is structured as follows. We survey related work in the field of formal methods for cryptographic protocol verification in Section 2. In Section 3, we recapitulate the security model our work is based on, which is due to D. Bolignano [8]. We then introduce a core language that displays the essential features that we are concerned with, and which will serve as a basis for the analysis we put forward in the sequel. This language and its concrete semantics (what the language constructs mean) are described in Section 4, while the abstract semantics that is at the core of our analysis technique is described, and related to the concrete semantics, in Section 5. The integration of roles for key servers is discussed briefly, and we illustrate a run of the S.S.P.V. static analyzer, based on the above abstract semantics, on the Yahalom protocol [2] in Section 6. We conclude in Section 7 by presenting possible extensions of our approach.

2 Related Work

Formal methods are widely used to verify software and hardware [10]. There are essentially four formal approaches to cryptographic protocol verification.

• Modal Logic: Security properties are expressed in a suitable modal logic allowing the specifier to describe beliefs of principals (participants in the protocol), jurisdictions, temporal and causal relations, etc. The pioneer is the so-called BAN logic [2], whose aim was to clarify classical informal arguments on authentication protocols. Many extensions to the BAN logic have also been proposed, for example SG-logic [11], or [12]. Using such modal logics requires a special idealization phase to take place before the protocol is analyzed. This phase consists in transforming the sequence of exchanged messages in the protocol into formulas, and requires human intervention.

• Model Checking: Describe the cryptographic protocol as an infinite transition system that models the interleaving of sequential executions of a finite number of so-called honest principals, plus infinitely many intruder processes. Intruders can send, read, delete or modify messages. This model was pioneered by Dolev and Yao [13], and extended and used by Meadows, notably to analyze key management protocols [7]. To check any security property, the model-checking approach amounts to exploring, exhaustively, every possible execution of the system consisting of honest principals and intruders in parallel. As this is impossible in general, only specific instances with finitely many principals and finitely many exchanged messages are tested. If an attack is found on such an instance, then it is an attack against the protocol. This is therefore a practical method to find attacks, and it was successfully demonstrated as such, see e.g. [14, 15, 16]. However, because it only considers particular instances of protocols, such a method is in general hopeless if we wish to ensure security guarantees, i.e., that whatever the number of principals, the number of messages, and the particular way all executions of principals are interleaved, the targeted security property holds.

• Process Calculus: The typical example is Abadi and Gordon's spi-calculus [17], a variant of the π-calculus extended with abstract cryptographic primitives. While the semantics of the spi-calculus is extremely precise, to our knowledge the special brand of barbed bisimulation needed to verify security properties in this context is not known to be computable. Special instances are, and Abadi [18] notably shows that for specific spi-calculus processes, secrecy can be verified automatically with a simple dedicated type system. However, the notion of secrecy of the spi-calculus is extremely strong, so strong in fact that no practical protocol in use today satisfies it. (A message is secret if no intruder can say anything about it, not even that the same message was sent twice.) We aim at verifying the more practical property of confidentiality: a message is confidential if no intruder can produce it. To illustrate the difference, note that a pair (M1, M2) is never secret (because any intruder can tell it is a pair); however, it is confidential provided M1 or M2 is.

• General Purpose Formal Methods: These methods are based on the same kind of model as those used in the model-checking approaches outlined above. However, they deal with infinite state spaces by relying on theorem provers, whether automated theorem provers or interactive proof assistants, to show universally quantified goals involving possibly infinite numbers of messages, principals, runs, etc. Typical examples are [19, 20, 8, 9].

We shall reuse models of cryptographic protocols of the kind used in the model-checking and general purpose formal methods approaches, because they allow us to express security properties in an adequate manner. Moreover, this kind of model adapts well to the case when the whole protocol is unspecified—recall that in analyzing a given program, we have no knowledge whatsoever of the roles of other principals—or even when the analyzed program does not participate in a protocol. To tackle the difficulty of handling infinite state systems, we shall rely on the theory of abstract interpretation [21], which allows us to get safe estimates of what data are indeed confidential, in finite time. Abstract interpretation is a framework and a theory, and we shall instantiate this theory to a particular domain of symbolic constraints, which we compute by using rules to be defined in Section 5.

3 The Security Model

We consider Dominique Bolignano's model [8], one model of the general purpose formal methods kind which is sufficiently general and well-defined for our purposes. In this model, the principals are identified and classified into trustable and non-trustable principals. Trustable principals are always assumed to play their role as stated by the protocol, i.e., to obey a given specification. There are no specific restrictions on the non-trustable principals; as Dominique Bolignano put it, "For non-trustable principals, the situation is different as we do not know how many they are, how they work, how they cooperate and we have to imagine the worst case" [8]. Therefore all non-trustable principals are aggregated into a so-called intruder, which is all-powerful in the following sense:

• every communication goes through the intruder. Sending a message over the network gives it to the intruder; receiving a message from the network means accepting one the intruder just made. This allows the intruder to intercept, forge, or throw away messages. As this also allows the intruder to pass messages on unchanged, normal behaviors are included as well, which entails that redirecting communications this way incurs no loss of generality [13];

• the intruder can encrypt and decrypt messages, build tuples, and read components of tuples, as many times as it wishes, producing messages as complex as required for any particular attack. However, it cannot do anything else—which essentially means that it cannot crack messages, i.e., it cannot deduce the plaintext from the ciphertext without knowing the inverse key.

The intruder will be represented by a set E of messages that it has got hold of. Initially, E is any specified set of initially leaked data. Then E evolves as the program executes: essentially, sending a message M over the network will mean adding M to E. On the other hand, receiving a message M will mean that M is any message at all that the intruder can deduce from E. The relation "the intruder can deduce M from E", which corresponds to Bolignano's known_in predicate [8], is formalized by the natural deduction rules of Figure 1. Formally, our messages M are taken from the algebra:

M ::= K | N | D | {M}^a_K | (M, ..., M)

A derivation is any finite tree whose nodes are judgments of the form E |→ M, linked through instances of the rules of Figure 1. Note that leaves occur on top and must be instances of the (Ax) rule or of (TupleI) with n = 0. The judgment at the root of the derivation is the conclusion of the derivation. We say that the intruder can deduce M from E provided there is a derivation whose conclusion is E |→ M.
Similarly, we introduce a deduction system for judgments E |; M, meaning "the intruder can deduce M from E using encryption, cracking (not just decryption), tuple formation and decomposition". We don't show the rules here; they are the same as those of Figure 1 with |; replacing |→, except for (CryptE), which is weakened to:

(|;CryptE)  from Γ |; {m}^a_K, infer Γ |; m
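Both deduction relations can be decided by a simple saturation procedure: first close E under the elimination rules (tuple projection, decryption), then check that M can be composed from the closure using the introduction rules. The following Python sketch illustrates this under an encoding of our own, not the paper's formalism: atomic messages are strings, tuples are Python tuples, and a ciphertext {m}^a_K is the tagged triple ("enc", m, K); the `crack` flag switches from the |→ relation to the weakened |; relation.

```python
# Sketch of the intruder deduction relations E |-> M (known_in) and
# E |; M (cracking variant). Encoding is ours: nonces/keys/data are
# strings, tuples are Python tuples, a ciphertext {m}_K is
# ("enc", m, K). Inverse keys are given by the `inv` mapping.

def saturate(E, inv, crack=False):
    """Close E under tuple projection and decryption (the elimination rules)."""
    E = set(E)
    changed = True
    while changed:
        changed = False
        for m in list(E):
            parts = []
            if isinstance(m, tuple) and m and m[0] == "enc":
                _, body, key = m
                # (CryptE): decrypt only with the inverse key in hand,
                # unless `crack` allows breaking the cipher outright.
                if crack or inv.get(key) in E:
                    parts = [body]
            elif isinstance(m, tuple):
                parts = list(m)               # (TupleE_i): project components
            for p in parts:
                if p not in E:
                    E.add(p)
                    changed = True
    return E

def deducible(E, m, inv, crack=False):
    """E |-> m (or E |; m if crack=True): saturate, then compose."""
    S = saturate(E, inv, crack)
    def build(t):
        if t in S:
            return True                       # (Ax)
        if isinstance(t, tuple) and t and t[0] == "enc":
            _, body, key = t
            return build(body) and build(key) # (CryptI)
        if isinstance(t, tuple):
            return all(build(c) for c in t)   # (TupleI)
        return False
    return build(m)
```

For instance, with E = {{"secret"}_k1, k2} and Inv(k1, k2), the intruder deduces "secret" under |→; without k2 it only succeeds under the cracking relation |;.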

We describe the formal semantics of a core language that has explicit cryptographic and message exchange primitives in Section 4, and we shall show in Section 5 how the sets E, which are of unbounded size, can be described sufficiently accurately by means of constraints.

(Ax)  Γ, m |→ m

(TupleI)  from Γ |→ m1, ..., Γ |→ mn, infer Γ |→ (m1, ..., mn)

(TupleE_i)  from Γ |→ (m1, ..., mn), infer Γ |→ mi  (for every i, 1 ≤ i ≤ n)

(CryptI)  from Γ |→ m and Γ |→ K, infer Γ |→ {m}^a_K

(CryptE)  from Γ |→ {m}^a_K, Γ |→ K^-1 and Inv(K, K^-1), infer Γ |→ m

Figure 1: The natural deduction system CK0

A ::= x = y
    | ¬x = y
    | x := y
    | x := op(x1, x2, ..., xn)
    | y := (x1, x2, ..., xn)
    | detuple(x, (x1, ..., xn))
    | x := ?y
    | x!y
    | x := encrypt^a_y z
    | x := decrypt^a_y z
    | decryptfail^a_y z
    | x := fresh
    | x, y := keys^a

Figure 2: Atomic Actions

4 The Core Language and Its Concrete Semantics

Let us consider a toy language on which our analysis will be based. This language only has the primitives shown in Figure 2 available. Of these, some have been left unspecified: elementary operations can be whatever you wish here, e.g., arithmetic operations, reading pointers, writing into pointers. The precise semantics of these constructs is an issue orthogonal to our security verification problem. Including it would distract us from our purpose, and leaving it out allows us to concentrate on the main issues. Other primitives are not usually found in any real computer language. The channel read and write operations are present in Occam [22], but this is rather an exception. And most other operations, notably encryption, decryption, and nonce generation, are present in no language. However, in most languages, encryption, to take one example, will be implemented by calls to specific library functions, which can be recognized statically, i.e., by comparing their names with names of trusted encryption library functions. (This assumes that we have a database of functions trusted to achieve encryption—statically recognizing encryption algorithms by reading their code indeed seems impractical.) The task is slightly more complex in object-oriented languages like Java, where name resolution is not enough to identify specific methods. In this case, however, concrete type inference [23] is a well-known static analysis technique which allows

to reconstruct (a superset of) the actual classes C1, ..., Cn that each given object z may have. Invoking a virtual method m of an object z means that we call one of the actual methods C1.m, ..., or Cn.m: provided each is trusted as an encryption function implementing algorithm a, we may replace the call to z.m by our symbolic instruction x := encrypt^a_y z. Standard code transformations [24] show that we can then convert any such program into a control flow graph (CFG). A CFG is a finite directed graph with atomic actions as labels on edges. Formally, it is a 5-tuple (V, v0, F, E, δ) where V is a non-empty set of so-called vertices v (also called nodes, or program points), v0 ∈ V is the start node, F ⊆ V is the set of final nodes, E ⊆ V × V is the set of edges or transitions, and δ : E → A is a labeling. If (v, v′) is an edge and δ(v, v′) = a, then we shall write v −a→ v′. We define the semantics of this language as a small-step operational semantics. Semantic rules describe the evolution of an environment ρ describing the values of program variables: this is a map from program variables x ∈ X to values, in fact messages m ∈ M. Concrete semantic rules describe how the knowledge of the intruder (the set E) evolves as the given program runs. The concrete semantic rules have the form:

ρ, E  i −a→ i′  ρ′, E′

with side conditions describing the execution context and consequences of the rule. Figure 3 presents the concrete semantic rules.
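To make the small-step rules concrete, here is a hypothetical Python sketch of an interpreter for a few of the atomic actions of Figure 3, under an encoding of configurations (ρ, E) that is ours, not the paper's. Channel names are dropped, since every channel goes through the intruder anyway, and the intruder's choice of message on a channel read is supplied as a callback.

```python
# A minimal interpreter sketch for a few concrete semantic rules over a
# configuration (rho, E): rho maps variables to messages, E is the
# intruder knowledge set. Encoding and names are ours: an atomic action
# is a small tagged tuple.

def step(rho, E, action, receive=None):
    """Apply one atomic action; `receive` supplies the message the
    intruder chooses for a channel read (any m with E |-> m)."""
    rho, E = dict(rho), set(E)
    kind = action[0]
    if kind == "assign":                 # x := y
        _, x, y = action
        rho[x] = rho[y]
    elif kind == "tuple":                # x := (x1, ..., xn)
        _, x, xs = action
        rho[x] = tuple(rho[v] for v in xs)
    elif kind == "detuple":              # detuple(x, (x1, ..., xn))
        _, x, xs = action
        for v, m in zip(xs, rho[x]):
            rho[v] = m
    elif kind == "write":                # x!y : writing hands x's value to the intruder
        _, x = action
        E.add(rho[x])
    elif kind == "read":                 # x := ?y : the intruder picks the input
        _, x = action
        rho[x] = receive(E)
    return rho, E
```

For example, writing a tuple built from a sensitive variable immediately puts that tuple into E, which is exactly how confidentiality can be lost in the concrete semantics.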

Each action a transforms a configuration (ρ, E) into (ρ′, E′) as follows; components not mentioned are left unchanged.

• x = y:  provided ρ(x) = ρ(y)

• ¬x = y:  provided ρ(x) ≠ ρ(y)

• x := y:  ρ′ = ρ[x := ρ(y)]

• x := op(x1, ..., xn):  ρ′ = ρ[x := [[op]](ρ(x1), ..., ρ(xn))]

• y := (x1, ..., xn):  ρ′ = ρ[y := (ρ(x1), ..., ρ(xn))]

• detuple(x, (x1, ..., xn)):  provided ρ(x) = (m1, ..., mn);  ρ′ = ρ[x1 := m1, ..., xn := mn]

• x := fresh:  v ∈ N, E ̸|; v;  ρ′ = ρ[x := v]

• x, y := keys^a:  v1 ∈ N, v2 ∈ N, E ̸|; v1, E ̸|; v2, Inv^a(v1, v2);  ρ′ = ρ[x := v1, y := v2]

• x := encrypt^a_y z:  ρ′ = ρ[x := {ρ(z)}^a_key(ρ(y))]

• x := decrypt^a_y z:  provided ρ(z) = {m1}^a_key(m2) and Inv^a(m2, ρ(y));  ρ′ = ρ[x := m1]

• decryptfail^a_y z:  provided ¬(∃ m1, m2 ∈ M, a ∈ A · ρ(z) = {m1}^a_m2 ∧ Inv^a(m2, ρ(y)))

• x := ?y:  E |→ m;  ρ′ = ρ[x := m]

• x!y:  E′ = E ∪ {ρ(x)}

Figure 3: Concrete Semantic Rules

5 Abstract Semantics

The concrete semantics is not computable, so we approximate it through a computable abstract semantics, using the theory of abstract interpretation [21]. Our abstract semantics computes an upper approximation of a selected set of program properties. The abstract semantics handles enough information, and propagates enough constraints about the evolution of intruder knowledge and program variables, to be able to check that designated confidential values remain confidential throughout any execution. We represent sets of possible concrete configurations (ρ, E) by sets K of elementary constraints k, see Figure 4. These constraints handle enough information about the evolution of the intruder knowledge and the program variables to verify whether the desired security properties hold. The main idea of our abstraction is to propagate constraints that reflect the progression of intruder knowledge along the control flow graph. To do so, we define adequate abstract predicates to express which messages are definitely unknown to the intruder at each program point.

k ::= x1 ≈ x2
    | x1 ̸≈ x2
    | x ≈ (x1, x2, ..., xn)    (x, x1, x2, ..., xn ∈ X)
    | x ≈ {z}^a_y2             (x, z ∈ X ∧ y2 ∈ X ∪ Y)
    | Invabst^a(y1, y2)
    | ¬known_in(x1, x2, ..., xn; m1, m2, ..., mp)
    | ¬kapprox(x1, x2, ..., xn; m1, m2, ..., mp)

Figure 4: Elementary Constraints

Further details about our abstract semantics can be found in [25]. Abstract semantic rules are used to propagate constraints through program points. Those rules are summarized in Figure 5. The intruder knowledge is changed through the communication actions (x!y and x := ?y). To maintain a coherent constraint set, we define two abstract operators for the communication actions.

• Write operator: Since all communication channels are redirected to an all-powerful intruder, any information written on the network is automatically added to the intruder knowledge. The Write : K × X → K operator computes the new constraint set after an x!y operation by taking into account that new information has been added to the intruder knowledge set E. The main idea of the Write operator is to analyze each ¬known_in or ¬kapprox elementary constraint of the constraint set K and to decide whether it is still safe; if so, we keep it in the constraint set K′ after the write action.

• Read operator: The Read operator is defined to ensure the consistency of the constraint set K after any channel read action. It only modifies the ¬known_in and ¬kapprox constraints. A channel read adds no new knowledge to the intruder; this is why the same ¬known_in and ¬kapprox constraints are kept in the constraint set after the read action, with one small modification: the read variable is explicitly recorded to the left of the semicolon (;) in the ¬known_in and ¬kapprox constraints, to record that this information was previously known by the intruder.

Theorem 1
Read(¬kapprox(x1, ..., xn; y), x) = ¬kapprox(x1, ..., xn, x; y)
Read(¬known_in(x1, ..., xn; y), x) = ¬known_in(x1, ..., xn, x; y)

• ∃# operator: In order to ensure that constraints remain consistent, we use the ∃# operator when new constraints are added. This operator is an upper approximation of the classical ∃ operator. It acts as a projection on constraints, eliminating old values of the variables concerned by the new constraints:

∀I, E  [[∃# y · K]](I, E) ⊇ ∃v · [[K]](I[y := v], E)

The abstract semantic rules map the constraint set K before each action to the set K′ after it:

• ε:  K′ = K
• x = y:  K′ = K ∪ {x ≈ y}
• ¬x = y:  K′ = K ∪ {x ̸≈ y}
• x := y:  K′ = ∃#(x · K) ∪ {x ≈ y}
• y := op(x1, ..., xn):  K′ = ∃#(y · K)
• y := (x1, ..., xn):  K′ = ∃#(y · K) ∪ {y ≈ (x1, ..., xn)}
• detuple(x, (x1, ..., xn)):  K′ = ∃#(x1, ..., xn · K) ∪ {x ≈ (x1, ..., xn)}
• x := fresh:  K′ = ∃#(x · K) ∪ {¬kapprox(; x)}
• x, y := keys^a:  K′ = ∃#(x, y · K) ∪ {¬kapprox(; x), ¬kapprox(; y), Invabst^a(x, y)}
• x := encrypt^a_y z:  K′ = ∃#(x · K) ∪ {x ≈ {z}^a_y}
• x := decrypt^a_y z:  K′ = ∃# y2 · (∃#(x · K) ∪ {Invabst^a(y2, y), z ≈ {x}^a_y2})
• decryptfail^a_y z:  K′ = K
• x := ?y:  K′ = Read(∃#(x · K), x)
• x!y:  K′ = Write(K, x)

Figure 5: Abstract Semantics
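As an illustration, the two communication operators can be sketched on a simplified constraint representation of our own, where a triple ('not_known_in', left, right) stands for ¬known_in(left; right). Read follows Theorem 1 exactly; Write is shown here with a much cruder safety test than the one of Figure 6: it simply drops any constraint whose right-hand side mentions the written variable.

```python
# Sketch of the two abstract communication operators on a simplified
# constraint set K. Encoding is ours: ("not_known_in", left, right)
# stands for not-known_in(left; right), with left and right tuples of
# variable names.

def read_op(K, x):
    # Theorem 1: a read adds no intruder knowledge; the read variable x
    # is appended to the left of the semicolon to record that the
    # intruder already knew what it sent us.
    return {("not_known_in", left + (x,), right)
            for (_, left, right) in K}

def write_op(K, x):
    # Crude safety test (far weaker than Figure 6): once x is public,
    # drop every constraint whose right-hand side mentions x.
    return {k for k in K if x not in k[2]}
```

For example, after writing Kab on the network, a constraint ¬known_in(; Kab) cannot be kept, while ¬known_in(; Kas) survives, which matches the intuition behind the second row of the Yahalom analysis in Section 6.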

6 Confidentiality Preservation Verification

6.1 Global Description

A prototype of the Secret Preservation Property Verifier (S.S.P.V.), based on the presented abstract semantics, has been implemented. The analysis process is composed of two steps. The first step is a transformation of the program into what we call a symbolic program. A symbolic program is a control flow graph in S.S.A. (static single assignment) form [26] where transitions are labeled by the actions of Figure 2. The analysis process needs some parameterization in order to correctly verify programs. In fact, what we aim to ensure is a confidentiality preservation property. The user must specify some parameters about the security policy and an abstract server representation. Initial assumptions are used by the verifier to generate an initial set of constraints K0. The verifier propagates the constraints through the symbolic program from the initial program point to a terminal one. The use of widening guarantees the termination of the analysis, as shown in [25].

K → Write(x, K):

{¬known_in(~x; x)} → {⊤}
{¬kapprox(~x; x)} → {⊤}
{¬known_in(~x; x, m1, ..., mp)} → {⊤}
{¬kapprox(~x; x, m1, ..., mp)} → {⊤}
{¬known_in(~x; {x}^a_k)} → {⊤}
{¬kapprox(~x; {x}^a_k)} → {⊤}
{¬known_in(~x; m), x ≈ {m}^a_k} → {x ≈ {m}^a_k}
{¬kapprox(~x; m), x ≈ {m}^a_k} → {x ≈ {m}^a_k}
{¬known_in(~x; m, ..., mp), x ≈ {m}^a_k} → {x ≈ {m}^a_k}
{¬kapprox(~x; m, ..., mp), x ≈ {m}^a_k} → {x ≈ {m}^a_k}
{¬known_in(~x; {x}^a_k), ¬known_in(~x; k)} → {¬known_in(~x; {x}^a_k), ¬known_in(~x; k)}
{¬known_in(~x; {x}^a_k), ¬kapprox(~x; k)} → {¬known_in(~x; {x}^a_k), ¬kapprox(~x; k)}
{¬kapprox(~x; {x}^a_k), ¬known_in(~x; k)} → {¬kapprox(~x; {x}^a_k), ¬known_in(~x; k)}
{¬kapprox(~x; {x}^a_k), ¬kapprox(~x; k)} → {¬kapprox(~x; {x}^a_k), ¬kapprox(~x; k)}
{¬known_in(~x; {x}^a_k), ¬known_in(~x; k), x ≈ {y}^a_k} → {¬known_in(~x; {x}^a_k), ¬known_in(~x; k), x ≈ {y}^a_k}
{¬kapprox(~x; {x}^a_k), ¬known_in(~x; k), x ≈ {m}^a_k} → {¬kapprox(~x; {x}^a_k), ¬known_in(~x; k), x ≈ {m}^a_k}

Figure 6: Write Operator Definition (main cases)

6.2 Security Policy Specification

In many cryptographic protocols [2], [18] there is a particular participant, commonly called the server role, ensuring generation and distribution of communication session keys. As mentioned before, we need some information about the server role in order to correctly verify the analyzed program. S.S.P.V. is a parameterized tool that can be used to verify a wide range of security policies dealing with the confidentiality preservation property. In fact, a set of parameters allows the user to specify:

• Server encryption algorithm: we have to specify whether the server uses a symmetric or an asymmetric encryption/decryption algorithm to protect data exchanged with the other protocol participants.

• Initial confidential shared data: Verifying a confidentiality preservation property means verifying whether initially confidential data remains confidential after the program execution. Typically, we want to verify that the initial pre-arranged shared keys remain confidential after program executions. We have to specify which keys are considered confidential at the beginning of the program execution.

• Initial confidential private keys: When the server is based on an asymmetric encryption/decryption algorithm, we have to specify the confidential key of each potential protocol participant; the verifier automatically associates, to each confidential key, an abstract public key.

Taking as input a symbolic program and a set of parameters, S.S.P.V. generates an initial set of constraints K0 and infers a final constraint set Kf by applying the abstract semantic rules.
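A hypothetical sketch of this parameterization follows; the parameter names and input format are ours for illustration, not S.S.P.V.'s actual interface. It shows how the security-policy parameters described above could be turned into an initial constraint set K0.

```python
# Hypothetical parameterization of the verifier (names are ours):
# the policy parameters yield an initial constraint set K0, using the
# same simplified constraint encoding as before, where
# ("not_known_in", (), (m,)) stands for not-known_in(; m).

def initial_constraints(params):
    K0 = set()
    # Shared keys declared initially confidential stay out of the
    # intruder's knowledge at program start.
    for k in params.get("confidential_shared_keys", []):
        K0.add(("not_known_in", (), (k,)))
    # With an asymmetric server, each participant's private key is
    # confidential and paired with an abstract public key.
    if params.get("server_algorithm") == "asymmetric":
        for priv, pub in params.get("private_keys", {}).items():
            K0.add(("not_known_in", (), (priv,)))
            K0.add(("invabst", pub, priv))
    return K0
```

For the Yahalom example of the next section, a symmetric server with confidential shared keys Kas and Kbs would yield K0 = {¬known_in(; Kas), ¬known_in(; Kbs)}.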

6.3 An Example: Analysis of Yahalom Role A

In order to illustrate the verification process, let us consider a symbolic program (obtained by applying the transformation step) of role A of the Yahalom protocol [2]. As shown in Figure 7, the symbolic program contains both internal and communication actions. In the Yahalom protocol, the server uses a symmetric algorithm with some pre-arranged keys. Those keys are supposed to be initially confidential, and shared between legitimate protocol participants. The server generates a particular key Kab, called the session key, and communicates it in encrypted form to participants asking for a secure communication session.

Na := fresh
write Ida
write Na
read x
x = detuple(cp(pl, keyKas, ag 'SYM' 'A1'), cr2)
dcr1 := decrypt(pl, keyKas, ag 'SYM' 'A1')
pl = detuple(Idb, Na, Nb, Kab, Ida)
Nbc := encrypt(Nb, keyKab, ag 'SYM' 'A2')
write cr2
write Nbc

Figure 7: Symbolic Program of Role A of the Yahalom Protocol

Let us now comment on the analysis results presented in Figure 8. First, we apply the analysis on the symbolic program without any initial hypothesis. The obtained constraints indicate just structural relations between program variables: we know which variable is a ciphertext, which is a tuple, and so on, as shown in the first row of Figure 8. When applying the verification process with initial constraints describing just the server message (¬known_in(Kab)), we note that the secrecy of Kab is no longer preserved by the analyzed program. This is explained by the fact that Kab is communicated as a ciphertext under a key whose confidentiality is not guaranteed. The third row of Figure 8 shows the analysis results when we explicitly indicate that some keys are initially confidential, but without giving any hypothesis about the server role. The symbolic program guarantees the confidentiality preservation of the initially confidential keys (¬known_in(Kas), ¬known_in(Kbs)). Next, we apply the analysis with initial hypotheses about both the shared confidential keys and the server-generated keys. In addition to the constraints about the program variables, we obtain three ¬known_in constraints. Those constraints indicate which pieces of information are surely confidential (not deducible from the intruder knowledge) after the program execution. The last row of Figure 8 shows the analysis results when simulating a multi-session execution by adding a global infinite loop. As we can see, the widening (used with loops) preserves the essential information about the confidentiality preservation property.

Initial Constraint Set K0 → Final Constraint Set:

1. Empty set →
   Nbc ≈ cp(Nb; Kab; ), pl ≈ Tp(Nb, Kab, idb), Inverse(Kas, Kas)A1

2. ¬known_in(Kab), Inverse(Kab, Kab)A2 →
   Nbc ≈ cp(Nb; Kab; ), pl ≈ Tp(Nb, Kab, idb), Inverse(Kas, Kas)A1

3. ¬known_in(Kas), Inverse(Kas, Kas)A1, ¬known_in(Kbs), Inverse(Kbs, Kbs)A1 →
   ¬known_in(Kas), Inverse(Kas, Kas)A1, ¬known_in(Kbs), Inverse(Kbs, Kbs)A1, pl ≈ Tp(Nb, Kab, idb)

4. ¬known_in(Kab), Inverse(Kab, Kab)A2, ¬known_in(Kas), Inverse(Kas, Kas)A1, ¬known_in(Kbs), Inverse(Kbs, Kbs)A1 →
   ¬known_in(Kab), Inverse(Kab, Kab)A2, ¬known_in(Kas), Inverse(Kas, Kas)A1, ¬known_in(Kbs), Inverse(Kbs, Kbs)A1, pl ≈ Tp(Nb, Kab, idb)

5. Multi-session (we add a global infinite loop), same K0 as row 4 →
   ¬known_in(Kab), Inverse(Kab, Kab)A2, ¬known_in(Kas), Inverse(Kas, Kas)A1, ¬known_in(Kbs), Inverse(Kbs, Kbs)A1

Figure 8: Yahalom Role A Analysis Results

7 Conclusion

We have presented a new approach to statically verify the preservation of confidentiality properties in computer programs. Our analyzer is based on static analysis and abstract interpretation techniques; it is therefore automatic and rests on sound principles. It also handles programs, not just complete protocols. We hope to have demonstrated through an example that our system is usable and gives sensible results. We believe that the constraint-based technique we use can also be fruitfully applied in the more classical domain of cryptographic protocol verification.

References

[1] R. M. Needham and M. D. Schroeder, “Using encryption for authentication in large networks of computers,” Tech. Rep. CSL-78-4, Xerox Palo Alto Research Center, Palo Alto, CA, USA, December 1978. Reprinted June 1982.

[2] M. Burrows, M. Abadi, and R. Needham, “A logic of authentication,” ACM Transactions on Computer Systems, vol. 8, pp. 18–36, February 1990.

[3] B. Schneier, “The IDEA encryption algorithm,” Dr. Dobb’s Journal of Software Tools, vol. 18, pp. 50, 52, 54, 56, 106, December 1993.

[4] B. Schneier, Applied Cryptography: Protocols, Algorithms, and Source Code in C. New York, NY, USA: John Wiley and Sons, Inc., 1994.

[5] “Digital signature standard (DSS).” Federal Information Processing Standards Publication 186, May 1994.

[6] G. Tsudik, “Message authentication with one-way hash functions,” ACM SIGCOMM Computer Communications Review, vol. 22, pp. 29–38, October 1992.

[7] C. Meadows, “Applying formal methods to the analysis of a key management protocol,” Journal of Computer Security, vol. 1, no. 1, 1992.

[8] D. Bolignano, “An approach to the formal verification of cryptographic protocols,” in 3rd ACM Conference on Computer and Communications Security (C. Neuman, ed.), (New Delhi, India), pp. 106–118, ACM Press, March 1996.

[9] L. Paulson, “Proving properties of security protocols by induction,” in Proceedings of the 10th IEEE Computer Security Foundations Workshop, pp. 70–83, IEEE Computer Society Press, 1997.

[10] R. P. Kurshan, “Formal verification in a commercial setting,” in Proceedings of the 34th ACM Conference on Design Automation (DAC-97), (Anaheim, CA, USA), pp. 258–262, ACM Press, June 1997.

[11] S. Gurgens, “SG logic — A formal analysis technique for authentication protocols,” in 5th International Workshop on Security Protocols, (Paris, France), Springer-Verlag Lecture Notes in Computer Science 1361, April 1997.

[12] J. Zhou and D. Gollmann, “Towards verification of non-repudiation protocols,” in Proceedings of the 1998 International Refinement Workshop and Formal Methods Pacific, (Canberra, Australia), pp. 370–380, Springer, September 1998.

[13] D. Dolev and A. C. Yao, “On the security of public key protocols,” IEEE Transactions on Information Theory, vol. IT-29, pp. 198–208, March 1983.

[14] G. Lowe, “An attack on the Needham-Schroeder public-key authentication protocol,” Information Processing Letters, vol. 56, pp. 131–133, November 1995.

[15] G. Leduc, O. Bonaventure, E. Koerner, L. Léonard, C. Pecheur, and D. Zanetti, “Specification and verification of a TTP protocol for the conditional access to services,” in Proceedings of the 12th J. Cartier Workshop on Formal Methods and their Applications: Telecommunications, VLSI and Real-Time Computerized Control Systems, (Montréal, Canada), October 1996.

[16] W. Marrero, E. M. Clarke, and S. Jha, “Model checking for security protocols,” Tech. Rep. CMU-SCS-97-139, Carnegie Mellon University, May 1997.

[17] M. Abadi and A. D. Gordon, “A calculus for cryptographic protocols: The spi calculus,” in Proceedings of the 4th ACM Conference on Computer and Communications Security, pp. 36–47, ACM Press, 1997.

[18] M. Abadi, “Secrecy by typing in security protocols,” Journal of the ACM, vol. 46, pp. 749–786, September 1999. An abstract appeared in the Proceedings of TACS ’97, Springer-Verlag LNCS 1281.

[19] R. A. Kemmerer, “Analyzing encryption protocols using formal description techniques,” IEEE Journal on Selected Areas in Communications, vol. 7, pp. 448–457, May 1989.

[20] P.-C. Cheng and V. D. Gligor, “On the formal specification and verification of a multiparty session protocol,” in Proceedings of the IEEE Symposium on Research in Security and Privacy, pp. 216–233, 1990.

[21] P. Cousot and R. Cousot, “Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints,” in Conference Record of the 4th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, (Los Angeles, CA, USA), pp. 238–252, January 1977.

[22] A. W. Roscoe and C. A. R. Hoare, “The laws of Occam programming,” Theoretical Computer Science, vol. 60, pp. 177–229, September 1988.

[23] O. Agesen, J. Palsberg, and M. I. Schwartzbach, “Type inference of SELF: Analysis of objects with dynamic and multiple inheritance,” in European Conference on Object-Oriented Programming (O. Nierstrasz, ed.), (Kaiserslautern, Germany), pp. 247–267, Springer-Verlag LNCS 707, July 1993.

[24] A. V. Aho, R. Sethi, and J. D. Ullman, Compilers: Principles, Techniques, and Tools. Addison-Wesley, 1986.

[25] N. El Kadhi, “Analyse statique et préservation de secret.” Troisième JavaCard Workshop, ENS Paris, December 1999.

[26] R. Cytron, J. Ferrante, B. K. Rosen, and M. N. Wegman, “Efficiently computing static single assignment form and the control dependence graph,” ACM Transactions on Programming Languages and Systems, vol. 13, pp. 451–490, October 1991.
