Situational Identity: a Person-centered Identity Management Approach
Tatyana Ryutov and Clifford Neuman
Information Sciences Institute, University of Southern California
4676 Admiralty Way, Suite 1001, Marina del Rey, CA 90292
Technical Report ISI-TR-630

Abstract
Emerging personalized context-aware services require collection and analysis of user-related information. User-centered identity management becomes a key technology for controlling personal information. In our view, true user-centered identity management is more than just letting users (vs. institutions) manage their personal information. It is an individualistic approach that recognizes the unique needs of an individual, which depend on personal preferences, psychological traits, and situational factors. In this paper we consider a user-centered identity negotiation approach built upon the social concept of situational identity, which varies across time and place according to the needs and expectations of the individual. Selective disclosure allows a user to maintain different personas for different interaction environments (to emphasize this, we name our approach person-centered rather than the more conventional user-centered). Situational identity incorporates purposeful construction of an identity with a strategic outcome in mind. The preferred outcome can be expressed in terms of desired privacy, monetary benefits, safety, or other factors. This is consistent with how people interact in the physical world. The approach accounts for the influence of social theory and the contextual information that characterizes a particular situation.

1. Introduction
Our life in a digital world has changed dramatically: today activities such as shopping, discussion, entertainment, business, and scientific collaboration are conducted in the cyber world. These changes greatly influence our understanding of digital identity and access management paradigms. This paper takes a new look at identity management and proposes a solution built upon the social concept of situational identity, which varies across time and place according to the needs and expectations of the individual. Traditionally, identity management has been viewed from a service provider's point of view, for maintenance of account information to control access to resources owned by an organization. The risk resided on the side of the resource provider and, therefore, access control policies took into account only the interests of the resource owners. Computer-mediated interactions have since evolved from the single organization to an open world. Maintaining identifiers and accounts for all potential users is not practical. Authenticating the identity (in the traditional sense) of a stranger may not provide sufficient information for access control purposes. The decision to grant access is often based on the characteristics of the requester rather than its identity [2][3].

Current interactions involve mutual exchange of resources that each party controls and values. For example, users provide credit card numbers in exchange for goods or services. Often people cannot access a service without submitting profile information. For example, in order to access a corporate white paper, one has to supply e-mail, affiliation, and other information that may be used by the corporation to send advertisements or could be sold to other companies. Early computer-mediated interactions tended to be one-way and very impersonal. Now service providers are increasingly urged to offer personalized services, which recognize the unique needs of individual consumers. In order to provide specific personalized value-added services, the collection and analysis of user-related information is essential. These trends require users to disclose a rich set of information, including dynamic properties (e.g., the user's current location and environment) and sensitive personal information which can be bought and sold. A number of user-centered identity management approaches [5][8][9][12] are emerging to allow increased control over personal data. However, these solutions do not address the complex perceptions that people have about interactions: whether to participate in the interaction, what information to release to which entity under certain circumstances, and the effects of disclosure. Current systems are based on simple and rigid models, which lack a methodology for dealing with an individual in the digital world. We believe that a true user-centered identity management approach should not only give users control over their personal information (e.g., medical, financial, and employment records), but also recognize the unique needs of an individual. The richness of electronic communications mirrors physical-world experience. Resources may be accessed in a variety of contexts: social, business, government, health care, etc. To provide an intuitive way for users to deal with the complexity and richness of computer-mediated interactions, we propose an approach that explicitly models the largely unconscious way people interact in social environments. In the physical world, any individual holds multiple identities and chooses to engage the identity most appropriate for that particular context. With little consciousness, people quickly evaluate the context of a given situation and determine which segment of their identity to convey. We attempt to model this process by proposing a tool which implements (with certain limitations) the concept of situational identity. Partial identities [10] and facets [4][11] have been proposed to let people switch identities between different contexts. These approaches are mostly concerned with user privacy. The novelty of this work lies in providing additional flexibility for users to decide which identity to present based on personal preferences and a strategic outcome in mind. The preferred outcome can be expressed in terms of desired privacy, monetary benefits, positive self-presentation, safety, or other factors. For example, based on personality, a user may choose to act as a thrifty shopper, a privacy-concerned shopper, or a merchant-reliability-concerned shopper. To make this approach practical, the access management policies must be defined in a way that supports user choice. This approach is close in spirit to the secure networked architecture based on interlocking rings (SNAIR) under development at ISI. In this system, the level of trust placed in architectural components and the type of virtual system employed vary according to the situational context and the perspective of a node running part of a virtual system.

2. What is digital identity?
Digital identity is a complex notion that is not fully understood and is still a subject of research. Clearly, the concept of identity is far broader than just a name that uniquely identifies a person or an account/password combination. To answer the question of digital identity, we need to look at it from the perspectives of the resource provider and the requester. In order to engage in a transaction, the resource provider and the requester have to go through an identity negotiation stage. During this stage, each party tries to agree on an identity of the other that it is willing to accept. For the purpose of this paper we are only interested in negotiation of the user identity. In open environments, a resource provider is more concerned with an identity that allows it to judge the trustworthiness of the party making a request for a resource, rather than with differentiating one identified individual from another. For a resource provider, identity is information about the requester that predicts the behavior of the requester with respect to the requested resources. In particular, when there is evidence that the user behaves as expected, the trust in such a user is high. A resource provider needs information that assures it of the likelihood of appropriate user behavior. For example, to provide access to an expensive online scientific instrument, the owner may consider user information such as membership at a research lab and certification of completed training with a high score (a form of reputation) to guarantee safe instrument operation (expected behavior) by the user. From a user's perspective, the decision to present a particular identity is based on the situational context of the interaction, the communication partner, and personal considerations. To support person-centered identity negotiation, a resource provider needs to accept more than one type of identity. A user needs to understand his options and select the best identity for the context. This choice will influence the nature and extent of user participation, which in turn affects the risks and exposure of the communicating parties. We believe that the social concept of situational identity provides an intuitive way to support user decisions during the identity negotiation process.

2.1 Situational Identity
Situational identity arises when an individual constructs and presents any one of a number of possible social identities, depending on the situation: a religion, an ethnicity, or a lifestyle, as the context deems a particular choice desirable or appropriate [19]. The notion of situational identity is a dynamic one, in contrast to that of "fixed identity". In the real world, people easily switch between different situational identities. For example, a person who is half Italian and half French may want to identify with a particular ethnicity in some social situation (e.g., attending a soccer game). This choice may even be crucial for his personal security. Situational identity already exists in current systems but is not regarded as such. For example, in role-based access control (RBAC) [15] systems, users may take on different roles based on a specific task. A user may take a "Programmer" role most of the time and switch to an "Administrator" role only when he needs to run protected privileged commands, such as accessing passwords, installing software, etc. The user's choice to act with reduced privileges most of the time is dictated by the wish to keep system operation safe, trading user convenience for system safety.

In the physical world, a person is able to judge a situation and decide what the desirable outcome is and what he wants to disclose. However, relatively little is known about how people decide when to disclose personal information and how much information to reveal in any given situation. Possible aspects include positive self-presentation, privacy, costs and benefits of disclosure, and trust. A positive self-presentation is necessary for a person to deal effectively with the world. When developing a presentation to create a desired impression, people assess what is appropriate and expected in the situation and select the presentation depending on their personality [4]. Concerns about online privacy stem from the technology's ability to monitor and archive almost every aspect of users' behavior. Often a person desires privacy out of fear that information may be used against him. People usually prefer to know more about others while hiding their own shortcomings. This is consistent with the desire to maintain a positive self-concept. People have a level of privacy that varies across individuals based on the person's own perceptions and values. The multitude and variety of services that are becoming available to users (as well as different user personalities) lead us to believe that privacy is not the only concern. For example, if one has a choice whether to pay for a hotel with a credit card and get a discount (money back) or to pay with cash, the choice would depend on whether it is preferable to preserve anonymity or to pay less. Perceived trust assertions identified for the target service influence the way a user interacts with the service. Trust is subjective: it is a personal opinion which depends on the situation and the user's personality. For example, one online customer may participate in a transaction without taking into account the reliability of merchants, payment/delivery services, legal mechanisms that compensate losses, etc. Another user in the same situation may evaluate purchases quite differently, being reluctant to disclose identity and payment details to some service providers.

2.2 Acquisition of Situational Context
Situational context refers to the aspects of the interaction and environment that suggest appropriate and expected behavior, risks, goals, and value of interactions. When assessing contextual information, people rely on previous experiences and categorization to develop mental models of these situations and learn to associate particular fragments of their identity with specific situational contexts [4]. People compare the current environment to their mental model and make assumptions. A typical person cannot describe his mental models, and in many situations people are not even aware that these mental models exist. A mental model may not necessarily reflect a situation accurately. Still, it provides the necessary framework for people to quickly determine how to best present themselves. The number of possible online situations can be very large; categorization helps to reduce it to a smaller number of relevant contexts. For example, situations such as "buying a book" and "buying a CD" could be considered instances of a "buying a product" situation, where a user expects to be presented with several payment options and to be asked for a shipping address. In this context, the user may opt for monetary benefits when buying from trusted merchants and for greater privacy when dealing with unknown sellers.
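As a simple illustration of this categorization step, the Python sketch below maps concrete situation descriptions to a small set of contexts. The context names and keyword rules are hypothetical assumptions, not part of the paper's design; in practice they would be learned from the user's behavior or configured.

```python
# Minimal sketch (illustrative only): categorizing concrete online situations
# into a small set of situational contexts. Context names and keyword rules
# are assumptions for illustration.

CONTEXT_RULES = {
    "buying a product": ["buy", "purchase", "checkout", "order"],
    "work": ["corporate", "intranet", "timesheet"],
    "leisure": ["game", "music", "forum"],
}

def categorize(situation_description: str) -> str:
    """Map a concrete situation (e.g. 'buying a book') to a known context."""
    text = situation_description.lower()
    for context, keywords in CONTEXT_RULES.items():
        if any(word in text for word in keywords):
            return context
    return "unknown"   # fall back; the user could be asked to label it

print(categorize("buying a book"))   # -> "buying a product"
print(categorize("buying a CD"))     # -> "buying a product"
```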

3. Overview of a Situational Identity Management Tool

Increasing the number of identifiers and credentials that a user must manage can make a system unmanageable. Automation and system support are needed to manage situational identities in the digital world. By having the tools to control which aspects of identity to present in a particular situation, people can more appropriately organize and control their presentation to meet their needs, including the desire for privacy, perceived social acceptability, safety, and monetary benefits. In this section we provide a non-technical overview of a tool for managing situational identities. We believe that the rational choice theory [7] approach from social science is a promising way to build such a tool. In rational choice theory, individuals are seen as motivated by the goals that express their personal preferences. The theory is based on the idea that human actions are fundamentally rational in character and that people calculate the likely costs and benefits of any action before deciding what to do. Rational choice theory postulates that individuals must anticipate the outcomes of alternative courses of action and calculate the best alternative.

Consider a personal tamper-resistant hardware device which acts as a user agent for two main purposes:
1. The device securely stores identifiers and credentials from different service providers. These attributes include the identities held by a person, ranging from significant ones that uniquely identify a person (e.g., birth certificate, social security number, passport, and driver's license) to less significant ones: memberships in different organizations, affiliations, gender, etc. For each stored attribute, the tool maintains metadata that describes attribute sensitivity and other information.
2. When a user needs to access a particular service, the tool learns the security requirements of the target service and constructs the relevant situational identity based on the context of the interaction, the outcome preferred by the user, and the metadata associated with the stored attributes.

To calculate a situational identity for a particular interaction, the tool acts as a rational decision maker according to the assumptions of rational choice theory:
1. The agent is goal oriented: it tries to maximize the benefit; therefore it always chooses the most preferred option.
2. The agent has sets of hierarchically ordered preferences, or utilities. This assumes a choice between alternatives and the possibility of rank ordering these alternatives.
3. In choosing lines of behavior, the agent makes rational calculations with respect to:
   o determining and evaluating the consequence of each alternative;
   o determining the utility of each consequence with reference to the preference hierarchy;
   o discovering the best way to maximize the utility.

Formally, an agent needs to calculate the situational identity in an interaction with a party I who has security requirements S, given the context of an interaction xk (xk is in X, the set of all possible contexts). The agent faces solution choices ai (subsets of user attributes that satisfy the requirements S) from the set of alternatives A = {a1, a2, … , an}. The task of the agent is to single out one element of A. The scheme of the choice procedure employed by the rational agent is as follows. First, for each alternative ai, the tool calculates the set of all possible consequences Cj = {c1, c2, … , cm} of presenting the subset of user attributes ai to the communicating party, described by a consequence function Cons(ai) → Cj. Cj is a subset of C, the set of all possible consequences.
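To make the data the agent maintains more concrete, the Python sketch below shows one possible representation of stored attributes with sensitivity metadata and of a consequence function Cons(ai) over a chosen subset of attributes. The field names, sensitivity scale, and classification rules are illustrative assumptions, not part of the paper's specification.

```python
# Minimal sketch (illustrative only) of the agent's stored attributes with
# sensitivity metadata, and of a consequence function Cons(a_i) that maps an
# alternative (a subset of attributes) to the consequences of disclosing it.

from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Attribute:
    name: str          # e.g. "Name", "Employee_of_A", "payment_$15"
    sensitivity: int   # metadata: 0 = not sensitive (e.g. a capability), higher = more sensitive
    kind: str          # "identifier", "affiliation", "payment", ...

def consequences(alternative: FrozenSet[Attribute]) -> FrozenSet[str]:
    """Cons(a_i): the consequences of presenting this subset of attributes."""
    result = set()
    for attr in alternative:
        if attr.kind == "identifier":
            result.add("reveal_identity")
        elif attr.kind == "affiliation":
            result.add("reveal_affiliation")
        elif attr.kind == "payment":
            result.add("cash_" + attr.name.split("_", 1)[1])  # e.g. "cash_$15"
    if not any(a.kind == "payment" for a in alternative):
        result.add("no_payment")
    if not any(a.kind in ("identifier", "affiliation") for a in alternative):
        result.add("anonymous")
    return frozenset(result)
```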

To evaluate trust in the communicating party I given context xk, the agent employs a function Trust_Eval(I, xk) → tm. The agent defines a preference relation Util over C (typically represented by a numerical function) in the given context xk according to the desired outcome pn in P, the set of all possible outcomes: Util(pn, xk, tm, Cj) → nl, where nl is in N, the set of positive numbers. The agent then chooses, from the set A, the alternative ai that yields the best consequence; that is, the agent solves the optimization problem max over ai in A of Util(pn, xk, tm, Cons(ai)). In other words, the preference relation on A is induced from the composition of the consequence function and the preference relation on C. Utility is influenced by user personality.

To illustrate our approach, consider an example: a user wants to access an online conference room and needs to interact with an online "smart lock" via the situational identity tool. First, the tool needs to learn the security requirements S. Assume that the lock's access control policy states: a person can enter the online room if he is in the database of invited people (which requires revealing user identity), or if he is an employee of Company A and pays $5, or if he is anonymous and pays $15. There is obviously a choice of attributes that satisfy the requirements. Suppose the user wants to stay anonymous (privacy is the most desirable outcome). In this case the tool tries to construct the situational identity by revealing the smallest number of the least sensitive credentials, for example "a person who pays $15", which is presented to the lock. However, if the user wants to balance privacy and payment, then the choice is to identify the person as "employee of Company A". Note that capabilities fit well within the framework: a capability defines an anonymous person who has access to the service. Generally, capabilities will be assigned the lowest sensitivity level. If anonymity is desired, then the system will retrieve a capability first. In other cases other credentials might make more sense for the user; for example, consider the case when the user is not concerned with privacy and the service offers first-time users (who need to disclose some information) a gift or a promotional discount.

We now consider the tool operation in more detail. Let X be the set of possible contexts maintained by the tool:
X = {shopping, work, leisure}
Assume that the trust evaluation function returns one of three values:
Trust_Eval(I, xk) in {low, medium, high}
Let P be the set of user desired outcomes in all possible contexts:
P = {privacy, monetary_benefit, {privacy, monetary_benefit}}
Let C be the set of all possible consequences:
C = {reveal_identity, reveal_affiliation, anonymous, cash_$x, no_payment}
Let A be the set of alternatives constructed by the tool based on the security requirements S:
A = {Name, {Employee_of_A, payment_$5}, payment_$15}
The agent selects the context work as the most appropriate and calculates the trust level for the interaction party:
Trust_Eval(Lock, work) = high
Next, the tool calculates the consequences of each alternative:
Cons(Name) → c1 = (reveal_identity, no_payment)
Cons({Employee_of_A, payment_$5}) → c2 = (reveal_affiliation, cash_$5)
Cons(payment_$15) → c3 = (anonymous, cash_$15)

Assume that the utility function is bounded: 0 ≤ Util(pn, xk, tm, Cons(ai)) ≤ 10. The agent now has to calculate the utility for each of the possible outcomes. Not all of these calculations have to be done in real time. For example, the following values could be precomputed and stored along with the credentials:
Util(privacy, work, high, reveal_identity) → 2
Util(privacy, work, medium, reveal_identity) → 1
Util(privacy, work, low, reveal_identity) → 0
Util(privacy, work, high, reveal_affiliation) → 3
Util(privacy, work, medium, reveal_affiliation) → 2
Util(privacy, work, low, reveal_affiliation) → 1
Util(privacy, work, high, anonymous) → 8
Util(privacy, work, medium, anonymous) → 9
Util(privacy, work, low, anonymous) → 10
Util(monetary_benefit, work, high, no_payment) → 9
Util(monetary_benefit, work, medium, no_payment) → 8
Util(monetary_benefit, work, low, no_payment) → 7
6 ≥ Util(monetary_benefit, work, high/medium/low, cash_$x) > Util(monetary_benefit, work, high/medium/low, cash_$y) for all x, y: x < y
Util({privacy, monetary_benefit}, work, high/medium/low, reveal_identity) → 2
Util({privacy, monetary_benefit}, work, high/medium/low, cash_$x) → 7
Util({privacy, monetary_benefit}, work, high/medium/low, no_payment) → 4
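The sketch below illustrates one way such precomputed utility values could be cached alongside the credentials. The dictionary keys mirror the arguments of Util and the numbers are taken from the list above; the storage format itself, the lookup helper, and the neutral default are assumptions.

```python
# Sketch (illustrative only) of caching precomputed utility values alongside
# the credentials. Keys are (desired_outcome, context, trust_level, consequence);
# the numeric values are the ones used in the example above.

PRECOMPUTED_UTIL = {
    ("privacy", "work", "high",   "reveal_identity"):    2,
    ("privacy", "work", "medium", "reveal_identity"):    1,
    ("privacy", "work", "low",    "reveal_identity"):    0,
    ("privacy", "work", "high",   "reveal_affiliation"): 3,
    ("privacy", "work", "high",   "anonymous"):          8,
    ("monetary_benefit", "work", "high", "no_payment"):  9,
    # ... remaining combinations are filled in the same way
}

def util(outcome, context, trust, consequence, default=5):
    """Look up a cached utility; fall back to a neutral default if missing."""
    return PRECOMPUTED_UTIL.get((outcome, context, trust, consequence), default)
```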

If the user's desired outcome is privacy, the agent calculates the utility of each alternative with respect to privacy and chooses to present the alternative payment_$15:
Util(privacy, work, high, c1) → 2
Util(privacy, work, high, c2) → 3
Util(privacy, work, high, c3) → 8

If the desired outcome is to pay less, the agent chooses to present Name:

Util(monetary_benefit, work, high, c1) → 9
8 ≥ Util(monetary_benefit, work, high, c2) > Util(monetary_benefit, work, high, c3)

If the user wants to balance money and privacy, the choice would be to present {Employee_of_A, payment_$5}. After the situational identity is constructed, the system needs to authenticate it to the service.
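The following sketch pulls the worked example together: it enumerates the three alternatives, assigns each consequence tuple a utility for the three desired outcomes, and selects the maximizing alternative. The utility numbers for privacy and monetary benefit follow the values above; where the text leaves values open (the balanced outcome and the cash penalties), the numbers are illustrative assumptions.

```python
# Sketch of the choice step for the smart-lock example: the agent evaluates
# Util(p_n, x_k, t_m, Cons(a_i)) for each alternative and presents the argmax.
# Alternative names and consequences follow the example in the text; utility
# numbers marked "assumed" are illustrative where the text leaves them open.

ALTERNATIVES = {
    "Name":                       ("reveal_identity",    "no_payment"),
    "Employee_of_A + payment_$5": ("reveal_affiliation", "cash_$5"),
    "payment_$15":                ("anonymous",          "cash_$15"),
}

# Util(outcome, context=work, trust=high, consequence tuple) -> score in [0, 10]
UTIL = {
    "privacy": {
        ("reveal_identity", "no_payment"):    2,
        ("reveal_affiliation", "cash_$5"):    3,
        ("anonymous", "cash_$15"):            8,
    },
    "monetary_benefit": {
        ("reveal_identity", "no_payment"):    9,
        ("reveal_affiliation", "cash_$5"):    6,   # cheaper payment scores higher
        ("anonymous", "cash_$15"):            4,   # assumed: bounded by the cash_$5 score
    },
    "privacy_and_monetary_benefit": {
        ("reveal_identity", "no_payment"):    2,
        ("reveal_affiliation", "cash_$5"):    7,   # assumed: balances both goals
        ("anonymous", "cash_$15"):            5,   # assumed: expensive anonymity
    },
}

def choose(desired_outcome: str) -> str:
    """Return the alternative a_i maximizing Util for the desired outcome."""
    scores = UTIL[desired_outcome]
    return max(ALTERNATIVES, key=lambda a: scores[ALTERNATIVES[a]])

print(choose("privacy"))                       # -> payment_$15
print(choose("monetary_benefit"))              # -> Name
print(choose("privacy_and_monetary_benefit"))  # -> Employee_of_A + payment_$5
```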

4. Discussion and Future Work
In this section we briefly outline some research challenges that should be addressed to support an automated situational identity management paradigm. As discussed in the previous section, the agent needs to determine the outcome preferred by the user for each interaction type. The tool can store a catalog that links different service types, situational contexts, and user goals. However, it is burdensome for users to set up all policies in advance. A valuable situational identity management tool must reduce user effort. We want the system to be as unobtrusive as possible. Therefore, the tool should be able to learn based on how the individual interacts in various situational contexts. The system should make guesses, allow the user to alter the assumptions, and remember user decisions, which could be applied automatically in the future.

In our example, situational identity represents a collection of user attributes. This representation could be extended to include additional conditions. These conditions could be context related (e.g., require evaluation of some system predicates); usage related (e.g., restrictions on the secondary usage of the identity information once released to a communicating party); or define a set of obligations which require the party to take additional steps [6]. This provides additional flexibility but makes the system more complex. In particular, this approach may require the parties to participate in a negotiation process to agree upon the set of conditions associated with the identity information.

In our future work we will consider specification of the exact structure of the security requirements S, situational contexts X, possible consequences C, and user preferable outcomes P. The security requirements must support the situational identity idea; in other words, a user should have a choice of alternatives. The type of access granted to a user should depend on the asserted identity, which affects the trust that the service places in the user. If no appropriate credentials are found (the generated situational identity ai is empty), either the user should relax the restrictions or the service needs to reconsider the requirements (which may require negotiation). Other research directions include developing an approach to represent metadata about user attributes in order to support the rational decision maker approach, and modeling procedural aspects of the decision making process. We need to understand how to construct the set of possible consequences C, and how to define the utility function Util(), which embodies individual preferences for outcomes of a transaction. Another issue is the uncertainty in situations when the tool cannot determine and evaluate the consequence of each alternative due to, for example, insufficient information.

In our example, the agent is fully aware of the set of alternatives from which it has to choose. It neither invents nor discovers new courses of action (the chosen ai cannot be outside the set A). This is a rather restricted approach. Not all choices may be revealed by the service at once; some could become available as a result of the identity negotiation process. Depending on the preferred outcome P, the user can envision different negotiation strategies: bargaining for revealing less sensitive information vs. bargaining for a better deal in terms of money or service guarantees.

Trust assertions identified for the target service influence the calculated situational identity; therefore a trust metric is essential for our model. Trust could be calculated based on third-party recommendations and prior interaction experience with the party: positive outcomes of interactions preserve or amplify trust, while trust erodes with negative experiences. When a user has no pre-existing knowledge about the service, initial trust could be established by monitoring the service behavior during the identity negotiation process and adjusting trust values based on the perceived behavior [14]. An example of suspicious behavior is asking for a user's medical record while negotiating an identity to buy a book. The requested information is clearly out of context and will raise the user's suspicion.
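As a simple illustration of such a trust metric, the sketch below applies an asymmetric update rule in which positive outcomes reinforce trust slowly and negative experiences erode it quickly. The update rule and its parameters are assumptions made for illustration; they are not taken from [14].

```python
# Illustrative trust-update rule (an assumption, not the paper's or [14]'s
# method): trust is a value in [0, 1]; positive interaction outcomes nudge it
# up slowly, while negative experiences (e.g. an out-of-context request such
# as asking for a medical record when buying a book) pull it down sharply.

def update_trust(trust: float, outcome_positive: bool,
                 gain: float = 0.05, penalty: float = 0.3) -> float:
    if outcome_positive:
        trust = trust + gain * (1.0 - trust)   # slow reinforcement
    else:
        trust = trust * (1.0 - penalty)        # faster erosion
    return min(max(trust, 0.0), 1.0)

t = 0.5                                        # neutral initial trust for a new service
t = update_trust(t, outcome_positive=True)     # ~0.525
t = update_trust(t, outcome_positive=False)    # ~0.3675
```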

5. Conclusions
Given the security requirements associated with the target service, the user-defined desired outcome, and the context of the interaction (which includes the trust assertions that the user holds about the service), the system we propose yields the appropriate subset of user credentials that constitutes the situational identity needed to satisfy the requirements. When calculating the situational identity, the system acts as a rational decision maker.

The novelty of this work lies in providing additional flexibility for users to automatically decide which identity to present based on personal preferences and strategic outcomes. The preferred outcome can be expressed in terms of desired privacy, monetary benefits, safety, or other factors. We anticipate that the proposed system will be particularly valuable in ubiquitous environments where users interact with a number of services (often simultaneously) in a variety of contexts; in such environments an automated personalized identity management tool is indispensable.

6. Related Work
A number of emerging identity management solutions are based on the concept of identity federation, which provides a mechanism to exchange sensitive user information between service providers located in different security domains. Shibboleth [9] aims to develop new middleware technologies based on the concept of federation of user attributes to facilitate inter-organizational collaboration. WS-Federation [1] is an approach to manage trust relationships in heterogeneous federated environments. It provides support for federated identities, sharing of attributes, and management of pseudonyms. The goal of the Higgins [8] project is to develop a framework that will enable users and enterprises to integrate identity, profile, and relationship information across heterogeneous systems. The Liberty Alliance project [23] aims to create a single sign-on system based on a federation of trusted parties. In this system, if an online service S1 trusts another online service S2 to properly authenticate a user, service S2 can authenticate the user on behalf of S1 by passing a SAML [16] token that asserts the user's identity to S1. Microsoft attempted to create a universal login service, .NET Passport, that allowed users to sign in at many web sites using just one account. However, users have demonstrated resistance to the notion of a single universally usable digital identity. The selective disclosure inherent in managing independent identities allows users to maintain different personas for different interaction environments. Microsoft's InfoCard [5][12] digital identity management system supports a number of digital identities represented by visual "Information Cards" in the client user interface. The user selects identities represented by InfoCards to authenticate to participating services. Our work is complementary to this approach in automating decisions about which card to present in the current context. Attribute-Based Access Control (ABAC) [17][20] and automated Trust Negotiation (TN) are new approaches to access control and authentication in open environments [2][3][13][21][22]. Unlike traditional identity-based access control, authorization decisions in ABAC are based on requesters' attributes, which may be sensitive. TN supports ABAC by providing bilateral credential exchange that consists of iteratively disclosing digital credentials. These credentials verify properties of their holders to establish mutual trust. Current ABAC and TN technologies are not sufficiently flexible. Most existing approaches treat credentials as sensitive objects and use security policies to statically control their disclosure, without considering the context of the transaction. These technologies would be greatly enhanced if a user were able to tailor the interaction and exchange of information between the user and the environment based on context, e.g., the nature of the interaction, user preferences, user/device location, device properties, etc. We propose a model constructed with flexibility, social nuance, and contextualization as critical design factors. This approach will lead to the development of next-generation ABAC and TN systems.

References
[1] S. Bajaj, G. Della-Libera, B. Dixon, M. Dusche, M. Hondo, M. Hur, C. Kaler, H. Lockhart, H. Maruyama, A. Nadalin, N. Nagaratnam, A. Nash, H. Prafullchandra, and J. Shewchuk. Web Services Federation Language (WS-Federation), Version 1.0.
[2] E. Bertino, E. Ferrari, and A. C. Squicciarini. Trust-X: A Peer-to-Peer Framework for Trust Establishment. IEEE Transactions on Knowledge and Data Engineering, July 2004.
[3] P. Bonatti and P. Samarati. A Unified Framework for Regulating Access and Information Release on the Web. Journal of Computer Security, 10(3):241-271, 2002.
[4] D. Boyd. Faceted Identity: Managing Representation in a Digital World. Master's Thesis, MIT, Cambridge, MA, August 9, 2002.
[5] K. Cameron and M. B. Jones. Design Rationale behind the Identity Metasystem Architecture. http://research.microsoft.com/~mbj/
[6] E. Damiani, S. De Capitani di Vimercati, and P. Samarati. New Paradigms for Access Control in Open Environments. In Proc. of the 5th IEEE International Symposium on Signal Processing and Information Technology, Athens, Greece, December 18-21, 2005.
[7] A. Heath. Rational Choice and Social Exchange. Cambridge, 1976.
[8] Higgins Trust Framework Project. http://www.eclipse.org/higgins.
[9] Internet2. Shibboleth. http://shibboleth.internet2.edu.
[10] M. Kohntopp and A. Pfitzmann. Anonymity, Unobservability, and Pseudonymity - A Proposal for Terminology. Draft, June 2001.
[11] S. Lederer. Everyday Privacy in Ubiquitous Computing Environments. Ubicomp Workshop on Socially-informed Design of Privacy-enhancing Solutions in Ubiquitous Computing, 2002.
[12] Microsoft. Microsoft's Vision for an Identity Metasystem. Microsoft Whitepaper.
[13] W. Nejdl, D. Olmedilla, and M. Winslett. PeerTrust: Automated Trust Negotiation for Peers on the Semantic Web. In Proceedings of the Workshop on Secure Data Management in a Connected World (SDM'04), August/September 2004.
[14] T. Ryutov, C. Neuman, L. Zhou, and N. Foukia. Initial Trust Formation in Virtual Organizations. International Journal of Internet Technology and Secured Transactions, 2006.
[15] R. Sandhu, E. Coyne, H. Feinstein, and C. Youman. Role-Based Access Control Models. IEEE Computer, 29(2):38-47, February 1996.
[16] Security Assertion Markup Language (SAML). OASIS.
[17] H. Skogsrud, B. Benatallah, and F. Casati. Model-Driven Trust Negotiation for Web Services. IEEE Internet Computing, 7(6), Nov./Dec. 2003.
[18] S. De Capitani di Vimercati, P. Samarati, and S. Jajodia. Policies, Models, and Languages for Access Control.
[19] R. Cohen and P. Kennedy. Global Sociology. MacMillan, London, 2000, p. 380.
[20] L. Wang, D. Wijesekera, and S. Jajodia. A Logic-Based Framework for Attribute-Based Access Control. In Proceedings of the 2004 ACM Workshop on Formal Methods in Security Engineering, Washington DC, USA, October 2004.
[21] W. Winsborough and N. Li. Towards Practical Automated Trust Negotiation. In Third International Workshop on Policies for Distributed Systems and Networks (POLICY 2002), Monterey, CA, June 2002.
[22] M. Winslett, T. Yu, K. E. Seamons, A. Hess, J. Jacobson, R. Jarvis, B. Smith, and L. Yu. Negotiating Trust on the Web. IEEE Internet Computing, 6(6), Nov./Dec. 2002.
[23] Identity Management. Liberty Alliance Project, 2004. http://www.projectliberty.org.
