INTERNATIONAL JOURNAL OF ELECTRICAL, ELECTRONICS AND COMPUTER SYSTEMS (IJEECS), volume 1, issue 1, MARCH 2011 www.ijeecs.org ISSN: 2221-7258(Print) ISSN: 2221-7266 (Online)

Privacy Protection in Multi-Agent based Applications

Eiji Kamioka, Shigeki Yamada
National Institute of Informatics, Tokyo, Japan

Abstract — In many multi-agent based applications, software agents share their private information with one another to reach their goals. However, the agents may not always be willing to let other agents take away the shared private information. Moreover, a malicious agent may steal unauthorized data from a visited user host and send it to illicit personnel or sneak it away, resulting in privacy loss. In this paper we address the privacy issues in multi-agent based applications and propose a privacy protection model named iCOP for such applications. Participating agents are confined to the iCOP host, in which they interact with one another and solve the problem by sharing their private inputs. They are restricted from sending any data except the computational result to the outside world, and they cannot sneak away the private input data of other agents or any illegally accessed data. We also developed a prototype of our proposed model and a number of mobile agents for experiments. The experimental results demonstrate the effectiveness of the iCOP model in protecting user privacy in multi-agent based applications.

Keywords: Mobile Agent, Negotiation, Privacy and Security.

I. INTRODUCTION

In ubiquitous computing environments, users need tools that facilitate automated and intelligent decision-making activities such as automated buying and selling of products. Mobile agent technology has been widely proposed as a key technology for creating such an open, programmable and heterogeneous service environment. It provides a number of benefits over the client/server model [1]. For example, it allows disconnected computation and saves network bandwidth significantly, especially when a large number of interactions among agents is required. However, its additional privacy and security risks are the key barriers to wide acceptance of this technology.
Data about identifiable individuals should not be automatically available to other individuals and organizations; even where data is possessed by another authorized party, the individual must be able to exercise a substantial degree of control over that data and its use. This is sometimes referred to as privacy [2]. Privacy and security are forever entwined, but they are not the same. Fred H. Cate, professor of law at Indiana University, explains the distinction: "I think of privacy as the use of the data by somebody you gave it to, and security as the theft of the data or the interception of the data by the unknown third party," he says. "If I buy a ticket from Travelocity, what Travelocity does with my data is a privacy issue. If somebody hacks into Travelocity and steals that data, that's a security issue. And we've had a tendency to confuse the two" [3].

The challenge in data privacy is to share data among users for the intended job while preventing further unauthorized use of the data or unauthorized disclosure to third parties. Consider a simple scenario in which two employees, Alice and Bob, each from a different company, want to schedule a meeting. To do the job, they negotiate by sharing their personal calendar information, and from the negotiation each can infer information about the other's existing schedule. When Alice proposes a specific date/time slot to Bob for the new meeting, Bob learns that Alice's calendar has no schedule in the proposed slot. Similarly, from Bob's reply, Alice learns whether or not Bob has any schedule in that slot. The more proposals and replies that are required to reach the goal, the more personal calendar information is revealed. Ideally, the participants should be able to protect their personal calendar information and yet solve the problem.

In a multi-agent application, not all agents are authorized to obtain all of the private information. For example, in a book buying application scenario, a user agent may carry a list of bookstores, the desired book information, the user's credit card information, a postal address, etc.
The user agent needs to share the book information with the bookstores to let them know which book it is looking for, the credit card information with the credit card company to verify the card number and credit limit, and the postal address with the delivery service (e.g., the post office) to enable it to deliver the book to the desired address. Each involved party should obtain only the information it is authorized to get from an agent, and the agent's other private information should remain secret from it.

In this paper, we address the protection of the private data that participating software agents carry with them, while interacting with other agents, from unauthorized parties in multi-agent based applications. The basic idea of our proposed iCOP (isolated Closed-door One-way Platform) architecture is to bring all interacting agents, along with their data, into a common agent platform from which they cannot leave and cannot communicate with the external world. The agents complete their task from within the iCOP platform and are destroyed after their job is done, so that they cannot sneak away other agents' private information. We also built an iCOP prototype application and several agent applications to experimentally evaluate the effectiveness of our proposed architecture.

The remainder of the paper is organized as follows. Section II describes related work concerning agents' and agent hosts' privacy and security issues. Section III describes our proposed architecture in detail. Section IV analyzes the privacy protection characteristics of our model. In Section V we describe our prototype application, experimental environment and experimental results. Section VI briefly discusses the applicability of the proposed model. Finally, Section VII concludes the paper.

II. RELATED WORK

In this section we present security and privacy related work for the mobile agent paradigm. Reference [4] offers an excellent taxonomy of the security issues for mobile agents and gives an idea of which security goals are easy to achieve and which are difficult. So do the authors of [5], who examine "trust" as agents roam along a path that is not determined in advance. Agent authentication, to identify an agent or the signer of an agent, is discussed in [6]. Sandboxing consists of running mobile code inside a restricted environment called the "sandbox" [7]. In addition to authentication, agents can carry a proof that enables a host to determine whether the agent's code is safe to install and execute [8]. Code signing, through the use of digital signatures, assures strong authentication and integrity of the code to whoever executes it. The use of policy is common in computing systems that feature some security or privacy protection [9], [10], [11], [12].
The approaches referred to above primarily consider mobile agent and/or agent host security issues such as authentication, authorization and access control. They cannot prevent shared input data from being disclosed by malicious agents to unauthorized parties. Anonymization is another technique for privately computing functions and for data mining [13], [14], [15]. In this technique the risk of insider threat is greatly reduced, as only anonymized values are in the database and the watch-listing party receives only notice of matches. However, this technique is not applicable in cases where raw data is needed for the computation; for anonymized data searching, the data must be present on both sides, where the search is requested and where the actual search takes place. There is also other work on privacy-preserving multi-party computation [16], [17], [18]. These approaches are theoretical and not suitable for practical applications.

III. PROPOSED ARCHITECTURE

For privacy protection in multi-agent based applications, we propose to employ the following techniques in addition to standard security mechanisms.

A. Multi-agent Computing Service

Our proposed multi-agent computing service platform, named iCOP (isolated Closed-door One-way Platform), was modeled on the following design principles.

Isolation: Instead of interacting at any of the user host platforms, the interacting agents migrate to a multi-agent computing service platform (i.e., iCOP) provided by a service provider. The use of a service provider's iCOP host as the agent meeting place eliminates the risk to the users' local data stores at their hosts from external malicious agents. It also eliminates possible attacks by unknown malicious hosts on the data that the mobile agents carry with them. Thus, when the mobile agents of Alice and Bob need to interact using their private data (e.g., personal calendars) for a cooperative job (e.g., meeting scheduling), the agents migrate to an iCOP host provided by a service provider and interact locally with each other. Similarly, when Alice wants to buy a book from Bob's bookstore, their mobile agents, as well as all other involved mobile agents (e.g., the credit card verification agent from the credit card company), migrate to the iCOP host and interact with one another locally.

Closed-door platform: If agents were allowed to exchange messages with their originators or with other external parties or agents from within the iCOP host, malicious agents would have the opportunity to send out other agents' private data. We therefore propose that the iCOP host be a closed-door platform from which agents cannot communicate or exchange any message with the outside world. Agents can only interact with other agents that have been brought into the platform.
This property of the common host prevents malicious agents from leaking any kind of private data of other agents, whether learned through the negotiation process or acquired illegally.

One-way platform: In conventional mobile agent based architectures, an agent may migrate to a host, perform some operation there, leave by migrating to another host, and finally return to its originator. In these models, external agents may perform unauthorized accesses, steal data from the visited hosts or from other agents at those hosts, and take the data away with them. The mobile agents are also free to take away any private information about other agents learned during the negotiation process. We propose to allow agents to enter the iCOP host platform only with proper authorization, and not to let them leave the platform; accordingly, the iCOP host platform is a one-way platform. On completion of their task, all external agents, along with their data, are killed at the iCOP host. In this way, malicious agents are prevented from taking any unauthorized data from the iCOP host platform to the outside world.

B. Computational Result

According to the closed-door and one-way characteristics of the iCOP host, an external agent has no chance to leak any information out of the iCOP host; even the participating agents cannot send the computational result to their users. To return the computational result of the task to the involved users, the iCOP host provides a service agent that verifies the computational result and sends it to the respective users. The service agent coordinates the task and can communicate with the outside world. In cooperation with the participating agents, the service agent verifies whether the computational result contains any data other than what the agents agreed upon as the result in their negotiation. For example, in the meeting scheduling scenario, Alice's and Bob's agents cooperatively fix the schedule for their meeting, and the agreed scheduling information (e.g., Dec 23rd at 11 AM) is their computational result. Both agents know the result on which they agreed during negotiation. Before sending the result to the users, the service agent must verify that both participating agents agree with the result to be sent. Similarly, in the book buying scenario, a book purchase service agent verifies and sends the computational result (i.e., the agreement among agents) to the users. Alice's agent interacts with Bob's agent about the desired book and its price; Alice's agent and the credit card verifying agent interact to verify Alice's credit card number and credit limit, and the credit card verifying agent lets Bob's agent know the verification result. When the purchase is decided, the computational result is the purchased book information and its price.
A successful multi-agent interaction usually results in an agreement among the participants, and verification amounts to assurance of that agreement about the result. Suppose two agents A and B were involved in a job and agreed on the final computational result R. After completing the task, one agent, say A, hands over its result R1 to the service agent to be sent to the users. The service agent then requests B to hand over its result, and B sends R2. The service agent compares R1 and R2; if they match, it can conclude that A has sent the correct result and that B agrees the result can be sent to their users. This implies that both agents checked the data to be sent out and that it contains nothing but the agreed-upon result. The result can also be divided into parts, with each part verified by the same process. Fig. 1 shows a simple agreement verification protocol.

Fig. 1 Agreement verification protocol
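The comparison step at the heart of the protocol can be sketched as follows. This is an illustrative sketch only; the class and method names are hypothetical and are not taken from the iCOP implementation.

```java
import java.util.Objects;

// Sketch of the service agent's verification step: a result is released
// to the users only if every participant hands over an identical copy.
public class AgreementVerifier {

    /** Returns the agreed result if all submitted copies match; otherwise null. */
    public static String verify(String... submittedResults) {
        if (submittedResults.length == 0) {
            return null; // nothing to verify, nothing leaves the host
        }
        String first = submittedResults[0];
        for (String r : submittedResults) {
            if (!Objects.equals(first, r)) {
                return null; // disagreement: the result is not sent out
            }
        }
        return first; // unanimous agreement: safe to send to the users
    }

    public static void main(String[] args) {
        // Both agents agreed on the meeting slot: the result is released.
        System.out.println(verify("Dec 23rd at 11 AM", "Dec 23rd at 11 AM"));
        // The copies differ: the service agent refuses to send anything.
        System.out.println(verify("Dec 23rd at 11 AM", "Dec 24th at 9 AM"));
    }
}
```

As in the paper's protocol, the result could also be split into parts and each part verified by the same pairwise comparison.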

C. iCOP Privacy Model

The iCOP privacy manager is an enhanced version of the Java 2 security manager, designed according to the basic iCOP characteristics described in Section III A. The Java 2 security model provides basic security features that are essential for a mobile agent based environment. For example, Java checks for conformance to the Java language specification, and the security manager controls resource access. Java also implements the principle of least privilege, which ensures that programs with fewer privileges do not acquire more access rights by making calls to privileged components.
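To illustrate the kind of control the security manager provides, the following is a minimal sketch of how an iCOP-style privacy manager could extend the Java 2 SecurityManager to veto outbound network use by agent code. The class name and policy messages are assumptions for illustration; this is not the actual iCOP code.

```java
// Hypothetical sketch: a privacy manager in the spirit of iCOP, extending
// the Java 2 SecurityManager so that any attempt by agent code to open an
// outbound socket is vetoed.
public class ICopPrivacyManagerSketch extends SecurityManager {

    @Override
    public void checkConnect(String host, int port) {
        // Closed-door property: no agent may contact the outside world.
        throw new SecurityException(
            "iCOP policy: outbound connection to " + host + ":" + port + " denied");
    }

    @Override
    public void checkListen(int port) {
        // One-way property: agents may not accept external connections either.
        throw new SecurityException(
            "iCOP policy: listening on port " + port + " denied");
    }

    public static void main(String[] args) {
        ICopPrivacyManagerSketch pm = new ICopPrivacyManagerSketch();
        try {
            pm.checkConnect("malicious-origin.example", 80);
        } catch (SecurityException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

In an actual deployment the manager would be installed platform-wide so that these checks run automatically whenever agent code touches the socket API; service-agent code would additionally be exempted so that only it can send out the verified result.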

Fig. 2 iCOP Architecture

However, the Java security manager allows downloaded code from an origin server to connect back to that origin server, and this connectivity cannot be restricted through policy settings. Thus, to achieve the desirable iCOP properties discussed in Section III A, our privacy manager monitors any migration attempt or message sending activity by external agents and blocks it. The privacy manager also does not allow external agent hosts to retract an agent from the iCOP host. Fig. 2 depicts the iCOP privacy model architecture.

IV. ANALYSIS

The privacy manager prevents external agents from leaking any data from within the iCOP host. However, the service agent does obtain the result, which is also considered private information of the participating agents; the private input data nevertheless remain secret from it. Privacy protection in our model is stronger than in a traditional model in which user agents are free to communicate with the outside world or free to migrate. In the meeting scheduling scenario, the user agents come to know much more about each other's calendars during negotiation than our service agent does (it knows only the result), and they could disclose that information to unauthorized parties if they were not restricted. Thus the privacy protection in our model is more effective than in traditional models.

Transferring data through a covert channel is unlikely to be effective for participating agents in our architecture. The only information communicated to the outside world by the service agent is the computational result, and a covert channel requires direct or indirect interaction with the outside world. Since the user agents cannot interact with the outside world, and the involved agents' approval is required before the computational result is sent, data leakage by participating agents through a covert channel embedded in the computational result is expected to be largely ineffective in our system.

V. EXPERIMENT

To demonstrate the effectiveness of the privacy manager in preventing malicious mobile agents from stealing other agents' data, we created a number of mobile agents, sent them to an iCOP host, and simulated attacks attempting to escape the privacy manager's restrictions. The testing procedure is described in some detail to validate the iCOP privacy protection framework. An agent environment framework is required for the implementation, execution and management of agents. Several commercial agent frameworks have been developed, such as ObjectSpace Voyager [19], IBM Aglets [20], and Mitsubishi Electric's Concordia [21]. The IBM Aglet Ver.
2.0.2 was adopted for the implementation of our agent prototype system because of its availability as open source and its detailed documentation.

A. Experimental Setup

Experiments were carried out with two agent platforms hosted on personal computers (Host A and Host B) over a local area network at the National Institute of Informatics laboratory. Host A was set up on a desktop computer with a 600 MHz Pentium processor, 192 MB of RAM and Microsoft Windows XP Professional; Aglets 2.0.2 was installed on Host A with the typical configuration. Host B was set up on an IBM laptop computer with a 1.4 GHz Pentium processor, 768 MB of RAM and Microsoft Windows XP Professional; the iCOP platform was installed on Host B. Short descriptions of the mobile agents used in our experiments follow.

The itinerary agent has a list of destination hosts to visit. It travels from host to host according to its route plan and finally returns to its originator. This agent was used to test the iCOP privacy manager's ability to restrict migration by external mobile agents. As explained earlier, iCOP does not allow external mobile agents to migrate to another host, to prevent them from sneaking away any unauthorized data. The malicious behavior we considered for this agent was taking back the list of agents present on the platform.

The messenger agent can send messages to its originator. This agent was given a host address and a directory name on the host; it lists the contents of the given directory and sends the listing back to the originator. It was created to test the effectiveness of iCOP's restrictions on messages between mobile agents and the outside world.

The mailer agent is capable of sending email to any address. The malicious behavior we considered was sending information out by email: besides direct message communication, a mobile agent may send sensitive data to the external world by email, so we created the mailer agent to test iCOP's effectiveness in preventing unauthorized email sending by mobile agents.

The RMI client mobile agent is an RMI client that can send messages to a remote RMI server. A malicious mobile agent may try to send unauthorized data to a remote RMI server by invoking RMI methods, so we tested iCOP's ability to restrict such message communication via RMI.

The retracting agent visits a host, reads the contents of one of the visited host's directories and waits for retraction; the originator then retracts the agent from the visited host along with the directory listing. Retraction can be regarded as an indirect migration process, and the iCOP architecture does not allow it because it leaves open the possibility of extracting unauthorized data from the visited host.

B. Experimental Results

The results of the experiments carried out on the two mobile agent platforms are summarized in Table I; we also explain the results in detail. The Agent type column gives the name of the mobile agent used in the experiment, as described previously. The Malicious activity column describes the type of activity performed with unauthorized data (UD) at the agent host by external mobile agents. Note that activities such as migration from the visited host to another host, or the sending of a message by an external mobile agent, are considered malicious in this experiment because they help mobile agents steal unauthorized data and hence are forbidden. The MA Host column indicates the agent host platform to which the external mobile agents were sent: Host A is a typical host with the default configuration.

Host B is an iCOP host possessing the essential characteristics, implemented with the iCOP privacy manager. The Data Protection column indicates whether the agent host could prevent the data from flowing out of the host platform: if the host imposes the required restrictions, data protection is recorded as Success; otherwise it is Failure.

Table I: Outcomes of the data protection experiments

Agent type   Malicious activity                       MA Host   Data Protection
Itinerary    Take UD away to the origin               Host A    Failure
                                                      Host B    Success
Messenger    Send UD by message to the origin         Host A    Failure
                                                      Host B    Success
Mailer       Send UD by mail to the origin            Host A    Failure
                                                      Host B    Success
RMI client   Send UD to RMI server at the origin      Host A    Failure
                                                      Host B    Success
Retracting   Take UD with agent through retraction    Host A    Failure
                                                      Host B    Success

From the results we see that the iCOP platform (hosted on B) blocked all of the forbidden activities of the incoming agents, whereas the typically configured agent host (Host A) could not block all of their malicious behavior. In summary, the iCOP agent platform prevents data theft by external agents from the iCOP host and from the other agents on it: external agents can use the given data while residing inside the agent host, but they cannot send the data to a third party or take it back with them. Thus the iCOP architecture protects privacy effectively.

Table II: Latency of the additional checking

Agent type   Latency (ms)
Itinerary    20
Messenger    21
Mailer       24
RMI client   22
Retracting   17

We also measured the latency incurred by the introduction, in the security manager, of the explicit checking of network socket use by external mobile agents. This delay varies with the characteristics of the mobile agents. For the mobile agents we created for the experiments, we measured the average delay incurred by the privacy manager's checking; the results are presented in Table II. The variation in time is due to differences in the size of the current execution context stack.

VI. DISCUSSION

One approach (the centralized approach) to solving a multi-party problem is to send all data to a third-party coordinating agent, which performs the computation on behalf of everybody and returns the result to the users. This setup makes the problem much easier to solve; however, all private data of all users are disclosed to the coordinating agent. If we require that no one be able to divulge the private inputs of the participants, the centralized approach cannot be taken as a solution.

An iCOP host can be provided by a service provider for its clients, or an office can provide such a host for its staff to perform privacy-sensitive jobs (e.g., meeting scheduling). As an additional security measure, the iCOP host should not store sensitive information and should take standard security precautions. If the host possesses the characteristics mentioned in Section III A (closed-door, one-way platform), then a malicious agent cannot send out or take away other agents' data. Moreover, if, before sending the computational result to the respective parties, the verification agent confirms that the participating agents agree to send the result (i.e., that it does not contain sensitive private data), then a malicious user agent cannot send another agent's data without the owner's approval.

An iCOP host provides no specific additional benefit for single-agent based applications. For example, a mobile agent may visit a public agent platform, search its database and take back the search result; there is no privacy concern in this application. Privacy concerns arise mainly in multi-agent based applications, while single-agent based applications are more oriented towards security. iCOP is a platform to which multiple mobile agents can migrate to perform a cooperative job with privacy protection. The one-way characteristic of iCOP may seem to restrict the free mobility of mobile agents; however, we argue that this restriction does not limit the applicability of the iCOP model.
Many mobile agent based applications can be modeled on the iCOP architecture without any additional complexity.

VII. CONCLUSIONS

We have addressed the privacy issues in multi-agent based applications and presented a privacy model named iCOP whose basic characteristics prevent external mobile agents from leaking any information from the platform. Privacy issues are particularly crucial in multi-agent based applications, in which multiple agents interact with one another to reach their goals and an agent obtains private information about other agents during the interaction. Our privacy protection model prevents an agent from sending out or taking away other agents' private data without the owners' permission. Because external mobile agents cannot communicate with the outside world from within the iCOP host, they cannot leak any information. The only data that leaves the platform is the computational result, and before that result is sent it is verified and approved by the involved agents; thus control of the data flow remains with the data owners. Sending data through a covert channel is expected to be ineffective. The iCOP prototype and small attacking applications demonstrated that the architecture is effective for privacy protection.

REFERENCES

[1] Danny B. Lange and Mitsuru Oshima, Programming and Deploying Java Mobile Agents with Aglets, Addison-Wesley, 1998.
[2] Roger Clarke, "Introduction to Dataveillance and Information Privacy, and Definitions of Terms," http://www.anu.edu.au/people/Roger.Clarke/DV/Intro.html, last visited Nov 2005.
[3] Sarah D. Scalet, "Security Versus Privacy," http://www.cio.com/archive/110101/court_sidebar_1.html, CIO Magazine, Nov 1, 2001, last visited Nov 2005.
[4] W. M. Farmer, J. D. Guttman and V. Swarup, "Security for Mobile Agents: Issues and Requirements," in Proc. of the 19th National Information Systems Security Conference, Baltimore, MD, 1996, pp. 591-597.
[5] J. J. Ordille, "When agents roam, who can you trust?" in Proc. of the First Conference on Emerging Technologies and Applications in Communications, Portland, OR, 1996.
[6] L. Gong and R. Schemers, "Signing, sealing, and guarding Java objects," Lecture Notes in Computer Science, vol. 1419, Springer-Verlag, New York, 1998, pp. 206-216.
[7] D. Volpano and G. Smith, "Language issues in mobile program security," Lecture Notes in Computer Science, vol. 1419, Springer-Verlag, New York, 1998, pp. 25-43.
[8] G. C. Necula and P. Lee, "Safe, untrusted agents using proof-carrying code," Lecture Notes in Computer Science, vol. 1419, Springer-Verlag, New York, 1998, pp. 61-91.
[9] Harry Chen, Tim Finin and Anupam Joshi, "A Pervasive Computing Ontology for User Privacy Protection in the Context Broker Architecture," Technical Report, eBiquity Publications, 2004.

[10] Morris Sloman and Emil Lupu, "Security and management policy specification," IEEE Network, Special Issue on Policy-Based Networking, vol. 16, no. 2, 2002, pp. 10-19.
[11] Gianluca Tonti, Jeffrey M. Bradshaw, Renia Jeffers, Rebecca Montanari, Niranjan Suri and Andrzej Uszok, "Semantic web languages for policy representation and reasoning: A comparison of KAoS, Rei, and Ponder," in Proc. of the 2nd International Semantic Web Conference (ISWC2003), 2003.
[12] Arif Tumer, Asuman Dogac and I. Hakki Toroslu, "Semantic-based User Privacy Protection Framework for Web Services," http://www.srdc.metu.edu.tr/webpage/publications/2004/TumerDogacToroslu.pdf, last visited Nov 2005.
[13] David E. Bakken, "Data obfuscation: anonymity and desensitization of usable data sets," IEEE Security & Privacy, vol. 2, no. 6, 2004, pp. 34-41.
[14] L. Sweeney, "k-Anonymity: A Model for Protecting Privacy," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, 2002, pp. 557-570.
[15] P. Lincoln, P. Porras and V. Shmatikov, "Privacy-Preserving Sharing and Correlation of Security Alerts," in Proc. of the 13th USENIX Security Symposium, USENIX Association, 2004, pp. 239-254.
[16] O. Goldreich, S. Micali and A. Wigderson, "How to play any mental game," in Proc. of the 19th Annual ACM Symposium on Theory of Computing, 1987, pp. 218-229.
[17] A. Yao, "Protocols for secure computations," in Proc. of the 23rd Annual IEEE Symposium on Foundations of Computer Science, 1982.
[18] Wenliang Du and Mikhail J. Atallah, "Privacy-Preserving Cooperative Scientific Computations," in Proc. of the 14th IEEE Computer Security Foundations Workshop, Canada, 2001, pp. 273-282.
[19] http://www.recursionsw.com/voyager.htm, last visited Nov 2005.
[20] http://www.trl.ibm.com/aglets, last visited Nov 2005.
[21] http://www.merl.com/projects/concordia/, last visited Nov 2005.
