IJRIT International Journal of Research in Information Technology, Volume 3, Issue 4, April 2015, Pg. 47-51

International Journal of Research in Information Technology (IJRIT) www.ijrit.com

ISSN 2001-5569

Encrypted Peer to Peer File Sharing System using Wireless Mesh Network

Kajal Chauhari 1, Deepak kumar Sharma 2, Archana Bichave 3, Tejal Patil 4 and Jitendra Patil 5

1 Student, Department of Computer Science, SSBT's COET, Bambhori, Jalgaon, Maharashtra, India
[email protected]

2 Student, Department of Computer Science, SSBT's COET, Bambhori, Jalgaon, Maharashtra, India
[email protected]

3 Student, Department of Computer Science, SSBT's COET, Bambhori, Jalgaon, Maharashtra, India
[email protected]

4 Student, Department of Computer Science, SSBT's COET, Bambhori, Jalgaon, Maharashtra, India
[email protected]

5 Asst. Prof., Department of Computer Science, SSBT's COET, Bambhori, Jalgaon, Maharashtra, India
[email protected]

Abstract
In peer to peer file sharing, file replication helps to avoid overloading file owners and increases the efficiency of file queries. There is a tradeoff between minimizing the number of replicas and maximizing the replica hit rate: more replicas mean higher replication overhead but also a higher replica hit rate, and vice versa. An ideal replication method should place a low overhead burden on the system while providing low query latency to users. Earlier replication methods either achieve high hit rates at the cost of many replicas or create few replicas but provide low hit rates. To reduce the number of replicas while still guaranteeing a high hit rate, this paper presents SWARM, a file replication mechanism based on swarm intelligence. SWARM identifies node swarms with common interests and close proximity. SWARM also includes a novel consistency maintenance algorithm that propagates update messages between proximity-close nodes in a tree fashion from top to bottom. Results from the real-world PlanetLab testbed and the PeerSim simulator demonstrate the effectiveness of SWARM in comparison with other file replication and consistency maintenance methods. Compared with earlier methods, SWARM reduces query latency by 40%-58%, reduces the number of replicas by 39%-76%, and achieves more than 84% higher hit rates. Compared with previous consistency maintenance methods, SWARM reduces consistency maintenance overhead by 49%-99%.



Keywords: ServerEnd, ClientEnd, SWARM, replica.

1. Introduction
Over the past years, the immense popularity of the Internet has provided a significant stimulus to peer-to-peer (P2P) networks. One of the most popular applications of P2P networks is file sharing, as in BitTorrent, Morpheus, eDonkey and Gnutella, where files are shared directly between users of the network without the interference or assistance of a central server. Total traffic on the BitTorrent P2P file sharing application increased by 12%, driven by 25% increases in per-peer hourly download volume, in 2010; during peak hours, BitTorrent accounted for more than a third of all upload traffic in 2012. According to Cisco's network traffic measurement and forecast, P2P file sharing applications account for 83.5% of total file sharing traffic and will still account for 60.3% of total file sharing traffic in 2018.

In a P2P file sharing system, a node becomes a hot spot when it receives a large volume of requests for a file at one time, leading to delayed responses. Consider a network with N wireless devices. Let K devices be identified as users, where K <= N. The users can send packets to each other via direct peer-to-peer transmissions. Each user has a collection of popular files in its cache. The remaining N - K devices are access points. The access points are connected to a larger network, such as the Internet, and hence have access to a larger set of files. A general network may also have users that wish to upload packets to the access points. The network is mobile, so the transmission options between access points and users, and between user pairs, can change over time.

File replication is an effective strategy for managing the overload caused by hot files: it distributes the load by replicating hot files to other nodes and improves file query efficiency by reducing query latency. There exists a tradeoff between minimizing the number of replicas (i.e., replication overhead) and maximizing the replica hit rate (i.e., reducing file querying latency). More replicas lead to higher replication overhead but also higher replica hit rates, and vice versa. An ideal replication method generates low overhead for the system while providing low query latency to users. Previous file replication methods either achieve high hit rates at the cost of many replicas or produce low hit rates with fewer replicas. Specifically, previous replication methods can be organized into four classes, denoted Random, ServerEnd, Path and ClientEnd. Random replicates files to randomly selected nodes. ServerEnd replicates a file onto nodes close to its file owner on the P2P overlay. Path replicates a file along its query path. In Random, ServerEnd and Path, a file query still needs to travel until it encounters the file owner or a replica node. Due to the large number of replicas, Random and Path may also suffer high overhead, and some of their replicas may not be fully utilized. Recently, we proposed the Efficient and Adaptive Decentralized (EAD) file replication algorithm, an improved Path method. EAD selects the query traffic hubs and frequent requesters of a file as its replica nodes in order to increase the hit rate while limiting the number of replicas. ClientEnd replicates a file onto the nodes of its frequent requesters, so those nodes can access the file directly without query routing; however, ClientEnd cannot ensure high hit rates, since other requesters' queries have low probabilities of passing through the frequent requesters.
Furthermore, ServerEnd, Path, ClientEnd and Random cannot guarantee that a file query will encounter a replica node of the file. To provide this guarantee while constraining the number of replicas (i.e., replication overhead), this paper presents SWARM, a file replication mechanism based on swarm intelligence. Swarm intelligence is the property of a system whereby coherent functional global patterns emerge from the collective behaviors of agents. Recognizing the power of collective behaviors, SWARM identifies node swarms with common interests and close proximity. SWARM replicates a file according to the accumulated query rates of the nodes in a swarm and enables the replica to be shared among the swarm nodes. By not spreading replicas all over the network and instead allowing replicas to be shared among common-interest nodes in close proximity, the number of file replicas is significantly reduced and the replicas are fully utilized, while query efficiency is improved. More importantly, most previous methods replicate files onto nodes that increase the likelihood, but cannot guarantee, that a query encounters a replica node. In SWARM, nodes can easily determine the locations of replica nodes in order to actively query files.
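The swarm-level replication decision described above can be pictured with a short sketch: members of a swarm report their query rates for a file, and once the accumulated rate crosses a threshold the swarm places a single replica that all members share. The threshold, the swarm identifier and the data structures below are illustrative assumptions, not values or interfaces from the paper; this is a minimal sketch rather than SWARM's actual implementation.

# Minimal sketch (Python): a swarm accumulates its members' query rates for a
# file and requests one shared replica once the total crosses a threshold.
# The threshold value and class layout are assumptions for illustration only.

REPLICATION_THRESHOLD = 5.0  # assumed: accumulated queries per time unit that justify a replica

class Swarm:
    def __init__(self, swarm_id: str):
        self.swarm_id = swarm_id
        self.query_rate = {}   # file_id -> accumulated query rate of swarm members
        self.replicas = set()  # files already replicated inside the swarm

    def record_query_rate(self, file_id: str, rate: float) -> None:
        """Add one member's query rate for a file to the swarm's accumulated rate."""
        self.query_rate[file_id] = self.query_rate.get(file_id, 0.0) + rate
        if (file_id not in self.replicas
                and self.query_rate[file_id] >= REPLICATION_THRESHOLD):
            self.replicate(file_id)

    def replicate(self, file_id: str) -> None:
        """Place one replica shared by all swarm members, instead of one per requester."""
        self.replicas.add(file_id)
        print(f"swarm {self.swarm_id}: replicating {file_id} for shared use")

if __name__ == "__main__":
    s = Swarm("music-swarm-17")           # hypothetical swarm of nearby, common-interest nodes
    for member_rate in (1.5, 2.0, 2.5):   # three members report their query rates
        s.record_query_rate("song.mp3", member_rate)

Because the replica is shared by the whole swarm, nearby common-interest requesters are served without creating one replica per requester, which is where the reduction in replica count comes from.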


In addition, SWARM has a novel algorithm for consistency maintenance. Most consistency maintenance methods update files by relying either on structures or on message spreading. A structure-based method organizes all replica nodes into a structure and spreads updates through it. Structure-based methods do not waste resources, since non-replica nodes do not receive update messages, but structure maintenance generates high overhead. A message-spreading method propagates updates using either broadcast or multicast schemes. Message-spreading methods have no structure maintenance overhead, but much overhead is needed for propagation, some non-replica nodes may receive update messages, and such methods cannot guarantee that all replica nodes receive updates. The propagation tree proposed in earlier work for consistency maintenance is a traditional d-nary tree that does not take proximity into account. SWARM's consistency maintenance is novel in that it dynamically builds, over all replica nodes, a locality-aware balanced d-nary tree that does not need to be maintained, and it propagates messages between proximity-close nodes in a tree fashion from top to bottom, thus enhancing the efficiency of consistency maintenance (a minimal sketch of this top-down propagation appears at the end of this section).

We summarize the contributions of this paper below.
1) A structure construction method that efficiently builds node swarms, and a structure maintenance mechanism. It is proved that the number of messages needed to construct the swarms is bounded.
2) A file replication algorithm that conducts replication based on the constructed swarm structure.
3) A file query algorithm that takes advantage of replicas to improve file query efficiency. It is proved that the file query latency remains bounded.
4) A file consistency maintenance algorithm, based on the constructed swarm structure, that propagates messages between proximity-close nodes in a tree fashion without the need for tree construction and maintenance.
5) Comprehensive PlanetLab and simulation experiments that demonstrate the superior performance of SWARM in comparison with other file replication and consistency maintenance methods.
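As referenced above, the following minimal sketch illustrates top-down update propagation over a balanced d-nary tree that is derived on the fly from the list of replica nodes rather than maintained as an explicit structure. Sorting the replica nodes by a locality indicator (such as the Hilbert number introduced in Section 4) keeps parents and children in close proximity. The node names, fan-out d and locality numbers are illustrative assumptions, not values from the paper.

# Minimal sketch (Python): derive a balanced d-nary propagation tree from the
# replica list on the fly and push an update from the root down to the leaves.
# Fan-out, node names and locality numbers are assumptions for illustration.

D = 3  # fan-out of the d-nary tree (assumed)

def propagate_update(replicas, update):
    """replicas: list of (node_id, locality_number) pairs holding a replica."""
    # Sort once by locality so that heap-style children are proximity-close to their parent.
    ordered = [node for node, _ in sorted(replicas, key=lambda r: r[1])]

    def send(parent_index):
        parent = ordered[parent_index]
        # Children of slot i in a balanced d-ary heap layout occupy slots d*i+1 .. d*i+d.
        for k in range(1, D + 1):
            child_index = D * parent_index + k
            if child_index < len(ordered):
                child = ordered[child_index]
                print(f"{parent} -> {child}: {update}")  # stand-in for a network message
                send(child_index)

    if ordered:
        send(0)  # the root (e.g. the node initiating the update) starts the propagation

if __name__ == "__main__":
    nodes = [("n1", 40), ("n2", 7), ("n3", 12), ("n4", 41), ("n5", 9), ("n6", 30)]
    propagate_update(nodes, "file v2")

Because the tree is recomputed from the current replica list each time an update is issued, there is no standing structure to repair when replica nodes join or leave.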

2. Literature Survey

2.1 Background
The peer to peer (P2P) file sharing concept, also known as the host-to-host concept, was first used to create an equal communication link between two peers: every peer can play the role of server and client simultaneously. The basic P2P technology has been around since 1979, when USENET was created by two graduate students, Tom Truscott and Jim Ellis, at Duke University. USENET was created for the purpose of exchanging information between UNIX machines. Typical examples of P2P file sharing applications include Napster and LimeWire.

Napster: Napster is one of the best known peer to peer file sharing applications. It was written in 1999 by Shawn Fanning to address the problems some people had in searching for and downloading MP3 music files. Essentially, it combined a search engine with a file-sharing application. Napster is a file sharing service that facilitates the location and exchange of files, such as images, audio or video, via access to the Internet. Being one of the first file sharing applications, it paved the way for other programs including Kazaa, Morpheus, LimeWire and Bearshare. However, in 1999 Napster was sued by the Recording Industry Association of America (RIAA) for copyright infringement, and to resolve its legal battle with songwriters and music publishers it was forced to pay a fee of $26 million. Today, it offers a subscription service and receives percentages from sales.

LimeWire:

Another famous file sharing application is LimeWire. It uses a decentralised P2P network and allows clients to share any type of file with each other.

2.2 Related Work
Previous file replication methods can generally be organized into four categories: Random, ServerEnd, ClientEnd and Path. In the Random category, BloomCast, proposed by Chen et al., replicates file items uniformly across a P2P network by randomly selecting the nodes on which to replicate files. It guarantees a searching success rate under a bounded query communication cost. In the ServerEnd category, RelaxDHT replicates each file onto a set of nodes whose IDs most closely match the file owner's ID.


Beehive replicates an object into the nodes i hops prior to the server in the lookup path and determines a file's replication degree based on its popularity. In the ClientEnd category, LAR replicates a hot file into the file requester nodes. The Cooperative multi-factOr considered file Replication Protocol (CORP) takes into account multiple factors, including file popularity and update rate, to minimize the replication cost. Other works have studied the relationship between the number of replicas and system performance metrics such as successful queries and bandwidth consumption.

Along with file replication, numerous file consistency maintenance methods have been proposed. They can be classified into two categories: structure based and message spreading based. CUP and DUP propagate updates along a routing path. SCOPE constructs a tree for update propagation. Li et al. proposed dynamically building a two-tiered update message propagation tree (UMPT) for propagating update messages, in which a node in the lower layer attaches to a node in the upper layer with close proximity. In these methods, if any node in the structure fails, some replica nodes are no longer reachable. Moreover, they have high overhead costs for structure maintenance, especially under churn, in which nodes join and leave the system constantly and frequently. Some structure-based works build a simple unicast client-server structure. Other work relies on polling for consistency maintenance. Xiong et al. proposed a consistent metadata processing protocol to achieve metadata consistency of cross-metadata-server operations in supercomputers. Tang and Zhou investigated update scheduling algorithms to improve consistency in distributed virtual environments. These methods easily overload the server due to the resource limitations of a single server.

Leveraging such clustering techniques, SWARM distinguishes itself through a number of novel features. First, structured P2P systems have strictly defined topologies, which poses a challenge for clustering. SWARM deals with this challenge by taking advantage of the functions of structured P2P systems to collect the information of nodes with a common interest and close proximity. Second, rather than considering either proximity or interest alone in swarm construction, SWARM groups nodes by treating proximity and interest jointly. Third, to the best of our knowledge, SWARM is the first work that applies a clustering technique based on proximity and interest to file replication in order to achieve high efficiency and effectiveness.

3. Existing System
A centralized server is a technique that provides services to clients over the network; users can use any type of service. Network storage is the enterprise network storage model in which huge amounts of data are stored. A centralized server provides storage space services for users: a user can store his data and information in the network and access that information from any computer connected to the Internet. The main issue is that the user does not know where and how the data is stored, or who can see it. When a user stores sensitive information in the network, he requires security from the wireless network to ensure that nobody else can access and view his data and the business-related information stored there. Encryption methods are used to address this problem, but encryption alone is unsuccessful in preventing data theft attacks; applying an encryption technique to the information cannot achieve total protection of confidential data. In the existing system, as observed in the literature survey, swarm intelligence is applied whenever a new file is uploaded to the wireless network, but in such a case a huge amount of storage space is required in the network.

4. Proposed System
We propose a completely new technique to secure users' data in the cloud using user behavior. To provide the query guarantee while constraining the number of replicas (i.e., replication overhead), this paper presents SWARM, a file replication mechanism based on swarm intelligence. In swarm intelligence, coherent functional global patterns of a system emerge from the collective behaviors of agents. Recognizing the power of collective behaviors, SWARM identifies node swarms with common interests and close proximity. We use this technique for efficient file sharing and for providing data security in the wireless mesh network; it constitutes a different approach to securing data in the wireless network. SWARM builds node swarms by extending our previous work that clusters nodes with close proximity for load balancing. SWARM builds the swarm structure using a landmarking method that represents node closeness on the network by indices. Landmark clustering is based on the intuition that nodes close to each other are likely to have similar distances to a few selected landmark nodes.


Sophisticated strategies can be used for landmark node selection. We assume m landmark nodes are scattered across the Internet. Each node measures its proximity to the m landmarks and uses the vector of distances <d1, d2, ..., dm> as its landmark vector. By "proximity", we mean the average ping latency between two nodes. Two nodes in close proximity have similar landmark vectors. A Hilbert curve is then used to map the m-dimensional landmark vectors to real numbers such that the closeness relationship among the nodes is preserved. This mapping can be visualized as filling the m-dimensional space with a curve until the curve completely fills the space. The m-dimensional landmark space is then partitioned into 2^mx grids of equal size (where m refers to the number of landmarks and x controls the number of grids used to partition the landmark space), and each node gets a number according to the grid into which it falls. The smaller x is, the larger the likelihood that two nodes will have the same Hilbert number, and the coarser the grain of the proximity information. The Hilbert mapping may introduce inaccuracy; for the details of this problem and a solution that reduces the inaccuracy, please refer to the original work. The Hilbert number of a node, denoted by H, indicates the proximity closeness of nodes on the Internet: two proximity-close nodes have close H values. SWARM clusters nodes with similar Hilbert numbers into a swarm.

Each node's interests are described by a set of attributes expressed as globally known strings such as "image", "music" and "book". Each interest corresponds to a category of files. If a node does not know its interests, it can derive the attributes from its frequently requested files. Consistent hash functions such as SHA-1 are widely used in DHT networks to generate node and file IDs due to their collision-resistant nature. Using such a hash function, it is computationally infeasible to find two different messages that produce the same message digest. The hash function is therefore effective for grouping interest attributes, because identical interest attributes produce the same consistent hash value while different interest attributes produce different hash values. SWARM uses the Hilbert number together with a consistent hash function to build node swarms based on node interest and proximity. To facilitate this structure construction, the information of nodes with close proximity and common interests should be marshaled at one node in the DHT network, which enables these nodes to locate each other and form a swarm. Although logically close nodes may not have common interests or be in close proximity to each other, SWARM enables common-interest nodes to report their information to the same node, which then clusters the gathered information of nodes in close proximity into a group.
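To make the swarm construction concrete, the following minimal sketch shows how a single node might combine its landmark vector and one interest attribute into a swarm key. The parameters (number of landmarks, bits per dimension, latency bound) are illustrative assumptions, and a simpler Z-order (Morton) mapping stands in for the Hilbert curve; both are locality-preserving space-filling curves, but this is a sketch, not SWARM's actual mapping.

# Minimal sketch (Python): derive a swarm key from a node's landmark vector and
# interest attribute. A Z-order (Morton) mapping stands in for the Hilbert curve;
# all parameter values below are assumptions for illustration.
import hashlib

M_LANDMARKS = 3     # m: number of landmark nodes (assumed)
X_BITS = 4          # x: bits per dimension, giving 2^(m*x) grid cells
MAX_RTT_MS = 500.0  # assumed upper bound used to normalise ping latencies

def grid_coordinate(rtt_ms):
    """Quantise one landmark distance (average ping latency) into a grid cell index."""
    ratio = min(max(rtt_ms / MAX_RTT_MS, 0.0), 1.0)
    return min(int(ratio * (1 << X_BITS)), (1 << X_BITS) - 1)

def locality_number(landmark_vector):
    """Map the m-dimensional landmark vector to a single locality-preserving integer.

    Bit interleaving (Z-order) is used here as a stand-in for the Hilbert mapping:
    nodes with similar landmark vectors receive close numbers.
    """
    assert len(landmark_vector) == M_LANDMARKS, "expects one distance per landmark"
    coords = [grid_coordinate(d) for d in landmark_vector]
    number = 0
    for bit in reversed(range(X_BITS)):
        for c in coords:
            number = (number << 1) | ((c >> bit) & 1)
    return number

def interest_id(attribute):
    """Consistent hash (SHA-1) of a globally known interest string such as "music"."""
    return int(hashlib.sha1(attribute.encode("utf-8")).hexdigest(), 16)

def swarm_key(landmark_vector, attribute):
    """Nodes reporting the same key to the DHT can locate each other and form a swarm."""
    return (interest_id(attribute), locality_number(landmark_vector))

if __name__ == "__main__":
    # Two nodes whose latencies to the landmarks fall into the same grid cells and
    # that share the interest "music" obtain identical swarm keys.
    print(swarm_key([30.0, 120.0, 80.0], "music"))
    print(swarm_key([20.0, 118.0, 85.0], "music"))

In this sketch, the interest hash determines which DHT node collects the reports, and the locality number lets that node group reporters that are close on the Internet into the same swarm.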

References
[1] Haiying Shen and Harrison Chandler. Swarm Intelligence based File Replication and Consistency Maintenance in Structured P2P File Sharing Systems. In Proc. of IEEE, 2015.
[2] J. S. Otto, M. A. Sanchez, D. R. Choffnes, F. E. Bustamante, and G. Siganos. On Blind Mice and the Elephant: Understanding the Network Impact of a Large Distributed System. In Proc. of SIGCOMM, 2011.
[3] Bittorrent traffic increases 40% in half a year. http://torrentfreak.com/bittorrent-traffic-increases-40-in-half-a-year-121107/ [Accessed in Oct. 2014].
[4] Cisco visual networking index: Forecast and methodology, 2013-2018. Technical report, Cisco, 2014. http://www.cisco.com/c/en/us/solutions/collateral/serviceprovider/ip-ngn-ip-next-generation-network/white paper c11-481360.pdf [Accessed in Nov. 2014].
[5] H. Chen, H. Jin, X. Luo, Y. Liu, T. Gu, K. Chen, and L. M. Ni. BloomCast: Efficient and Effective Full-Text Retrieval in Unstructured P2P Networks. TPDS, 2012.
[6] S. Legtchenko, S. Monnet, P. Sens, and G. Muller. RelaxDHT: A Churn-Resilient Replication Strategy for Peer-to-Peer Distributed Hash-Tables. ACM TAAS, 2012.
[7] V. Ramasubramanian and E. Sirer. Beehive: The Design and Implementation of a Next Generation Name Service for the Internet. In Proc. of ACM SIGCOMM, 2004.
[8] M. Roussopoulos and M. Baker. CUP: Controlled Update Propagation in Peer to Peer Networks. In Proc. of USENIX ATC, 2003.
[9] L. Yin and G. Cao. DUP: Dynamic-tree Based Update Propagation in Peer-to-Peer Networks. In Proc. of ICDE, 2005.
[10] V. Martin, P. Valduriez, and E. Pacitti. Survey of Data Replication in P2P Systems. Technical Report inria-00122282, 2007.
[11] H. Shen. EAD: An Efficient and Adaptive Decentralized File Replication Algorithm in P2P File Sharing Systems. In Proc. of P2P, 2008.
