IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 4, APRIL 2012


Designing Router Scheduling Policies: A Privacy Perspective

Sachin Kadloor, Student Member, IEEE, Xun Gong, Student Member, IEEE, Negar Kiyavash, Member, IEEE, and Parv Venkitasubramaniam, Member, IEEE

Abstract—We study the privacy compromise due to a queuing side channel that arises when a resource is shared between two users in the context of packet networks. The adversary tries to learn about the legitimate user's activities by sending a small but frequent probe stream to the shared resource (e.g., a router). We show that for currently deployed scheduling policies, the waiting times of the adversary are highly correlated with the traffic pattern of the legitimate user, thus compromising user privacy. Through precise modeling of the constituent flows and the scheduling policy of the shared resource, we develop a dynamic program to compute the optimal privacy-preserving policy that minimizes the correlation between the user's traffic and the adversary's waiting times. While the explosion of the state space prohibits us from characterizing the optimal policy, we derive a suboptimal policy using a myopic approximation to the problem. Through simulation results, we show that the suboptimal policy does very well in the high traffic regime. Adapting the intuition from the myopic policy, we propose scheduling policies that demonstrate a good tradeoff between privacy and delay in the low and medium traffic regimes as well.

Index Terms—Privacy, queuing, scheduling policy design, side channel attack, timing side channel.

I. INTRODUCTION

It has long been known that a shared resource in a network leads to a covert channel that can be used for communication between different processes. Shared resources, however, lead not only to covert channels but also to side channels: information leaks about the activities of one process to another without the cooperation of the former. In this paper, we explore the side channel resulting from the queuing of packets from multiple users at a router. Consider the following example. A user, Alice, is using her computer at home to connect to the Internet using a home DSL

Manuscript received August 24, 2011; revised December 08, 2011; accepted December 22, 2011. Date of publication December 30, 2011; date of current version March 06, 2012. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Eduard A. Jorswieck. This work was supported in part by AFOSR under Grants FA9550-11-1-0016 and FA9550-10-1-0573, and NSF CCF 10-54937 CAR and CNS-1117701. S. Kadloor and X. Gong are with the Department of Electrical and Computer Engineering, and the Coordinated Science Lab, University of Illinois at Urbana-Champaign, IL 61801 USA (e-mail: [email protected]; [email protected]). N. Kiyavash is with the Department of Industrial and Enterprise Systems Engineering, and the Coordinated Science Lab, University of Illinois at Urbana-Champaign, IL 61801 USA (e-mail: [email protected]). P. Venkitasubramaniam is with the Department of Electrical Engineering, Lehigh University, Bethlehem, PA 18015 USA (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSP.2011.2182348

router, which connects to a router at her Internet service provider (ISP). The ISP sees all the traffic that Alice sends; however, Alice is not worried because she knows that she is protected by antiwiretapping legislation, and she encrypts all her most sensitive information. Along comes Bob, who is located at another ISP entirely, perhaps even in another country. Bob sends a probe stream to Alice's router. The probes are frequent, but small in size. Most importantly, the probes (and responses) make use of a shared queue at Alice's DSL router. As a result, the waiting times of the probes are correlated with Alice's traffic patterns. Due to this high correlation, Bob can reliably fingerprint the websites Alice visited.

The primary reason for the high correlation is the first-come-first-serve (FCFS) queuing policy of the shared DSL router. While the FCFS policy is an attractive choice by virtue of low delay and high utilization, the high correlation between Bob's waiting times and Alice's traffic pattern is highly unattractive in terms of preserving privacy. Now consider another extreme policy, namely, time division multiple access (TDMA), where a user is assigned a fixed service time regardless of whether he has any packets that need to be serviced. As expected, in this case, Bob's waiting times are independent of Alice's traffic pattern. However, TDMA is a highly inefficient policy in terms of throughput and delay. This tradeoff between information leakage and Quality-of-Service (QoS, in terms of delay or throughput) is inherent to router policy design. The goal of this paper is to design scheduling policies that mitigate the information leakage while at the same time providing good QoS guarantees.

The main contributions of this paper are:
• the development of a mathematical framework to analyze the router-based side channel using a precise queuing model for the router and the constituent flows;
• a quantitative measure of the information leakage based on this framework using the correlation between a target user flow and an attacker's probe, for any scheduling strategy;
• a dynamic programming formulation to compute the optimal scheduling strategy that minimizes the correlation metric, and the derivation of scheduling policies based on a myopic approximation to the solution of the dynamic program;
• a demonstration, through extensive experimental evaluation, that the derived policies perform better than various widely deployed scheduling policies.

While the main ideas in the paper are explained in the context of packet networks motivated by the example above, the queuing side channel can arise in any environment where a shared resource is used to serve multiple clients: for example, a processor that is time-shared between multiple threads in a computer, or, similarly, cache memory on a computer that is shared


Fig. 1. Side channel setup. The attacker, Bob, resides in Montreal, Canada, and the victim, Alice, resides in Illinois. Bob sends a train of pings to Alice's DSL router and analyzes their RTTs to estimate the traffic entering Alice's computer. The results are given in Fig. 2(a) and (b).

between multiple threads. The waiting time of each client is influenced by the volume of the jobs of the other clients. This influence is a function of the policy that the resource uses to serve the clients. Hence, the conclusions we draw can be applied to analyzing the privacy/delay/throughput tradeoff in any such system.

Fig. 2. Bob sends ping probes to Alice’s computer every 10 ms while Alice is browsing the website www.yahoo.com. In the figure plotted on top, each bar represents the total volume of traffic downloaded to Alice’s router in an interval of duration 10 ms. The bottom plot gives the round trip times of Bob’s ping probes. (a) DSL traffic entering Alice’s router. (b) RTT of Bob’s ping packets.

II. THE ATTACK MODEL

Consider the scenario illustrated in Fig. 1. Bob sends a low-bandwidth but high-frequency probe to the router and measures the round-trip time (RTT) of his probe packets. The DSL router has an incoming and an outgoing port for the traffic addressed to, and originating from, Alice's computer. Bob's probe responses and Alice's incoming traffic share the same queue; hence, the delay that Bob observes varies based on the pattern of traffic addressed to Alice. Although these pings travel through various intermediate routers, their round-trip time is primarily affected by Alice's traffic. This is because the intermediate routers have significantly higher bandwidth compared to the volume of the traffic flowing through them, making Alice's router the bottleneck [1], [2].

Bob needs to know the IP address of Alice's router in order to know where to send the probes. Although this mapping is typically only explicitly known to ISPs, many protocols such as file sharing, instant messaging, VoIP, and e-mail reveal the IP address of a user. Other forms of IP address reconnaissance may be possible but are outside the scope of this work.

To evaluate the potential for this type of attack, we observed the traffic of a home DSL user in Illinois while simultaneously sending a ping probe from a computer in Montreal every 10 ms. Fig. 2(a) and (b) show the results. The DSL traffic entering Alice's computer and the RTT of the ping packets are shown: there is a clear correlation between the two. The probe traffic used less than 50 Kbps of bandwidth and is unlikely to be noticed.

Note that our attack can only reveal the timing and the volume of the traffic, and not the actual contents of the packets. However, recent research has shown that significant inferences can be drawn by observing just the traffic pattern of a user: identification of websites visited [3], [4], guesses at passwords typed [5], and recovery of phrases spoken over VoIP [6].
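As a concrete illustration of the probing procedure (not the measurement setup used for Fig. 2), the following sketch collects an RTT time series at a fixed 10 ms interval. The target address and port are hypothetical placeholders, and TCP handshakes stand in for the ICMP pings used in practice, since ICMP typically requires raw-socket privileges or an external tool.

# Minimal probing loop (illustrative only; not the measurement code used for Fig. 2).
# It records round-trip times at a fixed probing interval by timing a TCP handshake to
# an assumed reachable host/port; a real attack would use ICMP pings instead.
import socket
import time

TARGET_HOST = "192.0.2.1"   # hypothetical address of the victim's DSL router
TARGET_PORT = 80            # assumed open port; placeholder
PROBE_INTERVAL = 0.010      # 10 ms between probes, as in the experiments
NUM_PROBES = 1000

def measure_rtt(host: str, port: int, timeout: float = 1.0) -> float:
    """Time one TCP handshake as a stand-in for a ping RTT (NaN if the probe is lost)."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("nan")
    return time.perf_counter() - start

def collect_rtts(n: int = NUM_PROBES) -> list:
    """Send n probes, one every PROBE_INTERVAL seconds, and return their RTTs."""
    rtts, next_send = [], time.perf_counter()
    for _ in range(n):
        rtts.append(measure_rtt(TARGET_HOST, TARGET_PORT))
        next_send += PROBE_INTERVAL
        time.sleep(max(0.0, next_send - time.perf_counter()))
    return rtts

if __name__ == "__main__":
    series = collect_rtts(100)
    print(f"collected {len(series)} RTT samples")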

Fig. 3. Plot of the correlation-delay tradeoff exhibited by the FCFS and TDMA scheduling policies. Two traffic streams are considered, one from Alice (the user) and the other from Bob (the attacker). We find the correlation between the traffic pattern of Alice and the delays experienced by Bob's packets. We also compute the average delays experienced by Alice's packets. Each point in the graph corresponds to one set of simulation parameters (a detailed explanation of the plot is given in Section VI). The FCFS scheduling policy results in a high correlation, whereas the TDMA scheduling policy results in a low correlation, but at the expense of increased delay. The goal of this paper is to design scheduling policies that offer a better tradeoff between the two.

Unlike previous schemes, where the attacker is assumed to have access to one of the routers along the path of Alice's traffic, the strength of our attack lies in the fact that it does not require special access or privileges for the attacker. The attacker does not even need high computational power. This dramatically increases the attack surface available for traffic analysis. Even internal communication wholly contained within the network of an enterprise or military base may be subject to traffic analysis, if the network is connected to the public Internet. Likewise, even on a closed


network, a low-clearance insider may be able to carry out such probes from a remote location.

A current limitation of this attack model is the assumption of no background flows, i.e., our attack assumes that a single stream flows through the router. This is indeed the case for non-high-tech home users accessing a single website at a time. If this were not the case, our attack would reveal the sum of all the constituent flows. This attack model, however, represents the worst-case scenario from the legitimate user's perspective. Our goal in this paper is to design scheduling policies that limit the capabilities of such attacks, even if the conditions are favorable to the attacker. The derived scheduling policies can only perform better in the presence of background flows.

Remote traffic analysis can be applied in a wide variety of contexts; a few important attack scenarios are listed below.

Cyberstalking: An attacker may put a particular user under surveillance in order to learn more about that user's behavior on the Internet. The motives for such individual surveillance can vary widely. An attacker may want to determine whether a user is active or not, which will help him determine whether someone is likely to be home at a given time. An attacker may also wish to construct a profile of the user: Is the user likely to be affluent? What is the user's gender? Are there children in the home? Another possibility is to deanonymize a pseudonym the user employs on a blog or a discussion board. By correlating user activity with the times that updates are posted on the blog or board, it will eventually be possible to obtain statistical evidence that implicates the user. Finally, as stated before, some encrypted traffic analysis techniques reveal the actual contents of a user's communication: statistical models can recover characters typed based on keystroke timings, while others can recover features from encrypted VoIP calls, such as the language being spoken, or test for the presence of specific phrases.

Unmasking Relationships: Remote traffic analysis can also be used to detect whether two users are communicating, thus inferring social or professional relationships between them. Such communications may take the form of a direct network connection between them via TCP or VoIP, or an indirect conversation by instant message. In both cases, correlation of messages leaving the computer of one user and arriving at the computer of the other can give evidence of communication.

Monitoring at Scale: Because each probe requires only a low volume of traffic, it is possible for attackers to monitor multiple routers at once. In addition to being able to monitor multiple people, more pervasive surveillance provides new opportunities for monitoring. For example, when deanonymizing a pseudonym or finding connections between users, cyberstalking allows only confirmation attacks, where an existing suspicion can be confirmed or disproved, whereas more pervasive monitoring can discover previously unknown relationships.


III. RELATED WORK

The idea of inferring information by utilizing a shared resource (specifically, a router) has been applied to deanonymize users in [7], where the authors introduce a "queue clogging" attack on anonymous systems. The attack considers a flow that is forwarded by a set of routers in an anonymous communication system. The attacker is able to observe one end of the flow and wishes to discover which routers are involved in forwarding it. He does this by sending a large flow to a particular router that "clogs" the router's queue and looking for a corresponding drop in throughput in the anonymous flow. Murdoch and Danezis implemented a version of this attack against Tor and MorphMix [8]. Their approach was to send an on-off pattern of high-volume traffic through the anonymous tunnel and a low-volume probe to a router under test. If the waiting times of the probe show a corresponding increase during the "on" periods, the router is assumed to be routing the flow. However, their experiment was performed only on a lightly loaded Tor network with 13 relays, and it is not practical on today's heavily loaded 1500-relay Tor network. Even ignoring the growing false positive rates resulting from possible increases in traffic load due to legitimate uses during the attacker's "on" period, the attacker needs an extremely large amount of bandwidth to measure the flows through enough relays during the attack window. Evans et al. [9] strengthened Murdoch and Danezis's attack with a bandwidth amplification attack that makes it feasible in modern-day deployments of Tor: by combining JavaScript injection with a selective and asymmetric denial-of-service (DoS) attack, Evans et al. were able to infer specific information about the path selected by the victim. Hopper et al. [2] use a combination of Murdoch and Danezis's approach and pairwise round-trip times (RTTs) between Internet nodes to correlate Tor nodes to likely clients. A countermeasure to Murdoch and Danezis's attack was presented in [10]: a stochastic fair queuing (SFQ) policy that probabilistically distributes resources over the set of circuits passing through a relay. The effectiveness of this strategy relies on the fact that there are multiple active circuits, which is not the case in our scenario, where there are only two active streams, the legitimate user's and the attacker's. Chakravarty et al. [11] propose an attack for exposing Tor relays participating in a circuit of interest by modulating the bandwidth of an anonymous connection and then observing the fluctuations as they propagate through the Tor network.

Other instances of traffic analysis, particularly of encrypted traffic, include recovery of information about keystrokes typed [5], [12], websites visited [3], [4], and words spoken over VoIP [6]. Additionally, communication relationships can be inferred by statistical correlation of network traffic [7], [13]–[15]. The specific attack studied in this paper was documented in [16], wherein we demonstrated that both FCFS and Round Robin (RR), another popular scheduling policy, when used at the router, result in a high correlation between the probe waiting times and Alice's traffic. In this paper, we consider the problem of designing router scheduling policies that mitigate this traffic analysis attack.

IV. MATHEMATICAL MODEL

Consider a single router serving packets from two streams as shown in Fig. 4; one stream belongs to the home user, and the other one belongs to the attacker. Note that the router is unaware of which stream belongs to the legitimate user. DSL routers typically serve packets using a FCFS policy; packets are served in the strict order in which they arrive. As demonstrated in Section II, the time between the departures of two packets


Fig. 4. Router serving two streams. If the scheduler uses an FCFS policy, the inter-departure time of packets from one stream depends on the number of packet arrivals in the other: the inter-departure time of packets in stream 2 is proportional to the number of arrivals in stream 1 between those two departures.

from a stream in an FCFS router is heavily correlated with the traffic load of the other stream. We are interested in designing router policies which minimize the correlation between the interdeparture times in one stream and the arrival pattern in the other while still maintaining QoS guarantees.

We consider a discrete time system with a fixed slot duration $\Delta$, where $\Delta$ is small enough that, in each time slot, there can be at most one packet arrival from each stream. Let $a_k^i$ denote the size of the incoming packet in stream $i$ in time slot $k$; $a_k^i = 0$ implies there is no arrival. The packet sizes determine the service times at the router. Let $c_k$ denote the stream from which a packet is served by the router in time slot $k$. The router's task in every time slot is to choose the control $c_k$, given all its past observations and actions, so that minimum information is leaked about the traffic pattern of one user to another. Formally, a router policy can be viewed as a sequence of mappings from all the past observations and actions to the action in the current slot. More precisely, let $\mathcal{H}_k = (a_1^1, a_1^2, c_1, \ldots, c_{k-1}, a_k^1, a_k^2)$ denote this history. Then, the router's action in slot $k$ can be written as $c_k = \mu_k(\mathcal{H}_k)$, where $c_k = i \in \{1, 2\}$ denotes that a packet from stream $i$ was chosen by the router to serve. If $c_k = 0$, then no packet was chosen from either stream.1 In this notation, the arrival time of the $n$th packet in the $i$th stream is the first time the total number of arrivals in that queue reaches $n$. Therefore,

$$A_n^i = \min\Big\{ k : \sum_{t=1}^{k} \mathbb{1}\{a_t^i > 0\} \ge n \Big\}, \qquad (1)$$

where $\mathbb{1}\{\cdot\}$ is the indicator function: $\mathbb{1}\{E\} = 1$ if $E$ is TRUE and $0$ otherwise. Similarly, the departure time of the $n$th packet from the $i$th queue is

$$D_n^i = \min\Big\{ k : \sum_{t=1}^{k} \mathbb{1}\{c_t = i\} \ge n \Big\}. \qquad (2)$$

1$c_k = 0$ does not necessarily mean that the router is idle; it could be continuing the service of a packet from the previous slot.

Fig. 5. Pictorial representation of the definitions of $A_n^i$ and $D_n^i$: upward arrows represent arrival times ($A_1^2$ and $A_2^2$ are the arrival times of packets 1 and 2 of stream 2), while downward arrows represent departure times of packets in each stream ($D_1^2$ and $D_2^2$ are the departure times of packets 1 and 2 of stream 2). The sizes of the arrows reflect the sizes of the packets.

For the $n$th packet in stream $i$, the overall delay experienced includes the waiting time for packets that arrived in its own stream to be served, and the waiting time for the packets in the other stream that were served before it. Since the arrival and service times of packets in its own queue are known to the user of stream $i$, information is leaked through the waiting time for packets in the other stream to be served, which is given by

$$W_n^i = D_n^i - \max\big(A_n^i,\, D_{n-1}^i\big). \qquad (3)$$

In effect, this is the time spent by the arriving packet waiting at the head of its queue (after the last packet in its own stream has been served). The total size of the arriving packets in the other stream $j \ne i$ between two consecutive arrivals in stream $i$ is defined as

$$S_n^i = \sum_{k = A_{n-1}^i + 1}^{A_n^i} a_k^j, \quad n \ge 2, \qquad (4)$$

$$S_1^i = \sum_{k = 1}^{A_1^i} a_k^j. \qquad (5)$$

Note that the maximum information that can be obtained by an adversary using the waiting times is the total size of the packets (through the total service time) that arrived in the other stream between two consecutive arrivals in his own stream. In other words, $S_n^i$ is the total information that is inferable by user $i$. The definitions of $W_n^i$ and $S_n^i$ are pictorially represented in Fig. 5. In the figure, we show the arrival and the departure times of packets from two streams. At time zero, we assume that there are no packets in the system. In the second stream, note that $W_2^2$ is the interdeparture time between the first and second packets, whereas $W_3^2$ is the RTT of the third packet. This is because the second packet spent a part of its total delay waiting for packets in its own queue to be served, which does not provide information about the other stream. When the third packet arrived into the system, there was no other unserved packet from its own stream, and hence the total round trip time is sufficient to extract information about the other stream.
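These quantities translate directly into a few lines of code. The sketch below, written under the notation reconstructed above (with slot indices starting at 0 for convenience), computes the arrival times, departure times, the waiting times of (3), and the cross-stream volumes of (4)-(5) from per-slot arrival sizes and scheduling decisions; the data layout and names are illustrative, not taken from the paper's simulator.

# Illustrative computation of A_n^i, D_n^i, W_n^i of (3), and S_n^i of (4)-(5).
from typing import Dict, List

def arrival_and_departure_times(a: Dict[int, List[int]], c: List[int]):
    """a[i][k]: size of the packet arriving to stream i in slot k (0 = no arrival);
    c[k]: stream whose packet departs in slot k (0 = no departure).  Cf. (1)-(2)."""
    A = {i: [k for k, size in enumerate(a[i]) if size > 0] for i in (1, 2)}
    D = {i: [k for k, ck in enumerate(c) if ck == i] for i in (1, 2)}
    return A, D

def waiting_times(A, D, i):
    """W_n^i = D_n^i - max(A_n^i, D_{n-1}^i), cf. (3)."""
    W = []
    for n, d in enumerate(D[i]):
        prev_dep = D[i][n - 1] if n > 0 else -1
        W.append(d - max(A[i][n], prev_dep))
    return W

def cross_stream_volumes(a, A, i):
    """S_n^i: total size of the other stream's arrivals between consecutive arrivals
    in stream i, cf. (4)-(5)."""
    j = 2 if i == 1 else 1
    S = []
    for n, arr in enumerate(A[i]):
        start = A[i][n - 1] + 1 if n > 0 else 0
        S.append(sum(a[j][start:arr + 1]))
    return S

# Toy trace: stream 1 is the user, stream 2 the attacker's probes.
a = {1: [0, 1500, 74, 0, 0, 0], 2: [74, 0, 0, 74, 0, 0]}
c = [2, 1, 1, 0, 2, 0]                      # per-slot scheduling decisions
A, D = arrival_and_departure_times(a, c)
print(waiting_times(A, D, 2))               # [0, 1]
print(cross_stream_volumes(a, A, 2))        # [0, 1574]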


Fig. 6. Pictorial representation of the accumulate and serve policy. The switches close at integral multiples of the accumulate period and transfer all the packets accumulated till then. The server then serves all the accumulated packets in a random order.

A. Information Leakage Metric

Suppose that at a given time, $n$ packets have been served from stream $i$. We define the following notation:

$$\mathbf{W}_n^i = \big[W_1^i, W_2^i, \ldots, W_n^i\big], \qquad (6)$$

$$\mathbf{S}_n^i = \big[S_1^i, S_2^i, \ldots, S_n^i\big]. \qquad (7)$$

If the adversary is user 1, then the vectors $\mathbf{W}_n^1$ and $\mathbf{S}_n^1$ represent, respectively, the information available and the information that needs to be extracted from the adversary's perspective. For a series of $n$ measurements of two random processes $X$ and $Y$, $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$, the sample mean of the measurements is the unbiased estimate of the mean of the random process. Define $\bar{x}$ to be the mean of the measurements of $X$, and similarly, define $\bar{y}$ to be the mean of the measurements of $Y$. The unbiased estimate of the correlation between the two processes is given by [17]

$$\widehat{\mathrm{cov}}(X, Y) = \frac{1}{n-1} \sum_{k=1}^{n} (x_k - \bar{x})(y_k - \bar{y}), \qquad (8)$$

and the estimate of the correlation coefficient between the two processes is given by [17]

$$\hat{\rho}(X, Y) = \frac{\sum_{k=1}^{n} (x_k - \bar{x})(y_k - \bar{y})}{\sqrt{\sum_{k=1}^{n} (x_k - \bar{x})^2}\,\sqrt{\sum_{k=1}^{n} (y_k - \bar{y})^2}}. \qquad (9)$$

Accordingly, we define the unbiased estimate of the correlation between the vectors $\mathbf{W}_n^i$ and $\mathbf{S}_n^i$ using the inner product between the centered vectors,

$$\widehat{\mathrm{cov}}\big(\mathbf{W}_n^i, \mathbf{S}_n^i\big) = \frac{1}{n-1}\,\big\langle \mathbf{W}_n^i - \bar{W}^i \mathbf{1},\; \mathbf{S}_n^i - \bar{S}^i \mathbf{1} \big\rangle, \qquad (10)$$

and the absolute value of the correlation coefficient between the vectors can be defined as

$$\hat{\rho}_i = \frac{\big|\big\langle \mathbf{W}_n^i - \bar{W}^i \mathbf{1},\; \mathbf{S}_n^i - \bar{S}^i \mathbf{1} \big\rangle\big|}{\big\| \mathbf{W}_n^i - \bar{W}^i \mathbf{1} \big\|\,\big\| \mathbf{S}_n^i - \bar{S}^i \mathbf{1} \big\|}. \qquad (11)$$

Correlation: From the router's perspective, either of the streams could belong to the legitimate user, and hence the metric cannot assume a specific stream to be the attacker. The information leakage metric for a router policy is therefore defined as the maximum of the absolute value of the correlation coefficient across the streams:

$$\max_{i \in \{1, 2\}} \hat{\rho}_i. \qquad (12)$$

The metric defined above captures the influence of the traffic pattern in one stream on the delays experienced by packets in the other. The intuition for the metric can be understood from the performance of the two extreme policies described in Section I. When the scheduler in the router employs an FCFS policy, the waiting times between departures of one stream are directly proportional to the total service time of packets that arrived from the other stream, thus resulting in a correlation close to 1; we can argue that the FCFS policy provides the maximum information leakage. When the scheduler employs a TDMA policy, since interdeparture times are completely independent of the service times of packets in the other stream, the resulting correlation is very low, and over a long time horizon is zero; the TDMA scheme provides minimal information leakage.

In general, the benefits of using correlation as a metric to quantify information leakage are threefold: (i) The correlation metric can be calculated without assuming any probabilistic model on the traffic. This is perhaps the main reason for the abundant use of correlation metrics in traffic analysis [8], [18]. (ii) As the information leaked by the queuing side channel can be used for a variety of inference tasks, such as cyberstalking or unmasking relationships (e.g., inferring social or professional relationships), correlation is a general metric that captures the dependency between the observation and the information that is protected, as opposed to metrics associated with a specific inference task (e.g., miss and false alarm probabilities in detection). Metrics such as mutual information and estimation error depend on the probability mass functions (pmfs) of $\mathbf{W}_n^i$ and $\mathbf{S}_n^i$, and these pmfs in turn depend on the scheduling policy. Because these metrics depend on the scheduling policy itself, designing a policy to minimize such a metric is ill-posed. Note that knowledge of the pmfs is not required to compute the correlation metric. (iii) The correlation metric tracks linear relationships among random variables. Given that, for most reasonable policies (in terms of stability and delay/throughput tradeoffs), we expect the waiting time of the attacker to grow when the user's traffic surges, correlation is a natural choice.
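The leakage metric of (8)-(12) reduces to a standard sample correlation computation. The sketch below (illustrative names, written over plain Python lists so that no probabilistic traffic model is assumed) evaluates it for given waiting-time and volume vectors; the $1/(n-1)$ factors of (8) and (10) cancel in the coefficient and are therefore omitted.

# Sketch of the information leakage metric (12): the absolute sample correlation
# coefficient between the observable waiting times W^i and the hidden cross-stream
# volumes S^i, maximized over the two streams.
import math
from typing import Dict, List, Sequence

def abs_corr_coeff(x: Sequence[float], y: Sequence[float]) -> float:
    """|sample correlation coefficient|, cf. (9)/(11); the 1/(n-1) factors cancel."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0.0 or sy == 0.0:
        return 0.0          # a constant series carries no linear information
    return abs(cov / (sx * sy))

def leakage_metric(W: Dict[int, List[float]], S: Dict[int, List[float]]) -> float:
    """Metric (12): maximum over the two streams of the absolute correlation."""
    return max(abs_corr_coeff(W[i], S[i]) for i in (1, 2))

# Example with the toy W^i and S^i values from the previous sketch:
W = {1: [0.0, 0.0], 2: [0.0, 1.0]}
S = {1: [74.0, 74.0], 2: [0.0, 1574.0]}
print(leakage_metric(W, S))   # 1.0: the probe delays track the user's volume exactly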

V. SCHEDULING POLICY DESIGN

As stated earlier, the goal of the paper is to design scheduling policies that minimize the correlation between the traffic pattern in one stream and the delays experienced by the packets in the other stream, as given in (12). In order to make the class of scheduling policies specified by the mathematical model in Section IV practical, it is necessary to impose some networking requirements in the design of an optimal scheduling policy. In particular, a valid scheduling policy needs to satisfy the following conditions:


1) The designed policy is stable: if the arrival rates of packets are within the service capacity of the scheduler, then the queue sizes do not increase indefinitely.
2) All the packets within each stream are served in a FIFO (first in, first out) manner.2

A. Optimal Scheduling Policy: Markov Decision Framework

The optimal scheduling policy that minimizes the correlation metric can be formulated using a Markov decision process (MDP) framework. An MDP provides a mathematical framework to model decision making in situations where the outcomes depend probabilistically on the actions. A "finite horizon" MDP problem with a time horizon $T$ is characterized by the tuple $(\mathcal{S}, \mathcal{A}, P, g, g_T)$, where
• $\mathcal{S}$, called the state space, is a countable set of states. The state space captures all the necessary information from the past history required to make a decision. Let $X_k$ denote the random variable indicating the state of the system at time $k$.
• $\mathcal{A}$, called the action space, is a countable set of actions which can be taken at a given time and which will in turn influence the subsequent state of the system. Let $U_k$ denote the random variable indicating the action taken at time $k$.
• $P$, the probability transition matrix: $P(s' \mid s, u)$ gives the probability that the state at time $k+1$ is $s'$, given that the state of the system at time $k$ was $s$ and the state transition resulted because action $u$ was taken at time $k$.
• $g(s, u)$ is the per-stage cost function, i.e., the cost incurred in taking action $u$ when in state $s$.
• $g_T(s)$, the terminal cost, is the cost incurred if the state of the system at the end of the time horizon is $s$.
The objective of the MDP is to find an optimal "policy" that minimizes the total cost. A nonanticipative, or history dependent, policy is a sequence of mappings $\mu_k$ such that

$$U_k = \mu_k\big(X_1, U_1, \ldots, X_{k-1}, U_{k-1}, X_k\big). \qquad (13)$$

A Markov policy is one where the action at time $k$ depends only on the state of the system at that time,

$$U_k = \mu_k(X_k). \qquad (14)$$

2The FIFO requirement ensures that the order of the packets within a stream is not changed at the router. Although we could potentially reduce the information leakage between the queues by reordering packets within a queue, we avoid doing so because of the additional overhead in complexity required to handle such reordering.

In the class of history dependent policies, an optimal policy is one which minimizes the total cost function given by

$$J_\mu(s) = \mathbb{E}\Big[\sum_{k=1}^{T} g(X_k, U_k) + g_T(X_{T+1}) \,\Big|\, X_1 = s\Big], \qquad (15)$$

where $s$ is the initial state of the system, and the expectation is taken over the distributions of the random variables $X_2, \ldots, X_{T+1}$, governed by the transition probability matrix $P$. Value iteration is a procedure to solve for an optimal policy [19], which is as follows. At any time $k$, define $V_k(s)$ to be the optimal remaining expected cost if the current state of the system is $s$. We can solve for $V_k$ and an optimal solution by solving the dynamic program

$$V_{T+1}(s) = g_T(s), \qquad (16)$$

$$V_k(s) = \min_{u \in \mathcal{A}} \Big[ g(s, u) + \sum_{s'} P(s' \mid s, u)\, V_{k+1}(s') \Big], \quad k = T, T-1, \ldots, 1. \qquad (17)$$
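For a small MDP specified by explicit tables, the backward recursion (16)-(17) can be coded directly, as in the sketch below. It is included only to make the recursion concrete; the scheduling MDP of this paper is far too large for this direct approach, as discussed next.

# Generic finite-horizon value iteration, cf. (16)-(17), for an MDP given as tables.
from typing import Dict, List, Tuple

def value_iteration(states: List[int],
                    actions: List[int],
                    P: Dict[Tuple[int, int], Dict[int, float]],  # P[(s, u)][s'] = prob.
                    g: Dict[Tuple[int, int], float],             # per-stage cost g(s, u)
                    g_T: Dict[int, float],                       # terminal cost g_T(s)
                    T: int):
    """Return value tables V[k][s] and a Markov policy mu[k][s] for k = 1, ..., T."""
    V = {T + 1: dict(g_T)}                                       # boundary condition (16)
    mu: Dict[int, Dict[int, int]] = {}
    for k in range(T, 0, -1):                                    # backward recursion (17)
        V[k], mu[k] = {}, {}
        for s in states:
            best_u, best_cost = None, float("inf")
            for u in actions:
                cost = g[(s, u)] + sum(p * V[k + 1][s2] for s2, p in P[(s, u)].items())
                if cost < best_cost:
                    best_u, best_cost = u, cost
            V[k][s], mu[k][s] = best_cost, best_u
    return V, mu

# Toy two-state, two-action instance, just to exercise the recursion.
S_, A_ = [0, 1], [0, 1]
P = {(s, u): {u: 1.0} for s in S_ for u in A_}   # action u moves the system to state u
g = {(s, u): float(s != u) for s in S_ for u in A_}
V, mu = value_iteration(S_, A_, P, g, {0: 0.0, 1: 5.0}, T=3)
print(V[1], mu[1])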

Moreover, we can show that an optimal policy can be found within the class of Markov policies: the solution to the dynamic program yields an optimal policy of the form $U_k = \mu_k^*(X_k)$. Refer to [19] for further details.

Although the scheduling policy design problem fits into the MDP framework, solving for the optimal scheduling policy using the framework is computationally intractable because of state space explosion. The amount of information that constitutes the state depends on the nature of the cost function, which in turn determines whether solving for an optimal policy is computationally tractable. The cost function in the current problem is complex. In order to compute the terminal cost given in (18) below, the router has to store a large number of variables as a part of the state. Suppose that by time $k$, $n_1$ packets have arrived in queue 1, $n_2$ packets have arrived in queue 2, $m_1$ packets have departed from queue 1, and $m_2$ packets have departed from queue 2; then the state at time $k$ consists of the following:
• the arrival times $A_1^1, \ldots, A_{n_1}^1$ and $A_1^2, \ldots, A_{n_2}^2$;
• the departure times $D_1^1, \ldots, D_{m_1}^1$ and $D_1^2, \ldots, D_{m_2}^2$;
• the total volume of the packets which have arrived in queue 1 since the last arrival in queue 2;
• the total volume of the packets which have arrived in queue 2 since the last arrival in queue 1;
• the remaining service time of the packet currently in service.
Unfortunately, because of the nature of the cost function defined in (12), the total cost across the entire horizon cannot be split into a sum of per-stage costs. We therefore have only one terminal cost and no per-stage cost. Define the terminal cost to be

$$g_T(X_{T+1}) = \max_{i \in \{1,2\}} \hat{\rho}_i, \qquad (18)$$

evaluated with the vectors $\mathbf{W}_{n_i}^i$ and $\mathbf{S}_{n_i}^i$, where it is assumed that $n_i$ packets of user $i$ have been served by the end of the time horizon $T$. Associated with each of these variables is a state update equation; we do not list the state-update equations for the sake of brevity. In order to solve for an optimal policy, the number of variables to keep track of grows polynomially in time. Given the high dimensionality of the state space, the total information that the router has to store blows up quickly, and it becomes intractable to solve for an optimal policy. Moreover, even when the dynamic program is solved, the optimal policy is given as a lookup table, and the size of this table grows exponentially with time as well. The explosion of the state space is quite often the reason why finding the optimal solution in the dynamic programming framework becomes futile [20], and other alternatives are sought. The framework developed here can, however, be used to provide an approximate solution by considering a myopic optimization, which is described in the subsequent section.

A note on the relationship between the TDMA scheduling policy and the solution to the MDP problem described above: when the TDMA scheduling policy is employed, the departures of one stream are not affected by the input pattern in the other. Therefore, over a sufficiently long time horizon $T$, TDMA offers near zero correlation. The solution of the MDP problem, on the other hand, results in a policy that has the minimum possible correlation for any time horizon $T$, and is therefore superior to TDMA. Unfortunately, we cannot compute the solution to the MDP directly, and we resort to finding approximations to it.

B. Myopic Optimization: High Traffic Load

Although the Markov decision framework is prohibitively complex to analyze, we can define a myopic policy in the framework which performs a greedy optimization at every step of the process. The intuition behind the greedy optimization is that when the traffic load is high, the router does not gain significantly by looking far into the future, and a near-sighted optimization can perform just as well. More specifically, at time $k$, the router serves a packet from the stream for which the resulting correlation metric is lower, i.e., the router solves the following optimization problem each time it becomes idle, to yield the policy

$$c_k = \arg\min_{i \in \{1,2\}} \Big[ \max_{j \in \{1,2\}} \hat{\rho}_j \;\Big|\; \text{a packet of stream } i \text{ is served next} \Big]. \qquad (19)$$

To implement the myopic policy, the router time-stamps each incoming packet, i.e., the router stores the time of arrival of the packet. The router picks the stream for which the correlation metric after the service is minimum. Other than the times of arrival of the packets already in the buffer, the router also needs to store the departure times of all the packets which have been served since the arrival of the earliest unserved packet waiting in the buffer. Note that in the design of the policy, we do not allow the router to idle. We do so because we do not assume any distribution on the external traffic, and hence, in order to guarantee stability of the policy, we force it to be a non-idling policy.

C. Accumulate and Serve: Low Traffic Load

Besides the myopic policy described in Section V-B, the MDP formulation gives us the intuition for another good privacy-preserving policy, accumulate and serve. Internet traffic resulting from applications such as web surfing is typically bursty, i.e., packets arrive in a sporadic manner, with many packets in a small interval followed by long periods with no arrivals. The bursty behavior of the traffic can, for instance, serve as the signature of a website. One method to limit the correlation introduced by burstiness is to introduce artificial delays to the incoming traffic. Specifically, delays need to be added in such a manner that the resulting pattern of packet arrivals has no bearing on the original signature. If packet delay were not a criterion of network performance, then TDMA or a close variation of it would be a near optimal scheduling policy. In other words, we would collect all the packets of a stream arriving in a time slot and serve all of them, idle for any remaining time, and then move on to the other stream. Under such a policy, it would be impossible for the attacker to gain any information about the traffic pattern of the legitimate user.

The accumulate and serve policy adapts this intuition while maintaining fairness to both users. Specifically, packets from both streams are buffered (accumulated) for a certain amount of time $\tau$. Then, all these buffered packets are served in a random order, i.e., at the completion of service of a packet, the scheduler randomly decides which stream to serve from. Furthermore, if serving all these packets takes $t_s$ amount of time, and if $t_s < \tau$, then the scheduler idles in between serving two packets in such a way that the total idling time is $\tau - t_s$. After serving all these packets, the scheduler then starts serving all the packets accumulated meanwhile in the next batch. If $t_s \ge \tau$, then the scheduler immediately starts serving all the packets accumulated till then.

VI. EXPERIMENTAL RESULTS

In this section, we study numerically the performance of the myopic and accumulate and serve policies in terms of correlation and network performance. We compare the performance of the proposed policies with other known scheduling policies: FCFS, RR, TDMA, randomized RR, and traffic shaping. These policies are described in the following. For TDMA, we set the duration of a time slot to be 10 ms. For the accumulate and serve policy, we again set the duration of an accumulate period to 10 ms.
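For concreteness, the following sketch implements the accumulate and serve policy of Section V-C as a loop over accumulation periods of length tau. The packet representation and the way the idle time is spread across the batch are our own illustrative choices; only the batching rule itself follows the description above.

# Illustrative accumulate-and-serve scheduler (data structures and idle-spreading are
# assumptions made here for the sketch; the batching rule follows Section V-C).
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Packet:
    stream: int
    arrival: float          # arrival time in seconds
    service: float          # transmission time in seconds
    departure: float = field(default=0.0)

def accumulate_and_serve(packets: List[Packet], tau: float = 0.010) -> List[Packet]:
    """Packets arriving during one accumulation period are served, in random order,
    during the next period; if the batch finishes early the scheduler idles so that a
    batch never occupies less than tau."""
    packets = sorted(packets, key=lambda p: p.arrival)
    served: List[Packet] = []
    idx, t = 0, tau                             # first batch closes at time tau
    while idx < len(packets):
        batch = []
        while idx < len(packets) and packets[idx].arrival <= t:
            batch.append(packets[idx])
            idx += 1
        random.shuffle(batch)                   # random service order within the batch
        busy = sum(p.service for p in batch)
        pad = max(tau - busy, 0.0) / max(len(batch), 1)   # spread the idle time evenly
        clock = t
        for p in batch:
            clock += p.service + pad
            p.departure = clock
            served.append(p)
        t += max(busy, tau)                     # next batch closes when this one is done
    return served

# Example: two streams with a 10 ms accumulation period.
pkts = [Packet(1, 0.001, 0.004), Packet(2, 0.002, 0.0005), Packet(1, 0.013, 0.004)]
for p in accumulate_and_serve(pkts):
    print(p.stream, round(p.departure, 4))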


Fig. 7. Pictorial representation of the traffic shaping policy. Each incoming stream is first fed into a queue that smooths it out by adding delays; the smoothed streams are then served by one of the following schedulers: FCFS, RR, or randomized RR.

Randomized Round Robin: The randomized round-robin policy is a randomized version of the round robin policy. In this policy, at the completion of service of a packet, if there are unserved packets in both queues, the scheduler decides whether to serve a packet from queue one or two by flipping a fair coin. If there are no packets in one of the queues, the scheduler serves packets from the other queue.

Traffic Shaping: As the name suggests, traffic shaping is a policy designed to shape the incoming traffic from the users so that, in the worst case, the attacker learns the shaped traffic of the other user, and not the original one. The policy reduces the burstiness of the input traffic by introducing two virtual FCFS queues, one for each stream, as shown in Fig. 7. The incoming traffic streams are fed as the inputs to the FCFS queues, and their outputs serve as the inputs to the scheduler. The output traffic streams from the two virtual queues are far less bursty than the inputs due to the service times at the virtual queues. The smoothed streams are then processed by one of the aforementioned schedulers: FCFS, RR, or randomized RR. The traffic shaper used here is also referred to as a token bucket filter or regulator [21], and is commonly used to regulate the bandwidth of a data stream [22].

Since none of the policies require the scheduler to be idle when there are packets to be served, the policies are stable, and consequently, the maximum throughput does not differ across policies. We therefore compare the relative performance of the policies using their correlation metric and delay properties. The system model for the experiment is the same as that described in Section IV and shown in Fig. 4. For the inputs to stream one, we used synthetic traffic generated according to a Pareto distribution. In the case of high speed networks with unexpected demand on packet transfers, Pareto-based traffic models are considered to be excellent candidates, since the model takes into consideration the long-term correlation in packet arrival times [23]. One reason for using synthetic traffic is to be able to parametrize the traffic and study the behavior of the various policies across a range of traffic loads. The interarrival times between the packets are modeled as Pareto distributed random variables. The packet sizes are modeled as independent and identically distributed (i.i.d.) random variables taking the values 74 bytes and 1500 bytes with probabilities 0.4 and 0.6, respectively. These two numbers correspond to the two most common sizes of TCP packets that we captured in our experiments. The traffic in the second stream is assumed to be a train of pings sent at a regular interval of 10 ms. The traffic of stream one is generated synthetically according to a Pareto distribution as described above, with different parameters. We have two sets of experiments, one in which the traffic load is high and the other in which it is low. In the high traffic load regime, the parameters of the Pareto distribution are varied so that the heavy traffic corresponds to an average traffic load of about 0.8 and the light traffic corresponds to a load of about 0.3. In the low traffic regime, the parameters are varied so that the load at the scheduler varies from 0.1 to 0.01.
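The synthetic traffic described above is straightforward to generate. The sketch below does so with Python's standard Pareto variate; the shape and scale values are placeholders to be tuned to the desired load, not the parameters used in the simulations.

# Synthetic traffic in the spirit of the setup above: Pareto interarrival times and
# packet sizes drawn i.i.d. from {74, 1500} bytes with probabilities 0.4/0.6.
import random

def pareto_traffic(num_packets: int, shape: float = 1.5, scale: float = 0.005,
                   seed: int = 0):
    """Return (arrival_times, packet_sizes) for one user stream.  'shape' and 'scale'
    are illustrative placeholders controlling the offered load."""
    rng = random.Random(seed)
    t, arrivals, sizes = 0.0, [], []
    for _ in range(num_packets):
        t += scale * rng.paretovariate(shape)      # Pareto-distributed interarrival time
        arrivals.append(t)
        sizes.append(74 if rng.random() < 0.4 else 1500)
    return arrivals, sizes

# User stream (stream one) plus the attacker's regularly spaced 10 ms pings (stream two).
user_arrivals, user_sizes = pareto_traffic(1000)
probe_arrivals = [0.010 * k for k in range(1000)]
probe_sizes = [74] * 1000
print(len(user_arrivals), round(user_arrivals[-1], 3))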


In general, the attacker can design the traffic in stream two; it need not be a train of regularly spaced pings. However, in the absence of any information about the regular user's stream of packets, sending a uniformly spaced train of pings seems reasonable. We discuss the issue of pings sent at regular versus random times in more detail in Section VI-B.

Corresponding to each policy and each traffic parameter, we have one point in the plots. In Fig. 8, we plot the value of the correlation metric as a function of the traffic load. In Figs. 9 and 10, we plot the value of the correlation metric and the average delay experienced by the user packets in the high and low traffic regimes, respectively. Note that policies with both a low correlation metric and a low average delay are preferred. For the simulation, we generated 100 traces of the input traffic for each set of parameters and averaged the results over these.

A. Inference From Simulation Results

From Figs. 9 and 10, some of the broad inferences are the following:
• The FCFS and TDMA policies represent two extreme points in terms of correlation (and consequently delay), with FCFS being the largest and TDMA the least. This follows our previously described intuition that FCFS reveals maximum information about the traffic pattern, whereas with TDMA, the observables and the arrival information are independent of each other.
• In the high traffic regime, the myopic policy based on the dynamic programming framework provides the least correlation for a given achievable delay, while accumulate and serve has the least correlation at medium and low traffic loads. In fact, at high traffic loads, the correlation performance of both of these policies is close to that of TDMA, with significantly lower delay. Since the myopic policy is directed at optimizing the correlation metric, and accumulate and serve is an approximation of the optimal infinite-delay policy, the good correlation performance of these policies is as expected.
• The correlation performance of the accumulate and serve and myopic policies degrades with decreasing traffic load. Both policies have a tendency to buffer packets from the streams, which results in a distortion of the perceivable traffic pattern. This distortion reduces as the traffic load decreases, since the number of packets buffered prior to transmission also reduces.
• Apart from the FCFS policy, all other policies exhibit an increase in the delay of the attacker packets as the traffic load of the user packets increases. This demonstrates that although the designed policies reduce the information regarding the traffic pattern, the traffic load of the legitimate user is revealed through the ping delays.
• For the RR policy, an incoming ping experiences delay only when the scheduler is busy serving a packet from the other queue. It will be served as soon as the scheduler finishes the service of the other packet. Hence, when the RR policy is used, the delay experienced by the ping is correlated with the size of only one of the packets in the other queue, and not the total volume. Hence, as the traffic load goes down, there is a higher likelihood of there being only one packet in between successive pings, which is directly


Fig. 8. Plot of the correlation metric versus offered traffic load for the following scheduling policies: FCFS, RR, FCFS-shaping, RR-shaping, Myopic, TDMA, and accumulate and serve followed by random selection. We have two plots, one for the high traffic regime and one for the low traffic regime. Note that the TDMA policy performs the best among all the policies, followed by accumulate and serve. This plot does not show the delays experienced by the packets; the delays are shown in Figs. 9 and 10. (a) Traffic load versus correlation in the high traffic density regime. (b) Traffic load versus correlation in the low traffic density regime.

Fig. 9. Correlation-delay performance of the different policies in the high traffic density regime. (a) plots the correlation metric versus the average delay experienced by the user’s packets. TDMA policy offers low correlation at the expense of very high delay. The myopic policy and the accumulate and serve offer low correlations and low delays, and are good alternatives to FCFS policy. (b) plots the average delays of the user’s versus the delays of the attacker’s pings. The Myopic policy delivers low correlation by giving priority to the user’s packets, and delaying the pings. The TDMA policy delays both the user’s and the attacker’s packets. (a) Correlation versus user packet’s delay. (b) Delays experienced by the pings and the user’s packets.

reflected in the queuing delay, resulting in a higher correlation.
Overall, the policy design for the scheduler depends on the traffic load and delay requirements. Under low and medium traffic conditions, accumulate and serve provides the best tradeoff between delay and correlation. Under high traffic loads, the myopic policy based on the Markov decision framework provides the best tradeoff. If delay were not a criterion, then TDMA would be the ideal policy.

B. Regularly Versus Randomly Spaced Pings

We carried out a simulation in which we compared the correlation when regularly spaced and randomly spaced pings are used by the attacker. The mean time between pings was 10 ms in both cases; for the randomly spaced pings, the inter-ping times were i.i.d. exponentially distributed random variables with mean 10 ms. We carried out the experiment in the high traffic regime and compared the correlation/delay curves for the two cases. Our simulation results, depicted in Fig. 11, demonstrate that the regular pings pick up a higher correlation in the case of the FCFS and Round Robin policies; for the accumulate and serve policy, the opposite holds. In this paper, we do not explore the possibility of the attacker using a cleverly designed probing strategy; this is the subject of our future work.

VII. WEBSITE CLASSIFICATION EXPERIMENT

In this section, we describe the results of an experiment in which we studied the efficacy of the accumulate and serve policy in mitigating the remote traffic analysis attack. We consider the situation where the attacker, Bob, is interested in


Fig. 10. Correlation-delay performance of the different policies in the low traffic density regime. These plots are analogous to those in Fig. 9, demonstrating the performance of the policies in the low traffic regime. The TDMA policy is excluded because the delays experienced by the packets when TDMA is used are multiple orders of magnitude greater than the delays experienced when the other policies are used. (a) Correlation versus user packet's delay. (b) Delays experienced by the pings and the user's packets.

TABLE I
RESULTS OF THE CLASSIFICATION EXPERIMENT

Fig. 11. Changing the inter-ping time from a fixed value to a random value results in an increase in correlation for a few scheduling policies [FCFS (green curves) and RR (blue curves)], and a reduction in another [accumulate and serve (brown curves)].

identifying which website Alice is browsing. We assume that Alice is browsing one of the top twenty-four news websites listed at http://www.alexa.com/topsites/category/Top/News. A detailed account of the attack is presented in [24]. In order to carry out this attack, Bob first sends pings to a training testbed (e.g., his own DSL router) while simultaneously browsing one of the 24 websites. He thus builds a set of training RTT patterns associated with each website. He then sends a train of pings to Alice's router and observes the RTT pattern. Bob now tries to identify the website Alice was browsing by comparing the observed RTT pattern to those in the training set. The similarity between two RTT patterns is measured by a dynamic time warping (DTW) algorithm [25]. DTW is an algorithm used in multimedia processing for measuring the similarity between two time series which may vary because of insertions, deletions, or speed. The attacker then uses a nearest neighbor rule to identify which website Alice was browsing.
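The classification step can be sketched with a textbook DTW distance and a nearest neighbor rule, as below; the feature extraction, trace alignment, and parameter choices of the actual attack are described in [24], [25], and the traces and labels used here are hypothetical.

# Textbook O(nm) dynamic time warping distance plus a 1-nearest-neighbor rule, as a
# sketch of the classification step; the training data below is hypothetical.
from typing import List, Sequence, Tuple

def dtw_distance(x: Sequence[float], y: Sequence[float]) -> float:
    """DTW distance with absolute-difference local cost."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(observed: Sequence[float],
             training: List[Tuple[str, Sequence[float]]]) -> str:
    """Return the label of the nearest training RTT trace under the DTW distance."""
    return min(training, key=lambda item: dtw_distance(observed, item[1]))[0]

# Hypothetical usage: 'training' holds (website, RTT trace) pairs collected on a testbed.
training = [("siteA", [1.0, 1.2, 5.0, 1.1]), ("siteB", [1.0, 1.0, 1.1, 1.0])]
print(classify([1.1, 4.8, 1.2, 1.0], training))   # -> "siteA"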

We carried out the attack described above by simulating the functioning of the router and by using real traffic traces captured using tcpdump (www.tcpdump.org). A total of 18 traces were captured for each website; nine of them were used as the training set, and the other nine were used for classification. We consider the FCFS, RR, and accumulate and serve (AS) policies. The attacker is assumed to know which policy Alice's router is using. The classification results are presented in Table I. When the FCFS policy was used, the attacker could correctly identify Alice's website over 70% of the time (and about 30% of the time, Alice's website was wrongly classified). At the other extreme, the TDMA policy resulted in a classification rate of less than 10%. We also tested the AS policy with different accumulate times, 10, 30, and 40 ms, the classification rate for the last being less than 25%, demonstrating the ability of the AS policy to prevent this attack to a great extent relative to the commonly used FCFS policy. Table I also shows the average delays experienced by Alice's packets for the policies tested. The delays for the AS policies are all within 10% of the delays for FCFS, hopefully a tolerable tradeoff in return for the improvement in Alice's privacy. Furthermore, it can be seen that the policies with a lower correlation metric also have a lower classification rate. This validates that the correlation metric is indeed closely tied to operational performance metrics such as classification error. On the other hand, TDMA offers a very low classification rate but, on average, resulted in five times the average delay of FCFS.

VIII. CONCLUSION

In this paper, we considered the problem of designing scheduling policies for routers to prevent inference of traffic patterns from ping round trip times. We proposed a correlation metric to


quantify the dependency between the round trip times of the attacker's packets and the traffic pattern of a legitimate user. The most common scheduling policy used in routers today is the FCFS policy. However, when the FCFS policy is used at the router, we observed that an attacker wishing to learn which website a user was browsing could correctly identify it over 70% of the time. To mitigate these attacks, we developed a mathematical framework to design scheduling policies and proposed scheduling policies that achieve a desirable tradeoff between privacy and network performance in terms of delay. In particular, one of our policies, accumulate and serve, brought the classification rate down to below 25% while having similar delay properties. The accumulate and serve policy improves on FCFS by eliminating the possibility of the attacker learning fine-grained information about the arrival times of the packets.

The experimental results consider the special case of regular pings transmitted by the attacker. In theory, the attacker can design his probing strategy depending on the policy adopted by the router. This interaction can be modeled as a dynamic game between the router and the attacker, with the router trying to minimize the correlation and the attacker trying to maximize it. In a game-theoretic formulation of the problem, deriving the optimal scheduling policy and attack strategy will entail investigating the existence of a Nash equilibrium, i.e., a stable pair of strategies such that neither the attacker nor the router benefits by (unilaterally) deviating from it. Investigating such an equilibrium will be the subject of future work, as it would provide optimal strategies for both parties and therefore let us characterize the fundamental tradeoffs between performance and privacy. Although we considered the problem in the context of packet networks, the queuing side channel can arise in any environment where a shared resource is used to serve multiple clients. We believe the mathematical framework and solutions we developed will be applicable in any such scenario.

REFERENCES
[1] A. Akella, S. Seshan, and A. Shaikh, "An empirical evaluation of wide-area internet bottlenecks," in Proc. ACM SIGCOMM Conf. Internet Measure. (IMC), M. Crovella, Ed., New York, 2003, pp. 101–114.
[2] N. Hopper, E. Y. Vasserman, and E. Chan-Tin, "How much anonymity does network latency leak?," in Proc. 14th ACM Conf. Comput. Commun. Secur. (CCS '07), 2007.
[3] G. Bissias, M. Liberatore, D. Jensen, and B. Levine, "Privacy vulnerabilities in encrypted HTTP streams," Privacy Enhancing Technol., pp. 1–11, 2006 [Online]. Available: http://dx.doi.org/10.1007/11767831_1
[4] M. Liberatore and B. N. Levine, "Inferring the source of encrypted HTTP connections," in Proc. 13th ACM Conf. Comput. Commun. Secur. (CCS '06), New York, 2006, pp. 255–263 [Online]. Available: http://dx.doi.org/10.1145/1180405.1180437
[5] D. X. Song, D. Wagner, and X. Tian, "Timing analysis of keystrokes and SSH timing attacks," in Proc. USENIX Security Symp., 2001.
[6] C. V. Wright, L. Ballard, S. E. Coull, F. Monrose, and G. M. Masson, "Spot me if you can: Uncovering spoken phrases in encrypted VoIP conversations," in Proc. 2008 IEEE Symp. Secur. Privacy (SP '08), Washington, DC, 2008, pp. 35–49, IEEE Computer Soc.
[7] A. Back, U. Möller, and A. Stiglic, "Traffic analysis attacks and trade-offs in anonymity providing systems," in Information Hiding, ser. Lecture Notes in Computer Sci. New York: Springer, 2001, vol. 2137, pp. 245–247.
[8] S. J. Murdoch and G. Danezis, "Low-cost traffic analysis of Tor," in Proc. 2005 IEEE Symp. Secur. Privacy (SP '05), Washington, DC, 2005, pp. 183–195, IEEE Computer Soc.


[9] N. Evans, R. Dingledine, and C. Grothoff, "A practical congestion attack on Tor using long paths," in Proc. USENIX Secur. Symp., 2009.
[10] J. McLachlan and N. Hopper, "Don't clog the queue! Circuit clogging and mitigation in P2P anonymity schemes," in Financial Cryptography and Data Security, G. Tsudik, Ed. Berlin, Germany: Springer-Verlag, 2008, pp. 31–46 [Online]. Available: http://dx.doi.org/10.1007/978-3-540-85230-8_3
[11] S. Chakravarty, A. Stavrou, and A. Keromytis, "Identifying proxy nodes in a Tor anonymization circuit," in Proc. IEEE Int. Conf. Signal Image Technol. Internet Based Syst. (SITIS '08), Nov. 30–Dec. 3, 2008, pp. 633–639.
[12] K. Zhang and X. Wang, "Peeping Tom in the neighborhood: Keystroke eavesdropping on multi-user systems," in Proc. USENIX Secur. Symp., 2009.
[13] X. Wang, D. Reeves, and S. F. Wu, "Inter-packet delay based correlation for tracing encrypted connections through stepping stones," in Proc. 7th Eur. Symp. Res. Comput. Secur., Oct. 2002, vol. 2502, ser. Lecture Notes in Computer Sci., Springer.
[14] G. Danezis, "The traffic analysis of continuous-time mixes," in Workshop on Privacy Enhancing Technologies, D. Martin and A. Serjantov, Eds., Berlin/Heidelberg/New York, May 2004, vol. 3424, ser. Lecture Notes in Computer Sci., pp. 35–50, Springer.
[15] J.-F. Raymond, "Traffic analysis: Protocols, attacks, design issues, and open problems," in Designing Privacy Enhancing Technologies, ser. Lecture Notes in Computer Sci., H. Federrath, Ed. New York: Springer, Jul. 2000, vol. 2009, pp. 10–29.
[16] S. Kadloor, X. Gong, N. Kiyavash, T. Tezcan, and N. Borisov, "A low-cost side channel traffic analysis attack in packet networks," in Proc. IEEE ICC 2010—Commun. Inf. Syst. Secur. Symp., Cape Town, South Africa, 2010 [Online]. Available: http://www.ifp.illinois.edu/~kadloor1/kadloorICC2010.pdf
[17] D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers. New York: Wiley, 2003 [Online]. Available: http://www.worldcat.org/isbn/9780471204541
[18] D. L. Donoho, A. G. Flesia, U. Shankar, V. Paxson, J. Coit, and S. Staniford, "Multiscale stepping-stone detection: Detecting pairs of jittered interactive streams by exploiting maximum tolerable delay," in Proc. 5th Int. Symp. Recent Advances in Intrusion Detection (RAID), 2002, pp. 17–35, Springer.
[19] P. R. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification and Adaptive Control. Upper Saddle River, NJ: Prentice-Hall, 1986.
[20] W. B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd ed. New York: Wiley, 2011.
[21] R. L. Cruz, "A calculus for network delay. I. Network elements in isolation," IEEE Trans. Inf. Theory, vol. 37, no. 1, pp. 114–131, Jan. 1991 [Online]. Available: http://dx.doi.org/10.1109/18.61109
[22] B. Hubert, T. Graf, G. Maxwell, R. Van Mook, M. Van Oosterhout, P. B. Schroeder, J. Spaans, and P. Larroy, Linux Advanced Routing & Traffic Control HOWTO, online publication, Apr. 2004 [Online]. Available: http://lartc.org/lartc.pdf
[23] A. Adas, "Traffic models in broadband networks," IEEE Commun. Mag., vol. 35, no. 7, pp. 82–89, Jul. 1997.
[24] X. Gong, N. Kiyavash, and N. Borisov, "Fingerprinting websites using remote traffic analysis," in preparation for a conference submission [Online]. Available: http://www.ifp.illinois.edu/~kadloor1/attackdescription.pdf
[25] H. Sakoe and S. Chiba, "Dynamic programming algorithm optimization for spoken word recognition," pp. 159–165, 1990.

Sachin Kadloor (S’07) received the Bachelor’s degree in electrical engineering from the Indian Institute of Technology, Madras, in 2007, and the Master’s degree from the University of Toronto, Canada, in 2009, where his research dealt with power allocation in selection-based cooperative cellular networks. Since September 2009, he has been working toward the Ph.D. degree at the University of Illinois at Urbana-Champaign. His current research interests are information theory pertaining to timing channels and issues in network security.

Xun Gong (S’11) received the Bachelor’s and Master’s degrees from the Department of Automation, Tsinghua University, Beijing, China, in 2007 and 2009, respectively. He is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. His research interests include information theory and signal processing with applications to online privacy and network anonymity.

Negar Kiyavash (S’06–M’06) received the B.S. degree in electrical and computer engineering from the Sharif University of Technology, Tehran, in 1999, and the M.S. and Ph.D. degrees, also in electrical and computer engineering, from the University of Illinois at Urbana-Champaign, in 2003 and 2006, respectively. From 2006 through 2008, she was a member of the Research Faculty in the Department of Computer Science and a Research Scientist at the Information Trust Institute, both at the University of Illinois at Urbana-Champaign. Her research interests are in information theory and statistical signal processing with applications to computer, communication, and multimedia security.

Parv Venkitasubramaniam (S’03–M’07) received the B.Tech. degree in electrical engineering from the Indian Institute of Technology, Madras, in 1998, and the M.S. and Ph.D. degrees in electrical engineering from Cornell University, Ithaca, NY, in 2005 and 2008, respectively. He is presently a P.C. Rossin Assistant Professor in the Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA. His research interests include security and anonymity in networks, information theory, distributed signal processing, and smart energy distribution. Dr. Venkitasubramaniam received the 2004 Leonard G. Abraham Award from the IEEE Communications Society and a Best Student Paper Award at the 2006 IEEE ICASSP.
