Proportional Bandwidth Sharing Using Bayesian Inference in SDN-based Data Centers

Purnima Murali Mohan, Dinil Mon Divakaran*, Mohan Gurusamy
Department of Electrical & Computer Engineering, National University of Singapore, Singapore
E-mail: [email protected], [email protected], [email protected]

Abstract—With the evolution of the software-defined networking (SDN) paradigm, traffic management in data center networks has become flexible and scalable. The existing OpenFlow-based rate-guaranteeing mechanism is inefficient, as it limits the rate of flows by dropping batches of packets to achieve the desired throughput. In this paper, we propose BASIS, a solution based on Bayesian inference for providing proportional Quality of Service (QoS) guarantees to tenants in a data center network. With BASIS, the bandwidth of a congested outgoing link is shared among the competing flows in proportion to the weights chosen by them. We use Bayesian inference to capture the history of flow arrival rates and their offered load using a single queue, and estimate the differential drop probabilities of flows in a way that respects the weights assigned to them on arrival. Unlike the rate-limiting approach, BASIS proactively drops a packet of a flow probabilistically to achieve the desired throughput and avoids dropping batches of packets. We evaluate the proposed solution on an emulated SDN platform and show that BASIS achieves the desired throughput with fewer packet drops than the existing approaches.

Index Terms—Software Defined Networks, OpenFlow, Data center, QoS, Bayesian Inference.

I. INTRODUCTION

The Software Defined Networking (SDN) paradigm enables dynamic and adaptive management of a network. Recently there has been immense interest in SDN-related research, as it brings to networks the benefits of programmability and easy management, leading to faster innovation. The separation of the control plane from the data plane reduces the complexity of the networking elements, while the centralized view enables easy management of the network by the controller. Due to its flexible and dynamic means to manage a network using simple APIs, SDN has become attractive for data centers, cloud networks, wide area networks, etc. OpenFlow [1], the first standard communication interface between the control and data planes of the SDN architecture, was initially launched for application experimentation in a campus network.

In new-generation Internet applications such as VoD and VoIP, QoS mechanisms are very important to prioritize and separate traffic of different applications. For example, a data center serving a live video streaming application will need a higher QoS priority level than a web server application to guarantee throughput. Though network QoS has been studied extensively, there is no well-known QoS mechanism that is widely deployed in data centers to provide proportional QoS guarantees. With proportional QoS guarantees, the bandwidth of a congested outgoing link in a (data center) network is shared among

* Dinil Mon Divakaran is currently with the A*STAR Institute for Infocomm Research (I2R).

the competing flows in proportion to the weights chosen by them. This concept generalizes weighted fair queuing, or the weighted version of the more practical and well-known deficit round-robin (DRR) scheduling [2]. Though schedulers (such as DRR) are a proven way to achieve proportional QoS guarantees, they often employ multiple queues. A system of static, pre-defined queues is inflexible; on the other hand, a system of dynamic queues is difficult to manage in high-throughput network devices such as switches and routers. Maintaining the simplicity of a single-queue system, our solution uses probabilistic packet dropping to deliver proportional QoS guarantees to the subscribed flows or tenants. The algorithm we develop samples arriving packets at a queue to estimate the drop probability of different flows, while serving packets in First In, First Out (FIFO) order.

Packet sampling is used in a congestion management algorithm called QCN (quantized congestion notification), which provides Layer-2 congestion notifications to end hosts [3]. The feedback is used by the end-hosts to adapt their rates accordingly. The work in [4] builds on top of QCN to provide approximate fairness by adjusting the feedback messages according to the desired rates. These works do not depend on packet drops to achieve the rates. While probabilistic dropping has been used in the well-known active queue management (AQM) policy called random early detection (RED) [5], RED did not differentiate flows, neither when estimating the drop probability nor when deciding to drop packets. Some works have used sampling to detect large long-lived flows [6], as well as to probabilistically drop and thereby punish large flows [7]. An AQM to achieve approximate fairness using a single FIFO queue was proposed in [8]. This work samples arriving packets to estimate the rates of flows, which are then used to estimate drop probabilities. The drop probability for a flow i is (1 − r̄/ri), where r̄ is the targeted fair rate and ri the current rate of the flow. The proposal in [9] extended this to a weighted fair queuing solution using differential drop over a single queue, by adding weights to the estimation of rates. However, the history of flow rates captured by these works is limited by the buffer size. A recent work [10] uses SDN capability to dynamically monitor and control the network for QoS routing. However, it uses multiple queues for differentiating traffic classes, whereas we deliver QoS guarantees to flows belonging to different classes using a single queue.

In this work, we propose BASIS (BAyeSian Inference based QoS provisioning for SDN). BASIS uses Bayesian inference to estimate drop probabilities that deliver rates proportional to the weights of subscribed flows in a network.

To the best of our knowledge, there exists no prior work that uses Bayesian inference to achieve proportional rate guarantees. The differential dropping mechanism in BASIS assumes TCP or TCP-friendly flows at the end-hosts, which respond to packet drops by reducing their congestion windows. Packets of a flow are dropped with the probability estimated for that particular flow, proactively, before the event of congestion, to deliver QoS guarantees. This avoids batches of packets being dropped, as in the case of the rate-limiting approach to guaranteeing throughput. As our approach proactively drops a packet of a flow that is more likely to congest the bottleneck link, it offers better Quality of Experience (QoE) to end users, with lower latency.

II. PROPORTIONAL QOS IN AN OPENFLOW-BASED DATA CENTER NETWORK

There are multiple ways in which quality of service (QoS) can be provided to flows of tenants in a data center network. The approach used in the currently well-established SDN protocol, OpenFlow, is to guarantee rates to flows by rate-limiting them. We carried out detailed experiments to study the rate-guaranteeing solution in OpenFlow [11]. This study revealed that the mechanism dropped batches of packets from the flows. There are two disadvantages with this: (i) TCP flows frequently switch from congestion avoidance to slow-start, resulting in inefficient use of bandwidth; (ii) dropping batches of packets is unnecessary, as a flow can be slowed down by dropping just one packet.

In this work, we focus on probabilistically dropping packets of flows to provide proportional QoS guarantees. The mechanism we develop does not require changes in packet headers or modifications to any existing protocols in the TCP/IP stack. We develop a Layer-2 differential drop mechanism, which coordinates with the OpenFlow controller [12] to provide proportional QoS guarantees to flows in a data center network. We achieve proportional QoS guarantees by dropping packets of flows probabilistically at a switching node (router). The estimated packet drop probability of a flow depends on its contribution to the system load, the weight (given as input by a tenant), and the current system load.

For a network, a set of weights W is defined. A tenant chooses one of these weights, say wi. The tenant informs the OpenFlow controller of the weight wi as well as the IP addresses of its VMs. All flows originating or terminating at these IP addresses will be assigned the weight wi. When a new flow arrives at a switching node, the node contacts the OpenFlow controller. Following the protocol, the controller returns the information required for routing the flow. In addition, the switch also fetches the weight associated with the flow (identified by the IP addresses of the flow). In general, a tenant can assign different weights to different flows.

III. BASIS: DESIGN AND ALGORITHM

In this section, we describe the detailed design of the proposed BASIS. With SDN, we collect statistics to estimate the contribution of every flow even before congestion. That is to say, we have prior knowledge of a flow's contribution to the load (the queue at the outgoing link), before the link becomes congested.
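The weight registration and lookup described in Section II (a tenant registers a weight and its VM IP addresses with the controller, and a switch fetches the weight of each new flow by its endpoint addresses) can be pictured as simple controller-side bookkeeping. The Python sketch below is ours; the class and method names are illustrative and are not a NOX or OpenFlow API.

class WeightRegistry:
    """Controller-side bookkeeping: tenants register a weight from the set W
    for their VM IP addresses; a switch queries the weight of a new flow by
    its endpoint addresses.  Names are illustrative, not a NOX/OpenFlow API."""

    def __init__(self, allowed_weights):
        self.allowed_weights = set(allowed_weights)   # the set W
        self.weight_of_ip = {}

    def register_tenant(self, weight, vm_ips):
        if weight not in self.allowed_weights:
            raise ValueError("weight must be chosen from the set W")
        for ip in vm_ips:
            self.weight_of_ip[ip] = weight

    def weight_for_flow(self, src_ip, dst_ip, default=1):
        # A flow inherits the weight of whichever endpoint is registered.
        return self.weight_of_ip.get(src_ip,
                                     self.weight_of_ip.get(dst_ip, default))

registry = WeightRegistry(allowed_weights={1, 2, 3, 4, 5})
registry.register_tenant(weight=5, vm_ips=["10.0.0.5"])
print(registry.weight_for_flow("10.0.0.5", "10.0.0.9"))   # -> 5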

At time t, we learn two instantaneous values: (i) the load at the queue at time t, and (ii) estimates of the contributions of different flows to this measured load. Using this prior knowledge and the current event, we use Bayes' theorem, as shown in Algorithm 1, to estimate the posterior probability that a flow f contributed to the current congestion, and thereby calculate the drop probability for that flow. The drop probability is calculated based on the estimated posterior probability and the weight wi initially assigned to that flow by the tenant.

A tenant chooses weights from a set of weights W and assigns them to its flows for differentiated services. The OpenFlow switch fetches the weight associated with a flow, based on the packet header information, by probing the OpenFlow controller. When the flow arrives at the switch, the Flow ID and IP address information is obtained from the packet header; by hashing these two fields, the flow is identified and the associated weight is fetched from the controller.

Let φ be a threshold for the load. Whenever the load is greater than φ, the system estimates the contribution of flows to the load. Let θ be another threshold that defines congestion; i.e., the system is considered congested if the load is greater than θ, where the load is the fraction of the queue (at the outgoing link) occupied.

A. Estimating contribution of flows and the prior probability

Contributions of flows to the load are estimated by sampling the arriving packets and storing information of the sampled packets in a buffer. We use probabilistic sampling; therefore only a fraction of the arriving packets will be 'hit'. This also means that the number of packets sampled from a large (long-lived) flow is likely to be greater than the number of packets sampled from small (short-lived) flows. Let β be the probability for sampling. Let B denote the buffer containing information (packet headers) of sampled packets. The buffer is taken up for processing only when it fills up. Once processed, the buffer is cleared of all information (that is, its size is set to zero). The buffer is also cleared at regular intervals (lest we take obsolete information into account); the interval is long enough to allow the buffer to fill up at 'medium' load. The contribution of each flow is estimated as the fraction of packets of the flow to the total number of packets in the buffer B.

Prior statistics on the load are collected, and whenever the load exceeds φ, the system estimates the contributions of different flows to this measured load. The probability of a flow f contributing to congestion is denoted as e(f). The average of e(f) over time is the prior probability of flow f, denoted as p(f). Averaging of e(f) is done whenever buffer B is full, using an exponential moving average. When the buffer is full, the prior probability p(f) for the flow f is updated using the current estimate e(f) as follows:

    p(f) = e(f),                        if α = 0;
    p(f) = α·p(f) + (1 − α)·e(f),       if 0 < α < 1;
    p(f) = p(f) (unchanged),            if α = 1.        (1)

When the weighting factor α = 0, the prior probability is updated only with the current estimate for each flow f. This setting of α will not be of much use for estimating the prior probability, since the past behavior of flow f is completely lost.

Algorithm 1 BASIS(P, Q)
 1: if l > φ then
 2:   if β > random(0,1) then
 3:     B ← B.append(FlowID(P));
 4:     if B.isFull() then
 5:       Compute p(fi) using Eq. 1;
 6:       Update(µl, σl);
 7:       B ← B.reset();
 8:     end if
 9:   end if
10:   if l > θ then
11:     Compute γ(fi) using Eq. 5;
12:     γ̄(fi) = Normalize(γ(fi));
13:     if γ̄(fi) > random(0,1) then
14:       Drop(P);
15:       return;
16:     end if
17:   end if
18:   Enqueue(P, Q);
19: end if

Similarly, for the case of α = 1, the prior probability is never updated.

Whenever buffer B is processed, the load/congestion l in the system is known. This allows us to estimate the distribution of the load for an estimated prior probability p(f) of a flow. Assume that, for a given prior probability, the distribution of load follows a Normal distribution. The more buffer instances we process to estimate a prior probability, the better the estimates of the mean load µl and its standard deviation σl. The values of µl and σl are updated with the current measurements whenever buffer B is full, using a numerically stable algorithm due to Knuth [13]. Once the distributions of load for various prior values are known, we can compute the likelihood probability.

Upon a packet arrival at the queue Q, the function in Algorithm 1 is called. The following steps are executed whenever l > φ.

Step 1: Let β be the probability of packet sampling. Using β, probabilistically decide whether to select the current packet P for the following steps.

Step 2: Find the flow to which the packet belongs by hashing the fields constituting the flow identification. Let the flow be f.

Step 3: Store the packet information in buffer B.

Step 4: If buffer B is full, perform the following actions:
(i) Find the contribution of all flows in the buffer. The contribution of each flow is estimated as the fraction of packets of the flow to the total number of packets buffered in B (assuming constant packet size). For each flow f, denote this as the current estimate e(f). Ignore flows that contribute less than the fractional value c.
(ii) Update the prior probability p(f) of flow f from previous estimates by averaging with the current estimate.
(iii) For each flow f, update the µl and σl of the load l using the current measurement.
(iv) Reset buffer B (clear the contents; set the buffer size to zero).
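Steps 1-4 above (sampling into B, the per-flow contribution e(f), the Eq. 1 moving average, and the numerically stable running update of µl and σl credited to Knuth [13], i.e., Welford's method) can be sketched in a few lines of Python. The data structures and names below are ours, not the authors' implementation.

from collections import Counter
import math

ALPHA = 0.3          # weighting factor of Eq. 1 (the value used in Section IV)
prior = {}           # p(f) per flow
load_stats = {}      # f -> (count, mean, M2) for the running mean/variance

def process_full_buffer(buffer_flow_ids, load, min_fraction=0.0):
    """Called when buffer B fills: update the priors p(f) via Eq. 1 and the
    per-flow running mean/std of the observed load l, then reset B."""
    counts = Counter(buffer_flow_ids)
    total = sum(counts.values())
    for f, n in counts.items():
        e = n / total                        # current estimate e(f)
        if e < min_fraction:                 # ignore negligible contributors
            continue
        # Eq. 1 with 0 < alpha < 1 (alpha = 0 keeps only e(f); alpha = 1
        # would leave p(f) untouched).
        prior[f] = ALPHA * prior.get(f, e) + (1.0 - ALPHA) * e
        # Welford/Knuth running mean and variance of the load, per flow.
        count, mean, m2 = load_stats.get(f, (0, 0.0, 0.0))
        count += 1
        delta = load - mean
        mean += delta / count
        m2 += delta * (load - mean)
        load_stats[f] = (count, mean, m2)
    buffer_flow_ids.clear()                  # reset B

def load_mean_std(f):
    count, mean, m2 = load_stats[f]
    return mean, (math.sqrt(m2 / count) if count else 0.0)

# Example: a full buffer of 64 sampled packets observed at load 0.7.
B = ["f1"] * 40 + ["f2"] * 24
process_full_buffer(B, load=0.7)
print(prior["f1"], load_mean_std("f1"))      # 0.625 (0.7, 0.0)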

Fig. 1: Bottleneck link scenario.

B. Estimating the drop probability of packets of a flow

The drop probability of a flow is estimated based on the current event (congestion) and the prior probability of the flow's contribution (to the load). This happens only at times of congestion, i.e., when the load is greater than θ. The actions taken on the arrival of a packet at time t, when the outgoing link is congested, are given below.

Step 1: Find the flow f to which packet P belongs.

Step 2: The prior probability of flow f's contribution to the system load is p(f). Note that the current load l > θ. Given p(l), the probability distribution of load for p(f), compute the posterior probability using Bayes' theorem:

    p(f|l) ∝ p(l|f)·p(f)    (2)

In the full form, the denominator is just a normalizing factor:

    p(f|l) = p(l|f)·p(f) / Σ_A p(l|f)·p(f)    (3)

where A is the set of active flows.

Step 3: Similarly, compute the posterior probability for all active flows, and normalize by their weights.
(i) Initially, each tenant chooses a weight wi, i ∈ {1, ..., n}, from a set of weights defined for the network:

    W = {w1, w2, ..., wn}    (4)

(ii) The tenant sends wi along with the IP addresses of its VMs to the OpenFlow controller.
(iii) When a flow f arrives at the switch, the switch fetches the weight wi associated with that flow using the IP address of the flow and uses it to calculate the drop probability of the flow, thereby differentiating flows of different tenants. For the flow f, the posterior probability from Step 2 is recalculated using its associated weight wi and denoted by γ(f):

    γ(f) = p(f|l) / wi    (5)

(iv) Normalize γ(f) to the range [0, 1] and denote it by γ̄(f).
(v) Probabilistically decide to drop the packet from flow f with drop probability γ̄(f).
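Steps 2 and 3 above reduce to a few lines of arithmetic: a Normal likelihood p(l|f) built from the per-flow load statistics of Section III-A, the normalized posterior of Eq. 3, the division by the weight in Eq. 5, and a final normalization into [0, 1]. The Python sketch below is ours; in particular, scaling by the maximum is only one plausible reading of the normalization step, and all names are illustrative.

import math

def normal_pdf(x, mean, std):
    std = max(std, 1e-6)                       # guard against a degenerate fit
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def drop_probabilities(load, prior, load_dist, weight):
    """prior: p(f) per active flow; load_dist: f -> (mean, std) of the load
    given f; weight: wi per flow.  Returns gamma-bar(f) for every flow."""
    # Eq. 2/3: posterior p(f|l) over the set A of active flows.
    unnorm = {f: normal_pdf(load, *load_dist[f]) * prior[f] for f in prior}
    z = sum(unnorm.values()) or 1.0
    posterior = {f: v / z for f, v in unnorm.items()}
    # Eq. 5: discount the posterior by the flow's weight ...
    gamma = {f: posterior[f] / weight[f] for f in posterior}
    # ... and normalize into [0, 1] to obtain the drop probability.
    g_max = max(gamma.values()) or 1.0
    return {f: g / g_max for f in gamma}

# Example: at load l = 0.9, the heavy, low-weight flow f5 is the likely
# culprit and gets (normalized) drop probability 1, while the high-weight
# flow f1 is barely touched.
probs = drop_probabilities(load=0.9,
                           prior={"f1": 0.2, "f5": 0.8},
                           load_dist={"f1": (0.6, 0.1), "f5": (0.9, 0.1)},
                           weight={"f1": 5, "f5": 1})
print(probs)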

IV. PERFORMANCE STUDY

A. Simulation Setting

We set up the network topology shown in Fig. 1, with m = 10 and a bottleneck link capacity of 100 Mbps, using Mininet¹ with OpenVswitch (OVS)² supporting OpenFlow v1.3.

TABLE I: Set of flows with increasing flow weights and their expected throughput.

Flow ID   Source Rate (Mbps)   Flow Weight   Expected throughput (Mbps)
Flow1     15                   1             6.67
Flow2     20                   2             13.33
Flow3     25                   3             20
Flow4     30                   4             26.67
Flow5     35                   5             33.33

TABLE II: Set of flows with decreasing flow weights and their expected throughput.

Flow ID   Source Rate (Mbps)   Flow Weight   Expected throughput (Mbps)
Flow1     15                   5             15
Flow2     20                   4             20
Flow3     25                   3             25
Flow4     30                   2             26.6
Flow5     35                   1             13.4
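Both expected-throughput columns follow from sharing the 100 Mbps bottleneck in proportion to the flow weights, capping each flow at its source rate and redistributing the excess among the remaining flows (the rule explained later in Section IV-B). The short Python sketch below, with function and variable names of our own choosing, reproduces the table values.

def expected_throughput(capacity, rates, weights):
    """Weighted share of a bottleneck link, capped by each flow's source
    rate; excess capacity is redistributed among the uncapped flows."""
    share = {f: 0.0 for f in rates}
    uncapped = set(rates)
    cap_left = capacity
    while uncapped and cap_left > 1e-9:
        w_sum = sum(weights[f] for f in uncapped)
        alloc = {f: cap_left * weights[f] / w_sum for f in uncapped}
        capped = {f for f in uncapped if share[f] + alloc[f] >= rates[f]}
        if not capped:                       # nobody hits the source-rate cap
            for f in uncapped:
                share[f] += alloc[f]
            break
        for f in capped:                     # cap these flows, free the excess
            cap_left -= rates[f] - share[f]
            share[f] = rates[f]
            uncapped.discard(f)
    return share

rates  = {"Flow1": 15, "Flow2": 20, "Flow3": 25, "Flow4": 30, "Flow5": 35}
table2 = {"Flow1": 5, "Flow2": 4, "Flow3": 3, "Flow4": 2, "Flow5": 1}
print(expected_throughput(100, rates, table2))
# Table II values: Flow1 15, Flow2 20, Flow3 25, Flow4 26.67, Flow5 13.33
# (with weights 1..5 instead, the output matches Table I).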

The OpenFlow switches S1 and S2 communicate with the SDN controller NOX³ through the OpenFlow protocol. We conduct the Bayesian-based experiments with the weighting factor α set to 0.3, which discounts older observations of the prior faster. This choice of α was made after analyzing different values of the weighting factor.

1) Performance metrics: We use the following three performance metrics to evaluate the proposed BASIS algorithm for proportional QoS in a data center network.
• Congestion window: In TCP, this value increases (linearly in congestion avoidance) until a packet loss is detected. The congestion window reflects the congestion between the sender and receiver and thereby provides a means to prevent link overload.
• Packet drop ratio: The ratio of the number of lost packets of a flow to the total number of packets sent.
• Average throughput: The average throughput of a flow. The available bandwidth of a bottleneck link has to be proportionally shared among multiple flows so that the expected throughput is realised for every flow.

2) Comparisons: We compare the performance of bandwidth sharing for the proposed BASIS with the following existing QoS mechanisms:
• Rate-limiter (RL)
• Priority Queues (PQ)

In the PQ method, different queues have different priorities; based on the weight chosen for a flow, the SDN controller assigns the flow to a queue with the corresponding priority. In the RL approach, whenever a flow arrives, the controller recalculates the rate-limiter values and reallocates bandwidth for all the flows, based on the residual bandwidth available and the weight of the new flow.

We evaluate the performance of these approaches in comparison with BASIS using two scenarios, shown in Table I and Table II. The former describes a scenario of five flows with increasing source rates and increasing weights; i.e., Flow1 has the lowest priority and Flow5 the highest. The latter presents a decreasing weight scenario with the same source rates as the former.

¹ "Mininet," http://mininet.org
² "OpenVswitch (OVS)," http://openvswitch.org/
³ "NOX," http://www.noxrepo.org
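To make the setting concrete, the Fig. 1 topology can be emulated with a short Mininet script. This is a minimal sketch on our part: the per-side host arrangement (one sender/receiver pair per value of m) and the controller address are assumptions, not the authors' original scripts.

#!/usr/bin/env python
"""Emulation sketch of the Fig. 1 bottleneck scenario (assumptions noted above)."""
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink
from mininet.node import RemoteController

M = 10                   # sender/receiver pairs (m = 10 in the text)
BOTTLENECK_BW = 100      # Mbps

class BottleneckTopo(Topo):
    def build(self):
        s1, s2 = self.addSwitch('s1'), self.addSwitch('s2')
        # Single 100 Mbps bottleneck link between the two OpenFlow switches.
        self.addLink(s1, s2, bw=BOTTLENECK_BW)
        for i in range(1, M + 1):
            self.addLink(self.addHost('src%d' % i), s1)
            self.addLink(self.addHost('dst%d' % i), s2)

if __name__ == '__main__':
    # The controller (NOX here) is assumed to listen on the default
    # OpenFlow port of localhost.
    net = Mininet(topo=BottleneckTopo(), link=TCLink,
                  controller=RemoteController)
    net.start()
    net.pingAll()        # basic connectivity check
    net.stop()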


Fig. 2: Increasing weight scenario: Congestion window behaviour for Bayesian based proportional bandwidth sharing.


Fig. 3: Increasing weight scenario: Congestion window behaviour for rate-limiting based bandwidth sharing.

B. Analysis of Results

1) Increasing weight scenario: Table I describes the increasing weight scenario. Here, Flow5 has the highest weight and hence the highest priority; i.e., the drop probability of a flow is inversely proportional to its weight. The expected throughput is calculated theoretically: the bottleneck link capacity is shared between the flows based on their weights, capped by their source rates. For example, Flow5 shares one-third of the bottleneck link capacity (100 Mbps), which is 33.33 Mbps, and Flow1 shares 1/15 of the bottleneck link capacity, which is 6.67 Mbps. For Flow5, had the source rate been 30 Mbps, the expected throughput would have been capped at this source rate and the excess 3.33 Mbps would be shared by the other flows based on their weights. We will see this capping on the source rate in Section IV-B2.

Fig. 2 and Fig. 3 show the congestion window plots for the increasing weight scenario using BASIS and RL, respectively. It is observed in Fig. 3 that the congestion windows of Flow1 and Flow2 repeatedly fall back to slow-start, down to one and two MSS (Maximum Segment Size). The RL mechanism drops consecutive packets from these low-priority flows, leading to timeouts. Consequently, the TCP flows switch from the stable congestion-avoidance phase to the slow-start phase. The packet drop plot in Fig. 5 supports this observation, where batches of packets are dropped in the 100-150 s window. Such an approach that drops batches of packets is not just unnecessary, but also inefficient. Besides, when multiple packets are dropped, all of them have to be retransmitted, leading to inefficient use of the constrained bandwidth.


Fig. 4: Increasing weight scenario: Packets dropped for Bayesian based proportional bandwidth sharing.

Fig. 7: Increasing weight scenario: Comparing packet drops.


Fig. 5: Increasing weight scenario: Packets dropped for rate-limiting based bandwidth sharing.


Fig. 8: Decreasing weight scenario: Congestion window behaviour for Bayesian based proportional bandwidth sharing.


Fig. 6: Increasing weight scenario: Throughput comparison.

Fig. 9: Decreasing weight scenario: Congestion window behaviour for rate-limiting based bandwidth sharing.

With TCP, the source rate of a flow can be decreased even with a single packet drop. In BASIS, packets belonging to a flow are dropped probabilistically, and this is done proactively, by estimating the contribution of flows towards the congestion that is building up. Fig. 4 demonstrates that only a few packets are dropped with the Bayesian approach. The congestion window behaviour for PQ is similar to that of RL, but the congestion windows of the flows never fall to the initial window size; they continue in the congestion-avoidance phase.

The average throughput and the packet drop ratio of RL and the PQ-based QoS mechanism are compared with BASIS in Fig. 6 and Fig. 7, respectively. It is observed that the priority queue is not able to achieve the required throughput; in particular, it over-penalizes the flows with lower weight (or priority), which also have the lowest source rates. In the RL approach, since accurate rates are set, it achieves the required throughput as desired, but results in batches of packet drops, as shown in Fig. 5 and Fig. 7. The Bayesian-based QoS mechanism achieves the desired throughput with 96% fewer packet drops than RL and 94% fewer packet drops than PQ. The RL approach has an inherent disadvantage: the rates need to be recalculated whenever flows join or terminate, requiring the controller to communicate with the switches frequently. Also, while BASIS samples a single queue to provide proportional QoS, PQ requires multiple queues for flows with different priorities.

2) Decreasing weight scenario: We also evaluate our algorithm in a decreasing weight scenario, as shown in Table II. In this scenario, with the same source rates, Flow1 has the highest weight (or priority) of five. Thus, Flow1 ideally shares one-third of the bottleneck link capacity, which is 33.33 Mbps, but is capped by its source rate of 15 Mbps.


Ideally, the excess share of Flow1, 18.33 Mbps, is shared among the other flows based on their weights. Fig. 9 shows the congestion window plot for the decreasing weight scenario using RL. Unlike the increasing weight scenario, the congestion windows do not go into slow-start; however, they drop continuously for all the flows with packet drops. In Fig. 8, the congestion window plot using BASIS, it is observed that Flow1, with the highest weight, has a steadily increasing window, whereas the congestion window of Flow5 drops with every packet drop, as Flow5 has the highest drop probability due to its highest source rate and lowest weight among the flows.

Fig. 10 shows the average throughput obtained for each flow in the network for the RL and PQ based QoS mechanisms in comparison with BASIS. The deviation of throughput from the expected value is larger for PQ. In particular, for Flow5 with the least priority, the PQ method provides 28% more throughput than the expected value. This is because Flow5 has a higher source rate, and PQ, unlike the Bayesian approach, has no mechanism to dynamically determine the contribution of a flow to congestion. Fig. 11 shows the packet drop ratio for all the flows in the network using RL and PQ in comparison with BASIS. It is observed that BASIS provides the desired proportional throughput with 97% fewer packet drops than RL and 96% fewer packet drops than the PQ-based proportional QoS mechanism.




Fig. 10: Decreasing weight scenario: Throughput comparison.

Fig. 11: Decreasing weight scenario: Comparing packet drops.


V. CONCLUSIONS

In this paper, we proposed BASIS, an efficient algorithm to provide proportional QoS guarantees to the tenants of an SDN-enabled data center, based on Bayesian inference. The most interesting aspect of our solution is that it can be used to provide various bandwidth sharing solutions, such as fair queueing, weighted fair queueing, and priority scheduling, just by varying the flow weights and hence their drop probabilities. On the other hand, RL based on OpenFlow is not flexible, as it does not allow an operator to implement many other QoS-guaranteeing schemes. The results show that RL drops batches of packets upon congestion to provide proportional QoS guarantees, whereas our algorithm achieves the desired throughput by probabilistically dropping a packet of a flow that is more likely to congest the bottleneck link, even before the event of congestion. PQ, on the other hand, drops packets when the queues are full and also does not guarantee the desired throughput. The proposed BASIS guarantees the desired throughput in SDN-based data centers with 97% fewer packet drops than RL and 96% fewer packet drops than PQ. Thus, BASIS delivers proportional QoS guarantees with fewer packet drops, enabling better QoE for end users.

VI. ACKNOWLEDGEMENT

This research work was supported by MoE ACRF Tier 1 FRC Grant No: R-263-000-C04-112, NUS, Singapore.

REFERENCES


[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," ACM CCR, vol. 38, no. 2, pp. 69–74, Mar. 2008.
[2] M. Shreedhar and G. Varghese, "Efficient Fair Queueing Using Deficit Round Robin," SIGCOMM Comput. Commun. Rev., vol. 25, no. 4, pp. 231–242, Oct. 1995.
[3] R. Pan, B. Prabhakar, and A. Laxmikantha, "QCN: Quantized Congestion Notification," 2007, http://www.ieee802.org/1/files/public/docs2007/au-prabhakar-qcn-description.pdf.
[4] A. Kabbani, M. Alizadeh, M. Yasuda, R. Pan, and B. Prabhakar, "AF-QCN: Approximate Fairness with Quantized Congestion Notification for Multi-tenanted Data Centers," in High Performance Interconnects (HOTI), IEEE 18th Annual Symposium on, Aug 2010, pp. 58–65.
[5] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Trans. Netw., vol. 1, no. 4, pp. 397–413, Aug. 1993.
[6] C. Estan and G. Varghese, "New Directions in Traffic Measurement and Accounting: Focusing on the Elephants, Ignoring the Mice," ACM Trans. Comput. Syst., vol. 21, no. 3, pp. 270–313, Aug. 2003.
[7] D. M. Divakaran, "A Spike-detecting AQM to Deal with Elephants," Comput. Netw., vol. 56, no. 13, pp. 3087–3098, Sep. 2012.
[8] R. Pan, L. Breslau, B. Prabhakar, and S. Shenker, "Approximate Fairness Through Differential Dropping," SIGCOMM Comput. Commun. Rev., vol. 33, no. 2, pp. 23–39, Apr. 2003.
[9] F. Lu, G. Voelker, and A. Snoeren, "Weighted fair queuing with differential dropping," in INFOCOM, Proceedings IEEE, March 2012, pp. 2981–2985.
[10] D. Adami, L. Donatini, S. Giordano, and M. Pagano, "A network control application enabling software-defined quality of service," in Communications (ICC), 2015 IEEE International Conference on, June 2015, pp. 6074–6079.
[11] P. Mohan, D. Divakaran, and M. Gurusamy, "Performance study of TCP flows with QoS-supported OpenFlow in data center networks," in 19th IEEE International Conference on Networks (ICON), Dec 2013, pp. 1–6.
[12] Open Networking Foundation, OpenFlow Switch Specification, Oct. 2014, https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.4.0.pdf.
[13] D. E. Knuth, The Art of Computer Programming, Volume 2 (3rd Ed.): Seminumerical Algorithms. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1997.

