Heavy Traffic Queue Management Using Policy Based Approach and Network Calculus

S. Rajeev(1), Senior Member, IEEE, and K. V. Sreenaath(2)

(1) Dean, Research & Development, SNS College of Technology, Coimbatore
(2) Security Engineer, Motorola India Pvt. Ltd., Bangalore

email : [email protected], [email protected] Abstract—Traffic management includes queuing, buffer management and scheduling that are key in delivering network efficiency. Effective management of the network requires appropriate queuing algorithm at the active router. Heavy traffic is the rate at which the processor can work close to the rate of arrival of work. Policy based queue management gives the flexibility to choose and implement queuing algorithm dynamically suiting different requirements. Network calculus is based on the mathematical theory of diods and in particular the Min-Plus diod. It is developed for efficiently managing the flow problems encountered in networking. In this paper we present the real time implementation of policy based queue management for heavy traffic using IXP 1200[1] network processor and Ponder Policy Toolkit[2] with appropriate model for Network Calculus. Index Terms—Policy Management, Heavy traffic analysis, Network processor, RED queuing

I. INTRODUCTION

In heavy traffic, the processor idle time is small, because the difference between the arrival and the service curve is negligible. When the idle time of the processor is very small, queue management must be efficient enough that network performance remains high. Unfortunately, different queuing algorithms such as RED, REM, DropTail, SFQ and DRR suit networks with different needs. For example, DropTail is useful when the emphasis is on throughput alone, whereas RED and REM are useful when the emphasis is on both throughput and bandwidth utilization. To handle these multiple requirements, Policy Based Queue Management is employed. Policy is defined as "a definite goal, course or method of action to guide and determine present and future decisions" [3]. In general, policies can be seen as the plans of an organization to achieve its objectives. This may involve a set of rules to govern the behavior of the network and its components (resources, users, applications, etc.), and the specification of a set of actions to be performed.

II. HEAVY TRAFFIC CONDITION

Let Q(t) denote the size of the physical queue at real time t. The physical queue is parameterized by n, where the traffic intensity goes to unity as n → ∞, and Q_n(t) denotes the size of the nth member of this sequence at real time t. Owing to the small difference between the arrival and service rates, the queue builds up over time. The heavy traffic condition is

$\sqrt{n}\left(\lambda_{a,n}/\lambda_{d,n} - 1\right) \equiv b_n \to b$ as $n \to \infty$ ----- (1.1)

The traffic intensity is

$\rho_n = \lambda_{a,n}/\lambda_{d,n}$ ----- (1.2)

where $\lambda_{a,n}$ and $\lambda_{d,n}$ denote the arrival and service rates of the nth system.

If b < 0 in (1.1), so that the queues are barely stable, then for any moderate initial condition the queue size builds up slowly to an asymptotic average of O(√n). If b > 0, so that the queue is barely unstable, its size goes to infinity, but slowly, so that at time nt there are O(√n) packets queued. The basic arrival and service rates are usually very high, and it is more appropriate to scale the number queued, but not time. The parameter n then denotes the basic size or speed of the system, and the difference between arrival and service rate in real time is O(√n).

A. Multiple Arrival Streams of Different Rates

Heavy traffic may consist of multiple streams of different rates, where there is a finite number of independent input streams, denoted $A^{k,n}$, $1 \le k \le K$. It is assumed that the service time distributions do not depend on the arrival class. Let the kth arrival stream have interarrival times $\{a_{l,k,n},\ l < \infty\}$.

B. Frequent Arrivals

For some centering constants $\Delta_{a,k,n}$ that converge to $\Delta_{a,k} \equiv 1/\lambda_{a,k}$, $k \le K$, define

$\xi^{a,k,n}(t) = \frac{1}{\sqrt{n}} \sum_{l=1}^{\lfloor nt \rfloor} \left( \frac{a_{l,k,n}}{\Delta_{a,k,n}} - 1 \right)$ ----- (1.3)

For moderately frequent arrivals, the heavy traffic condition is given by

$\lim_n \sqrt{n}\left(\sum_{k=1}^{K} \frac{1}{\Delta_{a,k,n}} + \frac{1}{\Delta_{a,0,n}} - \frac{1}{\Delta_{d,n}}\right) = b$ ----- (1.4)

C. Bursty Arrivals

In applications where the interarrival or service intervals are correlated, the correlation usually arises from some specific aspect of the physical model, and not in the form of a process whose convergence to a Wiener process is a priori obvious. The traffic is bursty in the sense that the arrival rate and the service times vary with time.

D. Queuing Delay

Queuing delay depends largely on the rate at which traffic arrives at the queue, the transmission rate of the link, and the nature of the arriving traffic, that is, whether the traffic arrives periodically or in bursts. Let a be the average packet arrival rate (packets/sec), L the packet size (bits/packet), and R the transmission rate (bits/sec), so the average rate at which bits arrive at the queue is La bits/sec. Assume the queue is very big, so that it can hold an essentially infinite number of bits. The ratio La/R, called the traffic intensity, often plays an important role in estimating the extent of the queuing delay. If La/R > 1, the average rate at which bits arrive at the queue exceeds the rate at which bits can be transmitted from the queue; the queue then tends to increase without bound and the queuing delay approaches infinity. The nature of the arriving traffic matters when La/R ≤ 1. If packets arrive periodically, that is, one packet every L/R seconds, then every packet arrives at an empty queue and there is no queuing delay. On the other hand, if packets arrive periodically but in bursts, there can be a significant average queuing delay: within a burst, the first packet transmitted has no delay, while the nth packet transmitted has a delay of (n−1)L/R seconds. As the traffic intensity approaches 1, the average queuing delay increases rapidly.

E. Packet Loss

Queuing capacity depends greatly on the design of network devices. Because queue capacity is finite, packet delays do not really approach infinity as the traffic intensity approaches 1. Instead, a packet arriving at a full queue is dropped, and the fraction of lost packets increases as the traffic intensity increases. Performance at a node is therefore measured not only in terms of delay, but also in terms of the probability of packet loss.
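The quantities in subsections D and E are easy to compute directly; the sketch below (Python; the arrival rate and burst size are assumed values, while the 1040-byte packet size is taken from Section VII) evaluates the traffic intensity La/R and the (n−1)L/R delay pattern of a periodic burst:

L = 1040 * 8   # packet size in bits
R = 100e6      # transmission rate: 100 Mbps output link
a = 10_000     # assumed average arrival rate, packets/sec

intensity = L * a / R   # traffic intensity La/R; > 1 means unbounded queue growth
print(f"traffic intensity La/R = {intensity:.3f}")

# A burst of N packets arriving together: the n-th packet waits behind
# n-1 transmissions, so its queuing delay is (n-1)*L/R seconds.
N = 16
delays = [(n - 1) * L / R for n in range(1, N + 1)]
print(f"max delay {delays[-1] * 1e6:.0f} us, "
      f"average delay {sum(delays) / N * 1e6:.0f} us")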

III. IMPLEMENTATION TOPOLOGY

Fig. 1 shows the implementation topology of "Policy Based Real-Time Queue Management for Heavy Traffic", with 6 nodes connected to an active router S (the Policy Enforcement Point, PEP) and 10 flows each generating traffic. The topology consists of 6 connections of 100 Mbps each, connecting the nodes and the active router S. The active router in turn is connected to the node D, to which the sink is attached. All the traffic from the 6 sources is queued up in the active router S and then forwarded to the node D. The active router S is connected to a Policy Server, and policies for effective queue management of heavy traffic are stored in the Directory Server. The queue size is set to a maximum of 256, with an output link of 100 Mbps. Traffic is generated for 5 seconds at the source nodes. The routing of heavy traffic is done by an Intel IXP1200 [4] programmed as an active router. The real-time implementation can be separated into two modules: the IXP1200 module and the Policy Based Management module.

Fig. 1 Implementation Topology

IV. PACKET HANDLING IN THE IXP1200

The Intel IXP1200 performs packet reception and transmission in discrete units called m-packets, which are 64 bytes in length. This concept forms the basis for designing any application on the processor. The reception and transmission functions are handled by the Ready Bus Sequencer (RBS), an Intel proprietary unit containing a 16-instruction programmable engine. The primary function of the RBS is to poll the IX Bus Interface and report the status of incoming and outgoing packets. Control and Status Registers (CSRs) in the processor provide an interface for intimation and control of this unit.

A. Multi Micro Engine - Multiple Threads

This implementation model forms the core of IXP development. The developer should take into account proper interfaces for inter-microengine communication. Semaphores and mutex locks should also be implemented to prevent the RAW (Read After Write), WAW (Write After Write) and WAR (Write After Read) hazards commonly found in multiple pipelined microprocessors.

• Program Flow: Parallel
• Data Flow: Parallel
• Debugging: Very challenging
• Resource Utilization: Optimal
• System Throughput: Maximum

This model should be used only after careful planning and offline simulation, as real-time debugging is very challenging and not recommended. However efficient this model may seem, it may lead to serious instabilities if Inter Process Communication (IPC) protocols are not carefully designed.
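As a rough illustration of the m-packet concept (a Python sketch, not the actual IXP1200 microcode, which runs on the microengines), a frame moves through the system in fixed 64-byte units:

MPACKET = 64  # the IXP1200 receives and transmits 64-byte m-packets

def to_mpackets(frame: bytes) -> list[bytes]:
    """Split a frame into 64-byte m-packets; the last one may be short."""
    return [frame[i:i + MPACKET] for i in range(0, len(frame), MPACKET)]

units = to_mpackets(bytes(300))             # a 300-byte frame
print(len(units), [len(u) for u in units])  # 5 units: 64, 64, 64, 64, 44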

B. Multiple Single-Server Queues

In the multiple single-server queuing model, each incoming packet is placed in the individual queue of its server, processed by that server, and then departed to the destination. Fig. 2 shows the multiple single-server queuing system.

Fig. 2 Multiple single-server queueing system model
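A minimal sketch of this model (Python; the per-port dispatch rule and packet labels are assumptions for illustration):

from collections import deque
from typing import Optional

class SingleServerQueue:
    """One server with its own FIFO queue, as in Fig. 2."""
    def __init__(self) -> None:
        self.fifo: deque = deque()

    def enqueue(self, pkt: str) -> None:
        self.fifo.append(pkt)           # arriving packet joins this server's queue

    def service(self) -> Optional[str]:
        return self.fifo.popleft() if self.fifo else None   # FIFO departure

# One queue per server; each incoming packet is put in the queue of its server.
servers = [SingleServerQueue() for _ in range(3)]
for i in range(10):
    servers[i % 3].enqueue(f"pkt-{i}")  # assumed round-robin port dispatch
print([s.service() for s in servers])   # pkt-0, pkt-1, pkt-2 depart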

The heavy traffic model is the most important characteristic of present-day networks that should be handled and tried on any evaluation platform to verify the capabilities and limitations of the model. The heavy traffic queueing model consists of three fundamental functions:

• Packet Reception
• RED queuing analysis
• Packet Transmission

The Packet Reception unit should be designed to handle any type of data, arriving at any rate and with random inter-arrival times. This ensures the system's input stability under real-time conditions. Packets are received from the port buffer and put into the receive FIFO (RFIFO) element by polling the port for the arrival of packets. Once the packets are in the RFIFO, a control signal is given to a free thread, which receives the packet from the RFIFO and puts it in SDRAM for processing.

The RED queuing process handles the heavy traffic. There are three queues, one for each port, each serviced First In First Out (FIFO) [1]. RED works by tracking the average depth of a queue and dropping packets so that this average never exceeds a threshold [4]. The average is maintained as an exponential weighted moving average, typically computed using the characteristic equation of a low-pass filter. If adding a packet causes the average queue depth to exceed the minimum threshold, the packet is dropped with a certain probability, and as the average depth builds beyond the minimum threshold, packets are dropped with increasing probability. If the average queue depth hits the maximum threshold, all subsequent packets are dropped until it falls back below the maximum threshold. The packet is processed in the queue held in SDRAM memory.

The Packet Transmission unit is designed to transmit packets at the maximum line rate. This ensures maximum system throughput, provided the packet redirection unit can resolve and identify destination ports at the incoming packet rate. Packets from the process queue are transmitted to the corresponding port according to the value specified in the control word, which identifies the TFIFO element and the destination port. The port is locked until the full packet is transmitted and then released; otherwise multiple packets might access the same port at the same time, resulting in blocking of the task. Each process queue is serviced by a free thread in the transmit microengine.
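The RED logic described above can be summarized in a short sketch (Python; the thresholds and averaging weight are illustrative parameters, not those used on the IXP1200):

import random

class RED:
    """Random Early Detection over an EWMA (low-pass filtered) queue depth."""
    def __init__(self, min_th=64, max_th=192, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def admit(self, queue_len: int) -> bool:
        """Update the average depth and decide whether to enqueue the packet."""
        # Exponential weighted moving average of the instantaneous depth.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return True        # below the minimum threshold: never drop
        if self.avg >= self.max_th:
            return False       # above the maximum threshold: drop everything
        # Between thresholds the drop probability grows linearly with the average.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() >= p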

C. Buffer Management

A buffer management mechanism is employed to manage and protect the system buffer resources, providing low delay and low loss to diverse types of traffic, from steady-stream fixed-rate traffic to bursty traffic with a low long-term average rate [4]. Each port is provided with receive and transmit buffer space for packet storage before the packet is processed by a microengine and stored in SDRAM. Each port has transmit and receive ready thresholds that inform the microengine to move a packet only when the threshold value is reached. Buffer space can be increased to store more packets at high data rates, as otherwise buffer overflow may occur at the ports and cause errors; the buffer sizes at both the transmitter and the receiver are set according to the data rate. Packets from the receive buffer are transferred to SDRAM through the RFIFO, which consists of 16 elements of 10 quadwords each. After processing, a packet is transmitted to the port buffer through the TFIFO element.

V. NETWORK CALCULUS BASED QUEUE MANAGEMENT

Network Calculus is a set of recent developments that provide deep insights into flow problems encountered in networking. The application of Network Calculus to queue management under heavy traffic is ongoing research. The foundation of network calculus lies in the mathematical theory of dioids, and in particular the Min-Plus algebra [7]. The approach to buffer management through network calculus is as follows: we first derive the constraints on the queuing system and then apply the "Space Method Theorem" [7] to obtain the maximal and minimal values of the metrics. This yields an optimal model for queue management under heavy traffic, so that real-time queue management of heavy traffic is efficient.
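As a minimal sketch of this approach (Python; the token-bucket arrival curve α(t) = σ + ρt and the rate-latency service curve β(t) = R(t − T)+ are the standard curves of [7], and the numeric parameters are assumptions), the backlog and delay bounds that dimension the queue follow directly from the two curves:

def nc_bounds(sigma, rho, R, T):
    """Bounds for arrival curve a(t) = sigma + rho*t and service curve
    b(t) = R*max(t - T, 0), assuming stability (rho <= R).

    Backlog bound = max vertical gap   = sigma + rho*T
    Delay bound   = max horizontal gap = T + sigma/R
    """
    assert rho <= R, "queue must be stable"
    return sigma + rho * T, T + sigma / R

# Illustrative numbers: 256-packet token bucket of 1040-byte packets,
# 80 Mbps sustained rate, 100 Mbps server with 1 ms scheduling latency.
backlog, delay = nc_bounds(sigma=256 * 1040 * 8, rho=80e6, R=100e6, T=1e-3)
print(f"backlog <= {backlog / (1040 * 8):.0f} packets, delay <= {delay * 1e3:.1f} ms")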

VI. POLICY BASED MANAGEMENT

The components of the real-time Policy Based Management module are shown in Fig. 3. The Ponder Policy Toolkit is used as the Policy Server, Microsoft Active Directory for Windows 2000 [7] is used as the Directory Server, and the IXP1200 described in the previous section is used as the PEP. Communication between the IXP1200 and the Ponder Policy Toolkit uses the Common Open Policy Service (COPS) [8]; the Lightweight Directory Access Protocol (LDAP) [9] is used between the Ponder Policy Toolkit and the Active Directory. The Ponder Policy Toolkit performs the following functions:

• Retrieval of relevant policies created by the network administrator through the policy console, after resolving any conflicts with existing policies;
• Translating the policies relevant for each PEP (IXP) into the corresponding Policy Information Base (PIB) commands;
• Arriving at policy decisions from relevant policies for policy decision requests, and maintaining those decision states;
• Taking appropriate actions, such as deletion of existing decision states or modification of installed traffic control parameters in the PEP.
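The resulting decision step can be pictured as a simple mapping from the PEP-reported flags (described below) to a queuing algorithm. This is a hypothetical Python sketch of the selection logic only; in the implementation the decision is made by the Ponder policies of Table 1:

def select_algorithm(throughputflag: bool, packetlossflag: bool,
                     bandwidthflag: bool, queueutilizationflag: bool) -> str:
    """Hypothetical PDP decision mirroring the observations of Section VII."""
    if throughputflag and not (bandwidthflag or queueutilizationflag):
        return "DropTail"  # throughput alone: DropTail suffices
    if queueutilizationflag:
        return "RED"       # Fig. 7: RED utilizes the queue most effectively
    if bandwidthflag:
        return "REM"       # throughput plus bandwidth utilization
    if packetlossflag:
        return "RED"       # DRR/SFQ show the highest loss (Fig. 6)
    return "DropTail"

print(select_algorithm(True, False, True, False))  # -> REM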



Fig. 3 Policy Based Management Module

The Policy Management Console (PMC) is the entity responsible for creating, modifying or deleting policy rules or entries in the Active Directory Server. The LDAP protocol provides access to directories supporting the X.500 models without incurring the resource requirements of the X.500 Directory Access Protocol (DAP). It is specifically targeted at management applications and browser applications that provide read/write interactive access to directories. LDAP has no mechanism for notifying policy consumers of changes in the Active Directory Server; it is therefore the responsibility of the PMC to indicate changes in the Active Directory Server as and when required, using an internal event messaging service. The PMC provides a high-level user interface for operator input translation.

When heavy traffic occurs, the PEP requests the Ponder Policy Toolkit through COPS to make a decision on the relevant queue management algorithm. The Ponder Policy Toolkit takes inputs from the PEP such as throughputflag, packetlossflag, bandwidthflag and queueutilizationflag (mentioned in Table 2), fetches the corresponding policies from the Active Directory, and then decides which queuing algorithm is to be implemented in the PEP. The Ponder Policy Toolkit then enforces the suitable policy back to the IXP1200, which implements the appropriate queuing algorithm. Table 1 shows the policy specification for real-time queue management for heavy traffic. The Ponder Toolkit uses Java code for the implementation of the written policies: policies are automatically converted to Java code by the Ponder Toolkit, which is then compiled and interpreted.

Table 1: Policy Specification for Real Time Queue Management

inst oblig /Policies/QueuingPolicy {
    on      QueueImplementation(packet_loss_priority, throughput_priority,
                                bandwidth_priority, queue_utilization_priority);
    subject /PMAs/QueuingPMA;
    do      select_algorithm(packet_loss_priority, throughput_priority);
}

VII. RESULTS AND DISCUSSIONS

The simulation has been carried out for the topology shown in Fig. 1. All duplex links use Drop Tail at the source and have a queue limit of 1000. The TCP packet size was set to 1040 bytes. The throughput, packet loss and bandwidth utilization of each queueing algorithm were evaluated.

A. Test on Nodes / Traffic Load (Throughput and Packet Loss)

Output link bandwidth (link S to N): 100 Mbps
Queue size: 256

Fig. 4 Nodes Vs Throughput

Fig. 4 shows that as the traffic load increases, throughput also increases. The throughputs of Drop Tail, RED and REM show negligible variation. SFQ has the minimum throughput and shows variation as the traffic intensity is increased from low to high, due to jitter.

B. Test on Bandwidth Utilization and Throughput

Queue size: 256
Nodes: 6

Fig. 5 Bandwidth Vs Throughput

Fig. 5 shows that the throughput increases linearly as the bandwidth of the output link is increased; RED and Drop Tail give almost the same result.



Fig. 6 Bandwidth Vs Packet loss

Fig. 6 shows that packet loss decreases as the bandwidth is increased; DRR gives the maximum packet loss.

C. Test on Queue Utilization

Nodes: 6
Queue length: 256
Output link bandwidth: 100 Mbps



Fig. 7 Queue size Vs Time

Fig. 7 shows that the RED gateway utilizes the queue effectively, but there is considerable oscillation in the queue usage of Drop Tail and REM. DRR and SFQ use only 10% of the total queue size, and hence their packet loss rate is very high.

D. IXP1200 Results

Code optimization makes it possible to achieve high throughput with low queuing delay. The buffer space is set to 256 quadwords and each port is configured for a data rate of 100 Mbps [5]. SDRAM usage is around 606 Mb/s to 1020 Mb/s. Data rate, microengine usage and queuing delay are tabulated in Table I.

TABLE I: IXP RESULTS

Packet Type                     Size (B)  Rx (Mbps)  Tx (Mbps)  µEngine (%)  Exec. (MIPS)  Queue Delay (pkts)
Ethernet IP (1)                 64        300        299.77     37.2         123.6         1
Ethernet IP (1)                 64        600        598.25     44.5         148.8         3
Ethernet IP (2), variable       64        300        297.16     37.2         124.1         5
Ethernet IP (2), variable       128       600        592.14     45.0         149.4         18
E.IP/ATM TCP/IP (3)             64        300        298.23     37.9         126.0         3
E.IP/ATM TCP/IP (3)             64        600        597.74     43.1         143.0         6
E.IP/ATM TCP/IP (3), variable   64        300        247.17     37.8         125.6         66
E.IP/ATM TCP/IP (3), variable   64        600        376.00     44.8         149.6         60

The simulation results show how the various queuing algorithms behave under heavy traffic conditions and how far the queue is utilized toward its optimum value. In terms of throughput and packet loss, RED, Drop Tail and REM give almost equal results. Packet loss is highest in DRR, and throughput is independent of queue size.

VIII. CONCLUSION

Real-time queue management for heavy traffic using a policy based approach and network calculus was implemented and tested, along with heavy traffic analysis.

IX. ACKNOWLEDGEMENT

The authors wish to thank Intel Inc. for providing facilities to implement the work on routers constructed using Intel IXP1200 and IXP2400 network processors.

X. REFERENCES

[1] S. Floyd and V. Jacobson, "Random Early Detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397-413, Aug. 1993.
[2] M. Shreedhar and G. Varghese, "Efficient fair queueing using deficit round robin," in Proc. ACM SIGCOMM, 1995, pp. 231-242.
[3] J.-G. Chen, D. Sonnier and R. Munoz, "Implementing High-Performance, High-Value Traffic Management using Agere Network Processor Solutions," Network Processor Design, vol. 2, pp. 301-326, 2000.
[4] P. Gambhire, "Implementing QoS Mechanisms on the Motorola C-Port C-5e Network Processor," Network Processor Design, vol. 2, pp. 405-426, 2001.
[5] IXP1200 Programmer's Reference Manual.
[6] IXP1200 Development Tool User's Guide.
[7] J.-Y. Le Boudec and P. Thiran, Network Calculus: A Theory of Deterministic Queuing Systems for the Internet, Springer-Verlag, LNCS 2050, 2004.

XI. BIOGRAPHIES




Prof. S. Rajeev, Ph.D., is the Dean for Research and Development at SNS College of Technology, India. He has over 14 years of industrial and academic experience and is a Senior Member of the IEEE. His research interests are Policy Provisioning Systems, Network Processors and Distributed Computing Systems. He has over 50 international refereed publications to his credit.


K. V. Sreenaath is a Security Engineer working at Motorola India Private Limited. He has authored several conference and journal publications and has co-authored a chapter in a research handbook. His areas of interest include Security in Computing and Networks, Authentication Systems and Policy Based Systems.

