DEVELOPMENT OF A CONGESTION AVOIDANCE MODEL FOR DATA NETWORKS

By ABU, AMUDA JAMES

CSC/2001/010

BEING SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, FACULTY OF TECHNOLOGY, OBAFEMI AWOLOWO UNIVERSITY, ILE-IFE,

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE AWARD OF A BACHELOR OF SCIENCE (HONOURS) DEGREE IN COMPUTER SCIENCE AND ENGINEERING

OCTOBER, 2007

CERTIFICATION

This is to certify that this research work was carried out by ABU Amuda James, CSC/2001/010, under my supervision in partial fulfillment of the requirements for the award of a Bachelor of Science (B.Sc. Hons) degree in Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife.

…………………………….. Mr A. A. Akintola

………………………… Date

Project Supervisor

……………………………… Dr (Mrs) H. A. Soriyan Head of Department

…………………………. Date

ACKNOWLEDGEMENTS

My profound gratitude goes to the Almighty God, who has proved Himself faithful throughout the course of this research work. I acknowledge the assistance of my respectable, noble and competent supervisor, Mr. A. A. Akintola; your contribution to the success of this work is highly appreciated. My profound gratitude goes to my loving and caring mother, Mrs. R. B. Abu, who believes wholeheartedly that education is the best legacy. I also want to acknowledge the support and care of the love of my life, Olabisi Coker, who always leaves me better than she finds me. I am particularly grateful to the families of Akinbosoye, Fadayomi, Ewuola, and Coker, and equally grateful to my sister, Zainab, and my uncle, Mr S. Momoh, for their support in the course of this work. The contributions of Akindele Bankole, Ifeoluwa Ewuola, and other friends are highly appreciated. I want to specially thank all the lecturers, as well as the non-academic staff, of the Department of Computer Science and Engineering for their efforts. Lastly, I am very grateful to all others, not mentioned here by name, who contributed in various ways to the success of this work.

TABLE OF CONTENTS

Title Page
Certification
Acknowledgements
Table of Contents
List of Figures
Abstract

CHAPTER ONE
1.0 INTRODUCTION
1.1 Overview
1.2 Scope of Research
1.3 Research Justification
1.4 Research Objectives
1.5 Research Methodology
1.6 Organization of the Report

CHAPTER TWO
2.0 LITERATURE REVIEW
2.1 Background
2.2 Congestion in Data Networks
2.3 Congestion Control Models at the Gateways
2.3.1 DECbit model
2.3.2 Random Early Detection model
2.3.2.0 Variations of RED model
2.3.2.1 Gentle RED model
2.3.2.2 Flow RED model
2.3.2.3 RED with Preferential Dropping model
2.3.2.4 Adaptive RED model
2.3.2.5 Stabilized RED model
2.3.2.6 Dynamic RED model
2.3.3 CHOKe model
2.3.4 BLUE model and the Stochastic Fair BLUE model
2.3.5 Random Exponential Marking model
2.3.6 Virtual Queue model
2.4 Overview of the Proposed Model

CHAPTER THREE
3.0 RESEARCH METHODOLOGY
3.1 Introduction
3.2 Model Development
3.2.1 TCP hosts
3.2.2 Data channels
3.2.3 Gateways
3.2.4 Dynamic routing table
3.3 The Analysis of the Model
3.3.1 The DRED model
3.3.2 Extension of the design and implementation of the DRED model
3.3.3 Deployment of the DRED model in multiple paths data networks
3.3.4 Introduction of the dynamic routing table at each of the gateways
3.3.4.1 Dynamically updating the routing table
3.3.4.2 Determination of packets' destination address in the routing table
3.4 Assumptions
3.5 Limitations

CHAPTER FOUR
4.0 SYSTEM SIMULATION AND RESULT ANALYSIS
4.1 Introduction
4.2 Simulation Tool
4.3 Simulation of the Extended Dynamic RED Model
4.4 Performance Evaluation of the Extended Dynamic RED Model
4.4.1 The average queue and current queue
4.4.2 The probability of dropping packet

CHAPTER FIVE
5.0 SUMMARY, CONCLUSION AND RECOMMENDATION
5.1 Brief Summary
5.2 Conclusion
5.3 Recommendation and Future Work

References
Appendix

LIST OF FIGURES

Figure 2.1: A Typical Internet Network showing the Transmission of Packets through Multiple Hops, Gateways and Links
Figure 2.2: Marking Probability Graph for Original RED
Figure 2.3: Marking Probability Graph for Gentle RED (a variant of the original RED)
Figure 3.1: Diagrammatic Representation of the Network Model for Implementing the DRED Algorithm in Multiple Paths Transmission Networks
Figure 3.2: The Possible Paths from which the Available Path(s) at an Instance Can Be Chosen for Each Gateway
Figure 4.1: The Graph of the Average Queue and Current Queue against Time for Gateway 1
Figure 4.2: The Graph of Probability of Dropping Packet against Average Queue for Gateway 1
Figure 4.3: The Graph of Probability of Dropping Packet against Time for Gateway 1
Figure 4.4: The Graph of the Average Queue and Current Queue against Time for Gateway 2
Figure 4.5: The Graph of the Average Queue and Current Queue against Time for Gateway 3
Figure 4.6: The Graph of the Average Queue and Current Queue against Time for Gateway 4
Figure 4.7: The Graph of the Average Queue and Current Queue against Time for Gateway 5
Figure 4.8: The Graph of Probability of Dropping Packet against Time for Gateway 2
Figure 4.9: The Graph of Probability of Dropping Packet against Time for Gateway 3
Figure 4.10: The Graph of Probability of Dropping Packet against Time for Gateway 4
Figure 4.11: The Graph of Probability of Dropping Packet against Time for Gateway 5
Figure 4.12: The Routing Pattern Used in the Extended Model

ABSTRACT

The development of an effective and efficient congestion avoidance mechanism has become imperative as the Internet has experienced explosive growth over the years, both in the number of users and in the complexity and size of the applications run on it. The Dynamic Random Early Detection (DRED) model is capable of avoiding congestion only in a single data path transmission network, whereas the conventional Internet does not operate on a single data path. Hence the need arises to deploy the DRED model in multiple data path transmission networks. The model developed here, called the "Extended DRED" model, avoids congestion in multiple data path transmission networks; this is the contribution of this work to the original DRED model, and it is achieved by incorporating the routing operation of the gateways into the original DRED model. In the Extended DRED model, five gateways were used: two of the five are connected to the packet sources while the remaining three are connected to the destination hosts. Packets leaving either gateway 1 or 2 are routed to their destination address by hopping from one gateway to a directly connected gateway. The average queue of each gateway responds effectively to the increasing queue. The probability with which a packet is marked for dropping is a function of the average queue, and it responds early enough as the queue exceeds the minimum threshold. In this model, the probability of congestion occurring at each gateway other than the initial source gateways reduces as the number of gateways increases. As a result, the transmission of data from source to sink is achieved at low delay and high throughput. The developed model is capable of avoiding congestion at gateways, thereby avoiding global synchronization and oscillation in the network.

CHAPTER ONE

INTRODUCTION

1.1 Overview

Over the past two decades, data networks have undergone notable advancement in complexity, degree of traffic loading and related dimensions, and congestion has been a major setback to this advancement. Since the number of people using the Internet increases annually, alongside the growing size and frequent use of real-time applications, especially media applications, a great challenge has been posed to data networks. In this research work, emphasis is laid on avoiding congestion at the gateways rather than on congestion recovery. Briefly, a congestion avoidance scheme allows a network to operate in the region of low delay and high throughput (Jain et al, 1997); such schemes can prevent a network from entering the congested state in which packets are lost. In the current Internet, the gateway is the predominant level for congestion detection. Akintola et al (2005) state that in the current Internet, the TCP (Transmission Control Protocol) transport protocol detects congestion only after a packet has been dropped at the gateway. Several mechanisms have been developed over the years; the original RED (Random Early Detection) gateway is one of the most prominent congestion avoidance schemes in a typical internet architecture¹. In order to bring this challenge under control, the need arises to develop efficient and intelligent congestion control mechanisms at the gateway, because congestion can be easily detected at the gateways, i.e. at the network layer.

¹ An internet architecture is viewed as an interconnection of autonomous computers or devices, or as a group of networks that communicate with each other. Each network has a gateway that connects it to another network, allowing communication between two networks with different network protocols.

Congestion in computer networks is becoming a significant problem due to the increasing use of the networks, as well as the increasing mismatch in link speeds caused by the intermixing of old and new technology (Jain et al, 1997). Modern technological advancements in data networks, such as local area networks and fibre-optic LANs, have produced a notable increase in the bandwidths of computer network links. On the other hand, these modern technologies must coexist with old, low-bandwidth media such as twisted pair. This heterogeneity has resulted in a mismatch of arrival and service rates at intermediate nodes in the network, thereby causing increased queuing and congestion (Jain et al, 1997). Congestion is a threat to any data network, and this "monster" must be checked. Over the past few years, researchers have produced several congestion avoidance models for data networks. The problems of congestion are so severe in computer networks that it is not uncommon to find internet gateways dropping as much as 10% of incoming packets because of local buffer overflows. Investigations of these problems have shown that much of the cause lies in the transport protocol implementations and not in the protocols themselves. According to Jacobson and Karels (1988), the 'obvious' ways to implement a window-based transport protocol can result in exactly the wrong behaviour in response to network congestion. Thus, with the advancement of high-speed data networks, it is of paramount importance to have congestion avoidance mechanisms that keep throughput high but average queue sizes low.

1.2 Scope of Research

The problem of congestion persists until a suitable, efficient and powerful mechanism is deployed on the network. As stated earlier by Jain and Ramakrishnan (1998), congestion is said to occur in the network when resource demands exceed capacity and packets are lost due to too much queuing in the network. In this research work, an effort is made to develop a model that can avoid congestion at gateways in multiple paths data transmission networks. The work does not go beyond the detection of transient congestion; deciding which connections to notify when congestion occurs is not examined. The scope of this work also includes extending the number of gateways in the Dynamic RED model from two to five (i.e. creating multiple paths for the transmission of data in the network).

1.3 Research Justification

The explosive growth of data and computer networks over the years implies that the number of network gateways increases proportionally. Existing models for congestion avoidance at the gateway fail to control congestion among gateways in multiple paths data transmission networks, and the Internet is a good example of a group of networks with more than two gateways. As a result, the existing DRED model needs to be improved to work efficiently and reliably in networks of more than two gateways. Since congestion problems are prevalent in the Internet and are best detected at the gateways, the need to develop a robust and efficient congestion avoidance model for data networks, one that detects incipient congestion and randomly drops packets before the buffer is full, has become urgent.

1.4 Research Objectives

One of the goals of this work is the development of a congestion avoidance model for multiple paths data transmission networks with several gateways, as opposed to the previous implementation of the DRED model for single path data transmission networks of just two gateways. The objectives of this work are to:

i. develop an enhanced model to avoid congestion among gateways with multiple paths data transmission; and

ii. simulate the model developed in (i) above.

1.5 Research Methodology

i. An in-depth study of the existing models for congestion avoidance in data networks.

ii. Development of an enhanced model that avoids congestion in multiple paths data transmission networks (a typical internet architecture) by extending the number of transmission paths in the existing DRED model from single to multiple.

iii. Simulation of the model developed in (ii) above using appropriate simulation software (MATLAB 7.0).

iv. Performance evaluation of the developed system in multiple paths data transmission networks.

1.6 Organization of the Report

The project comprises five chapters. Chapter one is the introductory part, covering the background of the problem, an overview of the project, the problem definition (scope of research), the research justification and objectives, and the research methodology. Chapter two reviews the various congestion avoidance mechanisms at gateways that have been developed to date and the setbacks associated with them. Chapter three describes in detail the research methodology, which involves the model design considerations and the methodology for deploying DRED in networks of more than two gateways. Chapter four presents in detail the simulation and analysis of the DRED algorithm implemented on networks of more than two gateways, carried out using MATLAB. Chapter five presents the conclusion of the entire project work as well as recommendations for future work.

CHAPTER TWO

LITERATURE REVIEW

2.1 Background

Groups of computer data networks are the building blocks of the Internet. A network can be viewed as an interconnection of autonomous computer systems set up to share resources such as remote files and information, remote equipment and so on. Nowadays the world is seen as a global village as a result of a network (the Internet) that crosses international boundaries; such a network is described as a global interconnection of computer networks. According to the Wikipedia website (2007), the Internet is a specific internetwork, consisting of a worldwide interconnection of governmental, academic, public, and private networks based upon the Advanced Research Projects Agency Network (ARPANET) developed by ARPA of the U.S. Department of Defense; it is also home to the World Wide Web (WWW), and is written "Internet" with a capital 'I' to distinguish it from other generic internetworks. Jacobson (1998) states that the Internet is an arbitrary mesh-connected network. Walrand (1998) affirms that the prevalent paradigm used in the design and operation of the current Internet is "keep it simple", and that the design principle of TCP is "do not ask the network to do what you can do yourself". As the network expanded in complexity, traffic load and the level of Internet-based applications, the number of Internet users followed the same trend, growing even more explosively than the available resources, even at the gateways (devices that serve as links among heterogeneous networks in the Internet). A path from a source to a destination may have multiple hops through several gateways and links. Paths through the Internet are generally heterogeneous (homogeneous paths also exist and experience congestion); that is, links, gateways and hosts may be of different speeds or may be devoting only part of their processing power to communication-related activity. Throughout this advancement, and even today, it is increasingly important to develop gateway mechanisms capable of keeping the throughput of a network high while maintaining sufficiently small average queue lengths (Haider et al, 2005). This brought the need to design promising congestion control mechanisms. It should be noted that the number of users placing demand on the network is not limited by any explicit mechanism; no reservation of resources occurs, and transport-layer set-ups are not disallowed due to lack of resources (Akintola et al, 2005). When packets are sent over the Internet from source to sink, the path the data traverse may have many hops and pass through many gateways and links before the packets reach their destination. The source host may transmit at a speed different from that of the destination, and in hopping across the network the packets may pass through multiple gateways and links of different buffer sizes and speeds. Figure 2.1 depicts this kind of scenario. Akintola et al (2005) affirm that the buffers for storing packets flowing through Internet gateways are not infinite; the nature of the Internet protocol is to drop packets when these buffers overflow (Jacobson, 1988).

2.2 Congestion in Data Networks

Basically, the problem of congestion in data networks, including the Internet, cannot be overemphasized. This is due to the fact that the number of Internet users is unlimited compared to the limited resources of the networks such as transmission links, processors and space used for buffering. Congestion is best detected or controlled at the gateways.

Figure 2.1: A Typical Internet Network showing the Transmission of Packets through Multiple Hops, Gateways and Links

The technique used by gateways for transferring data from one host computer to another is called routing (Haider et al, 2005). The gateway congestion problem arises when the demand for one or more of the gateway's resources exceeds the capacity of that resource. Operationally, uncongested gateways operate with little queuing on average, where the queue is the waiting line for a particular resource of the gateway (Akintola et al, 2005). According to Kleinrock (1979), a commonly used quantitative definition is that a resource is congested when its operating point lies beyond the point of maximum power, where power is defined as the ratio of throughput to delay. When congestion occurs (when the buffer is full), packets are dropped; very low or zero throughput and very long delays in the network are among the consequences. In addition, a congestion signal may be sent to the source to reduce the amount of data it sends. Dropping many packets at the same time causes global synchronization (Koutsimanis and Park, 2005), and congestion also causes instability in the Internet. As a result of these problems facing computer networks, networking researchers have developed various congestion control algorithms (mostly implemented at the transport layer). In a nutshell, congestion in data networks can be prevented, avoided and recovered from. Congestion control mechanisms are basically classified into three (Akintola et al, 2005):

a. Congestion Prevention: this mechanism guards against congestion at all times, i.e. congestion never occurs.

b. Congestion Avoidance: this mechanism detects transient congestion, and packets are dropped randomly before the buffer overflows.

c. Congestion Recovery: this mechanism is invoked when congestion occurs and brings transmission back to its operating state. Without congestion recovery, the network may cease to operate entirely (zero throughput and a large network delay for a congested network).

For a very long time most congestion control mechanisms concentrated on recovery; the Internet, including TCP, its main transport protocol, has largely operated on congestion recovery. The argument is not that congestion avoidance should replace congestion recovery; the two should rather be seen as complementary, because when the buffer overflows and congestion occurs, a mechanism is still needed to bring packet transmission back to its operating state. A congestion avoidance mechanism, by contrast, is one that signals incipient congestion. In most avoidance models this is achieved by setting a minimum and a maximum threshold. The average queue size is calculated and compared with both thresholds. If the average queue size is below the minimum threshold, the network is safe for packets to be transmitted through it. When the average queue size is between the minimum and maximum thresholds, the buffer is near full and packets are dropped dynamically, with a probability proportional to the average queue size. This is part of the achievement of DRED, a variant of the original RED (Akintola et al, 2005). The congestion control mechanisms of TCP have been used over the last decade to adaptively control the rates of individual connections sharing IP network links (Koutsimanis and Park, 2005). However, TCP congestion control has drawbacks: TCP sources reduce their rate only after detecting packet loss due to queue overflow.
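The threshold comparison described above can be illustrated with a short sketch. The averaging weight and the state names below are illustrative assumptions, not values taken from the text:

```python
def update_avg(avg, q, w=0.002):
    """Exponentially weighted moving average of the queue length;
    w is an illustrative weight, not a value from the text."""
    return (1 - w) * avg + w * q

def region(avg, min_th, max_th):
    """Classify the gateway state against the two thresholds."""
    if avg < min_th:
        return "safe"        # transmit normally
    elif avg <= max_th:
        return "avoid"       # drop arriving packets probabilistically
    else:
        return "congested"   # buffer effectively full
```

Each arriving packet would first update the average via `update_avg` and then act according to the region returned.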
Floyds and Jacobson, (1993) states that in the current Internet, the TCP (Transmission Control Protocol) transport

11 protocol detects congestion only after a packet has been dropped at the gateway and however, it would be obviously undesirable to have large queues (possibly on the order of a delay-bandwidth product) that were near full much of the time; this would significantly increase the average delay in the network. Variants of TCP congestion control mechanisms have been developed over the years. A renowned one is the TCP Reno (Fast Retransmit and Fast Recovery) model which has proved false and thus Reno TCP has a poor performance (Jacobson, 1988). Others are TCP Tahoe (Slow start and congestion avoidance) and TCP Vegas to mention a few (Ryu et al, 2003). As it was earlier on affirmed that congestion in the Internet can be easily and effectively detected at the gateways. As a result of this, the congestion control mechanisms of TCP were not designed specifically to manage congestion at the gateways. As a result of the drawbacks associated with the TCP and the Tail Drop schemes, the need to develop mechanisms that can overcome the demerits became very germane. The solution is to drop packets before a queue becomes full so that a source can respond to congestion before buffers overflow. This approach is called active queue management (AQM) which was deployed by the Internet Engineering Task Force (IETF), according to IETF website (2007). An instance of this AQM approach is the Random Early Detection (RED) scheme (Floyds and Jacobson, 1993). Akintola et al, (2005) states that RED algorithm is one of the most prominent congestion avoidance schemes in the Internet architecture. Several works have done in the area of developing congestion avoidance and recovery mechanisms using the Active Queue management approach. There are a lot of past works done in developing congestion avoidance and recovery mechanisms. The past works to be considered will be the ones that control and manage congestion at the gateways. The shortcoming(s) for each of these models will not be left out.

12 2.3

Congestion Control Models at the Gateways.

This section discusses the various models developed by networking researchers over the years. According to Chen and Yang (2006), a congestion avoidance mechanism tries to prevent congestion from developing in a network and therefore avoids packet drops. To this end, certain boundaries are set, and a parameter (the average queue size) is compared with them. If the value of the parameter is below the minimum boundary, transmission of the packets will be successful. When the parameter value is between the boundaries, the congestion avoidance mechanism comes into operation to avoid incipient congestion. When the parameter value exceeds the maximum boundary, the buffer has overflowed (the buffer is full); congestion occurs in the network, and it can only be brought back to its operating state with the help of an efficient congestion recovery mechanism. Congestion avoidance is therefore most useful when the buffer is near full, which indicates that if packets continue to be sent at the current rate, congestion may occur. A powerful congestion avoidance algorithm must put in place appropriate techniques to avoid buffer overflow. In the past few years, several such algorithms have been proposed by researchers, engineers and scientists; each has its own achievements and setbacks, discussed in the following sections.

2.3.1 DECbit model

The DECbit congestion avoidance model was developed by Ramakrishnan and Jain (1990) at Digital Equipment Corporation; it was the first model to detect congestion at gateways. In this model, the congested gateway uses a congestion-indication bit in the packet headers to provide feedback about congestion. The model uses a small amount of feedback from the network to the users, who adjust the number of packets allowed into the network. The congestion-indication bit is communicated back to the users through the transport-level acknowledgement. When the average queue size is greater than one, the gateway sets the congestion-indication bit in the header of the arriving packet. The model uses a window-based control mechanism: Haider et al (2005) state that if at least half of the packets in the last window had the congestion-indication bit set, then the window size is decreased exponentially; otherwise it is increased linearly.

Response to the model: The model can adapt to the dynamic state of the network, converge to the optimal operating point, and is simple to implement with low overhead. It is distributed and can maintain fairness in the service provided to multiple sources. According to Haider et al (2005), its shortcomings are that the queue size is averaged over fairly short periods of time and that it makes no distinction between congestion detection and indication. It uses simple averaging and is biased against bursty traffic.
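The window update rule quoted from Haider et al (2005) can be sketched as follows. The multiplicative factor of 0.875 and the additive step of 1 are illustrative choices, not values given in the text:

```python
def decbit_window_update(window, bits_set, window_size,
                         decrease_factor=0.875, increase_step=1):
    """DECbit-style update: if at least half of the packets in the
    last window carried the congestion-indication bit, decrease the
    window multiplicatively; otherwise increase it linearly."""
    if bits_set * 2 >= window_size:
        return max(1.0, window * decrease_factor)
    return window + increase_step
```

The multiplicative decrease and linear increase together give the familiar sawtooth behaviour of window-based congestion control.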

2.3.2 Random Early Detection model

Floyd and Jacobson (1993) proposed Random Early Detection (RED) gateways as one of the prominent schemes for implementing active queue management in network gateways. On the arrival of each packet at the gateway, the average queue length (qn) is computed using an Exponential Weighted Moving Average (EWMA), as in the work of Young (1984). The computed average queue size is compared with the minimum threshold (minth) and the maximum threshold (maxth) to determine the next action. The basic idea behind the RED algorithm is as follows:

If qn ≤ minth, no incoming packets are dropped. If minth ≤ qn ≤ maxth, the arriving packet is dropped with probability pb, given by the expression:

pb ← maxp (qn − minth) / (maxth − minth)

If qn > maxth, all incoming packets are dropped. Floyd and Jacobson (1993) suggest the expression:

pa ← pb / (1 − count · pb)

The expression pa should be used as the dropping or marking probability in order to make the inter-packet drops uniform instead of geometric. Here, count is the number of packets forwarded since the last drop. Figure 2.2 depicts the relationship between the dropping (marking) probability pb and the average queue length qn in the RED algorithm.

Figure 2.2: Marking probability graph for original RED (Source: Haider et al (2005))

Response to the model: The responsiveness of this model in avoiding congestion at the gateway could be increased by developing enhanced models. The RED gateway model detects incipient congestion by computing the average queue size (Jacobson, 1998). If the original RED is poorly configured, it performs no better than the Drop Tail model; its performance is very sensitive to parameter settings, and badly set parameters result in poor performance.

2.3.2.0 Variations of RED model

Several modifications, enhancements and improvements have been made to the original RED model to make the scheme more responsive to congestion avoidance at the gateways, as well as more sensitive to incipient congestion. The variants of the RED model are discussed below.

2.3.2.1 Gentle RED model

In the original RED, if the average queue size is greater than the maximum threshold, all incoming packets are dropped. According to Firoiu and Borden (2000), this can lead to oscillatory behaviour. As shown in Figure 2.3, a new parameter is introduced, giving the maximum buffer size B.

Response to the model: This model is much more robust against undesired oscillations in queue size and against the setting of parameters.
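The RED rules of section 2.3.2 can be put together in a short sketch. The parameter values in the test are illustrative, and the clamping of pa is a defensive assumption of this sketch, since the text does not specify behaviour when count·pb approaches 1:

```python
import random

def red_drop(qn, min_th, max_th, max_p, count):
    """RED drop decision: no drops below minth, certain drops above
    maxth, and probabilistic drops in between, with pa making the
    gaps between drops uniform rather than geometric."""
    if qn <= min_th:
        return False
    if qn > max_th:
        return True
    pb = max_p * (qn - min_th) / (max_th - min_th)
    denom = 1 - count * pb
    pa = pb / denom if denom > 0 else 1.0   # clamp when count*pb >= 1
    return random.random() < min(1.0, pa)
```

A gateway would call `red_drop` for every arriving packet, resetting `count` to zero after each drop.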

2.3.2.2 Flow RED model

Lin and Morris (1997) reported a variation of RED called Flow RED (FRED). They argued that the RED algorithm is not fair to different types of traffic. FRED uses per-active-flow accounting to impose on each flow a loss rate that depends on the flow's use of the buffer (Haider et al, 2005). If a flow continually occupies a large amount of the buffer space of the queue, it is detected and limited to a smaller amount of the buffer space.

Response to the model: Fairness among flows is well maintained. A shortcoming of FRED is its higher queue sampling frequency.
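A toy version of the per-active-flow accounting idea might look like the sketch below; the names and the fixed per-flow share are illustrative simplifications, not FRED's actual bookkeeping:

```python
def fred_admit(per_flow_bytes, flow_id, packet_len, max_share):
    """Admit a packet only if the flow stays within its share of the
    buffer; otherwise drop it (a deliberately simplified policy)."""
    used = per_flow_bytes.get(flow_id, 0)
    if used + packet_len > max_share:
        return False              # flow exceeds its share: drop
    per_flow_bytes[flow_id] = used + packet_len
    return True
```

The dictionary of per-flow byte counts is the "per active flow accounting" state; entries would be decremented as packets leave the queue.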


Figure 2.3: Marking probability graph for Gentle RED (a variant of the original RED) (Source: Haider et al (2005))

2.3.2.3 RED with Preferential Dropping model

Another variant of RED is RED with Preferential Dropping, developed by Mahajan and Floyd (2001). This model uses preferential dropping to control high-bandwidth, non-responsive flows in a data network. The first step is to identify the non-responsive high-bandwidth flows; the next is to reduce their bandwidth. Haider et al (2005) state that the algorithm draws heavily from the core stateless fair queueing and flow random early detection mechanisms. The model uses packet drop history to identify and control non-responsive flows.

Response to the model: This model is effective to some extent in a network with high-bandwidth non-responsive flows. However, its major setback is that it cannot properly control a large number of non-responsive flows.

2.3.2.4 Adaptive RED model

The algorithm for the Adaptive RED model is given in detail in the work of Feng et al (1999). In this model, the configuration of the parameters is based mainly on the traffic load: if the average queue size qn is in the range between minth and maxth, then maxp is multiplicatively scaled up by a factor of α or scaled down by a factor of β, depending on the current traffic load, with α = 3 and β = 2. Another version was reported by Floyd et al (2001), in which maxp is increased additively and decreased multiplicatively, over time scales larger than a typical round-trip time, to keep the average queue length within a target range halfway between minth and maxth.
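The additive-increase, multiplicative-decrease adaptation of maxp reported by Floyd et al (2001) can be sketched as follows. The step sizes, the bounds on maxp and the width of the target band are illustrative assumptions of this sketch:

```python
def adapt_max_p(max_p, avg, min_th, max_th,
                alpha=0.01, beta=0.9):
    """Nudge max_p so the average queue settles midway between the
    thresholds: increase additively when avg is above the target
    band, decrease multiplicatively when below it."""
    band = max_th - min_th
    target_lo = min_th + 0.4 * band
    target_hi = min_th + 0.6 * band
    if avg > target_hi:
        return min(0.5, max_p + alpha)   # queue too long: drop more
    if avg < target_lo:
        return max(0.01, max_p * beta)   # queue short: drop less
    return max_p
```

This adjustment would run on a slow timer (longer than a round-trip time), not on every packet arrival.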

Response to the model: A notable strength of ARED is that it sets its parameters automatically in response to the changing traffic load. Its singular setback is that it lacks a clear and optimal policy for changing the parameter settings.

2.3.2.6 Stabilized RED model

Ott et al (1999) presented another enhanced version of the original RED called Stabilized RED (SRED). In this model, packets are dropped with a load-dependent probability based on the estimated number of flows and the instantaneous queue length when packets arrive at the queue, even though the buffer is not full. Moreover, SRED can stabilize its buffer occupation at a level independent of the number of active connections over a wide range of load levels, by estimating the number of active flows statistically. Whenever a packet arrives at the queue, SRED performs a comparison test and declares a hit only when the two packets come from the same flow. A simple way to do the comparison is to compare the arriving packet with a packet still in the queue. The goal is to identify flows that are taking more than their fair share of bandwidth and to allocate a fair share to all flows without incurring too many computations (Li and Wang, 2003). Response to the model: SRED estimates the number of flows without maintaining per-flow accounting. However, the comparison task is time consuming, and no consideration is made for fairness according to transfer size.

2.3.2.7 Dynamic RED model

Akintola et al (2005) saw the need to develop a new modified and enhanced version of the original RED, called Dynamic RED (DRED). In this model, a new parameter (the warning line) was introduced. The model was developed and tested to

measure the burstiness of the incoming traffic. The model got its name from the fact that the estimated average queue size is dynamically adjusted. The dropping probability is computed in a similar manner to that of the original RED via the formula below:

pb = maxp (avg − minth) / (maxth − minth) ……………………………………2.1

The following expression calculates the surplus of the actual queue size over the average queue size:

surplus = q − avg ……………………………………………………………….2.2

where q is the actual queue size and avg is the average queue size. The surplus indicates the burstiness of the incoming traffic and gives the gateway a clue about the nature of that traffic. According to Akintola et al (2005), a large value of surplus indicates bursty incoming traffic, and a continuous increase of the surplus indicates that the incoming bursty traffic is beyond the gateway's buffer capacity, so that buffer overflow is imminent. Conversely, if the surplus is low, the incoming traffic is less bursty and no buffer overflow can occur. When the computed avg lies between the minimum and maximum thresholds, incoming packets are dropped with a probability pb that is proportional to the average queue size, in order to prevent congestion from occurring. Response to the model: This model is more responsive in avoiding congestion at the gateways and preventing buffer overflow. The average queue size is dynamically set to reflect the departure of packets from the gateway. It avoids global synchronization and also greatly reduces fluctuations of the actual queue size. The limitations of this model are that no decision is made on which connections to notify of congestion, and that the model was tested on only two gateways (a realistic Internet consists of many networks with far more than two gateways).
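Equations 2.1 and 2.2 can be sketched as follows (an illustrative Python stand-in, not the authors' MATLAB code; the parameter values in the example are hypothetical):

```python
def drop_probability(avg, minth, maxth, maxp):
    """Equation 2.1: pb rises linearly from 0 at minth to maxp at maxth."""
    if avg < minth:
        return 0.0          # below the minimum threshold: never drop
    if avg >= maxth:
        return 1.0          # beyond the maximum threshold: drop everything
    return maxp * (avg - minth) / (maxth - minth)

def surplus(q, avg):
    """Equation 2.2: a large surplus signals bursty incoming traffic."""
    return q - avg

# Hypothetical parameter values for illustration
print(drop_probability(avg=150, minth=100, maxth=300, maxp=0.1))  # 0.025
print(surplus(q=180, avg=150))  # 30
```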

2.3.3 CHOKe model

Another model that resembles the original RED in the computation of probability is the CHOKe model, proposed by Pan et al (2000). According to the algorithm, for every packet arriving at the congested gateway, a packet is drawn at random from the buffer and compared with the arriving packet. If both packets belong to the same flow, both are dropped; otherwise the randomly selected packet is kept intact and the new packet is admitted into the buffer with a probability that depends on the level of congestion in the network. Response to the model: Simplicity, a stateless algorithm, ease of implementation and no requirement for any special data structure are some of the merits of this model in data networks. However, the model may perform poorly when the number of flows is large compared to the buffer size; it has both fairness and scalability problems.
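The CHOKe admission test can be sketched as follows (an illustrative Python stand-in; packets are represented as dictionaries with a "flow" field, and red_drop_prob stands in for RED's calculated probability):

```python
import random

def choke_arrival(buffer, packet, red_drop_prob):
    """On each arrival, draw one packet at random from the buffer and
    compare flows: a match drops both packets; otherwise the arrival
    is admitted subject to a RED-style probabilistic drop.
    Returns True if the arriving packet is admitted."""
    if buffer:
        victim = random.choice(buffer)
        if victim["flow"] == packet["flow"]:
            buffer.remove(victim)   # drop the matched resident packet
            return False            # ...and discard the arrival too
    if random.random() < red_drop_prob:
        return False                # congestion-dependent random drop
    buffer.append(packet)
    return True

buf = [{"flow": 1}]
choke_arrival(buf, {"flow": 1}, 0.0)   # same flow: both packets dropped
```

A heavy flow occupies many buffer slots, so its packets are matched (and dropped) more often, which is what penalizes non-responsive flows without keeping per-flow state.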

2.3.4 BLUE model and the Stochastic Fair Blue (SFB) model

In the work of Feng et al (1999), it was shown by simulation that BLUE and Stochastic Fair Blue (SFB) perform better than the original RED. This is because the two models were designed to address problems encountered in RED, chiefly that the queue length gives very little information about the number of competing connections on a shared link. BLUE and SFB were designed to use packet loss and link idle events to protect TCP flows against non-responsive flows.
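The essence of BLUE, as reported by Feng et al (1999), is a single marking probability driven by packet-loss and link-idle events rather than by queue length. A minimal sketch follows (the step sizes d1 and d2 are illustrative, not values from the paper):

```python
class Blue:
    """Event-driven marking probability: rise on loss, decay on idle."""
    def __init__(self, d1=0.02, d2=0.002):
        self.pm = 0.0   # current marking/drop probability
        self.d1 = d1    # increment applied when the buffer overflows
        self.d2 = d2    # decrement applied when the link goes idle

    def on_packet_loss(self):
        self.pm = min(1.0, self.pm + self.d1)

    def on_link_idle(self):
        self.pm = max(0.0, self.pm - self.d2)
```

Choosing d1 much larger than d2 lets the gateway react quickly to overload while releasing pressure only gradually as the link drains.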

Response to the model: Very high scalability and fairness while using only a small amount of state information and buffer space are among the advantages of the BLUE and SFB algorithms. Other merits are a low packet loss rate and the smaller buffer size needed. Although a FIFO queueing discipline is used, the algorithms still identify and limit non-responsive flows.

2.3.5 Random Exponential Marking model

A new congestion control technique given in the work of Athuraliya et al (2001) is the Random Exponential Marking (REM) algorithm. The algorithm maintains at each link l a variable called the price, pl(·), which serves as a congestion measure and is updated as follows:

pl(k + 1) = [pl(k) + γ(αl(bl(k) − b∗l) + xl(k) − cl(k))]+ ………………..……..……2.3

where γ > 0 and αl > 0 are small constants and

[z]+ = max{z, 0} ………………………………………………………………………2.4

Here, bl(k) is the aggregate buffer occupancy, b∗l is the target queue length, xl(k) is the aggregate input rate to the queue and cl(k) is the available bandwidth. The constant αl trades off utilization against queueing delay during the transient period, while γ controls the responsiveness of REM to changes in network conditions. If a packet traverses links l = 1, 2, ..., L that have prices pl(k) at a sampling instant k, then the marking probability ml(k) at queue l is

ml(k) = 1 − φ^(−pl(k)) …………………………………………………………..2.5

where φ > 1 is a constant. The end-to-end marking probability for a packet can then be approximated from the sum of the link prices along its path.
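Equations 2.3 to 2.5 can be sketched as follows (illustrative Python; the constants gamma, alpha and phi are hypothetical example values):

```python
def rem_price_update(p, b, b_target, x, c, gamma=0.001, alpha=0.1):
    """Equation 2.3: raise the link price when the buffer exceeds its
    target or the input rate exceeds capacity; [z]+ clamps at zero."""
    return max(0.0, p + gamma * (alpha * (b - b_target) + x - c))

def rem_mark_prob(p, phi=1.1):
    """Equation 2.5: marking probability at a link with price p."""
    return 1.0 - phi ** (-p)

# Queue above target and input above capacity: the price rises
p1 = rem_price_update(0.0, b=100, b_target=50, x=10, c=5)
```

Because the marking probability is exponential in the price, the probability that a packet escapes marking across a whole path is φ raised to minus the sum of the link prices, which is what allows the end-to-end approximation mentioned above.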

Response to the model: This model achieves high utilization of the link capacity, low packet loss, negligible delay, and scalability. However, before the model can be used, a properly computed, fixed value of φ must be known globally. Other limitations are that the model gives no incentive to cooperative sources and lacks quality of service (QoS) support.

2.3.6 Virtual Queue model

According to Gibben and Kelly (1999), the Virtual Queue algorithm is a radical technique. In this model, the link maintains a virtual queue with the same arrival rate as the real queue; the capacity of the virtual queue, however, is smaller than that of the real queue. When the virtual queue drops a packet, all packets already lined up in the real queue, as well as all new incoming packets, are dropped until the virtual queue becomes empty again (Haider et al, 2005). Kunniyur (2001) presented a later version of the original Virtual Queue model called the Adaptive Virtual Queue (AVQ) algorithm. Here, at each packet arrival, the virtual queue capacity is updated according to a differential equation, and the parameters of this update determine the stability of the AVQ algorithm. Response to the model: The Virtual Queue model achieves high link utilization, and the AVQ model is adaptive to traffic changes. One weakness of the original algorithm is the fixed size of the virtual queue; in addition, the virtual queue does not correctly follow the changing traffic pattern at the gateway (Haider et al, 2005).
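The plain virtual-queue discipline described above can be sketched as follows (illustrative Python; the capacity value is hypothetical, and for simplicity only arriving packets are refused during a drop episode):

```python
class VirtualQueue:
    """A fictitious queue with smaller capacity sees the same arrivals;
    once it overflows, real packets are dropped until it drains."""
    def __init__(self, vq_capacity=80):
        self.vq_capacity = vq_capacity
        self.vq_len = 0
        self.dropping = False

    def on_arrival(self):
        if self.vq_len >= self.vq_capacity:
            self.dropping = True      # virtual drop: start refusing packets
        else:
            self.vq_len += 1
        return not self.dropping      # True means admit the real packet

    def on_departure(self):
        if self.vq_len > 0:
            self.vq_len -= 1
        if self.vq_len == 0:
            self.dropping = False     # virtual queue empty: resume admitting
```

Because the virtual capacity is below the real link capacity, the virtual queue overflows before the real one does, signalling congestion early.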

2.4 Overview of the Proposed Model

The original RED is one of the most promising and efficient active queue management mechanisms used for congestion control at the gateway. A modified and enhanced version, proposed by Akintola et al (2005), responds quickly to transient congestion when the gateway buffer is nearly full: packets are dropped before the buffer overflows, according to a calculated probability that is a function of the average queue size. The model, Dynamic RED (DRED), dynamically adjusts the average queue size to cater for the departure and arrival of packets in the queue. DRED, however, was tested with only two gateways, whereas typical Internet networks consist of many more; hence the behaviour of DRED in a realistic internetwork could not be predicted. In this research work, the DRED model will therefore be tested on a group of networks consisting of several gateways. The performance of DRED with several gateways will be simulated, and the results will be presented and compared with those obtained when it was implemented on a two-gateway network. In this work, the routing function of a gateway is also brought into play: a packet leaving a gateway can be routed over multiple dynamic paths before arriving at its destination.

CHAPTER THREE

RESEARCH METHODOLOGY

3.1 Introduction

The alarming growth of the Internet, together with the increasing number of Internet users, has made it imperative to develop and deploy powerful and efficient congestion control mechanisms at the transport layer of networks. The Transmission Control Protocol (TCP) is the protocol of the Internet (Ryu et al, 2003). This protocol2 plays a central role in avoiding congestion collapse of the Internet. As a result, the current Internet relies on end-to-end TCP congestion control, in which congestion control and management can only be implemented by the end hosts. Akintola et al (2005) state that the performance of end-to-end congestion control is expected to improve immensely with the deployment of advanced gateway congestion control mechanisms, since the gateway is the most fitting agent for detecting incipient congestion and ensuring fair allotment of network bandwidth. Current Internet gateways employ the First In First Out (FIFO) queueing policy, and packets are dropped only when the buffer is full. This discarding of packets at buffer overflow has a global synchronization effect in both one-way and two-way TCP traffic, and consequently the aggregate throughput is lowered.

3.2 Model Development

Certain components are of paramount importance in the proposed model for developing a typical Internet network model that enables the performance enhancement and analysis of the DRED gateway. The model consists of the various

2 A protocol is a set of rules, standards and conventions for communication between systems or processes.

components shown in Figure 3.1. At the edge of every network there are TCP hosts (sources and sinks/destinations); the sources have packets to send to designated sinks. Connected to every network is a gateway3. In this model there are five gateways (G1, G2, G3, G4 and G5). Between the hosts and each of the gateways are data channels through which data are conveyed from the TCP sources to the attached gateways and then on to the destination hosts. Embedded in each gateway is a dynamic routing table containing the addresses of the routes available at a particular instant to the directly connected gateways; via any available route, packets can be sent from sources to destinations. Where more than one route is available at a particular time, packets are conveyed through the best route. Lastly, the model includes the bottleneck channels among the five gateways.

3.2.1 TCP hosts

The TCP hosts are the data sources and destinations (sinks) for each of the networks. They carry either short-lived bulk-data (interactive) transmissions or long-lived bulk-data transmissions; transient congestion can be caused by either kind. In a short-lived transmission, transient congestion may not be harmful when the queue is not nearly full. In a long-lived transmission, however, the transient congestion is accommodated without any negative feedback to the traffic sources.

3.2.2 Data channels

Communication links or paths between two or more hosts in a network are referred to as data channels. Through these channels, data are transmitted and received in the networks. Each data channel has its own bandwidth/capacity (the amount of data that

3 A device that enables two heterogeneous networks to communicate successfully.

can pass through the channel at a particular time, measured in bits per second (bps)). Propagation delay4 is an important factor in data channels; it is measured in seconds and must be kept as low as possible in order to keep throughput high.

3.2.3 Gateways

A gateway is a device that enables communication between two or more heterogeneous networks, that is, networks that do not share the same communication protocols, data formatting structures, languages and architectures. Some gateways work at every level of the Open Systems Interconnection (OSI) reference model. The OSI reference model, a seven-layer protocol stack, was developed by the International Standards Organization (ISO) to enable interoperability among diverse networks. According to Walrand (1998), the gateway decapsulates incoming packets through the network's complete protocol stack and encapsulates outgoing packets in the complete protocol stack of the other network to allow successful transmission. The DRED model (like the original RED) is designed to be implemented at the gateway because congestion is best detected there. The extension of the DRED model is to be implemented with multiple gateways so as to avoid transient congestion at the gateways.

3.2.4 Dynamic routing table

A routing table is a table that keeps track of the routes to networks and the associated costs of those routes. Routing can be either dynamic or static. In dynamic routing, the internetwork routing adjusts automatically to network topology or traffic changes, based on Routing Information Protocol (RIP) broadcast information or packets received from other networks. In static routing, the network administrator manually configures routes into the routing table. In this model, dynamic

4 The time taken by a packet to travel from the source to the sink; also referred to as latency.

routing will be used on every gateway, as shown in Figure 3.1. When packets arrive at a gateway, they are routed through the available routes to the gateways directly connected to it. Where the destination address of a packet corresponds to the gateway at which the packet is currently present, the packet does not hop to another gateway; it is taken to have arrived at its destination. The available paths in the routing table are dynamically updated every time the routing table of a gateway is visited. This mirrors the way Internet gateways update their routing tables every 30 seconds. Packets are only forwarded to the next hop (which could be the destination gateway or an intermediate gateway). With the five gateways specified in this model, there is a routing table in each of them.
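The periodic refresh of available paths can be sketched as follows (a Python stand-in for the MATLAB rand-based selection; the topology and the choice of two paths per gateway are illustrative, not the thesis data):

```python
import random

# Illustrative possible-path lists, not the thesis topology data
POSSIBLE_PATHS = {
    "G1": ["G2", "G3", "G4", "G5"],
    "G2": ["G1", "G3", "G4", "G5"],
}

def refresh_routing_table(possible, k=2):
    """Randomly keep k of the possible paths for each gateway,
    mimicking the timed (30-second-style) routing-table update."""
    return {gw: random.sample(paths, k) for gw, paths in possible.items()}

table = refresh_routing_table(POSSIBLE_PATHS)
```

Each refresh yields a different subset of paths, which is what makes the routing dynamic rather than fixed at configuration time.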

3.3 The Analysis of the Model

In this work, the number of data transmission paths and the number of gateways are extended from a single path and two gateways to multiple paths and five gateways respectively. The algorithms used for this model are presented in Appendices 1 and 2. All the parameters of the DRED model are retained. One of them is the queue weight, wq, which controls the rate at which the calculated average queue size reacts to changes in traffic load at the gateways. The value of the queue weight is varied dynamically according to changes in the actual queue size; as a result, transient congestion is detected in a timely manner and avoided by randomly dropping packets before the gateway buffer overflows. Another parameter adjusted dynamically whenever the buffer is nearly full is the dropping probability. The maximum dropping probability maxp is

[Figure 3.1 appears here.]

RT = Routing Table; G1, G2, G3, G4 and G5 = gateways for each of the networks; S1, S2, …, Sn, Sm (sources) and D1, D2, …, Dj, Dk, Dp (destinations/sinks) = TCP hosts.

Figure 3.1: Diagrammatic Representation of the Network Model for Implementing the DRED Algorithm in a Multiple-Path Transmission Network, Showing the Possible Paths

dynamically adjusted depending on the average queue length; maxp is directly proportional to the average queue length. Apart from the algorithm that avoids transient congestion in data networks, a suitable and efficient algorithm for routing packets in the networks is developed in the model. Routing tables are implemented at the gateways, storing the available paths to the directly connected gateways. The routing table contents (available paths to destinations) are changed dynamically in a timely manner, similar to actual Internet gateways (every 30 seconds). For the model simulation, five gateways are employed: two are connected to the source hosts while the other three are connected to the destination hosts. Packets can also be routed between gateways before they reach their designated gateways. When the routing table is checked for available paths, a suitable path to the destination is chosen for transmitting the packet. The parameters used in the simulation of this model are contained in Appendix 4.

3.3.1 The DRED model

In this model, three thresholds are introduced, namely the minimum threshold (minth), the maximum threshold (maxth) and the warning line. All the thresholds have predetermined values; in particular, the warning line is set to half of the gateway buffer size. The minth and maxth are used in the calculation of the packet-drop probability. The Exponentially Weighted Moving Average (EWMA) is used to compute the average queue size:

avg = (1 − wq) avg + wq q

where wq is the queue weight, which is dynamically adjusted in the DRED model and determines the sensitivity of the model to fluctuations of the actual queue size, and q is the actual queue size. In adjusting the value of wq, the third threshold is used, called the

warning line. If the actual queue size is less than the warning line, wq is unchanged. On the other hand, if the actual queue size is greater than or equal to the warning line, wq is set as shown in equation 3.3, where R is the ratio of the surplus to the buffer size, and nwq and oldwq are the new and old queue weights respectively. For responsiveness to rapid dequeueing of packets, if the actual queue size is less than the average queue size, wq is multiplied by 10. The complete algorithm for computing wq in response to the departure of every packet is shown in Appendix 1. The difference between the actual queue size and the average queue size is taken as the surplus, given by the simple expression in equation 3.1:

surplus = q − avg ………………………………………………………3.1

where q is the actual queue size and avg is the average queue size. A large or a small value of surplus implies that the incoming traffic is bursty or less bursty respectively. The ratio, R, of the surplus to the buffer size is used to adjust wq dynamically, as shown in equation 3.2:

R = surplus / buf_s ……………………………………………………………3.2

where buf_s is the buffer size. It should be noted that the higher the ratio of the surplus to the buffer size, the greater the queue weight, as can be seen in equation 3.3:

nwq = { oldwq,        R ∈ [0, 0.1]
        oldwq × 4,    R ∈ [0.1, 0.2]
        oldwq × 8,    R ∈ [0.2, 0.3]
        oldwq × 12,   R ∈ [0.3, 0.4]
        oldwq × 16,   R ∈ [0.4, 0.5]
        oldwq × 20,   R ∈ [0.5, 1] ………………………3.3
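The piecewise adjustment in equation 3.3 can be sketched as follows (illustrative Python, not the thesis' MATLAB code; the interval boundaries are treated as half-open here, an assumption made because the equation's shared endpoints are ambiguous):

```python
def new_queue_weight(old_wq, surplus, buf_s):
    """Equation 3.3: scale wq by the ratio R = surplus / buf_s, so
    burstier input makes the average queue track the actual queue
    more closely."""
    r = surplus / buf_s
    if r < 0.1:
        return old_wq
    if r < 0.2:
        return old_wq * 4
    if r < 0.3:
        return old_wq * 8
    if r < 0.4:
        return old_wq * 12
    if r < 0.5:
        return old_wq * 16
    return old_wq * 20
```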

The aggressiveness of the DRED model towards incoming packets when incipient congestion is detected is determined largely by the maximum dropping probability, maxp. maxp is varied according to changes in the average queue size so as to reduce queueing delay and avoid buffer overflow. When incipient congestion is detected, packets are discarded randomly with the packet-drop probability, pb, which is calculated as shown in equation 3.4 and varies linearly from 0 to maxp. No packet is dropped when pb is less than zero; such packets are processed and forwarded successfully. All parameters have the same meanings as defined previously.

pb  3.3.2

max p avg  min th 

max

th

 min th 

…………………………………………..3.4

Extension of the design and implementation of the DRED model

Most RED gateway models have been developed for single-path transmission only; in particular, the DRED model was tested with just two gateways. In this research work, the DRED gateway algorithm is deployed on a group of networks consisting of five gateways in order to test the strength of the DRED model as the network grows. This extension is imperative because the conventional Internet goes well beyond single-path transmission; the performance of the model therefore needs to be evaluated over multiple data transmission paths. To do this, the gateways need a routing table to decide which path or route is available and, where there is more than one, which is best. Congestion can be avoided at any gateway that receives packets, and packets that are not randomly dropped are processed and forwarded to their destinations over the available route(s). The proposed model will be simulated with the modification on the capacity of the network.

3.3.3 Deployment of the DRED model in multiple-path data networks

Multiple-path data networks are networks linked together by multiple routes, each route connecting one network to another. Sending data from one network to another may require the data to be routed through intermediate networks before reaching their destinations. The number of gateways used in the DRED model is extended from two to five so as to test the performance of the DRED model in multiple-path data transmission, as shown in Figure 3.1. The aim is to deploy a congestion avoidance algorithm in multiple-path data networks. When incipient congestion is detected, packets are dropped with a probability that is a function of the average queue size, preventing gateway buffer overflow so as to achieve high throughput and low propagation delay.

3.3.4 Introduction of the dynamic routing table algorithm in each of the gateways

With multiple gateways and multiple routes between sources and destinations, there is a need for an efficient and powerful routing algorithm in each gateway; this is the major focus of this work. Given multiple transmission paths in data networks, packets are transmitted through the best route to a destination. "Best route" implies consideration of parameters such as the number of hops (the trip a packet takes from one gateway or intermediate gateway to another), time delay and the communication cost of packet transmission. In this research, only the best available path to the directly connected gateway that can transport packets to their designated gateways will be employed. Information about the structure of the networks is stored in the routing table. The information comprises the available routes to a

particular gateway from the directly connected gateways. Figure 3.2 depicts the various possible paths from each of the gateways. From the possible paths, the paths available at an instant are chosen with the help of the rand function in MATLAB. Since the Internet is connectionless, store-and-forward is used where there is no direct link from the source gateway to the destination gateway. When packets arrive at an intermediate or destination gateway after leaving the source gateway, the congestion avoidance algorithm is invoked before the routing table algorithm is initiated; only packets that have been processed successfully and are ready for forwarding are routed.

3.3.4.1 Dynamically updating the routing table

Since dynamic routing is used, the paths available for routing packets at any time are updated in a timely manner. To accomplish this, an array of the possible paths to the directly connected gateways is created for each of G1, G2, G3, G4 and G5 (i.e. Gateway 1 to Gateway 5). Another array, the available paths, is created from the possible paths at every interval using the rand function in MATLAB, as shown in Appendix 3. Two of the four possible paths for G1 and G2, and one of the two possible paths for G3, G4 and G5, are randomly selected at every interval and stored in the available-paths array in the course of the simulation.

3.3.4.2 Determination of packet destination addresses in the routing table

Every packet that arrives at G1 or G2 carries a particular destination address. In this model, an algorithm to check the routing table for the gateway destination address of packets is developed and presented in detail in Appendix 2. G3, G4 and G5 are denoted by 3, 4 and 5 respectively. So, when packets

[Figure 3.2 appears here.]

G1 = Gateway 1, G2 = Gateway 2, …, G5 = Gateway 5.

Figure 3.2: The possible paths from which the available path(s) at an instant can be chosen for each gateway.

arrive at either G1 or G2, the destination address attached to the packets is compared with 3, 4 and 5. If there is a match, the packet is routed through the path that links to its destination. Otherwise, the packet is routed through a suitable path to another gateway that is not the destination gateway. A detailed algorithm is shown in Appendix 2.
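The destination check described above can be sketched as follows (illustrative Python; gateways are denoted by the integers 3, 4 and 5 as in the text, and forwarding to a random available intermediate gateway stands in for the "suitable path" choice):

```python
import random

def next_hop(dest, available_paths):
    """Route directly if a path to the destination gateway is
    available; otherwise store-and-forward via a random available
    intermediate gateway."""
    if dest in available_paths:
        return dest
    return random.choice(available_paths)

next_hop(3, [3, 4])   # a direct path exists: forward straight to gateway 3
```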

3.4 Assumptions

(i) All the TCP host data sources have packets to transmit.

(ii) The size of the packet is assumed to be fixed (at a convenient value) during the simulation.

(iii) For the purpose of this work, from Figure 3.1, G1 and G2 are connected to hosts that can only send (sources), while G3, G4 and G5 are sinks to which destination hosts are attached. However, packets are allowed to hop from one intermediate gateway to another.

(iv) The routing table contains both the possible and available routes in the networks, which consist of just five gateways.

(v) The determination of the shortest path in routing the packets is not included in the model.

(vi) All the gateways have the same buffer size and service rate.

3.5 Limitations

The number of gateways is limited to five, through which data can be routed before they reach their specified destinations. The model is limited to detecting incipient congestion; deciding which connections to notify of congestion is not included. Also, for the purpose of simulation, the routing tables of G1 and G2 are limited to two available paths at a time.

CHAPTER FOUR

SYSTEM SIMULATION AND RESULT ANALYSIS

4.1 Introduction

Having developed the extended DRED model mathematically, simulations were performed using MATLAB 7.0 for the DRED model implemented on each of the gateways in the multiple-path transmission network, in order to evaluate its performance. The extended DRED model was implemented on five gateways. The packet size, the packet destination address (gateway address) and the available path(s) from one gateway to the directly connected gateways were all generated randomly. The simulation results are presented in graphical form to evaluate the performance of the DRED model in a multiple-path data transmission network.

4.2 Simulation Tool

MATLAB (matrix laboratory) was originally developed to provide easy access to matrix software developed by the LINPACK and EISPACK projects. It is a high-performance language for technical computing that integrates computation, visualization and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. It is useful in areas such as mathematics and computation, algorithm development, data acquisition, modelling, simulation, prototyping, data analysis, exploration, visualization, and scientific and engineering graphics. In general, the MATLAB system consists of five main parts: the Desktop Tools and Development Environment; the MATLAB Mathematical Function Library; the MATLAB Language; Graphics; and the MATLAB External Interfaces/Application Programming Interface (API).

4.3 Simulation of the Extended Dynamic RED Model

The extended DRED model was simulated using MATLAB 7.0 to evaluate the behaviour of the original DRED model with multiple gateways. The parameters and values used are the same as in the original DRED model, as presented in Appendix 4. Parameters such as the queue weight, the minimum and maximum thresholds, the buffer size of each gateway, and the warning line (half of the buffer size of each gateway) were not altered in the simulation process. The packets (their number and destination addresses) were generated randomly from the various sources connected to gateway 1 and gateway 2 using the rand function in MATLAB. This function generates a uniformly distributed random number X such that 0 < X < 1.
[Figure 4.1 appears here: number of packets (0–450) plotted against time (0–10 s), with curves for the current queue and the average queue.]

Figure 4.1: The graph of the average queue and current queue against time for gateway 1

[Figure 4.2 appears here: probability of dropping packets (0–0.1) plotted against the average queue (0–300).]

Figure 4.2: The graph of probability of dropping packets against average queue for gateway 1

[Figure 4.3 appears here: probability of dropping packets (0–0.1) plotted against time (0–10 s).]

Figure 4.3: The graph of probability of dropping packets against time for gateway 1

4.4 Performance Evaluation of the Extended Dynamic RED Model

When the packets are ready to leave gateway 1 to their destination addresses, the routing function is evoked. The results of the simulations performed at the remaining four gateways are graphically presented as follows. The performance evaluation at each of the four gateways is in terms of the current queue, average queue, dropping probability, propagation delay and throughput. 4.4.1

The average queue and current queue

Like figure 4.1, figures 4.4, 4.5, 4.6 and 4.7 show that the average queue, which determines the probability with which a packet is dropped, responds early enough to an increase in the number of packets at each gateway (i.e. when the ratio of the surplus [queue − avg] to the buffer size increases). However, figures 4.4 to 4.7 also show that the incoming traffic is less bursty than at gateway 1, from which all the packets are transmitted (i.e. the queue is less populated than that of gateway 1). This keeps the buffers of gateways 2, 3, 4 and 5 from overflowing. The reason is that the routes over which the packets travel to their destinations are not static but dynamic: the routing decision depends on the path(s) available at a given instant. As a result, buffer overflow at each of the four gateways is avoided more effectively than at gateway 1, where all the packets arrive, even for bursty incoming traffic. Hence, the gateways do not degrade to drop-tail gateways, no global synchronization occurs, and fluctuation of the actual queue is greatly reduced in the multiple-data-path transmission network.
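The average queue used throughout this evaluation is an exponentially weighted moving average of the instantaneous queue. A minimal Python sketch of the update step (the queue weight wq = 0.002 is the value from Appendix 4; the MATLAB implementation is in Appendix 5):

```python
def update_avg(avg, q, wq=0.002):
    """One EWMA update of the average queue, as in RED/DRED:
    avg <- (1 - wq)*avg + wq*q."""
    return (1 - wq) * avg + wq * q
```

A small wq makes the average track sustained queue growth while filtering out short bursts, which is why the average responds smoothly in figures 4.4 to 4.7.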

[Plot: current queue and average queue (number of packets, 0–450) against time (0–4 s)]

Figure 4.4: The graph of the average queue and current queue against time for gateway 2

[Plot: current queue and average queue (number of packets, 0–450) against time (0–7 s)]

Figure 4.5: The graph of the average queue and current queue against time for gateway 3

[Plot: current queue and average queue (number of packets, 0–450) against time (0–7 s)]

Figure 4.6: The graph of the average queue and current queue against time for gateway 4

[Plot: current queue and average queue (number of packets, 0–450) against time (0–6 s)]

Figure 4.7: The graph of the average queue and current queue against time for gateway 5

4.4.2	The probability of dropping a packet

The probability with which a packet is dropped depends on the average queue size at any given time. To relate this probability to time, simulations were performed and graphs of the probability with which a packet is marked for dropping against time for gateways 2, 3, 4 and 5 were obtained, as shown in figures 4.8, 4.9, 4.10 and 4.11 respectively. For all four gateways, the dropping probability is responsive enough to the incoming traffic, so no buffer overflow occurs at any of the gateways. The maximum drop probability is around 0.09, which is high enough to avoid buffer overflow even if the incoming traffic comes from long-lived bulk data transfers. Examining the time axes of the figures presented so far, the propagation delay at each gateway other than gateway 1, before the packets are ready to be routed to the next available gateway, is smaller than that of gateway 1. This is due to the smaller number of packets entering each of these gateways, which implies less bursty incoming traffic (i.e. a less populated queue). At these gateways, low propagation delay and high throughput are therefore effectively achieved. Consequently, gateways 2, 3, 4 and 5 will not experience congestion, assuming each gateway has the same buffer size, because the packet arrival rate at each of them is smaller than its service rate.
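The relation between the average queue and the drop probability can be sketched as follows. This is a simplified Python illustration of the base RED relation, with the threshold values from Appendix 4; the thesis' DRED code (Appendix 5) additionally scales the maximum probability upward with buffer occupancy, which is how the observed maximum of about 0.09 arises:

```python
def drop_probability(avg, minth=100, maxth=450, maxp=0.02):
    """Linear RED-style drop probability: zero below the minimum
    threshold, rising linearly to maxp at the maximum threshold."""
    if avg <= minth:
        return 0.0
    p = maxp * (avg - minth) / (maxth - minth)
    return min(max(p, 0.0), maxp)
```

For example, an average queue of 275 packets (halfway between the thresholds) gives a drop probability of half of maxp.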

[Plot: probability of dropping packets (0–0.09) against time (0–4 s)]

Figure 4.8: The graph of probability of dropping packet against time for Gateway 2

[Plot: probability of dropping packets (0–0.09) against time (0–7 s)]

Figure 4.9: The graph of probability of dropping packet against time for Gateway 3

[Plot: probability of dropping packets (0–0.09) against time (0–7 s)]

Figure 4.10: The graph of probability of dropping packet against time for Gateway 4

52

probaility of dropping packets against time

0.09 0.08 0.07

probability

0.06 0.05 0.04 0.03 0.02 0.01 0

0

1

2

3 time in sec

4

5

6

Figure 4.11: The graph of probability of dropping packet against time for Gateway 5

From this model, it can be observed that the larger the number of gateways in the network (i.e. the larger the network), the lower the probability of gateway buffer overflow, provided the number of packets remains constant. The routing pattern used in this model is depicted in figure 4.12, and the parameters used in the simulation are presented in Appendix 4. Appendices 5, 6 and 7 contain the MATLAB code used to simulate the developed mathematical model: Appendix 5 contains the code that generates packet sizes and destination addresses, Appendix 6 contains the routing code for every packet that leaves gateway 1, and Appendix 7 contains the DRED congestion-avoidance code for each of the gateways. Lastly, Appendix 8 contains a graph of the relationship between the dropping probability of a packet and the average queue.
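The random path selection at the heart of the routing operation can be sketched as follows. This is a Python rendering of the Appendix 2 algorithm (the function name is ours): a uniform draw in [0, 10) is split into four equal bands, each selecting one of the possible next-hop paths.

```python
import random

def pick_available_paths(possible_paths):
    """Randomly pick two candidate next-hop paths from the list of
    possible paths, mirroring the Appendix 2 routing algorithm."""
    paths = []
    for _ in range(2):
        n = random.random() * 10
        if n <= 2.5:
            m = 0
        elif n <= 5:
            m = 1
        elif n <= 7.5:
            m = 2
        else:
            m = 3
        paths.append(possible_paths[m])
    return paths
```

Note that this simple sketch may return the same path twice; the Appendix 6 MATLAB code re-draws until the two paths differ.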

[Diagram: gateways G1–G5 and the links between them]

G1 = Gateway 1, G2 = Gateway 2, …, G5 = Gateway 5

Figure 4.12: The routing pattern used in the extended model

CHAPTER FIVE
SUMMARY, CONCLUSION AND RECOMMENDATION

5.1	Summary

It has been established in this work that Extended Dynamic Random Early Detection (EDRED) can avoid congestion in multiple-path data transmission networks. This is made possible by introducing a routing operation at the gateways used in the model. The original DRED was used only to avoid congestion at the Internet gateways of a single-path data transmission network, which is not a typical Internet architecture; this work is an extension of the original DRED model. The gateway remains the best place to detect congestion in the network because of its ability to reliably distinguish between propagation delay and persistent queueing delay: only the gateway has a unified view of queueing behaviour. Moreover, a gateway is shared by many active connections with a wide range of round-trip times, delay tolerances, throughput requirements, etc., so it is in the best position to decide on the duration and magnitude of transient congestion to be allowed at the gateway. Also, gateways in multiple-data-path transmission networks have a routing operation embedded, which allows them to decide on the best available path for forwarding packets from source to sink.

5.2	Conclusion

Transient congestion is shown to be harmful when the queue of the gateway is nearly full, so it is imperative to prevent buffer overflow. This is achieved by ensuring that a gateway responds early enough to transient congestion when the incoming traffic is highly bursty and the free buffer space falls below the warning line (i.e. half of the gateway buffer size). In such a case, unconditionally allowing transient congestion causes frequent buffer overflows, global synchronization and oscillation in the network. The original DRED model was developed to measure the burstiness of the incoming traffic in a single-data-path transmission network. The conventional Internet, however, generally involves routing and forwarding of packets, so a simple and efficient method to route packets from the source gateway to the destination gateway was developed; based on this routing, the availability of a path to forward the packets is determined randomly. The extended DRED has been shown to be responsive enough to handle transient congestion in multiple-path data transmission networks, thereby avoiding global synchronization and oscillation in the network. In addition, the extended DRED model reduces the burstiness of the incoming traffic at every gateway except the source gateway, thereby reducing the probability of congestion occurring at each of these gateways.

5.3	Recommendation and Future Work

It is strongly recommended that the enhanced and extended versions of the RED gateway be made available and deployed in the conventional Internet architecture, to address the bottleneck of congestion in data networks. Future work should also repeat the simulations with other appropriate simulation tools, such as Network Simulator or Mathematica, so that the results can be further verified and refined.

Apart from routing packets by hopping from one gateway to a directly connected gateway, there is another routing mechanism in which packets are routed through the shortest path to their destination address. It is therefore recommended that future work focus on determining and using the shortest path along which packets can be forwarded from the source gateway to the destination gateway in the routing algorithm of the gateway routing operation.


APPENDIX 1

For each arriving packet P:
    q++;
    if (q > avg)
        diff = q - avg;
    else
        diff = 0;
    ratio = diff / buf_s;
    R = int(10 * ratio);
    if (q < warn_line)
        nwq = oldwq;
    else {
        switch (R) {
            case 0: nwq = oldwq;
            case 1: nwq = oldwq * 4;
            case 2: nwq = oldwq * 8;
            case 3: nwq = oldwq * 12;
            case 4: nwq = oldwq * 16;
            default: nwq = oldwq * 20;
        }
    }

For each departing packet:
    q--;
    if (avg > q)
        nwq = oldwq * 10;
    else
        nwq = oldwq;
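The arrival-side rule above can be rendered as a small runnable sketch. This is a Python illustration of the dynamic queue-weight adjustment (the function name is ours; the thesis pseudocode is language-neutral):

```python
def new_queue_weight(q, avg, oldwq, buf_s, warn_line):
    """Dynamic queue-weight adjustment from Appendix 1: scale the
    weight up as the surplus of the actual queue over the average
    queue grows, once the queue passes the warning line."""
    diff = q - avg if q > avg else 0
    R = int(10 * diff / buf_s)
    if q < warn_line:
        return oldwq
    multipliers = {0: 1, 1: 4, 2: 8, 3: 12, 4: 16}
    return oldwq * multipliers.get(R, 20)
```

With the thesis parameters (oldwq = 0.002, buf_s = 500, warn_line = 250), a queue of 300 packets against an average of 100 gives R = 4 and a weight of 0.032, so the average reacts sixteen times faster to the building burst.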

APPENDIX 2

Algorithms for routing and forwarding packets in the network

Dynamically updating the routing table on the arrival of a packet:
    Possible_Paths{ }
    while (packet arrives) {
        for i = 1 to 2
            n = random_number * 10
            if (n <= 2.5)
                m = 1
            elseif ((n > 2.5) && (n <= 5))
                m = 2
            elseif ((n > 5) && (n <= 7.5))
                m = 3
            else
                m = 4
            endif
            Available_Path(i) = Possible_Paths(m)
        next for
    }

Randomly generated packet destination address:
    Dest_Add = (random_Number * 10) * 0.5
    if (Dest_Add <= 1.5)
        Destination_Address = 3
    elseif ((Dest_Add > 1.5) and (Dest_Add <= 3.0))
        Destination_Address = 4
    else
        Destination_Address = 5
    end
    Packet(i,2) = Destination_Address

Determination of the packet destination address on the arrival of a packet:
    if (Packet(i,2) == Available_Path(m))
        call Gate_m(Packet) function
    elseif (Packet(i,2) == Available_Path(m+1))
        call Gate_m+1(Packet) function
    else
        call Gate_m(Packet) or Gate_m+1(Packet) functions
    end if

APPENDIX 3

Possible paths from G1 to G2, G3, G4 and G5:
1. G1 ------ G2
2. G1 ------ G3
3. G1 ------ G4
4. G1 ------ G5

Possible paths from G2 to G1, G3, G4 and G5:
1. G2 ------ G1
2. G2 ------ G3
3. G2 ------ G4
4. G2 ------ G5

Possible paths from G3 to G4 and G5:
1. G3 ------ G4
2. G3 ------ G5

Possible paths from G4 to G3 and G5:
1. G4 ------ G3
2. G4 ------ G5

Possible paths from G5 to G3 and G4:
1. G5 ------ G3
2. G5 ------ G4

APPENDIX 4

A table showing RED gateway parameters and their meaning

Parameter | Meaning of the parameter
----------|-------------------------------------------
Minth     | Minimum threshold
Maxth     | Maximum threshold
qw        | Weight factor for averaging
Maxp      | Maximum packet marking probability
W(k)      | Window size at slot k
N(k)      | The number of TCP connections at slot k
          | Propagation delay of TCP connections

A table showing the model parameters and their meaning/value

Parameter | Meaning/value of the parameter
----------|-----------------------------------------------------
old_q     | 0.002
buf_s     | buffer size
warn_line | half of buffer size
q         | actual queue size
diff      | surplus of actual queue size over average queue size
ratio     | ratio of surplus to buffer size
R         | the integer part of (10 * ratio)
nwq       | new queue weight

APPENDIX 5

clear
temp = 0;        % Initial value of average queue
temp1 = 0;
buffer = 500;    % Total number of packets that can be accommodated by the gateway
warnline = 250;  % A new threshold set to half of the buffer size
minthres = 100;  % Minimum threshold
maxthres = 450;  % Maximum threshold
wq = 0.002;      % Queue weight
maxp = 0.02;     % Maximum drop probability
% This section generates 1000 packets from the various sources connected to
% gateway 1, with their randomly generated sizes and destination addresses
for i = 1:1000
    Packet(i,1) = rand*100;
    q(i,1) = temp1 + Packet(i,1);
    average = temp;
    queue = q(i,1);
    if (queue > average), diff = (queue - average); else, diff = 0; end
    ratio = diff/buffer;
    maratio = 10*ratio;
    mratio = fix(maratio);
    % This section dynamically adjusts the queue weight
    if (q(i,1) < warnline)
        newq = wq;
    else
        switch mratio
            case 0, newq = wq;
            case 1, newq = wq*4;
            case 2, newq = wq*8;
            case 3, newq = wq*12;
            case 4, newq = wq*16;
            otherwise newq = wq*20;
        end
    end
    if (q(i,1) > maxthres)
        q(i,1) = rand*1;
        avg(i,1) = (1-newq)*temp + newq*q(i,1);
    else
        avg(i,1) = (1-newq)*temp + newq*q(i,1);
    end
    testprob = avg(i,1)/buffer;
    rate = 10*testprob;

    % This section dynamically adjusts the dropping probability
    if (avg(i,1) <= minthres)
        nmaxp = maxp;
    else
        switch rate
            case 4, nmaxp = maxp;
            case 5, nmaxp = 2*maxp;
            case 6, nmaxp = 4*maxp;
            case 7, nmaxp = 6*maxp;
            case 8, nmaxp = 8*maxp;
            otherwise nmaxp = 10*maxp;
        end
    end
    prob(i,1) = nmaxp*(avg(i,1) - minthres)/(maxthres - minthres);
    if (prob(i,1) < 0)
        prob(i,1) = 0;
    end
    temp = avg(i,1);
    temp1 = q(i,1);
    t(i,1) = i*0.01;
    % To randomly generate the destination addresses
    Dest_Add = fix((rand * 10)* 0.5);
    if (Dest_Add <= 1.5)
        Dest_Add = 3;
    elseif ((Dest_Add > 1.5) && (Dest_Add <= 3.0))
        Dest_Add = 4;
    else
        Dest_Add = 5;
    end
    % To assign the destination address into column 2
    Packet(i,2) = Dest_Add;
end
% This section plots the graphs for gateway 1
figure(1)
plot(t,q,'k-',t,avg,'k^')
xlabel('time in sec')
ylabel('number of packets')
legend('queue','average queue')
grid
figure(2)
plot(avg,prob,'k+')
xlabel('average queue')
ylabel('probability')
legend('probability of dropping packets')
grid
figure(3)
plot(t,prob,'k-')

xlabel('time in sec')
ylabel('probability')
legend('probability of dropping packets against time')
grid
routing(Packet);   % This line calls the routing function

APPENDIX 6

function routing(Packet)
Gate_1 = [2; 3; 4; 5];   % possible paths from gateway 1
% All these variables are declared global
global t avg q prob k t2 avg2 q2 prob2 k2 t4 avg4 q4 prob4
global k4 t5 avg5 q5 prob5 k5
k = 0; k2 = 0; k4 = 0; k5 = 0;
% Initial values of the average queue
temp = 0; temp2 = 0; temp4 = 0; temp5 = 0;
temp1 = 0; temp12 = 0; temp14 = 0; temp15 = 0;
% Loop to route each of the packets
for j = 1:1000
    Path(1) = 0; Path(2) = 0;
    % Re-draw to avoid picking the same path twice
    while (Path(1) == Path(2))
        for i = 1:2
            n = rand * 10;   % MATLAB random function
            if (n <= 2.5)
                m = 1;
            elseif ((n > 2.5) && (n <= 5))
                m = 2;
            elseif ((n > 5) && (n <= 7.5))
                m = 3;
            else
                m = 4;
            end
            Path(i) = Gate_1(m);
        end
    end
    Path1 = Path(1); Path2 = Path(2);
    sink = Packet(j,2);
    dest(j) = Packet(j,1);
    % This section performs the routing decision
    if ((Path1==3) && (Path2==4))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==4) && (Path2==3))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==3) && (Path2==5))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==5) && (Path2==3))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==4) && (Path2==5))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        elseif (sink==4)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==5) && (Path2==4))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        elseif (sink==4)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==2) && (Path2==3))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate2(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==3) && (Path2==2))
        if (sink==3)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate2(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==2) && (Path2==4))
        if (sink==4)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate2(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==4) && (Path2==2))
        if (sink==4)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate2(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==2) && (Path2==5))
        if (sink==5)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate2(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    elseif ((Path1==5) && (Path2==2))
        if (sink==5)
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        else
            [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate2(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
            continue
        end
    end
end
% This section plots the simulation results for each gateway
figure(4)
plot(t,q,'k-',t,avg,'k^')
xlabel('time in sec')
ylabel('number of packets')
legend('queue','average queue')
grid
figure(5)
plot(avg,prob,'k+')
xlabel('average queue')
ylabel('probability')
legend('probability of dropping packets')
grid
figure(6)
plot(t,prob,'k-')
xlabel('time in sec')
ylabel('probability')
legend('probability of dropping packets against time')
grid
figure(7)
plot(t2,q2,'k-',t2,avg2,'k^')
xlabel('time in sec')
ylabel('number of packets')
legend('queue','average queue')
grid
figure(8)
plot(avg2,prob2,'k+')
xlabel('average queue')
ylabel('probability')
legend('probability of dropping packets')
grid
figure(9)
plot(t2,prob2,'k-')
xlabel('time in sec')
ylabel('probability')
legend('probability of dropping packets against time')
grid
figure(10)
plot(t4,q4,'k-',t4,avg4,'k^')
xlabel('time in sec')
ylabel('number of packets')
legend('queue','average queue')
grid
figure(11)
plot(avg4,prob4,'k+')
xlabel('average queue')
ylabel('probability')
legend('probability of dropping packets')
grid
figure(12)
plot(t4,prob4,'k-')
xlabel('time in sec')
ylabel('probability')
legend('probability of dropping packets against time')
grid
figure(13)
plot(t5,q5,'k-',t5,avg5,'k^')
xlabel('time in sec')
ylabel('number of packets')
legend('queue','average queue')
grid
figure(14)
plot(avg5,prob5,'k+')
xlabel('average queue')
ylabel('probability')
legend('probability of dropping packets')
grid
figure(15)
plot(t5,prob5,'k-')
xlabel('time in sec')
ylabel('probability')
legend('probability of dropping packets against time')
grid
end

% The routing function of gateway 2
function [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate2(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5)
% The DRED function used by gateway 2
[temp2, temp12, k2] = DRED2(dest, temp2, temp12, k2, j);
Gate_2 = [1; 3; 4; 5];   % possible paths of gateway 2
Path(1) = 1; Path(2) = 1;
while ((((Path(1)==1) || (Path(2)==1))) && (Path(1)==Path(2)))
    for i = 1:2
        n = rand * 10;
        if (n <= 2.5)
            m = 1;
        elseif ((n > 2.5) && (n <= 5))
            m = 2;
        elseif ((n > 5) && (n <= 7.5))
            m = 3;
        else
            m = 4;
        end
        Path(i) = Gate_2(m);
    end
end
Path1 = Path(1); Path2 = Path(2);
if ((Path1==3) || (Path2==3))
    [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
elseif ((Path1==4) || (Path2==4))
    [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
elseif ((Path1==5) || (Path2==5))
    [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
end
end

% The routing function of gateway 3
function [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5)
% The DRED function for gateway 3
[temp, temp1, k] = DRED(dest, temp, temp1, k, j);
Gate_3 = [4; 5];   % possible paths from gateway 3
if (sink ~= 3)
    n = rand * 10;
    if (n <= 5), m = 1; else, m = 2; end
    Avail_Path3 = Gate_3(m);
    if (Avail_Path3 == 4)
        [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
    else
        [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
    end
end
end

% The routing function of gateway 4
function [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5)
% The DRED function for gateway 4
[temp4, temp14, k4] = DRED4(dest, temp4, temp14, k4, j);
Gate_4 = [3; 5];   % possible paths from gateway 4
if (sink ~= 4)
    n = rand * 10;
    if (n <= 5), m = 1; else, m = 2; end
    Avail_Path4 = Gate_4(m);
    if (Avail_Path4 == 3)
        [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
    else
        [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
    end
end
end

% The routing function of gateway 5
function [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate5(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5)
% The DRED function for gateway 5
[temp5, temp15, k5] = DRED5(dest, temp5, temp15, k5, j);
Gate_5 = [3; 4];   % possible paths from gateway 5
if (sink ~= 5)
    n = rand * 10;
    if (n <= 5), m = 1; else, m = 2; end
    Avail_Path5 = Gate_5(m);
    if (Avail_Path5 == 3)
        [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate3(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
    else
        [temp,temp1,k,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5] = Gate4(sink,dest,temp,temp1,k,j,temp2,temp12,k2,temp4,temp14,k4,temp5,temp15,k5);
    end
end
end

APPENDIX 7

function [temp, temp1, k] = DRED(Packet, temp, temp1, k, j)
global t avg q prob
buffer = 500;    % Total number of packets that can be accommodated by the gateway
warnline = 250;  % A new threshold set to half of the buffer size
minthres = 100;  % Minimum threshold
maxthres = 450;  % Maximum threshold
wq = 0.002;      % Queue weight
maxp = 0.02;     % Maximum drop probability
k = k + 1;
q(k) = temp1 + Packet(j);
average = temp;
queue = q(k);
if (queue > average), diff = (queue - average); else, diff = 0; end
ratio = diff/buffer;
maratio = 10*ratio;
mratio = fix(maratio);
if (q(k) < warnline)
    newq = wq;
else
    switch mratio
        case 0, newq = wq;
        case 1, newq = wq*4;
        case 2, newq = wq*8;
        case 3, newq = wq*12;
        case 4, newq = wq*16;
        otherwise newq = wq*20;
    end
end
if (q(k) > maxthres)
    q(k) = rand*1;
    avg(k) = (1-newq)*temp + newq*q(k);
else
    avg(k) = (1-newq)*temp + newq*q(k);
end
testprob = avg(k)/buffer;
rate = testprob*10;
if (avg(k) <= minthres)
    nmaxp = maxp;
else
    switch rate
        case 4, nmaxp = maxp;
        case 5, nmaxp = 2*maxp;
        case 6, nmaxp = 4*maxp;
        case 7, nmaxp = 6*maxp;
        case 8, nmaxp = 8*maxp;
        otherwise nmaxp = 10*maxp;
    end

end
end
prob(k) = nmaxp*(avg(k) - minthres)/(maxthres - minthres);
if (prob(k) < 0), prob(k) = 0; end
temp = avg(k);
temp1 = q(k);
t(k) = k*0.01;
end

function [temp2, temp12, k2] = DRED2(Packet, temp2, temp12, k2, j)
global t2 avg2 q2 prob2
buffer = 500;    % Total number of packets that can be accommodated by the gateway
warnline = 250;  % A new threshold set to half of the buffer size
minthres = 100;  % Minimum threshold
maxthres = 450;  % Maximum threshold
wq = 0.002;      % Queue weight
maxp = 0.02;     % Maximum drop probability
k2 = k2 + 1;
q2(k2) = temp12 + Packet(j);
average = temp2;
queue = q2(k2);
if (queue > average), diff = (queue - average); else, diff = 0; end
ratio = diff/buffer;
maratio = 10*ratio;
mratio = fix(maratio);
if (q2(k2) < warnline)
    newq = wq;
else
    switch mratio
        case 0, newq = wq;
        case 1, newq = wq*4;
        case 2, newq = wq*8;
        case 3, newq = wq*12;
        case 4, newq = wq*16;
        otherwise newq = wq*20;
    end
end
if (q2(k2) > maxthres)
    q2(k2) = rand*1;
    avg2(k2) = (1-newq)*temp2 + newq*q2(k2);
else
    avg2(k2) = (1-newq)*temp2 + newq*q2(k2);
end
testprob = avg2(k2)/buffer;
rate = testprob*10;
if (avg2(k2) <= minthres)
    nmaxp = maxp;

else
    switch rate
        case 4, nmaxp = maxp;
        case 5, nmaxp = 2*maxp;
        case 6, nmaxp = 4*maxp;
        case 7, nmaxp = 6*maxp;
        case 8, nmaxp = 8*maxp;
        otherwise nmaxp = 10*maxp;
    end
end
prob2(k2) = nmaxp*(avg2(k2) - minthres)/(maxthres - minthres);
if (prob2(k2) < 0), prob2(k2) = 0; end
temp2 = avg2(k2);
temp12 = q2(k2);
t2(k2) = k2*0.01;
end

function [temp4, temp14, k4] = DRED4(Packet, temp4, temp14, k4, j)
global t4 avg4 q4 prob4
buffer = 500;    % Total number of packets that can be accommodated by the gateway
warnline = 250;  % A new threshold set to half of the buffer size
minthres = 100;  % Minimum threshold
maxthres = 450;  % Maximum threshold
wq = 0.002;      % Queue weight
maxp = 0.02;     % Maximum drop probability
k4 = k4 + 1;
q4(k4) = temp14 + Packet(j);
average = temp4;
queue = q4(k4);
if (queue > average), diff = (queue - average); else, diff = 0; end
ratio = diff/buffer;
maratio = 10*ratio;
mratio = fix(maratio);
if (q4(k4) < warnline)
    newq = wq;
else
    switch mratio
        case 0, newq = wq;
        case 1, newq = wq*4;
        case 2, newq = wq*8;
        case 3, newq = wq*12;
        case 4, newq = wq*16;
        otherwise newq = wq*20;
    end
end
if (q4(k4) > maxthres)
    q4(k4) = rand*1;

80 avg4(k4) = (1-newq)*temp4 + newq*q4(k4); else avg4(k4) = (1-newq)*temp4 + newq*q4(k4); end testprob = avg4(k4)/buffer; rate = testprob*10; if (avg4(k4)<= minthres) nmaxp = maxp; else switch rate case 4, nmaxp = maxp; case 5, nmaxp = 2*maxp; case 6, nmaxp = 4*maxp; case 7, nmaxp = 6*maxp; case 8, nmaxp = 8*maxp; otherwise nmaxp = 10*maxp; end end prob4(k4)=nmaxp*(avg4(k4)-minthres)/(maxthresminthres); if(prob4(k4) < 0), prob4(k4) = 0; end temp4 = avg4(k4); temp14 = q4(k4); t4(k4)=k4*0.01; end function[temp5,temp15,k5] = DRED5(Packet,temp5,temp15, k5,j) global t5 avg5 q5 prob5 buffer = 500; % Total number of packet that can be %accommodated by the gateway warnline = 250; % A new threshold set to half of the %buffer size minthres = 100; % Minimum threshold maxthres = 450; % Maximum threshold wq = 0.002; % Queue weight maxp=0.02; % Maximum drop probability k5 = k5 + 1; q5(k5) = temp15 + Packet(j); average = temp5; queue = q5(k5); if(queue>average),diff=(queue-average);else,diff=0; end ratio = diff/buffer; maratio = 10*ratio; mratio = fix(maratio); if (q5(k5) < warnline) newq = wq; else switch mratio case 0, newq = wq; case 1, newq = wq*4;

81 case 2, newq = wq*8; case 3, newq = wq*12; case 4, newq = wq*16; otherwise newq = wq*20; end end if(q5(k5) > maxthres) q5(k5) = rand*1; avg5(k5) = (1-newq)*temp5 + newq*q5(k5); else avg5(k5) = (1-newq)*temp5 + newq*q5(k5); end testprob = avg5(k5)/buffer; rate = testprob*10; if (avg5(k5)<= minthres) nmaxp = maxp; else switch rate case 4, nmaxp = maxp; case 5, nmaxp = 2*maxp; case 6, nmaxp = 4*maxp; case 7, nmaxp = 6*maxp; case 8, nmaxp = 8*maxp; otherwise nmaxp = 10*maxp; end end prob5(k5)=nmaxp*(avg5(k5)-minthres)/(maxthresminthres); if(prob5(k5) < 0), prob5(k5) = 0; end temp5 = avg5(k5); temp15 = q5(k5); t5(k5)= k5 *0.01; end
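The adaptive queue-weight step shared by the DRED functions above can also be sketched in Python. The function name `adaptive_queue_weight` is illustrative; the step values come directly from the switch on `mratio` in the listings.

```python
def adaptive_queue_weight(queue, average, wq=0.002, buffer=500, warnline=250):
    """Adaptive EWMA weight used by the DRED gateway functions.

    When the instantaneous queue exceeds the current average, the base
    weight wq is scaled up in steps proportional to how large the excess
    is relative to the buffer, so the average tracks sudden build-ups
    faster.  Below the warning line the base weight is always used.
    """
    diff = queue - average if queue > average else 0
    mratio = int(10 * diff / buffer)  # truncated toward zero, as fix() does in MATLAB
    if queue < warnline:
        return wq
    # Step table taken from the switch in the listings (cases 0..4, otherwise 20)
    scale = {0: 1, 1: 4, 2: 8, 3: 12, 4: 16}.get(mratio, 20)
    return wq * scale
```

For instance, a queue of 300 packets against an average of 250 gives mratio 1, so the weight quadruples to 0.008; a near-full queue of 490 against an average of 100 gives mratio 7 and the maximum scaling of 20.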

APPENDIX 8

[Plot: probability of dropping packets; y-axis "probability" (0 to 0.09), x-axis "average queue" (0 to 300)]

Figure 4a: The graph of dropping probability against average queue for gateway 3

[Plot: probability of dropping packets; y-axis "probability" (0 to 0.09), x-axis "average queue" (0 to 300)]

Figure 4b: The graph of dropping probability against average queue for gateway 2

[Plot: probability of dropping packets; y-axis "probability" (0 to 0.09), x-axis "average queue" (0 to 250)]

Figure 4c: The graph of dropping probability against average queue for gateway 4

[Plot: probability of dropping packets; y-axis "probability" (0 to 0.09), x-axis "average queue" (0 to 250)]

Figure 4d: The graph of dropping probability against average queue for gateway 5
