OpenDaylight Performance Stress Test Report v1.3: Beryllium vs Boron

Intracom Telecom SDN/NFV Lab

www.intracom-telecom.com | intracom-telecom-sdn.github.com | [email protected]

Intracom Telecom | SDN/NFV Lab © 2017

Executive Summary

In this report we present comparative stress test results for the OpenDaylight Beryllium SR0 controller against the OpenDaylight Boron SR0 controller. For these tests the NSTAT (Network Stress–Test Automation Toolkit) [1] testing platform and its external testing components (Multinet [2], OFTraf [3], MT–Cbench [4], nstat-nb-emulator [5]) have been used. The test cases presented in this report are identical to those presented in our previous performance report v1.2, so the interested reader is referred to [6] for comprehensive descriptions. As opposed to v1.2, in this report all test components have been executed within Docker containers [7] instead of KVM instances.

Contents

1 NSTAT Toolkit
2 Experimental setup
3 Switch scalability stress tests
  3.1 Idle Multinet switches
    3.1.1 Test configuration
    3.1.2 Results
  3.2 Idle MT–Cbench switches
    3.2.1 Test configuration
    3.2.2 Results
  3.3 Active Multinet switches
    3.3.1 Test configuration
    3.3.2 Results
  3.4 Active MT–Cbench switches
    3.4.1 Test configuration, ”RPC” mode
    3.4.2 Test configuration, ”DataStore” mode
    3.4.3 Results
4 Stability tests
  4.1 Idle Multinet switches
    4.1.1 Test configuration
    4.1.2 Results
  4.2 Active MT–Cbench switches
    4.2.1 Test configuration, ”DataStore” mode
    4.2.2 Results
    4.2.3 Test configuration, ”RPC” mode
    4.2.4 Results
5 Flow scalability tests
  5.1 Test configuration
  5.2 Results
    5.2.1 Add controller time/rate
    5.2.2 End–to–end flows installation controller time/rate
6 Contributors
References

1. NSTAT Toolkit

The NSTAT toolkit follows a modular and distributed architecture. By modular we mean that the core application works as an orchestrator that coordinates the testing process: it controls the lifecycle of the different testing components, as well as the coordination and lifecycle management of the testing subject, the OpenDaylight controller. By distributed we mean that each component controlled by NSTAT can run either on the same node or on different nodes, which can be either physical or virtual machines. In the latest implementation of NSTAT we introduced the use of Docker containers [7]; with containers we can isolate the separate components and their use of resources. In older versions of NSTAT we used CPU affinity [9] to achieve this resource isolation. The NSTAT toolkit architecture is depicted in Fig. 1.

Fig. 1: NSTAT architecture. The NSTAT node handles node lifecycle management, orchestration, reporting, test dimensioning, sampling & profiling, and monitoring. It coordinates the controller node, the southbound generator nodes (SCGEN-SB nodes 1..N, exchanging OpenFlow traffic with the controller) and the northbound generator nodes (SCGEN–NB nodes 1..N, exchanging config traffic).

The provisioning of the Docker containers, along with their interconnection, is achieved with 1) the docker–compose provisioning tool and 2) the pre–built Docker images available on Docker Hub [8]. All containers were running on the same physical server.
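To make this setup concrete, the following is a minimal sketch of this kind of container provisioning using the Docker SDK for Python, in place of docker-compose. The image names, container names and CPU placement below are illustrative assumptions, not the exact NSTAT configuration.

```python
# Minimal sketch of NSTAT-style container provisioning with the Docker SDK
# for Python (docker-py). Illustrative only: the report's actual setup uses
# docker-compose with pre-built images from Docker Hub [8]; the image names
# and CPU placement here are assumptions.
import docker

client = docker.from_env()

# A dedicated bridge network lets the test components reach each other by name.
client.networks.create("nstat-testbed", driver="bridge")

# Pin each component to its own CPU set, mimicking the resource isolation
# that older NSTAT versions achieved with CPU affinity [9].
components = {
    "controller": {"image": "intracom/nstat-sdn-controllers", "cpus": "0-7"},
    "multinet":   {"image": "intracom/multinet",              "cpus": "8-15"},
    "nstat":      {"image": "intracom/nstat",                 "cpus": "16-17"},
}

for name, spec in components.items():
    client.containers.run(
        spec["image"],
        name=name,
        detach=True,                  # run in the background
        network="nstat-testbed",
        cpuset_cpus=spec["cpus"],     # restrict the container to these cores
        tty=True,
    )
```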


Fig. 2: Representation of a switch scalability test with idle Multinet and MT–Cbench switches. (a) Multinet switches sit idle during main operation; ECHO and MULTIPART messages dominate the traffic exchanged between the controller and the switches. (b) MT–Cbench switches sit idle during main operation, without sending or receiving any kind of messages, apart from a few messages during initialization.

Table 1: Stress tests experimental setup.

Host operating system:      CentOS 7, kernel 3.10.0
Nodes type:                 Docker containers (Docker version 1.12.1, build 23cf638)
Container OS:               Ubuntu 14.04
Physical server platform:   Dell R720
Processor model:            Intel Xeon CPU E5–2640 v2 @ 2.00GHz
Total system CPUs:          32
CPUs configuration:         2 sockets × 8 cores/socket × 2 HW-threads/core @ 2.00GHz
Main memory:                256 GB, 1.6 GHz RDIMM
SDN controllers under test: OpenDaylight Beryllium (SR1), OpenDaylight Boron (SR0)
Controller JVM options:     -Xmx8G, -Xms8G, -XX:+UseG1GC, -XX:MaxPermSize=8G
Multinet OVS version:       2.3.0

2. Experimental setup

Details of the experimental setup are presented in Table 1.

3. Switch scalability stress tests

In switch scalability tests we test the controller against OpenFlow switch networks of different scales. To create these networks we use either MT–Cbench [4] or Multinet [2]. MT–Cbench generates OpenFlow traffic emulating a “fake” OpenFlow v1.0 switch topology, while Multinet utilizes Mininet [10] and Open vSwitch v2.3.0 [11] to emulate distributed OpenFlow v1.3 switch topologies.

In our stress tests we have experimented with topology switches operating in two modes, idle and active: switches in idle mode do not initiate any traffic to the controller, but rather respond to messages sent by it, whereas switches in active mode consistently initiate traffic to the controller, in the form of PACKET_IN messages. In most stress tests, MT–Cbench and Multinet switches operate in both active and idle modes. For more details regarding the test setup, the reader should refer to the NSTAT: OpenDaylight Performance Stress Tests Report v1.2 [6].

3.1 Idle Multinet switches

3.1.1 Test configuration

• topology types: ”Linear”, ”Disconnected”
• topology size per worker node: 100, 200, 300, 400, 500, 600
• number of worker nodes: 16
• group size: 1, 5
• group delay: 5000ms
• persistence: enabled

In this case, the total topology size is equal to 1600, 3200, 4800, 6400, 8000, 9600 switches. Group size and group delay control a staged boot-up of the topology, sketched below.
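A minimal sketch of these boot-up semantics follows; the switch objects and their start() method are hypothetical stand-ins for Multinet internals, and only the pacing logic is of interest.

```python
# Sketch of the staged switch boot-up governed by "group size" and
# "group delay". The `switches` list and `switch.start()` are hypothetical
# stand-ins for Multinet internals.
import time

def boot_in_groups(switches, group_size, group_delay_ms):
    """Connect switches to the controller in groups of `group_size`,
    waiting `group_delay_ms` milliseconds between successive groups."""
    for i in range(0, len(switches), group_size):
        for switch in switches[i:i + group_size]:
            switch.start()  # hypothetical: initiate the OpenFlow handshake
        time.sleep(group_delay_ms / 1000.0)

# With group size 1 and group delay 5000 ms (as above), a 1600-switch
# topology boots one switch every 5 seconds.
```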

3.1.2 Results

Results from this series of tests are presented in Figs. 3(a) and 3(b) for a ”Linear” topology and Figs. 4(a) and 4(b) for a ”Disconnected” topology, respectively.
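The boot-up times reported in these figures measure how long it takes for all switches to be discovered in the controller's operational data store. The sketch below illustrates such a measurement by polling OpenDaylight's RESTCONF operational topology; the URL and default credentials are standard OpenDaylight, while the polling loop itself is our illustration rather than NSTAT's exact implementation.

```python
# Sketch: measure switch boot-up time by polling the operational datastore
# until the expected number of switches is discovered, or a deadline passes.
# The RESTCONF URL and default credentials are OpenDaylight's defaults.
import time
import requests

TOPO_URL = ("http://localhost:8181/restconf/operational/"
            "network-topology:network-topology")
AUTH = ("admin", "admin")

def measure_bootup_time(expected_switches, deadline_s=3600, poll_s=5):
    start = time.time()
    while time.time() - start < deadline_s:
        resp = requests.get(TOPO_URL, auth=AUTH)
        if resp.status_code == 200:
            topologies = resp.json()["network-topology"]["topology"]
            discovered = sum(len(t.get("node", [])) for t in topologies)
            if discovered >= expected_switches:
                return time.time() - start   # boot-up time in seconds
        time.sleep(poll_s)
    return -1   # discovery failed: report -1, as in Figs. 3 and 4
```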


Fig. 3: Switch scalability stress test results for idle Multinet switches. Network topology size scales from 1600 → 9600 switches. Topology type: ”Linear”. (a) Group size: 1; (b) group size: 5. Boot-up time is forced to -1 when switch discovery fails.

Fig. 4: Switch scalability stress test results for idle Multinet switches. Network size scales from 1600 → 9600 switches. Topology type: ”Disconnected”. (a) Group size: 1; (b) group size: 5. Boot-up time is forced to -1 when switch discovery fails.

3.2 Idle MT–Cbench switches

3.2.1 Test configuration

• controller: ”RPC” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 500, 1000, 2000, 4000, 8000, 16000 ms
• persistence: enabled

In this case switch topology size is equal to: 50, 100, 200, 400, 800, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000.

3.2.2 Results

Results of this test are presented in Figs. 5(a), 5(b), 6(a) and 6(b).

3.3 Active Multinet switches

3.3.1 Test configuration

• controller: with the l2switch plugin installed, configured to respond with mac–to–mac FLOW_MODs to PACKET_IN messages with ARP payload [12]
• topology size per worker node: 12, 25, 50, 100, 200, 300
• number of worker nodes: 16
• group size: 1
• group delay: 3000ms
• topology type: ”Linear”
• hosts per switch: 2
• traffic generation interval: 60000ms
• PACKET_IN transmission delay: 500ms
• persistence: disabled

Switch topology size scales as follows: 192, 400, 800, 1600, 3200, 4800.

Fig. 5: Switch scalability stress test results with idle MT–Cbench switches. Topology size scales from 50 → 5000 switches. (a) Inter-thread creation (group) delay: 0.5s; (b) group delay: 1s.

Fig. 6: Switch scalability stress test results with idle MT–Cbench switches. Topology size scales from 50 → 5000 switches. (a) Inter-thread creation (group) delay: 8s; (b) group delay: 16s.

3.3.2 Results

Results of this test are presented in Fig. 8.

3.4 Active MT–Cbench switches

3.4.1 Test configuration, ”RPC” mode

• controller: ”RPC” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 40, 60, 80, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 15s
• traffic generation interval: 20s
• persistence: enabled

In this case, the total topology size is equal to 50, 100, 200, 400, 800, 1000, 2000, 3000, 4000, 5000.

3.4.2 Test configuration, ”DataStore” mode

• controller: ”DataStore” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 40, 60, 80, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 15s
• traffic generation interval: 20s
• persistence: enabled

In this case, the total topology size is equal to 50, 100, 200, 400, 800, 1000, 2000, 3000, 4000, 5000.

3.4.3 Results

Results for the test configurations defined in sections 3.4.1 and 3.4.2 are presented in Figs. 9(a) and 9(b) respectively.

Fig. 7: Representation of switch scalability stress tests with active (a) Multinet and (b) MT–Cbench switches. (a) The switches send OF1.3 PACKET_IN messages to the controller, which replies with OF1.3 FLOW_MODs. (b) The switches send artificial OF1.0 PACKET_IN messages to the controller, which replies with likewise artificial OF1.0 FLOW_MOD messages.

Fig. 8: Switch scalability stress test results with active Multinet switches. Comparison analysis of controller throughput variation [Mbytes/s] vs number of network switches N. Topology size scales from 192 → 4800 switches.

4. Stability tests

Stability tests explore how controller throughput behaves over a large time window with a fixed topology connected to it, Fig. 10(a), 10(b). The goal is to detect performance fluctuations over time. The controller accepts a standard rate of incoming traffic and its response throughput is sampled periodically; NSTAT reports these samples as a time series.


4.1 Idle Multinet switches

The purpose of this test is to investigate the stability of the controller in serving the standard traffic requirements of a large-scale Multinet topology of idle switches. During main test execution Multinet switches respond to the ECHO and MULTIPART messages sent by the controller at regular intervals; these message types dominate the total traffic volume during execution. NSTAT uses oftraf [3] to measure the outgoing traffic of the controller. The metrics presented for this case are the OpenFlow packets and bytes collected by NSTAT every second, Fig. 15. In order to push the controller performance to its limits, the controller is executed on bare metal and Multinet on a set of interconnected virtual machines.
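As an illustration of this sampling process, the sketch below polls a running oftraf instance once per interval and converts counter deltas into per-second rates. The /get_of_counts endpoint and the response field names are assumptions based on oftraf's description [3], not a verified API.

```python
# Sketch: sample the controller's outgoing OpenFlow traffic once per second
# via oftraf's REST interface and keep a time series, as NSTAT does during
# stability tests. Endpoint name and response fields are assumptions.
import time
import requests

OFTRAF_URL = "http://localhost:5555/get_of_counts"  # assumed endpoint

def sample_outgoing_traffic(num_samples, period_s=1.0):
    series = []
    prev = requests.get(OFTRAF_URL).json()
    for _ in range(num_samples):
        time.sleep(period_s)
        cur = requests.get(OFTRAF_URL).json()
        # Report per-interval deltas: outgoing OF packets and bytes per second.
        series.append({
            "packets_per_s": (cur["OF_out_packets"] - prev["OF_out_packets"]) / period_s,
            "bytes_per_s":   (cur["OF_out_bytes"] - prev["OF_out_bytes"]) / period_s,
        })
        prev = cur
    return series
```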


4.1.1 Test configuration

• topology size per worker node: 200
• number of worker nodes: 16
• group size: 1
• group delay: 2000ms
• topology type: ”Linear”
• hosts per switch: 1
• period between samples: 10s
• number of samples: 4320
• persistence: enabled
• total running time: 12h

In this case switch topology size is equal to 3200.

4.1.2 Results

The results of this test are presented in Fig. 15.

Fig. 9: Switch scalability stress test results with active MT–Cbench switches. Comparison analysis of controller throughput variation [responses/s] vs number of network switches N, with OpenDaylight running in both ”RPC” and ”DataStore” modes. (a) OpenDaylight running in RPC mode; (b) OpenDaylight running in DataStore mode.

Fig. 10: Representation of switch stability stress tests with (a) idle Multinet and (b) active MT–Cbench switches. The controller accepts a standard rate of incoming traffic and its response throughput is sampled periodically.


4.2 Active MT–Cbench switches

In this series of experiments NSTAT uses a fixed topology of active MT–Cbench switches to generate traffic over a large time window. The switches send artificial OF1.0 PACKET_IN messages at a fixed rate to the controller, which replies with likewise artificial OF1.0 FLOW_MOD messages; these message types dominate the traffic exchanged between the switches and the controller. We evaluate the controller in both ”RPC” and ”DataStore” modes. In order to push the controller performance to its limits, all test nodes (controller, MT–Cbench) were executed on bare metal. To isolate the nodes from each other, the CPU shares feature of NSTAT was used.
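NSTAT's CPU shares feature itself is not reproduced here; as a rough sketch of this style of isolation, the snippet below pins the controller and generator processes to disjoint core sets using Linux CPU affinity, the mechanism older NSTAT versions used for the same purpose [9]. The PIDs and the core split are placeholders.

```python
# Rough illustration of per-node CPU isolation on the 32-CPU server of
# Table 1. This pins already-running processes to disjoint core sets with
# Linux CPU affinity (os.sched_setaffinity); it is a stand-in for NSTAT's
# actual "CPU shares" feature. PIDs below are hypothetical placeholders.
import os

CONTROLLER_CPUS = set(range(0, 16))   # cores reserved for the controller
MTCBENCH_CPUS = set(range(16, 32))    # cores reserved for MT-Cbench

def isolate(controller_pid, mtcbench_pid):
    os.sched_setaffinity(controller_pid, CONTROLLER_CPUS)
    os.sched_setaffinity(mtcbench_pid, MTCBENCH_CPUS)
```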


Fig. 11: Controller stability stress test with active MT–Cbench switches. Throughput [responses/s] comparison analysis between the OpenDaylight Beryllium (a) and Boron (b) versions. OpenDaylight running in ”DataStore” mode.

Fig. 12: Controller stability stress test with active MT–Cbench switches. Used memory [Gbytes] comparison analysis between the OpenDaylight Beryllium (a) and Boron (b) versions. OpenDaylight running in ”DataStore” mode.

4.2.1 Test configuration, ”DataStore” mode

• controller: ”DataStore” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 10
• topology size per MT–Cbench thread: 50 switches
• group delay: 8s
• number of samples: 4320
• period between samples: 10s
• persistence: enabled
• total running time: 12h

In this case, the total topology size is equal to 500 switches.

4.2.2 Results

The results of this test are presented in Figs. 11 and 12.

4.2.3 Test configuration, ”RPC” mode

• controller: ”RPC” mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 10
• topology size per MT–Cbench thread: 50 switches
• group delay: 8s
• number of samples: 4320
• period between samples: 10s
• persistence: enabled
• total running time: 12h

In this case, the total topology size is equal to 500 switches.

4.2.4 Results

The results of this test are presented in Figs. 13 and 14.

Fig. 13: Controller stability stress test with active MT–Cbench switches. Throughput [responses/s] comparison analysis between the OpenDaylight Beryllium (a) and Boron (b) versions. OpenDaylight running in ”RPC” mode.

Fig. 14: Controller stability stress test with active MT–Cbench switches. Used memory [Gbytes] comparison analysis between the OpenDaylight Beryllium (a) and Boron (b) versions. OpenDaylight running in ”RPC” mode.


5. Flow scalability tests

With flow scalability stress tests we investigate both the capacity and the timing aspects of flow installation via the controller NB (RESTCONF) interface, Fig. 16. This test uses the NorthBound flow emulator [5] to create flows in a scalable and configurable manner (number of flows, delay between flows, number of flow writer threads). The flows are written to the controller configuration data store via its NB interface, and are then forwarded to an underlying Multinet topology as flow modifications, where they are distributed to the switches in a balanced fashion. The test verifies the success or failure of flow operations via the controller’s operational data store. An experiment is considered successful when all flows have been installed on the switches and have been reflected in the operational data store of the controller. If not all of them have become visible in the data store within a certain deadline after the last update, the experiment is considered failed.


Fig. 15: Controller 12–hour stability stress test with idle Multinet switches. (a) Comparison analysis of outgoing packets/s between the OpenDaylight Beryllium and Boron releases; (b) comparison analysis of outgoing kbytes/s between the OpenDaylight Beryllium and Boron releases.

The metrics measured in this test are:


• Add controller time (t_add): the time needed for all requests to be sent and for their responses to be received (as in [9]).
• Add controller rate (r_add): r_add = N / t_add, where N is the aggregate number of flows to be installed by the worker threads.
• End–to–end installation time (t_e2e): the time from the first flow installation request until all flows have been installed and become visible in the operational data store.
• End–to–end installation rate (r_e2e): r_e2e = N / t_e2e.

As a worked example from Fig. 18, adding N = 100000 flows to a 15-switch topology with a measured t_add of 65.10 s yields r_add = 100000 / 65.10 ≈ 1536 flows/s, matching (up to rounding) the corresponding rate point in Fig. 18(b).

In this test, Multinet switches operate in idle mode, without initiating any traffic apart from the MULTIPART and ECHO messages with which they reply to the controller’s requests at regular intervals.

Fig. 16: Representation of the flow scalability stress test. An increasing number of NB clients (NB app_j, j = 1, 2, ...) install flows on the switches of an underlying OpenFlow switch topology via the controller’s NB (RESTCONF) interface.

Intuitively, this test emulates a scenario where multiple NB applications, each controlling a non-overlapping and equally sized subset of the topology, simultaneously send flows to their switches via the controller’s NB interface at a configurable rate. A sketch of such a multi-threaded NB flow writer is given below.
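The sketch emulates the NB clients with a few flow writer threads that PUT flows into the controller's configuration data store. The RESTCONF path follows OpenDaylight's standard flow programming URL; the minimal flow body, the node assignment and the even partitioning of flows across threads are simplifying assumptions rather than the emulator's exact behavior [5].

```python
# Sketch of a multi-threaded NB flow writer in the spirit of the NorthBound
# flow emulator [5]. The RESTCONF path is OpenDaylight's standard
# config-datastore flow URL; the drop-flow body and the static
# thread-to-switch assignment are simplifying assumptions.
import threading
import requests

BASE = "http://localhost:8181/restconf/config/opendaylight-inventory:nodes"
AUTH = ("admin", "admin")

def write_flows(node_id, flow_ids):
    """Install one drop-flow per id on `node_id` via RESTCONF."""
    for fid in flow_ids:
        url = f"{BASE}/node/{node_id}/flow-node-inventory:table/0/flow/{fid}"
        flow = {"flow": [{
            "id": str(fid),
            "table_id": 0,
            "priority": 100,
            "match": {"ethernet-match": {"ethernet-type": {"type": 2048}}},
            "instructions": {"instruction": [
                {"order": 0,
                 "apply-actions": {"action": [
                     {"order": 0, "drop-action": {}}]}}]},
        }]}
        requests.put(url, auth=AUTH, json=flow)

# Five flow worker threads (as in the configuration below), each writing
# its own share of 1000 flows to its own switch.
threads = [
    threading.Thread(target=write_flows,
                     args=(f"openflow:{i + 1}", range(i * 200, (i + 1) * 200)))
    for i in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```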


5.1 Test configuration

For both Beryllium and Boron we used the following setup:

• topology size per worker node: 1, 2, 4, 35, 70, 330
• number of worker nodes: 15
• group size: 1
• group delay: 3000ms
• topology type: ”Linear”
• hosts per switch: 1
• total flows to be added: 1K, 10K, 100K, 1M
• flow creation delay: 0ms
• flow worker threads: 5
• persistence: disabled

In this case switch topology size is equal to: 15, 30, 60, 525, 1050, 4950.

Fig. 17: Flow scalability stress test results. (a) Add controller time and (b) add controller rate vs number of network switches for N = 10^4 flow operations. Add controller time is forced to -1 when the test fails.

Fig. 18: Flow scalability stress test results. (a) Add controller time and (b) add controller rate vs number of network switches for N = 10^5 flow operations. Add controller time is forced to -1 when the test fails.

5.2 Results

5.2.1 Add controller time/rate

The results of this test are presented in Figs. 17, 18 and 19.

5.2.2 End–to–end flows installation controller time/rate

The results of this experiment are presented in Figs. 20 and 21.


Fig. 19: Flow scalability stress test results. (a) Add controller time and (b) add controller rate vs number of network switches for N = 10^6 flow operations. Add controller time is forced to -1 when the test fails.

Fig. 20: Flow scalability stress test results. End–to–end flow installation controller rate vs number of network switches for (a) N = 10^3 and (b) N = 10^4 flow operations. The installation rate is forced to -1 when the test fails.



Fig. 21: Flow scalability stress test results. End–to–end flow installation controller rate vs number of network switches for (a) N = 10^5 and (b) N = 10^6 flow operations. The installation rate is forced to -1 when the test fails.

6. Contributors

• Nikos Anastopoulos
• Panagiotis Georgiopoulos
• Konstantinos Papadopoulos
• Thomas Sounapoglou

References

[1] “NSTAT: Network Stress–Test Automation Toolkit.” https://github.com/intracom-telecom-sdn/nstat
[2] “Multinet: Large–scale SDN emulator based on Mininet.” https://github.com/intracom-telecom-sdn/multinet
[3] “OFTraf: pcap–based, RESTful OpenFlow traffic monitor.” https://github.com/intracom-telecom-sdn/oftraf
[4] “MT–Cbench: A multithreaded version of the Cbench OpenFlow traffic generator.” https://github.com/intracom-telecom-sdn/mtcbench
[5] “NSTAT: NorthBound Flow Emulator.” https://github.com/intracom-telecom-sdn/nstat-nb-emulator
[6] “NSTAT: Performance Stress Tests Report v1.2: Beryllium vs Lithium SR3.” https://raw.githubusercontent.com/wiki/intracom-telecom-sdn/nstat/files/ODL_performance_report_v1.2.pdf
[7] “Docker containers.” https://www.docker.com/what-docker
[8] “Docker Hub: Intracom repository.” https://hub.docker.com/u/intracom/
[9] “OpenDaylight Performance: A practical, empirical guide. End–to–End Scenarios for Common Usage in Large Carrier, Enterprise, and Research Networks.” https://www.opendaylight.org/sites/www.opendaylight.org/files/odl_wp_perftechreport_031516a.pdf
[10] “Mininet: An instant virtual network on your laptop.” http://mininet.org/
[11] “Open vSwitch.” http://openvswitch.org/
[12] “Generate PACKET_IN events with ARP payload.” https://github.com/intracom-telecom-sdn/multinet#generate-packet_in-events-with-arp-payload

