Evaluating the Impact of Reactivity on the Performance of Web Applications

Adriano Pereira, Leonardo Silva, Wagner Meira Jr.
Federal University of Minas Gerais - e-SPEED Lab.
Av. Antônio Carlos 6627 - ICEx - room 4010 - CEP 31270-010
Belo Horizonte - Minas Gerais - Brazil
fadrianoc,leosilva,[email protected]

Abstract

The great success of the Internet has raised new challenges in terms of applications and the satisfaction of their users. In fact, there is strong evidence that a significant part of user behavior depends on user satisfaction. Users' reactions may affect the load of a server, establishing successive interactions where the user behavior affects the system behavior and vice-versa. It is important to understand this interactive process in order to design systems better suited to user requirements. In this work we study and explain how this reactive interaction is performed by users and how it affects the system's performance. We perform experiments using a real server under a TPC-W-based workload generated with a reactive version of httperf. We also simulate different workload configurations in order to evaluate the effects on the system's load. The results show that accounting for reactivity causes a significant impact on the server's performance in terms of throughput and response time, raising the possibility of improving the performance of Web systems by considering reactivity.

1 Introduction

The phenomenal success of the Internet has raised new challenges in terms of applications and user satisfaction. Several new applications demand basic requirements, such as performance and scalability, to offer a good quality of service to users and to generate profitable Web services.

User-system interactions are usually complex and intriguing. It is quite hard to determine exactly the factors that lead a user to behave as we observe. The interaction process is not isolated, but depends on successive interactions that may be seen as a feedback loop, where the user behavior affects the system behavior and vice-versa. There is strong evidence that a significant part of the user behavior is reactive, that is, the user reacts to the instantaneous conditions at the action time. As a consequence, user behavior varies according to factors related to the server and the application provided. In this context, one important aspect to evaluate is how users react to the performance of the system, that is, how the behavior of the user changes as a function of the response of a server.

In this work we study and explain how this reactive interaction affects the system's performance. Moreover, we evaluate how different user profiles affect the system's load, changing the performance and defining different characteristics of this reactive environment.

The paper is organized as follows. Section 2 explains the concept of reactivity, discussing how it may be modeled and its impacts on both client and server performance. Section 3 provides an overview of related work. Section 4 assesses the impact of reactivity in an experiment using a reactive version of httperf. Section 5 presents our experimental study, which complements the evaluation of the reactivity impact. Finally, Section 6 presents conclusions and ongoing work.

2 Reactivity

This section discusses reactivity, which represents the way a user behaves according to the quality of service provided. Section 2.1 describes how reactivity may be modeled and Section 2.2 discusses its impacts on both server and client behaviors.

2.1 Reactivity Modeling

Several works have proposed methodologies to characterize workloads, considering user- and server-side metrics but ignoring the correlation between them. [18] presents a characterization model, named USAR, that makes it possible to model and replicate the reactivity observed in these systems. USAR models reactivity using functions that relate the inter-arrival time (IAT) and response time (R) measures of each burst of the workload. Bursts consist of sequences of requests for fetching a web page and its embedded objects (such as pictures). A burst is submitted to the server when a user clicks on a link or requests a Web page during his or her session. Bursts mimic the typical browser behavior, where a click causes the browser to first request the selected Web object and then its embedded objects. A session consists of a sequence of bursts in which the time between any two consecutive bursts is below a certain threshold. The IAT represents the interval of time between the submissions of two consecutive bursts performed by a user. Response time is the time a service takes to process a request, from receipt through processing to response. USAR correlates the IAT and response time using the following functions:

$$
RAT(k) = \begin{cases}
I(k,k+1)\,/\,R(k), & DIF(k) > 0 \\
R(k)\,/\,I(k,k+1), & DIF(k) < 0 \\
1, & DIF(k) = 0
\end{cases}
\qquad
DIF(k) = I(k,k+1) - R(k), \quad \forall k \in \text{workload},
$$

where k is a user request, I(k,k+1) is the IAT between requests k and k+1, and R(k) is the response time associated with request k. The functions RAT and DIF are used in the discretization model depicted in Figure 1. The x axis is associated with the DIF function and the y axis with the RAT function. The model defines seven user action classes (A to G), using two limit values for each axis. Values k1 and k2 divide the positive and negative sides of the DIF function, defining a zone close to zero where the values of IAT and response time are very close to each other. Values k3 and k4 divide the vertical scale into three different zones, according to the RAT function, which quantifies the correlation between IAT and response time. Classes A, B, and C represent behaviors where users do not wait for the answers to their requests before asking for another object, and classes E, F, and G represent behaviors where users wait for the answers to their requests before asking for another one. The boundaries of these classes are defined by two other constants, k5 and k6. For class D, the user requests a new object a short time after receiving the previous one.

[Figure 1: Discretization Model — the x axis plots the DIF function and the y axis the RAT function; k1 and k2 bound the near-zero DIF zone, while k3 and k4 (impatient side) and k5 and k6 (patient side) bound the RAT zones separating classes A, B, C (left), D (center), and E, F, G (right).]
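To make the discretization concrete, the sketch below classifies a single burst into one of the seven action classes from its IAT and response time. This is a minimal illustration of the model, not the authors' implementation; the threshold values k1-k6 are arbitrary placeholders, since the paper does not report the constants it uses.

```python
def classify_action(iat, r, k1=-0.5, k2=0.5, k3=2.0, k4=10.0, k5=2.0, k6=10.0):
    """Map one burst to a USAR action class (A-G).

    iat: inter-arrival time I(k, k+1) between bursts k and k+1 (seconds)
    r:   response time R(k) of burst k (seconds)
    k1 < 0 < k2 bound the near-zero DIF zone; k3/k4 and k5/k6 are
    illustrative RAT thresholds, not the values used in the paper.
    """
    dif = iat - r
    if dif < 0:
        rat = r / iat            # impatient side: response time exceeds IAT
    elif dif > 0:
        rat = iat / r            # patient side: IAT exceeds response time
    else:
        rat = 1.0
    if dif < k1:                 # classes A, B, C: user does not wait
        if rat > k4:
            return "A"
        return "B" if rat > k3 else "C"
    if dif > k2:                 # classes E, F, G: user waits for the answer
        if rat > k6:
            return "G"
        return "F" if rat > k5 else "E"
    return "D"                   # IAT and response time are very close
```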

Figure 2 presents the patience scale formed by the classes derived from the discretization model. Classes on the left side of the scale represent user action classes for impatient users. The right side represents user action classes where the user is patient, waiting for a request to complete before submitting another one.

[Figure 2: Patience scale — classes A through G ordered from −3 (most impatient) to +3 (most patient), with class D at 0.]

2.2 Reactive Behavior

The discretization model of the reactive behavior provides seven user action classes. Each user action class represents a different behavior that can be observed by analyzing the relation between the IAT and the response time. Table 1 presents the RAT and DIF functions and the relation between IAT and response time for each class. Observing the behavior of the RAT and DIF functions for each class, we can infer the typical relation between them. Class A has the largest RAT value among the impatient classes (A, B, and C); we represent this with the << symbol. For classes B and C the IAT is still lower than the response time, but their RAT values are lower than that of class A. The same reasoning applies to classes E, F, and G: class G has the largest RAT value compared to classes E and F. We represent the relation between IAT and response time with the >> symbol for class G and with > for classes E and F. We use the ≈ symbol for class D because the IAT and response time have similar values.

Class | DIF function        | RAT function        | Relation
------+---------------------+---------------------+----------
  A   | IAT − R < k1        | R / IAT > k4        | IAT << R
  B   | IAT − R < k1        | k3 < R / IAT < k4   | IAT <  R
  C   | IAT − R < k1        | R / IAT < k3        | IAT <  R
  D   | k1 < IAT − R < k2   | —                   | IAT ≈  R
  E   | IAT − R > k2        | IAT / R < k5        | IAT >  R
  F   | IAT − R > k2        | k5 < IAT / R < k6   | IAT >  R
  G   | IAT − R > k2        | IAT / R > k6        | IAT >> R

Table 1: Relation between IAT and response time for each user class
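As a quick spot check, the classifier sketched in Section 2.1 reproduces the qualitative relations of Table 1 when fed values that fall clearly inside each zone (thresholds are the illustrative defaults, not the paper's):

```python
print(classify_action(iat=0.1, r=5.0))   # IAT << R  -> "A"
print(classify_action(iat=2.0, r=5.0))   # IAT <  R  -> "B"
print(classify_action(iat=5.0, r=5.1))   # IAT ~  R  -> "D"
print(classify_action(iat=50.0, r=1.0))  # IAT >> R  -> "G"
```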

In order to understand the behavior of each user action class, we represent a typical request-response scenario in Figure 3. For each situation, a client submits a request to the server, which answers it according to the server's load. We represent a non-overloaded scenario, where the server takes less than 5 seconds to answer the requests, and an overloaded scenario, where the response time grows beyond 5 seconds.

Clients with an impatient profile behave according to classes A, B, and C. The figure presents their typical behavior: the IAT is lower than the response time. In a non-overloaded scenario the difference is not as significant as in overloaded ones. In overloaded scenarios the server takes more time to answer the bursts, and the impatience of the client causes the submission of more requests before the responses to the previous ones arrive. From the server's perspective, an impatient user tends to submit more requests before receiving the previous responses, making an overload scenario worse.

Patient users behave according to classes E, F, and G. In their typical behavior, as represented in Figure 3, the IAT is greater than the response time, meaning that for each request submitted to the server the user tends to wait for the server's response before issuing the next one. In overload situations, patient users tend to wait for the response and think for a time period before proceeding. This is very important, since the server's overload may not increase thanks to the patient behavior of clients. Class G presents the most patient behavior, since its IAT tends to be greater than those of classes E and F.

[Figure 3: Client Reactive Behavior — request-response timelines between client and server for each user action class (A; B and C; D; E and F; G), contrasting a non-overloaded scenario (response time < 5 sec) with an overloaded one (response time >= 5 sec).]

From the server's perspective, the reactions of users provoke different changes in terms of load, since variations in the response time affect the rate at which requests are submitted. In fact, impatient behavior tends to increase the server's load, since users behaving according to classes A, B, and C usually submit requests at high rates. Patient behavior tends to decrease the load of the server, due to the behavior of users of classes E, F, and G. In a real scenario, the number of users behaving according to each user class varies, and understanding its impact on the performance of a server is not obvious, due to the complexity of such a scenario. In this work we address this task by exercising a web server with a reactive workload and by simulating a real web application.
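This load effect can be reproduced with a toy closed-loop queue. The sketch below is our own illustration, not the paper's simulator: identical users share one FIFO server; patient users (classes E-G) think only after the response arrives, while impatient users (classes A-C) fire their next burst a fixed time after the previous submission, whether or not it was answered. All parameter values are arbitrary.

```python
import heapq

def mean_response(num_users=200, service=0.02, think=1.0,
                  bursts_per_user=50, patient=True):
    """Mean user-perceived response time of a toy single-server FIFO queue."""
    # events: (submission time, user id, burst index), served in FIFO order
    events = [(i * 0.01, i, 0) for i in range(num_users)]  # staggered starts
    heapq.heapify(events)
    server_free = 0.0
    total, served = 0.0, 0
    while events:
        submit, user, burst = heapq.heappop(events)
        start = max(submit, server_free)   # wait for the single server
        done = start + service
        server_free = done
        total += done - submit             # response time seen by the user
        served += 1
        if burst + 1 < bursts_per_user:
            # patient: think after the response; impatient: after submission
            nxt = (done if patient else submit) + think
            heapq.heappush(events, (nxt, user, burst + 1))
    return total / served

# 200 users at one burst/second offer 200 req/s against a 50 req/s server:
# impatient users keep the backlog growing, patient users self-throttle.
print("patient   mean response: %.2f s" % mean_response(patient=True))
print("impatient mean response: %.2f s" % mean_response(patient=False))
```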

3 Related Work

The characterization and generation of workloads are essential to the evaluation of Internet systems, motivating several studies over the last few years. [4, 9] analyze some of the characteristics of web server workloads, and [19] analyzes streaming media workloads.

Workload generators are tools designed to generate synthetic logs composed of requests that simulate real user requests. SPECweb99 [1], WebBench [3], and TPC-W [11] are benchmarks for evaluating the performance of Web servers; they provide representative benchmarks for measuring a system's ability to act as a Web server. SURGE [6] and httperf [17] are workload generators developed to exercise Web servers through the submission of sets of requests with different load characteristics. These workload generators are powerful tools, but they are not capable of simulating user behavior patterns related to the reactions of users to the server's performance. They adopt an arrival process independent of the performance provided, generating the same workload despite the variations observed in the quality of service.

The user behavior can be analyzed using many variables observed in a Web log: the list of requests submitted to the server, navigational patterns, the types of functions accessed [15], and think times, among other information. [12, 7, 8, 5] model aspects related to the user behavior, such as click-streams, correlation between requests, distribution of users, session duration, data rates, application popularity, and mobility. [13] proposes a user behavior model framework, consisting of various layers and based on mathematical models, that is used to produce a user-oriented workload generator. However, these studies fail to model how users react to the performance provided by the service; they do not capture aspects related to reactivity to the quality of service.

4 Impact of reactivity

In this section we assess the impact of reactivity on the performance of a web server. Section 4.1 describes how we generate reactive workloads with the httperf workload generator. Section 4.2 presents our experimental methodology. Sections 4.3 and 4.4 show the experiments and results.

4.1 Reactive Workload Generation

This work uses httperf as the tool for workload generation. We chose it because it provides an effective way of generating HTTP workloads and measuring performance. In order to generate reactive workloads, we created a new version of httperf compatible with the USAR model. Traditional workload generators, such as httperf, assume that a user's new request must wait for the previous one to complete before being dispatched.

[Figure 4: Client-server interaction mechanism — a client session's bursts (main and embedded requests) exchanged with the server, annotated with the httperf events EVT_SESS_NEW, EVT_BURST_SEND_START, EVT_CALL_SEND_START, EVT_CALL_DESTROYED, EVT_BURST_DESTROYED, and EVT_SESS_DESTROYED, and with the session duration, response time, think time, IAT, expected response time, and expected IAT; elements labeled 0 follow the traditional blocking mechanism, while elements labeled 1 and 2 follow the reactive one.]

This approach cannot represent the situation where the user wants to send a new request even though the last one has not finished yet, for example because the response time of the last request is unacceptable to him or her. This behavior corresponds to the user action classes on the impatient side of the patience scale described in Section 2. This new situation demands that the workload generator allow non-blocking sessions, i.e., a burst may begin before the previous one has completed.

Figure 4 illustrates the traditional workload generation mechanism and the new one that supports the impatient behavior. The figure presents the execution of a sequence of bursts of a user session. We see the client and server sides and some of the httperf events associated with the execution. The requests are represented by lines going from the client to the server side, and vice-versa; the vertical axis represents time. The figure illustrates the session duration and the concepts of response time, think time, and IAT. The main request of a burst is represented by a bold line and the embedded requests by single lines. The figure represents the traditional execution mechanism of httperf (elements labeled 0) and the reactive mechanism that we implemented on top of it (elements labeled 1 and 2). Moreover, the figure introduces the expected response time and the expected IAT, which are measures defined dynamically according to the reactivity model.

httperf has a module called wsesslog, which submits requests based on a user session file. In order to incorporate the reactivity model, we added information about the user action class to the user session structure. In order to determine the user action class according to the USAR model, we need the value of the response time observed by the client (in this case, httperf itself) and

the client think time. A typical wsesslog file contains the think time, so we only have to obtain the response time. The value of the response time can be easily obtained in httperf, since it is built around the concept of events. These events, shown in Figure 4, can be captured through callback handlers defined using httperf API functions. There is a response time associated with each request and another one associated with each burst. To obtain real values of response time, we submitted a workload file based on TPC-W [11, 16] to the test environment using the original version of httperf.

In order to reproduce the impatient behavior, we changed the way httperf schedules the burst that is submitted for each session. The original implementation waits until the last submitted burst finishes before starting a timer event that triggers the next burst. We adapted wsesslog to start the timer event as soon as the first request of the burst is submitted. The time httperf should wait before triggering a new burst can be calculated from the user class and the response time of the previous burst. As a result, we created a new version of httperf that is non-blocking and reactive. This version supports requests that time out after a period specified by the user think time. We also instrumented httperf to record some important events and burst information.
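The paper does not spell out the exact rule the modified httperf uses to turn a burst's action class and the observed response time into the delay before the next burst (the expected IAT). A plausible sketch, inverting the RAT relations of Section 2.1 with made-up per-class ratios, could look like this:

```python
# Hypothetical RAT values chosen inside each class's zone; a real
# generator would derive them from the characterized workload.
RAT_TYPICAL = {"A": 15.0, "B": 5.0, "C": 1.5,
               "E": 1.5, "F": 5.0, "G": 15.0}

def expected_iat(action_class, response_time):
    """Delay before firing the next burst, given the observed response time."""
    if action_class == "D":
        return response_time               # IAT ~ R
    rat = RAT_TYPICAL[action_class]
    if action_class in ("A", "B", "C"):    # impatient: RAT = R / IAT
        return response_time / rat         # next burst fires before the answer
    return response_time * rat             # patient: RAT = IAT / R
```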

4.2 Experimental Methodology

In order to assess the impact of the reactive workload, we prepared an environment composed of an HTTP server (Apache), an application server (Apache Tomcat), a database server (MySQL), and a client (httperf), each running on a different machine. Each machine runs Linux with kernel version 2.4.25 and has an Intel Pentium 4 1.80 GHz CPU and 1 GB of main memory. For best performance, we turned off all unnecessary services and configured the

operating system to support a number of file descriptors sufficient for our experiments (65,000 file descriptors). We used a Java implementation of the TPC-W benchmark as the application service. For the client, we prepared a workload based on TPC-W, augmented with information related to the reactive user behavior, following five steps:

1. We create a base workload following the TPC-W recommendations and its CBMG. The generated workload, wl-tpcw, is composed of 5000 user sessions with a mean session length of 124 bursts.

2. We convert the wl-tpcw workload into a new one, wl-httperf, which is compatible with the format used by httperf's wsesslog module [17].

3. We submit the workload wl-httperf to our environment using the original version of httperf and record the real response times.

4. With the recorded response times and the workload wl-httperf, we apply the USAR characterization model, resulting in the distribution of user actions for each burst (a sketch of this step is given at the end of this subsection).

5. We add the information obtained in the last step to the workload wl-httperf, obtaining the workload wl-httperf-react, which can be used by the new version of httperf to generate workloads with reactivity.

It is important to emphasize that the number of simultaneous users during an experiment is defined by the number of active sessions during the experimental time. We executed experiments with many different workload configurations. Here we show the experiments where httperf is set to execute 100, 1000, and 5000 user sessions, at a rate of 100 sessions initiated per second. We chose these workloads because we want to assess the impact of reactive workloads under light, medium, and heavy conditions. For each workload configuration we employed both the reactive and the non-reactive approaches. We focus our analysis on the most overloaded period, which corresponds to the first ten minutes.

The experiments evaluate a set of metrics for each scenario: throughput (both output and input), cumulative throughput, response time (the user-perceived response time), active bursts (the number of bursts requested to the server but not yet answered at each period of time), and active sessions (the number of sessions initiated but not yet finished). The response time is a critical factor for users of interactive systems [14]. It is evident that user satisfaction increases as response time shortens. Modest variations around the average response time are acceptable, but large variations may affect user behavior.
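As an illustration of step 4 above, the sketch below labels each burst of a recorded session with its USAR action class, reusing classify_action from the sketch in Section 2.1. The record layout (the start and response_time fields) is hypothetical; the real pipeline works on httperf wsesslog files.

```python
def label_bursts(bursts):
    """bursts: list of dicts with hypothetical fields 'start' and
    'response_time', ordered by submission time within one session."""
    labeled = []
    for k in range(len(bursts) - 1):
        iat = bursts[k + 1]["start"] - bursts[k]["start"]  # I(k, k+1)
        cls = classify_action(iat, bursts[k]["response_time"])
        labeled.append((bursts[k], cls))
    return labeled
```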

4.3 Results

Due to space constraints, we summarize the main results of the experiments in Table 2 and show only the graphs for the experiment with 5000 sessions. In the table, we list some important measures that are useful to analyze the impact of reactivity on the server performance: total number of bursts (B), number of bursts per second (B/sec), total number of requests (Req), number of requests per second (Req/sec), average response time (R), and percentage of finished sessions (S). NR denotes the non-reactive experiments and R the reactive ones.

Measure     | 100 sessions  | 1000 sessions | 5000 sessions
            |  NR      R    |  NR      R    |  NR      R
B (10^3)    |   6     10    |   57     78   |   80     80
B/sec       |  9.2   16.1   |  92.2  114.4  |  123    133
Req (10^3)  |  50     90    |  500    650   |  580    690
Req/sec     | 100    190    |  800   1200   | 1180   1280
R (sec)     | 0.027  0.039  |  0.1    0.35  |  40.7   13.7
S (%)       |  45     85    |   45     90   |   20     25

Table 2: Experiments - Main Results

For the experiments running 100 sessions, the non-reactive experiment presents a very small average response time, near zero. This confirms the non-overloaded state: the number of active bursts during the execution is very low, and thus there are no performance problems. The reactive experiment achieves a higher throughput than the non-reactive one, but without raising the response time much. 85% of the sessions finished, showing that reactivity allows users to reduce the estimated session time when the response time to their bursts of requests is very small.

For the experiment with 1000 sessions, the non-reactive experiment again presents a very small average response time, near zero, with peaks under 1 second. This confirms the non-overloaded state. The number of active bursts during the experiment presents a stable behavior, since there are no performance problems. The reactive experiment has an average response time that is still small, but not instantaneous. The response times present peaks of up to 2 seconds, but in isolated situations that do not endanger the server performance.

The non-reactive experiment with 5000 sessions executes 80,000 bursts, with an average throughput of 123 bursts/second, varying from 100 to 400 bursts/second. The response time rises from a few seconds to more than 120 seconds, with an average of 40.7 seconds. Figure 5 presents the throughput in bursts per second (a) and the average response time (b) for this experiment. It is easy to observe that the server became overloaded, since after 30 seconds of the experiment the response time had already reached the 10-second limit [14].

It is important to analyze what happened around 360 seconds into the experiment. The following aspects are recorded: the response time begins to decrease, the throughput decreases, the send and receive rates of active bursts become the same, and the number of active sessions decreases fast. A detailed investigation shows that the cause of this anomaly is the time-out of TCP/IP connections, represented by system error number 110 (ETIMEDOUT) in the Linux operating system. This problem caused the decrease in the number of active sessions, which demonstrates that a significant number of sessions begin to fail as a consequence of the identified error. When the workload generator tries to open connections or send requests and TCP returns an error, the current session fails and closes once no more connections are available for it. Only about 100 sessions remain active after 400 seconds, representing the users who generate load on the server from this point to the end of the experiment. In this non-reactive experiment we identify a severe overload on the server, which causes very poor performance. The response time values observed are unacceptable. Moreover, the unavailability of the server represents a serious problem, since around 80% of the users keep waiting for the server's answer without success.

The reactive experiment running 5000 sessions executes 80,000 bursts, with an average throughput of 133 bursts/second, varying from 25 to 250 bursts/second. Figure 6 presents the throughput (a) and the average response time (b). The response time rises from a few seconds to more than 60 seconds, with an average of 13.7 seconds, demonstrating an overload situation. The receive rate increases and the send rate decreases in the period between 100 and 200 seconds. Due to the users' reactions to the overloaded server, session durations are stretched, and 75% of the sessions are still active after the experimental time.

4.4 Summary

In the experiments with 100 sessions, the server achieves a very good performance, guaranteeing that users perceive an instantaneous answer to their bursts of requests. Good response times allow the users of the reactive experiment to issue their new bursts faster. The increase in throughput without a change in response time shows that the server is not overloaded. The decrease in burst execution time causes the reactive experiment to successfully conclude more sessions than the non-reactive one.

For the experiments with 1000 sessions, it is interesting to note that the throughput of the reactive experiment decreases exactly when response times rise. In this case, the change in the users' reactions causes the throughput to rise again after some time. The application server

keeps a very good response rate to the requests while not overloaded.

The non-reactive and reactive experiments with 5000 sessions present very different scenarios. The first one causes a heavy overload on the server, which keeps it unavailable to most of the users. The reactive one also overloads the server, but the reaction of users to unacceptable response time values changes their global behavior, allowing the server to save resources and return to acceptable response times afterwards.

Analyzing the overall experiments, we observe that the reactive ones result in different load situations compared to the non-reactive ones. This result is interesting, since it can be the basis for research on QoS techniques that consider the influence of user reactions on server performance.

5 Understanding the reactivity impact

This section explains how we evaluate the impact of reactivity. Section 5.1 briefly describes the simulator used in our experiments and presents our experimental methodology. Section 5.2 shows the results of the experiments.

5.1 Experimental Methodology

In order to evaluate the impact of reactivity, we built a simulator named USAR-QoS, implemented using the SimPack toolkit [10], a C++ simulation environment. The architecture of USAR-QoS is event-driven and mimics a complete Web system, consisting of a workload generator that supports reactivity and the web application environment. It is built in a modular way, allowing its extension with new QoS policies and features. We instrumented USAR-QoS to record the same measures as in the real experiments, plus the rate of expired bursts, which represents the situation where a user requests the next burst before receiving the response to the previous one, due to impatience and high response times.

We simulate several scenarios using USAR-QoS to observe how the application server behaves under various loads. We use the same workload of 5000 sessions from the real experiment presented in Section 4, based on the TPC-W benchmark [2]. For each scenario, the workload is configured with a different distribution of user action classes. We evaluate the workload with an exclusive distribution (100%) of each user class, and with a mixed distribution (A 22%, B 15%, C 10%, D 6%, E 10%, F 15%, and G 22%).
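For the mixed scenario, each simulated user can be assigned a class by sampling the stated distribution. A minimal sketch follows; the weights are the ones quoted above, while the sampling helper is ours, not part of USAR-QoS:

```python
import random

# Mixed distribution used in the simulation scenarios (weights sum to 1.0).
MIX = {"A": 0.22, "B": 0.15, "C": 0.10, "D": 0.06,
       "E": 0.10, "F": 0.15, "G": 0.22}

def draw_class(rng=random):
    """Sample one user action class according to MIX."""
    x, acc = rng.random(), 0.0
    for cls, weight in MIX.items():
        acc += weight
        if x < acc:
            return cls
    return "G"  # guard against floating-point round-off
```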

5.2 Results

Figure 7 shows the average response time for each workload configuration. Panels (a) and (b) present the results for the workloads with an exclusive class, from A to G.

[Figure 5: Non-reactive experiment with 5000 sessions — (a) throughput (bursts/sec, send and receive rates) and (b) average response time, both over the 600-second experiment.]

[Figure 6: Reactive experiment with 5000 sessions — (a) throughput (bursts/sec, send and receive rates) and (b) average response time, both over the 600-second experiment.]

From the graphs we can clearly observe how different the impact of each user action class is on the performance of the server. Users of class A cause a very heavy load on the server, driving the mean response time up to 350 seconds. Classes B and C also lead the server to a heavy overload, with response time peaks of 175 and 90 seconds, respectively. In the experiments with these workloads, we observe an expired-burst rate of 100%, confirming the users' dissatisfaction. The experiment with the class D workload presents a maximum response time of 50 seconds, which then decreases gradually. In this scenario, 30% of the bursts are associated with user satisfaction.

Experiments with classes E, F, and G present representative differences when compared to the previous ones. The response time for class E varies from 10 to 25 seconds in the heaviest period, falling below 10 seconds after the first half of the experiment. Experiments with classes F and G result in mean response times that vary from 3 to 15 seconds and from 0 to 5 seconds, respectively. These experiments have a high satisfaction rate, of almost 100%. It is easy to note that each user profile causes a very different impact on the performance of the server, confirming the study presented in Section 2.

Figure 7 (c) presents the response time for the workload with the balanced distribution of user classes. The mean response time varies from 1 to 5 seconds, with an average of X seconds. This is a direct result of the combination

of different user profiles. The satisfaction rate during this experiment is 50%. This evaluation shows the impact of user reactivity given the quality of service provided by a server. As we observe, different workload configurations result in different behaviors on both the client and server sides.

6 Conclusions

In this paper we evaluate the impact of reactivity on the performance of Web applications. We design a new version of the httperf workload generator that considers reactivity, based on the USAR model [18]. Using it, we perform experiments comparing the non-reactive and reactive approaches. The results show that reactivity causes a significant impact on the server's performance. This can be explained by the static behavior assigned to clients in the non-reactive scenario: adopting traditional workload generation mechanisms, the unavailability of the system is an expected situation, since changes in the users' reactions are not considered. Our new model shows the importance of better understanding the user-server interaction process. Moreover, this work presents novel contributions explaining how reactivity occurs, how it affects the system's performance, and how different user profiles react to variations in the server's performance. We also design and implement the USAR-QoS simulator, which allows the analysis of the behavior of each user profile.

[Figure 7: Average Response Time for each workload configuration — (a) user classes A, B, C, and D; (b) user classes E, F, and G; (c) balanced distribution of classes.]

The results demonstrate that it is important to consider the correlation between the user and server sides, since it can decrease the gap between real and modeled scenarios. Our reactive model produces completely different results from the traditional one, reinforcing the importance of better understanding the user-server interaction process. We are currently working on reactive QoS strategies. As part of ongoing work, we plan to investigate how to design reactive QoS control strategies that use both admission control and scheduling techniques.

7 Acknowledgements

This work was (partially) developed in collaboration with Hewlett Packard Brazil R&D (Project CAMPS HPUFMG-2005).

References

[1] SPECweb99. http://www.specbench.org/osg/web99/.
[2] TPC - Transaction Processing Performance Council. TPC Benchmark W. http://www.tpc.org/tpcw/.
[3] WebBench. http://www.veritest.com/benchmarks/webbench/.
[4] M. F. Arlitt and C. L. Williamson. Web server workload characterization: The search for invariants. In Measurement and Modeling of Computer Systems, pages 126-137, 1996.
[5] A. Balachandran, G. M. Voelker, P. Bahl, and P. V. Rangan. Characterizing user behavior and network performance in a public wireless LAN. SIGMETRICS Perform. Eval. Rev., 30(1):195-205, 2002.
[6] P. Barford and M. Crovella. Generating representative web workloads for network and server performance evaluation. In Proceedings of the 1998 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, pages 151-160. ACM Press, 1998.
[7] P. Chatterjee, D. Hoffman, and T. Novak. Modeling the clickstream: Implications for web-based advertising efforts, 1998.
[8] C. Costa, I. Cunha, A. Borges, C. Ramos, M. Rocha, J. Almeida, and B. Ribeiro-Neto. Analyzing client interactivity in streaming media. In Proceedings of the 13th World Wide Web Conference, 2004.
[9] M. Crovella and A. Bestavros. Self-similarity in World Wide Web traffic: Evidence and possible causes. In Proceedings of SIGMETRICS '96: The ACM International Conference on Measurement and Modeling of Computer Systems.
[10] P. A. Fishwick. SimPack: Getting started with simulation programming in C and C++. In Winter Simulation Conference, pages 154-162, 1992.
[11] D. F. García and J. García. TPC-W e-commerce benchmark evaluation. Computer, 36(2):42-48, 2003.
[12] T. Henderson. Latency and user behaviour on a multiplayer game server. In Proceedings of the Third International COST264 Workshop on Networked Group Communication, pages 1-13. Springer-Verlag, 2001.
[13] H. Hlavacs, E. Hotop, and G. Kotsis. Workload generation by modeling user behavior. In Proceedings of OPNETWORKS 2000, 2000.
[14] D. Menascé, V. Almeida, and L. Dowdy. Performance by Design. Prentice Hall, 2004.
[15] D. Menascé, V. Almeida, R. Riedi, F. Ribeiro, R. Fonseca, and W. Meira Jr. A hierarchical and multiscale approach to analyze e-business workloads. Perform. Eval., 54(1):33-57, 2003.
[16] D. A. Menascé. Testing e-commerce site scalability with TPC-W. In Int. CMG Conference, pages 457-466, 2001.
[17] D. Mosberger and T. Jin. httperf - a tool for measuring web server performance. SIGMETRICS Perform. Eval. Rev., 26(3):31-37, 1998.
[18] A. Pereira, G. Franco, L. Silva, W. Meira Jr., and W. Santos. The USAR characterization model. In Proceedings of the IEEE 7th Annual Workshop on Workload Characterization (WWC-7), Austin, Texas, USA, 2004. IEEE Computer Society.
[19] E. Veloso, V. Almeida, W. Meira, A. Bestavros, and S. Jin. A hierarchical characterization of a live streaming media workload. In Proceedings of the Second ACM SIGCOMM Workshop on Internet Measurement, pages 117-130. ACM Press, 2002.
