Implementation and Empirical Evaluation of Server Selection Policies for Accessing Replicated Web Services
José Airton F. Silva
Supervisor: Nabor C. Mendonça
Mestrado em Informática Aplicada – Universidade de Fortaleza
Av. Washington Soares, 1321, CEP 60811-905, Fortaleza – CE
{jairton.mia,nabor}@unifor.br

Abstract. This paper describes our work on the implementation and empirical evaluation of a client-side server selection framework for accessing replicated web services. The framework, which incorporates five representative server selection policies, was evaluated in the context of accessing a real-world replicated service with four replicas distributed over three continents. Our empirical results show that, in addition to the individual performance of each server hosting a replica of the service, the service response time obtained with the five policies can be strongly affected by local client characteristics, such as connection capacity and workload distribution throughout the day.

1. Introduction
Web services [3] offer software developers a powerful mechanism for application deployment and integration over the web, independently of programming language, execution platform and communication protocol. However, to be effectively used in real-world applications, current web services technologies have to provide application developers with important quality of service (QoS) guarantees, such as efficiency, high availability, reliability and security [4]. Efficiency and high availability, in particular, can be provided through the use of replication [2].

Two main approaches have traditionally been used when accessing replicated resources on the web. The first approach aims at increasing system throughput at the server side, by automatically redistributing client requests among a set of cooperating servers. This approach is typically used when resources are transparently replicated over a cluster of servers in the same network domain. The second approach aims at reducing response time at the client side, by exploiting mechanisms for selecting the “best” server according to the specific characteristics of each client (e.g. [5][9]). This approach is therefore more appropriate for situations in which resources are replicated over servers that are geographically or administratively apart from each other. Our work follows the latter approach, focusing specifically on the challenges of automatically selecting the “best” replica (for a particular client application) of a geographically replicated web service.

Automatic selection of replicated (or functionally equivalent) web services is a relatively new research area in which only a few works have been proposed thus far (e.g. [6][1][8]). However, those works are still limited in that they either do not consider service response time, as perceived at the client side, as a first-class service quality attribute, or fail to provide empirical evidence of the overall performance gain offered by their proposed server selection policies. In an attempt to address those issues, we have implemented a client-side server selection framework for dynamically invoking replicated web services [11]. The framework, which incorporates a variety of server selection policies, has been empirically evaluated in the context of accessing a real-world replicated service with four replicas geographically distributed over three continents [12][7]. Our results show that, in addition to the individual performance of each replica provider, the overall service response time, when the service is invoked using different server selection policies, can be strongly affected by local client characteristics, such as connection capacity and workload distribution.

2. Server Selection Policies
The following five policies were implemented as part of our server selection framework:

Random Selection – As the name suggests, this policy randomly selects the server to be invoked amongst the set of servers hosting a replica of the service.

Parallel Invocation – This policy invokes all replicas “in parallel” (in fact, invocations are issued concurrently, using threads). The first response to be received in full is returned to the client application. All pending invocations are then interrupted and their responses ignored.

HTTPing – This policy initially sends a “probe” (in the form of a small HTTP HEAD message) to all servers concurrently. The first server to respond to the probe is then selected for invocation, with all pending probe responses being ignored.

Best Last – This policy selects the server with the best performance in the last successful invocation of the same service operation (with an equivalent set of parameters) issued to each replica.

Best Median – This policy is a variation on Best Last, where the selection is based on the performance of the last k successful invocations recorded for each server. The selected server is the one with the lowest median response time amongst the k past performances considered (k was set to 6 in our experiments).

3. Empirical Evaluation
3.1. Method
Our experiments were carried out from 12 to 31 December 2003 (excluding weekends and public holidays), and involved two client machines, both located in the city of Fortaleza, Brazil, each dedicated to running the same client application. The application continuously invoked the UDDI Business Registry (UBR) replicated web service, using each of the five server selection policies in turn. The UBR service was chosen because it is a real-world replicated service, with four replicas (provided by Microsoft, IBM, SAP and NTT) geographically distributed over three continents (Microsoft and IBM in North America, SAP in Europe, and NTT in Asia). Despite being in the same city, the two client machines presented significant differences in terms of work environment and connection characteristics. One client was located at the University of Fortaleza (referred to as client UNIFOR), while the other was located at the IT department of a Brazilian bank (referred to as client BNB).

All service invocations were organized in sessions of five cycles each. Each cycle consisted of a sequence of five service invocations, using the same service operation with the same set of parameters, with each invocation using a different server selection policy. Invocation parameters were defined in terms of a list of identification keys, used to identify service descriptions registered with the UBR service. The number of identification keys used in each cycle (1, 10, 20, 30 and 40, respectively) determined the size of the response expected from the service (approximately 9 KB, 87 KB, 174 KB, 260 KB and 347 KB, respectively).

[Figure omitted: two line graphs of mean response time (s, 0–50) per hour of the day (0–24), one per client, with curves for Parallel, HTTPing, Best Last, Best Median and Random.]

Figure 1. Effects of invocation period: client UNIFOR (top) and client BNB (bottom).

3.2. Effects of Invocation Period
Initially, we analyzed the performance of the five server selection policies with respect to invocation time. Figure 1 depicts the results obtained at each client, considering the mean response time along each hour of the day. The graphs show only the results for cycle 5, which, due to its larger response size, presented the widest variation in response time. At client UNIFOR, we observed a significant increase in response time after 7am, with a corresponding decrease to the same levels only after 11pm. This period matches exactly the period of most intense academic activity at the university. We also observed high performance variability throughout the day, even for the policies with the lowest response times. At 8am, for instance, the response times of some policies are already up to 6 seconds higher than those obtained by the same policies at 7am. The period of worst performance is around 3pm. At client BNB, considering only the policies with the best performances, the period of higher response times is between 8am and 5pm, approximately. This period also matches the period of most intense activity at the bank. The periods of worst performance, for the best policies, are around 11am and 3pm.
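The hourly aggregation behind Figure 1 can be sketched as follows (a hypothetical helper, not the authors' code), grouping raw timestamped measurements by hour of the day:

```python
# Sketch only: per-hour mean response time, as plotted in Figure 1.
from collections import defaultdict
from datetime import datetime

def mean_response_by_hour(samples):
    """samples: iterable of (datetime, response_time_in_seconds) pairs.
    Returns {hour_of_day: mean response time} for the hours observed."""
    buckets = defaultdict(list)
    for when, rt in samples:
        buckets[when.hour].append(rt)
    return {hour: sum(ts) / len(ts) for hour, ts in sorted(buckets.items())}
```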
In general, we observed that response times obtained at client BNB are at higher levels than those obtained at client UNIFOR. On the other hand, client BNB appears to be more stable, as the variations in response time observed at that client are of lower magnitude than those observed at client UNIFOR. It is interesting to note that the Random Selection policy presented the worst performance at both clients, independently of response size and invocation period. The reason is that random selection treats all replicas equally, which means that slower servers (such as SAP and NTT) are invoked as often as those with better performance results (i.e., Microsoft and IBM).

[Figure omitted: two line graphs of relative speedup (1.0–2.5) vs. request size (1, 10, 20, 30 and 40 keys), one per client, with curves for Parallel, HTTPing, Best Last and Best Median.]

Figure 2. Relative speedup: client UNIFOR (left) and client BNB (right).

3.3. Relative Speedup
To better visualize the performance differences amongst the five policies, we analyzed the relative speedup in median response time offered by each of the other four policies with respect to Random Selection. Figure 2 illustrates the relative speedup observed at both clients during the periods of most intense activity (i.e., from 7am to 11pm at client UNIFOR, and from 8am to 5pm at client BNB). At client UNIFOR, the highest speedups were offered by the two policies based on historical data, Best Median and Best Last, followed by HTTPing and Parallel Invocation, in that order (in the best case, for 30-key invocations, Best Median was up to 2.3 times faster than Random Selection). This result is justifiable, since both Best Median and Best Last consider recent invocation results as part of their selection process, thus avoiding momentarily slower servers. The negative result for Parallel Invocation is somewhat surprising, and is due to the slower connection capacity of client UNIFOR, which was found to be insufficient to handle the incoming traffic generated by invoking the four servers concurrently. At client BNB, the overall speedup factors observed for the four policies are considerably lower than those observed at client UNIFOR. Another notable difference is in the results observed for Parallel Invocation. In contrast to client UNIFOR, where Parallel Invocation offered the worst speedup, here that policy offered a speedup factor visibly higher than those of the other three policies. These results suggest that, contrary to what was observed at client UNIFOR, the connection capacity of client BNB was sufficient to handle the incoming traffic generated by Parallel Invocation.
Finally, HTTPing offered an intermediate speedup at both clients, being the second worst policy at client BNB but performing better than Parallel Invocation at client UNIFOR. A more detailed analysis of these and a number of other experimental results can be found in [10].
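The relative speedup metric used above can be sketched as follows (a hypothetical helper, not the authors' code): each policy's speedup is the median response time of Random Selection divided by that policy's median, so values above 1.0 mean the policy beat the baseline.

```python
# Sketch only: relative speedup in median response time vs. a baseline.
import statistics

def relative_speedup(samples, baseline="random"):
    """samples maps policy name -> list of response times (seconds)."""
    base = statistics.median(samples[baseline])
    return {policy: base / statistics.median(times)
            for policy, times in samples.items() if policy != baseline}
```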

4. Conclusion
This paper presented our work on the implementation and empirical evaluation of a variety of client-side server selection policies for accessing replicated web services [10][11][12][7]. Our experiments show that, in addition to the individual performance of each service replica, the overall service response time, when the service is invoked using different server selection policies, can be strongly affected by the local characteristics of each client. This is an important result, since many existing approaches for managing replicated resources on the web are deployed exclusively at the server side.

As regards future work, some interesting lines of research include expanding the experiments to new services and new client settings; designing new server selection policies, possibly by combining the five described here; and reimplementing (and re-evaluating) the server selection framework in the form of a service proxy.

References
[1] Azevedo, V., Pires, P. F., Mattoso, M., “WebTransact-EM: Um Modelo para Execução Dinâmica de Serviços Web Semanticamente Equivalentes”, In Proc. of the 9th Brazilian Symposium on Multimedia Systems and the Web (WebMedia 2003), Salvador, Brazil, November 2003.
[2] Berners-Lee, T., Gettys, J., Nielsen, H. F., Replication and Caching Position Statement, 1996. Available at http://www.w3.org/Propagation/Activity.html.
[3] Cauldwell, P., Chawla, R., Chopra, V., et al., Professional XML Web Services, Wrox Press, Birmingham, UK, 2001.
[4] Conti, M., Kumar, M., Das, S. K., Shirazi, B. A., “Quality of Service Issues in Internet Web Services”, IEEE Trans. on Computers, vol. 51, no. 6, June 2002.
[5] Dykes, S. G., Robbins, K. A., Jeffery, C. L., “An Empirical Evaluation of Client-side Server Selection Algorithms”, In Proc. of IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.
[6] Keidl, M., Kemper, A., “A Framework for Context-Aware Adaptable Web Services”, In Proc. of the 9th Int. Conf. on Extending Database Technology, Crete, Greece, March 2004.
[7] Mendonça, N. C., Silva, J. A. F., “An Empirical Evaluation of Client-side Server Selection Policies for Accessing Replicated Web Services”, In Proc. of the 20th ACM Symposium on Applied Computing (ACM SAC 2005), Special Track on Web Technologies and Applications, Santa Fe, New Mexico, USA, March 2005.
[8] Padovitz, A., Krishnaswamy, S., Loke, S. W., “Towards Efficient Selection of Web Services”, In Second Int. Joint Conf. on Autonomous Agents and Multi-agent Systems, New York, USA, July 2003.
[9] Sayal, M. O., “Selection Algorithms for Replicated Web Servers”, In Workshop on Internet Server Performance, SIGMETRICS, USA, June 1998.
[10] Silva, J. A. F., “Implementação e Avaliação Empírica de Políticas de Invocação para Serviços Web Replicados”, M.Sc. Dissertation, Mestrado em Informática Aplicada, UNIFOR, April 2004. Available at http://www.unifor.br/hp/pos/mia/31.zip.
[11] Silva, J. A. F., Mendonça, N. C., “Dynamic Invocation of Replicated Web Services”, In Proc. of the 2nd Latin American Web Congress and the 10th Brazilian Symposium on Multimedia Systems and the Web (LA-Web/WebMedia 2004), Ribeirão Preto-SP, Brazil, IEEE Computer Society Press, October 2004.
[12] Silva, J. A. F., Mendonça, N. C., “Uma Avaliação Empírica de Políticas de Invocação para Serviços Web Replicados”, In Anais do 22º Simpósio Brasileiro de Redes de Computadores (SBRC 2004), Gramado-RS, Brazil, May 2004.
