An Empirical Evaluation of Client-side Server Selection Policies for Accessing Replicated Web Services

Nabor C. Mendonça, José Airton F. Silva
Mestrado em Informática Aplicada, Universidade de Fortaleza
Av. Washington Soares, 1321, Fortaleza, CE, Brasil
Tel.: +55 85 3477 3268

{nabor, jairton.mia}@unifor.br

ABSTRACT
Replicating web services at geographically distributed servers can offer client applications a number of benefits, including higher service availability and improved response time. However, selecting the “best” server to invoke at the client side is not a trivial task, as this decision needs to account for (and is affected by) a number of factors, such as local connection capacity, external network conditions and server workload. This paper presents the results of an experiment in which we implemented and empirically evaluated the performance of five server selection policies for accessing replicated web services. The experiment involved two client stations, with different connection capacities, continuously applying the five policies to invoke a real-world web service replicated over four servers on three continents. Our results show that, in addition to the individual performance of each server, service response time under the five policies is affected mainly by client differences in terms of connection capacity and workload distribution throughout the day.

Categories and Subject Descriptors
C.2.4 [Computer-Communication Networks]: Distributed Systems – client/server, distributed applications.

Keywords
Server Selection, Replicated Web Services, Empirical Evaluation

1. INTRODUCTION
Service Oriented Computing (SOC) is an emerging software development paradigm in which services are used as the fundamental unit of design [11]. It is most commonly realized through web services [5], a SOC technology tailored to the web environment. Web services offer developers a powerful abstraction mechanism for application integration over the web, independently of programming environment, execution platform and communication protocol. In contrast to traditional web resources, such as HTML documents, image files and CGI scripts, web services are implemented as loosely-coupled components that can be described [15], discovered [14] and accessed [13] using standard XML-based languages and protocols [4].

Web services are now being deployed within a scenario where providing quality of service (QoS) guarantees is vital to the success of most Internet-based applications [6]. Some important QoS attributes can be provided through the use of replication [3]. In particular, replication has the potential to increase service availability (by having multiple servers provide the same service at different physical locations) and to reduce service access time (by accessing a nearby server).

Two main approaches have traditionally been used for accessing replicated resources on the web. The first approach aims at increasing system throughput at the server side by automatically (re)distributing client requests among multiple servers. For this reason, this approach is mostly used in situations where resources are transparently replicated over servers of the same administrative domain, such as in cluster-based architectures. The second approach aims at reducing response time at the client side by exploring mechanisms for selecting the “best” server according to the specific characteristics of each client. Hence, this approach is more appropriate for situations in which resources are replicated over servers that are geographically or administratively apart from each other.

In our work, we focus on the second approach. In particular, we are interested in investigating the impact of applying client-side server selection mechanisms to reduce the response time of geographically replicated web services. Server selection for replicated (or functionally equivalent) web services is a relatively new research area in which only a few works have been proposed thus far (e.g., [9][2][10]).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SAC’05, March 13-17, 2005, Santa Fe, New Mexico, USA. Copyright 2005 ACM 1-58113-964-0/05/0003…$5.00.
However, those works are still limited in that they either do not consider service response time, as perceived at the client side, as a first-class service quality attribute, or fail to provide empirical evidence of the overall performance gain offered by their proposed server selection mechanisms. To address those issues, we have implemented and empirically evaluated the performance gain provided by five server selection policies for accessing geographically replicated web services. The evaluation was carried out in a real-world Internet setting and involved two client stations, with different connection capacities, which continuously applied the five policies to invoke a geographically replicated web service with four replicas located on three continents. Overall, our experimental results show that, in addition to the individual performance of each replica, service response time under the five server selection policies, as measured by the client applications themselves, is affected mainly by client differences in terms of connection capacity and workload distribution throughout the day.

It should be noted that performance is not the only quality attribute that should be considered at the client side when accessing a replicated web service. Factors such as replica consistency, transaction support, security, and monetary cost may also play an important role when selecting the best replica to invoke [2]. However, in view of current practice in web service deployment and use, performance is likely to remain the dominant replica selection factor for the next few years. Therefore, an empirical evaluation of the performance gain provided by different replica selection policies, as reported in this paper, can be an important contribution to foster further research in this field.

The rest of the paper is organized as follows. First, we describe the five server selection policies in more detail (section 2). We follow with a report on the method and results of our empirical evaluation (section 3). Then, we cover related work (section 4). Finally, we conclude the paper with a summary of our research and some suggestions for future work (section 5).

2. SERVER SELECTION POLICIES
As mentioned previously, we have implemented five performance-based server selection policies using different selection criteria. The five policies are: Random Selection, Best Last, Best Median, HTTPing, and Parallel Invocation. The details of each policy are provided below.

Random Selection – As the name suggests, this policy selects the server to be invoked randomly amongst the set of servers hosting a replica of the service. This policy was also implemented in the context of other work for accessing replicas of traditional web resources such as HTML documents and images [7].

Parallel Invocation – This policy invokes all replicas “in parallel” (in fact, concurrently, using threads). The first response to be received in full is returned to the client application, with all pending invocations being interrupted and their partial responses, if available, discarded. Variations on this policy were also implemented in [9][2] for invoking semantically equivalent services.

HTTPing (or Probe) – This policy initially sends a “probe” (in the form of an HTTP HEAD message, whose typical response is in the order of hundreds of bytes) to all servers concurrently. The first server to respond to the probe is then selected for invocation, with all later responses being ignored. This policy is a variation on the TCPing policy, proposed in [7], which uses a TCP-level probe mechanism for server selection in the context of traditional web resources. In our case, the use of an HTTP-level probe has the advantage that the servers’ current workload (in addition to network latency) is also taken into account as part of the selection process.

Best Last – This policy selects the server to be invoked based on the performance of the last successful invocation made to each replica. Information on past server performance is obtained from a history log that is maintained by the (modified) AXIS framework to record past invocation results. The selected server will be the one with the lowest response time for the most recent invocation involving the same operation requested by the application with an equivalent set of parameters.
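The Parallel Invocation and HTTPing policies described above both rely on a first-response-wins pattern: all candidates are contacted concurrently and the earliest complete answer is kept. The following is a minimal sketch in Python, not the paper's Java/AXIS implementation; the server names and the `invoke` callable are illustrative stand-ins:

```python
import concurrent.futures
import time

def first_response_wins(servers, invoke):
    """Invoke all replicas concurrently and return (server, response)
    for the first complete response; later responses are ignored."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = {pool.submit(invoke, s): s for s in servers}
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in pending:
            f.cancel()  # best-effort: already-running calls are simply ignored
        winner = next(iter(done))
        return futures[winner], winner.result()

# Simulated replicas with differing latencies (hypothetical values):
def make_invoke(latencies):
    def invoke(server):
        time.sleep(latencies[server])
        return f"response from {server}"
    return invoke
```

For HTTPing the same pattern would be applied to a lightweight HEAD request rather than the full service invocation, with the winning server then invoked for real.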

Best Median – This policy is a variation on the previous policy, where the selection is based on the k most recent performances recorded for each server. The selected server will be the one with the lowest median response time amongst the set of past performances considered. Note that, for k = 1, Best Median reduces to Best Last (in our experiments, k was set to 6). The median is used instead of the mean because it is less affected by unusually large response times, which may occur sporadically, most notably at client peak times, due to the unpredictable nature of Internet latencies.

To make the service invocation process transparent to the client application programmer, we implemented these five policies as part of an automatic server selection mechanism, which was then integrated into an existing web service invocation framework. Specifically, we extended the client-side infrastructure of AXIS [1], an open source web service framework for Java. Using our modified version of AXIS, client application programmers have the option of either explicitly specifying the service replica to be invoked, or delegating that decision to the framework itself, based on a given server selection policy. In the latter case, the framework dynamically selects the “best” service replica to invoke, according to the criteria defined by the given policy.
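The history-based criterion shared by Best Last and Best Median can be sketched in a few lines. This is an illustrative Python sketch, not the modified AXIS code; it assumes a hypothetical history log mapping each server to its recent response times (most recent last) for equivalent invocations:

```python
from statistics import median

def best_median(history, k=6):
    """Select the server whose median response time over its k most
    recent invocations is lowest; k = 1 reduces to the Best Last policy."""
    return min(history, key=lambda server: median(history[server][-k:]))
```

With k > 1 a single sporadic slow response (e.g., one observed at a client peak time) cannot by itself disqualify an otherwise fast server, which is exactly the motivation given above for preferring the median over the mean.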

3. EMPIRICAL EVALUATION
Our experiment was carried out from 12 to 31 December 2003 (excluding weekends and public holidays), and consisted of two client machines, both located in the city of Fortaleza, Brazil, each dedicated to running the same client application. That application used our modified version of the AXIS framework to invoke the same replicated web service according to each of the five server selection policies described previously. The replicated web service considered in the experiments was the public UDDI Business Registry service [14]. This service is provided by four companies (namely Microsoft, IBM, SAP and NTT), whose web servers are located on three continents: North America (Microsoft and IBM), Europe (SAP) and Asia (NTT).

Despite being in the same city, the two client machines presented significant differences in terms of work environment and connection characteristics. One client was located at the University of Fortaleza (referred to as client UNIFOR), and connected to the Internet through a government-run 2 Mbps academic backbone. The other client was located at the IT department of a Brazilian bank (referred to as client BNB), and connected to the Internet through a 4 Mbps commercial backbone.

3.1. Method
Each replica of the service was invoked via the same operation, GetServiceDetail, which returns UDDI information about a list of services whose identification keys are provided as invocation parameters by the client application [14]. Since UDDI information structures have very similar sizes, even for different services, varying the number of identification keys passed as invocation parameters made it possible to control, with high accuracy, the size of the response an application would expect to receive.

The client application was implemented to invoke the replicated service continuously, so that we could observe how factors such as invocation period, response size, and server selection policy would affect the response time perceived at each client. To this end, all service invocations were organized in sessions of five cycles each. Each cycle consisted of a sequence of five service invocations, using the same set of parameters (i.e., the same list of identification keys), one invocation for each of the five server selection policies implemented. The numbers of identification keys used as invocation parameters in the five cycles of each session were 1, 10, 20, 30 and 40, respectively. These numbers corresponded to responses of approximately 9 KB, 87 KB, 174 KB, 260 KB and 347 KB in size.

In each cycle, a maximum invocation time was defined, within which every pending response had to be completely received. If a full response failed to arrive within the expected time, the corresponding invocation was recorded as a time-out exception. This measure was necessary to avoid overly long invocations, which could prevent the execution of a minimum number of sessions each day. The maximum invocation time defined for each cycle was estimated based on the mean response time observed during a calibration period, in which the same set of experiments was run without imposing any time restriction on the duration of invocations.

Just as the Best Median policy treats past results, response times observed during the experiments were analyzed using the median as the measure of central tendency. The reason, again, was to minimize the impact of bad network behavior during peak times. In our analysis, we considered the effects of both invocation period and response size on the observed mean response time for each server selection policy investigated. In addition, we considered the relative performance speedup offered by each policy with respect to Random Selection. Finally, we also analyzed the cumulative distribution over all response times observed for each policy at both clients.

3.2. Effects of Invocation Period
Initially, we analyzed the performance of the five server selection policies with respect to invocation time. The aim was to identify the periods of the day in which the observed response times presented the most significant variations. Figure 1 depicts the results obtained at each client station, considering the mean response time along each hour of the day. The graphs show only results relative to cycle 5, which, due to its large response size, presented the largest variations.

At client UNIFOR, we observed a significant increase in response time after 7am, with a corresponding decrease to the same levels only near 11pm. This period matches exactly the period of most intense academic activity at the university. We also observed high variability of performance throughout the day, even for the policies with the lowest response times. At 8am, for instance, the response times of some policies are already up to 6 seconds higher than the response times obtained by the same policies at 7am. The period of worst performance is around 3pm.

At client BNB, considering only the policies with the best performances, the period of higher response times is between 8am and 5pm, approximately. This period also matches the period of most intense activity at the bank. The periods of worst performance, for the best policies, are around 11am and 3pm. In general, we observed that response times obtained at client BNB are at higher levels than those obtained at client UNIFOR.

Figure 1. Effects of invocation period: client UNIFOR (top) and client BNB (bottom).

On the other hand, client BNB seems to be more scalable, as variations in the response times observed at that client are of lower magnitude than those observed at client UNIFOR. At client BNB, the amplitude between the lowest and highest points of the best performance curve is about 5 seconds, whereas at client UNIFOR this amplitude is near 16 seconds.

It is interesting to note that the Random Selection policy presented the worst performance at both clients, independently of response size and invocation period. The reason is that the random selection process treats all replicas equally: it may select momentarily slower servers as often as fast ones, which clearly degrades the policy’s overall performance. Our Random Selection results match those described in [7], which were obtained in the context of accessing replicated web content in the form of images and HTML documents.
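The session and cycle structure of section 3.1 — five cycles per session, one invocation per policy per cycle, each bounded by a per-cycle maximum invocation time — could be sketched as follows. The policy and invocation names are hypothetical; only the key counts and the time-out behavior come from the paper:

```python
import concurrent.futures

KEYS_PER_CYCLE = [1, 10, 20, 30, 40]  # identification keys per cycle (section 3.1)

def run_session(policies, invoke, max_times):
    """Run one session of five cycles; an invocation that exceeds its
    cycle's maximum time is recorded as a time-out (None)."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for cycle, n_keys in enumerate(KEYS_PER_CYCLE, start=1):
            for name, policy in policies.items():
                future = pool.submit(invoke, policy, n_keys)
                try:
                    elapsed = future.result(timeout=max_times[cycle - 1])
                except concurrent.futures.TimeoutError:
                    elapsed = None  # recorded as a time-out exception
                results[(cycle, name)] = elapsed
    return results

# Stand-in invocation that returns a value proportional to response size:
def fake_invoke(policy, n_keys):
    return n_keys * 9  # ~9 KB per key, per the calibration in section 3.1
```

In the experiment the per-cycle limits in `max_times` were derived from mean response times observed during the calibration period.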

3.3. Relative Speedup
To better visualize the performance differences amongst the five policies, we analyzed the relative speedup in median response time offered by each of the other four policies with respect to Random Selection. Figure 2 illustrates the relative speedup observed at both clients during the periods of more intense activity (i.e., from 7am to 11pm at client UNIFOR, and from 8am to 5pm at client BNB).

At client UNIFOR, we can see that the highest speedups were offered by the two policies based on historical data, Best Median and Best Last, followed by HTTPing and Parallel Invocation, in that order. The best case for Best Median was observed with 30-key invocations (Cycle 4), where Best Median produced a median response time up to 2.3 times faster than that observed with Random Selection. This result is justifiable, since both Best Median and Best Last consider recent invocation results as part of their selection process, thus avoiding momentarily slower servers. The negative results for Parallel Invocation are possibly due to the slower connection capacity of client UNIFOR, which may be insufficient to cope with the concurrent incoming traffic generated when the four servers are invoked simultaneously.

Figure 2. Relative speedup: client UNIFOR (top) and client BNB (bottom).


At client BNB, in turn, we can see that the overall speedup factors observed for the four policies are considerably lower than those observed at client UNIFOR. Another notable difference is in the results observed for Parallel Invocation. In contrast to the results observed at client UNIFOR, where Parallel Invocation offered the worst speedup, here that policy offered a speedup factor that is visibly higher than those of the other three policies. The speedup is more significant for 10- and 20-key invocations (Cycles 2 and 3, respectively), with Best Last and Best Median, especially the latter, only catching up in the last two cycles. These results suggest that, contrary to what was observed at client UNIFOR, the connection capacity of client BNB was enough to handle the incoming traffic generated by parallel invocation. The relative speedup offered by HTTPing was lower than that offered by Best Last and Best Median at both clients. With regard to Parallel Invocation, however, the relative performance of HTTPing varied from client to client, being above it at client UNIFOR but below it at client BNB.
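The relative speedup reported above is simply the ratio of Random Selection's median response time to each policy's median, for a given request size. A small illustrative computation (the sample data is made up, not taken from the experiment):

```python
from statistics import median

def relative_speedup(times_by_policy, baseline="random"):
    """Speedup of each policy's median response time relative to the
    Random Selection baseline; values above 1 mean faster than Random."""
    base = median(times_by_policy[baseline])
    return {policy: base / median(times)
            for policy, times in times_by_policy.items() if policy != baseline}
```

For example, a policy whose median response time is half that of Random Selection yields a speedup factor of 2.0, comparable in magnitude to the best case reported for Best Median at client UNIFOR.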

3.4. Cumulative Distribution
With the aim of investigating the performance of the five server selection policies from the perspective of the whole spectrum of observed response times, and not only in terms of average behavior, we computed the cumulative distribution function over all response times obtained with each policy. The cumulative distribution function makes it possible to evaluate, in a more general way, the performance of the five policies in terms of response time intervals, as they appeared throughout the whole experiment. Figure 3 shows the cumulative distribution results for each policy at both clients. Those results are relative to 40-key invocations (Cycle 5).

Figure 3. Cumulative distribution: client UNIFOR (top) and client BNB (bottom).

From Figure 3 we can see that the cumulative distribution results are in accordance with the results described for each client in the previous subsections. In addition, the cumulative distribution function allows a closer evaluation of each policy at a given performance level. For example, we can see that at client UNIFOR 50% of all response times obtained with the Best Last and Best Median policies were equal to or below 11.5 seconds, whereas the third best policy at that client, HTTPing, had only 35% of its observed response times within that interval. Similarly, at client BNB 50% of all response times obtained with Best Median and Parallel Invocation were equal to or below 19 seconds, whereas Best Last and HTTPing had only 45% and 35% of their response times, respectively, within that interval.
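The empirical cumulative distribution used in Figure 3 is just the fraction of observed response times at or below a given threshold. It can be computed directly; the sample values below are illustrative, not the experiment's data:

```python
def cumulative_fraction(times, threshold):
    """Empirical CDF value: fraction of observed response times
    less than or equal to the given threshold (in seconds)."""
    return sum(1 for t in times if t <= threshold) / len(times)
```

Reading a policy's curve at a fixed threshold (e.g., 11.5 seconds) gives the percentage of its invocations that completed within that interval, which is how the per-policy comparisons above were obtained.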

4. RELATED WORK
Client-side server selection mechanisms for accessing replicated web resources have been the focus of intensive research in recent years, with many server selection policies already described in the literature (e.g., [8][7][12]). However, most of those policies were implemented only in the context of accessing traditional web resources, such as HTML documents and image files. Although conceptually applicable to the web services domain, in practice those policies do not allow a direct adaptation of their server selection mechanisms to this new context – at least not the way they were originally described – due to the intrinsic characteristics of web services, which are closer to the execution model of distributed objects than to the retrieval of documents and images from a remote web server. In our work, we have adapted and/or extended some of those existing policies in the form of a new server selection mechanism specifically tailored to the context of web services. This allowed us not only to investigate the performance impact of applying those policies when accessing a geographically replicated web service, but also to compare our empirical results to those already reported in the literature for traditional web resources.

In the context of web service selection, we identify three recently proposed works that are closely related to ours. Azevedo et al. [2] propose a mechanism for dynamic selection of semantically equivalent web services. The selection mechanism is based on a number of quality and cost criteria, such as service initialization time, expected execution time for each service operation, monetary cost of service execution, and cost of compensation in case of failure. Since we consider performance the dominant quality attribute, we limit our discussion to time-based selection criteria. With regard to those criteria, the authors simply suggest using parallel invocation as an ideal selection policy, without giving any empirical evidence to support that claim. In contrast to the work of Azevedo et al., we have implemented and empirically evaluated a variety of performance-based server selection policies in a real-world Internet setting. In addition, contrary to those authors’ claim, we have found that parallel invocation may not always be the ideal choice for accessing a replicated service, and that the perceived service performance under that policy may depend, amongst other factors, on the capacity of the client connection at hand.

Parallel invocation is also used in the invocation mechanism proposed by Keidl et al. to access functionally equivalent web services [9]. In that work, however, there is a parameter that the application can set in order to allow completion of all pending responses, and not only the one that is received first. With this parameter enabled, the service invocation mechanism compares all responses received according to some quality criteria, and returns to the application only the one that best satisfies those criteria. The problem with this approach is that waiting for all responses may severely affect the overall service performance perceived by the application. In our work and in the work of Azevedo et al., in contrast, all service replicas are considered equally useful, so there is no need to compare or even retain any pending response while using a parallel invocation policy.

Finally, Padovitz et al. have also proposed mechanisms for the dynamic selection of web services [10]. That work, differently from ours and the works discussed above, focuses on ways to obtain accurate information on the state of each service immediately prior to issuing an invocation. To this end, the authors propose a system to retrieve state information from service providers, according to client-defined properties, such as performance, availability, and quality of service. Based on the information received from each server, the system then selects the “best” server for invocation. The problem with this approach is that the server selection process only takes into account information directly provided by the servers themselves, thus neglecting important factors such as local network conditions and client connection capacity.

5. CONCLUSION This paper presented a performance study of the impact of using different server selection policies when accessing geographically replicated web services. As part of our experiment, five server selection policies – namely Random Selection, Parallel Invocation, HTTPing, Best Last, and Best Median – were implemented and empirically evaluated in a real-world Internet setting. As regards future work, some interesting lines of research include expanding the experiments to new services and new client settings; designing new server selection policies, possibly from the combination of those five described here; and reimplementing (and re-evaluating) the server selection mechanism in the form of a proxy service.

6. REFERENCES
[1] AXIS, The Apache SOAP Project, Version 1.0, June 2002. Available at http://ws.apache.org/axis/.
[2] Azevedo, V., Pires, P. F., Mattoso, M., “WebTransact-EM: A Model for Dynamic Execution of Semantically Equivalent Web Services”, In Brazilian Symposium on Multimedia Systems and Web (WEBMEDIA2003), Salvador, Brasil, November 2003. In Portuguese.
[3] Berners-Lee, T., Gettys, J., Nielsen, H. F., Replication and Caching Position Statement, 1996. Available at http://www.w3.org/Propagation/Activity.html.
[4] Bray, T., Paoli, J., Sperberg-McQueen, M., et al., Extensible Markup Language (XML), 1.0 Second Edition, W3C Recommendation, October 2000. Available at http://www.w3.org/TR/REC-xml.
[5] Cauldwell, P., Chawla, R., Chopra, V., et al., Professional XML Web Services, Wrox Press, Birmingham, 2001.
[6] Conti, M., Kumar, M., Das, S. K., Shirazi, B. A., “Quality of Service Issues in Internet Web Services”, IEEE Transactions on Computers, vol. 51, no. 6, June 2002.
[7] Dykes, S., Robbins, K. A., Jeffery, C. L., “An Empirical Evaluation of Client-side Server Selection Algorithms”, IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.
[8] Hanna, K. M., Natarajan, N., Levine, B. N., “Evaluation of a Novel Two-Step Server Selection Metric”, Proceedings of the IEEE International Conference on Network Protocols, California, USA, November 2001.
[9] Keidl, M., Kemper, A., “A Framework for Context-Aware Adaptable Web Services”, In Proceedings of the Ninth International Conference on Extending Database Technology, Crete, Greece, March 2004.
[10] Padovitz, A., Krishnaswamy, S., Loke, S. W., “Towards Efficient Selection of Web Services”, Second International Joint Conference on Autonomous Agents and Multi-agent Systems, New York, USA, July 2003.
[11] Papazoglou, M. P., Georgakopoulos, D., “Service-Oriented Computing: Introduction”, Communications of the ACM, vol. 46, October 2003.
[12] Sayal, M. O., “Selection Algorithms for Replicated Web Servers”, In Workshop on Internet Server Performance, SIGMETRICS, USA, June 1998.
[13] SOAP, Simple Object Access Protocol, World Wide Web Consortium (W3C), Version 1.1, Note 08, May 2000. Available at http://www.w3.org/TR/SOAP/.
[14] UDDI, Universal Description, Discovery and Integration of Web Services, OASIS UDDI Specification Technical Committee, Version 2.0, July 2002. Available at http://www.oasis-open.org/committees/uddi-spec/doc/tcspecs.htm#uddiv2.
[15] WSDL, Web Services Description Language, World Wide Web Consortium (W3C), Version 1.1, Note 15, March 2001. Available at http://www.w3.org/TR/wsdl.
