FeatureSelector: an XSEDE-Enabled Tool for Massive Game Log Analysis

Y. Dora Cai
University of Illinois, 1205 W Clark Street, Urbana, IL 61801, 1-217-265-5103
[email protected]

Bettina Cassandra Riedl
LMU Munich, Ludwigstraße 28 VG II, 80539 München, 49-176-32296226
[email protected]

Rabindra (Robby) Ratan
Michigan State University, 404 Wilson Rd, East Lansing, MI 48824, 1-517-355-3490
[email protected]

Cuihua Shen
University of Texas at Dallas, 800 W Campbell Rd ATC10, Richardson, TX 75080, 1-972-883-4462
[email protected]

Arnold Picot
LMU Munich, Ludwigstraße 28 VG II, 80539 München, 49-89-21802252
[email protected]

ABSTRACT

Due to the huge volume and extreme complexity of online game data collections, selecting essential features for the analysis of massive game logs is both necessary and challenging. This study develops and implements a new XSEDE-enabled tool, FeatureSelector, which uses parallel processing techniques on high-performance computers to perform feature selection. By calculating probability distance measures based on K-L divergence, this tool quantifies the distance between variables in data sets and provides guidance for feature selection in massive game log analysis. The tool has helped researchers choose high-quality, discriminative features from over 300 variables and select the country-pairs with the greatest differences among 231 country-pairs in a 500 GB game log data set. Our study shows that (1) K-L divergence is a good measure for correctly and efficiently selecting important features, and (2) the high performance computing platform supported by XSEDE accelerated the feature selection process by over 30 times. Besides demonstrating the effectiveness of FeatureSelector in a cross-country analysis using high performance computing, this study also highlights lessons learned for feature selection in social science research and experience with applying parallel processing techniques to intensive data analysis.

Categories and Subject Descriptors

H.3.4 [Information Systems]: Systems and Software - performance evaluation (efficiency and effectiveness)

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. XSEDE '14, July 13 - 18 2014, Atlanta, GA, USA Copyright 2014 ACM 978-1-4503-2893-7/14/07…$15.00. http://dx.doi.org/10.1145/2616498.2616511



General Terms
Algorithms, Measurement, Performance

Keywords
feature selection; K-L divergence; game log analysis; Massively Multiplayer Online Games

1. INTRODUCTION

The popularity of Massively Multiplayer Online Games (MMOGs) is increasing every day due to the availability of the Internet, powerful computing devices, and the growing maturity of online entertainment games. Based on a report published in 2012, 135 million Americans play at least one hour of games per month, up from 56 million in 2008, roughly a 2.4-fold increase between 2008 and 2011 [1, 2]. To aid scientific research geared towards understanding games and gamers, an enormous amount of game log data from MMOGs has already been collected using advanced computational devices. These game logs, typically created by the companies hosting the games, capture players' behavior and activities, and provide a fascinating resource for studying hundreds of thousands of simultaneously and socially interacting individuals from different countries with a variety of cultural backgrounds. Although researchers acknowledge that online games are not perfectly analogous to real-world situations, they do shed light on the social activities in our society and offer a sneak preview of tomorrow's real world [3].

Travian, developed and hosted by Travian Games, is one of the most popular multi-national games in the world. This particular MMOG offers three key advantages. First, it is free - there are no subscription fees or initial costs, which makes the game equally available and attractive to all players, regardless of their level of commitment. Second, it is browser-based, requiring no unwieldy or complex client software, so the entry barrier for new players is low. Thus, the game has a broad user base, yielding a high volume of compelling social science data which is likely to be more representative of the populations involved than more restrictive games. Third, in contrast to most online games, Travian players are spread across six continents rather than concentrated in a single region. The game has attracted over 125 million players speaking 50 languages from over 200 countries [4]. Because the game is played throughout the world, researchers can inexpensively conduct truly international, cross-cultural studies.

To take full advantage of Travian's distinctive multi-national character and to meet the heavy demand for intensive computing, we have focused our research on cross-country studies using high performance computing. Our study was performed on Gordon, a supercomputer hosted at the San Diego Supercomputer Center, and utilized a data collection of 500 GB of game logs covering 2 million players from 22 countries with over 300 variables for a 12-month period (October 2009 - October 2010). Table 1 lists the 22 countries included in the study. These 22 countries were selected based on the GLOBE study [5], one of the largest investigations of the inter-relationships between societal culture, organizational culture, and organizational leadership. The GLOBE study clusters world countries into 10 regions on the basis of nine culture dimensions. By choosing at least 2 countries from each of the 10 GLOBE regions in our cross-country analysis, we ensure that our study has adequate sampling to provide representative results.

Table 1. 22 countries included in the study

Although this data set provides a unique test bed for cross-country study, we face a great challenge in our research due to the massive volume and extreme complexity of the data. We were challenged to answer: Which variables are the most effective classifiers to distinguish different countries? And which countries have the highest similarity or the largest difference? A traditional method is to examine and compare all variables and all country-pairs until discovering a convincing result. This method may work for small data sets with low dimensionality, but it is not feasible for massive game log analysis.

To address the above questions, inspired by many previous studies [6, 7, 8, 9], we designed and implemented a new XSEDE-enabled tool, FeatureSelector, for our cross-country analysis based on the Kullback-Leibler divergence (K-L divergence) [10, 11, 12]. FeatureSelector quantifies the distance between data distributions and identifies the best feature candidates for our study. FeatureSelector has been used for two purposes: one is to select the most discriminative features from over 300 variables, and the other is to choose the most dissimilar country-pairs from the 231 country-pairs formed by all possible pairings of the 22 countries.

The remainder of the paper is organized as follows. Section 2 presents a brief introduction to K-L divergence and feature selection. Section 3 details our new tool, FeatureSelector, which correctly and efficiently performs feature selection using supercomputers supported by XSEDE. Section 4 describes our feature selection results and validation experiments. Section 5 analyzes the computational cost of calculating K-L divergence. Section 6 discusses the lessons we have learned in the design and implementation of FeatureSelector, and Section 7 presents directions for future research.

2. K-L Divergence and Feature Selection

Kullback-Leibler divergence (K-L divergence) is also known as information divergence, information gain, or relative entropy [10, 11, 12]. It is computed by:

KL(P, Q) = sum_{i=1}^{n} P_i * log(P_i / Q_i)

where P1 to Pn are values from one data distribution with sum(Pi) = 1.0, and Q1 to Qn are values from the other data distribution with sum(Qi) = 1.0. K-L divergence is non-symmetric; that is, KL(P, Q) is not the same as KL(Q, P).

K-L divergence has many variants; Jensen-Shannon divergence (J-S divergence) is one of them. J-S divergence is a symmetrized and smoothed version of K-L divergence. It is defined by:

JS(P, Q) = (1/2) * KL(P, M) + (1/2) * KL(Q, M), where M = (P + Q) / 2

where P1 to Pn are the values from one data distribution with sum(Pi) = 1.0, and Q1 to Qn are the values from the other data distribution with sum(Qi) = 1.0.

Scores of the K-L divergence and its variants are always non-negative, with KL(P, Q) equal to zero if and only if P = Q everywhere. The K-L divergence measures how much one probability distribution differs from another [13]. The higher the K-L divergence score, the more difference exists between the two counterparts. The K-L divergence measure has been widely used in pattern recognition, machine learning, and data mining [9, 14].
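To make the two definitions concrete, the following minimal NumPy sketch computes KL(P, Q) and the standard J-S divergence for two discrete probability vectors. It is an illustration rather than the authors' implementation, and the eps constant used to guard against zero probabilities is our assumption.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """K-L divergence KL(P, Q) = sum_i P_i * log(P_i / Q_i).

    p and q are 1-D arrays of probabilities that each sum to 1.0.
    A tiny eps guards against log(0) and division by zero.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    """Standard J-S divergence: average K-L divergence to the mixture M."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Example: two small distributions over 4 bins.
p = [0.1, 0.4, 0.4, 0.1]
q = [0.25, 0.25, 0.25, 0.25]
print(kl_divergence(p, q), kl_divergence(q, p), js_divergence(p, q))
```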

Feature selection, the process of selecting a small subset of original features from a large number of variables according to certain criteria, is an important and frequently used technique in the analysis of datasets with massive volume and high dimensionality. It reduces the number of features, removes irrelevant and redundant variables, and cleans up noisy data. The objective of feature selection is three-fold: helping better understand the underlying data, providing faster and more cost-effective prediction, and improving prediction accuracy [15, 16, 17].

Feature selection can be performed by a variety of mechanisms; one well-known method is to use the K-L divergence. Although there have been many efforts on feature selection in various research areas, for example, text categorization, genomic analysis, combinatorial chemistry, and speech and image recognition [6, 7, 8, 9, 16], to the best of our knowledge, feature selection has not been used much in social science studies, especially in massive game log analysis. In addition, we have not seen much in the literature that applies high performance computing to feature selection.

3. A Tool for Feature Selection

To focus our research on the most dissimilar country-pairs among the 22 countries and on the most discriminative features among the 300+ variables in our cross-country study (which is part of our massive game log analysis project), we designed and implemented a new XSEDE-enabled tool, FeatureSelector, to perform feature selection based on the K-L divergence and its variants using high performance computers. FeatureSelector takes three steps in a feature selection process.

Step 1: Data transformation

The Travian game log is a unique collection of large-scale longitudinal datasets which captures players' online behaviors at the resolution of seconds by tracing the development of hundreds of variables, such as playing time, level advancement, character achievement, degree of damage, communication behaviors, and trading transactions. Since the longitudinal data format is not suitable as the input stream for the K-L divergence computation, we transformed the data before calculation. Taking the variables "logon time" and "logoff time" (which represent the times when a player begins and ends a game session, respectively) as an example, we first computed the play time (in seconds) of each game session by subtracting the "logon time" from the "logoff time". We then further computed, for each player, the minimum, maximum, average, and median play time per game session, and the total play time over all game sessions. In addition, since K-L divergence measures the divergence of probability distributions between two data sets, we further converted the data into a probability representation by calculating the percentage of each value over all data points. After these data transformations, the data set was ready for computing the scores of K-L divergence. Fig. 1 gives an example of the data transformation for the variable "total play time".

Fig. 1. An example of data transformation on "total play time"
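As a rough illustration of Step 1, the pandas sketch below derives per-player play-time summaries from a few hypothetical session records and converts one of them into a binned probability distribution. The column names, toy data, and bin count are assumptions for illustration, not the actual Travian schema or the authors' code.

```python
import pandas as pd

# Hypothetical session-level log: one row per game session.
sessions = pd.DataFrame({
    "player_id":  [1, 1, 2, 2, 2],
    "logon_time": pd.to_datetime(["2010-01-01 10:00", "2010-01-02 09:00",
                                  "2010-01-01 20:00", "2010-01-03 21:00",
                                  "2010-01-04 22:00"]),
    "logoff_time": pd.to_datetime(["2010-01-01 10:45", "2010-01-02 09:20",
                                   "2010-01-01 20:05", "2010-01-03 23:00",
                                   "2010-01-04 22:30"]),
})

# Session play time in seconds = logoff time - logon time.
sessions["play_time"] = (sessions["logoff_time"] - sessions["logon_time"]).dt.total_seconds()

# Per-player summaries: min, max, mean, median per session, and total over all sessions.
per_player = sessions.groupby("player_id")["play_time"].agg(
    ["min", "max", "mean", "median", "sum"])

# Convert one summary variable (e.g., total play time) into a probability
# distribution over bins, so it can feed the K-L divergence computation.
bins = pd.cut(per_player["sum"], bins=10)
probabilities = bins.value_counts(normalize=True).sort_index()
print(per_player)
print(probabilities)
```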

Step 2: Divergence calculation

For each variable, divergence scores were computed for each pair of data sets representing two countries. Since the K-L divergence is non-symmetric, two K-L divergence scores were computed, KL(P, Q) and KL(Q, P): one for the K-L divergence of Q from P and the other for the K-L divergence of P from Q. The J-S divergence was then calculated as the average of KL(P, Q) and KL(Q, P). The pseudo code for this process is shown in Fig. 2.

Step 3: Feature selection

After the divergence scores were computed for all variables and all country-pairs, the feature-selection step itself was straightforward. The most discriminative features were the ones with the highest K-L divergence scores among all features, and the most dissimilar country-pairs were the ones with the highest K-L divergence scores among all country-pairs.

The first and second steps were performed on supercomputers. The Travian game log data were first retrieved from a database hosted on a local server and transferred to an XSEDE facility by GridFTP; after data transformation and divergence calculation, the divergence scores were sent back and saved into a local database for future use.
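Since the pseudo code of Fig. 2 survives only as a figure, the following is a hedged Python sketch of the kind of per-feature, per-country-pair loop that Steps 2 and 3 describe. The helper load_country_distribution and the synthetic distributions are assumptions, and scipy.stats.entropy supplies the directional K-L scores.

```python
from itertools import combinations
import numpy as np
from scipy.stats import entropy  # entropy(p, q) gives KL(p, q)

# Hypothetical loader: returns the binned probability distribution of one
# variable (e.g., "total_play_time") for one country. An assumption for
# illustration; the real tool reads transformed game-log tables.
def load_country_distribution(country, variable):
    rng = np.random.default_rng(hash((country, variable)) % 2**32)
    counts = rng.integers(1, 100, size=25).astype(float)
    return counts / counts.sum()

countries = ["US", "IL", "IT", "RU"]          # subset of the 22 countries
variables = ["total_play_time", "min_play_time"]

scores = {}
for variable in variables:
    for c1, c2 in combinations(countries, 2):  # every country-pair
        p = load_country_distribution(c1, variable)
        q = load_country_distribution(c2, variable)
        kl_pq, kl_qp = entropy(p, q), entropy(q, p)   # both directions
        symmetric = 0.5 * kl_pq + 0.5 * kl_qp         # symmetrized score, as described in the text
        scores[(variable, c1, c2)] = symmetric

# Step 3: rank features and country-pairs by their divergence scores.
for key, value in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(key, round(value, 4))
```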

Table 2. Five play-time related features for Experiment 1

Based on the divergence scores, our study identifies "total play time" as the most discriminative feature and "minimum play time" as the least discriminative feature. Fig. 3 shows the distribution of J-S divergence scores for these two features. Thus, "total play time" is the better feature for our game log analysis: it has higher predictive power to distinguish countries, so focusing on it can speed up the analysis and improve the accuracy of prediction models.

Fig. 2. Pseudo code for FeatureSelector

4. Result and Validation

We used FeatureSelector to perform feature selection on the Travian game logs, and we present the results of one of our experiments in this section. The experiment focused on choosing the important features from the variables related to play time. The length of time a gamer plays online depends on a variety of factors, such as the game's artistic design (whether it is a fantasy-based game or a strategy game), the game's cultural background (whether it is Eastern- or Western-style), and players' demographics (age, gender, race, and education level). Studying play-time patterns can help explore cultural influences, players' lifestyles, and players' personalities in the virtual world. Many variables related to play time were recorded in the Travian game logs, for example, how long a gamer played in each session, how frequently a gamer played, what time of day a gamer preferred to start, and what season of the year a gamer liked to be online. In our first experiment, we focused on the five play-time related candidate variables listed in Table 2.

Fig. 3. Differences of divergence scores on "total play time" and "minimum play time"

Such a variable-based analysis, i.e., selecting a small set of high-divergence variables, is also very useful for classification, clustering, prediction, and other data mining / machine learning tasks. Clearly, analyzing a small set of highly discriminative variables can lead to better analytical results and clearer insight across many kinds of research studies. The K-L divergence computation can thus serve as a preprocessing step when choosing quality features for subsequent analysis.

We also studied the differences between the 231 country-pairs in this experiment. As demonstrated in Fig. 3, our study reveals that there are significant differences between countries on "total play time", as identified by the higher divergence scores. The country-pair of Israel and the United States (IL-US) takes first place and the country-pair of Italy and Russia (IT-RU) takes last place in this regard. However, in a very different pattern, almost all countries show a close similarity on "minimum play time", reflected in consistently low divergence scores, likely because of a "floor effect" in the minimum amount of time a player must play to be included in the data set (i.e., 3 minutes). Fig. 4 depicts the divergence scores for the IL-US and IT-RU country-pairs on the five play-time related features. The IL-US pair has higher divergence scores than the IT-RU pair, especially on the features "total play time" and "number of play sessions". However, for "minimum play time", both country-pairs show small divergence scores. This result suggests that the IL-US pair is a better choice if we want to study country differences, while the IT-RU pair is a good option if we aim to find country similarities.

Fig. 4. Divergence on play time variables

To confirm the correctness of K-L divergence (and its variants) in predicting distance / similarity, we performed a data distribution analysis on selected data sets. We present our results using the data distribution of the variable "total play time". Since the minimum play time is 180 seconds in Travian games, we make 180 seconds the starting point in this example. The validation process was completed as follows. All data points were first grouped into 25 bins on a log-2 scale of total play time (in seconds); that is, the first bin holds the data points with play time of 180 to 182 seconds, and the 25th bin holds the data points with play time of (180 + 2^23) to (180 + 2^24) seconds (i.e., 97 days to 194 days). By counting the number of data points in each bin, the data distribution is illustrated as a histogram (a sketch of this binning appears at the end of this section). Our study confirms that the K-L divergence (and its variants) correctly measures the distance between the data distributions of two data sets, and that it is highly effective for feature selection.

Fig. 5 shows the data distribution of "total play time" for Israel and the United States. The horizontal axis represents the "total play time" (in seconds) for each bin. The vertical axis represents the number of data points included in each bin. Consistent with the relatively high divergence score between Israel and the USA (score = 5.728), the data distributions of these two countries differ significantly in the bins of 2^5 to 2^11 (i.e., total play time in the range of 212 seconds to 71 minutes). It is also noticeable that the number of US gamers who played for 5 to 8 minutes is much larger than the number of Israeli gamers with the same play time (32909 players vs. 6325 players). In contrast, Fig. 6 shows the data distribution of "total play time" for Italy and Russia. In agreement with a relatively low divergence score (score = 3.207), the data distribution curves for these two countries overlap to a significant degree.

Fig. 5. Data distribution on Total Play Time between Israel and USA

Fig. 6. Data distribution on Total Play Time between Italy and Russia

Our experiments demonstrate the predictive power of K-L divergence. Similar to other studies [6, 7, 9], we found that the K-L divergence is a good measure for correctly and efficiently selecting important features. FeatureSelector, which is based on the K-L divergence, has helped us choose high-quality, discriminative features from over 300 variables and select the country-pairs with the greatest differences from the 231 country-pairs.
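For reference, here is a hedged sketch of the log-2 binning used in this validation; it produces the per-country probability vectors that feed the divergence scoring. The synthetic play-time samples and the exact bin-edge convention are assumptions, not the paper's implementation.

```python
import numpy as np

def log2_histogram(total_play_time_seconds, floor=180, n_bins=25):
    """Group play times into log-2-scale bins above the 180-second floor and
    return per-bin probabilities suitable for divergence scoring.

    The first bin spans [floor, floor + 2] seconds; the exact edge convention
    for the upper bins is our assumption, not taken from the paper.
    """
    edges = np.concatenate(([floor], floor + 2.0 ** np.arange(1, n_bins + 1)))
    counts, _ = np.histogram(np.asarray(total_play_time_seconds), bins=edges)
    return counts / max(counts.sum(), 1)

# Toy example: two synthetic samples of total play time (in seconds).
rng = np.random.default_rng(0)
country_a = 180 + rng.lognormal(mean=8, sigma=2, size=10_000)
country_b = 180 + rng.lognormal(mean=9, sigma=2, size=10_000)
p, q = log2_histogram(country_a), log2_histogram(country_b)
print(np.round(p, 3))
print(np.round(q, 3))
```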

5. Feature Selection Using Supercomputers

We performed the feature selection process on Gordon, a supercomputer hosted at the San Diego Supercomputer Center. Gordon is equipped with 1,024 compute nodes, each with 64 GB of RAM. Gordon has large-memory "supernodes" capable of providing over 2 TB of coherent memory, and it also makes 300 TB of flash-memory SSDs available on its I/O nodes. The large number of high performance compute nodes allowed us to parallelize the data transformation and the K-L divergence calculation, and the rich memory environment helped us substantially reduce the amount of writing to disk during those computations.

Compared to the data transformation (Step 1), the divergence calculation (Step 2) consumes far more computational resources. The cost of the divergence calculation depends on multiple factors; in our case, those factors include the number of countries, the player count in each country, the volume of data points for each player, and the number of observed variables. Based on our benchmark on a standalone local server with 4 CPUs and 8 GB of RAM, it took over 150 hours of serial processing to calculate the K-L divergence scores for a 100 GB data set over 231 country-pairs and 16 variables. On average, it took 150 seconds to compute each country-pair for a single variable with about one million data points per country. Since the data set has 231 country-pairs and over 300 variables, we estimated that computing the K-L divergence scores for the entire data set would take roughly 120 days of serial processing on this local server.

Gordon allowed us to meet this computational challenge. Our benchmark experiment shows that, for the same 100 GB data set, the computation took 4.5 hours on Gordon when running 16 parallel jobs, with each job performing an independent single-feature computation over the 231 country-pairs. In contrast to the 150 seconds needed in serial processing, the parallel processing took on average 4.5 seconds to calculate each country-pair for one variable. This indicates that we could obtain the complete K-L divergence scores for the entire data set within 4 days using 16 parallel processes on Gordon, an improvement in performance of over 30 times. We expect even better performance if we scale up the degree of parallelism to use more compute nodes [21, 22].
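The production runs used independent batch jobs on Gordon; the sketch below illustrates the same one-job-per-feature decomposition on a single node using Python's concurrent.futures. The helper functions, country list, and placeholder distributions are assumptions for illustration only.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations
import math

# Hypothetical stand-ins for the real data access and divergence routines.
def load_country_distribution(country, feature):
    return [1.0 / 25] * 25  # placeholder probability vector over 25 bins

def symmetric_score(p, q):
    """Symmetrized K-L score, averaging the two directional divergences."""
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, q) + 0.5 * kl(q, p)

COUNTRIES = ["US", "IL", "IT", "RU", "DE", "FR"]

def score_one_feature(feature):
    """One independent job: score every country-pair for a single feature."""
    results = {}
    for c1, c2 in combinations(COUNTRIES, 2):
        p = load_country_distribution(c1, feature)
        q = load_country_distribution(c2, feature)
        results[(feature, c1, c2)] = symmetric_score(p, q)
    return results

if __name__ == "__main__":
    features = [f"feature_{i}" for i in range(16)]
    # 16 worker processes, one feature per job, mirroring the 16 parallel jobs
    # described above (the real runs were independent batch jobs on Gordon).
    with ProcessPoolExecutor(max_workers=16) as pool:
        all_scores = {}
        for partial in pool.map(score_one_feature, features):
            all_scores.update(partial)
    print(len(all_scores), "scores computed")
```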

6. Discussion

Distance or similarity measures are essential for feature selection, and numerous such measures have been proposed [18, 19, 20]. In our study, we applied the K-L divergence and the J-S divergence. Since K-L divergence is non-symmetric, we computed it in both directions, that is, the K-L divergence from Q to P and the K-L divergence from P to Q. We also computed the J-S divergence, a symmetrized variant of K-L divergence, which can be calculated easily once the bidirectional K-L divergences are available. Our study revealed that these divergence measures are highly correlated, and any one of them is effective for feature selection. Given that it is a symmetric measure, J-S divergence was the preferred method in this study.

Although feature selection is not the ultimate goal, it provides valuable information for massive game log analysis. The K-L divergence scores offer a quick snapshot of the difference between data distributions, accurately predict the outcome of further data analysis, and effectively guide researchers toward the most important features. For example, after discovering that "total play time" is a discriminative feature, we further explored the causes of the large differences in this variable. We discovered some interesting semantic relationships between demographics and play time among countries, which we will report in a separate paper.

Most game logs are longitudinal data collections; they are not in a probability distribution format. Therefore, the data must be transformed before calculating K-L divergence. As we have demonstrated in this paper, the data transformation process may include data summarization, new variable generation, and probability computation. All of these transformations can be realized with standard statistical methods.

One common characteristic of massive game logs is their error-proneness: most data collections contain null values and outliers. Noisy data may skew the patterns of data distributions and lead to unreasonable K-L divergence scores. To improve result accuracy, noisy and outlier data must be eliminated before computing K-L divergence, using data mining and machine learning methods [14]. This can be done by first finding the unexpected or extreme data in the collected data set and then removing them with proper procedures, such as expert validation or cut-off thresholds. In our study, this cleaning process was conducted before applying FeatureSelector.

Another special issue in our study was size equality between the compared countries. Based on the definition of K-L divergence, the calculation enforces a constraint that the two measured data sets must be of the same size [10, 11, 12]. However, in our cross-country analysis, the data volume in one country was often different from that in another. To satisfy the equal-size requirement, we applied Stratified Sampling [14] to select a subset of data from each country, using the size of the smaller country in each pair as the upper bound on the number of samples (a sketch of this sampling appears at the end of this section). In the process of Stratified Sampling, all data points were first arranged into mutually disjoint bins, and then samples were randomly selected based on the approximate percentage of each bin in the overall data set. Although the typical K-L divergence calculation uses special small values (e.g., ε) to replace certain missing values, we decided not to use that method in our case, because the numbers of data points in the compared countries were so different that artificially added values could skew the K-L divergence score. We found the Stratified Sampling approach to be not only more efficient but also more effective in deriving quality answers.

One concern about performing feature selection is cost effectiveness. While feature selection is largely beneficial, it also has computational costs. This cost is justifiable, however. Instead of aimlessly examining every variable, the feature selection process identifies the most important features from a large number of candidates. By focusing data analysis on the discriminative features, we are able to save overall computational cost and produce higher quality results. The availability of high performance computing (HPC) and advanced parallel processing techniques has made feature selection a reality for massive data analysis.
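As referenced above, the following hedged pandas sketch shows one way to downsample the larger country of a pair to the smaller country's size while preserving the approximate proportion of each stratum. The column names, strata, and toy data are assumptions rather than the authors' implementation.

```python
import pandas as pd

def stratified_downsample(df, stratum_col, target_size, seed=42):
    """Randomly sample ~target_size rows from df while preserving the
    approximate proportion of each stratum (bin) in the original data."""
    fractions = df[stratum_col].value_counts(normalize=True)
    parts = []
    for stratum, frac in fractions.items():
        group = df[df[stratum_col] == stratum]
        n = min(len(group), int(round(frac * target_size)))
        parts.append(group.sample(n=n, random_state=seed))
    return pd.concat(parts)

# Hypothetical per-player records for the larger country of a pair; the
# "stratum" column stands in for the disjoint bins used in the paper.
big = pd.DataFrame({"stratum": ["low"] * 600 + ["mid"] * 300 + ["high"] * 100,
                    "total_play_time": range(1000)})
small_size = 400  # size of the smaller country in the pair
balanced = stratified_downsample(big, "stratum", target_size=small_size)
print(balanced["stratum"].value_counts())
```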

7. Future Work

Looking ahead, we are interested in extending our work to other similar studies in the social sciences, including more extensive analysis of game log data with additional functions built on the selection of a small set of discriminative features in the HPC environment. We also plan to construct a user-friendly web interface that lets users perform feature selection online based on the K-L divergence. As part of the effort to make the tool more scalable for "Big Data", we plan to experiment with and integrate this tool with new technologies such as cloud computing and Hadoop.

8. Acknowledgement

This material is based in part upon work supported by the National Science Foundation under grants IIS-1247861, OCI-0838231, and OCI-0838402, and by the Deutsche Forschungsgemeinschaft (DFG). This research was also supported in part by the National Science Foundation via the XSEDE project's Extended Collaborative Support Service under grant NSF-OCI 1053575. The data used for this research was provided by Travian Games. We would like to thank the Gordon group at SDSC for their constant support, and the Campus Cluster group at NCSA/UIUC for their help hosting the game log databases. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

9. References

[1] Online Gaming and Digital Distribution. Parks Associates, 2012. www.parksassociates.com/bento/shop/whitepapers
[2] D. Williams, N. Yee, and S. Caplan. Who Plays, How Much, and Why? A Behavioral Player Census of a Virtual World. In Proc. of the National Communication Association Conference, 2008.
[3] K. Riopelle, J. C. Gluesing, T. C. Alcordo, M. L. Baba, D. Britt, W. McKether, L. Monplaisir, H. H. Ratner, and K. H. Wagner. Context, Task and the Evolution of Technology Use in Global Virtual Teams. In C. B. Gibson and S. G. Cohen (Eds.), Virtual Teams That Work: Creating Conditions for Virtual Team Effectiveness: 239-264. New York, NY: John Wiley & Sons, 2003.
[4] www.traviangames.com/en/press/newsarchive (visited Feb. 20, 2014).
[5] R. J. House, P. J. Hanges, M. Javidan, P. W. Dorfman, and V. Gupta, editors. Culture, Leadership, and Organizations. Sage Publications, 2004.
[6] K. Schneider. A New Feature Selection Score for Multinomial Naive Bayes Text Classification Based on KL-Divergence. In Proc. of ACL, 2004.
[7] E. P. Xing, M. I. Jordan, and R. M. Karp. Feature Selection for High-Dimensional Genomic Microarray Data. In Proc. of ICML, Vol. 1, 2001.
[8] C. Lee and G. G. Lee. Information Gain and Divergence-Based Feature Selection for Machine Learning-Based Text Categorization. Information Processing & Management, Volume 42, Issue 1, Jan. 2006, pages 155-165.
[9] J. R. Hershey and P. A. Olsen. Approximating the Kullback Leibler Divergence between Gaussian Mixture Models. In Proc. of the International Conference on Acoustics, Speech and Signal Processing, Volume 4, 2007.
[10] S. Kullback and R. A. Leibler. On Information and Sufficiency. Annals of Mathematical Statistics, 22 (1): 79-86, 1951.
[11] S. Kullback. Information Theory and Statistics. John Wiley and Sons, NY, 1959.
[12] S. Kullback. Letter to the Editor: The Kullback-Leibler Distance. The American Statistician, 41 (4): 340-341, 1987.
[13] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley, New York, 1991.
[14] J. Han, M. Kamber, and J. Pei. Data Mining: Concepts and Techniques, 3rd ed., Morgan Kaufmann, 2011.
[15] H. Liu and H. Motoda, editors. Computational Methods of Feature Selection. Chapman & Hall, 2008.
[16] I. Guyon. An Introduction to Variable and Feature Selection. Journal of Machine Learning Research, 3 (2003): 1157-1182.
[17] H. Liu, H. Motoda, R. Setiono, and Z. Zhao. Feature Selection: An Ever Evolving Frontier in Data Mining. Journal of Machine Learning Research - Proceedings Track, 10: 4-13, 2010.
[18] Kullback-Leibler divergence. Wikipedia, en.wikipedia.org/wiki/Kullback-Leibler_divergence
[19] S. Cha. Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions. International Journal of Mathematical Models and Methods in Applied Sciences, Issue 4, Volume 1, 2007.
[20] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Section 14.7.2, Kullback-Leibler Distance. Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, 2007.
[21] C. Brown, I. Ahmed, Y. D. Cai, M. S. Poole, A. Pilny, and Y. Atouba. Comparing the Performance of Group Detection Algorithm in Serial and Parallel Processing Environments. In Proc. of XSEDE12, 2012.
[22] Y. D. Cai, I. Ahmed, A. Pilny, C. Brown, Y. Atouba, and M. S. Poole. SocialMapExplorer: Visualizing Social Networks of Massively Multiplayer Online Games in Temporal-Geographic Space. In Proc. of XSEDE13, 2013.
