Online Microsurveys for User Experience Research

Victoria Schwanda Sosik, Cornell University, 301 College Ave., Ithaca, NY 14850
David Huffaker, Google Inc., Mountain View, CA 94043
Paul McDonald, Google Inc., Mountain View, CA 94043
Gueorgi Kossinets, Google Inc., Mountain View, CA 94043
Aaron Sedley, Google Inc., Mountain View, CA 94043
Elie Bursztein, Google Inc., Mountain View, CA 94043
Kerwell Liao, Google Inc., Mountain View, CA 94043
Sunny Consolvo, Google Inc., Mountain View, CA 94043

Abstract

This case study presents a critical analysis of microsurveys as a method for conducting user experience research. We focus specifically on Google Consumer Surveys (GCS) and analyze a combination of log data and GCSs run by the authors to investigate how they are used, who the respondents are, and the quality of the data. We find that such microsurveys can be a great way to quickly and cheaply gather large amounts of survey data, but that there are pitfalls that user experience researchers should be aware of when using the method.


Author Keywords
Microsurveys; user experience research; user research methods

ACM Classification Keywords
H.5.2. User Interfaces: Theory and methods.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Copyright is held by the author/owner(s). CHI 2014, April 26–May 1, 2014, Toronto, Ontario, Canada. ACM 978-1-4503-2474-8/14/04. http://dx.doi.org/10.1145/2559206.2559975

Introduction
To keep up with fast-paced design and development teams, user researchers must develop a toolkit of methods to quickly and efficiently address research questions. One such method is the microsurvey, a short survey of only one to three questions. There are several commercial microsurvey offerings—including Google Consumer Surveys (GCS), SlimSurveys, and Survata—that promise to provide people with large amounts of data quickly and at a relatively low cost. In this case study, we present a critical analysis of one type of microsurvey, Google Consumer Surveys, addressing questions about how they are being used, who their respondents are, and what the quality of the data they collect is. We conclude with some current best practices for using this method in user research.

One Example of a Microsurvey: GCS

Figure 1. An example of how a respondent encounters GCS. They are asked to answer a short survey question, or share the page they are reading via social media, in order to continue reading the publisher's content.

Figure 2. Part of the GCS results interface. To the left are controls to filter responses by demographics, and results to a multiple choice question are shown to the right.

Since we use GCS in this case study, we first provide a brief overview of how it works. Each GCS respondent is shown only one question, or two if there is a screening question. If a survey has more than one question, then each respondent is randomly shown only one of the survey questions. The survey designer can choose one of twelve predefined question formats that include open ended, single answer, multiple answer, and rating scale responses. Certain question formats allow for images in the question or responses. Questions and responses must be short, with 125 character and 44 character limits respectively; multiple choice questions are limited to showing 5 response options to each respondent. Survey designers can request a representative population, or target respondents based on specific demographics (as inferred by IP addresses and DoubleClick cookies) or by using a screening question. Questions are then shown to people trying to access a publisher's premium content—primarily in the categories of News, Arts & Entertainment, and Reference—and people answer the question in order to continue reading the content (see Figure 1); in this way, these microsurveys act as a surveywall between the respondent and the content they want to access.
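As a concrete illustration of these constraints, the following minimal Python sketch pre-checks a draft question against the limits described above. It is not an official GCS tool; the function name and the example question are our own, and the numeric limits simply restate those in the text.

# Illustrative pre-flight check for a draft microsurvey question.
# Not the GCS API; the limits restate the constraints described in the text.
MAX_QUESTION_CHARS = 125   # question length limit
MAX_RESPONSE_CHARS = 44    # response option length limit
MAX_VISIBLE_OPTIONS = 5    # options shown to each respondent

def check_question(question, options):
    """Return a list of human-readable problems with a draft question."""
    problems = []
    if len(question) > MAX_QUESTION_CHARS:
        problems.append(f"Question is {len(question)} chars; limit is {MAX_QUESTION_CHARS}.")
    for opt in options:
        if len(opt) > MAX_RESPONSE_CHARS:
            problems.append(f"Option '{opt}' is {len(opt)} chars; limit is {MAX_RESPONSE_CHARS}.")
    if len(options) > MAX_VISIBLE_OPTIONS:
        problems.append(f"{len(options)} options supplied; only {MAX_VISIBLE_OPTIONS} are shown to each respondent.")
    return problems

if __name__ == "__main__":
    for issue in check_question(
        "Which web browser(s) do you use most often?",
        ["Chrome", "Firefox", "Safari", "Internet Explorer", "Opera", "Other"],
    ):
        print(issue)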


After data is collected, survey designers can view the results in the GCS interface, which provides users with basic analysis tools including comparison of results by different demographics and automatic, editable clustering of open-ended text responses (see Figure 2).

Results: Analysis of GCS
We analyzed GCS log data and data from several surveys run by the authors. Some of the surveys were run specifically to gather data about GCS as a method, and others were run to answer user research questions for our product teams; however, we analyzed them from a methodological perspective for this case study.

GCS by the Numbers
GCS log data shows that the two most frequently used types of questions are multiple-choice questions (see Table 1). Together, single and multiple answers make up over 80% of all deployed GCS questions. However, the most common question type—multiple answer—has the lowest completion rate (see Table 1). On average, respondents spend 9.7 seconds responding to a GCS question, and the modal response time is 4 seconds (see Figure 3). GCSs also collect data very quickly—on average, surveys are approved to start collecting data between one and four hours after being created, and complete data collection in about two to four days. General population surveys finish data collection on the lower end of that range, whereas targeted surveys tend to take the full four days.
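Figures like these are aggregations over response logs. As a minimal sketch of how such numbers could be derived, assuming a hypothetical response-level export with columns question_type, completed, and response_time_sec (this schema and file name are our own illustration, not a real GCS export):

# Sketch of the kind of aggregation behind "GCS by the Numbers".
# The log schema and file name below are assumptions for illustration.
import pandas as pd

log = pd.read_csv("gcs_responses.csv")  # hypothetical export

# Responses and completion rate per question type (cf. the completion-rate
# column of Table 1; usage in Table 1 is counted over deployed questions).
by_type = log.groupby("question_type").agg(
    responses=("completed", "size"),
    completion_rate=("completed", "mean"),
)
print(by_type.sort_values("responses", ascending=False))

# Mean and modal response time for completed responses (cf. Figure 3).
times = log.loc[log["completed"] == 1, "response_time_sec"]
print("mean (s):", times.mean(), "mode (s):", times.round().mode().iloc[0])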



Who are GCS Respondents?
In November 2012, PEW Research ran a study to compare GCS demographics with those of their telephone panels. Their overall findings were that GCS respondents "conform closely to the demographic composition of the overall internet population," and that there is little evidence that GCS is biased towards heavy internet users [4].

We ran a series of GCSs to dig deeper into demographic and technology-use questions. We found that the rate of tablet ownership (PEW = 34%, GCS = 28%), cell phone ownership (91%, 67%), and use of cell phones (35%, 33%) or the internet for banking (61%, 48%) was lower among GCS than PEW respondents. In terms of demographics, GCS shows lower rates across age and gender (see Table 2). With respect to social networking site usage among older Americans, our findings using GCS were close to PEW's (see Table 3). We also compared GCS respondents to respondents from Survey Sampling International (SSI) and Knowledge Networks (KN) with respect to internet use and technology adoption. Results across the panels were similar, with SSI respondents tending to be the heavier internet users and technology adopters, and KN the lowest (see Table 4). Overall, while we notice demographic differences between the survey samples—likely due to the number of unknowns in GCS—technology usage and adoption is similar across all four samples, with PEW and KN representing the high and low extremes, respectively.

Question Type            Usage     Completion Rate
Multiple answers         62.04%    20.56%
Single answer            21.71%    39.37%
Open ended                4.62%    27.03%
Rating                    3.81%    34.19%
Numeric open ended        1.60%    25.30%
Rating with text          1.50%    34.09%
Rating with image         1.30%    27.20%
Large image choice        0.99%    28.49%
Side-by-side images       0.92%    29.37%
Image with menu           0.82%    36.57%
Open ended with image     0.69%    27.79%
Two choices with image    --        --

Table 1. Rate of usage among survey designers and completion rate among respondents for the 12 different types of GCS questions.

Demographic    PEW    GCS
Men            32%    27%
Women          35%    27%
18–24          33%    18%
25–34          37%    30%
35–44          49%    32%
45–54          38%    28%
55–64          28%    26%
65+            18%    23%
Unknown age    —      27%

Table 2. Inferred GCS demographics compared to PEW demographics.

Survey question: Do you ever use the internet to use a social networking site like MySpace, Facebook, or LinkedIn.com?
  PEW: 42% [age 50+]    GCS: 46% [age 45+]
Survey question: What is the primary social networking site you use?
  Facebook (85%), LinkedIn (6%), Twitter (4%), Google+ (3%), MySpace (1%)

Table 3. Social Network usage among older Americans, using PEW and GCS survey samples.

Figure 3. Distribution of response times in seconds to GCS survey questions.

Respondents' Attitudes Toward Surveywalls
We ran a GCS to explore respondents' attitudes toward surveywalls that stand between them and content they are trying to access. We asked them which of five options they would prefer when trying to access premium content. We found that the most popular response was taking a short microsurvey (47%), followed by having content sponsored by an advertiser (34%), making a small one-time payment (10%), purchasing a subscription (6%), and other (3%; which they then had to specify as open ended text).



Item (scale)                                                           KN    GCS   SSI
For personal purposes, I normally use the Internet
  (5 = every hour or more, 1 = once per week or less)                  3.2   3.5   3.8
Other people often seek my ideas and advice regarding technology
  (5 = describes me very well, 1 = describes me very poorly)           2.7   3.1   3.2
I am willing to pay more for the latest technology (same as above)     2.3   2.6   3.1
Which of the following best describes when you buy or try out new
  technology? (5 = Among the first people, 1 = I am usually not
  interested)                                                           2.5   2.6   3.1
How frequently do you post on social networks?
  (5 = multiple times a day, 1 = once a month or less)                 1.7   2.1   2.4

Table 4. Technology use and adoption among 3 different survey panels.

Trap Questions in GCS

- What is the color of a red ball? (90.3% correct)
- What is the shape of a red ball? (85.7%)
- The purpose of this question is to assess your attentiveness to question wording. For this question please mark the 'Very Often' response. (72.5%)
- The purpose of this question is to assess your attentiveness to question wording. Ignore the question below, and select "blue" from the answers. What color is a basketball? (57%)

Data Quality: Survey Attentiveness
As one measure of data quality, we ran a GCS that asked respondents one of several trap questions. For a summary of how respondents performed, see the sidebar to the left. We find that our GCS respondents answered the "Very Often" question correctly less often (73%) than in an example of the same trap question being asked on a paper survey (97%) [3]. A trap survey run on Mechanical Turk found only 61% of respondents answering correctly when asked to read an email and answer two questions [2], but this task is arguably harder than the questions we asked.

Data Quality: Garbage Open Ended Responses
We also analyzed data quality by looking at the rate of garbage responses that we received across 25 GCS questions run for other projects. Examples of these questions include: "which web browser(s) do you use?" and "what does clicking on this image allow you to do?" We counted garbage responses such as "blah", "who cares", and "zzzzz", and found that the percentage of garbage responses ranged from 1.8% to 23.4% (Mean = 7.8%). Our analysis revealed that the percentage of "I don't know" responses tended to correlate with the percentage of garbage responses, suggesting that people were more likely to provide such garbage responses when they were not sure of what the question was asking of them.
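The following sketch illustrates the kind of coding and correlation check described above. The garbage and "I don't know" keyword patterns, the file name, and the column names are assumptions made for illustration, not the authors' actual coding scheme.

# Illustrative version of the garbage-response analysis described above.
# The keyword patterns and data layout are assumptions, not the authors' coding.
import re
import pandas as pd

GARBAGE_PATTERNS = [r"^\s*$", r"^(.)\1{3,}$", r"\bblah\b", r"\bwho cares\b", r"^asdf"]
DONT_KNOW_PATTERNS = [r"\bdon'?t know\b", r"\bno idea\b", r"^idk$"]

def matches_any(text, patterns):
    text = str(text).strip().lower()
    return any(re.search(p, text) for p in patterns)

# One row per open-ended answer; columns question_id and text are hypothetical.
responses = pd.read_csv("open_ended_responses.csv")
responses["garbage"] = responses["text"].apply(matches_any, patterns=GARBAGE_PATTERNS)
responses["dont_know"] = responses["text"].apply(matches_any, patterns=DONT_KNOW_PATTERNS)

# Per-question rates, then the relationship the case study reports:
# questions with more "I don't know" answers also attract more garbage.
rates = responses.groupby("question_id")[["garbage", "dont_know"]].mean()
print(rates["garbage"].describe())
print("correlation:", rates["garbage"].corr(rates["dont_know"]))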

Conclusion: Best Practices for Microsurveys
We find that microsurveys such as Google Consumer Surveys can quickly provide large amounts of data with relatively low setup costs. We also see that the GCS population is fairly representative as compared to other large-scale survey panels.


However, there are also pitfalls to keep in mind. Our findings from the trap question survey suggest that being concise is important to maximize data quality, which supports GCS's question length constraints. We also suggest that it is important to appropriately target surveys to a population in order to keep garbage open ended responses to a minimum. If respondents are being asked about something they are unfamiliar with, they are less likely to provide meaningful responses. Finally, multiple answer questions had the lowest completion rate—and completion rate is often used as a measure of data quality (e.g., [1])—so we suggest that people think critically about the types of questions they use, and consider other question types where appropriate. With respect to analyzing microsurveys, first it is important to remember that demographics are inferred, and there are many "unknowns." We also suggest using the built-in text clustering tools to categorize open-ended responses and, if desired, following up with multiple choice questions to determine how frequent these categories are.
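For readers working outside the GCS interface, a rough stand-in for that clustering step might look like the sketch below. TF-IDF plus k-means is our substitution, not GCS's built-in algorithm, and the file name, column name, and number of clusters are assumptions.

# Rough stand-in for categorizing open-ended responses outside GCS.
# Not GCS's built-in clustering; inputs and parameters are assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical export of open-ended answers (one per row, column "text").
answers = pd.read_csv("open_ended_responses.csv")["text"].dropna()

vectorizer = TfidfVectorizer(stop_words="english", min_df=2)
X = vectorizer.fit_transform(answers)

# k chosen by inspection; GCS's own clustering is editable in its interface.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

# Top terms per cluster; these labels could seed a follow-up multiple-choice
# question to measure how frequent each category is.
terms = vectorizer.get_feature_names_out()
for c, center in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[i] for i in center.argsort()[-5:][::-1]]
    print(f"cluster {c}: {', '.join(top_terms)}")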

References

[1] Dillman, D. A. & Schaefer, D. R. (1998). Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62(3).
[2] Downs, J. S., Holbrook, M. B., Sheng, S., & Cranor, L. F. (2010). Are your participants gaming the system? Screening Mechanical Turk workers. In Proceedings of CHI '10.
[3] Hargittai, E. (2005). Survey measures of web-oriented digital literacy. Social Science Computer Review, 23(3), 371–379.
[4] Pew Research (November 2012). A Comparison of Results from Surveys by the Pew Research Center and Google Consumer Surveys.

