Encouraging, advancing and elevating market research worldwide

ALL'S FAIR IN LOVE AND ... CHESS?

No 42 | September 2013

ESOMAR Industry Report: Global Market Research 2013

Surveying at Google

An interview with Mario Callegaro


RESEARCH IN BUSINESS

SIMON CHADWICK

Simon Chadwick talks to Mario Callegaro, a survey research scientist at Google.

Simon: Mario, tell me a little bit about your background, what brought you to Google and what your role is there.

Mario: I started my career, in my undergraduate years, in sociology at the University of Trento, Italy, and I was always interested in the survey area. I did my master's at the University of Nebraska-Lincoln, followed by a PhD in survey research there. Once I finished, Knowledge Networks, which is now GfK, was looking for a survey methodologist, and I guess it was the right time for me and for them. After almost two years, I went to work for Google. I was already living in Mountain View, where Google is headquartered. I knew the company, but not the survey work they were doing.

Simon: Going through Knowledge Networks - what a great place to work. They had a great reputation for excellent research.

Mario: Absolutely, I learned so much from Knowledge Networks! It was a great place to work. Going to Google, what really surprised me is the scale and the volume of the surveys that are done, and the international scope. It's typical to do English and possibly Spanish for surveys in the U.S., but when you work for an international company, some surveys are translated into 40 languages. That's really challenging, and the numbers, the volume, is really high.

Simon: So it sounds as if market research and survey research actually play a fairly central role at Google.

Mario: They do. My role was to help establish a single global design for customer satisfaction at Google and work with different teams in order to accomplish that. Goals around customer satisfaction, for example, are established at an executive level, and tracking studies provide constant monitoring of such goals. Like many companies, we conduct market research to learn more about how consumers use and feel about our products. For example, we might conduct a survey to find out which product features users like better, or which features they are aware of. In my specific case, my team is actually called quantitative marketing. Many of my colleagues have PhDs in statistics. Another colleague and I are survey scientists. We work with our statisticians in order to provide them with the best data, which can later be used in performing advanced analytics, for example, and in developing weights and bias-removal strategies. Many other teams conduct surveys as well, including the usability team and HR. It is nice to work with other teams with different goals and bring survey evidence-based knowledge. We also do market research surveys where you don't survey Google users, but look to understand potential users or clients for new products - more traditional market research, where we contract external vendors.
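The interview does not spell out how these survey weights are built, so the sketch below is only a rough illustration of one common approach: iterative proportional fitting ("raking"), which adjusts weights until the weighted sample matches known population margins. The variables, categories and target shares here are entirely made up for the example.

```python
import numpy as np

# Hypothetical respondent data: one entry per respondent,
# coded by age group (0, 1, 2) and region (0, 1).
age = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])
region = np.array([0, 1, 0, 1, 1, 0, 1, 0, 0, 1])

# Hypothetical population margins the weighted sample should match.
age_targets = np.array([0.30, 0.45, 0.25])
region_targets = np.array([0.55, 0.45])

weights = np.ones(len(age))  # start from equal weights

for _ in range(50):  # a fixed number of raking passes converges here
    for values, targets in ((age, age_targets), (region, region_targets)):
        total = weights.sum()
        for level, target in enumerate(targets):
            mask = values == level
            share = weights[mask].sum() / total
            if share > 0:
                weights[mask] *= target / share  # pull this category toward its target

weights *= len(age) / weights.sum()  # normalise so weights sum to the sample size
print(np.round(weights, 3))
```

In practice one would also check convergence and trim extreme weights; dedicated survey packages implement this far more robustly than this toy loop.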


Simon: Those external vendors - do they tend to be just data-collection vendors, or do you use full-service research companies as well?

Mario: It depends. On my team, we tend to use only the data-collection side of a market research company, for example their call center or online panel. We do everything from designing the questionnaire to the data analysis and reporting. We can bring information that we already have on respondents into the mix, so we don't need to re-ask all the same questions - which is very powerful. It also shortens the questionnaire. We typically cannot share this type of data with vendors. Of course, the respondent has consented to share this information with us. We ask for explicit consent, and it allows us to link this information to understand how they use the product and to connect the behavior with their attitudes. But other teams at times use full-service companies, where the company is also giving you insights, analytics and reports.

Simon: Right. So a lot of your role, therefore, is synthesising all those different information points about the customers.

Mario: Yeah, that's a good way to put it.

Simon: What is the definition of market research at Google? We're hearing a lot, right across the markets and across the world, about a wider definition - one that might include things like social media, web analytics, big data and so on. Is that more how research is viewed at Google?

Mario: There are so many teams at Google who use research in different ways, so it's different per team. In my team's case, we can be both innovative and traditional. I spent my first two years at Google doing online surveys of our small advertisers and spent my last two years doing telephone surveys of our top advertisers. This is something that might seem surprising. For that specific target of respondents it was a great way to get a really high response rate, especially for C-level respondents, who are few by definition. They are difficult to catch in a web survey with an e-mail invitation. This is one of the problems we are facing in the industry: e-mail invitations seem to be less and less effective in eliciting response. The volume of e-mails that are received, especially in a business environment, is so high that your e-mail invitation might get lost during the day. For example, we know that, if people are going to respond, they answer within the first 12 hours; after that there is a sharp drop in response. We use best practices for telephone surveys, where we send a traditional envelope with a letter signed by our country lead - which actually surprises some of our clients.

Simon: Right. I think a lot of people would be surprised that Google does a lot on the phone and, indeed, uses snail mail to legitimise and invite its respondents. A lot of people would actually be rather pleased to hear that.

Mario: I think so. I mean, the first time I was talking to somebody - they were having some issues with response rates - and I said, "Have you tried switching mode and doing telephone surveys?" - they looked at me like I was crazy. But two years later, we are doing it - and very successfully for a specific section of clients. We need to remember that every methodology has strengths and limitations. We need to understand which is the best method for the target population. Let me give you an example: open-ended questions - many respondents don't answer them online. That's not the case in telephone surveys, where you get in-depth answers to open-ended questions. We also use mixed-mode research, where we combine telephone and web for some sections of respondents. Telephone surveys are a lot of work. You need to have a good vendor who is able to scale a survey to many countries at the same time. They should be able to manage the mailing, which is easier said than done. The mail piece determines the entire call pattern, so you need to make sure that you send the letter in advance, before you start calling, and you cannot wait too long or people forget about the letter. We also ask a sub-sample of respondents if they remember receiving the letter, which is quality control for the vendor and us.

Simon: Listening to you talk about quality control and survey methods is very encouraging, because clearly there's a great deal of thought and effort that goes into this. From your experience on the supplier's side of the business, would you say that this is unusual, or do you feel that this is relatively normal?

Mario: All told, some of the vendors just look at, say, cost, and in our case, luckily, we focus more on quality than cost. When you have large projects, cutting corners and reducing costs just diminishes the value of the survey, so I prefer to do one less survey and have an overall higher quality. That's actually a big debate I have with many teams I consult with at Google - they are considering collecting data on a quarterly basis, or even on a monthly basis, and my first question is, "Is the key metric you are measuring going to change in a month or in a quarter?" If not, maybe you don't need to run a survey with that frequency. I also ask, "Are you going to do something to change this metric?" If you don't do anything, if you don't change anything in your process, you're just measuring some variance due to sampling and potentially non-response. In other words, we push teams to make the results actionable, so we can measure some change in the next iteration. Coming from a survey research background, I am always very sensitive to respondent burden. We need to respect the fact that we have respondents investing time to answer our surveys, and we really need to use the data the best we can. Actually, sometimes it's much better to re-analyse the data than to re-do a survey. But that's a bigger discussion, maybe outside the scope of this conversation.

Simon: But it's a very relevant one. Many people in the industry have been battling with the idea for a long time. So you talked about surveys and you talked about synthesis of data from different sources. Do you use or make use of some of the newer modalities of research, for example online communities, ethnography or digital qualitative?

Mario: Personally, not much, just because of time. I wish I could have a second life just for doing qualitative. We generally start our survey projects with qualitative and in-depth interviews, and that's what we suggest to everybody. As you can imagine, at Google everything is very new and keeps changing, so you cannot just sit down and write a questionnaire. That's not going to work. Consequently we talk to a small group of people from our target sample to see whether they understand our language. Are we writing questions that make sense, or are they just too technical?

We need to make really sure that our respondents understand the questions. Then the pretest phase is really crucial. Given the nature of our products, pretesting and soft-launching a survey is key. I invest a lot of time in talking with vendors and negotiating a pretest phase and the metrics to keep track of in order to spot potential problems. To start, median interview length, then issues with particular questions, which in web surveys can be detected using paradata, and in a telephone interview by reading the interviewers' reports and conducting a debriefing after the first day of calls. For example, when we launched a wide multilingual survey, I spent two days with my colleagues in the call center listening to interviews. You can be smart and do quality control alongside data collection without slowing everything down - and still get high-quality data.

See part 2 of this interview in the October 2013 issue of Research World.
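As a minimal sketch of the kind of soft-launch checks Mario describes - median interview length and paradata-based flags for problem questions - the example below is illustrative only. The file name, column names and thresholds are hypothetical and not a description of any actual Google or vendor system.

```python
import pandas as pd

# Hypothetical paradata export from a soft launch: one row per respondent
# and question, with time spent (seconds) and whether the item was answered.
paradata = pd.read_csv("soft_launch_paradata.csv")
# assumed columns: respondent_id, question_id, seconds, answered (0/1)

# Overall median interview length, in minutes.
interview_seconds = paradata.groupby("respondent_id")["seconds"].sum()
print(f"Median interview length: {interview_seconds.median() / 60:.1f} minutes")

# Per-question diagnostics: median time spent and item non-response rate.
per_question = paradata.groupby("question_id").agg(
    median_seconds=("seconds", "median"),
    nonresponse_rate=("answered", lambda s: 1 - s.mean()),
)

# Flag questions respondents often skip or linger on - candidates for
# rewording before the full launch (thresholds are arbitrary here).
flagged = per_question[
    (per_question["nonresponse_rate"] > 0.15) | (per_question["median_seconds"] > 60)
]
print(flagged.sort_values("nonresponse_rate", ascending=False))
```

A report like this, run after the first day of fielding, lets the research team and the vendor agree on fixes while data collection continues.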

Simon Chadwick is managing partner at Cambiar and editor-in-chief of Research World. Mario Callegaro is a survey research scientist at Google in the UK.
