
Metadata Language
English

Precision and Disclosure in Text and Voice Interviews on Smartphones: 2012 [United States]

Version
1
Resource Type
Dataset
Creator
  • Conrad, Frederick G. (University of Michigan. Institute for Social Research. Survey Research Center)
  • Schober, Michael F. (The New School for Social Research. Department of Psychology)
Publication Date
2015-05-03
Funding Reference
  • National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1026225)
  • National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1025645)
Free Keywords
survey interviewing; rounding; automated interviewing; response rates; straightlining; completion; nondifferentiation; satisficing; iPhone; breakoff; sensitive questions; data quality; text message interviewing; survey methodology; IVR; precision; text message; speech IVR; heaping; nonresponse; interview satisfaction; multitasking; mobile devices; disclosure; smartphone; SMS; mode comparison
Description
  • Abstract

    As people increasingly communicate via asynchronous, non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. This dataset contains 1,282 cases: 634 in which participants completed an interview and 648 in which participants were invited but did not start or complete an interview on their iPhone. Participants were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered the voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human-administered interviews. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher-quality data than voice interviews, with both human and automated interviewers: fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications drawn from a survey.
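
    (Illustrative sketch, not part of the deposited materials.) Two of the answer-quality measures named in the abstract, rounding of numerical answers and differentiation across a battery sharing one response scale, lend themselves to simple operationalization; a minimal Python sketch follows. The variable names, the rounding base of 5, and the toy values are assumptions for illustration, not drawn from the dataset's files.

        # Hypothetical operationalization of two data-quality measures from the
        # abstract; all names and the rounding base are illustrative assumptions.

        def is_rounded(value, base=5):
            # A numerical answer counts as rounded (heaped) if it is a
            # multiple of the chosen base, e.g., 5 or 10.
            return value % base == 0

        def differentiation(responses):
            # Share of distinct answers across a battery with a common scale;
            # 1.0 means fully differentiated, while values near
            # 1/len(responses) suggest straightlining (nondifferentiation).
            return len(set(responses)) / len(responses)

        # Toy respondent: one numerical item plus a five-item battery on a 1-7 scale.
        respondent = {"answer": 40, "battery": [4, 4, 4, 4, 5]}
        print(is_rounded(respondent["answer"]))        # True -> counts against precision
        print(differentiation(respondent["battery"]))  # 0.4  -> low differentiation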
Temporal Coverage
  • 2012-03-28 / 2012-05-03
    March–May 2012
Geographic Coverage
  • United States
Note
Funding institution(s): National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1026225). National Science Foundation. Directorate for Social, Behavioral and Economic Sciences (SES-1025645).
Availability
Download
This study is freely available to the general public via web download.

Update Metadata: 2016-08-20 | Issue Number: 1 | Registration Date: 2016-08-20

Conrad, Frederick G.; Schober, Michael F. (2015): Precision and Disclosure in Text and Voice Interviews on Smartphones: 2012 [United States]. Version: 1. ICPSR - Inter-university Consortium for Political and Social Research. Dataset. https://doi.org/10.3886/E100113V1