Term Definition
Imputation variance
That component of overall variability in survey estimates that can be accounted for by imputation.
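Under multiple imputation, for example, this component can be made explicit with Rubin's combining rules (a sketch; the notation below is added here for illustration and does not appear elsewhere in this glossary):

\bar{\theta} = \frac{1}{m}\sum_{j=1}^{m}\hat{\theta}_j, \qquad B = \frac{1}{m-1}\sum_{j=1}^{m}\bigl(\hat{\theta}_j - \bar{\theta}\bigr)^2, \qquad T = \bar{W} + \Bigl(1 + \frac{1}{m}\Bigr)B

where \bar{W} is the average within-imputation variance across the m completed data sets, B is the between-imputation variance, and the term (1 + 1/m)B is the share of the total variance T attributable to imputation.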
Imputations

A computational method that, following some protocol, assigns one or more replacement answers for each missing, incomplete, or implausible data item.
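As a minimal illustration of one such protocol, a hot-deck-style sketch in Python (the function and field names are hypothetical, not part of any standard software) replaces each missing value with the value of a randomly chosen donor record:

    import random

    def hot_deck_impute(records, field):
        """Replace missing values of `field` with the value of a random donor record."""
        donors = [r[field] for r in records if r[field] is not None]
        for r in records:
            if r[field] is None:
                # Assign a replacement answer drawn from an observed donor value.
                r[field] = random.choice(donors)
        return records

    # Example: impute a missing income item.
    data = [{"income": 42000}, {"income": None}, {"income": 38500}]
    hot_deck_impute(data, "income")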

Inconsistent responses

Inappropriate responses to branched questions. For instance, one question might ask whether the respondent attended church last week; a response of "no" should skip the questions about church attendance and code the answers to those questions as "inapplicable." If those questions were coded in any way other than "inapplicable," the responses would be inconsistent with the skip patterns of the survey instrument.
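A skip-pattern edit check along the lines of this example might look like the following Python sketch (the variable names attended_church and attendance_items are hypothetical):

    INAPPLICABLE = "inapplicable"

    def find_inconsistent(respondent):
        """Flag follow-up answers that violate the church-attendance skip pattern."""
        problems = []
        if respondent["attended_church"] == "no":
            # All branched follow-ups should have been coded as inapplicable.
            for item, answer in respondent["attendance_items"].items():
                if answer != INAPPLICABLE:
                    problems.append(item)
        return problems

    # A respondent who said "no" but has a substantive follow-up answer is inconsistent.
    r = {"attended_church": "no", "attendance_items": {"service_length": "1 hour"}}
    print(find_inconsistent(r))  # ['service_length']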

Indirect cost
An expense that is incurred in joint usage and is difficult to assign to, or is not directly attributable to, a specific department, process, or product.
Informant

The person who supplies a list of the eligible elements within the selected unit. For example, many in-person surveys select a sample of housing units at the penultimate stage of selection. Interviewers then contact the housing unit with the aim of convincing the member of the housing unit who responded to the contact attempt to provide a list of housing unit members who are eligible for the study. The housing unit member who provides a list of all eligible housing unit members is called the informant. Informants can also be selected respondents if they are eligible for the study and are chosen as the respondent during the within-household stage of selection.

Intention question
A question asking respondents to indicate their intention regarding some behavior.
Interactive Voice Response (IVR)

A telephone interviewing method in which respondents listen to recordings of the questions and respond by using the telephone keypad or by saying their answers aloud.

Interface design

Aspects of computer-assisted survey design focused on the interviewer’s or respondent’s experience and interaction with the computer and instrument.

Interpenetrated sample assignment, interpenetration

Randomized assignment of interviewers to subsamples of respondents in order to measure correlated response variance, arising from the fact that response errors of persons interviewed by the same interviewer may be correlated. Interpenetration allows researchers to disentangle the effects interviewers have on respondents from the true differences between respondents.

Interviewer design effect (Deffint)

The factor by which interviewer variance increases the variance of the sample mean relative to that of a simple random sample.
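A commonly used approximation, analogous to the cluster-sampling design effect (the symbols below are added here for illustration), is

deff_{int} = 1 + \rho_{int}\,(\bar{m} - 1)

where \rho_{int} is the intra-interviewer correlation of response deviations and \bar{m} is the average number of interviews completed per interviewer.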

Interviewer effect
Measurement error, both systematic and variable, for which interviewers are responsible.
Interviewer falsification

Intentional departure from the designed interviewer guidelines that could result in the contamination of the data. Falsification includes: 1) fabricating all or part of an interview (recording data that are not provided by a designated survey respondent and reporting them as answers of that respondent); 2) deliberately misreporting disposition codes and falsifying process data (e.g., recording a respondent refusal as ineligible for the sample, or reporting a fictitious contact attempt); 3) deliberately miscoding the answer to a question in order to avoid follow-up questions; 4) deliberately interviewing a nonsampled person in order to reduce the effort required to complete an interview; or 5) intentionally misrepresenting the data collection process to survey management.

Interviewer observations and evaluations

“In interviewer-administered surveys, interviewers have long been asked to make general assessments about how engaged, cooperative, hostile, or attentive the respondent was during the interview. Additionally, interviewers record information about the interview-taking environment, such as whether other individuals were present or whether the respondent used headphones during an ACASI component. Unlike the previous sources of paradata, these interviewer evaluations are questions asked directly of the interviewer and included as a few additional questions in the questionnaire” (Olson & Parkhurst, 2013).

Interviewer variance
That component of overall variability in survey estimates that can be accounted for by the interviewers.
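In a simple variance-components sketch (assuming an additive interviewer effect, an assumption added here for illustration), a response can be written as

y_{ij} = \mu + a_i + e_{ij}, \qquad \mathrm{Var}(y_{ij}) = \sigma_a^2 + \sigma_e^2

where a_i is the effect of interviewer i, \sigma_a^2 is the interviewer variance component, and the intra-interviewer correlation is \rho_{int} = \sigma_a^2 / (\sigma_a^2 + \sigma_e^2).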
Item
Researchers differ greatly in how they use this term. In survey research, it usually refers to a single question in a survey questionnaire, that is, each instance in which the respondent is asked to give an answer.
Item nonresponse, item missing data

The lack of information on individual data items for a sample element where other data items were successfully obtained.

Item Response Theory (IRT)

A theory that guides statistical techniques used to detect survey or test questions that exhibit item bias or differential item functioning (see DIF). IRT is based on the idea that the probability of an individual's response is a function of the person's underlying trait and the characteristics of the item.
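For example, under the widely used two-parameter logistic (2PL) model (one common IRT formulation among several), the probability that a person with trait level \theta endorses or correctly answers item i is

P_i(\theta) = \frac{1}{1 + \exp[-a_i(\theta - b_i)]}

where a_i is the item discrimination and b_i is the item difficulty; DIF is present when these item parameters differ across groups after conditioning on \theta.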