
Glossary

  • The degree of closeness an estimate has to the true value.
  • Changing existing materials (e.g., management plans, contracts, training manuals, questionnaires, etc.) by deliberately altering some content or design component to make the resulting materials more suitable for another sociocultural context or a particular population. In the context of(...)
  • Interviewer behavior that is tailored to the actual situation encountered.
  • The translation evaluation step at which a translation is signed off and released for whatever follows next, such as pretesting or final fielding (the 'A' in the TRAPD method; see Translation). When all review and refinement procedures are completed, including any revisions after pretesting(...)
  • The person who signs off on a finalized version of a questionnaire ('adjudication').
  • Survey error (variance and bias) occurring due to post-data collection statistical adjustment.
  • A translation is made from a source questionnaire in order to find problems in the source text that only become apparent when translation is attempted. These insights are then used to modify the source questionnaire or plan for adaptation. We recommend carrying out the advance translation(...)
  • A technique used to adjust for noncomparability in self-assessment questions caused by differences in response scale usage across groups. It relies on a set of (usually brief) descriptions of hypothetical people and situations to which self-assessment is calibrated (King, Murray, Salomon, &(...)
  • Information appended to text in the source questionnaire to help clarify the intended meaning of a source text concept, phrase, or term. See Translation: Overview, Appendices A, B, C, and D for further detail and examples of the use of annotations. Note that 'annotations' are sometimes(...)
  • Recording or storing information without a name or identifier, so that the respondent cannot be identified in any way by anyone. No one can link an individual person to their responses, including the investigator or interviewer. Face-to-face interviews are never anonymous, since the(...)
  • Stripping all information from a survey data file that would allow for the re-identification of respondents (see 'anonymity' and 'confidentiality').
  • Data files in the American Standard Code for Information Interchange (ASCII) format, which is the most common format for text files on the Internet.
  • An approach to question design wherein researchers collect data across populations or countries based on using the most salient population-specific questions on a given construct/research topic. The questions and indicators used in each location are assumed (or better, have been shown) to tap(...)
  • An approach to question design wherein researchers collect data across populations/countries by asking a shared set of questions. The most common way to do this is by developing a source questionnaire in one language and then producing whatever other language versions are needed on the basis(...)
  • A question asking about respondents' opinions, judgments, emotions, or perceptions. These cannot be measured by other means; we are entirely dependent on respondents' answers. Example: "Do you think smoking cigarettes is bad for the smoker's health?"
  • A mode in which the respondent uses a computer that plays audio recordings of the questions to the respondent, who then enters their answers. The computer may or may not also display the questions on the screen.
  • An electronic file in which computer-assisted and Web survey software captures paradata about survey questions and user actions, including times spent on different questions and in different sections of a survey (timestamps) and interviewer or respondent actions while proceeding through a(...)
  • Data from an external source, such as census data, that is incorporated or linked in some way to the data collected by the study. Auxiliary data is sometimes used to supplement collected data, for creating weights, or in imputation techniques.
  • The inverse of the probability of selection.
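    As a compact illustration (the notation below is assumed, not taken from the source): if \(\pi_i\) denotes the probability that element \(i\) is selected, the weight is
    \[ w_i = \frac{1}{\pi_i}. \]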
  • Behavior codes are information about the interviewer's and respondent's verbal behaviors during the question-and-answer process of a survey interview. They are developed and recorded by human coders, not automatically coded by computers. To obtain behavior codes ('behavior coding'),(...)
  • Systematic coding of the interviewer-respondent interaction in order to identify problems and sometimes to estimate the frequency of behaviors that occur during the question-answer process (see behavior codes).
  • A question asking respondents to report behaviors or actions. Example: "Have you ever smoked cigarettes?"
  • The systematic difference over all conceptual trials between the expected value of the survey estimate of a population parameter and the true value of that parameter in the target population.
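    In symbols (notation assumed for illustration): for an estimator \(\hat{\theta}\) of the population parameter \(\theta\),
    \[ \text{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta. \]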
  • A complete proposal (submitted in competition with other bidders) to execute specified jobs within the prescribed time and budget, and not exceeding a proposed amount.
  • A glossary is a list of words or phrases used in a particular field alongside their definitions, and can often be found at the back of a specialist or academic book as an appendix to the text. A bilingual glossary lists special terms used in a particular field in two languages. A key notion or(...)
  • A type of coding in which values that fall below the predetermined minimum value are reassigned to that minimum value or are recoded as missing data.
  • A language, common to both interviewers and respondents, that is used for data collection but may not be the first language of either party.
  • A fishbone-structured diagram for a process, used as a brainstorming tool to help understand or improve the process. The main bone represents the process (e.g., interviewer training), and bones coming off of the main bone are pre-identified factors (e.g., training materials) that may affect(...)
  • Objective assessment of performance. Based on pre-established criteria, the interviewer either meets the requirements and may proceed to conduct the study interview, or does not meet the requirements and may either be permitted to try again or be dismissed from the study. Certification(...)
  • A survey question format that provides a limited set of predefined answer categories from which respondents must choose. Example: "Do you smoke?"
  • A grouping of units on the sampling frame that is similar on one or more variables, typically geographic. For example, an interviewer for an in-person study will typically only visit households in a certain geographic area, with the geographic area being the 'cluster.'
  • A sampling procedure wherein units of the sampling frame that are similar on one or more variables (typically geographic) are organized into larger groups ('clusters'), and a sample of groups is selected. The selected groups contain the units to be included in the sample. The sample may(...)
  • A list of descriptions of variable categories and associated code numbers. Also referred to as 'code frame,' 'coding frame,' or 'codes.'
  • A document that provides question-level metadata that is matched to variables in a dataset. Metadata include the elements of a data dictionary as well as basic study documentation, question text, universe statements (the characteristics of respondents who were asked the question), the number(...)
  • The translation of nonnumeric data into numeric fields.
  • A measure of dispersion of a probability distribution or frequency distribution that describes the amount of variability relative to the mean.
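    For illustration (symbols assumed): with standard deviation \(\sigma\) and nonzero mean \(\mu\), it is commonly computed as
    \[ CV = \frac{\sigma}{\mu}. \]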
  • A pretesting method designed to uncover problems in survey items by having respondents think out loud while (or after) answering a question.
  • A statistical measure of agreement between coders that accounts for the degree of agreement expected by chance.
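    One widely used statistic of this kind is Cohen's kappa, supplied here as an assumed example: with observed agreement \(p_o\) and chance-expected agreement \(p_e\),
    \[ \kappa = \frac{p_o - p_e}{1 - p_e}. \]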
  • The extent to which differences between survey statistics from different countries, regions, cultures, domains, time periods, etc., can be attributable to differences in population true values (Mohler & Johnson, 2010); in other words, whether the concepts are comparable or not. It is often(...)
  • A person who carries out comparative studies, especially a student of comparative literature or comparative linguistics.
  • Survey datasets or designs based on stratified single-stage or multistage samples with survey weights designed to compensate for unequal probabilities of selection or nonresponse.
  • A face-to-face interviewing mode in which a computer displays the questions onscreen and the interviewer reads them to the respondent and enters the respondent's answers directly into the computer.
  • A system for audio recording of interviews (or parts of interviews) that allows for monitoring interviewer performance in the field/call center and detection of data fraud.
  • A mode in which a computer displays the questions on a screen to the respondent and the respondent then enters their answers into the computer.
  • A telephone interviewing mode in which a computer displays the questions on a screen and the interviewer reads them to the respondent over the phone and enters the respondent's answers directly into the computer.
  • A mixed-mode design in which one group of respondents uses one mode and another group of respondents uses another.
  • Securing the identity of, as well as any information provided by, the respondent in order to ensure that public identification of an individual participating in the study and/or their individual responses does not occur. See also 'anonymity' and 'anonymization.'
  • Consistency is achieved when the same term or phrase is used throughout a translation to refer to an object or entity that was referred to with one term or phrase in the source text. In many cases, consistency is most important with regard to technical terminology or to standard repeated(...)
  • The degree to which a survey question adequately measures an intended hypothetical construct. This may be assessed by checking the correlation between observations from that question with observations from other questions expected on theoretical grounds to be related.
  • A recoded variable created by data producers or archives based on the data originally collected. Examples include age grouped into cohorts, income grouped into 7 categories, Goldthorpe-Index, or the creation of a variable called 'POVERTY' from information collected on the income of respondents.
  • A written record of the time and outcome of each contact attempt to a sample unit.
  • The proportion of all elements in which some responsible member of the housing unit was reached by the survey.
  • The software and procedures used to capture, save, organize, and distribute information in digitalized form.
  • The effect of question context, such as the order or layout of questions, on survey responses.
  • A legally binding exchange of promises, or an agreement creating and defining the obligations between two or more parties (for example, a survey organization and the coordinating center) written and enforceable by law.
  • A sample of elements that are selected because it is convenient to use them, not because they are representative of the target population.
  • An interviewing style in which interviewers read questions as they are worded but are allowed to use their own words to clarify the meaning of the questions.
  • Data processing procedures used to create harmonized variables from original input variables.
  • The proportion of all elements interviewed over all eligible units ever contacted.
  • A research center that facilitates and organizes cross-cultural or multi-site research activities.
  • The person who reviews a text and marks up any changes required to correct stylistic, punctuation, spelling, and grammatical errors. In many instances, the copyeditor may also make the corrections needed.
  • The proportion of the target population that is accounted for on the sampling frame.
  • The systematic difference between the expected value (over all conceptual trials) of a statistic and the target population value that occurs because some elements in the target population do not appear on the sampling frame.
  • Survey error (variance and bias) that is introduced when there is not a one-to-one correspondence between frame and target population units. Causes include when some units in the target population are not included on the sampling frame (undercoverage), when some units on the sampling frame are(...)
  • The number of elements on the sampling frame divided by the estimated number of elements in the target population.
  • Electronic or printed materials associated with each element that identify information about the element, e.g., the sample address, the unique identification number associated with an element, and the interviewer to whom an element is assigned. The coversheet often also contains an(...)
  • A description, usually presented in tabular format, of all the relationships between variables in individual data files and their counterparts in the harmonized file.
  • A conceptual structure, shared by members of a cultural group and created from common experiences, by which objects and events can be identified and understood.
  • The process of converting data (from questionnaires, audio/visual recordings, samples, etc.) to an electronic file.
  • A document linking the survey instrument (questionnaire) to the dataset, or to more abstract question- or variable-level metadata, including question identifiers (variable names and labels), response category identifiers (value labels), and data types (e.g., F2.0, specifying that the response(...)
  • An international effort to establish a standard for technical documentation describing social science data. The membership-based DDI Alliance is developing the specification, which is written in XML.
  • Separating personally identifiable information (PII) from the survey data to prevent a breach of confidentiality.
  • An approach to designing questions in two languages in which neither of the languages or cultures involved is allowed to dominate. A 'ping-pong'-like process of formulation and comparison between the two languages is used to develop versions in each language. Any language or cultural obstacles(...)
  • The effect of the complex survey design on sampling variance, measured as the ratio of the sampling variance under the complex design to the sampling variance computed as a simple random sample of the same sample size.
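    In symbols (notation assumed): for an estimate \(\hat{\theta}\),
    \[ \text{deff} = \frac{\text{Var}_{\text{complex}}(\hat{\theta})}{\text{Var}_{\text{SRS}}(\hat{\theta})}, \]
    where the denominator is the sampling variance a simple random sample of the same size would yield.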
  • Item bias occurring as a result of systematic differences in responses across cultures due to features of the item or measure itself, such as poor translation or ambiguous wording.
  • Diglossic linguistic contexts exist in single-language communities that use two or more markedly different varieties of a language, or two different languages, in different contexts. The variety used may be determined by whether the language is written or spoken in a given instance, or by the(...)
  • An expense that can be traced directly to (or identified with) a specific cost center or is directly attributable to a cost object, such as a department, process, or product.
  • The process of identifying and protecting the confidentiality of data. It involves limiting the amount of detailed information disseminated and/or masking data via noise addition, data swapping, generation of simulated or synthetic data, etc. For any proposed release of tabulations or(...)
  • A code that indicates the result of a specific contact attempt or the outcome assigned to a sample element at the end of data collection (e.g., noncontact, refusal, ineligible, complete interview).
  • A document management system (DMS) is a computer program (or set of computer programs) used to track and store electronic documents and/or images of paper documents. The term has some overlap with the concept of a content management system (CMS). DMSs are often viewed as a component of(...)
  • A survey question that inadvertently asks about two topics at once.
  • Altering data recorded by the interviewer or respondent to improve the quality of the data (e.g., checking consistency, correcting mistakes, following up on suspicious values, deleting duplicates, etc.). Sometimes this also includes coding and imputation (the placement of a number into a field(...)
  • The number of eligible sample elements divided by the total number of elements on the sampling frame.
  • An experiment included within the framework of an actual study, rather than in a study conducted solely for the purpose of conducting said experiment (a 'standalone experiment').
  • A conversational interviewing technique designed to improve recollection of complex sequences of personal events by using respondents' own past experiences as memory cues. This technique is sometimes also referred to as a 'life history calendar.'
  • The process of creating harmonized variables at the outset of data collection, based on using the same questionnaire or agreed-upon definitions in the harmonization process.
  • The process of creating harmonized variables from data that already exists.
  • A question asking about the respondent's expectation about the chances or probabilities that certain things will happen in the future.
  • Eye tracking records the time, duration, and location of the eyes' fixations and the saccades between fixations. See Li (2011).
  • A sheet, pamphlet, or brochure that provides important information about the study to assist respondents in making an informed decision about participation. Elements of a fact sheet may include the following: the purpose of the study, its sponsorship, uses of the data, the role of the(...)
  • A question that requires respondents to remember autobiographical events and use that information to make judgments. In principle, such information could also be obtained by other means of observation, such as comparing survey data with administrative records (if such records exist). Factual(...)
  • A question that aims to collect information about things for which there is a correct answer. In principle, such information could also be obtained by other means of observation, such as comparing survey data with administrative records (if such records exist). Factual questions can be about a(...)
  • The degree to which products conform to essential requirements and meet the needs of the users for which they are intended. In literature on quality, this is also known as 'fitness for use' or 'fitness for purpose.'
  • A longitudinal study which attempts to collect survey data on the same sample elements at intervals over a period of time. After the initial sample selection, no additions to the sample are made.
  • A longitudinal study in which a panel of individuals is interviewed at intervals over a period of time and additional elements are added to the sample.
  • A method used to identify the steps or events in a process. It uses basic shapes for starting and ending the process, taking an action, making a decision, and producing data and documentation. These are connected by arrows indicating the flow of the process. A flowchart can help identify(...)
  • Small group discussions carried out under the guidance of a moderator which can be used to explore and develop topics and items for questions. They are often used in qualitative research to test survey questionnaires and survey protocols.
  • A response format requiring the respondent to select the option that best describes the frequency in which certain behaviors occur.
  • Each translator translates all of the material to be translated, in contrast to split translations. This refers to the first step in the TRAPD model, the 'T.' It is also referred to as 'double' or 'parallel' translation.
  • Measures the degree to which a hypothetical construct serves similar functions within each society or cultural group. See Johnson (1998) for more information.
  • Please see Butson (1962) for more details on the Hadamard matrix.
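    For orientation (this example is supplied here, not drawn from the source): a Hadamard matrix is a square matrix with entries \(\pm 1\) whose rows are mutually orthogonal, the smallest nontrivial case being
    \[ H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. \]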
  • A signal warning that there is an inconsistency between the current response and a previous response; the interviewer or respondent cannot continue until the inconsistency is resolved (as opposed to a soft consistency check).
  • A measure of study efficiency, calculated as the total number of interviewer hours spent during production (including travel, reluctance handling, listing, completing an interview, and other administrative tasks) divided by the total number of interviews.
  • A group or committee that is given the responsibility by an institution to review that institution's research projects involving human subjects. The primary purpose of the review is to assure the protection of the safety, rights and welfare of the human subjects.
  • A computation method that, using some protocol, assigns one or more replacement answers for each missing, incomplete, or implausible data item.
  • The component of overall variability in survey estimates that can be accounted for by imputation.
  • Inappropriate responses to branched questions. For instance, one question might ask if the respondent attended church last week; a response of 'no' should skip the questions about church attendance and code the answers to those questions as 'inapplicable.' If those questions were coded any(...)
  • An expense that is incurred in joint usage and is difficult to assign to or is not directly attributable to a specific department, process, or product.
  • The person who supplies a list of the eligible elements within the selected unit. For example, many in-person surveys select a sample of housing units at the penultimate stage of selection. Interviewers then contact the housing unit with the aim of convincing the member of the housing unit who(...)
  • A process by which a sample member voluntarily confirms their willingness to participate in a study after having been informed of all aspects of the study that are relevant to the decision to participate. Informed consent can be obtained orally or via a written consent form (or implied, if the(...)
  • A question asking respondents to indicate their intention regarding some behavior.
  • A telephone interviewing method in which respondents listen to recordings of the questions and respond by using the telephone keypad or saying their answers aloud.
  • Aspects of computer-assisted survey design focused on the interviewer's or respondent's experience and interaction with the computer and instrument.
  • Randomized assignment of interviewers to subsamples of respondents in order to measure correlated response variance, arising from the fact that response errors of persons interviewed by the same interviewer may be correlated. Interpenetration allows researchers to disentangle the effects(...)
  • The extent to which interviewer variance increases the variance of the sample mean of a simple random sample. Written as 'deffint' (or '\(\text{deff}_{int}\)') in equations.
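    A common approximation (notation assumed): with average interviewer workload \(\bar{m}\) and intra-interviewer correlation \(\rho_{int}\),
    \[ \text{deff}_{int} \approx 1 + \rho_{int}(\bar{m} - 1). \]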
  • Measurement error, both systematic and variable, for which interviewers are responsible.
  • Intentionally departing from the designed interviewer guidelines that could result in the contamination of the data. Falsification includes: 1) fabricating all or part of an interview by recording data that were not provided by a designated survey respondent and reporting them as answers of(...)
  • In interviewer-administered surveys, interviewers have long been asked to make general assessments about how engaged, cooperative, hostile, or attentive the respondent was during the interview. Additionally, interviewers record information about the interview-taking environment, such as(...)
  • The component of overall variability in survey estimates that can be accounted for by the interviewers.
  • The lack of information on individual data items for a sample element where other data items were successfully obtained.
  • A theory that guides statistical techniques used to detect survey or test questions that have item bias or differential item functioning (DIF). IRT is based on the idea that the probability of a response an individual provides is a function of the person's traits and characteristics of the item.
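    As one illustrative IRT specification (the two-parameter logistic model, supplied here as an example rather than taken from the source): the probability that person \(j\) endorses item \(i\) is modeled as
    \[ P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}, \]
    where \(\theta_j\) is the person's trait level and \(a_i\), \(b_i\) are the item's discrimination and difficulty parameters.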
  • Keystroke files, sometimes called 'audit trails' or 'trace files,' are recorded when interviewers or respondents use specific keys during the survey. Keystroke files contain both response timing data and a record of the keystrokes pressed during the questionnaire administration (Olson &(...)
  • Latent class analysis (LCA) is a subset of structural equation modeling, used to find groups or subtypes of cases in multivariate categorical data. These subtypes are called 'latent classes.'
  • A procedure used in area probability sample designs to create a complete list of all elements or a cluster of elements within a specific set of geographic boundaries.
  • A question that is worded such that it invites respondents to respond in a particular way.
  • A study where elements are repeatedly measured over time.
  • The total error of a survey estimate; specifically, the sum of the variance and the bias squared.
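    In symbols (notation assumed):
    \[ \text{MSE}(\hat{\theta}) = \text{Var}(\hat{\theta}) + \text{Bias}(\hat{\theta})^2. \]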
  • Equivalence of the calibration system used in the questionnaire and the translation.
  • Survey error (variance and bias) due to the measurement process; that is, error introduced by the survey instrument, the interviewer, or the respondent.
  • Information that describes data. The term encompasses a broad spectrum of information about the survey, from the study title to sample design, details such as interviewer briefing notes, and contextual data and/or information such as legal regulations, customs, and economic indicators. Note(...)
  • Nonaggregated data that concern individual records for sampled units such as households, respondents, organizations, administrators, schools, classrooms, students, etc. Microdata may come from auxiliary sources (e.g., census or geographical data) as well as surveys. They are contrasted with(...)
  • A country with high per capita income (the minority of countries).
  • Method of data collection.
  • Mouse click files record each action the respondent or interviewer takes using the computer's mouse, ranging from the presence or absence of simple single mouse clicks to the position of the mouse cursor at a specified time interval on an x/y coordinate of the survey page (Olson & Parkhurst, 2013).
  • A technique that uses the correlations between multiple methods (i.e., modes) and multiple traits (i.e., variables) to assess the validity of a measurement process.
  • A sample element is selected, but an interview does not take place (for example, due to noncontact, refusal, or ineligibility).
  • Sampling units that were potentially eligible but could not be reached.
  • The failure to obtain measurement on sampled units or items (see unit nonresponse and item nonresponse).
  • The systematic difference between the expected value (over all conceptual trials) of a statistic and the target population value due to differences between respondents and nonrespondents on that statistic of interest.
  • Survey error (variance and bias) that is introduced when not all sample members participate in the survey (unit nonresponse) or not all survey items are answered by a sample element (item nonresponse).
  • A supplemental survey of sampled survey nonrespondents. Nonresponse followup surveys are designed to assess whether respondent data are biased due to differences between survey respondents and nonrespondents.
  • A survey question that allows the respondent to formulate an answer in their own words. Unlike a closed question format, it does not provide a limited set of predefined answers.
  • A bidding process in which all the bidders are evaluated and then chosen on the basis of cost and technical merit.
  • A rate calculated based on the study's defined final disposition codes that reflect the outcome of specific contact attempts before the unit was finalized. Examples include response rate, cooperation rate, refusal rate, and contact rate.
  • An atypical observation that does not appear to follow the distribution of the rest of a dataset.
  • Extensive editing that becomes too costly for the amount of error that is being reduced.
  • A compromise solution between split and full translations is to ensure that some overlap exists between materials divided among translators. The material is split up the way cards are dealt in many games, with everyone getting a spread of the material. Each translator could then receive the(...)
  • The exceeding of costs estimated in a contract.
  • Couper first introduced the term 'paradata' into the survey research methodology field in 1998 (Groves & Couper, 1998), and the definition of paradata has vastly expanded since then. Paradata now refers to additional data that can be captured during the process of producing a survey statistic(...)
  • A bar chart that displays the most common types of errors in a process, ordered by frequency in descending order; for example, the five or six most frequent types of help desk calls from interviewers using computer-assisted interviewing.
  • A technique used in quality control to determine whether quality assurance procedures have worked. An example of a technique would be analysis of routine measures of interviewer or coder performance.
  • Information that can be used to identify a respondent that minimally includes their name, address, telephone number, and identification number (such as a social security number or driver's license number), but may include other information as well, such as biometric data.
  • A quantitative miniature version of the survey data collection process that involves all procedures and materials that will be used during data collection taking place before the actual data collection begins. A pilot study is also known as a 'dress rehearsal.'
  • An agreement (typically in written or electronic form) to maintain the confidentiality of survey data that is signed by persons who have any form of access to confidential information.
  • A file that is coded in a non-proprietary format such as XML or ASCII, and thus can be used by a variety of software and hardware platforms.
  • An adjustment made to reduce the impact of error on estimates.
  • A statistical adjustment that assures that sample estimates of totals or percentages (e.g., the estimate of the percentage of men living in Mexico based on the sample) equal population totals or percentages (e.g., the estimate of the percentage of men living in Mexico based on census data).(...)
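    A minimal sketch of such an adjustment, with assumed notation: within adjustment cell \(h\), each base weight is multiplied by the ratio of the known population count to the weighted sample estimate,
    \[ w_i^{adj} = w_i \cdot \frac{N_h}{\hat{N}_h}, \quad \hat{N}_h = \sum_{i \in h} w_i. \]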
  • A measure of how close an estimator is expected to be to the true value of a parameter, which is usually expressed in terms of imprecision and is related to the variance of the estimator. Less precision is reflected by a larger variance.
  • Precoding refers to the determination of coding conventions and formats of survey items (especially the closed-ended questions) based on existing coding frames or prior knowledge of the survey population in the questionnaire design phase.
  • An interviewer behavior that must be carried out exactly as specified.
  • A collection of techniques and activities that allow researchers to evaluate survey questions, questionnaires, and/or other survey procedures before data collection begins.
  • A context effect in which the placement of an item at the beginning of a list of response options increases the likelihood that it will be selected by the respondent (see also 'recency').
  • A cluster of elements sampled at the first stage of selection.
  • A sampling method that ensures that sample estimates of totals or percentages (e.g., the estimate of the percentage of men living in Mexico based on the sample) equal population totals or percentages (e.g., the estimate of the percentage of men living in Mexico based on census data). The(...)
  • A sampling method wherein each element of the sampling frame has a known, non-zero chance of selection.
  • The use of tools such as flowcharts to analyze processes, e.g., respondent tracking, computerized instrument programming and testing, coding, data entry, etc. The aim is to identify indicators or measures of product quality. Process analysis also is used to identify improvements that can be(...)
  • A plan for improving a process (as a result of process analysis). A process improvement plan may result from the development of a quality management plan, or as a result of quality assurance or quality control.
  • An indicator that refers to aspects of data collection (e.g., HPI, refusal rates, etc.).
  • Survey error (variance and bias) that arises during the steps between collecting information from the respondent and having the value used in estimation. Processing errors can include all post-collection operations, as well as the printing of questionnaires. Most processing errors occur in(...)
  • An indicator that refers to aspects relating to reaching the goal (e.g., number of complete interviews).
  • An interview with someone other than the person about whom information is being sought (e.g., a parent or spouse). There should be a set of rules specific to each survey that define who can serve as a proxy respondent.
  • An anonymized data file, stripped of respondent identifiers, that is distributed for the public to analyze.
  • The degree to which a product's characteristics conform to the requirements as agreed upon by the producers and clients.
  • A planned system of procedures, performance checks, quality audits, and corrective actions to ensure that the products produced throughout the survey lifecycle are of the highest achievable quality. Quality assurance planning involves identifying key indicators of quality.
  • A systematic examination of the quality system of an organization by an internal or external quality auditor or team. It assesses whether the quality management plan has clearly outlined quality assurance, quality control, corrective actions to be taken, etc., and whether they have been(...)
  • A quality checklist identifies all the steps, procedures, and controls specified to ensure required procedures have been followed and their goals met. An example of a translation quality checklist is the ESS Round 7 Translation Quality Checklist (European Social Survey, 2014c).
  • A planned system of process monitoring, verification, and analysis of quality indicators and updates to quality assurance procedures to ensure that quality assurance is working.
  • A document that describes the quality system an organization will use, including quality assurance and quality control techniques and procedures, requirements for documenting the results of those procedures, corrective actions taken, and process improvements made.
  • A comprehensive report prepared by producers of survey data that provides information users will need to assess the quality of the data.
  • Text associated with some questions in interviewer-administered surveys that provides information on the objectives of the questions.
  • The deliberate technical or substantive modification of some feature of a question, a response scale, or another part of a questionnaire to better fit a new sociocultural context or particular target population (e.g., updating language: 'radio' for 'wireless,' adapting an adult questionnaire(...)
  • A non-probability sampling method that sets specific sample size quotas or target sample sizes for subclasses of the target population. The sample quotas are generally based on simple demographic characteristics (e.g., quotas for gender, age groups, and geographic region subclasses).
  • A method of selecting telephone numbers in which the target population consists of all possible telephone numbers, and all telephone numbers have an equal probability of selection.
  • For each randomly chosen sampling point (e.g., urban units, small cities, or voting districts), interviewers are assigned a starting location and provided with instructions on the random walking rules (e.g., which direction to start, on which side of the streets to walk, and which(...)
  • A response format wherein respondents express their preferences by ordering persons, brands, etc. from top to bottom, i.e., generating a ranked order of a list of items or entities.
  • A response format requiring the respondent to select one position on an ordered scale of response options. Example: "To what extent do you agree or disagree with the following statement?"
  • A context effect in which the placement of an item at the end of a list of response options increases the likelihood that it will be selected by the respondent (see also 'primacy').
  • To have someone other than the interviewer (often a supervisor) attempt to speak with the sample member after a screener or interview is conducted, in order to verify that it was completed according to the specified protocol.
  • The proportion of all potentially eligible sampling units in which the respondent refuses to do an interview or breaks off the interview.
  • The process or action of interviewing the same respondent twice to assess reliability (simple response variance).
  • The consistency of a measurement, or the degree to which an instrument measures the same way each time it is used under the same condition with the same subjects.
  • Techniques that can reduce potential respondents' reluctance to participate, thereby increasing the overall response rate.
  • A series of fixed panel surveys that may or may not overlap in time. Generally, each panel is designed to represent the same target population definition applied at a different point in time.
  • A systematic probability subsample of the full sample.
  • A question which is repeated ('replicated') at a later stage in a study or in a different study. Replication assumes identical question wording. Questions which were used in one study, then translated and used in another, are also frequently spoken of as having been 'replicated.'
  • A rule to help interviewers determine which persons to include in the household listing based on what the informant reports.
  • A description of the values and frequencies associated with a particular question.
  • The return of false or subjectively modified information from survey respondents.
  • A method of examining potential problems in responding to particular items, measured by recording the time between the interviewer asking a question and the response being given.
  • The category, wording, and order of options given with the survey question. See Questionnaire Design for more information.
  • The number of complete interviews with reporting units divided by the number of eligible reporting units in the sample.
  • Consistent and stable tendencies in response behavior which are not explainable by question content or presentation. These are considered to be a source of biased reporting. For example, extreme response style is the tendency to select the two extreme endpoints of a scale, midpoint response(...)
  • Developed by Groves and Heeringa (2006), responsive design is an approach wherein researchers continually monitor selected paradata to inform the error-cost tradeoff in real time as the basis for altering design features through interventions during the course of data collection or for(...)
  • A bidding process in which only bidders prequalified through a screening process may participate in bidding. Bidders are evaluated and then chosen on the basis of cost and technical merit.
  • A file that includes information that can be related to specific individuals and is confidential and/or protected by law. Restricted-use data files are not required to include variables that have undergone coarsening disclosure risk edits. These files are available to researchers under(...)
  • A person who participates in the review of translations in order to produce a final version (see Translation: Overview, Appendix A).
  • A study where elements are repeatedly measured a set number of times, then replaced by new randomly chosen elements. Typically, the newly chosen elements are also repeatedly measured the appropriate number of times.
  • Information on the target and final sample sizes, strata definitions, and the sample selection methodology.
  • A selected sampling unit of the target population that may be eligible or ineligible.
  • A computerized and/or paper-based system used to assign and monitor sampling units and record documentation for sample records (e.g., time and outcome of each contact attempt).
  • A person selected from a sampling frame to participate in a particular survey.
  • The systematic difference between the expected value (over all conceptual trials) of an unweighted sample estimate and the target population value, occurring because some elements on the sampling frame have a higher chance of selection than other elements.
  • Survey error (variance and bias) due to observing a sample of the population rather than the entire population.
  • Primary sampling units in one-PSU-per-stratum sampling designs that are grouped in pairs after data collection for the purpose of estimating approximate sampling variances.
  • A list or group of materials used to identify all elements (e.g., persons, households, establishments) of a survey population from which the sample will be selected. This list or group of materials can include maps of areas in which the elements can be found, lists of members of a professional(...)
  • An element, or a cluster of elements, considered for selection in some stage of sampling. For a sample with only one stage of selection, the sampling units are the same as the elements. In multi-stage samples (e.g., enumeration areas, then households within selected enumeration areas, and(...)
  • A measure of how much a statistic varies around its mean (over all conceptual trials) as a result of the sample design only. This measure does not account for other sources of variable error such as coverage and nonresponse.
  • To answer survey questions optimally, four stages of cognitive processing are required: (1) interpret the questions comprehensively, (2) retrieve information from memory, (3) form a judgment, and (4) map the judgment to the appropriate response category. However, to lower cognitive burden,(...)
  • A decision-making strategy that entails searching through any available alternatives until an acceptability threshold is met.
  • A cluster of sample elements sampled at the second stage of selection.
  • A mixed-mode design in which additional modes are offered as part of a nonresponse followup program.
  • Shared language harmonization can be understood as the procedures and result of trying to harmonize, as much as possible, different regional varieties of a 'shared' language across countries, i.e., in terms of vocabulary and/or structure. An example would be to harmonize translations into(...)
  • Monitoring without the awareness of the interviewer.
  • A procedure where a sample of size \(n\) is drawn from a population of size \(N\) in such a way that every possible sample of size \(n\) has the same probability of being selected.
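    Equivalently, as a standard consequence of this definition: each of the \(\binom{N}{n}\) possible samples has selection probability \(1/\binom{N}{n}\), and each individual element is included with probability \(n/N\).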
  • A tendency for respondents to overreport desirable attributes or attitudes and underreport undesirable attributes or attitudes.
  • A question typically asking about respondent characteristics such as age, marital status, income, employment status, and education.
  • A signal warning that there is an inconsistency between the current response and a previous response. The soft consistency check should provide guidance on resolving the inconsistency, but the interviewer or respondent may continue the survey without resolving it (as opposed to a hard(...)
  • The original document from which other (target) documents are translated or adapted as necessary.
  • The original instrument from which other (target) instruments are translated or adapted as necessary.
  • The language in which a questionnaire is available from which a translation is made. This is usually, but not always, the language in which the questionnaire was designed.
  • The questionnaire taken as the text for translation. The source questionnaire would normally not be intended to be fielded as such, but would require local adaptation even if fielded in the source language: for instance, in the ESS, the source questionnaire is English, but the questionnaires(...)
  • Original variable(s) chosen as part of the harmonization process.
  • A design that contains a blend of cross-sectional and panel samples at each new wave of data collection.
  • Each translator translates only a part of the total material to be translated in preparation for a review meeting, in contrast to translating the entire text (see full translation).
  • An experiment conducted as an independent research project (as opposed to an 'embedded experiment').
  • An interviewing technique in which interviewers are trained to read every question exactly as worded and abstain from interpreting questions or responses or offering much in the way of clarification.
  • A statistical chart that compares expected process performance (e.g., number of hours worked by interviewers in a week) against actual performance (a sketch of the control-limit check follows this glossary). For example, interviewers who perform outside upper and lower boundaries on this measure are flagged; if greater variation from expected(...)
  • Clicking the same answer for each item in a multi-numeric list.
  • Mutually exclusive, homogenous groupings of population sample elements or clusters of elements that comprise all of the elements on the sampling frame. The groupings are formed prior to selection of the sample.
  • A sampling procedure that divides the sampling frame into mutually exclusive and exhaustive groups (or 'strata') and places each element on the frame into one of the groups (see the stratified selection sketch after this glossary). Independent selections are then made from each stratum, one by one, to ensure representation of each subgroup on the(...)
  • A technique where each nonresponding sample element from the initial sample is replaced by another element of the target population, typically not an element selected in the initial sample. Substitution increases the nonresponse rate and, most likely, the nonresponse bias.
  • The lifecycle of a survey research study, from design to data dissemination.
  • The actual population from which the survey data are collected, given the restrictions from data collection operations.
  • A statistical adjustment created to compensate for complex survey designs with features including, but not limited to, unequal likelihoods of selection, differences in response rates across key subgroups, and deviations from distributions on critical variables found in the target population(...)
  • A procedure that selects every \(k^{th}\) element on the sampling frame after a random start (see the systematic selection sketch after this glossary).
  • The practice of adapting interviewer behavior to the respondent's expressed concerns and other cues in order to provide feedback to the respondent that addresses their perceived reasons for not wanting to participate.
  • The language into which a questionnaire is to be translated.
  • The finite population for which the survey sponsor wants to make inferences using the sample statistics.
  • A variable created during the harmonization process.
  • A commonly used tool in statistics for handling the variance estimation of statistics that are not simple additions of sample values, such as odds ratios. The Taylor series handles this by converting the ratio into a linear approximation that is a function of sums of the sample values (a worked linearization of a ratio follows this glossary).
  • Team approaches to survey translation and translation assessment bring together a group of people with different talents and functions in the team so as to ensure the mix of skills and discipline expertise needed to produce an optimal translation version in the survey context. Each stage of(...)
  • A formal offer specifying activities to be completed within the prescribed time and budget.
  • Time and date data recorded with survey data, indicating dates and times of responses, at the question and questionnaire section levels. They also appear in audit trails, recording times questions are asked, responses recorded, and so on.
  • A type of coding in which values that exceed the predetermined maximum value are reassigned to that maximal value or are recoded as missing data (see the top-coding sketch after this glossary).
  • Total survey error provides a conceptual framework for evaluating survey quality. It defines quality as the precise estimation and reduction of the mean squared error (MSE) of statistics of interest (the MSE decomposition is written out after this glossary).
  • The process of attempting to locate a sample element whose contact information (e.g., address, telephone number, email address) has changed since the previous time the element's contact information was collected.
  • A mathematical operation that changes the values of variables.
  • Translatability assessment is a recently developed process that aims to identify potential problems in translation and adaptation during the initial instrument development stage in the source language (Conway, Acquadro, & Patrick, 2014; Sperber et al., 2014). It evaluates the extent to which a(...)
  • The person who translates text from one language to another (e.g., French to Russian). In survey research, translators might be asked to fulfill other tasks as well, such as reviewing and copyediting.
  • A repository whose mission is to provide reliable, long-term access to managed digital resources to its designated community, both now and in the future.
  • A code that is not authorized for a particular question. For instance, if a question that records the sex of the respondent has documented codes of '1' for female, '2' for male, and '9' for missing data, a code of '3' would be an undocumented code.
  • A unique number that identifies an element (e.g., a serial number). The number remains attached to the element throughout the entire survey lifecycle and is published with the public dataset. It does not contain any information about the respondents or their addresses.
  • An eligible sampling unit that has little or no information because the unit did not participate in the survey.
  • A description of the subgroup of respondents to which the survey item applies (e.g., 'Female, 45, Now working').
  • An unwritten language is one which does not have a standard written form used by the native speakers of the language.
  • Evaluation of a computer-assisted survey instrument to assess the effect of design on interviewer or respondent performance. Methods of evaluation include review by usability experts and observation of users working with the computer and survey instrument.
  • The extent to which a variable measures what it is intended to measure.
  • A measure of how much a statistic varies around its mean over all conceptual trials.
  • A brief story/scenario describing hypothetical situations or persons and their behaviors to which respondents are asked to react in order to allow the researcher to explore contextual influences on respondents' response formation processes.
  • Analysis of vocal characteristics (also called paralinguistic data; Draisma & Dijkstra, 2004) involves examining audio recordings of interviews to identify notable properties of the interviewer's voice (such as pitch, intonation, rate of speech, and volume), similar to the way behavior codes(...)
  • A post-survey adjustment that may account for differential coverage, sampling, and/or nonresponse processes (a simple post-stratification sketch follows this glossary).
  • The process of harmonizing language versions within one multilingual country, such as harmonizing the Ukrainian and Russian versions within Ukraine.
  • Word lists can serve various purposes: when regional varieties of a language need to be accommodated, a word list can be created featuring the words that are required for specific varieties of a language. They can also be incorporated into computer applications of an instrument. Word lists can(...)
  • Experts working together to oversee the implementation of a particular aspect of the survey lifecycle (e.g., sampling, questionnaire design, training, quality control, etc.).
  • XML (eXtensible Markup Language) is a flexible way to create common information formats and share both the format and the data on the Web, on intranets, and elsewhere. XML documents are made up of storage units called entities, which contain either parsed or unparsed data. Parsed data are made(...)
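Illustrative sketches

Simple random selection (referenced above). A minimal Python sketch, assuming the sampling frame is a flat list of element identifiers; random.sample draws without replacement, so every subset of size \(n\) is equally likely. The frame contents, sample size, and seed are invented for illustration.

    import random

    def simple_random_sample(frame, n, seed=None):
        """Draw n elements so that every subset of size n is equally likely."""
        rng = random.Random(seed)
        return rng.sample(frame, n)

    # Hypothetical frame of 100 household identifiers.
    frame = ["HH%03d" % i for i in range(1, 101)]
    print(simple_random_sample(frame, 5, seed=42))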
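Statistical process control chart (referenced above). A minimal sketch, assuming control limits set at three standard deviations around an expected weekly-hours figure; the expected values, the 3-sigma rule, and the interviewer identifiers are all assumptions made for illustration.

    # Assumed expectation for weekly interviewer hours (illustrative values only).
    expected_mean, expected_sd = 40.0, 5.0
    upper = expected_mean + 3 * expected_sd   # upper control limit (assumed 3-sigma rule)
    lower = expected_mean - 3 * expected_sd   # lower control limit

    # Hypothetical hours actually worked by five interviewers in one week.
    hours = {"int01": 38, "int02": 41, "int03": 12, "int04": 39, "int05": 65}

    flagged = [iid for iid, h in hours.items() if not (lower <= h <= upper)]
    print("limits:", (lower, upper), "flagged:", flagged)   # int03 and int05 fall outside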
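Stratified selection (referenced above). A minimal sketch, assuming the frame is a list of (element, stratum) pairs and that the number of selections per stratum is given; allocation rules such as proportional or optimal allocation are outside the sketch, and all names are invented.

    import random
    from collections import defaultdict

    def stratified_sample(frame, per_stratum, seed=None):
        """frame: iterable of (element_id, stratum); per_stratum: stratum -> sample size."""
        rng = random.Random(seed)
        by_stratum = defaultdict(list)
        for element, stratum in frame:
            by_stratum[stratum].append(element)
        # Independent simple random selection within each stratum.
        return {s: rng.sample(members, per_stratum[s]) for s, members in by_stratum.items()}

    frame = [("HH%03d" % i, "urban" if i <= 60 else "rural") for i in range(1, 101)]
    print(stratified_sample(frame, {"urban": 3, "rural": 2}, seed=1))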
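Systematic selection (referenced above). A minimal sketch, assuming an integer interval \(k\) that divides the frame size evenly; fractional intervals and circular selection are not handled.

    import random

    def systematic_sample(frame, k, seed=None):
        """Select every k-th element after a random start in [0, k)."""
        rng = random.Random(seed)
        start = rng.randrange(k)
        return frame[start::k]

    frame = ["HH%03d" % i for i in range(1, 101)]
    print(systematic_sample(frame, k=10, seed=7))   # one selection from each interval of 10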
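Taylor series linearization (referenced above). A worked first-order linearization of a ratio of estimated totals, written in common textbook notation rather than in the conventions of any particular software package. For a population ratio \(R = Y/X\) estimated by \(\hat{R} = \hat{Y}/\hat{X}\), where \(\hat{Y}\) and \(\hat{X}\) are (weighted) sample totals, expanding around the population totals gives

\[
\hat{R} - R \approx \frac{1}{X}\Big[(\hat{Y} - Y) - R\,(\hat{X} - X)\Big],
\]

so the sampling variance is approximated by

\[
\operatorname{var}(\hat{R}) \approx \frac{1}{X^{2}}\Big[\operatorname{var}(\hat{Y}) + R^{2}\operatorname{var}(\hat{X}) - 2R\operatorname{cov}(\hat{Y}, \hat{X})\Big],
\]

with the variances and covariance of the totals estimated under the actual sample design.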
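Top coding (referenced above). A minimal sketch, assuming a numeric income variable and an arbitrary cap of 150,000; whether out-of-range values are reassigned to the cap or recoded as missing is a project decision, so both variants appear.

    CAP = 150_000   # assumed predetermined maximum value (illustrative)

    def top_code(value, cap=CAP):
        """Reassign values above the cap to the cap itself."""
        return min(value, cap)

    def top_code_to_missing(value, cap=CAP):
        """Recode values above the cap as missing (None)."""
        return value if value <= cap else None

    incomes = [52_000, 149_999, 310_000]
    print([top_code(v) for v in incomes])             # [52000, 149999, 150000]
    print([top_code_to_missing(v) for v in incomes])  # [52000, 149999, None]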
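Mean squared error (referenced above). The decomposition behind the total survey error entry: for an estimator \(\hat{\theta}\) of a parameter \(\theta\),

\[
\mathrm{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - \theta)^{2}\big] = \operatorname{var}(\hat{\theta}) + \big[\operatorname{Bias}(\hat{\theta})\big]^{2}.
\]

The variance term collects variable errors and the squared-bias term collects systematic errors, across sources such as coverage, sampling, nonresponse, measurement, and processing.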
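Post-survey weighting (referenced above). A minimal post-stratification sketch, assuming known population shares for a single grouping variable; the age groups, shares, and respondent counts are invented for illustration, and real adjustments typically combine several factors (selection probabilities, nonresponse, calibration).

    # Assumed (illustrative) population shares and respondent counts by age group.
    population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
    respondents = {"18-34": 120, "35-54": 200, "55+": 180}

    n_total = sum(respondents.values())

    # Cell weight = population share / sample share; every respondent in a cell receives it.
    weights = {cell: population_share[cell] / (count / n_total)
               for cell, count in respondents.items()}
    print(weights)   # underrepresented cells receive weights greater than 1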