Cross Cultural Images

X. Interviewer Recruitment, Selection, and Training

Webpage last modified: 2013-Jul-11

Kirsten Alcser and Judi Clemens


Interviewers play a critical role in surveys, as they implement the survey design. They are often required to perform multiple tasks with a high level of accuracy. In a face-to-face household survey, the interviewer may be required to physically locate the household and to update the sampling frame. In both telephone and face-to-face surveys, the interviewer has to contact the household, explain the purpose of the study, enumerate household members, select the respondent, motivate the respondent to participate, ask questions in the required manner, put the respondent at ease, and accurately record the respondent's answers as well as any other required information. Depending upon the survey topic and survey context, the interviewer may be required to perform additional tasks, such as bio-measure collection or oral translation.

Interviewers can influence responses through their personal attributes and their behaviors ("interviewer effects"). These guidelines present strategies to optimize interviewer efficiency and minimize the effect interviewer attributes have on the data through appropriate recruitment, selection, and case assignment; they also present strategies to minimize the effect that interviewer behaviors have on sampling error, nonresponse error, measurement error, and processing error through training.

Figure 1 shows interviewer recruitment, selection, and training within the survey production process lifecycle (survey lifecycle) as represented in these guidelines. The lifecycle begins with establishing study structure (Study, Organizational, and Operational Structure) and ends with data dissemination (Data Dissemination). In some study designs, the lifecycle may be completely or partially repeated. There might also be iteration within a production process. The order in which survey production processes are shown in the lifecycle does not represent a strict order to their actual implementation, and some processes may be simultaneous and interlocked (e.g., sample design and contractual work). Quality and ethical considerations are relevant to all processes throughout the survey production lifecycle. Survey quality can be assessed in terms of fitness for intended use (also known as fitness for purpose), total survey error, and the monitoring of survey production process quality, which may be affected by survey infrastructure, costs, respondent and interviewer burden, and study design specifications (see Survey Quality).

Figure 1. The Survey Lifecycle

Survey Lifecycle Illustration


Goal: To improve the overall quality of the survey data by minimizing interviewer effects, while controlling costs by optimizing interviewer efficiency.

  1. Determine the structure and composition of the interviewing staff.

    The structure and composition of the interviewing staff must be established during the design and planning phases of the project because these decisions will determine the number and type of interviewers required, the training protocol, sample assignment, and most efficient methods of supervision.

    Procedural steps
    • Consider such parameters as sample size and, for face-to-face studies, geographic distribution; the timing and duration of the data collection period; budget constraints; and the language(s) in which interviewing will occur [44].
    • For face-to-face studies, decide whether interviewers will travel, either individually or in teams with a supervisor, or be locally assigned.
      • Factors favoring the use of traveling interviewers include:
        • Lower training costs compared to using local interviewers, as there are fewer interviewers to train and trainers do not have to travel to as many different locations.
        • Breach of confidentiality is less of an issue than with local interviewers because interviewers are unlikely to know the respondent personally.
        • Respondents may be more willing to participate in sensitive-topic surveys if the interviewers are strangers or "outsiders" [34].
      • Additional factors favoring the use of traveling teams rather than traveling individual interviewers include:
        • Traveling as a group may be safer than traveling individually.
        • Monitoring and supervision are easier than with individual traveling interviewers, since the supervisor is part of the group and is in close daily contact with the interviewers.
        • Interviewers have more opportunity to share experiences, learn from one another, and support one another than they would if traveling individually.
        • If multiple household members need to be surveyed, different interviewers can speak to them concurrently.
        • Similarly, if privacy is difficult to achieve, one interviewer can speak to the respondent while another engages other household members.
        • It is easier to implement interpenetrated sample assignments than it would be with individual traveling interviewers [24].
      • Factors favoring the use of local interviewers include:
        • Employing a larger number of interviewers, each with a smaller workload, reduces the interviewer design effect [32] [40] (see Appendix A).
        • With a larger field staff, data collection can be completed within a shorter period of time, although the effect is not linear.
        • More call attempts can be made per case, since the interviewer remains in the area throughout the data collection period.
        • Local interviewer assignment reduces the need for interviewers to travel large distances, thereby reducing travel costs and time expended.
        • Local interviewers are familiar with the area and are more likely to share the language and customs of respondents; they may achieve higher response rates than would a stranger or "outsider."
    • For telephone studies, decide whether interviewers will conduct the survey from a central telephone facility or from their homes (that is, decentralized telephone interviewing).
      • Factors favoring the use of centralized telephone interviewing include:
        • Training can be easily centralized.
        • Monitoring and supervision can be easier and less expensive, since the supervisor is in close daily contact with the interviewers and may, as a result, have access to more information of relevance.
        • It is easier to transfer sample elements among interviewers.
        • Cost controls are more efficient.
      • Factors favoring the use of decentralized telephone interviewing include:
        • A dedicated telephone facility is not required.
        • Interviewer working hours can be more flexible.
      • Some organizations already have a system in place which mixes centralized and decentralized telephone interviewing.
        • In these cases, retaining the combination of centralized and decentralized interviewing may minimize disruption and maintain flexibility.
        • Establishing a sample management system that pulls together information from the two into a single report can be a challenge.
    • Estimate the Hours Per Interview (HPI). The HPI includes time spent traveling to all sample elements, attempting to contact them, documenting contact attempts, and working on project-related administrative duties, as well as conducting the interview with those respondents who agree to participate. The HPI, combined with the hours per week that each interviewer is expected to work on the project and the total number of weeks planned for data collection, helps determine the number of interviewers required (see Appendix B for an example).
    • Utilizing the results of feasibility studies (see Data Collection), consider any special requirements of the study, such as:
      • How many languages are spoken and in what regions?
      • Are any specialized skills or knowledge required?
      • Would interviewer familiarity with the topic introduce bias or enhance an interviewer's ability to collect data?
      • Do cultural norms or the nature of the topic necessitate matching interviewers and respondents by gender, dialect, religion, race, ethnicity, caste, or age?
      • Is physical stamina a consideration (e.g., if interviewers will be required to walk, ride, or bicycle long distances) [38]?
      • Is the sample widely dispersed, making interviewer access to a car or reliable public transportation a consideration?
      • Is interviewer safety an issue?
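The staffing arithmetic described above (total interviewer hours from the HPI, divided by each interviewer's available hours) can be sketched as follows. This is an illustrative helper, not the worked example in Appendix B; all numbers are hypothetical, and the optional recruiting buffer reflects the 10-15 percent attrition allowance suggested in Guideline 3.

```python
import math

def interviewers_needed(sample_size: int, hpi: float,
                        hours_per_week: float, weeks: int,
                        attrition_buffer: float = 0.0) -> int:
    """Estimate interviewer headcount from Hours Per Interview (HPI).

    hpi: total interviewer hours per completed interview, including travel,
         contact attempts, record-keeping, and administrative duties
    attrition_buffer: extra fraction to recruit to cover attrition and
         dismissal of unsuitable candidates (hypothetical policy choice)
    """
    total_hours = sample_size * hpi
    hours_per_interviewer = hours_per_week * weeks
    base = total_hours / hours_per_interviewer
    return math.ceil(base * (1.0 + attrition_buffer))

# Hypothetical example: 2,000 interviews at 5 hours each, with interviewers
# working 20 hours/week for 10 weeks, plus a 15% recruiting buffer.
print(interviewers_needed(2000, 5.0, 20.0, 10, attrition_buffer=0.15))  # 58
```

In practice the HPI itself is usually the hardest input to estimate and may need revision once early production data arrive.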
    Lessons learned
    • Many organizations use a combination of interviewer assignment protocols. For example, they may hire local interviewers to make initial contact with sample households, select the respondent, and, if he or she is willing, administer the survey. Later in the data collection period, special traveling interviewers (for instance, experienced interviewers who have proven to be especially skillful at gaining cooperation or relating to particular types of respondents) can be brought in to persuade those selected individuals who have expressed a reluctance to participate. Alternatively, local interviewers might be hired in heavily populated areas while traveling interviewers are sent to more remote regions.
    • If traveling teams of interviewers are used, the interviewer may not always be conversant in the respondent's language, and local interpreters may be needed to facilitate data collection. For example, the French Institut National d'Etudes Démographiques has collected data in several Bwa villages in Mali for over 15 years. Although French is the official language of Mali, most villagers speak only Boma, so interpreters were essential for collecting data. The interviewer was responsible for administering the questionnaire, while the interpreter's job was to act as a neutral intermediary between the interviewer and respondent, conveying the words and the concepts attached to them to the two speakers [46] (see Translation).
    • Matching interviewer and respondent characteristics may improve cooperation but only appears to impact survey data quality if the topic of the survey is related to an identifiable and stable interviewer attribute.
      • Indonesian researchers felt that matching interviewers with respondents in terms of age, marital status, and child-rearing experience improved rapport and willingness to participate during in-depth interviews [42].
      • Several studies indicate that when the topic of the survey (e.g., racial attitudes or women's rights) is related to a fixed interviewer attribute (e.g., race or gender), the interviewer attribute can affect respondents' answers [15] [24] [27] [30] [47] [49].
      • If the topic of the survey is not related to a fixed interviewer attribute, matching the interviewer and respondent on the attribute does not appear to affect data quality. Axinn et al. [3] found that matching Nepalese interviewers and respondents by gender and ethnicity for a health survey did not decrease the number of technical errors and "don't know" responses or reduce incorrect information gathered during the interview.
    • Attempting to match interviewer and respondent characteristics may strain the project's resources, particularly if this is not an established practice in the locale.
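Guideline 1 notes that hiring more interviewers, each with a smaller workload, reduces the interviewer design effect (see Appendix A). Assuming Appendix A uses the standard Kish-style formula, deff = 1 + (m - 1) * rho, where m is the average workload per interviewer and rho is the intra-interviewer correlation, a minimal sketch:

```python
def interviewer_design_effect(avg_workload: float, rho_int: float) -> float:
    """Kish-style interviewer design effect: deff = 1 + (m - 1) * rho.

    avg_workload: mean number of completed interviews per interviewer (m)
    rho_int: intra-interviewer correlation for a survey item (rho)
    """
    return 1.0 + (avg_workload - 1.0) * rho_int

# Illustration with hypothetical values: the same 1,000 interviews spread
# over 40 interviewers instead of 10 roughly halves the variance inflation.
few = interviewer_design_effect(avg_workload=100, rho_int=0.02)   # 10 interviewers
many = interviewer_design_effect(avg_workload=25, rho_int=0.02)   # 40 interviewers
print(round(few, 2), round(many, 2))  # 2.98 1.48
```

The gain is not free: more interviewers mean higher recruitment and training costs, which is the trade-off the guideline asks planners to weigh.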
  2. Determine the pay structure for the data collection staff.

    Since data collection staff quality has a major impact on the quality of the data collected, it is important to attract and retain the most qualified interviewers possible.

    Procedural steps
    • Interviewer pay structures vary greatly across cultures. Depending on local labor laws, set interviewer pay comparable to the pay for other jobs requiring similar skills, ideally adjusted for regional cost of living standards.
    • Keep in mind local research traditions, the mode of the survey, and local labor laws. The two standard policies are to pay interviewers an hourly rate or to pay per completed interview [17] [44].
      • Factors favoring payment per interview:
        • It is most feasible if each completed interview takes approximately the same amount of interviewer effort, as is more likely in a telephone survey [44].
        • It is easier to monitor and control interviewer costs than when paying by the hour [44] [52].
      • Factors favoring an hourly rate:
        • It is most feasible if the effort to complete an interview varies widely, as is common in face-to-face surveys [33] [44].
        • Interviewers have less incentive to perform hurried, sloppy work or even to fabricate interviews than when paid per interview [44] [52].
        • Interviewers are less likely to focus on easy cases while neglecting those who are hard to reach or hard to persuade to participate than when paid by the completed interview [17] [44].
        • Interviewers may be more willing to spend time on other important tasks (e.g., completing a thorough screening interview and entering comprehensive, accurate contact attempt records) than when paid by the completed interview.
      • When determining pay, consider the length and complexity of the interview, the expected difficulties of obtaining cooperation, and the amount of record-keeping demanded of the interviewer [17].
      • Pay interviewers for time spent in training.
      • Adjust the pay rate based on interviewer experience and any special skills that interviewers possess and the study requires (e.g., bilingual interviewing).
      • Consider offering incentives for work above a certain target (e.g., response rate, contact rate, refusal conversion rate) as a way to keep interviewers motivated [17] [55].
        • Incentives can be extra pay, prizes, or special rewards.
        • Overreliance on interviewer incentives for completed interviews may give interviewers a reason to fabricate interviews [55].
        • Any bonus system must be perceived by the interviewers as being fair. For example, different sample assignments can vary considerably in the challenges they pose for interviewers [11].
    Lessons learned
    • Most survey organizations have a standard policy concerning pay arrangements (either paying per interview or paying by the hour) which they may be unwilling to change [11].
    • If interviewers are paid by the interview instead of by the hour, they may rush the critical respondent-interviewer rapport-building process. It is especially important for face-to-face interviewers to spend the time necessary to develop this rapport so that respondents feel comfortable reporting honestly, as this leads to higher-quality responses. For example, when approaching a household, face-to-face interviewers need to conform to the culture's introductory customs, such as drinking tea or meeting elders [28].
    • To discourage hurried, sloppy work when paying per interview, some organizations set a cap on the number of interviews that each interviewer is allowed to conduct in a day. Another strategy is to offer bonuses for high quality work. For example, set a basic pay per interview plus an additional 10% if the interviewer makes fewer than some predetermined number of errors. This requires the survey organization to have a monitoring system in place, which can distinguish between minor and more serious interviewer errors and can identify errors that cannot be attributed to the interviewer but rather to system factors, such as question wording and technology failures.
    • In contrast to face-to-face interviewing, an experiment with telephone interviewers found that their productivity increased when they were paid per interview as opposed to being paid per hour [12].
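The safeguards described in the lessons above (a daily cap on completed interviews and a quality bonus when attributable errors stay low) can be illustrated with a simple pay function. All rates, caps, and thresholds below are hypothetical policy choices, not values prescribed by these guidelines, and a real scheme would also need the error-monitoring system the text describes.

```python
def weekly_pay(completed: int, errors: int, *,
               rate_per_interview: float, daily_cap: int, days: int,
               bonus_rate: float = 0.10, error_threshold: int = 3) -> float:
    """Per-interview pay with a daily cap and a quality bonus (illustrative).

    Interviews beyond daily_cap * days are unpaid, discouraging rushed work;
    a bonus_rate bonus applies when interviewer-attributable errors stay
    below error_threshold. All parameter values are assumptions.
    """
    payable = min(completed, daily_cap * days)
    pay = payable * rate_per_interview
    if errors < error_threshold:
        pay *= 1.0 + bonus_rate
    return round(pay, 2)

# Hypothetical week: 28 interviews over 5 days with a cap of 5 per day,
# and only 2 attributable errors, so the 10% bonus applies.
print(weekly_pay(28, 2, rate_per_interview=12.0, daily_cap=5, days=5))  # 330.0
```

Note that the cap only works if supervisors can verify completion dates, and the bonus only works if the monitoring system can separate interviewer errors from system factors such as question wording or technology failures.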
  3. Recruit and select an appropriate number of qualified interviewers.

    The quality of an interviewer-administered survey depends, to a large extent, on the quality of the interviewers and their supervisors. It is important, therefore, to recruit and select the best possible people for the job. In addition, selecting candidates who are well suited for the job may lead to lower interviewer turnover and reduced survey costs.

    Procedural steps
    • Recruit applicants.
      • Sometimes the interviewing component of the study can be subcontracted to an external survey organization with an existing pool of interviewers. At other times, the research team will have to implement outreach measures to find interviewers, such as asking local contacts for suggestions, placing flyers in strategic locations, or putting ads in local papers. In the latter case, recruitment and training will take longer.
      • Keeping in mind any special considerations, as described in Guideline 1, target sources of potential interviewer candidates. Professionals, such as traveling nurses, can be a good source of interviewers for health studies; teachers, or others with substantive knowledge of the study topic, may also be good candidates.
      • Keep cultural norms and logistical factors in mind when recruiting interviewers. For example, it may not be acceptable in some cultures for young people (e.g., college students) to interview older persons or for women to interview men and vice versa. Similarly, persons with other jobs may not be available to work on the study at the times when respondents are most likely to be at home.
    • Recruit more than the number of interviewers needed for data collection to allow for attrition and the dismissal of candidates who prove to be unsuitable.
    • As appropriate, prepare an application form to use in prescreening interviewer candidates before they are invited to an in-person or telephone job interview.
    • Interview applicants in the mode of the study. That is, hold telephone screening interviews for a telephone survey and face-to-face screening interviews for a face-to-face study.
    • Evaluate each candidate.
      • If appropriate for the culture, conduct a criminal background check, particularly if the interviewers will handle sensitive information or come into contact with vulnerable populations (e.g., the young, the old, the infirm).
      • Criteria for employment commonly include interviewing skills, language skills, computer or technical skills, organizational skills, education, availability, location, the ability to meet production (i.e., data collection) goals, and the capability to handle potentially emotional or stressful interactions with respondents [44].
      • When possible, select interviewers who have previously worked on similar studies and have good recommendations based on their performance. Experienced interviewers require less training and are likely to achieve higher response rates [11] [19].
      • Evaluate the accuracy and clarity with which each potential candidate can read and process the survey questions in the language(s) of the interview and make sure that he or she is comfortable reading out loud. Ideally, language proficiency should be formally assessed by an outside expert or language assessment firm and should include evaluation of [43]:
        • Conversational skills (e.g., comprehension level, comprehension speed, speech level, speech speed, and accent)
        • Writing skills (e.g., grammar, spelling, and the ability to enter responses)
        • Reading skills (e.g., reading aloud)
      • Ideally, test applicants' computer skills for studies using a computerized questionnaire and sample management system.
      • Select interviewers who are punctual and have good organizational skills (e.g., are able to handle forms and keep track of paperwork).
      • Select interviewers who have completed the full period of required schooling within their country.
      • For face-to-face studies, assess applicants' ability to read maps.
      • Give the candidates a realistic preview of the job including the survey topic and the type of questions that will be asked; describe any non-traditional interviewing tasks (e.g., taking physical measures) in the recruitment description and the screening interview.
      • Clearly present the candidates with study expectations for workload (weekly, monthly, including evening work and possibly weekend work).
      • Obtain the candidates' written commitment to work at the expected level of effort for the duration of the data collection period.
      • Base selection on an objective evaluation of the candidate's abilities rather than his or her relationship to survey staff or favoritism [38] [54] [57].
    Lessons learned
    • Vaessen et al. [54] suggest that study managers recruit at least 10-15 percent more than the number of interviewers ultimately needed for fieldwork to allow for attrition and the dismissal of candidates who prove to be unsuitable.
    • A variety of selection criteria have been used successfully by established cross-cultural studies:
      • The Afrobarometer Survey's interviewers (preferably women) usually hold first degrees in social sciences and have some university education, strong facility in the local language, and the ability to relate to respondents in a respectful manner; selection is on a competitive basis which may include reading, speaking, and comprehension of national and local languages, and competence at following detailed instructions [57].
      • The Asian Barometer recruits interviewers from university graduates, senior social science undergraduates, and professional survey interviewers [58].
      • The European Social Survey highly recommends using experienced interviewers [59].
      • The Living Standard Measurement Study Survey requires that interviewers have completed secondary education and recommends fluency in two or more languages [36].
      • The Survey of Health, Aging and Retirement in Europe selects survey agencies for all participating countries and requires interviewers to have extensive face-to-face experience [60].
      • In the World Mental Health Survey, some participating countries use field staff from established survey organizations, while others recruit new interviewers from the general population or among college students; interviewer criteria vary among participating countries and may include interviewing experience, language skills, technology skills, education, and the capability to handle potentially sensitive situations with respondents [31].
    • Liamputtong, a professor in the School of Public Health at La Trobe University, believes that bicultural researchers who are familiar with both the local and mainstream cultures of communities in the study are ideal [35].
    • As noted in Guideline 1, it is not always possible to recruit interviewers who are fluent in the language(s) preferred or needed by respondents. In this case, other arrangements must be made. Options may include working with interpreters, data collection by proxy, using a bridge language if available, or using self-completion modes if literacy levels permit.
    • If the topic is sensitive (e.g., domestic violence), empathy and strong interpersonal skills may be more important than high levels of education or previous interviewing experience [29]. This holds true for both interviewers and any interpreters being used.
    • If the project's interviewing protocol differs significantly from previous studies, experienced interviewers may find it difficult to change their habits ("veteran effects"). In this case, it may be preferable to recruit and train new interviewers. Similarly, interviewers who have worked for an organization with low quality standards may have to unlearn some behaviors and adapt to new standards.
  4. Provide general basic interviewer training.

    Newly hired interviewers and supervisors require basic training in techniques for successful interviewing before they receive specific training on the study on which they will be working. Research indicates that interviewer training helps improve the quality of survey data by: (1) reducing item nonresponse [8], (2) increasing the amount and accuracy of information obtained [8], and (3) increasing survey participation by teaching interviewers how to identify and respond to respondents' concerns [39].

    Procedural steps
    • Allow sufficient time to adequately cover general interviewing techniques material.
    • Select appropriate trainers. These may include research staff, project managers, project management assistants, supervisors who directly oversee data collection staff, and experienced interviewers.
    • Provide the following general information:
      • An overview of the organization.
      • The roles of the interviewer and the supervisor in the research process.
      • The format of the survey interview.
      • An overview of different interview modes (face-to-face, telephone, computer-assisted, observing behaviors and events, and delivering self-administered survey materials such as diaries) and the tasks each poses for the interviewer.
      • An overview of different sampling designs and the tasks each poses for the interviewer.
      • Interviewer evaluation procedures and criteria.
    • Include the following prescribed procedures [20]:
      • Standardized question-asking. Train interviewers to read each question exactly as written and to read the questions slowly. They should ask all questions exactly in the order in which they are presented in the questionnaire [16] [24] (see Guideline 5 for exceptions).
      • Questionnaire format and conventions. Teach interviewers how to enter the answers to both open- and closed-ended questions. Train them to follow interviewing conventions such as emphasizing words in the questionnaire which appear in bold or are underlined, recognizing and not reading aloud interviewer instructions, reading or not reading optional words as appropriate, and selecting correct fill choices (e.g., he/she, has/have).
      • Clarification. If the study staff has not prepared a stock definition, train interviewers to repeat all or a specified part of the question verbatim when respondents ask for clarification. Interviewers should not make up their own definitions for any word, phrase, or question in the questionnaire [11]. Train interviewers to notify their supervisors about any questions which are confusing to respondents and require further clarification.
      • Probing. If a respondent's answer is inadequate and it is legally and culturally permissible to probe, train interviewers to employ unbiased techniques to encourage answers that are more complete, appropriate, and thoughtful [11] [24].
        • Such strategies of probing for more information may include a pause to encourage the person to fill the silence or a direct request for further information.
        • Verbal probes should be chosen from a stock list of phrases such as "Could you explain what you mean by that?" or "Can you tell me anything else about ___________ ?"
        • Stock phrases must be neutral; that is, they must avoid "sending a message" about what is a good or a bad response.
      • Feedback. Train interviewers to provide their respondents with culturally appropriate feedback when they are doing well in order to encourage them to listen carefully and to give thoughtful answers [11].
        • This feedback may be in the form of a nonverbal smile or nod or a short encouraging phrase.
        • Verbal feedback should be selected from a prepared list of stock phrases such as "That's useful information" or "Thank you, that's helpful" to ensure that the feedback is not evaluative of the content of the answer. For example, in English the word "okay" is discouraged for use in feedback because it could be construed as agreement with or approval of the respondent's answer.
        • As a general rule, give nonverbal or short feedback to short answers and longer feedback phrases to longer answers.
      • Recording answers. To reduce measurement error, train interviewers to record answers exactly as given.
        • If the question offers fixed alternatives, teach interviewers to get respondents to choose one of the fixed alternatives; interviewers should not infer which alternative is closest to what the respondent actually says [24].
        • If the question requires a narrative response, teach interviewers to record the answer in as near verbatim form as possible [24].
      • Confidentiality. Train interviewers to keep confidential all identifying respondent contact information as well as respondents' answers to survey questions.
      • Computer Assisted Personal Interviewing (CAPI) conventions (see Instrument Technical Design).
      • Completing contact attempt records. Teach interviewers to record when each contact was attempted, pertinent respondent comments (e.g., the best time to reach him or her or reasons for reluctance to participate), and the result of each contact attempt, using disposition codes (further information on contact attempt records and disposition codes can be found in Tenders, Bids, and Contracts and Data Processing and Statistical Adjustment; examples of contact attempt records can be found in Data Collection).
      • Recording time and meeting production goals. Teach interviewers how to record the time they spend on each defined aspect of their work for the study, both for their remuneration and to allow supervisors to monitor their progress and efficiency during data collection.
    • If legally and culturally permissible, teach interviewers noncoercive persuasion techniques and practice counter replies to common statements of reluctance.
      • Discuss optimal times and modes for contacting target persons.
      • Train interviewers to tailor their initial interactions with respondents by developing the following skills [25] [39]:
        • Learning the classes of concerns ("themes") that respondents might have.
        • Classifying the respondent's wording into the appropriate theme.
        • Addressing the concern, using their own words.
      • Employ hands-on practice exercises so that the trainees become proficient in quickly identifying respondent concerns and quickly responding to them.
    • For best overall results, employ a training format that combines lectures with visuals and small-group practice sessions.
      • Mixing the format keeps the trainees engaged and acknowledges that different people learn in different ways [21].
      • Through practice, trainees move from procedural knowledge (knowledge of how to perform a task) to skill acquisition (the ability to perform the task almost automatically) [39].
      • Although the class can be large for lecture sessions, trainees should break up into smaller groups for hands-on practice.
    • Be sensitive to the local culture.
      • Educate trainers in cultural sensitivity.
      • Take religious holidays into consideration when scheduling training sessions.
      • Make every effort to accommodate dietary restrictions when planning meals or snacks for the training.
      • Be aware that conventions regarding breaks vary among cultures.
    • At the end of basic interviewer training, evaluate the knowledge of the interviewer candidates. This can be done with a written test, a scripted certification interview conducted with a supervisor, an audio recording, or observation of the candidate conducting an actual interview.
    Lessons learned
    • If the interviewer candidates have access to the necessary equipment, some basic interview training material can be presented in the form of audio- or video-recordings for home study [10] [18]. Other training options include telephone and video conferencing and self-study using paper materials.
    • A forthcoming article [56] indicates that interviewer-related variance on survey items may be due to nonresponse error variance rather than measurement difficulties. That is, different interviewers may successfully contact and recruit respondents with different characteristics (e.g., age, race), even though their sample pools start out the same.
    • Interviewer training and the interviewer manual need to be adjusted to be culturally sensitive to the population under study:
      • Textbook instructions on handling reluctance to participate and to provide accurate information rely to a large extent on Western experiences. When possible, such procedures should be modified to include culturally acceptable and suitable tactics. Researchers conducting a women's health study on the Apsáalooke Native American reservation in southeastern Montana, U.S.A., felt that standard Western tactics for handling reluctance would be offensive in that culture; they therefore did not attempt to persuade reluctant respondents to participate. In addition, to be more consonant with Apsáalooke culture, interviewers were encouraged to display a compassionate attitude and interest in the women (rather than the standard recommended neutral voice tone and lack of responsiveness to respondent answers), to minimize eye contact, and to accept offers of food and drink [13].
      • In majority countries, a Western trainer may be respected but resented. Researchers in Puerto Rico found it helpful to allow interviewer trainees to provide input about the local culture and to supplement trainer criticism with peer criticism [50].
      • The World Mental Health study added country-specific topics to its general interviewer training sessions. In New Zealand, training included cultural empathy toward Maori and Pacific Islander households; in Colombia, it included special training on interacting with governmental authorities and armed guerrilla and paramilitary groups [44].
  5. Provide study-specific training for all interviewers and supervisors.

    Interviewers and supervisors need to be very familiar with the study's protocols and questionnaire in order to carry out their tasks. Depending upon the survey, they may need to learn the instrument's branching structure, the study's requirement for field coding, or the use of a respondent booklet, show cards, or other visual materials. There may be special instructions for implementing all or part of the survey that deviate from the standardized interviewing covered in general interviewer training. Interviewers should also be knowledgeable about the project objectives so that their actions help, not hinder, the overall goals. Both newly hired and experienced interviewers require training specific to the study at hand.

    Procedural steps
    • Allow sufficient time for study-specific training, depending upon the complexity of the study (see Appendix C for a sample training agenda).
    • When possible, have the same team from the coordinating center train all interviewers to ensure standardization of study-specific protocols [53] (see Study, Organizational, and Operational Structure for more on the coordinating center). The team may provide regional trainings, traveling to where interviewers are located.
    • Select appropriate trainers. These may include research staff, project managers and people on their staffs, supervisors who directly oversee data collection staff, experienced interviewers, and consultant(s) hired to assist with interviewer training.
    • Include a large amount of practice and role playing using the questionnaire [53].
      • Consider having the interviewers complete a self-interview to become familiar with the survey instrument.
      • Hands-on training may include round-robin practice sessions (i.e., scripted practice sessions where interviewers take turns administering survey questions to the trainer in a group setting), mock one-on-one interviews (i.e., sessions where interviewers interview each other), listening to and discussing taped interviews, and live practice with potential respondents.
      • For role playing to be effective, prepare different scripts in advance so that the different branching structures of the interview, the nature of explanations that are permitted, and anticipated problems can be illustrated.
      • Consider making a video to illustrate the correct administration of physical measures, if applicable. This ensures that the material is consistently taught, especially if training is conducted at multiple times or in various locations.
    • Provide interviewers with an Interviewer Project Manual/Study Guide that has been prepared by the coordinating center, with input from local collaborators. The manual is an important part of training and will serve as reference material while the survey is underway [22].
      • Complete and review the manual before training begins [22].
      • When appropriate, translate the manual into the languages used in the geographical areas encompassed by the study.
      • Include the following content in both the training agenda and the project manual:
        • General information about the project (e.g., the study's background and goals, funding sources, and principal investigators).
        • How to introduce the survey to respondents.
        • Eligibility and respondent selection procedures, if applicable. Sampling and coverage errors can occur if interviewers fail to correctly locate sample households, determine eligibility, or implement the respondent selection procedure [37].
        • Review of the survey instrument, highlighting the content of the various sections and the types of questions being asked.
        • Data entry procedures (paper and computer-assisted). Measurement error can occur if interviewers do not record responses in the appropriate manner.
        • Computer hardware and software usage, if appropriate (e.g., use of the laptop computer, email, and any other software packages).
        • Use of the sample management system.
        • Review of interview procedures and materials (e.g., informed consent materials and respondent incentive payments).
        • Review of study-specific probing conventions (e.g., when to probe a "don't know" response and an open-ended response).
        • Techniques for handling reluctance that are specific to the study (e.g., recommended responses to frequently asked questions) and are approved in advance by an ethics review board (see Ethical Considerations in Surveys). Nonresponse bias can occur if interviewers are unable to persuade reluctant persons to participate in the survey.
        • Nonstandardized interviewing, if appropriate for the study (e.g., event history calendars, time diaries, or conversational interviewing) [5] [6] [14] [24] [51]. (See Data Collection for a discussion about combining qualitative and quantitative data collection methods.)
        • Any observational data which interviewers will be required to enter (e.g., observations of the respondent or the neighborhood).
        • Any specialized training for the study (e.g., procedures for taking physical measurements, instruction on interviewing minors or interviewing on sensitive topics, proxy interview protocol, interviewing in unsafe neighborhoods, and protocol for handling respondent or interviewer distress).
        • Procedures to be used for unusual cases, including general principles to be applied in dealing with unforeseen problems (e.g., how to report abuse of children or others that is observed while conducting an interview in the respondent's home).
        • Production goals and maintaining productivity.
        • Proper handling of equipment and survey materials.
        • The structure of the survey team and the role of all members of the team.
        • Procedures for editing and transmitting data. Processing error can occur if interviewers do not correctly edit and transmit the completed questionnaire (see Data Processing and Statistical Adjustment for other potential sources of processing error).
        • Any other required administrative tasks.
      • The Project Manual/Study Guide must be especially clear and self-contained if it is impossible to train interviewers in person (e.g., if interviewers must be trained via conference call or video).
    • Collect and analyze written evaluative feedback (i.e., provide the opportunity for trainees to give written feedback on trainer performance, the sufficiency of time allocated to different topics, and the adequacy of practice exercises).
    • Certify the interviewers. (See Appendix D for a sample interviewer certification form.) Certification for study-specific tasks should include:
      • A complete role-play interview with a supervisor.
      • Certification by an appropriate trainer of any physical measurements that are included in the study (see Appendix E for a sample certification checklist for taking physical measurements).
      • Language certification, as appropriate (see Translation).
    • Supplement the initial training with periodic in-person seminars, telephone conference calls, and periodic bulletins or newsletters [44].
    • If data collection will extend for a long period of time, hold a brief refresher training course towards the middle of the data collection period [40].
      • This refresher training session is an opportunity to review various aspects of data collection, focusing on difficult procedures or on protocols that are not being adhered to sufficiently by interviewers.
      • The session can also be used to provide feedback on what has been achieved to date.
      • Require even experienced interviewers to attend refresher training sessions, including sessions on standardized techniques.
    Lessons learned
    • Most of the time it is not feasible for the same team to train all interviewers, particularly in very large cross-cultural studies. If this is the case, other steps must be taken to ensure the standardization of study-specific protocols:
      • One approach is the "train-the-trainer" model.
        • Training is generally done in one common language.
        • Each country or cultural group sends one or more individuals, who can understand and work in the language of the trainers, to the central training.
        • These representatives return to their own country or cultural group, adapt and translate the training materials as needed, and train the interviewers.
        • This model allows for tailoring at the country or cultural group level.
        • The Survey of Health, Ageing, and Retirement in Europe (SHARE) train-the-trainer (TTT) program is one example of this approach [1] [2] [9] [60]. The University of Michigan's Survey Research Center, under contract to SHARE, created the TTT program. Each participating country sent a Country Team Leader, a member of his or her staff, and 2-3 trainers to the TTT sessions. Once the trainers had completed the TTT program, they used the training materials provided, translated if necessary, to conduct country-level interviewer training. (See Appendix C for the SHARE Model Training Agenda.)
        • The World Mental Health Survey provides two train-the-trainer sessions for interviewer supervisors, lasting, on average, six days. Interviewer supervisors, in turn, train the interviewers in general interviewing techniques (on average 20 hours) and in CIDI-specific procedures (on average 30 hours). Before progressing to CIDI-specific training, interviewers must demonstrate competence in general interviewing techniques, in the form of role playing, tests, and/or supervised respondent interaction. All interviewers must be tested and certified before they are authorized for production work [31].
      • Another approach is the training center model [45].
        • A centralized training course is held, but language "regions" are represented rather than countries.
        • This model is effective when it is not possible for every country to send trainers who are functional in the central trainer's language.
        • The training center model was used in the World Health Organization's Composite International Diagnostic Interview training sessions. For example, trainers from Lebanon were trained in the United States and subsequently trained the trainers in Lebanon, Oman, Jordan, Palestine, Saudi Arabia, and Iraq.
      • Organizing training in steps (first training the trainers and then having them train the interviewers) increases the overall time needed for training and should be factored into the project timeline.
      • All step-wise training results in a certain loss or distortion of information as it is passed along. Trainers should be aware of this and take precautions, such as providing approved standardized training materials.
    • If interviewers are being hired for one study only, basic interviewer training techniques can be incorporated into study-specific training.
    • The amount of time devoted to training varies among large established cross-cultural surveys. Glewwe [22] recommends up to a month of intense interviewer training (general and study-specific) for inexperienced interviewers in a face-to-face survey. Field team members for the Asian Barometer received intensive, week-long training sessions on the questionnaire, sampling methods, and the cultural and ethical context of the interview [58]. The Living Standards Measurement Study (LSMS) recommends that training take place over a four-week period and include an introduction to the LSMS survey, general survey procedures, the questionnaire, sampling procedures, and data entry program error reports, with at least two observed training interviews [36]. The Survey of Health, Ageing, and Retirement in Europe (SHARE) requires 16-18 hours of training spread over 2-3 days in addition to the basic interviewer techniques training for new interviewers [1] [2]. Similarly, the World Health Survey (WHS) recommends three full days of study-specific training [53].
    • Round 4 of the Afrobarometer Survey held a six-day training workshop for all persons involved with the project, including interviewers and field supervisors. The Afrobarometer protocol requires holding a single national training workshop at one central location. Interviewers must complete at least six practice interviews before they leave for the field: at least one mock interview in the national language, at least one mock interview in each of the local languages they will use in the field, and at least four training interviews in a field situation [57].
    • In addition to general interview training, all interviewers for round 5 of the European Social Survey were briefed by the National Coordinator or a research team member regarding respondent selection procedures, registration of the calling process, response rate enhancement, coding of observation data, documentation, and questionnaire content. Practice interviews were suggested [59].
    • If the topic is extremely sensitive, additional specialized training may improve response rates and data quality. The WHO Multi-Country Study on Women's Health and Domestic Violence, fielded in multiple culturally diverse countries, found that previously inexperienced interviewers who had received specialized training obtained a significantly higher response rate and significantly higher disclosure rate of incidences of domestic violence than did experienced interviewers who had not received the additional training [29].
    • Training interviewers in adaptive behavior, such as tailoring responses to respondent concerns or nonstandardized conversational interviewing, can be time-consuming and could increase training costs [39].
    • Field interviewers often work some distance away from their trainers and supervisors. Before sending the interviewers to their assigned areas, some organizations have found it useful to have them conduct a few interviews close to the training locale. Afterward, they meet with the trainer, discuss their experiences, and check their questionnaires. Any problems or misunderstandings can be identified and rectified more easily than if they had occurred in a more remote area.
    • During pretesting for the Tamang Family Research Project, investigators trained interviewers in a Nepalese village that was not in the sample. The investigators and interviewers lived together during this period and throughout data collection. This allowed for the continuous assessment of interviewers who were let go if they were not completing quality work [4].
  6. Institute and follow appropriate quality control measures.
    • Quality control (QC) is a procedure or set of procedures intended to ensure that a product or service adheres to a defined set of quality criteria or meets the requirements of the study (see Survey Quality). The implementation of quality control measures enhances the accuracy, reliability, and validity of the survey data and maximizes comparability of these data across cultures. To implement an effective QC program in a cross-cultural survey context, the coordinating center must first decide which specific standards must be met. Then real-world data must be collected and the results reported back to the coordinating center. After this, corrective action must be decided upon and taken as quickly as possible. Finally, the QC process must be ongoing to ensure that remedial efforts, if required, have produced satisfactory results.
    Procedural steps
    • Track the cost and success rates of different recruitment avenues to determine which are the most fruitful and cost effective; use this information to guide the future allocation of resources.
    • Considering the factors enumerated in Guideline 3, establish a checklist of minimum interviewer candidate requirements (e.g., interviewing skills, reading/writing fluency, language skills, educational level, and computer skills).
      • Require recruiters to complete the checklist as they screen each interviewer candidate. If specific assessment tests are used (e.g., to evaluate language skills), record each candidate's performance on the test.
      • Accept only those candidates who meet the predetermined minimum requirements.
      • To ensure accountability, require the recruiter to sign or initial checklists and assessment tests.
    • Survey interviewer candidates to determine what improvements could be made to the recruitment process; use this information to modify the procedure, if possible (for example, ask how the candidate heard about the position).
    • Take attendance at general interviewing techniques and study-specific training sessions.
      • Dismiss candidates who fail to attend a predetermined minimum number of training sessions, or make arrangements to train them individually on the missed material.
      • Keep a signed written record of the training completed by each candidate.
    • At the end of basic interviewer training, evaluate the knowledge of the interviewer candidates, as described in Guideline 4.
      • Require all trainers to use the same evaluation criteria.
      • Dismiss or retrain those candidates who fail to attain predetermined minimum standards.
      • Keep a signed written record of each candidate's performance on the evaluation measures.
    • At the end of study-specific training, certify the interviewer candidates, as described in Guideline 5.
      • Require all trainers to use the same evaluation criteria for certification.
      • Dismiss or retrain those candidates who fail to attain predetermined minimum standards.
      • Keep a signed written record of each candidate's performance on the certification tests.
    • Debrief interviewer trainees to determine how training could be improved; use this information to modify the training protocol, if possible.
    Lessons learned
    • Including quality control protocols as part of the overall survey design, and implementing them from the start, permits the survey organization and the coordinating center to monitor performance and to take immediate corrective action when required. For example, if many interviewer candidates fail to pass the study-specific certification test, additional training could be provided. Afterward, the candidates would be tested again. Those passing the certification test could then be sent out into the field.
  7. Document interviewer recruitment and training.

    Comprehensive documentation helps analysts correctly interpret the data and assess data quality; it also serves as a resource for later studies.

    Procedural steps
    • Document the recruitment effort for enrolling data collection staff on the project, including:
      • Any special criteria used in reviewing data collection staff employment applications (e.g., language proficiency and special knowledge and skills, such as taking physical/biological measurements).
      • The way in which language fluency was assessed, as appropriate for the study.
      • Recruitment scripts and sources used to recruit data collection staff, as well as an evaluation of the success of the recruitment strategies.
      • Interviewer characteristics (e.g., gender, age, race, education, length of tenure as interviewer).
      • Characteristics of the multilingual interviewing staff in terms of the percent certified to interview by language.
      • The minimum number of hours required, if applicable, and the average number of hours worked by an interviewer during the data collection period.
      • Interviewer pay structure (e.g., hourly or per completed interview), the pay range, and any bonus program (e.g., amount and when or under what circumstances these bonuses were offered).
    • Document the general and study-specific training, including:
      • Number of training sessions conducted.
      • Number of training hours, dates, and locations.
      • Number of trainers and trainees.
      • Background of the trainers, including expertise in training and in any substantive areas as applicable to the survey.
      • Copy of the training agenda(s) (i.e., list of topics covered).
      • All written materials that were used (e.g., the interviewer manual/study guide, trainer/facilitator guide and supplemental training materials).
      • Certification procedures (e.g., scripted certification interview with a supervisor or other staff, written or online test on general interviewing procedures, live practice interviewing with potential respondents).
    • Document any issues encountered (e.g., if the recruitment plan failed to produce a sufficient number of qualified interviewers or interviewer attrition was unexpectedly high, necessitating a second round of recruitment and training; the training agenda did not provide adequate time for hands-on practice; or the ratio of trainers to trainees was inadequate) and suggestions for future studies.
    • Document all direct measurements of data quality, all indicators of data quality obtained via quality control (QC), and any decisions made to change the protocol in order to maintain high levels of quality (see Survey Quality).
    Lessons learned
    • Documenting the recruitment effort, including method(s) of recruiting, number of candidates recruited, and number of candidates screened, as well as post-study documentation of interviewer retention, is also useful for other future projects. This information can guide future recruitment strategies and help estimate the number of recruits needed to provide a sufficient number of interviewers for data collection in similar studies.
    • Documentation of general and study-specific training can pinpoint areas needing improvement in future training efforts.

Appendix A

Interviewer design effect [24] [32] [40] [41]

Research indicates that the interviewer design effect may be even larger than the design effect attributable to geographic clustering [48]. This is especially true in some international studies where cultural and other factors contribute to large interviewer variances. Interviewer variance occurs when response errors of persons interviewed by the same interviewer are correlated; therefore interviewer variance is part of the correlated variance component of the total variance (other correlated variances stem from coders, editors, supervisors, and crew leaders).

The intraclass correlation coefficient, ρint, is the ratio of interviewer variance to the total variance and is defined as:

ρint = σ²b / (σ²b + σ²w)

where σ²b is the variance of responses between interviewers and σ²w is the variance within interviewers.

The value of ρint is theoretically always between 0 and 1, although calculated estimates of ρint may sometimes be negative. In this case, they are usually treated as zeros. When ρint for a particular variable is 0 or negative, we interpret this to mean that the interviewers have no effect on the variance of responses to that variable; the larger the value of ρint, the larger the effect of interviewers on the variance of the particular variable.

The interviewer design effect (deffint) is a measure of the effect of interviewers carrying out multiple interviews, compared to what would be obtained if there were a different interviewer for each respondent, all else being equal (if the addition of more interviewers increases costs such that supervision or training must be reduced to compensate, interviewer variance may actually increase):

deffint = 1 + (m − 1)ρint

where m is the average number of interviews per interviewer.

Thus, even a small interviewer variance (ρint) can have a significant effect on the variance of a survey estimate if m is large. The interviewer variance contribution is usually not included in textbook variance estimation formulas. Interviewer variance leads to a loss of sample information when the effective sample size neff, defined as n/deffint, is smaller than the actual sample size n.
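As a small numerical sketch (hypothetical values, not from any particular study), the interviewer design effect deffint = 1 + (m − 1)ρint and the effective sample size neff = n/deffint can be computed as follows:

```python
# Hypothetical illustration of the interviewer design effect.
rho_int = 0.02   # intraclass correlation attributable to interviewers
m = 50           # average number of interviews per interviewer
n = 1000         # actual sample size

deff_int = 1 + (m - 1) * rho_int   # interviewer design effect
n_eff = n / deff_int               # effective sample size

print(deff_int)       # 1.98: even rho_int = 0.02 nearly doubles the variance
print(round(n_eff))   # 505: roughly half the sample information is lost
```

The example illustrates the point made above: with large interviewer workloads (m = 50), even a very small ρint substantially inflates the variance of survey estimates.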

Standardized interviewing aims to reduce interviewer variance.

For specification of a mathematical model of response errors when interviewers are used, see [26]; for further discussion of interviewer variance see [7] and [23].

Appendix B

Estimating the number of interviewers needed for a study

The following example shows how to calculate the number of interviewers required for a hypothetical study. The example makes the following assumptions:

  1. Interviewers and respondents do not need to be matched on any attributes.
  2. The average number of hours worked per week is the same for all interviewers.
  3. The expected number of completed interviews is 500.
  4. The estimated Hours Per Interview (HPI) is 5.
  5. The projected data collection period is 5 weeks.
  6. Each interviewer is expected to work 15 hours per week (based on the optimal hours of work during the times the respondents are expected to be at home).

Make the following calculations:

  1. Total hours to complete the study = (500 interviews * 5 HPI) = 2500 hours.
  2. Total interviewer hours needed per week = (2500 total hours/5 weeks) = 500 hours per week.
  3. Number of interviewers needed = (500 hours per week/15 hours per interviewer per week) ≈ 33.3; round up to 34 interviewers so that the required hours are covered.

(To determine the optimum number of interviewers based on interviewer variance and cost, see [26]).
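The staffing arithmetic above can be sketched in a few lines of code (using the example's values; the result is rounded up to a whole number of interviewers):

```python
import math

# Staffing estimate for the hypothetical study in this appendix.
completes = 500        # expected number of completed interviews
hpi = 5                # estimated hours per interview (HPI)
weeks = 5              # projected data collection period, in weeks
hours_per_week = 15    # expected hours worked per interviewer per week

total_hours = completes * hpi                   # 2500 hours
hours_needed_per_week = total_hours / weeks     # 500 hours per week
interviewers = math.ceil(hours_needed_per_week / hours_per_week)

print(interviewers)   # 34 (500/15 ≈ 33.3, rounded up)
```

In practice, this minimum would be padded to allow for interviewer attrition and uneven productivity during the field period.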

Appendix C

Example of a training agenda [2]

The Survey of Health, Ageing and Retirement in Europe (SHARE) utilizes a model agenda for training interviewers in participating countries. While the content of this agenda is SHARE study specific, it might provide a useful basic template for other similar cross-national survey efforts. Organizations may add country-specific items to the model training agenda (e.g., tracing/locating steps that should be followed in their country and any relevant cultural considerations).

  • Note that SHARE is a longitudinal (i.e., panel) study. However, new countries join at each wave, or a refreshment sample is recruited; hence, SHARE provides training for both the panel study and the baseline study at most of its trainings.
  • The model training agenda assumes that interviewers have already received basic training in General Interviewing Techniques (GIT). However, since SHARE wants to make sure that certain specific GIT interviewing conventions are always implemented, SHARE spends part of the study-specific training reviewing those.
  • See the SHARE website for details about the study [60].
SHARE Model Training Agenda. Times are in minutes, shown as (Panel / Baseline); na = not applicable.
  • Introduction, Welcome, and Logistics (15 / 30): Set the stage for this intense training.
  • SHARE Project and Questionnaire Overview (45 / 45): Explain the goals of the project and the importance of the baseline and longitudinal samples.
  • Sample Overview (30 / 60): Understand how the sample was selected, sample eligibility, and response rate requirements.
  • GIT Requirements (60 / 60): Cover minimal GIT requirements, including when and how to contact the sample, probes, feedback, etc.
  • Overview of the Sample Management System (60 / 90): Learn how to operate the SHARE electronic sample management system, assign result codes, and enter call notes. Introduce noncontact mock scenarios and test results.
  • Longitudinal Sample Management System (30 / na): Introduce splitters, deceased respondents, newly eligible respondents, and additional result codes.
  • Proxy Interviews (30 / 45): Explain how to identify and interview proxy respondents.
  • Nursing Homes (30 / na): Explain how to contact respondents in nursing homes and how to work with gatekeepers and potential proxy respondents.
  • Overview of the Blaise Program (45 / 45): Explain Blaise program conventions, including different types of questions, question wording, data entry, interviewer instructions, etc.
  • SHARE Questionnaire Walk-Through (330 / 240): Describe SHARE modules. Conduct a scripted review of the questionnaire, including the spawning of additional lines. Address the main questions and issues that arise with different sections. Longitudinal: describe longitudinal differences, explain preloads, and address questions arising from reinterviews.
  • End-of-Life Interviews (EOL) (30 / na): Cover the concept of the EOL interview, approaching respondents, and administering the interview. Explain how to record these in the Sample Management System (SMS).
  • Drop-Off (45 / 45): Describe the drop-off questionnaire and the procedure for identifying and labeling it appropriately. Explain the procedure for administering the drop-off and how to record it in the SMS.
  • Physical Measurements "Certification" (30 / 60): Have each interviewer demonstrate the ability to conduct the physical measures.
  • Response Rates and Contact Efforts (45 / 90): Explain the importance of response rates and reiterate the required contact effort per line (longitudinal: review only). Longitudinal: cover panel care and effort requirements, including tracking effort.
  • Gaining Respondent Cooperation (90 / 90): Review eight concerns that interviewers are likely to encounter. Practice quick answers to several concerns. Note that the longitudinal sample is more likely to present different types of resistance.
  • Practicing Household Introductions (Optional / 90): Have interviewers team up in groups of 10 or so and each take a turn introducing the study.
  • Pair-wise Questionnaire Walk-Through (90 / 130): An opportunity for interviewers to go through the questionnaire with a fellow interviewer, using an abbreviated script. Switch at the halfway mark and complete the interview.
  • Pair-wise EOL Interview (45 / na): Practice administering the End-of-Life interview.
  • Administrative Wrap-Up (30 / 30): Answer outstanding questions.
  • Total Time Training for the Panel Model: 1080 minutes (18 hours, 0 minutes)
  • Total Time Training for the Baseline Model: 1120 minutes (18 hours, 40 minutes)

Appendix D

Example of an interview certification form

  • The aim of certification is to assess the interviewer's conduct of the interview, including introducing the study, doing the interview itself, using all appropriate respondent materials and interviewer aids, closing out the interview, and recording all required information in the sample management system.
  • Specific studies should modify items in this form, as needed, to ensure that all key elements are measured in the certification.
    • For example, the template below assumes that an electronic sample management system (SMS) is used; if a paper coversheet is used to manage the sample, one should develop items appropriate for that system.
    • Similarly, the template assumes that the interview is programmed in Blaise; if data is collected via paper and pencil, one should check the interviewer's comfort in following routing instructions, choosing appropriate fills, etc.
  • Additional items may be included on the form and scoring may be changed to suit the situation. Some potential additions might include (a) professionalism (e.g., pace, tone, and emphasis of speech), (b) establishing rapport with the respondent, (c) introducing the study to the respondent, and (d) the administration of specific areas in the instrument, such as cognitive tests or mental health questions.
Certifier Notes for the Individual Certifications
Interviewer:   Certifier:
Time:   Location:
CERTIFIER INSTRUCTIONS: Score each item 0, 1, or 2. 0 = Inadequate performance; 1 = Needs Improvement; 2 = Met Expectations. Use the Errors column to tally the number of times the interviewer makes general interviewing technique (GIT) errors in reading, probing, feedback, or clarification. Note question numbers of errors when possible.
Interviewing Skill Score Errors Comments
On time and prepared for certification     Sample Management System running and ready to interview; for face-to-face interviews, have respondent materials ready, including copy of letter and brochure.
Correctly completing household listing/enumeration and screener     Make sure that the interviewer has completed the household listing/enumeration correctly; if not, tell him/her how to correct and proceed. The interviewer will have to recertify on the screener portion if this happens.
Use of GIT probes and clarification     Should use standard GIT protocol as indicated; 1 - 2 errors - score 1; 3+ errors - score 0.
Use of neutral feedback     Interviewer should provide feedback for at least 30% of responses. Non-standard feedback counts as an error.
Verbatim question reading     Include pronunciation and emphasis in evaluation; 1-3 errors - score 1; 4+ errors - score 0.
Data Entry     General comfort with navigating in Blaise.
Post-interview process & contact person information     Interviewer should confirm all contact information for respondent and enter information for required number of contact persons.
Contact attempt record     Interviewer should enter a final contact attempt note which you will check before scoring. If 1 or 2 items are missing - score 1. If more than 2 items are missing - score 0.
Total Score 0    
Total possible = 16. Certified = 12 or higher. Re-Certify = 10-11. Administrative review will be required if score is less than 10.
GENERAL COMMENTS: Provide specific examples and question numbers of problem areas when possible. Note the way in which the interviewer administered the informed consent and read the script to explain the need for obtaining information for contact persons.

Debriefing with Interviewer by [NAME]: Date:
Notes: Include summary of recertification plan and retraining or practice interviews needed. Make note of areas that need close review on taped interviews.
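The scoring rules on this form (eight interviewing skills scored 0-2 each, for a total possible of 16, with thresholds at 12 and 10) can be sketched as a small helper. This is only an illustration of the arithmetic; the function name and interface are hypothetical, not part of any study's actual system:

```python
def certification_outcome(item_scores):
    """Classify a certification attempt from eight per-item scores (0, 1, or 2).

    Thresholds follow the example form: 12 or higher = Certified,
    10-11 = Re-Certify, below 10 = Administrative Review.
    """
    if len(item_scores) != 8 or any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("expected eight items, each scored 0, 1, or 2")
    total = sum(item_scores)  # maximum possible is 16
    if total >= 12:
        outcome = "Certified"
    elif total >= 10:
        outcome = "Re-Certify"
    else:
        outcome = "Administrative Review"
    return total, outcome
```

For example, per-item scores of [2, 2, 1, 2, 2, 1, 2, 1] sum to 13 and meet the certification threshold.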

Appendix E

Example of a certification checklist for physical measurements

  • Interviewers should act as though this is a real interview. It is recommended that the person performing the certification ("certifier") observe a pair of interviewers where one acts as the respondent and the other is the interviewer being evaluated. If the "interviewer" asks questions during the certification, such as "should I ask/do this...", neither the certifier nor the "respondent" should respond.
  • Make sure that the list of supplies is first checked off (interviewer has all materials ready or not).
  • Observe that the interviewer reads all instructions and explanations to the respondent and enters the values correctly.
  • If the interviewer performs a given activity correctly, make a "check" in the column labeled "Correct" for that activity; if the interviewer does not perform the activity correctly, circle the number in the column labeled "Incorrect" for that activity.
    • The numbers in the "Incorrect" column indicate the importance of the activity as defined by the researcher. Individual researchers can establish the relative "weight" of the error score, as necessary.
  • For each physical measurement total the circled numbers and enter the sum in the row labeled "Total Incorrect;" also enter the total on the "Total Incorrect" line below the table. Assess whether or not the interviewer has passed the section. To be certified, the interviewer must successfully pass all sections.
    • Say, for example, the interviewer failed to correctly perform the Blood Pressure activities "arm on table" and "use correct cuff size." The certifier would circle the Incorrect scores of 2 and 4 respectively, for a Total Incorrect score of 6. Since the "Max incorrect to pass" is 3, the interviewer would not pass this section and would need to be retrained and recertified.
  • At the end, be prepared to provide feedback regarding the certification items and whether the interviewer passed certification. Make a decision about whether to permit recertification (for all measures or for only those that the interviewer did not pass) and be sure to let some time pass before attempting a recertification.
  • Retain final, signed records as documentation of this certification. Some large cross-national studies may require some form of documentation on interviewer certification levels across member countries.
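The weighted-error scoring described above can be sketched as follows. The function name is illustrative, and the activity names and weights shown are taken from the blood pressure worked example in the text; in practice they would come from the actual checklist:

```python
def section_result(incorrect_activities, weights, max_incorrect):
    """Score one physical-measurement section of the certification checklist.

    `incorrect_activities` is the set of activities performed incorrectly,
    `weights` maps each activity to its researcher-defined error weight, and
    the interviewer passes only if the summed weights of the incorrect
    activities do not exceed `max_incorrect`.
    """
    total_incorrect = sum(weights[a] for a in incorrect_activities)
    return total_incorrect, total_incorrect <= max_incorrect

# Worked example from the text: blood pressure errors on "arm on table"
# (weight 2) and "use correct cuff size" (weight 4); max incorrect to pass = 3.
bp_weights = {"arm on table": 2, "use correct cuff size": 4}
total, passed = section_result({"arm on table", "use correct cuff size"},
                               bp_weights, max_incorrect=3)
# total is 6 and passed is False, so the interviewer must be retrained
# and recertified on this section.
```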


Interviewer's Name: ______________________________
Certifier's Name: _________________________________
Date of Certification: ____________________________

Blood Pressure

Activity Correct Incorrect
Feet flat on floor/legs uncrossed   2
No smoking   2
Loose clothing/ no more than one layer   4
Arm on the table (or supported) at heart level   2
Use of correct cuff size   4
Tube of cuff hanging at inner crease of arm   4
Start at 180 SBP (Systolic Blood Pressure)   2
Re-inflation no sooner than 30-45 sec.   4
Re-inflation to first SBP + 20   1
Total Incorrect:    

Total incorrect Blood Pressure: ___________

Max incorrect to pass: 3 (4 or more needs recertification)

Height

Activity Correct Incorrect
Shoes off   4
Heels to wall   2
Place sticky properly on wall   2
Orange triangular ruler on top of head, parallel to floor (fat edge against the wall)   4
Place metal tape measure properly and straight for accuracy in measuring height   4
Remove sticky from wall when done   1
For leg length, ask respondent to locate bony prominence and hold metal tape in place there; keep tape straight   4
Total Incorrect:    

Total incorrect (Height): ___________

Max incorrect to pass: 3 (4 or more needs recertification)


Weight

Activity Correct Incorrect
Place scale on firm floor   4
Shoes off   4
Remove bulky clothes   2
Tap red label on scale; wait for "000.0"   4
Total Incorrect:    

Total incorrect (Weight): ___________

Max incorrect to pass: 2 (3 or more needs recertification)


Waist

Activity Correct Incorrect
Ask respondent to identify umbilicus (navel) and hold cloth tape in place there   3
One layer of clothing   3
Tape snug but not tight   1
Check that tape is horizontal all around the respondent   3
Ask respondent to take normal breath and exhale, holding breath at end of exhalation   1
Record to nearest centimeter   3
Total Incorrect:    

Total incorrect (Waist): ___________

Max incorrect to pass: 2 (3 or more needs recertification)


Hip

Activity Correct Incorrect
Take measurement at respondent's side   2
Place cloth tape at level of maximal protrusion of gluteal muscles   1
Tape snug but not tight   3
Check that it is horizontal all around the respondent   3
Move tape up and down to make sure measurement is taken at greatest diameter   1
Ask the respondent to take normal breath and exhale, holding breath at end of exhalation   1
Record to nearest centimeter   3
Total Incorrect:    

Total incorrect (Hip): ___________

Max incorrect to pass: 2 (3 or more needs recertification)


Accuracy
The degree of closeness an estimate has to the true value.
Adaptation
Changing existing materials (e.g., management plans, contracts, training manuals, questionnaires, etc.) by deliberately altering some content or design component to make the resulting materials more suitable for another socio-cultural context or a particular population.
Adaptive behavior
Interviewer behavior that is tailored to the actual situation encountered.
Auxiliary data
Data from an external source, such as census data, that is incorporated or linked in some way to the data collected by the study. Auxiliary data is sometimes used to supplement collected data, for creating weights, or in imputation techniques.
Bias
The systematic difference over all conceptual trials between the expected value of the survey estimate of a population parameter and the true value of that parameter in the target population.
Bridge language
A language, common to both interviewers and respondents, that is used for data collection but may not be the first language of either person.
Certification
Objective assessment of performance. Based on pre-established criteria, the interviewer either meets the requirements and may proceed to conduct the study interview or does not meet the requirements and may either be permitted to try again or be dismissed from the study. Certification outcome should be documented and filed at the data collection agency.
Closed-ended question
A survey question format that provides a limited set of predefined answer categories from which respondents must choose.
Example: Do you smoke?
Yes ___
No ___
Cluster
A grouping of units on the sampling frame that is similar on one or more variables, typically geographic. For example, an interviewer for an in-person study will typically visit only households in a certain geographic area. The geographic area is the cluster.
Coding
Translating nonnumeric data into numeric fields.
Comparability
The extent to which differences between survey statistics from different countries, regions, cultures, domains, time periods, etc., can be attributable to differences in population true values.
Computer assisted personal interviewing (CAPI)
A face-to-face interviewing mode in which a computer displays the questions onscreen, the interviewer reads them to the respondent, and enters the respondent's answers directly into the computer.
Confidentiality
Securing the identity of, as well as any information provided by, the respondent, in order to ensure that public identification of an individual participating in the study and/or his or her individual responses does not occur.
Consent
A process by which a sample member voluntarily confirms his or her willingness to participate in a study, after having been informed of all aspects of the study that are relevant to the decision to participate. Informed consent can be obtained with a written consent form or orally (or implied if the respondent returns a mail survey), depending on the study protocol. In some cases, consent must be given by someone other than the respondent (e.g., an adult when interviewing children).
Contact attempt record
A written record of the time and outcome of each contact attempt to a sample unit.
Contact rate
The proportion of all elements in which some responsible member of the housing unit was reached by the survey.
Contract
A legally binding exchange of promises or an agreement creating and defining the obligations between two or more parties (for example, a survey organization and the coordinating center), written and enforceable by law.
Conversational interviewing
Interviewing style in which interviewers read questions as they are worded but are allowed to use their own words to clarify the meaning of the questions.
Coordinating center
A research center that facilitates and organizes cross-cultural or multi-site research activities.
Coverage error
Survey error (variance and bias) that is introduced when there is not a one-to-one correspondence between frame and target population units. Some units in the target population are not included on the sampling frame (undercoverage), some units on the sampling frame are not members of the target population (out-of-scope), more than one unit on the sampling frame corresponds to the same target population unit (overcoverage), and one sampling frame unit corresponds to more than one target population unit.
Coversheet
Electronic or printed materials associated with each element that identify information about the element, e.g., the sample address, the unique identification number associated with an element, and the interviewer to whom an element is assigned. The coversheet often also contains an introduction to the study, instructions on how to screen sample members and randomly select the respondent, and space to record the date, time, outcome, and notes for every contact attempt.
Disclosure analysis and avoidance
The process of identifying and protecting the confidentiality of data. It involves limiting the amount of detailed information disseminated and/or masking data via noise addition, data swapping, generation of simulated or synthetic data, etc. For any proposed release of tabulations or microdata, the level of risk of disclosure should be evaluated.
Disposition code
A code that indicates the result of a specific contact attempt or the outcome assigned to a sample element at the end of data collection (e.g., noncontact, refusal, ineligible, complete interview).
Editing
Altering data recorded by the interviewer or respondent to improve the quality of the data (e.g., checking consistency, correcting mistakes, following up on suspicious values, deleting duplicates, etc.). Sometimes this term also includes coding and imputation, the placement of a number into a field where data were missing.
Ethics review committee or human subjects review board
A group or committee that is given the responsibility by an institution to review that institution's research projects involving human subjects. The primary purpose of the review is to assure the protection of the safety, rights and welfare of the human subjects.
Fitness for intended use
The degree to which products conform to essential requirements and meet the needs of users for which they are intended. In literature on quality, this is also known as "fitness for use" and "fitness for purpose."
Hours per interview (HPI)
A measure of study efficiency, calculated as the total number of interviewer hours spent during production (including travel, reluctance handling, listing, completing an interview, and other administrative tasks) divided by the total number of interviews.
Imputation
A computation method that, using some protocol, assigns one or more replacement answers for each missing, incomplete, or implausible data item.
Interpenetrated sample assignment, interpenetration
Randomized assignment of interviewers to subsamples of respondents in order to measure correlated response variance, arising from the fact that response errors of persons interviewed by the same interviewer may be correlated. Interpenetration allows researchers to disentangle the effects interviewers have on respondents from the true differences between respondents.
Interviewer design effect (Deffint)
The extent to which interviewer variance increases the variance of the sample mean of a simple random sample.
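A common approximation, following Kish (1962) [32] and assuming roughly equal interviewer workloads, is:

```latex
\mathit{deff}_{\mathrm{int}} = 1 + \rho_{\mathrm{int}}\,(\bar{m} - 1)
```

where $\rho_{\mathrm{int}}$ is the intra-interviewer correlation of responses and $\bar{m}$ is the average number of interviews completed per interviewer.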
Interviewer effect
Measurement error, both systematic and variable, for which interviewers are responsible.
Interviewer variance
That component of overall variability in survey estimates that can be accounted for by the interviewers.
Item nonresponse, item missing data
The absence of information on individual data items for a sample element where other data items were successfully obtained.
Listing
A procedure used in area probability sample designs to create a complete list of all elements or clusters of elements within a specific set of geographic boundaries.
Longitudinal study
A study where elements are repeatedly measured over time.
Majority country
A country with low per capita income (the majority of countries).
Mean Square Error (MSE)
The total error of a survey estimate; specifically, the sum of the variance and the bias squared.
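In symbols, for a survey estimate $\hat{\theta}$ of a parameter $\theta$:

```latex
\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \bigl[\mathrm{Bias}(\hat{\theta})\bigr]^{2}
```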
Measurement error
Survey error (variance and bias) due to the measurement process; that is, error introduced by the survey instrument, the interviewer, or the respondent.
Microdata
Nonaggregated data that concern individual records for sampled units, such as households, respondents, organizations, administrators, schools, classrooms, students, etc. Microdata may come from auxiliary sources (e.g., census or geographical data) as well as surveys. They are contrasted with macrodata, such as variable means and frequencies, gained through the aggregation of microdata.
Mode
Method of data collection.
Noncontacts
Sampling units that were potentially eligible but could not be reached.
Nonresponse bias
The systematic difference between the expected value (over all conceptual trials) of a statistic and the target population value due to differences between respondents and nonrespondents on that statistic of interest.
Nonresponse error
Survey error (variance and bias) that is introduced when not all sample members participate in the survey (unit nonresponse) or not all survey items are answered (item nonresponse) by a sample element.
Open-ended question
A survey question that allows respondents to formulate the answer in their own words. Unlike a closed question format, it does not provide a limited set of predefined answers.
Example: What is your occupation?
Please write in the name or title of your occupation___________
Post-survey adjustments
Adjustments to reduce the impact of error on estimates.
Prescribed behaviors
Interviewer behaviors that must be carried out exactly as specified.
Primary Sampling Unit (PSU)
A cluster of elements sampled at the first stage of selection.
Processing error
Survey error (variance and bias) that arises during the steps between collecting information from the respondent and producing the values used in estimation. Processing errors include all post-collection operations, as well as the printing of questionnaires. Most processing errors occur in data for individual units, although errors can also be introduced in the implementation of systems and estimates. In survey data, processing errors may include errors of transcription, errors of coding, errors of data entry, errors in the assignment of weights, errors in disclosure avoidance, and errors of arithmetic in tabulation.
Proxy interview
An interview with someone (e.g., parent, spouse) other than the person about whom information is being sought. There should be a set of rules specific to each survey that define who can serve as a proxy respondent.
Quality
The degree to which product characteristics conform to requirements as agreed upon by producers and clients.
Quality assurance
A planned system of procedures, performance checks, quality audits, and corrective actions to ensure that the products produced throughout the survey lifecycle are of the highest achievable quality. Quality assurance planning involves identification of key indicators of quality used in quality assurance.
Quality audit
The process of the systematic examination of the quality system of an organization by an internal or external quality auditor or team. It assesses whether the quality management plan has clearly outlined quality assurance, quality control, corrective actions to be taken, etc., and whether they have been effectively carried out.
Quality control
A planned system of process monitoring, verification, and analysis of indicators of quality, and updates to quality assurance procedures, to ensure that quality assurance works.
Quality management plan
A document that describes the quality system an organization will use, including quality assurance and quality control techniques and procedures, and requirements for documenting the results of those procedures, corrective actions taken, and process improvements made.
Reliability
The consistency of a measurement, or the degree to which an instrument measures the same way each time it is used under the same condition with the same subjects.
Response rate
The number of complete interviews with reporting units divided by the number of eligible reporting units in the sample.
Sample element
A selected unit of the target population that may be eligible or ineligible.
Sample management system
A computerized and/or paper-based system used to assign and monitor sample units and record documentation for sample records (e.g., time and outcome of each contact attempt).
Sampling error
Survey error (variance and bias) due to observing a sample of the population rather than the entire population.
Sampling frame
A list or group of materials used to identify all elements (e.g., persons, households, establishments) of a survey population from which the sample will be selected. This list or group of materials can include maps of areas in which the elements can be found, lists of members of a professional association, and registries of addresses or persons.
Sampling units
Elements or clusters of elements considered for selection in some stage of sampling. For a sample with only one stage of selection, the sampling units are the same as the elements. In multi-stage samples (e.g., enumeration areas, then households within selected enumeration areas, and finally adults within selected households), different sampling units exist, while only the last is an element. The term primary sampling units (PSUs) refers to the sampling units chosen in the first stage of selection. The term secondary sampling units (SSUs) refers to sampling units within the PSUs that are chosen in the second stage of selection.
Secondary Sampling Unit (SSU)
A cluster of elements sampled at the second stage of selection.
Simple random sampling (SRS)
A procedure where a sample of size n is drawn from a population of size N in such a way that every possible sample of size n has the same probability of being selected.
Standardized interviewing technique
An interviewing technique in which interviewers are trained to read every question exactly as worded, abstain from interpreting questions or responses, and do not offer much clarification.
Survey lifecycle
The lifecycle of a survey research study, from design to data dissemination.
Survey population
The actual population from which the survey data are collected, given the restrictions from data collection operations.
Tailoring
The practice of adapting interviewer behavior to the respondent's expressed concerns and other cues, in order to provide feedback to the respondent that addresses his or her perceived reasons for not wanting to participate.
Target population
The finite population for which the survey sponsor wants to make inferences using the sample statistics.
Total Survey Error (TSE)
Total survey error provides a conceptual framework for evaluating survey quality. It defines quality as the estimation and reduction of the mean square error (MSE) of statistics of interest.
Unique Identification Number
A unique number that identifies an element (e.g., a serial number). The number remains with the element throughout the survey lifecycle and is published with the public dataset. It does not itself encode any information about the element.
Unit nonresponse
An eligible sampling unit that has little or no information because the unit did not participate in the survey.
Validity
The extent to which a variable measures what it intends to measure.
Variance
A measure of how much a statistic varies around its mean over all conceptual trials.
Weighting
A post-survey adjustment that may account for differential coverage, sampling, and/or nonresponse processes.


[1] Alcser, K. H., & Benson, G. D. (2005). The SHARE train-the-trainer program. Ch. 6 in A. Boersch-Supan & H. Juerges (Eds.), The survey of health, ageing and retirement in Europe - Methodology. Mannheim, Germany: Mannheim Research Institute for the Economics of Aging.

[2] Alcser, K. H., & Benson, G. D. (2008). Training for SHARE Wave 2. Ch. 8.3 in A. Boersch-Supan, A. Brugiavini, H. Juerges, et al. (Eds.), Health, ageing and retirement in Europe (2004-2007). Mannheim, Germany: Mannheim Research Institute for the Economics of Aging.

[3] Axinn, W. G. (1989). Interviewers and data quality in a less developed setting. Journal of Official Statistics, 5(3), 265-280.

[4] Axinn, W.G., Fricke, T.E., & Thornton, A. (1991). The micro-demographic community study approach: Improving survey data by integrating the ethnographic method. Sociological Methods Research, 20(2), 187-217.

[5] Beatty, P. (1995). Understanding the standardized/non-standardized interviewing controversy. Journal of Official Statistics, 11(2), 147-160.

[6] Belli, R., Shay, W., & Stafford, F. (2001). Event History Calendars and Question Lists. Public Opinion Quarterly, 65(1), 45-74.

[7] Biemer, P. & Lyberg, L. (2003). Introduction to survey quality. New York, NY: John Wiley & Sons.

[8] Billiet, J., & Loosveldt, G. (1988). Improvement of the quality of responses to factual survey questions by interviewer training. Public Opinion Quarterly, 52(5), 190-211.

[9] Börsch-Supan, A., Jürges, H., & Lipps, O. (2003). SHARE: Building a panel survey on health, aging and retirement in Europe. Mannheim, Germany: Mannheim Research Institute for the Economics of Aging.

[10] California Health Interview Survey (2007). 2005 Methodology series: Report 2 — Data collection methods. Los Angeles, CA: UCLA Center for Health Policy Research, 2009.

[11] Cannell, C. F., et al. (1977). A summary of studies of interviewing methodology. Vital and Health Statistics: Series 2, Data Evaluation and Methods Research (No. 69, DHEW Publication No. (HRA) 77-1343). Washington, DC: U.S. Government Printing Office. Retrieved December 29, 2009, from ERIC (ED137418).

[12] Cantave, M., Kreuter, F., & Alldredge, E. (2009). Effect of pay structure on interviewer productivity. Poster presented at the 2009 AAPOR meetings.

[13] Christopher, S., McCormick, A. K. H. G., Smith, A., & Christopher, J. C. (2005). Development of an interviewer training manual for a cervical health project on the Apsáalooke reservation. Health Promotion Practice, 6(4), 414-422.

[14] Conrad, F., & Schober, M. F. (1999). Conversational interviewing and data quality. Proceedings on the Federal Committee on Statistical Methodological Research Conference. Retrieved February 14, 2008, from

[15] Davis, D. W. (1997). Nonrandom measurement error and race of interviewer effects among African Americans. Public Opinion Quarterly, 61(1), 183-207.

[16] Doyle, J. K. (2004). Introduction to interviewing techniques. Ch. 11 in Wood. D. W. (Ed.), Handbook for IQP Advisors and Students. Prepared for the Interdisciplinary and Global Studies Division. Worcester, MA: Worcester Polytechnic Institute. Retrieved March 26, 2010, from

[17] European Social Survey. (2004). Field procedures in the European Social Survey: Enhancing response rates. Retrieved January 24, 2008, from

[18] Federal Committee on Statistical Methodology. (1998). Interviewer training. Ch. 5 in Training for the future: Addressing tomorrow's survey tasks (pp. 50-61). (Report number NTIS PB99-102576). Washington, DC: U.S. Government Printing Office. Retrieved March 26, 2010, from

[19] Fowler, F.J. Jr. & Mangione, T.W. (1985). The value of interviewer training and supervision. Final report to the National Center for Health services Research. Boston, MA: Center for Survey Research.

[20] Fowler, F. J. & Mangione, T. W. (1990). Standardized survey interviewing: Minimizing interviewer-related error. Beverly Hills, CA: Sage Publications.

[21] Galbraith, M. (2003). Adult learning methods (3rd ed.). Malabar, FL: Kreiger Publishing Co

[22] Glewwe, P. (2005). Chapter IV: Overview of the implementation of household surveys. In United Nations Statistical Division, United Nations Department of Economic and Social Affairs (Eds.), Household surveys in developing and transition countries. New York. NY: United Nations. Retrieved March 26, 2010, from

[23] Groves, R. (1989). Survey errors and survey costs. Hoboken, NJ: John Wiley & Sons

[24] Groves, R. M., Fowler, F. J. Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.

[25] Groves, R. M., & McGonagle, K. A. (2001). A theory-guided interviewer training protocol regarding survey participation. Journal of Official Statistics, 17(2), 249-265.

[26] Hansen, M. H., Hurwitz, W. N., Marks, E. S., & Mauldin, W. P. (1951). Response errors in surveys. Journal of the American Statistical Association, 46(254), 147-190.

[27] Hatchett, S., & Schuman, H. (1975). White respondents and race-of-interviewer effects. Public Opinion Quarterly, 39(4), 523-528.

[28] Hursh-César, R. P. (1976). Third world surveys: Survey research in developing nations. Delhi: Macmillan Company of India.

[29] Jansen, H. A. F. M., Watts, C., Ellsberg, C., Heise, L., & García-Moreno, C. (2004). Interviewer training in the WHO multi-country study on women's health and domestic violence. Violence against Women, 10(7), 831-849.

[30] Kane, E. W., & Macaulay, L. J. (1993). Interviewer gender and gender attitudes. Public Opinion Quarterly, 57(1), 1-28.

[31] Kessler, R. C., Ustun, T. B., & World Health Organization. (2008). The WHO world mental health surveys: Global perspectives on the epidemiology of mental disorders. New York, NY: Cambridge University Press; published in collaboration with the World Health Organization.

[32] Kish, L. (1962) Studies of interviewer variance for attitudinal variables. Journal of the American Statistical Association, 57(297), 92-115.

[33] Lavrakas, P. J. (1993). Telephone survey methods: Sampling, selection, and supervision (2nd ed.). Thousand Oaks, CA: Sage Publications.

[34] Lee, R.M. (1993). Doing research on sensitive topics. London, UK: Sage Publications.

[35] Liamputtong, P. (2010). Performing qualitative cross-cultural research. Cambridge, UK: Cambridge University Press.

[36] Living Standard Measurement Study Survey. (1996). Working paper 126: A manual for planning and implementing the Living Standard Measurement Study Survey. Retrieved July 10, 2010, from /default/WDSContentServer/WDSP/IB/2000/02/24/000009265_3961219093409/ Rendered/PDF/multi_page.pdf

[37] Martin, E. (1996). Household attachment and survey coverage. Proceedings of the section on survey research methods (pp. 526-531). Alexandria, VA: American Statistical Association.

[38] Nyandieka, L. N., Bowden, A., Wanjau, J., & Fox-Rushby, J. A. (2002). Managing a household survey: A practical example from the KENQOL survey. Health Policy and Planning, 17(2), 207-212

[39] O'Brien, E. M., Mayer, T. S., Groves, R. M., & O'Neill, G. E. (2002). Interviewer training to increase survey participation. In Proceedings of the American Statistical Association: Survey Research Methods Section. Alexandria, VA: American Statistical Association.

[40] Office of Management and Budget. (2006). Standards and guidelines for statistical surveys. Washington, DC: Office of Information and Regulatory Affairs, OMB. Retrieved January 10, 2010, from

[41] O'Muircheartaigh, C. & Marckward, (1980). An assessment of the reliability of World Fertility Study Data. In Proceedings of the World Fertility Survey conference, Vol. 3, 305-379. The Hague, The Netherlands: International Statistical Institute

[42] Papanek, H. (1979). Research on women by women: Interviewer selection and training in Indonesia. Studies in Family Planning, 10(11/12), 412-415.

[43] Pennell, B-E., Harkness, J., & Mohler, P. (2006). Day 8: Data collection, interviewers, and interviewer training. SRC Summer Institute Presentation. University of Michigan, Ann Arbor, MI.

[44] Pennell, B-E., Harkness, J., Levenstein, R., & Quaglia, M. (2010). Challenges in cross-national data collection. In J. Harkness, B. Edwards, M. Braun, T. Johnson, L. Lyberg, P. Mohler, B. E. Pennell, & T. Smith. (Eds.), Survey methods in multicultural, multinational, and multiregional contexts. New York: John Wiley & Sons.

[45] Pennell, B-E., Mneimneh, Z., Bowers, A., Chardoul, S., Wells, J. E., Viana, M. C., et al. (2009). Implementation of the World Mental Health survey initiative. In R. C. Kessler & T. B. Üstün (Eds.), Volume 1: Patterns of mental illness in the WMH surveys. Cambridge, MA: Cambridge University Press.

[46] Quaglia, M. (2006). Interviewers and interpreters: Training for a biographical survey in Sub-Saharan Africa. Paper presented at Proceedings of Q2006 European Conference on Quality in Survey Statistics. Retrieved April 30, 2008, from

[47] Schaeffer, N. C. (1980). Evaluating race-of-interviewer effects in a national survey. Sociological Methods & Research, 8(4), 400-419.

[48] Schnell, R., & Kreuter, F. (2005). Separating interviewer and sampling-point effects. Journal of Official Statistics, 21(3), 389-410.

[49] Schuman, H., & Converse, J. M. (1971). The effects of black and white interviewers on black responses in 1968. Public Opinion Quarterly, 35(1), 44-68.

[50] Stycos, J. M. (1952). Interviewer training in another culture. Public Opinion Quarterly, 16(2), 236-246.

[51] Suchman, L., & Jordan, B. (1990). Interactional troubles in face-to-face survey interviews. Journal of the American Statistical Association, 85, 232-241.

[52] Sudman, S. (1966). New approaches to control of interviewing costs. Journal of Marketing Research, 3(1), 55-61.

[53] Üstün, T. B., Chatterji, S., Mechbal, A., & Murray, C. J. L. (2005). Chapter X: Quality assurance in surveys: Standards, guidelines, and procedures. In United Nations Statistical Division, United Nations Department of Economic and Social Affairs (Eds.), Household surveys in developing and transition countries. New York, NY: United Nations. Retrieved March 26, 2010, from

[54] Vaessen, M., Thiam, M., & Le, T. (2005). Chapter XXII: The demographic and health surveys. In United Nations Statistical Division, United Nations Department of Economic and Social Affairs (Eds.), Household surveys in developing and transition countries. New York: United Nations. Retrieved March 26, 2010, from

[55] Weisberg, H. (2005). The total survey error approach. Chicago, IL: University of Chicago Press.

[56] West, B., & Olson, K. (2009). How much of interviewer variance is really nonresponse error variance? Michigan Program in Survey Methodology Brownbag Presentation. University of Michigan, Ann Arbor, MI.

[57] Afrobarometer Survey. Retrieved July 1, 2010, from

[58] Asian Barometer. Retrieved July 1, 2010, from

[59] European Social Survey. Retrieved July 1, 2010, from

[60] Survey of Health, Ageing, and Retirement in Europe. Retrieved July 1, 2010, from


© 2008 The authors of the Guidelines hold the copyright. Please contact us if you wish to publish any of this material in any form.