Kirsten Alcser,  Judi Clemens, Lisa Holland, Heidi Guyer, and Mengyao Hu, 2016
 

Introduction

Interviewers play a critical role in surveys, as they are the members of the research team who implement the survey design. They are often required to perform multiple tasks with a high level of accuracy. In a face-to-face survey, the interviewer may be required to physically locate the sampled household and to update the sample frame. In both telephone and face-to-face surveys, the interviewer has to contact the household, explain the purpose of the study, enumerate household members, select the respondent, motivate the respondent to participate, ask questions in the required manner, put the respondent at ease, and accurately record the respondent’s answers as well as any other required information. Depending upon the survey topic and survey context, the interviewer may be required to perform additional tasks, such as biomeasure collection.

Interviewers can influence responses through their personal attributes and their behaviors, otherwise known as an interviewer effect (or interviewer effects). These guidelines present strategies to optimize interviewer efficiency and minimize the effect that interviewer attributes have on the data through appropriate recruitment, selection, and case assignment; they also present strategies to minimize the effect that interviewer behaviors have on sampling error, nonresponse error, measurement error, and processing error through interviewer training. Note that multinational, multicultural, or multiregional surveys, which we refer to as “3MC” surveys, present a particular challenge as the recruitment, selection and training of interviewers can vary greatly among different countries, due to differences in the cultural environment, existing infrastructure, and resources available (Smith, 2007).

Guidelines

Goal: To improve the overall quality of the survey data by minimizing interviewer effects while controlling costs by optimizing interviewer efficiency.

1.   Determine the structure and composition of the interviewing staff.

Rationale

The structure and composition of the interviewing staff must be established during the design and planning phases of the project because these decisions will determine the number and type of interviewers required, training protocol, sample assignment, and most efficient methods of supervision. See also Study Design and Organizational Structure and Tenders, Bids, and Contracts for discussion on decisions about interviewing staff.

Procedural steps

1.1   Consider such parameters as sample size and, for face-to-face studies, geographic distribution; the timing and duration of the data collection period; budget constraints; and the language(s) in which interviewing will occur (Pennell, Harkness, Levenstein, & Quaglia, 2010).

1.2   For face-to-face studies, decide whether interviewers will travel, either individually or in teams with a supervisor, or be locally assigned. See also Data Collection: Face-to-Face Surveys for additional discussion.

1.2.1   Factors favoring the use of traveling interviewers include:

  • Lower training costs compared to using local interviewers, as there are fewer interviewers to train and trainers do not have to travel to as many different locations.
  • Breach of confidentiality is less of an issue than with local interviewers because interviewers are unlikely to know the respondent personally.
  • Respondents may be more willing to participate in sensitive-topic surveys if the interviewers are strangers or “outsiders” (Lee, 1993).

1.2.2   Factors favoring the use of traveling teams rather than traveling individual interviewers include:

  • Traveling as a group may be safer than traveling individually.
  • Monitoring and supervision are easier since the supervisor is part of the group and is in close daily contact with the interviewers.
  • Interviewers have more opportunity to share experiences, learn from one another, and support one another than they would if traveling individually.
  • If multiple household members need to be surveyed, different interviewers can speak to them concurrently.
  • Similarly, if privacy is difficult to achieve, one interviewer can speak to the respondent while another engages other household members.
  • It is easier to implement interpenetrated sample assignments for research purposes than it would be with individual traveling interviewers (Groves et al., 2009a). It is important to note that the cluster design of most area probability sample surveys confounds the sampling and non-sampling (i.e., interviewer) variances. “Interpenetrated sample assignments” are necessary to measure interviewer variance by removing the effects of real differences among respondents assigned to different interviewers. However, interpenetrated workloads are typically not feasible due to the added travel costs and logistics; for this reason, interpenetrated designs are typically employed only for research purposes. The use of interviewing teams, however, allows for partial interpenetration, permitting estimation of measurement error introduced by the interviewer. Multilevel modeling is a data analysis technique that makes it possible to estimate interviewer and design effects simultaneously without a fully (or with only a partially) interpenetrated design (O’Muircheartaigh & Campanelli, 1998). See Statistical Analysis for further discussion.
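The multilevel approach mentioned above is conventionally written as a two-level random-intercept model; the following is a sketch of the standard formulation, not necessarily the exact specification used in the cited work:

```latex
% Response of respondent i assigned to interviewer j:
% a fixed mean plus an interviewer effect plus respondent-level error
y_{ij} = \beta_0 + u_j + \varepsilon_{ij},
\qquad u_j \sim N(0,\,\sigma^2_u), \quad
\varepsilon_{ij} \sim N(0,\,\sigma^2_\varepsilon)

% Intra-interviewer correlation: the share of total variance
% attributable to interviewers
\rho_{\mathrm{int}} = \frac{\sigma^2_u}{\sigma^2_u + \sigma^2_\varepsilon}
```

With full or partial interpenetration, \(\sigma^2_u\) can be estimated separately from real differences between areas, which is what makes the interviewer variance identifiable.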

1.2.3   Factors favoring the use of local interviewers include:

  • Employing a larger number of interviewers, each with a smaller workload, reduces the interviewer design effect (Kish, 1962; Office of Management and Budget, 2006). See Appendix A for a discussion of the interviewer design effect.
  • With a larger field staff, data collection can be completed within a shorter period of time, although the effect is not linear.
  • More call attempts can be made per case, since the interviewer remains in the area throughout the data collection period.
  • Local interviewer assignment reduces the need for interviewers to travel large distances, thereby reducing travel costs and time expended.
  • Local interviewers are familiar with the area and are more likely to share the language and customs of respondents; they may achieve higher response rates than would a stranger or “outsider.”
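The first point above rests on Kish’s (1962) interviewer design effect, discussed formally in Appendix A. A minimal numeric sketch, using illustrative (not empirical) workload and correlation values:

```python
# Interviewer design effect (Kish, 1962): variance inflation caused by
# correlated answers within each interviewer's workload.
# All numeric values below are illustrative assumptions.

def interviewer_design_effect(avg_workload: float, rho_int: float) -> float:
    """deff_int = 1 + (m_bar - 1) * rho_int, where m_bar is the average
    number of interviews per interviewer and rho_int is the
    intra-interviewer correlation of responses."""
    return 1 + (avg_workload - 1) * rho_int

# A modest rho_int of 0.02 with 50 interviews per interviewer nearly
# doubles the variance of estimates...
print(round(interviewer_design_effect(50, 0.02), 2))  # 1.98
# ...while spreading the same sample across more interviewers (20 each)
# substantially reduces the inflation:
print(round(interviewer_design_effect(20, 0.02), 2))  # 1.38
```

This is why a larger staff with smaller workloads reduces the interviewer design effect even when the total sample size is unchanged.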

1.3   For telephone studies, decide whether interviewers will conduct the survey from a central telephone facility or from their homes (that is, decentralized telephone interviewing). See also Data Collection: Telephone Surveys for further discussion.

1.3.1   Factors favoring the use of centralized telephone interviewing include:

  • Training can be easily centralized.
  • Monitoring and supervision can be easier and less expensive, since the supervisor is in close daily contact with the interviewers and, as a result, may have access to more relevant information, such as interviewer schedules, vocal patterns, and techniques used when addressing different situations.
  • It is easier to transfer sample units among interviewers.
  • Cost controls are more efficient.

1.3.2   Factors favoring the use of decentralized telephone interviewing include:

  • A dedicated telephone facility is not required.
  • Interviewer working hours may be more flexible.

1.3.3   Some organizations already have a system in place which mixes centralized and decentralized telephone interviewing.

  • In these cases, retaining the combination of centralized and decentralized interviewing may minimize disruption and maintain flexibility.
  • Establishing a sample management system that pulls together information from the two into a single report can be a challenge.

1.4   Estimate the Hours Per Interview (HPI). The HPI includes time spent traveling to all sample units, attempting to contact them, documenting contact attempts, and working on project-related administrative duties, as well as conducting the interview with those respondents who agree to participate. The HPI, combined with the hours per week that each interviewer is expected to work on the project and the total number of weeks planned for data collection, helps determine the number of interviewers required (see Appendix B for an example).
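The staffing arithmetic described above can be sketched as follows; the figures are illustrative assumptions, not those of the Appendix B example:

```python
# Back-of-envelope staffing estimate from Hours Per Interview (HPI).
# HPI covers travel, contact attempts, documentation, and administration
# as well as the interview itself. Illustrative figures only.
import math

def interviewers_needed(n_interviews: int, hpi: float,
                        hours_per_week: float, weeks: float) -> int:
    total_hours = n_interviews * hpi                 # total interviewer effort
    hours_per_interviewer = hours_per_week * weeks   # capacity of one interviewer
    return math.ceil(total_hours / hours_per_interviewer)

# e.g., 2,000 interviews at 5 hours each, with interviewers working
# 20 hours/week over a 10-week data collection period:
print(interviewers_needed(2000, 5.0, 20, 10))  # 50
```

Rounding up matters: even a small increase in HPI or sample size can require an additional interviewer, which is one reason to recruit above the calculated minimum.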

1.5   Consider whether any specialized skills or knowledge are required. This may include language skills, the use of special equipment, prior collection of biomeasures, or any physical requirements.

1.5.1   With a steadily growing interest in the association between social science data and biological data, interviewers are increasingly being called upon to collect biomeasure data such as height, weight, blood spots, saliva samples, and other measures. It is important to consider such tasks in the recruitment stage and to communicate these, or any other tasks that deviate from standard data collection, to prospective interviewers at the time of recruitment.

1.6   Utilizing the results of feasibility assessments (see Data Collection: General Considerations), consider any special requirements of the study, such as:

1.6.1   How many languages are spoken and in what regions?

1.6.2   Would interviewer familiarity with the topic introduce bias or enhance an interviewer’s ability to collect data?

1.6.3   Do cultural norms or the nature of the topic necessitate matching interviewers and respondents by gender, dialect, religion, race, ethnicity, caste, age, etc.?

1.6.4   Is physical stamina a consideration (e.g., if interviewers will be required to walk, ride, or bicycle long distances) (Nyandieka, Bowden, Wanjau, & Fox-Rushby, 2002)?

1.6.5   Is the sample widely dispersed, making interviewer access to a car or reliable public transportation a consideration?

1.6.6   Is interviewer safety an issue? For example, should interviewers be advised to travel to the area accompanied, only visit the segment in daylight hours, be prepared to deal with stray dogs, or be coached on dressing for the environment and keeping their equipment, such as a laptop or cell phone, out of view? Interviewer safety must be addressed if interviewers will be required to work in known areas of high crime or active conflict.

Lessons learned

1.1   Many organizations use a combination of interviewer assignment protocols. For example, they may hire local interviewers to make initial contact with sample households, select the respondent, and, if he or she is willing, administer the survey. Later in the data collection period, special traveling interviewers (for instance, experienced interviewers who have proven to be especially skillful at gaining cooperation or relating to particular types of respondents) can be brought in to persuade those selected individuals who have expressed a reluctance to participate. Alternatively, local interviewers might be hired in heavily populated areas while traveling interviewers are sent to more remote regions.

1.2   If traveling teams of interviewers are used, the interviewer may not always be conversant in the respondent’s language, and local interpreters may be needed to facilitate data collection. For example, the French Institut National d’Etudes Démographiques has collected data in several Bwa villages in Mali for over 15 years. Although French is the official language of Mali, most villagers speak only Boma, so interpreters were essential for collecting data. The interviewer was responsible for administering the questionnaire, while the interpreter’s job was to act as a neutral intermediary between the interviewer and respondent, conveying the words and the concepts associated with them to the two speakers (Quaglia, 2006) (see also Translation: Overview for more information on unwritten translation).

1.3   Matching interviewer and respondent characteristics may improve cooperation but only appears to impact survey data quality if the topic of the survey is related to an identifiable and stable interviewer attribute.

1.3.1   Indonesian researchers felt that matching interviewers with respondents in terms of age, marital status, and child-rearing experience improved rapport and willingness to participate during in-depth interviews (Papanek, 1979).

1.3.2   Several studies indicate that when the topic of the survey (e.g., racial attitudes or women’s rights) is related to a fixed interviewer attribute (e.g., race or gender), the interviewer attribute can affect respondents’ answers (Davis, 1997; Groves et al., 2009a; Hatchett & Schuman, 1975; Kane & Macaulay, 1993; Schaeffer, 1980; Schuman & Converse, 1971).

1.3.3   If the topic of the survey is not related to a fixed interviewer attribute, matching the interviewer and respondent on the attribute does not appear to affect data quality. Axinn (1989) found that matching Nepalese interviewers and respondents by gender and ethnicity for a health survey did not decrease the number of technical errors and “don't know” responses or reduce incorrect information gathered during the interview.

1.3.4   Recent research shows that interviewers' religious appearance can affect responses to religion-related questions. For example, studies have found that interviewers wearing Islamic symbols received higher reports of religiosity from respondents in Turkey (Koker, 2009) and that reported religiosity was affected by the interplay between interviewer religious appearance and respondents’ characteristics (Blaydes & Gillum, 2013; Benstead, 2014), and the interviewers' own religious, cultural and political attitudes (Mneimneh, de Jong, Cibelli Hibben, & Moaddel, 2015).

1.3.5   Attempting to match interviewer and respondent characteristics may strain the project’s resources, particularly if this is not an established practice in the locale. Gender matching is essential in some Muslim countries, but unexpected challenges may arise even when a project has planned and budgeted for such matching (Pennell et al., 2010). The Saudi National Health and Stress survey (SNHS) in the Kingdom of Saudi Arabia has found, for instance, that male interviewers must make the initial contact with a household and seek permission to interview both an eligible male and an eligible female household member. Once cooperation has been secured, a female interviewer must arrange to visit the household to interview the female respondent. In this context, cultural norms also preclude the recording of a female voice, limiting the use of this method of quality monitoring. More complexity is introduced because female interviewers may not travel without a male family member. Supervisors are randomly assigned to observe interviews but must also be gender matched (Mneimneh et al., 2015).

1.4    Using non-clinical interviewers for the collection of biomeasures can be an efficient cost-saving measure.

1.4.1   In a pilot study for the UK Household Longitudinal Study (UKHLS), Understanding Society, researchers successfully used non-clinical interviewers for collection of biomeasures including saliva, a finger-prick, blood pressure and body mass. The research team experienced one barrier in that the interviewers were required to have certain immunizations themselves in order to collect the blood samples. The immunization series took a number of weeks to complete, and this had a negative impact on the survey organization’s ability to recruit (McFall, Conolly, & Burton, 2012).

1.4.2   The Health and Retirement Study (HRS), the Survey of Health, Ageing and Retirement in Europe (SHARE) and European Longitudinal Study of Aging (ELSA) were designed to collect similar biomeasures for comparison purposes. Two of the studies, HRS and SHARE, trained interviewers to collect the measurements during the interview; ELSA employed nurses to collect the measures in a separate visit to a health center. Response rates and the distribution of measurements varied depending upon whether trained interviewers or nurses collected the measures. For example, walking speed and grip strength were more variable when measured by interviewers. While response rates in general were lower when respondents had to schedule a separate visit outside of the home, response rates for the more invasive measures were higher for nurses—probably because respondents had more confidence in the medical training of the data collector. However, cost considerations must also be taken into account when considering nurses versus interviewers in collecting such measures (Guyer, Ofstedal, Lessof, Cox, & Juerges, 2010).

2.   Determine the pay structure for the data collection staff.

Rationale

Since data collection staff quality has a major impact on the quality of the data collected, it is important to attract and retain the most qualified interviewers possible and to structure compensation accordingly.

Procedural steps

2.1   Interviewer pay structures vary greatly across countries in a 3MC survey. Depending on local labor laws, set interviewer pay comparable to the pay for other jobs requiring similar skills, ideally adjusted for regional cost of living standards.

2.2   Keep in mind local research traditions, the mode of the survey, and local labor laws. The two standard policies are to pay interviewers an hourly rate or to pay per completed interview (European Social Survey [ESS], 2004; Pennell et al., 2010).

2.2.1   Factors favoring payment per interview:

  • It is most feasible if each completed interview takes approximately the same amount of interviewer effort, as is more likely in a telephone survey (Pennell et al., 2010).
  • It is easier to monitor and control interviewer costs than when paying by the hour (Pennell et al., 2010; Sudman, 1966).

2.2.2   Factors favoring an hourly rate:

  • It is most feasible if the effort to complete an interview varies widely, as is common in face-to-face surveys (Lavrakas, 1993; Pennell et al., 2010).
  • Interviewers have less incentive to perform hurried, sloppy work or even to fabricate interviews when paid hourly than when paid per interview (Pennell et al., 2010; Sudman, 1966).
  • Interviewers are less likely to focus on easy cases while neglecting those who are hard to reach or hard to persuade to participate than when paid by the completed interview (ESS, 2004; Pennell et al., 2010).
  • Interviewers may be more willing to spend time on other important tasks (e.g., completing a thorough screening interview and entering comprehensive, accurate contact attempt records) than when paid by the completed interview.

2.3   When determining pay, consider the length and complexity of the interview, the expected difficulties of obtaining cooperation, and the amount of record-keeping demanded of the interviewer (ESS, 2004).

2.4   Pay interviewers for time spent in both initial interviewer training as well as any necessary refresher training.

2.5   Adjust the pay rate based on interviewer experience and any special skills that interviewers possess and the study requires (e.g., bilingual interviewers, phlebotomists, etc.).

2.6   Consider offering incentives for work above a certain target (e.g., response rate, contact rate, refusal conversion rate) as a way to keep interviewers motivated (ESS, 2004; Weisberg, 2005).

2.6.1   Incentives can be extra pay, prizes, or special rewards.

2.6.2   Overreliance on interviewer incentives for completed interviews may give interviewers a reason to fabricate interviews (Weisberg, 2005).

2.6.3   Any bonus system must be perceived by the interviewers as being fair. For example, different sample assignments can vary considerably in the challenges they pose for interviewers (Cannell, Marquis, and Laurent, 1977).

Lessons learned

2.1   Most survey organizations have a standard policy concerning pay arrangements (either paying per interview or paying by the hour) which they may be unwilling to change (Cannell et al., 1977).

2.2   If interviewers are paid by the interview instead of by the hour, they may rush the critical respondent-interviewer rapport-building process. It is especially important for face-to-face interviewers to spend the time necessary to develop this rapport so that respondents feel comfortable reporting honestly, as this leads to higher-quality responses. For example, when approaching a household, face-to-face interviewers need to conform to the culture’s introductory customs, such as drinking tea or meeting elders, which require additional time spent by the interviewer (Hursh-César, 1976).

2.3   To discourage hurried, sloppy work when paying per interview, some organizations set a cap on the number of interviews that each interviewer is allowed to conduct in a day or during some other time frame. Another strategy is to offer bonuses for high quality work. For example, set a basic pay per interview plus an additional 10% if the interviewer makes fewer than some predetermined number of errors. This requires the survey organization to have a monitoring system in place, which can distinguish between minor and more serious interviewer errors and can identify errors that cannot be attributed to the interviewer but rather to system factors, such as question wording and technology failures. See Survey Quality and Paradata and Other Auxiliary Data for further discussion on such systems.

2.4   In contrast to face-to-face interviewing, an experiment with telephone interviewers found that their productivity increased when they were paid per interview as opposed to being paid per hour (Cantave, Kreuter, & Alldredge, 2009).

3.   Recruit and select an appropriate number of qualified interviewers.

Rationale

The quality of an interviewer-administered survey depends, to a large extent, on the quality of the interviewers and their supervisors. It is important, therefore, to recruit and select the best possible people for the job. In addition, selecting candidates who are well suited for the job may lead to lower interviewer turnover and reduced survey costs.

Procedural steps

3.1   Recruit applicants.

3.1.1   Often, the research organization that will be conducting the data collection in an individual country will have its own in-house interviewing staff from which to select suitable interviewers for the particular survey.

3.1.2   The research organization may also have to implement outreach measures to find additional interviewers, such as asking local contacts for suggestions, placing flyers in strategic locations, and advertising in local papers or online. If this is necessary, recruitment and training will take longer and costs may increase; the cost will also vary by method.

3.1.3   The interviewing component of the study may also be subcontracted to an external survey organization with an existing pool of interviewers. If this will occur, it should be specified in the initial contract (see Tenders, Bids, and Contracts).

3.2   Target sources where potential interviewer candidates might be located, keeping in mind any special considerations, as described in Guideline 1. Professionals, such as traveling nurses, can be a good source of interviewers for health studies; teachers, or others with substantive knowledge of the study topic, may also be good candidates. However, these professionals must be willing to set aside other knowledge and training received if it differs from the study protocols.

3.3   Keep cultural norms and logistical factors in mind when recruiting interviewers. For example, it may not be acceptable in some cultures for young people (e.g., college students) to interview older persons or for women to interview men and vice versa. Similarly, persons with other jobs may not be available to work on the study at the times when respondents are most likely to be at home.

3.4   Clearly describe all requirements of the interviewing position in recruitment materials. In addition to reducing training costs, this can reduce the interviewer attrition that can occur when interviewers are not fully informed of study requirements and responsibilities until they attend training or begin field work.

3.5   Recruit more than the number of interviewers needed for data collection to allow for attrition and the dismissal of candidates who prove to be unsuitable.

3.6   Prepare an application form to use in prescreening interviewer candidates before they are invited to an in-person or telephone job interview as appropriate.

3.7   Consider interviewing applicants in the mode of the study. For example, hold telephone screening interviews for a telephone survey and face-to-face screening interviews for a face-to-face study. This also provides the applicant with the opportunity to demonstrate his or her use of a PAPI or a CAPI instrument, depending on the needs of the study.

3.8   Evaluate each candidate.

3.8.1   If appropriate, conduct a criminal background check, particularly if the interviewers will handle sensitive information or come into contact with vulnerable populations (e.g., the young, the old, the infirm, etc.).

3.8.2   Criteria for employment commonly include interviewing skills, language skills, computer or technical skills, organizational skills, education, availability, location, the ability to meet production (i.e., data collection) goals, and the capacity to handle potentially emotional or stressful interactions with respondents (Pennell et al., 2010).

3.8.3   When possible, select interviewers who have previously worked on similar studies and have good recommendations based on their performance. Experienced interviewers require less training and are likely to achieve higher response rates (Cannell et al., 1977;  Fowler & Mangione, 1985).

3.8.4   Evaluate the accuracy and clarity with which each potential candidate can read and process the survey questions in the language(s) of the interview and make sure that he or she is comfortable reading out loud. Ideally, language proficiency should be formally assessed by an outside expert or language assessment firm and should include evaluation of (Pennell, Harkness, & Mohler, 2006):

  • Conversational skills (e.g., comprehension level, comprehension speed, speech level, speech speed, and accent)
  • Writing skills (e.g., grammar, spelling, and the ability to enter responses)
  • Reading skills (e.g., reading aloud)

3.8.5   Realize that poor eyesight can lead to difficulty reading computer screens (Shirima et al., 2007).

3.8.6   If using a paper instrument, ensure that the applicant can follow questionnaire logic and instructions; if using a computerized interview, test applicants’ computer skills.

3.8.7   Select interviewers who are punctual and have good organizational skills (e.g., are able to handle forms and keep track of paperwork).

3.8.8   Select interviewers who have completed the full period of required schooling within their country.

3.8.9   For face-to-face studies, assess applicants’ ability to read or use maps or mapping software.

3.8.10 See Data Collection: Face-to-Face Surveys for additional discussion on interviewer recruitment and training considerations, particularly when the data collection instrument has any technological component (i.e., a tablet, laptop, smartphone, etc.).

3.9   Give the candidates a realistic preview of the job including the survey topic and the type of questions that will be asked; describe any non-traditional interviewing tasks (e.g., collecting biomeasures) in the recruitment description and the screening interview.

3.10 Clearly present the candidates with study expectations for workload (weekly, monthly, including evening work and possibly weekend work).

3.11 Obtain the candidates’ written commitment to work at the expected level of effort for the duration of the data collection period.

3.12 Base selection on an objective evaluation of the candidate’s abilities rather than his or her relationship to survey staff or favoritism (Nyandieka et al., 2002; Vaessen, Thiam, & Le, 2005; Afrobarometer Survey, 2010).

Lessons learned

3.1   Vaessen, Thiam, and Le (2005) suggest that study managers recruit at least 10 to 15 percent more than the number of interviewers ultimately needed for field work to allow for attrition and the dismissal of candidates who prove to be unsuitable.

3.2   A variety of selection criteria have been used successfully by established 3MC studies.

3.2.1   In the Afrobarometer Survey, interviewers (preferably women) usually hold first degrees in social sciences and have some university education, strong facility in the local language, and the ability to relate to respondents in a respectful manner. Selection is on a competitive basis and may include reading, speaking, and comprehension of national and local languages, and competence at following detailed instructions (Afrobarometer Survey, 2010).

3.2.2   The Asian Barometer recruits interviewers from among university graduates, senior social science undergraduates, and professional survey interviewers (Asian Barometer, 2010).

3.2.3   The European Social Survey highly recommends using experienced interviewers (ESS, 2010).

3.2.4   The Living Standard Measurement Study Survey requires that interviewers have completed secondary education and recommends fluency in two or more languages (Living Standard Measurement Study Survey, 1996).

3.2.5   The coordinating center for the Survey of Health, Ageing and Retirement in Europe (SHARE) selects survey research organizations for all participating countries and requires interviewers to have extensive face-to-face experience (Survey of Health, Ageing and Retirement in Europe, 2010).

3.2.6   In the World Mental Health Survey, some participating countries use field staff from established survey organizations, while others recruit new interviewers from the general population or among college students. Interviewer criteria vary among participating countries and may include interviewing experience, language skills, technology skills, education, and capability to handle potential sensitive situations with respondents (Kessler, Ustun, & World Health Organization, 2008).

3.3   Students can be a good source of interviewers.

3.3.1   In an experiment using interviewers from “scholarly networks” (senior or graduate students), government organizations, and survey firms, the research team from the Chinese General Social Survey found that student interviewers were the most trustworthy. Because it was somewhat easier to exert control over the student interviewers, they were able to monitor survey quality most easily within this group. The drawback of using the student interviewers was their limited availability (Bian & Li, 2012).

3.3.2   In a study of childhood behaviors in Turkey, researchers posted announcements at various academic institutions. They successfully recruited senior psychology and counseling students to conduct semi-structured face-to-face interviews about behaviors related to a number of neurodevelopmental and neuropsychiatric disorders (Cevikaslan, Evans, Dedeoglu, Kalaca, & Yazgan, 2013).

3.4   As revealed in a survey on data collection as part of the International Social Survey Program (ISSP), different types of people can be employed as interviewers, such as full-time professionals, part-time professionals, students and others (who are not in the labor force, and likely to work temporarily) (Smith, 2007). The types of people employed as interviewers differ greatly across cultures. For example, as mentioned in Smith (2007), in the ISSP, “a quarter of the countries use no part-time professionals and another quarter employ all part-timers. Likewise, over half of all countries have no full-time professionals, while almost a quarter have full-timers making up half or more of their staff. Similarly, over a third of countries use no student interviewers, while almost a fifth have a majority of interviewers who are students.” The differences are likely due to different resources available for each country and local traditions.

3.5   In a panel study, it may be helpful to keep the same interviewer with the same respondent across rounds of the study. In New Zealand, focus groups were conducted with interviewers who worked on the Prospective Outcomes of Injury Study (POIS). The interviewers explained that the “personal connection generated between the interviewers and participants was important, and enabled successful follow-up rates for the study.” They felt this connection allowed them to “negotiate the requirements of the interview within a relationship they form with participants” (Derrett & Colhoun, 2011).

3.6   Liamputtong, a professor in the School of Public Health at La Trobe University, argues that bicultural researchers who are familiar with both the local and mainstream cultures of communities in the study are ideal (Liamputtong, 2010).

3.7   As noted in Guideline 1, it is not always possible to recruit interviewers who are fluent in the language(s) preferred or needed by respondents. In this case, other arrangements must be made. Options may include working with interpreters, data collection by proxy, using a bridge language if available, or using a self-administered mode if literacy levels permit (see also Data Collection: Self-Administered Surveys).

3.7.1   A study conducted during the 2010 Census in the United States investigated Nonresponse Follow-up (NRFU) interviews with households that speak languages other than English, in areas with heavy concentrations of residents with limited English proficiency. The researchers found that enumerators were far more likely to go off script in interviews they conducted with respondents in other languages than in interviews with English-speaking respondents. Interviewers relied on on-the-fly translation and the use of interpreters, practices which enabled enumerators to complete nonresponse follow-up interviews but posed a potential threat to data quality. Issues observed included: (1) inaccurate and incomplete translation of census questions; (2) modifying census questions or skipping some questions completely; and (3) having someone, especially a child, act as an ad hoc interpreter, which created communication problems or placed a cognitive and emotional burden on the under-age interpreter (Pan & Lubkemann, 2013).

3.8   If the topic is sensitive (e.g., domestic violence), empathy and strong interpersonal skills may be more important than high levels of education or previous interviewing experience (Jansen, Watts, Ellsberg, Heise, & García-Moreno, 2004). This holds true for both interviewers and any interpreters being used.

3.9   If the project’s interviewing protocol differs significantly from previous studies, experienced interviewers may find it difficult to change their habits, leading to what is known as “veteran effects”. In this case, it may be preferable to recruit and train new interviewers. Similarly, interviewers who have worked for an organization with low quality standards may have to unlearn some behaviors and adapt to new standards.

4. Provide general basic interviewer training.

Rationale

Newly hired interviewers and supervisors require basic training in techniques for successful interviewing before they receive specific training on the study on which they will be working. Research indicates that general interviewer training (GIT) helps improve the quality of survey data by: (1) reducing item nonresponse (Billiet & Loosveldt, 1988), (2) increasing the amount and accuracy of information obtained (Billiet & Loosveldt, 1988), and (3) increasing survey participation by teaching interviewers how to identify and respond to respondents’ concerns (O'Brien, Mayer, Groves, & O'Neill, 2002).

Procedural steps

4.1   Allow sufficient time to adequately cover general interviewer training (GIT) material. One option is to provide materials for interviewers to read and complete prior to attending in-person training. Training content can be provided electronically, either accessible online or on a DVD or CD that can be reviewed on a personal computer. Interviewers can read materials provided in advance, view videos or tutorials, and answer questions electronically prior to attending in-person interviewer training.

4.2   Select appropriate trainers. These may include research staff, project managers, project management assistants, supervisors who directly oversee data collection staff, and experienced interviewers.

4.3   Provide the following general information to the interviewers at the beginning of the training:

4.3.1   An overview of the survey research organization and introduction to all trainers present.

4.3.2   The roles of the interviewer and the supervisor in the research process.

4.3.3   The format of the survey interview.

4.3.4   An overview of different interview modes (face-to-face, telephone, computer-assisted, observation, and delivering self-administered survey materials such as diaries) and the tasks each poses for the interviewer.

4.3.5   An overview of the sample design and associated implications and tasks for the interviewer.

4.3.6   Interviewer evaluation procedures and criteria.

4.4   Include the following prescribed procedures in the general interviewer training (Fowler & Mangione, 1990):

4.4.1   Standardized question-asking. Train interviewers to read each question exactly as written and to read the questions slowly. They should ask all questions exactly in the order in which they are presented in the questionnaire (Doyle, 2004; Groves et al., 2009a) (see Guideline 5 for exceptions).

4.4.2   Questionnaire format and conventions. Teach interviewers how to enter the answers to both open-ended and closed-ended questions. Train them to follow interviewing conventions such as emphasizing words in the questionnaire which appear in bold or are underlined, recognizing and not reading aloud interviewer instructions, reading or not reading optional words as appropriate, and selecting correct fill choices (e.g., he/she, has/have, etc.).

4.4.3   Clarification. If the study staff has not prepared a stock definition, train interviewers to repeat all or a specified part of the question verbatim when respondents ask for clarification. Interviewers should not make up their own definitions to any word, phrase, or question in the questionnaire (Cannell et al., 1977). Train interviewers to notify their supervisors about any questions which are confusing to respondents and require further clarification.

4.4.4   Probing. If a respondent’s answer is inadequate and it is legally and culturally permissible to probe (see Ethical Considerations), train interviewers to employ unbiased techniques to encourage answers that are more complete, appropriate, and thoughtful (Cannell et al., 1977; Groves et al., 2009a). Probes must be neutral; that is, they must avoid “sending a message” about what is a good or a bad response. Such strategies of probing for more information may include:

▪    A pause to encourage the person to fill the silence or a direct request for further information.
▪    Verbal probes chosen from a stock list of phrases such as "Could you explain what you mean by that?" or "Can you tell me anything else about ___________ ?"
 

4.4.5   Feedback. Train interviewers to provide their respondents with culturally appropriate feedback when they are doing well in order to encourage them to listen carefully and to give thoughtful answers (Cannell et al., 1977).

▪    This feedback may be in the form of a nonverbal smile or nod or a short encouraging phrase.
▪    Verbal feedback should be selected from a prepared list of stock phrases such as "That's useful information" or "Thank you, that's helpful" to ensure that the feedback is not evaluative of the content of the answer. For example, in English the word “okay” is discouraged for use in feedback because it could be construed as agreement with or approval of the respondent’s answer.
▪    As a general rule, give nonverbal or short feedback to short answers and longer feedback phrases to longer answers.
 

4.4.6   Recording answers. To reduce measurement error, train interviewers to record answers exactly as given.

▪    If the question offers fixed alternatives, teach interviewers to get respondents to choose one of the fixed alternatives; interviewers should not infer which alternative is closest to what the respondent actually says (Groves et al., 2009).
▪    If the question requires a narrative response, teach interviewers to record the answer in as near verbatim form as possible (Groves et al., 2009).
 

4.4.7   Confidentiality. Train interviewers to keep confidential all identifying respondent contact information as well as respondents’ answers to survey questions. See Ethical Considerations and Data Collection: Face-to-Face Surveys for additional discussion on confidentiality.

4.4.8   Any Computer Assisted Personal Interviewing (CAPI) conventions used in the survey instrument (see Instrument Technical Design and Data Collection: Face-to-Face Surveys).

4.4.9   Completing contact attempt records. Teach interviewers to record when each contact was attempted, any pertinent respondent comments (e.g., the best time to reach him or her or reasons for reluctance to participate), and the result of each contact attempt, using disposition codes (further information on contact attempt records and disposition codes can be found in Data Processing and Statistical Adjustment; examples of contact attempt records can be found in Data Collection: General Considerations).
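A contact attempt record can be thought of as a simple structured log. The sketch below is purely illustrative: the field names, the `ContactAttempt` class, and the disposition codes shown are hypothetical examples, not codes prescribed by these guidelines (projects typically adopt a standard scheme such as the AAPOR final disposition codes).

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative disposition codes (hypothetical values chosen for this sketch).
DISPOSITIONS = {
    "1100": "Complete interview",
    "2111": "Refusal",
    "2210": "No one at residence",
    "3130": "No answer",
}

@dataclass
class ContactAttempt:
    """One row in an interviewer's contact attempt record."""
    case_id: str
    interviewer_id: str
    timestamp: datetime
    disposition: str   # key into DISPOSITIONS
    notes: str = ""    # e.g., best time to reach, reasons for reluctance

    def describe(self) -> str:
        """Human-readable summary of the attempt for supervisor review."""
        label = DISPOSITIONS.get(self.disposition, "Unknown disposition")
        return f"{self.case_id} @ {self.timestamp:%Y-%m-%d %H:%M}: {label}"

attempt = ContactAttempt(
    case_id="HH-0042",
    interviewer_id="INT-07",
    timestamp=datetime(2016, 3, 14, 18, 30),
    disposition="2210",
    notes="Neighbor says household is home most evenings after 19:00",
)
print(attempt.describe())  # prints "HH-0042 @ 2016-03-14 18:30: No one at residence"
```

Recording each attempt this way — timestamp, outcome code, and free-text notes — supports both tailored follow-up by the interviewer and the response rate calculations discussed in Data Processing and Statistical Adjustment.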

4.4.10 Recording time and meeting production goals. Teach interviewers how to record the time they spend on each defined aspect of their work for the study, both for their remuneration and to allow supervisors to monitor their progress and efficiency during data collection.

4.5   If legally and culturally permissible, teach interviewers non-coercive persuasion techniques and practice counter replies to common statements of reluctance.

4.5.1   Discuss optimal times and modes for contacting target persons.

4.5.2   Train interviewers to tailor their initial interactions with respondents by developing the following skills (Groves & McGonagle, 2001; O'Brien et al., 2002):

▪    Learning the classes of concerns (“themes”) that respondents might have.
▪    Classifying the respondent’s wording into the appropriate theme.
▪    Addressing the concern, using their own words.
 

4.5.3   Employ hands-on practice exercises so that the trainees become proficient in quickly identifying respondent concerns and quickly responding to them.

4.6   For best overall results, employ a training format that combines lecture with visuals and small-group practice sessions.

4.6.1   Mixing the format keeps the trainees engaged and acknowledges that different people learn in different ways (Galbraith, 2003).

4.6.2   Through practice, trainees move from procedural knowledge (knowledge of how to perform a task) to skill acquisition (the ability to perform the task almost automatically) (O'Brien et al., 2002).

4.6.3   Although the class can be large for lecture sessions, trainees should break up into smaller groups for hands-on practice.

4.7   Be sensitive to the local culture.

4.7.1   Educate trainers in cultural sensitivity.

4.7.2   Take religious holidays into consideration when scheduling training sessions.

4.7.3   Make every effort to accommodate dietary restrictions when planning meals or snacks for the training.

4.7.4   Be aware that conventions regarding breaks during training vary among cultures.

4.8   At the end of basic interviewer training, evaluate the knowledge of the interviewer candidates. This can be done through a written test, a scripted certification interview with a supervisor, audio-recording, or observation of the interviewer conducting an actual practice interview.

Lessons learned

4.1   If the interviewer candidates have access to the necessary equipment, some basic interview training material can be presented in the form of audio- or video-recordings for home study (California Health Interview Survey, 2007; Federal Committee on Statistical Methodology, 1998). Other training options include telephone and video conferencing and self-study using paper materials.

4.2   West & Olson (2009) found that interviewer-related variance on survey items may be due to nonresponse error variance rather than measurement error variance. That is, different interviewers may successfully contact and recruit respondents with different characteristics (e.g., age, race), even though their sample pools start out the same.

4.3   Interviewer training and the interviewer manual need to be adjusted to be culturally sensitive to the population under study:

4.3.1   Textbook instructions on handling reluctance to participate and to provide accurate information rely to a large extent on Western experiences. When possible, such procedures should be modified to include culturally acceptable and suitable tactics. Researchers conducting a women’s health study on the Apsáalooke Native American reservation in southeastern Montana, U.S.A., felt that standard Western tactics for handling reluctance would be offensive in that culture. They therefore did not attempt to persuade reluctant respondents to participate. In addition, interviewers were encouraged to display a compassionate attitude and interest in the women (rather than the standard recommended neutral voice tone and lack of responsiveness to respondent answers), to minimize eye contact, and to accept offers of food and drink, all to be more consonant with the Apsáalooke culture (Christopher, McCormick, Smith, & Christopher, 2005).

4.3.2   In some countries, a Western trainer may be respected but resented. Researchers in Puerto Rico found allowing interviewer trainees to provide input about the local culture and supplementing trainer criticism with peer criticism helpful (Stycos, 1952).

4.3.3   The World Mental Health study added country-specific topics to their general interviewer training sessions. In New Zealand, they included cultural empathy to Maori and Pacific Islander households; in Colombia, they provided special training on interacting with governmental authorities and armed guerrilla and paramilitary groups (Pennell et al., 2010).

4.4   In the SHARE data collection, interviewers were trained to record all contacts and contact attempts and to take notes in order to tailor approaches for maximizing contact. This information allowed the researchers to observe considerable variation by country in how contact strategies were implemented (Alcser, Benson, & Guyer, 2011).

4.5    In an initial pilot study, the Japanese Study of Aging and Retirement (JSTAR) suffered low response rates in urban areas, due to the inability to access respondents in locked buildings. Staff from the University of Michigan provided additional training to field managers in accessing housing units in locked buildings. The response rates in a second pilot increased significantly after interviewers were trained and comfortable using the specific techniques for approaching locked units, gaining access, and returning to households where access was not previously obtained. These included identifying and talking with building managers, providing additional study information to building managers to gain their confidence in the research while still protecting respondent confidentiality, approaching other housing units when access to one unit is gained, returning to the building at different times of day and varying days of the week, and noting patterns of building resident arrival and departure times (Guyer, Alcser, & Kirgis, 2008).

5. Provide study specific training for all interviewers and supervisors.

Rationale

Interviewers and supervisors need to be very familiar with the study’s protocols and questionnaire in order to carry out their tasks. Depending upon the survey, they may need to learn the instrument’s branching structure, the study’s requirement for field coding, or the use of a respondent booklet, show cards, or other visual materials. There may be special instructions for implementing all or part of the survey that deviate from the standardized interviewing covered in general interviewer training. Interviewers should also be knowledgeable about the project objectives so that their actions help, not hinder, the overall goals. Both newly hired and experienced interviewers as well as supervisors require training specific to the study at hand.

Procedural steps

5.1   Allow sufficient time for study-specific training, depending upon the complexity of the study (see Appendix C for a sample training agenda).

5.2   When possible in a 3MC survey, have the same team from the coordinating center train all interviewers from all study countries to ensure standardization of study-specific protocols (Ustun, Chatterji, Mechbal, & Murray, 2005) (see Study Design and Organizational Structure for additional discussion on the role of the coordinating center). The team may provide regional trainings, traveling to where interviewers are located.

5.3   Select appropriate trainers. These may include research staff, project managers and people on their staffs, supervisors who directly oversee data collection staff, experienced interviewers, and consultant(s) hired to assist with interviewer training.

5.4   Include a large amount of practice and role playing using the questionnaire (Ustun et al., 2005).

5.4.1   Consider having the interviewers complete a self-interview to become familiar with the survey instrument.

5.4.2   Hands-on training may include round-robin practice sessions (i.e., scripted practice sessions where interviewers take turns administering survey questions to the trainer in a group setting), mock one-on-one interviews (i.e., sessions where interviewers interview each other), listening and discussing taped interviews, and live practice with potential respondents.

5.4.3   For role playing to be effective, prepare different scripts in advance so that the different branching structures of the interview, the nature of explanations that are permitted, and anticipated problems can be illustrated.

5.4.4   Consider making a video to illustrate the correct administration of any biomeasures, if applicable. This ensures that the material is consistently taught, especially if training is conducted at multiple times or in various locations.

5.5   Provide interviewers with an Interviewer Project Manual/Study Guide that has been prepared by the coordinating center, with input from local collaborators. The manual is an important part of training and will serve as reference material while the survey is underway (Glewwe, 2005).

5.5.1   Complete and review the manual before training begins (Glewwe, 2005).

5.5.2   When appropriate, translate the manual into the languages used in the geographical areas encompassed by the study.

5.5.3   Include the following content in both the training agenda and the project manual:

▪    General information about the project (e.g., the study’s background and goals, funding sources if relevant, and principal investigators).
▪     How to introduce the survey to respondents.
▪     Eligibility and respondent selection procedures, if applicable. Sampling and coverage errors can occur if interviewers fail to correctly locate sample households, determine eligibility, or implement the respondent selection procedure (Martin, 1996).
▪     Review of the survey instrument, highlighting the content of the various sections and the types of questions being asked.
▪     Data entry procedures for the mode(s) of instrument used (e.g., paper-and-pencil, CAPI, etc.). Measurement error can occur if interviewers do not record responses in the appropriate manner.
▪     Computer hardware and software usage, if appropriate (e.g., use of the laptop computer, email, and any other software packages).
▪     Use of the sample management system.
▪     Review of interview procedures and materials (e.g., informed consent materials and respondent incentive payments).
▪    Review of study-specific probing conventions (e.g., when to probe a “don’t know” response and an open-ended response).
▪    Techniques for handling reluctance that are specific to the study (e.g., recommended responses to frequently asked questions) and are approved in advance by an ethics review committee (see Ethical Considerations). Nonresponse bias can occur if interviewers are unable to persuade reluctant persons to participate in the survey.
▪     Nonstandardized interviewing, if appropriate for the study (e.g., event history calendars, time diaries, or conversational interviewing) (Beatty, 1995; Belli, Shay, & Stafford, 2001; Conrad & Schober, 1999; Groves et al., 2009a; Suchman & Jordan, 1990). (See Data Collection: General Considerations for a discussion about combining qualitative and quantitative data collection methods.)
▪     Any observational data which interviewers will be required to enter (e.g., observations of the respondent or the neighborhood). See Paradata and Other Auxiliary Data.
▪     Any specialized training for the study (e.g., procedures for taking biomeasures, instruction on interviewing minors or interviewing on sensitive topics, proxy interview protocol, interviewing in unsafe neighborhoods, and protocol for handling respondent or interviewer distress).
▪     Procedures to be used for unusual cases, including general principles to be applied in dealing with unforeseen problems (e.g., how to report abuse of children or others that is observed while conducting an interview in the respondent’s home). See Ethical Considerations for additional discussion of interviewer obligations in the course of fieldwork.
▪     Production goals and maintaining productivity.
▪     Proper handling of equipment and survey materials.
▪     The structure of the survey team and the role of all members of the team.
▪     Procedures for editing and transmitting data. Processing error can occur if interviewers do not correctly edit and transmit the completed questionnaire (see Data Processing and Statistical Adjustment for other potential sources of processing error).
▪     Any other required administrative tasks. See Data Collection: Face-to-Face Surveys and Data Collection: Telephone Surveys for additional discussion on mode-specific material to include during interviewer training.
 

5.5.4   The Project Manual/Study Guide must be especially clear and self-contained if it is impossible to train interviewers in person (e.g., if interviewers must be trained via conference call or video).

5.6   Develop and implement interviewer training appropriate to the instrument.

5.6.1   Include equipment-specific training, such as an overview of the hardware and software systems, password use, stylus use (if needed), questionnaire access, entering responses, charging the battery, general care and maintenance, and how to insert and remove memory cards.

5.6.2   When using new technology, interviewers tend to focus on the technology rather than the respondent; this should be addressed during interviewer training.

5.6.3   Determine whether paper questionnaires will be available in the event of equipment malfunction; if this is the case, training on the PAPI instrument is also essential.

5.6.4   See Data Collection: Face-to-Face Surveys and Data Collection: Telephone Surveys for additional discussion on mode-specific material to include during interviewer training.

5.7   Collect and analyze written evaluative feedback (i.e., provide the opportunity for trainees to give written feedback on trainer performance, the sufficiency of time allocated to different topics, and the adequacy of practice exercises).

5.8   Certify the interviewers; see Appendix D for a sample interviewer certification form. Certification for study-specific tasks should include:

5.8.1   A complete role-play interview with a supervisor.

5.8.2   Certification by an appropriate trainer for any biomeasures that are included in the study (see Appendix E for a sample certification checklist for taking physical measurements).

5.8.3   Language certification, as appropriate (see Translation: Overview).

5.9   Supplement the initial training with periodic in-person seminars, telephone conference calls, and periodic bulletins or newsletters (Pennell et al., 2010).

5.10   If data collection will extend for a long period of time, hold a brief refresher training course towards the middle of the data collection period (Office of Management and Budget, 2006).

5.10.1 This refresher training session is an opportunity to review various aspects of data collection, focusing on difficult procedures or on protocols that are not being adhered to sufficiently by interviewers.

5.10.2 The session can also be used to provide feedback on what has been achieved to date.

5.10.3 Require even experienced interviewers and supervisors to attend refresher training sessions, including sessions on standardized techniques.

Lessons learned

5.1   Most of the time it is not feasible for the same team to train all interviewers, particularly in very large 3MC studies. If this is the case, other steps must be taken to ensure the standardization of study-specific protocols:

5.1.1   One approach is the “train-the-trainer” (TTT) model.

▪    Training is generally done in one common language.
▪    Each country or cultural group sends one or more individuals, who can understand and work in the language of the trainers, to the central training.
▪    These representatives return to their own country or cultural group, adapt and translate the training materials as needed, and train the interviewers.
▪    This model allows for tailoring at the country or cultural group level.
▪    The TTT program in SHARE is one example of this approach (Alcser & Benson, 2005; Alcser & Benson, 2008; Börsch-Supan, Jürges, & Lipps, 2003; Survey of Health, Ageing and Retirement in Europe, 2010). The University of Michigan’s Survey Research Center, under contract to SHARE, created the TTT program. Each participating country sent a Country Team Leader, a member of his or her staff, and 2-3 trainers to the TTT sessions. Once the trainers had completed the TTT program, they used the training materials provided, translated if necessary, to conduct country-level interviewer training (see Appendix C for the SHARE Model Training Agenda).
▪    The SHARE team also found that under the TTT model, using training session observations at the regional level was effective for identifying deviations from the project objectives that could potentially contribute to systematic interviewer effects (Alcser et al., 2011).
▪    The Chinese General Social Survey provides another example of regional trainings being conducted for supervisors and local trainings being conducted for interviewers. In this model, interviewers were not selected to work until training was completed and interviewers had demonstrated successful interviewing skills (Bian & Li, 2012).
▪    The World Mental Health Survey gives two TTT sessions for interviewer supervisors, lasting, on average, six days. Interviewer supervisors, in turn, train the interviewers in general interviewing techniques (on average 20 hours) and Composite International Diagnostic Interview (CIDI)-specific training (on average 30 hours). Before progressing to CIDI-specific training, interviewers must demonstrate competence in general interviewing techniques, in the form of role playing, tests, and/or supervised respondent interaction. All interviewers must be tested and certified before they are authorized for production work (Kessler et al., 2008).

5.1.2   Another approach is the training center model (Pennell et al., 2009).

▪    A centralized training course is held, but language “regions” are represented rather than countries.
▪    This model is effective when it is not possible for every country to send trainers who are functional in the central trainer’s language.
▪    The training center model was used in the World Health Organization’s Composite International Diagnostic Interview training sessions. For example, trainers from Lebanon were trained in the United States and subsequently trained the trainers in Lebanon, Oman, Jordan, Palestine, Saudi Arabia, and Iraq.
 

5.1.3   Organizing training in steps (first training the trainers and then having them train the interviewers) increases the overall time needed for training, which should be factored into the project timeline.

5.1.4   All step-wise training results in a certain loss or distortion of information as it is passed along. Trainers should be aware of this and take precautions, such as providing approved standardized training materials.

5.2   If interviewers are being hired for one study only, basic interviewer training techniques can be incorporated into study-specific training.

5.4   The amount of time devoted to training varies among large established 3MC surveys. The variation is likely due to differences in the composition of the interviewer pool (full-time professionals, part-time professionals, or students and others), the resources available (e.g., whether the survey has its own field staff), organizational structures, and cultural traditions.

5.4.1   Glewwe (2005) recommends up to a month of intense interviewer training (general and study specific) for inexperienced interviewers in a face-to-face survey.

5.4.2   Field team members for the Asian Barometer received intensive, weeklong training sessions on the questionnaire, sampling methods, and the cultural and ethical context of the interview (Asian Barometer, 2010).

5.4.3   The Living Standard Measurement Study Survey (LSMS) recommends that training take place over a four-week period and include introduction to the LSMS survey, general survey procedures, the questionnaire, sampling procedures, and data entry program error reports, with at least two observed training interviews (Living Standard Measurement Study Survey, 1996).

5.4.4   SHARE requires 16-18 hours of training spread over 2-3 days in addition to the basic interviewer techniques training for new interviewers (Alcser & Benson, 2005; Alcser & Benson, 2008).

5.4.5   Similarly, the World Health Survey (WHS) recommends three full days of study-specific training (Ustun et al., 2005).

5.4.6   Round 4 of the Afrobarometer Survey held a six-day training workshop for all persons involved with the project, including interviewers and field supervisors. The Afrobarometer protocol requires holding a single national training workshop at one central location. Interviewers must complete at least six practice interviews before they leave for the field: at least one mock interview in the national language, at least one mock interview in each of the local languages they will use in the field, and at least four training interviews in a field situation (Afrobarometer Survey, 2010).

5.5   In addition to general interview training, all interviewers for Round 5 of the European Social Survey were briefed by the National Coordinator or a research team member regarding respondent selection procedures, registration of the calling process, response rate enhancement, coding of observation data, documentation, and questionnaire content (ESS, 2010).

5.6   If the topic is extremely sensitive, additional specialized training may improve response rates and data quality. The WHO Multi-Country Study on Women’s Health and Domestic Violence, fielded in multiple culturally diverse countries, found that previously inexperienced interviewers who had received specialized training obtained a significantly higher response rate and significantly higher disclosure rate of incidences of domestic violence than did experienced interviewers who had not received the additional training (Jansen et al., 2004).

5.7   Training interviewers in adaptive behavior, such as tailoring responses to respondent concerns or nonstandardized conversational interviewing, can be time-consuming and could increase training costs (O'Brien et al., 2002).

5.8   If data are to be collected via a computerized instrument (e.g., a laptop, tablet, smartphone, or Personal Digital Assistant (PDA)), interviewers will need training on its use.

5.8.1   In a survey in Bolivia using PDAs, interviewers, who had limited previous experience with the technology, wanted additional practice time. They particularly wanted additional instruction on the use of a stylus, since the keyboards on handheld devices can be cumbersome (Escandon, Searing, Goldberg, Duran, & Monterrey, 2008).      

5.8.2   In a survey in Burkina Faso, analyses of inter-observer accuracy and performance revealed considerable variation. Some interviewers clearly worked faster with the PDAs than others, though these were not necessarily the ones who covered the greatest number of households per day worked. However, those who carried out interviews relatively quickly were generally also the ones who made the fewest input errors. In surveys of this kind, where competence in local languages is important, there is often a limited pool of potential interviewers, so different types of interviewers can be considered, including students and part-time professionals. The study found that “school leavers” in one of the world’s poorest societies were, in general, able to do a good job interviewing using PDAs (Byass et al., 2008).

5.8.3   Training on the proper handling and care of equipment is very important, particularly in a rural context where the equipment must be transported through rough terrain, the power supply is unstable, and unexpected rain is a concern. In the DHS survey in Nepal, teams were provided with generators, rain shields, umbrellas, and other items to manage these challenges. Enforcing joint responsibility for the theft of, or damage to, the tablet PCs among the interviewer teams helped to ensure security of the tablets during transport and storage. With proper care and maintenance, tablet PCs (and portable generators) can be reused in future surveys, resulting in additional cost savings over the long term (Paudel, Ahmed, Pradhan, & Dangol, 2013).

5.9   In the pilot for the UK Household Longitudinal Study (UKHLS), interviewers reported that the certification process for biomeasure data collection enhanced their confidence in being able to execute these tasks accurately (McFall et al., 2012).

5.10 Field interviewers often work some distance away from their trainers and supervisors. Before sending interviewers to their assigned areas, some organizations have found it useful to have them conduct a few interviews close to the training locale. Afterward, the interviewers meet with the trainer, discuss their experiences, and have their questionnaires checked. Any problems or misunderstandings can be identified and rectified more easily than if they had occurred in a more remote area.

5.11 During pretesting for the Tamang Family Research Project, investigators trained interviewers in a Nepalese village that was not in the sample. The investigators and interviewers lived together during this period and throughout data collection. This allowed for continuous assessment of the interviewers, who were let go if their work did not meet quality standards (Axinn, Fricke, & Thornton, 1991).

 

6. Institute and follow appropriate quality control measures.

Rationale

Quality control (QC) is a procedure or set of procedures intended to ensure that a product or service adheres to a defined set of quality criteria or meets the requirements of the study (see Survey Quality). The implementation of quality control measures enhances the accuracy, reliability, and validity of the survey data and maximizes comparability of these data across cultures. To implement an effective QC program in a 3MC survey context, the coordinating center must first decide which specific standards must be met. Then real-world data must be collected and the results reported back to the coordinating center. After this, corrective action must be decided upon and taken as quickly as possible. Finally, the QC process must be ongoing to ensure that remedial efforts, if required, have produced satisfactory results.

Procedural steps

 

6.1   Assess the cost and success rates of different recruitment avenues to determine which are the most fruitful and cost effective; use this information to guide the future allocation of resources.

6.2   Considering the factors enumerated in Guideline 3, establish a checklist of minimum interviewer candidate requirements (e.g., interviewing skills, reading/writing fluency, language skills, educational level, and computer skills).

6.2.1   Require recruiters to complete the checklist as they screen each interviewer candidate. If specific assessment tests are used (e.g., to evaluate language skills), record each candidate’s performance on the test.

6.2.2   Accept only those candidates who meet the predetermined minimum requirements.

6.2.3   To ensure accountability, require the recruiter to sign or initial checklists and assessment tests.

6.3   Survey interviewer candidates to determine what improvements could be made to the recruitment process; use this information to modify the procedure, if possible (for example, ask how the candidate heard about the position).

6.4   Take attendance at general interviewing techniques and study-specific training sessions.

6.4.1   Dismiss candidates who fail to attend a predetermined minimum number of training sessions, or make arrangements to train them individually on the missed material.

6.4.2   Keep a signed written record of the training completed by each candidate.

6.5   At the end of basic interviewer training, evaluate the knowledge of the interviewer candidates, as described in Guideline 4.

6.5.1   Require all trainers to use the same evaluation criteria.

6.5.2   Dismiss or retrain those candidates who fail to attain predetermined minimum standards.

6.5.3   Keep a signed written record of each candidate’s performance on the evaluation measures.

6.6   At the end of study-specific training, certify the interviewer candidates, as described in Guideline 5.

6.6.1   Require all trainers to use the same evaluation criteria for certification.

6.6.2   Dismiss or retrain those candidates who fail to attain predetermined minimum standards.

6.6.3   Keep a signed written record of each candidate’s performance on the certification tests.

6.7   Debrief interviewer trainees to determine how training could be improved; use this information to modify the training protocol, if possible.

Lessons learned

6.1   Including quality control protocols as part of the overall survey design, and implementing them from the start, permits the survey organization and the coordinating center to monitor performance and to take immediate corrective action when required. For example, if many interviewer candidates fail to pass the study-specific certification test, additional training could be provided and the candidates tested again; those passing the certification test could then be sent into the field.

7. Document interviewer recruitment and training.

Rationale

Comprehensive documentation helps analysts correctly interpret the data and assess data quality; it also serves as a resource for later studies.

Procedural steps

7.1   Document the recruitment effort for enrolling data collection staff on the project, including:

7.1.1   Any special criteria used in reviewing data collection staff employment applications (e.g., language proficiency and special knowledge and skills, such as taking physical/biological measurements).

7.1.2   The way in which language fluency was assessed, as appropriate for the study.

7.1.3   Recruitment scripts and sources used to recruit data collection staff, as well as an evaluation of the success of the recruitment strategies.

7.1.4   Interviewer characteristics (e.g., gender, age, race, education, length of tenure as interviewer).

7.1.5   Characteristics of the multilingual interviewing staff in terms of the percent certified to interview by language.

7.1.6   The minimum number of hours required, if applicable, and the average number of hours worked by an interviewer during the data collection period.

7.1.7   Interviewer pay structure (e.g., hourly or per completed interview), the pay range, and any bonus program (e.g., amount and when or under what circumstances these bonuses were offered).

7.2   Document the general and study-specific training, including:

7.2.1   Number of training sessions conducted.

7.2.2   Number of training hours, dates, and locations.

7.2.3   Number of trainers and trainees.

7.2.4   Background of the trainers, including expertise in training and in any substantive areas as applicable to the survey.

7.2.5   Copy of the training agenda(s) (i.e., list of topics covered).

7.2.6   All written materials that were used (e.g., the interviewer manual/study guide, trainer/facilitator guide and supplemental training materials).

7.2.7   Certification procedures (e.g., scripted certification interview with a supervisor or other staff, written or online test on general interviewing procedures, live practice interviewing with potential respondents).

7.3   Document any issues encountered (e.g., if the recruitment plan failed to produce a sufficient number of qualified interviewers or interviewer attrition was unexpectedly high, necessitating a second round of recruitment and training; the training agenda did not provide adequate time for hands-on practice; or the ratio of trainers to trainees was inadequate) and suggestions for future studies.

7.4   Document all direct measurements of data quality, all indicators of data quality obtained via quality control (QC), and any decisions made to change the protocol in order to maintain high levels of quality (see Survey Quality).

Lessons learned

7.1   Documenting the recruitment effort, including the method(s) of recruiting, the number of candidates recruited, and the number of candidates screened, as well as post-study documentation of interviewer retention, is also useful for future projects. This information can guide future recruitment strategies and help estimate the number of recruits needed to provide a sufficient number of interviewers for data collection in similar studies.

7.2   Documentation of general and study-specific training can pinpoint areas needing improvement in future training efforts.