
II. Survey Quality

Webpage last modified: 2013-Jul-11

Sue Ellen Hansen, Grant Benson, Ashley Bowers, Beth-Ellen Pennell, Yuchieh Lin, and Benjamin Duffey

Introduction

This chapter presents a framework for assessing the quality of cross-cultural surveys, followed by guidelines for managing and assessing quality throughout the survey lifecycle.

In mono-cultural surveys, assessing the quality of survey data requires adequate documentation of the entire survey lifecycle and an understanding of the protocols used to assure quality. Even in such surveys, there may be methodological, organizational, and operational barriers to ensuring quality. For example, a country may not have the infrastructure, or an organization may not have the means, to implement a study entirely according to survey best practices.

In cross-cultural survey research, the challenges increase. Cross-cultural surveys hinge on the comparability or equivalence of data across cultures. Moreover, cross-cultural survey quality assessment procedures and criteria become more complex with additional survey processes, such as adaptation and translation of questions and harmonization of data across multiple surveys (see Adaptation of Survey Instruments, Translation, and Data Harmonization).

Figure 1 shows the survey production lifecycle as represented in these guidelines. The lifecycle begins with establishing study structure (Study, Organizational, and Operational Structure) and ends with data dissemination (Data Dissemination). In some study designs, the lifecycle may be completely or partially repeated. There might also be iteration within a production process. The order in which survey production processes are shown in the lifecycle does not represent a strict order to their actual implementation, and some processes may be simultaneous and interlocked (e.g., sample design and contractual work). Quality and ethical considerations are relevant to all processes throughout the survey production lifecycle. Survey quality can be assessed in terms of fitness for intended use (also known as fitness for purpose [20]), total survey error, and the monitoring of survey production process quality, which may be affected by survey infrastructure, costs, respondent and interviewer burden, and study design specifications.

Figure 1. The Survey Lifecycle


Quality Framework

The framework adopted by these guidelines for assuring and assessing quality is informed by research on survey errors and costs and quality management, and highlights three aspects of quality: total survey error ([14] [15]), fitness for intended use ([9]; also known as "fitness for purpose" [20]), and survey process quality ([4] [19] [23]).

Total survey error

The total survey error (TSE) paradigm is widely accepted as a conceptual framework for evaluating survey data quality [2] [6]. TSE defines quality as the estimation and reduction of the mean square error (MSE) of statistics of interest, which is the sum of random errors (variance) and squared systematic errors (bias). TSE takes into consideration both measurement (construct validity, measurement error, and processing error), i.e., how well survey questions measure the constructs of interest, and representation (coverage error, sampling error, nonresponse error, and adjustment error) [15], i.e., whether one can generalize to the target population using sample survey data. In the TSE perspective, there may be cost-error tradeoffs; that is, there may be tension between reducing these errors and the cost of reducing them.
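In symbols, this is the standard decomposition for a survey estimate \hat{\theta} of a population parameter \theta:

    \mathrm{MSE}(\hat{\theta}) = \mathbb{E}\bigl[(\hat{\theta} - \theta)^2\bigr] = \mathrm{Var}(\hat{\theta}) + \bigl[\mathrm{Bias}(\hat{\theta})\bigr]^2, \quad \text{where } \mathrm{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta.

The variance term captures random error over conceptual repetitions of the survey; the squared bias term captures systematic error.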

With advances in computerized interviewing software and sample management systems, data related to quality increasingly can be collected with survey data, and can be used to measure various components of error. These include paradata [4] [5], data from experiments embedded in a survey, and supplementary data, such as nonresponse followup questions. Each of these facilitates evaluation of survey data in terms of TSE.

Fitness for intended use

Biemer and Lyberg [4] argue that the TSE framework lacks a user perspective, and that it should be supplemented by a more modern quality paradigm, one that is multidimensional and focuses on criteria for assessing quality in terms of the degree to which survey data meet user requirements (fitness for intended use). By focusing on fitness for intended use, study design strives to meet user requirements in terms of survey data accuracy and other dimensions of quality (such as comparability and timeliness). In this perspective, ensuring quality on one dimension (e.g., comparability) may conflict with ensuring quality on another (e.g., timeliness); and there may be tension between meeting user requirements and the associated cost of doing so on one or more dimensions. There are a number of multidimensional quality frameworks in use across the world (see, for example, [5] [7] [16] [27] [28]).

Table 1 shows seven dimensions that are often used to assess the quality of national official statistics in terms of both survey error and fitness for use: comparability, relevance, accuracy, timeliness and punctuality, accessibility, interpretability, and coherence. In this framework, TSE may be viewed as being covered by the accuracy dimension.

Table 1. Dimensions of Quality
Quality Dimension | Description
Comparability | Are the data from different countries or cultures comparable to each other (equivalent)?
Coherence | Do the data form a coherent body of information that can be rearranged or combined with other data?
Relevance | Do the data meet the requirements of the client and users?
Accuracy | Are the data describing the phenomena that they were designed to measure; that is, are the survey estimates close to the true values of the population parameters they are meant to measure?
Timeliness and punctuality | How much time has elapsed between the end of the data collection and when the data are available for analysis? Are the data available when expected, based on client specifications?
Accessibility | Can users easily obtain and analyze the data?
Interpretability | Do the data make sense in terms of users' hypotheses? Are supplementary data available to facilitate analysis, e.g., data that describe the major characteristics and structure of the data (metadata) as well as data about the survey processes (paradata)?

Cost, burden, professionalism, and design constraints are factors that may also affect fitness for use on these dimensions.

The aim is to optimize costs; to minimize burden and design constraints where appropriate, based on the need to be sensitive to local survey contexts; and to maximize professionalism. Figure 2 shows the dimensions of quality and factors that affect quality in terms of fitness for use (see [3] [5] [7] [16] [27] [28] for examples of dimensions of quality used by statistical agencies). It also shows the accuracy dimension in terms of TSE [2] [14] [15].

Figure 2. Fitness for Intended Use (Quality Dimensions) and Total Survey Error (Accuracy Dimension)


The dimensions of quality (comparability, coherence, relevance, accuracy, and so on) and factors that may have an impact on quality (cost, burden, professionalism, and design constraints) apply to all surveys. However, in a cross-cultural context, these challenges increase.

Appendix A highlights recommendations from specific chapters in these guidelines in relation to dimensions of quality.

Survey process quality

Fitness for intended use provides a general framework for assessing the quality of cross-cultural surveys, and defines the essential dimensions of quality, one of which is accuracy (TSE). A third approach to quality monitoring and assessment is survey process quality management and the notion of continuous process improvement [15]. This approach focuses on quality at three levels: the organization, the process, and the product [18]. Quality products cannot be produced without quality processes, and having quality processes requires an organization that manages for quality.

A focus on survey production process quality requires the use of quality standards and collection of standardized study metadata, question metadata, and process paradata [7]. Figure 3 shows the elements of survey process quality management that allow users to assess the quality of processes throughout the survey lifecycle: quality assurance, quality control [17] [18], and a quality profile [4] [11]. These are discussed further in the guidelines below.

Cross-cultural survey organizations may vary in what cost-quality tradeoffs they can make, as well as processes they generally monitor for quality purposes. However, if each organization reaches a minimum standard through adherence to the quality guidelines of the study's coordinating center, the coordinating center can assess the quality of each survey based on quality indicators (paradata) from each organization, and create a quality profile that allows users to assess survey data quality and comparability across cultures. Appendix B summarizes for each chapter examples of elements of quality planning and assurance, quality monitoring and control, and a quality profile.

Figure 3. Survey Process Quality Management


Guidelines

Goal: To ensure the quality of survey production processes, and consequently of the survey data, throughout the survey lifecycle; to provide clear and comprehensive documentation of study methodology; and to provide indicators of process and data quality.

  1. Develop a sustainable quality management plan.
    Rationale

    Developing planned, systematic quality assurance (Guideline 2) and quality control (Guideline 3) activities helps ensure that the study and survey data meet client and user requirements. It also facilitates development of a quality profile (Guideline 4), which should document survey methodology, key indicators of quality, lessons learned, and recommendations for improvement.

    Procedural steps
    • Review available cross-cultural survey standards and best practices for ensuring the quality of survey processes, survey data, and documentation (such as these guidelines).
    • Review existing quality profiles (Guideline 4) and lessons learned from other studies. Use standardized quality profiles and protocols to establish sustainable quality management.
    • Review study requirements for quality assurance and quality control. These may be developed at the study design stage by the coordinating center, the survey organization, or both.
    • Review study goals and objectives, required products and deliverables, and study timeline and budget.
    • Review country-specific regulations and legislation relevant to conducting survey research.
    • Through analysis of the processes in the survey lifecycle (process analysis) [1], identify characteristics of survey products (e.g., coded data) that could vary during the processes (e.g., verification failures). For example,
      • Use tools to analyze a process, to determine what steps in the process need to be monitored to ensure quality, and to identify quality indicators to monitor [1]. Examples of tools used to analyze processes are flow charts and cause-and-effect (fishbone) diagrams (see Glossary).
      • Identify key indicators of the quality of the product(s) of the process, in terms of TSE and other dimensions of quality, as well as factors such as cost, burden, and the risk of not meeting quality requirements. See Appendix A for examples of survey quality indicators as they relate to TSE and the fitness for use quality dimensions (see Quality Framework).
      • If possible, use such indicators to determine whether the process is stable or in control; that is, is variation on a key indicator due to randomness alone? This can be done using paradata from similar studies the organization has conducted or is conducting, or from pilot studies (a computational sketch follows these procedural steps).
      • Define measurement and reporting requirements for use during quality assurance (see Guideline 2) and quality control (see Guideline 3), and determine who would be responsible for ensuring that quality assurance and quality control activities are carried out.
      • Assess whether these requirements can be met through current procedures and systems, and with currently collected paradata; and if not, develop a process improvement plan.
      • Create cost/error tradeoff decision rules about how to alter the features of the study design if the goals are not met.
    • Use quality planning tools to help determine what performance analyses and assessments should be used. For example,
      • A cost-benefit analysis of potential quality management procedures and activities; that is, evaluating their benefits in relation to the cost of performing them relative to overall study costs.
      • Benchmarking, that is, comparing planned activities against those of similar studies, and the outcomes of those activities, to form a basis for performance measurement.
      • Statistical analysis of factors that may influence indicators of process or product quality.
      • Cost of quality and cost of poor quality analyses.
    • Develop a quality assurance plan, which could include (see Appendix B):
      • The process improvement plan.
      • Performance and product quality baselines.
      • Process checklists.
      • A training plan.
      • Recommended performance analyses and assessments (e.g., quality assurance procedures for verifying interviews and evaluating interviewer performance).
      • Required process quality audits, reviews, and inspections (e.g., review of tapes of interviews to assess interviewer performance).
    • Develop a plan for continuous monitoring of processes to ensure that they are stable and that products are meeting requirements (Quality Control; see [1], Guideline 3, and Appendix B). Such a plan could include:
      • The process improvement plan.
      • Performance and product quality baselines.
      • Quality indicators identified in process analysis and planning for responsive design.
      • Performance analyses and assessments to use to monitor processes.
      • Tools to use to monitor processes and product quality, e.g., Pareto charts and statistical process control charts.
      • Reports to prepare on performance measurement, such as interviewer training certification.
    • Develop procedures to ensure that throughout the survey lifecycle all documentation, reports, and files related to quality planning and assurance, quality monitoring and control, and process improvement are retained. This facilitates preparing a quality profile for users of the disseminated survey data (see Guideline 4 and Data Dissemination).
    • Develop procedures for updating the quality management plan as needed during the survey lifecycle.
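    As a concrete illustration of the stability check mentioned above, the following minimal Python sketch computes 3-sigma control limits for a proportion-type quality indicator; here the indicator is an interviewer-level verification failure rate, and all names and counts are hypothetical:

        # Minimal sketch (hypothetical data): flag interviewers whose verification
        # failure rate falls outside 3-sigma p-chart control limits.
        from math import sqrt

        # (interviews verified, verification failures) per interviewer -- hypothetical
        records = {
            "INT-01": (40, 2),
            "INT-02": (35, 1),
            "INT-03": (50, 9),
            "INT-04": (45, 3),
        }

        total_n = sum(n for n, _ in records.values())
        total_fail = sum(f for _, f in records.values())
        p_bar = total_fail / total_n  # overall failure rate (center line)

        for interviewer, (n, failures) in records.items():
            p = failures / n
            sigma = sqrt(p_bar * (1 - p_bar) / n)  # binomial standard error at this n
            ucl = p_bar + 3 * sigma
            lcl = max(0.0, p_bar - 3 * sigma)
            status = "in control" if lcl <= p <= ucl else "OUT OF CONTROL"
            print(f"{interviewer}: rate={p:.3f} limits=({lcl:.3f}, {ucl:.3f}) {status}")

    An indicator outside its limits signals variation that is unlikely to be due to randomness alone and warrants investigation before corrective action.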
    Lessons learned
    • There are many quality management methodologies that survey organizations may use that focus on the three levels of quality: product, process, and organization; for example, Total Quality Management (TQM). Discussion of such methodologies is beyond the scope of this chapter, but experience has shown that they can help organizations manage for quality.
    • Developing a quality management plan alone does not necessarily guarantee quality. Other project management practices may also affect quality. Many survey organizations and statistical agencies have recognized the value of also adhering to professional project management guidelines, such as those of the Project Management Institute (PMI) [26] and the International Project Management Association (IPMA). Many have certified project managers and follow professional project management best practices that may affect quality, schedule, and costs, such as developing risk management and communication plans. As with a quality management plan, these can be critical to ensuring the quality of processes and survey data.
  2. Perform quality assurance activities.
    Rationale

    Quality assurance comprises the planned procedures and activities (see Guideline 1) that an organization uses to ensure that the study meets process and product quality requirements. It also specifies ways in which quality can be measured.

    Procedural steps
    • For each process in the survey lifecycle, perform quality assurance activities as outlined in the quality management plan, such as (see Appendix B):
    • Perform performance and product quality assessments (a computational sketch follows these procedural steps). Examples are:
      • Certification of interviewers after training (rate of certification, rate of certification after follow-up training, etc.); that is, based on evaluation of interviews (taped or monitored), determination that the interviewer is ready to work on the study.
      • Verification of coded questionnaires (rate of verification failures).
    • Generate indicators of quality for each assessment, based on baselines established in quality planning (Guideline 1), and create reports on performance and quality assessments; these reports can be used both for quality monitoring and control (see Guideline 3) and for documentation in a quality profile (see Guideline 4).

    • Perform quality audits at key points in the survey lifecycle if study guidelines for quality management require them. These generally are structured, independent reviews to determine whether activities comply with study and organizational policies and procedures for managing quality. They are intended to identify inefficiencies in processes, and to make recommendations for reducing the cost of quality management and increasing the quality of processes and products. In international studies, these generally would be done by the survey organization or an independent local auditor.
    • Provide documentation for:
      • Performance and quality assessments.
      • Recommended corrective actions and corrective actions taken.
      • Updates to baselines.
      • Changes to the quality assurance plan.
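    To make the assessments above concrete, the following minimal Python sketch (with hypothetical counts) computes the kinds of quality assurance indicators listed, in a form suitable for performance reports and, later, a quality profile:

        # Minimal sketch (hypothetical counts): quality assurance indicators of the
        # kind listed above (certification and verification rates).
        trained = 60                # interviewers completing initial training
        certified_first = 48        # certified on first evaluation
        certified_followup = 8      # certified after follow-up training
        coded = 1200                # coded questionnaires verified
        verification_failures = 54  # verification failures among those

        indicators = {
            "certification_rate": certified_first / trained,
            "certification_rate_after_followup":
                (certified_first + certified_followup) / trained,
            "verification_failure_rate": verification_failures / coded,
        }
        for name, value in indicators.items():
            print(f"{name}: {value:.1%}")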
  3. Perform quality control activities.
    Rationale

    To ensure that standards and requirements are met, it is necessary to monitor study processes and the products produced against predetermined baselines and requirements, and continuously evaluate whether processes are stable (in control) and quality requirements are being met [4] [17]. This may lead to recommendations for preventing or minimizing error or inefficiencies, updates to the quality management plan (see Guideline 1), and suggestions for improving standards and best practices. The result is continuous process improvement ([4] [17] [23]), through improved quality assurance (see Guideline 2) and improved quality monitoring and control.

    As indicated in Figure 3, quality control is closely linked to quality assurance, and the outputs of each feed into the other. Thus, in some respects, quality control may be viewed as part of quality assurance. However, these are separated in this chapter to make monitoring and controlling performance and product quality an explicit part of quality management.

    Procedural steps
    Lessons learned
    • Some organizations have used quality control techniques to monitor survey data collection processes and adapt study designs when quality goals are not met. This is known as adaptive or responsive survey design [13].
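    As a minimal sketch of this idea (the thresholds and actions are hypothetical illustrations, not prescriptions from these guidelines), a monitored indicator can be compared against its target during data collection to trigger predefined design changes:

        # Minimal sketch (hypothetical thresholds): a responsive-design style rule
        # that maps a monitored indicator to a predefined design action.
        def check_design(response_rate: float, target: float = 0.60) -> str:
            if response_rate >= target:
                return "continue current protocol"
            if response_rate >= target - 0.10:
                return "intervene: increase contact attempts and refusal conversion effort"
            return "escalate: apply the cost-error tradeoff rules in the quality plan"

        print(check_design(0.55))  # -> intervene: ...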
  4. Create a quality profile.
    Rationale

    A quality profile (also known as a quality report) synthesizes information from other sources, documenting the survey methodology used throughout the survey lifecycle and providing indicators of process and data quality (sampling and nonsampling errors), corrective actions taken, lessons learned, and recommendations for improvement and further research. It provides the user with all available information to help assess data quality in terms of fitness for intended use, total survey error, and other factors (see Quality Framework above). See [9] for an example of guidelines for such reports, [10], [11], and [29] for examples of quality profiles, and Appendix A for examples from chapters in these guidelines.

    Procedural steps
    • Document procedures and methodology used for key stages or processes in the lifecycle (see Appendix B). For example, for sample design this would include:
      • Time dimension of design (e.g., one time cross sectional, fixed or rotating panel).
      • Target and survey population definitions, including inclusion/exclusion criteria.
      • Sampling frame(s) descriptions.
      • Maps and protocol used in field listing.
      • Description of all stages of selection, including sample sizes, stratification, clustering, oversampling and number of replicates fielded at each stage.
      • Documentation of procedures to determine probabilities of selection and weights for each stage of selection.
      • Tables of the precision of the estimates of key survey statistics.
      • If necessary, descriptions of substitution procedures.
      For each process documented, this should include:
    • Provide key indicators of quality for all dimensions of quality (see [9] and Appendix B), some of which can be collected during data collection, others afterwards. They include:
    • Document lessons learned and make recommendations for improvement in studies of the same design, and, if possible, make recommendations for methodological research that could inform the design of similar studies in the future. Such information would be useful not only for the study's coordinating center and national survey agencies, but also for researchers and organizations interested in conducting similar studies.

Appendix A

The following table lists recommendations from individual chapters in these guidelines that are related to the dimensions of quality. Also included are examples of indicators of quality adapted from Eurostat's standard quality indicators [12].

Table 1
Quality Dimension | Guidelines

Comparability

To ensure, as much as possible, that observed data from different countries or cultures are comparable (equivalent).

Indicators:

Time

  • The differences, if any, in concepts and methods of measurement between the most recent and previous reference periods
  • A description of the differences, including an assessment of their effect on the estimates

Geographical

  • All differences between local practices and national standards (if such standards exist)
  • An assessment of the effect of each reported difference on the estimates

Domains

  • A description of the differences in concepts and methods across cross-cultural surveys (e.g., in classifications, statistical methodology, statistical population, methods of data manipulation, etc.)
  • An assessment of the magnitude of the effect of each difference

Establish minimum criteria for inclusion in a cross-national survey dataset, if applicable.

Minimize the amount of undue intrusion by ensuring comparable standards when appropriate (based on differences in local survey contexts) for informed consent and resistance aversion effort, as well as other potentially coercive measures such as large respondent incentives (see Ethical Considerations in Surveys).

Define comparable target populations and verify that the sampling frames provide adequate coverage to enable the desired level of generalization (see Sample Design).

Minimize the amount of measurement error attributable to survey instrument design, including error resulting from context effects, as much as possible (see Instrument Technical Design).

Minimize or account for the impact of language differences resulting from potential translations (see Translation and Adaptation of Survey Instruments).

Minimize the effect interviewer attributes have on the data through appropriate recruitment, selection, and case assignment; minimize the effect that interviewer behavior has on the data through formal training (see Interviewer Recruitment, Selection, and Training).

Identify potential sources of unexpected error by implementing pretests of translated instruments or instruments fielded in different cultural contexts (see Pretesting).

Reduce the error associated with nonresponse as much as possible (see Data Collection for a discussion of nonresponse bias and methods for increasing response rates).

Minimize the effect that coder error has on the data through appropriate coder training (see Data Processing and Statistical Adjustment).

If possible, provide a crosswalk between survey instruments fielded at different times or for different purposes, but using the same questions, to facilitate analysis and post-survey quality review (see Data Dissemination).

Coherence

To ensure that the data can be combined with other statistical information for various secondary purposes.

Indicators:

  • A description of every pair of statistics (statistical unit, indicator, domain, and breakdown) for the survey(s) that should be coherent
  • A description of any of the differences that are not fully explained by the accuracy component
  • A description of the reported lack of coherence, for specific statistics

Create a clear, concise description of all survey implementation procedures to assist secondary users. The Study, Organizational, and Operational Structure chapter lists topics which should be included in the study documentation; there are also documentation guidelines within each chapter.

Provide data files in all the major statistical software packages and test all thoroughly before they are made available for dissemination (see Sample Design, and Data Dissemination).

Designate resources to provide user support and training for secondary researchers (see Data Dissemination).

See Data Harmonization for a discussion of the creation of common measures of key economic, political, social, and health indicators.

Relevance

To ensure that the data meet the needs of the client or users.

Indicators:

  • A description of clients and users
  • A description of users' needs (by main groups of users)
  • An assessment of user satisfaction

Clearly state the study's goals and objectives (see Study, Organizational, and Operational Structure).

Conduct a competitive bidding process to select the most qualified survey organization within each country or location (see Tenders, Bids, and Contracts).

While designing the questionnaire, ensure all survey questions are relevant to the study objectives (see Questionnaire Design).

Construct the data file with a data dictionary of all variables in the selected-element data file, giving each variable a name and an accompanying description relevant to the study objectives (see Sample Design).

Accuracy

To ensure that the data describe the phenomena they were designed to measure. This can be assessed in terms of Mean Square Error (MSE).

Indicators:

Measurement error:

  • A description of the methods used to assess measurement errors (any field tests, reinterviews, split sample experiments, or cognitive laboratory results, etc.)
  • A description of the methods used to reduce measurement errors
  • Average interview duration
  • An assessment of the effect of measurement errors on accuracy

Processing Error:

  • A description of the methods used to reduce processing errors
  • A description of the editing system
  • The rate of failed edits for specific variables
  • The error rate of data entry for specific variables and a description of estimation methodology
  • The error rate of coding for specific variables and a description of the methodology followed for their estimation
  • A description of confidentiality rules and the amount of data affected by confidentiality treatment

Coverage error:

  • A description of the sampling frame
  • Rates of over-coverage, under-coverage and misclassification broken down according to the sampling stratification
  • A description of the main misclassification and under- and over-coverage problems encountered in collecting the data
  • A description of the methods used to process the coverage deficiencies

Sampling error:

  • Type of sample design (stratified, clustered, etc.)
  • Sampling unit at each stage of sampling
  • Stratification and sub-stratification criteria
  • Selection schemes
  • Sample distribution over time
  • The effective sample size
  • Coefficients of variation of estimates and a description of the method used to compute them (including software)
  • An assessment of resulting bias due to the estimation method

Nonresponse error:

  • Unit nonresponse rate
  • Identification and description of the main reasons for nonresponse (e.g., non-contact, refusal, unable to respond, non-eligible, other nonresponse)
  • A description of the methods used for minimizing nonresponse
  • Item nonresponse rates for variables
  • A description of the methods used for imputation and/or weighting for nonresponse
  • Variance change due to imputation
  • An assessment of resulting bias due to nonresponse

Model assumptions error:

  • A description of the models used in the production of the survey's statistics
  • A description of the assumptions on which the model relies
  • A description of any remaining (unaccounted for) bias and variability which could affect the statistics

Pretest all the versions of the survey instrument to ensure that they adequately convey the intended research questions and measure the intended attitudes, values, reported facts and/or behaviors (see Pretesting).

In order to reliably project from the sample to the larger population with known levels of certainty/precision, use probability sampling (see Sample Design).

Provide a report on each variable in the dataset of selected elements to check the overall and within-stratum sample sizes, the distribution of sample elements by other specific groups such as census enumeration areas, extreme values, nonsensical values, and missing data (see Sample Design).

If possible, assess accuracy by looking at the differences between the study estimates and any available "true" or gold standard values (see Data Collection).

Timeliness and punctuality

To ensure that the data are available for analysis when they are needed.

Indicators:

  • The legal deadline imposed on respondents
  • The date the questionnaires were sent out
  • Starting and finishing dates of fieldwork
  • Dates of processing
  • Dates of quality checks
  • The dates the advance and detailed results were calculated and disseminated
  • If data are transmitted later than required by regulation or contract, the average delay in days or months in the transmission of results with reference to the legal deadline
  • If data are transmitted later than required by regulation or contract, the reasons for the late delivery and actions taken or planned for improving timeliness

Time data collection activities appropriately (see Data Collection, and Pretesting).

Create a study timeline, production milestones, and deliverables with due dates (see Study, Organizational, and Operational Structure).

Accessibility

To ensure that the data can easily be obtained and analyzed by users.

Indicators:

  • A description of how to locate any publication(s) based on analysis of the data
  • Information on what results are sent to reporting units included in the survey
  • Information on the dissemination scheme for the results
  • A list of variables required but not available for reporting
  • Reasons why variables are not available

Save all data files and computer syntax from the preferred statistical software package needed during the sample design process in safe and well-labeled folders for future reference and use (see Sample Design).

Establish procedures early in the survey lifecycle to ensure that all important files are preserved (see Data Dissemination).

Test archived files periodically to verify user accessibility (see Data Dissemination).

Create electronic versions of all project materials whenever feasible (see Data Dissemination).

Produce and implement procedures to distribute restricted-use files, if applicable (see Data Dissemination).

Interpretability

To ensure that supplementary metadata and paradata are available to analysts.

Indicator:

  • A copy of any methodological documents relating to the statistics provided

At the data processing stage of the study, create a codebook that provides question-level metadata matched to variables in the dataset. Metadata include variable names, labels, and data types, as well as basic study documentation, question text, universes (the characteristics of respondents who were asked the question), the number of respondents who answered the question, and response frequencies or statistics (see Sample Design, and Data Processing and Statistical Adjustment).
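As a minimal sketch of assembling such question-level metadata (the variable name, question text, universe, and responses below are all hypothetical), in Python:

    # Minimal sketch (hypothetical variable): a codebook entry combining
    # question-level metadata with response frequencies from the data file.
    from collections import Counter

    responses = [1, 2, 2, 1, 3, 2, None, 1]  # coded answers; None = item missing

    entry = {
        "variable": "Q12_HEALTH",                # variable name (hypothetical)
        "label": "Self-rated health",
        "question_text": "In general, would you say your health is ...?",
        "universe": "All respondents aged 18+",  # who was asked the question
        "n_answered": sum(r is not None for r in responses),
        "frequencies": dict(Counter(r for r in responses if r is not None)),
    }
    print(entry)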

Collect and make available process data collected during data collection, such as timestamps, keystrokes, and mouse actions ("paradata") (see Instrument Technical Design).

Appendix B

The following table summarizes recommended elements of process quality management relevant to each chapter in these guidelines. These are meant to reflect quality management at two levels: (1) the overall study level; and (2) the national organization level. It is not meant to convey that all elements listed should be part of a study's design, but to provide examples and to help guide the development of specifications for quality management for a study.

If possible, the study's quality profile (quality report) would include a summary of each organization's performance, based on standardized quality indicators. It also would include lessons learned and recommendations for improvement.

Where possible, examples are taken from the individual chapters in these guidelines. Not all chapters have specific measures for monitoring and controlling quality. Even without clear individual rates or measures of quality, there often may be reports on quality assurance activities that facilitate assessing quality.

See Table

Glossary

Accuracy
The degree of closeness an estimate has to the true value.
Adaptation
Changing existing materials (e.g., management plans, contracts, training manuals, questionnaires, etc.) by deliberately altering some content or design component to make the resulting materials more suitable for another socio-cultural context or a particular population.
Adjudication
The translation evaluation step at which a translation is signed off and released for whatever follows next, such as pretesting or final fielding (see Translation). When all review and refinement procedures are completed, including any revisions after pretesting and copyediting, a final signing off/adjudication is required. Thus, in any translation effort there will be one or more signing-off steps ("ready to go to client," "ready to go to fielding agency," for example).
Adjustment Error
Survey error (variance and bias) due to post data collection statistical adjustment.
Audit trail
An electronic file in which computer-assisted and Web survey software captures paradata about survey questions and computer user actions, including times spent on questions and in sections of a survey (timestamps) and interviewer or respondent actions while proceeding through a survey. The file may contain a record of keystrokes and function keys pressed, as well as mouse actions.
Auxiliary data
Data from an external source, such as census data, that are incorporated or linked in some way to the data collected by the study. Auxiliary data are sometimes used to supplement collected data, for creating weights, or in imputation techniques.
Behavior coding
Systematic coding of the interviewer-respondent interaction in order to identify problems and sometimes to estimate the frequency of behaviors that occur during the question-answer process.
Bias
The systematic difference over all conceptual trials between the expected value of the survey estimate of a population parameter and the true value of that parameter in the target population.
Bid
A complete proposal (submitted in competition with other bidders) to execute specified jobs within prescribed time and budget, and not exceeding a proposed amount.
Cause and effect diagram
A fishbone-structured diagram for a process, used as a brainstorming tool to help understand or improve the process. The main bone represents the process (e.g., interviewer training), and bones coming off of the main bone are pre-identified factors (e.g., training materials) that may affect the quality of the process. From there, potential causes (e.g., lack of resources and time) and effects (e.g., poor-quality materials) can be discussed, and solutions identified. Also known as a fishbone or Ishikawa diagram.
Certification
Objective assessment of performance. Based on pre-established criteria, the interviewer either meets the requirements and may proceed to conduct the study interview or does not meet the requirements and may either be permitted to try again or be dismissed from the study. Certification outcome should be documented and filed at the data collection agency.
Cluster
A grouping of units on the sampling frame that is similar on one or more variables, typically geographic. For example, an interviewer for an in-person study will typically visit only households in a certain geographic area. The geographic area is the cluster.
Codebook
A document that provides question-level metadata that is matched to variables in a dataset. Metadata include the elements of a data dictionary, as well as basic study documentation, question text, universe statements (the characteristics of respondents who were asked the question), the number of respondents who answered the question, and response frequencies or statistics.
Coding
Translating nonnumeric data into numeric fields.
Coefficient of Variation (CV)
The ratio of the standard deviation of a survey estimate and its mean value. Its purpose is to cancel the unit of measurement and create a relative measure of variation that facilitates comparisons across different statistics.
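In symbols, for a survey estimate \hat{\theta}: \mathrm{CV}(\hat{\theta}) = \sqrt{\mathrm{Var}(\hat{\theta})} \,/\, \mathbb{E}[\hat{\theta}].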
Cognitive interviews
A pretesting method designed to uncover problems in survey items by having respondents think aloud while answering a question, either concurrently or retrospectively.
Comparability
The extent to which differences between survey statistics from different countries, regions, cultures, domains, time periods, etc., can be attributed to differences in population true values.
Confidentiality
Securing the identity of, as well as any information provided by, the respondent, in order to ensure that public identification of an individual participating in the study and/or his or her individual responses does not occur.
Consent (informed consent)
A process by which a sample member voluntarily confirms his or her willingness to participate in a study, after having been informed of all aspects of the study that are relevant to the decision to participate. Informed consent can be obtained with a written consent form or orally (or implied if the respondent returns a mail survey), depending on the study protocol. In some cases, consent must be given by someone other than the respondent (e.g., an adult when interviewing children).
Construct validity
The degree to which a survey question adequately measures an intended hypothetical construct. This may be assessed by checking the correlation between observations from that question with observations from other questions expected on theoretical grounds to be related.
Contact rate
The proportion of all elements in which some responsible member of the housing unit was reached by the survey.
Context effects
The effect of question context, such as the order or layout of questions, on survey responses.
Contract
A legally binding exchange of promises or an agreement creating and defining the obligations between two or more parties (for example, a survey organization and the coordinating center), written and enforceable by law.
Conversion process
Data processing procedures used to create harmonized variables from original input variables.
Cooperation rate
The proportion of all elements interviewed of all eligible units ever contacted.
Coordinating center
A research center that facilitates and organizes cross-cultural or multi-site research activities.
Copyeditor
The person who reviews a text and marks up any changes required to correct style, punctuation, spelling, and grammar errors. In many instances, the copyeditor may also make the corrections needed.
Coverage
The proportion of the target population that is accounted for on the sampling frame.
Coverage error
Survey error (variance and bias) that is introduced when there is not a one-to-one correspondence between frame and target population units. Some units in the target population are not included on the sampling frame (undercoverage), some units on the sampling frame are not members of the target population (out-of-scope), more than one unit on the sampling frame corresponds to the same target population unit (overcoverage), and one sampling frame unit corresponds to more than one target population unit.
Coversheet
Electronic or printed materials associated with each element that identify information about the element, e.g., the sample address, the unique identification number associated with an element, and the interviewer to whom an element is assigned. The coversheet often also contains an introduction to the study, instructions on how to screen sample members and randomly select the respondent, and space to record the date, time, outcome, and notes for every contact attempt.
Crosswalk
A description, usually presented in tabular format, of all the relationships between variables in individual data files and their counterparts in the harmonized file.
Data dictionary
A document linking the survey instrument (questionnaire) with the dataset; more abstractly, question- or variable-level metadata, including question identifiers (variable names and labels), response category identifiers (value labels), and data types (e.g., F2.0, specifying that the response is a two-digit integer with zero decimal places).
Disclosure analysis and avoidance
The process of identifying and protecting the confidentiality of data. It involves limiting the amount of detailed information disseminated and/or masking data via noise addition, data swapping, generation of simulated or synthetic data, etc. For any proposed release of tabulations or microdata, the level of risk of disclosure should be evaluated.
Disposition code
A code that indicates the result of a specific contact attempt or the outcome assigned to a sample element at the end of data collection (e.g., noncontact, refusal, ineligible, complete interview).
Editing
Altering data recorded by the interviewer or respondent to improve the quality of the data (e.g., checking consistency, correcting mistakes, following up on suspicious values, deleting duplicates, etc.). Sometimes this term also includes coding and imputation, the placement of a number into a field where data were missing.
Eligibility Rate
The number of eligible sample elements divided by the total number of elements on the sampling frame.
Ethics review committee or human subjects review board
A group or committee that is given the responsibility by an institution to review that institution's research projects involving human subjects. The primary purpose of the review is to assure the protection of the safety, rights and welfare of the human subjects.
Fitness for intended use
The degree to which products conform to essential requirements and meet the needs of users for which they are intended. In literature on quality, this is also known as "fitness for use" and "fitness for purpose."
Fixed panel design
A longitudinal study which attempts to collect survey data on the same sample elements at intervals over a period of time. After the initial sample selection, no additions to the sample are made.
Flow chart
A method used to identify the steps or events in a process. It uses basic shapes for starting and ending the process, taking an action, making a decision, and producing data and documentation. These are connected by arrows indicating the flow of the process. A flow chart can help identify points at which to perform quality assurance activities and produce indicators of quality that can be used in quality control.
Focus group
Small group discussions under the guidance of a moderator, often used in qualitative research; focus groups can also be used to test survey questionnaires and survey protocols.
Hours Per Interview (HPI)
A measure of study efficiency, calculated as the total number of interviewer hours spent during production (including travel, reluctance handling, listing, completing an interview, and other administrative tasks) divided by the total number of interviews.
Imputation
A computation method that, using some protocol, assigns one or more replacement answers for each missing, incomplete, or implausible data item.
Item nonresponse, item missing data
The lack of information on individual data items for a sample element where other data items were successfully obtained.
Listing
A procedure used in area probability sample designs to create a complete list of all elements or clusters of elements within a specific set of geographic boundaries.
Longitudinal study
A study where elements are repeatedly measured over time.
Majority country
A country with low per capita income (the majority of countries).
Mean Square Error (MSE)
The total error of a survey estimate; specifically, the sum of the variance and the bias squared.
Measurement error
Survey error (variance and bias) due to the measurement process; that is, error introduced by the survey instrument, the interviewer, or the respondent.
Metadata
Information that describes data. The term encompasses a broad spectrum of information about the survey, from study title to sample design, details such as interviewer briefing notes, contextual data and/or information such as legal regulations, customs, and economic indicators. Note that the term 'data' is used here in a technical definition. Typically metadata are descriptive information and data are the numerical values described.
Microdata
Nonaggregated data that concern individual records for sampled units, such as households, respondents, organizations, administrators, schools, classrooms, students, etc. Microdata may come from auxiliary sources (e.g., census or geographical data) as well as surveys. They are contrasted with macrodata, such as variable means and frequencies, gained through the aggregation of microdata.
Mode
Method of data collection.
Noncontact
Sampling units that were potentially eligible but could not be reached.
Nonresponse
The failure to obtain measurement on sampled units or items. See unit nonresponse and item nonresponse.
Nonresponse bias
The systematic difference between the expected value (over all conceptual trials) of a statistic and the target population value due to differences between respondents and nonrespondents on that statistic of interest.
Nonresponse error
Survey error (variance and bias) that is introduced when not all sample members participate in the survey (unit nonresponse) or not all survey items are answered (item nonresponse) by a sample element.
Nonresponse followup
A supplemental survey of sampled survey nonrespondents. Nonresponse followup surveys are designed to assess whether respondent data are biased due to differences between survey respondents and nonrespondents.
Outcome rate
A rate calculated based on the study's defined final disposition codes that reflect the outcome of specific contact attempts before the unit was finalized. Examples include response rates (the number of complete interviews with reporting units divided by the number of eligible reporting units in the sample), cooperation rates (the proportion of all units interviewed of all eligible units ever contacted), refusal rates (the proportion of all potentially eligible units in which a housing unit or respondent refuses to do an interview or breaks off an interview), and contact rates (the proportion of all units in which some responsible member was reached by the survey).
Outlier
An atypical observation which does not appear to follow the distribution of the rest of a dataset.
Paradata
Empirical measurements about the process of creating survey data themselves. They consist of visual observations of interviewers, administrative records about the data collection process, computer-generated measures about the process of the data collection, external supplementary data about sample units, and observations of respondents themselves about the data collection. Examples include timestamps, keystrokes, and interviewer observations about individual contact attempts.
Pareto chart
A bar chart that displays the most frequent types of errors in a process, by error type in descending order of frequency; for example, the five or six most frequent types of help desk calls from interviewers using computer-assisted interviewing.
Performance measurement analysis
A technique used in quality control to determine whether quality assurance procedures have worked. For example, analysis of routine measures of interviewer or coder performance.
Pilot study
A quantitative miniature version of the survey data collection process that involves all procedures and materials that will be used during data collection. A pilot study is also known as a "dress rehearsal" before the actual data collection begins.
Pledge of confidentiality
An agreement (typically in written or electronic form) to maintain the confidentiality of survey data that is signed by persons who have any form of access to confidential information.
Poststratification
A statistical adjustment that assures that sample estimates of totals or percentages (e.g., the estimate of the percentage of men living in Mexico based on the sample) equal population totals or percentages (e.g., the estimate of the percentage of men living in Mexico based on Census data). The adjustment cells for poststratification are formed in a similar way as strata in sample selection, but variables can be used that were not on the original sampling frame at the time of selection.
Post-survey adjustments
Adjustments to reduce the impact of error on estimates.
Precision
A measure of how close an estimator is expected to be to the true value of a parameter, which is usually expressed in terms of imprecision and related to the variance of the estimator. Less precision is reflected by a larger variance.
Pretesting
A collection of techniques and activities that allow researchers to evaluate survey questions, questionnaires and/or other survey procedures before data collection begins.
Primary Sampling Unit (PSU)
A cluster of elements sampled at the first stage of selection.
Probability sampling
A sampling method where each element on the sampling frame has a known, non-zero chance of selection.
Process analysis
The use of tools such as flowcharts to analyze processes, e.g., respondent tracking, computerized instrument programming and testing, coding, data entry, etc. The aim is to identify indicators or measures of the quality of products. Process analysis also is used to identify improvements that can be made to processes.
Process improvement plan
A plan for improving a process, as a result of process analysis. A process improvement plan may result from development of a quality management plan, or as a result of quality assurance or quality control.
Process indicator
An indicator that refers to aspects of data collection (e.g., HPI, refusal rates, etc.).
Processing error
Survey error (variance and bias) that arises during the steps between collecting information from the respondent and having the value used in estimation. Processing errors include errors in all post-collection operations, as well as in the printing of questionnaires. Most processing errors occur in data for individual units, although errors can also be introduced in the implementation of systems and estimates. In survey data, processing errors may include errors of transcription, errors of coding, errors of data entry, errors in the assignment of weights, errors in disclosure avoidance, and errors of arithmetic in tabulation.
Proxy interview
An interview with someone (e.g., parent, spouse) other than the person about whom information is being sought. There should be a set of rules specific to each survey that define who can serve as a proxy respondent.
Quality
The degree to which product characteristics conform to requirements as agreed upon by producers and clients.
Quality assurance
A planned system of procedures, performance checks, quality audits, and corrective actions to ensure that the products produced throughout the survey lifecycle are of the highest achievable quality. Quality assurance planning involves identification of key indicators of quality used in quality assurance.
Quality audit
The process of the systematic examination of the quality system of an organization by an internal or external quality auditor or team. It assesses whether the quality management plan has clearly outlined quality assurance, quality control, corrective actions to be taken, etc., and whether they have been effectively carried out.
Quality control
A planned system of process monitoring, verification, and analysis of indicators of quality, and updates to quality assurance procedures, to ensure that quality assurance works.
Quality management plan
A document that describes the quality system an organization will use, including quality assurance and quality control techniques and procedures, and requirements for documenting the results of those procedures, corrective actions taken, and process improvements made.
Quality profile
A comprehensive report prepared by producers of survey data that provides information data users need to assess the quality of the data.
Recontact
To have someone other than the interviewer (often a supervisor) attempt to speak with the sample member after a screener or interview is conducted, in order to verify that it was completed according to the specified protocol.
Refusal rate
The proportion of all potentially eligible sampling units in which the respondent refuses to do an interview or breaks off an interview.
Reinterview
The process or action of interviewing the same respondent twice to assess reliability (simple response variance).
Reliability
The consistency of a measurement, or the degree to which an instrument measures the same way each time it is used under the same condition with the same subjects.
Replicates
Systematic probability subsamples of the full sample.
Response rate
The number of complete interviews with reporting units divided by the number of eligible reporting units in the sample.
Restricted-use data file
A file that includes information that can be related to specific individuals and is confidential and/or protected by law. Restricted-use data files are not required to include variables that have undergone coarsening disclosure risk edits. These files are available to researchers under controlled conditions.
Reviewer
Person who participates in the review of translations in order to produce a final version (see Appendix A of Translation).
Rotating panel design
A study where elements are repeatedly measured a set number of times, then replaced by new randomly chosen elements. Typically, the newly-chosen elements are also measured repeatedly for the appropriate number of times.
Sample design
Information on the target and final sample sizes, strata definitions and the sample selection methodology.
Sample element
A selected unit of the target population that may be eligible or ineligible.
Sample management system
A computerized and/or paper-based system used to assign and monitor sample units and record documentation for sample records (e.g., time and outcome of each contact attempt).
Sampling error
Survey error (variance and bias) due to observing a sample of the population rather than the entire population.
Sampling frame
A list or group of materials used to identify all elements (e.g., persons, households, establishments) of a survey population from which the sample will be selected. This list or group of materials can include maps of areas in which the elements can be found, lists of members of a professional association, and registries of addresses or persons.
Sampling units
Elements or clusters of elements considered for selection in some stage of sampling. For a sample with only one stage of selection, the sampling units are the same as the elements. In multi-stage samples (e.g., enumeration areas, then households within selected enumeration areas, and finally adults within selected households), different sampling units exist, while only the last is an element. The term primary sampling units (PSUs) refers to the sampling units chosen in the first stage of selection. The term secondary sampling units (SSUs) refers to sampling units within the PSUs that are chosen in the second stage of selection.
Secondary Sampling Unit (SSU)
A cluster of elements sampled at the second stage of selection.
Source questionnaire
The questionnaire taken as the text for translation.
Statistical process control chart
A statistical chart that compares expected process performance (e.g., number of hours worked by interviewers in a week) against actual performance. For example, interviewers who perform outside upper and lower boundaries on this measure are flagged; if greater variation from expected performance for some interviewers in a certain location can be explained (e.g., a hurricane or a snow storm causing lower than expected hours worked), the process is in control; if not, corrective actions are taken.
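As a minimal sketch of this flagging logic in Python (the interviewer IDs, hours, baseline data, and 3-sigma limits are illustrative assumptions, not prescriptions from these guidelines):

    import statistics

    # Hypothetical baseline: weekly hours per interviewer observed while
    # the process was known to be stable.
    baseline = [38, 41, 37, 40, 39, 42, 36, 40]
    center = statistics.mean(baseline)      # expected process performance
    sigma = statistics.stdev(baseline)      # baseline variation
    lower, upper = center - 3 * sigma, center + 3 * sigma  # control limits

    # Current week's hours; interviewers outside the limits are flagged.
    current = {"int01": 38, "int02": 12, "int03": 41, "int04": 65}
    flagged = {who: h for who, h in current.items()
               if not lower <= h <= upper}
    print(f"Limits: [{lower:.1f}, {upper:.1f}]; flagged: {flagged}")

Flagging is only the first step; as the definition notes, explained variation (e.g., a snow storm) leaves the process in control, while unexplained variation triggers corrective action.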
Strata (stratum)
Mutually exclusive, homogenous groupings of population elements or clusters of elements that comprise all of the elements on the sampling frame. The groupings are formed prior to selection of the sample.
Stratification
A sampling procedure that divides the sampling frame into mutually exclusive and exhaustive groups (or strata) and places each element on the frame into one of the groups. Independent selections are then made from each stratum, one by one, to ensure representation of each subgroup on the frame in the sample.
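A minimal Python sketch of this procedure (the frame, stratification variable, and per-stratum allocation are hypothetical):

    import random

    # Hypothetical frame: six elements with a stratification variable.
    frame = [
        {"id": 1, "region": "north"}, {"id": 2, "region": "north"},
        {"id": 3, "region": "north"}, {"id": 4, "region": "south"},
        {"id": 5, "region": "south"}, {"id": 6, "region": "south"},
    ]

    # Partition the frame into mutually exclusive, exhaustive strata.
    strata = {}
    for element in frame:
        strata.setdefault(element["region"], []).append(element)

    # Independent simple random selection within each stratum ensures
    # every stratum is represented in the sample.
    allocation = {"north": 2, "south": 1}   # assumed per-stratum sizes
    sample = [unit for name, members in strata.items()
              for unit in random.sample(members, allocation[name])]

Selections in one stratum do not affect selections in any other stratum, which is what guarantees representation of each subgroup in the sample.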
Substitution
A technique where each nonresponding sample element from the initial sample is replaced by another element of the target population, typically one not selected in the initial sample. Substitution maintains the achieved sample size but does not correct for, and most likely adds to, nonresponse bias.
Survey lifecycle
The lifecycle of a survey research study, from design to data dissemination.
Survey population
The actual population from which the survey data are collected, given the restrictions from data collection operations.
Target population
The finite population for which the survey sponsor wants to make inferences using the sample statistics.
Task
An activity or group of related activities that is part of a survey process, likely defined within a structured plan, and attempted within a specified period of time.
Tender
A formal offer to carry out specified work within a prescribed time and budget.
Timestamps
Time and date data recorded with survey data, indicating the dates and times of responses at the question and questionnaire-section levels. Timestamps also appear in audit trails, recording when questions are asked, responses are recorded, and so on.
Total Survey Error (TSE)
Total survey error provides a conceptual framework for evaluating survey quality. It defines quality as the estimation and reduction of the mean square error (MSE) of statistics of interest.
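In standard notation, and consistent with the verbal definition above, the MSE of an estimator \hat{\theta} of a population quantity \theta is the sum of its variance and squared bias:

    \mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \bigl[\mathrm{Bias}(\hat{\theta})\bigr]^2,
    \qquad \mathrm{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta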
Translator
The person who translates text from one language to another (e.g., French to Russian). In survey research, translators might be asked to fulfill other tasks such as reviewing and copyediting.
Unique Identification Number
A unique number that identifies an element (e.g., a serial number). The number remains attached to the element throughout the survey lifecycle and is published with the public dataset. It contains no information about the respondents or their addresses.
Unit nonresponse
The failure to obtain any (or virtually any) survey information from an eligible sampling unit because the unit did not participate in the survey.
Universe statement
A description of the subgroup of respondents to which the survey item applies (e.g., "Female, ≥ 45, Now Working").
Usability testing
Evaluation of a computer-assisted survey instrument to assess the effect of design on interviewer or respondent performance. Methods of evaluation include review by usability experts and observation of users working with the computer and survey instrument.
Variance
A measure of how much a statistic varies around its mean over all conceptual trials.
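In standard notation, for an estimator \hat{\theta}:

    \mathrm{Var}(\hat{\theta}) = E\bigl[(\hat{\theta} - E[\hat{\theta}])^2\bigr]

i.e., the expected squared deviation of the statistic from its own mean over repeated samples.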
Weighting
A post-survey adjustment that may account for differential coverage, sampling, and/or nonresponse processes.
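As an illustrative sketch only (the notation here is assumed, not taken from these guidelines), a common construction multiplies a base weight by a nonresponse adjustment within a weighting class:

    w_i = \frac{1}{\pi_i} \cdot \frac{1}{\hat{r}_{c(i)}}

where \pi_i is element i's selection probability and \hat{r}_{c(i)} is the estimated response rate in the class containing i; a further poststratification factor may be applied to adjust for coverage.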
Working group
Experts working together to oversee the implementation of a particular aspect of the survey lifecycle (e.g., sampling, questionnaire design, training, quality control).

References

[1] Aitken, A., Hörngren, J., Jones, N., Lewis, D., & Zilhão, M. J. (2003). Handbook on improving quality by analysis of process variables. Luxembourg: Eurostat. Retrieved March 27, 2010, from http://epp.eurostat.ec.europa.eu/portal/page/portal/quality/documents/HANDBOOK%20ON%20IMPROVING%20QUALITY.pdf

[2] Anderson, R., Kasper, J., Frankel, M., & Associates (Eds.). (1979). Total survey error: Applications to improve health surveys. San Francisco: Jossey-Bass.

[3] Australian Bureau of Statistics. (2009). Draft report on quality assurance frameworks. Retrieved May 7, 2010, from http://unstats.un.org/unsd/dnss/qaf/qafreport.htm

[4] Biemer, P. P., & Lyberg, L. E. (2003). Introduction to survey quality. Hoboken, NJ: John Wiley & Sons.

[5] Brackstone, G. (1999). Managing data quality in a statistical agency. Survey Methodology, 25(2), 1-23.

[6] Cochran, W. G. (1977). Sampling techniques. New York, NY: John Wiley & Sons.

[7] Couper, M. P. (1998). Measuring survey quality in a CASIC environment. Paper presented at the Proceedings of the Survey Research Methods Section, American Statistical Association. Retrieved March 27, 2010, from http://www.amstat.org/sections/srms/proceedings/papers/1998_006.pdf

[8] Couper, M. P., & Lyberg, L. E. (2005). The use of paradata in survey research. Paper presented at the Proceedings of the 55th Session of the International Statistical Institute.

[9] Defeo, J. A., & Juran, J. M. (2010). Juran's quality handbook: The complete guide to performance excellence (6th ed.). New York, NY: McGraw-Hill.

[10] Eurostat. (2003). Methodological documents — Definition of quality in statistics (Report of the Working Group Assessment of Quality in Statistics, item 4.2). Luxembourg: Eurostat. Retrieved March 27, 2010, from http://epp.eurostat.ec.europa.eu/portal/page/portal/quality/documents/ess%20quality%20definition.pdf

[11] Eurostat. (2003). Methodological documents — Standard report (Report of the Working Group Assessment of Quality in Statistics, item 4.2B). Luxembourg: Eurostat. Retrieved March 27, 2010, from http://epp.eurostat.ec.europa.eu/portal/page/portal/quality/documents/STANDARD_QUALITY_REPORT_0.pdf

[12] Eurostat. (2005). Standard quality indicators (Report to Metadata Working Group 2005, Doc. ESTAT/02/Quality/2005/9/Quality Indicators). Luxembourg: Eurostat. Retrieved from http://circa.europa.eu/Public/irc/dsis/metadata/library?l=/metadata_working_1/metadata_working_1/qualityindicatorspdf/_EN_1.0_&a=d

[13] Groves, R. M., & Heeringa, S. G. (2006). Responsive design for household surveys: Tools for actively controlling survey errors and costs. Journal of the Royal Statistical Society: Series A (Statistics in Society), 169, 439-457. Retrieved from http://www.jstor.org/stable/3877429?seq=1

[14] Groves, R. M. (1989). Survey errors and survey costs. Hoboken, NJ: John Wiley & Sons.

[15] Groves, R. M., Fowler, F. J. Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.

[16] International Monetary Fund (IMF). (2003). Data quality assessment framework. Retrieved May 7, 2010, from http://dsbb.imf.org/images/pdfs/dqrs_Genframework.pdf

[17] Lyberg, L. E., & Stukel, D. M. (2010). Quality assurance and quality control in cross-national comparative surveys. In J. A. Harkness, M. Braun, B. Edwards, T. P. Johnson, L. E. Lyberg, P. P. Mohler, B-E. Pennell, & T. Smith (Eds.), Survey methods in multicultural, multinational, and multiregional contexts. New York, NY: John Wiley & Sons.

[18] Lyberg, L. E., & Biemer, P. P. (2008). Quality assurance and quality control in surveys. In E. D. de Leeuw, J. J. Hox, & D. A. Dillman (Eds.), International handbook of survey methodology. New York/London: Lawrence Erlbaum Associates/Taylor & Francis Group.

[19] Lyberg, L. E., Biemer, P. P., Collins, M., de Leeuw, E. D., Dippo, C., Schwarz, N. et al. (Eds.). (1997). Survey measurement and process quality. New York, NY: John Wiley & Sons.

[20] Lyberg, L. E., Bergdahl, M., Blanc, M., Booleman, M., Grünewald, W., Haworth, M. et al. (2001). Summary report from the Leadership Group (LEG) on quality. Retrieved March 1, 2010, from http://siqual.istat.it/SIQual/files/LEGsummary.pdf?cod=8412&tipo=2

[21] Lynn, P. (2003). Developing quality standards for cross-national survey research: Five approaches. International Journal of Social Research Methodology, 6(4), 323-336. Retrieved October 8, 2009, from http://www.soc.uoc.gr/socmedia/papageo/developing%20quality%20standards%20for%20cross-national%20surveys.pdf

[22] Lynn, P. (Ed.) (2006). Quality profile: British Household Panel Survey (Version 2.0). Colchester: Institute for Social and Economic Research, University of Essex. Retrieved March 27, 2010, from http://www.iser.essex.ac.uk/files/bhps/quality-profiles/BHPS-QP-01-03-06-v2.pdf

[23] Morganstein, D., & Marker, D. A. (1997). Continuous quality improvement in statistical agencies. In L. E. Lyberg, P. P. Biemer, M. Collins, E. D. de Leeuw, C. Dippo, N. Schwarz, & D. Trewin (Eds.), Survey measurement and process quality. New York, NY: John Wiley & Sons.

[24] Mudryk, W., Burgess, M. J., & Xiao, P. (1996). Quality control of CATI operations in Statistics Canada. Proceedings of the Section on Survey Research Methods (pp. 150-159). Alexandria, VA: American Statistical Association. Retrieved from http://www.amstat.org/sections/srms/Proceedings/papers/1996_020.pdf

[25] Pierchala, C., & Surti, J. (1999). Control charts as a tool in data quality improvement (Technical Report No. DOT HS 809 005). Washington, DC: National Highway Traffic Safety Administration (NHTSA). Retrieved March 27, 2010, from http://www-nrd.nhtsa.dot.gov/Pubs/809005.PDF

[26] Project Management Institute. (2004). A guide to the project management body of knowledge (PMBOK® guide) (3rd ed.). Newtown Square, PA: Project Management Institute, Inc.

[27] Statistics Canada. (2002). Statistics Canada quality assurance framework. Ottawa: Statistics Canada. Retrieved May 7, 2010, from http://www.statcan.gc.ca/pub/12-586-x/12-586-x2002001-eng.pdf

[28] United States Bureau of the Census. (2006). Definition of data quality, (V1.3). Washington, DC: U.S. Bureau of the Census. Retrieved May 7, 2010, from http://www.census.gov/quality/P01-0_v1.3_Definition_of_Quality.pdf

[29] United States Bureau of the Census. (1998). SIPP Quality Profile (3rd ed.). Survey of Income and Program Participation (Working Paper No. 230). Washington, DC: U.S. Bureau of the Census. Retrieved March 27, 2010, from http://www.census.gov/sipp/workpapr/wp230.pdf


© 2008 The authors of the Guidelines hold the copyright. Please contact us if you wish to publish any of this material in any form.