
I. Study, Organizational, and Operational Structure

Webpage last modified: 2010-Sept-29

Rachel A. Orlowski and Christopher Antoun


The following guidelines outline a number of study, organizational, and operational considerations which arise when structuring a cross-cultural survey or any survey involving multiple countries, regions, or languages. Several factors will influence how the overall study is designed and later implemented, including the source(s) and flow of funding, the availability of human and technical resources, the best way of contacting and collecting data from respondents, and the research infrastructure. All of these will vary from country to country and culture to culture. Yet, before much time is spent determining the study structure, it is critical to clearly define a study purpose because it drives all subsequent decisions, especially if conflicts between cross-cultural and local interests arise.

Cross-cultural surveys are organized in many different ways, and each approach has its advantages and disadvantages. These guidelines predominantly address a structure with a coordinating center that designs the overall study and assumes central organizational responsibility for the contracted survey organizations in each country where the study will be carried out. This type of organizing structure is often used in large-scale, cross-cultural surveys. Although not described here, there are situations where the coordinating center is also responsible for data collection in some or all countries. A coordinating center should include people from different countries, institutions, and affiliations. Given this focus, this chapter's primary audience is members of a coordinating center.

With this organizational structure, the coordinating center will specify the operational structure of the survey for each country to follow. It should determine what elements will be standardized across countries and what elements will be localized; there is a balance between standardization of implementation and adaptation to the cultural context. The coordinating center should inform the survey organizations of the quality standards necessary to execute the study.

Figure 1 shows study, organizational, and operational structure within the survey production process lifecycle (survey lifecycle) as represented in these guidelines. The lifecycle begins with establishing study structure (Study, Organizational, and Operational Structure) and ends with data dissemination (Data Dissemination). In some study designs, the lifecycle may be completely or partially repeated. There might also be iteration within a production process. The order in which survey production processes are shown in the lifecycle does not represent a strict order to their actual implementation, and some processes may be simultaneous and interlocked (e.g., sample design and contractual work). Quality and ethical considerations are relevant to all processes throughout the survey production lifecycle. Survey quality can be assessed in terms of fitness for intended use (also known as fitness for purpose), total survey error, and the monitoring of survey production process quality, which may be affected by survey infrastructure, costs, respondent and interviewer burden, and study design specifications (see Survey Quality).

Figure 1. The Survey Lifecycle


Studies involving multiple cultures, countries, regions, or languages may benefit from the use of mixed methods. A mixed methods study "involves the collection or analysis of both quantitative and/or qualitative data in a single study in which the data are collected concurrently or sequentially, are given a priority, and involve an integration of the data at one or more stages in the process of research" [5]. The different toolkits of qualitative and quantitative data collection methods can be complementary for studies of cross-cultural similarities and differences in attitudes and behaviors, which often require different kinds of methods and evidence [21]. Van de Vijver and Chasiotis [21] also provide an in-depth discussion and a conceptual framework for mixed methods studies. Researchers wanting to undertake a mixed methods design or to incorporate mixed methods approaches at different stages of the survey lifecycle may include these considerations when designing the study. Examples and references for mixed methods approaches are provided in the Pretesting, Questionnaire Design, and Data Collection chapters.


Goal: To establish the study's overall structure and locus of control at all levels and across all aspects of the study's design and implementation, and to communicate this structure to each participating country's survey organization.

  1. Determine the study objectives and identify a study structure that addresses all of the tasks of the survey lifecycle.

    Before work is done to organize or operationalize a study, the empirical aims of the research should be understood by all involved. There should be a clear direction and purpose of the research. In order to move the study goals from ideas to a concrete design, a structure of survey tasks should be clearly defined by the coordinating center. This task framework should take into account the cross-cultural nature of the survey.

    Procedural steps
    • Clearly state the study's objectives, ensuring that central and local study goals do not conflict [2] [7]. When doing so, consider how each main component of the study design serves those objectives.
    • Identify tasks necessary to complete all phases of the survey lifecycle.
      • Provide an overview of the possible risks and quality implications associated with every survey task.
      • Consider each subsequent chapter of the Cross-Cultural Survey Guidelines as a potential task in the survey process (also see Appendix A for example considerations for the completion of each task).
    • Determine the nature and relationship of tasks. Some tasks tend to have a definite start and end (e.g., sample design), others are ongoing (e.g., ethical considerations), others are often iterative (e.g., questionnaire design), and yet others can overlap (e.g., data collection and data processing). The study structure requires concurrent and coordinated attention to the different tasks in the process [10].
    • Evaluate the feasibility of implementing the study given the populations, governments, and politics of the countries being studied, as well as the availability of funding.
    Lessons learned
    • A failure to communicate overall study goals may lead to local decisions that threaten comparability across countries. For example, a country may remove some locally less salient items from the questionnaire in order to reduce the respondent burden of the interview without realizing that those items are necessary to measure an important survey construct. Conversely, a country may insert items to the questionnaire in order to study a locally-relevant topic without realizing that those items may affect the quality of the data.
    • Despite knowing the ideal way of executing a study, the available resources often dictate how a study is structured and implemented. For example, multiple sources of funding are typically needed to provide enough support to coordinate a large-scale, cross-cultural survey; furthermore, each participating country may be funded separately. Funding sources may have requirements that complicate reporting structures within the study and conflict with the goals of the overall cross-cultural survey. The points at issue may relate to a wide variety of features, from data availability to the content of questionnaires. See Appendix B for examples of how existing cross-cultural survey programs have been funded as guidance to ways in which a study can be structured.
  2. Establish an organizational structure for the study at the supranational, national, and, as necessary, subnational levels and define the associated roles and responsibilities.

    The coordinating center should first determine its own organizational structure and then set the organizational standards for participating survey organizations. In order to manage a cross-cultural survey efficiently and effectively, roles and responsibilities must be clearly delineated and communicated throughout all levels. This can be accomplished when the central coordinating center works together with local expertise in each participating country.

    Procedural steps
    • Set up a central coordinating center responsible for managing the overall study and overseeing each country's implementation of the survey.
    • Identify the working language for the management of the study.
      • Specify the language proficiency in the chosen working language for all levels of the study management.
      • Do not expect a common understanding of technical terms.
    • Become familiar with the culture and political climate of all participating countries in order to establish the most appropriate organizational structure.
      • Review standard handbooks, maps, and ethnographic literature.
      • Review recent media material that depicts the pertinent issues of the participating countries.
      • Identify accommodations that may need to be made at the local level due to (for example) specific legal regulations, local government policies, internet availability, and types of transportation infrastructure (see Data Collection for other considerations).
      • Become knowledgeable about the international, national, and regional business models that affect participating countries [4] [9].
      • Communicate with experienced researchers who have collected data in the participating countries.
    • Assess the substantive, methodological, and contractual expertise needed both centrally and for all participating countries. Arrange for expert consultation, as needed.
    • Identify the impact that structural aspects of the planned organization have on control, responsibility, and communication at the central and local levels.
      • Determine reporting responsibilities to funding sources.
      • Determine the level of control of the country-specific and cross-national data throughout the study, including analysis and publication of the data (see Data Dissemination).
      • Clearly define subsidiary regulations to specify which decisions are to be made on which level (i.e., supranational, national, or subnational levels).
      • Balance central and local participation in deciding how to address general and country-specific adaptation in processes, methods, and substantive content.
    • Consider the creation of a number of task-specific working groups. These groups should be composed of qualified participants from the participating countries and the coordinating center.
      • Consider creating a working group for each of the tasks mentioned in Appendix A. The responsibilities listed in this appendix could be used as a checklist when making assignments.
      • Specify leadership, authority, and roles across all tasks and levels.
    • In addition to working groups, consider the creation of country lead teams to oversee the entire survey implementation in their respective country.
      • Define responsibilities for each lead team member.
      • Arrange regular meetings (or conference calls) involving the country lead teams and each working group to discuss study progress.
    • Develop a communication flowchart (i.e., who talks to whom about what) with one reference person and one back-up person at each point in the flowchart [22].
    Lessons learned
    • It is helpful to consider examples of organizational structures when planning a cross-cultural project. See Appendix C for three examples of organizational structures from well-established cross-cultural surveys.
  3. Clearly define operational specifications, including deliverables, for each task of the survey lifecycle.

    Operational specifications ensure that critical aspects of the survey process are defined and then can be controlled. They simultaneously identify required or expected quality standards and help ensure comparability across countries. The specifications should, therefore, be detailed (and measurable, when possible) with clearly delineated deliverables from the participating survey organizations at each task of the survey. In addition, each specification should be justified with a rationale. The specifications form the basis of the country-level tenders and subsequent contracts between the coordinating center and survey organizations (see Tenders, Bids, and Contracts).

    Procedural steps
    • Taking into account the overall study objectives and weighing the tradeoffs between cost and quality, detail the operational specifications and requirements [2].
      • See the subsequent chapters of the Cross-Cultural Survey Guidelines for procedural steps regarding recommendations for specifications for each task.
      • Before the study is initiated, determine which specifications are more important than others. Communicate to all participants which specifications are a top priority.
    • Determine when specifications need to be rigid and when it is possible or preferred to be more flexible.
    • Create a study timeline, production milestones, and deliverables with due dates [17].
      • If feasible, use a common project management tool between the coordinating center and each participating country.
      • Keeping in mind local considerations which may affect the study's due dates, require frequent reporting and interim products.
    • Require deliverables with unique identifiers for interviewers and sample elements.
    • Consider implementing a system of process checks, using paradata, to recognize when a survey organization is struggling to meet specifications; it is important to identify such problems as early in the process as possible.
      • Decide what actions to take, as necessary, to rectify delays or delinquencies in meeting specifications.
      • Determine the sanctions/penalties if a participant country continues to fail to meet the specifications.
    • Establish a backup plan to ensure the completion of a high quality survey in case countries are unable to meet operational specifications.
    Lessons learned
    • Adherence to specifications must be controlled. Otherwise, some survey organizations will deviate for a myriad of reasons. Structuring clearly defined specifications and a system of checks and balances will help maintain the highest standards throughout all tasks of a cross-cultural survey.
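    The paradata-based process checks suggested above can be sketched in a few lines of code. This is a minimal illustration, not part of the guidelines: the record layout, disposition labels, and milestone value are all hypothetical, and a real system would track these counts over time against the agreed production schedule.

```python
from collections import Counter

# Hypothetical paradata: one record per finalized sample element,
# keyed by country, with a final disposition code (see Glossary).
records = [
    {"country": "A", "disposition": "complete"},
    {"country": "A", "disposition": "refusal"},
    {"country": "A", "disposition": "noncontact"},
    {"country": "B", "disposition": "complete"},
    {"country": "B", "disposition": "complete"},
    {"country": "B", "disposition": "refusal"},
]

def completion_counts(records):
    """Count completed interviews per country from disposition codes."""
    counts = Counter()
    for row in records:
        if row["disposition"] == "complete":
            counts[row["country"]] += 1
    return counts

def flag_behind_schedule(records, milestone):
    """Return countries whose completed-interview count falls below the
    production milestone agreed in the operational specifications."""
    counts = completion_counts(records)
    countries = {row["country"] for row in records}
    return sorted(c for c in countries if counts[c] < milestone)

print(flag_behind_schedule(records, milestone=2))  # prints ['A']
```

    A check like this only signals that a country is falling behind; deciding what corrective action to take remains the coordinating center's responsibility.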
  4. Decide upon quality standards for the implementation of the study.

    The goal of quality standards is to achieve excellence for all components related to the data [13] [20]. Setting quality standards is critical to ensuring the same level of methodological rigor across countries [7]. Local adaptations will be necessary and appropriate for some aspects of implementation of the study, but any adaptation in the procedure or instrument should be thoroughly discussed, evaluated, and documented beforehand [15]. Frequent measurement and reporting to the coordinating center, along with sufficient methodological support, should allow for timely intervention if problems do arise.

    Procedural steps
    • Use a Plan-Do-Check-Act cycle (PDCA) by first determining the study's quality standards, then implementing them throughout the research process, while assessing quality indicators at each stage, and finally making appropriate changes to repeat the cycle of PDCA [2] [6].
      • Consider all sources of error in the survey lifecycle, and define quality indicators for key steps in each survey task. See Survey Quality for common sources of error and possible indicators.
    • Acquaint study organizers with important quality control literature that distinguishes between common-cause and special-cause variation and explains how to act on information about these different kinds of variation [14] [16] [19].
    • Form a team in each country that meets regularly to discuss the quality of the local survey. The team should have, or should be provided with, the methodological expertise needed. The team should document and report any concerns to the coordinating center [1] [2].
    • Identify tools that control and maintain operational process quality.
    • Implement a certification process or a signing-off procedure for each stage in order to check and document that the study design and specification standards are being followed.
      • Quickly address and remedy, if possible, any deviations from expectations that may occur [2].
      • Invoke sanctions, as specified in the contract, if the certification is not fulfilled.
    • Consider site visits to all countries to monitor or support the implementation of quality standards. Make sure these visits are specified in the contract with each survey organization.
    • Monitor costs in order to avoid overruns.
      • Create a cost-monitoring instrument and checklist.
      • Ensure sufficient funds are allocated for quality assessment and documentation activities.
      • Assess risk and determine contingencies for each survey task — weighing cost and errors.
    • If and where possible, incorporate methodological research. This will inform long-term quality improvement [12] [20].
    Lessons learned
    • Variations in country-level research infrastructure, research traditions, and methodological rigor need to be thoroughly investigated and understood when setting quality standards. Some countries will need more assistance in meeting some standards, and this should be taken into account early in the planning process.
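    As one concrete illustration of the "Check" step of a PDCA cycle, the sketch below computes two of the outcome rates defined in the Glossary and compares one of them with a study-level standard. The disposition labels and the 70% target are hypothetical, and this simplified version assumes every listed element is eligible; real studies must also handle cases of unknown eligibility.

```python
def outcome_rates(dispositions):
    """Compute simple response and contact rates from final disposition
    codes, assuming every listed sample element is eligible."""
    n = len(dispositions)
    complete = dispositions.count("complete")
    contacted = n - dispositions.count("noncontact")
    return {"response_rate": complete / n, "contact_rate": contacted / n}

def meets_standard(rates, minimum_response_rate):
    """PDCA 'Check': is the response-rate indicator at or above the
    quality standard set by the coordinating center?"""
    return rates["response_rate"] >= minimum_response_rate

dispositions = ["complete"] * 6 + ["refusal"] * 2 + ["noncontact"] * 2
rates = outcome_rates(dispositions)
print(rates)                        # response_rate 0.6, contact_rate 0.8
print(meets_standard(rates, 0.70))  # prints False: triggers the 'Act' step
```

    When an indicator falls below the standard, the "Act" step follows: diagnose the cause, intervene, and repeat the cycle.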
  5. Identify and specify the documentation that is required from all levels of participation in the study.

    Documentation of procedures ensures that the survey is transparent and allows for replication. Documentation should be detailed and occur throughout the survey lifecycle. If documentation does not occur until the end of the survey or even the end of the survey task, details will likely be lost or forgotten. Therefore, it is important to determine documentation requirements before the study is initiated. The coordinating center should first establish its own documentation procedures and then set documentation procedures for participating survey organizations.

    Procedural steps
    • Determine documentation procedures for the coordinating center. Require that all decisions regarding the structure be documented, including the study objectives, roles and responsibilities, communication flowchart, and operational specifications.
    • Determine the documentation requirements and formats for survey organizations. When appropriate, standardize these requirements across all countries in order to be able to assess quality and comparability. Encourage survey organizations to create clear, concise, and user-friendly descriptions. In addition, these descriptions should be as transparent as possible, with sufficient detail, to ensure that they could, theoretically, be replicated.
      • Determine documentation requirements for all survey implementation procedures; examples include the sampling, translation, questionnaire design, pretesting, and data collection procedures covered in the subsequent chapters.
    • Determine documentation requirements for data collection outcomes, such as final disposition codes and outcome rates (see Tenders, Bids, and Contracts).
    • Decide when survey organizations should share documentation with the coordinating center.
    • Record any modifications made either centrally or locally to the study protocol, as well as document the impact of these modifications. Any changes countries make to their protocols and procedures must be carefully documented since these could explain potential differences in the data, either over the course of the study (within a country) or across variables (between countries).
    Lessons learned
    • Not all deviations that occur in a study can be remedied immediately, but they are helpful for planning future studies. Deviations should be documented to allow for a search for faulty or deficient operational process steps after the fact. This allows for the development of appropriate counteractions at the central and local levels for the next wave of surveys.

Appendix A

Survey tasks

When determining the study structure of a cross-cultural survey, it is important that all necessary survey tasks are identified. Below are examples of survey tasks that correspond with each chapter of the Cross-Cultural Survey Guidelines. This appendix provides example considerations for the completion of each task; please see the subsequent chapters for more detailed guidance. By creating a detailed list of survey tasks, the coordinating center can be assured that no aspect of the study structure has been overlooked and can then use this list to assign organizational responsibilities.

Appendix B

Funding sources

The source and flow of funding impact the structure of a cross-cultural survey. Below are examples of how five large-scale, cross-cultural survey programs have been funded. Please see the websites of these programs for further information.

  • The European Social Survey [23] investigates the interaction between Europe's changing institutions and the attitudes, beliefs, and behavior patterns of its diverse populations using face-to-face interviews in over 30 countries throughout four rounds. Funding for the central coordinating center has come from the European Commission's Framework Programs and the European Science Foundation. National scientific bodies have funded their own country's data collection and coordination.
  • The International Social Survey Programme (ISSP) [24] investigates current social science topics in each of 43 participating countries by collecting self-administered questionnaires. Each survey organization has funded all of its own costs. There are no central funds.
  • Latinobarómetro [25] investigates social development with face-to-face interviews conducted sporadically in 18 Latin American countries. Initial funding came from the European Commission. There have been several additional funding sources, including: international organizations (e.g., Inter-American Development Bank, United Nations Development Programme, World Bank), government agencies, and private sector sources.
  • The Survey of Health, Ageing, and Retirement in Europe [26] investigates an aging population (respondents aged 50 and over) in 11 countries across three waves (2004, 2006-2007, and 2008-2009). The European Union has funded the data collection under the European Commission and funded the analyses under Advanced Multidisciplinary Analysis of New Data on Ageing. The U.S. National Institute on Aging has provided additional funding; other national funding agencies provided support as well.
  • The World Mental Health Surveys [27] investigate mental disorders with face-to-face interviews in 28 countries since 2000. Funding for the data collection and analysis coordinating centers has come from the World Health Organization. Several additional funding sources have included the U.S. National Institute of Mental Health, European Commission, MacArthur Foundation, Robert Wood Johnson Foundation, World Health Organization, Pan American Health Organization, various pharmaceutical companies, and governments of the participating countries. Each participating country has had its own source of funding.

Appendix C

Organizational structures

Below are descriptions of the organizational structures that have been used on three large-scale, cross-cultural survey programs. These examples are only illustrative. Please visit the survey programs' websites for more information about their structure.

  • European Social Survey [23]
    • The Central Coordinating Team is responsible for overseeing the entire study. The Central Coordinating Team is in contact with the Funders, the Scientific Advisory Board, the Specialist Advisory Groups, and the National Coordinators/Survey Institutes.
    • The Scientific Advisory Board consists of representatives from each participating country, two representatives from the European Commission, and two representatives from the European Science Foundation.
    • The Specialist Advisory Groups have separate teams with expertise in question design, methods, sampling, and translation.
    • The National Coordinators/Survey Institutes have one director from each country and one national survey organization from each country. The survey organizations are chosen by their country's respective national academic funding body.
  • Survey of Health, Ageing, and Retirement in Europe [3] [26]
    • The Coordinating Center oversees the entire study and reconciles differences between Country Teams and Cross-national Working Groups. The Coordinating Center is led by the Co-ordinator. Members of the group are internationally-recognized experts in their fields. The Co-ordinator receives additional support from CentERdata, the Survey Research Center, the Centre for Survey Research and Methodology, and the National Centre for Social Research.
    • Country Teams and Cross-national Working Groups form a matrix organizational structure. Country Teams are led by Country Team Leaders; each team is responsible for carrying out the study in its respective country and selects one national survey organization to conduct the survey.
    • Cross-national Working Groups are led by Working Group Leaders. There is a working group for each topic covered in the questionnaire, and each working group is responsible for its topic's module. Additionally, there are working groups for methodological concerns. The Cross-national Working Groups are set up so that each country can have a topic specialist in each working group, but it is not always the case that each country has an expert in that field.
    • The Advisory Panels are available if guidance is needed from those with experience in a given area. There are Advisory Panels with representatives from Survey Methodology and Quality Control, as well as from the Health and Retirement Study, the English Longitudinal Survey on Ageing, and the respective Countries.
  • World Mental Health Surveys [18] [27]
    • The World Health Organization is invested in the objectives of this survey and works closely with two study-level Principal Investigators. These study-level researchers make many of the ultimate decisions for the entire study. The World Health Organization is in contact with the Data Collection Coordination Center and the Analysis Coordination Center.
    • The Data Collection Coordination Center is instrumental in writing and implementing the specifications for pre-production and production activities. The University of Michigan is the Data Collection Coordination Center and its tasks include such activities as selecting survey organizations, training interviewers, and providing assistance during data collection.
    • The Analysis Coordination Center makes decisions regarding post-production activities. Harvard University is the Analysis Coordination Center.
    • The Working Groups are analysis teams that focus on one particular aspect or analytic perspective of mental health. Each Working Group is led by a Chair. Examples of focal topics include the following: ADHD, drug dependence, gender, social class, suicide, and personality disorders. The Working Groups are in contact with the Analysis Coordination Center and the Principal Investigators from each country.
    • The Principal Investigators from each country oversee their respective country's survey.
    • The Data Collection Organizations are the survey organizations within each country that carry out the field operations.


Glossary

Adaptation
Changing existing materials (e.g., management plans, contracts, training manuals, questionnaires, etc.) by deliberately altering some content or design component to make the resulting materials more suitable for another socio-cultural context or a particular population.
Audit trail
An electronic file in which computer-assisted and Web survey software captures paradata about survey questions and computer user actions, including times spent on questions and in sections of a survey (timestamps) and interviewer or respondent actions while proceeding through a survey. The file may contain a record of keystrokes and function keys pressed, as well as mouse actions.
Bias
The systematic difference over all conceptual trials between the expected value of the survey estimate of a population parameter and the true value of that parameter in the target population.
Cluster
A grouping of units on the sampling frame that is similar on one or more variables, typically geographic. For example, an interviewer for an in person study will typically visit only households in a certain geographic area. The geographic area is the cluster.
Coding
Translating nonnumeric data into numeric fields.
Comparability
The extent to which differences between survey statistics from different countries, regions, cultures, domains, time periods, etc., can be attributable to differences in population true values.
Complex survey data (or designs)
Survey datasets (or designs) based on stratified single or multistage samples with survey weights designed to compensate for unequal probabilities of selection or nonresponse.
Consent (informed consent)
A process by which a sample member voluntarily confirms his or her willingness to participate in a study, after having been informed of all aspects of the study that are relevant to the decision to participate. Informed consent can be obtained with a written consent form or orally (or implied if the respondent returns a mail survey), depending on the study protocol. In some cases, consent must be given by someone other than the respondent (e.g., an adult when interviewing children).
Contact rate
The proportion of all elements in which some responsible member of the housing unit was reached by the survey.
Contract
A legally binding exchange of promises or an agreement creating and defining the obligations between two or more parties (for example, a survey organization and the coordinating center), written and enforceable by law.
Cooperation rate
The proportion of all elements interviewed of all eligible units ever contacted.
Coordinating center
A research center that facilitates and organizes cross-cultural or multi-site research activities.
Disposition code
A code that indicates the result of a specific contact attempt or the outcome assigned to a sample element at the end of data collection (e.g., noncontact, refusal, ineligible, complete interview).
Data editing
Altering data recorded by the interviewer or respondent to improve the quality of the data (e.g., checking consistency, correcting mistakes, following up on suspicious values, deleting duplicates, etc.). Sometimes this term also includes coding and imputation, the placement of a number into a field where data were missing.
Fitness for intended use/fitness for purpose
The degree to which products conform to essential requirements and meet the needs of users for which they are intended. In literature on quality, this is also known as "fitness for use" and "fitness for purpose."
Imputation
A computation method that, using some protocol, assigns one or more replacement answers for each missing, incomplete, or implausible data item.
Interface design
Aspects of computer-assisted survey design focused on the interviewer's or respondent's experience and interaction with the computer and instrument.
Item nonresponse, item missing data
The lack of information on individual data items for a sample element where other data items were successfully obtained.
Mean Square Error (MSE)
The total error of a survey estimate; specifically, the sum of the variance and the bias squared.
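In symbols, if θ̂ denotes the survey estimate of a population parameter θ: MSE(θ̂) = Var(θ̂) + [Bias(θ̂)]².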
Mode
Method of data collection.
Noncontact
Sampling units that were potentially eligible but could not be reached.
Nonresponse
The failure to obtain measurement on sampled units or items. See unit nonresponse and item nonresponse.
Outcome rate
A rate calculated from the study's defined final disposition codes once each sampled unit is finalized. Examples include response rates (the number of complete interviews with reporting units divided by the number of eligible reporting units in the sample), cooperation rates (the proportion of all eligible units ever contacted that were interviewed), refusal rates (the proportion of all potentially eligible units in which a housing unit or respondent refused to do an interview or broke off an interview), and contact rates (the proportion of all units in which some responsible member of the housing unit was reached by the survey).
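As an illustration, these rates can be computed from a set of final disposition codes. The sketch below uses hypothetical code labels and a toy sample, not any particular study's coding scheme.

```python
# Compute illustrative outcome rates from final disposition codes.
from collections import Counter

dispositions = ["complete", "refusal", "noncontact", "ineligible",
                "complete", "complete", "refusal", "noncontact"]
counts = Counter(dispositions)

# Ineligible units are excluded from all denominators.
eligible = counts["complete"] + counts["refusal"] + counts["noncontact"]
contacted = counts["complete"] + counts["refusal"]

response_rate = counts["complete"] / eligible      # completes / eligible units
cooperation_rate = counts["complete"] / contacted  # completes / units ever contacted
refusal_rate = counts["refusal"] / eligible        # refusals / eligible units
contact_rate = contacted / eligible                # contacted / eligible units
```

Real studies typically follow a published standard (e.g., AAPOR's Standard Definitions) that handles partial interviews and units of unknown eligibility, which this toy example omits.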
Cost overrun
The exceeding of costs estimated in a contract.
Paradata
Empirical measurements about the process of creating survey data themselves. They consist of visual observations of interviewers, administrative records about the data collection process, computer-generated measures about the process of the data collection, external supplementary data about sample units, and observations of respondents themselves about the data collection. Examples include timestamps, keystrokes, and interviewer observations about individual contact attempts.
Pilot study
A quantitative miniature version of the survey data collection process that involves all procedures and materials that will be used during data collection. A pilot study is also known as a "dress rehearsal" before the actual data collection begins.
Pretesting
A collection of techniques and activities that allow researchers to evaluate survey questions, questionnaires, and/or other survey procedures before data collection begins.
Primary Sampling Unit (PSU)
A cluster of elements sampled at the first stage of selection.
Quality
The degree to which product characteristics conform to requirements as agreed upon by producers and clients.
Quality assurance
A planned system of procedures, performance checks, quality audits, and corrective actions to ensure that the products produced throughout the survey lifecycle are of the highest achievable quality. Quality assurance planning involves identification of key indicators of quality used in quality assurance.
Quality audit
The process of the systematic examination of the quality system of an organization by an internal or external quality auditor or team. It assesses whether the quality management plan has clearly outlined quality assurance, quality control, corrective actions to be taken, etc., and whether they have been effectively carried out.
Quality control
A planned system of process monitoring, verification, and analysis of indicators of quality, and updates to quality assurance procedures, to ensure that quality assurance works.
Quality management plan
A document that describes the quality system an organization will use, including quality assurance and quality control techniques and procedures, and requirements for documenting the results of those procedures, corrective actions taken, and process improvements made.
Refusal rate
The proportion of all potentially eligible sampling units in which a respondent refuses to do an interview or breaks off an interview.
Response rate
The number of complete interviews with reporting units divided by the number of eligible reporting units in the sample.
Restricted-use data file
A file that includes information that can be related to specific individuals and is confidential and/or protected by law. Restricted-use data files are not required to include variables that have undergone coarsening disclosure risk edits. These files are available to researchers under controlled conditions.
Sample design
Information on the target and final sample sizes, strata definitions and the sample selection methodology.
Sample element
A selected unit of the target population that may be eligible or ineligible.
Sampling frame
A list or group of materials used to identify all elements (e.g., persons, households, establishments) of a survey population from which the sample will be selected. This list or group of materials can include maps of areas in which the elements can be found, lists of members of a professional association, and registries of addresses or persons.
Sampling units
Elements or clusters of elements considered for selection in some stage of sampling. For a sample with only one stage of selection, the sampling units are the same as the elements. In multi-stage samples (e.g., enumeration areas, then households within selected enumeration areas, and finally adults within selected households), different sampling units exist, while only the last is an element. The term primary sampling units (PSUs) refers to the sampling units chosen in the first stage of selection. The term secondary sampling units (SSUs) refers to sampling units within the PSUs that are chosen in the second stage of selection.
Secondary Sampling Unit (SSU)
A cluster of elements sampled at the second stage of selection.
Strata (stratum)
Mutually exclusive, homogenous groupings of population elements or clusters of elements that comprise all of the elements on the sampling frame. The groupings are formed prior to selection of the sample.
Stratification
A sampling procedure that divides the sampling frame into mutually exclusive and exhaustive groups (or strata) and places each element on the frame into one of the groups. Independent selections are then made from each stratum, one by one, to ensure representation of each subgroup on the frame in the sample.
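A minimal sketch of this procedure, using a hypothetical frame stratified by region and illustrative per-stratum sample sizes:

```python
# Stratified sampling: partition the frame into mutually exclusive strata,
# then draw an independent simple random sample from each stratum.
import random

# Hypothetical frame of 100 elements in two regional strata.
frame = [{"id": i, "region": "north" if i < 60 else "south"} for i in range(100)]

# Group frame elements by stratum (here, region).
strata = {}
for element in frame:
    strata.setdefault(element["region"], []).append(element)

# Independent selection within each stratum (5 elements per stratum).
rng = random.Random(42)  # fixed seed for reproducibility
sample = []
for stratum_name, elements in strata.items():
    sample.extend(rng.sample(elements, 5))
```

Because every element belongs to exactly one stratum and each stratum is sampled, every subgroup is guaranteed representation in the sample.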
Survey lifecycle
The lifecycle of a survey research study, from design to data dissemination.
Survey population
The actual population from which the survey data are collected, given the restrictions from data collection operations.
Survey weight
A statistical adjustment created to compensate for complex survey designs with features including, but not limited to, unequal likelihoods of selection, differences in response rates across key subgroups, and deviations from distributions on critical variables found in the target population from external sources, such as a national Census.
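For example, a base weight is commonly the inverse of a unit's selection probability, which can then be adjusted for differential nonresponse within weighting classes. The sketch below uses invented selection probabilities and class response rates purely for illustration.

```python
# Base weight = 1 / selection probability; nonresponse adjustment inflates
# weights within classes that responded at lower rates. All values hypothetical.
respondents = [
    {"prob_selection": 0.01, "cls": "urban"},
    {"prob_selection": 0.01, "cls": "urban"},
    {"prob_selection": 0.02, "cls": "rural"},
]
# Assumed response rates observed within each weighting class.
class_response_rate = {"urban": 0.5, "rural": 0.8}

for r in respondents:
    base_weight = 1.0 / r["prob_selection"]              # inverse selection probability
    nr_adjustment = 1.0 / class_response_rate[r["cls"]]  # inflate for nonresponse
    r["weight"] = base_weight * nr_adjustment
```

Further adjustments (e.g., post-stratification to Census distributions) follow the same logic of multiplying additional factors into the weight.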
Target population
The finite population for which the survey sponsor wants to make inferences using the sample statistics.
Task
An activity or group of related activities that is part of a survey process, likely defined within a structured plan, and attempted within a specified period of time.
Tender
A formal offer specifying jobs within prescribed time and budget.
Timestamp
Time and date data recorded with survey data, indicating the dates and times of responses at the question level and the questionnaire section level. Timestamps also appear in audit trails, recording the times questions were asked, responses recorded, and so on.
Total Survey Error (TSE)
Total survey error provides a conceptual framework for evaluating survey quality. It defines quality as the estimation and reduction of the mean square error (MSE) of statistics of interest.
Unit nonresponse
An eligible sampling unit that has little or no information because the unit did not participate in the survey.
Variance
A measure of how much a statistic varies around its mean over all conceptual trials.
Working group
Experts working together to oversee the implementation of a particular aspect of the survey lifecycle (e.g., sampling, questionnaire design, training, quality control, etc.).


[1] Aitken, A., Hörngren, J., Jones, N., Lewis, D., & Zilhão, M. J. (2003). Handbook on improving quality by analysis of process variables. Luxembourg: Eurostat. Retrieved March 27, 2010, from

[2] Biemer, P., & Lyberg, L. (2003). Introduction to survey quality. Hoboken, NJ: John Wiley & Sons.

[3] Börsch-Supan, A., Jürges, H., & Lipps, O. (2003). SHARE: Building a panel survey on health, aging and retirement in Europe. Mannheim, Germany: Mannheim Research Institute for the Economics of Aging. Retrieved March 26, 2010, from nopage_pubs/k3pzhnwk1t1zjymk_dp32.pdf

[4] Cleland, D.I. & Gareis, R. (Eds.). (2006). Global project management handbook (2nd ed.). New York, NY.: McGraw-Hill.

[5] Creswell, J.W., Plano Clark, V.L., Gutmann, M.L. & Hanson, W.E. (2003). Advanced mixed methods research designs. In A. Tashakkori and C. Teddlie (Eds), Handbook on Mixed Methods in the Behavioral and Social Sciences. (pp. 209-240). Thousand Oaks, CA: Sage Publications.

[6] Deming, E. (1986). Out of the crisis. Cambridge, MA: Cambridge University Press.

[7] Federal Committee on Statistical Methodology. (1983). Statistical policy working paper 9: Contracting for surveys. Washington, DC: Office of Management and Budget. From

[8] Fitzgerald, R., & Jowell, R. Measurement equivalence in comparative surveys: The European Social Survey (ESS) - from design to implementation and beyond. Unpublished manuscript.

[9] Griffin, R.W. & Moorhead, G. (2009). Organizational behavior: Managing people and organizations (8th ed.). Cincinnati, OH: South-Western Educational Publishing.

[10] Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.

[11] International Organization for Standardization. (2005). Market, opinion and social research-terms, definitions and service requirements. (ISO 20252). Geneva. Retrieved April 30, 2010, from

[12] Jowell, R. (1998). How comparative is comparative research? American Behavioral Scientist, 42(2), 168-177. Retrieved March 26, 2010, from

[13] Juran, J.M. & De Feo, J.A. (2010). Juran's quality handbook: The complete guide to performance excellence (6th ed.). New York: McGraw-Hill.

[14] Lyberg, L. E., & Biemer, P. P. (2008). Quality assurance and quality control in surveys. In E. D. de Leeuw, J. J. Hox, & D. A. Dillman (Eds.), International handbook of survey methodology. New York/London: Lawrence Erlbaum Associates/Taylor & Francis Group.

[15] Mohler, P. Ph., Pennell, B-E., & Hubbard, F. (2008). Survey documentation: Toward professional knowledge management in sample surveys. In E. D. de Leeuw, J. J. Hox, & D. A. Dillman (Eds.), International handbook of survey methodology (pp. 403-420). New York/London: Lawrence Erlbaum Associates/Taylor & Francis Group.

[16] Montgomery, D.C. (2005). Introduction to statistical quality control (5th ed.). Hoboken, NJ: John Wiley & Sons.

[17] Office of Management and Budget. (2006). Standards and guidelines for statistical surveys. Washington, DC: Office of Information and Regulatory Affairs, OMB. Retrieved January 10, 2010, from

[18] Pennell, B-E., Mneimneh, Z., Bowers, A., Chardoul, S., Wells, J. E., Viana, M. C., et al. (2009). Implementation of the World Mental Health survey initiative. In R. C. Kessler & T. B. Üstün (Eds.). Volume 1: Patterns of mental illness in the WMH surveys. Cambridge, MA: Cambridge University Press.

[19] Ryan, T.P. (2000). Statistical methods for quality improvement (2nd ed.). Hoboken, NJ: John Wiley & Sons.

[20] United Nations. (2005). Household surveys in developing and transition countries. New York: United Nations, Department of Economic and Social Affairs. Retrieved March 26, 2010, from

[21] van de Vijver, F. J. R. & Chasiotis, A. (2010). Making methods meet: Mixed design in cross-national research. In J. Harkness, B. Edwards, M. Braun, T. Johnson, L. E. Lyberg, P. Mohler, B-E. Pennell, & T. Smith. (Eds.), Survey methods in multicultural, multinational, and multiregional contexts. Hoboken, NJ: John Wiley & Sons.

[22] Worcester, R., Lagos, M., & Basañez, M. (2000). Cross-national studies: Lessons learned the hard way. Paper presented at the 2000 WAPOR/AAPOR Conference, Portland, OR.

Internet Links

[23] European Social Survey (ESS). (2010). Retrieved May 03, 2010,

[24] International Social Survey Programme (ISSP). Retrieved May 04, 2010, from

[25] Latinobarómetro. (2010). Retrieved May 04, 2010, from

[26] Survey of Health, Ageing, and Retirement in Europe (SHARE). Retrieved May 04, 2010, from

[27] World Mental Health Surveys. (2010). Retrieved May 04, 2010,

Further Reading

Heath, A., Fisher, S., & Smith, S. (2005). The globalization of public opinion research. Annual Review of Political Science, 8, 297-333. From

Lynn, P. (2003) Developing quality standards for cross-national survey research: five approaches. International Journal of Social Research Methodology, 6(4), 323-336. Retrieved October 8, 2009, from



© 2008 The authors of the Guidelines hold the copyright. Please contact us if you wish to publish any of this material in any form.