Preregistration Template - Choice Experiments

Part A: General Information

1. Summary of the study

Provide a summary of the whole study/survey and any background details needed to understand the study and survey in general. This can be extensive if necessary, but should be kept as brief as possible.

2. What are your research questions and/or hypotheses?

Briefly but precisely state all research questions and hypotheses of this study. Hypotheses should be testable with the methods described later.

3. What is the format of the survey instrument?

Elaborate on the survey instrument. Subquestions could be: Is it online or offline? Is the survey programmed by you or by an external company? Who is collecting the data? How long is the questionnaire?

4. Motivate and describe your sampling approach

Describe your sampling strategy in detail and provide reasons for it. If it is not a random sample, why not? Are you using quotas to achieve a representative sample? Who is your target population? Do you use screening questions?

5. How did you incentivize your sample?

Describe how respondents are compensated for completing the questionnaire. If you use individual incentives (e.g. as in a real choice experiment), explain how they work.

6. How did you determine the sample size?

Describe any power calculations you conducted or any other strategy you used to determine your sample size. Many surveys are restricted by a budget, which can also be mentioned here as a reason; however, a power calculation is recommended even in that case.
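
If a full power analysis is not feasible, a widely used rule of thumb attributed to Johnson and Orme can at least provide a lower bound. A minimal sketch in R, with illustrative values:

```r
# Rule-of-thumb lower bound for DCE sample sizes, often attributed to
# Johnson and Orme: n >= 500 * c / (t * a). All values below are
# illustrative placeholders.
n_tasks    <- 8  # t: choice tasks per respondent
n_alts     <- 3  # a: alternatives per choice task
max_levels <- 4  # c: largest number of levels of any attribute
ceiling(500 * max_levels / (n_tasks * n_alts))  # minimum respondents: 84
```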

7. Did you conduct any pre-studies to develop the DCE and questionnaire? Please describe the procedure.

Describe the procedure to develop your questionnaire. This could include focus groups, expert interviews, a research project etc.

8. Define how you deal with outliers and how you exclude respondents/choices.

Criteria for identifying and excluding outliers should be specified before the survey is conducted. Criteria could relate, for example, to completion time.
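
For example, a common exclusion rule flags "speeders" relative to the median completion time. A minimal sketch in R, where the data frame, column name, and the 50% cutoff are assumptions you should justify:

```r
# Flag "speeders": respondents finishing in less than half the median
# completion time. 'survey' and 'duration_seconds' are hypothetical names,
# and the 50% cutoff is an assumption.
cutoff <- 0.5 * median(survey$duration_seconds, na.rm = TRUE)
survey_clean <- survey[survey$duration_seconds >= cutoff, ]
```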

9. How do you deal with missing data?

In surveys, some data points may be missing, e.g. if respondents do not answer a specific question. One can exclude such respondents or use imputation methods. Explain your strategy.
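
For example, a complete-case strategy in R could look like the following sketch (the data frame and column names are hypothetical):

```r
# Complete-case analysis: drop respondents with missing values in any
# variable used for estimation ('survey' and column names are hypothetical).
vars <- c("choice", "cost", "qual", "age")
survey_cc <- survey[complete.cases(survey[, vars]), ]
```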

10. Anything else you want to mention regarding survey information:

Free text for anything important that has not been captured so far.

Part B: Experimental Design

11. Describe the choice situation or good to be valued

Provide a detailed explanation of the choice situation. What is the good to be valued? How are the scope and time frame defined? For environmental economics applications, describe the extent of the proposed change and the extent of the market. For a good orientation, see Johnston et al. (2017).

12. Which attributes (and respective levels) are you using?

For each attribute in your survey, provide a clear and concise description. Also explain each level if needed.

13. List the number of alternatives per choice set, the number of choice sets in total and any blocks or random assignments if applicable:

You can use a table here. If the number of alternatives varies, or if you use different split samples with different attributes, elaborate on this here.

14. How did you code your attributes to generate your design?

This is specific to your design. Which coding did you use to generate your design? Examples are dummy coding, linear coding, or coding relative to a status quo.
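
The sketch below illustrates, in base R, how the same three-level attribute looks under dummy, effects, and linear coding (the attribute itself is made up):

```r
# Three common codings of a three-level attribute in base R.
x <- factor(c("low", "medium", "high"), levels = c("low", "medium", "high"))
model.matrix(~ x, contrasts.arg = list(x = "contr.treatment"))  # dummy coding
model.matrix(~ x, contrasts.arg = list(x = "contr.sum"))        # effects coding
as.numeric(x)  # linear coding: levels treated as equally spaced numbers
```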

15. Did you integrate socio-demographic and other case-specific variables in the design process, and if yes, how?

Case-specific variables are variables that do not vary between alternatives but vary across respondents. These variables could include all variables that you plan to integrate into your utility function, including dummies for split samples. If not applicable, leave blank.

16. Which assumptions on the utility function did you use to generate the design?

Do you use a utility maximization framework? How do you specify interaction variables? Do you use non-linear specifications of some attributes (e.g. logarithm, quadratic)?
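
For example, a specification with a quadratic cost term and an attribute-by-income interaction could be written as an R formula like this (all variable names are hypothetical):

```r
# A possible specification with a quadratic cost term and an
# attribute-by-income interaction (all names are hypothetical):
# V = b1*cost + b2*cost^2 + b3*qual + b4*qual*income
f <- choice ~ cost + I(cost^2) + qual + qual:income
```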

17. How did you create your experimental design?

If you create your design with specific software (NGENE, spdesign in R, etc.), describe all assumptions (utility specification, optimization routine, etc.). See, for example, Scarpa and Rose (2008) for reporting guidelines for efficient designs.
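
As an illustration, a design call in spdesign might look roughly like the sketch below. The exact syntax (priors and levels in square brackets, the generate_design() arguments) is an assumption based on the package documentation and may differ between versions, so consult the spdesign manual:

```r
library(spdesign)

# Priors and attribute levels in square brackets inside the utility strings;
# attribute names are hypothetical and the syntax is an assumption based on
# the spdesign documentation.
utility <- list(
  alt1 = "b_cost[-0.1] * cost[c(10, 20, 30)] + b_qual[0.5] * qual[c(0, 1)]",
  alt2 = "b_cost * cost + b_qual * qual"
)

design <- generate_design(utility, rows = 12, model = "mnl",
                          efficiency_criteria = "d-error",
                          algorithm = "rsc")
```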

18. If an efficient design, which priors did you use, and how did you obtain them?

Some researchers use small priors indicating the sign of the parameters, others use priors based on assumptions or previous studies. Be concise and transparent about which ones you use. See for example Bliemer and Collins (2016) and Walker et al. (2018).
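
To make the role of priors concrete: for an MNL design, the D-error evaluated at fixed priors can be computed directly. A minimal base-R sketch (no design package assumed):

```r
# D-error of an MNL design evaluated at fixed priors:
# det(inverse Fisher information)^(1/K), lower is better.
# X: design matrix, one row per alternative; n_alts: alternatives per set.
d_error <- function(X, priors, n_alts) {
  K <- ncol(X)
  info <- matrix(0, K, K)
  for (s in seq_len(nrow(X) / n_alts)) {
    rows <- ((s - 1) * n_alts + 1):(s * n_alts)
    Xs   <- X[rows, , drop = FALSE]
    p    <- as.vector(exp(Xs %*% priors))
    p    <- p / sum(p)
    info <- info + t(Xs) %*% (diag(p) - p %*% t(p)) %*% Xs
  }
  det(solve(info))^(1 / K)
}
```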

19. Did you test the design, e.g., with simulation?

Monte Carlo simulations are a good way to test the design for unbiasedness, power, and efficiency, and can help you justify your design choice. In R, you can use simulateDCE (Sagebiel 2025). If you do not use simulation, describe how you decided on the design you chose and which other designs you compared it to.
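
If you want a generic check without additional packages, a bare-bones Monte Carlo along the following lines simulates choices under assumed priors and re-estimates a conditional logit. This is a sketch with made-up design values, not simulateDCE's interface:

```r
set.seed(42)
# Hypothetical design: 8 choice sets x 2 alternatives, 2 attributes.
X <- cbind(cost = runif(16, 10, 30), qual = rbinom(16, 1, 0.5))
beta_true <- c(-0.1, 0.5)
n_resp <- 300

# Negative log-likelihood of a conditional logit.
negll <- function(beta, X, chosen, n_alts) {
  v <- matrix(exp(X %*% beta), nrow = n_alts)
  p <- v / rep(colSums(v), each = n_alts)
  -sum(log(p[chosen]))
}

est <- replicate(100, {
  Xr <- X[rep(1:16, n_resp), ]                          # design per respondent
  u  <- Xr %*% beta_true - log(-log(runif(nrow(Xr))))   # utility + Gumbel error
  um <- matrix(u, nrow = 2)                             # one column per set
  chosen <- rbind(um[1, ] > um[2, ], um[2, ] >= um[1, ])
  optim(c(0, 0), negll, X = Xr, chosen = chosen, n_alts = 2)$par
})
rowMeans(est) - beta_true  # empirical bias of each parameter estimate
```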

20. Do you use an additional split sample approach (between-subject design)?

Split-sample approaches are frequently used to test methodological research questions or to randomize out undesired effects. If you have different versions of your questionnaire that can influence the estimated parameters, explain them and why you used them.

21. Do you randomize the order of choice sets, alternatives and attributes?

In most cases, it is recommended to randomize the order of choice sets, alternatives and attributes (Mariel et al. 2021, 40 and 94) to avoid undesired effects (e.g. left-right bias). Note whether you do this; if you do not, explain why not.
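
A minimal R sketch for per-respondent randomization of choice set order (values are illustrative placeholders):

```r
# Draw an independent random order of choice sets for each respondent
# (values are illustrative placeholders).
n_sets <- 8
n_resp <- 300
orders <- replicate(n_resp, sample(n_sets))  # one column per respondent
```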

22. Do you use an opt-out option or a status quo?

State whether you include an opt-out or status quo alternative in your choice sets. If you do not use any, you can state that you use a forced-choice experiment.

23. How do you define the status quo?

The definition of the status quo is crucial for the interpretation of the results. Note down how it is described to the respondents.

24. Are your attribute levels constant, or do they adapt to the status quo?

If the status quo is constant for all respondents, describe where the constant values come from. If it varies between respondents, describe how.

25. Anything else you want to mention regarding experimental design:

Free text for anything important that has not been captured so far.

Part C: Estimation

26. Describe your estimation strategy and any software used:

Describe generally how you estimate your models (e.g. maximum likelihood) and which software you will use to estimate models.
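
For example, a multinomial logit could be estimated with the mlogit package in R; the data frame and column names below are hypothetical placeholders:

```r
library(mlogit)

# Estimate a multinomial logit; 'survey' is a hypothetical data frame in
# long format (one row per alternative per choice set).
md <- mlogit.data(survey, choice = "choice", shape = "long",
                  alt.var = "alt", id.var = "respondent_id")
m  <- mlogit(choice ~ cost + qual, data = md)
summary(m)
```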

27. Which inference criteria do you use?

Which tests do you apply to test your hypotheses, and which significance level do you use to draw your conclusions? If you do not use formal testing, or if you use different testing approaches, describe this.

28. Which discrete choice models do you plan to estimate? If you consider multiple models, specify how you select a model or if a model averaging approach will be used.

Here, you can list all models that you plan to estimate. Often, the final model choice is an empirical question, and it is fine to leave open which model you will ultimately use. However, you can still state which models and specifications you will estimate and compare.

29. Do you control for unobserved and observed heterogeneity? If yes, how (e.g. interactions, membership function in a latent class model)?

Most applications control for unobserved heterogeneity via mixed logit or latent class models, and for observed heterogeneity via interaction terms between attributes (or alternative-specific constants) and case-specific variables such as age or gender. List how you plan to deal with heterogeneity. Guidance on selecting the right model for unobserved heterogeneity can be found in Mariel et al. (2013) or Sagebiel (2017).
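
Building on the hypothetical mlogit sketch above, a mixed logit with one random coefficient could look like this:

```r
# Mixed logit with a normally distributed coefficient on 'qual', treating
# repeated choices per respondent as a panel (reuses the hypothetical 'md'
# object from the estimation sketch above).
mxl <- mlogit(choice ~ cost + qual, data = md,
              rpar = c(qual = "n"), R = 500, halton = NA, panel = TRUE)
summary(mxl)
```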

30. Do you estimate separate models for different groups?

To compare different groups, one can either use interaction terms or estimate separate models for each group. Elaborate on whether you want to estimate separate models and how you plan to compare estimates between them.
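
One common comparison is a likelihood-ratio test of a pooled model against separate group models. A minimal sketch with hypothetical model objects (note that a full Swait-Louviere test would additionally account for scale differences):

```r
# Likelihood-ratio test of a pooled model against separate group models
# (model objects are hypothetical; assumes identical specifications).
ll_pooled   <- as.numeric(logLik(m_pooled))
ll_separate <- as.numeric(logLik(m_group1)) + as.numeric(logLik(m_group2))
lr <- -2 * (ll_pooled - ll_separate)
df <- length(coef(m_pooled))  # additional parameters in the separate models
pchisq(lr, df = df, lower.tail = FALSE)
```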

31. Do you derive any welfare measures and how?

Welfare measures can be compensating or equivalent variation, willingness to pay, or others. Explain which ones you want to use and how you calculate them.
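
For example, a Krinsky-Robb (parametric bootstrap) confidence interval for a marginal willingness to pay can be simulated from the estimated coefficients and their covariance matrix. The sketch below reuses the hypothetical model object from the estimation sketch:

```r
# Krinsky-Robb (parametric bootstrap) interval for the marginal WTP for
# 'qual', computed as -b_qual / b_cost (reuses the hypothetical model 'm').
library(MASS)
set.seed(1)
draws <- mvrnorm(10000, mu = coef(m), Sigma = vcov(m))
wtp   <- -draws[, "qual"] / draws[, "cost"]
quantile(wtp, c(0.025, 0.5, 0.975))
```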

32. If you have a methodological research question, describe in detail how you test your methodological hypotheses.

There are different ways to approach methodological research questions. Often, researchers use split samples and then compare results between them using separate models or interaction terms. Explain how you approach your methodological research question and how you plan to test it.

33. Describe any additional exploratory analysis you have planned to conduct.

Sometimes you already have further ideas on what to do with the data without being specific about them yet. You can elaborate on these here.

34. Anything else you want to mention regarding estimation:

Free text for anything important that has not been captured so far.

Part D: Validity

35. Do you use any measures to enhance validity and reduce hypothetical bias?

The stated preference literature suffers from validity issues, hypothetical bias, and other biases. Several measures have been developed to counteract these biases. Explain in detail which ones you use; if you use none, explain why not. See Johnston et al. (2017) for best practices.

36. Which follow-up questions do you use?

To assess the validity of your results ex post, you can use follow-up questions, e.g. on consequentiality, comprehension, and credibility, on how respondents made their choices, on which attributes they perceived, etc.

37. Resources

Please add any resources that are relevant to understanding your experiment.

| Resource | Link/Description |
| --- | --- |
| Screenshot of choice sets | [Add Link] |
| Link to questionnaire or PDF | [Add Link] |
| Code for simulation | [Add Link] |
| Additional resource 1 | |
| Additional resource 2 | |
| Additional resource 3 | |

38. Referenced Literature

Bliemer, Michiel C. J., and Andrew T. Collins. 2016. “On Determining Priors for the Generation of Efficient Stated Choice Experimental Designs.” Journal of Choice Modelling 21: 10–14. https://doi.org/10.1016/j.jocm.2016.03.001.
Johnston, Robert J., Kevin J. Boyle, Wiktor Adamowicz, Jeff Bennett, Roy Brouwer, Trudy Ann Cameron, W. Michael Hanemann, et al. 2017. “Contemporary Guidance for Stated Preference Studies.” Journal of the Association of Environmental and Resource Economists 4 (2): 319–405.
Mariel, Petr, Amaya de Ayala, David Hoyos, and Sabah Abdullah. 2013. “Selecting Random Parameters in Discrete Choice Experiment for Environmental Valuation: A Simulation Experiment.” Journal of Choice Modelling 7: 44–57. https://doi.org/10.1016/j.jocm.2013.04.008.
Mariel, Petr, David Hoyos, Jürgen Meyerhoff, Mikolaj Czajkowski, Thijs Dekker, Klaus Glenk, Jette Bredahl Jacobsen, et al. 2021. Environmental Valuation with Discrete Choice Experiments: Guidance on Design, Implementation and Data Analysis. Springer Nature.
Sagebiel, Julian. 2017. “Preference Heterogeneity in Energy Discrete Choice Experiments: A Review on Methods for Model Selection.” Renewable and Sustainable Energy Reviews 69: 804–11. https://doi.org/10.1016/j.rser.2016.11.138.
———. 2025. simulateDCE: Simulate Data for Discrete Choice Experiments.
Scarpa, Riccardo, and John M. Rose. 2008. “Design Efficiency for Non-Market Valuation with Choice Modelling: How to Measure It, What to Report and Why.” Australian Journal of Agricultural and Resource Economics 52 (3): 253–82. https://doi.org/10.1111/j.1467-8489.2007.00436.x.
Walker, Joan L., Yanqiao Wang, Mikkel Thorhauge, and Moshe Ben-Akiva. 2018. “D-Efficient or Deficient? A Robustness Analysis of Stated Choice Experimental Designs.” Theory and Decision 84 (2): 215–38.