Methodological Research

Our methodological research combines several approaches, including surveys, field experiments, laboratory experiments, and qualitative methods, to better understand individuals' preferences for health and care and how these preferences can be measured using stated preference tasks.

Current Projects

The effects of online deliberation on altruistic preferences and moral reasoning

We are investigating the effect of online deliberation on subjects participating in a Discrete Choice Experiment (DCE). We explore whether text-based online communication between participants shifts preferences towards more altruistic choices. We are also investigating the effect of deliberation on the prevalence of self-interested and other-regarding moral reasoning. Combining online deliberation with DCEs has the potential to broaden the evaluative space of DCEs and presents an exciting opportunity to collect qualitative data on the reasons for participants' choices.

Our research examines existing quantitative and qualitative data from a choice experiment asking participants about their preferences for donating money to different charities. The choice options differed in the kind of charity (health-based vs. a broader social focus), the amount to be donated, and the donation amount that would be matched by the researcher.

The experiment consisted of three arms:

  • Arm 1 asked individual participants to commit to a real donation funded out of their compensation;
  • Arm 2 asked participants to make an individual choice for a real donation after discussing their options with another participant via online text-based communication; and
  • Arm 3 asked participants to engage in online communication and reach a unanimous decision on a real donation.

HERU researchers involved in this research project: Mandy Ryan and Ruben Sakowsky

External collaborators: Emmanouil Mentzakis (University of Southampton)

Using eye-tracking to inform the design and analysis of discrete choice experiments

Discrete choice experiments (DCEs) are widely applied in economics to study choice behaviour. Current research is limited in terms of understanding how individuals process information and make choices. We explore how novel eye-tracking methods can provide insight into decision-making processes underlying choices, as well as the implications for choice data analysis.

HERU researchers involved in this research project: Mandy Ryan and Mesfin Genie

External collaborators: F. Hermens (Tilburg University) and N. Krucien (Evidera)

Using induced value experiments to infer decision-making strategies in discrete choice experiments

This study investigates the decision-making strategies used by respondents when completing a discrete choice experiment (DCE). We address this question using a technique from experimental economics: an induced value experiment.

Our results indicate that a large proportion of respondents do not make pay-off (utility) maximising choices. We investigate the presence of satisficing behaviour and other non-utility maximising decision rules in both hypothetical and incentivised choices.

HERU researchers involved in this research project: Verity Watson

External collaborators: S Luchini (University of Aix-Marseille)

Recently Completed Projects

Choice certainty and deliberative thinking in discrete choice experiments: a theoretical and empirical investigation

Stated preference research is criticised because respondents to hypothetical surveys may not engage with the task. Decision certainty has been used to measure task engagement: researchers assume that respondents who make decisions about which they are certain have well-defined preferences and provide more reliable responses. In the case of DCEs, we argue that the variability of response certainty is also important. We present a novel framework to identify thoughtful, deliberative respondents.

The framework combines respondents' decision certainty with the variability in that certainty across a set of choice tasks. We test our framework empirically using data from two case studies. We find respondents with higher certainty variability seldom use decision heuristics, are more likely to have monotonic preferences, and have longer response times. We then incorporate mean decision certainty and certainty variability into econometric models of choice to provide more precise estimates of individuals' preferences. We find that a re-weighting function that includes variability improves the precision of welfare estimates by up to 69%.
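
The re-weighting idea can be sketched in a few lines. The weight function below (a linear combination of mean certainty and its standard deviation) and the 0–10 certainty scale are illustrative assumptions, not the published specification:

```python
import math
from statistics import mean, pstdev

def respondent_weight(certainty_scores, a=1.0, b=1.0):
    """Illustrative weight combining a respondent's mean stated certainty
    with its variability across choice tasks.  The linear form and the
    coefficients a, b are assumptions for illustration only."""
    return a * mean(certainty_scores) + b * pstdev(certainty_scores)

def weighted_loglik(beta, data):
    """Binary-logit log-likelihood in which each respondent's
    contribution is scaled by their certainty-based weight."""
    ll = 0.0
    for x, y, certainties in data:             # covariate, choice (0/1), certainty scores
        w = respondent_weight(certainties)
        p = 1.0 / (1.0 + math.exp(-beta * x))  # P(choose option 1)
        ll += w * (y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return ll

# Two respondents with identical choices; the second reports more
# variable certainty, so their choices receive more weight.
data = [(1.0, 1, [5, 5, 5]), (1.0, 1, [2, 8, 5])]
print(weighted_loglik(0.5, data))
```

In practice the weights would enter a full conditional logit estimation; the binary logit here just keeps the sketch short.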

Outcome and Translation

Respondents with higher certainty variability seldom use decision heuristics, are more likely to have monotonic preferences, and have longer response times. A re-weighting function that includes this variability improves the precision of welfare estimates by up to 69%.

HERU researchers involved in this research project: Verity Watson

External collaborators: Regier, D. (British Columbia Cancer Research Agency and University of British Columbia); Sicsic, J. (Paris Descartes University Institute of Technology)

Demand revelation in a multi-attribute discrete choice task

The efficient provision of healthcare requires information about the costs and benefits of interventions. While many costs and benefits can be measured using market prices, many healthcare interventions are not traded in markets and consequently have no market price. In this instance, stated preference methods such as DCEs may be used to estimate the benefits.

Few studies have compared hypothetical discrete choice experiment (DCE) responses with the equivalent real choices. Even when hypothetical and real choices are compared, it is impossible to conclude that choice differences result from the question type (hypothetical or real) rather than from preference differences across the two groups. Techniques developed in experimental economics can disentangle the influence of choice context from individuals' preferences. Using induced value experiments, this project compared hypothetical and real DCE responses, estimated the magnitude of hypothetical bias, and considered whether models of bounded rationality better explain responses to DCEs.

Our results indicate that DCEs do not reliably measure individuals’ true valuation of the good. We find little evidence that individuals make different choices in hypothetical and incentivised settings. We find that choice complexity affects individuals’ ability to make correct choices and that individuals learn as they complete the task.
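
The induced value logic above can be sketched directly: because the researcher assigns each option a known pay-off, the utility-maximising choice is observable, and the share of maximising choices can be computed (the pay-offs and choices below are made-up illustrations):

```python
def maximising_rate(tasks):
    """Share of choices that picked the highest-pay-off option.

    tasks: list of (payoffs, chosen_index) pairs, where payoffs holds
    the induced monetary value of each option in the task.
    """
    correct = sum(1 for payoffs, chosen in tasks
                  if payoffs[chosen] == max(payoffs))
    return correct / len(tasks)

# Hypothetical induced pay-offs (in pounds) and one respondent's choices
tasks = [([3.0, 5.0], 1),   # chose the higher pay-off
         ([4.0, 2.0], 1),   # chose the lower pay-off
         ([1.0, 6.0], 1)]   # chose the higher pay-off
print(maximising_rate(tasks))   # 2 of 3 choices maximise pay-off
```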

Outcome and Translation

Our results imply that researchers must take choice complexity and learning into account when designing stated preference studies.

HERU researchers involved in this research project: Verity Watson

External collaborators: S Luchini (GREQAM, Marseille)

Do participants understand health economics surveys?

Debriefing questions can assess whether respondents understand discrete choice experiments (DCEs) and are answering in a way consistent with theories of decision making and utility maximisation. However, there is limited literature on how often debriefing questions are included, or how the results are used, in health economics. We conducted an online survey of authors of published health DCEs, asking about their use of debriefing questions, including frequency, type and analysis. We descriptively analysed the sample characteristics and responses.

Outcome and Translation

These results suggest that while over half of researchers conducting health DCEs use debriefing questions, many do not analyse, use or report the responses. Given the additional respondent burden, there is a need for reliable and valid debriefing questions. In the meantime, the inclusion, analysis, and reporting of debriefing questions should be carefully considered prior to DCE implementation.

HERU researchers involved in this research project: Verity Watson

External collaborators: Pearce, A. (University of Sydney); Mulhern, B. and Viney, R. (CHERE, University of Technology Sydney)

Does an oath improve demand revelation in discrete choice experiments?

This project considered the potential of ex ante corrections to improve demand revelation and reduce hypothetical bias in discrete choice experiments. Theories of social psychology emphasise that social context is important when individuals are asked to value non-market goods. We used an induced value experiment to investigate demand revelation and hypothetical bias in discrete choice experiment responses. This project drew on existing research of Drs Luchini and Jacquemet and Professor Shogren by asking participants to complete one of three oaths before taking part in the experiment. Commitment theory suggests that this should increase demand revelation and reduce hypothetical bias.

Outcome and Translation

We find that an oath improves the reliability of responses to hypothetical DCE-type tasks. We find oaths that target honesty are more effective than those that target effort.

HERU researchers involved in this research project: Verity Watson

External collaborators: N Jacquemet (University of Paris); S Luchini (GREQAM, Marseille) and J Shogren (University of Wyoming)

Eliciting preferences for healthy and sustainable food in the lab

Previous studies eliciting preferences for food products have found that different elicitation mechanisms lead to different valuations for the same good. There are two possible explanations: the mechanisms differ in how accurately they elicit preferences, or preferences are formed differently in response to each mechanism.

This is the first study to distinguish between these explanations using an innovative experimental design that combines induced value (IV) and home-grown (HG) preference procedures for a second price Vickrey auction (SPVA) and a discrete choice experiment (DCE).
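
As a reminder of the SPVA mechanism referred to above: the highest bidder wins but pays the second-highest bid, which is what makes truthful bidding a dominant strategy. A minimal sketch of the outcome rule (the bids are illustrative):

```python
def spva_outcome(bids):
    """Second price Vickrey auction: the highest bidder wins and
    pays the second-highest bid, not their own."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]
    return winner, price

winner, price = spva_outcome([4.50, 6.00, 5.25])
print(winner, price)   # bidder 1 wins and pays 5.25
```

Because the price is set by a rival's bid, shading one's own bid cannot lower the price paid, only the chance of winning; this is why the SPVA is, in theory, demand revealing.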

We elicit IV preferences for a fictitious good (a token) and HG preferences for a real food product (a beef-based lasagne) that varies in healthiness and environmental sustainability. We find HG preferences elicited using SPVAs and DCEs do not differ.

After controlling for potential differences in sample composition, our results suggest that HG preference patterns vary depending on the elicitation mechanism. Our IV results show that the DCE is the most demand-revealing elicitation procedure. Taken together, our results imply that the lack of isomorphism in our empirical application is caused by a value-elicitation problem in the SPVA.

Outcome and Translation

Our results imply that the lack of isomorphism in our empirical application is caused by a value-elicitation problem in the SPVA.

HERU researchers involved in this research project: Verity Watson

External collaborators: Cerroni, S. (Queen's University Belfast); MacDiarmid, J. (Rowett Institute of Nutrition and Health, University of Aberdeen).

External validity of contingent valuation: a field experiment comparing hypothetical and real payments

Whilst willingness to pay (WTP) is increasingly used in economics to value benefits, questions remain concerning its external validity: do hypothetical responses match actual responses? We present results from two within-sample field tests (Thailand and Scotland). Our Thailand experiment suggests that whilst hypothetical No responses are always a No, hypothetical Yes responses exceed actual Yes responses. Certainty calibrations (verbal and numerical response scales) minimise hypothetical–actual discrepancies, offering a useful solution.
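
Certainty calibration typically recodes a hypothetical Yes as a No unless the respondent's stated certainty clears a threshold. A minimal sketch, assuming a 0–10 numerical certainty scale and a cutoff of 8 (both illustrative choices, not the scales used in the study):

```python
def calibrate(responses, cutoff=8):
    """Recode hypothetical Yes answers as No unless the respondent's
    stated certainty (0-10 numerical scale, an illustrative choice)
    meets the cutoff.  No answers are left unchanged."""
    return [(answer == "yes" and certainty >= cutoff)
            for answer, certainty in responses]

raw = [("yes", 9), ("yes", 5), ("no", 10)]
print(calibrate(raw))   # only the certain Yes survives: [True, False, False]
```

Comparing the calibrated Yes rate with the actual Yes rate from the real-payment arm is what lets the cutoff be validated.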

Failure to adjust may overstate monetary measures of value and lead to inaccurate policy recommendations. In a follow-up study conducted in Scotland, qualitative investigation is currently under way into why people do not behave in reality as they state in contingent valuation surveys.

HERU researchers involved in this research project: Mandy Ryan, Sebastian Heidenreich

External collaborators: J Cairns (Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine), S Jareinpituk (Department of Dental Public Health, Mahidol University), E Mentzakis (Economics Department, University of Southampton), D Glynn (University of York) and C Bond (Academic Primary Care, University of Aberdeen)

For better or worse? Investigating the validity of best-worst scaling experiments in health

Choice experiments are frequently used in health economics to measure preferences for non-market goods. The best-worst discrete choice experiment (BWDCE) has been proposed as a variant of the traditional "pick the best" approach. The BWDCE, in which participants choose both the best and the worst option, is argued to generate more precise preference estimates because of the additional information collected. However, for this to be the case two conditions must hold:

  1. best and worst decisions provide similar information about preferences, and
  2. asking individuals to answer more than one choice question per task does not reduce data quality.

Whether these conditions hold remains under-researched.

This is the first study to compare participants’ choices across three experimental conditions:

  • (i) BEST choices only,
  • (ii) WORST choices only, and
  • (iii) BEST & WORST choices (BWDCE).

We find responses to worst choices are noisier. Implied preferences from the best-only and worst-only choices are qualitatively different, leading to different WTP values. Responses to BWDCE tasks are of lower quality, and respondents are more likely to use simplifying decision heuristics.

Outcome and Translation

As the BWDCE is increasingly used in health economics, we encourage further investigation of the method and identify important areas for future research.

HERU researchers involved in this research project: Nicolas Krucien and Mandy Ryan

External collaborator: J. Sicsic (INSERM)

Health state valuation using discrete choice experiments and best-worst scaling: a comparison of methods

Health utility indices (HUIs) are widely used in economic evaluation. The best-worst scaling (BWS) method is used to value the dimensions of HUIs. However, little is known about the properties of this method. We investigate the validity of the BWS method for developing HUIs, comparing it to another ordinal valuation method, the discrete choice experiment (DCE).

Using a parametric approach, we find a low level of concordance between the two methods, with evidence of preference reversals. BWS responses are subject to decision biases, with significant effects on individuals' preferences. Non-parametric tests indicate BWS data have lower stability, monotonicity and continuity compared to DCE data, suggesting that BWS provides lower quality data.

Outcome and Translation

For both theoretical and technical reasons, practitioners should be cautious about using the BWS method to measure health-related preferences, and about using HUIs based on BWS data. Given existing evidence, the DCE appears to be the better method, not least because its limitations (and measurement properties) have been extensively researched.

HERU researchers involved in this research project: Nicolas Krucien, Verity Watson and Mandy Ryan

Re-thinking the different perspectives that can be used when eliciting preferences in health

The 2003 Health Economics paper 'An inquiry into the different perspectives that can be used when eliciting preferences in health', by Dolan, Olsen, Menzel and Richardson (DOMR), presents a conceptual framework of six perspectives along two dimensions: preferences (personal, social and socially inclusive personal) and context (ex ante and ex post).

We rethink this framework by asking four questions concerning: the patient, or user, of the treatment; the payer for the treatment; the assessor of the value of the treatment; and the timing of the illness and the nature of its risk.

These questions refine the preference and context dimensions and lead to the identification of perspectives not classified by the original framework. We propose an extended framework with five preferences (personal, non-use, proxy, social and socially inclusive personal) and five contexts (one ex post and four ex ante), resulting in 22 possible perspectives.

Outcome and Translation

We show that the DOMR framework is imprecise and incomplete in both the preference and context dimensions. Our extended five-by-five framework will facilitate comparisons across empirical studies with more clarity at the conceptual level, and has better coverage to accommodate the expanded range of contexts in which preference elicitation is applied.

HERU researchers involved in this research project: Verity Watson

External collaborators: A Tsuchiya (ScHAAR, University of Sheffield)

Spending wisely: investigating survey mode effects in discrete choice experiment responses

We compared four survey modes: an internet panel survey, a mail survey, a mail invitation to complete an internet survey, and in-person interviews. Responses were compared for a survey designed to elicit preferences for a healthcare 'good' likely to be relevant to all members of the population: the use of community pharmacies for managing minor illness. Preference data were collected using a DCE. For each mode, we considered:

  1. How representative of the population were respondents to each mode?
  2. Did respondents’ preferences and willingness to pay vary across modes?
  3. Could statistical techniques be used to take account of differences in respondent characteristics?
  4. Did response validity vary across modes?

Outcome and Translation

The mail invitation to complete an internet survey was not taken forward to the main study after an extremely low response rate to the pilot. None of the modes were representative of the general population. Each mode differed from the general population in different ways. For example, while respondents to the mail survey were older, on average, than the general population, respondents to the internet panel surveys were younger. Respondents’ preferences and willingness to pay differed across modes. Response validity also differed across modes.

The results provide researchers with a characterisation and quantification of the advantages and disadvantages of each mode, and thus allow them to make an informed decision about which mode(s) to use in their research.

HERU researchers involved in this research project: Mandy Ryan and Verity Watson

External collaborators: T Porteous (Academic Primary Care, University of Aberdeen)

Task complexity and response certainty in discrete choice experiments

This study explores the behavioural and statistical links between utility balance and cognitive burden in discrete choice experiments (DCEs) by examining the relationship between respondents’ stated certainty about their DCE responses and the statistical precision of the econometric model. DCE experimental design emphasises utility balance across choice task alternatives, but this increases task complexity and respondents’ cognitive burden. A consequence of task complexity is response error, which increases response variability and decreases statistical efficiency.
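
Utility balance is often gauged through the logit choice probabilities implied by a design: a task is maximally balanced when each alternative is equally likely to be chosen. A minimal sketch for a two-alternative task (the balance measure below is one illustrative formulation, not the study's own):

```python
import math

def utility_balance(v1, v2):
    """Balance of a two-alternative choice task under a logit model:
    1 when the options' deterministic utilities are equal (a maximally
    difficult trade-off), falling towards 0 as one option dominates."""
    p1 = 1.0 / (1.0 + math.exp(-(v1 - v2)))  # P(choose option 1)
    return 1.0 - abs(2.0 * p1 - 1.0)

print(utility_balance(1.0, 1.0))   # perfectly balanced task -> 1.0
print(utility_balance(3.0, 0.0))   # near-dominant option -> close to 0
```

High balance improves statistical efficiency per choice on paper, but, as the project argues, it also makes each choice cognitively harder.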

Outcome and Translation

We find that increases in choice task utility balance decrease response certainty, and that re-weighting the regression to favour respondents who are more uncertain of their choices increases the statistical precision of the econometric model.

HERU researchers involved in this research project: Verity Watson

External collaborators: H Burnett, W Ungar (The Hospital for Sick Children, Toronto, Canada) and D Regier (University of British Columbia, Canada)

To pay or not to pay? Cost information processing in the valuation of publicly funded healthcare

Discrete choice experiments (DCEs) commonly include a monetary attribute. This enables willingness to pay (WTP), a monetary measure of benefit, to be estimated for the non-monetary attributes. There has been concern that including a cost attribute challenges the credibility of the experiment when valuing publicly funded healthcare systems, yet very little research has explored this issue. Using a UK sample, we allocated participants across two versions of a DCE: one including a cost attribute and the other excluding it. The DCE was identical in all other respects.

We find no significant difference in response time across the two surveys; monotonicity was higher for the COST DCE; and cost was stated as the most commonly ignored attribute in the COST DCE. Whilst the inclusion of a cost attribute did not alter the structure of preferences, it resulted in a lower level of choice consistency. Using an unrestricted latent class model, we find evidence of a credibility effect: respondents with experience of paying for health services and who perceive the choices as realistic are less likely to ignore cost. Further, respondents with longer response times are less likely to be cost minimisers. Results are robust across different model specifications and choice formats.

DCE practitioners should give due consideration to cost credibility when including a cost attribute, ensuring participants engage with it. Suggested ways to do this include careful motivation of the cost attribute, choice of an appropriate payment vehicle, and careful attention to the cost attribute when developing and piloting the survey. Failure to do so will result in invalid willingness to pay estimates and thus invalid policy recommendations.
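
The role of the cost attribute in generating WTP can be shown in one line: in a linear choice model, marginal WTP for an attribute is the negative ratio of its coefficient to the cost coefficient. A sketch with made-up coefficient values:

```python
def marginal_wtp(beta_attribute, beta_cost):
    """Marginal willingness to pay implied by a linear choice model:
    WTP = -beta_attribute / beta_cost, the rate at which respondents
    trade money for the attribute."""
    return -beta_attribute / beta_cost

# Hypothetical estimates: utility falls by 0.04 per week of waiting
# and by 0.02 per pound of cost, so one week of waiting is valued
# at -2 pounds (respondents would pay 2 pounds to avoid it).
print(marginal_wtp(-0.04, -0.02))
```

If respondents ignore the cost attribute, the estimated cost coefficient is biased towards zero and every WTP ratio built on it is inflated, which is why cost credibility matters.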

HERU researchers involved in this research project: Mandy Ryan and Mesfin Genie

External collaborator: N. Krucien (Evidera)


Weighting or aggregating? Investigating information processing in multi-attribute choices

Multi-attribute choices are commonly analysed in economics to value goods and services. Standard analysis assumes individuals consider all attributes and make trade-offs between them. Such decision-making is cognitively demanding and often triggers alternative decision rules. We develop a new model in which individuals aggregate multi-attribute information into meta-attributes. Applying our model to a choice experiment (CE) dataset, we find that accounting for attribute aggregation (AA) improves model fit. The probability of adopting AA is greater for homogeneous attribute information, for participants who had shorter response times or failed the dominance test, and for later choices in the sequence. Accounting for AA has implications for welfare estimates. Our results underline the importance of accounting for information-processing rules when modelling multi-attribute choices.

HERU researchers involved in this research project: Mandy Ryan and Mesfin Genie

External collaborator: Nicolas Krucien (Evidera)