The purpose of this dissertation is to examine aspects of the representational and computational influences on Bayesian reasoning as they relate to reference dependence. Across three studies, I explored how dependence on the initial problem structure influences the ability to solve Bayesian reasoning tasks. Congruence between the problem and the question of interest, response errors, and individual differences in numerical abilities were assessed. The most consistent and surprising finding across all three experiments was that people were much more likely to use the superordinate value in their solutions than the anticipated reference class values. This weakened the effect of congruence, producing relatively low accuracy even in congruent conditions as well as a different pattern of response errors than anticipated. There was strong and consistent evidence of a value selection bias: incorrect responses almost always conformed to values provided in the problem rather than reflecting computational errors. The one notable exception occurred when no organizing information was available in the problem other than the instruction to consider a sample of the same size as that in the problem. In that case, participants were most apt to sum all of the subsets of the sample to yield the size of the original sample (N). In all three experiments, higher numerical skill was generally associated with higher accuracy, whether or not calculations were required.
Problem-Solving Prerequisites to Bayesian Reasoning
Solving Bayesian reasoning problems requires correctly identifying, computing, and applying values from the problem text to the solution. Identification refers to understanding the intended meaning of the values. Computation refers to the mathematical manipulation of those values. Application goes one step further by using those identified and/or computed values in the solution. We evaluated performance on eight Bayesian reasoning problems using probing questions that separate out the extent to which uninitiated reasoners can identify, compute, and apply values from problem to solution. The results suggest that reasoners are generally proficient at identifying values but struggle with computation and application.
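To make these three prerequisites concrete, consider a hypothetical natural-frequency problem of the kind used in this literature (the numbers below are illustrative only, not drawn from the present studies): of 1,000 people screened, 100 have the condition; 80 of those 100 test positive, as do 90 of the 900 without the condition. Identification means recognizing the 80 as true positives and the 90 as false positives; computation and application mean combining those values to answer the question posed, for example:

\[
\mathrm{PPV} = \frac{\text{true positives}}{\text{all positive tests}} = \frac{80}{80 + 90} \approx .47
\]

A reasoner can fail at any of the three steps: misreading which group the 90 belongs to (identification), dividing incorrectly (computation), or computing 80/170 correctly but reporting some other provided value as the answer (application).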
We propose that a mismatch between problem presentation and question structure may promote errors on Bayesian reasoning problems. In this task, people determine the likelihood that a positive test actually indicates the presence of a condition. Research has shown that people routinely fail to correctly identify this positive predictive value (PPV). We point out that the typical problem structure is likely to confuse reasoners by focusing on the incorrect reference class for answering this diagnostic question; it instead provides the anchor needed to address a different diagnostic question, one about sensitivity (SEN). Results of two experiments are described in which participants answered diagnostic questions using problems presented with congruent or incongruent reference classes. Aligning reference classes eased both representational and computational difficulties, increasing the proportion of participants who were consistently accurate to an unprecedented 93% on PPV questions and 69% on SEN questions. Analysis of response components from incongruent problems indicated that many errors reflect difficulties in identifying and applying appropriate values from the problem, prerequisite processes that contribute to computational errors. We conclude with a discussion of the need, especially in applied settings and on initial exposure, to adopt problem presentations that guide, rather than confuse, the organization and use of diagnostic information.
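Using the same illustrative numbers as above, the mismatch can be stated directly: the standard presentation partitions the sample by condition status, which is the congruent reference class for the sensitivity question but not for the PPV question, whose reference class is the set of positive tests:

\[
\mathrm{SEN} = P(T^{+} \mid D^{+}) = \frac{80}{100} = .80, \qquad
\mathrm{PPV} = P(D^{+} \mid T^{+}) = \frac{80}{80 + 90} \approx .47
\]

A congruent PPV presentation would instead state the 170 positive tests directly, so that the needed denominator does not have to be assembled by regrouping values from the problem.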
Understanding diagnostic test outcomes requires determining the positive predictive value (PPV) of the test, which most laypeople and medical professionals struggle to do. Despite advances achieved with frequency formats and visual aids, fewer than 40% of people can typically identify this value. This study tests the impact of using congruent reference classes in problem-question pairings, evaluates the role of numeracy, and assesses how diagnostic value estimates affect reported likelihood of using the test.
Shared decision making places an emphasis on patient understanding and engagement. However, when it comes to treatment selection, research tends to focus on how doctors select pharmaceutical treatments. The current study is a qualitative assessment of how patients choose among three common treatments that have varying degrees of scientific support and side effects. We used qualitative data from 157 undergraduates (44 males, 113 females; mean age = 21.89 years), collected as part of a larger correlational study of depression and critical thinking skills. Qualitative analysis revealed three major themes: shared versus independent decision making, confidence in the research and the drug, and cost and availability.
Research suggests that most people struggle when asked to interpret the outcomes of diagnostic tests such as those presented as Bayesian inference problems. To help people interpret these difficult problems, we created a brief tutorial, requiring less than 10 minutes, that guided participants through the creation of an aid (either a graph or a table) based on an example inference problem and then showed the correct way to calculate the positive predictive value of the problem (i.e., the likelihood that a positive test correctly indicates the presence of the condition). Approximately 70% of those in each training condition found the correct response on at least one problem in the format for which they were trained.
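As a sketch of what such a table-based aid might look like (again using the illustrative numbers from above, not materials from the study), a completed natural-frequency table crosses test outcome with condition status:

                  Condition present   Condition absent   Total
  Test positive          80                  90            170
  Test negative          20                 810            830
  Total                 100                 900          1,000

The PPV is then read off the first row as 80/170, or roughly 47%.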