My name is Sam Parsons. I’m a Postdoctoral Research Associate in the Department of Experimental Psychology at Oxford University. I work in the Oxford Centre for Emotion and Affective Neuroscience, aka OCEAN. Just in case you’re here for the cat, this ginger monster is called Beau.
As well as investigating cognitive-affective processes implicated in adolescent worry, I try to do my own small bit for the improvement of psychological science. Sometimes this means promoting open and reproducible research practices; at other times it means frustrating colleagues by highlighting the importance of routinely estimating and reporting the reliability of our measurements. Mostly, I’d just like us all to get along and do good research.
I am happy to be contacted to discuss the reliability of cognitive measures, particularly if you have questions about my splithalf package or need assistance with it.
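To give a flavour of what splithalf does, here is a minimal sketch of estimating the bootstrapped split-half reliability of a reaction-time difference score. The data frame and its column names ("subject", "blockcode", "latency", "congruency") are placeholders for your own data, and argument names reflect recent package versions, so do check `?splithalf` against your installed version.

```r
library(splithalf)

# my_task_data: a hypothetical trial-level data frame
splithalf(data            = my_task_data,
          outcome         = "RT",           # analyse response times
          score           = "difference",   # e.g. incongruent minus congruent
          conditionlist   = c("block1"),    # condition(s) to estimate separately
          halftype        = "random",       # random splits rather than odd/even
          permutations    = 5000,           # number of random splits to average over
          var.participant = "subject",
          var.condition   = "blockcode",
          var.RT          = "latency",
          var.compare     = "congruency",   # trial type defining the difference score
          compare         = c("Congruent", "Incongruent"))
```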
(I try to keep this website up to date; however, it may fall slightly behind. My CV is kept current, so please refer to that.)
DPhil in Experimental Psychology, 2019
University of Oxford
MSc in Psychology, 2014
Oxford Brookes University
BSc in Psychology, 2012
University of Stirling
Check the publications tab or my CV for all publications
The Combined Cognitive Bias Hypothesis proposes that emotional information-processing biases are associated with one another and may interact to conjointly influence mental health. Yet little is known about the interrelationships among cognitive biases, particularly in adolescence. We used data from the CogBIAS longitudinal study (Booth et al., 2017), including 451 adolescents from a typical population who completed measures of interpretation bias, memory bias, and a validated measure of general mental health. We used a moderated network modelling approach to examine whether positive mental health moderated the cognitive bias network. Mental health was directly connected to positive and negative memory biases and to positive interpretation biases, but not to negative interpretation biases. Further, we observed some moderation of the network structure by mental health: network connectivity decreased with higher positive mental health scores. Network approaches allow us to model complex relationships among cognitive biases and to develop novel hypotheses for future research.
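For readers curious how a moderated network model is fit in practice, here is a rough sketch using the mgm package. This is an illustration of the general technique, not the authors' analysis code; `biases` is a hypothetical data frame of continuous bias and mental-health scores, with the mental-health variable in the final column acting as the moderator.

```r
library(mgm)

p <- ncol(biases)
fit <- mgm(data       = as.matrix(biases),
           type       = rep("g", p),   # treat all variables as Gaussian
           level      = rep(1, p),     # continuous variables, one "level" each
           moderators = p,             # let the final variable moderate all edges
           lambdaSel  = "CV")          # select regularisation by cross-validation

# Estimated pairwise and moderation parameters are stored in the fitted
# object (see ?mgm); connectivity can then be compared across moderator levels.
```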
Analytic flexibility is known to influence the results of statistical tests, e.g. effect sizes and p-values. Yet the degree to which flexibility in data-processing decisions influences the reliability of our measures is unknown. In this paper I attempt to address this question using a series of reliability multiverse analyses. The methods section incorporates a brief tutorial for readers interested in implementing the multiverse analyses reported in this manuscript; all functions are contained in the R package splithalf. I report six multiverse analyses of data-processing specifications, including accuracy and response-time cutoffs, using data from a Stroop task and a Flanker task at two time points. This allowed for an internal consistency reliability multiverse at times 1 and 2, and a test-retest reliability multiverse between times 1 and 2. Largely arbitrary data-processing decisions led to differences of at least 0.2 between the highest and lowest reliability estimates. Importantly, there was no consistent pattern, across time points or tasks, in which data-processing specifications led to greater reliability. Together, these results show that data-processing decisions have a large, and largely unpredictable, influence on measure reliability. I discuss actions researchers could take to mitigate some of the influence of reliability heterogeneity, including adopting hierarchical modelling approaches. Yet no approach can completely save us from measurement error. Measurement matters, and I call on readers to help us move from what could be a measurement crisis towards a measurement revolution.
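The core logic of a reliability multiverse is simply to re-estimate reliability under every combination of data-processing choices. The splithalf package provides dedicated multiverse functions (see its documentation); the loop below only sketches that underlying logic, with illustrative cutoff values and hypothetical column names rather than the specifications used in the paper.

```r
library(splithalf)

# Illustrative grid of data-processing specifications (ms cutoffs)
specs <- expand.grid(rt_min = c(100, 200, 300),
                     rt_max = c(2000, 3000, 10000))

specs$reliability <- NA
for (i in seq_len(nrow(specs))) {
  # Apply this specification's RT cutoffs to hypothetical trial-level data
  dat <- subset(my_task_data,
                latency >= specs$rt_min[i] & latency <= specs$rt_max[i])
  est <- splithalf(data = dat, outcome = "RT", score = "difference",
                   conditionlist = "block1", halftype = "random",
                   permutations = 5000,
                   var.participant = "subject", var.condition = "blockcode",
                   var.RT = "latency", var.compare = "congruency",
                   compare = c("Congruent", "Incongruent"))
  # Output structure assumed from recent package versions; check your install
  specs$reliability[i] <- est$final_estimates$spearmanbrown
}
# The spread of specs$reliability across rows is the multiverse of estimates
```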
Psychological science relies on behavioral measures to assess cognitive processing; however, the field has not yet developed a tradition of routinely examining the reliability of these behavioral measures. Reliable measures are essential to draw robust inferences from statistical analyses, and subpar reliability has severe implications for measures’ validity and interpretation. Without examining and reporting the reliability of measurements used in an analysis, it is nearly impossible to ascertain whether results are robust or have arisen largely from measurement error. In this article, we propose that researchers adopt a standard practice of estimating and reporting the reliability of behavioral assessments of cognitive processing. We illustrate the need for this practice using an example from experimental psychopathology, the dot-probe task, although we argue that reporting reliability is relevant across fields (e.g., social cognition and cognitive psychology). We explore several implications of low measurement reliability and the detrimental impact that failure to assess measurement reliability has on interpretability and comparison of results and therefore research quality. We argue that researchers in the field of cognition need to report measurement reliability as routine practice so that more reliable assessment tools can be developed. To provide some guidance on estimating and reporting reliability, we describe the use of bootstrapped split-half estimation and intraclass correlation coefficients to estimate internal consistency and test-retest reliability, respectively. For future researchers to build upon current results, it is imperative that all researchers provide psychometric information sufficient for estimating the accuracy of inferences and informing further development of cognitive-behavioral assessments.
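To make the two recommended estimators concrete, here is a self-contained illustration written from scratch rather than taken from the article: a permutation-based split-half estimate with Spearman-Brown correction, and a test-retest intraclass correlation via the psych package. `scores` is a hypothetical participant-by-trial matrix, and `t1`/`t2` are hypothetical vectors of the same measure at two time points.

```r
# Bootstrapped (random-split) split-half reliability with Spearman-Brown correction
split_half <- function(scores, n_splits = 5000) {
  n_trials <- ncol(scores)
  r <- replicate(n_splits, {
    half <- sample(n_trials, n_trials %/% 2)   # random half of the trials
    cor(rowMeans(scores[,  half, drop = FALSE]),
        rowMeans(scores[, -half, drop = FALSE]),
        use = "pairwise.complete.obs")
  })
  r_mean <- mean(r)
  (2 * r_mean) / (1 + r_mean)                  # Spearman-Brown correction
}

# Test-retest reliability as an intraclass correlation; the ICC2 row of the
# output gives the two-way random-effects, absolute-agreement, single-measure
# estimate
library(psych)
icc <- ICC(data.frame(t1, t2))
```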
I’m more than happy to be contacted about any of my work, including possible collaborations or statistics/programming consulting.