Program

June 5, 2019
Room: Tokyo, Beckmanns Hof

 

9:00-9:30 Coffee & Welcome
9:30-10:45 Ann-Sophie Barwich (Bloomington) & Terry Acree (Cornell):
When Your Data Requires New Theory: A Novel Behavioral Measure of Olfactory Responses (And Its Challenges)
10:45-11:00 Coffee
11:00-12:15 Jan de Ruiter (Tufts):
The Problem of “Effectism” in Cognitive Science
12:15-13:45 Lunch
13:45-15:00 Insa Lawler (Bochum):
Measurement in Neurolinguistics: The Case of the N400 Effect
15:00-15:15 Coffee
15:15-16:30 Lorenza S. Colzato (Bochum/Leiden):
Enhancing Brain and Cognition: A Theory-Driven Approach
16:30-16:45 Coffee
16:45-18:00 Hasok Chang (Cambridge):
Operationalism, Conceptual Engineering, and Epistemic Iteration
19:30 Dinner at Livingroom

June 6, 2019
Room: Tokyo, Beckmanns Hof

9:00-9:15 Coffee
9:15-10:30 Joel Michell (Sydney):
Metaphysics in Psychometrics
10:30-10:45 Coffee
10:45-12:00 Alistair Isaac (Edinburgh):
Criteria for Realism about Psychological Quantities
12:00-12:15 Coffee
12:15-13:30 Edouard Machery (Pittsburgh):
Measurement and the Replication Crisis in Psychology
13:30 Lunch at Beckmanns Hof (optional)

Abstracts (in order of talks)

Ann-Sophie Barwich (Bloomington) & Terry Acree (Cornell): When Your Data Requires New Theory: A Novel Behavioral Measure of Olfactory Responses (And Its Challenges)
Is there a code in the nose, and what’s its measure? Perceptual theories typically have two foundations: they are modeled on vision and on a computational understanding of mental states (defining sensory operations by input-output correlations). Olfaction does not fit this framework. Today, no general stimulus-response model is available, and the computational operations governing odor responses remain unexplained, as the neural representation of odor deviates from that of the other senses. One way to address this challenge is to develop a robust measure of a perceptual scale that allows for computational modeling in odor perception. We propose such a measure with new behavioral data of mixture perception, using sniff-olfactometry. Specifically, we present data from binary choice tests that reveal a stable, reproducible choice pattern of probabilistic responses. We further discuss how this test meets two epistemic challenges of sensory measurement: coordination and standardization. This talk presents collaborative results of applying philosophical thinking to scientific practice.

Jan de Ruiter (Tufts): The Problem of “Effectism” in Cognitive Science
In the cognitive sciences, our most common way to empirically support theories is to collect data and then demonstrate the presence of so-called “effects” that would be predicted by our theory. An “effect” here is a difference in one or more dependent variables for different levels of one or more independent variables. To be accepted as evidence for a theory, effects should be “significant”, meaning that the demonstrated difference is shown to be unlikely to be attributable to measurement or sampling error. By using this methodology, we believe that we are following (a variant of) Popper’s falsificationism, because we formally try to reject the null hypothesis that there is no effect. In my talk, I want to address two serious problems with this ubiquitous methodology. First, it is in fact the opposite of Popper’s falsificationism. Second, and more worryingly, it results in at best very weak evidence for the theory that is supposed to be supported. I will also discuss some detrimental consequences of this methodology, and suggest some possible alternatives.

Insa Lawler (Bochum): Measurement in Neurolinguistics: The Case of the N400 Effect
Neurolinguists explore event-related brain potential responses (ERPs) when studying language processing. The so-called N400 effect is a negative-going deflection that peaks around 400 milliseconds. It is a dependent measure because it tracks the relative (as opposed to absolute) amplitude of the waveform compared to another experimental condition. This N400 effect has been linked to processing semantic incongruity in (potentially) meaningful stimuli, such as linguistic utterances, (co-speech) gestures, faces, or environmental sounds. The N400 effect also functions in indirect measurements, such as measures of the associative strength between words or clinical performance measures for stroke or brain-injured patients. In my talk, I explore paradigmatic measurements of and with the N400 effect, focusing on what validates them.

Lorenza S. Colzato (Bochum/Leiden): Enhancing Brain and Cognition: A Theory-Driven Approach
The essence of cognitive enhancement is the use of any (legitimate) means to reach one’s personal best. Kurt Lewin’s claim that “nothing is as practical as a good theory” is the leitmotif of this talk about cognitive enhancement. Whereas in the past the field of cognitive enhancement used mainly effect-driven approaches (which seek to demonstrate that an enhancing intervention can have an effect without explaining how it modulates the targeted function), this talk proposes a mechanistically oriented, theory-driven approach that tries to understand how a particular intervention modulates cognition. I will discuss three theory-driven methods. First, I will introduce the idea that the administration of the amino acid L-Tyrosine, a precursor of dopamine, refills resources required for cognitive-control operations. Second, I will show that transcutaneous vagus nerve stimulation may be a promising novel noninvasive brain stimulation tool for enhancing the recognition of others’ emotions. Third, I will illustrate how recently popular sub-perceptual doses of psychedelic substances such as truffles, referred to as “microdosing”, allegedly have multiple beneficial effects, including enhanced creativity and problem-solving performance. I will conclude that theory should guide the use of the right methods in cognitive science.

Hasok Chang (Cambridge): Operationalism, Conceptual Engineering, and Epistemic Iteration
Operationalism in its classic form as advanced by Bridgman was an insistence on making concepts empirically meaningful through specific measurement methods. I seek to build on this line of philosophical thought by adding two further perspectives. Firstly, from the perspective of “conceptual engineering” we can see the practice of operationalism as an active intervention by the inquirer. The creation and choice of measurement methods contribute to the shaping of the concept in question, and we should consider what kinds of measurements are suitable for the particular functions that we want the concept to serve. Secondly, this operationalist conceptual engineering is best done through a process of epistemic iteration, in which inquiry is knowingly launched on the basis of an imperfect foundation, which can be improved by incorporating the outcomes of the very inquiry which was carried out on its basis. The iterative method offers a realistic possibility for an ampliative building of knowledge. It is conservative in a productive way, and promotes the coherence of the whole investigative enterprise. I will illustrate these points by revisiting the case of Kelvin’s creation and operationalization of the concept of absolute temperature.

Joel Michell (Sydney): Metaphysics in Psychometrics
Caught up in the a priori conviction that any science worthy of the name must measure the attributes it investigates, psychometricians adopted a metaphysical paradigm (without acknowledging it as such) to secure their claim that mental tests measure psychological attributes, a claim threatened by the intrinsic inadequacy of test data to support it. The fundamental axiom of this paradigm was Thorndike’s Credo (“All that exists, exists in some amount and can be measured”), which entails its central lemma, the psychometricians’ fallacy (“All ordered attributes must be quantitative”), which, in turn, supplies psychometrics’ primary methodological principle (“interval scales can always be conjured out of ordinal scales”). Logically, this framework is no stronger than its fundamental axiom, and refuting that axiom casts both the central lemma and the guiding methodological principle logically adrift. Stripped of its metaphysics, psychometric measurement is exposed as no more than the semblance of measurement.

Alistair Isaac (Edinburgh): Criteria for Realism about Psychological Quantities
A traditional worry about psychophysics, and the measurement of psychological states or properties in general, has been that psychological states are inherently non-quantitative. Color percepts, for instance, do not exhibit those features of numbers, such as standing in ratios or susceptibility to concatenation, required to legitimate quantitative representation. Proponents of this line of argument have leveled it to criticize the measurement of psychological properties as meaningless. Applying recent work from the measurement realism debate, I turn this traditional line of argument on its head, arguing instead that we should accept only those psychological properties susceptible to successful measurement as real. On this view, the criteria for successful measurement are stability across repeated value assignments, susceptibility to increased precision of assignment, and convergence in value across measurement procedures. An implication is that not only are measurements of perceptual attributes such as color, pitch, or odor sensations meaningful, but their success by these criteria legitimates realism about quantitative models of perception. I conclude by examining the implications of this line of reasoning for psychometrics and for realism about more general psychological attributes, such as intelligence and personality.

Edouard Machery (Pittsburgh): Measurement and the Replication Crisis in Psychology
In this talk I will examine whether and how measurement has contributed to the replication crisis in psychology and how this aspect of psychological practice can be improved.