Department of History and Philosophy of Science

CamPoS (Cambridge Philosophy of Science) is a network of academics and students working in the philosophy of science in various parts of the University of Cambridge, including the Department of History and Philosophy of Science and the Faculty of Philosophy. The Wednesday afternoon seminar series features current research by CamPoS members as well as visitors to Cambridge and scholars based in nearby institutions. In the 2022–23 year, CamPoS is being organised by Jacob Stegenga (HPS) and Neil Dewar (Philosophy).

Seminars are held on Wednesdays, 1.00–2.30pm in Seminar Room 2.

Lent Term 2023

25 January

Uwe Peters (Leverhulme Centre for the Future of Intelligence, Cambridge)
Linguistic discrimination, processing fluency, and the foreign language effect in science

The English language now dominates scientific communication. Yet many scientists are not native English speakers. Their proficiency in the language is often more limited than that of native speakers, and their scientific contributions (e.g., manuscripts) in English may frequently contain linguistic features that disrupt the fluency of a reader's or listener's information processing even when the contributions are understandable. Scientific gatekeepers (e.g., journal reviewers) sometimes cite these features to justify negative decisions on manuscripts. Such justifications may rest on the prima facie plausible assumption that linguistic characteristics that hinder fast and easy understanding of scientific contributions are epistemically undesirable in science. I shall raise some doubts about this assumption by drawing on empirical research on processing fluency. I also argue that directing scientists who use English as a foreign language toward native-level English (as science journals commonly do) can have the negative consequence of reducing their potential to make scientific belief formation more reliable. These points suggest that one seemingly compelling justification for linguistic discrimination against the potentially many scientific contributions in non-native English is questionable, and that scientific gatekeepers' common insistence on native-like English can be epistemically harmful to science.

1 February

CANCELLED

8 February

Tarun Menon (Azim Premji University)

15 February

Cecily Whiteley (Philosophy, Cambridge)

22 February

Boaz Miller (Zefat Academic College)
Ways of worldfaking

Deepfakes, namely, algorithmically created realistic images and videos that make it appear as if people did something they didn't, undermine our fundamental epistemic standards and practices. Yet the nature of the epistemic threat they pose remains elusive. After all, fictional or distorted representations of reality are as old as cinema. Existing accounts of technology as extending the senses (Humphreys 2004), mediating between subjects and the world (Verbeek 2011), or translating between actants (Latour 2005) cannot characterize this threat. Existing concrete accounts of the threat of deepfakes by social epistemologists such as Regina Rini (2020) and Don Fallis (2020) fall short of their target.

Employing the notions of artifact affordance and technological possibility (Record 2013; Davis 2020), I argue that the epistemic threat of deepfakes (and CGI more generally) is that, for the first time, they afford ordinary computer users the practicable possibility of making fictional worlds indistinguishable from the real world fairly cheaply and effortlessly. Normatively, a deepfake is epistemically malignant when (1) a reasonable person is misled into believing that the fictional world is the actual world; and (2) she forms beliefs about the actual world on issues that are morally or epistemically important. For example, a satirical deepfake of Queen Elizabeth dancing to a hip-hop song is benign because a reasonable person understands that this is fiction. But a deepfake of a misogynistic speech by Obama is malignant because it misleads a reasonable person about Obama's views of women. I illustrate how this analysis generalizes to other case studies, such as a Photoshop makeover or a QAnon discussion group.

1 March

Marion Boulicault (Trinity Hall, Cambridge)
How medical data infrastructures materialize oppression

It's well known that medical practices can encode and perpetuate oppressive ideologies. Drawing on in-depth analyses of medical devices such as the spirometer (Braun 2014) and the pulse oximeter (Moran-Thomas 2020), recent scholars have argued that ideologies are not only perpetuated by practices but also materially embedded in instruments and devices. In other words, medical devices 'materialize oppression' (Liao and Carbonell 2022).

In this talk, I pose the following question: how exactly do medical devices materialize oppression? That is, what are the specific mechanisms by which oppression becomes materialized? I offer a preliminary, non-exhaustive taxonomy of materialization mechanisms, focusing on new examples and case studies that illustrate these mechanisms at work within medical data infrastructures rather than in devices and instruments. I argue that a clearer view of how these mechanisms operate suggests possibilities for building medical technologies that liberate rather than oppress.

8 March

Mazviita Chirimuuta (University of Edinburgh)
Formal idealism/haptic realism

I propose that we redirect the realism debate away from the question of the reality of unobservable posits of scientific theories and models, and towards the question of whether those theories and models should be interpreted realistically. This makes it easier to include within the realism debate sciences of relatively large and observable items, as are many branches of biology. In computational neuroscience, models are normally interpreted as representing computations actually performed by parts of the brain. Semantically, this interpretation is literal and realistic. Ontologically, it supposes that the structure represented mathematically as a computation (i.e. a series of state transitions) is there in the brain processes. I call this supposition of a structural similarity (homomorphism) between model and target, formal realism. This stands in contrast to an alternative way to interpret the model which I call formal idealism. The view here is that whatever processes exist in the brain are vastly more complicated than the structures represented in the computational models, and that the aim of modelling is to achieve an acceptable simplification of those processes. Thus, the success of the research is more a matter of structuring than of discovering pre-existing structures.

Ultimately, the realism debate is motivated by curiosity about what it is that the best scientific representations have to tell us about the world: is this thing really as presented in the model? Thus, I argue that the contrast between formal realism and formal idealism is a good template for framing the realism debate when discussing the implications of sciences of extremely complex macroscopic and mesoscopic systems, such as the nervous system. Formal idealism does not suppose that the structures given in scientific models are fully constructed or mind-dependent, but that there is an ineliminable human component in all scientific representations, because they can never depict the full complexity of their target systems and as such are the result of human decisions about how to simplify. Another way to describe the ineliminable human component is to say that models and other scientific representations are the product of the interaction between the human investigator and the target system. I use the sensory metaphor of touching (haptics) to describe this investigative process. Formal idealism is complemented by a haptic realism (Chirimuuta 2016), which acknowledges that models are the products both of constraints imposed by nature and of the constructive activity of scientists.

15 March

Philip Kitcher (Columbia)