Philosophy of Cancer: the ethics and epistemology of detection and prevention

One Day Workshop
Department of History and Philosophy of Science, University of Cambridge
13 December 2017, 10am–6pm

Speakers: Anya Plutynski (Washington University in St Louis); Alex Voorhoeve (LSE); Lynette Reid (Dalhousie); Justin Biddle (Georgia Institute of Technology). Titles and abstracts below.

Register online

We gratefully acknowledge financial and administrative support from the Cambridge Cancer Centre Early Detection Programme, and from the Department of History and Philosophy of Science. This meeting has also been supported by the work of the Cambridge Philosophy and Medicine group.

Please direct any questions to the conference organisers: Stephen John (sdj22@cam.ac.uk) and Joseph Wu (jw895@cam.ac.uk).

This is the first of three workshops on philosophical and ethical issues around cancer to be held in Cambridge during 2017–2018. The subsequent meetings are on the ethics and epistemology of early detection (15 March 2018, at the VHI, St Edmund's College: organiser, Gianmarco Contino, gc502@mrc-cu.cam.ac.uk) and on resource allocation in cancer (5–6 April 2018 at CRASSH: organisers, Gabriele Badano, gb251@cam.ac.uk, and Joseph Wu, jw895@cam.ac.uk). If you are interested in these events, please get in touch with the relevant organisers directly.

Schedule

10.00 Welcome
10.10 'Epistemic risks in prostate cancer screening: implications for ethics and policy' (Justin Biddle)
11.25 Coffee
11.50 'Is cancer due to bad luck?' (Anya Plutynski)
1.05 Lunch
2.00 'Precision in medical imaging: what are the ethical and epistemic trade-offs?' (Lynette Reid)
3.15 Tea
3.30 'Equality, ambiguity and screening' (Alex Voorhoeve, joint work with Thomas Rowe of Virginia Tech)
4.45 Concluding remarks
5.00 Wine reception
6.00 Finish

Talks and abstracts

'Epistemic risks in prostate cancer screening: implications for ethics and policy' (Justin Biddle)

This paper examines recent changes in recommendations for prostate cancer screening, which reflect a greater emphasis on patient autonomy. Respect for patient autonomy requires, at a minimum, that doctors communicate clearly to patients the risks and benefits of treatment options. Drawing upon recent work on epistemic risk, I examine the processes of risk assessment in prostate cancer screening, and I argue that prostate cancer diagnosis often involves value-laden judgment calls on the part of physicians. These value-laden judgments often go unnoticed by the physicians making them, and this fact in turn poses challenges for effective risk communication to patients. I conclude with a discussion of how the epistemic risks involved in prostate cancer diagnosis might be managed if respect for patient autonomy is to be advanced.

'Precision in medical imaging: what are the ethical and epistemic trade-offs?' (Lynette Reid)

'An imaging test is a way to let doctors see what's going on inside your body', says the American Cancer Society in its information for patients. It is natural to assume that a more accurate imaging test is always desirable when 'looking inside the body' to screen for or to diagnose disease. But what does 'accuracy' mean in imaging tests and in clinical practice? What are the limits we reach and trade-offs involved in improving accuracy? Current debates about overdiagnosis in cancer screening show the need to scrutinize critically our deeply held assumptions about objectivity in medical imaging. I propose for debate that there may be conditions under which an imprecise medical image is a better medical image. What ethical and epistemic challenges would be raised by a deliberate choice for imprecise medical imaging?

'Is cancer due to bad luck?' (Anya Plutynski)

In 2015, Tomasetti and Vogelstein published a paper in Science containing the following provocative statement:

... only a third of the variation in cancer risk among tissues is attributable to environmental factors or inherited predispositions. The majority is due to 'bad luck', that is, random mutations arising during DNA replication in normal, noncancerous stem cells.

The paper – and perhaps especially this rather coy reference to 'bad luck' – became a flash point for a series of letters and reviews, followed by replies and yet further counterpoints. The aim of this talk is, first, to briefly explain and describe what Tomasetti and Vogelstein meant by 'bad luck'. I then turn to a comment made by Nowak and Waclaw (2017), who wrote that 'the correlation "explains" the data in the statistical but not in a biological sense'. The second part of the talk concerns what it means to explain in the 'statistical' but not the biological sense. The questions raised by critics concern not simply whether stem cell divisions do account for these average differences in incidence, but whether such statistical correlations are 'sufficient' to explain. In other words, much of the exchange turns on the matter of explanatory sufficiency. The debate thus serves as an interesting case study in what it means (or ought to mean) to explain patterns of cancer incidence. I conclude by considering some implications of the debate for both primary and secondary prevention.

'Equality, ambiguity and screening' (Alex Voorhoeve, joint work with Thomas Rowe of Virginia Tech)

Decision-makers are in an ambiguous situation when they are not in a position to assign precise probabilities to all of the relevant possible outcomes of their actions. Such situations are common in medicine and public health: new measures to detect and treat cancer are an example. Many people respond to ambiguous situations in a cautious or ambiguity-averse manner, and there are good reasons for taking such ambiguity aversion to be both prudentially and morally permissible. We put forward an egalitarian view of distributive justice that incorporates ambiguity aversion. We analyse when the aims of reducing inequality and limiting ambiguity are congruent and when they conflict, and highlight a number of novel implications of the proposed view. We also demonstrate that ambiguity aversion renders a range of distributive views, from egalitarianism to utilitarianism, incompatible with the Pareto principle applied to ambiguous prospects. (This principle holds that when a first policy yields more valuable prospects for each person than a second policy, then the first policy should be chosen over the second.) We argue that this gives us strong reasons to reject the Pareto principle and conclude that it should not be used in evaluating screening policies.