On medical anthropology, understanding ‘risk’ and barriers to achieving interdisciplinarity in the world of cancer prevention: an interview with Elspeth Davies

Elspeth Davies is a fourth year PhD student at the University of Cambridge. Funded by Cancer Research UK, her work uses anthropological methods to explore the social and ethical complexities surrounding efforts to detect cancer early.

Your research is in anthropology as applied to cancer prevention and early diagnosis. For anyone unfamiliar with anthropology as a discipline, what are some of the kinds of things that medical anthropologists look at?

Medical anthropologists study the highly variable ideas about what it means to be ill, to be well and to provide good care in different settings. Our discipline uses a methodology called ethnography, which involves spending long periods of time with the people we study. Instead of using standardised questionnaires in which researchers must decide what is relevant in advance, this method allows us to follow what matters to our participants, and to consider their experiences in context. Doing so can help us to shed new light on a range of issues, including how health inequalities develop or what it means to live with disease day-to-day.

Could you explain what you’re working on currently?

I am currently writing up my PhD thesis which explores how some cancers have become potentially controllable in the UK. By this, I mean that there have been efforts to transform malignant diseases – that were once thought to be invariably fatal – into avoidable and curable conditions, through interventions aiming to facilitate prevention and early detection. More specifically, my thesis asks the question: what does it mean to live with statistics about risk in the hope of controlling cancers?

In order to offer an answer, I spent eighteen months conducting ethnographic fieldwork with scientists developing a new screening device – a device that, it is hoped, can be used to routinely screen people for a condition called Barrett’s oesophagus. Barrett’s oesophagus, or Barrett’s, is a condition that puts individuals at higher risk of oesophageal cancer (cancer of the food pipe). People diagnosed with Barrett’s are offered regular surveillance with endoscopies – small cameras used to look inside the food pipe. This surveillance aims to detect and facilitate the treatment of any abnormal cell changes before they become cancerous or metastatic.

The second part of my fieldwork involved spending time with people, now patients, who were learning to live with diagnoses of Barrett’s. It asks what it might mean to provide care for people who are not ill but might be in the future. In turn, it considers how patients make meaningful, good lives when presented with the statistical possibility of developing a potentially fatal disease in the future.

How did you arrive at this area of work?

You could say that I was led to this work by a constellation of unfortunate and fortunate circumstances. Having been diagnosed with melanoma, a dangerous type of skin cancer, as a teenager, I had a personal interest in questions about cancer and life after diagnosis. More fortuitously, the year I graduated from my degree in Social Anthropology, my department was offering a PhD position studying the topic of cancer early detection.

I had not planned to apply for doctoral studies at that stage, but it felt like too serendipitous an opportunity to ignore. I applied and was offered the role, and now, four years later, am finishing writing up my findings.

Could you talk a little more about ‘risk’: how it is commonly understood in biomedical language, versus wider meanings – and implications – that might not be so routinely considered?  

In my work I repeatedly come back to the idea that the notion of risk, as constructed by epidemiological studies, is not the same as the notion experienced and lived by people diagnosed as ‘at risk’.

For epidemiologists, risk is a statistical abstraction that tells us what tends to happen in particular populations. For example, studies conclude that people with Barrett’s have an increased chance of developing oesophageal cancer in their lifetime compared to those without it. However, while most people with Barrett’s will never develop malignancies, these statistics cannot tell us exactly which people will. As a result, everyone with Barrett’s is offered surveillance in the UK, in the hope of improving outcomes among the minority who do develop cancer.

When risk is framed like this, screening appears to be offered with the aim of improving outcomes for populations rather than individuals; particular people are unlikely to benefit personally. The aims of these interventions are then incongruous with how they are experienced by people who become ‘at risk’ patients. With this at-risk status, individuals are often left highly aware that they could be the one who develops a deadly cancer, no matter how improbable this may be.

The tension between these dual notions of risk – that on the one hand pose cancer as a population-level possibility, and on the other as a personal fate – can be used to explain many tensions in the field of cancer control.

For example, in 2022, Public Health Wales increased the cervical screening interval for people testing HPV negative from three to five years. From the perspective of public health practitioners, and according to statistics, people who test HPV negative are at low risk of having cervical cancer. They therefore do not stand to benefit from being screened every three years. However, for some people this change was understood as a reduction in the level of care provided.

More than 30,000 people signed a petition against this proposed change. In doing so, they often mobilised unfortunate stories of young women who had died of cervical cancer to make their case. From the perspective of these signatories, cervical screening was not a statistical endeavour but a personal one. The reduction in screening frequency was understood to place them at greater risk of cervical cancer. Such conflicts raise important questions about what the aims of screening are, as well as who gets to decide what these should be.

Are there other overlooked tensions in the field of cancer prevention and early diagnosis that might be worth highlighting here? 

Cancer control is full of tensions and contradictions. This is one of the things that makes it so fascinating to study. For example, another friction that arises is between the admirable aspiration to facilitate cancer prevention and early detection and the reality of resource scarcity in the UK.

This was particularly the case in 2021 and 2022 when I was conducting my fieldwork. There was a backdrop of unprecedented circumstances presented by the COVID-19 pandemic, coupled with an NHS widely deemed to be ‘in crisis’. While preventative medical interventions promise to offer a way of reducing pressures on overstretched healthcare services, they often remain simultaneously hampered by these very same pressures.

What do you see as some of the challenges, or toughest hurdles to overcome, in creating meaningful interdisciplinarity in cancer research more broadly? What can anthropology bring to the field?

One of the key challenges is that different disciplines, at least in some important senses, speak different languages. In research terms, this means that they use different methodologies, theories and concepts. This is common, and can sometimes make it difficult for us to translate our work into terms that are meaningful and comprehensible to other fields.

Communication between disciplines, like all good communication, must start with listening – understanding where our audience might be coming from and what assumptions and beliefs they are bringing to our interaction. This process of listening to, and learning the language of, other disciplines can help us create work that makes sense to audiences outside of our home departments.

A further barrier, for anthropology in particular, may result from the hierarchical system of classifying evidence promoted by biomedical professionals. This is particularly the case among proponents of evidence-based medicine (EBM). In the eyes of EBM’s hierarchy, systematic reviews constitute the highest quality and most well-supported evidence, closely followed by randomised controlled trials. Perhaps one of the reasons that anthropological research has at times been overlooked by biomedical practitioners is because it gets placed at the bottom of the hierarchy of evidence. It is relegated to the tier, or level, described as ‘case reports’ or ‘background information’.

While anthropological evidence alone should not be used to answer clinical questions, this systematic devaluing of ethnographic methods has perhaps led scientists and clinicians to disregard what these approaches can offer. This is particularly relevant when ethnography is used in combination with other methods.

In the field of cancer prevention and early detection, I hope that my anthropological work might highlight some of the oft-overlooked practical and ethical complexities that arise amidst the admirable goal of controlling cancer by diagnosing risk. More generally, anthropological approaches can offer rich and nuanced insights into the issues of health and illness. They add stories to abstract statistics and centre voices that might otherwise be silenced.

Ethnographic fieldwork can provide a novel lens through which to think about biomedical interventions. This is valuable because, in words often attributed to Albert Einstein, “We can’t solve problems by using the same kind of thinking we used when we created them.”

The views expressed are those of the author. Posting of the blog does not signify that the Cancer Prevention Group endorses those views or opinions.