Honorable Mention: Alexandra Rutherford

The editors of History of the Human Sciences are delighted to learn that Alexandra Rutherford’s ‘Surveying Rape,’ published in the journal in 2017, has received an honorable mention at the 2019 awards of the Forum for History of Human Science.

Rutherford’s article is an account of the role that social science methods play in “realizing” sexual assault, amid public discussion of (and conservative-led controversy about) the statistic that 1 in 5 women students on (US) college campuses experience sexual assault. Setting aside questions of methodological validity, Rutherford shows how the survey, as a measuring device, has become central to the “ontological politics” of sexual assault. Drawing on histories of feminist social science, the article suggests that the social and political life of the survey has been a central actor in rendering sexual assault legible: “only by conceptualizing the survey as an active participant in the ontological politics of campus sexual assault,” Rutherford argues, “can we understand both the persistence of the critical conservative response to the ‘1 in 5’ statistic and its successful deployment in anti-violence policy.”

The editors would like to extend their very warmest congratulations to Professor Rutherford for this much-deserved recognition. The article is free to download for the rest of the month at this link.

What common nature can exist?

Elizabeth Hannon & Tim Lewens (Eds) Why we disagree about human nature. Oxford University Press, 2018. 206 pp. £30 hbk.

By Simon Jarrett

If one day a disturbingly precocious child were to ask what part you had played in the nature/nurture war, what would you reply? Were you with the massed intellectual ranks who, since the philosopher David Hull’s ground-breaking 1986 classic ‘On Human Nature,’ have denied that there is any such thing as a common nature for all humans? Or did you join Steven Pinker’s 2002 counter-revolution, when The Blank Slate sought to reclaim the ground for the Enlightenment, and the idea that there is something essentially the same about all humans across time, space and culture?

If you are not quite sure where you stand, or perhaps too sure where you stand, then this pleasingly eclectic collection of ten essays on human nature, and whether we can meaningfully talk about such a thing, will be of great help. Its contributors, who come from psychology, philosophy of science, social and biological anthropology, evolutionary theory, and the study of animal cognition, include human nature advocates, deniers, and sceptics. We could perhaps call the sceptics ‘so-whaters’ – they agree there may be something we can attach the label ‘human nature’ to, but query whether it really matters, or carries any explanatory weight. These people would take our (hopefully apocryphal) infant prodigy aside and say, ‘well there are some conceptual complexities here that make it quite difficult to give you a straightforward answer.’

Human nature remains, alongside consciousness, one of the great explanatory gaps, a question that permeated philosophical enquiry in antiquity, lay at the heart of the Enlightenment ‘science of Man’, and now forms a central anxiety of modernity. The over-arching problem is, in essence, this: are there traits and characteristics that are biological, and not learned or culturally acquired, which we can say form something called the nature of the human, and which not only define humans as a unified entity but also differentiate them from all other species? In which case, what on earth are they? Or: are we essentially constructed by culture, our traits and characteristics formed by experience, language, learning and social relations, and once we strip away these veneers we find no inner essence that unites us as a human species, no meaningful shared oneness other than what we have made ourselves? In which case, what on earth do we mean by ‘we’?

As Hannon and Lewens’ title suggests, we all disagree about human nature and – as the final chapter warns us – are probably destined always to do so, not least because of the term’s epistemological slipperiness. However, one thing on which the contributors find consensus is that the essentialist concept of human nature – ‘that to be human is to possess a crucial “human” gene, or a distinctively “human” form of… intelligence, language, technical facility, or whatever’ (pp.2-3) – is dead. The essentialist idea was killed by Charles Darwin, because if species variation occurs across time and space then there can be nothing invariable in their form and structure, and therefore nothing that we can call a fixed, universal and unchanging ‘nature’. If humankind has adapted, evolved and varied over millions of years, and across numerous environments, what common nature can exist amongst all humans, past and present?


Relation of the human face to that of the ass, 17th century. Credit: Wellcome Collection. CC BY

The death of essentialism, however, does not mean the death of the idea of a human nature. Four essays that defend the idea begin the collection, starting with a defence by Edouard Machery of his much-assailed (including in this book) ‘nomological notion’. By this Machery means identifying typicality in human beings, traits that are common to most humans, but which do not have to be universal, and do not even have to hold evolutionary significance. He includes only traits that are demonstrably biologically evolved, and excludes cultural processes, on the grounds that just because most people learn something, this does not become an essential trait of humanness. His theory falls far short of, and explicitly rejects, essentialism, but nevertheless argues that traits of groups of typical human beings, and of individual typical humans in particular life stages, constitute something we can call human nature: it is the properties that humans tend to possess as a result of evolution.

Grant Ramsey, in his contribution, calls Machery’s theory a ‘trait-bin’ account, which essentially assembles a series of typical traits and places them together into a single bin marked ‘human nature’ while assigning all other traits, cultural, environmental or whatever, to entirely separate bins. Ramsey proposes instead a ‘trait cluster’ account which, rather than assembling a collection of natural traits, captures the complex ways in which traits are related to each other, and the patterns created over life histories by their interactions. The sum of these patterns, seen as potential developmental trajectories at various stages of life, gives us human nature. As Ramsey puts it: ‘trait cluster accounts hold that human nature lies not in which traits individual humans happen to have, but in the ways the traits are exhibited over human life histories’ (p.56). This is more encompassing than Machery’s account, which excludes atypical traits, but it maintains that there is a nature to be derived from an exploration of all traits and their interactions.

Karola Stotz and Paul Griffiths offer a ‘developmental systems account’ which echoes Ramsey’s but argues for the adoption of the human developmental environment into an account of human nature. They use the idea of ‘niche construction’ – whereby organisms singly and collectively modify their own niches to transform natural selection pressures – to argue that there is a uniquely human developmental niche. This is the environment created for human infants comprising parental interaction, schooling and artefacts such as tool use and language. In this sense nature is culture, and humans create the selection pressures that act on future generations. Human nature is human development; environment is as important as any biological or genomic essence.

The final advocate of a specific human nature is Cecilia Heyes, who echoes Machery in believing that there are certain traits that comprise human nature, but builds into this a theory of what she calls ‘evolutionary causal essentialism’, a key element of which is ‘natural pedagogy’. This sees the teaching of human infants not as an exclusively cultural phenomenon, but as a heritable system whereby nature makes human infants receptive to teaching signals.

The reply of the sceptics to the notion of a ‘human nature’ begins with John Dupré’s ‘process perspective’, which argues that a human cannot be considered as a thing or substance (and therefore something which has a nature) but is rather a process. Humans comprise a life cycle, and are associated with different properties or traits at its different stages. In their very early stages, for example, and often in their latest stages, humans lack language. We cannot, therefore, associate humans with a fixed set of properties; they are instead a plastic process responding to changing environments, and sometimes changing those environments themselves. We could, if we like, call this process itself ‘human nature’ but such a ‘descriptive venture’ would carry little conceptual weight.

Kim Sterelny’s ‘Sceptical reflections on human nature’ argues, in a similar vein, that even if there is some set of traits shared by most humans – what he calls a ‘cognitive suite’ – describing these as human nature is ‘bland and uninformative’ and lacks any explanatory power. Such a descriptive account of human nature is little more than a ‘field-guide’ to our species – in which case, Sterelny asks, do we need it?


SEM image of human hair. Credit: David Gregory & Debbie Marshall. CC BY

Kevin Laland and Gillian Brown recommend that the concept of human nature simply be abandoned. It is, they argue, socially constructed in a number of ways. Evolutionary history is not easily separated into biological and cultural evolutionary processes, since each is dynamic and interacts with the other. Like Stotz and Griffiths they recognise the uniqueness and importance of the developmental niche in the human process, but see it as a product of inseparable internal and constructive processes which cannot be incorporated into a theory of an evolved nature. More important, they argue, is to build an understanding of the human condition over developmental and evolutionary timescales, in all its diversity and multiple processes.

Peter Richerson’s survey of major theorists from Darwin to Pinker rejects any form of strong human nature claim. The later theorists, he notes, all have a strong commitment to the ‘Modern Synthesis’ – a term popularised by Julian Huxley in 1942 – which, in very simple terms, seeks to combine evolution and heredity. For Richerson, the Modern Synthesis account of human nature, with its rejection of the fundamental role of cultural evolutionary processes against overwhelming evidence, has reached the end of the line.

Christina Toren weighs in with an anthropological broadside against the notion that some traits are products of nature rather than culture. Based on her own ethnographic studies, she calls for the rejection of notions of both nature and culture, and calls instead for a focus on ontogeny – the development of the human organism over its life cycle, and within its environments and social relations. Toren’s model focuses on the microhistorical processes that build each individual: ‘mind is a function of the whole person that is constituted over time in intersubjective relations with others in the environing world.’ No ‘nature’ can capture such complexity.

The collection ends with Maria Kronfeldner’s elegant interrogation of the term ‘nature’, and the power relations lurking within its appropriation by intellectuals seeking to lay out a domain of study they can claim as their own. This welcome historicization of the subject begins in Greek antiquity and journeys through the Enlightenment, to the advent of heredity (which, Kronfeldner notes, shifted from the adjective ‘hereditary’ to a noun denoting a scientific field), and finally to Machery’s nomological account, where the book began. In each case the word ‘nature’ is used to denote a field-defining phenomenon in need of explanation – explanations which those using the term saw themselves as having the authority and capacity to produce. It was also used in contradistinction: to the supranatural, to nurture, to culture, and to other enemies which the ‘nature’ power claim could dismiss as irrelevant. Nature, in these claims, was ‘always what could be taken for granted… solid, authoritative’ and carrying some form of objective reality (p.202).

It falls to Kronfeldner to explain why ‘we’ disagree, and will probably always disagree, about human nature. Firstly, when talking about our own nature, we fall into what she calls ‘essentialist traps’ involving normalcy and normativity, traps that we do not fall into when more carefully describing other species. Secondly, we have traditionally tried to identify ‘what it means to be human’, which has led us to apply to human nature a description of what characterises our in-group, consequently dehumanising out-groups by placing them outside human signifiers. In this context, different human groups will always disagree about what it means to be human, and thus about human nature. Finally, we load the term ‘human nature’ with too many contradictory and incompatible meanings. Do we want it to be a description of a bundle of properties, a set of explanatory factors, or a boundary-determining classification? It can never be all three, but precisely which epistemological duty it is being asked to perform at any one time in any one context is often obfuscated. We will never agree, because we are arguing from parallel starting points that are invisible to one another.

At the heart of the human nature debate lies, since the collapse of the essentialist view, not only the issue of whether there is such a thing, but also whether such a thing is worth thinking about. If the account of human nature spreads so widely, becoming the set of genetic, epigenetic and environmental traits that we can observe in humans, then does it just become a conceptual mush, consisting of everything that humans ever do or experience? If purely descriptive, then does it lack any explanatory power, thereby rendering it conceptually worthless? Or is there something about our nature that binds us, and is worth knowing? This is a defining issue for those who practice or study the human sciences, which are, after all, the study of humans from diverse perspectives. The collection is a hugely helpful trek across much of the best of the current scholarship, and an elegant framing of the key debates, for which the editors should be congratulated.


Simon Jarrett is a visiting lecturer and honorary research fellow at Birkbeck, University of London. His monograph on the history of ‘idiocy’ will be published in 2020. With Jan Walmsley, he is co-editor of Intellectual disability in the twentieth century: transnational perspectives on people, policy and practice (Policy Press, 2019). His current research is on theories of consciousness in relation to the deficient mind.

The Buddhistic Milieu


Matthew Drage is an artist, writer and postdoctoral researcher. He recently completed his PhD at the Department of History and Philosophy of Science at Cambridge, and is now Post-Doctoral Research Fellow in the History of Art, Science and Folk Practice at the Warburg Institute, in the School of Advanced Study, University of London. His first article from his PhD, ‘Of mountains, lakes and essences: John Teasdale and the transmission of mindfulness’, appeared in December 2018, as part of the HHS special issue, ‘Psychotherapy in Europe,’ edited by Sarah Marks. Here Matthew talks to Steven Stanley – Senior Lecturer in the School of Social Sciences at Cardiff University, and Director of the Leverhulme-funded project, Beyond Personal Wellbeing: Mapping the Social Production of Mindfulness in England and Wales – about the article, and his wider research agenda on mindfulness in Britain and America.

Steven Stanley (SS): This article is your first publication based on your PhD research project, which you recently completed. Congratulations! Can you tell us a bit about your PhD project?

Matthew Drage (MD): Thank you! So yes, my PhD project was a combined historical and ethnographic project which focused on the emergence of “mindfulness” as a healthcare intervention in Britain and America since the 1970s. My main question was: why was mindfulness seen by its proponents as such an important thing to do? Why did they seek to promote it so actively and vigorously? I focused on a key centre for the propagation of mindfulness-based healthcare approaches in the West: the Center for Mindfulness in Medicine, Health Care, and Society at the University of Massachusetts Medical Center. I also looked at the transmission of mindfulness from Massachusetts to Britain in the 1990s – this is an episode I narrate in the article.

I had a real sense, when I did my fieldwork, archival research and oral history interviews, that for people who practice and teach it as their main livelihood, mindfulness was something like what the early 20th century sociologist Max Weber called a vocation. I had a strong impression that this devotion to mindfulness as a way of relieving suffering was what helped mindfulness to find so much traction in popular culture. While my PhD thesis doesn’t offer empirical support for this instinct, it does focus very closely on why mindfulness seemed so important to the people who propagated it. I argued that this was because mindfulness combined some of the most powerful features of religion – offering institutionalised answers to deep existential questions about the nature of human suffering and the purpose of life – while at the same time successfully distancing itself from religious practice, and building strong alliances with established biomedical institutions and discourses.

Maybe the real discovery – which is something I only mention briefly in this article – is that religious or quasi-religious ideas, practices and institutions – especially Buddhist retreat centres – were crucial for making this separation possible. Mindfulness relied heavily on Buddhist groups and institutions (or, at least, groups and institutions heavily influenced by Buddhism) for training, institutional support and legitimacy, whilst at the same time deploying a complex array of strategies for distancing itself from anything seen as potentially identifiable (to themselves and to outsiders) as religious.

Matthew Drage

More specifically, most mindfulness professionals I met sought to distance themselves from the rituals, images, and cosmological ideas associated with the Buddhist tradition (for example chanting, Buddha statues or the doctrine of rebirth). But at the same time, many “secular” mindfulness practitioners shared some fundamental views with contemporaneous Buddhist movements. Many held the view that the ultimate goal of teaching mindfulness in secular contexts was to help people to entirely transcend the suffering caused by human greed, hatred and delusion: that is, reach Nirvana, or Enlightenment, the central goal of Buddhist practice. And the sharing of these views between Buddhist practitioners and secular mindfulness teachers was helped by the fact that the latter frequently attended retreats with local Buddhist groups – indeed, often helped lead those groups! In my project I try to show how blurry the lines were, and that this blurriness was really at the heart of what the secular mindfulness project – at least in its early stages – was about: trying to keep the transcendental goal of Buddhism intact whilst shedding aspects of it that were seen as mere cultural accretions, deliberately blurring the boundaries between the religious and the secular. 

SS: How did this project come about?

MD: I came across secular mindfulness in 2011 through my own personal involvement with religious Buddhism. It was clearly on the rise, and while I wasn’t that interested in practising meditation in a secular context, I could see it was probably going to get big. Mindfulness seemed part of a more general cultural trend towards using science and technology to reshape the way the individual experiences and engages with the world around them. Technological developments like personal analytics for health (tracking your own fitness with wearable devices, say), and increasingly personalised user-experiences online, also seemed to exemplify this trend. When I decided to do a PhD in 2013, I was interested in a very general way in questions of subjectivity and technology in contemporary Western culture, and I picked the topic that seemed to fit best with my existing interests.

SS: Your article makes an important contribution to the historiography of recent developments in clinical psychology in Britain, especially the development of the so-called ‘third wave’ of psychotherapy (that is, approaches that include mindfulness and meditation). In particular you highlight the perhaps unexpected influence of alternative religious and spiritual ideas and practices on the emergence of British mindfulness in the form of Williams, Teasdale and Segal’s volume, Mindfulness-Based Cognitive Therapy, in the 1990s. You have also unearthed some fascinating biographical details regarding living pioneers of British mindfulness. Did you know what you were looking for before doing your study? Were you surprised by what you found?

MD: The simple answer is sort of, and yes! I kind of found what I was looking for, and (yet) I was surprised by what I found. 

When I began my research I was convinced that mindfulness was just another form of Buddhism, slightly reshaped and repackaged to make it more palatable. My supervisor, the late historian of psychoanalysis Professor John Forrester, warned me about taking this approach. I remember him telling me, “If you keep pulling the Buddhism thread, the whole garment will unravel!” And unravel it did. After about three years, I realised that the most central metaphysical commitments of the mindfulness movement were not especially Buddhist, but owed as much, if not more, to Western esotericist traditions. By this I mean the 19th century tradition that includes the spiritualist theologian Emanuel Swedenborg, the American Transcendentalists (e.g. Henry David Thoreau and Ralph Waldo Emerson) and, in the 20th century, people like the countercultural novelist and philosopher Aldous Huxley. These thinkers shared, amongst other things, the idea that there is a perennial, universal truth at the heart of all the major religions. The influence of this view was often, I found, invisible to mindfulness practitioners themselves. Indeed, it was invisible to me for a long time. They, like me, had often encountered Buddhism through the lens of these very Western, esotericist religious or spiritual ideas, so they just appeared as if they’d come from the Buddhist tradition. So while I wasn’t surprised by the influence of spiritual ideas on mindfulness, I was surprised by their source.

I was also surprised by the conclusions I reached about its relationship with late 20th century “neoliberal” capitalism. I’m not quite ready to go public with these conclusions yet, but watch this space. I’ll have a lot to say about it in the book I’m working on about the mindfulness movement.

SS: As you say in your article, mindfulness has become a very popular global phenomenon, which in simple terms is about being more aware of the present moment. When we think of mindfulness, we tend to think of ‘being here now’. What was it like studying mindfulness as a topic of historical scholarship? And, vice versa, mindfulness is sometimes understood as referring to, as you say, a ‘realm beyond historical time’. What lessons are there for historians from the world of mindfulness?

MD: A really great question. There is a fundamental conflict between my training as an historian and the views I was encountering amidst mindfulness practitioners. They tended to use history in very specific ways to legitimise their views. Mindfulness was taken to be both a universal human capacity (and thus beyond any specific historical or cultural contingency) and primordially ancient: a kind of composite of the extremely old and the timeless. If mindfulness had a history at all, so the story within the mindfulness movement tended to go, it was coextensive with the history of human consciousness.

I spent a lot of time thinking and writing about the history of this view of the history of mindfulness. This was challenging because it often left me feeling as though I was being somehow disloyal to my interlocutors within the mindfulness movement; as though I was – in a way that was very hard to explain to them – undermining a key but implicit pretext for their work. In the end I tried to present a view of mindfulness which takes seriously its claims to universality by examining the historicity of those claims. I do not want to assume that there are no universals available to human knowledge; and if there are, then – as feminist science and technology studies scholar Donna Haraway argues in her incredible 1988 essay, “Situated Knowledges” – universals are always situated, emerging under very specific historical conditions. My main theoretical concern came to be understanding and describing the conditions for the emergence of universalising claims about humans.

To answer the other part of your question: I think mindfulness teaches historians that time is itself a movable feast; that we should take seriously the possibility of a history of alternative or non-standard ways of thinking about time. Mindfulness practitioners often talk a lot about remaining in the “present moment,” a practice which you could think of in this way: it takes the practitioner out of the usual orientation to time, to past and future, and creates quite a different sense of the way time passes. I found that institutionalised forms of mindfulness practice were, to some extent, organised to support this change in one’s approach to time. I suspect this is also linked to an idea that I talk about in my article, the idea that mindfulness is somehow “perennial” or “universal.” There is a sense in which by practising mindfulness, and especially by practising on retreat, one is removing oneself from the usual run of historical time. I think that it would be extremely interesting to think about how to do a history of this phenomenon; a history of the way people, especially within contemplative traditions, have sought to exit historical time.

Steven Stanley

SS: Many researchers of mindfulness also practice mindfulness themselves. Did you practice mindfulness as you were studying it? If you did, how did this work in relation to your fieldwork?

MD: Yes, I did. I was reluctant to do so initially, mainly because I had my own Buddhist meditation practice, and didn’t want to add another 40 minutes to my morning meditation routine. However, when I started meeting people in the mindfulness movement, they were very insistent that mindfulness could not really be understood without being experienced. While carrying out my PhD research I went to a lot of different teacher training retreats, workshops and events, and even taught an 8-week mindfulness-based stress reduction (MBSR) course to students at Cambridge. I think that this was an indispensable part of my research, to experience first-hand what people were talking about when they spoke about mindfulness. Participating in the shared sense of vocation that I encountered amongst many mindfulness professionals showed me just how emotionally compelling mindfulness was.

SS: Mindfulness is often presented as a secular therapeutic technique which has a scientific evidence base – and as having completely moved away from its religious roots. Does your work challenge this idea and, if so, how? And, related to this, what do you mean in your article by the ‘Buddhistic milieu’?

MD: As I say above, I do mean to complicate this idea that mindfulness is a straight-up medical intervention, moving ever-further from its religious roots. I think perhaps the development of mindfulness as a mass-cultural phenomenon roughly follows this trajectory. But this trajectory is also in itself complex: the parts of the mindfulness movement that I studied were also an attempt at making society more sacred, using secular biomedical discourse, institutionality and rationality as a means of doing so – although most people wouldn’t have talked about it in this way. Secular biomedicine, at least for the earliest proponents of mindfulness, was seen as a route through which what we might think of as a special kind of spiritual force (though they didn’t think of it like this) – a force which, in my view, has very much to do with what we normally call religion – could be transmitted.

I mean by the ‘Buddhistic milieu’ to refer to something fairly loose – the constellation of communities, institutions, texts and practices which are strongly influenced by the Buddhist tradition, but which do not – or do not always – self-identify as Buddhist. It’s a coinage inspired by sociologist of religion Colin Campbell’s idea of a “cultic milieu,” a term he used to describe the emergent New Age movement in the 1970s. For Campbell, the cultic milieu is a community of spiritual practitioners characterised by individualism, loose structure, low levels of demand on members, tolerance, inclusivity, transience, and ephemerality. When I talk about a Buddhistic milieu here, I mean something like this, but with Buddhism (very broadly construed) as a focus. Some traditions, such as the Insight meditation tradition, which did much to give rise to the secular mindfulness movement, especially encourage this type of relationship to Buddhist practice, emphasising their own secularity, and insisting on their openness to practitioners from any faith tradition.

SS: You suggest that the transmission of mindfulness follows a ‘patrilineal’ lineage which is captured by terms like dissemination, essence, seminal and birth. Your focus is very much on the male ‘founding fathers’ of Mindfulness-based stress reduction (MBSR) and Mindfulness-based cognitive therapy (MBCT) rather than the women pioneers of the movement. Given that such stories of male founders have been troubled by feminist and revisionist historians of science and psychology since the 1980s especially, can you tell us more about the gender politics of the mindfulness movement and give us a sense of the role female leaders have played in the movement?

MD: An excellent but difficult line of questioning! When I first wrote this paper – and when I started my PhD – I took a much more explicitly feminist perspective. But as I started to write, I was confronted by how incredibly sensitive a topic this is, and I’m still not quite ready to say anything very definite. Mindfulness was not, nor do I think we should expect it to have been, impervious to the tendency towards patriarchal domination that permeates society in general. And, as you suggest here, we might fruitfully read some of the key symbols of male power I identify in my article as a sign of this tendency. I can’t say much more for now by way of analysis, but I’m aiming to tackle this issue more directly in the book.

I can give a couple of cases, though, which I plan to explore in more detail in the future. The first is the role of the meditator and palliative care worker Peggie Gillespie, who worked with Jon Kabat-Zinn in the very earliest days of his Clinic in Worcester, Massachusetts (where he first developed Mindfulness-Based Stress Reduction). Gillespie joined Kabat-Zinn as co-teacher in 1979, either in the very first mindfulness course he taught to patients at the University of Massachusetts Medical Center, or not long afterwards. She then acted as his second-in-command for the first couple of years of the Stress Reduction Clinic’s existence. She was certainly involved in developing MBSR (which was called SR&RP – the Stress Reduction and Relaxation Program – for the first decade of its life), and even wrote the first ever book about MBSR, her 1986 work Less Stress in Thirty Days. To my knowledge, however, Gillespie only gets a single mention in any writing anywhere about the history of MBSR – in the foreword to Jon Kabat-Zinn’s Full Catastrophe Living. The second example is the relative neglect of Christina Feldman. It wasn’t until the very end of my research period that I realised just how influential a figure Feldman has been – she had led the retreat on which Kabat-Zinn had his idea for MBSR, and went on to be the primary meditation teacher of one of the main early proponents of British mindfulness, the cognitive psychologist John Teasdale. Although again she’s rarely mentioned, in a sense she oversaw the birth of secular mindfulness both in Britain and in America. I’m hoping that she’ll grant me an interview, so that I can write her into the book!

SS: If a teacher or practitioner of mindfulness is interested in your research, and wants to know more about the history of mindfulness, what texts would be in your History of Mindfulness 101?

MD: So, when it comes to straightforward history, I’d go for Jeff Wilson’s (2014) Mindful America, Anne Harrington’s (2008) The Cure Within, Mark Jackson’s (2013) The Age of Stress, and David McMahan’s (2008) The Making of Buddhist Modernism. These books all do important work both in narrating episodes in the history of mindfulness since the 1970s and in situating those episodes amidst broader currents in the history of science, medicine, and religion. Finally, Wakoh Shannon Hickey’s book Mind Cure: How Meditation Became Medicine was published just a couple of weeks ago, in March 2019. I haven’t read it yet, but I know something of her doctoral research into the history of MBSR, and suspect it will provide a much more in-depth and focused exploration than has yet been seen.

Matthew Drage is a Post-Doctoral Research Fellow in the History of Art, Science and Folk Practice at the Warburg Institute, School of Advanced Study, University of London.

Steven Stanley is Senior Lecturer at the School of Social Sciences, Cardiff University.

Kinds of Uncertainty: Speaking in the Name of Doubt

This is the second part of a two-part interview between Vanessa Rampton, Branco Weiss Fellow at the Chair of Practical Philosophy, ETH Zurich, and the anthropologist Tobias Rees, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles and author of the new monograph After Ethnos (Duke University Press). The discussion took place following a workshop on Rees’s work at the Zurich Center for the History of Knowledge in 2017. You can read the first part of the interview here.

4. Uncertainty and/as Political Practice

Vanessa Rampton (VR): I want to continue our conversation by asking you about the implications of foregrounding uncertainty – and the ‘radical openness’ you mentioned earlier – for aspects of life that are explicitly normative. Take politics, for example. Have you thought about the political implications of embracing uncertainty, and about what might be necessary to facilitate communication, or participation, or whatever it is you think is important?

Tobias Rees (TR): For me, the reconstitution of uncertainty or ignorance is principally a philosophical and poetic practice. These concepts are not reducible to the political. But they can assume the form of a radical politics of freedom.

VR: How so?

TR: For a long time, in my thinking, I observed the classical distinction between the political as the sphere of values and the intellectual as the sphere of reason. And as such I could find politics important, a matter of passion, but I also found it difficult to relate my interest in philosophical and anthropological questions to politics. And I still think the effort to subsume all Wissenschaft, all philosophy, all art under the political is vulgar and destructive. However, over the years, largely through conversations with the anthropologist Miriam Ticktin, I have learned to distinguish between a concept of politics rooted in values and a concept of politics rooted in the primacy of the intellectual or the artistic. I think that today we often encounter a concept of politics that is all about values, inside and outside of the academy. People are ready to subject the intellectual –– the capacity to question one’s values –– to their beliefs and values.

VR: For example?

TR: This is much more delicate than it may seem. If I point out the intellectual implausibility of a widely held value … trouble is certain. Maybe the easiest way to point out what I mean is to take society as an example again. We know well that the concepts (not the words) of society and the social emerged only in the aftermath of the French Revolution, under conditions of industrialization. We also know perfectly well that the emergence of the concepts of society and the social amounted to a radical reconfiguration of what politics is. I think there is broad agreement that society is not just a concept but a whole infrastructure on which our notions of justice and political participation are contingent. If I point out, though, that society is not an ontological truth but a mere concept – a concept, indeed, that is somewhat anachronistic in the world we live in – people become uncomfortable. Many have strong emotional reactions insofar as they are wedded to the social as the good, and as the only form politics takes. When I then insist, as I usually do, the conversation usually ends with my interlocutors telling me that this is not an intellectual but a political issue. That is, they exempt politics, as a value domain, from the intellectual. I thoroughly disagree with this differentiation.

In fact, I find this value-based concept of politics unfortunate and the readiness to subject the intellectual to values disastrous. Values are a matter of doxa, that is, of unexamined opinions, and as long as we stay on the level of doxa the constitution of a democratic public is impossible. Kant saw that clearly and made the still very useful suggestion that values are a private matter. In private you may hold whatever values you prefer, Kant roughly says, but a public can only be constituted through what is accessible to everyone in terms of critical reflection. He called this the public exercise of reason. So the question for me is how, in this moment, we might allow for a politics that is grounded in the intellectual, in reason even, rather than in values. The anti-intellectual concept of politics that dominates public and especially academic discussions is, I think, a sure recipe for disaster. Obviously this is linked, for me, to the production of uncertainty and to the question of grounding practice in uncertainty.

VR: I am very sympathetic to your desire to avoid confusing the tasks of, say, philosophy with political activism, but how does this go together with uncertainty and ignorance?

TR: Yes, it may seem that my work on the instability of knowledge or on uncertainty amounts to a critique of reason. But in fact the contrary is the case: for me, the reconstitution of ignorance, the transformation of certainty into uncertainty, is an intellectual practice. Or better, an intellectual exercise. It is accomplished by way of research and reflection; it is accomplished by thinking about thinking. Another way of making this point is to say that uncertainty –– or the admission of ignorance –– is the outcome of rigorous research; it is the outcome of a practice committed, in principle, to searching for truth. If I am at my most provocative I say that uncertainty implies an open horizon –– it opens up the possibility that things could be different, and this possibility of difference, of openness, is what I am after. So one big challenge that emerges from this is how one can reconcile the intellectual and the political –– and I do think that’s possible. That would lead back to what I called epistemic activism.

VR: How would that work in practice?


Portrait of Michel Foucault (1926-1984), French philosopher. Ink and watercolor, by Nemomain. CC-BY-SA. Source: Wikimedia Commons

TR: My personal response unfolds along two lines. The first one amounts to a gesture to Michel Foucault: with Foucault one could describe my work as a refusal to be known or to be reducible to the known. Hence, my interest in that which escapes, which cannot be subsumed, etc. A second way of responding to your question, with equal gratitude to Foucault, is to say that the political is for me first of all a matter of ethics, that is, of conduct: how do you wish to live your life? And here I advocate the primacy of the intellectual –– katalepsis –– over values. Based on these two replies one can approach the political on a more programmatic scale: whenever someone speaks in the name of unexamined values or claims to speak in the name of truth and thereby closes the horizon and undermines the primacy of the intellectual, I can make myself heard and ask questions and express doubt. And when I say doubt I don’t mean a hermeneutics of suspicion. I also don’t mean social critique. I mean radical epistemic doubt that tries to reconstitute irreducible uncertainty.

VR: So this would involve calling out the truth-claims of other actors?

TR: I am not fond of the term ‘calling out’. The phrase tends to hide the fact that what is at stake is not only confronting the truth claims someone is making, but also avoiding the very mistake one problematizes: speaking in the name of truth. I am more interested in speaking in the name of doubt: not a doubt that would do away with the possibility of truth and leave us with the merely arbitrary, but a doubt that transforms the certain into the uncertain, while maintaining the possibility of truth as measure or as guiding horizon.

5. Uncertainty as Virtue

VR: Let’s talk about the normative implications of uncertainty beyond politics. I was interested in a review of your work by Nicolas Langlitz in which he accused you of wanting to radically cultivate uncertainty, and he had arguments for why this wouldn’t work. Actually this reminds me of a passage in Dostoevsky’s The Brothers Karamazov where the Grand Inquisitor condemns Christ for having burdened humanity with free choice, and claims that actually human beings cannot cope with freedom, nor do they really desire it. Rather they prefer security or happiness: having food, clothes, a house and so on. And one question would be, how do we acknowledge uncertainty, acknowledge its importance, but not cultivate it in a way that could potentially be destructive?

TR: I have several different reactions at once. Here is reply one: I am deeply troubled by the idea of decoupling happiness from freedom. As I see it now, uncertainty is a condition of the possibility of freedom –– and of happiness. Why? Because the impossibility of knowing provides an irreducibly open horizon. This is one important reason for my interest in cultivating uncertainty.

Immanuel Kant (1724-1804). Public domain. Source: Wikimedia Commons.

My second reply amounts to a series of differentiations that seem to me necessary or at the very least helpful. For example, I think it makes sense to differentiate between the epistemic and the existential as two different genres. To make my point, let me go to the beginning of the preface to the first edition of the Critique of Pure Reason, where Kant says that human reason (for reasons that are not its fault) finds itself confronted with questions it cannot answer. I am thoroughly interested in this absence of foundational answers that Kant points out here. What questions does Kant have in mind? He doesn’t actually provide examples, and most modern readers tend to conclude he meant the big existential questions of the twentieth century: why am I here? What is the meaning of life? Stuff like that. However, I think that is not at all what Kant had in mind. He simply made an epistemological observation: whenever we try to provide true foundations for knowledge, we fail. In every situation –– whether in science or in everyday life –– we cannot help but rely on conceptual presuppositions we are not aware of. What is more, there are always too many presuppositions to possibly clear the ground. The consequence, for Kant, is that knowledge is intrinsically unstable and fragile. I am interested in precisely this instability and fragility of knowledge. Of all knowledge. Let’s say for me this instability is the condition of the possibility of freedom.

Up until this point I have simply made an epistemological observation. Now Langlitz, whose work I admire, asks if my epistemic cultivation of uncertainty is productive in the face of, say, climate change deniers. To my mind, he here implicitly confuses the epistemic –– which remains oriented towards truth and is an intellectual practice –– with the doxa-driven rejection of the epistemic and the intellectual that is characteristic of climate change deniers. What you are asking about, though, is of a different quality, right? You are asking about a more existential uncertainty.

6. Uncertainty and Medicine

VR: My question is motivated by thinking about cases such as medicine. For example, does the epistemic uncertainty you are concerned with require special measures in the clinical encounter? After all, physicians’ perceived ability to cope with uncertainty has a well-documented placebo effect. So for example physician and writer Atul Gawande – I’m thinking of his books Complications (2002) and Better (2007) – writes about all the things modern medicine doesn’t know in addition to what it does know. But he emphasizes that this self-doubt cannot become paralyzing, that physicians must act, and that action is – in many cases – in patients’ interests. So this doesn’t contradict per se what you were saying before, but it does show how epistemic uncertainty is seen as something that has to be managed in this particular professional setting, and that a kind of simulacrum of certainty may also give patients hope in a difficult situation.

TR: I think that perhaps the best way to address the questions you are raising is a research project that attempts to catalogue the multiple kinds of uncertainties that flourish in a hospital. If I stress that there are different kinds of uncertainties then this is partly because I think that different kinds of uncertainties have different kinds of causes –– and partly because I think that there is no obvious link between the epistemic uncertainty I have been cultivating and the kinds of uncertainties that plague the doctor-patient relation in medicine.

VR: I am surprised to hear you say that, because I understood the relation between technical progress and the skill of living a life in intrinsically uncertain circumstances as a central feature of your work. In Plastic Reason, for example, you quote Max Weber who says: ‘What’s the meaning of science? It has no meaning because it cannot answer the only question of importance, how shall we live and what shall we do?’ And as you know Weber came to that idea via Tolstoy, who basically says: ‘the idea that experimental science must satisfy the spiritual demands of mankind is deeply flawed’. And Tolstoy goes on to say: ‘the defenders of science exclaim – but medical science! You’re forgetting the beneficent progress made by medicine, and bacteriological inoculations, and recent surgical operations’. And that’s exactly where Weber answers: ‘well, medicine is a technical art. And there is progress in a technical art. But medicine itself cannot address questions of life and how to live, and what life you want to live.’

TR: But why does Weber answer that way? You are surely right that he arrives at the question concerning life and science via Tolstoy. However, it also seems to me that he thoroughly disagrees with Tolstoy. In my reading, Tolstoy makes an existential or even spiritual point. He places the human on the side of existential and spiritual questions and calls this life –– and then criticizes science as irrelevant in the face of these questions. Weber’s observation is, I think, a radically different one. Tolstoy is right, he says, there are questions that science cannot answer. However, if you want to live a life of reason –– or of science –– then this absence of answers is precisely what you must endure. Or, perhaps, enjoy. In other words, Weber upholds science or reason vis-à-vis its enemies.

One can refine this reading of Weber. He answers that science is meaningless. And I think the reason for this is that, as he sees it, science isn’t concerned with meaning. Indeed, from a scientific perspective human life is entirely meaningless. However, Weber nowhere argues that science is irrelevant for the challenge of living a life. On the contrary, he lists a rather large series of tools that precisely help here –– from conceptual clarity to the experience of thinking, to technical criticism. His whole methodological work can be read as an ethical treatise for how to live a life as a Wissenschaftler. According to Weber, the Tolstoy argument requires a leap of faith that those of us concerned with reason –– and with human self-assertion in the face of metaphysical claims –– cannot take.


A female figure representing science trimming the lamp of life. Engraving by A. R. Freebairn, 1849, after W. Wyon. CC BY. Credit: Wellcome Collection

It is easy, of course, to claim that life is so much bigger than science. But then, upon inspection, there is no aspect of life that isn’t grounded in conceptual presuppositions –– and these presuppositions have little histories. That is, they didn’t always exist. They emerge, they re-organize entire domains of life, and then we take them for granted, as if they had always existed. Which they didn’t. This includes the concept of life, I hasten to add. Weber opts for the primacy of the intellectual as opposed to the primacy of the existential. And for Weber the only honest option is to accept the primacy of the intellectual. That may mean that some questions are never to be answered. But all answers he examined are little more than a harmony of illusions.

You see, I think that this is easily related back to my distinction between epistemic uncertainty and existential uncertainty. In Plastic Reason I quoted Weber not least because my fieldwork observations seemed to me a kind of empirical evidence that proves the dominant, anti-science reading of Weber wrong. If you are thinking that it is your brain that makes you human and if you are conducting experiments to figure out how a brain works, well, then you are at stake in your research. Science doesn’t occur outside of life. None of this is to say that the uncertainties that plague medicine aren’t real. But it is to say that I think it is worthwhile differentiating between kinds of uncertainty.

Tobias Rees is Reid Hoffman Professor of Humanities at the New School for Social Research in New York, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles, and a Fellow of the Canadian Institute for Advanced Research. His new book, After Ethnos, was published by Duke University Press in October 2018.

Vanessa Rampton is Branco Weiss Fellow at the Chair of Philosophy with Particular Emphasis on Practical Philosophy, ETH Zurich, and at the Institute for Health and Social Policy, McGill University. Her current research is on ideas of progress in contemporary medicine.


Kinds of Uncertainty: On Doubt as Practice

In his recent books, Plastic Reason: An Anthropology of Brain Science in Embryogenetic Terms (University of California Press, 2016) and After Ethnos (Duke University Press, 2018), the anthropologist Tobias Rees explores the curiosity required to escape established ways of knowing, and to open up what he calls “new spaces for thinking + doing.” Rees argues that acknowledging – and even embracing – the ignorance and uncertainty that underpin all forms of knowledge production is a crucial methodological part of that process of escape. In his account, doubt and instability are bound up with a radical openness that is necessary for breaking apart established ways of knowing and allowing the new/different to emerge – in the natural but also in the human sciences. But are there limits to such an embrace of epistemic uncertainty? How does this particular uncertainty interact with other forms of uncertainty, including the existential uncertainties that we experience as vulnerable human beings? And how does irreducible epistemic uncertainty relate to ethical claims about how to live a good life? What is the relation between a radical political practice of freedom and art? After a workshop on his work at the Zurich Center for the History of Knowledge in 2017, Vanessa Rampton, Branco Weiss Fellow at the Chair of Practical Philosophy, ETH Zurich, explored these themes with Rees.

 

1. The Human

Vanessa Rampton (VR): Tobias, your recent work aims to destabilize and question common understandings of the human. I wonder how you would place your work in relation to other engagements with ‘selfhood’ within the history of philosophy, and the history of the human sciences more widely. Because there are so many ways of thinking of the self – for example the empirical, bodily self, or the rational self, or the self as relational, a social construct – that you could presumably draw on. But I also know that you want to move beyond previous attempts to capture the nature and meaning of ‘the human self’. What are the stakes of this destabilization of the human? What do you hope to achieve with it?

Tobias Rees (TR): In a way, it isn’t me who destabilizes the human. It is events in the world. As far as I can tell, we find ourselves living in a world that has outgrown the human, that fails it. If I am interested in the historicity of the figure of the human –– a figure that has been institutionalized in the human sciences –– then it is insofar as I am interested in rendering visible the stakes of this failure. And in exploring possibilities of being human after the human. Even of a human science after the human.

VR: When you say the human, what do you mean?

Vanessa Rampton, Branco Weiss Fellow, ETH Zurich

TR: I mean at least three different things. First, I mean a concept. We moderns usually take the human for granted. We take it for granted, that is, that there is something like the human. That there is something that we –– we humans –– all share. Something that is independent of where we are born. Or when. Independent of whether we are rich or poor, old or young, woman or man. Independent of the color of our skin. Something that constitutes our humanity. In short, something that is truly universal: the human. However, such a universal of the human is of rather recent origin. This is to say, someone had to have the idea to begin articulating an abstract concept of the human –– one universal in its validity and thus independent of time and place. And it turns out that this wasn’t something people wondered about or aspired to formulate before the 17th century.

Second, I mean a whole ontology: the invention of the human between the 17th and the 19th century amounted to the invention of a whole understanding of how the real is organized. The easiest way to make this more concrete is to point out that almost all authors of the human, from Descartes to Kant, stabilized this new figure by way of two differentiations. On the one hand, humans were said to be more than mere nature; on the other hand, it was claimed that humans are qualitatively different from mere machines. Here the human, a thinking thing in a world of mere things, a subject in a world of objects, endowed with reason; there the vast and multitudinous field of nature and machines, reducible –– in sharp contrast to humans –– to math and mechanics. The whole vocabulary we have available to describe ourselves as human silently implies that the truly human opens up beyond the merely natural. And whenever we use the term ‘human,’ we ultimately rely on and reproduce this ontology.

Third, I mean a whole infrastructure. The easiest way to explain what I mean by this is to gesture to the university: its division of faculties quite simply mirrors the distinction between humans on the one hand and nature and machines on the other, and thereby the concept of the human –– with its two different kinds of realities –– as it emerged between the 17th and 19th century. Now, it may sound odd, even provocative, but I think there can be little doubt that today the two differentiations that stabilized the human –– more than mere nature, other than mere machines –– fail. From research in artificial intelligence to research in animal intelligence, by way of microbiome research or climate change. One consequence of these failures is that the vocabulary we have available to think of ourselves as human fails us. And I am curious about the effects of these failures: what are their effects on what it means to be human? What are their effects on the human sciences –– insofar as those sciences are contingent on the idea that there is a separate, set-apart human reality, and insofar as their explanations, their sense-making concepts, are somewhat contingent on the idea of a universal figure of the human, that is, on the ‘the’ in ‘the human’? Can the human sciences, given that they are the institutionalized version of the figure of the human, even be the venue through which we can understand the failures of the human? Let me add that I am much less interested in answering these questions than in producing them: making visible the uncertainty of the human is one way of explaining what I think of as the philosophical stakes of the present. And I think these stakes are huge: for each one of us qua human, for the humanities and human sciences, for the universities. The department I am building at the Berggruen Institute in Los Angeles revolves around just these questions.

‘Human embryonic stem cells’ by Jenny Nichols. Credit: Jenny Nichols. CC BY

VR: What led you to doubt the concept of the human and the human sciences?

TR: My first book, Plastic Reason, was concerned with a rather sweeping event that occurred around the late 1990s: the chance discovery that some basic embryonic processes continue in adult brains. Let me put this discovery in perspective: it had been known since the 1880s that humans are born with a definite number of nerve cells, and it was common wisdom since the 1890s that the connections between neurons are fully developed by age twenty or so. The big question everyone was asking at the beginning of the twentieth century was: how does a fixed and immutable brain allow for memory, for learning, for behavioral changes? And the answer that eventually emerged was the changing intensity of synaptic communication. Consequently, most of twentieth-century neuroscience was focused on understanding the molecular basis of how synapses communicate with one another –– first in electrophysiological and then in genetic terms.

When adult cerebral plasticity was discovered in the late 1990s the focus on the synapse –– which had basically organized scientific attention for a century –– was suddenly called into question. The discovery that new neurons continue to be born in the adult human brain, that these new neurons migrate and differentiate, that axons continue to sprout, that dendritic spines continuously appear and disappear not only suggested that the brain was perhaps not the fixed and immutable machine previously imagined; it also suggested that synaptic communication was hardly the only dynamic element of the brain and hence not the only possible way to understand how we form memory or learn. What is more, it suggested that chemistry was not the only language for understanding the brain.

The effect was enormous. Within a rather short period of time, less than ten years, the brain ceased to be the neurochemical machine it had been for most of the twentieth century, but without – and this I found so intriguing – immediately becoming something else. The beauty of the situation was that no one knew yet how to think the brain. It was a wild, an untamed, an in-between state, a no-longer and a not-yet, a moment of incredibly intense, unruly openness that no one could tame. The whole goal of my research was to capture something of this irreducible openness and its intensity.

Anyway, when trying to capture something of the radical openness in which my fieldwork was unfolding, I began to wonder about my own field of research: if the taken-for-granted key concepts of brain science, that is, the concepts that constituted and stabilized the brain as an object, could become historical in a rather short period of time, then what about the terms and concepts of the human sciences? Which terms might constitute the human in such a situation? These questions led me to the obsession of trying to write brief, historicizing accounts of the key terms of the human sciences, first and foremost the human itself: when did the time- and place-independent concept of the human that we operate with in the human sciences emerge? And this then led me to the terms that stabilize the human: culture, society, politics, civilization, history, etc. When were these concepts invented –– concepts that silently transport definitions of who and what we are and of how the real is organized? When were they first used to describe and define humans, to set them apart as something in themselves? Where? Who articulated them? What concepts –– or ways of thinking –– existed before they emerged? And are there instances in the here and now that escape the human?

Somewhere along the way, while doing fieldwork at the Gates Foundation actually, I recognized that the vocabulary the human sciences operate with didn’t really exist before the time around 1800, plus or minus a few decades, and that their sense-making, explanatory quality relies on a figure of the human –– on an understanding of the real –– that has become untenable. I began to think that the human, just like the brain, had begun to outgrow the histories that had framed it. You said earlier, Vanessa, that I am interested in destabilizing common understandings of the human. Another way of describing my work, one I would perhaps prefer, would be to say that through the chance combination of fieldwork and historical research I discovered the instability –– and the insufficiency –– of the concept of the human we moderns take for granted and rely on. I want to make this insufficiency visible and available. The human is perhaps more uncertain than it has ever been.

VR: Listening to you, I cannot help but think that there are strong parallels between your work and the history of concepts as formulated by, say, Reinhart Koselleck or Raymond Williams. I can nevertheless sense that there is a difference –– and I wonder how you would articulate this difference?

TR: First, I am not a historian of concepts. I am primarily a fieldworker and hence operate in the here and now. What arouses my curiosity is when, in the course of my field research, a ‘given,’ something we simply take for granted, is suddenly transformed into a question: an instance in which something that was obvious becomes insufficient, in which the world or some part thereof escapes it and thereby renders it visible as what it is, a mere concept. From the perspective of this insufficiency I then turn to its historicity: I show where this concept came from, when it was articulated, why, under what circumstances, and also how it never stood still but constantly mutated. But in my work this history of a concept, if one wants to call it that, is not an end in itself. It is a tool to make visible some openness in the present that my fieldwork has alerted me to. In other words, the historicity is specific: the specific product of an event in the here and now, a specificity produced by way of fieldwork.

Tobias Rees, Reid Hoffman Professor of Humanities at the New School for Social Research in New York, and Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles.

Second, my interest in the historicity –– rather than the history –– of concepts runs somewhat diagonal to presuppositions on which the history of concepts has been built. Koselleck, for example, was concerned with meaning or semantics and with society as the context in which changes in meaning occur. That is to say, Koselleck –– and as much is true for Williams –– operated entirely within the formation of the human. They both took it for granted that there is a distinctive human reality that is ultimately constituted by the meaning humans produce and that unfolds in society. Arguably, the human marked the condition of the possibility of their work. It is interesting to note that neither Koselleck nor Williams, nor even Quentin Skinner, ever sought to write the history of the condition of possibility of their work: they never historicized the figure of the human on which they relied. On the contrary, they simply took it for granted as the breakthrough to the truth. If I am interested in concepts and their historicity, then it is only because I am interested in the historicity of the concept of the human as a condition of possibility. How to invent the possibility of a human science beyond this condition of possibility is a question I find as intriguing as it is urgent: how to break with the ontology implied by the human? How to depart from the infrastructure of the human, while not giving up a curiosity about things human, whatever human then actually means?

 

2. Epistemic Uncertainty

VR: I am wondering if all concepts can outgrow their histories. Isn’t this more difficult in the case of, say, ‘the body’ or ‘language,’ than for our more doctrinal concepts – liberalism and socialism, for example?

TR: Your question implies, I think, a shift in register. Up until now we talked about the human and its concepts and institutions but now we are moving to a more general epistemic question: are all concepts subject to their historicity? And if so, what does this imply? Seeing as you mentioned the body, let’s take the idea –– so obvious to us today –– that we are bodies, that it is through our warm, sentient, haptic bodies that we are at home in the world. Over the last fifty years or so, really since the 1970s, a large social science literature has emerged around the body and around how we embody certain practices and so on. Much of this literature, of course, relies on Mauss on the one hand and on Merleau-Ponty on the other. And if one works through the anthropology or history of the body, one notes that most authors take the body simply as a given. It is as if they were saying, ‘Of course humans are, were, and always will be bodies.’

But were humans always bodies? At the very least one could ask when, historically speaking, the concept of the body first emerged. When did humans first come up with a concept of the body and thus experience themselves as bodies? What work was necessary –– from physiology to philosophy –– for this emergence? To ask this question requires the readiness to expose oneself to the possibility that the category of the body and the analytical vocabulary that is contingent on this category are not obvious. There might have been times before the body –– and there might be times after it. For example, if one reads books about ancient Greece, say Bruno Snell’s The Discovery of the Mind, one learns that archaic Greek didn’t have a word for what we call the body. The Greeks had a word for torso. They had two words for skin, the skin that protects and the skin that is injured. They had terms for limbs. But the body, understood as a thing in itself, as having a logic of its own, as an integrated unit, didn’t exist.

‘Carved stone relief of Greek physician and patient’. Credit: Wellcome Collection. CC BY

One version of taking up Snell’s observation is to say: the Greeks maybe did not have a word for body –– but of course they were bodies, and therefore the social or cultural study of the body is valid even for archaic Greece. What I find problematic about such a position is that it implies that the Greeks were ignorant and that our concepts –– the body –– mark a breakthrough to the truth: we have universalized the body, even though it is a highly contingent category. Perhaps a better alternative is to systematically study how the ‘realism of the body’ on which the social and cultural study of the body is contingent became possible. A history of this possibility would have to point out that the concept of a universal body –– understood as an integrated system or organism that has a dynamic and logic of its own and that is the same all over the world –– is of rather recent origin. It doesn’t really exist before the 19th century. In any case, there are no accounts of the body –– or the experience of the body –– before that time, and philosophies of the body seem to be almost exclusively a thing of the first half, plus or minus, of the twentieth century. Sure, anatomy is much older, and there were corpses, but a corpse is not a body. The alternative to the realism of the body that I briefly sketched here would imply that one can no longer naively –– by which I mean in an unexamined way –– subscribe to the body as a given. The body then has become uncertain. I am interested in fostering precisely this kind of epistemic uncertainty. To me, epistemic uncertainty is an escape from truth and thus a matter of freedom.

VR: Perhaps a kind of taken-for-granted approach to the body is so bound up with what you call ‘the human’ that questioning it is necessary for your work.

TR: Indeed, although my work led me to assume that what is true for the human or the body is true for all concepts. Every concept we have is time and place specific and thus irreducible, unstable and uncertain. But to return to the human: we live in a moment in time that produces the uncertainty of the human all by itself. I render this uncertainty visible by evoking the historicity of the human, and this in turn leads me to wonder if one could say that the human was a kind of intermezzo – a transient figure that was stable for a good 350 years but that can no longer be maintained.

VR: I wonder what you would reply if I were to say: but isn’t that obvious? Concepts are historically contingent, so what else is new?

TR: In my experience, most people grant contingency within a broader framework that they silently exempt from contingency itself. For example, if contingency means that different societies have different kinds of concepts, then society is the framework within which contingency is allowed: but society itself is exempt from contingency. One could make similar arguments with respect to culture. If we say that things are culturally specific, that some cultures have meanings that others don’t have, or entirely different ways of ordering the world, then we exempt culture from contingency.

All of this is to say, sure, you are right, social and cultural contingency are obviously not new. But what if one were to venture to be a bit more radical? What if one did not exempt society and culture from contingency? Talk to a social scientist about society being contingent, and they become uncomfortable. Or they reply that maybe the concept of society didn’t exist but that people were of course always social beings, living in social relations. This is a half movement in thought. It assumes that the word has merely captured the real as it is –– but misses that the configuration of the real they refer to has been contingent on the epistemic configuration on which the concept of society has depended. We could say that the one thing a social scientist cannot afford is the contingency of the category of the social.

What I am interested in is the contingency of the very categories that make knowledge production possible. To some degree, I am conducting fieldwork to discover such contingencies, to generate an irreducible uncertainty: as an end in itself and also as a tool to bring into view in which precise sense the present is outgrowing –– escaping –– our understanding and experience of the world.

 

3. Knowledge Production Under Conditions of Uncertainty/Ignorance

VR: I imagine there is a kind of parallel here with how natural scientists would react to the fact that their concepts no longer fit, for example by developing a more up-to-date way of thinking the brain to replace the synaptic model. But it strikes me that, if done properly, this task is much more radical for practitioners of the human sciences. This is because all of our concepts – including such fundamental ones as the human and the body – are historically contingent, so that we have to do away with universal categories. Our task is to fundamentally destabilize ourselves as historical subjects, as academics, as knowers. And I guess a key question is how this destabilization, this rendering visible of uncertainties, can nevertheless be linked to the kinds of knowledge production we have come to expect from the human sciences.

TR: The question, perhaps, is what one means by knowledge production in the human sciences. I think that the human sciences have been primarily practiced as decoding sciences. That is to say, researchers in the human sciences usually don’t ask ‘What is the human?’ No, they already know what the human is: a social and cultural being, endowed with language. Equipped with this knowledge they then make visible all kinds of things in terms of society and culture. In addition, perhaps, one could argue that the human sciences have established themselves as guardians of the human – that is, they have been practiced in defensive terms. For example, whenever an engineer argues that machines can think and that humans are just another kind of machine, the human sciences react by defending the human against the machine. The most famous example here would maybe be Hubert Dreyfus against Seymour Papert. A similar argument, though, could be made with respect to genetics and genetic reductionism.

Now, if one destabilizes the figure of the human, neither of these two forms of knowledge production can be maintained. I think that this is why many in the human sciences experience the destabilization of the human as an outrageous provocation. If one gets over this provocation one is left with two questions. The first is: what modes of knowledge production become possible through this destabilization of the human? Especially when this destabilization means that the entire ontological setup of the human sciences fails. Can the human sciences entertain, let alone address, this question, given that they are the material infrastructure of the figure of the human that fails? Or does one need new venues of research? I often think here of the relation between modern art and the nineteenth-century academy.

VR: That reminds me of Foucault.

TR: Foucault was an anti-humanist –– but he remained uniquely concerned with human reality. I think the stakes here – I say this as an admirer of Foucault – are more radical. So my second question is: what happens to the human? I am acutely interested in maintaining the possibility of the universality of the human after the human. Letting go of the idea seems disastrous. So how can one think things human without relying on a substantive or positive concept of what the human is? My tentative answer is research in the form of exposure: the task is to expose the normative concept of the human in the present, by way of fieldwork, to identify instances that escape the human and break open new spaces of possibility, each time different ones, ones that presumably don’t add up. The goal of this kind of research-as-exposure is not to arrive at some other, better conception of the human, but to render uncertain established ways of thinking the human or of being human and to thereby render the human visible and available as a question.

VR:  So if you don’t want to talk about what the human is, I’m wondering if the appropriate question would be about what the human is not.

‘Human microbial ecosystem, artistic representation’ by Rebecca D Harris. Credit: Rebecca D Harris. CC BY

TR: I think such an inversion doesn’t get us very far. I would rather say that I am interested in operating along two lines. One line revolves around the effort to produce ignorance. That is, I conduct research not so much to produce knowledge as to produce the uncertainty of knowledge. The other line wonders how one could conduct research under conditions of irreducible ignorance or uncertainty, or how to begin one’s research without relying on universals. A comparative history of this or that always presupposes something stable. As does any social or cultural study. In both cases I am interested in a productive or restless uncertainty –– or second-order ignorance –– not only with respect to the human. In a way, what I am after is the reconstitution of uncertainty, of not knowing, by way of a concept of research that maintains throughout the possibility of truth.

If you were to press me to offer a systematic answer I would say, as a philosophically inclined anthropologist, that I conduct fieldwork/research because I am simultaneously interested in where our concepts of the human come from, in whether there are instances in the here and now that escape these concepts, and in rendering available the instability –– the restlessness –– of the category or the categories of the human, both as an end in itself and as a means to bring the specificity of the present into view. It strikes me as particularly important to note that what I am after is not post-humanism. As far as I can tell most post-humanists hold on to the 18th-century ontology produced by the human but then delete the human from this ontology. What interests me is to break with the whole ontology. Not once and for all but again and again. Nor am I interested in the correction of some error à la Bruno Latour – as if behind the human we can discover some essential truth –– call it Actor Network Theory –– that the moderns have forgotten and that the non-moderns have preserved and that we now all can re-instantiate to save the world.

I am not so much interested in a replacement approach –– what comes after the human? –– as in rendering visible a multiplicity of failures, each one of which opens up onto new spaces of possibility. After all, how Artificial Intelligence derails the human is rather different from how microbiome research or climate change derails it. These derailments don’t add up to something coherent. As I see it, it is precisely this not-adding-up –– this uncertainty –– that makes freedom possible. Perhaps this form of research is closer to contemporary art than to social science research; that could well be. Anyhow, the department I am trying to build at the Berggruen Institute revolves around the production of precisely such instances of failure and freedom.

 

Tobias Rees is Reid Hoffman Professor of Humanities at the New School for Social Research in New York, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles, and Fellow of the Canadian Institute for Advanced Research. His new book, After Ethnos, is published by Duke in October 2018.

Vanessa Rampton is Branco Weiss Fellow at the Chair of Philosophy with Particular Emphasis on Practical Philosophy, ETH Zurich, and at the Institute for Health and Social Policy, McGill University. Her current research is on ideas of progress in contemporary medicine.

The British Way in Brainwashing: Marcia Holmes in conversation with Rhodri Hayward

In the July issue of History of the Human Sciences, Marcia Holmes, a post-doctoral researcher with the Hidden Persuaders project at Birkbeck, University of London, used the 1965 film adaptation of Len Deighton’s The Ipcress File to demonstrate the close relationship between Cold War fantasies of mind control and the postwar understanding of the media. In her analysis, our familiar understanding of brainwashing as an irresistible form of domination is disrupted: she demonstrates instead how the spy drama, which pits a hero against the mechanical forces of scientific control, provided a new template through which audiences could re-conceive their relationship to modern media. Against the idea of the passive and pliant observer, Holmes promotes the idea of the ‘cybernetic spectator’, who plays an active role in controlling the flow of information in order to reorganise their own personality and consciousness. In this analysis, brainwashing moves beyond being a simple disciplinary mechanism to become a potential technology of the self. Viewed from this perspective, brainwashing is less a legacy of Cold War struggles than part of a psychedelic revolution in which consciousness became a subject for personal exploration and transformation. Part of the joy of Holmes’s account is that it connects the history of Cold War human sciences to the flowering of the counterculture in the 1960s: a relationship that is only just beginning to receive the attention it deserves. Marcia Holmes is here in conversation with Rhodri Hayward, Reader in History at Queen Mary, University of London, and one of the Editors of HHS. The full paper is available open access here: http://journals.sagepub.com/doi/full/10.1177/0952695117703295

Rhodri Hayward (RH): Thanks for speaking to us, Marcia. What first drew you to Deighton’s novel and the Ipcress File film?

Marcia Holmes (MH): I admit that I had never seen The Ipcress File (dir. Sidney Furie, 1965), or read Len Deighton’s 1962 novel, until I began researching films that depict brainwashing. Perhaps this is because I’m an American and only recently transplanted to the UK. The film is well-loved by British film critics and has a strong following in Britain, but I find that many of my American colleagues have not heard of The Ipcress File. This is a shame, because it is a very enjoyable film! And for historians of science, I think The Ipcress File offers much to discuss on the intersection between Cold War politics, science, and popular culture.

This original trailer for The Ipcress File (Furie, 1965) includes some images from the film’s brainwashing sequence. A re-mastered version of the film was released on DVD by Network in 2006 (Video source: YouTube. https://www.youtube.com/watch?v=QesO-BRvUAM).

When I first watched The Ipcress File, I was intrigued by how familiar I found the film’s treatment of brainwashing – its use of flashing lights and beating sounds to create a highly cinematic rendition of mind manipulation – and yet how different this imagery was to earlier, 1950s accounts of brainwashing. In the 1950s, reports (and even fictional stories) of brainwashing endeavoured to describe the real methods of indoctrination and interrogation used by communist cadres. Essentially, these methods involved ‘softening up’ a prisoner through starvation, sleep deprivation, and solitary confinement within a featureless cell. Once the prisoner was debilitated physically and psychologically, he would be subjected to a tedious process of indoctrination or interrogation that he would be unable to resist. In the 1950s there was also speculation about whether communists used drugs or hypnosis to weaken a prisoner’s resistance; but the tenor of this speculation was to determine what methods were actually being used, not to spin fictions for the sake of entertainment.

Meanwhile, The Ipcress File knowingly offers us fantastical science fiction in how it imagines the final stage of brainwashing: not as indoctrination or interrogation per se, but as carefully calibrated visual and auditory stimulation that can reprogramme a victim’s memories, even the brain itself. The centerpiece of the film’s brainwashing process is not the featureless prison cell, but rather the ‘programming box’, a person-sized cube that completely surrounds a victim with sounds and images. This fanciful reimagining of brainwashing seems to follow in the footsteps of The Manchurian Candidate, John Frankenheimer’s 1962 film that many historians consider iconic in how it depicts Cold War cultural anxieties. However, I think The Manchurian Candidate differs significantly from The Ipcress File in that Frankenheimer’s film never actually shows techniques or processes of brainwashing, only its after-the-fact effects on a victim’s consciousness.

Dr. Yen Lo (played by Khigh Dheigh), the communist brainwasher of The Manchurian Candidate (Frankenheimer, 1962). In a famous scene, Dr. Yen Lo describes the scientific basis of brainwashing and demonstrates brainwashing’s effects on captured American soldiers. Arguably, the film’s vagueness about specific techniques of brainwashing makes it easier for audiences to suspend their disbelief about whether brainwashing can truly reprogram minds (Image source: Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Khigh-dhiegh-trailer.jpg).

For me, The Ipcress File film raises the question of how and when the imagery of flashing lights and rhythmic sounds became a trope of brainwashing. I read Len Deighton’s novel to see if the idea for the programming box had come from Deighton. I was surprised to find that Deighton’s description of brainwashing was much more in keeping with 1950s accounts. In particular, he was inspired by William Sargant’s theory of brainwashing as a form of combat neurosis. Indeed, the ‘IPCRESS’ acronym that Deighton invents refers to the softening up process as Sargant might describe it: Induction of Psycho-neuroses by Conditioned Reflex under strESS. As I investigated further, I found that it was The Ipcress File filmmakers who sought out new ways of depicting brainwashing, and that they were guided by what would be spectacular for 1960s cinema goers as well as by emerging scientific theories about the programmability of minds and brains.

RH: You locate the film within a long history of cinema’s fascination with suggestion and hypnosis – what is different about this film?

MH: In a way, The Ipcress File’s depiction of brainwashing as the manipulation of the senses, consciousness, and attention harks back to earlier films about hypnosis, such as Fritz Lang’s Dr. Mabuse, the Gambler (1922) and The Cabinet of Dr. Caligari (dir. Robert Wiene, 1920). Film scholars like Raymond Bellour have theorized how these earlier films not only depicted hypnosis but also explored cinema’s own hypnotic effects on audiences. Their directors endeavoured to capture and control viewers’ attention, and heighten the sense of peril, through innovative use of close-up shots, spotlighting, and surreal scenery.

But The Ipcress File differs from these early films in how it explicitly references the power of cinema on the mind. Audiences can see quite clearly that the IPCRESS process is achieved with the help of film projectors that cast moving, colored lights onto the screen-like walls of the programming box. Audiences also see these projected images, and hear the eerie IPCRESS noise, as the film’s protagonist experiences them: as a diegetic film that plays on their own cinema screen before them.

The villains look on as the programming box begins to be hoisted in the air. On the left-hand side of this still image, a film projector can be seen attached to the outside of the programming box by a metal arm. Copyright for this image is owned by StudioCanal; it is reproduced here for the purposes of criticism only.

In this frame, the programming box is lit with an abstract image that appears to move rhythmically in and out of focus. To the left of the box is the shadow of a film projector, implying that the abstract image is being projected from the outside of the box. Copyright for this image is owned by StudioCanal; it is reproduced here for the purposes of criticism only.

Inside the box, the victim Harry Palmer (played by Michael Caine) reacts to the sensory onslaught with intense discomfort. He tries to resist the lights and sounds around him by focusing on physical pain, gripping a bent metal nail in his hand until his palm bleeds. Copyright for this image is owned by StudioCanal; it is reproduced here for the purposes of criticism only.

This kind of overt reference to film’s psychological effects appears in several brainwashing films. For instance, The Manchurian Candidate is partly a meditation on the influence of television on American politics; and A Clockwork Orange (dir. Stanley Kubrick, 1971) portrays a very effective form of aversion therapy that utilizes film. The Ipcress File is unique, however, in how it emphasizes the structural aspects of film: how light projected through images appearing at a certain frequency can create the illusion of movement, and other effects on consciousness. This is significant because, as I explain in my paper, The Ipcress File was made in a period when many artists, and indeed some scientists, were interested in how the structural elements of film affect spectators’ brains and mental experience. The challenge for historians posed by The Ipcress File, I believe, is to account for the changing cinematic imagery of brainwashing not only with reference to specific filmmakers’ technical innovations and artistic preoccupations, but also with how such innovations and preoccupations may have been in conversation with contemporaneous developments in art, science, and media technologies – as, indeed, historians have done for earlier films about hypnosis.

RH: So could I pick up on that point and ask you to say a bit more about the relationship between particular post-war technologies and new understandings of selfhood that emerge in the 1950s and 60s?

MH: There’s an obvious genealogy that The Ipcress File invokes: popular fears and fantasies of mass media as capable of influencing, even coercing, audiences’ beliefs and behavior. During the Second World War, the Allies were intrigued by film and radio’s power to transmit propaganda, and this fascination continued in the postwar period with the advent of commercial television and televisual advertising. In the late 1950s, there’s a brief but memorable moment when Americans and Britons worried about ‘subliminal advertising’. Even though the possibility of subliminal influence was quickly and notoriously debunked, this didn’t stop moviemakers (of the B- and C-level variety) from creating ‘psycho-rama’ films that purported to embed subliminal messages that would enhance moviegoers’ sensations. Of course, psycho-rama films didn’t succeed in thrilling general audiences, and only a few were produced. But there were other 1950s’ cinematic experiments in manipulating audiences’ sensory experience – such as Cinerama and Circarama – that were relatively more successful and long-lasting. Intriguingly, when The Ipcress File first opened in British and American theaters, critics compared its brainwashing sequence to a demented form of Cinerama or Circarama. Their implication, I believe, is that the film’s depiction of brainwashing was consciously spectacular, and rather gimmicky.

First exhibited in 1952, ‘Cinerama’ theaters had a curved cinema screen to give film audiences a more immersive experience. Three projectors were needed to cover the screen in a single image. ‘Circarama’ (later known as Circle-Vision 360) was invented by Walt Disney Studios in the 1950s, and involved nine screens aligned in a circle around the audience, and nine film projectors at the centre of the circle. Image source: Wikimedia Commons.  https://commons.wikimedia.org/wiki/File:How_Cinerama_is_projected.gif

Yet, as I mentioned before, The Ipcress File was made in a period when avant-garde artists and scientists were interested in how the structural elements of film affect spectators’ brains and mental experience. This suggests another lineage that I believe is important to understanding The Ipcress File’s imagery for brainwashing, and why this imagery is historically significant. The Ipcress File premiered in 1965, at roughly the same time that the 1960s counterculture – with its “happenings” and psychedelic art – became recognized by mainstream Americans and Britons. And so, for some art historians, The Ipcress File’s programming box evokes the immersive, multimedia exhibitions of Stan Vanderbeek and USCO, and Tony Conrad’s experimental film The Flicker (1966-67).

In September 1966, LIFE Magazine featured the psychedelic art of USCO. Image source: Google Books LIFE archive, from which the full issue can also be accessed. https://books.google.co.uk/books?id=21UEAAAAMBAJ&source=gbs_all_issues_r&cad=1

In my paper, I argue that this is a case of correlation, not causation. It just so happens that The Ipcress File’s filmmakers were responding to the same innovations in art, science and media technology that were also inspiring countercultural artists’ explorations of film and other media. For example, in my article I explain how The Ipcress File’s depiction of brainwashing references Grey Walter’s neuropsychological experiments on the effect of stroboscopic light on brainwaves. The film even shows an EEG plotter as part of the Ipcress apparatus, and the film’s villain explains that the programming box works in synchrony with the “rhythm of brainwaves.” As many historians have previously noted, Grey Walter’s scientific experiments also influenced Tony Conrad’s Flicker film, as well as the design of a distinctive artefact of 1960s’ counterculture, Brion Gysin and Ian Sommerville’s rotating stroboscope, the ‘dreamachine’. While Flicker and the dreamachine apply Walter’s ideas to the liberating exploration of consciousness, The Ipcress File appropriates those same ideas in a dark vision of mind control.

RH: Yes, I guess you’re referring to John Geiger and Nik Sheehan’s work, which I’m a fan of. It seems your work shares their aim of presenting a countercultural history of the Cold War which shows how control technologies could be subverted.

MH: Yes. One of the interesting challenges in researching the Cold War history of ‘brainwashing’ – whether you focus on the scientific research that it inspired, or its evolution as a cultural imaginary – is accounting for how certain technologies, techniques, and concepts can shift in meaning from negative to positive, from coercive to liberatory. There are well known examples, such as how the CIA initially encouraged scientific research on LSD as a potential truth serum, but the drug proved more effective as a means of ‘expanding consciousness’ than of interrogation, and it became emblematic of the 1960s counterculture as well as of brainwashing. A similar story can be told for the flotation tanks that were used in sensory deprivation experiments. In the Hidden Persuaders project, we have been investigating these developments as more than just the creative innovations of a psychedelic counterculture. We probe how these shifts have been informed by changing popular and scientific assumptions about human subjecthood – not only cybernetic models of mind, but psychoanalytic, behavioristic, and neuropsychological models. We consider the evolving cultural and intellectual meanings of brainwashing to be part of a longer history of how concepts of psychological coercion and personal freedom have changed over time.

In my paper, I discuss how the technology of immersive multimedia begins as a mode of entertainment and artistic expression, and then later becomes associated with brainwashing. It’s a rare example of a seemingly positive and liberatory technology becoming rebranded as potentially negative and coercive. There are some excellent histories of the evolution of immersive multimedia technology by scholars like Beatriz Colomina and Oliver Grau. Fred Turner, in his recent book The Democratic Surround: Multimedia and American Liberalism from World War II to the Psychedelic Sixties, offers a particularly convincing and helpful genealogy of postwar artists’ experiments with multimedia environments, a phenomenon that he dubs ‘the democratic surround’ because of artists’ utopian, liberal democratic motivations. Turner shows how the counterculture’s seemingly revolutionary installations of psychedelic art – like Vanderbeek’s Moviedrome and the USCO exhibitions – had important precursors in more mainstream exhibitions of the 1950s and early ‘60s, such as Ray and Charles Eames’ multiscreen films, and that these precursors in turn drew on earlier artistic explorations of media’s effect on the mind. He also suggests how cybernetic philosophy was variously interpreted by different artists, encouraging their belief that multi-image, multi-sound-source environments would have a beneficial, psychologically-freeing effect on spectators.

In researching the making of The Ipcress File, I learned that the movie’s producer Harry Saltzman conceived the Ipcress programming box after reading about a multimedia surround in LIFE Magazine, the ‘Knowledge Box’ that was designed by Ken Isaacs. Isaacs was a contemporary of Ray and Charles Eames, both chronologically and in his aims and inspirations. He considered his Knowledge Box as a tool of progressive education, one that took advantage of the human mind’s ability to learn from sheer exposure to information. Meanwhile, Harry Saltzman was not alone in perceiving the Knowledge Box as a potentially coercive technology – some journalists at the time also suggested it could be used for brainwashing – but as a film producer Saltzman was well placed to bring this re-interpretation of multimedia surrounds to general audiences.

Ken Isaacs’ Knowledge Box, as featured in LIFE Magazine, 14 September 1962.  Image source: Google Books LIFE archive, where there are more images of the Knowledge Box. https://books.google.co.uk/books?id=z00EAAAAMBAJ&source=gbs_all_issues_r&cad=1. It is interesting to compare the Knowledge Box with Ken Adam’s set design for the Ipcress programming box. Ken Adam’s sketches, and a brief clip from The Ipcress File that shows the programming box in action, can be found on the Deutsche Kinemathek website: https://ken-adam-archiv.de/ken-adam/ipcress-file

RH: Yes!  I guess it’s this tension around the use of coercive technologies as tools for self-mastery or psychedelic liberation that grounds your idea of the cybernetic spectator.  Could you say a little more about that?

MH: The ‘cybernetic spectator’ is my own construct for understanding the relationship between developments in cinema and television, the mind sciences, and cultural fantasies of mind control during the 1960s. It is a model of mind, a way of making sense of human subjectivity, that informed certain developments in these domains and, at times, interconnected them. I’m inspired by the work of Jonathan Crary, who argues that there is a history to our ways of perceiving, and that this history is reflected in artistic media, the human sciences, and cultural anxieties about human subjectivity. My concept of the ‘cybernetic spectator’ comes from trying to envision what Crary’s historiography might look like in the 1960s when cybernetic concepts and philosophy were rewriting many assumptions about how the mind works, not only for scientists but also for artists, media theorists, and sometimes even general audiences.

But, admittedly, cybernetics itself is tricky to define, especially for the 1960s, when cybernetics’ forefathers like Norbert Wiener, Claude Shannon, and Warren McCulloch had long given up on keeping the field definitionally pure. Arguably, a strictly historicist reading of cybernetics’ originary ideas, such as the one Peter Galison offers in his seminal article on cybernetics’ ‘enemy ontology,’ doesn’t help us understand the cultural and intellectual efflorescence of cybernetic concepts in the 1960s. So, scholars like Andrew Pickering and N. Katherine Hayles have advocated for a long-historical view of cybernetics as a science of complexity with a deep but lively influence on a wide variety of endeavors – not only engineering and computing, but also the psychological automata of Ross Ashby and Grey Walter, the science fiction of Philip K. Dick, the anthropological theories of Gregory Bateson, and the management philosophy of Stafford Beer. And as I noted before, Fred Turner has reminded us of the influence that cybernetic theory, and cybernetics-inspired commentators such as Marshall McLuhan and Buckminster Fuller, had on mid-century avant-garde artists. These scholarly accounts, Pickering’s and Turner’s especially, emphasize how utopian ideals routinely accompanied discussions and appropriations of cybernetic thinking in the 1960s. That is, cybernetic ideas may be value-neutral in and of themselves, or even reflect the values of the military-industrial complex, and yet for many sixties thinkers cybernetic philosophy nevertheless signalled a future technotopia where a free flow of information – through various forms of media! – would liberate individual thought and behaviour. They believed that cybernetic ideas and technologies might even remake society to be more democratic and more enlightened.

Yet, as contemporaneous debates about brainwashing can attest, there were also moments during the 1960s that brought into focus the downsides, even the threat, of the cybernetic interpretation of mind and society. For example, when Marshall McLuhan gave an interview to Playboy Magazine in 1969, he prophesied a future where a worldwide media network would keep the peace within nations by responding to unrest with pacifying messages. McLuhan’s interviewer asked whether this was tantamount to brainwashing. McLuhan acknowledged the possibility, apparently with some consternation, arguing that such an interpretation missed his point that such a network would respond to the needs and desires of its audience. The Ipcress File is another moment that clarifies the negative potential of the cybernetic spectator interpretation of mind: even though The Ipcress File movie is itself a harmless entertainment, its depiction of the programming box insinuates that the mind is vulnerable to film’s ability to stimulate the senses – that multimedia surrounds can be a technique of brainwashing.

RH: Given that we now, in our iPhone-addled age, live in a media-saturated environment, do you think this cybernetic model of mind and media still holds good?

MH: That is a challenging question, and a very important one considering that we live in an age of heightened political extremism. Because my own thoughts on this are constantly evolving, I’ll just sketch a couple of points that I’ve been considering lately. It does seem like we still hold many of the concerns, and many of the utopian visions, that surrounded 1960s’ cybernetic interpretations of mind and media. They seem to be especially germane to debates about ‘information bubbles’ and the cloistering effects of internet-based media. I am struck by how we often rely on spatial metaphors – concepts akin to the multimedia surround – to imagine how the internet can envelop a person with messages, with the result of radicalizing her or encouraging her belief in conspiracy theories. The solution to such a predicament is often presented in spatial terms, e.g., to ‘get out’ of one information bubble by exposing oneself to contrary information, or to leave the internet behind altogether and “enter the real world.”

And yet, unlike in the 1960s, we now have a powerful discourse on trauma, one centered around the diagnosis of PTSD, that also shapes how we imagine the effects of media on the mind and the possibilities for mental manipulation. We now understand that certain messages – usually depictions of horrific physical and/or sexual violence – can ‘trigger’ old traumas or create new, traumatic memories. To put it more generally, psychotherapeutic models of the mind, whether they are psychoanalytic, cognitive-behavioral, or otherwise, are also influential for how we imagine the effects of media on the mind, and the possibilities of mental infiltration and coercion. Cybernetic philosophy is arguably not fit for the purpose of distinguishing between psychologically harmful and beneficial messages; it is famously agnostic about the semantic content of information. So perhaps we have moved on from the ‘cybernetic spectator’ as a prevailing model of mind and media influence, even though cybernetics’ signature technology, the Internet, dominates how we access and interpret media.

RH: So do you think psychoanalysis provides the wellspring for a new morality that cybernetics failed to provide?

MH: This is an issue that we discuss in the Hidden Persuaders project. I think that psychoanalysis might be able to provide such a wellspring; it has certainly shaped our cultural discourse on trauma to be empathetic, if not moralistic (the work of Robert Jay Lifton with Vietnam veterans comes to mind). But I am not convinced that, in current practice, psychoanalytic theory serves this purpose.

RH: I’d certainly agree with that. Thank you so much, Marcia!

Marcia Holmes is a post-doctoral researcher with the Hidden Persuaders project at Birkbeck, University of London. She is currently researching the American and British militaries’ Cold War-era community of psychological researchers, tracing how political, bureaucratic and intellectual fault lines influenced service psychologists’ assessments of brainwashing.

Rhodri Hayward is Reader in History at Queen Mary, University of London, and one of the Editors of HHS. His most recent book, The Transformation of the Psyche in British Primary Care, was published by Bloomsbury in 2014.

On the unexamined presence of psychotherapeutics – an interview with Sarah Marks

We were delighted in April 2017 to publish a special issue of History of the Human Sciences, ‘Psychotherapy in Historical Perspective,’ edited by Sarah Marks, currently based at Birkbeck, University of London, as part of the Wellcome Trust-funded Hidden Persuaders project. HHS web editor Des Fitzgerald spoke to Sarah about the special issue – and about how we might (re-)think the history of the psychotherapeutic complex today.

Des Fitzgerald (DF): Sarah, thanks for taking the time for this interview. Why a history of psychotherapy, now, in 2017?

Sarah Marks (SM): The history of psychotherapy does seem to be having something of a moment right now. There’s recently been the Other Psychotherapies conference at Glasgow, the Transcultural Histories of Psychotherapy conferences at UCL, special issues of this journal, and forthcoming issues of History of Psychology and The European Journal of Psychotherapy and Counselling. So I’m happy to say that this seems to be representative of a blossoming field.

The seed for this issue came about a few years back, though. As a graduate student I was very surprised at how fractional the literature seemed to be by comparison with work on, say, psychiatric diagnostics and the Diagnostic and Statistical Manual of Mental Disorders, psychopharmaceuticals, or asylums and institutions. I thought there must be others out there working on it, and there were. It’s probably particularly relevant that I came to it initially from trying to figure out how Cognitive Behaviour Therapy became such a significant force in the UK. I don’t especially privilege ‘histories of the present’ as an approach, but I think psychotherapies as interventions – and psychotherapeutic knowledge in broader terms – do have something of an unexamined presence in contemporary society and policy, in various forms. I note that there is currently a growing critique, or even backlash, against this in Britain, including from therapists themselves. So taking a historical approach now makes good sense – it reminds us that these are by no means timeless, value-free techniques, about which there is a clear consensus. And it also helps us to excavate their intellectual foundations, which aren’t always that transparent.

But beyond the ethical or political motivations for historicizing psychotherapy, there’s a fascinating variety of stories to be unearthed. Even just in this special issue, there are vastly different models of mind, debates about cultural or moral decline, questions of identity or normality and pathology, ideas about cure or the nature of human relationships, resistance movements, and the political spectrum across left and right, to name but a few. And that’s just from looking at predominantly West European and North American examples.

 

DF: For many I think, a Venn diagram showing intersections between the history of psychotherapy and the history of psychoanalysis will more or less form a circle. But I get a strong sense that this special issue wants to prise these two apart somewhat. Is that right? And why, if so?

SM: Yes, you’re right about that. I’m not of the opinion that the history of psychoanalysis has reached its end point, as some have begun to argue. I’m working myself on its legacies in the Soviet sphere during the Cold War at the moment. There still is much to be done there. But it really has overshadowed other approaches in the literature quite drastically.

This could be because psychoanalysis has been a very productive interpretive strategy in the arts and humanities: we’re all familiar with it, and it has been very successful at captivating audiences outside of its clinical setting. It would be hard to say that about, say, behaviourism, or Gestalt. So it’s understandable that we have more histories of it. But its popularity in these spheres, and as an actual clinical movement in the 20th century, has led to a sort of Whiggish dominance of this one particularly successful approach. This has been at the expense of lots of other therapeutics or frameworks, which also had a real impact in their time, but that have now – for multiple, usually contingent reasons – been forgotten. A number of the contributions to the special issue uncover such stories: from late Victorian psychotherapeutics, to some quite peculiar Viennese competitors to Freud, or ways of understanding art therapy and psychosis.

The striking thing, though, is that from the mid-century up to the present, psychoanalysis has had some extremely militant challengers to the throne, which have, in some cases, exceeded it in terms of institutional power. Behavioural and cognitive approaches are the obvious candidates here, especially in the way they have mobilized trials and ‘evidence base’ for their cause. But there are others: Rogerian counselling has been ubiquitous at particular moments, and, increasingly, Mindfulness-based approaches. And there is an excellent emerging literature coming through that is beginning to address some of these gaps: the work of Rachael Rosner on Aaron Beck, and Matthew Drage’s forthcoming PhD on the history of Mindfulness in particular. But the fact that the ‘non-psychoanalytic circle in the Venn diagram’, as you elegantly put it, has had very little historical interrogation thus far, has quite significant implications given the status they’ve acquired.

I would be curious to think more about the nature of the overlap of the two circles. Is there a degree to which we can say that most modern psychotherapies are indebted to psychoanalysis in some sense, in terms of how we have come to structure an interpersonal therapeutic relationship? How have some of the norms of analytic training, or its ethical framework, been kept up by other approaches, which have otherwise emphatically broken away from psychoanalysis? And how have other traditions been formed in explicit opposition to, or in dialogue with Freudian thought? Perhaps we should actually draw out your suggested Venn diagram on a blackboard and see where it leads…

 

DF: There is of course a well-known view – coming especially from scholars in the wake of Georges Canguilhem and Michel Foucault – that the history of ‘psy’ science tends towards recurrence: that to (as Nikolas Rose puts it) work ‘within the true’, as a psychotherapeutic practitioner, is also to work with a history of the truthfulness of one’s own practices, and vice versa. Do you agree with this view? And where does it leave the historian?

SM: There is something to it. I mention in the introduction to the special issue the question of therapeutic traditions, and Laurence Spurling’s comment that the texts of the founders can come to play an almost Talmudic role in particular professional communities, which can at times lead to a sort of conservatism, or I suppose a ‘recurrence’, to use your quotation. There certainly are dogmatic ‘believers’ out there in the therapy world, for whom the history of the profession is mainly useful for the purposes of legitimising their ways of seeing, which are wholeheartedly assumed to be true. But that’s not a universal stereotype at all.

Working at Birkbeck, I’m currently surrounded by clinicians, many of them psychoanalytic (see this short video, for example). I do observe with curiosity the way they sometimes read or teach historical texts as sources for contemporary practical inspiration. But, at the same time, they also step outside and approach these ‘truths’ as culturally or historically situated, and examine them from a position of critical distance. This isn’t exclusive to the academy either: from interviewing full-time therapists in cognitive traditions, too, I’ve often seen this reflexive tension at play. But, from the historian’s perspective, the problem here is that we’re talking about practitioners in the way they behave and present themselves outside of the consulting room. What actually goes on when they work as clinicians is still mostly a black box to me – and that’s the case for those I am able to talk to, as well as those historical actors that I can only trace via their textual or archival paper trail. This has huge implications for what it means to write about the history of psychotherapy: mostly we’re just reconstructing the edges, without ever actually getting at the therapeutic interaction itself.

So I’m not sure I can fully agree with Rose, that we can say they are ‘working within the true’. One could infer from the evidence that this is probably what is going on, sure. But I often wonder whether it could be the norm that there are slippages around such ‘truths’ in practice, (perhaps especially in a health service where policy dictates that clinicians deliver a particular brand of therapy, which they themselves might be critical of). Therapists might integrate different approaches that contain conflicting truth claims, or they could respond to a situation in a manner which might be guided by more banal or common-sense assumptions, or personal values, that have nothing at all to do with their professed psychological worldview. Or they might tailor a ‘therapeutic alliance’ around the belief system of the client, and work in such a way that necessitates the suspension of their own truths. There could be ways to research this question, to test the theory out a bit better. But as it stands, the historian, as usual, can only tell a partial story.

 

DF: One of the things that especially strikes me about the special issue – you gesture at this in the introduction – is that the patient or service-user is much more present, as an experiencing subject, than we are perhaps used to in histories of psychology and psychotherapy. How should we think about this shift in the literature (if indeed it marks a shift)?

SM: I’d say the recipient of therapy as an experiencing subject isn’t by any means as present as it should be. Patrick Kirkham’s article in the special issue really does place the service-user (or in his particular example of autistic self advocates and their objections towards Applied Behaviour Analysis, the service-resister) at the centre. And it’s interesting to note that Patrick came at this topic not from an interest in the history of therapeutics, but somewhat tangentially, from conducting his dissertation research on neurodiversity and the autism rights movement.

Despite the fact that the service-user-as-subject is the very point of most therapies, they are usually only implicit subjects in historical writing on psychotherapy. I’m as guilty of reiterating this in my own writing as anyone else, I admit. It’s something that really struck me when I was writing the introduction, looking over what literature existed. It is incredibly problematic that we have this looming blank space with regard to the experience of the recipient of the treatment, who is often only seen refracted through the gaze of the therapist.

It’s obviously not difficult to account for this imbalance: there are many more archives and published primary sources from practitioners than there are from patients. It’s a classic problem in the history of medicine, but I think historians of other medical fields – even psychiatry – have been doing a better job of addressing it. So I think it’s a shift in the literature that definitely should happen, and which I will look to follow through in my own work. There are some good sources of inspiration in neighbouring fields in terms of more contemporary, ethnographically orientated research. Ilina Singh’s work on children’s understandings of their ADHD diagnosis springs immediately to mind, or Juliet Foster’s monograph, Journeys Through Mental Illness.

On the other hand, there certainly is a theorized, or perhaps imagined, service-user that has cropped up in the work of sociologists, philosophers and historians. I’m thinking here of Nikolas Rose again and his autonomous, liberal ‘self’ who governs themself through psychological technologies. Equally, Ian Hacking’s patient who becomes therapeutically labelled with, and then reinterprets themselves through, a ‘human kind’ such as multiple personality disorder. Or Sonu Shamdasani’s individual who might opt in or out of an ‘optional ontology’ offered to them by psychotherapy, or who may well present to a therapist having already defined themselves in such terms in the first place.

All of these seem to capture something about the psychotherapeutic subject, and intuitively I’d say they are productive concepts to think with. But the interesting question would be to see whether, or how, they hold true in actual service-user experience, and how subjects do – or indeed do not – act in these terms. What might be the nuances of the individual case, or the particular variant of psychotherapy? How might these differ across time period or culture, or down to the level of the particular kind of institution, clinic, or private practice? Or even by the mode of delivery of self-help intervention, which can be many and varied these days? I’d love to see more work on these questions.

 

DF: The special issue is composed of many (I mean this term, as I guess you do, in its most positive sense) emerging authors in the field – was this a deliberate decision as an editor? And why, if so?

SM: To be honest, it’s because a high proportion of the people doing good work on this topic are at an early career stage, and they were the ones who came my way, by various means. So it feels as though the history of psychotherapy itself is something of an emergent field, even though there have been some really key publications from senior scholars in previous years, as I mention in the introduction. It wasn’t necessarily a deliberate editorial choice from the outset. But there is something to what you have noticed, as this isn’t the only edited volume I’ve been involved with which specifically foregrounds early career researchers. There probably is an implicit ethic there, in terms of wanting to open up space for newer authors, because there is a lot of inspiring new work out there. I have often thought this at recent conferences, that it bodes well for the future of the field.

In other editorial work I’ve done, I’ve also sought to encourage authors from non-anglophone academic backgrounds to publish. I think we can be incredibly North American and West European focused in our field. This doesn’t by any means reflect the quality of research that is being done by scholars elsewhere – it’s just that the latter doesn’t always make it into English-language publications.

 

DF: You yourself are (if I may use a deeply problematic term) an ‘early career’ scholar working in the history of mental health. I’m wondering, if it doesn’t make you groan too much – what advice would you have for others entering the field (I’m thinking, e.g., of those who have recently entered graduate study)?

SM: It’s interesting that you’re so apologetic about the use of ‘early career’. A number of colleagues, probably myself included, have found it quite a helpful designation: it can create a sort of solidarity amongst the precariously employed, and it at least implies that you might be en route to having a career! I’ve been part of a writing group within my department, made up of early career historians, which has been enormously galvanising, both creatively and in terms of pooling advice and information, and mutual support. So I’d advise those entering the field to get organised with those around you, within your own institutions and across the field more broadly. There’s a lot on offer already to help enable this, for postgraduates especially: conferences organised by the British Society for the History of Science, the Society for the Social History of Medicine, the Institute of Historical Research’s ‘History Lab’ etc.

I think another key thing is to start becoming an active member of the research community earlier rather than later. Don’t be shy about submitting work to journals (such as History of the Human Sciences!) once you have a good argument to make, and a strong research base to support it. Peer review can be gruelling, but it does help you shape your work for the better, and responding to that kind of critique is good preparation for the viva, not to mention job interviews. Put in for conferences, or organise your own conference if there’s a theme or question that you think really needs to be talked about more. That’s how this special issue originally came about, from putting out a call for conference papers during my PhD at University College London.

I’m often heartened by how supportive academics in this particular field can be towards fledgling researchers actually, in terms of advice and encouragement, from across different institutions. So I’d say it’s a very good community to be part of.

Psychotherapy in Historical Perspective is available now at the HHS website.

Sarah Marks is a postdoctoral researcher at Birkbeck, University of London, working on the history of the psy-disciplines during the Cold War and after, with the Wellcome Trust-funded Hidden Persuaders project. She is co-editor (with Mat Savelli) of Psychiatry in Communist Europe.

Des Fitzgerald is social media and web editor of History of the Human Sciences, and a lecturer in sociology at Cardiff University.

“We should beware anyone who thinks they’ve got an easy application of biology to society” – an interview with Chris Renwick

We are delighted that Chris Renwick has joined the editorial team at History of the Human Sciences. Chris is Senior Lecturer in History at the University of York, and a Fellow of the Royal Historical Society; he is a historian of modern Britain, specialising in the intersections of politics, biology and society during the nineteenth century. His first book, British Sociology’s Lost Biological Roots, appeared in 2012 and was shortlisted for the Philip Abrams Memorial Prize; his second, Bread for All, a history of the welfare state, will be published by Penguin in 2017; he is currently working on a new book on the intellectual origins of social mobility studies in Britain. To mark Chris’s cooption onto the editorial team, HHS web editor Des Fitzgerald caught up with him for a short interview.

 

Des Fitzgerald: Chris, as a historian, you work on the intersection of social science, biology, and politics in Britain in the nineteenth and early twentieth centuries. What first drew you to this area (I guess as a PhD student?) – and, in particular what made you situate it in a study of the discipline of *sociology* particularly, which of course was the topic of your first book?

 

Chris Renwick: Practically speaking, I came to work on sociology via my MA dissertation, which I wrote on the Scottish biologist and sociologist Patrick Geddes’ early career. I’d started out my MA with a broad interest in the social dimensions and applications of Darwinism, which I’d acquired through a number of modules I took with Paolo Palladino, Steve Pumfrey, and Peter Harman when I was an undergraduate at Lancaster. To be honest, I can’t remember precisely how I got to Geddes. But a good friend of mine was working on Lewis Mumford — the American social and architectural critic who was Geddes’ main, if reluctant, disciple — so Geddes was part of the intellectual furniture around me for a while. I could easily have carried on working on Geddes because his drift from T. H. Huxley’s laboratory in London to town planning in India is so fascinating. But I became more interested in a Donald MacKenzie, SSK-style, competing-visions approach to the biology/society question, rather than one thinker’s programme. The significance and consequence of things doesn’t seem to make much sense without thinking through what the alternatives are at any given moment. This point crystallised for me when I was reading around the topic of the founding of the Martin White chair of sociology at the LSE — which is what my PhD thesis and book were about. I read a throwaway sentence in a biography of Francis Galton that said something along the lines of “there were three candidates for this chair, which set the course for the field for the following decades, but the London School of Economics [LSE] didn’t see fit to choose a eugenicist. The reasons aren’t clear”. I thought that was a pretty fascinating question and couldn’t believe nobody had made a sustained effort to get to the bottom of it.
It was apparent immediately that my own pretty casual and unquestioned take on sociology as the general science of society actually obscured much more interesting questions about the content and practices that went into it.

 

On that latter point, it is probably significant I did my graduate degrees in History and Philosophy of Science [HPS], which intersects with Science and Technology Studies [STS] at certain points but is its own field for a number of historical reasons (people like Bob Olby and Roy Porter would trace those reasons back to the 1930s and the famous Soviet delegation at the International Congress of the History of Science and Technology at the Science Museum). HPS scholars — most of whom have an undergraduate background in the natural sciences — are generally instrumental when it comes to sociology: they use the intellectual tools when they need them but tend not to think of the history of those tools as something of interest. When I started my PhD I shared the common HPS assumption that the interesting questions about the relationship between biological and social science are on the biology side. I quickly realised that wasn’t true and that the hope and expectation around sociology — the desire for it to make people’s lives better — was what drove the project forwards. In fact, one thing that I came to appreciate was the importance biologists themselves attached to sociology as a project. That is something that I hope readers took from that work.

 

DF: As a sort of half insider/outsider, I’m interested in your reading of the ‘British sociology project’ today. At the end of your book, you ask: ‘how should sociology, as a general science of society, relate to biology, as a general science of life?’ What’s your assessment of how well sociology is facing this question?

 

CR: I’m never sure whether I’m a half insider or not when it comes to sociology. A number of sociologists have been incredibly enthusiastic about my work and have encouraged me to write for sociology audiences. I owe a great debt to Steve Fuller on that score; I’ve learned a lot from him. As a historian you always like to explain that things are as they are because of something that happened at a given point in the past. But you don’t always get to work on things where the current practitioners of the discipline say that the question itself is still open and the historical analysis is interesting for that reason. I think I’ve become a convert to the sociology project — and I do believe it is an intergenerational project in this country — through that process. It is still the case, though, that I find it difficult to take my historian’s hat off — the occasional pretence of neutrality — and really make the kinds of judgements that sociologists would prefer me to make about whether it was good or bad that certain things happened, like Leonard Hobhouse rather than Patrick Geddes being appointed the first Martin White Professor of Sociology.

 

As far as the question of how well sociology is doing with the biology question now, I have mixed feelings. For the most part, I think sociology has done and continues to do pretty well. I have argued before that British sociology has a long history — perhaps unique among the national traditions — of engaging with the biology question but that, for reasons that are not always clear, it has buried that story. There are plenty of people doing interesting work on the subject, and one of the particularly interesting areas involves looking at economic and social science approaches to biology, rather than vice versa, as Nik Brown, my colleague in sociology at York, has been doing. I worry, however, about how the external environment, particularly the situation with funding bodies, is going to affect that. There are long-standing concerns among historians that social science sources of funding are off limits, which has implications for the relationship between the two fields, not to mention particular kinds of history, which struggle to find favour with other funders. The challenge for sociology is going to be finding a way to engage with biology that doesn’t involve integrating with it, which is what might happen if funders indicate a preference for biology-led social science, as history suggests is always a great temptation.

 

DF: In some ways, you might be called a historian of the ‘biosocial’ – a term that is still anathema to many because of the deeply ugly history of how biological and social projects have tended to inhabit one another. I know it’s banal to try to learn ‘lessons’ from history – but if we were to seek any, what might we take from the intellectual history of ‘social biology,’ in terms of the normative project of a ‘biosocial’ social science today?

 

CR: One thing that is apparent from the history of the biosocial is the way it has seeped into so many aspects of our lives and thought. As you suggest, though, the biosocial has the potential to be quite toxic in its political dimensions. I’m not the greatest enthusiast for the idea that there are lessons to be derived from history, but one thing that does seem quite clear is that we should beware anyone who thinks they’ve got an easy application of biology to society. The truly interesting ideas are the biosocial ones that acknowledge the complexities and recognise, as someone like Lancelot Hogben, whom I’ve done a lot of work on recently, would argue, that it isn’t either/or when it comes to things like heredity and the environment; there are actually distinct spheres that arise out of their interaction and need to be studied as such. It is worth noting that Galton’s original vision of eugenics certainly fits that bill. But the fact that few people want to really get stuck into that probably underscores the point you made. This is probably a problem that involves reading history backwards, rather than forwards: taking the mid-twentieth-century programmes of forced sterilization in the USA and the Nazi regime as the obvious and only consequences of earlier ideas, and assuming that people like Galton envisaged them. The history is much more complicated than that, and a starting point for unravelling it is highlighting how it is actually embedded in the political world we still inhabit.

 

DF: You’re also now working on the history of the British welfare state. Can you say more about that project – and especially how it extends your attention to the meeting-points of biology and politics? I know you’ve written elsewhere about William Beveridge’s relationship to ‘social biology.’

 

CR: The book on the welfare state, Bread for All, which comes out in the spring, was really a product of and companion piece to the work I’d been doing on Beveridge and social biology at the LSE. I was in the library looking at a collection of Galton lectures — annual events the Eugenics Society used to hold — and I saw Beveridge had a lecture in it. It’s not strange to find a social scientist from the early or mid-twentieth century who was interested in eugenics. When I checked the date of Beveridge’s Galton lecture, though, I suddenly realised that he had actually left the opening parliamentary debate about the Beveridge Report to go and give it. That kick-started a chain of investigation that generated both the welfare state book and the book on social mobility research I’m writing up at the moment. It seems pretty obvious to me that there are strong eugenic strands running through the welfare state, as long as we appreciate that eugenics was about the environment rather than simply genes by the mid-twentieth century, and that the serious population research that came out of eugenics was an essential part of thinking about how to make everything work. All that has roots in a number of philosophical and political traditions, including utilitarianism, so I think it’s a pretty interesting story.

 

What is important about that state of affairs, I think, is that we appreciate that eugenics and biosocial science came in many different political flavours. There was a right-wing version, which has overshadowed everything else for the obvious reason that it was and continues to be a spectacle. The much more productive sites of research, however, were on the left and among the technocratic liberals — the technical types Mike Savage has written about during the past decade. Beveridge was very much one of those thinkers. He was born in 1879, so he was part of the generation that lived and worked through the fuzzy period between the acceptance of evolutionary theory as fact and the “modern evolutionary synthesis”. So much of what we take for granted about politics and social policy after the Second World War came out of thinking about things in that uncertain environment. We’re used to talking about religion not as a constraint on science but as a source of inspiration. I think we should be doing more to talk about the biology-society intersection as a hugely productive site of work in that sense.

 

DF: The ‘human sciences’ is of course (to put it kindly) a capacious term – and the work of its *history* only multiplies the potential for confusion. What does this term mean to you? What does it mean to locate yourself (at least in part) as a historian of the human sciences?

 

CR: You’re absolutely right that the term means different things to different people. I certainly once thought of history of the human sciences [HHS] as being simply the history — as in the academic field — of the human sciences (primarily the psy-sciences). But I quickly realised that wasn’t right as I dug deeper into the journal. The operative term is “human”, the idea being that we bring together people who are making some kind of contribution to our understanding of what the human is and what it has meant to be human since science became one of the dominant ways of knowing, to use that phrase, back in the early modern era. I would certainly locate myself in that sphere. After all, the welfare state, to name one example, was created in part to help people live meaningful lives.

 

DF: Finally: you recently organised a conference at York, on the future of the history of the human sciences – and you’re also co-editing a special issue of HHS on the same theme. So, then, Chris, in 200 words or fewer: what *is* the future of the history of the human sciences?

 

CR: The York conference was a really exciting event that gave everyone the opportunity to look forwards and back. One thing that was quite clear from all the papers and discussions (and this comes from the heavily biased perspective of someone who helped orchestrate and organise those discussions) is that the future involves figuring out what the coalition of scholars and fields that deal with questions about the human looks like. There are challenges when it comes to broadening the field out to consider disciplines that haven’t always featured as prominently as others. I’m thinking here of the dominance of the psy-sciences, which was understandable given the context in which the field emerged. Broadening out in that way involves asking new questions and considering different practices. But, as a number of participants in the conference pointed out, it also involves asking serious questions about the status of the human in the twenty-first century. That, I would suggest, is the greatest challenge.

 

Book review: ‘Neuroscience and Critique: Exploring the Limits of the Neurological Turn.’

Jan De Vos and Ed Pluth (Eds.), Neuroscience and Critique: Exploring the Limits of the Neurological Turn

New York and London, Routledge, 2016, 236 pages, hardback £95.00, paperback £31.99, ISBN: 978-1138887350

What is it about neuroscience? Ever since a group of disparate life sciences – partly propelled by ‘the decade of the brain’ in the 1990s[ref]Rees, D and Rose, S. (2004) The New Brain Sciences: Perils and Prospects, Cambridge: Cambridge University Press[/ref] – congealed into what we today call ‘neuroscience,’ scholars from the humanities and social sciences have been committed, sometimes intensely committed, to a more-or-less sharp critique of this science, and the unspooling of its socio-political effects[ref]Edwards, R., Gillies, V., and Horsley, N. (2015) ‘Early Intervention and Evidence based Policy and Practice: Framing and Taming. Social Policy and Society 15(1): 1-10[/ref][ref]Martin, E. (2000) ‘Mind-Body Problems’, American Ethnologist 27(3): 569-590[/ref][ref]Bennett, MR and Hacker, PMS (2003) Philosophical Foundations of Neuroscience, London: Wiley[/ref]. Indeed, in recent years, neuroscience has not only been the object of critical scrutiny, but has become something of a whetstone on which critique sharpens itself – a sort of funhouse mirror for critical social scientists to figure out what it is, exactly, they stand for. What explains this cultural role of neuroscience in the academy? Is it a fear that, in its powerfully reductive hold over human subjectivity (so it seems, anyway), neuroscience will ultimately stake a claim to all social, cultural, and human insight – an ‘expectation,’ as Jan De Vos and Ed Pluth put it in their introduction to Neuroscience and Critique, ‘that the neurosciences will explain it all?’ (p.2).

Neuroscience and Critique appears in an established genre – but it has significant virtues of its own. Central among these is the sheer breadth of its scholarship: this is a properly interdisciplinary collection, featuring not only philosophers with interests in critical theory and/or psychoanalysis, but also a geographer, an anthropologist and STS scholar, a neuroscientist, and a psychologist, among others. What holds this disparate collection of interests together is a commitment to not only some kind of critical engagement with neuroscience – but also a shared attention to what, precisely, critique can do, even to what critique might be, as it gets more widely entangled in neuro-sciences and neuro-cultures. At the heart of the book, then, is a deeply committed reflexive attention to what it is we do when we think critically about neuroscience. The conjunction ‘and’ in the book’s title is crucial: at stake here are not only the ‘conditions of possibility’ for neuroscience, but also for critique itself (p.4). As I will discuss below, I think there is an uneven distribution of sophistication in the consideration of these two poles; nonetheless, readers looking for careful work on the stakes of critique today, especially as it approaches the natural sciences, will find much to think with in this volume.

The book is in three sections. The first, ‘Which Critique?’, perhaps the most overtly philosophical of the three, is also where we get the most explicit examination of the conditions of critique itself. It asks, as Jan De Vos puts it in his own contribution: ‘what are the limits of a deconstruction of neuroscience?’ (p.24). In De Vos’s account, one cannot simply do ideology-critique of neuroscience today, given the claim that neuroscience itself now makes on critical thought (‘targeting our false consciousness, laying bare the illusions involved in love, altruism, rationality…’ [p.23]). As De Vos shows, however, neuroscientific empirics remain haunted by psychological and humanistic concepts – they are inhabited, he argues, by a folk-psychological human subject, coterminous with the birth of the modern sciences, and which might itself yet be the object of critical scholarship (p.25, 39). A more overt defence of critique is offered by Nima Bassiri – who, against the fashion of the times, is unconvinced that critique has to be associated with ‘negativity, undue skepticism [and] excessive suspicion’ (p.41). Bassiri proposes instead a different kind of critical question, one not mounted on this suspicious imperative, viz. (I paraphrase): what is it about contemporary selfhood that legitimizes brain science as its singular technology? This is a good question, and Bassiri approaches it through an historical epistemology of forensics – uncovering a need, especially in the nineteenth century, and amid concerns over disorders of simulation and malingering, to decide whether we are or are not our selves, in the grip of such experiences (p.55).

The second section, ‘Some Critiques’ (I am only selectively surveying essays from each section), is the most empirical part of the volume, and this, not coincidentally, is where it is strongest. Geographer Jessica Pykett, for example, analyses ‘the political significance of the influence of psychological and neuroscientific approaches in economic theory’ (p.82) – situating her account in the work of ‘discerning the precise models of the human subject selected by policy makers’ (p.88). That labour of discernment leaves Pykett well placed to propose, against the turn to ‘non-representational’ theories in geography, that ‘the widely presaged undoing of the human subject within human geography may… be premature’ (p.96). In the volume’s most compelling chapter, Cynthia Kraus, of the interdisciplinary ‘Neurogenderings’ research network, argues against self-consciously ‘critical’ programmes that are too often wrapped up in attempts to assuage conflict. Kraus argues, instead, for ‘dissensus,’ or the ‘study of social conflicts inherent to processes of knowledge and world making’ (p.104). As she points out: ‘people come to speak the language of the brain, not only because it has a prominent truth-discourse… they do it to come to terms with conflicting life situations’ (p.105). And it is not only by focusing on, but indeed exacerbating such conflict, says Kraus, that scholars interpellated by neuroculture might pose ‘the conditions under which interdisciplinarity… could be valued as a theoretical and practical solution’ (p.112).

The final section, ‘Critical Praxes,’ consists of papers by three scholars working within the neurosciences in relatively heterodox ways. I found these interesting in themselves, but (at least in the case of the latter two) struggled to relate them to the broader themes of the book. In this section is an argument for ‘embodied simulation’ from the neuroscientist, Vittorio Gallese (famous for his role in the discovery of mirror neurons) –  a proposal, in brief, that what is at stake in intersubjectivity is not only a kind of mind-reading, but actually the incorporation of others’ mental states (p.193). And there is a related discussion of empathy from the neuropsychoanalyst, Mark Solms – for whom empathy is not only a perception of others’ states, but a mode in which a subject ‘projects itself…into the object’ (p.205). For Solms, this projecting-into is always an affective move: whatever the desires of scientific psychology, ‘feelings come first’ in the work of encountering and (ultimately) tolerating the world (p.218).

There is some variability across the chapters, but I found much to stimulate and provoke in this volume. And if there is, for my taste, sometimes too much of the rhetoric of continental philosophy and critical theory here, still Neuroscience and Critique made me think hard (harder than I am used to) about the potent range of practices that we might arrange under the sign of ‘critique,’ as well as the very different inheritances and stakes of those practices. Those – like me – accustomed to being casually dismissive of the critical impulse, especially as it relates to neuroscience, have much to gain from these essays, even where there is disagreement.  Nonetheless, in the spirit of the book itself, and as a contribution to the important conversation that I think it wishes to provoke (and it should), let me here make two critical interventions of my own. They centre on the two poles of the book’s title: ‘neuroscience’ and ‘critique.’

One thing that was often unclear to me, as I read the book, was what different authors actually intended by ‘neuroscience’ – who it was they were actually addressing in the guise of this figure. For example, authors in the volume (albeit not all of them) sometimes invoke ‘neuroscience’ or ‘the neurosciences,’ as if such terms represent a stable or coherent category – leaving aside the contingency, partiality, and specificity of the myriad different practices that are actually affiliated to this image. But if we are going to talk about ‘neuroscience,’ then we need to be clear whether we are talking about, for example, cognitive neuroscience, or molecular neuroscience, or systems neuroscience, or neuroanatomy, or whatever it is. This might seem like a nitpick – but actually such practices, only lately gathered under the umbrella, ‘neuroscience,’ have significantly different inheritances and trajectories. What gets lost, when we fail to recognise these differences, is any sense of the lively debates, contests, and disagreements that actually go on within ‘neuroscience’ itself. In fairness, internal critique is discussed in the introduction (p.3). Still, overall, I felt that I got little sense from the book of the sheer range of (often quarrelling) methods, perspectives, epistemologies, and so on, that go on under this polyvalent noun, ‘neuroscience.’ For example, when De Kesel says in his interesting and suggestive contribution that he will ‘show the limits of neurology’s attempt to comprehend freedom’ (p.13), it is not clear to me what is indicated by that noun, ‘neurology.’ Indeed no neurological work is explicitly cited; we have only secondary philosophical texts. Similarly, Reynaert, in his philosophically rich chapter, argues ‘that neuroscience runs the risk of becoming dystopic in a logical sense by committing a category mistake’ (p.62).
But relatively little ‘neuroscience’ is discussed in the chapter, beyond the now somewhat hoary example of Benjamin Libet’s experiments on free will, and conversations around it (p.75). Indeed, something that strikes me about the volume, taken in the round (by no means applicable to all chapters), is that, for a book about ‘neuroscience and critique’, there is sometimes quite a bit less actual neuroscience discussed than one might anticipate – even in chapters that claim to speak of either neuroscience or the brain.

Rather than simply picking holes, however, I want to use this feature of the volume to pose a broader question in the sociology of knowledge: what are we actually talking about when we talk about neuroscience? What are we (here I mean ‘we’ scholars in the social sciences and humanities, and not only the present authors) concerned about, or critical of, when we are concerned about, or critical of, ‘neuroscience’? Because clearly it is not always the laboratory practice, or the output, of an actually-existing neuroscience. And here is maybe the crux of the issue: the editors and authors would perhaps respond – with justification – by saying that they do not promise in-depth reading of neuroscience literatures; that their interest is in (as per the subtitle) ‘a neurological turn’ – which is to say, a cultural and historical object, and not a laboratory one. What concerns them is the way in which the neurosciences ‘are both situated within culture and in turn influence culture’ – as well as the practices of bordering that then ensue (p.4). Which is all fair enough. But I cannot get over the feeling that, in the absence of a committed and detailed attention to specific and carefully-parsed neuroscientific literatures, we are potentially faced with a paper tiger. Which prompts another question: how are we to think sociologically about critical attentions to ‘neuroscience,’ and to a ‘neurological turn,’ when those attentions are not necessarily or always made manifest via a sustained attention to contemporary neuroscientific experiments, practices, or concepts? Is there not some risk that we are in the presence of a phantom – that the ‘neuroscience’ in question may only be a product of the very critique that presumes to unravel it?

This brings me to my second critical point. It seems to me that the central question of the book is, as De Vos and Pluth put it in a perceptive and subtle introduction: ‘what are the conditions for a critique of the neurosciences from the humanities?’ (p.3). As they point out, there can be no reactionary turning-back to the ‘before’ of neuroculture – not least because, as Nikolas Rose (2003) and others have pointed out, we cannot now separate our subjectivity from our consciousness (or indeed our inhabitation) of our cerebrality[ref]Rose, N. (2003) ‘Neurochemical Selves’, Society 41(1): 46-59[/ref]. What is needed, according to De Vos and Pluth, is ‘something other than a simple, humanistic critique of the neurosciences’ – a way of thinking that

engages in questions about the conditions of possibility, impossibility, and the domain, or range, of different sciences and disciplines… how far does the legitimacy of the neurosciences extend? How is the relation of the neurosciences to the humanities to be thought? (p. 4).

I am sympathetic to such an ambition. We see one aspect of it in the chapter by Philipp Haueis and Jan Slaby, which critically analyses the stakes of the Human Brain Project, arguing that that project is entangled in specific computational and economic infrastructures – thereby producing a kind of ‘de-organ-ization’ of the brain, even leading to a kind of ‘world-making’ that reconfigures the outside vis-à-vis the experimental microworld of brain and computer (p.124, 131). We see it similarly in the contribution of Ariane Bazan, which maps a history of interaction between biology and psychology, and diagnoses a new ‘moment’ for psychology, via a neuropsychoanalysis that works to ‘characterize the… knotting-points between the biological and the mental,’ placing physiological and clinical concepts in new orders of relation, and thus subverting old hierarchies (p.181). There is much to admire in such critical analyses. And yet. In The Limits of Critique (2015), the literary theorist Rita Felski distinguishes between two ways of being suspicious.[ref]Felski, R. (2015) The Limits of Critique, Chicago: University of Chicago Press[/ref] The first, ‘digging down,’ is the now deeply unfashionable practice of digging into the concealed ‘truth’ of the text, to discover what’s really being said (Marx and Freud are obvious icons of this mode [p.61]). The other mode of suspicion, more recent, and perhaps more subtle, works through a strategy of ‘standing back.’ The goal – clearly, Michel Foucault is the guiding light – is now ‘to “denaturalize” the text, to expose its social construction by expounding on the conditions in which it is embedded’ (p.54). I read the critical ambition of Neuroscience and Critique very much through this latter mode.
And yet, as Felski points out, the distinction between the two may be less profound than it seems: for all its analytical coolness, she argues, for all its disdain for simplistic hermeneutics, that second mode, that critical-theory procedure of ‘standing back,’ is ‘just as suspicious and distrustful’ as its truth-digging forebears – surrounding this practice is still a profound commitment to ‘drawing out undetected yet defining forces, to explain what remains invisible or unnoticed by others’ (p.83). For all its subtlety, I wonder if we are not, throughout this volume, still in that old suspicious mode. What Felski demands of us, in any event, is that we take seriously the question of whether we are not, in 2016, still mired in critical accounts of neuroscience, and neuroculture, which, even at their most sophisticated, are still working to dredge up, to make visible, to spatialize, that always-undetected, mysterious, and all-powerful force.

Neuroscience, says Joseph Dumit, in an important afterword to the book, is surprisingly weak today (p.223). Indeed it is precisely its weakness, its epistemological fragility and plasticity, argues Dumit, that makes neuroscience dangerous in the hands of industrial, political, and economic actors, working to instrumentalize research for their pre-determined ends. Dumit thus asks us to read the essays in Neuroscience and Critique as a map of fragility – a helpful guide to tensions and aporias within neuroscience, which the reader may wish not only to note, but to exacerbate (here I am reminded of Kraus’s desire for dissensus). And that reader might so exacerbate not with destructive or paranoid intent, but precisely to ‘help defend [neuroscience’s] right to explore brains against its instrumentalization by industries’ (p.226-228). This is the vital question: what kind of neuroscience do we want to see in the world? At the risk of introducing simplicities of my own: what kind of neuroscience are our scientific collaborators and colleagues working towards, and what tools do we have for working with them, for collaborating with them, even for making shared things with them? How far are ‘we’ willing to travel down that road? The map in this book is certainly a good starting point for those conversations. But it is precisely conversations, interactions, and shared readings that need to be had. And for this, I think we need to let go of that still-suspicious work of standing back. Even if our primary interest is in cultural objects, we need to engage more closely with actual neuroscientific experiments, in their many actual manifestations in the world. This, it seems to me, is the still unrealised promise of neuroscience and critique.

 

Des Fitzgerald is a lecturer in sociology at Cardiff University. His first book, co-authored with Felicity Callard, Rethinking Interdisciplinarity across the Social Sciences and Neurosciences, is now available open access from Palgrave Macmillan.

 

July 2016 issue of ‘History of the Human Sciences’

The July 2016 issue of History of the Human Sciences (Volume 29, Issue 3) is now published. Abstracts of research articles, plus links to the full text, are below.

Elwin Hofman (KU Leuven) – ‘How to do the history of the self’

The history of the self is a flourishing field. Nevertheless, there are some problems that have proven difficult to overcome, mainly concerning teleology, the universality or particularity of the self and the gap between ideas and experiences of the self. In this article, I make two methodological suggestions to address these issues. First, I propose a ‘queering’ of the self, inspired by recent developments in the history of sexuality. By destabilizing the modern self and writing the histories of its different and paradoxical aspects, we can better attend to continuities and discontinuities in the history of the self and break up the idea of a linear and unitary history. I distinguish 4 overlapping and intersecting axes along which discourses of the self present themselves: (1) interiority and outer orientation; (2) stability and flexibility; (3) holism and fragmentation; and (4) self-control and dispossession. Second, I propose studying 4 ‘practices of self’ through which the self is created, namely: (1) techniques of self; (2) self-talk; (3) interpreting the self; and (4) regulating practices. Analysing these practices allows one to go beyond debates about experience versus expression, and to recognize that expressions of self are never just expressions, but make up the self itself.

Egbert Klautke (University College London) – ‘“The Germans are beating us at our own game”: American eugenics and the German sterilization law of 1933’

This article assesses interactions between American and German eugenicists in the interwar period. It shows the shifting importance and leading roles of German and American eugenicists: while interactions and exchanges between German and American eugenicists in the interwar period were important and significant, it remains difficult to establish direct American influence on Nazi legislation. German experts of race hygiene who advised the Nazi government in drafting the sterilization law were well informed about the experiences with similar laws in American states, most importantly in California and Virginia, but there is little evidence to suggest they depended on American knowledge and expertise to draft their own sterilization law. Rather, they adapted a body of thought that was transnational by nature: suggesting that the Nazis’ racial policies can be traced back to American origins over-simplifies the historical record. Still, the ‘American connection’ of the German racial hygiene movement is a significant aspect of the general history of eugenics into which it needs to be integrated. The similarities in eugenic thinking and practice in the USA and Germany force us to re-evaluate the peculiarity of Nazi racial policies.

Maurizio Esposito (University of Santiago) – ‘From human science to biology: the second synthesis of Ronald Fisher’

Scholars have paid great attention to the neo-Darwinism of Ronald Fisher. He was one of the founding fathers of the modern synthesis and, not surprisingly, his writings and life have been widely scrutinized. However, less attention has been paid to his interests in the human sciences. In assessing Fisher’s uses of the human sciences in his seminal book the Genetical Theory of Natural Selection and elsewhere, the article shows how Fisher’s evolutionary thought was essentially eclectic when applied to the human context. In order to understand how evolution works among humans, Fisher made himself also a sociologist and historian. More than a eugenically minded Darwinist, Fisher was also a sophisticated scholar combining many disciplines without the ambition to reduce, simplistically, the human sciences to biology.

Gastón Julián Gil (CONICET; Universidad Nacional de Mar del Plata) – ‘Politics and academy in the Argentinian social sciences of the 1960s: shadows of imperialism and sociological espionage’

Social sciences in Latin America experienced, during the 1960s, a great number of debates concerning the very foundations of different academic fields. In the case of Argentina, research programs such as Proyecto Marginalidad constituted fundamental elements of those controversies, which were characteristic of disciplinary developments within the social sciences, particularly sociology. Mainly influenced by the critical context that had been deepened by Project Camelot, Argentinian social scientists engaged in debates about the theories that should be chosen in order to account for ‘national reality’, the origins of funding for scientific research, or the applied dimension of science. In this sense, the practices of philanthropic organizations like the Ford Foundation stimulated considerably the ideological passions of that period; those practices also contributed to fragmentation in various academic groups. In this way, the problem of American imperialism, and its consequent economic and cultural dependencies, were present in the controversies of academic fields whose historic evolutions cannot be fully understood without considering their strong links with national and international politics.

Colin Gordon – ‘The Cambridge Foucault Lexicon’ (Review Essay)

(Extract in lieu of an abstract) This big and potentially influential volume is one sign among others of Michel Foucault’s ongoing elevation to classic status within the history of recent thought. The publishers say that the 117 entries in this volume are written by ‘the world’s leading scholars in Foucault’s thought’. Some of the 72 contributors certainly fit that billing. Alongside many established experts, there are also younger scholars whose renown lies, hopefully, in the near future; this mix gives a range of generational perspectives which is to be welcomed. The contributors are comprised overwhelmingly of philosophers working in the USA and Canada, plus a handful from western Europe, and two Australians. Foucault’s creative impact has long extended across a far wider global and intellectual community than is adequately represented here. The mass presence of philosophers doubtless reflects the commercial fact that academic reference works targeted at the university library market generally need a definite primary departmental focus. Nevertheless, it is a pity that a few more contributions have not been provided to this lexicon by some of those academics based in geography, history, politics, criminology, sociology, anthropology or classics who have engaged with, used or tested Foucault in their fields. This might have also diminished a tendency, perhaps compounded by the legacy of a past generation of commentaries focused on Foucault’s earlier books, to produce an overall emphasis which underplays Foucault’s public and political engagements.