Steve Fuller. Post-Truth: Knowledge as a Power Game; New York: Anthem Press; 218 pages; paperback $39.95; ISBN: 978-1-78308-694-8
by Steve Baxi
A consistent problem in journalistic discourse on post-truth is the confusion between the recent phenomenon of post-truth and some historically justifiable, apolitical, entirely objective Truth – the latter having been, on some level, eclipsed by the former. Indeed, this is precisely how the Oxford English Dictionary understands post-truth, and so mainstream media outlets and contemporary studies of truth alike have focused on the contest between Truth and post-truth. However, this understanding misses the relationships of power and the conditions of possibility for knowledge with respect to truth – power relations and conditions we can claim to value in research fields that place the pursuit of truth over the recent, overblown idea of Truth.
In the face of academic experts, Brexit, and social media, Steve Fuller argues that post-truth is “a deep feature of at least Western intellectual life, bringing together issues of politics, science and judgement in ways which established authorities have traditionally wished to be separate” (2018, 6). Fuller’s Post-Truth: Knowledge as a Power Game attempts to provide a set of case studies of post-truth in academia, as well as in contemporary political movements, to establish the historical character of post-truth, or what he calls a post-truth history of post-truth.
The book is divided into seven chapters, each examining Fuller’s own previously developed concepts and social epistemological stances on expertise, philosophy, sociology, and science and technology studies. Fuller especially draws on Vilfredo Pareto’s distinction between “lions” and “foxes” to help set up the tensions in his case studies. Where the lions play by the rules of the game, the foxes attempt to change the rules, but do so such that the lions believe themselves to be following the very same rules they have always followed. Fuller’s approach here is loosely genealogical, perhaps even Foucauldian, as he attempts to, at least initially, present us with a history of the present.
While Fuller coins various concepts, the most important appears to be modal power, which he defines as “control over what can be true or false, which is reflected in institutions about what is possible, impossible, necessary and contingent” (2018, 188). Modal power mirrors the historical discourse on systems of exclusion, the most powerful of which is the will to know, here reconfigured as part of the military-industrial will to know (more on this below). This form of power is intended to explain not only how the moves of the “lions” and “foxes” become possible, but also how the academic fields they inhabit grow, change, and accept or reject certain paradigms of truth.
While commentary on post-truth’s relationship to Brexit and the 2016 US presidential election has largely understood post-truth as a rejection of the facts, Fuller provides a more complex account. He asks: if these events are the outcome of a certain discourse of rules, where were these rules crafted? How might we even think of Plato, in one of Fuller’s most provocative statements, as the original post-truth philosopher? And how does this change our view of the present? Fuller especially analyses Brexit via his long-standing anti-expertise approach to social epistemology. This allows him to read Brexit as a phenomenon incited by parliamentary elites detached from the ethical values and strategies identified in wider public opinion. From this Fuller concludes that there has been a resurgence of a “general will” in democracy. Jean-Jacques Rousseau’s general will represents a sense of shared identity: to challenge me is not merely to challenge my opinion, but the very identity I share in, and the traditions I identify with. In the case of Brexit, this is how people come to rally around nationhood. In the case of academia, it is seen in the hard-line alignment with unchanging paradigms of thought, where the act of placing a footnote is a way of counting yourself amongst a group with a political identity. Fuller asks where this growing predilection for academic politics came from. He thus dovetails into a genealogy of academic philosophy.
Academics, Fuller argues, while claiming to be in pursuit of truth, or what Michel Foucault (borrowing from Nietzsche) would call savoir, have in fact, since Plato, been entrenched in what is only now referred to as post-truth.[ref] Foucault, Michel. “Appendix: The Discourse on Language.” The Archaeology of Knowledge, 220. Trans. A. M. Sheridan Smith. New York: Vintage Books, 2010. 215-237.[/ref] This claim is tied into Fuller’s concept of modal power, which is an account of how, collectively, any discourse becomes a discourse in the first place; post-truth is then an argument about the boundaries of discourse. Academic fields such as sociology are important because they are acutely aware of what counts as possible in terms of boundary-pushing. Where sociology had historically embraced the post-truth condition, with its analysis of how subjectivity evolves within a particular historical condition, its contemporary pursuit of a style of knowledge-making modelled on the rigid sciences fails to adequately challenge post-truth.
If post-truth and truth are separated by who decides to change the rules of the game and who follows them, sociology ought to be at some advantage. And yet sociology – and academia more widely – seems unable to confront these issues. To explain this, Fuller coins the term “military-industrial will to knowledge,” which exemplifies the pursuit of knowledge as “effective” or useful. Fuller diagnoses academia as frequently degenerating into conversations about the merits of certain principles without first identifying itself as part of the institutions of modal power. Here, we see how goal-oriented publication, writing, or knowledge-development relates to whether one follows the rules or changes them: i.e. whether one is a “fox” or a “lion.” A military-industrial will to know empowers certain paradigms of thought, and thus the lions are those safeguarding an unyielding sense of academic identity; the foxes are those who would challenge these norms, but the “publish or die” state of academic positions makes such self-aware shifts nearly impossible.
Even living outside these academic norms will not necessarily solve the problem. Fuller develops the concept of ‘protscience’ to describe how individuals arrive at their own understandings of science. Even deviations from institutional norms still produce their own kinds of norms, which are often just as dangerous and which play into the post-truth condition. Protscience most directly threads Fuller’s discussion of Karl Popper and Thomas Kuhn into his more immediate interest in academia. By pulling in the philosophy of science, we begin to see how philosophy and science are not so distinct. If we take post-truth to be about how the rules change, science as the understanding of rules, and politics as their possibility via modal power, then these three “vocations” ultimately coincide with one another. Here, Fuller delivers on the fundamental premise of this text: that post-truth represents a collapse of traditional academic spheres into each other. To do philosophy is to do science; to do science is to do politics; to do politics is to do philosophy.
In general, Post-Truth is an insightful, thorough text which examines issues of truth with more nuance and clarity than most other recent works in the field. The book succeeds most overtly in its ability to present a case for why post-truth studies need to be done. To understand the contemporary world, the promises of past theories, and where things go wrong in political controversy, we have to understand how post-truth in its contemporary condition unites all fields of inquiry. In this way, Fuller seems to owe much to John Dewey’s and Arthur Danto’s arguments that a solution to one problem implies a solution to all problems.[ref] Danto, Arthur C. Nietzsche as Philosopher, 24. New York: Columbia University Press, 1965.[/ref]
However, what Post-Truth lacks is a convincing case for its own need to present concepts and coinages that might go without a label. Despite a deep reading of the text, and research on Fuller’s past work, I am still unclear on why modal power is somehow a different, necessary, or more precise description of power. In general, the post-Foucauldian world of academia, and certainly the audience Fuller wishes to speak to, will be keenly aware of what we mean when we discuss the conditions for the possibility of certain concepts. Power on its own already names an all-inclusive concept that unites the various fields Fuller discusses, in a way that seems to gain little by the addition of ‘modal.’
Similarly, Fuller frequently draws on his own body of work, which wrestles with these themes of anti-institutionalism, elitism, and gatekeeping.[ref]This is evident most clearly in Fuller, Steve. Social Epistemology. Bloomington: Indiana University Press, 1988. And Fuller, Steve. The Intellectual. Cambridge, UK: Icon, 2005.[/ref] But he does so without giving us any reason to see him as one of us, standing outside the world of academics as we wage the war of foxes and lions on the ground. “Military-industrial will to knowledge” has quite a ring to it, as does modal power, but these concepts sometimes sound more like the academic stiffness Fuller claims to detest, and less like the tools with which we might interrogate the various values of our post-truth society.
Steve Baxi is a Graduate Student and Teaching Assistant in the Ethics and Applied Philosophy Department at the University of North Carolina at Charlotte. He works across philosophical traditions, with a particular interest in Nietzsche and Foucault. He is currently writing on the politics of truth and social media ethics.
Matthew Drage is an artist, writer and postdoctoral researcher. He recently completed his PhD at the Department of History and Philosophy of Science at Cambridge, and is now Post-Doctoral Research Fellow in the History of Art, Science and Folk Practice at the Warburg Institute, in the School of Advanced Study, University of London. His first article from his PhD, ‘Of mountains, lakes and essences: John Teasdale and the transmission of mindfulness,’ appeared in December 2018 as part of the HHS special issue, ‘Psychotherapy in Europe,’ edited by Sarah Marks. Here Matthew talks to Steven Stanley – Senior Lecturer in the School of Social Sciences at Cardiff University, and Director of the Leverhulme-funded project, Beyond Personal Wellbeing: Mapping the Social Production of Mindfulness in England and Wales – about the article, and his wider research agenda on mindfulness in Britain and America.
Steven Stanley (SS): This article is your first publication based on your PhD research project, which you recently completed. Congratulations! Can you tell us a bit about your PhD project?
Matthew Drage (MD): Thank you! So yes, my PhD project was a combined historical and ethnographic project which focused on the emergence of “mindfulness” as a healthcare intervention in Britain and America since the 1970s. My main question was: why was mindfulness seen by its proponents as such an important thing to do? Why did they seek to promote it so actively and vigorously? I focused on a key centre for the propagation of mindfulness-based healthcare approaches in the West: the Center for Mindfulness in Health, Care and Society at the University of Massachusetts Medical Center. I also looked at the transmission of mindfulness from Massachusetts to Britain in the 1990s – this is an episode I narrate in the article.
I had a real sense, when I did my fieldwork, archival research and oral history interviews, that for people who practice and teach it as their main livelihood, mindfulness was something like what the early 20th century sociologist Max Weber called a vocation. I had a strong impression that this devotion to mindfulness as a way of relieving suffering was what helped mindfulness to find so much traction in popular culture. While my PhD thesis doesn’t offer empirical support for this instinct, it does focus very closely on why mindfulness seemed so important to the people who propagated it. I argued that this was because mindfulness combined some of the most powerful features of religion – offering institutionalised answers to deep existential questions about the nature of human suffering and the purpose of life – while at the same time successfully distancing itself from religious practice, and building strong alliances with established biomedical institutions and discourses.
Maybe the real discovery – which is something I only mention briefly in this article – is that religious or quasi-religious ideas, practices and institutions, especially Buddhist retreat centres, were crucial for making this separation possible. Mindfulness relied heavily on Buddhist groups and institutions (or, at least, groups and institutions heavily influenced by Buddhism) for training, institutional support and legitimacy, whilst at the same time deploying a complex array of strategies for distancing itself from anything seen as potentially identifiable (to themselves and to outsiders) as religious.
More specifically, most mindfulness professionals I met sought to distance themselves from the rituals, images, and cosmological ideas associated with the Buddhist tradition (for example chanting, Buddha statues or the doctrine of rebirth). But at the same time, many “secular” mindfulness practitioners shared some fundamental views with contemporaneous Buddhist movements. Many held the view that the ultimate goal of teaching mindfulness in secular contexts was to help people to entirely transcend the suffering caused by human greed, hatred and delusion: that is, reach Nirvana, or Enlightenment, the central goal of Buddhist practice. And the sharing of these views between Buddhist practitioners and secular mindfulness teachers was helped by the fact that the latter frequently attended retreats with local Buddhist groups – indeed, often helped lead those groups! In my project I try to show how blurry the lines were, and that this blurriness was really at the heart of what the secular mindfulness project – at least in its early stages – was about: trying to keep the transcendental goal of Buddhism intact whilst shedding aspects of it that were seen as mere cultural accretions, deliberately blurring the boundaries between the religious and the secular.
SS:How did this project come about?
MD: I came across secular mindfulness in 2011 through my own personal involvement with religious Buddhism. It was clearly on the rise, and while I wasn’t that interested in practising meditation in a secular context, I could see it was probably going to get big. Mindfulness seemed part of a more general cultural trend towards using science and technology to reshape the way the individual experiences and engages with the world around them. Technological developments like personal analytics for health (tracking your own fitness with wearable devices, say), and increasingly personalised user-experiences online, also seemed to exemplify this trend. When I decided to do a PhD in 2013, I was interested in a very general way in questions of subjectivity and technology in contemporary Western culture, and I picked the topic that seemed to fit best with my existing interests.
SS: Your article makes an important contribution to the historiography of recent developments in clinical psychology in Britain, especially the development of the so-called ‘third wave’ of psychotherapy (that is, approaches that include mindfulness and meditation). In particular you highlight the perhaps unexpected influence of alternative religious and spiritual ideas and practices on the emergence of British mindfulness in the form of Williams, Teasdale and Segal’s volume, Mindfulness-Based Cognitive Therapy, in the 1990s. You have also unearthed some fascinating biographical details regarding living pioneers of British mindfulness. Did you know what you were looking for before doing your study? Were you surprised by what you found?
MD: The simple answer is: sort of, and yes! I kind of found what I was looking for, and (yet) I was surprised by what I found.
When I began my research I was convinced that mindfulness was just another form of Buddhism, slightly reshaped and repackaged to make it more palatable. My supervisor, the late historian of psychoanalysis Professor John Forrester, warned me about taking this approach. I remember him telling me, “If you keep pulling the Buddhism thread, the whole garment will unravel!” And unravel it did. After about three years, I realised that the most central metaphysical commitments of the mindfulness movement were not especially Buddhist, but owed as much, if not more, to Western esotericist traditions. By this I mean the 18th- and 19th-century tradition that includes the spiritualist theologian Emanuel Swedenborg, the American Transcendentalists (e.g. Henry David Thoreau and Ralph Waldo Emerson) and, in the 20th century, people like the countercultural novelist and philosopher Aldous Huxley. These thinkers shared, amongst other things, the idea that there is a perennial, universal truth at the heart of all the major religions. The influence of this view was often, I found, invisible to mindfulness practitioners themselves. Indeed, it was invisible to me for a long time. They, like me, had often encountered Buddhism through the lens of these very Western, esotericist religious or spiritual ideas, so the ideas just appeared as if they’d come from the Buddhist tradition. So while I wasn’t surprised by the influence of spiritual ideas on mindfulness, I was surprised by their source.
I was also surprised by the conclusions I reached about its relationship with late 20th-century “neoliberal” capitalism. I’m not quite ready to go public with these conclusions yet, but watch this space. I’ll have a lot to say about it in the book I’m working on about the mindfulness movement.
SS: As you say in your article, mindfulness has become a very popular global phenomenon, which in simple terms is about being more aware of the present moment. When we think of mindfulness, we tend to think of ‘being here now’. What was it like studying mindfulness as a topic of historical scholarship? And, vice versa, mindfulness is sometimes understood as referring to, as you say, a ‘realm beyond historical time’. What lessons are there for historians from the world of mindfulness?
MD: A really great question. There is a fundamental conflict between my training as an historian and the views I was encountering amongst mindfulness practitioners. They tended to use history in very specific ways to legitimise their views. Mindfulness was taken as both a universal human capacity (and thus beyond any specific historical or cultural contingency) and primordially ancient, a kind of composite of the extremely old and the timeless. If mindfulness had a history at all, so the story within the mindfulness movement tended to go, it was coextensive with the history of human consciousness.
I spent a lot of time thinking and writing about the history of this view of the history of mindfulness. This was challenging because it often left me feeling as though I was being somehow disloyal to my interlocutors within the mindfulness movement; as though I was – in a way that was very hard to explain to them – undermining a key but implicit pretext for their work. In the end I tried to present a view of mindfulness which takes seriously its claims to universality by examining the historicity of those claims. I do not want to assume that there are no universals available to human knowledge; and if there are, then – as the feminist science and technology studies scholar Donna Haraway argues in her incredible 1988 essay, “Situated Knowledges” – universals are always situated, emerging under very specific historical conditions. My main theoretical concern came to be understanding and describing the conditions for the emergence of universalising claims about humans.
To answer the other part of your question: I think mindfulness teaches historians that time is itself a moveable feast; that we should take seriously the possibility of a history of alternative or non-standard ways of thinking about time. Mindfulness practitioners often talk about remaining in the “present moment,” a practice which you could think of in this way: it takes the practitioner out of the usual orientation to time, to past and future, and creates quite a different sense of the way time passes. I found that institutionalised forms of mindfulness practice were, to some extent, organised to support this change in one’s approach to time. I suspect this is also linked to an idea that I talk about in my article, the idea that mindfulness is somehow “perennial” or “universal.” There is a sense in which by practising mindfulness, and especially by practising on retreat, one is removing oneself from the usual run of historical time. I think that it would be extremely interesting to think about how to do a history of this phenomenon; a history of the way people, especially within contemplative traditions, have sought to exit historical time.
SS: Many researchers of mindfulness also practice mindfulness themselves. Did you practice mindfulness as you were studying it? If you did, how did this work in relation to your fieldwork?
MD: Yes, I did. I was reluctant to do so initially, mainly because I had my own Buddhist meditation practice, and didn’t want to add another 40 minutes to my morning meditation routine. However, when I started meeting people in the mindfulness movement, they were very insistent that mindfulness could not really be understood without being experienced. While carrying out my PhD research I went to a lot of different teacher training retreats, workshops and events, and even taught an 8-week mindfulness-based stress reduction (MBSR) course to students at Cambridge. I think that this was an indispensable part of my research, to experience first hand what people were talking about when they spoke about mindfulness. Participating in a shared sense of vocation that I encountered amongst many mindfulness professionals showed me just how emotionally compelling mindfulness was.
SS: Mindfulness is often presented as a secular therapeutic technique which has a scientific evidence base, and as having completely moved away from its religious roots. Does your work challenge this idea and if so, how? And, related to this, what do you mean in your article by the ‘Buddhistic milieu’?
MD: As I say above, I do mean to complicate this idea that mindfulness is a straight-up medical intervention, moving ever-further from its religious roots. I think perhaps the development of mindfulness as a mass-cultural phenomenon roughly follows this trajectory. But this trajectory is also in itself complex: the parts of the mindfulness movement that I studied were also an attempt at making society more sacred, using secular biomedical discourse, institutionality and rationality as a means of doing so – although most people wouldn’t have talked about it in this way. Secular biomedicine, at least for the earliest proponents of mindfulness, was seen as a route through which what we might think of as a special kind of spiritual force (though they didn’t think of it like this) – a force which, in my view, has very much to do with what we normally call religion – could be transmitted.
By the ‘Buddhistic milieu’ I mean something fairly loose – the constellation of communities, institutions, texts and practices which are strongly influenced by the Buddhist tradition, but which do not – or do not always – self-identify as Buddhist. It’s a coinage inspired by the sociologist of religion Colin Campbell’s idea of a “cultic milieu,” a term he used to describe the emergent New Age movement in the 1970s. For Campbell, the cultic milieu is a community of spiritual practitioners characterised by individualism, loose structure, low levels of demand on members, tolerance, inclusivity, transience, and ephemerality. When I talk about a Buddhistic milieu here, I mean something like this, but with Buddhism (very broadly construed) as a focus. Some traditions, such as the Insight meditation tradition, which did much to give rise to the secular mindfulness movement, especially encourage this type of relationship to Buddhist practice, emphasising their own secularity and insisting on their openness to practitioners from any faith tradition.
SS: You suggest that the transmission of mindfulness follows a ‘patrilineal’ lineage which is captured by terms like dissemination, essence, seminal and birth. Your focus is very much on the male ‘founding fathers’ of Mindfulness-based stress reduction (MBSR) and Mindfulness-based cognitive therapy (MBCT) rather than the women pioneers of the movement. Given that such stories of male founders have been troubled by feminist and revisionist historians of science and psychology since the 1980s especially, can you tell us more about the gender politics of the mindfulness movement and give us a sense of the role female leaders have played in the movement?
MD: An excellent but difficult line of questioning! When I first wrote this paper – and when I started my PhD – I took a much more explicitly feminist perspective. But as I started to write, I was confronted by how incredibly sensitive a topic this is, and I’m still not quite ready to say anything very definite. Mindfulness was not, nor do I think we should expect it to have been, impervious to the tendency towards patriarchal domination that permeates society in general. And, as you suggest here, we might fruitfully read some of the key symbols of male power I identify in my article as a sign of this tendency. I can’t say much more for now by way of analysis, but I’m aiming to tackle this issue more directly in the book.
I can give a couple of cases, though, which I plan to explore in more detail in the future. The first is the role of a meditator and palliative care worker called Peggie Gillespie, who worked with Jon Kabat-Zinn in the very earliest days of his Clinic in Worcester, Massachusetts (where he first developed Mindfulness-Based Stress Reduction). Gillespie joined Kabat-Zinn as co-teacher in 1979, either in the very first mindfulness course he taught to patients at the University of Massachusetts Medical Center, or not long afterwards. She then acted as his second-in-command for the first couple of years of the Stress Reduction Clinic’s existence. She was certainly involved in developing MBSR (which was called SR&RP – the Stress Reduction and Relaxation Program – for the first decade of its life), and even wrote the first ever book about MBSR, her 1986 work Less Stress in Thirty Days. To my knowledge, however, Gillespie only gets a single mention in any writing anywhere about the history of MBSR – in the foreword to Jon Kabat-Zinn’s Full Catastrophe Living. The second example is the relative neglect of Christina Feldman. It wasn’t until the very end of my research period that I realised just how influential a figure Feldman has been – she had led the retreat on which Kabat-Zinn had his idea for MBSR, and went on to be the primary meditation teacher of one of the main early proponents of British mindfulness, the cognitive psychologist John Teasdale. Although again she’s rarely mentioned, in a sense she oversaw the birth of secular mindfulness both in Britain and in America. I’m hoping that she’ll grant me an interview, so that I can write her into the book!
SS: If a teacher or practitioner of mindfulness is interested in your research, and wants to know more about the history of mindfulness, what texts would be in your History of Mindfulness 101?
MD: So, when it comes to straightforward history, I’d go for Jeff Wilson’s (2014) Mindful America, Anne Harrington’s (2008) The Cure Within, Mark Jackson’s (2013) The Age of Stress, and David McMahan’s (2008) The Making of Buddhist Modernism. These books all do important work both in narrating episodes in the history of mindfulness since the 1970s, and in situating those episodes amidst broader currents in the history of science, medicine, and religion. Finally, Wakoh Shannon Hickey’s book Mind Cure: How Meditation Became Medicine was published just a couple of weeks ago, in March 2019. I haven’t read it yet, but I know something of her doctoral research into the history of MBSR, and suspect it will provide a much more in-depth and focused exploration than has yet been seen.
Matthew Drage is a Post-Doctoral Research Fellow in the History of Art, Science and Folk Practice at the Warburg Institute, in the School of Advanced Study, University of London.
Steven Stanley is Senior Lecturer at the School of Social Sciences, Cardiff University.
This is the second part of a two-part interview, between Vanessa Rampton, Branco Weiss Fellow at the Chair of Practical Philosophy, ETH Zurich, and the anthropologist Tobias Rees, Director of the ‘Transformations of the Human Program’ at the Berggruen Institute in Los Angeles, and author of the new monograph, After Ethnos (Duke). The discussion took place following a workshop on Rees’s work at the Zurich Center for the History of Knowledge in 2017. You can read the first part of the interview here.
4. Uncertainty and/as Political Practice
Vanessa Rampton (VR): I want to continue our conversation by asking you about the implications of foregrounding uncertainty and the ‘radical openness’ you mentioned earlier for aspects of life that are explicitly normative. Take politics, for example. Have you thought about the political implications of embracing uncertainty, and what could be necessary to facilitate communication, or participation, or what it is you think is important?
Tobias Rees (TR): For me, the reconstitution of uncertainty or ignorance is principally a philosophical and poetic practice. These concepts are not reducible to the political. But they can assume the form of a radical politics of freedom.
VR: How so?
TR: For a long time, in my thinking, I observed the classical distinction between the political as the sphere of values and the intellectual as the sphere of reason. And as such I could find politics important, a matter of passion, but I also found it difficult to relate my interest in philosophical and anthropological questions to politics. And I still think the effort to subsume all Wissenschaft, all philosophy, all art under the political is vulgar and destructive. However, over the years, largely through conversations with the anthropologist Miriam Ticktin, I have learned to distinguish between a concept of politics rooted in values and a concept of politics rooted in the primacy of the intellectual or the artistic. I think that today we often encounter a concept of politics that is all about values, inside and outside of the academy. People are ready to subject the intellectual – the capacity to question one’s values – to their beliefs and values.
VR: For example?
TR: This is much more delicate than it may seem. If I point out the intellectual implausibility of a deeply held value … trouble is certain. Maybe the easiest way to show what I mean is to take society as an example again. We know well that the concepts (not the words) of society and the social emerged only in the aftermath of the French Revolution, under conditions of industrialization. We also know perfectly well that the emergence of the concepts of society and the social amounted to a radical reconfiguration of what politics is. I think there is broad agreement that society is not just a concept but a whole infrastructure on which our notions of justice and political participation are contingent. If I point out, though, that society is not an ontological truth but a mere concept – a concept, indeed, that is somewhat anachronistic in the world we live in – people become uncomfortable. Many have strong emotional reactions insofar as they are wedded to the social as the good, and as the only form politics takes. When I then insist, as I usually do, the conversation usually ends with my interlocutors telling me that this is not an intellectual but a political issue. That is, they exempt politics as a value domain from the intellectual. I thoroughly disagree with this differentiation.
In fact, I find this value-based concept of politics unfortunate and the readiness to subject the intellectual to values disastrous. Values are a matter of doxa, that is, of unexamined opinions, and as long as we stay on the level of doxa the constitution of a democratic public is impossible. Kant saw that clearly and made the still very useful suggestion that values are a private matter. In private you may hold whatever values you prefer, Kant roughly says, but a public can only be constituted through what is accessible to everyone in terms of critical reflection. He called this the public exercise of reason. So the question for me is how, in this moment, we might allow for a politics that is grounded in the intellectual, in reason even, rather than in values. The anti-intellectual concept of politics that dominates public and especially academic discussions is, I think, a sure recipe for disaster. Obviously this is linked, for me, to the production of uncertainty and to the question of grounding practice in uncertainty.
VR: I am very sympathetic to your desire to avoid confusing the tasks of, say, philosophy with political activism, but how does this go together with uncertainty and ignorance?
TR: Yes, it may seem that my work on the instability of knowledge or on uncertainty amounts to a critique of reason. But in fact the contrary is the case: for me, the reconstitution of ignorance, the transformation of certainty into uncertainty, is an intellectual practice. Or better, an intellectual exercise. It is accomplished by way of research and reflection; it is accomplished by thinking about thinking. Another way of making this point is to say that uncertainty –– or the admission of ignorance –– is the outcome of rigorous research; it is the outcome of a practice committed, in principle, to searching for truth. If I am at my most provocative I say that uncertainty implies an open horizon –– it opens up the possibility that things could be different, and this possibility of difference, of openness, is what I am after. So one big challenge that emerges from this is how one can reconcile the intellectual and the political, and I do think that’s possible. That would lead back to what I called epistemic activism.
VR: How would that work in practice?
TR: My personal response unfolds along two lines. The first one amounts to a gesture to Michel Foucault: with Foucault one could describe my work as a refusal to be known or to be reducible to the known. Hence my interest in that which escapes, which cannot be subsumed, etc. A second way of responding to your question, with equal gratitude to Foucault, is to say that the political is for me first of all a matter of ethics, that is, of conduct: how do you wish to live your life? And here I advocate the primacy of the intellectual –– katalepsis –– over values. Based on these two replies one can approach the political on a more programmatic scale: whenever someone speaks in the name of unexamined values, or claims to speak in the name of truth and thereby closes the horizon and undermines the primacy of the intellectual, I can make myself heard and ask questions and express doubt. And when I say doubt I don’t mean a hermeneutics of suspicion. I also don’t mean social critique. I mean radical epistemic doubt that tries to reconstitute irreducible uncertainty.
VR: So this would involve calling out the truth-claims of other actors?
TR: I am not fond of the term calling out. The phrase tends to hide the fact that what is at stake is not only to confront the truth claims someone is making, but also to avoid the very mistakes one problematizes: to speak in the name of truth. I am more interested in speaking in the name of doubt: not a doubt that would do away with the possibility of truth and that would leave us with the merely arbitrary, but a doubt that transforms the certain into the uncertain, while maintaining the possibility of truth as measure or as guiding horizon.
5. Uncertainty as Virtue
VR: Let’s talk about the normative implications of uncertainty beyond politics. I was interested in a review of your work by Nicolas Langlitz in which he accused you of wanting to radically cultivate uncertainty, and he had arguments for why this wouldn’t work. Actually this reminds me of a passage in Dostoevsky’s The Brothers Karamazov where the Grand Inquisitor condemns Christ for having burdened humanity with free choice, and claims that actually human beings cannot cope with freedom, nor do they really desire it. Rather they prefer security or happiness: having food, clothes, a house and so on. And one question would be, how do we acknowledge uncertainty, acknowledge its importance, but not cultivate it in a way that could potentially be destructive?
TR: I have several different reactions at once. Here is reply one: I am deeply troubled by the idea of decoupling happiness from freedom. As I see it now, uncertainty is a condition of the possibility of freedom –– and of happiness. Why? Because the impossibility of knowing provides an irreducibly open horizon. This is one important reason for my interest in cultivating uncertainty.
My second reply amounts to a series of differentiations that seem to me necessary or at the very least helpful. For example, I think it makes sense to differentiate between the epistemic and the existential as two different genres. To make my point, let me go to the beginning of the preface to the first edition of the Critique of Pure Reason, where Kant says that human reason (for reasons that are not its fault) finds itself confronted with questions it cannot answer. I am thoroughly interested in this absence of foundational answers that Kant points out here. What answers does Kant have in mind? He doesn’t actually provide examples and most modern readers tend to conclude he meant the big existential questions of the twentieth century: why am I here? What is the meaning of life? Stuff like that. However, I think that is not at all what Kant had in mind. He simply shared an epistemological observation: whenever we try to provide true foundations for knowledge, we fail. In every situation –– whether in science or in everyday life –– we cannot help but rely on conceptual presuppositions we are not aware of. What is more, there are always too many presuppositions to possibly clear the ground. The consequence, pace Kant, is that knowledge is intrinsically unstable and fragile. I am interested in precisely this instability and fragility of knowledge. Of all knowledge. Let’s say for me this instability is the condition of the possibility of freedom.
Up until this point I simply have made an epistemological observation. Now Langlitz, whose work I admire, asks if my epistemic cultivation of uncertainty is productive in the face of, say, climate change deniers. To me, he implicitly confuses here the epistemic –– which remains oriented towards truth and is an intellectual practice –– with the doxa driven rejection of the epistemic and the intellectual that is characteristic of the climate change deniers. What you are asking about though is of a different quality, right? You are asking about a more existential uncertainty.
6. Uncertainty and Medicine
VR: My question is motivated by thinking about cases such as medicine. For example, does the epistemic uncertainty you are concerned with require special measures in the clinical encounter? After all, physicians’ perceived ability to cope with uncertainty has a well-documented placebo effect. So for example physician and writer Atul Gawande – I’m thinking of his books Complications (2002) and Better (2007) – writes about all the things modern medicine doesn’t know in addition to what it does know. But he emphasizes that this self-doubt cannot become paralyzing, that physicians must act, and that action is – in many cases – in patients’ interests. So this doesn’t contradict per se what you were saying before, but it does show how epistemic uncertainty is seen as something that has to be managed in this particular professional setting, and that a kind of simulacrum of certainty may also give patients hope in a difficult situation.
TR: I think that perhaps the best way to address the questions you are raising would be a research project that attempts to catalogue the multiple kinds of uncertainties that flourish in a hospital. If I stress that there are different kinds of uncertainties, then this is partly because I think that different kinds of uncertainties have different kinds of causes –– and partly because I think that there is no obvious link between the epistemic uncertainty I have been cultivating and the kinds of uncertainties that plague the doctor-patient relation in medicine.
VR: I am surprised to hear you say that, because I understood the relation between technical progress and the skill of living a life in intrinsically uncertain circumstances as a central feature of your work. In Plastic Reason, for example, you quote Max Weber, who says: ‘What’s the meaning of science? It has no meaning because it cannot answer the only question of importance, how shall we live and what shall we do?’ And as you know Weber came to that idea via Tolstoy, who basically says: ‘the idea that experimental science must satisfy the spiritual demands of mankind is deeply flawed’. And Tolstoy goes on to say: ‘the defenders of science exclaim – but medical science! You’re forgetting the beneficent progress made by medicine, and bacteriological inoculations, and recent surgical operations’. And that’s exactly where Weber answers: ‘well, medicine is a technical art. And there is progress in a technical art. But medicine itself cannot address questions of life and how to live, and what life you want to live.’
TR: But why does Weber answer that way? You are surely right that he arrives at the question concerning life and science via Tolstoy. However, it also seems to me that he thoroughly disagrees with Tolstoy. In my reading, Tolstoy makes an existential or even spiritual point. He places the human on the side of existential and spiritual questions and calls this life –– and then criticizes science as irrelevant in the face of these questions. Weber’s observation is, I think, a radically different one. Tolstoy is right, he says: there are questions that science cannot answer. However, if you want to live a life of reason –– or of science –– then this absence of answers is precisely what you must endure. Or, perhaps, enjoy. In other words, Weber upholds science or reason vis-à-vis its enemies.
One can refine this reading of Weber. He answers that science is meaningless. And I think the reason for this is that, as he sees it, science isn’t concerned with meaning. Indeed, from a scientific perspective human life is entirely meaningless. However, Weber nowhere argues that science is irrelevant for the challenge of living a life. On the contrary, he lists a rather large series of tools that precisely help here –– from conceptual clarity to the experience of thinking to technical criticism. His whole methodological work can be read as an ethical treatise on how to live a life as a Wissenschaftler. According to Weber, the Tolstoy argument requires a leap of faith that those of us concerned with reason –– and with human self-assertion in the face of metaphysical claims –– cannot take.
It is easy, of course, to claim that life is so much bigger than science. But then, upon inspection, there is no aspect of life that isn’t grounded in conceptual presuppositions –– and these presuppositions have little histories. That is, they didn’t always exist. They emerge, they re-organize entire domains of life, and then we take them for granted, as if they had always existed. Which they didn’t. This includes the concept of life, I hasten to add. Weber opts for the primacy of the intellectual as opposed to the primacy of the existential. And for Weber the only honest option is to accept the primacy of the intellectual. That may mean that some questions are never to be answered. But all the answers he examined are little more than a harmony of illusions.
You see, I think that this is easily related back to my distinction between epistemic uncertainty and existential uncertainty. In Plastic Reason I quoted Weber not least because my fieldwork observations seemed to me a kind of empirical evidence that proves the dominant, anti-science reading of Weber wrong. If you are thinking that it is your brain that makes you human and if you are conducting experiments to figure out how a brain works, well, then you are at stake in your research. Science doesn’t occur outside of life. None of this is to say that the uncertainties that plague medicine aren’t real. But it is to say that I think it is worthwhile differentiating between kinds of uncertainty.
Tobias Rees is Reid Hoffman Professor of Humanities at the New School for Social Research in New York, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles, and Fellow of the Canadian Institute for Advanced Research. His new book, After Ethnos, will be published by Duke University Press in October 2018.
Vanessa Rampton is Branco Weiss Fellow at the Chair of Philosophy with Particular Emphasis on Practical Philosophy, ETH Zurich, and at the Institute for Health and Social Policy, McGill University. Her current research is on ideas of progress in contemporary medicine.
The Spring 1972 issue of the short-lived self-published journal Red Rat: The Journal of Abnormal Psychologists includes a review by Ruth Davies of Ken Loach’s film Family Life alongside the Yugoslavian director Dušan Makavejev’s W.R., Mysteries of the Organism.[ref]Ruth Davies, ‘Film Review: W.R. + Family Life’, Red Rat: The Journal of Abnormal Psychologists, 4, Spring 1972, pp. 28-29, p. 28. Issues of Red Rat are held in the archives at MayDay Rooms, London.[/ref] According to the reviewer, both films were then showing simultaneously at the Academy Cinema on Oxford Street in London and in ‘both cases, the theme of the film is the work of a radical psychologist whose ideas have helped lay the foundations of alternative psychology; in the case of Family Life, the work of RD Laing, and in Mysteries of the Organism, Wilhelm Reich.’ Davies outlines the different approaches to psychology presented in the films: ‘Family Life is an account of the genesis of schizophrenia firmly in the Laing tradition,’ following a young woman whose diagnosis with schizophrenia is presented as deriving from her family situation, while W.R., Mysteries of the Organism combines documentary footage shot in America (interviewing people at Wilhelm Reich’s infamous Orgonon laboratory and following various artists around New York) with a heavily stylised narrative about sexual revolutionaries in Belgrade encountering a dashing Soviet figure skater who embodies Communism in its repressive and sexually repressed form.
Though Davies is primarily concerned with the content of these two films, I was struck by how their wildly contrasting formal qualities–Loach’s drab naturalism (people wearing beige clothes drinking beige cups of tea in beige institutional rooms) versus Makavejev’s audacious experimentalism (people tearing off lurid clothes knocking down the walls of their bohemian rooms)–resemble a contrast at the heart of Oisín Wall’s new book, The British Anti-psychiatrists: From Institutional Psychotherapy to the Counter-Culture, 1960-1971. Wall demonstrates that British anti-psychiatry in the period immediately preceding the release of these films in Britain in 1971 was connected to the staid ‘square’ world of professional medicine, as well as being hugely influential within the ‘hip’ counter-culture, involving ‘collusions and collaborations between the long-haired kaftan wearing radicals who inhabit the 1960s of the contemporary popular imagination and people who, at another time, would have been the epitome of bourgeoisie [sic] stability’ (p. 2). As such, Wall’s narrative shuttles between beige institutional spaces and anarchic psychedelic communes, sees middle-aged doctors living alongside young hippies, and describes unlikely convergences of medical, spiritual, philosophical, and political discourses.
One of the most persuasive arguments Wall advances in The British Anti-Psychiatrists, and the book’s main intervention, is an insistence on the importance of acknowledging continuities and connections between the theories, practices and communities of the mainstream ‘psy’ disciplines and those of anti-psychiatry. As Wall explains, RD Laing arrived in London from Glasgow in 1956 with the intention of training as a psychoanalyst. Laing and Aaron Esterson’s work with people diagnosed with schizophrenia that forms the basis of Sanity, Madness and the Family (1964) was undertaken while Laing was involved with the Tavistock Institute of Human Relations. Laing began his analysis with Charles Rycroft, supervised by DW Winnicott and Marion Milner (prominent figures in the ‘Independent Group’ of British psychoanalysts), who were both subsequently listed in the training programme of the Philadelphia Association. Wall observes in a footnote that Winnicott invited Laing to deliver a paper at the British Psychoanalytic Association in 1966, of which Winnicott was then the president, and ‘practically begged’ Laing to join as a member (p. 181). Wall claims that even at the height of their counter-cultural notoriety, when they were most vocal in their critiques of the medical establishment and of professional hierarchies, the British anti-psychiatrists continued to invoke their psychiatric credentials to gain legitimacy in certain contexts: ‘the anti-psychiatrists were not averse to using the authority of their professional status to prove a point or advance a position’ (p. 91).
A contextual chapter also places the radical therapeutic communities associated with anti-psychiatry in historical perspective, discussing their antecedents in mainstream psychiatry. Wall describes the therapeutic communities established at Northfields and Mill Hill during the Second World War and demonstrates that many principles that would go on to be central to anti-psychiatry–including an emphasis on the therapeutic benefits of group dynamics that challenged the centrality of the doctor-patient relationship–were commonplace in psychiatry by the 1950s (Wall mentions, for instance, a 1953 World Health Organisation report on The Community Mental Hospital). He also makes clear not only that critiques of traditional asylums were already being voiced by the time anti-psychiatry emerged but that mental hospital reform was well underway: ‘the Anti-Psychiatric movement’s antipathy to the hospital was well rooted in established psychiatric practices and discourses’ (p. 50). Wall does still assert, though, that it would be ‘naïve to suggest’ that anti-psychiatry’s ‘widespread cultural influence’ was completely unrelated to the eventual ‘deinstitutionalisation of the British asylums in the 1980s and 1990s’ (p. 8).
Although Wall challenges the novelty of the two most well-known British anti-psychiatric spaces, Villa 21 and Kingsley Hall, he nonetheless concludes that both ‘went farther’ than the therapeutic communities that preceded them ‘in the informality of staff-patient relationships, the democratic arrangement of the community and the de-stigmatization of mental illness’ (p. 78). Wall’s account of David Cooper’s experiments at Villa 21, a community established at Shenley Hospital in the early 1960s, is particularly illuminating, including perspectives from interviews conducted with two former patients, one of whom was much more cynical in his reflections than the other, indicating that the bombastic theoretical pronouncements made by British anti-psychiatrists in their best-selling published work often played out ambiguously in practice: ‘I don’t think anyone really understood why we were there or what we were trying to achieve, or what it was meant to achieve by us being there’ (p. 66).
Kingsley Hall in East London was the most infamous anti-psychiatric space, renowned for its raucous LSD-fuelled parties as much as for its innovative therapeutic methods. Wall emphasises the American psychiatrist Joe Berke’s role in providing links with the kinds of counter-cultural figures conventionally associated with the building, but he points out that visits from celebrities, artists, and hip international radical psychiatrists like Franco Basaglia and Félix Guattari were combined with those from ‘the world of ‘square’ psychiatry’ (p. 74). He also discusses tensions that emerged within the community that would have lasting implications for anti-psychiatry, particularly between Laing and Esterson: the former more anarchic and experimental, the latter more interested in retaining some conventional medical techniques and boundaries. Laing allegedly carried a Lenin book under his arm, while Esterson read Stalin.
Wall not only discusses anti-psychiatry’s psychiatric roots but also traces the ways it eventually grew entangled with the counter-culture, through a consideration of anti-psychiatrists’ links with Alexander Trocchi’s Project Sigma, their organisation of the Dialectics of Liberation Congress at the Roundhouse in London in 1967, and their involvement in establishing the Anti-University. As in other sections of the book, he highlights the forms of power still at play in these ostensibly non-hierarchical and informal networks of interpersonal relationships. Wall is at pains to show that there’s something counter-intuitive about the place these bourgeois medical professionals came to occupy among trendy young radicals, but also demonstrates how their ideas in this period of counter-cultural engagement broadened out from a critique of the psychiatric hospital to one of society at large, emphasising the numerous oppressive institutions of which society was comprised: ‘anti-psychiatry prescribed an apparently liberatory programme that demanded social, and not only psychiatric, change. This change, they argued, should be based on a fundamental reorganization of the interpersonal relations that bind society together’ (p. 79).
Overall, The British Anti-Psychiatrists is more interested in concrete contexts than abstract concepts, in practices more than theories (or at least in how theory was practically instantiated), and the book is more interesting for that focus. The closing chapters venture into more theoretical territory, however, containing discussions of Laing’s and Cooper’s key concepts and published works. Wall briefly outlines the influence of Sartrean existentialism on Laing and Cooper; the notion that ‘madness’ can be understood as resulting from discrepancies between a person’s individual existential reality and the social reality they inhabit.[ref]Despite Wall’s introductory statements bewailing the absence of women from the British anti-psychiatry movement (p. 16), he nonetheless seems not to have reflected on the implications of using male pronouns to refer to all people in some of these later sections.[/ref] He is careful to distinguish anti-psychiatric theory from caricatures of it, asserting that certain ideas commonly associated with Laing and Cooper, particularly a romantic characterisation of madness as a form of ‘break through’, were articulated by them only rarely. He also usefully contextualises their discussions of psychic ‘liberation’ in relation to contemporaneous discourses (Third World Liberation, on the one hand, and legacies of the Second World War, on the other). The book’s final chapter on theories of the family satisfyingly loops back from the counter-culture to reiterate the book’s core argument that the anti-psychiatrists’ ‘cultural revolutionary rhetoric emerged directly out of mainstream psychiatric discourse’ (p. 143).
I found myself occasionally infuriated by the vagueness of some of the ideas presented in the book, particularly Cooper’s and Laing’s insistence on the ‘necessity of mediating between the micro-social and the macro-political’ (p. 103), but having read their speeches from the Dialectics of Liberation Congress, I’m aware that this says more about my frustrations with Laing’s and Cooper’s ideas than it does about Wall’s glosses of them, though his impeccably even-handed tone is a little unrelenting for my tastes. Reading the book I found myself longing for smatterings of archness, humour, poeticism or polemic.
Again, these objections to the style of The British Anti-Psychiatrists are not really faults it would be fair to level at Wall individually or at this book in particular, but stem from more general frustrations about the limitations implicitly imposed by established conventions of genre and discipline (which are in turn connected to the demands and expectations of academic institutions and publishers), constraints I also feel acutely aware of when I write. Yet these frustrations seem worth thinking through when the historical material being presented is politically radical, formally inventive or critical of existing structures. I might find some of Laing’s and Cooper’s arguments less persuasive than Wall seems to, but the sweeping analyses, rhetorical bombast and literary flourishes that characterise their publications couldn’t be further from the polite timidity of tone so pervasive in current academic history writing. Unlike the main body of the text, The British Anti-Psychiatrists’ preface–in which Wall situates his project in dialogue with political struggles today, relates it to his own political commitments and talks about first encountering Laing’s enigmatic literary work Knots (1970) as a teenager–does significantly deviate from the unofficially mandated scholarly mode, giving a glimpse of themes and concerns which guided the book’s composition but remain latent or muffled in its final form. If, as Wall claims, ‘the issues that drove the radicalism of the 1960s are still very much alive and kicking’ (p. x) and concern for these issues partially motivated him to write the book in the first place, would it not be possible to write this history in such a way that made the contemporary urgency of those issues manifest?
Aside from these slightly churlish or at least tangential reservations about form and style, I would also love to have read more about the two communities that succeeded Kingsley Hall: the Archway Community and Sid’s Place, the former of which is the subject of Peter Robinson’s extraordinarily intimate 1972 documentary Asylum. Wall only mentions these spaces briefly, which makes it difficult to make full sense of his claim that they represented a ‘significant shift’ in their move away from ‘politics and counter-cultural fervour’ (p. 76). I wondered if there might not be ways to think about those experiments laterally, in relation to the flourishing of squatting and communal living experiments in London at that time, which could frame them as differently, rather than simply less, political. Luke Fowler’s Turner Prize-nominated 2011 film All Divided Selves, though focused on Laing, gestures towards such connections through its inclusion of footage relating to squats, activist-run Day Centres and the radical group COPE (Community Organisation for Psychiatric Emergencies[ref]For a brief discussion of COPE (which underwent several name changes) see Nick Crossley, Contesting Psychiatry: Social Movements in Mental Health (London: Routledge, 2006), pp. 172-173.[/ref]). This lingering question links to a reservation I had with Wall’s conclusion that by the early 1970s (when Laing went off to meditate in Ceylon and Cooper sought out militancy in Argentina) anti-psychiatric ideas had lost their significance (p. 177). Although in his introduction he claims that anti-psychiatry ‘paved the way for the birth of the Service User’s Movement’ (p. 8), I would make a stronger case than this book does that the trenchant critiques of mainstream psychiatric diagnoses and treatments articulated by people active in the Women’s Liberation Movement and Gay Liberation Movement and the proliferation of self-organised non-professional therapy groups, not to mention the emergence of the psychiatric survivors movement and radical groups of psychiatrists critical of the medical establishment (like those involved with the journal Red Rat), indicate the influence and extension of anti-psychiatric ideas well into the 1970s. While I think Wall is right to insist on the specificities of the British anti-psychiatrists’ approach, contra much of the existing scholarship on anti-psychiatry which often places them alongside contemporaneous American, French, German or Italian figures and movements (p. 21), there is also something about the extent of the British anti-psychiatrists’ fame and the wide and diffuse percolation of their ideas that undermines this approach, as Wall notes: ‘it is impossible to quantify the influence’ (p. 8).
One of Peter Sedgwick’s main intentions in his anti-anti-psychiatry diatribe Psycho Politics (1982) was to distinguish Laing’s actual theories and practices from caricatured versions of them in popular circulation (his ire was often not primarily directed at Laing himself but at those on the left who mistook Laing for a Marxist), but I would contend that caricatured, over-simplified, wishfully politicised or deliberately partial readings of Laing’s work also form part of the history and legacy of anti-psychiatry in Britain.[ref]I wrote about Peter Sedgwick’s work on Laing at length here: https://www.radicalphilosophy.com/article/lost-minds. [/ref] Sedgwick may not have approved either way, but Laing’s work inspired activists regardless of Laing’s own political evasiveness and increasing spiritualism. The fact that some people may have misread Laing or chosen to discard aspects of his work does little to undermine the things they were inspired to do as a result. The unquantifiable influence of anti-psychiatry that Wall identifies also had a historical reality, which, though its elusiveness by definition poses difficulties for the historian, it nonetheless seems worth attempting to capture.
The archival material of TV interviews and documentaries in Luke Fowler’s All Divided Selves is interspersed with 16mm footage shot by the filmmaker – a glimpse of blue sky streaked with clouds, long grass in sunlight brushing against a wire fence, sheep grazing placidly in a bracken-filled field, murky landscapes seemingly shot at dusk. The connection of these images to the film’s content is oblique, but their presence participates in conjuring an atmosphere that seems appropriate to the psychic states anti-psychiatry explored, just as in W.R., Mysteries of the Organism the orangey kaleidoscopic opening shots of sexual abandon helped convey the Reichian pronouncements that accompany them through a voiceover. Historians are not artists and Laing’s Sonnets (1979) serve as a reminder that venturing beyond one’s discipline to embrace formal experimentation might not always be a particularly good idea, but perhaps historians of radicalism interested in producing radical modes of history writing appropriate to their subjects can still learn something from other genres or media when thinking about how to present radical pasts in ways that might challenge or inspire people in the oppressive present.
Hannah Proctor is a postdoctoral fellow affiliated with the ICI Berlin. She’s in the process of finishing her first monograph Psychologies in Revolution: Alexander Luria, Soviet Subjectivities and Cultural History and is embarking on a second book project on the psychic aftermaths of left-wing political movements. She is a member of the editorial collective of Radical Philosophy.
The new special issue of History of the Human Sciences, edited by Sarah Marks, focuses on psychotherapy in Europe. Articles range across the twentieth century, tracing psychoanalysis in Greece, the transnational shaping of Yugoslav psychotherapy, hypnosis in Hungary, the role of suggestion in Soviet medicine, mindfulness in Britain, and Dialectical Behaviour Therapy in Sweden. In parallel, History of Psychology has published a special issue on psychotherapy in the Americas, edited by Rachael Rosner. Here, Marks and Rosner discuss the authors’ contributions, and what’s at stake when writing about the history of psychotherapy.
Sarah Marks (SM): Perhaps we can start by tracing how the idea for these issues came about. You and I first met at a conference at University College London in 2013 organised by myself and Sonu Shamdasani on the history of psychotherapy – but the idea for these parallel issues came from you: what was the motivation behind the idea, and the particular focus of Europe and the Americas?
Rachael Rosner (RR): Your conference was a watershed moment for me personally. For years I had been trying to figure out where the history of psychotherapy belonged. The history of science? The history of medicine? The history of the social, behavioral and human sciences? Psychotherapy straddles all of them, but from the standpoint of historians asking shared questions, there wasn’t yet a home base. Your conference was an important step in that direction.
Sonu followed in 2016 with a mini think-tank on transcultural histories of psychotherapy, which you and I attended. Felicity Callard (who had been at the 2013 conference) had just assumed co-editorship of History of the Human Sciences and Nadine Weidman had just become editor of History of Psychology. It seemed like Felicity and Nadine would likely encourage good work coming out of this nascent community. So the idea just clicked that you and I might guest-edit coordinated issues as a way of continuing the momentum. The idea was inspired by a strategy National Institutes of Health researchers had used in the late 1960s to nurture psychotherapy researchers. They published the proceedings of a workshop on psychotherapy research methods in two journals simultaneously, American Psychologist and Archives of General Psychiatry. I thought we might try something similar. Thankfully, you, Nadine and Felicity were enthusiastic. Your expertise was in European psychotherapy and mine in American, so we would focus on those regions. But this was just a starting point. Excellent work is being done on the history of psychotherapy in Asia and India and, hopefully, soon in Africa as well.
SM: Both of these issues try to put the question of place at the centre of the debate – both in terms of local specificities, and the transfer of knowledge and practice across borders and cultures. For Europe, it’s curious how much long-term continuity there was despite the geopolitical divisions of the Cold War – practices including hypnosis, suggestion and group psychoanalysis which emerged in Western and Central Europe in the earlier half of the century remained in play in different parts of Eastern Europe well into the 1960s and 1970s. And we also see the crucial importance of transatlantic connections in both directions, especially from America to Europe in recent years. How did transnational and transcultural stories play out within the Americas?
RR: What is astonishing is how many of the innovations in the Americas were local improvisations on European trends. It’s not surprising that this transfer of knowledge happened within psychoanalysis, but our special issue illustrates that it was happening in other domains too. In Argentina, as Alejandro Dagfal shows, French ideas consistently spurred psychotherapeutic innovations. The pieces by Jennifer Lambe and by Cristiana Facchinetti and Alexander Jabert also show the French influence, in this case among followers of French spiritualist Allan Kardec (Kardecian Spiritists). Erika Dyck’s and Patrick Farrell’s paper on LSD therapy tells the story of a disaffected British psychiatrist who found support in the isolation of the Canadian prairies. In America the trans-Atlantic trends were more heterogeneous and reciprocal. British psychotherapists played a huge role in catapulting American Aaron Beck to stardom, just as Beck’s CBT helped British clinicians gain advantage with the NHS. The only article in our special issue that doesn’t follow the transcultural theme is Deborah Weinstein’s account of how family therapists in America embraced the removal of homosexuality from the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) and came to normalize both homosexuality and gay families.
SM: We know that many forms of psychotherapeutic practice have long been entangled with religious or spiritual practices, right back to the Quaker Tuke family at the York Retreat in the 1790s – and Matthew Drage shows in HHS that Buddhism has remained a significant driving force in the transmission of mindfulness practice in Britain, even as it has become bound up with cognitive science and evidence-based outcomes studies in recent years. It seems that religion played an even more central role in psychotherapy – albeit in slightly different ways – in North and South America in the 20th century. Could you tell us more about what your authors found in relation to this?
RR: Yes, you’re right. Psychotherapies in the Americas tapped deeply into spiritual trends right from the beginning. David Schmit’s biography of Warren Felt Evans, founder of the Mind Cure movement, takes the story of religion and psychotherapy in America farther back even than Eric Caplan’s work. Americans continued to embrace the religious aspect, even if they didn’t always recognize it as such. Carl Rogers was a minister before he became a psychologist, for instance, and client-centered therapy was as much an expression of religious as psychological imperatives; immigrant psychoanalysts who made such a big mark on American psychotherapy, like Erich Fromm, Erik Erikson and Viktor Frankl, were also fully engaged with religious questions. When D. T. Suzuki brought his Buddhist practices to America mid-century, Erich Fromm and behavior therapist Albert J. (Mickey) Stunkard were hugely enthusiastic. These are just some examples of how the religious impulse remained strong throughout the history of American psychotherapy. We might imagine that Catholicism would come into play, especially in Central and South American psychotherapies, and there is scholarship to suggest as much. But the big surprise in our special issue was Kardecian Spiritism in Cuba and Brazil. Kardecian Spiritism had no presence at all in North America. So this is an exciting line of research.
SM: You yourself have done considerable work on the history of Cognitive Behavioural Therapy (CBT) in America, especially in relation to the work of Aaron Beck. Could you tell us a bit more about how you have started to write this in to the broader history of psychotherapy?
RR: Beck’s Cognitive Therapy (CT) can be difficult to grasp from the standpoint of the history of the human sciences because there is little in it that speaks to the subjective or the emotional—his ideas don’t intersect with art, literature, philosophy, the linguistic turn, etc. This lack of intersection, however, is also what makes Beck’s CT interesting historically. CT flourished at the turn of the 21st century in the U.S. and the U.K. precisely because of tensions between objectivity and subjectivity. Most psychoanalysts by then were plunging even deeper into the subjective, under the influence of Lacan, Foucault, and others. But the vast majority of non-analytic therapists—largely psychologists and social workers—were making a mad dash in the opposite direction, to objectivity. The rise of the Randomised Controlled Trial meant that therapists seeking federal research funding or reimbursement for treatment had no choice but to embrace objectivity. Beck was in the right place at the right time. He had been plying CT since the early 1960s, with only moderate success. But now, suddenly, by 1985 or so, CT and CBT were the gold standard. They met the clinical, economic and research needs of a large number of therapists. Interestingly, the supremacy of the objective didn’t mean that Beck’s followers abandoned the subjective. They have rather been engaged in a subtle dance between objectivity and subjectivity that is fascinating to study historically.
SM: I’m aware that you’re writing a biography at the moment – could you say a bit about the challenges and rewards of biography as a genre?
RR: Historians of science often malign biography as soft scholarship. Mike Sokal has done a good job challenging this assumption, but there’s more work to do. One of the major challenges of writing biography is convincing historians that the argument is not parochial or hagiographical. That’s a tall order. I believe that biography is uniquely well-suited to the history of psychotherapy. Psychotherapy actually defies the categories historians use for bracketing our subject matter. I do not believe that psychotherapy is in fact a sub-genre of medicine, or science, the behavioral sciences, religion, psychology, or anything else. No profession has managed to corner the market on its practice. Psychotherapy is, rather, a historical chameleon. Maybe “shape shifter” is a more accurate description. Psychotherapy quickly assumes the characteristics, colors, virtues and temperament of the person practicing it—whether that person is a doctor, a minister, a rabbi, a mystic, a housewife, a psychologist or a brush salesman. Each iteration is unique to the practitioner. Biography taps into that idiographic quality. We can write social, cultural, intellectual, and other kinds of histories of psychotherapy, and they are all worthwhile. But biographies get to the core of psychotherapy because they get to the core of the person who is practicing it. Several years ago I attended the annual conference of BIO (Biographers International Organization), and the keynote speaker remarked that what she loves about biography is that it is an experience of our shared humanity. Biographers are trying to make emotional contact, to have a shared experience, with their subjects. I love that.
SM: Like a number of my colleagues, you write from the perspective of the humanities and historical research, but you also have a background in the clinical world, and you believe strongly in the importance of writing for an audience of practitioners. Could you tell us a bit more about why this is important, and what is at stake when writing histories for this readership?
RR: I became a historian, in part, in order to agitate clinicians. The back-story is that my father was a clinical psychologist who had trained at the University of Chicago in the 1960s with people like Roy Grinker, Jr. and Bruno Bettelheim. Carl Rogers had just left Chicago, but his influence there was still very strong. Our home library included books by Freud, Jung, Fromm, Bettelheim, Rogers and others, all of whom I read avidly. I had intended to become a clinical psychologist like my father, but it became clear during my training that I was not cut out for clinical practice. Clinicians were making all kinds of assumptions about human nature I wasn’t prepared to make. They were being trained to solve problems, not to think critically, but thinking critically seemed to be where I lived. I’d been in search of a mechanism through which to bring that kind of critical inquiry to the community of clinicians about whom I cared so much. The History and Theory Area in the Department of Psychology at York University (Toronto), where I did my Ph.D. under Dr. Raymond Fancher, offered that kind of mechanism. History as practiced there was all about engaging psychologists in difficult conversations about what they do and why.
Psychotherapists fill a unique niche in western society. They are tasked with the care of emotional lives when those lives have become rocky and troubled. Neither the government nor medicine nor the church is particularly good at meeting this need, so this is a crucial function. Every therapist I have ever met, including my father, believed that theirs is a noble calling. They rarely, if ever, question the intrinsic and self-evident goodness of what they do. But to my mind it’s crucial that they do just that—or run the risk of doing harm. Sadly, I know too many stories where therapists’ over-confidence made matters worse for a patient, not better. This is a situation ripe for historical agitation, for inviting therapists to ask hard questions and, in the process, to take a more circumspect and thoughtful stance in their work.
Sarah Marks is a postdoctoral researcher at Birkbeck, University of London and Reviews Editor for History of the Human Sciences. She writes on the psy-disciplines during the Cold War, and currently works with the Wellcome-funded Hidden Persuaders project.
Claire L. Shaw. Deaf in the USSR: Marginality, Community and Soviet Identity, 1917-1991; Ithaca: Cornell University Press; 310 pages; hardback $49.95; ISBN: 1501713663
In a picture taken during the 1933 May Day Parade in Moscow, we witness a procession of young athletes with firm bodies walking towards the Red Square. Dressed in a uniform of sporty blouses and practical shorts, the athletes are on their way to the Lenin Mausoleum, where they will salute the USSR’s top leaders. It’s a display seen a hundred times over – one that historians in training study in a first year course, or the general public has seen in any given documentary on life in the USSR. It would be a wholly unremarkable picture if it were not for one detail. The first column of male and female athletes carries a banner which reads ‘glukhonemye’ or ‘deaf-mutes’. ‘With their cheerful appearance, the deaf-mutes testified to their readiness to fight alongside the working class of the USSR for the general line of the party and its leader, comrade Stalin’, wrote Zhizn glukhonemykh, then the magazine for deaf-mutes, about the event. Deaf people seemed intent on participating in Soviet life. They dedicated themselves to overcoming the obstacles to their inclusion into the Soviet project in general and the industrial workforce in particular. For it was the Soviet project, many leading figures in the burgeoning deaf community felt, that gave them the opportunities to emancipate themselves. No longer were they the dependent, disabled people they had been under the tsarist regime – now they could become valuable members of the working class.
However much the deaf athletes, or the editors of Zhizn glukhonemykh, subscribed to a narrative of radical inclusion, or framed perfecting the deaf masses as a Soviet aim pur sang, they were also confronted with exclusion. In everyday life not everyone was equally capable of realizing the utopian rhetoric of overcoming deafness. The deaf people in the May Day Parade picture marched alongside their hearing comrades but also distinguished themselves by carrying a banner proclaiming their deaf-muteness. This was illustrative of the separate institutions that helped deaf Soviet citizens develop a distinct communal identity, but also at times kept them at a substantial distance from the hearing world.
It is precisely these kinds of tensions between the deaf identity project and the Soviet identity project, between inclusion and exclusion, sameness and difference, which lie at the heart of Claire Shaw’s Deaf in the USSR: Marginality, Community and Soviet Identity. Shaw writes a history of deafness in the USSR from the February Revolution of 1917 to the collapse of the USSR in 1991, while situating deafness in the broader programme of Soviet selfhood. She examines the different Soviet conceptions of deafness throughout the period as influenced by factors ranging from self-advocacy, science, defectology, schooling and technology; to institutionalization, ideology and professionalization. To this end, Shaw draws on deaf journalism, films and literature produced by deaf and hearing people alike, as well as personal memoirs. The main body of her source material hails from the institutional archive of VOG, an acronym that covered the different names that the Russian Society of the Deaf bore throughout the period under scrutiny. According to Shaw, VOG offers a lens through which we can gain an understanding of what it meant to be deaf that is both broad and in-depth. The society was involved with activities concerning housing, education, sign language, literacy, labour placement, cultural work, and social services, and was, as Shaw notes early on, a locus for ‘both Soviet governance and grassroots activism and community building.’ By the end of the 1970s it was estimated that more than 98% of Russian deaf people were members of VOG, although the core of its operations was directed from Moscow and to a lesser extent St. Petersburg. Inevitably, and with some exceptions, much of Shaw’s focus is on these cities.
The first chapter traces the foundation of VOG in 1926 after a period of reconceptualising deafness in reaction to the tsarist period and in exchange with the new Soviet ideas. Deaf people drew upon models developed by women and ethnic minorities to turn their differences into a path towards Sovietness while simultaneously insisting that ‘the affairs of the deaf-mutes are their own.’ Chapter two brings us to the 1930s, when VOG became an organization of mass politics and deaf people tried to write themselves into the Stalinist transformative narrative. At the same time, fears about those deaf people who could not live up to the ideal spread within the deaf organization. Chapter three examines the break in deaf history that was the Great Patriotic War. Disabled war veterans raised the overall status of people with disabilities and the postwar state infrastructure was rebuilt with an emphasis on welfare. Both trends rendered VOG a stronger and more centrally controlled organization. They also heightened the existing tensions in the deaf community between striving for autonomy and being ‘passive’ recipients of expertise and care services. Chapter four zooms in on the Golden Age of deafness during the 1950s and 1960s, in which deaf cultural institutions and educational efforts flourished. Deaf people came close to a functional hybrid deaf/Soviet identity that was also advertised to the world at large. Chapter five takes a detour to follow up on a nationwide debate about deaf criminality and lingering fears concerning deafness, femaleness, marginality, and otherness, while chapter six tracks the downfall of the deaf cultural community in the Brezhnev era: deaf models of selfhood gave way to curative and technological visions. Finally, an epilogue outlines with broad strokes the evolution deafness underwent after the collapse of the Soviet Union.
Deaf in the USSR is often at its most compelling when it grapples with the category of deafness itself. Many of our conceptions of what disability and deafness actually are have roots in 20th century disability and Deaf activism, and scholarship from the UK and the US. These conceptions bear specific political and historical connotations that are not self-evidently transferable to the context of Soviet Russia. Proponents of global disability studies have been rewriting this Anglo-American conceptual framework of disability to suit local contexts for quite some time now, but what place the former ‘Soviet world’ is to be assigned within global disability studies is still quite unclear. Few authors have tried their hand at the endeavour (See, for instance, the work of Michael Rembis & Natalia Pamuła [in Polish]).
Shaw employs her national case study to elaborate on specific Soviet understandings of deafness. A social interpretation of deafness, for example, was prevalent in the USSR decades before disability activists in the UK and the US formulated the social model of disability. Moreover, Shaw does so without falling into the trap of completely disconnecting the history of the USSR from international developments. After all, the social model of disability, as developed in the UK in the 1970s, was inspired by Marxism, while early Soviet conceptions of deafness in turn were influenced by 19th century conceptions of deafness hailing from German and French deaf education.
‘Could a defective body ever embody the Soviet ideal?’ is the question that returns throughout Deaf in the USSR. It is used by Shaw as a window onto the moulding of the Soviet self and, more importantly, onto the limitations of this moulding. While Shaw sporadically touches upon the subject of how deafness was related to other defective bodies, the topic is never fully addressed. Shaw emphasizes how work and employment were essential to overcoming deafness and approaching the Soviet ideal. In this regard deafness distinguishes itself from other disabilities, as it does not make access to physical labour quite as difficult. A limited discussion of the relation between ‘Soviet’ deafness and other forms of ‘Soviet’ disability would not have been uncalled for, especially as Shaw seems to take issue with the dire picture of disability in the USSR painted by researchers such as Michael Rasell and Elena Iarskaia-Smirnova.
Shaw is clearly interested in how studying deafness in the USSR can shed light on more than the history of deafness itself. At several points throughout the book she demonstrates that deafness can be useful for reevaluating broader historiographical debates. In the case of the 1933 May Day Parade photograph, she asserts that such forms of deaf inclusion shed new light on this period. The 1930s have often been depicted as a decade in which earlier, more plural socialist visions of equality and emancipation were completely buried by the dictatorial regime of Stalin. Shaw’s broader reflections could have been worked through in more depth, but they show an important willingness to leave behind the type of disability history that follows an ‘add disability and stir’ recipe. It is in these attempts that the reader sometimes catches a glimpse of the full potential of disability as a category of historical analysis: valuable both in its own right, and in its ability to pinpoint questions about a society at large.
Anaïs Van Ertvelde is a PhD student at the Leiden University Institute for History on the ERC funded project Rethinking Disability: The Global Impact of the International Year of Disabled Persons (1981) in Historical Perspective. Her current research focuses on how government experts, disability movements and people with disabilities themselves conceive of, and deal with, disability in the wake of the UN international year. She uses a cross-‘iron curtain’ perspective that involves three local case studies and their global entanglements: Belgium, Poland, and Canada.
In his recent books, Plastic Reason: An Anthropology of Brain Science in Embryogenetic Terms (University of California Press, 2016) and After Ethnos (Duke University Press, 2018), the anthropologist Tobias Rees explores the curiosity required to escape established ways of knowing, and to open up what he calls “new spaces for thinking + doing.” Rees argues that acknowledging – and even embracing – the ignorance and uncertainty that underpin all forms of knowledge production is a crucial methodological part of that process of escape. In his account, doubt and instability are bound up with a radical openness that is necessary for breaking apart existing givens and allowing the new/different to emerge – in the natural but also in the human sciences. But are there limits to such an embrace of epistemic uncertainty? How does this particular uncertainty interact with other forms of uncertainty, including existential uncertainties that we experience as vulnerable human beings? How does irreducible epistemic uncertainty relate to ethical claims about how to live a good life? And what is the relation between a radical political practice of freedom and art? After a workshop on his work at the Zurich Center for the History of Knowledge in 2017, Vanessa Rampton, Branco Weiss Fellow at the Chair of Practical Philosophy, ETH Zurich, explored these themes with Rees.
1. The Human
Vanessa Rampton (VR): Tobias, your recent work aims to destabilize and question common understandings of the human. I wonder how you would place your work in relation to other engagements with ‘selfhood’ within the history of philosophy, and the history of the human sciences more widely. Because there are so many ways of thinking of the self – for example the empirical, bodily self, or the rational self, or the self as relational, a social construct – that you could presumably draw on. But I also know that you want to move beyond previous attempts to capture the nature and meaning of ‘the human self’. What are the stakes of this destabilization of the human? What do you hope to achieve with it?
Tobias Rees (TR): In a way, it isn’t me who destabilizes the human. It is events in the world. As far as I can tell, we find ourselves living in a world that has outgrown the human, that fails it. If I am interested in the historicity of the figure of the human –– a figure that has been institutionalized in the human sciences –– then it is insofar as I am interested in rendering visible the stakes of this failure. And in exploring possibilities of being human after the human. Even of a human science after the human.
VR: When you say the human, what do you mean?
TR: I mean at least three different things. First, I mean a concept. We moderns usually take the human for granted. We take it for granted, that is, that there is something like the human. That there is something that we –– we humans –– all share. Something that is independent from where we are born. Or when. Independent of whether we are rich or poor, old or young, woman or man. Independent of the color of our skin. Something that constitutes our humanity. In short, something that is truly universal: the human. However, such a universal of the human is of rather recent origin. This is to say, someone had to have the idea to begin articulating an abstract concept of the human, universal in its validity and thus independent of time and place. And it turns out that this wasn’t something people wondered about or aspired to formulate before the 17th century.
Second, I mean a whole ontology – that the invention of the human between the 17th and the 19th century amounted to the invention of a whole understanding of how the real is organized. The easiest way to make this more concrete is to point out that almost all authors of the human, from Descartes to Kant, stabilized this new figure by way of two differentiations. On the one hand, humans were said to be more than mere nature; on the other hand, it was claimed that humans are qualitatively different from mere machines. Here the human, thinking thing in a world of mere things, subject in a world of objects, endowed with reason, and there the vast and multitudinous field of nature and machines, reducible –– in sharp contrast to humans –– to math and mechanics. The whole vocabulary we have available to describe ourselves as human silently implies that the truly human opens up beyond the merely natural. And whenever we use the term ‘human,’ we ultimately rely on and reproduce this ontology.
Third, I mean a whole infrastructure. The easiest way to explain what I mean by this is to gesture to the university: the distinction between humans on the one hand and nature and machines on the other quite simply mirrors the concept of the human as it emerged between the 17th and 19th century, insofar as that concept implies two different kinds of realities. Now, it may sound odd, even provocative, but I think there can be little doubt that today the two differentiations that stabilized the human –– more than mere nature, other than mere machines –– fail. From research in artificial intelligence to research in animal intelligence, en passant microbiome research or climate change. One consequence of these failures is that the vocabulary we have available to think of ourselves as human fails us. And I am curious about the effects of these failures: what are their effects on what it means to be human? What are their effects on the human sciences –– insofar as those sciences are contingent on the idea that there is a separate, set apart human reality and insofar as their explanations, their sense-making concepts are somewhat contingent on the idea of a universal figure of the human, that is, on the ‘the’ in ‘the human’? Can the human sciences, given that they are the institutionalized version of the figure of the human, even be the venue through which we can understand the failures of the human? Let me add that I am much less interested in answering these questions than in producing them: making visible the uncertainty of the human is one way of explaining what I think of as the philosophical stakes of the present. And I think these stakes are huge: for each one of us qua human, for the humanities and human sciences, for the universities. The department I am building at the Berggruen Institute in Los Angeles revolves around just these questions.
VR: What led you to doubt the concept of the human and the human sciences?
TR: My first book, Plastic Reason, was concerned with a rather sweeping event that occurred around the late 1990s: the chance discovery that some basic embryonic processes continue in adult brains. Let me put this discovery in perspective: it had been known since the 1880s that humans are born with a definite number of nerve cells, and it was common wisdom since the 1890s that the connections between neurons are fully developed by age twenty or so. The big question everyone was asking at the beginning of the twentieth century was: how does a fixed and immutable brain allow for memory, for learning, for behavioral changes? And the answer that eventually emerged was the changing intensity of synaptic communication. Consequently, most of twentieth-century neuroscience was focused on understanding the molecular basis of how synapses communicate with one another –– first in electrophysiological and then in genetic terms.
When adult cerebral plasticity was discovered in the late 1990s the focus on the synapse –– which had basically organized scientific attention for a century –– was suddenly called into question. The discovery that new neurons continue to be born in the adult human brain, that these new neurons migrate and differentiate, that axons continue to sprout, that dendritic spines continuously appear and disappear not only suggested that the brain was perhaps not the fixed and immutable machine previously imagined; it also suggested that synaptic communication was hardly the only dynamic element of the brain and hence not the only possible way to understand how we form memory or learn. What is more, it suggested that chemistry was not the only language for understanding the brain.
The effect was enormous. Within a rather short period of time, less than ten years, the brain ceased to be the neurochemical machine it had been for most of the twentieth century, but without – and this I found so intriguing – without immediately becoming something else. The beauty of the situation was that no one knew yet how to think the brain. It was a wild, an untamed, an in-between state, a no longer not-yet, a moment of incredibly intense, unruly openness that no one could tame. The whole goal of my research was to capture something of this irreducible openness and its intensity.
Anyway, when trying to capture something of the radical openness in which my fieldwork was unfolding, I began to wonder about my own field of research: if the taken-for-granted key concepts of brain science, that is, the concepts that constituted and stabilized the brain as an object, could become historical in a rather short period of time, then what about the terms and concepts of the human sciences? Which terms might constitute the human in such a situation? These questions led me to the obsession of trying to write brief, historicizing accounts of the key terms of the human sciences, first and foremost the human itself: when did the time- and place-independent concept of the human that we in the human sciences operate with emerge? And this then led me to the terms that stabilize the human: culture, society, politics, civilization, history, etc. When were these concepts invented –– concepts that silently transport definitions of who and what we are and of how the real is organized? When were they first used to describe and define humans, to set them apart as something in themselves? Where? Who articulated them? What concepts –– or ways of thinking –– existed before they emerged? And are there instances in the here and now that escape the human?
Somewhere along the way, while doing fieldwork at the Gates Foundation actually, I recognized that the vocabulary the human sciences operate with didn’t really exist before the time around 1800, plus or minus a few decades, and that their sense-making, explanatory quality relies on a figure of the human –– on an understanding of the real –– that has become untenable. I began to think that the human, just like the brain, had begun to outgrow the histories that had framed it. You said earlier, Vanessa, that I am interested in destabilizing common understandings of the human. Another way of describing my work, one I would perhaps prefer, would be to say that through the chance combination of fieldwork and historical research I discovered the instability –– and the insufficiency –– of the concept of the human we moderns take for granted and rely on. I want to make this insufficiency visible and available. The human is perhaps more uncertain than it has ever been.
VR: Listening to you, I cannot help but think that there are strong parallels between your work and the history of concepts as formulated by, say, Reinhart Koselleck or Raymond Williams. I can nevertheless sense that there is a difference –– and I wonder how you would articulate this difference?
TR: First, I am not a historian of concepts. I am primarily a fieldworker and hence operate in the here and now. What arouses my curiosity is when, in the course of my field research, a ‘given,’ something we simply take for granted, is suddenly transformed into a question: an instance in which something that was obvious becomes insufficient, in which the world or some part thereof escapes it and thereby renders it visible as what it is, a mere concept. From the perspective of this insufficiency I then turn to its historicity: I show where this concept came from, when it was articulated, why, under what circumstances, and also how it never stood still and constantly mutated. But in my work this history of a concept, if one wants to call it that, is not an end in itself. It is a tool to make visible some openness in the present that my fieldwork has alerted me to. In other words, the historicity is specific: the specific product of an event in the here and now, a specificity produced by way of fieldwork.
Second, my interest in the historicity –– rather than the history –– of concepts runs somewhat diagonal to the presuppositions on which the history of concepts has been built. Koselleck, for example, was concerned with meaning or semantics and with society as the context in which changes in meaning occur. That is to say, Koselleck –– and as much is true for Williams –– operated entirely within the formation of the human. They both took it for granted that there is a distinctive human reality that is ultimately constituted by the meaning humans produce and that unfolds in society. Arguably, the human marked the condition of possibility of their work. It is interesting to note that neither Koselleck nor Williams, nor even Quentin Skinner, ever sought to write the history of the condition of possibility of their work: they never historicized the figure of the human on which they relied. On the contrary, they simply took it for granted as the breakthrough to the truth. If I am interested in concepts and their historicity, then it is only because I am interested in the historicity of the concept of the human as a condition of possibility. How to invent the possibility of a human science beyond this condition of possibility is a question I find as intriguing as it is urgent: how to break with the ontology implied by the human? How to depart from the infrastructure of the human, while not giving up a curiosity about things human, whatever human then actually means?
2. Epistemic Uncertainty
VR: I am wondering if all concepts can outgrow their histories. Isn’t this more difficult in the case of, say, ‘the body’ or ‘language,’ than for our more doctrinal concepts – liberalism and socialism, for example?
TR: Your question implies, I think, a shift in register. Up until now we talked about the human and its concepts and institutions but now we are moving to a more general epistemic question: are all concepts subject to their historicity? And if so, what does this imply? Seeing as you mentioned the body, let’s take the idea –– so obvious to us today –– that we are bodies, that it is through our warm, sentient, haptic bodies that we are at home in the world. Over the last fifty years or so, really since the 1970s, a large social science literature has emerged around the body and around how we embody certain practices and so on. Much of this literature, of course, relies on Mauss on the one hand and on Merleau-Ponty on the other. And if one works through the anthropology or history of the body, one notes that most authors take the body simply as a given. It is as if they were saying, ‘Of course humans are, were, and always will be bodies.’
But were humans always bodies? At the very least one could ask when, historically speaking, did the concept of the body first emerge? When did humans first come up with a concept of the body and thus experience themselves as bodies? What work was necessary –– from physiology to philosophy –– for this emergence? To ask this question requires the readiness to expose oneself to the possibility that the category of the body and the analytical vocabulary that is contingent on this category is not obvious. There might have been times before the body –– and there might be times after it. For example, if one reads books about ancient Greece, say Bruno Snell’s The Discovery of the Mind, one learns that archaic Greek didn’t have a word for what we call the body. The Greeks had a word for torso. They had two words for skin, the skin that protects and the skin that is injured. They had terms for limbs. But the body, understood as a thing in itself, as having a logic of its own, as an integrated unit, didn’t exist.
One version of taking up Snell’s observation is to say: the Greeks maybe did not have a word for body –– but of course they were bodies and therefore the social or cultural study of the body is valid even for archaic Greece. What I find problematic about such a position is that it implies that the Greeks were ignorant and that our concepts –– the body –– mark a breakthrough to the truth: we have universalized the body, even though it is a highly contingent category. Perhaps a better alternative is to systematically study how the ‘realism of the body’ on which the social and cultural study of the body is contingent became possible. A history of this possibility would have to point out that the concept of a universal body –– understood as an integrated system or organism that has a dynamic and logic of its own and that is the same all over the world –– is of rather recent origin. It doesn’t really exist before the 19th century. In any case, there are no accounts of the body –– or the experience of the body –– before that time, and philosophies of the body seem to be almost exclusively a thing of the first half, plus or minus, of the twentieth century. Sure, anatomy is much older, and there were corpses, but a corpse is not a body. The alternative to the realism of the body that I briefly sketched here would imply that one can no longer naively –– by which I mean in an unexamined way –– subscribe to the body as a given. The body then has become uncertain. I am interested in fostering precisely this kind of epistemic uncertainty. To me, epistemic uncertainty is an escape from truth and thus a matter of freedom.
VR: Perhaps a kind of taken-for-granted approach to the body is so bound up with what you call ‘the human’ that questioning it is necessary for your work.
TR: Indeed, and my work has led me to assume that what is true for the human or the body is true for all concepts. Every concept we have is time- and place-specific and thus irreducible, unstable and uncertain. But to return to the human: we live in a moment in time that produces the uncertainty of the human all by itself. I render this uncertainty visible by evoking the historicity of the human, and this in turn leads me to wonder if one could say that the human was a kind of intermezzo – a transient figure that was stable for a good 350 years but that can no longer be maintained.
VR: I wonder what you would reply if I were to say: but isn’t that obvious? Concepts are historically contingent, so what else is new?
TR: In my experience, most people grant contingency within a broader framework that they silently exempt from contingency itself. For example, if contingency means that different societies have different kinds of concepts, then society is the framework within which contingency is allowed: but society itself is exempt from contingency. One could make similar arguments with respect to culture. If we say that things are culturally specific, that some cultures have meanings that others don’t have, or entirely different ways of ordering the world, then we exempt culture from contingency.
All of this is to say, sure, you are right, social and cultural contingency are obviously not new. But what if you were to venture to be a bit more radical? What if you did not exempt society and culture from contingency? Talk to a social scientist about society being contingent, and they become uncomfortable. Or they reply that maybe the concept of society didn’t exist but that people were of course always social beings, living in social relations. This is a half movement in thought. It assumes that the word has merely captured the real as it is –– but misses that the configuration of the real they refer to has been contingent on the epistemic configuration on which the concept of society has depended. We could say that the one thing a social scientist cannot afford is the contingency of the category of the social.
What I am interested in is the contingency of the very categories that make knowledge production possible. To some degree, I am conducting fieldwork to discover such contingencies, to generate an irreducible uncertainty: as an end in itself and also as a tool to bring into view in which precise sense the present is outgrowing –– escaping –– our understanding and experience of the world.
3. Knowledge Production Under Conditions of Uncertainty/Ignorance
VR: I imagine there is a kind of parallel here with how natural scientists would react to the fact that their concepts no longer fit, for example by developing a more up-to-date way of thinking the brain to replace the synaptic model. But it strikes me that, if done properly, this task is much more radical for practitioners of the human sciences. If all of our concepts – including such fundamental ones as the human and the body – are historically contingent, then we have to do away with universal categories. Our task is to fundamentally destabilize ourselves as historical subjects, as academics, as knowers. And I guess a key question is how this destabilization, this rendering visible of uncertainties, can nevertheless be linked to the kinds of knowledge production we have come to expect from the human sciences.
TR: The question, perhaps, is what one means by knowledge production in the human sciences. I think that the human sciences have been primarily practiced as decoding sciences. That is to say, researchers in the human sciences usually don’t ask ‘What is the human?’ No, they already know what the human is: a social and cultural being, endowed with language. Equipped with this knowledge they then make visible all kinds of things in terms of society and culture. In addition, perhaps, one could argue that the human sciences have established themselves as guardians of the human – that is, they have been practiced in defensive terms. For example, whenever an engineer argues that machines can think and that humans are just another kind of machine, the human sciences react by defending the human against the machine. The most famous example here would maybe be Hubert Dreyfus against Seymour Papert. A similar argument though could be made with respect to genetics and genetic reductionism.
Now, if one destabilizes the figure of the human, neither of these two forms of knowledge production can be maintained. I think that this is why many in the human sciences experience the destabilization of the human as an outrageous provocation. If one gets over this provocation one is left with two questions. The first is: what modes of knowledge production become possible through this destabilization of the human? Especially when this destabilization means that the entire ontological setup of the human sciences fails. Can the human sciences entertain, let alone address, this question, given that they are the material infrastructure of the figure of the human that fails? Or does one need new venues of research? I often think here of the relation between modern art and the nineteenth-century academy.
VR: That reminds me of Foucault.
TR: Foucault was an anti-humanist –– but he remained uniquely concerned with human reality. I think the stakes here – I say this as an admirer of Foucault – are more radical. So my second question is: what happens to the human? I am acutely interested in maintaining the possibility of the universality of the human after the human. Letting go of the idea seems disastrous. So how can one think things human without relying on a substantive or positive concept of what the human is? My tentative answer is research in the form of exposure: the task is to expose the normative concept of the human in the present, by way of fieldwork, to identify instances that escape the human and break open new spaces of possibility, each time different ones, ones that presumably don’t add up. The goal of this kind of research-as-exposure is not to arrive at some other, better conception of the human, but to render uncertain established ways of thinking the human or of being human and to thereby render the human visible and available as a question.
VR: So if you don’t want to talk about what the human is, I’m wondering if the appropriate question would be about what the human is not.
TR: I think such an inversion doesn’t get us very far. I would rather say that I am interested in operating along two lines. One line revolves around the effort to produce ignorance. That is, I conduct research not so much in order to produce knowledge but the uncertainty of knowledge. The other line wonders how one could conduct research under conditions of irreducible ignorance or uncertainty, or how to begin one’s research without relying on universals. A comparative history of this or that always presupposes something stable. As does any social or cultural study. In both cases I am interested in a productive or restless uncertainty –– or second-order ignorance –– not only with respect to the human. In a way, what I am after is the reconstitution of uncertainty, of not knowing, by way of a concept of research that maintains throughout the possibility of truth.
If you were to press me to offer a systematic answer I would say, as a philosophically inclined anthropologist, that I conduct fieldwork/research because I am simultaneously interested in where our concepts of the human come from, in whether there are instances in the here and now that escape these concepts, and in rendering available the instability –– the restlessness –– of the category or the categories of the human, both as an end in itself and as a means to bring the specificity of the present into view. It strikes me as particularly important to note that what I am after is not post-humanism. As far as I can tell most post-humanists hold on to the 18th-century ontology produced by the human but then delete the human from this ontology. What interests me is to break with the whole ontology. Not once and for all but again and again. Nor am I interested in the correction of some error à la Bruno Latour – as if behind the human we can discover some essential truth –– call it Actor Network Theory –– that the moderns have forgotten and that the non-moderns have preserved and that we now all can re-instantiate to save the world.
I am not so much interested in a replacement approach –– what comes after the human? –– as in rendering visible a multiplicity of failures, each one of which opens up onto new spaces of possibility. After all, how Artificial Intelligence derails the human is rather different from how microbiome research derails it, or climate change. These derailments don’t add up to something coherent. As I see it, it is precisely this not-adding-up –– this uncertainty –– that makes freedom possible. Perhaps this form of research is closer to contemporary art than to social science research; that could well be. Anyhow, the department I try to build at the Berggruen Institute revolves around the production of precisely such instances of failure and freedom.
Tobias Rees is Reid Hoffman Professor of Humanities at the New School for Social Research in New York, Director of the Transformations of the Human Program at the Berggruen Institute in Los Angeles, and Fellow of the Canadian Institute for Advanced Research. His new book, After Ethnos, is published by Duke in October 2018.
Vanessa Rampton is Branco Weiss Fellow at the Chair of Philosophy with Particular Emphasis on Practical Philosophy, ETH Zurich, and at the Institute for Health and Social Policy, McGill University. Her current research is on ideas of progress in contemporary medicine.
In August 2016, the University of Chicago sent a letter to new students that received a great deal of academic and media interest. In the letter John “Jay” Ellison, Dean of Students, stated that the university was committed to “intellectual freedom”, indicating that other concepts referred to – “safe spaces” and “trigger warnings” among them – were antithetical to this notion. The connection between these concepts, as well as the letter itself, was much debated at the time, and the issues raised appear to be the starting point for many of the essays in this book. Are students’ minds really being coddled, or are there valuable things to be learnt from the use of trigger warnings and the debate surrounding them?
Trigger Warnings: History, Theory, Context does not take a clear-cut and dogmatic approach to the topic (as some others have done, most prominently those who object outright to the idea of trigger warnings like Greg Lukianoff and Jonathan Haidt). Most authors in this volume adopt a carefully critical view of trigger warnings that also seeks to understand and explore their implications and uses. The book focuses on higher education in North America; the location is only to be expected, perhaps, as this is where the bulk of debate has taken place. A few essays do look beyond higher education to the broader context from which trigger warnings emerged, including a rather Whiggish history of trigger warnings based on retrospective diagnosis of Post-Traumatic Stress Disorder (chapter 1) and a more incisive look at the use of trigger warnings in the treatment of eating disorders since the 1970s (chapter 3).
The volume claims to be interdisciplinary, although contributions largely stem from those working in the arts, humanities and social sciences. This is understandable: these fields have probably been the most affected by calls for trigger warnings, as well as being concerned with the practice of critical thinking and debate (which, according to their detractors, trigger warnings stifle). The inclusion of a number of authors with a background in library and information studies raises an interesting angle for historians about the way collections are labelled and configured. As Emily Knox indicates in the introduction, the American Library Association has long been opposed to the rating of texts, a practice which holds political connotations and has tended to be fairly arbitrary, usually based on the attitudes of a small group of people. Despite voicing this opposition, however, Knox goes on to raise the central tenet that runs throughout this book: while trigger warnings can and may be used as a form of censorship, teachers and lecturers also have an obligation to consider the welfare of their students.
These two potentially conflicting ideas are reflected in the division of the book into two parts. The first half addresses the context and theory around trigger warnings; the second moves on to specific case studies designed to offer practical guidance for teachers. While Kari Storla does this excellently in her piece on handling traumatic topics in classroom discussion, other case studies are less satisfying, and the first half of the book is ultimately of more interest to the historian, grappling as it does with the controversies raised by trigger warnings and placing them in wider context. Are warnings important for welfare, or damaging to students’ critical thinking? Do they protect or censor? Do they fulfil a genuine need for students or do universities use them to avoid confronting systemic issues around student welfare? Most authors do not resolve these questions – indeed, few come down squarely on one side or the other. This in itself reflects the complexity of the debate. It is, of course, possible in each case cited above for both things to be true, even in the same example.
Take Stephanie Houston Grey’s chapter on the history of warnings around eating disorders. This is one of the most thought-provoking and well-written articles in the book. Grey explores the public health response to eating disorders in the late 1970s, which she argues was one of the first instances in which widespread efforts were made to restrict speech on the grounds of preventing contagion. This “moral panic” resulted in crackdowns on eating-disordered individuals, most prominently online, which stripped basic civil rights from people but was nonetheless unsuccessful in reducing the prevalence of eating disorders. Grey’s thoughtful examination of one specific example that began nearly thirty years before trigger warnings became widespread online is an interesting opportunity for reflection on the emergence of triggers. In the case of eating disorders, labelling images and words as triggering might have begun from concerns about people’s welfare, but ultimately became repressive and silencing of people with eating disorders. Providing “critical thinking tools and skill sets”, Grey suggests, might instead assist people to engage in more productive conversations around eating disorders.
Although the context of public concern about contagion is very different from the modern emphasis on managing individual trauma, there are certain lines of similarity with other pieces in the book. Indeed, an emphasis on critical thinking tools to aid welfare is one of the most practical suggestions that emerges from the volume as a whole. As Storla notes, one of the biggest myths around the use of trigger warnings is the assumption that a blanket warning alone can somehow prevent students from experiencing trauma. Storla’s “trauma-informed pedagogy” instead provides a nuanced framework which incorporates student participation at every turn. Her classes develop their own guidelines, debate the use of warnings at the start of the course and consider the difference between discomfort and trauma. This provides a lesson to students in considering multiple viewpoints (in particular those of the rest of the class). Similarly, in their chapter Kristina Ruiz-Mesa, Julie Matos and Gregory Langner suggest that encouraging students to consider the differing backgrounds of their audiences can be a valuable lesson in public speaking. In both cases, trigger warnings become part of the educational content rather than being in opposition to it.
Trigger warnings can, then, be about opening up conversation as well as closing it down. Several authors, including Jane Gavin-Herbert and Bonnie Washick, suggest that student demands for trigger warnings may not necessarily be about individual experiences of trauma but may instead be grounded in wider concerns about structural violence and inequality. Taking seriously and discussing these concerns may have more impact than a simplistic warning. Indeed, Storla notes that one of her techniques – the use of “safe words” by which students can bring an end to class discussion without having to give a personal reason for doing so – has never been used by a student in her classroom. However, its existence as part of a set of communal guidelines, she feels, means students are safe and supported and thus able to engage more fully in debates. Paradoxically, having the opportunity to censor discussion might actually promote it.
As a general guide, most of the authors in this volume agree that trigger warnings are an ethical and legal practice that can and should be put in place as part of increasing access to higher education. The people most likely to request trigger warnings are minority groups, who are also at greatest risk of experiencing trauma. The problem, however, comes when these issues are individualised, as neoliberal interpretations of trigger warnings have tended to do. Bonnie Washick’s sympathetic critique of the equal access argument for trigger warnings raises the way in which warnings have led to the expectation that individuals who might be “triggered” are viewed as responsible for managing their own reactions. While trigger warnings might have begun as a form of activism and social protest, they have since been medicalised (through the framework of Post-Traumatic Stress Disorder) and individualised. By taking a critical and contextual approach to trigger warnings, both teachers and students can gain from discussing them.
Trigger Warnings: History, Theory, Context is a valuable contribution to the debate around trigger warnings in higher education today, as well as an interesting exploration into some of the nuances around why and how such a concept has emerged. An edited volume particularly suits the topic, allowing for multiple and varied perspectives. No reader will agree with everything they read here, but then that’s precisely the point. If, collectively, the authors in this book achieve any one thing it is to persuade this reader at least that trigger warnings have the potential to generate more insightful debate and critical thought than they risk preventing.
Sarah Chaney is a Research Fellow at Queen Mary Centre for the History of the Emotions, on the Wellcome Trust funded ‘Living With Feeling’ project. Her current research focuses on the history of compassion in healthcare, from the late nineteenth century to the present day. Her previous research has been in the history of psychiatry, in particular the topic of self-inflicted injury. Her first monograph, Psyche on the Skin: A History of Self-Harm was published by Reaktion in February 2017.
Jennifer Wallis, Investigating the Body in the Victorian Asylum: Doctors, Patients, and Practices (Cham, Switzerland: Palgrave Macmillan, 2017); xvi, 276 pages; 9 b/w illustrations; hardback £20.00; ISBN: 978-3-319-56713-6.
by Louise Hide
Skin, muscle, bone, brain, fluid – Jennifer Wallis has given each its own chapter in this exemplary mesh of medical, psychiatric and social history that spans work carried out in the latter decades of the nineteenth century in Yorkshire’s West Riding Pauper Lunatic Asylum. The body – usually the dead body – is at the centre of the book, playing an active role in the construction of knowledge and the evolution of practices and technologies in the physical space of the pathology lab, as well as in the emerging disciplines of the mental sciences, neurology and pathology. Wallis explores how, in the desperate quest to uncover aetiologies and treatments for mental disorders, there was a growing conviction that ‘the truth of any disease lay deep within the fabric of the body’ (Kindle: 3822). General paralysis of the insane (GPI) is central to the book. A manifestation of tertiary syphilis and a common cause of death in male asylum patients, it was one of the few conditions that produced identifiable lesions in the brain, raising hopes that the post-mortem examination could yield new discoveries around the organic origins of other mental diseases. Investigating the Body in the Victorian Asylum is, therefore, not only about how the body of the asylum patient was framed by changing socio-medical theories and practices, but about how it was productive of them too.
Whilst reading this lucidly written monograph, it soon becomes clear that West Riding was no asylum backwater. Its superintendent, James Crichton-Browne, was determined to forge a reputation in scientific research, and West Riding became the first British asylum to appoint its own pathologist in 1872. Wallis has not only marshalled a vast amount of secondary literature, but made a deep and far-reaching foray into the West Riding archives, analysing some 2,000 case records of patients who died there between 1880 and 1900. Drawing on case books, post-mortem reports, administrative records and photographs, Wallis has created a refreshingly original way of conceptualising the asylum patient. Rather than exploring his role – the patient was usually male in ‘cases’ of general paralysis – within tangled networks of external social agencies and medical practices, she turns her focus to the inner uncharted terrain of unclaimed corpses. She shows how the autopsy provided different ways of ‘seeing’ as the interior of the body was ‘surfaced’ through a range of new and evolving practices and technologies, such as microscopy and clinical photography. Processes for preserving human tissue and conducting post-mortem examinations were enhanced, as were methods for observing and testing tissue samples, and for recording findings. None of these practices was without an ethical dimension, such as a patient’s right to privacy and anonymity.
Doctors, perhaps, gleaned most from the living as they examined and observed patients on admission and in the wards; pathologists could venture into the deep tissues of the body, which were out of bounds for as long as a patient remained alive. Yet the two states could not be separated quite so neatly and Wallis turns her attention to the growing tensions between pathologists and asylum doctors as both scrambled to plant their disciplinary stake in the ground, navigating boundaries between the living and the dead body. How, I wondered, were practices mirrored at the London County Council pathology lab, which opened at Claybury in 1893 and also investigated various forms of tertiary syphilis, including GPI and tabes dorsalis, as well as alcoholism and tuberculosis? Wallis does touch on other laboratories, but it would be interesting to know a little more about how they associated with each other.
One of the many strengths of the book is the way in which Wallis makes connections between social and cultural mores and the impact of wider political and medical developments. Germ theory was, of course, highly influential. Wallis touches on the ‘pollution’ metaphor but might have expanded on the trope of the ‘syphilitic’ individual as a vector of moral depravity in the western context – an unexpected swerve of narrative into the belief systems of the Nuer jars slightly. Otherwise, Wallis provides a fascinating investigation of the social framing of the male body with GPI, explaining how atrophied muscle and degenerating organs might be interpreted as an assault on masculinity in a period of high industrialisation. Soft bones could be equated to a loss of virility and femininity; broken bones forced asylums to ask whether they might be due to the actions of brutal attendants, rough methods of restraint, or physical degeneration in the patient.
Investigating the Body in the Victorian Asylum provides a meticulously researched and thoroughly readable – for all – social history of an important development in the mental sciences in the nineteenth century, centring it on the evolving practices of post-mortem examinations. I particularly like the way in which Wallis writes herself, her research process and her thinking into the book. Her respectful treatment not only of the asylum patients but of the medical and nursing staff who cared for and treated them is threaded through from beginning to end. One might not expect to be gripped by descriptions of ‘fatty muscles’, ‘boggy brains’ and ‘flabby livers’, but Wallis reveals a fascinating story that is full of originality and tells us as much about nineteenth-century medical practice as about the patient himself.
Louise Hide is a Wellcome Trust Fellow in Medical Humanities and based in the Department of History, Classics and Archaeology at Birkbeck, University of London. Her research project is titled ‘Cultures of Harm in Residential Institutions for Long-term Adult Care, Britain 1945-1980s’. Her monograph Gender and Class in English Asylums, 1890-1914 was published in 2014.
In the current issue of HHS, Isabel Gabel, from the University of Chicago, analyses the links between evolutionary thought and the philosophy of history in France – showing how, in the work of Raymond Aron in particular, a moment of epistemic crisis in evolutionary theory was crucial to the formation of his thought. Here, Isabel speaks to Chris Renwick about these unexpected links between evolutionary biology and the philosophy of history. The full article is available here.
Chris Renwick (CR): Isabel, we should start with an obvious question: Raymond Aron, the main focus for your article, is a thinker most readers of History of the Human Sciences will be familiar with. But few – and I count myself among them – will have put Aron in the context you have done. What led you to connect Aron with evolutionary biology?
Isabel Gabel (IG): Yes, this was a real revelation for me too. I knew Aron as a sociologist, public intellectual, and Cold War liberal, but had never seen his early interest in biology mentioned anywhere. It was actually in the archives of Georges Canguilhem, at the CAPHÉS in Paris, that I stumbled upon a reference to Aron and Mendelian genetics. In 1988 there was a colloquium organized in Aron’s honor, and Canguilhem’s remarks on Aron’s earliest years, and the problem of the philosophy of history in the 1930s, had been collected and published along with several others in a small volume. At the time, Canguilhem felt that not enough importance had been given to the fact that his late friend had abandoned a research project on Mendelian biology, as he put it. This totally surprised and, needless to say, delighted me. I quickly found a copy of Introduction to the Philosophy of History, and began reading.
As someone who works in both history of science and intellectual history, I frame my research questions to address both fields. Aron’s development as a thinker is really a perfect illustration of how these two fields converge, because his encounter with biology can be so precisely localized in time and space. It wasn’t just that he made the obvious connection between theories of evolution and philosophical approaches to history. Rather, it was the very specific moment in which he happened to encounter evolutionary theory, and that this happened in a very French context, which so profoundly shaped his thought.
CR: An important part of your article involves outlining the context of French debates about evolution, which provides the backdrop for Aron’s early intellectual development. As a historian of evolutionary thought myself, I found this part fascinating and something I had only really encountered periodically in my research – Naomi Beck’s work on Herbert Spencer’s reception in France is one example of where I have read about these kinds of issues before. The French context seems strikingly different from the Anglophone one. What do you think the Francophone context brings to our discussions of both the history of evolutionary thought and the human sciences related to it?
IG: The French context is absolutely central to this story. Everything from the specifics of the French education system, to the cultural politics of Darwinism in France, to the state of the French left in the twenties and thirties played a role in how and why Aron brought evolutionary theory and the philosophy of history together. First, because debates about evolutionary mechanisms were, if not insulated from Anglophone science, at least somewhat resistant to the incursion of external concepts, the epistemic crisis of neo-Lamarckism could only have happened in France. Also, while it’s important to note that Aron’s self-understanding was very post-Henri Bergson, there is no denying Bergson’s influence on early-twentieth-century French biology. All of which is to say that mid-century France is a fascinating case for understanding the feedback loop between biology and philosophy.
Moreover, it’s the very specificity of the French case that makes it so useful for thinking through methodological questions such as the one you raise about the shared history of evolutionary thought and the human sciences. In recent years, there has been renewed interest in bringing science and humanities/social science into dialogue with one another, an impulse that historians of science should of course welcome. Part of what the story of Aron and the philosophy of history in mid-century France can teach us is how contingent these influences can be. In other words, as evolutionary theory evolves over time, so too do the ways we interpret its meaning for the human past. In France in the twenties and thirties, it was the limits of science that were most instructive to Aron. French biologists couldn’t quite bridge the gap between observations and experiments in the present and the theory of evolution they believed explained past events. Objectivity became, for Aron, partly about acknowledging the limits of both positivism and philosophical idealism, i.e. a way of negotiating the relationship between the limits of observation and the limits of theory.
The French context therefore instructs us not to buy in too quickly to the idea that science offers facts and humanities subsequently layer on interpretation. This picture does a disservice to both the science and the humanities. What becomes visible in the case of Aron and French evolutionary theory is that biology and philosophy were encountering parallel epistemic crises, and therefore that neither one could singlehandedly save or authorize the other.
CR: Another issue that I thought was important in connection with Raymond Aron is liberalism. As you explain in your article, most people think of liberalism when they think of Aron. However, we don’t necessarily think of liberalism when we think about evolutionary biology. Liberalism and evolutionary biology have such a fascinating and entangled history. Why do you think we are now so surprised to find people like Aron were so interested in it?
IG: Those who know Aron by reputation as a Cold War liberal may be surprised, because the conversations he helped shape were about ideology and international order. But I don’t know that everyone will be surprised that Aron was so interested in biology, so much as they might be unsettled. We associate any contact between political beliefs and evolutionary theory with deeply illiberal commitments, with racism, eugenics, and just plain old bad history. And while it’s true that we should approach attempts to import scientific data into humanist frameworks with caution, we also shouldn’t grant science more explanatory power than it can hold. In recent history, the liberal position has been a vigorous critique of biological determinism, but as Stephen Jay Gould and others repeatedly teach us, the point is not simply that society or history is autonomous from the biological, but that biology itself is not as determinist or totalizing as we sometimes understand it to be. That’s why reading the work of scientists themselves is so important, because it brings out the provisional, ambiguous, and contentious nature of their endeavors. It shows that they aren’t stripping the world of contingency, but rather prodding at and making visible new contingencies.
CR: The history you uncover in your article is incredibly revealing in what it tells us about the intellectual origins of not just Aron’s thought but the milieu out of which many people like him emerged. Do you think there is anything in that history that is of particular relevance or importance for the present?
IG: Yes, I do think there are really instructive parallels with the present. Aron came of age in a time of enormous political upheaval and two catastrophic world wars. Political and epistemological upheaval go together, and so this generation of French thinkers can speak to our own anxieties about the eclipse of humanities and social sciences by STEM fields. One way to think about this history’s relevance would be to see Aron as a cautionary tale – the science changes quite quickly as the Modern Evolutionary Synthesis takes shape, DNA explodes as a new way to understand life over time, and antihumanism gains cultural strength in France. So it’s not clear that Aron’s study of biology really got him where he wanted to go. But I actually think this picture is a little too cynical, because it ignores what’s so interesting about Aron’s philosophy to begin with. He understood that biology and philosophy were facing some of the same questions, such as how to understand the past from the perspective of the present, and whether laws that explained the present could be known to have operated the same way in the past.
In this way, we ought to pay attention to how STEM fields and the humanities are speaking to some of the same questions. For example, there’s been a lot of energy around the concept of the Anthropocene recently, and it’s a perfect opportunity for historians to contribute to a conversation about something that is both a scientific claim—that humans have become a geological agent—and a historical, political, and moral one. We can offer a longer-term understanding of how history and natural history have spoken to one another in the past, how the human has been constructed through philosophy, human sciences, and natural sciences, and how thinking about the end of civilization is saturated with political imagination. Deborah Coen’s work on the history of scaling is a great example, as is Nasser Zakariya’s recent book, A Final Story.
CR: I was very interested in what you had to say in your article about bridging the gap between intellectual history and history of science, which is an important issue for an interdisciplinary journal like HHS. The material or practice turn in history of science has been important in creating this division, as you explain in your conclusion. This turn needn’t rule out the human, of course, and it hasn’t, as work on subjects like the body shows. But it’s clear, as you explain, that many historians of science see intellectual history as something that needn’t concern them. Why do you think this belief is misplaced, and what do you think we would all gain by putting the two together again?
IG: I hope that the story I’ve told in my article illustrates one immediate benefit of overcoming the longstanding division between intellectual history and history of science. Namely, that there is historical work that just hasn’t been done as a result. Aron’s early interest in evolutionary theory, and its effect on his philosophy of history, is not an isolated case. There is enormous potential in fields like the history of knowledge, history of the humanities, as well as in fields like environmental humanities, to bring the tools of intellectual history and history of science to bear on any number of subjects.
But also within intellectual history, the elision of science has meant flawed or at least partial understandings of figures as enormously influential as Aron. At the same time, within the history of science the material turn that you mention led to a kind of reflexive suspicion of philosophy, which John Tresch has written about. Tresch sees the potential of intellectual history to give the history of science a broader scale – to get beyond the case study. I think this is part of the story, but that on an even more basic level the history of science will be better told if its methodological framework can accommodate the conceptual feedback that exists between science and philosophy, in addition to the feedback between science and society, institutions, and technology. One of the most exciting things about reading the work of French biologists is discovering the degree to which philosophical questions preoccupied them not as extra-scientific or ex post facto interpretations, but as urgent problems to which their research was addressed.
Isabel Gabel is Postdoctoral Fellow at the Committee on Conceptual and Historical Studies of Science at the University of Chicago. Her current book manuscript, Biology and the Historical Imagination: Science and Humanism in Twentieth-Century France, provides a genealogy of the relationship between developments in the fields of evolutionary theory, genetics, and embryology, and the emergence of structuralism and posthumanism in France.
Chris Renwick is Senior Lecturer in Modern History at the University of York, and an editor of History of the Human Sciences. His most recent book is Bread for All (Allen Lane).