The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI
by Jonathan Birch

Oxford University Press, Oxford 2024
400 pp. Trade, $40.00
ISBN: 978-0192870421.

Reviewed by Gregory F. Tague
December 2024

Where is the margin between homeostasis and suffering in humans, mammals, fish, invertebrates, and even AI? Demarcating where feelings of pain or pleasure begin and end is not simple. Sentience could, for example, appear in complex machines and software, like AI. For Jonathan Birch, a philosopher of science and animal ethics, sentience includes a “capacity to have valenced experiences” (p. 1), ranging from distress to satisfaction. This definition helps identify welfare risks and assess ways to avoid inflicting gratuitous harm. That is The Edge of Sentience in a nutshell; beyond that, Birch takes readers on a compelling, engaging, and at times controversial journey through the sentient life of humans, a variety of nonhuman animals, and artificial intelligence. The word edge in the title is important in several respects. Birch is careful not to make grand pronouncements; rather, in situations where evidence about sentience is not yet clear, there should be a debate to negotiate disagreements. The way forward in terms of animal welfare and rights, Birch contends, is not by expert decree but by crafting policy through discussion with ordinary people on a public panel guided by experts. With its thoughtful framework and reasoned proposals, this book is valuable for students of ethics and animal studies as well as lab researchers and policymakers in fields from farming to software engineering.

Turning to human life, Birch discusses a case of unconsciousness in which the patient, eventually revived, wondered why her doctors had presumed she would feel no pain. A change of perception about sentience in humans, animals, and even machines is in order. We mostly understand the ethical implications of dealing with human pain, but what about mammals, fish, insects, and so on? For many invertebrate species, Birch concedes, sentience by human definition is still an open question. In decisions about animal welfare, there is often a dividing line over any similarity between their suffering and ours. It is at this boundary, according to Birch, that our ethical decisions reside. He has experience here, having advised the UK government on sentience in the invertebrate taxa of cephalopod mollusks (e.g., octopuses) and decapod crustaceans (e.g., lobsters). Sentience, Birch says, is not detached from consciousness; it is not just involuntary reaction. Most animals are not automatons, contrary to what Descartes asserted, and we are beginning to realize that some AI machines might experience sentience. Understanding sentience is important for ethical policy decisions. For example, Birch wonders how we should regulate the use of human cells transposed into a computer as a neural organoid that could be sentient.

Birch knows that sentience is not necessarily the result of complex sapience. Absent evidence to the contrary, a being could have feelings without thought. Yet he admits there has to be some consciously good/bad valence in the experienced feeling for sentience. Concerning phenomenal consciousness (i.e., what an experience is like for the subject), he sees faults in the term’s definition and use, though it is a concept worth developing. Birch insists on the term valence because, in reference to animals, it would be difficult to map all human emotions onto theirs, except for basic ones like fear. His policy advice about sentience regarding humans, animals, and machines is that “our precautions should be proportionate to the identified risks” (p. 18). Following John Rawls, Birch advocates open communication to air disagreement in search of principles about sentience. Psychological valence therefore carries ethical considerations: how to minimize or eliminate pain. This stance is part of Birch’s proposal that nonhuman sentient systems, whether animals or AI, not be used as means to human ends without regard to their suffering.

Birch does not accept the behaviorist posture that conscious experience makes no difference; it can impact behavior. We and other animals did not biologically evolve valenced sentience without corresponding neural and physical states. There follows a long and jargon-filled discussion of consciousness. The author wants to cover all bases of dispute related to the edge of sentience, and he comes back to how consciousness is not just the stimulus response seen in cells but a valenced experience. Consciousness is not merely a metaphysical matter, since neurons, synapses, chemicals, and so on are involved. As Darwin correctly theorized, we evolved from previous forms to which we are related structurally and biochemically. Certainly there is more to life and living than hormones and bones; this is Birch’s point about sentience, ethics, and a conscious mind. While neurobiological and materialist questions are worth addressing by lab researchers, the more pressing point, Birch seems to say, is how we as moral creatures act toward each other, and especially toward the animal world we want to dominate and the AI we have created and control.

Considering AI, does one need a body to be conscious or sentient? Is biology necessary for sentience? Birch realizes an “edge” can be sharp or blunt, sudden or prolonged. That is where the ethical problems lie. Had Frankenstein not created his feeling “human,” would there have been sentience in the body parts alone? Though Frankenstein’s creation no more evolved via natural selection than a computer has, sentience in such things might differ from sentience in adapted organisms while still being potentially real. Though AI today remains a machine we can unplug, with advances it could easily acquire longevity and feelings. There lies the ethical conundrum. Are these, as Darwin speculated about biologically evolved organisms, differences of degree and not of kind? In other words, as Birch notes, there are instances of consciousness, more likely than not in many species and perhaps in AI, residing on some type of fuzzy but wide borderline. While some debate the nature of phenomenal consciousness, he leans toward moral consideration for psychological valence in subjective states.

In terms of consciousness and emotion, Birch reviews a large body of literature. One key study proposes that there is consciousness without a cerebral cortex. If a cortex is not required for consciousness, then many small-brained organisms come into view. Beyond mere reflexes, there could be basic forms of “conscious” evaluation. Birch discusses responses, like fear, and their relation, if any, to conscious valence. The question is what happens subcortically and later in the cortex, if there is one. It seems that affective consciousness (e.g., fear, care, panic, play) is fundamentally neural and spans species. On the other hand, Birch says, affective priming can be seen as a two-step process in which subcortical areas influence the cortex. In some species (consider a bird’s pallium), noncortical neurons can function like a cortex. This means researchers and policymakers cannot place total emphasis on the mammalian cortex; there is enough research on other species to suggest sentience and therefore to demand the creation and enforcement of ethical norms.

The problem is that at the edge of sentience there is no scientific certainty, yet a consensus could be based on empirical evidence. While there might not be agreement about policy, there can be a penumbra of acceptance within which both sides can work. Policymakers can disagree, but when deciding practical matters there should be a shared body of facts, with decisions based on informed opinion and not speculation. Caution is warranted whenever there is a possibility of sentience, and thus a welfare risk, but Birch warns against making general statements. For instance, there might be species we can deem sentience contenders pending further study and who should meanwhile be treated with caution. Birch admits these scenarios will open ethical disagreements, but the main point is to find moral consensus so as to avoid “gratuitous suffering” (p. 131) and to take precautions for species who are likely sentient. Birch is especially interested in public involvement. Having a community group engaged in decision making will touch on shared values, acceptable risks, the reduction of hazards, justified harms, whether new actions should be consistent with past decisions, and so on.

There is a realistic possibility that even without cognitive ability there can be some affective experience. Consider the implications for “animals” with less developed cortical brain areas. If we can assume sentience for humans in a persistent vegetative state, what does that imply about a fully functioning, conscious nonhuman being? As for embryos and fetuses, since the 1980s there has been a push for pain management in newborns, especially those who require surgery. Evidence suggests, contrary to previous medical thought, that they indeed feel pain. At this point, Birch opens the discussion of neural organoids. He notes that medical and biological research using animals has risen and will continue in the absence of alternative models. He points to neural organoids, i.e., stem cells manipulated to create human tissue models. Research on neural organoids that mimic parts of a human brain is ethically gray, though it could eliminate invasive experiments on live animals. The big question is whether cortical tissue in the form of neural organoids, particularly in artificial intelligence devices, could approach sentience.

Deliberating over AI and its possible sentience, Birch warns against complacency and recommends immediate action to mitigate potential suffering and abuse. As with animals, AI instruments will be deemed commercial objects used as means to human ends. What happens if the brain of a sentient creature is emulated in a robot? If the robot, as mechanical AI, is a neuron-level copy of a mammalian or human brain, then it likely has sentience. Birch reminds us that sentience is not intelligence: most nonhuman animal brains can register suffering even where they exhibit little higher thinking. Ethical precautions are needed, and that caution could apply to AI. He suggests that AI developers, though financially driven, be more open about their systems so precautions can be taken now. Otherwise, we could conceivably have populations of sentient AI machines and robots serving us and suffering at our hands.

In the end, Birch is optimistic, often referring to democratic practices and policies that can shape issues like human, animal, and AI sentience. From its opening pages, The Edge of Sentience makes complex subjects and pressing ethical issues readable. Birch has written a richly detailed and controversial book that challenges our preconceptions about ethics related to nonhuman life (fish, octopuses, crustaceans, insects, slugs, nematodes, spiders) and, surprisingly, AI.