Shamans and Robots: On Ritual, The Placebo Effect, and Artificial Consciousness
by Roger Bartra; Gusti Gould, Translator

University of Minnesota Press, Minneapolis, MN, 2024
176 pp. $25.00 paper.
ISBN 978-1-5179-1749-4.

Reviewed by: Brian Reffin Smith
February 2025

Out of curiosity I asked ChatGPT to write two reviews of this book, one positive, the other negative. Having read the book, I found both reviews more or less intellectually, if not really textually, convincing. However, as the LLM admitted, it had enjoyed no access to the text. It had not “read” the book. All it could examine was the title and the couple of paragraphs of publisher’s blurb. When challenged about this (I suspect not isolated) betrayal of the contract between reviewer and potential readers, it replied that this, in general, was what it might well have written had it actually been able to peruse the text. This I found entirely convincing, and nearly as good as the critic who, when asked if he had read a particular new book, said “Read it? I haven't even reviewed it yet!”

I mention this because it goes to the idea of the placebo, a central concept in Roger Bartra's text. The review of an unread book, like the effect of a drugless pill, can fulfill some or all of what might be expected of the “real” thing. It might be useful. And, like the placebo, it might work, albeit less effectively, even if the person knows that it might be - even is - a fake. There are also studies in which the placebo worked better than the drug. And, for whatever reasons, models such as ChatGPT are sometimes preferred by newspaper editors to journalists, and by students to professors.

Supporters of certain politicians know full well that they are lying psychotics, apparently preferring them because of and not despite this. We know a magician is tricking us. If she were really changing a dollar bill into a dove, it would no longer be magic but a miracle. We might know a shaman or a computer simulation of a psychotherapist is operating outside of all logically trustworthy reality, yet we may be healed or helped. Many pearls are currently clutched about disinformation, but I strongly suspect that the recipients of much of it know full well that it's twaddle and don't at all care. Readers of a celebrity's autobiography or novel must know that the “author” sometimes hasn't even read it, let alone written it. This is post post-truth complicity, and it is already permeating much social and political interaction and performance.

What, then, are we to make of a book that takes a poorly understood phenomenon, the placebo effect, in connection with human consciousness, uses it to pave the way to the idea of the extension of the brain outside the body, the exocerebrum, and then leads back to the idea of a robot having to have an artificial “consciousness” of its own, connected to the symbolic space outside itself?

On the one hand, it is a text by an internationally respected social anthropologist and scholar of consciousness, technology, and culture, based in Mexico. It treats subjects of great interest to many, I think: how technology might be replacing or resembling some of the roles once played by spiritual practices, with particular regard to the placebo effect and the symbolic structures surrounding and embodied in aspects of healing and in our use of, for example, smartphones. He posits consciousness extending into, or at least being shaped by, external objects and systems, “cultural prostheses”.

On the other hand, I found that the only way to read and make sense of the short book is to silence any of the usual questions and criteria one might have in mind and to treat it more like someone telling you about some ideas they have, someone who's not sure if you earned a doctorate in similar areas to theirs or if you're a person who needs a description of what a smartphone is. Not averse to suspending questioning thoughts and just listening, I nonetheless found this difficult at times, the sometimes rather flat style (whether of the original Spanish or its translation I don't know) not really helping.

The reader perhaps needs to be in a dreamlike state, just to experience Roger Bartra's thoughts and to appreciate the cloud of connections and concepts he evokes. Not to be in such a state would be to be pulled up short by questions begged, by apparent non sequiturs, by treatments of, for example, AI that might be irritating to those with a deeper understanding of the subject, and by a few too many platitudes, even clichés. Perhaps the book is an example of that which it treats, and one has to believe.

Can one engage with it? Yes, it describes and provokes thought about centuries of shamanistic and other presumably placebo-based efforts to heal or reduce suffering, and uses these in the second half of the book to discuss the necessity (as the author sees it) for a robot to have an artificial consciousness and what this might be like (it must involve passing through “the rituals of pleasure and pain”). I found myself wondering many things, including whether AIs might be - even need to be - what we would call in a human being psychopathic, and whether we are not far too anthropocentric in our efforts to envisage that which we ourselves often simultaneously describe, or even define, as unimaginable. (In a possibly less than sober discussion on the roof of a VW camper van being driven round Avignon many decades ago, the father of hypertext and author of “Computer Lib”, Ted Nelson, defined AI to me as “I know something you don't know that will let me make something I can't understand”.) It also made me wonder which would be worse: making a very advanced general intelligence before we have understood what consciousness is, yet wanting the AI to embody this cloud of unknowing, or after we think we have understood conscious awareness, and laughably wanting it to embody that. But sorry, I think we're screwed either way.

The whole book seems symbolic. It becomes what it describes, in a way.

Perhaps I had my antennae wrongly tuned, because this book needs, as I say, to be read in a state of suspended disbelief. It is hard at points to know whether the author is quoting with approval, or merely citing texts about shamanism with an anthropologist's hat on. I suspect that too would be a misreading. If you relax, there's lots of fascinating stuff in there, on the way to its conclusion that we should compare two seemingly different elements of society: shamans, who represent traditional, mystical forms of healing, and robots, which symbolize modern technological advancement, and that through this comparison we can examine how both influence human behavior and our perceptions of reality. But we surely already knew that. Everything influences us like that. Perhaps the journey is the goal. A statement towards the end of the book such as “We will be in the presence of a truly conscious robot the moment we prove that it experiences relief from some type of malaise, by giving it a placebo” (because, he says, the machine will have been tricked into feeling better) raises questions not always answerable from the mixture of profundity and seeming superficiality in the text.

The book does encourage readers to think about how we interact with technology, not just as tools but as something that deeply affects our minds and society, much as shamanic practices once did.

The second part of the book seems to assume the importance of aspects of human consciousness in the future of AI, though this reviewer only gained the impression that that is how Roger Bartra wants it to be. It is not clear that such systems will need anything remotely like what we would call consciousness, though there doubtless will be newspaper headlines to frighten or reassure us about that over the next few decades.

Although the book was originally published in Spanish in 2019, even then Bartra could perhaps have engaged more with ongoing philosophical and ethical debates about AI. Parts of the text already seem a bit dated, and a brief foray into robotic art comes across as a depthless afterthought. A robot is not the same as an AI. Robots have to move. It might be argued that a general AI should be able to move too, but we don't know. And this is the problem: we talk glibly of the singularity, when AI will become as something-or-other as us, but literally don't know what we're talking about. We might have to become dumber to accept AIs as our equals. We already have AIs whose makers cannot explain their systems' decision-making. I personally do not think that any mapping of ideas of our (again poorly defined, let alone understood) human consciousness onto whatever it is they'll be doing will be in the least helpful. Indeed, it might be rather dangerous. Believable artificial stupidity, not intelligence, will be more than enough for some would-be world dominators.

But we do, of course, need both to investigate concretely and to dream imaginatively about ourselves and the techno-cultural future just ahead of us. Insofar as this book provides an unusual thought space for this to happen in, it is welcome.