
The Silicon Shrink: How Artificial Intelligence Made the World an Asylum
by Daniel Oberhaus

The MIT Press, Cambridge, MA, 2025
264 pp., illus., 1 b/w. Trade, $29.95
ISBN: 9780262049351

Reviewed by: 
Hannah Drayson
June 2025

Daniel Oberhaus’ The Silicon Shrink is a tour of the shaky foundations of psychiatric artificial intelligence, or “PAI”: the application of machine learning to the diagnosis and treatment of mental disorders. This is Oberhaus’ second book. His first, also with The MIT Press, was Extraterrestrial Languages (2019), an overview of attempts to communicate with aliens from the 19th century to the present day. [1] Previously a staff writer at Wired magazine (he has since founded a company that provides brand marketing for ‘deep tech’ start-ups), Oberhaus’ primary interests were energy and space travel, but technology writing offered a front-row seat to a boom in digital mental health start-ups. He explains, however, that the prompt for turning his observations of how technology was being incorporated into psychiatry into this book was his sister Paige’s suicide ten years earlier.

In the opening of the book, he describes a note left by Paige, which he found among her belongings after her death. In it she reflects on her lifelong mental health struggles. Beside a drawing of a pile of diagnoses, dysfunctions, and traumatic events (the various suspected causes of her illness) she had written the words: “NO ONE KNOWS WHAT THIS IS”. Paige’s message is the emblem of The Silicon Shrink’s warning: that we should be very careful about automating processes to diagnose and treat disorders that we do not fully understand. This is not only for the more-than-sufficient reason that the people on whom these systems are being tested are often in extremely vulnerable positions. It is also because the inherently expansionist nature of both AI systems and modern psychiatry, having slipped the bounds of acute mental healthcare, puts everyone who uses smart computing technology at risk of a loss of autonomy, dignity, and privacy. People become subject to technical systems they may not be able to opt out of.

The book’s warnings have an excellent pedigree, in agreement with the early warnings of cybernetic pioneers like Norbert Wiener about the dangers of implementing systems that do not adequately represent the complexity of what they seek to control. The book’s critique is thus informed by a history of PAI from its inception: early chapters reflect on the disciplinary and cultural currents that shaped psychiatry and artificial intelligence into disciplines that now share tendencies toward logics of control, and expansionist methods for achieving it. The discussion begins with the postwar emergence of psychiatry as a field where, following the lead of psychoanalysis, engagement with questions of mental health care moved from the asylum to the individual private consultation. Mental health became something that any individual might benefit from engaging with and taking responsibility for. Together with movements like antipsychiatry, which sought to deinstitutionalise mental health care, this idea that mental health difficulties are part of everyday life set the scene for a contemporary context in which it is the norm for individuals to subject their behavioural data to mental health tracking and analysis. Another current was the way in which psychiatry embraced new scientific treatments, drugs, and tools despite lacking the theoretical underpinnings to understand how or why they worked, let alone the biological mechanisms that give rise to the symptoms they treat. The need for a guide here gave rise to the Diagnostic and Statistical Manual (the DSM), which offered tick-box lists of symptoms for use in diagnosis, a structure that lends itself to replicable identification amenable to insurers’ decision-making processes, and hence to digitisation and automation.

A key moment in this early history, one that can be seen to encapsulate many concerns with PAI, was the first ICCC (International Conference on Computer Communication) in 1972, held to present DARPA’s ARPANET, a key technical precursor to the contemporary internet. The conference included a showcase of 36 terminals connected to the ARPANET’s 29 distributed machines, each running different programs, such as chess games and air traffic control. Among them were two chatbots. The first was based on the well-known ELIZA, created by the German-American MIT professor Joseph Weizenbaum in 1966. His program used the speech patterns of a Rogerian psychotherapist to simulate a conversation with another human, mirroring inputs back to the user with questions like “Does thinking of X bring anything to mind?” or “Why do you remember X just now?” (Weizenbaum 1976). However, Weizenbaum’s goal in creating ELIZA was not to create a computational psychotherapist, but to explore human-machine communication. ELIZA, and the updated version of the program named DOCTOR that was shown to delegates at the ICCC, had become a point of great concern for Weizenbaum, who was fascinated by how easily users of the program forgot that they were not speaking to another human.

Further, and more concerning for Weizenbaum, was the discovery that people were more than happy to engage in personal therapeutic conversations with what they knew was a machine. Weizenbaum argued that we should beware the tendency to allow machine systems to stand in for human ones, and the trap of believing that a computer program could internally grasp human experience, despite, as he pointed out, the extent to which not even other humans are able to truly understand themselves or one another’s complexities. He cautioned that accepting these machines as good enough meant, conversely, turning healers themselves into information processors, instrumentalising the therapeutic interaction and stripping it of its humanity.

The other chatbot on show at the ICCC was Kenneth Colby’s PARRY, a computer simulation of a paranoid patient. Colby, a psychoanalyst and therapist trained in medicine at Yale, hoped the system could be used as a research tool for psychiatrists: a kind of experimental subject with which to test models of mental illness. PARRY was modelled on Silvan Tomkins’ theory of paranoia, which understood it as a defence mechanism against shame. The simulated patient also offered a potential training tool for therapists. While Colby’s approach was less immediately sceptical than Weizenbaum’s about the potential for AI to be a useful adjunct to therapy, his work with PARRY convinced him of another serious problem with the enterprise, the paucity of explanatory models of mental health problems that could be simulated in the first place, and he became increasingly critical of the discipline’s failings. These included the unreliable diagnostic categories offered by the DSM and a growing body of research in the 1970s that had seriously shaken confidence in psychiatric diagnosis. Despite the disagreements between Weizenbaum and Colby, PAI’s early innovators were, from the very start, certainly in agreement that there was much to overcome.

This is sticky territory, which the technology industry has cheerfully waded into anyway. In his telling, Oberhaus does not appear to be entirely dismissive of the potential of PAI; he cites, for example, studies in which smartphone data showed promise as a way of detecting depression. However, he also cites a considerable number of reasons not to trust these kinds of monitoring or diagnostic tools as reliable. The book provides a rich overview of the current state of what Oberhaus calls ‘swipe psychiatry’, the use of interaction with mobile computing devices as a way to predict and influence individuals’ mental health. The ubiquity of smartphones means that over the last decade or so ever-increasing quantities of user data have become available for what researchers have referred to as ‘digital phenotyping’. This is a behaviourist-inspired enterprise that uses digital information to infer a person’s emotional state from their ‘digital exhaust’: easily overlooked behavioural data such as scrolling and typing speeds. The proposed application of this information is not just to read users’ mental states, however, but to create systems that in turn manipulate them in response to what is detected, treating them through the content of the digital platforms they interact with.

In a move that perhaps hides a larger discussion, Oberhaus makes a connection similar to that of philosopher Justin Smith’s (2022) book The Internet Is Not What You Think It Is. Taking a deep historical, philosophical approach to the internet (mostly the social web), Smith, in line with an enactivist perspective, considers the ‘web’ to be a species-specific extension of physical embodiment and cognition. Similarly, Oberhaus links digital phenotyping back to the notion of the ‘extended phenotype’ discussed by Richard Dawkins, who argued that organisms, by modifying their environments, influence their own biological evolution, in essence modifying themselves as a species. The digital services and platforms that people use can come to constitute a part of their environment, at minimum mediating a considerable amount of their social interaction. Viewed in this way, the power to control digital platforms and their content is in essence the power to change human beings themselves. Oberhaus and the literature he cites suggest that applications of digital phenotyping might be at best vapourware and at worst a bridge to a form of ‘soft totalitarianism’.

The book’s later chapters deal with more recent work in PAI and with the detection techniques made possible by digital technology, such as behavioural tracking, emotion recognition, and digital phenotyping, which combine machine learning and personalised data collection. Here are many examples of the kinds of tools, particularly apps, currently being developed or marketed, as well as the various research projects undertaken by “big tech” (Apple, Amazon, Alphabet, Meta, and Microsoft) that explore aspects of mental health, such as social media suicide prevention or depression detection through smartwatch data. In line with the trickiness of PAI, these companies’ relationships to such applications come across as somewhat ambivalent, with projects starting and stopping amid poor evidence that the measures are reliable and the risk of controversy, pushback from users, or injury to them. Alongside chilling tales of data leaks from psychological support services and their effects on clients, there is also a chapter offering guidance on desirable moves toward best practice in the regulation of AI tools: explainable algorithms and open data, AI registries, accountability laws, and commitments by companies and governments to privacy and security for users. Particularly pertinent in the specific context of mental health is the need for clinical trials, which in the US are only just starting, despite many applications already being available to users following pandemic-era moves to waive the requirement that health-related digital apps undergo clinical trials before deployment.

The Silicon Shrink is a very useful primer on and overview of the main issues surrounding PAI. Many of the issues discussed are familiar problems with AI wherever it is applied, such as the black-box nature of machine learning algorithms or the requirement for ever-increasing quantities of training data. The book also offers much to consider about how these problems are only exacerbated when applied in psychiatry. There are moments when the dual histories, such as that surrounding DOCTOR and PARRY, point towards epistemological resonances that suggest alternative paradigms within which AI and mental health might work together, although perhaps in radically different ways to those explored in The Silicon Shrink. This is what the book left me wondering about, although it would not be fair to expect it to have found space for such speculation. While it is perhaps too much to ask of technologies under development in the service of surveillance capitalism, in the long run a shift from foregrounding cybernetic concepts of computing as a system of communication and control to other principles, ones that foreground feedback and produce systems of self-control and even self-awareness, may be the forgotten history that needs to be incorporated into this one.

From a history of technology perspective (as laid out in the context of AI by Punt 2025), in which the users of a technology are accounted for as drivers of the form that technology takes, the book does not quite push its argument as far as it might, especially in those moments when human users of technologies surprise the assumptions of their inventors. We are perhaps at an early point, historically, in the story of what happens at larger scale in the innovation of machine learning; at this point the agency, or at least the sense of the agency, of the technology itself to create unexpected changes is in the ascendant, and it is easy to forget that in some framings these systems are part of the extended phenotype of humanity overall. Let’s hope that users of and innovators with AI are able to do more than simply recreate and reinforce what is already not working. Instead, let’s hope that these technologies’ users, just as they surprised Weizenbaum with their willingness to share intimacies with non-human simulations, may continue to surprise us.

Notes

[1] See Stephanie Moran’s (2020) review in Leonardo: https://leonardo.info/review/2020/03/extraterrestrial-languages.

References

Moran, S. (2020) Extraterrestrial Languages by Daniel Oberhaus. Leonardo/ISAST. Available at: https://leonardo.info/review/2020/03/extraterrestrial-languages (Accessed: 20 May 2025).

Punt, M. (2025) ‘Artificial Intelligence and The Technological Imaginary’, Leonardo, pp. 1–12. Available at: https://doi.org/10.1162/leon_a_02676.

Weizenbaum, J. (1976) Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman.