
The Line: AI and the Future of Personhood

The Line: AI and the Future of Personhood
by James Boyle

The MIT Press, Cambridge, MA, 2024
336 pp. Trade, $32.95
ISBN: 9780262049160.

Reviewed by: Hannah Drayson
January 2025

OpenAI’s release of ChatGPT, an impressively functional large language model, in November 2022 prompted – among much else – a flurry of new academic publications on the topic of artificial intelligence. ChatGPT was interesting not only because it offered a convincing imitation of human language, at times impossible to distinguish from a human writer, but also because it conveyed an eerie sense of sentience. In keeping with a long intellectual tradition, the ability to use complex language was, until 2022, considered a bastion of what makes human beings special. It was the logic behind Alan Turing’s (1950) suggestion of an imitation game, now known as the ‘Turing Test’, to answer the question ‘can machines think?’. However, the intuition that the ability to listen and respond – at least verbally – with the appearance of understanding and abstract thought is one of the attributes that make human beings special no longer holds the same weight. Stephen Wolfram’s (2023) summation is gratifyingly succinct: “the reason a neural net can be successful in writing an essay is because writing an essay turns out to be a ‘computationally shallower’ problem than we thought.”

The necessary reassessment of the relationship between complex language and sentience is just one example of the gap between (sometimes carefully reasoned) philosophical positions used to understand the problems posed by complex technologies like machine intelligence and the ways in which things turn out in practice. In his book The Line, James Boyle offers many generous insights on this as he considers how human society may navigate the ethical and legal questions raised by machine intelligence over the coming years. He charts the factors that shape ‘the line’ separating things – entities like machines, animals, and objects, which can be considered property – from persons, who hold moral and legal rights. He gives a very well synthesised overview of the capacity-based debates surrounding these questions, which focus on attributes like intelligence, consciousness, or language use in an attempt to tease out the specific criteria upon which we might confer a particular kind of rights on machine intelligences. But Boyle’s book also looks at the circumstances in which these questions have already been answered, focusing particularly on the legal system. Considering the precedents set by corporations, non-human animals, and transgenic entities, he explores how personhood and the rights associated with it are already conferred upon, advocated for, or denied to a whole range of non- and semi-human entities.

While, as in his opening chapters on the history of AI so far, he gives many generous explanations of the theoretical discussions of ‘the line’, Boyle questions the overall usefulness of moral philosophy for reasoning out what is coming, and instead shows the impact that the lived experience of using new technologies can have on the ethical judgements made about them. As he argues in the example of large language models, we have trouble imagining things that then seem completely obvious when they happen. Now that countless humans across the planet have had the experience of talking with a large language model like ChatGPT, the idea that we might grant sentience or personhood to something that can generate convincing human-like language no longer seems even a faintly viable way to draw a line. In a matter of months, one of the bastions of complex intelligence and human exceptionalism was demolished, and these kinds of surprises should be built into our expectations.

Of course, the current excitement over AI is something that many scholars wish would go away, particularly those who find themselves faced with marking endless ‘computationally shallow’ AI-generated essays in the coming academic year [1]. But Boyle cautions us against overlooking the potential for dramatic and unpredictable technological change, for which we had best not find ourselves unprepared. In his opening arguments he is careful to address the natural scepticism toward ‘singularist’ claims that at some point in the near future advances in technology will produce an exponential explosion of autonomously self-improving machine intelligence, the result of which will be the end of humanity in one way or another – through extermination, evolution, or basic irrelevance (which sounds faintly inviting and potentially rather comfortable). Dismissing the entire discussion in the face of this rhetoric is certainly tempting. However, Boyle is careful to point out that the shifts in the performance of many AI systems over recent years mean that even on the most conservative view of the pace and potential of developments in machine intelligence, we should not simply dismiss the possibility that we will need to deal with the questions surrounding machine personhood. In fact, he suggests that it is not infeasible that complex autonomous systems may at least attempt to secure rights to legal autonomy or make claims of self-awareness or some form of complex sentience.

While it is easy to point out that the real issue in many cases is that AI systems are ‘not intelligent enough’, it is still the case that neural networks trained with reinforcement learning and large data sets have produced remarkable advances in a number of fields – not just chess, but useful things like genomics – with machine learning techniques yielding algorithms able to perform certain tasks and provide insights that have been a shock to their own designers. Boyle points to examples from the history of technology showing how the conditions for the emergence of an innovation can be in place for a considerable period – decades, even – before some crystallising factor makes application feasible. For example, internet technologies were in place for some time before the innovation of HTML made the sudden emergence of the web possible. If we can learn anything from the history of technology, it is that firm prediction isn’t possible, but that we can get better at keeping our balance, reading the weather, and perhaps not falling out of our proverbial boats.

Particularly interesting, instructive, and disturbing is the chapter on corporate personhood, which explores the somewhat accidental ways in which it became legally feasible for US companies to be granted constitutional rights. These range from the ability to sue and be sued – something arguably practical for the conduct of business – to First Amendment rights such as free speech, which perhaps should not be granted to beings that cannot die, feel pain, or have any moral investment. As Boyle points out, the legal rights granted to US corporations offer a well-established precedent for similar rights being potentially sought for (or by!) AIs, regardless of whether they are sentient – although he also points towards the debates around the ontological question of what exactly is understood to be incorporated when a corporation is created. As well as offering a model for what AIs might request, corporations themselves could offer vehicles for AIs to act as autonomous persons: a corporation controlled exclusively by an AI could serve as a ‘sock puppet’, offering the forms of legal personhood that corporations are already granted. Boyle’s discussion of corporate personhood focuses less on why this might be a problematic scenario and instead digs back into the story of how companies came to be granted these rights in the first place. Central to this is the case study he presents of how the 14th Amendment to the US Constitution – made to guarantee equal treatment of previously enslaved people – was co-opted and invoked in numerous legal proceedings that sought instead to grant corporations personhood. The story reveals a tangle of what seems to be very poorly supported historical accident at best, and at worst outright conspiracy. Artificial persons have already been granted legal rights in the United States, and the lack of genuine debate that preceded this is quite alarming given the implications of the decision. One lesson here is simply that we are not on firm ground, and that there are already problems built into the systems we might have hoped would help us navigate this new territory.

Two further chapters turn to debates around the definition of sentience and rights in non-human animals, and in transgenic and hybrid organisms. Boyle considers the tangled way in which different sets of values become connected to one another. In his discussion of chimeric and transgenic beings, and of the technologies that make them possible, such as genetic engineering and stem cell research, his analysis of what kinds of entities it is considered acceptable to create reveals that the criteria by which the line is drawn are numerous and often contradictory. In many cases these values come to influence judgements about novel developments through recourse to what is often experienced as ‘intuition’ about what is natural. Popular discourse may have more influence on decisions about where lines will be drawn than ethicists expect. Underlying political allegiances, predispositions toward particular interpretations of what is ‘natural’, and moral influences will likely affect how new technologies are interpreted. Boyle also points to how quickly these allegiances may shift: anti-vaccination sentiment, for example, has switched from one side of the American political spectrum to the other, and evangelical support for anti-abortion legislation has surged, when not so long ago state interventions interfering with the sanctity of the family’s private hierarchy and relationship with God would have been opposed.

Boyle’s book is, by his own admission, limited in scope to the cultural, political, and legal context of the United States and other countries closely affected by developments there. This is a story too complex to be told globally in one text, but a similar study written about the Chinese context would be a welcome complement, and likely fascinating. In addition to offering many useful ideas on the problem posed by ‘the line’, this book offers a great deal as a case study of the impossibility of firmly predicting the development of complex technologies. It is the kinds of lessons Boyle shares with us that make the claims of AI doom-mongers, salesmen, and accelerationists alike sound so confused. A long view of how technology and culture construct one another, and attention to social, legal, and historical as well as technological precedents and contexts, can arm us with a perspective on what is happening while it is happening. These dimensions contain insights that can help us navigate, perhaps even steer, events as they unfold. Rather than offering predictions, Boyle draws attention to the expectations and counterintuitive developments that are likely to wrongfoot us; he offers perspectives which both orient our attention in this developing story and help us to see past the surface of events. I am happy to confirm that, as far as I can tell, The Line is the result of processes with computational depth.

References

Wolfram, S. (2023) What Is ChatGPT Doing … and Why Does It Work? Available at: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ (Accessed: 13 November 2024).

Turing, A.M. (1950) ‘I.—Computing Machinery and Intelligence’, Mind, LIX(236), pp. 433–460. Available at: https://doi.org/10.1093/mind/LIX.236.433.

Note

[1] For anyone not familiar with this situation, I refer readers to posts on Reddit’s r/professors forum for discussions of how faculty members are attempting to respond appropriately to AI-generated coursework, emails, and discussion points submitted by students. Incidentally, Reddit has recently made an agreement with Google to allow its data to be used to train AI systems.