
Review of Across and Beyond: A Transmediale Reader on Post-Digital Practices, Concepts, and Institutions

Across and Beyond: A Transmediale Reader on Post-Digital Practices, Concepts, and Institutions
edited by Ryan Bishop, Kristoffer Gansing, Jussi Parikka and Elvia Wilk

Sternberg Press, Berlin, Germany, 2017
352 pp., illus. 52 b/w. Paper, €15.00
ISBN: 978-3-95679-289-2

June 2017

Across & Beyond is a collection of theoretical articles and artistic projects stemming from the transmediale festival. As the title suggests, it relates mainly to the topic of the post-digital, but it exceeds both that topic and the activities of the festival itself.

The ensemble of articles is a relevant contribution to the current discussions unfolding in the blurry terrain of media theory and post-digital practices. In their introduction, the editors define the post-digital, together with the post-internet, as terms that “are associated with an artistic engagement with technology that is not necessarily preoccupied with the digital as such, but with life after and in the digital, working across old and new, digital and analog” (p. 011). In fact, several different concepts of what the post-digital suggests coexist in the book; at the same time, a series of recurring topics can be identified, among them: time and non-linearity, the use of language to create reality and/or its perception, materiality, infrastructures, the posthuman, and Claire Bishop’s by now notorious ‘digital divide’ [1].

The contributions are organised along three main axes: Imaginaries, Interventions, and Ecologies. Through a media-archaeological and genealogical methodology, the works in the Imaginaries section tackle questions such as “what are media, when are media, and how they mediate the production of reality? What is the relationship between speculation and design? Can alternative realities really be conjured into being—or is imagination itself a product of cultural, historical, and medialogical context?” (p. 026). In Interventions, “collective, political, and activist uses of technology are foregrounded” (p. 148), and the relevance of interventionist creative practices is discussed. The Ecologies section, finally, groups a series of contributions that address the ecologies of infrastructure, examining “the ethics of how media materialize and interact—with each other, with humans and nonhumans” (p. 250).

In the Imaginaries section, Dieter Daniels summarises the history of “what has come to be labelled media art” (p. 048), starting from its (apparent) crisis in the previous decade in order to ask how media art should be defined today. In parallel, he analyses its processes of institutionalisation (1960s/70s) and the growing connection between media art and media theory (1980s). Having quoted Bishop’s digital divide argument on the first page of the article in order to dismiss its grounding, the author nevertheless returns at the end to the topic of “a cultural separation between ‘high art’ and media innovation” (p. 057), rightly wishing for an integration of media art research institutions and art history, so that a comprehensive cultural theory can be developed that addresses the complexity of the field from different angles.

Instead of considering the imaginary within psychological or sociological frameworks, as a sort of tool for modelling reality, Jussi Parikka proposes to follow Michel Foucault and consider the imaginary as a technique (p. 076). In this way, the author is able to explain how, in the contemporary imaginary, the laboratory takes over from the studio as a located space of knowledge creation. The question he poses is: “how do we engage in practices of speculation in media and design labs, which are contemporary places of recreation, imagination, technological practice and activism?” (p. 077). The two main axes explored in relation to the lab are thus time and space: the lab considered as a situated place of research, and a speculative dimension characteristic of lab practice, that of inventing the past (p. 078). Parikka proposes to consider the lab as an alternative location of the imaginary, switching speculative practices from the future to the past (p. 081). The idea of a speculative past has undeniable power, linked as it is to a politics of time within post-digital culture (p. 078).

Florian Cramer’s article addresses how the separation between art and technology came about (in this text, in fact, Claire Bishop’s name is part of the title; p. 122), and how it was discussed in the first place in the context of media theory (p. 123). Cramer’s point is that it is not contemporary mainstream art that has to catch up with technology (the determinist perspective according to which technological progress comes first and influences all the other spheres of culture) but, on the contrary, that technology has been feeding on previous artistic strategies, which in many cases anticipated what was about to happen in the tech industry. A key point in the author’s argument is his analysis of how, in the tech industry, words create their own reality (pp. 126-128): how metaphors coined to name certain technologies begin to be taken literally, or, as in the case of artificial intelligence, how “research coined its outcome as science fiction and now tries to prove the truth of this fiction” (p. 127). A few pages later he asserts that most tech companies, and also the modern and contemporary art markets, “run on fictions” (p. 129). Regarding the art market, he exemplifies these fictions with the theorisation and re-branding of certain artistic movements, among them Minimal Art, Neo-Geo, and Zombie Formalism (p. 129). While it is easy to agree that the contemporary art market is flooded with speculative investment, this speculation hardly relies on storytelling, which he convincingly demonstrates is the case for tech companies. In the art market, real speculation results from a triangulation between blue-chip galleries, collectors, and auction houses, and not much narrative is needed to make this system work. In any case, Cramer’s analysis and methodology appear to offer a suitable path for leaving the ‘digital divide’ discussion behind once and for all in the near future; for as much as the original article has been criticised and deconstructed in different publications, including this one, since its first appearance [2], apparently we still need to go on doing so.

The relevance of language in the construction of reality, or at least in the user’s perception of interfaces and computers, is also at the centre of Olia Lialina’s brilliant contribution (p. 136). The author proposes that the use of the word “technology” as a synecdoche for “digital technology,” “computer technology,” or “programmable system” is a way of ‘sedating’ the user (pp. 137-138), of making her, with each “vocabulary update,” less aware of what she is actually interacting with. It is, in other words, a way of making programmable systems transparent. In this way, Lialina maintains, the computer dissolves into other technologies (p. 138). This analysis of the use of language is ultimately related to the main question the article poses: what should the agenda be when teaching media artists and designers? The answer is lucid and clear: “To the agendas I have mentioned before, which include empowering students to change the invisible computing paradigm and refusing the ‘opportunity’ of Art&Tech, let me add another one: To make time to formulate questions that cannot be answered by monopolies or by observing monopolies” (p. 145).

In the Interventions section, Daphne Dragona argues that current artistic strategies and methods of subversion might be ‘soft’ (p. 184), in opposition to “harder” or more direct approaches. In this context, she proposes Obfuscation, Overidentification, and Estrangement as possible soft strategies of artistic subversion to be explored and practiced with students and workshop participants.

Tiziana Terranova addresses the relationship between algorithms and capital, and the possibilities for emancipating networked digital media from capital (p. 202). The author proposes the concept of the common as a way to overcome dichotomous oppositions such as state/market or public/private (p. 203), stating: “the concept of the common is used here as way to instigate the thought and practice of a possible postcapitalist mode of existence for social cooperation in networked digital media” (p. 203). Terranova’s point is that algorithms are not only instruments of capital but are simultaneously constructing future possibilities for postcapitalist modes of government and production (p. 216). In this context, networked digital technologies and communicational competences can help build an alternative model based on cooperation, which will also, hopefully, enable the “production of new knowledge and values” (p. 217) coherent with this alternative model.

Cornelia Sollfrank revisits some cyberfeminist conceptualisations of the 1990s to consider their currency in the twenty-first century. Her conclusion is that rethinking gender relations is still relevant today, and possibly more urgent than ever. In addition, she argues that the most pressing update contemporary techno-feminism now needs is “to open up and include rethinking technology in terms of its dependency on capitalist logic” (p. 245).

In the Ecologies section, Benjamin H. Bratton explores the relationship between humans and conversational bots. Considering that conversational bots such as Siri, Alexa, or Cortana are becoming the interface between humans and most informational systems, and that they are at the same time “learning” from the user, one question Bratton poses is whether users will in time learn to talk to each other as they talk to bots (p. 307). Related to this is the question of bots’ personalities: will they internalise social dynamics of domination, for instance? (p. 306). His analysis then focuses on programming and interaction as two different ways of embedding cognition in the system (pp. 308-309): according to Bratton, “from the machine perspective” interface manipulation is a way of programming (p. 309). The relevance of this assumption is that it puts the user, especially the great majority of users who are not familiar with coding, in a less passive position: “In this artificial context, speech is also a means of writing, inscribing, and trace-making in that it is also a form of (intentional or unintentional) AI programming” (p. 312). This is an important change in the consideration of the user, one that Olga Goriunova also underlines in the following, and last, article of the book. The point is that the more legible an interface is from the human perspective, the more hidden its algorithmic logic. One of the salient questions the article opens is thus why Siri (or any conversational bot) could not be more self-revealing about its own algorithmic logic, instead of mimicking a human personality ever more convincingly (p. 316).

It is also worth saying a few words about the sleek and sophisticated design of the volume, which follows a clear, readable logic and displays the artistic projects well. It must be said, though, that such an excellent publication would have benefited from more careful proofreading.

Across & Beyond is highly recommended reading for scholars and practitioners searching not only for deep critical insight into several of the main topics currently at stake in media art and theory, but also for solid and creative research on alternative paths for thinking about and practicing with technology, in a context in which capitalist logic is still dominant.


References

  1. Bishop, C. (2012) “Digital Divide: Contemporary Art and New Media,” Artforum, September, pp. 434-442.
  2. The article has been reprinted, together with a new article by Bishop on the discussion, in Cornell, L. and Halter, E., eds. (2015) Mass Effect: Art and the Internet in the Twenty-First Century. Cambridge, MA: MIT Press.