Sharing a world with AI

A Collaborative Manifesto

An Essay by Padmini Ray Murray

Artists have always experimented with new technologies and materials to both reflect upon and express their interpretations of the world we live in, and AI is no exception. However, the engine that drives AI is fuelled by a maximalist and extractive consumption of resources, exacerbating existing precarity and leaving creators in a baffling double bind: is it possible to embrace AI ethically? How does one challenge the mythologisation of AI without rejecting it outright? When we brought together a group of forty artists, lawyers, technologists, and people interested in the future they will inevitably share with AI, the conversation ranged from confusion to concern to endorsement. For artists, there are obvious anxieties about how their labour might be exploited in an environment that does not yet allow for the possibility of provenance or ownership, nor for them to benefit from the use of their work when it is reinvented by the alchemy of the algorithm.

There was a deep awareness that these necessary conversations are only being held in some rooms with select groups of individuals, and even in those contexts, there is little consensus on what is meant when we talk about AI. Intelligence, as imagined by the rhetoric of AI, places its faith in a rational understanding of the world, one which assumes that humans are quantifiable as data, while in reality their lived experiences far exceed the limits set by the dataset. The magnitude of the datasets used to train AI can obscure the reality of what it means to be human, unless they are constituted with intention and care. Datasets are also at risk of being corrupted by misclassification and misrepresentation, as well as by their omissions, for accurate representation is impossible to achieve at scale.
It is still difficult to ensure consent and consensus with regard to inclusion in these datasets without absolute transparency. Despite incessant conversations about responsible AI, there is still considerable confusion about who exactly can be held responsible, and this inevitably raises questions about who will enforce responsible use. Without uniform guardrails for regulation and legislation, these attempts are bound to fail in a globally networked world. The bias of hegemonic perceptions around caste, class, race, and gender shapes the very parameters used for data collection and permeates its processing by AI techniques.
Representing the world, which is at the heart of artistic practice, becomes increasingly fraught but also profoundly crucial, as a way to challenge the homogeneity fostered by seeing the world through the eyes of machines and of those who programmed them. What the machine cannot see is what it has not been shown: its lack of actual sentience means that it will only know to look for what it has already been exposed to. This also means, however, that it is susceptible to what has been anthropomorphised as “hallucinations”: producing fictive outcomes that present themselves as authentic.
Artists, then, are faced not only with the daunting challenge of creating worlds that stand in contradiction to these hallucinations, but with persuading us of their authenticity in opposition to machine-manufactured visions grounded in claims of rational exactitude. For artists whose social role is often defined by their desire to speak truth to power, this proliferation of competing assertions of truth makes the task complex and difficult. However, what we heard and understood
during the festival where these conversations took place is that artists see themselves as most productive and persuasive when prising open the gaps created by bias, rhetoric, and already existing structural inequities. By showing the different ways in which AI can be used as a tool to inhabit these interstices, they challenge dominant narratives by reengineering the technology’s assumptions. Creators working in this space are conscious that they are responsible for helping to subvert and challenge the narrative currently being driven by technology corporations: by making ethical, nuanced choices about these tools, and by pushing back on the inevitability of AI becoming the definitive artistic vocabulary of the future. This essay was also informed by conversations at a panel discussion, “AI Art: A Marriage of Heaven and Hell”, featuring Dani Admiss, Jake Elwes, Bruce Gilchrist and Vishal Kumaraswamy.

An Essay by Michiel Baas

How does one live with AI? How does one share a planet with a technology that is omnipresent but whose very existence immediately provokes anger and fear? This Manifesto is intended to provide guidance here. It offers caution but also hope; it suggests ways to deal with AI’s presence in our lives while remaining critical of ongoing developments. Most of all, it builds upon the very reality that we now exist with AI. This is not merely the AI that dominates headlines through its application in text- and image-generating tools such as ChatGPT, but also its usage in a host of other technologies that receive far less attention. This is precisely where our Manifesto enters the conversation: AI is more than meets the eye. For every possibility it engenders it also holds a threat, something to be cautious about. It may facilitate new forms of creativity but may also result in job loss. It can make information more accessible, but it can also be utilized to disseminate misinformation and assist manipulation.

Healthcare might be revolutionized by it, but the risk of ethnic, racial or caste bias is real. While these issues are well documented, popular media has a tendency to zoom in on the sensational. AI is not sensational; in its ubiquitousness it is the opposite: it is common, mundane and therefore often ill thought through. The risk of AI is not a dystopic future but the influence it has on lifeworlds all across the world, on an everyday basis.

It is therefore paramount to Stay Alert & Critical. We need to remain vigilant of the way multinationals with deep pockets dominate the narrative and developments. Nuance is key here. The discussion about AI should not be guided by the polar opposites of its destructive or problem-solving capacities. We all share a responsibility here.

It is also instrumental that we carefully Consider Representation. In its representation of us, AI builds upon datasets that are biased and subjective. How can we prevent it from replicating structural inequalities and thus entrenching existing hegemonies? This comes with the realization that almost all human activity now results in data, data that human hands will curate, label and utilize by means of algorithms that make selections and reductions.

This leads us to caution that we need to Advocate Wisely and therefore develop a meaningful understanding of what AI can and cannot do. In this, the possibility of harm needs to be foregrounded. AI’s regulation should be attuned to this, but the process must be a democratic one. Beyond experts in the technical dimensions of AI, it should seek the guidance of the communities it aims to assist or represent.

Innovate Responsibly must therefore be something that the industry takes seriously. AI’s potential to transform the fields of medicine or education, or even assist in combating climate change, is promising, but it should not divest people and communities of their agency.
We should always ask: who is driving the innovation, and who stands to benefit from it? This comes with the need to Demand Justice for AI’s harmful (side) effects. The technology’s massive reliance on planetary resources is one of these, but issues of intellectual property, privacy and misinformation are crucial here too. How do we prevent this technology from being applied in tools that may cause aggression, oppression or even physical violence? Finally, our Manifesto urges that AI be Reimagined Carefully, so that it responds to these concerns without trapping itself in a narrative of anger and fear. AI is here to stay, and it offers immense opportunities, but the threats that come with it are a very real concern too. We all have a stake in this!