TRY THIS AT HOME

A workshop exploring the potential of AI and the virtual world to help us enrich our experience of our physical worlds, especially our homes, during COVID-19. A suite of experiments and reflections will test the workshop’s approach. 

Something & Friends
Rashmi Bidasaria | Marsha Bradfield | Vinay Khare

Concept Note

This submission unfolds in three parts. We begin in PART ONE with the workshop, TRY THIS AT HOME, that Something & Friends collectively developed for an open call from the Science Gallery: ‘Has 2020 left you bored to distraction? Bored silly? Bored to tears? [… We are] calling for young people across the global network to come together and exchange ideas and collaborate on BOREDOM REBELLION.’ We outline our response to this invitation by drilling into the workshop’s theme: the reciprocal potential of AI and the virtual world to enrich our experience of our physical worlds, especially our homes, during COVID-19. The workshop’s lesson plan sketches the participants’ experience during the ninety-minute session. 

To test the workshop, Something & Friends became its first power users. Our aim was to ‘debug the programme’. This resulted in several insights, some of which are shared in PART TWO. This auto-feedback will inform the workshop as it iterates. Future iterations will include a piece of open-source curriculum, developed in the spirit of the Radical Education Workbook, with the aim of encouraging others to teach and learn about AI by mucking about.

PART THREE of our submission shares our artistic experiments, each created by a different member of Something & Friends. Taking the workshop as a creative prompt, we explore its learning gain: helping us feel more at home as we move between virtual and real space.



Process Notes

PART I: Something & Friends’ Proposal for BOREDOM REBELLION

TRY THIS AT HOME takes the format of a workshop exploring two primary themes: ‘Inside-Outside’ and ‘Reality-Dreams’. Participants will go on a journey to reimagine their built environment through conversations between themselves, AI and the workshop’s facilitators. Creating poem-like texts, artworks and fantastical landscapes will serve as a means of reinterpreting their spaces and keeping them engaged.
Technical requirements: a computer or laptop, internet access, a microphone/speaker (built-in or external) and a digital camera (smartphone, tablet, webcam, etc.). The workshop is entirely web-based to avoid the challenges that downloading software can entail.
We will convene the workshop through an online video conferencing platform like Zoom (the specifics will depend on the Science Gallery’s preference). The platform’s chat function will serve as a channel where participants and facilitators can exchange questions, files and other resources. A Miro board will serve as a kind of virtual vitrine or gallery where we display the images.
Audience: With three facilitators, we can support 15 to 20 participants. We anticipate they will be between 15 and 25 years of age. Participants do not need special technical skills; an interest in AI or a drive to explore technology, systems, processes, perceptions or the future of art will suffice. The workshop will be conducted in English unless the Science Gallery can provide translation. Similarly, we welcome the opportunity to support disabled participants or to include those without the required kit, but we would need help to offer this.
Format/Experience: We anticipate our 90-minute workshop will unfold over the following steps. However, these may well change in response to feedback from our mentor. (The experiments outlined here are indicative. It could be that two will suffice, depending on the participants’ responsiveness.)

  1. We will begin by briefly introducing the project and the facilitators. A quick icebreaker will set the tone. We aim to establish the workshop as a safe space where participants can experiment and learn from each other. This discussion of a safe online/discursive space will segue into the workshop’s interest in being ‘safe at home’ or ‘sheltering in place’ during the pandemic, and how AI may support us to be embodied in the domestic sphere but transported beyond its confines.
  2. Our first visual experiment begins with participants taking a photo of their immediate context: their desk, bedroom, kitchen, etc. (The view should be chosen appropriately.) Our bespoke website will both capture the image and use an AI model (DenseCap) to analyse it. This will result in an AI-generated list of the space’s contents, with an indicative outline appearing in the image where each listed item is believed to be. The degree to which the list is accurate or inaccurate throws up fascinating questions about the perception of humans and machines. (A code sketch of this step follows this list.)
  3. The second part of this experiment shifts our view from inside to outside. Participants are invited to take a picture from a window and repeat the AI analysis described above. (We will provide images that simulate the experience of looking out if no windows are available to the participants.)
  4. At this point, we will move to reflecting on our experiment. Participants will use Zoom to send their images to the facilitators, who will upload them onto a Miro board. This display will be visible to everyone in the workshop. We will take ten to fifteen minutes to discuss the outcomes, comparing and contrasting the two images to explore their significance.
  5. In the second section, participants will experiment with different energies and emotions through their photographs. Using a style-transfer tool (DeepAI’s Neural Style API), they will restyle their ‘everyday image’ as a painting by an artist of their own choice, bringing out specific moods and feelings and shifting their perspective on their spaces.
  6. The final experience encourages participants to reimagine their surroundings (as depicted in the photos taken from their windows). With the help of GAN-based models (GauGAN and pix2pix), participants are guided through recreating their view with fantastical styles and elements.
  7. The group will reconvene for a final time. We will again use the Miro board to share the images. This discussion will focus on their content and mood. And we will ask ourselves: How does our perception of our actual space, specifically our domestic sphere, change as a result of our analysis and augmented representations of this world? Ideally these experiments will encourage participants to continue experimenting with AI. If more budget is available, we could do a follow-up session.
  8. We will conclude the workshop by asking for feedback. We will also confirm that participants are happy to share their images with the public, whether on the Miro board or through research publications.
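
To give a flavour of the plumbing behind step 2, below is a minimal sketch in JavaScript of how the bespoke website might send a participant’s photo to DeepAI’s DenseCap model. It assumes DeepAI’s generic REST pattern (an image POSTed to an api.deepai.org endpoint with an api-key header); the endpoint path, field names and response shape are assumptions to be checked against DeepAI’s current documentation.

```javascript
// Hypothetical sketch: send a participant's photo to DeepAI's DenseCap model
// and list what the AI believes is in the space. Endpoint, parameter names
// and response fields are assumptions, not confirmed API details.

const input = document.querySelector('#photo-input'); // <input type="file" accept="image/*" capture>

async function describeSpace(file) {
  const form = new FormData();
  form.append('image', file);

  const response = await fetch('https://api.deepai.org/api/densecap', {
    method: 'POST',
    headers: { 'api-key': 'YOUR_DEEPAI_KEY' }, // placeholder key
    body: form,
  });
  const result = await response.json();

  // Assumed response shape: a list of captions with bounding boxes,
  // which the workshop website would draw over the photo as indicative outlines.
  const captions = (result.output && result.output.captions) || [];
  captions.forEach((c) => console.log(c.caption, c.bounding_box));
  return captions;
}

input.addEventListener('change', () => describeSpace(input.files[0]));
```

In the workshop itself, the returned captions would be rendered over the photo as indicative outlines rather than logged to the console.
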
PART II: Insights from the Workshop Mock-up
Reflections by Rashmi Bidasaria

Being ‘locked down’ in the same space with the family comes with its benefits and challenges, most obviously because of the intergenerational gaps among the six of us. I mocked up TRY THIS AT HOME for my sister (25), my mother (54) and my grandmother (80).

The users were in awe of ‘how well the machine understood them’ – “How can the computer tell I’m wearing an orange shirt?” one of them exclaimed.

The youngest enjoyed the Style Transfer – primarily because of the ease of the platform. With the click of a button, you were transported into a Matisse.

My mother and my grandmother envisioned similar dreams for their reality (an image of the home garden) in GauGAN: mountains at the back of our home garden, with a stream passing by the concrete jungle that we live in.

It was interesting to see that the same built environment had different meanings for each of them in reality but also very similar aspirations of where they could be transported to in their dreams.

Crossing the home garden after their encounter with GauGAN, my mother and grandmother surely have sweet flashes of the memories they created together with it – a larger-than-life, full-of-life dream that they dreamed.

Reflections by Marsha Bradfield

I work in bed for several hours first thing every morning. There’s a large window to my right that lets in a lot of light. On sunny days I know it’s time to move when I can’t see my screen for the glare. Wednesday was a case in point. It’s that time in autumn when there are more leaves on the ground than on the trees. Those that linger use the extra space to move in the wind, often with a last gasp of vigour. This can play out dramatically on the wall to my left – a dance of shadows to the soundtrack of seasonal gusting. (See Fig. 5 in 9-2-5.)

I’ve spent more than a decade in this flat but had a special appreciation for its light show this midweek past. I put this down to my experiments the night before. I’d pushed several nocturnal views through GauGAN to see what it could do (see Fig. 6 for an example). Excepting devotees of cubism, the results were visually illiterate without being intriguing, prompting me to wonder if the programme had a bias for daylight vistas. For sure, that’s what featured in the online video demos that I’d watched. [1] I expect to learn more about GauGAN’s foibles through recursive use over weeks or months. What I hadn’t expected was how this AI would reach off the screen, beyond GauGAN’s clunky editing suite. It has sensitised me to the infinitely more subtle expression of day and night and the seasons within my flat, how they find form as the outside bleeds in.

The shadow dance also got me thinking about the screens erected to prevent the spread of COVID-19 and how, in addition to being safety barriers, they may also be frames, reflectors or magnifying glasses that enable us to see our surroundings in new and unexpected ways. (See Fig. 6 in 9-2-5.)

My second observation relates to the style transfer technology of the Neural Style API (see Fig. 3). In the spirit of TRY THIS AT HOME, I liked the idea of using female artists and their work to transform my domestic space. In my first experiment I applied ‘Helen Frankenthaler and her paintings’ to my bedroom. The effect was deeply disorientating and, at the same time, much too personal to share here. This room no longer belonged to me, and I struggled to imagine the person who called it theirs. I thought of Emma in Madame Bovary and Odette de Crécy in In Search of Lost Time, with Perec nodding towards the latter in the epigraph. (See Fig. 7 in 9-2-5.)

I know GauGAN touts itself as the decorators’/architects’/landscapers’ dream app, but in many ways the Neural Style API is more immediate and certainly more total. In my case, at least in the first instance, it helped me to think about what signifies my personal space.
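
For anyone who wants to try this restyling at home, the call behind it might look something like the sketch below. It assumes DeepAI’s generic REST pattern for its Neural Style API – a ‘content’ image (the room) and a ‘style’ image (the artwork) – and these parameter names, the endpoint path and the response field are assumptions to verify against DeepAI’s documentation.

```javascript
// Hypothetical sketch: restyle a photo of a room with an artwork of the
// participant's choosing. Endpoint, parameter names and response fields
// are assumptions, not confirmed API details.

async function restyleRoom(roomFile, artworkUrl) {
  const form = new FormData();
  form.append('content', roomFile);  // photo of the domestic space
  form.append('style', artworkUrl);  // URL of the painting supplying the style

  const response = await fetch('https://api.deepai.org/api/neural-style', {
    method: 'POST',
    headers: { 'api-key': 'YOUR_DEEPAI_KEY' }, // placeholder key
    body: form,
  });
  const result = await response.json();
  return result.output_url; // assumed: URL of the restyled image
}
```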

I turned to a corner of my reception room and used it instead to host female artists and their work. Below are three such experiments. The Kusama is striking in its failure to capture her signature spots. I think there is potential here to cross-breed artists and their work. This gives new meaning that departs from that expressed in Georges Perec’s iconic text, Species of Spaces and Other Pieces (see Fig. 7 in 9-2-5). I will experiment with this recursively so that the image is part personal, part Kusama, part Parker (see Figs. 8–10 in 9-2-5).

In some ways, the Parker experiment is the most rewarding (see Fig. 10 in 9-2-5). Placed side by side with the image of my reception room, Parker takes on the shape or outline of the chair. This reference is doubled as she stands in front of the Speaker’s chair in the House of Commons, and it is the Speaker’s job to chair proceedings of Parliament. To get conceptually virtual, Parker as Speaker in the UK’s House of Commons references Nancy Pelosi as Speaker in the US’s House of Representatives. It could be that this dreaming is primed by current events: at the time of writing, if the results of the US election were not settled between now and the inauguration on 20 January 2021, Pelosi could become acting President.

[1] See, for instance, NVIDIA, ‘GauGAN: Changing Sketches into Photorealistic Masterpieces,’ YouTube video, 1:59, 18 March 2019, https://www.youtube.com/watch?v=p5U4NgVGAwg&t=40s (accessed 15 November 2020).

PART III: Artwork Experiments 

Bibliography, References and Tech Stack

‘Boredom Rebellion Youth Symposium,’ the Science Gallery, https://sciencegallery.org/opencall/boredom-rebellion-youth-symposium-2021 (accessed 15 November 2020). 

‘Bruce Nauman,’ Tate Modern, https://www.tate.org.uk/whats-on/tate-modern/exhibition/bruce-nauman (accessed 15 November 2020).   

‘Department of Health and Social Security 1978-79,’ Context is Half the Work, https://en.contextishalfthework.net/exhibition-archive/department-of-health-and-social-security-1978-1979/ (accessed 15 November 2020). 

‘Draw This,’ Dan Macnish, https://danmacnish.com/drawthis/ (accessed 15 November 2020).

‘Let’s Read A Story: talking to children’s books using semantic similarity,’ Medium, https://medium.com/ml5js/lets-read-a-story-talking-to-books-using-semantic-similarity-f283168b4264 (accessed 15 November 2020).

‘Lo-Fi Player’, Magenta, https://magenta.tensorflow.org/lofi-player  (accessed 15 November 2020). 

Majewska, Ewa and Kuba Szreder, ‘So Far, So Good: Contemporary Fascism, Weak Resistance, and Postartistic Practices in Today’s Poland,’ e-flux, #76, October 2016, https://www.e-flux.com/journal/76/71467/so-far-so-good-contemporary-fascism-weak-resistance-and-postartistic-practices-in-today-s-poland/ (accessed 15 November 2020).

‘The Next Rembrandt,’ Microsoft,  https://news.microsoft.com/europe/features/next-rembrandt/ (accessed 15 November 2020).

The University of Chicago, ‘Alison Knowles: Fluxus Event Scores,’ YouTube video, 02:57, 28 March 2020, https://www.youtube.com/watch?v=064qvwX_-kA (accessed 15 November 2020).

Perec, Georges. Species of Spaces and Other Pieces (London: Penguin, 1998), 16.

‘Scribbling Speech’, Experiments with Google, https://experiments.withgoogle.com/scribbling-speech (accessed 15 November 2020). 

Zepeda, Lydia and David Deal, ‘Think before you eat: photographic food diaries as intervention tools to change dietary decision making and attitudes,’ International Journal of Consumer Studies 32, no. 6 (2008): 692-698.

  1. DeepAI
    1. DenseCap API
    2. NeuralStyle API
  2. GauGAN
  3. JavaScript
  4. HTML5

