How Machines Hear + Play / Creating TechArt

Today’s lecture was about How Machines Hear and Play. I had been looking forward to this session, as the title caught my attention.

Throughout the lecture, Hasan spoke about:

  • How our representation in video and photos is broken down into pixels. We’re all just pixels on a screen!
  • Latent face.
  • How new technologies are being created to reduce bandwidth whilst video calling, as it is such a common technology in today’s world.
  • How smart TVs upscale low-bandwidth signals to 4K video.
  • Neural networks trained to play Mario video games. The computer is trained to get better, like us!
  • Machine-human learning.
  • AlphaGo by Google DeepMind, where AI beat Lee Se-dol, a human master of the board game Go.
  • MIDI model – how machines hear.
  • Generative music software to try: FoxDot, Python, Pure Data, Generative FM and Pixel Synth.
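To illustrate the MIDI point above, here is a minimal sketch of my own (not from the lecture): in MIDI, machines "hear" pitch as note numbers from 0 to 127, and each number maps to a frequency in Hz.

```python
import math

def midi_to_freq(note: int) -> float:
    """Convert a MIDI note number to its frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def freq_to_midi(freq: float) -> int:
    """Round a frequency in Hz to the nearest MIDI note number."""
    return round(69 + 12 * math.log2(freq / 440.0))

# Middle C is MIDI note 60, roughly 261.63 Hz.
print(midi_to_freq(60))   # ≈ 261.63
print(freq_to_midi(440))  # 69 (A4)
```

Tools like FoxDot and Pure Data work with exactly this kind of note-number representation under the hood.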

Today’s Exercise

Writing our initials (M, C, B) on a blank canvas to trigger musical notes using the Cou Cou software

Today I worked with Marcel and Bhargav to create a collaborative generative sound piece. We used the Cou Cou and Beat Blender machine-learning tools, and we had so much fun with both. I’d recommend finding some time to play with this software, especially if you need cheering up. It made me laugh so much.

Our collaborative piece of generative music

Santhe of Propositions

All 23 Fellows were asked to watch a video of a philosopher talking about all sorts of big ideas about life, and to respond to these ideas in the Santhe of Propositions Miro Board, ahead of tomorrow’s first CoLab Work Session with my new collaborators. I’m excited to see who I will collaborate with to create the final artwork.

The Santhe of Propositions with input from 23 Fellows
My section in the Santhe of Propositions

Dialogue Event: Creating TechArt

Again, another fascinating talk from a variety of speakers from Germany and India.

My notes from the session were:

  • At what stage does a camera become a weapon?!
  • Every picture is an empty picture.
  • We live in a world of images.
  • Drawing is an abyss of the superficial.
  • The internet is a person.
  • Tech and caste.
  • Storytelling in different formats.
  • Autonomy – relationship between humans + machines.
  • Transformer model – encoded biases (gender, class, race).
  • Machines are trained on a biased, Western-centric view of the world.
  • Can we change the neural network? What would that look like?
  • The ethics of the unpaid labour behind the neural networks in your work.
  • Humans as extensions to software – humans become plugins and interfaces.
  • AI annotation – helping Google. reCAPTCHA, e.g. “identify the traffic lights in these photos”. We are providing unpaid labour to teach Google’s machines.
  • Exploitation, labour.
  • We are the software. We are the machine. We are inside.
  • You train the machine, whilst it is training you.
  • The people hidden behind the software.
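The reCAPTCHA point above stuck with me. As a toy sketch of my own (not from the talk), here is how crowd annotations like those traffic-light clicks can become training labels: several people label the same image, and a majority vote decides the label the machine learns from.

```python
from collections import Counter

def majority_label(annotations: list[str]) -> str:
    """Return the most common label among a set of human annotations."""
    return Counter(annotations).most_common(1)[0][0]

# Three users labelled the same image tile:
clicks = ["traffic light", "traffic light", "not a traffic light"]
print(majority_label(clicks))  # traffic light
```

Every click is a tiny piece of free labelling work, aggregated into datasets that train the machine.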
