Give Me a Sign

Give Me a Sign is a prototype for an interactive storytelling tool that uses Machine Learning and hand gestures from ancient Indian dance practices to trigger urgent and important conversations about multiplicity, the climate crisis and the Anthropocene.

Shayekh Mohammad Arif | Diane Edwards | Upasana Nattoji Roy


Concept Note

We are the stories that we tell ourselves, and careful thought must be given to what data we feed the machine as its intelligence grows. This is a future-language project that uses Machine Learning, human gestures and indigenous wisdom.

Traditional and folk art forms have always been in sync with the geographies they originate from; they are inherently attuned to, and in celebration of, nature and the human condition within it. These indigenous knowledges tell alternative stories and philosophies of our relations with the natural world, which, in a time of climate crisis and environmental mutation, may prove beneficial in re-establishing a philosophy of care towards the biosphere.

Through the advent of globalisation and globally networked communication, many languages and cultures are being lost: at least 43% of the languages estimated to be spoken across the world are on the brink of extinction. Linguistic AI technologies are being trained predominantly in English, adding to the all-encompassing Anglicisation of digital communication. We must be careful about what we feed the machine in order to create a more inclusive infrastructure that does not homogenise societies.

Give Me a Sign sits between these two interwoven conceptual threads, utilising the deeper meanings associated with six mudras from the Natya Shastra:

Mayura: The physical practice of the Mayura/prithvi mudra is like practising to be able to live on earth.

Mookoola: This sign is used to communicate a bud, eating, and the navel. All three meanings relate to a beginning, or something done for life to occur.

Bhramara: The word means "bee". Bhramara is the Air Element.

Matsya: Life began in water. Matsya is the Water Element.

Sarpa: The literal translation is "the hood of a snake". Sarpa is the Land Element.

Shikhara: The literal translation is "peak". Shikhara is the Human Element.



Our p5.js code can be seen here: https://github.com/RIMa-K/Give-Me-a-Sign

Process Notes

First, we aligned our Miro boards in the context of the provocation (Care, Nurture, Communication) and discussed what resonated deeply with each of us and was relevant to our individual practices. Each of us has a deep concern about what is being "fed" to the machine, and our core context distilled to Care and Communication.

We created an image-classification model using Google's Teachable Machine, training it on the six mudra hand gestures. The model was then imported into a p5.js script written by Diane and used to trigger .gifs designed by Upasana, sound produced by Shayekh, and the background colour; a minimal sketch of this pipeline follows.
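Teachable Machine's p5.js export wraps the trained model in ml5.js's imageClassifier. The following is a minimal sketch along those lines, not our exact script: the model URL is a placeholder, and the labels are assumed to match the six mudra names above.

let classifier;           // the Teachable Machine model, wrapped by ml5.js
let video;                // webcam feed
let label = "waiting..."; // most recent top prediction

// Placeholder: the shareable URL Teachable Machine provides on export
const modelURL = "https://teachablemachine.withgoogle.com/models/XXXXXXX/";

function preload() {
  classifier = ml5.imageClassifier(modelURL + "model.json");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  classifyVideo(); // start the classify-one-frame loop
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label; // results arrive sorted by confidence
  classifyVideo();          // classify the next frame
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(24);
  text(label, 10, height - 10); // e.g. "Mayura", "Matsya", ...
}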

Each mudra has a literal meaning and a deeper list of meanings, visual notes, colours and sounds that can be associated with it. We researched indigenous knowledge bases for correlated stories and extracted deeper textual quotes for each mudra. These were then interpreted and designed as sets of layers in After Effects: text formation with the right pauses, mudra compositions and colour, and background colour coding. These were exported as .gifs and uploaded to our p5.js script, where a simple lookup (sketched below) ties each classified label to its assets.
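As an illustration of the trigger step, the lookup can be as simple as the table below. The file names and hex colours are hypothetical placeholders rather than the project's actual After Effects exports, and showMudra() is assumed to be called with the top label from the classifier above.

// Hypothetical label-to-asset table; one entry per trained mudra
const assets = {
  Mayura:   { gifFile: "mayura.gif",   soundFile: "mayura.mp3",   bg: "#3a5a40" },
  Mookoola: { gifFile: "mookoola.gif", soundFile: "mookoola.mp3", bg: "#d4a373" },
  Bhramara: { gifFile: "bhramara.gif", soundFile: "bhramara.mp3", bg: "#a3b18a" },
  Matsya:   { gifFile: "matsya.gif",   soundFile: "matsya.mp3",   bg: "#1d3557" },
  Sarpa:    { gifFile: "sarpa.gif",    soundFile: "sarpa.mp3",    bg: "#606c38" },
  Shikhara: { gifFile: "shikhara.gif", soundFile: "shikhara.mp3", bg: "#7f5539" },
};

let current = null; // the mudra currently on screen

function preload() {
  // p5.js plays animated .gifs via loadImage; audio needs the p5.sound addon
  for (const name in assets) {
    assets[name].gif = loadImage(assets[name].gifFile);
    assets[name].sound = loadSound(assets[name].soundFile);
  }
}

function setup() {
  createCanvas(640, 480);
}

// Called with the top label from the classifier (see previous sketch)
function showMudra(label) {
  const entry = assets[label];
  if (!entry || entry === current) return; // unknown or unchanged gesture
  if (current) current.sound.stop();
  current = entry;
  current.sound.play();
}

function draw() {
  if (current) {
    background(current.bg);                  // background colour coding
    image(current.gif, 0, 0, width, height); // the After Effects export
  }
}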

Future Developments

This project is at a very early stage of development, but we believe it has the potential to grow into quite a powerful piece of work.

We would like to share the project with instructions for use and implementation, and to ask users to share their Teachable Machine models so that we can grow our master model, making it more universal and robust for use in exhibitions and online.

Another strand of development is a repository of other hand mudras and indigenous gestures, which would give information on their expanded meanings and interpretations.

The project also has the potential to be developed into an immersive art installation or a VR experience using Leap Motion or VR hand tracking. More imagery, video and sound could be triggered, creating a rich tapestry of the deep meanings behind the ancient gestures, and using them to tell stories of contemporary environmental and societal concerns. A browser-based sketch of one possible hand-tracking approach follows.
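As one illustration of browser-based hand tracking (our suggestion here, not a tool the project has adopted), ml5.js v1 ships a handPose model built on MediaPipe Hands. Its 21 keypoints per hand could feed a custom mudra classifier in place of the whole-image model:

let handPose;   // ml5's MediaPipe-based hand keypoint model
let video;
let hands = []; // latest detections: 21 keypoints per hand

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // run detection continuously, updating `hands` with each result
  handPose.detectStart(video, (results) => { hands = results; });
}

function draw() {
  image(video, 0, 0);
  // draw each tracked keypoint; a gesture classifier could instead
  // compare these coordinates against stored mudra poses
  for (const hand of hands) {
    for (const kp of hand.keypoints) {
      fill(0, 255, 0);
      noStroke();
      circle(kp.x, kp.y, 8);
    }
  }
}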

Bibliography, References and Tech Stack

Natyashastra – Asamyukta Hasta (physical book) and https://www.sahapedia.org/mudra-various-aspects-and-dimensions-0

Bhramara Mudra | Gesture Of The Bee | Steps | Benefits: https://7pranayama.com/bhramara-mudra-steps-benefits/

Protecting Endangered Languages Through Technology and Digital Tools: https://interestingengineering.com/protecting-endangered-languages-through-technology-and-digital-tools

AI is translating messages of long-lost languages: https://bigthink.com/technology-innovation/a-i-is-translating-messages-of-long-lost-languages?rebelltitem=1#rebelltitem1

The Nooscope Manifested: https://nooscope.ai/

Decolonizing the Internet: https://www.goethe.de/prj/lat/en/dis/21753740.html  & https://study.soas.ac.uk/decolonising-the-internet-whose-knowledge-is-it/

Indian Indigenous Concepts and Perspectives: Developments and Future Possibilities, S. K. Kiran Kumar: https://ipi.org.in/texts/kirankumar/kk-indian-indigenous-concepts.pdf

Indigenous Peoples and Climate Change: Emerging Research on Traditional Knowledge and Livelihoods: https://www.ilo.org/wcmsp5/groups/public/---ed_protect/---protrav/---ilo_aids/documents/publication/wcms_686780.pdf

Indian Classical Dance Action Identification and Classification with Convolutional Neural Networks: http://downloads.hindawi.com/journals/am/2018/5141402.pdf

Making Kin with the Machines: https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite/release/1

Alternative hand gesture recognition tools for future development:

Tech Stack

Google Teachable Machine – image classification model training
p5.js – browser sketch and interaction
Adobe After Effects – .gif design and export
GitHub – code hosting


