Miming Interactivity in 10 Ways | Independent Study Experiment #1

Hey reader! Welcome to episode 1 of my graduate Independent Study, a series where I conduct different experiments/studies/prototypes/installations/investigations/sketches — honestly, whatever you want to call them — in the hopes of getting one step closer to figuring out my thesis project.

In this assignment, I'm focusing on how one snippet of audio can lead to another through loose associations. For example, a song can lead to a story, which can lead to another memory, which can lead to a noise, and so on. The main challenge here is figuring out the interactivity of the piece.

Adam (my Independent Study supervisor) framed interactivity in this way:

Here is where people make a choice. How do they express their choice?

Does this experience need to be interactive, and if so, in what way? What are the moments of control?

- Do they press start?
- Do they push something to go to the next section?
- Do they pick the next snippet from options?
- Do they mimic the sound?
- Do they tell their own story before they go somewhere?


Interface-wise: Is it a website, buttons, a dial, a phone thing, a dance interface?

Adam suggested that I turn myself into the user: How would I want to experience my project and express it?

 

Here are my goals for this first assignment:

  1. Thinking about how I might build the interactive experience to select and move through sound.
  2. Thinking of 10 different ways of doing it -> Making a discovery process
  3. Miming / Pantomiming / Acting out at least two ways
  4. Building the inputs
  5. Reflecting on this experience: when was I active or passive? What was my motivation (ex: start a thing or modify a thing)?

Alas, here they are: the results of steps 1 and 2. This blog post details the 10 different interaction ideas brainstormed for this assignment (with illustrations sourced from CocoMaterial):


Interactive Way 1: Interactive Voice Response (IVR) System

Interface, technologies: Telephone, cellphone, Interactive Voice Response (IVR)

Scenario:

  • Audience members find or are given a fake phone card.
  • They call the number.
  • The number leads to an IVR system (like the ones you get when you call, for example, a major department store or bank).
  • The IVR plays back a random audio snippet, and audience members can press a number on their keypad to hear the next related snippet (see the sketch below).
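
Just to make this concrete for myself, here's a rough sketch of what the IVR flow could look like. I'm assuming Twilio's Python library and Flask (any IVR provider would do), and the snippet URLs, routes, and "related" links are all made up:

```python
# A rough sketch, not a final implementation: a Flask app serving
# Twilio IVR webhooks. Snippet URLs and the /next-snippet route are
# placeholders for illustration.
import random

from flask import Flask, request
from twilio.twiml.voice_response import Gather, VoiceResponse

app = Flask(__name__)

# Hypothetical database: each snippet points to loosely related snippets.
SNIPPETS = {
    "1": {"audio": "https://example.com/snippet1.mp3", "related": ["2", "3"]},
    "2": {"audio": "https://example.com/snippet2.mp3", "related": ["1", "3"]},
    "3": {"audio": "https://example.com/snippet3.mp3", "related": ["2"]},
}

@app.route("/answer", methods=["POST"])
def answer():
    """Called by Twilio when someone dials the number on the phone card."""
    response = VoiceResponse()
    snippet_id = random.choice(list(SNIPPETS))
    response.play(SNIPPETS[snippet_id]["audio"])
    # Let the caller press a digit to hop to a related snippet.
    gather = Gather(num_digits=1, action=f"/next-snippet?current={snippet_id}")
    gather.say("Press any key to hear a related memory.")
    response.append(gather)
    return str(response)

@app.route("/next-snippet", methods=["POST"])
def next_snippet():
    """Jump to a snippet loosely associated with the current one."""
    current = request.args.get("current", "1")
    response = VoiceResponse()
    next_id = random.choice(SNIPPETS[current]["related"])
    response.play(SNIPPETS[next_id]["audio"])
    gather = Gather(num_digits=1, action=f"/next-snippet?current={next_id}")
    gather.say("Press any key to keep going.")
    response.append(gather)
    return str(response)
```

What I like about this framing is that all the "loose association" logic lives in the related lists, so the associations could later come from anywhere: hand-curated, machine-learned, or random.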

 

Interactive Way 2: Accessing snippets through gestures

Interface, technologies: motion sensor

Scenario:

  • During the recording sessions for the database, participants have both their voice and gestures recorded using a mic and some kind of sensor.
  • Later, once the recording sessions are complete and the installation project is up, audience members will...
    • Hear a snippet 
    • Respond to the snippet with their own memory and have their gesture recorded
    • Hear another snippet based on the gestures that the audience made
    • Respond, and so on

Notes from Adam: I could use a Leap Motion device or Kinect that tracks hands. There are lots of open APIs for gesture detection. I should also look into Wekinator -> I could send data to it, and it will do the association.
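
To get a feel for the Wekinator route: the gesture data would reach it over OSC. Here's a tiny sketch using the python-osc library; Wekinator listens on port 6448 at the /wek/inputs address by default, and the hand coordinates below are stand-ins for whatever a Leap Motion or Kinect would actually report:

```python
# A tiny sketch: forwarding (made-up) hand coordinates to Wekinator
# over OSC. Wekinator listens on port 6448 and expects inputs at
# /wek/inputs by default; it handles the association on its end.
from pythonosc import udp_client

client = udp_client.SimpleUDPClient("127.0.0.1", 6448)

def send_hand_position(x: float, y: float, z: float) -> None:
    """Send one frame of gesture data to Wekinator."""
    client.send_message("/wek/inputs", [x, y, z])

# Stand-in values; a Leap Motion or Kinect SDK would supply these.
send_hand_position(0.42, 0.77, 0.13)
```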

 

Interactive Way 3: Web-based nodes

Interface, technologies: Computer connected to the Internet

Scenario:

  • The audience member goes to a website on a computer.
  • They are presented with a node-based network. Each node represents an audio snippet, and the snippets are connected based on how the conversation occurred at the time of recording, and any off-shoot snippets that were recorded after the fact.
  • The nodes could also animate in a similar way to how the application TheBrain does it.
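
As a quick thought experiment, the underlying structure here is really just a graph of snippets. A sketch using networkx, with invented node names and connections:

```python
# A thought-experiment sketch: the snippet network as a plain graph.
# Node names and edges are invented; in the real thing, the edges
# would come from how the recorded conversations actually unfolded.
import networkx as nx

graph = nx.Graph()
graph.add_edge("grandma's song", "market story")
graph.add_edge("market story", "rain sounds")
graph.add_edge("market story", "phone call home")

# Clicking a node on the website would surface its neighbours
# as the next snippets to explore.
current = "market story"
print(sorted(graph.neighbors(current)))
# -> ["grandma's song", "phone call home", "rain sounds"]
```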

 

Interactive Way 4: They respond to the audio snippets, and the AI cues up another one

Interface, technologies: Device that listens to and plays back audio

Scenario:

  • This is very similar to Interactive Way #2, but instead of accessing the next snippets through gestures, the AI/Machine Learning thingy would find similarities in the words, tone, etc. of the audience member's response and match it to an existing snippet in the database.

Notes from Adam:

  • could be machine learning
  • Hidden Markov model
  • KNN -> k-nearest neighbours (use speech-to-text, run a simple analysis like sentiment analysis, and point to the sound file with the nearest point)
  • could also be random (and have people believe that there is some kind of link when there isn't)
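
Here's a rough sketch of the KNN suggestion, assuming the audience member's response has already gone through speech-to-text and been reduced to a single sentiment score (a real version would use richer features). I'm using scikit-learn's NearestNeighbors, and the scores and filenames are invented:

```python
# A rough sketch of the KNN idea: score the (already transcribed)
# response for sentiment, then play the snippet whose stored score
# is closest. Scores and filenames are invented placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical database: one sentiment score per snippet, in [-1, 1].
snippet_files = ["lullaby.wav", "argument.wav", "laughter.wav"]
snippet_scores = np.array([[0.6], [-0.8], [0.9]])

model = NearestNeighbors(n_neighbors=1).fit(snippet_scores)

def next_snippet(response_score: float) -> str:
    """Return the snippet nearest to the listener's response in sentiment."""
    _, indices = model.kneighbors([[response_score]])
    return snippet_files[indices[0][0]]

print(next_snippet(0.85))  # -> "laughter.wav"
```

(And per Adam's last note, I could quietly swap next_snippet for random.choice and see if anyone notices a difference.)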

 

Interactive Way 5: Interacting with audio snippets using water via touch sensors

This one was inspired by something my secondary advisor, Camille, said about undersea Internet cables, and how they rarely connect directly from the North American continent to the African continent (I discuss this briefly in my Colloquium Presentation).

Interface, technologies: water, capacitive/touch-sensitive sensors, Arduino (or any other microcontroller)

Scenario:

  • I'm not exactly sure how this would work or what it would look like, but touching water (or a container with water) would trigger an audio snippet. Perhaps the audience member would have to keep their finger/hand in the water to keep the playback going, OR the playback just starts and plays once after they touch the water (I sketch the hold-to-play version below).

In one of our experiments for Creation and Computation, we used the Adafruit Capacitive Touch Sensors, and in their product video, Adafruit explains how this shield could work with salty water (or anything that's wet) as an electrically conductive or capacitive input. 
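
For my own reference, here's a sketch of the hold-to-play behaviour, assuming an MPR121 capacitive touch breakout (the same family as that shield) wired to a Raspberry Pi, with pygame handling playback. The electrode number and audio file are placeholders:

```python
# A sketch of "keep your hand in the water to keep the memory playing",
# assuming an Adafruit MPR121 capacitive touch breakout on a Raspberry
# Pi's I2C bus, with pygame for playback. Electrode 0 and the filename
# are placeholders.
import time

import adafruit_mpr121
import board
import busio
import pygame

i2c = busio.I2C(board.SCL, board.SDA)
touch = adafruit_mpr121.MPR121(i2c)

pygame.mixer.init()
snippet = pygame.mixer.Sound("snippet.wav")  # placeholder audio file

channel = None
while True:
    if touch[0].value:  # electrode 0 is the one wired to the water
        # Keep the snippet playing while the hand stays in the water.
        if channel is None or not channel.get_busy():
            channel = snippet.play()
    else:
        # Hand out of the water: cut the playback.
        if channel is not None:
            channel.stop()
            channel = None
    time.sleep(0.05)
```

For the play-once variation, I'd just drop the stop() branch and let the snippet run to the end.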

 

Interactive Way 6: Proximity sensors connected to speakers that talk to each other from across the room

I got this idea from Gerald, the Integrated Media technician at OCAD, while learning about surface transducers and discussing my project. They suggested something along the lines of audio that plays as you walk across the room, triggered by proximity sensors. What if the different audio clips talked to each other from across the room?

I haven't really developed this idea further, but it's an interesting concept nonetheless...

 

Interactive Way 7: Hold down to listen, release to respond

This one isn't super developed, but, as the title implies, the audience member would hold down some kind of button or input to hear a snippet, and then release to respond. This interaction could be coupled with some of the other ideas in this post.

 

Interactive Way 8: Bone conducting speakers hooked up to head-tracking

With the help of Gerald, the Integrated Media technician, I was able to test out surface transducers. They are essentially devices that can turn any surface into a speaker. Gerald gave a short talk during one of the lectures of the Thinking Through Making course over the summer, and they mentioned how mini transducers could be placed against the temple to give the illusion that the audio is playing in someone's mind. I was (and still am) enthralled by this idea.

What if you could hear the diaspora talk to you directly in your mind?????

Adam also talked about earphones that have head-tracking technology, opening up the possibility of controlling the playback of audio with the movement of the wearer's head.

The idea hasn't really been developed further beyond that, but I'm still keeping the possibility of implementing transducers for playback in mind.

 

Interactive Way 9: WhatsApp Bot

Interface, technologies: Conversational bot, cellphone

Scenario:

  • Audience members find or are given a fake phone card.
  • They scan the QR code, which opens up a WhatsApp bot (or another text messaging format).
  • The bot gives them a choice of possible audio snippets to explore, and they reply to the bot to make their choice.

Notes from Adam: Could use If This Then That (IFTTT), or a Google Sheet with all the text responses (so that when people text, they get a random entry from a Google sheet)
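
A minimal sketch of the bot logic, assuming Twilio's WhatsApp API with Flask; a local CSV file stands in for the Google Sheet Adam suggested:

```python
# A minimal sketch: whatever the person texts, reply with a random
# entry. Assumes Twilio's WhatsApp webhook + Flask; snippets.csv is
# a placeholder standing in for the Google Sheet.
import csv
import random

from flask import Flask
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def random_entry(path: str = "snippets.csv") -> str:
    """Pick one row from the (placeholder) sheet of text responses."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return random.choice(rows)[0]

@app.route("/whatsapp", methods=["POST"])
def whatsapp_reply():
    """Called by Twilio whenever someone messages the bot."""
    response = MessagingResponse()
    response.message(random_entry())
    return str(response)
```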

 

Interactive Way 10: A mass of calling cards in a room

Inspired by Ignacio Gatica's Stones Above Diamonds, which was exhibited at the Cooper Cole gallery at the time of writing this post:

Ignacio Gatica's Stones Above Diamonds

Scenario:

  • Audience members choose from an assortment of mock calling cards. 
  • They can bring the card to a giant screen in the middle of the room
  • The calling cards...
    • can be swiped,
    • QR scanned, OR
    • will have a number to dial on the giant screen / touchpad
  • Audio will either play out loud, or audience members can pick up a landline telephone to hear the audio

 

Reflection

After discussing these ideas with Adam, my Independent Study supervisor, here are some of the discussion points and observations that came up:

  • Audio as data is constant
  • Things to consider:
    • Is this a gallery experience or at-home, or an app, or over text?
    • Do I want the experience to happen in a white box (gallery) vs a different space?
    • How will I engage people's own phones: Cards? Swipe? NFC?
  • Calling cards: what is the significance of the calling card?
    • Using a card to access culture, loved ones, association of card to access
    • Do the cards need to stay? Are the cards there because of my experience with them, or is it about their relationship with the diaspora?
  • Last but so not least: the audio content is what will make or break this project. The audio stories told need to be compelling.

 

Tune into the next episode/post for my next assignment:

  • Starting to get responses to the prompts or using my own responses
  • Making the cheapest and most cheerful version in the simplest, easiest way possible, using one or two of these ideas
  • Going to the Ignacio Gatica piece and considering:
    • What do I think about it?
    • Would it make sense for me to have my own work in a gallery setting (pros and cons)? How will my own work be exhibited, shown, released?
