The Ultrasonic Selfie Stick

Calebjhammel
6 min read · Dec 5, 2020

Creating a connectionless photo-taking tool utilizing sound.

THE PROBLEM

This was by far one of the most fun projects I have ever created. I was fortunate enough to work with Natalia Vinueza from the Wonder Wonder gallery here in Boulder, CO. The gallery consists of 18 immersive rooms that guests can explore and take pictures in. Each room contains an artistic center where guests can sit and get their photo taken. This space was at the core of the project and the problem I set out to solve.

photos from a recent Halloween event at Wonder Wonder

After observing patrons, I began to notice that many of them wanted to take photos of their entire group, or were there alone and had no one to take photos of them. A full description of the proposal and more detail about the tech used to solve it can be read here. To summarize, though: I set out to create a way for patrons of the gallery to take photos of themselves using their own devices, with zero direct connection and minimal operational interaction.

The Goal: gallery patrons would walk into a room and set their phone into a holder with the camera aimed at the room’s centerpiece. They would then walk to this location, press a button, and have their photo taken.

THE TECH

The inspiration for this project’s solution came from the aptly named podcast Twenty Thousand Hertz. This podcast focuses on the auditory world and derives its name from the highest frequency a human can hear. I was visiting my parents in July 2020, just before starting design school, and listened to an episode that has captivated me ever since. The episode discussed ultrasonic tracking and the sneaky way advertisers embed audio signatures into video content. Advertisers then partner with third-party mobile applications to recognize when a user has seen their ad. These signatures are too high for you and me to hear, but can be easily detected by cell phones.

source: http://www.cochlea.org/en/hear/human-auditory-range

This tech can be used to sonically transfer data or trigger action without any connection between devices. The podcast also brought up Lisnr, a service design company that utilizes ultrasonic data transfer for good. I began obsessing over this technology and knew it was the perfect solution for Wonder Wonder.

HOW IT WORKS

This project consists of two main parts: a webpage and a beacon. Each presented its own unique challenges, failures, and successes.

The webpage utilized the incredibly helpful p5.js libraries and some simple HTML. With these libraries and a working understanding of JavaScript, the site was developed to listen for select audio frequencies.

first method

At first I used the getCentroid() function to obtain the average (centroid) frequency of the incoming audio. This worked fine but only allowed for single-tone functionality; with this limitation, each webpage could only have one function. After some more work, however, I switched to the getEnergy() function, which breaks all incoming audio down into individual frequency bands.
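A rough sketch of that first, single-tone approach is shown below. It is my own reconstruction using p5.sound’s AudioIn and FFT objects, not the project’s actual code, and the trigger band is borrowed from the final version purely for illustration.

```
// Reconstruction of the first, single-tone method (not the project's actual code)
let mic, fft;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();   // the device microphone
  mic.start();
  fft = new p5.FFT();       // FFT analyzer from the p5.sound library
  fft.setInput(mic);
}

function mousePressed() {
  userStartAudio();         // browsers require a user gesture before audio can start
}

function draw() {
  background(0);
  fft.analyze();                       // must be called before reading the spectrum
  const centroid = fft.getCentroid();  // the "average" frequency of the input, in Hz
  // Single-tone check: with this method the whole page can only react to one thing.
  // The 225-275 Hz band mirrors the final version; the original check may have differed.
  if (centroid > 225 && centroid < 275) {
    console.log('trigger tone detected');
  }
}
```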

final code

This method then analyzes the bands and assigns each a loudness value on a 0–255 scale. This operation allows for multi-tone functionality within the same webpage. A simple if/then statement then reacts if a certain frequency band’s loudness value goes above a determined level. For this project, the webpage is programmed to take a photo if audio between 225–275 Hz is analyzed above a value of 250; for some context, a simple conversation registers around 100. Although 225–275 Hz is an audible frequency, it was a necessary proof-of-concept approach given the beacon’s speaker limitations discussed below.
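Below is a minimal sketch of that logic, assuming a hypothetical takePhoto() helper that draws the current camera frame and saves it; the project’s real source, linked at the end of this article, may differ in its details.

```
// Reconstruction of the final, multi-band method (the real code, linked below, may differ)
let mic, fft, cam;

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);  // device camera, aimed at the room's centerpiece
  cam.hide();                  // we draw the feed to the canvas ourselves
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
}

function mousePressed() {
  userStartAudio();            // browsers require a user gesture before audio can start
}

function draw() {
  image(cam, 0, 0, width, height);  // live preview
  fft.analyze();                    // refresh the spectrum before reading energies
  // Loudness (0-255) of the 225-275 Hz band; ordinary conversation sits around 100.
  const energy = fft.getEnergy(225, 275);
  if (energy > 250) {
    // Note: without a pause this fires on every frame the tone is audible,
    // which is the problem described under WHAT WENT WRONG below.
    takePhoto();
  }
}

// Hypothetical helper: freeze the current frame and save it to the device.
function takePhoto() {
  image(cam, 0, 0, width, height);
  saveCanvas('wonder-wonder-photo', 'png');
}
```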

The beacon for this project was created using an Arduino UNO outfitted with an Adafruit Wave Shield for audio playback. An audio sample containing the desired frequency was then loaded onto an SD card and placed inside the shield.

early prototype work

A button was then wired into the beacon and set to trigger the audio sample to play. After pressing the button, users have a brief three-second window to pose before the sample is sent out a 3.5 mm jack to nearby speakers. The played audio signature is then heard by the listening webpage, which prompts it to take a photo. The speakers used for this project, however, are traditional studio monitors, which cut off any audio above 18,500 Hz. Although this forced me to use an audible sound rather than the ultrasonic signature I wanted, the core functionality of both the beacon and the webpage is sound, pun intended.

early beacon prototypes
final audio shield
as viewed from the device camera
visual of the program listening, analyzing, then pausing to take a photo

WHAT WENT WRONG

Although the project was a relative success, many issues came up during its creation. The first was the loop() speed of the webpage: initially the page did not pause after taking each photo and would take roughly 10 photos for every second the beacon played. To solve this, I used a for() loop with a built-in delay, which pauses the page’s listening so that only a single photo is taken (an alternative way to get the same behavior is sketched below).
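For illustration only, the sketch below achieves the same single-photo behavior with a millis()-based cooldown rather than the for() loop actually used in the project; it continues the earlier reconstruction and reuses its cam, fft, and takePhoto() names.

```
// Continues the earlier sketch (same cam, fft, and takePhoto()).
// Uses a millis()-based cooldown instead of the for() loop with a delay actually used.
let lastShot = -Infinity;
const COOLDOWN_MS = 5000;  // assumed pause length; the project's value may differ

function draw() {
  image(cam, 0, 0, width, height);
  fft.analyze();
  const energy = fft.getEnergy(225, 275);
  if (energy > 250 && millis() - lastShot > COOLDOWN_MS) {
    takePhoto();
    lastShot = millis();   // ignore the rest of the beacon's tone while cooling down
  }
}
```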

A second problem arose when initially testing the audio beacon. At first I was simply generating a tone through the Arduino’s digital pins. Although this worked, the current through these pins was not high enough to create a sound discernible from outside audio interference; I needed an amplifier to run my sound through. To solve this problem, the above-mentioned Adafruit shield and speakers with built-in amplifiers were used. The final problem, and the most frustrating to me, was the audio cutoff from the speakers. As discussed above, this does not defeat the purpose of the project, since the fundamentals remain the same, but it does necessitate an audible frequency, which goes against one of the core tenets of the desired solution. Multiple ultrasonic speaker arrays are available for purchase and will be incorporated into the project moving forward. Although I could have simply used a raw speaker wired directly into the beacon, the shield’s single amplifier was not enough to overcome the outside audio interference issue.

WHAT’S NEXT

Once the obvious speaker limitations are sorted out, the future goals for this project are quite ambitious. This technology presents limitless potential. Lisnr, for example, the current leader in ultrasonic technology, has been developing touchless ticket and payment authentication systems. I believe this technology could also revolutionize the way we experience digital content as well as in-person events. Syncing streaming video content to mobile phones through ultrasonic triggers could link content-consumption experiences, and the same triggers could sync augmented reality experiences across devices, allowing multiple people to experience the same AR content in real time.

Letting our imaginations run wild, I foresee a future concert where every attendee wears AR glasses and ultrasonic signatures trigger specific AR content across all viewers. Creating the experience this way would let artists generate the AR visuals as they went, instead of producing a predetermined visual experience that every device begins viewing at the same time. This would allow for a new means of artistic expression never seen before. Regardless of my future creations with this technology, I am excited to see what the future holds.

Source code for the webpage can be found here.
