Final Documentation of ITP Winter Show Piece “Lingo Gizmo”

The “Lingo Gizmo” is a fabricated interface that lets people invent words for a missing dictionary. People collaborate over time to submit meanings that don’t have words yet, and invent original words for those meanings.

At the ITP Winter Show, I shared a prototype in which people could invent a word and assign an original meaning to it all in one interaction. I learned many things from this two-day user test that I can apply to a later version.

Check out this short video of the interaction below. 



You can play yourself online!

The code is hosted on GitHub.

Here are some fun examples of words and meanings that people created at the show.

  • ‘mlava, food for volcano monsters
  • lajayae, a person of Latin American origin wishing to live in New York City
  • juhnaheewah, the feeling that someone behind you is having a better time than you are.
  • dahvahai, good times, good mood
  • dayayay, to tell a donkey to calm down
  • erouhhtahfaher, a food that tastes like cement in a pail
  • fapaneuh, to strike fear into the enemy’s heart
  • kadaveaux, too many knishes
  • nabahoo, that feeling when you’re just not going to get out of the house today
  • payvowa, a special kind of grapefruit
  • Quaeingiwoo, a town in Western Connecticut


Inspirations & References

I created this project because I’m interested in how people build culture through shared experiences, and in the ways language acts as a tool for naming and codifying that culture. In some ways, this project is a slang maker, allowing people to name a feeling that others may share and give it new status with its own word.

I also love creative word games such as Balderdash, a parlor game in which players vote on real and bluffed definitions of an obscure dictionary word.

Lastly, I think of words as physically related to their meanings. The shape a word creates in one’s mouth can inform its meaning. Therefore, it wasn’t a stretch for me to ask users to create words by physically interacting with a mouth. Interestingly, there is a theory called the Bouba/Kiki Effect, which suggests that people across different cultures and languages are very likely to label the shapes below “kiki” on the left and “bouba” on the right. This phenomenon suggests there may be a non-arbitrary, or purposeful, mapping between speech sounds and the visual shape of objects.

One last great reference, suggested to me by Allison Parrish, on faculty at ITP, is the Pink Trombone. It’s an online interface that lets you manipulate the inside of a mouth to generate accurate sounds. Very fun to play with.


How It Works

Many skills and strategies went into this project. See below for a summary.


I designed and sewed the face, teeth, and tongue myself, using patterns I developed with paper prototypes. I did not know much about sewing before starting the project!

The inside of the face includes a cardboard structure with a hinge and rubber band to allow the top of the mouth to move down for consonants like “ma” “ba” and “wa”.


In my 500+ lines of code, I’ve used the p5.js libraries to play the opening animation, cycle through sound files, add chosen sounds to the user’s word, and save the user’s inputs into the file name of the recording, which is created with p5’s recorder function.
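For anyone curious, here is a minimal sketch of that flow, assuming p5.js with the p5.sound library. The sound file names and the filename scheme are stand-ins for illustration, not my actual assets:

```javascript
// Sketch of the word-building loop, assuming p5.js + p5.sound.
// Sound file names and recorder wiring are illustrative placeholders.
let phonemes = [];            // p5.SoundFile objects, loaded in preload()
let current = 0;              // index of the phoneme being previewed
let word = [];                // phonemes the user has added so far
let recorder, recording;

function preload() {
  // hypothetical file names
  ['ba', 'ma', 'wa'].forEach(p => phonemes.push(loadSound(p + '.mp3')));
}

function setup() {
  createCanvas(400, 200);
  recorder = new p5.SoundRecorder();
  recording = new p5.SoundFile();
}

function cyclePhoneme() {          // called when a sensor turns "on"
  current = (current + 1) % phonemes.length;
  phonemes[current].play();
}

function addPhoneme() {            // "Add" keeps the current sound
  word.push(current);
}

// Encode the user's inputs into the saved file's name so each
// recording carries its own word and meaning.
function buildFilename(wordText, meaningText) {
  const safe = s => s.trim().toLowerCase().replace(/[^a-z0-9]+/g, '-');
  return safe(wordText) + '_' + safe(meaningText) + '.wav';
}

function saveWord(wordText, meaningText) {
  recorder.record(recording);
  // ...playback of the assembled word happens here...
  recorder.stop();
  saveSound(recording, buildFilename(wordText, meaningText));
}
```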

Physical Computing

I used an Arduino Mega microcontroller because it offers enough inputs to accommodate the project’s nine sensors. Five sensors are made from conductive fabric and velostat circuits. Three are force-sensing resistors. The last is an accelerometer, an ADXL326 chip, which measures the x-axis movement of the top of the mouth.

All nine input values are processed by a sketch I’ve saved on the microcontroller. The sketch takes the analog values and turns them into a changing list of 0s and 1s to signify whether each sensor is turned “off” or “on” by the user. The p5.serialport library allows me to send that list from the microcontroller to the browser. My laptop runs a local server that serves up my code along with the serial data, so that the user can interact with the fabricated mouth interface.
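The browser side of that pipeline looks roughly like this, assuming the p5.serialport library; the port name is a placeholder:

```javascript
// Receiving the microcontroller's 0/1 list in the browser,
// assuming the p5.serialport library. Port name is a placeholder.
let serial;
let sensors = new Array(9).fill(0);   // on/off state of the nine inputs

function setup() {
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem1421'); // placeholder port name
  serial.on('data', gotData);
}

function gotData() {
  const line = serial.readLine();      // e.g. "0,1,0,0,1,0,0,0,1"
  if (line.length > 0) sensors = parseSensorList(line);
}

// Turn the comma-separated string of 0s and 1s into an array of
// numbers, one per sensor.
function parseSensorList(line) {
  return line.trim().split(',').map(Number);
}
```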

Design and User Feedback

Many rounds of design, branding, and user feedback informed this project. I used lots of paper and pen to map out my ideas, and used Illustrator to finalize the design and system logic of my project.  Over time I had several formal user feedback sessions with classmates, and quickly asked for input at critical moments in the process.


Next Time

The ITP Winter Show confirmed that if I had another couple of days, my list of additional features was in the correct order. Here they are!


1. Get rid of the mouse by creating a box with three buttons that let the user press “Add,” “Start Over,” and “Done” while interacting with the mouth interface. This would simplify what the user has to touch in order to complete the first interaction.


2. Create a user flow of three pages, each with a distinct purpose. First, an opening animation to attract people to play. Second, a page to create one’s word. Third, a page to add that word to the dictionary. Currently, it’s all on one page, which is a little too much for someone to get through at once.

Physical Computing

3. While I learned a lot using different sensors, next time I would use only one kind of sensor for the same kind of human gesture of pressing down. I was asking people to press each “fabric button”, but underneath were different sensors which each required a different kind of touch.

Overall Design

4. On a larger scale, my first prototype demonstrated that people understand and are interested in the concept, feel ownership over their words, have a lot of fun with the current interaction, and are very surprised and delighted by the end. However, the definitions don’t have as much variety in mood or tone as I’d like to encourage in a final version of the project. As of now, people add definitions that match the playful and slightly absurd interaction I’ve created (strange mouth parts, anyone?). Very few are introspective, although two people did submit definitions about wishing to move to NYC or worrying that someone else is having a better time than they are.

One thing I want to do is rerecord the audio to include more variety in phonemes. Right now the recordings all end in “ah” because they are taken from instructional resources online. Including “ack” and not only “kah” will give people more choice.

Considering my recordings all end in “ah”, any word people make sounds pretty silly. Therefore, the inviting but strange branding and design that I created for the show fit that experience. Next time, I can change the design to set a different tone for people’s interactions, in hopes of giving people the room to submit definitions that have more variety to them.



System Diagrams

Here are a few diagrams of my work.

Phonemes Chart


My circuits as a schematic and illustrations.

Meaning Maker Circuit Illustrations and Schematics


These webpage layouts are close to what I would finish creating with more time.

Webpage designs: Build A Word (01–03), Share a Meaning (01), New Meanings and Words (01)






Intro to Computational Media – Update on Final Project “Meaning Maker”

I’ve decided to combine my Intro to Physical Computing and Intro to Computational Media final projects. I’ll save my “What’s Your Character” project for winter break or next semester! I’m disappointed not to work on it now, but I want to focus on completing more of one project rather than a little less of two.

You can read more about my “Meaning Maker” project over here.  I’ll do as much of my larger idea as I can. At a minimum, I hope to complete the interaction of building words by using the mouth enclosure. I may have to save the “game” element of people interacting with each other through the interfaces for another time.

Here are my interaction diagrams, schematics, and interface designs.

I’m sure these will continue to change and I’ll update them with my final post.




A chart assigning phonemes to different interactions.

Phonemes Chart


My circuits as a schematic and illustrations.

Meaning Maker Circuit Illustrations and Schematics


My webpage design so far, created in Illustrator.

Webpage designs: Build A Word (01–03), Share a Meaning (01), New Meanings and Words (01)






Class 10 Intro to Comp Media – Proposal “Collage Character Portrait”

Project Summary 

Ideal Scenario

I’m hoping to eventually design a large projected interaction. People can playfully answer “What main character are you in a novel?” People would drag images of minor characters, similes and metaphors to the outline of their figure, as a way of defining themselves by what surrounds them. The interaction would take a few steps:

  • Enter a photobooth-like environment
  • Snap a photo of their whole body, which is turned into a silhouette. This is projected on a wall.
  • Drag from a “pile” of various characters/objects at the base of the projection to surround their figure’s outline.
  • At the end of the interaction, people can print out their picture and take it home with them. Or post online. Etc.

Prototype for Now

But for now, I’ll make a simple prototype for my Intro to Computational Media final. This will include using a laptop’s camera to capture a portrait and turn it black and white (or other colors), dragging objects to your portrait’s outline with the mouse or possibly with touch, and finally emailing yourself the photo or posting it online.

Context and Audience

This is definitely for fun, is interactive, and is meant to give people a moment to consider how what is outside their bodies can define their identities. (Which is in great contrast to my Intro to Physical Computing project, which externalizes what’s going on in people’s heads. You can see more about that project over here.)


Looking forward to redrawing these sketches next time…!

This is my “Prototype for Now”



Here is my Ideal Scenario

Later on, I hope to build it further into this type of interaction:




Why? And Inspirations

This idea is based on a digital humanities project I contributed to this summer. You can learn more about it below. I’m interested in how people or characters are defined in creative ways by objects or the people around them.

Background (not necessary to read unless you’re really interested! Mostly for me to reference later.)

This digital humanities project was led by my friends Sarah Berkowitz and James Ascher at the University of Virginia. They used textual analysis and GitHub to explore the nature of character in 19th century literature and new practices of digital transcription in the 21st century. Their project focused on Characters, the second volume of a book called Genuine Remains by Samuel Butler, a 19th century author in England. Each chapter in the book is a brief series of “jokes” about a stereotypical person, such as “A Wooer,” “An Astrologer,” and “A Corrupt Judge”. The descriptions are biting, witty, and act a bit like a dictionary of people. You can see the transcription and analysis online on this website and over here on GitHub.

While working on the project, I was struck that only men were featured as main characters, which is unsurprising for the time. But the absence of women made us wonder even further about ALL the “invisible ink” minor characters mentioned in each chapter. How do these passing characters add definition and meaning to the main character? Can these “invisible” characters be made more visible?

Sarah did some amazing analysis of a group of chapters to categorize “non-specific humans,” “proper names,” “mythological creatures,” and “animals”. I’ve been inspired to take this type of analysis and let people play with it in a visual and fun way.



Source Material

I need more references, so please suggest them! I did find:

  • This interactive window projection by NuFormer, a group in the Netherlands, came up in my Google search. I like how it turns your body into another texture and outline.


  • This Pinterest collection of “interactive wall installations” is helpful.

  • There I found this interesting advocacy installation against child abuse. It seems to use projection from the back to turn your body into a black shadow.


  • This is also cool:



For my simple prototype, I’ll use:

  • The Coding Train videos on how to use a laptop’s camera to create portraits and modify pixels to become black and white, or whatever colors I choose.
  • A library called matter.js that Dan Shiffman recommended, which is a 2D physics engine for the web. I can use it to mimic the effect of a “pile of trash” of objects on the ground, that people can “pick up” and attach to the outside of their portrait.
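As a sketch of the pixel-modification step from those videos, assuming p5.js: grab the camera feed, then snap each pixel to black or white. The threshold value is a guess to tune by eye:

```javascript
// Camera-to-portrait sketch, assuming p5.js. Threshold is a guess.
let video;
const THRESHOLD = 128;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
}

function draw() {
  video.loadPixels();
  loadPixels();
  // pixels is a flat RGBA array, four entries per pixel
  for (let i = 0; i < video.pixels.length; i += 4) {
    const v = thresholdPixel(video.pixels[i], video.pixels[i + 1],
                             video.pixels[i + 2], THRESHOLD);
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
}

// Average the RGB channels and snap to pure black or white.
function thresholdPixel(r, g, b, threshold) {
  return (r + g + b) / 3 >= threshold ? 255 : 0;
}
```

Swapping the two return values (or the colors) would give the inverted silhouette look instead.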


Collecting ideas for a title and 1-sentence description

I am literally collecting ideas. Let me know!

Project Planning

I will be making a spreadsheet to give myself a certain number of hours each week until finals, so that I’m forced to keep this manageable.


November 15

November 22 (no class because of Thanksgiving)

November 29

December 6


User Testing of Interface/Environment

I’ll also be doing a few user tests with just paper and pencils, to understand if there’s anything elegantly simple I can do to make the interaction more compelling and easier to understand.

I also need to ask people what metaphors or similes they would use to describe themselves, so I can be sure to have a variety of them in the pile.


I need more code references for the behavior of snapping something to the outline of a shape. I didn’t get far online. Otherwise I’ll just use the mouse to move the collaged objects around, and some very suggestive user interface to insist the user put them around their outline…

I also need to find out how to drag a shape next to another, so that the one is partially hidden behind the other. In my mind, the outline of the figure is the first layer, and shapes are a little behind the figure as a second layer.
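While I keep looking for snapping references, the dragging and layering parts are doable now: layering in p5.js is just draw order, so drawing the collage objects first and the silhouette last tucks the objects partially behind the figure. A sketch, with stand-in shapes:

```javascript
// Drag-and-layer sketch, assuming p5.js. Shapes are stand-ins.
let objects = [{ x: 50, y: 300, r: 30 }];   // the draggable "pile"
let dragged = null;

function draw() {
  background(240);
  // layer 1: the collage objects
  for (const o of objects) circle(o.x, o.y, o.r * 2);
  // layer 2: the silhouette, drawn last so it overlaps the objects
  fill(0);
  rect(200, 100, 120, 300);                 // stand-in for the figure
}

function mousePressed() {
  dragged = objects.find(o => hitCircle(o, mouseX, mouseY)) || null;
}

function mouseDragged() {
  if (dragged) { dragged.x = mouseX; dragged.y = mouseY; }
}

function mouseReleased() { dragged = null; }

// True when the point (mx, my) falls inside circle o.
function hitCircle(o, mx, my) {
  const dx = mx - o.x, dy = my - o.y;
  return dx * dx + dy * dy <= o.r * o.r;
}
```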


Class 9 Intro to Comp Media – Sound

I’ve been using sound in my Intro to Physical Computing midterm. Below is a video of the interaction. The game lets you create new words to fit meanings that don’t have words yet. You build words by pressing different combinations of a mouth, teeth and tongue to unlock various consonants and vowels. There’s more to read about it over here!


Here’s an older iteration that shows the screen up close. If you hear a consonant or vowel you like, you can save it to a growing word at the bottom of the screen.

Class 8 Intro to Comp Media – “Create Your Own Birding List from Live Data”

I’m a big fan of birds! Not only can they fly, but they are avian dinosaurs, display genetic diversity on a grand scale, and are linked to the health of the environments around us.

This project uses APIs to let you view the latest bird sightings in your location, logged by fellow citizen scientists like us. Then you can ask for images of any bird on the list. Eventually I’d like you to be able to mark whether or not you’ve seen a particular bird, so you can create a list of “birds to see” for yourself.

Here is a video and the sketch itself.



I’m using two APIs to make this work.

First, I pull in the latest data from Cornell Lab of Ornithology’s citizen scientist platform called eBird.  eBird allows people to create their own birding lists with a phone app. eBird then shares that information with everyone else to use with their own accounts, and through Cornell’s free API services. In my project, I request the latest bird sightings by first creating an API URL that uses the latitude and longitude of Brooklyn. A visitor can also add their own coordinates. However, changing coordinates isn’t always working for other people at the moment. I’m not sure why. I’m also not sure why I had to hardcode “&lat=” in the lat input field, but it works.
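Assembling that request URL looks roughly like this. The endpoint path and parameter names follow eBird’s public API, but they differ between API versions, so treat them as assumptions rather than my exact code:

```javascript
// Building the eBird request URL. Endpoint path and parameter names
// are assumptions based on eBird's public API docs.
const EBIRD_BASE = 'https://api.ebird.org/v2/data/obs/geo/recent';

function buildEbirdUrl(lat, lng) {
  return EBIRD_BASE + '?lat=' + lat + '&lng=' + lng;
}

// In the sketch, loadJSON would fetch the observations for Brooklyn:
// loadJSON(buildEbirdUrl(40.69, -73.99), gotSightings);
```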

Second, I use images from Flickr using their API. To do so, I take the bird name logged by a citizen scientist out of the JSON data that is returned by eBird’s API call. I turn that name into a tag that forms a search term to add to a Flickr API URL that searches their website and returns a JSON file. Then I use that Flickr JSON’s contents to create a SECOND URL for the first photo mentioned in the Flickr JSON file. That second photo URL is what I use to display the image itself.
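The two-step Flickr lookup can be sketched like this. The REST parameters follow Flickr’s documented API, the API key is a placeholder, and the photo URL pattern is the one Flickr documents, so check both against the current docs:

```javascript
// Two-step Flickr lookup: search for the bird name, then build the
// direct image URL from the first result. API key is a placeholder.
function buildFlickrSearchUrl(birdName, apiKey) {
  const tag = encodeURIComponent(birdName);
  return 'https://api.flickr.com/services/rest/' +
         '?method=flickr.photos.search&api_key=' + apiKey +
         '&tags=' + tag + '&format=json&nojsoncallback=1';
}

// Build the direct image URL from one photo entry in the search JSON.
function buildFlickrPhotoUrl(photo) {
  return 'https://live.staticflickr.com/' + photo.server + '/' +
         photo.id + '_' + photo.secret + '.jpg';
}

// Usage inside the sketch, roughly:
// loadJSON(buildFlickrSearchUrl(sighting.comName, MY_KEY), data => {
//   const first = data.photos.photo[0];
//   img = loadImage(buildFlickrPhotoUrl(first));
// });
```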

This was a lot of fun to work on. I’m amazed I can pair together live data into something I’d have fun using!  I also love that this is based on the work of fellow citizen scientists!

Here is my code. I really need to switch over to Atom and GitHub…

Index.html file


Class 6 Small Project – “Squeeze a Lime!”

In this project, I created a design to squeeze limes. I’m imagining this as part of a game where you mix your own cocktails using limes…

I connected three sensors to control three images in my p5.js sketch.  The design is online over here, although you’d need my circuit for it to work!

Here’s the interaction.


How It Works

Read on below to hear my thoughts on my physical design. As for the code, I’m sending data from my sensors through my serial port, the P5.js serial app, and into my p5.js sketch online. I’ve written code to:

  • Expect data coming in from my USB serial port
  • Create a string called inString of incoming data, but only up until the ASCII character values for carriage return and new line
  • State that if data REALLY IS coming in
  • State that if data is NOT “hello” (which I used in my “call and response” code in Arduino to require that my microcontroller be very polite and wait to be asked to send data) then to
  • Create a new variable called sensors that stores an output after it separates the “inString” numeric values from the commas
  • Create a “counter” or for loop to list the array of my sensors
  • Separate their data into an instance of each sensor, and send it the separated data from the variable sensors
  • Draw three arcs that get smaller in size the more you bend the flex sensors, by subtracting the sensor values from the dimensions of the arc.
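The steps above can be sketched roughly as the code below, assuming the p5.serialport library; the port name is a placeholder and the arc positions are illustrative:

```javascript
// Serial-to-arcs sketch, assuming p5.serialport. Port name is a
// placeholder; arc positions are illustrative.
let serial;
let sensors = [0, 0, 0];

function setup() {
  createCanvas(400, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem1421');   // placeholder
  serial.on('data', serialEvent);
}

function serialEvent() {
  const inString = serial.readStringUntil('\r\n'); // up to CR + LF
  if (inString.length > 0 && inString !== 'hello') {
    sensors = inString.trim().split(',').map(Number);
  }
  serial.write('x');   // politely ask for the next reading
}

function draw() {
  background(255);
  // one arc per flex sensor; bending shrinks its arc
  for (let i = 0; i < sensors.length; i++) {
    const d = arcSize(200, sensors[i]);
    arc(100 + i * 120, 200, d, d, 0, PI);
  }
}

// Subtract the sensor reading from the base dimension, clamped at 0.
function arcSize(base, value) {
  return Math.max(0, base - value);
}
```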


Next time

I spent a lot of time understanding the labs about serial communication with p5.js, which was time well spent! Therefore, this small project is more about demonstrating that understanding than about my ideas or execution. But next time I would spend just a little more time prototyping my physical design at the beginning as well, to make sure the code and interaction support each other as successfully as possible.

From the start, I had the idea of creating a sketch to squeeze limes because I thought the flexible sensors afforded a squeezing motion. As for an enclosure, I imagined I could cover the sensors with a lime costume of sorts, so that the exterior of my circuit suggested they were limes – and thus, you should squeeze them!

Ideally, though, I would have tested this physical prototype at the start. I’d have quickly realized my assumption that the flexible sensors afford a squeezing motion was incorrect! It’s really more of a pulling down gesture. That may sound like a minor difference, but it caused a big disconnect in the user interaction of trying to squeeze limes. Squeezing doesn’t work! Pulling does! Why am I pulling on limes??

Also, my idea of a “lime costume” wasn’t successful even as a prototype. I probably need a different kind of sensor. I did try the long flex sensor, but I’d need a well-thought-out enclosure with a very strong base so that your fingers or thumb can hold on while the rest of the hand does the squeezing.

It looks like a caterpillar! Not a lime.


The Takeaway

My takeaway is that even though coding is harder for me than prototyping with construction paper, construction paper gives JUST as much design feedback as the code. Just as I would write pseudocode to draft my code’s logic, I should create a quick physical design of my piece at the same time I’m starting my code.


Here’s the code:


Class 4 – Functions, Arguments and Parameters

Here is my sketch!

Questions: Why doesn’t the pink mouth draw again? Is there an easier way to deal with defining its line within a function?? Is there a way to decrease or increase the size of shapes by percentage…?
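On the percentage question, one answer I’ve seen is to give the drawing function a scale parameter and multiply every dimension by it, or to wrap the shape in p5’s transform stack. A sketch, with a hypothetical mouth-drawing function:

```javascript
// Scaling a shape by percentage, two ways. drawMouth is hypothetical.
function scaleBy(value, pct) {
  return value * pct / 100;       // 100 = original size, 50 = half
}

function drawMouth(x, y, pct) {
  // every dimension goes through the same percentage
  ellipse(x, y, scaleBy(80, pct), scaleBy(40, pct));
}

// Equivalent using p5's transform stack:
function drawMouthScaled(x, y, pct) {
  push();
  translate(x, y);
  scale(pct / 100);
  ellipse(0, 0, 80, 40);
  pop();
}
```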

I’m having trouble embedding my github gists lately.

Class 3 Sketch

I wanted to try several of the concepts we learned this week. To do so, I built a living room to create a few pieces that are interactive, including a lamp, picture frame, and table. However, some are working right now, others not yet! Looking forward to learning more.

Ideally, the lamp changes color, the picture frame turns “on” to show Kellee’s colorful dots, and the table shows an algorithmic design when you click on the “glass” on top of the table.

Here it is so far:



What I did learn:

How to build a slider that changes the color of a lampshade in different colors.

Got faster at using the mousePressed and mouseReleased functions.

Got faster at building buttons that turn things on and off.

How to organize my code even when it gets long.

How helpful resident office hours are.


What I got close to figuring out.

How to confine behaviors to a certain space within the canvas. For example, I really want the tablecloth to turn on with a pattern when you click on the “glass”. I also want to add Kellee’s circle pattern to the inside of the picture frame when you click the switch next to it.
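Since then I’ve learned that confining a click to one region is just a bounds check on mouseX/mouseY inside mousePressed(). A sketch, with made-up coordinates standing in for the table’s “glass”:

```javascript
// Confining a click to a region, assuming p5.js. Coordinates are
// stand-ins for the table's "glass".
const glass = { x: 120, y: 260, w: 200, h: 20 };
let patternOn = false;

function mousePressed() {
  if (inRect(mouseX, mouseY, glass)) {
    patternOn = !patternOn;   // toggle the tablecloth pattern
  }
}

// True when point (px, py) lies inside rectangle r.
function inRect(px, py, r) {
  return px >= r.x && px <= r.x + r.w &&
         py >= r.y && py <= r.y + r.h;
}
```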

I know how to create Kellee’s circle pattern, shown here.

But for some reason the code doesn’t work well in my sketch? Not sure why yet.






ICM Class 3 – Notes from Tutorials

Here are a few sketches made after following the online tutorials. There are lots of notes in the sketches to explain my code so that I fully understand what’s going on.

Tutorial 3.2: Draw a ball that bounces up and down on both x and y, and changes color.

Question: How do I get the ball to move around more randomly? I tried using the random() but the ball went on the fritz : /
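One answer I’ve since found: random() picks an unrelated position every frame, which is why the ball goes on the fritz. p5’s noise() returns a smoothly changing value instead, so the ball wanders rather than teleports. A sketch:

```javascript
// Smooth random motion with Perlin noise, assuming p5.js.
let t = 0;

function draw() {
  background(220);
  // noise() returns 0..1; scale it across the canvas
  const x = mapNoise(noise(t), width);
  const y = mapNoise(noise(t + 1000), height);  // offset so x and y differ
  circle(x, y, 30);
  t += 0.01;                                    // small step = smooth motion
}

// Scale a 0..1 noise value across a canvas dimension.
function mapNoise(n, size) {
  return n * size;
}
```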

Tutorial 3.3: Use “if/else if” statements to draw shapes with colors as you move your cursor left to right

Tutorial 3.4 Roll over a button to make it appear, and press the mouse within the button to change the background.

Tutorial 4.1 Use while and for loops to draw squares and circles across the screen, while incrementally changing the color of the circles, too.

Question: What’s the best way to have changed the blue color incrementally? I found three ways to do it, found in my code.

Tutorial 4.2 Use nested for loops to draw circles that change color and repeat across the canvas as the mouse moves. Also, the background changes color when you press the mouse.

Question: The mousePressed didn’t work when I placed it after the circle for loops, but does when I place it beforehand. Why is this? Are the circles overriding this code?

ICM Class 2 – Variables and Animating

I want to thoroughly understand the code I’m writing, so I made a very simple animation of circles and squares behaving randomly or in response to the cursor. Even with a simple sketch, I still have a lot of questions! I’ve listed them in the code itself, which you can see below. I had fun problem-solving my code when it didn’t run — especially when I was able to fix it.

Originally I wanted to be able to click the mouse four times, and each time a ball would behave differently, whether shy, clingy, bouncy, or frenetic. I’d like to learn how to have “four different clicks” and get further with shyness and bounciness.

Here is my sketch:

Here is my code: