Final Documentation of ITP Winter Show Piece “Lingo Gizmo”

The “Lingo Gizmo” is a fabricated interface that lets people invent words for a missing dictionary. People collaborate over time to submit meanings that don’t have words yet, and invent original words for those meanings.

At the ITP Winter Show, I shared a prototype in which people could invent a word and assign an original meaning to it all in one interaction. I learned many things from this two-day user test that I can apply to a later version.

Check out this short video of the interaction below. 

 

 

You can play it yourself online! https://fergfluff.github.io/lingo_gizmo/

The code is hosted on GitHub. https://github.com/fergfluff/lingo_gizmo

Here are some fun examples of words and meanings that people created at the show.

  • ‘mlava, food for volcano monsters
  • lajayae, a person of Latin American origin wishing to live in New York City
  • juhnaheewah, the feeling that someone behind you is having a better time than you are.
  • dahvahai, good times, good mood
  • dayayay, to tell a donkey to calm down
  • erouhhtahfaher, a food that tastes like cement in a pail
  • fapaneuh, to strike fear into the enemy’s heart
  • kadaveaux, too many knishes
  • nabahoo, that feeling when you’re just not going to get out of the house today
  • payvowa, a special kind of grapefruit
  • Quaeingiwoo, a town in Western Connecticut

 

Inspirations & References

I created this project because I’m interested in how people build culture through shared experiences, and the ways language acts as a tool for naming and codifying that culture.  In some ways, this project is a slang maker, allowing people to name a feeling  that others may have, and give it a new status with its own word.

I also love creative word games such as Balderdash, a parlor game in which players vote on the best guessed or faked definitions for a chosen obscure word from the dictionary.

Lastly, I think of words as physically related to their meanings. The shape a word creates in one’s mouth can inform its meaning. Therefore, it wasn’t a stretch for me to ask users to create words by physically interacting with a mouth. Interestingly, there is a theory called the Bouba Kiki Effect. The theory suggests that people across different cultures and languages are very likely to label the shapes below “kiki” on the left and “bouba” on the right. This phenomenon suggests there may be a non-arbitrary, or purposeful, mapping between speech sounds and the visual shape of objects.

[Image: the spiky “kiki” shape and the rounded “bouba” shape]

One last great reference, suggested to me by Allison Parrish, on faculty at ITP, is the Pink Trombone. It’s an online interface that lets you manipulate the inside of a mouth to generate accurate sounds. Very fun to play with.

 

How It Works

Many skills and strategies went into this project. See below for a summary.

Fabrication

I designed and sewed the face, teeth, and tongue myself, using patterns I developed with paper prototypes. I did not know much about sewing before starting this project!

The inside of the face includes a cardboard structure with a hinge and rubber band that let the top of the mouth move down for consonants like “ma,” “ba,” and “wa.”

Code

In my 500+ lines of code, I’ve used the p5.js libraries to play the opening animation, cycle through sound files, add chosen sounds to the user’s word, and save the user’s inputs into the file name of the recording, which is created with p5’s recorder function.
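As a sketch of how the recording piece works (a minimal version, not my exact code — userSpelling stands in for the spelling collected by the interface), p5.sound’s recorder can capture the mic and save a file named after the user’s word:

let mic, recorder, soundFile;
let userSpelling = 'nabahoo'; // assumed: filled in by the interface

function setup() {
  mic = new p5.AudioIn();           // listen to the microphone
  mic.start();
  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);
  soundFile = new p5.SoundFile();   // the recording lands here
}

function keyPressed() {
  if (key === 'r') recorder.record(soundFile);    // start recording the word
  if (key === 's') {
    recorder.stop();                              // stop recording
    saveSound(soundFile, userSpelling + '.wav');  // user's input in the file name
  }
}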

Physical Computing

I used an Arduino Mega microcontroller because it offers enough inputs to accommodate the project’s nine sensors. Five sensors are made of conductive fabric and Velostat circuits. Three are force-sensing resistors. The last sensor is an accelerometer, an ADXL326 chip, which measures the x-axis movement of the top of the mouth.

All nine input values are processed by a sketch I’ve saved on the microcontroller. The sketch takes the analog values and turns them into a changing list of 0s and 1s to signify whether each sensor is turned “off” or “on” by the user. The p5.serialport library lets me send that list from the microcontroller to my browser, where a local server serves up my code along with the serial data so that the user can interact with the fabricated mouth interface.
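On the browser side, the receiving code looks roughly like this (a sketch only — the port name is a placeholder):

let serial;
let sensors = new Array(9).fill(0);     // on/off state of the nine inputs

function setup() {
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem1411'); // placeholder port name
  serial.on('data', gotData);           // run gotData when serial data arrives
}

function gotData() {
  let inString = serial.readLine();     // e.g. "0,1,0,0,1,0,0,0,1"
  if (inString.length > 0) {
    sensors = split(trim(inString), ','); // strip the \r\n, then split on commas
  }
}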

Design and User Feedback

Many rounds of design, branding, and user feedback informed this project. I used lots of paper and pen to map out my ideas, and used Illustrator to finalize the design and system logic of my project.  Over time I had several formal user feedback sessions with classmates, and quickly asked for input at critical moments in the process.

 

Next Time

The ITP Winter Show confirmed that if I had another couple of days, my list of additional features was in the correct order. Here it is!

Fabrication

1. Get rid of the mouse by creating a box with three buttons that let the user press “Add,” “Start Over,” and “Done” while interacting with the mouth interface. This would simplify what the user has to touch in order to complete the first interaction.

Code

2. Create a user flow of three pages, each with a distinct purpose. First, an opening animation to attract people to play. Second, a page to create one’s word. Third, a page to add their word to the dictionary. Currently it’s all one page, which is a little too much for someone to get through at once.
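A minimal way to structure that flow in p5.js, with placeholder page names of my own:

let page = 'attract'; // 'attract' -> 'build' -> 'share'

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(240);
  if (page === 'attract') text('opening animation plays here', 20, 50);
  else if (page === 'build') text('build your word here', 20, 50);
  else if (page === 'share') text('add your word to the dictionary here', 20, 50);
}

function mousePressed() {
  if (page === 'attract') page = 'build';    // advance through the flow
  else if (page === 'build') page = 'share';
}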

Physical Computing

3. While I learned a lot using different sensors, next time I would use only one kind of sensor for the same kind of human gesture of pressing down. I was asking people to press each “fabric button”, but underneath were different sensors which each required a different kind of touch.

Overall Design

4. On a larger scale, my first prototype demonstrated that people understand and are interested in the concept, feel ownership over their words, have a lot of fun with the current interaction, and are very surprised and delighted by the end. However, the definitions don’t have as much variety in mood or tone as I’d like to encourage in a final version of the project. As of now, people add definitions that match the playful and slightly absurd interaction I’ve created (strange mouth parts, anyone?). Very few are introspective, although two people did submit definitions about wishing to move to NYC or worrying that someone else is having a better time than they are.

One thing I want to do is rerecord the audio files to include more variety in phonemes. Right now they all end in “ah” because they are taken from instructional resources online. Including “ack” and not only “kah” will give people more choice.

Considering my recordings all end in “ah”, any word people make sounds pretty silly. Therefore, the inviting but strange branding and design that I created for the show fit that experience. Next time, I can change the design to set a different tone for people’s interactions, in hopes of giving people the room to submit definitions that have more variety to them.

 

 


System Diagrams

Here are a few diagrams of my work.

Phonemes Chart

 

My circuits as a schematic and illustrations.

Meaning Maker Circuit Illustrations and Schematics

 

These webpage layouts are close to what I would finish creating with more time.

[Webpage designs: Build a Word (three pages), Share a Meaning, and New Meanings and Words]

 

 

 

 

 

Intro to Computational Media – Update on Final Project “Meaning Maker”

I’ve decided to combine my Intro to Physical Computing and Intro to Computational Media final projects. I’ll save my “What’s Your Character” project for winter break or next semester! I’m disappointed not to work on it now, but I want to focus on completing more of one project rather than a little less of two.

You can read more about my “Meaning Maker” project over here.  I’ll do as much of my larger idea as I can. At a minimum, I hope to complete the interaction of building words by using the mouth enclosure. I may have to save the “game” element of people interacting with each other through the interfaces for another time.

Here are my interaction diagrams, schematics, and interface designs.

I’m sure these will continue to change and I’ll update them with my final post.

 

[Installation diagram]

 

A chart assigning phonemes to different interactions.

Phonemes Chart

 

My circuits as a schematic and illustrations.

Meaning Maker Circuit Illustrations and Schematics

 

My webpage design so far, created in Illustrator.

[Webpage designs: Build a Word (three pages), Share a Meaning, and New Meanings and Words]

 

 

 

 

 

Intro to Computational Media – Notes on Github

To help myself, here is a cheat sheet for sending updates to GitHub using the terminal.

 

To add a repo to GitHub

  1. Create it manually on GitHub
    1. Go to your repositories on GitHub.
    2. Click New (the green button).
    3. Follow the instructions, and include a ReadMe file.
  2. Then connect your local project folder to it using the terminal (instructions here) https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/
    1. Change the current working directory to your project folder (type “cd ” and drag the folder into the terminal).
    2. Initialize the local folder as a Git repository.
      1. Type in: git init
    3. Stage your files for commit to your local repository.
      1. Type in: git add .
    4. Commit the files that you’ve staged in your local repository.
      1. Type in: git commit -m "first commit"
    5. Connect the local folder to the online repo you created.
      1. Copy the repo URL from your browser.
      2. Type into your terminal: git remote add origin yourURL
    6. Confirm the URL is correct.
      1. Type in: git remote -v
    7. Push the actual files to your now connected repo.
      1. Type in: git push -u origin master
      2. If needed, type in: git push -f origin master (creating the repo on GitHub added a ReadMe that isn’t part of your local project; -f forces the push even with conflicting files)

 

Going forward, when you want to make updates:

  1. Stage the files you want to add.
    1. Type in: git add .
  2. Commit the files that you’ve staged in your local repository.
    1. Type in: git commit -m "add existing file"
  3. Push the changes in your local repository to GitHub.
    1. Type in: git push origin your-branch

If I’m given a merge message that I can’t escape out of (because the master repo is one or more commits ahead of my local repo), Git has opened the vi editor, and I use these commands in the Terminal.

  1. Press “i” to start typing.
  2. Write your merge message.
  3. Press “esc”.
  4. Write “:wq” (write and quit).
  5. Then press enter.

 

If you want to get updates from someone else who added to your repo:

  1. Enter “git pull” into the Terminal to bring the latest version of the repo onto your computer.

 

Class 10 Intro to Phys Comp – Final Proposal “Meaning Maker: By the Mouthful” OR TBD NEW NAME

I’ll be turning my midterm into my final, which means I’ll have more time to build my project out further. This is a game that lets people create new words to fit meanings that don’t have words in the English language… yet.

See below for my plans!

Project Summary 

To start playing this game, people can decide where they’d like to start. They can either:

  • Share a meaning, feeling, or observation in their life that could use a word in the English language.
  • Create a new word, record and spell it.
    • They can match this new word with a meaning from someone else.
    • Or if they wish, they can go right ahead and define the new word they just created.
  • Vote on the top words
  • View/listen to the best ranked words.
  • Also, because I’m curious whether people will use these new words… it would be great to let people share whether they used a word recently, too. Something like “Let us all know if you heard a word used! What was the sentence?”

Why?

I’m interested in creating a moment for people to:

  • Have fun finding out what other people are thinking about, but not talking about.
  • Build culture by creating new words a community can use together.
  • Pause for a moment to think about how the mouth forms sounds.
  • Consider whether language allows for a full expression of how they’re feeling on the inside, and give people some agency to think of language (and therefore their world-view?) as not fixed.
  • If people are especially theoretical or grounded in linguistics… they might think about how the actual sounds of words may intentionally describe objects and experiences (see more about the Bouba Kiki Effect below)

 

Context and Audience

I’d like this game to be installed in a shared place for a week or longer, in which people spend time together whether consistently or as they come and go.

Obviously, the ITP Floor is a great place to start. Placing it in a library, a coffee shop, museum, or the subway platform would all lead to different interactions, too.

I do wonder how “sticky” the interaction will be. Will people want to come back and see what others have created? Or will it be a one-time interaction?

I also wonder how people who speak multiple languages will interact with the game. Will they be inclined to add meanings that are in their language, but not English? That might mean they try to create actual words in another language with the mouth components.

Another consideration is how clumsy or expressive I want the action of creating words to be. Right now it’s very playful but fairly clumsy. Linguistics and phonetics experts wouldn’t find it as expressive as something like the Pink Trombone vocalizer. How important is that at this point?

Inspirations

As for my interest in words, I’ve always liked the game Fictionary, which lets you create fake definitions for obscure words in the dictionary. When I play Bananagrams, I enjoy turning the last round into a competition of made-up words that you must defend with fake definitions at the end. And who doesn’t enjoy learning about words in other languages that don’t exist in English? Sometimes other languages have better words for describing feelings. Other times a word can describe a feeling I’ve never felt before.

Source Material

I met with Allison Parrish, a faculty member here at ITP.  She offered a lot of sources and examples to consider as I decide what is most important to emphasize in my project. They include:

  • Sniglets – A recurring bit from the 1980s American TV show “Not Necessarily the News” about “sniglets,” defined as “any word that doesn’t appear in the dictionary, but should.” The segment became a series of books.
  • Fictionary and Balderdash – A parlor game and a board game, respectively, that involve voting on the best guessed or faked definitions for obscure words from the dictionary.
  • The Bouba Kiki Effect – As an interesting example of synesthesia, people across different cultures and languages are very likely to label the shapes below “kiki” on the left and “bouba” on the right. This phenomenon suggests there may be a non-arbitrary mapping between speech sounds and the visual shape of objects. Even further, some believe the evolution of language might have to do with the neurological feedback of the shapes the mouth creates while speaking, in that humans might use sound symbolism to non-arbitrarily map sounds to objects in the world. It’s possible this extends even to ideasthesia, in which people may sense concepts and ideas as perception-like experiences. ***In short, I may need to experiment with my game design to lead users towards expressive combinations of consonants and vowels that better match the emotional “shape” of other people’s submitted meanings.***

[Image: the spiky “kiki” shape and the rounded “bouba” shape]

  • Pink Trombone – Have fun creating sounds with different parts of the mouth! This is a game changer for music teachers of voice and of wind and brass instruments… you can show your students how to shape the inside of the mouth, which is key to shaping tone and articulating notes.
  • ITP graduate thesis project – I’m still looking for a link to a previous student’s thesis which used physical actuators to create a voice synthesis for vowels.
  • Greg Borenstein – Another ITP graduate, whose Twitter bot account @fantasticvocab generates new words with new meanings “out of the atoms of English.”
  • Suzette Haden Elgin – Elgin is a writer of science fiction, including a book series called Native Tongue. The series uses a new language she created, called Láadan.

Project Planning

I do need to get lots of advice to actually make this thing! These are the questions I have:

  • User interface of Mouth components
    • How to better construct my mouth parts out of fabric, especially when they become larger and might have wireless chips inside. I might be able to ask someone in the Tisch costume department.
    • How to make the mouth parts wireless, possibly using XBee wireless chips.
  • User interface of Screens
    • How to best strategize using screens, whether just one or multiple, and what combination of keyboards, mouses, and touch gesture is best.
    • How to design the environment surrounding the screen(s), and what materials are best to use.
  • Database
    • How to create a database that stores people’s words, meanings and votes, that can be accessed over time.
    • How to create a voting mechanism as well.

Bill of Materials

  • User interface: Mouth parts
    • Fabric
    • Fiber fill
    • Thread
    • Misc. other materials as needed
  • User interface: Screens
    • Can I borrow screens from the ER?
    • Do I need to buy my own?
    • Keyboards? Mice?
    • My own buttons to reduce need for keyboards?
  • Database
    • What do I need to build a database accessible online? Do I need to pay for this or can I do this for free?

 

User Testing of Interface/Environment

I have three options for how to display my project.

Most Simple

I can have just a simple set up with one or two monitors, the mouth components, and perhaps a separate enclosure for special buttons.

[Sketch: the simple setup]

More Work Required

Ideally, I would be able to divide up the different entry points of the game into “stations”. I’m a little concerned that the activity of coming up with meanings/definitions is too introspective to stand up to the absurd sounds of the vowels and consonants playing back while other users might be creating words.

[Sketch: the stations setup]

Lots of Work Required

Even more of an ideal, it would be cool to have a column with three faces for each entry point into the game.

[Sketch: the column with three faces]

 

That’s it for now! Looking forward to doing a user test in class tomorrow.

Class 9 Intro to Physical Computing Midterm – “Making Meaning: By the Mouthful”

Here is the final iteration of my midterm project. This is a game that lets people create new words to fit meanings that don’t have words in the English language… yet.

 

The interaction so far:

 

To start playing, people can decide where they’d like to start. They can either:

  • Request a new word for a meaning they’ve always wished there was a word for.
  • Create a new word, record and spell it.
  • Define the meaning of a new word.
  • Vote on the top words and view the best ranked words.

Currently I’m focusing on designing for the second starting point of the game. To do so, people take apart a human mouth, and use a mouth, tongue and teeth in different combinations to unlock consonants and vowels. They can combine these syllables into words. There’s more to the game described in my last post.

For now, here are some more photos, the sketch online that you can play with, and my code.

The Enclosure

[Photos of the enclosure]

The Online Sketches

Below is a link to the latest code I’m working on. However, this only works if you have the mouth with you!

http://alpha.editor.p5js.org/fergfluff/sketches/rJUMPcSRb

But here’s an example of the sketch you can play online without the mouth.

http://alpha.editor.p5js.org/fergfluff/sketches/HywJ3HL0W

 

The code

Thanks to Leon, Chino and Jen!!

Class 8 Intro to Phys Comp – Start of Midterm “Making Meaning: By the Mouthful”

Summary

This project is a game that lets people create new words to fit meanings that don’t have words in the English language… yet.

To start playing, people take apart a human mouth, and use each part to pick out consonants and vowels to create random words. There’s more to the game described below, but to start here is my prototype of this first aspect of the game.

 

Larger Picture

I decided to work on this idea because I DIDN’T want to use a special sensor, so that I could focus on the interaction design itself.

As for the interest in words, I’ve also always liked the game Fictionary because you can create fake definitions for obscure words in the dictionary. And who doesn’t enjoy learning about other languages’ words that don’t exist in English? Some new words I can relate to, but others describe feelings I haven’t even had before.

To be honest, I also wanted to make something soft that you could press or squish that would change something on screen.   Hence the stuffed felt!

You can see that ultimately I want the game pieces, which form a mouth, to not only fit together but also come apart so that you can press them to choose different consonants and vowels.

I’m hoping taking apart the mouth, and how soft and pliable the parts are, gives hints as to how to get different kinds of vowels and consonants. For example, you can create “nah” by pressing the tongue and mouth pieces, because those are the parts of the mouth most required to create that sound. But for vowels such as “ooh” and “uhh”, you only need to press the mouth.  I need to do some user feedback testing to see how much this gets across without explanation.

 

Below is the whole game as a user map. People can decide where they’d like to start. They can either:

  • Create a new word, record and spell it.
  • Define the meaning of a new word.
  • Vote on the top words and view the best ranked words.
  • Request a new word for a meaning they’ve always wished there was a word for.

[User map diagram]

 

Process

I started by creating a paper model of the mouth.  Below is the mouth, teeth and tongue, all fit together. Ultimately, magnets might work to keep it all together. Or the friction of felt might be enough on its own.

[Photos: the paper mouth, teeth, and tongue]

Here they are as game pieces.

[Photo: the game pieces]

Then I played with soft circuits to create felt versions of these paper game pieces. Thank you to Hannah/Plusea on Instructables for her directions on soft circuits, such as this one, and her collective Kobakant’s very extensive library of soft things. They went to MIT Media Lab and were part of the High-Low Tech research group! Very cool.

Here I am testing a tongue with a coin battery circuit.

giphy_squeezing tongue

As for how it works, the tongue has three layers of felt. In between two layers are the circuit parts. In between the other two layers you’ll find only fiber fill.

The circuit is made of a sandwich of Velostat between two pieces of conductive fabric. Velostat is a kind of plastic film with carbon in it, making it conductive as well. The conductive fabric I found had adhesive on the back, which saved me the extra step of ironing on interfacing fabric to adhere it to the felt.

I’m assuming the circuit is completed when the “tongue” button is pressed hard enough to send electricity from one piece of conductive fabric through the Velostat to the other piece. Check out Plusea’s Instructables to understand how to shape the conductive fabric so the pieces don’t accidentally touch, which would create a short circuit.

Half the inside of the tongue, showing the circuit’s conductive fabric on the bottom. I used alligator clips. I need to look into XBee Radios or anything else that would make these mouth parts wireless. 

[Photos: inside of the tongue]

I pinned paper stencils to the felt to make it easier to cut with fabric scissors.

[Photo: paper stencils pinned to the felt]

As for the code, I used serial communication to connect the value readings of my analog sensors to my p5.js sketch.

Here is my Arduino code, which uses a handshake method to slow the transfer of data to only when the p5.js sketch asks for it, and which reads each sensor’s analog value and prints it serially.

And here is my p5.js code. This is only my sketch.js file. For my final iteration, I’ll learn to post my code on GitHub as a complete folder with all files.

I’m wondering if there is a better way to store my logic for which sensor combinations play which sounds. Ultimately it could be 24 consonants and 20 vowel sounds in combination with each other!?!

I also need help improving the code so that a user can press two sensors and get only one result, instead of triggering each individual sensor’s result. Right now it sort of works, but sometimes you end up hearing as many as three sounds.
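One approach I want to try is a lookup table keyed on the whole sensor state, read once per frame, so a two-sensor press maps to exactly one sound. A sketch only, with made-up combinations and an assumed playSound() helper:

const combos = {
  '1,1,0': 'nah',  // tongue + mouth pressed together
  '0,1,0': 'ooh',  // mouth alone
  '0,1,1': 'bah',  // mouth + top of the mouth
};
let lastKey = '';

function checkSensors(sensors) {
  const key = sensors.join(',');        // e.g. [1, 1, 0] becomes '1,1,0'
  if (key !== lastKey && combos[key]) { // only trigger when the pattern changes
    playSound(combos[key]);             // assumed helper that plays the phoneme file
  }
  lastKey = key;
}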

Finally, here is my circuit on Fritzing. The red line is Fritzing’s interpretation of a soft circuit…

[Fritzing diagram]

 

Class 6 Small Project – “Squeeze a Lime!”

In this project, I created a design to squeeze limes. I’m imagining this as part of a game where you mix your own cocktails using limes…

I connected three sensors to control three images in my p5.js sketch.  The design is online over here, although you’d need my circuit for it to work! http://alpha.editor.p5js.org/fergfluff/sketches/B1ZDYUMa-

Here’s the interaction.

[GIF: squeezing the limes]

How It Works

Read on below for my thoughts on my physical design. As for the code, I’m sending data from my sensors through my serial port, the P5.serialcontrol app, and into my p5.js sketch online. I’ve written code to (see the sketch after this list):

  • Expect data coming in from my USB serial port
  • Create a string called inString of incoming data, but only up until the ASCII character values for carriage return and new line
  • Check that data really is coming in
  • Check that the data is not “hello” (which I used in my “call and response” code in Arduino to require that my microcontroller be very polite and wait to be asked to send data)
  • Create a new variable called sensors that stores the result of separating the inString numeric values at the commas
  • Create a “counter,” or for loop, to step through the array of my sensors
  • Separate the data into an instance for each sensor, passing it the separated data from the variable sensors
  • Draw three arcs that get smaller the more you bend the flex sensors, by subtracting the sensor values from the dimensions of the arc
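Condensed, that logic looks something like this (assuming serial is a p5.SerialPort opened in setup()):

let sensors = [0, 0, 0]; // the latest flex sensor readings

function serialEvent() {
  let inString = trim(serial.readLine());            // strip the \r\n
  if (inString.length > 0 && inString !== 'hello') { // skip the handshake greeting
    sensors = split(inString, ',');                  // one value per sensor
  }
}

function draw() {
  background(255);
  for (let i = 0; i < sensors.length; i++) {
    // subtract the reading from the arc's dimensions: more bend, smaller lime
    arc(100 + i * 150, 200, 250 - sensors[i], 250 - sensors[i], 0, PI);
  }
}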

 

Next time

I spent a lot of time understanding the labs about serial communication with p5.js, which was time well spent! Therefore, this small project is more about demonstrating that understanding than it is about my ideas or execution. But next time I would spend just a little more time prototyping my physical design at the beginning as well, to make sure the code and interaction support each other as successfully as possible.

From the start, I had the idea of creating a sketch to squeeze limes, because I thought the flexible sensors afforded a squeezing motion. As for an enclosure, I imagined I could cover the sensors with a lime costume of sorts, so that the exterior of my circuit suggested they were limes – and thus, that you should squeeze them!

Ideally, though, I would have tested this physical prototype at the start. I’d have quickly realized my assumption that the flexible sensors afford a squeezing motion was incorrect! It’s really more of a pulling down gesture. That may sound like a minor difference, but it caused a big disconnect in the user interaction of trying to squeeze limes. Squeezing doesn’t work! Pulling does! Why am I pulling on limes??

Also, my idea of a “lime costume” wasn’t successful even as a prototype. I probably need a different kind of sensor. I did try the long flex sensor, but I’d need a well-thought-out enclosure with a very strong base so that your fingers or thumb can hold on while the rest of the hand does the squeezing.

It looks like a caterpillar! Not a lime.

[Photo: the long flex sensor prototype]

The Takeaway

My takeaway is that even though coding is harder for me than prototyping with construction paper, construction paper gives JUST as much design feedback as the code. Just like I write pseudocode to draft my code’s logic, I should create a quick physical design of my piece at the same time I’m starting my code.

 

Here’s the code:

 

Class 6 – Lab 2 Serial Input to P5.js

Using a physical object to control my web browser

In this lab for Intro to Physical Computing, I’m using a physical object to control what’s happening in my web browser. To do this, I’m applying what I learned about in the last lab – asynchronous serial communication – to send a flex sensor’s data through my microcontroller, serial port, Arduino code, and finally to my p5.js sketch.

Like this!

[GIF: a flex sensor controlling the browser]

It’s not very common to control a web browser with external hardware via a laptop’s serial port. Personally, I don’t think I’ve come across this in my daily life. I’m curious as to why this isn’t always possible. I know “historical reasons” were mentioned in one of the ITP videos online. There also might not be enough daily applications to be worth building it into general consumer computers. And maybe it opens the door to nefarious activity?

But with this additional capability, I can add physical inputs from the world around me into my visual coded projects in p5.js, Processing, Max/MSP, and/or openFrameworks.

Part I: Reading smaller sensor values that fit into 1 byte, with raw binary numbers

First, add code to your Arduino IDE to read your microcontroller

This is some simple code to send the value of a flex sensor to your serial monitor, using serial communication with the command Serial.write().

Second, prepare the P5.serialcontrol app and P5.js serialport library 

To display the flex sensor readings in my web browser, the P5.serialcontrol app acts like an official translator between the physical and digital worlds. The app communicates serially with my microcontroller via the USB serial port, while also sending information to my HTML/JavaScript code online using web sockets. I believe a webSocket-to-serial server is also built in. As a note, P5.serialcontrol runs in the command line interface of my laptop (thankfully in the background, while I still get comfortable with Terminal).

[Diagram: socket-to-serial connection]

 

Third, set up your P5.js sketch to connect with your microcontroller

Next I’ll add some code to my p5.js sketch so that it’s connected to my USB serial port and microcontroller.

To do this, I upload the p5.serialport library as a file into my sketch online and mention it in my index.html file. In the lab, we were asked to add this exact text into the index.html file: <script language="javascript" type="text/javascript" src="p5.serialport.js"></script>

But Dan Shiffman had sent our class some simpler code, which worked well:

<script src="p5.serialport.js"></script>

[Screenshot: the index.html file]

Then I write this code below to ask for a list of available serial ports on my laptop. To do this, I first create an instance of the serialport library and set up a call back function to list my available ports.
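A minimal version of that code:

let serial;

function setup() {
  serial = new p5.SerialPort();  // an instance of the serialport library
  serial.on('list', printList);  // call back for when the list arrives
  serial.list();                 // ask for the list of available ports
}

function printList(portList) {
  for (let i = 0; i < portList.length; i++) {
    console.log(i + ' ' + portList[i]); // print each port name to the console
  }
}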

 

Next, Use Events and Call Backs to Create Behavior

I lost all my text in this section of the blog! DARN!

Basically, I talked about how to set up my p5.js code to expect events from my serial port, and how to define call back functions to perform if those events happen. For example, in this lab, if data comes in through the serial port, then perform a new behavior, such as displaying the incoming values on the screen. A simpler example in p5.js might be changing a ball from red to blue when the mouse is clicked, because I’ve written a “call back” function that sends the program to find the additional code I’ve written to perform that new behavior.

[GIF: incoming sensor values displayed on screen]

What’s Happening Here?

I’m imagining this all as a relay race with a special baton with written words on it, which is really data. Each runner waits for the last runner to pass it the baton.  But the runner needs to change the language of the baton’s words each time they receive it, so they can understand what it says!

In other words, the microcontroller sends bytes via serial communication using Serial.write(). When the computer receives a byte, it understands it as a ‘data event’. This triggers the serialEvent() command in p5.js, which stores the byte into a variable called inData, at the same time turning it into a number. From there, the draw() function takes that number and displays it on the web page.
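Here’s that relay race as a minimal sketch (the port name is a placeholder):

let serial;
let inData = 0;

function setup() {
  createCanvas(400, 300);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem1411'); // placeholder port name
  serial.on('data', serialEvent);       // a 'data' event triggers the call back
}

function serialEvent() {
  inData = Number(serial.read()); // one byte from Serial.write(), turned into a number
}

function draw() {
  background(255);
  text('sensor value: ' + inData, 20, 50); // draw() displays the latest value
}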

Draw a Graph With the Sensor Values

I also lost this text. : (

Here, the sensor’s value is being mapped to the x position of the graph lines being drawn.

[GIF: graphing the sensor values]

Part II: Reading larger sensor values that take more than 1 byte, with ASCII-encoded values

Aka reading serial data as a string

Because I’m using Serial.println(),  extra bytes will be used to communicate a carriage return and line break in between each sensor value.

On the p5.js sketch side, I add the serial.readLine() command as the method of interpreting the serial data. This command is unique in that it reads the incoming serial data as a string of bytes (not just one byte, as with the serial.read() command we used before). And when that string happens to be all numeric, we can convert it to a number, which is useful because we want numbers to display to the canvas.

However, at first this leads to an issue, because the p5.js sketch gets confused when it reads the carriage returns and line breaks, which are sent as the ASCII-encoded characters \r (carriage return) and \n (new line). When it reads those bytes, it displays nothing on the screen, which looks like gaps in the graph or flickers in the text display sketches.

To circumvent this, you need to be very explicit with the p5.js program and tell it to only display bytes coming in through the serial port that are actual ASCII-encoded numbers, not characters. To do so, you add some checks to the serialEvent() call back function. Here’s the complete code.
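The heart of it is a serialEvent() along these lines:

function serialEvent() {
  let inString = serial.readLine(); // read the ASCII-encoded string
  if (inString.length > 0) {        // ignore the empty reads
    inString = trim(inString);      // strip off the \r and \n characters
    inData = Number(inString);      // convert the remaining digits to a number
  }
}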

Conclusion

I’m beginning to see how much effort has been put into creating commands that allow someone to switch between reading raw binary and ASCII-encoded values. For now, I’m guessing that I’d personally switch between the two when testing new sensors. I’m sure there are other typical applications? For example, I’d use Serial.write() when testing a new sensor’s range with a simple mapped range that fits into one byte, and then switch over to Serial.println() to test applications of that new sensor. This is because I can now see how Serial.println() in the Arduino IDE and serial.readLine() in p5.js can quickly lead to needing more code to navigate interpretations of ASCII-encoded values.

 

Class 6 – Lab 1 Intro to Serial Communications

This lab helped me better understand serial communication from a microcontroller. Soon enough I’ll use what I’ve learned here to write programs in other languages that can interact with my microcontroller, such as p5.js.

For now, I’ll just learn how to send data from multiple sensors through my Arduino to my computer, and learn to format that data in my serial monitor so it’s easier to read.

In general, it’s good to know that serial data is sent byte by byte from one device to another, and it’s up to you how to interpret that data. But honestly, a lot of the decisions are already made for us based on common practice (for example, so far in school we’re using 9600 baud for the speed of the communication, and using 5 volt microcontrollers and laptops that can transmit and receive at that same voltage). From what I understand, what we have to decide is whether to interpret data as raw binary or ASCII, and whether to slow down receiving data so that the program doesn’t bog down with too much of it. For the most part, we want to use ASCII-encoded data so that it’s easier to debug in our serial monitor. To send data as ASCII, we can use the Serial.print() command.

Asynchronous Serial Communication

I found this definition of asynchronous serial communication on Sparkfun.com to be helpful.

“Asynchronous means that data is transferred without support from an external clock signal. This transmission method is perfect for minimizing the required wires and I/O pins, but it does mean we need to put some extra effort into reliably transferring and receiving data.” https://learn.sparkfun.com/tutorials/serial-communication 

My takeaway from this is that by not having to connect two devices to the same external “clock” with a bunch of wires and pins, we save a lot of physical labor (I think??). But we still need to write code using pre-determined signaling rules that make it possible for the devices to successfully talk to one another.

Initializing Communication Between Two Devices

To me, there are 6 things required to communicate between two devices, and have it be understandable through your serial monitor. Basically, they need to speak the same language at the same time.

  1. The data rate – the speed at which information from one device is sampled by another device. The most common data rate is 9600 baud, or bits per second. This means every 1/9600th of a second, the voltage’s value is interpreted as a new bit of data! Converted to bytes, that’s up to 1200 bytes per second (a little less in practice, since each byte is framed by start and stop bits)!
  2. The voltage levels representing a 1 or 0 bit – this depends on whether both your devices use the same voltage. If they don’t, you’ll need to map them to each other. For example, you’ll need to map 3.3 volts to a 5 volt device so that their 0s and 1s translate across devices.
  3. The meaning of those voltage levels – is the voltage signal “true”, using a high voltage as “1” and a low voltage as “0”? Or is the signal “inverted”, with the opposite reading of a low voltage as “0” and a high voltage as “1”?
  4. Wires to send and receive data on both the sending and receiving devices. These are also called “transmit” and “receive,” or “Tx” and “Rx.”
  5. A common ground connection, so that both devices have a common starting point to measure voltage by.
  6. How to interpret the data as incoming bytes // How to print that data to your serial monitor so you can read it – You need to decide when the beginning of the message is, when the end is, and what to do with the bytes in between.

 

ASCII vs. Binary: What “data language” should you use, and when?

In short, my understanding is that sticking to the raw binary values of your sensor reading, such as 101011, is useful because you don’t have to ask your program or serial monitor to spend time translating “data languages.” Raw binary is also more efficient as long as you are sending values below 255, because you can send these small numbers in one raw binary byte. I’m assuming this means your program can run faster. But anything above the value 255 needs more than one byte to be sent — as an ASCII string, a number like 880 needs three bytes, one for each digit.

However, ASCII “encoded strings” or values are ultimately better because we can actually read them to debug our code with the serial monitor. Who can read straight raw binary code anyway??

From what I understand, the creators of Arduino code decided to create two commands that let you switch between the two data languages.

  • The Serial.write() command sends the binary values of sensor readings. It doesn’t format data as ASCII characters. I BELIEVE we never see the Serial.write() command’s binary value in the Arduino IDE because the serial monitor is set up to display only ASCII characters. We have to use other serial monitor apps, such as CoolTerm, if we want to see those binary values. But that binary value is still used within our Arduino IDE app to execute whatever code we’ve written.

 

  • The Serial.print() command formats the value it sends as an ASCII-encoded decimal number. And if you use the Serial.println() command, you get the BONUS benefit of a carriage return and new line, which makes the data in your serial monitor easier to read. (ASCII is like a foreign language dictionary within your computer that translates one value language into another as requested.) My understanding is that the Serial.print() command has a built-in ability to translate the raw binary data of a sensor reading into ASCII. You need Serial.print() to send values higher than 255 because they won’t fit into a single byte. Higher values need 3 bytes (one for each digit of a number such as 880), plus any other bytes assigned to the punctuation you might want to see in your serial monitor. In general, Serial.print() is great to use because it returns values that are easier for someone to read than raw binary.

Code to practice sending data values for three sensors to your serial monitor

I was able to print all three sensor readings to my serial monitor with this code.

But I’m amazed that this code works without assigning the A0, A1 and A2 pins. How does it know which pins to read?? Are “0” and analogRead enough of a clue for it to work? (It turns out they are: analogRead(0) is shorthand for analog pin A0, and analog inputs don’t need a pinMode() assignment.)

void setup() {
  Serial.begin(9600); // start serial communication
}

void loop() {
  for (int thisSensor = 0; thisSensor < 3; thisSensor++) {
    int sensorValue = analogRead(thisSensor); // 0, 1, 2 read pins A0, A1, A2
    Serial.print(sensorValue);
    Serial.print(",");
  }
  Serial.println(); // end the line after each set of three readings
}

Advice on sending multiple sensors’ data to your serial monitor

You’ll want to make it easier to read multiple sensors’ values in your serial monitor. Otherwise, you’ll just get a long list of values and won’t be able to tell which belongs to which sensor.

To start with, you can use punctuation to format multiple binary values by adding in tabs, commas, line breaks, etc. This is demonstrated in the code above. However, if you’re using Serial.write(), you sacrifice a binary value for each punctuation value you use… you’re out of luck if your sensor has that same reading value! You also risk slowing down your program if data is constantly coming in, because there is nothing in your code to stop it. All this data gets stuck in your “serial buffer,” the part of your computer that holds incoming information.

Therefore, you can also add code to create a “call and response,” or break, in the flow of data coming from your sensors. You can require the device sending data to wait for a request from the other device once it’s ready to start, or once it’s done processing the data it already has in its serial buffer.

Punctuation

Using punctuation alone to separate sensor data is simple, in that you read each sensor’s pin and add code for a comma or line break after each one. But this method doesn’t prevent your program from slowing down while the device’s serial buffer fills up (with information arriving from the other device faster than it can be processed).

const int switchPin = 2;      // digital input
 void setup() {
   // configure the serial connection:
   Serial.begin(9600);
   // configure the digital input:
   pinMode(switchPin, INPUT);
 }
void loop() {
   // read the sensor:
   int sensorValue = analogRead(A0);
   // print the results:
   Serial.print(sensorValue);
   Serial.print(",");
   // read the sensor:
   sensorValue = analogRead(A1);
   // print the results:
   Serial.print(sensorValue);
   Serial.print(",");
   // read the sensor:
   sensorValue = digitalRead(switchPin);
   // print the results:
   Serial.println(sensorValue);
}

 

Flow Control, aka Call and Response, aka Handshaking

If you do need to prevent your program from slowing down, with a little more code you can require the device with data to wait until it’s been asked to send more. That way the serial buffer of your receiving device can finish what’s already on its plate.

As part of this “call and response” code, you make use of the Serial.available() command to find out how many bytes are available or waiting to be read. I believe this means it’s checking the serial buffer to find out what data remains to be read?

I tried the code below, but my loop is not stopping after each data sample to wait for me to enter another input. Why is this? (Looking at it again, I suspect it’s because the first block of reads and prints in loop() runs unconditionally on every pass — only the block inside if (Serial.available() > 0) actually waits for an incoming byte.) Also, just to clarify, the serial monitor’s text field at the top is used to send data to the microcontroller?

const int switchPin = 2;

void setup() {
  Serial.begin(9600);
  pinMode(switchPin, INPUT); // configure the digital input
  while (Serial.available() <= 0) {
    Serial.println("hello"); // send a starting message
    delay(300);              // wait 1/3 second
  }
}

void loop() {
  // read the sensor:
  int sensorValue = analogRead(A0);
  // print the results:
  Serial.print(sensorValue);
  Serial.print(",");

  // read the sensor:
  sensorValue = analogRead(A1);
  // print the results:
  Serial.print(sensorValue);
  Serial.print(",");

  // read the sensor:
  sensorValue = digitalRead(switchPin);
  // print the results:
  Serial.println(sensorValue);

  if (Serial.available() > 0) {
    int inByte = Serial.read(); // consume the incoming byte
    sensorValue = analogRead(A0);
    Serial.print(sensorValue);
    Serial.print(",");

    sensorValue = analogRead(A1);
    Serial.print(sensorValue);
    Serial.print(",");

    sensorValue = digitalRead(switchPin);
    Serial.println(sensorValue);
  }
}

 

 

Questions

There was a quick mention about how using println() in the draw() loop of your p5.js sketch will slow it down a lot, because the serial buffer will become too full. Instead, you should switch over to a call-and-response method to only get information when you need it. I was confused, but this might be cleared up in the second lab.

Class 5 – Iterations on Sun Song Player

In the last week I’ve made progress with my Sun Song Player.  See below for updates on the physical enclosure and code. As a quick reminder, my Sun Song Player is meant to play a song when the sun is bright enough, while it’s pressed against your window. Perhaps it breaks up your routine to notice the afternoon is passing.

Enclosure

My physical enclosure has seen two iterations in the last week. My first enclosure is described over here in this blog post.  My second interation is below.

Two major characteristics are missing while I test my design… While it’s frustrating to not work on these, delaying them lets me prototype cheaply while I continue to learn new skills.

For example, I’m not yet using translucent yellow acrylic, which will make it more visually interesting.  But for now I’m saving money until I have the best design for the enclosure.

Additionally, I wish the enclosure had a more interesting outline. It’s a little bland as a perfect circle. Ultimately I want it to have the outline of the first etched line on the front. However, I wasn’t sure how to fabricate the side of the enclosure to follow the unevenness of the wavy circle. Which material? I now have at least one idea – I can make it out of stacked cut acrylic, like this Pibow example.

[Photo: the second enclosure]

Below you can see I’ve moved my control panel to the side of my second enclosure, instead of the front. I will update the sizing of this panel’s holes once I finalize the user experience, my code, and components.

[Photo: the control panel on the side]

Below you’ll see the inside and back. There is a hole for a suction cup, which is coming in the mail! With a suction cup, I can start testing its behavior while on a window.

I need a second hole for the light sensor from Adafruit. I’ll add that when I know what size shape to cut after testing how it works.

I also need to test different speakers for sound quality. I’m taking recommendations. The piezo ups the cheese factor at the moment, and I’d like to avoid that. The project is already pretty bright and cheery!

[Photo: the inside and back of the enclosure]

 

Code

My code is now much closer to working than it was before. Having never coded this rapidly before, I’m on a learning curve! Shout out to Yang for helping me! Also thank you to Jia for letting me learn from her Christmas tree code.

While it takes an extra click to watch, you can see below that the song by the Beatles “Here Comes the Sun” plays when you press a button.

http://www.itpblogelizabethferguson.com/wp-content/uploads/2017/10/IMG_0110.mov http://www.itpblogelizabethferguson.com/wp-content/uploads/2017/10/IMG_0108.mov

However, you can see that the song doesn’t stop playing! I’m working on that…

I’ve started to test how to play three songs, which requires building VERY logical and tight code. I’m learning how to do that in ICM! I’m taking any advice.

The most challenging and important lesson so far is that buttons in DIY circuits behave much less predictably than buttons in regular life. I’m using simple push buttons right now, and their internal switch opens and closes many times while you think you’re only pressing once. With Yang’s help, I used several strategies to ignore this behavior, including debouncing, break statements, and pauses.

Looking forward to working on this more.