Final Documentation of ITP Winter Show Piece “Lingo Gizmo”

The “Lingo Gizmo” is a fabricated interface that lets people invent words for a missing dictionary. People collaborate over time to submit meanings that don’t have words yet, and invent original words for those meanings.

At the ITP Winter Show, I shared a prototype in which people could invent a word and assign an original meaning to it all in one interaction. I learned many things from this two-day user test that I can apply to a later version.

Check out this short video of the interaction below. 

 

 

You can play it yourself online! https://fergfluff.github.io/lingo_gizmo/

The code is hosted on GitHub. https://github.com/fergfluff/lingo_gizmo

Here are some fun examples of words and meanings that people created at the show.

  • ‘mlava, food for volcano monsters
  • lajayae, a person of Latin American origin wishing to live in New York City
  • juhnaheewah, the feeling that someone behind you is having a better time than you are.
  • dahvahai, good times, good mood
  • dayayay, to tell a donkey to calm down
  • erouhhtahfaher, a food that tastes like cement in a pail
  • fapaneuh, to strike fear into the enemy’s heart
  • kadaveaux, too many knishes
  • nabahoo, that feeling when you’re just not going to get out of the house today
  • payvowa, a special kind of grapefruit
  • Quaeingiwoo, a town in Western Connecticut

 

Inspirations & References

I created this project because I’m interested in how people build culture through shared experiences, and in the ways language acts as a tool for naming and codifying that culture. In some ways, this project is a slang maker, allowing people to name a feeling that others may have and give it a new status with its own word.

I also love creative word games such as Balderdash, a parlor game in which players vote on the best real or bluffed definitions of a chosen obscure dictionary word.

Lastly, I think of words as physically related to their meanings. The shape a word creates in one’s mouth can inform its meaning. So it wasn’t a stretch for me to ask users to create words by physically interacting with a mouth. Interestingly, there is a theory called the Bouba/Kiki Effect, which suggests that people across different cultures and languages are very likely to label the shape on the left below “kiki” and the one on the right “bouba”. This phenomenon suggests there may be a non-arbitrary, or purposeful, mapping between speech sounds and the visual shapes of objects.

500px-Booba-Kiki.svg.png

One last great reference, suggested to me by Allison Parrish, on faculty at ITP, is the Pink Trombone. It’s an online interface that lets you manipulate the inside of a mouth to generate accurate sounds. Very fun to play with.

 

How It Works

Many skills and strategies went into this project. See below for a summary.

Fabrication

I designed and sewed the face, teeth, and tongue myself, using patterns I developed with paper prototypes. I did not know much about sewing before starting the project!

The inside of the face includes a cardboard structure with a hinge and rubber band that lets the top of the mouth move down for sounds like “ma”, “ba”, and “wa”.

Code

In my 500+ lines of code, I’ve used the p5.js libraries to play the opening animation, cycle through sound files, add chosen sounds to the user’s word, and save the user’s inputs into the file name of the recording, which is created with p5’s recorder function.
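To give a flavor of how those pieces fit together, below is a heavily condensed sketch of the cycle / add / record flow, not my actual code. It assumes p5.sound is loaded and hypothetical phoneme files like ba.mp3, and uses key presses where the real project uses the fabricated mouth.

// A condensed sketch of the flow described above (not the full 500+ lines).
// Assumes p5.sound and hypothetical phoneme files such as "ba.mp3".
let names = ['ba', 'ma', 'wa'];
let phonemes = [];
let current = 0;      // which phoneme the user is cycling through
let word = [];        // phonemes added to the word so far
let recorder, recording;

function preload() {
  for (let n of names) phonemes.push(loadSound(n + '.mp3'));
}

function setup() {
  createCanvas(400, 200);
  recorder = new p5.SoundRecorder();
  recording = new p5.SoundFile();
}

function keyPressed() {
  if (key === 'c') {
    // cycle to the next phoneme and preview it
    current = (current + 1) % phonemes.length;
    phonemes[current].play();
  } else if (key === 'a') {
    // add the current phoneme to the user's word
    word.push(names[current]);
  } else if (key === 'd') {
    // "done": record the word being played back, then save the file
    // with the user's inputs spelled into the file name
    recorder.record(recording);
    for (let i = 0; i < word.length; i++) {
      setTimeout(() => phonemes[names.indexOf(word[i])].play(), i * 400);
    }
    setTimeout(() => {
      recorder.stop();
      saveSound(recording, word.join('') + '.wav');
    }, word.length * 400 + 600);
  }
}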

Physical Computing

I used an Arduino Mega microcontroller because it offers enough inputs to accommodate the project’s nine sensors. Five sensors are made of conductive fabric and Velostat circuits. Three are force-sensing resistors. The last sensor is an accelerometer, an ADXL326 chip, which measures the x-axis movement of the top of the mouth.

All nine input values are processed by a sketch saved on the microcontroller. The sketch takes the analog values and turns them into a changing list of 0s and 1s to signify whether each sensor has been turned “off” or “on” by the user. The p5.serialport library lets me send that list from the microcontroller to the browser. My laptop runs a local server to serve up my code alongside the serial data, so that the user can interact with the fabricated mouth interface.
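For reference, here is a minimal sketch of the browser side of that hand-off. It assumes p5.serialport.js is loaded and that the microcontroller sends one comma-separated line of 0s and 1s per update; the port name is a placeholder.

// A minimal sketch of the receiving side, assuming the Arduino sends
// lines like "0,1,0,0,1,0,0,0,1" (one 0/1 flag per sensor).
let serial;
let sensorStates = [];

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1421');  // placeholder port name
  serial.on('data', gotData);
}

function gotData() {
  const line = serial.readLine();       // read up to the newline
  if (line.length > 0) {
    // split the comma-separated flags into an array of numbers
    sensorStates = line.trim().split(',').map(Number);
  }
}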

Design and User Feedback

Many rounds of design, branding, and user feedback informed this project. I used lots of paper and pen to map out my ideas, and used Illustrator to finalize the design and system logic of my project.  Over time I had several formal user feedback sessions with classmates, and quickly asked for input at critical moments in the process.

 

Next Time

The ITP Winter Show confirmed that, had I another couple of days, my list of additional features was in the right order. Here they are!

Fabrication

1. Get rid of the mouse by creating a box with three buttons that let the user press “Add”, “Start Over”, and “Done” while interacting with the mouth interface. This would simplify what the user has to touch in order to complete the first interaction.

Code

2. Create a user flow of three pages, each with a distinct purpose. First, an opening animation to attract people to play. Second, a page to create one’s word. Third, a page to add their word to the dictionary. Currently, it’s all on one page, which is a little too much for someone to get through at once.

Physical Computing

3. While I learned a lot using different sensors, next time I would use only one kind of sensor for the same human gesture of pressing down. I asked people to press each “fabric button”, but underneath were different sensors that each required a different kind of touch.

Overall Design

4. On a larger scale, my first prototype demonstrated that people understand and are interested in the concept, feel ownership over their words, have a lot of fun with the current interaction, and are very surprised and delighted by the end. However, the definitions don’t have as much variety in mood or tone as I’d like to encourage in a final version of the project. As of now, people add definitions that match the playful and slightly absurd interaction I’ve created (strange mouth parts, anyone?). Very few are introspective, although two people did submit definitions about wishing to move to NYC or worrying that someone else is having a better time than they are.

One thing I want to do is rerecord the audio to include more variety in phonemes. Right now they all end in “ah” because they are taken from instructional resources online. Including “ack” and not only “kah” will give people more choice.

Considering my recordings all end in “ah”, any word people make sounds pretty silly. The inviting but strange branding and design I created for the show fit that experience. Next time, I can change the design to set a different tone for people’s interactions, in hopes of giving people room to submit definitions with more variety to them.

 

 


System Diagrams

Here are a few diagrams of my work.

Phonemes Chart

 

My circuits as a schematic and illustrations.

Meaning Maker Circuit Illustrations and Schematics

 

These webpage layouts are close to what I would finish creating with more time.

Webpage Design_Build A Word-01
Webpage Design_Build A Word-02
Webpage Design_Build A Word-03
Webpage Design_Share a Meaning-01
Webpage Design_New Meanings and Words-01

 

 

 

 

 

Class 10 Intro to Comp Media – Proposal “Collage Character Portrait”

Project Summary 

Ideal Scenario

I’m hoping to eventually design a large projected interaction. People can playfully answer “What main character are you in a novel?” People would drag images of minor characters, similes and metaphors to the outline of their figure, as a way of defining themselves by what surrounds them. The interaction would take a few steps:

  • Enter a photobooth-like environment
  • Snap a photo of their whole body, which is turned into a silhouette. This is projected on a wall.
  • Drag from a “pile” of various characters/objects at the base of the projection to surround their figure’s outline.
  • At the end of the interaction, people can print out their picture and take it home with them. Or post online. Etc.

Prototype for Now

But for now, I’ll make a simple prototype for my Intro to Computational Media final… It will let you use a laptop’s camera to capture a portrait and turn it black and white (or other colors), drag objects to your portrait’s outline with the mouse or possibly with touch, and then finally email yourself the photo or post it online.
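As a tiny proof of concept for the camera step, something like this p5.js sketch would do it; the threshold value is a guess I’d tune later.

// Capture the laptop camera and threshold the pixels into black and white.
let video;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);            // keep canvas and camera pixel arrays the same size
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();               // we draw the processed pixels ourselves
}

function draw() {
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // average the RGB channels, then threshold into black or white
    const bright = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    const v = bright > 100 ? 255 : 0;   // 100 is a guess to tune
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
}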

Context and Audience

This is definitely for fun, is interactive, and is meant to give people a moment to consider how what is outside their bodies can define their identities. (Which is in great contrast to my Intro to Physical Computing project, which externalizes what’s going on in people’s heads. You can see more about that project over here.)

Illustrations

Looking forward to redrawing these sketches next time…!

This is my “Prototype for Now”

IMG_0614
IMG_0615

 

Here is my Ideal Scenario

Later on, I hope to build it further into this type of interaction:

IMG_0612

IMG_0613

 

Why? And Inspirations

This idea is based on a digital humanities project I contributed to this summer. You can learn more about it below. I’m interested in how people or characters are defined in creative ways by objects or the people around them.

Background (not necessary to read unless you’re really interested! Mostly for me to reference later.)

This digital humanities project was led by my friends Sarah Berkowitz and James Ascher at the University of Virginia. They used textual analysis and GitHub to explore the nature of character in 17th-century literature and new practices of digital transcription in the 21st century. Their project focused on Characters, the second volume of a book called Genuine Remains by Samuel Butler, a 17th-century English author. Each chapter in the book is a brief series of “jokes” about a stereotypical person, such as “A Wooer,” “An Astrologer,” and a “Corrupt Judge”. The descriptions are biting, witty, and act a bit like a dictionary of people. You can see the transcription and analysis online here on this website and over here on GitHub.

While working on the project, I was struck that only men were featured as main characters, which is unsurprising for the time. But the absence of women made us wonder even further about ALL the “invisible ink” minor characters mentioned in each chapter. How do these passing characters add definition and meaning to the main character? Can these “invisible” characters be made more visible?

Sarah did some amazing analysis of a group of chapters to categorize “non-specific humans,” “proper names,” “mythological creatures,” and “animals”. I’ve been inspired to take this type of analysis and let people play with it in a visual and fun way.

 

 

Source Material

I need more references, so please suggest them! I did find:

  • This interactive window projection by NuFormer, a group in the Netherlands, came up in my Google search. I like how it turns your body into another texture and outline.

 

  • This Pinterest collection of “interactive wall installations” is helpful.

https://www.pinterest.com/andreazampiva/interactive-wall-installation/?lp=true

  • There I found this interesting advocacy installation against child abuse. It seems to just use projection from the back to turn your body into a black shadow.

https://www.pinterest.com/pin/417286721705841754/

 

  • This is also cool:

https://www.pinterest.com/pin/18577417189805861/

 

Code

For my simple prototype, I’ll use:

  • The Coding Train videos on how to use a laptop’s camera to create portraits and modify pixels to become black and white, or whatever colors I choose.
  • A library called matter.js that Dan Shiffman recommended, which is a 2D physics engine for the web. I can use it to mimic the effect of a “pile of trash” of objects on the ground that people can “pick up” and attach to the outside of their portrait (see the sketch after this list).
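Here’s a rough sketch of how that pile might work, pairing matter.js with p5.js. The boxes stand in for the collage objects, and all the numbers are placeholders.

// Draggable "pile" of bodies: matter.js runs the physics, p5 draws them.
let engine;
let boxes = [];

function setup() {
  const canvas = createCanvas(600, 400);
  pixelDensity(1);   // keeps matter.js mouse coordinates lined up with the canvas
  engine = Matter.Engine.create();
  // a static floor so the pile has somewhere to land
  const ground = Matter.Bodies.rectangle(300, 390, 600, 20, { isStatic: true });
  Matter.World.add(engine.world, ground);
  // a small pile of boxes standing in for the collage objects
  for (let i = 0; i < 10; i++) {
    boxes.push(Matter.Bodies.rectangle(100 + i * 40, 50, 30, 30));
  }
  Matter.World.add(engine.world, boxes);
  // let the mouse "pick up" bodies and drag them around
  const mouse = Matter.Mouse.create(canvas.elt);
  const mouseConstraint = Matter.MouseConstraint.create(engine, { mouse: mouse });
  Matter.World.add(engine.world, mouseConstraint);
}

function draw() {
  background(240);
  Matter.Engine.update(engine);   // advance the physics one step
  rectMode(CENTER);
  for (const b of boxes) {
    push();
    translate(b.position.x, b.position.y);
    rotate(b.angle);
    rect(0, 0, 30, 30);
    pop();
  }
}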

 

Collecting ideas for a title and 1-sentence description

I am literally collecting ideas. Let me know!

Project Planning

I will be making a spreadsheet to give myself a certain number of hours each week until the finals, so that I’m forced to keep this manageable.

…BECAUSE WE ONLY HAVE THREE CLASSES / FOUR WEEKS LEFT??

November 15

November 22 (no class because of Thanksgiving)

November 29

December 6

 

User Testing of Interface/Environment

I’ll also be doing a few user tests with just paper and pencils, to understand if there’s anything elegantly simple I can do to make the interaction more compelling and easier to understand.

I also need to ask people what metaphors or similes they would use to describe themselves, so I can be sure to have a variety of them on hand.

Questions

I need more code references for the behavior of snapping something to the outline of a shape. I didn’t get far searching online. Otherwise I’ll just use the mouse to move the collaged objects around, plus some very suggestive user interface to insist the user put them around their outline…

I also need to find out how to drag a shape next to another, so that one is partially hidden behind the other. In my mind, the outline of the figure is the first layer, and the shapes sit a little behind the figure as a second layer.

 

Class 8 Intro to Comp Media – “Create Your Own Birding List from Live Data”

I’m a big fan of birds! Not only can they fly, but they are avian dinosaurs, display genetic diversity on a grand scale, and are linked to the health of the environments around us.

This project uses APIs to let you view the latest bird sightings in your location, as logged by fellow citizen scientists like us. Then you can ask for images of any bird on the list. Eventually I’d like you to be able to mark whether or not you’ve seen any particular bird, so you can create a list of “birds to see” for yourself.

Here is a video and the sketch itself.

 

 

I’m using two APIs to make this work.

First, I pull in the latest data from the Cornell Lab of Ornithology’s citizen-science platform, eBird. eBird allows people to create their own birding lists with a phone app, then shares that information with everyone else through their own accounts and through Cornell’s free API services. In my project, I request the latest bird sightings by building an API URL that uses the latitude and longitude of Brooklyn. A visitor can also enter their own coordinates, although changing coordinates isn’t always working for other people at the moment and I’m not sure why. I’m also not sure why I had to hardcode “&lat=” in the lat input field, but it works.

Second, I use images from Flickr via their API. To do so, I take the bird name logged by a citizen scientist out of the JSON data returned by eBird’s API call. I turn that name into a tag that forms a search term added to a Flickr API URL, which searches their website and returns a JSON file. Then I use that Flickr JSON’s contents to create a SECOND URL for the first photo mentioned in the file. That second photo URL is what I use to display the image itself.
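Roughly, the chain looks like this. I’m showing the older eBird 1.1 endpoint and a placeholder Flickr key, so treat the URL details as illustrative rather than exact.

// A condensed sketch of the two-API chain using p5's loadJSON().
const lat = 40.69, lng = -73.99;   // Brooklyn
let birdName = '';

function setup() {
  noCanvas();
  const ebirdUrl = 'https://ebird.org/ws1.1/data/obs/geo/recent' +
                   '?lat=' + lat + '&lng=' + lng + '&maxResults=10&fmt=json';
  loadJSON(ebirdUrl, gotSightings);
}

function gotSightings(sightings) {
  birdName = sightings[0].comName;   // common name field in eBird's JSON
  const flickrUrl = 'https://api.flickr.com/services/rest/' +
    '?method=flickr.photos.search&api_key=YOUR_FLICKR_KEY' +
    '&tags=' + encodeURIComponent(birdName) +
    '&format=json&nojsoncallback=1';
  loadJSON(flickrUrl, gotPhotos);
}

function gotPhotos(result) {
  // build the SECOND URL for the first photo in the search results
  const p = result.photos.photo[0];
  const imgUrl = 'https://farm' + p.farm + '.staticflickr.com/' +
                 p.server + '/' + p.id + '_' + p.secret + '.jpg';
  createImg(imgUrl, birdName);
}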

This was a lot of fun to work on. I’m amazed I can pair together live data into something I’d have fun using!  I also love that this is based on the work of fellow citizen scientists!

Here is my code. I really need to switch over to Atom and GitHub…

Index.html file

https://gist.github.com/fergfluff/7a235bde9d7a661015f8089149e19b2d

Sketch.js

https://gist.github.com/fergfluff/01b21f8d42e9a37656dcd44b021c46a2

Class 6 Small Project – “Squeeze a Lime!”

In this project, I created a design to squeeze limes. I’m imagining this as part of a game where you mix your own cocktails using limes…

I connected three sensors to control three images in my p5.js sketch.  The design is online over here, although you’d need my circuit for it to work! http://alpha.editor.p5js.org/fergfluff/sketches/B1ZDYUMa-

Here’s the interaction.

giphy_limes2

How It Works

Read on below for my thoughts on the physical design. As for the code, I’m sending data from my sensors through my serial port and the P5.serialcontrol app, and into my p5.js sketch online. I’ve written code to do the following (a condensed sketch follows the list):

  • Expect data coming in from my USB serial port
  • Create a string called inString of incoming data, but only up until the ASCII character values for carriage return and new line
  • Check that data REALLY IS coming in
  • Check that the data is NOT “hello” (which I used in my “call and response” code in Arduino to require that my microcontroller be very polite and wait to be asked to send data)
  • Create a new variable called sensors that stores the result of separating the “inString” numeric values at the commas
  • Create a “counter”, or for loop, to step through the array of my sensors
  • Hand each sensor its own value from the sensors variable
  • Draw three arcs that get smaller the more you bend the flex sensors, by subtracting the sensor values from the dimensions of the arcs
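Condensed into a sketch, that logic looks roughly like this; the port name is a placeholder, and my full code is linked at the end of the post.

// Parse three comma-separated flex-sensor values and draw shrinking arcs.
let serial;
let sensors = [0, 0, 0];

function setup() {
  createCanvas(400, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1421');   // placeholder port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  const inString = serial.readLine();    // up to the carriage return / new line
  if (inString.length > 0) {
    if (trim(inString) !== 'hello') {
      sensors = split(trim(inString), ','); // separate the values at the commas
    }
    serial.write('x');   // call and response: ask for the next reading
  }
}

function draw() {
  background(230);
  for (let i = 0; i < sensors.length; i++) {
    // arcs shrink the more each flex sensor bends
    const size = max(20, 300 - Number(sensors[i]));
    arc(80 + i * 120, 200, size, size, 0, PI);
  }
}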

 

Next time

I spent a lot of time understanding the labs about serial communication with p5.js, which was time well spent! Therefore, this small project is more about demonstrating that understanding than it is about my ideas or execution. But next time I would spend just a little more time prototyping my physical design at the beginning as well, to make sure the code and interaction support each other as successfully as possible.

From the start, I had the idea of creating a sketch to squeeze limes because I thought the flexible sensors afforded a squeezing motion. As for an enclosure, I imagined I could cover the sensors with a lime costume of sorts, so that the exterior of my circuit suggested they were limes – and thus, you should squeeze them!

Ideally, though, I would have tested this physical prototype at the start. I’d have quickly realized my assumption that the flexible sensors afford a squeezing motion was incorrect! It’s really more of a pulling down gesture. That may sound like a minor difference, but it caused a big disconnect in the user interaction of trying to squeeze limes. Squeezing doesn’t work! Pulling does! Why am I pulling on limes??

Also, my idea of a “lime costume” wasn’t successful even as a prototype. I probably need a different kind of sensor. I did try the long flex sensor, but I’d need a well-thought-out enclosure with a very strong base, so that your fingers or thumb can hold on while the rest of the hand does the squeezing.

It looks like a caterpillar! Not a lime.

IMG_0280

The Takeaway

My takeaway is that even though coding is harder for me than prototyping with construction paper, construction paper gives JUST as much design feedback as the code. Just as I would write pseudocode to draft my code’s logic, I should create a quick physical mock-up of my piece at the same time I’m starting my code.

 

Here’s the code:

https://gist.github.com/fergfluff/8f5bfdfa52f8aac8b8b0ab720a034455

 

Class 6 – Lab 2 Serial Input to P5.js

Using a physical object to control my web browser

In this lab for Intro to Physical Computing, I’m using a physical object to control what’s happening in my web browser. To do this, I’m applying what I learned about in the last lab – asynchronous serial communication – to send a flex sensor’s data through my microcontroller, serial port, Arduino code, and finally to my p5.js sketch.

Like this!

giphy_one sensor with serial.gif

It’s not very common to control a web browser with external hardware via a laptop’s serial port. Personally, I don’t think I’ve come across this in my daily life. I’m curious as to why this isn’t more widely possible. I know “historical reasons” were mentioned in one of the ITP videos online. There also might not be enough daily applications to be worth building it into general consumer computers. And maybe it opens the door to nefarious activity?

But with this additional capability, I can add physical inputs from the world around me into my visual coded projects in p5.js, Processing, Max/MSP, and/or openFrameworks.

Part I: Reading smaller sensor values that fit into 1 byte, with raw binary numbers

First, add code to your Arduino IDE to read your microcontroller

This is some simple code to send the value of a flex sensor to your serial monitor, using serial communication with the command Serial.write().

https://gist.github.com/fergfluff/96aba8490bd9f66fb5c6ce9dfb29a14a

Second, prepare the P5.serialcontrol app and P5.js serialport library 

To display the flex sensor readings in my web browser, the P5.serialcontrol app acts like an official translator between the physical and digital worlds. The app communicates serially with my microcontroller via the USB serial port, while also sending information to my HTML/JavaScript code using web sockets. I believe it has a webSocket-to-serial server built in. As a note, P5.serialcontrol runs in the command line interface of my laptop (thankfully in the background, while I still get comfortable with Terminal).

socket-serial-connection-1.png

 

Third, set up your P5.js sketch to connect with your microcontroller

Next I’ll add some code to my p5.js sketch so that it’s connected to my USB serial port and microcontroller.

To do this, I upload the P5.serialport library as a file into my sketch online and mention it in my index.html file. In the lab, we were asked to add this exact text into the index.html file: <script language="javascript" type="text/javascript" src="p5.serialport.js"></script>

But Dan Shiffman had sent our class some simpler code, which worked well:

<script src="p5.serialport.js"></script>

Screen Shot 2017-10-15 at 3.39.07 PM.png

Then I write the code below to ask for a list of available serial ports on my laptop. To do this, I first create an instance of the serialport library and set up a callback function to list my available ports.

https://gist.github.com/fergfluff/54376046ed1a91d23ec1e693b25537d9

 

Next, Use Events and Callbacks to Create Behavior

I lost all my text in this section of the blog! DARN!

Basically, I talked about how to set up my p5.js code to expect events from my serial port, and to define callback functions to run if those events happen. For example, in this lab: if data comes in through the serial port, perform a new behavior, such as displaying the incoming values on the screen. A simpler example in p5.js might be changing a ball from red to blue when I click my mouse, because I’ve written a “callback” function that points to additional code to perform that new behavior.
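Here is that simpler ball example written out, since it shows the callback pattern in miniature:

// mousePressed() is a callback: p5 calls it for us when the click event happens.
let ballColor;

function setup() {
  createCanvas(200, 200);
  ballColor = color(255, 0, 0);   // start red
}

function draw() {
  background(220);
  fill(ballColor);
  ellipse(100, 100, 60, 60);
}

function mousePressed() {
  ballColor = color(0, 0, 255);   // turn blue on click
}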

giphy_one sensor with serial

What’s Happening Here?

I’m imagining this all as a relay race with a special baton with written words on it, which is really data. Each runner waits for the last runner to pass it the baton.  But the runner needs to change the language of the baton’s words each time they receive it, so they can understand what it says!

In other words, the microcontroller sends bytes via serial communication using Serial.write(). When the computer receives a byte, it understands it as a “data event”. This triggers the serialEvent() callback in p5.js, which stores the byte in a variable called inData, turning it into a number at the same time. From there, the draw() function takes that number and displays it on the web page.
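In sketch form (with a placeholder port name), that relay looks like this:

// serial.read() returns the incoming byte as a number; draw() displays it.
let serial;
let inData = 0;

function setup() {
  createCanvas(400, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1421');  // placeholder port name
  serial.on('data', serialEvent);       // fires on each 'data' event
}

function serialEvent() {
  inData = Number(serial.read());       // one byte, read as a number 0-255
}

function draw() {
  background(255);
  text('sensor value: ' + inData, 20, 100);
}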

Draw a Graph With the Sensor Values

I also lost this text. : (

Here, the sensor’s value is being mapped to the x position of the graph lines being drawn.

giphy_one sensor as a graph

https://gist.github.com/fergfluff/54376046ed1a91d23ec1e693b25537d9

Part II: Reading larger sensor values that fit into more than 1 byte, with ASCII-encoded values

Aka reading serial data as a string

Because I’m using Serial.println(), extra bytes are used to communicate a carriage return and line break between each sensor value.

On the p5.js sketch side, I add the serial.readLine() command as the method of interpreting the serial data. This command is unique in that it reads the incoming serial data as a string of bytes (not just one byte, as with the serial.read() command we used before). And when that string happens to be all numeric, it converts it to a number, which is useful because we want numbers to display on the canvas.

However, at first this leads to an issue: the p5.js sketch gets confused when it reads the carriage returns and line breaks, which are sent as the ASCII-encoded characters \r (carriage return) and \n (new line). When it reads those bytes, it displays nothing on the screen, which looks like gaps in the graph or flickers in the text display sketches.

To circumvent this, you need to be very explicit with the p5.js program and tell it to only display bytes coming in through the serial port that are actual ASCII-encoded numbers, not characters. To do so, you add a check to the serialEvent() callback function, sketched below.
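Since the complete code was lost with the rest of this section, here is a sketch of what that check can look like; it keeps only lines that convert cleanly to a number.

// Guarded serialEvent(): ignore empty lines and non-numeric fragments.
let serial;
let inData = 0;

function setup() {
  createCanvas(400, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1421');    // placeholder port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  const inString = serial.readLine();     // may be '' between complete lines
  if (inString.length > 0) {
    const value = Number(trim(inString)); // trim() strips the \r\n first
    if (!isNaN(value)) {
      inData = value;                     // keep only actual numbers
    }
  }
}

function draw() {
  background(255);
  text('sensor value: ' + inData, 20, 100);
}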

Conclusion

I’m beginning to see how much effort has been put into creating commands that let someone switch between reading raw binary and ASCII-encoded values. For now, I’m guessing I’d personally switch between the two when testing new sensors, though I’m sure there are other typical applications. For example, I’d use Serial.write() when testing a new sensor’s range with a simple mapped range that fits into one byte, and then switch over to Serial.println() to test applications of that new sensor. This is because I can now see how Serial.println() in the Arduino IDE and serial.readLine() in p5.js can quickly lead to needing more code to navigate interpretations of ASCII-encoded values.

 

Class 6 – Lab 1 Intro to Serial Communications

This lab helped me better understand serial communication from a microcontroller. Soon enough I’ll use what I’ve learned here to write programs in other languages, such as p5.js, that can interact with my microcontroller.

For now, I’ll just learn how to send data from multiple sensors through my Arduino to my computer, and learn to format that data in my serial monitor so it’s easier to read.

In general, it’s good to know that serial data is sent byte by byte from one device to another, and it’s up to you how to interpret that data. But honestly, a lot of the decisions are already made for us by common practice (for example, so far in school we’re using 9600 baud for the speed of the communication, and 5-volt microcontrollers and laptops that can transmit and receive at that same voltage). From what I understand, what we have to decide is whether to interpret data as raw binary or ASCII, and whether to slow down receiving data so that the program doesn’t bog down with too much of it. For the most part, we want to use ASCII-encoded data so that it’s easier to debug in our serial monitor. To send ASCII-encoded data, we can use the Serial.print() command.

Asynchronous Serial Communication

I found this definition of asynchronous serial communication on Sparkfun.com to be helpful.

“Asynchronous means that data is transferred without support from an external clock signal. This transmission method is perfect for minimizing the required wires and I/O pins, but it does mean we need to put some extra effort into reliably transferring and receiving data.” https://learn.sparkfun.com/tutorials/serial-communication 

My takeaway from this is that by not having to connect two devices to the same external “clock” with a bunch of wires and pins, we save a lot of physical effort. But we still need to write code using pre-determined signaling rules that make it possible for the devices to successfully talk to one another.

Initializing Communication Between Two Devices

To me, there are six things required to communicate between two devices and have it be understandable through your serial monitor. Basically, the devices need to speak the same language at the same time.

  1. The data rate – the speed at which information from one device is sampled by another. The most common data rate is 9600 baud, or bits per second. This means every 1/9600th of a second, the voltage’s value is interpreted as a new bit of data! Converted to bytes, this means up to 1200 bytes can be sent in one second!
  2. The voltage levels representing a 1 or 0 bit – this depends on whether both your devices use the same voltage. If they don’t, you’ll need to map them to each other. For example, you’ll need to map 3.3 volts to a 5 volt device so that their 0s and 1s translate across devices.
  3. The meaning of those voltage levels – is the voltage signal “true”, using a high voltage as “1” and a low voltage as “0”? Or is the signal “inverted”, with the opposite reading of a low voltage as “0” and a high voltage as “1”?
  4. Wires to send and receive data on both the sending and receiving devices. These are also called “transmit” (Tx) and “receive” (Rx).
  5. A common ground connection, so that both devices have a common starting point to measure voltage by.
  6. How to interpret the data as incoming bytes, and how to print that data to your serial monitor so you can read it. You need to decide where the beginning of the message is, where the end is, and what to do with the bytes in between.

 

ASCII vs. Binary: What “data language” should you use, and when?

In short, my understanding is that sticking to the raw binary values of your sensor reading, such as 101011, is useful because you don’t have to ask your program or serial monitor to spend time translating “data languages”. Raw binary is also more efficient as long as you are sending values below 255, because these small numbers fit in a single raw binary byte. I’m assuming this means your program can run faster. But anything above 255 no longer fits in one byte: in raw binary it needs a second byte, and sent as ASCII text a number like 880 needs three bytes, one per digit.

However, ASCII “encoded strings” or values are ultimately better because we can actually read them to debug our code with the serial monitor. Who can read straight raw binary code anyway??

From what I understand, the creators of Arduino code decided to create two commands that let you switch between the two data languages.

  • The Serial.write() command sends the binary values of sensor readings. It doesn’t format data as ASCII characters. I BELIEVE we never see the binary value Serial.write() sends in the Arduino IDE because the serial monitor is set up to only display ASCII characters. We have to use other serial monitor apps, such as CoolTerm, if we want to see those binary values. But that binary value is still used within our Arduino IDE app to execute whatever code we’ve written.

 

  • The Serial.print() command formats the value it sends as an ASCII-encoded decimal number. And if you use the Serial.println() command, you get the BONUS of a carriage return and new line, which makes the data in your serial monitor easier to read. (ASCII is like a foreign-language dictionary within your computer that translates one value language into another as requested.) My understanding is that Serial.print() has a built-in ability to translate the raw binary data of a sensor reading into ASCII. You need Serial.print() to send values higher than 255 because they won’t fit into a single byte; higher values need three bytes (one for each digit of a number such as 880), plus any other bytes assigned to the punctuation you might want to see in your serial monitor. In general, Serial.print() is great to use because it returns values that are much easier for someone to read than raw binary.

Code to practice sending data values for three sensors to your serial monitor

I was able to print all three sensor readings to my serial monitor with this code.

But I’m amazed that this code works without assigning the A0, A1, and A2 pins. How does it know which pins to read? It turns out analogRead() interprets 0, 1, and 2 as the analog channels A0 through A2, and analog inputs don’t need a pinMode() call.

void setup() {
  Serial.begin(9600);  // open the serial connection (added so this snippet runs on its own)
}

void loop() {
  // analogRead(0..2) reads analog channels A0 through A2
  for (int thisSensor = 0; thisSensor < 3; thisSensor++) {
    int sensorValue = analogRead(thisSensor);
    Serial.print(sensorValue);
    Serial.print(",");
  }
}

Advice on sending multiple sensors’ data to your serial monitor

You’ll want to make it easier to read multiple readings of sensors in your serial monitor. Otherwise, you’ll just get a long list of values and won’t be able to tell which belongs to which sensor.

To start, you can use punctuation to format multiple binary values by adding in tabs, commas, line breaks, etc. This is demonstrated in the code above. However, if you’re using Serial.write(), you sacrifice a binary value for each punctuation value you use… you’re out of luck if your sensor has that same reading value! You also risk slowing down your program if data is constantly coming in, because there is nothing in your code to stop it. All this data gets stuck in your “serial buffer,” the part of your computer’s memory that holds incoming information.

Therefore, you can also add code to create a “call and response”, or break, in the flow of data coming from your sensors. You can require the sending device to wait for a request from the other device once it’s ready to start, or once it’s done processing the data already in its serial buffer.

Punctuation

Using punctuation alone to separate sensor data is simple: you read each sensor’s pin and add code for a comma or line break after each one. But this method doesn’t prevent your program from slowing down while the device’s serial buffer fills up with information from the other device faster than it can be processed.

const int switchPin = 2;      // digital input

void setup() {
  // configure the serial connection:
  Serial.begin(9600);
  // configure the digital input:
  pinMode(switchPin, INPUT);
}

void loop() {
  // read the sensor:
  int sensorValue = analogRead(A0);
  // print the results:
  Serial.print(sensorValue);
  Serial.print(",");
  // read the sensor:
  sensorValue = analogRead(A1);
  // print the results:
  Serial.print(sensorValue);
  Serial.print(",");
  // read the sensor:
  sensorValue = digitalRead(switchPin);
  // print the results:
  Serial.println(sensorValue);
}

 

Flow Control, aka Call and Response, aka Handshaking

If you do need to prevent your program from slowing down, with a little more code you can require the device with data to wait until it’s been asked to send more. That way the serial buffer of your receiving device can finish what’s already on its plate.

As part of this “call and response” code, you make use of the Serial.available() command to find out how many bytes are available, or waiting, to be read. I believe this means it’s checking the serial buffer to find out what data remains to be read?

I tried the code below, but my loop is not stopping after each data sample to wait for me to enter another input. Why is this? (Looking at it again, I suspect it’s because the prints at the top of loop() run on every pass; only the block inside if (Serial.available() > 0) waits for input.) Also, just to clarify, the serial monitor’s text field at the top is used to send data to the microcontroller?

const int switchPin = 2;

void setup() {
  Serial.begin(9600);
  while (Serial.available() <= 0) {
    Serial.println("hello"); // send a starting message
    delay(300);              // wait 1/3 second
  }
}

void loop() {
  // these reads and prints run on every pass through loop(),
  // whether or not anything has arrived over serial:
  int sensorValue = analogRead(A0);
  Serial.print(sensorValue);
  Serial.print(",");

  sensorValue = analogRead(A1);
  Serial.print(sensorValue);
  Serial.print(",");

  sensorValue = digitalRead(switchPin);
  Serial.println(sensorValue);

  // only this block waits for an incoming byte before sending again:
  if (Serial.available() > 0) {
    int inByte = Serial.read();
    sensorValue = analogRead(A0);
    Serial.print(sensorValue);
    Serial.print(",");

    sensorValue = analogRead(A1);
    Serial.print(sensorValue);
    Serial.print(",");

    sensorValue = digitalRead(switchPin);
    Serial.println(sensorValue);
  }
}

 

 

Questions

There was a quick mention that using println() in the draw() loop of your p5.js sketch will slow it down a lot, because the serial buffer becomes too full. Instead, you should switch over to a call-and-response method to get information only when you need it. I was confused, but this might be cleared up in the second lab.

Class 5 – Invisible Ink Characters – Fabricating with two materials

I created a main character surrounded by minor characters that define it, made of brass metal and layered colored paper cut by the laser cutter.

IMG_0158

As for fabrication skills and materials, I wanted to learn how to print my own sketches as layered pieces. It was nice to create something more handmade (while my Adobe Suite skills catch up to what’s in my mind…). Next time I’d like to learn how to use the vinyl cutter. I didn’t go the vinyl route this time because I could only find vinyl with adhesive backing online, when I wanted regular non-stick vinyl.

My second material is thin brass metal which is very bendable. I liked being able to sculpt it with my own hands, and rivet together different pieces.

When it comes to the concept of the piece, the larger question I’m asking is “How is someone defined by who or what surrounds them?” This piece plays with ideas from a project I worked on over the summer. You can read about the background below if you wish. The main and minor characters are chosen from a 17th-century book, described below.

 


Background (not necessary to read unless you’re really interested! Mostly for me to reference later.)

This idea is based on a digital humanities project I contributed to this summer. My friends Sarah Berkowitz and James Ascher at the University of Virginia explored the nature of character and digital transcription using GitHub and textual analysis. Their project focused on Characters, the second volume of a book called Genuine Remains by Samuel Butler, a 17th-century English author. Each chapter in the book is a brief description of a stereotypical person, such as “A Wooer,” “An Astrologer,” and a “Corrupt Judge”. The descriptions are biting, witty, and act a bit like a dictionary of people. You can see the transcription and analysis online here on this website and over here on GitHub.

However, the chapters feature only male main characters. Not surprising for the 17th century! The absence made us wonder about the “invisible ink” characters that surround each main character. How do these passing characters add definition and meaning to the main character? Are they mentioned across multiple main characters? Sarah analyzed a group of chapters to categorize “non-specific humans,” “proper names,” “mythological creatures,” and “animals”.

One note – the main character in this project is actually an alderman, a word dating back to before the 12th century but still used today to describe an elected official, for example a city council member. In this case, he’s surrounded by a king, a skinned rabbit, a table full of food, and a rooster. All of these smaller characters are mentioned as a way to describe the qualities of an alderman in the book I mentioned above.


The Process

I started with a sketch. You can see I had the original idea of an empty figure whose exterior negative space was taken up by little figures.

I switched from “A Bankrupt” to “An Alderman”. I thought a politician might be more relevant to the news today.

IMG_0059

The chapter describing the alderman can be found in its digital form here.

Screen Shot 2017-10-10 at 9.36.51 PM.png

Below is my first sketch in pencil and then pen. I found a picture of a person online to draw.

Honestly, it was hard to make a politician instantly recognizable based only on their outline. Something to think about going forward.

IMG_0100

I selected the most visual and meaningful minor characters mentioned in the chapter above, and drew them.

IMG_0105

I mocked up dummies to place around the figure to work out their sizing.

IMG_0106

 

I went to Metalliferous, the metal store on 46th Street. I asked for their advice on materials, and they suggested brass strips.

IMG_0120

I went to Blick to buy mat board. But mat board was too expensive to buy multiple colored sheets.

So I bought cheap “railroad” paper for 86 cents a sheet! (This photo was taken after I cut off what I needed for the laser cutter.)

IMG_0127

Next, I figured out how to use Illustrator’s Image Trace to trace my sketches, properly join and clean up all of Illustrator’s paths, and then finally prepare each sketch’s layers so that I could print each shape on the right colored paper.

I also cut 5 x 5 inch pieces of paper to print on. This was large enough to fit all my sketch layers, but allowed me to save the rest of my material for another project.

IMG_0124

 

Here is some chicken scratch (no pun intended) showing my measurements and layering logic.

IMG_0125

The paper was on the thicker side, but still thin enough to be moved out of place by the laser’s “thumb”. I learned to tape a corner of it to the laser cutter’s bed.

IMG_0132

All my paper parts.

IMG_0137

I used needle nose pliers and my hands to bend the wire.

IMG_0147

Testing the material.

IMG_0148

Drilling the brass to later rivet with the rivet gun.

IMG_0149

 

There are more pictures showing the figure with rivets, but I can’t upload them at the moment.

Here are more details of my final prototype.

The small characters are taped to tabs, upon which I added magnets.

IMG_0160

 

Magnets shown (large flat circles). I borrowed them from my refrigerator. Next time I can figure out how to glue or affix them to the brass.

IMG_0163