Electronic Rituals

spring 2019 · Software experiments

In spring of 2019, I took an NYU ITP class called "Electronic Rituals, Oracles, and Fortune Telling", taught by Allison Parrish. We looked at the history and structural properties of various forms of rituals and divination, and in our assignments explored computational expressions of these practices. Here is a selection of my projects in that class:

Readings from your Favorite, Obsolete (childhood?) Website

The Prompt

Invent an “-omancy,” or a form of divination/prophecy based on observing and interpreting natural events. Your reading of “natural” should make some reference to digital/electronic/computational media.

My concept

I created a command-line Python program that can deliver a reading based on one's favorite websites growing up. Inspired by how our browser search history might be framed as a "natural event" in a digital context, I began thinking about internet history on a longer time scale.

Like our birthplace and time of birth, the first websites we visited are embedded in our personal histories. What sites did we frequent when we were much younger, which ones remain in our memories, and which ones were possibly formative to our growth?

I decided to make a Python program that asks a querent for personal details and then generates a reading based on keywords scraped from archives of their favorite early-internet website.

The old websites in question are scraped from the Wayback Machine, which saves snapshots of websites over their histories and provides an API for retrieving a site's view at any given time (or the next closest time available).


Hey there! This is a program that peers into your formative internet days and from that, tells you a bit about yourself.

To start, what’s your name?

> Jackie


Hello Jackie! It’s nice to meet you. To give us a bit more information to work with, what year were you born? Please format it with four digits, e.g. '2019' or '1991'.

> 1996

Now, dig deep into your memories and think of a website you loved to go on in the past. Type it here. Some tips: don’t include www. or https://. If your favorite site was google, type it like 'google.com'.

> oekakiart.com


Now, final question. What age were you when you used that site? Type in a single number, e.g. '9' or '16'.

> 11

* * Formulating your reading… Please be patient :) * *


*~*~ Your Reading ~*~*

Growing up, you’ve been somewhat lively at your core.

You’ve learned some sullen tendencies in more recent times, Jackie.

Try playing around with a deep point of view.

Implementation


Here are the general steps my program takes to make sense of a site's raw HTML and create a psychic reading. If you're interested, check out the source code!

  1. Query for the user’s birth year, favorite website, and what age they remember using that site.
  2. Use the Wayback Machine API to request a snapshot of the favorite site from the year they likely used it (or the closest available year).
  3. Scrape all text with the Beautiful Soup Python library and segment it into “words”, i.e. strings containing only alphabetical characters.
  4. Compare against a master list of all English words in ConceptNet, keeping only the “words” from the website that appear in that master list.
  5. Select a few of these resulting keywords from the website. Use ConceptNet’s relatedness API call to compare these words to common personality adjectives from the Corpora project. Keep track of the adjectives that are most related to the scraped keywords.
  6. Use the best adjectives we’ve tracked to construct a reading, filling in a phrase mad-libs style.
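
To make the steps above more concrete, here is a rough sketch of what such a pipeline could look like. This is not the actual source code: the function names, the small adjective list, and the final phrase are placeholders, though the Wayback Machine availability endpoint and ConceptNet's relatedness endpoint are the real public APIs the program relies on.

```python
import re
import requests
from bs4 import BeautifulSoup

def closest_snapshot(site, year):
    """Ask the Wayback Machine for the capture of a site closest to a given year."""
    resp = requests.get("http://archive.org/wayback/available",
                        params={"url": site, "timestamp": f"{year}0101"})
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

def scrape_words(snapshot_url):
    """Pull visible text from the archived page and keep only alphabetical 'words'."""
    html = requests.get(snapshot_url).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    return [w.lower() for w in re.findall(r"[A-Za-z]+", text)]

def relatedness(word_a, word_b):
    """ConceptNet's relatedness score between two English terms."""
    resp = requests.get("http://api.conceptnet.io/relatedness",
                        params={"node1": f"/c/en/{word_a}", "node2": f"/c/en/{word_b}"})
    return resp.json().get("value", 0.0)

# Stand-in adjective list; the real program pulls personality adjectives from Corpora.
ADJECTIVES = ["lively", "sullen", "curious", "patient", "restless"]

def best_adjective(keywords):
    """Pick the adjective most related, on average, to the scraped keywords."""
    scores = {adj: sum(relatedness(kw, adj) for kw in keywords) / max(len(keywords), 1)
              for adj in ADJECTIVES}
    return max(scores, key=scores.get)

snapshot = closest_snapshot("oekakiart.com", 1996 + 11)  # birth year + age, as in the transcript
keywords = scrape_words(snapshot)[:5]                    # the real program samples by frequency
print(f"Growing up, you've been somewhat {best_adjective(keywords)} at your core.")
```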

Implementation Notes:

  • The list of English words found in the HTML page tends to be long, so I select a smaller subset by sampling words at random, weighted so that words appearing more frequently in the HTML are more likely to be chosen (see the sketch after these notes).
  • I also reduce the list of personality adjectives from the Corpora project (there are over 300, which adds up because each comparison requires its own API call). The adjective list is trimmed by random selection, with the drawback that this could easily weed out adjectives that would make more sense for a given keyword.
  • The program breaks if you don’t enter a valid website (one that has archives on the Wayback Machine) or other valid inputs. It also assumes that the website you used to go on has text to pull from (so websites made entirely in Flash are a no-go and will break the program).
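
A minimal sketch of those two reductions, assuming the scraped words and the adjective list are already in hand (the function names and default counts here are illustrative, not taken from the actual source):

```python
import random
from collections import Counter

def sample_keywords(words, k=5):
    """Sample k keywords, weighting by how often each word appears in the HTML.
    Sampling is with replacement, so frequent words can show up more than once."""
    counts = Counter(words)
    unique = list(counts)
    return random.choices(unique, weights=[counts[w] for w in unique], k=k)

def trim_adjectives(adjectives, n=40):
    """Randomly keep n adjectives to cap the number of relatedness API calls."""
    return random.sample(adjectives, min(n, len(adjectives)))
```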

Emoji Ouija Board

The prompt

Make a prototype of an electronic spirit board or other method for facilitating automatic writing (communication from unconscious/subconscious/collective gesture).

My concept

I created a spirit board (i.e. a Ouija board) that could spell out messages using a pictorial system rather than letters.

Precedents

I discovered an existing project called Emouija Board, but it looks like it never got beyond the paper prototype/Kickstarter stage. It also says nothing about what mechanism or process was used to generate messages.

I also found an early-internet version of a web-based Ouija board, where you type in questions and get somewhat sensical, written responses that seem to be linked to key words or phrases.

The querent's experience

I created a web-based emoji Ouija board that begins by prompting the querent (the person receiving a reading) to ask a question, and then delivers a reading composed of three emojis.

Before asking the question, however, you have to calibrate the webcam's gaze tracker. As the planchette moves to deliver a reading, the UI tells you to "focus carefully" on the planchette. Here, I was inspired by the early-internet version of Ouija above, where you had to hover over the planchette in order to get it to move.

Implementation details

One of the key decisions I had to make was how to generate an emoji message. While I set up the experience so it seems like the gaze tracking affects the motion of the planchette while you focus, I actually use the querent's gaze from before the planchette even begins moving to generate the emoji message.

The gaze of the viewer is tracked with the help of Webgazer.js. My system notes the top few emojis that the user looks at (i.e. where their gaze hits) when the board is first presented. Then, when the planchette begins to move, its path is already predetermined - the encouragement to "focus carefully" is a bit of a trick.
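
The prototype itself is JavaScript built on top of Webgazer's gaze listener; the Python sketch below only illustrates the tallying idea, with a made-up board layout standing in for the real one.

```python
from collections import Counter

# Hypothetical layout: emoji -> (x_min, y_min, x_max, y_max) bounding box in pixels.
BOARD = {
    "🌙": (100, 100, 180, 180),
    "🔥": (300, 100, 380, 180),
    "🌊": (500, 100, 580, 180),
}

def tally_gaze(gaze_points, board=BOARD):
    """Count how many gaze samples land inside each emoji's bounding box."""
    hits = Counter()
    for x, y in gaze_points:
        for emoji, (x0, y0, x1, y1) in board.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hits[emoji] += 1
    return hits

def predetermined_path(gaze_points, length=3):
    """The most-looked-at emojis become the planchette's predetermined path."""
    return [emoji for emoji, _ in tally_gaze(gaze_points).most_common(length)]
```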

I was interested in emulating the ideomotor effect - the phenomenon that makes Ouija boards "work" - in a digital form. In our class readings on Ouija boards, we learned that the planchette moves because participants subconsciously push it in the direction they want it to go. This process is also inherently visual: in studies where participants used a Ouija board in a dark room, the resulting messages were nonsensical.

In this project, I’m trying to play around with one example of how our subconscious could manifest in a digital interface. On the screen, what draws our attention? Where do our eyes linger?

There’s also the question of how to keep this a hidden process: if we request the user’s webcam and give them a board, of course they’re going to think that their eye movements are going to control the board.

In my prototype, I intentionally tried to divert expectations of control from one part of the narrative to another, by tracking gaze at a point where users don’t expect their behavior to have any effect.

Challenges / Limitations

Webgazer wasn’t super easy to get set up.

Although the docs say you can run it with a few lines of code, I had trouble setting up a local server that supported HTTPS when I tried to start from scratch. So I built my “prototype” off of some of Webgazer’s example code, which provided a simple way to train the model by clicking on dots while gazing at them. I recognize that this is definitely not a good way to package the experience if I wanted others to use the app.

Improving the illusion of control

While I try to fool people into thinking that their gaze while focusing on the planchette is doing the work, I'd need to do more work to make that convincing. I had hoped to stop and start the motion of the planchette along its path based on whether the querent was actively looking at it, but left that as a next step.

Emojis as a substrate for language?

Designing a spirit board constrains you to a certain number of symbols, so it wouldn’t be feasible to put every emoji ever on a single page. One of the open questions I have is: does it even make sense to use emoji as a substrate for messages, the way letters in the English language are in traditional Ouija boards? If so, which emojis provide the best bases to span all meaning? In a way, it’s sort of similar to the design of oracle decks. Currently, I address this by randomly generating a handful each time the experience loads. This introduces an element of randomness that constrains behavior, which does not seem to be a formal characteristic of traditional spirit boards.
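
For now, that per-session board generation is just a random draw. A tiny sketch of the idea, with a placeholder emoji pool (the real pool and board size may differ):

```python
import random

EMOJI_POOL = ["🌙", "🔥", "🌊", "🌿", "⚡", "🐍", "🎲", "💭", "🌀", "🔮"]

def generate_board(n=8):
    """Lay out a random handful of emojis for this session's board."""
    return random.sample(EMOJI_POOL, min(n, len(EMOJI_POOL)))
```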

Lack of language processing

I currently don’t process the question at all, but I wonder if doing so could result in more tailored responses. For example, if you ask a yes/no question, how might we specifically track whether the gaze falls more towards the “Yes” emoji or the “No” emoji?

Crowdsourced Automatic Drawing

Automatic writing (and drawing) is a type of divination that involves writing down messages from the subconscious or spirits. Crowdsourced Automatic Drawing is a project I created with a collaborator, Aileen Stanziola, to explore themes of the subconscious and subjective in a digital system.

We created a method for a group of people to make a collective drawing. One person draws according to a series of vibrations from a wearable electronic glove, while the others control the intensities of the vibrations remotely via a web client.

I focused on the physical glove that the drawer wears, featuring four vibration motors that correspond to the directions top, left, bottom, and right. My partner focused on the web client that participants could individually visit, allowing them to hit arrow keys that affect the overall vibrations of the glove.

Video of in-class performance

We used a song from Plantasia as our meditative, other-worldly background noise.



The glove

System diagram

glove diagram

  1. Participants can press up, down, left, right on the web app
  2. Participants’ direction choices collect on a server
  3. Direction choices affect 4 vibration motors on the glove in real-time
  4. The drawer moves the pen according to what they perceive while wearing the glove
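
The actual voting site lives on Glitch and was built by my collaborator; purely as an illustration of steps 1-2 above, a minimal Flask-style stand-in for the vote-collecting server might look like this (the endpoint names and window size are hypothetical):

```python
from collections import deque
from flask import Flask, jsonify, request

app = Flask(__name__)
recent_votes = deque(maxlen=40)  # sliding window of the latest direction choices

@app.route("/vote", methods=["POST"])
def vote():
    # Participants' arrow-key presses arrive as JSON like {"direction": "left"}.
    direction = request.get_json().get("direction")
    if direction in {"up", "down", "left", "right"}:
        recent_votes.append(direction)
    return jsonify(ok=True)

@app.route("/recent")
def recent():
    # The drawer's machine polls this to drive the glove's vibration motors.
    return jsonify(votes=list(recent_votes))
```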

Making of the glove

My first steps were to test the concept by creating some electronic prototypes of the vibration behavior.


My first test consisted of connecting TIP120 transistors to PWM (Pulse Width Modulation) output pins on an Arduino, and connecting two vibration motors to those transistors. In this part, I wanted to get some basic code running so that the Arduino could take in data through serial communication (which would later correspond to directional commands) and translate it into vibration intensities.

On the code side, I refined a way to take in a stream of serial information corresponding to the four directions that participants would choose, and to calculate a weighted average for each of the four directions across the messages we most recently received. Each average is then used to tell the corresponding vibration motor how hard to vibrate. For example, if half of the recent direction messages are "left", then the left motor vibrates at half of its maximum possible strength.
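
The averaging actually runs in the Arduino sketch as serial messages arrive; the snippet below just restates that logic in Python, with the window size and PWM range as illustrative choices:

```python
from collections import Counter, deque

MAX_PWM = 255              # analogWrite-style intensity range
recent = deque(maxlen=20)  # sliding window of the most recent direction messages

def update(direction):
    """Record a new direction message and recompute each motor's intensity."""
    recent.append(direction)
    counts = Counter(recent)
    return {d: int(MAX_PWM * counts[d] / len(recent))
            for d in ("up", "down", "left", "right")}

for msg in ["left", "up", "left", "right"]:
    intensities = update(msg)
print(intensities)  # half of the recent messages are "left", so left gets ~50% strength
```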


Next, I created a basic prototype of a glove out of some muslin material, with a Circuit Playground connected to vibration motors and transistors using conductive thread. This “oven mitt” prototype, which was way too loose against my skin to feel anything, gave me a key insight: the motors need to press against the skin if the drawer is to feel any vibration.


So I opted to try a version where three tight rings would hold the left, top, and right vibration motors, and the bottom vibration motor would sit around the wrist.

Connecting an audience

We set up an overhead camera to livestream the drawing process to the class, and told classmates to go to http://eroft-final.glitch.me/ on their own devices to make their votes.

The image below is a screenshot of the site that my collaborator built, which allowed our classmates to choose directions to send to the glove. Alongside that window is the code used to control the glove via serial on the drawer's side.

site