a collaboration with my 12-year-old self

Fall 2018 · Drawing performance / ML experiment

performance

A collaboration with my 12-year-old self is a live-drawing performance piece. While I draw, machine learning models trained on my childhood art continuously transfer those styles onto my live drawings, allowing me to create artifacts "in collaboration" with my younger self.

Performed at the Museum of the Moving Image on December 7, 2018.

background

I remember ages 11-12 (during the years 2007-2008) as one of the most artistically prolific periods of my life. As a kid, I loved to draw - specifically, I drew a lot of digital fanart of my favorite animated shows, like Avatar: The Last Airbender and Naruto.



[image: anime art collage]

Here are some gems from around 2007-2008.


I drew daily and built my first ever internet friendships through online fanart communities. Drawing was something I did for fun, on my own time, and completely because it made me happy.

I grew out of that stage pretty quickly. As I got older, I started to tailor my creative energy towards "serious" goals. In high school, I focused on building a portfolio of traditional paintings to apply to college with. While in engineering college, I figured I'd have a tech career, so I focused on projects that would help me be employable, like developing web apps.

In this piece, I wanted to explore a dialogue between my current and younger self (and implicitly, the relationship between my current and past attitudes towards creating art), and explore ways of honoring and reconnecting with a younger version of myself.

process

collecting the data

I'm sort of a hoarder of personal data, and this is one case where I'm actually grateful for that. After I switched from my childhood desktop computer to my first laptop, I transferred its contents onto an external hard drive - which I also conveniently had on hand. Past me also organized all of my art into a nice folder called "Artwork". Thanks, younger me!

training the models

This project wouldn't have gotten to where it is without the help of ml5.js, which builds upon tensorflow.js to make doing machine learning things really easy and accessible on the web.

I used ml5's style transfer example code as a base. They provide an easy interface for transferring the style of one image (on which you have to train a model) onto a new image. In this case, the styles I wanted to transfer were from my childhood drawings, and the images that would receive the styles are my live drawings.
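For reference, the basic flow with ml5's styleTransfer looks something like this (a minimal sketch - the model path and element IDs here are placeholders, not my actual setup):

```javascript
// Load a style transfer model trained on one of my childhood drawings.
// 'models/childhood_drawing' is a placeholder path to a trained checkpoint.
const style = ml5.styleTransfer('models/childhood_drawing', () => {
  console.log('style transfer model loaded!');
});

// Transfer that style onto a new image (or canvas) element.
const input = document.getElementById('live-drawing');
style.transfer(input, (err, result) => {
  if (err) return console.error(err);
  // result.src is a data URL of the stylized image
  document.getElementById('stylized-output').src = result.src;
});
```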

I started following this tutorial on training your own models for style transfer, but got stuck trying to figure out how to use a remote GPU to do the training. Luckily, an instructor at my grad program pointed me to a helpful tutorial she wrote that explains how to run the style-transfer training with a service called Spell, which makes running heavy-duty computing jobs a lot easier using just a few commands. Spell does cost money to use, but gives new users $100 in free credits. I was able to train 6 styles without going over that amount.

developing a drawing interface for performance

Training the models on my own drawings took a while to get started, so I simultaneously worked on an interface I could draw on live in a performance setting.

VISUAL THEME

Because this whole project was a story about my pre-teen self, I wanted to replicate a desktop environment from that era. I didn't grow up in an Apple family - we had exclusively Windows computers. My first personal computer - one that I had in my bedroom - was a desktop running Windows Vista. "Replicating" this environment involved scouring the internet for screenshots of this now-obsolete UI and positioning these elements together in a webpage.

[image: desktop screenshot]

This looks like a Windows Vista desktop but is actually just a web-based mockup made with images. Note how there are no open window previews shown in the bottom bar.

To provide some visual metaphors for the style transfer going on, I used a mock MS Paint window to stage my live drawing, a mock image preview environment to reference my old drawings, and a mock terminal window to display the "collaborative" output between my live drawing and my childhood art.

DRAWING

To create simple drawing functionality, I used a p5.js canvas, following a quick tutorial.
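Something like this is all it takes (a minimal p5.js sketch of the idea, not my exact performance code):

```javascript
// Minimal p5.js drawing canvas: draw a line segment between the previous
// and current mouse positions while the mouse is dragged.
function setup() {
  createCanvas(500, 400);
  background(255);
  stroke(0);
  strokeWeight(4);
}

function mouseDragged() {
  line(pmouseX, pmouseY, mouseX, mouseY);
}
```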

APPLYING STYLE TRANSFER

Next, I worked on getting style to transfer onto the canvas from a machine learning model. To do this, I captured the canvas element as a JPEG image and fed the image to the model, which then output a version of the drawing with the appropriate style transferred.
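In code, a single capture-and-transfer pass looks roughly like this (a sketch under my own naming - transferOnce is a hypothetical helper, and it assumes the ml5 style object from earlier):

```javascript
// One style-transfer pass: downscale the p5 canvas, encode it as a JPEG,
// and feed it to the model. Small inputs keep the model fast.
function transferOnce(style, size, outputImg) {
  const p5Canvas = document.querySelector('canvas'); // the canvas p5 draws into
  const small = document.createElement('canvas');
  small.width = size;
  small.height = size;
  small.getContext('2d').drawImage(p5Canvas, 0, 0, size, size);

  // Round-trip through a JPEG data URL, as described above.
  const img = new Image();
  img.onload = () => {
    style.transfer(img, (err, result) => {
      if (err) return console.error(err);
      outputImg.src = result.src; // stylized version of the drawing
    });
  };
  img.src = small.toDataURL('image/jpeg');
}
```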

To get a smooth, continuous style-transfer effect while drawing, I needed to repeat this process of capturing the image and feeding it to the model many times. My first run at this was not super successful - I didn't realize how much computation time it took to apply style transfer to an image, especially if the image was bigger than 100px.

There were a few tricks I ended up using to make the style transfer work and look better as a "live" effect (a sketch of the resulting logic follows this list):

  • I fed small image sizes into the models. The smallest sizes I used were 80px by 80px JPEG images. It wouldn't be very interesting to draw on an 80px canvas, so I took a normal-sized canvas and scaled it down once I captured it as an image.
  • I tracked when the user was actively drawing (i.e., changing the state of the canvas), and only ran the periodic style transfer updates during the active drawing states.
  • I alternated between high and low resolution style transfer when appropriate. Feeding in 80px by 80px images resulted in blurry, low-res output; the more interesting style transfer images come from higher resolution inputs. So while the user is drawing, the system does style transfer on lower-res images, and only when the user stops drawing does it do a higher-resolution style transfer (we can afford to wait for that, since we're not actively drawing).
  • To make this transition between low-res and high-res style transfer smoother, the style transfer process cycles through a gradient of image resolutions.
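Here's a rough sketch of that switching logic (the timings and resolution steps are hypothetical, and it reuses the transferOnce helper sketched earlier):

```javascript
// Assumes `style` (the loaded ml5 model) and `outputImg` (an <img> element)
// exist, along with the transferOnce() helper from the previous sketch.
const LOW_RES = 80;                // fast, used while actively drawing
const RES_STEPS = [120, 200, 320]; // hypothetical resolution "gradient"
const IDLE_MS = 1000;              // a pause this long counts as "stopped drawing"
let lastStrokeTime = 0;
let needsHighRes = false;

function mouseDragged() {
  line(pmouseX, pmouseY, mouseX, mouseY);
  lastStrokeTime = millis();
  needsHighRes = true;
}

// Run periodically, e.g. setInterval(update, 500) in setup().
function update() {
  const stillDrawing = millis() - lastStrokeTime < IDLE_MS;
  if (stillDrawing) {
    // Cheap low-res transfers keep up with the live drawing.
    transferOnce(style, LOW_RES, outputImg);
  } else if (needsHighRes) {
    // User paused: step up through resolutions for a crisper final image.
    needsHighRes = false;
    RES_STEPS.forEach((size, i) => {
      setTimeout(() => transferOnce(style, size, outputImg), i * 800);
    });
  }
}
```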

try it out

the performance version

The code I wrote for the performance was built specifically to make it easy to perform a story, so it's not really easy to learn as a tool for other people, but if you're interested, here it is live and here's the source code, with a few tips:

  1. First press the spacebar to "enter the Vista environment".
  2. Your browser zoom should be set to about 60% for optimal viewing. You might need to toggle it a bit.
  3. Double-click the MS Paint icon to open Paint. Only the brush tool and the eraser tool work.
  4. Double-click the "Childhood Artwork" folder, then click anywhere on the window that pops up to browse the image previews of my old art.
  5. To run style transfer, type anything into the "terminal" in the center and press enter. Anything you draw should now be style-transferred in the terminal.
  6. The ~/` key on your keyboard should trigger the audio for my performance.

an easier-to-use version

I've also modified the code written for the performance into a more convenient tool for other people to interact with! Try drawing below or go directly to the app and apply styles from my childhood art. You might need to be a bit patient while drawing - the style transfer can be laggy, so drawing slowly is recommended. Works best in Chrome on desktop.

Not loading below? Explore the app directly