Combining live drawing with my childhood artwork using machine learning.
A collaboration with my 12-year-old self is a live-drawing performance piece. While I draw, machine learning models trained on my childhood art continuously transfer those styles onto my live drawings, allowing me to create artifacts "in collaboration" with my younger self.
I remember ages 11-12 (the years 2007-2008) as one of the most artistically prolific periods of my life. As a kid, I loved to draw - specifically, I drew a lot of digital fanart of my favorite animated shows, like Avatar: The Last Airbender and Naruto.
I drew daily and built my first ever internet friendships through online fanart communities. Drawing was something I did for fun, on my own time, and completely because it made me happy.
I grew out of that stage pretty quickly. As I got older, I started to channel my creative energy towards "serious" goals. In high school, I focused on building a portfolio of traditional paintings to apply to college with. In engineering school, I figured I'd have a tech career, so I focused on projects that would make me employable, like developing web apps.
In this piece, I wanted to explore a dialogue between my current and younger self (and implicitly, the relationship between my current and past attitudes towards creating art), and find ways of honoring and reconnecting with that younger version of myself.
I'm sort of a hoarder of personal data, and this is one case where I'm actually grateful for that. When I switched from my childhood desktop computer to my first laptop, I transferred the contents of the desktop onto an external hard drive - which I conveniently still had on hand. Past me also organized all of my art into a nice folder called "Artwork". Thanks, younger me!
This project wouldn't have gotten to where it is without the help of ml5.js, which builds on tensorflow.js to make machine learning really easy and accessible on the web.
I used ml5's style transfer example code as a base. It provides an easy interface for transferring the style of one image (on which you have to train a model) onto a new image. In this case, the styles I wanted to transfer came from my childhood drawings, and the images receiving those styles were my live drawings.
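In the older (pre-1.0) ml5 releases that include styleTransfer, that interface looks roughly like this - a minimal sketch, where the model path and element IDs are placeholders rather than the actual names from my project:

```javascript
// A minimal sketch of ml5's styleTransfer interface (pre-1.0 ml5 releases).
// 'models/childhood-style' and the element IDs are placeholders.
const style = ml5.styleTransfer('models/childhood-style', modelLoaded);

function modelLoaded() {
  const input = document.getElementById('live-drawing');
  const output = document.getElementById('stylized-output');

  // Apply the trained style to the input image (or canvas) element
  style.transfer(input, (err, result) => {
    if (err) return console.error(err);
    output.src = result.src; // result.src is a data URL of the stylized image
  });
}
```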
I started following this tutorial on training your own models for style transfer, but got stuck trying to figure out how to use a remote GPU for the training. Luckily, an instructor at my grad program pointed me to a helpful tutorial she wrote that explains how to run the style-transfer training with a service called Spell, which makes running heavy-duty computing jobs a lot easier with just a few commands. Spell does cost money to use, but gives new users $100 in free credits, and I was able to train six styles without going over that limit.
Training the models on my own drawings took a while to get going, so in parallel I worked on an interface I could live-draw on in a performance setting.
Because this whole project was a story about my pre-teen self, I wanted to replicate a desktop environment from that era. I didn't grow up in an Apple family - we had exclusively Windows computers - and my first personal computer, the one I had in my bedroom, was a desktop running Windows Vista. "Replicating" this environment involved scouring the internet for screenshots of this now-obsolete UI and positioning those elements together in a webpage.
To provide some visual metaphors for the style transfer going on, I used a mock MS Paint window to stage my live drawing, a mock image preview environment to reference my old drawings, and a mock terminal window to display the "collaborative" output between my live drawing and my childhood art.
To create simple drawing functionality, I used a p5.js canvas and a quick tutorial.
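The drawing part itself only takes a few lines of p5.js - a minimal sketch of the idea, connecting the previous and current mouse positions while the mouse is pressed:

```javascript
function setup() {
  // The canvas that gets drawn on during the performance
  createCanvas(400, 400);
  background(255);
}

function draw() {
  if (mouseIsPressed) {
    stroke(0);
    strokeWeight(4);
    // Connect the previous mouse position to the current one for a continuous stroke
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
}
```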
Next, I worked on getting style to transfer onto the canvas from a machine learning model. To do this, I captured the canvas element as a JPEG image and fed that image to the model, which then output a version of the drawing with the appropriate style transferred.
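A rough sketch of that capture-and-transfer step, assuming a loaded `style` model like the one above (p5 gives its default canvas the ID `defaultCanvas0`, and `stylized-output` is a placeholder for wherever the result is displayed):

```javascript
function transferOnce() {
  // Capture the current p5 canvas as a JPEG data URL
  const canvasEl = document.getElementById('defaultCanvas0');
  const snapshot = new Image();

  snapshot.onload = () => {
    // Hand the captured drawing to the style transfer model
    style.transfer(snapshot, (err, result) => {
      if (err) return console.error(err);
      document.getElementById('stylized-output').src = result.src;
    });
  };

  snapshot.src = canvasEl.toDataURL('image/jpeg');
}
```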
To get a smooth, continuous style-transfer effect while drawing, I needed to repeat this process of capturing the image and feeding it to the model over and over. My first attempt was not super successful - I hadn't realized how much computation it takes to apply style transfer to an image, especially one bigger than about 100px.
There were a few tricks I ended up using to make the style transfer work and look better as a "live" effect.
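One pattern that helps (an illustrative sketch, not necessarily exactly what the performance code does) is to keep the image the model sees small and to skip a capture whenever the previous transfer hasn't finished yet - for example:

```javascript
let transferInProgress = false;

function requestTransfer() {
  if (transferInProgress) return; // don't pile up transfers faster than the model can run
  transferInProgress = true;

  // Downscale the drawing before handing it to the model - anything much
  // bigger than ~100px is too slow to feel "live" (sizes here are illustrative)
  const small = document.createElement('canvas');
  small.width = 100;
  small.height = 100;
  const mainCanvas = document.getElementById('defaultCanvas0'); // p5's default canvas
  small.getContext('2d').drawImage(mainCanvas, 0, 0, small.width, small.height);

  style.transfer(small, (err, result) => {
    transferInProgress = false;
    if (!err) document.getElementById('stylized-output').src = result.src;
  });
}

// Re-run the transfer a couple of times per second instead of on every frame
setInterval(requestTransfer, 500);
```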
The code I wrote for the performance isn't really meant as a tool for other people to learn - it was written specifically to make it easy to perform a story - but if you're interested, here it is live and here's the source code, along with a few tips.
I've also adapted the code written for the performance into a more convenient tool for other people to interact with! Try drawing below or go directly to the app and apply styles from my childhood art. You might need to be a bit patient while drawing - the style transfer can be laggy, so drawing slowly is recommended. It works best in Chrome on desktop.
Not loading below? Explore the app directly