FINDING IDEAS TO PAINT WITH MACHINE LEARNING

Introduction

 

My Daily Paintings in 2015

Although I’ve always been a morning person, 2015 was my first focused effort to use my morning hours to paint daily. For this project, I painted a 7″ x 7″ acrylic-on-paper cityscape every weekday morning. The consistency of the process simplified so many things – from making sure I had the supplies for the week, to framing and shipping a painting when it sold. The most challenging part of this project was sifting through my travel photos – between trips to Europe and Asia and marathon training all over Seattle and Vancouver, I was collecting reference photos for painting on every adventure. I developed my own tagging, sorting and color systems to try to select what to paint next. Who knew that in 2019 there would be machine learning tools to automate this…

 

Figure 1: Sixteen of my early morning 7″ x 7″ paintings I did in 2015. Most of the time in this project was spent looking for the next photograph to work from.

My Initial Reaction to Style Transfer

Originally, as an artist, I was not interested in Style Transfer. I had seen it demonstrated many times for AI image manipulation. The process did not seem valuable to me because Van Gogh’s Starry Night was never intended to be applied to a photograph of your dog. Yes, it is fun and catches our attention… but I did not see an application for my art practice. It felt like the artist did not have control.

Then I found the CycleGAN

A friend who also uses AI in his art asked me about Style Transfer, and I mentioned, “Instead, wouldn’t it be great if there was a process that could link two images?” I explained to him that I have an unusual dataset: for hundreds of paintings I have done over the past decade, I have both the photograph I worked from (as per my 2015 daily painting project) AND the finished painting. I daydreamed: what if it were possible to create a more mindful Style Transfer process that could then be applied to new images, rather than the existing examples of Old Masters’ work being applied to unrelated photographs?

Within a week of daydreaming about this out loud, I discovered the CycleGAN by Jun-Yan Zhu. My current understanding of the CycleGAN is that it trains itself through back-and-forth image processing (generation and assessment) in both directions. And although it touts that it can be used with an unpaired dataset – like zebras and horses – which is valuable for most applications, I was most interested in this algorithm because my dataset can be aligned.
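That back-and-forth idea can be sketched in a few lines. This is a toy numpy illustration, not the actual CycleGAN code: `G` and `F` are hand-written stand-ins for the two trained generators (photo → painting and painting → photo), and the loss shown is the L1 cycle-consistency term the real network minimizes in both directions during training.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators (not trained networks):
# G maps "photo" -> "painting", F maps "painting" -> "photo".
G = lambda photo: photo * 0.5 + 10.0
F = lambda painting: (painting - 10.0) * 2.0

def cycle_consistency_loss(x, x_reconstructed):
    # L1 distance between an image and its round-trip reconstruction;
    # CycleGAN penalizes this so that photo -> painting -> photo
    # comes back to roughly the original photo
    return float(np.mean(np.abs(x - x_reconstructed)))

photo = np.array([[120.0, 80.0], [200.0, 40.0]])  # a tiny 2x2 "image"
round_trip = F(G(photo))                          # photo -> painting -> photo
loss = cycle_consistency_loss(photo, round_trip)  # 0.0 when F perfectly inverts G
```

In the real algorithm the two generators are neural networks, each with its own adversarial critic, and the cycle loss is what keeps a translated image tied to its source – which is exactly why the output still looks like the original photograph.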

What is a GAN?

Generative Adversarial Networks (GANs) are a fascinating approach first introduced in 2014 by Ian Goodfellow et al. A generator algorithm creates ideas, and an adversarial algorithm assesses each idea like a critic. The two train and improve in tandem – not unlike the split personality of an artist working tirelessly in their studio, painting and then critiquing. There are YouTube videos out there that do a fantastic job of explaining GANs; my go-to resource is Siraj Raval.
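The push-and-pull between the two networks comes down to opposing loss functions. Here is a rough numpy sketch (the scores are made-up numbers standing in for real network outputs): the critic is penalized for mislabeling reals and fakes, while the generator is penalized whenever its fakes fail to fool the critic.

```python
import numpy as np

def bce(pred, target):
    # binary cross-entropy, the standard GAN objective
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# Hypothetical discriminator scores in [0, 1]: probability "this image is real"
d_on_real = np.array([0.9, 0.8])   # critic's scores on real paintings
d_on_fake = np.array([0.2, 0.1])   # critic's scores on generated images

# The critic wants real -> 1 and fake -> 0
d_loss = bce(d_on_real, np.ones(2)) + bce(d_on_fake, np.zeros(2))

# The generator wants its fakes scored as real (-> 1)
g_loss = bce(d_on_fake, np.ones(2))
```

With these scores the critic is doing well (low `d_loss`) and the generator is losing (high `g_loss`), so the generator's next training step pushes it to produce more convincing images – and the tandem improvement continues.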

My challenge that CycleGAN could solve

 

As an artist, I have hundreds of thousands of images on my computer. During a week-long trip I can take upwards of 4000 photos. Over time I have learned what photos I prefer when painting, so I try to take more of these photos to bring back to the studio. This has led to many hard drives of photos that I can’t even keep track of! Many of these photographs require hours of photo-editing time before they can even be considered for painting (I usually skim through them with Adobe Lightroom).

…What if I could quickly process and consider them for my art?

Here is the paired dataset of my photographs and the corresponding paintings I created by hand (photographs & paintings, 2012 – 2018):

Figure 2: Eleven of the 115 photo-painting image pairs I loaded into the CycleGAN to generate a model of my painting style (All the photographs were taken by Joanne Hastie 2012 – 2018)

My first application of a CycleGAN

I created a CycleGAN model using 115 pairs of photographs and their matching painting images (Figure 2). I applied this CycleGAN model to ALL of the digital images I own – every travel photo from my DSLR and every iPhone photo. After making everything look like a Joanne Hastie painting with the CycleGAN, I then used a TensorFlow classification algorithm trained on the 115 paintings to rank which photos were most similar to my paintings. The top 500 images are not only the most like my paintings, but are fantastic images I had completely forgotten about from my varied adventures. It’s like reliving your experiences & ideas from a new perspective.

Adding a TensorFlow Classifier

For me, the critical part of working with a CycleGAN was pairing it afterwards with a TensorFlow classifier algorithm trained on the 330-image dataset (115 photos + 115 paintings). This allowed me to rank all of the images by how similar they were to my paintings and examine the closest matches first. Because I was processing hundreds of thousands of photos, I would not have been able to examine all of the results in a timely fashion otherwise. An interesting aspect of the classification results is that almost half of the top-ranked photographs were images I had already painted but that were not included in the original 115 pairs… it was rather uncanny. This reinforced the value for me in what I was doing!
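The ranking step itself is simple once the classifier has produced a score per photo. A minimal sketch – the filenames and scores here are hypothetical placeholders, not actual classifier output:

```python
# Hypothetical classifier scores: probability that a stylized photo
# "looks like" one of my paintings (made-up numbers for illustration)
scores = {
    "siracusa_2016.jpg": 0.97,
    "bologna_2017.jpg": 0.91,
    "seattle_run_041.jpg": 0.35,
    "vancouver_089.jpg": 0.12,
}

def top_candidates(scores, k):
    # sort photos by score, highest first, and keep the top k to review
    return sorted(scores, key=scores.get, reverse=True)[:k]

shortlist = top_candidates(scores, 2)  # the photos worth examining first
```

The point of this step is triage: with hundreds of thousands of photos, a shortlist of the top few hundred is what makes human review feasible at all.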

 

I am currently working on new cityscape paintings with new energy and enthusiasm because of the CycleGAN – another example of using AI and machine learning to be more creative. Here are my first two paintings inspired by this process.

Figure 3: A photograph I took in Siracusa, Sicily in 2016 (far left) was processed by the CycleGAN (center), and I then used the processed image to paint it. This image was ranked first out of hundreds of thousands of images for me to paint. I loved that it added my signature flecks of red and also changed the color of the road surface to blue! You can see this painting in more detail here.

Figure 4: A photograph I took in Bologna in 2017 (far left) was processed by the CycleGAN (center), and I then used the processed image to paint it. This image was ranked in the top 10 out of hundreds of thousands of images for me to paint. Notice how the red brush strokes have again been added – especially in the bottom corners, where I usually sign in red paint. You can see this painting in more detail here.

Next Steps

I am quite impressed by the CycleGAN’s ability to apply my style to my own images. It is fun to see what it thinks I might do – my favorite is the red scribbles in the corners, since I always sign my paintings in red in the bottom left or right corner. I am seriously considering adopting the scribbles as a “signature” rather than writing my name in red!

I think the most valuable aspect of this project is using the TensorFlow classifier to sort through my database of reference photos and rank potential images to paint. As I mentioned in the introduction, that is often the most time-consuming part of the process: finding something to paint. I am excited that this process allows me to get more out of my current photo database and brings renewed inspiration from photos I may have passed over at first glance.

 

Resources & Links

  • GitHub CycleGAN – you can download the code to build your own CycleGAN
  • Google TensorFlow – an open-source machine learning library, with pre-trained models, that you can apply to your own projects. My focus has been on their classification algorithms
