So much thinking and problem solving went into my first painting with the xArm7 this past week. I am uncertain if my learnings are worth sharing or if it is just my personal enjoyment of problem solving through the process. I do feel crazy putting this much brain power into this – when I could paint much faster by hand. But given the continued feedback I am getting from my NeurIPS presentation last month at the Creativity Session in Vancouver, I think there might actually be an audience for explaining my process. So a sincere thank you to everyone who followed up with me… Regardless, even if you only skim my thoughts below, I think they make interesting commentary on how intensive the process is, and should calm any fears that robots will take over from painters anytime soon.
This past week I put a focused effort into painting with my xArm7. It has sat idle for longer than I’d like to admit, but after spending a bit of time moving the arm around in space, I have been thinking about how to set it up for multiple projects.
Optimizing the Robot’s Reach
There are many differences between this arm and the desktop arms that paint my “Painting Variables” series. For instance, the desktop arm has a painting area of 6″x6″ after I make space for both a paint dipping area and a brush cleaning area. The xArm can reach a full 360 degrees around its base. After some testing, when mounted to a table it can reach up to 18″ while holding a vertical brush over a horizontal sheet.
I’ve spent much time moving tables and sheets around, figuring out how to optimize the robot’s painting area. In my previous post about this robot, when it was drawing with a pencil, it could barely cover a full 16″ x 12″ page. I was disappointed with this, and concerned about investing time in this equipment and project for such a limited size – especially since I have been pitching murals! But with its current configuration (in the video) it can fill an 18″ x 24″ page. Much of this is due to the orientation of the gripper. As much as I loved orienting the gripper more like a human hand in the previous pencil post, that orientation severely impedes the robot’s reach.
Simple Vertical Stripes
For this vertical stripe painting, keeping it simple allowed me to develop code that can work with the new style of parametric output. I hand-drew each vertical stroke on my tablet with a stylus pen. I then imported the 2D drawing into a 3D CAD setup where I could turn the hand-drawn lines into robot coordinates.
A big failure I wasn’t expecting is that the software I am using assumes I have a Kuka robot. I learned the hard way that the Kuka robot coordinate system is actually very different from the xArm7’s. So I am currently developing Python code to translate the Kuka coordinate system into my robot’s coordinate system. I found this discrepancy on my first attempt to articulate the robot (even though its movements looked great on the virtual robot on my screen…) – since the rotational coordinates did not align, my robot physically slammed sideways into the table. This was another reason it was good to start with simple brush strokes that did not require the robot to dynamically articulate… Future goals!!! So for now I will stick to a vertical brush and minimal angles.
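In case it is useful, here is a minimal sketch of the kind of translation I mean. I am assuming Kuka’s A/B/C angles are intrinsic Z-Y'-X'' Euler angles and that the xArm expects roll/pitch/yaw about fixed axes, both in degrees – check your own controller’s manual, since getting this wrong is exactly how arms slam into tables:

```python
import math

def kuka_abc_to_matrix(a, b, c):
    """Rotation matrix for Kuka-style intrinsic Z-Y'-X'' Euler angles (degrees)."""
    a, b, c = map(math.radians, (a, b, c))
    ca, sa = math.cos(a), math.sin(a)
    cb, sb = math.cos(b), math.sin(b)
    cc, sc = math.cos(c), math.sin(c)
    return [
        [ca * cb, ca * sb * sc - sa * cc, ca * sb * cc + sa * sc],
        [sa * cb, sa * sb * sc + ca * cc, sa * sb * cc - ca * sc],
        [-sb,     cb * sc,                cb * cc],
    ]

def matrix_to_rpy(r):
    """Recover roll/pitch/yaw about fixed X-Y-Z axes from a rotation matrix (degrees)."""
    pitch = math.asin(max(-1.0, min(1.0, -r[2][0])))  # clamp for float safety
    roll = math.atan2(r[2][1], r[2][2])
    yaw = math.atan2(r[1][0], r[0][0])
    return tuple(map(math.degrees, (roll, pitch, yaw)))

def kuka_to_xarm(x, y, z, a, b, c):
    """Translate a Kuka (X, Y, Z, A, B, C) target into an xArm-style
    (x, y, z, roll, pitch, yaw) target. Positions pass through unchanged."""
    roll, pitch, yaw = matrix_to_rpy(kuka_abc_to_matrix(a, b, c))
    return (x, y, z, roll, pitch, yaw)
```

Going through a full rotation matrix may be overkill for one specific pair of conventions, but it makes the sketch easy to adapt if either convention turns out to be different from what I assumed here.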
The Robot Runs into Itself
This robot shows no sign of being self-aware any time soon. It actually has no idea where it is in space with respect to itself – it is really just a set of 7 rotating motors whose combined angles result in different brush positions.
If you look carefully, you will notice that some of the brush strokes turn and point towards the center. I was working on having the brush dip into the paint every few brush strokes. I programmed it so that before the robot gets paint, it centers itself above the page, reorients all its motors, and then dips into the paint (you will see this in the video above). If I did not do this, the robot would eventually run into itself due to a ‘singularity’ issue. Because it has so many pivots, there is more than one set of joint positions that achieves the same brush location. So if I program it to go to a brush position and it has more than one solution to get there, it can lose track of where it is and tie itself into a knot. The robot manual called this a “singularity”. Therefore I need to unwind the robot consistently. Since I also need to get paint regularly, it makes sense to reset the arm before and after each dip.
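The unwind-then-dip routine can be sketched as a small planner. The joint angles and command names here are made up – the point is only the structure: every few strokes, reset to one known joint configuration before dipping, so the arm never accumulates a tangled joint-space solution:

```python
HOME_JOINTS = [0, -30, 0, 60, 0, 90, 0]  # hypothetical "unwound" pose, degrees

def plan_with_dips(strokes, dip_every=3):
    """Interleave paint dips into a stroke list, resetting all seven joints
    to a single known configuration before each dip so the arm cannot drift
    into one of the many alternative joint solutions for the same brush pose."""
    plan = []
    for i, stroke in enumerate(strokes):
        if i % dip_every == 0:
            plan.append(("reset_joints", HOME_JOINTS))  # unwind first
            plan.append(("dip", None))                  # then get paint
        plan.append(("stroke", stroke))
    return plan
```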
Re-centering the robot every few brush strokes slows down the process. So each time I started and stopped the robot, I sped it up by about 10% (without increasing its acceleration). In one of my first runs (when I was testing without paint), it overloaded because its default speed was too fast. So I decided to start ridiculously slow and build up. The video has been sped up 2x to be mindful of your time.
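The ramp-up itself is just a geometric schedule. A tiny sketch, with the starting speed and safety cap as made-up numbers:

```python
def speed_schedule(runs, start=5.0, factor=1.10, cap=150.0):
    """Speeds (mm/s) for successive runs: start ridiculously slow and bump
    ~10% per successful run, capped, without ever touching acceleration."""
    speeds, s = [], start
    for _ in range(runs):
        speeds.append(round(s, 2))
        s = min(s * factor, cap)
    return speeds
```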
Controlling Brush Strokes vs. Controlling Each Movement
So back to the reason there are drag marks on the paintings. For this painting, each movement the arm makes is a single command read from the csv file. This is different from my previous robot code, where each brush stroke is a row in the code. My code for the smaller robot reads each brush stroke and then breaks it into segments of motion. I had to dig out some trigonometry formulas to achieve this, and because of it I have greater control over how to enter and exit each brush stroke. In contrast, the CAD tool exports individual movements – so a brush stroke can span multiple lines in the code. That means I have to figure out how to denote the start and end of a brush stroke within a long series of robot positions. I am still working on how best to find the entry and exit points in the motions to grab more paint. Sometimes my code worked, sometimes it didn’t – and when it didn’t, the outcome was a drag mark towards the center. I don’t mind this, as it is an outcome of the process.
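One way to recover stroke boundaries from a flat list of movements is a pen-down threshold on the z coordinate: a stroke is a maximal run of points at or below brush-contact height. This is only a sketch of that idea, not my finished code, and the contact height is a made-up number:

```python
def group_strokes(moves, z_contact=5.0):
    """Split (x, y, z) movement targets into strokes: each stroke is a
    maximal run of pen-down points (z at or below the contact height).
    Paint dips can then be scheduled in the gaps between strokes."""
    strokes, current = [], []
    for x, y, z in moves:
        if z <= z_contact:
            current.append((x, y, z))
        elif current:          # brush lifted: close the current stroke
            strokes.append(current)
            current = []
    if current:                # file ended while the brush was down
        strokes.append(current)
    return strokes
```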
Another artifact in this painting is that some of the brush strokes are actually layers of different colors. You can see a slightly darker edge on some of the light brush strokes in the middle. This is a mistake – the robot painted the wrong color and I went back and corrected it. It happened because I had assumed the order in which I drew the individual vertical lines on the tablet would match the sequence in which the 3D parametric CAD tool processed each line. It turns out the order was “a bit” arbitrary. I put “a bit” in quotes because it started off working as expected – but then it did not. My short-term solution was to export each color individually as its own set of robot code. This was very time consuming… so I will have to keep thinking about how to manage colors in a painting.
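Longer term, one option is to stable-group the exported strokes by color in code, so each color paints as one contiguous run without a separate export per color. A minimal sketch, assuming each stroke is tagged with its color:

```python
def group_by_color(tagged_strokes):
    """Stable-group (color, stroke) pairs: colors keep their first-seen
    order, and strokes keep their drawn order within each color."""
    order, buckets = [], {}
    for color, stroke in tagged_strokes:
        if color not in buckets:
            buckets[color] = []
            order.append(color)
        buckets[color].append(stroke)
    return [(c, buckets[c]) for c in order]
```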
Bigger Paintings = More Time & Resources
My small paint containers, which lasted a full day when I did installation shows with my desktop robots painting 9″x12″ pieces, did not last a single 18″x24″ painting. Larger sheets, more paint, more catastrophic mistakes due to code translations… this project will take more time and resources. The difficulty of using this tool reminds me of a Harold Cohen quote:
“An artist has never really needed his tools to be easy to use; that’s a very common misunderstanding. He needs them to be difficult to use – not impossible but difficult. They have to be difficult enough to stimulate a sufficient level of creative performance, and you don’t do that with something that’s easy to use.” ~ Harold Cohen
If Painting with a Robot was Easy…
If painting with a robot were easy, everyone would be doing it. I am super keen to hear whether my ramblings about the technical details of a painting are interesting. I think some of the unexpected results of painting with the robot can achieve beautiful results, or at least tell a story. But perhaps it is only interesting to me. Hence I would love your feedback.