Making new strides in my robot project

Earlier this week I got my big robot drawing. This was a major stride in the project, as I am trying a different set of software tools compared to the process I used with my previous little robot. My earlier work was all programmed in Python, specific to that robot. With the added flexibility of the 7-DOF bigger robot, I am developing a process built on parametric CAD and vector graphics tools.

The robot drawing using vector graphics and parametric CAD. It is drawing a series of abstract compositions from my past project, ML Abstracts.

My robot sketching this past weekend

The flexibility of using 3D CAD for robot control

Flexibility includes being able to use different locations for the painting surface: for example, horizontal on a table, vertical on a wall, or angled on an easel. I'll refrain from naming the software packages I am using until I have a better grasp of an efficient process; then I may even make tutorials covering the topics I see missing on YouTube. But for now, while I am still defining the process and fumbling along watching YouTube tutorials myself, I don't want to promote tools that might not be the best fit for the project.

Working in parametric CAD, I can move objects around and re-orient the robot to work within that space. This is especially valuable because the robot's current location is in an unusual orientation: the robot is actually flipped and twisted to accommodate the gripper and hold the pencil. The x and y directions are not intuitive at all! So it's nice that the CAD can calculate this for me once it is set up properly.
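As a rough sketch of the bookkeeping the CAD handles for me, mapping a point on the page into the robot's base frame is just a rotation plus a translation. The rotation and offset values below are invented for illustration, not my actual setup:

```python
import numpy as np

# Hypothetical example: transform a point from the drawing (page) frame
# into the robot's base frame. The CAD solves this once the scene is
# modeled; these numbers are made up for illustration.

# Page frame -> robot frame: rotate 180 degrees about x (the page is
# "flipped" relative to the robot), then translate to where the page sits.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])          # 180-degree rotation about x
t = np.array([0.40, 0.10, 0.25])           # page origin in robot frame (m)

def page_to_robot(p_page):
    """Transform a 3D point from page coordinates to robot coordinates."""
    return R @ np.asarray(p_page, dtype=float) + t

print(page_to_robot([0.05, 0.02, 0.0]))    # a point 5 cm across, 2 cm up the page
```

Once the transform is set up, every stroke can be authored in flat, intuitive page coordinates and the awkward flipped-and-twisted robot frame never has to be thought about again.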

The small robot has a limited working area (in space as well as brush orientation), so although I designed that robot's platform in 3D CAD, where I defined all my positions, I did all my work in Python code, since everything was stationary. For example, color #1 was always in the same spot, and the water dish was always in the same spot. This was also how I was able to get by without a vision system in that project.
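For anyone curious, that fixed-position approach can be sketched roughly like this; the station names and coordinates are made up, not my real values:

```python
# Hypothetical sketch of the fixed-position approach used with the small
# robot: every paint well and the water dish live at hard-coded (x, y, z)
# coordinates, so no vision system is needed. Names and values are invented.
STATIONS = {
    "color_1": (120.0, 40.0, 15.0),   # mm, in the robot's base frame
    "color_2": (120.0, 70.0, 15.0),
    "water":   (160.0, 55.0, 20.0),
}

def dip(station):
    """Return the move target for a dip at a named, fixed station."""
    x, y, z = STATIONS[station]
    # Real code would command the robot here; this sketch just returns the pose.
    return {"x": x, "y": y, "z": z}

print(dip("water"))  # always the same spot, by design
```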

Close up of the ML Abstracts being drawn by the robot. I patterned 9 Machine Learning Abstracts onto the page to watch how the robot would handle it.

Getting to know the robot limitations and tolerances

So, to start drawing with the bigger robot, I began with some text. Of course, I had to write my name! I created the text in a vector graphics program, since it is important that every line has a specific start point and end point (in other words, a vector), and then I imported the graphic into CAD and located the drawing where my table and pad of paper were. When using the robot, I immediately noticed that the page surface is not exactly perpendicular to the robot, as expected, since there are compounding measurements that might be off between the table, clipboard, page, and all the robot motors. So, in the first video you will notice the “J” in “JOANNE” is drawn with a thicker, more defined line than the “E”, where the marker is barely touching the page.
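One common software workaround for a tilted surface (not necessarily what I'll end up using) is to probe the pen-contact height at the four page corners and interpolate in between. A minimal sketch, with invented page dimensions and corner heights:

```python
# One common fix for a non-perpendicular page: measure the z height where
# the pen touches at each of the four corners, then bilinearly interpolate
# z for any (x, y) on the page. All numbers here are invented.
def page_z(x, y, w, h, z00, z10, z01, z11):
    """Bilinear interpolation of surface height over a w x h page.
    z00 = corner (0, 0), z10 = (w, 0), z01 = (0, h), z11 = (w, h)."""
    u, v = x / w, y / h
    return (z00 * (1 - u) * (1 - v) + z10 * u * (1 - v)
            + z01 * (1 - u) * v + z11 * u * v)

# Example: a 200 x 280 mm page, with measured corner heights in mm.
z_center = page_z(100, 140, 200, 280, z00=0.0, z10=0.6, z01=0.2, z11=0.9)
```

With a correction like this, the z coordinate of every stroke point gets offset by the interpolated surface height, so the marker pressure stays roughly constant across the page.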

The robot writing my name: my first drawing experiment. I quickly noticed the z position is not constant over the page, so the pressure on the marker changed as it wrote my name.

Because of the height tolerance issue, I switched to a pencil. The pencil has a smaller diameter and fits into a spring-loaded pencil holder the robot company sent me by accident earlier this year (I had ordered a different part). I will have to figure out a fix for a paint brush (if one is possible at all): a brush has a much more delicate touch than a pencil, so a spring might require too much force to adjust the height properly. Likewise, my goal is to have changing angles in the brush position, like how I hold a brush in my hand, so a spring-loaded holder might not work. But there are other ways to address the tolerance and alignment issues as well…

When I switched to pencil, the drawings were a series of ML Abstracts patterned over the page. This is definitely more convenient than drafting them by hand (as far as accuracy and speed go!). I was also testing whether I could swap the vector drawing in CAD space seamlessly, and starting to think about how to make that process more efficient, since I have to run some home-made Python algorithms to make the whole process work together.
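The patterning step itself is simple enough to sketch. This is a hypothetical version of what such a Python helper could look like, with invented stroke data and spacing, not my actual code:

```python
# Hypothetical sketch of patterning one drawing 9 times over a page:
# take each stroke (a list of (x, y) points) and copy it into a
# 3 x 3 grid of cells. Spacing values are invented for illustration.
def pattern_strokes(strokes, cols=3, rows=3, dx=60.0, dy=80.0):
    """Return the strokes replicated on a cols x rows grid with the given spacing."""
    out = []
    for r in range(rows):
        for c in range(cols):
            for stroke in strokes:
                out.append([(x + c * dx, y + r * dy) for (x, y) in stroke])
    return out

square = [[(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]]  # one closed stroke
tiled = pattern_strokes(square)
print(len(tiled))  # 9 copies of the stroke
```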

I patterned 9 Machine Learning Abstracts onto the page and had the robot draw it.

Exciting but daunting at this stage in the project

My 8-year-old self would be so happy with this tool! When I was small, I would ask my father to photocopy my drawings and coloring books at his work. I much preferred the white point and smoothness of copy paper to the brown, thin coloring-book paper. So, being able to draw any image at any scale on any surface with a robot would have been an amazing tool to have a few decades ago for my artistic process!

However, with robot paintings already in my art portfolio, I am now planning how to add paint instead of pencil as the next step. This requires managing paint brush dips, color selection, and brush stroke routines: defining the space more rigidly in the short term while keeping it flexible in the long term. So, the success of drawing from CAD has me deep in thought about how to augment it to paint. Lots of work ahead of me, but that is the fun part.

A few more links:

  • Learn more about my robotic painting arm here.
  • See the final Machine Learning Abstract paintings that the robot was drawing here.
  • Buy your own unique robot painting in my current Daily Robot Painting Series, here.
