Knowledge Sharing News

Putting the "A" in STEAM: Robotic art at the World Science Festival

Image Credit: T.J. Thomson

Advances in digital technology are occurring at dizzying speeds, giving rise to the challenge of sustainable resource and energy consumption. Creative applications of scientific knowledge are increasingly recognised as necessary for solving complex problems.
This has led to an acknowledged need to integrate the arts within Science, Technology, Engineering and Mathematics (STEM), and in turn to growing advocacy for “STEAM” as a conceptual foundation for education. Gonski 2.0, the cross-curricular approach to education launched by the Australian federal government, reinforces this trend.

[small-quote name=”Bronwen Wade-Leeuwen, Jessica Vovers and Melissa Silk” title=””]STEM represents science, technology, engineering and maths. “STEAM” represents STEM plus the arts – humanities, language arts, dance, drama, music, visual arts, design and new media.[/small-quote]

The World Science Festival seeks to inspire visitors to wonder about the world around us. It educates children and adults alike about the value of science, and importantly, encourages consideration of the future ramifications of scientific developments.
The 2019 Asia-Pacific World Science Festival attracted over 200,000 people from Brisbane and beyond to explore the wonders of science. Here, Design Robotics showcased one of our latest project developments – a robot artist able to draw your portrait.

Amelia Luu, Dr Jared Donovan and Alan Burden (Image Credit: Cori Stewart)

Led by Dr Jared Donovan, with Mechatronics Engineer Amelia Luu and PhD student Alan Burden, along with a UR5 robotic arm, the project produced almost 200 individual portraits for festival attendees. Human-computer interaction enabled the art-making: users could adjust their desired image via the software interface, changing the look of the sketches. Once a visitor was happy with their portrait, the image data was sent to the robot to draw.
[small-quote name=”Alan Burden” title=””]“The robot is able to make a very rough sketch in under 60 seconds, but the best results take around two to three minutes.”[/small-quote]

How does it work?

Fitted with a 3D-printed end-effector designed specifically for the job, the UR5 robotic arm was up to the task, able to hold the three pens required to draw the portraits.

Image Credit: T.J. Thomson

The software runs on a standard computer. An image of the subject is captured using a webcam, then passed through filters that use a series of algorithms to determine a final abstract version of the image. These abstract ‘sketches’ represent three variations – or layers – of lines that are combined to make up the final portrait drawing. The robot is then programmed to draw those layers onto the canvas using a Grasshopper plugin for Rhinoceros 3D. The three drawings are overlaid to create the portrait.
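The project’s actual filter code isn’t published here, but the layering idea can be illustrated with a minimal sketch. The example below is an assumption, not the team’s implementation: it splits a grayscale image into three binary “sketch” layers using simple intensity thresholds (the real system uses its own filter algorithms), then overlays them the way the three pen drawings stack on the canvas. All function names and threshold values are hypothetical.

```python
import numpy as np

def make_layers(gray, thresholds=(64, 128, 192)):
    """Split a grayscale image (2-D uint8 array) into three binary layers,
    one per intensity threshold. Darker thresholds capture only the boldest
    regions; lighter thresholds add progressively finer detail."""
    return [(gray < t).astype(np.uint8) for t in thresholds]

def combine(layers):
    """Overlay the layers: a pixel is drawn if any layer marks it,
    mirroring how the three pen drawings are overlaid on the canvas."""
    out = np.zeros_like(layers[0])
    for layer in layers:
        out |= layer
    return out

# Tiny synthetic 'portrait': rows shading from dark (0) to light (192).
gray = np.tile(np.arange(0, 256, 64, dtype=np.uint8), (4, 1))
layers = make_layers(gray)      # three variations of the sketch
portrait = combine(layers)      # the combined abstract image
```

Each layer could then be converted to pen strokes and sent to the robot separately, one pen per layer.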

Image Credit: T.J. Thomson

[small-quote name=”Alan Burden” title=””]“Three different filters make the three variations of linework or sketches. When combined, the three sketches make the abstract image that the robot is programmed to draw on the canvas.”[/small-quote]
The exhibit was an enormous success, with visitors able to see how robots can act as co-creators through human-computer interaction, transforming digital images into collaborative art.
Reference: “Explainer: what’s the difference between STEM and STEAM?”, The Conversation.