Materialize: connecting the convergent points between Paper Prototyping and Google Material Design.

Laura Agudelo
5 min read · Mar 7, 2021
Initial project concept, printed electronics prototype and computer vision prototype.

This case study is an exploratory project connecting the convergent points between Paper Prototyping, as an analogue technique for usability testing, and Google Material Design, as a unifying theory of a rationalised digital space.

Materialize was developed as a semester project and as the master thesis for my MFA degree at the Bauhaus Universität Weimar, between 2016 and 2017.

The Problem

During my master studies at the Bauhaus University, I became very interested in how real-world analogies become principles for developing graphical user interfaces. After studying some of the theories behind Google Material Design as a design system, I started using it as a base in several of my projects, as it combines classic principles of good design with a very clever metaphor: a digital three-dimensional environment derived from the study of paper and ink.

However, one of the problems I encountered while integrating it into my projects was that it was hard to explain to people who were not familiar with design practices. This is when I thought that bringing it back to the physical world through an analogue technique would be an effective way to help people understand the behaviour of a digital surface in relation to the physical world. Since the theory originates from the properties of real-life paper, something similar to paper prototyping seemed like an adequate methodology to bridge the tangible and the intangible.

Initial project concept.

The Purpose

The goal of this project was to provide a unified experience across two different platforms, allowing users to build interface layouts and transform them into digital visualisations grounded in the physical world. Materialize intends to reinforce the importance of analogue studies as a way of understanding paradigms brought by new technologies and innovations, and at the same time to encourage further developments in service of usability practices.

To carry out this proposal, two different technical approaches connecting analogue and digital components were developed:

Printed Electronics

This approach consisted of a conductive-ink printed grid that supported multiple touch signals. The grid's basic operation relied on conductive contactors printed with silver ink using inkjet technology. Each contactor registered a momentary touch input from a conductive paper component and connected to a web visualisation through an Arduino. The visualisation displayed the received touch inputs as card components.
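The Arduino-to-web link described above can be sketched as a small parsing and mapping step on the receiving side. This is a hypothetical illustration only: the message format (`"T:<index>"`), the 4x4 grid size, and the function names are assumptions, not the project's actual protocol or code.

```python
# Hypothetical sketch of the contactor-to-card mapping on the web side.
# The "T:<index>" message format and the 4x4 grid are assumptions.

GRID_COLS = 4  # assumed width of the printed contactor grid

def parse_touch(message):
    """Parse a serial line like 'T:5' from the Arduino into a contactor index."""
    if not message.startswith("T:"):
        return None
    try:
        return int(message[2:])
    except ValueError:
        return None

def contactor_to_card(index, cols=GRID_COLS):
    """Map a contactor index to a card position (row, col) in the web layout."""
    return {"row": index // cols, "col": index % cols}

# In practice the messages would arrive over a serial connection
# (e.g. reading lines from the Arduino's serial port with a library
# such as pyserial) before being handed to the visualisation.
```

For example, a touch on contactor 5 of a 4-column grid would surface as a card in the second row, second column.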

Circuit design, PCB prototype and conductive-ink prototype with web visualisation

This first approach served as a proof of concept and enabled single touch inputs that were successfully rendered in a web visualisation. However, the technique allowed only very restricted interactions, so the resulting visualisation was equally limited. In addition, several electronic components were needed to build the prototyping system, which made it difficult to handle and transport and reduced its stability and durability. These technical disadvantages prompted the consideration of a different technology to obtain better results.

Computer Vision

Why computer vision? The greatest advantage of this approach is that every pixel sensed by the camera acts as a sensor providing information to the system, so the amount of information obtained with computer vision far exceeded what the printed electronics version delivered. Unlike printed electronics, only a web camera was required to establish a stable connection, and no extra components were needed. Users could interact with ordinary paper components that could be easily built.

The most important feature of this approach was the ability to detect paper elements and their positions, known as object detection, which was done through image segmentation. To achieve this, a contour approximation and shape detection algorithm was used; the results were then translated into coordinates used to render the layout through a web API.
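In OpenCV this contour-approximation step is typically done with `cv2.approxPolyDP`, which implements the Ramer-Douglas-Peucker algorithm, and the shape is then classified by counting the approximated vertices. A minimal pure-Python sketch of that idea follows; the thresholds and names are illustrative, not the project's actual code.

```python
import math

def _perp_dist(pt, a, b):
    """Perpendicular distance from point pt to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def approx_poly(points, epsilon):
    """Ramer-Douglas-Peucker simplification: drop points closer than
    epsilon to the line between the segment's endpoints."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = approx_poly(points[:index + 1], epsilon)
        right = approx_poly(points[index:], epsilon)
        return left[:-1] + right  # merge, dropping the shared point
    return [points[0], points[-1]]

def classify(approx):
    """Classify a closed contour by its approximated vertex count."""
    verts = approx[:-1] if approx and approx[0] == approx[-1] else approx
    n = len(verts)
    if n == 3:
        return "triangle"
    if n == 4:
        return "rectangle"
    return "circle" if n > 6 else "polygon"
```

Running a noisy square outline through `approx_poly` collapses it to four corners, which `classify` then reports as a rectangle, which is how a paper card on the table would be recognised.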

OpenCV contour approximation and shape detection from Adrian Rosebrock, 2015.
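Once shapes are detected, their positions still have to be translated from camera pixels into coordinates the web renderer understands. A hedged sketch of that translation step, where the payload shape, frame size, and function name are assumptions for illustration:

```python
def to_layout(bbox, frame_w=640, frame_h=480):
    """Normalise a detected bounding box (x, y, w, h) in camera pixels
    into 0-1 layout coordinates suitable for a web renderer.
    The frame size and payload keys are assumed, not the project's API."""
    x, y, w, h = bbox
    return {
        "x": round(x / frame_w, 3),
        "y": round(y / frame_h, 3),
        "width": round(w / frame_w, 3),
        "height": round(h / frame_h, 3),
    }
```

Normalised coordinates keep the rendered layout independent of the camera resolution, so the same payload works whatever size the browser viewport is.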

The Final Product

Screenshots from the rendering process.

As the Google Material Design guidelines were an important part of the theoretical framework of the project, they were used as the basis for the interface design, iconography and typography in order to keep consistency. The design of the web product is simple and minimalistic, using flat, outlined illustrations that do not compete with the main components of each step. The navigation flow across the system is simple and linear, divided into three main components: camera transmission, image manipulation and editing, and the final digital version of the prototyping process.

Pictures from the testing set up.

This project would not have been possible without the help of several friends and colleagues that were always there to solve any kind of problem 🖤. Special thanks to:

Luis Miguel Zapata, for making me suffer with circuits and Python.
Mariana Sanchez, for the initial project ideas.
Javier De La Hoz, for saving my console.
Santiago Florez, for helping me with JavaScript over WhatsApp conversations.
Sergio Robledo, for his ninja abilities in Front-End.
Jonny Velez, for making my life easier with a line of code.


Laura Agudelo

Laura is a Product Designer focused on integrating visual design, interaction design and user research practices.