Portfolio

Hey, I’m Zach Mason, an Interactive Designer and Creative Technologist. My work ranges from apps, games and animation to interactive code, installations and digitally generated sculptures. Below are the key solo projects that best define my practice; if you’d like to see more, the archive tab holds full documentation of every project from my last four years of university.

MACHINE LEARNT LANDSCAPES

Documentation video of an exploration across game engine, machine learning and 3D animation environments. I trained models on images of faces to form landscapes, and set machines the task of understanding what images of me are composed of. The piece pushes the limits of digital space, and asks how AI may be used in the near future to shape that space to suit our whims.

See More

EMOTICO

AR app created in Unity and Vuforia using object targets. Designed as a satirical user experience that makes viewers question the use of medication to regulate emotional states in modern healthcare. The same approach could be used in industry to build apps that display detailed information about a medicine when the user points their phone at its box.

See More

AUGMENTED REALITY BOARD GAMES

AR app created in Unity and Vuforia using image targets. Designed as a proof of concept for augmented reality board games, intended to increase engagement and versatility. The physical pieces are the flat tiles seen on the board, with the chess set on top generated through augmented reality on the phone. The pieces could be changed in the app to represent any standard board game, and could even let users create their own games with the app and the existing board.

See More

Computer program created in Processing 3.0 with a webcam feed, acting like a visual synthesiser. The program reads colour data from objects placed in a straight line on a table; those colours are converted into variables that generate the object seen in the centre of the screen. This could be developed into a science museum installation, using the objects to generate visuals and sound in an interactive area.

See More
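The colour-to-variable idea above can be sketched in a few lines. This is a minimal Python illustration with hypothetical names and an assumed mapping (hue to rotation, saturation to size, value to complexity), not the original Processing source:

```python
# Sketch of the colour-to-parameter mapping (hypothetical names, not the
# original Processing code): RGB samples taken along a line of objects
# are converted into drawing variables for a generated shape.
import colorsys

def colour_to_params(rgb):
    """Map one sampled colour to shape parameters.

    Hue drives rotation, saturation drives radius, value drives the
    number of sides -- an assumed mapping, for illustration only.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return {
        "rotation_deg": h * 360.0,   # hue -> rotation
        "radius": 20 + s * 100,      # saturation -> size
        "sides": 3 + int(v * 9),     # value -> polygon complexity
    }

def read_line_of_objects(samples):
    """Convert a row of sampled colours (the objects on the table)
    into a list of parameter sets, one per object."""
    return [colour_to_params(rgb) for rgb in samples]

# Two sampled objects: a pure red one and a blue one.
params = read_line_of_objects([(255, 0, 0), (0, 128, 255)])
```

In the actual installation the samples would come from webcam pixels rather than hard-coded tuples, and the parameters would feed the renderer each frame.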

PORTRAITS OF THE MACHINE

Machine learning video output from Runway. Starting from a standard facial-generation model, the network was trained on more than 500 un-cropped images of my face; the final output of the trained model can be seen above. The piece was created to provide artistic visuals and imagery, and to highlight the digital materiality we build up through the mass photography of our own faces. The outputs can also be turned into sculptures or prints, which I have done, to make the visuals more physical for the viewer.

See More

DIGITAL MATERIALITY

Data visualisation generated with Autodesk Fusion, GrandPerspective and Illustrator. The towers represent blocks of data on my computer: each tower’s size reflects the size of the file, and its height reflects how recently the file was accessed. The piece was made to show how much we hoard in digital space without realising it, because these files are so intangible. The process could be automated into software for other visualisation purposes, such as highlighting spending in banking apps or exploring the data on a computer.

See More
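The file-to-tower mapping could be automated along these lines. This is a hedged Python sketch with assumed formulas (a log scale for size, decay for recency), not the actual Fusion/GrandPerspective workflow:

```python
# Sketch of the tower mapping (assumed formulas, hypothetical names):
# each file becomes a tower sized by file size and heightened by recency.
import math
import time

def file_to_tower(size_bytes, last_access_epoch, now=None):
    """Return (footprint, height) for one file's tower.

    Footprint uses a log scale so huge files don't dominate the scene;
    height decays with days since last access, so recently touched
    files stand taller.
    """
    now = time.time() if now is None else now
    footprint = math.log2(size_bytes + 1)           # size -> base area
    days_old = max(0.0, (now - last_access_epoch) / 86400.0)
    height = 100.0 / (1.0 + days_old)               # recency -> height
    return footprint, height

# Example: a 1 MiB file last opened three days ago.
fp, h = file_to_tower(1_048_576, time.time() - 3 * 86400)
```

Feeding every file on a drive through such a function would produce the tower field automatically, ready for rendering or export to a CAD tool.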

THE MIND PALACE

3D animation created with Autodesk Fusion, Mudbox, Maya and Skanect, along with the Xbox Kinect camera. The animation reflects a personal understanding of mental state, using scans of my head and of the flat I lived in at the time to represent mental concepts. The whole space has been intentionally distorted to give the visuals a memory-like feeling. I could see this shown in an art gallery, or as a VR film for a more immersive experience that places the viewer at the centre of the mind palace.

See More