Introduction to the Project
Through my PhD research, I am investigating Alternative Designs for Perception and Interaction in Virtual Spaces.
Throughout its history, digital space has been dominated by systems designed with sight as the primary sense for interaction. Perception is not only about the senses through which we access information, but also about how those senses let us interpret and understand an experience. Interaction in virtual space is possible through many different forms, and diversifying these further increases accessibility and ease of access for wider audiences.
Regarding and designing digital things as spaces, rather than as flat layers on our screens, is important for creating immersion within the virtual and, in doing so, increasing engagement. Alternative designs are especially significant as corporations begin to see the value of non-game spatial web environments; enabling them now allows for varied sensory access from the outset.
This varied access enables people who cannot rely on the visual as their primary sense to enjoy and understand virtual spaces in the same way they can the physical, through a variety of other senses. A research-through-design process is being undertaken, leading to a gradual change in the direction of the investigation through evolving practice. This research is not aimed only at non-visual approaches; it also seeks to include sight where it matters to the design of digital space, rather than implementing it purely out of expectation.
Work so Far
This is my blog/website, and a link to my monthly posts detailing my research is also available here.
Experimenting with animation at the outset of my research journey allowed me to investigate notions of space playfully, with visual outputs. Animation, which I produced mainly in Blender, is a spatial output when done in a 3D workspace, using digital models, scans, and keyframing to create motion. This exploration of visual delight led me to look deeper into photogrammetry and 3D scanning, along with the idea of positioning real-world objects within digital spaces so that they can be manipulated.
Machine Learnt Landscapes was a project which sprang from this desire to look at senses of materiality and vanity within virtual realms. It began with my undergraduate final-year work, which I then progressed by combining it with my animation-based explorations during the earliest stages of my PhD.
Taking images of myself from my phone’s storage, I fed these into a machine learning GAN, producing machine-learnt images of myself that could be generated infinitely. From one run of this GAN, I got an output of 1,000 unselected images. I used these images to drive deformation maps and textures on 3D objects in animation, creating virtual landscapes which might ‘appeal’ to my own self-image.
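To illustrate the deformation-map idea in the abstract (a minimal sketch in plain Python, not the actual Blender node setup; the function name and grid representation are hypothetical), pixel intensities from a generated grayscale image can be read as heights that displace the vertices of a flat grid, turning the image into a landscape heightfield:

```python
def displace_grid(image, strength=1.0):
    """Turn a 2D list of intensities in [0, 1] into landscape vertices.

    Each pixel becomes one (x, y, z) vertex of a flat grid, with its
    z (height) set by the pixel's intensity scaled by `strength`.
    """
    verts = []
    for y, row in enumerate(image):
        for x, intensity in enumerate(row):
            verts.append((x, y, intensity * strength))
    return verts
```

In Blender this same mapping is done by a displacement modifier or shader driven by the image texture; the sketch just shows why a different GAN output yields a differently shaped landscape.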
I then converted the ideas of these animations into a game engine environment, embodied through a rigged scan of myself, shown below. This was submitted to and accepted by a UCL Media Anthropology Lab conference, where I talked about the ideology behind a landscape made of myself. I also ran a workshop for their master’s students on the basics of modelling, texturing, and shading in Blender. A link to the online live space is available below.
Several VR experiments came from this exploration of machine learning in game space. I wanted to play with the senses within an immersive game environment while learning the basics of using game engines. Trying Unreal Engine 4, I began to learn node editing, enabling me to create a physics-bending game based on weight and gravity, with a vast array of cubes laid out on a table. Exploring VR made me think about the lack of tactility and the heavy focus on visuals, not only in games but within the wider digital space we have access to.
Gather exploration only furthered this thought. Looking at video conferencing in a new way, Gather brought many game elements into the conferencing space, attempting to engage users with interactive and diverse spaces. However, as I came to grips not only with using Gather but also with making spaces to work within it, I found a distinct disconnect between the spaces we made and the video-conferencing elements overlaid on top of them. This led me to develop a paper for DRS Bilbao with Paul Coulton, Dave Green and Joseph Lindley (currently pending acceptance) that highlights and discusses the shortcomings and benefits of the Gather platform. The main takeaway was, again, the overemphasis on the visual, leading me towards an interest in sound as a sensory experience in its own right.
With sound in mind, I began exploring animated space again, this time with an audio focus. I started making mock-up levels to explore binaural sound in Blender, aiming to spatialise the auditory with a digital output in mind. After hearing the preliminary outputs, I realised there was potential for maze-based games that use sound for navigation.
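As a rough illustration of the kind of spatialisation involved (a minimal sketch, assuming constant-power stereo panning by azimuth; this is not Blender’s actual audio pipeline, and the function name is hypothetical), a mono source can be split into left/right gains according to its direction relative to the listener:

```python
import math

def pan_gains(listener_pos, listener_yaw, source_pos):
    """Constant-power stereo panning from the source's azimuth.

    Returns (left_gain, right_gain): a source dead ahead gives equal
    gains, while a source to the listener's right boosts the right
    channel. Gains satisfy left**2 + right**2 == 1 (constant power).
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    # Azimuth of the source relative to the facing direction (radians).
    azimuth = math.atan2(dx, dy) - listener_yaw
    # Clamp azimuth to [-pi/2, pi/2] and map it onto a pan in [0, 1].
    pan = max(-math.pi / 2, min(math.pi / 2, azimuth)) / math.pi + 0.5
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)
```

Full binaural rendering also adds interaural time differences and head-related filtering, but even this level-difference cue is enough to make a sound source feel locatable as the listener turns.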
The stage I am currently at is creating this game for an audio-only output. Below is the game in a very recent state, with the visual element unhidden to clarify its workings. As the user, you play through a series of maze levels, listening for the endpoint sounds. As the levels progress, they become more sprawling, with obstacles that reset you to the start and walls that direct your movement.
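The core loop can be sketched abstractly (a hypothetical, minimal grid-maze model, not the game’s actual implementation): walls block movement, and the endpoint emits a sound whose loudness falls off with distance, so the player navigates by moving in whichever direction makes the sound louder:

```python
import math

# Hypothetical level layout: '#' walls, 'S' start, 'E' endpoint.
MAZE = [
    "#####",
    "#S..#",
    "#.#.#",
    "#..E#",
    "#####",
]

def find(char):
    """Return the (x, y) cell of the first occurrence of `char`."""
    for y, row in enumerate(MAZE):
        x = row.find(char)
        if x != -1:
            return x, y
    raise ValueError(f"{char!r} not in maze")

def endpoint_loudness(player):
    """Loudness of the endpoint sound, falling off with distance."""
    ex, ey = find("E")
    dist = math.hypot(ex - player[0], ey - player[1])
    return 1.0 / (1.0 + dist)

def step(player, dx, dy):
    """Move one cell; walls block movement (the player stays put)."""
    nx, ny = player[0] + dx, player[1] + dy
    if MAZE[ny][nx] == "#":
        return player
    return (nx, ny)
```

Here loudness alone stands in for the game’s full audio cues; combining this distance falloff with the panning described earlier gives both range and direction without any visuals.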
This game is an attempt to test appropriate interfaces for interacting with audio experiences, alongside optimal sounds and sensory-bandwidth limits for such systems. While presented through a game medium, the research aims to be useful for sound navigation in wider digital space, exploring how sound can be implemented with a mechanism-first approach rather than falling into sound-based tropes of story-driven narratives.
The game was submitted as an extended abstract to DiGRA 2022 Guadalajara and accepted, alongside a similar PhD consortium submission on the same topic.
Going forward, the focus of the project may evolve through the research-through-design process, but currently the plan is to continue developing the game on the following timeline: