Introduction to the Project
Through my PhD Research, I am investigating Accessible Spatial Design for Virtual Perception and Interaction.
Throughout its history, digital space has been dominated by systems designed around sight as the primary sense for interaction. Perception is not only about the senses through which we access information, but also about how those senses let us interpret and understand an experience. Interaction in virtual space can take many different forms, and by diversifying these further we increase accessibility for wider audiences.
Regarding and designing digital things as spatial, rather than as flat layers on our screens, is important for creating immersion within the virtual, and in doing so we increase people’s ability to engage. Accessible design is especially significant as corporations begin to see the value of spatial web environments that aren’t game-based; enabling accessibility now allows for varied sensory access from the outset.
This varied access enables people who cannot rely on vision as their primary sense to enjoy and understand virtual spaces in the same way they can physical ones, through a variety of other senses. A Research through Design process is being undertaken, leading to gradual changes in the direction of the investigation through evolving practice. This research isn’t aimed only at non-visual approaches: it also seeks to include sight where it matters to the design of digital space, rather than implementing it out of expectation, and to regard the senses as a wider range of access points.
Work so Far
This page is on my blog/website, and below is a link to my monthly posts detailing my research, as well as a breakdown of my Research through Design process.
Experimenting with animation at the outset of my research journey allowed me to investigate notions of space playfully, with visual outputs. Animation, which I mainly explored through Blender, is a spatial output when done in a 3D workspace, using digital models, scans and keyframing to create motion. This exploration of visual delight led me to look deeper into photogrammetry and 3D scanning, along with the idea of positioning real-world objects within digital spaces so they can be manipulated, decreasing the borders between virtual and physical environments.
Machine Learnt Landscapes was a project which sprang from this desire to look at senses of materiality and vanity within virtual realms. It began with my undergraduate final-year work, which I then progressed by combining it with my animation-based explorations during the earliest stages of my PhD.
Taking images of myself from my phone’s storage, I fed these into a machine learning GAN, creating machine-learnt images of myself which could be produced infinitely. From one run of this GAN I generated an output of 1,000 unselected images. I used these images to drive deformation maps and textures on 3D objects in animation, creating virtual landscapes which might attempt to ‘appeal’ to my own self-image.
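As a rough illustration of that image-to-landscape step (in practice this was done with Blender’s displacement and texture tools, so the function names and values here are hypothetical), each generated image can be read as a heightfield, with pixel brightness driving how far a landscape vertex is displaced:

```python
# Hypothetical sketch: turning a generated RGB image into a displacement map.
# Pixel brightness (luminance) becomes a vertex height offset.

def luminance(r, g, b):
    # Standard Rec. 709 luma weights for perceived brightness
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def displacement_map(pixels, strength=1.0):
    """Map each RGB pixel (0-255) to a height offset in [0, strength]."""
    return [[strength * luminance(*px) / 255.0 for px in row] for row in pixels]

# A tiny 2x2 'generated image': black, white, mid grey, pure red
img = [[(0, 0, 0), (255, 255, 255)],
       [(128, 128, 128), (255, 0, 0)]]
heights = displacement_map(img, strength=0.5)
```

Applied across 1,000 images, this kind of mapping is what lets each GAN output deform the same base mesh into a different landscape.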
I then converted the ideas of these animations into a game engine environment, embodied through a rigged scan of myself, shown below. This work was accepted at a UCL Media Anthropology Lab conference, where I talked about the ideology behind a landscape made of myself. I also ran a workshop for their master’s students on the basics of modelling, texturing and shading in Blender. A link to the online live space is available below.
https://www.uclmal.com/exhibition?pgid=kq1927vp-13f77b4c-647a-400f-b433-034f7de799e6
Several VR experiments came from this exploration of machine learning in game space. I wanted to play with the senses within an immersive game environment whilst learning the basics of games engines. Trying Unreal Engine 4, I began to learn node editing, enabling me to create a physics-bending game based on weight and gravity, with a vast array of cubes laid out on a table. Exploring VR made me think about the lack of tactility and the heavy focus on visuals, not only in games, but within the wider digital space we have access to, as well as the upcoming social Meta spaces being advertised.
Using Gather (a web-based conferencing system) for exploration only furthered this thought. Looking at video conferencing in a new way, Gather brought many game elements into the conferencing space, attempting to engage users with interactive and diverse spaces. However, as I got to grips not only with using Gather but also with making spaces to work within it, I found a distinct disconnect between the spaces we made and the video conferencing elements overlaid on top of them. This led me to develop a paper for DRS Bilbao with Paul Coulton, Dave Green and Joseph Lindley (currently pending acceptance) that discusses the shortcomings and benefits of the Gather platform. The main takeaway was again the overemphasis on the visual, leading me towards an interest in sound as a sensory experience in its own right, and as a way to design something in opposition to this shortcoming of digital space.
With sound in mind, I began exploring animated space again, this time with an audio focus. I started making mock-up levels to explore binaural sound in Blender, using the software skills from my animation experimentation. This aimed to spatialise the auditory with a digital output in mind. After hearing the preliminary outputs, I realised there was potential for maze-based games which use sound for navigation. I took the experience gained through making the Machine Learnt Landscapes project and began making a game in Unreal Engine. This moved from UE4 to an alpha version of UE5, set to release this year, due to the variety of experimental sound features it makes available. The game is currently playable and in early testing stages.
The stage I am currently at is developing this game for an audio-only output. Below is the game in a very recent state, with the visual element unhidden to clarify its workings. As the user, you play through a series of maze levels, listening for the endpoint sounds. As the levels progress they become more sprawling, with obstacles that reset you to the start and walls that direct your movement.
This game is an attempt to test appropriate interfaces for interacting with audio experiences, alongside optimal sounds and the limits of sensory bandwidth for these systems. While presented through a game medium, the research aims to be useful for sound navigation in wider digital space, exploring how sound can be implemented with a mechanism-first approach rather than falling into the sound-based trope of story-driven narratives.
The game was submitted as an extended abstract to DiGRA 2022 in Guadalajara and accepted, alongside a similar PhD consortium submission on the same topic. I have also applied for ethics approval for testing at the university to gain some preliminary user feedback, which should take place within the next month. After this, we plan to organise testing with blind and visually impaired participants, working with charities focused on these areas, to gain further feedback on development for a co-designed Research through Design approach.
Project Timeline
Going forward, the focus of the project may evolve through Research through Design, but currently the plan is to continue developing the game on the following timeline:
| Date | Plan |
| --- | --- |
| February 2022 | Continue development of the game for initial user testing, plus paper revisions. |
| March 2022 | Initial university-based testing, followed by further development. |
| April 2022 | Further development, followed by user testing at MozFest Manchester. |
| May 2022 | Writing up findings and transcripts of user testing. |
| June 2022 | Workshops with blind and visually impaired users through charities. |
| July 2022 | Writing up workshop outcomes and transcripts. Attending DiGRA to talk about the game. |
| August 2022 | Revising the game further, working out bugs and adapting based on user feedback. |
| September 2022 | Paper(s) on the game’s successes and shortcomings, using data and user feedback. |
| October 2022 | Further revisions based on the outcomes of the paper(s). |
| November 2022 | Potential wider public release of the game, with a consideration of the entire process. |
| December 2022 | Consider what the papers, transcripts and workshops uncover; conceive new Research through Design. |
| January 2023 | Begin compiling the thesis: research methodology and reference collation. |
| February 2023 | Write up the progression of the PhD so far to clarify findings further. |
| March 2023 | Consider where the last 14 months should go from here in terms of Research through Design exploration. |
Since writing this PhD breakdown in January, most of the timeline has been maintained. Currently I am looking to pursue an internship in the first half of 2023. Three papers have been accepted and presented at conferences (DiGRA, DRS and CHI), and the game was showcased at MozFest in Manchester. We gained funding to run the co-design workshop, which happened in June and was extremely successful. We want to apply for further funding to continue developing the game, but currently my time is taken up applying for internships and then beginning to write the initial paper on the workshop outcomes. Below are links to the papers, alongside workshop images and sketch notes:
Co-Designed Workshop for Game Sketch Notes and Images