December 2020

I am continuing work on the documentation video of the Machine Learnt Landscapes submission for the UCL MAL conference, as the deadline was moved back a week. Here I have added a machine voice-over generated by Amazon Polly to narrate the visuals. It works really well with the idea of fusing machine and myself together, as well as with the title card I added indicating the project to be a collaboration between myself and machine learning.
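For reference, a minimal sketch of how a narration clip can be generated with Polly through boto3 is below; the region, voice ID, and script line are placeholder assumptions rather than the exact settings I used.

```python
# Minimal sketch of generating a narration clip with Amazon Polly via boto3.
# The region, voice, and script text are placeholder assumptions.
import boto3

polly = boto3.client("polly", region_name="eu-west-1")

response = polly.synthesize_speech(
    Text="The machine learns the landscape, and I learn the machine.",  # placeholder line
    OutputFormat="mp3",
    VoiceId="Brian",  # assumed voice; any Polly voice ID could be swapped in
)

# Write the returned audio stream out so it can be layered over the visuals in editing.
with open("narration.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```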

This version has dual vocals, which is perhaps more confusing but adds to the collaborative feeling of the entire piece. I think the parts where the machine and I are most in sync work most effectively, but I also like some of the differentiation, as well as the split of the audio into left and right stereo channels. They make the piece feel more otherworldly and create a sense of alterity. With a little more tuning of my vocal track, I think this piece could be very close to being finalised.

Trying to further sync up the vocals of the two tracks was incredibly difficult, with every minute difference being noticeable. Gradually nudging small sections of my own voice, while keeping the original as a reference to fine-tune against, has been the best process for making this work.
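A more systematic way to estimate the drift between the two takes, rather than judging purely by ear, would be to cross-correlate the waveforms. The sketch below only shows the general technique; the filenames are assumptions and this was not my actual workflow.

```python
# Sketch: estimate the offset between two vocal takes by cross-correlating their
# waveforms. Filenames are assumptions; this was not the workflow I actually used.
import numpy as np
import librosa
from scipy.signal import correlate

my_voice, sr = librosa.load("my_vocal.wav", sr=22050, mono=True)
machine_voice, _ = librosa.load("polly_vocal.wav", sr=22050, mono=True)

# FFT-based cross-correlation; the peak tells us how many samples one track lags the other.
corr = correlate(my_voice, machine_voice, mode="full", method="fft")
lag_samples = np.argmax(corr) - (len(machine_voice) - 1)
print(f"Estimated offset: {lag_samples / sr * 1000:.1f} ms")
```

A positive offset would mean my take trails the machine's, and the estimate could then guide where to nudge individual phrases.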

This version, which is the intended final output, opens with just my voice, which then shifts into both the machine's voice and my own. This is to be in line with, and complement, the idea of training the model to understand and copy me in an audible way. The voices are also highly in sync in this version, which makes the experience feel even more unusual in a fascinating way.

After submitting the video work for the UCL conference, I wanted to focus more on a tactile way in which this artefact could be brought into physical space, as a form of further interaction with, and distancing from, the machine which created its surfaces. I began testing 3D printing with a 1mm nozzle as opposed to a 0.4mm one, which creates a much more tiered print with less detail, but with over 2.5 times the speed of production. For example, this print would have taken around 3 hours, but only took 50 minutes using the larger nozzle, and came out with a very interesting, rough finish.
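As a rough illustration of where that speed-up comes from, the sketch below compares the material laid down per pass for the two nozzles; the extrusion widths and layer heights are assumed typical values rather than my exact slicer settings.

```python
# Back-of-the-envelope sketch of why a wider nozzle prints faster: time scales
# roughly with the number of passes needed to fill the same volume, i.e. inversely
# with extrusion width x layer height. These are assumed values, not my slicer settings.
def relative_print_time(line_width_mm, layer_height_mm):
    # Coarser lines deposit more plastic per pass, so fewer passes are needed.
    return 1.0 / (line_width_mm * layer_height_mm)

fine = relative_print_time(line_width_mm=0.4, layer_height_mm=0.2)    # 0.4 mm nozzle
coarse = relative_print_time(line_width_mm=1.0, layer_height_mm=0.5)  # 1 mm nozzle

print(f"Theoretical speed-up: {fine / coarse:.1f}x")  # about 6x in raw throughput
```

The real-world gain is smaller (closer to 3-4x on this print) because travel moves, heating, and the extruder's maximum flow rate don't scale with the nozzle.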

After testing and ensuring the nozzle worked as intended, I attempted to print this model. The process of finalising the model into a sealed structure, with a flat base and the topology only on the top surface, was rather complex due to the file size and geometric detail, but after several long load times I managed to create the finished model shown above. I then began test printing large, flat-based prints, which often warp, with the corners of the print lifting upwards as the heat and cooling forces pull on them.

Using a combination of a high build plate temperature and a glue stick layer on the base, a very flat print was achieved in around 15 hours, whereas it would have taken over 2 days with the old nozzle and would have had a much higher chance of failing. I also really liked the more tiered appearance of the print with the larger nozzle, which resembled the geographic topography I wanted to convey.

Here the print can be seen in both shadowed and transparent settings. I really enjoy the darkness which is applied to the higher topology when put in front of a light source. I think I could potentially use this to create a form of interactive physical map, which could show your location as you move around the environment in the game itself. This could be used to subvert scale and provide an alternative perspective on the game environment, and potentially to comment on the nature of the machine-learning-generated environment.

Thinking about perspectives and scales within landscapes, I wanted to attempt to somewhat subvert the scale within some other models. Something I had tried before was using data from Google Maps as experimental space. Here I have converted the data into Blender 3D models, and then tried to create infinitely looping animations. I think the second better conveys this idea of space infinitely moving and, to some extent, questions the concept of scale within a digital environment. While scale heavily binds us within our physical reality, within digital environments the same is not the case: we can resize ourselves, easily shift our perspective, and great depth can be present within incredibly small areas.
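For anyone curious how a seamless loop like this can be set up, the sketch below keyframes a simple drift in Blender's Python API and cycles it; the object name, travel distance, and frame range are assumptions for illustration, not the actual terrain files.

```python
# Sketch: a seamlessly looping drift animation via Blender's Python API.
# The object name, travel distance, and frame range are illustrative assumptions.
import bpy

obj = bpy.data.objects["Terrain"]  # assumed object name
scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 120

# Keyframe one loop of movement along Y.
obj.location.y = 0.0
obj.keyframe_insert(data_path="location", index=1, frame=1)
obj.location.y = 10.0  # exactly one tile-length of the terrain, so the loop joins seamlessly
obj.keyframe_insert(data_path="location", index=1, frame=120)

# Linear interpolation plus a Cycles modifier keeps the drift constant and endless.
fcurve = obj.animation_data.action.fcurves.find("location", index=1)
for kp in fcurve.keyframe_points:
    kp.interpolation = 'LINEAR'
fcurve.modifiers.new(type='CYCLES')
```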

Video games are a prime example of this: eSports titles are designed around incredibly small environments, while large open-world games are exactly the opposite, yet players often find themselves engaging with the latter for far fewer hours. This is due to the diverse and intriguing options within a highly adaptable small space, which couldn't feasibly be created in larger digital spaces. AI could change this, but considering how these opposing genres could vastly alter the way we perceive space is highly relevant.

Here I have gone back to testing with my virtual AI-formed landscape. I wanted to think about how momentum could change perspective. If you look at the bottom of the video as it plays, the nauseating feeling isn't too severe, while if you focus on the top, where the landscape is moving faster relative to the camera, it is almost impossible to focus on. Although this is an extremely simple concept, I wanted to quickly test how this effect would visually appear, and what the limits of momentum are within digital spaces relative to framerate.
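To put a rough number on that limit, the sketch below estimates how many pixels a point in the landscape jumps between frames depending on how fast it moves relative to the camera and the framerate; the camera speed, distances, and field of view are assumed values, not measurements from my scene.

```python
# Sketch: per-frame on-screen displacement of a point in the landscape, as a
# function of its distance from the camera and the framerate. The camera speed,
# distances, and field of view are assumed values, not measurements from my scene.
import math

def pixels_per_frame(camera_speed, distance, fps, fov_deg=90.0, frame_width_px=1920):
    # Angular speed of a point sweeping past the camera (small-angle approximation),
    # converted into on-screen pixels travelled between consecutive frames.
    angular_speed = camera_speed / distance             # radians per second
    px_per_radian = frame_width_px / math.radians(fov_deg)
    return angular_speed / fps * px_per_radian

for distance in (5.0, 20.0, 100.0):   # a nearby part of the landscape vs distant ones
    step = pixels_per_frame(camera_speed=50.0, distance=distance, fps=30)
    print(f"distance {distance:>5.0f} m -> {step:.0f} px jump per frame")
```

Once the jump per frame grows beyond a small number of pixels, the eye can no longer smoothly track that region, which matches the difference I noticed between the two parts of the video.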