

Xander Steenbrugge
This project is an attempt to explore new approaches to audiovisual experience based on Artificial Intelligence. I try to create rich & immersive soundscapes through a process of human-machine interaction. I do not create these works alone; I co-create them with the AI models that I bring to life.

More artworks on my Vimeo page: https://vimeo.com/neuralsynesthesia

My basic workflow:

1. I first collect a dataset of images that defines the visual style/theme the AI algorithm has to learn.
2. I then train the AI model to mimic and replicate this visual style (this requires large amounts of computational power in the cloud and can take several days). After training, the model can produce visual outputs that are similar to the training data, but entirely new and unique.
3. Next, I choose the audio and process it through a custom feature extraction pipeline written in Python.
4. Finally, I let the AI create visual output with the audio features as input. I then start the final feedback loop, where I manually edit, curate and rearrange these visual elements into the final work.

The AI does not fully create the work, and neither do I. It is very much a collaboration.
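The audio feature extraction step could be sketched roughly as follows. This is a minimal NumPy-only illustration, not the actual pipeline (which is not published here): it computes per-frame RMS energy and spectral centroid and normalizes them to [0, 1] so they can drive a model. The function name, frame sizes, and the synthetic test signal are all assumptions for the sake of the example.

```python
import numpy as np

def extract_features(audio, sr=22050, frame_len=2048, hop=512):
    """Per-frame RMS energy and spectral centroid, each normalized to [0, 1].

    Hypothetical stand-in for a real audio feature pipeline (e.g. one
    built on librosa); audio is a mono float array.
    """
    n_frames = 1 + (len(audio) - frame_len) // hop
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    window = np.hanning(frame_len)
    rms, centroid = [], []
    for i in range(n_frames):
        frame = audio[i * hop : i * hop + frame_len]
        rms.append(np.sqrt(np.mean(frame ** 2)))           # loudness proxy
        mag = np.abs(np.fft.rfft(frame * window))
        centroid.append((freqs * mag).sum() / (mag.sum() + 1e-8))  # brightness

    def norm(x):
        x = np.asarray(x)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    return norm(rms), norm(centroid)

# Demo with a synthetic sine sweep standing in for a real track.
sr = 22050
t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
audio = np.sin(2 * np.pi * (220 + 660 * t) * t) * np.linspace(0.2, 1.0, t.size)
energy, brightness = extract_features(audio, sr)
```

Each output array holds one value per audio frame, so it can be sampled at the video frame rate and fed to the visual model.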
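One common way such audio features become "input" for a generative image model is to use them to steer the model's latent vector; the sketch below assumes a StyleGAN-like 512-dimensional latent space and simply interpolates between two fixed latent keyframes according to per-frame energy. The function name, latent dimension, and keyframe scheme are illustrative assumptions, not the author's actual method.

```python
import numpy as np

def features_to_latents(energy, keyframes):
    """Blend between two latent 'keyframes' using a per-frame energy signal.

    keyframes: array of shape (2, dim). energy in [0, 1] interpolates
    between them, so louder frames push the output toward keyframes[1].
    Returns an array of shape (n_frames, dim), one latent per video frame.
    """
    e = np.clip(np.asarray(energy, dtype=float), 0.0, 1.0)[:, None]
    return (1.0 - e) * keyframes[0] + e * keyframes[1]

rng = np.random.default_rng(0)
keys = rng.standard_normal((2, 512))      # assumed StyleGAN-like latent dim
energy = np.linspace(0.0, 1.0, 100)       # stand-in for extracted features
latents = features_to_latents(energy, keys)
```

Each row of `latents` would then be passed through the trained generator to render one frame, so the visuals track the music's dynamics.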