Neurosculpture
An ongoing artistic research project focused on merging the spectator, the artist and the artwork into one by means of state-of-the-art technology from the world of neuroscience.
Introspection 1.0
This installation makes the virtual internal body accessible by showing a 3D reproduction of my head that gains a degree of lifelikeness through an augmented-reality projection of my brain in motion. Professor Pieter Jonker of the Biorobotics Lab at TU Delft asked me to develop an augmented-reality installation to support a project proposal investigating the application of augmented reality during laparoscopic (minimally invasive) surgery. This led to the realisation of Introspection 1.0. In collaboration with the Computer Aided Design Engineering group of the department of Industrial Design, I worked on a design that functioned both as a techno-scientific proof-of-concept demo and as a work of art. Various medical visualisation methods, such as fMRI, were used to create a moving picture of the inside of my head. A 3D laser scan of my head was redesigned (retopologized) to make it suitable for 3D animation. The digital model was 3D-printed at life size in transparent plastic, and the MRI scan of my brain was projected onto it. Our idea was that the user could interact with the installation by means of a (PS-Tech) 3D tracking system and a 3D projection system, developed by Dr. Gerwin de Haan of the VR Lab, that makes it possible to map the projected virtual object onto its real counterpart in 3D without distortion.
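To illustrate the kind of pipeline such a system implies, the sketch below shows the geometric core of distortion-free projection mapping: the virtual brain is re-rendered every frame from the projector's calibrated viewpoint, using the tracked pose of the physical print. This is a minimal illustration in Python with numpy, not the installation's actual software; all function names, matrices and calibration values are assumptions.

```python
# Minimal sketch (not the installation's actual code) of the geometric idea behind
# distortion-free projection mapping: render the virtual brain from the projector's
# point of view, using the tracked pose of the physical head. All values are
# illustrative placeholders.
import numpy as np

def pose_to_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid-body transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def project_vertices(vertices, head_pose, projector_extrinsics, projector_intrinsics):
    """Map 3D model vertices (N x 3, in the head model's local frame) to projector pixels.

    head_pose:            4x4 pose of the physical head reported by the tracker (world frame)
    projector_extrinsics: 4x4 world-to-projector transform obtained from calibration
    projector_intrinsics: 3x3 pinhole matrix of the projector (focal lengths, principal point)
    """
    n = vertices.shape[0]
    homogeneous = np.hstack([vertices, np.ones((n, 1))])      # N x 4, homogeneous coordinates
    in_world = head_pose @ homogeneous.T                      # place the model where the print is
    in_projector = (projector_extrinsics @ in_world)[:3, :]   # 3 x N, projector camera frame
    pixels = projector_intrinsics @ in_projector              # perspective projection
    return (pixels[:2] / pixels[2]).T                         # N x 2 pixel coordinates

# Each frame: read the latest pose from the tracker, re-project the MRI-textured
# brain mesh, and send the rendered image to the projector.
```

The point of the per-frame re-projection is that the image stays registered to the physical head whenever the tracked pose changes, which is what keeps the projection free of visible distortion.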
Introspection 2.0
In September 2010 the art-science festival Key of Life invited me to exhibit in the group exhibition Brainscapes, alongside British artists such as Susan Aldworth and Andrew Carnie. During the festival I met neuroscientist Dr. Sarah de Rijcke, who lectured on the objectivity of brain imaging. She invited the London-based artist Annie Cattrell and me to participate as artists in her project Art Regarding the Brain, a continuation of her dissertation (De Rijcke, 2010). De Rijcke's project actively seeks new and productive ways to bring the results of humanities research on neuro-imaging practices into the public domain, by bringing them into dialogue with artistic research and by making humanities and artistic research strands part of the public debate. The project seeks to stimulate innovation in both public and academic debates about brain research through the arts. Sarah's research inspired me to go deeper into the realm of the brain and to think about ways to make a new version of Introspection based on brainwave (EEG) technology. Chapter 4 of Sarah's thesis, on visualising white matter in the brain with the diffusion-weighted magnetic resonance imaging (DWI) technique, is especially inspiring. DWI is a relatively new brain-imaging technique that does not seem to fit prevailing notions of realism or objectivity. After two decades of 'photorealistic' magnetic resonance imaging (MRI), and a much longer tradition of mechanical objectivity, how should we understand these digital, interactive diffusion-weighted images of white matter?
A Holographic Cabaret
“...Case had seen the medium before; when he’d been a teenager in the Sprawl, they’d called it, ‘dreaming real.’ He remembered thin Puerto Ricans under East Side streetlights, dreaming real to the quick beat of a salsa, dreamgirls shuddering and turning, the onlookers clapping in time. But that had needed a van full of gear and a clumsy trode helmet. What Riviera dreamed, you got...” (Gibson 1984)
The new role of the artist in relation to brain visualisation was perhaps already alluded to in William Gibson's (1984) novel Neuromancer. The public sees what the performer "dreams" in real time, like a "holographic cabaret". My next step is to do experimental research on combining EEG techniques with eye tracking; I have already implemented an open-source eye tracker in a previous project. I would like to create a setting where people look at artworks they create themselves and discover the relation between what they see and their brain activity. I am still looking for the right formula for the Neurosculpture concept. Could I develop generative algorithms based on the anatomy of a brain scan? Or design a new architecture based on the principles of artificial neural networks? Or some sort of fractal-based rhizome distribution system?
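As a first, very rough sketch of how EEG and eye tracking could be coupled in such a setting, the following Python fragment estimates alpha-band power from a short EEG window and uses it, together with the current gaze point, to set the position and scale of a generative element. The sampling rate, frequency band, function names and mapping are all placeholder assumptions for illustration, not decisions taken in the project.

```python
# Hedged sketch of the proposed EEG + eye-tracking coupling: the viewer's gaze
# chooses *where* a generative element appears, alpha-band power chooses *how large*.
# Sampling rate, band limits and mapping are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 256          # Hz, assumed EEG sampling rate
ALPHA_BAND = (8.0, 12.0)   # Hz, conventional alpha range

def band_power(eeg_window: np.ndarray, band=ALPHA_BAND, fs=SAMPLE_RATE) -> float:
    """Mean spectral power of a single-channel EEG window within the given band."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[mask].mean())

def draw_parameters(eeg_window, gaze_xy, baseline_power=1.0):
    """Turn brain activity and gaze into parameters for one generative element."""
    relative_power = band_power(eeg_window) / baseline_power
    size = np.clip(relative_power, 0.1, 10.0)   # bounded so the visual stays legible
    return {"x": gaze_xy[0], "y": gaze_xy[1], "size": size}

# Example with synthetic data: one second of noise standing in for an EEG channel,
# and a gaze point in normalised screen coordinates.
fake_eeg = np.random.randn(SAMPLE_RATE)
print(draw_parameters(fake_eeg, gaze_xy=(0.4, 0.6)))
```

In an installation, the same mapping would run continuously on the live EEG and gaze streams, so the spectator watches an image that is literally shaped by the act of looking at it.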