Art And The Brain

1,117 bytes added, 06:54, 29 October 2013
=Techniques=
This project focuses on the idea of the brain as a filter: feeding specifically chosen audio-visual material to test subjects and imaging their resultant cortical activity. Nishimoto et al. (2011) demonstrated the reconstruction of natural video imagery from functional imaging. van Gerven et al. (2013) have shown reconstruction of letter imagery from functional imaging. Pasley et al. (2012) reconstructed speech from the human auditory cortex. Additionally, work has been done on retrieving face imagery from subjects (IBM patent, 2005), predicting activity associated with noun meaning (Mitchell et al. 2008), and identifying object categories (Simanova et al.) or other kinds of semantic content.
 
Artwork implementing any of these reconstruction schemes would require trained subjects participating voluntarily. The goal of a session would be just-post-real-time reconstruction of the imagery or sound fed to the subjects. These reconstructions could be seen live, as an audio-visual performance, or replayed at a later date. Each audio-visual stimulus would produce one set of imaging data, which determines the output reconstruction.
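The session flow described above can be sketched as a minimal data-flow in Python. This is only an illustrative skeleton, not any of the cited reconstruction methods: all names (`Stimulus`, `acquire`, `reconstruct`, `session`) are hypothetical, and the imaging step is simulated rather than real fMRI/EEG acquisition.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stimulus:
    name: str
    frames: list  # the audio-visual material fed to the subject

@dataclass
class ImagingData:
    stimulus_name: str
    voxels: list  # simulated cortical-activity samples

def acquire(stimulus: Stimulus) -> ImagingData:
    # Placeholder for functional imaging during presentation:
    # one stimulus yields exactly one set of imaging data.
    voxels = [hash((stimulus.name, f)) % 100 for f in stimulus.frames]
    return ImagingData(stimulus.name, voxels)

def reconstruct(data: ImagingData, decoder: Callable[[list], list]) -> list:
    # The imaging data alone determines the output reconstruction.
    return decoder(data.voxels)

def session(stimuli: List[Stimulus], decoder: Callable[[list], list]) -> list:
    # stimulus -> imaging data -> reconstruction, per stimulus;
    # the results could be shown live or replayed later.
    return [reconstruct(acquire(s), decoder) for s in stimuli]
```

The point of the sketch is the one-to-one chain (stimulus, imaging dataset, reconstruction); a real decoder would be a model trained per subject, not the pass-through function used here.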
 
==Reconstructing Natural Video==
[[File:Nishimoto.etal.2011.Reconstruction.5panels.png|800px]]
*Identifying Object Categories from Event-related EEG. Irina Simanova, Marcel van Gerven, Robert Oostenveld, Peter Hagoort
**http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0014465
 
=Conclusions and Future Directions=
This piece has a few obvious limitations, but also many strengths that point to future directions of work as the technical infrastructure develops. In aestheticizing “the look” of the Nishimoto video reconstructions or “the sound” of the Pasley audio reconstructions, we are dwelling on artifacts of the data-collection process, the training dataset, and the reconstruction algorithm more than on anything specific to the mind of the subject. Ideally we could image some higher-level responses in the viewing subject (beyond primary visual areas); I’m sure many neuroscientists would also like to do this. Decoding higher-level processing would give us more meat for the project: more of a cognitive response as opposed to an evoked perceptual one.