Art And The Brain

From Robert-Depot


Human as Filter: Screen Printing on the Mind. Disaster Series. Lost Referent. Witnesses.

pure psychic automatism

Introduction: Disaster series

Project for DXARTS 490 AU13 - Art and The Brain

Recent studies have demonstrated approximate reconstructions of visual and auditory stimuli from neuroimaging. This project, The Disaster Series, uses that technique to explore the idea of the brain as filter. Test subjects are enlisted as closed conduits for audio-visual material, hidden from the audience, which is then reconstituted and exhibited as processed through their brains. The resultant videos are shown either in near real time (just after the event) or at a later date, as a kind of compound neuro-portraiture/printmaking project. This process confounds ideas of private/public and internal/external space: inputs are hidden, data is grabbed from the invisible space of the subjects' heads, and the results, intimately processed, are displayed in public.

This project dwells in the aesthetics of the imperfect reconstruction, as seen in Nishimoto et al.'s reconstruction videos below. They are marked by an aesthetics of incompleteness, ambiguity, and inchoate meaning. Above all I find them effused with a sense of pathos: someone (something?) is suffering from a lack of adequate information. This project stages the extraction and attempted reconstruction of imagery from the brain as an aspirational, desirous event. The system enacts the impossible desire to peer inside another's head, to see through another's eyes. This emphasizes the unbridgeable gulf between self and other, interior and exterior world.

The key determinants of meaning in this piece are the choice of video material for the training database, the choice of test subjects/performers, the nature of the test stimulus to be filtered through the subject, and the output display context. The artworks below give some sense of the poetic/aesthetic ballpark I am aiming for, at least as far as subject matter goes.

Poetic / Aesthetic Reference

My key aesthetic references in this piece are Andy Warhol’s Disaster Paintings and Jim Campbell’s Ambiguous Icons. Though separated by time and medium, they share an air of pathos. Warhol’s Disaster Paintings show both a fascination with tragedy and equanimity in the face of it. Prompted by disasters in the news stories of the day (such as the fatal food-poisoning event in Tuna Fish Disaster), his screen-print paintings reproduce imagery of those events ad nauseam.

Jim Campbell’s Ambiguous Icon series, particularly Ambiguous Icon #5, Running / Falling, shifts focus from the staccato puncture of unexpected calamity to the inherent pathos of the human condition. Babies have to “learn to crawl before they walk.” One must “walk before you run.” With Campbell, you “run before you fall.” Campbell’s low-resolution animations are redolent in their associations but unwilling to provide specific detail of what exactly they show.

These two bodies of work share traits as kinds of second-hand representations: both are screen-printing techniques of a sort, with images extracted from their normal context and rendered through exotic means.

Warhol: Equanimity in the face of tragedy:

Jim Campbell: Pathos inherent to the human condition

  • Ambiguous Icon #5. Running / Falling. 2005. Jim Campbell.

Techniques

This project focuses on the idea of the brain as a filter—feeding specifically chosen audio-visual material to test subjects and imaging their resultant cortical activity. Nishimoto et al. (2011) demonstrated the reconstruction of natural video imagery from functional imaging. van Gerven et al. (2013) have shown reconstruction of letter imagery from functional imaging. Pasley et al. (2012) reconstructed speech from the human auditory cortex. Additionally, work has been done on retrieving face imagery from subjects (IBM patent, 2005), predicting activity associated with noun meaning (Mitchell et al., 2008), and identifying object categories (Simanova et al.) or other kinds of semantic content.

Artwork implementing any of these reconstruction schemes would require trained subjects participating voluntarily. The goal of a session would be reconstruction of the imagery and sound fed to subjects in near real time, just after the event. These reconstructions could be seen live, as an audio-visual performance, or replayed at a later date. Each audio-visual stimulus would produce one set of imaging data, determining the output reconstruction.


reconstructing natural video

Nishimoto et al. 2011

[Figure: Nishimoto et al. 2011, five-panel reconstruction comparison]

  • Reconstructing visual experiences from brain activity evoked by natural movies. Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu & Jack L. Gallant. Current Biology, published online September 22, 2011.
    • "Record brain activity while the subject watches several hours of movie trailers"
    • http://www.youtube.com/watch?v=nsjDnYxJ0bo
    • http://www.youtube.com/watch?v=KMA23JJ1M1o
    • The computational encoding models in our study provide a functional account of brain activity evoked by natural movies. It is currently unknown whether processes like dreaming and imagination are realized in the brain in a way that is functionally similar to perception.
    • Predictive models are the gold standard of computational neuroscience and are critical for the long-term advancement of brain science and medicine. To build a computational model of some part of the visual system, we treat it as a "black box" that takes visual stimuli as input and generates brain activity as output. A model of the black box can be estimated using statistical tools drawn from classical and Bayesian statistics, and from machine learning.
    • https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011
  • reconstruction from cat brain
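The “black box” approach described in the notes above can be sketched in a few lines: estimate a linear encoding model from stimulus features to voxel responses, then decode by asking which candidate stimulus best predicts the measured activity. The following is a minimal illustration with simulated data; the dimensions, noise level, and ridge penalty are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 200 training clips, 50 stimulus features
# (standing in for e.g. motion-energy filter outputs), 100 voxels.
n_train, n_feat, n_vox = 200, 50, 100

# Simulated "ground truth" encoding: each voxel responds linearly
# to the stimulus features, plus measurement noise.
W_true = rng.normal(size=(n_feat, n_vox))
X_train = rng.normal(size=(n_train, n_feat))  # training stimuli
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_vox))

# Estimate the encoding model with ridge regression
# (closed form: W = (X'X + lam*I)^-1 X'Y).
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Decode by identification: given brain activity evoked by a held-out
# stimulus, pick the candidate clip whose *predicted* activity is
# closest to the measured activity.
candidates = rng.normal(size=(20, n_feat))  # candidate clip features
true_idx = 7
y_measured = candidates[true_idx] @ W_true + 0.5 * rng.normal(size=n_vox)

predicted = candidates @ W_hat              # predicted activity per clip
errors = np.linalg.norm(predicted - y_measured, axis=1)
best = int(np.argmin(errors))
print(best)
```

In Nishimoto et al.'s actual method the features are motion-energy filter outputs and the final video is a Bayesian average over top-ranked clips from a large natural-movie prior; the identification step above is only the simplest analogue of that idea.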

What Does the Cat See?

  • Yang Dan. UC Berkeley. 1999.

http://www.youtube.com/watch?v=jofNR_WkoCE

reconstructing letters


reconstructing speech

[Figure S7; audio: File:Audio File S1.wav]

retrieving face images

predicting activity associated with noun meaning

[Figure: predicted activation for “celery” (Mitchell et al. 2008)]

identifying object categories

[Figure 2, journal.pone.0014465 (PLoS ONE)]

Conclusions and Future Directions

This piece has a few obvious limitations but also many strengths pointing to future directions of work. In aestheticizing “the look” of the Nishimoto video reconstructions or “the sound” of the Pasley audio reconstructions, we are dwelling on artifacts of the data collection process/training dataset/reconstruction algorithm more than anything specific to the mind of the subject. Ideally, we could image some higher-level responses in the viewing subject (beyond primary visual areas). I’m sure many neuroscientists would also like to do this. Decoding higher-level processing would give us more meat for the project, more of a cognitive response as opposed to an evoked perceptual one.

Another shortcoming of the project is its essential theatricality. The videos are produced from averaged, highly correlated 10-second clips of source material. Similar aesthetic effects to the ghostlike compositing could be achieved with simple computer programs and none of the technical infrastructure. So, in essence, the project relies on the presence of the MRI magnet, functional imaging, and the gesture of “peering inside the brain” as central vehicles for meaning.
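To make the point about theatricality concrete, the ghostlike compositing can be approximated by a weighted average of candidate frames, a few lines of array code. This sketch uses random arrays as stand-in grayscale frames and invented match scores; every name and value here is an illustrative assumption, not part of any published pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: 10 candidate "clips", each reduced to one 64x64
# grayscale frame, plus a hypothetical match score per clip.
n_clips, h, w = 10, 64, 64
frames = rng.uniform(0.0, 1.0, size=(n_clips, h, w))
scores = rng.uniform(0.0, 1.0, size=n_clips)

# Normalize scores to weights and blend: averaging many partially
# matching frames produces the soft, ghostlike look of the averaged
# reconstructions.
weights = scores / scores.sum()
composite = np.tensordot(weights, frames, axes=1)  # weighted mean frame

print(composite.shape)  # (64, 64)
```

The point stands: the blend itself is trivial; what the artwork actually trades on is where the weights come from.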

There is much positive potential in this project. Each resulting audio-visual reconstruction is uniquely tied to the person of the test subject and the particulars of the session, stimulus material, and training dataset that produced it. There is great opportunity for dissonance between source material and filtered signal: rather than settling for samples extracted from Apple movie trailers and YouTube (Nishimoto et al.), you could reconstruct footage of car-crash disasters from a training database of contrasting, bucolic scenery. In the Nishimoto videos it is possible to see traces of the original material poking through; this would be true in my project as well.

Future directions to explore include the modeling and reconstruction of perceptions by non-human subjects (simulating what your dog sees), implementing audio reconstruction with non-invasive methods (i.e., not ECoG), and attempting to evoke and reconstruct responses in other sensory modalities. Additionally, with improvements in the portability of MRI technology or the development of other sensing modalities (e.g., EEG), there could be new opportunities for reconstructing internal sensory experience.