<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://wiki.roberttwomey.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=GregoryParsons</id>
		<title>Robert-Depot - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://wiki.roberttwomey.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=GregoryParsons"/>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/Special:Contributions/GregoryParsons"/>
		<updated>2026-05-07T13:34:00Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.1</generator>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=FRSynth_-_Emilio_Marcelino&amp;diff=3990</id>
		<title>FRSynth - Emilio Marcelino</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=FRSynth_-_Emilio_Marcelino&amp;diff=3990"/>
				<updated>2010-06-04T00:43:37Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
My goal in starting this project was to combine several personal interests, particularly color, sound, and motion, into one piece.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The user interacts with my piece by sitting or standing in front of the webcam; the program tracks the movement of their face in place of the mouse. Depending on the location of the face on the screen, the frequency of the sound wave is altered, making a facial-recognition synthesizer.&lt;br /&gt;
&lt;br /&gt;
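The face-to-sound mapping described above can be sketched in plain Python (a hypothetical stand-in for Processing's map() and Minim's oscillator; lerp_map and the 480 px frame height are illustrative assumptions, not part of the original sketch):

```python
def lerp_map(value, in_lo, in_hi, out_lo, out_hi):
    # Linear re-mapping, analogous to Processing's map()
    t = (value - in_lo) / float(in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# A face high in a 480 px frame gives a high pitch; low gives a low pitch,
# mirroring map(posY, 0, height, 1500, 60) in the full code below
freq_top = lerp_map(0, 0, 480, 1500, 60)       # 1500.0 Hz
freq_mid = lerp_map(240, 0, 480, 1500, 60)     # 780.0 Hz
# Horizontal position pans the oscillator across the stereo field
pan_left = lerp_map(0, 0, 640, -1, 1)          # -1.0
```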
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The project allows users to test their creativity both visually and auditorily in a live, performance-style setting.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=SClGZ5a6-uI Final Project Video]&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Final Code&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/Z6Nqa.png&lt;br /&gt;
http://imgur.com/ueyfN.png&lt;br /&gt;
http://imgur.com/x0UfE.png&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
AudioPlayer player;&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
AudioOutput out;&lt;br /&gt;
SawWave sine;&lt;br /&gt;
FFT fft;&lt;br /&gt;
&lt;br /&gt;
float[] xpos = new float[50];&lt;br /&gt;
float[] ypos = new float[50];&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv;&lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0;&lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
float posX = 0;&lt;br /&gt;
float posY = 0;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
 size(640, 480);&lt;br /&gt;
&lt;br /&gt;
 //noCursor();&lt;br /&gt;
 minim = new Minim(this);&lt;br /&gt;
 out = minim.getLineOut(Minim.STEREO);&lt;br /&gt;
 // create a saw wave oscillator, set to 440 Hz, at 0.1 amplitude, sample rate from line out&lt;br /&gt;
 sine = new SawWave(440, 0.1, out.sampleRate());&lt;br /&gt;
 // set the portamento speed on the oscillator to 200 milliseconds&lt;br /&gt;
 sine.portamento(200);&lt;br /&gt;
 // add the oscillator to the line out&lt;br /&gt;
 out.addSignal(sine);&lt;br /&gt;
 minim.debugOn();&lt;br /&gt;
&lt;br /&gt;
 player = minim.loadFile(&amp;quot;beat.mp3&amp;quot;, 2048);&lt;br /&gt;
&lt;br /&gt;
 for (int i = 0; i &amp;lt; xpos.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
   xpos[i] = 0;&lt;br /&gt;
   ypos[i] = 0;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // get a line in from Minim, default bit depth is 16&lt;br /&gt;
 in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 opencv = new OpenCV( this );&lt;br /&gt;
 opencv.capture( width, height );                   // open video stream&lt;br /&gt;
 opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
 player.play();&lt;br /&gt;
 // grab a new frame&lt;br /&gt;
 // and convert to gray&lt;br /&gt;
 opencv.read();&lt;br /&gt;
 opencv.convert( GRAY );&lt;br /&gt;
 opencv.contrast( contrast_value );&lt;br /&gt;
 opencv.brightness( brightness_value );&lt;br /&gt;
 opencv.flip( OpenCV.FLIP_HORIZONTAL );&lt;br /&gt;
&lt;br /&gt;
 // perform face detection&lt;br /&gt;
 java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
   posX = faces[i].x;&lt;br /&gt;
   posY = faces[i].y;&lt;br /&gt;
   if ((posX != 0) &amp;amp;&amp;amp; (posY != 0))&lt;br /&gt;
   {&lt;br /&gt;
     drawCircles(posX, posY);&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
 // with portamento on the frequency will change smoothly&lt;br /&gt;
 float freq = map(posY, 0, height, 1500, 60);&lt;br /&gt;
 sine.setFreq(freq*.5);&lt;br /&gt;
 // pan always changes smoothly to avoid crackles getting into the signal&lt;br /&gt;
 // note that we could call setPan on out, instead of on sine&lt;br /&gt;
 // this would sound the same, but the waveforms in out would not reflect the panning&lt;br /&gt;
 float pan = map(posX, 0, width, -1, 1);&lt;br /&gt;
 sine.setPan(pan);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float posX, float posY)&lt;br /&gt;
{&lt;br /&gt;
 background(0);&lt;br /&gt;
&lt;br /&gt;
 for (int i = 0; i &amp;lt;xpos.length - 1; i++)&lt;br /&gt;
 {&lt;br /&gt;
   xpos[i] = xpos[i + 1];&lt;br /&gt;
   ypos[i] = ypos[i + 1];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
 xpos [xpos.length - 1] = posX;&lt;br /&gt;
 ypos [ypos.length - 1] = posY;&lt;br /&gt;
&lt;br /&gt;
 for (int i = 0; i &amp;lt; xpos.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
   noStroke();&lt;br /&gt;
   fill(random(255),random(255),random(255) - i * 5);&lt;br /&gt;
   ellipse(xpos[i],ypos[i],i,i);&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
 // always close Minim audio classes when you are done with them&lt;br /&gt;
 in.close();&lt;br /&gt;
 out.close();&lt;br /&gt;
 minim.stop();&lt;br /&gt;
&lt;br /&gt;
 super.stop();&lt;br /&gt;
}&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3988</id>
		<title>Happy Days - Gregory Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3988"/>
				<updated>2010-06-03T23:15:48Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
We have a growing shortage of fresh water in the world, and as populations rise and sources become more contaminated, the problem accelerates. We often do not consider our day-to-day effect on the planet, and we often ignore obvious ways to reduce our footprint. I want to give viewers a means to understand this relationship and begin to think about what they could do to counter their effect. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
I want the viewer of my project to question their existence within the imagery displayed on the screen. The longer they &amp;#039;interact&amp;#039; with the project, the more of an effect they have on it. Using face tracking as the main stimulus for change, the project reacts based on the length of time that the user&amp;#039;s face is tracked by the camera. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The project&amp;#039;s response changes with the amount of time the viewer spends in front of the camera: for instance, the sun that represents the viewer drops water bottles onto the landscape, and the longer they remain in front of the camera, the more &amp;quot;smog&amp;quot; becomes visible on the screen.&lt;br /&gt;
&lt;br /&gt;
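The time-based accumulation described above can be sketched as a per-frame update (a simplified Python stand-in for the smogOpacity logic in the full code below; step_smog and the min/max clamps are assumptions that approximate the sketch's threshold checks):

```python
def step_smog(opacity, face_detected):
    # While a face is tracked, opacity climbs by 2 per frame (clamped
    # near 350); once the viewer leaves, it decays by 1 per frame.
    if face_detected:
        return min(opacity + 2, 350)
    return max(opacity - 1, -150)

op = 0
for _ in range(90):        # about three seconds of tracking at 30 fps
    op = step_smog(op, True)
# op is now 180, past the threshold where the star sprite turns sad
```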
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/notracking.png&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/tracking.png&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/smog.png&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Final Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
I feel the project was successful and turned out well polished. The user interaction was smooth, and the project held up well over long periods of running. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Final Video Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=TVhZmXEU4P0 Video Documentation]&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Final Code&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Final Project; Greg Parsons &lt;br /&gt;
 * VIS145B&lt;br /&gt;
 * &lt;br /&gt;
 * Face tracking from OpenCV, with actions modified for my needs; the rest is original programming&lt;br /&gt;
 * &lt;br /&gt;
 * The project is designed to prompt viewers to think about their impact on the environment. &lt;br /&gt;
 * &lt;br /&gt;
 **/&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import oscP5.*;&lt;br /&gt;
import netP5.*;&lt;br /&gt;
&lt;br /&gt;
//declare a new object of opencv&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
//wiimote&lt;br /&gt;
float wiimote1Pitch;&lt;br /&gt;
float wiimote1Roll;&lt;br /&gt;
float wiimote1Yaw;&lt;br /&gt;
float wiimote1Accel;&lt;br /&gt;
int wiimote1ButtonA;&lt;br /&gt;
&lt;br /&gt;
//osc and wiimote&lt;br /&gt;
OscP5 oscP5;&lt;br /&gt;
NetAddress myRemoteLocation;&lt;br /&gt;
&lt;br /&gt;
//image values&lt;br /&gt;
PImage bg;&lt;br /&gt;
PImage cloud1;&lt;br /&gt;
PImage cloud2;&lt;br /&gt;
PImage trash;&lt;br /&gt;
PImage waterbottle;&lt;br /&gt;
PImage star;&lt;br /&gt;
PImage starsad;&lt;br /&gt;
PImage starquesy;&lt;br /&gt;
PImage stardead;&lt;br /&gt;
&lt;br /&gt;
//smog&lt;br /&gt;
PGraphics smog;&lt;br /&gt;
&lt;br /&gt;
//audio&lt;br /&gt;
AudioPlayer player;&lt;br /&gt;
Minim minim;&lt;br /&gt;
&lt;br /&gt;
//organic values&lt;br /&gt;
int c1 = -120;&lt;br /&gt;
int c2 = 550;&lt;br /&gt;
int h1 = 50;&lt;br /&gt;
int h2 = 100;&lt;br /&gt;
float posX = 0;&lt;br /&gt;
float posY = 0;&lt;br /&gt;
int numWb = 0;&lt;br /&gt;
int counter = 0;&lt;br /&gt;
int splashCounter = 0;&lt;br /&gt;
int wbDissapearCounter = 0;&lt;br /&gt;
int wbDelayCounter = 0;&lt;br /&gt;
int smogOpacity = 0;&lt;br /&gt;
int wiiACounter = 0;&lt;br /&gt;
int creationCounter = 0;&lt;br /&gt;
boolean faceDetected = false;&lt;br /&gt;
int wbcount=0;&lt;br /&gt;
&lt;br /&gt;
//creating waterbottle objects&lt;br /&gt;
WaterBottle[] waterBottle = new WaterBottle[200];&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(1024, 768, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
&lt;br /&gt;
  //declaring images&lt;br /&gt;
  bg = loadImage(&amp;quot;background2.png&amp;quot;);&lt;br /&gt;
  cloud1 = loadImage(&amp;quot;cloud1.png&amp;quot;);&lt;br /&gt;
  cloud2 = loadImage(&amp;quot;cloud2.png&amp;quot;);&lt;br /&gt;
  waterbottle = loadImage(&amp;quot;waterbottle.png&amp;quot;);&lt;br /&gt;
  star = loadImage(&amp;quot;star.png&amp;quot;);&lt;br /&gt;
  starsad = loadImage(&amp;quot;starsad.png&amp;quot;);&lt;br /&gt;
  starquesy = loadImage(&amp;quot;starquesy.png&amp;quot;);&lt;br /&gt;
  stardead = loadImage(&amp;quot;stardead.png&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
  //wiimote&lt;br /&gt;
  oscP5 = new OscP5(this, 12000);&lt;br /&gt;
  myRemoteLocation = new NetAddress(&amp;quot;localhost&amp;quot;, 12000);&lt;br /&gt;
&lt;br /&gt;
  //audio&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  // load a file, give the AudioPlayer buffers that are 2048 samples long&lt;br /&gt;
  player = minim.loadFile(&amp;quot;Cartoon Accent 28.mp3&amp;quot;, 2048);&lt;br /&gt;
&lt;br /&gt;
  //opencv logic&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width/2, height/2 );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  //creating smog graphic element&lt;br /&gt;
  smog = createGraphics(width, height, P3D);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{ &lt;br /&gt;
  &lt;br /&gt;
  wbcount++;&lt;br /&gt;
  image(bg, 0, 0); &lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.flip( OpenCV.FLIP_HORIZONTAL ); &lt;br /&gt;
&lt;br /&gt;
  // perform face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  float posX = 0;&lt;br /&gt;
  float posY = 0;&lt;br /&gt;
&lt;br /&gt;
  //assign the position of the detected face to usable screen coordinates&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) &lt;br /&gt;
  {&lt;br /&gt;
    posX = faces[i].x*2; &lt;br /&gt;
    posY = faces[i].y*2;&lt;br /&gt;
    &lt;br /&gt;
    //wrap the water-bottle index after 200 bottles, and spawn a new bottle at (posX, posY) every fourth frame&lt;br /&gt;
    if (numWb &amp;gt; 199)&lt;br /&gt;
    {&lt;br /&gt;
      numWb = 0;&lt;br /&gt;
    } &lt;br /&gt;
    if(wbcount&amp;gt;3) {&lt;br /&gt;
      waterBottle[numWb] = new WaterBottle(posX, posY); &lt;br /&gt;
      numWb++;  &lt;br /&gt;
      wbcount=0;&lt;br /&gt;
    }  &lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
  System.out.println(numWb);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  //debugging code for checking position of the detected face (if it is acting up and not throwing out 0,0 positions)&lt;br /&gt;
  //System.out.println(&amp;quot;posX = &amp;quot; + posX + &amp;quot; posY = &amp;quot; + posY);&lt;br /&gt;
  //System.out.println(numWb);&lt;br /&gt;
&lt;br /&gt;
  if (counter &amp;gt; 1)&lt;br /&gt;
  {&lt;br /&gt;
    for (int i = 0; i &amp;lt; numWb; i++)&lt;br /&gt;
    {&lt;br /&gt;
      waterBottle[i].displayWaterBottle();&lt;br /&gt;
    } &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  if (counter &amp;gt; 1)&lt;br /&gt;
  {&lt;br /&gt;
    for (int i = 0; i &amp;lt; numWb; i++)&lt;br /&gt;
    {&lt;br /&gt;
      waterBottle[i].update();&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //cloud movement and new position for clouds after full cycle&lt;br /&gt;
  if (c1 &amp;lt; 1300) {&lt;br /&gt;
    c1++;&lt;br /&gt;
  }&lt;br /&gt;
  else {&lt;br /&gt;
    c1 = -400;&lt;br /&gt;
    h1 = round(random(400));&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
  if (c2 &amp;lt; 1300) {&lt;br /&gt;
    c2++;&lt;br /&gt;
  }&lt;br /&gt;
  else {&lt;br /&gt;
    c2 = -400;&lt;br /&gt;
    h2 = round(random(400));&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
  //drawing the clouds on the screen with moving variables&lt;br /&gt;
  image(cloud1, c1, h1);&lt;br /&gt;
  image(cloud2, c2, h2);&lt;br /&gt;
&lt;br /&gt;
  //logic for face either detected or not&lt;br /&gt;
  if ((posX != 0) &amp;amp;&amp;amp; (posY != 0)) {&lt;br /&gt;
    faceDetected = true;&lt;br /&gt;
  }&lt;br /&gt;
  else &lt;br /&gt;
  {&lt;br /&gt;
    faceDetected = false;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //drawing the smog&lt;br /&gt;
  smog.beginDraw();&lt;br /&gt;
&lt;br /&gt;
  if (smogOpacity &amp;lt; 252)&lt;br /&gt;
  {&lt;br /&gt;
    smog.background(139, 131, 134, smogOpacity);&lt;br /&gt;
  }&lt;br /&gt;
  else if (smogOpacity &amp;gt; 252)&lt;br /&gt;
  {&lt;br /&gt;
    smog.background(255, 36, 0, smogOpacity);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  smog.endDraw();&lt;br /&gt;
  image(smog, 0, 0); &lt;br /&gt;
&lt;br /&gt;
  //actions for face detected or not detected&lt;br /&gt;
  if (faceDetected == true)&lt;br /&gt;
  {&lt;br /&gt;
&lt;br /&gt;
    if (smogOpacity &amp;lt; 120)&lt;br /&gt;
    {&lt;br /&gt;
      image(star, posX, posY, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
    else if ((smogOpacity &amp;gt; 120) &amp;amp;&amp;amp; (smogOpacity &amp;lt; 200))&lt;br /&gt;
    {&lt;br /&gt;
      image(starsad, posX, posY, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
    else if ((smogOpacity &amp;gt; 200) &amp;amp;&amp;amp; (smogOpacity &amp;lt; 252))&lt;br /&gt;
    {&lt;br /&gt;
      image(starquesy, posX, posY, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
    else if (smogOpacity &amp;gt; 255) &lt;br /&gt;
    {&lt;br /&gt;
      image(stardead, 500, 500, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if (smogOpacity &amp;lt; 350)&lt;br /&gt;
    {&lt;br /&gt;
      smogOpacity = smogOpacity+2; &lt;br /&gt;
    } &lt;br /&gt;
&lt;br /&gt;
    if (smogOpacity &amp;lt; 255)&lt;br /&gt;
    {&lt;br /&gt;
      if (splashCounter == 1) &lt;br /&gt;
      {&lt;br /&gt;
        player.play();&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      if (splashCounter == 10)&lt;br /&gt;
      {&lt;br /&gt;
        splashCounter = 0;&lt;br /&gt;
        player.rewind();&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      //System.out.println(splashCounter);&lt;br /&gt;
&lt;br /&gt;
      splashCounter++;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  else&lt;br /&gt;
  {&lt;br /&gt;
    if (smogOpacity &amp;gt; -150)&lt;br /&gt;
    {&lt;br /&gt;
      smogOpacity--;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if ((numWb &amp;gt; 1) &amp;amp;&amp;amp; (wbDissapearCounter == 1))&lt;br /&gt;
    {&lt;br /&gt;
      numWb--;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if (wbDissapearCounter &amp;gt; 3)&lt;br /&gt;
    {&lt;br /&gt;
      wbDissapearCounter = 0;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    wbDissapearCounter++;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //debug&lt;br /&gt;
  //System.out.println(smogOpacity);&lt;br /&gt;
&lt;br /&gt;
  //System.out.println(faceDetected);&lt;br /&gt;
  counter++;&lt;br /&gt;
&lt;br /&gt;
  if (wiimote1ButtonA == 1) &lt;br /&gt;
  {&lt;br /&gt;
    numWb = numWb/2;&lt;br /&gt;
    smogOpacity = smogOpacity-6;&lt;br /&gt;
    wiimote1ButtonA = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//waterbottle class&lt;br /&gt;
class WaterBottle {&lt;br /&gt;
  float wbPosX, wbPosY;&lt;br /&gt;
&lt;br /&gt;
  WaterBottle(float posX, float posY) {&lt;br /&gt;
    wbPosX = posX;&lt;br /&gt;
    wbPosY = posY;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //updates the position of the Y value to cause the bottles to fall&lt;br /&gt;
  void update() { &lt;br /&gt;
    if (wbPosY &amp;lt; 468)&lt;br /&gt;
    {&lt;br /&gt;
      wbPosY = wbPosY + 10; &lt;br /&gt;
    }&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
  //creates the bottle images&lt;br /&gt;
  void displayWaterBottle() {&lt;br /&gt;
      image(waterbottle, wbPosX, wbPosY);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//wiimote&lt;br /&gt;
void oscEvent(OscMessage theOscMessage) {&lt;br /&gt;
  if(theOscMessage.checkAddrPattern(&amp;quot;/wii/1/accel/pry&amp;quot;)==true){&lt;br /&gt;
    wiimote1Pitch = theOscMessage.get(0).floatValue();&lt;br /&gt;
    wiimote1Roll = theOscMessage.get(1).floatValue();&lt;br /&gt;
    wiimote1Yaw = theOscMessage.get(2).floatValue();&lt;br /&gt;
    wiimote1Accel = theOscMessage.get(3).floatValue();&lt;br /&gt;
  }&lt;br /&gt;
  if(theOscMessage.checkAddrPattern(&amp;quot;/wii/1/button/A&amp;quot;)==true){&lt;br /&gt;
    wiimote1ButtonA = theOscMessage.get(0).intValue();&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//opencv actions at the end of the runtime&lt;br /&gt;
public void stop() {&lt;br /&gt;
  opencv.stop();&lt;br /&gt;
  player.close();&lt;br /&gt;
  minim.stop(); &lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3986</id>
		<title>Happy Days - Gregory Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3986"/>
				<updated>2010-06-03T23:03:07Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
We have a growing shortage of fresh water in the world, and as populations rise and sources become more contaminated, the problem accelerates. We often do not consider our day-to-day effect on the planet, and we often ignore obvious ways to reduce our footprint. I want to give viewers a means to understand this relationship and begin to think about what they could do to counter their effect. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
I want the viewer of my project to question their existence within the imagery displayed on the screen. The longer they &amp;#039;interact&amp;#039; with the project, the more of an effect they have on it. Using face tracking as the main stimulus for change, the project reacts based on the length of time that the user&amp;#039;s face is tracked by the camera. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The project&amp;#039;s response changes with the amount of time the viewer spends in front of the camera: for instance, the sun that represents the viewer drops water bottles onto the landscape, and the longer they remain in front of the camera, the more &amp;quot;smog&amp;quot; becomes visible on the screen.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/notracking.png&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/tracking.png&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/smog.png&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Final Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Final Code&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Final Project; Greg Parsons &lt;br /&gt;
 * VIS145B&lt;br /&gt;
 * &lt;br /&gt;
 * Face tracking from OpenCV, with actions modified for my needs; the rest is original programming&lt;br /&gt;
 * &lt;br /&gt;
 * The project is designed to prompt viewers to think about their impact on the environment. &lt;br /&gt;
 * &lt;br /&gt;
 **/&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import oscP5.*;&lt;br /&gt;
import netP5.*;&lt;br /&gt;
&lt;br /&gt;
//declare a new object of opencv&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
//wiimote&lt;br /&gt;
float wiimote1Pitch;&lt;br /&gt;
float wiimote1Roll;&lt;br /&gt;
float wiimote1Yaw;&lt;br /&gt;
float wiimote1Accel;&lt;br /&gt;
int wiimote1ButtonA;&lt;br /&gt;
&lt;br /&gt;
//osc and wiimote&lt;br /&gt;
OscP5 oscP5;&lt;br /&gt;
NetAddress myRemoteLocation;&lt;br /&gt;
&lt;br /&gt;
//image values&lt;br /&gt;
PImage bg;&lt;br /&gt;
PImage cloud1;&lt;br /&gt;
PImage cloud2;&lt;br /&gt;
PImage trash;&lt;br /&gt;
PImage waterbottle;&lt;br /&gt;
PImage star;&lt;br /&gt;
PImage starsad;&lt;br /&gt;
PImage starquesy;&lt;br /&gt;
PImage stardead;&lt;br /&gt;
&lt;br /&gt;
//smog&lt;br /&gt;
PGraphics smog;&lt;br /&gt;
&lt;br /&gt;
//audio&lt;br /&gt;
AudioPlayer player;&lt;br /&gt;
Minim minim;&lt;br /&gt;
&lt;br /&gt;
//organic values&lt;br /&gt;
int c1 = -120;&lt;br /&gt;
int c2 = 550;&lt;br /&gt;
int h1 = 50;&lt;br /&gt;
int h2 = 100;&lt;br /&gt;
float posX = 0;&lt;br /&gt;
float posY = 0;&lt;br /&gt;
int numWb = 0;&lt;br /&gt;
int counter = 0;&lt;br /&gt;
int splashCounter = 0;&lt;br /&gt;
int wbDissapearCounter = 0;&lt;br /&gt;
int wbDelayCounter = 0;&lt;br /&gt;
int smogOpacity = 0;&lt;br /&gt;
int wiiACounter = 0;&lt;br /&gt;
int creationCounter = 0;&lt;br /&gt;
boolean faceDetected = false;&lt;br /&gt;
int wbcount=0;&lt;br /&gt;
&lt;br /&gt;
//creating waterbottle objects&lt;br /&gt;
WaterBottle[] waterBottle = new WaterBottle[200];&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(1024, 768, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
&lt;br /&gt;
  //declaring images&lt;br /&gt;
  bg = loadImage(&amp;quot;background2.png&amp;quot;);&lt;br /&gt;
  cloud1 = loadImage(&amp;quot;cloud1.png&amp;quot;);&lt;br /&gt;
  cloud2 = loadImage(&amp;quot;cloud2.png&amp;quot;);&lt;br /&gt;
  waterbottle = loadImage(&amp;quot;waterbottle.png&amp;quot;);&lt;br /&gt;
  star = loadImage(&amp;quot;star.png&amp;quot;);&lt;br /&gt;
  starsad = loadImage(&amp;quot;starsad.png&amp;quot;);&lt;br /&gt;
  starquesy = loadImage(&amp;quot;starquesy.png&amp;quot;);&lt;br /&gt;
  stardead = loadImage(&amp;quot;stardead.png&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
  //wiimote&lt;br /&gt;
  oscP5 = new OscP5(this, 12000);&lt;br /&gt;
  myRemoteLocation = new NetAddress(&amp;quot;localhost&amp;quot;, 12000);&lt;br /&gt;
&lt;br /&gt;
  //audio&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  // load a file, give the AudioPlayer buffers that are 2048 samples long&lt;br /&gt;
  player = minim.loadFile(&amp;quot;Cartoon Accent 28.mp3&amp;quot;, 2048);&lt;br /&gt;
&lt;br /&gt;
  //opencv logic&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width/2, height/2 );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  //creating smog graphic element&lt;br /&gt;
  smog = createGraphics(width, height, P3D);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{ &lt;br /&gt;
  &lt;br /&gt;
  wbcount++;&lt;br /&gt;
  image(bg, 0, 0); &lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.flip( OpenCV.FLIP_HORIZONTAL ); &lt;br /&gt;
&lt;br /&gt;
  // perform face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  float posX = 0;&lt;br /&gt;
  float posY = 0;&lt;br /&gt;
&lt;br /&gt;
  //assign the position of the detected face to usable screen coordinates&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) &lt;br /&gt;
  {&lt;br /&gt;
    posX = faces[i].x*2; &lt;br /&gt;
    posY = faces[i].y*2;&lt;br /&gt;
    &lt;br /&gt;
    //wrap the water-bottle index after 200 bottles, and spawn a new bottle at (posX, posY) every fourth frame&lt;br /&gt;
    if (numWb &amp;gt; 199)&lt;br /&gt;
    {&lt;br /&gt;
      numWb = 0;&lt;br /&gt;
    } &lt;br /&gt;
    if(wbcount&amp;gt;3) {&lt;br /&gt;
      waterBottle[numWb] = new WaterBottle(posX, posY); &lt;br /&gt;
      numWb++;  &lt;br /&gt;
      wbcount=0;&lt;br /&gt;
    }  &lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
  System.out.println(numWb);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  //debugging code for checking position of the detected face (if it is acting up and not throwing out 0,0 positions)&lt;br /&gt;
  //System.out.println(&amp;quot;posX = &amp;quot; + posX + &amp;quot; posY = &amp;quot; + posY);&lt;br /&gt;
  //System.out.println(numWb);&lt;br /&gt;
&lt;br /&gt;
  if (counter &amp;gt; 1)&lt;br /&gt;
  {&lt;br /&gt;
    for (int i = 0; i &amp;lt; numWb; i++)&lt;br /&gt;
    {&lt;br /&gt;
      waterBottle[i].displayWaterBottle();&lt;br /&gt;
    } &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  if (counter &amp;gt; 1)&lt;br /&gt;
  {&lt;br /&gt;
    for (int i = 0; i &amp;lt; numWb; i++)&lt;br /&gt;
    {&lt;br /&gt;
      waterBottle[i].update();&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //cloud movement and new position for clouds after full cycle&lt;br /&gt;
  if (c1 &amp;lt; 1300) {&lt;br /&gt;
    c1++;&lt;br /&gt;
  }&lt;br /&gt;
  else {&lt;br /&gt;
    c1 = -400;&lt;br /&gt;
    h1 = round(random(400));&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
  if (c2 &amp;lt; 1300) {&lt;br /&gt;
    c2++;&lt;br /&gt;
  }&lt;br /&gt;
  else {&lt;br /&gt;
    c2 = -400;&lt;br /&gt;
    h2 = round(random(400));&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
  //drawing the clouds on the screen with moving variables&lt;br /&gt;
  image(cloud1, c1, h1);&lt;br /&gt;
  image(cloud2, c2, h2);&lt;br /&gt;
&lt;br /&gt;
  //logic for face either detected or not&lt;br /&gt;
  if ((posX != 0) &amp;amp;&amp;amp; (posY != 0)) {&lt;br /&gt;
    faceDetected = true;&lt;br /&gt;
  }&lt;br /&gt;
  else &lt;br /&gt;
  {&lt;br /&gt;
    faceDetected = false;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //drawing the smog&lt;br /&gt;
  smog.beginDraw();&lt;br /&gt;
&lt;br /&gt;
  if (smogOpacity &amp;lt; 252)&lt;br /&gt;
  {&lt;br /&gt;
    smog.background(139, 131, 134, smogOpacity);&lt;br /&gt;
  }&lt;br /&gt;
  else if (smogOpacity &amp;gt; 252)&lt;br /&gt;
  {&lt;br /&gt;
    smog.background(255, 36, 0, smogOpacity);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  smog.endDraw();&lt;br /&gt;
  image(smog, 0, 0); &lt;br /&gt;
&lt;br /&gt;
  //actions for face detected or not detected&lt;br /&gt;
  if (faceDetected == true)&lt;br /&gt;
  {&lt;br /&gt;
&lt;br /&gt;
    if (smogOpacity &amp;lt; 120)&lt;br /&gt;
    {&lt;br /&gt;
      image(star, posX, posY, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
    else if ((smogOpacity &amp;gt; 120) &amp;amp;&amp;amp; (smogOpacity &amp;lt; 200))&lt;br /&gt;
    {&lt;br /&gt;
      image(starsad, posX, posY, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
    else if ((smogOpacity &amp;gt; 200) &amp;amp;&amp;amp; (smogOpacity &amp;lt; 252))&lt;br /&gt;
    {&lt;br /&gt;
      image(starquesy, posX, posY, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
    else if (smogOpacity &amp;gt; 255) &lt;br /&gt;
    {&lt;br /&gt;
      image(stardead, 500, 500, star.width/3, star.height/3);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if (smogOpacity &amp;lt; 350)&lt;br /&gt;
    {&lt;br /&gt;
      smogOpacity = smogOpacity+2; &lt;br /&gt;
    } &lt;br /&gt;
&lt;br /&gt;
    if (smogOpacity &amp;lt; 255)&lt;br /&gt;
    {&lt;br /&gt;
      if (splashCounter == 1) &lt;br /&gt;
      {&lt;br /&gt;
        player.play();&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      if (splashCounter == 10)&lt;br /&gt;
      {&lt;br /&gt;
        splashCounter = 0;&lt;br /&gt;
        player.rewind();&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      //System.out.println(splashCounter);&lt;br /&gt;
&lt;br /&gt;
      splashCounter++;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  else&lt;br /&gt;
  {&lt;br /&gt;
    if (smogOpacity &amp;gt; -150)&lt;br /&gt;
    {&lt;br /&gt;
      smogOpacity--;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if ((numWb &amp;gt; 1) &amp;amp;&amp;amp; (wbDissapearCounter == 1))&lt;br /&gt;
    {&lt;br /&gt;
      numWb--;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if (wbDissapearCounter &amp;gt; 3)&lt;br /&gt;
    {&lt;br /&gt;
      wbDissapearCounter = 0;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    wbDissapearCounter++;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //debug&lt;br /&gt;
  //System.out.println(smogOpacity);&lt;br /&gt;
&lt;br /&gt;
  //System.out.println(faceDetected);&lt;br /&gt;
  counter++;&lt;br /&gt;
&lt;br /&gt;
  if (wiimote1ButtonA == 1) &lt;br /&gt;
  {&lt;br /&gt;
    numWb = numWb/2;&lt;br /&gt;
    smogOpacity = smogOpacity-6;&lt;br /&gt;
    wiimote1ButtonA = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//waterbottle class&lt;br /&gt;
class WaterBottle {&lt;br /&gt;
  float wbPosX, wbPosY;&lt;br /&gt;
&lt;br /&gt;
  WaterBottle(float posX, float posY) {&lt;br /&gt;
    wbPosX = posX;&lt;br /&gt;
    wbPosY = posY;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  //updates the position of the Y value to cause the bottles to fall&lt;br /&gt;
  void update() { &lt;br /&gt;
    if (wbPosY &amp;lt; 468)&lt;br /&gt;
    {&lt;br /&gt;
      wbPosY = wbPosY + 10; &lt;br /&gt;
    }&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
  //creates the bottle images&lt;br /&gt;
  void displayWaterBottle() {&lt;br /&gt;
      image(waterbottle, wbPosX, wbPosY);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//wiimote&lt;br /&gt;
void oscEvent(OscMessage theOscMessage) {&lt;br /&gt;
  if(theOscMessage.checkAddrPattern(&amp;quot;/wii/1/accel/pry&amp;quot;)==true){&lt;br /&gt;
    wiimote1Pitch = theOscMessage.get(0).floatValue();&lt;br /&gt;
    wiimote1Roll = theOscMessage.get(1).floatValue();&lt;br /&gt;
    wiimote1Yaw = theOscMessage.get(2).floatValue();&lt;br /&gt;
    wiimote1Accel = theOscMessage.get(3).floatValue();&lt;br /&gt;
  }&lt;br /&gt;
  if(theOscMessage.checkAddrPattern(&amp;quot;/wii/1/button/A&amp;quot;)==true){&lt;br /&gt;
    wiimote1ButtonA = theOscMessage.get(0).intValue();&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//opencv actions at the end of the runtime&lt;br /&gt;
public void stop() {&lt;br /&gt;
  opencv.stop();&lt;br /&gt;
  player.close();&lt;br /&gt;
  minim.stop(); &lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3959</id>
		<title>Happy Days - Gregory Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3959"/>
				<updated>2010-05-30T23:43:40Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
We have a growing shortage of fresh water in the world, and as populations rise and sources become more contaminated the problem accelerates. We often fail to consider our day-to-day effect on the planet, and ignore obvious ways to reduce our footprint. I want to provide a means for a viewer to understand this relationship and begin to think about what they could do to counter their effect. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
I want the viewer of my project to question their existence within the imagery displayed on the screen. The longer they &amp;#039;interact&amp;#039; with the project, the more of an effect they will have on it. Using face tracking as the main stimulus for change, the project will react based on the length of time that the user&amp;#039;s face is tracked by the camera. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The effect on the project will change with the amount of time the viewer is in front of the camera: the sun that represents the viewer will drop water bottles onto the landscape, and the longer they are in front of the camera the more &amp;quot;smog&amp;quot; will be visible on the screen.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/notracking.png&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/tracking.png&lt;br /&gt;
http://acsweb.ucsd.edu/~gparsons/smog.png&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3958</id>
		<title>Happy Days - Gregory Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3958"/>
				<updated>2010-05-30T23:42:04Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
We have a growing shortage of fresh water in the world, and as populations rise and sources become more contaminated the problem accelerates. We often fail to consider our day-to-day effect on the planet, and ignore obvious ways to reduce our footprint. I want to provide a means for a viewer to understand this relationship and begin to think about what they could do to counter their effect. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
I want the viewer of my project to question their existence within the imagery displayed on the screen. The longer they &amp;#039;interact&amp;#039; with the project, the more of an effect they will have on it. Using face tracking as the main stimulus for change, the project will react based on the length of time that the user&amp;#039;s face is tracked by the camera. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The effect on the project will change with the amount of time the viewer is in front of the camera: the sun that represents the viewer will drop water bottles onto the landscape, and the longer they are in front of the camera the more &amp;quot;smog&amp;quot; will be visible on the screen.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
[[Media:notracking.png]]&lt;br /&gt;
[[Media:tracking.png]]&lt;br /&gt;
[[Media:smog.png]]&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3957</id>
		<title>Happy Days - Gregory Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3957"/>
				<updated>2010-05-30T23:41:33Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
We have a growing shortage of fresh water in the world, and as populations rise and sources become more contaminated the problem accelerates. We often fail to consider our day-to-day effect on the planet, and ignore obvious ways to reduce our footprint. I want to provide a means for a viewer to understand this relationship and begin to think about what they could do to counter their effect. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
I want the viewer of my project to question their existence within the imagery displayed on the screen. The longer they &amp;#039;interact&amp;#039; with the project, the more of an effect they will have on it. Using face tracking as the main stimulus for change, the project will react based on the length of time that the user&amp;#039;s face is tracked by the camera. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The effect on the project will change with the amount of time the viewer is in front of the camera: the sun that represents the viewer will drop water bottles onto the landscape, and the longer they are in front of the camera the more &amp;quot;smog&amp;quot; will be visible on the screen.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
[[Media:http://acsweb.ucsd.edu/~gparsons/notracking.png]]&lt;br /&gt;
[[Media:http://acsweb.ucsd.edu/~gparsons/tracking.png]]&lt;br /&gt;
[[Media:http://acsweb.ucsd.edu/~gparsons/smog.png]]&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3956</id>
		<title>Happy Days - Gregory Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Happy_Days_-_Gregory_Parsons&amp;diff=3956"/>
				<updated>2010-05-30T23:40:48Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: New page: == &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==  We have a growing shortage of fresh water in the world, and as populations rise and sources become more contaminated the problem accelerates. We often do not take i...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
We have a growing shortage of fresh water in the world, and as populations rise and sources become more contaminated the problem accelerates. We often fail to consider our day-to-day effect on the planet, and ignore obvious ways to reduce our footprint. I want to provide a means for a viewer to understand this relationship and begin to think about what they could do to counter their effect. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
I want the viewer of my project to question their existence within the imagery displayed on the screen. The longer they &amp;#039;interact&amp;#039; with the project, the more of an effect they will have on it. Using face tracking as the main stimulus for change, the project will react based on the length of time that the user&amp;#039;s face is tracked by the camera. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
The effect on the project will change with the amount of time the viewer is in front of the camera: the sun that represents the viewer will drop water bottles onto the landscape, and the longer they are in front of the camera the more &amp;quot;smog&amp;quot; will be visible on the screen.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
[[Image:http://acsweb.ucsd.edu/~gparsons/notracking.png]]&lt;br /&gt;
[[Image:http://acsweb.ucsd.edu/~gparsons/tracking.png]]&lt;br /&gt;
[[Image:http://acsweb.ucsd.edu/~gparsons/smog.png]]&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3955</id>
		<title>Classes/2010/VIS145B</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3955"/>
				<updated>2010-05-30T22:39:06Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Final Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Related Links:==&lt;br /&gt;
&lt;br /&gt;
[http://www.bodybuildingrevealed.com/&amp;#039;&amp;#039;&amp;#039;body building&amp;#039;&amp;#039;&amp;#039;]&lt;br /&gt;
&lt;br /&gt;
== Time and Process Based Digital Media II ==&lt;br /&gt;
Time: Thursdays 3:30-6:20pm, VAF 228&lt;br /&gt;
&lt;br /&gt;
This class is an advanced study and portfolio project course centered on the use of hardware and software to create interactive and time-based art.  These projects can take many forms—interactive installations, dynamic visualizations/sonifications, printed renderings—chosen by the students.  This will not be a course of technical instruction—rather we will consider technical and conceptual issues in tandem, supplementing discussions and activities with specific technical instruction where necessary.  There is a strong emphasis on the development and articulation of personal directions of research by the students in the course. &lt;br /&gt;
&lt;br /&gt;
I would like to split the reading/homework responsibility for two parts of the class.  In the first half of the term I will present a series of works and readings covering my particular interests--the intersections of social performance, embodied experience, and cognition.  In the latter half of the class (after the midterm) you all will do the presentations on topics of your choosing.  Working individually or in small groups, you will provide us with some conceptual provocation (reading material) covering topics you intend to engage with your final, and you will lead a discussion on technical and conceptual issues.  Reading and critical writing, in response to text and works you present and those I present, are integral to this course.&lt;br /&gt;
&lt;br /&gt;
The schedule is a living document and will be revised over the period of the course.&lt;br /&gt;
&lt;br /&gt;
== Instructor ==&lt;br /&gt;
Robert Twomey&lt;br /&gt;
&lt;br /&gt;
rtwomey@ucsd.edu&lt;br /&gt;
*http://roberttwomey.com&lt;br /&gt;
*http://experimentalgamelab.net&lt;br /&gt;
*http://crca.ucsd.edu&lt;br /&gt;
&lt;br /&gt;
Office Hours: Wednesday 3-4pm, Atkinson Hall Rm 1601 (CRCA research neighborhood).  Please e-mail me if you plan to attend.&lt;br /&gt;
&lt;br /&gt;
== Grading ==&lt;br /&gt;
*Midterm Project - 30%&lt;br /&gt;
*Final Project - 40%&lt;br /&gt;
*Presentations - 10%&lt;br /&gt;
*Readings - 10%&lt;br /&gt;
*Participation - 10%&lt;br /&gt;
&lt;br /&gt;
=== Presentations ===&lt;br /&gt;
(1) Short presentation on your work in the second week of class.  This should be a statement of your interests, direction, and goals with media art.  Present examples from your own work which you feel strongly about, and which best represent your interests and trajectory.  Present examples of other artists&amp;#039; work that serve as models for the kind of work you would like to make. (5-10 minutes each)&lt;br /&gt;
&lt;br /&gt;
(2) Medium-length presentation on final projects in the second half of the course (weeks 7-9).  This is the portion of the class where you dictate the reading and the discussion.  If you are presenting on a given week, you need to provide us with a reading 1 week in advance.  We will sign up for those time slots in week 6, just after the midterm. (10-15 minutes)&lt;br /&gt;
&lt;br /&gt;
=== Reading Responses ===&lt;br /&gt;
These are written summaries and critical responses to materials assigned for out-of-class viewing.  Things to consider: What points does the author make?  Do you buy their assumptions or agree with their conclusions?  Reading responses will be printed and turned in to the instructor at the beginning of class.  Generally these should be 1 page long.&lt;br /&gt;
&lt;br /&gt;
=== Projects ===&lt;br /&gt;
Midterm and final projects will be graded on concept, effort, and realization. Formal proposals are a necessary component of the process so take them seriously.  Make the effort to get started early and seek the help you need--we want to see finished, well-considered pieces for the midterm and final. Additionally, you will need to submit documentation of the project after completion which includes images, video, and source code where applicable.  These materials (proposals and documentation) will all be posted to the wiki.&lt;br /&gt;
=== Documentation Policy ===&lt;br /&gt;
*section on your project&lt;br /&gt;
*source code&lt;br /&gt;
*image/video documentation.  5 images or 5 videos.&lt;br /&gt;
*descriptive writing (on intent, motivation, context)&lt;br /&gt;
&lt;br /&gt;
=== Attendance ===&lt;br /&gt;
Attendance is mandatory. Each unexcused absence will drop your final grade one letter.  There are only 10 weeks of class, please come to them all.&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
=== Week 1 - Intro ===&lt;br /&gt;
*Introductions&lt;br /&gt;
*Scope of course, interests, technical possibilities.&lt;br /&gt;
*My work.&lt;br /&gt;
*Watch: We Live In Public.  2009. (excerpts)&lt;br /&gt;
*In class: personal page on wiki. [http://www.trsp.net/teaching/gamemod/ game-mod exercise]. [http://www.trsp.net/teaching/gamemod/gamemod_breakout_source_en.zip download link]&lt;br /&gt;
*Read: [http://www.nyu.edu/projects/xdesign/mainmenu/archive_tangible.html Against Virtualized Information], [http://www.nyu.edu/projects/xdesign/mainmenu/archive_analtictech.html Novel Analytic Techniques], and [http://www.nyu.edu/projects/xdesign/mainmenu/archive_infocounts.html What Information Counts?] by [http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/ Natalie Jeremijenko]. &lt;br /&gt;
*Read: [http://www.yalealumnimagazine.com/issues/2004_03/jeremijenko.html An Engineer for the Avante Garde]&lt;br /&gt;
*Read: [http://www.worldchanging.com/archives/001450.html Natalie Jeremijenko The WorldChanging Interview]&lt;br /&gt;
*Read: [http://tech90s.walkerart.org/nj/transcript/nj_01.html Database Politics and Social Simulations], good background on her earlier artwork.&lt;br /&gt;
&lt;br /&gt;
=== Week 2 - Student Research Interests ===&lt;br /&gt;
*Due: 1 page on Jeremijenko. &lt;br /&gt;
*Presentations on your work.&lt;br /&gt;
*Read: [http://www.flong.com/texts/essays/essay_cvad/ Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers] Golan Levin. &amp;#039;&amp;#039;pay particular attention to part II. ELEMENTARY COMPUTER VISION TECHNIQUES.  we are going to try these in class next week.&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
=== Week 3 - Computer Vision / Human Perception ===&lt;br /&gt;
*Due: Nothing. Read the Golan Levin piece, but no written response.&lt;br /&gt;
*Discuss:&lt;br /&gt;
**Myron Kreuger. Video Place. 1989 [http://www.youtube.com/watch?v=dqZyZrN3Pl0]&lt;br /&gt;
**Text Rain. Camille Utterback &amp;amp; Romy Achituv. 1999. [http://www.youtube.com/watch?v=toWFvXHghDk] [http://www.camilleutterback.com/]&lt;br /&gt;
**Very Nervous System.  1982-1991. [http://vimeo.com/8120954]&lt;br /&gt;
**Suicide Box.  Bureau of Inverse Technology.  1996. (13:00)&lt;br /&gt;
**Marie Sester. ACCESS.  2003. [http://accessproject.net]&lt;br /&gt;
**Messa di Voce. Golan Levin and Zach Lieberman with Jaap Blonk and Joan La Barbara. 2003.  [http://www.flong.com/projects/messa/] [http://www.tmema.org/messa/messa.html]&lt;br /&gt;
**Seen.  David Rokeby.  2002.  [http://vimeo.com/6012986]&lt;br /&gt;
**Sorting Daemon. David Rokeby. 2003. [http://homepage.mac.com/davidrokeby/sorting.html]&lt;br /&gt;
**Cheese.  Christian Moller. 2003. [http://www.christian-moeller.com/display.php?project_id=36] made in collaboration with UCSD  [http://mplab.ucsd.edu/wordpress/ Machine Perception Lab]&lt;br /&gt;
**Eyewriter. 2009 [http://www.eyewriter.org/]&lt;br /&gt;
**Saccade. 2010 [http://roberttwomey.com/saccade] (in progress)&lt;br /&gt;
*Discuss: &lt;br /&gt;
**thresholding&lt;br /&gt;
**frame difference&lt;br /&gt;
**OpenCV - [http://ubaa.net/shared/processing/opencv/ download] [http://www.cs.unc.edu/Research/stc/FAQs/OpenCV/OpenCVReferenceManual.pdf reference manual].  If you are getting this for your computer, be sure to get OpenCV, the OpenCV Processing Library, and the OpenCV Processing Examples (three separate downloads).&lt;br /&gt;
**face recognition&lt;br /&gt;
*In Class:&lt;br /&gt;
**Working alone or in small groups, do experiments with video processing and computer vision.&lt;br /&gt;
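A minimal sketch of the frame-difference technique listed above, written in plain Java rather than a full Processing sketch (the class name, the array-based grayscale frames, and the threshold value are illustrative assumptions, not taken from the course materials):&lt;br /&gt;

```java
// Frame differencing: compare two same-sized grayscale frames pixel by
// pixel and count how many pixels changed by more than a threshold.
// A high count suggests motion between the two frames.
public class FrameDiff {
    static int changedPixels(int[] prev, int[] curr, int threshold) {
        int count = 0;
        for (int i = 0; i < prev.length; i++) {
            // absolute brightness change at this pixel
            if (Math.abs(curr[i] - prev[i]) > threshold) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] prev = {10, 10, 10, 200};
        int[] curr = {12, 10, 90, 10};
        // only the last two pixels changed by more than 20
        System.out.println(changedPixels(prev, curr, 20)); // prints 2
    }
}
```

In a live sketch the same comparison would run on successive webcam frames (e.g. the `pixels[]` array in Processing), with the changed-pixel count driving the interaction.&lt;br /&gt;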
&lt;br /&gt;
=== Week 4 - Computer Vision Work ===&lt;br /&gt;
* In Class:&lt;br /&gt;
** Work on computer vision projects&lt;br /&gt;
** Talk about midterm projects.&lt;br /&gt;
&lt;br /&gt;
=== Week 5 - Midterm Workshop ===&lt;br /&gt;
*Due: Midterm project proposal.&lt;br /&gt;
**Working individually or in small groups (2-3 people), produce an interactive piece that bridges the gap between screen space and physical space.  There are many ways to do this--using image-based computer vision techniques, game controllers, audio input, or other physical hardware (Arduino?).  Think about the parameters of interaction--are you documenting viewer&amp;#039;s behavior (unknown to them), are you taking a familiar form (such as a video game) and tweaking it in some way, are you intervening in social space?  Think about what form the output will take.  In your one page proposal, describe the input(s), output(s), and dynamic of interaction, as well as some statement of your motivation.  Why is this a valuable or interesting project?  In addition to the written description, produce supporting visual materials.  These should be two functional diagram images and two visual/aesthetic images.  The functional diagrams should show the necessary software and hardware components and explain how the interaction will occur.  The aesthetic diagrams will give us a sense of what it will look like, how the output will appear.  Make a page for your project (including a title) in the Midterm Projects section at the bottom of this page, upload the necessary materials and embed them in that page.  This proposal is due in class next week where we will critique and workshop the ideas.&lt;br /&gt;
*In class:&lt;br /&gt;
**Workshop midterm project ideas. (45 minutes)&lt;br /&gt;
**Work on midterm projects. &lt;br /&gt;
*NOTE: Best of ICAM from Candy Harris.  There will be an install in the annex here at Mandeville and presentations at the Experimental Theater in the CPMC (music building). They should come see what they are going to have to live up to for their final projects. Plus the keynote speakers (ICAM alums) always have great info about career paths after graduation.&lt;br /&gt;
&lt;br /&gt;
=== Week 6 ===&lt;br /&gt;
In class work on midterms.&lt;br /&gt;
&lt;br /&gt;
=== Week 7 - Midterm Critiques ===&lt;br /&gt;
In class critique of midterms.&lt;br /&gt;
&lt;br /&gt;
=== Week 8 ===&lt;br /&gt;
&lt;br /&gt;
Due: Written response (1 page) to one of your classmate&amp;#039;s projects.&lt;br /&gt;
&lt;br /&gt;
In Class: Draft final project proposal and post to wiki by the end of class.  In class discussion as needed.&lt;br /&gt;
&lt;br /&gt;
=== Week 9 ===&lt;br /&gt;
work on finals&lt;br /&gt;
&lt;br /&gt;
=== Week 10 - Final Critiques ===&lt;br /&gt;
In-class critiques of finals.&lt;br /&gt;
&lt;br /&gt;
=== Finals Week ===&lt;br /&gt;
Final documentation due.&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
To Be Scheduled&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Performance for the camera, for the web&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Discuss Chatroulette. Facebook, Twitter, Youtube.  Attention in the social net.&lt;br /&gt;
*ManyCam [http://www.manycam.com/]&lt;br /&gt;
*PS3 eye&lt;br /&gt;
*jennicam [http://www.wired.com/thisdayintech/2010/04/0414jennicam-launches wired]&lt;br /&gt;
*Lonelygirl15 [http://www.youtube.com/watch?v=-goXKtd6cPo youtube] [http://www.wired.com/wired/archive/14.12/lonelygirl.html article]&lt;br /&gt;
*Discuss telematic performance. &lt;br /&gt;
* Justin.tv [http://www.justin.tv/#r=s7RVqBU~]&lt;br /&gt;
*Read: The Presentation of Self in Everyday Life (excerpt).  Erving Goffman. 1959.&lt;br /&gt;
*Read: Performance: A Critical Introduction (excerpt).  Marvin Carlson. 2004.&lt;br /&gt;
*Do: Intervention in social circuits.  Chatroulette/Facebook/Youtube exercise.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Social Networks/Web 2.0&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Protocol, Control, and Networks by Alexander Galloway and Eugene Thacker.  Grey Room 17, Fall 2004 p 6-29.  &lt;br /&gt;
*Read: DIGITAL MAOISM: The Hazards of the New Online Collectivism.  Jaron Lanier.  2006.&lt;br /&gt;
*Watch: MediatedCultures @ Kansas State http://mediatedcultures.net/mediatedculture.htm&lt;br /&gt;
*Datamining/Complex Networks, node-edge graphing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Digital Memory/Personal Media: Where do we exist and how do we remember?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Mediated Memories in the Digital Age (excerpt). Jose van Dijck. 2007.&lt;br /&gt;
*Read: Are you sure you want to do this?  Matthias Fuchs 1994.&lt;br /&gt;
*Read: Delete: The Virtue of Forgetting in the Digital Age (excerpt). Viktor Mayer-Schonberger. 2009.&lt;br /&gt;
*Flickr.com, Facebook&lt;br /&gt;
*Discuss: My Pocket. Burak Arikan. 2008. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Cognition + Creativity&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Generative Art vs. Computational Creativity&lt;br /&gt;
*Casey Reas&lt;br /&gt;
*Processing.org&lt;br /&gt;
*Tom Shannon. [http://www.wired.com/magazine/2010/03/pl_arts_pendulum/all/1]&lt;br /&gt;
*Read: Triumph of the Cyborg Composer. &lt;br /&gt;
*Read: How to draw three people in a garden.  1988.&lt;br /&gt;
*Read: Shades of Computational Evocation and Meaning: The GRIOT System and Improvisational Poetry Generation. 2006.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Artificial Intelligence&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Expressive Processing (excerpt), Noah Wardrip-Fruin, 2009. &lt;br /&gt;
*Read: Elephants Don&amp;#039;t Play Chess, Rodney Brooks, 1990. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Appropriation and Remix&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: The Fiction of Memory.  New York Times, March 12, 2010.  Luc Sante&lt;br /&gt;
*Read: Jonathan Lethem.  The Ecstasy of Influence. Harper&amp;#039;s Magazine.  2007. &lt;br /&gt;
*Remix Culture.  Lev.&lt;br /&gt;
*God&amp;#039;s Little Toys: Confessions of a cut &amp;amp; paste artist.  William Gibson. 2005. http://www.wired.com/wired/archive/13.07/gibson.html&lt;br /&gt;
*Reality Hunger: A Manifesto.  David Shields. 2010.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Materiality in the information age.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Tangible interfaces, haptic feedback. &lt;br /&gt;
*Read: Evocative Objects: Things We Think With (excerpt). Sherry Turkle, 2007. &lt;br /&gt;
*Read: New Media and the Forensic Imagination (excerpt). Matthew Kirschenbaum. 2008.&lt;br /&gt;
*View: BIT Plane.  &lt;br /&gt;
*View: Garbage Cubes&lt;br /&gt;
*Discuss techniques of markerless tracking, augmented reality, QR codes, etc.&lt;br /&gt;
*Online/Offline Space.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Embodiment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Computing with bodies, engineered bodies&lt;br /&gt;
*tactile media, haptic interface&lt;br /&gt;
*embodied perception&lt;br /&gt;
*Read: Stelarc. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Self-Image&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Self/Image: Technology, Representation, and the Contemporary Subject (excerpt).  Amelia Jones, 2006.&lt;br /&gt;
*Do: Forensic Photoshop Exercise.&lt;br /&gt;
*http://www.flickr.com/photos/dryponder/sets/72157623726710218/&lt;br /&gt;
*http://nymag.com/daily/intel/2010/02/obama_being_forced_to_look_at.html#photo=1&lt;br /&gt;
*http://niccageaseveryone.blogspot.com/&lt;br /&gt;
*http://bubleraptor.tumblr.com/&lt;br /&gt;
*photoshop free Marie Claire issue: http://jezebel.com/5511507/so-long-as-your-face-looks-alright-everything-else-can-be-photoshopped&lt;br /&gt;
&lt;br /&gt;
== Places to Find Art ==&lt;br /&gt;
* http://we-make-money-not-art.com/&lt;br /&gt;
* http://www.isea-web.org/, http://www.isea2010ruhr.org/&lt;br /&gt;
* http://www.transmediale.de/en&lt;br /&gt;
* http://01sj.org/&lt;br /&gt;
* http://www.file.org.br/&lt;br /&gt;
* http://www.aec.at/festival_about_en.php&lt;br /&gt;
* http://www.sciencegallery.com/lightwave09&lt;br /&gt;
* Institutions that Sponsor/Show Media Art&lt;br /&gt;
** Eyebeam New York City&lt;br /&gt;
** New Museum/Rhizome.org http://rhizome.org&lt;br /&gt;
** HarvestWorks&lt;br /&gt;
** Machine Project, Los Angeles.&lt;br /&gt;
&lt;br /&gt;
== Midterm Projects ==&lt;br /&gt;
Make pages here. &lt;br /&gt;
* [[DummyProject | Dummy Project]]&lt;br /&gt;
* [[MidtermProject| MotionDJ - Leilani Martin]]&lt;br /&gt;
* [[What&amp;#039;s For Lunch, Kids? by Kelley Kim| &amp;#039;&amp;#039;What&amp;#039;s For Lunch, Kids?&amp;#039;&amp;#039;   - Kelley Kim]]&lt;br /&gt;
* [[Virtual Walk? - Joeny Thipsidakhom]]&lt;br /&gt;
* [[Untitled Midterm| Untitled - Jezreel Callejas]]&lt;br /&gt;
* [[Midterm Project - Tony Lu | Virtual Maze - Tony Lu]]&lt;br /&gt;
* [[Midterm Project  | SayCHEESE - Joel and Jenny Chang]]&lt;br /&gt;
* [[Carnival Ride| Carnival Ride - Christina Sanchez and Jennifer Sunga]]&lt;br /&gt;
* [[Hunted - Anna Lin, Jenny Wang, and Ellen Huang]]&lt;br /&gt;
* [[Wii - remote composer - Javi Lee]]&lt;br /&gt;
* [[Social Creature | Boo (formerly Social Creature) - Jet Antonio]]&lt;br /&gt;
* [[Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley]]&lt;br /&gt;
&lt;br /&gt;
== Final Projects ==&lt;br /&gt;
* [[Aquarium| Aquarium - Jezreel Callejas]]&lt;br /&gt;
* [[Not So Lost| Not So Lost - Ben Brickely, Emilio Marcelino and Greg Parsons]]&lt;br /&gt;
* [[Dance in the Dark - Anna Lin, Jenny Wang and Ellen Huang]]&lt;br /&gt;
* [[Inkling One | Inkling One - Jet Antonio]]&lt;br /&gt;
* [[Untitled (for now) - Joeny Thipsidakhom]]&lt;br /&gt;
* [[uMV | uMV - Tony Lu]]&lt;br /&gt;
* [[Stone Face | Stone Face - Jennifer Sunga]]&lt;br /&gt;
* [[This Music: Prohibited | This Music: Prohibited  - Kelley Kim]]&lt;br /&gt;
* [[Blockman | Blockman - Javier Lee]]&lt;br /&gt;
* [[What do you see? | What do you see? - Christina Sanchez]]&lt;br /&gt;
* [[Happy Days - Gregory Parsons]]&lt;br /&gt;
&lt;br /&gt;
== Student Pages ==&lt;br /&gt;
Click &amp;quot;edit&amp;quot; on the right to add your own page below. &lt;br /&gt;
* [[Students/RobertTwomey | RobertTwomey]]&lt;br /&gt;
* [[Students/Javier Lee | Javier Lee]]&lt;br /&gt;
* [[Students/Jenny Wang | Jenny Wang]]&lt;br /&gt;
* [[Students/Joeny Thipsidakhom | Joeny Thipsidakhom]]&lt;br /&gt;
* [[Students/Kuan-Ting Lu | Tony Lu]]&lt;br /&gt;
* [[Students/Jezreel Callejas| Jezreel Callejas]]&lt;br /&gt;
* [[Students/ChristinaSanchez| Christina Sanchez]]&lt;br /&gt;
* [[Students/BenBrickley | BenBrickley]]&lt;br /&gt;
* [[Students/Ellen Huang | Ellen Huang]]&lt;br /&gt;
* [[Students/Kelley Kim | Kelley Kim]]&lt;br /&gt;
* [[Students/EmilioMarcelino | EmilioMarcelino]]&lt;br /&gt;
* [[Students/Anna Lin | Anna Lin]]&lt;br /&gt;
* [[Student/Jenny Chang | Jenny Chang]]&lt;br /&gt;
* [[Student/Jet Antonio | Jet Antonio]]&lt;br /&gt;
* [[Students/GregoryParsons | Gregory Parsons]]&lt;br /&gt;
* [[Students/Jennifer Sunga | Jennifer Sunga]]&lt;br /&gt;
* [[Students/LeilaniMartin | Leilani Martin]]&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3883</id>
		<title>Classes/2010/VIS145B</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3883"/>
				<updated>2010-05-20T23:59:33Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Final Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;
== Time and Process Based Digital Media II ==&lt;br /&gt;
Time: Thursdays 3:30-6:20pm, VAF 228&lt;br /&gt;
&lt;br /&gt;
This class is an advanced study and portfolio project course centered on the use of hardware and software to create interactive and time-based art.  These projects can take many forms—interactive installations, dynamic visualizations/sonifications, printed renderings—chosen by the students.  This will not be a course of technical instruction—rather we will consider technical and conceptual issues in tandem, supplementing discussions and activities with specific technical instruction where necessary.  There is a strong emphasis on the development and articulation of personal directions of research by the students in the course. &lt;br /&gt;
&lt;br /&gt;
I would like to split the reading/homework responsibility into two parts.  In the first half of the term I will present a series of works and readings covering my particular interests--the intersections of social performance, embodied experience, and cognition.  In the latter half of the class (after the midterm) you all will do the presentations on topics of your choosing.  Working individually or in small groups, you will provide us with some conceptual provocation (reading material) covering topics you intend to engage with in your final project, and you will lead a discussion on technical and conceptual issues.  Reading and critical writing, in response to texts and works you present and those I present, are integral to this course.&lt;br /&gt;
&lt;br /&gt;
The schedule is a living document and will be revised over the period of the course.&lt;br /&gt;
&lt;br /&gt;
== Instructor ==&lt;br /&gt;
Robert Twomey&lt;br /&gt;
&lt;br /&gt;
rtwomey@ucsd.edu&lt;br /&gt;
*http://roberttwomey.com&lt;br /&gt;
*http://experimentalgamelab.net&lt;br /&gt;
*http://crca.ucsd.edu&lt;br /&gt;
&lt;br /&gt;
Office Hours: Wednesday 3-4pm, Atkinson Hall Rm 1601 (CRCA research neighborhood).  Please e-mail me if you plan to attend.&lt;br /&gt;
&lt;br /&gt;
== Grading ==&lt;br /&gt;
*Midterm Project - 30%&lt;br /&gt;
*Final Project - 40%&lt;br /&gt;
*Presentations - 10%&lt;br /&gt;
*Readings - 10%&lt;br /&gt;
*Participation - 10%&lt;br /&gt;
&lt;br /&gt;
=== Presentations ===&lt;br /&gt;
(1) Short presentation on your work in the second week of class.  This should be a statement of your interests, direction, and goals with media art.  Present examples from your own work which you feel strongly about, and which best represent your interests and trajectory.  Present examples of other artists&amp;#039; work that serve as models for the kind of work you would like to make. (5-10 minutes each)&lt;br /&gt;
&lt;br /&gt;
(2) Medium presentation on final projects in the second half of the course (weeks 7-9).  This is the portion of the class where you dictate the reading and the discussion.  If you are presenting on a given week, you need to provide us with a reading 1 week in advance.  We will sign up for those time slots in week 6, just after the midterm. (10-15 minutes)&lt;br /&gt;
&lt;br /&gt;
=== Reading Responses ===&lt;br /&gt;
These are written summaries and critical responses to materials assigned for out-of-class viewing.  Things to consider: What points does the author make?  Do you buy their assumptions or agree with their conclusions?  Reading responses will be printed and turned in to the instructor at the beginning of class.  Generally these should be 1 page long.&lt;br /&gt;
&lt;br /&gt;
=== Projects ===&lt;br /&gt;
Midterm and final projects will be graded on concept, effort, and realization. Formal proposals are a necessary component of the process so take them seriously.  Make the effort to get started early and seek the help you need--we want to see finished, well-considered pieces for the midterm and final. Additionally, you will need to submit documentation of the project after completion which includes images, video, and source code where applicable.  These materials (proposals and documentation) will all be posted to the wiki.&lt;br /&gt;
=== Documentation Policy ===&lt;br /&gt;
*section on your project&lt;br /&gt;
*source code&lt;br /&gt;
*image/video documentation.  5 images or 5 videos.&lt;br /&gt;
*descriptive writing (on intent, motivation, context)&lt;br /&gt;
&lt;br /&gt;
=== Attendance ===&lt;br /&gt;
Attendance is mandatory. Each unexcused absence will drop your final grade one letter.  There are only 10 weeks of class; please come to them all.&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
=== Week 1 - Intro ===&lt;br /&gt;
*Introductions&lt;br /&gt;
*Scope of course, interests, technical possibilities.&lt;br /&gt;
*My work.&lt;br /&gt;
*Watch: We Live In Public.  2009. (excerpts)&lt;br /&gt;
*In class: personal page on wiki. [http://www.trsp.net/teaching/gamemod/ game-mod exercise]. [http://www.trsp.net/teaching/gamemod/gamemod_breakout_source_en.zip download link]&lt;br /&gt;
*Read: [http://www.nyu.edu/projects/xdesign/mainmenu/archive_tangible.html Against Virtualized Information], [http://www.nyu.edu/projects/xdesign/mainmenu/archive_analtictech.html Novel Analytic Techniques], and [http://www.nyu.edu/projects/xdesign/mainmenu/archive_infocounts.html What Information Counts?] by [http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/ Natalie Jeremijenko]. &lt;br /&gt;
*Read: [http://www.yalealumnimagazine.com/issues/2004_03/jeremijenko.html An Engineer for the Avant Garde]&lt;br /&gt;
*Read: [http://www.worldchanging.com/archives/001450.html Natalie Jeremijenko The WorldChanging Interview]&lt;br /&gt;
*Read: [http://tech90s.walkerart.org/nj/transcript/nj_01.html Database Politics and Social Simulations], good background on her earlier artwork.&lt;br /&gt;
&lt;br /&gt;
=== Week 2 - Student Research Interests ===&lt;br /&gt;
*Due: 1 page on Jeremijenko. &lt;br /&gt;
*Presentations on your work.&lt;br /&gt;
*Read: [http://www.flong.com/texts/essays/essay_cvad/ Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers] Golan Levin. &amp;#039;&amp;#039;pay particular attention to part II. ELEMENTARY COMPUTER VISION TECHNIQUES.  we are going to try these in class next week.&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
=== Week 3 - Computer Vision / Human Perception ===&lt;br /&gt;
*Due: Nothing. Read the Golan Levin piece, but no written response.&lt;br /&gt;
*Discuss:&lt;br /&gt;
**Myron Krueger. Video Place. 1989 [http://www.youtube.com/watch?v=dqZyZrN3Pl0]&lt;br /&gt;
**Text Rain. Camille Utterback &amp;amp; Romy Achituv. 1999. [http://www.youtube.com/watch?v=toWFvXHghDk] [http://www.camilleutterback.com/]&lt;br /&gt;
**Very Nervous System.  1982-1991. [http://vimeo.com/8120954]&lt;br /&gt;
**Suicide Box.  Bureau of Inverse Technology.  1996. (13:00)&lt;br /&gt;
**Marie Sester. ACCESS.  2003. [http://accessproject.net]&lt;br /&gt;
**Messa di Voce. Golan Levin and Zach Lieberman with Jaap Blonk and Joan La Barbara. 2003.  [http://www.flong.com/projects/messa/] [http://www.tmema.org/messa/messa.html]&lt;br /&gt;
**Seen.  David Rokeby.  2002.  [http://vimeo.com/6012986]&lt;br /&gt;
**Sorting Daemon. David Rokeby. 2003. [http://homepage.mac.com/davidrokeby/sorting.html]&lt;br /&gt;
**Cheese.  Christian Moeller. 2003. [http://www.christian-moeller.com/display.php?project_id=36] made in collaboration with UCSD  [http://mplab.ucsd.edu/wordpress/ Machine Perception Lab]&lt;br /&gt;
**Eyewriter. 2009 [http://www.eyewriter.org/]&lt;br /&gt;
**Saccade. 2010 [http://roberttwomey.com/saccade] (in progress)&lt;br /&gt;
*Discuss: &lt;br /&gt;
**thresholding&lt;br /&gt;
**frame difference&lt;br /&gt;
**OpenCV - [http://ubaa.net/shared/processing/opencv/ download] [http://www.cs.unc.edu/Research/stc/FAQs/OpenCV/OpenCVReferenceManual.pdf reference manual].  If you are getting this for your computer, be sure to get OpenCV, the OpenCV Processing Library, and the OpenCV Processing Examples (three separate downloads).&lt;br /&gt;
**face recognition&lt;br /&gt;
*In Class:&lt;br /&gt;
**Working alone or in small groups, do experiments with video processing and computer vision.&lt;br /&gt;
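As a rough illustration (not class code), the elementary techniques listed above--thresholding and frame differencing--come down to per-pixel arithmetic. The sketch below is plain Python on toy one-dimensional grayscale frames, with made-up helper names (threshold, frame_difference); in class we do the equivalent per pixel in Processing with the OpenCV library.&lt;br /&gt;

```python
# Toy illustration of two elementary computer-vision techniques:
# thresholding and frame differencing.  Frames here are flat lists of
# grayscale values (0-255); a real sketch applies the same test to
# every pixel of a camera image.

def threshold(frame, cutoff=128):
    # Binarize: pixels at or above the cutoff become white (255), others black (0).
    return [255 if p >= cutoff else 0 for p in frame]

def frame_difference(prev, curr, cutoff=30):
    # Motion mask: mark pixels whose brightness changed by more than the cutoff.
    return [255 if abs(c - p) > cutoff else 0 for p, c in zip(prev, curr)]

if __name__ == "__main__":
    prev = [10, 10, 200, 200, 10]
    curr = [10, 120, 200, 40, 10]
    print(threshold(curr))               # -> [0, 0, 255, 0, 0]
    print(frame_difference(prev, curr))  # -> [0, 255, 0, 255, 0]
```

Thresholding against a stored background and differencing successive frames are the building blocks of the presence/motion detection in pieces like Very Nervous System.&lt;br /&gt;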
&lt;br /&gt;
=== Week 4 - Computer Vision Work ===&lt;br /&gt;
* In Class:&lt;br /&gt;
** Work on computer vision projects&lt;br /&gt;
** Talk about midterm projects.&lt;br /&gt;
&lt;br /&gt;
=== Week 5 - Midterm Workshop ===&lt;br /&gt;
*Due: Midterm project proposal.&lt;br /&gt;
**Working individually or in small groups (2-3 people), produce an interactive piece that bridges the gap between screen space and physical space.  There are many ways to do this--using image-based computer vision techniques, game controllers, audio input, or other physical hardware (Arduino?).  Think about the parameters of interaction--are you documenting viewer&amp;#039;s behavior (unknown to them), are you taking a familiar form (such as a video game) and tweaking it in some way, are you intervening in social space?  Think about what form the output will take.  In your one page proposal, describe the input(s), output(s), and dynamic of interaction, as well as some statement of your motivation.  Why is this a valuable or interesting project?  In addition to the written description, produce supporting visual materials.  These should be two functional diagram images and two visual/aesthetic images.  The functional diagrams should show the necessary software and hardware components and explain how the interaction will occur.  The aesthetic diagrams will give us a sense of what it will look like, how the output will appear.  Make a page for your project (including a title) in the Midterm Projects section at the bottom of this page, upload the necessary materials and embed them in that page.  This proposal is due in class next week where we will critique and workshop the ideas.&lt;br /&gt;
*In class:&lt;br /&gt;
**Workshop midterm project ideas. (45 minutes)&lt;br /&gt;
**Work on midterm projects. &lt;br /&gt;
*NOTE: Best of ICAM from Candy Harris.  There will be an install in the annex here at Mandeville and presentations at the Experimental Theater in the CPMC (music building). Students should come see what they will have to live up to for their final projects. Plus the keynote speakers (ICAM alumni) always have great info about career paths after graduation.&lt;br /&gt;
&lt;br /&gt;
=== Week 6 ===&lt;br /&gt;
In class work on midterms.&lt;br /&gt;
&lt;br /&gt;
=== Week 7 - Midterm Critiques ===&lt;br /&gt;
In class critique of midterms.&lt;br /&gt;
&lt;br /&gt;
=== Week 8 ===&lt;br /&gt;
&lt;br /&gt;
Due: Written response (1 page) to one of your classmate&amp;#039;s projects.&lt;br /&gt;
&lt;br /&gt;
In Class: Draft final project proposal and post to wiki by the end of class.  In class discussion as needed.&lt;br /&gt;
&lt;br /&gt;
=== Week 9 ===&lt;br /&gt;
In-class work on finals.&lt;br /&gt;
&lt;br /&gt;
=== Week 10 - Final Critiques ===&lt;br /&gt;
In-class critiques of finals.&lt;br /&gt;
&lt;br /&gt;
=== Finals Week ===&lt;br /&gt;
Final documentation due.&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
To Be Scheduled&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Performance for the camera, for the web&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Discuss Chatroulette, Facebook, Twitter, YouTube.  Attention in the social net.&lt;br /&gt;
*ManyCam [http://www.manycam.com/]&lt;br /&gt;
*PS3 eye&lt;br /&gt;
*jennicam [http://www.wired.com/thisdayintech/2010/04/0414jennicam-launches wired]&lt;br /&gt;
*Lonelygirl15 [http://www.youtube.com/watch?v=-goXKtd6cPo youtube] [http://www.wired.com/wired/archive/14.12/lonelygirl.html article]&lt;br /&gt;
*Discuss telematic performance. &lt;br /&gt;
* Justin.tv [http://www.justin.tv/#r=s7RVqBU~]&lt;br /&gt;
*Read: The Presentation of Self in Everyday Life (excerpt).  Erving Goffman. 1959.&lt;br /&gt;
*Read: Performance: A Critical Introduction (excerpt).  Marvin Carlson. 2004.&lt;br /&gt;
*Do: Intervention in social circuits.  Chatroulette/Facebook/Youtube exercise.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Social Networks/Web 2.0&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Protocol, Control, and Networks by Alexander Galloway and Eugene Thacker.  Grey Room 17, Fall 2004 p 6-29.  &lt;br /&gt;
*Read: DIGITAL MAOISM: The Hazards of the New Online Collectivism.  Jaron Lanier.  2006.&lt;br /&gt;
*Watch: MediatedCultures @ Kansas State http://mediatedcultures.net/mediatedculture.htm&lt;br /&gt;
*Datamining/Complex Networks, node-edge graphing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Digital Memory/Personal Media: Where do we exist and how do we remember?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Mediated Memories in the Digital Age (excerpt). Jose van Dijck. 2007.&lt;br /&gt;
*Read: Are you sure you want to do this?  Matthias Fuchs 1994.&lt;br /&gt;
*Read: Delete: The Virtue of Forgetting in the Digital Age (excerpt). Viktor Mayer-Schonberger. 2009.&lt;br /&gt;
*Flickr.com, Facebook&lt;br /&gt;
*Discuss: My Pocket. Burak Arikan. 2008. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Cognition + Creativity&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Generative Art vs. Computational Creativity&lt;br /&gt;
*Casey Reas&lt;br /&gt;
*Processing.org&lt;br /&gt;
*Tom Shannon. [http://www.wired.com/magazine/2010/03/pl_arts_pendulum/all/1]&lt;br /&gt;
*Read: Triumph of the Cyborg Composer. &lt;br /&gt;
*Read: How to draw three people in a garden.  1988.&lt;br /&gt;
*Read: Shades of Computational Evocation and Meaning: The GRIOT System and Improvisational Poetry Generation. 2006.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Artificial Intelligence&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Expressive Processing (excerpt), Noah Wardrip-Fruin, 2009. &lt;br /&gt;
*Read: Elephants Don&amp;#039;t Play Chess, Rodney Brooks, 1990. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Appropriation and Remix&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: The Fiction of Memory.  New York Times, March 12, 2010.  Luc Sante&lt;br /&gt;
*Read: Jonathan Lethem.  The Ecstasy of Influence. Harper&amp;#039;s Magazine.  2007. &lt;br /&gt;
*Remix Culture.  Lev.&lt;br /&gt;
*God&amp;#039;s Little Toys: Confessions of a cut &amp;amp; paste artist.  William Gibson. 2005. http://www.wired.com/wired/archive/13.07/gibson.html&lt;br /&gt;
*Reality Hunger: A Manifesto.  David Shields. 2010.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Materiality in the information age.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Tangible interfaces, haptic feedback. &lt;br /&gt;
*Read: Evocative Objects: Things We Think With (excerpt). Sherry Turkle, 2007. &lt;br /&gt;
*Read: New Media and the Forensic Imagination (excerpt). Matthew Kirschenbaum. 2008.&lt;br /&gt;
*View: BIT Plane.  &lt;br /&gt;
*View: Garbage Cubes&lt;br /&gt;
*Discuss techniques of markerless tracking, augmented reality, QR codes, etc.&lt;br /&gt;
*Online/Offline Space.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Embodiment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Computing with bodies, engineered bodies&lt;br /&gt;
*tactile media, haptic interface&lt;br /&gt;
*embodied perception&lt;br /&gt;
*Read: Stelarc. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Self-Image&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Self/Image: Technology, Representation, and the Contemporary Subject (excerpt).  Amelia Jones, 2006.&lt;br /&gt;
*Do: Forensic Photoshop Exercise.&lt;br /&gt;
*http://www.flickr.com/photos/dryponder/sets/72157623726710218/&lt;br /&gt;
*http://nymag.com/daily/intel/2010/02/obama_being_forced_to_look_at.html#photo=1&lt;br /&gt;
*http://niccageaseveryone.blogspot.com/&lt;br /&gt;
*http://bubleraptor.tumblr.com/&lt;br /&gt;
*photoshop free Marie Claire issue: http://jezebel.com/5511507/so-long-as-your-face-looks-alright-everything-else-can-be-photoshopped&lt;br /&gt;
&lt;br /&gt;
== Places to Find Art ==&lt;br /&gt;
* http://we-make-money-not-art.com/&lt;br /&gt;
* http://www.isea-web.org/, http://www.isea2010ruhr.org/&lt;br /&gt;
* http://www.transmediale.de/en&lt;br /&gt;
* http://01sj.org/&lt;br /&gt;
* http://www.file.org.br/&lt;br /&gt;
* http://www.aec.at/festival_about_en.php&lt;br /&gt;
* http://www.sciencegallery.com/lightwave09&lt;br /&gt;
* Institutions that Sponsor/Show Media Art&lt;br /&gt;
** Eyebeam New York City&lt;br /&gt;
** New Museum/Rhizome.org http://rhizome.org&lt;br /&gt;
** HarvestWorks&lt;br /&gt;
** Machine Project, Los Angeles.&lt;br /&gt;
&lt;br /&gt;
== Midterm Projects ==&lt;br /&gt;
Make pages here. &lt;br /&gt;
* [[DummyProject | Dummy Project]]&lt;br /&gt;
* [[MidtermProject| MotionDJ - Leilani Martin]]&lt;br /&gt;
* [[What&amp;#039;s For Lunch, Kids? by Kelley Kim| &amp;#039;&amp;#039;What&amp;#039;s For Lunch, Kids?&amp;#039;&amp;#039;   - Kelley Kim]]&lt;br /&gt;
* [[Virtual Walk? - Joeny Thipsidakhom]]&lt;br /&gt;
* [[Untitled Midterm| Untitled - Jezreel Callejas]]&lt;br /&gt;
* [[Midterm Project - Tony Lu | Virtual Maze - Tony Lu]]&lt;br /&gt;
* [[Midterm Project  | SayCHEESE - Joel and Jenny Chang]]&lt;br /&gt;
* [[Carnival Ride| Carnival Ride - Christina Sanchez and Jennifer Sunga]]&lt;br /&gt;
* [[Hunted - Anna Lin, Jenny Wang, and Ellen Huang]]&lt;br /&gt;
* [[Wii - remote composer - Javi Lee]]&lt;br /&gt;
* [[Social Creature | Social Creature - Jet Antonio]]&lt;br /&gt;
* [[Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley]]&lt;br /&gt;
== Final Projects ==&lt;br /&gt;
* [[Aquarium| Aquarium - Jezreel Callejas]]&lt;br /&gt;
* [[Not So Lost| Not So Lost - Ben Brickley, Emilio Marcelino and Greg Parsons]]&lt;br /&gt;
&lt;br /&gt;
== Student Pages ==&lt;br /&gt;
Click &amp;quot;edit&amp;quot; on the right to add your own page below. &lt;br /&gt;
* [[Students/RobertTwomey | RobertTwomey]]&lt;br /&gt;
* [[Students/Javier Lee | Javier Lee]]&lt;br /&gt;
* [[Students/Jenny Wang | Jenny Wang]]&lt;br /&gt;
* [[Students/Joeny Thipsidakhom | Joeny Thipsidakhom]]&lt;br /&gt;
* [[Students/Kuan-Ting Lu | Tony Lu]]&lt;br /&gt;
* [[Students/Jezreel Callejas| Jezreel Callejas]]&lt;br /&gt;
* [[Students/ChristinaSanchez| Christina Sanchez]]&lt;br /&gt;
* [[Students/BenBrickley | BenBrickley]]&lt;br /&gt;
* [[Students/Ellen Huang | Ellen Huang]]&lt;br /&gt;
* [[Students/Kelley Kim | Kelley Kim]]&lt;br /&gt;
* [[Students/EmilioMarcelino | EmilioMarcelino]]&lt;br /&gt;
* [[Students/Anna Lin | Anna Lin]]&lt;br /&gt;
* [[Student/Jenny Chang | Jenny Chang]]&lt;br /&gt;
* [[Student/Jet Antonio | Jet Antonio]]&lt;br /&gt;
* [[Students/GregoryParsons | Gregory Parsons]]&lt;br /&gt;
* [[Students/Jennifer Sunga | Jennifer Sunga]]&lt;br /&gt;
* [[Students/LeilaniMartin | Leilani Martin]]&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3861</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3861"/>
				<updated>2010-05-20T23:08:36Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project that would require a computer, a microphone, and a mouse. With these three components we could build a graffiti-style drawing program.  Later we became interested in using the webcam for drawing as well, so we replaced the mouse element with the webcam.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, the viewer needs a webcam, a microphone, and a computer running Processing.  While the sketch runs, head tracking replaces the mouse, and speaking or blowing into the microphone makes the program draw. &lt;br /&gt;
&lt;br /&gt;
Basically... speaking and moving your head simultaneously will allow you to draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and functioned under demand. After discussing it with the class, we agree that the project is stronger as a volume-reacting rather than frequency-reacting design. It would be interesting to develop the project further and install it in a gallery setting, where it would react to people walking past. It would also be worth fine-tuning it to run across a larger set of screens with smaller drawings, so that it would not need to be reset after a short amount of use; this would allow more people to interact, and for longer. &lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Video&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=vmtfh574R3o Video Documentation]&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Code Audio Level Based&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // perform face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m*=150;&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill (tt, 0, 116);//tt, 0, 116&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Code Audio Frequency Based&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
FFT fft;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
  fft = new FFT(in.bufferSize(), in.sampleRate());&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // perform face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  fft.window(FFT.HAMMING);&lt;br /&gt;
  for(int i = 0; i &amp;lt; fft.specSize(); i++)&lt;br /&gt;
  {&lt;br /&gt;
    // find the loudest frequency band, ignoring bands below&lt;br /&gt;
    //an amplitude threshold of 10&lt;br /&gt;
    if (fft.getBand(i) &amp;gt; loudestFreqAmp &amp;amp;&amp;amp; fft.getBand(i) &amp;gt; 10)&lt;br /&gt;
    {&lt;br /&gt;
      loudestFreqAmp = fft.getBand(i);&lt;br /&gt;
      loudestFreq = i * 4;&lt;br /&gt;
&lt;br /&gt;
      // draw the thing&lt;br /&gt;
      drawCircles(posX, posY, (int)loudestFreqAmp, 10);  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
      timerCounter = 0;&lt;br /&gt;
      System.out.println(loudestFreq + &amp;quot;---&amp;quot; + loudestFreqAmp);&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  loudestFreqAmp = 0;&lt;br /&gt;
&lt;br /&gt;
  fft.forward(in.mix);&lt;br /&gt;
&lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, int radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 116 * level / 6.0; &lt;br /&gt;
  fill (tt, 45, 255);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3859</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3859"/>
				<updated>2010-05-20T23:07:45Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Video */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project that would require only a computer, a microphone, and a mouse. With these three components we could build a graffiti drawing program. We later became interested in drawing with a webcam as well, so we replaced the mouse with the webcam.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, a person needs a webcam, a microphone, and a computer running Processing. While the sketch runs, head tracking takes the place of the mouse, and speaking or blowing into the microphone makes the program draw.&lt;br /&gt;
&lt;br /&gt;
In short, speaking while moving your head lets you draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and functioned reliably on demand. After discussing it with the class, we agreed that the design is stronger when it reacts to volume level rather than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where it would react to people walking past. It would also be worth fine-tuning it for a larger array of screens with smaller drawings, so the piece would not need to be reset after brief use; this would allow more people to interact with it for longer.&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Video&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=vmtfh574R3o Video Documentation]&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Code Audio Level Based&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m*=150;&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill (tt, 0, 116);//tt, 0, 116&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Code Audio Frequency Based&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
FFT fft;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
  fft = new FFT(in.bufferSize(), in.sampleRate());&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  fft.window(FFT.HAMMING);&lt;br /&gt;
  for(int i = 0; i &amp;lt; fft.specSize(); i++)&lt;br /&gt;
  {&lt;br /&gt;
    // track the loudest frequency band, ignoring bands below an amplitude threshold of 10&lt;br /&gt;
    if (fft.getBand(i) &amp;gt; loudestFreqAmp &amp;amp;&amp;amp; fft.getBand(i) &amp;gt; 10)&lt;br /&gt;
    {&lt;br /&gt;
      loudestFreqAmp = fft.getBand(i);&lt;br /&gt;
      loudestFreq = i * 4;&lt;br /&gt;
&lt;br /&gt;
      // splatter circles at the detected face position&lt;br /&gt;
      drawCircles(posX, posY, (int)loudestFreqAmp, 10);  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
      timerCounter = 0;&lt;br /&gt;
      System.out.println(loudestFreq + &amp;quot;---&amp;quot; + loudestFreqAmp);&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  loudestFreqAmp = 0;&lt;br /&gt;
&lt;br /&gt;
  fft.forward(in.mix);&lt;br /&gt;
&lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, int radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 116 * level / 6.0; &lt;br /&gt;
  fill (tt, 45, 255);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3858</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3858"/>
				<updated>2010-05-20T23:07:28Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Video */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project that would require only a computer, a microphone, and a mouse. With these three components we could build a graffiti drawing program. We later became interested in drawing with a webcam as well, so we replaced the mouse with the webcam.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, a person needs a webcam, a microphone, and a computer running Processing. While the sketch runs, head tracking takes the place of the mouse, and speaking or blowing into the microphone makes the program draw.&lt;br /&gt;
&lt;br /&gt;
In short, speaking while moving your head lets you draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and functioned reliably on demand. After discussing it with the class, we agreed that the design is stronger when it reacts to volume level rather than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where it would react to people walking past. It would also be worth fine-tuning it for a larger array of screens with smaller drawings, so the piece would not need to be reset after brief use; this would allow more people to interact with it for longer.&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Video&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=vmtfh574R3o Video Documentation]&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Code Audio Level Based&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m*=150;&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill (tt, 0, 116);//tt, 0, 116&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Code Audio Frequency Based&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
FFT fft;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
  fft = new FFT(in.bufferSize(), in.sampleRate());&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  fft.window(FFT.HAMMING);&lt;br /&gt;
  for(int i = 0; i &amp;lt; fft.specSize(); i++)&lt;br /&gt;
  {&lt;br /&gt;
    // track the loudest frequency band, ignoring bands below an amplitude threshold of 10&lt;br /&gt;
    if (fft.getBand(i) &amp;gt; loudestFreqAmp &amp;amp;&amp;amp; fft.getBand(i) &amp;gt; 10)&lt;br /&gt;
    {&lt;br /&gt;
      loudestFreqAmp = fft.getBand(i);&lt;br /&gt;
      loudestFreq = i * 4;&lt;br /&gt;
&lt;br /&gt;
      // splatter circles at the detected face position&lt;br /&gt;
      drawCircles(posX, posY, (int)loudestFreqAmp, 10);  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
      timerCounter = 0;&lt;br /&gt;
      System.out.println(loudestFreq + &amp;quot;---&amp;quot; + loudestFreqAmp);&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  loudestFreqAmp = 0;&lt;br /&gt;
&lt;br /&gt;
  fft.forward(in.mix);&lt;br /&gt;
&lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, int radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 116 * level / 6.0; &lt;br /&gt;
  fill (tt, 45, 255);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3856</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3856"/>
				<updated>2010-05-20T23:06:47Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project that would require only a computer, a microphone, and a mouse. With these three components we could build a graffiti drawing program. We later became interested in drawing with a webcam as well, so we replaced the mouse with the webcam.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, a person needs a webcam, a microphone, and a computer running Processing. While the sketch runs, head tracking takes the place of the mouse, and speaking or blowing into the microphone makes the program draw.&lt;br /&gt;
&lt;br /&gt;
In short, speaking while moving your head lets you draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and functioned reliably on demand. After discussing it with the class, we agreed that the design is stronger when it reacts to volume level rather than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where it would react to people walking past. It would also be worth fine-tuning it for a larger array of screens with smaller drawings, so the piece would not need to be reset after brief use; this would allow more people to interact with it for longer.&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Video&amp;#039;&amp;#039;&amp;#039;===&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=vmtfh574R3o Video Documentation]&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Code Audio Level Based&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m*=150;&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill (tt, 0, 116);//tt, 0, 116&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Code Audio Frequency Based&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
FFT fft;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
  fft = new FFT(in.bufferSize(), in.sampleRate());&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load the frontal-face Haar cascade: &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection on the current frame&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  fft.window(FFT.HAMMING);&lt;br /&gt;
  fft.forward(in.mix);  // analyze the current audio buffer before reading bands&lt;br /&gt;
  for(int i = 0; i &amp;lt; fft.specSize(); i++)&lt;br /&gt;
  {&lt;br /&gt;
    // track the loudest frequency band above a noise threshold&lt;br /&gt;
    if (fft.getBand(i) &amp;gt; loudestFreqAmp &amp;amp;&amp;amp; fft.getBand(i) &amp;gt; 10)&lt;br /&gt;
    {&lt;br /&gt;
      loudestFreqAmp = fft.getBand(i);&lt;br /&gt;
      loudestFreq = i * 4;&lt;br /&gt;
&lt;br /&gt;
      // draw the circle splatter at the tracked face position&lt;br /&gt;
      drawCircles(posX, posY, (int)loudestFreqAmp, 10);&lt;br /&gt;
&lt;br /&gt;
      timerCounter = 0;&lt;br /&gt;
      System.out.println(loudestFreq + &amp;quot;---&amp;quot; + loudestFreqAmp);&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  loudestFreqAmp = 0;&lt;br /&gt;
&lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, int radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 116 * level / 6.0; &lt;br /&gt;
  fill (tt, 45, 255);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3834</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3834"/>
				<updated>2010-05-20T21:46:17Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Audio Frequency Based */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project that would require a computer, a microphone, and a mouse. With these three components we could build a graffiti drawing program. Later we became interested in using the webcam as the drawing input, so we replaced the mouse with the webcam.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, the user needs a webcam, a microphone, and a computer running Processing. The sketch uses head tracking in place of the mouse, and the program draws when the user speaks or blows into the microphone.&lt;br /&gt;
&lt;br /&gt;
In short, speaking while moving your head lets you draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and functioned under demand. After discussion with the class, we agreed that the design is stronger reacting to volume level than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where people could walk past it and have it react. It would also be worth fine-tuning it for a larger array of screens with smaller drawings, so that it would not need to be reset after a short period of use; this would let more people interact with it for longer.&lt;br /&gt;
&lt;br /&gt;
Images coming soon.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Code Audio Level Based&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load the frontal-face Haar cascade: &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection on the current frame&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m *= 150;  // scale the peak amplitude to a drawable radius&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill(tt, 0, 116);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Code Audio Frequency Based&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
FFT fft;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
  fft = new FFT(in.bufferSize(), in.sampleRate());&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load the frontal-face Haar cascade: &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection on the current frame&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  fft.window(FFT.HAMMING);&lt;br /&gt;
  fft.forward(in.mix);  // analyze the current audio buffer before reading bands&lt;br /&gt;
  for(int i = 0; i &amp;lt; fft.specSize(); i++)&lt;br /&gt;
  {&lt;br /&gt;
    // track the loudest frequency band above a noise threshold&lt;br /&gt;
    if (fft.getBand(i) &amp;gt; loudestFreqAmp &amp;amp;&amp;amp; fft.getBand(i) &amp;gt; 10)&lt;br /&gt;
    {&lt;br /&gt;
      loudestFreqAmp = fft.getBand(i);&lt;br /&gt;
      loudestFreq = i * 4;&lt;br /&gt;
&lt;br /&gt;
      // draw the circle splatter at the tracked face position&lt;br /&gt;
      drawCircles(posX, posY, (int)loudestFreqAmp, 10);&lt;br /&gt;
&lt;br /&gt;
      timerCounter = 0;&lt;br /&gt;
      System.out.println(loudestFreq + &amp;quot;---&amp;quot; + loudestFreqAmp);&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  loudestFreqAmp = 0;&lt;br /&gt;
&lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, int radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 116 * level / 6.0; &lt;br /&gt;
  fill (tt, 45, 255);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3833</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3833"/>
				<updated>2010-05-20T21:45:29Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project that would require a computer, a microphone, and a mouse. With these three components we could build a graffiti drawing program. Later we became interested in using the webcam as the drawing input, so we replaced the mouse with the webcam.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, the user needs a webcam, a microphone, and a computer running Processing. The sketch uses head tracking in place of the mouse, and the program draws when the user speaks or blows into the microphone.&lt;br /&gt;
&lt;br /&gt;
In short, speaking while moving your head lets you draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and functioned under demand. After discussion with the class, we agreed that the design is stronger reacting to volume level than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where people could walk past it and have it react. It would also be worth fine-tuning it for a larger array of screens with smaller drawings, so that it would not need to be reset after a short period of use; this would let more people interact with it for longer.&lt;br /&gt;
&lt;br /&gt;
Images coming soon.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Code Audio Level Based&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load the frontal-face Haar cascade: &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection on the current frame&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m *= 150;  // scale the peak amplitude to a drawable radius&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill(tt, 0, 116);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Audio Frequency Based&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
FFT fft;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
  fft = new FFT(in.bufferSize(), in.sampleRate());&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load the frontal-face Haar cascade: &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection on the current frame&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  fft.window(FFT.HAMMING);&lt;br /&gt;
  fft.forward(in.mix);  // analyze the current audio buffer before reading bands&lt;br /&gt;
  for(int i = 0; i &amp;lt; fft.specSize(); i++)&lt;br /&gt;
  {&lt;br /&gt;
    // track the loudest frequency band above a noise threshold&lt;br /&gt;
    if (fft.getBand(i) &amp;gt; loudestFreqAmp &amp;amp;&amp;amp; fft.getBand(i) &amp;gt; 10)&lt;br /&gt;
    {&lt;br /&gt;
      loudestFreqAmp = fft.getBand(i);&lt;br /&gt;
      loudestFreq = i * 4;&lt;br /&gt;
&lt;br /&gt;
      // draw the circle splatter at the tracked face position&lt;br /&gt;
      drawCircles(posX, posY, (int)loudestFreqAmp, 10);&lt;br /&gt;
&lt;br /&gt;
      timerCounter = 0;&lt;br /&gt;
      System.out.println(loudestFreq + &amp;quot;---&amp;quot; + loudestFreqAmp);&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  loudestFreqAmp = 0;&lt;br /&gt;
&lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, int radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 116 * level / 6.0; &lt;br /&gt;
  fill (tt, 45, 255);&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3832</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3832"/>
				<updated>2010-05-20T21:44:04Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project that would require a computer, a microphone, and a mouse. With these three components we could build a graffiti drawing program. Later we became interested in using the webcam as the drawing input, so we replaced the mouse with the webcam.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, the user needs a webcam, a microphone, and a computer running Processing. The sketch uses head tracking in place of the mouse, and the program draws when the user speaks or blows into the microphone.&lt;br /&gt;
&lt;br /&gt;
In short, speaking while moving your head lets you draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and functioned under demand. After discussion with the class, we agreed that the design is stronger reacting to volume level than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where people could walk past it and have it react. It would also be worth fine-tuning it for a larger array of screens with smaller drawings, so that it would not need to be reset after a short period of use; this would let more people interact with it for longer.&lt;br /&gt;
&lt;br /&gt;
Images coming soon.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load the frontal-face Haar cascade: &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m*=150;&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill (tt, 0, 116);//tt, 0, 116&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3831</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3831"/>
				<updated>2010-05-20T21:43:38Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project requiring only a computer, a microphone, and a mouse: a graffiti-style drawing program.  We later became interested in drawing with the webcam as well, so we replaced the mouse element with webcam head tracking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, the individual needs a webcam, a microphone, and a computer running Processing.  While the sketch runs, head tracking replaces the mouse, and speaking or blowing into the microphone makes the program draw. &lt;br /&gt;
&lt;br /&gt;
Basically... speaking and moving your head simultaneously will allow you to draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and held up under demand. After discussing it with the class, we agreed that the design is stronger reacting to volume level than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where it would react to people walking past. Fine-tuning it to run on a larger set of screens with smaller drawings would keep it from needing a reset after a short period of use, allowing more people to interact, and for longer. &lt;br /&gt;
&lt;br /&gt;
Images coming soon.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m*=150;&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill (tt, 0, 116);//tt, 0, 116&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3830</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3830"/>
				<updated>2010-05-20T21:42:29Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project requiring only a computer, a microphone, and a mouse: a graffiti-style drawing program.  We later became interested in drawing with the webcam as well, so we replaced the mouse element with webcam head tracking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, the individual needs a webcam, a microphone, and a computer running Processing.  While the sketch runs, head tracking replaces the mouse, and speaking or blowing into the microphone makes the program draw. &lt;br /&gt;
&lt;br /&gt;
Basically... speaking and moving your head simultaneously will allow you to draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and held up under demand. After discussing it with the class, we agreed that the design is stronger reacting to volume level than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where it would react to people walking past. Fine-tuning it to run on a larger set of screens with smaller drawings would keep it from needing a reset after a short period of use, allowing more people to interact, and for longer. &lt;br /&gt;
&lt;br /&gt;
Images coming soon.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
import hypermedia.video.*;&lt;br /&gt;
import ddf.minim.*;&lt;br /&gt;
import ddf.minim.analysis.*;&lt;br /&gt;
import ddf.minim.signals.*;&lt;br /&gt;
&lt;br /&gt;
Minim minim;&lt;br /&gt;
AudioInput in;&lt;br /&gt;
&lt;br /&gt;
OpenCV opencv; &lt;br /&gt;
&lt;br /&gt;
// contrast/brightness values&lt;br /&gt;
int contrast_value    = 0;&lt;br /&gt;
int brightness_value  = 0; &lt;br /&gt;
&lt;br /&gt;
float loudestFreqAmp = 0;&lt;br /&gt;
float loudestFreq = 0;&lt;br /&gt;
int timerCounter = 0;&lt;br /&gt;
&lt;br /&gt;
void setup()&lt;br /&gt;
{&lt;br /&gt;
  size(640, 480, P2D);&lt;br /&gt;
  frameRate(30);&lt;br /&gt;
  noCursor();&lt;br /&gt;
  minim = new Minim(this);&lt;br /&gt;
  minim.debugOn();&lt;br /&gt;
  background(255);&lt;br /&gt;
  noStroke();&lt;br /&gt;
  // get a line in from Minim, default bit depth is 16&lt;br /&gt;
  in = minim.getLineIn(Minim.STEREO, 1024);&lt;br /&gt;
&lt;br /&gt;
  opencv = new OpenCV( this );&lt;br /&gt;
  opencv.capture( width, height );                   // open video stream&lt;br /&gt;
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&amp;gt; front face detection : &amp;quot;haarcascade_frontalface_alt.xml&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
void draw()&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  // grab a new frame&lt;br /&gt;
  // and convert to gray&lt;br /&gt;
  opencv.read();&lt;br /&gt;
  opencv.convert( GRAY );&lt;br /&gt;
  opencv.contrast( contrast_value );&lt;br /&gt;
  opencv.brightness( brightness_value );&lt;br /&gt;
&lt;br /&gt;
  // run face detection&lt;br /&gt;
  java.awt.Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );&lt;br /&gt;
&lt;br /&gt;
  // display the image&lt;br /&gt;
  //image( opencv.image(), 0, 0 );&lt;br /&gt;
&lt;br /&gt;
  // draw face area(s)&lt;br /&gt;
  //  noFill();&lt;br /&gt;
  //  stroke(255,0,0);&lt;br /&gt;
  //  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
  //    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); &lt;br /&gt;
  //  }&lt;br /&gt;
&lt;br /&gt;
  int posX = 0;&lt;br /&gt;
  int posY = 0; &lt;br /&gt;
&lt;br /&gt;
  for( int i=0; i&amp;lt;faces.length; i++ ) {&lt;br /&gt;
    posX = faces[i].x; &lt;br /&gt;
    posY = faces[i].y; &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  float m = 0;&lt;br /&gt;
  for(int i = 0; i &amp;lt; in.bufferSize() - 1; i++) {&lt;br /&gt;
    if ( abs(in.mix.get(i)) &amp;gt; m ) {&lt;br /&gt;
      m = abs(in.mix.get(i));&lt;br /&gt;
      System.out.println(in.mix.get(i));&lt;br /&gt;
       &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  m*=150;&lt;br /&gt;
  drawCircles(posX, posY, m, 10);&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
  if(timerCounter &amp;gt;= 20)&lt;br /&gt;
  {&lt;br /&gt;
    background(255);&lt;br /&gt;
    timerCounter = 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  timerCounter++;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void keyPressed() {&lt;br /&gt;
  if (key == &amp;#039;a&amp;#039;) {&lt;br /&gt;
    background(255);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Circle splatter machine&lt;br /&gt;
void drawCircles(float x, float y, float radius, int level)&lt;br /&gt;
{&lt;br /&gt;
  noStroke();&lt;br /&gt;
  float tt = 200 * level / 6.0; &lt;br /&gt;
  fill (tt, 0, 116);//tt, 0, 116&lt;br /&gt;
  ellipse(x, y, radius*2, radius*2);&lt;br /&gt;
  if (level &amp;gt; 1) {&lt;br /&gt;
    level = level - 1;&lt;br /&gt;
    int num = int (random(2, 5));&lt;br /&gt;
    for(int i=0; i&amp;lt;num; i++) { &lt;br /&gt;
      float a = random(0, TWO_PI);&lt;br /&gt;
      float nx = x + cos(a) * 6.0 * level; &lt;br /&gt;
      float ny = y + sin(a) * 6.0 * level; &lt;br /&gt;
      drawCircles(nx, ny, radius/2, level); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void stop()&lt;br /&gt;
{&lt;br /&gt;
  // always close Minim audio classes when you are done with them&lt;br /&gt;
  in.close();&lt;br /&gt;
  minim.stop();&lt;br /&gt;
&lt;br /&gt;
  super.stop();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3829</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3829"/>
				<updated>2010-05-20T15:49:53Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Course page&lt;br /&gt;
[[Time and Process Based Digital Media]]&lt;br /&gt;
&lt;br /&gt;
Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Midterm Project&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
&lt;br /&gt;
Sound Sketch with Emilio Marcelino, and Ben Brickley&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Final Project&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Coming soon to a wiki near you (this one). &lt;br /&gt;
&lt;br /&gt;
-G&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3828</id>
		<title>Sound Sketch - Emilio Marcelino, Greg Parsons, and Ben Brickley</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Sound_Sketch_-_Emilio_Marcelino,_Greg_Parsons,_and_Ben_Brickley&amp;diff=3828"/>
				<updated>2010-05-20T15:46:52Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;#039;&amp;#039;&amp;#039;Motivation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sticking to our primary interests in color, movement, and scale, we decided to create a project requiring only a computer, a microphone, and a mouse: a graffiti-style drawing program.  We later became interested in drawing with the webcam as well, so we replaced the mouse element with webcam head tracking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Interaction&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
To interact with our piece, the individual needs a webcam, a microphone, and a computer running Processing.  While the sketch runs, head tracking replaces the mouse, and speaking or blowing into the microphone makes the program draw. &lt;br /&gt;
&lt;br /&gt;
Basically... speaking and moving your head simultaneously will allow you to draw within Processing.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Function&amp;#039;&amp;#039;&amp;#039;==&lt;br /&gt;
&lt;br /&gt;
We will add the webcam head tracking sketch to the original microphone/mouse sketch to create a microphone and webcam drawing program.&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Visualization&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
http://imgur.com/HEDZl.jpg&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Documentation&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
The project worked well as a display piece and held up under demand. After discussing it with the class, we agreed that the design is stronger reacting to volume level than to frequency. It would be interesting to develop the project further and install it in a gallery setting, where it would react to people walking past. Fine-tuning it to run on a larger set of screens with smaller drawings would keep it from needing a reset after a short period of use, allowing more people to interact, and for longer. &lt;br /&gt;
&lt;br /&gt;
Images coming soon.&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Motion_Animation_-_Greg_Parsons&amp;diff=3707</id>
		<title>Motion Animation - Greg Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Motion_Animation_-_Greg_Parsons&amp;diff=3707"/>
				<updated>2010-04-29T22:40:12Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;----&lt;br /&gt;
&lt;br /&gt;
Midterm Project&lt;br /&gt;
&lt;br /&gt;
;Motivation:&lt;br /&gt;
:	I was interested in the demonstrations of computer vision presented during the first week of lecture. Trying the in-class samples of these technologies, I was impressed by how I could manipulate a sketch using frame differencing. I would like to create a project that is manipulated by the viewer&amp;#039;s movements.&lt;br /&gt;
;Interaction:&lt;br /&gt;
:	The project will be driven by a webcam, recording movement in the scene and manipulating a graphical image based on the data collected from the difference between frames. In short, the project reacts to the viewer&amp;#039;s movements. &lt;br /&gt;
;Function:&lt;br /&gt;
:	My programming experience is limited to the past two quarters of instruction, but I would like to achieve smooth movement in the image using some form of algorithmic processing.  &lt;br /&gt;
;Visualization:&lt;br /&gt;
:	I have a semi-working project that I created this weekend using a visual example from OpenProcessing.org that I will show in class if needed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Code for Demo:&lt;br /&gt;
&lt;br /&gt;
Copy this into a new processing project to test.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * Greg Parsons VIS145B Midterm Test&lt;br /&gt;
 *  &lt;br /&gt;
 * Project Uses Frame Differencing to detect motion, sees it and draws a picture&lt;br /&gt;
 * &lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
//videocapture&lt;br /&gt;
import processing.video.*;&lt;br /&gt;
float x, y;&lt;br /&gt;
int numPixels;&lt;br /&gt;
int[] previousFrame;&lt;br /&gt;
int[] diffFrame;&lt;br /&gt;
Capture video;&lt;br /&gt;
&lt;br /&gt;
//treecursion&lt;br /&gt;
float curlx = 0; &lt;br /&gt;
float curly = 0; &lt;br /&gt;
float f = sqrt(2)/2.; &lt;br /&gt;
float deley = 20; &lt;br /&gt;
float growth = 0; &lt;br /&gt;
float growthTarget = 0; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
int movementDiff = constrain(10, 5, 15);&lt;br /&gt;
int movementDiffNegative = -movementDiff; // mirror of movementDiff (the original constrain(10, -5, -10) had its low/high bounds reversed)&lt;br /&gt;
&lt;br /&gt;
void setup() {&lt;br /&gt;
  //video capture&lt;br /&gt;
  size(640, 480, P2D); //P2D from treecursion&lt;br /&gt;
  video = new Capture(this, width, height, 24);&lt;br /&gt;
  numPixels = video.width * video.height;&lt;br /&gt;
  previousFrame = new int[numPixels];&lt;br /&gt;
  diffFrame = new int[numPixels];&lt;br /&gt;
  loadPixels();&lt;br /&gt;
  smooth();&lt;br /&gt;
  //treecursion&lt;br /&gt;
  addMouseWheelListener(new java.awt.event.MouseWheelListener() {  &lt;br /&gt;
    public void mouseWheelMoved(java.awt.event.MouseWheelEvent evt) {  &lt;br /&gt;
      mouseWheel(evt.getWheelRotation()); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  );&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void draw() {&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  //video capture&lt;br /&gt;
  if (video.available()) {&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // When using video to manipulate the screen, use video.available() and&lt;br /&gt;
    // video.read() inside the draw() method so that it&amp;#039;s safe to draw to the screen&lt;br /&gt;
    video.read(); // Read the new frame from the camera&lt;br /&gt;
    video.loadPixels(); // Make its pixels[] array available&lt;br /&gt;
&lt;br /&gt;
    int movementSum = 0; // Amount of movement in the frame&lt;br /&gt;
    for (int i = 0; i &amp;lt; numPixels; i++) { // For each pixel in the video frame...&lt;br /&gt;
      color currColor = video.pixels[i];&lt;br /&gt;
      color prevColor = previousFrame[i];&lt;br /&gt;
      // Extract the red, green, and blue components from current pixel&lt;br /&gt;
      int currR = (currColor &amp;gt;&amp;gt; 16) &amp;amp; 0xFF; // Like red(), but faster&lt;br /&gt;
      int currG = (currColor &amp;gt;&amp;gt; 8) &amp;amp; 0xFF;&lt;br /&gt;
      int currB = currColor &amp;amp; 0xFF;&lt;br /&gt;
      // Extract red, green, and blue components from previous pixel&lt;br /&gt;
      int prevR = (prevColor &amp;gt;&amp;gt; 16) &amp;amp; 0xFF;&lt;br /&gt;
      int prevG = (prevColor &amp;gt;&amp;gt; 8) &amp;amp; 0xFF;&lt;br /&gt;
      int prevB = prevColor &amp;amp; 0xFF;&lt;br /&gt;
      // Compute the difference of the red, green, and blue values&lt;br /&gt;
      int diffR = abs(currR - prevR);&lt;br /&gt;
      int diffG = abs(currG - prevG);&lt;br /&gt;
      int diffB = abs(currB - prevB);&lt;br /&gt;
      // Add these differences to the running tally&lt;br /&gt;
      movementSum += diffR + diffG + diffB;&lt;br /&gt;
      // Render the difference image to the screen&lt;br /&gt;
      //diffFrame = color(diffR, diffG, diffB);&lt;br /&gt;
      diffFrame[i] = round(sqrt(diffR*diffR + diffG*diffG + diffB*diffB));&lt;br /&gt;
      //     pixels[i] = currColor;&lt;br /&gt;
      // pixels[i] = color(diffFrame[i]);&lt;br /&gt;
      // The following line is much faster, but more confusing to read&lt;br /&gt;
      //pixels[i] = 0xff000000 | (diffR &amp;lt;&amp;lt; 16) | (diffG &amp;lt;&amp;lt; 8) | diffB;&lt;br /&gt;
      // Save the current color into the &amp;#039;previous&amp;#039; buffer&lt;br /&gt;
      previousFrame[i] = currColor;&lt;br /&gt;
    }&lt;br /&gt;
    // To prevent flicker from frames that are all black (no movement),&lt;br /&gt;
    // only update the screen if the image has changed.&lt;br /&gt;
    if (movementSum &amp;gt; 0) {&lt;br /&gt;
      updatePixels();&lt;br /&gt;
      //   println(movementSum); // Print the total amount of movement to the console&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  int v = round(x+y*width);&lt;br /&gt;
  if (diffFrame[v] &amp;gt; 5)&lt;br /&gt;
  {&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
    if ((diffFrame[v] &amp;gt; 5) &amp;amp;&amp;amp; (diffFrame[v] &amp;lt; 15))&lt;br /&gt;
    {&lt;br /&gt;
      movementDiff = movementDiff + 3;&lt;br /&gt;
    }&lt;br /&gt;
    else if ((diffFrame[v] &amp;gt; 15) &amp;amp;&amp;amp; (diffFrame[v] &amp;lt; 50))&lt;br /&gt;
    {&lt;br /&gt;
      movementDiff = movementDiff - 3;&lt;br /&gt;
    }    &lt;br /&gt;
&lt;br /&gt;
    movementDiffNegative = (movementDiff * -1);&lt;br /&gt;
    &lt;br /&gt;
    System.out.println(&amp;quot;the movement diff is&amp;quot; + movementDiff + &amp;quot;the movementDiffNegative is&amp;quot; + movementDiffNegative);&lt;br /&gt;
    //treecursion&lt;br /&gt;
    background(250); &lt;br /&gt;
    stroke(0); &lt;br /&gt;
    curlx += (radians(360./height*movementDiff)-curlx)/deley; &lt;br /&gt;
    curly += (radians(360./height*movementDiffNegative)-curly)/deley; &lt;br /&gt;
    translate(width/2,height/3*2); &lt;br /&gt;
    line(0,0,0,height/2); &lt;br /&gt;
    branch(height/4.,17); &lt;br /&gt;
    growth += (growthTarget/10-growth+1.)/deley; &lt;br /&gt;
    println(diffFrame[v]);&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void mouseWheel(int delta) &lt;br /&gt;
{ &lt;br /&gt;
  growthTarget += delta; &lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
//treecursion &lt;br /&gt;
void branch(float len,int num) &lt;br /&gt;
{ &lt;br /&gt;
  len *= f; &lt;br /&gt;
  num -= 1; &lt;br /&gt;
  if((len &amp;gt; 1) &amp;amp;&amp;amp; (num &amp;gt; 0)) &lt;br /&gt;
  { &lt;br /&gt;
    pushMatrix(); &lt;br /&gt;
    rotate(curlx); &lt;br /&gt;
    line(0,0,0,-len); &lt;br /&gt;
    translate(0,-len); &lt;br /&gt;
    branch(len,num); &lt;br /&gt;
    popMatrix(); &lt;br /&gt;
&lt;br /&gt;
    //    pushMatrix(); &lt;br /&gt;
    //    line(0,0,0,-len); &lt;br /&gt;
    //    translate(0,-len); &lt;br /&gt;
    //    branch(len); &lt;br /&gt;
    //    popMatrix(); &lt;br /&gt;
    len *= growth; &lt;br /&gt;
    pushMatrix(); &lt;br /&gt;
    rotate(curlx-curly); &lt;br /&gt;
    line(0,0,0,-len); &lt;br /&gt;
    translate(0,-len); &lt;br /&gt;
    branch(len,num); &lt;br /&gt;
    popMatrix(); &lt;br /&gt;
    //len /= growth; &lt;br /&gt;
  } &lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Motion_Animation_-_Greg_Parsons&amp;diff=3705</id>
		<title>Motion Animation - Greg Parsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Motion_Animation_-_Greg_Parsons&amp;diff=3705"/>
				<updated>2010-04-29T22:38:56Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: New page:  ----  Midterm Project  ;Motivation: :	I was interested in the demonstrations of  computer vision presented during the first week of lecture. Using the in class     sampling of the technol...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Midterm Project&lt;br /&gt;
&lt;br /&gt;
;Motivation:&lt;br /&gt;
:	I was interested in the demonstrations of computer vision presented during the first week of lecture. Trying the in-class samples of these technologies, I was impressed by how I could manipulate a sketch using frame differencing. I would like to create a project that is manipulated by the viewer&amp;#039;s movements.&lt;br /&gt;
;Interaction:&lt;br /&gt;
:	The project will be driven by a webcam, recording movement in the scene and manipulating a graphical image based on the data collected from the difference between frames. In short, the project reacts to the viewer&amp;#039;s movements. &lt;br /&gt;
;Function:&lt;br /&gt;
:	My programming experience is limited to the past two quarters of instruction, but I would like to achieve smooth movement in the image using some form of algorithmic processing.  &lt;br /&gt;
;Visualization:&lt;br /&gt;
:	I have a semi-working project that I created this weekend using a visual example from OpenProcessing.org that I will show in class if needed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Code for Demo:&lt;br /&gt;
&lt;br /&gt;
Copy this into a new processing project to test.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;/**&lt;br /&gt;
 * Greg Parsons VIS145B Midterm Test&lt;br /&gt;
 *  &lt;br /&gt;
 * Project Uses Frame Differencing to detect motion, sees it and draws a picture&lt;br /&gt;
 * &lt;br /&gt;
 */&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
//videocapture&lt;br /&gt;
import processing.video.*;&lt;br /&gt;
float x, y;&lt;br /&gt;
int numPixels;&lt;br /&gt;
int[] previousFrame;&lt;br /&gt;
int[] diffFrame;&lt;br /&gt;
Capture video;&lt;br /&gt;
&lt;br /&gt;
//treecursion&lt;br /&gt;
float curlx = 0; &lt;br /&gt;
float curly = 0; &lt;br /&gt;
float f = sqrt(2)/2.; &lt;br /&gt;
float deley = 20; &lt;br /&gt;
float growth = 0; &lt;br /&gt;
float growthTarget = 0; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
int movementDiff = 10;          // curl amount driving the tree; nudged up or down each frame by the motion reading&lt;br /&gt;
int movementDiffNegative = -10; // mirror of movementDiff, recomputed every frame in draw()&lt;br /&gt;
&lt;br /&gt;
void setup() {&lt;br /&gt;
  //video capture&lt;br /&gt;
  size(640, 480, P2D); //P2D from treecursion&lt;br /&gt;
  video = new Capture(this, width, height, 24);&lt;br /&gt;
  numPixels = video.width * video.height;&lt;br /&gt;
  previousFrame = new int[numPixels];&lt;br /&gt;
  diffFrame = new int[numPixels];&lt;br /&gt;
  loadPixels();&lt;br /&gt;
  smooth();&lt;br /&gt;
  //treecursion&lt;br /&gt;
  addMouseWheelListener(new java.awt.event.MouseWheelListener() {  &lt;br /&gt;
    public void mouseWheelMoved(java.awt.event.MouseWheelEvent evt) {  &lt;br /&gt;
      mouseWheel(evt.getWheelRotation()); &lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
  );&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void draw() {&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  //video capture&lt;br /&gt;
  if (video.available()) {&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    // When using video to manipulate the screen, use video.available() and&lt;br /&gt;
    // video.read() inside the draw() method so that it&amp;#039;s safe to draw to the screen&lt;br /&gt;
    video.read(); // Read the new frame from the camera&lt;br /&gt;
    video.loadPixels(); // Make its pixels[] array available&lt;br /&gt;
&lt;br /&gt;
    int movementSum = 0; // Amount of movement in the frame&lt;br /&gt;
    for (int i = 0; i &amp;lt; numPixels; i++) { // For each pixel in the video frame...&lt;br /&gt;
      color currColor = video.pixels[i];&lt;br /&gt;
      color prevColor = previousFrame[i];&lt;br /&gt;
      // Extract the red, green, and blue components from current pixel&lt;br /&gt;
      int currR = (currColor &amp;gt;&amp;gt; 16) &amp;amp; 0xFF; // Like red(), but faster&lt;br /&gt;
      int currG = (currColor &amp;gt;&amp;gt; 8) &amp;amp; 0xFF;&lt;br /&gt;
      int currB = currColor &amp;amp; 0xFF;&lt;br /&gt;
      // Extract red, green, and blue components from previous pixel&lt;br /&gt;
      int prevR = (prevColor &amp;gt;&amp;gt; 16) &amp;amp; 0xFF;&lt;br /&gt;
      int prevG = (prevColor &amp;gt;&amp;gt; 8) &amp;amp; 0xFF;&lt;br /&gt;
      int prevB = prevColor &amp;amp; 0xFF;&lt;br /&gt;
      // Compute the difference of the red, green, and blue values&lt;br /&gt;
      int diffR = abs(currR - prevR);&lt;br /&gt;
      int diffG = abs(currG - prevG);&lt;br /&gt;
      int diffB = abs(currB - prevB);&lt;br /&gt;
      // Add these differences to the running tally&lt;br /&gt;
      movementSum += diffR + diffG + diffB;&lt;br /&gt;
      // Render the difference image to the screen&lt;br /&gt;
      //diffFrame = color(diffR, diffG, diffB);&lt;br /&gt;
      diffFrame[i] = round(sqrt(diffR*diffR + diffG*diffG + diffB*diffB));&lt;br /&gt;
      //     pixels[i] = currColor;&lt;br /&gt;
      // pixels[i] = color(diffFrame[i]);&lt;br /&gt;
      // The following line is much faster, but more confusing to read&lt;br /&gt;
      //pixels[i] = 0xff000000 | (diffR &amp;lt;&amp;lt; 16) | (diffG &amp;lt;&amp;lt; 8) | diffB;&lt;br /&gt;
      // Save the current color into the &amp;#039;previous&amp;#039; buffer&lt;br /&gt;
      previousFrame[i] = currColor;&lt;br /&gt;
    }&lt;br /&gt;
    // To prevent flicker from frames that are all black (no movement),&lt;br /&gt;
    // only update the screen if the image has changed.&lt;br /&gt;
    if (movementSum &amp;gt; 0) {&lt;br /&gt;
      updatePixels();&lt;br /&gt;
      //   println(movementSum); // Print the total amount of movement to the console&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  // x and y are declared but never updated elsewhere, so anchor the sampled&lt;br /&gt;
  // pixel at the center of the frame rather than defaulting to pixel 0&lt;br /&gt;
  x = width/2;&lt;br /&gt;
  y = height/2;&lt;br /&gt;
  int v = round(x + y*width);&lt;br /&gt;
  if (diffFrame[v] &amp;gt; 5)&lt;br /&gt;
  {&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
    if ((diffFrame[v] &amp;gt; 5) &amp;amp;&amp;amp; (diffFrame[v] &amp;lt; 15))&lt;br /&gt;
    {&lt;br /&gt;
      movementDiff = movementDiff + 3;&lt;br /&gt;
    }&lt;br /&gt;
    else if ((diffFrame[v] &amp;gt; 15) &amp;amp;&amp;amp; (diffFrame[v] &amp;lt; 50))&lt;br /&gt;
    {&lt;br /&gt;
      movementDiff = movementDiff - 3;&lt;br /&gt;
    }    &lt;br /&gt;
&lt;br /&gt;
    movementDiffNegative = (movementDiff * -1);&lt;br /&gt;
    &lt;br /&gt;
    System.out.println(&amp;quot;the movement diff is &amp;quot; + movementDiff + &amp;quot;, the movementDiffNegative is &amp;quot; + movementDiffNegative);&lt;br /&gt;
    //treecursion&lt;br /&gt;
    background(250); &lt;br /&gt;
    stroke(0); &lt;br /&gt;
    curlx += (radians(360./height*movementDiff)-curlx)/deley; &lt;br /&gt;
    curly += (radians(360./height*movementDiffNegative)-curly)/deley; &lt;br /&gt;
    translate(width/2,height/3*2); &lt;br /&gt;
    line(0,0,0,height/2); &lt;br /&gt;
    branch(height/4.,17); &lt;br /&gt;
    growth += (growthTarget/10-growth+1.)/deley; &lt;br /&gt;
    println(diffFrame[v]);&lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void mouseWheel(int delta) &lt;br /&gt;
{ &lt;br /&gt;
  growthTarget += delta; &lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
//treecursion &lt;br /&gt;
void branch(float len,int num) &lt;br /&gt;
{ &lt;br /&gt;
  len *= f; &lt;br /&gt;
  num -= 1; &lt;br /&gt;
  if((len &amp;gt; 1) &amp;amp;&amp;amp; (num &amp;gt; 0)) &lt;br /&gt;
  { &lt;br /&gt;
    pushMatrix(); &lt;br /&gt;
    rotate(curlx); &lt;br /&gt;
    line(0,0,0,-len); &lt;br /&gt;
    translate(0,-len); &lt;br /&gt;
    branch(len,num); &lt;br /&gt;
    popMatrix(); &lt;br /&gt;
&lt;br /&gt;
    //    pushMatrix(); &lt;br /&gt;
    //    line(0,0,0,-len); &lt;br /&gt;
    //    translate(0,-len); &lt;br /&gt;
    //    branch(len); &lt;br /&gt;
    //    popMatrix(); &lt;br /&gt;
    len *= growth; &lt;br /&gt;
    pushMatrix(); &lt;br /&gt;
    rotate(curlx-curly); &lt;br /&gt;
    line(0,0,0,-len); &lt;br /&gt;
    translate(0,-len); &lt;br /&gt;
    branch(len,num); &lt;br /&gt;
    popMatrix(); &lt;br /&gt;
    //len /= growth; &lt;br /&gt;
  } &lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3704</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3704"/>
				<updated>2010-04-29T22:37:27Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Course page&lt;br /&gt;
[[Time and Process Based Digital Media]]&lt;br /&gt;
&lt;br /&gt;
Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
-G&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3702</id>
		<title>Classes/2010/VIS145B</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3702"/>
				<updated>2010-04-29T22:36:58Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Midterm Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Time and Process Based Digital Media II ==&lt;br /&gt;
Time: Thursdays 3:30-6:20pm, VAF 228&lt;br /&gt;
&lt;br /&gt;
This class is an advanced study and portfolio project course centered on the use of hardware and software to create interactive and time-based art.  These projects can take many forms—interactive installations, dynamic visualizations/sonifications, printed renderings—chosen by the students.  This will not be a course of technical instruction—rather we will consider technical and conceptual issues in tandem, supplementing discussions and activities with specific technical instruction where necessary.  There is a strong emphasis on the development and articulation of personal directions of research by the students in the course. &lt;br /&gt;
&lt;br /&gt;
I would like to split the reading/homework responsibility between the two parts of the class.  In the first half of the term I will present a series of works and readings covering my particular interests--the intersections of social performance, embodied experience, and cognition.  In the latter half of the class (after the midterm) you all will do the presentations on topics of your choosing.  Working individually or in small groups, you will provide us with some conceptual provocation (reading material) covering topics you intend to engage with in your final, and you will lead a discussion on technical and conceptual issues.  Reading and critical writing, in response to texts and works you present and those I present, are integral to this course.&lt;br /&gt;
&lt;br /&gt;
The schedule is a living document and will be revised over the period of the course.&lt;br /&gt;
&lt;br /&gt;
== Instructor ==&lt;br /&gt;
Robert Twomey&lt;br /&gt;
&lt;br /&gt;
rtwomey@ucsd.edu&lt;br /&gt;
*http://roberttwomey.com&lt;br /&gt;
*http://experimentalgamelab.net&lt;br /&gt;
*http://crca.ucsd.edu&lt;br /&gt;
&lt;br /&gt;
Office Hours: Wednesday 3-4pm, Atkinson Hall Rm 1601 (CRCA research neighborhood).  Please e-mail me if you plan to attend.&lt;br /&gt;
&lt;br /&gt;
== Grading ==&lt;br /&gt;
*Midterm Project - 30%&lt;br /&gt;
*Final Project - 40%&lt;br /&gt;
*Presentations - 10%&lt;br /&gt;
*Readings - 10%&lt;br /&gt;
*Participation - 10%&lt;br /&gt;
&lt;br /&gt;
=== Presentations ===&lt;br /&gt;
(1) Short presentation on your work in the second week of class.  This should be a statement of your interests, direction, and goals with media art.  Present examples from your own work which you feel strongly about, and which best represent your interests and trajectory.  Present examples of other artists&amp;#039; work that serve as models for the kind of work you would like to make. (5-10 minutes each)&lt;br /&gt;
&lt;br /&gt;
(2) Medium presentation on final projects in the second half of the course (weeks 7-9).  This is the portion of the class where you dictate the reading and the discussion.  If you are presenting on a given week, you need to provide us with a reading 1 week in advance.  We will sign up for those time slots in week 6, just after the midterm. (10-15 minutes)&lt;br /&gt;
&lt;br /&gt;
=== Reading Responses ===&lt;br /&gt;
These are written summaries and critical responses to materials assigned for out-of-class viewing.  Things to consider: What points does the author make?  Do you buy their assumptions or agree with their conclusions?  Reading responses will be printed and turned in to the instructor at the beginning of class.  Generally these should be 1 page long.&lt;br /&gt;
&lt;br /&gt;
=== Projects ===&lt;br /&gt;
Midterm and final projects will be graded on concept, effort, and realization. Formal proposals are a necessary component of the process so take them seriously.  Make the effort to get started early and seek the help you need--we want to see finished, well-considered pieces for the midterm and final. Additionally, you will need to submit documentation of the project after completion which includes images, video, and source code where applicable.  These materials (proposals and documentation) will all be posted to the wiki.&lt;br /&gt;
=== Documentation Policy ===&lt;br /&gt;
*personal wiki page&lt;br /&gt;
*source code on wiki&lt;br /&gt;
*image/video documentation as appropriate. &lt;br /&gt;
*explanatory writing (on intent, motivation, context)&lt;br /&gt;
&lt;br /&gt;
=== Attendance ===&lt;br /&gt;
Attendance is mandatory. Each unexcused absence will drop your final grade one letter.  There are only 10 weeks of class; please come to them all.&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
=== Week 1 - Intro ===&lt;br /&gt;
*Introductions&lt;br /&gt;
*Scope of course, interests, technical possibilities.&lt;br /&gt;
*My work.&lt;br /&gt;
*Watch: We Live In Public.  2009. (excerpts)&lt;br /&gt;
*In class: personal page on wiki. [http://www.trsp.net/teaching/gamemod/ game-mod exercise]. [http://www.trsp.net/teaching/gamemod/gamemod_breakout_source_en.zip download link]&lt;br /&gt;
*Read: [http://www.nyu.edu/projects/xdesign/mainmenu/archive_tangible.html Against Virtualized Information], [http://www.nyu.edu/projects/xdesign/mainmenu/archive_analtictech.html Novel Analytic Techniques], and [http://www.nyu.edu/projects/xdesign/mainmenu/archive_infocounts.html What Information Counts?] by [http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/ Natalie Jeremijenko]. &lt;br /&gt;
*Read: [http://www.yalealumnimagazine.com/issues/2004_03/jeremijenko.html An Engineer for the Avant Garde]&lt;br /&gt;
*Read: [http://www.worldchanging.com/archives/001450.html Natalie Jeremijenko The WorldChanging Interview]&lt;br /&gt;
*Read: [http://tech90s.walkerart.org/nj/transcript/nj_01.html Database Politics and Social Simulations], good background on her earlier artwork.&lt;br /&gt;
&lt;br /&gt;
=== Week 2 - Student Research Interests ===&lt;br /&gt;
*Due: 1 page on Jeremijenko. &lt;br /&gt;
*Presentations on your work.&lt;br /&gt;
*Read: [http://www.flong.com/texts/essays/essay_cvad/ Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers] Golan Levin. &amp;#039;&amp;#039;pay particular attention to part II. ELEMENTARY COMPUTER VISION TECHNIQUES.  we are going to try these in class next week.&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
=== Week 3 - Computer Vision / Human Perception ===&lt;br /&gt;
*Due: Nothing. Read the Golan Levin piece, but no written response.&lt;br /&gt;
*Discuss:&lt;br /&gt;
**Myron Krueger. Videoplace. 1989 [http://www.youtube.com/watch?v=dqZyZrN3Pl0]&lt;br /&gt;
**Text Rain. Camille Utterback &amp;amp; Romy Achituv. 1999. [http://www.youtube.com/watch?v=toWFvXHghDk] [http://www.camilleutterback.com/]&lt;br /&gt;
**Very Nervous System. David Rokeby. 1982-1991. [http://vimeo.com/8120954]&lt;br /&gt;
**Suicide Box.  Bureau of Inverse Technology.  1996. (13:00)&lt;br /&gt;
**Marie Sester. ACCESS.  2003. [http://accessproject.net]&lt;br /&gt;
**Messa di Voce. Golan Levin and Zach Lieberman with Jaap Blonk and Joan La Barbara. 2003.  [http://www.flong.com/projects/messa/] [http://www.tmema.org/messa/messa.html]&lt;br /&gt;
**Seen.  David Rokeby.  2002.  [http://vimeo.com/6012986]&lt;br /&gt;
**Sorting Daemon. David Rokeby. 2003. [http://homepage.mac.com/davidrokeby/sorting.html]&lt;br /&gt;
**Cheese.  Christian Moller. 2003. [http://www.christian-moeller.com/display.php?project_id=36] made in collaboration with UCSD  [http://mplab.ucsd.edu/wordpress/ Machine Perception Lab]&lt;br /&gt;
**Eyewriter. 2009 [http://www.eyewriter.org/]&lt;br /&gt;
**Saccade. 2010 [http://roberttwomey.com/saccade] (in progress)&lt;br /&gt;
*Discuss: &lt;br /&gt;
**thresholding&lt;br /&gt;
**frame difference&lt;br /&gt;
**OpenCV - [http://ubaa.net/shared/processing/opencv/ download] [http://www.cs.unc.edu/Research/stc/FAQs/OpenCV/OpenCVReferenceManual.pdf reference manual].  If you are getting this for your computer, be sure to get OpenCV, the OpenCV Processing Library, and the OpenCV Processing Examples (three separate downloads).&lt;br /&gt;
**face recognition&lt;br /&gt;
*In Class:&lt;br /&gt;
**Working alone or in small groups, do experiments with video processing and computer vision.&lt;br /&gt;
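The frame-differencing and thresholding techniques above reduce to a simple idea: subtract the previous frame from the current one, pixel by pixel, and treat a large total difference as motion. A minimal sketch in plain Java (the class name, array values, and threshold are illustrative, not from the course materials):

```java
// Frame differencing on two tiny grayscale "frames": sum the per-pixel
// absolute differences, then threshold the sum to decide whether motion occurred.
public class FrameDiffDemo {
    static int movementSum(int[] prev, int[] curr) {
        int sum = 0;
        for (int i = 0; i < curr.length; i++) {
            sum += Math.abs(curr[i] - prev[i]); // per-pixel brightness change
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] previousFrame = {10, 10, 10, 10};
        int[] currentFrame  = {10, 80, 10, 10}; // one pixel changed
        int sum = movementSum(previousFrame, currentFrame);
        int threshold = 50;                     // illustrative value
        System.out.println(sum);                // 70
        System.out.println(sum > threshold ? "motion" : "still");
    }
}
```

The in-class Processing examples do the same thing per color channel on the live `video.pixels[]` array.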
&lt;br /&gt;
=== Week 4 - Computer Vision Work ===&lt;br /&gt;
* In Class:&lt;br /&gt;
** Work on computer vision projects&lt;br /&gt;
** Talk about midterm projects.&lt;br /&gt;
&lt;br /&gt;
=== Week 5 - Midterm Workshop ===&lt;br /&gt;
*Due: Midterm project proposal.&lt;br /&gt;
**Working individually or in small groups (2-3 people), produce an interactive piece that bridges the gap between screen space and physical space.  There are many ways to do this--using image-based computer vision techniques, game controllers, audio input, or other physical hardware (Arduino?).  Think about the parameters of interaction--are you documenting viewer&amp;#039;s behavior (unknown to them), are you taking a familiar form (such as a video game) and tweaking it in some way, are you intervening in social space?  Think about what form the output will take.  In your one page proposal, describe the input(s), output(s), and dynamic of interaction, as well as some statement of your motivation.  Why is this a valuable or interesting project?  In addition to the written description, produce supporting visual materials.  These should be two functional diagram images and two visual/aesthetic images.  The functional diagrams should show the necessary software and hardware components and explain how the interaction will occur.  The aesthetic diagrams will give us a sense of what it will look like, how the output will appear.  Make a page for your project (including a title) in the Midterm Projects section at the bottom of this page, upload the necessary materials and embed them in that page.  This proposal is due in class next week where we will critique and workshop the ideas.&lt;br /&gt;
*In class:&lt;br /&gt;
**Workshop midterm project ideas. (45 minutes)&lt;br /&gt;
**Work on midterm projects. &lt;br /&gt;
*NOTE: Best of ICAM from Candy Harris.  There will be an install in the annex here at Mandeville and presentations at the Experimental Theater in the CPMC (music building). Come see what you are going to have to live up to for your final projects. Plus the keynote speakers (ICAM alums) always have great info about career paths after graduation.&lt;br /&gt;
&lt;br /&gt;
=== Week 6 - Midterm Critiques ===&lt;br /&gt;
*Due: Midterm Projects&lt;br /&gt;
*In Class:&lt;br /&gt;
**Critique of midterm projects.&lt;br /&gt;
&lt;br /&gt;
=== Week 7 ===&lt;br /&gt;
Student presentations&lt;br /&gt;
=== Week 8 ===&lt;br /&gt;
Student presentations &lt;br /&gt;
=== Week 9 ===&lt;br /&gt;
Student presentations&lt;br /&gt;
=== Week 10 - Final Critiques ===&lt;br /&gt;
In-class Critiques.&lt;br /&gt;
&lt;br /&gt;
=== Finals Week ===&lt;br /&gt;
Final documentation due.&lt;br /&gt;
&lt;br /&gt;
== Topics ==&lt;br /&gt;
To Be Scheduled&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Performance for the camera, for the web&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Discuss: Chatroulette, Facebook, Twitter, YouTube.  Attention in the social net.&lt;br /&gt;
*ManyCam [http://www.manycam.com/]&lt;br /&gt;
*PS3 eye&lt;br /&gt;
*jennicam [http://www.wired.com/thisdayintech/2010/04/0414jennicam-launches wired]&lt;br /&gt;
*Lonelygirl15 [http://www.youtube.com/watch?v=-goXKtd6cPo youtube] [http://www.wired.com/wired/archive/14.12/lonelygirl.html article]&lt;br /&gt;
*Discuss telematic performance. &lt;br /&gt;
* Justin.tv [http://www.justin.tv/#r=s7RVqBU~]&lt;br /&gt;
*Read: The Presentation of Self in Everyday Life (excerpt).  Erving Goffman. 1959.&lt;br /&gt;
*Read: Performance: A Critical Introduction (excerpt).  Marvin Carlson. 2004.&lt;br /&gt;
*Do: Intervention in social circuits.  Chatroulette/Facebook/Youtube exercise.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Social Networks/Web 2.0&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Protocol, Control, and Networks by Alexander Galloway and Eugene Thacker.  Grey Room 17, Fall 2004 p 6-29.  &lt;br /&gt;
*Read: DIGITAL MAOISM: The Hazards of the New Online Collectivism.  Jaron Lanier.  2006.&lt;br /&gt;
*Watch: MediatedCultures @ Kansas State http://mediatedcultures.net/mediatedculture.htm&lt;br /&gt;
*Datamining/Complex Networks, node-edge graphing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Digital Memory/Personal Media: Where do we exist and how do we remember?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Mediated Memories in the Digital Age (excerpt). Jose van Dijck. 2007.&lt;br /&gt;
*Read: Are you sure you want to do this?  Matthias Fuchs 1994.&lt;br /&gt;
*Read: Delete: The Virtue of Forgetting in the Digital Age (excerpt). Viktor Mayer-Schonberger. 2009.&lt;br /&gt;
*Flickr.com, Facebook&lt;br /&gt;
*Discuss: My Pocket. Burak Arikan. 2008. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Cognition + Creativity&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Generative Art vs. Computational Creativity&lt;br /&gt;
*Casey Reas&lt;br /&gt;
*Processing.org&lt;br /&gt;
*Tom Shannon. [http://www.wired.com/magazine/2010/03/pl_arts_pendulum/all/1]&lt;br /&gt;
*Read: Triumph of the Cyborg Composer. &lt;br /&gt;
*Read: How to draw three people in a garden.  1988.&lt;br /&gt;
*Read: Shades of Computational Evocation and Meaning: The GRIOT System and Improvisational Poetry Generation. 2006.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Artificial Intelligence&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Expressive Processing (excerpt), Noah Wardrip-Fruin, 2009. &lt;br /&gt;
*Read: Elephants Don&amp;#039;t Play Chess, Rodney Brooks, 1990. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Appropriation and Remix&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: The Fiction of Memory.  New York Times, March 12, 2010.  Luc Sante&lt;br /&gt;
*Read: Jonathan Lethem.  The Ecstasy of Influence. Harper&amp;#039;s Magazine.  2007. &lt;br /&gt;
*Remix Culture.  Lev.&lt;br /&gt;
*God&amp;#039;s Little Toys: Confessions of a cut &amp;amp; paste artist.  William Gibson. 2005. http://www.wired.com/wired/archive/13.07/gibson.html&lt;br /&gt;
*Reality Hunger: A Manifesto.  David Shields. 2010.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Materiality in the information age.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Tangible interfaces, haptic feedback. &lt;br /&gt;
*Read: Evocative Objects: Things We Think With (excerpt). Sherry Turkle, 2007. &lt;br /&gt;
*Read: New Media and the Forensic Imagination (excerpt). Matthew Kirschenbaum. 2008.&lt;br /&gt;
*View: BIT Plane.  &lt;br /&gt;
*View: Garbage Cubes&lt;br /&gt;
*Discuss techniques of markerless tracking, augmented reality, QR codes, etc.&lt;br /&gt;
*Online/Offline Space.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Embodiment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Computing with bodies, engineered bodies&lt;br /&gt;
*tactile media, haptic interface&lt;br /&gt;
*embodied perception&lt;br /&gt;
*Read: Stelarc. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Self-Image&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Self/Image: Technology, Representation, and the Contemporary Subject (excerpt).  Amelia Jones, 2006.&lt;br /&gt;
*Do: Forensic Photoshop Exercise.&lt;br /&gt;
*http://www.flickr.com/photos/dryponder/sets/72157623726710218/&lt;br /&gt;
*http://nymag.com/daily/intel/2010/02/obama_being_forced_to_look_at.html#photo=1&lt;br /&gt;
*http://niccageaseveryone.blogspot.com/&lt;br /&gt;
*http://bubleraptor.tumblr.com/&lt;br /&gt;
*photoshop free Marie Claire issue: http://jezebel.com/5511507/so-long-as-your-face-looks-alright-everything-else-can-be-photoshopped&lt;br /&gt;
&lt;br /&gt;
== Places to Find Art ==&lt;br /&gt;
* http://we-make-money-not-art.com/&lt;br /&gt;
* http://www.isea-web.org/, http://www.isea2010ruhr.org/&lt;br /&gt;
* http://www.transmediale.de/en&lt;br /&gt;
* http://01sj.org/&lt;br /&gt;
* http://www.file.org.br/&lt;br /&gt;
* http://www.aec.at/festival_about_en.php&lt;br /&gt;
* http://www.sciencegallery.com/lightwave09&lt;br /&gt;
* Institutions that Sponsor/Show Media Art&lt;br /&gt;
** Eyebeam New York City&lt;br /&gt;
** New Museum/Rhizome.org http://rhizome.org&lt;br /&gt;
** HarvestWorks&lt;br /&gt;
** Machine Project, Los Angeles.&lt;br /&gt;
&lt;br /&gt;
== Midterm Projects ==&lt;br /&gt;
Make pages here. &lt;br /&gt;
* [[DummyProject | Dummy Project]]&lt;br /&gt;
* [[MidtermProject| MotionDJ - Leilani Martin]]&lt;br /&gt;
* [[What&amp;#039;s For Lunch, Kids? by Kelley Kim| &amp;#039;&amp;#039;What&amp;#039;s For Lunch, Kids?&amp;#039;&amp;#039;   - Kelley Kim]]&lt;br /&gt;
* [[Virtual Walk? - Joeny Thipsidakhom]]&lt;br /&gt;
* [[Untitled Midterm| Untitled - Jezreel Callejas]]&lt;br /&gt;
* [[Midterm Project - Tony Lu | Virtual Maze - Tony Lu]]&lt;br /&gt;
* [[Midterm Project  | Untitled - Joel and Jenny Chang]]&lt;br /&gt;
* [[Carnival Ride (Midterm Project)| Carnival Ride - Christina Sanchez and Jennifer Sunga]]&lt;br /&gt;
* [[Motion Animation - Greg Parsons]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Joel + Jenny, Christina + Jennifer&amp;#039;&amp;#039;&amp;#039;: Your teams named their pages the same thing.  Instead of &amp;quot;Midterm Project&amp;quot;, call it &amp;quot;Midterm Project - Carnival Ride&amp;quot;, or &amp;quot;Midterm Project Joel + Jenny&amp;quot; or something...&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Ben, Emilio, and Gregory&amp;#039;&amp;#039;&amp;#039;: please move your project proposals from your personal pages to an actual midterm project page, and link to it from this section. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Everyone&amp;#039;&amp;#039;&amp;#039;: Be sure you have put your names on your project pages if it is not readily apparent who made them!&lt;br /&gt;
&lt;br /&gt;
== Student Pages ==&lt;br /&gt;
Click &amp;quot;edit&amp;quot; on the right to add your own page below. &lt;br /&gt;
* [[Students/RobertTwomey | RobertTwomey]]&lt;br /&gt;
* [[Students/Javier Lee | Javier Lee]]&lt;br /&gt;
* [[Students/Jenny Wang | Jenny Wang]]&lt;br /&gt;
* [[Students/Joeny Thipsidakhom | Joeny Thipsidakhom]]&lt;br /&gt;
* [[Students/Kuan-Ting Lu | Tony Lu]]&lt;br /&gt;
* [[Students/Jezreel Callejas| Jezreel Callejas]]&lt;br /&gt;
* [[Students/ChristinaSanchez| Christina Sanchez]]&lt;br /&gt;
* [[Students/BenBrickley | BenBrickley]]&lt;br /&gt;
* [[Students/Ellen Huang | Ellen Huang]]&lt;br /&gt;
* [[Students/Kelley Kim | Kelley Kim]]&lt;br /&gt;
* [[Students/EmilioMarcelino | EmilioMarcelino]]&lt;br /&gt;
* [[Students/Anna Lin | Anna Lin]]&lt;br /&gt;
* [[Student/Jenny Chang | Jenny Chang]]&lt;br /&gt;
* [[Student/Jet Antonio | Jet Antonio]]&lt;br /&gt;
* [[Students/GregoryParsons | Gregory Parsons]]&lt;br /&gt;
* [[Students/Jennifer Sunga | Jennifer Sunga]]&lt;br /&gt;
* [[Students/LeilaniMartin | Leilani Martin]]&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3659</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3659"/>
				<updated>2010-04-29T21:27:20Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Course page&lt;br /&gt;
[[Time and Process Based Digital Media]]&lt;br /&gt;
&lt;br /&gt;
Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
-G&lt;br /&gt;
&lt;br /&gt;
_________&lt;br /&gt;
Midterm Project&lt;br /&gt;
&lt;br /&gt;
;Motivation:&lt;br /&gt;
:	I was interested in the demonstrations of computer vision presented during the first week of lecture. During the in-class sampling of the technologies, I was impressed by how easily I could manipulate a project using frame differencing. I would like to create a project that is controlled by the viewer&amp;#039;s movements.&lt;br /&gt;
;Interaction:&lt;br /&gt;
:	The interaction will be driven by a webcam: the program records movement in a scene and manipulates a graphical image based on the difference between frames. In short, the project will react to the viewer&amp;#039;s movements. &lt;br /&gt;
;Function:&lt;br /&gt;
:	My programming experience is limited to the past two quarters of instruction, but I would like to achieve smooth movement in the image using some form of algorithmic processing.  &lt;br /&gt;
;Visualization:&lt;br /&gt;
:	I have a semi-working project, created this weekend from a visual example on OpenProcessing.org, that I will show in class if needed.&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3657</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3657"/>
				<updated>2010-04-29T21:26:51Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Course page&lt;br /&gt;
[[Time and Process Based Digital Media]]&lt;br /&gt;
&lt;br /&gt;
Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
-G&lt;br /&gt;
&lt;br /&gt;
_________&lt;br /&gt;
Midterm Project&lt;br /&gt;
&lt;br /&gt;
;Motivation:&lt;br /&gt;
:	I was interested in the demonstrations of computer vision presented during the first week of lecture. During the in-class sampling of the technologies, I was impressed by how easily I could manipulate a project using frame differencing. I would like to create a project that is controlled by the viewer&amp;#039;s movements.&lt;br /&gt;
;Interaction:&lt;br /&gt;
:	The interaction will be driven by a webcam: the program records movement in a scene and manipulates a graphical image based on the difference between frames. In short, the project will react to the viewer&amp;#039;s movements. &lt;br /&gt;
;Function:&lt;br /&gt;
:	My programming experience is limited to the past two quarters of instruction, but I would like to achieve smooth movement in the image using some form of algorithmic processing.  &lt;br /&gt;
;Visualization:&lt;br /&gt;
:	I have a semi-working project, created this weekend from a visual example on OpenProcessing.org, that I will show in class if needed.&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3652</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3652"/>
				<updated>2010-04-29T20:50:47Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Course page&lt;br /&gt;
[[Time and Process Based Digital Media]]&lt;br /&gt;
&lt;br /&gt;
Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
-G&lt;br /&gt;
&lt;br /&gt;
_________&lt;br /&gt;
Midterm Project&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
	I was interested in the demonstrations of computer vision presented during the first week of lecture. During the in-class sampling of the technologies, I was impressed by how easily I could manipulate a project using frame differencing. I would like to create a project that is controlled by the viewer&amp;#039;s movements.&lt;br /&gt;
Interaction:&lt;br /&gt;
	The interaction will be driven by a webcam: the program records movement in a scene and manipulates a graphical image based on the difference between frames. In short, the project will react to the viewer&amp;#039;s movements. &lt;br /&gt;
Function:&lt;br /&gt;
	My programming experience is limited to the past two quarters of instruction, but I would like to achieve smooth movement in the image using some form of algorithmic processing.  &lt;br /&gt;
Visualization:&lt;br /&gt;
	I have a semi-working project, created this weekend using a visual example from OpenProcessing.org, that I will show in class if needed.&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3651</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3651"/>
				<updated>2010-04-29T20:50:24Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Course page&lt;br /&gt;
[[Time and Process Based Digital Media]]&lt;br /&gt;
&lt;br /&gt;
Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
-G&lt;br /&gt;
&lt;br /&gt;
_________&lt;br /&gt;
Midterm Project&lt;br /&gt;
&lt;br /&gt;
Motivation&lt;br /&gt;
	I was interested in the demonstrations of computer vision presented during the first week of lecture. During the in-class sampling of these technologies, I was impressed by how I could manipulate a project using frame differencing. I would like to create a project that is manipulated by the movements of its viewer.&lt;br /&gt;
Interaction&lt;br /&gt;
	The project will use a webcam to record movement in a scene and manipulate a graphical image based on the data collected from the difference between frames. Overall, the project would react in some way to the movements of the viewer. &lt;br /&gt;
Function&lt;br /&gt;
	My programming experience is limited to the past two quarters of instruction, but I would like to achieve smooth movement in the image using some form of algorithmic processing.  &lt;br /&gt;
Visualization&lt;br /&gt;
	I have a semi-working project, created this weekend using a visual example from OpenProcessing.org, that I will show in class if needed.&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3531</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3531"/>
				<updated>2010-04-13T21:02:57Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Course page&lt;br /&gt;
[[Time and Process Based Digital Media]]&lt;br /&gt;
&lt;br /&gt;
Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
-G&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3530</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3530"/>
				<updated>2010-04-13T21:01:50Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Senior-ish student, ICAM VisArts Major. &lt;br /&gt;
&lt;br /&gt;
Interests: Film, Product Design, Device Interactivity, Web Design, Photography, Audio / Video, Video Games, Reading... &lt;br /&gt;
&lt;br /&gt;
-G&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3529</id>
		<title>Classes/2010/VIS145B</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3529"/>
				<updated>2010-04-13T20:59:05Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Student Pages */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Time and Process Based Digital Media II ==&lt;br /&gt;
Time: Thursdays 3:30-6:20pm, VAF 228&lt;br /&gt;
&lt;br /&gt;
This class is an advanced study and portfolio project course centered on the use of hardware and software to create interactive and time-based art.  These projects can take many forms—interactive installations, dynamic visualizations/sonifications, printed renderings—chosen by the students.  This will not be a course of technical instruction—rather we will consider technical and conceptual issues in tandem, supplementing discussions and activities with specific technical instruction where necessary.  There is a strong emphasis on the development and articulation of personal directions of research by the students in the course. &lt;br /&gt;
&lt;br /&gt;
I would like to split the reading/homework responsibility for two parts of the class.  In the first half of the term I will present a series of works and readings covering my particular interests--the intersections of social performance, embodied experience, and cognition.  In the latter half of the class (after the midterm) you all will do the presentations on topics of your choosing.  Working individually or in small groups, you will provide us with some conceptual provocation (reading material) covering topics you intend to engage with your final, and you will lead a discussion on technical and conceptual issues.  Reading and critical writing, in response to text and works you present and those I present, are integral to this course.&lt;br /&gt;
&lt;br /&gt;
== Instructor ==&lt;br /&gt;
Robert Twomey&lt;br /&gt;
&lt;br /&gt;
rtwomey@ucsd.edu&lt;br /&gt;
*http://roberttwomey.com&lt;br /&gt;
*http://experimentalgamelab.net&lt;br /&gt;
*http://crca.ucsd.edu&lt;br /&gt;
&lt;br /&gt;
Office Hours: Wednesday 3-4pm, Atkinson Hall Rm 1601 (CRCA research neighborhood).  Please e-mail me if you plan to attend.&lt;br /&gt;
&lt;br /&gt;
== Grading ==&lt;br /&gt;
*Midterm Project - 30%&lt;br /&gt;
*Final Project - 40%&lt;br /&gt;
*Presentations - 10%&lt;br /&gt;
*Readings - 10%&lt;br /&gt;
*Participation - 10%&lt;br /&gt;
&lt;br /&gt;
=== Presentations ===&lt;br /&gt;
(1) Short presentation on your work in the second week of class.  This should be a statement of your interests, direction, goals with media art.  Present examples from your own work which you feel strongly about, and which best represent your interests and trajectory.  Present examples of other artists&amp;#039; work that serve as models for the kind of work you would like to make. (5-10 minutes each)&lt;br /&gt;
&lt;br /&gt;
(2) Medium presentation on final projects in the second half of the course (weeks 7-9).  This is the portion of the class where you dictate the reading and the discussion.  If you are presenting on a given week, you need to provide us with a reading 1 week in advance.  We will sign up for those time slots in week 6, just after the midterm. (10-15 minutes)&lt;br /&gt;
&lt;br /&gt;
=== Reading Responses ===&lt;br /&gt;
These are written summaries and critical responses to materials assigned for out of class viewing.  Things to consider: What points does the author make?  Do you buy their assumptions or agree with their conclusions?  Reading responses will be printed and turned in to the instructor at the beginning of class.  Generally these should be 1 page long.&lt;br /&gt;
&lt;br /&gt;
=== Projects ===&lt;br /&gt;
Midterm and final projects will be graded on concept, effort, and realization. Formal proposals are a necessary component of the process so take them seriously.  Make the effort to get started early and seek the help you need--we want to see finished, well-considered pieces for the midterm and final. Additionally, you will need to submit documentation of the project after completion which includes images, video, and source code where applicable.  These materials (proposals and documentation) will all be posted to the wiki.&lt;br /&gt;
=== Documentation Policy ===&lt;br /&gt;
*personal wiki page&lt;br /&gt;
*source code on wiki&lt;br /&gt;
*image/video documentation where appropriate. &lt;br /&gt;
*explanatory writing (on intent, motivation, context)&lt;br /&gt;
&lt;br /&gt;
=== Attendance ===&lt;br /&gt;
Attendance is mandatory. Each unexcused absence will drop your final grade one letter.  There are only 10 weeks of class; please come to them all.&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
=== Week 1 - Intro ===&lt;br /&gt;
*Introductions&lt;br /&gt;
*Scope of course, interests, technical possibilities.&lt;br /&gt;
*My work.&lt;br /&gt;
*Watch: We Live In Public.  2009. (excerpts)&lt;br /&gt;
*In class: personal page on wiki. [http://www.trsp.net/teaching/gamemod/ game-mod exercise]. [http://www.trsp.net/teaching/gamemod/gamemod_breakout_source_en.zip download link]&lt;br /&gt;
*Read: [http://www.nyu.edu/projects/xdesign/mainmenu/archive_tangible.html Against Virtualized Information], [http://www.nyu.edu/projects/xdesign/mainmenu/archive_analtictech.html Novel Analytic Techniques], and [http://www.nyu.edu/projects/xdesign/mainmenu/archive_infocounts.html What Information Counts?] by [http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/ Natalie Jeremijenko]. &lt;br /&gt;
*Read: [http://www.yalealumnimagazine.com/issues/2004_03/jeremijenko.html An Engineer for the Avant Garde]&lt;br /&gt;
*Read: [http://www.worldchanging.com/archives/001450.html Natalie Jeremijenko The WorldChanging Interview]&lt;br /&gt;
*Read: [http://tech90s.walkerart.org/nj/transcript/nj_01.html Database Politics and Social Simulations], good background on her earlier artwork.&lt;br /&gt;
&lt;br /&gt;
=== Week 2 - Computer Vision / Human Perception ===&lt;br /&gt;
*Due: 1 page on Jeremijenko. &lt;br /&gt;
*Presentations on your work.&lt;br /&gt;
*Discuss: CV methods—thresholding, blob-detection, facial recognition, motion/flow estimation.&lt;br /&gt;
*Discuss:&lt;br /&gt;
**Myron Krueger. Video Place. 1989 [http://www.youtube.com/watch?v=dqZyZrN3Pl0]&lt;br /&gt;
**Text Rain. Camille Utterback &amp;amp; Romy Achituv. 1999. [http://www.youtube.com/watch?v=toWFvXHghDk] [http://www.camilleutterback.com/]&lt;br /&gt;
**Very Nervous System.  1982-1991. [http://vimeo.com/8120954]&lt;br /&gt;
**Suicide Box.  Bureau of Inverse Technology.  1996. (13:00)&lt;br /&gt;
**Marie Sester. ACCESS.  2003. [http://accessproject.net]&lt;br /&gt;
**Messa di Voce. Golan Levin and Zach Lieberman with Jaap Blonk and Joan La Barbara. 2003.  [http://www.flong.com/projects/messa/] [http://www.tmema.org/messa/messa.html]&lt;br /&gt;
**Seen.  David Rokeby.  2002.  [http://vimeo.com/6012986]&lt;br /&gt;
**Sorting Daemon. David Rokeby. 2003. [http://homepage.mac.com/davidrokeby/sorting.html]&lt;br /&gt;
**Cheese.  Christian Moller. 2003. [http://www.christian-moeller.com/display.php?project_id=36] made in collaboration with UCSD  [http://mplab.ucsd.edu/wordpress/ Machine Perception Lab]&lt;br /&gt;
**Eyewriter. http://www.eyewriter.org/ -&amp;gt; Saccade.&lt;br /&gt;
*OpenCV [http://ubaa.net/shared/processing/opencv/ download] [http://www.cs.unc.edu/Research/stc/FAQs/OpenCV/OpenCVReferenceManual.pdf reference manual].  If you are getting this for your computer, be sure to get OpenCV, the OpenCV Processing Library, and the OpenCV Processing Examples (three separate downloads).&lt;br /&gt;
*Read/Respond: [http://www.flong.com/texts/essays/essay_cvad/ Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers] Golan Levin. &amp;#039;&amp;#039;pay particular attention to part II. ELEMENTARY COMPUTER VISION TECHNIQUES.  we are going to try these in class next week.&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
=== Week 3 ===&lt;br /&gt;
=== Week 4 ===&lt;br /&gt;
*Due: Midterm proposals.&lt;br /&gt;
=== Week 5 ===&lt;br /&gt;
Midterm critiques.&lt;br /&gt;
=== Week 6 ===&lt;br /&gt;
=== Week 7 ===&lt;br /&gt;
Student presentations&lt;br /&gt;
=== Week 8 ===&lt;br /&gt;
Student presentations &lt;br /&gt;
=== Week 9 ===&lt;br /&gt;
Student presentations&lt;br /&gt;
=== Week 10 ===&lt;br /&gt;
Final critiques.&lt;br /&gt;
=== Finals Week ===&lt;br /&gt;
Final documentation due.&lt;br /&gt;
== Topics ==&lt;br /&gt;
To Be Scheduled&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Performance for the camera, for the web&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Discuss Chatroulette, Facebook, Twitter, Youtube.  Attention in the social net.&lt;br /&gt;
*ManyCam [http://www.manycam.com/]&lt;br /&gt;
*PS3 eye&lt;br /&gt;
*Discuss telematic performance. &lt;br /&gt;
*Read: The Presentation of Self in Everyday Life (excerpt).  Erving Goffman. 1959.&lt;br /&gt;
*Read: Performance: A Critical Introduction (excerpt).  Marvin Carlson. 2004.&lt;br /&gt;
*Do: Intervention in social circuits.  Chatroulette/Facebook/Youtube exercise.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Social Networks/Web 2.0&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Protocol, Control, and Networks by Alexander Galloway and Eugene Thacker.  Grey Room 17, Fall 2004 p 6-29.  &lt;br /&gt;
*Read: DIGITAL MAOISM: The Hazards of the New Online Collectivism.  Jaron Lanier.  2006.&lt;br /&gt;
*Watch: MediatedCultures @ Kansas State http://mediatedcultures.net/mediatedculture.htm&lt;br /&gt;
*Datamining/Complex Networks, node-edge graphing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Digital Memory/Personal Media: Where do we exist and how do we remember?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Mediated Memories in the Digital Age (excerpt). Jose van Dijck. 2007.&lt;br /&gt;
*Read: Are you sure you want to do this?  Matthias Fuchs 1994.&lt;br /&gt;
*Read: Delete: The Virtue of Forgetting in the Digital Age (excerpt). Viktor Mayer-Schonberger. 2009.&lt;br /&gt;
*Flickr.com, Facebook&lt;br /&gt;
*Discuss: My Pocket. Burak Arikan. 2008. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Cognition + Creativity&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Generative Art vs. Computational Creativity&lt;br /&gt;
*Casey Reas&lt;br /&gt;
*Processing.org&lt;br /&gt;
*Tom Shannon. [http://www.wired.com/magazine/2010/03/pl_arts_pendulum/all/1]&lt;br /&gt;
*Read: Triumph of the Cyborg Composer. &lt;br /&gt;
*Read: How to draw three people in a garden.  1988.&lt;br /&gt;
*Read: Shades of Computational Evocation and Meaning: The GRIOT System and Improvisational Poetry Generation. 2006.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Artificial Intelligence&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Expressive Processing (excerpt), Noah Wardrip-Fruin, 2009. &lt;br /&gt;
*Read: Elephants Don&amp;#039;t Play Chess, Rodney Brooks, 1990. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Appropriation and Remix&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: The Fiction of Memory.  New York Times, March 12, 2010.  Luc Sante&lt;br /&gt;
*Read: Jonathan Lethem.  The Ecstasy of Influence. Harper&amp;#039;s Magazine.  2007. &lt;br /&gt;
*Remix Culture.  Lev.&lt;br /&gt;
*God&amp;#039;s Little Toys: Confessions of a cut &amp;amp; paste artist.  William Gibson. 2005. http://www.wired.com/wired/archive/13.07/gibson.html&lt;br /&gt;
*Reality Hunger: A Manifesto.  David Shields. 2010.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Materiality in the information age.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Tangible interfaces, haptic feedback. &lt;br /&gt;
*Read: Evocative Objects: Things We Think With (excerpt). Sherry Turkle, 2007. &lt;br /&gt;
*Read: New Media and the Forensic Imagination (excerpt). Matthew Kirschenbaum. 2008.&lt;br /&gt;
*View: BIT Plane.  &lt;br /&gt;
*View: Garbage Cubes&lt;br /&gt;
*Discuss techniques of markerless tracking, augmented reality, QR codes, etc.&lt;br /&gt;
*Online/Offline Space.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Embodiment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Computing with bodies, engineered bodies&lt;br /&gt;
*tactile media, haptic interface&lt;br /&gt;
*embodied perception&lt;br /&gt;
*Read: Stelarc. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Self-Image&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Self/Image: Technology, Representation, and the Contemporary Subject (excerpt).  Amelia Jones, 2006.&lt;br /&gt;
*Do: Forensic Photoshop Exercise.&lt;br /&gt;
*http://www.flickr.com/photos/dryponder/sets/72157623726710218/&lt;br /&gt;
*http://nymag.com/daily/intel/2010/02/obama_being_forced_to_look_at.html#photo=1&lt;br /&gt;
*http://bubleraptor.tumblr.com/&lt;br /&gt;
*photoshop free Marie Claire issue: http://jezebel.com/5511507/so-long-as-your-face-looks-alright-everything-else-can-be-photoshopped&lt;br /&gt;
&lt;br /&gt;
== Places to Find Art ==&lt;br /&gt;
* http://we-make-money-not-art.com/&lt;br /&gt;
* http://www.isea-web.org/, http://www.isea2010ruhr.org/&lt;br /&gt;
* http://www.transmediale.de/en&lt;br /&gt;
* http://01sj.org/&lt;br /&gt;
* http://www.file.org.br/&lt;br /&gt;
* http://www.aec.at/festival_about_en.php&lt;br /&gt;
* http://www.sciencegallery.com/lightwave09&lt;br /&gt;
* Institutions that Sponsor/Show Media Art&lt;br /&gt;
** Eyebeam New York City&lt;br /&gt;
** New Museum/Rhizome.org http://rhizome.org&lt;br /&gt;
** HarvestWorks&lt;br /&gt;
** Machine Project, Los Angeles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Student Pages ==&lt;br /&gt;
Click &amp;quot;edit&amp;quot; on the right to add your own page below. &lt;br /&gt;
* [[Students/RobertTwomey | RobertTwomey]]&lt;br /&gt;
* [[Students/ Javier Lee | Javier Lee]]&lt;br /&gt;
* [[Students/Jenny Wang | Jenny Wang]]&lt;br /&gt;
* [[Students/Joeny Thipsidakhom | Joeny Thipsidakhom]]&lt;br /&gt;
* [[Students/Kuan-Ting Lu | Tony Lu]]&lt;br /&gt;
* [[Students/Jezreel Callejas| Jezreel Callejas]]&lt;br /&gt;
* [[Students/ChristinaSanchez| Christina Sanchez]]&lt;br /&gt;
* [[Students/BenBrickley | BenBrickley]]&lt;br /&gt;
* [[Students/Ellen Huang | Ellen Huang]]&lt;br /&gt;
* [[Students/Kelley Kim | Kelley Kim]]&lt;br /&gt;
* [[Students/EmilioMarcelino | EmilioMarcelino]]&lt;br /&gt;
* [[Students/Anna Lin | Anna Lin]]&lt;br /&gt;
* [[Student/Jenny Chang | Jenny Chang]]&lt;br /&gt;
* [[Student/Jet Antonio | Jet Antonio]]&lt;br /&gt;
* [[Students/GregoryParsons | Gregory Parsons]]&lt;br /&gt;
&lt;br /&gt;
=== How-To ===&lt;br /&gt;
Register to create a log-in in the upper right.&lt;br /&gt;
&lt;br /&gt;
wiki-text of the form: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Students/RobertTwomey | RobertTwomey]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
will come out looking like this: [[Students/RobertTwomey | RobertTwomey]], which is a link to your new personal page on the wiki.  Click on it and begin editing away. &lt;br /&gt;
&lt;br /&gt;
There is editing help here http://en.wikipedia.org/wiki/Help:Editing and here http://en.wikipedia.org/wiki/Wikipedia:Cheatsheet. Image uploading help is here http://en.wikipedia.org/wiki/Wikipedia:Uploading_images.  Of course you can always view the source of my page (or any other page) to learn how to do things. &lt;br /&gt;
&lt;br /&gt;
If your embedded photo is HUGE, try some of these tips:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Image:File.jpg]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; to use the full version of the file&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Image:File.png|200px|thumb|left|alt text]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; to use a 200 pixel wide rendition in a box in the left margin with &amp;#039;alt text&amp;#039; as description&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Media:File.ogg]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; for directly linking to the file without displaying the file&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3528</id>
		<title>Students/GregoryParsons</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Students/GregoryParsons&amp;diff=3528"/>
				<updated>2010-04-13T20:58:43Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: New page: Here is my page!  -G&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here is my page!&lt;br /&gt;
&lt;br /&gt;
-G&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	<entry>
		<id>http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3527</id>
		<title>Classes/2010/VIS145B</title>
		<link rel="alternate" type="text/html" href="http://wiki.roberttwomey.com/index.php?title=Classes/2010/VIS145B&amp;diff=3527"/>
				<updated>2010-04-13T20:58:27Z</updated>
		
		<summary type="html">&lt;p&gt;GregoryParsons: /* Student Pages */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Time and Process Based Digital Media II ==&lt;br /&gt;
Time: Thursdays 3:30-6:20pm, VAF 228&lt;br /&gt;
&lt;br /&gt;
This class is an advanced study and portfolio project course centered on the use of hardware and software to create interactive and time-based art.  These projects can take many forms—interactive installations, dynamic visualizations/sonifications, printed renderings—chosen by the students.  This will not be a course of technical instruction—rather we will consider technical and conceptual issues in tandem, supplementing discussions and activities with specific technical instruction where necessary.  There is a strong emphasis on the development and articulation of personal directions of research by the students in the course. &lt;br /&gt;
&lt;br /&gt;
I would like to split the reading/homework responsibility for two parts of the class.  In the first half of the term I will present a series of works and readings covering my particular interests--the intersections of social performance, embodied experience, and cognition.  In the latter half of the class (after the midterm) you all will do the presentations on topics of your choosing.  Working individually or in small groups, you will provide us with some conceptual provocation (reading material) covering topics you intend to engage with your final, and you will lead a discussion on technical and conceptual issues.  Reading and critical writing, in response to text and works you present and those I present, are integral to this course.&lt;br /&gt;
&lt;br /&gt;
== Instructor ==&lt;br /&gt;
Robert Twomey&lt;br /&gt;
&lt;br /&gt;
rtwomey@ucsd.edu&lt;br /&gt;
*http://roberttwomey.com&lt;br /&gt;
*http://experimentalgamelab.net&lt;br /&gt;
*http://crca.ucsd.edu&lt;br /&gt;
&lt;br /&gt;
Office Hours: Wednesday 3-4pm, Atkinson Hall Rm 1601 (CRCA research neighborhood).  Please e-mail me if you plan to attend.&lt;br /&gt;
&lt;br /&gt;
== Grading ==&lt;br /&gt;
*Midterm Project - 30%&lt;br /&gt;
*Final Project - 40%&lt;br /&gt;
*Presentations - 10%&lt;br /&gt;
*Readings - 10%&lt;br /&gt;
*Participation - 10%&lt;br /&gt;
&lt;br /&gt;
=== Presentations ===&lt;br /&gt;
(1) Short presentation on your work in the second week of class.  This should be a statement of your interests, direction, goals with media art.  Present examples from your own work which you feel strongly about, and which best represent your interests and trajectory.  Present examples of other artists&amp;#039; work that serve as models for the kind of work you would like to make. (5-10 minutes each)&lt;br /&gt;
&lt;br /&gt;
(2) Medium presentation on final projects in the second half of the course (weeks 7-9).  This is the portion of the class where you dictate the reading and the discussion.  If you are presenting on a given week, you need to provide us with a reading 1 week in advance.  We will sign up for those time slots in week 6, just after the midterm. (10-15 minutes)&lt;br /&gt;
&lt;br /&gt;
=== Reading Responses ===&lt;br /&gt;
These are written summaries and critical responses to materials assigned for out of class viewing.  Things to consider: What points does the author make?  Do you buy their assumptions or agree with their conclusions?  Reading responses will be printed and turned in to the instructor at the beginning of class.  Generally these should be 1 page long.&lt;br /&gt;
&lt;br /&gt;
=== Projects ===&lt;br /&gt;
Midterm and final projects will be graded on concept, effort, and realization. Formal proposals are a necessary component of the process so take them seriously.  Make the effort to get started early and seek the help you need--we want to see finished, well-considered pieces for the midterm and final. Additionally, you will need to submit documentation of the project after completion which includes images, video, and source code where applicable.  These materials (proposals and documentation) will all be posted to the wiki.&lt;br /&gt;
=== Documentation Policy ===&lt;br /&gt;
*personal wiki page&lt;br /&gt;
*source code on wiki&lt;br /&gt;
*image/video documentation where appropriate. &lt;br /&gt;
*explanatory writing (on intent, motivation, context)&lt;br /&gt;
&lt;br /&gt;
=== Attendance ===&lt;br /&gt;
Attendance is mandatory. Each unexcused absence will drop your final grade one letter.  There are only 10 weeks of class; please come to them all.&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
=== Week 1 - Intro ===&lt;br /&gt;
*Introductions&lt;br /&gt;
*Scope of course, interests, technical possibilities.&lt;br /&gt;
*My work.&lt;br /&gt;
*Watch: We Live In Public.  2009. (excerpts)&lt;br /&gt;
*In class: personal page on wiki. [http://www.trsp.net/teaching/gamemod/ game-mod exercise]. [http://www.trsp.net/teaching/gamemod/gamemod_breakout_source_en.zip download link]&lt;br /&gt;
*Read: [http://www.nyu.edu/projects/xdesign/mainmenu/archive_tangible.html Against Virtualized Information], [http://www.nyu.edu/projects/xdesign/mainmenu/archive_analtictech.html Novel Analytic Techniques], and [http://www.nyu.edu/projects/xdesign/mainmenu/archive_infocounts.html What Information Counts?] by [http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/ Natalie Jeremijenko]. &lt;br /&gt;
*Read: [http://www.yalealumnimagazine.com/issues/2004_03/jeremijenko.html An Engineer for the Avant Garde]&lt;br /&gt;
*Read: [http://www.worldchanging.com/archives/001450.html Natalie Jeremijenko The WorldChanging Interview]&lt;br /&gt;
*Read: [http://tech90s.walkerart.org/nj/transcript/nj_01.html Database Politics and Social Simulations], good background on her earlier artwork.&lt;br /&gt;
&lt;br /&gt;
=== Week 2 - Computer Vision / Human Perception ===&lt;br /&gt;
*Due: 1 page on Jeremijenko. &lt;br /&gt;
*Presentations on your work.&lt;br /&gt;
*Discuss: CV methods—thresholding, blob-detection, facial recognition, motion/flow estimation.&lt;br /&gt;
*Discuss:&lt;br /&gt;
**Myron Krueger. Video Place. 1989 [http://www.youtube.com/watch?v=dqZyZrN3Pl0]&lt;br /&gt;
**Text Rain. Camille Utterback &amp;amp; Romy Achituv. 1999. [http://www.youtube.com/watch?v=toWFvXHghDk] [http://www.camilleutterback.com/]&lt;br /&gt;
**Very Nervous System.  1982-1991. [http://vimeo.com/8120954]&lt;br /&gt;
**Suicide Box.  Bureau of Inverse Technology.  1996. (13:00)&lt;br /&gt;
**Marie Sester. ACCESS.  2003. [http://accessproject.net]&lt;br /&gt;
**Messa di Voce. Golan Levin and Zach Lieberman with Jaap Blonk and Joan La Barbara. 2003.  [http://www.flong.com/projects/messa/] [http://www.tmema.org/messa/messa.html]&lt;br /&gt;
**Seen.  David Rokeby.  2002.  [http://vimeo.com/6012986]&lt;br /&gt;
**Sorting Daemon. David Rokeby. 2003. [http://homepage.mac.com/davidrokeby/sorting.html]&lt;br /&gt;
**Cheese.  Christian Moller. 2003. [http://www.christian-moeller.com/display.php?project_id=36] made in collaboration with UCSD  [http://mplab.ucsd.edu/wordpress/ Machine Perception Lab]&lt;br /&gt;
**Eyewriter. http://www.eyewriter.org/ -&amp;gt; Saccade.&lt;br /&gt;
*OpenCV [http://ubaa.net/shared/processing/opencv/ download] [http://www.cs.unc.edu/Research/stc/FAQs/OpenCV/OpenCVReferenceManual.pdf reference manual].  If you are getting this for your computer, be sure to get OpenCV, the OpenCV Processing Library, and the OpenCV Processing Examples (three separate downloads).&lt;br /&gt;
*Read/Respond: [http://www.flong.com/texts/essays/essay_cvad/ Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers] Golan Levin. &amp;#039;&amp;#039;pay particular attention to part II. ELEMENTARY COMPUTER VISION TECHNIQUES.  we are going to try these in class next week.&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
=== Week 3 ===&lt;br /&gt;
=== Week 4 ===&lt;br /&gt;
*Due: Midterm proposals.&lt;br /&gt;
=== Week 5 ===&lt;br /&gt;
Midterm critiques.&lt;br /&gt;
=== Week 6 ===&lt;br /&gt;
=== Week 7 ===&lt;br /&gt;
Student presentations&lt;br /&gt;
=== Week 8 ===&lt;br /&gt;
Student presentations &lt;br /&gt;
=== Week 9 ===&lt;br /&gt;
Student presentations&lt;br /&gt;
=== Week 10 ===&lt;br /&gt;
Final critiques.&lt;br /&gt;
=== Finals Week ===&lt;br /&gt;
Final documentation due.&lt;br /&gt;
== Topics ==&lt;br /&gt;
To Be Scheduled&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Performance for the camera, for the web&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Discuss Chatroulette, Facebook, Twitter, Youtube.  Attention in the social net.&lt;br /&gt;
*ManyCam [http://www.manycam.com/]&lt;br /&gt;
*PS3 eye&lt;br /&gt;
*Discuss telematic performance. &lt;br /&gt;
*Read: The Presentation of Self in Everyday Life (excerpt).  Erving Goffman. 1959.&lt;br /&gt;
*Read: Performance: A Critical Introduction (excerpt).  Marvin Carlson. 2004.&lt;br /&gt;
*Do: Intervention in social circuits.  Chatroulette/Facebook/Youtube exercise.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Social Networks/Web 2.0&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Protocol, Control, and Networks by Alexander Galloway and Eugene Thacker.  Grey Room 17, Fall 2004 p 6-29.  &lt;br /&gt;
*Read: DIGITAL MAOISM: The Hazards of the New Online Collectivism.  Jaron Lanier.  2006.&lt;br /&gt;
*Watch: MediatedCultures @ Kansas State http://mediatedcultures.net/mediatedculture.htm&lt;br /&gt;
*Datamining/Complex Networks, node-edge graphing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Digital Memory/Personal Media: Where do we exist and how do we remember?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Mediated Memories in the Digital Age (excerpt). Jose van Dijck. 2007.&lt;br /&gt;
*Read: Are you sure you want to do this?  Matthias Fuchs 1994.&lt;br /&gt;
*Read: Delete: The Virtue of Forgetting in the Digital Age (excerpt). Viktor Mayer-Schonberger. 2009.&lt;br /&gt;
*Flickr.com, Facebook&lt;br /&gt;
*Discuss: My Pocket. Burak Arikan. 2008. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Cognition + Creativity&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Generative Art vs. Computational Creativity&lt;br /&gt;
*Casey Reas&lt;br /&gt;
*Processing.org&lt;br /&gt;
*Tom Shannon. [http://www.wired.com/magazine/2010/03/pl_arts_pendulum/all/1]&lt;br /&gt;
*Read: Triumph of the Cyborg Composer. &lt;br /&gt;
*Read: How to draw three people in a garden.  1988.&lt;br /&gt;
*Read: Shades of Computational Evocation and Meaning: The GRIOT System and Improvisational Poetry Generation. 2006.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Artificial Intelligence&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Expressive Processing (excerpt), Noah Wardrip-Fruin, 2009. &lt;br /&gt;
*Read: Elephants Don&amp;#039;t Play Chess, Rodney Brooks, 1990. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Appropriation and Remix&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: The Fiction of Memory.  New York Times, March 12, 2010.  Luc Sante&lt;br /&gt;
*Read: Jonathan Lethem.  The Ecstasy of Influence. Harper&amp;#039;s Magazine.  2007. &lt;br /&gt;
*Remix Culture.  Lev Manovich.&lt;br /&gt;
*God&amp;#039;s Little Toys: Confessions of a cut &amp;amp; paste artist.  William Gibson. 2005. http://www.wired.com/wired/archive/13.07/gibson.html&lt;br /&gt;
*Reality Hunger: A Manifesto.  David Shields. 2010.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Materiality in the information age.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Tangible interfaces, haptic feedback. &lt;br /&gt;
*Read: Evocative Objects: Things We Think With (excerpt). Sherry Turkle, 2007. &lt;br /&gt;
*Read: New Media and the Forensic Imagination (excerpt). Matthew Kirschenbaum. 2008.&lt;br /&gt;
*View: BIT Plane.  &lt;br /&gt;
*View: Garbage Cubes&lt;br /&gt;
*Discuss techniques of markerless tracking, augmented reality, QR codes, etc.&lt;br /&gt;
*Online/Offline Space.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Embodiment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Computing with bodies, engineered bodies&lt;br /&gt;
*tactile media, haptic interface&lt;br /&gt;
*embodied perception&lt;br /&gt;
*Read: Stelarc. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Self-Image&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
*Read: Self/Image: Technology, Representation, and the Contemporary Subject (excerpt).  Amelia Jones, 2006.&lt;br /&gt;
*Do: Forensic Photoshop Exercise.&lt;br /&gt;
*http://www.flickr.com/photos/dryponder/sets/72157623726710218/&lt;br /&gt;
*http://nymag.com/daily/intel/2010/02/obama_being_forced_to_look_at.html#photo=1&lt;br /&gt;
*http://bubleraptor.tumblr.com/&lt;br /&gt;
*Photoshop-free Marie Claire issue: http://jezebel.com/5511507/so-long-as-your-face-looks-alright-everything-else-can-be-photoshopped&lt;br /&gt;
&lt;br /&gt;
== Places to Find Art ==&lt;br /&gt;
* http://we-make-money-not-art.com/&lt;br /&gt;
* http://www.isea-web.org/, http://www.isea2010ruhr.org/&lt;br /&gt;
* http://www.transmediale.de/en&lt;br /&gt;
* http://01sj.org/&lt;br /&gt;
* http://www.file.org.br/&lt;br /&gt;
* http://www.aec.at/festival_about_en.php&lt;br /&gt;
* http://www.sciencegallery.com/lightwave09&lt;br /&gt;
* Institutions that Sponsor/Show Media Art&lt;br /&gt;
** Eyebeam New York City&lt;br /&gt;
** New Museum/Rhizome.org http://rhizome.org&lt;br /&gt;
** HarvestWorks&lt;br /&gt;
** Machine Project, Los Angeles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Student Pages ==&lt;br /&gt;
Click &amp;quot;edit&amp;quot; on the right to add your own page below. &lt;br /&gt;
* [[Students/RobertTwomey | RobertTwomey]]&lt;br /&gt;
* [[Students/Javier Lee | Javier Lee]]&lt;br /&gt;
* [[Students/Jenny Wang | Jenny Wang]]&lt;br /&gt;
* [[Students/Joeny Thipsidakhom | Joeny Thipsidakhom]]&lt;br /&gt;
* [[Students/Kuan-Ting Lu | Tony Lu]]&lt;br /&gt;
* [[Students/Jezreel Callejas| Jezreel Callejas]]&lt;br /&gt;
* [[Students/ChristinaSanchez| Christina Sanchez]]&lt;br /&gt;
* [[Students/BenBrickley | BenBrickley]]&lt;br /&gt;
* [[Students/Ellen Huang | Ellen Huang]]&lt;br /&gt;
* [[Students/Kelley Kim | Kelley Kim]]&lt;br /&gt;
* [[Students/EmilioMarcelino | EmilioMarcelino]]&lt;br /&gt;
* [[Students/Anna Lin | Anna Lin]]&lt;br /&gt;
* [[Students/Jenny Chang | Jenny Chang]]&lt;br /&gt;
* [[Students/Jet Antonio | Jet Antonio]]&lt;br /&gt;
* [[Students/GregoryParsons | GregoryParsons]]&lt;br /&gt;
&lt;br /&gt;
=== How-To ===&lt;br /&gt;
Register to create a log-in in the upper right.&lt;br /&gt;
&lt;br /&gt;
Wiki-text of the form: &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Students/RobertTwomey | RobertTwomey]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
will come out looking like this: [[Students/RobertTwomey | RobertTwomey]], which is a link to your new personal page on the wiki.  Click on it and begin editing. &lt;br /&gt;
&lt;br /&gt;
There is editing help here http://en.wikipedia.org/wiki/Help:Editing and here http://en.wikipedia.org/wiki/Wikipedia:Cheatsheet. Image uploading help is here http://en.wikipedia.org/wiki/Wikipedia:Uploading_images.  Of course you can always view the source of my page (or any other page) to learn how to do things. &lt;br /&gt;
&lt;br /&gt;
If your embedded photo is HUGE, try some of these tips:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Image:File.jpg]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; to use the full version of the file&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Image:File.png|200px|thumb|left|alt text]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; to use a 200 pixel wide rendition in a box in the left margin with &amp;#039;alt text&amp;#039; as description&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;[[Media:File.ogg]]&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; for directly linking to the file without displaying the file&lt;/div&gt;</summary>
		<author><name>GregoryParsons</name></author>	</entry>

	</feed>