Lightfield

From Robert-Depot
Revision as of 08:31, 2 June 2015 by Rtwomey (talk | contribs) (To Do)


To Do

Camera

  • photo test that gives us an idea of the camera's dynamic range
  • test different imaging modes / adjustments (RAW, Exposure, White Balance)
  • use a different camera

RoverDriver

  • output acquisition order as simple list of indices to be used by mp_vsfm_rectify.py

Rectification

  • include camera Z-pos in rectification/warping (to truly make them coplanar)

Images

  • color balance / retouch photo results

Rectifier

  • save arbitrary number of textures at given max resolution
  • save tile number along with x and y pos for each camera

Refocuser

Projection

  • fill 7'9" vertical @ 11'5" with HaNa's short throw.
  • container is 7'8" x 7'9" (WxH) in cross section
  • screen frame

Mechanism

  • make larger spools.
  • get clamp grips and threaded mounts for top pulleys
  • get better bobbers

Material

  • shoot scenes
  • record sound.

Future Investigations

  • check std dev error in cam position, defer to avg/expected camera position for vsfm result

Acquisition Workflow

Rover close bright.jpg

Prepping the Raspberry Pi

  • Power on the Raspberry Pi.
  • Connect to Rover_AP wireless access point when it becomes available.
    • pwd: roverrover
  • You can now access the pi at rover.local using ssh, Finder file sharing (afp://rover.local), or SuperCollider.

Check that the camera service is running on the raspberry pi

The python camera acquisition script should start automatically when the Pi boots up. To check this (or monitor acquisition as it happens) log onto the pi.

  • From terminal on your laptop, ssh to the pi:
ssh pi@rover.local
  • pwd: raspberry
  • Connect to the camera session currently running on the pi. At pi terminal, type:
screen -R camera
  • This will connect to the screen session started at boot. You will see 'zzzz...' periodically as the pi waits for commands. You are ready to acquire images!
  • To exit screen, use the key combination Ctrl-A D in the terminal. This detaches from the screen session and leaves the camera service running. You can exit ssh if you wish and everything will still be good to go. More details about gnu screen here

Check for previous images and clear out lfimages directory

  • Open another terminal and log on to the pi with ssh as outlined above. You will use this window to clear out the lightfield images from the pi, and monitor the jpgs as they are acquired.
  • Once logged on, navigate to the lfimages directory:
cd ~/lfimages
  • Do whatever you need to do. Make a directory for data you just acquired (mkdir newset). Move all jpgs into that directory (mv *.jpg newset).
  • To free up space, remove all old images (rm *.jpg).
  • List images with:
ls -la
  • Check free disk space with:
df -h

This will list the space per partition/device in human-readable format.

  • Check disk usage for lfimages directory:
du -h ~/lfimages

This will list disk usage for the lfimages directory.

Previewing Outer Corners of Acquisition Set

It is easiest to do this through Finder. To access the pi, use Go -> Connect to Server: afp://rover.local

  • Navigate to home/lfimages
  • All images are stored here. Simply drag and drop the preview images to your local desktop.

Transfer images to your laptop

It is quickest to use rsync or scp to copy lightfield images from the pi to your laptop.

  • In the terminal:
scp -r pi@rover.local:~/lfimages/*.jpg /path/to/lightfield/data/newset

Change the second argument to the path where you want to store the data locally.

  • Enter the password for pi, and the copy should begin.
  • Copying is quickest over wired connection rather than wifi.

Shutting Down the Pi

  • When you are done, it is good practice to shut down the pi.
  • Ssh to the pi.
  • Tell it to shut down, and exit your ssh session:
sudo poweroff; exit
  • To do all of this in one fell swoop, use the following one-line command:
ssh pi@rover.local 'sudo poweroff; exit'

This uses ssh to connect and executes the commands in single quotes. Your pi should turn off. It is now safe to turn off the battery or unplug the cable.

Apparatus Details

Alignment Method 1: OpenCV findHomography on 2d image features

Stitching

  • Stitching with OpenCV in Python.
  • Process:
    • SIFT feature detection on input images.
    • K-nearest-neighbor matching for each test image.
    • RANSAC (RANdom SAmple Consensus) motion parameter estimation between test and nearest neighbor match
  • adapted from: https://github.com/cbuntain/stitcher/

Test 0015.jpg 9.JPG Recenter0010.jpg

  • recentering warped images with imagemagick:
convert warp*.jpg -gravity center -background black -extent 1542x1140 recentered/output.jpg

Multithreaded Alignment

Python code for image alignment/warp based on 2d features and homography (projective transform):

  • if you acquired jpgs, convert them to pngs:
mogrify -format png *.jpg
  • align all the images:
python mpalign.py /Volumes/Cistern/Pictures/lightfield/office4 /Volumes/Cistern/Pictures/lightfield/office4/features /Volumes/Cistern/Pictures/lightfield/office4/warp /Volumes/Cistern/Pictures/lightfield/office4/test_0028.png

Code:

Contact Sheet

  • Create thumbnails:
mogrify -verbose -format jpg -path thumbs -thumbnail 2048x1152 warp/*.png
  • Combine multiple thumbnails onto single large contact sheet
montage -verbose -background "#000000" -geometry +0+0 -tile 8x8 thumbs/*.jpg plenoptic_rect.jpg

Alignment Method 2: Rectify images based on Visual SFM results

Alternate method to calculate camera positions and rectify images using Visual Structure From Motion (SFM) software.

point cloud and cameras:

Visual sfm cameras.png

calculated camera positions:

Calculated camera grid.png

Camera grid side.png

  • example command line call:
./VisualSFM sfm+pmvs ~/code/lightfield/data/dark_trees/original/ ~/code/lightfield/data/dark_trees/results/result.nvm

Camera Positions

Saved from VSFM on Windows:

Example header information from results:

# Camera parameter file. 

# The format of each camera is as follows:
# Filename (of the undistorted image in visualize folder)
# Original filename
# Focal Length (of the undistorted image)
# 2-vec Principal Point (image center)
# 3-vec Translation T (as in P = K[R T])
# 3-vec Camera Position C (as in P = K[R -RC])
# 3-vec Axis Angle format of R
# 4-vec Quaternion format of R
# 3x3 Matrix format of R
# [Normalized radial distortion] = [radial distortion] * [focal length]^2
# 3-vec Lat/Lng/Alt from EXIF

# The nubmer of cameras in this reconstruction
48

00000000.jpg
V:\Projects\lightfield\data\set2\raw\frame0015.jpg
2933.38110352
1296 972
-0.201449252688 0.161023091157 0.0206971828291
0.203128468259 -0.0144404684961 -0.159588810006
-1.47080192867 0.0804345818139 0.0671010757419
0.740311886625 -0.670566470894 0.0366716500798 0.0305926519924
0.995440975224 -0.0944777648417 0.0132681673468
-0.00388534918841 0.0988118639558 0.99510095046
-0.095325666478 -0.990612366239 0.0979939187319
-0.0390550480576
0 0 0

Python code

Python code to do image alignment/warp based on VSFM results.

  • Read in camera data from cameras_v2.txt file (exported from Visual SFM on Windows).
  • Plot camera centers.
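The camera file can be read with a small parser. This is a minimal sketch (not the actual mp_vsfm_rectify.py code), assuming each camera record is 13 non-blank lines in the order given in the file header above:

```python
def parse_cameras_v2(path):
    """Parse camera records from a VisualSFM cameras_v2.txt file.

    Assumes each record is 13 non-blank lines, in the order listed in
    the file header; '#' lines are comments."""
    with open(path) as f:
        lines = [ln.strip() for ln in f
                 if ln.strip() and not ln.strip().startswith('#')]
    n = int(lines[0])  # number of cameras in the reconstruction
    cams = []
    for i in range(n):
        rec = lines[1 + 13 * i : 1 + 13 * (i + 1)]
        cams.append({
            'name':      rec[0],                                # undistorted image
            'original':  rec[1],                                # source filename
            'focal':     float(rec[2]),                         # pixels
            'principal': [float(v) for v in rec[3].split()],    # image center
            'T':         [float(v) for v in rec[4].split()],    # P = K[R T]
            'C':         [float(v) for v in rec[5].split()],    # camera center
            'R':         [[float(v) for v in rec[k].split()]
                          for k in (8, 9, 10)],                 # 3x3 rotation
            'radial':    float(rec[11]),                        # normalized distortion
        })
    return cams
```

Plotting the 'C' entries of each camera gives the camera-center scatter shown below.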

Scatter camera centers.png

  • Rectify images to common sampling plane (u, v) with horizontal and vertical spacings.
  • Generating contact sheet / composite of all frames in single texture

Multithreaded Rectification and Contact Sheet Generation from VSFM results in Python

python mp_vsfm_rectify.py ../data/home3/captured ../data/home3/undistorted \
../data/home3/results.nvm.cmvs/00/cameras_v2.txt ../data/home3/warped ../data/home3/thumbs \
../data/home3/camera_pos.txt ../data/home3/contact.jpg

Camera Positions

File:precise_camera_pos.zip

Code:

Coordinate Systems

  • Raspberry pi camera parameters: [1]
Sensor resolution	2592 x 1944 pixels
Sensor image area	3.76 x 2.74 mm
Pixel size	1.4 µm x 1.4 µm
Focal length	3.60 mm +/- 0.01
Horizontal field of view	53.50 +/- 0.13 degrees
Vertical field of view	41.41 +/- 0.11 degrees
  • VSFM Coordinates [2]:
  • So, for example, a VSFM focal length of 2933.38110352 px * 0.0014 mm/px = 4.1 mm focal length
  • Actual focal length: 3.6 mm / 0.0014 mm per pixel = 2571.43 pixels
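The unit conversion can be written out as a small helper (the 1.4 µm pixel size is from the Raspberry Pi camera table above; function names are ours):

```python
# Raspberry Pi camera: 1.4 um pixels = 0.0014 mm per pixel
PIXEL_SIZE_MM = 0.0014

def focal_px_to_mm(f_px):
    """Convert a focal length in pixels (as reported by VSFM) to mm."""
    return f_px * PIXEL_SIZE_MM

def focal_mm_to_px(f_mm):
    """Convert a focal length in mm (datasheet value) to pixels."""
    return f_mm / PIXEL_SIZE_MM

print(round(focal_px_to_mm(2933.38110352), 2))  # 4.11 (mm, VSFM estimate)
print(round(focal_mm_to_px(3.6), 1))            # 2571.4 (px, datasheet value)
```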

See Camera Calibration below

Homographies

example homography calculated from SIFT features using cv2.findHomography, in mpalign.py above:

Closest Image for frame0020.jpg: frame0000.jpg 0.151351351351
Writing enlarged keyframe
homography: [[  9.62688380e-01  -3.88236406e-01   2.76450722e+02]
 [  8.42117368e-03   1.27492634e+00  -2.62422323e+02]
 [ -2.70420863e-05   2.11710811e-05   1.00000000e+00]]

To do: construct H (homography) for a perspective transform based on the VisualSFM camera positions.
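One candidate construction, sketched here as an assumption rather than a verified method: if two views differ mainly by rotation, the image-to-image map is the conjugated rotation, x_rect ~ K R^T K^-1 x. (For translated cameras, like the rover grid, the full plane-induced homography K (R - t n^T / d) K^-1 would be needed instead.)

```python
import numpy as np

def intrinsics(f, cx, cy):
    """Camera matrix K from focal length and principal point (pixels)."""
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])

def rectifying_homography(K, R):
    """For a camera P = K[R T], a world direction d images at x ~ K R d.
    Undoing the rotation maps the view to a common orientation:
    x_rect ~ K R^T K^-1 x."""
    return K @ R.T @ np.linalg.inv(K)
```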

Contact Sheet

mkdir thumbs
mogrify -verbose -format jpg -path thumbs -thumbnail 1024x768 warped/*.jpg
montage -verbose -background "#000000" -geometry +0+0 -tile 16x14 thumbs/*.jpg plenoptic_rect.jpg

16953941458_920724b2b7_z.jpg

Camera Calibration

  • acquire a series of images. we used raspistill, called from SuperCollider
  • create image list with opencv:
/Users/rtwomey/code/opencv-2.4.10/build/bin/cpp-example-imagelist_creator images.xml *.jpg
  • calibrate camera (9 x 6 pattern, 24mm squares):
/Users/rtwomey/code/opencv-2.4.10/build/bin/cpp-example-calibration -w 9 -h 6 -s 24 images.xml
  • gives results like:
<?xml version="1.0"?>
<opencv_storage>
<calibration_time>"Thu Apr 30 07:55:55 2015"</calibration_time>
<image_width>2592</image_width>
<image_height>1944</image_height>
<board_width>9</board_width>
<board_height>6</board_height>
<square_size>24.</square_size>
<flags>0</flags>
<camera_matrix type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>
    2.5148029100863805e+03 0. 1.2932899929829127e+03 0.
    2.5196679186760493e+03 9.2850768663425754e+02 0. 0. 1.</data></camera_matrix>
<distortion_coefficients type_id="opencv-matrix">
  <rows>5</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>
    8.9556056301440312e-02 -2.4792541533716420e-01
    -3.8170614382947539e-03 6.4675682790495625e-04
    -4.1359741393786997e-01</data></distortion_coefficients>
<avg_reprojection_error>5.5456109151951094e-01</avg_reprojection_error>
</opencv_storage>
  • Apply calibration to undistort images:
python apply_undistort.py picam_calib.xml ../data/towers/undistorted ../data/towers/original/*.jpg
  • VisualSFM.
    • disable radial distortion (nv.ini)
    • use fixed calibration:
2514.80291008 1293.28999298 2519.66791867 928.507686634
    • use undistorted images from previous step
  • Complete VSFM call:
./VisualSFM sfm+pmvs+shared+sort+k=2514.80291008,1293.28999298,2519.66791867,928.507686634 ~/code/lightfield/data/tivon1/undistorted/ ~/code/lightfield/data/tivon1/results/results.nvm

Data Management

  • Copy lightfield data to external disk:
rsync -hvrpt --progress data /media/rtwomey/CAMERA/lightfield/

Results

Resynthesis

First attempts at recombining warped, centered images.

  • Results below sum 1 row of 24 images, from initial set of 432 images (24 x 18) with 1.0 offset in each direction.
  • Camera locations specified in inches (offset from keyframe, negative or positive)
  • Different "focal planes" correspond to different offset scalar value, from 0.0 to 10.0, as indicated in upper left of image.
  • python alignImagesRansac.py images/3images/ images/3images/test0002.png results/3images
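The shift-and-add recombination described above can be sketched as follows (a minimal sketch with integer-pixel shifts via np.roll; the actual pipeline may interpolate, and names here are ours):

```python
import numpy as np

def refocus(images, offsets, alpha):
    """Synthetic refocus by shift-and-add: translate each aligned image
    by alpha * its camera offset (in pixels), then average.

    images:  list of HxW (or HxWxC) float arrays
    offsets: list of (dx, dy) camera positions relative to the keyframe
    alpha:   focal-plane scalar (0.0 = no shift, larger = nearer plane)"""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        sx = int(round(alpha * dx))
        sy = int(round(alpha * dy))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(images)
```

Sweeping alpha from 0.0 to 10.0 reproduces the series of "focal planes" shown in the result images.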

Lf far.png Lf toolbox.png Lf near.png

openFrameworks / openCV weighted sum for resynthesis

requires full set of aligned images and a txt file detailing camera positions

openFrameworks app for GLSL Resynthesis

Recombination using GLSL shader. Images are montaged into one large texture.

openFrameworks application (xCode, oF v0.8):
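The shader samples each camera's sub-image out of the montaged texture; the tile-to-texture-coordinate mapping can be sketched in Python (row-major tile order and the 16x14 grid from the contact sheet above are assumptions):

```python
def tile_uv(cam_index, tiles_x, tiles_y, u, v):
    """Map per-camera texture coords (u, v) in [0, 1] to coords in a
    tiles_x by tiles_y montage texture, with tiles in row-major order."""
    tx = cam_index % tiles_x   # tile column
    ty = cam_index // tiles_x  # tile row
    return ((tx + u) / tiles_x, (ty + v) / tiles_y)
```

The same expression, evaluated per fragment, is what the GLSL lookup would compute before weighting and summing the samples.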

Plenoptic Home

576 co-registered images, acquired with the photo gantry.

You can see that the images are slightly out of order / acquired at the wrong time; this is a problem with the acquisition script.

11864907766_19cf8ef8c7_c.jpg

full resolution

Recombined using a GLSL shader.

Art References

References

Stanford Cameras

Others

Appendices

Enable/Disable Apport

sudo -i gedit /etc/default/apport

A file editor is now open. Change enabled from "0" to a "1" so it looks like this:

enabled=1

To turn it off make it:

enabled=0

Now save your changes and close the file editor.

You can also use sudo service apport stop to turn it off temporarily.

find and delete subdirectories

find and delete subdirectory with certain name:

find . -type d -name thumbs -exec ls {} \;
find . -type d -name thumbs -prune -exec rm -r {} \;

rsync portable lightfield drive to dxnas backup

if dxnas is mounted to /Volumes/lightfield-1:

rsync -avzuh --progress /Volumes/LIGHTFIELD/data/ /Volumes/lightfield-1/data/

zero-pad lightfield filenames

rename 's/\d+/sprintf("%04d",$&)/e' *.jpg

remove messup

rename 's/04d//g' *.jpg

Renaming Xcode Project

cd refocuserGLSL.xcodeproj/
sed -i'.bak' 's/lightfieldGLSL/refocuserGLSL/g' *.*
find . -type f -print0 | xargs -0 sed -i'.bak' 's/lightfieldGLSL/refocuserGLSL/g'
cd project.xcworkspace/
sed -i'.bak' 's/lightfieldGLSL/refocuserGLSL/g' *.*
cd ../xcshareddata/xcschemes/
sed -i'.bak' 's/lightfieldGLSL/refocuserGLSL/g' *.*

Rover GRBL settings

$0=10 (step pulse, usec)
$1=255 (step idle delay, msec)
$2=0 (step port invert mask:00000000)
$3=1 (dir port invert mask:00000001)
$4=0 (step enable invert, bool)
$5=0 (limit pins invert, bool)
$6=0 (probe pin invert, bool)
$10=3 (status report mask:00000011)
$11=1400.000 (junction deviation, mm)
$12=0.002 (arc tolerance, mm)
$13=0 (report inches, bool)
$14=1 (auto start, bool)
$20=0 (soft limits, bool)
$21=1 (hard limits, bool)
$22=1 (homing cycle, bool)
$23=3 (homing dir invert mask:00000011)
$24=250.000 (homing feed, mm/min)
$25=600.000 (homing seek, mm/min)
$26=250 (homing debounce, msec)
$27=60.000 (homing pull-off, mm)
$100=206.598 (x, step/mm)
$101=206.598 (y, step/mm)
$102=250.000 (z, step/mm)
$110=1250.000 (x max rate, mm/min)
$111=1250.000 (y max rate, mm/min)
$112=5000.000 (z max rate, mm/min)
$120=3.000 (x accel, mm/sec^2)
$121=2.000 (y accel, mm/sec^2)
$122=10.000 (z accel, mm/sec^2)
$130=200.000 (x max travel, mm)
$131=200.000 (y max travel, mm)
$132=200.000 (z max travel, mm)