*Two-motor hanging plotter.
*Runs GRBL: https://github.com/grbl/grbl
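The acquisition scripts below drive the gantry by streaming G-code over serial. A minimal sketch of GRBL's line-by-line protocol (send one line, block until GRBL answers "ok"); <code>simple_stream_capture.py</code> presumably adds a camera trigger and settle delay between moves (an assumption; pyserial required):
<syntaxhighlight lang="python">
import serial
import time

def stream_gcode(path, port, baud=115200):
    con = serial.Serial(port, baud)
    con.write(b"\r\n\r\n")        # wake GRBL
    time.sleep(2)                 # wait out the startup banner
    con.reset_input_buffer()
    with open(path) as f:
        for line in f:
            line = line.split(";")[0].strip()  # drop comments and blank lines
            if not line:
                continue
            con.write(line.encode() + b"\n")
            while True:                        # block until GRBL acks this line
                resp = con.readline().strip()
                if resp.startswith((b"ok", b"error")):
                    break
    con.close()
</syntaxhighlight>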
=Homing parameters=
*homea, homeb: 41.375 inches
*separation: 46 in
*motors: 46 x 36
*offset: 5, 5
*sample space: 36 x 24
*increment: 36
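Assuming <code>homea</code> and <code>homeb</code> are the two belt lengths at the home position and <code>separation</code> is the distance between the motor spools (all in inches; the page does not label these readings), the pen position follows from intersecting the two belt circles:
<syntaxhighlight lang="python">
import math

def belts_to_xy(a, b, separation):
    """Pen position from belt lengths a (left) and b (right), with motors
    at (0, 0) and (separation, 0) and y increasing downward."""
    x = (a**2 - b**2 + separation**2) / (2 * separation)
    y = math.sqrt(max(a**2 - x**2, 0.0))
    return x, y

# Home position from the parameters above: equal belts center the pen.
print(belts_to_xy(41.375, 41.375, 46.0))  # x = 23.0 (= separation / 2)
</syntaxhighlight>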
=Acquisition=
*Setup:
**separation: 57.25 in
*Run the capture:
<syntaxhighlight lang="bash">
python simple_stream_capture.py office5.nc /dev/tty.usbmodem1a12421 /Volumes/Cistern/Pictures/lightfield/office1 1.5 1.5
</syntaxhighlight>
==Controlling Gantry from Raspberry Pi==
*Log in to the system, then run the capture script:
<syntaxhighlight lang="bash">
python simple_stream_capture.py office8.nc /dev/ttyACM0 ~/images 1.5 0.5
</syntaxhighlight>
==Acquire with Raspberry Pi camera==
*Create thumbnails:
<syntaxhighlight lang="bash">
mogrify -verbose -format jpg -path thumbs -thumbnail 2048x1536 warp/*.png
</syntaxhighlight>
*Combine multiple thumbnails onto a single large contact sheet:
<syntaxhighlight lang="bash">
montage -verbose -background "#000000" -geometry +0+0 -tile 8x8 thumbs/*.jpg plenoptic_rect.jpg
</syntaxhighlight>
*Crop down/window:
<syntaxhighlight lang="bash">
mogrify -verbose -format png -path crop -crop 2592x1944+648+486 +repage warp/*.png
</syntaxhighlight>
=Processing=
==Stitching==
*Stitching with OpenCV in Python.
*Process (a minimal sketch follows this list):
**SIFT feature detection on input images.
**K-nearest-neighbor matching for each test image.
**RANSAC (RANdom SAmple Consensus) motion parameter estimation between each test image and its nearest-neighbor match.
*Adapted from https://github.com/cbuntain/stitcher/
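A rough sketch of that pipeline, assuming an OpenCV build where SIFT is available as <code>cv2.SIFT_create()</code>; the project's alignImagesRansac.py differs in detail:
<syntaxhighlight lang="python">
import cv2
import numpy as np

def align_to_keyframe(test_img, key_img):
    # SIFT features in both images.
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(test_img, None)
    kp_k, des_k = sift.detectAndCompute(key_img, None)
    # k-nearest-neighbor matching (k=2) plus Lowe's ratio test
    # to discard ambiguous correspondences.
    matches = cv2.BFMatcher().knnMatch(des_t, des_k, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_k[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC fits the motion model (here a homography) while rejecting outliers.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = key_img.shape[:2]
    return cv2.warpPerspective(test_img, H, (w, h))
</syntaxhighlight>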
[[File:test_0015.jpg|400px]]
[[File:9.JPG|400px]]
[[File:recenter0010.jpg|400px]]
*Recentering warped images with ImageMagick:
<syntaxhighlight lang="bash">
convert warp*.jpg -gravity center -background black -extent 1542x1140 recentered/output.jpg
</syntaxhighlight>
==Recombination==
*Recombining warped, centered images (see the sketch after this list).
*Results below sum 1 row of 24 images, from an initial set of 432 images (a 24 x 18 grid), with a 1.0 offset in each direction.
*Camera locations specified in inches (offset from keyframe, negative or positive).
*Different "focal planes" correspond to different offset scalar values, from 0.0 to 10.0, as indicated in the upper left of each image.
*<code>python alignImagesRansac.py images/3images/ images/3images/test0002.png results/3images</code>
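A minimal sketch of this shift-and-add recombination, under the assumption that each warped image is shifted by the focal-plane scalar times its camera offset and the stack is then averaged; <code>refocus()</code> is a hypothetical helper, not the project's script:
<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, offsets, scalar):
    """Shift-and-add refocus. images: list of HxWx3 arrays; offsets: matching
    camera positions (dx, dy) relative to the keyframe; scalar: the 0.0-10.0
    "focal plane" value. Assumes one pixel of shift per unit of scalar*offset."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        # A larger scalar shifts each view further, moving the plane of focus.
        acc += nd_shift(img.astype(np.float64), (scalar * dy, scalar * dx, 0), order=1)
    avg = acc / len(images)
    return np.clip(avg, 0, 255).astype(np.uint8)
</syntaxhighlight>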
[[File:lf_far.png|400px]]
[[File:lf_toolbox.png|400px]]
[[File:lf_near.png|400px]]
==Multithreaded Alignment==
*If you acquired JPGs, convert them to PNGs first (e.g. with <code>mogrify</code>).
*Run the multithreaded aligner (sketched below):
<syntaxhighlight lang="bash">
python mpalign.py /Volumes/Cistern/Pictures/lightfield/office4 /Volumes/Cistern/Pictures/lightfield/office4/features /Volumes/Cistern/Pictures/lightfield/office4/warp /Volumes/Cistern/Pictures/lightfield/office4/test_0028.png
</syntaxhighlight>
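mpalign.py is not reproduced on this page; a sketch of the likely structure, farming per-image alignment out to a pool of worker processes (<code>align_one</code> and the input glob are hypothetical, standing in for the SIFT/RANSAC step above):
<syntaxhighlight lang="python">
import glob
import os
from multiprocessing import Pool

def align_one(path):
    # Stand-in for the per-image SIFT/RANSAC alignment against the keyframe;
    # each image is independent, so the work parallelizes trivially.
    return path

if __name__ == "__main__":
    paths = sorted(glob.glob("images/*.png"))  # hypothetical input directory
    with Pool(os.cpu_count()) as pool:         # one worker per core
        done = pool.map(align_one, paths)
    print("aligned %d images" % len(done))
</syntaxhighlight>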