=Acquisition=
'''Gantry'''
[[File:rover_medium.jpg |400px]]
[[File:rover_close_bright.jpg|400px]]
*Two-motor hanging-plotter.
*Runs GRBL: https://github.com/grbl/grbl
*Controlled through Mike McCrea's RoverDriver SuperCollider class.
*SuperCollider: [[:File:lf_sc.zip]]

==Python Code==
Requires opencv, numpy. Python 2.7.
*Generate grid points:
<syntaxhighlight lang="bash">
python sample_lightfield.py office5.txt 13.625 13.625 30.0 30.0 8.0 8.0
</syntaxhighlight>
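The arguments to <code>sample_lightfield.py</code> are not documented on this page. As a rough sketch of the grid-generation step only (not the actual script; the one-point-per-line output format and the argument meanings are assumptions), an 8 x 8 grid of sample positions can be written with numpy like so:
<syntaxhighlight lang="python">
# Illustrative sketch, not the actual sample_lightfield.py.
# Assumes the point file holds one "x y" pair per line.
import sys
import numpy as np

def write_grid(path, width, height, nx, ny):
    """Write an nx-by-ny grid of (x, y) sample positions spanning width x height."""
    xs = np.linspace(0.0, width, nx)
    ys = np.linspace(0.0, height, ny)
    with open(path, 'w') as f:
        for y in ys:          # row-major order; a real rig might serpentine instead
            for x in xs:
                f.write('%f %f\n' % (x, y))

if __name__ == '__main__':
    # e.g. python grid_sketch.py office5.txt 30.0 30.0 8 8  (hypothetical arguments)
    write_grid(sys.argv[1], float(sys.argv[2]), float(sys.argv[3]),
               int(sys.argv[4]), int(sys.argv[5]))
</syntaxhighlight>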
*Generate gcode:
<syntaxhighlight lang="bash">
python render_gcode.py office5.txt office5.nc 0.0 0.0 1.0 57.25 60.0 60.0 400.0
</syntaxhighlight>
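<code>render_gcode.py</code> turns the sampled points into gcode for GRBL. As an illustrative sketch only (the offset, scale, and feed-rate parameters are assumed names, not the script's real interface), each point can be emitted as an absolute G1 move:
<syntaxhighlight lang="python">
# Illustrative sketch, not the actual render_gcode.py.
# Assumes "x y" per input line and one G1 move per point.
import sys

def points_to_gcode(points_path, gcode_path, x_off, y_off, scale, feed):
    with open(points_path) as fin, open(gcode_path, 'w') as fout:
        fout.write('G21\n')   # units: millimetres (an assumption)
        fout.write('G90\n')   # absolute positioning
        for line in fin:
            parts = line.split()
            if len(parts) < 2:
                continue
            x = x_off + scale * float(parts[0])
            y = y_off + scale * float(parts[1])
            fout.write('G1 X%.3f Y%.3f F%.1f\n' % (x, y, feed))

if __name__ == '__main__':
    # e.g. python gcode_sketch.py office5.txt office5.nc 0.0 0.0 1.0 400.0  (hypothetical)
    points_to_gcode(sys.argv[1], sys.argv[2], float(sys.argv[3]),
                    float(sys.argv[4]), float(sys.argv[5]), float(sys.argv[6]))
</syntaxhighlight>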
*Stream and capture:
<syntaxhighlight lang="bash">
python simple_stream_capture.py office5.nc /dev/tty.usbmodem1a12421 /Volumes/Cistern/Pictures/lightfield/office1 1.5 1.5
</syntaxhighlight>
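<code>simple_stream_capture.py</code> presumably streams the gcode to GRBL over the serial port and grabs an image once the gantry has settled at each position. The sketch below shows that general pattern with pyserial and OpenCV; the camera source, settle delay, and file naming are assumptions, and the real script's capture path (for example a networked Raspberry Pi camera) may differ.
<syntaxhighlight lang="python">
# Illustrative sketch, not the actual simple_stream_capture.py.
# Streams gcode to GRBL line by line, then captures a frame after each move.
import sys
import time
import cv2
import serial

def stream_and_capture(gcode_path, port, out_dir, settle_s):
    grbl = serial.Serial(port, 115200)
    grbl.write("\r\n\r\n")        # wake GRBL
    time.sleep(2)                 # give it time to initialise
    grbl.flushInput()             # discard the startup banner
    cam = cv2.VideoCapture(0)     # camera index 0 is an assumption
    idx = 0
    for line in open(gcode_path):
        cmd = line.strip()
        if not cmd:
            continue
        grbl.write(cmd + '\n')
        while True:               # wait for GRBL to acknowledge the line
            resp = grbl.readline().strip()
            if resp == 'ok' or resp.startswith('error'):
                break
        # 'ok' only means the line was accepted, not that motion finished;
        # the fixed settle delay is a crude stand-in for polling GRBL's
        # '?' status report until it goes Idle.
        time.sleep(settle_s)
        ok, frame = cam.read()
        if ok:
            cv2.imwrite('%s/img_%04d.png' % (out_dir, idx), frame)
            idx += 1
    grbl.close()

if __name__ == '__main__':
    # e.g. python stream_sketch.py office5.nc /dev/tty.usbmodem1a12421 ./captures 1.5  (hypothetical)
    stream_and_capture(sys.argv[1], sys.argv[2], sys.argv[3], float(sys.argv[4]))
</syntaxhighlight>
On a hanging plotter a generous settle time matters more than on a rigid gantry, since the camera keeps swinging briefly after the motors stop.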
Files:
*[[:File:sample_lightfield.zip]]
*[[:File:render_gcode.zip]]
*[[:File:simple_stream_capture.zip]]

==Drive Gantry from Raspberry Pi==
'''Prepping Raspberry Pi'''
*Power on the Raspberry Pi.
*Connect to the Rover_AP wireless access point when it becomes available (pwd: roverrover).
*You can now connect to the Pi at rover.local.
*Login to the system:
<syntaxhighlight lang="bash">
ssh pi@camerapi
</syntaxhighlight>
*Check that the camera service is running on the Raspberry Pi. In terminal:
<syntaxhighlight lang="bash">
</syntaxhighlight>
**''Ctrl-A D'' to exit screen without killing the service. Or you can just leave it running.
*File sharing is enabled on the Pi; you can check files or transfer them through Finder after connecting to the server: afp://rover.local
*Acquire images on the 8 x 8 grid.
=Alignment Method 1: OpenCV findHomography on 2D image features=