Setting up the Pis
First step: getting my Raspberry Pis set up how I like them. This is something I've done a few times now and should probably write down in detail for people, especially as the setup process these days is quite different to how it was with the earlier versions of the Pi - maybe if enough people ask I will do :)
My ideal setup with the cameras included is:
- Raspberry Pi
- Wireless USB dongle (the TP-LINK WN727N works out of the box with no additional power)
- Pi Camera Module
- Standard power supplies and cases
- Unpowered USB hub + cheap wired keyboard/mouse for the initial setup steps
- HDMI cable to plug into the TV
Once everything is hooked up I proceed to:
- Boot the Raspberry Pis and use the nice NOOBS interface to install Raspbian. The only modification I make in the config screen is to ensure camera support is on.
- Set up the Wi-Fi (or just plug into the network if using a wired connection)
- Assign a static IP address to each Raspberry Pi so my computer can find them easily
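For reference, on Raspbian of this era a static address can be set in /etc/network/interfaces - the addresses below are placeholders, so substitute whatever your own router hands out:

```
# /etc/network/interfaces - example static address (placeholder values)
auto eth0
iface eth0 inet static
    address 192.168.1.64
    netmask 255.255.255.0
    gateway 192.168.1.1
```

For Wi-Fi the same `inet static` stanza applies to wlan0, alongside the usual wpa-supplicant configuration.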
With that done I can connect to the Pis using SSH (via the PuTTY software). Now I proceed to install:
- TightVNCServer (for remote desktop access from the PC)
- CMake (for compiling all sorts of things)
- Samba file sharing (so I can access the Pi's file system from the PC)
- Synergy (handy if I want to run the Pi on the TV but use the mouse/keyboard from the PC)
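For the Samba share, the relevant fragment of /etc/samba/smb.conf ends up looking something like this (the share name and path here are my choices, not requirements):

```
[pihome]
   comment = Pi home directory
   path = /home/pi
   browseable = yes
   read only = no
```

After adding a section like this, restarting the Samba service and setting a password with smbpasswd makes the Pi's home directory visible from the PC.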
In the absence of any details from me on this, a few great websites to look at are:
Testing The Cameras
It's pretty easy to test the Pi cameras, as software comes packaged to record videos, take stills and show the feed on screen. Typing this command into PuTTY:
raspivid -t 30000 -vf
tells the video feed to show on screen for 30 seconds, vertically flipped as my cameras are hanging upside down!
And there's me taking a photo of cameras taking a photo of me taking a photo of the cameras....
Getting the camera in C++....
So far so good - both Pis work, both cameras work and we're set up ready for development. Or are we? It turns out not really - current support for the cameras in terms of coding is extremely minimal. If all you want is an app that regularly takes a snapshot and sends it somewhere, then fine. I, however, am looking to read the camera feed in C++ and do some image processing with it, and at the time of writing this is not an out-of-the-box task.
The core issue with the cameras in terms of coding is that they don't come with Video4Linux drivers, so no standard webcam-reading software (OpenCV included) can just read from them. Clearly it's possible, as the raspivid application does it, and its source code is available, so we have somewhere to start. Fortunately a very clever and helpful chap called Pierre Raufast has already done a load of the digging, and his information is all up here:
As Pierre discovered, the raspivid application uses the MMAL (Multimedia Abstraction Layer) library to access the camera data and transfer it to the screen or encode it as stills/videos. His steps (which I recommend at least reading) are, in short:
- Install the Raspberry Pi camera module and get it going
- Download / build / install the userland code for the Raspberry Pi - this includes all the latest source code for the raspivid application and the libraries it needs. It can be found here: https://github.com/raspberrypi/userland
- Install OpenCV (and, in his case, a face recognition library)
- Copy the raspivid code and create a new modified version that doesn't do any fancy stuff, and instead just grabs the data from the camera and shoves it into an OpenCV window
The key result of Pierre's work is in this file: http://raufast.org/download/camcv_vid0.c
Having gone through his steps and made a few tweaks, I eventually got the code running:
However it dies after a few seconds due to some unknown error (probably because the code is a little out of date), and doesn't do exactly what I need it to.
So my next plan: redo some of Pierre's work using the most recent raspivid application and see if I can come up with a nice tidy camera API.
OK, so after a bit of work I've...
- Stripped out everything to do with encoding from the latest raspivid
- Re-implemented some of Pierre's work to capture the memory from each frame
In other words, I've got a functioning program that can run the camera at 1080p and access the memory for each video frame. Here's the very basic code (currently running at 720p to speed up disk writing):
This blog's got long enough for now, so I'll leave it there and write up my progress turning this preliminary code into a nice camera API in the next installment.