Saturday, 27 December 2014

Beginner's guide to soldering

Hi there!

This is just a quick blog to introduce my beginner's guide to soldering video tutorial:



On the off chance that blogspot screws up the video link, this is the permanent YouTube link: http://youtu.be/2d80i5SNXpg

If you're starting out with electronics and finding soldering frustrating, take a look. It's full of the important tips and techniques that make soldering a breeze.

In case you don't get a chance to view the full 40 minutes, here are some top tips from the video (plus a few extras!):

  • The most essential piece of kit for soldering (other than a soldering iron) is the third hand - a simple device that will hold your wires in place while you solder. SparkFun do a good one, but there are loads to choose from (just search for 'electronics third hand' on Google).
  • Wait for the iron to get properly hot before starting.
  • Cover the tip of the iron in solder, then wipe off the excess before using it. This coats it in fresh solder and makes it better at transferring heat.
  • Use the heatproof sponge that (probably) comes with the iron to wipe off excess solder when it builds up. If you haven't got a sponge, any heatproof cloth will do.
  • Always 'tin' your wires/components before trying to join them. This is basically coating the parts you want to join in a thin layer of solder, before actually trying to solder them together.
  • If you accidentally melt the plastic at the end of the wire and it starts to cover the bits you're joining, trim the end off and start again. Solder won't stick to plastic - there's no point trying!
  • Make sure you've got a bit of ventilation, but there's no need to go nuts over it. I make sure I have the door to my study open so smoke doesn't build up in one room, and if it's warm outside I'll open a window.

And the number 1 rule! Heat the components you're joining, not the solder! You might find you have to touch the solder a bit with the iron to get things going, but the key is making sure that the components are hot. If you don't, the solder won't flow onto your components; instead it'll just form big, annoying blobs on the iron itself.

Good luck!

Friday, 26 December 2014

Wendy light mark 2!

Time for a second round of the Wendy light prototype. Last time I got things going (described here) with an ugly but functioning prototype of my light-based alarm clock.

Today I'm doing a second round, with more of a focus on the aesthetics and usability of the clock. While I don't expect to end up with something that looks pretty yet, I'm aiming to improve the controls and work out the basic template for the fancy looking final version.

The big change is to turn this big chunk of buttons:


Into something that looks less like an eighties arcade game, and more like an elegant alarm clock that a beautiful lady might have in her bedroom.

The plan is to replace those buttons with three Phidget 1129 touch sensors and a Sharp GP2D120 infrared range finder. The touch sensors will function as the on/off, reading light and alarm cancel buttons, and the range finder will replace the snooze button.
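
For reference, reading the range finder from an Arduino is just an analogRead: the GP2D120 outputs a voltage that rises as an object gets closer, so a simple threshold is enough to spot a hand waved over the box. Here's a rough sketch of the idea (the pin and threshold are illustrative values, not the ones from my build):

// Treat the Sharp GP2D120 as a simple 'hand nearby' detector.
// IR_PIN and HAND_THRESHOLD are example values, not from the real clock.
const int IR_PIN = A0;          // range finder output -> analog input
const int HAND_THRESHOLD = 300; // raw ADC value; higher reading = closer object

void setup()
{
    Serial.begin(9600);
}

void loop()
{
    int reading = analogRead(IR_PIN);   // 0-1023, rises as your hand gets closer
    if (reading > HAND_THRESHOLD)
    {
        Serial.println("Hand detected - snooze!");
        delay(500);                     // crude debounce so one wave = one snooze
    }
}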

I start off using a jigsaw and a router to carve out the centre of a block of pine, then cut a square sheet of plastic to sit on top of it.


There's a hole cut out for the IR sensor, but the capacitive touch sensors will pick up your finger within about half a centimetre, so they can comfortably sit below the plastic.

Tip: I've spent ages trying to find the best way to cut plastic. After much testing, I've found a Dremel with an abrasive attachment to be the best bet. If you do take this approach though, make absolutely sure you wear safety goggles - it will spit small chunks of hot plastic, and some of those will head for your face. Do not get melted plastic in your eyes!

Next up, the touch sensors:


After a bit of experimentation, I removed the chunky cable attachment and soldered on my own little headers; these are less bulky, which is important now that I'm trying to fit things inside a small and more elegant box.

Tip: If you get a hold of some of these sensors, it's worth knowing they seem very sensitive to power fluctuations - mine stopped working once the IR sensor was plugged in. I shoved a big 250uF capacitor across the power terminals, which seemed to fix the problem. Other than that, they work pretty well - hook up Gnd and Vcc to those of the Arduino, and the data output (the white cable if you haven't removed the socket) to an IO pin, and you'll get a HIGH signal if a finger is within range.
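
Reading one of these from the Arduino is then about as simple as it gets. A rough sketch (the pin number is just an example):

// Read one Phidget 1129 touch sensor: the data line goes HIGH while touched.
// TOUCH_PIN is an arbitrary choice for illustration.
const int TOUCH_PIN = 7;

void setup()
{
    pinMode(TOUCH_PIN, INPUT);
    Serial.begin(9600);
}

void loop()
{
    if (digitalRead(TOUCH_PIN) == HIGH)
        Serial.println("Touched!");
    delay(50);
}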

First step for the innards is to mount all the sensors on a couple of strips of wood that'll sit inside the box:


Next, I hook up all the Vcc and Gnd pins to a little bit of strip board. This allows me to easily attach the big capacitor and connect them to the Vcc and Gnd wires that go to the Arduino (along with the data wire from each sensor):



Tip: If your circuits are ever getting too messy, connecting small chunks of circuit together on a piece of strip board like this is a really good technique. You can cut up strip board with a hacksaw, or (as with the plastic) with a Dremel and an abrasive head (though once again - safety goggles!).

Next up, as I want this to be a little neater, I'm going to stop trailing six wires from bedside table to floor and instead use a Maplin 9-way data cable to connect the switch box to the Arduino:



Data cables are a great way to keep circuitry tidy, and come in either wrapped form (as above), or as ribbon cables. However, due to the tiny wires inside they can be a little tricky to work with, so I generally end up connecting each end to normal solid core prototyping wire; this allows me to use the data cables to bridge large gaps, but still plug things into prototyping boards or Arduinos at each end.

As I'm tight on space, on the box end I'm going to solder the data cable directly to the cables coming out of the sensors (plus another 2 for Gnd and Vcc).


Soldering wires together is a bit tricky to do well, and often results in fairly delicate connections that are easily broken with an accidental tug. A much neater approach (that I used on the Arduino end) is to solder the data cable to a small piece of strip board:


As you can see, this results in a much neater and stronger connection, that can be wrapped in more insulating tape to strengthen it further. Once this is done, you can easily solder normal solid core wire or pin connectors to the other end, making it easy to plug into an Arduino or prototyping board.

A last bit of work in the shed to drill out the holes and make a cover for the base of the switch box:


Attach everything together, along with some button labels (aka masking tape with my writing on):


And it's ready to hook up!

Not a lot to show in terms of actual functional changes, though I may get around to a video of me waving my hand over the sensor to snooze the alarm! Other than that:
  • I've tweaked the reading lights so the lightest setting is lighter (based on user feedback...)
  • The central LEDs now stay dimmed for anything other than the alarm
  • The alarm cancel now triggers a small blue pulse effect, similar to the red/green ones for on/off. This is just to give some confirmation that the touch button worked (there's a rough sketch of the effect below).
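
For the curious, the pulse effect itself is nothing fancy - something along these lines will do it (a simplified sketch using the Adafruit NeoPixel library; the pin and LED count are placeholders rather than my actual wiring):

#include <Adafruit_NeoPixel.h>

// Illustrative values only - not the real pin/LED count from the project.
#define LED_PIN    6
#define LED_COUNT  86

Adafruit_NeoPixel strip(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);

// Fade all LEDs up to a colour and back down again to acknowledge a button press.
void pulse(uint8_t r, uint8_t g, uint8_t b)
{
    for (int step = 0; step < 512; step++)
    {
        int level = step < 256 ? step : 511 - step;   // ramp up, then back down
        for (int i = 0; i < LED_COUNT; i++)
            strip.setPixelColor(i, (r * level) / 255, (g * level) / 255, (b * level) / 255);
        strip.show();
        delay(2);
    }
}

void setup()
{
    strip.begin();
    strip.show();       // start with everything off
}

void loop()
{
    pulse(0, 0, 255);   // blue pulse, e.g. when the alarm cancel is touched
    delay(2000);
}
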
So I'm generally happy with prototype number 2, and I think the final version will be fairly similar. The box itself I'll probably make out of white oak, then stain to the same colour as the wood the bed is made of. Plus of course I'll have to make a proper box for my side of the bed too! Then a less fancy box for the Arduino under the bed, the LEDs fixed behind the wall, probably some small side buttons so you can set the alarm time and things, and printed button labels (ideally glow-in-the-dark). That should just about round off the Wendy light!





Tuesday, 23 December 2014

The Wendy Light

After watching Wendy suffer repeatedly in the mornings, I've decided to build a prototype light-based alarm clock. The idea (as various research and existing products indicate) is that waking up as a result of changes in light is much nicer than loud angry noises.

The system will initially be simple. I'll preset an MR005-001 clock breakout board (containing a DS1307 chip) to the 'correct' time from my PC, and use it to trigger a NeoPixel LED strip.

Research, as I understand it, suggests that your circadian rhythm is largely dictated by ipRGC cells in your eyes. These are especially sensitive to blue wavelengths and not very sensitive to red, so my clock will feature a red reading light (that automatically fades off after an hour) and a blue wake-up light triggered at 8AM.

The initial step was setting up this real-time clock chip:

The clock, along with some stunning soldering (ahem)

It's a DS1307 real-time clock breakout board that communicates with the Arduino over I2C. Fortunately, the folks at Adafruit have been good enough to provide a ready-made Arduino library for it, so getting the thing going is pretty easy. Here it is wired up:

The clock hooked up to the I2C pins of an Arduino Mega

And running their demo shows the time/date being regularly printed out just as you'd expect.
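
If you want to try the same thing, a minimal sketch using Adafruit's RTClib looks roughly like this (the 8AM check at the end is just to illustrate how an alarm trigger might work; it isn't the actual Wendy light code):

#include <Wire.h>
#include <RTClib.h>

RTC_DS1307 rtc;

void setup()
{
    Serial.begin(9600);
    Wire.begin();
    rtc.begin();                                   // talks to the DS1307 over I2C
    if (!rtc.isrunning())
        rtc.adjust(DateTime(__DATE__, __TIME__));  // set to compile time if unset
}

void loop()
{
    DateTime now = rtc.now();
    Serial.print(now.hour());
    Serial.print(':');
    Serial.println(now.minute());

    // the basic idea behind the wake-up light: check the hour and trigger the alarm
    if (now.hour() == 8 && now.minute() == 0)
        Serial.println("Time to start the blue wake-up fade!");

    delay(1000);
}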

Next up are the very funky NeoPixel LEDs. Also from Adafruit, these awesome full-colour LEDs come in various forms (strips, grids, rings) and are great fun to play with. And of course, in true Adafruit style, a ready-made Arduino library exists, so they're pretty much plug-and-play.

2 strips of Neopixels running at full whack

A quick point of note - if you do get some Neopixels (and I highly recommend it), it is really important to read the Adafruit guide. The strips work out of the box, but the information supplied about current requirements and protective components is worth knowing before you even think about plugging anything in. Reading documentation isn't my strong point, but after half an hour working out why my Arduino shut down whenever I turned them on, and 2 strips of destroyed LEDs, I begrudgingly read the docs and felt very silly for not doing so!

If you do blow some LEDs, often it'll only be the first few in the strip. Try chopping off the first few and the odds are the rest of your strip will be fine.

It's also worth knowing that these LEDs draw a healthy number of amps at full power, so it's worth investing in a decent power supply. I'm running 86 LEDs at full brightness off a 4A supply from oomlout, which is doing the job nicely. For similar reasons, make sure your wiring can take enough current, or you'll end up with melted plastic in your project!
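
Once they're powered properly, driving them really is plug-and-play. Something like this lights the whole strip (the data pin is an example value; as above, keep an eye on brightness and current):

#include <Adafruit_NeoPixel.h>

#define LED_PIN    6     // data pin - example value
#define LED_COUNT  86    // number of LEDs in the strip

Adafruit_NeoPixel strip(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);

void setup()
{
    strip.begin();
    strip.setBrightness(64);                 // keep the current draw sensible while testing
    for (int i = 0; i < LED_COUNT; i++)
        strip.setPixelColor(i, 255, 40, 0);  // a warm orange 'reading light' colour
    strip.show();                            // nothing changes until show() is called
}

void loop() {}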

The final job is some basic wiring up of chunky switches, and some bodgy carpentry to mount them in a box. With the LEDs fixed to a chunk of door frame edging and stuck on top of the bed, we're all set.

Chunky buttons on Wendy's side of the bed
Inside of the button box
The Arduino and clock hidden under the bed
The LED strip lit up orange in 'reading light' mode

The circuitry is pretty simple, as there are no clever components going on here: standard switch wiring, plus the data line for the LEDs and the I2C lines for the clock:


You can see here 4 separate blocks (each in its own little physical wooden box). These break down into:

  • The main clock circuit, consisting of the Arduino and the DS1307 clock.
  • The lights, with a small protective circuit; a big capacitor to protect against spikes, and a small resistor to protect the data line.
  • Switch Box 1 (on Wendy's side of the bed). This is just a collection of grounded push switches, each connected to an IO pin on the Arduino.
  • Switch Box 2 (on my side of the bed). Due to a lack of chunky switches in my collection, this currently only contains a switch for the reading light, but would eventually have the same set as switch box 1.

The full code can be downloaded here.


And last but not least, a video of the clock in action!


Next up, once I've refined the code a little I'll be updating the system to be a bit more aesthetically pleasing, and probably implementing a few handy extra features like setting the time/alarm.

(update, if after reading this lovely blog you are so excited you want to see more, the next step is here!)

Sunday, 27 October 2013

GPU Accelerated Camera Processing On The Raspberry Pi

Hallo!

Over the past few days I've been hacking away at the camera module for the Raspberry Pi. I made a lot of headway creating a nice, simple API for the camera, which is detailed here:

http://robotblogging.blogspot.co.uk/2013/10/an-efficient-and-simple-c-api-for.html

However, I wanted to get some real performance out of it, and that means GPU TIME! Before I start explaining things, the code is here:

http://www.cheerfulprogrammer.com/downloads/picamgpu/picam_gpu.zip

Here's a picture:



And a video of the whole thing (with description of what's going on!)




The API I designed could use MMAL for doing colour conversion and downsampling the image, but it was pretty slow and got in the way of OpenGL. However, I deliberately allowed the user to ask the API for the raw YUV camera data. This arrives as a single block of memory, but it really contains 3 separate greyscale textures: one containing the 'luminosity' (Y) at full resolution, plus 2 more (U and V) at half resolution in each axis that together specify the colour of a pixel:



I make a few tweaks to my code to generate these 3 textures:

        //lock the chosen frame buffer, and copy it into textures
        {
            const uint8_t* data = (const uint8_t*)frame_data;
            int ypitch = MAIN_TEXTURE_WIDTH;
            int ysize = ypitch*MAIN_TEXTURE_HEIGHT;
            int uvpitch = MAIN_TEXTURE_WIDTH/2;
            int uvsize = uvpitch*MAIN_TEXTURE_HEIGHT/2;
            int upos = ysize;
            int vpos = upos+uvsize;
            ytexture.SetPixels(data);
            utexture.SetPixels(data+upos);
            vtexture.SetPixels(data+vpos);
            cam->EndReadFrame(0);
        }

And write a very simple shader to convert from YUV to RGB:

varying vec2 tcoord;
uniform sampler2D tex0;
uniform sampler2D tex1;
uniform sampler2D tex2;
void main(void) 
{
    float y = texture2D(tex0,tcoord).r;
    float u = texture2D(tex1,tcoord).r;
    float v = texture2D(tex2,tcoord).r;

    vec4 res;
    res.r = (y + (1.370705 * (v-0.5)));
    res.g = (y - (0.698001 * (v-0.5)) - (0.337633 * (u-0.5)));
    res.b = (y + (1.732446 * (u-0.5)));
    res.a = 1.0;

    gl_FragColor = clamp(res,vec4(0),vec4(1));
}


Now I simply run the shader to read in the 3 YUV textures and write out an RGB one, ending up with this little number:


Good hat yes? Well, hat aside, the next thing to do is provide downsamples so we can run image processing algorithms at different levels. I don't even need a new shader for that; I can just run the earlier shader, aiming it at successively lower resolution textures. Here's the lowest one now:


The crucial thing is that in OpenGL you can create a texture and then tell it to also double as a frame buffer, using the following code:

bool GfxTexture::GenerateFrameBuffer()
{
    //Create and bind a new frame buffer
    glGenFramebuffers(1,&FramebufferId);
    check();
    glBindFramebuffer(GL_FRAMEBUFFER,FramebufferId);
    check();

    //point it at the texture (the id passed in is the Id assigned when we created the open gl texture)
    glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,Id,0);
    check();
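
    //(suggested addition, not in the original code: verify the attachment actually
    //worked before relying on the framebuffer)
    if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        printf("Framebuffer incomplete!\n");
        glBindFramebuffer(GL_FRAMEBUFFER,0);
        return false;
    }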

    //cleanup
    glBindFramebuffer(GL_FRAMEBUFFER,0);
    check();
    return true;
}

Once you have a texture as a frame buffer you can set it to be the target to render to (don't forget to set the viewport as well):

        glBindFramebuffer(GL_FRAMEBUFFER,render_target->GetFramebufferId());
        glViewport ( 0, 0, render_target->GetWidth(), render_target->GetHeight() );
        check();


And you can also use glReadPixels to read the results back to the CPU (which I do here to save to disk using the lodepng library):

void GfxTexture::Save(const char* fname)
{
    void* image = malloc(Width*Height*4);
    glBindFramebuffer(GL_FRAMEBUFFER,FramebufferId);
    check();
    glReadPixels(0,0,Width,Height,IsRGBA ? GL_RGBA : GL_LUMINANCE, GL_UNSIGNED_BYTE, image);
    check();
    glBindFramebuffer(GL_FRAMEBUFFER,0);

    unsigned error = lodepng::encode(fname, (const unsigned char*)image, Width, Height, IsRGBA ? LCT_RGBA : LCT_GREY);
    if(error) 
        printf("error: %d\n",error);

    free(image);
}

These features give us a massive range of capability. We can now chain together various shaders to apply multiple levels of filtering, and once the GPU is finished with them the data can be read back to the CPU and fed into image processing libraries such as OpenCV. This is really handy, as algorithms such as object detection often have to do costly filtering before they can operate; using the GPU as above, we can avoid the CPU needing to do that work.
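
To make that concrete, here's a rough sketch of how a chain can be wired up using only the GfxTexture helpers shown in this post. The rgbtexture variable stands in for the RGB texture produced by the YUV conversion step, and in the real filters each plain DrawTextureRect call would be a draw using the relevant filter shader (blur, threshold and so on from the list below):

// Hedged sketch of the chaining idea: render one texture into another via its
// framebuffer, repeat, then read the result back for CPU-side processing.
GfxTexture stage1, stage2;
stage1.Create(256, 256);
stage2.Create(128, 128);
stage1.GenerateFrameBuffer();
stage2.GenerateFrameBuffer();

// pass 1: draw the full-res RGB texture into stage1 (this is where a blur shader would run)
glBindFramebuffer(GL_FRAMEBUFFER, stage1.GetFramebufferId());
glViewport(0, 0, stage1.GetWidth(), stage1.GetHeight());
DrawTextureRect(&rgbtexture, -1.f, -1.f, 1.f, 1.f);

// pass 2: draw stage1 into stage2 (this is where a threshold shader would run)
glBindFramebuffer(GL_FRAMEBUFFER, stage2.GetFramebufferId());
glViewport(0, 0, stage2.GetWidth(), stage2.GetHeight());
DrawTextureRect(&stage1, -1.f, -1.f, 1.f, 1.f);

// point rendering back at the screen and pull the filtered result out for the CPU
glBindFramebuffer(GL_FRAMEBUFFER, 0);
stage2.Save("filtered.png");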

Thus far I've written the following filters:

  • Gaussian blur
  • Dilate
  • Erode
  • Median
  • Threshold
  • Sobel
Here are a few of them in action:
 

Enjoy!

p.s. my only annoyance right now is that I still have to go through the cpu to get my data from mmal and into opengl. If anyone knows a way of getting from mmal straight to opengl that'd be super awesome!

pp.s. right at the end, here's a tiny shameless advert for my new venture - http://www.happyrobotgames.com/no-stick-shooter. If you like my writing, check out the dev blog for regular updates on my first proper indie title!

Saturday, 26 October 2013

An Efficient And Simple C++ API for the Raspberry Pi Camera Module

For the past few days I've been messing around with my new Raspberry Pi camera modules (see earlier blog posts for excessive details), and part of that has involved putting together a nice, easy-to-use API to access the camera in C++ and read its frames. This post is a guide to installation, an overview of the very simple API, and a description of the sample application.



One word of caution - as with any tinkering there is always a chance something will go wrong and result in a dead pi. If this worries you, back up first. I didn't bother, but I didn't have anything on there I was worried about losing!

Installation

Make sure you're on a recent Raspberry Pi build, and have a working Camera!

I'm assuming at this point you've got a camera module and it's working. If you've not set it up yet you may need to update your Raspberry Pi (depending on when you bought it). I won't go over this process as it's been described 100 times already, but here's a link to get you going just in case:

http://www.raspberrypi.org/archives/3890

Once all is up and running type:

raspivid -t 10000

That should show you the raspberry pi video feed on screen for 10 seconds.

Get CMake

If you haven't already got it, you'll need cmake for building just about anything:

sudo apt-get install cmake

Download and install the latest 'userland-master'

This is the bit of the Raspberry Pi OS that contains the code for the camera applications and the various libraries they use. At the time of writing it isn't supplied as part of the install, so you need to download, build and install it manually. To do so:

Download the latest userland-master.zip from here

Unzip it into your /opt/vc directory. You should now have a folder called /opt/vc/userland-master with various folders in it such as "host_applications" and "interfaces".

Change to the /opt/vc/userland-master folder, then build it with the following commands:

sudo mkdir build
cd build
sudo cmake -DCMAKE_BUILD_TYPE=Release ..
sudo make
sudo make install

Test everything worked by running raspivid again. You may see some different messages pop up (I got some harmless errors probably due to the build being so recent), but the crucial thing is that you still get the video feed on screen.

Download and build the PiCam API/Samples

The api and samples can all be downloaded here:
http://www.cheerfulprogrammer.com/downloads/picamtutorial/picamdemo.zip

Extract them into a folder in your home directory called 'picamdemo'. You should have a few cpp files in there, plus a CMake file and some shaders.

Change to the folder and build the application with:

cmake .
make

Then run the sample with

./picamdemo

If all goes well you should see some text like this:
Compiled vertex shader simplevertshader.glsl:
<some shader code here>

Compiled fragment shader simplefragshader.glsl:
<some shader code here>

mmal: mmal_vc_port_parameter_set: failed to set port parameter 64:0:ENOSYS
mmal: Function not implemented
Init camera output with 512/512
Creating pool with 3 buffers of size 1048576
Init camera output with 256/256
Creating pool with 3 buffers of size 262144
Init camera output with 128/128
Creating pool with 3 buffers of size 65536
Init camera output with 64/64
Creating pool with 3 buffers of size 16384
Camera successfully created
Running frame loop

And your TV should start flicking between various resolutions of the camera feed like this:



(Edit - I've had some reports of the Blogger YouTube embed not working. You can see the full video here on YouTube proper: http://www.youtube.com/watch?v=9bWJBSNxeXk)

The API (and what it does!)

PiCam is designed to be very simple but also useful for image processing algorithms. Right now it lets you:
  • Start up the camera with a given width, height and frame rate
  • Specify a number of 'levels'. More on that later.
  • Choose whether to automatically convert the camera feed to RGBA format
Basic Initialisation

All this is done just by calling StartCamera and passing in the right parameters. It returns a pointer to a CCamera object as follows:

CCamera* mycamera = StartCamera(512,512,30,1,true);

That's a 512x512 image at 30Hz, with 1 level and RGBA conversion enabled.

Reading

Once started you can extract frames from the camera by calling ReadFrame and passing in a buffer:

char mybuffer[512*512*4];
mycamera->ReadFrame(0,mybuffer,sizeof(mybuffer));

ReadFrame returns the number of bytes actually read, 0 if no new frame was available yet, or -1 if your buffer wasn't large enough for the frame.
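
In other words, a typical polling loop checks the return value something like this (just a sketch, not code from the demo):

char mybuffer[512*512*4];
int bytes = mycamera->ReadFrame(0, mybuffer, sizeof(mybuffer));
if(bytes > 0)
{
    // got a frame: 'bytes' bytes of image data are now in mybuffer
}
else if(bytes == 0)
{
    // no frame ready yet - just try again next time round the loop
}
else // -1
{
    // mybuffer wasn't big enough to hold the frame
}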

In addition to ReadFrame there are two further functions: BeginReadFrame and EndReadFrame. These slightly more advanced versions are shown in the demo and allow you to be more efficient by locking the actual camera buffer, using it in place, then releasing it. Internally, ReadFrame is implemented using these functions.

Shutting down

Once you're done, call StopCamera().

Levels

In image processing it is often useful to have your data provided at different resolutions. Expensive operations need to be performed on low res images to run at a good frame rate, but you may still want higher res versions around for other operations or even just showing on screen. The PiCam api will do this for you automatically (for up to 3 additional levels). If we modify the StartCamera call to this:

CCamera* mycamera = StartCamera(512,512,30,4,true);

The system will automatically generate the main image plus an additional 3 down-sampled ones (at half res, quarter res and 1/8th res). These are then accessed by specifying a level other than 0 in the call to ReadFrame (or BeginReadFrame):

mycamera->ReadFrame(0,mybuffer,sizeof(mybuffer)); //get full res frame
mycamera->ReadFrame(1,mybuffer,sizeof(mybuffer)); //get half res frame
mycamera->ReadFrame(2,mybuffer,sizeof(mybuffer)); //get quarter res frame
mycamera->ReadFrame(3,mybuffer,sizeof(mybuffer)); //get 1/8th res frame

RGBA Conversions

For most purposes you'll want the data in a nice friendly RGBA format; however, if you actually want the raw YUV data straight from the camera, specify false as the last parameter to StartCamera and no conversion will be done for you.

The demo application

The picamdemo application consists of the core camera code in these files:
  • camera.h/camera.cpp
  • cameracontrol.h/cameracontrol.cpp
  • mmalincludes.h
A very simple opengl graphics api (which you are welcome to use/modify/change in any way you please):
  • graphics.h/graphics.cpp
And the main demo app itself:
  • picam.cpp
Which looks like this:

#include <stdio.h>
#include <unistd.h>
#include "camera.h"
#include "graphics.h"

#define MAIN_TEXTURE_WIDTH 512
#define MAIN_TEXTURE_HEIGHT 512

char tmpbuff[MAIN_TEXTURE_WIDTH*MAIN_TEXTURE_HEIGHT*4];

//entry point
int main(int argc, const char **argv)
{
    //should the camera convert frame data from yuv to argb automatically?
    bool do_argb_conversion = true;

    //how many detail levels (1 = just the capture res, >1 goes down by halves, 4 max)
    int num_levels = 4;

    //init graphics and the camera
    InitGraphics();
    CCamera* cam = StartCamera(MAIN_TEXTURE_WIDTH, MAIN_TEXTURE_HEIGHT,30,num_levels,do_argb_conversion);

    //create 4 textures of decreasing size
    GfxTexture textures[4];
    for(int texidx = 0; texidx < num_levels; texidx++)
        textures[texidx].Create(MAIN_TEXTURE_WIDTH >> texidx, MAIN_TEXTURE_HEIGHT >> texidx);

    printf("Running frame loop\n");
    for(int i = 0; i < 3000; i++)
    {
        //pick a level to read based on current frame (flicking through them every 30 frames)
        int texidx = (i / 30)%num_levels;

        //lock the chosen buffer, and copy it directly into the corresponding texture
        const void* frame_data; int frame_sz;
        if(cam->BeginReadFrame(texidx,frame_data,frame_sz))
        {
            if(do_argb_conversion)
            {
                //if doing argb conversion just copy data directly
                textures[texidx].SetPixels(frame_data);
            }
            else
            {
                //if not converting argb the data will be the wrong size so copy it in
                //via a temporary buffer just so we can observe something happening!
                memcpy(tmpbuff,frame_data,frame_sz);
                textures[texidx].SetPixels(tmpbuff);
            }
            cam->EndReadFrame(texidx);
        }

        //begin frame, draw the texture then end frame (the bit of maths just fits the image to the screen while maintaining aspect ratio)
        BeginFrame();
        float aspect_ratio = float(MAIN_TEXTURE_WIDTH)/float(MAIN_TEXTURE_HEIGHT);
        float screen_aspect_ratio = 1280.f/720.f;
        DrawTextureRect(&textures[texidx],-aspect_ratio/screen_aspect_ratio,-1.f,aspect_ratio/screen_aspect_ratio,1.f);
        EndFrame();
    }

    StopCamera();
}


That's the full code for exploiting all the features of the API. It is designed to loop through each detail level and render them in turn. At the top of the main function you will find a couple of variables to enable ARGB conversion or change the level count, and above that you can see the frame size settings.

Questions? Problems? Comments?

I'm happy to answer any questions, hear any comments, and if you hit issues I'd like to fix them. Either comment on this blog or email me (wibble82@hotmail.com) with a sensible subject like 'pi cam problem' (so it doesn't go into the junk mail box!).

p.s. right at the end, here's a tiny shameless advert for my new venture - http://www.happyrobotgames.com/no-stick-shooter. If you like my writing, check out the dev blog for regular updates on my first proper indie title!

Pi Eyes Stage 6

Right, we've got all kinds of bits working but there's another ingredient I need before the system is just about 'functional'. For image processing I need the camera feed at multiple resolutions, so I can do cheap processing operations on high res feeds, and expensive ones on low res feeds. To do this I use the video splitter component, and have reworked my camera api to:
  • Create 4 separate outputs, each with its own resizer that does the RGB conversion but generates a different resolution.
  • Output 0 = full res, output 1 = half res etc
  • You still use ReadFrame or Begin/EndReadFrame, but now you pass in a 'level' as well
  • Internally the camera code has become a bit more complex to handle this multi output system but it's mostly just rearranging code.
I won't go into the code here, as it was lots of tweaks all over the place and not easy to write up. Here is a nice image of 2 of the outputs to make it clearer:


As you can see, in the top image I am at full resolution; in the lower one it's displaying me at (in this case) 1/8th of that resolution. Just to demonstrate that it is actually getting all the feeds (and the above isn't just from running the app twice!), this video shows it flicking between them live:




Here's the actual application code for the program above:

//entry point
int main(int argc, const char **argv)
{
    printf("PI Cam api tester\n");
    InitGraphics();
    printf("Starting camera\n");
    CCamera* cam = StartCamera(MAIN_TEXTURE_WIDTH, MAIN_TEXTURE_HEIGHT,15);

    //create 4 textures of decreasing size
    GfxTexture textures[4];
    for(int texidx = 0; texidx < 4; texidx++)
        textures[texidx].Create(MAIN_TEXTURE_WIDTH >> texidx, MAIN_TEXTURE_HEIGHT >> texidx);

    printf("Running frame loop\n");
    for(int i = 0; i < 3000; i++)
    {
        //pick a level to read based on current frame (flicking through them every second)
        int texidx = (i / 30)%4;

        //lock the chosen frame buffer, and copy it directly into the corresponding open gl texture
        const void* frame_data; int frame_sz;
        if(cam->BeginReadFrame(texidx,frame_data,frame_sz))
        {
            textures[texidx].SetPixels(frame_data);
            cam->EndReadFrame(texidx);
        }

        //begin frame, draw the texture then end frame
        BeginFrame();
        DrawTextureRect(&textures[texidx],-0.9f,-0.9f,0.9f,0.9f);
        EndFrame();
    }

    StopCamera();
}

The really crucial point is that my app above only reads one of the levels each frame; however, they are all available every frame, so if I chose to (and the CPU time was available) I could do something with every level. That's really key, and undoubtedly what I'll need going forwards. Code for the whole thing is here:

http://www.cheerfulprogrammer.com/downloads/pi_eyes_stage6/picam_multilevel.zip

In terms of frame rate it has suffered, unfortunately. The image resizer really seems to chew through frame time for some reason - maybe there are lots of copies going on or something else funky - but it is going disappointingly slowly. At 1280x720 the frame rate is probably worse than 10Hz when reading the hi-res data.

Next up I reckon I'll clean up the API a little: give the user options as to what to enable/disable, make sure everything shuts down properly, and add a post with a little tutorial on its use. Once that's done, on to GPU acceleration land...

Pi Eyes Stage 5

I'm making real progress now getting the camera module simpler and more efficient. My next goal is to rework the camera API to be a more synchronous process (no more callbacks) where the user can simply call 'ReadFrame' to get the next frame.

A Simple Synchronous API

The first step turned out to be pretty simple thanks to the 'queue' structure in mmal. I simply create my own little queue called 'OutputQueue' and change the internal camera callback to be:

void CCamera::OnVideoBufferCallback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
    //first, add the buffer to the output queue
    mmal_queue_put(OutputQueue,buffer);
}

That code used to lock the buffer, call a callback, then return it to the port for recycling. Now, however, it just pushes the buffer onto the output queue for processing by the user. Next up, I add a 'ReadFrame' function:

int CCamera::ReadFrame(void* dest, int dest_size)
{
    //default result is 0 - no data available
    int res = 0;

    //get buffer
    if(MMAL_BUFFER_HEADER_T *buffer = mmal_queue_get(OutputQueue))
    {
        //check if buffer has data in
        if(buffer->length)
        {
            //got data so check if it'll fit in the memory provided by the user
            if(buffer->length <= dest_size)
            {
                //it'll fit - yay! copy it in and set the result to be the size copied
                mmal_buffer_header_mem_lock(buffer);
                memcpy(dest,buffer->data,buffer->length);
                mmal_buffer_header_mem_unlock(buffer);
                res = buffer->length;
            }
            else
            {
                //won't fit so set result to -1 to indicate error
                res = -1;
            }
        }

        // release buffer back to the pool from whence it came
        mmal_buffer_header_release(buffer);

        // and send it back to the port (if still open)
        if (VideoCallbackPort->is_enabled)
        {
            MMAL_STATUS_T status;
            MMAL_BUFFER_HEADER_T *new_buffer;
            new_buffer = mmal_queue_get(BufferPool->queue);
            if (new_buffer)
                status = mmal_port_send_buffer(VideoCallbackPort, new_buffer);
            if (!new_buffer || status != MMAL_SUCCESS)
                printf("Unable to return a buffer to the video port\n");
        }    
    }

    return res;
}

This gets the next buffer in the output queue, copies it into memory provided by the user, and then returns it back to the port for reuse, just like the old video callback used to do.

It all worked fine first time, so my actual application code is now as simple as:

//this is the buffer my graphics code uses to update the main texture each frame
extern unsigned char GTextureBuffer[4*1280*720];

//entry point
int main(int argc, const char **argv)
{
    printf("PI Cam api tester\n");
    InitGraphics();
    printf("Starting camera\n");
    CCamera* cam = StartCamera(1280,720,15);

    printf("Running frame loop\n");
    for(int i = 0; i < 3000; i++)
    {
        BeginFrame();

        //read next frame into the texture buffer
        cam->ReadFrame(GTextureBuffer,sizeof(GTextureBuffer));

        //tell graphics code to draw the texture
        DrawMainTextureRect(-0.9f,-0.9f,0.9f,0.9f);

        EndFrame();
    }

    StopCamera();
}

As an added benefit, doing it synchronously means I don't accidentally write to the buffer while it's being copied to the texture, so no more screen tearing! Nice!

A bit more efficient

Now that I'm accessing the buffer synchronously there's the opportunity to get things more efficient and remove a frame of lag. Basically the current system goes:

  • BeginFrame (updates the main texture from GTextureBuffer - effectively a memcpy)
  • camera->ReadFrame (memcpy latest frame into GTextureBuffer)
  • DrawMainTextureRect (draws the main texture)
  • EndFrame (refreshes the screen)
There are two problems here. First up, our ReadFrame call updates GTextureBuffer after it's been copied into the OpenGL texture. This means we're always seeing a frame behind, although that could easily be fixed by calling it before BeginFrame. Worse though, we're doing two memcpys: first from the camera to GTextureBuffer, and then from GTextureBuffer to the OpenGL texture. With a little reworking of the API, however, this can be fixed...

First, I add 'BeginReadFrame' and 'EndReadFrame' functions, which effectively do the same as the earlier ReadFrame (minus the memcpy), but split across 2 function calls:

bool CCamera::BeginReadFrame(const void* &out_buffer, int& out_buffer_size)
{
    //try and get buffer
    if(MMAL_BUFFER_HEADER_T *buffer = mmal_queue_get(OutputQueue))
    {
        //lock it
        mmal_buffer_header_mem_lock(buffer);

        //store it
        LockedBuffer = buffer;
        
        //fill out the output variables and return success
        out_buffer = buffer->data;
        out_buffer_size = buffer->length;
        return true;
    }
    //no buffer - return false
    return false;
}

void CCamera::EndReadFrame()
{
    if(LockedBuffer)
    {
        // unlock and then release buffer back to the pool from whence it came
        mmal_buffer_header_mem_unlock(LockedBuffer);
        mmal_buffer_header_release(LockedBuffer);
        LockedBuffer = NULL;

        // and send it back to the port (if still open)
        if (VideoCallbackPort->is_enabled)
        {
            MMAL_STATUS_T status;
            MMAL_BUFFER_HEADER_T *new_buffer;
            new_buffer = mmal_queue_get(BufferPool->queue);
            if (new_buffer)
                status = mmal_port_send_buffer(VideoCallbackPort, new_buffer);
            if (!new_buffer || status != MMAL_SUCCESS)
                printf("Unable to return a buffer to the video port\n");
        }    
    }
}

The key here is that instead of returning the buffer straight away, I simply store a pointer to it in BeginReadFrame and return the address and size of the data to the user. In EndReadFrame, I then proceed to unlock and release it as normal.

This means my ReadFrame function now changes to:

int CCamera::ReadFrame(void* dest, int dest_size)
{
    //default result is 0 - no data available
    int res = 0;

    //get buffer
    const void* buffer; int buffer_len;
    if(BeginReadFrame(buffer,buffer_len))
    {
        if(dest_size >= buffer_len)
        {
            //got space - copy it in and return size
            memcpy(dest,buffer,buffer_len);
            res = buffer_len;
        }
        else
        {
            //not enough space - return failure
            res = -1;
        }
        EndReadFrame();
    }

    return res;
}

In itself that's not much help. However, if I tweak the application so it can copy data straight into the OpenGL texture, and switch it to use BeginReadFrame and EndReadFrame, I can avoid one of the memcpys. In addition, by moving the camera read earlier in the frame I lose a frame of lag:

//entry point
int main(int argc, const char **argv)
{
    printf("PI Cam api tester\n");
    InitGraphics();
    printf("Starting camera\n");
    CCamera* cam = StartCamera(MAIN_TEXTURE_WIDTH, MAIN_TEXTURE_HEIGHT,15);

    printf("Running frame loop\n");
    for(int i = 0; i < 3000; i++)
    {
        //lock the current frame buffer, and copy it directly into the open gl texture
        const void* frame_data; int frame_sz;
        if(cam->BeginReadFrame(frame_data,frame_sz))
        {
            UpdateMainTextureFromMemory(frame_data);
            cam->EndReadFrame();
        }

        //begin frame, draw the texture then end frame
        BeginFrame();
        DrawMainTextureRect(-0.9f,-0.9f,0.9f,0.9f);
        EndFrame();
    }

    StopCamera();
}

Much better! Unfortunately I'm still only at 15Hz due to the weird interplay between OpenGL and the MMAL resizing/converting components, but it is a totally solid 15Hz at 720p - about 10Hz at 1080p. I suspect I'm going to have to ditch the MMAL resize/convert components eventually and rewrite them as OpenGL shaders, but not just yet.

Here's a video of the progress so far:



And as usual, here's the code:


Next up - downsampling!