Tuesday, 17 September 2013

Pi Bot Simulator

I recently decided to begin a new robot, but first I've decided to simulate one. Why? Well, a good few reasons...
  • It's going to take some serious coding, and I wanted to make sure I was up to the task before spending time and money building it
  • Making digital tweaks to the design is easier than making them once it's built
  • Programming on my nice shiny laptop is much easier than writing code and distributing it out to multiple Raspberry Pis while plugged into a robot!


So this'll divide up into a few areas which I'll detail here.

The Architecture

I'm writing the simulation in Unity. Basically it involves:
  • A basic physical model of the robot, in a simple physical world.
  • Scripts to replicate the behaviours of the various components of the physical robot (e.g. a servo script that feeds input into a Unity hinge joint).
  • A central script that runs a TCP server and takes commands from external programs to communicate with the robot components.
This basic model is designed to simulate how the Raspberry Pis work in the actual robot: each Pi is represented by an external program that communicates with a subset of the robot components (or with the other Pis).
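
To make that concrete, here's a minimal sketch of what the central server script could look like. The port number and the line-based "servo name angle" command format are placeholders - I haven't pinned down the actual protocol yet.

using UnityEngine;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;

public class RobotServer : MonoBehaviour {
    
    TcpListener listener;
    Thread listenThread;
    readonly Queue<string> commands = new Queue<string>();
    
    void Start () {
        Application.runInBackground = true;
        // Accept connections on a background thread so the physics
        // simulation never stalls waiting on the network.
        listener = new TcpListener(IPAddress.Any, 4444);
        listener.Start();
        listenThread = new Thread(ListenForClients);
        listenThread.IsBackground = true;
        listenThread.Start();
    }
    
    // Accept one controller at a time and queue up any lines it sends.
    void ListenForClients() {
        while (true) {
            using (TcpClient client = listener.AcceptTcpClient())
            using (StreamReader reader = new StreamReader(client.GetStream())) {
                string line;
                while ((line = reader.ReadLine()) != null) {
                    lock (commands) commands.Enqueue(line);
                }
            }
        }
    }
    
    // Drain the queue on the main thread, where it's safe to poke at
    // the component scripts (servos, motors etc).
    void Update () {
        lock (commands) {
            while (commands.Count > 0) {
                string cmd = commands.Dequeue();
                Debug.Log("Received: " + cmd); // e.g. "servo eyes 45"
            }
        }
    }
    
    void OnDestroy() {
        listener.Stop();
    }
}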

[Diagram: motor and central controller programs talking to the simulated robot over TCP]

This diagram shows a simplified model of what I have in mind, with only the motor and central controllers (ignoring the vision and speech processors). Crucially, the controller programs will be cross-platform applications that run the same code on a PC or a Raspberry Pi. The only exception is the communications layer: on a Raspberry Pi it'll be talking to physical devices like an IO board, whereas on a PC it'll be sending commands over TCP to Unity.
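
In code terms I'm imagining something like the sketch below. All the names here are placeholders for the idea rather than a final design - the point is just that the controller logic never knows which side of the interface it's running on.

using System.IO;
using System.Net.Sockets;

// The controller logic only ever talks to this interface...
public interface ICommsLayer {
    void SetServoTarget(string servoName, float degrees);
}

// ...which on a PC sends text commands to the Unity simulation,
public class SimulatorComms : ICommsLayer {
    readonly StreamWriter writer;
    public SimulatorComms(string host, int port) {
        TcpClient client = new TcpClient(host, port);
        writer = new StreamWriter(client.GetStream()) { AutoFlush = true };
    }
    public void SetServoTarget(string servoName, float degrees) {
        writer.WriteLine("servo " + servoName + " " + degrees);
    }
}

// ...and on a Raspberry Pi drives the real hardware instead (stubbed
// out here, as I haven't picked the IO library yet).
public class HardwareComms : ICommsLayer {
    public void SetServoTarget(string servoName, float degrees) {
        // TODO: output the matching PWM signal via the IO board
    }
}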

A Unity example

This snippet from my code shows the servo logic. It has a current position and a target position, and uses them to control a damped spring on a hinge joint. The result is similar to the real-world servo scenario, in which you send it a signal telling it which position to move to.

using UnityEngine;
using System.Collections;

public class Servo : MonoBehaviour {
    
    public float Position;
    public float TargetPosition;
    
    // Use this for initialization
    void Start () {
        // Keep simulating even when the window loses focus.
        Application.runInBackground = true;
    }
    
    // Update is called once per frame
    void Update () {
    
    }
    
    // Wraps an angle into the range [-180, 180] degrees (despite the
    // name, everything here is in degrees, to match Unity's hinge joint).
    float ConvertToWithinPI(float angle)
    {
        while(angle < -180) angle += 360;
        while(angle > 180) angle -= 360;
        return angle;
    }
    
    void FixedUpdate()
    {    
        // Step the current position towards the target, clamped to 5
        // degrees per physics tick to mimic the servo's maximum speed.
        float a = ConvertToWithinPI(ConvertToWithinPI(TargetPosition) - ConvertToWithinPI(Position));
        if(a < -5) a = -5;
        if(a > 5) a = 5;
        Position += a;
        
        // Feed the new position into the hinge joint's damped spring,
        // which physically pulls the joint towards it.
        JointSpring spring = hingeJoint.spring;
        spring.targetPosition = Position;
        hingeJoint.spring = spring;
    }
}


I've written similar scripts for motors, and a central one to wire them all together, which is where I'll add the TCP communications layer.
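
For completeness, here's roughly what the motor one boils down to - a cut-down sketch rather than the full script, with the force value just a placeholder to tune later:

using UnityEngine;

public class Motor : MonoBehaviour {
    
    // Requested speed in degrees per second; this is what the TCP
    // layer will eventually be setting.
    public float TargetVelocity;
    
    void FixedUpdate() {
        // Drive the hinge's built-in motor at the requested speed,
        // with a placeholder torque limit.
        JointMotor motor = hingeJoint.motor;
        motor.targetVelocity = TargetVelocity;
        motor.force = 10.0f;
        hingeJoint.motor = motor;
        hingeJoint.useMotor = true;
    }
}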

It in action

So without further coding ado, here's a video of it in action:

[Video: tweaking servo and motor values on the simulated robot]

This first version is just me tinkering with the numbers in the Unity editor. Here you can see me fiddling with the eyes, neck joints and wheel motors.

Next Time...

Next up, I'll get it so that the inputs and outputs of the various simulated devices can be accessed via a network connection. Once I'm there, I'll be in a position to write applications in any language I like to control the various aspects of the robot (probably Python for the low-power stuff, and C++ for things like image processing).
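
Just to illustrate the shape of it, a test client could be as simple as the sketch below - in C# to match the snippets above, though the real controllers will likely be Python or C++, and the "servo eyes" command and port number are placeholders:

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;

class EyeWiggleTest {
    static void Main() {
        using (TcpClient client = new TcpClient("localhost", 4444))
        using (StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true }) {
            // Sweep the eye servo back and forth to check the pipeline.
            for (int i = 0; i < 100; i++) {
                float angle = 20.0f * (float)Math.Sin(i * 0.1);
                writer.WriteLine("servo eyes " + angle);
                Thread.Sleep(50);
            }
        }
    }
}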


Sunday, 15 September 2013

Introducing PiBot

Well I'm coming to the end of a project at work, which means I'll finally have some of that free time stuff I've heard so much about, and will need a project at home to use it all up. And so, here's the beginnings of my thoughts on my next robot...

The basic idea is to build upon my prototypes made with MmBot back in the day, the objective being to create an AI-driven 'cute' robot. My experience making games has taught me it's not about making something intelligent, it's about making something appear intelligent, and that will be the goal: something that is cute-looking and makes an apparent emotional connection to its surroundings.

Before I get all AI on anybody though, we need a basic system design. The plan is to network together a set of Raspberry Pis, talking a common language to each other via a battery-powered Ethernet hub. The initial layout is as follows:

[Diagram: layout of processing units across the Raspberry Pis]

The arrows here indicate data flow, not necessarily physical connections, as the connections between the Raspberry Pis (in green) are all made across the network. So in reality, all the Pis are connected to a central Ethernet hub. I'll also keep a spare port on the hub for a PC to connect to for development purposes.

Crucial to the design of the system is its extensibility. I have no idea right now exactly how much processing power I'll need to achieve my goals, but by designing the system as a set of CPUs running in parallel and communicating over a network, I can add new modules as required.

Vision

It's crucial that this robot has a good awareness of its surroundings. In order to make any kind of emotive connection it'll need to identify people and make eye contact with them, or wander around a room or building trying to 'make friends'. As such, a full 3D representation of the scene will be necessary. This'll be obtained by:
  • 2 raspberry pi cameras
  • A Raspberry Pi for each camera, doing the initial 2D vision processing. This means running Sobel edge detection on the images (see the sketch after this list), converting them into useful formats for 3D processing, and performing 2D feature detection.
  • The stereoscopic processor will receive data from the 2 vision processors and combine it to form a 3D view of the scene, primarily by matching features between the 2 separate cameras and against data from previous frames. It'll also be responsible for feeding requested eye movements back to the central controller, where those movements are needed to enhance knowledge of the scene layout.
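
Here's the promised sketch of the Sobel step, in plain C# purely for illustration - the real version will live on the vision Pis (and may well end up on the GPU):

using System;

static class EdgeDetect
{
    // Takes a greyscale image as a 2D byte array and returns the edge
    // magnitude at each pixel, computed from the two 3x3 Sobel kernels.
    public static float[,] SobelMagnitude(byte[,] grey)
    {
        int w = grey.GetLength(0), h = grey.GetLength(1);
        float[,] result = new float[w, h];
        for (int x = 1; x < w - 1; x++)
        {
            for (int y = 1; y < h - 1; y++)
            {
                // Horizontal and vertical intensity gradients.
                float gx = -grey[x-1,y-1] - 2*grey[x-1,y] - grey[x-1,y+1]
                           + grey[x+1,y-1] + 2*grey[x+1,y] + grey[x+1,y+1];
                float gy = -grey[x-1,y-1] - 2*grey[x,y-1] - grey[x+1,y-1]
                           + grey[x-1,y+1] + 2*grey[x,y+1] + grey[x+1,y+1];
                result[x, y] = (float)Math.Sqrt(gx * gx + gy * gy);
            }
        }
        return result;
    }
}

The stereoscopic processor can then match strong features between the two camera images; for a matched feature, depth is roughly focal length × camera separation ÷ disparity, which is where the 3D view comes from.
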
It's my hope that these systems will provide the power needed to build and maintain an understanding of the environment the robot is in. However, large parts of the image processing and stereoscopic work are parallelisable, so they could be distributed across more machines if necessary. The Raspberry Pi has a tasty GPU though, which could well be harnessed for this purpose.

For recognising things such as faces in the scene, I may offload additional work to an extra Raspberry Pi, taking the stereoscopic data and matching it against historical records of people, or even image data retrieved from the internet!

Audio

This comes in 2 forms - input and output. I'd like the robot to be able to understand some very basic speech commands, but also respond to audio cues such as loud noises being scary. For feedback, I intend some sort of sim-speak or emotive gobbledygook language (think R2-D2), as you can achieve a lot with this kind of sound without it really having to make any sense!

Currently I'm not expecting to need too much power to achieve the audio goals, so one Raspberry Pi is reserved for both input and output. This could easily be farmed out to more CPUs if necessary though.

Additional IO

All extra IO will go via a Raspberry Pi 'Gertboard' - an expansion board designed for driving motors and reading sensor input.

The robot will be driven by 2 fairly powerful motors, each with a built-in quadrature encoder (aka a device that measures how far the motor has turned), which will allow for precise control over its movement.
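
To show why the encoders matter, here's a sketch of how quadrature decoding works - illustrative C# only, as the real version will be reading GPIO pins via the Gertboard. The encoder's two channels are 90 degrees out of phase, so the order in which they change gives you the direction of rotation as well as the distance:

class QuadratureDecoder
{
    // Indexed by (previousState << 2) | currentState, where a state is
    // (A << 1) | B. Entries are the tick delta for that transition:
    // +1 or -1 for a valid step, 0 for no change or an invalid jump.
    static readonly int[] DeltaTable =
    {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0
    };

    int prevState;
    public long Ticks { get; private set; }

    // Call whenever the two pins are sampled; Ticks accumulates how
    // far the motor has turned, in encoder steps.
    public void Sample(bool a, bool b)
    {
        int state = ((a ? 1 : 0) << 1) | (b ? 1 : 0);
        Ticks += DeltaTable[(prevState << 2) | state];
        prevState = state;
    }
}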

PiBot's head will be on a neck controlled by 3 servos to give a full range of motion, plus an additional servo to allow the eyes to move in unison (crucial for realistic and cute eye contact). I can see myself needing additional servos for controlling eyebrows or mounted sensors, but they aren't on the plan just yet.

I'll be adding various LEDs or light strips to PiBot to allow it to communicate 'mood' with colour (think Iain M. Banks' Culture robots), again as it's simple to code but powerful in terms of emotive feedback.
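
As a taste of how little code that needs, a first pass could be as simple as this sketch (the mapping and the 0-1 'happiness' value are completely made up at this point):

using UnityEngine;

public static class MoodLights {
    // Blend from a scared red through to a content cyan as the
    // robot's happiness rises.
    public static Color ColourFor(float happiness) {
        return Color.Lerp(Color.red, new Color(0.2f, 0.8f, 1.0f), Mathf.Clamp01(happiness));
    }
}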

The only extra sensors I have planned are rangefinders (probably IR) mounted at various positions to give the robot some last-minute warning systems against crashing into walls or rolling down stairs!

WiFi

I'd like the robot to be able to access the internet, to retrieve data from systems like Facebook in order to glean any extra info it can about the world around it or the people it's communicating with. In addition, a quick and easy way to connect to it from a PC will be handy, so for this I'll be adding a WiFi network adapter connected to the central processor.

HDMI

Not sure what this is for yet, but I'm sure some kind of output to a display will be useful for debugging - or maybe playing back what it's been doing. Plug PiBot into the TV after it's been wandering around for a day and see what it's been up to!

Summary

Well, that's the basic plan - a set of networked Raspberry Pis all doing their own little jobs, with a central unit talking to them all to gather up sensor data and feed back to the output devices. Exciting stuff - just gotta ship Tearaway first!