The basic idea is to build upon my prototypes made with MmBot back in the day, the objective being to create an AI-driven 'cute' robot. My experience making games has taught me it's not about making something intelligent, it's about making something appear intelligent, and that will be the goal: something cute-looking that makes an apparent emotional connection with its surroundings.
Before I get all AI on anybody though, we need a basic system design. The plan is to network together a set of Raspberry Pis, talking a common language to each other via a battery-powered Ethernet hub. The initial layout is as follows:
[Figure: layout of processing units across the Raspberry Pis]
The arrows here indicate data flow, not necessarily physical connections, as the connections between the Raspberry Pis (in green) are all done across the network. So in reality, all the Pis are connected to a central Ethernet hub. I'll also have a spare port in the hub for a PC to connect to for development purposes.
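To give a flavour of what that common language between modules might look like, here's a minimal sketch of one Pi publishing a message to the central controller as JSON over UDP. The address, port, topic names and message fields are all placeholders I've invented for illustration, not a settled protocol.

```python
import json
import socket

CONTROLLER = ("192.168.1.10", 5005)  # placeholder address/port for the central Pi

def publish(sender, topic, payload):
    """Fire-and-forget a JSON message to the central controller."""
    msg = json.dumps({"from": sender, "topic": topic, "data": payload})
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode("utf-8"), CONTROLLER)
    sock.close()

# e.g. the stereoscopic processor asking for an eye movement
publish("stereo", "eye_request", {"pan": 5.0, "tilt": -2.0})
```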
Crucial to the design of the system is its extensibility. I have no idea right now as to the exact processing power required to achieve my goals, but by designing the system as a set of CPUs running in parallel, communicating over a network, I can add new modules as required.
Vision
It's crucial that this robot has a good awareness of its surroundings. In order to make any kind of emotive connection it'll need to identify people and make eye contact with them, or wander around a room or building trying to 'make friends'. As such, a full 3D representation of the scene will be necessary. This'll be obtained by:
- 2 Raspberry Pi camera modules
- A Raspberry Pi for each camera, doing the initial 2D vision processing. This covers applying Sobel edge filters to the images, converting them into formats useful for 3D processing, and performing 2D feature detection (see the sketch after this list).
- The stereoscopic processor will receive data from the 2 vision processors and combine it to form a 3D view of the scene, primarily by matching features from the 2 separate cameras and data from previous frames. It'll also be responsible for feeding eye-movement requests back to the central controller where they're needed to enhance knowledge of the scene layout.
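As a rough idea of the per-camera processing, here's a minimal Sobel edge-filter sketch using OpenCV. I'm assuming OpenCV is available on the Pi; the input filename is just a placeholder for a live camera frame.

```python
import cv2

frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Horizontal and vertical Sobel gradients, combined into an edge magnitude
gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1, ksize=3)
edges = cv2.magnitude(gx, gy)
```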
It's my hope that these systems will provide the power needed to build and maintain an understanding of the environment the robot is in. However, large parts of the image processing and stereoscopic work are parallelisable, so they could be distributed across more machines if necessary. The Raspberry Pi also has a tasty GPU which could well be harnessed for this purpose.
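To make the stereoscopic step concrete, here's a minimal disparity-map sketch using OpenCV's block matcher. The real system would also match features across previous frames; this just shows the basic left/right correlation, and again OpenCV and the image files are assumptions for illustration.

```python
import cv2

# Grayscale frames from the two camera Pis (filenames are placeholders)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point, scaled by 16

# Nearer objects give larger disparities; depth ~ baseline * focal / disparity
```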
For recognising things such as faces in the scene, I may offload additional work to an extra Raspberry Pi, taking the stereoscopic output and matching it against historical records of people, or even image data retrieved from the internet!
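If that extra Pi happens, something like OpenCV's bundled Haar cascade would be an obvious starting point for the face-finding half of the job. This is just a sketch of the stock detector, not a recognition system, and assumes the pip OpenCV package (which ships the cascade files).

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder frame
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}), size {w}x{h}")
```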
Audio
This comes in 2 forms - input and output. I'd like the robot to be able to understand some very basic speech commands, but also respond to audio cues such as loud noises being scary. For feedback, I intend some sort of Sims-speak or emotive gobbledygook language (think R2-D2), as you can achieve a lot with this kind of sound without it really having to make any sense!
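For the 'loud noises are scary' part, a simple RMS threshold over microphone frames would probably do. Here's a sketch assuming PyAudio; the sample rate and threshold are numbers to tune by experiment, not measured values.

```python
import numpy as np
import pyaudio

RATE = 16000      # assumed mic sample rate
CHUNK = 1024      # samples per frame
LOUD_RMS = 3000   # placeholder threshold for int16 samples; tune by ear

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

while True:
    samples = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    if rms > LOUD_RMS:
        print("Loud noise! Act scared.")
```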
Currently I'm not expecting to need too much power to achieve the audio goals, so 1 Raspberry Pi is reserved for both input and output. This could easily be farmed out to more CPUs if necessary though.
Additional IO
All extra IO will go via a Raspberry Pi 'Gertboard' - an extension board designed for driving motors and reading sensor input.
The robot will be driven by 2 fairly powerful motors, each with a built-in quadrature encoder (aka a thing that measures how far the motor has turned), which will allow for precise control over its movement.
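For a flavour of what reading one of those encoders looks like in software, here's a sketch using the RPi.GPIO library. The pin numbers are hypothetical, and it counts edges on one channel only (2x decoding), which is plenty for a first pass.

```python
import RPi.GPIO as GPIO

PIN_A, PIN_B = 17, 18  # hypothetical GPIO pins for one encoder's channels
position = 0           # signed tick count for this motor

def on_edge(channel):
    global position
    # On each edge of channel A, channel B's level gives the direction
    if GPIO.input(PIN_A) != GPIO.input(PIN_B):
        position += 1
    else:
        position -= 1

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN_A, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(PIN_B, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(PIN_A, GPIO.BOTH, callback=on_edge)
```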
PiBot's head will be on a neck controlled by 3 servos to give a full range of motion, plus an additional servo to allow the eyes to move in unison (crucial for realistic and cute eye contact). I can see myself needing additional servos for controlling eyebrows or mounted sensors, but they aren't on the plan just yet.
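Driving a hobby servo is just a 50 Hz PWM signal with a 1-2 ms pulse, so here's a sketch with RPi.GPIO. The pin number is hypothetical, and a Gertboard or dedicated servo controller would do this more cleanly than software PWM.

```python
import RPi.GPIO as GPIO

SERVO_PIN = 12  # hypothetical GPIO pin for one neck servo

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # hobby servos expect a 50 Hz signal
pwm.start(7.5)                 # ~1.5 ms pulse: roughly centred

def set_angle(degrees):
    # Map 0-180 degrees onto a ~2.5-12.5% duty cycle (1-2 ms pulse)
    pwm.ChangeDutyCycle(2.5 + degrees / 18.0)
```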
I'll be adding various LEDs or light strips to PiBot to allow it to communicate 'mood' with colour (think Iain M. Banks' Culture robots), again as it's simple to code but powerful in terms of emotive feedback.
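The mood-to-colour mapping can be as simple as a lookup table. These mood names and RGB values are placeholders; the real palette will need tuning against actual LEDs.

```python
# Hypothetical mood-to-colour table (RGB)
MOOD_COLOURS = {
    "happy":   (255, 180, 0),   # warm amber
    "curious": (0, 120, 255),   # cool blue
    "scared":  (255, 0, 0),     # alarm red
}

def mood_colour(mood):
    return MOOD_COLOURS.get(mood, (255, 255, 255))  # default to white
```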
The only extra sensors I have planned are rangefinders (probably IR) mounted at various positions to give the robot some last-minute warning against crashing into walls or rolling down stairs!
WiFi
I'd like the robot to be able to access the internet, to retrieve data from systems like Facebook in order to glean any extra info it can about the world around it or the people it's communicating with. In addition, a quick and easy way to connect to it from a PC will be handy, so for this I'll be adding a WiFi network adapter connected to the central processor.
HDMI
Not sure what this is for yet, but I'm sure some kind of output to a display will be useful for debugging - or maybe playing back what it's been doing. Plug PiBot into the TV after it's been wandering around for a day and see what it's been up to!
Summary
Well that's the basic plan - a set of networked Raspberry Pis all doing their own little jobs, with a central unit talking to them all to gather up sensor data and feed back to the output devices. Exciting stuff - just gotta ship Tearaway first!
"so could be distributed across more machines if necessary"
Yes, I think this would make things slightly easier. Then you can benefit directly from more processing power.