Pets, Plants, and Computer Vision

Open Skinner Box – PyCon 2014

May 27th, 2014 | Posted by admin in automation | code | cute | domestic life | Internet of Things | Open Source | OpenCV | pics or it didn't happen | PyCon | python | RaspberryPi | Uncategorized | vermin

I've been hacking like a fiend on nights and weekends for the past few months trying to get ready for PyCon. This year I wanted to do an introductory talk about how you make internet-enabled hardware using Python. The first step of this process is figuring out what hardware you want to make. I decided I wanted to do something for my pets, as I am splitting my time between Boston and San Francisco and can be gone for a week at a stretch. My friend Sophi gave a great talk at last year's Open Hardware Summit about creating a low-cost nose poke detector for research, and I decided I could sorta do a riff (in the jazz sense) on that idea. I decided to create an Open Skinner Box using a Raspberry Pi and a few parts I had around the house.

Open Skinner Box in situ


A Skinner Box, also called an operant conditioning chamber, is a training mechanism for animal behavior. Generally a Skinner Box is used to create some behavioral "primitives" that can then be strung together to do real behavioral science. The most basic Skinner Box has a cue signal for the animal (usually a light or buzzer), a way for the animal to respond to the cue (usually a nose poke detector or a lever to press), and a way to reward the animal, usually with food. The general procedure is that the animal hears or sees the cue signal, performs a task like pressing the lever, and then gets a treat. This is not too far off from the training most people already do with their pets. For example, I have the rats trained to come and sit on my lap whenever they hear me shake a container of treats.

The cool thing about a Skinner Box is that they are used to do real science. As a toy example, let's say you wanted to test whether some drug has an adverse effect on learning. To test this scientifically you could have two groups of rats: one group would get the drug and the other wouldn't. You would then record how long it took each group to learn to use the Skinner Box reliably. The scientist running the experiment could then use this data to quantify whether the drug has an effect and whether that effect scales with the dosage.

So what does it do?

I wanted to create not just a Skinner Box but a web enabled Skinner Box, a sorta internet of things Skinner Box. So what features should it have? I came up with the following list:

  1. I should be able to see the rats using the Raspberry Pi’s camera.
  2. The camera data should be used to create a rough correlate of the rat’s activity.
  3. The box should run experiments automatically.
  4. I should be able to buzz and feed the rats remotely.
  5. The web interface should give a live feed of all of the events as they happen.
  6. The web interface should be able to give me daily digests of the rats' activity and training.

The Mechanical Bits

Mechanical Bits of the Open Skinner Box


To build the Skinner Box I got some help from a mechanical engineer friend of mine. He is a 3D printing whiz and designed the mounting brackets, food hopper, and switch mechanism for the Skinner Box. One of the cool things I learned in this process was how to use threaded brass inserts for mounting the parts to the cage and attaching the different parts to one another. There is a great tutorial here as well as this video. The source files for all the parts are now up on Thingiverse if you would like to build them.

The Electrical Bits

Skinner Box schematic - originals are in the github repo for the project.


For the electrical components in the project I used the Raspberry Pi's GPIO pins to control the stepper, read the switch, and run the buzzer (although one could easily use the audio out instead). I opted to run the stepper off the Raspberry Pi's 5V source, and it seems to have just enough juice to drive the motor. The stepper is controlled via four GPIO pins (two for each coil). The GPIO pins and the 5V source are connected to a ULN2803 Darlington array that shunts the 5V source through the stepper coils based on the state of the GPIO pins. In the next revision I will probably use a separate stepper driver and a beefier stepper like a NEMA 17. I soldered everything to a breadboard for this revision, but I will probably get PCBs fabricated for the next one. When I was soldering and debugging the board I found IPython super useful: I could send each of the GPIO pins high or low and then trace the path with my multimeter. I have put both the Fritzing CAD files and a half-complete bill of materials up on GitHub if you would like to replicate my work.
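To give a flavor of the stepper side, here is a minimal sketch of stepping the feeder motor with the RPi.GPIO library. This is not the code from the repo: the pin numbers, step sequence, and timing are placeholder assumptions.

```python
# Minimal full-step drive sketch; pin numbers and timing are hypothetical.
import time
import RPi.GPIO as GPIO

COIL_PINS = [17, 18, 27, 22]      # one GPIO per ULN2803 input / stepper coil
FULL_STEP_SEQUENCE = [             # energize two adjacent coils per step
    (1, 0, 0, 1),
    (1, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 1),
]

def setup():
    GPIO.setmode(GPIO.BCM)
    for pin in COIL_PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def step(n_steps, delay=0.01):
    """Advance the stepper n_steps; the ULN2803 switches the 5V coil current."""
    for i in range(n_steps):
        for pin, level in zip(COIL_PINS, FULL_STEP_SEQUENCE[i % 4]):
            GPIO.output(pin, level)
        time.sleep(delay)
    for pin in COIL_PINS:          # de-energize so the coils don't heat up
        GPIO.output(pin, GPIO.LOW)

if __name__ == '__main__':
    setup()
    step(200)                      # one revolution on a typical 1.8-degree stepper
    GPIO.cleanup()
```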

Finished Open Skinner Box electrical components.


The Software Bits

Because I was presenting this project as an intro lesson at PyCon I wanted to use as many Python libraries as possible. Right before the conference I open-sourced all of the code and put it in this GitHub repository. Some of the components I used, matplotlib for example, are sub-optimal for the task, but they get the point across and minimized the amount of JavaScript I had to write. The entire app runs in a Bottle web server, with greenlets added to do some of the websocket work. I set the web server up to use Bootstrap 2.3.2 to make everything look pretty. For data persistence I used MongoDB. While you can get Mongo to build and run on a Raspberry Pi, in retrospect I really should have dug into the deep storage of my brain and used something like PostgreSQL. Mongo is still too unstable and too difficult to install on the Pi.
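For flavor, here is a minimal sketch of the kind of Bottle app the project runs. The route and template are illustrative only, and I'm assuming gevent supplies the greenlets; the real routes live in the GitHub repo.

```python
# Minimal sketch of the web side: Bottle served by gevent; routes are illustrative.
from gevent import monkey; monkey.patch_all()   # greenlets for the websocket work
from bottle import Bottle, template

app = Bottle()

@app.route('/')
def index():
    # The real app pulls status from the hardware, camera, and data modules here.
    return template('<h1>Open Skinner Box</h1><p>Feeder status: {{status}}</p>',
                    status='idle')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, server='gevent')
```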

The general theory of operation is that there is a main web server that holds four modules, three of which are wrapped in a Python thread class. These modules are as follows:

  1. Hardware Interface – uses GPIO to buzz the buzzer, monitor the switch, and run the stepper.
  2. Camera Interface – uses OpenCV, PiCamera, and Numpy to grab from the camera, save the images, and monitor the activity.
  3. Experiment Runner – looks at the current time and decides when to begin and end experiments (e.g. it rings the buzzer, waits for the rat to press the lever, and dispenses a treat).
  4. Data Interface – stores (event, timestamp) pairs in Mongo, runs queries, and renders matplotlib graphics.

To get all of the modules to communicate cleanly I used a simple callback structure for each module. That is to say, each module holds a list of functions for particular events, and when an event happens the module iterates through the list and calls each function. For example, whenever there is a button press the button-press loop calls a callback for the data interface to write the event to the database, a second callback tells the experiment runner that the rat pushed the lever, and a third callback renders the data to the live-feed websocket.
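The pattern is roughly the sketch below; the class and method names are illustrative rather than the repo's actual API.

```python
# Sketch of the callback registry shared by the modules; names are illustrative.
import threading

class CallbackModule(threading.Thread):
    """Each module keeps a list of callbacks per event name."""

    def __init__(self):
        super(CallbackModule, self).__init__()
        self.daemon = True
        self._callbacks = {}                      # event name -> list of functions

    def register(self, event, callback):
        self._callbacks.setdefault(event, []).append(callback)

    def fire(self, event, **data):
        for callback in self._callbacks.get(event, []):
            callback(event=event, **data)

# Wiring it up: one lever press fans out to the database, the experiment
# runner, and the live websocket feed.
# hardware.register('lever_press', data_interface.record_event)
# hardware.register('lever_press', experiment_runner.on_lever_press)
# hardware.register('lever_press', live_feed.push_event)
```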

Rat Stats -- need to replace this with some Javascript rendering.


All routes on the web server basically point to one or more of the modules to perform a task and create some amount of data to be rendered in the Bottle template engine. For example, for the activity monitor page I have the DataInterface module query Mongo for every activity event in the past 24 hours, render it using matplotlib, and save the result to a static image file. The web server then renders the template using the recently updated static image file. While this works, matplotlib is painfully slow. To control the feeder and buzzer remotely I simply have POST routes that call the dispense and buzz methods on the hardware interface. The caveat here is that these methods are non-blocking and are guarded simply by a flag, so if you hit the feed button multiple times in a row really fast, only the first press has an effect.
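The guard is roughly the pattern below; the class and method names are illustrative.

```python
# Sketch of the flag-guarded, non-blocking dispense; names are illustrative.
import threading
import time

class FeederInterface(object):
    def __init__(self):
        self._busy = False

    def dispense(self):
        """Return immediately; requests made while a dispense is in flight are dropped."""
        if self._busy:
            return False
        self._busy = True
        worker = threading.Thread(target=self._run_feeder)
        worker.daemon = True
        worker.start()
        return True

    def _run_feeder(self):
        try:
            time.sleep(2.0)        # stand-in for actually stepping the feeder motor
        finally:
            self._busy = False
```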

Skinner Box Live Shot


The camera module seems to work reasonably well for this project and I am amazed by the image quality. The only drawback with the camera module is that you basically have to choose between still images and a stream of video data: right now there is no really good way to both debuffer the camera stream and process the individual frames. I opted to take the less complex route of just firing the camera for still frames at the fastest rate it would support, which is about once a second. Because of processing limitations on the Pi I need to scale the image down to about 600×800 to do my motion calculation. For the motion calculation I just perform a running frame difference rather than a more computationally costly optical flow calculation. This is a reasonably good approximation of net motion, but it is subject to a lot of spikes when the lighting changes (e.g. when you turn on the lights in the room). Additionally, in my haste to get this project up and running I opted not to put a thread lock around the camera write operation. This means that sometimes when you visit the live image page you get a half-finished frame. This is something that I will address soon.
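The activity measurement boils down to something like the sketch below, assuming picamera and OpenCV; the resolution, one-second polling rate, and the mean-of-differences score are stand-ins for the values in the repo.

```python
# Sketch of the running frame-difference activity metric; values are illustrative.
import io
import time
import numpy as np
import cv2
import picamera

def capture_gray(camera):
    """Grab one still frame as a grayscale numpy array."""
    stream = io.BytesIO()
    camera.capture(stream, format='jpeg', use_video_port=True)
    data = np.frombuffer(stream.getvalue(), dtype=np.uint8)
    return cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)

with picamera.PiCamera() as camera:
    camera.resolution = (800, 600)     # scaled down so the Pi can keep up
    time.sleep(2)                      # let exposure and gain settle
    previous = capture_gray(camera)
    while True:
        current = capture_gray(camera)
        diff = cv2.absdiff(current, previous)
        activity = float(np.mean(diff))   # crude net-motion score for this frame
        print(activity)                   # the real code logs this to the database
        previous = current
        time.sleep(1.0)                   # roughly one frame per second
```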

Live Events Feed


Putting it all together.

I have tested the system on the rats with some limited success (see the videos below). There are some kinks I still need to work out that prevent me from running the system full time. For example, the current food hopper wheel jams fairly frequently, and so far it seems to only deliver food 2-3 times before jamming. Also, the buzzer I used is exceptionally annoying, and since the rats live in my bedroom I don't want to run protocols at night when the rats are most active (rats are nocturnal). Moreover, I don't think my current pair of rats is suitable for training. I have one exceptionally old rat (almost three, to be exact); and while she is interested, she lacks the mobility to perform the task. The other rat has been one of the more difficult rats I have tried to train. Normal rats can learn to come when called (or when I shake the treat jar), but this rat is either too timid or too ambivalent to care most of the time.

Building a Better Mouse Trap

The Skinner Box runs reasonably well at the moment, but there are quite a few things I would like to do to make it more user friendly and robust. Ultimately I would like to harden the designs and then turn them over to someone to commercialize. As with any engineering task, design is a process: you get a kernel of working code and then improve and build upon it until you reach a working solution. Here is what I would like to do in the next revision:

  1. Replace Mongo with PostgreSQL.
  2. Replace matplotlib with a JavaScript rendering framework.
  3. Fix the camera thread lock issue and create a live stream of video and also store images to the database.
  4. Move the food hopper to an auger based design to minimize jamming.
  5. Add a line break sensor to make sure the rats get fed from the food hopper.
  6. Add an IR LED illumination system for the camera and add a signal LED to work with the buzzer.
  7. Improve some of the chart rendering to support arbitrary queries (e.g. how well the rats did last week versus this week).
  8. Add a scripting language for the experiment runner. Right now the experiment runner buzzes and waits for a button press, but really I think the rats should start with random food deliveries and work up to the button press task.

Raspberry Pi Network Configuration

March 24th, 2014 | Posted by admin in code | computer vision | RaspberryPi | Uncategorized

I've been playing with my Raspberry Pi a lot, but getting the networking working on it is a pain. To me the /etc/network/interfaces file is cryptic black magic. Here are a couple of tools I am using to allow me to move between networks. Basically I have three networks set up: home, work, and mobile. The mobile connection is my phone's tethering, which allows me to set up the Pi wherever I am and then point it at the local network. To set everything up you need to edit /etc/network/interfaces and /etc/wpa_supplicant/wpa_supplicant.conf and make them look like the ones below. To test your changes while you are logged in to the Pi you can use sudo ifdown wlan0 and sudo ifup wlan0. Basically you edit your wpa_supplicant.conf file on the Pi and then use these commands to have the changes take effect.
My /etc/network/interfaces file:
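A minimal sketch of the general shape, assuming the standard Raspbian layout that hands wlan0 over to wpa_supplicant for roaming:

```
auto lo
iface lo inet loopback

iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
```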

My /etc/wpa_supplicant/wpa_supplicant.conf file. Make sure to add your own access points and passwords.
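A minimal sketch of the layout, one network block per location; the SSIDs and passphrases are placeholders:

```
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="home-network"
    psk="home-passphrase"
}

network={
    ssid="work-network"
    psk="work-passphrase"
}

network={
    ssid="phone-tether"
    psk="tether-passphrase"
}
```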

Another useful tool is fping, which can automatically ping every device on a network. So, for example, let's say I go to a coffee shop and I want to use the Pi. I turn on my cellphone tether, turn on the Pi, and ssh in. I then edit the /etc/wpa_supplicant/wpa_supplicant.conf file, and either ifdown/ifup wlan0 or turn off the tether and reboot the Pi. I then run ifconfig on my laptop to get its IP address on the coffee shop network. If my laptop's IP is 192.168.42.3, I use fping to scan the entire subnet (e.g. 192.168.42.0 through 192.168.42.255). For example I can run:

sudo fping -s -g 192.168.42.1 192.168.42.255 -r 1

This gives me a list of live IPs on the subnet. I can go through these IPs and find my Pi.

This weekend I went to Micro Center and picked up a couple of cool toys for the Pi. The first thing I bought is the Raspberry Pi IR camera. I didn't have any IR LEDs to test with, so in a pinch I used my TV remote control; TV remotes use IR light, so the remote works as an IR flashlight. You can see the results of taking a photo in a dark room below.

Raspberry Pi IR Camera with TV Remote Flashlight.


I also picked up a 2600 mAh battery pack at Micro Center. I am really happy with its performance for $10. It has enough juice to run the Pi remotely along with a bunch of peripherals like a small stepper motor.

ROS Industrial – Industrial Grade Awesome.

March 12th, 2014 | Posted by admin in automation | code | computer vision | demo | industrial computing | Industrial Internet | manufacturing | robots | Uncategorized


Last week I had the pleasure of going to Southwest Research Institute (SwRI) to attend a ROS Industrial training session. I’ve been insanely busy for the past few months writing computer vision and other code for a fairly substantial Robot Operating System (ROS) project. I’ve been converted over to the dark side of ROS from OpenCV as ROS’s message bus, modular nature, and build tools are absolutely phenomenal. Hopefully a lot of that code will go back to the community once my employer signs the contributors agreement. I’ve gotten to know a lot about the sensor side of ROS but I wanted to round out my knowledge of the actuator side of things. This ROS-Industrial session seemed like a good place to do just this, and also get acquainted with more people working in manufacturing.

SwRI has always had a mythical place in my mind, mainly because all the cool kids got to go there while I got left behind at the lab. When I was in undergrad the RHex robot went there for testing, and while I was at Cybernet our DARPA Urban Challenge crew got to go there while I stayed home to man the fort. A few months ago SwRI reached out to me, and I asked if I could perhaps help with ROS Industrial. I've been trying to get some code and documentation done for them, but I've been so busy I haven't made as much progress as I would have liked. SwRI is currently the maintainer of ROS Industrial, and along with the OSRF they are making great strides toward improving the usability of ROS in industrial settings.



The tutorials published and presented by SwRI were excellent and very well polished. They made a conscious effort to have the tutorials range from high-level overviews for decision makers all the way to nuts-and-bolts code introductions for programmers and integrators. I really hope they publish more of the tutorials, as they were exceptionally relevant and well thought out. To be certain, what SwRI is trying to accomplish with ROS-Industrial is no small feat, as you can see from their 2013 Automate demo video at the top of the page. At the ROS-Industrial training session SwRI walked us through the high-level architecture of this system (all of the components are FOSS!) at a level where I think I could probably recreate it given a few weeks of coding. For a single-day session I thought they covered a lot of ground, and the demos they had of ROS-Industrial were incredible. In addition to the 2013 Automate demo they had another robot arm performing an exceptionally complex deburring maneuver around a bent puzzle piece. Another demo showed on-line object tracking and path planning for robot finishing of automotive parts. I captured a few of the demos in a short video.

In addition to the tutorials I picked up a few new and interesting libraries from the other attendees. One that stood out was MTConnect, a free and open XML/HTTP standard for robots and CNC machines to communicate their state and status. It looks pretty cool and there are already some open libraries out on GitHub. Another cool suite of tools is the EPICS PLC communication package put out by the Paul Scherrer Institute in Switzerland; there also seems to be a mirror of it from Argonne National Laboratory. Apparently CERN uses a lot of PLCs, and they were insistent that all PLCs used at CERN have open Linux drivers. EPICS stands for Experimental Physics and Industrial Control System. I haven't looked too deeply into the package but it seems like it could be handy.

I’ve caught my breath. Time to start another company.

March 10th, 2014 | Posted by admin in automation | computer vision | Electronics | entrepreneurship | Internet of Things | TempoAutomation

I quit my well-paying job as a hired gun for a post-funding start-up in Boston to return to the world of eating boiled newspapers and sleeping on the floor of a hacker squat. I feel like I have caught my breath and I am ready to try again. After a drawn-out courtship I have decided to come on as the software lead and co-founder of Tempo Automation. We're after the very seductive idea of building robots and helping individuals and other engineers design, build, and test their own printed circuit boards (PCBs). The immediate goal is to put a pick-and-place robot in every hackerspace and engineering office around the world. Every shop with a 3D printer and a laser cutter should have one of our machines. The end game is to convert these simple robots from merely a pick-and-place into a one-stop PCB factory that does milling, solder deposition, pick and place, reflow, and ultimately AOI, programming, and testing. Raw materials go in one side, finished PCBs come out the other. Idea bits get converted into atoms. We want to do for electrical engineering what the RepRap and MakerBot did for mechanical engineering.

Tempo -- Maker Faire NYC 2013


I am really, truly excited to be working on an interdisciplinary team once again. Robotics is a field where people still take pride in using the word "engineer" in their title. When I say engineer I mean people who want to fix problems, not just wax brass on the Titanic by writing bank software or slick advertising webpages. Being a big-"E" Engineer means you get to put down the mouse, pick up a multimeter and some hex keys, and build some awesome. I feel like I couldn't have found a better set of co-founders. Jeff McAlvay embodies everything I want in a non-technical founder: he knows how to run a business but he doesn't want to be an executive. Jeff is sincere about the things that are important: solving the problem, learning the technology, and helping the customers. I am also in awe of Jeff's time at McMaster-Carr; McMaster is such a great organization and I want to learn more about how people like Jeff made it run so perfectly. I am also excited to be working with Tempo's other co-founder and mechanical lead, Jesse Koenig. Jesse serves as a great counterbalance to my personality: he is detail-oriented where I would be hand-wavy, he likes to work on the books while I do community stuff, and he seems to know just when to force me to listen and when to leave me alone to code. If this were a buddy-cop movie, Jesse would be the good cop to my bad cop, and Jeff would be the affable but stern police commissioner providing us with the resources to clean up the mean streets of PCB City. Rounding out our team are Prof. Peter Vajda, a visiting computer vision professor at Stanford, and Jon Thorne, one of the best all-around mechanical engineers I've met. Ted Blackman and Cody Daniels from the 3Scan crew are also lending us their software and hardware expertise.

I am also amazed by the community that Jeff and Jesse have chosen to surround themselves with. The people at MI7 and Langton Labs are fantastic and it reminds me so much of home at Arbor Vitae. The first thing I said when I walked into the office is that it felt like home. Jeff has also done a fantastic job of organizing and growing a hardware community in San Francisco and I am really excited to not just work on this project but get the privilege to share it with others.



Finally, I am really excited about our technology stack, as it lets me use my knowledge of computer vision within a suite of new software. Things are still solidifying, but it looks like I will be working not just with a lot of open software but also a lot of open hardware (and hopefully building a lot of it too). I am looking forward to learning a lot about Meteor, JavaScript, bottle/flask, and potentially ROS in the coming weeks. There is still a lot of code to write, but I can already see the parts coming together.

Will this venture make us so insanely rich that we can jump Teslas off our yachts? Probably not, but it is an idea with a market need where we can grow a reasonable business. We also won't ever have to talk about our "product" as being the end-all, be-all social/mobile/local/viral cloud analytics marketplace as a service. We're doing something that matters, something that helps make the world a better place by enabling people to build what they dream up and what they need. To be sure, we can't make a business if we don't make money, and I am sure we can, but what is more important is that we're spending every waking moment doing something we love and that we feel really matters. Most important of all, I get to spend the next few years teaching people how to build electronics and use robots to make their ideas a reality. I haven't been this happy in a long time.