Super Simple Google AIY Surveillance Camera


In May 2018, during the Bay Area Maker Faire in San Francisco, I bought a Google AIY Vision Kit. A few weeks later, during my fourth week of living in the San Francisco area, I decided to put the kit together and mess around with it for a few days. I did the basics and the tutorials but wanted more. I wanted a basic surveillance camera for my room for when I went home to Brooklyn.


Please note, this is not a “secure” way to do this and it is not a “security” camera. Anything you do with this script can be seen by anyone and is NOT private. Twilio is not a free service; if you decide to make this script work you will need to fund a Twilio account. $10 or $20 should be enough.

One of the example AIY projects was the “Cat, Dog, Person” detector. It was perfect for what I needed, detecting people and pets, but it had a few limitations and seemed ready for some edits. It did not send the images it produced anywhere. It had to be run from the terminal, one image at a time, and you had to pass it the path of an already saved image on the command line. You also had to specify an output “save” location for the image that Google’s code outlined. Just as importantly, the color of the outline wasn’t green.

This project was my first introduction to Python 3, and I was scared of changing language versions at first, but then I learned that Python 3 is very similar to Python 2.7. Python 2.7 allows for more flexibility, such as letting you leave out the parentheses around a print statement, while Python 3 is more strict about its syntax. While Python 3 may make the code a little harder to write, it is much easier to read and understand.
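The print statement is the most visible example of that strictness (a tiny illustration, not code from the project script):

    # Python 2.7 accepts a bare print statement:
    #     print "Person detected!"
    # Python 3 only accepts print as a function call:
    print("Person detected!")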

For this project I mashed up code from the Raspberry Pi camera documentation, Twilio, the Google “Cat, Dog, Person” (object) detector, and pyimgur. Since I wanted this to be “simple”, my goal was to stick as close as possible to the example code I found at each source.

Adding the Pi camera seemed like the best first thing to do. Twilio was second. The reason I used Twilio was that I didn’t want to add a modem to the outside of the Google AIY kit, and I did not want to buy a SIM card. Finally, the reason I used Imgur was that it was the quickest way (that I could find) to send an image through Twilio. Adding the Imgur link was the second to last part I finished.
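The upload-and-text half of the pipeline is small. Here is a minimal sketch of how pyimgur and the Twilio client fit together, assuming you already have an annotated photo saved to disk; the file name, message text, and placeholder credentials are just for illustration, not the exact code in my script:

    import pyimgur
    from twilio.rest import Client

    IMGUR_CLIENT_ID = "your_imgur_client_id"
    TWILIO_SID = "your_twilio_sid"
    TWILIO_AUTH_TOKEN = "your_twilio_auth_token"

    # Upload the annotated photo to Imgur so Twilio has a public URL to attach
    imgur = pyimgur.Imgur(IMGUR_CLIENT_ID)
    uploaded = imgur.upload_image("detected.jpg", title="AIY camera detection")

    # Send an MMS through Twilio with the Imgur link as the attached media
    twilio = Client(TWILIO_SID, TWILIO_AUTH_TOKEN)
    twilio.messages.create(
        to="+17777777777",     # the phone that should receive the alert
        from_="+15555555555",  # your Twilio number
        body="Something was detected on camera!",
        media_url=[uploaded.link],
    )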

Finally, I added a simple loop to make it repeat. Inside the loop, I save an edited file only if something is detected. If nothing is detected, it waits 10 seconds before running the loop again, but if it detects something it slows down and waits 60 seconds before starting the loop again. The reason for the longer wait is that when my cat sits in one place for an hour, I do NOT want to receive 360+ text messages.
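The timing logic is just a loop with two different sleeps. Here is a rough skeleton of that structure; capture_photo(), detect_objects(), and send_alert() are hypothetical stand-ins for the real capture, inference, and Imgur/Twilio code, not the actual function names in my script:

    import time

    while True:
        infile = capture_photo()             # grab a frame from the Pi camera
        detections = detect_objects(infile)  # run the AIY object detector on it

        if detections:
            send_alert(detections)           # upload to Imgur and text me through Twilio
            time.sleep(60)                   # back off so a napping cat isn't 360+ texts
        else:
            time.sleep(10)                   # nothing seen, check again soon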

One of the issues I had with one version of my build is that it filled up the Pi with too many photos of nothing being detected. One flaw in this method is that I save the camera file to disk, then open it and use it with Google’s AIY inference code. To fix this I had it delete the unedited photo that it took (“infile” in the code below). The result is that it now only saves edited photos where a person, dog, or cat has been detected.
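That cleanup is a single call at the end of each pass through the loop. A small sketch of the idea, where infile is the path of the photo that was just captured and run through inference:

    import os

    def discard_raw_capture(infile):
        """Delete the unedited photo so only annotated detections stay on the SD card."""
        if os.path.exists(infile):
            os.remove(infile)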

This project has been really successful in alerting me when something is on camera. One problem is with the Raspberry Pi camera: sometimes the image turns purple! So far the Google recognition AI has been really good at noticing that something is there, but not very good at identifying whether that something is a cat, dog, or person.


There are a few things I want to change now that it is stable and working.

 

TO MAKE THIS SCRIPT WORK :

Needed : You need to know how to use a Raspberry Pi, how to use Python, and how to create and save a Python text file.

First : You need to build a Google AIY Vision Kit & run at least one example project.

https://www.adafruit.com/product/3780

 

IF YOU ARE UNDER 18 ASK YOUR PARENTS TO HELP WITH TWILIO & IMGUR – I DID!

Register a Twilio account and create a new project. Your project will need some funding; $10 or $20 will do to get started.

When you “Create A New Project” :

  • Choose “Products”
  • Choose “Programmable SMS”
  • Give your new project a name.
  • Get the “SID” for your new project
  • Get the “Auth Token” for your new project
  • Register a “telephone number” with Twilio and make sure it has SMS/MMS capability

Register an Imgur account

  • In settings, add an Application
  • Get the Client ID of your new Application

Run the following commands on your Pi from the terminal to install pyimgur and twilio

  • sudo pip3 install pyimgur
  • sudo pip3 install twilio

Save the script into your AIY EXAMPLES directory as “objCamera.py”

note – my examples directory is “~/AIY-projects-python/src/examples/vision/”

Open the file so you can edit it.

You will need to edit the script a little : (do not delete any quotation marks; a sketch of the finished edits follows this list)

  • Change TWILIO_SID in line 53 to the SID you got from Twilio
  • Change TWILIO_AUTH_TOKEN in line 54 to the Auth Token you got from Twilio
  • Change IMGUR_CLIENT_ID in line 56 to the client ID you got from IMGUR
  • Change +17777777777 in line 164 to the phone number you are sending MMS to (US NUMBERS START WITH +1)
  • Change +15555555555 in line 165 to the phone number of your Twilio Account (US NUMBERS START WITH +1)
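
Put together, the edited lines end up looking roughly like this. The values below are placeholders, and the two phone numbers are shown as named variables only for illustration; in the actual script the numbers sit directly inside the Twilio message call at the lines noted above:

    # Near the top of the script: credentials
    TWILIO_SID = "your_twilio_project_sid"        # SID from your Twilio project
    TWILIO_AUTH_TOKEN = "your_twilio_auth_token"  # Auth Token from your Twilio project
    IMGUR_CLIENT_ID = "your_imgur_client_id"      # Client ID from your Imgur application

    # Further down: the two phone numbers used when the MMS is sent
    to_number = "+17777777777"    # the phone that receives the alerts (US numbers start with +1)
    from_number = "+15555555555"  # the number registered to your Twilio account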

In terminal, change to your examples directory and then “chmod” the script so it can be run

  • cd ~/AIY-projects-python/src/examples/vision/
  • chmod 755 objCamera.py

The last thing you need is a folder in your home directory for your saved output files

  • mkdir ~/DiscoBunnies

Run the file, and watch the results come in – in terminal, type :

  • ./objCamera.py

Here’s the script, have fun!
