Image Recognition using Amazon Rekognition
Amazon has a whole suite of tools to add artificial intelligence capabilities to your applications. Today we will be exploring Amazon Rekognition, an image analysis service. Rekognition can detect a number of interesting things such as faces, objects, and celebrities.
To interact with Rekognition, we will use Boto3, the official AWS SDK for Python.
If you do not have an AWS account, you can create one following their documentation. Once you have signed up, note your access key ID and secret access key; you will need them later.
Tools Needed
- Python 3
- virtualenvwrapper (for the mkvirtualenv command below)
- An AWS account
- boto3 and python-decouple
Setup
Our first step will be to create a new virtual environment and install boto3 and python-decouple. We use decouple just to manage our environment variables.
mkvirtualenv --python=$(which python3) py-rekognition
pip install boto3 python-decouple
Now that we have our virtual environment created and all necessary packages installed, we need a way to set our environment variables. ISL recommends using foreman, or a similar process manager, with a .env file to save your environment state in ini format. The variables can also just be exported manually or via a script.
Sample .env file:
AWS_ACCESS_KEY=INSERT_AWS_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=INSERT_AWS_SECRET_ACCESS_KEY
Development
In this post we will look at two functions from the library: detect_labels() and detect_faces(). For further examples of compare_faces() and recognize_celebrities(), see our GitHub.
In our first example we are going to use detect_labels(). Since we have our environment variables set, the next step is to create a Python file named py_detect_labels.py. In this file we are going to:
- Read in our environment variables
- Connect to AWS
- Open an image locally
- Pass that image to Rekognition
- Print out the results
Example
Your file should look like the following:
import sys
Let’s look at the line response = client.detect_labels(Image=imgobj). Here detect_labels() is the function that passes the image to Rekognition and returns an analysis of the image. detect_labels() takes either an S3 object or an Image object as bytes. Rekognition will then try to detect all the objects in the image, giving each a categorical label and a confidence score. You can also optionally include the parameters MaxLabels and MinConfidence.
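The two optional parameters can be forwarded straight into the call. A thin wrapper shows how; the function name and default values here are illustrative, not part of boto3:

```python
def detect_top_labels(client, image_bytes, max_labels=5, min_confidence=80):
    """Ask Rekognition for at most `max_labels` labels, dropping any
    whose confidence score falls below `min_confidence` percent."""
    return client.detect_labels(
        Image={'Bytes': image_bytes},
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
```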
Test It Out
You can run your program from the command line: python py_detect_labels.py john-wall.jpg. The parameter is the name of the file you want to analyze.
The response is a JSON document containing a Labels list; each entry pairs a Name with a Confidence score.
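The exact labels depend on the image; the values below are made up rather than the actual output for john-wall.jpg, but the structure matches what detect_labels() returns:

```python
# Illustrative values only -- not the actual output for john-wall.jpg.
sample_response = {
    'Labels': [
        {'Name': 'Person', 'Confidence': 99.1},
        {'Name': 'Sport', 'Confidence': 97.6},
        {'Name': 'Basketball', 'Confidence': 95.3},
    ],
}

for label in sample_response['Labels']:
    print('{} - {}'.format(label['Name'], label['Confidence']))
```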
Building on our code from detect_labels(), we will explore another service: facial detection. detect_faces() returns many details about a face, including gender and emotion, whether the person has a beard or is wearing eyeglasses, and an approximate age range.
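The call mirrors detect_labels(), using the same client and image payload. One detail worth noting: passing Attributes=['ALL'] is what requests the full attribute set, since the default returns only a minimal subset. The wrapper name below is my own:

```python
def detect_face_details(client, image_bytes):
    # Attributes=['ALL'] asks for the full attribute set (age range,
    # beard, eyeglasses, emotions, ...). The default, ['DEFAULT'],
    # returns only bounding box, landmarks, pose, and quality.
    response = client.detect_faces(
        Image={'Bytes': image_bytes},
        Attributes=['ALL'],
    )
    return response['FaceDetails']
```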
A sample response contains a FaceDetails list; each detected face includes fields such as AgeRange, Gender, Beard, Eyeglasses, and a list of Emotions.
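The confidence values below are invented, but the field names follow the shape of a detect_faces() response:

```python
# Illustrative values only; field names match the detect_faces response.
sample_face = {
    'FaceDetails': [
        {
            'AgeRange': {'Low': 26, 'High': 38},
            'Gender': {'Value': 'Male', 'Confidence': 99.0},
            'Beard': {'Value': True, 'Confidence': 92.5},
            'Eyeglasses': {'Value': False, 'Confidence': 98.1},
            'Emotions': [
                {'Type': 'HAPPY', 'Confidence': 88.4},
                {'Type': 'CALM', 'Confidence': 7.2},
            ],
        }
    ],
}

face = sample_face['FaceDetails'][0]
print('Age range: {}-{}'.format(face['AgeRange']['Low'], face['AgeRange']['High']))
```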
Wrap Up
That is all you need to get started using AWS’s Rekognition library. As you can see, in just a few lines of code you can easily add image or facial recognition to any application.
Check out our GitHub project for more examples.
Stay tuned for our next post in this series where we combine Rekognition with OpenCV.