Image Recognition using Amazon Rekognition
Amazon has a whole suite of tools to add artificial intelligence capabilities to your applications. Today we will be exploring Amazon Rekognition, an image analysis service. Rekognition can detect a number of interesting things such as faces, objects, and celebrities.
To interact with Rekognition, we will use Boto 3, the official Amazon AWS SDK for Python.
Our first step will be to create a new virtual environment and install boto3 along with
decouple. We use decouple just to manage our environment variables.
mkvirtualenv --python=$(which python3) py-rekognition
pip install boto3 python-decouple
Now that we have our virtual environment created and all necessary packages installed, we need a way to set our environment variables. ISL recommends using foreman, or a similar process manager, with a
.env file to save your environment state in ini format. The variables can also just be exported manually or via a script.
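A sketch of such a .env file (the variable names are assumptions here, and the values are placeholders):

```ini
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=us-east-1
```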
In this post we will look at two functions from the library:
detect_labels() and detect_faces(). For further examples, such as
recognize_celebrities(), see our GitHub.
In our first example we are going to use
detect_labels(). Since we have our environment variables set, the next step is to create a Python file named
py_detect_labels.py. In this file we are going to:
- Read in our environment variables
- Connect to AWS
- Open an image locally
- Pass that image to Rekognition
- Print out the results
Your file should look like the following:
Let’s look at the line
response = client.detect_labels(Image=imgobj). Here detect_labels() is the function that passes the image to Rekognition and returns an analysis of the image.
detect_labels() takes either an S3 object or an Image object as bytes. Rekognition will then try to detect all the objects in the image and give each a categorical label and a confidence score. You can also optionally include the parameters MaxLabels, to cap how many labels are returned, and MinConfidence, the minimum confidence score a label needs to be included.
You can run your program from the command line:
python py_detect_labels.py john-wall.jpg. The parameter is the name of the file you want to analyze.
The response will be:
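A detect_labels response is a dictionary with a Labels list; the labels and confidence values below are illustrative, not actual output for this image:

```python
# Illustrative (trimmed) shape of a detect_labels response; values are made up.
response = {
    "Labels": [
        {"Name": "Person", "Confidence": 99.1, "Parents": []},
        {"Name": "Sport", "Confidence": 97.4, "Parents": [{"Name": "Person"}]},
    ],
    "LabelModelVersion": "2.0",
}
```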
Building on our code from
detect_labels(), we will explore another service: facial detection.
detect_faces() returns many details about a face, including gender and emotion, whether the person has a beard, whether they are wearing eyeglasses, and an approximate age range.
A sample response:
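The response contains a FaceDetails list, one entry per detected face; the values below are illustrative rather than real output:

```python
# Illustrative (trimmed) shape of a detect_faces response; values are made up.
response = {
    "FaceDetails": [
        {
            "AgeRange": {"Low": 26, "High": 38},
            "Gender": {"Value": "Male", "Confidence": 99.0},
            "Beard": {"Value": True, "Confidence": 92.5},
            "Eyeglasses": {"Value": False, "Confidence": 98.7},
            "Emotions": [{"Type": "CALM", "Confidence": 85.2}],
        }
    ]
}
```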
That is all you need to get started using AWS’s Rekognition library. As you can see, in just a few lines of code you can easily add image or facial recognition to any application.
Check out our GitHub project for more examples.
Stay tuned for our next post in this series where we combine Rekognition with OpenCV.