Detect faces using Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

In this lab, you will use Amazon Rekognition to detect faces in an image. The exercise takes an image as input and generates a copy of the image with bounding boxes drawn around the detected faces.


Save the image above locally, as you will use it for testing.

Create project

  1. Create a new .NET Core console application project.
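If you prefer the command line to an IDE, the project can also be created with the .NET CLI (the project name Rekognition below is just an example):

```shell
# Create a new .NET Core console application project (name is an example)
dotnet new console -n Rekognition
cd Rekognition
```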


  2. Add the following NuGet packages to the project:
  • AWSSDK.Rekognition
  • System.Drawing.Common
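If you use the .NET CLI instead of the IDE's NuGet package manager, the same packages can be added from the project directory:

```shell
# Add the AWS Rekognition SDK and System.Drawing.Common packages
dotnet add package AWSSDK.Rekognition
dotnet add package System.Drawing.Common
```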


  3. Add the following import statements to Program.cs:
using System;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

using System.Threading.Tasks;

using Amazon.Rekognition;
using Amazon.Rekognition.Model;
  4. Replace the Main method in Program.cs with the following async version:
static async Task Main(string[] args)
{
    if (args.Length != 1)
    {
        Console.WriteLine("Please provide picture file name!");
        return;
    }

    var fileName = args[0];

    await IdentifyFaces(fileName);
}
  5. Add the following IdentifyFaces method implementation (details of the implementation are given below):

Please note that when you initialize the AWS SDK's AmazonRekognitionClient, you need to pass the RegionEndpoint of the AWS Region you are working in for these labs. The code below initializes the AmazonRekognitionClient in the EUWest1 Region.

static async Task IdentifyFaces(string fileName)
{
    var rekognitionClient = new AmazonRekognitionClient(Amazon.RegionEndpoint.EUWest1);

    var detectRequest = new DetectFacesRequest();

    var rekognitionImage = new Amazon.Rekognition.Model.Image();

    byte[] data = null;

    using (FileStream fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        data = new byte[fileStream.Length];
        fileStream.Read(data, 0, (int)fileStream.Length);
    }

    rekognitionImage.Bytes = new MemoryStream(data);

    detectRequest.Image = rekognitionImage;

    var detectResponse = await rekognitionClient.DetectFacesAsync(detectRequest);

    var outputFile = string.Empty;

    if (detectResponse.FaceDetails.Count > 0)
    {
        // Load a bitmap to modify with face bounding box rectangles
        var facesHighlighted = new Bitmap(fileName);
        var pen = new Pen(Color.Red, 3);

        // Create a graphics context
        using (var graphics = Graphics.FromImage(facesHighlighted))
        {
            foreach (var faceDetail in detectResponse.FaceDetails)
            {
                // Get the bounding box
                var boundingBox = faceDetail.BoundingBox;

                Console.WriteLine("Bounding box = (" + boundingBox.Left + ", " + boundingBox.Top + ", " +
                    boundingBox.Height + ", " + boundingBox.Width + ")");

                // Draw the rectangle using the bounding box values
                // They are ratios of the image size, so scale them to the picture dimensions
                graphics.DrawRectangle(pen, x: facesHighlighted.Width * boundingBox.Left,
                    y: facesHighlighted.Height * boundingBox.Top,
                    width: facesHighlighted.Width * boundingBox.Width,
                    height: facesHighlighted.Height * boundingBox.Height);
            }
        }

        // Save the new image
        outputFile = fileName.Replace(Path.GetExtension(fileName), "_faces.jpg");

        facesHighlighted.Save(outputFile, ImageFormat.Jpeg);

        Console.WriteLine(">>> " + detectResponse.FaceDetails.Count + " face(s) highlighted in file " + outputFile);
    }
    else
    {
        Console.WriteLine("No faces have been detected!");
    }

    Console.WriteLine("The process is done");
}

The code above does the following:

  • Creates an instance of the AWS SDK’s AmazonRekognitionClient, initialized to the EUWest1 region, and an instance of the DetectFacesRequest class for use with the DetectFaces API.
var rekognitionClient = new AmazonRekognitionClient(Amazon.RegionEndpoint.EUWest1);

var detectRequest = new DetectFacesRequest();
  • Creates an Amazon Rekognition Image instance, reads the input image file into a byte array, wraps that array in a MemoryStream assigned to the Bytes property of the Image, and assigns the Image to the Image property of the DetectFacesRequest instance created in the previous step.
rekognitionImage.Bytes = new MemoryStream(data);

detectRequest.Image = rekognitionImage;
  • Once the request is fully prepared, calls the DetectFacesAsync method of the AmazonRekognitionClient to perform the face detection.
var detectResponse = await rekognitionClient.DetectFacesAsync(detectRequest);
  • The detectResponse variable is of type DetectFacesResponse, which exposes a list of FaceDetail objects. The code iterates over this collection, reads the bounding box of each face, and draws a rectangle around each detected face on a bitmap copy of the original image.
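As an optional variation (not required for this lab), the DetectFacesRequest can also ask for the full set of facial attributes, and it can reference an image already stored in Amazon S3 instead of uploading the bytes; the bucket name and object key below are placeholders:

```csharp
// Optional: return all facial attributes (age range, emotions, etc.)
// instead of the default summary set
detectRequest.Attributes = new List<string> { "ALL" };

// Optional: reference an image already stored in Amazon S3
// instead of uploading the bytes with the request
detectRequest.Image = new Amazon.Rekognition.Model.Image
{
    S3Object = new Amazon.Rekognition.Model.S3Object
    {
        Bucket = "my-example-bucket", // placeholder bucket name
        Name = "001.jpg"              // placeholder object key
    }
};
```

Note that when you use S3Object, the bucket must be in the same Region as the Rekognition client.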

Run application

Now you can build the application and run it by passing the path to the sample image:

Rekognition.exe c:\projects\001.jpg
Bounding box = (0.121953234, 0.08352848, 0.22307529, 0.0807349)
Bounding box = (0.2659542, 0.1241306, 0.2001853, 0.08941019)
Bounding box = (0.58018595, 0.32555613, 0.1754903, 0.08036656)
Bounding box = (0.82486117, 0.29587007, 0.1710771, 0.08172561)
Bounding box = (0.42694697, 0.31433603, 0.17620672, 0.07065911)
Bounding box = (0.701911, 0.45138076, 0.17246804, 0.06653859)
>>> 6 face(s) highlighted in file c:\projects\001_faces.jpg
The process is done

Check the folder containing the original image; you should see another file named 001_faces.jpg.

The output file should look like the following, with the 6 detected faces highlighted.