How to use YOLOv5 on AWS Lambda

This article describes how to run YOLOv5 with AWS Lambda. AWS Lambda is an AWS service whose code is executed only when it is invoked directly or triggered by an event. We will create a service that executes YOLOv5 using AWS Lambda.

AWS Lambda (Lambda) supports two deployment methods: extracting and executing a Zip file, and executing a Docker container. With the Zip file method, the function code is placed in AWS S3 as a Zip file, which Lambda downloads, extracts, and executes. With the Docker container method, Lambda pulls a Docker image from AWS ECR and runs it. The cold start time (starting Lambda from an uncached state) of the Zip file method is shorter than that of the Docker container method; see this post. On the other hand, the Docker container method allows images of up to 10 GB, while the Zip file method limits the extracted files to 250 MB. If the extracted files exceed 250 MB, choose the Docker container method; otherwise, choose the Zip file method.
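
As a rough way to check which method applies, the following sketch sums the uncompressed sizes inside a deployment Zip and compares the total against the 250 MB limit (the file name function.zip is hypothetical):

import zipfile

LIMIT = 250 * 1024 * 1024  # Lambda's limit on the extracted Zip contents

def extracted_size(zip_path):
    # Sum the uncompressed size of every entry in the archive.
    with zipfile.ZipFile(zip_path) as z:
        return sum(info.file_size for info in z.infolist())

size = extracted_size("function.zip")  # hypothetical artifact name
print("Zip method OK" if size <= LIMIT else "use the Docker container method")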

Application to be created

In the application we create here, the client sends an image to the server, the server detects objects using YOLOv5, and the resulting annotated image is sent back to the client.

Prerequisites

The following prerequisites are required to run the samples in this article:

  • AWS Account
  • AWS SAM CLI installed

Initialization

The initial setup of Lambda is performed with the following command. The runtime is Python 3.8, and the Zip file method is used.

sam init --runtime python3.8 --package-type Zip --app-template hello-world --name yolov5-aws-lambda

After running the above, the yolov5-aws-lambda directory is created, containing the hello_world directory, the template.yaml file, and other files:

.
├── events
│   └── event.json
├── hello_world
│   ├── app.py
│   ├── __init__.py
│   └── requirements.txt
├── __init__.py
├── README.md
├── template.yaml
└── tests
    ├── __init__.py
    ...

Three files need to be modified:

  • hello_world/app.py
  • hello_world/requirements.txt
  • template.yaml

hello_world/app.py

We will use the application created here, which implements YOLOv5 inference with OpenCV's DNN module. Since no GPU is available on Lambda, YOLOv5 runs on the CPU only.

import json
import cv2
import base64
import time
import numpy as np


def build_model():
    net = cv2.dnn.readNet("yolov5s.onnx")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    return net


INPUT_WIDTH = 640
INPUT_HEIGHT = 640
SCORE_THRESHOLD = 0.2
NMS_THRESHOLD = 0.4
CONFIDENCE_THRESHOLD = 0.4


def detect(image, net):
    blob = cv2.dnn.blobFromImage(
        image, 1 / 255.0, (INPUT_WIDTH, INPUT_HEIGHT), swapRB=True, crop=False
    )
    net.setInput(blob)
    preds = net.forward()
    return preds


def load_classes():
    class_list = []
    with open("classes.txt", "r") as f:
        class_list = [cname.strip() for cname in f.readlines()]
    return class_list


def wrap_detection(input_image, output_data):
    class_ids = []
    confidences = []
    boxes = []

    rows = output_data.shape[0]  # 25200 candidate detections for a 640x640 input

    # input_image.shape is (height, width, channels); the image was padded to a square
    image_height, image_width, _ = input_image.shape

    x_factor = image_width / INPUT_WIDTH
    y_factor = image_height / INPUT_HEIGHT

    for r in range(rows):
        row = output_data[r]
        confidence = row[4]  # objectness score
        if confidence >= CONFIDENCE_THRESHOLD:

            classes_scores = row[5:]
            _, _, _, max_indx = cv2.minMaxLoc(classes_scores)
            class_id = max_indx[1]
            if classes_scores[class_id] > 0.25:

                confidences.append(confidence)

                class_ids.append(class_id)

                x, y, w, h = row[0].item(), row[1].item(), row[2].item(), row[3].item()
                left = int((x - 0.5 * w) * x_factor)
                top = int((y - 0.5 * h) * y_factor)
                width = int(w * x_factor)
                height = int(h * y_factor)
                box = np.array([left, top, width, height])
                boxes.append(box)

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.25, 0.45)

    result_class_ids = []
    result_confidences = []
    result_boxes = []

    for i in indexes:
        result_confidences.append(confidences[i])
        result_class_ids.append(class_ids[i])
        result_boxes.append(boxes[i])

    return result_class_ids, result_confidences, result_boxes


def format_yolov5(frame):
    row, col, _ = frame.shape
    _max = max(col, row)
    result = np.zeros((_max, _max, 3), np.uint8)
    result[0:row, 0:col] = frame
    return result


def yolov5(image):
    colors = [(255, 255, 0), (0, 255, 0), (0, 255, 255), (255, 0, 0)]
    class_list = load_classes()

    net = build_model()  # note: loads the ONNX model on every invocation; module scope would speed up warm starts

    inputImage = format_yolov5(image)
    outs = detect(inputImage, net)

    class_ids, confidences, boxes = wrap_detection(inputImage, outs[0])

    for (classid, confidence, box) in zip(class_ids, confidences, boxes):
        color = colors[int(classid) % len(colors)]
        cv2.rectangle(image, box, color, 2)
        cv2.rectangle(
            image, (box[0], box[1] - 20), (box[0] + box[2], box[1]), color, -1
        )
        cv2.putText(
            image,
            class_list[classid],
            (box[0], box[1] - 10),
            cv2.FONT_HERSHEY_SIMPLEX,
            0.5,
            (0, 0, 0),
        )
    return image


def base64_to_cv2(image_base64):
    # base64 image to cv2
    image_bytes = base64.b64decode(image_base64)
    np_array = np.frombuffer(image_bytes, np.uint8)  # np.fromstring is deprecated
    image_cv2 = cv2.imdecode(np_array, cv2.IMREAD_COLOR)
    return image_cv2


def cv2_to_base64(image_cv2):
    # cv2 image to base64
    image_bytes = cv2.imencode(".jpg", image_cv2)[1].tobytes()  # tostring() is deprecated
    image_base64 = base64.b64encode(image_bytes).decode()
    return image_base64


def lambda_handler(event, context):
    body = json.loads(event["body"])
    image = body["image"]
    image = yolov5(base64_to_cv2(image))

    return {
        "statusCode": 200,
        "body": json.dumps(
            {
                "image": cv2_to_base64(image),
            }
        ),
    }

Since template.yaml specifies app.lambda_handler as the handler, the lambda_handler function above is called when Lambda is invoked.

The client sends base64-encoded images to Lambda, so Lambda decodes them from base64 before use.
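
Before deploying, the handler can be smoke-tested locally by calling it with an event shaped like the API Gateway proxy event. This is a minimal sketch, assuming it is run from the hello_world directory with the model files described below in place and a local test image zidane.jpg:

import base64
import json

from app import lambda_handler  # the handler defined above

# Build an event shaped like the API Gateway proxy event Lambda receives.
with open("zidane.jpg", "rb") as f:
    event = {"body": json.dumps({"image": base64.b64encode(f.read()).decode()})}

response = lambda_handler(event, None)

# Decode the returned base64 image and write it to disk.
with open("predicted_local.jpg", "wb") as f:
    f.write(base64.b64decode(json.loads(response["body"])["image"]))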

Copy yolov5s.onnx and classes.txt into the hello_world directory from this repository, which is also used in this article.

hello_world/requirements.txt

hello_world/requirements.txt is as follows. The normal opencv-python package is large and exceeds the 250 MB size limit, but the size can be reduced by specifying opencv-python-headless instead. The headless package omits GUI-related libraries, which is not a problem because Lambda does not use them.

opencv-python-headless==4.6.0.66
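
To confirm that the built function actually stays under the limit, one option is to sum the on-disk size of the staged build after the sam build step described below (this sketch assumes the default output directory and the HelloWorldFunction logical ID from template.yaml):

import os

def dir_size(path):
    # Recursively sum the sizes of all files under path.
    total = 0
    for root, _, files in os.walk(path):
        total += sum(os.path.getsize(os.path.join(root, f)) for f in files)
    return total

# sam build stages each function under .aws-sam/build/<LogicalId>
mb = dir_size(".aws-sam/build/HelloWorldFunction") / (1024 * 1024)
print(f"{mb:.1f} MB (limit: 250 MB extracted)")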

template.yaml

The parts that need to be changed are as follows:

  • Globals.Function.Timeout
    • Changed to 15 seconds
      • because execution does not finish within the default 3 seconds
  • Globals.Function.MemorySize
    • Changed to 5312 MB
      • Lambda charges by execution time and MemorySize. A smaller MemorySize reduces the fee but may increase the execution time, since fewer vCPU resources are allocated. See this article.
  • Resources.HelloWorldFunction.Properties.Events.HelloWorld.Properties.Method
    • Changed to post
      • because the client POSTs the images

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  yolov5-aws-lambda

  Sample SAM Template for yolov5-aws-lambda

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 15
    MemorySize: 5312

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: post
...

Build

A build is required whenever the source code, templates, etc. are changed.

sam build

Deploy

Specify --guided to display interactive prompts. Follow the sample below.

$ sam deploy --guided

Configuring SAM deploy
======================

	Looking for config file [samconfig.toml] :  Not found

	Setting default arguments for 'sam deploy'
	=========================================
	Stack Name [sam-app]: ==> any name can be used
	AWS Region [us-east-1]: ==> any region can be used
	#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
	Confirm changes before deploy [y/N]: ==> blank is OK
	#SAM needs permission to be able to create roles to connect to the resources in your template
	Allow SAM CLI IAM role creation [Y/n]: ==> blank is OK
	#Preserves the state of previously provisioned resources when an operation fails
	Disable rollback [y/N]: ==> blank is OK
	HelloWorldFunction may not have authorization defined, Is this okay? [y/N]: Y
	Save arguments to configuration file [Y/n]: Y
	SAM configuration file [samconfig.toml]: ==> blank is OK
	SAM configuration environment [default]: ==> blank is OK

	Looking for resources needed for deployment:
	 Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-ntq68vn38lmx
	 A different default S3 bucket can be set in samconfig.toml

	Saved arguments to config file
	Running 'sam deploy' for future deployments will use the parameters saved above.
	The above parameters can be changed by modifying samconfig.toml
	Learn more about samconfig.toml syntax at 
	https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
...
Outputs
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
...
Key                 HelloWorldApi
Description         API Gateway endpoint URL for Prod stage for Hello World function
Value               https://xxxxxxxxxx.execute-api.xxxxxxxx.amazonaws.com/Prod/hello/

A samconfig.toml file is created with the above settings saved. Subsequent deployments read samconfig.toml, so the --guided option is no longer necessary.

The API Gateway URL displayed at the end of the deployment output is the service endpoint.

Test

Send an image to the endpoint shown at the end of the deployment, and the image with detection results should be returned.

wget https://raw.githubusercontent.com/ultralytics/yolov5/master/data/images/zidane.jpg
image=$(base64 -w0 zidane.jpg)
echo "{ \"image\": \"${image}\" }" | \
  curl -X POST -H "Content-Type: application/json" -d @-  https://xxxxxxxxxx.execute-api.xxxxxxxx.amazonaws.com/Prod/hello/ | \
  jq -r .image | \
  base64 -d > predicted.jpg
The detection result is saved as predicted.jpg.
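
The same test can also be run with a short Python client. This is a minimal sketch, assuming the third-party requests package is installed and the endpoint URL is replaced with your own:

import base64

import requests  # third-party: pip install requests

URL = "https://xxxxxxxxxx.execute-api.xxxxxxxx.amazonaws.com/Prod/hello/"  # your endpoint

# Encode the input image and POST it as JSON.
with open("zidane.jpg", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode()}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()

# Decode the returned base64 image and save it.
with open("predicted.jpg", "wb") as f:
    f.write(base64.b64decode(resp.json()["image"]))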

All the code is available at https://github.com/otamajakusi/yolov5-aws-lambda

That’s all
