YOLOv5 and YOLOv7 with ncnn on Jetson Nano


In a previous article, the PyTorch version of YOLOv5 was demonstrated on the Jetson Nano, and in another article the OpenCV version of YOLOv5 was demonstrated on the Jetson Nano as well.

This time, this article explains how to run the ncnn versions of YOLOv5 and YOLOv7 on the Jetson Nano. ncnn is a neural-network inference framework specialized for mobile devices, with no third-party library dependencies. OpenCV is still used here, but only to load and display images; it plays no part in the inference itself. The framework is also very small and builds in minutes, not the 13 hours a PyTorch build takes.
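
To make that division of labor concrete, here is a minimal sketch of what ncnn inference looks like in C++ once everything is installed: OpenCV only reads the image, ncnn runs the network. The model file names and blob names below ("yolov5s.param", "images", "output") are placeholders of mine; the repositories introduced later ship their own models and decoding code.

#include <ncnn/net.h>            // installed under /usr/local/include/ncnn in the steps below
#include <opencv2/opencv.hpp>

int main()
{
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;                 // use the Vulkan GPU backend
    // Placeholder model files -- the repositories below ship their own .param/.bin pairs.
    if (net.load_param("yolov5s.param") || net.load_model("yolov5s.bin"))
        return -1;

    cv::Mat bgr = cv::imread("test.jpg");              // OpenCV only loads the image
    if (bgr.empty())
        return -1;
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(
        bgr.data, ncnn::Mat::PIXEL_BGR2RGB, bgr.cols, bgr.rows, 640, 640);
    const float norm[3] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
    in.substract_mean_normalize(0, norm);              // scale pixels to 0..1

    ncnn::Extractor ex = net.create_extractor();
    ex.input("images", in);                            // blob names depend on the exported model
    ncnn::Mat out;
    ex.extract("output", out);                         // raw predictions; decoding/NMS omitted
    // ... decode boxes from 'out', then draw and display them with OpenCV ...
    return 0;
}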

Installing ncnn on Jetson Nano

Instructions for installing ncnn on the Jetson Nano can be found elsewhere, but this article briefly walks through them.

sudo apt-get update
sudo apt-get install -y cmake wget
sudo apt-get install -y libprotobuf-dev protobuf-compiler libvulkan-dev
git clone --depth=1 https://github.com/Tencent/ncnn.git
cd ncnn
git submodule update --depth=1 --init
mkdir build
cd build
cmake -D CMAKE_TOOLCHAIN_FILE=../toolchains/jetson.toolchain.cmake \
        -D NCNN_DISABLE_RTTI=OFF \
        -D NCNN_BUILD_TOOLS=ON \
        -D NCNN_VULKAN=ON \
        -D CMAKE_BUILD_TYPE=Release ..
make -j4
make install
sudo mkdir /usr/local/lib/ncnn
sudo cp -r install/include/ncnn /usr/local/include/ncnn
sudo cp -r install/lib/*.a /usr/local/lib/ncnn/
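
As a quick sanity check of the install (and of the Vulkan backend), a small throwaway program like the one below can be compiled against the copied headers and libraries. This is my own sketch, not part of the ncnn distribution, and the compile/link flags in the comment are assumptions that may need adjusting for your build.

// check_ncnn.cpp -- build with something like:
//   g++ check_ncnn.cpp -o check_ncnn -fopenmp -L/usr/local/lib/ncnn -lncnn -lvulkan
// (a Vulkan-enabled static libncnn may also need the glslang/SPIRV archives
//  that were copied into /usr/local/lib/ncnn)
#include <cstdio>
#include <ncnn/gpu.h>

int main()
{
    ncnn::create_gpu_instance();                        // initialize Vulkan
    std::printf("Vulkan devices found: %d\n", ncnn::get_gpu_count());
    ncnn::destroy_gpu_instance();
    return 0;
}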

If you installed OpenCV from source, the following error may occur at the cmake above.

CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE
  CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) (Required is exact version "10.2")

If this error occurs, do the following and resume processing from cmake.

sudo rm -rf /usr/local/lib/cmake/opencv4

Building and running YOLOv5 on Jetson Nano

Install Code::Blocks. Code::Blocks is an IDE, and the https://github.com/Qengineering/YoloV5-ncnn-Jetson-Nano repository, which provides the ncnn version of YOLOv5, uses Code::Blocks as its project format.

sudo apt-get install -y codeblocks

Next, build YOLOv5. The original ncnn version of the YOLOv5 repository detects objects in a static image, but since we want to detect objects in camera images, the following forked repository is used.

git clone https://github.com/otamajakusi/YoloV5-ncnn-Jetson-Nano
codeblocks

When the compiler-selection screen appears, select GNU GCC Compiler and click OK.

Open YoloV5-ncnn-Jetson-Nano/YoloV5.cbp (step 1 in the screen below), then perform steps 2, 3, and 4.

The video should be displayed, running at about 5.1 FPS.
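
If you want to measure the frame rate yourself rather than eyeballing it, wrapping the per-frame work in a timer is enough. The loop below is my own sketch, not code from the repository; detect_yolov5 is a hypothetical stand-in for the project's detection call, and camera index 0 is an assumption.

#include <cstdio>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                      // camera index 0 is an assumption
    if (!cap.isOpened()) return -1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        double t0 = (double)cv::getTickCount();

        // detect_yolov5(frame, objects);         // hypothetical: the project's detection call goes here
        cv::imshow("frame", frame);
        if (cv::waitKey(1) == 27) break;          // ESC quits

        double sec = ((double)cv::getTickCount() - t0) / cv::getTickFrequency();
        std::printf("FPS: %.1f\n", 1.0 / sec);    // per-frame rate including display
    }
    return 0;
}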

Building and running YOLOv7 on Jetson Nano

Clone the corresponding repository for YOLOv7 as well.

git clone https://github.com/otamajakusi/YoloV7-ncnn-Jetson-Nano/
codeblocks

Open YoloV7-ncnn-Jetson-Nano/YoloV7.cbp and proceed the same way as with YoloV5-ncnn-Jetson-Nano.

The video should be displayed, running at about 7.1 FPS.

FPS is lower than expected

If the frame rate is lower than expected, the capture settings need to be adjusted for your camera device. The following code can be found in YoloV5-ncnn-Jetson-Nano/yolov5.cpp and YoloV7-ncnn-Jetson-Nano/yolov7main.cpp.

#if 0 /* depends on your device */
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('Y','U','Y','V'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    cap.set(cv::CAP_PROP_FPS, 25);
#endif

This is the block to modify; here is how to determine the right values. First, install v4l-utils.

sudo apt install v4l-utils -y

List devices with v4l2-ctl.

$ v4l2-ctl --list-devices
1080P Webcam (usb-70090000.xusb-2.1):
	/dev/video0

Next, get the list of formats supported by the target device. The example below retrieves the format list for /dev/video0.

$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'MJPG' (compressed)
	Name        : Motion-JPEG
		Size: Discrete 1280x720
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 1024x768
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 1280x1024
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 160x120
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 800x600
			Interval: Discrete 0.040s (25.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1280x720
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.200s (5.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 1024x768
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 1280x1024
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 160x120
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 800x600
			Interval: Discrete 0.167s (6.000 fps)

If OpenCV initializes the camera with, for example, Pixel Format 'YUYV' at Size 1280×720 from the list above, the camera delivers only 6.000 fps. It is not obvious which format and size OpenCV selects by default, and an unlucky choice is exactly what keeps the fps lower than expected. To get 25 fps with the camera above, enable the block shown earlier and set it as follows.

    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('Y','U','Y','V'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    cap.set(cv::CAP_PROP_FPS, 25);
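
As a side note, the format list above also shows that the compressed MJPG stream runs at 25 fps all the way up to 1920x1080, so if you want a higher resolution than 640x480, requesting MJPG instead of YUYV is worth trying. Whether OpenCV honors the request depends on the camera and capture backend, so treat the following as a sketch rather than a guaranteed fix.

    // Alternative (assumption, not from the repository): request the MJPG stream,
    // which this camera serves at 25 fps even at 1280x720 (see the v4l2-ctl output above).
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M','J','P','G'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(cv::CAP_PROP_FPS, 25);

Keep in mind that the detector itself still limits the end-to-end rate to roughly the 5-7 FPS measured above.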

That’s all.

Reference

Deep learning examples on Raspberry 32/64 OS - Q-engineering: a wide range of deep learning C++ examples for the Raspberry Pi 32/64-bit operating systems, built from source.
GitHub - Qengineering/YoloV5-ncnn-Jetson-Nano: YoloV5 for Jetson Nano.
GitHub - otamajakusi/YoloV5-ncnn-Jetson-Nano: YoloV5 for Jetson Nano (the fork used in this article).
GitHub - Qengineering/YoloV7-ncnn-Jetson-Nano: YoloV7 for a Jetson Nano using ncnn.
GitHub - otamajakusi/YoloV7-ncnn-Jetson-Nano: YoloV7 for a Jetson Nano using ncnn (the fork used in this article).
OpenCV 3.2 with Python 3: VideoCapture changes settings of ... (Stack Overflow question about selecting a camera resolution with VideoCapture).