This is the official code for paper "Real-Time Multi-Drone Detection and Tracking for Pursuit-Evasion with Parameter Search".
Highlights: This repository uses a fine-tuned YOLOv5 (benchmarked against YOLOv8, Swin-Transformer and RTMDet), DeepSORT and ROS to perform multi-drone detection and tracking, and runs on both Jetson Xavier NX and Jetson Nano.
If you use the code, please consider citing our paper and giving us a star 🌟.
@ARTICLE{10417793,
author={Xiao, Jiaping and Chee, Jian Hui and Feroskhan, Mir},
journal={IEEE Transactions on Intelligent Vehicles},
title={Real-Time Multi-Drone Detection and Tracking for Pursuit-Evasion With Parameter Search},
year={2024},
volume={},
number={},
pages={1-11},
keywords={Drones;Benchmark testing;Image edge detection;Cameras;YOLO;Real-time systems;Biological system modeling;Drones;Datasets;Object detection and tracking},
doi={10.1109/TIV.2024.3360433}}
Real-time multi-object detection and tracking are primarily required for intelligent multi-vehicle systems. This paper presents a whole life cycle multi-drone detection and tracking approach for collaborative drone pursuit-evasion operations, incorporating parameter search and edge acceleration techniques. Specifically, to address the single-class drone detection limitation of existing drone datasets, we first collect a new dataset "ICG-Drone" from various environments and then establish a performance benchmark with different models, such as YOLOv5, YOLOv8, and Swin Transformer. Based on its outstanding performance in terms of accuracy, inference speed, etc., the selected YOLOv5s is further fine-tuned with a genetic algorithm, achieving a 14.8% / 3.6% improvement in mean average precision (mAP) on the 2-drone-class and 3-drone-class datasets, respectively. Moreover, we develop an edge-accelerated detection and tracking system, Drone-YOLOSORT, focusing on "evader" and "pursuer" drones using TensorRT, and deliver a ROS package for modular integration, which can be easily applied in a multi-vehicle system for recognizing friends and non-friends. Our system reaches about 24.3 FPS during inference, fulfilling the 20 FPS criterion for real-time drone detection.
- Jetson Nano or Jetson Xavier NX
- Jetpack 4.5.1
- python3 (included in the default installation on Jetson Nano and Jetson Xavier NX)
- tensorrt 7.1.3.0
- torch 1.8.0
- torchvision 0.9.0
- torch2trt 0.3.0
- onnx 1.4.1
- opencv-python 4.5.3.56
- protobuf 3.17.3
- scipy 1.5.4
Follow the instructions from darknet_ros and build it in your catkin_ws.
You may run the ROS app either as a standalone executable or as a ROS package.
git clone https://github.com/NTU-ICG/multidrone-detection-tracking.git
cd multidrone-detection-tracking
// before running cmake and make, change char* yolo_engine = ""; and char* sort_engine = ""; in ./src/main.cpp to the paths of your own engine files (see the example below)
mkdir build
cd build
cmake ..
make && rm ./yolosort && mv devel/lib/yolosort/yolosort ./
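For example, after editing, the relevant lines in ./src/main.cpp might look like the sketch below. The paths are purely illustrative; point them at wherever your own engine files live.
// src/main.cpp (illustrative paths; adjust to your setup)
char* yolo_engine = "/home/user/multidrone-detection-tracking/resources/yolov5s.engine";
char* sort_engine = "/home/user/multidrone-detection-tracking/resources/deepsort.engine";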
If you face any errors, please see this article or the Errors section.
- Clone the repository into the src folder
cd ~/catkin_ws/src
git clone https://github.com/NTU-ICG/multidrone-detection-tracking.git
- Create a folder named resources inside multidrone-detection-tracking and add the yolov5s.engine and deepsort.engine files inside this folder.
cd multidrone-detection-tracking
mkdir resources
// add both engine files in resources
- Build yolosort using catkin build
cd ~/catkin_ws
catkin build yolosort
- Run the yolosort package
source ~/catkin_ws/devel/setup.bash
rospack list // check if the yolosort package exists
rosrun yolosort yolosort
- First tab: Start roscore
roscore
- Second tab: Echo the publisher (a minimal subscriber sketch follows this list if you want to consume the topic in your own node)
source ~/catkin_ws/devel/setup.bash
rostopic echo /detection/bounding_boxes
- Third tab: Start yolosort from the executable or the ROS package (refer to Build and Run above)
- Fourth tab: Start the stream input from a rosbag file
rosbag play <BAGFILE.bag>
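The sketch below shows one way to consume the detections in your own ROS node instead of just echoing the topic. It assumes /detection/bounding_boxes carries darknet_ros_msgs/BoundingBoxes messages (darknet_ros is built alongside this package); confirm the actual message type with rostopic info /detection/bounding_boxes before relying on it.
// detection_listener.cpp -- minimal sketch; the message type is an assumption, verify with rostopic info
#include <ros/ros.h>
#include <darknet_ros_msgs/BoundingBoxes.h>

void boxesCallback(const darknet_ros_msgs::BoundingBoxes::ConstPtr& msg) {
  // print every detection in the incoming message
  for (const auto& box : msg->bounding_boxes) {
    ROS_INFO("class=%s prob=%.2f bbox=[%ld, %ld, %ld, %ld]",
             box.Class.c_str(), box.probability,
             (long)box.xmin, (long)box.ymin, (long)box.xmax, (long)box.ymax);
  }
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "detection_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/detection/bounding_boxes", 10, boxesCallback);
  ros::spin();
  return 0;
}
To build it, the node would need to live in a catkin package that depends on roscpp and darknet_ros_msgs.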
The "ICG-Drone" dataset can be found in uavgdrone, which includes "ICG-Drone-2c" and "ICG-Drone-3c". The "ICG_Drone" tracking dataset can be found in uavgdrone-tracking.
Dataset visualizations (for both ICG-Drone-2c and ICG-Drone-3c): data distribution from various sources, data property distribution, and data property correlogram.
Two models need to be generated: a yolov5 model for detection, generated with tensorrtx, and a deepsort model for tracking. Both model files can be found under resources.
Yolov5s was chosen for this project. You can use the following Colab notebook to train on the drones dataset.
We also provide other models such as YOLOv8, Swin-Transformer and RTMDet, which can be trained with the following Colab notebook.
Note that pretrained models for deepsort can be retrieved from deepsort. If you need to use your own model, refer to the For Other Custom Models section.
You can also refer to tensorrtx official readme.
The deepsort.onnx and deepsort.engine files can be found on BaiduYun and at https://github.com/RichardoMrMu/yolov5-deepsort-tensorrt/releases/tag/yolosort
Model | Url |
---|---|
BaiduYun | BaiduYun url (passwd: z68e) |
- Get the yolov5 repository. Although yolov5 v6.1 is available, only yolov5 v6.0 is currently supported due to tensorrtx. Please ensure your yolov5 code is v6.0.
git clone -b v5.0 https://github.com/ultralytics/yolov5.git
cd yolov5
mkdir weights
cd weights
// copy yolov5 pt file to here.
- Get tensorrtx.
cd ../..
git clone https://github.com/wang-xinyu/tensorrtx
- Generate the yolov5s.wts model. If there is a segmentation fault (core dumped) or an illegal operation error while generating the yolov5s.wts file, you can use the following notebook to generate the file.
cp tensorrtx/gen_wts.py yolov5/
cd yolov5
python3 gen_wts.py -w ./weights/yolov5s.pt -o ./weights/yolov5s.wts
// a file 'yolov5s.wts' will be generated.
The yolov5s.wts model will be generated in yolov5/weights/.
- Build tensorrtx/yolov5 to generate yolov5s.engine. Update CLASS_NUM, INPUT_H and INPUT_W in tensorrtx/yolov5/yololayer.h (lines 20, 21 and 22) before making. In yololayer.h:
// before
static constexpr int CLASS_NUM = 80; // line 20
static constexpr int INPUT_H = 640; // line 21 yolov5's input height and width must be divisible by 32.
static constexpr int INPUT_W = 640; // line 22
// after
// if your model has 2 classes and the image size is 608*608
static constexpr int CLASS_NUM = 2; // line 20
static constexpr int INPUT_H = 608; // line 21 yolov5's input height and width must be divisible by 32.
static constexpr int INPUT_W = 608; // line 22
cd tensorrtx/yolov5
// update CLASS_NUM in yololayer.h if your model is trained on a custom dataset
mkdir build
cd build
cp {ultralytics}/yolov5/yolov5s.wts {tensorrtx}/yolov5/build
cmake ..
make
// serialise yolov5s engine file
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
// test your engine file
sudo ./yolov5 -d yolov5s.engine ../samples
- After generating yolov5s.engine, you can place yolov5s.engine in the main project. For example:
cd {yolov5-deepsort-tensorrt}
mkdir resources
cp {tensorrtx}/yolov5/build/yolov5s.engine {yolov5-deepsort-tensorrt}/resources
- Get the deepsort engine file. You can get the deepsort pretrained model from this drive url and ckpt.t7. The deepsort.engine file can also be found in the releases.
git clone https://github.com/RichardoMrMu/deepsort-tensorrt.git
// follow the instructions on GitHub
cp {deepsort-tensorrt}/exportOnnx.py {deep_sort_pytorch}/
python3 exportOnnx.py
mv {deep_sort_pytorch}/deepsort.onnx {deepsort-tensorrt}/resources
cd {deepsort-tensorrt}
mkdir build
cd build
cmake ..
make
./onnx2engine ../resources/deepsort.onnx ../resources/deepsort.engine
// test
./demo ../resources/deepsort.engine ../resources/track.txt
You may then add both the yolov5s.engine and deepsort.engine files into the project.
Different versions of yolov5
Currently, tensorrtx supports yolov5 v1.0 (yolov5s only), v2.0, v3.0, v3.1, v4.0, v5.0 and v6.0. v6.1 supports exporting to TensorRT (see here) but does not export to TensorRT 7.1.3.0.
- For yolov5 v5.0, download .pt from yolov5 release v5.0, git clone -b v5.0 https://github.com/ultralytics/yolov5.git and git clone https://github.com/wang-xinyu/tensorrtx.git, then follow how-to-run on the current page.
- For yolov5 v4.0, download .pt from yolov5 release v4.0, git clone -b v4.0 https://github.com/ultralytics/yolov5.git and git clone -b yolov5-v4.0 https://github.com/wang-xinyu/tensorrtx.git, then follow how-to-run in tensorrtx/yolov5-v4.0.
- For yolov5 v3.1, download .pt from yolov5 release v3.1, git clone -b v3.1 https://github.com/ultralytics/yolov5.git and git clone -b yolov5-v3.1 https://github.com/wang-xinyu/tensorrtx.git, then follow how-to-run in tensorrtx/yolov5-v3.1.
- For yolov5 v3.0, download .pt from yolov5 release v3.0, git clone -b v3.0 https://github.com/ultralytics/yolov5.git and git clone -b yolov5-v3.0 https://github.com/wang-xinyu/tensorrtx.git, then follow how-to-run in tensorrtx/yolov5-v3.0.
- For yolov5 v2.0, download .pt from yolov5 release v2.0, git clone -b v2.0 https://github.com/ultralytics/yolov5.git and git clone -b yolov5-v2.0 https://github.com/wang-xinyu/tensorrtx.git, then follow how-to-run in tensorrtx/yolov5-v2.0.
- For yolov5 v1.0, download .pt from yolov5 release v1.0, git clone -b v1.0 https://github.com/ultralytics/yolov5.git and git clone -b yolov5-v1.0 https://github.com/wang-xinyu/tensorrtx.git, then follow how-to-run in tensorrtx/yolov5-v1.0.
Config
- Choose the model s/m/l/x/s6/m6/l6/x6 from command line arguments.
- Input shape defined in yololayer.h
- Number of classes defined in yololayer.h; DO NOT FORGET TO ADAPT THIS if you are using your own model
- INT8/FP16/FP32 can be selected by the macro in yolov5.cpp; INT8 needs more steps, please follow How to Run first and then go to the INT8 Quantization section below (the typical yolov5.cpp macros are sketched after this list)
- GPU id can be selected by the macro in yolov5.cpp
- NMS thresh in yolov5.cpp
- BBox confidence thresh in yolov5.cpp
- Batch size in yolov5.cpp
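As a reference, these settings are plain preprocessor macros near the top of tensorrtx's yolov5.cpp. The snippet below is only a sketch of what they commonly look like; the exact names and default values can differ between tensorrtx versions, so check your own checkout.
// yolov5.cpp (tensorrtx) -- typical tunables, values shown are common defaults
#define USE_FP16          // select precision: comment out for FP32, or define USE_INT8 instead
#define DEVICE 0          // GPU id
#define NMS_THRESH 0.4    // IoU threshold used by non-maximum suppression
#define CONF_THRESH 0.5   // bounding-box confidence threshold
#define BATCH_SIZE 1      // batch size used when building the engine
After changing any of these, rebuild; changes to precision or batch size also require re-serializing the engine file.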
You may need to train your own model and transfer the trained model to TensorRT.
- Train Custom Model
Follow the official wiki to train your own model on your dataset. For example, we chose yolov5s to train our model.
- Transfer Custom Model
Following the TensorRT official guidelines, transfer your PyTorch model to TensorRT. Change yololayer.h lines 20, 21 and 22 (CLASS_NUM, INPUT_H, INPUT_W) to your own parameters.
cd {tensorrtx}/yolov5/
// update CLASS_NUM in yololayer.h if your model is trained on a custom dataset
mkdir build
cd build
cp {ultralytics}/yolov5/yolov5s.wts {tensorrtx}/yolov5/build
cmake ..
make
sudo ./yolov5 -s [.wts] [.engine] [s/m/l/x/s6/m6/l6/x6 or c/c6 gd gw] // serialize model to plan file
sudo ./yolov5 -d [.engine] [image folder] // deserialize and run inference, the images in [image folder] will be processed.
// For example yolov5s
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
sudo ./yolov5 -d yolov5s.engine ../samples
// For example Custom model with depth_multiple=0.17, width_multiple=0.25 in yolov5.yaml
sudo ./yolov5 -s yolov5_custom.wts yolov5.engine c 0.17 0.25
sudo ./yolov5 -d yolov5.engine ../samples
- If you encounter the following error during building: fatal error: Eigen/Core: No such file or directory #include <Eigen/Core>. Run the following:
sudo ln -s /usr/include/eigen3/Eigen /usr/include/Eigen
- The following problem may occur when generating the wts file:
ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
Run the following in terminal. Credits
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1
- When making yolosort, the following error may occur due to OpenCV on Jetson Xavier:
CMake Error at /opt/ros/melodic/share/cv_bridge/cmake/cv_bridgeConfig.cmake:113 (message): Project 'cv_bridge' specifies '/usr/include/opencv' as an include dir, which is not found. It does neither exist as an absolute directory nor in '${prefix}//usr/include/opencv'. Check the issue tracker 'https://github.com/ros-perception/vision_opencv/issues' and consider creating a ticket if the problem has not been reported yet.
Open the file /opt/ros/melodic/share/cv_bridge/cmake/cv_bridgeConfig.cmake and change the following line. Credits
set(_include_dirs "include;/usr/include;/usr/include/opencv")
to set(_include_dirs "include;/usr/include;/usr/include/opencv4")