Nocturnal Surveillance Framework for Vehicle Detection
Abstract
In the field of traffic surveillance systems, where effective traffic management and safety are the primary concerns, vehicle detection and tracking play an important role. Low-light environments suffer from low brightness, low contrast, and noise caused by poor lighting or insufficient exposure.
In this paper, we propose a vehicle detection and tracking model for aerial images captured at night. Before object detection, we perform defogging and image enhancement using the MIRNet architecture. After pre-processing, YOLOv5 is used to locate each vehicle in the image. The Scale-Invariant Feature Transform (SIFT) feature extraction algorithm is then applied to each detected vehicle to assign a unique identifier for tracking multiple vehicles across image frames.
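As an illustration of the detection and identifier-assignment steps, the following Python sketch runs a pretrained YOLOv5 model on an already-enhanced frame and computes SIFT descriptors for each detected vehicle. The file name, model variant, and the use of torch.hub and OpenCV are assumptions for the sketch, not the authors' exact implementation.

    # Minimal sketch: detect vehicles with YOLOv5, then describe each
    # detection with SIFT so it can be matched across frames.
    import cv2
    import torch

    # Assumption: the frame has already been enhanced (e.g. by MIRNet);
    # here we simply load an example image from disk.
    frame = cv2.imread("enhanced_frame.jpg")
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Load a pretrained YOLOv5 model from the ultralytics/yolov5 hub repo.
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    results = model(frame_rgb)
    detections = results.xyxy[0].cpu().numpy()  # rows: x1, y1, x2, y2, conf, class

    sift = cv2.SIFT_create()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    vehicle_signatures = {}
    for track_id, (x1, y1, x2, y2, conf, cls) in enumerate(detections):
        # Crop the detected vehicle and describe it with SIFT keypoints;
        # the descriptor set serves as the vehicle's identifier for tracking.
        patch = gray[int(y1):int(y2), int(x1):int(x2)]
        keypoints, descriptors = sift.detectAndCompute(patch, None)
        vehicle_signatures[track_id] = descriptors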
To obtain the best possible location of each vehicle in succeeding frames, templates are extracted and template matching is performed.
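The template-matching step could be sketched as follows; the normalised cross-correlation criterion and the function name are illustrative assumptions rather than the exact method used in the paper.

    # Sketch: search the next frame for the vehicle patch cropped in the
    # previous step, using OpenCV normalised cross-correlation.
    import cv2

    def locate_in_next_frame(next_frame_gray, template):
        """Return the top-left corner and score of the best template match."""
        result = cv2.matchTemplate(next_frame_gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        return max_loc, max_val

    # Example usage with the patch from the detection step:
    # top_left, score = locate_in_next_frame(next_gray, patch)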
The proposed model achieves a precision of 0.924 for detection and 0.861 for tracking on the Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset, and 0.904 for detection and 0.833 for tracking on the Vision Meets Drone Single Object Tracking (VisDrone) dataset.