Publications

Detailed Information

Radar-Spectrogram-Based UAV Classification Using Convolutional Neural Networks

DC Field                          Value
dc.contributor.author             Park, Dongsuk
dc.contributor.author             Lee, Seungeui
dc.contributor.author             Park, SeongUk
dc.contributor.author             Kwak, Nojun
dc.date.accessioned               2024-08-08T01:24:54Z
dc.date.available                 2024-08-08T01:24:54Z
dc.date.created                   2021-03-22
dc.date.issued                    2021-01
dc.identifier.citation            Sensors, Vol.21 No.1, pp.1-18
dc.identifier.issn                1424-8220
dc.identifier.uri                 https://hdl.handle.net/10371/205821
dc.description.abstract           With the upsurge in the use of Unmanned Aerial Vehicles (UAVs) in various fields, detecting and identifying them in real time is becoming an important topic. However, UAV identification is difficult due to their low-altitude, slow-speed, and small radar cross-section (LSS) characteristics. With the existing deterministic approach, the algorithm becomes complex and computationally expensive, making it unsuitable for real-time systems. Hence, effective alternatives enabling real-time identification of these new threats are needed. Deep-learning-based classification models learn features from data by themselves and have shown outstanding performance in computer vision tasks. In this paper, we propose a deep-learning-based classification model that learns the micro-Doppler signatures (MDS) of targets represented in radar spectrogram images. To enable this, we first recorded five LSS targets (three types of UAVs and two types of human activity) with a frequency-modulated continuous-wave (FMCW) radar in various scenarios. We then converted the signals into spectrogram images by the short-time Fourier transform (STFT) and, after data refinement and augmentation, built our own radar spectrogram dataset. Second, we analyzed the characteristics of the radar spectrogram dataset with the ResNet-18 model and, based on it, designed the ResNet-SP model with less computation, higher accuracy, and greater stability. The results show that the proposed ResNet-SP has a training time of 242 s and an accuracy of 83.39%, which is superior to ResNet-18, which takes 640 s for training with an accuracy of 79.88%.
dc.language                       English
dc.publisher                      Multidisciplinary Digital Publishing Institute (MDPI)
dc.title                          Radar-Spectrogram-Based UAV Classification Using Convolutional Neural Networks
dc.type                           Article
dc.identifier.doi                 10.3390/s21010210
dc.citation.journaltitle          Sensors
dc.identifier.wosid               000606069900001
dc.identifier.scopusid            2-s2.0-85098999567
dc.citation.endpage               18
dc.citation.number                1
dc.citation.startpage             1
dc.citation.volume                21
dc.description.isOpenAccess       Y
dc.contributor.affiliatedAuthor   Kwak, Nojun
dc.type.docType                   Article
dc.description.journalClass       1
dc.subject.keywordAuthor          CNN
dc.subject.keywordAuthor          classification
dc.subject.keywordAuthor          UAV
dc.subject.keywordAuthor          FMCW radar
dc.subject.keywordAuthor          STFT
dc.subject.keywordAuthor          spectrogram
dc.subject.keywordAuthor          MDS
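The abstract describes converting radar returns into spectrogram images via the short-time Fourier transform (STFT) before CNN classification. The paper does not publish its radar parameters, so the sketch below uses a synthetic beat signal with a toy micro-Doppler modulation; every parameter here is an illustrative assumption, not the authors' setting.

```python
import numpy as np
from scipy.signal import stft

# Assumed sample rate and duration (illustrative only).
fs = 8000
t = np.arange(0, 1.0, 1 / fs)

# Toy stand-in for an FMCW beat signal: a steady body return plus a
# sinusoidally modulated component mimicking rotating-blade micro-Doppler.
body = np.cos(2 * np.pi * 300 * t)
blades = 0.3 * np.cos(2 * np.pi * 300 * t + 40 * np.sin(2 * np.pi * 20 * t))
signal = body + blades

# STFT -> complex time-frequency map; log magnitude gives the spectrogram
# "image" that a CNN such as ResNet-18 / ResNet-SP would consume.
f, seg_times, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=192)
spectrogram_db = 20 * np.log10(np.abs(Zxx) + 1e-12)

print(spectrogram_db.shape)  # (frequency bins, time segments)
```

With `nperseg=256` the one-sided STFT yields 129 frequency bins; the time axis length depends on the hop size (`nperseg - noverlap`). In practice each such dB-scaled array would be normalized and saved as an image for training.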
Files in This Item:
There are no files associated with this item.

Related Researcher

  • Graduate School of Convergence Science & Technology
  • Department of Intelligence and Information
Research Area Feature Selection and Extraction, Object Detection, Object Recognition


Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
