
Detailed Information

Object Detection and Classification in 3D Point Cloud Data for Automated Driving : 자율 주행을 위한 3D Point Cloud Data 기반 물체 탐지 및 분류 기법에 관한 연구

DC Field / Value
dc.contributor.advisor: 서승우
dc.contributor.author: Myung-Ok Shin
dc.date.accessioned: 2017-07-13T07:19:53Z
dc.date.available: 2017-07-13T07:19:53Z
dc.date.issued: 2017-02
dc.identifier.other: 000000141322
dc.identifier.uri: https://hdl.handle.net/10371/119260
dc.description: Thesis (Ph.D.) -- 서울대학교 대학원 : 전기·컴퓨터공학부 (Department of Electrical and Computer Engineering, Graduate School of Seoul National University), February 2017. Advisor: 서승우.
dc.description.abstract: A 3D LIDAR provides 3D surface information of objects with the highest position accuracy among the sensors available for developing perception algorithms for automated driving vehicles. For automated driving, this accurate surface information offers two benefits: 1) because the LIDAR is an active sensor, accurate position information, which is valuable in itself for collision avoidance, is provided stably regardless of illumination conditions; 2) the surface information yields precise 3D shape-oriented features for object classification. Motivated by these characteristics, this dissertation proposes three 3D LIDAR-based algorithms for the perception tasks of automated driving vehicles.

The first step in using the 3D LIDAR as a perception sensor is segmentation, which transforms a stream of LIDAR measurements into multiple point groups, where each group corresponds to an individual object near the sensor. Chapter 2 proposes a real-time and accurate segmentation algorithm. In particular, Gaussian Process regression is used to address over-segmentation, a problem that increases false positives by partitioning a single object into multiple parts.
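The following is a minimal Python sketch of how a Gaussian Process regression check could be used to merge over-segmented clusters; the scikit-learn kernel choice, the idea of regressing point height z over (x, y), and the merge threshold are illustrative assumptions rather than the dissertation's exact procedure.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def should_merge(cluster_a, cluster_b, sigma_factor=2.0, agree_ratio=0.8):
        """Hypothetical over-segmentation test: fit a GP surface z = f(x, y)
        to one cluster and check whether the other cluster's points are
        consistent with that surface within the predictive uncertainty."""
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(cluster_a[:, :2], cluster_a[:, 2])            # train on (x, y) -> z
        mean, std = gp.predict(cluster_b[:, :2], return_std=True)
        residual = np.abs(cluster_b[:, 2] - mean)
        # Merge if most points of cluster_b fall inside the GP's confidence band.
        return np.mean(residual < sigma_factor * std) > agree_ratio

In a real-time pipeline, such a check would presumably be run only on spatially adjacent cluster pairs, so the extra cost of the regression stays bounded.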

The segmentation result can serve as input to further perception algorithms, such as object classification, which is required for designing more human-like driving strategies. For example, recognizing pedestrians in urban driving environments is essential, because avoiding collisions with pedestrians is among the highest priorities. Chapter 3 proposes a pedestrian recognition algorithm based on a Deep Neural Network architecture that learns appearance variation.
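As a rough illustration of an architecture that separates spatial feature learning from appearance-variation learning, the PyTorch sketch below applies a per-frame CNN to a 2D projection of a segmented LIDAR cluster and a recurrent layer across consecutive frames; the layer sizes, the GRU choice, and the input encoding are assumptions for illustration, not the network described in Chapter 3.

    import torch
    import torch.nn as nn

    class PedestrianNet(nn.Module):
        """Illustrative classifier: per-frame spatial features (CNN) followed by
        a recurrent layer that models how the appearance varies over time."""
        def __init__(self, feat_dim=128, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU(),
            )
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.cls = nn.Linear(hidden, 2)        # pedestrian vs. non-pedestrian

        def forward(self, x):                      # x: (batch, time, 1, H, W)
            b, t = x.shape[:2]
            feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
            _, h = self.rnn(feats)                 # h: (1, batch, hidden)
            return self.cls(h[-1])

    # Example: a batch of 4 tracks, 5 frames each, 32x32 depth-image projections.
    logits = PedestrianNet()(torch.randn(4, 5, 1, 32, 32))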

Another traffic participant that must be recognized with high priority is the vehicle. Because vehicle types with differing appearances, such as sedans, buses, and trucks, are all present on the road, vehicles must be detected with consistent performance regardless of type. Chapter 4 proposes an algorithm that exploits an appearance common to all vehicles to address this problem. To further improve performance, a monocular camera is additionally employed, and the information from both sensors is integrated within a Dempster-Shafer Theory framework.
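For the fusion step, a minimal Python illustration of Dempster's rule of combination over the frame {vehicle, not vehicle} is given below; the two-hypothesis frame and the example mass assignments are simplified assumptions, not the exact evidence model used in Chapter 4.

    def dempster_combine(m1, m2):
        """Combine two basic belief assignments over the focal sets
        'V' (vehicle), 'N' (not vehicle) and 'VN' (ignorance)."""
        conflict = m1['V'] * m2['N'] + m1['N'] * m2['V']
        k = 1.0 - conflict                        # normalization constant
        fused = {
            'V': (m1['V'] * m2['V'] + m1['V'] * m2['VN'] + m1['VN'] * m2['V']) / k,
            'N': (m1['N'] * m2['N'] + m1['N'] * m2['VN'] + m1['VN'] * m2['N']) / k,
        }
        fused['VN'] = 1.0 - fused['V'] - fused['N']
        return fused

    # Example: moderate LIDAR evidence fused with stronger camera evidence;
    # the combined belief in 'V' exceeds that of either sensor alone.
    lidar  = {'V': 0.6, 'N': 0.1, 'VN': 0.3}
    camera = {'V': 0.8, 'N': 0.1, 'VN': 0.1}
    fused = dempster_combine(lidar, camera)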
dc.description.tableofcontents:
Chapter 1 Introduction 1
1.1 Background and Motivations 1
1.2 Contributions and Outline of the Dissertation 3
1.2.1 Real-time and Accurate Segmentation of 3D Point Clouds based on Gaussian Process Regression 3
1.2.2 Pedestrian Recognition Based on Appearance Variation Learning 4
1.2.3 Vehicle Recognition using a Common Appearance Captured by a 3D LIDAR and a Monocular Camera 5
Chapter 2 Real-time and Accurate Segmentation of 3D Point Clouds based on Gaussian Process Regression 6
2.1 Introduction 6
2.2 Related Work 10
2.3 Framework overview 15
2.4 Clustering of Non-ground Points 16
2.4.1 Graph Construction 17
2.4.2 Clustering of Points on Vertical Surface 17
2.4.3 Cluster Extension 21
2.5 Accuracy Enhancement 24
2.5.1 Approach to Handling Over-segmentation 26
2.5.2 Handling Over-segmentation with GP Regression 27
2.5.3 Learning Hyperparameters 31
2.6 Experiments 32
2.6.1 Experiment Environment 32
2.6.2 Evaluation Metrics 33
2.6.3 Processing Time 36
2.6.4 Accuracy on Various Driving Environments 37
2.6.5 Impact on Tracking 46
2.7 Conclusion 48
Chapter 3 Pedestrian Recognition Based on Appearance Variation Learning 50
3.1 Introduction 50
3.2 Related Work 53
3.3 Appearance Variation Learning 56
3.3.1 Primal Input Data for the Proposed Architecture 57
3.3.2 Learning Spatial Features from Appearance 57
3.3.3 Learning Appearance Variation 59
3.3.4 Classification 61
3.3.5 Data Augmentation 61
3.3.6 Implementation Detail 61
3.4 Experiments 62
3.4.1 Experimental Environment 62
3.4.2 Experimental Results 65
3.5 Conclusions and Future Works 70
Chapter 4 Vehicle Recognition using a Common Appearance Captured by a 3D LIDAR and a Monocular Camera 72
4.1 Introduction 72
4.2 Related Work 75
4.3 Vehicle Recognition 77
4.3.1 Point Cloud Processing 78
4.3.2 Image Processing 80
4.3.3 Dempster-Shafer Theory (DST) for Information Fusion 82
4.4 Experiments 84
4.5 Conclusion 87
Chapter 5 Conclusion 89
Bibliography 91
Abstract in Korean (국문초록) 105
dc.format: application/pdf
dc.format.extent: 2092715 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: 서울대학교 대학원 (Graduate School of Seoul National University)
dc.subject: 3D LIDAR
dc.subject: Real-time Segmentation
dc.subject: Gaussian Process
dc.subject: Pedestrian Recognition
dc.subject: Deep Neural Network
dc.subject: Vehicle Recognition
dc.subject.ddc: 621
dc.title: Object Detection and Classification in 3D Point Cloud Data for Automated Driving
dc.title.alternative: 자율 주행을 위한 3D Point Cloud Data 기반 물체 탐지 및 분류 기법에 관한 연구
dc.type: Thesis
dc.contributor.AlternativeAuthor: 신명옥
dc.description.degree: Doctor
dc.citation.pages: 103
dc.contributor.affiliation: 공과대학 전기·컴퓨터공학부 (College of Engineering, Department of Electrical and Computer Engineering)
dc.date.awarded: 2017-02