
Unsupervised Bayesian Online Learning for Multi-Agent Exploration in an Unknown Environment : 비지도식 베이지안 온라인 학습을 이용한 미지 환경에서의 다중 로봇 탐사 기법

DC Field: Value

dc.contributor.advisor: 김현진
dc.contributor.author: 임진홍
dc.date.accessioned: 2017-07-14T03:45:11Z
dc.date.available: 2017-07-14T03:45:11Z
dc.date.issued: 2017-02
dc.identifier.other: 000000141891
dc.identifier.uri: https://hdl.handle.net/10371/123949
dc.description: Thesis (Master's) -- Graduate School of Seoul National University: Department of Mechanical and Aerospace Engineering, February 2017. Advisor: 김현진.
dc.description.abstract: Exploring an unknown environment with multiple robots is an enabling technology for many useful applications. This thesis investigates decentralized motion planning for multi-agent exploration of a field with an unknown distribution, such as received signal strength (RSS) or terrain elevation. We present both a supervised method, applied to RSS distributions, and an unsupervised method, applied to terrain data. The environment is modelled with a Gaussian process using Bayesian online learning, sharing the information obtained from each robot's measurement history. The mean function of the Gaussian process is then used to infer the locations of the multiple sources or peaks of the distribution. The inferred source or peak locations are modelled as a probability distribution using a Gaussian mixture probability hypothesis density (GM-PHD) filter. This modelling enables a nonparametric approximation of the mutual information between peak locations and future robot positions. We combine the variance function of the Gaussian process with the mutual information to design an informative and noise-robust planning algorithm for multiple robots. Finally, the proposed algorithm is extended to the unsupervised setting using a Dirichlet process mixture of Gaussian processes. The experimental performance of both the supervised and unsupervised methods is analysed by comparison with a variance-based planning algorithm. The results show that the proposed algorithm learns the unknown environmental distribution more accurately and more quickly.
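The abstract's central modelling step, fitting a Gaussian process to the robots' pooled measurements and reading off its mean and variance functions, can be illustrated with a minimal sketch. This is a generic NumPy illustration under an assumed squared-exponential kernel; the function names and hyperparameter values are hypothetical and not taken from the thesis:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel between two sets of 2-D locations."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_train, y_train, X_query, noise_var=1e-2, **kern):
    """Posterior mean and variance of a GP at the query locations.

    Standard Cholesky-based GP regression: the mean is used to infer
    the field's peaks, the variance to quantify remaining uncertainty.
    """
    K = rbf_kernel(X_train, X_train, **kern) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_query, **kern)
    Kss = rbf_kernel(X_query, X_query, **kern)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                      # posterior mean at queries
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - (v * v).sum(axis=0)  # posterior variance at queries
    return mean, var
```

The variance returned here is what a purely variance-based planner (the baseline the abstract compares against) would greedily maximize when choosing the next measurement location; the thesis instead combines it with a mutual-information term computed from GM-PHD peak estimates.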
dc.description.tableofcontents:
1 Introduction 1
1.1 Literature review 3
1.2 Thesis contribution 4
1.3 Thesis outline 4
2 Gaussian process model 6
2.1 Gaussian process 6
2.2 Hyperparameter optimization 8
3 Parametrization of signal source location 9
3.1 Conventional GM-PHD filter 9
3.2 Spatial prior on the birth process 11
4 Information-based multi-agent control 12
4.1 Nonparametric computation of mutual information 12
4.2 Concatenated objective-based control policy 14
5 Unsupervised implementation 17
5.1 Dirichlet process mixture of Gaussian processes 17
5.2 Parameter optimization with adaptive rejection sampling 19
6 Simulation and experiment 22
6.1 Experimental settings and results for supervised method 22
6.1.1 Experimental settings 22
6.1.2 RSS distribution learning experiment result 23
6.2 Terrain mapping simulation settings and results for unsupervised method 29
6.2.1 Simulation settings 29
6.2.2 Terrain mapping simulation result 29
6.3 RSS distribution mapping experimental settings and results for unsupervised method 36
6.3.1 Experimental settings 36
6.3.2 RSS distribution mapping experimental result 36
7 Conclusion 43
References 44
Abstract in Korean (국문초록) 47
dc.format: application/pdf
dc.format.extent: 4840984 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: Graduate School of Seoul National University
dc.subject: Decentralized multi-agent
dc.subject: active sensing
dc.subject: Bayesian nonparametric methods
dc.subject: unsupervised learning
dc.subject: Dirichlet process
dc.subject: online Gaussian process
dc.subject: mutual information
dc.subject: GM-PHD filter
dc.subject.ddc: 621
dc.title: Unsupervised Bayesian Online Learning for Multi-Agent Exploration in an Unknown Environment
dc.title.alternative: 비지도식 베이지안 온라인 학습을 이용한 미지 환경에서의 다중 로봇 탐사 기법
dc.type: Thesis
dc.contributor.AlternativeAuthor: Lim, Jinhong
dc.description.degree: Master
dc.citation.pages: 46
dc.contributor.affiliation: College of Engineering, Department of Mechanical and Aerospace Engineering
dc.date.awarded: 2017-02