S-Space College of Engineering/Graduate School, Dept. of Electrical and Computer Engineering, Theses (Master's Degree)
Fast Image Stitching for Video Stabilization using SIFT Features
- College of Engineering, Dept. of Electrical and Computer Engineering
- Issue Date: 2014-08
- Seoul National University Graduate School
- Video Stabilization; Image Stitching; SIFT; Image Processing
- Thesis (Master's) -- Seoul National University Graduate School: Dept. of Electrical and Computer Engineering, 2014. 8. Advisor: Jaeha Kim.
- In recent years, the use of hand-held cameras, portable cameras, firemen's head-mounted cameras, and robot cameras has increased significantly. However, the videos captured by these types of cameras are generally unstable, filled with unwanted shaky camera motion, so demand for digital video stabilization has grown. Several studies have addressed digital video stabilization using block-based motion compensation and feature-point-based motion compensation. However, those algorithms are computationally expensive and cannot handle large, unpredictable motions such as those produced by firemen's head-mounted cameras. In this paper, an improved video stabilization method based on image stitching is proposed. In addition, the computational complexity of SIFT (Scale-Invariant Feature Transform) is reduced, and the feature matching is refined to improve the accuracy of the stabilization.
Image stitching using SIFT features is performed to achieve the improved video stabilization. After SIFT feature points are extracted and matched between two frames of the unstable video, the mathematical model relating pixel coordinates in one frame to pixel coordinates in the other is obtained by estimating the homography matrix between them. All pixels of the next frame are then transformed into the coordinate system of the current frame using this homography matrix. The transformed frames from every iteration are stitched together to produce the stitched, stabilized frames.
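The homography step described above can be sketched with a plain NumPy implementation of the direct linear transform (DLT). In practice a pipeline like the thesis's would obtain correspondences from SIFT matching (e.g. via OpenCV) and use a robust estimator such as RANSAC; the function names below are illustrative, not taken from the thesis:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst with the DLT
    algorithm. src, dst: (N, 2) arrays of matched pixel coordinates, N >= 4."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the solution is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply homography H to (N, 2) points in homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Warping every pixel of the next frame with `warp_points` (or, more efficiently, a backward-mapping warp) aligns it to the current frame before stitching.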
To improve the SIFT feature point matching and reduce the classification error, the search area of the keypoints is restricted. After initial matching with a KNN matcher, matched keypoints whose displacement from one frame to the other is larger than a threshold value are removed from the match list.
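A minimal sketch of the displacement filter described above, assuming the initial matches are already available as coordinate pairs (e.g. extracted from OpenCV's `BFMatcher.knnMatch` results after a ratio test); the function name and threshold handling are illustrative:

```python
def filter_matches_by_displacement(matches, max_disp):
    """Keep only matched keypoint pairs whose frame-to-frame displacement
    is at most max_disp pixels.

    matches: list of ((x1, y1), (x2, y2)) coordinate pairs, where the first
    point is in the current frame and the second in the next frame."""
    kept = []
    for (x1, y1), (x2, y2) in matches:
        # Euclidean displacement of the keypoint between the two frames
        disp = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if disp <= max_disp:
            kept.append(((x1, y1), (x2, y2)))
    return kept
```

The threshold encodes the assumption that true inter-frame camera motion is bounded, so a match that jumps farther than `max_disp` pixels is more likely a mismatch than a real correspondence.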
To reduce the computational complexity of SIFT, the size of the descriptor vector is reduced from 128 to 24. A restriction is also imposed on the number of SIFT feature points to be extracted. In addition, a region-based SIFT feature point extraction method is proposed in the paper to reduce the effect of measurement error: keypoints are extracted separately in different regions of the frame and then merged into the full frame's keypoint set. This method ensures that the keypoints are well distributed over the image frame.
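The region-based extraction can be sketched as a grid split with a per-region cap on the number of keypoints, which also implements the restriction on total keypoint count. The detector is passed in (e.g. one created with `cv2.SIFT_create()`); the grid shape, cap, and function name are assumptions for illustration:

```python
import numpy as np

def grid_keypoints(detector, frame, rows=4, cols=4, per_cell=50):
    """Detect keypoints separately in a rows x cols grid of regions and merge
    them, keeping at most the per_cell strongest keypoints in each region.

    detector: any object with a detect(image, mask) method returning keypoints
    that expose .pt and .response (e.g. an OpenCV SIFT detector)."""
    h, w = frame.shape[:2]
    merged = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            kps = detector.detect(frame[y0:y1, x0:x1], None)
            # keep only the strongest responses within this region
            kps = sorted(kps, key=lambda k: k.response, reverse=True)[:per_cell]
            for kp in kps:
                # shift region-local coordinates back to full-frame coordinates
                kp.pt = (kp.pt[0] + x0, kp.pt[1] + y0)
            merged.extend(kps)
    return merged
```

Because every region contributes at most `per_cell` keypoints, a highly textured corner of the frame cannot crowd out the rest, which is what keeps the merged set well distributed.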