Publications

Detailed Information

Development of an algorithm to automatically compress a CT image to the visually lossless threshold

Cited 0 times in Web of Science · Cited 0 times in Scopus
Authors

김길중

Advisor
강흥식
Major
Interdisciplinary Program in Radiation Applied Life Science, College of Medicine
Issue Date
2013-02
Publisher
Graduate School, Seoul National University
Keywords
CT image; compression; visual perception; visually lossless threshold
Description
Thesis (Ph.D.) -- Graduate School, Seoul National University: Interdisciplinary Program in Radiation Applied Life Science, February 2013. Advisor: 강흥식.
Abstract
Introduction: To develop a computerized algorithm to predict the visually lossless thresholds (VLTs) of CT images solely using the original images by exploiting the image features and Digital Imaging and Communications in Medicine (DICOM) header information for Joint Photographic Experts Group 2000 (JPEG2000) compression.
Methods: A total of 206 body CT images were obtained with five different scan protocols. Five radiologists independently determined the VLT of each image for JPEG2000 compression using the QUEST procedure.
The 206 images were divided randomly into two subsets: a training set (n = 103) and a testing set (n = 103). Using the training set, a multiple linear regression (MLR) model was constructed with the image features and DICOM header information as independent variables and the VLT determined as the median of the five radiologists' responses (VLTrad) as the dependent variable, after an optimal subset of independent variables had been selected by backward stepwise selection in a four-fold cross-validation scheme.
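As a rough illustration of the variable-selection step, the following sketch runs backward stepwise selection under four-fold cross-validation for an ordinary least-squares model. The data here are synthetic stand-ins (six hypothetical predictors, 103 samples to mirror the training-set size); the thesis used actual image features and DICOM header fields, and its exact selection criterion may differ:

```python
import numpy as np

def cv_mse(X, y, cols, k=4):
    """Mean squared CV error of a least-squares fit using the given columns."""
    n = len(y)
    idx = np.arange(n)
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        A = np.column_stack([np.ones(len(tr)), X[tr][:, cols]])
        beta, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        B = np.column_stack([np.ones(len(fold)), X[fold][:, cols]])
        errs.append(np.mean((B @ beta - y[fold]) ** 2))
    return float(np.mean(errs))

def backward_stepwise(X, y, k=4):
    """Drop one predictor at a time as long as CV error does not worsen."""
    cols = list(range(X.shape[1]))
    best = cv_mse(X, y, cols, k)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for c in list(cols):
            trial = [d for d in cols if d != c]
            err = cv_mse(X, y, trial, k)
            if err <= best:  # dropping c did not hurt: accept and restart
                best, cols, improved = err, trial, True
                break
    return cols, best

# Synthetic stand-in data: only the first two predictors drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(103, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=103)
cols, err = backward_stepwise(X, y)
print(sorted(cols), round(err, 3))
```

The informative predictors survive because removing either one inflates the cross-validated error by orders of magnitude, while pure-noise predictors can be dropped without penalty.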
The performance of the constructed model was evaluated on the testing set by measuring the absolute differences and the intra-class correlation (ICC) coefficient between VLTrad and the VLTs predicted by the model (VLTmodel). The model's performance was also compared with those of two image fidelity metrics, peak signal-to-noise ratio (PSNR) and the high-dynamic range visual difference predictor (HDRVDP). The times for computing the VLT with the MLR model, PSNR, and HDRVDP were compared using repeated-measures ANOVA with post-hoc analysis. P < 0.05 was considered to indicate a statistically significant difference.
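Of the two comparison metrics, PSNR has a simple closed form. A minimal sketch, assuming 12-bit CT pixel data (peak value 4095) and using synthetic arrays in place of real CT slices:

```python
import numpy as np

def psnr(ref, img, peak=4095.0):
    """Peak signal-to-noise ratio in dB; peak=4095 assumes 12-bit CT data."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic "reference" and "compressed" images (Gaussian noise stands in
# for compression error in this illustration).
rng = np.random.default_rng(1)
ref = rng.integers(0, 4096, size=(64, 64))
noisy = np.clip(ref + rng.normal(scale=10, size=ref.shape), 0, 4095)
print(round(psnr(ref, noisy), 1))
```

In a VLT search, a fidelity metric like this is evaluated at successive compression ratios, whereas the MLR model above predicts the threshold in one pass from the original image, which is where its speed advantage comes from.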
Results: The mean absolute differences from VLTrad were 0.58 (95% CI, 0.48, 0.67), 0.73 (0.61, 0.85), and 0.68 (0.58, 0.79) for the MLR model, PSNR, and HDRVDP, respectively, a significant difference (p < 0.01); post-hoc analysis located the significant difference between the MLR model and PSNR. The ICC coefficients of the MLR model, PSNR, and HDRVDP were 0.88 (95% CI, 0.81, 0.95), 0.85 (0.79, 0.91), and 0.84 (0.77, 0.91). The mean times for computing the VLT per image were 1.5 ± 0.1 s, 3.9 ± 0.3 s, and 68.2 ± 1.4 s for the MLR model, PSNR, and HDRVDP, respectively; the differences between them were significant (p < 0.01).
Conclusions: We proposed an MLR model that directly predicts the VLT of a given CT image from the original image alone, without performing compression. The proposed model showed performance superior or comparable to that of the image fidelity metrics while requiring less computation, making it a promising tool for adaptive compression of CT images.
Language
English
URI
https://hdl.handle.net/10371/121798
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
