
Deep learning based image reconstruction algorithm for limited-angle translational computed tomography


Authors: Jiaxi Wang aff001;  Jun Liang aff003;  Jingye Cheng aff004;  Yumeng Guo aff005;  Li Zeng aff001
Authors place of work: Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing University, Chongqing, China aff001;  Engineering Research Center of Industrial Computed Tomography Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing, China aff002;  College of Computer Science, Civil Aviation Flight University of China, Guanghan Sichuan, China aff003;  College of Mathematics and Statistics, Chongqing University, Chongqing, China aff004;  College of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing, China aff005
Published in the journal: PLoS ONE 15(1)
Category: Research Article
doi: https://doi.org/10.1371/journal.pone.0226963

Summary

As a low-end computed tomography (CT) system, translational CT (TCT) is in urgent demand in developing countries. In some circumstances, in order to reduce the scan time, decrease the X-ray radiation, scan long objects, or avoid the illumination inconsistency of the detector under large-angle scanning, we use the limited-angle TCT scanning mode, which scans an object within a limited angular range. However, this scanning mode introduces additional noise and limited-angle artifacts that seriously degrade the imaging quality and affect diagnostic accuracy. To reconstruct a high-quality image under the limited-angle TCT scanning mode, we develop a limited-angle TCT image reconstruction algorithm based on a U-net convolutional neural network (CNN). First, we apply the simultaneous algebraic reconstruction technique (SART) to the limited-angle TCT projection data; then we feed the image reconstructed by the SART method into a well-trained CNN that suppresses the artifacts and preserves the structures, yielding a better reconstructed image. Simulation experiments demonstrate the performance of the developed algorithm for the limited-angle TCT scanning mode. Compared with some state-of-the-art methods, the developed algorithm more effectively suppresses the noise and limited-angle artifacts while preserving the image structures.

Keywords:

Algorithms – Imaging techniques – Computed axial tomography – Data acquisition – Abdomen – deep learning – X-ray radiography – Image processing

Introduction

Translational computed tomography (TCT) is a new low-end CT system created for developing countries [1]; it can image the interior of a scanned object nondestructively from the projection data collected by the detector. It uses translation to realize linear scanning: the X-ray source and the flat panel detector are placed face to face with the object between them and are moved in opposite directions during the scanning process. When the projection data collected by the TCT are complete, filtered back projection (FBP)-type algorithms can reconstruct high-quality images [2, 3]. However, in some practical TCT applications, the acquired projection data are usually incomplete, whether to reduce the scan time, to decrease the X-ray radiation that may pose potential risks to patients, to scan long objects within a limited angular range, or to avoid the illumination inconsistency of the detector under large-angle scanning in the translational scheme.

In this circumstance, artifacts appear in images reconstructed by FBP-type methods. Algebraic reconstruction algorithms, such as the simultaneous algebraic reconstruction technique (SART) [4] and the algebraic reconstruction technique (ART) [5], have a better denoising effect than the FBP method when the projection data are complete. However, if the available projection data are incomplete, these methods cannot obtain satisfactory reconstructed images.

In recent years, researchers have become increasingly interested in regularized iterative reconstruction algorithms for incomplete projection data, as these algorithms can add prior knowledge to obtain a better reconstructed image and are not affected by the geometrical structure of the scanning mode. Hence, more and more researchers are keen to construct an appropriate transformation that can utilize prior information about the reconstructed object, and various regularized iterative reconstruction algorithms have been proposed [6–10]. As one of these algorithms, the total variation (TV)-based minimization method [11] can suppress streak artifacts and noise when the projection data are acquired in a few-view scanning mode. However, limited-angle artifacts will appear on the edges of the object in the reconstructed image when the projection data are acquired from limited-angle CT. In addition, a staircase effect or blocky artifacts will appear in the reconstructed image because the method assumes the reconstructed image is piecewise constant. To address this problem, Lauzier [12] proposed an image reconstruction algorithm based on a prior image obtained from a previous scan. Then, Chen [13] utilized prior knowledge of the known actual scanning range to propose an anisotropic total variation (ATV) method for improving the reconstructed image quality from limited-angle projection data. Wang [9] proposed a limited-angle CT image reconstruction algorithm based on the wavelet frame, whose reconstructed images show advantages in suppressing noise and slope artifacts. Recently, Wang [14] incorporated the reweighting technique into the ATV method and proposed an iteratively reweighted ATV method for the limited-angle CT reconstruction problem. Yu [15] found that a regularization term based on the L0-norm of the image gradient better preserves the edges of the image, and proposed an edge-preserving image reconstruction method for limited-angle CT. In summary, these regularized iterative reconstruction algorithms can reduce limited-angle artifacts and noise to some extent. However, it is difficult to choose appropriate regularization terms and to tune the regularization parameters, and these choices play a decisive role in the quality of the reconstructed images.

Nowadays, deep learning [16] has emerged as a promising approach for image classification [17] and segmentation [18]. In recent years, deep learning has also been applied to the CT image reconstruction problem [19, 20]. Pelt et al. [21] developed a convolutional neural network (CNN) that can be treated as a weighted combination of the FBP method and some learned filters; the experimental results verify that it outperforms the FBP method alone for few-view CT. In [22], Boublil utilized a CNN to integrate multiple reconstruction results into a better reconstructed image compared with other iterative reconstruction algorithms. Chen et al. [23] utilized a CNN for post-processing of a single reconstruction result from low-dose projection data, and experimental results show advantages in structure preservation and artifact reduction. Yang et al. [24] proposed a loss measure called perceptual similarity, which prevents the mean squared error from over-smoothing the image. Jin et al. [25] proposed a deep CNN called FBPConvNet and demonstrated its performance in sparse-view reconstruction for parallel-beam X-ray CT. First, they apply the FBP method to the sparse-view projection data. Second, they feed the image reconstructed by the FBP method into a CNN trained to bring it as close as possible to the label image. Finally, the end of the CNN provides the reconstructed image.

Inspired by the above researches, we combine the algebraic reconstruction algorithm with deep learning to improve the quality of the reconstructed image for the limited-angle TCT scanning mode. This paper mainly has the following contributions:

  • To reduce scan time, decrease X-ray radiation or scan some long objects, and furthermore to avoid the illumination inconsistency of the detector under large-angle scanning in the translational scheme, we use a limited-angle TCT scanning mode in which the source and detector are manually rotated by 30° per unit time.

  • To deal with the limited-angle TCT reconstruction problem, we develop a deep learning based image reconstruction algorithm, which does not need to choose the regularization terms and adjust the regularization parameters.

  • Some simulation experiments show that the proposed algorithm has advantage in suppressing noise and limited-angle artifacts while preserving image structures.

The rest of this paper is organized as follows. In Section II, we derive the limited-angle TCT scanning mode. In Section III, we introduce the developed image reconstruction algorithm. In Section IV, we describe the experimental design and analyse the experimental results. In Section V, we give some discussion. In Section VI, we conclude this paper.

Limited-angle TCT scanning mode

CT plays a key role in diagnostic imaging and intervention [26]. Generally speaking, CT image reconstruction amounts to solving the following linear system:

Ax = b,

where the CT imaging matrix A has M × N elements, and x = (x1, x2, …, xN)T and b = (b1, b2, …, bM)T represent the discrete attenuation coefficients of the reconstructed image and the projection data collected from the detector, respectively. CT image reconstruction is to obtain the unknown x from the known imaging matrix A and the available projection data b.
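To make the linear model concrete, the following minimal sketch builds a toy 4-pixel, 4-ray system (all numbers hypothetical, not from the TCT geometry) and recovers x from A and b with a least-squares solve; as in limited-angle CT, the toy matrix is rank-deficient.

```python
import numpy as np

# Toy instance of the linear CT model b = A x (all values hypothetical):
# a 4-pixel image probed by 4 rays.
A = np.array([[1.0, 1.0, 0.0, 0.0],   # ray through pixels 0 and 1
              [0.0, 0.0, 1.0, 1.0],   # ray through pixels 2 and 3
              [1.0, 0.0, 1.0, 0.0],   # ray through pixels 0 and 2
              [0.0, 1.0, 0.0, 1.0]])  # ray through pixels 1 and 3
x_true = np.array([1.0, 2.0, 3.0, 4.0])   # discrete attenuation coefficients
b = A @ x_true                            # simulated projection data

# Recover x from A and b. A has rank 3 (rank-deficient, as in limited-angle
# CT), so lstsq returns the minimum-norm solution consistent with b.
x_rec, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x_rec, 3))
```

Here x_true happens to be orthogonal to the null space of A, so the minimum-norm solution coincides with it; with real limited-angle data this is not guaranteed, which is why regularization or learned priors are needed.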

Modern CT scanners, which use slip rings, wide-array detectors and multiple sources, have very fast rotation speeds and are expensive. They are typically used by big-city hospitals in developed countries and are rarely found in the rural areas of developing countries because of their high cost; therefore, a low-end CT system is required. Liu et al. [1] proposed a translation-based data acquisition mode called the translational CT system. In this system, the data acquisition scheme is based on opposite parallel linear movements. As illustrated in Fig 1, the X-ray source and the flat panel detector are positioned face to face with an object between them. During the scanning process, the X-ray source and the flat panel detector translate in opposite directions while the object remains still. In other words, the gantry with an expensive slip ring is replaced by this translational technique.

Fig. 1. Translation based CT.

To exactly reconstruct an object, the classic prerequisite is complete projection data in the fan-beam geometry; i.e., the projection data should be available over an angular range of (180° + fan angle) [27]. To satisfy this prerequisite, the TCT data acquisition scheme has to perform several rotations. For example, the source-detector pair can be manually rotated two times (2T) or three times (3T). The 1T, 2T and 3T schemes are shown in Fig 2. The scanning process is as follows: First, we perform translational scanning while keeping the object still. Second, we stop the current scan, manually rotate the X-ray source and the detector to the next specified location, and continue to scan the object. Finally, these operations are repeated until the requirements are met.

Fig. 2. Different translational modes.
The X-ray source is translated along the line where the red points are located, and the corresponding flat panel detector is translated oppositely along the green line.

The aforementioned data acquisition schemes make the flat panel detector travel a long distance, and an X-ray source position far from the middle position S0 (Fig 1) must be slanted by an angle to acquire projection data because of the translational scanning scheme. However, the farther the position is from S0, the larger the slant angle must be and the worse the illumination consistency of the detector becomes. To avoid this inconsistency under large-angle scanning in the translational mode, we use a smaller-angle scanning mode that is manually rotated by 30° per unit time. Moreover, in some industrial imaging applications, the X-ray source and the detector cannot be rotated many times because of restrictions of the scanning scenario, such as scanning aircraft wings or in-service pipelines [28, 29]. In medical imaging, to reduce the scanning time and decrease the X-ray dose, which may pose potential risks to patients, patients are scanned within a limited angular range. These scenarios lead to the limited-angle TCT reconstruction problem. As illustrated in Fig 3, when the scanning angle is 30° per unit time and the scanning range is [0°, 120°], we have to rotate the detector and the X-ray source four times. This scheme requires several rotations; however, the detector response has better illumination consistency than in the 2T and 3T scanning schemes. Since the projection data are acquired within a limited angular range, limited-angle artifacts will be present in the reconstructed image. Next, we focus on how to solve the limited-angle TCT reconstruction problem.
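The bookkeeping of this scanning mode can be sketched as follows; `tct_segments` is a hypothetical helper, and the convention that one manual rotation precedes each 30° segment is our assumption for matching the rotation counts quoted above.

```python
def tct_segments(start_deg, end_deg, step_deg=30):
    """Hypothetical helper: list the starting angle of each translational
    segment covering [start_deg, end_deg] in step_deg slabs, assuming one
    manual rotation of the source/detector pair before each segment."""
    segs = list(range(start_deg, end_deg, step_deg))
    return segs, len(segs)

segs, n_rot = tct_segments(0, 120)
print(segs, n_rot)
```

Under this convention the ranges [0°, 90°], [0°, 120°] and [0°, 150°] require three, four and five rotations, consistent with the counts used in the experiments below.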

Fig. 3. Limited-angle TCT scanning mode.

Method

In the limited-angle TCT scanning mode, image reconstruction from the limited-angle TCT projection data is an ill-posed inverse problem. Regularized iterative reconstruction algorithms, which can incorporate prior knowledge of the reconstructed image, are usually utilized to deal with this problem. However, with these algorithms it is difficult to choose appropriate regularization terms and to tune the regularization parameters. With the development of deep learning techniques, Jin et al. [25] proposed a post-processing based image reconstruction method called FBPConvNet, which uses the FBP method to obtain the initial image for a well-trained U-net, and it demonstrated compelling results for sparse-view reconstruction in parallel-beam X-ray CT.

Inspired by their work, we use the SART method to obtain the initial image for a well-trained U-net to perform limited-angle image reconstruction for TCT. We do so because the SART method outperforms the FBP method in the limited-angle TCT scanning mode (as shown in Fig 4), so the quality of the training set for the proposed algorithm is better than that of the FBPConvNet method. Moreover, in deep learning, the quality of the training set plays a decisive role in the final result. Hence, the proposed algorithm, called SARTConvNet, is more effective than the FBPConvNet method for limited-angle TCT. In addition, if we were to use the image reconstructed by the TV method as the input image of the CNN, we would need to manually tune the regularization parameters of the TV method, which is difficult because different images have different optimal parameters.

Fig. 4. Reconstruction results for the scanning range [0°, 120°].
The first row is the images reconstructed by the FBP method, and the second row is the images reconstructed by the SART method.

The steps of the SARTConvNet method are as follows. First, we apply the SART method to the limited-angle TCT projection data obtained from the simulated experiments. Then, we feed the image reconstructed by the SART method into a CNN trained to map it as close as possible to the label image. Lastly, in the final layer of the CNN, a convolutional layer makes the CNN output a single-channel image, which is the final reconstruction of SARTConvNet. Next, we introduce the details of the SARTConvNet method.
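The steps above can be sketched end to end as follows; `sart` and `cnn` are placeholder stand-ins (the real SART solver and trained U-net are described in the next subsections), so this only illustrates the data flow.

```python
import numpy as np

# Placeholder stand-ins: `sart` just reshapes the data and `cnn` is the
# identity; a real pipeline would run the SART solver and the trained U-net.

def sart(projections, n_iter=2500):
    """Stand-in for SART reconstruction from limited-angle projections."""
    return projections.reshape(16, 16)

def cnn(image):
    """Stand-in for the trained U-net that suppresses limited-angle
    artifacts; here it simply passes the image through."""
    return image

limited_angle_proj = np.random.rand(256)  # simulated limited-angle TCT data
initial = sart(limited_angle_proj)        # step 1: SART reconstruction
final = cnn(initial)                      # step 2: CNN post-processing
print(final.shape)                        # single-channel 16 x 16 image
```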

Simultaneous algebraic reconstruction technique

In the network training stage, the CNN is trained with a set of pairs (Tq, Lq), where Tq is the image reconstructed by the SART method from limited-angle TCT projection data and Lq is the corresponding label image. The update formula of the SART method [4] is:

f_j^(n+1) = f_j^(n) + (β / a_{+j}) ∑_{i=1}^{M} a_{ij} (b_i − A_i f^(n)) / a_{i+},   j = 1, …, N,

where n is the iteration number and β is the relaxation factor (we choose β = 1 in this work), a_{i+} ≡ ∑_{j=1}^{N} a_{ij} ≠ 0 for i = 1, …, M, and a_{+j} ≡ ∑_{i=1}^{M} a_{ij} ≠ 0 for j = 1, …, N. Here b_i − A_i f^(n) is the difference between the actual projection data and the simulated projection data. As the iterations proceed, b_i − A_i f^(n) → 0 and f^(n) → f*, and f* is the label image.
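A minimal sketch of this update, applied to a tiny consistent system, is shown below; the matrix and vectors are illustrative, not a TCT system matrix.

```python
import numpy as np

def sart_step(A, b, f, beta=1.0):
    """One SART update: f_j += (beta / a_{+j}) * sum_i a_{ij}(b_i - A_i f)/a_{i+}."""
    a_row = A.sum(axis=1)           # a_{i+}, assumed nonzero
    a_col = A.sum(axis=0)           # a_{+j}, assumed nonzero
    residual = (b - A @ f) / a_row  # (b_i - A_i f^(n)) / a_{i+}
    return f + beta * (A.T @ residual) / a_col

# Tiny consistent system (illustrative, not a TCT matrix): the iterates
# converge to the solution of A f = b.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
f_true = np.array([2.0, 3.0])
b = A @ f_true
f = np.zeros(2)
for _ in range(2000):
    f = sart_step(A, b, f)
print(np.round(f, 3))  # ≈ [2. 3.]
```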

Deep convolutional neural network

As shown in Fig 5, the CNN we use in this paper is based on the U-net, which was first applied to image segmentation [30]. It is composed of a downhill path and an uphill path. The downhill path consists of numerous 3 × 3 zero-padded convolutions, rectified linear units and 2 × 2 max pooling operations. After each max pooling operation, which is used for down-sampling, we double the number of feature channels of the convolution layer to obtain more feature images, which increases the feature expression ability of the network [31]. The uphill path consists of numerous 2 × 2 up-convolutions, batch normalizations and rectified linear units. Skip connections [32, 33] and the concatenation technique are employed to compensate for the loss of useful information in each convolution and max pooling. In the final layer of the CNN, a convolutional layer makes the CNN output a single-channel image, which is the final reconstructed image.
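As a quick sanity check of the downhill-path bookkeeping described above, the snippet below tracks the feature-map size and channel count through four pooling stages; the input size of 256 matches the images used later, while the starting channel count (64) and the number of levels are assumptions, not taken from the paper.

```python
# Bookkeeping for the downhill path: zero-padded 3x3 convolutions keep the
# spatial size, each 2x2 max pooling halves it, and the channel count
# doubles after every pooling. The starting width of 64 channels and the
# four pooling stages are assumptions, not taken from the paper.
size, channels = 256, 64
for level in range(4):
    print(f"level {level}: {size}x{size}, {channels} channels")
    size //= 2      # 2x2 max pooling halves height and width
    channels *= 2   # feature channels double after pooling
print(f"bottleneck: {size}x{size}, {channels} channels")
```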

Fig. 5. Architecture of SARTConvNet.

Experimental process and results

In this section, we use simulation experiments to test the feasibility and effectiveness of the proposed algorithm for the limited-angle TCT scanning mode. The computer used in the experiments is configured as follows: the CPU is an Intel(R) Core(TM) i5-6500 at 3.20 GHz, and the GPU is an NVIDIA GTX 1080 with 8 GB of memory. We use the MatConvNet deep learning framework [34] with Matlab R2016b. We decay the learning rate logarithmically from 0.01 to 0.001, and set the batch size, patch size, momentum, number of epochs and gradient clipping value to 1, 256, 0.99, 151 and 10−2, respectively. We use the GPU for training and evaluating the CNN, and Table 1 shows the geometrical scanning parameters of the simulated TCT system. Moreover, to simulate limited-angle TCT with a scanning angle of 30° per unit time, we rotate the detector and the X-ray source three, four and five times; in other words, three scanning ranges ([0°, 90°], [0°, 120°] and [0°, 150°]) are used to validate the algorithm's performance for limited-angle TCT in this work.
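For instance, the logarithmic learning-rate decay from 0.01 to 0.001 over 151 epochs can be generated as follows (a sketch of one plausible schedule; the paper does not specify the exact decay rule):

```python
import numpy as np

# One learning rate per epoch, decayed logarithmically from 0.01 to 0.001
# over the 151 epochs quoted above.
epochs = 151
lr = np.logspace(np.log10(0.01), np.log10(0.001), num=epochs)
print(lr[0], lr[-1], len(lr))
```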

Tab. 1. Geometrical scanning parameters of simulated TCT system.

In this work, we choose the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) to quantitatively evaluate the proposed algorithm. PSNR estimates the difference between two images and is defined as:

PSNR = 10 log10( max(y)^2 / ( (1/N) ∑_{i,j} (x_{i,j} − y_{i,j})^2 ) ),

where x is the reconstructed image, y is the label image, x_{i,j} denotes the pixel value at position (i, j), and N denotes the total number of pixels in the image. SSIM, which measures the structural similarity between the reconstructed image and the label image, is defined as:

SSIM = ( (2 x̄ ȳ + C1)(2 σ_{xy} + C2) ) / ( (x̄² + ȳ² + C1)(σ_x² + σ_y² + C2) ),

where x̄ and ȳ denote the mean values of x and y, respectively, σ_x and σ_y represent the standard deviations, and σ_{xy} is the covariance. The constants C1 and C2 are set as in [35]. A good reconstructed image should give high PSNR and SSIM values.
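A minimal sketch of both metrics is given below; the peak value used in PSNR and the global (single-window) form of SSIM are simplifying assumptions, since the standard SSIM [35] is averaged over local windows.

```python
import numpy as np

def psnr(x, y):
    """PSNR between reconstruction x and label y; the peak is taken as the
    label's maximum value (an assumption; some definitions fix it at 255)."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(y.max() ** 2 / mse)

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM over the whole image (the standard index [35]
    averages this quantity over local windows)."""
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)
            / ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)))

rng = np.random.default_rng(0)
y = rng.random((64, 64)) * 255             # label image
x = y + rng.standard_normal((64, 64))      # reconstruction with small error
print(round(psnr(x, y), 1), round(ssim_global(y, y), 3))
```

An identical pair gives SSIM = 1, and a reconstruction with unit-variance error against a peak near 255 gives a PSNR in the high-40 dB range.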

To evaluate the performance of the proposed algorithm, we choose three algorithms for comparison: (1) the L0 method, an edge-preserving image reconstruction method for limited-angle CT [15]; (2) the ATV method, which utilizes prior knowledge of the known actual scanning range to improve the reconstructed image quality from limited-angle projection data [36]; and (3) the FBPConvNet method, an outstanding post-processing based deep reconstruction method.

Data preparation and experimental design

It would be ideal to have real TCT reconstructed images. However, we are in the initial research phase of TCT, and there are not enough real projection data to train the network. In addition, most CT scanners cannot provide raw projection data, so we can only obtain real CT images. We therefore use these real CT images to simulate limited-angle TCT projection data to demonstrate the performance of these algorithms. Moreover, we add Gaussian noise to the simulated projection data to evaluate the robustness of the algorithms.

The dataset we use is obtained from the TCIA Collections [37]. It contains 500 real full-angle CT images from many patients, with DICOM as the primary file format. Of these, 450 images are used to train the CNN and 50 to test its performance. The images are of the chest and abdomen, and their resolution is 256 × 256.

In this paper, the main steps of the experiments are as follows: First, the real full-angle CT images are taken as the label images. Second, these label images are used to generate simulated projection data for limited-angle TCT. Finally, the four algorithms are applied to obtain the reconstructed images.

Network training and parameter selection

The CNN parts of the SARTConvNet method and the FBPConvNet method use the same training strategy as in [25]. The training data are pairs of images, each consisting of a reconstruction by the SART method or the FBP method and the corresponding label image. For the TCIA dataset, training the CNN for 151 iterations (epochs) takes approximately 6 h.

For a fair comparison, the parameters of the competing methods are optimized to obtain the best results in terms of the evaluation indexes. We do this because such parameters are usually obtained by trial and error, and how to choose them is still an open question. In this paper, the number of SART iterations is 2500, the number of TV iterations is 15, and the alpha value of TV is 0.1. The parameters of the L0 method follow reference [15], and the parameters of the ATV method follow reference [36].

Experimental results

In this work, we choose two representative images, from the abdominal and chest regions, among all of the test images to assess the performance of the four methods. Figs 6 and 7 show the reconstruction results for these two images. The image in the first column is the reference image. The subsequent columns are the results reconstructed using the L0 method, ATV method, FBPConvNet method and our algorithm. The rows from top to bottom correspond to the scan ranges [0°, 90°], [0°, 120°] and [0°, 150°], respectively. The red arrows point to some obvious artifacts, which are enlarged in Figs 8 and 9.

Fig. 6. The reconstructed results of the abdomen image.
The image in the first column is the reference image. The subsequent columns are the results reconstructed using the L0 method, ATV method, FBPConvNet method and our algorithm. The rows from top to bottom correspond to the scan ranges [0°, 90°], [0°, 120°] and [0°, 150°], respectively. The red arrows indicate some obvious artifacts, and the display window is [800, 1200] HU.
Fig. 7. The reconstructed results of the chest image.
The image in the first column is the reference image. The subsequent columns are the results reconstructed using the L0 method, ATV method, FBPConvNet method and our algorithm. The rows from top to bottom correspond to the scan ranges [0°, 90°], [0°, 120°] and [0°, 150°], respectively. The red arrows indicate some obvious artifacts, and the display window is [800, 1200] HU.
Fig. 8. The zoom-in view of the ROIs for Fig 6.
The image on the first column is the ROI of the reference image. The subsequent columns are the ROIs of the reconstructed image for L0 method, ATV method, FBPConvNet method and our algorithm.
Fig. 9. The zoom-in view of the ROIs for Fig 7.
The images on the first column are the ROIs of the reference image. The subsequent columns are the ROIs of the reconstructed image for L0 method, ATV method, FBPConvNet method and our algorithm.

As seen from Figs 6–9, as the scan range increases, the quality of the reconstructed images improves to different degrees. The L0 method better preserves edges and structures; nevertheless, many limited-angle artifacts appear in the reconstructed image because of the missing projection data. The ATV method better reduces the limited-angle artifacts; however, it produces a blocky effect and smooths some important small structures, as it assumes that the image is piecewise constant. The images reconstructed by the FBPConvNet method are better than those of the L0 and ATV methods, but some important details and structures are smoothed. Our method exhibits the best performance in terms of preserving continuous structures (such as organ edges), suppressing limited-angle artifacts, and retaining inherent details (see the red arrows, which indicate the regions with obvious differences).

Quantitative results for the different algorithms and images are listed in Tables 2 and 3. As seen from these tables, our algorithm achieves the best results on all indexes for the three scanning ranges. Moreover, the experiments show that the larger the scanning range, the better the image quality.

Tab. 2. Quantitative results associated with different algorithms for the abdomen image from different angle projection data.
Tab. 3. Quantitative results associated with different algorithms for the chest image from different angle projection data.

Next, the robustness of the four algorithms is tested. We add Gaussian noise (m, σ2) to the projection data [38], with the mean m set to zero and the variance σ2 = 10. Figs 10 and 11 show the results of the noise-added experiment, and the corresponding region of interest (ROI) results are shown in Figs 12 and 13. Tables 4 and 5 give the quantitative results of the different algorithms for this experiment. It can be observed that the images reconstructed by our algorithm are much better than the results of the L0, ATV and FBPConvNet methods. Compared with the other three methods, our algorithm better reduces the limited-angle artifacts, suppresses the noise and preserves the continuous structures.
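The noise model can be sketched as follows; the sinogram contents and size are hypothetical, and only the noise parameters (m = 0, σ2 = 10) follow the text.

```python
import numpy as np

# Add Gaussian noise with mean m = 0 and variance sigma^2 = 10 to a
# hypothetical clean sinogram (contents and shape are illustrative).
rng = np.random.default_rng(0)
proj = np.full((360, 512), 100.0)
noisy = proj + rng.normal(loc=0.0, scale=np.sqrt(10.0), size=proj.shape)
print(round(noisy.mean(), 1), round((noisy - proj).var(), 1))  # ≈ 100.0 10.0
```

Note that `scale` is the standard deviation, so a variance of 10 requires scale = sqrt(10).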

Fig. 10. The reconstructed results of the abdomen image from the noise-add experiment.
The image in the first column is the reference image. The subsequent columns are the results reconstructed using the L0 method, ATV method, FBPConvNet method and our algorithm. The rows from top to bottom correspond to the scan ranges [0°, 90°], [0°, 120°] and [0°, 150°], respectively. The red arrows indicate some obvious artifacts, and the display window is [800, 1200] HU.
Fig. 11. The reconstructed results of the chest image from the noise-add experiment.
The image in the first column is the reference image. The subsequent columns are the results reconstructed using the L0 method, ATV method, FBPConvNet method and our algorithm. The rows from top to bottom correspond to the scan ranges [0°, 90°], [0°, 120°] and [0°, 150°], respectively. The red arrows indicate some obvious artifacts, and the display window is [800, 1200] HU.
Fig. 12. The zoom-in view of the ROIs for Fig 10.
The image on the first column is the ROI of the reference image. The subsequent columns are the ROIs of the reconstructed image for L0 method, ATV method, FBPConvNet method and our algorithm.
Fig. 13. The zoom-in view of the ROIs for Fig 11.
The image on the first column is ROI of the reference image. The subsequent columns are the ROIs of the reconstructed image for L0 method, ATV method, FBPConvNet method and our algorithm.
Tab. 4. Quantitative results associated with different algorithms for the abdomen image from different angle noise-add projection data.
Tab. 5. Quantitative results associated with different algorithms for the chest image from different angle noise-add projection data.

Discussion

Training loss

The training process of a CNN can suffer from the well-known over-fitting problem; the U-net used here alleviates it with a skip connection added between the input and the output. We use the results of the abdomen-image experiment for the scan range [0°, 150°] as an example to show how the loss function value changes with epochs for both the training dataset and the testing dataset (Fig 14). As shown in Fig 14, the loss function value decreases as the number of epochs increases and finally reaches a steady state, which indicates that over-fitting is kept to a minimum. In addition, the loss curve for the testing dataset oscillates at the beginning but levels off in the later stages.

Fig. 14. Loss function value versus epochs during CNN training, for both the training dataset and the testing dataset.

Execution time

We report the execution time of the different algorithms for the abdomen-image experiment with the scan range [0°, 150°]. As shown in Table 6, the speed advantage of deep learning is lost because we use the image reconstructed by the SART method as the input of the CNN. However, our method has two other advantages: it does not require choosing a regularization term or tuning regularization parameters, and it produces better results than some state-of-the-art regularized iterative reconstruction methods. In addition, the SART method, which is among the faster iterative reconstruction algorithms, can be further accelerated by computer hardware and CUDA techniques.

Tab. 6. Execution time with different algorithms.

Conclusions and perspectives

As a low-end CT system, TCT is in urgent demand in developing countries. To reduce scan time, decrease X-ray radiation or scan some long objects, and furthermore to avoid the illumination inconsistency of the detector under large-angle scanning in the translational scheme, we use a limited-angle TCT scanning mode; because it lacks continuous angular projection data, this mode introduces additional noise and artifacts that seriously degrade the imaging quality and affect diagnostic accuracy. In this study, we develop a deep learning based limited-angle TCT image reconstruction algorithm. Experimental results show that the proposed method, which uses the SART method rather than the FBP method, performs better in the limited-angle TCT scanning mode, and that it excels at suppressing noise and limited-angle artifacts while preserving image structures.

The new algorithm improves the quality of the reconstructed image for the limited-angle TCT scanning mode, which will be helpful for diagnosis. Its main drawback is that it requires a large training dataset and a powerful computer. In the future, we plan to improve the generalization ability of the algorithm and to extend it to higher-dimensional cases such as 3D reconstruction, so as to exploit more useful information. Although the algorithm is proposed for limited-angle TCT, since neither the SART method nor the deep learning technique is tied to the geometrical structure of the scanning mode, it can be extended to generic limited-angle tomography, such as C-arm cone-beam CT.

In conclusion, this paper adopts a limited-angle TCT scanning mode and develops a deep learning based limited-angle TCT image reconstruction algorithm. Several datasets are used to evaluate the performance of the proposed method against three other methods. The experimental results demonstrate that our algorithm performs better at suppressing noise and limited-angle artifacts while preserving image structures.


References

1. Liu FL, Yu HY, Cong W, Wang G. Top-level design and pilot analysis of low-end CT scanners based on linear scanning for developing countries. Journal of X-ray science and technology. 2014. 22(5):673–86. doi: 10.3233/XST-140453 25265926

2. Wu WW, Quan C, Liu FL. Filtered Back-Projection Image Reconstruction Algorithm for Opposite Parallel Linear CT Scanning. Acta Optica Sinica. 2016. doi: 10.3788/AOS201636.0911009

3. Kong H, Yu HY. Analytic reconstruction approach for parallel translational computed tomography. Journal of X-ray science and technology. 2015. 23(2):213. doi: 10.3233/XST-150482 25882732

4. Andersen AH, Kak AC. Simultaneous Algebraic Reconstruction Technique (SART): A superior implementation of the ART algorithm. Ultrasonic Imaging: An International Journal. 1984. 6(1):81–94. doi: 10.1016/0161-7346(84)90008-7

5. Gordon R, Bender R, Herman GT. Algebraic Reconstruction Techniques (ART) for three-dimensional electron microscopy and X-ray photography. Journal of Theoretical Biology. 1970. 29(3):471–481. doi: 10.1016/0022-5193(70)90109-8 5492997

6. Mcgaffin MG, Fessler JA. Alternating Dual Updates Algorithm for X-ray CT Reconstruction on the GPU. IEEE Transactions on Computational Imaging. 2015. 1(3):186–199. doi: 10.1109/TCI.2015.2479555 26878031

7. Chun SY, Dewaraja YK, Fessler JA. Alternating Direction Method of Multiplier for Tomography with Nonlocal Regularizers. IEEE Transactions on Medical Imaging. 2014. 33(10):1960–1968. doi: 10.1109/TMI.2014.2328660 25291351

8. Wang CX, Zeng L, Guo YM, Zhang LL. Wavelet tight frame and prior image-based image reconstruction from limited-angle projection data. Inverse Problems and Imaging. vol. 11, no. 6, pp. 917–948, 2017. doi: 10.3934/ipi.2017043

9. Wang CX, Zeng L. Error bounds and stability in the L0 regularized for CT reconstruction from small projections. Inverse Problems and Imaging. vol. 10, no. 3, pp. 829–853, 2016.

10. Wu WW, Zhang YB, Wang Q, Liu FL, Chen PJ, Yu HY. Low-dose spectral CT reconstruction using image gradient ℓ0–norm and tensor dictionary. Applied Mathematical Modelling. vol. 63, pp. 538–557, 2018. doi: 10.1016/j.apm.2018.07.006

11. Yu HY, Wang G. Compressed sensing based interior tomography. Phys. Med. Biol. vol. 54, no. 9, pp. 2791–2805, 2009. doi: 10.1088/0031-9155/54/9/014 19369711

12. Lauzier PT, Tang J, Chen GH. Prior image constrained compressed sensing: Implementation and performance evaluation. Medical Physics. vol. 39, pp. 66–80, 2012. doi: 10.1118/1.3666946 22225276

13. Chen ZQ, Jin X, Li L, Wang G. A limited-angle CT reconstruction method based on anisotropic TV minimization. Physics in Medicine & Biology. 2013. 58(7): 2119. doi: 10.1088/0031-9155/58/7/2119 23470430

14. Wang T, Nakamoto K, Zhang HY, Liu HF. Reweighted Anisotropic Total Variation Minimization for Limited-Angle CT Reconstruction. IEEE Transactions on Nuclear Science, 2017, 64(10):2742–2760. doi: 10.1109/TNS.2017.2750199

15. Yu W, Wang CX, Huang M. Edge-preserving reconstruction from sparse projections of limited-angle computed tomography using l0-regularized gradient prior. Review of Scientific Instruments. 2017. 88(4):043703. doi: 10.1063/1.4981132 28456252

16. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. vol. 521, no. 7553, pp. 436–444, 2015. doi: 10.1038/nature14539 26017442

17. Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In International Conference on Neural Information Processing Systems. Curran Associates Inc. pp. 1097–1105, 2012.

18. Girshick R, Donahue J, Darrell T, Malik J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. Computer Science. pp. 580–587, 2013. doi: 10.1109/CVPR.2014.81

19. Wang G, Ye JC, Mueller K, Fessler JA. Image Reconstruction Is a New Frontier of Machine Learning. IEEE Transactions on Medical Imaging. vol. PP, no. 99, 2018.

20. Wang G. A Perspective on Deep Imaging. IEEE Access. vol. 4, no. 99, pp. 8914–8924, 2017. doi: 10.1109/access.2016.2624938

21. Pelt DM, Batenburg KJ. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks. IEEE Transactions on Image Processing. vol. 22, no. 12, pp. 5238, 2013. doi: 10.1109/TIP.2013.2283142 24108463

22. Boublil D, Elad M, Shtok J, Zibulevsky M. Spatially-Adaptive Reconstruction in Computed Tomography Using Neural Networks. IEEE Transactions on Medical Imaging. vol. 34, no. 7, pp. 1474–1485, 2015. doi: 10.1109/TMI.2015.2401131 25675453

23. Chen H, Zhang Y, Zhang WH. Low-dose CT via convolutional neural network. Biomedical Optics Express. vol. 8, no. 2, pp. 679, 2017. doi: 10.1364/BOE.8.000679 28270976

24. Yang Q, Yan PK, Zhang YB, Yu HY, Shi YY, Mou XQ, et al. Low Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss. IEEE Transactions on Medical Imaging, 2018.

25. Jin KH, Mccann MT, Froustey E, Unser M. Deep Convolutional Neural Network for Inverse Problems in Imaging. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society. vol. 26, no. 9, pp. 4509–4522, 2017. doi: 10.1109/TIP.2017.2713099 28641250

26. Fuchs VR, Sox HC Jr. Physicians’ views of the relative importance of thirty medical innovations. Health Aff. vol. 20, no. 5, pp. 30–42, 2001. doi: 10.1377/hlthaff.20.5.30 11558715

27. Natterer F. The mathematics of computerized tomography. Medical Physics. vol. 29, no. 1. pp. 107–109, 1986. doi: 10.1137/1.9780898719284

28. Gao HW, Zhang L, Chen ZQ, Xing YX, Cheng JG, Qi ZH. Direct filtered-backprojection-type reconstruction from a straight-line trajectory. Optical Engineering. vol. 46, no. 5, 2007. doi: 10.1117/1.2739624

29. Magnusson MB, Danielsson PE. Scanning of logs with linear cone-beam tomography. Computers & Electronics in Agriculture. vol. 41, no. 1, pp. 45–62, 2003. doi: 10.1016/s0168-1699(03)00041-3

30. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI). LNCS vol. 9351, pp. 234–241, 2015. doi: 10.1007/978-3-319-24574-4_28

31. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In International Conference on Neural Information Processing Systems. Curran Associates Inc, 2012, pp. 1097–1105.

32. Kim J, Lee JK, Lee KM. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646–1654, 2016. doi: 10.1109/CVPR.2016.182

33. He KM, Zhang XY, Ren SQ, Sun J. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 2016, pp. 770–778.

34. Vedaldi A, Lenc K. MatConvNet: Convolutional neural networks for MATLAB. In Proc. 23rd ACM Int. Conf. Multimedia, pp. 689–692, 2015.

35. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing. 13 (2004), 600–612, doi: 10.1109/tip.2003.819861 15376593

36. Jin X, Li L, Chen ZQ, Zhang L, Xing YX. Anisotropic total variation for limited-angle CT reconstruction[C]// IEEE Nuclear Science Symposuim & Medical Imaging Conference. IEEE, 2010.

37. Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, et al. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. Journal of Digital Imaging, Vol. 26, no. 6, pp. 1045–1057, 2013. doi: 10.1007/s10278-013-9622-7 23884657

38. Liu Y, Ma J, Fan Y, Liang Z. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction. Physics in Medicine & Biology. vol. 57, no. 23, pp. 7923, 2012. doi: 10.1088/0031-9155/57/23/7923 23154621

