Article Open Access

Fully Automated Segmentation of Lung Parenchyma Using Break and Repair Strategy

Published/Copyright: July 20, 2017

Abstract

The traditional segmentation methods available for pulmonary parenchyma are not accurate because most of them exclude nodules or tumors adhering to the lung pleural wall as fat. In this paper, several techniques are used in different phases, including two-dimensional (2D) optimal threshold selection and 2D morphological reconstruction for lung parenchyma segmentation. Then, lung parenchyma boundaries are repaired using an improved chain code and Bresenham pixel interconnection. The proposed method of segmentation and repairing is fully automated. Here, 21 thoracic computed tomography scans having juxtapleural nodules and 115 lung parenchyma scans are used to verify the robustness and accuracy of the proposed method. Results are compared with the most cited active contour methods. Empirical results show that the proposed fully automated method for segmenting lung parenchyma is more accurate. The proposed method is 100% sensitive to the inclusion of nodules/tumors adhering to the lung pleural wall, the juxtapleural nodule segmentation accuracy is >98%, and the lung parenchyma segmentation accuracy is >96%.

MSC 2010: 68T99; 68U10; 62H35

1 Introduction

Lung-related cancer is one of the most predominant diseases. As with other cancers such as breast cancer, lymphoma, melanoma, and prostate cancer, the survival rate of patients can be improved by early detection and diagnosis [20, 25]. Computed tomography (CT) is widely used to analyze lung nodule size variations and structure. Nowadays, thinner CT slices produce a larger number of frames per scan, which results in more data to analyze and burdens radiologists. Computer-aided diagnosis/detection (CAD) is helpful in assisting radiologists in reading lung parenchyma CT [27].

In the lung CAD system, the lung parenchyma is first extracted from the thoracic CT image to reduce analysis time. Effective lung parenchyma segmentation reduces misdiagnosis and improves the efficiency of the CAD system. Because lung parenchyma segmentation is a procedural part of the CAD system for the analysis and assessment of pulmonary nodule structure, its quality influences the precision of the entire lung CAD system.

As of late, researchers have made advances in lung segmentation techniques [2, 7, 19, 23]. In general, these techniques can be divided into three classes: (i) pattern classification [12, 16], (ii) region growing [1, 11], and (iii) thresholding [10, 13, 14]. The pattern classification technique requires extensive training data preparation and feature extraction; thus, its processing time is longer [21, 22, 26], and it cannot meet the real-time requirements of the CAD framework in clinical applications. The region growing technique is not fully automated: high-density regions attached to the lung parenchyma edges are not included/excluded accurately, the two lung projections cannot be separated, and multiple seed points need to be chosen manually. It is also difficult to choose the region-growing and merging parameters from texture, which may affect stability. The thresholding technique is straightforward and fast, but it cannot exclude the bronchial/tracheal zones adequately, and it can exclude nodules that are connected to the inner surface of the lung parenchyma.

In lung CT images, some tumors/nodules cling to the inner wall of the lung, such as juxtapleural nodules. Detecting them is a vital requirement in the judgment of early-stage lung malignancy/infection because their gray level is fundamentally the same as that of the fat present outside the lung. Owing to this gray level similarity, juxtapleural nodules are frequently misidentified as fat during lung parenchyma segmentation. This leads to misanalysis and impacts the diagnosis of the ailments.

The nodules/tumors on the lung inner wall with the same gray level intensity as the fat surrounding the lung turn into sunken regions of the lung boundary. Because of the significance of these nodules/tumors in diagnosis, researchers are attempting to develop measures to enhance the detection technique. Region growing, active contour, and level set strategies are among the most commonly utilized ones [5, 6]. Nevertheless, the active contour based on the Chan-Vese segmentation technique cannot precisely judge the extent of the deformity; thus, it is difficult to identify the tumor region while growing.

Considering the above-mentioned issues in lung segmentation, this paper introduces a fully automated procedure for lung parenchyma extraction. The implementation first applies a two-dimensional (2D) optimal threshold technique for binarization; then, the fat surrounding the lung and the trachea is removed by 2D morphological reconstruction. Finally, the extracted lung parenchyma boundary is modified using an enhanced chain code, and connectivity is restored using the Bresenham method. The entire procedure of the proposed fully automatic segmentation is detailed in Figure 1. The lung parenchyma segmentation proposed in this paper can successfully recover missing juxtapleural nodules. In comparison with the active contour method, the proposed segmentation greatly enhances the accuracy and computational speed. Thoracic CT slices generated by different machines and settings are considered to check the robustness and accuracy of the proposed method of lung parenchyma segmentation.

Figure 1: Flowchart of Automated Segmentation of Lung Parenchyma Using Break and Repair Strategy.

2 Materials and Methods

The proposed method is separated into two phases: (i) lung parenchyma segmentation based on the thoracic CT slice image and (ii) modification of the segmented lung parenchyma boundary using the enhanced chain code and Bresenham methods. Each phase involves distinctive subphases. As shown in Figure 2A, the thoracic CT image incorporates the lung parenchyma, trachea, fat, examination table, and outside area. To reduce the computation and processing time in the first phase, all areas except the lung parenchyma are removed to avoid interference in diagnosis and to accelerate accurate lung parenchyma extraction. The initial phase of segmentation is to remove regions outside of the lung parenchyma.

Figure 2: (A) Thoracic CT Image. (B) 2D Histogram of (A).

2.1 Initial Segmentation

The density of the lung parenchyma is low because it is filled with air, whereas the density of other tissues like fat, soft tissue, and bone is higher and varied. The Hounsfield units (HU) in the thoracic CT image are proportional to density, and the HU values of the lung in CT images are practically the same across cases. In this paper, 2D optimal thresholding is initially applied to segment the lung parenchyma. The 2D thresholding process is as follows:

  1. Generate an L×L 2D histogram, as shown in Figure 2B, by pairing each input pixel gray level $F_i$ with its local mean $F_m$, where L is the number of gray levels.

  2. Divide the histogram into two classes, object ($C_0$) and background ($C_1$), through

    (1) $\omega_0 = \sum_{i=0}^{s}\sum_{j=0}^{t} q_{ij}; \qquad \omega_1 = \sum_{i=s+1}^{L-1}\sum_{j=t+1}^{L-1} q_{ij},$

    where $q_{ij}$ is the probability of the $(F_i, F_m)$ pair, and $\omega_0$ and $\omega_1$ are the probabilities of $C_0$ and $C_1$, respectively.

  3. Calculate the mean vectors $(\mu_{0i}, \mu_{0j})$ and $(\mu_{1i}, \mu_{1j})$ of $C_0$ and $C_1$, respectively, together with the total mean $(\mu_{Ti}, \mu_{Tj})$.

  4. Use those parameters to obtain the between-class variance:

    (2) $\operatorname{trace} T_0(s, t) = \omega_0\left[(\mu_{0i} - \mu_{Ti})^2 + (\mu_{0j} - \mu_{Tj})^2\right] + \omega_1\left[(\mu_{1i} - \mu_{Ti})^2 + (\mu_{1j} - \mu_{Tj})^2\right].$
  5. Repeat step (iv) over all (s, t) until trace $T_0(s, t)$ reaches its maximum; the corresponding pair is taken as the optimal threshold:

    (3) $\operatorname{trace} T_0(s^*, t^*) = \max_{0 \le s, t \le L-1} \{\operatorname{trace} T_0(s, t)\}.$

The above-mentioned 2D optimal thresholding gives better results in noisy images [18, 29]. The segmentation results are shown in Figure 3A (lung lobes and sections are in white).
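As a concrete illustration, the five thresholding steps above can be sketched in Python. This is a brute-force, unoptimized sketch under simplifying assumptions (a 3×3 local mean, a small number of gray levels L, and exhaustive search over all (s, t) pairs); the function name `two_d_otsu` is ours, not the authors' code.

```python
import numpy as np

def two_d_otsu(img, L=16):
    """2D optimal threshold (Eqs. 1-3): maximize the between-class trace
    over all (s, t) pairs of the (gray level, local mean) histogram."""
    img = (img.astype(float) / img.max() * (L - 1)).astype(int)
    # 3x3 local mean, quantized to the same L levels
    pad = np.pad(img, 1, mode="edge")
    mean = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            mean += pad[1 + dy:pad.shape[0] - 1 + dy,
                        1 + dx:pad.shape[1] - 1 + dx]
    mean = np.clip(mean // 9, 0, L - 1)
    # 2D histogram q_ij of (F_i, F_m) pairs, normalized to probabilities
    q = np.zeros((L, L))
    np.add.at(q, (img.ravel(), mean.ravel()), 1)
    q /= q.sum()
    idx = np.arange(L)
    mu_Ti = (q.sum(axis=1) * idx).sum()   # total mean, gray-level axis
    mu_Tj = (q.sum(axis=0) * idx).sum()   # total mean, local-mean axis
    best, best_st = -1.0, (0, 0)
    for s in range(L - 1):
        for t in range(L - 1):
            q0, q1 = q[:s + 1, :t + 1], q[s + 1:, t + 1:]
            w0, w1 = q0.sum(), q1.sum()   # Eq. (1)
            if w0 < 1e-12 or w1 < 1e-12:
                continue
            mu0i = (q0.sum(axis=1) * idx[:s + 1]).sum() / w0
            mu0j = (q0.sum(axis=0) * idx[:t + 1]).sum() / w0
            mu1i = (q1.sum(axis=1) * idx[s + 1:]).sum() / w1
            mu1j = (q1.sum(axis=0) * idx[t + 1:]).sum() / w1
            trace = (w0 * ((mu0i - mu_Ti) ** 2 + (mu0j - mu_Tj) ** 2)
                     + w1 * ((mu1i - mu_Ti) ** 2 + (mu1j - mu_Tj) ** 2))  # Eq. (2)
            if trace > best:              # Eq. (3)
                best, best_st = trace, (s, t)
    return best_st
```

For a realistic L = 256, this exhaustive search over (s, t) with an L×L sum per candidate is exactly the O(L⁴) cost discussed in Section 4.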

Figure 3: (A) Segmentation Result. (B) 2D Reconstruction. (C) Initial Lung Parenchyma Segmentation.

The optimal threshold is used to obtain the binary lung CT image, in which tissues outside the lung parenchyma are black and the lung parenchyma is white. Segmented regions that touch the image border (the area outside the body) are then suppressed by 2D morphological reconstruction. The 2D morphological reconstruction result is shown in Figure 3B: the lung parenchyma is in white, but it still includes the trachea and its nodular branches. Using an area opening operation, the trachea and the tissues present between the lungs are removed. In this phase, all objects except the larger lobes are removed from the binary image; the resultant image is shown in Figure 3C. This lung-only binary image is used to label the lung lobes by 2D connected components; then, holes are filled to make a proper mask, and morphological closing is applied to smooth the uneven edges of the lung. The lung lobes are separated by extracting only the two largest blobs by area. Here, the segmentation and separation of the lung lobes are referred to as the “break” strategy.
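The “break” steps described above (suppress regions touching the image border, keep the two largest components, fill holes) can be sketched with SciPy. Connected-component labeling stands in for the paper's 2D morphological reconstruction, and the helper name `extract_lung_lobes` is hypothetical.

```python
import numpy as np
from scipy import ndimage

def extract_lung_lobes(binary):
    """'Break' step sketch: drop border-touching regions, keep the two
    largest remaining components (the lobes), and fill their holes."""
    # Suppress every component that touches the image border (outside air)
    lbl, _ = ndimage.label(binary)
    edge_labels = set(np.unique(np.concatenate(
        [lbl[0], lbl[-1], lbl[:, 0], lbl[:, -1]])))
    edge_labels.discard(0)
    cleaned = binary & ~np.isin(lbl, list(edge_labels))
    # Keep only the two largest remaining blobs
    lbl, n = ndimage.label(cleaned)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(cleaned, lbl, index=np.arange(1, n + 1))
    keep = np.argsort(sizes)[::-1][:2] + 1
    lobes = np.isin(lbl, keep)
    # Fill holes so vessels inside the parenchyma stay in the mask
    return ndimage.binary_fill_holes(lobes)
```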

2.2 Lung Boundary Modification

Lung parenchyma modification is necessary because some nodules or tumors adhere to the inner lung pleural wall and are not included properly by the procedure described in Section 2.1. This is the main drawback of pulmonary parenchyma segmentation schemes: tissues attached to the lung pleural wall are often considered as fat and are wiped out, because the lung border is not even and includes some curved structures. In this paper, an improved chain code algorithm is used to analyze the lung border, and the Bresenham method is used to include missed nodules/tumors in the lung border.

2.2.1 Improved Chain Code

In the initially segmented lung parenchyma, the boundary includes concave and convex points. A chain code is used to symbolize convexity and concavity through determined lengths and directions. This concavity and convexity is analyzed using the Freeman chain code [9, 15]. Assume there are N Freeman chain code boundary points C(x), where x = 0, 1, …, N−1; C(x) is the current chain code point and C(x+1) is the next one. As shown in Figure 4, a relative chain code can be generated from this chain code. The relative chain code R(x) is defined by the relation between C(x) and C(x−1):

Figure 4: Eight-Direction Freeman Chain Code.

(4) $R(x) = \left[C(x) - C(x-1) + 8\right] \bmod 8; \quad \text{if } R(x) > 4, \text{ then } R(x) \leftarrow R(x) - 8.$

The absolute chain code is generated by accumulating the relative chain code onto the preceding absolute chain code value:

$A(0) = 0,$

(5) $A(x) = A(x-1) + R(x),$

where R(x) is the relative chain code and A(x) is the absolute chain code.

Then, the three-point sum is generated from the current absolute chain code value A(x) and its two preceding values:

(6) $\operatorname{Sum}(x) = A(x) + A(x-1) + A(x-2).$

Finally, convexity and concavity are obtained by the three-point sum difference:

(7) $\operatorname{Diff}(x) = \operatorname{Sum}(x+3) - \operatorname{Sum}(x).$

This three-point sum difference is proportional to the curvature of the boundary line because it is a two-direction sum difference. When the boundary chain code is computed in the clockwise direction, a positive difference identifies concavity; otherwise, the point is identified as convexity.

Guided by the chain code difference of the boundary, we can reconstruct the boundary line with the relative difference limit [30]. Convexity and concavity are recognized as summarized below:

  1. Obtain lung boundary from the binary image.

  2. Obtain the Freeman chain code of the lung boundary line, taking the upper left corner point as the initial point for subsequent processing.

  3. Determine the direction of chain code traversal: clockwise or anticlockwise.

  4. Obtain the absolute chain code, the relative chain code, and the three-point sum difference Diff(x) of the boundary line.

  5. Set limits according to the characteristics of the chain code difference Diff(x) and then recognize the concave and convex points: concave if Diff(x) > 2 and convex if Diff(x) < −1. These difference limits are empirical values. Figure 5 demonstrates the improved chain code method on an image containing an assortment of convex and concave points.

Figure 5: (A, B) Segregation of Concave and Convex Points.

Figure 5A shows an input test image, and the marked concave-convex points are shown in Figure 5B. The symbol “o” represents a concave point on the boundary line and the symbol “×” a convex point. Four points are identified as concave because their chain code difference is >2, and 13 points as convex because their difference is below the −1 threshold; these threshold values can be judged from our literature survey [24]. According to the above discussion and the results, the improved chain code is very accurate.
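Eqs. (4)-(7) and the empirical limits of step (v) can be sketched directly from a Freeman chain code sequence. The function name and the test chains are illustrative, and returning indices into the difference sequence is an implementation choice of this sketch.

```python
def classify_boundary(chain):
    """Mark concave/convex positions from a clockwise Freeman chain code
    (Eqs. 4-7 with the empirical limits Diff > 2 and Diff < -1)."""
    n = len(chain)
    # Eq. (4): relative chain code, a signed turn in (-4, 4]
    R = [0] * n
    for x in range(1, n):
        r = (chain[x] - chain[x - 1] + 8) % 8
        R[x] = r - 8 if r > 4 else r
    # Eq. (5): absolute chain code, the accumulated turning
    A = [0] * n
    for x in range(1, n):
        A[x] = A[x - 1] + R[x]
    # Eqs. (6)-(7): three-point sums and their two-direction difference
    S = [A[x] + A[x - 1] + A[x - 2] for x in range(2, n)]
    diff = [S[i + 3] - S[i] for i in range(len(S) - 3)]
    concave = [i for i, d in enumerate(diff) if d > 2]
    convex = [i for i, d in enumerate(diff) if d < -1]
    return concave, convex
```

On a clockwise square (all right turns) only convex detections fire; adding a left-turning notch to the chain produces concave detections as well.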

2.2.2 Modification of the Sunken Boundary

With the help of concave-convex points, the sunken boundary can be filled. The following steps are adopted to modify the sunken boundary:

  1. Initiate the loop at the chain code starting location, search for a convex point, mark it as P, record its location (x1, y1), and get its position from the chain code.

  2. Search for a concave point Q, its location (x2, y2), and its position from the chain code.

  3. Spot the successive convex point R, its location (x3, y3), and the respective position from the chain code.

  4. In step (iii), if the point following R is also convex, update R to that successive convex point and its position.

  5. Finally, connect convex points P and R; filling is done by connecting the boundary line points next to P and R, that is, p+1 to r+1, then p+2 to r+2, and so on, until the boundary line point reaches (p+r)/2, as shown in Figure 6.
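A minimal sketch of the P-Q-R search in steps (i)-(iv), under one simplified reading: walk the boundary labels and pair the convex points flanking each concave run; the pairs would then be connected as in step (v). The label encoding ('x' convex, 'o' concave, '.' neutral) and the helper name are hypothetical.

```python
def pairs_to_fill(labels):
    """Pair the convex points P and R that flank each concave run, for
    later Bresenham connection. 'x' = convex, 'o' = concave, '.' = other.
    Simplified: R is the first convex point reached after a concave run."""
    pairs, p, seen_concave = [], None, False
    for i, lab in enumerate(labels):
        if lab == "x":
            if p is not None and seen_concave:
                pairs.append((p, i))       # connect P..R across the dent
            p, seen_concave = i, False     # R becomes the next P
        elif lab == "o":
            seen_concave = True
    return pairs
```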

Figure 6: Delineation of Sunken Boundary Filling.

2.2.3 Filling the Sunken Boundary

The sunken boundary is filled in between P(x, y) and R(x, y). Two points are connected in 2D linear space through the Bresenham method [4], which is popular for connecting two points because of its accuracy and simplicity. Here, the coordinates (x1, y1) and (x2, y2) of points P and R, respectively, need to be connected. The interconnection procedure is as follows.

Assume that the slope between the two connecting points lies between 0 and 1. Let

$dx = x_2 - x_1, \quad dy = y_2 - y_1.$

Then, the slope will be

(8) $k = \frac{dy}{dx}, \quad 0 \le k \le 1.$

As shown in Figure 7, the point $f_1(x_s, y_s)$ is the pixel nearest to P(x, y). A one-unit increase in the x direction moves the line by about k in the y direction, and the next intersection point is $f_2(x_s+1, y_k)$. The successive pixels are connected using the relation

Figure 7: Delineation of Bresenham Method Pixel Interconnection.

(9) $x_{i+1} = x_i + 1; \quad y_{i+1} = \begin{cases} y_i + 1, & d \ge 0.5, \\ y_i, & \text{otherwise,} \end{cases}$

where d is the distance between the slope line and the pixel position. Initially, d is zero; it is increased by d = d + k for every pixel step in the x direction and decreased by d = d − 1 whenever the y position advances. The following modifications are applied so that sunken boundaries are filled accurately in all octants.

  1. Exchange dx and dy (and the roles of x and y) when the absolute value of the slope is >1.

  2. Use the signs of dx and dy to decide whether x and y are incremented or decremented by 1.
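The procedure above, including the two modifications, corresponds to the standard all-integer form of Bresenham's algorithm, in which the d-versus-0.5 test is scaled by 2·dx to avoid fractions. This is a generic sketch rather than the authors' implementation.

```python
def bresenham(x1, y1, x2, y2):
    """All-octant integer Bresenham line; returns the pixel chain from
    (x1, y1) to (x2, y2) inclusive."""
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    sx = 1 if x2 >= x1 else -1       # step signs taken from dx, dy (rule 2)
    sy = 1 if y2 >= y1 else -1
    steep = dy > dx                  # |slope| > 1: swap roles of x, y (rule 1)
    if steep:
        dx, dy = dy, dx
    err = 2 * dy - dx                # integer form of the d >= 0.5 test
    x, y, points = x1, y1, []
    for _ in range(dx + 1):
        points.append((x, y))
        if err >= 0:                 # minor axis advances
            if steep:
                x += sx
            else:
                y += sy
            err -= 2 * dx
        err += 2 * dy
        if steep:                    # major axis always advances
            y += sy
        else:
            x += sx
    return points
```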

Here, modifying the sunken boundary of the lung lobes is referred to as the “repair” strategy. To check the accuracy of this filling method, we used the previously discussed image and an image similar to the sunken structure of the lung parenchyma.

Figure 8 shows the result of the above steps applied to fill the sunken boundary of the structure shown in Figure 5. Figure 9 shows a structure similar to the lung lobes; Figure 9B shows the concave and convex points obtained by the improved chain code. In this example, three concave points can be seen in the sunken area. The convex points are interconnected to fill the sunken boundary. The results of this type of filling were accurate and satisfactory in all tests.

Figure 8: Sunken Boundary Filling Result for Figure 5A.

Figure 9: Structure Similar to Lung Lobes for Filling.

Figure 10 shows the right lung extracted from Figure 3. Figure 10B shows the marking of concave and convex points and the presence of sunken boundaries. These sunken boundaries are filled with the Bresenham algorithm and the resultant mask is shown in Figure 10D. Figure 10E shows the final extracted left lung with complete inclusion.

Figure 10: Illustration of Extracted Lung Parenchyma Repairing.

3 Performance Evaluation Method

The performance of the proposed fully automated lung parenchyma segmentation method is evaluated against manual segmentation. In this work, two radiologists helped by manually contouring all slices of the thoracic CT scan in every case, from the top of the lung to the bottom. The manual segmentation results were evaluated by experts and are considered as the ground truth reference. These ground truth images are used to evaluate the proposed fully automated segmentation method, whose accuracy is assessed by the following five parameters, including the accuracy of juxtapleural nodule inclusion.

3.1 Volumetric Overlap Error (VOE%)

The rate of volume overlap is estimated with the help of the Jaccard coefficient, which is the ratio of the intersection to the union of the segmented region A and the ground truth region B, as shown in Figure 11A. The VOE is defined as

Figure 11: Delineation of Segmentation Accuracy.

(10) $\mathrm{VOE}\,\% = \left(1 - \frac{|A \cap B|}{|A \cup B|}\right) \times 100,$

where A is the segmented region and B is the ground truth region. The overlap error varies from 0% to 100%; a value near 0% indicates perfect segmentation, in which the segmented and ground truth regions completely overlap [17, 28]. A VOE value near 100% indicates little or no overlap between the two regions and therefore poor segmentation.
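On boolean masks, Eq. (10) reduces to a few NumPy operations; `voe_percent` is an illustrative helper name.

```python
import numpy as np

def voe_percent(seg, gt):
    """Volumetric overlap error, Eq. (10): 100 * (1 - |A∩B| / |A∪B|)."""
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return 100.0 * (1.0 - inter / union)
```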

3.2 Dice Similarity Coefficient (DSC)

The similarity of the two regions is also evaluated by the DSC. In this evaluation technique, Dice’s coefficient is calculated to evaluate similarity [28]. Dice’s coefficient is defined as

(11) $\mathrm{DSC}\,\% = \frac{2\,|A \cap B|}{|A| + |B|} \times 100,$

where A and B are the segmented and ground truth regions, respectively. DSC values lie in the 0–100% range: a value near 0% represents little or no similarity between the segmented and ground truth regions, whereas a value near 100% indicates that the two regions are highly similar.
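Eq. (11) is equally direct on boolean masks; `dsc_percent` is an illustrative helper name.

```python
import numpy as np

def dsc_percent(seg, gt):
    """Dice similarity coefficient, Eq. (11): 100 * 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(seg, gt).sum()
    return 100.0 * 2.0 * inter / (seg.sum() + gt.sum())
```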

3.3 Average Symmetric Surface Distance (ASSD)

In this evaluation technique, segmentation accuracy is measured by comparing the surface shape of the segmented region with that of the ground truth region. The distance from each contour point of the segmented region to the ground truth contour is calculated and averaged; for symmetry, the average distance from the ground truth contour to the segmented contour is also calculated. The ASSD is measured in millimeters, and a smaller value represents higher surface shape similarity, whereas a larger distance represents little or no surface symmetry between the two regions. ASSD(A, B) is also known as the mean absolute distance [17]. Let S(A) denote the set of surface pixels of A and S(B) the set of surface pixels of B; the minimum distance from a surface pixel $s_A$ to S(B) is defined as

(12) $d(s_A, S(B)) = \min_{s_B \in S(B)} \lVert s_A - s_B \rVert,$

where ‖·‖ represents the Euclidean distance. The ASSD between A and B is defined as

(13) $\mathrm{ASSD}(A, B) = \frac{1}{|S(A)| + |S(B)|} \left[ \sum_{s_A \in S(A)} d(s_A, S(B)) + \sum_{s_B \in S(B)} d(s_B, S(A)) \right].$
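Eqs. (12)-(13) can be sketched by taking the surface pixels of a mask to be those removed by a binary erosion and brute-forcing the directed minimum distances; the helper name and the isotropic `spacing` parameter are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def assd(a, b, spacing=1.0):
    """Average symmetric surface distance, Eqs. (12)-(13); surface pixels
    are those a binary erosion removes. Brute-force O(|S(A)|*|S(B)|)."""
    def surface(mask):
        return np.argwhere(mask & ~ndimage.binary_erosion(mask))
    sa, sb = surface(a), surface(b)
    # Eq. (12): minimum Euclidean distance from each surface pixel
    d_ab = [np.sqrt(((sb - p) ** 2).sum(axis=1)).min() for p in sa]
    d_ba = [np.sqrt(((sa - p) ** 2).sum(axis=1)).min() for p in sb]
    # Eq. (13): symmetric average, scaled to millimeters by the spacing
    return spacing * (sum(d_ab) + sum(d_ba)) / (len(sa) + len(sb))
```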

3.4 Root Mean Square Symmetric Surface Distance (RMSD)

The RMSD evaluation values are also in millimeters and are likewise based on the surface distance between contour points [17]. However, the distance between $s_A$ and S(B) is squared before being used in the RMSD evaluation:

(14) $\mathrm{RMSD}(A, B) = \sqrt{ \frac{1}{|S(A)| + |S(B)|} \left[ \sum_{s_A \in S(A)} d^2(s_A, S(B)) + \sum_{s_B \in S(B)} d^2(s_B, S(A)) \right] }.$

RMSD values near 0 represent accurate segmentation; otherwise, the segmentation results are inaccurate.

3.5 Accuracy of the Juxtapleural Nodule Inclusion

In this evaluation technique, the accuracy of segmentation of the juxtapleural nodule is measured. As shown in Figure 11B, $N_{seg}$ represents the juxtapleural nodule region segmented by the proposed method and $N_{gt}$ is the ground truth region. The inclusion accuracy is

(15) $\mathrm{Inclusion\_accuracy}\,\% = \frac{|N_{seg} \cap N_{gt}|}{|N_{seg} \cup N_{gt}|} \times 100.$

An Inclusion_accuracy value near 0% represents little or no inclusion of the juxtapleural nodule, and a value near 100% represents total inclusion.

4 Experiments and Results

To evaluate the performance and accuracy of the proposed method, we consider thoracic CT slices from three different datasets, each generated by different machines or settings. Table 1 lists the significant parameters of the three datasets. From the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) [3] dataset, there were 82 thoracic CT scans from 82 patients; 29 scans came from the Lung TIME1 dataset and 4 scans from the Lung TIME2 dataset [8]. In total, there were 115 scans, each containing 80–140 slices. From the same datasets, 21 thoracic CT scans containing juxtapleural nodules were considered.

Table 1:

Significant Parameters of Datasets.

Dataset                          LIDC-IDRI            Lung TIME1         Lung TIME2
Slice thickness (mm)             0.6~5.0              5.0                5.0
Slice spacing (mm)               1.9                  1.0                5.0
Tube current (mA)                40~627               63                 63
X-ray generator output (kV)      120~140              110                110
Transversal resolution (mm)      0.688                0.58               0.71
Convolutional kernel             Standard             B60f               B60f
Manufacturer                     GE Medical Systems   Siemens            Siemens
Manufacturer model               Light-Speed16        Somatom AR Star    Somatom AR Star

The results of the proposed fully automated lung parenchyma segmentation were accurate for all images, including 31 juxtapleural nodules from 21 thoracic CT scans. Figure 12 illustrates the segmentation and boundary modification results of five slices: column (A) input image, (B) initial segmentation result, (C) markings of concave-convex points on the boundary line, (D) sunken boundary filling using the Bresenham method, (E) segmentation mask, and (F) segmented lung lobe. The proposed method was compared with the most cited active contour segmentation method; in this active contour based on the Chan-Vese technique, the region of interest is marked by seed points to segment the lung parenchyma. Figure 13 shows the segmentation results of two slices in comparison to the proposed method: column (A) shows the input image, (B) the segmentation results from the active contour technique, and (C) the segmentation results from the proposed method. The results show that the segmentation of juxtapleural nodules by the proposed method was very accurate, compared with the active contour technique, which showed poor inclusion of nodules/tumors on the lung pleural wall.

Figure 12: Experimental Results of Segmentation and Repairing.

Figure 13: Segmentation Results Comparison between the (A) Input Thoracic CT Images, (B) Active-Contour Method and (C) Proposed Method.

The segmentation accuracy for all 115 scans was evaluated by comparison to the ground truth contour using the DSC in Eq. (11). The resultant similarity coefficient mean was about 96±0.35%, compared to the active contour segmentation overlap mean of 82±3.55%, as depicted in Figure 14A. All test results from the proposed method were well within the satisfactory level, except for small non-overlapping regions on lobe edges. The juxtapleural nodule inclusion accuracy was evaluated using Eq. (15), and the resultant values are listed in Table 2. The proposed segmentation method was more accurate in the inclusion of juxtapleural nodules for all cases. Analyzing all the experimental results, the proposed method is satisfactory and stable for all cases of lung parenchyma segmentation. It was 100% accurate in sensing nodules adhering to the lung pleural wall, which resulted in >98% juxtapleural nodule inclusion; the complete lung parenchyma segmentation accuracy was about 96% because of minor over-/undersegmentation in the hilar region. This lower evaluation value does not affect the tracing of nodules/tumors adhering to the boundary, so it will not lead to a misdiagnosis.

Figure 14: Comparison of Performance Evaluation Results.

Table 2:

Comparison Results.

Method                                     DSC (%)           Inclusion_accuracy (%)   Average computation time (s)
                                           Mean      Std     Mean      Std            Mean      Std
Active contour based on Chan-Vese method   82.024    3.547   63.657    5.933          35.533    1.306
Proposed method                            95.859    0.346   98.346    0.676          0.907     0.0738

The computational complexity of the initial segmentation depends on L (the number of gray levels): the optimal threshold search over a 2D histogram of size L×L makes the time complexity about O(L⁴). The computational complexity of the boundary repair depends mainly on the number of points N on the boundary line, for generating the chain code and for the Bresenham interconnection. The filling of sunken boundaries depends on the number of filling operations k and the distance D between convex points; that is, the computational complexity is O(kD²), which is O(N²) in the most pessimistic scenario. For the active contour method, the complexity depends mainly on the number of iterations n used to grow the segmentation, O(n² log n) [6]. The proposed algorithm was verified with simulations programmed in MATLAB R2014a under Windows 10 on an Intel i5-3210M system. A comparison of execution times is listed in Table 2; the execution time of the proposed method is comparatively lower.

The quality of the proposed segmentation method was determined by three evaluation techniques, and the resultant values are shown in Table 3 and Figure 14B. The VOE, ASSD, and RMSD measurements should all be low for good-quality segmentation. The values in Table 3 demonstrate that the proposed method of segmenting lung parenchyma is more accurate. The visual assessment findings were confirmed by a radiologist.

Table 3:

Proposed Segmentation Method Evaluation Results in Two Phases.

Phase                          VOE (%)           ASSD (mm)         RMSD (mm)
                               Mean      Std     Mean      Std     Mean      Std
Initial segmentation           18.523    4.643   7.551     4.689   18.802    11.968
Lung boundary modification     4.838     1.834   2.364     1.121   3.528     0.991

5 Conclusion

In this paper, the authors propose a new fully automated lung parenchyma segmentation method. The segmentation phase includes 2D optimal threshold selection and 2D morphological reconstruction; then, the lung parenchyma boundaries are modified using the improved chain code and Bresenham pixel interconnection. A total of 21 thoracic CT scans having juxtapleural nodules and 115 lung parenchyma scans were used to verify the robustness and accuracy of the proposed method. The proposed method was 100% sensitive to the inclusion of nodules/tumors present on the lung pleural wall; the juxtapleural nodule segmentation accuracy was >98% and the lung parenchyma segmentation accuracy was >96%. The results were compared with the most cited active contour methods. The proposed method showed good performance in the inclusion of juxtapleural nodules, low computational complexity, and no user interference. It meets the prerequisites of the lung CAD framework and performs well in processing sequences of images.

Acknowledgments

The authors acknowledge the National Cancer Institute and Czech Technical University for creation of the free publicly available Lung Image Database Consortium and Image Database Resource Initiative and Lung TIME Database, respectively.

Bibliography

[1] S. G. Armato and W. F. Sensakovic, Automated lung segmentation for thoracic CT: impact on computer-aided diagnosis, Acad. Radiol. 11 (2004), 1011–1021. doi: 10.1016/j.acra.2004.06.005.

[2] S. G. Armato, M. L. Giger, C. J. Moran, J. T. Blackburn, K. Doi and H. MacMahon, Computerized detection of pulmonary nodules on CT scans, Radiographics 19 (1999), 1303–1311. doi: 10.1148/radiographics.19.5.g99se181303.

[3] S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle, C. I. Henschke, E. A. Hoffman, E. A. Kazerooni, H. MacMahon, E. J. Van Beeke, D. Yankelevitz, A. M. Biancardi, P. H. Bland, M. S. Brown, R. M. Engelmann, G. E. Laderach, D. Max, R. C. Pais, D. P. Qing, R. Y. Roberts, A. R. Smith, A. Starkey, P. Batrah, P. Caligiuri, A. Farooqi, G. W. Gladish, C. M. Jude, R. F. Munden, I. Petkovska, L. E. Quint, L. H. Schwartz, B. Sundaram, L. E. Dodd, C. Fenimore, D. Gur, N. Petrick, J. Freymann, J. Kirby, B. Hughes, A. V. Casteele, S. Gupte, M. Sallamm, M. D. Heath, M. H. Kuhn, E. Dharaiya, R. Burns, D. S. Fryd, M. Salganicoff, V. Anand, U. Shreter, S. Vastagh and B. Y. Croft, The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans, Med. Phys. 38 (2011), 915–931. doi: 10.1118/1.3528204.

[4] J. E. Bresenham, Algorithm for computer control of a digital plotter, IBM Syst. J. 4 (1965), 25–30. doi: 10.1147/sj.41.0025.

[5] T. Chan and L. Vese, An active contour model without edges, in: International Conference on Scale-Space Theories in Computer Vision, pp. 141–151, Springer, 1999. doi: 10.1007/3-540-48236-9_13.

[6] T. F. Chan and L. A. Vese, Active contours without edges, IEEE Trans. Image Process. 10 (2001), 266–277. doi: 10.1109/83.902291.

[7] G. De Nunzio, E. Tommasi, A. Agrusti, R. Cataldo, I. De Mitri, M. Favetta, S. Maglio, A. Massafra, M. Quarta, M. Torsello, I. Zecca, R. Bellotti, S. Tangaro, P. Calvini, N. Camarlinghi, F. Falaschi, P. Cerello and P. Oliva, Automatic lung segmentation in CT images with accurate handling of the hilar region, J. Digit. Imaging 24 (2011), 11–27. doi: 10.1007/s10278-009-9229-1.

[8] M. Dolejsi, J. Kybic, M. Polovincak and S. Tuma, The lung time: annotated lung nodule dataset and nodule detection framework, in: SPIE Medical Imaging, International Society for Optics and Photonics, pp. 72601U, 2009. doi: 10.1117/12.811645.

[9] H. Freeman, On the encoding of arbitrary geometric configurations, IRE Trans. Electron. Comput. EC-10 (1961), 260–268. doi: 10.1109/TEC.1961.5219197.

[10] W. Guo and Q. Li, Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT, Med. Phys. 41 (2014), 091906. doi: 10.1118/1.4892056.

[11] L. W. Hedlund, R. F. Anderson, P. L. Goulding, J. W. Beck, E. L. Effmann and C. E. Putman, Two methods for isolating the lung area of a CT scan for density information, Radiology 144 (1982), 353–357. doi: 10.1148/radiology.144.2.7089289.

[12] J. Kirchner, J. P. Goltz, F. Lorenz, A. Obermann, E. M. Kirchner and R. Kickuth, The “dirty chest” – correlations between chest radiography, multislice CT and tobacco burden, Br. J. Radiol. 85 (2014), 339–345. doi: 10.1259/bjr/62694750.

[13] J. Lai and Q. Wei, Automatic lung fields segmentation in CT scans using morphological operation and anatomical information, Biomed. Mater. Eng. 24 (2014), 335–340. doi: 10.3233/BME-130815.

[14] J. K. Leader, B. Zheng, R. M. Rogers, F. C. Sciurba, A. Perez, B. E. Chapman, S. Patel, C. R. Fuhrman and D. Gur, Automated lung segmentation in X-ray computed tomography: development and evaluation of a heuristic threshold-based scheme, Acad. Radiol. 10 (2003), 1224–1236. doi: 10.1016/S1076-6332(03)00380-5.

[15] Z. Lu and T. Tong, The application of chain code sum in the edge form analysis, J. Image Graphics 7 (2002), 1323–1328.

[16] M. F. McNitt-Gray, J. W. Sayre, H. K. Huang, M. Razavi and D. R. Aberle, Pattern classification approach to segmentation of digital chest radiographs and chest CT image slices, Proc. SPIE 2167 (1994), 465–476. doi: 10.1117/12.175080.

[17] N. M. Noor, O. M. Rijal, J. T. C. Ming, F. A. Roseli, H. Ebrahimian, R. M. Kassim and A. Yunus, Segmentation of the lung anatomy for high resolution computed tomography (HRCT) thorax images, in: International Visual Informatics Conference, pp. 165–175, Springer, 2013. doi: 10.1007/978-3-319-02958-0_16.

[18] N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybernet. 9 (1979), 62–66. doi: 10.1109/TSMC.1979.4310076.

[19] J. Pu, J. Roos, A. Y. Chin, S. Napel, G. D. Rubin and D. S. Paik, Adaptive border marching algorithm: automatic lung segmentation on chest CT images, Comput. Med. Imaging Graphics 32 (2008), 452–462. doi: 10.1016/j.compmedimag.2008.04.005.

[20] G. D. Rubin, J. K. Lyo, D. S. Paik, A. J. Sherbondy, L. C. Chow, A. N. Leung, R. Mindelzun, P. K. S. Desmond, S. E. Zinck, D. P. Naidich and S. Napel, Pulmonary nodules on multi-detector row CT scans: performance comparison of radiologists and computer-aided detection, Radiology 234 (2005), 274–283. doi: 10.1148/radiol.2341040589.

[21] C. Shi, Y. Cheng, F. Liu, Y. Wang, J. Bai and S. Tamura, A hierarchical local region-based sparse shape composition for liver segmentation in CT scans, Pattern Recogn.50 (2016), 88–106.10.1016/j.patcog.2015.09.001Search in Google Scholar

[22] C. Shi, Y. Cheng, J. Wang, Y. Wang, K. Mori and S. Tamura, Low-rank and sparse decomposition based shape model and probabilistic atlas for automatic pathological organ segmentation, Med. Image Anal.38 (2017), 30–49.10.1016/j.media.2017.02.008Search in Google Scholar PubMed

[23] I. Sluimer, A. Schilham, M. Prokop and B. van Ginneken, Computer analysis of computed tomography scans of the lung: a survey, IEEE Trans. Med. Imaging25 (2006), 385–405.10.1109/TMI.2005.862753Search in Google Scholar PubMed

[24] J. H. Tan and J. Zhang, Identifying for the convex-concave of peripherals based on chain code difference, Sci. Technol. Eng.7 (2007), 769–772.Search in Google Scholar

[25] B. Van Ginneken, B. M. Ter Haar Romeny and M. A. Viergever, Computer-aided diagnosis in chest radiography: a survey, IEEE Trans. Med. Imaging20 (2001), 1228–1241.10.1109/42.974918Search in Google Scholar PubMed

[26] J. Wang, Y. Cheng, C. Guo, Y. Wang and S. Tamura, Shape-intensity prior level set combining probabilistic atlas and probability map constrains for automatic liver segmentation from abdominal CT images, Int. J. Comput. Assist. Radiol. Surg.11 (2016), 817–826.10.1007/s11548-015-1332-9Search in Google Scholar PubMed

[27] Y. Wei, G. Shen and J. J. Li, A fully automatic method for lung parenchyma segmentation and repairing, J. Digit. Imaging26 (2013), 483–495.10.1007/s10278-012-9528-9Search in Google Scholar PubMed PubMed Central

[28] Z. Xia, Y. Gan, J. Xiong, Q. Zhao and J. Chen, Crown segmentation from computed tomography images with metal artifacts, IEEE Signal Process. Lett.23 (2016), 678–682.10.1109/LSP.2016.2545702Search in Google Scholar

[29] S. L. Xiaodan Chen, J. Hu and Y. Liang, A survey on Otsu image segmentation methods, J. Comput. Inform. Syst.10 (2014), 4287–4298.Search in Google Scholar

[30] X. Zheng, Y. Wang, G. Wang and Z. Chen, A novel algorithm based on visual saliency attention for localization and segmentation in rapidly stained leukocyte images, Micron56 (2014), 17–28.10.1016/j.micron.2013.09.006Search in Google Scholar PubMed

Received: 2017-01-30
Published Online: 2017-07-20
Published in Print: 2019-04-24

©2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
