Article Open Access

Effective Approach to Classify and Segment Retinal Hemorrhage Using ANFIS and Particle Swarm Optimization

Published/Copyright: May 12, 2017

Abstract

The main objective of this study is to develop a framework to detect and segment hemorrhages in retinal fundus images. Retinal hemorrhage is the abnormal bleeding of the blood vessels in the retina, the membrane at the back of the eye. In the proposed approach, images taken from standard databases are first denoised with an adaptive median filter. Gray level co-occurrence matrix (GLCM), grey level run length matrix (GLRLM), and scale invariant feature transform (SIFT) features are then extracted from the filtered images. Next, a classification stage based on the artificial neural network with fuzzy inference system (ANFIS) technique separates hemorrhage-affected images from non-affected ones. The affected images are passed to a segmentation stage in which the threshold limits are tuned by several optimization methods; among these, particle swarm optimization performs best. The segmented images are then presented, and in the MATLAB implementation the sensitivity is high compared with the accuracy and specificity.

1 Introduction

Diabetic retinopathy (DR) is the main complication of diabetes mellitus. It is the primary cause of acquired blindness in working adults [12]. DR is called the silent disease and is often recognized by patients only when changes in the retina have progressed to a certain level, at which stage treatment becomes complicated or nearly impossible [16]. Diabetic macular edema (DME) is the main cause of central vision loss in diabetes patients. DME is characterized by retinal thickening, hard exudates, hemorrhages with or without microaneurysms, and blot hemorrhages in the macula region [17]. This disease is the main cause of blindness in the developed world and is a complication of diabetes [14]. Temporal image registration is used to determine disease or therapeutic progress by better measuring changes both in the retinal vascular tree and in the color content of the eye fundus [11]. DR is an eye disease in which diabetes disturbs the blood vessels in the human retina, and it has developed into one of the main causes of vision damage [3]. The severity of DR is estimated using the number and kinds of lesions present on the superficial layer of the retina [4]. The prevalence of diabetes is increasing, not only in industrialized nations but also in less developed nations; it is estimated that 75% of people with DR live in emerging nations [8].

By 2030, the prevalence of diabetes will increase to 4.4% of the worldwide population. There is an efficient treatment for avoiding vision loss, but DR does not show any symptoms until the last stage of the disease [19]. The recognition of microaneurysms is crucial in the procedure of DR grading, as it forms the basis of the decision whether an image of a patient's eye should be considered healthy or not [13]. Pathologic neovascularization and ocular permeability are indicative of proliferative DR and age-related macular degeneration. Among the latest pharmacologic interventions, inhibiting vascular endothelial growth factor (VEGF) is effective in only 30%–60% of patients and requires multiple intraocular injections, which carry a risk of iatrogenic infection [10]. It leads to severe and irreversible loss of vision in the elderly in developed countries [7]. Fluorescein angiography (FA) is an imaging method for assessment of retinal vascular disease, mostly DR [22]. The retina is supplied with blood predominantly (65%) by the choroid and secondarily (35%) by the retinal vasculature, which lies above the retina [1].

DR is estimated to afflict up to 28.5% of people with diabetes in the US. Microvascular disease manifests subtly, as in barely visible dot-like intraretinal hemorrhages, or profoundly, as in retinal vascular occlusions, vitreous hemorrhage, optic nerve ischemia, or traction retinal detachments [18]. Computer recognition of DR in digital photographs could bring economic benefits to diabetic retinal screening by decreasing the costs of grading and quality assurance [9]. Automatic optic disc detection helps to establish a retinal coordinate system that can be used to locate other retinal abnormalities such as exudates, drusen, and hemorrhages [15]. Diabetes mellitus (DM) is a chronic medical condition characterized by impaired glucose metabolism due to destruction of pancreatic β cells. It is mainly classified into type-I and type-II (insulin resistance) diabetes [17].

2 Literature Review

Bartczak et al. [5] had suggested that typical retinal screening utilizes untenable broadband white light which may not give the best possible visibility of different retinal features of interest. To enhance the contrast of various retinal features and lesions in fundus images, a light-emitting diode (LED)-based spectrally tunable light source was constructed, and its spectral performance was studied. Optimal illuminants for DR lesion detection were implemented using the LED-based light source, and computational example images were calculated. Retinal lesion visibility (contrast) in retinal images calculated for optimal, red-free and broadband illuminants were compared. The results show that the LED-based optimal illuminants improved the contrast of diabetic lesions in retinal images by 41% compared to the traditional retinal screening. These results are promising and show that LED could be beneficially used in next-generation retinal imaging systems.

Welikala et al. [21] had described a segmented vessel map to determine the suitability of retinal images for creating vessel morphometric data for epidemiological studies. They used an effective three-dimensional feature set and support vector machine classification, and a random subset of 800 retinal images from UK Biobank was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for detecting inadequate images. The strong performance of the image quality algorithm makes fully automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.

Chang et al. [6] had explained the clinical characteristics and visual results of macular hemorrhage in pathological myopia with or without choroidal neovascularization (CNV). By using FA and optical coherence tomography, all patients were evaluated to detect CNV. Clinical characteristics such as age, sex, refractory error, and myopic fundus were recorded to determine the relationship between CNV and non-CNV associated macular hemorrhage. A total of 55 patients (30 females, 54.55%) were reviewed with the mean age of 39.7 years. The CNV group was found to be significantly older than the non-CNV group (p<0.05), and there was no significant difference between sex, visual acuity, myopic severity, and the prevalence of fundus findings between CNV and non-CNV groups. Twenty-one patients (38.18%) were found to have CNV and were all treated with intravitreal anti-VEGF. The other 34 patients without CNV were left untreated.

Umesawa et al. [20] had examined retinal hemorrhage, an important finding on fundus photography. DM is the major cause of retinal hemorrhage, although other causes exist. The study sought to characterize the association between retinal hemorrhage and HbA1c in the Japanese population. Fundus photography served as part of an annual cardiovascular disease risk survey. HbA1c was determined by the latex agglutination method throughout the study, and logistic regression models were used to examine the association between HbA1c and the risk of retinal hemorrhage and diabetic retinal hemorrhage. During a median follow-up period of 4.6 years, 509 retinal hemorrhages, including 96 diabetic retinal hemorrhages, were diagnosed. HbA1c was positively associated with the risk of retinal hemorrhage and diabetic retinal hemorrhage among subjects who were not taking medication for DM at baseline, but not among subjects who were taking medication at baseline.

Aiswarya Raj and Mani [2] (2015) discussed systemic diseases such as hypertension, diabetes, and vascular disorders, which affect the retinal vessels. When affected by these diseases, the retinal vessels show vascular variations according to the severity of the condition, so an efficient system that can detect retinal abnormalities is required to diagnose them. For identifying variations in retinal vessel caliber, the AVR ratio is calculated. Their literature review covered the performance evaluation of various automatic retinal vessel classification systems and exudate identification systems. Retinal vessel classification and caliber estimation can be done by exploiting either visual or geometric features that enable discrimination between veins and arteries.

3 Proposed Methodology

In this research work the objective is to develop a framework to detect and segment hemorrhages in retinal fundus images. Initially, the image databases are considered, and an adaptive median filter is applied to denoise the images. After filtering, the feature extraction stage is executed; it combines different methods for extracting meaningful information (features), namely gray level co-occurrence matrix (GLCM), grey level run length matrix (GLRLM), and scale invariant feature transform (SIFT). Once the extracted features are available, the classification process is applied. Here, the classification algorithm is the artificial neural network with fuzzy inference system (ANFIS) technique, which improves the classification performance; the objective of ANFIS is to integrate the best features of fuzzy systems and neural networks so as to distinguish hemorrhage-affected images from non-affected images. The hemorrhage-affected images are then used for the segmentation process, which segments the affected part of the image using a threshold optimization method under different settings, namely without optimization, with particle swarm optimization (PSO), and with a genetic algorithm (GA). Among these, the PSO algorithm performs best, and its segmented images are presented. The proposed scheme attains maximum sensitivity and minimum computational time and is implemented on the MATLAB platform. The overall process is shown in Figure 1.

Figure 1: Overall Diagram for Proposed Method.

3.1 Adaptive Median Filter

The adaptive median filter is able to change its window size based on an estimate of the local noise density. In hardware terms, it can be built on a trans-conductance comparator whose saturation current can be altered so that it acts as a local weight operator. Since the size of the filter is adjusted to the local noise content, this kind of median filter is known as an adaptive median filter. With this filter, if the image is noisy and the target pixel's value is an impulse (at the extremes 0 or 255 relative to its neighborhood), the pixel value is replaced with the median of the window.

Let $T_{i,j}$, located at $(i, j)$, be the gray intensity of a $C\times D$ image $T$, and let $[M_{\min}, M_{\max}]$ be the dynamic range of $T$, i.e. $M_{\min}\le T_{i,j}\le M_{\max}$ for all $(i, j)$, where

(1) $(i,j)\in A=\{1,\dots,C\}\times\{1,\dots,D\}$

In the conventional noise model, we assume $z$ is the noisy image; the model is given by

(2) $z_{i,j}=\begin{cases} M_{\min} & \text{with probability } p\\ M_{\max} & \text{with probability } q\\ T_{i,j} & \text{with probability } 1-p-q\end{cases}$

where rate $=p+q$ is the noise level in the image. Assume the filtering window $N_{i,j}$ is a window of size $(2C+1)\times(2C+1)$ centered at position $(i, j)$; $N_{i,j}$ can be written as

(3) $N_{i,j}=\{T_{i-C,j-C},\dots,T_{i,j},\dots,T_{i+C,j+C}\}$

Let $w=2C+1\le N_{\max}$. The filter improves the output image by replacing $z_{i,j}$ with the median of the window.
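As an illustration, the window-growing logic described above can be sketched as follows. This is a hypothetical Python/NumPy implementation, not the authors' code; the impulse test and the `max_window` cap are assumptions.

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Adaptive median filter sketch.

    For each pixel, grow the window until its median is not itself an
    impulse (i.e. lies strictly between the window min and max); then
    replace the pixel only if the pixel value is an impulse relative
    to the window. For flat windows, the fallback is the window median."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = img.copy()
    for i in range(h):
        for j in range(w):
            k = 1  # half window size; the window is (2k+1)x(2k+1), clipped at borders
            while True:
                win = img[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1]
                zmin, zmed, zmax = win.min(), int(np.median(win)), win.max()
                if zmin < zmed < zmax:
                    # median is reliable; replace only impulse-valued pixels
                    if not (zmin < img[i, j] < zmax):
                        out[i, j] = zmed
                    break
                k += 1
                if 2 * k + 1 > max_window:
                    out[i, j] = zmed
                    break
    return out.astype(np.uint8)
```

A single 255-valued impulse in a uniform region is replaced by the local median, while non-impulse pixels are left untouched.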

3.2 Feature Extraction

The filtered images undergo feature extraction by means of the adapted GLCM, GLRLM, and SIFT methods in the feature extraction component. In the task of input image identification and verification, feature extraction plays a pivotal part. The ultimate motive of feature extraction is to scale down the original data set by evaluating certain properties or features that are capable of distinguishing one input pattern from another.

3.2.1 Grey Level Co-occurrence Matrix

A GLCM is a matrix in which the number of rows and columns equals the number of gray levels, G, in the image. The matrix element p(u, v | d1, d2) gives the relative frequency with which two pixels with gray levels u and v occur, separated by a pixel distance (d1, d2). Sufficient statistics can be gathered from the GLCMs by means of the graycoprops function, which furnishes details regarding the texture of an image; these can be categorized as follows.

  • Energy

  • Entropy

  • Cluster shade

  • Homogeneity

  • Maximum probability

3.2.1.1 Energy

The angular second moment, also known as the uniformity or energy, is the sum of the squares of the entries in the GLCM. The range of energy is [0, 1], and its value for a constant image is one. The energy is evaluated as follows.

(4) $\text{Energy}=\sum_{u,v} p(u,v)^2$

where p(u, v) is the normalized co-occurrence entry at position (u, v) for a texture image of size M×N.

3.2.1.2 Entropy

The entropy helps to represent the texture image and to determine the distribution change in a region of the image; it estimates the disorder of an image. When the image is not texturally uniform, many GLCM elements have very small values, and the entropy becomes large. The entropy is estimated as per the following equation.

(5) $\text{Ent}=-\sum_{u,v} p(u,v)\,\log\big(p(u,v)\big)$
3.2.1.3 Cluster Shade

The cluster shade measures the skewness of the GLCM, where $\alpha_u$ and $\alpha_v$ denote the means of the row and column indices weighted by p(u, v).

(6) $CS=\sum_{u,v}\big((u-\alpha_u)+(v-\alpha_v)\big)^3\, p(u,v)$
3.2.1.4 Homogeneity

The homogeneity evaluates the closeness of the distribution of the non-zero entries in the GLCM. The higher the variation in the grey values, the lower the GLCM homogeneity (and the higher the GLCM contrast). The homogeneity lies in the range [0, 1]; if the image has little variation the homogeneity is high, and for a constant image it equals one.

(7) $\text{Homogeneity}=\sum_{u,v}\frac{p(u,v)}{1+(u-v)^2}$
3.2.1.5 Maximum Probability

(8) $\text{Max probability}=\max_{u,v}\, p(u,v)$
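The five GLCM statistics above can be computed directly from a normalized co-occurrence matrix. The sketch below is a hypothetical Python/NumPy illustration; the single-offset `glcm` builder and the small `eps` guard in the entropy are assumptions, not details from the paper.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Build a normalized gray level co-occurrence matrix for one offset (d1, d2)."""
    di, dj = offset
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < h and 0 <= j2 < w:
                m[img[i, j], img[i2, j2]] += 1
    return m / m.sum()

def glcm_features(p):
    """Energy, entropy, cluster shade, homogeneity, and maximum probability."""
    eps = 1e-12  # guard against log(0)
    u, v = np.indices(p.shape)
    mu_u, mu_v = (u * p).sum(), (v * p).sum()  # the means alpha_u, alpha_v
    return {
        "energy": (p ** 2).sum(),
        "entropy": -(p * np.log(p + eps)).sum(),
        "cluster_shade": (((u - mu_u) + (v - mu_v)) ** 3 * p).sum(),
        "homogeneity": (p / (1 + (u - v) ** 2)).sum(),
        "max_probability": p.max(),
    }
```

For a constant image the GLCM collapses to a single entry, so the energy and maximum probability both equal one, matching the text above.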
3.2.2 Grey Level Run Length Matrix

Texture features are based on this GLRLM, namely, short run emphasis (SRE), long run emphasis (LRE), gray level non-uniformity (GLN), run length non-uniformity (RLN), and run percentage (RP).

3.2.2.1 Short Run Emphasis

(9) $SRE=\frac{1}{n}\sum_{u,v}\frac{p(u,v)}{v^2}$

3.2.2.2 Long Run Emphasis

(10) $LRE=\frac{1}{n}\sum_{u,v} v^2\, p(u,v)$

3.2.2.3 Gray Level Non-uniformity

(11) $GLN=\frac{1}{n}\sum_u\Big(\sum_v p(u,v)\Big)^2$

3.2.2.4 Run Length Non-uniformity

(12) $RLN=\frac{1}{n}\sum_v\Big(\sum_u p(u,v)\Big)^2$

3.2.2.5 Run Percentage

(13) $RP=\frac{n}{\sum_{u,v} v\, p(u,v)}$

Here p(u, v) is the number of runs of gray level u with run length v, and n is the total number of runs.
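A minimal illustration of building a horizontal-run GLRLM and evaluating the five statistics, under the conventions just stated (p(u, v) counts runs of gray level u with length v; n is the total number of runs). This is a hypothetical sketch, not the authors' implementation.

```python
import numpy as np

def glrlm(img, levels):
    """Gray level run length matrix for horizontal runs.
    p[g, r-1] counts runs of gray level g with run length r."""
    max_run = img.shape[1]
    p = np.zeros((levels, max_run))
    for row in img:
        run_val, run_len = row[0], 1
        for x in row[1:]:
            if x == run_val:
                run_len += 1
            else:
                p[run_val, run_len - 1] += 1
                run_val, run_len = x, 1
        p[run_val, run_len - 1] += 1  # close the last run of the row
    return p

def glrlm_features(p):
    """SRE, LRE, GLN, RLN, and RP from a run length matrix."""
    n = p.sum()                # total number of runs
    g, r = np.indices(p.shape)
    lengths = r + 1            # actual run length v for each column
    return {
        "SRE": (p / lengths ** 2).sum() / n,
        "LRE": (lengths ** 2 * p).sum() / n,
        "GLN": (p.sum(axis=1) ** 2).sum() / n,
        "RLN": (p.sum(axis=0) ** 2).sum() / n,
        "RP": n / (lengths * p).sum(),  # runs divided by pixels covered
    }
```

On the toy image [[0,0,1,1],[1,1,1,1]] there are three runs (two of length 2, one of length 4), so RP = 3/8 and LRE = (4+4+16)/3 = 8.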

The two feature sets produced by the preceding steps are

GLCM = [energy, entropy, cluster shade, homogeneity, maximum probability]

GLRLM = [SRE, LRE, GLN, RLN, RP]

These two feature sets are concatenated for further processing in the proposed technique.

(14) Features = [GLCM, GLRLM]

3.2.3 Scale Invariant Feature Transform

SIFT consists of four stages as below.

3.2.3.1 Scale-Space Extrema Detection

The first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussian (DOG) function to identify potential interest points that are invariant to scale and orientation.

3.2.3.2 Key Point Localization

At each candidate location, a detailed model is fit to determine location and scale. Key points are selected based on measures of their stability.

3.2.3.3 Orientation Assignment

One or more orientations are assigned to each key point location based on local image gradient directions. All future operations are performed on image data that have been transformed relative to the assigned orientation, scale, and location for each feature, thereby providing invariance to these transformations.

3.2.3.4 Key Point Descriptor

The local image gradients are measured at the selected scale in the region around each key point. These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination. The first stage used DOG function to identify potential interest points, which were invariant to scale and orientation. DOG was used instead of Gaussian to improve the computation speed.

(15) $D(i,j,\sigma)=\big(G(i,j,k\sigma)-G(i,j,\sigma)\big) * I(i,j)$

(16) $\;\;\;\;=L(i,j,k\sigma)-L(i,j,\sigma)$

where * is the convolution operator, G(i, j, σ) is a variable-scale Gaussian, I(i, j) is the input image, and D(i, j, σ) is the DOG between scales separated by a factor k. In the key point localization step, the low-contrast points are rejected and the edge responses eliminated: the Hessian matrix is used to compute the principal curvatures, and key points whose ratio of principal curvatures exceeds a threshold are discarded. An orientation histogram is formed from the gradient orientations of sample points within a region around the key point in order to obtain the orientation assignment. According to experiments, the best results were achieved with a 4×4 array of histograms with eight orientation bins in each, so the SIFT descriptor used has 4×4×8=128 dimensions.
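Equations (15)–(16) can be illustrated with a small difference-of-Gaussian sketch (hypothetical Python/NumPy; the separable blur, the 3σ kernel radius, and the reflect padding are implementation assumptions, not choices stated in the paper).

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: 1-D convolution along rows, then columns."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kern = np.exp(-x ** 2 / (2 * sigma ** 2))
    kern /= kern.sum()  # normalize so a constant image is unchanged
    blur_1d = lambda a: np.convolve(np.pad(a, radius, mode="reflect"), kern, mode="valid")
    out = np.apply_along_axis(blur_1d, 1, img.astype(float))
    out = np.apply_along_axis(blur_1d, 0, out)
    return out

def difference_of_gaussian(img, sigma, k=np.sqrt(2)):
    """D(i, j, sigma) = L(i, j, k*sigma) - L(i, j, sigma), as in Eqs. (15)-(16)."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)
```

A constant image produces an identically zero DOG response, since both blurred versions equal the original; interest points arise only where the response has local extrema across position and scale.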

After this feature extraction, the classification technique is used to classify the images as hemorrhage-affected or hemorrhage non-affected images.

3.3 Classification Technique used for Classifying the Images

In this classification process, ANFIS technique is used to classify the affected and non-affected retinal hemorrhage fundus images.

3.3.1 Artificial Neural Network with Fuzzy Interface System

ANFIS is a hybrid of advanced neuro-fuzzy techniques: the neural network and fuzzy logic controller methods are integrated to form ANFIS. The input characteristics are mapped to the input membership functions of the fuzzy inference system according to certain rules; the rules map to a set of output characteristics, the output characteristics to output membership functions, and finally the output membership function yields a single value as output. This single-valued output is the decision related to the required output. The fuzzy membership functions are chosen arbitrarily in a fixed number. A fuzzy inference system can be applied only when the rule structure is essentially predefined by the user, based on an interpretation of the characteristics of the variables. In addition, ANFIS uses least squares combined with back propagation to estimate the membership function parameters.

3.3.1.1 Classification of ANFIS

The ANFIS architecture is a five-layered feed-forward neural network. Of the five layers, the first and fourth contain adaptive nodes, while the second, third, and fifth contain fixed nodes. As in both neural networks and fuzzy logic, the inputs are given to the input layer (as input membership functions) and the output is obtained from the output layer (as output membership functions). The structure is shown in Figure 2.

Figure 2: Structure for ANFIS.

Here, the input and output of ANFIS are represented as Zi={z1, z2, …, zn} and O={o1, o2, o3}, respectively. Each rule carries unity weight, and the learning process of ANFIS is carried out on the classified data. In the ANFIS architecture, two fuzzy if-then rules based on a first-order Sugeno model are considered.

Rule 1: If (Z is z1) and (V is v1), then f1 = r1Z + t1V + p1
Rule 2: If (Z is z2) and (V is v2), then f2 = r2Z + t2V + p2

3.3.1.2 Different Layers of ANFIS

The step by step process of ANFIS is explained in the various layers as detailed below.

First Layer

Here, the inputs are assigned and processed in the initial stage of ANFIS, viz. node i. The outputs are the fuzzy membership grades of the inputs, and the ANFIS nodes are represented as follows:

(17) $F_i^1=\delta_{v_i}(V)$

where $v_i$ is the linguistic label linked with this node function, and $F_i^1$ is the membership grade of $v_i$. A bell-shaped function $\delta_{v_i}(V)$ with a maximum value of one and a minimum value of 0 is selected, such as the generalized bell function:

(18) $\delta_k(Y)=\dfrac{1}{1+\left|\dfrac{Y-m_i}{y_i}\right|^{2n_i}}$
Second Layer

In this layer, each node is a circle node tagged “Π” which multiplies the incoming signals and sends the product out. For example,

(19) $F_i^2=\delta_{y_i}(Y_1)\cdot\delta_{L_i}(Y_2)=w_i,\quad i=1,2$

From the above equation, every node output signifies the firing strength of a rule.

Third Layer

In this layer, every node is a circle node marked “N” symbolizing normalization. The i-th node estimates the ratio of the i-th rule firing strength to the summation of all rules firing strengths:

(20) $F_i^3=\dfrac{w_i}{w_1+w_2}=\bar{w}_i,\quad i=1,2$

For simplicity, the outputs of this layer are termed as normalized firing strengths.

Fourth Layer

This stage comprises adaptive square nodes, each embodying a transfer function:

(21) $F_i^4=\bar{w}_i\,(r_i Z+t_i V+p_i)=\bar{w}_i f_i$

where i=1, 2, $\bar{w}_i$ is the output of layer 3, and $(r_i, t_i, p_i)$ is the parameter set. The parameters in this stage are characterized as consequent parameters.

Fifth Layer

In the final layer, a solitary circle node is present which is indicated as “Σ” that estimates the overall output as the summary of all incoming signals, i.e.

(22) $F^5=\sum_i \bar{w}_i f_i$

The final-stage output of ANFIS minimizes the error signal so that inputs are correctly classified into the preferred output. The performance of ANFIS improves as a larger number of training signals is supplied.
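The five-layer forward pass described above can be sketched for the two-rule first-order Sugeno case. This is a hypothetical Python illustration; the generalized bell parameters (a, b, c) play the roles of (y_i, n_i, m_i) in Equation (18), and the specific parameter values are made up for the example.

```python
import numpy as np

def bell(x, a, b, c):
    """Generalized bell membership function with maximum 1 and minimum approaching 0."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(z, v, premise, consequent):
    """Forward pass of a two-rule first-order Sugeno ANFIS.
    premise: per rule, a pair ((a, b, c) for the z-MF, (a, b, c) for the v-MF)
    consequent: per rule, (r, t, p) so that the rule output is f = r*z + t*v + p."""
    # Layer 1: membership grades; Layer 2: firing strengths via product
    w = np.array([bell(z, *zp) * bell(v, *vp) for zp, vp in premise])
    # Layer 3: normalized firing strengths
    wbar = w / w.sum()
    # Layer 4: weighted rule outputs; Layer 5: sum of all incoming signals
    f = np.array([r * z + t * v + p for r, t, p in consequent])
    return float((wbar * f).sum())
```

With rules centered symmetrically around the input, the normalized firing strengths are equal and the output is the average of the two rule consequents.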

After feature extraction, the ANFIS classification algorithm is applied to classify the images into hemorrhage-affected and non-affected classes. The affected hemorrhage images are then taken for the segmentation process.

3.4 Segmentation Technique

The segmentation technique is used for segmenting the images using threshold optimization with PSO algorithm.

3.4.1 Threshold Optimization Process

In this segmentation method, the hemorrhage-affected images are taken, and from each image the green band is extracted for histogram equalization. Histogram equalization requires minimum and maximum limits; the optimization technique is used to determine these limit constraints, and PSO is mainly employed for this optimization.

3.4.1.1 Finding Optimal Histogram Equalization Limits

This process is carried out to attain the segmented image, and for that purpose the optimization technique is applied. In the threshold optimization, the maximum accuracy is attained with the PSO algorithm compared to the other two techniques (GA and no optimization).

PSO is designed after the social behavior of birds in a flock. In PSO, each particle flies through the search space with a velocity modified by its own flying memory and the flying experience of its companions. Each particle has an objective value determined by a fitness function. The overall procedure is pictured in the flowchart in Figure 3.

Figure 3: Flowchart for Particle Swarm Optimization Algorithm.

Steps Involved in Particle Swarm Optimization

The diverse stages involved in the PSO are clearly spelt out below.

Initialization

At first, the particles are initialized randomly with position and velocity. Here, the particles characterize the extracted features.

Fitness Function

In respect of each and every randomly created particle, the optimization fitness functions are ascertained.

(23)fitness=rand index

Gbest and Pbest Initialization

At the beginning, the fitness value is computed for each particle, and the best of these values initializes both the Gbest and the Pbest. In subsequent iterations, each particle's best fitness value so far is kept as its Pbest, and the overall best fitness value is chosen as the Gbest.

Velocity Computation

The velocity and the position of each particle are changed by means of Equations (24) and (25).

(24) $v_i(t+1)=v_i(t)+b_1\,\text{rand}\,\big(P_{best}(t)-r_i(t)\big)+b_2\,\text{rand}\,\big(G_{best}-r_i(t)\big)$

(25) $r_i(t+1)=r_i(t)+v_i(t+1)$

where $v_i$ is the particle velocity, $r_i$ is the current particle position, rand is a random number in (0, 1), and $b_1$, $b_2$ are learning factors. Usually, $b_1=b_2=2$.

The procedure is continued until the achievement of the solution with superior fitness value.
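The initialization, fitness, Pbest/Gbest, and velocity steps above can be combined into a compact minimizer. This is a hypothetical sketch using b1=b2=2 as stated; the bounds clipping, fixed iteration count, and seeding are assumptions rather than details from the paper.

```python
import numpy as np

def pso(fitness, dim, bounds, n_particles=20, iters=50, b1=2.0, b2=2.0, seed=0):
    """Minimal PSO minimizer following Eqs. (24)-(25).
    Returns the best position found and its fitness value."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # initialization: random positions, zero velocities
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_fit = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[pbest_fit.argmin()].copy()  # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        # velocity update, Eq. (24), then position update, Eq. (25)
        v = v + b1 * r1 * (pbest - x) + b2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fit = np.array([fitness(p) for p in x])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        g = pbest[pbest_fit.argmin()].copy()
    return g, float(pbest_fit.min())
```

For threshold optimization, `fitness` would score a candidate pair of histogram limits by the quality of the resulting segmentation; here any objective over a bounded search space can be plugged in.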

The segmentation process is shown in Figure 4.

Figure 4: Segmentation Process for Affected Hemorrhage Image.(A) Input image, (B) Green image, (C) Histogram image, (D) Binary image, (E) Not operation image, (F) Blood vessel image, (G) Remove BV image, (H) Dilation image and (I) Remove unwanted region.

After the optimization process, the minimum and maximum limits are constrained and the binary values are obtained; the NOT operation is then applied to the image, so that the affected part and the blood vessels appear in the image. Next, the blood vessels are removed from the image, and the morphological dilation operation is performed to fill the hemorrhage-affected part. Finally, the MATLAB regionprops function is used to remove unwanted regions in the image. Based on these methods, the hemorrhage-affected part is segmented properly, and the accuracy, sensitivity, and specificity values are predicted.
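The green-band, thresholding, NOT, and dilation steps can be illustrated as follows (a hypothetical Python/NumPy sketch; the limits `t_low`/`t_high` stand in for the PSO-optimized histogram limits, and the plus-shaped dilation element and 0.5 cut-off are assumptions).

```python
import numpy as np

def segment_hemorrhage(rgb, t_low, t_high):
    """Green band -> min-max contrast stretch between the optimized
    limits -> binarize -> NOT, so dark hemorrhages become foreground."""
    green = rgb[..., 1].astype(float)
    stretched = np.clip((green - t_low) / max(t_high - t_low, 1e-9), 0, 1)
    binary = stretched > 0.5
    return ~binary  # NOT operation: dark regions are the candidates

def dilate(mask, iters=1):
    """Binary dilation with a plus-shaped (4-connected) structuring element."""
    m = mask.copy()
    for _ in range(iters):
        p = np.pad(m, 1)  # pad with False so borders stay valid
        m = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] |
             p[1:-1, 2:] | p[1:-1, 1:-1])
    return m
```

In a full pipeline, the detected vessel mask would be subtracted before dilation, and small spurious components would then be discarded (the role regionprops plays in the MATLAB implementation).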

4 Result and Discussion

In this result section the hemorrhage-affected retinal fundus image is segmented using various techniques. Firstly, input images have been taken and the images filtered using adaptive median filter technique; after filtering the images, feature extraction methods are used, namely, GLCM, GLRLM, and SIFT. After feature extraction, the classification process is considered, namely, ANFIS classification which classifies the image as affected or non-affected hemorrhage image. Affected images are used for segmentation process in this threshold optimization technique, with PSO algorithm performed to find the accuracy, sensitivity, specificity, true positive (TP), true negative (TN), false positive (FP), false negative (FN), positive predictive value (PPV), negative predicted value (NPV), false positive rate (FPR), and false discovery rate (FDR).

4.1 ANFIS Classification Process

In Table 1 the comparison of performance maintenance for ANFIS classification is shown. In this classification process the accuracy value is 90%, specificity value is 89%, and sensitivity value is 91%. The sensitivity value is high compared with accuracy and specificity performance.

Table 1:

Comparison of Performance Maintenance for ANFIS Classification.

Accuracy (%)   Specificity (%)   Sensitivity (%)
90             89                91

4.2 Table for Proposed Methods

Table 2 shows the proposed value of retina hemorrhage images. Input images, green band images, segmented images, TP, TN, FP, FN, PPV, NPV, FPR, and FDR are predicted.

Table 2:

Proposed Values of Retinal Hemorrhage Images.

Image   TP     TN         FP      FN    PPV    NPV    FPR    FDR
a       1059   431,989    9124    196   0.103  0.999  0.020  0.896
b       2767   1,088,932  28,890  1949  0.087  0.998  0.025  0.912
c       2218   1,096,058  23,986  276   0.084  0.999  0.021  0.915
d       564    438,440    2787    577   0.168  0.998  0.006  0.831

For input image (a) the TP value is 1059, TN value is 431,989, FP value is 9124, FN value is 196, PPV value is 0.103, NPV value is 0.999, FPR value is 0.020, and FDR value is 0.896. For image (b) the TP value is 2767, TN value is 1,088,932, FP value is 28,890, FN value is 1949, PPV value is 0.087, NPV value is 0.998, FPR value is 0.025, and FDR value is 0.912. For image (c) the TP value is 2218, TN value is 1,096,058, FP value is 23,986, FN value is 276, PPV value is 0.084, NPV value is 0.999, FPR value is 0.021, and FDR value is 0.915. For image (d) the TP value is 564, TN value is 438,440, FP value is 2787, FN value is 577, PPV value is 0.168, NPV value is 0.998, FPR value is 0.006, and FDR value is 0.831.

4.3 Performance Graph for Three Optimization Algorithms

The performance graph for three optimization algorithms is shown in Figure 5. The sensitivity, specificity, and accuracy values for four images are predicted.

Figure 5: Performance Graph for Three Optimization Algorithms.

Three algorithms are evaluated, namely PSO, GA, and the without-optimization technique. The overall conclusion from this graph is that the sensitivity shows the better result compared with the other two measures, accuracy and specificity.

4.4 Accuracy Graph for Three Various Techniques

In Figure 6 the accuracy values for the three techniques, namely PSO, GA, and without optimization, are shown. In this accuracy graph, for image 1 the PSO value is 0.978, the GA value is 0.995, and the without-optimization value is 0.993. For image 2 the PSO value is 0.972, the GA value is 0.984, and the without-optimization value is 0.977.

Figure 6: Accuracy Graph for Three Optimization Algorithms.

For image 3 the PSO value is 0.978, the GA value is 0.984, and the without-optimization value is 0.992. Finally, for image 4 the PSO value is 0.992, the GA value is 0.989, and the without-optimization value is 0.995. The PSO technique achieved better results.

4.5 Specificity Graph for Three Optimization Algorithms

Figure 7 shows the specificity values for the three techniques, namely PSO, GA, and without optimization. In this specificity graph, for image 1 the PSO value is 0.979, the GA value is 0.997, and the without-optimization value is 0.995. For image 2 the PSO value is 0.974, the GA value is 0.985, and the without-optimization value is 0.981.

Figure 7: Specificity Graph for Three Optimization Algorithms.

For image 3 the specificity is 0.978 with PSO, 0.984 with GA, and 0.994 without optimization. Finally, for image 4 it is 0.993 with PSO, 0.991 with GA, and 0.997 without optimization. Specificity is likewise high and closely matched across the three configurations.

4.6 Sensitivity Graph for Three Optimization Algorithms

Figure 8 shows the sensitivity values for the three techniques, namely, PSO, GA, and no optimization. For image 1 the sensitivity is 0.843 with PSO, 0.266 with GA, and 0.431 without optimization. For image 2 it is 0.586 with PSO, 0.794 with GA, and 0 without optimization.

Figure 8: Sensitivity Graph for Three Optimization Algorithms.

For image 3 the sensitivity is 0.889 with PSO, 0.751 with GA, and 0.051 without optimization. Finally, for image 4 it is 0.494 with PSO and 0 for both GA and no optimization. PSO gives the highest sensitivity on three of the four images, and the sensitivity gain from optimization is far larger than any difference observed in accuracy or specificity.
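The threshold optimization compared above can be sketched with a minimal one-dimensional particle swarm optimizer. This is an illustrative sketch, not the authors' implementation: the function name, swarm parameters, and the toy fitness are our own assumptions; in the paper the fitness would score the segmentation produced by a candidate threshold, e.g. by the sensitivity it yields.

```python
import random

def pso_threshold(fitness, lo=0.0, hi=1.0, n_particles=20, iters=50,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `fitness` over a scalar threshold in [lo, hi] with a basic PSO."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                   # each particle's best position
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=pbest_fit.__getitem__)
    gbest, gbest_fit = pbest[g], pbest_fit[g]        # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            # inertia + cognitive pull (own best) + social pull (swarm best)
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i], f
    return gbest, gbest_fit

# Toy fitness with a known optimum at t = 0.42; a real fitness would evaluate
# the hemorrhage segmentation obtained at threshold t.
best_t, best_f = pso_threshold(lambda t: -(t - 0.42) ** 2)
print(round(best_t, 3))
```

The inertia weight `w` and the acceleration coefficients `c1`, `c2` trade off exploration against convergence speed; the values used here are common textbook defaults.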

4.7 GUI for Segment Retinal Hemorrhage Detection Images

In Figure 9 the GUI of retinal hemorrhage detection using the ANFIS and PSO algorithms is shown. The input image is loaded and preprocessed, the feature extraction yields a vector value of 72,416.419, classification then marks the image as affected, and finally the segmented image is displayed. For this image the sensitivity is 0.877, the specificity 0.999, the accuracy 0.999, the PPV 0.904, the NPV 0.999, the FPR 0.00026, and the FDR 0.0051.

Figure 9: GUI for Segment Retinal Hemorrhage Detection Images.

5 Conclusion

The main intention of this work is to model the image structure of retinal fundus images and to segment hemorrhage regions. The retinal images are first preprocessed with an adaptive median filter; features are then extracted from the filtered images, and the extracted features are classified with ANFIS into affected and non-affected images. The affected images are passed to the segmentation stage, where the segmentation threshold is optimized with the PSO technique, which performed better than the other techniques considered. Accuracy, specificity, and sensitivity are analyzed for the segmentation results, and the proposed optimization yields its largest gains in sensitivity. In the future, other techniques and optimization methods will be explored for better performance.


Corresponding author: Lawrence Livingston Godlin Atlas, Assistant Professor, Computer Science and Information Technology, Maria College of Engineering and Technology, Attor, India

Bibliography

[1] M. Abràmoff, M. Garvin and M. Sonka, Retinal imaging and image analysis, J. Biomed. Eng. 3 (2010), 169–208. doi:10.1109/RBME.2010.2084567.

[2] M. Aiswarya Raj and S. A. Mani, Performance evaluation of techniques for retinal abnormality detection, J. IEEE Inform. Embed. Commun. Syst. (2015), 1–6. doi:10.1109/ICIIECS.2015.7192952.

[3] U. Akrama, S. Khalid and S. Khan, Identification and classification of microaneurysms for early detection of diabetic retinopathy, J. Pattern Recogn. 46 (2013), 107–116. doi:10.1016/j.patcog.2012.07.002.

[4] U. Akrama, S. Khalid, A. Tariq, S. Khan and F. Azam, Detection and classification of retinal lesions for grading of diabetic retinopathy, J. Comput. Biol. Med. 45 (2014), 161–171. doi:10.1016/j.compbiomed.2013.11.014.

[5] P. Bartczak, P. Falt and M. Hauta-Kasari, Applicability of LED-based light sources for diabetic retinopathy detection in retinal imaging, J. IEEE (2016), 355–360. doi:10.1109/CBMS.2016.35.

[6] K. J. Chang, C.-K. Cheng and C.-H. Peng, Clinical characteristics and visual outcome of macular hemorrhage in pathological myopia with or without choroidal neovascularization, J. Ophthalmol. 6 (2016), 136–140. doi:10.1016/j.tjo.2016.05.007.

[7] N. Cuenca, L. Fernandez-Sanchez, L. Campello, V. Maneu, P. De la Villa, P. Lax and I. Pinilla, Cellular responses following retinal injuries and therapeutic approaches for neurodegenerative diseases, J. Prog. Retin. Eye Res. 43 (2014), 17–75. doi:10.1016/j.preteyeres.2014.07.001.

[8] O. Faust, R. Acharya, E. Y. K. Ng, K.-H. Ng and J. S. Suri, Algorithms for the automated detection of diabetic retinopathy using digital fundus images: a review, J. Med. Syst. 36 (2012), 145–157. doi:10.1007/s10916-010-9454-7.

[9] A. Fleming, K. A. Goatman, S. Philip, G. Williams, G. Prescott, G. Scotland, P. McNamee, G. Leese, W. N. Wykes, P. Sharp and J. Olson, The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy, J. Clin. Sci. 94 (2010), 706–711. doi:10.1136/bjo.2008.149807.

[10] O. Galvin, A. Srivastava, O. Carroll, R. Kulkarni, S. Dykes, S. Vickers, K. Dickinson, A. Reynolds, C. Kilty, G. Redmond, R. Jones, S. Cheetham, A. Pandit and B. Kennedy, A sustained release formulation of novel quininib-hyaluronan microneedles inhibits angiogenesis and retinal vascular permeability in vivo, J. Control. Release 233 (2016), 198–207. doi:10.1016/j.jconrel.2016.04.004.

[11] Z. Ghassabia, J. Shanbehzadeh and A. Mohammadzadeh, A structure-based region detector for high-resolution retinal fundus image registration, J. Biomed. Signal Proc. Control 23 (2016), 52–61. doi:10.1016/j.bspc.2015.08.005.

[12] J. Kim, K. M. Kim, C.-S. Kim, E. Sohn, Y. M. Lee, K. Jo and J. S. Kim, Puerarin inhibits the retinal pericyte apoptosis induced by advanced glycation end products in vitro and in vivo by inhibiting NADPH oxidase-related oxidative stress, J. Free Radic. Biol. Med. 53 (2012), 357–365. doi:10.1016/j.freeradbiomed.2012.04.030.

[13] I. Lazar and A. Hajdu, Retinal micro aneurysm detection through local rotating cross-section profile analysis, J. Med. Imag. 38 (2013), 1–8.

[14] I. Lazar and A. Hajdu, Segmentation of retinal vessels by means of directional response vector similarity and region growing, J. Comput. Biol. Med. 66 (2015), 209–221. doi:10.1016/j.compbiomed.2015.09.008.

[15] S. Lu and J. H. Lim, Automatic optic disc detection from retinal images by a line operator, J. Biomed. Eng. 58 (2011), 88–94. doi:10.1109/TBME.2010.2086455.

[16] M. R. K. Mookiah, R. Acharya, R. J. Martis, C. K. Chua, C. M. Lim, E. Y. K. Ng and A. Laude, Evolutionary algorithm based classifier parameter tuning for automatic diabetic retinopathy grading: a hybrid feature extraction approach, J. Knowl. Based Syst. 39 (2013), 9–22. doi:10.1016/j.knosys.2012.09.008.

[17] M. R. K. Mookiah, R. Acharya, H. Fujita, J. H. Tan, C. K. Chua, S. Bhandary, A. Laude and L. Tong, Application of different imaging modalities for diagnosis of diabetic macular edema: a review, J. Comput. Biol. Med. 66 (2015), 295–315. doi:10.1016/j.compbiomed.2015.09.012.

[18] R. Rajagopal and R. Apte, Seeing through thick and through thin: retinal manifestations of thrombophilic and hyperviscosity syndromes, J. Survey Ophthalmol. 61 (2016), 236–237. doi:10.1016/j.survophthal.2015.10.006.

[19] A. Sopharak, B. Uyyanonvara and S. Barman, Automatic microaneurysm detection from non-dilated diabetic retinopathy retinal images using mathematical morphology methods, J. Comput. Sci. 38 (2011), 1–7.

[20] M. Umesawa, A. Kitamura, M. Kiyama, T. Okada, H. Imano, T. Ohira, K. Yamagishi, I. Saito and H. Iso, Relationship between HbA1c and risk of retinal hemorrhage in the Japanese general population: The Circulatory Risk in Communities Study, J. Diabetes Complications 30 (2016), 834–838. doi:10.1016/j.jdiacomp.2016.03.023.

[21] R. A. Welikala, M. M. Fraz, P. J. Foster, P. H. Whincup, A. R. Rudnicka, C. G. Owen, D. P. Strachan and S. A. Barman; UK Biobank Eye and Vision Consortium, Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies, J. Comput. Biol. Med. 71 (2016), 67–76. doi:10.1016/j.compbiomed.2016.01.027.

[22] B. Zhang, X. Wu, J. You, Q. Li and F. Karray, Detection of microaneurysms using multi-scale correlation coefficients, J. Pattern Recogn. 43 (2010), 2237–2248. doi:10.1016/j.patcog.2009.12.017.

Received: 2016-12-26
Published Online: 2017-05-12
Published in Print: 2018-10-25

©2018 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
