A Genetic Algorithm Approach for Discovering Tuned Fuzzy Classification Rules with Intra- and Inter-Class Exceptions
Article Open Access


  • Renu Bala

    Renu Bala is a PhD scholar at the Department of Computer Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar, India. She finished her MCA from Ch. Devi Lal University, Sirsa, India. She qualified for the UGC-NET Junior Research Fellowship in 2012. Her research interests include application of evolutionary algorithms in the domain of rule mining and exception discovery.

    and Saroj Ratnoo

    Saroj Ratnoo is a Professor at the Department of Computer Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar, India. She finished her MSc in computing science from the University of London, UK. She completed her PhD from Jawaharlal Nehru University, New Delhi, India. She has 18 years of teaching and research experience. Her research interests include application of evolutionary and swarm intelligence algorithms in the domain of feature selection, rule mining, and exception discovery.

Published/Copyright: January 22, 2016

Abstract

Fuzzy rule-based systems (FRBSs) are proficient in dealing with cognitive uncertainties like vagueness and ambiguity inherent in real-world decision-making situations. Fuzzy classification rules (FCRs) based on fuzzy logic provide a framework for flexible, human-like reasoning involving linguistic variables. Appropriate membership functions (MFs) and a suitable number of linguistic terms – according to the actual distribution of data – are useful to strengthen the knowledge base (rule base [RB] + data base [DB]) of FRBSs. An RB is expected to be accurate and interpretable, and a DB must contain appropriate fuzzy constructs (type of MFs, number of linguistic terms, and positioning of parameters of MFs) for the success of any FRBS. Moreover, it would be fascinating to know how a system behaves in some rare/exceptional circumstances and what action ought to be taken in situations where generalized rules cease to work. In this article, we propose a three-phased approach for the discovery of FCRs augmented with intra- and inter-class exceptions. A pre-processing algorithm is suggested to tune the DB in terms of the MFs and number of linguistic terms for each attribute of a data set in the first phase. The second phase discovers FCRs employing a genetic algorithm approach. Subsequently, intra- and inter-class exceptions are incorporated into the rules in the third phase. The proposed approach is illustrated on an example data set and further validated on six UCI machine learning repository data sets. The results show that the approach is able to discover more accurate, interpretable, and interesting rules. The rules with intra-class exceptions tell us about the unique objects of a category, and rules with inter-class exceptions enable us to make the right decision in exceptional circumstances.

1 Introduction

Fuzzy rule-based systems (FRBSs) are linguistic rule-based systems that have been conveniently applied to classification problems [10, 21]. FRBSs are known for their competence in dealing with noisy, imprecise, or incomplete information, which is often present in any real-world application. The knowledge base (KB) of an FRBS consists of a rule base (RB) and a data base (DB). The RB contains fuzzy classification rules (FCRs), and the DB incorporates the linguistic labels and definitions of the associated fuzzy sets. The automated learning of the RB is responsible for discovering a set of FCRs, and the interpretability or semantic power of the DB can be enhanced by tuning it with respect to the type of membership functions (MFs), their parameters, and the number of fuzzy labels.

Accuracy and interpretability are the principal criteria for evaluating the performance of FCRs. Accuracy is the capability of FCRs to closely represent real-world knowledge and is measured as the ratio of correctly classified instances to the total number of instances. Interpretability of a rule is its capacity to express the behavior of a real system in a comprehensible way. This is related to several factors like the model structure, the number of fuzzy rules, the number of linguistic labels, and the shape of the related fuzzy sets [28]. It is ideal to satisfy both criteria to a high degree; however, because accuracy and interpretability are conflicting criteria, this is generally not possible. That is why researchers focus on obtaining the best trade-off between accuracy and interpretability [9, 15, 22, 34].

Even an expert of a problem domain cannot exactly identify the most appropriate MFs and number of linguistic terms to reflect a system’s true behavior. Knowing the appropriate fuzzy sets and fuzzy labels is a determining factor in automating fuzzy rule extraction processes. The accuracy and interpretability of the discovered rules are adversely affected by inflexible, pre-fixed choices of fuzzy set types and linguistic terms, which impose hard restrictions on fuzzy rules [28]. A way to get the best trade-off between accuracy and interpretability is to tune the fuzzy sets and number of linguistic labels (DB) in advance according to the distribution of data in the training data set available for a system.

Accuracy and interpretability are certainly important, but they are not enough to measure the real interestingness of an RB. A rule is said to be interesting if it represents knowledge that is not only previously unknown to the users but also contradicts their original beliefs [31, 32]. Although interestingness is a subjective issue, unexpected and rare knowledge is usually considered interesting. The discovery of exceptions is interesting because exceptions are rare conditions that contradict prior knowledge about the domain, add curiosity, and improve the quality of decision making in those rare circumstances where default rules cease to work. Exceptions have low support; therefore, it is not possible to discover them using typical rule discovery methods, as these are geared toward the generality of discovered knowledge, whereas exceptions make rules more specific.

This article proposes a three-phased design to discover fine-tuned FCRs augmented with two kinds of exceptions, namely intra- and inter-class exceptions [39]. Intra-class exceptions identify the unique and interesting features of an object within the class to which it belongs. For example, within the class of birds, there could be unique birds that have exceptional features, e.g. penguin, kiwi, and ostrich are rare non-flying birds. Inter-class exceptions are the rare features that change the class of an object. For instance, all animals that can fly are birds, whereas an animal like the bat falls into the category of mammals that can fly as well as give milk. The proposed system discovers rules that have the flexibility of fuzzy logic and perform well on the metrics of accuracy, interpretability, and interestingness. The first phase is a pre-processing step in which the type of fuzzy sets and the number of linguistic labels are tuned by taking into account the mapping between the predicting attributes and the class variable. An overall appropriate DB that reflects the true mapping of the attribute values and the class is selected from a number of combinations of fuzzy sets and numbers of fuzzy labels. Tuning the parameters of the MFs is another dimension of this work, which can be addressed in the future. The second phase employs a genetic algorithm (GA) to evolve the FCRs. The third phase appends the exceptions to the fuzzy rules.

2 Fuzzy Classification Rules

FRBSs need to learn and update their RB (FCRs) to automate and predict the behavior of a system. An FCR is usually represented in the following form:

If (x1 is A1k) ∧ (x2 is A2k) ∧ … ∧ (xn is Ank), then Dm.

This rule takes fuzzy values instead of crisp values for the antecedent part, the consequent part, or both. Here, x = (x1, x2, …, xn) is an n-dimensional attribute vector, and A1k, A2k, …, Ank are the fuzzy sets on the unit interval [0, 1] that comprise the antecedent part. The index k varies for an attribute xi over its number of fuzzy linguistic labels (e.g. small, small–medium, medium, medium–large, and large); attributes may have the same or different values of k. Dm is the class label and comprises the consequent part of the fuzzy rule. The following rule is an example of an FCR:

If (Math_score is high) ∧ (English_score is high), then Good_student.

Such a fuzzy rule is important for a human expert to comprehend the classifier’s decision, for example, in medical diagnosis or safety-critical applications. Fuzzy logic allows the use of linguistic interpretations in a mathematical framework and provides a natural means for constructing fuzzy rule-based classification systems that are closer to the human decision-making process [42]. The above rule is definitely a better and more natural representation of the way human beings comprehend and reason than a crisp rule.

3 Fuzzy Rule Performance Measures

FRBSs are evaluated on the basis of accuracy, interpretability, and interestingness. The measure of accuracy is straightforward, and most of the initial research in FRBSs is focused on improving the accuracy of the models. Interpretability is subjective, rather difficult to define, and recently, the interest of researchers has grown manifold to build interpretable models. According to the double-axis taxonomy proposed in [9], complexity and semantics are two important criteria in measuring the interpretability of an FRBS. Complexity-based interpretability describes the complexity of the obtained RB in terms of the number of rules and the number of conditions per rule. An RB is considered more interpretable if it contains fewer number of rules with fewer number of conditions per rule provided that the performance of RB is preserved to a satisfactory level [23]. Semantic interpretability is related to the DB, and it involves adjusting the shape and parameters of MFs and number of linguistic terms. The use of inappropriately complex/simple fuzzy partitions and large/small number of linguistic labels may deteriorate semantic interpretability. To obtain interpretable linguistic models, the MFs of the associated fuzzy sets must be defined to have the following three properties: (1) The MFs should be complete to cover the universe of discourse of a variable so that every data point belongs to at least one of the fuzzy sets; (2) the MFs should represent a linguistic term with a clear semantic meaning and should be easily distinguishable from the remaining MFs of the corresponding variable; (3) the sum of membership values over a universe of discourse should be near to 1 to guarantee a uniform distribution of the meanings among the elements [28].

Further, the issue of interestingness has also drawn a significant attention from researchers. An accurate rule sometimes represents very obvious and non-interesting facts. Therefore, the rule should be discovered to cover previously unknown facts and exceptional behavior of a system.

4 Rules with Exceptions

Exceptions are often ignored as noise in classical rule mining algorithms. Several works dealing with the discovery of exceptions exist in the literature. A censored production rule (CPR), proposed by Michalski and Winston [22], is a common rule augmented with censor/exception conditions. A CPR can be written in the form

If (x1 op v1j) ∧ (x2 op v2j) ∧ … ∧ (xn op vnj), then Dm, unless (x3 op v3j) ∨ (x4 op v4j); γ1:γ2
P = (x1 op v1j) ∧ (x2 op v2j) ∧ … ∧ (xn op vnj)
C = (x3 op v3j) ∨ (x4 op v4j).

In this rule, P, the premise part, is a conjunction of attribute–value pairs. C may contain a single exception or a disjunction of exceptions. The attributes present in the premise and exception parts must be mutually exclusive. The “if P then D” part of a CPR holds frequently, and the censor part C holds rarely. Therefore, the “unless” operator behaves like an XOR operator between the decision and the exception, but with an asymmetry: it is not commutative. To capture this asymmetry precisely, two parameters associated with this kind of rule structure are defined below:

γ1 = Prob[D|P] = Ω(P ∧ D) / Ω(P) = |P ∧ D| / |P|,
γ2 = Prob[C|P] = Ω(P ∧ C) / Ω(P) = |P ∧ C| / |P|.

In the above equations, Ω(P ∧ D) is the subset of events for which both P and D hold, Ω(P ∧ C) is the subset of events for which both P and C hold, and Ω(P) is the subset of events for which P holds. The main constraints on rules with exceptions are that γ1 + γ2 ≤ 1 and γ1 >> γ2. The following rule is an example of a CPR:

If (X = bird), then fly, unless (X = kiwi ∨ X = ostrich).

Another popular and renowned formation for discovery of exceptions has been given in [34] in the form of a rule pair as

If Pμ, then D (strong rule).
If Pμ ∧ C, then D′ (exception).

In the above rule pair, Pμ and C are conjunctions of attribute values, and D and D′ are decision attribute values. The strong rule is a rule of high generality having high recall and precision. The exceptional feature C affixed to Pμ is responsible for changing the class from D to D′. The second rule in the pair covers a few objects in the data set and has low support and high confidence. A rule triplet – consisting of a strong rule, a reference rule, and an exception – has also been suggested in [37].

5 Related Work

A lot of work has been done in an attempt to achieve a satisfactory trade-off between accuracy and interpretability. Ishibuchi and colleagues [9, 22, 34] have proposed a GA for the rule selection problem that tries to maximize accuracy and minimize the number of rules. In addition, multi-objective evolutionary algorithms have also been presented for rule learning problems. These approaches discover non-dominated Pareto-optimal solutions by considering accuracy and interpretability (in terms of the number of rules and the number of antecedent conditions per rule) as the optimization criteria [15, 16].

FCRs have demonstrated their ability in a wide spectrum of applications in the domain of control [24], modeling [25], and data mining problems [17, 20]. Therefore, there have been many attempts to discover FCRs from real valued data sets [5, 8, 11, 14, 19, 18, 21]. Most of these approaches proposed in the literature for automated discovery of FCRs use pre-defined fuzzy MFs and number of linguistic labels [6, 7, 13, 38]. These may prevent the fuzzy linguistic model from achieving the desired trade-off between interpretability and accuracy.

Although interest has grown in tuning the MFs to help discover highly accurate fuzzy models without much compromise on the interpretability front, we have come across a limited number of attempts in this domain. Mendes et al. [21] have suggested a co-evolutionary approach to simultaneously evolve a population of fuzzy rules and MFs to get, as the final outcome, a fuzzy rule set and its associated membership definitions that are well adapted to each other. The limitation of this work is that it pre-defines a trapezoidal MF with three linguistic labels for fuzzification of all the attributes of a data set and only adjusts the parameters of the pre-defined MF through mutation. Another co-evolutionary approach to simultaneously evolve the rules and MFs is suggested in [26, 27]. The limitation of these co-evolutionary approaches is that they must work with a really large and complex search space, have long running times for bulky data sets (in terms of the number of instances or attributes), and are more prone to premature convergence. Proposals for genetic tuning of linguistic systems as a post-processing scheme have been suggested in [1, 2, 28, 29].

Silberschatz and Tuzhilin [31, 32] have addressed the issue of interestingness. They have considered the exceptions as interesting pieces of knowledge that challenge the generally held beliefs. Suzuki and colleagues [12, 33, 35, 36] have done extensive work in the domain of exception discovery. They have classified exceptions in several categories and discovered exceptions in the form of rule pairs and rule triplets. They have addressed the task of dependence modeling, and their algorithm discovered a large number of exceptions, making the discovered knowledge unsuitable for human insight and analysis. A classification algorithm based on evolutionary approach for discovering comprehensible rules with exceptions in the form of CPRs is presented in [3, 30]. A post-processing scheme has been suggested to organize and summarize the rules in the form of “rule+exceptions” framework [40]. A genetic programming-based intelligent miner has also been proposed to mine rules with fuzzy hierarchies with exceptions at every level [4]. Vashishtha et al. have discovered classification rules with intra- and inter-class exceptions. However, their algorithm is designed to work with data sets containing discretized/nominal attributes only [39].

In this article, we extend the idea given in [39] to discover an interesting linguistic RB in the form of FCRs with intra- and inter-class exceptions. We also propose tuning the DB in a pre-processing phase to obtain a more accurate and interpretable RB without having to deal with large search spaces, as in the case of the co-evolutionary approaches.

6 The Proposed Fuzzy Tuning and Evolutionary Design

This section describes the proposed design for discovery of FCRs with intra- and inter-class exceptions. The proposed algorithm consists of three phases. The first phase of the algorithm is a pre-processing phase that tunes MFs and number of linguistic terms for the fuzzification process of attributes. The second phase discovers a fuzzy rule set, and in the third phase, intra- and inter-class exceptions are appended to the rules. We have employed a crowding GA to discover FCRs in the intermediate phase of the design. This strategy maintains diversity in the population and avoids the convergence of a traditional GA to the single best solution [39]. The detailed descriptions of each phase are given next.

6.1 Phase 1: Pre-Processing

6.1.1 Normalization

Different data sets have data falling into dissimilar ranges. To simplify the fuzzification process, we apply normalization to place the data in the same range (0–1). The following formula is employed for normalization:

A′ = (A − min(A)) / (max(A) − min(A)).
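As a minimal illustration, min–max normalization can be sketched in Python (the function name is ours):

```python
def normalize(values):
    """Min-max normalization of a numeric column into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```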

6.1.2 Fuzzy Tuning and Fuzzification

Several authors have used pre-defined MFs (i.e. triangular or trapezoidal) and a fixed number of linguistic terms (i.e. small, medium, large in the case of three fuzzy partitions) for fuzzifying the data sets. As a single pre-defined MF and a fixed number of linguistic terms may not produce suitable fuzzy partitions for classification, such approaches compromise the predictive accuracy of the rule set discovered in the subsequent phase. Therefore, we adjust the type and number of fuzzy partitions by taking into account the data distribution of attributes with respect to the class attribute. We consider two fuzzy MFs (triangular and trapezoidal), and the number of linguistic terms can vary from two to five. Because the search space is not large, an exhaustive search algorithm, given in Figure 1, is used to find the best combination of MF and number of linguistic terms for each attribute of a data set. The exhaustive search algorithm computes the fuzzy gain ratio for all eight combinations (two MF types × four term counts) of an attribute and returns the combination for which the fuzzy gain ratio is maximal. Finally, the fuzzification process is carried out using different combinations of MFs and numbers of linguistic terms for different attributes.

Figure 1: Fuzzy Tuning and Fuzzification.

The formulas applied for computing membership degrees using triangular and trapezoidal fuzzy sets are given as

f(x; a, b, c) = max(min((x − a)/(b − a), (c − x)/(c − b)), 0),
f(x; p, q, r, s) = max(min((x − p)/(q − p), 1, (s − x)/(s − r)), 0).

Parameters a and c locate the feet of the triangle and b locates the peak for triangular function, whereas parameters p and s locate the feet of the trapezoid and parameters q and r locate the shoulders.
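The two MFs can be written directly in Python (a sketch; it assumes a < b < c and p < q < r < s so that the denominators are non-zero, and the function names are ours):

```python
def triangular(x, a, b, c):
    """Triangular MF: feet at a and c, peak at b (assumes a < b < c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal(x, p, q, r, s):
    """Trapezoidal MF: feet at p and s, shoulders at q and r
    (assumes p < q < r < s)."""
    return max(min((x - p) / (q - p), 1.0, (s - x) / (s - r)), 0.0)
```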

The gain ratio for Ak, the kth attribute is computed as

GR(Ak) = IG(Ak) / SplitInfo(Ak),
SplitInfo(Ak) = −Σj=1..v (|Ajk| / |Ak|) log2(|Ajk| / |Ak|).

Here, |Ajk| is the cardinality of the jth fuzzy set of attribute Ak, computed with respect to the value of the α-cut parameter (0.5 ≤ α ≤ 1), and |Ak| is the total cardinality over all fuzzy sets associated with attribute Ak.

IG(Ak) = E − E(Ak),
E(Ak) = Σj=1..v (|Ajk| / |Ak|) × H(Ajk),
H(Ajk) = −ΣC=1..m pj(Ajk) log2 pj(Ajk),

where pj(Ajk) is the relative frequency of the jth fuzzy subset of attribute Ak with respect to class C (1 ≤ C ≤ m), defined as

pj(Ajk) = |Ajk ∧ C| / |Ajk|.
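Putting the formulas together, the fuzzy gain ratio of one attribute for a given number of terms can be sketched as below. This is an illustrative reconstruction, not the paper's exact procedure: it assumes uniformly spaced triangular partitions on [0, 1] and α = 0.5, whereas the algorithm of Figure 1 also tries trapezoidal MFs; all names are ours.

```python
import math

def tri(x, a, b, c):
    # triangular membership; assumes a < b < c
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def uniform_partition(k):
    """k uniformly spaced triangular fuzzy sets on [0, 1] (our assumption)."""
    step = 1.0 / (k - 1)
    return [(i * step - step, i * step, i * step + step) for i in range(k)]

def fuzzy_gain_ratio(xs, ys, k, alpha=0.5):
    """Fuzzy gain ratio of one attribute fuzzified into k triangular sets."""
    fsets = uniform_partition(k)
    card = [0.0] * k                     # |Ajk|: alpha-cut fuzzy cardinalities
    card_c = [{} for _ in range(k)]      # |Ajk AND C| per class label
    for x, y in zip(xs, ys):
        for j, (a, b, c) in enumerate(fsets):
            mu = tri(x, a, b, c)
            if mu >= alpha:
                card[j] += mu
                card_c[j][y] = card_c[j].get(y, 0.0) + mu
    total = sum(card)
    # entropy E of the class distribution of the whole data set
    E = -sum((ys.count(c) / len(ys)) * math.log2(ys.count(c) / len(ys))
             for c in set(ys))
    EA, split = 0.0, 0.0
    for j in range(k):
        if card[j] == 0.0:
            continue
        w = card[j] / total
        H = -sum((v / card[j]) * math.log2(v / card[j])
                 for v in card_c[j].values() if v > 0.0)
        EA += w * H                      # E(Ak)
        split -= w * math.log2(w)        # SplitInfo(Ak)
    return (E - EA) / split if split > 0.0 else 0.0

def tune(xs, ys):
    # exhaustive search over the number of linguistic terms (2..5)
    return max(range(2, 6), key=lambda k: fuzzy_gain_ratio(xs, ys, k))
```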

For illustration, let us consider an example data set given in Table 1. The data set is already in the normal form. The tuning algorithm given in Figure 1 computes the gain ratio for various combinations of the MFs and the number of linguistic terms, and the results obtained are listed in Table 2. The fuzzified data set (on the basis of obtained combinations of the MFs and number of linguistic terms for various attributes) is given in Table 3.

Table 1

The Example Data Set.

Att1   | Att2 | Att3   | Att4 | Att5 | Class
0.222  | 0.11 | 0.05   | 0.01 | 0.55 | 1
0.1667 | 0.24 | 0.04   | 0.3  | 0.57 | 1
0.1111 | 0.35 | 0.08   | 0.2  | 0.53 | 1
0.0833 | 0.92 | 0.05   | 0.05 | 0.59 | 1
0.1944 | 0.28 | 0.8    | 0.6  | 0.8  | 1
0.3055 | 0.3  | 0.56   | 0.8  | 0.88 | 1
0.0833 | 0.4  | 0.9    | 0.9  | 0.87 | 1
0.1944 | 0.7  | 0.52   | 0.55 | 0.89 | 1
0.5666 | 0.6  | 0.05   | 1.0  | 0.54 | 2
0.5833 | 0.8  | 0.05   | 0.01 | 0.56 | 2
0.7222 | 0.81 | 0.08   | 0.02 | 0.49 | 2
0.3333 | 0.45 | 0.89   | 0.03 | 0.48 | 2
0.6111 | 0.56 | 0.05   | 0.04 | 0.59 | 2
0.3889 | 0.12 | 0.32   | 0.6  | 0.92 | 2
0.5555 | 0.81 | 0.9    | 0.61 | 1.0  | 2
0.1667 | 0.67 | 0.42   | 0.6  | 0.89 | 2
0.6388 | 0.81 | 0.0133 | 0.52 | 0.87 | 2
0.53   | 0.73 | 0.45   | 0    | 0.88 | 2
0.1944 | 0.49 | 0.015  | 0.56 | 0.54 | 2
0.4444 | 0.63 | 0.59   | 0.1  | 0.9  | 2
0.7778 | 0.56 | 0.01   | 0.1  | 0.54 | 3
0.75   | 0.9  | 0.05   | 0.03 | 0.55 | 3
0.9    | 0.81 | 0.18   | 0.08 | 0.59 | 3
0.9166 | 0.9  | 0.14   | 0.14 | 0.6  | 3
0.8    | 0.59 | 0.09   | 0.89 | 0.61 | 3
0.8333 | 0.92 | 0.57   | 0.55 | 0.99 | 3
0.9    | 0.83 | 0.54   | 0.54 | 0.98 | 3
0.8055 | 1.0  | 0.52   | 0.56 | 0.89 | 3
0.94   | 0.89 | 0.14   | 0.89 | 0.99 | 3
0.7    | 0.16 | 0.55   | 0.6  | 0.98 | 3
0.0222 | 0.87 | 0.98   | 0.58 | 0.04 | 3
0.0111 | 0.95 | 0.95   | 0.09 | 0.06 | 3
Table 2

Gain Ratios for Various Combinations of MFs and Number of Linguistic Terms.

Linguistic terms | Attribute1        | Attribute2        | Attribute3        | Attribute4        | Attribute5
                 | Triang. | Trap.   | Triang. | Trap.   | Triang. | Trap.   | Triang. | Trap.   | Triang. | Trap.
S, L             | 0.299   | 0.376   | 0.279   | 0.135   | 0.063   | 0.015   | 0.0     | 0.169   | 0.130   | 0.015
S, M, L          | 0.399   | 0.537   | 0.182   | 0.163   | 0.009   | 0.009   | 0.066   | 0.071   | 0.119   | 0.119
S, SM, ML, L     | 0.298   | 0.298   | 0.271   | 0.289   | 0.090   | 0.080   | 0.096   | 0.129   | 0.121   | 0.112
S, SM, M, ML, L  | 0.362   | 0.362   | 0.243   | 0.243   | 0.071   | 0.071   | 0.127   | 0.127   | 0.140   | 0.139
Tuned MFs and number of linguistic terms

Triang., triangular; Trap., trapezoidal.

Table 3

Fuzzified Example Data Set.

Sr. no. | Attribute1 (S, M, L) | Attribute2 (S, SM, ML, L) | Attribute3 (S, SM, ML, L) | Attribute4 (S, L) | Attribute5 (S, SM, M, ML, L) | Class (D1, D2, D3)
1  | 1, 0, 0       | 0.9, 0.1, 0, 0   | 0.8, 0.2, 0, 0   | 1, 0       | 0, 0, 0.8, 0.2, 0    | 1, 0, 0
2  | 1, 0, 0       | 0, 1, 0, 0       | 0.84, 0.16, 0, 0 | 1, 0       | 0, 0, 0.72, 0.28, 0  | 1, 0, 0
3  | 1, 0, 0       | 0, 1, 0, 0       | 0.68, 0.32, 0, 0 | 1, 0       | 0, 0, 0.88, 0.12, 0  | 1, 0, 0
4  | 1, 0, 0       | 0, 0, 0, 1       | 0.8, 0.2, 0, 0   | 1, 0       | 0, 0, 0.64, 0.36, 0  | 1, 0, 0
5  | 1, 0, 0       | 0, 1, 0, 0       | 0, 0, 0.8, 0.2   | 0.6, 0.4   | 0, 0, 0, 0.8, 0.2    | 1, 0, 0
6  | 0.56, 0.44, 0 | 0, 1, 0, 0       | 0, 0.38, 0.62, 0 | 0, 1       | 0, 0, 0, 0.48, 0.52  | 1, 0, 0
7  | 1, 0, 0       | 0, 1, 0, 0       | 0, 0, 0.4, 0.6   | 0, 1       | 0, 0, 0, 0.52, 0.48  | 1, 0, 0
8  | 1, 0, 0       | 0, 0, 1, 0       | 0, 0.46, 0.54, 0 | 0.8, 0.2   | 0, 0, 0, 0.44, 0.56  | 1, 0, 0
9  | 0, 1, 0       | 0, 0, 1, 0       | 0.8, 0.2, 0, 0   | 1, 0       | 0, 0, 0.84, 0.16, 0  | 0, 1, 0
10 | 0, 1, 0       | 0, 0, 1, 0       | 0.8, 0.2, 0, 0   | 1, 0       | 0, 0, 0.76, 0.24, 0  | 0, 1, 0
11 | 0, 0.22, 0.78 | 0, 0, 0.9, 0.1   | 0.68, 0.32, 0, 0 | 1, 0       | 0, 0.04, 0.96, 0, 0  | 0, 1, 0
12 | 0.33, 0.67, 0 | 0, 1, 0, 0       | 0, 0, 0.44, 0.56 | 1, 0       | 0, 0.08, 0.92, 0, 0  | 0, 1, 0
13 | 0, 1, 0       | 0, 0, 1, 0       | 0.8, 0.2, 0, 0   | 1, 0       | 0, 0, 0.64, 0.36, 0  | 0, 1, 0
14 | 0, 1, 0       | 0.8, 0.2, 0, 0   | 0, 0.86, 0.14, 0 | 0.6, 0.4   | 0, 0, 0, 0.32, 0.68  | 0, 1, 0
15 | 0, 1, 0       | 0, 0, 0.9, 0.1   | 0, 0, 0.4, 0.6   | 0.56, 0.44 | 0, 0, 0, 0, 1        | 0, 1, 0
16 | 1, 0, 0       | 0, 0, 1, 0       | 0, 0.66, 0.34, 0 | 0.6, 0.4   | 0, 0, 0, 0.44, 0.56  | 0, 1, 0
17 | 0, 0.89, 0.11 | 0, 0, 0.9, 0.1   | 0.94, 0.05, 0, 0 | 0.92, 0.08 | 0, 0, 0, 0.52, 0.48  | 0, 1, 0
18 | 0, 1, 0       | 0, 0, 1, 0       | 0, 0.6, 0.4, 0   | 0.68, 0.32 | 0, 0, 0, 0.48, 0.52  | 0, 1, 0
19 | 1, 0, 0       | 0.6, 0.4, 0, 0   | 0.94, 0.06, 0, 0 | 0.76, 0.24 | 0, 0, 0.84, 0.16, 0  | 0, 1, 0
20 | 0, 1, 0       | 0, 0, 1, 0       | 0, 0.32, 0.68, 0 | 1, 0       | 0, 0, 0, 0.4, 0.6    | 0, 1, 0
21 | 0, 0, 1       | 0, 0, 1, 0       | 0.96, 0.04, 0, 0 | 1, 0       | 0, 0, 0.84, 0.16, 0  | 0, 0, 1
22 | 0, 0, 1       | 0, 0, 0, 1       | 0.8, 0.2, 0, 0   | 1, 0       | 0, 0, 0.8, 0.2, 0    | 0, 0, 1
23 | 0, 0, 1       | 0, 0, 1, 0.1     | 0.28, 0.72, 0, 0 | 1, 0       | 0, 0, 0.64, 0.36, 0  | 0, 0, 1
24 | 0, 0, 1       | 0, 0, 0, 1       | 0.44, 0.56, 0, 0 | 1, 0       | 0, 0, 0.6, 0.4, 0    | 0, 0, 1
25 | 0, 0, 1       | 0, 0, 0.7, 0     | 0.64, 0.36, 0, 0 | 0, 1       | 0, 0, 0.56, 0.44, 0  | 0, 0, 1
26 | 0, 0, 1       | 0, 0, 0, 1       | 0, 0.36, 0.64, 0 | 0.8, 0.2   | 0, 0, 0, 0.04, 0.96  | 0, 0, 1
27 | 0, 0, 1       | 0, 0, 0.1, 0.3   | 0, 0.42, 0.58, 0 | 0.84, 0.16 | 0, 0, 0, 0.05, 0.92  | 0, 0, 1
28 | 0, 0, 1       | 0, 0, 0, 1       | 0, 0.46, 0.54, 0 | 0.76, 0.24 | 0, 0, 0, 0.44, 0.56  | 0, 0, 1
29 | 0, 0, 1       | 0, 0, 0.3, 0.9   | 0.44, 0.56, 0, 0 | 0, 1       | 0, 0, 0, 0.04, 0.96  | 0, 0, 1
30 | 0, 0.4, 0.6   | 0.4, 0.6, 0, 0   | 0, 0.4, 0.6, 0   | 0.6, 0.4   | 0, 0, 0, 0.08, 0.92  | 0, 0, 1
31 | 1, 0, 0       | 0, 0, 0.3, 0.7   | 0, 0, 0.08, 0.92 | 0.68, 0.32 | 0.84, 0.16, 0, 0, 0  | 0, 0, 1
32 | 1, 0, 0       | 0, 0, 0, 1       | 0, 0, 0.2, 0.8   | 1, 0       | 0.76, 0.24, 0, 0, 0  | 0, 0, 1

6.2 Phase 2: GA Design to Discover FCRs

6.2.1 Initializing Population

In this step, the GA generates an initial population of FCRs. Each chromosome in the population represents a single rule, following the Michigan approach. Each fuzzy rule is denoted by a binary string. An n-bit string block signifies n successive linguistic fuzzy values like “small”, “small–medium”, “medium”, “medium–large”, and “large”. The presence of a linguistic value in the antecedent part is represented by a 1 bit, whereas a 0 bit marks its absence. A “don’t care” state indicates the non-existence of an attribute in the rule and is encoded by all 1s or all 0s. The consequent part contains a number of bits equal to the number of class labels, i.e. for a three-class classification problem, the consequent part has three bits. To assign the chromosome to exactly one class, exactly one bit is set to 1 in the consequent part. Figure 2 shows the encoding scheme and its mapping to the corresponding rules. The encoding scheme maps the chromosomes to rules in CNF form, with a conjunction between different attributes and a disjunction within the different values of the same attribute.

Figure 2: Encoding Scheme.
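The decoding of a chromosome into a rule, as described above, can be sketched as follows (attribute and term names are illustrative, not from the paper):

```python
def decode(chromosome, block_sizes, terms, classes):
    """Decode a bit-string chromosome into a readable fuzzy rule
    (Michigan approach: one chromosome = one rule)."""
    pos, conditions = 0, []
    for i, n in enumerate(block_sizes):
        block = chromosome[pos:pos + n]
        pos += n
        if block.count("1") in (0, n):   # all 0s / all 1s -> "don't care"
            continue
        chosen = [t for t, bit in zip(terms[i], block) if bit == "1"]
        # disjunction within an attribute, conjunction between attributes
        conditions.append("(Attr%d is %s)" % (i + 1, " or ".join(chosen)))
    consequent = classes[chromosome[pos:].index("1")]  # one-hot class part
    return " and ".join(conditions) + " -> " + consequent
```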

6.2.2 Fitness Evaluation

In the proposed approach, the precision, recall, and simplicity of a fuzzy rule are considered for computing the fitness of an individual. To compute the precision and recall, we require a mechanism to measure the degree of match between a fuzzy rule and an object from the data set. The process of computing the degree of match between a fuzzy rule r and an object u is based on the max–min operators of the Mamdani model and is the same as that used in [25]. The degree of match for an individual attribute Ai between rule r and object u is given by

mAi(r, u) = 1, if s(Ai) = #,
mAi(r, u) = max_k min(s(Tik), μTik(u)), otherwise.

In the above equation, s(Ai) is the binary string for the ith attribute, # is the “don’t care” state, s(Tik) is the kth bit of the ith attribute, and μTik(u) is the membership degree of object u in the corresponding kth fuzzy set. The degree of match between premise P of rule r and object u of the data set is given by

mP(r, u) = min_i mAi(r, u).

The conclusion of rule r and the class of object u can be directly matched, as there is no need to convert this part of the rule into fuzzy form. To make the matching process clear, we illustrate it in Figure 3. From the illustration given in Figure 3, we can say that the premise part of rule r and instance u match with a degree of 0.7. This degree of match is compared to the significance level α. If the value of mP(r, u) is greater than the α-cut value and the conclusion part r(c) of rule r matches the class label u(c) of instance u, we count this as a true-positive (TP) case.

Figure 3: The Matching Process.
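The matching computation can be sketched in Python (a reconstruction; for a selected term, min of the bit and the membership degree reduces to the membership degree itself, and names are ours):

```python
def match_attribute(block, memberships):
    """m_Ai(r, u): 1 for a "don't care" block; otherwise the maximum
    membership degree among the terms selected (bit = 1) in the rule."""
    if block.count("1") in (0, len(block)):
        return 1.0
    return max(mu for bit, mu in zip(block, memberships) if bit == "1")

def match_premise(blocks, instance):
    """m_P(r, u): Mamdani-style minimum over all attributes."""
    return min(match_attribute(b, m) for b, m in zip(blocks, instance))
```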

Now we can define precision and recall of a fuzzy rule at significance level α as follows [41]:

Precision (γ1) = Σu∈U [fα(mP(r, u)) ∧ (r(c) = u(c))] / Σu∈U fα(mP(r, u)),
Recall (γ2) = Σu∈U [fα(mP(r, u)) ∧ (r(c) = u(c))] / |r(c)|.

Here, fα is the α-cut function with a value >0.5, and #n is the number of conditions present in the antecedent of rule Ri. Thereafter, the fitness is calculated according to the following formula:

Fitness(Ri) = (γ1 × γ2) / #n.
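These computations can be sketched as follows, assuming the premise match degrees have already been computed and reading the fitness as (γ1 × γ2)/#n, i.e. simplicity as a penalty on the antecedent length (names are ours):

```python
def rule_stats(match_degrees, labels, rule_class, alpha=0.5):
    """Precision and recall of one rule: match_degrees[i] is m_P(r, u_i),
    labels[i] the class of instance i."""
    fired = [i for i, m in enumerate(match_degrees) if m > alpha]
    tp = sum(1 for i in fired if labels[i] == rule_class)
    n_class = sum(1 for l in labels if l == rule_class)
    precision = tp / len(fired) if fired else 0.0
    recall = tp / n_class if n_class else 0.0
    return precision, recall

def fitness(precision, recall, n_conditions):
    # simplicity enters as a penalty on the number of antecedent conditions
    return (precision * recall) / n_conditions
```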

6.2.3 Genetic Operators

New individuals are created from existing ones by employing three primary genetic operators: selection, crossover, and mutation. The essential part of the selection process is to stochastically select better-fit individuals from one population and to create a new population for the next generation. In the present design, we have applied the roulette wheel method for selection. From the selected pair of individuals, two new individuals are generated using a one-point crossover operator. The crossover is applied with the constraint that the crossover point falls only at the start of a string block. For example, two parents generate two offspring through crossover at the beginning of the fourth block:

Parent1: 100 : 0000 : 1100 : 00 : 1111 : 100
Parent2: 001 : 1100 : 1111 : 01 : 0000 : 001
Child1:  100 : 0000 : 1100 : 01 : 0000 : 100
Child2:  001 : 1100 : 1111 : 00 : 1111 : 001

Afterward, a mutation operator generates a new individual by mutating one of the binary string blocks in the chromosome. Mutation is applied with equal probability on each string block. For example, a mutation on the second block may generate new chromosomes as given below:

Parent:  (001 : 0000 : 0110 : 01 : 00001 : 001)
Mutated: (001 : 0100 : 0110 : 01 : 00001 : 001)

The phenotype interpretation of the crossover and mutation operators is given in Figure 4A and B.

Figure 4: Genetic Operators. (A) Crossover. (B) Mutation. # represents the “don’t care” state.
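The block-boundary crossover and block mutation can be sketched as below (a sketch; chromosomes are bit strings and block_sizes lists the per-attribute block lengths, consequent included; names are ours):

```python
import random

def block_crossover(p1, p2, block_sizes, rng):
    """One-point crossover constrained to block boundaries."""
    cuts, pos = [], 0
    for n in block_sizes[:-1]:
        pos += n
        cuts.append(pos)
    cut = rng.choice(cuts)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def block_mutation(chrom, block_sizes, rng):
    """Mutate one randomly chosen block, keeping the chromosome length."""
    starts, pos = [], 0
    for n in block_sizes:
        starts.append((pos, n))
        pos += n
    s, n = rng.choice(starts)
    fresh = "".join(rng.choice("01") for _ in range(n))
    return chrom[:s] + fresh + chrom[s + n:]
```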

6.2.4 Computing Crowding Similarity

As we are interested in discovering a rule set and not the single best rule, a GA with a crowding technique is employed to maintain the diversity of rules in the population. The crowding GA takes two additional parameters – the crowding factor and the crowding subpopulation size. A subpopulation of rules of crowding subpopulation size is sampled from the GA rule population through uniform sampling. The similarity of an offspring rule is computed with the worst rule from the subpopulation. This process is repeated crowding-factor times. The offspring rule replaces the worst performing but most similar rule. The similarity of two rules is measured using the following formula:

Similarity_count = |Iworst ∩ Ioffspring| / |Iworst ∪ Ioffspring|.

Here, Iworst and Ioffspring are the sets of examples covered by the worst rule of the subpopulation and by the offspring rule, respectively. The more the data space covered by the worst and offspring rules overlaps, the more similar they are. The crowding algorithm is given in Figure 5. The GA to evolve FCRs is given in Figure 6.
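Reading the similarity measure as the overlap of the two covered example sets relative to their union (a Jaccard-style reconstruction of the formula), it can be sketched as:

```python
def similarity_count(covered_worst, covered_offspring):
    """Overlap of the example sets covered by two rules, relative to
    their union (our Jaccard-style reading of the similarity formula)."""
    worst, off = set(covered_worst), set(covered_offspring)
    union = worst | off
    return len(worst & off) / len(union) if union else 0.0
```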

Figure 5: Crowding Algorithm.

Figure 6: Algorithm for Evolving FCRs.

6.3 Phase 3: Discovery of Exceptions – Intra- and Inter-Class

A framework for discovery of intra- and inter-class exceptions has been given by Vashishtha et al. [39]. In this section, we extend the framework for discovery of FCRs with intra- and inter-class exceptions. An FCR usually takes the symbolic form

If (x1 is A1k) ∧ (x2 is A2k) ∧ … ∧ (xn is Ank), then Dm; γ1:γ2.

Here, x1, x2, …, xn are linguistic variables and A1k, A2k, …, Ank are linguistic terms. The linguistic variables can take values from different numbers of linguistic terms, and these linguistic terms may be defined using different MFs. The rules discovered by any machine learning algorithm must have high generalization power, with high precision (γ1) and recall (γ2). The definitions of precision and recall for fuzzy rules have already been given in Section 6.2.2.

A discovered rule is added to the final pool of rules only if the precision and recall of the rule exceed the user-defined threshold values for precision (tp) and recall (tr). We call such rules default rules. The default rules discovered from the example data set are given in Table 4. The values of tp and tr are kept at 0.6 and 0.5, respectively.

Table 4

FCRs Discovered from the Example Data Set.

Rule | FCRs                | Precision (γ1) | Recall (γ2) | Accuracy (%)
R1   | (A1 = small) → D1   | 0.67           | 1.0         | 84.3
R2   | (A1 = medium) → D2  | 1.0            | 0.75        |
R3   | (A1 = large) → D3   | 0.9            | 0.83        |

However, such default rules do not deal with rare situations. The rules with high generality can be augmented with an exception part, making them more accurate and interesting. The outline for discovering intra- and inter-class exceptions for fuzzy rules is given below.

6.3.1 Intra-Class Exceptions

Intra-class exceptions extract the unique or rare features of an object within the class. An FCR augmented with intra-class exceptions is represented as follows:

If (x1 is A1k) ∧ (x2 is A2k) ∧ … ∧ (xn is Ank), then Dm ϖ (x4 is A4k) ∧ (x5 is A5k); γ1:γ2:γ3:γ4
P = (x1 is A1k) ∧ (x2 is A2k) ∧ … ∧ (xn is Ank)
E = (x4 is A4k) ∧ (x5 is A5k).

In the above rule, P signifies the premise part and E represents the exception part. Note that the conditions in the premise and exception parts are fuzzy constructs and need to be mutually exclusive. The symbol ϖ indicates the “with” operator, which is used for augmenting intra-class exceptions to the default rule. The rule states that a few objects of class Dm are unique and of special interest, with the features x4 and x5 taking some rare values. We need to define two extra parameters to capture intra-class exceptions as below:

γ3 = |fα(P ∧ E ∧ Dm)| / |fα(P ∧ Dm)|,   γ4 = |fα(P ∧ E ∧ Dm)| / |fα(P ∧ E)|.

The parameter γ3 measures the support of an intra-class exception with respect to its default rule. A minimum threshold tE for γ3 needs to be set by the user, and it should be much smaller than the value of γ2, i.e. γ3 ≥ tE and tE << γ2. The parameter γ4 signifies the precision of a rule with an intra-class exception, and its value must always be equal to 1, i.e. γ4 = 1. This constraint ensures the uniqueness of the rare object in the data set, i.e. P ∧ E holds in no class other than Dm. fα denotes the α-cut function used to compute γ3 and γ4, and matching of a fuzzy rule against an object of the data set takes place in the same way as described in Section 6.2.2. An FCR with an intra-class exception discovered for rule R2 in Table 4 is given as

If (A1 = medium) → Class D2 ϖ (A2 = small); γ1 = 1.0, γ2 = 0.75, γ3 = 0.11, γ4 = 1.0.
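If the α-cut function fα(·) is modelled as the set of instance ids whose fuzzy matching degree exceeds the α threshold, γ3 and γ4 reduce to ratios of set sizes. The following sketch (toy instance ids, not from the paper) happens to reproduce a γ3 close to the 0.11 reported for rule R2:

```python
# Illustrative computation of gamma3 and gamma4: each f_alpha(.) result is a
# set of instance ids, so |.| is just the set size.

def gamma3_gamma4(P, E, Dm):
    """P, E, Dm: sets of instance ids matching the premise, the exception
    conditions, and membership of class Dm, respectively."""
    pe_dm = P & E & Dm
    g3 = len(pe_dm) / len(P & Dm)   # support of the exception w.r.t. the rule
    g4 = len(pe_dm) / len(P & E)    # precision of the rule with exception
    return g3, g4

# Toy data: 9 class-Dm objects match P; one of them also matches E,
# and P∧E holds in no other class, so gamma4 = 1.
P  = set(range(10))          # ids 0..9 match the premise
Dm = set(range(9))           # ids 0..8 belong to class Dm
E  = {4}                     # id 4 shows the rare feature
g3, g4 = gamma3_gamma4(P, E, Dm)
print(g3, g4)   # g3 = 1/9 ≈ 0.111, g4 = 1.0
```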

6.3.2 Inter-Class Exceptions

Inter-class exceptions are able to switch the class of an object. The following representation is used for a rule with its inter-class exceptions:

If (x1 is A1k) ∧ (x2 is A2k) ∧ … ∧ (xn is Ank), then Dm ⊕ (x3 = A3k) ∨ (x6 = A6k) (→ Dj); γ1 : γ2 : γ5 : γ6.

In the above notation, Dm is the default class predicted when the premise is true. The symbol ⊕ denotes the "unless" operator, used for augmenting inter-class exceptions to the default rule. A rule with inter-class exceptions is interpreted as follows: if P holds, then the class of the object under consideration is Dm, unless any of the rare circumstances/conditions (Ei) holds true; in that case, the class of the object changes to Dj. For example: the majority of highly intelligent students perform well in examinations, unless one is very sick. To capture such exceptions, two additional parameters, γ5 and γ6, with some constraints are defined as

γ5 = |fα(P ∧ E ∧ Dj)| / |fα(P)|,   γ6 = |fα(P ∧ E ∧ Dj)| / |fα(P ∧ E)|,

where

γ5 ≤ 1 − γ1;   γ6 = 1;   γ1 < 1.0.
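As with the intra-class parameters, γ5 and γ6 become ratios of set sizes once the α-cut sets are modelled as sets of instance ids (the ids below are toy values for illustration):

```python
# Sketch of gamma5 and gamma6 for an inter-class exception: gamma5 is the
# fraction of premise matches that actually switch to class Dj, and gamma6
# must equal 1 so that the exception always points to Dj.

def gamma5_gamma6(P, E, Dj):
    pe_dj = P & E & Dj
    g5 = len(pe_dj) / len(P)       # fraction of premise matches switched to Dj
    g6 = len(pe_dj) / len(P & E)   # exception must be unambiguous: g6 = 1
    return g5, g6

P  = set(range(12))    # 12 objects match the premise
Dj = {10, 11}          # two of them actually belong to class Dj
E  = {10, 11}          # ... and both show the exceptional condition
g5, g6 = gamma5_gamma6(P, E, Dj)
print(g5, g6)   # g5 = 2/12 ≈ 0.167, g6 = 1.0
```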

The two inter-class exceptions discovered for rule R1 of the example data set are given as

If (A1 = small) → D1 ⊕ (A5 = small) (→ D3) ∨ (A3 = small_medium) (→ D2); γ1 = 0.67, γ2 = 1.0, γ5 = (0.0833, 0.167), γ6 = 1.0.

The above rule is interpreted as follows: if A1 is small, then the class of the object is D1, unless A5 is small (the class changes to D3) or A3 is small_medium (the class changes to D2). This rule classifies objects into the right category in rare and exceptional circumstances. The discovery of inter-class exceptions can bring a significant improvement in accuracy on data sets where a number of exceptional conditions are spread over several small disjuncts. In the case of our example data set, the default rule set classifies 27 out of 32 instances correctly, giving a predictive accuracy of 84.3%. The rule set augmented with inter-class exceptions classifies three more instances correctly, improving the accuracy to 93.7%.
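A minimal interpreter for this "unless" semantics can be written directly from the rule above (crisp condition checks are used here for brevity; a full FRBS would use fuzzy matching degrees):

```python
# Interpreter for the worked rule: if A1 is small then D1, unless A5 is small
# (-> D3) or A3 is small_medium (-> D2). Exceptions are checked first, so a
# rare condition overrides the default class.

def classify(obj):
    if obj["A1"] == "small":
        if obj["A5"] == "small":
            return "D3"
        if obj["A3"] == "small_medium":
            return "D2"
        return "D1"
    return None  # the rule does not fire for this object

print(classify({"A1": "small", "A5": "large", "A3": "large"}))         # D1
print(classify({"A1": "small", "A5": "small", "A3": "large"}))         # D3
print(classify({"A1": "small", "A5": "large", "A3": "small_medium"}))  # D2
```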

The detailed algorithm for the third phase to discover exceptions is given in Figure 7. In this phase, the FCRs discovered in the previous phase are mutated over a number of generations to capture the exceptions. As the attributes occurring in the premise and exception parts are mutually exclusive, the mutation operator in this phase is restricted only to the blocks of attributes that do not figure in the premise part of the pre-discovered FCRs.

The attribute–value pairs occurring in the exception part are classified as intra- or inter-class based on the framework given above. If the value of γ1 is less than 1, i.e. tp ≤ γ1 < 1, there is a possibility of inter-class exceptions, whereas if γ1 = 1, there is a likelihood of intra-class exceptions.
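The restricted mutation can be sketched as below. The binary chromosome encoding and the block size per attribute are assumptions for illustration, not taken from the paper:

```python
# Sketch of the restricted mutation used in the third phase: only the bit
# blocks of attributes that do NOT appear in the rule premise may be flipped,
# which keeps the premise and exception parts mutually exclusive.
import random

def mutate_exception_part(chrom, premise_attrs, block, p_mut=0.2, rng=None):
    rng = rng or random.Random(0)
    out = list(chrom)
    for attr in range(len(chrom) // block):
        if attr in premise_attrs:
            continue  # never touch attributes used in the premise
        for i in range(attr * block, (attr + 1) * block):
            if rng.random() < p_mut:
                out[i] = 1 - out[i]
    return out

chrom = [0] * 15             # 5 attributes x 3-bit blocks
mutated = mutate_exception_part(chrom, premise_attrs={0}, block=3)
print(mutated[:3])           # block of attribute 0 (the premise) is untouched
```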

Figure 7: Algorithm to Discover Intra- and Inter-Class Exceptions in the Third Phase.

The flowchart of the proposed system for discovery of FCRs with exceptions is given in Figure 8.

Figure 8: The Flowchart of the Overall System.

7 Experimental Design and Results

We have already demonstrated the proposed approach on an example data set containing 32 instances, five attributes, and three classes. The approach is further validated on six data sets obtained from the UCI machine learning repository. Table 5 describes the data sets in terms of the number of instances, attributes, and classes.

Table 5

Description of Data Sets Used for Experimentation.

Data set | No. of instances | No. of attributes | No. of classes
Iris | 150 | 4 | 3
Wine | 153 | 13 | 3
Breast Cancer Wisconsin | 683 | 9 | 2
Seed | 210 | 7 | 3
Glass | 214 | 9 | 6
Sensor_readings_4 | 5456 | 4 | 4

Each data set was fuzzified on the basis of the optimal combination of MF and number of linguistic terms obtained in the pre-processing phase. Subsequently, a crowding GA was used to discover FCRs with the following parameters:

Population size: 40
Crossover rate: 0.8
Mutation rate: 0.1
Stopping criterion: no improvement over the last 20 generations
Crowding subpopulation size: 4
Crowding factor: 4
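The crowding scheme behind these parameters can be sketched as follows. This is a generic De Jong-style crowding replacement under assumed details (binary genotypes, Hamming distance as the similarity measure), not the authors' code:

```python
# Crowding replacement with crowding factor CF = 4: a child replaces the most
# similar of CF randomly drawn individuals, which preserves population
# diversity so that several rules (niches) can coexist.
import random

def crowding_replace(population, child, cf=4, rng=None):
    rng = rng or random.Random(1)
    idxs = rng.sample(range(len(population)), cf)
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    closest = min(idxs, key=lambda i: hamming(population[i], child))
    population[closest] = child
    return closest

pop = [[0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]]
replaced = crowding_replace(pop, [1, 1, 1, 0])
print(replaced, pop[replaced])  # the child overwrote its nearest sampled rival
```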

In the third phase, mutation was applied only to the exception part of the default rules (excluding the bits corresponding to the attributes appearing in the premise part). The mutation probability for this phase was kept at 0.2; crossover was not required. The final rules augmented with intra- and inter-class exceptions for the iris data set are shown in Table 6.

Table 6

FCRs with Intra- and Inter-Class Exceptions for the Iris Data Set.

Rule | Exception type | FCR with exception | γ3/γ5 | γ4/γ6
R1 | Intra-class | If (petal_length = small) → Iris setosa ϖ (sepal_length = medium) ∨ (sepal_width = large) | 0.143 | 1.0
R2 | Inter-class | If (petal_width = medium) → Iris versicolor ⊕ (sepal_length = small): Iris virginica | 0.035 | 1.0
R3 | Intra-class | If (petal_length = large) → Iris virginica ϖ (sepal_width = large) | 0.1 | 1.0

Accuracy without exceptions: 95.07%; accuracy with exceptions: 96.13%.

Intra-class exceptions merely discover the unique and interesting objects of a class and do not improve the accuracy of a rule. Inter-class exceptions, although they may occur in a small and seemingly insignificant portion of a data set, do improve the performance (accuracy) of a default rule. The improvement in accuracy may be significant in cases where many exceptional conditions are spread over several small disjuncts of a data set. The proposed approach discovered three FCRs with two intra-class and one inter-class exceptions. The accuracy of the FCRs with exceptions increases from 95.07% to 96.13%.

The proposed approach improves the complexity-based interpretability (in terms of the number of rules) because it discovers a smaller number of rules. It makes general rules more specific by adding intra- or inter-class exceptions. This may slightly increase the number of conditions per rule; however, the rules become more correct and semantically more intuitive. By discovering FCRs augmented with exceptions, we capture not only the general behavior but also the rare and interesting facts about the workings of a system.

Results obtained for all six data sets are summarized in Tables 7 and 8. The predictive accuracy of the FCRs discovered with pre-tuned optimal combinations of MFs and number of linguistic terms is compared to that of FCRs obtained using pre-defined sets of MFs and number of linguistic terms for the fuzzification process. It is clear from these tables that FCRs discovered with pre-tuned MFs and numbers of linguistic terms give better accuracy. The number of rules discovered is also minimal across all the data sets. Thus, the tuning in the pre-processing phase results in the best accuracy–complexity trade-off. The predictive accuracies and numbers of rules given in these tables are averaged over 20 runs of the GA. A summary of the FCRs discovered with exceptions is given in Table 9.
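The two MF families compared here can be sketched as below; the parameter layout (a ≤ b ≤ c ≤ d) is the standard textbook form, not taken from the paper:

```python
# Standard triangular and trapezoidal membership functions: a triangle peaks
# at b, while a trapezoid has a flat plateau of full membership on [b, c].

def triangular(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

print(triangular(2.0, 1.0, 2.0, 3.0))        # peak membership: 1.0
print(trapezoidal(2.5, 1.0, 2.0, 3.0, 4.0))  # on the plateau: 1.0
```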

Table 7

Predictive Accuracy with Different Combinations of MFs and Number of Linguistic Terms.

Data set | Triangular (S,L) | Triangular (S,M,L) | Triangular (S,SM,ML,L) | Triangular (S,SM,M,ML,L) | Trapezoidal (S,L) | Trapezoidal (S,M,L) | Trapezoidal (S,SM,ML,L) | Trapezoidal (S,SM,M,ML,L) | Optimal combination
Iris | 75.07 | 86.0 | 77.07 | 78.53 | 65.73 | 94.8 | 90.2 | 92.4 | 95.07
Seed | 92.9 | 71.05 | 56.65 | 53.90 | 93.54 | 55.33 | 75.81 | 59.44 | 94.29
Glass | 80.62 | 82.88 | 74.08 | 74.02 | 74.41 | 66.65 | 69.78 | 71.08 | 85.94
Wine | 93.93 | 84.73 | 64.36 | 77.07 | 96.40 | 88.06 | 80.67 | 93.44 | 96.85
Breast Cancer Wisconsin | 87.81 | 72.24 | 84.08 | 84.55 | 83.87 | 69.05 | 94.59 | 94.95 | 95.43
Sensor_readings_4 | 77.64 | 78.57 | 92.02 | 82.37 | 77.13 | 74.28 | 79.40 | 79.75 | 96.79
Table 8

Number of Rules with Different Combinations of MFs and Number of Linguistic Terms.

Data set | Triangular (S,L) | Triangular (S,M,L) | Triangular (S,SM,ML,L) | Triangular (S,SM,M,ML,L) | Trapezoidal (S,L) | Trapezoidal (S,M,L) | Trapezoidal (S,SM,ML,L) | Trapezoidal (S,SM,M,ML,L) | Optimal combination
Iris | 3.0 | 3.0 | 4.2 | 3.2 | 3.0 | 3.0 | 3.0 | 4.0 | 3.0
Seed | 4.2 | 4.0 | 4.2 | 3.6 | 4.0 | 4.0 | 4.8 | 4.0 | 4.0
Glass | 4.8 | 7.6 | 5.2 | 5.0 | 5.4 | 5.4 | 5.6 | 5.0 | 4.6
Wine | 5.0 | 4.6 | 6.0 | 4.8 | 4.4 | 5.6 | 6.2 | 5.2 | 4.4
Breast Cancer Wisconsin | 3.8 | 3.4 | 5.2 | 5.2 | 3.4 | 5.8 | 6.6 | 7.6 | 3.4
Sensor_readings_4 | 6.4 | 6.4 | 6.4 | 6.4 | 5.2 | 6.0 | 5.0 | 6.4 | 4.8
Table 9

Summary of FCRs Discovered with Exceptions.

Data set | FCRs with intra-class exceptions | FCRs with inter-class exceptions | Accuracy with exceptions (%)
Iris | 2 | 1 | 96.13
Seed | 1 | 0 | 94.29
Glass | 1 | 1 | 86.26
Wine | 0 | 0 | 96.85
Breast Cancer Wisconsin | 0 | 2 | 95.96
Sensor_readings_4 | 0 | 0 | 96.79

We could discover exceptions in the Iris, Breast Cancer Wisconsin, Seed, and Glass data sets, whereas no exceptions could be captured in the Wine and Sensor_readings_4 data sets. There is a slight improvement in predictive accuracy wherever inter-class exceptions were discovered. Moreover, the rules with exceptions are interesting in their own right. An RB containing FCRs with exceptions makes a system more robust in rare and exceptional circumstances.

8 Conclusion

In this article, we have proposed a scheme for discovering an accurate, interpretable, and interesting KB for FRBSs. The DB is tuned in the pre-processing phase to have appropriate MFs and numbers of linguistic labels. Inclusion of appropriate MFs and numbers of linguistic terms in the DB is a fundamental requirement for discovering accurate and interpretable RBs. We have also proposed a procedure for discovering FCRs augmented with intra- and inter-class exceptions. Intra-class exceptions capture the unique and interesting facts about an object, whereas inter-class exceptions hold the rare features that change the class of an object. Discovery of an RB in the proposed form makes it more complete and captures the exceptional behavior of a system pertaining to small disjuncts. The proposed approach may also prove useful for fuzzy control applications in predicting the behavior of a system in rare circumstances. As future work, the proposed system needs to be applied and tested in the domains of medical diagnosis and fuzzy controllers. In this work, we have tuned only the type of MFs and the number of linguistic labels used for the attributes of a data set. Tuning the parameter values defining the MFs may further increase the accuracy and interpretability of the KB of FRBSs.

About the authors

Renu Bala

Renu Bala is a PhD scholar at the Department of Computer Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar, India. She finished her MCA from Ch. Devi Lal University, Sirsa, India. She qualified for the UGC-NET Junior Research Fellowship in 2012. Her research interests include application of evolutionary algorithms in the domain of rule mining and exception discovery.

Saroj Ratnoo

Saroj Ratnoo is a Professor at the Department of Computer Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar, India. She finished her MSc in computing science from the University of London, UK. She completed her PhD from Jawaharlal Nehru University, New Delhi, India. She has 18 years of teaching and research experience. Her research interests include application of evolutionary and swarm intelligence algorithms in the domain of feature selection, rule mining, and exception discovery.


Received: 2015-10-21
Published Online: 2016-1-22
Published in Print: 2016-4-1

©2016 by De Gruyter

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
