
An incremental approach to obtaining attribute reduction for dynamic decision systems

  • Liu Wenjun
Published/Copyright: November 27, 2016

Abstract

In the 1960s, Professor Hu Guoding proposed a method of measuring information based on the idea that the connotation and denotation of a concept satisfy an inverse ratio rule. According to this information measure, we first put forward the information quantity for information systems and decision systems; we then discuss the updating mechanism of the information quantity for decision systems; finally, we give an attribute reduction algorithm for decision tables with dynamically varying attribute values.

MSC 2010: 03E99

1 Introduction

In recent years, increasing data volumes have created a major challenge. The prevalence of continuously collected data has led to growing interest in the field of data streams. For example, Internet traffic generates large streams that cannot even be stored effectively unless significant resources are spent on storage. As data sets change with time, it is very time-consuming or even infeasible to run a knowledge acquisition algorithm repeatedly. To overcome this deficiency, researchers have recently proposed many new analytic techniques. These techniques mainly address knowledge updating from three aspects: the expansion of data [1-7], the increasing number of attributes [8-11] and the variation of data values [12, 13]. For the first two aspects, a number of incremental techniques have been developed to acquire new knowledge without recomputation. However, little research has been done on the third aspect of knowledge acquisition, which motivates this study. This paper concerns attribute reduction for data sets with dynamically varying data values.

Feature selection, a common technique for data preprocessing in many areas including machine learning, pattern recognition and data mining, holds great significance. Among the various approaches to selecting useful features, a special theoretical framework is Pawlak's rough set model [14, 15]. One can use rough set theory to select a subset of features that is most suitable for a given recognition problem [16-21]. Rough feature selection is also called attribute reduction, which aims to select those features that keep the discernibility ability of the original ones [22-26]. The feature subset generated by an attribute reduction algorithm is called a reduct. In the last two decades, researchers have proposed many reduction algorithms [27-32]. However, most of these algorithms are only applicable to static data sets. In [33-40], several algorithms have been proposed for dynamic data sets. Here, we continue the research on attribute reduction algorithms for dynamic data sets.

The remainder of this paper is organized as follows. Some preliminaries of rough set theory are reviewed in Section 2. In Section 3, a new form of conditional information quantity for decision systems is introduced and its properties are discussed. In Section 4, the updating mechanism of the information quantity for decision systems is investigated. Based on the conditional information quantity, an attribute reduction algorithm for decision systems with dynamically varying attribute values is constructed in Section 5.

2 Preliminaries

In this section, we first review some basic concepts of rough set theory; for details, the reader may refer to [14, 15]. Throughout this paper, the universe U is assumed to be a finite nonempty set.

In rough set theory, knowledge is regarded as the ability to classify objects. Suppose we are given a finite set U ≠ ∅ of objects we are interested in. Any subset X ⊆ U will be called a concept or a category in U, and any family of concepts in U will be referred to as abstract knowledge about U. We will mainly be interested in concepts which form a partition, and we often use equivalence relations instead of classifications, since these two notions are mutually interchangeable and relations are easier to deal with. Suppose R is an equivalence relation over U; then by U/R we mean the family of all equivalence classes of R, and $[x]_R$ denotes the equivalence class of R containing the element x ∈ U. With each subset X ⊆ U, we associate two subsets:

$$\underline{R}X=\bigcup\{Y\in U/R \mid Y\subseteq X\},\qquad \overline{R}X=\bigcup\{Y\in U/R \mid Y\cap X\neq\emptyset\},$$

called the R-lower and R-upper approximations of X, respectively. When $\underline{R}X=\overline{R}X$, X is called R-definable; otherwise X is called R-undefinable.

An information system, as a basic concept in rough set theory, provides a convenient framework for the representation of objects in terms of their attribute values.

An information system is a quadruple IS = (U, A, V, f), where: U is a finite nonempty set of objects, called the universe; A is a nonempty finite set of attributes; V is the value domain of the attributes; f is an information function which assigns particular values from the domains of attributes to objects, such that ∀a ∈ A, x ∈ U, f(a, x) ∈ V, where f(a, x) denotes the value of attribute a on object x.

With every subset of attributes B ⊆ A there is an associated equivalence relation ind(B) = {(x, y) ∈ U² | ∀a ∈ B, f(a, x) = f(a, y)}. This equivalence relation ind(B) divides the universe U into a family of disjoint classes; the approximation space determined by the B-equivalence relation, denoted by $\pi_B$, is defined as $\pi_B = \{X \mid X \in U/\mathrm{ind}(B)\}$, where each X is called a B-equivalence block and depicts a collection of objects that are indiscernible from each other with respect to B.

One type of special information system is called a decision system, denoted DS = (U, C ∪ {d}, V, f), where d is the decision attribute and C is the conditional attribute set. The positive region of d with respect to C is defined as $POS_C(d)=\bigcup_{X\in\pi_d}\underline{C}X$. DS = (U, C ∪ {d}, V, f) is called a consistent decision system if $POS_C(d) = U$; otherwise it is called an inconsistent decision system.

The consistency degree of a decision system DS = (U, C ∪ {d}, V, f) is defined as
$$\gamma=\frac{|POS_C(d)|}{|U|}.\tag{1}$$

Obviously, a decision system is consistent if and only if its consistency degree γ is 1.
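To make the positive region and the consistency degree concrete, here is a minimal Python sketch; the function names and the 4-object toy data are ours, not from the paper. Partitions are represented as lists of sets of objects.

```python
def positive_region(pi_C, pi_d):
    """POS_C(d): the union of the C-classes contained in some decision class."""
    pos = set()
    for X in pi_C:
        if any(X <= Y for Y in pi_d):   # X lies entirely inside one decision class
            pos |= X
    return pos

def consistency_degree(pi_C, pi_d, universe):
    """gamma = |POS_C(d)| / |U|, as in Equation (1)."""
    return len(positive_region(pi_C, pi_d)) / len(universe)

# A toy system: the block {1, 2} is consistent, {3, 4} is not.
U = {1, 2, 3, 4}
pi_C = [{1, 2}, {3, 4}]
pi_d = [{1, 2, 3}, {4}]
print(consistency_degree(pi_C, pi_d, U))   # 0.5, so this system is inconsistent
```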

3 The information quantity for information systems and decision systems

In this section, we introduce a new form of conditional information quantity for decision systems based on the equivalence relation, and give some properties of this conditional information quantity.

Definition 3.1

Given an information system IS = (U, A, V, f) and P, Q ⊆ A, $\pi_P = \{P_1, P_2,\dots, P_s\}$ is said to be finer than $\pi_Q = \{Q_1, Q_2,\dots, Q_t\}$ if for every $P_i \in \pi_P$ there exists $Q_j \in \pi_Q$ such that $P_i \subseteq Q_j$, denoted $\pi_P \preceq \pi_Q$. In this case, we also say that $\pi_Q$ is coarser than $\pi_P$. If $\pi_P \preceq \pi_Q$ and $\pi_P \neq \pi_Q$, we say $\pi_P$ is strictly finer than $\pi_Q$, denoted $\pi_P \prec \pi_Q$.

Obviously, if $B \subseteq A$, then $\pi_A \preceq \pi_B$.

Definition 3.2

Let IS = (U, A, V, f) be an information system with $\pi_A = \{X_1, X_2,\dots, X_n\}$. The information quantity of a block $X_i$ is defined as $I(X_i) = p(X_i)(1-p(X_i))$; the information quantity of $\pi_A$ is defined as
$$I(\pi_A)=\sum_{i=1}^{n}p(X_i)\big(1-p(X_i)\big),\quad\text{where } p(X_i)=\frac{|X_i|}{|U|},\ i=1,2,\dots,n.$$
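As a quick illustration of Definition 3.2, here is a minimal Python sketch (the function name is ours):

```python
def information_quantity(partition, n):
    """I(pi_A) = sum over blocks X of p(X) * (1 - p(X)), where p(X) = |X| / n."""
    return sum((len(X) / n) * (1 - len(X) / n) for X in partition)

# Six objects in three equal blocks: p(X_i) = 1/3 for each block, so I attains
# its maximum 1 - 1/n = 2/3 for n = 3 blocks (cf. Theorem 3.3 below).
print(information_quantity([{1, 2}, {3, 4}, {5, 6}], 6))   # 0.666...
# The trivial partition {U} carries no discriminating information:
print(information_quantity([{1, 2, 3, 4, 5, 6}], 6))       # 0.0
```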

Theorem 3.3

Let IS = (U, A, V, f) be an information system with $\pi_A = \{X_1, X_2,\dots, X_n\}$. The information quantity of $\pi_A$ satisfies the following properties:

  1. $0 \le I(\pi_A) \le 1-\frac{1}{n}$.

  2. $I(\pi_A)=1-\frac{1}{n}$ if and only if $p(X_i)=\frac{1}{n}$ $(i=1,2,\dots,n)$.

  3. $I(\pi_A) = 0$ if and only if $\pi_A = \{U\}$.

  4. For each $X_i, X_j \in \pi_A$, $I(X_i)+ I(X_j) \ge I(X_i \cup X_j)$.

  5. If $B \subseteq A$, then $I(\pi_B) \le I(\pi_A)$; that is, the finer the partition, the bigger its information quantity.

Proof. (1) $I(\pi_A)=\sum_{i=1}^{n}p(X_i)(1-p(X_i))=1-\sum_{i=1}^{n}p^2(X_i)$, where $\sum_{i=1}^{n}p(X_i)=1$.

Now we discuss the extremum of $\sum_{i=1}^{n}p^2(X_i)$ under the constraint $\sum_{i=1}^{n}p(X_i)=1$. Let $H(\lambda)=\sum_{i=1}^{n}p^2(X_i)+\lambda\big(\sum_{i=1}^{n}p(X_i)-1\big)$. Since
$$\frac{\partial H}{\partial\lambda}=\sum_{i=1}^{n}p(X_i)-1=0,\qquad \frac{\partial H}{\partial p(X_i)}=2p(X_i)+\lambda=0,$$
we have $p(X_i)=\frac{1}{n}$; that is, when $p(X_i)=\frac{1}{n}$, $\sum_{i=1}^{n}p^2(X_i)$ attains its minimum value $\frac{1}{n}$, so $I(\pi_A)$ attains its maximum value $1-\frac{1}{n}$. Obviously $I(\pi_A)\ge 0$. So (1) and (2) hold; (3) is obvious.

(4) Since $X_i\cap X_j=\emptyset$, we have $p(X_i\cup X_j)=p(X_i)+p(X_j)$, so
$$I(X_i)+I(X_j)-I(X_i\cup X_j)=p(X_i)\big(1-p(X_i)\big)+p(X_j)\big(1-p(X_j)\big)-p(X_i\cup X_j)\big(1-p(X_i\cup X_j)\big)=2p(X_i)p(X_j)\ge 0.$$

(5) If $B\subseteq A$, then $\pi_A\preceq\pi_B$, so each equivalence class of $\pi_B$ is made up of one or more equivalence classes of $\pi_A$. Obviously, we can obtain $\pi_B$ by combining two equivalence classes of $\pi_A$ at a time. From (4), we get $I(\pi_A) \ge I(\pi_B)$.

Theorem 3.4

Let IS = (U, A, V, f) be an information system. If X, Y ⊆ U, then I(X) + I(Y) ≥ I(X ∪ Y).

Proof. Let $\Delta = I(X) + I(Y) - I(X\cup Y)$; then
$$\begin{aligned}
\Delta &= p(X)[1-p(X)]+p(Y)[1-p(Y)]-p(X\cup Y)[1-p(X\cup Y)]\\
&= p(X)+p(Y)-p(X\cup Y)+[p(X\cup Y)]^2-[p(X)]^2-[p(Y)]^2\\
&= p(X\cap Y)+[p(X)+p(Y)-p(X\cap Y)]^2-[p(X)]^2-[p(Y)]^2\\
&= p(X\cap Y)+[p(X\cap Y)]^2+2p(X)p(Y)-2p(X)p(X\cap Y)-2p(Y)p(X\cap Y)\\
&= p(X\cap Y)\big[1-p(X\cup Y)\big]+p(X)\big[p(Y)-p(X\cap Y)\big]+p(Y)\big[p(X)-p(X\cap Y)\big].
\end{aligned}$$

Since $0\le p(X)\le 1$, $0\le p(Y)\le 1$, $0\le p(X\cup Y)\le 1$, and $p(X)\ge p(X\cap Y)$, $p(Y)\ge p(X\cap Y)$, we get $\Delta \ge 0$, that is, $I(X) + I(Y) \ge I(X\cup Y)$.

This theorem demonstrates that if we combine two blocks, the information quantity of the union never exceeds the sum of the information quantities of the two blocks.

Definition 3.5

Let DS = (U, C ∪ {d}, V, f), X ⊆ U, $\pi_d = \{Y_1, Y_2,\dots, Y_n\}$. The information quantity of a block X with respect to $\pi_d$ is defined as
$$I(\pi_d|X)=p(X)\sum_{j=1}^{n}p(Y_j|X)\big(1-p(Y_j|X)\big).$$

Definition 3.6

Let DS = (U, C ∪ {d}, V, f), $\pi_C = \{X_1, X_2,\dots, X_m\}$, $\pi_d = \{Y_1, Y_2,\dots, Y_n\}$. The conditional information quantity of $\pi_C$ with respect to $\pi_d$ is defined as
$$I(\pi_d|\pi_C)=\sum_{i=1}^{m}p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)\big(1-p(Y_j|X_i)\big),$$
where $p(X_i)=\frac{|X_i|}{|U|}$, $i=1,2,\dots,m$, and $p(Y_j|X_i)=\frac{|Y_j\cap X_i|}{|X_i|}$, $j=1,2,\dots,n$.
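Definition 3.6 translates directly into code. The sketch below (names ours) reuses the set-based partition representation from the earlier sketches and reproduces the value $I(\pi_d|\pi_C)=I(\pi_d)=2/3$ of Example 3.8 further down:

```python
def cond_information_quantity(pi_C, pi_d, n):
    """I(pi_d | pi_C) = sum_i p(X_i) * sum_j p(Y_j|X_i) * (1 - p(Y_j|X_i))."""
    total = 0.0
    for X in pi_C:
        p_X = len(X) / n
        total += p_X * sum((len(Y & X) / len(X)) * (1 - len(Y & X) / len(X))
                           for Y in pi_d)
    return total

# The partitions of Example 3.8 below, with u_i encoded as the integer i:
pi_d = [{1, 4, 7}, {2, 5, 8}, {3, 6, 9}]
pi_C = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
print(cond_information_quantity(pi_C, pi_d, 9))                 # 0.666... = I(pi_d)
print(cond_information_quantity([set(range(1, 10))], pi_d, 9))  # also 2/3
```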

Theorem 3.7

Let DS = (U, C ∪ {d}, V, f), $\pi_C = \{X_1, X_2,\dots, X_m\}$, $\pi_d = \{Y_1, Y_2,\dots, Y_n\}$. The conditional information quantity of $\pi_C$ with respect to $\pi_d$ satisfies the following properties:

  1. $I(\pi_d|X_i)+I(\pi_d|X_j)\le I(\pi_d|(X_i\cup X_j))$ $(i,j\in\{1,2,\dots,m\})$.

  2. $I(\pi_d|X_i)+I(\pi_d|X_j)= I(\pi_d|(X_i\cup X_j))$ if and only if, for each $k\in\{1,2,\dots,n\}$, $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$ $(i,j\in\{1,2,\dots,m\})$.

  3. $0\le I(\pi_d|\pi_C)\le I(\pi_d)$.

  4. $I(\pi_d|\pi_C)=0$ if and only if $\pi_C\preceq\pi_d$.

  5. If $\pi_C=\{U\}$, then $I(\pi_d|\pi_C)=I(\pi_d)$.

Proof

(1)

$$\begin{aligned}
\Delta &= I(\pi_d|(X_i\cup X_j))-\big(I(\pi_d|X_i)+I(\pi_d|X_j)\big)\\
&= p(X_i\cup X_j)\sum_{k=1}^{n}p(Y_k|X_i\cup X_j)\big(1-p(Y_k|X_i\cup X_j)\big)-p(X_i)\sum_{k=1}^{n}p(Y_k|X_i)\big(1-p(Y_k|X_i)\big)-p(X_j)\sum_{k=1}^{n}p(Y_k|X_j)\big(1-p(Y_k|X_j)\big)\\
&= \frac{|X_i\cup X_j|}{|U|}\sum_{k=1}^{n}\frac{|Y_k\cap X_i|+|Y_k\cap X_j|}{|X_i\cup X_j|}\Big(1-\frac{|Y_k\cap X_i|+|Y_k\cap X_j|}{|X_i\cup X_j|}\Big)-\frac{|X_i|}{|U|}\sum_{k=1}^{n}\frac{|Y_k\cap X_i|}{|X_i|}\Big(1-\frac{|Y_k\cap X_i|}{|X_i|}\Big)-\frac{|X_j|}{|U|}\sum_{k=1}^{n}\frac{|Y_k\cap X_j|}{|X_j|}\Big(1-\frac{|Y_k\cap X_j|}{|X_j|}\Big)\\
&= \frac{1}{|U|}\sum_{k=1}^{n}\left(\frac{|Y_k\cap X_i|^2}{|X_i|}+\frac{|Y_k\cap X_j|^2}{|X_j|}-\frac{(|Y_k\cap X_i|+|Y_k\cap X_j|)^2}{|X_i\cup X_j|}\right).
\end{aligned}$$

Let $|X_i|=x$, $|X_j|=y$, $|Y_k\cap X_i|=a$, $|Y_k\cap X_j|=b$, and denote
$$f_k=\frac{|Y_k\cap X_i|^2}{|X_i|}+\frac{|Y_k\cap X_j|^2}{|X_j|}-\frac{(|Y_k\cap X_i|+|Y_k\cap X_j|)^2}{|X_i\cup X_j|};$$
then
$$f_k=\frac{a^2}{x}+\frac{b^2}{y}-\frac{(a+b)^2}{x+y}=\frac{a^2y(x+y)+b^2x(x+y)-(a+b)^2xy}{xy(x+y)}=\frac{a^2y^2+b^2x^2-2abxy}{xy(x+y)}=\frac{(ay-bx)^2}{xy(x+y)}\ge 0.$$

So $I(\pi_d|X_i)+I(\pi_d|X_j)\le I(\pi_d|(X_i\cup X_j))$.

(2) According to the proof of (1), $I(\pi_d|X_i)+I(\pi_d|X_j)=I(\pi_d|(X_i\cup X_j))$ if and only if $ay=bx$ for each $k$, that is, if and only if $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$ for each $k\in\{1,2,\dots,n\}$.

(3) Obviously $I(\pi_d|\pi_C)\ge 0$, and when $\pi_C\preceq\pi_d$, $I(\pi_d|\pi_C)=0$. According to (1), if we combine two condition classes, the information quantity of the condition blocks with respect to $\pi_d$ increases, so $I(\pi_d|\pi_C)\le I(\pi_d|\{U\})=I(\pi_d)$.

(4) If $I(\pi_d|\pi_C)=0$, that is, $\sum_{i=1}^{m}p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0$, then for every $X_i\in\pi_C$ we have $p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0$. Since $p(X_i)>0$, for each $Y_j\in\pi_d$ we have $p(Y_j|X_i)(1-p(Y_j|X_i))=0$, that is, $p(Y_j|X_i)=0$ or $p(Y_j|X_i)=1$; thus $X_i\cap Y_j=\emptyset$ or $X_i\subseteq Y_j$, i.e., $\pi_C\preceq\pi_d$. The converse is obvious.

(5) From the definition of $I(\pi_d|\pi_C)$, we can easily see that if $\pi_C=\{U\}$, then $I(\pi_d|\pi_C)=I(\pi_d)$. Note that the converse of (5) does not hold.

Example 3.8

Let DS = (U, C ∪ {d}, V, f) be a decision system with $U = \{u_1, u_2,\dots, u_9\}$, $\pi_d = \{\{u_1, u_4, u_7\},\{u_2, u_5, u_8\},\{u_3, u_6, u_9\}\}$ and $\pi_C = \{\{u_1, u_2, u_3\},\{u_4, u_5, u_6\},\{u_7, u_8, u_9\}\}$. Obviously $I(\pi_d|\{U\})=I(\pi_d|\pi_C)=I(\pi_d)$, but $\pi_C\neq\{U\}$.

Corollary 3.9

Let DS = (U, C ∪ {d}, V, f) be a decision system. If $B_1 \subseteq B_2 \subseteq C$, then $I(\pi_d|\pi_{B_2})\le I(\pi_d|\pi_{B_1})$.

Corollary 3.10

Let DS = (U, C ∪ {d}, V, f) be a decision system and $C_1, C_2 \subseteq C$. If $I(\pi_d|\pi_{C_1})=k_1$ and $I(\pi_d|\pi_{C_2})=k_2$, then (1) $I(\pi_d|\pi_{C_1\cup C_2})\le\min(k_1,k_2)$; (2) $I(\pi_d|\pi_{C_1\cap C_2})\ge\max(k_1,k_2)$.

Proof

Since $\pi_{C_1\cup C_2}\preceq\pi_{C_1}\preceq\pi_{C_1\cap C_2}$ and $\pi_{C_1\cup C_2}\preceq\pi_{C_2}\preceq\pi_{C_1\cap C_2}$, from Corollary 3.9 we have $I(\pi_d|\pi_{C_1\cup C_2})\le\min(k_1,k_2)$ and $I(\pi_d|\pi_{C_1\cap C_2})\ge\max(k_1,k_2)$.

Definition 3.11

Let DS = (U, C ∪ {d}, V, f). If $X\in\pi_C$ and $|\lambda(X)|> 1$, then X is said to be an inconsistent equivalence block; otherwise, it is said to be a consistent equivalence block, where $\lambda(X)=\{f(u,d)\mid u\in X\}$ and $|\lambda(X)|$ is the cardinality of $\lambda(X)$.

An inconsistent equivalence block describes a group of C -indistinguishable objects that have a divergence in their decision-making, while a consistent equivalence block depicts a collection of C -definable objects that share the same decision-making.

Definition 3.12

Let DS = (U, C ∪ {d}, V, f). The inconsistent and consistent block families of $\pi_C$ are denoted by $\pi_C^{inc}=\{X\in\pi_C \mid |\lambda(X)|>1\}$ and $\pi_C^{con}=\{X\in\pi_C \mid |\lambda(X)|=1\}$, respectively.

The inconsistent block family collects all of the inconsistent equivalence blocks from $\pi_C$, whereas the consistent block family gathers all of the consistent equivalence blocks from $\pi_C$. It is evident that $\pi_C^{inc}\cup\pi_C^{con}=\pi_C$ and $\pi_C^{inc}\cap\pi_C^{con}=\emptyset$.

Theorem 3.13

Let DS = (U, C ∪ {d}, V, f), $\pi_C=\{X_1,X_2,\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_n\}$. $X_i\in\pi_C$ is a consistent block if and only if the information quantity of the block $X_i$ with respect to $\pi_d$ is zero; that is, $X_i\in\pi_C^{con}$ iff $I(\pi_d|X_i)=p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0$.

Proof

$X_i \in \pi_C$ is a consistent block iff there exists $Y_j \in \pi_d$ such that $X_i \subseteq Y_j$ and, for each $Y_k\in\pi_d$ $(k\neq j)$, $X_i\cap Y_k=\emptyset$. Moreover, $X_i\subseteq Y_j$ iff $p(Y_j|X_i)=1$, and $X_i\cap Y_k=\emptyset$ iff $p(Y_k|X_i)=0$. So $X_i\in\pi_C^{con}$ iff $I(\pi_d|X_i)=p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))=0$.

Corollary 3.14

Let DS = (U, C ∪ {d}, V, f), $\pi_C=\{X_1,X_2,\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_n\}$; then $I(\pi_d|\pi_C)=\sum_{X_i\in\pi_C^{inc}}p(X_i)\sum_{j=1}^{n}p(Y_j|X_i)(1-p(Y_j|X_i))$.

Corollary 3.15

Let DS = (U, C ∪ {d}, V, f), $\pi_C=\{X_1,X_2,\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_n\}$. DS is a consistent decision system iff $I(\pi_d|\pi_C)=0$.

Corollary 3.16

In a decision system DS = (U, C ∪ {d}, V, f), $a\in C$ is called d-dispensable if $I(\pi_d|\pi_{C-\{a\}})=I(\pi_d|\pi_C)$.

Corollary 3.17

Let DS = (U, C ∪ {d}, V, f) be a consistent decision system. $a\in C$ is d-dispensable iff $\forall X\in\pi_{C-\{a\}}$, $I(\pi_d|X)=0$.

Corollary 3.18

Let DS = (U, C ∪ {d}, V, f) be a consistent decision system. The condition attribute set C is indispensable with respect to d (i.e., no $a\in C$ is d-dispensable) iff $\forall a\in C$, $\exists X\in\pi_{C-\{a\}}$ such that $I(\pi_d|X)\neq 0$.

Definition 3.19

Let DS = (U, C ∪ {d}, V, f) be a decision system, $B\subseteq C$, $\forall a\in B$. The significance measure (inner measure) of a in B is defined as
$$Sig^{inner}(a,B,d)=I(\pi_d|\pi_{B-\{a\}})-I(\pi_d|\pi_B).$$

Definition 3.20

Let DS = (U, C ∪ {d}, V, f) be a decision system, $B\subseteq C$, $\forall a\in C-B$. The significance measure (outer measure) of a in B is defined as
$$Sig^{outer}(a,B,d)=I(\pi_d|\pi_B)-I(\pi_d|\pi_{B\cup\{a\}}).$$
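Both measures compare conditional information quantities of partitions induced by attribute subsets. Here is a sketch under the assumption that the decision table is a dict mapping each object to a dict of its attribute values; it reuses cond_information_quantity from the earlier sketch, and all names are ours:

```python
def partition_by(objs, table, attrs):
    """Partition the objects by their value tuples on the given attributes."""
    blocks = {}
    for u in objs:
        blocks.setdefault(tuple(table[u][a] for a in attrs), set()).add(u)
    return list(blocks.values())

def sig_inner(a, B, objs, table, pi_d):
    """Sig_inner(a, B, d) = I(pi_d | pi_{B - {a}}) - I(pi_d | pi_B)."""
    n = len(objs)
    return (cond_information_quantity(partition_by(objs, table, sorted(B - {a})), pi_d, n)
            - cond_information_quantity(partition_by(objs, table, sorted(B)), pi_d, n))

def sig_outer(a, B, objs, table, pi_d):
    """Sig_outer(a, B, d) = I(pi_d | pi_B) - I(pi_d | pi_{B + {a}})."""
    n = len(objs)
    return (cond_information_quantity(partition_by(objs, table, sorted(B)), pi_d, n)
            - cond_information_quantity(partition_by(objs, table, sorted(B | {a})), pi_d, n))
```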

Theorem 3.21

In a decision system DS = (U, C ∪ {d}, V, f), if $a\in C$ is d-dispensable, then $POS_{C-\{a\}}(d)=POS_C(d)$.

Proof

On the one hand, according to Theorem 3.7, if we combine two condition classes of a decision table, the conditional information quantity increases monotonically, and it stays unchanged only if the two condition classes $X_i$ and $X_j$ satisfy $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$ for each $Y_k\in\pi_d$. That is, if $\pi_C=\{X_1,X_2,\dots,X_i,\dots,X_j,\dots,X_m\}$, $\pi_B=\{X_1,X_2,\dots,X_{i-1},X_{i+1},\dots,X_{j-1},X_{j+1},\dots,X_m,X_i\cup X_j\}$ and $I(\pi_d|\pi_C)=I(\pi_d|\pi_B)$, then for each $Y_k\in\pi_d$ we have $\frac{|Y_k\cap X_i|}{|X_i|}=\frac{|Y_k\cap X_j|}{|X_j|}$, and therefore $POS_B(d)=POS_C(d)$.

On the other hand, since $\pi_C\preceq\pi_{C-\{a\}}$, $\pi_{C-\{a\}}$ can be obtained by combining classes of $\pi_C$. According to the analysis above, if $I(\pi_d|\pi_C)=I(\pi_d|\pi_{C-\{a\}})$, we must have $POS_C(d)=POS_{C-\{a\}}(d)$.

Definition 3.22

Let DS = (U, C ∪ {d}, V, f) be a decision system. $B\subseteq C$ is a relative reduct of C relative to the decision attribute d if

(1) $I(\pi_d|\pi_C)=I(\pi_d|\pi_B)$;

(2) $\forall B'\subset B$, $I(\pi_d|\pi_{B'})\neq I(\pi_d|\pi_B)$.

Theorem 3.23

Let DS = (U, C ∪ {d}, V, f) be a consistent decision system. Then $B\subseteq C$ is a relative reduct of C relative to the decision attribute d if and only if (1) $POS_C(d)=POS_B(d)$; (2) $\forall B'\subset B$, $POS_{B'}(d)\neq POS_B(d)$.

4 Updating mechanism of information quantity for decision systems

Given a dynamic decision table, based on the information quantity, this section presents the updating mechanisms of the information quantity for dynamically varying data values.

Theorem 4.1

Let DS = (U, C ∪ {d}, V, f) be a decision system and $\pi_d = \{Y_1,Y_2,\dots,Y_q,\dots,Y_n\}$. If $x\in X$ $(|X|> 1)$ and $x\in Y_q$ $(q \in \{1,2,\dots,n\})$, then when x exits from the system, the new information quantity of the block X is

$$I(\pi_d'|X')=\frac{|U|\,|X|}{(|U|-1)(|X|-1)}I(\pi_d|X)-\frac{2}{(|X|-1)(|U|-1)}\big(|X|-|Y_q\cap X|\big),$$

where $X'=X-\{x\}$ and $\pi_d'=\{Y_1,Y_2,\dots,Y_q'=Y_q-\{x\},\dots,Y_n\}$.

Proof
$$\begin{aligned}
I(\pi_d'|X') &= p(X')\sum_{j=1}^{n}p(Y_j'|X')\big(1-p(Y_j'|X')\big)\\
&= \frac{|X|-1}{|U|-1}\sum_{j=1,j\neq q}^{n}\frac{|Y_j\cap X|}{|X|-1}\Big(1-\frac{|Y_j\cap X|}{|X|-1}\Big)+\frac{|X|-1}{|U|-1}\cdot\frac{|Y_q\cap X|-1}{|X|-1}\Big(1-\frac{|Y_q\cap X|-1}{|X|-1}\Big)\\
&= \frac{1}{|U|-1}\Big(|X|-1-\frac{\sum_{j=1}^{n}|Y_j\cap X|^2-2|Y_q\cap X|+1}{|X|-1}\Big)\\
&= \frac{|U|\,|X|}{(|U|-1)(|X|-1)}I(\pi_d|X)-\frac{2\big(|X|-|Y_q\cap X|\big)}{(|X|-1)(|U|-1)},
\end{aligned}$$
where the last step uses $\sum_{j=1}^{n}|Y_j\cap X|^2=|X|^2-|X|\,|U|\,I(\pi_d|X)$, which follows from Definition 3.5.
Theorem 4.2

Let DS = (U, C ∪ {d}, V, f) be a decision system and $\pi_d = \{Y_1,Y_2,\dots,Y_q,\dots,Y_n\}$. If $x\notin X$ and $x\in Y_q$ $(q \in \{1,2,\dots,n\})$, then when x exits from the system, the new information quantity of the block X is

$$I(\pi_d'|X)=\frac{|U|}{|U|-1}I(\pi_d|X),$$

where $\pi_d'=\{Y_1,Y_2,\dots,Y_q'=Y_q-\{x\},\dots,Y_n\}$.

Proof
$$I(\pi_d'|X)=\frac{|X|}{|U|-1}\sum_{j=1}^{n}p(Y_j'|X)\big(1-p(Y_j'|X)\big)=\frac{|U|}{|U|-1}\cdot\frac{|X|}{|U|}\sum_{j=1}^{n}p(Y_j|X)\big(1-p(Y_j|X)\big)=\frac{|U|}{|U|-1}I(\pi_d|X),$$
since $x\notin X$ implies $Y_j'\cap X=Y_j\cap X$ for every j.
Theorem 4.3

Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C = \{X_1,X_2,\dots,X_m\}$, $\pi_d = \{Y_1,Y_2,\dots,Y_n\}$. If $x\in X_p$ $(|X_p|>1)$ and $x\in Y_q$ $(p\in\{1,2,\dots,m\}$, $q\in\{1,2,\dots,n\})$, then when x exits from the system, the new information quantity is

$$I(\pi_d'|\pi_C')=\frac{|U|}{|U|-1}\Big[I(\pi_d|\pi_C)+\frac{1}{|X_p|-1}I(\pi_d|X_p)-\frac{2}{(|X_p|-1)|U|}\big(|X_p|-|Y_q\cap X_p|\big)\Big],$$

where $\pi_C'=\{X_1,X_2,\dots,X_p'=X_p-\{x\},\dots,X_m\}$ and $\pi_d'=\{Y_1,Y_2,\dots,Y_q'=Y_q-\{x\},\dots,Y_n\}$.

Proof. According to Theorem 4.1 and Theorem 4.2, we have

$$\begin{aligned}
I(\pi_d'|\pi_C') &= \sum_{i=1}^{m}I(\pi_d'|X_i') = \sum_{i=1,i\neq p}^{m}I(\pi_d'|X_i)+I(\pi_d'|X_p')\\
&= \frac{|U|}{|U|-1}\sum_{i=1,i\neq p}^{m}I(\pi_d|X_i)+\frac{|U|\,|X_p|}{(|U|-1)(|X_p|-1)}I(\pi_d|X_p)-\frac{2\big(|X_p|-|Y_q\cap X_p|\big)}{(|X_p|-1)(|U|-1)}\\
&= \frac{|U|}{|U|-1}\sum_{i=1}^{m}I(\pi_d|X_i)-\frac{|U|}{|U|-1}I(\pi_d|X_p)+\frac{|U|\,|X_p|}{(|U|-1)(|X_p|-1)}I(\pi_d|X_p)-\frac{2\big(|X_p|-|Y_q\cap X_p|\big)}{(|X_p|-1)(|U|-1)}\\
&= \frac{|U|}{|U|-1}I(\pi_d|\pi_C)+\frac{|U|}{(|U|-1)(|X_p|-1)}I(\pi_d|X_p)-\frac{2\big(|X_p|-|Y_q\cap X_p|\big)}{(|X_p|-1)(|U|-1)}\\
&= \frac{|U|}{|U|-1}\Big[I(\pi_d|\pi_C)+\frac{1}{|X_p|-1}I(\pi_d|X_p)-\frac{2}{(|X_p|-1)|U|}\big(|X_p|-|Y_q\cap X_p|\big)\Big].
\end{aligned}$$
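Theorem 4.3 is what makes the incremental approach pay off: the new conditional information quantity follows from a few stored counts instead of a full pass over the data. Below is a minimal sketch of the update formula, checked against direct recomputation via cond_information_quantity from the earlier sketch; the function name and the 4-object toy system are ours.

```python
def iq_after_removal(I_old, I_Xp, size_Xp, overlap, n):
    """Theorem 4.3: x leaves block Xp (|Xp| > 1) and decision class Yq.
    overlap = |Yq intersect Xp| and n = |U|, both taken before the removal."""
    return (n / (n - 1)) * (I_old + I_Xp / (size_Xp - 1)
                            - 2 * (size_Xp - overlap) / ((size_Xp - 1) * n))

# U = {1,2,3,4}, pi_C = {{1,2},{3,4}}, pi_d = {{1,3},{2,4}}; remove x = 1.
I_old = 0.5    # I(pi_d | pi_C) on the original system
I_Xp = 0.25    # I(pi_d | {1,2}), the block that contains x
incremental = iq_after_removal(I_old, I_Xp, size_Xp=2, overlap=1, n=4)
direct = cond_information_quantity([{2}, {3, 4}], [{3}, {2, 4}], 3)
assert abs(incremental - direct) < 1e-12   # both equal 1/3
```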
Corollary 4.4

Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C = \{X_1,X_2,\dots,X_m\}$, $\pi_d = \{Y_1,Y_2,\dots,Y_n\}$. If $x\in X_p$ $(p \in \{1,2,\dots,m\})$ and $X_p\in\pi_C^{con}$, then when x exits from the system, $I(\pi_d'|\pi_C')=\frac{|U|}{|U|-1}I(\pi_d|\pi_C)$.

Theorem 4.5

Let DS = (U, C ∪ {d}, V, f) be a decision system and $\pi_d = \{Y_1,Y_2,\dots,Y_n\}$. When a new object x enters the system, if x is added to the block X and to the decision class $Y_q$ $(q \in \{1,2,\dots,n\})$, then

$$I(\pi_d'|X')=\frac{|U|\,|X|}{(|U|+1)(|X|+1)}I(\pi_d|X)+\frac{2}{(|X|+1)(|U|+1)}\big(|X|-|Y_q\cap X|\big),$$

where $X'=X\cup\{x\}$ and $\pi_d'=\{Y_1,Y_2,\dots,Y_q'=Y_q\cup\{x\},\dots,Y_n\}$.

Proof
$$\begin{aligned}
I(\pi_d'|X') &= p(X')\sum_{j=1}^{n}p(Y_j'|X')\big(1-p(Y_j'|X')\big)\\
&= \frac{|X|+1}{|U|+1}\sum_{j=1,j\neq q}^{n}\frac{|Y_j\cap X|}{|X|+1}\Big(1-\frac{|Y_j\cap X|}{|X|+1}\Big)+\frac{|X|+1}{|U|+1}\cdot\frac{|Y_q\cap X|+1}{|X|+1}\Big(1-\frac{|Y_q\cap X|+1}{|X|+1}\Big)\\
&= \frac{1}{|U|+1}\Big(|X|+1-\frac{\sum_{j=1}^{n}|Y_j\cap X|^2+2|Y_q\cap X|+1}{|X|+1}\Big)\\
&= \frac{|U|\,|X|}{(|U|+1)(|X|+1)}I(\pi_d|X)+\frac{2\big(|X|-|Y_q\cap X|\big)}{(|X|+1)(|U|+1)},
\end{aligned}$$
where the last step again uses $\sum_{j=1}^{n}|Y_j\cap X|^2=|X|^2-|X|\,|U|\,I(\pi_d|X)$.
Theorem 4.6

Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C = \{X_1,X_2,\dots,X_m\}$, $\pi_d = \{Y_1,Y_2,\dots,Y_n\}$. When a new object x is added to the system, joining the block $X_p$ and the decision class $Y_q$, then

$$I(\pi_d'|\pi_C')=\frac{|U|}{|U|+1}\Big[I(\pi_d|\pi_C)-\frac{1}{|X_p|+1}I(\pi_d|X_p)+\frac{2}{(|X_p|+1)|U|}\big(|X_p|-|Y_q\cap X_p|\big)\Big],$$

where $\pi_C'=\{X_1,X_2,\dots,X_p'=X_p\cup\{x\},\dots,X_m\}$ and $\pi_d'=\{Y_1,Y_2,\dots,Y_q'=Y_q\cup\{x\},\dots,Y_n\}$.

Proof

According to Theorem 4.5, we have

$$\begin{aligned}
I(\pi_d'|\pi_C') &= \sum_{i=1}^{m}I(\pi_d'|X_i') = \sum_{i=1,i\neq p}^{m}I(\pi_d'|X_i)+I(\pi_d'|X_p')\\
&= \frac{|U|}{|U|+1}\sum_{i=1,i\neq p}^{m}I(\pi_d|X_i)+\frac{|U|\,|X_p|}{(|U|+1)(|X_p|+1)}I(\pi_d|X_p)+\frac{2\big(|X_p|-|Y_q\cap X_p|\big)}{(|X_p|+1)(|U|+1)}\\
&= \frac{|U|}{|U|+1}\sum_{i=1}^{m}I(\pi_d|X_i)-\frac{|U|}{|U|+1}I(\pi_d|X_p)+\frac{|U|\,|X_p|}{(|U|+1)(|X_p|+1)}I(\pi_d|X_p)+\frac{2\big(|X_p|-|Y_q\cap X_p|\big)}{(|X_p|+1)(|U|+1)}\\
&= \frac{|U|}{|U|+1}I(\pi_d|\pi_C)-\frac{|U|}{(|U|+1)(|X_p|+1)}I(\pi_d|X_p)+\frac{2\big(|X_p|-|Y_q\cap X_p|\big)}{(|X_p|+1)(|U|+1)}\\
&= \frac{|U|}{|U|+1}\Big[I(\pi_d|\pi_C)-\frac{1}{|X_p|+1}I(\pi_d|X_p)+\frac{2}{(|X_p|+1)|U|}\big(|X_p|-|Y_q\cap X_p|\big)\Big].
\end{aligned}$$
Corollary 4.7

Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C = \{X_1,X_2,\dots,X_m\}$, $\pi_d = \{Y_1,Y_2,\dots,Y_n\}$. When a new object x enters the system, if x joins $X_p$ $(p \in \{1,2,\dots,m\})$ and the updated block $X_p'=X_p\cup\{x\}\in\pi_{C'}^{con}$, then $I(\pi_d'|\pi_C')=\frac{|U|}{|U|+1}I(\pi_d|\pi_C)$.
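The insertion case mirrors the deletion case. Here is a sketch of the Theorem 4.6 update, again checked against direct recomputation with cond_information_quantity from the earlier sketch (names and toy data ours):

```python
def iq_after_insertion(I_old, I_Xp, size_Xp, overlap, n):
    """Theorem 4.6: a new object joins block Xp and decision class Yq.
    overlap = |Yq intersect Xp| and n = |U|, both taken before the insertion."""
    return (n / (n + 1)) * (I_old - I_Xp / (size_Xp + 1)
                            + 2 * (size_Xp - overlap) / ((size_Xp + 1) * n))

# U = {2,3,4}, pi_C = {{2},{3,4}}, pi_d = {{3},{2,4}}; a new object 5
# joins the condition block {3,4} and the decision class {2,4}.
incremental = iq_after_insertion(I_old=1/3, I_Xp=1/3, size_Xp=2, overlap=1, n=3)
direct = cond_information_quantity([{2}, {3, 4, 5}], [{3}, {2, 4, 5}], 4)
assert abs(incremental - direct) < 1e-12   # both equal 1/3
```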

In the following, we discuss how the information quantity is changed if attribute values of one object x are varied.

Theorem 4.8

Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C=\{X_1,X_2,\dots,X_{p_1},\dots,X_{p_2},\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_{q_1},\dots,Y_{q_2},\dots,Y_n\}$, $x\in X_{p_1}$ $(|X_{p_1}|>1)$, $x\in Y_{q_1}$ $(p_1,p_2\in\{1,2,\dots,m\}$, $q_1,q_2\in\{1,2,\dots,n\})$. If the object x is changed to x′ and, in the new decision system, $\pi_C'=\{X_1,X_2,\dots,X_{p_1}'=X_{p_1}-\{x\},\dots,X_{p_2}'=X_{p_2}\cup\{x'\},\dots,X_m\}$ and $\pi_d'=\{Y_1,Y_2,\dots,Y_{q_1}'=Y_{q_1}-\{x\},\dots,Y_{q_2}'=Y_{q_2}\cup\{x'\},\dots,Y_n\}$, then

$$I(\pi_d'|\pi_C')=I(\pi_d|\pi_C)+\frac{1}{|X_{p_1}|-1}I(\pi_d|X_{p_1})-\frac{2}{(|X_{p_1}|-1)|U|}\big(|X_{p_1}|-|Y_{q_1}\cap X_{p_1}|\big)-\frac{1}{|X_{p_2}|+1}I(\pi_d|X_{p_2})+\frac{2}{(|X_{p_2}|+1)|U|}\big(|X_{p_2}|-|Y_{q_2}\cap X_{p_2}|\big).$$
Proof

When x changes to x′, $\pi_C=\{X_1,X_2,\dots,X_{p_1},\dots,X_{p_2},\dots,X_m\}$ and $\pi_d=\{Y_1,Y_2,\dots,Y_{q_1},\dots,Y_{q_2},\dots,Y_n\}$ turn into $\pi_C'$ and $\pi_d'$ above.

This process can be divided into two steps: first, x exits from the system, so that $\pi_C$, $\pi_d$ turn into $\pi_C''=\{X_1,X_2,\dots,X_{p_1}''=X_{p_1}-\{x\},\dots,X_{p_2},\dots,X_m\}$, $\pi_d''=\{Y_1,Y_2,\dots,Y_{q_1}''=Y_{q_1}-\{x\},\dots,Y_{q_2},\dots,Y_n\}$ over the universe $U-\{x\}$; then x′ enters the system, so that $\pi_C''$, $\pi_d''$ turn into $\pi_C'$, $\pi_d'$.

According to Theorem 4.3 (first step) and Theorem 4.6 (second step, applied on the universe of size $|U|-1$), we have
$$I(\pi_d''|\pi_C'')=\frac{|U|}{|U|-1}\Big[I(\pi_d|\pi_C)+\frac{1}{|X_{p_1}|-1}I(\pi_d|X_{p_1})-\frac{2}{(|X_{p_1}|-1)|U|}\big(|X_{p_1}|-|Y_{q_1}\cap X_{p_1}|\big)\Big],$$
$$I(\pi_d'|\pi_C')=\frac{|U|-1}{(|U|-1)+1}\Big[I(\pi_d''|\pi_C'')-\frac{1}{|X_{p_2}|+1}I(\pi_d''|X_{p_2})+\frac{2}{(|X_{p_2}|+1)(|U|-1)}\big(|X_{p_2}|-|Y_{q_2}\cap X_{p_2}|\big)\Big].$$

Since $I(\pi_d''|X_{p_2})=\frac{|U|}{|U|-1}I(\pi_d|X_{p_2})$ by Theorem 4.2, substituting the first equation into the second yields
$$I(\pi_d'|\pi_C')=I(\pi_d|\pi_C)+\frac{1}{|X_{p_1}|-1}I(\pi_d|X_{p_1})-\frac{2}{(|X_{p_1}|-1)|U|}\big(|X_{p_1}|-|Y_{q_1}\cap X_{p_1}|\big)-\frac{1}{|X_{p_2}|+1}I(\pi_d|X_{p_2})+\frac{2}{(|X_{p_2}|+1)|U|}\big(|X_{p_2}|-|Y_{q_2}\cap X_{p_2}|\big).$$
Corollary 4.9

Let DS = (U, C ∪ {d}, V, f) be a decision system, $\pi_C=\{X_1,X_2,\dots,X_{p_1},\dots,X_{p_2},\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_{q_1},\dots,Y_{q_2},\dots,Y_n\}$. When the object x is changed to x′, with $x\in X_{p_1}$ and $x'\in X_{p_2}'$, if $X_{p_1}\in\pi_C^{con}$ and $X_{p_2}'\in\pi_{C'}^{con}$, then $I(\pi_d'|\pi_C')=I(\pi_d|\pi_C)$.
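For a single value change, Theorem 4.8 combines the deletion and insertion updates so that only the two affected blocks are touched. A sketch, checked against direct recomputation with cond_information_quantity from the earlier sketch (names and toy data ours):

```python
def iq_after_value_change(I_old, I_Xp1, s1, ov1, I_Xp2, s2, ov2, n):
    """Theorem 4.8: x moves from (Xp1, Yq1) to (Xp2, Yq2); |U| = n is unchanged.
    s1 = |Xp1|, ov1 = |Yq1 intersect Xp1|, s2 = |Xp2|, ov2 = |Yq2 intersect Xp2|,
    all taken before the change."""
    return (I_old
            + I_Xp1 / (s1 - 1) - 2 * (s1 - ov1) / ((s1 - 1) * n)
            - I_Xp2 / (s2 + 1) + 2 * (s2 - ov2) / ((s2 + 1) * n))

# U = {1,2,3,4}, pi_C = {{1,2},{3,4}}, pi_d = {{1,3},{2,4}};
# object 1 leaves block {1,2} / class {1,3} and joins block {3,4} / class {2,4}.
incremental = iq_after_value_change(0.5, 0.25, 2, 1, 0.25, 2, 1, 4)
direct = cond_information_quantity([{2}, {1, 3, 4}], [{3}, {1, 2, 4}], 4)
assert abs(incremental - direct) < 1e-12   # both equal 1/3
```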

5 Attribute reduction algorithm for decision systems with dynamically varying attribute values

Based on the updating mechanisms of the information quantity, this section introduces an attribute reduction algorithm based on the information quantity for decision systems with dynamically varying attribute values. In rough set theory, the core is the intersection of all reducts of a given table, and core attributes are considered the indispensable attributes in a reduct. First, we give an algorithm to obtain the core of a dynamic decision system.

Input: A decision system DS = (U, C ∪ {d}, V, f) and an object $x\in U$ that is changed to x′.

Output: The core $Core_{U_x}$ on $U_x$, where $U_x$ denotes the updated universe with $x\in U$ changed to x′, and $Core_{U_x}$ is the core of the decision system $DS'=(U_x, C\cup\{d\}, V, f)$.

Step 1 When x is changed to x′, compute $\pi_C=\{X_1,X_2,\dots,X_{p_1},\dots,X_{p_2},\dots,X_m\}$, $\pi_d=\{Y_1,Y_2,\dots,Y_{q_1},\dots,Y_{q_2},\dots,Y_n\}$ and $\pi_C'=\{X_1,X_2,\dots,X_{p_1}-\{x\},\dots,X_{p_2}\cup\{x'\},\dots,X_m\}$, $\pi_d'=\{Y_1,Y_2,\dots,Y_{q_1}-\{x\},\dots,Y_{q_2}\cup\{x'\},\dots,Y_n\}$.

Step 2 Compute $I(\pi_d'|\pi_C')$ (by Theorem 4.8).

Step 3 Set $Core_{U_x}=\emptyset$; for each $a\in C$:

(1) compute $\pi_{C-\{a\}}=\{Z_1,Z_2,\dots,Z_{t_1},\dots,Z_{t_2},\dots,Z_s\}$ and $\pi_{C-\{a\}}'=\{Z_1,Z_2,\dots,Z_{t_1}-\{x\},\dots,Z_{t_2}\cup\{x'\},\dots,Z_s\}$;

(2) compute $I(\pi_d'|\pi_{C-\{a\}}')$;

(3) if $I(\pi_d'|\pi_{C-\{a\}}')\neq I(\pi_d'|\pi_C')$, then $Core_{U_x}=Core_{U_x}\cup\{a\}$.

Step 4 Return $Core_{U_x}$.
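Here is a compact Python rendering of Steps 1-4. For clarity this sketch recomputes each conditional information quantity directly via partition_by and cond_information_quantity from the earlier sketches, whereas the paper's algorithm would obtain the primed quantities through the Theorem 4.8 update; the function names and the exact-equality tolerance are ours.

```python
def core_after_value_change(table, C, d, x, new_values, tol=1e-12):
    """Sketch of Steps 1-4: the core of the decision system after x changes to x'."""
    table = {u: dict(row) for u, row in table.items()}
    table[x].update(new_values)            # Step 1: apply the value change
    objs = set(table)
    n = len(objs)
    pi_d = partition_by(objs, table, [d])
    I_C = cond_information_quantity(partition_by(objs, table, sorted(C)), pi_d, n)
    core = set()                           # Step 3
    for a in C:
        I_drop = cond_information_quantity(
            partition_by(objs, table, sorted(C - {a})), pi_d, n)
        if abs(I_drop - I_C) > tol:        # dropping a changes I, so a is indispensable
            core.add(a)
    return core                            # Step 4
```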

Based on the updating mechanisms of the information quantity, an attribute reduction algorithm for decision systems with dynamically varying attribute values is proposed in the following. In this algorithm, the existing reduction result is one of the inputs and is used to find the new reduct after the data change.

Input: A decision system DS = (U, C ∪ {d}, V, f), a reduct $RED_U$ on U, and the object $x\in U$ that is changed to x′.

Output: An attribute reduct $RED_{U_x}$ on $U_x$.

Step 1 $B=Core_{U_x}$.

Step 2 Compute $I(\pi_d'|\pi_B')$; if $I(\pi_d'|\pi_B')=I(\pi_d'|\pi_C')$, then $RED_{U_x}=B$ and turn to Step 4; else turn to Step 3.

Step 3 While $I(\pi_d'|\pi_B')\neq I(\pi_d'|\pi_C')$ do

{ for each $a\in C-B$, compute $Sig^{outer}_{U_x}(a,B,d)$;

select $a_0$ with $Sig^{outer}_{U_x}(a_0,B,d)=\max\{Sig^{outer}_{U_x}(a,B,d)\mid a\in C-B\}$;

$B\leftarrow B\cup\{a_0\}$.

}

Step 4 For each $a\in B$ do

{ compute $Sig^{inner}_{U_x}(a,B,d)$;

if $Sig^{inner}_{U_x}(a,B,d)=0$, then $B\leftarrow B-\{a\}$. }

Step 5 $RED_{U_x}=B$; return $RED_{U_x}$ and end.

In this algorithm, we first obtain the core of the dynamic decision system; then we gradually add the attribute with the biggest outer significance until $I(\pi_d'|\pi_B')=I(\pi_d'|\pi_C')$; finally, Step 4 removes the redundant attributes so that $RED_{U_x}$ is a relative reduct.
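The whole procedure can be sketched as follows, with the same recomputing simplification as before (the paper would evaluate the primed quantities incrementally). B starts from the core, grows by outer significance, and is then pruned by inner significance; all names are ours.

```python
def reduct_after_value_change(table, C, d, core, tol=1e-12):
    """Sketch of Steps 1-5: grow B from the core until I(pi_d|pi_B) matches
    I(pi_d|pi_C), then drop attributes whose inner significance is zero."""
    objs = set(table)
    n = len(objs)
    pi_d = partition_by(objs, table, [d])

    def iq(attrs):
        return cond_information_quantity(
            partition_by(objs, table, sorted(attrs)), pi_d, n)

    target = iq(C)
    B = set(core)                                          # Step 1
    while abs(iq(B) - target) > tol:                       # Steps 2-3
        a0 = max(C - B, key=lambda a: iq(B) - iq(B | {a})) # biggest outer significance
        B.add(a0)
    for a in list(B):                                      # Step 4
        if len(B) > 1 and abs(iq(B - {a}) - iq(B)) < tol:  # inner significance is zero
            B.discard(a)
    return B                                               # Step 5
```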

6 Conclusion

The incremental technique is an effective way to maintain knowledge in a dynamic environment. Attribute selection for dynamic data sets is still a challenging issue in the field of artificial intelligence. In this paper, we put forward the information quantity for information systems and decision systems according to the information measure proposed by Professor Hu Guoding, and we discuss the updating mechanism of the information quantity for decision systems. Further, we give an attribute reduction algorithm for decision tables with dynamically varying attribute values. It should be pointed out that the updating mechanisms of the information quantity introduced in this paper are only applicable when data vary one object at a time, whereas in applications many real data sets vary in groups; this creates difficulties for the proposed feature selection algorithm. In our further work, we will focus on improving the incremental algorithm so that it can update knowledge when several objects vary simultaneously. Furthermore, as a decision system consists of the objects, the attributes and the domains of attribute values, all of these elements may change over time in a dynamic environment. In the future, the variation of attributes and of the domains of attribute values in decision systems will also be taken into consideration in terms of incrementally updating knowledge.

References

[1] Hu F., Wang G.Y., Huang H., Wu Y., Incremental attribute reduction based on elementary sets, in: Proceedings of the 10th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing, Regina, Canada, 2005, 185-193. doi:10.1007/11548669_20

[2] Liang J.Y., Wei W., Qian Y.H., An incremental approach to computation of a core based on conditional entropy, Chinese Journal of System Engineering Theory and Practice, 2008, 4, 81-89.

[3] Liu D., Li T.R., Ruan D., Zou W.L., An incremental approach for inducing knowledge from dynamic information systems, Fundamenta Informaticae, 2009, 94, 245-260. doi:10.3233/FI-2009-129

[4] Orlowska M., Maintenance of knowledge in dynamic information systems, in: R. Slowinski (Ed.), Intelligent Decision Support: Handbook of Applications and Advances of the Rough Set Theory, Kluwer Academic Publishers, Dordrecht, 1992, 315-330. doi:10.1007/978-94-015-7975-9_20

[5] Shan L., Ziarko W., Data-based acquisition and incremental modification of classification rules, Computational Intelligence, 1995, 11, 357-370. doi:10.1111/j.1467-8640.1995.tb00038.x

[6] Yang M., An incremental updating algorithm for attributes reduction based on the improved discernibility matrix, Chinese Journal of Computers, 2007, 30, 815-822.

[7] Zheng Z., Wang G., RRIA: a rough set and rule tree based incremental knowledge acquisition algorithm, Fundamenta Informaticae, 2004, 59, 299-313.

[8] Chan C.C., A rough set approach to attribute generalization in data mining, Information Sciences, 1998, 107, 169-176. doi:10.1016/S0020-0255(97)10047-0

[9] Li T.R., Ruan D., Geert W., et al., A rough sets based characteristic relation approach for dynamic attribute generalization in data mining, Knowledge-Based Systems, 2007, 20, 485-494. doi:10.1016/j.knosys.2007.01.002

[10] Cheng Y., The incremental method for fast computing the rough fuzzy approximations, Data & Knowledge Engineering, 2011, 70, 84-100. doi:10.1016/j.datak.2010.08.005

[11] Liu D., Zhang J.B., Li T.R., A probabilistic rough set approach for incremental learning knowledge on the change of attribute, in: Proceedings of the 2010 International Conference on Foundations and Applications of Computational Intelligence, 2010, 722-727. doi:10.1142/9789814324700_0109

[12] Chen H.M., Li T.R., Qiao S.J., et al., A rough set based dynamic maintenance approach for approximations in coarsening and refining attribute values, International Journal of Intelligent Systems, 2010, 25, 1005-1026. doi:10.1002/int.20436

[13] Liu D., Li T.R., Liu G.R., et al., An incremental approach for inducing interesting knowledge based on the change of attribute values, in: Proceedings of the 2009 IEEE International Conference on Granular Computing, Nanchang, China, 2009, 415-418. doi:10.1109/GRC.2009.5255084

[14] Pawlak Z., Rough sets, International Journal of Computer and Information Sciences, 1982, 11, 341-356. doi:10.1007/BF01001956

[15] Pawlak Z., Rough Sets: Theoretical Aspects of Reasoning About Data, Kluwer Academic Publishers, Dordrecht & Boston, 1991. doi:10.1007/978-94-011-3534-4

[16] Xu W., Li Y., Liao X., Approaches to attribute reductions based on rough set and matrix computation in inconsistent ordered information systems, Knowledge-Based Systems, 2012, 27, 78-91. doi:10.1016/j.knosys.2011.11.013

[17] Slowinski R., Vanderpooten D., A generalized definition of rough approximations based on similarity, IEEE Transactions on Knowledge and Data Engineering, 2000, 12, 331-336. doi:10.1109/69.842271

[18] Stefanowski J., Tsoukias A., Incomplete information tables and rough classification, Computational Intelligence, 2001, 17, 545-566. doi:10.1111/0824-7935.00162

[19] Kryszkiewicz M., Rough set approach to incomplete information systems, Information Sciences, 1998, 112, 39-49. doi:10.1016/S0020-0255(98)10019-1

[20] Kryszkiewicz M., Rules in incomplete information systems, Information Sciences, 1999, 113, 271-292. doi:10.1016/S0020-0255(98)10065-8

[21] Dai J., Xu Q., Approximations and uncertainty measures in incomplete information systems, Information Sciences, 2012, 198, 62-80. doi:10.1016/j.ins.2012.02.032

[22] Shannon C., A mathematical theory of communication, The Bell System Technical Journal, 1948, 27, 379-423. doi:10.1002/j.1538-7305.1948.tb01338.x

[23] Peng H., Long F., Ding C., Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27, 1226-1238. doi:10.1109/TPAMI.2005.159

[24] Quinlan J., Induction of decision trees, Machine Learning, 1986, 1, 81-106. doi:10.1007/BF00116251

[25] Beaubouef T., Petry F., Arora G., Information-theoretic measures of uncertainty for rough sets and rough relational databases, Information Sciences, 1998, 109, 535-563. doi:10.1016/S0020-0255(98)00019-X

[26] Dai J., Wang W., Xu Q., Tian H., Uncertainty measurement for interval-valued decision systems based on extended conditional entropy, Knowledge-Based Systems, 2012, 27, 443-450. doi:10.1016/j.knosys.2011.10.013

[27] Liang J., Chin K., Dang C., Richard C., A new method for measuring uncertainty and fuzziness in rough set theory, International Journal of General Systems, 2002, 31, 331-342. doi:10.1080/0308107021000013635

[28] Qian Y., Liang J., Combination entropy and combination granulation in incomplete information system, LNAI, 2006, 4046, 184-190. doi:10.1007/11795131_27

[29] Qian Y., Liang J., Wang F., A new method for measuring the uncertainty in incomplete information systems, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2009, 17, 855-880. doi:10.1142/S0218488509006303

[30] Dai J.H., Wang W.T., Tian H.W., Liu L., Attribute selection based on a new conditional entropy for incomplete decision systems, Knowledge-Based Systems, 2013, 39, 207-213. doi:10.1016/j.knosys.2012.10.018

[31] Liu Z.H., Liu S.Y., Wang J., An attribute reduction algorithm based on the information quantity, Journal of Xidian University, 2003, 30, 835-838.

[32] Dash M., Liu H., Consistency-based search in feature selection, Artificial Intelligence, 2003, 151, 155-176. doi:10.1016/S0004-3702(03)00079-1

[33] Yang M., An incremental updating algorithm of the computation of a core based on the improved discernibility matrix, Chinese Journal of Computers, 2006, 29, 407-413.

[34] Fan Y.N., Tseng T.L., Chern C.C., Huang C.C., Rule induction based on an incremental rough set, Expert Systems with Applications, 2009, 36, 11439-11450. doi:10.1016/j.eswa.2009.03.056

[35] Dey P., Dey S., Datta S., Sil J., Dynamic discreduction using rough sets, Applied Soft Computing, 2011, 11, 3887-3897. doi:10.1016/j.asoc.2011.01.015

[36] Wang F., Liang J.Y., Qian Y.H., Attribute reduction: a dimension incremental strategy, Knowledge-Based Systems, 2013, 39, 95-108. doi:10.1016/j.knosys.2012.10.010

[37] Wang F., Liang J.Y., Dang C.Y., Attribute reduction for dynamic data sets, Applied Soft Computing, 2013, 13, 676-689. doi:10.1016/j.asoc.2012.07.018

[38] Huang C.C., Tseng T.L., Fan Y.N., Hsu C.H., Alternative rule induction methods based on incremental object using rough set theory, Applied Soft Computing, 2013, 13, 372-389. doi:10.1016/j.asoc.2012.08.042

[39] Liu D., Li T.R., Ruan D., Zhang J.B., Incremental learning optimization on knowledge discovery in dynamic business intelligent systems, Journal of Global Optimization, 2011, 51, 325-344. doi:10.1007/s10898-010-9607-8

[40] Hu F., Wang G.Y., Huang H., Wu Y., Incremental attribute reduction based on elementary sets, in: Proceedings of the 10th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Lecture Notes in Computer Science, Regina, Canada, 2005, 185-193. doi:10.1007/11548669_20

Received: 2016-6-2
Accepted: 2016-8-25
Published Online: 2016-11-27
Published in Print: 2016-1-1

© 2016 Liu Wenjun, published by De Gruyter Open.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
