
On numerical characterizations of the topological reduction of incomplete information systems based on evidence theory

  • Changqing Li and Yanlan Zhang
Published/Copyright: January 12, 2023

Abstract

Knowledge reduction of information systems is one of the most important applications of rough set theory in real-world problems. Based on the connections between rough set theory and topology, a kind of topological reduction of incomplete information systems is discussed. In this study, the topological reduction of incomplete information systems is characterized by belief and plausibility functions from evidence theory. First, we show that the topological space induced by a pair of approximation operators in an incomplete information system is pseudo-discrete, which induces a partition. Then, the topological reduction is characterized by the belief and plausibility function values of the sets in the partition. A topological reduction algorithm for computing the topological reducts of incomplete information systems is also proposed based on evidence theory, and its validity is illustrated by an example. Moreover, relationships among the concepts of topological reduct, classical reduct, belief reduct, and plausibility reduct of an incomplete information system are presented.

1 Introduction

As one of the soft computing tools for dealing with vagueness and granularity in information systems, rough set theory is built on two basic notions [1], the lower and upper approximations. Knowledge reduction of information systems in terms of the lower and upper approximations is one of the most important applications of rough set theory. Incomplete information systems, in which the values of some attributes are missing or only partially known, exist widely in real life. Knowledge reduction of incomplete information systems based on lower and upper approximations in rough set theory has also been discussed by many authors [2,3,4,5,6,7,8].

In general topology, the topological interior operator and the topological closure operator are two basic concepts that are closely related to the lower and upper approximation operators in rough set theory. Topological theory is therefore a useful foundation for the study of rough sets, and there exist many results on the topological structures of rough sets [9,10,11,12,13,14,15]. Kondo showed that, in the topology induced by a pair of approximation operators, a set is open if and only if it is closed [9]. Qin et al. [11], Zhang et al. [13], and Li et al. [10] also discussed topological properties of the pair of approximation operators studied in ref. [9]. Moreover, Salama and El-Barbary [16,17] discussed the missing-attribute-value problem in incomplete information systems from a topological point of view. Yu and Zhan explored the topological reduction of incomplete information systems [18].

The Dempster–Shafer theory of evidence, or the theory of belief functions [19,20], is another important method for dealing with uncertainty in information systems. There exist strong connections between the Dempster–Shafer theory of evidence and rough set theory, and relationships between belief functions and rough sets have been investigated [21,22,23,24,25,26,27]. Furthermore, in incomplete information systems, knowledge reduction based on rough sets has been characterized by the Dempster–Shafer theory of evidence. For example, the concepts of plausibility reduct and belief reduct in incomplete information systems were proposed by Wu [28]. Belief and plausibility functions from evidence theory were also employed to characterize the set approximations and attribute reductions of incomplete information systems in multigranulation rough set theory [29].

In this article, the topological reduction of incomplete information systems is characterized based on evidence theory. In Section 2, we review some basic definitions of topological theory, rough sets in incomplete information systems, and evidence theory. Some properties of the pair of approximation operators in incomplete information systems are also presented, and we show that the topology induced by the approximation operators in an incomplete information system is exactly the collection of all definable sets and is pseudo-discrete, which induces a partition. In Section 3, the topological reduction of incomplete information systems is characterized by the belief and plausibility functions from evidence theory. Using the plausibility function values of the sets in the partition, the significance and relative significance of attributes in incomplete information systems are defined. Then, a topological reduction algorithm based on evidence theory is proposed for incomplete information systems, and an example is adopted to illustrate the validity of the algorithm. In Section 4, we discuss relationships between the concept of topological reduct and some existing ones. It is shown that a plausibility reduct is a topologically consistent set, while the converse does not necessarily hold.

2 Preliminaries

In this section, we review some basic definitions of topological theory, approximation operators in incomplete information systems, and evidence theory. Throughout this article, we always assume that the universe of discourse U is a finite and nonempty set. The class of all subsets of U will be denoted by P ( U ) .

2.1 Basic concepts in topological theory

In this subsection, some basic concepts of topological spaces are introduced. For the other basic topological concepts, we refer to [30].

Definition 1

[30] Let U be a non-empty set. A topology on U is a collection τ of subsets of U having the following properties:

  1. ∅, U ∈ τ,

  2. for any 𝒯 ⊆ τ, ⋃𝒯 ∈ τ,

  3. for any B_1, B_2, …, B_n ∈ τ, B_1 ∩ B_2 ∩ ⋯ ∩ B_n ∈ τ.

Then, (U, τ) is called a topological space, each element of τ is called an open set, and the complement of an open set is called a closed set. In a topological space (U, τ), if, for every A ⊆ U, A is open in U if and only if A is closed in U, then (U, τ) is called a pseudo-discrete space.

In a topological space (U, τ), for any x ∈ U, denote (x)_τ = ⋂{K | x ∈ K, K ∈ τ}.

Definition 2

[30] Let (U, τ) be a topological space and X ∈ P(U). Then, the topological interior and closure of X are, respectively, defined as follows:

int_τ(X) = ⋃{G | G is an open set and G ⊆ X},

cl_τ(X) = ⋂{K | K is a closed set and X ⊆ K},

where int τ and c l τ are, respectively, called the topological interior operator and topological closure operator of τ .

It can be shown that int τ ( X ) is an open set and c l τ ( X ) is a closed set in ( U , τ ) . X is an open set in ( U , τ ) if and only if int τ ( X ) = X , and X is a closed set in ( U , τ ) if and only if c l τ ( X ) = X .
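To make Definition 2 concrete, the following is a minimal Python sketch (the function names and the finite example are illustrative and not taken from the article) that computes int_τ(X) and cl_τ(X) for a finite topological space.

```python
def interior(tau, X):
    """int_tau(X): the union of all open sets G in tau with G ⊆ X."""
    X = frozenset(X)
    return frozenset().union(*[G for G in tau if G <= X])

def closure(tau, X, universe):
    """cl_tau(X): the intersection of all closed sets containing X.
    A set K is closed exactly when universe - K is open."""
    X, result = frozenset(X), frozenset(universe)
    for G in tau:
        K = frozenset(universe) - frozenset(G)  # K is a closed set
        if X <= K:
            result &= K
    return result

# A small illustrative space: U = {1, 2, 3}, tau = {∅, {1}, {1, 2}, U}.
U = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), U]
print(interior(tau, {2, 3}))   # frozenset(): no nonempty open set fits inside {2, 3}
print(closure(tau, {2}, U))    # frozenset({2, 3}): the smallest closed set containing {2}
```

On fixed points one recovers the characterization above: interior(tau, G) equals G exactly when G is open, and closure(tau, K, U) equals K exactly when K is closed.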

The topological interior operator and topological closure operator can be also defined, respectively, by Kuratowski interior axiom and Kuratowski closure axioms.

Definition 3

[30,31] Let U be a non-empty set and int, cl : P(U) → P(U). For any X, Y ⊆ U, consider the following conditions:

  (i1) int(U) = U,

  (i2) int(X) ⊆ X,

  (i3) int(X ∩ Y) = int(X) ∩ int(Y),

  (i4) int(int(X)) = int(X),

  (c1) cl(∅) = ∅,

  (c2) X ⊆ cl(X),

  (c3) cl(X ∪ Y) = cl(X) ∪ cl(Y),

  (c4) cl(cl(X)) = cl(X).

If int satisfies (i1)–(i3), then int is called an interior operator, and (U, int) is called an interior space [31]. If int satisfies (i1)–(i4), that is, int satisfies the Kuratowski interior axioms, then int is called a topological interior operator [30]. If cl satisfies (c1)–(c3), then cl is called a closure operator, and (U, cl) is called a closure space [31]. If cl satisfies (c1)–(c4), that is, cl satisfies the Kuratowski closure axioms, then cl is called a topological closure operator [30].

In an interior space (U, int), it is easy to prove that τ(int) = {X ⊆ U | int(X) = X} is a topology. In a closure space (U, cl), it can be verified that τ(cl) = {U \ X | cl(X) = X} is a topology.

2.2 Incomplete information systems and rough set approximations

An information system is a triple S = (U, AT, {V_a | a ∈ AT}), where U is a non-empty finite set of objects, AT is a non-empty finite set of attributes such that a : U → V_a for any a ∈ AT, and V_a is the value set of a. For simplicity, an information system is written as S = (U, AT).

If some of the attribute values for one or more objects in an information system are missing or only partially known, then the information system is called an incomplete information system [32]. In an incomplete information system, a missing value can be represented by the set of all possible values of the attribute, and a partially known value can be specified as a set of values. Thus, an incomplete information system can be described by a set-valued information system in which a : U → P(V_a) for any a ∈ AT [2,33]. In an incomplete information system S = (U, AT), for each nonempty subset A ⊆ AT, a similarity relation is defined as follows [33,34]:

R_A = {(x, y) ∈ U × U | a(x) ∩ a(y) ≠ ∅ for all a ∈ A}.

It is easy to see that R_A is reflexive and symmetric but not necessarily transitive. Clearly, R_A = ⋂_{a ∈ A} R_{{a}}. Denote S_A(x) = {y ∈ U | (x, y) ∈ R_A}; S_A(x) is called the similarity class of x w.r.t. A in S. The family of all similarity classes w.r.t. A is denoted by U/R_A, i.e., U/R_A = {S_A(x) | x ∈ U}. A pair of lower and upper approximations is defined as follows:

Definition 4

[28] Let S = (U, AT) be an incomplete information system and A ⊆ AT. For any subset X of U, the lower approximation A̲(X) and the upper approximation A̅(X) of X are defined as follows:

A̲(X) = {x ∈ U | S_A(x) ⊆ X},  A̅(X) = {x ∈ U | S_A(x) ∩ X ≠ ∅}.

The pair (A̲(X), A̅(X)) is called the rough set of X. If A̲(X) = A̅(X), then X is referred to as a definable set.
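As a computational companion to Definition 4, here is a small Python sketch; it assumes a set-valued encoding in which table[x][a] is the set of possible values of attribute a for object x (as in Table 2 below), and the function names are ours rather than the article's.

```python
def similarity_class(table, A, x):
    """S_A(x) = {y in U | a(x) ∩ a(y) ≠ ∅ for every a in A}."""
    return {y for y in table if all(table[x][a] & table[y][a] for a in A)}

def lower_approx(table, A, X):
    """A̲(X) = {x in U | S_A(x) ⊆ X}."""
    X = set(X)
    return {x for x in table if similarity_class(table, A, x) <= X}

def upper_approx(table, A, X):
    """A̅(X) = {x in U | S_A(x) ∩ X ≠ ∅}."""
    X = set(X)
    return {x for x in table if similarity_class(table, A, x) & X}

def is_definable(table, A, X):
    """X is definable w.r.t. A when the two approximations coincide."""
    return lower_approx(table, A, X) == upper_approx(table, A, X)
```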

Since R A is reflexive and symmetric, the approximations have the following properties [9,35].

Proposition 1

Let S = (U, AT) be an incomplete information system and A ⊆ AT. Then, for any X, Y ⊆ U,

  1. A̲(∅) = A̅(∅) = ∅, A̲(U) = A̅(U) = U,

  2. A̲(X ∩ Y) = A̲(X) ∩ A̲(Y), A̅(X ∪ Y) = A̅(X) ∪ A̅(Y),

  3. A̲(X) ⊆ X ⊆ A̅(X),

  4. A̅(U \ X) = U \ A̲(X),

  5. A̲(X) = X ⇔ A̅(X) = X,

  6. X ⊆ Y ⇒ A̲(X) ⊆ A̲(Y), A̅(X) ⊆ A̅(Y),

  7. A ⊆ B ⊆ AT ⇒ A̲(X) ⊆ B̲(X), B̅(X) ⊆ A̅(X).

According to Definition 3 and Proposition 1, it is easy to obtain the following result.

Corollary 1

Let S = (U, AT) be an incomplete information system and A ⊆ AT. Then,

  1. A ̲ is an interior operator,

  2. A ¯ is a closure operator,

  3. τ(A̅) = {X ⊆ U | A̅(X) = X} = {X ⊆ U | A̲(X) = X} = τ(A̲),

  4. ( U , τ ( A ¯ ) ) is a pseudo-discrete space.

In the following, the topology τ(A̅) = τ(A̲) is denoted by T_A. By Corollary 1(3), T_A is the collection of all definable sets of the information system S w.r.t. A. The following lemma gives a useful property of pseudo-discrete spaces.

Lemma 1

[36] Let (U, τ) be a pseudo-discrete space. Then, {(x)_τ | x ∈ U} is a partition of U.

According to Corollary 1(4) and Lemma 1, we can obtain the following corollary.

Corollary 2

Let S = (U, AT) be an incomplete information system and A ⊆ AT. Then, {(x)_{T_A} | x ∈ U} is a partition of U.

From Corollaries 1 and 2, a pseudo-discrete space can be induced by the approximation operators in an incomplete information system, and this space in turn induces a partition of U.
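The definable-set topology T_A and the partition {(x)_{T_A} | x ∈ U} of Corollary 2 can be computed directly from the definitions. The brute-force enumeration below is only a sketch with illustrative names; it is exponential in |U| and meant only for small examples such as those in this article.

```python
from itertools import combinations

def similarity_class(table, A, x):
    return {y for y in table if all(table[x][a] & table[y][a] for a in A)}

def topology(table, A):
    """T_A: all definable sets X, i.e., subsets with A̲(X) = X (equivalently A̅(X) = X)."""
    U = list(table)
    opens = set()
    for r in range(len(U) + 1):
        for comb in combinations(U, r):
            X = set(comb)
            lower = {x for x in U if similarity_class(table, A, x) <= X}
            if lower == X:
                opens.add(frozenset(X))
    return opens

def partition(table, A):
    """{(x)_{T_A} | x in U}: each block is the smallest open set containing x."""
    tau = topology(table, A)
    return {frozenset.intersection(*[G for G in tau if x in G]) for x in table}
```

Applied to the system of Example 1 below, this should reproduce the three blocks {x_1}, {x_2, x_4, x_5, x_6, x_7}, and {x_3, x_8} used in Section 3.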

2.3 Basic notions related to evidence theory

This subsection will recall some basic definitions about evidence theory.

Definition 5

[19,20] A set function Bel : P(U) → [0, 1] is referred to as a belief function if

  1. Bel(∅) = 0, Bel(U) = 1,

  2. for every collection of subsets X_1, X_2, …, X_n ⊆ U, n ≥ 1,

    Bel(X_1 ∪ X_2 ∪ ⋯ ∪ X_n) ≥ Σ_{∅ ≠ I ⊆ {1, 2, …, n}} (−1)^{|I|+1} Bel(⋂_{i ∈ I} X_i),

where |I| is the cardinality of the set I. A set function Pl : P(U) → [0, 1] is referred to as a plausibility function if

  1. Pl(∅) = 0, Pl(U) = 1,

  2. for every collection of subsets X_1, X_2, …, X_n ⊆ U, n ≥ 1,

    Pl(X_1 ∩ X_2 ∩ ⋯ ∩ X_n) ≤ Σ_{∅ ≠ I ⊆ {1, 2, …, n}} (−1)^{|I|+1} Pl(⋃_{i ∈ I} X_i).

Belief and plausibility functions based on the same belief structure are connected by the dual property Pl(X) = 1 − Bel(U \ X). Furthermore, Bel(X) ≤ Pl(X) for all X ⊆ U.

Definition 6

Let Ω be a sample space, and let F be a σ-algebra on Ω. Then, a real-valued function P : F → [0, 1] is referred to as a probability on (Ω, F) if it satisfies

  1. for any X ∈ F, 0 ≤ P(X) ≤ 1,

  2. P(Ω) = 1,

  3. for any X_i ∈ F (i = 1, 2, …), if X_i ∩ X_j = ∅ (i ≠ j), then P(⋃_{i=1}^∞ X_i) = Σ_{i=1}^∞ P(X_i).

Moreover, ( Ω , F , P ) is a probability space.

The probabilities of the lower and upper approximations are belief and plausibility functions, respectively.

Proposition 2

[28] Let S = (U, AT) be an incomplete information system and A ⊆ AT. For any X ⊆ U, denote

Bel_A(X) = P(A̲(X)),  Pl_A(X) = P(A̅(X)),

where P(X) = |X| / |U| and |X| is the cardinality of the set X. Then, Bel_A and Pl_A are a dual pair of belief and plausibility functions.
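Proposition 2 translates directly into code. The following sketch (again with our own function names) uses exact rational arithmetic so that the sums appearing in Section 3 can be compared with 1 without rounding error.

```python
from fractions import Fraction

def similarity_class(table, A, x):
    return {y for y in table if all(table[x][a] & table[y][a] for a in A)}

def bel(table, A, X):
    """Bel_A(X) = |A̲(X)| / |U|."""
    X = set(X)
    lower = [x for x in table if similarity_class(table, A, x) <= X]
    return Fraction(len(lower), len(table))

def pl(table, A, X):
    """Pl_A(X) = |A̅(X)| / |U|."""
    X = set(X)
    upper = [x for x in table if similarity_class(table, A, x) & X]
    return Fraction(len(upper), len(table))
```

The duality Pl_A(X) = 1 − Bel_A(U \ X) stated above is a convenient sanity check for such an implementation.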

3 Characterizations of the topological reduction of incomplete information systems based on evidence theory

Yu and Zhan defined the topological reduct of incomplete information systems [18].

Definition 7

[18] Let S = (U, AT) be an incomplete information system, and A ⊆ AT. If T_A = T_AT, then A is called a topologically consistent set of AT. If A is a topologically consistent set of AT and no proper subset of A is a topologically consistent set, then A is referred to as a topological reduct of AT. The intersection of all topological reducts of AT is called the topological core of S, which is denoted by Core(S).

From Corollary 1(3) and Definition 7, we can see that a topological reduct is a minimal attribute set that preserves the collection of all definable sets of the information system. We present an incomplete information system in Table 1, which is a modified version of the information system of Example 1 in ref. [37].

Example 1

An incomplete information system containing information about cars is depicted in Table 1, where the special symbol “*” is used to indicate that the value of an attribute is unknown, i.e., a missing value. Let U = {x_1, x_2, …, x_8} be a set of eight cars and AT = {P, M, S, X} be a set of four attributes, where P, M, S, and X stand for price, mileage, size, and max-speed, respectively. The values of P, M, and X are taken from {Low, Normal, High}, and the values of S are taken from {Full, Compact}. For simplicity, we use “L, N, H, F, C” instead of “Low, Normal, High, Full, Compact”, respectively. The values P(x_3), P(x_5), M(x_i) (i = 2, 3, 4, 5), and X(x_6) are missing. The values P(x_8), M(x_7), M(x_8), X(x_2), X(x_7), and X(x_8) are only partially known. The associated set-valued information system is given in Table 2.

Table 1

An incomplete information system of a car evaluation problem

Car P M S X
x 1 H L F L
x 2 L * F Not L
x 3 * * C L
x 4 H * F H
x 5 * * F H
x 6 L H F *
x 7 N Not L F Not L
x 8 Not H Not H C Not H
Table 2

A set-valued information system corresponding to Table 1

Car P M S X
x 1 { H } { L } { F } { L }
x 2 { L } { L , N , H } { F } { N , H }
x 3 { L , N , H } { L , N , H } { C } { L }
x 4 { H } { L , N , H } { F } { H }
x 5 { L , N , H } { L , N , H } { F } { H }
x 6 { L } { H } { F } { L , N , H }
x 7 { N } { N , H } { F } { N , H }
x 8 { L , N } { L , N } { C } { L , N }

Then, we obtain the similarity classes of the elements with respect to different attribute sets in Table 3. Hence, AT̅({x_1}) = {x_1}, AT̅({x_3, x_8}) = {x_3, x_8}, and AT̅({x_2, x_4, x_5, x_6, x_7}) = {x_2, x_4, x_5, x_6, x_7}. It follows that T_AT = {∅, {x_1}, {x_2, x_4, x_5, x_6, x_7}, {x_3, x_8}, {x_1, x_3, x_8}, {x_1, x_2, x_4, x_5, x_6, x_7}, {x_2, x_3, x_4, x_5, x_6, x_7, x_8}, U}.

Table 3

The similarity classes of elements in Example 1

S A ( x i ) S P ( x i ) S M ( x i ) S S ( x i ) S X ( x i ) S A T ( x i )
x 1 { x 1 , x 3 , x 4 , x 5 } { x 1 , x 2 , x 3 , x 4 , x 5 , x 8 } { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 } { x 1 , x 3 , x 6 , x 8 } { x 1 }
x 2 { x 2 , x 3 , x 5 , x 6 , x 8 } U { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 , x 8 } { x 2 , x 5 , x 6 }
x 3 U U { x 3 , x 8 } { x 1 , x 3 , x 6 , x 8 } { x 3 , x 8 }
x 4 { x 1 , x 3 , x 4 , x 5 } U { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 4 , x 5 }
x 5 U U { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 }
x 6 { x 2 , x 3 , x 5 , x 6 , x 8 } { x 2 , x 3 , x 4 , x 5 , x 6 , x 7 } { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 } U { x 2 , x 5 , x 6 }
x 7 { x 3 , x 5 , x 7 , x 8 } { x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 } { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 , x 8 } { x 5 , x 7 }
x 8 { x 2 , x 3 , x 5 , x 6 , x 7 , x 8 } { x 1 , x 2 , x 3 , x 4 , x 5 , x 7 , x 8 } { x 3 , x 8 } { x 1 , x 2 , x 3 , x 6 , x 7 , x 8 } { x 3 , x 8 }

Let B = {M, S, X}. Then, we have that T_B = T_AT, and it follows that B is a topologically consistent set. Let B_1 = {M, S}. We obtain that B̅_1({x_1}) = {x_1, x_2, x_4, x_5} ≠ {x_1}, so {x_1} ∉ T_{B_1}. Hence, T_{B_1} ≠ T_B, which means that B_1 is not a topologically consistent set. Let B_2 = {M, X}. We have that B̅_2({x_1}) = {x_1, x_3, x_8} ≠ {x_1}, so {x_1} ∉ T_{B_2}. Then, T_{B_2} ≠ T_B, and it follows that B_2 is not a topologically consistent set. Let B_3 = {S, X}. Then, B̅_3({x_1}) = {x_1, x_6} ≠ {x_1}, which implies that {x_1} ∉ T_{B_3}. Thus, T_{B_3} ≠ T_B, so B_3 is not a topologically consistent set. Therefore, B is a topological reduct.
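The checks carried out by hand above can also be automated. The sketch below encodes Table 2 (the dictionary layout is our own assumption) and compares definable-set topologies; applied to this data it should confirm that {M, S, X} is topologically consistent while {M, S}, {M, X}, and {S, X} are not.

```python
from itertools import combinations

L, N, H, F, C = "L", "N", "H", "F", "C"
T = {  # Table 2: value sets of P, M, S, X for each car
    "x1": {"P": {H}, "M": {L}, "S": {F}, "X": {L}},
    "x2": {"P": {L}, "M": {L, N, H}, "S": {F}, "X": {N, H}},
    "x3": {"P": {L, N, H}, "M": {L, N, H}, "S": {C}, "X": {L}},
    "x4": {"P": {H}, "M": {L, N, H}, "S": {F}, "X": {H}},
    "x5": {"P": {L, N, H}, "M": {L, N, H}, "S": {F}, "X": {H}},
    "x6": {"P": {L}, "M": {H}, "S": {F}, "X": {L, N, H}},
    "x7": {"P": {N}, "M": {N, H}, "S": {F}, "X": {N, H}},
    "x8": {"P": {L, N}, "M": {L, N}, "S": {C}, "X": {L, N}},
}
AT = ["P", "M", "S", "X"]

def sim(A, x):
    return {y for y in T if all(T[x][a] & T[y][a] for a in A)}

def topology(A):
    U = list(T)
    return {frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)
            if {x for x in U if sim(A, x) <= set(c)} == set(c)}

T_AT = topology(AT)
for B in (["M", "S", "X"], ["M", "S"], ["M", "X"], ["S", "X"]):
    print(B, topology(B) == T_AT)
```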

The topologically consistent sets of AT can be characterized by means of the partition {(x)_{T_AT} | x ∈ U} induced by the pseudo-discrete topological space (U, T_AT). In the following, this partition is denoted by {K_1, K_2, …, K_m} (m ∈ ℕ).

Theorem 1

Let S = (U, AT) be an incomplete information system, and A ⊆ AT. The following are equivalent:

  1. A is a topologically consistent set of AT,

  2. A̅(K_i) = K_i for all i ∈ {1, 2, …, m},

  3. A̲(K_i) = K_i for all i ∈ {1, 2, …, m}.

Proof

(1) ⇒ (2). If A is a topologically consistent set of AT, then T_A = T_AT. Hence, for any x ∈ U, (x)_{T_AT} = (x)_{T_A}. By Corollary 1, T_A is pseudo-discrete. It follows that (x)_{T_AT} is a closed set in (U, T_A). Thus, A̅((x)_{T_AT}) = (x)_{T_AT}, that is, A̅(K_i) = K_i for all i ∈ {1, 2, …, m}.

(2) ⇒ (3). Since A̅(K_i) = K_i for all i ∈ {1, 2, …, m}, K_i is a closed set in the topological space (U, T_A). According to Corollary 1, T_A is pseudo-discrete, which implies that K_i is an open set. Then, A̲(K_i) = K_i.

(3) ⇒ (1). Since A̲(K_i) = K_i for all i ∈ {1, 2, …, m}, K_i is an open set in the topological space (U, T_A), that is, (x)_{T_AT} ∈ T_A for all x ∈ U. For any K ∈ T_AT, K = ⋃_{x ∈ K} (x)_{T_AT}. It follows that K ∈ T_A. Thus, T_AT ⊆ T_A. Conversely, for any K ∈ T_A, K ⊆ AT̅(K) ⊆ A̅(K) = K. Then, K = AT̅(K), which implies that K ∈ T_AT. Therefore, T_A ⊆ T_AT.□

Then, the topologically consistent sets and the topological reducts of AT can be characterized based on evidence theory.

Theorem 2

Let S = (U, AT) be an incomplete information system, and A ⊆ AT. The following are equivalent:

  1. A is a topologically consistent set of S,

  2. Σ_{i=1}^m Pl_A(K_i) = 1,

  3. Σ_{i=1}^m Bel_A(K_i) = 1.

Proof

(1) ⇒ (2). By (1) and Theorem 1, A̅(K_i) = K_i for all i ∈ {1, 2, …, m}. Then,

Σ_{i=1}^m Pl_A(K_i) = Σ_{i=1}^m P(A̅(K_i)) = Σ_{i=1}^m P(K_i) = P(⋃_{i=1}^m K_i) = P(U) = 1.

(2) ⇒ (3). For any i ∈ {1, 2, …, m}, K_i ⊆ A̅(K_i). By Corollary 2 and Proposition 2,

1 = Σ_{i=1}^m Pl_A(K_i) = Σ_{i=1}^m P(A̅(K_i)) ≥ Σ_{i=1}^m P(K_i) = P(⋃_{i=1}^m K_i) = P(U) = 1.

Since P(K_i) ≤ P(A̅(K_i)) for all i ∈ {1, 2, …, m}, we obtain P(K_i) = P(A̅(K_i)), that is, |K_i| / |U| = |A̅(K_i)| / |U|. Then, for any i ∈ {1, 2, …, m}, |K_i| = |A̅(K_i)|. It follows from K_i ⊆ A̅(K_i) that K_i = A̅(K_i). By Theorem 1, K_i = A̲(K_i). Hence,

Σ_{i=1}^m Bel_A(K_i) = Σ_{i=1}^m P(A̲(K_i)) = Σ_{i=1}^m |A̲(K_i)| / |U| = Σ_{i=1}^m |K_i| / |U| = |U| / |U| = 1.

(3) ⇒ (1). For any i ∈ {1, 2, …, m}, A̲(K_i) ⊆ K_i. According to Corollary 2 and Proposition 2,

1 = Σ_{i=1}^m Bel_A(K_i) = Σ_{i=1}^m P(A̲(K_i)) ≤ Σ_{i=1}^m P(K_i) = P(⋃_{i=1}^m K_i) = P(U) = 1.

Since P(A̲(K_i)) ≤ P(K_i) for all i ∈ {1, 2, …, m}, we have that P(A̲(K_i)) = P(K_i), that is, |A̲(K_i)| / |U| = |K_i| / |U|. Then, for any i ∈ {1, 2, …, m}, |A̲(K_i)| = |K_i|. It follows from A̲(K_i) ⊆ K_i that A̲(K_i) = K_i. Therefore, due to Theorem 1, A is a topologically consistent set of S.□

Theorem 3

Let S = (U, AT) be an incomplete information system, and A ⊆ AT. The following are equivalent:

  1. A is a topological reduct of S,

  2. Σ_{i=1}^m Pl_A(K_i) = 1, and Σ_{i=1}^m Pl_{A′}(K_i) > 1 for any nonempty proper subset A′ ⊊ A,

  3. Σ_{i=1}^m Bel_A(K_i) = 1, and Σ_{i=1}^m Bel_{A′}(K_i) < 1 for any nonempty proper subset A′ ⊊ A.

Proof

It is immediate from Theorem 2 and Definition 7.□

From Theorem 3, a topological reduct is a minimal attribute set for which the sum of the belief degrees (equivalently, of the plausibility degrees) of all sets in the partition induced by the topology equals 1. Hence, the topological reducts can be characterized by a numerical criterion based on evidence theory.
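The numerical criterion of Theorems 2 and 3 can be checked without comparing topologies explicitly: it suffices to sum plausibility (or belief) values over the fixed partition {K_1, …, K_m}. The following is a minimal sketch with our own function names; the partition is computed by the exhaustive enumeration used earlier, so it is only practical for small systems.

```python
from fractions import Fraction
from itertools import combinations

def sim(table, A, x):
    return {y for y in table if all(table[x][a] & table[y][a] for a in A)}

def partition_blocks(table, AT):
    """{K_1, ..., K_m}: the smallest definable set containing each object."""
    U = list(table)
    opens = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)
             if {x for x in U if sim(table, AT, x) <= set(c)} == set(c)]
    return {frozenset(min((G for G in opens if x in G), key=len)) for x in U}

def pl_sum(table, A, Ks):
    """Σ_i Pl_A(K_i); equal to 1 iff A is topologically consistent (Theorem 2)."""
    n = len(table)
    return sum(Fraction(len([x for x in table if sim(table, A, x) & K]), n) for K in Ks)

def bel_sum(table, A, Ks):
    """Σ_i Bel_A(K_i); equal to 1 iff A is topologically consistent (Theorem 2)."""
    n = len(table)
    return sum(Fraction(len([x for x in table if sim(table, A, x) <= K]), n) for K in Ks)
```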

3.1 An algorithm for computing topological reducts of incomplete information systems

First, we define the significance of an attribute in an incomplete information system S .

Definition 8

Let S = (U, AT) be an incomplete information system. Define the significance of an attribute a ∈ AT by

Sig_AT(a) = Σ_{i=1}^m Pl_{AT\{a}}(K_i) − Σ_{i=1}^m Pl_AT(K_i).

By the definition of significance of an attribute, Core ( S ) can be characterized.

Proposition 3

Let S = (U, AT) be an incomplete information system. Then, Core(S) = {a ∈ AT | Sig_AT(a) > 0}.

Proof

For any a ∈ Core(S), AT \ {a} is not a topologically consistent set. Otherwise, there would exist a topological reduct A ⊆ AT \ {a}; then a ∉ A, which contradicts the fact that a ∈ Core(S) ⊆ A. Since Σ_{i=1}^m Pl_{AT\{a}}(K_i) ≥ Σ_{i=1}^m P(K_i) = 1, Theorem 2 yields Σ_{i=1}^m Pl_{AT\{a}}(K_i) > 1. According to Theorem 2, Σ_{i=1}^m Pl_AT(K_i) = 1. It follows that Sig_AT(a) > 0. Therefore, Core(S) ⊆ {a ∈ AT | Sig_AT(a) > 0}. Conversely, for any a ∈ AT with Sig_AT(a) > 0, we have Sig_AT(a) = Σ_{i=1}^m Pl_{AT\{a}}(K_i) − Σ_{i=1}^m Pl_AT(K_i) > 0. Since Σ_{i=1}^m Pl_AT(K_i) = 1, Σ_{i=1}^m Pl_{AT\{a}}(K_i) > 1. By Theorem 2, AT \ {a} is not a topologically consistent set, and hence no topologically consistent set is contained in AT \ {a} (otherwise, by Proposition 1(7), AT \ {a} itself would be topologically consistent). Thus, a belongs to each topological reduct of AT, that is, a ∈ Core(S). Then, {a ∈ AT | Sig_AT(a) > 0} ⊆ Core(S).□

According to Proposition 3, the significance of each attribute in Core(S) is larger than zero. Now, we present the concept of the significance of an attribute a ∈ AT \ A relative to the family of attributes A.

Definition 9

Let S = (U, AT) be an incomplete information system and A ⊆ AT. Define the significance of a ∈ AT \ A relative to A by

Sig_A(a) = Σ_{i=1}^m Pl_A(K_i) − Σ_{i=1}^m Pl_{A∪{a}}(K_i).

By convention, let Σ_{i=1}^m Pl_∅(K_i) = m. The relative significance Sig_A(a) measures the importance degree of the attribute a relative to A.

Now, we design an algorithm to obtain a topological reduct of the incomplete information system S .

Algorithm 1

Computing the topological core and topological reduct of the incomplete information system S .

  1. Let Core(S) = ∅ and Σ_{i=1}^m Pl_∅(K_i) = m;

  2. for each a ∈ AT, calculate Sig_AT(a) = Σ_{i=1}^m Pl_{AT\{a}}(K_i) − Σ_{i=1}^m Pl_AT(K_i);

  3. if Sig_AT(a) > 0, then Core(S) = Core(S) ∪ {a};

  4. if Core(S) = ∅, then go to step (6); else go to step (5);

  5. if Σ_{i=1}^m Pl_{Core(S)}(K_i) = 1, return Core(S); else go to step (6);

  6. let A = Core(S);

  7. for each a ∈ AT \ A, calculate Sig_A(a) = Σ_{i=1}^m Pl_A(K_i) − Σ_{i=1}^m Pl_{A∪{a}}(K_i);

  8. if Sig_A(a_0) = max_{a ∈ AT\A} Sig_A(a), then A = A ∪ {a_0};

  9. if Σ_{i=1}^m Pl_A(K_i) = 1, then stop and output A as a reduct; else go to step (7).
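The following Python sketch mirrors Algorithm 1 step by step; the function names and data layout are our own assumptions, and the partition is obtained by the exhaustive enumeration used earlier, so it is only suitable for small systems.

```python
from fractions import Fraction
from itertools import combinations

def sim(table, A, x):
    return {y for y in table if all(table[x][a] & table[y][a] for a in A)}

def partition_blocks(table, AT):
    """{K_1, ..., K_m}: the smallest definable set containing each object."""
    U = list(table)
    opens = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)
             if {x for x in U if sim(table, AT, x) <= set(c)} == set(c)]
    return list({frozenset(min((G for G in opens if x in G), key=len)) for x in U})

def pl_sum(table, A, Ks):
    """Σ_i Pl_A(K_i); by the convention above this is m = len(Ks) when A is empty."""
    if not A:
        return Fraction(len(Ks))
    return sum(Fraction(len([x for x in table if sim(table, A, x) & K]), len(table)) for K in Ks)

def topological_reduct(table, AT):
    """Steps 1-9 of Algorithm 1: compute the core, then add attributes greedily
    by relative significance until Σ_i Pl_A(K_i) = 1."""
    Ks = partition_blocks(table, AT)
    base = pl_sum(table, AT, Ks)                                   # equals 1 by Theorem 2
    core = {a for a in AT if pl_sum(table, [b for b in AT if b != a], Ks) - base > 0}
    A = set(core)
    while pl_sum(table, A, Ks) != 1:
        a0 = max((a for a in AT if a not in A),                    # steps 7-8
                 key=lambda a: pl_sum(table, A, Ks) - pl_sum(table, A | {a}, Ks))
        A.add(a0)
    return core, A
```

Applied to the data of Example 1, this should return the core {S, X} and then one of the two reducts found below, {P, S, X} or {M, S, X}, depending on how the tie in step 8 is broken.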

We employ the following Example 2 to illustrate the mechanism of Algorithm 1.

Example 2

Continued from Example 1. The similarity classes of the elements with respect to A_1 = {P, M, S}, A_2 = {P, M, X}, A_3 = {P, S, X}, A_4 = {M, S, X}, and A = {S, X} are presented in Table 4, and K_1 = {x_1}, K_2 = {x_2, x_4, x_5, x_6, x_7}, and K_3 = {x_3, x_8}.

Table 4

The similarity classes of elements in Example 2

Car S A 1 ( x i ) S A 2 ( x i ) S A 3 ( x i ) S A 4 ( x i ) S A ( x i )
x 1 { x 1 , x 4 , x 5 } { x 1 , x 3 } { x 1 } { x 1 } { x 1 , x 6 }
x 2 { x 2 , x 5 , x 6 } { x 2 , x 5 , x 6 , x 8 } { x 2 , x 5 , x 6 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 }
x 3 { x 3 , x 8 } { x 1 , x 3 , x 6 , x 8 } { x 3 , x 8 } { x 3 , x 8 } { x 3 , x 8 }
x 4 { x 1 , x 4 , x 5 } { x 4 , x 5 } { x 4 , x 5 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 }
x 5 { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 }
x 6 { x 2 , x 5 , x 6 } { x 2 , x 3 , x 5 , x 6 } { x 2 , x 5 , x 6 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 }
x 7 { x 5 , x 7 } { x 5 , x 7 , x 8 } { x 5 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 } { x 2 , x 4 , x 5 , x 6 , x 7 }
x 8 { x 3 , x 8 } { x 2 , x 3 , x 7 , x 8 } { x 3 , x 8 } { x 3 , x 8 } { x 3 , x 8 }

Let A_1 = {P, M, S}. Then, A̅_1(K_1) = {x_1, x_4, x_5}, A̅_1(K_2) = {x_1, x_2, x_4, x_5, x_6, x_7}, and A̅_1(K_3) = {x_3, x_8}. It follows that

Sig_AT(X) = Σ_{i=1}^3 Pl_{A_1}(K_i) − Σ_{i=1}^3 Pl_AT(K_i) = (3/8 + 6/8 + 2/8) − 1 = 3/8 > 0.

Let A_2 = {P, M, X}. Hence, A̅_2(K_1) = {x_1, x_3}, A̅_2(K_2) = {x_2, x_3, x_4, x_5, x_6, x_7, x_8}, and A̅_2(K_3) = {x_1, x_2, x_3, x_6, x_7, x_8}. Therefore,

Sig_AT(S) = Σ_{i=1}^3 Pl_{A_2}(K_i) − Σ_{i=1}^3 Pl_AT(K_i) = (2/8 + 7/8 + 6/8) − 1 = 7/8 > 0.

Let A_3 = {P, S, X}. Thus, A̅_3(K_1) = {x_1}, A̅_3(K_2) = {x_2, x_4, x_5, x_6, x_7}, and A̅_3(K_3) = {x_3, x_8}. It implies that

Sig_AT(M) = Σ_{i=1}^3 Pl_{A_3}(K_i) − Σ_{i=1}^3 Pl_AT(K_i) = (1/8 + 5/8 + 2/8) − 1 = 0.

Let A_4 = {M, S, X}. Then, A̅_4(K_1) = {x_1}, A̅_4(K_2) = {x_2, x_4, x_5, x_6, x_7}, and A̅_4(K_3) = {x_3, x_8}. Hence,

Sig_AT(P) = Σ_{i=1}^3 Pl_{A_4}(K_i) − Σ_{i=1}^3 Pl_AT(K_i) = (1/8 + 5/8 + 2/8) − 1 = 0.

Then, A = Core(S) = {S, X}. It is easy to obtain that A̅(K_1) = {x_1, x_6}, A̅(K_2) = {x_1, x_2, x_4, x_5, x_6, x_7}, and A̅(K_3) = {x_3, x_8}. Therefore,

Σ_{i=1}^3 Pl_A(K_i) = 2/8 + 6/8 + 2/8 = 10/8 > 1.

So Core(S) is not a topologically consistent set, and hence not a topological reduct. Due to

Sig_A(P) = Σ_{i=1}^3 Pl_A(K_i) − Σ_{i=1}^3 Pl_{A∪{P}}(K_i) = 10/8 − 1 = 1/4,

Sig_A(M) = Σ_{i=1}^3 Pl_A(K_i) − Σ_{i=1}^3 Pl_{A∪{M}}(K_i) = 1/4,

let A = A ∪ {P}. Since Σ_{i=1}^3 Pl_A(K_i) = 1, we obtain that A = {P, S, X} is a topological reduct. Similarly, {M, S, X} is a topological reduct.
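For readers who wish to replay the arithmetic of this example, the short self-contained sketch below recomputes the plausibility sums directly from Table 2 (dictionary encoding assumed as before); the printed values should match the sums 11/8, 15/8, 1, 1, and 10/8 obtained above.

```python
from fractions import Fraction

L, N, H, F, C = "L", "N", "H", "F", "C"
T = {
    "x1": {"P": {H}, "M": {L}, "S": {F}, "X": {L}},
    "x2": {"P": {L}, "M": {L, N, H}, "S": {F}, "X": {N, H}},
    "x3": {"P": {L, N, H}, "M": {L, N, H}, "S": {C}, "X": {L}},
    "x4": {"P": {H}, "M": {L, N, H}, "S": {F}, "X": {H}},
    "x5": {"P": {L, N, H}, "M": {L, N, H}, "S": {F}, "X": {H}},
    "x6": {"P": {L}, "M": {H}, "S": {F}, "X": {L, N, H}},
    "x7": {"P": {N}, "M": {N, H}, "S": {F}, "X": {N, H}},
    "x8": {"P": {L, N}, "M": {L, N}, "S": {C}, "X": {L, N}},
}
K = [{"x1"}, {"x2", "x4", "x5", "x6", "x7"}, {"x3", "x8"}]   # K_1, K_2, K_3

def pl(A, X):
    """Pl_A(X) = |A̅(X)| / |U| computed directly from the similarity relation."""
    upper = [x for x in T if any(all(T[x][a] & T[y][a] for a in A) for y in X)]
    return Fraction(len(upper), len(T))

for A in ({"P", "M", "S"}, {"P", "M", "X"}, {"P", "S", "X"}, {"M", "S", "X"}, {"S", "X"}):
    print(sorted(A), sum(pl(A, Ki) for Ki in K))
```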

Remark 1

In ref. [18], Yu and Zhan constructed a discernibility matrix and defined a discernibility function to obtain all topological reducts by applying the multiplication and absorption laws as many times as possible. However, this method is mainly suitable for theoretical analysis. Computing topological reducts of incomplete information systems based on evidence theory, as in Algorithm 1, provides an efficient numerical method. By Algorithm 1, we can easily obtain that the topological reduct of the incomplete information system of Example 6.2.7 in ref. [18] is {P, L}.

4 Relationships among the topological reduction and some attribute reductions of incomplete information systems

The concepts of classical reduct, belief reduct, and plausibility reduct of an incomplete information system are presented in ref. [28].

Definition 10

[28] Let S = (U, AT) be an incomplete information system, and A ⊆ AT.

  1. A is referred to as a classical consistent set of S if R_A = R_AT. If A is a classical consistent set of S and no proper subset of A is a classical consistent set of S, then A is referred to as a classical reduct of S.

  2. A is referred to as a belief consistent set of S if Bel_A(X) = Bel_AT(X) for all X ∈ U/R_AT. If A is a belief consistent set of S and no proper subset of A is a belief consistent set of S, then A is referred to as a belief reduct of S.

  3. A is referred to as a plausibility consistent set of S if Pl_A(X) = Pl_AT(X) for all X ∈ U/R_AT. If A is a plausibility consistent set of S and no proper subset of A is a plausibility consistent set of S, then A is referred to as a plausibility reduct of S.

The relationships among classical consistent sets, belief consistent sets, and plausibility consistent sets were discussed by Wu [28].

Theorem 4

[28] Let S = (U, AT) be an incomplete information system, and A ⊆ AT. Then

  1. A is a classical consistent set of S if and only if A is a belief consistent set of S,

  2. A is a classical reduct of S iff A is a belief reduct of S,

  3. If A is a classical consistent set of S, then A is a plausibility consistent set of S.

Wu also presented an example in ref. [28] showing that a plausibility consistent set may not be a classical consistent set. We obtain the following relationship between a plausibility consistent set and a topologically consistent set:

Theorem 5

Let S = (U, AT) be an incomplete information system, and A ⊆ AT. If A is a plausibility consistent set of S, then A is a topologically consistent set of S.

Proof

Since A is a plausibility consistent set, by Definition 10, Pl_A(S_AT(x)) = Pl_AT(S_AT(x)) for all x ∈ U. Then, |A̅(S_AT(x))| / |U| = |AT̅(S_AT(x))| / |U| for all x ∈ U. It follows from AT̅(S_AT(x)) ⊆ A̅(S_AT(x)) that AT̅(S_AT(x)) = A̅(S_AT(x)) for all x ∈ U.

For any Y ∈ T_AT, AT̲(Y) = Y. Then, Y = ⋃_{y ∈ Y} S_AT(y). In fact, for any y ∈ Y, it is clear that y ∈ S_AT(y); hence, Y ⊆ ⋃_{y ∈ Y} S_AT(y). Conversely, for any y ∈ Y = AT̲(Y), S_AT(y) ⊆ Y; then, ⋃_{y ∈ Y} S_AT(y) ⊆ Y. Therefore,

A̅(Y) = A̅(⋃_{y ∈ Y} S_AT(y)) = ⋃_{y ∈ Y} A̅(S_AT(y)) = ⋃_{y ∈ Y} AT̅(S_AT(y)) = AT̅(⋃_{y ∈ Y} S_AT(y)) = AT̅(Y) = Y.

Thus, Y ∈ T_A, which implies that T_AT ⊆ T_A.

For any Y ∈ T_A, A̅(Y) = Y. Due to Y ⊆ AT̅(Y) ⊆ A̅(Y), we obtain that Y = AT̅(Y). Then, Y ∈ T_AT, which implies that T_A ⊆ T_AT. In conclusion, T_A = T_AT. Thus, A is a topologically consistent set.□

Remark 2

From Theorem 5, we can obtain that any plausibility reduct of S is a topologically consistent set of S . Example 3 shows that a topological reduct may not be a plausibility consistent set. Then the converse of Theorem 5 does not hold.

Example 3

An incomplete information system S = ( U , A T ) is presented in Table 5. The associated set-valued information system is given as Table 6. The similarity classes of elements are presented in Table 7.

Table 5

An incomplete information system of Example 3

U a b c
x 1 2 * 2
x 2 * 2 2
x 3 1 2 *
x 4 2 3 3
x 5 * 1 3
x 6 1 2 3
Table 6

A set-valued information system corresponding to Table 5

U a b c
x 1 { 2 } { 1 , 2 , 3 } { 2 }
x 2 { 1 , 2 } { 2 } { 2 }
x 3 { 1 } { 2 } { 2 , 3 }
x 4 { 2 } { 3 } { 3 }
x 5 { 1 , 2 } { 1 } { 3 }
x 6 { 1 } { 2 } { 3 }
Table 7

The similarity classes of elements in Example 3

x 1 x 2 x 3 x 4 x 5 x 6
S a ( x i ) { x 1 , x 2 , x 4 , x 5 } U { x 2 , x 3 , x 5 , x 6 } { x 1 , x 2 , x 4 , x 5 } U { x 2 , x 3 , x 5 , x 6 }
S b ( x i ) U { x 1 , x 2 , x 3 , x 6 } { x 1 , x 2 , x 3 , x 6 } { x 1 , x 4 } { x 1 , x 5 } { x 1 , x 2 , x 3 , x 6 }
S c ( x i ) { x 1 , x 2 , x 3 } { x 1 , x 2 , x 3 } U { x 3 , x 4 , x 5 , x 6 } { x 3 , x 4 , x 5 , x 6 } { x 3 , x 4 , x 5 , x 6 }
S { a , b } ( x i ) { x 1 , x 2 , x 4 , x 5 } { x 1 , x 2 , x 3 , x 6 } { x 2 , x 3 , x 6 } { x 1 , x 4 } { x 1 , x 5 } { x 2 , x 3 , x 6 }
S { a , c } ( x i ) { x 1 , x 2 } { x 1 , x 2 , x 3 } { x 2 , x 3 , x 5 , x 6 } { x 4 , x 5 } { x 3 , x 4 , x 5 , x 6 } { x 3 , x 5 , x 6 }
S { b , c } ( x i ) { x 1 , x 2 , x 3 } { x 1 , x 2 , x 3 } { x 1 , x 2 , x 3 , x 6 } { x 4 } { x 5 } { x 3 , x 6 }
S A T ( x i ) { x 1 , x 2 } { x 1 , x 2 , x 3 } { x 2 , x 3 , x 6 } { x 4 } { x 5 } { x 3 , x 6 }

We have that T_AT = {∅, {x_1, x_2, x_3, x_6}, {x_4}, {x_5}, {x_1, x_2, x_3, x_4, x_6}, {x_1, x_2, x_3, x_5, x_6}, {x_4, x_5}, U}, T_{b,c} = T_AT, and T_{b} = T_{c} = {∅, U} ≠ T_AT. Thus, {b, c} is a topological reduct.

We also obtain that AT̅(S_AT(x_1)) = {x_1, x_2, x_3}, AT̅(S_AT(x_2)) = {x_1, x_2, x_3, x_6}, AT̅(S_AT(x_3)) = {x_1, x_2, x_3, x_6}, AT̅(S_AT(x_4)) = {x_4}, AT̅(S_AT(x_5)) = {x_5}, and AT̅(S_AT(x_6)) = {x_2, x_3, x_6}. Since the upper approximation of S_AT(x_1) w.r.t. {a, b} is U ≠ AT̅(S_AT(x_1)), the upper approximation of S_AT(x_3) w.r.t. {a, c} is {x_1, x_2, x_3, x_5, x_6} ≠ AT̅(S_AT(x_3)), and the upper approximation of S_AT(x_6) w.r.t. {b, c} is {x_1, x_2, x_3, x_6} ≠ AT̅(S_AT(x_6)), we obtain that {a, b}, {a, c}, and {b, c} are not plausibility consistent sets. Hence, the plausibility reduct of S is AT. We see that {b, c} is a topologically consistent set; however, it is not a plausibility consistent set.
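The two claims of this example can again be verified mechanically. The sketch below encodes Table 6 (our own encoding) and should print True for the topological consistency of {b, c} and False for its plausibility consistency.

```python
from itertools import combinations

T = {  # Table 6
    "x1": {"a": {2}, "b": {1, 2, 3}, "c": {2}},
    "x2": {"a": {1, 2}, "b": {2}, "c": {2}},
    "x3": {"a": {1}, "b": {2}, "c": {2, 3}},
    "x4": {"a": {2}, "b": {3}, "c": {3}},
    "x5": {"a": {1, 2}, "b": {1}, "c": {3}},
    "x6": {"a": {1}, "b": {2}, "c": {3}},
}
AT = ["a", "b", "c"]

def sim(A, x):
    return {y for y in T if all(T[x][a] & T[y][a] for a in A)}

def topology(A):
    U = list(T)
    return {frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)
            if {x for x in U if sim(A, x) <= set(c)} == set(c)}

def plausibility_consistent(A):
    """Pl_A(X) = Pl_AT(X) for every similarity class X = S_AT(x) (Definition 10)."""
    return all(
        len({z for z in T if sim(A, z) & sim(AT, x)}) == len({z for z in T if sim(AT, z) & sim(AT, x)})
        for x in T)

print(topology(["b", "c"]) == topology(AT))   # expected: True
print(plausibility_consistent(["b", "c"]))    # expected: False
```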

Then, we conclude the relationships among a topologically consistent set, a classical consistent set, a belief consistent set, and a plausibility consistent set of an incomplete information system S = (U, AT) as follows: for any A ⊆ AT,

A is a belief consistent set ⇔ A is a classical consistent set ⇒ A is a plausibility consistent set ⇒ A is a topologically consistent set,

where the last two implications cannot be reversed in general.

5 Conclusion

The topological reduction of an incomplete information system is a kind of attribute reduction that preserves the collection of all definable sets of the information system. In this article, the topological reduction has been characterized by the belief and plausibility functions from evidence theory. Then, a topological reduction algorithm based on evidence theory has been proposed for incomplete information systems, and an example has been adopted to illustrate the validity of the algorithm. The concept of topological reduct in incomplete information systems has also been compared with the concept of plausibility reduct.

In our future research, we will apply the results of this study to investigate the topological reduction of incomplete information systems based on multigranulation rough sets and evidence theory, and present a comparison of different reductions of incomplete information systems through numerical experiments.

Acknowledgments

The authors thank the editor and referees for their valuable comments and suggestions.

  1. Funding information: This work was supported by grants from the National Natural Science Foundation of China (Nos. 11701258 and 11871259) and the Natural Science Foundation of Fujian (Nos. 2022J01912, 2020J01801, and 2020J02043).

  2. Conflict of interest: The authors declare that there is no conflict of interests.

References

[1] Pawlak Z. Rough sets. Int J Comput Inform Sci. 1982;11:341–56. 10.1007/BF01001956

[2] Leung Y, Wu W, Zhang W. Knowledge acquisition in incomplete information systems: a rough set approach. Eur J Oper Res. 2006;168:164–80. 10.1016/j.ejor.2004.03.032

[3] Liang J, Xu Z. The algorithm on knowledge reduction in incomplete information systems. Int J Uncertain Fuzziness Knowl Based Syst. 2002;10(1):95–103. 10.1142/S021848850200134X

[4] Meng Q, Xu X, Yang L. Factor reduction of quotation with rough set on incomplete data. Procedia Manufacturing. 2020;48:18–23. 10.1016/j.promfg.2020.05.015

[5] Qian W, Shu W. Attribute reduction in incomplete ordered information systems with fuzzy decision. Appl Soft Comput. 2018;73:242–53. 10.1016/j.asoc.2018.08.032

[6] Qian Y, Liang J, Pedrycz W, Dang C. An efficient accelerator for attribute reduction from incomplete data in rough set framework. Pattern Recogn. 2011;44(8):1658–70. 10.1016/j.patcog.2011.02.020

[7] Sun L, Wang L, Ding W, Qian Y, Xu J. Neighborhood multi-granulation rough sets-based attribute reduction using Lebesgue and entropy measures in incomplete neighborhood decision systems. Knowl Based Syst. 2020;192:105373. 10.1016/j.knosys.2019.105373

[8] Thuy N, Wongthanavasu S. An efficient stripped cover-based accelerator for reduction of attributes in incomplete decision tables. Expert Syst Appl. 2020;143:113076. 10.1016/j.eswa.2019.113076

[9] Kondo M. On the structure of generalized rough sets. Inf Sci. 2006;176(5):589–600. 10.1016/j.ins.2005.01.001

[10] Li Z, Xie T, Li Q. Topological structure of generalized rough sets. Comput Math Appl. 2012;63:1066–71. 10.1016/j.camwa.2011.12.011

[11] Qin K, Yang J, Pei Z. Generalized rough sets based on reflexive and transitive relations. Inf Sci. 2008;178(21):4138–41. 10.1016/j.ins.2008.07.002

[12] Yao Y. Constructive and algebraic methods of theory of rough sets. Inf Sci. 1998;109:21–47. 10.1016/S0020-0255(98)00012-7

[13] Zhang H, Ouyang Y, Wang Z. Note on generalized rough sets based on reflexive and transitive relations. Inf Sci. 2009;179:471–3. 10.1016/j.ins.2008.10.009

[14] Zhang Y, Li C. Topological properties of a pair of relation-based approximation operators. Filomat. 2017;31(19):6175–83. 10.2298/FIL1719175Z

[15] Zhang Y, Li J, Li C. Topological structure of relation-based generalized rough sets. Fundam Inform. 2016;147:477–91. 10.3233/FI-2016-1418

[16] Salama AS. Topological solution of missing attribute values problem in incomplete information tables. Inf Sci. 2010;180(1):631–9. 10.1016/j.ins.2009.11.010

[17] Salama AS, El-Barbary OG. Topological approach to retrieve missing values in incomplete information systems. J Egypt Math Soc. 2017;25(4):419–23. 10.1016/j.joems.2017.07.004

[18] Yu H, Zhan W. On the topological properties of generalized rough sets. Inf Sci. 2014;263:141–52. 10.1016/j.ins.2013.09.040

[19] Dempster AP. Upper and lower probabilities induced by a multivalued mapping. Ann Math Stat. 1967;38(2):325–39. 10.1007/978-3-540-44792-4_3

[20] Shafer G. A mathematical theory of evidence. Princeton, USA: Princeton University Press; 1976. 10.1515/9780691214696

[21] Pawlak Z. Rough probability. Bull Pol Acad Sci Math. 1984;32:607–15.

[22] Skowron A. The relationship between rough set theory and evidence theory. Bull Pol Acad Sci Math. 1989;37:87–90.

[23] Skowron A. The rough sets theory and evidence theory. Fundam Inform. 1990;13:245–62. 10.3233/FI-1990-13303

[24] Wu W. Knowledge reduction in random incomplete decision tables via evidence theory. Fundam Inform. 2012;115:203–18. 10.3233/FI-2012-650

[25] Xu W, Zhang X, Zhong J, Zhang W. Attribute reduction in ordered information systems based on evidence theory. Knowl Inf Syst. 2010;25:169–84. 10.1007/s10115-009-0248-5

[26] Yao Y, Lingras P. Interpretations of belief functions in the theory of rough sets. Inf Sci. 1998;104:81–106. 10.1016/S0020-0255(97)00076-5

[27] Zhang Y, Li C. Relationships between relation-based rough sets and belief structures. Int J Approximate Reason. 2020;127:83–98. 10.1016/j.ijar.2020.10.001

[28] Wu W. Attribute reduction based on evidence theory in incomplete decision systems. Inf Sci. 2008;178:1355–71. 10.1016/j.ins.2007.10.006

[29] Tan A, Wu W, Li J, Lin G. Evidence-theory-based numerical characterization of multigranulation rough sets in incomplete information systems. Fuzzy Sets Syst. 2016;294:18–35. 10.1016/j.fss.2015.08.016

[30] Engelking R. General topology. Warszawa, Poland: PWN; 1977.

[31] Čech E. Topological spaces. New York, USA: Wiley; 1966.

[32] Orlowska E. Incomplete information: Rough set analysis. Heidelberg, Germany: Physica-Verlag; 1998. 10.1007/978-3-7908-1888-8

[33] Guan Y, Wang H. Set-valued information systems. Inf Sci. 2006;176(17):2507–25. 10.1016/j.ins.2005.12.007

[34] Demri S, Orlowska E. Incomplete information: structure, inference, complexity. Heidelberg, Germany: Springer-Verlag; 2002. 10.1007/978-3-662-04997-6

[35] Yao Y. Generalized rough set model. In: Polkowski L, Skowron A (Eds.). Rough sets in knowledge discovery. Heidelberg, Germany: Physica-Verlag; 1998. p. 286–318.

[36] Zhang Y, Li C. Numerical characterizations of topological reductions of covering information systems in evidence theory. Math Prob Eng. 2021;2021:6648108. 10.1155/2021/6648108

[37] Wu W, Mi J, Li T. Rough approximation spaces and belief structures in infinite universes of discourse. J Comput Res Develop. 2012;49(2):327–36 (in Chinese).

Received: 2022-06-21
Revised: 2022-09-27
Accepted: 2022-10-31
Published Online: 2023-01-12

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
