
Graded Soft Expert Set as a Generalization of Hesitant Fuzzy Set

  • Afshan Qayyum and Tanzeela Shaheen
Published/Copyright: January 26, 2018

Abstract

Hesitant fuzzy sets play a vital role in decision analysis. Although they have proved to be a landmark in evaluating information, their structure has certain deficiencies. Moreover, in decision analysis with the aid of hesitant fuzzy sets, the relative importance of the decision makers according to their area of expertise is ignored completely, which may be misleading in some situations. These issues are resolved in this work by using graded soft expert (GSE) sets. The proposed structure is a modified form of soft expert sets. Some basic operations have been introduced, and certain laws satisfied by them have been carefully investigated. With the aid of GSE sets, a decision-making algorithm (accompanied by an example) has been developed in which experts are given due weightage according to their area of expertise.

1 Introduction

The classical set theory, also called the crisp set theory, serves as one of the fundamental concepts in mathematics. However, only a limited range of modeling and computing problems can be handled with the help of the crisp set theory. In practice, most problems in fields such as economics, engineering, environmental sciences, medical science, and social sciences involve information that is vague rather than precise. Due to the vagueness and uncertainties in these domains, traditional methods cannot be applied there.

To this end, several theories have been developed. One of them is the soft set theory, which was initiated by Molodtsov [18]. It attracted the attention of many researchers, as the theory was well equipped with parameters. Maji et al. [12, 13] discussed decision-making problems through soft sets and fuzzy soft sets. Maji et al. [14] defined the operations of union and intersection on soft sets. Ali et al. [3] improved those operations that were based on the selection of parameters in particular. Ali et al. [4] examined soft sets algebraically using these new operations. Sezgin and Atagun [24] proved certain De Morgan’s laws for soft set theory and extended the theoretical aspect of operations on soft sets. Chen et al. [6] and Ali [2] studied parametrization reduction of soft sets and discussed its application in decision analysis. Feng et al. [8] extended soft sets to soft rough sets, and Shabir et al. [25] improved the structure by introducing modified soft rough sets. Further extensions can be seen in Refs. [1, 7, 15, 16].

In the context of decision-making analysis, Alkhazaleh and Salleh [5] introduced the concept of soft expert sets. This structure can be considered as a generalization of soft sets in which experts and their opinions have been added to make decision analysis easier to handle.

To analyze decision-making problems, the hesitant fuzzy set theory has been proven to be rather worthwhile. It was presented by Torra [28] and Torra and Narukawa [29] as a generalization of the fuzzy set theory. The motivation behind this theory was the degree of hesitancy while making a decision. They introduced some basic operations and also discussed briefly its role in decision-making analysis. Yang et al. [37] extended the hesitant fuzzy set to the hesitant fuzzy rough set and also discussed operational laws in hesitant fuzzy sets. Xia and Xu [33], Meng et al. [17] and Tan et al. [26] developed a series of aggregation operators for hesitant fuzzy information and discussed their application in decision-making problems. Keeping in view the probabilities of possible values in the hesitant fuzzy elements and the linguistic terms representing opinions and preferences, hesitant fuzzy sets have been improved in Refs. [27, 30, 36, 38, 39, 40, 43]. Xu and Xia [35] proposed a variety of distance and similarity measures on hesitant fuzzy sets. Liang and Liu [11] introduced hesitant fuzzy sets into decision theoretic rough sets and explored their decision mechanism. Zhang and Wu [41] investigated the deviation of the priority weights from hesitant multiplicative preference relations in group decision-making environments. Although this theory proved to be valuable in the context of decision analysis, it has some deficiencies. The absence of a standard inclusion measure in hesitant fuzzy sets is a main hurdle in applying them to real-life decision-making problems. Wang and Xu [31] introduced some admissible orders. However, their relative importance subject to a given multiple-criteria decision-making (MCDM) problem has not been discussed, which makes it difficult to choose the best suitable admissible order. It is also stated that admissible orders can be generated using functions. 
However, in practical situations, it may be very difficult, which limits their applicability. Some more approaches to handle MCDM problems using neutrosophic and intuitionistic fuzzy sets can be seen in Refs. [9, 19, 20, 21, 32].

Also, while applying hesitant fuzzy sets in decision-making problems, there is no room to specify alternatives and decision makers separately, which makes it difficult to interpret the results. To overcome this deficiency, we have defined graded soft expert (GSE) sets that specify the collections of decision makers and alternatives separately in such a way that suitable weights can be assigned to each decision maker. This will help in reducing complexity in many decision-making algorithms. The operations over GSE sets have been defined, keeping in view the opinions of experts as well. Moreover, in hesitant fuzzy sets, containment of two hesitant fuzzy sets in one another does not lead to their equality. However, in GSE sets, if two GSE sets are contained in each other, they are equal. The hesitant fuzzy sets are suitable if the experts hesitate among several possible values and can be used both for individual and group decision making. These properties have been retained in GSE sets. GSE sets provide a way to handle the information efficiently so that there is no loss of information and weightage can be given according to the given problem.

For hesitant fuzzy sets, no standard inclusion measure has yet been developed, and in their application to decision analysis, the experts' individual weightage is ignored entirely. To overcome these problems, we introduce in this paper GSE sets, which can be treated as a generalization of hesitant fuzzy sets. This structure is a modified form of soft expert sets; however, its structural and operational approach is entirely different. Our main focus is to fill these gaps in the hesitant fuzzy set theory.

To facilitate our discussion, basic concepts related to soft sets, soft expert sets, and hesitant fuzzy sets have been presented in Section 2. In Section 3, GSE sets have been introduced. Some basic operations have been defined and related laws have been proved. Section 4 has been devoted to the study of decision-making problems with the aid of GSE sets. At the end, Section 5 contains some concluding remarks.

2 Preliminaries

In this section, we give some basic concepts related to soft sets, soft expert sets and hesitant fuzzy sets. These will be required in the later sections.

2.1 Soft Sets

Let U be a non-empty set representing a universe set and P(U) denote the power set of U. Let E be the set of parameters and A, B be non-empty subsets of E.

Definition 1 ([18]): A pair (F, A) is called a soft set over U, where F is a mapping given by F: A → P(U).

Thus, a soft set can be considered a parameterized family of subsets of the universe U. For e ∈ A, F(e) gives the set of e-approximate elements of the soft set (F, A).

Definition 2 ([14]): For two soft sets (F, A) and (G, B) over a common universe U, we say that (F, A) is a soft subset of (G, B), denoted by (F, A) ⊆̃ (G, B), if

  1. A ⊆ B and

  2. F(e) ⊆ G(e) for all e ∈ A.

In this case, (G, B) is said to be a soft super set of (F, A).

Definition 3 ([14]): Two soft sets (F, A) and (G, B) over a common universe U are said to be soft equal if (F, A) is a soft subset of (G, B) and (G, B) is a soft subset of (F, A).

Definition 4 ([3]): Let U be an initial universe set, E be the set of parameters, and A ⊆ E.

  1. (F, A) is called a relative null soft set (with respect to the parameter set A), denoted by ∅A, if F(a) = ∅ for all a ∈ A.

  2. (G, A) is called a relative whole soft set (with respect to the parameter set A), denoted by 𝒰A, if G(e) = U for all e ∈ A.

Remark 1: If a relative null soft set is taken over E, it is called the null soft set over U and is denoted by ∅E. In a similar way, a relative whole soft set with respect to the set of parameters E is called the absolute soft set over U and is denoted by 𝒰E.

The empty soft set over U, denoted by ∅∅, is the unique soft set over U with an empty parameter set.

The operations of union and intersection on soft sets have been defined as below.

Definition 5 ([3]): (1) The extended union of two soft sets (F, A) and (G, B) over the common universe U is the soft set (H, C), where C = A ∪ B and for all e ∈ C:

H(e) = F(e) if e ∈ A\B; G(e) if e ∈ B\A; F(e) ∪ G(e) if e ∈ A ∩ B.

We write (F, A) ∪ε (G, B) = (H, C).

(2) Let (F, A) and (G, B) be two soft sets over the same universe U, such that A ∩ B ≠ ∅. The restricted union of (F, A) and (G, B) is denoted by (F, A) ∪ (G, B) and is defined as (F, A) ∪ (G, B) = (H, C), where C = A ∩ B and for all e ∈ C, H(e) = F(e) ∪ G(e). If A ∩ B = ∅, then (F, A) ∪ (G, B) = ∅∅.

Definition 6 ([3]): (1) The extended intersection of two soft sets (F, A) and (G, B) over a common universe U is the soft set (H, C), where C = A ∪ B and for all e ∈ C:

H(e) = F(e) if e ∈ A\B; G(e) if e ∈ B\A; F(e) ∩ G(e) if e ∈ A ∩ B.

We write (F, A) ∩ε (G, B) = (H, C).

(2) Let (F, A) and (G, B) be two soft sets over the same universe U such that A ∩ B ≠ ∅. The restricted intersection of (F, A) and (G, B) is denoted by (F, A) ∩ (G, B) and is defined as (F, A) ∩ (G, B) = (H, A ∩ B), where H(e) = F(e) ∩ G(e) for all e ∈ A ∩ B. If A ∩ B = ∅, then (F, A) ∩ (G, B) = ∅∅.
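The extended and restricted operations above can be sketched in Python, modeling a soft set (F, A) as a dict from the parameters in A to subsets of U. The encoding and function names are ours, not from the papers cited:

```python
# Sketch (not from the paper): a soft set (F, A) modeled as a dict
# mapping each parameter e in A to the subset F(e) of the universe U.

def extended_union(F, G):
    """(F, A) ∪ε (G, B): defined on A ∪ B (Definition 5(1))."""
    H = {}
    for e in set(F) | set(G):
        if e in F and e in G:
            H[e] = F[e] | G[e]       # e in A ∩ B: union of images
        elif e in F:
            H[e] = set(F[e])         # e in A \ B
        else:
            H[e] = set(G[e])         # e in B \ A
    return H

def restricted_intersection(F, G):
    """(F, A) ∩ (G, B): defined on A ∩ B only (Definition 6(2))."""
    return {e: F[e] & G[e] for e in set(F) & set(G)}

F = {"e1": {"u1", "u2"}, "e2": {"u3"}}
G = {"e2": {"u2", "u3"}, "e3": {"u1"}}
print(extended_union(F, G))          # parameters e1, e2, e3; e2 maps to {u2, u3}
print(restricted_intersection(F, G)) # only e2, mapping to {u3}
```

The extended intersection and restricted union follow the same two patterns with the set operations swapped.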

2.2 Soft Expert Sets

Soft sets are enriched with parameters but they lack experts’ individual opinions and the grades associated with each element of the universe. To strengthen the concept of soft sets, soft expert sets have been introduced. Here, we give some basic concepts related to soft expert sets. All the definitions related to soft expert sets have been taken from Ref. [5].

Let U be a universe set, E be a set of parameters, X be a set of experts, and O be a set of opinions. Let A be a non-empty subset of Z, where Z=E×X×O. With these notations, Alkhazaleh and Salleh [5] defined the soft expert set as stated below.

Definition 7: A pair (F, A) is called a soft expert set over U, where F is a mapping given by F: A → P(U).

Thus, a soft expert set can be considered as a soft set in which the parameter set is replaced with a Cartesian product of the set of parameters, set of experts, and set of opinions.

Definition 8: For two soft expert sets (F, A) and (G, B) over U, (F, A) is called a soft expert subset of (G, B) if

  1. A ⊆ B and

  2. F(a) ⊆ G(a) for all a ∈ A.

In that case, (G, B) will be called the soft expert superset of (F, A).

Definition 9: Two soft expert sets (F, A) and (G, B) over U are said to be equal if (F, A) is a soft expert subset of (G, B) and (G, B) is a soft expert subset of (F, A).

Definition 10: Let E={e1, e2,…,en } be a set of parameters. The NOT set of E, denoted by ¬E, is defined by ¬E={¬e1, ¬e2,...,¬en }, where ¬ei represents “not ei ” for all i.

Definition 11: The complement of a soft expert set (F, A) is denoted and defined as (F, A)c =(Fc , ¬A), where Fc : ¬A → P(U) is a mapping given by

Fc(¬a) = U \ F(a) for all a ∈ A.

Definition 12: The union of two soft expert sets (F, A) and (G, B) over U, denoted by (F, A) ∪̃ (G, B), is a soft expert set (H, C), where C = A ∪ B and for all a ∈ C:

H(a) = F(a) if a ∈ A\B; G(a) if a ∈ B\A; F(a) ∪ G(a) if a ∈ A ∩ B.

Definition 13: The intersection of two soft expert sets (F, A) and (G, B) over U, denoted by (F, A) ∩̃ (G, B), is a soft expert set (H, C), where C = A ∪ B, and for all a ∈ C:

H(a) = F(a) if a ∈ A\B; G(a) if a ∈ B\A; F(a) ∩ G(a) if a ∈ A ∩ B.

2.3 Hesitant Fuzzy Sets

Definition 14 ([28, 33]): Let X be a fixed set. A hesitant fuzzy set on X is defined in terms of a function that, when applied to X, returns a subset of [0, 1].

Thus, if h is a hesitant fuzzy set on X, then h(x) (x ∈ X), being a subset of [0, 1], gives the possible degrees of membership. For any x ∈ X, h(x) is called a hesitant fuzzy element.

Remark 2: Torra [28] defined lower and upper bounds for a hesitant fuzzy element as below:

  1. Lower bound: h−(x) = min{γ: γ ∈ h(x)}.

  2. Upper bound: h+(x) = max{γ: γ ∈ h(x)}.

Basic operations on hesitant fuzzy sets are given below.

Definition 15 ([28, 37]): For hesitant fuzzy sets h, h1, and h2 on X, the following operations have been defined:

  1. Containment: h1 is contained in h2, denoted by h1 ≤ h2, if and only if h1−(x) ≤ h2−(x) and h1+(x) ≤ h2+(x) for all x ∈ X;

  2. Union: the union of h1 and h2, denoted by h1 ⋒ h2, is defined for any x ∈ X as (h1 ⋒ h2)(x) = {h ∈ h1(x) ∪ h2(x): h ≥ max{h1−(x), h2−(x)}};

  3. Intersection: the intersection of h1 and h2, denoted by h1 ⋓ h2, is defined for any x ∈ X as (h1 ⋓ h2)(x) = {h ∈ h1(x) ∪ h2(x): h ≤ min{h1+(x), h2+(x)}};

  4. Complement: the complement of h is denoted by hc and is defined for any x ∈ X as hc(x) = ∪γ∈h(x) {1 − γ}.
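For concreteness, the operations of Definition 15 can be sketched in Python, representing each hesitant fuzzy element h(x) as a finite set of values in [0, 1]. The encoding and function names are ours:

```python
# Sketch: a hesitant fuzzy element as a finite set of membership values
# in [0, 1], with the operations of Definition 15.

def hfe_union(h1, h2):
    """h1 ⋒ h2: values of h1 ∪ h2 not below the max of the two lower bounds."""
    bound = max(min(h1), min(h2))
    return {g for g in h1 | h2 if g >= bound}

def hfe_intersection(h1, h2):
    """h1 ⋓ h2: values of h1 ∪ h2 not above the min of the two upper bounds."""
    bound = min(max(h1), max(h2))
    return {g for g in h1 | h2 if g <= bound}

def hfe_complement(h):
    """h^c: {1 - γ : γ ∈ h}."""
    return {round(1 - g, 10) for g in h}

h1, h2 = {0.2, 0.5}, {0.4, 0.7}
print(hfe_union(h1, h2))         # equals {0.4, 0.5, 0.7}
print(hfe_intersection(h1, h2))  # equals {0.2, 0.4, 0.5}
print(hfe_complement(h1))        # equals {0.5, 0.8}
```

Note that h1 ⋒ h2 here does not equal h1 ⋓ h2 applied to the same pair, and that containment (operation 1) compares only the bounds, which is why containment in both directions need not force equality of the two hesitant fuzzy elements.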

Operational laws investigated by Yang et al. [37] are stated in the next theorem.

Theorem 1: For hesitant fuzzy sets h1, h2, and h3 on X, the following properties hold:

  1. Idempotent: h1 ⋒ h1=h1, h1 ⋓ h1=h1;

  2. Commutative: h1 ⋒ h2=h2 ⋒ h1, h1 ⋓ h2=h2 ⋓ h1;

  3. Associative: h1 ⋒ (h2 ⋒ h3)=(h1 ⋒ h2) ⋒ h3, h1 ⋓ (h2 ⋓ h3)=(h1 ⋓ h2) ⋓ h3;

  4. Distributive: h1 ⋒ (h2 ⋓ h3)=(h1 ⋒ h2) ⋓ (h1 ⋒ h3), h1 ⋓ (h2 ⋒ h3)=(h1 ⋓ h2) ⋒ (h1 ⋓ h3);

  5. De Morgan’s laws: (h1 ⋒ h2)c = h1c ⋓ h2c, (h1 ⋓ h2)c = h1c ⋒ h2c;

  6. Double negation: (hc )c =h.

3 GSE Set versus Hesitant Fuzzy Set and Soft Expert Set

In this section, the soft expert set defined by Alkhazaleh and Salleh [5] has been redefined and revised into what we call the GSE set. In order to strengthen the structure, its basic operations have been redefined in a more fruitful manner. Several laws and related results have also been investigated, some of which do not hold for hesitant fuzzy sets.

Hesitant fuzzy sets were basically introduced to handle decision-making problems in which there are several alternatives and decision makers. However, in the definition of hesitant fuzzy sets, alternatives and decision makers have not been specified. This may lead to misuse and misinterpretation of the set. Also, if we take x1, x2, …, xn as alternatives and a hesitant fuzzy set h represents a particular criterion, then for each i (i=1, 2,…,n), h(xi ) represents the opinions of various decision makers, with no room to highlight an individual decision maker’s opinions separately. For that purpose, different techniques and algorithms were introduced, which make decision-making problems somewhat difficult to handle. One of them is to assign weights to the opinions. However, again, as the opinions of the decision makers have been collected in a set without specifying their individual decisions, it is not possible to give more weightage to a particular decision maker without introducing a complex algorithm.

To avoid such type of situations, the GSE set can prove its worth. In the GSE set, each alternative (or attribute) and decision maker have been specified separately. Formally, it is stated as below.

Definition 16: Let U be a finite universe set containing n alternatives, E a set of criteria, and X a set of experts (or decision makers). Let O be a set of opinions with a given preference relation ≾ among the opinions. A GSE set (F, A, Y) is characterized by a mapping F: A × Y → P(U × O) defined for every e ∈ A and p ∈ Y by F(e, p)={(ui , oi ): i ∈ I}, where I={1, 2, 3,..., n}, A ⊆ E, Y ⊆ X, and P(U × O) denotes the power set of U × O. Here, the set of opinions O contains graded values of the given parameters, i.e. the values o1, o2,…,on can be graded as o1 ≾ o2 ≾ … ≾ on , which means that on is the most preferred value, o1 is the least preferred one, and so forth.

The above definition states that for a given criterion e, the decision maker p gives the opinion oi for each alternative ui (i=1, 2, …, n). As an example of the preference relation in the above definition, consider the set of opinions O={excellent, very good, good, poor, very poor}. It is obvious that “excellent” is preferred over “very good,” which is preferred over “good,” which is preferred over “poor,” and the least preferred one is “very poor.” For simplicity, we can fuzzify these values according to their grading and preference; that is, the opinions can be assigned values from the interval [0, 1] based on their preference. For example, in the above-mentioned set O of opinions, “excellent” is the most preferred opinion, so it can be assigned the value 1, while “very poor” is the least preferred opinion, so it can be assigned the value 0. The rest of the opinions will be assigned values between 0 and 1.

Thus, GSE can be regarded as a generalization of the hesitant fuzzy set in the sense that in addition to assigning hesitancy degree to the objects/alternatives, it considers experts’ individual opinions separately for each given parameter. From the application point of view, GSE sets are particularly important to handle decision-making problems in which one needs to assign weightage to the experts’ opinion separately according to their expertise. In the rest of the paper, the set of opinions O will be taken as a subset of [0, 1].

Example 1: Let U={u1, u2, u3, u4, u5} be a set of wheat types (alternatives), E={e1=moisture content, e2=protein content, e3=milling quality, e4=baking quality} be a set of criteria, X={a, b, c} be a set of experts, and O={0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} be the set of possible grades for the given parameters.

Suppose that a farmer has distributed a questionnaire to the team of experts to judge the quality of wheat types on the basis of given criteria. The decision of experts in the form of GSE set F:A×XP(U×O) is given below:

  • F(e1, a)={(u1, 0.5), (u2, 0.1), (u3, 0.7), (u4, 0.9), (u5, 0.2)}, F(e1, b)={(u1, 0.5), (u2, 0.2), (u3, 0.7), (u4, 0.3), (u5, 0.4)}, F(e1, c)={(u1, 0.4), (u2, 0.3), (u3, 0.3), (u4, 0.6), (u5, 0.7)}, F(e2, a)={(u1, 0.9), (u2, 0.3), (u3, 0.2), (u4, 0.3), (u5, 0.6)}, F(e2, b)={(u1, 0.8), (u2, 0.9), (u3, 0.4), (u4, 0.1), (u5, 0.4)}, F(e2, c)={(u1, 0.7), (u2, 0.0), (u3, 0.3), (u4, 0.3), (u5, 0.6)}, F(e3, a)={(u1, 0.5), (u2, 0.5), (u3, 0.9), (u4, 0.7), (u5, 0.2)}, F(e3, b)={(u1, 0.4), (u2, 0.4), (u3, 0.8), (u4, 0.2), (u5, 0.3)}, F(e3, c)={(u1, 0.4), (u2, 0.4), (u3, 0.9), (u4, 0.7), (u5, 0.2)}, F(e4, a)={(u1, 0.6), (u2, 0.7), (u3, 0.5), (u4, 0.9), (u5, 0.7)}, F(e4, b)={(u1, 0.5), (u2, 0.8), (u3, 0.4), (u4, 0.6), (u5, 0.3)}, F(e4, c)={(u1, 0.3), (u2, 0.9), (u3, 0.5), (u4, 0.0), (u5, 0.6)}.
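As a sketch of how such a GSE set might be encoded in practice, the data of Example 1 can be stored as a dict keyed by (criterion, expert) pairs. The encoding is ours and is not part of the definition:

```python
# Sketch: the GSE set of Example 1 as a dict keyed by (criterion, expert),
# each value listing (alternative, opinion) pairs.

F = {
    ("e1", "a"): [("u1", 0.5), ("u2", 0.1), ("u3", 0.7), ("u4", 0.9), ("u5", 0.2)],
    ("e1", "b"): [("u1", 0.5), ("u2", 0.2), ("u3", 0.7), ("u4", 0.3), ("u5", 0.4)],
    # ... remaining (criterion, expert) pairs exactly as listed in Example 1.
}

# For a fixed criterion and expert, each alternative's opinion is read directly:
opinions_e1_a = dict(F[("e1", "a")])
print(opinions_e1_a["u4"])  # 0.9
```

This separation of criteria, experts, and alternatives is exactly what lets later steps weight individual experts.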

In the soft set theory, the basic concept is parametrization of objects in a given universe set. The various operations thus defined on soft sets depend upon the e-approximate elements of a given set for all attributes e. As a soft expert set does not only depend upon the various parameters involved but also on the opinion of experts, which is basically the main purpose of introducing soft expert sets, the operations on soft expert sets should consider these opinions as well. In the rest of the section, we define operations on GSE sets taking into consideration the respective opinions as well.

In particular, we can see that the operation of complement on the soft expert set defined in Ref. [5] takes into consideration the objects of the universe and their respective attributes only, ignoring their respective opinions. As in example 3.9 of Ref. [5], the complement of F(e1, p, 1)={u3} is given as Fc (¬e1, p, 1)={u1, u2, u4}, which means that according to the expert p, only the object u3 has attribute e1, and its complement states that according to the same expert p, the objects u1, u2, and u4 do not have attribute e1. This idea can work if we are taking only two opinions (agree 1, disagree 0). If we consider more than two opinions (as in GSE sets), the idea may not work. In the same case, if we take F(e1, p, 0.3)={u3} and Fc (¬e1, p, 0.3)={u1, u2, u4}, then saying that the objects lack attribute e1 to the same degree 0.3 to which u3 possesses it does not sound accurate. Thus, for more than two opinions, we define the complement of the GSE set as follows.

Definition 17: The complement of a GSE set (F, A, Y), denoted by (F, A, Y)c , is defined as (F, A, Y)c =(Fc , Ac , Y), where Fc : Ac × Y → P(U × Oc ) is a mapping given as

Fc(ec, p) = {(ui, oic): i ∈ I}

whenever

F(e, p) = {(ui, oi): i ∈ I} and oic = 1 − oi.

Definition 18: The union of two GSE sets (F, A, Y) and (G, B, Z) over U, denoted by (F, A, Y)∪(G, B, Z), is a GSE set (H, C, X), where C = A ∪ B, X = Y ∪ Z, and for all e ∈ C and p ∈ X:

H(e, p) = {(ui, max{oi, oi′}): i ∈ I} if (e, p) ∈ (A ∩ B, Y ∩ Z); {(ui, oi): i ∈ I} if (e, p) ∈ (A, Y)\(B, Z); {(ui, oi′): i ∈ I} if (e, p) ∈ (B, Z)\(A, Y),

whenever F(e, p)={(ui , oi ): i ∈ I} and G(e, p)={(ui , oi′): i ∈ I}.

Definition 19: The intersection of two GSE sets (F, A, Y) and (G, B, Z) over U, denoted by (F, A, Y)∩(G, B, Z), is a GSE set (H, C, X), where C = A ∩ B, X = Y ∩ Z, and for all e ∈ C and p ∈ X:

H(e, p) = {(ui, min{oi, oi′}): i ∈ I}

whenever F(e, p)={(ui , oi ): i ∈ I} and G(e, p)={(ui , oi′): i ∈ I}.
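A minimal sketch of Definitions 17, 18, and 19 in Python, assuming each GSE set is encoded as a dict mapping (criterion, expert) pairs to alternative-to-opinion dicts. The names and encoding are ours, not from the paper:

```python
# Sketch of Definitions 17-19: a GSE set as a dict mapping
# (criterion, expert) -> {alternative: opinion in [0, 1]}.
# Assumes both sets grade the same alternatives on shared (e, p) pairs.

def gse_complement(F):
    """Definition 17: each opinion o is replaced by 1 - o."""
    return {k: {u: round(1 - o, 10) for u, o in v.items()} for k, v in F.items()}

def gse_union(F, G):
    """Definition 18: pointwise max on common (e, p) pairs, else keep as is."""
    H = {}
    for k in set(F) | set(G):
        if k in F and k in G:
            H[k] = {u: max(F[k][u], G[k][u]) for u in F[k]}
        else:
            H[k] = dict(F.get(k) or G[k])
    return H

def gse_intersection(F, G):
    """Definition 19: pointwise min, defined on common (e, p) pairs only."""
    return {k: {u: min(F[k][u], G[k][u]) for u in F[k]} for k in set(F) & set(G)}

F = {("e1", "a"): {"u1": 0.5, "u2": 0.1}}
G = {("e1", "a"): {"u1": 0.4, "u2": 0.3}, ("e2", "a"): {"u1": 0.9, "u2": 0.0}}
print(gse_union(F, G))         # ("e1","a"): u1 0.5, u2 0.3; ("e2","a") kept from G
print(gse_intersection(F, G))  # ("e1","a"): u1 0.4, u2 0.1
```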

In the classical set theory, hierarchy is characterized through set containment. However, in the case of other generalizations of the classical set, like the fuzzy set, soft set, or hesitant fuzzy set, it is characterized in different ways. Alkhazaleh and Salleh [5] defined soft expert subsets by using the classical set containment approach, in which grading of opinions is not considered. Taking into consideration the opinions of experts, we define the notion of a subset for GSE sets in a more generalized way as below.

Definition 20: For a GSE set (F, A, Y) over U and for any e, e′∈A, p, p′∈Y, if

F(e, p) = {(ui, oi): i ∈ I} and F(e′, p′) = {(ui, oi′): i ∈ I},

then F(e, p) is said to be contained in F(e′, p′) [or equivalently, F(e, p) is a subset of F(e′, p′), denoted by F(e, p)⊆F(e′, p′)] if

oi ≤ oi′ for each i ∈ {1, 2, 3, ..., n}.

The above condition states that the degree of each alternative in F(e, p) is at most the corresponding degree in F(e′, p′).

Example 2: Consider two GSE elements over U={u1, u2, u3, u4, u5}: F(e1, b)={(u1, 0.2), (u2, 0.5), (u3, 0.4), (u4, 0.5), (u5, 0.6)} and F(e4, c)={(u1, 0.3), (u2, 0.8), (u3, 0.5), (u4, 0.5), (u5, 0.6)}. Then F(e1, b)⊆F(e4, c) because the opinion for each ui in F(e1, b) is less than or equal to its corresponding value in F(e4, c).

Definition 21: For two GSE sets (F, A, Y) and (G, B, Z) over U, (F, A, Y) is called subset of (G, B, Z), denoted by (F, A, Y)⊆(G, B, Z), if

  1. A ⊆ B.

  2. Y ⊆ Z.

  3. F(e, p)⊆G(e, p) for all e ∈ A, p ∈ Y.

In this case, (G, B, Z) is called a superset of (F, A, Y) denoted by (G, B, Z)⊇ (F, A, Y).

By Definition 21, we can see that the comparison of two GSE sets is pointwise, which means that the values of the two GSE sets are compared for each pair of values separately. In case of soft expert sets, containment, as defined in Ref. [5], is a global property that ignores individual opinions completely. Also, in that case, two soft expert sets can be compared but there is no way to compare their respective values separately.

Definition 22: Two GSE sets (F, A, Y) and (G, B, Z) over U are said to be equal, denoted by (F, A, Y)=(G, B, Z), if A=B, Y=Z, and F(e, p)=G(e, p) for all eA (=B), pY (=Z).

Proposition 1: For two GSE sets (F, A, Y) and (G, B, Z) over U, if (F, A, Y)⊆(G, B, Z) and (G, B, Z)⊆(F, A, Y), then (F, A, Y)=(G, B, Z).

Proof. It can easily be proved using Definitions 21 and 22.□

This is one of the most significant results for GSE sets. The inclusion here is based on graded values or opinions, as in hesitant fuzzy sets; however, the above result does not hold in the case of hesitant fuzzy sets. To overcome this shortcoming, many inclusion measures and criteria have been developed, and hesitant equality has also been introduced. However, all these attempts have proved to be of limited use in practical implementations.

Xia and Xu [33] defined the score function of a hesitant fuzzy element h as s(h) = (∑γ∈h γ)/#h, where s(·) is the score function and #h is the number of elements in h. This score function serves as a measure to compare two hesitant fuzzy sets. Following the same technique, we define the score function for a GSE set as below.

Definition 23: For a given GSE set (F, A, Y) over U={u1, u2, ..., un }, where A contains m criteria, the score function for any ui (i=1, 2,…,n) with respect to the opinions of an expert pY is denoted by s(ui , p) and is defined as

s(ui, p) = (o1 + o2 + … + om)/m,

where o1, o2,…,om are the respective opinions of the expert p for the alternative ui with respect to the criteria e1, e2,…,em .
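A small sketch of Definition 23 in Python, assuming a GSE set encoded as a dict keyed by (criterion, expert) pairs; the encoding and the function name `score` are ours:

```python
# Sketch of Definition 23: the score of alternative u for expert p is the
# average of p's opinions on u across all criteria present in the GSE set.

def score(F, u, p):
    ops = [v[u] for (e, q), v in F.items() if q == p]
    return sum(ops) / len(ops)

F = {("e1", "p"): {"u1": 0.4, "u2": 0.8},
     ("e2", "p"): {"u1": 0.6, "u2": 0.2}}
print(score(F, "u1", "p"))  # (0.4 + 0.6) / 2 = 0.5
```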

Theorem 2: For any two GSE sets (F, A, Y) and (G, B, Z) over U, we have

(F, A, Y)∩(G, B, Z)⊆(F, A, Y), (G, B, Z);

(F, A, Y), (G, B, Z)⊆(F, A, Y)∪(G, B, Z).

Proof. For any GSE sets (F, A, Y) and (G, B, Z), let (F, A, Y)∩(G, B, Z)=(H, A∩B, Y∩Z). As A∩B ⊆ A, B and Y∩Z ⊆ Y, Z, for any e ∈ A∩B, p ∈ Y∩Z, using Definition 19, we have

H(e, p) = {(ui, min{oi, oi′}): i ∈ I},

where F(e, p)={(ui , oi ): i ∈ I} and G(e, p)={(ui , oi′): i ∈ I}. Thus, by Definition 20, H(e, p)⊆{(ui , oi ): i ∈ I}=F(e, p) and H(e, p)⊆{(ui , oi′): i ∈ I}=G(e, p). This shows that (F, A, Y)∩(G, B, Z)⊆(F, A, Y), (G, B, Z).

Also, let (F, A, Y)∪(G, B, Z)=(J, A∪B, Y∪Z). As A ⊆ A∪B and Y ⊆ Y∪Z, for any e ∈ A, p ∈ Y, using Definition 18, we have

J(e, p) = {(ui, max{oi, oi′}): i ∈ I} if (e, p) ∈ (A ∩ B, Y ∩ Z); {(ui, oi): i ∈ I} if (e, p) ∈ (A, Y)\(B, Z).

In both cases, using Definition 20, we have F(e, p)⊆J(e, p). Similarly, G(e, p)⊆J(e, p). Thus, (F, A, Y), (G, B, Z)⊆(F, A, Y)∪(G, B, Z).□

Theorem 3: Let U be the universe set. For all GSE sets (F, A, Y), (G, B, Z), and (H, C, X) over U, the following properties hold:

  1. Idempotent: (F, A, Y)∩(F, A, Y)=(F, A, Y), (F, A, Y)∪(F, A, Y)=(F, A, Y);

  2. Commutative: (F, A, Y)∩(G, B, Z)=(G, B, Z)∩(F, A, Y), (F, A, Y)∪(G, B, Z)=(G, B, Z)∪(F, A, Y);

  3. Associative: (F, A, Y)∩((G, B, Z)∩(H, C, X))=((F, A, Y)∩(G, B, Z))∩(H, C, X), (F, A, Y)∪((G, B, Z)∪(H, C, X))=((F, A, Y)∪(G, B, Z))∪(H, C, X);

  4. Distributive: (F, A, Y)∩((G, B, Z)∪(H, C, X))=((F, A, Y)∩(G, B, Z))∪((F, A, Y)∩(H, C, X)), (F, A, Y)∪((G, B, Z)∩(H, C, X))=((F, A, Y)∪(G, B, Z))∩((F, A, Y)∪(H, C, X));

  5. De Morgan’s laws: ((F, A, Y)∩(G, B, Z))c =(F, A, Y)c ∪(G, B, Z)c , ((F, A, Y)∪(G, B, Z))c =(F, A, Y)c ∩(G, B, Z)c ;

  6. Double-negation law: ((F, A, Y)c )c =(F, A, Y).

Proof. These can be derived directly from Definitions 18, 19, 17, and 22.□

In general, absorption laws do not hold for hesitant fuzzy sets. However, these laws hold in case of the GSE set, as can be seen in the next result.

Theorem 4: For any two GSE sets (F, A, Y) and (G, B, Z) over U, the following absorption laws hold:

  1. (F, A, Y) ∩ ((F, A, Y) ∪ (G, B, Z)) = (F, A, Y);

  2. (F, A, Y) ∪ ((F, A, Y) ∩ (G, B, Z)) = (F, A, Y).

Proof. By Definitions 18 and 19, we have (F, A, Y)∩((F, A, Y)∪(G, B, Z))=(H, A∩(A∪B), Y∩(Y∪Z))=(H, A, Y) such that for any e ∈ A and p ∈ Y, we have

H(e, p) = F(e, p) ∩ (F(e, p) ∪ G(e, p)) if (e, p) ∈ (A ∩ B, Y ∩ Z); F(e, p) ∩ F(e, p) if (e, p) ∈ (A, Y)\(B, Z).

In the first case, when (e, p)∈(A∩B, Y∩Z), let F(e, p)={(ui , oi ): i ∈ I} and G(e, p)={(ui , oi′): i ∈ I}. Using Definitions 18, 19, and 20, we get

F(e, p) ∩ (F(e, p) ∪ G(e, p)) = {(ui, oi): i ∈ I} ∩ ({(ui, oi): i ∈ I} ∪ {(ui, oi′): i ∈ I}) = {(ui, oi): i ∈ I} ∩ {(ui, max{oi, oi′}): i ∈ I} = {(ui, min{oi, max{oi, oi′}}): i ∈ I} = {(ui, oi): i ∈ I} = F(e, p),

since min{oi, max{oi, oi′}} = oi for each i ∈ I.

The above arguments give us our required result for the first case.

In the second case, when (e, p)∈(A, Y)\(B, Z), using Definition 18, we have

(F, A, Y) ∩ ((F, A, Y) ∪ (G, B, Z)) = (F, A, Y) ∩ (F, A, Y) = (F, A, Y),

which is our required result for this case as well. Thus, in both cases, we have

(F, A, Y) ∩ ((F, A, Y) ∪ (G, B, Z)) = (F, A, Y).

Similarly, we can prove that

(F, A, Y) ∪ ((F, A, Y) ∩ (G, B, Z)) = (F, A, Y).
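At the level of individual opinions, Theorem 4 reduces to the lattice identities min{o, max{o, o′}} = o and max{o, min{o, o′}} = o, which can be checked exhaustively over a finite grade set such as the one used in this paper:

```python
# The opinion-level identities behind Theorem 4: for any values a, b,
# min(a, max(a, b)) == a and max(a, min(a, b)) == a.

import itertools

grades = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0
for a, b in itertools.product(grades, repeat=2):
    assert min(a, max(a, b)) == a
    assert max(a, min(a, b)) == a
print("absorption holds on the grade set")
```

Since the GSE union and intersection act pointwise by max and min on opinions (Definitions 18 and 19), these identities lift directly to the set-level laws.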

4 Decision Making with the Aid of GSE Sets

Decision-making problems have extensively been studied using hesitant fuzzy sets in which several experts have to decide among various alternatives [22, 23, 34, 42]. For that purpose, the most common approach is to aggregate the opinions first for each criterion and alternative. Then, the alternatives are ranked by aggregating the averaged criterion values.

As already mentioned, the experts’ individual opinions have been ignored while modeling decisions by hesitant fuzzy sets. Experts may have different expertise regarding different criteria. To overcome this shortcoming, GSE sets can be used to give due weightage to the opinions of experts individually.

In this section, we develop an algorithm with the aid of GSE sets for decision analysis in which experts will be given weightage according to their area of expertise. Let {u1, u2, …, un } be a finite set of n alternatives and E={e1, e2, …, em } be a set of m criteria. Further, we take X as a set of experts and O as a set of possible opinions. Our goal is to decide among the various alternatives subject to the experts’ opinions regarding the given criteria. This is a decision-making problem. To handle such problems by using GSE sets, we propose the following algorithmic steps.

Algorithm 1

  • Step 1: Utilize the evaluations of experts in the form of GSE sets to determine the opinions regarding given alternatives and criteria.

  • Step 2: Find the weighted average of opinions for each pair (ui , ej ) (i=1, 2, …, n, j=1, 2, …, m) by assigning suitable weights to the experts according to their area of expertise.

  • Step 3: Using Definition 23, calculate the scores s(ui ) of ui (i=1, 2, …, n) considering the aggregate values of experts in Step 2.

  • Step 4: Rank all the alternatives according to s(ui ) in descending order.

  • Step 5: End.

This algorithm allows us to handle situations in which several experts have to make a decision on a given set of alternatives by selecting the most relevant one. Step 2 of the algorithm is designed to give due weightage to the experts according to their area of expertise. Score function is then used to aggregate the criteria in Step 3, and Step 4 ranks the alternatives according to their scores.

Example 3: A person wants to start a small business with low capital. He is considering five different businesses; u1 is a computer and mobile repair business, u2 is a baby sitting and child care business, u3 is a dairy products business, u4 is a real-estate agency business, and u5 is an artist freelance business. Let us denote the set of these business types (alternatives) by U.

Let E={e1=high profit, e2=market area, e3=revenue and profitability, e4=ownership and taxes} be the set of criteria. Let Y={a, b, c} be the set of experts. Expert a is selected for acknowledged expertise in evaluating e1 and e4; expert b in evaluating e1, e2, and e3; and expert c in evaluating e2, e3, and e4. Also, we take O={0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} as the set of possible opinions of experts regarding the risk factor.

  • Step 1: Utilize the evaluations of experts in the form of GSE sets for the given problem. For ease of calculation, these can also be written in tabular form as in Tables 1–3.

    F(e1, a)={(u1, 0.3), (u2, 0.4), (u3, 0.2), (u4, 0.5), (u5, 0.8)}, F(e1, b)={(u1, 0.2), (u2, 0.5), (u3, 0.4), (u4, 0.5), (u5, 0.6)}, F(e1, c)={(u1, 0.4), (u2, 0.5), (u3, 0.3), (u4, 0.6), (u5, 0.7)}, F(e2, a)={(u1, 0.9), (u2, 0.0), (u3, 0.2), (u4, 0.3), (u5, 0.6)}, F(e2, b)={(u1, 0.8), (u2, 0.1), (u3, 0.4), (u4, 0.1), (u5, 0.4)}, F(e2, c)={(u1, 0.7), (u2, 0.3), (u3, 0.3), (u4, 0.3), (u5, 0.5)}, F(e3, a)={(u1, 0.5), (u2, 0.3), (u3, 0.9), (u4, 0.7), (u5, 0.2)}, F(e3, b)={(u1, 0.4), (u2, 0.4), (u3, 0.7), (u4, 0.5), (u5, 0.3)}, F(e3, c)={(u1, 0.5), (u2, 0.3), (u3, 0.9), (u4, 0.7), (u5, 0.2)}, F(e4, a)={(u1, 0.6), (u2, 0.8), (u3, 0.5), (u4, 0.7), (u5, 0.6)}, F(e4, b)={(u1, 0.5), (u2, 0.6), (u3, 0.4), (u4, 0.6), (u5, 0.3)}, F(e4, c)={(u1, 0.3), (u2, 0.8), (u3, 0.5), (u4, 0.5), (u5, 0.6)}.

  • Step 2: Find the weighted average of opinions for each pair (ui , ej) (i=1, 2, 3, 4, 5, j=1, 2, 3, 4) by assigning weight 2 to expert a for e1 and e4 and 1 for e2 and e3. Similarly, assign weight 2 to expert b each for e1, e2, and e3 and 1 for e4 and assign weight 2 to expert c each for e2, e3, and e4 and 1 for e1. Thus, opinions of experts have been aggregated in this step and results have been displayed in Table 4.

    For example, for the pair (u1, e1), the weighted average has been calculated as

                   [2(0.3)+2(0.2)+1(0.4)]/(2+2+1)=0.28.

    The rest of the entries can be calculated in a similar way.

  • Step 3: Using Definition 23 with the aggregated experts’ opinions instead of individual values, calculate the scores s(ui ) (i=1, 2, 3, 4, 5) to get

                   s(u1)=0.495, s(u2)=0.43, s(u3)=0.48, s(u4)=0.49, s(u5)=0.49.

  • Step 4: Rank all the business types ui (i=1, 2, 3, 4, 5) in accordance with their scores s(ui ) to get the preference relation u2 ≻ u3 ≻ u4 ∼ u5 ≻ u1 (the alternative with the lowest overall risk factor is the most preferred one, while the one with the highest overall risk factor is the least preferred; u4 and u5 are tied since s(u4)=s(u5)=0.49). Thus, the most appropriate business is u2.
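For reproducibility, Steps 1–4 can be sketched in Python. This is an illustrative sketch, not code from the paper: the score in Step 3 is computed here as the mean of each alternative’s aggregated values (which reproduces the figures above), and all variable names are ours.

```python
# Tables 1-3: opinions of experts a, b, c, as {expert: {criterion: [u1..u5]}}.
opinions = {
    "a": {"e1": [0.3, 0.4, 0.2, 0.5, 0.8], "e2": [0.9, 0.0, 0.2, 0.3, 0.6],
          "e3": [0.5, 0.3, 0.9, 0.7, 0.2], "e4": [0.6, 0.8, 0.5, 0.7, 0.6]},
    "b": {"e1": [0.2, 0.5, 0.4, 0.5, 0.6], "e2": [0.8, 0.1, 0.4, 0.1, 0.4],
          "e3": [0.4, 0.4, 0.7, 0.5, 0.3], "e4": [0.5, 0.6, 0.4, 0.6, 0.3]},
    "c": {"e1": [0.4, 0.5, 0.3, 0.6, 0.7], "e2": [0.7, 0.3, 0.3, 0.3, 0.5],
          "e3": [0.5, 0.3, 0.9, 0.7, 0.2], "e4": [0.3, 0.8, 0.5, 0.5, 0.6]},
}
# Step 2 weights: 2 where the expert is acknowledged for the criterion, else 1.
weights = {
    "a": {"e1": 2, "e2": 1, "e3": 1, "e4": 2},
    "b": {"e1": 2, "e2": 2, "e3": 2, "e4": 1},
    "c": {"e1": 1, "e2": 2, "e3": 2, "e4": 2},
}
criteria = ["e1", "e2", "e3", "e4"]
alternatives = ["u1", "u2", "u3", "u4", "u5"]

# Step 2: weighted average over experts for each pair (u_i, e_j) -> Table 4.
table4 = {}
for e in criteria:
    total = sum(weights[x][e] for x in opinions)
    table4[e] = [sum(weights[x][e] * opinions[x][e][i] for x in opinions) / total
                 for i in range(len(alternatives))]

# Step 3: score of each alternative = mean of its aggregated values.
scores = {u: sum(table4[e][i] for e in criteria) / len(criteria)
          for i, u in enumerate(alternatives)}

# Step 4: the lowest overall risk factor is the most preferred alternative.
best = min(alternatives, key=lambda u: scores[u])
```

Running this reproduces Table 4 (e.g. the entry 0.28 for (u1, e1)) and the scores in Step 3, and `best` comes out as u2.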

Table 1:

Opinions of Expert a.

(e1, a) (e2, a) (e3, a) (e4, a)
u1 0.3 0.9 0.5 0.6
u2 0.4 0.0 0.3 0.8
u3 0.2 0.2 0.9 0.5
u4 0.5 0.3 0.7 0.7
u5 0.8 0.6 0.2 0.6
Table 2:

Opinions of Expert b.

(e1, b) (e2, b) (e3, b) (e4, b)
u1 0.2 0.8 0.4 0.5
u2 0.5 0.1 0.4 0.6
u3 0.4 0.4 0.7 0.4
u4 0.5 0.1 0.5 0.6
u5 0.6 0.4 0.3 0.3
Table 3:

Opinions of Expert c.

(e1, c) (e2, c) (e3, c) (e4, c)
u1 0.4 0.7 0.5 0.3
u2 0.5 0.3 0.3 0.8
u3 0.3 0.3 0.9 0.5
u4 0.6 0.3 0.7 0.5
u5 0.7 0.5 0.2 0.6
Table 4:

Weighted average of opinions.

e1 e2 e3 e4
u1 0.28 0.78 0.46 0.46
u2 0.46 0.16 0.34 0.76
u3 0.30 0.32 0.82 0.48
u4 0.52 0.22 0.62 0.60
u5 0.70 0.48 0.24 0.54

4.1 Comparative Analysis

We compare our algorithm with the following algorithm, which is based on the most widely used weighted averaging operator [10].

Algorithm 2: Let U={u1, u2, …, un } be the set of alternatives, E={e1, e2, …, el } be the set of attributes, and X={x1, x2, …, xm } be the set of experts.

Further, we take the opinions of the experts in the form of GSE elements.

  • Step 1: Utilize the evaluations of experts in the form of GSE sets.

  • Step 2: Separate the opinions of each expert.

  • Step 3: Assign weights to each expert according to their area of expertise.

  • Step 4: Aggregate the attributes by using a GSE-weighted average operator.

  • Step 5: Find the average of these alternatives.

  • Step 6: Arrange these alternatives according to the situation.

  • Step 7: Choose the best alternative.

Now, for a comparative analysis, we apply the above algorithm to Example 3.

  • Step 1: Utilize the evaluations of experts in the form of GSE sets; these are the same GSE sets F(ej , x) listed in Step 1 of Example 3.

  • Step 2: Separate the opinion of each expert to obtain Tables 1–3.

  • Step 3: Assign the weight vector (2/3, 2/3, 1/3)^t to experts a, b, and c, respectively, according to their areas of expertise.

  • Step 4: Find the weighted average of opinions for each pair (ui , ej ) (i=1, 2, 3, 4, 5; j=1, 2, 3, 4). Thus, the opinions of the experts are aggregated in this step, and the results are displayed in Table 5.

    For example, for the pair (u1, e1), the weighted average has been calculated as

                   1−(1−0.3)^(2/3)(1−0.2)^(2/3)(1−0.4)^(1/3)=0.42697.

  • Step 5: Find the average of each alternative:

    1. u1=0.67958;

    2. u2=0.55252;

    3. u3=0.62771;

    4. u4=0.66825;

    5. u5=0.65579.

  • Step 6: Arrange these alternatives to get the preference relation u2 ≻ u3 ≻ u5 ≻ u4 ≻ u1 (the alternative with the lowest overall risk factor is the most preferred one, while the one with the highest overall risk factor is the least preferred).

  • Step 7: Thus, the most appropriate business is u2.
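The Algorithm 2 computation can likewise be sketched in Python. This is an illustrative sketch, not code from the paper: it assumes the GSE-weighted average of grades x_k with weights w_k has the multiplicative form 1 − ∏_k (1 − x_k)^(w_k), which is the form used in the worked entry of Step 4, and all variable names are ours.

```python
import math

# Tables 1-3: opinions of experts a, b, c, as {expert: {criterion: [u1..u5]}}.
opinions = {
    "a": {"e1": [0.3, 0.4, 0.2, 0.5, 0.8], "e2": [0.9, 0.0, 0.2, 0.3, 0.6],
          "e3": [0.5, 0.3, 0.9, 0.7, 0.2], "e4": [0.6, 0.8, 0.5, 0.7, 0.6]},
    "b": {"e1": [0.2, 0.5, 0.4, 0.5, 0.6], "e2": [0.8, 0.1, 0.4, 0.1, 0.4],
          "e3": [0.4, 0.4, 0.7, 0.5, 0.3], "e4": [0.5, 0.6, 0.4, 0.6, 0.3]},
    "c": {"e1": [0.4, 0.5, 0.3, 0.6, 0.7], "e2": [0.7, 0.3, 0.3, 0.3, 0.5],
          "e3": [0.5, 0.3, 0.9, 0.7, 0.2], "e4": [0.3, 0.8, 0.5, 0.5, 0.6]},
}
expert_weights = {"a": 2 / 3, "b": 2 / 3, "c": 1 / 3}  # Step 3 weight vector
criteria = ["e1", "e2", "e3", "e4"]
alternatives = ["u1", "u2", "u3", "u4", "u5"]

# Step 4: GSE-weighted average 1 - prod_k (1 - x_k)^(w_k) -> Table 5.
table5 = {
    e: [1 - math.prod((1 - opinions[x][e][i]) ** expert_weights[x]
                      for x in opinions)
        for i in range(len(alternatives))]
    for e in criteria
}

# Step 5: average of each alternative over the four attributes.
averages = {u: sum(table5[e][i] for e in criteria) / len(criteria)
            for i, u in enumerate(alternatives)}

# Steps 6-7: rank ascending by overall risk factor; the first entry is best.
ranking = sorted(alternatives, key=lambda u: averages[u])
```

Running this reproduces Table 5 (e.g. the entry 0.42697 for (u1, e1)) and the averages in Step 5, and `ranking` begins with u2.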

Table 5:

Aggregated opinions by using GSE weighted average operator.

e1 e2 e3 e4
u1 0.42697 0.95068 0.64431 0.69634
u2 0.64431 0.17232 0.50203 0.89142
u3 0.45567 0.45567 0.95519 0.64431
u4 0.70760 0.34748 0.81101 0.80690
u5 0.87571 0.69348 0.36930 0.68465

It can be seen that the results obtained using Algorithms 1 and 2 are highly consistent; in particular, both identify u2 as the most appropriate business.

5 Conclusions

In this paper, the GSE set has been discussed, which can be treated as a generalization of the hesitant fuzzy set. Some basic operations associated with the structure have been defined and analyzed. For comparison purposes, the notions of “subset” and “score” have also been defined. Some important results have been proved that fail to hold in the case of hesitant fuzzy sets. For example, the notion of containment in hesitant fuzzy sets is an open problem. One of the most widely used measures of containment was given by Xia and Xu [33]; however, in that case, the mutual inclusion of two hesitant fuzzy elements does not imply their equality. This issue can be resolved by using the proposed structure. In addition, a decision-making algorithm based on GSE sets has been developed. Many techniques exist for solving decision-making problems through hesitant fuzzy sets; the suggested technique, however, has an advantage over the existing methods in that it considers the relative importance of the experts according to their areas of expertise. A practical risk decision-making example has been presented to demonstrate the significance of the algorithm. As future work, we aim to study and define appropriate aggregation operators, distance measures, and similarity measures for GSE sets.

Bibliography

[1] H. Aktaş, Some algebraic applications of soft sets, Appl. Soft Comput. 28 (2015), 327–331. doi:10.1016/j.asoc.2014.11.045.

[2] M. I. Ali, Another view on reduction of parameters in soft sets, Appl. Soft Comput. 12 (2012), 1814–1821. doi:10.1016/j.asoc.2012.01.002.

[3] M. I. Ali, F. Feng, X. Liu, W. K. Min and M. Shabir, On some new operations in soft set theory, Comput. Math. Appl. 57 (2009), 1547–1553. doi:10.1016/j.camwa.2008.11.009.

[4] M. I. Ali, M. Shabir and M. Naz, Algebraic structures of soft sets associated with new operations, Comput. Math. Appl. 61 (2011), 2647–2654. doi:10.1016/j.camwa.2011.03.011.

[5] S. Alkhazaleh and A. R. Salleh, Soft expert sets, Adv. Decis. Sci. 2011 (2011), Article ID 757868. doi:10.1155/2011/757868.

[6] D. Chen, E. C. C. Tsang, D. S. Yeung and X. Wang, The parametrization reduction of soft sets and its application, Comput. Math. Appl. 49 (2005), 757–763. doi:10.1016/j.camwa.2004.10.036.

[7] Q. Feng and Y. Zhou, Soft discernibility matrix and its applications in decision making, Appl. Soft Comput. 24 (2014), 749–756. doi:10.1016/j.asoc.2014.08.042.

[8] F. Feng, X. Liu, V. Leoreanu-Fotea and Y. B. Jun, Soft sets and soft rough sets, Inform. Sci. 181 (2011), 1125–1137. doi:10.1016/j.ins.2010.11.004.

[9] P. Ji, H. Zhang and J. Wang, A projection-based TODIM method under multi-valued neutrosophic environments and its application in personnel selection, Neural Comput. Appl. (2016), 1–14. doi:10.1007/s00521-016-2436-z.

[10] D. F. Li, Decision and Game Theory in Management with Intuitionistic Fuzzy Sets, vol. 308, Springer, Berlin, 2014. doi:10.1007/978-3-642-40712-3.

[11] D. Liang and D. Liu, A novel risk decision-making based on decision-theoretic rough sets under hesitant fuzzy information, IEEE Trans. Fuzzy Syst. 23 (2015), 237–247. doi:10.1109/TFUZZ.2014.2310495.

[12] P. K. Maji and A. R. Roy, A fuzzy soft set theoretic approach to decision making problems, J. Comput. Appl. Math. 203 (2007), 412–418. doi:10.1016/j.cam.2006.04.008.

[13] P. K. Maji, R. Biswas and A. R. Roy, An application of soft sets in a decision making problem, Comput. Math. Appl. 44 (2002), 1077–1083. doi:10.1016/S0898-1221(02)00216-X.

[14] P. K. Maji, R. Biswas and A. R. Roy, Soft set theory, Comput. Math. Appl. 45 (2003), 555–562. doi:10.1016/S0898-1221(03)00016-6.

[15] P. Majumdar and S. K. Samanta, Generalized fuzzy soft sets, Comput. Math. Appl. 59 (2010), 1425–1432. doi:10.1016/j.camwa.2009.12.006.

[16] P. Majumdar and S. K. Samanta, On soft mappings, Comput. Math. Appl. 60 (2010), 2666–2672. doi:10.1016/j.camwa.2010.09.004.

[17] F. Meng, X. Chen and Q. Zhang, Induced generalized hesitant fuzzy Shapley hybrid operators and their application in multi-attribute decision making, Appl. Soft Comput. 28 (2015), 599–607. doi:10.1016/j.asoc.2014.11.017.

[18] D. Molodtsov, Soft set theory – first results, Comput. Math. Appl. 37 (1999), 19–31. doi:10.1016/S0898-1221(99)00056-5.

[19] J.-J. Peng, J.-Q. Wang and X.-H. Wu, Novel multi-criteria decision-making approaches based on hesitant fuzzy sets and prospect theory, Int. J. Inform. Technol. Decis. Making 15 (2016), 621–643. doi:10.1142/S0219622016500152.

[20] H. Peng, H. Zhang and J. Wang, Probability multi-valued neutrosophic sets and its application in multi-criteria group decision-making problems, Neural Comput. Appl. (2016), 1–21. doi:10.1007/s00521-016-2702-0.

[21] J. Peng, J. Wang and W.-E. Yang, A multi-valued neutrosophic qualitative flexible approach based on likelihood for multi-criteria decision-making problems, Int. J. Syst. Sci. 48 (2017), 425–435. doi:10.1080/00207721.2016.1218975.

[22] R. M. Rodríguez, L. Martínez, V. Torra, Z. S. Xu and F. Herrera, Hesitant fuzzy sets: state of the art and future directions, Int. J. Intell. Syst. 29 (2014), 495–524. doi:10.1002/int.21654.

[23] R. M. Rodríguez, B. Bedregal, H. Bustince, Y. C. Dong, B. Farhadinia, C. Kahraman, L. Martínez, V. Torra, Y. Xu, Z. S. Xu and F. Herrera, A position and perspective analysis of hesitant fuzzy sets on information fusion in decision making: towards high quality progress, Inform. Fusion 29 (2016), 89–97. doi:10.1016/j.inffus.2015.11.004.

[24] A. Sezgin and A. O. Atagun, On operations of soft sets, Comput. Math. Appl. 61 (2011), 1457–1467. doi:10.1016/j.camwa.2011.01.018.

[25] M. Shabir, M. I. Ali and T. Shaheen, Another approach to soft rough sets, Knowl.-Based Syst. 40 (2013), 72–80. doi:10.1016/j.knosys.2012.11.012.

[26] C. Tan, W. Yi and X. Chen, Hesitant fuzzy Hamacher aggregation operators for multicriteria decision making, Appl. Soft Comput. 26 (2015), 325–349. doi:10.1016/j.asoc.2014.10.007.

[27] Z. Tian, J. Wang, J. Wang and H. Zhang, A likelihood-based qualitative flexible approach with hesitant fuzzy linguistic information, Cogn. Comput. 8 (2016), 670–683. doi:10.1007/s12559-016-9400-1.

[28] V. Torra, Hesitant fuzzy sets, Int. J. Intell. Syst. 25 (2010), 529–539. doi:10.1002/int.20418.

[29] V. Torra and Y. Narukawa, On hesitant fuzzy sets and decision, in: Proc. 18th IEEE Int. Conf. Fuzzy Syst., pp. 1378–1382, Jeju Island, Korea, 2009. doi:10.1109/FUZZY.2009.5276884.

[30] H. Wang and Z. Xu, Multi-groups decision making using intuitionistic-valued hesitant fuzzy information, Int. J. Comput. Intell. Syst. 9 (2016), 468–482. doi:10.1080/18756891.2016.1175812.

[31] H. Wang and Z. Xu, Admissible orders of typical hesitant fuzzy elements and their application in ordered information fusion in multi-criteria decision making, Inform. Fusion 29 (2016), 98–104. doi:10.1016/j.inffus.2015.08.009.

[32] J. Wang, J. Wang and H. Zhang, A likelihood-based TODIM approach based on multi-hesitant fuzzy linguistic information for evaluation in logistics outsourcing, Comput. Indust. Eng. 99 (2016), 287–299. doi:10.1016/j.cie.2016.07.023.

[33] M. M. Xia and Z. S. Xu, Hesitant fuzzy information aggregation in decision making, Int. J. Approx. Reason. 52 (2011), 395–407. doi:10.1016/j.ijar.2010.09.002.

[34] Z. Xu, Hesitant Fuzzy Sets Theory, vol. 314, Springer, Cham, Switzerland, 2014. doi:10.1007/978-3-319-04711-9.

[35] Z. Xu and M. Xia, Distance and similarity measures for hesitant fuzzy sets, Inform. Sci. 181 (2011), 2128–2138. doi:10.1016/j.ins.2011.01.028.

[36] Z. Xu and W. Zhou, Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment, Fuzzy Optim. Decis. Making 16 (2017), 481–503. doi:10.1007/s10700-016-9257-5.

[37] X. Yang, X. Song, Y. Qi and J. Yang, Constructive and axiomatic approaches to hesitant fuzzy rough set, Soft Comput. 18 (2014), 1067–1077. doi:10.1007/s00500-013-1127-2.

[38] S.-M. Yu, J. Wang and J.-Q. Wang, An extended TODIM approach with intuitionistic linguistic numbers, Int. Trans. Oper. Res. (2016). doi:10.1111/itor.12363.

[39] L. Yue, M. Sun and Z. Shao, The probabilistic hesitant fuzzy weighted average operators and their application in strategic decision making, J. Inform. Comput. Sci. 10 (2013), 3841–3848. doi:10.12733/jics20102040.

[40] Y. Zhai, Z. Xu and H. Liao, Probabilistic linguistic vector-term set and its application in group decision making with multi-granular linguistic information, Appl. Soft Comput. 49 (2016), 801–816. doi:10.1016/j.asoc.2016.08.044.

[41] Z. Zhang and C. Wu, Deriving the priority weights from hesitant multiplicative preference relations in group decision making, Appl. Soft Comput. 25 (2014), 107–117. doi:10.1016/j.asoc.2014.08.062.

[42] X. Zhang and Z. Xu, Hesitant Fuzzy Methods for Multiple Criteria Decision Analysis, Springer International Publishing, Cham, Switzerland, 2017. doi:10.1007/978-3-319-42001-1.

[43] H. Zhou, J. Q. Wang and H. Y. Zhang, Multi-criteria decision-making approaches based on distance measures for linguistic hesitant fuzzy sets, J. Oper. Res. Soc. 10 (2017), 1–15. doi:10.1057/jors.2016.41.

Received: 2016-11-29
Published Online: 2018-01-26

©2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 Public License.
