Article Open Access

ANFIS-Based Resource Mapping for Query Processing in Wireless Multimedia Sensor Networks

  • Nagesha Shivappa and Sunilkumar S. Manvi
Published/Copyright: July 27, 2016

Abstract

Wireless multimedia sensor networks (WMSNs) are usually resource constrained: the sensor nodes have limited bandwidth, energy, processing power, and memory. Hence, resource mapping, based on user linguistic quality of service (QoS) requirements and available resources, is required in a WMSN to offer better communication services. This paper proposes an adaptive neuro fuzzy inference system (ANFIS)-based resource mapping for video communications in WMSNs. Each sensor node is equipped with ANFIS, which employs three inputs (user QoS request, available node energy, and available node bandwidth) to predict the quality of the video output in terms of a varying number of frames/second with either fixed or varying resolution. The sensor nodes periodically measure the available node energy and bandwidth. The spatial query processing in the proposed resource mapping works as follows. (i) The sink node receives the user query for some event. (ii) The sink node sends the query through intermediate sensor node(s) and cluster head(s) in the path to an event node. A cluster head-based tree routing algorithm is used for routing. (iii) The query passes through the ANFIS of the intermediate sensor nodes and cluster heads, where each node predicts the quality of the video output. (iv) The event node chooses the minimum quality among all cluster heads and intermediate nodes in the path and transmits the video output. The work is simulated in different network scenarios to test the performance in terms of predicted frames/second and frame format. To the best of our knowledge, the proposed resource mapping is the first such work in the area of sensor networks. The trained ANFIS accurately predicts the output video quality in terms of number of frames/second (or H.264 video format) for the given input.

MSC 2010: 68M10; 68U99; 94A99

1 Introduction

Wireless multimedia sensor networks (WMSNs) [1], [2], [3] comprise sensor nodes and sink nodes that are wirelessly connected. Each node may be involved in data, text, voice, video, and image transmission. The integration of inexpensive hardware such as complementary metal oxide semiconductor cameras and microphones with low-power wireless networking technologies (ultra wide band, IEEE 802.15.4-2011, etc.) has made this possible. The network uses multihop communication to establish a link between the sink node and sensor nodes. WMSNs have been applied successfully to security (surveillance), traffic monitoring and enforcement, personal healthcare, environmental and industrial control, animal habitats, and gaming.

The design of the WMSN is influenced by many factors, which include resource constraints, application-specific quality of service (QoS) requirements, high bandwidth demand, multimedia source coding techniques, and multimedia in-network processing. Resources like energy of individual sensor nodes, bandwidth available to each node for communication, processing power of the CPU, amount of memory available, and operating system used have to be managed effectively for greater performance, better QoS, reliability, and long life of the WMSN. Resource allocation, resource adaptation, resource mapping, resource monitoring, resource estimation, resource discovery and selection, resource modeling, and resource scheduling are some of the issues of resource management in sensor networks. In this paper, we address resource mapping.

Resource mapping is a system-building tool or process for aligning resources and policies with system goals, strategies, and expected outcomes. A high quality of video (more frames/second and/or better frame format) requires more bandwidth for transmission, and sensor nodes consume more energy to transmit the same. The bandwidth of the network has to be shared among sensor nodes to serve multiple queries from the sink node to take care of all users of the WMSN, and the energy of the sensor node should be used judiciously for the longevity of the WMSN. In our application, we are making users contribute toward this end by specifying the quality of the required video streams as low/medium/high quality. Sensor nodes measure the residual energy and available bandwidth periodically. User request, bandwidth available to the sensor node for communication, and residual energy of the sensor node are our resources. These resources are mapped to the output quality of the video in terms of frames/second (or H.264 frame format).

Resource mapping is required in multimedia communication because of the following reasons. (i) The sensor nodes have limited resources (e.g. energy; replacing the batteries of sensor nodes may be difficult/impossible, energy harvesting facility may not be provided/impossible). (ii) The large number of queries from the users of WMSNs demands the sharing of bandwidth. (iii) Users may have to pay for the QoS; hence, they need to choose the quality of video. (iv) Users have to pay service providers for the amount of data transmission.

The adaptive neuro fuzzy inference system (ANFIS) [14] is a class of adaptive neural network that is functionally equivalent to a fuzzy inference system [23]. An ANFIS configuration with three inputs – (i) user input, (ii) bandwidth available to the sensor node, and (iii) residual energy of the sensor node – and one output (number of frames/second or H.264 frame format of the video to be transmitted) is used to solve the above problem. It uses a given input/output data set to construct the fuzzy inference system, whose membership function parameters are tuned (adjusted) using a backpropagation algorithm. Each sensor node is equipped with ANFIS and trained with the set of input and output training data. The sensor node receives the user request through the sink node, and measures its residual energy level and the bandwidth available for video transmission. The trained ANFIS of the sensor node takes the current values of user request, residual energy, and bandwidth, and decides the number of frames/second (or H.264 frame format) of the video to be transmitted to the sink node. The user request input (quality of the video), along with the two network resources, available bandwidth and energy, is mapped to the video quality to be transmitted, expressed in frames/second (or H.264 frame format). Each time the sensor node is queried for video transmission, the ANFIS computes the output.

Users of the WMSN query the network (through sink node) for some event. The sink node sends the query to an appropriate sensor node through intermediate sensor node(s) and cluster head(s). A cluster head-based tree routing algorithm is used for routing. The query passes through the ANFIS of the intermediate sensor nodes and cluster heads, where each node predicts the quality of the video output. Finally, the event node chooses the minimum quality among all cluster heads and intermediate nodes in that path and transmits the video output.

The contributions of the paper are as follows. (i) Mapping the user request for video with some quality (low/medium/high) and available network resources (available bandwidth and residual energy of the sensor node) to output video quality from the event node by prediction using ANFIS. (ii) Developing the necessary training data based on intuition to train the ANFIS and to generate the membership functions. (iii) Considering the entire path from event node to sink node to determine the final quality of video to be transmitted from the event node. (iv) Finding the response of the WMSN in terms of quality of the video: (a) when the user requests are random and uniform, (b) when the energy available in sensor nodes is random and uniform, and (c) when both the user requests are uniform (e.g. all users requesting medium-quality video) and the residual energy of all sensor nodes is uniform.

This work can be used in various scenarios of WMSNs designed for various applications. Some scenarios are listed here. (i) Users may have to pay the service providers for better video quality and for communication charges. So, we can expect users to judiciously decide the quality of video required for their purposes and request the network accordingly for low-/medium-/high-quality videos. A video with fewer frames/second (low quality) takes less energy to transmit than a video with more frames/second (high quality). This increases the life of the network (less energy consumption) and allows the network to provide more bandwidth to the users who really need high-quality video. (ii) If the network is for private use, users can request an appropriate video quality from the network based on their requirements. This improves the life of the network, as less energy is consumed in lower-quality video transmission.

The rest of the paper is organized as follows. Related works are described in Section 2. We describe the ANFIS-based resource mapping for query processing in WMSNs in Section 3. Section 4 presents the simulation process, results, and analysis. We conclude the paper in Section 5.

2 Related Work

Resource constraint is one of the factors that influence the design of WMSNs. Sensor nodes are constrained in terms of processing power, memory, energy, and achievable data rate [2]. Effective management of all these resources is necessary for prolonged life of the network (energy) and to provide the required services to all users of the WMSN.

Literature on resource management issues (in wireless sensor networks), such as resource allocation [32], [33], [35], resource adaptation [15], and resource monitoring [9], [10], [22], is available. However, in our survey, we have not come across any work on resource mapping in WMSNs or wireless sensor networks. In our earlier paper on resource mapping [25], we have used the Mamdani fuzzy inference system for mapping the user request and network resources (residual energy of the node and available bandwidth to sensor node) to the quality of video in terms of frames/second. In that paper, we have done the resource mapping for only one sensor node and the networking issues were not addressed. In this paper, we have considered a clustered tree topology sensor network and multiuser scenario of the network. The mapping is improved using ANFIS, which involves learning. The ANFIS is trained with two different sets of training data separately. They are (i) to predict the output video quality in terms of frames/second, and (ii) to predict the output video quality in terms of H.264 video format. The multiuser scenario is simulated by querying the sensor network for some events. The survey on query processing and ANFIS is as follows.

Queries used in sensor networks are classified into different types based on different criteria [36]. They are (i) historical query, one-time query, persistent query; (ii) spatial query, temporal query, spatio-temporal query; (iii) exploratory query, monitoring query, range query; (iv) filtering query, non-filtering query; and (v) one-dimensional query, multidimensional query. FullFlood [7], spatio-temporal window [12], and directed diffusion [13] are the different techniques used for querying the sensor network through flooding. Time series snapshot querying is discussed in Ref. [18], which is necessary to report a time series snapshot of data to the sink for scientific analysis.

Query processing in a multiuser scenario for wireless sensor networks is considered in Ref. [20]. A query processing method named network event report (NER) for the multiuser scenario is defined in this work. The performance of the NER method is compared with SWIF [8], FullFlood [7], and GHT [28] considering different query loads, network sizes, and node densities. Query-driven tracing (QDT) is proposed as an improvement over NER in Ref. [19]. This reduces the energy consumption for event tracing when query arrivals are few. The performance of FullFlood, GHT, and NER is compared with QDT by considering different query loads, network sizes, and node densities.

Real-time query scheduling is proposed in Ref. [6] to address conflict-free transmission scheduling for real-time query processing in wireless sensor networks. Three new schedulers are presented in this paper. They are (i) the non-pre-emptive query scheduler, (ii) the pre-emptive query scheduler, and (iii) the slack stealing query scheduler.

Energy-efficient reverse skyline query processing is proposed in Ref. [34]. This consists of theoretical analysis of properties of reverse skyline query and proposal of a new scheme, the skyband approach, to tackle the problem of reverse skyline query answering over wireless sensor networks. Multiple reverse skyline query optimization mechanisms to improve the performance are also discussed. Communication cost minimization is proposed for evaluating range reverse skyline query.

We have chosen a spatial query to query the sensor network, as this satisfies the requirements of the proposed work. Users of the WMSN query the network for the occurrence of some event (e.g. people gathering at a particular place). The sink, which knows the locations of the sensors, queries the appropriate sensor about the event.

To the best of our knowledge, the ANFIS model has not been used in sensor networks for video communication. We have, however, traced the use of the ANFIS model in some other types of networks. In Ref. [16], the ANFIS model is used to predict the quality of video based on network-level and application-level parameters such as packet error rate, link bandwidth, send bitrate, and frame rate for MPEG4 video streaming over wireless networks. The work in Ref. [17] predicts the quality of video based on application layer parameters (content type, sender bit rate, and frame rate) and physical layer parameters (block error rate and link bandwidth) in wireless LANs and UMTS networks. An ANFIS model for speech and video quality prediction in mobile networks is available in Ref. [27]. Multiple QoS constrained anycast routing in MANETs using ANFIS is proposed in Ref. [4].

The QoS parameters of video communication in sensor networks are expressed as low, medium, and high (or excellent, good, fair, and bad) at the user level; as video frame rate and resolution at the application layer; and as bandwidth, jitter, reliability, and delay at the network layer. Multimedia encoding techniques (at the application layer) with the design objectives of high compression efficiency, low complexity, and error resiliency reduce the need for high bandwidth in video communication [2]. A comprehensive survey of different video coding/compression techniques is given in Ref. [24]. The computational complexity of the predictive encoding techniques used in video compression standards such as MPEG, H.263, and H.264 is high, and results in a complex encoder and a simple decoder. Distributed source coding shifts this complexity from the encoder (which resides in the sensor node) to the decoder. The details of single-layer coding, multilayer coding, and multiple description coding compression techniques can be found in Ref. [24].

The multipath and multispeed routing protocol [11] is a real-time routing protocol that attempts to achieve timeliness (delay) and reliability. This protocol spans over the network and medium access layer. The comparison of single path and multipath routing protocols for the transmission of images is discussed in Ref. [21] to show that multipath routing is better than single-path routing in terms of reliability. The directional geographical routing [5] is another multipath protocol for real-time video streaming over energy- and bandwidth-constrained sensor networks from a small number of sensor nodes to sink. Other multipath routing protocols, QoS-based routing protocols, and a complete survey of routing protocols are available in Ref. [26].

3 ANFIS-Based Resource Mapping for Query Processing

The network environment, computational model (ANFIS), and routing algorithm for the WMSN are described in this section.

3.1 Network Environment

Users gather information by querying the sink node through the Internet for events, as shown in Figure 1. We consider a scenario where users are requesting video streams of events from the WMSN by sending queries to the sink node. The sink node collects all these queries and prepares subqueries based on query similarity. The sink node, which knows the locations of the sensor nodes, queries them for the event-related video streams. After obtaining the results from the WMSN, the sink node responds to the user queries appropriately.

Figure 1: Network Environment.

A hierarchical (clustered tree) sensor network model with sink node at the root (as shown in Figure 2) is considered. The following assumptions are made about the network model. (i) The sensor nodes and cluster heads are positioned in a clustered tree topology with sink at the root position. All the sensor nodes are leaf nodes in their respective cluster. The cluster heads are special nodes with additional resources, and there is no cluster head election. (ii) The sink node, all sensor nodes, and the cluster heads of the network are immobile. (iii) The network consists of only homogeneous sensor nodes with the same initial energy and finite transmission and reception range. All nodes are equipped with a camera, a microphone, and ANFIS. (iv) Cluster heads are special nodes with additional energy, computing power, and range capability. (v) The sink node knows the position of all sensor nodes and cluster heads in the network. It is rich in available energy, bandwidth, memory, and computing power. (vi) The sensor nodes and cluster heads periodically measure their residual energy and available bandwidth. (vii) The clusters are formed based on the wireless transmission and reception range of sensor nodes.

Figure 2: Sensor Network Model.

3.2 Resource Mapping using ANFIS

3.2.1 Resource Mapping Policy

In ANFIS, the bandwidth available to the node (low/medium/high), residual energy of the node (low/medium/high), and user request for video transmission (low/medium/high) are used to predict the quality of video that the node is capable of transmitting, based on the resource mapping policy shown in Table 1.

Table 1:

Resource Mapping Policy.

Energy | Bandwidth | User request | Output quality
Low | Low | Low | Low
Low | Low | Medium | Low
Low | Low | High | Low
Low | Medium | Low | Low
Low | Medium | Medium | Low
Low | Medium | High | Low
Low | High | Low | Low
Low | High | Medium | Low
Low | High | High | Low
Medium | Low | Low | Low
Medium | Low | Medium | Low
Medium | Low | High | Low
Medium | Medium | Low | Low
Medium | Medium | Medium | Medium
Medium | Medium | High | Medium
Medium | High | Low | Low
Medium | High | Medium | Medium
Medium | High | High | Medium
High | Low | Low | Low
High | Low | Medium | Low
High | Low | High | Low
High | Medium | Low | Low
High | Medium | Medium | Medium
High | Medium | High | Medium
High | High | Low | Low
High | High | Medium | Medium
High | High | High | High
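Read as a whole, Table 1 reduces to the minimum of the three linguistic levels: the output quality can never exceed the weakest input. A minimal Python sketch of the policy (our own reading of the table, not code from the paper; names are illustrative):

```python
LEVELS = ("low", "medium", "high")

def output_quality(energy: str, bandwidth: str, request: str) -> str:
    """Resource mapping policy of Table 1: the predicted output quality
    is the minimum of the three linguistic input levels."""
    rank = {level: i for i, level in enumerate(LEVELS)}
    return LEVELS[min(rank[energy], rank[bandwidth], rank[request])]
```

For example, medium energy with high bandwidth and a medium user request yields medium output quality, matching the corresponding row of Table 1.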

3.2.2 ANFIS Input-Output Training Data

The input-output training data for the ANFIS are prepared as follows. The three inputs to the ANFIS – residual energy of the node, bandwidth available to the node, and user input request – are considered linguistically as low/medium/high. The linguistic label low covers 40%–70% of the input range, medium covers 55%–85%, and high covers 70%–100%; this applies to all three inputs. All inputs are varied in increments of 10. In the training data shown in Table 2, the residual energy of the node is fixed at 100% (high). The availability of the bandwidth is varied from 40% to 100% (low to medium to high). For each increment of bandwidth by 10, the user input is varied from 40% to 100% (low to medium to high). To decide the frame rate of the video to be transmitted by the sensor node in number of frames/second (or H.264 frame format), all three inputs are necessary. Even when the available energy is 100%, the bandwidth may not be sufficient to transmit high-quality video; the output quality is then decided based on the available bandwidth and the user request. When both residual energy and available bandwidth are 100%, the ANFIS decides the output quality based on the user request alone. The complete data set can be prepared by varying the energy from 40% to 100% in steps of 10, and varying bandwidth and user input as illustrated for the case with 100% energy. Table 2 shows the training data only for energy equal to 100%. If the available energy is <40%, the network is not suitable for video streaming and is used for image/voice/text transmission. The sink injects the queries in such a way that at least 40% bandwidth is available to the sensor node.

Table 2:

ANFIS Training Data.

Energy (%) | Bandwidth (%) | User input (%) | Frames/second | Frame format (H.264)
100 | 40 | 40 | 9 | 1
100 | 40 | 50 | 9 | 1
100 | 40 | 60 | 9 | 1
100 | 40 | 70 | 9 | 1
100 | 40 | 80 | 9 | 1
100 | 40 | 90 | 9 | 1
100 | 40 | 100 | 9 | 1
100 | 50 | 40 | 9 | 1
100 | 50 | 50 | 12 | 2
100 | 50 | 60 | 12 | 2
100 | 50 | 70 | 12 | 2
100 | 50 | 80 | 12 | 2
100 | 50 | 90 | 12 | 2
100 | 50 | 100 | 12 | 2
100 | 60 | 40 | 9 | 1
100 | 60 | 50 | 12 | 2
100 | 60 | 60 | 15 | 3
100 | 60 | 70 | 15 | 3
100 | 60 | 80 | 15 | 3
100 | 60 | 90 | 15 | 3
100 | 60 | 100 | 15 | 3
100 | 70 | 40 | 9 | 1
100 | 70 | 50 | 12 | 2
100 | 70 | 60 | 15 | 3
100 | 70 | 70 | 18 | 4
100 | 70 | 80 | 18 | 4
100 | 70 | 90 | 18 | 4
100 | 70 | 100 | 18 | 4
100 | 80 | 40 | 9 | 1
100 | 80 | 50 | 12 | 2
100 | 80 | 60 | 15 | 3
100 | 80 | 70 | 18 | 4
100 | 80 | 80 | 21 | 5
100 | 80 | 90 | 21 | 5
100 | 80 | 100 | 21 | 5
100 | 90 | 40 | 9 | 1
100 | 90 | 50 | 12 | 2
100 | 90 | 60 | 15 | 3
100 | 90 | 70 | 18 | 4
100 | 90 | 80 | 21 | 5
100 | 90 | 90 | 24 | 5
100 | 90 | 100 | 24 | 5
100 | 100 | 40 | 9 | 1
100 | 100 | 50 | 12 | 2
100 | 100 | 60 | 15 | 3
100 | 100 | 70 | 18 | 4
100 | 100 | 80 | 21 | 5
100 | 100 | 90 | 24 | 5
100 | 100 | 100 | 27 | 6
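The 100% energy slice of Table 2 follows a regular pattern: the frame rate is set by the smaller of bandwidth and user input, and the H.264 quality index follows the frame rate. A Python sketch that regenerates these rows under that assumption (the pattern is our observation, not stated by the authors):

```python
# Frame-rate and H.264 quality-index lookups inferred from Table 2.
FPS = {40: 9, 50: 12, 60: 15, 70: 18, 80: 21, 90: 24, 100: 27}
FORMAT = {9: 1, 12: 2, 15: 3, 18: 4, 21: 5, 24: 5, 27: 6}

def training_row(energy, bw, ui):
    """One training pair: fps is governed by the weaker of bandwidth
    and user input (assumes the 100% energy slice)."""
    fps = FPS[min(bw, ui)]
    return (energy, bw, ui, fps, FORMAT[fps])

# Regenerate the 49 rows of the energy = 100% slice.
rows = [training_row(100, bw, ui)
        for bw in range(40, 101, 10)
        for ui in range(40, 101, 10)]
```

The same generator, looped over energy values from 40% to 100%, would reproduce the complete training set described in the text.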

The ANFIS in the sensor node can be trained to output the video quality in two different ways. (i) Number of frames/second with fixed resolution (as shown in Table 3): the output of the ANFIS is an integer that indicates the number of frames/second of the video to be transmitted. (ii) Video quality in H.264 frame format (as shown in Table 4): the output of the ANFIS is an integer quality index that describes the quality of the video as given in Table 4.

Table 3:

Video Quality in Frames/Second.

Quality | Frames/second (fps)
High | 25–30
Medium | 15–24
Low | 9–14
Table 4:

Video Quality in H.264 Frame Format.

Quality index | Level | Frame size | Frames/second | Max. bit rate
1 | 1 | SQCIF (128 × 96) | 30 | 64 kbps
2 | 1b | QCIF (176 × 144) | 15 | 128 kbps
3 | 1.1 | QVGA (320 × 240) | 10 | 192 kbps
4 | 1.2 | 525SIF (352 × 240) | 18 | 384 kbps
5 | 1.3 | CIF (352 × 288) | 30 | 768 kbps
6 | 2.1 | 525HHR (352 × 480) | 30 | 4 Mbps

3.2.3 ANFIS

The parameters and notations used in ANFIS are listed in Table 5. The Sugeno type [29], [30], [31] fuzzy inference system is used in the ANFIS, and bell-shaped membership functions are generated based on the training data (Table 2) developed by us based on intuition. The membership functions for user input, energy, and bandwidth are shown in Figure 3.

Table 5:

Parameters and Notations used in ANFIS.

Notation | Description
(a_i, b_i, c_i) | Premise parameter set corresponding to rule i/node i
(p_i, q_i, r_i, k_i) | Consequent parameter set corresponding to rule i/node i
Low, medium, high | Linguistic labels
E, BW, UI | Energy, bandwidth, user input (respectively)
μ(BW_low), μ(BW_medium), μ(BW_high) | Bandwidth membership functions
μ(E_low), μ(E_medium), μ(E_high) | Energy membership functions
μ(UI_low), μ(UI_medium), μ(UI_high) | User input membership functions
O_1^1, O_2^1, O_3^1 | Energy fuzzified outputs of layer 1
O_4^1, O_5^1, O_6^1 | Bandwidth fuzzified outputs of layer 1
O_7^1, O_8^1, O_9^1 | User input fuzzified outputs of layer 1
O_i^2 | ith node output of layer 2
O_i^3 | ith node output of layer 3
Figure 3: Membership Functions. (A) Energy input. (B) Bandwidth input. (C) User input.

The membership function (bell shaped) for the energy (E) is

(1)   μ(E_high) = 1 / (1 + |(E − c_i)/a_i|^(2b_i))

The bell-shaped membership function varies as the (a_i, b_i, c_i) parameters change. This change results in the various forms of membership functions required by the training data. These parameters are referred to as premise parameters. There are nine membership functions in our model: μ(BW_low), μ(BW_medium), μ(BW_high), μ(E_low), μ(E_medium), μ(E_high), μ(UI_low), μ(UI_medium), and μ(UI_high). All of these are defined similarly to Eq. (1).
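Equation (1) translates directly into code. A small Python sketch of the generalized bell membership function (the parameter values used below for illustration are the trained premise parameters for E_high reported in Section 4.2):

```python
def bell_mf(x, a, b, c):
    """Generalized bell membership function of Eq. (1):
    mu(x) = 1 / (1 + |(x - c)/a|^(2b)).
    c is the center, a the width, and b the slope of the bell."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# E_high with the trained premise parameters (a, b, c) = (15.75, 1.48, 99.36):
mu_at_center = bell_mf(99.36, 15.75, 1.48, 99.36)  # peaks at 1.0 when E = c
mu_at_low_e = bell_mf(40.0, 15.75, 1.48, 99.36)    # near 0 far from the center
```

The membership degree is 1 at the center c and decays symmetrically on both sides, which is why the nine trained functions in Figure 3 all share the same bell shape with different (a, b, c) values.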

All the 27 rules generated based on the Sugeno fuzzy inference system are shown in Table 6. The ANFIS structure for the rules (Table 6) and membership functions (Figure 3) is shown in Figure 4. It consists of the fuzzification layer, rules layer, defuzzification layer, and output layer. Each layer consists of many nodes that perform similar functions.

Table 6:

Rules.

Rule no. | Rule
1 | If E is low and BW is low and UI is low, then fps = –0.115E – 0.07952BW – 0.1002UI + 20.21
2 | If E is low and BW is low and UI is medium, then fps = 0.2958E + 0.1787BW + 0.009933UI – 6.515
3 | If E is low and BW is low and UI is high, then fps = 0.136E + 0.1043BW – 0.002869UI – 2.05
4 | If E is low and BW is medium and UI is low, then fps = 0.2746E – 0.002051BW + 0.2226UI – 7.276
5 | If E is low and BW is medium and UI is medium, then fps = 0.2375E + 0.08246BW + 0.0539UI – 19.59
6 | If E is low and BW is medium and UI is high, then fps = 0.2402E – 0.1353BW + 0.02623UI + 3.36
7 | If E is low and BW is high and UI is low, then fps = 0.1158E – 0.004052BW + 0.1017UI – 1.132
8 | If E is low and BW is high and UI is medium, then fps = 0.2599E – 0.02281BW – 0.1573UI + 4.418
9 | If E is low and BW is high and UI is high, then fps = 0.263E + 0.04583BW + 0.02576UI – 4.116
10 | If E is medium and BW is low and UI is low, then fps = 0.02592E + 0.2007BW + 0.2648UI – 6.624
11 | If E is medium and BW is low and UI is medium, then fps = 0.01667E + 0.4781BW + 0.05642UI – 27.46
12 | If E is medium and BW is low and UI is high, then fps = –0.1973E + 0.3284BW + 0.0283UI + 3.14
13 | If E is medium and BW is medium and UI is low, then fps = 0.01617E + 0.08538BW + 0.3535UI – 23.03
14 | If E is medium and BW is medium and UI is medium, then fps = –0.1349E – 0.1383BW – 0.1395UI + 50.01
15 | If E is medium and BW is medium and UI is high, then fps = 0.2676E + 0.276BW + 0.00444UI – 17.9
16 | If E is medium and BW is high and UI is low, then fps = –0.183E + 0.02471BW + 0.3009UI + 4.07
17 | If E is medium and BW is high and UI is medium, then fps = 0.2688E + 0.02577BW + 0.2754UI – 19.93
18 | If E is medium and BW is high and UI is high, then fps = 0.4275E – 0.04647BW – 0.02047UI – 14.92
19 | If E is high and BW is low and UI is low, then fps = 0.0001825E + 0.1234BW + 0.1452UI – 3.193
20 | If E is high and BW is low and UI is medium, then fps = 0.02303E + 0.3149BW – 0.1688UI + 1.839
21 | If E is high and BW is low and UI is high, then fps = 0.01383E + 0.1981BW + 0.0295UI + 1.433
22 | If E is high and BW is medium and UI is low, then fps = 0.02199E – 0.1413BW + 0.2625UI + 2.544
23 | If E is high and BW is medium and UI is medium, then fps = –0.01434E + 0.2648BW + 0.2522UI – 14.14
24 | If E is high and BW is medium and UI is high, then fps = –0.003636E + 0.4218BW – 0.01789UI – 17.75
25 | If E is high and BW is high and UI is low, then fps = 0.01625E + 0.04724BW + 0.251UI – 2.334
26 | If E is high and BW is high and UI is medium, then fps = –0.008062E – 0.04624BW + 0.4731UI – 18.95
27 | If E is high and BW is high and UI is high, then fps = –0.05737E – 0.02369BW – 0.02728UI + 37.48
Figure 4: ANFIS Structure.
Figure 4:

ANFIS Structure.

Fuzzification layer: each node in this layer is an adaptive node with a node function

O_1^1 = μ(E_low), O_2^1 = μ(E_medium), O_3^1 = μ(E_high),
O_4^1 = μ(BW_low), O_5^1 = μ(BW_medium), O_6^1 = μ(BW_high),
O_7^1 = μ(UI_low), O_8^1 = μ(UI_medium), O_9^1 = μ(UI_high),

where E (or BW or UI) is the input to node i and O_i^1 is the output of node i. The degrees of membership of the inputs E, BW, and UI are determined by this layer and are used as the antecedent parts of the fuzzy rules.

Rules layer: every node in this layer is a fixed node. The inputs to each node are connected by fuzzy AND (intersection operator) as shown in IF-THEN rules in Table 6. Each node output is the product of its incoming signals.

O_i^2 = μ(BW) × μ(E) × μ(UI).

μ(E), μ(BW) and μ(UI) are fuzzified inputs (membership functions) to the nodes as shown in Figure 4.

For example,

Node 1 output is O_1^2 = μ(BW_low) × μ(E_low) × μ(UI_low)

Node 2 output is O_2^2 = μ(BW_low) × μ(E_low) × μ(UI_medium)

Node 27 output is O_27^2 = μ(BW_high) × μ(E_high) × μ(UI_high)

as shown in Figure 4.

where i = 1, 2, 3, …, 27. The output of each node represents the firing strength of a rule.

Defuzzifying layer: each node in this layer is an adaptive node with function

O_i^3 = p_i E + q_i BW + r_i UI + k_i,

where (p_i, q_i, r_i, k_i) is the consequent parameter set of the ith node, i = 1, 2, 3, …, 27.

Output layer: the output node is a fixed node that computes the overall output as a summation of all incoming signals.

The learning mechanism used in ANFIS is the hybrid learning rule, which is a combination of least square estimates and the backpropagation gradient descent method. This hybrid optimization method trains the membership function parameters to emulate the training data. In the forward pass, premise parameters are fixed whereas consequent parameters are identified by a least square estimator. In the backward pass, the errors are propagated backward and premise parameters are adjusted by the gradient descent method.
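Putting the four layers together, one forward pass of the network can be sketched as follows. This is a simplified illustration with made-up premise parameters (the trained values of Section 4.2 would replace them), not the trained ANFIS itself:

```python
from itertools import product

def bell(x, a, b, c):
    """Generalized bell membership function, as in Eq. (1)."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# Illustrative premise parameters (a, b, c) per linguistic label,
# shared here across E, BW, and UI for simplicity.
MF = {"low": (15.0, 1.0, 40.0), "medium": (15.0, 1.0, 70.0), "high": (15.0, 1.0, 100.0)}
LABELS = ("low", "medium", "high")

def anfis_forward(E, BW, UI, consequents):
    """One forward pass: fuzzify the inputs (layer 1), form product
    firing strengths for the 27 rules (layer 2), evaluate the Sugeno
    consequents (layer 3), and sum the normalized rule outputs (layer 4).
    consequents maps (E_label, BW_label, UI_label) -> (p, q, r, k)."""
    w = {}
    for e_lbl, bw_lbl, ui_lbl in product(LABELS, repeat=3):
        w[(e_lbl, bw_lbl, ui_lbl)] = (
            bell(E, *MF[e_lbl]) * bell(BW, *MF[bw_lbl]) * bell(UI, *MF[ui_lbl]))
    total = sum(w.values())  # normalization of firing strengths
    out = 0.0
    for rule, wi in w.items():
        p, q, r, k = consequents[rule]
        out += (wi / total) * (p * E + q * BW + r * UI + k)
    return out
```

With the constant consequent f = k for every rule, the output is exactly k regardless of the inputs, which is a quick sanity check that the normalized firing strengths sum to one.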

3.3 Routing

The cluster head-based tree routing algorithm [37] is used to route queries from the sink node to event nodes, and the video stream from event nodes to the sink node. The query passes through the ANFIS of the intermediate sensor node(s) and cluster head(s) of the routing path and finally reaches the event node. The sensor nodes, cluster heads, and event node in the routing path predict the quality of the video using ANFIS in terms of frames/second (or H.264 video format) based on available bandwidth, residual energy, and user request. The query load on the network decides the bandwidth available to sensor nodes, whereas the residual energy of different sensor nodes depends on the age and utilization of the network and its sensor nodes. The available bandwidth and residual energy differ across nodes in different scenarios (query load on the network) and at different life stages of the network (utilization of the network). So, the predicted frames/second of the video (or quality index in H.264) that the nodes are capable of transmitting differs from node to node. The event node chooses the minimum of the predicted frames/second (or the minimum quality index in H.264 video format) among itself, the cluster heads, and the intermediate sensor nodes in the routing path for smooth transmission.
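The minimum-quality selection at the event node amounts to a simple reduction over the per-node predictions. A sketch (the pair layout is our assumption for illustration):

```python
def negotiate_quality(path_predictions):
    """path_predictions: (fps, h264_quality_index) pairs predicted by
    the ANFIS at each node on the routing path (event node included).
    The event node transmits at the weakest prediction along the path,
    so no node on the route is asked to forward more than it can carry."""
    fps = min(p for p, _ in path_predictions)
    index = min(q for _, q in path_predictions)
    return fps, index
```

For instance, if the intermediate node, cluster head, and event node predict (18, 4), (12, 2), and (21, 5), the transmitted stream is limited to 12 fps (quality index 2) by the cluster head.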

4 Simulations, Results, and Analysis

4.1 Simulation Environment

The GUI tool anfisedit of the fuzzy logic toolbox of Matlab 7 (R2010a) is used to generate the membership functions and rules for the Sugeno fuzzy inference system from the training data. The ANFIS is trained using a hybrid learning algorithm. The output of the ANFIS, which is the decision of the quality of the video to be transmitted, is used by the sensor node.

The sensor node, cluster head, sink, and network environment are simulated, and the clustered tree routing algorithm is implemented in Matlab. The multiuser scenario is constructed to measure the performance of the WMSN under varying query loads using the following sensor network simulation model and parameters.

4.2 Simulation Model

We have created the simulation model with the parameters mentioned below to conduct simulation experiments. (i) Two hundred sensor nodes are deployed in a clustered tree topology (as shown in Figure 2) in a 150 m × 150 m square area. The sink node is located at position (75, 175). Each cell has a bandwidth of 15 Mbps, which is shared among its members. Each sensor node is immobile and has the same fixed communication capacity. The sink node is also immobile. (ii) All the communication links are bidirectional. Signal interference over the wireless channel is ignored. (iii) Every sensor node and all cluster heads are equipped with the ANFIS model. The ANFIS model is constructed using 78 nodes, three inputs (energy of the node, bandwidth available to the node, user request), one output (frames/second of the video to be transmitted or H.264 frame format), and 27 rules. It is trained using 348 pairs of training data (Table 2). (iv) The parameters (a, b, c) for the bell-shaped membership functions E_low, E_medium, E_high, BW_low, BW_medium, BW_high, UI_low, UI_medium, and UI_high are (15.04, 0.79, 39.89), (15.15, 0.72, 69.57), (15.75, 1.48, 99.36), (15.05, 1.08, 39.92), (15.13, 0.49, 69.6), (15.74, 1.50, 99.37), (15.04, 0.94, 39.9), (15.14, 0.63, 69.59), and (15.73, 1.46, 99.38), respectively. (v) The clustered tree routing protocol is used to route the query from sink to sensor nodes and the video stream from sensor nodes to the sink.

4.3 Simulation Procedure

We refer to the subqueries prepared by the sink as queries in this simulation process. The flow of the process is as follows. (i) Initially, 10 queries are injected into the WMSN. (ii) The route for each query is found using the clustered tree routing algorithm. (iii) The ANFIS of the event node, the sensor nodes, and the cluster head nodes in the path (from sink to event node) predicts the video quality in frames/second (or H.264 frame format). (iv) The event node chooses the lowest quality among itself, all cluster heads, and intermediate nodes in the path and transmits the video output. (v) The response to each query, in terms of quality in frames/second (or H.264 frame format), is found and recorded. Then, the average frames/second (or H.264 frame format) over the 10 queries is calculated. (vi) This procedure is repeated for 20, 30, 40, …, 140, and 150 queries. The performance in terms of average frames/second and H.264 frame format is investigated in the following subsections under different conditions.
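Step (iv), the event node picking the lowest predicted quality along the route, can be sketched as follows. This is a Python paraphrase, not the paper's Matlab code, and the example prediction values are hypothetical:

```python
# Each node on the route (intermediate sensors, cluster heads, and the
# event node itself) predicts a video quality in frames/second via its
# ANFIS; the event node transmits at the minimum of these predictions,
# since the weakest node is the bottleneck for the whole stream.
def choose_transmit_quality(path_predictions):
    """path_predictions: fps values predicted by each node's ANFIS."""
    return min(path_predictions)

# Hypothetical ANFIS predictions along one sink -> event-node route:
fps = choose_transmit_quality([24, 18, 15, 21])
print(fps)  # the 15 fps bottleneck node limits the stream
```

Taking the minimum guarantees that no node on the path is asked to relay a stream beyond its predicted capacity.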

4.4 Random User Requests with Constant Node Energy

The energy available to the node is fixed at a low (e.g. 40%; low spans 40% to 70%), medium (e.g. 70%; medium spans 55% to 85%), or high (e.g. 100%; high spans 70% to 100%) level. If the energy of the sensor node is at the low level (40%), then irrespective of the user request (low-, medium-, or high-quality video) and of the bandwidth available to the node (low/medium/high), the output to the users is low-quality video (Figure 5A). The output is exactly 9 frames/s (fps) because the residual energy level is fixed at 40%. If the output is in H.264 frame format (Figure 5B), the user gets the video with the minimum size, which is SQCIF (128 × 96 at 30 fps). To improve the lifetime of the network, we decided in advance (by training the ANFIS accordingly) that a node with a low energy level should send only low-quality video in response to a query from the sink.

Figure 5: Average Frames/Second and H.264 Frame Format for Random User Inputs and Constant Node Energy. (A) Video stream quality in fps. (B) Video stream quality in H.264 frame format.

If the residual energy of the node is at the medium level (e.g. 70%), it is enough to supply medium-quality video (15–24 fps) to the user, provided adequate bandwidth (at least medium) is available and the user demands medium- or high-quality video. Recall that the sensor node ANFIS is trained to output only medium- or low-quality video when the available energy is medium. Up to 100 queries, the network responds to all queries with 15 fps video. From 100 to 150 queries, the output quality in frames/second gradually decreases because the bandwidth is no longer sufficient to respond with 15 fps video. When the number of queries equals 20, the output quality is low (14 fps) even though the energy and bandwidth are sufficient to transmit medium-quality video, because the user request (random here) happens to be low. If the output is in H.264 frame format, it is at quality index 3 (QVGA, 320 × 240 at 10 fps).

Let us consider the case with high (e.g. 100%) residual energy. When the number of queries to the network is small, the bandwidth available to each node is between medium and high. Even then, the output quality is 19 fps because of the random user requests (i.e. the users themselves request medium-quality video). As already mentioned, as the number of queries increases, the bandwidth available to each node decreases and the video quality drops, even when the energy is full and the user demands better quality. If the output is in frame format (Figure 5B), it starts at quality index 4 (525SIF, 352 × 240 at 18 fps) and drops to quality index 3 (QVGA, 320 × 240 at 10 fps) as the number of queries increases. When the number of queries is 20, the quality index drops to 3 because the users themselves do not request better quality (user requests are random).
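The H.264 quality indices referred to in Figures 5–7 map to the frame formats quoted throughout Sections 4.4–4.6. Collecting them into one lookup table (the dictionary form is ours; index 2 is not quoted in this section and is therefore omitted rather than guessed) gives:

```python
# H.264 quality index -> (format name, (width, height), fps),
# as quoted in the text for Figures 5B, 6B, and 7B.
H264_INDEX = {
    1: ("SQCIF",  (128, 96),  30),
    3: ("QVGA",   (320, 240), 10),
    4: ("525SIF", (352, 240), 18),
    5: ("CIF",    (352, 288), 30),
}

# Example: the format the network falls back to under heavy query load.
name, (w, h), fps = H264_INDEX[3]
print(f"index 3 -> {name} {w}x{h} @ {fps} fps")
```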

4.5 Random Energy Levels and Fixed User Requests

The characteristics of the graphs in Figures 5A and 6A are very similar. This is expected: in both experiments one variable is held constant while the other is random; here the roles of the two variables are simply interchanged before plotting.

Figure 6: Average Frames/Second and Frame Format for Random Energy Levels and Fixed User Requests. (A) Video stream quality in fps. (B) Video stream quality in H.264 frame format.

If the user requests low-quality (e.g. 40%) video, the user gets only low-quality video (9–14 fps), even when the energy and bandwidth available at the sensor nodes are medium or high. In Figure 6A, the output is 9 fps because the user request is for low quality at 40%. If the output is in H.264 frame format (Figure 6B), the user gets video with quality index 1 (SQCIF, 128 × 96 at 30 fps). Increasing the number of queries has no effect because the bandwidth shared among the deployed sensor nodes is adequate to provide minimum-quality video for at least 150 queries.

If the users are interested in medium-quality (e.g. 70%) video, they expect 15–24 fps. Users initially get 17 fps and fewer frames/second later: as the number of queries increases, the bandwidth available to each sensor node shrinks, which results in an output video with fewer frames/second. If the output quality is measured in H.264 frame format, the bandwidth and energy of the nodes are sufficient to provide video with quality index 3 (QVGA, 320 × 240 at 10 fps).

If the users are interested in high-quality (100%) video, they expect 25–30 fps. However, the users get medium-quality output (19–16 fps) because of the energy levels (random, since the network is in its midlife), and low-quality video (14–12 fps) as the number of queries increases (the shared bandwidth is then just sufficient for low-quality transmission). If the output is in H.264 frame format, users initially get quality index 4 (525SIF, 352 × 240 at 18 fps) because of the node energy levels, and later quality index 3 (QVGA, 320 × 240 at 10 fps) because the bandwidth available to each sensor node decreases as the number of queries increases.

4.6 Effect of Bandwidth when Both Energy and User Input are Fixed

The effect of the bandwidth when both the energy level and the user request are held constant is shown in Figure 7. Consider the network with a high energy level (100%) and a high-quality video demand from the users (100%). Up to 70 queries, the bandwidth is sufficient to provide all users with high-quality video (25 fps), as shown in Figure 7A. As the number of queries increases further, the quality of the video the network can provide gradually decreases from high to medium to low. At 150 queries, the network is capable of providing 11 fps video for all queries. If the output is in H.264 frame format (Figure 7B), the network initially (up to 60 queries) has the bandwidth to provide all users with quality index 5 (CIF, 352 × 288 at 30 fps), then gradually decreases to quality index 4 (525SIF, 352 × 240 at 18 fps) and then to quality index 3 (QVGA, 320 × 240 at 10 fps). This case illustrates how an increasing number of queries reduces the bandwidth available per node and, in turn, the video quality the network can transmit.

Figure 7: Average Frames/Second and H.264 Frame Format when Both Energy and User Input are Fixed. (A) Video stream quality in fps. (B) Video stream quality in H.264 frame format.

Consider the case where the available energy is high (100%) and the user demands medium-quality (70%) output, or vice versa. Up to 90 queries, the bandwidth is sufficient to provide medium-quality video (18 fps) to all users. As the number of queries increases, the bandwidth becomes insufficient and the output quality for all users gradually decreases. Measured in H.264 frame format, the network provides video with quality index 4 (525SIF, 352 × 240 at 18 fps) up to 110 queries, and responds with quality index 3 (QVGA, 320 × 240 at 10 fps) beyond that because of the lack of bandwidth. If both the energy and the user request are at the medium level (70%), the output quality is naturally restricted to medium (15–24 fps). The bandwidth is sufficient to transmit 17 fps up to 110 queries, after which the frames/second decrease as the number of queries increases. The H.264 frame format response is initially quality index 4 (525SIF, 352 × 240 at 18 fps) and later quality index 3 (QVGA, 320 × 240 at 10 fps).

If either the energy or the user request is low (40%), the output video stream is restricted to low quality (at 40%, 9 fps). The bandwidth is sufficient to transmit 9 fps video streams even when the number of queries rises to 150. The H.264 frame format output is restricted to quality index 1 (SQCIF, 128 × 96 at 30 fps).

5 Conclusion

The sensor nodes of the network periodically compute the bandwidth available to them and measure their residual energy. Users query the sink node for video along with a quality specification of low/medium/high. The sink node, in turn, queries the appropriate sensor nodes of the network to answer the users. There may be many users; the sink collects the user queries, prepares subqueries, and then queries the appropriate nodes. The residual energy of the node, the bandwidth available to the node, and the user request are fed as inputs to the ANFIS, which predicts the output video quality (low/medium/high) in frames/second (or H.264 frame format). Each sensor node in the network is equipped with an ANFIS and uses it to predict the output quality.

The ANFIS of each node is trained with training data. It is trained to output low-quality video if the available energy is low, even when the available bandwidth is medium/high and the user requests medium-/high-quality video. It is likewise trained to output low-quality video when the available bandwidth is low, even if the available energy is medium/high and the user request is medium/high. The ANFIS outputs high-quality video only when all three inputs (E, BW, and UI) are high, and medium-quality video if any one input is medium and the others are medium/high (not low).
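The trained behavior described above can be summarized as a crisp decision rule. The function below is our paraphrase of that summary in Python, not the 27-rule fuzzy system itself, which additionally interpolates smoothly between the linguistic levels:

```python
def video_quality(energy, bandwidth, request):
    """Crisp summary of the trained ANFIS behaviour.
    Each input is a linguistic level: 'low', 'medium', or 'high'."""
    inputs = (energy, bandwidth, request)
    if "low" in inputs:
        # Any low input (energy, bandwidth, or user request)
        # forces a low-quality response, preserving network lifetime.
        return "low"
    if all(x == "high" for x in inputs):
        # High quality only when all three inputs are high.
        return "high"
    # Otherwise at least one input is medium and none is low.
    return "medium"

print(video_quality("high", "high", "high"))    # high
print(video_quality("medium", "high", "high"))  # medium
print(video_quality("low", "high", "high"))     # low
```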

The sensor nodes are configured in a clustered tree topology. The tree routing algorithm routes queries from the sink to the sensor nodes and video streams from the sensor nodes back to the sink.

Future work involves operating the network when node energy drops below 40%, developing schemes to serve users with better video formats, and offering scalable and flexible services using software agents.


Received: 2015-9-26
Published Online: 2016-7-27
Published in Print: 2017-7-26

©2017 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
