Article Open Access

Soft computing based compressive sensing techniques in signal processing: A comprehensive review

  • Ishani Mishra and Sanjay Jain
Published/Copyright: September 11, 2020

Abstract

In this modern world, a massive amount of data is processed and broadcast daily, which entails high energy consumption, heavy use of memory space, and increased power demands. In several applications, for example image processing, signal processing, and data acquisition, the signals involved can be viewed as sparse in some domain. Compressive sensing (CS) theory is an appropriate candidate for managing these limitations. CS theory proves extremely helpful when signals are sparse or compressible: it can be used to recover sparse or compressive signals from fewer measurements than conventional strategies require. Two issues must be addressed by CS: the design of the measurement framework and the development of an efficient sparse recovery algorithm. The essential intention of this work is to review several concepts and applications of compressive sensing and to give an overview of the most significant sparse recovery algorithms from every class. The performance of acquisition and reconstruction strategies is examined with respect to the compression ratio, reconstruction accuracy, mean square error, and so on.

MSC 2010: 94A08

1 Introduction

Compressive sensing (CS) has received considerable attention across the arts, sciences, and engineering by suggesting that the traditional limits of sampling theory can be surpassed [28]. CS builds upon the essential fact that many signals can be represented using only a few non-zero coefficients in a suitable basis or dictionary [29, 30, 31]. Nonlinear optimization can then enable the recovery of such signals from very few measurements. This paper aims to give a chronological survey of CS theory and its essential properties. After a concise historical review, sparsity and other low-dimensional signal models are described [32]. The recovery of a high-dimensional signal from a small set of measurements, together with performance guarantees for a variety of signal reconstruction algorithms, is also central to CS theory [33, 34, 35, 36, 37].

CS is a three-step process that mainly comprises sparse representation, sampling, and reconstruction. The main aim of CS is to encode the essential information from as few samples as possible. This technique is known as signal compression, where the sampled signal is represented by s coefficients obtained from a minimal subset used for coding. The signal is processed to produce s transformed values from the sampled signal itself. SM denotes the sensing matrix, of dimension s × N (s measurements of a length-N signal). The sensing matrix is applied to an unknown signal vector v to obtain the measurement vector x. The dictionary matrix (ϕ) represents the domain in which the vector signal v admits a sparse representation, as depicted in equation (1)

(1) v = ϕ θ
(2) x = SM v

Recovery is posed as an optimization over θ, assuming only a few of its components are nonzero, i.e.

(3) θ̂ = arg min ‖θ‖₁, where x = SM ϕ θ

The resultant matrix SM ϕ must comply with the Restricted Isometry Property (RIP) for every measurement made. The value v is stored and can be retrieved when needed with the help of θ. The measurements x_k, k = 1, 2, ..., s, can be obtained directly from the analog signal v(t) using only the minimal number s of samples. The steps above describe how CS combines the data acquisition and compression procedures. The measurement matrix must be chosen so that the product SM ϕ satisfies the conditions specified by the RIP. The sensing matrix is generated by following the steps shown below [57].

  • If ϕ is an orthonormal matrix and SM is the sensing matrix, then the product SM ϕ must obey the conditions of the RIP.

  • If there exists another orthonormal matrix Φ whose columns have low coherence with the orthonormal matrix ϕ — for example, taking Φ as a Discrete Fourier Transform matrix and ϕ as an identity matrix — then s of the N rows of Φ are chosen uniformly at random to form the sensing matrix SM, an s × N matrix. The low coherence of the sensing and basis matrices is closely linked to the RIP. If both matrices possess low coherence, then only a minimum number of measurements is needed to satisfy the RIP.
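The acquisition steps above can be sketched numerically. The concrete sizes, the identity basis, and the Gaussian sensing matrix below are illustrative assumptions (the dimensions follow the convention of s measurements of a length-N signal):

```python
import numpy as np

rng = np.random.default_rng(0)
N, s, k = 256, 64, 8            # signal length, measurements, sparsity

# Sparse coefficient vector theta with k nonzero entries
theta = np.zeros(N)
theta[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

# Basis matrix phi (identity: the signal is sparse in the canonical basis)
phi = np.eye(N)
v = phi @ theta                 # v = phi theta, as in equation (1)

# Random Gaussian sensing matrix SM of size s x N
SM = rng.standard_normal((s, N)) / np.sqrt(s)
x = SM @ v                      # x = SM v, as in equation (2)

print(x.shape)                  # s measurements instead of N samples
```

Note that acquisition and compression happen in one step: only the s entries of x need to be stored or transmitted.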

A signal vector v is said to be compressible if its expansion in a basis matrix produces only a few large coefficients θj while the rest of the coefficients are small. For such approximately sparse signals, recovery is relaxed to the noise-tolerant form described in equation (4)

(4) θ̂ = arg min ‖θ‖₁, where ‖x − SM ϕ θ‖₂² ≤ ε

The sensing matrix and basis matrix integrate the information at the encoder side, which serves as the input to the CS system, because the encoder is the one that selects the signal of interest, and the signal of interest has a sparse representation. The sensing matrix helps reconstruct the original signal at the decoder's side while reducing the number of encoded measurements used. Appropriate care should be taken when choosing a sensing matrix, as it strongly affects the CS system's accuracy and processing time. The reconstruction algorithm used is also a core concept in CS; it mainly determines how the high-dimensional data are reconstructed from the low-dimensional measurements.

2 Review of Recent Research

2.1 Compressed Sensing techniques

CS is a widely used image reconstruction technique that can reconstruct an original image from a reduced set of its samples. The computational time taken by CS algorithms for image reconstruction is usually higher than that of state-of-the-art reconstruction techniques. Sang-Hoong Jung et al. [56] introduced a greedy algorithm to overcome this complexity using an iterative approach. The greedy algorithm used is Orthogonal Matching Pursuit (OMP), and it provides high-dimensional image quality in a short period of time. High-rate data applications contend with ambiguities such as heavy noise, interference, outliers, and channel fading in the information to be broadcast. In these types of applications, long-term wideband sensing is quite costly to apply. Jie Zhao et al. [54] proposed a method of scheduling sequential CS sensing to solve this problem. The approach combines CS with sequential periodic detection techniques and yields improved sensing quality, reduced CS recovery overhead, and appropriate wideband sensing. To provide optimal performance, CS techniques incorporate random structures and are widely applied to restore missing signal samples. Christina Knill et al. [55] used CS post-processing to enhance a Multiple-Input Multiple-Output (MIMO) approach, regaining the full processing gain of state-of-the-art Orthogonal Frequency-Division Multiplexing (OFDM). This method benefits from the information present in the CS measurements, leading to effective reconstruction, accelerated processing, and low computational complexity.

The agriculture, forestry, and urban planning fields make use of high-resolution remote sensing images. These images exhibit the texture and features of a scene in a clearer representation. The processing of these images involves various levels of complexity, such as texture similarity, massive storage space, and information leakage. A method called finite-state chaotic CS cloud remote sensing image registration [62] is used to address these complexities. The problem of texture similarity is overcome by using an improved Scale-Invariant Feature Transform (SIFT) for both local and global information. CS is also applied to Hyperspectral Image (HSI) reconstruction. The main problem to be solved is identifying the key characteristics present in the HSI. Li Wang et al. [63] proposed a CS reconstruction algorithm for HSI using spectral unmixing characteristics. The HSI is sampled both spatially and spectrally, and its reconstruction is obtained iteratively by solving a joint optimization problem involving the endmember and abundance matrices.

Zhitao et al. [1] presented a method for medical signal compression in which electrocardiogram (ECG) information is compressed using the Set Partitioning In Hierarchical Trees (SPIHT) method. The signals were selected from an open-source database, and the performance analysis showed that their proposed codec was fundamentally more efficient in compression and computation than existing techniques. It is now well known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. Emmanuel et al. [2] proposed a procedure known as "compressed sensing" or "compressive sampling" that depends on properties of the sensing matrix, such as the restricted isometry property; they established new results on the accuracy of reconstruction from undersampled measurements, which improve on earlier estimates and have the benefit of being more elegant. Fred Chen et al. [3] presented a signal-agnostic CS acquisition framework that addresses sending telemetry data under the bandwidth constraints of remote sensors. Luisa F. Polania et al. [4] showed that by utilizing the wavelet transform and an iterative thresholding technique, CS can be used to increase the density of the transmitted information; after performing CS, Bayesian CS (BCS) is utilized to reconstruct the original data. Standard wavelet dictionaries [17] are used to compress and decompress ECG signals, providing low computational complexity for encryption and decryption. The one-bit quantization CS technique [27] preserves only the sign information, resulting in lower storage cost and hardware complexity, while the sparse signals can still be reconstructed with high probability.

2.2 Medical Signal Compression methods

CS is a methodology for the acquisition and recovery of sparse signals that enables sampling rates significantly below the traditional Nyquist rate. Anna M. R. Dixon et al. [5] proposed a CS-based methodology for signal compression. ECG signals generally show redundancy between adjacent heartbeats because of their quasi-periodic structure; the authors demonstrated that this redundancy implies a large fraction of common support among successive heartbeats. Enabling continuous remote cardiovascular monitoring in wireless body sensor networks (WBSNs) can provide improved personalization and quality of care, an increased capacity for prevention and early diagnosis, and enhanced patient autonomy, mobility, and safety. Among these goals, power efficiency can be improved through embedded ECG compression, which reduces airtime over energy-hungry wireless links. Hossein Mamaghanian et al. [6] evaluated the capability of the emerging CS signal acquisition paradigm for low-complexity, energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote. Interestingly, their results show that CS represents a competitive alternative to state-of-the-art Discrete Wavelet Transform (DWT)-based ECG compression solutions: while admittedly exhibiting inferior compression performance for a given reconstructed signal quality, its substantially lower complexity and CPU processing time ultimately allow it to outperform DWT in overall energy efficiency.

CS is a rapidly emerging signal processing method that enables the accurate capture and reconstruction of sparse signals from only a small fraction of the Nyquist-rate samples, significantly decreasing the data rate and system power utilization. Zhang Hong-xin et al. [7] presented an in-depth comparative study of the current best-in-class CS recovery algorithms, using reliability, accuracy, resistance to noise, and computation time as primary measures; ECG signals were also studied to investigate the performance on real-world bio-signals. Fred Chen et al. [8] evaluated and verified the capability of the emerging CS paradigm for real-time, power-efficient ECG compression on resource-constrained sensors. In their research, sparsity models are applied to exploit structural information in recovery algorithms; more precisely, revisiting known sparse reconstruction algorithms, they identified model-based adjustments for the robust recovery of compressed signals such as ECG.

Anna M. R. Dixon et al. [9] presented the utilization of CS algorithms for data compression in remote sensors to address the energy and telemetry bandwidth constraints common to wireless sensor nodes. The results of their examination demonstrate that a digital implementation incurs significantly higher power in the remote sensor space wherever the signal requires high gain and moderate-to-high resolution. WBANs comprise small intelligent biomedical wireless sensors attached on or implanted in the body to gather vital biomedical data from the human anatomy, providing continuous health monitoring [10]. However, the utilization of a conventional ECG framework is constrained by the patient's mobility, transmission capacity, and physical size. Along these lines, Monica Fira et al. [11] improved wireless ECG frameworks: the CS methodology as a new sampling approach, combined with a sensing matrix selection algorithm based on a unique thresholding approach, was utilized to provide a robust low-complexity detection algorithm in gateways and access points with high probability and sufficient precision. Jeevan K et al. [12] proposed utilizing the block sparse Bayesian learning framework to compress and reconstruct non-sparse raw fetal ECG (FECG) recordings. In particular, every column of the matrix can contain just two nonzero entries. This demonstrates that the framework, compared with other methods such as current CS algorithms and wavelet algorithms, can greatly reduce CPU execution time in the data compression stage.

2.3 Data Acquisition Techniques

Anurag Singh and S. Dandapat [13] proposed and comparatively discussed several procedures for ECG signal compression inspired by the basics of CS theory, concentrating on acquisition systems, projection matrices, and reconstruction dictionaries, and on the impacts of the preprocessing involved. The primary methodology for ECG signal compression depends on the direct CS acquisition of the signal with no preprocessing of the waveforms before taking the projections, nor for the construction of the dictionaries. This "genuine" CS they call patient-specific classical compressed sensing (PSCCS), since the dictionary is built from the patient's initial recordings. The second methodology executes a particular preprocessing stage intended to enhance sparsity and improve recoverability by dividing the signal into single heartbeats (also known as cardiac patterns), denoted further as cardiac patterns compressed sensing (CPCS), since in this case the acquired signal and the dictionary atoms are preprocessed, segmented heartbeats, with or without centering of the R wave.

The signal recovery algorithm in [14] depends on minimizing the pseudo-norm of the second-order difference of the signal. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative framework in which a signal reconstruction step and a dictionary update step are repeated until a convergence criterion is fulfilled. The signal reconstruction step is implemented using the proposed signal recovery algorithm, and the dictionary update step is executed using the linear least-squares method. Extensive simulation results demonstrate that their algorithm delivers improved reconstruction performance for temporally correlated ECG signals relative to state-of-the-art regularized least-squares and Bayesian learning-based algorithms.

Anurag Singh et al. [15] proposed Distributed Compressive Sensing (DCS) to exploit the fundamental correlation structure between the various channels of multi-channel ECG signals. The joint reconstruction capability of DCS reduces the number of compressed measurements required for precise reconstruction without affecting the distortion level. Luisa F. Polania et al. [16] investigated the fusion of CS with an effective lossy compression technique for ECG signals. The use of dictionary learning to automatically create the dictionary is described, and two strategies for dictionary creation were presented: patient-agnostic and patient-specific. A comprehensive analysis of both methodologies is portrayed; considering mobile ECG monitoring as an application, every strategy is analyzed over a wide range of compression ratios (CR).

Recent results in telecardiology demonstrate that CS is a promising instrument for lowering energy utilization in WBANs for ECG monitoring. Dana Al Akil et al. [18] proposed exploiting the structure of the wavelet representation of the ECG signal to boost the performance of CS-based strategies for compression and reconstruction of ECG signals. Benyuan Liu et al. [19] presented the design of a low-power and area-efficient hardware engine for multi-channel compression of ECG signals. CS is especially appropriate for low-power implementations since it can drastically reduce circuit complexity for compression tasks. Another feature of the proposed design is that it is suitable for multi-channel systems. The design is implemented on a family of Field-Programmable Gate Arrays (FPGAs) appropriate for low-power applications. Their measurement results demonstrate a compelling reduction in power consumption of between 20 and 40 percent at various operating frequencies using a power-gating scheme. The power consumption of the 4-channel and 8-channel systems is increased by only 7.1% and 11.2%, respectively, compared with the single-channel system. These frameworks cannot be designed together because there are several constraints to be followed: energy utilization, information compression, and device cost are the significant requirements considered when developing an information compression system.

Andrianiaina Ravelomanantsoa et al. [20] proposed a CS algorithm developed to recover such non-sparse physiological signals. Yishan Wang et al. [21] argued that measures of signal reconstruction quality such as PRD are not sufficient for compression methods inspired by CS theory, particularly when neglecting the sampling rate of the raw signal. Among the current uses of WBSNs, the Wearable Health Monitoring System (WHMS) is the most significant: in a common WHMS, miniaturized wireless biosensors attached to or implanted in the human body gather bio-signals to provide constant and consistent health monitoring. Yuvraj V. Parkale et al. [22] presented a CS-based way to compress and recover the sensed physiological data from the wireless biosensors. The CS encoding procedure has minimal implementation complexity and is appropriate for use in energy-constrained systems such as a WHMS.

This design results in reduced storage requirements and lower power consumption compared with Nyquist sampling, where the sampling frequency must be at least double the maximum frequency present in the data signal for accurate reconstruction. Nidhi R. Bhadravati et al. [23] introduced an in-depth investigation of recent trends in CS focused on ECG compression. Giulia Da Poian et al. [24] presented a wearable, wireless ECG system built with Bluetooth Low Energy (BLE). It can acquire a 3-lead ECG signal wirelessly; in addition, digital CS is implemented to improve the energy efficiency of the wireless ECG sensor. Different sparsifying bases, various compression ratios, and several reconstruction algorithms are simulated and reviewed. Finally, the reconstruction is performed by an Android application (app) on smartphones to display the signal in real time [25].

2.4 Analog to Digital Conversion

The analog-to-digital conversion (ADC) stage is one of the fundamental bottlenecks of high-speed communications systems. This section presents a study of various plausible analog-to-digital conversion schemes that are appropriate for overcoming these challenges and achieving the Software-Defined Radio (SDR) paradigm, in which most functionalities, rather than being performed in the analog domain (i.e., filters and mixers), are performed in the digital domain. In SDR, the analog-to-digital conversion is executed immediately after the antenna, and the radio frequency (RF) signal is directly converted to digital with no prior mixing stage. Since it is not possible to approach this idea with conventional ADCs from current commercial devices, this section describes several schemes that may be used. Even though the proposed frameworks have more restrictive specifications, these arrangements reduce the final complexity, as will be detailed in this section. Three promising procedures, namely sub-sampling, interleaving, and CS, help to perform this function efficiently.

3 Signal and Image Reconstruction methods

The ordinary methodology of reconstructing signals from sampled data follows the Shannon sampling theorem, which states that the sampling rate should be at least twice the highest frequency (i.e., fs ≥ 2fm). A large number of samples is required for this methodology. Also, the fundamental theory of linear algebra suggests that the number of measurements of a discrete, finite-dimensional signal should be at least as large as its dimension to guarantee reconstruction. As such, the above two conventional theories scale directly with the number of samples: more samples mean more exact results. However, these days, the innovation of an elegant framework named CS yields a new way to reconstruct signals using a minimal number of samples at a lower rate. CS also addresses image processing and computer vision problems [64, 65, 66, 67, 68].

3.1 Convex Relaxation

With the advancement of fast strategies for linear programming in the eighties, the idea of convex relaxation became promising. This class of algorithms solves a convex optimization problem through linear programming to obtain the reconstruction [41]. The number of measurements required for accurate reconstruction is small; however, the techniques are computationally complex. Basis Pursuit (BP) [44, 45], Basis Pursuit De-Noising (BPDN), Least Absolute Shrinkage and Selection Operator (LASSO), and Least Angle Regression (LARS) are a few instances of such algorithms. BP is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1-norm of coefficients among all such decompositions. BP in highly overcomplete dictionaries leads to large-scale optimization problems [53].
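The reduction of BP to a linear program can be sketched as follows. The split θ = u − w with u, w ≥ 0 is the standard device; the problem sizes and the use of SciPy's `linprog` are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, s, k = 64, 32, 4
A = rng.standard_normal((s, N)) / np.sqrt(s)    # combined matrix SM @ phi
theta = np.zeros(N)
theta[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x = A @ theta                                   # compressed measurements

# Basis Pursuit as a linear program: write theta = u - w with u, w >= 0,
# then minimize sum(u) + sum(w) subject to A @ (u - w) = x.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=x, bounds=(0, None))
theta_hat = res.x[:N] - res.x[N:]
```

With a Gaussian matrix and enough measurements, the l1 minimizer coincides with the true sparse vector with high probability, which is exactly the guarantee BP relies on.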

3.2 Non-Convex Minimization Algorithms

Many practical problems of significance are non-convex, and most non-convex problems are hard (if not impossible) to solve exactly in a reasonable time. In alternating minimization schemes [38], the optimization is carried out with certain variables held fixed in an alternating fashion, and in linearization strategies the objectives and constraints are modified (or approximated by a convex function) [39]. Other procedures include search algorithms (for example, genetic algorithms), which depend on simple solution-update rules to progress.

3.3 Greedy Iterative Algorithm

Because of their quick reconstruction and low numerical complexity, a family of iterative greedy algorithms has been widely utilized in compressive sensing recently. This class of algorithms solves the reconstruction problem by finding the answer, step by step, in an iterative fashion. Fast and exact reconstruction algorithms have been the focal point of CS research, and they will be the key enablers for the utilization of CS. At present, the most important greedy algorithms include matching pursuit and gradient pursuit [43]. The idea is to select the columns of the measurement matrix greedily: at every iteration, the column that correlates most with the residual is chosen. Matching Pursuit (MP) [58, 59], Orthogonal Matching Pursuit (OMP) [48, 49, 50], and Compressive Sampling Matching Pursuit (CoSaMP) [51] are the commonly utilized greedy iterative algorithms because of their low implementation cost and speed of reconstruction [43].
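The greedy selection and least-squares re-fit described above can be sketched as a minimal OMP implementation (the matrix sizes are illustrative assumptions):

```python
import numpy as np

def omp(A, x, k):
    """Orthogonal Matching Pursuit: at each step, pick the column of A most
    correlated with the residual, then re-fit all picked columns by least squares."""
    residual = x.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
        residual = x - A[:, support] @ coef          # orthogonal to chosen atoms
    theta = np.zeros(A.shape[1])
    theta[support] = coef
    return theta

rng = np.random.default_rng(2)
N, s, k = 128, 48, 5
A = rng.standard_normal((s, N)) / np.sqrt(s)
theta = np.zeros(N)
theta[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
theta_hat = omp(A, A @ theta, k)
```

The least-squares re-fit at every step is what distinguishes OMP from plain MP and is the source of its fast, exact recovery on well-conditioned problems.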

3.4 Combinatorial / Sublinear Algorithms

This class of algorithms recovers sparse signals through group testing. They are extremely fast and efficient when compared with convex relaxation or greedy algorithms, but they require a specific structure in the measurements: Φ should be sparse. Representative algorithms are the Fourier Sampling Algorithm, Chaining Pursuit (an iterative algorithm), and Heavy Hitters on Steroids (HHS) [42].

3.5 Iterative Thresholding Algorithms

These algorithms are faster than the other classes of algorithms. Here, correct estimates are recovered by soft or hard thresholding, beginning from noisy measurements, given that the signal is sparse [40]. Their performance depends on the number of iterations and the problem setup at hand.

These algorithms can provide theoretical guarantees on their performance. The fundamental idea of the thresholding method is to pursue a good candidate for the estimate of the support set that fits the measurements. Message Passing (MP) algorithms are a significant modification of iterative thresholding algorithms in which auxiliary variables are associated with directed graph edges. Expander Matching Pursuit, Sparse Matching Pursuit, and Sequential Sparse Matching Pursuit are recently proposed algorithms in this space that achieve near-linear recovery time.
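A minimal sketch of the hard-thresholding idea, assuming a Gaussian measurement matrix and a known sparsity level k; the step size is derived from the spectral norm of A for stability (all sizes are illustrative assumptions):

```python
import numpy as np

def iht(A, x, k, iters=500):
    """Iterative Hard Thresholding: take a gradient step on ||x - A theta||^2,
    then keep only the k largest-magnitude coefficients."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe gradient step size
    theta = np.zeros(A.shape[1])
    for _ in range(iters):
        theta = theta + step * A.T @ (x - A @ theta)
        small = np.argsort(np.abs(theta))[:-k]  # all but the k largest entries
        theta[small] = 0.0
    return theta

rng = np.random.default_rng(3)
N, s, k = 128, 64, 5
A = rng.standard_normal((s, N)) / np.sqrt(s)
theta = np.zeros(N)
theta[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
theta_hat = iht(A, A @ theta, k)
```

The hard-thresholding step is what produces the support-set candidate mentioned above: after each gradient update, only the k most plausible entries survive.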

3.6 Adaptive Filtering

Adaptive filtering techniques are well established, while CS has become a mainstream topic only recently, so it is surprising that no literature utilizes an adaptive filtering structure for the CS reconstruction problem. The reason may be that the aim of CS is to reconstruct a sparse signal, while the solutions of general adaptive filtering algorithms are not sparse. Several LMS variants with sparsity constraints included in their cost functions exist for sparse system identification; consequently, these techniques can be applied to the CS problem. We propose a new method for adaptive identification of sparse systems based on CS theory. We manipulate the transmitted pilot (input data) and the received data signal so that the weights of the adaptive filter approach the compressed form of the sparse system rather than the original system. To this end, we utilize a random filter structure at the transmitter to form the measurement matrix according to the CS framework. Conventional recovery algorithms can then reconstruct the original sparse system. Thus, the denoising property of CS can be exploited in the proposed strategy at the recovery stage.
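The sparse-LMS idea mentioned above can be sketched with a zero-attracting LMS variant. The step size mu, the attractor weight rho, and the toy system below are illustrative assumptions, not the exact scheme of any cited work:

```python
import numpy as np

def za_lms(x, d, taps, mu=0.01, rho=1e-4):
    """Zero-Attracting LMS: the usual LMS update plus an l1-penalty term
    that pulls small coefficients toward zero (sparse system identification)."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # tap-delay input vector
        e = d[n] - w @ u                       # a-priori estimation error
        w += mu * e * u - rho * np.sign(w)     # LMS step + zero attractor
    return w

rng = np.random.default_rng(4)
h = np.zeros(16)
h[[2, 7, 11]] = [1.0, -0.5, 0.8]               # sparse unknown system
x = rng.standard_normal(5000)                  # white pilot input
d = np.convolve(x, h)[:len(x)]                 # noiseless system output
w = za_lms(x, d, taps=16)
```

The `-rho * np.sign(w)` term is the sparsity constraint on the cost function: it biases inactive taps toward exactly zero, which is what makes the filter output usable as a sparse estimate in a CS-style pipeline.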

3.7 Distributed Sensing and Processing

In the DCS problem, we are normally interested in improving the recovery accuracy of xl, or relaxing the requirements on the measurement device, compared with the single-sensor case, by assuming that the data among sensor nodes in a network are correlated. The idea is that by gathering or sharing some data among the nodes and exploiting this correlation, we can achieve better performance. The signal correlation is usually modeled differently depending on the application.

4 Analysis of Results

This section introduces a generic comparison of certain algorithms previously mentioned and some performance analyses found in the literature. A performance comparison between algorithms from every class is analyzed below. The accompanying figures summarize the evaluation measures of existing research from 2000 to 2018. The performance of a sensor node with CS is estimated using metrics such as compression ratio (CR), mean square error (MSE), complexity, and other measures. Strategies such as SPIHT, ASEC, HTA, and SPSA from various works are analyzed.

4.1 Evaluation Measures of 2000-2018 Review Papers

The performance of various acquisition and reconstruction methods is analyzed in Figure 1 to Figure 5. From the results (2000-2018), we can conclude that SPSA with noise is quite difficult to minimize because of the nonlinear nature of the recovered noise. In Figures 1-4 we have analyzed performance measures such as compression ratio, compressed signal reconstruction, compressive sensing, and compression measures for the algorithms Simultaneous Perturbation Stochastic Approximation (SPSA), Health Technology Assessment (HTA), Analysis by Synthesis ECG Compressor (ASEC), and SPIHT, with the measures presented in graphical form.

Figure 1 Graphical representation of 2000-2010 review papers

Figure 2 Graphical representation of 2011-2013 review results

Figure 3 Graphical representation of 2014-2016 review results

Figure 4 Graphical representation of 2017-2018 review papers

Figure 5 Comparison of NMSE

A comparison of the average normalized mean square error (NMSE) between algorithms from every category is presented below. From the convex relaxation category, the FISTA algorithm was implemented. BCS was executed, representing the non-convex optimization category. Lastly, from the greedy algorithms, MP and OMP were executed. The comparative examination is given in Figure 5.

It can be seen that the performance of all the algorithms improves as the number of measurements M increases. Nevertheless, even a low value of M (M < N) enables the algorithms to recover the sparse signal with low NMSE values. Among the algorithms investigated, BCS presents the best performance.
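For reference, the NMSE compared in Figure 5 is conventionally computed as the squared recovery error normalized by the signal energy (a minimal sketch; the exact definition used in each cited work may differ):

```python
import numpy as np

def nmse(theta_true, theta_hat):
    """Normalized mean square error of a recovered coefficient vector."""
    return np.sum((theta_hat - theta_true) ** 2) / np.sum(theta_true ** 2)

# Perfect recovery gives NMSE 0; an all-zero estimate gives NMSE 1.
theta = np.array([0.0, 3.0, 0.0, -4.0])
print(nmse(theta, theta))                   # 0.0
print(nmse(theta, np.zeros_like(theta)))    # 1.0
```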

4.2 Analysis of Various Reconstruction methods in CS

A significant feature of CS is that it requires efficient reconstruction algorithms. The reconstruction of compressively sampled signals involves solving an underdetermined system of linear equations and therefore has infinitely many solutions. The signal reconstruction procedure essentially picks the best estimate of the original signal from all the potential solutions of this inverse problem. This may be accomplished by convex optimization algorithms. The various reconstruction algorithms used for the reconstruction of compressively sampled signals may be classified as follows (Table 1):

Table 1

Analysis of reconstruction methods in Compressive Sensing

Method category            References
Convex Relaxation          [44], [45]
Non-Convex minimization    [39]
Greedy Iterative           [43], [46], [47], [48], [49], [50], [51]
Combinatorial              [42]
Iterative thresholding     [40]
  • Convex Relaxation

  • Non-Convex minimization

  • Greedy Iterative

  • Combinatorial

  • Iterative Thresholding
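As a hedged sketch of the Greedy Iterative category, a minimal orthogonal matching pursuit (OMP) loop can be written as follows; the matrix, sparsity, and seed are illustrative assumptions, not a reference implementation from the cited works:

```python
import numpy as np

def omp(Phi, y, s):
    """Minimal orthogonal matching pursuit: greedily pick the column of Phi
    most correlated with the residual, then re-fit by least squares."""
    M, N = Phi.shape
    support, residual = [], y.copy()
    x_hat = np.zeros(N)
    for _ in range(s):
        # Greedy selection: column with the largest correlation to the residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-fit on the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        residual = y - Phi @ x_hat
    return x_hat

rng = np.random.default_rng(1)
N, M, s = 128, 48, 5
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
x_hat = omp(Phi, y, s)
print(np.count_nonzero(x_hat), np.linalg.norm(y - Phi @ x_hat))
```

Each of the s iterations is dominated by the correlation Phi.T @ residual, which costs O(MN) operations; this is consistent with the O(s M N) complexity listed for OMP in Table 2.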

4.3 Complexity analysis of algorithms

The complexity of the reconstruction methods is presented in Table 2:

Table 2

Complexity of reconstruction methods

Methods             Complexity
BP [44, 45]         O(N^3)
MP [46, 47]         O(M N N_it)
CoSaMP [51]         O(M N)
SP [52]             O(s M N)
OMP [48, 49, 50]    O(s M N)
StOMP [53]          O(N log N)

4.4 Performance Evaluation Metrics for CS

The CS system’s efficiency is measured by the following performance evaluation metrics [58, 59, 60, 61].

  • Compression Ratio: It calculates the degree to which the algorithm removes the unwanted (redundant) data. The compression ratio is obtained as shown in equation (5):

(5) Compression Ratio = ODBO / CDBC

Where ODBO is the number of bits taken to represent the original signal, and CDBC is the number of bits taken to represent the compressed signal. If the compression ratio is high, then less memory space is required to store the data, while the compressed data still retains the information related to the original signal, which helps in retrieving it later for processing.

  • Mean Square Error (MSE): MSE is the average squared difference between the original and reconstructed signals. The output of MSE is always non-negative; it is zero only for a perfect reconstruction. If the model’s MSE is close to zero, then the model offers significant performance.

(6) MSE = (1/K) Σ_{k=1}^{K} (A_o(k) − A_r(k))^2
  • Percentage Root Mean Square Difference (PRMSD): PRMSD measures the level of fidelity and the degree of distortion introduced by the algorithms used for compression and decompression. Visual inspection is required to determine the adequacy of the reconstructed signal. Here the reconstructed signal’s distortion level is compared with the original signal using equation (7):

(7) PRMSD in % = 100 × sqrt( Σ_{k=1}^{K} (A_o(k) − A_r(k))^2 / Σ_{k=1}^{K} (A_o(k))^2 )

Where Ar is the reconstructed signal, and Ao is the original signal.

  • Normalized Percentage Root Mean Square Difference (NPRMSD): It is the normalized value of PRMSD, in which the mean value of the signal (Ā) is removed from the denominator. Since the information of the signal lies in its variance, NPRMSD can offer more accurate error estimates than PRMSD; the fidelity of the reconstructed signal cannot be judged from the mean value of the data alone.

(8) NPRMSD in % = 100 × sqrt( Σ_{k=1}^{K} (A_o(k) − A_r(k))^2 / Σ_{k=1}^{K} (A_o(k) − Ā)^2 )
  • Quality Score (QS): The quality score rates the overall performance of the compression technique used. A higher QS value indicates that the compression technique performs better for signal compression. It is stated as the ratio between the Compression Ratio (CR) and PRMSD:

(9) QS = CR / PRMSD
  • Root Mean Square Error (RMSE): It calculates the amount of error present in the reconstructed signal compared to the original signal. The RMSE measure reflects the original signal quality, which serves as an added advantage, and it is more effective than PRMSD.

(10) RMSE in % = 100 × sqrt( Σ_{k=1}^{K} (A_o(k) − A_r(k))^2 / (K − 1) )
  • Normalized Root Mean Square Error (NRMSE): NRMSE is similar to PRMSD, except that the result is not multiplied by one hundred.

(11) NRMSE = sqrt( Σ_{k=1}^{K} (A_o(k) − A_r(k))^2 / Σ_{k=1}^{K} (A_o(k))^2 )
  • Covariance: The correlation between the original signal and the reconstructed signal is measured using the covariance function.

(12) Covariance = E[(A_o − E(A_o))(A_r − E(A_r))]

Where E denotes the expected value of the original and reconstructed signals.

  • Signal to Noise Ratio(SNR): The difference between the original and reconstructed signal is taken as the noise. The SNR compares the reconstructed signal with its background noise. The measurement used to express SNR is a logarithmic decibel scale. When compared with PRMSD and NPRMSD, the output value of SNR can predict the accuracy of the system more clearly.

(13) SNR in dB = 10 × log10( Σ_{k=1}^{K} (A_o(k) − Ā)^2 / Σ_{k=1}^{K} (A_o(k) − A_r(k))^2 )
  • Peak Signal to Noise Ratio(PSNR): PSNR can be represented as the ratio between the maximum intensity of the signal and the background noise. This is used to estimate the quality of the reconstructed signal. A higher PSNR value indicates a high-quality signal.

(14) PSNR = 10 × log10( MAX^2 / MSE )

In equation (14), MAX is a constant denoting the maximum intensity of the signal.
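The metrics above can be collected in a small utility following equations (5)-(14); this is a minimal numpy sketch, the function names are our own, and the sample signals are illustrative assumptions:

```python
import numpy as np

def compression_ratio(odbo, cdbc):
    """Eq. (5): bits of the original signal over bits of the compressed one."""
    return odbo / cdbc

def mse(ao, ar):
    """Eq. (6): mean square error between original and reconstruction."""
    return np.mean((ao - ar) ** 2)

def prmsd(ao, ar):
    """Eq. (7): percentage root mean square difference."""
    return 100.0 * np.sqrt(np.sum((ao - ar) ** 2) / np.sum(ao ** 2))

def nprmsd(ao, ar):
    """Eq. (8): PRMSD with the signal mean removed from the denominator."""
    return 100.0 * np.sqrt(np.sum((ao - ar) ** 2)
                           / np.sum((ao - ao.mean()) ** 2))

def snr_db(ao, ar):
    """Eq. (13): signal-to-noise ratio in decibels."""
    return 10.0 * np.log10(np.sum((ao - ao.mean()) ** 2)
                           / np.sum((ao - ar) ** 2))

def psnr_db(ao, ar):
    """Eq. (14): peak signal-to-noise ratio in decibels."""
    return 10.0 * np.log10(ao.max() ** 2 / mse(ao, ar))

# Toy signals: one sample off by 1.
ao = np.array([1.0, 2.0, 3.0, 4.0])
ar = np.array([1.0, 2.0, 3.0, 3.0])
print(mse(ao, ar))                    # 0.25
print(compression_ratio(4096, 512))   # 8.0
```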

5 Conclusion

Compressive sensing and its sparse reconstruction algorithms are used in several areas and have been studied extensively in this work. With the growing demand for cheaper, faster, and more efficient devices, the usefulness of CS theory is becoming ever more prominent and significant. This paper has given a survey of this theory. From this study, the two significant challenges in compressive sensing are the design of the measurement matrix and the development of an efficient reconstruction algorithm. CS theory can provide useful and promising techniques in the future. Without a doubt, this subject is undergoing noteworthy and wide development across several applications. Nevertheless, it still faces various open research challenges: for instance, determining the appropriate measurement matrix, and developing a sparse reconstruction algorithm that does not require knowledge of the signal's sparsity and can adapt to time-varying sparsity. Furthermore, signal statistical information can be included in the CS acquisition or CS reconstruction to reduce the amount of required resources.



References

[1] Zhitao Lu, Dong Youn Kim, and William A. Pearlman, "Wavelet Compression of ECG Signals by the Set Partitioning in Hierarchical Trees Algorithm," In Proceedings of IEEE Transaction on Biomedical Engineering, Vol. 47, No. 7, July 2000.10.1109/10.846678Search in Google Scholar

[2] E. J. Candès, "The restricted isometry property and its implications for compressed sensing", Comptes Rendus Mathematique, Vol. 346, No. 9-10, pp. 589-592, May 2008.10.1016/j.crma.2008.03.014Search in Google Scholar

[3] Fred Chen, Anantha P. Chandrakasan, "A Signal-agnostic Compressed Sensing Acquisition System for Wireless and Implantable Sensors", In Proceedings of IEEE Custom Integrated Circuit Conference, pp. 1-4, Sept 2011.10.1109/CICC.2010.5617383Search in Google Scholar

[4] Luisa F.Polania, Rafael E.Carrillo, "Compressed Sensing Based Method For ECG Compression", In Proceeding of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 761-764, May 2011.10.1109/ICASSP.2011.5946515Search in Google Scholar

[5] Anna M.R. Dixon, Emily G. Allstot, "Compressed Sensing Reconstruction: Comparative Study with Applications to ECG Bio-Signals", In Proceeding of IEEE International Symposium of Circuits and Systems (ISCAS), pp. 805-808, 2011.10.1109/ISCAS.2011.5937688Search in Google Scholar

[6] Hossein Mamaghanian, Nadia Khaled, "Structured Sparsity Models for Compressively Sensed Electrocardiogram Signals: A Comparative Study", In Proceedings of IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 125-128, 2011.10.1109/BioCAS.2011.6107743Search in Google Scholar

[7] ZHANG Hong-xin1, WANG Hai-qing, "Implementation of compressive sensing in ECG and EEG signal processing", In Proceedings of ELSEVIER journal of China Universities of Post and Telecommunictions, Vol. 17, No. 6, pp. 122-126, Dec 2010.10.1016/S1005-8885(09)60535-5Search in Google Scholar

[8] Fred Chen, Anantha P. Chandrakasan, "Design and Analysis of a Hardware-Efficient Compressed Sensing Architecture for Data Compression in Wireless Sensors", In Proceeding of IEEE Journal of Solid state Circuit, Vol. 47, No. 3, pp. 744-756, Mar 2012.10.1109/JSSC.2011.2179451Search in Google Scholar

[9] Anna M. R. Dixon, Emily G. Allstot, "Compressed Sensing System Considerations for ECG and EMG Wireless Biosensors", In Proceedings of IEEE Transaction on Biomedical Circuit And Systems, Vol.6, No. 2, pp. 156-166, April 2012.10.1109/TBCAS.2012.2193668Search in Google Scholar PubMed

[10] Zhilin Zhang, Tzyy-Ping Jung, "Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Noninvasive Fetal ECG Via Block Sparse Bayesian Learning", In Proceedings of IEEE Transaction on Biomedical Engineering, Vol. 60, NO. 2, FEB 2013.10.1109/TBME.2012.2226175Search in Google Scholar PubMed

[11] Monica Fira, Liviu Goras, "On Projection Matrices and Dictionaries in ECG Compressive Sensing - a Comparative Study", In Proceedings of IEEE 12th Symposium on Neural Network Applications in Electrical Engineering (NEUREL), pp. 1-6, Nov 2014.10.1109/NEUREL.2014.7011444Search in Google Scholar

[12] Jeevan K. Pant, Sridhar Krishnan, "Compressive Sensing of Electrocardiogram Signals by Promoting Sparsity on the Second-Order Difference and by Using Dictionary Learning", In Proceedings of IEEE Transactions On Biomedical Circuits And Systems, Vol. 8, No. 2, April 2014.10.1109/TBCAS.2013.2263459Search in Google Scholar PubMed

[13] Anurag Singh and S. Dandapat, "Distributed Compressive Sensing for Multichannel ECG Signals over Learned Dictionaries", In Proceedings of IEEE India Conference (INDICON), pp. 1-6, 2014.10.1109/INDICON.2014.7030638Search in Google Scholar

[14] D. Craven, B. McGinley, L. Kilmartin, M. Glavin and E. Jones, "Impact of compressed sensing on clinically relevant metrics for ambulatory ECG monitoring", In Proceedings of IET Journal and Magazine, Vol. 51, No. 4, 2015.10.1049/el.2014.4188Search in Google Scholar

[15] Anurag singh and Dandapat,"Distributed compressive sensing for multichannel ECG signals over learned dictionaries", In Proc. of the Annual IEEE Conference (INDICON), pune, 2014.10.1109/INDICON.2014.7030638Search in Google Scholar

[16] Luisa F. Polanıa, "Exploiting Prior Knowledge in Compressed Sensing Wireless ECG Systems", In Proceedings of IEEE Journal of Biomedical and Health Informatics, Vol. 19, No. 2, pp. 508-519, 2015.10.1109/JBHI.2014.2325017Search in Google Scholar PubMed

[17] Monica Fira, "Applications of Compressed Sensing: Compression and Encryption", In Proceedings of IEEE 18th International Conference on Communication Technology (ICCT), pp. 1203-1207, 2018.10.1109/EHB.2015.7391505Search in Google Scholar

[18] Dana Al Akil1 and Raed M. Shubair, "On the Efficient Application of Compressive Sensing of Physiological Signals in Medical Diagnostics", In proceedings of IEEE International Conference on Electronic Devices, Systems and Applications (ICEDSA), pp. , Dec 2016.10.1109/ICEDSA.2016.7818530Search in Google Scholar

[19] Benyuan Liu, Zhilin Zhang, "The Distortion of Data Compression via Compressed Sensing in EEG telemonitoring for the Epileptic", In Proceedings of 2016 IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 512-515, 2016.10.1109/BioCAS.2016.7833844Search in Google Scholar

[20] Andrianiaina Ravelomanantsoa, Amar Rouane, Hassan Rabah1, "Design and Implementation of a Compressed Sensing Encoder: Application to EMG and ECG Wireless Biosensors", In Proceedings of Springer journal of Circuit System and Signal Processing, Vol. 36, pp. 2875-2892, No. 7, July 2017.10.1007/s00034-016-0444-ySearch in Google Scholar

[21] Yishan Wang1 & Sammy Doleschel1 & Ralf Wunderlich1 & Stefan Heinen, "Evaluation of Digital Compressed Sensing for Real-Time Wireless ECG System with Bluetooth low Energy", In Proceedings of International Springer Journal, pp. 1-7, Jul 2016.10.1007/s10916-016-0526-1Search in Google Scholar PubMed

[22] Yuvraj V. Parkale and Sanjay L. Nalbalwar, "Application of Compressed Sensing (CS) for ECG Signal Compression: A Review", In Proceedings of the International Journal of Springer on Data Engineering and Communication Technology, pp. 53-65, Aug 2016.10.1007/978-981-10-1678-3_5Search in Google Scholar

[23] Nidhi R, Bhadravati, Karnataka, India, "Fusion of Compressed Sensing Algorithms for ECG Signals", In Proceedings of International Journal of Computer Science Trends and Technology (JCST), Vol. 5, N. 5, pp. Sept 2017.Search in Google Scholar

[24] Giulia Da Poian, Christopher J Rozell3, , "Matched Filtering for Heart Rate Estimation on Compressive Sensing ECG Measurements", In Proceedings of IEEE Transaction on biomedical Engineering, Vol. 65, No. 6, pp. 1349-1358, June 2018.10.1109/TBME.2017.2752422Search in Google Scholar PubMed

[25] Mauro Mangia, Valerio Cambareri, "Low-Complexity Biosignal Compression Using Compressed Sensing", In Proceedings of Compressed Sensing for Effective Hardware Implementations, pp. 211-254, July 2018.10.1007/978-3-319-61373-4_8Search in Google Scholar

[26] Hamza Djelouat1(B), Mohammed Al Disi1, "Compressive Sensing Based ECGBiometric System", In proceedings of International Springer Journal of Intelligent Systems and Applications, pp. 126-137, 2018.10.1007/978-3-030-01057-7_11Search in Google Scholar

[27] Zhilin LI, Wenbo XU, "A survey on one-bit compressed sensing: theory and applications", In proceedings of Springer journal of Frontiers of Computer Science, Vol. 12, No. 12, pp. 217-320, April 2018.10.1007/s11704-017-6132-7Search in Google Scholar

[28] Islam, S.R., Kwak, D., Kabir, M.H., Hossain, M., Kwak, K.-S.: The internet of things for health care: a comprehensive survey. IEEE Access 3, 678–708 (2015).10.1109/ACCESS.2015.2437951Search in Google Scholar

[29] Cand‘es, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theor. 52(2), 489–509, 2006.10.1109/TIT.2005.862083Search in Google Scholar

[30] Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theor. 52(4), 1289–1306 (2006).10.1109/TIT.2006.871582Search in Google Scholar

[31] Candes, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theor. 52(12), 5406–5425 (2006).10.1109/TIT.2006.885507Search in Google Scholar

[32] Biel, L., Pettersson, O., Philipson, L., Wide, P.: ECG analysis: a new approach in human identification. IEEE Trans. Instrum. Measur. 50(3), 808–812 (2001).10.1109/IMTC.1999.776813Search in Google Scholar

[33] Irvine, J., Wiederhold, B., Gavshon, L., Israel, S., McGehee, S., Meyer, R., Wiederhold, M.: Heart rate variability: a new biometric for human identification. In: Proceedings of the International Conference on Artificial Intelligence (IC-AI01), pp.1106–1111 (2001).Search in Google Scholar

[34] Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theor. 52(4), 1289–1306 (2006).10.1109/TIT.2006.871582Search in Google Scholar

[35] Candes, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theor. 52(12), 5406–5425 (2006).10.1109/TIT.2006.885507Search in Google Scholar

[36] Donoho D L. Compressed sensing. IEEE Transanctions on Information Theory, 52(4): 1289–1306, 2006.10.1109/TIT.2006.871582Search in Google Scholar

[37] R.G. Baraniuk et al, Model-based compressive sensing. In Proceedings of IEEE Trans. Inf. Theory 56(4), pp. 1982–2001, 2010.10.1109/TIT.2010.2040894Search in Google Scholar

[38] Qiang Wang, Chen Meng, Weining Ma, Cheng Wang, Lei Yu,"Compressive sensing reconstruction for vibration signals based on the improved fast iterative shrinkage-thresholding algorithm", Measurement Journal, 2019.10.1016/j.measurement.2019.04.012Search in Google Scholar

[39] R. Chartrand, 2007, Exact reconstruction of sparse signals via nonconvex minimization IEEE Signal Process. vol. 14, pp. 707-710.10.1109/LSP.2007.898300Search in Google Scholar

[40] A. Maleki, 2009, Coherence analysis of iterative thresholding algorithms, in Communication, Control, and Computing, pp. 236-243, IEEE, 2009.10.1109/ALLERTON.2009.5394802Search in Google Scholar

[41] Donoho D. L., Maleki, A., 2009, Message Passing Algorithms for Compressed Sensing.10.1109/ITWKSPS.2010.5503193Search in Google Scholar

[42] S. Muthukrishnan, March. 2006, Combinatorial algorithms for Compressed Sensing. In Proc. 40th Ann. Conf. Information Sciences and Systems, Princeton.Search in Google Scholar

[43] S. Budhiraja, 2015, A Survey of Compressive Sensing Based Greedy Pursuit Reconstruction Algorithms, vol.10, no. 10, pp. 1-10.10.5815/ijigsp.2015.10.01Search in Google Scholar

[44] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, pp. 33–61, 1998.10.1137/S1064827596304010Search in Google Scholar

[45] R. Garg and R. Khandekar, “Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property,” Proceedings of the 26th Annual International Conference on Machine Learning, pp. 337–344, 2009.10.1145/1553374.1553417Search in Google Scholar

[46] S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415, Dec 1993.10.1109/78.258082Search in Google Scholar

[47] A. K. Mishra and R. S. Verster, Compressive Sensing Based Algorithms for Electronic Defence. Springer, 201710.1007/978-3-319-46700-9Search in Google Scholar

[48] J. Wen, Z. Zhou, J. Wang, X. Tang, and Q. Mo, “A sharp condition for exact support recovery of sparse signals with orthogonal matching pursuit,” in 2016 IEEE International Symposium on Information Theory (ISIT), July 2016, pp. 2364–2368.10.1109/ISIT.2016.7541722Search in Google Scholar

[49] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, May 2009.Search in Google Scholar

[50] D. Needell and R. Vershynin, “Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit,” Found. Comput. Math., vol. 9, no. 3, pp. 317–334, Apr. 2009.10.1007/s10208-008-9031-3Search in Google Scholar

[51] D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” California Institute of Technology, Pasadena, Tech. Rep., 2008.10.1016/j.acha.2008.07.002Search in Google Scholar

[52] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Transactions on Information Theory,vol. 55, no. 5, pp. 2230–2249, May 2009.10.1109/TIT.2009.2016006Search in Google Scholar

[53] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck, “Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1094–1121, Feb 2012.10.1109/TIT.2011.2173241Search in Google Scholar

[54] J. Zhao, Q. Liu, X. Wang and S. Mao, "Scheduled Sequential Compressed Spectrum Sensing for Wideband Cognitive Radios," in IEEE Transactions on Mobile Computing vol. 17, no. 4, pp. 913-926, 1 April 2018.10.1109/TMC.2017.2744621Search in Google Scholar

[55] C. Knill, F. Roos, B. Schweizer, D. Schindler, and C. Waldschmidt, "Random Multiplexing for an MIMO-OFDM Radar With Compressed Sensing-Based Reconstruction," in IEEEMicrowave and Wireless Components Letters vol. 29, no. 4, pp. 300-302, April 2019.10.1109/LMWC.2019.2901405Search in Google Scholar

[56] S. Jung, Y. Cho, R. Park, J. Kim, H. Jung, and Y. Chung, "High-Resolution Millimeter-Wave Ground-Based SAR Imaging via Compressed Sensing," in IEEE Transactions on Magnetics, vol. 54, no. 3, pp. 1-4, March 2018, Art no. 9400504.10.1109/TMAG.2017.2764949Search in Google Scholar

[57] Paulo S.R. Diniz, Johan A.K. Suykens, Rama Chellappa, Sergios Theodoridis, “Introduction to Machine Learning,” Academic Press Library in Signal Processing: Volume 1 - Signal Processing Theory and Machine Learning, pp. 3-1506, 2014.10.1016/B978-0-12-396502-8.00001-2Search in Google Scholar

[58] Data analytics in medicine: concepts, methodologies, tools, and applications. Hershey, PA: IGI Global, Medical Information Science Reference, 2020.Search in Google Scholar

[59] Arjoune, Y., Kaabouch, N., El Ghazi, H., & Tamtaoui, A. (2018). A performance comparison of measurement matrices in compressive sensing. International Journal of Communication Systems, 31(10), e3576.10.1002/dac.3576Search in Google Scholar

[60] Němcová, A., Smíšek, R., Maršánová, L., Smital, L., & Vítek, M. (2018). A Comparative Analysis of Methods for Evaluation of ECG Signal Quality after Compression. BioMed Research International, 2018, 1–26.10.1155/2018/1868519Search in Google Scholar PubMed PubMed Central

[61] Manikandan, M. S., & Dandapat, S. (2008). Wavelet threshold based TDL and TDR algorithms for real-time ECG signal compression. Biomedical Signal Processing and Control 3(1), 44–66.10.1016/j.bspc.2007.09.003Search in Google Scholar

[62] Z. Liu, L. Wang, X. Wang, X. Shen, and L. Li, "Secure Remote Sensing Image Registration Based on Compressed Sensing in Cloud Setting," in IEEE Access vol. 7, pp. 36516-36526, 2019.10.1109/ACCESS.2019.2903826Search in Google Scholar

[63] L. Wang, Y. Feng, Y. Gao, Z.Wang and M. He, "Compressed Sensing Reconstruction of Hyperspectral Images Based on Spectral Unmixing," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing vol. 11, no. 4, pp. 1266-1284, April 2018.10.1109/JSTARS.2017.2787483Search in Google Scholar

[64] Vinu S, (2019). Optimal task assignment in mobile cloud computing by queue based ant-bee algorithm. Wireless Personal Communications 104(1), 173-19710.1007/s11277-018-6014-9Search in Google Scholar

[65] Rejeesh, M. R. (2019). Interest point based face recognition using adaptive neuro fuzzy inference system. Multimedia Tools and Applications 78(16), 22691-2271010.1007/s11042-019-7577-5Search in Google Scholar

[66] Sundararaj, V. (2016). An efficient threshold prediction scheme for wavelet based ECG signal noise reduction using variable step size firefly algorithm. International Journal of Intelligent Engineering and Systems 9(3), 117-126.10.22266/ijies2016.0930.12Search in Google Scholar

[67] Vinu Sundararaj, (2019). Optimised denoising scheme via opposition-based selfadaptive learning PSO algorithm for wavelet-based ECG signal noise reduction. International Journal of Biomedical Engineering and Technology 31(4), 32510.1504/IJBET.2019.103242Search in Google Scholar

[68] Sundararaj, V.,Muthukumar, S. and Kumar, R.S., 2018. An optimal cluster formation based energy efficient dynamic scheduling hybrid MAC protocol for heavy traffic load in wireless sensor networks. Computers & Security 77, pp.277-288.10.1016/j.cose.2018.04.009Search in Google Scholar

Received: 2019-09-05
Accepted: 2020-02-09
Published Online: 2020-09-11

© 2020 I. Mishra and S. Jain, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
