Article Open Access

Anti-leakage method of network sensitive information data based on homomorphic encryption

Junlong Shi and Xiaofeng Zhao
Published/Copyright: March 25, 2023

Abstract

With the development of artificial intelligence, increasing attention is being paid to the protection of sensitive information and data. Therefore, a homomorphic encryption framework based on efficient integer vectors is proposed and applied to deep learning, protecting user privacy in a binarized convolutional neural network model. The results show that the model can achieve high accuracy: training accuracy is 93.75% on the MNIST dataset and 89.24% on the original validation set. Because the data are kept confidential, the training accuracy on the encrypted training set is only 86.77%. After the training period was extended, the accuracy began to converge at about 300 cycles and finally reached about 86.39%. In addition, after taking the absolute value of the elements in the encryption matrix, the training accuracy of the model is 88.79% and the test accuracy is 85.12%. The improved model is also compared with the traditional model: it reduces storage consumption during model computation and effectively improves computation speed with little impact on accuracy. Specifically, the improved model is 58 times faster than the traditional CNN model, and its storage consumption is 1/32 of that of the traditional CNN model. Therefore, homomorphic encryption can be applied to information encryption in the context of big data, and privacy-preserving neural network computation can be realized.

1 Introduction

In the current era, deep learning has been applied in various theoretical and practical scenarios, such as image recognition, speech recognition, biometric extraction, and text processing [1]. To make models more efficient and more accurate, it is inevitable to train them on massive data from various channels. However, these data often contain the sensitive information of many users, especially in the military, medical, and political fields, where data are extremely sensitive and need to be protected [2]. Inevitably, this raises concerns about the leakage of sensitive information. In cloud computing, data owners tend to outsource their data and machine learning models to the cloud with its powerful resources, which reduces the owners' cost pressures in data processing [3]. However, direct outsourcing risks leaking private data, because the cloud itself may be hacked, resulting in data loss or theft; the cloud is therefore not completely trustworthy [4]. To prevent users' sensitive information from being leaked on the Internet, the original data can be encrypted. Taking fingerprint recognition technology as an example, the recognition model generally extracts feature details from the user's input fingerprint and interprets them with a matching algorithm; even professionals cannot reconstruct the user's fingerprint from these data [5]. This corresponds to encrypting and decoding the user's fingerprint data, which greatly reduces the possibility that such data leak the user's fingerprint information and eases users' concerns about the leakage of their sensitive information. However, in traditional encryption schemes, once the data are encrypted, the ciphertext cannot participate in computation in the ciphertext domain. How to develop a suitable method to solve this challenge is widely studied in this field [6].
The emergence of the fully homomorphic encryption scheme enables the ciphertext to participate in the operation without decryption, which makes it possible to realize deep learning in the ciphertext domain [7]. Considering the extensive application of deep learning in practical scenarios, this research has important practical significance. However, due to the high complexity of homomorphic encryption technology, the computational efficiency of ciphertext on cloud platform is low [8,9]. At the same time, the size of user data in ciphertext state is large, which increases the pressure on network transmission and storage space of cloud platform [10,11]. In addition, the existing homomorphic encryption technology supports limited ciphertext computation, which leads to inefficient computation of ciphertext by the pool layer and activation function of neural network model [12]. Therefore, the efficient integer vector homomorphic encryption is used to encrypt private data, and a binary convolutional neural network is proposed to improve the computing efficiency of cloud platform for ciphertext.

2 Related work

How to control access to cloud data and ensure the security of sensitive information has become a very challenging issue. Many scholars have devoted themselves to the research of related fields. Yang et al. paid attention to the high efficiency and fine granularity of big data access and put forward relevant privacy protection access control schemes. The model proposed in the research hides all attributes in the access process to ensure high data security. The research evaluates the performance and security of the model, which supports all linear ciphertext shared access and can achieve privacy protection with less resource consumption [13]. In order to further strengthen the security of searchable symmetric encryption, Li et al. built on the concept of forward privacy and extended it, and finally focused on the concept of forward search privacy. The research proposes a new searchable symmetric encryption scheme called Khons, which satisfies the security concept proposed by the research and is proven effective. The study used big data to evaluate the program, and the evaluation results showed that the program was more effective than other programs of the same type [14]. Liu et al. proposed a new encryption scheme based on the blockchain, which can search for content. The security analysis of the scheme is carried out. The results show that the scheme can effectively resist plaintext attacks and prevent keyword selection attacks [15]. Chinnasamy et al. proposed a new technique that uses a hash algorithm to hide access policies and a signature verification scheme to provide security against insider attacks. Compared with the previous content encryption methods, the method used in the research can effectively control the access to the Internet of things, and analyzes the security against indistinguishable adaptive chosen-ciphertext attacks [16]. From the perspective of edge assistance, Xiong et al. proposed a new sharing method to protect the privacy of original data. 
After verification, the method used in the study proved better: the scheme obtains the same results with almost no error and achieves greater optimization of communication overhead and computational cost [17].

In the summary of encryption technology, the calculation theory involved in homomorphic encryption is relatively complex, which relies on complex mathematical problems [18]. The ciphertext data after homomorphic encryption can directly participate in the operation, and then the operation result is decrypted, and the result obtained is consistent with the output result obtained by the same operation processing of the original data. Brakerski et al. proposed a new model for constructing general indistinguishable obfuscation using what they call split fully homomorphic encryption. The research guarantees the security of the model by providing parameters in a properly defined oracle model, which has a simple structure, and the research conclusions show the reliability of the model [19]. Cheon and Kim analyzed the mixed mode of homomorphic encryption, and the two modes of mixed encryption are in the same framework, which can reduce storage requirements during encryption. The research results show that the model can improve the homomorphic evaluation speed of public key encryption and change the state of homomorphic encryption from private to public. Compared with the comparative models in the research, this model can also freely choose the homomorphic encryption message space [20]. Blatt et al. proposed a statistical technology toolbox, which uses homomorphic encryption to conduct large-scale genome-wide association research on encrypted related data without interaction and decryption. This method is faster than the cutting-edge multi-party computing method, and the updating and development of the method promote further research in related fields [21]. From the perspective of homomorphic probabilistic encryption, Gomez-Vallero et al. discussed the protection of multiple biological templates and proposed a corresponding encryption framework. The study concluded that all requirements related to the protection of biometric data were met and there was no loss of accuracy. 
The ciphertext can also be verified directly. The research results also show that the model has high accuracy [22]. Cousins et al. used a field-programmable gate array (FPGA) computational accelerator as part of a homomorphic encryption accelerator, which computes on encrypted data by reducing the computational bottleneck of the lattice cryptography primitives that underpin homomorphic encryption schemes. The results of the study show that the scheme is effective [23].

The above research shows some results in the fields of privacy protection and homomorphic encryption in recent years. Most of the research related to privacy protection focuses on the expansion and upgrade of various encryption technologies, and some studies are carried out for specific scenarios or industries. The research related to homomorphic encryption also mainly focuses on the optimization of the method itself and its various applications, but there is little research on homomorphic encryption in deep learning. Therefore, this research starts from this perspective, aiming to strengthen the wide application of homomorphic encryption in deep learning. In addition, in order to improve the calculation speed of the model, this study binarizes the CNN, which is a new attempt compared with previous data encryption studies.

3 Homomorphic encryption framework based on efficient integer vector

3.1 Concept and operation flow of efficient integer vector homomorphic encryption

A homomorphic encryption algorithm is one that can perform algebraic operations on ciphertext data without first decrypting the ciphertext [24]. Homomorphic encryption technology has the characteristic of computability over ciphertext and has great application value in the field of computer network security; its specific operation process is represented in Figure 1.

Figure 1: Homomorphic encryption scheme flow chart.

As shown in Figure 1, in general, a homomorphic encryption scheme mainly includes four steps: generating a secret key, encrypting the plaintext, performing operations on the ciphertext, and decrypting the ciphertext to obtain the decrypted plaintext [25]. Homomorphic encryption schemes over the integers have become a hot topic in fully homomorphic encryption systems due to their simplified concept and simple operation [26]. Therefore, the research chooses efficient integer vector homomorphic encryption to encrypt the data. The homomorphic encryption algorithm based on integer vectors combines well with the characteristics of neural networks, offering high accuracy and high confidentiality [12]. Efficient integer vector homomorphic encryption consists of two parts: the encryption framework and key exchange [27].

From the perspective of the encryption framework, a plaintext x ∈ Z_p^n is set as the integer vector to be encrypted, where n is the length of the vector and p is the size of the alphabet it is represented over. The ciphertext is set as c ∈ Z_q^{n+1}, where c corresponds to x; the length of c is n + 1 > n, and the alphabet size of c is q, with q ≫ p. In addition, the secret key is defined as a matrix S ∈ Z_q^{m×n}, where m × n denotes a matrix with m rows and n columns, and the secret key needs to satisfy the condition shown in the following formula:

(1) Sc = ωx + e.

Here ω is a large integer parameter and e ∈ Z^m is the noise vector, whose entries are required to be much smaller than ω. Encryption is the process of finding, for a given plaintext x, a ciphertext c that satisfies formula (1). Decryption is carried out on the basis of the secret key S, and its result is the plaintext x, which is recovered as shown in the following formula:

(2) x = ⌈Sc/ω⌋, where ⌈·⌋ denotes rounding to the nearest integer.
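To make the encryption and decryption relations (1) and (2) concrete, the following is a minimal illustrative sketch (not the authors' implementation): it takes the simplest valid key S = I (the identity matrix), so a ciphertext is just c = ωx + e with small noise e, and decryption rounds c/ω back to x. Even this trivial instance already exhibits the additive homomorphism: the sum of two ciphertexts decrypts to the sum of the plaintexts.

```python
# Minimal sketch of the integer-vector scheme of Section 3.1 (assumed, illustrative):
# a ciphertext satisfies S c = w*x + e with |e| << w; decryption rounds S c / w.
# Here S = I, which is the simplest key satisfying formula (1).
import random

w = 2**20                      # large integer parameter (omega)

def encrypt(x):
    """c = w*x + e, with noise |e| far smaller than w/2."""
    return [w * xi + random.randint(-50, 50) for xi in x]

def decrypt(c):
    """Formula (2) with S = I: round each c_i / w to the nearest integer."""
    return [round(ci / w) for ci in c]

x1, x2 = [1, 2, 3], [10, 20, 30]
c_sum = [a + b for a, b in zip(encrypt(x1), encrypt(x2))]   # add ciphertexts
print(decrypt(c_sum))          # [11, 22, 33]: the plaintext sum is recovered
```

The rounding in `decrypt` works because the accumulated noise (here at most 100) stays well below ω/2, which is exactly the condition the scheme imposes on e.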

The key exchange technology used in the study is that of Peikert–Vaikuntanathan–Waters (PVW) [28]; it is a re-linearization technique that can exchange the key of a ciphertext for any other key. The algorithm has two steps: the key S ∈ Z_q^{m×n} is exchanged for a new key S′, with S* ∈ Z_q^{m×nl} used as an intermediate key. After this, a new ciphertext c′ can be obtained, and it should be clear that c′ can also be obtained by encrypting the same plaintext integer vector x.

In the specific exchange step, the ciphertext c is first converted to an intermediate ciphertext c* corresponding to the intermediate key S*; the entries of c* are an order of magnitude smaller than those of the original ciphertext c. The idea is to represent each element c_i of the original ciphertext c in binary, so that the result is a completely new ciphertext c* satisfying |c*| = 1. On this basis, c_i is assumed to be as shown in the following formula:

(3) c_i = b_{i0} + b_{i1}·2 + ⋯ + b_{i(l−1)}·2^{l−1}.

That is, c_i is represented by the bits [b_{i0}, b_{i1}, …, b_{i(l−1)}]. The next step is to construct the intermediate key S* ∈ Z^{m×nl} such that the following formula holds:

(4) S*c* = Sc.

In this process, each element S_ij of S is represented in S* by the vector [S_ij, 2S_ij, …, 2^{l−1}S_ij].
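As an illustration of formulas (3) and (4), the following toy sketch (hypothetical values, not from the article) bit-decomposes a small ciphertext and expands the key accordingly, confirming that the two inner products agree:

```python
# Toy illustration of formulas (3)-(4): each ciphertext entry c_i is split
# into l bits, and each key entry S_ij is expanded into
# [S_ij, 2*S_ij, ..., 2^(l-1)*S_ij]; the inner products then agree.
l = 8
c = [5, 9]                    # toy ciphertext entries (non-negative, < 2**l)
S = [[3, 7]]                  # toy 1x2 secret-key row

def bits(v):
    return [(v >> k) & 1 for k in range(l)]          # [b_0, ..., b_{l-1}]

c_star = [b for ci in c for b in bits(ci)]           # formula (3), per entry
S_star = [[sij * (1 << k) for sij in row for k in range(l)] for row in S]

# formula (4): S* c* equals S c (here 3*5 + 7*9 = 78 on both sides)
print(sum(a * b for a, b in zip(S_star[0], c_star)))  # 78
print(sum(s * ci for s, ci in zip(S[0], c)))          # 78
```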

The second step is to convert S* to S′. An integer matrix M ∈ Z^{n′×nl} is constructed in the model, and the noise matrix is defined as E; these variables need to satisfy the following formula:

(5) S′M = S* + E.

The key S′ is assumed to take the form [I, T], where I is the identity matrix, and M can then be expressed as the following formula:

(6) M = [S* + E − TA; A],

where the two blocks are stacked vertically and A is a preset random matrix, A ∈ Z^{(n′−m)×nl}.
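The algebra behind formulas (5) and (6) can be checked numerically: with S′ = [I, T], multiplying S′ by M makes the TA terms cancel, leaving S* + E. The sketch below verifies this with toy dimensions and random small integers (illustrative only, not the authors' parameters):

```python
# Toy check of formulas (5)-(6): with S' = [I, T] and
# M = [[S* + E - T A], [A]] (blocks stacked vertically), S' M = S* + E.
import random

def rnd(r, c):
    return [[random.randint(-5, 5) for _ in range(c)] for _ in range(r)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(A, B, sgn=1):
    return [[a + sgn * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

m, k, nl = 2, 3, 4                                   # toy dimensions
S_star, E, T, A = rnd(m, nl), rnd(m, nl), rnd(m, k), rnd(k, nl)

top = madd(madd(S_star, matmul(T, A), -1), E)        # S* - T A + E
M = top + A                                          # stack the two blocks
I = [[1 if i == j else 0 for j in range(m)] for i in range(m)]
S_prime = [ri + rt for ri, rt in zip(I, T)]          # S' = [I, T]

assert matmul(S_prime, M) == madd(S_star, E)         # formula (5) holds
```

Because the identity holds symbolically, the assertion succeeds for any random choice of S*, E, T, and A.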

The new ciphertext c′ can be obtained by applying M to c*, and c′ needs to satisfy the following formula:

(7) c′ = Mc*.

3.2 The concept and operation flow of convolutional neural network

A neural network consists of many interconnected neurons [29]. A neuron first receives the input data; after a linear weighted calculation, the result is fed into the activation function, which performs a nonlinear calculation on it and uses the result as the neuron's output. CNN is a multi-layer perceptron similar to an artificial neural network. It is a commonly used deep learning model and is widely used in visual image analysis. Its specific structure is shown in Figure 2.

Figure 2: CNN structure.

As shown in Figure 2, the structure of CNN is very similar to traditional artificial neural network, especially in the last layer of the network. From the perspective of the mathematical operation process, CNN can be mainly separated into input layer, convolution layer, pooling layer, nonlinear layer, and fully connected layer. Besides, the CNN also needs to calculate the loss value in the training stage.

Starting from the input layer, the input is mainly an n × n × 3 RGB image. The input data of the convolutional neural network are usually the original image I, which can be represented by the following equation:

(8) I ∈ R^{h×w×c},

where h, w, and c represent the height, width, and number of channels of the image, respectively. The study lets P_i denote the input of each layer, where P_0 = I. The parameters of the convolutional layer are the weight coefficients a and the bias coefficients b, and fun(·) is defined as the activation function of the convolutional network. The specific calculation of the convolutional layer is shown in the following formula:

(9) P_i^l = fun(P_{i−1}^l ⊗ a^l + b^l).

Here P_i^l is the i-th channel of the output of layer l, P_{i−1}^l is the i-th channel of the input of layer l, ⊗ denotes the convolution operation, a^l is the weight coefficient vector of layer l, and b^l is the bias coefficient of layer l. The output size of the convolutional layer can be expressed as follows:

(10) W′ = (W − F_w + 2P)/S + 1,

(11) H′ = (H − F_h + 2P)/S + 1,

where W and H represent the width and height of the input image, respectively, W′ and H′ represent the width and height of the output, F_w and F_h represent the width and height of the filter, P represents the padding, and S represents the stride. The number of channels output by the convolutional layer is equal to the number of filters used in the convolution operation.
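A quick sanity check of formulas (10) and (11), assuming the stride divides the span evenly:

```python
# Sanity check of formulas (10)-(11): output size of a convolutional layer.
def conv_out(size, filt, pad, stride):
    return (size - filt + 2 * pad) // stride + 1

# e.g. a 28x28 MNIST image with a 5x5 filter, no padding, stride 1 -> 24x24
print(conv_out(28, 5, 0, 1))  # 24
```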

After the convolutional layer comes the pooling layer, an important part of the CNN. The essence of the pooling layer is downsampling: after convolution the dimension of the data grows higher and higher while the feature map changes little, and multiple consecutive convolutions generate a large number of parameters, which increases the difficulty of network training and easily causes overfitting. Therefore, a pooling layer is usually placed after the convolutional layer to compress the data, reduce its dimension, and reduce the number of parameters.

In the nonlinear layer, ReLU activation function is widely used, and the calculation formula of the ReLU activation function is shown in the following equation:

(12) fun(x) = max(0, x).

The ReLU function returns 0 for negative values in the input image, but has no effect on positive values, as shown in Figure 3.

Figure 3: ReLU activation function operation process.

The fully connected layer was the main structure of early CNNs and is the last layer of the CNN. Each node of this layer is connected to every node of the previous layer, and the features extracted by the preceding layers are integrated and mapped to the sample label space in this layer. The fully connected layer performs a weighted summation of the features output by the previous layer, feeds the result into the activation function, and finally completes the classification of the target. The calculation in this process is shown in the following equation:

(13) P_i^l = fun(W^l × P_{i−1}^l + b^l),

where P_i^l represents the i-th channel of the output of layer l, P_{i−1}^l represents the i-th channel of the input of layer l, fun(·) represents the activation function used in this layer, W^l represents the weight parameters of layer l, and b^l represents the bias parameters of layer l.

The loss function loss(·) is an important component of the CNN in the training phase. The study uses the softmax loss function to calculate the loss value, and its calculation formula is shown in the following formula:

(14) loss(P) = −log(exp(P_i) / Σ_{j=0}^{K} exp(P_j)),

where P is the input vector of the softmax layer, K is the number of label categories, i denotes the correct category, and P_i represents the i-th component of the input P.
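Formula (14) can be sketched directly (a minimal illustration, not the authors' code):

```python
# Formula (14): negative log of the softmax probability of the true class i.
import math

def softmax_loss(P, i):
    denom = sum(math.exp(pj) for pj in P)
    return -math.log(math.exp(P[i]) / denom)

# A confident, correct score vector gives a small loss; a wrong one, a large loss.
print(softmax_loss([5.0, 0.0], 0))   # small
print(softmax_loss([0.0, 5.0], 0))   # large
```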

3.3 Binary neural network algorithm model based on efficient integer vector homomorphic encryption

Binarized neural network (BNN) is a model architecture in which both weight and activation are represented by 1-bit. Its principle diagram is as shown in Figure 4.

Figure 4: Schematic diagram of binary neural network.

The BNN has very good characteristics. Compared with an FP32 floating-point neural network, it can achieve about a 32-fold memory reduction, and during inference its complex multiply-accumulate operations can be replaced by bit operations such as XNOR and popcount, which greatly speeds up the inference process of the model. BNN therefore has great potential in the field of model compression and optimization acceleration.

The research uses XNOR+ Networks to complete the model training task. In XNOR+ Networks, a binarization operation is performed on the input I and the weights, which speeds up the model calculation and reduces the memory overhead. The specific operation is shown in the following equation:

(15) x′ = Sign(x) = { +1, x ≥ 0; −1, x < 0 }.

In the training phase, the research uses the binarized I and W to approximate the convolution calculation, and its calculation formula is shown in the following equation:

(16) I ⊗ W ≈ [Sign(I) ⊛ Sign(W)] ⊙ Kα,

where ⊗ denotes the convolution operation, Sign(I) and Sign(W) denote the binarizations of I and W, respectively, ⊛ represents a convolution performed on the binarized matrices, and K and α are scaling factors that compensate for the binarization.
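The property that makes formula (16) fast in practice is that a dot product of ±1 vectors (the building block of a binarized convolution) can be computed with XNOR and popcount. This can be illustrated as follows (a toy sketch with hypothetical values):

```python
# Illustration of formulas (15)-(16): binarize with Sign, then compute the
# dot product of {-1,+1} vectors via XNOR + popcount instead of multiplies.
def sign(v):
    return [1 if x >= 0 else -1 for x in v]          # formula (15)

def binary_dot(a, b):
    """Pack +1 as bit 1 and -1 as bit 0; XNOR + popcount counts the matches."""
    n = len(a)
    pa = sum(1 << i for i, x in enumerate(a) if x == 1)
    pb = sum(1 << i for i, x in enumerate(b) if x == 1)
    matches = bin(~(pa ^ pb) & ((1 << n) - 1)).count("1")
    return 2 * matches - n                           # matches minus mismatches

a = sign([0.3, -1.2, 0.7, -0.1])
b = sign([0.5, 0.4, -0.2, -0.9])
assert binary_dot(a, b) == sum(x * y for x, y in zip(a, b))
```

In a real BNN the packed words span whole filter rows, so one machine-word XNOR replaces up to 64 floating-point multiplications, which is the source of the speedup reported in Section 4.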

The research builds a neural network structure with six hidden layers and uses batch normalization to optimize the internal functions; this method can prevent gradient diffusion. The activation function in the model is ReLU, the classification function is SoftMax, the cost function is cross entropy, and the optimization function is Adam. In the training process of the neural network, the training set is X and the test set is Y, so the label of the training set is X_label and the label of the test set is Y_label; these are put into the preset neural network model for training and testing. In the specific algorithm process, the training set X and the weight matrix ω are taken as input, and the ciphertext c and secret key S after homomorphic encryption are output. After this, the dataset needs to be applied to a neural network, a process that combines homomorphic encryption and neural networks. The input data in this process are an encrypted dataset, which ensures that the data are already in ciphertext form when used and thus protects the security of users' sensitive information. The study then optimizes the encryption algorithm: the training set X, test set Y, and weight matrix ω are used as input, and the output is the ciphertext c1 and secret key S1 of the training set X and the ciphertext c2 and secret key S2 of the test set Y.

4 Result analysis

The result analysis mainly includes the performance evaluation of the improved algorithm on the training set and a performance comparison with the traditional CNN. The model constructed in the research operates mainly on the MNIST data and uses a CNN with six hidden layers for simulation experiments. In the analysis and evaluation, the research uses training set accuracy, validation set accuracy, training time, and training period as the main evaluation indicators. The research uses XNOR+ Networks to complete the model training task. In this process, the model needs to preserve both the binarized parameters and the floating-point weights. During the forward pass, the model first binarizes the floating-point parameters and then computes the model loss with the binarized parameters. In back propagation, the descending gradient of the floating-point parameters is obtained through the model loss and the update is completed.

The original MNIST dataset was used in the experiments. The input layer was set with 784 neurons, followed by six hidden layers with 512, 256, 128, 64, 32, and 16 neurons, respectively. Ten neurons were set in the output layer to discriminate the results. For the Adam optimization algorithm, the study preset its parameters: α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 10^−8. The research carried out 200 rounds of experiments, and the results are shown in Figure 5.

Figure 5: Cost and accuracy of the original dataset: (a) cost, (b) training set accuracy, and (c) validation set accuracy.

As shown in Figure 5, the model achieves an accuracy rate of about 93.75% on the original training dataset and about 89.24% on the original validation dataset. The study then uses the encrypted training and test datasets to train the neural network for 200 epochs. If the two datasets are encrypted with different keys, the resulting matrices will not be mutually homomorphic; for this reason, the study uses the same key for encryption, and the results are shown in Figure 6.

Figure 6: Cost and accuracy of encrypted datasets (encrypted separately): (a) cost, (b) training set accuracy, and (c) validation set accuracy.

Figure 6 shows the cost comparison between the encrypted dataset and the original dataset. It can be clearly seen that the cost of the former is higher than that of the latter, and at the same time the training accuracy on the encrypted training set has decreased to 86.77%. Encrypting the data causes the dimension of the original dataset matrix to change from 784 to 785, resulting in a decrease in precision: the introduction of the encryption algorithm can be understood as adding a false feature point to each set of data, which acts as noise in the analysis. Figure 6 also shows that the validation set accuracy is lower than that on the original dataset. By the above reasoning, this may be because the training and test set data were encrypted separately at the beginning of the study, with different keys, making the encrypted data non-homomorphic internally. To improve the model, the training and test set data were then encrypted together, and the results are shown in Figure 7.

Figure 7: Cost and accuracy of encrypted datasets (co-encryption): (a) cost, (b) training set accuracy, and (c) validation set accuracy.

From Figure 7 it can be seen that the cost shows a downward trend and is lower than in the separately encrypted case; the training set accuracy reached 86.39%, and the validation set accuracy rose to 84.97%. As a way to improve the model accuracy, the study adjusted the hyperparameters, chiefly the training cycle: as supplementary training, the number of cycles was increased to 400. The process is shown in Figure 8.

Figure 8: Cost and accuracy of encrypted datasets (400 cycles): (a) cost, (b) training set accuracy, and (c) validation set accuracy.

By comparison, it can be seen that when the training progresses to about 300 cycles, the accuracy begins to converge, and the final accuracy is about 86.39%. In addition, due to the use of the ReLU activation function in the study, this will cause the state of some neurons to change from the active state to the inactive state after the operation, thus losing the activation effect. To this end, the study takes the absolute value of all elements in the encryption matrix, and the result is shown in Figure 9.

Figure 9: Cost and accuracy of encrypted datasets (absolute value): (a) cost, (b) training set accuracy, and (c) validation set accuracy.

It can be seen from Figure 9 that the cost value of the improved model is similar to that of the pre-improvement model, but the improved model performs better on the training and test sets, with accuracies of 88.79% and 85.12%, respectively. In addition, the research compares the constructed model with a traditional CNN. The results show that, with the same network structure, the research model is 58 times faster than the traditional CNN, while the storage consumption of the latter is 32 times that of the former. In terms of recognition rate, the results of XNOR+ Networks are shown in Table 1.

Table 1

Comparison between the research model and traditional CNN

CNN structure type                                  Research model    Traditional CNN
NIN structure on the CIFAR-10 dataset               87.19%            82.67%
LeNet-5 structure on the MNIST dataset              97.23%            98.36%
AlexNet structure on the ImageNet dataset (Top-1)   45.1%             53.5%
AlexNet structure on the ImageNet dataset (Top-5)   65.2%             82.4%

As shown in Table 1, the research builds the same NIN structure on the CIFAR-10 dataset, the recognition rate of the research model is 87.19%, and the recognition rate of the traditional neural network is 82.67%. The same LeNet-5 structure was built on the MNIST dataset, the recognition rate of the research model was 97.23%, and the recognition rate of the traditional neural network was 98.36%. The same AlexNet structure is built on the ImageNet dataset. The Top-1 recognition rate of the research model is 45.1%, the Top-5 recognition rate is 65.2%, and the Top-1 recognition rate of the traditional CNN is 53.5% and the Top-5 recognition rate is 82.4%. It can be seen that the binarization operation can vastly improve the computational efficiency of the neural network and reduce the storage cost of the model parameters, and can ensure certain accuracy.

5 Conclusion

With the rapid development of computer and information technology, a large number of users store large amounts of data on cloud platforms and enjoy the convenience brought by cloud computing and deep learning. However, these data often contain much sensitive information, and users may suffer losses if they are leaked. Therefore, the research introduces an efficient integer-vector homomorphic encryption algorithm into a binarized neural network model to complete the inference task on the cloud. The results of the study show that training and validation on the original MNIST dataset achieved 93.75 and 89.24% accuracy, respectively. However, the training accuracy of the model on the encrypted training set dropped to only 86.77%. In this regard, the study increased the training period, and the results showed that at about 300 cycles the accuracy began to converge, with a final accuracy of about 86.39%. In addition, the study took the absolute value of all elements in the encryption matrix, and the results show that the accuracy of the model on the training and test sets is 88.79 and 85.12%, respectively. The supplementary results show that the model can greatly reduce storage consumption during model computation and effectively improve computation speed while ensuring a certain level of accuracy. It should be noted that processing deep networks with homomorphic encryption decreases accuracy; although the accuracy remains relatively high, the loss is still undesirable in practical applications. At the same time, the time overhead is still relatively large and cannot meet real-time application requirements well; once the data become too large, the computational overhead grows geometrically. Therefore, how to improve accuracy and reduce time cost is the focus and difficulty of future research work.

  1. Funding information: The research is supported by: Major projects in Anhui Province “Anhui Province Higher Education Blockchain Technology Promotion Sub-Center”, Project no.: 2020qkl26.

  2. Conflict of interest: The authors state no conflict of interest.

  3. Data availability statement: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

[1] Zhang R, Jing X, Wu S, Jiang CX, Yu FR. Device-free wireless sensing for human detection: the deep learning perspective. IEEE Internet Things J. 2020;8(4):2517–39. doi: 10.1109/JIOT.2020.3024234.

[2] Kulynych J, Greely HT. Clinical genomics, big data, and electronic medical records: reconciling patient rights with research when privacy and science collide. J Law Biosci. 2017;4(1):94–132. doi: 10.1093/jlb/lsw061.

[3] Hua J, Shi G, Zhu H, Wang F, Liu X, Li H. CAMPS: efficient and privacy-preserving medical primary diagnosis over outsourced cloud. Inf Sci. 2020;527:560–75. doi: 10.1016/j.ins.2018.12.054.

[4] Domingo-Ferrer J, Farras O, Ribes-González J, Sánchez D. Privacy-preserving cloud computing on sensitive data: a survey of methods, products and challenges. Comput Commun. 2019;140:38–60. doi: 10.1016/j.comcom.2019.04.011.

[5] Shaheed K, Mao A, Qureshi I. A systematic review on physiological-based biometric recognition systems: current and future trends. Arch Comput Methods Eng. 2021;28(7):4917–60. doi: 10.1007/s11831-021-09560-3.

[6] Shen M, Tang X, Zhu L, Du X, Guizan M. Privacy-preserving support vector machine training over blockchain-based encrypted IoT data in smart cities. IEEE Internet Things J. 2019;6(5):7702–12. doi: 10.1109/JIOT.2019.2901840.

[7] Ke Y, Zhang MQ, Liu J, Su TT, Yang XY. Fully homomorphic encryption encapsulated difference expansion for reversible data hiding in encrypted domain. IEEE Trans Circuits Syst Video Technol. 2020;30(8):2353–65. doi: 10.1109/TCSVT.2019.2963393.

[8] Geng Y. Homomorphic encryption technology for cloud computing. Procedia Comput Sci. 2019;154:73–83. doi: 10.1016/j.procs.2019.06.012.

[9] Zhang QY, Jia YG. A speech fully homomorphic encryption scheme for DGHV based on multithreading in cloud storage. Int J Netw Secur. 2022;24(6):1042–55.

[10] Yang P, Xiong N, Ren J. Data security and privacy protection for cloud storage: a survey. IEEE Access. 2020;8:131723–40. doi: 10.1109/ACCESS.2020.3009876.

[11] Ding L, Wang Z, Wang X, Wu D. Security information transmission algorithms for IoT based on cloud computing. Comput Commun. 2020;155:32–9. doi: 10.1016/j.comcom.2020.03.010.

[12] Meftah S, Tan BH, Aung KM, Yuxiao L, Jie L, Veeravalli B. Towards high performance homomorphic encryption for inference tasks on CPU: an MPI approach. Future Gener Comput Syst. 2022;134:13–21. doi: 10.1016/j.future.2022.03.033.

[13] Yang K, Han Q, Li H. An efficient and fine-grained big data access control scheme with privacy-preserving policy. IEEE Internet Things J. 2016;4(2):563–71. doi: 10.1109/JIOT.2016.2571718.

[14] Li J, Huang Y, Wei Y, Lv S, Liu Z, Dong C, et al. Searchable symmetric encryption with forward search privacy. IEEE Trans Dependable Secure Comput. 2019;18(1):460–74. doi: 10.1109/TDSC.2019.2894411.

[15] Liu S, Yu J, Xiao Y. BC-SABE: blockchain-aided searchable attribute-based encryption for cloud-IoT. IEEE Internet Things J. 2020;7(9):7851–67. doi: 10.1109/JIOT.2020.2993231.

[16] Chinnasamy P, Deepalakshmi P, Dutta AK. Ciphertext-policy attribute-based encryption for cloud storage: toward data privacy and authentication in AI-enabled IoT system. Mathematics. 2021;10(1):1–24. doi: 10.3390/math10010068.

[17] Xiong J, Bi R, Zhao M, Gao J, Yang Q. Edge-assisted privacy-preserving raw data sharing framework for connected autonomous vehicles. IEEE Wirel Commun. 2020;27(3):24–30. doi: 10.1109/MWC.001.1900463.

[18] Kalpana G, Kumar PV, Aljawarneh S, Krishnaiah RV. Shifted adaption homomorphism encryption for mobile and cloud learning. Comput Electr Eng. 2018;65:178–95. doi: 10.1016/j.compeleceng.2017.05.022.

[19] Brakerski Z, Doettling N, Garg S. Candidate iO from homomorphic encryption schemes. Advances in Cryptology–EUROCRYPT; 2020. p. 79–109. doi: 10.1007/978-3-030-45721-1_4.

[20] Cheon JH, Kim J. A hybrid scheme of public-key encryption and somewhat homomorphic encryption. IEEE Trans Inf Forensics Secur. 2015;10(5):1052–63. doi: 10.1109/TIFS.2015.2398359.

[21] Blatt M, Gusev A, Polyakov Y, Goldwasser S. Secure large-scale genome-wide association studies using homomorphic encryption. Proc Natl Acad Sci. 2020;117(21):11608–13. doi: 10.1073/pnas.1918257117.

[22] Gomez-Barrero M, Maiorana E, Galbally J, Campisi P, Fierrez J. Multi-biometric template protection based on homomorphic encryption. Pattern Recognit. 2017;67:149–63. doi: 10.1016/j.patcog.2017.01.024.

[23] Cousins DB, Rohloff K, Sumorok D. Designing an FPGA-accelerated homomorphic encryption co-processor. IEEE Trans Emerg Top Comput. 2016;5(2):193–206. doi: 10.1109/TETC.2016.2619669.

[24] Min Z, Yang G, Sangaiah AK, Bai SJ, Liu GX. A privacy protection-oriented parallel fully homomorphic encryption algorithm in cyber physical systems. EURASIP J Wirel Commun Netw. 2019;2019(1):1–14. doi: 10.1186/s13638-018-1317-9.

[25] Mert AC, Öztürk E, Savaş E. Design and implementation of encryption/decryption architectures for BFV homomorphic encryption scheme. IEEE Trans Very Large Scale Integr (VLSI) Syst. 2019;28(2):353–62. doi: 10.1109/TVLSI.2019.2943127.

[26] Li B, Micciancio D. On the security of homomorphic encryption on approximate numbers. In Annual International Conference on the Theory and Applications of Cryptographic Techniques; 2021. p. 648–77. doi: 10.1007/978-3-030-77870-5_23.

[27] Masuda M, Kameyama Y. FFT program generation for ring LWE-based cryptography. In International Workshop on Security; 2021. p. 151–71. doi: 10.1007/978-3-030-85987-9_9.

[28] Gentry C, Halevi S, Lyubashevsky V. Practical non-interactive publicly verifiable secret sharing with thousands of parties. In Annual International Conference on the Theory and Applications of Cryptographic Techniques; 2022. p. 458–87. doi: 10.1007/978-3-031-06944-4_16.

[29] Gupta A, Salau AO, Chaturvedi P, Akinola SA, Nwulu NI. Notice of violation of IEEE publication principles; artificial neural networks: its techniques and applications to forecasting. In 2019 International Conference on Automation, Computational and Technology Management (ICACTM); 2019. p. 320–4. doi: 10.1109/ICACTM.2019.8776701.

Received: 2022-12-05
Revised: 2023-01-09
Accepted: 2023-01-31
Published Online: 2023-03-25

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
