Article Open Access

The application of graphic language in animation visual guidance system under intelligent environment

  • Luning Zhao
Published/Copyright: August 26, 2022

Abstract

With the continuous development of society, the role of the visual guidance system in animation design has evolved over its long history, shaping the values of modern aesthetics. In the field of modern social and cultural design, the visual guidance system in animation design carries a distinct regional character and cultural influence. Visual language should correspond to its visual environment and be easy for people to understand and recognize. It combines animation conception and design technology to capture the cultural charm, aesthetics, values, and behavioral norms of people in different fields. This article studies and analyzes the visual orientation of graphic language in the design of the animation visual guidance system and incorporates orientation-bearing graphic language into animation design, so that the design better reflects the characteristics of the times, adapts to emerging media, and better conveys information between the enterprise and the audience. To further understand the audience's preferences among elements of graphic expression, this article analyzes respondents' subjective perceptions of the importance of color selection, calligraphy fonts, graphic expression, and modeling meaning. The results show that respondents aged 21–35 paid the most attention to the choice of graphic colors, with the highest count being 69.

1 Introduction

Human life is inextricably linked to graphics, which can be seen everywhere. Graphics originated very early, in primitive society. At that time, social development was not yet mature and human beings had not created writing, so graphics became a means of memory and information transmission. The generation of graphics can be said to be based on the human desire to communicate emotion and memory, and can also be considered a basic instinctive need of human beings. At the same time, graphics recorded the humanities and history of their era. In the later period of primitive society, graphics not only carried information but also gradually became an indispensable part of people's daily life. With the continuous deepening of the industrial process, the invention of movable-type printing and papermaking further stimulated the rapid development of graphics. Papermaking, in particular, allowed images and words to be better preserved and displayed, while printing greatly increased the speed and breadth of culture and knowledge dissemination. By the 19th century, color printing and graphic design had developed. In the 21st century, with the continuous progress of science and technology, the development of computers, and the popularization of digital and network technologies, graphics are no longer limited to decoration, recording, and communication, but also reflect the innovation of human thinking.

As a product of intuitive visual art, animation can give the audience a strong visual impact through the expression of visual language, with the shape of the image conveying form and structure. Modern life is inseparable from graphic language, which is not only art but also a fusion of science, technology, culture, economy, and management. Since the 21st century, visual communication has gradually become an important means of communication in the information society [1,2]. The emergence of new media has affected the communication of visual culture, the form and characteristics of visual language, and the content and mode of media communication. The visual orientation effect has an important influence on the public experience. In new network media, designers, by virtue of their design credibility, promote interaction between individuals who have never communicated, such as mutual attention between individuals on social media. At the same time, a large number of design resources are constantly distributed, resulting in information overload: designers not only develop their own ideas but also collect and store large quantities of material, generating expansive ideas and assembling modern graphic compositions. Therefore, in new media, visual language communicators and visual language receivers can interact instantly. Ideas, emotions, and information are conveyed through intuitive visual images, which are the most intuitive and meaningful symbols.

This article consulted most of the relevant materials through the Internet and the library, summarized the existing research results, and sorted out the relevant theoretical framework, which provided a good theoretical reserve for the smooth development of this research. In the graphic language, the performance of visual and spatial thinking is closely related to the transmission of vision and information. This article discusses the expressive meaning of graphic language in the animation visual system from the perspective of visual psychology, so that the creator and the audience can better transmit information.

2 Related work

Most information systems research tends to focus on issues such as visual symbols, fonts, and language markers, or on specific areas such as readability. However, the information guidance system of a subway station is ultimately based on the movement of the tourists using the station. Song and Choi studied the Seoul subway station information guide, showing its systemic importance within the information guidance system by separately assessing basic principles, graphic rules, layout, and installation. To characterize the Tokyo Metro information guidance system, they identified the ideal characteristics of a subway information guidance system. Their research found that it excludes functional language and connects people through color and space, with a very detailed and strategic structural system [3]. Lee et al. described a web-based wireless vision-guided system that alleviates the problems associated with hard-wired audio-visually assisted patient interactive motion management systems, which are cumbersome to use in routine clinical practice. The web-based wireless visual display replicates existing visual displays for visually guided respiratory motion management systems. In this study, active breathing coordinator (ABC) tracking was used as input to a visual display, captured, and transmitted to a web client. Turning to information graphics: by combining data content with visual decoration, infographics can effectively convey information in an engaging and memorable way. However, creating professional infographics with existing authoring tools is still not easy and requires considerable time and design expertise. As a result, these tools are often unattractive to casual users, who are either unwilling to invest the time to learn them or lack the design expertise to create professional infographics [4]. Cui et al. explored a way to automatically generate infographics from natural language sentences. First, they conducted a preliminary study of the design space of infographics. Building on this initial research, they built a proof-of-concept system that automatically converts simple proportion-related statistics into a set of infographics with predesigned styles. Finally, the usability and usefulness of the system were demonstrated through example results, exhibits, and expert reviews; however, the approach has not yet been applied at large scale [5].

As with children who rely on spoken language, speech-language pathologists must support and track the development of expressive language in children with complex communication needs who communicate using graphic symbols. Binger presented a framework for expressive English sentence development using graphic symbols and introduced possible methods for measuring and analyzing the use of graphic symbols. He discussed current problems in measuring pictographic discourse and proposed a series of measures for analyzing individual pictographic utterances and larger samples of utterances. The pictographic speech and sentence development framework and recommended measures are based on years of pictographic intervention research, including two large studies of preschool children with severe speech impairment. His framework describes the development of expressive language from early sign assemblages to sentences in children and adults and highlights developmental patterns specific to graphic sign making [6]. Rendering in virtual reality (VR) requires a great deal of computing power, producing 90 frames per second at high resolution with good antialiasing. Video data sent to VR headsets require high bandwidth and can be delivered only over dedicated links. Denes et al. explained how to reduce rendering requirements and transfer bandwidth using conceptually simple techniques that integrate well with existing rendering pipelines. Each even-numbered frame is rendered at a lower resolution, and each odd-numbered frame remains high resolution but is modified to compensate for the loss of the previous frame's high spatial frequencies. The technique relies on the limited ability of the visual system to perceive high spatiotemporal frequencies. Despite the simplicity of its concept, the correct implementation of this technique requires many nontrivial steps: the display's photometric time response must be modeled, flickering and motion artifacts must be avoided, and the resulting signal must not exceed the dynamic range of the display [7].

Once an image is decomposed into many visual primitives, e.g., local interest points or regions, it is of great interest to discover meaningful visual patterns from them. However, traditional clustering of visual primitives usually ignores the spatial and feature structures between them, thus failing to discover high-level visual patterns of complex structure. To overcome this problem, Wang et al. proposed considering the spatial and feature context between visual primitives for pattern recognition. By discovering spatial co-occurrence patterns between visual primitives and feature co-occurrence patterns between different types of features, this method can better resolve the ambiguity of clustering visual primitives. The pattern discovery problem is formulated as regularized k-means clustering with spatial and feature context as constraints to improve pattern discovery results. The idea of k-means clustering is to assign each sample to the class of its nearest mean. They proposed a novel self-learning process that progressively refines clustering results using the discovered spatial or feature patterns. This self-learning process guarantees convergence, and experiments on real images verify the effectiveness of the method [8]. Because the back-propagation (BP) neural network easily falls into local minima and converges slowly in gesture recognition, Li et al. proposed a gesture recognition method combining a chaos algorithm and a genetic algorithm. The chaos algorithm is a chaotic sequence encryption algorithm: a one-way hash function first hashes the key to the initial value of the chaotic map, a chaotic sequence is taken after several iterations, the sequence values generated by the iterations are mapped to ASCII codes, and a byte-by-byte XOR operation is then performed with the mapped data. Based on the ergodicity of the chaotic algorithm and the global convergence of the genetic algorithm, they encoded the weights and thresholds of the BP neural network. They used the genetic algorithm to obtain an approximate optimal solution and then, by adding chaotic perturbation, refined it to the exact optimal solution. Simulation and experimental results show that the chaos genetic algorithm (CGA) greatly improves the real-time performance and accuracy of gesture recognition [9]. The previous studies provided a detailed analysis of the application of visual guidance systems and graphic languages. It is undeniable that these studies have greatly promoted the development of the corresponding fields, and much can be learned from their methodology and data analysis. However, there are relatively few studies on animation vision systems in the field of intelligent environments, and it is necessary to fully apply these algorithms to research in this field.

3 The application of graphic language in animation visual guidance system in intelligent environment

As an animation language, the graphics in the whole image are no longer arranged merely according to physical-form relationships but according to the structure of the picture. Depending on the composition, an image can be colorful, or simple and clear, or rendered with no color at all, using only line drawing [10]. Adjusting the hierarchy of graphics and their harmony with the other elements of the image is the key to how animation affects people's feelings. Studies have shown that different images evoke different emotions and mental states, and that at different times the same image can evoke different feelings. The image itself carries rich meaning. The feeling of an image largely means that its visual attributes stimulate people's vision and elicit different emotional expressions. Research on image perceptual semantics mainly involves extracting visual image features, mapping image emotion, and so on [11]. The basic framework is shown in Figure 1.

Figure 1: Image semantic extraction framework.

As shown in Figure 1, research on image perceptual semantics should first create a suitable image library and build a corresponding feature library by extracting features from the images [12,13,14]. A specific mapping mechanism is then used to create the emotional space corresponding to the images. The mapping method here can be a neural network, genetic algorithm, support vector machine, and so on. Among them, neural networks are highly parallel and adaptable and can be applied to many fields such as control, information, and prediction, so this article analyzes the retrieval of graphic language using neural networks. The emotional attributes of images are obtained through the emotional space corresponding to the image library and are used to sort and retrieve images [15].

Computer animation is a tower-like two-layer structure composed of logical scenes, visual scenes, and various elements. Each computer animation consists of one or more logical scenes in a side-by-side relationship; the level created by these logical scenes is the first layer. Each logical scene in turn contains one or more visual scenes, also in a side-by-side relationship; the sequence created by these visual scenes is called the second layer, as shown in Figure 2 [16].

Figure 2: Two-story tower structure for visual animation.

After the computer animation image library is established, the images in it must be preprocessed [17,18]. After preprocessing is complete, the visual features of the images are extracted. To facilitate batch extraction of visual features, it must be ensured that the images in the library can be opened and stored normally; here, the images in the library are uniformly stored in JPEG format [19].
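As an illustration of this normalization step, the sketch below converts every readable image in a folder to JPEG. It assumes the third-party Pillow library; the directory names, quality setting, and function name are illustrative, not part of the original system.

```python
# Minimal sketch: normalize an image library to JPEG so every file can be
# opened and stored uniformly (assumes Pillow; paths are illustrative).
from pathlib import Path

from PIL import Image


def normalize_to_jpeg(src_dir: str, dst_dir: str) -> list:
    """Convert every readable image in src_dir to a JPEG file in dst_dir."""
    converted = []
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).iterdir()):
        try:
            img = Image.open(path).convert("RGB")  # JPEG stores no alpha
        except Exception:
            continue  # skip files that are not readable images
        target = dst / (path.stem + ".jpg")
        img.save(target, format="JPEG", quality=95)
        converted.append(str(target))
    return converted
```

Converting to a single format up front keeps later feature extraction from having to branch on file type.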

The color of a graphic is the most intuitive feature of an image and the visual factor that most affects human emotional characteristics. Therefore, this article first selects the most intuitive color model for study. Color features are among the most widely studied image features across many fields [20]. Relatively simple color parameters include the primary color of the image, the average color, and the color histogram; more complex methods include color moments, local histogram algorithms, and color correlation indices.
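A minimal sketch of one of the simpler features named above, the color histogram: each channel of an (R, G, B) pixel is quantized into `bins` levels, giving a normalized histogram of bins³ cells. The quantization granularity is an illustrative choice, not a value from the article.

```python
# Coarse color histogram over (R, G, B) pixel triples with 8-bit channels.
def color_histogram(pixels, bins=4):
    hist = [0] * (bins ** 3)
    step = 256 // bins  # width of one quantization level per channel
    for r, g, b in pixels:
        # Combine the three quantized channel indices into one cell index.
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [count / total for count in hist]  # normalized frequencies
```

Because the histogram is normalized, images of different sizes become directly comparable.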

The color appearance of an image has a significant impact on its emotion and is the most direct factor by which people judge the emotional character of an image. Different colors evoke different emotional responses [21]. For example, red represents happiness, joy, and bliss; green represents peace and strength; and blue represents calmness, wisdom, and depth. Correct extraction of an image's color characteristics is the basis for evaluating its emotional characteristics.

The selection of the color space is very important to color features and directly affects the human eye's basic judgment of, and emotional response to, color. Therefore, determining the color space is the basis of color feature extraction. Common color spaces include RGB, HSV, Luv, and Lab, which represent colors from different perspectives [22]. There is no universal template for color space selection, but because the choice of space affects the extracted color features, a selection criterion is needed. Since color is part of the visual effect, the criterion should be sought from a visual perspective. In this context, visual consistency is generally used as the indicator for color space selection, so that the selected space is closer to human color perception and better reflects the influence of color features on emotional characteristics [23]. So-called visual consistency refers to the relationship between perceived difference and distance in the selected space: when the visual difference between two colors is large, the distance between them is also large; when the visual difference is small, the distance is also small. A space satisfying this property is called a visually consistent color space and is an excellent space for reflecting visual characteristics from the perspective of color features [24].

Graphics are commonly described in the RGB color space, a three-dimensional space model with red (R), green (G), and blue (B) as the three primary colors. The model resembles a cube in a Cartesian coordinate system, as shown in Figure 3.

Figure 3: RGB color space.

In the RGB color space, the origin is black, that is, the position where the brightness of all three primary colors is zero. The vertex of the cube diagonally opposite the origin is white, the position where all three primary colors are at maximum brightness. The diagonal from black to white represents the gradation of gray, and the remaining points are ordinary colors. Any color occurring in nature can find a matching point in this color space. Here, the distance between colors, such as red (255, 0, 0), magenta (255, 0, 255), and blue (0, 0, 255), can be found using the Euclidean distance:

(1) d = sqrt((x1 − x2)^2 + (y1 − y2)^2 + (z1 − z2)^2).

The distance d1 = 255 between red and magenta and the distance d2 = 255 between blue and magenta can be calculated; obviously, these two distances are equal [25].
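Equation (1) can be checked directly in code; the short sketch below computes the Euclidean distance between two RGB triples and reproduces d1 = d2 = 255 for the example colors above.

```python
import math


def rgb_distance(c1, c2):
    """Euclidean distance between two RGB triples, as in eq. (1)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
```

With red (255, 0, 0), magenta (255, 0, 255), and blue (0, 0, 255), both red-to-magenta and blue-to-magenta distances come out to 255, matching the text.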

The RGB color space model is commonly used in applications such as camera systems and monitors. However, it does not represent color in a perceptually consistent way: for ordinary people without color expertise, RGB values cannot be used to judge colors intuitively, so the model does not conform to the principles of human color perception [26]. It was therefore necessary to find a better color space. The HSV color space model is visually consistent and better suited to the human visual system than the RGB model. HSV is an intuitive color model for the user, reflecting the intuitive ideas of saturation and brightness as separate parameters. The HSV color space also has three dimensions, representing three different color attributes: hue (H), saturation (S), and value, i.e., brightness (V). The HSV model looks like an inverted cone, as shown in Figure 4.

Figure 4: HSV color model.

In the HSV color space, H represents the type of color, such as red, yellow, or green, measured as an angle around the circular base of the cone and ranging from 0° to 360°. S represents the depth of the color, such as dark red versus light red, measured as a percentage of the radius outward from the central axis, ranging from 0 to 100%: from no saturation at 0% to full saturation at 100%. V represents the lightness or darkness of the color, measured as a percentage of the distance from the apex of the cone to its base, from 0 to 100%, meaning the brightness changes from darkest at 0% to brightest at 100% [27]. In the HSV space, the human eye perceives the three components independently and can detect changes in each component independently. Euclidean distances between the three color components correspond approximately linearly to the color differences perceived by the human eye, which is more in line with the human visual system. Therefore, the HSV color space model is chosen to reflect the visual characteristics of animated graphics [28].
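The RGB-to-HSV conversion can be sketched with Python's standard colorsys module; scaling hue to degrees and S and V to percentages follows the ranges described above. The wrapper name is illustrative.

```python
import colorsys


def rgb255_to_hsv(r, g, b):
    """Map 8-bit RGB to (hue in degrees, saturation %, value %)."""
    # colorsys works on channels in [0, 1] and returns h, s, v in [0, 1].
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0
```

Pure red maps to hue 0° at full saturation and brightness, while any gray maps to zero saturation, matching the geometric description of the cone.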

Because computer animations often have black borders during playback, logical scene node images and visual scene representative frame images also have black borders. Graphic libraries created from such images will directly affect the evaluation of the emotional and visual features. Using the color extraction method, the color feature values of an image before and after removing the black borders are compared, as presented in Table 1 [29]. The comparison shows that the primary color changes significantly after black-edge removal, and the average color and local color also change to some extent. Since color has a significant impact on emotional behavior, even a small color change can lead to a large change in emotion, so the black borders of images in the image library must be removed.

Table 1

Comparison of color eigenvalues before and after black edge removal in images

Name                                    Before removing black edges   After removing black edges
Quantized primary color and percentage  (0, 0, 0, 0.210)              (10, 3, 1, 0.154)
Average color                           (125.12, 0.528, 0.754)        (116.24, 0.654, 0.748)
Local color                             (132.52, 0.521, 0.452)        (174.58, 0.635, 0.581)
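The black-edge removal step discussed above can be sketched as follows. This is an illustrative implementation, not the article's: `frame` is assumed to be a row-major grid of (R, G, B) tuples, and `tol` is an assumed darkness threshold below which a pixel counts as black.

```python
# Trim rows and columns that consist entirely of (near-)black pixels.
def trim_black_borders(frame, tol=10):
    def is_black_row(row):
        return all(max(px) <= tol for px in row)

    rows = list(frame)
    # Trim fully black rows from the top and bottom.
    while rows and is_black_row(rows[0]):
        rows.pop(0)
    while rows and is_black_row(rows[-1]):
        rows.pop()
    if not rows:
        return []
    # Keep only the column range that contains at least one non-black pixel.
    width = len(rows[0])
    lit = [c for c in range(width)
           if not all(max(rows[r][c]) <= tol for r in range(len(rows)))]
    left, right = lit[0], lit[-1]
    return [row[left:right + 1] for row in rows]
```

Running feature extraction on the trimmed frame avoids the black border skewing the primary-color statistics, as Table 1 illustrates.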

Based on the perception of space, visual orientation uses people's knowledge of three-dimensional space to help them get from where they are to where they want to go. The three principles of a visual orientation system are wholeness, logic, and visibility. The visual guidance system integrates the environmental information of the entire site, makes a plan, and proceeds step by step [30].

The visual features of an image are the parametric features extracted from its visual appearance, including color, texture, and shape; these features have the greatest impact on the human visual impression. When the human eye sees an image, the optic nerve passes the received signal to the brain, which interprets the content and meaning of the image based on previous visual experience and common sense. The process of visual cognition is shown in Figure 5 [31].

Figure 5: Visual cognitive process of pictures.

As shown in the figure, an image's visual features are direct factors in its appearance and have a significant impact on perception. A neural network is a self-training mapping mechanism through which more accurate knowledge can be obtained by training on available data. The neural network takes the visual features of the input image, finds the best relationship between features and emotion, and retains that relationship as a general mapping for all data, so that the visual features of an image can be mapped to emotional feature categories [32,33].

A neural network is a network composed of a large number of neurons, a model created by abstracting basic characteristics of the brain. Neural networks reflect two basic features of information processing in the human brain. First, a neural network is a complex network with many interconnected units. Second, how the units are connected determines how information is processed.

The function of the artificial neural network is mainly reflected in the topology system and weight of the neural network. The weights of the network represent the information storage of the network. Through the training of samples, the neural network continuously adjusts parameters such as connection weights to make the network development closer to the expected results. This process is the network training process.

There are mainly two types of artificial neurons: single-input neurons and multiple-input neurons.

  1. Single input neuron

A single-input neuron model is shown in Figure 6(a), where s is the input signal, ω is the connection weight, θ is the bias value, and f is the transfer function. The input signal s is multiplied by the connection weight ω to obtain ωs, which is sent to the accumulator; another input, 1, is multiplied by the bias value θ, and 1·θ is also sent to the accumulator. The accumulator computes the output q, called the net input; q is then passed to the transfer function f, which produces the neuron's output t. The neuron's calculation is given as follows:

(2) q = ωs + θ,

(3) t = f(q).

Figure 6: Neuron model. (a) Single-input neuron model; (b) neuron model with p input signals.

The bias value θ and the connection weight ω are adjustable parameters. In the actual design process, an appropriate transfer function f and appropriate parameters θ and ω can be selected to achieve the most satisfactory results.
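Equations (2) and (3) can be written directly as a short function; the identity and sigmoid transfer functions used here are illustrative choices for f.

```python
import math


def single_input_neuron(s, w, theta, f):
    q = w * s + theta  # net input, eq. (2)
    return f(q)        # output through the transfer function, eq. (3)


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))
```

With w = 0.5, θ = −1, and input s = 2, the net input q is 0, so the identity transfer function yields 0 and the sigmoid yields 0.5.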

  2. Multiple-input neurons

Usually, a neuron has several inputs, and the neuron’s processing of information is nonlinear. A neuron with p input signals is shown in Figure 6(b).

Among them, the inputs of the neuron are s1, s2, …, sp, and the corresponding weights are ω11, ω12, …, ω1p. θ is the bias value, and the net input q equals the sum of the products of all input values and their weights, plus the bias value θ:

(4) q = Σ(i = 1 to p) ω1i·si + θ,

q is sent to the transfer function f of the neuron, and the output t of the neuron is obtained through a nonlinear calculation:

(5) t = f(q) = f(Σ(i = 1 to p) ω1i·si + θ).
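Equations (4) and (5) generalize the single-input case to p inputs; a minimal sketch, with the transfer function passed in as before:

```python
def neuron(inputs, weights, theta, f):
    """Multi-input neuron: weighted sum plus bias, then transfer function."""
    q = sum(w * s for w, s in zip(weights, inputs)) + theta  # eq. (4)
    return f(q)                                              # eq. (5)
```

The single-input neuron is just the p = 1 special case of this function.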

An artificial neural network consists of many neurons, and its topology usually takes one of two forms: the hierarchical neural network and the interconnected neural network. Hierarchical networks can be divided into several types, such as the simple forward network and the feedback-forward network. A hierarchical network divides all neurons into an input layer, intermediate layers, and an output layer according to their functions, with connections between successive layers. Because the intermediate layers do not interact directly with the external input and output, they are also called hidden layers. Depending on the mode of operation, there may be several hidden layers (generally no more than two) or none at all.

  1. Simple forward network

    In a simple forward network, each neuron is connected only to the neurons of the previous layer; the neurons of each layer receive input only from the previous layer, and later neurons do not send signals back to earlier layers. The perceptron network, RBF network, and BP network all belong to the forward networks.

  2. Feedback forward network

In a feedback-forward network, the output layer feeds signals back toward the input layer after the input is presented, and such a network can be used to process specific sequences. Neurocognitive machines and recurrent BP networks fall into this category. The structural model is shown in Figure 7.

Figure 7: Feedback-forward network structure.

The BP neural network has become the most widely used neural network because of its strong self-training ability and other superior characteristics. Its basic idea is the gradient descent method, which uses gradient search to minimize the mean squared error between the network's actual output and its desired output. The BP network has three notable features:

  1. Nonlinear mapping ability: provided enough samples are supplied for network training, a BP network can learn a nonlinear mapping from n-dimensional input to m-dimensional output.

  2. Classification ability: neural networks have a strong ability to discriminate between input samples. In practical problems, class boundaries are often complex: similar samples may belong to different classes, while samples far apart may belong to the same class. Traditional methods have limited classification ability in such cases, whereas neural networks can handle nonlinear decision boundaries and therefore have a strong ability to distinguish and identify samples.

  3. Optimization calculation: finding the parameter combination that optimizes a function's performance under given conditions. A neural network can be configured for this as follows: the objective function is used as the neural network's transfer function, and the parameters are used as the network's state variables; the parameter values when the network settles into a steady state constitute the solution to the problem.
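The gradient-descent idea underlying BP training can be illustrated on the simplest possible case: a single linear neuron fitted to data by repeatedly stepping down the gradient of the mean squared error. The learning rate, epoch count, and sample data below are assumed for illustration only.

```python
# Toy gradient descent on MSE for one linear neuron t = w*s + b.
def train_linear_neuron(samples, lr=0.1, epochs=2000):
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        dw = db = 0.0
        for s, target in samples:
            err = (w * s + b) - target  # signed output error
            dw += 2.0 * err * s / n     # d(MSE)/dw
            db += 2.0 * err / n         # d(MSE)/db
        w -= lr * dw                    # step against the gradient
        b -= lr * db
    return w, b
```

On samples drawn from t = 2s + 1, the weight and bias converge to approximately 2 and 1, which is the minimum of the error surface.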

When using the BP network for training and learning, a variety of key functions must be used, as follows.

  1. Logarithmic sigmoid function

    The logarithmic sigmoid function is also called the sigmoid logarithmic function. The value range of the function is (0, 1), and the function is expressed as follows:

    (6) f(x) = 1/(1 + e^(−x)).

  2. Tangent sigmoid function

    The tangent sigmoid function is also called the sigmoid tangent function. The value range of the function is (−1, 1), which is suitable for data with both negative and positive values. This transfer function can be selected after normalizing the data. The function expression is given as follows:

    (7) f(x) = tanh(x).

  3. Linear function

The function expression of the linear function is given as follows:

(8) f(x) = kx.

The value range of the function is the entire real number line, which is suitable for cases where the output can be positive or negative with unbounded range; it is mostly used in the output layer to extend the range of the network's output.
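The three transfer functions of equations (6)-(8) translate compactly into code:

```python
import math


def log_sigmoid(x):
    """Eq. (6): logarithmic sigmoid, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))


def tan_sigmoid(x):
    """Eq. (7): tangent sigmoid (tanh), output in (-1, 1)."""
    return math.tanh(x)


def linear(x, k=1.0):
    """Eq. (8): linear transfer function, unbounded output."""
    return k * x
```

The choice between them follows the text: log-sigmoid for outputs bounded in (0, 1), tanh for normalized data with both signs, and the linear function for unbounded output layers.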

The conventional BP learning algorithm has certain defects, so it is often improved. There are two categories of improvements. One is heuristic learning methods, such as the learning algorithm with a momentum factor (traingdm function), the variable learning rate algorithm (traingda function), and the "resilient" learning algorithm (trainrp function). The other category uses more efficient numerical optimization methods, such as conjugate gradient learning algorithms (including the traincgf, traincgp, traincgb, and trainscg functions), quasi-Newton algorithms (including the trainbfg and trainoss functions), and the Levenberg–Marquardt optimization method (trainlm function).

Different BP learning functions should be selected for different problems. Table 2 compares several typical learning functions. (Convergence performance and storage footprint are ranked with Arabic numerals: 1 denotes the fastest convergence or the largest storage footprint, and higher numbers denote progressively slower convergence or smaller footprints.)

Table 2

Comparison of several typical learning functions

Learning function Use question type Convergence performance Occupied storage space
Trainscg Function fitting, pattern classification 3 3
Trainrp Pattern classification 1 4
Trainlm Function fitting 2 1
Trainbfg Function fitting 3 2
Traingdx Pattern classification 4 4

The performance function of a BP network is the standard by which the training effect of the network is evaluated; it is computed from the network's error signal, which is fed back during the learning process. Three performance functions are commonly used:

  1. Mean square error

    (9) MSE = (1/n) Σ_{i=1}^{n} (r_i − a_i)^2.

  2. Mean absolute error

    (10) MAE = (1/n) Σ_{i=1}^{n} |r_i − a_i|.

  3. Error sum of squares

(11) SSE = Σ_{i=1}^{n} (r_i − a_i)^2.

where n is the number of output units, a_i is the actual value of the ith output unit, and r_i is the target value of the ith output unit.
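The three performance functions of Eqs. (9)–(11) can be sketched directly from their definitions; the target and output vectors below are illustrative.

```python
# r holds the target values r_i; a holds the actual network outputs a_i.

def mse(r, a):
    """Mean square error, Eq. (9)."""
    return sum((ri - ai) ** 2 for ri, ai in zip(r, a)) / len(r)

def mae(r, a):
    """Mean absolute error, Eq. (10)."""
    return sum(abs(ri - ai) for ri, ai in zip(r, a)) / len(r)

def sse(r, a):
    """Error sum of squares, Eq. (11); note SSE = n * MSE."""
    return sum((ri - ai) ** 2 for ri, ai in zip(r, a))

targets = [1.0, 0.0, 1.0, 0.0]
outputs = [0.9, 0.2, 0.8, 0.1]
print(mse(targets, outputs))  # ≈ 0.025
print(mae(targets, outputs))  # ≈ 0.15
print(sse(targets, outputs))  # ≈ 0.1
```

MSE and SSE penalize large individual errors more heavily than MAE, which is why MSE is the usual default for BP training feedback.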

The following is a brief introduction to the learning process of the BP network:

In the interval (−1, 1), random values are assigned to each connection weight ω_ij and v_jr and to each threshold T_j and γ_r. A set of input and target samples S_k = (s_1^k, s_2^k, ..., s_n^k) and V_k = (v_1^k, v_2^k, ..., v_n^k) is randomly selected and provided to the network.

The input sample S_k = (s_1^k, s_2^k, ..., s_n^k), connection weights ω_ij, and thresholds T_j are used to calculate the input p_j of each unit in the middle layer; p_j is then passed through the transfer function to obtain the output c_j of each middle-layer unit.

(12) p_j = Σ_{i=1}^{n} ω_ij s_i − T_j, j = 1, 2, …, n,

(13) c_j = f(p_j), j = 1, 2, …, n.

The output L_r of each output-layer unit is calculated from the middle-layer output c_j, connection weight v_jr, and threshold γ_r; the response C_r of each output-layer unit is then obtained through the transfer function.

(14) L_r = Σ_{j=1}^{m} v_jr c_j − γ_r, r = 1, 2, …, m,

(15) C_r = f(L_r), r = 1, 2, …, m.

By using the network target vector V_k = (v_1^k, v_2^k, ..., v_n^k) and the actual output C_r of the network, the generalized error d_r^k of each output-layer unit is calculated.

(16) d_r^k = (v_r^k − C_r) C_r (1 − C_r), r = 1, 2, …, m.

The connection weights v_jr, the generalized error d_r^k of the output layer, and the output c_j of the middle layer are used to calculate the generalized error e_j^k of each unit in the middle layer.

(17) e_j^k = (Σ_{r=1}^{m} d_r^k v_jr) c_j (1 − c_j).

The generalized error d_r^k of the output layer and the output c_j of the intermediate layer are used to correct the connection weight v_jr and the threshold γ_r; the formulas are expressed as follows (r = 1, 2, …, m; j = 1, 2, …, n; 0 < α < 1):

(18) v_jr(N + 1) = v_jr(N) + α d_r^k c_j,

(19) γ_r(N + 1) = γ_r(N) + α d_r^k.

The generalized error e_j^k of each middle-layer unit and the input S_k = (s_1^k, s_2^k, …, s_n^k) of the input layer are used to modify the connection weight ω_ij and the threshold T_j (i = 1, 2, …, n; j = 1, 2, …, n; 0 < β < 1); the formulas are expressed as follows:

(20) ω_ij(N + 1) = ω_ij(N) + β e_j^k s_i^k,

(21) T_j(N + 1) = T_j(N) + β e_j^k.

The next training sample vector is randomly selected and the procedure returns to formula (12), until all m training samples have been used.

Input and output samples are then randomly reselected from the m training samples and the procedure returns to formula (12), until the total network error E falls below a predetermined value, that is, until the network converges; if the number of training iterations exceeds the specified maximum, the network is judged not to converge. Because the error is propagated back layer by layer while the weights and thresholds between layers are adjusted, the algorithm is called the error backpropagation algorithm. This error-correction scheme can be extended to networks with multiple intermediate layers.
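The per-sample update of Eqs. (12)–(21) can be sketched as follows. The network sizes, learning rates, and single training pair are illustrative; because the thresholds T_j and γ_r enter Eqs. (12) and (14) with a minus sign, their descent updates below carry a minus sign (some textbook statements absorb this sign into the threshold definition).

```python
import math, random

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

def bp_step(s, target, W, T, V, gamma, alpha=0.5, beta=0.5):
    """One BP correction for an n-input, p-hidden, m-output network."""
    n, p, m = len(s), len(T), len(gamma)
    # Eqs. (12)-(13): hidden-layer net input and output
    net_h = [sum(W[i][j] * s[i] for i in range(n)) - T[j] for j in range(p)]
    c = [logsig(x) for x in net_h]
    # Eqs. (14)-(15): output-layer net input and response
    net_o = [sum(V[j][r] * c[j] for j in range(p)) - gamma[r] for r in range(m)]
    C = [logsig(x) for x in net_o]
    # Eq. (16): generalized error of each output unit
    d = [(target[r] - C[r]) * C[r] * (1.0 - C[r]) for r in range(m)]
    # Eq. (17): generalized error of each hidden unit
    e = [sum(d[r] * V[j][r] for r in range(m)) * c[j] * (1.0 - c[j])
         for j in range(p)]
    # Eqs. (18)-(19): correct output-layer weights and thresholds
    for j in range(p):
        for r in range(m):
            V[j][r] += alpha * d[r] * c[j]
    for r in range(m):
        gamma[r] -= alpha * d[r]
    # Eqs. (20)-(21): correct input-layer weights and thresholds
    for i in range(n):
        for j in range(p):
            W[i][j] += beta * e[j] * s[i]
    for j in range(p):
        T[j] -= beta * e[j]
    return C

random.seed(0)
n, p, m = 2, 3, 1
W = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(n)]
V = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(p)]
T = [random.uniform(-1, 1) for _ in range(p)]
gamma = [random.uniform(-1, 1) for _ in range(m)]

sample, target = [0.2, 0.8], [0.9]
for _ in range(300):
    out = bp_step(sample, target, W, T, V, gamma)
print(abs(out[0] - target[0]) < 0.05)  # the response approaches the target
```

Repeating the step drives the output toward the target on this single sample; a full training run would cycle the step over all m samples until the total error E falls below the stopping threshold.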

Although Chinese designers hold different conceptions of visual language, they are equally effective in communicating visual effects to the public. Information representation in visual graphics takes many forms: whether it is the shape of the language or the language of the shape, each is a representation of a different emotion. Visual language consists of images, colors, lines, and their interrelationships; generally speaking, the artist's image is his language, and this is most evident in the media. Painting, graphic design, and visual architecture are arguably the most common features of the new phase of 21st-century art.

A smart environment can also be referred to as ambient intelligence. In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. It builds on ubiquitous computing, ubiquitous communication, and human-centered computer interaction design.

With the advancement of science and technology, environmental perception, as an important part of context-aware information technology, has attracted more and more attention from researchers. Environmental perception technology mainly comprises traditional environmental sensing and mobile sensing. Traditional environmental sensing judges the user's environment from the collected information; however, relying on textual information alone to judge the user's context may not achieve the desired effect, so mobile sensing technology has gradually developed. Compared with traditional environmental perception, mobile sensing integrates many different sensors into smart devices: from the characteristic data these sensors collect, user behavior and environmental conditions such as temperature, humidity, acceleration, and noise can be inferred. Smart sensing technology therefore has good growth prospects and a growing range of products in the military, medical, transportation, housing, logistics, and other fields.

Currently, most context-aware devices use networked storage, which offers good scalability, openness, and compatibility. However, because the server concentrates a large number of storage interfaces, network transceivers, and CPU memory, it can easily become a system bottleneck and cannot meet the real-time storage and transmission requirements of today's Internet. Since all data on the network must be forwarded and stored by the server, connecting more and more sensors leads to heavy data accumulation and disordered storage: the server must handle a large amount of data, much of it useless, so transmission performance is low and real-time performance is poor. To address the network overload of traditional smart devices and the loose, irregular storage structures that result when the network is interrupted, the model adopts a high-performance compression algorithm based on a multisensor database system: data are sorted by storage time and sensor type, and each type is compressed as a unit. Table 3 presents the configuration of several sensors.

Table 3

Sensor configuration

Name Temperature sensor Pressure sensor Sound sensor Microwave sensor Infrared sensor Multifunctional sensor
Parameter 1 1 1 1 1 Various
Nonintrusive Yes Yes Yes Yes Yes Yes
Transmission frequency/MHz 433 433 433 433 433 922
Power supply Battery Battery Battery 220 V Battery Battery
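The sort-then-compress storage model described above can be sketched as follows; the record fields and the use of zlib for the per-type compression are illustrative assumptions, not details given in the text.

```python
import json, zlib
from itertools import groupby

# Illustrative sensor readings; field names are assumptions.
readings = [
    {"type": "temperature", "time": 3, "value": 21.5},
    {"type": "humidity", "time": 1, "value": 0.40},
    {"type": "temperature", "time": 1, "value": 21.2},
    {"type": "humidity", "time": 2, "value": 0.42},
]

# Sort by sensor type, then by acquisition time, as the model prescribes.
readings.sort(key=lambda r: (r["type"], r["time"]))

# Compress each sensor type's run of records as one unit.
blocks = {}
for sensor_type, group in groupby(readings, key=lambda r: r["type"]):
    payload = json.dumps(list(group)).encode("utf-8")
    blocks[sensor_type] = zlib.compress(payload)

# Each block decompresses back to time-ordered records of a single type.
restored = json.loads(zlib.decompress(blocks["temperature"]))
print([r["time"] for r in restored])  # [1, 3]
```

Grouping records of one type together before compression is what makes the scheme effective: homogeneous records share structure, so the compressor finds far more redundancy than it would in an interleaved stream.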

When acquiring multifunction sensor data, the data format presented in Table 4 is used. The ex_length field gives the specified length of the additional space for data sent during transmission; the rxid field is the number of the receiving node; the time field is the time stamp of the data; the txid field is the number of the transmitting node; the parity field provides a parity check on the data; and the rssi field gives the strength of the transmitted radio-frequency signal. The multifunction sensor must integrate the data before transmission to ensure that they can be processed uniformly through the MODBUS protocol.

Table 4

Multifunction sensor data format

Data domain Length/bit Data domain Length/bit
ex_length 8 time 32
extra 160 txid 21
rxid 24 parity 3
rssi 8
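Packing and unpacking a frame in the Table 4 format can be sketched as follows. The table gives only the field lengths (256 bits in total), so the field order and MSB-first packing below are assumptions for illustration.

```python
# Assumed layout: fields packed MSB-first in this order; widths from Table 4.
FIELDS = [
    ("ex_length", 8), ("extra", 160), ("rxid", 24),
    ("time", 32), ("txid", 21), ("parity", 3), ("rssi", 8),
]  # total = 256 bits = 32 bytes

def pack_frame(values: dict) -> bytes:
    """Concatenate the field values into one 256-bit big-endian frame."""
    bits = 0
    for name, width in FIELDS:
        assert values[name] < (1 << width), f"{name} overflows {width} bits"
        bits = (bits << width) | values[name]
    return bits.to_bytes(sum(w for _, w in FIELDS) // 8, "big")

def unpack_frame(frame: bytes) -> dict:
    """Slice the frame back into its named bit fields."""
    assert len(frame) * 8 == sum(w for _, w in FIELDS)
    bits = int.from_bytes(frame, "big")
    remaining, out = len(frame) * 8, {}
    for name, width in FIELDS:
        remaining -= width
        out[name] = (bits >> remaining) & ((1 << width) - 1)
    return out

v = {"ex_length": 16, "extra": 0, "rxid": 7, "time": 123456,
     "txid": 99, "parity": 1, "rssi": 200}
print(unpack_frame(pack_frame(v)) == v)  # True: lossless round trip
```

A fixed bit layout like this is what allows heterogeneous sensor data to be "integrated before transmission" and then handled uniformly by the downstream MODBUS processing.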

4 The expression form of graphic language

Graphic language appears in daily life as visual elements; each graphic icon carries its own unique visual meaning, and each person's visual experience differs greatly. The psychology of visual perception profoundly shapes how each form is read, and it has a major impact on public acceptance and cognitive change. Rich visual effects make graphic language more colorful and artistic, allowing viewers to feel the impact those effects produce. Researchers have confirmed that, of all information reaching the cerebral cortex, visual information is absorbed most readily and remembered most reliably: far more external information is received visually than one realizes. Both text and graphics transmit information to the brain through the eyes. Text conveys information accurately and quickly to readers who share the language, whereas an image can convey a more expressive and vivid meaning, although different groups of people interpret it differently.

So-called animation refers to film produced by techniques other than shooting real people and real objects, and is sometimes called "artificial film." With the development of the times, animation, as a branch of art and technology, has gradually come closer to everyday life. Animation requires both creative design and technical support, and both are important. One way to differentiate animation from film and television projects is to look at the shooting technique, the creativity of the individual designers, and the form of the final work: movies and TV shows typically rely on live-action shooting and elaborate production backdrops. Compared with other arts, the creative process of animation is characterized by randomness, creativity, imagination, and flexibility. As a complete art discipline, animation integrates artistic techniques such as painting, film and television production, and digital media. Animation is also a product of the human social spirit: it allows visions or ideas that are impossible in real life to be realized. The most important feature of animation is "movement." The biggest difference between visual language in general and the visual language of animation is scope: visual language includes every language that expresses and conveys emotion to the brain through vision, whether two dimensional or three dimensional, concrete or abstract, whereas the visual language of animation refers specifically to the emotional and psychological communication expressed through animated imagery. With the advance of technology, animation is no longer limited to hand drawing, and more creative forms of expression emerge one after another: ink-and-wash animation, paper-cut animation, clay animation, and other forms of different styles give audiences different visual languages and convey different emotional effects.

The traditional Chinese classic "Three Monks" features characters in matching colors: the little monk's clothes are red, the tall monk's are blue, and the fat monk's are yellow. Although only three simple colors are used, they immediately capture the viewer's attention. In the hot summer, the little monk's clothes fade from red to pink with sweat. The color design of the entire short film is meticulous and distinctive: what we see is not merely a change in a character's color, but color working collectively to express behavior and emotional appeal, as shown in Figure 8.

Figure 8 
               Character color settings in “Three Monks.” (a) Three monks; (b) two monks.

5 The application of graphic language in animation visual guidance system

Graphic language is one of the important means of information transmission. As globalization fuses cultures, guidance systems play an increasingly important role. The guidance system is an important carrier of the graphic language of design, and it is an important way to establish an understanding of a place within the overall guidance-system design.

This article selects two-thirds of each type of images from the established image library A to form the training sample library, with a total of 1,125 samples, and the remaining 375 samples form the test sample library. Among them, the sample data of the training sample library and the test sample library of each category are shown in Figure 9.
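The per-category split described above can be sketched as follows. The five category names and counts are illustrative assumptions (chosen to be divisible by three), so the totals here differ from the text's 1,125/375 figures; only the two-thirds-per-category rule is taken from the text.

```python
import random

# Illustrative image library A: category -> sample count (assumed values).
library_a = {
    "game": 480, "music television": 330, "animation": 270,
    "courseware": 240, "advertising": 180,
}  # 1,500 samples in total

random.seed(42)
train, test = [], []
for category, count in library_a.items():
    samples = [(category, i) for i in range(count)]
    random.shuffle(samples)                # randomize within the category
    cut = count * 2 // 3                   # two-thirds of each type
    train.extend(samples[:cut])
    test.extend(samples[cut:])

print(len(train), len(test))  # 1000 500
```

Splitting inside each category (a stratified split) keeps the class proportions of the training and test libraries identical, so test accuracy is not distorted by an accidental class imbalance.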

Figure 9 
               Sample data for training sample base and test sample base.

As shown in Figure 9, game is the largest category, accounting for almost one-third; it is followed by music television, and then by animation, courseware, and advertising.

To study the current Chinese public's visual understanding of the elements in animated images, we designed a questionnaire covering color choice, calligraphic fonts, graphic expression, and modeling meaning. The survey covered 1,000 people in city A, comprising 500 males and 500 females. Figure 10 shows the respondents' ages and the number who work with animation images.

Figure 10 
               Identity of the respondents.

Figure 11 shows respondents' subjective perceptions of the importance of color choice, calligraphic fonts, graphic expression, and modeling meaning. The figure shows that respondents aged 21–30 paid the most attention to color choice, with a peak count of 69 in that age group. The preferences of the 21–30 group fluctuate greatly across the four aspects, with only 12 choosing calligraphic fonts, whereas the 31–35-year-olds were divided more evenly among the four categories. The number of respondents who work with animation images decreases with age, and the number who choose calligraphic fonts increases with age.

Figure 11 
               Respondents’ subjective perceptions of the importance of color choice, calligraphic fonts, graphic expression, and styling meaning.

6 Conclusion

In general, graphic language has strong advantages over plain text: it is clear, intuitive, and easy to understand; it is unaffected by regional and ethnic differences; and it supports broad exchange and innovation across platforms. Especially in an era saturated with graphic elements, graphic language is an important means of understanding the natural world and human society. Visual guidance, in turn, refers to the systems that designers create for public use based on the principles of stability, originality, and visibility, such as icons, print advertisements, display pages, and grouped designs. Visual guidance must be planned according to the inner activities of many kinds of people; designers must therefore draw on visual physiology and psychology to manage the basic design elements and combine visual acuity with graphic language so as to better mediate between creators and the public. Through visual guidance, the viewer's attention can be captured so that viewers follow the designer's ideas and quickly grasp what the creator intends to express.

  1. Conflict of interest: The authors state no conflict of interest.

  2. Data availability statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.


Received: 2022-02-08
Revised: 2022-04-18
Accepted: 2022-06-29
Published Online: 2022-08-26

© 2022 Luning Zhao, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
