Article Open Access

Mathematical model on influence of past experiences on present activities of human brain

  • P. Raja Sekhara Rao, K. Venkata Ratnam and G. Shirisha
Published/Copyright: April 25, 2025

Abstract

This article explores how emotional feelings linked to stored memories of past experiences influence the present activity of the human brain. To analyse this, a mathematical model is considered describing the dynamics in a two-layered network in which the neurons in the first layer are involved in present activities and are influenced by cells in the second layer that carry the emotional feelings associated with memories of past experiences. Initially, this article establishes sufficient conditions for the stability of a unique equilibrium solution of the system for both constant and time-varying exogenous inputs. This corresponds to the situation where the present activities are not disturbed by emotions emanating from past memories. Furthermore, it is observed that certain variations in the exogenous inputs can induce oscillations within the system. To manage these, this study suggests adjusting specific parameters that control the fluctuations arising from emotional responses to past and current memories. This control aims to stabilize the brain's current activity, allowing it to reach a balanced state.

MSC 2010: 34D23; 34K20; 92B20; 93D20

1 Introduction

Human beings are emotional, and their activities are usually emotion driven. Emotions are generated by experiences; if the experiences do not occur in the physical world, the brain can generate them in an imagined world. Our feelings towards crime news, expected exam results, the performance of our football team in tomorrow’s match, pandemics like Covid-19 – all are imagined and stored with some emotions or impressions that may be recalled or come to the forefront when the actual situation arises. Furthermore, the emotions of a person have an impact on the surroundings or society [24]. Thus, the study of the influence of emotions is interesting and useful. Based on one’s emotions, we may predict one’s behaviour to some extent. Emotions are reflected through facial expressions or other body gestures, and scientists have worked extensively to predict emotions and consequent actions by analysing them. Researchers from different fields such as the social, biological, mathematical, and engineering sciences are studying the role of emotions from their own perspectives. Psychologists have focused on the psychological aspects of the cause and impact of these emotions, and have described many theories of emotion to predict or recognize the emotions of a person. Biologists have studied the contribution of the activities of the brain to emotions and their responses. Computer scientists have tried to predict or recognize emotions using artificial neural networks. Mathematicians have tried to frame mathematical models for emotions using different approaches.

In psychology, according to the classical view of emotion, emotions can be assessed objectively and accurately through facial expressions. But how does the brain guide us to a particular action for a given emotion in a given situation? This is a key point on which many researchers are working. Barrett [3] and Zimmerman [27,28] have said that these actions for emotions are generated by the brain using the concepts stored in memory from earlier experiences. Albarracin [1] stated that a person’s attitudes and behaviour will be influenced by their past experience and past behaviour. Hartmann et al. have used appraisal theory to predict emotions by forming a mathematical model for emotions [7]. Islam et al. have modelled emotional states based on wavelet analysis and the trust-region algorithm, which can be applied to hardware implementation of human emotion-based systems [9]. Prisnyakov and Prisnyakova have expressed human adaptation to emotional factors by using a model of information processing by memory [15]. Ambrosio [2] provided a mathematical approach to characterize the emergence of emotional fluxes in the human psyche. Iinuma and Kogiso [8] proposed a computational human decision-making model that handles emotion-induced behaviour. Gupta et al. [6] have tried to simulate and model the human decision-making process through a reinforcement learning-based computational model involving past experiences.

We now turn to studies of emotions using neural networks. Levine et al. [11] have performed a detailed discussion of different types of neural network models available in the literature for different types of emotions. Lee et al. [10] have used a neural network to recognize emotions through heart rate variability and skin resistance. Unluturk et al. [25] studied emotions using speech recognition by introducing a new type of neural network called ERNN. Thenius et al. [23] have proposed a new type of neural network named EMANN to predict emotions. Sharma and Dugar [22] have tried to recognize emotions through face detection using deep neural networks. Minaee and Abdolrashidi [14] have predicted emotions through facial expressions by using attentional convolutional networks. Rahul Mahadeo Shahane and Ramakrishna Sharma [16] have used a feed-forward neural network to recognize emotions. Santhoshkumar and Geetha [21] have used feed-forward deep convolution neural networks for emotion recognition from human body movements. Merlin et al. [13] have studied and compared different approaches to identify emotions on human faces, considering multiple perspectives on emotion detection by using the Viola-Jones face detection method to identify faces. Manalu and Rifai [12] used a hybrid convolutional-recurrent neural network algorithm to detect human emotions through facial expressions. Begazo et al. [4] have tried to detect human emotions through voice using deep learning techniques. Most of this work aims to predict or recognize emotions, which has wide application in the fields of marketing, economic theory, banking, the hospitality industry, media and communication, etc. Thus, studies so far appear to have focused on understanding which emotion caused an activity.

Our aim in this article is to understand the converse, i.e., how emotions from stored experiences could influence other activities of the human brain. This we try to do in a mathematical way! Explaining the complex phenomena of the human brain through mathematical equations is not an ordinary task. However, attempts are made to enter this labyrinth in some way, to begin with. To our knowledge, there are two ways to go. The first is to consider as many phenomena of the brain as possible, put them in the form of mathematical equations, solve them, see how far the solutions explain the phenomena, and modify the equations, if necessary, to fit them best. This is going to be an onerous task for a system as complex as the human brain. The second path is simpler: select an existing mathematical model and check how far it explains the features of the system under consideration. Modifications are always possible in a mathematical model to make it suitable for the system. This prompts us to consider the model proposed by Rao and Rao [20]. The model was proposed to study the interactive dynamics between two layers of a network of nodes (components or neuronal cells) that describe a hierarchical system, such as an information management system. We are going to utilize this model to see how emotional feelings emanating from past experiences or recorded memories influence the activities of the brain.

Except for natural activities such as attending to basic physical needs, and knee-jerk reactions, all activities of the brain involve both thinking and recalling. Thinking may be regarded as an activity of the brain for the situation under consideration, while recalling is an activity of the brain with regard to its previous experiences stored in memory, along with some impressions or attributes. A single experience could be a result of many activities, and one activity may be the root cause of many memories. Thus, each neuron in one layer could be attached to one or more neurons in another layer. In other words, certain interactive dynamics are created between these two activities of the brain. How to represent this?

For this, we consider two layers of neurons:

  • The first layer consists of neuronal cells that are involved in the present activities, and the second layer consists of those neuronal cells that reflect the emotional feelings attached to memories of past experiences.

  • Each neuron in the first layer is connected to a set of neurons (specific to it) in the second layer that tends to influence it through related memories. Thus, both layers are interconnected.

  • All neurons in both the layers are intra-connected among themselves.

  • An input to the brain will stimulate both layers of neurons. In other words, inputs to the system invoke activity in the first layer as well as stimulate activity in memory cells in the second layer.

With this background, we shall recall the following model [20]:

(1) $$\begin{aligned} x_i'(t) &= -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t)) + \sum_{k=1}^{r_i} c_{ii_k}\, g_{i_k}(x_i(t), y_{i_k}(t)) + I_i,\\ y_{i_k}'(t) &= -c_{i_k} y_{i_k}(t) + \sum_{l=1}^{r_i} d_{i_l} h_{i_l}(y_{i_l}(t)) + J_{i_k}, \end{aligned}$$

for $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots, r_i$, $1 \le r_i \le n$,

where $x_i$ is a typical neuron in the first layer representing the state of a particular activity of the brain at any time $t$, and $y_{i_k}$ is a neuron in the second layer that corresponds to activities invoked by memories of past experiences. $a_i > 0$ is the passive decay rate of $x_i$, leading it to a resting state in the absence of any external activity, while $c_{i_k} > 0$ is the passive decay rate of $y_{i_k}$. The constant $b_{ij}$ denotes the synaptic connection strength between neurons $x_i$ and $x_j$. $d_{i_l} > 0$ denotes the rate at which another neuron $y_{i_l}$ in the second layer is connected to $y_{i_k}$ in the same subnetwork (i.e., the rate at which various emotions of past experiences related to a particular activity $x_i$ are interconnected). The parameter $c_{ii_k}$ denotes the synaptic connection strength between $y_{i_k}$ and $x_i$ (i.e., the rate at which the emotional aspect of a particular memory is related to the present activity). The function $f_j$ is the functional response of neuron $x_j$ towards $x_i$, whereas the function $g_{i_k}$ shows how $y_{i_k}$ is related to $x_i$. $h_{i_l}$ is the response function of $y_{i_l}$ towards $y_{i_k}$. $I_i$ and $J_{i_k}$ are exogenous inputs to the two layers, respectively.

The response functions f j , g i k , and h i l may be chosen from a general class of functions such that system (1) has continuous solutions. We shall provide ample examples of such functions in the forthcoming sections.

In [20], it is assumed that neurons in the first layer are always supported by neurons in the second layer, and the network is hence termed a cooperative and supportive network. In this article, we are going to understand how the sub-network neurons influence the activities of the main network neurons, and how the main network withstands these influences to carry out its activities. Thus, the neurons $y_{i_k}$ are no longer supportive of the $x_i$'s but try to intervene in and influence their activities. They may disturb the desired activities of the $x_i$'s or cause deviations from them. Thus, our aim here is to address the following questions:

  • How do the dynamics of stored memories influence the related activities of the brain?

  • How can the brain carry out its activities while working with emotions in parallel?

  • Can the system remain stable under fluctuating emotions?

We shall see how far system (1) or its modifications would explain the aforementioned phenomena.

This article is organized as follows. In Section 2, a modified form of (1) is presented that includes all possible delays in communication and transmission of data in such networks. Basic properties such as the existence and uniqueness of solutions and also equilibria are discussed. Section 3 deals with the stability properties of solutions of the system considered under the influence of both constant inputs and time-varying inputs. Several sufficient conditions are obtained via Lyapunov functionals. In Section 4, we provide illustrative examples to verify the results of Section 3 and try to answer the questions posed earlier. A discussion concludes the work in Section 5.

2 Model and basic properties

The brain always learns and adds experiences from present activities into the memory store while working simultaneously with them. We consider this ability of the brain and introduce the corresponding term into (1). Concurrency of many activities at the same time may lead to processing delays among x i ’s. Also, it is quite natural that the brain may take some time to recollect previously stored memory, which results in the processing delay among y i k ’s. In certain situations, it may take some time for the emotions of previous memories to show their influence on activity, as the brain may be engaged with the other activity, which leads to transmission delays. Further time delays in transmission from new experiences learned from present activities to memory state are also relevant. By incorporating these delays into system (1), we modify (1) as

(2) $$\begin{aligned} x_i' &= -a_i x_i + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_j)) + \sum_{k=1}^{r_i} c_{ii_k}\, g_{i_k}(x_i, y_{i_k}(t-\vartheta_{i_k})) + I_i,\\ y_{i_k}' &= -c_{i_k} y_{i_k} + \sum_{l=1}^{r_i} d_{i_l} h_{i_l}(y_{i_l}(t-\zeta_{i_l})) + \alpha_{i_k}\phi_{i_k}(x_i(t-\delta_i)) + J_{i_k}, \end{aligned}$$

for $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots, r_i$, $1 \le r_i \le n$,

where all the terms remain as defined in (1). Here, the new terms $\alpha_{i_k}\phi_{i_k}(x_i)$ represent the new memories that emerge from present activities. The terms $\tau_i$ and $\zeta_{i_k}$ are the processing delays among the $x_i$'s and the $y_{i_k}$'s, respectively. $\vartheta_{i_k}$ is the transmission delay from $y_{i_k}$ to $x_i$, and $\delta_i$ is the transmission delay from $x_i$ to $y_{i_k}$. All the delays are assumed to be non-negative constants in the present context. We note that the processing delays $\tau_i$ and $\zeta_{i_k}$ may be very small compared to the transmission delays $\vartheta_{i_k}$ and $\delta_i$, and hence may be neglected. However, we allow for this mathematical possibility and proceed with all delays, noting that the choice $\tau_i = 0$ and $\zeta_{i_k} = 0$ for all $i$ and $i_k$ is always available to us to ignore them. In due course, readers may observe that our results hold good for the system with zero delays also.
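To make the dynamics of (2) concrete, the following is a minimal numerical sketch (not from the article): a forward-Euler integration of a scalar instance with $n = 1$, $r_1 = 1$, tanh-type response functions, and history buffers for the delays. All parameter values are illustrative choices only.

```python
import math

# Euler scheme for a scalar instance of system (2): one present-activity
# neuron x and one memory neuron y (n = 1, r_1 = 1).  Parameters and the
# tanh responses are illustrative, not taken from the article.
a, c = 2.0, 2.0                          # passive decay rates a_1, c_11
b, cc, d, alpha = 0.5, 0.4, 0.3, 0.2     # b_11, c_111, d_11, alpha_11
I, J = 1.0, 0.5                          # constant exogenous inputs
tau, theta, zeta, delta = 0.1, 0.3, 0.2, 0.25  # the four delays

dt, T = 0.001, 30.0
steps = int(T / dt)
max_lag = int(max(tau, theta, zeta, delta) / dt)

# constant initial history on [-max delay, 0]
x = [0.1] * (max_lag + 1)
y = [0.0] * (max_lag + 1)

def delayed(buf, lag_steps):
    """State value lag_steps steps in the past (clamped to the history)."""
    idx = len(buf) - 1 - lag_steps
    return buf[idx] if idx >= 0 else buf[0]

for _ in range(steps):
    dx = (-a * x[-1] + b * math.tanh(delayed(x, int(tau / dt)))
          + cc * math.tanh(x[-1] + delayed(y, int(theta / dt))) + I)
    dy = (-c * y[-1] + d * math.tanh(delayed(y, int(zeta / dt)))
          + alpha * math.tanh(delayed(x, int(delta / dt))) + J)
    x.append(x[-1] + dt * dx)
    y.append(y[-1] + dt * dy)

print(round(x[-1], 4), round(y[-1], 4))  # settled activity and memory states
```

With these decay rates dominating the connection strengths, the trajectory settles to a constant state, in line with the stability results of Section 3.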

The pictorial representation of this network may be depicted in Figure 1.

Figure 1: Typical representation of the proposed network. Source: Created by the authors.

By the theory of delay differential equations, we know that the local Lipschitz conditions on response functions guarantee the existence of solutions. Hence, we assume the following Lipschitz conditions on the response functions:

(3) $$\begin{aligned} |g_{i_k}(x_i, y_{i_k}) - g_{i_k}(\bar{x}_i, \bar{y}_{i_k})| &\le M_{1i_k}|y_{i_k} - \bar{y}_{i_k}| + M_{2i_k}|x_i - \bar{x}_i|,\\ |f_j(x_j) - f_j(\bar{x}_j)| &\le p_j|x_j - \bar{x}_j|,\\ |h_{i_k}(y_{i_k}) - h_{i_k}(\bar{y}_{i_k})| &\le q_{i_k}|y_{i_k} - \bar{y}_{i_k}|,\\ |\phi_{i_k}(x_i) - \phi_{i_k}(\bar{x}_i)| &\le N_{i_k}|x_i - \bar{x}_i|, \end{aligned}$$

for some positive constants $M_{1i_k}$, $M_{2i_k}$, $p_j$, $q_{i_k}$, and $N_{i_k}$.

In view of conditions (3), we assume henceforth that the system (2) possesses unique solutions that are continuous in their maximal intervals of existence. The response functions f j , g i k , and h i l may be chosen from a general class of functions that satisfy the aforementioned conditions [20]. The following are some of the functions that we may use in the present context:

$f_j(x_j) = \tanh(x_j)$, $\sin(x_j)$, $\cos(x_j)$, $\dfrac{1}{1+e^{-x_j}}$, $\tfrac{1}{2}\big(|x_j+1| - |x_j-1|\big)$, or $\dfrac{x_j}{a+x_j}$;

$g_{i_k}(x_i, y_{i_k}) = \dfrac{x_i}{x_i+y_{i_k}}$, $x_i + y_{i_k}$, $x_i y_{i_k}$, $\tanh(x_i + y_{i_k})$, or $\tanh(x_i y_{i_k})$;

$h_{i_l}(y_{i_l}) = \tanh(y_{i_l})$ or $\dfrac{y_{i_l}}{a+y_{i_l}}$;

$\phi_{i_k}(x_i) = \dfrac{x_i}{a+x_i}$ or $\tanh(x_i)$.
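As a sanity sketch (not part of the article), the Lipschitz bounds in (3) can be spot-checked numerically for a few of the listed response functions; the constants used below are the standard ones (tanh and the piecewise-linear unit are 1-Lipschitz, the logistic sigmoid is $\tfrac{1}{4}$-Lipschitz).

```python
import math

# Spot-check: on a grid of points, every difference quotient
# |f(u) - f(v)| / |u - v| stays below the claimed Lipschitz constant.
candidates = [
    (math.tanh, 1.0),                                   # p_j = 1
    (lambda u: 1.0 / (1.0 + math.exp(-u)), 0.25),       # logistic, p_j = 1/4
    (lambda u: 0.5 * (abs(u + 1.0) - abs(u - 1.0)), 1.0)  # piecewise linear
]

grid = [i * 0.05 - 5.0 for i in range(201)]   # 201 points in [-5, 5]
for f, L in candidates:
    vals = [f(u) for u in grid]
    worst = max(abs(vals[i] - vals[j]) / (grid[i] - grid[j])
                for i in range(201) for j in range(i))
    assert worst <= L + 1e-9, "Lipschitz bound violated"
print("all sampled difference quotients respect the stated Lipschitz constants")
```

By the mean value theorem, these constants are simply the suprema of the corresponding derivatives, which is why the check passes with equality only approached in the steepest regions.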

The most common way of understanding the dynamics of a system, such as (2), is to study its behaviour at equilibrium solutions in terms of its stability. An equilibrium solution represents a known constant solution of the system, and convergence to such a known value implies that the activities are concluding to a known action/solution. An autonomous system such as (2) may possess equilibria. Existence of equilibria remains unaffected by time delays, as demonstrated in previous studies [5,17,19,20]. Therefore, we can establish that

Theorem 2.1

If the output functions satisfy conditions (3) and the parameters satisfy conditions

(4) $$\frac{1}{a_i}\left(\sum_{j=1}^{n}|b_{ji}|p_j + \sum_{k=1}^{r_i}|c_{ii_k}|M_{2i_k}\right) + \frac{1}{c_{i_k}}|\alpha_{i_k}|N_{i_k} < 1, \qquad \frac{1}{a_i}\sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k} + \frac{1}{c_{i_k}}\sum_{k=1}^{r_i}|d_{i_k}|q_{i_k} < 1,$$

then model (2) has a unique equilibrium solution.
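As a hedged illustration (not from the article), the contraction conditions (4) can be checked mechanically; the helper below does so for the scalar case $n = 1$, $r_1 = 1$, with argument names mirroring $a_1$, $c_{1_1}$, $b_{11}$, $c_{11_1}$, $d_{1_1}$, $\alpha_{1_1}$ and the Lipschitz constants of (3). The sample values are illustrative only.

```python
# Check of the two inequalities in (4) for n = 1, r_1 = 1.
def has_unique_equilibrium(a, c, b, cc, d, alpha, p, q, M1, M2, N):
    """True when both contraction inequalities in (4) hold."""
    cond1 = (abs(b) * p + abs(cc) * M2) / a + abs(alpha) * N / c < 1.0
    cond2 = abs(cc) * M1 / a + abs(d) * q / c < 1.0
    return cond1 and cond2

# tanh-type responses: all Lipschitz constants equal to 1
print(has_unique_equilibrium(a=2.0, c=2.0, b=0.5, cc=0.4, d=0.3, alpha=0.2,
                             p=1.0, q=1.0, M1=1.0, M2=1.0, N=1.0))  # prints True
```

Weak decay rates (small $a_i$, $c_{i_k}$) relative to the connection strengths make the left-hand sides exceed 1, and uniqueness of the equilibrium is no longer guaranteed by the theorem.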

Thus, under conditions (4), model (2) possesses a unique equilibrium, which may typically be represented by $(x_i^*, y_{i_k}^*)$. As equilibria are stationary solutions of the system, we can write

(5) $$\begin{aligned} (x_i - x_i^*)' &= -a_i(x_i - x_i^*) + \sum_{j=1}^{n} b_{ij}\big(f_j(x_j(t-\tau_j)) - f_j(x_j^*)\big) + \sum_{k=1}^{r_i} c_{ii_k}\big(g_{i_k}(x_i, y_{i_k}(t-\vartheta_{i_k})) - g_{i_k}(x_i^*, y_{i_k}^*)\big),\\ (y_{i_k} - y_{i_k}^*)' &= -c_{i_k}(y_{i_k} - y_{i_k}^*) + \sum_{l=1}^{r_i} d_{i_l}\big(h_{i_l}(y_{i_l}(t-\zeta_{i_l})) - h_{i_l}(y_{i_l}^*)\big) + \alpha_{i_k}\big(\phi_{i_k}(x_i(t-\delta_i)) - \phi_{i_k}(x_i^*)\big), \end{aligned}$$

where $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots, r_i$, and $1 \le r_i \le n$.

We will be using equations (5) whenever required in our results.

Now the question arises, under what conditions on the system parameters and functional responses do the solutions of model (2) converge to an equilibrium solution? In the next section, we try to establish different sets of sufficient conditions for the solutions to reach the equilibria reflecting the asymptotic stability of the system.

3 Stability aspects

We directly start with the stability of the unique equilibrium of (2) that exists by virtue of Theorem 2.1. The following result provides sufficient conditions on the system parameters for global asymptotic stability of equilibrium solution of (2).

Theorem 3.1

Assume that conditions (3) hold. If

(6) $$\text{(i)}\ a_i > \sum_{j=1}^{n}|b_{ji}|p_i + \sum_{k=1}^{r_i}|c_{ii_k}|M_{2i_k} + \sum_{k=1}^{r_i}|\alpha_{i_k}|N_{i_k}, \qquad \text{(ii)}\ c_{i_k} > \sum_{k=1}^{r_i}|d_{i_k}|q_{i_k} + |c_{ii_k}|M_{1i_k},$$

for all $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots, r_i$, and $1 \le r_i \le n$, then model (2) has an equilibrium solution $(x_i^*, y_{i_k}^*)$, which is globally asymptotically stable.

Proof

By choosing

$$\begin{aligned} V = \sum_{i=1}^{n}\Bigg[&|x_i - x_i^*| + \sum_{k=1}^{r_i}|y_{i_k} - y_{i_k}^*| + \sum_{j=1}^{n}|b_{ij}|p_j\int_{t-\tau_j}^{t}|x_j(z) - x_j^*|\,dz + \sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}\int_{t-\vartheta_{i_k}}^{t}|y_{i_k}(z) - y_{i_k}^*|\,dz\\ &+ \sum_{k=1}^{r_i}\bigg(\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l}\int_{t-\zeta_{i_l}}^{t}|y_{i_l}(z) - y_{i_l}^*|\,dz + |\alpha_{i_k}|N_{i_k}\int_{t-\delta_i}^{t}|x_i(z) - x_i^*|\,dz\bigg)\Bigg], \end{aligned}$$

the upper Dini derivative of V along the solutions of system (2), using (5), is given by

(7) $$\begin{aligned} D^+V \le{}& \sum_{i=1}^{n}\Bigg[-a_i|x_i - x_i^*| + \sum_{j=1}^{n}|b_{ij}|p_j|x_j(t-\tau_j) - x_j^*| + \sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}|y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*| + \sum_{k=1}^{r_i}|c_{ii_k}|M_{2i_k}|x_i - x_i^*|\\ &+ \sum_{k=1}^{r_i}\bigg(-c_{i_k}|y_{i_k} - y_{i_k}^*| + \sum_{l=1}^{r_i}|d_{i_l}|q_{i_l}|y_{i_l}(t-\zeta_{i_l}) - y_{i_l}^*| + |\alpha_{i_k}|N_{i_k}|x_i(t-\delta_i) - x_i^*|\bigg)\\ &+ \sum_{j=1}^{n}|b_{ij}|p_j\big(|x_j(t) - x_j^*| - |x_j(t-\tau_j) - x_j^*|\big) + \sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}\big(|y_{i_k} - y_{i_k}^*| - |y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*|\big)\\ &+ \sum_{k=1}^{r_i}\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l}\big(|y_{i_l} - y_{i_l}^*| - |y_{i_l}(t-\zeta_{i_l}) - y_{i_l}^*|\big) + \sum_{k=1}^{r_i}|\alpha_{i_k}|N_{i_k}\big(|x_i - x_i^*| - |x_i(t-\delta_i) - x_i^*|\big)\Bigg]\\ \le{}& -\sum_{i=1}^{n}\Bigg[\bigg(a_i - \sum_{j=1}^{n}|b_{ji}|p_i - \sum_{k=1}^{r_i}|c_{ii_k}|M_{2i_k} - \sum_{k=1}^{r_i}|\alpha_{i_k}|N_{i_k}\bigg)|x_i - x_i^*| + \sum_{k=1}^{r_i}\bigg(c_{i_k} - |c_{ii_k}|M_{1i_k} - \sum_{k=1}^{r_i}|d_{i_k}|q_{i_k}\bigg)|y_{i_k} - y_{i_k}^*|\Bigg]\\ \le{}& -\sum_{i=1}^{n}\Bigg[A|x_i - x_i^*| + \sum_{k=1}^{r_i}B|y_{i_k} - y_{i_k}^*|\Bigg], \end{aligned}$$

where

$$A = \min_{i}\bigg\{a_i - \sum_{j=1}^{n}|b_{ji}|p_i - \sum_{k=1}^{r_i}|c_{ii_k}|M_{2i_k} - \sum_{k=1}^{r_i}|\alpha_{i_k}|N_{i_k}\bigg\}, \qquad B = \min_{i,k}\bigg\{c_{i_k} - |c_{ii_k}|M_{1i_k} - \sum_{k=1}^{r_i}|d_{i_k}|q_{i_k}\bigg\}.$$

By hypothesis, A > 0 and B > 0 . Then, D + V ( t ) < 0 .

The conclusion follows from the standard argument.□

Remark 3.2

What does the stability of an equilibrium in such activities mean? For mathematicians, an equilibrium point is a stationary or critical point where the system is in a resting state. For activities linked to a brain or an artificial neural network, an equilibrium point is regarded as a (stored) memory pattern or a state where the brain/network exhibits no dynamics. Thus, it represents a state of calmness in which the brain has come to a conclusion or is focused. Theorem 3.1 provides sufficient conditions ($A > 0$ and $B > 0$) on parameters and functionals to achieve this. It states that as long as the resting potentials of the two layers are high enough to withstand the influence or interference of other neurons, the system has the ability to stay in a resting state or draw a conclusion. Emotions from past experiences have settled into a fixed feeling, and simultaneously, the present activity is clearly defined and fixed. In that state, the brain is in a position to give instructions to other organs with this output or could start a new activity from here.
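The global nature of Theorem 3.1 can be illustrated numerically. Below is a hedged sketch (not from the article) of a scalar instance with $n = 1$, $r_1 = 1$ and zero delays, which the theorem permits; the illustrative parameters satisfy (6) with all tanh Lipschitz constants equal to 1, so two very different initial states must be drawn to the same equilibrium.

```python
import math

# Conditions (6) hold here: a = 2 > |b| + |cc| + |alpha| = 1.1 and
# c = 2 > |d| + |cc| = 0.7, so the Lyapunov argument forces contraction.
a, c, b, cc, d, alpha, I, J = 2.0, 2.0, 0.5, 0.4, 0.3, 0.2, 1.0, 0.5

def euler_step(x, y, dt=0.001):
    dx = -a * x + b * math.tanh(x) + cc * math.tanh(x + y) + I
    dy = -c * y + d * math.tanh(y) + alpha * math.tanh(x) + J
    return x + dt * dx, y + dt * dy

s1, s2 = (5.0, -3.0), (-4.0, 6.0)   # two well-separated initial states
for _ in range(40000):               # integrate up to t = 40
    s1, s2 = euler_step(*s1), euler_step(*s2)

gap = abs(s1[0] - s2[0]) + abs(s1[1] - s2[1])
print("gap between trajectories:", gap)   # essentially zero
```

The contraction margins $A$ and $B$ of the theorem (here $0.9$ and $1.3$) bound the exponential rate at which the two trajectories collapse together.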

We shall now provide some more sets of conditions on parameters for global asymptotic stability of model (2). The following inequality enables us to find more general conditions on the parameters of the system for the global asymptotic stability of equilibrium solution.

For all real numbers $a$, $b$, and $\eta > 0$, the inequality

(8) $$ab \le \frac{1}{4\eta}a^2 + \eta b^2$$

holds true.
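For completeness, inequality (8) is the standard Young-type estimate obtained by completing a square:

```latex
0 \le \left(\frac{a}{2\sqrt{\eta}} - \sqrt{\eta}\, b\right)^{2}
   = \frac{a^{2}}{4\eta} - ab + \eta b^{2}
\quad\Longrightarrow\quad
ab \le \frac{1}{4\eta}\, a^{2} + \eta\, b^{2}.
```

The free parameter $\eta$ lets one trade weight between the $a^2$ and $b^2$ terms, which is exactly how the alternative condition sets (a)-(c) of Theorem 3.3 are tuned.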

Theorem 3.3

Assume that conditions (3) hold. The equilibrium $(x_i^*, y_{i_k}^*)$ of model (2) is globally asymptotically stable for any length of the time delays $\tau_i$, $\delta_i$, $\vartheta_{i_k}$, and $\zeta_{i_k}$, for $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots, r_i$, provided the parameters satisfy any of the following sets of inequalities:

(a) (i) $\dfrac{1}{4\eta_1}\sum_{j=1}^{n}|b_{ij}|p_j + \eta_1\sum_{j=1}^{n}|b_{ji}|p_i + \sum_{k=1}^{r_i}|c_{ii_k}|(M_{2i_k} + \eta_2 M_{1i_k}) + \eta_4|\alpha_{i_k}|N_{i_k} < a_i$,
(ii) $\dfrac{1}{4\eta_2}|c_{ii_k}|M_{1i_k} + \dfrac{1}{4\eta_3}\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l} + \eta_3\sum_{k=1}^{r_i}|d_{i_k}|q_{i_k} < c_{i_k}$;

(b) (i) $\dfrac{1}{4\eta_1}\sum_{j=1}^{n}|b_{ij}|p_j^2 + \eta_1\sum_{j=1}^{n}|b_{ji}| + \sum_{k=1}^{r_i}|c_{ii_k}|(M_{2i_k} + \eta_2 M_{1i_k}) + \eta_4|\alpha_{i_k}|N_{i_k} < a_i$,
(ii) $\dfrac{1}{4\eta_2}|c_{ii_k}|M_{1i_k} + \dfrac{1}{4\eta_3}\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l} + \eta_3\sum_{k=1}^{r_i}|d_{i_k}|q_{i_k} < c_{i_k}$;

(c) (i) $\dfrac{1}{4\eta_1}\sum_{j=1}^{n}|b_{ij}| + \eta_1\sum_{j=1}^{n}|b_{ji}|p_i^2 + \sum_{k=1}^{r_i}|c_{ii_k}|(M_{2i_k} + \eta_2 M_{1i_k}) + \eta_4|\alpha_{i_k}|N_{i_k} < a_i$,
(ii) $\dfrac{1}{4\eta_2}|c_{ii_k}|M_{1i_k} + \dfrac{1}{4\eta_3}\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l} + \eta_3\sum_{k=1}^{r_i}|d_{i_k}|q_{i_k} < c_{i_k}$,

where $\eta_1$, $\eta_2$, $\eta_3$, and $\eta_4$ are positive parameters chosen appropriately.

Proof

(a) We construct an appropriate Lyapunov functional. We first consider $V_1(t) = \sum_{i=1}^{n}\dfrac{(x_i - x_i^*)^2}{2}$.

Differentiating V 1 with respect to time variable along the solution of (2) and using (5), we obtain

(9) $$V_1' = \sum_{i=1}^{n}(x_i - x_i^*)(x_i - x_i^*)' \le \sum_{i=1}^{n}\Bigg[-a_i(x_i - x_i^*)^2 + \sum_{j=1}^{n}|b_{ij}|p_j|x_i - x_i^*||x_j(t-\tau_j) - x_j^*| + \sum_{k=1}^{r_i}|c_{ii_k}|\Big(M_{2i_k}(x_i - x_i^*)^2 + M_{1i_k}|x_i - x_i^*||y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*|\Big)\Bigg].$$

Using (8) with $\eta_1 > 0$ and $\eta_2 > 0$, we have

(10) $$\begin{aligned} |x_i - x_i^*||x_j(t-\tau_j) - x_j^*| &\le \frac{1}{4\eta_1}(x_i - x_i^*)^2 + \eta_1(x_j(t-\tau_j) - x_j^*)^2,\\ |x_i - x_i^*||y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*| &\le \frac{1}{4\eta_2}(y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*)^2 + \eta_2(x_i - x_i^*)^2. \end{aligned}$$

Substituting (10) into (9), we obtain

(11) $$\begin{aligned} V_1' \le \sum_{i=1}^{n}\Bigg[&-a_i(x_i - x_i^*)^2 + \frac{1}{4\eta_1}\sum_{j=1}^{n}|b_{ij}|p_j(x_i - x_i^*)^2 + \eta_1\sum_{j=1}^{n}|b_{ij}|p_j(x_j(t-\tau_j) - x_j^*)^2\\ &+ \sum_{k=1}^{r_i}|c_{ii_k}|M_{2i_k}(x_i - x_i^*)^2 + \eta_2\sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}(x_i - x_i^*)^2 + \frac{1}{4\eta_2}\sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}(y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*)^2\Bigg]. \end{aligned}$$

Let $V_2 = \sum_{i=1}^{n}\sum_{k=1}^{r_i}\dfrac{(y_{i_k}(t) - y_{i_k}^*)^2}{2}$.

Then the derivative of $V_2$ along the solutions of the second equation of (2), using (5) and after simplification, is given by

(12) $$V_2' \le \sum_{i=1}^{n}\sum_{k=1}^{r_i}\Bigg[-c_{i_k}(y_{i_k} - y_{i_k}^*)^2 + \sum_{l=1}^{r_i}|d_{i_l}|q_{i_l}|y_{i_k} - y_{i_k}^*||y_{i_l}(t-\zeta_{i_l}) - y_{i_l}^*| + |\alpha_{i_k}|N_{i_k}|x_i(t-\delta_i) - x_i^*||y_{i_k} - y_{i_k}^*|\Bigg].$$

Using (8) with $\eta_3 > 0$ and $\eta_4 > 0$, we have

(13) $$\begin{aligned} |y_{i_k} - y_{i_k}^*||y_{i_l}(t-\zeta_{i_l}) - y_{i_l}^*| &\le \frac{1}{4\eta_3}(y_{i_k} - y_{i_k}^*)^2 + \eta_3(y_{i_l}(t-\zeta_{i_l}) - y_{i_l}^*)^2,\\ |x_i(t-\delta_i) - x_i^*||y_{i_k} - y_{i_k}^*| &\le \frac{1}{4\eta_4}(y_{i_k} - y_{i_k}^*)^2 + \eta_4(x_i(t-\delta_i) - x_i^*)^2. \end{aligned}$$

Substituting (13) into (12), we obtain

(14) $$V_2' \le \sum_{i=1}^{n}\sum_{k=1}^{r_i}\Bigg[-c_{i_k}(y_{i_k} - y_{i_k}^*)^2 + \sum_{l=1}^{r_i}|d_{i_l}|q_{i_l}\Big(\frac{1}{4\eta_3}(y_{i_k} - y_{i_k}^*)^2 + \eta_3(y_{i_l}(t-\zeta_{i_l}) - y_{i_l}^*)^2\Big) + |\alpha_{i_k}|N_{i_k}\Big(\frac{1}{4\eta_4}(y_{i_k}(t) - y_{i_k}^*)^2 + \eta_4(x_i(t-\delta_i) - x_i^*)^2\Big)\Bigg].$$

Now, consider the functional

$$V_3 = \sum_{i=1}^{n}\Bigg[\eta_1\sum_{j=1}^{n}|b_{ij}|p_j\int_{t-\tau_j}^{t}(x_j(z) - x_j^*)^2\,dz + \frac{1}{4\eta_2}\sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}\int_{t-\vartheta_{i_k}}^{t}(y_{i_k}(z) - y_{i_k}^*)^2\,dz + \sum_{k=1}^{r_i}\bigg(\eta_3\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l}\int_{t-\zeta_{i_l}}^{t}(y_{i_l}(z) - y_{i_l}^*)^2\,dz + \eta_4|\alpha_{i_k}|N_{i_k}\int_{t-\delta_i}^{t}(x_i(z) - x_i^*)^2\,dz\bigg)\Bigg],$$

(15) $$\begin{aligned} V_3' = \sum_{i=1}^{n}\Bigg[&\eta_1\sum_{j=1}^{n}|b_{ij}|p_j\big((x_j(t) - x_j^*)^2 - (x_j(t-\tau_j) - x_j^*)^2\big) + \frac{1}{4\eta_2}\sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}\big((y_{i_k}(t) - y_{i_k}^*)^2 - (y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*)^2\big)\\ &+ \sum_{k=1}^{r_i}\bigg(\eta_3\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l}\big((y_{i_l}(t) - y_{i_l}^*)^2 - (y_{i_l}(t-\zeta_{i_l}) - y_{i_l}^*)^2\big) + \eta_4|\alpha_{i_k}|N_{i_k}\big((x_i(t) - x_i^*)^2 - (x_i(t-\delta_i) - x_i^*)^2\big)\bigg)\Bigg]. \end{aligned}$$

Now, let V = V 1 + V 2 + V 3 .

Then, the time derivative of V along the solutions of (2), using (11), (14), and (15), is given by

$$\begin{aligned} V'(t) \le{}& -\sum_{i=1}^{n}\Bigg[\bigg(a_i - \frac{1}{4\eta_1}\sum_{j=1}^{n}|b_{ij}|p_j - \eta_1\sum_{j=1}^{n}|b_{ji}|p_i - \sum_{k=1}^{r_i}|c_{ii_k}|(M_{2i_k} + \eta_2 M_{1i_k}) - \eta_4|\alpha_{i_k}|N_{i_k}\bigg)(x_i - x_i^*)^2\\ &+ \sum_{k=1}^{r_i}\bigg(c_{i_k} - \frac{1}{4\eta_3}\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l} - \eta_3\sum_{k=1}^{r_i}|d_{i_k}|q_{i_k} - \frac{1}{4\eta_2}|c_{ii_k}|M_{1i_k}\bigg)(y_{i_k} - y_{i_k}^*)^2\Bigg]\\ \le{}& -\sum_{i=1}^{n}\Bigg[A_1(x_i - x_i^*)^2 + \sum_{k=1}^{r_i}B_1(y_{i_k} - y_{i_k}^*)^2\Bigg], \end{aligned}$$

where

$$A_1 = \min_{i}\bigg\{a_i - \frac{1}{4\eta_1}\sum_{j=1}^{n}|b_{ij}|p_j - \eta_1\sum_{j=1}^{n}|b_{ji}|p_i - \sum_{k=1}^{r_i}|c_{ii_k}|(M_{2i_k} + \eta_2 M_{1i_k}) - \eta_4|\alpha_{i_k}|N_{i_k}\bigg\} > 0,$$
$$B_1 = \min_{i,k}\bigg\{c_{i_k} - \frac{1}{4\eta_3}\sum_{l=1}^{r_i}|d_{i_l}|q_{i_l} - \eta_3\sum_{k=1}^{r_i}|d_{i_k}|q_{i_k} - \frac{1}{4\eta_2}|c_{ii_k}|M_{1i_k}\bigg\} > 0,$$

for $1 \le i \le n$, $1 \le k \le r_i$.

By hypotheses (a)(i) and (a)(ii), it is clear that $V'(t) < 0$. Therefore, the equilibrium $(x_i^*, y_{i_k}^*)$ is globally asymptotically stable.

(b) As in the earlier case, we consider the functionals $V_1(t) = \sum_{i=1}^{n}\dfrac{(x_i - x_i^*)^2}{2}$ and $V_2 = \sum_{i=1}^{n}\sum_{k=1}^{r_i}\dfrac{(y_{i_k}(t) - y_{i_k}^*)^2}{2}$.

For $\eta_1 > 0$, we use inequality (8) as follows:

(16) $$p_j|x_i - x_i^*||x_j(t-\tau_j) - x_j^*| \le \frac{p_j^2}{4\eta_1}(x_i - x_i^*)^2 + \eta_1(x_j(t-\tau_j) - x_j^*)^2.$$

Substituting (16) into (9) and simplifying, we obtain

(17) $$\begin{aligned} V_1' \le \sum_{i=1}^{n}\Bigg[&-a_i(x_i - x_i^*)^2 + \frac{1}{4\eta_1}\sum_{j=1}^{n}|b_{ij}|p_j^2(x_i - x_i^*)^2 + \eta_1\sum_{j=1}^{n}|b_{ij}|(x_j(t-\tau_j) - x_j^*)^2\\ &+ \sum_{k=1}^{r_i}|c_{ii_k}|M_{2i_k}(x_i - x_i^*)^2 + \eta_2\sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}(x_i - x_i^*)^2 + \frac{1}{4\eta_2}\sum_{k=1}^{r_i}|c_{ii_k}|M_{1i_k}(y_{i_k}(t-\vartheta_{i_k}) - y_{i_k}^*)^2\Bigg]. \end{aligned}$$

Proceeding as in case (a), we can conclude.

(c) We use the same functionals as in the earlier cases but utilize (8) with $\eta_1 > 0$ as

$$p_j|x_i - x_i^*||x_j(t-\tau_j) - x_j^*| \le \frac{1}{4\eta_1}(x_i - x_i^*)^2 + \eta_1 p_j^2(x_j(t-\tau_j) - x_j^*)^2.$$

The remainder of the proof is the same as in cases (a) and (b). Hence, $(x_i, y_{i_k}) \to (x_i^*, y_{i_k}^*)$ under conditions (c).□

Remark 3.4

Theorem 3.3 presents several sets of sufficient conditions on the parameters and functionals of the model. This means that model (2) has a larger region of stability and, thus, could provide a reasonable solution for suitable inputs. Such systems tend to remain stable and calm, which may be described as a focused state of mind, and can work synchronously with past experiences to execute the present activity in a better way.

Remark 3.5

One may note that for $\eta_2 = \frac{1}{4}$ and $\eta_3 = \frac{1}{2}$, the restrictions on $c_{ii_k}$ in Theorems 3.1 and 3.3 become the same. For a proper choice of $\eta_1$, $\eta_2$, $\eta_3$, and $\eta_4$, Theorem 3.3 may include Theorem 3.1; however, the conditions of Theorem 3.1 are more easily verifiable, which gives them an advantage over those of Theorem 3.3.

It may be noted that none of the conditions in Theorems 3.1–3.3 depend on the delay parameters. Hence, these results are valid for all delays, and in particular when $\tau_j = 0$, $\vartheta_{i_k} = 0$, or $\zeta_{i_l} = 0$. Thus, our results hold equally well for small, insignificant processing delays. As such, they serve as independent sets of sufficient conditions for global asymptotic stability of equilibria for the various models that can be deduced from model (2), in particular model (1).

3.1 Time-varying inputs

In artificial neural networks, fluctuations in input current, voltage, noise, etc., lead to variable inputs to the system. The human brain receives many inputs from the sense organs of the body besides internal inputs from stored data. Supplying an exact input to recall a particular memory is rarely possible for a system as complex as the brain. A continuous stream of inputs that triggers the thought process leading to the desired output is our usual experience. Thus, inputs to a system need not always be fixed constants. On the other hand, when multiple activities are carried out simultaneously, the inputs of one activity may interfere with another. Such inputs are capable of disturbing the system, sometimes leading it nowhere. It is our common experience that when similar activities are going on in our brain, inputs to one activity may be mistakenly attributed to another activity, leading to wrong conclusions – what we call a confused state of mind. These observations motivate us to study the impact of changing inputs. Since uncontrolled inputs lead to uncontrolled systems, we restrict our study to the following cases: (i) inputs to both present activities and past experiences converging or approaching some fixed values; (ii) one of the inputs fluctuating – reflecting an oscillatory-type input – while the other converges to a finite value; and (iii) both types of inputs oscillatory, reflecting a wavering mind. In the first case, one may expect the system to behave well in the sense that the solutions approach some fixed state, as in the case of constant inputs. The same is established by means of a result here. In the latter two cases, one may expect a fluctuating mind drawing no fixed conclusions, inferring a non-focused state of mind. We then propose some mechanisms that could possibly control the fluctuations introduced by variable inputs. We explain this through illustrative examples.
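Case (ii) can be sketched numerically. The following hedged example (not from the article) drives the memory layer of a scalar instance with a sinusoidal input while keeping the activity-layer input constant; zero delays and illustrative parameters are used, and the system keeps oscillating instead of settling.

```python
import math

# Scalar instance with a fluctuating memory input J(t): the system is
# pulled away from any equilibrium and oscillates persistently.
a, c, b, cc, d, alpha, I = 2.0, 2.0, 0.5, 0.4, 0.3, 0.2, 1.0

def J(t):
    return 0.5 + 2.0 * math.sin(2.0 * t)   # oscillatory memory input

dt, x, y, t = 0.001, 0.0, 0.0, 0.0
late_y = []                                 # samples after transients die out
for _ in range(60000):                      # integrate up to t = 60
    dx = -a * x + b * math.tanh(x) + cc * math.tanh(x + y) + I
    dy = -c * y + d * math.tanh(y) + alpha * math.tanh(x) + J(t)
    x, y, t = x + dt * dx, y + dt * dy, t + dt
    if t > 40.0:
        late_y.append(y)

swing = max(late_y) - min(late_y)
print("persistent oscillation amplitude in y:", round(swing, 3))
```

The non-zero swing long after the transient phase illustrates the non-focused, wavering state discussed above; with a constant $J$, the same parameters would satisfy the stability conditions and the swing would vanish.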

With this background, we shall now treat the exogenous inputs $I_i = I_i(t)$ and $J_{i_k} = J_{i_k}(t)$ as functions of the time variable $t$. Accordingly, we consider

(18) $$\begin{aligned} x_i' &= -a_i x_i + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_j)) + \sum_{k=1}^{r_i} c_{ii_k}\, g_{i_k}(x_i, y_{i_k}(t-\vartheta_{i_k})) + I_i(t),\\ y_{i_k}' &= -c_{i_k} y_{i_k} + \sum_{l=1}^{r_i} d_{i_l} h_{i_l}(y_{i_l}(t-\zeta_{i_l})) + \alpha_{i_k}\phi_{i_k}(x_i(t-\delta_i)) + J_{i_k}(t), \end{aligned}$$

where $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots, r_i$, and $1 \le r_i \le n$.

Under conditions (3) on the response functions, with appropriate initial conditions and assuming the inputs $I_i(t)$ and $J_{i_k}(t)$ to be bounded and continuous on $[0, \infty)$, one may easily see that (18) possesses unique solutions in their maximal intervals of existence [18,26].

Our known model is (2), and we have now introduced model (18). With all parameters and functional relations the same, the only difference between the two systems lies in their inputs. A natural question is: will the solutions of (2) and (18) behave similarly if the corresponding inputs stay close enough? The following result establishes this. That is, we restrict the inputs of (18) so that the solutions of model (18) converge to the solutions of model (2), implying that the two systems eventually behave similarly.

Theorem 3.6

Assume that the parametric conditions (6) hold for model (18), and let the inputs satisfy $\int_0^{\infty}\sum_{i=1}^{n}\big(\hat{I}_i(t) + \sum_{k=1}^{r_i}\hat{J}_{i_k}(t)\big)\,dt < \infty$, where $\hat{I}_i(t) = |I_i(t) - I_i|$ and $\hat{J}_{i_k}(t) = |J_{i_k}(t) - J_{i_k}|$. Then, for any solutions $(x, y)$ of model (18) and $(\bar{x}, \bar{y})$ of model (2), we have $\lim_{t\to\infty}(x, y) = (\bar{x}, \bar{y})$.

Proof

Consider

$$
\begin{aligned}
V(t) = \sum_{i=1}^{n}\Bigg[& |x_i-\bar x_i| + \sum_{j=1}^{n}|b_{ij}|\,p_j\int_{t-\tau_j}^{t}|x_j(z)-\bar x_j|\,dz + \sum_{k=1}^{r_i}|c_{i i_k}|\,M_{1 i_k}\int_{t-\vartheta_{i_k}}^{t}|y_{i_k}(z)-\bar y_{i_k}|\,dz \\
&+ \sum_{k=1}^{r_i}\Big(|y_{i_k}-\bar y_{i_k}| + \sum_{l=1}^{r_i}|d_{i_l}|\,q_{i_l}\int_{t-\zeta_{i_l}}^{t}|y_{i_l}(z)-\bar y_{i_l}|\,dz + \alpha_{i_k} N_{i_k}\int_{t-\delta_i}^{t}|x_i(z)-\bar x_i|\,dz\Big)\Bigg].
\end{aligned}
$$

Taking the Dini derivative along the solutions of model (18):

$$
\begin{aligned}
D^+V(t) \le{}& -\sum_{i=1}^{n}\Big(a_i - \sum_{j=1}^{n}|b_{ji}|\,p_i - \sum_{k=1}^{r_i}|c_{i i_k}|\,M_{2 i_k} - \sum_{k=1}^{r_i}\alpha_{i_k} N_{i_k}\Big)|x_i-\bar x_i| \\
& -\sum_{i=1}^{n}\sum_{k=1}^{r_i}\Big(c_{i_k} - \sum_{l=1}^{r_i}|d_{i_l}|\,q_{i_l} - |c_{i i_k}|\,M_{1 i_k}\Big)|y_{i_k}-\bar y_{i_k}| + \sum_{i=1}^{n}\Big(|\hat I_i(t)| + \sum_{k=1}^{r_i}|\hat J_{i_k}(t)|\Big) \\
\le{}& -\bar A\sum_{i=1}^{n}\Big(|x_i-\bar x_i| + \sum_{k=1}^{r_i}|y_{i_k}-\bar y_{i_k}|\Big) + \sum_{i=1}^{n}\Big(|\hat I_i(t)| + \sum_{k=1}^{r_i}|\hat J_{i_k}(t)|\Big),
\end{aligned}
$$

where

$$
\bar A = \min_{i,k}\left\{ a_i - \sum_{j=1}^{n}|b_{ji}|\,p_i - \sum_{k=1}^{r_i}|c_{i i_k}|\,M_{2 i_k} - \sum_{k=1}^{r_i}\alpha_{i_k} N_{i_k},\;\; c_{i_k} - \sum_{l=1}^{r_i}|d_{i_l}|\,q_{i_l} - |c_{i i_k}|\,M_{1 i_k} \right\}.
$$

Integrating from 0 to t, we obtain

$$
V(t) + \bar A \int_0^t \sum_{i=1}^{n}\Big(|x_i(s)-\bar x_i| + \sum_{k=1}^{r_i}|y_{i_k}(s)-\bar y_{i_k}|\Big)\,ds \le V(0) + \int_0^\infty \sum_{i=1}^{n}\Big(|\hat I_i(t)| + \sum_{k=1}^{r_i}|\hat J_{i_k}(t)|\Big)\,dt,
$$

and the right-hand side is finite, since $\int_0^\infty \sum_{i=1}^{n}\big(|\hat I_i(t)| + \sum_{k=1}^{r_i}|\hat J_{i_k}(t)|\big)\,dt < \infty$ by hypothesis.

So, the aforementioned inequality implies that $V(t)$, $x_i$, and $y_{i_k}$ are bounded on $[0, \infty)$ and that $\int_0^\infty \sum_{i=1}^{n}\big(|x_i(s)-\bar x_i| + \sum_{k=1}^{r_i}|y_{i_k}(s)-\bar y_{i_k}|\big)\,ds < \infty$.

Moreover, since $|x_i - \bar x_i|$ and $|y_{i_k} - \bar y_{i_k}|$ are bounded on $[0, \infty)$, their derivatives are bounded there as well, so these functions are uniformly continuous on $[0, \infty)$.

Hence, by Barbalat's lemma, we conclude that $x_i(t) \to \bar x_i$ and $y_{i_k}(t) \to \bar y_{i_k}$ as $t \to \infty$.□

Corollary 3.7

Assume that all the hypotheses of Theorem 3.6 are satisfied. Furthermore, if model (2) possesses the equilibrium pattern ( x*, y* ), then all the solutions of model (18) approach ( x*, y* ).

Proof

The result follows from the observation that the equilibrium solution ( x * , y * ) is also one of the solutions of model (2).□

Remark 3.8

By taking the parameter ϕ_{i_k} = 0 and the delays τ_i = 0, model (2) reduces to the model studied in [19], while model (18) simplifies to model (2.2) of [18]. Additionally, if we also let ζ_{i_k} = 0 and ϑ_{i_k} = 0, then model (2) reduces to model (2.1) examined in [20].

Furthermore, by letting ϕ i k = 0 , the conditions of Theorem 3.1 are transformed into those of Theorem 3.2 of [19] and Theorem 4.3 of [20]. The conditions of Theorem 3.3 simplify to those of Theorem 3.1 in [19]. Finally, Theorem 3.6 is identical to Theorem 2.2 from [18].

Remark 3.9

External inputs to a system always influence its dynamics. They decide the equilibria of the system and their stability, and may even make the system unpredictable. Thus, tolerable limits are to be obtained for variations in inputs under which the system is not distorted. Model (2) is well behaved in the sense of Theorems 3.1–3.3, and it approaches a predictable (equilibrium) state. On the other hand, model (18) does not possess equilibria. How, then, should we study its behaviour, or where does it go? It may be regarded as a simple distortion of model (2), as the only difference between them lies in their inputs. Thus, a reasonable way to study its behaviour is to examine its solutions with respect to those of system (2), one solution of which is its equilibrium state. This means that if the solutions of model (18) also approach the equilibrium ( x_i*, y_{i_k}* ) of model (2), we may understand that the brain can withstand certain external disturbances in input information and is capable of reaching the same conclusion as in the undisturbed state.

From the aforementioned result, it is noted that if the inputs satisfy the conditions of Theorem 3.6, then the behaviour of solutions of the model with time-varying inputs (18) is similar to that of solutions of the model with constant inputs (2).
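To see concretely what the integrability condition of Theorem 3.6 demands of the input deviations, the following sketch (an illustration added here, not part of the original analysis; the function names and the midpoint rule are our own choices) numerically integrates |Î(t)| for two deviations of the kind used in the examples below: e^{−t}, whose integral converges, and sin(t), whose integral grows without bound.

```python
import math

def abs_integral(f, T, dt=0.001):
    """Midpoint-rule approximation of the integral of |f(t)| over [0, T]."""
    n = int(T / dt)
    return sum(abs(f((i + 0.5) * dt)) for i in range(n)) * dt

decaying = lambda t: math.exp(-t)    # integral approaches 1 as T grows: admissible
oscillating = lambda t: math.sin(t)  # integral grows like 2T/pi: not admissible

print(abs_integral(decaying, 50))
print(abs_integral(oscillating, 50))
```

Only inputs whose deviation from the constant value is absolutely integrable, like the decaying one above, fall within the scope of Theorem 3.6.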

In the next section, we provide a variety of situations in the form of numerical examples to illustrate the aforementioned results.

4 Examples and simulations

To understand the behaviour of the system with constant and time-varying inputs, we present two systems: first a simple system with one neuron in each layer, and then a system with two neurons in the first layer supported by two neurons each in the second layer. We allow different transmission functions ( g_{i_k} ) and different input functions (constant first, followed by time-varying ones) and observe the behaviour of the system, as these play the main role in connecting the two layers of neurons and invoking the activities in the two layers, respectively.

Example 4.1

(19)
$$
\begin{aligned}
x'(t) &= -7x + 2 f(x(t-\tau)) - 2 g(x, y(t-\vartheta)) + I(t),\\
y'(t) &= -6y + 3 h(y(t-\zeta)) + 2 \phi(x(t-\delta)) + J(t).
\end{aligned}
$$

In the aforementioned system, we let f(x) = sin(x), h(y) = sin(y), and ϕ(x) = tanh(x). Letting the transmission function g(x, y) be x + y, or x/(x + y), or tanh(x + y), or tanh(xy), we simulate the system for various inputs and observe its behaviour.

Case (i) For the constant inputs I(t) ≡ 3 and J(t) ≡ 2, all the conditions of Theorems 3.1 and 3.3 (choosing η = 1/2) are satisfied, and the solutions converge to the equilibrium solution. The simulations for distinct functions g(x, y) are shown in Figure 2.

Figure 2

(a)–(d) are the solution profiles of system (19) with constant inputs for different functions of g ( x , y ) . The system is able to come to a resting state, recalling past memories under constant information input. Source: Created by the authors. Matlab has been used to plot the graphs.
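The convergence in Case (i) can be reproduced with a minimal forward-Euler sketch of system (19), assuming g(x, y) = tanh(x + y), a common delay of 0.5, a step size of 0.01, and constant initial histories – all illustrative choices of ours, not values specified in the paper.

```python
import math

def simulate(I, J, alpha=2.0, x0=1.0, y0=-1.0, T=30.0, dt=0.01, delay=0.5):
    """Forward-Euler integration of system (19) with g(x, y) = tanh(x + y).

    All delays are set to `delay` and the initial history is held
    constant at (x0, y0) -- illustrative assumptions only.
    """
    n, d = int(T / dt), int(delay / dt)
    x, y = [x0] * (n + 1), [y0] * (n + 1)
    for i in range(n):
        j = max(i - d, 0)        # index of the delayed state
        t = i * dt
        dx = -7 * x[i] + 2 * math.sin(x[j]) - 2 * math.tanh(x[i] + y[j]) + I(t)
        dy = -6 * y[i] + 3 * math.sin(y[j]) + alpha * math.tanh(x[j]) + J(t)
        x[i + 1] = x[i] + dt * dx
        y[i + 1] = y[i] + dt * dy
    return x, y

# Constant inputs I = 3, J = 2: trajectories started from different
# histories settle to the same equilibrium, as Theorems 3.1 and 3.3 predict.
xa, ya = simulate(lambda t: 3.0, lambda t: 2.0, x0=1.0, y0=-1.0)
xb, yb = simulate(lambda t: 3.0, lambda t: 2.0, x0=-2.0, y0=3.0)
```

Both runs flatten out to the same resting state, which is the behaviour depicted in Figure 2.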

We now consider system (2) (or (18)), as the case may be, with two neurons in the first layer, which, in turn, are connected to two neurons each in the second layer.

Example 4.2

(20)
$$
\begin{aligned}
x_1' &= -14 x_1 + 2 f_1(x_1(t-\tau_1)) + 3 f_2(x_2(t-\tau_2)) + 1\, g_{1_1}(x_1, y_{1_1}(t-\vartheta_{1_1})) + 4\, g_{1_2}(x_1, y_{1_2}(t-\vartheta_{1_2})) + I_1(t),\\
x_2' &= -18 x_2 + 2 f_1(x_1(t-\tau_1)) + 2 f_2(x_2(t-\tau_2)) + 3\, g_{2_1}(x_2, y_{2_1}(t-\vartheta_{2_1})) + 3\, g_{2_2}(x_2, y_{2_2}(t-\vartheta_{2_2})) + I_2(t),\\
y_{1_1}' &= -13 y_{1_1} + 3 h_{1_1}(y_{1_1}(t-\zeta_{1_1})) + 2 h_{1_2}(y_{1_2}(t-\zeta_{1_2})) + 2\, \phi_{1_1}(x_1(t-\delta_1)) + J_{1_1}(t),\\
y_{1_2}' &= -12.5 y_{1_2} + 2 h_{1_1}(y_{1_1}(t-\zeta_{1_1})) + 4 h_{1_2}(y_{1_2}(t-\zeta_{1_2})) + 1.5\, \phi_{1_2}(x_1(t-\delta_1)) + J_{1_2}(t),\\
y_{2_1}' &= -14 y_{2_1} + 5 h_{2_1}(y_{2_1}(t-\zeta_{2_1})) + 2 h_{2_2}(y_{2_2}(t-\zeta_{2_2})) + 2.5\, \phi_{2_1}(x_2(t-\delta_2)) + J_{2_1}(t),\\
y_{2_2}' &= -20 y_{2_2} + 3 h_{2_1}(y_{2_1}(t-\zeta_{2_1})) + 9 h_{2_2}(y_{2_2}(t-\zeta_{2_2})) + 3\, \phi_{2_2}(x_2(t-\delta_2)) + J_{2_2}(t).
\end{aligned}
$$

Here, we assume that f_i(x_i) = tanh(x_i), h_{i_k}(y_{i_k}) = tanh(y_{i_k}), and ϕ_{i_k}(x_i) = tanh(x_i). As in the aforementioned example, we take different cases for different sets of input functions and, under each case, analyse the behaviour of the different systems obtained by taking different forms of g_{i_k}(x_i, y_{i_k}), such as x_i + y_{i_k}, or x_i/(x_i + y_{i_k}), or tanh(x_i + y_{i_k}), or tanh(x_i y_{i_k}).

Case (i). Let I_i(t) ≡ 1 and J_{i_k}(t) ≡ 1 be constant inputs; for our choice of functions, the Lipschitz constants are p_i = M_{1 i_k} = M_{2 i_k} = q_{i_k} = 1. The parameters of this system satisfy the conditions of Theorems 3.1 and 3.3 for a proper choice of η_i, i = 1, 2, 3, 4. The equilibrium solution of the system is thus globally asymptotically stable by virtue of these theorems. Figure 3 visualizes the behaviour of solutions for distinct functions g_{i_k}(x_i, y_{i_k}).

Figure 3

(a)–(d) are the solution profiles of system (20) with constant inputs for different functions of g_{i_k}(x_i, y_{i_k}). The system remains in the same state of recalling memories even under different modes of interaction. Source: Created by the authors. Matlab has been used to plot the graphs.

The following cases illustrate the impact of variable inputs under the auspices of Theorem 3.6 for the systems considered in the aforementioned examples.

Case (ii)(a) Consider I(t) = 3 + e^{−t} and J(t) = 2 + 1/(1 + t²) for Example 4.1. Both inputs satisfy the constraints of Theorem 3.6, besides the other parametric conditions being satisfied. So the solutions converge to the equilibrium by virtue of this theorem. The simulations for distinct functions g(x, y) are shown in Figure 4.

Figure 4

(a)–(d) are the solution profiles of system (19) for different interaction functions with time-varying inputs satisfying the constraints of Theorem 3.6. The system is able to recall the same memories under such disturbances, which eventually come close to fixed inputs. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (ii)(b) Consider I_i(t) = 1 + e^{−t} sin(t) and J_{i_k}(t) = 1 + e^{−t} cos(t) for the system in Example 4.2. Both inputs satisfy the constraints of Theorem 3.6, and the solutions converge to the equilibrium, as all other conditions of Theorem 3.6 are satisfied. This may be noted in Figure 5.

Figure 5

(a)–(d) are the solution profiles of the system (20) for various interaction functions with time-varying inputs satisfying the constraints of Theorem 3.6. The multi-neuron system demonstrates the ability to retrieve related memories despite the occurrence of disturbances due to control inputs. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (iii)(a) Consider I(t) = 3 + sin(t) and J(t) = 2 + 1/(1 + t²) for Example 4.1. Clearly, J(t) satisfies, but I(t) does not satisfy, the conditions of Theorem 3.6. Figure 6 depicts the behaviour in this case for distinct functions g(x, y).

Figure 6

(a)–(d) are the solution profiles of system (19) with distinct interactive functions, where input J(t) satisfies and I(t) does not satisfy the conditions of Theorem 3.6. The system exhibits oscillatory behaviour because of the disturbed external inputs to the present activities. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (iii)(b) Now consider I_i(t) = 1 + sin(t) and J_{i_k}(t) = 1 + e^{−t} sin(t) for the system in Example 4.2. We note that I_i(t) does not satisfy the conditions of Theorem 3.6; the behaviour of the solutions of the system is shown in Figure 7.

Figure 7

(a)–(d) are the solution profiles of system (20) for different interaction functions with oscillatory inputs I_i(t) not satisfying the constraints of Theorem 3.6. The system is not able to recall particular memories due to disrupted external inputs from present activities. Source: Created by the authors. Matlab has been used to plot the graphs.

In the aforementioned case, the oscillations that arise in x due to fluctuating inputs induce oscillations in y. Thus, in neither case is the mind able to reach a stable state or draw a conclusion; it, too, oscillates! To control the oscillations in y, we take the interaction parameter α_{i_k} to be a smaller value, or a decreasing function of t, and see how this choice works.

Case (iv)(a) Let α = 0.2, I(t) = 3 + sin(t), and J(t) = 2 + 1/(1 + t²). The simulations of the system in Example 4.1 are shown in Figure 8.

Figure 8

(a)–(d) are the solution profiles of the system (19) of Case (iii)(a) for various interaction functions with interaction parameter α = 0.2 . The system is able to recall old memories by minimizing the impact of disrupted inputs of the current activities. Source: Created by the authors. Matlab has been used to plot the graphs.
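The damping effect of a smaller α can be checked with a hypothetical forward-Euler experiment on system (19), again assuming g(x, y) = tanh(x + y), a common delay of 0.5, and constant initial history (choices of ours, not specified in the paper): we compare the tail oscillation of y under the Case (iii)(a) coupling α = 2 with the weakened coupling α = 0.2 of Case (iv)(a).

```python
import math

def simulate(alpha, T=60.0, dt=0.01, delay=0.5):
    """Forward-Euler run of system (19) with g(x, y) = tanh(x + y),
    I(t) = 3 + sin(t), J(t) = 2 + 1/(1 + t^2); assumed delays/history."""
    n, d = int(T / dt), int(delay / dt)
    x, y = [1.0] * (n + 1), [1.0] * (n + 1)
    for i in range(n):
        j = max(i - d, 0)
        t = i * dt
        dx = (-7 * x[i] + 2 * math.sin(x[j]) - 2 * math.tanh(x[i] + y[j])
              + 3 + math.sin(t))
        dy = (-6 * y[i] + 3 * math.sin(y[j]) + alpha * math.tanh(x[j])
              + 2 + 1 / (1 + t * t))
        x[i + 1] = x[i] + dt * dx
        y[i + 1] = y[i] + dt * dy
    return x, y

def tail_amplitude(z, window=700):
    """Peak-to-peak range over the last ~7 time units (> one period of sin t)."""
    tail = z[-window:]
    return max(tail) - min(tail)

_, y_big = simulate(alpha=2.0)    # Case (iii)(a): full coupling from x to y
_, y_small = simulate(alpha=0.2)  # Case (iv)(a): weakened coupling
```

The peak-to-peak amplitude of y over the final few time units shrinks markedly when α is reduced, mirroring the stabilization seen in Figure 8.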

Case (iv)(b) Let α_{i_k} = 0.1, I_i(t) = 1 + sin(t), and J_{i_k}(t) = 1 + e^{−t} sin(t). The simulations of the system in Example 4.2 are depicted in Figure 9.

Figure 9

(a)–(d) are the solution profiles of the system (20) of Case (iii)(b) for distinct functions of g i k ’s with α i k = 0.1 . The system reliably recalls memories by systematically reducing the influence of present activities. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (v)(a) Assume α = e^{−t}, I(t) = 3 + sin(t), and J(t) = 2 + 1/(1 + t²). The behaviour of the system in Example 4.1 is shown in Figure 10.

Figure 10

(a)–(d) are the solution profiles of system (19) of Case (iii)(a) for various interaction functions with the interaction parameter α = e^{−t}. The system is able to recall memories even under oscillations from current activities, owing to the decaying interaction parameter e^{−t}. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (v)(b) For α_{i_k} = e^{−t}, I_i(t) = 1 + sin(t), and J_{i_k}(t) = 1 + e^{−t} sin(t), the simulations of the system in Example 4.2 are shown in Figure 11.

Figure 11

(a)–(d) are the solution profiles of system (20) of Case (iii)(b) for different functions g_{i_k} with α_{i_k} = e^{−t}. The multi-neuron system, too, retains the ability to recall memories when affected by current oscillatory influences, owing to the decaying interaction parameter e^{−t}. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (vi)(a) Now we consider the case where the inputs to past experiences are oscillatory, while inputs to the present state are time-varying but not fluctuating, i.e., we let I(t) = 3 + e^{−t} and J(t) = 2 + sin(t). In this case, I(t) is within the range of Theorem 3.6, but J(t) is not. The resultant behaviour may be observed in Figure 12.

Figure 12

(a)–(d) are the solution profiles of system (19) for various interaction functions, where input J(t) does not satisfy the constraints of Theorem 3.6. The system oscillates because of disruptions in past memories. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (vi)(b) Consider I_i(t) = 1 + e^{−t} and J_{i_k}(t) = 1 + cos(t) – only the emotional feelings are oscillating now. Clearly, J_{i_k}(t) does not satisfy the constraints of Theorem 3.6. Figure 13 exhibits the behaviour of solutions in this case.

Figure 13

(a)–(d) are the solution profiles of system (20) with different functions g_{i_k}, where the inputs J_{i_k}(t) do not satisfy the constraints of Theorem 3.6. The system exhibits the influence of disturbed past memories for different interactions. Source: Created by the authors. Matlab has been used to plot the graphs.

As y oscillates in the aforementioned case, we try to control its influence on x in the next cases by taking the interaction parameter c_{i i_k} to be as small a value as possible, or a decreasing function of time.

Case (vii)(a) We let c = 1/10, I(t) = 1 + e^{−t}, and J(t) = 1 + sin(t) for the system in Example 4.1 and observe the behaviour. The simulations for distinct functions g(x, y) are shown in Figure 14.

Figure 14

(a)–(d) are the solution profiles of the system (19) in Case (vi)(a) for different interactive functions with c = 1 10 . The system demonstrates that current activities can remain stable despite fluctuating past memories. Source: Created by the authors. Matlab has been used to plot the graphs.
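Symmetrically, the effect of shrinking c can be sketched with the same kind of hypothetical forward-Euler run of system (19) under the Case (vi)(a)/(vii)(a) inputs (with our own assumed g(x, y) = tanh(x + y), delay 0.5, and constant initial history): the oscillation that J(t) = 1 + sin(t) forces into y leaks into x in proportion to the interaction parameter c.

```python
import math

def simulate(c, T=60.0, dt=0.01, delay=0.5):
    """Forward-Euler run of system (19) with g(x, y) = tanh(x + y),
    I(t) = 1 + e^{-t}, J(t) = 1 + sin(t); assumed delays/history."""
    n, d = int(T / dt), int(delay / dt)
    x, y = [0.5] * (n + 1), [0.5] * (n + 1)
    for i in range(n):
        j = max(i - d, 0)
        t = i * dt
        dx = (-7 * x[i] + 2 * math.sin(x[j]) - c * math.tanh(x[i] + y[j])
              + 1 + math.exp(-t))
        dy = (-6 * y[i] + 3 * math.sin(y[j]) + 2 * math.tanh(x[j])
              + 1 + math.sin(t))
        x[i + 1] = x[i] + dt * dx
        y[i + 1] = y[i] + dt * dy
    return x, y

def tail_amplitude(z, window=700):
    """Peak-to-peak range over the last ~7 time units."""
    tail = z[-window:]
    return max(tail) - min(tail)

x_big, _ = simulate(c=2.0)    # Case (vi)(a): full coupling from y to x
x_small, _ = simulate(c=0.1)  # Case (vii)(a): weakened coupling
```

Reducing c from 2 to 1/10 visibly flattens the tail of x, which is the control effect illustrated in Figure 14.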

Case (vii)(b) In this case, we assume c_{1 1_1} = 1/10, c_{1 1_2} = 2/10, c_{2 2_1} = 1/10, c_{2 2_2} = 2/10, I_i(t) = 1 + e^{−t}, and J_{i_k}(t) = 1 + cos(t) in Example 4.2.

Figure 15

(a)–(d) are the solution profiles of system (20) in Case (vi)(b) for various interactive functions with smaller values of the interaction parameters c_{i i_k}. The system can regulate the influence of past experiences on current activities by decreasing the influencing parameter. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (viii)(a) Now let c = e^{-t}, I(t) = 1 + e^{-t}, J(t) = 1 + sin(t) for system (18), and observe the behaviour through simulations for distinct functions g(x, y), as given in Figure 16.
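The effect of a decaying interaction parameter can be checked on a minimal two-variable analogue of this case. The sketch below is illustrative only: x stands for a present-activity neuron and y for an emotional-memory cell, and the coefficients and coupling form are hypothetical choices; only the input shapes I(t) = 1 + e^{-t}, J(t) = 1 + sin(t) and the interaction c(t) = e^{-t} follow the case above.

```python
import math

def simulate(c, T=60.0, dt=0.001):
    """Euler integration of an illustrative present/past pair:
       x'(t) = -x + tanh(x) - c(t)*(x + y) + I(t)
       y'(t) = -y + tanh(y) + 0.5*tanh(x) + J(t)
    (a toy analogue of the two-layer model, not the paper's system)."""
    x, y = 0.0, 0.0
    xs = []
    for i in range(int(T / dt)):
        t = i * dt
        I = 1.0 + math.exp(-t)   # damped present input
        J = 1.0 + math.sin(t)    # oscillating past input
        dx = -x + math.tanh(x) - c(t) * (x + y) + I
        dy = -y + math.tanh(y) + 0.5 * math.tanh(x) + J
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
    return xs

def tail_amp(xs, dt=0.001, window=10.0):
    """Peak-to-peak swing of x over the final time window."""
    tail = xs[-int(window / dt):]
    return max(tail) - min(tail)

x_const = simulate(lambda t: 0.5)            # constant coupling
x_decay = simulate(lambda t: math.exp(-t))   # decaying coupling c(t) = e^{-t}
print(tail_amp(x_const) > tail_amp(x_decay))  # True: decaying c damps x
```

With constant coupling c = 0.5, the oscillation of y leaks into x; with c(t) = e^{-t} the oscillation is filtered out over time, consistent with the behaviour reported for Figure 16.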

Figure 16

(a)–(d) are the solution profiles of the system (19) in Case (viii)(a) for various interactive functions with c = e^{-t}. The influence of past memories on present activities is controlled by taking the interaction parameter as e^{-t}. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (viii)(b) In the case of system (19), we let c_{1 1_1} = e^{-t}, c_{1 1_2} = e^{-t}, c_{2 2_1} = e^{-t}, c_{2 2_2} = e^{-t}, I_i(t) = 1 + e^{-t}, and J_{i_k}(t) = 1 + cos(t); the influence of such parameters c_{i i_k} may be observed in Figure 17.

Figure 17

(a)–(d) are the solution profiles of the system (20) in Case (viii)(b) for various functions g_{i_k} with c_{i_k} = e^{-t}. The effect of past memories on present activities is managed by giving them less importance via the interaction parameters. Source: Created by the authors. Matlab has been used to plot the graphs.

The following are the cases where both present and past experiences are triggered by oscillating inputs.

Case (ix)(a) We choose I(t) = 3 + sin(t), J(t) = 2 + sin(t) in (18). Both inputs are beyond the restrictions of Theorem 3.6. The behaviour of the system in this case is shown in Figure 18.

Figure 18 
               (a)–(d) are the solution profiles of the system (19) for various interaction functions with oscillatory inputs. The system exhibits oscillations, showing that the brain’s activities may not be stable when the inputs from past and present emotions fluctuate. Source: Created by the authors. Matlab has been used to plot the graphs.

Case (ix)(b) For (19), we let I_i(t) = 1 + sin(t) and J_{i_k}(t) = 1 + cos(t). Neither I_i(t) nor J_{i_k}(t) satisfies the constraints of Theorem 3.6. Observe Figure 19.

Figure 19 
               (a)–(d) are the solution profiles of the system (20) for distinct interactive functions with oscillatory inputs. The fluctuating inputs from both past and present memories influence the activities of the brain. Source: Created by the authors. Matlab has been used to plot the graphs.

Observations:

From the aforementioned two examples, we note the following:

  • If J_{i_k} satisfies the constraints of Theorem 3.6 but I_i(t) does not, then the y_{i_k}'s may oscillate due to the influence of the x_i's. This can be seen in Cases (iii)(a) and (b).

  • The oscillations of y_{i_k} that arise in Case (iii) can be controlled by restricting the transmission parameter α_{i_k} to be very small or by choosing it to be a decreasing function of time t. Cases (iv) and (v) illustrate this.

  • If I_i(t) satisfies the constraints of Theorem 3.6 while J_{i_k} is beyond its purview, then as the y_{i_k}'s oscillate, so do the x_i's. This is mainly because of the interacting term g_{i_k}, which carries the influence of the neurons of the second layer to the first. This is the content of Cases (vi)(a) and (vi)(b).

  • In view of the aforementioned observation, the influence of the y_{i_k}'s on the x_i's is reduced by choosing small values for the interaction parameter c_{i i_k}, as in Cases (vii)(a) and (b). We may also restrict it by choosing c_{i i_k} to be a decreasing function of time in place of a fixed constant, which tends to reduce the impact of emotional feelings over a period of time. Cases (viii)(a) and (b) support this thinking.

  • If both inputs are not within the constraints of Theorem 3.6, then both the x_i's and the y_{i_k}'s may not converge to equilibria, and their behaviour is influenced by their respective inputs, as seen in Cases (ix) of the aforementioned examples. The solutions tend to oscillate since the input functions 1 + sin(t) and 1 + cos(t) are oscillatory.
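The contrast between damped and persistently oscillating inputs in these observations can be reproduced even with a single neuron. The toy model below is a hypothetical one-neuron analogue of a present-layer cell (the decay rate a = 2 and the unit gain on tanh are arbitrary choices, not parameters from the examples):

```python
import math

def integrate(input_fn, a=2.0, T=60.0, dt=0.001):
    """Euler integration of x'(t) = -a*x + tanh(x) + I(t),
    a hypothetical one-neuron analogue of a present-layer cell."""
    x, xs = 0.0, []
    for i in range(int(T / dt)):
        t = i * dt
        x += dt * (-a * x + math.tanh(x) + input_fn(t))
        xs.append(x)
    return xs

def tail_swing(xs, dt=0.001, window=20.0):
    """Peak-to-peak swing of the trajectory over the final window."""
    tail = xs[-int(window / dt):]
    return max(tail) - min(tail)

osc = tail_swing(integrate(lambda t: 1 + math.sin(t)))    # input outside the damped class
damp = tail_swing(integrate(lambda t: 1 + math.exp(-t)))  # damped input, settles to I = 1
print(osc > 0.1 > damp)  # True: only the oscillatory input sustains oscillations
```

The input 1 + sin(t) sustains an oscillation indefinitely, while 1 + e^{-t} decays to a constant and the state converges, matching the dichotomy summarized above.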

We consolidate the above in the following table:

Case External inputs Observations Simulations
(i) I_i and J_{i_k} are constant inputs Both x_i's and y_{i_k}'s approach equilibria Figures 2 and 3
(ii) I_i and J_{i_k} satisfy the constraints of Theorem 3.6 Both x_i's and y_{i_k}'s approach a fixed state Figures 4 and 5
(iii) J_{i_k} satisfies the constraints of Theorem 3.6 but I_i does not x_i's oscillate and, owing to the interaction function φ_{i_k}, y_{i_k}'s also oscillate Figures 6 and 7
(iv) In view of Case (iii), the rate of interaction α_{i_k} is chosen very small Oscillations in y_{i_k}'s are controlled Figures 8 and 9
(v) In view of Case (iii), the rate of interaction α_{i_k} is chosen to be a decreasing function of t No more oscillations in y_{i_k}'s; they are stable Figures 10 and 11
(vi) I_i satisfies the constraints of Theorem 3.6 but J_{i_k} does not y_{i_k}'s oscillate and, because of the transmission function g_{i_k}, even the x_i's oscillate Figures 12 and 13
(vii) In view of Case (vi), the transmission rate c_{i_k} is chosen very small Oscillations in y_{i_k}'s do not influence the x_i's much Figures 14 and 15
(viii) In view of Case (vi), the transmission rate c_{i_k} is chosen to be a decreasing function of time t y_{i_k}'s oscillate but the x_i's are stable Figures 16 and 17
(ix) Both I_i and J_{i_k} fail the constraints of Theorem 3.6 Both x_i's and y_{i_k}'s oscillate Figures 18 and 19

Remark 4.3

From the aforementioned observations, the present activities of the brain get disturbed in two ways: (i) when the time-varying external inputs influence them and (ii) when the fluctuations invoked by J_{i_k}(t) in y_{i_k} influence them. While the disturbance by I_i(t) appears to be pertinent, one may note that the fluctuations caused by disturbances in the y_{i_k}'s are controllable through a proper choice of the interaction parameters c_{i_k}. In other words, the present activities of the brain depend on emotions related to past experiences only if priority is given to them (high values of c_{i_k}); they are less influential or become insignificant for low values of c_{i_k}. Cases (vii) and (viii) imply this. This is a reasonable conclusion, as it is our common experience that a calm, focused, or meditative mind is less disturbed than the usual receptive one.

Similarly, it may also be observed that the fluctuations generated by the time-varying external input I_i(t) in x_i are able to disturb otherwise non-oscillating memories related to past experiences. These fluctuations can be controlled by choosing the transmission parameter α_{i_k} appropriately. This is realistic, as in some situations there is a need to act based on past experience by minimizing the impact of the current environment. This is what Cases (iv) and (v) imply. Both these cases advise us to lessen or ignore the influence of one component on the other when either of them is not in a state to decide or focus, which is a natural suggestion.

So far, we have studied the impact of external time-varying inputs and noted that they are capable of disturbing the activities of the system. It is well known in the literature that time delays have the capacity to destabilize a system, often through oscillations during transition. But Theorems 3.1–3.3 establish global stability of model (2), and the sufficient conditions are also independent of time delays. This implies that the system is little influenced by time delays in the parametric spaces defined by them. We shall explore through numerical examples whether delays have any impact when the conditions of these theorems fail to hold. Since processing delays are very small and negligible in such systems, we consider only transmission delays here.

Let us examine two numerical systems, each of which contains two neurons in the first layer. In one system, these neurons are supported by one neuron each in the second layer, while in the other system, they are supported by two neurons each in the second layer.

Example 4.4

(21)
x_1'(t) = -5x_1 + 3f_1(x_1) + 2f_2(x_2) - 4g_{1_1}(x_1, y_{1_1}(t - ϑ_{1_1})) + 4,
x_2'(t) = -2x_2 + 2f_1(x_1) + 3f_2(x_2) + g_{2_1}(x_2, y_{2_1}(t - ϑ_{2_1})) + 1,
y_{1_1}'(t) = -3y_{1_1} + 2h_{1_1}(y_{1_1}) + 2.6φ_{1_1}(x_1(t - δ_1)) + 3,
y_{2_1}'(t) = -4y_{2_1} + 3h_{2_1}(y_{2_1}) + 2.4φ_{2_1}(x_2(t - δ_2)) + 8,

where f_i(x_i) = tanh(x_i), h_{i_k}(y_{i_k}) = tanh(y_{i_k}), g_{i_k}(x_i, y_{i_k}) = (x_i + y_{i_k}), φ_{i_k}(x) = x/(x + 1), ϑ_{i_k} = 50, δ_i = 100 for i = 1, 2 and k = 1.
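One way to simulate such delay systems is fixed-step Euler integration with a history buffer, reading the delayed values from the stored trajectory. The sketch below integrates a reduced analogue of system (21) keeping only the pair (x_1, y_{1_1}) with the coefficients and delays ϑ = 50, δ = 100 above; as assumptions, the constant zero initial history is our own choice (the initial functions are not stated here), and φ_{1_1} is replaced by tanh, since the form x/(x + 1) is singular at x = -1 and the chosen history cannot rule that out a priori.

```python
import math

def simulate(T=2000.0, dt=0.01, theta=50.0, delta=100.0):
    """Euler integration of a reduced, illustrative analogue of (21):
       x'(t) = -5x + 3 tanh(x) - 4 (x + y(t - theta)) + 4
       y'(t) = -3y + 2 tanh(y) + 2.6 tanh(x(t - delta)) + 3
    with constant zero history for t <= 0 (an assumption)."""
    n_theta, n_delta, n = int(theta / dt), int(delta / dt), int(T / dt)
    xs, ys = [0.0] * (n + 1), [0.0] * (n + 1)
    for i in range(n):
        # delayed values; before the delay horizon, use the zero history
        y_d = ys[i - n_theta] if i >= n_theta else 0.0  # y(t - theta)
        x_d = xs[i - n_delta] if i >= n_delta else 0.0  # x(t - delta)
        dx = -5 * xs[i] + 3 * math.tanh(xs[i]) - 4 * (xs[i] + y_d) + 4
        dy = -3 * ys[i] + 2 * math.tanh(ys[i]) + 2.6 * math.tanh(x_d) + 3
        xs[i + 1] = xs[i] + dt * dx
        ys[i + 1] = ys[i] + dt * dy
    return xs, ys

xs, ys = simulate()
tail = xs[-20000:]              # last 200 time units
swing = max(tail) - min(tail)   # small: the trajectory settles
```

Despite delays far larger than the relaxation times of the individual neurons, the trajectory settles to a steady value, in line with the delay-robust stability observed in Figure 20.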

Example 4.5

(22)
x_1'(t) = -3x_1 + 5f_1(x_1) + 2f_2(x_2) - 5g_{1_1}(x_1, y_{1_1}(t - ϑ_{1_1})) + 2g_{1_2}(x_1, y_{1_2}(t - ϑ_{1_2})) + 4,
x_2'(t) = -2x_2 + 2f_1(x_1) + 5f_2(x_2) + g_{2_1}(x_2, y_{2_1}(t - ϑ_{2_1})) + g_{2_2}(x_2, y_{2_2}(t - ϑ_{2_2})) + 2,
y_{1_1}'(t) = -13y_{1_1} + 20h_{1_1}(y_{1_1}) + 3h_{1_2}(y_{1_2}) + 2φ_{1_1}(x_1(t - δ_1)) + 3,
y_{1_2}'(t) = -14y_{1_2} + 13h_{1_1}(y_{1_1}) + 3h_{1_2}(y_{1_2}) + 5φ_{1_2}(x_1(t - δ_1)) + 8,
y_{2_1}'(t) = -14y_{2_1} + 6h_{2_1}(y_{2_1}) + 8h_{2_2}(y_{2_2}) + 6φ_{2_1}(x_2(t - δ_2)) + 6,
y_{2_2}'(t) = -9y_{2_2} + 3h_{2_1}(y_{2_1}) + 9h_{2_2}(y_{2_2}) + 4φ_{2_2}(x_2(t - δ_2)) + 10,

where f_i(x_i) = sin(x_i), h_{i_k}(y_{i_k}) = tanh(y_{i_k}), g_{i_k}(x_i, y_{i_k}) = x_i/(x_i + y_{i_k}), φ_{i_k}(x_i) = tanh(x_i), ϑ_{i_k} = 200, δ_i = 100 for i = 1, 2 and k = 1, 2.

Neither system (21) nor system (22) satisfies the parametric conditions of Theorems 3.1–3.3. Yet the simulations of systems (21) and (22), shown in Figure 20(a) and (b), respectively, indicate that the model is stable in the long run.

Figure 20
(a) Solutions of system (21) showing stability and (b) solutions of system (22) showing stability. Source: Created by the authors. Matlab has been used to plot the graphs.

Remark 4.6

It is evident from the aforementioned illustrations that model (2) does not lose its stability even for large values of time delays, which means that time delays have little impact on it. Furthermore, the parametric conditions of our results are only sufficient, not necessary. Hence, it may be inferred that model (2) exhibits strong stability characteristics and has a larger stability region than estimated by Theorems 3.1 and 3.3.

5 Discussion

In this article, an attempt is made to understand the impact of emotional feelings that arise from stored memories of past experiences on the related activities of the human brain. A mathematical model existing in the literature is considered and modified to explain this phenomenon. When the inputs to the system are constant, sufficient conditions on system parameters and functions are obtained under which the system is stable and approaches an equilibrium solution asymptotically. Thus, activities of the brain are carried out without hindrance here. Unless thought processes conclude, the brain cannot give commands or take decisions to carry out any activity. Such conclusions are studied as the stability of steady states, which are fixed solutions of the system.

Introducing time-varying inputs to the system, (A) it is established that, as long as the inputs remain within a given range of their constant counterparts, the solutions of the system remain close to the solutions of the corresponding system with constant inputs and approach the same equilibrium solution of that system (Theorem 3.6). The system remains stable under damped oscillations as well. (B) When the inputs are beyond the range specified by Theorem 3.6, oscillations are noted, indicating that emotions do have an impact on the activities of the brain, and vice versa. Such situations may be handled by reducing the impact of the interaction parameters (c_{i i_k} or α_{i_k}, as the case may be, by assuming small values or decreasing functions of time). The oscillations/fluctuations are found to be either reduced or made to approach an equilibrium solution, as in item (A) above. These two cases advise us that, either when past experiences are not in a position to conclude or when present activities are oscillatory and unable to focus, the brain should give less priority to the confused part and go ahead with the other activity. To elaborate, disturbances in the present activity arise through external factors that influence it, whilst the disturbances emanating from past experiences due to external factors are controllable by restricting c_{i i_k}. Similarly, controlling α_{i_k} addresses a situation where it is necessary to make decisions based on the past by limiting the impact of present experiences or circumstances. Remark 4.3 elaborates on this. The conclusions are drawn through numerical examples. Though theoretical results are not provided to establish these observations, we believe that our study helps, to an extent, to visualize such phenomena, and results may be established in this direction. We wish to explore this in our future endeavours.

In order to explore the possibility of a Hopf bifurcation, we have tried large values of transmission delays, but our illustrations show that transmission delays have little impact on the system. We have not considered processing delays here, as they are usually very small in systems such as (2), and choosing large values for them would be unrealistic. Though the parametric conditions of Theorems 3.1 and 3.3 are violated, the system appears to be stable. This stability looks strong and suggests that the stability regions estimated here are only a small part of the large stability region of model (2). Thus, further exploration of the stability regions of model (2) is welcome.

We feel that the present content may be applicable to improving the performance or output of search engines on the internet, where the input search words given (I(t)) are not the exact ones but are close enough to what is to be searched for (I). Also, our work ensures that when one focuses on present activities with controllable deviations in external motivating factors, the influence of past memories has little impact on the present outcomes. Can this be applied to push people from trauma to a positive present? This is a question that should be addressed carefully. Most of our present behaviour with people (or a situation) depends on our previous experiences with them. We hope the present study will enable us to work in cohesion in the present without being greatly influenced by past deeds. Identifying such interaction functions or key parameters that reduce the influence of past experiences would be a great step towards building successful careers and a progressive society. These propositions may sound hypothetical, based as they are on the description of simple models such as (2), but this little success will motivate us to move further.

Any mathematical model becomes realistic if it withstands some test data and proves to be useful. At present, the authors are content with establishing theoretical results and explaining some phenomena of the brain. At the same time, testing of a theoretical model with real-time data may lead to the modification of the model and reinterpretation of its parameters or functional relations. For the present context, testing of our models with test data is deferred to a future exposition.

Acknowledgments

The authors are thankful to anonymous reviewers for their stimulating comments that led to a better presentation of the material.

  1. Funding information: This research work was carried out without a specific grant from any funding agency.

  2. Author contributions: P. Raja Sekhara Rao: Methodology, interpretation of results and manuscript writing. K. Venkata Ratnam: Conceptualization, simulations of numerical examples, manuscript editing, and formatting. G. Shirisha: Literature review, derivations of results, simulations of numerical examples and manuscript writing. All authors reviewed and approved the final version of the manuscript.

  3. Conflict of interest: All authors declare that they have no conflict of interest.

  4. Ethical approval: This research did not involve any human participants or animals. Ethical approval was therefore not required.

  5. Data availability statement: Data sharing is not applicable to this article as no new datasets were generated or analysed during the current study.

References

[1] Albarracin, D. (2021). Chapter 5 - The Impact of Past Experience and Past Behavior on Attitudes and Behavior (pp. 129–157). Cambridge University Press, Cambridge. DOI: 10.1017/9781108878357.006.

[2] Ambrosio, B. (2020). Beyond the brain: towards a mathematical modeling of emotions. Journal of Physics: Conference Series, 2090, 012119. DOI: 10.1088/1742-6596/2090/1/012119.

[3] Barrett, L. F. (2017). How emotions are made: The secret life of the brain. Boston: Houghton Mifflin Harcourt.

[4] Begazo, R., Aguilera, A., Dongo, I., & Cardinale, Y. (2024). A combined CNN architecture for speech emotion recognition. Sensors, 24, 5797. DOI: 10.3390/s24175797.

[5] Shirisha, G., & Venkata Ratnam, K. (2022). Dynamical behavior of cooperative supportive system involving intra-network delays in information propagation. Journal of Applied Nonlinear Dynamics, 11, 719–739. DOI: 10.5890/JAND.2022.09.012.

[6] Gupta, N., Ahirwal, M., & Atulkar, M. (2022). Simulation and modeling of human decision-making process through reinforcement learning based computational model involving past experiences. Decision Science Letters, 11, 366–378. DOI: 10.5267/j.dsl.2022.9.001.

[7] Hartmann, K., Siegert, I., Glüge, S., Wendemuth, A., Kotzyba, M., & Deml, B. (2012). Describing human emotions through mathematical modelling. IFAC Proceedings Volumes, 45, 463–468. DOI: 10.3182/20120215-3-AT-3016.00081.

[8] Iinuma, K., & Kogiso, K. (2021). Emotion-involved human decision-making model. Mathematical and Computer Modelling of Dynamical Systems, 27, 543–561. DOI: 10.1080/13873954.2021.1986846.

[9] Islam, M., Ahmad, M., Yusuf, M. S. U., & Ahmed, T. (2015). Mathematical modeling of human emotions using sub-band coefficients of wavelet analysis. In 2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT) (pp. 1–6). DOI: 10.1109/ICEEICT.2015.7307398.

[10] Lee, C., Yoo, S. K., Park, Y., Kim, N., Jeong, K., & Lee, B. (2005). Using neural network to recognize human emotions from heart rate variability and skin resistance. In 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference (pp. 5523–5525). DOI: 10.1109/IEMBS.2005.1615734.

[11] Levine, D. S. (2007). Neural network modeling of emotion. Physics of Life Reviews, 4, 37–63. DOI: 10.1016/j.plrev.2006.10.001.

[12] Manalu, H. V., & Rifai, A. P. (2024). Detection of human emotions through facial expressions using hybrid convolutional neural network-recurrent neural network algorithm. Intelligent Systems with Applications, 21, 200339. DOI: 10.1016/j.iswa.2024.200339.

[13] Merlin, C. D., Ravi, V. R., Parthiban, M., Yashwanth Raj, A., Mohanasundaram, M., & Sathish, A. (2024). Human behavioural identification in different aspects using neural network. In 2024 7th International Conference on Circuit Power and Computing Technologies (ICCPCT), 1, 570–575. DOI: 10.1109/ICCPCT61902.2024.10672662.

[14] Minaee, S., & Abdolrashidi, A. (2021). Deep-emotion: Facial expression recognition using attentional convolutional network. Sensors (Basel, Switzerland), 21, 3046. DOI: 10.3390/s21093046.

[15] Prisnyakov, V., & Prisnyakova, L. (1994). Mathematical modeling of emotions. Cybernetics and Systems Analysis, 30, 142–149. DOI: 10.1007/BF02366374.

[16] Rahul Mahadeo, S., Sharma, R., & Siddeeq, S. (2019). Emotion recognition using feed forward neural network and naive bayes. International Journal of Innovative Technology and Exploring Engineering, 9, 2487–2491. DOI: 10.35940/ijitee.B7070.129219.

[17] Raja Sekhara Rao, P., Venkata Ratnam, K., Ponnada, L., & Satpathi, D. (2017). Global dynamics of a cooperative and supportive network system with subnetwork deactivation. Nonlinear Dynamics and Systems Theory, 17, 205–216. https://www.e-ndst.kiev.ua/v17n2/8(59).

[18] Rao, P. R. S., Ratnam, K. V., & Lalitha, P. (2014). Estimation of inputs for a desired output of a cooperative and supportive neural network. International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), 99–105. https://api.semanticscholar.org/CorpusID:15908105.

[19] Rao, P. R. S., Ratnam, K. V., & Lalitha, P. (2015). Delay independent stability of co-operative and supportive neural networks. Nonlinear Dynamics and Systems Theory, 15, 184–197. https://www.e-ndst.kiev.ua/v15n2/7(51).

[20] Rao, V. S. H., & Rao, P. R. S. (2007). Cooperative and supportive neural networks. Physics Letters A, 371, 101–110. DOI: 10.1016/j.physleta.2007.06.049.

[21] Santhoshkumar, R., & Geetha, M. K. (2019). Deep learning approach for emotion recognition from human body movements with feedforward deep convolution neural networks. Procedia Computer Science, 152, 158–165. DOI: 10.1016/j.procs.2019.05.038.

[22] Sharma, J., & Dugar, Y. (2018). Detection and recognition of human emotion using neural network. International Journal of Applied Engineering Research, 13, 6472–6477. https://api.semanticscholar.org/CorpusID:201821476.

[23] Thenius, R., Zahadat, P., & Schmickl, T. (2013). EMANN - a model of emotions in an artificial neural network. In Proceedings of ECAL 2013: The Twelfth European Conference on Artificial Life (pp. 830–837). Sicily, Italy. DOI: 10.7551/978-0-262-31709-2-ch122.

[24] Trampe, D., Quoidbach, J., & Taquet, M. (2015). Emotions in everyday life. PLoS ONE, 10, e0145450. DOI: 10.1371/journal.pone.0145450.

[25] Unluturk, M. S., Oguz, K., & Atay, C. (2009). Emotion recognition using neural networks. In Proceedings of the 10th WSEAS International Conference on Neural Networks (pp. 82–85). Prague, Czech Republic: WSEAS.

[26] Vadrevu, S. H. R., & Rao, P. (2016). Time varying stimulations in simple neural networks and convergence to desired outputs. Differential Equations and Dynamical Systems, 26, 81–104. DOI: 10.1007/s12591-016-0312-z.

[27] Zimmerman, P. "How emotions are made," Noldus. Accessed: May 11, 2023. [Online]. Available: https://noldus.com/blog/how-emotions-are-made.

[28] Zimmerman, P. "How to measure emotions," Noldus. Accessed: June 23, 2023. [Online]. Available: https://noldus.com/blog/how-to-measure-emotions.

Received: 2024-03-24
Revised: 2025-02-12
Accepted: 2025-03-13
Published Online: 2025-04-25

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
