Abstract
Tire pressure monitoring systems (TPMS) are essential for vehicle safety and performance, as they help detect low tire pressure that impacts fuel efficiency, ride comfort, and overall safety. This study introduces a novel stacking ensemble model to improve the monitoring of nitrogen-filled pneumatic tires. Vibration signals, captured under four conditions (idle, high pressure, normal pressure, and puncture) using low-cost MEMS accelerometers, are processed to derive autoregressive moving average (ARMA), histogram, and statistical features. The J48 decision tree is employed for feature selection, enhancing classifier accuracy. Experiments with various machine learning classifiers show that the stacking ensemble approach significantly improves classification performance for the ARMA (93.75%) and histogram (85.42%) features, achieving higher accuracy than the individual classifiers. These findings demonstrate that stacking ensembles can enhance TPMS capabilities, offering a cost-effective and accurate solution for real-time tire pressure monitoring. This advancement contributes to automotive safety and maintenance by enabling more reliable and precise TPMS.
1 Introduction
The tire is a flexible rubber component encircling the rim of a wheel that plays a crucial role in maintaining the comfort and safety of the occupants of an automobile. The tire influences a number of essential functions such as traction, road shock absorption, vehicle handling characteristics, proper road contact, and safety. The traction, stability, and grip provided by the tires allow vehicles to be operated at a range of speeds on various types of road surfaces and in a variety of weather conditions. The weight of the car and its occupants is supported entirely by the tires, which also absorb shock and vibration for a smoother and more comfortable ride. Tires are manufactured in several variants to adapt to different weather conditions, terrains, and performance requirements. Additionally, tires are essential for braking, acceleration, cornering, and handling, all of which have an impact on the vehicle's overall performance and safety [1]. To ensure the proper functioning of tires, maintaining the specified tire pressure and tread depth is essential for preserving vehicle stability and improving maneuverability and braking performance [2]. Furthermore, underinflation or overinflation of tires can significantly impact tire longevity and vehicle performance. Climatic changes, thermal stresses, improper driving, varying road conditions, and continual operation of vehicles can lead to the intrusion of faults in tires. Such fault scenarios create a safety concern and degrade the comfort of the occupants. In recent times, vehicle manufacturers have designed and equipped vehicles with TPMS that monitor tire pressure through a wireless medium [3]. With the introduction of this technology, tire life, vehicle dynamics, fuel efficiency, and overall vehicle performance have improved [4]. TPMS is considered an utmost safety device; however, the gas used to inflate the tires is also essential to improving vehicle characteristics. Of late, nitrogen gas is generally preferred over air for inflating tires due to its superior performance traits [5]. Tires filled with nitrogen contain pure nitrogen, whereas air-filled tires contain a mix of oxygen, nitrogen, and other trace elements. Improved tire pressure retention, lower gas leakage (owing to the larger nitrogen molecules), lower thermal expansion, and minimal sensitivity to temperature variations are key takeaways of using nitrogen gas in tires [6]. Although nitrogen inflation is more expensive, it is widely adopted for various applications; nevertheless, both nitrogen- and air-filled tires require constant monitoring to ensure elevated levels of safety [7].
Several research works focusing on TPMS have been carried out in recent years and are discussed as follows. TPMS can be categorized and implemented in two ways: indirect and direct TPMS. A recent study combined the advantages of both indirect and direct TPMS by formulating a hybrid TPMS that measures the pressure of one wheel to indirectly estimate the remaining tire pressures; the measurement scheme was termed a three-index system [8]. Sachan and Iqbal developed a non-electronic stability program for vehicles to estimate tire pressure [9]. ResNet-50, a pre-trained model, was adopted by Muturatnam et al. to monitor the condition of nitrogen-filled tires [10]. In another study, a vibration-based assessment was performed by Robinson to study the condition of tires using vibration generated from hammer impacts [11]. Tire stiffness and tire pressure were monitored by an adaptive Kalman filter through the measurement of unknown road roughness using a weight-based approach [12]. Fechtner et al. adopted piezo-resistive pressure sensors to develop an intelligent system for monitoring tire pressure through the mass of the vehicle [13]. Air leakage in commercial vehicle tires was determined by Huo et al. with wheel speed sensors using radius and frequency methods [14]. A hybrid nanogenerator with a rotary cylinder was designed by He et al. to develop a self-powered TPMS [15]. Jin et al. performed a frequency analysis on tire vibrations that incorporates road roughness compensation, vibration disturbance elimination, and resonant frequency estimation for reliable and accurate monitoring of tire pressure [16].
The evolution of artificial intelligence (AI) has driven significant growth in the adoption of machine learning across numerous applications. Some of the most prominent applications include robotics, fault diagnosis, speech recognition, object detection, surveillance, text classification, and medical imaging. Accurate results, swift operation, and minimal computation time have been the significant advantages of using machine learning in fault diagnosis studies. Machine learning algorithms display robust capabilities that can be applied to both regression and classification tasks. Table 1 gives an overview of applying machine learning algorithms in mechanical fault-finding systems and shows how different algorithms enhance the efficiency and accuracy of monitoring and diagnosing mechanical conditions.
Table 1: Application of machine learning algorithms on different mechanical fault-finding systems
| Ref. | Machine learning algorithm | Mechanical system | Limitations |
|---|---|---|---|
| Patange and Jegadeeshwaran [17] | Tree-based algorithms | Multi-point cutting tool | Limited generalization capability |
| Bode et al. [18] | Fault detection algorithm (FDA) | Heat pump | Limited fault types |
| Sunal et al. [19] | Support vector machines (SVM) and multi-layer perceptron (MLP) | Centrifugal pumps | Limited dataset availability |
| Molina et al. [20] | Artificial neural network (ANN) | Methane hydrogen-fueled engines | Quantitative validation |
| Toma et al. [21] | K-nearest neighbor (KNN) and random forest | Motor bearing | Limited fault types |
| Balachandar et al. [22] | Light gradient-boosted machine classifiers | Tool condition monitoring | Limited dataset availability |
| Lee et al. [23] | SVM | Cutting tool | Limited dataset availability |
The literature survey mentioned above provides some valuable research gaps that are described as follows:
Much of the literature is oriented toward the analysis of direct TPMS, while very few works have been performed using indirect TPMS. Additionally, direct TPMS is reported to be a less cost-effective solution.
Studies on nitrogen-filled tires are scarce and consider only a few tire conditions.
Vibration-based assessments in TPMS have limitations, especially concerning the underexplored ARMA, statistical, and histogram features.
The classification of diverse tire conditions using machine learning represents a field that is in its initial stages of development.
Single (or individual) classifier performance was assessed widely. However, a stacking-based ensemble approach was not attempted.
Sensor errors in TPMS pose a significant problem, leading to false alarms or failure to identify the actual tire pressure. These errors are brought on by variables like temperature fluctuations and electromagnetic interference. Additionally, the TPMS sensor battery life may be constrained, necessitating routine maintenance and replacement, which may be inconvenient for car owners. Non-standard tire sizes or unusual tire shapes may also be difficult for TPMS to handle, impairing their capacity to precisely monitor tire pressure. Some TPMS signals may be open to hacking or malicious attacks, which could jeopardize vehicle security. Further, the complexity of TPMS may result in higher repair costs and present numerous technical difficulties for both car owners and mechanics. Addressing these issues is crucial to ensuring the reliable and effective operation of TPMS for enhanced vehicle safety and performance [24]. Hence, there is a noticeable research gap in the development of a thorough TPMS adept at accurately determining tire pressure using indirect TPMS and machine learning techniques.
Considering the aforementioned concerns, the present study introduces a stacking ensemble classification model that combines the predictive abilities of various classifiers and can considerably improve the performance of monitoring tire conditions [25]. This method, which has been shown to be effective in a number of machine learning applications, can increase the accuracy and robustness of identifying tire pressure anomalies. By fusing the strengths of several base classifiers, the ensemble can capture complex correlations within tire pressure data, thereby improving the system's capability to precisely identify abnormal conditions. Owing to this benefit, stacking is especially helpful for tasks where individual classifiers may struggle, such as when working with noisy data or severely imbalanced datasets. Additionally, stacking ensembles offer an adaptable framework for model combination that enables the integration of different types of base classifiers. This diversity frequently results in a more thorough coverage of the feature space, which may improve generalization and predictive ability. By updating or replacing base models, stacking models may adapt to dynamic and changing situations, improving performance over time. A stacking ensemble model gives the TPMS the ability to make decisions in real time while continuously learning, which lowers safety risks and improves overall vehicle performance. The present study makes several technical contributions, which are listed as follows.
Vibration signals from nitrogen-filled tires were monitored using a cost-effective MEMS accelerometer sensor, focusing on four conditions: high pressure, normal pressure, idle, and puncture. Three types of features – statistical, histogram, and autoregressive moving average (ARMA) – were extracted, and the most significant features were selected using a J48 decision tree algorithm.
Training and test datasets were created in an 80–20% ratio to evaluate the performance of various machine learning algorithms, including sequential minimal optimization (SMO), instance-based k-nearest neighbor (IBK), logistic model tree (LMT), random forest (RF), multilayer perceptron (MLP), J48, and naive Bayes (NB).
A stacking-based machine learning approach was applied, combining the top five performing classifiers in various ensembles of two, three, four, and five classifiers. Predictions from the individual classifiers were aggregated in these ensembles and classified using a meta-classifier, aiming to leverage the strengths of each classifier for improved overall prediction accuracy.
The study demonstrated that integrating multiple machine-learning techniques and tuning hyperparameters could significantly enhance the accuracy of tire pressure monitoring systems. The findings highlighted the potential of stacking-based ensembles in providing robust and reliable predictions, paving the way for advanced applications in automotive safety and maintenance. Ongoing research is crucial to address challenges related to data availability and model generalization, ultimately aiming to improve vehicle safety, performance, and operational efficiency.
2 Experimental setup
The present study on tire condition monitoring was carried out on a front-wheel-drive vehicle with nitrogen-filled tires. A front-wheel-drive vehicle was selected to reduce vibration interference from the engine and other moving parts of the car. The vibration signals from the vehicle were collected using a tri-axial waterproof MMA7361L MEMS accelerometer mounted on the rear axle close to the left wheel, as represented in ref. [7]. The sensor had a frequency range of about 400 Hz and a sensitivity of 206 mV/g. The overall experiment was carried out in a real-world environment while the car operated on a smooth road on Indian highways. The car was equipped with a data acquisition (DAQ) unit that was connected to the accelerometer through shielded cables (to reduce signal interference) and to a laptop.
3 Experimental procedure
The initial phase of experimentation began with a perfectly balanced tire carrying an existing balancing weight of 40 g; no additional weight was added to the tire under test. Vibration data were collected for four different tire conditions at vehicle speeds varying between 10 and 90 kmph (excluding the idle condition). The conditions considered, along with the corresponding tire pressures, are as follows: idle (31 psi, at speeds below 10 kmph), high (40 psi), puncture (19 psi), and normal (31 psi). Furthermore, the details of the vibration data collected, along with the feature extraction and feature selection methods, are listed below:
Number of signals collected for each condition – 60
Total signals collected – 240
Features extracted – statistical, histogram, and ARMA
Feature selection – J48 decision tree
The features selected using the J48 algorithm were passed as input to various classifiers to determine the performance, which was further utilized to develop combinations of classifiers for the stacking approach.
4 Methodology
The complete workflow of the proposed methodology is depicted in Figure 1. Initially, a low-cost MEMS sensor mounted on the car's rear axle was used to collect vibration signals from nitrogen-filled tires across four conditions: high pressure, normal pressure, puncture, and idle. ARMA, histogram, and statistical features were then extracted from the collected vibration signals. Feature selection was performed with the aid of the J48 decision tree algorithm to select the most significant features, which can improve the learning capability and optimize the computational time of the machine learning model. The training and test datasets were created in an 80:20 ratio using the selected features. Several base classifiers, namely SMO, RF, J48, IBK, MLP, NB, and LMT, were trained initially. The best-performing classifiers (top five) were combined to form two-, three-, four-, and five-classifier combinations for the application of stacking. The meta-classifier was chosen as the top-performing individual classifier for the specific feature set. The obtained results were recorded and analyzed to verify the effectiveness of the stacking strategy.

Figure 1: A visual representation illustrating the sequence of steps in the present investigation. Source: Created by the authors.
4.1 Data acquisition
An NI USB-6001 DAQ facilitated signal conditioning and digitization, allowing the signals from the accelerometer to be collected. The DAQ device is equipped with 12 input channels, offers a 14-bit resolution, and supports a maximum sampling rate of 20 kS/s. The accelerometer produced an analog signal, which was subsequently converted to a digital signal by the DAQ system, allowing seamless integration with the computer. The interface between the DAQ and the computer was established using the NI LabVIEW software.
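The acquisition step itself was performed in NI LabVIEW; purely as an illustration, the sketch below shows how a comparable finite acquisition could be scripted in Python with the nidaqmx package. The device name ("Dev1"), channel ("ai0"), sampling rate, and sample count are assumptions, not values taken from the study.

```python
# Minimal sketch of a finite voltage acquisition from an NI DAQ using the nidaqmx
# Python package (the study used NI LabVIEW; this is only an illustrative analogue).
# Device name "Dev1", channel "ai0", rate, and sample count are assumptions.
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLE_RATE = 1_000   # samples per second (assumed; must not exceed the 20 kS/s limit)
N_SAMPLES = 5_000     # samples per recorded vibration signal (assumed)

with nidaqmx.Task() as task:
    # One analog-input voltage channel wired to the accelerometer output
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # Finite acquisition clocked by the DAQ's internal sample clock
    task.timing.cfg_samp_clk_timing(
        rate=SAMPLE_RATE,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=N_SAMPLES,
    )
    signal = task.read(number_of_samples_per_channel=N_SAMPLES)

print(f"Acquired {len(signal)} samples")
```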
4.2 Feature extraction
Feature extraction is the process of representing the information about a particular component's status derived from the vibration signal signatures. Every individual fault produces a specific vibration signature that is distinct from those of other faults, so faults can be identified by monitoring the pattern generated by the vibration signals. Feature extraction is particularly useful for datasets that encompass a large number of features, since the computational complexity, time, and expense involved with huge volumes of features are significantly high. Thus, to avoid these complications, feature extraction is widely adopted to enable an efficient learning process with minimal resources [26]. Feature extraction in the present study utilized three distinct feature types: ARMA, histogram, and statistical features. A short description of every feature type is presented as follows.
The combined representation of time-series data with the aid of autoregressive (AR) and moving-average (MA) components is termed autoregressive moving average (ARMA) modeling. ARMA captures the relationship between present and past observations in the time-series data, along with the data trends and patterns. ARMA features are effective in identifying characteristic patterns and in capturing chronological dependencies inside the data. In this study, ARMA orders ranging from 2 to 30 were determined, and the optimal value was chosen using the J48 algorithm, as illustrated in ref. [2].
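As a minimal sketch of how ARMA coefficients can be turned into a feature vector, the example below fits an ARMA(p, q) model with statsmodels and concatenates the fitted AR and MA coefficients. The order (4, 4) and the synthetic input signal are assumptions for illustration only; the study swept orders from 2 to 30 and let J48 select among the resulting features.

```python
# Illustrative ARMA feature extraction: fit an ARMA(p, q) model to one vibration
# signal and use the fitted coefficients as features. The order (4, 4) and the
# synthetic signal are assumptions; the study evaluated orders from 2 to 30.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arma_features(signal: np.ndarray, p: int = 4, q: int = 4) -> np.ndarray:
    """Return the concatenated AR and MA coefficients of an ARMA(p, q) fit."""
    model = ARIMA(signal, order=(p, 0, q))   # d = 0 gives a plain ARMA model
    result = model.fit()
    return np.concatenate([result.arparams, result.maparams])

rng = np.random.default_rng(0)
demo_signal = rng.standard_normal(1000)      # stand-in for a measured vibration record
print(arma_features(demo_signal).shape)      # -> (8,) for p = q = 4
```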
Histogram features are created by plotting the frequency distribution of dataset values, offering insights into the shape of the distribution over the complete data range, including bi- or multi-modality. The bin selection in the histogram depends on the significant vibration signatures generated for every condition. Higher bin counts are highly sensitive to noise, while lower bin counts result in information loss due to oversimplification of the data. In the present study, 100 bins were extracted in total, with bin sets from 2 to 100, and the optimal bin value was chosen using the J48 algorithm, as presented in ref. [2].
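A minimal sketch of histogram feature extraction is shown below: the signal's amplitude range is divided into bins and the normalized bin counts form the feature vector. The bin count of 10 and the synthetic signal are assumptions; the study evaluated bin sets from 2 to 100 and selected the optimal count with J48.

```python
# Illustrative histogram feature extraction: normalized bin counts over the
# signal's amplitude range. The bin count (10) is an assumption; the study
# evaluated bin sets from 2 to 100.
import numpy as np

def histogram_features(signal: np.ndarray, n_bins: int = 10) -> np.ndarray:
    """Return relative bin frequencies spanning the signal's amplitude range."""
    counts, _ = np.histogram(signal, bins=n_bins)
    return counts / counts.sum()             # normalize so features are scale-free

rng = np.random.default_rng(1)
demo_signal = rng.standard_normal(1000)      # stand-in for a measured vibration record
print(histogram_features(demo_signal, n_bins=10))
```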
Descriptive statistical features are obtained by calculating summary statistics from the raw time-series data. Minimum, maximum, mode, count, sum, standard error, range, kurtosis, skewness, standard deviation, median, and mean are the 12 descriptive statistical features used. These features convey information about the shape of the data distribution, its spread, and its central tendency. The variability and general behavior of the time-series data are well represented and easily interpreted through statistical features [27].
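For illustration, the sketch below computes the 12 descriptive statistical features named above with NumPy and SciPy. The exact conventions (e.g., sample versus population standard deviation, how the mode of a continuous signal is computed) are not specified in the study, so the choices here are assumptions.

```python
# Illustrative computation of the 12 descriptive statistical features listed above.
# Conventions such as sample standard deviation are assumptions, as the study does
# not specify them.
import numpy as np
from scipy import stats

def statistical_features(signal: np.ndarray) -> dict:
    n = signal.size
    std = signal.std(ddof=1)                 # sample standard deviation
    return {
        "mean": signal.mean(),
        "median": np.median(signal),
        # For a continuous signal the mode is usually taken after quantization;
        # it is computed on rounded values here only to mirror the feature list.
        "mode": stats.mode(np.round(signal, 2), keepdims=False).mode,
        "minimum": signal.min(),
        "maximum": signal.max(),
        "range": signal.max() - signal.min(),
        "count": n,
        "sum": signal.sum(),
        "standard_deviation": std,
        "standard_error": std / np.sqrt(n),
        "skewness": stats.skew(signal),
        "kurtosis": stats.kurtosis(signal),
    }
```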
4.3 Feature selection
The performance of machine learning models can be detrimentally affected by unwanted noise introduced by duplicate or irrelevant features. Careful selection of the most prominent features can enhance the overall generalization capacity of the model, leading to more consistent outcomes with fewer overfitting scenarios and improved accuracy. The influence of features on the predictions made can be determined by experimenting with a confined set of relevant features, and the insights presented by these relevant features enable informed decision-making. Quicker inference time and faster model training are often obtained by streamlining the feature set. Restricting overfitting is considered the key benefit of feature selection: a major cause of overfitting is excessive feature inclusion, which misleads the model into learning noise from the training data instead of patterns. Prominent features orient the attention of the model toward the significant information carriers inside the data. The J48 algorithm was used in the present study for feature selection. The algorithm depicts the selected features in an inverted tree format, wherein the top node represents the most significant feature and the order of importance descends downwards. The tree prunes automatically, removing less significant features and representing only the most significant ones through branch and leaf nodes. The decision trees selected for the statistical, histogram, and ARMA features are presented in the referred article.
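J48 is Weka's implementation of the C4.5 algorithm. Purely as an analogue, the sketch below ranks features with scikit-learn's DecisionTreeClassifier, which implements CART with an entropy (information-gain-like) criterion rather than C4.5; the pruning strength, the number of retained features, and the placeholder inputs are assumptions.

```python
# Analogue of tree-based feature selection. The study used Weka's J48 (C4.5);
# scikit-learn provides CART, so this is an approximation, not a reproduction.
# X, y, and feature_names are placeholders for the extracted feature matrix,
# tire-condition labels, and feature names.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def select_features(X: np.ndarray, y: np.ndarray, feature_names, top_k: int = 5):
    """Rank features by impurity-based importance from a pruned decision tree."""
    tree = DecisionTreeClassifier(
        criterion="entropy",   # information-gain criterion, in the spirit of C4.5
        ccp_alpha=0.01,        # cost-complexity pruning strength (assumed value)
        random_state=0,
    )
    tree.fit(X, y)
    order = np.argsort(tree.feature_importances_)[::-1]
    return [feature_names[i] for i in order[:top_k]]
```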
4.4 Stacking
Stacking, in the context of machine learning (ML), is an ensemble learning technique that combines multiple individual models to create a more powerful and accurate predictive model. Stacking employs a two-level architecture. In the first level, different base models are trained on the same dataset to capture different patterns and insights. Then, in the second level, a meta-model (known as the "meta-classifier") is trained using the predictions of the base models as features [25]. This meta-model learns to weigh and combine the predictions of the base models, effectively leveraging their strengths and compensating for their weaknesses. Stacking can lead to improved predictive performance, as it allows the ensemble to capture complex relationships in the data and make more accurate predictions [28]. However, it can also introduce additional complexity and potential overfitting, requiring careful tuning and validation to achieve optimal results. A general overview of the stacking methodology is presented in ref. [29].
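A minimal scikit-learn sketch of this two-level architecture is given below. The study itself was carried out with Weka classifiers, so the estimators shown are approximate counterparts (e.g., KNeighborsClassifier with k = 1 for IBK, an entropy-criterion decision tree for J48) and the hyperparameters are assumptions.

```python
# Minimal stacking analogue with scikit-learn (the study used Weka classifiers;
# the estimators and hyperparameters below are assumed counterparts).
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

base_learners = [
    ("knn", KNeighborsClassifier(n_neighbors=1)),          # analogue of Weka's IBK
    ("j48", DecisionTreeClassifier(criterion="entropy")),   # analogue of Weka's J48
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]

# Level-2 meta-classifier: the best individual classifier for the feature set
# (KNN for the statistical and ARMA features, RF for the histogram features).
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=KNeighborsClassifier(n_neighbors=1),
    cv=10,   # 10-fold CV generates the out-of-fold predictions for the meta level
)
# Usage: stack.fit(X_train, y_train); stack.score(X_test, y_test)
```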
5 Results and discussion
This section aims to evaluate and compare the performance of individual base classifiers and the stacking strategy applied to monitor nitrogen-filled tire conditions. The study utilized an 80% training and 20% testing data division. Performance was assessed using metrics like training, validation, and test accuracies. Validation accuracy was determined using a ten-fold cross-validation approach, where the training data were split into ten equal parts, nine for training and one for testing in each cycle. Detailed results from this evaluation process are presented in the next section.
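The evaluation protocol described above can be sketched as follows; X and y stand in for the selected feature matrix and tire-condition labels, and details such as stratification and the random seed are assumptions.

```python
# Sketch of the evaluation protocol: 80/20 train-test split plus ten-fold
# cross-validation on the training portion. X and y are placeholders; the
# stratification and seed are assumptions.
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

def evaluate(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.20, stratify=y, random_state=42)
    clf = KNeighborsClassifier(n_neighbors=1)            # example base classifier
    val_acc = cross_val_score(clf, X_train, y_train, cv=10).mean()
    clf.fit(X_train, y_train)
    return clf.score(X_train, y_train), val_acc, clf.score(X_test, y_test)
```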
5.1 Performance evaluation of individual base classifiers
Before implementing the stacking approach, the study assessed the performance of individual base classifiers: sequential minimal optimization (SMO), instance-based k-nearest neighbor (IBK), logistic model tree (LMT), random forest (RF), multilayer perceptron (MLP), J48, and Naive Bayes (NB). These classifiers were selected for their diverse operational principles and proven effectiveness in fault diagnosis applications. Table 2 presents the top five performing classifiers from this group for each specific feature.
Table 2: Accuracy of the individual base classifiers for the statistical, histogram, and ARMA features
| Feature type | Classifier | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) |
|---|---|---|---|---|
| Statistical | KNN | 100.00 | 83.85 | 89.58 |
| | RF | 100.00 | 92.19 | 87.23 |
| | MLP | 94.27 | 88.02 | 76.60 |
| | J48 | 98.96 | 91.67 | 82.98 |
| | NB | 84.38 | 84.90 | 74.47 |
| Histogram | RF | 100.00 | 93.75 | 75.00 |
| | KNN | 100.00 | 89.58 | 75.00 |
| | MLP | 95.83 | 94.79 | 70.83 |
| | LMT | 94.79 | 92.18 | 68.75 |
| | SMO | 87.50 | 86.97 | 68.75 |
| ARMA | KNN | 100.00 | 92.71 | 91.66 |
| | J48 | 97.39 | 92.18 | 85.41 |
| | MLP | 96.87 | 91.66 | 83.33 |
| | LMT | 95.31 | 92.70 | 81.25 |
| | LR | 90.10 | 89.58 | 81.25 |
From Table 2, it can be observed that the highest classification accuracies for the statistical, histogram, and ARMA features were 89.58% (KNN), 75.00% (RF), and 91.66% (KNN), respectively. The classifiers presented in Table 2 were utilized as the base classifiers in the two-, three-, four-, and five-classifier stacking ensembles. Additionally, the top-performing classifier for every feature type was used as the meta-classifier for further computations.
5.2 Performance evaluation of stacking classifiers
The stacking algorithm was adopted in the current study for the combination of different classifiers. The ensemble of classifiers was broadly categorized into two, three, four, and five classifiers. The meta-classifiers adopted in the study are KNN (for statistical and ARMA) and RF (for histogram). The possible combinations of the classifiers adopted in the study are detailed below.
Two classifier ensemble – 10 combinations (KNN + RF, KNN + J48, KNN + MLP, KNN + NB, RF + J48, RF + MLP, RF + NB, J48 + MLP, J48 + NB, MLP + NB)
Three classifier ensemble – 10 combinations (KNN + RF + J48, KNN + RF + MLP, KNN + RF + NB, KNN + J48 + MLP, KNN + J48 + NB, KNN + MLP + NB, RF + J48 + MLP, RF + J48 + NB, RF + MLP + NB, J48 + MLP + NB)
Four classifier ensemble – 5 combinations (KNN + RF + J48 + MLP, KNN + J48 + MLP + NB, KNN + RF + MLP + NB, KNN + RF + J48 + NB, RF + J48 + MLP + NB)
Five classifier ensemble – 1 combination (KNN + RF + J48 + MLP + NB)
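These counts follow directly from the binomial coefficients C(5, k) for k = 2, 3, 4, 5. The short sketch below reproduces them with itertools, shown for the top five classifiers of the statistical feature set.

```python
# The ensemble counts above are simply the combinations of 5 base classifiers
# taken k at a time: C(5,2) = 10, C(5,3) = 10, C(5,4) = 5, C(5,5) = 1.
from itertools import combinations

top_five = ["KNN", "RF", "J48", "MLP", "NB"]   # top five for the statistical features
for k in (2, 3, 4, 5):
    combos = list(combinations(top_five, k))
    print(f"{k}-classifier ensembles: {len(combos)}")
```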
The combinations mentioned above were used in the stacking approach, and the best-performing classifier combinations were identified. (The combinations are listed here for the statistical feature set; for the histogram and ARMA feature sets, analogous combinations were formed from their respective top five classifiers, as reflected in Tables 3–6.) The performance of every stacking ensemble classifier is presented in Tables 3–6.
Table 3: Accuracy of the two-classifier stacking ensembles for the statistical, histogram, and ARMA features
| Feature type | Meta classifier | Two classifiers | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) |
|---|---|---|---|---|---|
| Statistical | KNN | RF + J48 | 97.92 | 90.10 | 89.58 |
| | | KNN + J48 | 96.35 | 88.02 | 85.42 |
| | | J48 + NB | 91.15 | 88.54 | 83.33 |
| | | RF + NB | 95.31 | 92.71 | 81.25 |
| | | KNN + RF | 93.23 | 90.10 | 79.17 |
| | | RF + MLP | 95.83 | 89.06 | 79.17 |
| | | MLP + NB | 90.10 | 86.46 | 79.17 |
| | | KNN + NB | 92.19 | 80.73 | 75.00 |
| | | J48 + MLP | 94.79 | 85.42 | 72.92 |
| | | KNN + MLP | 92.19 | 81.25 | 70.83 |
| Histogram | RF | RF + MLP | 97.92 | 92.71 | 85.42 |
| | | KNN + J48 | 96.35 | 93.75 | 85.42 |
| | | RF + KNN | 94.27 | 92.71 | 83.33 |
| | | RF + J48 | 95.31 | 91.67 | 81.25 |
| | | KNN + MLP | 90.10 | 89.58 | 81.25 |
| | | J48 + MLP | 97.40 | 93.23 | 81.25 |
| | | RF + LMT | 95.83 | 91.15 | 79.17 |
| | | J48 + LMT | 95.83 | 90.63 | 79.17 |
| | | KNN + LMT | 95.31 | 92.19 | 77.08 |
| | | MLP + LMT | 92.19 | 92.19 | 75.00 |
| ARMA | KNN | KNN + J48 | 96.35 | 93.75 | 93.75 |
| | | KNN + LR | 94.79 | 89.06 | 89.06 |
| | | KNN + MLP | 90.10 | 89.58 | 81.25 |
| | | J48 + MLP | 97.40 | 93.23 | 81.25 |
| | | J48 + LR | 95.31 | 90.63 | 79.17 |
| | | J48 + LMT | 95.83 | 90.63 | 79.17 |
| | | KNN + LMT | 95.31 | 92.19 | 77.08 |
| | | MLP + LR | 95.83 | 93.23 | 77.08 |
| | | LR + LMT | 95.31 | 93.75 | 77.08 |
| | | MLP + LMT | 92.19 | 92.19 | 75.00 |
Table 4: Accuracy of the three-classifier stacking ensembles for the statistical, histogram, and ARMA features
| Feature type | Meta classifier | Three classifiers | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) |
|---|---|---|---|---|---|
| Statistical | KNN | RF + J48 + NB | 94.79 | 89.58 | 89.58 |
| | | KNN + J48 + NB | 95.31 | 86.46 | 87.50 |
| | | KNN + RF + J48 | 97.40 | 89.06 | 85.42 |
| | | KNN + MLP + NB | 94.79 | 85.94 | 83.33 |
| | | RF + MLP + NB | 93.75 | 92.19 | 83.33 |
| | | KNN + RF + NB | 95.83 | 91.67 | 79.17 |
| | | KNN + J48 + MLP | 95.31 | 86.98 | 79.17 |
| | | RF + J48 + MLP | 96.35 | 88.54 | 79.17 |
| | | KNN + RF + MLP | 94.79 | 89.06 | 77.08 |
| | | J48 + MLP + NB | 94.27 | 87.50 | 77.08 |
| Histogram | RF | RF + KNN + MLP | 97.40 | 92.19 | 83.33 |
| | | RF + J48 + MLP | 97.92 | 92.71 | 83.33 |
| | | RF + MLP + LMT | 94.79 | 92.71 | 83.33 |
| | | J48 + MLP + LMT | 95.31 | 94.27 | 83.33 |
| | | KNN + J48 + MLP | 98.44 | 92.71 | 81.25 |
| | | KNN + J48 + LMT | 96.88 | 90.63 | 81.25 |
| | | RF + KNN + J48 | 95.83 | 92.71 | 79.17 |
| | | RF + J48 + LMT | 96.88 | 89.58 | 79.17 |
| | | RF + KNN + LMT | 95.83 | 93.23 | 77.08 |
| | | KNN + MLP + LMT | 91.67 | 92.19 | 70.83 |
| ARMA | KNN | J48 + MLP + LMT | 95.31 | 94.27 | 83.33 |
| | | KNN + J48 + MLP | 98.44 | 92.71 | 81.25 |
| | | KNN + J48 + LR | 95.83 | 91.15 | 81.25 |
| | | KNN + J48 + LMT | 96.88 | 90.63 | 81.25 |
| | | J48 + MLP + LR | 97.92 | 92.19 | 79.17 |
| | | KNN + LR + LMT | 97.40 | 91.67 | 77.08 |
| | | J48 + LR + LMT | 95.31 | 92.71 | 77.08 |
| | | MLP + LR + LMT | 92.71 | 93.23 | 77.08 |
| | | KNN + MLP + LR | 96.88 | 91.15 | 72.92 |
| | | KNN + MLP + LMT | 91.67 | 92.19 | 70.83 |
Table 5: Accuracy of the four-classifier stacking ensembles for the statistical, histogram, and ARMA features
| Feature type | Meta classifier | Four classifiers | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) |
|---|---|---|---|---|---|
| Statistical | KNN | KNN + RF + J48 + NB | 95.31 | 89.58 | 89.58 |
| | | KNN + J48 + MLP + NB | 95.83 | 89.06 | 81.25 |
| | | KNN + RF + MLP + NB | 94.27 | 91.15 | 81.25 |
| | | RF + J48 + MLP + NB | 95.31 | 89.06 | 81.25 |
| | | KNN + RF + J48 + MLP | 95.31 | 88.54 | 72.92 |
| Histogram | RF | RF + KNN + J48 + MLP | 97.92 | 92.71 | 83.33 |
| | | RF + J48 + MLP + LMT | 95.83 | 91.67 | 83.33 |
| | | RF + KNN + MLP + LMT | 94.79 | 92.71 | 79.17 |
| | | RF + KNN + J48 + LMT | 96.88 | 91.67 | 77.08 |
| | | KNN + J48 + MLP + LMT | 95.31 | 93.75 | 75.00 |
| ARMA | KNN | KNN + J48 + MLP + LR | 97.40 | 91.67 | 81.25 |
| | | J48 + MLP + LR + LMT | 95.83 | 93.23 | 79.17 |
| | | KNN + J48 + LR + LMT | 96.35 | 93.23 | 77.08 |
| | | KNN + J48 + MLP + LMT | 95.31 | 93.75 | 75.00 |
| | | KNN + MLP + LR + LMT | 93.75 | 92.19 | 70.83 |
Table 6: Accuracy of the five-classifier stacking ensemble for the statistical, histogram, and ARMA features
| Feature type | Meta classifier | Five classifiers | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) |
|---|---|---|---|---|---|
| Statistical | KNN | KNN + RF + J48 + MLP + NB | 94.79 | 89.06 | 81.25 |
| Histogram | RF | RF + KNN + J48 + MLP + LMT | 96.35 | 92.70 | 77.08 |
| ARMA | KNN | KNN + J48 + MLP + LR + LMT | 95.83 | 93.75 | 79.16 |
The results are summarized in Table 7 for comparison and visualization. Additionally, performance metrics are presented in Table 8. Table 7 shows that the highest classification accuracies for the individual features – statistical, histogram, and ARMA – were 89.58% (KNN), 75.00% (RF), and 91.66% (KNN), respectively. Applying the stacking-based ensemble approach significantly improved the accuracy of the two-classifier ensembles for the histogram and ARMA features, achieving 85.42% (RF + MLP) and 93.75% (KNN + J48), respectively. However, no improvement was observed for the statistical features with the stacking method. Confusion matrices supporting these findings are presented in Figure 2(a) and (b).
Table 7: Performance comparison of best-in-class classifiers for the statistical, histogram, and ARMA features
| Feature type | Meta classifier | Classifier | Training accuracy (%) | Validation accuracy (%) | Testing accuracy (%) |
|---|---|---|---|---|---|
| Statistical | KNN | | 100.00 | 83.85 | 89.58 |
| Histogram | RF | | 100.00 | 93.75 | 75.00 |
| ARMA | KNN | | 100.00 | 92.71 | 91.66 |
| Statistical | KNN | RF + J48 | 97.92 | 90.10 | 89.58 |
| Histogram | RF | RF + MLP | 97.92 | 92.71 | 85.42 |
| ARMA | KNN | KNN + J48 | 96.35 | 93.75 | 93.75 |
Table 8: Additional performance metrics of best-in-class classifiers for the statistical, histogram, and ARMA features
| Feature type | Meta classifier | Classifier | Precision | Recall | F-measure | MCC | ROC | PRC |
|---|---|---|---|---|---|---|---|---|
| Statistical | KNN | | 0.902 | 0.896 | 0.895 | 0.864 | 0.931 | 0.835 |
| Histogram | RF | | 0.750 | 0.750 | 0.697 | 0.663 | 0.920 | 0.791 |
| ARMA | KNN | | 0.920 | 0.917 | 0.917 | 0.890 | 0.944 | 0.868 |
| Statistical | KNN | RF + J48 | 0.897 | 0.896 | 0.896 | 0.862 | 0.971 | 0.894 |
| Histogram | RF | RF + MLP | 0.863 | 0.854 | 0.853 | 0.810 | 0.964 | 0.883 |
| ARMA | KNN | KNN + J48 | 0.940 | 0.937 | 0.938 | 0.918 | 0.937 | 0.938 |

Figure 2: Confusion matrix of (a) histogram features (RF + MLP) and (b) ARMA features (KNN + J48). Source: Created by the authors.
5.3 Performance evaluation with state-of-the-art techniques
The proposed methodology’s effectiveness was evaluated and compared against several advanced approaches. Table 9 demonstrates the performance of various established classifiers relative to the stacking-based approach introduced in this study.
Table 9: Comparison with state-of-the-art techniques
| Ref no. | Technique | Accuracy (%) |
|---|---|---|
| Wang et al. [30] | Long short-term memory network (LSTM) | 83.00 |
| Anoop [31] | K-Star Algorithm | 89.16 |
| Ziaukas et al. [32] | CNN | 90.00 |
| Ziaukas et al. [32] | ResNet | 90.00 |
| Svensson et al. [33] | Random forest | 90.54 |
| Proposed | Stacking algorithm | 93.75 |
6 Conclusion
This study presents a novel stacking ensemble model to enhance TPMS for nitrogen-filled tires using cost-effective MEMS accelerometers to capture vibration signals under varying tire conditions. By employing ARMA, histogram, and statistical features combined with the J48 decision tree for feature selection, the stacking ensemble achieved high classification accuracy, particularly for ARMA (93.75%) and histogram (85.42%) features. These results demonstrate that the stacking approach significantly outperforms individual classifiers, particularly in identifying subtle patterns in vibration data that correlate with tire pressure conditions. The findings underscore the potential of ensemble methods to improve real-time TPMS performance, providing an accurate and reliable tool for maintaining vehicle safety. This approach not only enhances classification accuracy but also presents a practical and resource-efficient solution suited to real-world applications. Future research could explore additional ensemble configurations, real-time implementation on embedded systems, and the integration of more diverse data sources to improve generalization and robustness under various driving conditions. Ultimately, this study contributes to advancing TPMS technology, paving the way for safer and more efficient vehicle maintenance and monitoring systems.
- Funding information: No funding was received for the research work carried out.
- Author contributions: Viraj Chetan Shah: investigation; methodology; conceptualization; software; validation; role/writing – original draft, review & editing. Naveen Venkatesh Sridharan: conceptualization; data curation; formal analysis; investigation; methodology; software; validation; visualization; role/writing – original draft. Sugumaran V.: data curation; formal analysis; investigation; methodology; resources; supervision; validation; role/writing – review & editing. Anoop Prabhakaranpillai Sreelatha: data curation; formal analysis; investigation; methodology; role/writing – review & editing. B. R. Manju: investigation; methodology; resources; supervision; validation; role/writing – review & editing.
- Conflict of interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
- Data availability statement: The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
[1] Gent A, Walter J. Pneumatic tire. Ohio, United States of America: Mechanical Engineering Faculty Research; 2006. https://ideaexchange.uakron.edu/mechanical_ideas/854.
[2] Parihar H, Naveen Venkatesh S, Anoop PS, Sugumaran V. Application of feature fusion strategy for monitoring the condition of nitrogen filled tires using tree family of classifiers. Phys Scr. 2024;99(3):035210. 10.1088/1402-4896/ad2252.
[3] Qing D, Hongxiang Z, Lijun Y. Design of tire pressure test system based on wireless transmission. 2021 IEEE 3rd International Conference on Civil Aviation Safety and Information Technology (ICCASIT). IEEE; 2021. p. 1025–9. 10.1109/ICCASIT53235.2021.9633474.
[4] Sachan R, Iqbal S. Application of machine learning technique for development of indirect tire pressure monitoring system. SAE Int J Adv Curr Pract Mobil. 2021a;4(3):2021-26-0016. 10.4271/2021-26-0016.
[5] Rattan A, Naveen Venkatesh S, Sugumaran V, Anoop PS. Monitoring the condition of nitrogen-filled tires using weightless neural networks. Automatika. 2024;65(2):523–37. 10.1080/00051144.2024.2310979.
[6] Waddell WH, Napier RC, Tracey DS. Nitrogen inflation of tires. Rubber Chem Technol. 2009;82(2):229–43. 10.5254/1.3548247.
[7] Vasan V, Sridharan NV, Prabhakaranpillai Sreelatha A, Vaithiyanathan S. Tire condition monitoring using transfer learning-based deep neural network approach. Sensors. 2023;23(4):2177. 10.3390/s23042177.
[8] Formentin S, Onesto L, Colombo T, Pozzato A, Savaresi SM. h-TPMS: A hybrid tire pressure monitoring system for road vehicles. Mechatronics. 2021;74:102492. 10.1016/j.mechatronics.2021.102492.
[9] Sachan R, Iqbal S. Application of machine learning technique for development of indirect tire pressure monitoring system. SAE Int J Adv Curr Pract Mobil. 2021b;4(3):2021-26-0016. 10.4271/2021-26-0016.
[10] Muturatnam AB, Sridharan NV, Sreelatha AP, Vaithiyanathan S. Enhanced tyre pressure monitoring system for nitrogen filled tyres using deep learning. Machines. 2023;11(4):434. 10.3390/machines11040434.
[11] Robinson JH. Remote tire pressure monitoring system employing coded tire identification and radio frequency transmission, and enabling recalibration upon tire rotation or replacement. Google Patents. Accessed: March 18, 2025. [Online]. Available: https://patents.google.com/patent/US5838229A/en.
[12] Lee D-H, Yoon D-S, Kim G-W. New indirect tire pressure monitoring system enabled by adaptive extended Kalman filtering of vehicle suspension systems. Electronics. 2021;10(11):1359. 10.3390/electronics10111359.
[13] Fechtner H, Spaeth U, Schmuelling B. Smart tire pressure monitoring system with piezoresistive pressure sensors and bluetooth 5. 2019 IEEE Conference on Wireless Sensors (ICWiSe). IEEE; 2019. p. 18–23. 10.1109/ICWISE47561.2019.8971835.
[14] Huo B, Zhang M, Zhong S, Zhou F. Design of indirect tire pressure monitoring system based on wheel speed signal. J Phys: Conf Ser. 2023;2528(1):012045. 10.1088/1742-6596/2528/1/012045.
[15] He J, Cao S, Zhang H. Cylinder-based hybrid rotary nanogenerator for harvesting rotational energy from axles and self-powered tire pressure monitoring. Energy Sci Eng. 2020;8(2):291–9. 10.1002/ese3.560.
[16] Jin L, Peng X, Liu J, Zhang Q, Li J, Li L. Robust algorithm of indirect tyre pressure monitoring system based on tyre torsional resonance frequency analysis. J Sound Vib. 2022;538:117198. 10.1016/j.jsv.2022.117198.
[17] Patange AD, Jegadeeshwaran R. A machine learning approach for vibration-based multipoint tool insert health prediction on vertical machining centre (VMC). Measurement. 2021;173:108649. 10.1016/j.measurement.2020.108649.
[18] Bode G, Thul S, Baranski M, Müller D. Real-world application of machine-learning-based fault detection trained with experimental data. Energy. 2020;198:117323. 10.1016/j.energy.2020.117323.
[19] Sunal CE, Dyo V, Velisavljevic V. Review of machine learning based fault detection for centrifugal pump induction motors. IEEE Access. 2022;10:71344–55. 10.1109/ACCESS.2022.3187718.
[20] Molina S, Novella R, Gomez-Soriano J, Olcina-Girona M. New combustion modelling approach for methane-hydrogen fueled engines using machine learning and engine virtualization. Energies. 2021;14(20):6732. 10.3390/en14206732.
[21] Toma RN, Prosvirin AE, Kim J-M. Bearing fault diagnosis of induction motors using a genetic algorithm and machine learning classifiers. Sensors. 2020;20(7):1884. 10.3390/s20071884.
[22] Balachandar K, Salamon Arockiaraj KS, Sriraman G, Jegadeeshwaran R, Sakthivel G, Lakshmipathi J. Development of a machine learning model to predict the friction stir welding tool condition. Mater Today: Proc. 2023. 10.1016/j.matpr.2023.05.400.
[23] Lee WJ, Mendis GP, Sutherland JW. Development of an intelligent tool condition monitoring system to identify manufacturing tradeoffs and optimal machining conditions. Procedia Manuf. 2019;33:256–63. 10.1016/j.promfg.2019.04.031.
[24] Phade GM, Kulkarni AD. Tire pressure monitoring system. Int J Res Eng Appl Manag. 2020;6(01):469–73. 10.35291/2454-9150.2020.0334.
[25] Pavlyshenko B. Using stacking approaches for machine learning models. Proceedings of the 2018 IEEE 2nd International Conference on Data Stream Mining and Processing, DSMP 2018; 2018. p. 255–8. 10.1109/DSMP.2018.8478522.
[26] Amin R, Yasmin R, Ruhi S, Rahman MH, Reza MS. Prediction of chronic liver disease patients using integrated projection based statistical feature extraction with machine learning algorithms. Inform Med Unlocked. 2023;36:101155. 10.1016/j.imu.2022.101155.
[27] Altaf M, Akram T, Khan MA, Iqbal M, Ch MMI, Hsu CH. A new statistical features based approach for bearing fault diagnosis using vibration signals. Sensors. 2022;22(5):2012. 10.3390/s22052012.
[28] Meharie MG, Mengesha WJ, Gariy ZA, Mutuku RNN. Application of stacking ensemble machine learning algorithm in predicting the cost of highway construction projects. Eng Constr Archit Manag. 2022;29(7):2836–53. 10.1108/ECAM-02-2020-0128.
[29] Zhang M, Zhang H, Li X, Liu Y, Cai Y, Lin H. Classification of paddy rice using a stacked generalization approach and the spectral mixture method based on MODIS time series. IEEE J Sel Top Appl Earth Observ Remote Sens. 2020;13:2264–75. 10.1109/JSTARS.2020.2994335.
[30] Wang X, Chen Z, Cao W, Xu G, Liu L, Liu S, et al. Artificial neural network-based method for identifying under-inflated tire in indirect TPMS. IEEE Access. 2020;8:213799–805. 10.1109/ACCESS.2020.3038895.
[31] Anoop PS. Implementing K-star algorithm to monitor tyre pressure using extracted statistical features from vertical wheel hub vibrations. Indian J Sci Technol. 2016;9(1):1–7. 10.17485/ijst/2016/v9i47/107926.
[32] Ziaukas Z, Busch A, Wielitzka M, Ortmaier T, Kobler J-P. Classification of tire pressure in a semitrailer using a convolutional neural network. 2020 IEEE International Conference on Mechatronics and Automation (ICMA). IEEE; 2020. p. 181–5. 10.1109/ICMA49215.2020.9233730.
[33] Svensson O, Thelin S, Byttner S, Fan Y. Indirect tire monitoring system – Machine learning approach. IOP Conf Ser: Mater Sci Eng. 2017;252:012018. 10.1088/1757-899X/252/1/012018.
© 2025 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.