
Innovative optimization of seashell ash-based lightweight foamed concrete: Enhancing physicomechanical properties through ANN-GA hybrid approach

  • Abdeliazim Mustafa Mohamed, Bassam A. Tayeh, Yazid Chetbani, Aissa Laouissi, Maaz Osman Bashir, and Yazan Issa Abu Aisheh
Published/Copyright: 13 September 2025

Abstract

This study presents a novel approach to sustainable construction by utilizing three types of seashell ash, namely, oyster shell ash (OSA), scallop shell ash (SSA), and mussel shell ash (MSA), as partial replacements for cement in lightweight foamed concrete (LFC). This novel application of aquaculture waste supports the creation of more sustainable and resilient construction materials for urban settings. The physicomechanical properties of LFC, namely compressive strength (CS), flexural strength (FS), split tensile strength (STS), water absorption (WA), and porosity (P), were assessed using response surface methodology (RSM) and an artificial neural network (ANN) with K-fold cross-validation. The research examines the influence of additive type (OSA, SSA, MSA), curing duration (7–28 days), and additive content (0–30%) on the characteristics of LFC. Analysis of variance indicated that curing time exerted the most substantial effect on CS, FS, and STS, whereas additive content had a more pronounced impact on WA and P. The findings indicated favorable enhancements in CS, FS, and STS at a curing duration of 28 days and additive contents between 4 and 20%. Replacing cement with OSA, SSA, and MSA had favorable effects on LFC characteristics. The predictive effectiveness of the DNN-IGWO (deep neural network optimized with an improved grey wolf optimizer), ANN, RSM, and support vector machine models was evaluated using several error metrics, including mean absolute deviation, mean absolute percentage error, root mean square error, and coefficient of determination (R²). The results showed that the hybrid DNN-IGWO model outperformed all other approaches, providing significantly higher accuracy across all properties studied. Moreover, coupling the DNN-IGWO models with evolutionary algorithms facilitated the discovery of optimal solutions for the multi-objective optimization of LFC properties. The optimization revealed intrinsic trade-offs between targets, such as CS vs WA and CS vs P, underscoring the need for careful balancing in the optimization process. This study constitutes a notable advancement toward sustainable development goals in construction materials by improving concrete characteristics through the incorporation of seashell ash and sophisticated optimization methods.

1 Introduction

Lightweight foamed concrete (LFC) is a cellular concrete characterized by its low density, typically ranging from 400 to 1,850 kg·m−3 [1,2]. It is classified as a lightweight concrete in which random air voids are evenly distributed throughout the mixture by adding foaming agents to the mortar. LFC exhibits high flowability owing to these air voids, together with low cement content and minimal aggregate usage [3,4,5]. Depending on the pore-formation method, such concrete is classified into air-entrained and foam concrete, each employing a distinct approach to introduce porosity (P) into the material [6,7]. Air-entrained concrete relies on gas-forming chemicals mixed into the mortar, where a chemical reaction during mixing generates gas bubbles and yields a porous structure; commonly employed aerating agents include aluminum powder, calcium carbide, and hydrogen peroxide [6,8,9].

Foam concrete utilizes mechanical means to form pores, either through a pre-foaming process where a foaming agent is mixed with the water prior to incorporation into the mortar or via a mixed foaming process where the foaming agent is directly mixed with the mortar. These methods collectively contribute to the lightweight and insulating characteristics of aerated concrete [10,11,12].

The use of waste materials, specifically waste binder particles, as replacements for cement in concrete offers an alternative approach supported by several motivations. Employing aquaculture waste as a cement substitute presents considerable potential for improving the characteristics of cementitious materials and promoting ecologically sustainable concrete manufacturing [13,14]. Aquaculture byproducts such as seashells, including oyster shell ash (OSA), scallop shell ash (SSA), and mussel shell ash (MSA), are promising as valuable components in the construction industry, promoting the adoption of more sustainable building practices. In this study, the term “additive type” refers to the different seashell ashes used as partial replacements for cement, namely, OSA, SSA, and MSA. The “additive content” refers to the percentage of cement replaced by these ashes, ranging from 0 to 30% [15,16,17,18].

Previous research has extensively explored substituting cement with seashell powder in concrete mixes, revealing lengthened setting times, decreased compressive strength (CS), and weakened flexural strength (FS) as notable outcomes. Comparative analyses among seashell varieties, including periwinkle shell ash (PSA), OSA, and snail shell ash, exhibit differences in the water consistency of cement pastes [11,16,19].

Olutoge et al. [20] investigated the effects of incorporating PSA into concrete. They found that as the proportion of PSA increased, the compaction factor improved while the slump decreased. Furthermore, higher PSA percentages resulted in longer initial and final setting times. In addition, the specific gravity of PSA was lower than that of ordinary Portland cement (OPC). Finally, the CS of concrete specimens decreased with increasing proportions of PSA.

Hai-Yan et al. [21] delved into the utilization of crushed oyster shell (COS) in marine concrete production, alongside fly ash (FA) and blast furnace slag (BS). Their investigation centered on evaluating the impact of different COS proportions, in conjunction with FA and BS, on the strength and durability of marine concrete. The findings elucidated that incorporating an optimal quantity of COS yielded favorable outcomes on these properties, thereby augmenting the efficacy and sustainability of concrete.

Adeala and Olaoye [22] explored the utilization of SSA as a partial substitute for cement in concrete. Their investigation revealed that when used at a 20% replacement level, SSA-blended concrete exhibited favorable characteristics, including low water absorption (WA) and high CS. These findings imply that SSA concrete may be suitable for structural applications, provided that the replacement level does not surpass 20%.

Several techniques, including response surface methodology (RSM) and artificial neural network (ANN), are utilized to investigate and optimize the properties of cement-based materials [23,24,25]. RSM, developed by Box and Wilson in 1951, optimizes processes by adjusting factors such as cement composition and curing time in concrete engineering. It aims to enhance outcomes such as CS and durability while minimizing resource usage. This systematic approach reduces the number of experiments required and identifies significant process parameters through analysis of variance (ANOVA). Regression equations predict responses based on given parameters, with response surface plots illustrating their effects [25,26,27]. Ultimately, the desirability approach is used to optimize process parameters, confirmed through validation tests [28,29].

An ANN is a data processing system structured with layers, including an input layer, one or more hidden layers, and an output layer. Each layer consists of numerous interconnected processing units known as neurons [30,31,32]. The input layer receives data, which are then processed through the hidden layers. Within these layers, neurons perform computations on the input data, passing it through activation functions to introduce nonlinearity and generate meaningful representations [24,33,34,35]. The output layer then produces the final result based on the processed information. This interconnected structure allows ANNs to learn complex patterns and relationships within data, making them effective tools for tasks such as classification, regression, and face recognition [30,35].

Rizalman and Lee [36] compared the performance of ANN and RSM in predicting the CS of palm oil fuel ash concrete and found that RSM outperformed ANN, with a coefficient of determination (R²) closer to 1. All results predicted by RSM fell within a 10% margin of the experimental results, whereas the ANN model had three predictions outside this margin.

Yaro et al. [37] highlighted the superior applicability of the ANN model compared with the RSM model. The ANN model demonstrates greater potential due to its capacity to simulate a broader array of nonlinear polynomials, unlike the RSM model, which is confined to capturing solely quadratic approximations. The ANN model’s adeptness in handling nonlinear relationships accounts for its superior performance.

Ray et al. [38] found that RSM models are better at predicting concrete properties than ANN models. This conclusion is supported by the RSM models exhibiting higher coefficients of determination (R²), close to 1, and lower error values than the ANN models. Thus, RSM was the more effective approach for forecasting concrete properties based on the investigated parameters.

This work is a continuation of the research by Maglad et al. [16] on seashell ash-based LFC. They conducted an experimental campaign to evaluate the CS, FS, split tensile strength (STS), WA, and P of LFC containing OSA, SSA, and MSA. Those results are used here to develop models for estimating the five LFC properties as functions of additive type, additive content, and curing duration. For this purpose, RSM, ANN, and genetic algorithm (GA) optimization are applied.

2 Materials

Maglad et al. [16] studied the evaluation of physicomechanical properties of seashell ash-based LFC by partially replacing OPC with OSA, SSA, and MSA. The study conformed to BS EN 197-1 standards [39] for OPC and utilized clean river sand as the fine aggregate, with a maximum particle size of 4.75 mm and a specific gravity of 2.53, as per ASTM C33-03 standards [40]. Potable water was used in accordance with BS 3148 standards for concrete mixing [41].

The seashell ashes were collected from local fishermen in Teluk Bahang, Penang, Malaysia, and processed by cleaning, drying, baking at 220°C, and grinding into fine ash. These ashes, primarily composed of calcium oxide (CaO), exhibited pozzolanic properties and were tested as cement replacements at 5, 10, 15, 20, 25, and 30%. The chemical composition of the seashell ashes was verified through X-ray fluorescence analysis, and the specific gravities of the ashes were found to be 2.86 for MSA, 2.64 for OSA, and 2.27 for SSA, indicating their suitability for partial cement replacement.

A protein-based foaming agent, diluted with water in a 1:32 ratio, was employed to create foam with a stable density of 65 ± 10 kg·m−3. The LFC mix was prepared with a sand-to-cement ratio of 1.5:1 and a water-to-cement ratio of 0.48. Nineteen different LFC mixtures were prepared, varying the percentage of seashell ash substitution, and the fresh concrete was poured into molds for curing. The specimens were water-cured for 28 days to ensure complete hydration.

The CS of the LFC specimens was evaluated using 100 mm cube samples according to BS EN 12390-3 standards [42]. FS tests were performed using prism specimens (100 mm × 100 mm × 500 mm) following BS EN 12390-5 standards [43], while the STS was measured using cylindrical specimens (∅ 100 mm × 200 mm) based on BS EN 12390-6 standards [44]. These tests were conducted at 7, 14, and 28 days to assess the concrete’s strength performance under axial, bending, and tensile loads.

WA was evaluated in line with ASTM C1403 standards [45], with the specimens oven-dried at 105°C and immersed in water for 24 h to determine their ability to resist moisture infiltration. The P of the LFC specimens was determined using a vacuum saturation technique.

3 Experimental results

The dataset was generated according to an RSM experimental design, resulting in the formulation of 63 distinct LFC mixes. The blends were created by varying several input parameters: the type of seashell ash additive (OSA, SSA, and MSA), the curing duration (7, 14, and 28 days), and the additive content (0–30%). The three parameters were methodically varied to investigate their collective impact on five principal output properties: CS, FS, STS, WA, and P.

Each blend represents a distinct combination of these factors, enabling us to examine how varying conditions influence the performance of the LFC. The experimental design was organized to guarantee that all pertinent combinations of the variables were included, thereby producing a varied and thorough dataset despite the limited total number of data points. The dataset comprises 63 measurements for each of the five properties (CS, FS, STS, WA, and P), forming the basis for constructing robust predictive models for these LFC features. All descriptions of the materials and methods are detailed in Maglad et al. [16]. The five measured responses of LFC are recorded in Table 1. This database of 63 values per response is used to develop prediction models of the studied properties.
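For modeling, the categorical additive type is encoded numerically (1 = OSA, 2 = SSA, 3 = MSA), as in Table 1. A minimal sketch of how such a dataset could be assembled for the models developed later is shown below; the column and variable names are illustrative, not those used by the authors, and only the first two mixes of Table 1 are listed.

```python
import pandas as pd

# Encoding of the additive type follows Table 1 (1 = OSA, 2 = SSA, 3 = MSA).
ADDITIVE_CODE = {"OSA": 1, "SSA": 2, "MSA": 3}

rows = [
    # additive, curing time (days), additive content (%), CS, FS, STS, WA, P
    ("OSA", 7, 0, 9.2, 2.1, 1.38, 15.54, 35.10),
    ("OSA", 14, 0, 12.2, 2.8, 1.80, 15.22, 34.40),
    # ... remaining 61 mixes of Table 1
]

df = pd.DataFrame(rows, columns=["additive", "curing_days", "content_pct",
                                 "CS", "FS", "STS", "WA", "P"])
df["additive_type"] = df["additive"].map(ADDITIVE_CODE)

X = df[["additive_type", "curing_days", "content_pct"]].to_numpy()  # model inputs
y = df[["CS", "FS", "STS", "WA", "P"]].to_numpy()                   # five responses
print(X.shape, y.shape)
```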

Table 1

Experimental results of CS, FS, STS, WA, and P of the studied LFC [16]

Sample number Additive Additive type Curing time (days) Additive content (%) CS (MPa) FS (MPa) STS (MPa) WA (%) P (%)
1 OSA 1 7 0 9.2 2.1 1.38 15.54 35.10
2 1 14 0 12.2 2.8 1.8 15.22 34.40
3 1 28 0 14.5 3.4 2.3 14.50 33.70
4 1 7 5 9.3 2 1.39 16.80 36.50
5 1 14 5 12.5 2.9 1.9 16.91 35.60
6 1 28 5 14.7 3.5 2.5 15.60 34.90
7 1 7 10 9.5 2.1 1.41 17.57 37.50
8 1 14 10 12.7 2.9 2.1 17.76 36.70
9 1 28 10 15.0 3.6 2.5 16.30 35.90
10 1 7 15 9.8 2.2 1.46 18.08 38.30
11 1 14 15 13.1 3 2.1 18.25 37.50
12 1 28 15 15.4 3.7 2.6 16.75 36.70
13 1 7 20 9.0 2 1.33 18.46 38.9
14 1 14 20 12.2 2.8 1.9 18.65 38.2
15 1 28 20 14.2 3.5 2.5 17.10 37.4
16 1 7 25 8.3 1.8 1.21 18.74 39.3
17 1 14 25 11 2.5 1.7 18.95 38.8
18 1 28 25 13.0 3.1 2.2 17.4 38.1
19 1 7 30 7.3 1.6 1.1 18.97 39.7
20 1 14 30 9.9 2.3 1.6 19.25 39.2
21 1 28 30 11.7 2.8 2 17.70 38.6
22 SSA 2 7 0 9.2 2.1 1.38 15.54 35.1
23 2 14 0 12.2 2.8 1.8 15.22 34.4
24 2 28 0 14.5 3.4 2.3 14.5 33.7
25 2 7 5 9.10 2 1.37 17.06 36.7
26 2 14 5 12.3 2.8 2 17.23 36.1
27 2 28 5 14.5 3.4 2.4 15.9 35.2
28 2 7 10 9.10 2 1.38 17.88 38.0
29 2 14 10 12.4 2.8 1.9 18.03 37.3
30 2 28 10 14.6 3.5 2.4 16.7 36.4
31 2 7 15 9.4 2.1 1.39 18.36 38.9
32 2 14 15 12.7 2.9 2 18.53 38.2
33 2 28 15 14.9 3.6 2.5 17.2 37.4
34 2 7 20 8.8 1.9 1.30 18.76 39.5
35 2 14 20 11.9 2.7 1.9 18.93 38.8
36 2 28 20 14.1 3.4 2.4 17.6 38.0
37 2 7 25 7.9 1.7 1.18 19.05 40.0
38 2 14 25 10.6 2.5 1.7 19.23 39.3
39 2 28 25 12.6 3 2.1 17.9 38.6
40 2 7 30 6.9 1.5 1.04 19.27 40.4
41 2 14 30 9.4 2.2 1.4 19.53 39.7
42 2 28 30 11.1 2.7 1.9 18.2 39.0
43 MSA 3 7 0 9.2 2.1 1.38 15.54 35.1
44 3 14 0 12.2 2.8 1.8 15.22 34.4
45 3 28 0 14.5 3.4 2.3 14.5 33.7
46 3 7 5 9.5 2.1 1.42 16.61 36.1
47 3 14 5 12.9 3 2.1 16.63 35.2
48 3 28 5 15.1 3.6 2.6 15.2 34.7
49 3 7 10 10.1 2.2 1.53 17.21 36.9
50 3 14 10 13.6 3.1 2.2 17.43 36.1
51 3 28 10 15.9 3.8 2.7 15.8 35.5
52 3 7 15 10.3 2.3 1.56 17.61 37.6
53 3 14 15 14.0 3.2 2.3 17.83 36.9
54 3 28 15 16.3 4 2.8 16.2 36.2
55 3 7 20 9.4 2.1 1.41 17.91 38.2
56 3 14 20 12.7 2.9 2.1 18.13 36.7
57 3 28 20 14.9 3.6 2.5 16.5 36.9
58 3 7 25 8.3 1.8 1.23 18.11 38.7
59 3 14 25 11.2 2.6 1.8 18.33 38.2
60 3 28 25 13.1 3.2 2.2 16.7 37.5
61 3 7 30 7.6 1.7 1.13 18.31 39.1
62 3 14 30 10.3 2.4 1.6 18.53 38.7
63 3 28 30 12.0 3 2.1 16.9 38.0

4 Model development

4.1 RSM modeling

RSM is employed in the first stages of designing experiments to create predictive models for responses and to conduct optimization. These response models may be represented as linear or higher-order polynomials, as seen in the generalized formats specified in Eqs. (1) and (2) [46,47,48,49].

(1) $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_n X_n + \epsilon$,

(2) $Y = \beta_0 + \sum_{i}^{k} \beta_i X_i + \sum_{i}^{k} \beta_{ii} X_i^2 + \sum_{i<j}^{k} \beta_{ij} X_i X_j + \epsilon$,

where Y denotes the desired response, β0 is the regression coefficient of the constant term, and βi, βii, and βij are the coefficients of the linear, quadratic, and interaction (XiXj) terms, respectively. The number of factors is denoted by k, while the random error is denoted by ε.

RSM offers multiple modeling methodologies, each with distinct attributes and degrees of precision. The accuracy of predictive outcomes is influenced not only by the chosen model type but also by the quality and relevance of the experimental data utilized. This study selected the quadratic model to represent the responses, which include CS, FS, STS, WA, and P [50,51]. This model was selected for its superior accuracy compared with the other alternatives. The strength of the quadratic model lies in its ability to account for nonlinear effects and interactions between input variables, which is vital for achieving reliable predictions when the relationships among variables are not purely linear. These models are expressed in coded terms in Eqs. (3)–(7). Equations expressed in coded factors can be used to predict responses across different levels of each variable. Typically, a value of +1 signifies a high level of a factor, whereas a value of −1 denotes a low level. The coded equation facilitates the determination of the relative importance of variables by comparing the coefficients of the factors, denoted A (additive type), B (curing time, days), and C (additive content, %).

(3) $\mathrm{CS}\,(\mathrm{MPa}) = 5.68 - 1.89A + 0.75B + 0.19C + 0.0038AB + 0.001AC - 0.002BC + 0.50A^2 - 0.014B^2 - 0.01C^2$,

(4) $\mathrm{FS}\,(\mathrm{MPa}) = 1.26 - 0.51A + 0.18B + 0.04C + 0.0016AB + 0.001AC - 0.0002BC + 0.13A^2 - 0.003B^2 - 0.002C^2$,

(5) $\mathrm{STS}\,(\mathrm{MPa}) = 0.76 - 0.37A + 0.13B + 0.039C + 0.0005AB - 0.0002AC - 0.0001BC + 0.1A^2 - 0.002B^2 - 0.0016C^2$,

(6) $\mathrm{WA}\,(\%) = 13.2 + 2.03A + 0.14B + 0.25C - 0.002AB - 0.01AC - 0.0004BC - 0.51A^2 - 0.005B^2 - 0.004C^2$,

(7) $\mathrm{P}\,(\%) = 34.19 + 2.69A - 0.17B + 0.26C + 0.003AB - 0.009AC + 0.0004BC - 0.71A^2 + 0.003B^2 - 0.003C^2$.
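As an illustration, the coded quadratic model of Eq. (3) can be evaluated directly. The short sketch below assumes the coefficient signs as reconstructed above and uses the factor coding A (additive type, 1–3), B (curing time, days), and C (additive content, %); the function name is illustrative.

```python
def rsm_cs(A: float, B: float, C: float) -> float:
    """Quadratic RSM model for CS (MPa), Eq. (3), with the reconstructed coefficient signs."""
    return (5.68 - 1.89 * A + 0.75 * B + 0.19 * C
            + 0.0038 * A * B + 0.001 * A * C - 0.002 * B * C
            + 0.50 * A**2 - 0.014 * B**2 - 0.01 * C**2)

# Example: MSA (A = 3), 28-day curing, 15% additive content.
# The published coefficients are rounded, so the value only approximates the fitted surface.
print(round(rsm_cs(3, 28, 15), 2))
```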

ANOVA is a statistical technique that is widely used in research to analyze and interpret experimental data. It operates on the principles of probability and mathematical statistics, aiming to validate models and evaluate the influence of input parameters on the variability of responses [25,52,53,54,55]. ANOVA tests the significance of differences observed among groups or treatments by partitioning the total variance into distinct components attributed to different sources, such as independent variables or interactions [56,57,58,59]. This partitioning allows researchers to determine whether the observed differences are statistically significant or merely due to random chance. ANOVA is typically conducted at a 95% confidence interval, denoted by a significance level (α) of 0.05, meaning that results with a p-value less than 0.05 are considered statistically significant [60,61,62,63,64]. The statistical parameters taken into account by ANOVA are given in Table 2.

Table 2

Statistical parameters of ANOVA

Statistical parameter, equation, and definition
The squared sum (SS_f): $\mathrm{SS}_f = \dfrac{N}{N_{nf}} \sum_{i=1}^{N_{nf}} (\bar{y}_i - \bar{y})^2$ (8). Estimates the squared deviation of the factor-level means from the overall mean, where $\bar{y}$ is the average response, $\bar{y}_i$ is the average of the measured responses at level i of factor f, N is the total number of trials, and $N_{nf}$ is the number of levels of factor f.
The squared mean (MS_i): $\mathrm{MS}_i = \mathrm{SS}_i / \mathrm{df}_i$ (9). Calculated by dividing the squared sum ($\mathrm{SS}_i$) by its number of degrees of freedom ($\mathrm{df}_i$).
The F-value: $F_i = \mathrm{MS}_i / \mathrm{MS}_e$ (10). Used to check the adequacy of the mathematical model: the calculated F-value must be greater than the tabulated F-value, where $\mathrm{MS}_e$ is the mean squared sum of the errors.
Contribution (Cont.%): $\mathrm{Cont.\%} = (\mathrm{SS}_f / \mathrm{SS}_T) \times 100$ (11). Shows the contribution of each factor ($\mathrm{SS}_f$) to the total variance ($\mathrm{SS}_T$), i.e., its percentage effect on the response.
The coefficient of determination (R²): $R^2 = \sum (\bar{y}_i - \bar{y})^2 / \sum (y_i - \bar{y})^2$ (12). The ratio of explained variation to total variation; a measure of the goodness of fit.
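The quantities in Table 2 can be computed directly for a balanced design. The sketch below is a minimal illustration for a single factor, following Eqs. (8) and (11); the variable names and toy data are illustrative only.

```python
import numpy as np

def factor_contribution(levels: np.ndarray, y: np.ndarray, ss_total: float) -> dict:
    """SS_f (Eq. 8) and contribution in % (Eq. 11) for one factor of a balanced design."""
    grand_mean = y.mean()
    uniq = np.unique(levels)
    n_levels = len(uniq)                 # N_nf in Table 2
    n_total = len(y)                     # N in Table 2
    level_means = np.array([y[levels == u].mean() for u in uniq])
    ss_f = (n_total / n_levels) * np.sum((level_means - grand_mean) ** 2)
    return {"SS_f": ss_f, "Cont_%": 100.0 * ss_f / ss_total}

# Toy usage: a balanced two-level curing-time factor
levels = np.array([7, 7, 28, 28])
y = np.array([9.2, 9.3, 14.5, 14.7])
ss_total = np.sum((y - y.mean()) ** 2)
print(factor_contribution(levels, y, ss_total))
```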

Table 3 presents the ANOVA of LFC properties with the variation in three factors: additive type (A), curing time (B), and additive content (C).

Table 3

ANOVA of LFC properties

Source SS f Df MS i F-value p-value Cont.% Significant
CS (MPa) 303.5 3 101.17 89.42 <0.0001
A 1.76 1 1.76 1.56 0.2171 0.47536733 No
B 263.62 1 263.62 233.03 <0.0001 71.2024633 Yes
C 38.11 1 38.11 33.69 <0.0001 10.2933232 Yes
Residual 66.75 59 1.13
Cor total 370.24 62
FS (MPa) 21.85 3 7.28 115.84 <0.0001
A 0.126 1 0.126 2 0.1622 0.492957746 No
B 19.91 1 19.91 316.62 <0.0001 77.89514867 Yes
C 1.82 1 1.82 28.91 <0.0001 7.120500782 Yes
Residual 3.71 59 0.0629
Cor total 25.56 62
STS (MPa) 11.52 3 3.84 100.58 <0.0001
A 0.0754 1 0.0754 1.98 0.1651 0.547567175 No
B 10.75 1 10.75 281.62 <0.0001 78.06826434 Yes
C 0.6925 1 0.6925 18.14 <0.0001 5.029048656 Yes
Residual 2.25 59 0.0382
Cor total 13.77 62
WA (%) 92.42 3 30.81 102.19 <0.0001
A 2.06 1 2.06 6.83 0.0113 1.869328494 Yes
B 20.22 1 20.22 67.09 <0.0001 18.34845735 Yes
C 70.13 1 70.13 232.65 <0.0001 63.63883848 Yes
Residual 17.79 59 0.3015
Cor total 110.2 62
P (%) 174.5 3 58.17 214.54 <0.0001
A 2.68 1 2.68 9.87 0.0026 1.406898 Yes
B 19.85 1 19.85 73.2 <0.0001 10.42049451 Yes
C 151.98 1 151.98 560.56 <0.0001 79.78371568 Yes
Residual 16 59 0.2711
Cor total 190.49 62

The contribution of curing time (B), which reached 71.20, 77.89, and 78.06% for CS, FS, and STS, respectively, is more significant than that of additive content (C), with 10.29, 7.12, and 5.02%. Additive type (A) is deemed insignificant (p-value > 0.05) for CS, FS, and STS. Conversely, for WA and P, the contribution of additive content (C), with 63.63 and 79.78%, is more pronounced than that of curing time (B), with 18.34 and 10.42%, and additive type (A), with 1.86 and 1.40%, respectively.

The perturbation diagram of the obtained models is illustrated in Figure 1. The perturbation diagram serves as a graphical tool frequently utilized in engineering to illustrate the impact of various factors on the output of interest. This process assists in identifying and analyzing the impact of variations or disturbances in input variables on the overall output. The diagram illustrates that the input variables have been normalized and are displayed on a scale ranging from −1 to +1 [52]. This normalization allows for easier comparison and analysis of the input-output relationships, regardless of the original units or scales of the variables. The diagram illustrates how the system’s output responds to simultaneous changes in the three normalized inputs, helping to pinpoint the input combinations that have the most significant impact on the output, as well as potential interactions among the variables.

Figure 1: Perturbation plot of (a) CS; (b) FS; (c) STS; (d) WA; and (e) P.

Figure 1(a)–(c) illustrate an increase in the CS, FS, and STS parameters as the B factor positively increases (level +1), while an increase in the C factor occurs near the reference point on the negative side (level −1). In addition, factor A shows a slight increase on both the positive side (level +1) and the negative side (level −1). Figure 1(d) reveals a significant increase in the value of WA with the elevation of factor C on the right side (level +1), while A and B increase near the reference point on the left side (level −1). In Figure 1(e), an increase in the value of P is observed with the increase in factor B on the left side (level −1) and factor C on the right side (level +1), while A increases near the reference point on the left side (level −1) (Figure 2).

Figure 2: Q–Q plot and histogram of residuals for normality testing after ANOVA analysis.

With a significance level of α = 0.05, the p-values for all outputs presented in Table 4 exceed this threshold, indicating that we fail to reject the null hypothesis in every case. Consequently, the residuals for each output can be considered to follow a normal distribution.

Table 4

Jarque-Bera test p-values for residual normality test

Output p-value
CS 0.30916
FS 0.26934
STS 0.13400
WA 0.35193
P 0.38533

Note: For all outputs, the p-values exceed the significance level (α = 0.05). Therefore, we fail to reject the null hypothesis, and the residuals can be considered normally distributed.
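The Jarque-Bera test reported in Table 4 can be reproduced with SciPy; the sketch below uses synthetic residuals as a placeholder for the 63 model residuals of each output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.3, size=63)      # placeholder for the model residuals

stat, p_value = stats.jarque_bera(residuals)
# Fail to reject normality when p > alpha = 0.05, as in Table 4
print(f"JB statistic = {stat:.3f}, p-value = {p_value:.3f}, normal: {p_value > 0.05}")
```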

4.2 3D response surfaces

The 3D surface response plots illustrate the impact of variables such as additive type, curing time, and additive content (noted A, B, and C, respectively) on the properties of CS, FS, STS, WA, and P, as depicted in Figure 3(a)–(c).

Figure 3: 3D response surfaces for CS, FS, STS, WA, and P: (a) additive type and curing time, (b) additive type and additive content, and (c) curing time and additive content.

In the 3D response surface plots, color coding is used to visually represent the effect of varying input variables, such as additive type, curing time, and additive content, on the physicomechanical properties of LFC. The plots help to illustrate how these variables interact to influence outcomes like CS or FS.

The color scheme in the plots provides a clear visual guide, with pink areas indicating regions of highest response intensity, where the desired property (such as CS) is maximized. Conversely, red areas represent regions of lowest response intensity, indicating weaker performance in the property being measured. For example, a high CS may occur in the pink areas when the curing time is 28 days and the additive content is 20%, while the red areas might indicate poor results when the curing time is shorter or the additive content is lower.

The substitution of cement with the three additives, OSA, SSA, and MSA, at a curing time of 28 days and an additive content ranging from 4 to 20%, results in maximum mechanical strength of the LFC, whether in compression, flexure, or splitting tension.

After a curing period of 28 days and an additive concentration fluctuating between 18 and 30%, MSA displays a lower WA capacity compared with samples containing OSA and SSA in the LFC.

Following a curing period of 28 days and an additive concentration ranging from 24 to 30%, OSA and MSA ashes exhibit lower P than the sample containing SSA ash in the LFC.

5 Predictive modeling

5.1 ANN K-fold cross validation modeling

The human brain comprises a vast network of neurons connected by synapses. When an individual interacts with their environment, such as through sight or hearing, specific neurons are activated [65,66]. This activation enables the person to distinguish between various stimuli. ANNs aim to replicate this process [67,68].

ANNs are organized in layers to successively analyze input. The input layer acquires raw data and transmits it to the following levels. Hidden layers, situated between input and output, execute computations by using weights, biases, and activation functions (e.g., ReLU, sigmoid) to convert data into significant patterns [69,70]. These layers facilitate the network’s ability to comprehend intricate relationships – an increased number of hidden layers permits more profound feature extraction, being the foundation of “deep learning.” The output layer generates the outcome, such as a classification or numerical prediction, utilizing task-specific activations like softmax or linear functions. Collectively, these layers emulate hierarchical information processing, transforming raw inputs into useful insights [72,73]. This architecture is an ANN perceptron, as shown in Figure 4.

Figure 4: Graphical representation of an ANN perceptron.

Supposing a neural network with N layers, we define the following quantities:

(13) $y_j^{(n)} = \sum_k w_{jk}^{(n)} a_k^{(n-1)} + b_j^{(n)}, \quad n = 1, \ldots, N,$

(14) $a_j^{(n)} = f\!\left(\sum_k w_{jk}^{(n)} a_k^{(n-1)} + b_j^{(n)}\right) = f(y_j^{(n)}), \quad n = 1, \ldots, N,$

where y is the output variable to be modeled, $a_j^{(0)} = x_j$, with $x_j$ the jth input, and $w_{jk}^{(n)}$ is the weight connecting the kth neuron of the (n−1)th layer to the jth neuron of the nth layer. The biases $b_j^{(n)}$ and the activation function f are defined similarly.
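Eqs. (13) and (14) describe the standard forward pass of a fully connected network. A minimal NumPy sketch is shown below with arbitrary layer sizes, a tanh hidden activation, and a linear output layer for regression; the sizes and weights are illustrative.

```python
import numpy as np

def forward(x, weights, biases, hidden_act=np.tanh):
    """Forward pass of Eqs. (13)-(14): y^(n) = W^(n) a^(n-1) + b^(n), a^(n) = f(y^(n))."""
    a = x                                            # a^(0) = x (input layer)
    for i, (W, b) in enumerate(zip(weights, biases)):
        y = W @ a + b                                # Eq. (13)
        a = y if i == len(weights) - 1 else hidden_act(y)   # Eq. (14); linear output layer
    return a

rng = np.random.default_rng(1)
sizes = [3, 6, 4, 1]          # 3 inputs, two hidden layers, 1 output (illustrative)
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]

x = np.array([1.0, 28.0, 15.0])   # additive type, curing time (days), content (%)
print(forward(x, weights, biases))
```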

After an ANN model is trained on labeled data, its performance needs to be verified on new data. A critical step – model validation – is performed to ensure the accuracy of the prediction model. This process entails assessing whether the predicted results, which quantify hypothetical relationships between variables, are acceptable as adequate representations of the data.

One of the commonly used methods to assess the effectiveness of an ANN model is K-fold cross-validation, a resampling method that allows a model to be evaluated even when data are limited [74,75]. K-fold is easy to understand and highly popular. Compared with other cross-validation approaches, it generally tends to produce a less biased model [76,77], because it ensures that every observation in the original dataset appears in both the training set and the test set across the different folds. When input data are limited, K-fold is thus one of the most relevant approaches [78].

The first step is to randomly divide the dataset into K folds. This procedure is governed by a single parameter, K, which represents the number of groups into which the sample is partitioned [79]. The value of K must be chosen wisely based on the size of the dataset, avoiding values that are too low or too high. In this case, K = 3, meaning that the dataset is segmented into three distinct parts. An iterative learning process then follows, in which the model is trained on K − 1 folds and tested on the remaining fold. This process repeats until each fold has been used as the test set exactly once (Figure 5). The model’s performance metric is evaluated by averaging the recorded scores [81,82]. Figure 6 illustrates the flowchart of the K-fold cross-validation process employed in training the ANN model.
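A sketch of the 3-fold procedure described above, using scikit-learn's KFold with MLPRegressor standing in for the ANN (the authors' own implementation is not specified here), on placeholder data of the same shape as the 63-point dataset:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# X: (63, 3) inputs [additive type, curing days, content %]; y: one response, e.g. CS
rng = np.random.default_rng(0)
X = rng.uniform([1, 7, 0], [3, 28, 30], size=(63, 3))              # placeholder data
y = 5 + 0.3 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 0.3, 63)    # placeholder response

kf = KFold(n_splits=3, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in kf.split(X):
    model = MLPRegressor(hidden_layer_sizes=(6, 4), activation="tanh",
                         max_iter=5000, random_state=0)
    model.fit(X[train_idx], y[train_idx])                  # train on K-1 folds
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))  # test on held-out fold

print("mean R^2 over folds:", np.mean(scores))
```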

Figure 5: Schematic representation of K-fold cross-validation [71].

Figure 6: Flowchart of the K-fold cross-validation.

The architecture of an ANN defines how neurons are structured in layers and connected to each other [83,84]. The parameters optimized in this section include the size of the network, which encompasses the number of hidden layers and the number of nodes in each layer, as well as the activation functions for each layer [84,85]. The dimension of the network is particularly important when designing a neural network, as it determines the number of layers and nodes in each of them. Activation functions influence the output of a neuron based on its inputs. In this study, three types of activation functions were evaluated: the sigmoid transfer function (hyperbolic tangent), the linear transfer function, and the radial basis function (Gaussian) [86,87]. The optimal architectures of ANNs, aiming to maximize the performance of prediction models for CS, FS, STS, WA, and P, are presented in Table 5.

Table 5

Optimal architectures of ANN

ANN model | Number of hidden layers | Nodes and activation functions
CS | 2 | First layer: 6 nodes (3 sigmoid, 2 linear, 1 Gaussian); second layer: 4 nodes (2 sigmoid, 1 linear, 1 Gaussian)
FS | 1 | 4 nodes (sigmoid)
STS | 1 | 4 nodes (sigmoid)
WA | 1 | 6 nodes (sigmoid)
P | 1 | 5 nodes (3 sigmoid, 2 linear)

ANN models with K-fold cross-validation were thus developed with a different optimized architecture for each response.

5.2 Hybrid deep neural network optimization using improved grey wolf optimizer (DNN-IGWO)

Figure 7 depicts the architecture of the DNN-IGWO, which represents an advanced hybrid modeling strategy developed to improve both the predictive performance and computational robustness of the learning process. By integrating the deep representation learning strengths of DNNs with the metaheuristic optimization capabilities of the IGWO algorithm, this approach enables the automated configuration of network design parameters and hyperparameters [88,89,90]. The central aim of this framework is to enhance the prediction accuracy of mechanical properties through the systematic optimization of critical model components, including the number and depth of hidden layers, the distribution of neurons, and the activation function types.

Figure 7: Flow chart of the hybrid algorithm DNN-IGWO.

At the outset of the optimization process, the parameters of the IGWO are initialized; these include the number of search agents, the dimensionality of the problem space, and the maximum iteration count. These parameters critically influence the convergence behavior of the algorithm by regulating the dynamics of the exploration and exploitation mechanisms. Upon initialization, the structure of the DNN is constructed, detailing the number of layers and the specific configuration of neurons per layer, which are subsequently optimized through the IGWO process.

The optimization process is focused on determining the optimal number of hidden layers, neurons per layer, activation functions, and learning algorithms. Unlike conventional methods, IGWO enabled an automated and efficient search for the best-performing DNN configuration, improving predictive accuracy and generalization. The learning algorithms evaluated included trainlm, trainbr, trainbfg, and others, while multiple activation functions were considered to ensure adaptability to nonlinear patterns. Table 6 details the optimized architecture of DNN.

Table 6

DNN optimization parameters

Hidden layers Hidden layer size Learning algorithms Activation functions
Min: 1 Max: 10 Min: 1 Max: 10 Trainlm: LM backpropagation Compet: Competitive transfer function
Trainbr: Bayesian regulation backpropagation Elliotsig: Elliot sigmoid transfer function
Trainbfg: BFGS quasi-Newton backpropagation Hardlim: Positive hard limit transfer function
Traincgb: Conjugate gradient backpropagation with Powell-Beale restarts Hardlims: Symmetric hard limit transfer function
Traincgf: Conjugate gradient backpropagation with Fletcher-Reeves updates Logsig: Logarithmic sigmoid transfer function
Traincgp: Conjugate gradient backpropagation with Polak-Ribiere updates Netinv: Inverse transfer function
Traingd: Gradient descent backpropagation Poslin: Positive linear transfer function
Traingda: Gradient descent with adaptive lr backpropagation Purelin: Linear transfer function
Traingdm: Gradient descent with momentum Radbas: Radial basis transfer function
Traingdx: Gradient descent w/momentum and adaptive lr backpropagation Radbasn: Radial basis normalized transfer function
Trainoss: One step secant backpropagation Satlin: Positive saturating linear transfer function
Trainrp: RPROP backpropagation Satlins: Symmetric saturating linear transfer function
Trainscg: Scaled conjugate gradient backpropagation Softmax: Soft max transfer function
Tansig: Symmetric sigmoid transfer function
Tribas: Triangular basis transfer function
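In simplified form, the architecture search of Table 6 can be illustrated with a basic grey wolf optimizer over two integer hyperparameters (number of hidden layers and neurons per layer, both in [1, 10]). This is a generic GWO sketch, not the authors' improved variant, and the fitness function is a toy placeholder for the cross-validated error of the corresponding DNN.

```python
import numpy as np

rng = np.random.default_rng(0)
LB, UB = np.array([1.0, 1.0]), np.array([10.0, 10.0])   # [hidden layers, neurons/layer], as in Table 6

def fitness(pos):
    """Placeholder: would return the cross-validated error of a DNN built from the rounded position."""
    n_layers, n_neurons = int(round(pos[0])), int(round(pos[1]))
    return abs(n_layers - 3) + abs(n_neurons - 8) + rng.normal(0, 0.01)  # toy surrogate

n_wolves, n_iter, dim = 10, 50, 2
wolves = rng.uniform(LB, UB, size=(n_wolves, dim))

for t in range(n_iter):
    scores = np.array([fitness(w) for w in wolves])
    order = np.argsort(scores)
    alpha, beta, delta = wolves[order[:3]]           # the three best wolves lead the search
    a = 2 - 2 * t / n_iter                           # exploration factor decreases linearly
    for i in range(n_wolves):
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            new_pos += (leader - A * D) / 3.0        # average of the three leader-guided moves
        wolves[i] = np.clip(new_pos, LB, UB)

best = wolves[np.argmin([fitness(w) for w in wolves])]
print("best architecture ~", np.round(best).astype(int))
```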

The Levenberg–Marquardt (LM) optimizer was selected for training the ANN because of its swift convergence and stability, making it particularly appropriate for small to medium-sized datasets such as the one utilized in this study. The LM algorithm combines the benefits of gradient descent and the Gauss–Newton method, facilitating rapid training while addressing the intricate nonlinear interactions within the dataset [91,92]. Its adaptability in managing both linear and nonlinear models renders it optimal for forecasting the physicomechanical properties of concrete, ensuring a sound balance between computational efficiency and model precision.

The optimization of DNN architectures through the IGWO has led to the development of a bespoke model tailored for the prediction of five key mechanical and physical properties of concrete: CS, FS, STS, WA, and P. The optimized architecture, whose detailed configuration is presented in Table 7, includes essential design elements such as the number of hidden layers, neurons per layer, activation functions, and the selected learning algorithm.

Table 7

Optimal parameters of DNN obtained with IGWO

Parameter HLayer number HLayer size Learning-algorithm Act-Fct
CS 3 9 Trainbr Logsig
8 Elliotsig
9 Elliotsig
FS 3 10 Trainbr Elliotsig
8 Tansig
5 Radbasn
STS 2 8 Trainbr Elliotsig
9 Radbasn
WA 2 8 Trainbr Elliotsig
8 Logsig
P 2 9 Trainbr Elliotsig
8 Radbasn

5.3 Support vector machines (SVMs)

SVMs are a machine learning technique used for classification and regression. This method is based on the Vapnik-Chervonenkis statistical learning theory [93,94,95]. In 1995, Cortes and Vapnik [96] proposed an adaptation of SVMs to solve regression problems, using the kernel trick. This approach is essential in machine learning because it allows nonlinear problems to be addressed using linear classifiers in a transformed space.

Unlike ANNs, SVMs are capable of providing reliable predictions even with limited data and are less susceptible to overfitting [97]. For example, Ulas and Sami demonstrated the effectiveness of SVMs in predicting surface roughness during turning of AISI 304 steel, despite a small amount of experimental data [98].

5.4 Predictive modeling results

In this section, we trained, tested, and validated the RSM, ANN, DNN-IGWO, and SVM models to predict CS, FS, STS, WA, and P using the 63 experimentally obtained data points. Various visualization tools, such as scatter plots and spider plots, were used, complementing evaluation metrics such as the coefficient of determination (R²), the objective function (OBJ), and error criteria (root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute deviation (MAD)). Taylor diagrams were used to illustrate the statistical agreement between each model and the experimental data. We examined the predictive capabilities of the different models by comparing the model predictions with the corresponding experimental data. Table 8 provides the formulas for calculating the error criteria.

Table 8

Error functions [80]

Criteria and formulas
RMSE: $\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{x=1}^{n} (y_{e_x} - y_{p_x})^2}$
MAPE (%): $\mathrm{MAPE} = \dfrac{100}{n}\sum_{x=1}^{n} \left|\dfrac{y_{e_x} - y_{p_x}}{y_{e_x}}\right|$
MAD: $\mathrm{MAD} = \dfrac{1}{n}\sum_{x=1}^{n} \left| y_{e_x} - y_{p_x} \right|$
R²: $R^{2} = \dfrac{\sum_{x=1}^{n} (y_{p_x} - \bar{y})^{2}}{\sum_{x=1}^{n} (y_{e_x} - \bar{y})^{2}}$
OBJ: $\mathrm{OBJ} = \dfrac{\mathrm{RMSE} + \mathrm{MAE}}{R^{2} + 1}$

where $y_{e_x}$ represents the experimental value of the xth trial, $y_{p_x}$ denotes the predicted value of the xth trial, $\bar{y}$ denotes the average of the experimentally determined values, and n represents the number of experiments.
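The error criteria of Table 8 can be computed directly. The sketch below follows the formulas above, with R² taken as the ratio of explained to total variation and MAD standing in for MAE in the OBJ criterion; the example inputs are illustrative.

```python
import numpy as np

def error_metrics(y_exp, y_pred):
    """RMSE, MAPE, MAD, R^2 and OBJ as defined in Table 8 (MAD used in place of MAE)."""
    y_exp, y_pred = np.asarray(y_exp, float), np.asarray(y_pred, float)
    n = len(y_exp)
    rmse = np.sqrt(np.sum((y_exp - y_pred) ** 2) / n)
    mape = 100.0 * np.mean(np.abs((y_exp - y_pred) / y_exp))
    mad = np.mean(np.abs(y_exp - y_pred))
    r2 = np.sum((y_pred - y_exp.mean()) ** 2) / np.sum((y_exp - y_exp.mean()) ** 2)
    obj = (rmse + mad) / (r2 + 1.0)
    return {"RMSE": rmse, "MAPE": mape, "MAD": mad, "R2": r2, "OBJ": obj}

# Toy usage with three experimental/predicted pairs
print(error_metrics([9.2, 12.2, 14.5], [9.19, 12.44, 14.54]))
```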

Tables 9–13 and the distribution plots in Figure 8 compare the predictions of CS, FS, STS, WA, and P with the actual values for the four prediction models: RSM, ANN, DNN-IGWO, and SVM. The points are represented by different symbols depending on the model, while the solid line y = x and the dashed lines (+10 and −10%) serve as references to evaluate the accuracy of the models.

Table 9

Experimental and predictive CS results

N FS Exp. (MPa) FS RSM (MPa) FS ANN (MPa) FS DNN (MPa) FS SVM (MPa)
1 2.10 1.99 2.07 2.10 2.35
2 2.80 2.78 2.81 2.80 2.79
3 3.40 3.43 3.35 3.40 3.67
4 2.00 2.14 2.06 2.00 2.26
5 2.90 2.93 2.87 2.90 2.70
6 3.50 3.56 3.52 3.50 3.58
7 2.10 2.20 2.08 2.10 2.17
8 2.90 2.98 2.95 2.90 2.61
9 3.60 3.60 3.63 3.60 3.48
10 2.20 2.16 2.13 2.20 2.07
11 3.00 2.94 3.01 3.00 2.51
12 3.70 3.55 3.66 3.70 3.39
13 2.00 2.03 2.07 2.00 1.98
14 2.80 2.80 2.90 2.80 2.42
15 3.50 3.40 3.50 3.50 3.30
16 1.80 1.81 1.81 1.72 1.89
17 2.50 2.57 2.50 2.50 2.33
18 3.10 3.16 3.08 3.10 3.20
19 1.60 1.50 1.59 1.60 1.79
20 2.30 2.25 2.14 2.30 2.23
21 2.80 2.83 2.76 2.80 3.11
22 2.10 1.88 2.07 2.10 2.43
23 2.80 2.69 2.78 2.80 2.87
24 3.40 3.35 3.27 3.40 3.75
25 2.00 2.04 2.04 2.00 2.34
26 2.80 2.84 2.83 2.80 2.78
27 3.40 3.49 3.44 3.40 3.66
28 2.00 2.10 2.05 2.00 2.25
29 2.80 2.89 2.89 2.80 2.68
30 3.50 3.54 3.57 3.50 3.56
31 2.10 2.07 2.05 2.02 2.15
32 2.90 2.86 2.92 2.90 2.59
33 3.60 3.49 3.60 3.75 3.47
34 1.90 1.95 1.93 1.90 2.06
35 2.70 2.73 2.78 2.74 2.50
36 3.40 3.35 3.41 3.40 3.38
37 1.70 1.73 1.64 1.70 1.97
38 2.50 2.50 2.38 2.50 2.41
39 3.00 3.11 2.99 3.00 3.28
40 1.50 1.42 1.49 1.50 1.87
41 2.20 2.19 2.10 2.20 2.31
42 2.70 2.79 2.72 2.70 3.19
43 2.10 2.04 2.10 2.10 2.51
44 2.80 2.85 2.82 2.80 2.95
45 3.40 3.54 3.41 3.40 3.83
46 2.10 2.20 2.12 2.10 2.42
47 3.00 3.01 2.96 3.00 2.86
48 3.60 3.69 3.66 3.60 3.73
49 2.20 2.27 2.22 2.20 2.32
50 3.10 3.07 3.11 3.10 2.76
51 3.80 3.74 3.83 3.80 3.64
52 2.30 2.24 2.28 2.30 2.23
53 3.20 3.04 3.16 3.20 2.67
54 4.00 3.69 3.86 4.00 3.55
55 2.10 2.12 2.11 2.10 2.14
56 2.90 2.91 2.97 2.93 2.58
57 3.60 3.56 3.64 3.60 3.46
58 1.80 1.91 1.81 1.80 2.05
59 2.60 2.70 2.60 2.60 2.48
60 3.20 3.33 3.23 3.20 3.36
61 1.70 1.60 1.72 1.70 1.95
62 2.40 2.38 2.39 2.39 2.39
63 3.00 3.01 3.02 3.00 3.27
Table 10

Experimental and predictive FS results

N STS Exp. (MPa) STS RSM (MPa) STS ANN (MPa) STS DNN (MPa) STS SVM (MPa)
1 15.54 15.42 15.34 15.54 16.47
2 15.22 15.60 15.45 15.22 16.01
3 14.50 14.36 14.47 14.50 15.10
4 16.80 16.51 16.74 16.66 16.99
5 16.91 16.67 16.87 16.91 16.54
6 15.60 15.40 15.65 15.61 15.62
7 17.57 17.40 17.59 17.57 17.52
8 17.76 17.55 17.75 17.76 17.06
9 16.30 16.25 16.33 16.30 16.14
10 18.08 18.10 18.08 18.08 18.04
11 18.25 18.23 18.30 18.26 17.58
12 16.75 16.90 16.76 16.75 16.66
13 18.46 18.60 18.43 18.45 18.56
14 18.65 18.72 18.69 18.65 18.10
15 17.10 17.36 17.09 17.10 17.19
16 18.74 18.91 18.73 18.74 19.09
17 18.95 19.02 19.00 18.97 18.63
18 17.40 17.63 17.40 17.40 17.71
19 18.97 19.03 19.03 18.98 19.61
20 19.25 19.12 19.26 19.25 19.15
21 17.70 17.71 17.75 17.70 18.23
22 15.54 15.90 15.45 15.54 16.20
23 15.22 16.06 15.36 15.22 15.74
24 14.50 14.78 14.65 14.50 14.82
25 17.06 16.93 16.99 17.09 16.72
26 17.23 17.08 16.91 17.23 16.26
27 15.90 15.77 15.90 15.83 15.35
28 17.88 17.77 17.92 17.88 17.25
29 18.03 17.90 17.90 18.03 16.79
30 16.70 16.56 16.66 16.70 15.87
31 18.36 18.41 18.47 18.37 17.77
32 18.53 18.53 18.52 18.53 17.31
33 17.20 17.16 17.17 17.20 16.39
34 18.76 18.86 18.81 18.75 18.29
35 18.93 18.96 18.95 18.92 17.83
36 17.60 17.57 17.56 17.59 16.92
37 19.05 19.12 19.07 19.04 18.81
38 19.23 19.21 19.25 19.25 18.36
39 17.90 17.78 17.91 17.91 17.44
40 19.27 19.18 19.27 19.28 19.34
41 19.53 19.25 19.45 19.52 18.88
42 18.20 17.80 18.24 18.19 17.96
43 15.54 15.36 15.14 15.54 15.93
44 15.22 15.50 15.30 15.22 15.47
45 14.50 14.18 14.39 14.50 14.55
46 16.61 16.33 16.46 16.61 16.45
47 16.63 16.46 16.59 16.63 15.99
48 15.20 15.12 15.32 15.20 15.07
49 17.21 17.11 17.22 17.21 16.97
50 17.43 17.23 17.37 17.43 16.52
51 15.80 15.86 15.84 15.80 15.60
52 17.61 17.70 17.65 17.62 17.50
53 17.83 17.80 17.84 17.84 17.04
54 16.20 16.40 16.16 16.21 16.12
55 17.91 18.10 17.91 17.90 18.02
56 18.13 18.18 18.15 18.12 17.56
57 16.50 16.75 16.42 16.48 16.64
58 18.11 18.30 18.10 18.12 18.54
59 18.33 18.37 18.38 18.34 18.08
60 16.70 16.91 16.67 16.71 17.17
61 18.31 18.31 18.28 18.31 19.07
62 18.53 18.36 18.53 18.53 18.61
63 16.90 16.88 16.93 16.90 17.69
Table 11

Experimental and predictive STS results

N CS Exp. (MPa) CS RSM (MPa) CS ANN (MPa) CS DNN (MPa) CS SVM (MPa)
1 9.2 8.86 9.19 2.1 1.99
2 12.2 12.08 12.44 2.8 2.78
3 14.5 14.47 14.54 3.4 3.43
4 9.3 9.56 9.20 2 2.14
5 12.5 12.72 12.58 2.9 2.93
6 14.7 15.01 14.93 3.5 3.56
7 9.5 9.85 9.50 2.1 2.20
8 12.7 12.96 12.92 2.9 2.98
9 15.0 15.14 15.38 3.6 3.60
10 9.8 9.73 9.62 2.2 2.16
11 13.1 12.79 12.94 3 2.94
12 15.4 14.86 15.37 3.7 3.55
13 9.0 9.20 9.17 2 2.03
14 12.2 12.21 12.23 2.8 2.80
15 14.2 14.17 14.51 3.5 3.40
16 8.3 8.27 8.26 1.80 1.81
17 11 11.22 10.85 2.5 2.57
18 13.0 13.08 12.92 3.1 3.16
19 7.3 6.92 7.81 1.6 1.50
20 9.9 9.82 9.83 2.30 2.25
21 11.7 11.58 11.72 2.80 2.83
22 9.2 8.51 8.99 2.10 1.88
23 12.2 11.76 12.26 2.80 2.69
24 14.5 14.20 14.37 3.4 3.35
25 9.10 9.21 9.22 2 2.04
26 12.3 12.41 12.49 2.8 2.84
27 14.5 14.74 14.56 3.4 3.49
28 9.10 9.51 8.99 2 2.10
29 12.4 12.65 12.34 2.8 2.89
30 14.6 14.88 14.62 3.5 3.54
31 9.4 9.39 8.91 2.1 2.07
32 12.7 12.48 12.32 2.9 2.86
33 14.9 14.60 14.73 3.6 3.49
34 8.8 8.87 8.57 1.9 1.95
35 11.9 11.91 11.83 2.7 2.73
36 14.1 13.92 14.19 3.4 3.35
37 7.9 7.94 7.65 1.7 1.73
38 10.6 10.92 10.53 2.5 2.50
39 12.6 12.83 12.71 3 3.11
40 6.9 6.60 6.91 1.50 1.42
41 9.4 9.53 9.18 2.20 2.19
42 11.1 11.33 11.18 2.70 2.79
43 9.2 9.17 9.05 2.10 2.04
44 12.2 12.44 12.19 2.80 2.85
45 14.5 14.94 14.28 3.40 3.54
46 9.5 9.88 9.67 2.10 2.20
47 12.9 13.10 12.96 3 3.01
48 15.1 15.49 15.17 3.60 3.69
49 10.1 10.18 10.18 2.20 2.27
50 13.6 13.34 13.58 3.10 3.07
51 15.9 15.63 15.86 3.80 3.74
52 10.3 10.07 10.31 2.30 2.24
53 14.0 13.18 13.70 3.2 3.04
54 16.3 15.36 15.89 4 3.69
55 9.4 9.55 9.47 2.1 2.12
56 12.7 12.61 12.76 2.9 2.91
57 14.9 14.68 14.98 3.6 3.56
58 8.3 8.63 8.31 1.8 1.91
59 11.2 11.63 11.36 2.6 2.70
60 13.1 13.60 13.61 3.2 3.33
61 7.6 7.29 7.56 1.7 1.60
62 10.3 10.24 9.93 2.4 2.38
63 12.0 12.10 11.96 3 3.01
Table 12

Experimental and predictive WA results

N WA Exp. (%) WA RSM (%) WA ANN (%) WA DNN (%) WA SVM (%)
1 15.54 15.42 15.34 15.54 16.47
2 15.22 15.60 15.45 15.22 16.01
3 14.50 14.36 14.47 14.50 15.10
4 16.80 16.51 16.74 16.66 16.99
5 16.91 16.67 16.87 16.91 16.54
6 15.60 15.40 15.65 15.61 15.62
7 17.57 17.40 17.59 17.57 17.52
8 17.76 17.55 17.75 17.76 17.06
9 16.30 16.25 16.33 16.30 16.14
10 18.08 18.10 18.08 18.08 18.04
11 18.25 18.23 18.30 18.26 17.58
12 16.75 16.90 16.76 16.75 16.66
13 18.46 18.60 18.43 18.45 18.56
14 18.65 18.72 18.69 18.65 18.10
15 17.10 17.36 17.09 17.10 17.19
16 18.74 18.91 18.73 18.74 19.09
17 18.95 19.02 19.00 18.97 18.63
18 17.40 17.63 17.40 17.40 17.71
19 18.97 19.03 19.03 18.98 19.61
20 19.25 19.12 19.26 19.25 19.15
21 17.70 17.71 17.75 17.70 18.23
22 15.54 15.90 15.45 15.54 16.20
23 15.22 16.06 15.36 15.22 15.74
24 14.50 14.78 14.65 14.50 14.82
25 17.06 16.93 16.99 17.09 16.72
26 17.23 17.08 16.91 17.23 16.26
27 15.90 15.77 15.90 15.83 15.35
28 17.88 17.77 17.92 17.88 17.25
29 18.03 17.90 17.90 18.03 16.79
30 16.70 16.56 16.66 16.70 15.87
31 18.36 18.41 18.47 18.37 17.77
32 18.53 18.53 18.52 18.53 17.31
33 17.20 17.16 17.17 17.20 16.39
34 18.76 18.86 18.81 18.75 18.29
35 18.93 18.96 18.95 18.92 17.83
36 17.60 17.57 17.56 17.59 16.92
37 19.05 19.12 19.07 19.04 18.81
38 19.23 19.21 19.25 19.25 18.36
39 17.90 17.78 17.91 17.91 17.44
40 19.27 19.18 19.27 19.28 19.34
41 19.53 19.25 19.45 19.52 18.88
42 18.20 17.80 18.24 18.19 17.96
43 15.54 15.36 15.14 15.54 15.93
44 15.22 15.50 15.30 15.22 15.47
45 14.50 14.18 14.39 14.50 14.55
46 16.61 16.33 16.46 16.61 16.45
47 16.63 16.46 16.59 16.63 15.99
48 15.20 15.12 15.32 15.20 15.07
49 17.21 17.11 17.22 17.21 16.97
50 17.43 17.23 17.37 17.43 16.52
51 15.80 15.86 15.84 15.80 15.60
52 17.61 17.70 17.65 17.62 17.50
53 17.83 17.80 17.84 17.84 17.04
54 16.20 16.40 16.16 16.21 16.12
55 17.91 18.10 17.91 17.90 18.02
56 18.13 18.18 18.15 18.12 17.56
57 16.50 16.75 16.42 16.48 16.64
58 18.11 18.30 18.10 18.12 18.54
59 18.33 18.37 18.38 18.34 18.08
60 16.70 16.91 16.67 16.71 17.17
61 18.31 18.31 18.28 18.31 19.07
62 18.53 18.36 18.53 18.53 18.61
63 16.90 16.88 16.93 16.90 17.69
Table 13

Experimental and predictive P results

N P Exp. (%) P RSM (%) P ANN (%) P DNN (%) P SVM (%)
1 35.10 35.12 35.12 35.10 35.49
2 34.40 34.33 34.42 34.40 35.06
3 33.70 33.53 33.58 33.70 34.20
4 36.50 36.32 36.32 36.48 36.33
5 35.60 35.55 35.54 35.62 35.90
6 34.90 34.77 34.74 34.90 35.05
7 37.50 37.36 37.40 37.52 37.17
8 36.70 36.60 36.54 36.67 36.74
9 35.90 35.85 35.80 35.91 35.89
10 38.30 38.25 38.32 38.29 38.02
11 37.50 37.50 37.41 37.52 37.59
12 36.70 36.77 36.73 36.69 36.73
13 38.90 38.97 39.07 38.87 38.86
14 38.20 38.24 38.16 38.21 38.43
15 37.40 37.54 37.51 37.41 37.57
16 39.30 39.54 39.65 39.33 39.70
17 38.80 38.82 38.80 38.78 39.27
18 38.10 38.15 38.14 38.10 38.42
19 39.70 39.96 40.08 39.69 40.54
20 39.20 39.25 39.34 39.21 40.11
21 38.60 38.60 38.63 38.60 39.26
22 35.10 35.69 35.04 35.42 35.29
23 34.40 34.93 34.42 34.40 34.86
24 33.70 34.17 33.76 33.70 34.00
25 36.70 36.84 36.81 36.70 36.13
26 36.10 36.09 36.08 36.09 35.70
27 35.20 35.36 35.42 35.20 34.84
28 38.00 37.84 38.03 37.99 36.97
29 37.30 37.10 37.20 37.31 36.54
30 36.40 36.40 36.55 36.40 35.69
31 38.90 38.68 38.88 38.91 37.81
32 38.20 37.95 38.00 38.19 37.39
33 37.40 37.27 37.39 37.39 36.53
34 39.50 39.36 39.51 39.54 38.66
35 38.80 38.65 38.64 38.81 38.23
36 38.00 37.99 38.05 38.00 37.37
37 40.00 39.88 39.99 40.00 39.50
38 39.30 39.18 39.18 39.30 39.07
39 38.60 38.55 38.56 38.60 38.21
40 40.40 40.25 40.34 40.26 40.34
41 39.70 39.56 39.65 39.70 39.91
42 39.00 38.96 38.96 39.00 39.06
43 35.10 34.83 35.11 35.33 35.09
44 34.40 34.09 34.43 34.40 34.66
45 33.70 33.38 33.70 33.70 33.80
46 36.10 35.94 36.08 36.10 35.93
47 35.20 35.21 35.32 35.25 35.50
48 34.70 34.53 34.67 34.70 34.64
49 36.90 36.89 36.91 36.90 36.77
50 36.10 36.17 36.08 36.10 36.34
51 35.50 35.51 35.50 35.50 35.49
52 37.60 37.68 37.57 37.60 37.61
53 36.90 36.98 36.72 36.90 37.18
54 36.20 36.34 36.17 36.20 36.33
55 38.20 38.31 38.10 38.20 38.45
56 36.70 37.63 37.27 36.70 38.03
57 36.90 37.02 36.73 36.90 37.17
58 38.70 38.79 38.58 38.70 39.30
59 38.20 38.12 37.84 38.19 38.87
60 37.50 37.53 37.28 37.50 38.01
61 39.10 39.11 39.22 39.10 40.14
62 38.70 38.45 38.65 38.70 39.71
63 38.00 37.89 38.09 37.89 38.86
Figure 8: Comparison of measured values of (a) CS, (b) FS, (c) STS, (d) WA, and (e) P and predicted values using RSM, ANN, DNN-IGWO, and SVM models.

These distribution plots show that the neural network-based models ANN and DNN-IGWO significantly outperform traditional RSM and SVM models in terms of predictive accuracy. DNN-IGWO performs best, with near-perfect predictions within the ±10% error band, followed by ANN. RSM models show moderate accuracy, with greater dispersion. Finally, SVM models are the least accurate, with predictions largely deviating from the actual values, which limits their reliability for this task.

Table 14 and the spider plots in Figure 9 present the comparative analysis of the predictive models (RSM, ANN, DNN, and SVM) applied to the five target outputs (CS, FS, STS, WA, and P). This analysis highlights the superiority of the DNN. For all outputs, the DNN model systematically presents the best performance, with the lowest values of the error indicators (MAD, RMSE, MAPE) and the highest coefficients of determination (R² ≈ 0.999), reflecting remarkable accuracy and generalization capacity. Conversely, the SVM model is distinguished by the highest errors and the lowest R² values, particularly for the CS, STS, WA, and P outputs, demonstrating a more limited predictive capacity. The RSM and ANN models offer intermediate performance, with the ANN sometimes approaching the DNN without equaling it. Thus, the DNN model appears to be the most reliable and robust predictive tool for modeling the studied outputs.

Table 14

Comparison between performance indices of RSM, ANN, IGWO, and SVM models

CS FS STS WA P
RSM ANN DNN SVM RSM ANN DNN SVM RSM ANN DNN SVM RSM ANN DNN SVM RSM ANN DNN SVM
MAD 0.245 0.144 0.016 0.839 0.069 0.039 0.006 0.208 0.059 0.045 0.008 0.158 0.155 0.060 0.009 0.454 0.143 0.097 0.021 0.417
RMSE 0.307 0.194 0.052 1.044 0.088 0.053 0.025 0.246 0.073 0.060 0.034 0.197 0.203 0.094 0.021 0.554 0.212 0.143 0.056 0.527
MAPE (%) 2.135 1.289 0.137 7.373 2.682 1.493 0.248 8.098 3.374 2.459 0.375 8.743 0.927 0.361 0.049 2.619 0.392 0.261 0.056 1.112
R² 0.984 0.994 0.999 0.818 0.981 0.993 0.999 0.853 0.976 0.984 0.995 0.826 0.976 0.995 0.999 0.838 0.985 0.993 0.999 0.914
OBJ 0.278 0.169 0.034 1.036 0.079 0.046 0.015 0.245 0.067 0.053 0.021 0.194 0.181 0.077 0.015 0.548 0.179 0.121 0.038 0.494
Figure 9: Spider plots of RSM, ANN, DNN-IGWO, and SVM for (a) CS, (b) FS, (c) STS, (d) WA, and (e) P.

The Taylor diagrams presented in Figure 10 compare the performance of four prediction models: RSM, ANN, DNN-IGWO, and SVM in terms of standard deviation and correlation coefficient. The DNN-IGWO model stands out as the best performer, as it faithfully reproduces the experimental data with a correlation coefficient close to 1 and a standard deviation almost identical to the reference. ANN follows closely, also showing high accuracy. The RSM models show intermediate performance, with moderate correlations and standard deviations somewhat far from the reference. Finally, SVM performs the worst, displaying a low correlation and an inadequate standard deviation. Overall, the neural network-based models (DNN-IGWO and ANN) significantly outperform conventional approaches, demonstrating their superiority for this prediction task.

Figure 10: Taylor diagrams of RSM, ANN, DNN-IGWO, and SVM for (a) CS, (b) FS, (c) STS, (d) WA, and (e) P.

6 GA multi-objective optimization results

The resolution of multi-objective optimization problems is now a central point in the analysis of most processes. Multi-objective optimization aims to optimize several components of an objective vector simultaneously. Unlike single-objective optimization, solving a multi-objective problem does not lead to a unique solution but instead to a set of solutions called the Pareto optimal set [99,100,101]. This section addresses the optimization of the physicomechanical properties of LFC using GAs applied to the empirical models obtained through the ANN. GAs are highly efficient optimization methods for finding compromises and have become popular in engineering optimization [102]. They replicate the principles of genetics and the Darwinian concept of natural selection (survival of the fittest). The first step is to randomly generate a population of initial solutions in the search space (the chromosomes) and then evaluate the performance of these solutions to create a new population by applying evolutionary operators such as selection, crossover, and mutation [80,103,104]. This cycle (Figure 11) is repeated until a satisfactory solution is obtained.

Figure 11: GA flowchart for multi-objective optimization.

The optimization process begins with data preprocessing, which involves collecting and preparing the input data required for training the ANN. Data normalization is applied to ensure that all input variables are on a comparable scale. The dataset is then divided into training and testing sets to assess the performance of the ANN model.

Once the data are prepared, the next step is training the ANN model. The ANN is designed as a regression model to predict target output variables based on the input features. The training process involves adjusting the network weights to minimize prediction errors, typically using backpropagation and gradient descent algorithms. The trained ANN model is later incorporated into the fitness evaluation of the GA.

After training the ANN, the GA is initialized. The GA begins by generating an initial population of candidate solutions, each representing a set of potential model parameters or hyperparameters for optimization. A key element of the GA is the fitness function, which evaluates the performance of each candidate solution. In this case, the fitness function is based on the ANN model’s accuracy in predicting the desired outputs. The GA is also configured with parameters such as the crossover rate, mutation rate, and the number of generations for evolution.

The next step is to evaluate the fitness of each individual in the population. Using the trained ANN model, the properties of each candidate solution are predicted and scored against the optimization objectives, such as improving CS and FS in the context of LFC. In this way, the GA favours candidate solutions that best satisfy the stated objectives.
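One common way to couple the trained ANN to the GA, continuing the sketch above, is to score each candidate mix directly from the ANN's predicted properties. The weighted score below is a purely illustrative scalarization and is not the study's exact objective formulation:

```python
import numpy as np

# A candidate chromosome encodes [additive type, curing time, additive content].
# Its fitness is derived from the ANN's predicted properties; the weights are
# hypothetical and only serve to demonstrate the coupling.
def fitness(candidate, ann_model, scaler):
    x = scaler.transform(np.asarray(candidate, dtype=float).reshape(1, -1))
    cs, fs, sts, wa, p = ann_model.predict(x)[0]
    return cs + fs + sts - 0.1 * wa - 0.1 * p  # hypothetical weights

# Example: score a mix with MSA, 28 days of curing, and 12% additive content.
score = fitness([3, 28, 12.0], ann, scaler)
```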

Once the fitness evaluation is complete, the GA performs selection, where the top-performing candidates (parents) are chosen based on their fitness scores. Selection methods like roulette wheel or tournament selection are employed to increase the likelihood that individuals with better performance will pass their genes to the next generation [105,106].
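A minimal tournament selection routine, one of the selection schemes mentioned above, might look as follows (a generic helper, not the authors' code):

```python
import numpy as np

def tournament_selection(population, fitnesses, k=3, rng=None):
    """Pick one parent: sample k individuals at random and keep the fittest."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(population), size=k, replace=False)
    best = idx[np.argmax(np.asarray(fitnesses)[idx])]
    return population[best]
```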

Once the parents are selected, the crossover (recombination) process is applied, during which parent solutions are combined to generate offspring. Crossover enables the algorithm to explore new regions of the solution space by blending traits from two parent solutions, potentially yielding better-performing solutions in future generations.
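For real-coded chromosomes such as the mix-design variables here, crossover can be sketched as an arithmetic blend of two parents; this is one possible operator among many, since the specific operator used in the study is not detailed:

```python
import numpy as np

def blend_crossover(parent_a, parent_b, rate=0.8, rng=None):
    """Arithmetic (blend) crossover for real-coded chromosomes."""
    rng = rng or np.random.default_rng()
    a, b = np.asarray(parent_a, float), np.asarray(parent_b, float)
    if rng.random() > rate:       # no crossover: children are copies of parents
        return a.copy(), b.copy()
    w = rng.random(a.shape)       # per-gene mixing weights
    return w * a + (1 - w) * b, w * b + (1 - w) * a
```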

To maintain diversity in the population and prevent premature convergence, a mutation step is introduced. In this stage, small random changes are made to some offspring to explore less-explored areas of the solution space. This randomness ensures that the algorithm avoids getting trapped in local optima and increases the likelihood of finding a global optimum [107,108].
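A corresponding mutation sketch perturbs a small, random subset of genes within their allowed bounds, which is what preserves diversity as described above (again a generic operator, not the paper's exact one):

```python
import numpy as np

def mutate(child, lower, upper, rate=0.1, rng=None):
    """Randomly resample a small fraction of genes within their bounds."""
    rng = rng or np.random.default_rng()
    child = np.asarray(child, float).copy()
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    mask = rng.random(child.shape) < rate                 # genes to mutate
    child[mask] = rng.uniform(lower[mask], upper[mask])   # stay within bounds
    return child
```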

The population is then updated with the newly generated offspring, replacing the previous generation. This iterative evolutionary process of selection, crossover, mutation, and population updating continues until a stopping criterion is met.

At the end of each iteration, a convergence check is performed to determine whether the stopping criteria – such as reaching a maximum number of generations or achieving a predefined threshold for improvement – have been satisfied. If the criteria are met, the optimization process concludes; otherwise, the algorithm returns to the fitness evaluation stage and proceeds with another iteration.

Once the algorithm converges, the optimized solutions are obtained, representing the best-performing combinations of input variables as evaluated by the ANN regression model. These solutions satisfy the multi-objective optimization goals of the problem while relying on the predictive accuracy of the trained model.
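Putting the pieces together, a simplified, single-objective version of this evolutionary loop, with variable bounds taken from Table 15 and a plateau-based convergence check, could read as follows. It reuses the hypothetical fitness, tournament_selection, blend_crossover, and mutate helpers from the earlier sketches; the study itself performs a multi-objective search whose Pareto results are discussed below:

```python
import numpy as np

rng = np.random.default_rng(0)
lower = np.array([1.0, 7.0, 0.0])     # additive type, curing time, content
upper = np.array([3.0, 28.0, 30.0])   # bounds taken from Table 15
pop_size, n_gen, tol = 40, 200, 1e-4

# Random initial population within the variable bounds.
population = rng.uniform(lower, upper, size=(pop_size, 3))
best_history = []

for gen in range(n_gen):
    scores = [fitness(ind, ann, scaler) for ind in population]
    best_history.append(max(scores))

    # Convergence check: stop when the best score has stopped improving.
    if gen > 20 and abs(best_history[-1] - best_history[-21]) < tol:
        break

    # Build the next generation through selection, crossover, and mutation.
    next_pop = []
    while len(next_pop) < pop_size:
        pa = tournament_selection(population, scores, rng=rng)
        pb = tournament_selection(population, scores, rng=rng)
        c1, c2 = blend_crossover(pa, pb, rng=rng)
        next_pop.append(mutate(c1, lower, upper, rng=rng))
        next_pop.append(mutate(c2, lower, upper, rng=rng))
    population = np.array(next_pop[:pop_size])

# Best mix-design variables found according to the surrogate model.
final_scores = [fitness(ind, ann, scaler) for ind in population]
best_solution = population[int(np.argmax(final_scores))]
```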

This section aims to identify all the optimal solutions that yield superior physicomechanical properties for LFC, including maximum CS, FS, STS, WA, and P. The different optimization conditions are outlined in Table 15.

Table 15

Initial conditions for optimization by GA

Parameters Objectives Lower limit Upper limit
Additive type In range OSA (1) SSA (2) MSA (3)
Curing time (days) In range 7 28
Additive content (%) In range 0 30
CS (MPa) Max. 4.857 15.341
FS (MPa) Max. 1.312 3.895
STS (MPa) Max. 0.9377 2.861
WA (%) Max. 15.7597 19.352
P (%) Max. 35.219 40.426

The Pareto fronts (2D) depicted in Figure 12(a)–(d) showcase various combinations of OBJs (CS vs FS, STS, WA, P). These fronts outline the spectrum of variation between two properties and are characterized by a series of points. Each transition from one point to another represents an improvement in one OBJ at the expense of the other. The selection of a solution hinges on user preference. However, Pareto fronts offer significant utility by streamlining options and aiding decision-makers in pinpointing a desired operating point from the optimal Pareto point set [109,110].

Figure 12: Pareto front graphs obtained by GA for CS vs (a) FS, (b) STS, (c) WA, and (d) P.

Figure 12(a) and (b) depict the Pareto fronts for two sets of OBJs: CS vs FS, and CS vs STS. These graphs illustrate the relationship between these properties, showcasing the trade-offs that occur when attempting to optimize both simultaneously. When aiming to maximize both CS and FS or CS and STS, the Pareto front highlights regions where achieving higher values for both properties is feasible. Conversely, moving toward regions where both functions decrease signifies a compromise in the performance of the concrete mix. These insights are invaluable for decision-making, as they help in identifying the most favorable operating points and trade-offs based on project requirements and priorities [111,112].

Figure 12(c) and (d) depict the Pareto fronts and reveal a fundamental trade-off between the two OBJs. In this scenario, maximizing one function inevitably leads to the minimization of the other. This relationship is evident between CS and WA, as well as CS and P. The Pareto front illustrates that achieving higher CS values corresponds to lower WA and P values. These findings underscore the inherent compromise involved in optimizing these properties simultaneously. Note that no single solution is superior; rather, the solutions represent compromises that need to be weighed carefully against specific project requirements and objectives [113,114,115].
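For readers who wish to reproduce this kind of analysis, a minimal non-dominated filter that extracts a Pareto front from a set of candidate evaluations is sketched below; objectives to be minimized, such as WA or P, are negated before the test, and the numbers in the example are invented for illustration only:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points, all objectives treated as maximized
    (negate objectives such as WA or P that should be minimized)."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Invented example: columns are [CS, -WA] for four candidate mixes.
objs = np.array([[12.0, -16.1], [14.5, -18.0], [10.2, -15.8], [13.0, -19.0]])
print(pareto_front(objs))   # the dominated mix [13.0, -19.0] is filtered out
```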

7 Conclusion

This work explores the physicomechanical behavior of LFC incorporating seashell ash, a waste product, as a partial cement replacement in concrete production. By promoting the recycling of waste materials and reducing reliance on cement, which has a high carbon footprint, the study contributes to sustainable development goals (Nos 9, 11, 12, 13, and 15) and helps reduce the gas emissions associated with cement production. RSM and ANN are employed to model and optimize mechanical strength, WA, and P, and multi-objective optimization using GA identifies the optimal factor levels for LFC production. On the basis of the experimental work, the following conclusions can be drawn:

  • Curing time significantly influences the mechanical properties of LFC (CS, FS, and STS), with contributions of 71.20, 77.89, and 78.06%, respectively. In contrast, additive content contributes only 10.29, 7.12, and 5.02%, respectively, indicating a low impact on these properties.

  • The additive type (OSA, SSA, and MSA) has a statistically insignificant effect (P value >0.05) on CS, FS, and STS.

  • Additive content exerts a greater influence on WA and P, contributing 63.63 and 79.78%, respectively, compared with curing time (18.34 and 10.42%) and additive type (1.86 and 1.40%).

  • The highest CS, FS, and STS were achieved with a 28-day curing period and an additive content ranging from 4 to 20%, for all three cement substitutes (OSA, SSA, and MSA).

  • For samples cured for 28 days with additive contents of 18–30%, mixes containing MSA showed lower WA than those containing OSA or SSA.

  • For samples cured for 28 days with additive contents of 24–30%, mixes containing OSA or MSA exhibited lower P than those containing SSA.

  • The hybrid ANN (DNN-IGWO) demonstrated excellent accuracy and reliability in predicting experimental results. It was distinguished by a higher coefficient of determination (R 2) and significantly lower error values (MAD, RMSE, MAPE, and OBJ), thus outperforming RSM, ANN, and SVM models for all predicted LFC properties.

  • The combined use of the hybrid ANN and the RSM method is recommended, as these approaches are complementary and contribute to enhancing the quality of the overall statistical analysis.

  • The Pareto front analysis reveals a trade-off between maximizing the mechanical properties of LFC (CS, FS, and STS) and minimizing its physical properties (WA and P), emphasizing the conflicting nature of these objectives.

Acknowledgments

The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2024/01/31676).

  1. Funding information: This study was supported by a research fund from Prince Sattam bin Abdulaziz University, 2024.

  2. Author contributions: Conceptualization: A.-M.M. and B.-A.T.; methodology: Y.-C. and A.-L.; validation: B.-A.T.; formal analysis: Y.-I.A. and M.-O.B.; investigation: Y.-C., Y.-I.A., and A.-L.; resources: A.-M.M. and B.-A.T.; writing – original draft preparation: B.-A.T.; writing – review and editing: A.-M.M., Y.-I.A., and A.-L.; visualization: Y.-C., Y.-I.A., and A.-L.; project administration: B.-A.T.; and funding acquisition: A.-M.M. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Data availability statement: The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

[1] Sunarno, Y., M. W. Tjaronge, R. Irmawaty, A. B. Muhiddin, M. A. Caronge, and M. Tumpu. Ultrasonic pulse velocity (UPV) and initial rate of water absorption (IRA) of foam concrete containing blended cement. Materials Research Proceedings, Vol. 31, 2023, pp. 571–580.10.21741/9781644902592-59Suche in Google Scholar

[2] Amran, M., R. Fediuk, N. Vatin, Y. Huei Lee, G. Murali, T. Ozbakkaloglu, et al. Fibre-reinforced foamed concretes: A review. Materials (Basel), Vol. 13, 2020, id. 4323.10.3390/ma13194323Suche in Google Scholar PubMed PubMed Central

[3] Gencel, O., T. Bilir, Z. Bademler, and T. Ozbakkaloglu. A detailed review on foam concrete composites: Ingredients, properties, and microstructure. Applied Sciences, Vol. 12, 2022, id. 5752.10.3390/app12115752Suche in Google Scholar

[4] Alharthai, M., M. A. O. Mydin, N. S. Alimrani, S. S. Majeed, and B. A. Tayeh. Evaluating deterioration of the properties of lightweight foamed concrete at elevated temperatures. Journal of Building Engineering, Vol. 84, 2024, id. 108515.10.1016/j.jobe.2024.108515Suche in Google Scholar

[5] Amran, Y. H. M., N. Farzadnia, and A. A. A. Ali. Properties and applications of foamed concrete; a review. Construction and Building Materials, Vol. 101, 2015, pp. 990–1005.10.1016/j.conbuildmat.2015.10.112Suche in Google Scholar

[6] Amran, M., Y. Huei Lee, N. Vatin, R. Fediuk, S. Poi-Ngian, Y. Yong Lee, et al. Design efficiency, characteristics, and utilization of reinforced foamed concrete: A review. Crystals, Vol. 10, 2020, id. 948.10.3390/cryst10100948Suche in Google Scholar

[7] Hilal, A. A., N. H. Thom, and A. R. Dawson. Pore structure and permeation characteristics of foamed concrete. Journal of Advanced Concrete Technology, Vol. 12, 2014, pp. 535–544.10.3151/jact.12.535Suche in Google Scholar

[8] Shankar, A. N., S. Chopade, R. Srinivas, N. K. Mishra, H. K. Eftikhaar, G. Sethi, et al. Physical and mechanical properties of foamed concrete, a literature review. Materials Today: Proceedings, Vol. 80, 2023, pp. 276–283.10.1016/j.matpr.2023.10.105Suche in Google Scholar

[9] Raj, A., D. Sathyan, and K. M. Mini. Physical and functional characteristics of foam concrete: A review. Construction and Building Materials, Vol. 221, 2019, pp. 787–799.10.1016/j.conbuildmat.2019.06.052Suche in Google Scholar

[10] Mohammad, M. Development of foamed concrete: enabling and supporting design, University of Dundee, Dundee, United Kingdom, 2011.Suche in Google Scholar

[11] Priyatham, B., M. T. S. Lakshmayya, and D. Chaitanya. Review on performance and sustainability of foam concrete. Materials Today: Proceedings, University of Dundee, Dundee, United Kingdom, 2023.10.1016/j.matpr.2023.04.080Suche in Google Scholar

[12] Hamad, A. J. Materials, production, properties and application of aerated lightweight concrete. International Journal of Materials Science and Engineering, Vol. 2, 2014, pp. 152–157.10.12720/ijmse.2.2.152-157Suche in Google Scholar

[13] da Silva, A. L., E. R. Kohlman Rabbani, and M. Shakouri. Seashell powder as a sustainable alternative in cement-based materials: a systematic literature review. Sustainability, Vol. 17, 2025, id. 592.10.3390/su17020592Suche in Google Scholar

[14] Poudel, S., U. Bhetuwal, P. Kharel, S. Khatiwada, D. KC, S. Dhital, et al. Waste glass as partial cement replacement in sustainable concrete: Mechanical and fresh properties review. Buildings, Vol. 15, 2025, id. 857.10.3390/buildings15060857Suche in Google Scholar

[15] Tayeh, B. A., M. W. Hasaniyah, A. M. Zeyad, and M. O. Yusuf. Properties of concrete containing recycled seashells as cement partial replacement: A review. Journal of Cleaner Production, Vol. 237, 2019, id. 117723.10.1016/j.jclepro.2019.117723Suche in Google Scholar

[16] Maglad, A. M., M. A. O. Mydin, S. D. Datta, and B. A. Tayeh. Assessing the mechanical, durability, thermal and microstructural properties of sea shell ash based lightweight foamed concrete. Construction and Building Materials, Vol. 402, 2023, id. 133018.10.1016/j.conbuildmat.2023.133018Suche in Google Scholar

[17] Wijayasundara, M., P. Mendis, and R. H. Crawford. Methodology for the integrated assessment on the use of recycled concrete aggregate replacing natural aggregate in structural concrete. Journal of Cleaner Production, Vol. 166, 2017, pp. 321–334.10.1016/j.jclepro.2017.08.001Suche in Google Scholar

[18] Ahmed, M. M., K. A. M. El-Naggar, D. Tarek, A. Ragab, H. Sameh, A. M. Zeyad, et al. Fabrication of thermal insulation geopolymer bricks using ferrosilicon slag and alumina waste. Case Studies in Construction Materials, Vol. 15, 2021, id. e00737.10.1016/j.cscm.2021.e00737Suche in Google Scholar

[19] Adewuyi, A. P., S. O. Franklin, and K. A. Ibrahim. Utilization of mollusc shells for concrete production for sustainable environment. International Journal of Scientific Engineering and Research, Vol. 6, 2015, pp. 201–208.Suche in Google Scholar

[20] Olutoge, F. A., O. M. Okeyinka, and O. S. Olaniyan. Assessment of the suitability of periwinkle shell ash (PSA) as partial replacement for ordinary Portland cement (OPC) in concrete. International Journal of Research and Reviews in Applied Sciences, Vol. 10, 2012, pp. 428–434.Suche in Google Scholar

[21] Hai-Yan, C., L. G. Li, Z.-M. Lai, A. K. H. Kwan, P.-M. Chen, and P. L. Ng. Effects of crushed oyster shell on strength and durability of marine concrete containing fly ash and blastfurnace slag. Materials Science, Vol. 25, 2019, pp. 97–107.10.5755/j01.ms.25.1.18772Suche in Google Scholar

[22] Adeala, A. J. and J. O. Olaoye. Structural properties of snail shell ash concrete (SSAC). Journal of Emerging Technologies and Innovative Research, Vol. 6, 2019, pp. 24–31.Suche in Google Scholar

[23] Kellouche, Y., B. Boukhatem, M. Ghrici, R. Rebouh, and A. Zidol. Neural network model for predicting the carbonation depth of slag concrete. Asian Journal of Civil Engineering, Vol. 22, 2021, pp. 1401–1414.10.1007/s42107-021-00390-zSuche in Google Scholar

[24] Sahraoui, M. and T. Bouziani. ANN modelling approach for predicting SCC properties-Research considering Algerian experience. Part II. Effects of aggregates types and contents. Journal of Building Materials and Structures, Vol. 8, 2021, pp. 63–71.10.34118/jbms.v8i1.778Suche in Google Scholar

[25] Chetbani, Y., R. Zaitri, B. A. Tayeh, I. Y. Hakeem, F. Dif, and Y. Kellouche. Physicomechanical behavior of high-performance concrete reinforced with recycled steel fibers from twisted cables in the brittle state – experimentation and statistics. Buildings, Vol. 13, 2023, id. 2290.10.3390/buildings13092290Suche in Google Scholar

[26] Hammoudi, A., K. Moussaceb, C. Belebchouche, and F. Dahmoune. Comparison of artificial neural network (ANN) and response surface methodology (RSM) prediction in compressive strength of recycled concrete aggregates. Construction and Building Materials, Vol. 209, 2019, pp. 425–436.10.1016/j.conbuildmat.2019.03.119Suche in Google Scholar

[27] Hameed, M. M., M. K. AlOmar, W. J. Baniya, and M. A. AlSaadi. Prediction of high-strength concrete: high-order response surface methodology modeling approach. Engineering with Computers, Vol. 38, 2022, pp. 1655–1668.10.1007/s00366-021-01284-zSuche in Google Scholar

[28] Dean, A., D. Voss, D. Draguljić, A. Dean, D. Voss, and D. Draguljić. Response surface methodology. Design and Analysis of Experiments, Springer, Cham, 2017, pp. 565–614.10.1007/978-3-319-52250-0_16Suche in Google Scholar

[29] Chelladurai, S. J. S., M. Kurugan, A. P. Ray, M. Upadhyaya, V. Narasimharaj, and S. Gnanasekaran. Optimization of process parameters using response surface methodology: A review. Materials Today: Proceedings, Vol. 37, 2021, pp. 1301–1304.10.1016/j.matpr.2020.06.466Suche in Google Scholar

[30] Alaloul, W. S. and A. H. Qureshi. Data processing using artificial neural networks. Dynamic data assimilation-beating the uncertainties, IntechOpen, Dundee, United Kingdom, 2020.Suche in Google Scholar

[31] Micheli-Tzanakou, E. Artificial neural networks: an overview. Network: Computation in Neural Systems, Vol. 22, 2011, pp. 208–230.10.3109/0954898X.2011.638355Suche in Google Scholar PubMed

[32] Zakaria, M., A. S. Mabrouka, and S. Sarhan. Artificial neural network: a brief overview. Neural Networks, Vol. 1, 2014, id. 2.Suche in Google Scholar

[33] Krenker, A., J. Bešter, and A. Kos. Introduction to the artificial neural networks. Artificial neural networks: Methodological advances and biomedical applications, InTech, Rijeka, Croatia, 2011, pp. 1–18.10.5772/15751Suche in Google Scholar

[34] Biem, A. Neural networks: A review. Data classification: Algorithms and applications, CRC Press (Taylor & Francis Group), Boca Raton, Florida, USA, 2014, pp. 205–244.Suche in Google Scholar

[35] Apicella, A., F. Donnarumma, F. Isgrò, and R. Prevete. A survey on modern trainable activation functions. Neural Networks, Vol. 138, 2021, pp. 14–32.10.1016/j.neunet.2021.01.026Suche in Google Scholar PubMed

[36] Rizalman, A. N. and C. C. Lee. Comparison of artificial neural network (ANN) and response surface methodology (RSM) in predicting the compressive strength of POFA concrete. Applications of Modelling and Simulation, Vol. 4, 2020, pp. 210–216.Suche in Google Scholar

[37] Yaro, N. S. A., M. H. Sutanto, N. Z. Habib, M. Napiah, A. Usman, and A. Muhammad. Comparison of Response Surface Methodology and Artificial Neural Network approach in predicting the performance and properties of palm oil clinker fine modified asphalt mixtures. Construction and Building Materials, Vol. 324, 2022, id. 126618.10.1016/j.conbuildmat.2022.126618Suche in Google Scholar

[38] Ray, S., M. Haque, T. Ahmed, and T. T. Nahin. Comparison of artificial neural network (ANN) and response surface methodology (RSM) in predicting the compressive and splitting tensile strength of concrete prepared with glass waste and tin (Sn) can fiber. Journal of King Saud University-Engineering Sciences, Vol. 35, 2023, pp. 185–199.10.1016/j.jksues.2021.03.006Suche in Google Scholar

[39] British Standard Institution. Cement composition, specifications and conformity criteria for common cements (BS EN 17-1:2019), BSI, 2019. https://www.en-standard.eu/bs-en-197-1-2011-cement-composition-specifications-and-conformity-criteria-for-common-cements/(accessed September 15, 2024).Suche in Google Scholar

[40] A. Commitee, C09. ASTM C33-03, Standard specifications for concrete agregates, ASTM Int, West Conshohocken, Pennsylvania, USA, 2003.Suche in Google Scholar

[41] BS 3148:1980. Methods of test for water for making concrete (including notes on the suitability of the water) (withdrawn), British Standards Institution - Publication Index | NBS, n.d. https://www.thenbs.com/PublicationIndex/documents/details?Pub=BSI&DocId=11281 (accessed September 15, 2024).Suche in Google Scholar

[42] BSI, BS EN 12390-3. Testing hardened concrete. Part 3: Compressive strength of test specimens, British Standards Institution (BSI), London, United Kingdom, 2009.Suche in Google Scholar

[43] B.S. En, 12390-5. Testing hardened concrete–Part 5: flexural strength of test specimens, Br. Stand. Institution-BSI CEN Eur. Comm. Stand, London, United Kingdom, 2009.Suche in Google Scholar

[44] E.N. CSN, 12390-6. Testing hardened concrete-Part 6: Tensile splitting strength of test specimens, Czech Office for Standards, Metrology and Testing (ÚNMZ), Czech Repub, 2009.Suche in Google Scholar

[45] C1403. Standard test method for rate of water absorption of masonry mortars, n.d. https://www.astm.org/c1403-15.html (accessed September 15, 2024).Suche in Google Scholar

[46] Kim, J., D.-G. Kim, and K. H. Ryu. Enhancing response surface methodology through coefficient clipping based on prior knowledge. Processes, Vol. 11, 2023, id. 3392.10.3390/pr11123392Suche in Google Scholar

[47] Chiappini, F. A., S. M. Azcarate, C. M. Teglia, and H. C. Goicoechea. Fundamentals of design of experiments and optimization: Data modeling in response surface methodology. In Introduction to quality by design in pharmaceutical manufacturing and analytical development, Springer, Cham, Switzerland, 2023, pp. 67–89.10.1007/978-3-031-31505-3_4Suche in Google Scholar

[48] Taavitsainen, V.-M. T. Experimental optimization and response surfaces. Chemometrics in practical applications, IntechOpen, Dundee, United Kingdom, 2012, pp. 91–138.Suche in Google Scholar

[49] Hong, W. C., B. S. Mohammed, I. Abdulkadir, and M. S. Liew. Modeling and optimizing the effect of palm oil fuel ash on the properties of engineered cementitious composite. Buildings, Vol. 13, 2023, id. 628.10.3390/buildings13030628Suche in Google Scholar

[50] Zhang, Y., Q. Zhang, A. H. AlAteah, A. Essam, and S. A. Mostafa. Predictive modeling for mechanical characteristics of ultra high-performance concrete blended with eggshell powder and nano silica utilizing traditional technique and machine learning algorithm. Case Studies in Construction Materials, Vol. 21, 2024, id. e04025.10.1016/j.cscm.2024.e04025Suche in Google Scholar

[51] Gupta, A. K. Predictive modelling of turning operations using response surface methodology, artificial neural networks and support vector regression. International Journal of Production Research, Vol. 48, 2010, pp. 763–778.10.1080/00207540802452132Suche in Google Scholar

[52] Chetbani, Y., M. Boumaaza, R. Zaitri, A. Belaadi, A. Ben Mahammed, A. Laouissi, et al. Study of the effect of hemp fibers and brick waste powder on the mechanical characteristics of mortar: experimental and statistical analysis. Journal of Natural Fibers, Vol. 22, 2025, id. 2438900.10.1080/15440478.2024.2438900Suche in Google Scholar

[53] Brahimi, M., R. Benderradji, E. Raouache, Y. Chetbani, A. Laouissi, and A. J. Chamkha. A numerical study and statistical approach of the impact of nanofluids on mixed convection in a ventilated cavity. The International Journal of Advanced Manufacturing Technology, Vol. 134, 2024, pp. 5281–5300.10.1007/s00170-024-14455-1Suche in Google Scholar

[54] Bensmail, M., R. Zaitri, M. Hani, Y. Chetbani, D. Benamara, and A. Laouissi. Analyzing the effects of recycled aggregates on the workability and mechanical characteristics of concrete through mixture design and optimization techniques. World Journal of Engineering, Vol. 74, 2025, pp. 255–260.10.1108/WJE-01-2025-0049Suche in Google Scholar

[55] Ben Salah, H. and M. Hani. An investigation of the effect of high temperature on the strength compression and ultrasonic pulse velocity of self-compacting concrete. The Journal of Engineering and Exact Sciences, Vol. 10, 2024, id. 16818.10.18540/jcecvl10iss1pp16818Suche in Google Scholar

[56] Alem, D. D. An overview of data analysis and interpretations in research. International Journal of Academic Research in Education and Review, Vol. 8, 2020, pp. 1–27.Suche in Google Scholar

[57] Bertinetto, C., J. Engel, and J. Jansen. ANOVA simultaneous component analysis: A tutorial review. Analytica Chimica Acta: X, Vol. 6, 2020, id. 100061.10.1016/j.acax.2020.100061Suche in Google Scholar PubMed PubMed Central

[58] Mezaouri, S., S. M. Aissa Mamoune, H. Siad, M. Lachemi, M. Boumaaza, A. Belaadi, et al. Prediction of cementitious composite characteristics based on waste glass powder and aggregates: Experimental and statistical analysis. Measurement, Vol. 224, 2025, id. 117609.10.1016/j.measurement.2025.117609Suche in Google Scholar

[59] Saiah, W., A. Rabahi, M. Boumaaza, M. Hani, A. Belaadi, Y. Chetbani, et al. Antioxidant, anti-inflammatory, and anti-diabetic assessment of (2Z)-2 (arylimino)-2Hchromene-3-carboxamides: An in vitro-in silico study by applying ANN-GA, MCDM, and RSM optimization techniques. Results in Chemistry, Vol. 6, 2025, id. 102249.10.1016/j.rechem.2025.102249Suche in Google Scholar

[60] Myers, J. L., A. D. Well, and R. F. Lorch Jr. Research design and statistical analysis, Routledge, New York, USA, 2013.10.4324/9780203726631Suche in Google Scholar

[61] Nanda, A., B. B. Mohapatra, A. P. K. Mahapatra, A. P. K. Mahapatra, and A. P. K. Mahapatra. Multiple comparison test by Tukey’s honestly significant difference (HSD): Do the confident level control type I error. International Journal of Statistics and Applied Mathematics, Vol. 6, 2021, pp. 59–65.10.22271/maths.2021.v6.i1a.636Suche in Google Scholar

[62] Lee, D. K. Alternatives to P value: confidence interval and effect size. Korean Journal of Anesthesiology, Vol. 69, 2016, id. 555.10.4097/kjae.2016.69.6.555Suche in Google Scholar PubMed PubMed Central

[63] Bouyaya, L., A. Belaadi, M. Boumaaza, A. Lekrine, B. X. Chai, Y. Chetbani, et al. Chemical processing effect on the tensile strength of waste palm fiber-reinforced HDPE biocomposite: Optimizing using response surface methodology. Journal of Natural Fibers, Vol. 21, 2024, id. 2421810.10.1080/15440478.2024.2421810Suche in Google Scholar

[64] Kellouche, Y., B. A. Tayeh, Y. Chetbani, A. M. Zeyad, and S. A. Mostafa. Comparative study of different machine learning approaches for predicting the compressive strength of palm fuel ash concrete. Journal of Building Engineering, Vol. 88, 2024, id. 109187.10.1016/j.jobe.2024.109187Suche in Google Scholar

[65] Nwadiugwu, M. C. Neural networks, artificial intelligence and the computational brain. arXiv preprint arXiv:2101.08635, 2020.Suche in Google Scholar

[66] Arshavsky, Y. I. Neurons versus networks: The interplay between individual neurons and neural networks in cognitive functions. The Neuroscientist, Vol. 23, 2017, pp. 341–355.10.1177/1073858416670124Suche in Google Scholar PubMed

[67] Yang, G. R. and X.-J. Wang. Artificial neural networks for neuroscientists: a primer. Neuron, Vol. 107, 2020, pp. 1048–1070.10.1016/j.neuron.2020.09.005Suche in Google Scholar PubMed PubMed Central

[68] Montesinos López, O. A., A. Montesinos López, and J. Crossa. Fundamentals of artificial neural networks and deep learning. In Multivariate statistical machine learning methods for genomic prediction, Springer, Cham, Switzerland, 2022, pp. 379–425.10.1007/978-3-030-89010-0_10Suche in Google Scholar

[69] Taherdoost, H. Deep learning and neural networks: Decision-making implications. Symmetry (Basel), Vol. 15, 2023, id. 1723.10.3390/sym15091723Suche in Google Scholar

[70] Qamar, R. and B. A. Zardari. Artificial neural networks: An overview. Mesopotamian Journal of Computer Science, Vol. 2023, 2023, pp. 124–133.10.58496/MJCSC/2023/015Suche in Google Scholar

[71] Laouissi, A., M. M. Blaoui, H. Abderazek, M. Nouioua, and A. Bouchoucha. Heat treatment process study and ANN-GA based multi-response optimization of C45 steel mechanical properties. Metals And Materials International, Vol. 28, 2022, pp. 3087–3105.10.1007/s12540-022-01197-6Suche in Google Scholar

[72] Thakur, A. and A. Konde. Fundamentals of neural networks. International Journal for Research in Applied Science and Engineering Technology, Vol. 9, 2021, pp. 407–426.10.22214/ijraset.2021.37362Suche in Google Scholar

[73] Worden, K., G. Tsialiamanis, E. J. Cross, and T. J. Rogers. Artificial neural networks. In Machine learning in modeling and simulation: Methods and applications, Springer, Cham, Switzerland, 2023, pp. 85–119.10.1007/978-3-031-36644-4_2Suche in Google Scholar

[74] Lyu, Z., Y. Yu, B. Samali, M. Rashidi, M. Mohammadi, T. N. Nguyen, et al. Back-propagation neural network optimized by K-fold cross-validation for prediction of torsional strength of reinforced concrete beam. Materials (Basel), Vol. 15, 2022, id. 1477.10.3390/ma15041477Suche in Google Scholar PubMed PubMed Central

[75] Nti, I. K., O. Nyarko-Boateng, and J. Aning. Performance of machine learning algorithms with different K values in K-fold cross-validation. International Journal of Information Technology and Computer Science, Vol. 6, 2021, pp. 61–71.10.5815/ijitcs.2021.06.05Suche in Google Scholar

[76] Raschka, S. Model evaluation, model selection, and algorithm selection in machine learning. arXiv preprint arXiv:1811.12808, 2018.Suche in Google Scholar

[77] Jung, Y. Multiple predicting K-fold cross-validation for model selection. Journal of Nonparametric Statistics, Vol. 30, 2018, pp. 197–215.10.1080/10485252.2017.1404598Suche in Google Scholar

[78] Pandian, S. K-fold cross validation technique and its essentials. analyticsvidhya.com, 2022.Suche in Google Scholar

[79] Hakmi, T., A. Hamdi, A. Laouissi, H. Abderazek, S. Chihaoui, and M. A. Yallese. Mathematical modeling using ANN based on k-fold cross validation approach and MOAHA multi-objective optimization algorithm during turning of polyoxymethylene POM-C. Jordan Journal of Mechanical & Industrial Engineering, Vol. 18, 2024, pp. 179–190.10.59038/jjmie/180114Suche in Google Scholar

[80] Laouissi, A., M. A. Yallese, A. Belbah, S. Belhadi, and A. Haddad. Investigation, modeling, and optimization of cutting parameters in turning of gray cast iron using coated and uncoated silicon nitride ceramic tools. Based on ANN, RSM, and GA optimization. The International Journal of Advanced Manufacturing Technology, Vol. 101, 2019, pp. 523–548.10.1007/s00170-018-2931-8Suche in Google Scholar

[81] Marcot, B. G. and A. M. Hanea. What is an optimal value of k in k-fold cross-validation in discrete Bayesian network analysis? Computational Statistics, Vol. 36, 2021, pp. 2009–2031.10.1007/s00180-020-00999-9Suche in Google Scholar

[82] Pal, K. and B. V. Patel. Data classification with k-fold cross validation and holdout accuracy estimation methods with 5 different machine learning techniques. In 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), IEEE, 2020, pp. 83–87.10.1109/ICCMC48092.2020.ICCMC-00016Suche in Google Scholar

[83] Dastres, R. and M. Soori. Artificial neural network systems. International Journal of Imaging and Robotics, Vol. 21, 2021, pp. 13–25.Suche in Google Scholar

[84] Abdolrasol, M. G. M., S. M. S. Hussain, T. S. Ustun, M. R. Sarker, M. A. Hannan, R. Mohamed, et al. Artificial neural networks based optimization techniques: A review. Electronics, Vol. 10, 2021, id. 2689.10.3390/electronics10212689Suche in Google Scholar

[85] Panda, S. and G. Panda. Fast and improved backpropagation learning of multi‐layer artificial neural network using adaptive activation function. Expert Systems, Vol. 37, 2020, id. e12555.10.1111/exsy.12555Suche in Google Scholar

[86] Ionin, A. S., L. N. Karelina, N. S. Shuravin, M. S. Sidel’nikov, F. A. Razorenov, S. V. Egorov, et al. Experimental study of the transfer function of a superconducting Gauss neuron prototype. JETP Letters, Vol. 118, 2023, pp. 766–772.10.1134/S002136402360324XSuche in Google Scholar

[87] Dubey, S. R., S. K. Singh, and B. B. Chaudhuri. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing, Vol. 19, 2022, pp. 1–19.Suche in Google Scholar

[88] Reffas, O., H. Boumediri, Y. Karmi, M. S. Kahaleras, I. Bousba, and L. Aissa. Statistical analysis and predictive modeling of cutting parameters in EN-GJL-250 cast iron turning: application of machine learning and MOALO optimization. The International Journal of Advanced Manufacturing Technology, Vol. 137, 2025, pp. 1–19.10.1007/s00170-025-15098-6Suche in Google Scholar

[89] Karmi, Y., H. Boumediri, O. Reffas, Y. Chetbani, S. Ataya, R. Khan, et al. Integration of hybrid machine learning and multi-objective optimization for enhanced turning parameters of EN-GJL-250 cast iron. Crystals, Vol. 15, 2025, id. 264.10.3390/cryst15030264Suche in Google Scholar

[90] Touati, S., H. Boumediri, Y. Karmi, M. Chitour, K. Boumediri, A. Zemmouri, et al. Performance analysis of steel W18CR4V grinding using RSM, DNN-GA, KNN, LM, DT, SVM models, and optimization via desirability function and MOGWO. Heliyon, Vol. 11, 2025, id. e42640.10.1016/j.heliyon.2025.e42640Suche in Google Scholar PubMed PubMed Central

[91] Eade, E. Gauss-Newton/Levenberg-Marquardt optimization. Tech. Rep., 2013, pp. 1–14.Suche in Google Scholar

[92] de Jesús Rubio, J. Stability analysis of the modified Levenberg–Marquardt algorithm for the artificial neural network training. IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, 2020, pp. 3510–3524.10.1109/TNNLS.2020.3015200Suche in Google Scholar PubMed

[93] Awad, M. and R. Khanna. Support vector machines for classification. In Efficient learning machines: Theories, concepts, and applications for engineers and system designers, Springer, New York, USA, 2015, pp. 39–66.10.1007/978-1-4302-5990-9_3Suche in Google Scholar

[94] Syam, N. and R. Kaul. Support vector machines in marketing and sales. In Machine learning and artificial intelligence in marketing and sales, Emerald Publishing Limited, Bingley, United Kingdom, 2021, pp. 85–137.10.1108/978-1-80043-880-420211005Suche in Google Scholar

[95] Otchere, D. A., T. O. A. Ganat, R. Gholami, and S. Ridha. Application of supervised machine learning paradigms in the prediction of petroleum reservoir properties: Comparative analysis of ANN and SVM models. Journal of Petroleum Science & Engineering, Vol. 200, 2021, id. 108182.10.1016/j.petrol.2020.108182Suche in Google Scholar

[96] Cortes, C. and V. Vapnik. Support-vector networks. Machine Learning, Vol. 20, 1995, pp. 273–297.10.1023/A:1022627411411Suche in Google Scholar

[97] Raouache, E., A. Laouissi, F. Khalfallah, and Y. Chetbani. Development and optimization of a prediction system model for mechanical properties in rotary friction-welded polyamide joints using the SVM approach and GA optimization. International Journal of Advanced Manufacturing Technology, Vol. 132, 2024, pp. 1005–1017.10.1007/s00170-024-13450-wSuche in Google Scholar

[98] Çaydaş, U. and S. Ekici. Support vector machines models for surface roughness prediction in CNC turning of AISI 304 austenitic stainless steel. Journal of Intelligent Manufacturing, Vol. 23, 2012, pp. 639–650.10.1007/s10845-010-0415-2Suche in Google Scholar

[99] Deb, K., K. Sindhya, and J. Hakanen. Multi-objective optimization. In Decision sciences, CRC Press, Boca Raton, Florida, USA, 2016, pp. 161–200.10.1201/9781315183176-4Suche in Google Scholar

[100] Taha, K. Methods that optimize multi-objective problems: A survey and experimental evaluation. IEEE Access, Vol. 8, 2020, pp. 80855–80878.10.1109/ACCESS.2020.2989219Suche in Google Scholar

[101] Azzouz, R., S. Bechikh, and L. Ben Said. Dynamic multi-objective optimization using evolutionary algorithms: a survey. Recent advances in evolutionary multi-objective optimization, Springer, Cham, Switzerland, 2017, pp. 31–70.10.1007/978-3-319-42978-6_2Suche in Google Scholar

[102] Wang, Z. and A. Sobey. A comparative review between Genetic Algorithm use in composite optimisation and the state-of-the-art in evolutionary computation. Composite Structures, Vol. 233, 2020, id. 111739.10.1016/j.compstruct.2019.111739Suche in Google Scholar

[103] Gupta, S. K. An overview of genetic algorithms: a structural analysis. International Journal of Innovative Science and Research Technology, Vol. 15, 2021, id. 58.Suche in Google Scholar

[104] Türkoğlu, B. and H. Eroğlu. Genetic Algorithm for Route Optimization. In Applied genetic algorithm and its variants: Case studies and new developments, Springer, Singapore, 2023, pp. 51–79.10.1007/978-981-99-3428-7_3Suche in Google Scholar

[105] Tamaki, H., H. Kita, and S. Kobayashi. Multi-objective optimization by genetic algorithms: A review. In Proceedings of IEEE International Conference on Evolutionary Computation, IEEE, 1996, pp. 517–522.10.1109/ICEC.1996.542653Suche in Google Scholar

[106] Konak, A., D. W. Coit, and A. E. Smith. Multi-objective optimization using genetic algorithms: A tutorial. Reliability Engineering & System Safety, Vol. 91, 2006, pp. 992–1007.10.1016/j.ress.2005.11.018Suche in Google Scholar

[107] Xu, M. Advancing genetic programming for learning scheduling heuristics, Victoria University of Wellington, Wellington, New Zealand, 2024.Suche in Google Scholar

[108] Zhang, W., G. Xiao, M. Gen, H. Geng, X. Wang, M. Deng, et al. Enhancing multi-objective evolutionary algorithms with machine learning for scheduling problems: recent advances and survey. Frontiers in Industrial Engineering, Vol. 2, 2024, id. 1337174.10.3389/fieng.2024.1337174Suche in Google Scholar

[109] Rahim, A. A. A., S. N. Musa, S. Ramesh, and M. K. Lim. A systematic review on material selection methods. Proceedings of the Institution of Mechanical Engineers, Part L: Journal of Materials: Design and Applications, Vol. 234, 2020, pp. 1032–1059.10.1177/1464420720916765Suche in Google Scholar

[110] Li, Y., J. Baik, M. M. Rahman, I. Anagnostopoulos, R. Li, and T. Shu. Pareto optimization of CNN models via hardware-aware neural architecture search for drainage crossing classification on resource-limited devices. In Proceedings of the SC'23 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, 2023, pp. 1767–1775.10.1145/3624062.3624258Suche in Google Scholar

[111] Mattson, C. A. and A. Messac. Pareto frontier based concept selection under uncertainty, with visualization. Optimization and Engineering, Vol. 6, 2005, pp. 85–115.10.1023/B:OPTE.0000048538.35456.45Suche in Google Scholar

[112] Rebello, C. M., M. A. F. Martins, D. D. Santana, A. E. Rodrigues, J. M. Loureiro, A. M. Ribeiro, et al. From a pareto front to pareto regions: A novel standpoint for multiobjective optimization. Mathematics, Vol. 9, 2021, id. 3152.10.3390/math9243152Suche in Google Scholar

[113] Ngatchou, P., A. Zarei, and A. El-Sharkawi. Pareto multi objective optimization. In Proceedings of the 13th International Conference on, Intelligent Systems Application to Power Systems, IEEE, 2005, pp. 84–91.10.1109/ISAP.2005.1599245Suche in Google Scholar

[114] Guédas, B. and P. Dépincé. A compromise definition in multiobjective multidisciplinary design optimization. In 8th World Congress on Structural and Multidisciplinary Optimization, 2009.Suche in Google Scholar

[115] Ciftcioglu, Ö. and M. S. Bittermann. Adaptive formation of Pareto front in evolutionary multi-objective optimization. Evolutionary Computation, Vol. 17, 2009, pp. 417–444.10.5772/9619Suche in Google Scholar

Received: 2025-03-05
Revised: 2025-05-01
Accepted: 2025-07-02
Published Online: 2025-09-13

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
