Abstract
Materials informatics is increasingly essential for precise material property prediction, simplifying experimental processes. Traditional methods like density functional theory are slow and struggle with complex structure–property relationships. In contrast, artificial intelligence, especially the crystal graph convolutional neural network (CGCNN), excels at predicting material behaviors, particularly in crystalline systems, revolutionizing materials discovery and design. CGCNN leverages crystal lattice structures for accurate predictions; however, there remains a lack of systematic analysis addressing its foundational principles, performance boundaries, and areas for improvement. This article is novel in offering a comprehensive and critical evaluation of CGCNN, detailing its architecture, predictive strengths, limitations, and integration with emerging technologies such as generative models. It emphasizes benchmarking protocols, best practices, and forward-looking strategies to bridge traditional physics-based methods and modern data-driven approaches. By articulating both the achievements and current gaps in CGCNN-based materials modeling, it provides valuable guidance for researchers aiming to harness deep learning for next-generation materials discovery.
Graphical abstract
Materials informatics is essential for precise material property prediction, enhancing experimental efficiency. Traditional methods like density functional theory are slow and ineffective for complex relationships. CGCNNs excel in predicting material behaviors, especially in crystalline materials. This review fills the gap in CGCNN analysis, advancing material data science for improved property prediction and experimentation.

1 Introduction
The field of material engineering and science lies at the confluence of the processing, structure, properties, and performance of materials [1]. The efficacy of a material in specific tasks depends on its properties, such as hardness, density, and thermal expansion, which, in turn, are influenced by its structure. Materials processing converts raw materials into usable forms through precise adjustment of the structure and properties of materials to achieve desired outcomes. These interconnected aspects yield significant results across diverse fields such as medicine, energy, manufacturing, and biotechnology, demonstrating the continuous evolution of material engineering and science and its potential for societal and environmental progress. As in many other fields, computational methods have emerged as promising tools to tackle the questions inherently posed in material analysis, from processing through to application [2,3,4,5]. High-throughput computational techniques rooted in quantum mechanics enable predictive modeling of material properties crucial for advancing materials design across applications from electronics to energy storage [1,6]. However, there exists a notable gap in the seamless integration of such high-throughput computational techniques with materials informatics. Specifically, there is a need to enhance the synergy between computational methods and data-driven approaches to accelerate the discovery, design, and optimization of novel materials across various applications. In this connection, materials informatics is a rapidly evolving interdisciplinary field at the intersection of materials science, data science, and machine learning (ML). It focuses on developing computational tools and methods to hasten the discovery, design, and optimization of novel materials.
One of the central goals of materials informatics is to predict material properties with high accuracy and efficiency, thereby reducing the time and cost associated with experimental trial-and-error approaches. Predicting material properties is of paramount importance in various scientific and engineering domains, including semiconductor devices, energy storage systems, catalysis, and structural materials. Researchers like Hill et al. [7] and Saad et al. [8] illustrated the application of data mining systems in predicting properties of AB compounds, underscoring the pivotal role of computational methods in materials science. Jain et al. emphasized the significance of computational materials science in accelerating materials innovation, particularly through predictive approaches [4]. Traditional methods for predicting material properties often rely on theoretical models, empirical correlations, or costly experimental measurements. Commonly employed ab initio methods such as density-functional theory (DFT), density functional perturbation theory (DFPT), and wave function theories have aided in predicting material properties [9,10,11]. However, such traditional techniques are often time-consuming, resource-intensive, and limited in their ability to capture complex structure–property relationships [6,12]. With the accumulation of vast experimental and simulation-based datasets, there is a growing need for modern tools capable of automatically exploring big data. Artificial intelligence (AI), its subfield machine learning (ML), and particularly deep learning (DL) have emerged as transformative forces for material engineering and science, offering automated learning from past experience to enhance performance without explicit programming [13], and have become powerful tools for materials discovery and design. A schematic of the AI–ML–DL hierarchy, the subdivisions of DL, and their applications in the material engineering and science fields is shown in Figure 1.
ML algorithms, which model material structure and atomic interactions, provide accuracy comparable to DFT calculations with significantly reduced computation time [13]. Yet, challenges persist, especially in representing variable-size data such as crystal structures as fixed-length vectors for ML compatibility [14,15,16,17,18].

Overview of the hierarchical structure linking AI, ML, and DL, highlighting key subcategories and a range of real-world applications.
This article aims to address the limitations of traditional and ab initio methods in predicting material properties, particularly in capturing complex structure–property relationships and handling large datasets. Furthermore, it seeks to explore the potential of AI, ML, and DL algorithms in material engineering and science. Specifically, the study aims to investigate how DL algorithms can model material structure and atomic interactions accurately with reduced computation time compared to traditional methods. In addition to DL frameworks like crystal graph convolutional neural network (CGCNN), recent studies have also explored hybrid learning approaches and fuzzy logic-based ML models to improve prediction accuracy and handle uncertainties in material property data. Hybrid models, which combine traditional physical models with ML algorithms, have shown promise in enhancing generalizability across different material classes. These alternative approaches contribute to the diversification of methodologies within materials informatics and highlight the importance of interpretability and flexibility in modeling strategies [19,20]. Moreover, recent research has focused on developing approaches to represent complex materials data such as crystal systems in formats suitable for DL algorithms, thereby improving both model interpretability and applicability in materials science. In particular, efforts have been made to address the interpretability challenges associated with graph-based DL models like CGCNN. Several studies [21,22] have proposed techniques such as attention mechanisms and feature attribution to identify the most influential nodes and edges in crystal graph structures, helping to clarify how specific atomic configurations influence property predictions. Additionally, visualization tools have been introduced to interpret complex graph embeddings, offering more intuitive insights into model behavior. 
These advancements mark significant progress toward making graph-based models more transparent and trustworthy, which is critical for their widespread adoption in materials informatics. Supported by large-scale materials datasets from high-throughput simulations and combinatorial experiments, such models continue to serve as powerful tools for accelerating materials discovery and design [4,23,24,25,26,27,28,29]. Models like the atomistic line graph neural network and the crystal graph convolutional neural network (CGCNN) have demonstrated promising results, particularly in predicting the structural and electronic characteristics of various materials, especially crystalline materials, due to their tailored exploitation of the inherent structure of crystal lattices [22,30]. Despite these advancements, existing DL models face challenges, including their exclusive development for molecular or crystal datasets and the lack of global state descriptions necessary for predicting state-dependent properties [31,32]. These limitations underscore the necessity for a more unified, adaptable, and critically assessed approach to model development. The novelty of this article lies in its dedicated and systematic focus on CGCNN, aiming to consolidate scattered insights into a cohesive framework that evaluates its theoretical underpinnings, architectural evolutions, application domains, and practical limitations. A comprehensive review provides valuable insights into state-of-the-art techniques, benchmarking protocols, best practices, and future research directions for advancing CGCNN and its applications in predicting material properties such as formation energies [33–35], bandgaps [16,36,37], melting temperatures [38–40], thermal conductivity [14,39], and the mechanical behavior of materials [41–43].
The subsequent investigation focuses on leveraging DL technologies, specifically graph-based models, to efficiently predict material properties while addressing challenges related to variable inputs and the size of available datasets. Performance evaluations and comparisons of graph-based models like CGCNN with traditional approaches are undertaken across various material properties. Integration of domain knowledge into DL models is explored to enhance both performance and interpretability, with special emphasis on the convergence of materials science and DL to facilitate efficient prediction of material properties. Benchmarking against ab initio methods like DFT [9,11] and assessing the effectiveness of graph-based models like CGCNN and its variants in capturing atomic interactions are also central to this study. The motivation for conducting a comprehensive review of CGCNN stems from the growing interest in and significance of this approach in materials informatics research. While CGCNN has shown promising results in various applications, there remains a need for a systematic and in-depth analysis of its principles, capabilities, limitations, and potential advancements. Therefore, this research aims to fill the existing gap in the literature by providing a thorough evaluation of CGCNN and proposing strategies to overcome its limitations, ultimately advancing the field of materials informatics through the integration of insights on state-of-the-art techniques, benchmarking protocols, best practices, and future research directions. Importantly, this article addresses a gap in current literature by critically assessing CGCNN’s contributions and challenges and proposing strategic research directions for its future advancement. Through this, it contributes to the broader convergence of materials science and ML, fostering more efficient and intelligent material property prediction workflows.
2 Fundamentals of CGCNNs
2.1 Explanation of CNNs for crystal graphs
Data-driven approaches have revolutionized material research, replacing traditional trial-and-error methods. Neural networks, at the forefront of DL with the rise of big data and AI, have proven powerful at fitting the probability distributions of samples [44,45]. In the context of materials science, the challenge lies in extracting features from crystal structures for regression. Graph neural networks (GNNs) emerged as a solution, particularly in expressing crystal structures using graph data structures [46–48]. Here, nodes represent atoms, and edges signify chemical bonds. The universal properties of crystals, like atomic density and volume, are considered, as crystal structures fundamentally govern such properties [33]. A specific adaptation for crystal graph representation is the CGCNN [22]. CGCNN employs atomic coordinates to create crystal graphs, defining bonds based on proximity and Voronoi faces, which are the boundaries between regions in a Voronoi diagram that define the closest areas around each atom [49]. A connection between two atoms is established when they share a common Voronoi face and their distance lies within the Cordero covalent bond length, referring to the standard covalent bond distances between atoms as defined by Cordero et al., with a tolerance of 0.25 Å [16]. Notably, studies illustrate that connecting each atom only to its 12 nearest neighbors yields effective results. Representation of crystal structures is pivotal for accurate property prediction. For GNNs, suitable graph design is essential to express the original crystal structures [31]. In contrast to traditional DL algorithms focusing on chemical properties, recent graph models like CGCNN emphasize the crystal structure distribution of materials. Nodes are encoded via one-hot encoding, and edge features comprise atom information and bond weight [16].
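To make the graph construction concrete, the following is a minimal, self-contained sketch, not the reference implementation (which uses Voronoi faces and the Cordero bond-length criterion): it simply connects each atom to a fixed number of nearest atoms under periodic boundary conditions and records the bond length as the edge feature. The 2-atom cubic cell at the end is a hypothetical toy example.

```python
import numpy as np

def build_crystal_graph(frac_coords, lattice, num_neighbors=12):
    """Build a simplified crystal graph: each atom (node) is connected to its
    nearest atoms under periodic boundary conditions (edges).
    frac_coords: (N, 3) fractional coordinates; lattice: (3, 3) lattice vectors (rows)."""
    n = len(frac_coords)
    # Enumerate the 27 periodic images of the unit cell (-1, 0, +1 shifts per axis).
    shifts = np.array([[i, j, k] for i in (-1, 0, 1)
                                 for j in (-1, 0, 1)
                                 for k in (-1, 0, 1)])
    edges = []
    for i in range(n):
        dists = []
        for j in range(n):
            # Distance from atom i to every periodic image of atom j.
            diffs = (frac_coords[j] + shifts - frac_coords[i]) @ lattice
            d = np.linalg.norm(diffs, axis=1)
            if i == j:
                d = d[d > 1e-8]  # exclude the atom itself (zero-shift image)
            dists.append((j, d.min()))
        # Keep the nearest neighbors as edges, with bond length as edge feature.
        dists.sort(key=lambda t: t[1])
        for j, d in dists[:num_neighbors]:
            edges.append((i, j, d))
    return edges

# Hypothetical toy cell: 2 atoms in a 4 Å cubic lattice (body-center arrangement).
lattice = 4.0 * np.eye(3)
frac = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
edges = build_crystal_graph(frac, lattice, num_neighbors=4)
```

In a realistic pipeline the bond length would then be expanded into a Gaussian distance basis and each node assigned a one-hot element encoding, as described above.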
The effectiveness of GNN models lies in their ability to cluster data in a high-dimensional space, feature calculation, and map generation to target areas. Figure 2 illustrates the full CGCNN workflow, starting from different types of material structures (e.g., bulk, MOF, 2D, polymer), which are transformed into node–edge graphs.

General workflow schematic of the procedure of CGCNN.
These graphs are input into a CGCNN architecture that performs feature extraction via convolution and pooling operations. Various hyperparameters are tuned to optimize learning. The final output is the predicted material property, such as formation energy or bandgap, with high accuracy (R² = 0.91).
2.2 Overview of different graph architectures
Graphs are a data structure that can model sets of objects (nodes) along with their associations (edges). In contemporary research, the investigation of graphs through DL approaches is attracting immense attention owing to the significant expressive capability of graphs, which can represent varied research problems spanning diverse fields like the social sciences [50], natural sciences [51–53], humanities [54], and others [55]. Graph analysis, as DL on a typical non-Euclidean data structure, emphasizes node classification, link prediction, and clustering. GNNs are DL-based techniques operating in graph domains. In the modern-day scenario, the GNN has emerged as a widely applied graph analysis system owing to its resounding performance. In this section, an overview of different architectures used in GNNs [56–58], specifically graph convolutional networks (GCNs) [59], graph attention networks (GATs) [60], and other relevant architectures, is provided, highlighting their key characteristics and references to seminal works in the field. In addition to GCNs and GATs, there exist several other architectures in the domain of GNNs, each with its own unique characteristics and applications. Some examples include Graph Sample and Aggregation (GraphSAGE) [61], gated graph neural networks [62], and message passing neural networks [63]. These architectures differ in terms of their mechanisms for passing messages, aggregation strategies, and ways of incorporating node features. Figure 3 presents the conceptual evolution of graph learning architectures, tracing the progression from CNNs to GNNs and their specialized derivatives. Initially, CNNs were developed to handle regular grid structures, such as 1D sequences (e.g., text) and 2D grids (e.g., images), by applying convolutional filters across structured inputs. However, CNNs are inherently limited to Euclidean domains, where data points are regularly spaced.
To overcome this limitation, GNNs generalize convolution operations to irregular graph structures by enabling nodes to aggregate information from their neighbors based on graph connectivity.

Overview of different architectures of GNN, CNN, GCN, and GAT.
2.2.1 GCNs
GCNs are one of the foundational designs in the domain of GNNs. They are designed to extend the concept of CNNs to irregular graph structures. CNNs revolutionized DL due to their ability to extract multi-scale localized spatial features and generate highly expressive representations, primarily through shared weights, multiple layers, and local connections [64]. Such features prove decisive in resolving graph-related problems. Nevertheless, CNNs operate only on regular Euclidean data such as texts (1D sequences) and images (2D grids); although these data structures can be considered special cases of graphs, this restriction poses problems for irregular structures. Thus, GCNs are considered a generalization of CNNs to data on graph structures. Introduced by Thomas Kipf and Max Welling in 2017 [59], GCNs aggregate information from neighboring nodes iteratively using convolution operations, allowing them to learn representations of nodes within a graph. GCNs have been widely used in various graph-associated tasks, namely, link prediction, node classification, and graph classification. The underlying idea of GCNs is the generalization of the convolution operation from regular grids (like images) to graph-structured data, enabling effective information propagation and feature learning in graph domains. Thus, GCNs can model inputs and/or outputs comprising elements as well as their interdependence. Numerous comprehensive reviews of GCNs are available. Bronstein et al. [65] thoroughly reviewed geometric DL and deliberated not only on the associated difficulties and problems but also on viable solutions, real-time applications, and guidelines for its future. Another extensive review of GCNs is presented by Zhang et al. [66]. These reviews reveal that GCNs are one of the fundamental architectures in GNNs, extending the concept of convolutional layers from regular grids to irregular graph structures.
A GCN layer defines the first-order approximation of a localized spectral filter on graphs, and we can describe the propagation rule in GCNs as shown in Figure 3.
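Concretely, the Kipf–Welling propagation rule is H^(l+1) = σ(D̃^(−1/2) Ã D̃^(−1/2) H^(l) W^(l)), where Ã = A + I adds self-loops and D̃ is the degree matrix of Ã. A minimal numpy sketch follows; the toy graph and randomly initialized weights are purely illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)  (Kipf & Welling, 2017).
    A: (N, N) adjacency matrix, H: (N, F) node features, W: (F, F') weight matrix."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D̃^-1/2 as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)          # ReLU activation

# Toy 4-node path graph with 2-dimensional node features (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.arange(8, dtype=float).reshape(4, 2)
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
H_next = gcn_layer(A, H, W)   # shape (4, 3)
```

Stacking several such layers lets each node aggregate information from progressively larger neighborhoods.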
2.2.2 GATs
GAT incorporates the attention mechanism into the propagation step of graph-based learning [60] and represents a recent advancement in the field of GNNs. The distinctions between GCNs and GATs in terms of architecture, aggregation, and feature learning mechanisms are systematically outlined in Table 1.
Comparison of GCNs and GATs
Feature | GCNs | GATs |
---|---|---|
Year introduced | 2017 (Thomas Kipf & Max Welling) | 2017 (Petar Veličković et al.) |
Main idea | Extends CNN convolution operations to graph data, aggregating information from neighboring nodes | Uses attention mechanism to dynamically determine the importance of neighboring nodes during aggregation |
Aggregation method | Aggregates information from all neighboring nodes with equal weights | Dynamically assigns weights to neighboring nodes, giving more importance to relevant nodes |
Information propagation | Propagates information between nodes using convolution with fixed weights | Propagates information between nodes using self-attention mechanism |
Key advantages | Simple structure and efficient learning process | More capable of handling complex graph structures with flexible weight assignment |
Disadvantages | Treats all neighboring nodes equally, unable to assign more importance to critical nodes | Higher computational cost compared to GCN, can become costly on large graphs |
Key feature learning | Learns features based on fixed weights between nodes and their neighbors | Learns varying weights between nodes, capturing more complex relationships |
Mechanism utilized | Convolution operation | Attention mechanism |
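The attention-based aggregation contrasted in Table 1 can be sketched for a single head as follows: raw scores e_ij = LeakyReLU(a·[Wh_i ‖ Wh_j]) are computed for connected pairs (including self-loops), normalized with a softmax over each neighborhood, and used as weights in the aggregation. The toy graph and randomly initialized parameters are illustrative assumptions, not taken from any benchmark.

```python
import numpy as np

def gat_layer(A, H, W, a):
    """Single-head GAT layer (after Veličković et al.), dense numpy sketch.
    A: (N, N) adjacency, H: (N, F) features, W: (F, F') weights, a: (2F',) attention vector."""
    Wh = H @ W                                   # transformed features, (N, F')
    n = A.shape[0]
    A_hat = A + np.eye(n)                        # each node also attends to itself
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            z = np.concatenate([Wh[i], Wh[j]]) @ a
            e[i, j] = z if z > 0 else 0.2 * z    # LeakyReLU, negative slope 0.2
    # Mask non-neighbors, then softmax row-wise to get attention coefficients alpha.
    e = np.where(A_hat > 0, e, -np.inf)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ Wh                            # attention-weighted aggregation

# Toy triangle graph (hypothetical example).
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
a = rng.normal(size=(4,))                        # length 2 * F'
H_out = gat_layer(A, H, W, a)
```

Because each row of alpha is a convex combination over the neighborhood, every output feature lies within the range spanned by the transformed neighbor features, which is the flexible weighting the table contrasts with the fixed GCN normalization.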
2.3 Graph convolution and CGCNN: Unveiling structural information in materials
Graph convolution is a key operation in graph-based neural networks. It involves aggregating feature vectors from neighboring nodes based on the connections in the graph, as illustrated in Figure 4.

Assorted architectures of (a) GCNN and (b) CGNN models.
In the context of crystal structures, where atoms are connected through chemical bonds, this operation becomes crucial for capturing structural information. In the standard representation, a graph is denoted by (V, E, A or u), where V represents the set of nodes (atoms), E represents the set of edges (bonds), and A or u symbolizes the adjacency matrix or interface defining the connections between nodes. This representation forms the basis for CGCNN [22]. The broader idea of GNNs is developed by Battaglia et al. [31], who provide a modular framework for DL supporting relational reasoning and combinatorial generalization. In a crystal graph, atoms connected by the same bond are considered neighbors. To enhance the influence of bonds in CGCNN, features of nodes are concatenated with features of bonds, as shown in Figure 4. Designing suitable graphs for expressing original crystal structures is crucial for accurate material property predictions using GNNs. Unlike traditional DL algorithms that focus on chemical properties, CGCNN represents nodes using one-hot encoding, and edge features incorporate information about the two connected atoms and the bond weight. Xie and Grossman introduced CGCNN, uniquely considering crystal lattice periodicity and space group symmetries; it addresses the challenge of representing crystal structures effectively [22]. CGCNN utilizes superpositions of local features to distinguish between different crystal structures. The main idea revolves around utilizing CNNs to map data to a target distribution. This involves updating atom features based on surrounding atoms and bonds, and then computing overall atom and bond features using a pooling function. Pooling layers are used in neural networks to reduce the dimensionality of the data while retaining essential features, which helps to decrease computational load and improve the model’s ability to generalize.
In this context, the pooling layer combines node and edge feature vectors, aiming to minimize losses and align with the target distribution defined by DFT. In experiments, CGCNN has demonstrated outstanding prediction performance in capturing the properties of crystals, showcasing its efficacy in uncovering structural information in materials [31].
3 Methodologies and architectures
3.1 Components of CGCNN
CGCNN employs three fundamental components: graph representation, convolutional layers, and pooling layers. These components are interconnected to process crystal structures and predict material properties efficiently. CGCNN is a model designed for crystals, considering a crystal as a collection of atoms with lattice periodicity. The model employs key methodologies involving atomic, bond, and global state attributes denoted as V, E, and u, respectively. In the context of crystals, a bond loosely denotes an interatomic distance smaller than a pre-defined limit. In representing a crystal graph, nodes denote atoms, while edges signify the connections between them.
After undergoing R convolutions, the network generates feature vectors v_i^(R) for each atom. Subsequently, the pooling layer aggregates these individual atom features to create a comprehensive feature vector v_c for the crystal. This process ensures that the network maintains permutational invariance with respect to the indexing of each atom and size invariance with respect to the choice of unit cell.
This involves a pooling function, such as a normalized summation. Besides pooling and convolutional layers, the CGCNN encompasses a pair of fully connected hidden layers of depths L1 and L2 for capturing complex mappings between the crystal structure and the property. Each layer has a certain depth (number of nodes) and plays a crucial role in learning the complex relationships between crystal structure and material properties. Consequently, an output layer connects to the hidden layer L2 for predicting the target property ŷ [22,63,67]. In CGCNN studies, most researchers note that the choice of convolution function, as given in equation (1), has the maximum impact on prediction performance. The gated convolution function reads

(1) v_i^(t+1) = v_i^(t) + Σ_(j,k) σ(z_((i,j)_k)^(t) W_f^(t) + b_f^(t)) ⊙ g(z_((i,j)_k)^(t) W_s^(t) + b_s^(t)), with z_((i,j)_k)^(t) = v_i^(t) ⊕ v_j^(t) ⊕ u_((i,j)_k),

where ⊕ denotes concatenation and W_f, b_f and W_s, b_s are the weight and bias parameters of the gate and core branches, respectively.
In the equation, ⊙ represents element-wise multiplication. σ represents a sigmoid function, ensuring that each gate value lies within the range of 0 and 1. Additionally, g denotes a nonlinear activation function applied on an undirected multigraph in which atoms are represented as nodes and the interconnections among atoms in a crystal are represented as edges. Each node i is associated with a feature vector v_i, which encodes the characteristics of the corresponding atom. Similarly, the feature vector u_((i,j)_k) represents the characteristics of the kth bond connecting atom i and atom j for each edge (i,j)_k. The weight and bias matrices for the tth convolution step are denoted by W^(t) and b^(t), respectively. The pooled feature vector obtained from the pooling process has the same dimension as each atom feature vector. The prediction of material properties then involves concatenating the pooled vectors originating from the crystal graph and transforming the result through the fully connected layers.
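A minimal numpy sketch of this gated convolution step is given below, assuming toy feature dimensions and random weights; the sigmoid gate σ(·) controls how much of each neighbor message g(·) (softplus here, as in the original implementation) is added to the central atom’s features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    return np.log1p(np.exp(x))

def cgcnn_conv(v, u, edges, W_f, b_f, W_s, b_s):
    """One CGCNN gated convolution step (after Xie & Grossman, 2018).
    v: (N, F) atom features; u: dict mapping bond index k -> bond feature vector;
    edges: list of (i, j, k), the kth bond between atoms i and j.
    z = v_i || v_j || u_(i,j)_k; update: v_i += sigmoid(zW_f + b_f) * softplus(zW_s + b_s)."""
    v_new = v.copy()
    for (i, j, k) in edges:
        z = np.concatenate([v[i], v[j], u[k]])
        gate = sigmoid(z @ W_f + b_f)            # which parts of the message to pass
        core = softplus(z @ W_s + b_s)           # nonlinear message content
        v_new[i] = v_new[i] + gate * core        # residual accumulation over neighbors
    return v_new

# Hypothetical toy sizes: 2 atoms, 3-dim atom features, 2-dim bond features.
rng = np.random.default_rng(2)
v = rng.normal(size=(2, 3))
u = {0: rng.normal(size=2)}
edges = [(0, 1, 0), (1, 0, 0)]                   # one undirected bond as two directed edges
zdim = 3 + 3 + 2
W_f, W_s = rng.normal(size=(zdim, 3)), rng.normal(size=(zdim, 3))
b_f, b_s = np.zeros(3), np.zeros(3)
v1 = cgcnn_conv(v, u, edges, W_f, b_f, W_s, b_s)
```

The residual form (adding to v_i rather than replacing it) helps preserve atom identity information across many convolution steps.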
3.2 Methodology of CGCNN
The methodology of CGCNN involves several key components: graph representation of crystal structures, message passing for information propagation, and neural network architectures tailored for graph data [22,46,63]. The graph representation in CGCNN captures the spatial relationships between atoms in the crystal lattice, which are crucial for understanding material properties. Each atom is associated with a feature vector encoding its chemical properties, such as atomic number, electronegativity, and coordination number. At each convolutional layer t, CGCNN utilizes a message passing mechanism to propagate information across the crystal graph. The message received by node v from its neighbors is computed as in equation (4):

(4) m_v^(t+1) = Σ_(w∈N(v)) M_t(h_v^(t), h_w^(t), e_vw)

where h_v denotes the hidden state of node v, N(v) represents the set of neighbors of node v, e_vw is the feature vector of edge (v, w), and M_t is the message function aggregating these features.
Message passing is a fundamental operation in CGCNN, where information is exchanged among neighboring atoms in the graph. This process simulates the physical interactions between atoms and allows the model to capture the local environment of each atom within the crystal structure. During message passing, each atom aggregates information from its neighbors based on their feature vectors and updates its own representation accordingly, as expressed in the following equation:

h_v^(t+1) = U_t(h_v^(t), m_v^(t+1))
U_t denotes a learnable update function that transforms the aggregated message into a new hidden state for each node. This iterative process continues for multiple layers, enabling the model to capture increasingly complex patterns and dependencies within the crystal lattice. CGCNN employs a neural network architecture designed precisely for graph-structured data. The graph convolutional architecture typically comprises multiple layers, each involving graph convolutional operations followed by non-linear activation functions. These operations aggregate information from neighboring nodes and update node representations through learned transformations. This comprehensive architecture allows CGCNN to effectively model the relationships within crystal structures and predict material properties with high accuracy.
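The full pipeline described in this section, message passing followed by a normalized-sum (mean) pooling and two fully connected hidden layers, can be sketched end-to-end as follows. All shapes, weights, and the choice of M and U are hypothetical placeholders; the point of the sketch is that the prediction is invariant to atom re-indexing, as required of the crystal feature vector v_c.

```python
import numpy as np

def message_passing(A, H, M, U, rounds=3):
    """Generic message-passing loop: m_v = sum over neighbors w of M(h_v, h_w),
    followed by h_v <- U(h_v, m_v), repeated for a fixed number of rounds."""
    n = A.shape[0]
    for _ in range(rounds):
        msgs = np.zeros_like(H)
        for v in range(n):
            for w in range(n):
                if A[v, w] > 0:
                    msgs[v] += M(H[v], H[w])
        H = np.array([U(H[v], msgs[v]) for v in range(n)])
    return H

def predict_property(A, H, W1, W2, w_out):
    """Normalized-sum pooling to a crystal vector v_c, then two hidden layers -> y_hat."""
    H = message_passing(A, H,
                        M=lambda hv, hw: hw,              # toy message: neighbor state
                        U=lambda hv, m: np.tanh(hv + m))  # toy update function
    v_c = H.mean(axis=0)              # pooling: invariant to atom order and cell size
    h1 = np.tanh(v_c @ W1)            # fully connected hidden layer L1
    h2 = np.tanh(h1 @ W2)             # fully connected hidden layer L2
    return float(h2 @ w_out)          # scalar property prediction y_hat

# Hypothetical 3-atom graph with 4-dimensional features.
rng = np.random.default_rng(3)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W1, W2, w_out = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=8)
y_hat = predict_property(A, H, W1, W2, w_out)
```

Relabeling the atoms (permuting the rows of H and both axes of A) leaves y_hat unchanged, which is the permutational invariance property stated earlier for the pooled crystal vector.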
3.3 Advancements and applications of CGCNN
CGCNNs have undergone significant development in recent research, demonstrating various architectures, adaptations, and advancements. Researchers have refined and innovated upon CGCNN since its inception in Xie and Grossman’s pioneering work [22]. Different features and combinations have been explored, with extensive benchmarking on datasets like QM9 [68,69]. CGCNN’s flexibility in describing crystal structures is evident in its successful prediction of diverse properties of materials, like bandgap, Fermi energy, formation energy, and elastic properties. For example, Sanyal et al. [70] achieved an enhancement in CGCNN’s predictive performance through multitask learning, emphasizing the sharing of parameters at lower levels. Incorporating domain knowledge has further enhanced CGCNN’s capabilities, as demonstrated by Xie and Grossman [22] and Sanyal et al. [70]. This versatility positions CGCNN as a powerful tool for predicting crystal properties, contributing valuable insights to materials science applications. Concerning architectures, CGCNN typically includes convolutional layers for updating atom feature vectors based on atomic and bond interactions. Pooling layers are utilized to create a comprehensive crystal feature vector, ensuring consistency regardless of atom indexing and size. Variations may include incorporating fully connected hidden layers with depths customized to capture complex relationships between crystal structure and properties. Advancements in CGCNN models explore diverse approaches, such as integrating domain knowledge to enhance prediction accuracy. Adaptations, like CGCNN’s use of undirected graphs, highlight the flexibility of these models in capturing atomic interactions. Thus, the evolution of CGCNN architecture and its variations underscores their versatility and effectiveness in leveraging graph-based representations for materials science applications.
The integration of crystallographic information into the CGCNN for property prediction involves a multi-layered approach. In previous studies, an example was provided demonstrating the effectiveness of a basic CGCNN architecture comprising a single linear convolution layer and one pooling layer. This model successfully distinguished between two distinct crystal structures [22,70]. The model’s capacity to extract nuanced structural differences relies on the arrangement of atom connections, unveiling inherent relationships between structure and property. The CGCNN’s generality was further demonstrated by training the model on properties calculated from the Materials Project [4]. The focus of this work lies on two aspects of generality: (i) the broad applicability of the model to different structure types and chemical compositions and (ii) the model’s ability to accurately predict a variety of properties. The database used comprises a wide variety of inorganic crystals, spanning from native metals to complex minerals. After eliminating incomplete data, it consists of 46,744 materials, including 87 elements, categorized into 7 lattice systems, and grouped into 216 space groups. A concise summary of the dataset’s key parameters is presented in Table 2.
Summary of the key parameters of the dataset materials
Parameter | Value | Open source | Refs. |
---|---|---|---|
Versatile materials | ≥46,744 | Materials Project, AFLOW, OQMD | [2,71] |
Elements | ≥87 | | [24,27] |
Lattice systems | ≥7 | | [72] |
Space groups | ≥216 | | [4,25] |
Properties | ≥156,236 | | |
Notably, the materials exhibit diverse complexity, with compositions ranging from binary to quaternary compounds, containing up to seven different elements in a single crystal. The primitive cell atom count ranges from 1 to 200, though most (90%) have fewer than 60 atoms. The database, mainly derived from the Inorganic Crystal Structure Database, offers a thorough overview of recognized stoichiometric inorganic crystals, underscoring the model’s strength and adaptability [72].
4 Graph-based DL in materials data science
At present, techniques like the GCNN, a class of deep neural networks, have exhibited noteworthy potential in enhancing the extraction of characteristic features of multifunctional materials [73]. Although different methods exist for gathering the physicochemical properties relevant to estimating material properties, the strong contender that has emerged in this direction is the GNN. Graph-based DL, exemplified by the CGCNN model, is a prominent approach in materials information science. CGCNN leverages the inherent graph structure of materials, representing atoms as nodes and bonds as edges, to effectively capture the complex relationships between crystal structures and material properties (Figure 5).

CGCNN model representing a crystal as graph G, atoms in the crystal cell as nodes (v i, v j), and any interatomic information as edges (u i,j). The global state of the graph, edges, and nodes is embedded into vectors based on multiple levels of information extracted from the materials.
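The crystal-as-graph idea can be sketched in a few lines. The toy below (non-periodic, with an illustrative cutoff and made-up coordinates) connects atom pairs within a distance threshold; a real CGCNN implementation additionally handles periodic images and neighbor-count rules.

```python
import numpy as np

def build_crystal_graph(positions, cutoff=3.0):
    """Connect atom pairs whose distance is below `cutoff`.

    Hypothetical sketch: actual CGCNN graph construction also considers
    periodic boundary conditions and a fixed number of nearest neighbors.
    """
    n = len(positions)
    # pairwise distance matrix
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and dist[i, j] < cutoff]
    return edges, dist

# toy fragment: 4 atoms on a square grid (angstrom values are made up)
pos = np.array([[0.0, 0.0, 0.0],
                [2.0, 0.0, 0.0],
                [0.0, 2.0, 0.0],
                [2.0, 2.0, 0.0]])
edges, dist = build_crystal_graph(pos, cutoff=2.5)
print(len(edges))  # 8 directed edges: each atom connects to its 2 nearest neighbors
```

Each undirected bond appears as two directed edges, matching the convention that every node aggregates messages from its neighbors.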
By applying graph convolutional operations, CGCNN effectively extracts features from the graph representation, incorporating spatial relationships between atoms and capturing the local environment of each atom. This approach allows CGCNN to learn rich and informative representations of materials, enabling accurate predictions of various material properties. The efficacy of CGCNN in materials information science has been demonstrated in several studies. Although DL models like SchNet, a model developed to predict the physical properties of chemical substances and solid materials by effectively learning atomic interactions in both molecular and solid-state systems, have incorporated periodic boundary conditions for convolution and are applicable to solids and molecules alike, the majority of DL models designed for molecules fail on solid-state systems owing to their dissimilar characteristics [73]. However, recent advancements have produced multiple graph-based DL models focused on predicting the properties of crystal systems, a fundamental step toward widespread adoption of GNNs in inorganic crystalline materials research [74]. The task of material property prediction is formulated as learning a mapping from the graph to the target property. Notable examples include the CGCNN model, first proposed in 2018 by Xie and Grossman, which takes the crystal structure graph as input to a CNN. Xie and Grossman applied CGCNN to predict material properties such as electronic conductivity, thermal conductivity, and bandgap energies with high accuracy and interpretability. Similarly, Louis et al. applied a novel global attention graph neural network (GATGNN) to forecast material properties such as formation and absolute energy, bandgap, and bulk and shear moduli of inorganic materials [75].
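The gated convolution at the heart of this mapping can be illustrated with a simplified numpy sketch. Inspired by (but not identical to) the CGCNN update rule of Xie and Grossman, each node accumulates neighbor messages weighted by a learned sigmoid gate; the weights, feature sizes, and edge list here are random stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cgcnn_conv(node_feats, edges, edge_feats, Wf, Ws):
    """One simplified CGCNN-style gated convolution step (illustrative).

    For each directed edge (i, j), concatenate the features of atoms i and j
    with the bond features, gate a candidate message, and add it to node i.
    """
    new_feats = node_feats.copy()
    for (i, j), u in zip(edges, edge_feats):
        z = np.concatenate([node_feats[i], node_feats[j], u])
        gate = sigmoid(z @ Wf)       # learned filter (sigmoid gate)
        core = np.tanh(z @ Ws)       # learned candidate message
        new_feats[i] += gate * core  # residual update of node i
    return new_feats

rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))          # 3 atoms, 4-dim node features
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
u = rng.normal(size=(4, 2))          # one 2-dim bond feature per edge
Wf = rng.normal(size=(10, 4))        # (4 + 4 + 2) -> 4
Ws = rng.normal(size=(10, 4))
h_new = cgcnn_conv(h, edges, u, Wf, Ws)
print(h_new.shape)  # (3, 4): node feature dimension is preserved
```

Because the update is residual, stacking several such layers lets each atom's vector absorb progressively larger chemical environments.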
Park and Wolverton developed an upgraded variant of the CGCNN model, namely iCGCNN, and demonstrated its accuracy with two distinctive illustrations [76]. In the first, iCGCNN achieved drastically enhanced predictive accuracy when trained/validated on 180,000/20,000 DFT-calculated thermodynamic stability entries from the open quantum materials database (OQMD) and evaluated on a separate test set of 230,000 entries [24]. In the second, iCGCNN achieved a 31% success rate in a high-throughput search for ThCr2Si2-structure-type materials, 2.4 times greater than the original CGCNN and 310 times higher than an undirected high-throughput search [76]. They screened 132,600 compounds with elemental decorations of the ThCr2Si2 prototype crystal structure, identifying 97 novel, highly stable compounds while performing only 757 DFT calculations, a 130-fold acceleration of the computational time required for the high-throughput search. Thus, iCGCNN speeds up the discovery of high-performance novel materials via swift and precise identification of crystalline compounds possessing the desired properties [76]. Wang et al. studied a modified version of the CGCNN algorithm, namely mCGCNN, which effectively combines materials' physical and/or chemical properties with their crystal structure attributes, addressing the data-fusion problem of contemporary generic materials models. They achieved a substantial improvement in the efficient and accurate estimation of gravimetric capacity for lithium-ion batteries (LIBs) [77]. Open-source databases such as the Materials Project are used to collate the crystalline material properties employed in CGCNN models.
As shown in Table 3, CGCNN and its advanced variants leverage diverse datasets to predict material properties ranging from bandgaps to gravimetric capacity in LIBs, yielding significant performance improvements across multiple domains.
CGCNN and its variants in materials property prediction
Model | Dataset/database | Predicted properties | Performance/outcome |
---|---|---|---|
CGCNN | Materials Project, QM9 | Bandgap, Fermi energy, formation energy, elastic properties | Accurate prediction of material properties using crystal graph structure |
iCGCNN | OQMD, 230,000 entries | Thermodynamic stability | 2.4× higher success rate than CGCNN, 310× faster than high-throughput search; discovered 97 novel stable compounds |
mCGCNN | Materials Project, 2440 crystal structures related to LIBs | Gravimetric capacity for LIBs | Significant improvement in gravimetric capacity prediction, effectively combining physical/chemical properties |
CGCNN (MOF screening) | Theoretical MOF database (330,000 MOFs) | Methane adsorption behavior | Identified four MOFs with the highest methane adsorption efficiency |
So far, 2,440 crystal structures related to LIB materials have been retrieved from the Materials Project database using the MP-API. Six gravimetric-capacity-related properties, representing the capacity per unit mass of the battery, were also obtained and used as labels in the prediction process. The crystallographic information file format (available from the Materials Project database) follows a key-value pair structure, where each explicit data item is denoted by a key with the corresponding value as the data [4]. Such data may include the crystal name, elemental composition, cell parameters, symmetry, atomic coordinates, associated references, etc. [78]. This standardization of format enables DL algorithms to read and analyze the data with ease. In addition, Wang et al. developed algorithms for predicting the methane adsorption behavior of metal-organic frameworks (MOFs) and used them to predict the methane adsorption volume of a few randomly chosen zeolitic imidazolate frameworks and covalent organic framework materials [79]. The findings indicate the model's effectiveness in forecasting more porous substances. The same model was used to conduct a tiered analysis of a theoretical MOF database (around 330,000 MOFs), with four theoretical MOFs showing the highest methane adsorption efficiency. Additional investigation of all the screened MOFs reveals the interplay between working capacity and certain structural features. The results suggest that the discovery of innovative materials, not only for gas adsorption but also for other fields involving material and molecular interactions, can be accelerated through the integration of DL and hierarchical screening [79]. Furthermore, CGCNN has been exploited efficaciously in several materials science activities, such as materials discovery, property forecasting, and materials design.
Its ability to seamlessly integrate the characteristics associated with material crystal structure and its physical as well as chemical properties makes it a versatile tool for advancing materials information science [17,80–84]. Overall, CGCNN represents a significant advancement in graph-based DL for materials information science, offering a powerful framework for predicting material properties and accelerating materials discovery and design.
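The key-value structure of crystallographic information files described above can be read with a few lines of standard Python. This is a minimal sketch: the CIF snippet uses illustrative values, and real CIFs also contain `loop_` blocks that dedicated libraries such as pymatgen parse robustly.

```python
# Minimal sketch of reading key-value pairs from CIF-style text.
cif_text = """\
_chemical_formula_sum 'Li Fe P O4'
_cell_length_a 10.33
_cell_length_b 6.01
_cell_length_c 4.69
_symmetry_space_group_name_H-M 'P n m a'
"""

def parse_cif_pairs(text):
    """Collect lines of the form `_key value` into a dictionary."""
    pairs = {}
    for line in text.splitlines():
        if line.startswith('_'):
            key, _, value = line.partition(' ')
            pairs[key] = value.strip().strip("'")
    return pairs

info = parse_cif_pairs(cif_text)
print(info['_cell_length_a'])  # 10.33
```

It is exactly this uniform key-value layout that lets a data pipeline extract cell parameters, symmetry, and composition for thousands of structures without per-file handling.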
5 Other variants and improvements in the CGCNN model
Graph-based representations naturally capture the connectivity between entities, making them intuitive and computationally feasible for expressing atomic structures. Traditional DL methods often flatten these representations into lower-dimensional vectors, leading to the loss of spatial and structural information. CNNs address this limitation by allowing hierarchical information processing while preserving spatial characteristics. Thus, employing graph representations and GCNs has emerged as an effective approach for classifying and predicting properties of atomic systems. GCNs have demonstrated strong effectiveness in node-representation learning tasks [59,66,78]. They have been extended to predict material properties by representing the material as a graph, where nodes represent atoms/particles and edges represent interatomic connections. This approach laid the foundation for the CGCNN model [22]. This structure is easily interpretable because it allows for the extraction of local contributions of the chemical environment, which significantly influences the overall properties of crystals. Insights gained from CGCNN regarding chemical characteristics aid in refining the search space for identifying new materials. To quantitatively assess the performance of CGCNN and its variants, we summarize the reported mean absolute errors (MAEs) across several material properties in Table 4.
Quantitative comparison of CGCNN variants: MAEs across key material properties with respective applications and references
Variant model | Formation energy (E f) MAE (eV atom−1) | Bandgap (E g) MAE (eV) | Bulk modulus (K VRH) MAE (log10 GPa) | Shear modulus (G VRH) MAE (log10 GPa) | Application | Refs.
---|---|---|---|---|---|---
CGCNN | 0.039 | 0.388 | 0.053 | 0.087 | Adsorbents for wastewater purification, LIB materials, etc. | [22]
CSGN | 0.043 | 0.263 | 0.048 | 0.088 | — | [85]
MT-CGCNN | 0.041 | 0.290 | — | — | — | [70]
iCGCNN | 0.038 | — | — | — | — | [76]
CGCNN with Eigen Pooling | 0.038 | 0.037 | 0.053 | 0.086 | — | [86]
OGCNN | 0.46 | 0.296 | 0.054 | 0.082 | — | [87]
Park and Wolverton introduced an enhanced version of the CGCNN model known as the improved CGCNN (iCGCNN) [76]. This adaptation surpasses the original by integrating information on 3-body correlations between neighboring atoms, using a Voronoi decomposition of the crystal structure, and enhancing the depiction of interatomic bonds in the graphs [22,35]. iCGCNN demonstrates approximately 20% higher accuracy compared to CGCNN. However, iCGCNN struggles with predicting thermal stability due to its oversight of the geometric structure of crystals, despite considering the placement of atoms within them. A recent advancement in the field introduced the orbital graph convolutional neural network (OGCNN) [87]. This model builds upon the CGCNN architecture by incorporating atomic orbital information through the orbital field matrix (OFM) representation. The OFM encodes bonding between atomic orbitals, utilizing electron configurations and solid angles of neighboring atoms for enhanced information [88]. By integrating larger-dimensional edge feature vectors compared to CGCNN, OGCNN exhibits improved learning and prediction capabilities for both formation energetics and bandgaps. Additionally, the enhanced bonding information facilitates better communication between nodes in the network. Chen et al. introduced the materials graph network (MEGNet) [73], which predicts molecular properties by integrating global state variables. MEGNet forms a unified framework for correlated properties, with each block capturing interactions among atoms and their local environments. However, increasing the number of blocks may reduce the contrast of the atoms' embeddings. In the context of the MEGNet model, transfer learning involves leveraging learned element embeddings and model parameters from a pre-trained model to enhance the performance of a model trained on a smaller or different dataset [73].
This approach aims to improve prediction accuracy by transferring knowledge encoded in the embeddings, enabling better generalization even with limited data. Schütt et al. introduced the SchNet model [89], which employs continuous-filter convolutional layers to predict properties of molecules based on atom-wise representations. Unlike traditional models, the output layers in the SchNet model are not tailored to individual properties. Sanyal et al. [70] introduced the multi-task crystal graph convolutional network (MT-CGCNN), which combines CGCNN with multi-task learning (MTL) to predict multiple material properties. In MT-CGCNN, the tasks within MTL share the crystal representation, allowing learning to transfer from one property to another. While MTL accelerates the learning of the crystal embedding space, it may not necessarily improve prediction accuracy for all properties. For instance, some properties, like formation energy, may already be excellently predicted using a simple CGCNN model. To accurately compare MTL and MT-CGCNN models, it is essential to understand the specific limitations of MTL models, such as scalability and interpretability. While MTL models offer benefits in jointly learning multiple tasks, MT-CGCNN may address some of these limitations by employing specialized architectures tailored for materials science tasks. This tailored approach to handling material data could potentially result in improved performance, scalability, and interpretability compared to traditional MTL models [70]. Additionally, Louis et al. introduced the global attention graph neural network (GATGNN) [75] framework, aimed at enhancing the understanding of internal relationships within crystal graphs and their properties.
This framework incorporates augmented-GAT layers, which include local attention layers for observing the environment and a global attention layer for learning relationships between adjacent particles and overall crystal structure properties. The global attention layer aggregates the environment-specific vectors to provide a comprehensive view of the entire crystal graph. While CGCNN and MEGNet models excel in predicting certain material properties, GATGNNs also demonstrate strong potential in capturing features for effective material property prediction. In GNNs, as in traditional CNNs, convolution and pooling functions are utilized. Pooling mechanisms can be systematically classified based on their features [60,75,90]. Though pooling can occur at both local and global levels, research suggests that local pooling alone does not fully account for the efficiency of GNNs on widely used benchmarks [91]. Nevertheless, pooling operators are typically constructed to be applicable in both local and global contexts. The eigen pooling operator, utilizing the graph Fourier transform, pools features based on structural information from the graph signal, preserving the original graph structure [92]. Enhancing the pooling mechanism in the CGCNN model (Figure 6) with several variants presents a significant opportunity for improvement, often overlooked yet crucial for model performance. As depicted in Figure 6, integrating advanced pooling strategies into the CGCNN framework, including ReLU-activated feature addition and global eigen pooling, demonstrates potential for notable improvements in predictive accuracy and model generalization.

A few variants of the CGCNN model. (a) Conventional architecture design of CGCNN. (b) Modified architecture design of CGCNN.
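The simplest readout discussed here, global mean pooling, reduces the per-atom feature matrix to one crystal-level vector. A minimal sketch with toy feature values:

```python
import numpy as np

def mean_pool(node_feats):
    """Global mean-pooling readout: average all atom feature vectors
    into a single crystal-level descriptor (as in the original CGCNN)."""
    return node_feats.mean(axis=0)

# 5 atoms with 3-dim learned features (toy numbers for illustration)
h = np.array([[1.0, 0.0, 2.0],
              [3.0, 0.0, 0.0],
              [1.0, 2.0, 1.0],
              [1.0, 2.0, 1.0],
              [4.0, 1.0, 1.0]])
crystal_vec = mean_pool(h)
print(crystal_vec)  # [2. 1. 1.]
```

Variants such as eigen pooling replace this uniform average with structure-aware weightings derived from the graph spectrum, which is precisely where the improvements shown in Figure 6 come from.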
Notably, the modified version of CGCNN, which represents the local surroundings of crystals using composition- and structure-dependent vectors rather than human-designed features, shows promise in predicting properties like group number, element block, radius, and electronegativity in perovskites and elemental boron [22,68,93]. However, its performance in predicting the local energy of inorganic materials lags behind [46].
Recent advancements in DL models have shown promise in predicting multiple material properties within a single framework. For instance, the MatErials Graph Network (MEGNet), an advanced version of the GNN, has been leveraged to forecast internal energy at various temperatures as well as properties such as entropy, enthalpy, and Gibbs free energy within the same model [73]. MEGNet also demonstrates notable precision in predicting electrical properties by consolidating various free energy models into a unified framework, incorporating global state variables like temperature, pressure, and entropy. Another notable model, HydraGNN, investigated by Lupo Pasini et al. [94], employs multitask learning to predict mixing enthalpy, atomic magnetic moment, and charge transfer simultaneously. With an architecture comprising two sets of layers, HydraGNN learns both common and specific features of each material property. In HydraGNN's convolutional layers, a variant of GCNN known as principal neighborhood aggregation facilitates the classification of different graphs. While HydraGNN is primarily designed for predicting molecular atomization energies [92], it also demonstrates flexibility for applications across various materials science properties. To facilitate a clearer comparison among leading graph-based models, Figure 7 presents the MAE values of CGCNN, SchNet, and MEGNet in predicting formation energies, adapted from a widely cited benchmark study [2,68,71].

Comparative MAE for formation energy prediction using CGCNN, SchNet, and MEGNet models.
This visual representation highlights the relative predictive accuracies of these models and offers valuable insight into their practical utility in materials informatics.
6 Real-world applications of CGCNN in material discovery
While CGCNN has shown great promise in academic settings, its practical impact on real-world materials discovery is equally significant and increasingly evident through various case studies. In this section, we highlight concrete examples demonstrating how CGCNN-based approaches have directly contributed to experimental materials design and selection. One such example is the work of Jaffari et al. [85], who used a CGCNN model to predict the adsorption proficiency of Nb2CT x toward heavy metals. Their results, with a MAE and root mean squared error (RMSE) >0.09 and >0.16 eV, respectively, indicate that Pb(ii) ions possess larger adsorption energy than Cd(ii) ions. In a nutshell, their investigation reveals the great efficacy of the proposed technique in comparing the adsorption behavior of various materials. The study can serve as a guide in designing aqueous environmental experiments for the elimination of other hazardous pollutants. Figure 8 depicts the schematic design of the complete research undertaken by Jaffari et al. [85].
Schematic illustration of the CGCNNs for the estimation of the adsorption capability of Nb2CT x for Cd(ii) and Pb(ii) ions. (Reproduced with permission [85]. Copyright 2013, Royal Society of Chemistry.)
The CGCNN framework offers advantages over traditional ML operations by eliminating the necessity for costly and error-prone extensive data labeling. This feature enables CGCNN to avoid recomputing the heat of adsorption or geometric properties common in traditional ML models. Recent applications of CGCNN include predictions in various materials domains such as intermetallic alloys [95], electroreduction catalysts [96], and photoanode materials [97]. In the gas storage sector, Wang et al. [79] demonstrated CGCNN’s effectiveness in predicting methane adsorption in metal-organic frameworks (MOFs) as illustrated in Figure 9.
Representation of hypothetical MOFs for methane adsorption using the CGCNN model, showing four hypothetical MOFs (a)–(d) with crystal structures optimized for adsorption at 298 K and 35 bar, and the quantified adsorption isotherms. (Reproduced with permission [79]. Copyright 2020, American Chemical Society.)
Gu et al. introduced a simple yet adaptable approach to accelerate catalyst screening using inclusive deep-learning models for binding energy estimation [98]. The model, trained on a dataset including the methane working capacity (Nwc), accurately screened potential MOFs from a database of approximately 330,000 structures [79,99]. An end-to-end DL classification model based on CGCNN achieved an AUC of 0.930 for methane uptake prediction, facilitating rapid evaluation of a very large number of hypothetical MOFs. This approach holds promise for developing materials tailored to specific gas storage or separation applications, with potential industrial significance. For energy storage applications, Wang et al. introduced the mCGCNN (modified CGCNN) model, which effectively integrates the physical and chemical properties of materials with crystal structure features, thereby improving the precision and efficiency of gravimetric capacity estimation for lithium-ion batteries [77]. This model addresses the data-fusion challenge in materials science and achieves high accuracy in gravimetric capacity prediction and classification tasks, as shown in Figure 10.
Modified CGCNN model diagram for the prediction of the gravimetric capacity of LIBs: (a) model structure and (b) and (c) CGCNN and mCGCNN model test sets. (Reproduced with permission [77]. Copyright 2023, Elsevier Ltd.)
The model achieved an AUC of 0.99 and a classification accuracy of 95.5%, demonstrating its utility for real-time battery material screening. mCGCNN provides enhanced flexibility and can be employed as an invaluable tool for the swift and accurate screening of highly promising battery materials through the unification of numerical data with the crystal structure of the materials. Moreover, its potential extends to the discovery of many additional multi-functional materials, exhibiting its wide applicability across various domains of materials science [77].
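The AUC figures quoted in these screening studies can be computed without any ML library. Below is a minimal rank-based sketch (the Mann-Whitney formulation: the probability that a random positive is scored above a random negative), using made-up screening scores.

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via pairwise ranking: fraction of positive/negative pairs in
    which the positive receives the higher score (ties count half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# toy screening labels/scores (illustrative): 1 = high-capacity candidate
y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.5, 0.2, 0.1])
print(roc_auc(y, s))  # ≈ 0.889: one positive is out-ranked by one negative
```

An AUC of 0.99, as reported for mCGCNN, means almost every candidate material is ranked above almost every non-candidate, which is what makes the model usable as a screening filter.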
In the field of catalysis, researchers applied an ensemble CGCNN model to a dataset of over 40,000 unrelaxed alloy structures for CO2 reduction catalysts (Figure 11).
(a) The complete roadmap of the binding-energy estimation model and (b) a typical bar plot of the predicted binding energies of In3Cu7 using five different models for LS-CGCNN and LS-SchNet, representing their bias. (*) denotes cases in which the original CGCNN (binding site) substitution is applied. (Reproduced with permission [98]. Copyright 2020, American Chemical Society.)
This modified model, applied to unrelaxed structures, achieves MAEs of 0.085 and 0.116 eV for the binding energies of H and CO, respectively. This economical model outperformed the best previously reported method, which requires costly geometry relaxations yet achieves errors of 0.13 eV for both H and CO binding energies. Analysis of the model parameters reveals its effective learning of chemical information related to the binding site. By exploiting the graph structure of materials, CGCNN effectively models the interactions between atoms, facilitating precise electronic property predictions. In addition to electronic properties, CGCNN is adept at predicting mechanical properties like elastic constants, fracture toughness, and bulk modulus. Its ability to consider atomic arrangements and their connectivity enables accurate modeling of material behaviors under mechanical stress, crucial for designing materials with desired mechanical properties. Case studies demonstrate CGCNN's effectiveness in real-world applications. For instance, CGCNN has been used to predict band structures of new materials for photovoltaic applications, contributing to the discovery of efficient solar cell materials [100]. Beyond property prediction, CGCNN has facilitated targeted materials design in photovoltaics: Xie and Grossman [22], Takatsu et al. [93], and Zhan et al. [100] used CGCNN to screen a large double-perovskite chemical space, identifying promising perovskite candidates for solar cells [22,93,100]. In another application, a CGCNN model was designed to learn per-site properties of materials for assessing local information such as atomic vibration frequency, magnetic moment, Bader charge, and site-projected d-band and O 2p-band centers. Such per-site properties find wide applicability in electronics, spintronics, thermodynamics, and catalysis [101].
These capabilities make CGCNN suitable for practical applications in electronics, spintronics, thermoelectrics, and catalysis. ML methods, including DL methods like CGCNN, are increasingly used in predicting material characteristics, demonstrating their usefulness in gathering complex associations within the data and exhibiting valuable perceptions about the material performance. These real-world case studies highlight CGCNN’s practical contributions across multiple domains, ranging from environmental remediation and energy storage to catalysis and photovoltaics. Its ability to learn from complex structural data and deliver reliable predictions has made it a critical tool in reducing experimental workloads, optimizing materials selection, and accelerating innovation in materials science.
7 Advancements in graph representation learning: Evaluation and challenges
7.1 Performance evaluation of CGCNN
The motivation behind CGCNN is rooted in advancements in graph representation learning, exemplified by studies conducted by researchers [56,61,78,102,103]. These works focus on learning low-dimensional vector representations of graph nodes, edges, or subgraphs, departing from traditional ML methods reliant on handcrafted features. Inspired by the success of word embedding techniques like SkipGram [104] and DeepWalk [105] in natural language processing, researchers introduced graph embedding methods such as node2vec [106], LINE [107,108], and TADW. However, these methods encounter challenges such as a lack of parameter sharing between nodes and difficulty in generalizing to dynamic graphs or novel data. In the following performance evaluation, the efficacy of CGCNN in predicting material properties is assessed against traditional methods and other DL approaches. Evaluation metrics such as MAE, RMSE, and the R-squared (R 2) score are discussed to gauge the accuracy and predictive power of CGCNN models. The formulas for MAE, RMSE, and R 2 are expressed by equations (6)–(8), respectively [109]:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right| \quad (6)$$

where y is the actual value, ŷ is the predicted value, and n is the number of data points. The MAE quantifies the error between the actual and predicted values by averaging the absolute differences; a smaller MAE indicates better model performance, as it reflects a reduced discrepancy between predicted and actual values.

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^{2}} \quad (7)$$

With y again the actual value, ŷ the predicted value, and n the number of data points, RMSE is a widely used regression metric that quantifies the difference between actual and predicted values by computing the square root of the average of the squared differences. It measures the magnitude of errors in the model's predictions, making it a key indicator alongside MAE in regression analysis.

$$R^{2} = 1 - \frac{\sum_{i=1}^{n} \left( t_i - y_i \right)^{2}}{\sum_{i=1}^{n} \left( t_i - \bar{t} \right)^{2}} \quad (8)$$

Here, t is the actual value, y is the predicted value, and t̄ is the average of the actual values. The R 2 score, also known as the coefficient of determination, measures the proportion of variance in the target variable that is explained by the model. A value closer to 1 indicates better performance, implying that the model has higher explanatory power and accounts for a larger portion of the variance in the data [109]. The readout function in CGCNN employs a global mean pooling layer to compute the mean feature vector over all the atoms constituting a crystal structure, yielding a single feature vector for each crystal. This model has demonstrated benchmark-level prediction accuracy for various properties, including formation energy, bandgap, bulk and shear moduli, Poisson's ratio, and metal/non-metal classification. For electronic properties, CGCNN achieves MAEs of 0.039 eV/atom for formation energy and 0.388 eV for the bandgap [22]. Additionally, the model achieves MAEs of 0.054 log(GPa) and 0.087 log(GPa) for bulk and shear moduli prediction, respectively. It also attains 80% accuracy for metal classification and 95% accuracy for non-metal classification [89]. Compared to benchmark models and other CGCNN variants, CGCNN demonstrates strong performance. For instance, it has outperformed SchNet on the Materials Project crystal dataset [4]. Notably, CGCNN attains MAEs of 0.33 eV for bandgap and 0.028 eV/atom for Fermi energy prediction, and MAEs of 0.079 and 0.050 log(GPa) for shear and bulk moduli prediction, respectively. Moreover, CGCNN achieves accuracies of 78.9 and 90.6% for metal and non-metal classification, respectively. To enhance prediction accuracy, Park and Wolverton introduced the iCGCNN model, which incorporates three-body atomic correlations and Voronoi tessellation features [76].
This model also includes edge feature convolution during atomic convolutions, leading to an improvement of over 20% in predicting formation energies and convex hull distances compared to the original CGCNN. Furthermore, the computational efficiency and predictive performance of CGCNN have been benchmarked against traditional methods such as DFT, empirical potentials, and classical physics-based models. These comparisons highlight CGCNN's superior accuracy and scalability, albeit with higher computational demands. It has also outperformed conventional ML approaches such as random forests, support vector machines, and linear regression, particularly in terms of robustness and handling of complex materials data. Despite these advancements, several challenges limit CGCNN's broader applicability. A key concern is overfitting, especially when the model is trained on small or biased datasets lacking diversity in crystal structures or chemical compositions, which reduces its generalizability to novel materials. Additionally, interpretability remains a critical bottleneck. The graph embeddings and message-passing mechanisms used in CGCNN are often opaque, making it difficult to determine how specific atomic features influence predictions. While exploratory methods such as attention mechanisms and gradient-based attribution have been proposed to improve transparency, their adoption in materials science is still nascent. Overcoming these limitations is essential to establishing reliable and interpretable AI-driven frameworks for accelerated materials discovery.
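The three evaluation metrics used throughout this section can be computed directly from predictions. A minimal sketch with illustrative (made-up) formation-energy values:

```python
import numpy as np

def mae(y, yhat):
    """Mean absolute error: average of |actual - predicted|."""
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    """Root mean squared error: sqrt of the mean squared difference."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def r2(t, y):
    """Coefficient of determination; t = actual values, y = predictions."""
    ss_res = np.sum((t - y) ** 2)
    ss_tot = np.sum((t - t.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# toy formation energies in eV/atom (illustrative values only)
actual = np.array([-1.2, -0.8, -0.5, -1.0])
pred = np.array([-1.1, -0.9, -0.4, -1.0])
print(mae(actual, pred))  # ≈ 0.075
```

Because RMSE squares the residuals before averaging, it penalizes a few large outliers more heavily than MAE, which is why benchmark papers usually report both.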
7.2 Challenges in graph representation learning
Despite the great potential of GNNs, issues like over-smoothing have been observed in deep GNN models: as network depth increases, node embeddings become increasingly similar, hindering scalability and the classification of unlabeled nodes. To address this, a deeper graph attention neural network (deeper GATGNN) was introduced, extending the GATGNN architecture with differentiable group normalization layers and additive skip connections [110]. These enhancements allow for better clustering of nodes and prevent over-normalization, ultimately improving the scalability and performance of the model. This article provides insights into the capabilities and limitations of CGCNN for predicting material properties, comparing its performance against traditional methods and other DL techniques. Several challenges, limitations, and potential sources of bias are associated with CGCNN. The first is data availability: Xie and Grossman [22] and Choudhary and DeCost [30] noted that high-quality datasets of crystalline materials with accurately annotated properties may be limited in size and diversity. Data augmentation can help, supplementing existing datasets through symmetry operations, structural transformations, or the introduction of noise, while collaboration between research institutions, materials databases, and computational scientists can facilitate the creation and sharing of larger and more diverse datasets. A second challenge is interpretability. For example, Magar et al. tackle the difficulty of interpreting the decisions made by CGCNN models, particularly in complex crystal structures where the relationships between atoms are intricate [111]. 
Integrating attention mechanisms into the CGCNN architecture can help highlight important atomic interactions and improve model interpretability, and visualization techniques that represent the learned features and decision-making process of CGCNN models can further aid in understanding model behavior. Training CGCNN models can also be computationally costly, especially for large and complex crystal structures or deep neural network architectures. Model compression techniques, namely quantization, low-rank factorization, and pruning, may reduce the computational cost of CGCNN models without significantly compromising performance, and distributed computing frameworks and hardware accelerators (e.g., GPUs, TPUs) can speed up training, as investigated by Agrawal and Choudhary [112] and Hinton et al. [113]. Finally, CGCNN models may inherit limitations and biases from the training data, model architecture, and learning process. Techniques such as fairness-aware learning and bias correction can help identify and mitigate biases in CGCNN predictions, while rigorous cross-validation and sensitivity analysis can help assess the robustness of CGCNN models and identify potential sources of bias [114]. Addressing these challenges and limitations is crucial for advancing the effectiveness, reliability, and applicability of CGCNN models in predicting material properties accurately and responsibly.
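The symmetry-based augmentation mentioned earlier exploits the fact that rigid rotations (or reflections) of a structure leave all interatomic distances, and hence a distance-based crystal graph and its property label, unchanged. A minimal NumPy sketch, with toy coordinates standing in for a real structure:

```python
import numpy as np

def pairwise_distances(coords):
    """All interatomic distances for a set of Cartesian coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def augment(coords, seed=0):
    """Apply a random orthogonal transform (rotation/reflection).
    Distance-based graph features are invariant under it, so the
    original property label can be reused for the transformed copy,
    cheaply enlarging a training set."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # orthogonal Q factor
    return coords @ q

# Toy 3-atom structure (Cartesian coordinates in angstroms)
coords = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [0.0, 1.5, 0.0]])
rotated = augment(coords)
# Interatomic distances, and hence the crystal graph, are unchanged:
print(np.allclose(pairwise_distances(coords), pairwise_distances(rotated)))
```

Structural transformations that change distances (e.g., strains) instead require relabeling, which is why symmetry operations are the cheapest augmentation.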
7.3 Integration of CGCNN with generative models for inverse materials design
Recent advances in generative models, such as variational autoencoders (VAEs), generative adversarial networks, and diffusion probabilistic models, have opened promising avenues for the de novo design of crystal structures with targeted properties. While the CGCNN has demonstrated significant success in predicting material properties from crystal graphs [115,116], integrating CGCNN with generative models represents a powerful strategy to enable inverse design workflows in materials discovery. One prominent integration approach involves using CGCNN as a property predictor within reinforcement learning frameworks, where generative agents propose candidate crystal structures and receive feedback based on CGCNN-evaluated properties [117,118].
This strategy can guide the efficient exploration of desirable material spaces. Another promising method is incorporating CGCNN as a latent prior or constraint within generative architectures such as VAEs or diffusion models, thereby steering the generative process toward physically meaningful and property-optimized structures [119,120]. Hybrid frameworks have also been proposed, in which generative and predictive models are co-trained or sequentially fine-tuned to accelerate the discovery of materials with optimal functionalities [118,121,122]. Despite the potential advantages, several challenges must be addressed for effective integration. Generating valid, stable, and symmetry-consistent crystal graphs remains non-trivial, particularly given the complex periodic and relational nature of inorganic materials [123,124]. Furthermore, reconciling the supervised learning paradigm of CGCNN, which requires labeled datasets, with the often unsupervised or semi-supervised training schemes of generative models presents data alignment difficulties [118,122]. Finally, optimizing multiple conflicting objectives – such as formation energy, bandgap, and thermodynamic stability – within a single unified model remains an open problem. Nevertheless, recent developments, such as the Fourier-transformed crystal properties representation [115], crystal diffusion variational autoencoder [120], and MatterGen diffusion models [122], show that integrating graph-based property predictors like CGCNN into the generative loop is increasingly feasible and impactful. These innovations pave the way for property-driven, scalable, and interpretable materials discovery pipelines, enabling inverse design of novel inorganic materials with unprecedented efficiency.
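The predictor-in-the-loop strategy described above can be sketched in a few lines. In this toy Python example, `surrogate_bandgap` stands in for a trained CGCNN property predictor and `propose` for a generative model; both are hypothetical placeholders, not real models.

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate_bandgap(x):
    """Stand-in for a trained CGCNN predictor: maps a candidate's
    feature vector to a predicted bandgap (toy function only)."""
    return float(np.abs(np.sin(x).sum()))

def propose(n, dim=8):
    """Stand-in generator: proposes n random candidate feature vectors."""
    return rng.normal(size=(n, dim))

# Inverse-design loop: generate candidates, score each with the
# property predictor, and keep those closest to the target value.
target = 1.5  # desired bandgap (eV) in this toy setup
candidates = propose(200)
scores = np.array([abs(surrogate_bandgap(c) - target) for c in candidates])
best_idx = np.argsort(scores)[:5]   # indices of the 5 best matches
best = candidates[best_idx]
print(scores[best_idx])  # deviations from the target, smallest first
```

In a real pipeline, the scores would instead feed back into the generator as a reward or guidance signal, so the proposal distribution itself improves over iterations rather than relying on rejection filtering alone.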
8 Future directions and integration with other computational methods
As the field of materials informatics continues to evolve, several potential avenues for future research in CGCNN and material property prediction are emerging. First, there is a growing interest in enhancing the interpretability of CGCNN models. Incorporating interpretable AI methods, such as attention mechanisms or saliency maps, can offer insights into the learned features and the fundamental physics influencing material properties [125]. In the domain of crystalline materials, a persistent challenge is the accurate prediction of electronic structures, elastic properties, and related physical quantities. Existing graph learning frameworks often fall short in reliably predicting material properties such as bandgaps and bulk/shear moduli. Further advancements in graph-based DL frameworks necessitate innovative architectural designs that transcend real space, incorporating multiple tiers of material information, including local and/or global symmetries, topologies, orbitals, and reciprocal-space information [126]. Additionally, further exploration of uncertainty quantification methods within CGCNN frameworks can enable more reliable predictions by accounting for uncertainty in input data and model parameters [114]. Moreover, extending CGCNN models to predict multi-property relationships and exploring transfer learning approaches for knowledge transfer between different materials domains represent promising directions for future research [44,127]. Recent advancements in CGCNN and material property prediction have been driven by ongoing developments in DL techniques, data availability, and computational resources. One emerging trend is the integration of domain knowledge into CGCNN models through the incorporation of physical constraints and principles. This hybrid approach, often referred to as physics-informed ML, leverages domain-specific knowledge to guide model learning and improve prediction accuracy [128]. 
Another emerging trend is the utilization of GNNs for high-throughput screening of material properties, enabling the rapid identification of advanced materials with desired functionalities [129]. Furthermore, the democratization of materials data through open-access databases and collaborative initiatives is facilitating broader access to training datasets and fostering interdisciplinary collaborations in materials informatics research (Materials Project). Although large databases such as the Materials Project and OQMD have significantly advanced data-driven materials discovery, they exhibit notable biases that limit model generalizability. These include overrepresentation of certain material classes (e.g., oxides), underrepresentation of others (e.g., nitrides, halides), and a focus on equilibrium phases while neglecting metastable yet experimentally relevant materials. Additionally, structural simplifications, such as the omission of defects, disorder, and surface effects, reduce the realism of the data. Similar dataset limitations are seen in other materials research domains. For instance, in fused silica, DL models for detecting subsurface defects are constrained by datasets that fail to capture microstructural complexity [130]. Likewise, in lithium-rich manganese oxides, understanding oxygen-redox mechanisms is hindered by the lack of structurally diverse data reflecting real degradation pathways [131]. Addressing these gaps through better-curated, diverse datasets and the integration of experimental data is essential for improving the reliability and transferability of models like CGCNN. More broadly, integrating CGCNN with other computational techniques presents promising opportunities to enhance material property prediction and discovery. 
For example, combining CGCNN with ab initio methods such as DFT can incorporate quantum mechanical insights, improving prediction accuracy and reliability. Similarly, coupling CGCNN with molecular dynamics (MD) simulations allows for the exploration of dynamic properties and phase transitions in complex materials systems. Despite its successes, several limitations of CGCNN must be acknowledged. A primary concern is overfitting, particularly when models are trained on small or unbalanced datasets. A lack of diversity in crystal structures or material types can limit the model’s ability to generalize to unseen systems. Interpretability also remains a challenge. While recent advances in explainability offer some insights, the graph-based architecture of CGCNN often results in opaque decision-making processes, which may hinder its use in critical or high-stakes applications. Moreover, the computational intensity of CGCNN, especially when handling large-scale graphs or high-dimensional inputs, can limit scalability. However, by integrating CGCNN with complementary computational frameworks like DFT and MD, many of these challenges can be mitigated. These hybrid approaches promise not only improved accuracy but also deeper physical insight, making them valuable for accelerating materials discovery. Furthermore, integrating CGCNN with uncertainty quantification methods, such as Bayesian inference or ensemble learning, can provide probabilistic predictions and quantify the confidence of model predictions [112]. An essential requirement for deploying graph-based DL in materials science is the ability to extend learning models developed for small, regular graphs to larger and more complex material systems [132]. These systems include molecule–solid hybrid systems, surface and interface structures, and solids with point and line defects, among others. 
Techniques like transfer learning and graph attention, actively developed in other DL fields, warrant investigation to ascertain their applicability in this evolving research domain. Overall, the future of CGCNN and material property prediction lies in enhancing model interpretability, leveraging domain knowledge, and integrating with other computational methods for a more holistic understanding of material behavior. Ongoing developments and emerging trends in DL, data availability, and collaborative research efforts are expected to drive further advancements in the field.
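Ensemble-based uncertainty quantification, mentioned above, can be sketched in a few lines: train several independent models and treat the spread of their predictions as a confidence estimate. In this toy Python example, each "model" is a noisy linear function standing in for an independently trained network; the setup is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_predict(models, x):
    """Mean and standard deviation of the ensemble's predictions:
    the mean is the point estimate, the std a simple uncertainty proxy."""
    preds = np.array([m(x) for m in models])
    return float(preds.mean()), float(preds.std())

# Toy ensemble: each member is a slightly different linear predictor
# (a stand-in for independently trained networks with different seeds).
models = [
    (lambda x, w=rng.normal(1.0, 0.05), b=rng.normal(0.0, 0.02): w * x + b)
    for _ in range(10)
]

mean, std = ensemble_predict(models, 2.0)
# A large std flags inputs on which the ensemble disagrees, i.e.,
# predictions that should be treated with caution.
print(mean, std)
```

The same interface extends naturally to screening workflows: candidates with high ensemble disagreement can be routed to DFT validation instead of being accepted on the surrogate’s prediction alone.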
9 Conclusions
This comprehensive review offers a critical appraisal of the capabilities and limitations of CGCNN in the prediction of material properties, contextualizing its performance within the broader landscape of materials informatics and DL methodologies. Throughout the analysis, it becomes evident that the CGCNN model, tailored for crystals whose atoms possess lattice periodicity, has demonstrated remarkable versatility and effectiveness in various materials science applications. Owing to this, graph-based representations for materials science applications are increasingly leveraged in the evolution of CGCNN architectures. To date, CGCNN models have successfully predicted electronic properties such as bandgaps, carrier mobility, and electronic band structures, and their effectiveness has been established in practical scenarios, signifying an ever-growing role in accelerating materials discovery and design processes. Among the key advantages of CGCNN are its ability to learn directly from crystal structures without manual feature engineering, its scalability to large datasets, and its capacity to capture complex atomistic interactions through graph representations. These characteristics have made CGCNN a benchmark model in materials property prediction tasks. However, several limitations persist. CGCNN models may suffer from overfitting when trained on limited or imbalanced datasets, and their interpretability remains a challenge due to the black-box nature of DL models. Additionally, they can be computationally demanding and may require significant resources for training and hyperparameter tuning. Data heterogeneity and the lack of universal representations for diverse material systems further limit model generalizability. 
Despite these challenges, the future expansion of CGCNN in material property prediction hinges on improving model interpretability, integrating with other computational frameworks, and leveraging increasingly diverse and high-quality datasets from interdisciplinary fields in materials science. It is heartening to note that contemporary developments and emerging trends in DL, data availability, and collaborative research efforts are poised to drive progress in this field. By capitalizing on these advancements and fostering interdisciplinary collaboration, the future of CGCNN and its role in accelerating materials discovery and design processes appears promising. Ultimately, CGCNN stands as a testament to the power of DL in revolutionizing materials engineering and science, paving the way for innovative solutions and breakthroughs in diverse applications.
Acknowledgments
The authors acknowledge the support of the Glocal University 30 Project Fund of Gyeongsang National University in 2025.
Funding information: This work was supported by the Glocal University 30 Project Fund of Gyeongsang National University in 2025.
Author contributions: Nilam Qureshi: conceptualization, writing – original draft and investigation, Jinhong Bang: writing – review & editing, Jaehyeok Doh: writing – review and editing, supervision. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Conflict of interest: The authors state no conflict of interest.
Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
References
[1] Choudhary K, DeCost B, Chen C, Jain A, Tavazza F, Cohn R, et al. Recent advances and applications of deep learning methods in materials science. NPJ Comput Mater. 2022;8:59. doi:10.1038/s41524-022-00734-6.
[2] Saal JE, Kirklin S, Aykol M, Meredig B, Wolverton C. Materials design and discovery with high-throughput density functional theory: The open quantum materials database (OQMD). JOM. 2013;65:1501–9. doi:10.1007/s11837-013-0755-4.
[3] Curtarolo S, Hart GLW, Nardelli MB, Mingo N, Sanvito S, Levy O. The high-throughput highway to computational materials design. Nat Mater. 2013;12:191–201. doi:10.1038/nmat3568.
[4] Jain A, Ong SP, Hautier G, Chen W, Richards WD, Dacek S, et al. Commentary: The materials project: A materials genome approach to accelerating materials innovation. APL Mater. 2013;1:011002. doi:10.1063/1.4812323.
[5] Edwards KL. Materials and design: the art and science of material selection in product design. Mater Des. 2003;24:401–2. doi:10.1016/S0261-3069(03)00043-8.
[6] Deka MK, Roy M, Goswami L, Deka M. Artificial intelligence in material engineering: A review on applications of AI in material engineering. Adv Eng Mater. 2023;25:2300104. doi:10.1002/adem.202300104.
[7] Hill J, Mulholland G, Persson K, Seshadri R, Wolverton C, Meredig B. Materials science with large-scale data and informatics: Unlocking new opportunities. MRS Bull. 2016;41:399–409. doi:10.1557/mrs.2016.93.
[8] Saad Y, Gao D, Ngo T, Bobbitt S, Chelikowsky JR, Andreoni W. Data mining for materials: Computational experiments with AB compounds. Phys Rev B Condens Matter Mater Phys. 2012;85:104104. doi:10.1103/PhysRevB.85.104104.
[9] Šimůnek A, Vackář J. Hardness of covalent and ionic crystals: First-principle calculations. Phys Rev Lett. 2006;96:085501. doi:10.1103/PhysRevLett.96.085501.
[10] Booth GH, Grüneis A, Kresse G, Alavi A. Towards an exact description of electronic wavefunctions in real solids. Nature. 2013;493:365–70. doi:10.1038/nature11770.
[11] Soler JM, Artacho E, Gale JD, García A, Junquera J, Ordejón P, et al. The Siesta method for ab initio order-N materials simulation. J Phys: Condens Matter. 2001;14:2745. doi:10.1088/0953-8984/14/11/302.
[12] Xu Z. Density functional perturbation theory and adaptively compressed polarizability operator. PhD thesis. UC Berkeley; 2019.
[13] Schleder GR, Padilha ACM, Acosta CM, Costa M, Fazzio A. From DFT to machine learning: Recent approaches to materials science - A review. J Phys Mater. 2019;2:032001. doi:10.1088/2515-7639/ab084b.
[14] Seko A, Togo A, Hayashi H, Tsuda K, Chaput L, Tanaka I. Prediction of low-thermal-conductivity compounds with first-principles anharmonic lattice-dynamics calculations and Bayesian optimization. Phys Rev Lett. 2015;115:205901. doi:10.1103/PhysRevLett.115.205901.
[15] Xue D, Balachandran PV, Hogden J, Theiler J, Xue D, Lookman T. Accelerated search for materials with targeted properties by adaptive design. Nat Commun. 2016;7:11241. doi:10.1038/ncomms11241.
[16] Isayev O, Oses C, Toher C, Gossett E, Curtarolo S, Tropsha A. Universal fragment descriptors for predicting properties of inorganic crystals. Nat Commun. 2017;8:15679. doi:10.1038/ncomms15679.
[17] Ghiringhelli LM, Vybiral J, Levchenko SV, Draxl C, Scheffler M. Big data of materials science: Critical role of the descriptor. Phys Rev Lett. 2015;114:105503. doi:10.1103/PhysRevLett.114.105503.
[18] Isayev O, Fourches D, Muratov EN, Oses C, Rasch K, Tropsha A, et al. Materials cartography: Representing and mining materials space using structural and electronic fingerprints. Chem Mater. 2015;27:735–43. doi:10.1021/cm503507h.
[19] Liu Y, Zhao T, Ju W, Shi S. Materials discovery and design using machine learning. J Materiomics. 2017;3:159–77. doi:10.1016/j.jmat.2017.08.002.
[20] Ulkir O, Akgun G, Karadag A. Mechanical behavior prediction of 3D-printed PLA/wood composites using artificial neural network and fuzzy logic. Polym Adv Technol. 2025;36:e70103. doi:10.1002/pat.70103.
[21] Li L, Yu H, Wang Z. Attention-based interpretable multiscale graph neural network for MOFs. J Chem Theory Comput. 2025;21:1369–81. doi:10.1021/acs.jctc.4c01525.
[22] Xie T, Grossman JC. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys Rev Lett. 2018;120:145301. doi:10.1103/PhysRevLett.120.145301.
[23] Choudhary K, Garrity KF, Reid ACE, DeCost B, Biacchi AJ, Hight Walker AR, et al. The joint automated repository for various integrated simulations (JARVIS) for data-driven materials design. NPJ Comput Mater. 2020;6:173. doi:10.1038/s41524-020-00440-1.
[24] Kirklin S, Saal JE, Meredig B, Thompson A, Doak JW, Aykol M, et al. The Open Quantum Materials Database (OQMD): Assessing the accuracy of DFT formation energies. NPJ Comput Mater. 2015;1:15010. doi:10.1038/npjcompumats.2015.10.
[25] Curtarolo S, Setyawan W, Hart GLW, Jahnatek M, Chepulskii RV, Taylor RH, et al. AFLOW: An automatic framework for high-throughput materials discovery. Comput Mater Sci. 2012;58:218–26. doi:10.1016/j.commatsci.2012.02.005.
[26] Ramakrishnan R, Dral PO, Rupp M, Von Lilienfeld OA. Quantum chemistry structures and properties of 134 kilo molecules. Sci Data. 2014;1:140022. doi:10.1038/sdata.2014.22.
[27] Draxl C, Scheffler M. NOMAD: The FAIR concept for big data-driven materials science. MRS Bull. 2018;43:676–82. doi:10.1557/mrs.2018.208.
[28] Wang R, Fang X, Lu Y, Yang CY, Wang S. The PDBbind database: Methodologies and updates. J Med Chem. 2005;48:4111–9. doi:10.1021/jm048957q.
[29] Zakutayev A, Wunder N, Schwarting M, Perkins JD, White R, Munch K, et al. An open experimental database for exploring inorganic materials. Sci Data. 2018;5:180053. doi:10.1038/sdata.2018.53.
[30] Choudhary K, DeCost B. Atomistic line graph neural network for improved materials property predictions. NPJ Comput Mater. 2021;7:185. doi:10.1038/s41524-021-00650-1.
[31] Battaglia PW, Hamrick JB, Bapst V, Sanchez-Gonzalez A, Zambaldi V, Malinowski M, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261; 2018.
[32] Cho H, Choi IS. Three-dimensionally embedded graph convolutional network (3DGCN) for molecule interpretation. arXiv preprint arXiv:1811.09794; 2018.
[33] Faber F, Lindmaa A, von Lilienfeld OA, Armiento R. Crystal structure representations for machine learning models of formation energies. Int J Quantum Chem. 2015;115:1094–101. doi:10.1002/qua.24917.
[34] Deml AM, O’Hayre R, Wolverton C, Stevanović V. Predicting density functional theory total energies and enthalpies of formation of metal-nonmetal compounds by linear regression. Phys Rev B. 2016;93:085142. doi:10.1103/PhysRevB.93.085142.
[35] Ward L, Liu R, Krishna A, Hegde VI, Agrawal A, Choudhary A, et al. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations. Phys Rev B. 2017;96:024104. doi:10.1103/PhysRevB.96.024104.
[36] Ward L, Agrawal A, Choudhary A, Wolverton C. A general-purpose machine learning framework for predicting properties of inorganic materials. NPJ Comput Mater. 2016;2:16028. doi:10.1038/npjcompumats.2016.28.
[37] Pilania G, Mannodi-Kanakkithodi A, Uberuaga BP, Ramprasad R, Gubernatis JE, Lookman T. Machine learning bandgaps of double perovskites. Sci Rep. 2016;6:19375. doi:10.1038/srep19375.
[38] Seko A, Maekawa T, Tsuda K, Tanaka I. Machine learning with systematic density-functional theory calculations: Application to melting temperatures of single- and binary-component solids. Phys Rev B Condens Matter Mater Phys. 2014;89:054303. doi:10.1103/PhysRevB.89.054303.
[39] Seko A, Hayashi H, Nakayama K, Takahashi A, Tanaka I. Representation of compounds for machine-learning prediction of physical properties. Phys Rev B. 2017;95:144110. doi:10.1103/PhysRevB.95.144110.
[40] Pilania G, Gubernatis JE, Lookman T. Structure classification and melting temperature prediction in octet AB solids via machine learning. Phys Rev B Condens Matter Mater Phys. 2015;91:214302. doi:10.1103/PhysRevB.91.214302.
[41] Kong CS, Broderick SR, Jones TE, Loyola C, Eberhart ME, Rajan K. Mining for elastic constants of intermetallics from the charge density landscape. Phys B Condens Matter. 2015;458:1–7. doi:10.1016/j.physb.2014.11.002.
[42] De Jong M, Chen W, Notestine R, Persson K, Ceder G, Jain A, et al. A statistical learning framework for materials science: Application to elastic moduli of k-nary inorganic polycrystalline compounds. Sci Rep. 2016;6:34256. doi:10.1038/srep34256.
[43] Furmanchuk A, Agrawal A, Choudhary A. Predictive analytics for crystalline materials: Bulk modulus. RSC Adv. 2016;6:95246–51. doi:10.1039/c6ra19284j.
[44] Mueller T, Gilad Kusne A, Ramprasad R. Machine learning in materials science: Recent progress and emerging applications. Rev Comput Chem. 2016;29:186–273. doi:10.1002/9781119148739.ch4.
[45] Butler KT, Davies DW, Cartwright H, Isayev O, Walsh A. Machine learning for molecular and materials science. Nature. 2018;559:547–55. doi:10.1038/s41586-018-0337-2.
[46] Scarselli F, Gori M, Tsoi AC, Hagenbuchner M, Monfardini G. The graph neural network model. IEEE Trans Neural Netw. 2009;20:61–80. doi:10.1109/TNN.2008.2005605.
[47] Defferrard M, Bresson X, Vandergheynst P. Convolutional neural networks on graphs with fast localized spectral filtering. Adv Neural Inf Process Syst. 2016;29. arXiv:1606.09375.
[48] Trinajstic N. Chemical graph theory. CRC Press; 2018. p. 352. doi:10.1201/9781315139111.
[49] Blatov VA. Voronoi-Dirichlet polyhedra in crystal chemistry: Theory and applications. Crystallogr Rev. 2004;10:249–318. doi:10.1080/08893110412331323170.
[50] Wu Y, Lian D, Xu Y, Wu L, Chen E. Graph convolutional networks with Markov random field reasoning for social spammer detection. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2020;34:1054–61. doi:10.1609/aaai.v34i01.5455.
[51] Sanchez-Gonzalez A, Heess N, Springenberg JT, Merel J, Riedmiller M, Hadsell R, et al. Graph networks as learnable physics engines for inference and control. In: International Conference on Machine Learning. PMLR; 2018. p. 4470–9.
[52] Battaglia PW, Pascanu R, Lai M, Rezende D, Kavukcuoglu K. Interaction networks for learning about objects, relations and physics. Adv Neural Inf Process Syst. 2016;29. arXiv:1612.00222.
[53] Fout A, Byrd J, Shariat B, Ben-Hur A. Protein interface prediction using graph convolutional networks. Adv Neural Inf Process Syst. 2017;30:1–10.
[54] Hamaguchi T, Oiwa H, Shimbo M, Matsumoto Y. Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach. arXiv preprint arXiv:1706.05674; 2017. doi:10.24963/ijcai.2017/250.
[55] Dai H, Khalil EB, Zhang Y, Dilkina B, Song L. Learning combinatorial optimization algorithms over graphs. Adv Neural Inf Process Syst. 2017;30. arXiv:1704.01665.
[56] Zhang Z, Cui P, Zhu W. Deep learning on graphs: A survey. IEEE Trans Knowl Data Eng. 2020;34:249–70. doi:10.1109/TKDE.2020.2981333.
[57] Wu Z, Pan S, Chen F, Long G, Zhang C, Yu PS. A comprehensive survey on graph neural networks. IEEE Trans Neural Netw Learn Syst. 2020;32:4–24. doi:10.1109/TNNLS.2020.2978386.
[58] Chami I, Abu-El-Haija S, Perozzi B, Ré C, Murphy K. Machine learning on graphs: A model and comprehensive taxonomy. J Mach Learn Res. 2020;23:1–64.
[59] Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. ArXiv Preprint ArXiv:160902907; 2016.Search in Google Scholar
[60] Veličković P, Cucurull G, Casanova A, Romero A, Liò P, Bengio Y. Graph attention networks. ArXiv Preprint ArXiv:171010903; 2017.Search in Google Scholar
[61] Hamilton WL, Ying R, Leskovec J. Inductive representation learning on large graphs. Adv Neural Inf Process Syst. 2017;30:1025–35.Search in Google Scholar
[62] Li Y, Tarlow D, Brockschmidt M, Zemel R. Gated graph sequence neural networks. ArXiv Preprint ArXiv:151105493; 2015.Search in Google Scholar
[63] Gilmer J, Schoenholz SS, Riley PF, Vinyals O, Dahl GE. Neural message passing for quantum chemistry. In International Conference on Machine Learning. PMLR; 2017:1263–72.Search in Google Scholar
[64] Lecun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44.10.1038/nature14539Search in Google Scholar PubMed
[65] Bronstein MM, Bruna J, LeCun Y, Szlam A, Vandergheynst P. Geometric deep learning: going beyond Euclidean data. IEEE Signal Process Mag. 2017;34:18–42.10.1109/MSP.2017.2693418Search in Google Scholar
[66] Zhang S, Tong H, Xu J, Maciejewski R. Graph convolutional networks: a comprehensive review. Comput Soc Netw. 2019;6:11.10.1186/s40649-019-0069-ySearch in Google Scholar PubMed PubMed Central
[67] Henaff M, Bruna J, LeCun Y. Deep convolutional networks on graph-structured data. ArXiv Preprint ArXiv:150605163; 2015.Search in Google Scholar
[68] Jain A, Hautier G, Moore CJ, Ping Ong S, Fischer CC, Mueller T, et al. A high-throughput infrastructure for density functional theory calculations. Comput Mater Sci. 2011;50:2295–310.10.1016/j.commatsci.2011.02.023Search in Google Scholar
[69] Faber FA, Lindmaa A, Von Lilienfeld OA, Armiento R. Machine learning energies of 2 million elpasolite (ABC2D6) crystals. Phys Rev Lett. 2016;117:135502.10.1103/PhysRevLett.117.135502Search in Google Scholar PubMed
[70] Sanyal S, Balachandran J, Yadati N, Kumar A, Rajagopalan P, Sanyal S, et al. MT-CGCNN: Integrating crystal graph convolutional neural network with multitask learning for material property prediction. ArXiv Preprint ArXiv:181105660; 2018.Search in Google Scholar
[71] Jiang H, Lin X, Wang L, Ren Y, Zhan S, Ma W. Predicting material properties by deep graph networks. Cryst Res Technol. 2022;57:2200064.10.1002/crat.202200064Search in Google Scholar
[72] Hellenbrandt M. The inorganic crystal structure database (ICSD) - Present and future. Crystallogr Rev. 2004;10:17–22.10.1080/08893110410001664882Search in Google Scholar
[73] Chen C, Ye W, Zuo Y, Zheng C, Ong SP. Graph networks as a universal machine learning framework for molecules and crystals. Chem Mater. 2018;31:3564–72.10.1021/acs.chemmater.9b01294Search in Google Scholar
[74] Korolev V, Mitrofanov A, Korotcov A, Tkachenko V. Graph convolutional neural networks as “general-purpose” property predictors: the universality and limits of applicability. J Chem Inf Model. 2020;60:22–8.10.1021/acs.jcim.9b00587Search in Google Scholar PubMed
[75] Louis S-Y, Zhao Y, Nasiri A, Wong X, Song Y, Liu F, et al. Graph convolutional neural networks with global attention for improved materials property prediction. Phys Chem Chem Phys. 2020;22:18141–8. 10.1039/D0CP01474E Search in Google Scholar
[76] Park CW, Wolverton C. Developing an improved crystal graph convolutional neural network framework for accelerated materials discovery. Phys Rev Mater. 2020;4:06381.10.1103/PhysRevMaterials.4.063801Search in Google Scholar
[77] Wang S, Ji Y, Liu J, Liu Z, Zhang X, Guo Y, et al. Integrating crystal structure and numerical data for predictive models of lithium-ion battery materials: A modified crystal graph convolutional neural networks approach. J Energy Storage. 2024;80:110220. doi:10.1016/j.est.2023.110220
[78] Zhou J, Cui G, Hu S, Zhang Z, Yang C, Liu Z, et al. Graph neural networks: A review of methods and applications. AI Open. 2020;1:57–81. doi:10.1016/j.aiopen.2021.01.001
[79] Wang R, Zhong Y, Bi L, Yang M, Xu D. Accelerating discovery of metal–organic frameworks for methane adsorption with hierarchical screening and deep learning. ACS Appl Mater Interfaces. 2020;12:52797–807. doi:10.1021/acsami.0c16516
[80] Qureshi N, Harpale K, Shinde M, Vutova K, More M, Kim T, et al. Hierarchical MoS2-based onion-flower-like nanostructures with and without seedpods via hydrothermal route exhibiting low turn-on field emission. J Electron Mater. 2019;48:1590–8. doi:10.1007/s11664-018-06908-7
[81] Qureshi N, Arbuj S, Shinde M, Rane S, Kulkarni M, Amalnerkar D, et al. Swift tuning from spherical molybdenum microspheres to hierarchical molybdenum disulfide nanostructures by switching from solvothermal to hydrothermal synthesis route. Nano Converg. 2017;4:25. doi:10.1186/s40580-017-0119-9
[82] Qureshi N, Choi CH, Doh J. Expediting high-yield MXene carbides and nitrides synthesis for next-generation 2D materials. Adv Mater Technol. 2024;9:2301611. doi:10.1002/admt.202301611
[83] Qureshi N, Lee S, Chaudhari R, Mane P, Pawar J, Chaudhari B, et al. Hydrothermal generation of 3-dimensional WO3 nanocubes, nanobars and nanobricks, their antimicrobial and anticancer properties. J Nanosci Nanotechnol. 2021;21:5337–43. doi:10.1166/jnn.2021.19450
[84] Jang H, Park YJ, Chen X, Das T, Kim MS, Ahn JH. Graphene-based flexible and stretchable electronics. Adv Mater. 2016;28:4184–202. doi:10.1002/adma.201504245
[85] Jaffari ZH, Abbas A, Umer M, Kim ES, Cho KH. Crystal graph convolution neural networks for fast and accurate prediction of adsorption ability of Nb2CTx towards Pb(ii) and Cd(ii) ions. J Mater Chem A. 2023;11:9009–18. doi:10.1039/D3TA00019B
[86] Durvasula H, Vrinda Kakarla S, Thazhemadam A, Roy R, Arya A. Prediction of material properties using crystal graph convolutional neural networks. In: ACM International Conference Proceeding Series. Association for Computing Machinery; 2022:68–73. doi:10.1145/3529399.3529411
[87] Karamad M, Magar R, Shi Y, Siahrostami S, Gates ID, Farimani AB. Orbital graph convolutional neural network for material property prediction. Phys Rev Mater. 2020;4:093801. doi:10.1103/PhysRevMaterials.4.093801
[88] Lam Pham T, Kino H, Terakura K, Miyake T, Tsuda K, Takigawa I, et al. Machine learning reveals orbital interaction in materials. Sci Technol Adv Mater. 2017;18:756–65. doi:10.1080/14686996.2017.1378060
[89] Schütt KT, Sauceda HE, Kindermans PJ, Tkatchenko A, Müller KR. SchNet – A deep learning architecture for molecules and materials. J Chem Phys. 2018;148:241722. doi:10.1063/1.5019779
[90] Grattarola D, Zambon D, Bianchi FM, Alippi C. Understanding pooling in graph neural networks. IEEE Trans Neural Netw Learn Syst. 2024;35:2708–18. doi:10.1109/TNNLS.2022.3190922
[91] Mesquita D, Souza AH, Kaski S. Rethinking pooling in graph neural networks. Adv Neural Inf Process Syst. 2020;33:2220–31.
[92] Ma Y, Wang S, Aggarwal CC, Tang J. Graph convolutional networks with eigenpooling. In: KDD '19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; 2019:723–31. doi:10.1145/3292500.3330982
[93] Takatsu H, Hernandez O, Yoshimune W, Prestipino C, Yamamoto T, Tassel C, et al. Cubic lead perovskite PbMoO3 with anomalous metallic behavior. Phys Rev B. 2017;95:155105. doi:10.1103/PhysRevB.95.155105
[94] Lupo Pasini M, Zhang P, Temple Reeve S, Youl Choi J. Multi-task graph neural networks for simultaneous prediction of global and atomic properties in ferromagnetic systems. Mach Learn Sci Technol. 2022;3:025007. doi:10.1088/2632-2153/ac6a51
[95] Palizhati A, Zhong W, Tran K, Back S, Ulissi ZW. Toward predicting intermetallics surface properties with high-throughput DFT and convolutional neural networks. J Chem Inf Model. 2019;59:4742–9. doi:10.1021/acs.jcim.9b00550
[96] Kim M, Yeo BC, Park Y, Lee HM, Han SS, Kim D. Artificial intelligence to accelerate the discovery of N2 electroreduction catalysts. Chem Mater. 2020;32:709–20. doi:10.1021/acs.chemmater.9b03686
[97] Noh J, Gu GH, Kim S, Jung Y. Uncertainty-quantified hybrid machine learning/density functional theory high throughput screening method for crystals. J Chem Inf Model. 2020;60:1996–2003. doi:10.1021/acs.jcim.0c00003
[98] Gu GH, Noh J, Kim S, Back S, Ulissi Z, Jung Y. Practical deep-learning representation for fast heterogeneous catalyst screening. J Phys Chem Lett. 2020;11:3185–91. doi:10.1021/acs.jpclett.0c00634
[99] Boyd PG, Chidambaram A, García-Díez E, Ireland CP, Daff TD, Bounds R, et al. Data-driven design of metal–organic frameworks for wet flue gas CO2 capture. Nature. 2019;576:253–6. doi:10.1038/s41586-019-1798-7
[100] Zhan L, Ye D, Qiu X, Cen Y. Discovery of stable hybrid organic–inorganic double perovskites for high-performance solar cells via machine-learning algorithms and crystal graph convolution neural network method. arXiv preprint arXiv:2308.00490; 2023.
[101] Karaguesian J, Lunger JR, Shao-Horn Y, Gomez-Bombarelli R. Crystal graph convolutional neural networks for per-site property prediction. In: Fourth Workshop on Machine Learning and the Physical Sciences; 2021:1710.10324.
[102] Cui Z, Henrickson K, Ke R, Wang Y. Traffic graph convolutional recurrent neural network: a deep learning framework for network-scale traffic learning and forecasting. IEEE Trans Intell Transp Syst. 2020;21:4883–94. doi:10.1109/TITS.2019.2950416
[103] Goyal P, Ferrara E. Graph embedding techniques, applications, and performance: A survey. Knowl Based Syst. 2018;151:78–94. doi:10.1016/j.knosys.2018.03.022
[104] Mikolov T, Chen K, Corrado G, Dean J. Distributed representations of words and phrases and their compositionality. Adv Neural Inf Process Syst. 2013;26. arXiv:1310.4546.
[105] Perozzi B, Al-Rfou R, Skiena S. DeepWalk: Online learning of social representations. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery; 2014:701–10. doi:10.1145/2623330.2623732
[106] Grover A, Leskovec J. node2vec: Scalable feature learning for networks. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery; 2016:855–64. doi:10.1145/2939672.2939754
[107] Tang J, Qu M, Wang M, Zhang M, Yan J, Mei Q. LINE: Large-scale information network embedding. In: Proceedings of the 24th International Conference on World Wide Web (WWW 2015). Association for Computing Machinery; 2015:1067–77. doi:10.1145/2736277.2741093
[108] Yang C, Liu Z, Zhao D, Sun M, Chang EY. Network representation learning with rich text information. In: IJCAI; 2015:2111–7.
[109] Bishnoi B. Materials informatics: an algorithmic design rule. arXiv preprint arXiv:2305.03797; 2023.
[110] Omee SS, Louis SY, Fu N, Wei L, Dey S, Dong R, et al. Scalable deeper graph neural networks for high-performance materials property prediction. Patterns. 2022;3:100491. doi:10.1016/j.patter.2022.100491
[111] Magar R, Wang Y, Barati Farimani A. Crystal twins: self-supervised learning for crystalline material property prediction. NPJ Comput Mater. 2022;8:231. doi:10.1038/s41524-022-00921-5
[112] Agrawal A, Choudhary A. Deep materials informatics: Applications of deep learning in materials science. MRS Commun. 2019;9:779–92. doi:10.1557/mrc.2019.73
[113] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531; 2015.
[114] Abdar M, Pourpanah F, Hussain S, Rezazadegan D, Liu L, Ghavamzadeh M, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Inf Fusion. 2021;76:243–97. doi:10.1016/j.inffus.2021.05.008
[115] Ren Z, Tian SIP, Noh J, Oviedo F, Xing G, Li J, et al. An invertible crystallographic representation for general inverse design of inorganic crystals with targeted properties. Matter. 2022;5:314–35. doi:10.1016/j.matt.2021.11.032
[116] Gibson J, Hire A, Hennig RG. Data-augmentation for graph neural network learning of the relaxed energies of unrelaxed structures. NPJ Comput Mater. 2022;8:211. doi:10.1038/s41524-022-00891-8
[117] Cheng M, Fu C-L, Okabe R, Chotrattanapituk A, Boonkird A, Hung NT, et al. AI-driven materials design: a mini-review. arXiv preprint arXiv:2502.02905; 2025.
[118] Alverson M, Baird SG, Murdock R, Johnson J, Sparks TD. Generative adversarial networks and diffusion models in material discovery. Digital Discovery. 2024;3:62–80. doi:10.1039/D3DD00137G
[119] Li W, Chen P, Xiong B, Liu G, Dou S, Zhan Y, et al. Deep learning modeling strategy for material science: from natural materials to metamaterials. J Phys Mater. 2022;5:014003. doi:10.1088/2515-7639/ac5914
[120] Pakornchote T, Choomphon-anomakhun N, Arrerut S, Atthapak C, Khamkaeo S, Chotibut T, et al. Diffusion probabilistic models enhance variational autoencoder for crystal structure generative modeling. Sci Rep. 2024;14:1275. doi:10.1038/s41598-024-51400-4
[121] Long T, Fortunato NM, Opahle I, Zhang Y, Samathrakis I, Shen C, et al. Constrained crystals deep convolutional generative adversarial network for the inverse design of crystal structures. NPJ Comput Mater. 2021;7:66. doi:10.1038/s41524-021-00526-4
[122] Zeni C, Pinsler R, Zügner D, Fowler A, Horton M, Fu X, et al. A generative model for inorganic materials design. Nature. 2025;639:624–32. doi:10.1038/s41586-025-08628-5
[123] Li C-N, Liang H-P, Zhao B-Q, Wei S-H, Zhang X. Machine learning assisted crystal structure prediction made simple. J Mater Inform. 2024;4:15. doi:10.20517/jmi.2024.18
[124] Xiao H, Li R, Shi X, Chen Y, Zhu L, Chen X, et al. An invertible, invariant crystal representation for inverse design of solid-state materials using generative deep learning. Nat Commun. 2023;14:7027. doi:10.1038/s41467-023-42870-7
[125] Samek W, Wiegand T, Müller K-R. Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296; 2017.
[126] Gong W, Yan Q. Graph-based deep learning frameworks for molecules and solid-state materials. Comput Mater Sci. 2021;195:110332. doi:10.1016/j.commatsci.2021.110332
[127] Madras D, Creager E, Pitassi T, Zemel R. Learning adversarially fair and transferable representations. In: International Conference on Machine Learning; 2018:3384–93.
[128] Hao Z, Liu S, Zhang Y, Ying C, Feng Y, Su H, et al. Physics-informed machine learning: a survey on problems, methods and applications. arXiv preprint arXiv:2211.08064; 2022.
[129] Oviedo F, Ferres JL, Buonassisi T, Butler KT. Interpretable and explainable machine learning for materials science and chemistry. Acc Mater Res. 2022;3:597–607. doi:10.1021/accountsmr.1c00244
[130] Cao H, Peng X, Shi F, Tian Y, Kong L, Chen M, et al. Advances in subsurface defect detection techniques for fused silica optical components: A literature review. J Mater Res Technol. 2025;35:809–35. doi:10.1016/j.jmrt.2025.01.045
[131] Jin Y, Zhao Z, Ren P-G, Zhang B, Chen Z, Guo Z, et al. Recent advances in oxygen redox activity of lithium-rich manganese-based layered oxides cathode materials: mechanism, challenges and strategies. Adv Energy Mater. 2024;14:2402061. doi:10.1002/aenm.202402061
[132] Meuwly M. Machine learning for chemical reactions. Chem Rev. 2021;121:10218–39. doi:10.1021/acs.chemrev.1c00033
© 2025 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.