
Random forest and artificial neural network-based tsunami forests classification using data fusion of Sentinel-2 and Airbus Vision-1 satellites: A case study of Garhi Chandan, Pakistan

An erratum for this article can be found here: https://doi.org/10.1515/geo-2022-0737
Shabnam Mateen, Narissara Nuthammachot, and Kuaanan Techato
Published/Copyright: February 24, 2024

Abstract

This article proposes the random forest algorithm (RFA), a multi-layer perceptron (MLP) artificial neural network (ANN), and the support vector machine (SVM) method for classifying the fused data of the Sentinel-2, Landsat-8, and Airbus Vision-1 satellites for the years 2016 and 2023. The first variant of fusion is performed for Sentinel-2 and Landsat-8 data to sharpen it to 10 m spatial resolution, while in the second case, Sentinel-2 and Airbus Vision-1 data are fused together to achieve a spatial resolution of 3.48 m. The MLP-ANN, SVM, and RFA methods are applied to the sharpened datasets for the years 2023 and 2016, having spatial resolutions of 3.48 and 10 m, respectively, and a detailed comparative analysis is performed. Google Earth Engine is utilized for ground data validation of the classified samples. An enhanced convergence time of 100 iterations was achieved using MLP-ANN for the classification of the dataset at 3.48 m spatial resolution, while the same method took 300 iterations with the dataset at 10 m spatial resolution to reach the minimum Kappa hat threshold of 0.85. At 10 m spatial resolution, the MLP-ANN achieved an overall accuracy of 96.6% and a Kappa hat score of 0.94, while at 3.48 m spatial resolution, the aforementioned scores improved to 98.5% and 0.97, respectively. Similarly, at 10 m spatial resolution, the RFA achieved an overall accuracy of 92.6% and a Kappa hat score of 0.88, while at 3.48 m spatial resolution, the abovementioned scores improved to 96.5% and 0.95, respectively. In view of the foregoing, the MLP-ANN showed better performance compared with the RFA method.

Graphical abstract

1 Introduction

In recent times, due to natural disasters, human encroachments, and environmental factors, mega-scale landscape changes have been observed around the world [1,2]. Remote sensing technologies are mature enough to monitor such changes, and by using different statistical models the impact of such changes on human populations can be predicted so that planning can be done in advance [3]. Satellite programs are essential sources of remotely sensed data, and in this regard several popular programs are run by the European Space Agency and by United States government-led agencies such as the National Aeronautics and Space Administration (NASA). Some of the popular satellite programs include the Landsat program (https://landsat.gsfc.nasa.gov/) and the Copernicus Program (https://www.copernicus.eu/en/about-copernicus). The Landsat program operates various satellites; the first mission, Landsat 1, was launched in 1972, and Landsat 9 is currently also operational [4]. Similarly, the Copernicus Program has launched several satellite missions such as Sentinel-1A, Sentinel-2A, Sentinel-2B, and so on. Both the Landsat and Copernicus Programs offer free access to multispectral images around the world [5]. For the collection of remote sensing data, it is important to consider two factors: the spectral bands and the spatial resolution of the bands. The recent Sentinel missions offer free data access with 13 multispectral bands at spatial resolutions of 10, 20, and 60 m [6], while the Landsat-8 and Landsat-9 missions offer 11 multispectral bands with spatial resolutions of 15, 30, and 100 m [7]. Apart from the aforementioned satellite programs, several other missions such as the UK Airbus Vision-1 program [8] and the PlanetScope mission [9] offer four multispectral bands with highest spatial resolutions of 3.48 and 3 m, respectively. However, such programs do not offer free data access to users and the research community; data access can instead be approved through special requests to the program owners. Apart from the satellite missions, data fusion of two or more satellites for band sharpening is an important research field. Data fusion and sharpening techniques are utilized to enhance the spatial resolution of spectrally rich multispectral data [10] and to improve statistical classification indices. In the existing literature, data fusion methods between various satellite missions have been reported, such as fusion of Sentinel-1 and Sentinel-2 data [11,12,13], Sentinel-2 and PlanetScope data fusion to a spatial resolution of 3 m [14], Sentinel-2 and UAV data fusion [15], and Sentinel-2 and Landsat-8 data fusion [16,17]. Several data fusion methods have been reported in the literature, such as a neural network for fusion of Sentinel-2 and Landsat-8 data [17], a deep learning-based fusion method [18], a model-based method [19], and a spatial-spectral fusion model-based method for data fusion of the Sentinel-2 and PlanetScope satellites [20]. The method reported by Sigurdsson et al. [19] was utilized in the authors' previous work [21], where a fusion model was created for Sentinel-2 and Landsat-8 images to sharpen all bands of the Sentinel-2 satellite to a spatial resolution of 10 m. In the study by Zhao and Liu [20], an effective method based on spatial-spectral fusion is utilized for sharpening ten bands of Sentinel-2 data to a spatial resolution of 3 m.

Preprocessing of remote sensing data is followed by image classification. Classifiers reported in the literature fall into supervised, unsupervised [22], and deep learning-based categories. Unsupervised image classification methods include K-means clustering [23], spectral clustering, fuzzy C-means, and PCA [24,25]. Unsupervised classification suffers a performance tradeoff when the input data are large. Supervised classification uses labeled input data to train classifiers, with the support vector machine (SVM) [26] being the most common method. Other popular classifiers include decision trees, the random forest algorithm (RFA) [27,28], and deep learning/AI-based classifiers such as the artificial neural network (ANN), fuzzy logic, and the convolutional neural network (CNN) [29]. This article uses multi-layer perceptron (MLP)-ANN and RFA classifiers, so the literature review focuses on these two methods.

RFA is a type of supervised classification method based on building multiple random decision trees for random samples of the input dataset [30]. RFA offers several inherent benefits such as low computational cost, an excellent convergence rate, and avoidance of over- and under-fitting in the estimation process [31]. In the study by Gislason et al. [31], RFA is proposed to estimate land cover changes, while the robustness of the RFA method is tested in [32]. Additional applications of the RFA method are reported by Belgiu and Drăguţ [33], while the accuracy assessment of the RFA method for land cover changes is discussed by Rodriguez-Galiano et al. [34]. The RFA method applies equally to pixel-based and object-based classification and analysis [35]. As discussed above, the most commonly utilized deep learning and AI approaches are ANN-, CNN-, and fuzzy logic-based classifiers. In the study by Vali et al. [36], a dynamic learning ANN method is proposed for the estimation of land cover changes using multispectral satellite data. A detailed study on the benefits of MLP-ANN and radial basis function ANN networks for image classification was reported by Foody [37]. A survey of the advantages and disadvantages of ANN-based classifiers and conventional classifiers is detailed by Huang and Lippmann [38]. The basic theory of finite state automata and the basic structure of ANN networks are elaborated by Cleeremans et al. [39]. In the study by Decatur [40], the author utilized an ANN for terrain classification, while in the study by Kulkarni and Lulla [41], a combined fuzzy-ANN supervised classifier is introduced for the classification of multispectral images. An approach for split-and-merge segmentation of aerial images is proposed by Laprade [42], while convergence results for the fuzzy C-means clustering algorithm are reported by Hathaway and Bezdek [43]. One of the biggest disadvantages of any deep learning classifier is the tradeoff between convergence rate and classification accuracy when a medium-resolution input dataset is utilized [44]. In other words, deep learning classifiers tend to converge faster on higher spatial resolution imagery than on lower spatial resolution imagery. Furthermore, Thakur and Manekar [45] reported an adaptive neuro-fuzzy inference system (ANFIS) for image classification, while a fuzzy logic system is reported for the land cover classification problem in the study by Taufik et al. [46]. In previous literature [47,48], automated ANN- and ANFIS-based classification interfaces are developed for land cover change detection and classification.

In this work, the study area is chosen within one of the regions of the Billion Tree Tsunami Project of the Pakistani government (https://en.wikipedia.org/wiki/Billion_Tree_Tsunami). Since the project was initiated in 2014 and only recently completed, no detailed analysis using remote sensing methods is yet available.

Based on the literature survey, the main objectives of this article include the following:

  1. Two variants of data fusion are performed. The first data fusion is performed [20] for the Sentinel-2 and Vision-1 satellites, to sharpen the dataset to a spatial resolution of 3.48 m. The second variant of data fusion was already reported in the authors' previous work [21], and by utilizing the same, data for the years 2016–2023 are sharpened to a spatial resolution of 10 m.

  2. MLP-ANN, SVM, and RFA classifiers are utilized with input dataset at spatial resolutions of 10 m (2016–2023) to estimate the temporal changes in our study area.

  3. MLP-ANN, SVM, and RFA classifiers are tested with datasets at 3.48 m and 10 m spatial resolutions (2023), to analyze the effect of spatial resolution enhancement on the accuracy enhancement of the classifiers.

The article is arranged as follows: Section 2 describes the materials and methods. The proposed methodology is presented in Section 3, and the results and discussion are presented in Section 4. Finally, the article is concluded and future aspects are included in Section 5.

2 Materials and methods

This section presents an overview of the study area, satellite imagery, data fusion, sharpening method, and classification method based on MLP-ANN.

2.1 Study area

Figure 1 displays the administrative map of the Khyber Pakhtunkhwa (KP) province of Pakistan, with the proposed study area (Garhi Chandan) at 33°50′ North and 71°42′ East near the province's capital. In 2014, the Government of KP began the Billion Tree Tsunami forest plantation drive in Garhi Chandan. The region experiences extreme temperatures in both summer and winter: in summer, the average temperature is 40–45°C, while in winter, it is 4–10°C. The crops and vegetation in the study area rely on the annual rainfall, as there are no canals for irrigation and the underground water is only suitable for drinking [21].

Figure 1: (1) Map of Pakistan; (2) Map of KP province; (3) Study area.

2.2 Airbus Vision-1 data collection

The Airbus Vision-1 satellite data were captured on 15th February 2023 and shared after approval of the authors' request # RITM0105689 - UM-LL Project Start Up Id. PP0089618 (data acquisition) by the Satellite Products & Services Operator, Intelligence Airbus Defence and Space, United Kingdom. The data were captured with a viewing angle of 18.863° and 0% cloud cover with some haze. Further details about the Vision-1 data are available at https://earth.esa.int/eogateway/documents/20142/37627/Airbus-Vision-1-Imagery-user-guide-V1.1.pdf. Vision-1 satellite imagery consists of four bands, namely red, green, blue, and near infrared (NIR), with a spatial resolution of 3.48 m, while the additional panchromatic band has a spatial resolution of 0.87 m. In this work, the panchromatic band is not utilized for data fusion. Table 1 shows the wavelength ranges and spatial resolutions of all bands.

Table 1

Band details of Vision-1 satellite [8]

Band Wavelength (nm) Resolution (m)
Panchromatic 450–650 0.87
Blue 440–510 3.48
Green 510–590 3.48
Red 600–670 3.48
NIR 760–910 3.48

2.3 Sentinel-2 data collection

The Sentinel-2 satellite data were captured for the years 2016–2023 using the semi-automatic classification plugin (SCP) (https://plugins.qgis.org/plugins/SemiAutomaticClassificationPlugin/) installed in QGIS version 3.22.11. The data were captured at 0.01% cloud cover. Sentinel-2 satellite imagery consists of 13 bands: four bands (red, green, blue, and NIR) have a spatial resolution of 10 m; six bands (vegetation red edge 1, vegetation red edge 2, vegetation red edge 3, NIR narrow, shortwave infrared-1, and shortwave infrared-2) have a spatial resolution of 20 m; and the remaining three bands (coastal aerosol, water vapor, and SWIR-cirrus) have a spatial resolution of 60 m. Further details about the bands of the Sentinel-2 satellites are available at https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/bands/.

2.4 Image fusion and sharpening

In this work, two variants of data fusion are performed. The first data fusion is performed [20] for the Sentinel-2 and Vision-1 satellites, to sharpen the dataset to a spatial resolution of 3.48 m for the year 2023. The second variant of data fusion was already performed in the authors' previous work [21], and by utilizing the same, data for the years 2016–2023 are sharpened to a spatial resolution of 10 m. Here, only the details of the data fusion between the Sentinel-2 and Vision-1 satellites are discussed. As described in Section 2.2, the Airbus Vision-1 satellite has four bands, namely blue, green, red, and NIR, with a spatial resolution of 3.48 m, plus an additional panchromatic band at a spatial resolution of 0.87 m. The Sentinel-2 satellite, in contrast, contains 13 spectral bands, but the highest spatial resolution among these bands is 10 m. It is therefore beneficial to work towards fusion of data from two or more satellites. Inspired by the study by Zhao and Liu [20], this work applies a spatial-spectral fusion method to the Airbus Vision-1 and Sentinel-2 satellites. A block diagram explaining the spatial-spectral fusion method is shown in Figure 2. As shown in the figure, the following steps are utilized.

Figure 2: Sentinel-2–Airbus Vision-1 band fusion/sharpening based on the concepts presented in [20].

Step 1: As a first step, input dataset is prepared. The Airbus Vision-1 satellite bands at a spatial resolution of 3.48 m are stacked together, while for Sentinel-2, 10 and 20 m bands are stacked separately.

Step 2: In the second step, Sentinel-2 bands are up-sampled to a spatial resolution of 3.48 m.

Step 3: In the third step, spectral mapping is used to create ten simulated bands from Vision-1 (3.48 m) to match the spectral bands of Sentinel-2 for band-to-band fusion.

Step 4: In the fourth step, band-to-band fusion is performed; the synthetic image has the spatial detail of Vision-1 at 3.48 m resolution and the spectral characteristics of Sentinel-2 in the ten bands.

Following the study by Zhao and Liu [20], a linear transformation method is utilized to convert the four bands of the Airbus Vision-1 satellite into ten bands so that it has the same number of bands as the Sentinel-2 satellite:

(1) V(k,σ) = L × V(k,μ) + e,

where L represents the spectral mapping matrix, e is the error term, V(k,μ) denotes the input data of the Vision-1 satellite, and V(k,σ) denotes the mapped Vision-1 bands, with k representing the spatial resolution of 3.48 m. Moreover, in equation (1), μ = 4, which corresponds to the four spectral bands of the Vision-1 satellite, while σ = 10 represents the ten transformed Vision-1 bands, matching the number of Sentinel-2 bands. Zhao and Liu [20] reported that the spectral mapping matrix L is estimated using the least squares method, and the reader is referred to the aforementioned cited work for details. The spectral correlation is computed using the following expression [20]:

(2) C̄ = argmax_m corr(V_m(k,μ), S_m(k,σ)).

All parameters of equation (2) are defined in the study by Zhao and Liu [20]. After the spectral mapping and correlation, data fusion is performed between the Airbus Vision-1 and Sentinel-2 data. As a final step, similar-neighbor search and weight optimization algorithms are utilized to reconstruct the ten sharpened Sentinel-2 bands; the mathematical formulations are reported in the study by Zhao and Liu [20]. To perform the image sharpening, the algorithms are written as functions compiled in MATLAB R2020b.
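For illustration, the least squares estimation of the spectral mapping matrix L in equation (1) can be sketched as follows. This is a minimal sketch in Python, assuming co-registered arrays of the four Vision-1 bands and the ten up-sampled Sentinel-2 bands; it is not the authors' MATLAB implementation and omits the subsequent neighbor search and weight optimization steps.

```python
import numpy as np

def estimate_spectral_mapping(vision1, sentinel2_up):
    """Least-squares estimate of the spectral mapping matrix L in equation (1).

    vision1      : array (rows, cols, 4)  -- Vision-1 blue, green, red, NIR at 3.48 m
    sentinel2_up : array (rows, cols, 10) -- Sentinel-2 bands up-sampled to 3.48 m
    Returns the mapping matrix (with bias term) and the ten simulated bands.
    """
    n_px = vision1.shape[0] * vision1.shape[1]
    V = vision1.reshape(n_px, 4).astype(np.float64)
    S = sentinel2_up.reshape(n_px, 10).astype(np.float64)

    # Append a column of ones so the offset/error term e is absorbed in the fit.
    V_aug = np.hstack([V, np.ones((n_px, 1))])

    # Solve V_aug @ L ≈ S in the least-squares sense.
    L, *_ = np.linalg.lstsq(V_aug, S, rcond=None)

    simulated = (V_aug @ L).reshape(vision1.shape[0], vision1.shape[1], 10)
    return L, simulated
```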

2.5 Classification methods

In this research work, an ANN is utilized for forest area classification for the Billion Tree Tsunami plantation drive started by the government of Khyber Pakhtunkhwa province, Pakistan, in 2014. For the comparative study, the RFA and SVM methods are utilized. The ANN and RFA methods are discussed here in further detail.

2.5.1 MLP-ANN

An MLP-ANN is a supervised classification/regression method that can be effectively utilized to estimate both linear and nonlinear data functions. Figure 3a shows the working diagram of MLP-ANN networks. As shown in the figure, the MLP-ANN network consists of an input layer, followed by hidden layers, and finally the output data are collected from the output layer.

Figure 3: (a) MLP-ANN working diagram and (b) ensemble learning utilized in the RFA implementation.

Let an input feature set function F(·): T^D1 → T^D2 be learned using the training dataset, where D1 represents the input data dimension and D2 represents the dimension of the output dataset. The dimension of the training dataset must be the same as D1, while for a classification problem, D1 = D2. The input layer consists of input neurons [F_1, F_2, …, F_n] representing the input feature set. There may be a single hidden layer or multiple hidden layers, depending on the application and on the optimization of the layers and their neurons. Let [w_1, w_2, …, w_n] represent the weights; each neuron in the hidden layer then computes the weighted sum w_1 F_1 + w_2 F_2 + … + w_n F_n. This linear weighted summation is passed through a nonlinear activation function; several nonlinear activation functions such as the sigmoid, hyperbolic tangent, and signum functions are utilized. If an MLP-ANN has more than one hidden layer, the same procedure is repeated for each layer.

Finally, the output layers accept the weighted sum of each neuron from the last hidden layer and an output feature set is generated.
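As an illustration of the MLP-ANN configuration used later in Section 4.2 (two hidden layers with 10 and 20 neurons, logistic activation, 500 iterations), a minimal scikit-learn sketch is given below. The classification in this work was performed with the SCP plugin in QGIS, so this is only an illustrative stand-in, and the random arrays merely take the place of the stacked pixel spectra and training labels.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score
import numpy as np

# Placeholder data: X stands in for pixel spectra (n_pixels, n_bands),
# y for the forest / bareland / vegetation class labels.
rng = np.random.default_rng(0)
X = rng.random((900, 10))
y = rng.integers(0, 3, 900)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Two hidden layers (10 and 20 neurons), logistic activation, 500 iterations,
# mirroring the configuration described in Section 4.2.
mlp = MLPClassifier(hidden_layer_sizes=(10, 20), activation="logistic",
                    max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
pred = mlp.predict(X_test)
print(accuracy_score(y_test, pred), cohen_kappa_score(y_test, pred))
```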

2.5.2 RFA

RFA is a supervised classification method which divides the input dataset into n random samples. Then, for the n random samples, n decision trees are formed. The outputs of the decision trees are compared with one another and a final output is generated based on majority voting or averaging. Figure 3b shows a block diagram of the implementation flow chart of the RFA algorithm. Multiple decision trees offer several advantages such as enhanced accuracy and robustness. The boosting and bagging ensemble learning processes serve as fundamental units in the implementation of the RFA method; however, boosting suffers from overfitting problems in the estimated dataset. In the bagging process, the aforementioned problem is solved by utilizing the concept of integrated models and by reducing the variance. As given in Figure 3b, the following steps explain the RFA method (a minimal code sketch follows these steps).

Figure 4: Accuracy assessment test.

Step 1: Input dataset (F) is divided into n random samples [ F 1 , F 2 F n ]

Step 2: For n random samples [ F 1 , F 2 F n ] , n decision trees are constructed

Step 3: The outputs of the n decision trees are combined and voting is performed

Step 4: Majority voting or averaging is utilized for classification and regression, respectively.
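A minimal sketch of Steps 1–4, assuming a pixel feature matrix and class labels, is given below; scikit-learn's RandomForestClassifier performs the bootstrap sampling, tree construction, and majority voting internally. The 25 trees match the RFA configuration reported in Section 4.2; the arrays are placeholders, not the study's data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
import numpy as np

# Placeholder feature set F: pixel spectra and forest / bareland / vegetation labels.
rng = np.random.default_rng(1)
X = rng.random((900, 10))
y = rng.integers(0, 3, 900)

# 25 trees, matching the RFA configuration reported in Section 4.2.
rf = RandomForestClassifier(n_estimators=25, random_state=0)
rf.fit(X[:600], y[:600])                      # Steps 1-2: bootstrap samples and trees
print(accuracy_score(y[600:], rf.predict(X[600:])))   # Steps 3-4: majority vote
```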

2.6 Training labels and accuracy assessment of the classifiers

Figure 4 shows the steps involved in performing the accuracy assessment test of the classified data. The SCP (SCP-7.10.11) reported by Congedo [49] is utilized for the classification and the accuracy assessment test. To classify the dataset, three training labels/classes, namely forest, bareland, and vegetation, are utilized. For each class, 30 samples are utilized, giving 90 training labels in total. After the data are classified, the classified area for each class is calculated from the classification report generated using the SCP plugin. As shown in Figure 4, for the accuracy assessment test, a new set of labels/training classes is generated based on the following mathematical formulation. From the classification report, let the percent area proportion of each class be represented as y_i, the target standard deviation be ω_o, and the standard deviation of each class be ω_i. Here, i indexes the classes, of which there are three in our case. Based on the above, the total number of samples N for the accuracy assessment is calculated using expression (3) [49].

(3) N = ( Σ_{i=1}^{n} y_i ω_i / ω_o )².

Let us choose ω_o = 0.01; then the samples for each class are composed of two parts: (1) an equal share of samples (N/n) and (2) area proportion-based samples (N y_i). Expression (4) is utilized to calculate the number of samples N_i for each class [49].

(4) N_i = ( N y_i + N/n ) / 2.
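A short numerical sketch of expressions (3) and (4) is given below; the class proportions and per-class standard deviations are hypothetical placeholders, with only ω_o = 0.01 taken from the text.

```python
import numpy as np

# Hypothetical class area proportions (forest, bareland, vegetation) from a
# classification report, with assumed per-class standard deviations.
y = np.array([0.50, 0.35, 0.15])      # y_i : area proportion of each class
omega = np.array([0.4, 0.4, 0.3])     # omega_i : per-class standard deviation (assumed)
omega_o = 0.01                        # target standard deviation (from the text)

N = (np.sum(y * omega) / omega_o) ** 2        # equation (3): total validation samples
N_i = (N * y + N / len(y)) / 2                # equation (4): per-class samples
print(int(round(N)), np.round(N_i).astype(int))
```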

3 The proposed methodology

The proposed methodology utilized in this work is shown in Figure 5. It is a two-step methodology, where in the first step, two variants of image fusion are performed. In the first case, data fusion is performed between Sentinel-2 and Airbus Vision-1 for the year 2023 and the dataset is sharpened to a spatial resolution of 3.48 m, while in the second case, the method reported by Mateen et al. [21] is utilized to sharpen the datasets of the years 2016–2023 to a spatial resolution of 10 m. As a second step, the sharpened images at 10 and 3.48 m spatial resolution are classified using RFA and MLP-ANN algorithms. Finally, the classified data are compared with the ground samples and percentage error in the classified classes is calculated. More importantly, the accuracy assessment enhancement is studied for MLP-ANN, SVM, and RFA algorithms with the input data at 10 and 3.48 m spatial resolutions, respectively.

Figure 5: Proposed method.

4 Results

Based on the acquired data, the results are discussed here with further details.

4.1 Image fusion and sharpening

The results presented in Figure 6a–d compare the original raster of the Sentinel-2 bands with the Sentinel-2 raster sharpened to a spatial resolution of 3.48 m. Figure 6a and c show the original Sentinel-2 data and a zoomed view of the original raster, respectively. Comparing them with Figure 6b and d, the original raster contains blurred pixels, while the image sharpened to a spatial resolution of 3.48 m shows no such blurring. Apart from the raster, a comparison between individual bands is also presented. Figure 7a–d show the original Sentinel-2 band 4 (10 m) and band 4 sharpened to a spatial resolution of 3.48 m. Figure 7a and c show the original Sentinel-2 band 4 and a zoomed view of the original band 4, respectively. Compared with Figure 7b and d, the original band 4 contains significantly distorted pixels, whereas the sharpened band 4 at a spatial resolution of 3.48 m is significantly improved.

Figure 6: (a) Sentinel-2 raster original, (b) Sentinel-2 raster sharpened, (c) zoomed view of Sentinel-2 raster original, and (d) zoomed view of Sentinel-2 raster sharpened.

Figure 7: (a) Sentinel-2 Band 4 original, (b) Sentinel-2 Band 4 sharpened, (c) zoomed view of Sentinel-2 Band 4 original, and (d) zoomed view of Sentinel-2 Band 4 sharpened.

Figure 8a–d show a comparison of Sentinel-2 band 12 sharpened from a spatial resolution of 20 m to 3.48 m. Figure 8a and c show the original Sentinel-2 band 12 and a zoomed view of the original band 12, respectively. Compared with Figure 8b and d, the original band 12 has a significant number of blurry pixels relative to the sharpened band 12. Apart from the presented results, quantitative evaluation criteria such as the correlation coefficient (CC), root mean square error, and average absolute difference are reported in the study by Zhao and Liu [20]. Note that a good image fusion yields error metrics close to zero and CCs close to one.

Figure 8: (a) Sentinel-2 Band 12 original, (b) Sentinel-2 Band 12 sharpened, (c) zoomed view of Sentinel-2 Band 12 original, and (d) zoomed view of Sentinel-2 Band 12 sharpened.

Moreover, it is also important to calculate the spectral distortion index I_k, the spatial distortion index I_σ, and the joint spectral and spatial quality index QNR [20]. The same procedure reported in the study by Zhao and Liu [20] is adopted to calculate all the aforementioned quantitative indices. Using the presented concepts, the spectral CCs for the ten fused bands at a spatial resolution of 3.48 m are recorded as follows: red = 0.981, green = 0.991, blue = 0.982, NIR = 0.980, vegetation red edge 1 = 0.983, vegetation red edge 2 = 0.971, vegetation red edge 3 = 0.973, NIR narrow = 0.962, shortwave infrared-1 = 0.984, and shortwave infrared-2 = 0.979. Moreover, the spatial CCs for the red, blue, green, and NIR bands are recorded as follows: red = 0.971, green = 0.961, blue = 0.992, and NIR = 0.970. The spectral distortion index I_k, spatial distortion index I_σ, and joint spectral and spatial quality index QNR are recorded as follows: I_k = 0.0089, I_σ = 0.001, and QNR = 0.969. The aforementioned quantitative evaluation scores are in good agreement with the ranges reported in the study by Zhao and Liu [20], which confirms the validity of the fusion and sharpening of the Sentinel-2 and Vision-1 satellite data.
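As a rough illustration of how a per-band spectral CC of this kind can be computed, the sketch below correlates a sharpened band (block-averaged back to the coarse grid) with the original Sentinel-2 band. This is only an approximation of the procedure in [20], and the integer aggregation factor is an assumption, since 10 m/3.48 m is not an exact integer ratio.

```python
import numpy as np

def band_cc(sharpened_band, original_band, factor):
    """Pearson correlation between a sharpened band aggregated back to the
    original grid and the original band. `factor` is the (approximate) integer
    ratio of the two spatial resolutions."""
    r, c = original_band.shape
    # Block-average the sharpened band onto the coarse grid.
    coarse = sharpened_band[:r * factor, :c * factor] \
        .reshape(r, factor, c, factor).mean(axis=(1, 3))
    return np.corrcoef(coarse.ravel(), original_band.ravel())[0, 1]
```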

4.2 Image classification

The image classification is performed using the SCP in QGIS. The MLP-ANN, RFA, and SVM methods are used for image classification with the datasets sharpened to 10 and 3.48 m spatial resolutions. The RFA uses 25 trees and 500 iterations, while the MLP-ANN has two hidden layers with 10 and 20 neurons, respectively. For the MLP-ANN, a logistic function is used as the activation function and 500 iterations are chosen for image classification. The classified results are discussed here.

4.2.1 Comparison based on fusion results

In this section, the classification results of the data for the year 2023 are compared by utilizing the MLP-ANN, RFA, and SVM methods. The classification results are tested at 3.48 and 10 m spatial resolutions, respectively. The main aim of this analysis is to confirm the enhancement in the accuracy assessment with respect to the spatial resolution enhancement of the input dataset. Figure 9a–c show the qualitative analysis of the classified data for the year 2023 at 3.48 and 10 m spatial resolutions by utilizing the SVM, MLP-ANN, and RFA classifiers. In order to have a better understanding of the results presented in Figure 9a–c, a comparison of the classified areas and statistical parameters at 3.48 and 10 m spatial resolutions is presented in Figures 10a and b and 11a–c, respectively. From Figure 10a, using MLP-ANN at a spatial resolution of 10 m, the % areas for the forest, bareland, and fields classes are estimated as 48, 38.4, and 13.1%, respectively, for the year 2023, while with the same spatial resolution, the % areas for the forest, bareland, and fields classes with the RFA method are estimated as 56.5, 25.4, and 18.9%, respectively. Similarly, with the SVM method at a spatial resolution of 10 m, the % areas for the forest, bareland, and fields classes are estimated as 38.9, 57.7, and 3.3%, respectively, for the year 2023. From Figure 10b, using MLP-ANN at a spatial resolution of 3.48 m, the % areas for the forest, bareland, and fields classes are estimated as 50.5, 34.7, and 14.8%, respectively, for the year 2023, while with the same spatial resolution, the % areas for the forest, bareland, and fields classes with the RFA method are estimated as 51.7, 32.2, and 16.1%, respectively. Similarly, with the SVM method at a spatial resolution of 3.48 m, the % areas for the forest, bareland, and fields classes are estimated as 46.4, 48.2, and 5.4%, respectively, for the year 2023. All three supervised classifiers showed an increasing trend in the forest area and a decreasing trend in the bareland class at 3.48 m spatial resolution.

Figure 9: Classified data comparison at 3.48 m and 10 m spatial resolution (2023). (a) SVM, (b) ANN, and (c) RFA.

Figure 10: Classified areas. (a) 10 m spatial resolution and (b) 3.48 m spatial resolution.

Figure 11: Accuracy matrices at 3.48 m and 10 m spatial resolution (2023). (a) SVM, (b) ANN, and (c) RFA.

Figure 11a–c show the comparison of the accuracy matrices obtained using the SVM, ANN, and RFA methods with the input dataset at 3.48 and 10 m spatial resolutions. From Figure 11a, the MLP-ANN classifier shows an increasing trend in the overall Kappa hat score, from 0.95 (10 m) to 0.97 at 3.48 m spatial resolution. The overall accuracy also improved, from 95.5% (10 m) to 97.6% at 3.48 m. Similarly, from Figure 11b, with the RFA classifier there is an increasing trend in both the overall accuracy and the Kappa hat score at 3.48 m spatial resolution. The accuracy matrices of SVM shown in Figure 11c also show the same increasing trend for the overall accuracy and overall Kappa hat score.
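For reference, the overall accuracy and Kappa hat statistics reported in Figure 11 can be derived from an error (confusion) matrix as sketched below; the 3 × 3 matrix here is hypothetical and does not reproduce the study's actual counts.

```python
import numpy as np

# Hypothetical 3x3 error matrix (rows = reference, cols = classified)
# for the forest, bareland, and vegetation classes.
cm = np.array([[120,   4,   2],
               [  3, 100,   5],
               [  1,   6,  60]], dtype=float)

total = cm.sum()
overall_accuracy = np.trace(cm) / total
# Expected chance agreement, then Cohen's kappa (Kappa hat).
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa_hat = (overall_accuracy - p_e) / (1 - p_e)
print(round(overall_accuracy, 3), round(kappa_hat, 3))
```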

Out of the three classifiers, MLP-ANN showed better accuracy scores, so an iteration convergence test was performed for MLP-ANN with the 3.48 and 10 m spatial resolution datasets. The results are presented in Figure 12a and b. From the presented results, it is evident that with the 10 m dataset the cost function converged in 300 iterations, while with the 3.48 m dataset it took 100 iterations.

Figure 12: MLP-ANN cost minimization. (a) 10 m spatial resolution and (b) 3.48 m spatial resolution.

4.2.2 Comparison based on temporal data (2016–2023)

In this subsection, the classification results of the data for the years 2016–2023 are compared by utilizing the MLP-ANN, RFA, and SVM methods. The main aim of this analysis is to compare the changes in the classes with respect to the temporal changes, i.e., 2016–2023. Figure 13a–c show the qualitative analysis of the classified data for the years 2016–2023 by utilizing the SVM, MLP-ANN, and RFA classifiers. In order to have a better understanding of the results presented in Figure 13a–c, a comparison of the classified areas for the dataset of the years 2016–2023 is given in Figure 14a and b. Similarly, the comparison of the statistical data is presented in Figure 15a and b.

Figure 13: Classified area (2016–2023). (a) SVM, (b) MLP-ANN, and (c) RFA.

Figure 14: Classified areas. (a) 2016 and (b) 2023.

Figure 15: (a) Statistical data for the year 2016 and (b) statistical data for the year 2023.

Figure 14a shows the classified areas of each class for the dataset of the year 2016. Using MLP-ANN, the % areas for forests, bareland, and vegetation classes are estimated as 12, 75.5, and 12.5%, respectively, while the % areas for forests, bareland, and vegetation classes with RFA method are estimated as 9.5, 72.5, and 17.9%, respectively. Using SVM, the % classified areas of the aforementioned classes are recorded as 26.1, 67, and 6.9%, respectively.

Figure 14b shows the classified areas of each class for the dataset of the year 2023. Using MLP-ANN, the % areas for the forest, bareland, and vegetation classes are estimated as 48, 38.4, and 13.6%, respectively, while the % areas for the forest, bareland, and vegetation classes with the RFA method are estimated as 56.5, 25.4, and 18.1%, respectively. Using SVM, the % classified areas of the aforementioned classes are recorded as 39.9, 57.7, and 3.3%, respectively. Both the MLP-ANN and RFA classifiers show a reasonable increasing trend in the % classified area of the forest class from 2016 to 2023, and a decreasing trend in the bareland class.

Figure 15a shows the user accuracies (UA), producer accuracies (PA), and individual Kappa hat scores for each class for the dataset of the year 2016. Using MLP-ANN, the % PA, % UA, and Kappa hat scores for each class are well above 85%, indicating strong agreement with the ground pixels. The same is true for the RFA classifier. In the case of SVM, the % PA and % UA scores are below 80%, while the Kappa hat scores for each class are in the range of 0.65–0.70. Thus, SVM shows only moderate agreement.

Figure 15b shows the UA, PA, and individual Kappa hat scores for each class for the dataset of the year 2023. Using MLP-ANN, the % PA, % UA, and Kappa hat scores for each class are well above 85%, indicating strong agreement with the ground pixels. The same is true for the RFA classifier. In the case of SVM, the % PA and % UA scores are below 80%, while the Kappa hat scores for each class are in the range of 0.65–0.75. Thus, SVM shows only moderate agreement for the classified dataset of the year 2023.

4.2.3 Comparison based on change map

Figure 16a–c show the change map for 2016–2023 using the MLP-ANN, RFA, and SVM methods, while Figure 17 displays the class change vs % area change for the dataset from the years 2016 to 2023. Figure 17 reveals that 36.3% of the bareland area in 2016 was mapped to forest area in 2023 with MLP-ANN, and 40.9 and 16.4% with the RFA and SVM methods, respectively. Additionally, 31.6% of the bareland area changed to other classes with MLP-ANN, while with the RFA and SVM methods, the recorded changes are 21.5 and 15.8%, respectively, from 2016 to 2023.

Figure 16: Change map for the years 2016–2023. (a) RFA, (b) MLP-ANN, and (c) SVM.

Figure 17: Class change vs % area change.

4.2.4 Ground data matching

From the results presented in Sections 4.2.1–4.2.3, it is verified that strong agreement between the classified and ground data is provided by the MLP-ANN and RFA methods. Thus, ground data validation is performed only for these two methods. For ground data matching, Google Earth is utilized. Overall, three samples are recorded as ground data samples, and the designated coordinates for each sample are shown in Table 2.

Table 2

Coordinates of the ground samples in WGS 84/UTM zone 42 N system

Directions Ground sample 1 Ground sample 2 Ground sample 3
North 3,747,892 3,746,259 3,746,286
South 3,747,719 3,746,096 3,745,939
East 753,437 753,546 753,577
West 753,650 753,759 754,002

Figure 18a–c compares the classified sample 1 obtained using the MLP-ANN and RFA methods with ground sample 1 collected from Google Earth at the designated coordinates. In order to calculate the % area for each class, the classification report for sample 1 is generated using the SCP toolbox of the QGIS software. The % estimated areas for each class of sample 1 classified using the MLP-ANN and RFA methods are tabulated in Table 3. In order to calculate the % areas for the forest, bareland, and vegetation classes in reference sample 1, the area calculator toolbox of the QGIS software is utilized to draw polygons around each class and find the area proportion of each class. Finally, from the total area of ground sample 1, the % areas for each class are calculated and tabulated in Table 3. Using Table 3, the classified areas' absolute error plot for sample 1 is drawn and shown in Figure 19. From the presented results, using MLP-ANN, a 1.42% area error is observed in the forest class, while with the RFA method, the calculated % area error in the forest class is around 15.26%. Moreover, using MLP-ANN, a 1.51% area error is observed in the bareland class, while with the RFA method, the calculated % area error in the bareland class is around 8.59%. Similarly, 2.93 and 6.67% area errors are observed in the vegetation class using the MLP-ANN and RFA methods, respectively.
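The absolute % area errors quoted above follow directly from Table 3 as |classified % − ground %|; a short check in Python:

```python
# Absolute % area errors for sample 1 (Table 3): |classified % - ground %|.
mlp_ann = {"forest": 59.65, "bareland": 20.54, "vegetation": 19.81}
rfa     = {"forest": 42.97, "bareland": 27.62, "vegetation": 29.41}
ground  = {"forest": 58.23, "bareland": 19.03, "vegetation": 22.74}

for cls in ground:
    print(cls, round(abs(mlp_ann[cls] - ground[cls]), 2),
               round(abs(rfa[cls] - ground[cls]), 2))
```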

Figure 18: (a) Classified sample 1 using MLP-ANN, (b) classified sample 1 using RFA, and (c) ground sample 1.

Table 3

Sample 1: % area for each class

Classes MLP-ANN (%) RFA (%) Ground data (%)
Forest 59.65 42.97 58.23
Bareland 20.54 27.62 19.03
Vegetation 19.81 29.41 22.74
Figure 19: % Classified area error in sample 1 using MLP-ANN and RFA.

Figure 20a–c compares the classified sample 2 using MLP-ANN and RFA methods with the ground sample 2 collected from Google earth at the designated coordinates. As mentioned above, the same steps are utilized to calculate the % area for each class in the classified and ground sample 2. The % estimated areas for each class of sample 2 classified using MLP-ANN and RFA methods and the respective % areas for ground sample 2 are tabulated in Table 4.

Figure 20: (a) Classified sample 2 using MLP-ANN, (b) classified sample 2 using RFA, and (c) ground sample 2.

Table 4

Sample 2: % area for each class

Classes MLP-ANN (%) RFA (%) Ground data (%)
Forest 49.45 52.30 42.26
Bareland 44.34 22.10 47.90
Vegetation 06.21 25.60 09.84

Using Table 4, the classified area absolute error plot for sample 2 is drawn and shown in Figure 21. From the presented results, using MLP-ANN, a 7.19% area error is observed in the forest class, while with the RFA method, the calculated % area error in the forest class is around 10.04%. Moreover, using MLP-ANN, a 3.56% area error is observed in the bareland class, while with the RFA method, the calculated % area error in the bareland class is around 25.6%. Similarly, 3.63 and 15.76% area errors are observed in the vegetation class using the MLP-ANN and RFA methods, respectively.

Figure 21: % Classified area error in sample 2 using MLP-ANN and RFA.

Similarly, Figure 22a–c compares the classified sample 3 obtained using the MLP-ANN and RFA methods with ground sample 3. The aforementioned steps are utilized to calculate the % area for each class in the classified and ground sample 3. The % estimated areas for each class of sample 3 classified using the MLP-ANN and RFA methods, and the respective % areas for ground sample 3, are tabulated in Table 5. Using Table 5, the classified area absolute error plot for sample 3 is drawn and shown in Figure 23. From the presented results, using MLP-ANN, a 3.6% area error is observed in the forest class, while with the RFA method, the calculated % area error in the forest class is around 11.05%. Moreover, using MLP-ANN, a 2.63% area error is observed in the bareland class, while with the RFA method, the calculated % area error in the bareland class is around 13.65%. Similarly, 0.97 and 2.6% area errors are observed in the vegetation class using the MLP-ANN and RFA methods, respectively.

Figure 22: (a) Classified sample 3 using MLP-ANN, (b) classified sample 3 using RFA, and (c) ground sample 3.

Table 5

Sample 3: % area for each class

Classes MLP-ANN (%) RFA (%) Ground data (%)
Forest 69.75 77.20 66.15
Bareland 23.57 12.55 26.20
Vegetation 6.68 10.25 07.65
Figure 23: % Classified area error in sample 3 using MLP-ANN and RFA.

4.3 Discussion

The Billion Tree Tsunami project was initiated in 2014 by a provincial government of Pakistan (https://en.wikipedia.org/wiki/Billion_Tree_Tsunami). This research work is an effort to utilize remote sensing data for the classification of the forest area from 2016 to 2023. The classified forest-area data serve to verify the temporal growth of forests in our study area. In this work, satellite images from the Sentinel-2, Landsat-8, and Vision-1 programs are utilized. Since the Vision-1 program offers four multispectral bands at a spatial resolution of 3.48 m, a spectral-spatial correlation method is utilized for the fusion of Sentinel-2 and Vision-1 data and for sharpening ten bands of Sentinel-2 data to the same resolution as the Vision-1 data.

The image sharpening results are presented in Figures 6–8. In comparison with the authors' previously reported work [21], this work has the added benefit of utilizing Vision-1 data at a higher spatial resolution of 3.48 m. The four multispectral bands at a spatial resolution of 3.48 m are utilized to sharpen ten bands of the Sentinel-2 satellite. Thus, the sharpened Sentinel-2 data, which combine high spectral content with high spatial resolution, are much more appropriate for classification purposes. As given in Section 4, the calculated spectral CCs, spatial CCs, spectral distortion index I_k, spatial distortion index I_σ, and joint spectral and spatial quality index QNR scores are within the ranges discussed in [20].

The image classification results based on the fusion of data are presented in Figures 9–12. In order to verify the higher classification accuracies at the higher spatial resolution of 3.48 m, another set of Sentinel-2 data sharpened to a 10 m spatial resolution is also classified with the MLP-ANN, RFA, and SVM methods for the year 2023. By comparing the accuracies and statistical data of the classified images at spatial resolutions of 3.48 and 10 m, it is observed that all the aforementioned methods showed enhanced indices at the higher spatial resolution of 3.48 m. Thus, it is evident that with a higher spatial resolution of the dataset, the statistical quantitative indices, such as the individual and overall accuracies and the Kappa hat scores, are enhanced by some percentage compared with the 10 m spatial resolution dataset.

The image classification results based on the temporal data (2016–2023) are presented in Figures 13–15. The results presented in Figure 15a and b confirm that, using the MLP-ANN and RFA methods, the % PA, % UA, and Kappa hat scores for each class are well above 85%, indicating strong agreement with the ground pixels. In the case of SVM, the % PA and % UA scores are below 80%, while the Kappa hat scores for each class are in the range of 0.65–0.70. Thus, SVM shows only moderate agreement for the dataset of the years 2016–2023.

The change map comparison results based on the temporal data (2016–2023) are presented in Figures 16 and 17. Figure 17 reveals that 36.3% of the bareland area in 2016 was mapped to forest area in 2023 with MLP-ANN, and 40.9 and 16.4% with the RFA and SVM methods, respectively. Additionally, 31.6% of the bareland area changed to other classes with MLP-ANN, while with the RFA and SVM methods, the recorded changes are 21.5 and 15.8%, respectively, from 2016 to 2023.

In view of the foregoing, in this work, and in comparison with the authors' previous work [21], a more comprehensive and detailed analysis is performed using the three aforementioned supervised classification methods. Moreover, to the authors' best knowledge, data fusion and sharpening at such a high resolution (3.48 m) have never been exploited for the subject study area.

Ma et al. [44] analyzed the relationship between the spatial resolution of the input data and the convergence of deep learning classifiers. This idea is supported by Figure 12, which shows that the MLP-ANN converges about three times faster at 3.48 m than at 10 m resolution while reaching the same Kappa hat and overall accuracy.

The comparison results of the ground data validation are presented in Figures 18–23. The ground samples are compared with the classified samples. It is concluded that MLP-ANN has lower % absolute area errors than the RFA method [44].

5 Conclusion and future work

This article examines the MLP-ANN, SVM, and RFA methods for classifying the forest area in the Billion Tree Tsunami Project, Garhi Chandan, Pakistan. First, Sentinel-2 and Vision-1 satellite data are fused and sharpened to 3.48 m resolution. Then, the MLP-ANN, SVM, and RFA classifiers are used to classify the sharpened data. Additionally, another set of Sentinel-2 data sharpened to 10 m resolution is classified with the aforementioned methods and compared for temporal changes. The results and discussion showed that the MLP-ANN and RFA classifiers achieved higher overall Kappa hat scores and classification accuracies when using the sharpened dataset at 3.48 m. The individual Kappa hat scores, PA (%), and UA (%) were also improved compared with the 10 m spatial resolution. This work could be extended to include the panchromatic band (0.87 m) of the Vision-1 satellite, potentially sharpening the data to the sub-meter level.

Acknowledgements

The authors would like to thank Dr. Yongquan Zhao (yqzhao@link.cuhk.edu.hk) for providing technical support in the implementation of the image fusion and sharpening of the data for Sentinel-2 and Vision-1 satellites. The authors would also like to thank the Satellite Products & Services Operator, Intelligence Airbus Defence and Space, United Kingdom for approving their request # RITM0105689 - UM-LL Project Start Up Id. PP0089618 (data acquisition) and for providing them with the data related to the study area.

  1. Author contributions: Shabnam Mateen and Narissara Nuthammachot: Writing original draft, data collection, fusion, and classification of the data, Narissara Nuthammachot and Kuaanan Techato: Proof reading and project supervision.

  2. Conflict of interest: The authors declare no conflicts of interest related to this research. No competing financial or non-financial interests exist.

References

[1] Stephens CM, Lall U, Johnson FM, Marshall LA. Landscape changes and their hydrologic effects: Interactions and feedback across scales. Earth Sci Rev. 2021;212:103466. Elsevier BV. 10.1016/j.earscirev.2020.103466.

[2] Kvie KS, Heggenes J, Bårdsen BJ, Røed KH. Recent large-scale landscape changes, genetic drift and reintroductions characterize the genetic structure of Norwegian wild reindeer. Conserv Genet. 2019;20:1405–19. Springer Science and Business Media LLC. 10.1007/s10592-019-01225-w.

[3] Yang H, Kong J, Hu H, Du Y, Gao M, Chen F. A review of remote sensing for water quality retrieval: progress and challenges. Remote Sens. 2022;14:1770. MDPI AG. 10.3390/rs14081770.

[4] Masek JG, Wulder MA, Markham B, McCorkel J, Crawford CJ, Storey J, et al. Landsat 9: Empowering open science and applications through continuity. Remote Sens Environ. 2020;248:111968. Elsevier BV. 10.1016/j.rse.2020.111968.

[5] Gascon F, Bouzinac C, Thépaut O, Jung M, Francesconi B, Louis J, et al. Copernicus Sentinel-2A calibration and products validation status. Remote Sens. 2017;9:584. MDPI AG. 10.3390/rs9060584.

[6] Szantoi Z, Strobl P. Copernicus Sentinel-2 calibration and validation. Eur J Remote Sens. 2019;52:253–5. Informa UK Limited. 10.1080/22797254.2019.1582840.

[7] Liu S, Qi Z, Li X, Yeh A. Integration of convolutional neural networks and object-based post-classification refinement for land use and land cover mapping with optical and SAR data. Remote Sens. 2019;11:690. MDPI AG. 10.3390/rs11060690.

[8] Ustin SL, Middleton EM. Current and near-term advances in Earth observation for ecological applications. Ecol Process. 2021;10:1. Springer Science and Business Media LLC. 10.1186/s13717-020-00255-4.

[9] Millin-Chalabi G, Langston B, Holmes J, Meade R, Stopher A, Best C, et al. A multisensor and multitemporal approach to assess wildfire occurrence and landscape dynamics on Marsden Moor Estate, West Yorkshire. research.manchester.ac.uk; 2022. https://research.manchester.ac.uk/en/publications/a-multisensor-and-multitemporal-approach-to-assess-wildfire-occur-2 [Accessed 29 Aug. 2023].

[10] Vizzari M. PlanetScope, Sentinel-2, and Sentinel-1 data integration for object-based land cover classification in google earth engine. Remote Sens. 2022;14:2628. MDPI AG. 10.3390/rs14112628.

[11] Restaino R, Vivone G, Addesso P, Chanussot J. Hyperspectral sharpening approaches using satellite multiplatform data. IEEE Trans Geosci Remote Sens. 2021;59:578–96. Institute of Electrical and Electronics Engineers (IEEE). 10.1109/tgrs.2020.3000267.

[12] Drakonakis GI, Tsagkatakis G, Fotiadou K, Tsakalides P. OmbriaNet—supervised flood mapping via convolutional neural networks using multitemporal Sentinel-1 and Sentinel-2 data fusion. IEEE J Sel Top Appl Earth Obs Remote Sens. 2022;15:2341–56. Institute of Electrical and Electronics Engineers (IEEE). 10.1109/jstars.2022.3155559.

[13] Hafner S, Nascetti A, Azizpour H, Ban Y. Sentinel-1 and Sentinel-2 data fusion for urban change detection using a dual stream U-net. IEEE Geosci Remote Sens Lett. 2022;19:1–5. Institute of Electrical and Electronics Engineers (IEEE). 10.1109/lgrs.2021.3119856.

[14] Chen Y, Bruzzone L. Self-supervised SAR-optical data fusion of sentinel-1/-2 images. IEEE Trans Geosci Remote Sens. 2022;60:1–11. Institute of Electrical and Electronics Engineers (IEEE). 10.1109/tgrs.2021.3128072.

[15] Li Z, Zhang HK, Roy DP, Yan L, Huang H. Sharpening the Sentinel-2 10 and 20 m bands to planetscope-0 3 m resolution. Remote Sens. 2020;12:2406. MDPI AG. 10.3390/rs12152406.

[16] Ma Y, Chen H, Zhao G, Wang Z, Wang D. Spectral index fusion for salinized soil salinity inversion using Sentinel-2A and UAV images in a coastal area. IEEE Access. 2020;8:159595–608. Institute of Electrical and Electronics Engineers (IEEE). 10.1109/access.2020.3020325.

[17] Ao Z, Sun Y, Xin Q. Constructing 10-m NDVI time series from Landsat 8 and Sentinel 2 images using convolutional neural networks. IEEE Geosci Remote Sens Lett. 2021;18:1461–5. Institute of Electrical and Electronics Engineers (IEEE). 10.1109/lgrs.2020.3003322.

[18] Shao Z, Cai J, Fu P, Hu L, Liu T. Deep learning-based fusion of Landsat-8 and Sentinel-2 images for a harmonized surface reflectance product. Remote Sens Environ. 2019;235:111425. Elsevier BV. 10.1016/j.rse.2019.111425.

[19] Sigurdsson J, Armannsson SE, Ulfarsson MO, Sveinsson JR. Fusing Sentinel-2 and Landsat 8 satellite images using a model-based method. Remote Sens. 2022;14:3224. MDPI AG. 10.3390/rs14133224.

[20] Zhao Y, Liu D. A robust and adaptive spatial-spectral fusion model for PlanetScope and Sentinel-2 imagery. GIScience Remote Sens. 2022;59:520–46. Informa UK Limited. 10.1080/15481603.2022.2036054.

[21] Mateen S, Nuthammachot N, Techato K, Ullah N. Billion Tree Tsunami Forests classification using image fusion technique and random forest classifier applied to Sentinel-2 and Landsat-8 images: A case study of Garhi Chandan Pakistan. ISPRS Int J Geo-Information. 2022;12:9. MDPI AG. 10.3390/ijgi12010009.

[22] Mehmood M, Shahzad A, Zafar B, Shabbir A, Ali N. Remote sensing image classification: A comprehensive review and applications. Ahmad A, editor. Math Probl Eng. 2022;1–24. Hindawi Limited. 10.1155/2022/5880959.

[23] Dhanachandra N, Manglem K, Chanu YJ. Image segmentation using K-means clustering algorithm and subtractive clustering algorithm. Procedia Comput Sci. 2015;54:764–71. Elsevier BV. 10.1016/j.procs.2015.06.090.

[24] Zhao Y, Yuan Y, Wang Q. Fast spectral clustering for unsupervised hyperspectral image classification. Remote Sens. 2019;11:399. MDPI AG. 10.3390/rs11040399.

[25] Reza MS, Ma J. ICA and PCA integrated feature extraction for classification. IEEE 13th International Conference on Signal Processing (ICSP). IEEE; 2016. 10.1109/icsp.2016.7877996.

[26] Tzotsos A, Argialas D. Support vector machine classification for object-based image analysis. Lecture notes in geoinformation and cartography. Berlin, Heidelberg: Springer; p. 663–77. 10.1007/978-3-540-77058-9_36.

[27] Yang CC, Prasher SO, Enright P, Madramootoo C, Burgess M, Goel PK, et al. Application of decision tree technology for image classification using remote sensing data. Agric Syst. 2003;76:1101–17. Elsevier BV. 10.1016/s0308-521x(02)00051-3.

[28] Aniah P, Bawakyillenuo S, Codjoe SNA, Dzanku FM. Land use and land cover change detection and prediction based on CA-Markov chain in the Savannah ecological zone of Ghana. Environ Chall. 2023;10:100664. Elsevier BV. 10.1016/j.envc.2022.100664.

[29] Chen L, Li S, Bai Q, Yang J, Jiang S, Miao Y. Review of image classification algorithms based on convolutional neural networks. Remote Sens. 2021;13:4712. MDPI AG. 10.3390/rs13224712.

[30] Breiman L. Random forests. Mach Learn. 2001;45:5–32. Springer Science and Business Media LLC. 10.1023/a:1010933404324.

[31] Gislason PO, Benediktsson JA, Sveinsson JR. Random forests for land cover classification. Pattern Recognit Lett. 2006;27:294–300. Elsevier BV. 10.1016/j.patrec.2005.08.011.

[32] Pelletier C, Valero S, Inglada J, Champion N, Dedieu G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens Environ. 2016;187:156–68. Elsevier BV. 10.1016/j.rse.2016.10.010.

[33] Belgiu M, Drăguţ L. Random forest in remote sensing: A review of applications and future directions. ISPRS J Photogramm Remote Sens. 2016;114:24–31. Elsevier BV. 10.1016/j.isprsjprs.2016.01.011.

[34] Rodriguez-Galiano VF, Ghimire B, Rogan J, Chica-Olmo M, Rigol-Sanchez JP. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J Photogramm Remote Sens. 2012;67:93–104. Elsevier BV. 10.1016/j.isprsjprs.2011.11.002.

[35] Ma L, Li M, Ma X, Cheng L, Du P, Liu Y. A review of supervised object-based land-cover image classification. ISPRS J Photogramm Remote Sens. 2017;130:277–93. Elsevier BV. 10.1016/j.isprsjprs.2017.06.001.

[36] Vali A, Comai S, Matteucci M. Deep learning for land use and land cover classification based on hyperspectral and multispectral earth observation data: A review. Remote Sens. 2020;12:2495. MDPI AG. 10.3390/rs12152495.

[37] Foody GM. Supervised image classification by MLP and RBF neural networks with and without an exhaustively defined set of classes. Int J Remote Sens. 2004;25:3091–104. Informa UK Limited. 10.1080/01431160310001648019.

[38] Huang W, Lippmann RP. Neural net and traditional classifiers. Neural Information Processing Systems; 1987. https://proceedings.neurips.cc/paper/1987/hash/4e732ced3463d06de0ca9a15b6153677-Abstract.html [Accessed 29 Aug. 2023].

[39] Cleeremans A, Servan-Schreiber D, McClelland JL. Finite state automata and simple recurrent networks. Neural Comput. 1989;1:372–81. MIT Press – Journals. 10.1162/neco.1989.1.3.372.

[40] Decatur SE. Application of neural networks to terrain classification. International Joint Conference on Neural Networks. IEEE; 1989. 10.1109/ijcnn.1989.118592.

[41] Kulkarni AD, Lulla K. Fuzzy neural network models for supervised classification: Multispectral image analysis. Geocarto Int. 1999;14:42–51. Informa UK Limited. 10.1080/10106049908542127.

[42] Laprade RH. Split-and-merge segmentation of aerial photographs. Comput Vision Graphics Image Process. 1988;44:77–86. Elsevier BV. 10.1016/s0734-189x(88)80032-x.

[43] Hathaway RJ, Bezdek JC. Recent convergence results for the fuzzy C-means clustering algorithms. J Classification. 1988;5:237–47. Springer Science and Business Media LLC. 10.1007/bf01897166.

[44] Ma L, Liu Y, Zhang X, Ye Y, Yin G, Johnson BA. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J Photogramm Remote Sens. 2019;152:166–77. Elsevier BV. 10.1016/j.isprsjprs.2019.04.015.

[45] Thakur R, Manekar VL. Artificial intelligence-based image classification techniques for hydrologic applications. Appl Artif Intell. 2022;36:1. 10.1080/08839514.2021.2014185.

[46] Taufik A, Sharifah S, Ahmad S. Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach. IOP Conf Ser Earth Environ Sci. 2016;37:012062. 10.1088/1755-1315/37/1/012062.

[47] Yuan H, Van Der Wiele CF, Khorram S. An automated artificial neural network system for land use/land cover classification from Landsat TM imagery. Remote Sens. 2009;1:243–65. 10.3390/rs1030243.

[48] El-Harby AA, Alotaibi AS. An automatic ANFIS system for classifying features from remotely sensed images using a novel technique for correcting training and test data. Inf Sci Lett. 2022;11(4):1239–49. 10.18576/isl/110423.

[49] Congedo L. Semi-Automatic Classification Plugin: A Python tool for the download and processing of remote sensing images in QGIS. J Open Source Softw. 2021;6(64):3172. 10.21105/joss.03172.

Received: 2023-09-06
Revised: 2023-12-06
Accepted: 2023-12-07
Published Online: 2024-02-24

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
