Abstract
Terrestrial Laser Scanning (TLS) is increasingly used in geomonitoring for 3D displacement analysis. However, assessing the sensitivity of the implemented monitoring strategy, which is crucial for correctly interpreting observations and detecting deformations, is challenging due to the impact of complex spatial correlations and other influencing factors. Traditional methods for sensitivity analysis often assume uncorrelated measurements, leading to biased and overestimated sensitivity. This can lead to suboptimal choices of monitoring strategy, false expectations, and errors in displacement detection. This study introduces a new method for quantifying the uncertainty of spatially aggregated (averaged) TLS-based displacement estimates in geomonitoring by empirically locally sampling such aggregated values (ELSA). The method implicitly accounts for the mentioned spatial correlations and their local variations, thereby providing a more realistic sensitivity quantification. Validation using real-world datasets and simulated displacements demonstrates the method’s ability to provide a realistic uncertainty estimate and, subsequently, a good sensitivity estimate, realized herein by means of the Minimal Detectable Bias (MDB). Finally, we investigate several data preprocessing steps and demonstrate their effectiveness in enhancing both sensitivity and the quality of uncertainty estimates.
1 Introduction
Terrestrial Laser Scanning (TLS) has become a widely adopted technology for geomonitoring, providing high-resolution 3D spatial data (point clouds) with centimeter-level precision. Its ability to sense complex terrains remotely without the need for physical installations within the monitored region of interest makes it particularly suitable for monitoring hazardous and unstable areas. Consequently, TLS is used in various geomonitoring applications, e.g. monitoring of landslides [1], rockfalls [2] and debris flows [3].
In point cloud-based geomonitoring, deformation processes are typically observed by comparing sequential point cloud acquisitions and computing displacement estimates. These 3D datasets enable detailed deformation analysis, with different methods producing results ranging from 2D deformation maps [4] to 3D deformation vector fields (DVFs) [5], each providing valuable insights into deformation mechanisms [6]. When multiple epochs are acquired, these datasets can be processed as time series data, facilitating the identification of underlying dynamic behaviors [7].
Computing the displacements is fundamental to point cloud-based geomonitoring, but assessing the sensitivity of the measurement system and associated data-processing methods is equally important. In this context, “sensitivity” refers to the capacity to detect potential displacements [8], [9], [10], [11], [12], answering the question of how small a displacement can be while still being considered statistically significant. Assessing the achievable sensitivity is important not only after the measurements but also during the planning phase of a geomonitoring project, as knowing the sensitivity of a specific measurement setup can help in decision-making.
To assess whether surface model changes are statistically significant, refs. [13], [14] introduced the concept of level of detection (LoD) as a threshold to distinguish significant differences in digital elevation models (DEMs) from errors and outliers arising from measurement uncertainty. Building upon this, ref. [4] extended the concept of LoD to 3D point cloud displacement in geomonitoring as part of their Multiscale Model to Model Cloud Comparison (M3C2) algorithm. The algorithm evaluates the significance of the displacements along the surface normal and uses the surface roughness as a proxy for the point cloud precision. Factors influencing this precision include measurement noise, surface and material properties, atmospheric effects and scanning geometry.
Several strategies have aimed at improving sensitivity through local data aggregation. For instance, ref. [15] proposed averaging point cloud differences within a defined spatial neighborhood to enhance the detection of millimeter-scale displacements over short distances, and many following works have adopted this practice. Extending this idea into the spatiotemporal domain, ref. [16] incorporated averaging across multiple sequential point cloud epochs to further increase sensitivity.
However, the mentioned studies, when conducting sensitivity assessments, generally assumed that the uncertainty of displacement estimates follows a normal distribution and results from uncorrelated observations. Based on this assumption of independence, the standard error of the mean (SE) was commonly chosen as an indicator of sensitivity. This approach oversimplifies the analysis and overlooks the spatial correlations inherent in TLS data, as demonstrated in many studies, including, e.g. [17], [18]. These correlations can lead to biased estimates [19], reduced effective sample sizes [18], underestimated uncertainty and consequently overestimated sensitivity.
To address these limitations, a substantial body of literature (e.g. [17], [20], [21], [22], [23], [24]) focuses on deriving comprehensive stochastic models for TLS-based point clouds. This is mostly done through the empirical determination of fully populated variance-covariance matrices (VCM) and the subsequent analyses of the effects of considering or disregarding such VCMs in further data processing. Often, the emphasis has been on a particular case of evaluating the statistical significance of changes in the parameters defining the modeled 3D surfaces used to approximate the obtained point clouds. Despite this special focus, such stochastic models provide the information needed to determine whether a computed displacement is statistically significant. However, the empirical derivation of a fully populated VCM in a general sense remains an unresolved challenge [19]. To name a few related challenges, these methods often require prior experimental investigations with dedicated and suitable setups, specific surface modeling assumptions, or prior knowledge about the behavior of the measurement system.
In geomonitoring, surface modeling is often a complex and challenging task. As a result, deformation analyses often bypass surface modeling and operate directly on point clouds. Moreover, the behavior of measurement systems is difficult to model correctly due to many influencing factors, such as refraction, which introduces complex spatially and temporally varying correlations [25]. So far, no methods are capable of estimating the required VCMs based on the data acquired on site, while the VCMs estimated using special experimental setups and equipment (e.g. as in ref. [19]) do not generalize well enough to be applicable in geomonitoring.
To tackle this challenge, in this study, we introduce an approach for estimating the uncertainty of averaged displacement estimates based on on-site acquired data and use this approach to evaluate the achievable sensitivity in long-range TLS-based geomonitoring. The sensitivity computation is performed specifically by computing the Minimal Detectable Bias (MDB) from the averaged displacement estimates derived directly from TLS point clouds. Our method addresses the influence of spatial correlations and local factors, such as refraction and surface properties, without explicitly building a VCM. This is achieved by subdividing the computed point cloud into smaller regions within which we assume stationary stochastic processes, computing and sampling displacement estimate averages, and deriving statistics (e.g. MDB) based on these samples (Monte-Carlo-based approach). We validate our approach by analyzing displacement estimates of known magnitude simulated on top of a real-world geomonitoring dataset. We further demonstrate the suitability of our approach on two TLS geomonitoring datasets, compare the computed sensitivity estimates to standard approaches, and quantify the differences, showcasing the relevance of the proposed method. Finally, we explore several processing steps for TLS-based displacement estimates capable of enhancing the sensitivity and provide an indication of what is achievable in that regard.
The article is organized as follows: Section 2 introduces the theoretical background and methods including the workflow for estimating uncertainty based on locally averaged displacement estimates. Section 3 describes the dataset used in subsequent investigations. Section 4 details the analysis of the uncertainty estimates and their influence on sensitivity calculations. Conclusions and an outlook are presented in Section 6.
2 Methods
2.1 Theoretical background
A monitoring system (including both the deployed instruments and processing methods) is considered sensitive when it can detect a displacement with a specified probability of false alarm (significance level α) and a specified probability of detection (test power 1 − β), where β is the probability of a missed detection (type II error) [26].
For geomonitoring, among the available techniques, the M3C2 algorithm [4] and its variants [27], [28] are the most widely deployed approaches to estimate geometric surface changes. For the sensitivity analysis of displacement estimates, M3C2 relies on the LoD, which is considered to be the sensitivity measure and can be used to indicate the minimum detectable changes [4]. The LoD is calculated by considering the variances of the two surface models or point clouds (reference and target) in the direction of the surface normal and the assumed co-registration error. The formula of LoD95 (for a 95 % confidence interval) [4] is given by:

LoD95(d) = ±1.96 (√(σ1(d)²/n1 + σ2(d)²/n2) + σreg),  (1)

where σ1(d)² and σ2(d)² denote the local surface variability (i.e. the combined measurement noise and the effect of the surface roughness) in each point cloud, computed from a subsection of the point cloud within a cylindrical neighborhood with diameter d around the query point, and n1 and n2 denote the corresponding numbers of points. The additional parameter σreg quantifies an isotropic registration uncertainty. For an α other than 0.05, the factor 1.96 is replaced by the corresponding critical value.
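For illustration, Eq. (1) can be evaluated per query point as in the following minimal numpy sketch, assuming the per-point distances within the cylindrical neighborhood have already been extracted (function and variable names are illustrative, not from the original implementation):

```python
import numpy as np

def lod95(d1, d2, sigma_reg):
    """LoD95 following Eq. (1): d1 and d2 hold the per-point distances of the
    two clouds within the cylindrical neighborhood of diameter d."""
    n1, n2 = len(d1), len(d2)
    s1 = np.var(d1, ddof=1)  # local surface variability, epoch 1
    s2 = np.var(d2, ddof=1)  # local surface variability, epoch 2
    return 1.96 * (np.sqrt(s1 / n1 + s2 / n2) + sigma_reg)
```

A larger neighborhood (more points) or a smaller registration uncertainty directly lowers the detection threshold.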
Despite its broad applicability, the LoD shows three key limitations: (i) the standard LoD formulation does not account for the probability of a type II error; (ii) it assumes that all observations are uncorrelated, an assumption which does not hold for TLS point clouds, as discussed in Section 1; and (iii) the LoD computation assumes an isotropic registration uncertainty (see Eq. (1)), implying uniform uncertainty in all directions.
To address the first limitation of the LoD, recent studies [29] have started to incorporate hypothesis testing as well-established statistical techniques for sensitivity analysis, with a specific focus on calculating the Minimal Detectable Bias (MDB) as a measure of sensitivity. The MDB was introduced by ref. [30] as part of a hypothesis testing-based approach for outlier detection, known as Baarda’s data snooping procedure. Although introduced for analysis of geodetic networks, it is a more widely applicable method [31], [32], [33]. For point cloud-based deformation analysis, we can define the MDB as a measure of the minimal displacement that has to occur between two epochs in order to be detected with significance level α while allowing for a probability of β that the test fails to detect it.
For the hypothesis testing of an observed displacement estimate d, we have the null hypothesis

H0: E(d) = 0.

To compute the MDB, we have to consider the alternative hypothesis

HA: E(d) = ∇d ≠ 0,

which leads to

MDB = (z1−α/2 + z1−β) · σd,  (2)

where z1−α/2 is the critical value (two-tailed) for a chosen significance level α (e.g. 0.05), z1−β the critical value for a chosen power 1 − β of the test, and σd represents the standard deviation of the computed displacement. In contrast to the LoD formulation, where the uncertainty is derived from the individual surface variabilities of the two point clouds, σd in the MDB approach reflects an empirically estimated uncertainty based on the observed differences between epochs. Importantly, this formulation is agnostic to the specific method used to compute the point cloud differences.
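Once σd is known, the MDB reduces to a one-line computation; a minimal sketch using only the Python standard library (names are illustrative):

```python
from statistics import NormalDist

def mdb(sigma_d, alpha=0.05, beta=0.20):
    """Minimal Detectable Bias: (z_{1-alpha/2} + z_{1-beta}) * sigma_d."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return (z(1 - alpha / 2) + z(1 - beta)) * sigma_d
```

For the default α = 0.05 and β = 0.2, the factor in front of σd is approximately 2.80, i.e. the MDB is about 2.8 times the standard deviation of the averaged displacement.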
In the literature adopting the MDB for point cloud-based deformation analysis (e.g. [29]), the parameter σd is still computed as in the case of the LoD (the term in parentheses in Eq. (1)), based on the analysis of the local point cloud variance and an additional constant factor requiring prior knowledge and some simplifying assumptions. Hence, although using the MDB as a measure of sensitivity eliminates the first of the abovementioned limitations, it still faces the second and third limitations of the standard LoD (it does not account for correlations in the data and assumes an isotropic registration error) if they are not explicitly addressed.
Consequently, the parameter σd in Eq. (2) must reflect all relevant uncertainty sources, such as registration error, instrument measurement noise, as well as the effects of environmental, material, and geometric influences, as discussed in Section 1. Some of these effects can vary substantially across a point cloud, especially in the case of long-range TLS-based geomonitoring [36]. This also holds for the registration uncertainty, which depends on the estimated transformation parameters and the spatial position of the investigated points. Hence, the third assumption, that the registration error is isotropic and uniform across the scene, is not valid.
Given these limitations and requirements for σd, we propose a direct estimation of the uncertainty σd based on an empirical statistical analysis of locally averaged and sampled displacement estimates.
In the following section, we describe the workflow of the proposed empirical estimation of σd, which is later used for the investigations in Section 4.
2.2 Workflow for empirical uncertainty estimation
The workflow follows the previously introduced ideas for the estimation of σ d . In point cloud-based monitoring applications, displacements are most commonly derived from the comparison of reference and target point cloud epochs. These two-epoch displacement estimates are widely regarded as the standard case in geomonitoring and will also serve as the foundation for our subsequent analysis.
The general workflow is realized using the following four steps: (1) reference and target point clouds are subdivided into local segments, (2) displacements are computed between the corresponding segments, (3) displacements are aggregated within a defined local neighborhood, and (4) the empirical σd is derived per local segment through sampling of the aggregated displacements and computing the related statistics. This method of estimating σd will hereafter be referred to as ELSA (Empirical Local Sampling of Aggregated values). The resulting σd can then be used for calculating the Minimal Detectable Bias (MDB) as in Eq. (2).
Within this study, we present only one specific implementation of the workflow for one way of calculating displacement estimates from TLS point clouds acquired from a single viewpoint, which is typical for geomonitoring setups. Although most common algorithms, such as M3C2, compute displacements in the normal direction of the surface, we compute the displacements in the LoS (Line of Sight) direction for the following reasons. First, it aligns with the inherent nature of data acquisition by TLS instruments. Hence, no additional uncertainty is introduced, e.g. through the estimation of the surface normal. Second, it enables processing of the single-viewpoint point cloud in a 2D spherical image representation (a.k.a. range images or depth maps), a common representation in computer vision applications. This representation, in turn, (i) improves data processing efficiency by reducing the dimensionality from 3D to 2D and (ii) facilitates the use of established image-processing algorithms (primarily relevant for Section 4.3). Irrespective of this specific implementation, the core ideas of the proposed workflow can be easily adapted to process displacement estimates in any arbitrary or multiple directions and, hence, it can also be used, for example, for processing the M3C2-based displacement estimates.
The prerequisite for the workflow is that the target point cloud is co-registered with the reference point cloud, i.e. both epochs are expressed in a common, scanner-centered coordinate frame.
The upcoming subsections present the aforementioned specific implementation of the proposed workflow and follow the four steps presented above. Section 2.2.1 describes the subdivision of point clouds into local segments (Step 1) and the rasterization, which is necessary for representing 3D point clouds as spherical range images and for computing the LoS displacement estimates (Step 2); Section 2.2.2 describes the definition of local neighborhoods and the displacement aggregation strategy (Step 3), as well as the computation of the parameter σd (Step 4).
2.2.1 Subdivision and rasterization
Let P denote a point cloud acquired from a single viewpoint, with each point given by its spherical coordinates (range r, horizontal angle θ, vertical angle ϕ) in the scanner’s coordinate frame.
Assuming that the aforementioned spatial influence factors (Section 1) have a similar or constant influence within a smaller space, we subdivide the point cloud into smaller, local segments. In general, this subdivision can be achieved in different ways, such as dividing the point cloud using a regular grid, segmenting along major break lines, or applying advanced segmentation methods like supervoxels [37].
Herein, with the spherical image representation of the point cloud, we realize this subdivision using a regular grid in the angular domain; the chosen segment extent is discussed in Section 3.
In the following text, we present the remaining part of the implemented workflow for a single segment; this process has to be repeated for each segment. To generate the spherical image representation, we first define a regular grid of size n × m with horizontal and vertical angles θ i and ϕ j respectively, as depicted in Figure 1, where i = 1, …, n and j = 1, …, m. The step size of the grid corresponds to the scanning resolution Δφ used during the measurement. This ensures an approximately lossless data transformation, where each pixel approximately corresponds to one measured point. The grid coordinates are defined as:
θi = θ0 + (i − 1)·Δφ,  ϕj = ϕ0 + (j − 1)·Δφ,

where θ0 and ϕ0 denote the angular coordinates of the grid origin.
The rasterization of the acquired point cloud maps each measured point to the grid cell whose angular coordinates (θi, ϕj) are closest to the point’s measured horizontal and vertical angles. Subsequently, we define the two-dimensional matrix M ∈ ℝn×m (the range raster), where each element Mi,j holds the range value interpolated at the grid node (θi, ϕj) from the point(s) assigned to that cell.
The LoS displacement estimates are computed as the range measurement differences between the reference and target range rasters. This can now easily be computed as the cell (pixel)-wise difference of the interpolated range rasters, resulting in the displacement (or range difference) raster D:

D = Mtarget − Mref, i.e. Di,j = Mtarget,i,j − Mref,i,j.
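The rasterization and differencing can be sketched as follows; this simplified version assigns each point to its nearest grid cell instead of interpolating, and all names and the sign convention are illustrative assumptions, not the original implementation:

```python
import numpy as np

def rasterize(ranges, theta, phi, theta0, phi0, dphi, shape):
    """Map each point (range, horizontal angle theta, vertical angle phi)
    onto an n x m angular grid with step dphi. Nearest-cell assignment is
    used here for brevity; cells without a point remain NaN."""
    M = np.full(shape, np.nan)
    i = np.rint((theta - theta0) / dphi).astype(int)
    j = np.rint((phi - phi0) / dphi).astype(int)
    ok = (i >= 0) & (i < shape[0]) & (j >= 0) & (j < shape[1])
    M[i[ok], j[ok]] = ranges[ok]
    return M

# LoS displacement raster as the cell-wise difference of two epochs:
# D = M_target - M_ref
```

Because both epochs share the same angular grid, the differencing is a single array subtraction rather than a nearest-neighbor search in 3D.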
2.2.2 Spatial aggregation and variability estimation
As the spatial influence factors (Section 1) are assumed to exhibit similar or constant effects within each such segment (Section 2.2.1), the computed range differences can be represented as a stationary stochastic process. Based on this stationarity assumption, we estimate σd for a given segment using an approach inspired by bootstrapping.
We follow a similar path as the LoD computation in M3C2; however, M3C2 first computes the average within a local (cylindrical) neighborhood and then derives the difference, whereas we first compute the difference and then average within a local neighborhood.
For that, we first partition the raster D into non-overlapping tiles T of size t × t as shown in Figure 2. This tile size t serves the same purpose of neighborhood selection for the averaging as, e.g. the cylindrical neighborhood diameter d for M3C2.

The aggregation is based on the range difference raster D, which is partitioned into non-overlapping tiles Tk,l based on the tile size t.
The tiles Tk,l ⊆ D are extracted from the segment, where k and l index the tiles along the horizontal and vertical dimensions, respectively. The indices k and l are defined as:

k = 1, …, ⌊n/t⌋ and l = 1, …, ⌊m/t⌋.

A tile Tk,l is then given by:

Tk,l = {Di,j | (k − 1)·t < i ≤ k·t, (l − 1)·t < j ≤ l·t}.
As the sensitivity of the displacement estimates is commonly increased through local averaging (i.e. computing the local mean of displacement estimates), we also use the average as the aggregation function fA(⋅) within our workflow and apply it to each tile:

d̄k,l = fA(Tk,l) = (1/t²) Σ(i,j)∈Tk,l Di,j.
However, any other function (e.g. the median or weighted average) can also be used depending on the specific application requirements. Therefore, we refer to this process as aggregation to maintain generalizability. Each aggregated value (one per tile) represents a final displacement estimate of improved sensitivity and can then be treated as an independent sample of a stationary process.
The estimate of σd is then calculated as the empirical standard deviation of the aggregated values:

σd = √( 1/(|Dt,A| − 1) · Σk,l (d̄k,l − d̄)² ),  (10)

where |Dt,A| denotes the total number of tiles, d̄k,l the aggregated value of tile Tk,l, and d̄ the mean of all aggregated values within the segment.
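Steps 3 and 4 of the workflow, i.e. tiling, aggregation, and the empirical standard deviation of the tile means, can be sketched in a few lines of numpy (function name is illustrative):

```python
import numpy as np

def elsa_sigma_d(D, t):
    """ELSA per segment: partition the displacement raster D into
    non-overlapping t x t tiles, average each tile (aggregation), and
    return the empirical standard deviation of the aggregated values."""
    n = (D.shape[0] // t) * t  # crop to an integer number of tiles
    m = (D.shape[1] // t) * t
    tiles = D[:n, :m].reshape(n // t, t, m // t, t)
    means = tiles.mean(axis=(1, 3))  # one aggregated value per tile
    return means.std(ddof=1)
```

The returned value can be plugged directly into the MDB formula of Eq. (2); swapping `mean` for `median` would realize an alternative aggregation function fA(⋅).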
While the underlying computations differ, the resulting uncertainty σd serves the same purpose as the uncertainty term in the LoD formulation (Eq. (1)) and can be directly inserted into the MDB computation in Eq. (2).
3 Dataset
We evaluate our approach using two datasets, chosen to demonstrate the generalizability of the approach as well as the variability of the σd estimates. The first one features the El Capitan cliff in Yosemite National Park (USA). The dataset consists of 25 TLS point clouds (see Figure 3a), acquired by the United States Geological Survey (USGS) [38] from 2 May until 6 May 2022. The point clouds were acquired with a Riegl VZ-2000i scanner positioned approximately 1,000 m from the cliff face. The instrument was mounted on a surveying tripod, which remained fixed in place for the entire observation period. The scanner was configured with an angular resolution of 0.01°, resulting in a point spacing of approximately 12 cm at the base of the cliff and 25 cm at the top. Scanning the entire cliff face required approx. 24 min, covering an area of roughly 1,100 m in width and 700 m in height.

Orthographic renderings of point clouds colored by intensity values from the study sites: (a) the scene at El Capitan, California, USA, with segments A, B, and C marked by red boxes, indicating the areas selected for detailed analysis. (b) The scene at Säntis, Switzerland, with the location of the segments D, E, and F.
The second dataset consists of observations of the Säntis north face, Switzerland [39] (see Figure 3b). In total, 9 point clouds were acquired using a Riegl VZ-4000 scanner at irregular intervals on 27 and 28 September 2023. The scanner was mounted with a tilt of 10° on a custom mount placed on the ground. This tilt was necessary to capture the uppermost parts of the rock face, which reach an elevation angle of 38°. The scanner was configured with an angular resolution of 0.005°. The scanning range extended from approximately 700 m at the base of the rock face to 2,000 m at the highest point, resulting in a point spacing of roughly 8 cm at the base and up to 24 cm at the top. Each scan required approximately 60 min to complete and spanned a field of view of 120° (horizontal) × 40° (vertical).
In Section 4, we present the application of the proposed workflow on these two datasets and demonstrate the achievable sensitivity of the averaged displacement estimates. For this analysis, we chose two scans per dataset for the computation of displacement estimates. To avoid unfavorable conditions due to refraction [40] and to ensure that no deformations had occurred, we chose two subsequent point clouds acquired in the early morning for each dataset. Each point cloud pair was registered via extracted planar patches and the “Multi Station Adjustment” procedure implemented in the Riegl RiScan Pro software.
To showcase the sensitivity quantification using our workflow, we focus on three distinct segments in each dataset, selected to capture variations in surface properties, scanner range, and angle of incidence (AOI). The segments of the El Capitan dataset cover ranges between 850 and 1,100 m and mainly feature bare rock surfaces. The segments of the Säntis dataset cover ranges between 850 and 1,500 m. Compared to the homogeneous surfaces in the El Capitan dataset, the Säntis segments exhibit much rougher surfaces, sometimes with vegetated parts and/or accumulations of rubble.
The spatial extent was heuristically set to 5°, balancing the need to preserve the assumption of stationarity with ensuring an adequate number of samples for estimating σd.
4 Results
In this section, we present the results of our analysis, focusing on the assessment of our method to estimate the uncertainty of the averaged displacement and its implications on sensitivity and classification. We begin by examining the uncertainty estimated by our method, as introduced in Section 2.2, and compare it to the standard case. This is followed by a validation study of the Minimal Detectable Bias (MDB) using a Monte Carlo simulation. We then explore improvements in sensitivity through additional preprocessing steps and conclude with an evaluation of displacement classification based on the proposed uncertainty estimates. In the following sections, the uncertainty σd (see Eq. (2)) for our method is calculated by Eq. (10).
4.1 Uncertainty estimates of averaged displacements
With the first analysis, we want to investigate how the uncertainty σd of spatially averaged displacement estimates obtained with our method (ELSA) behaves for different aggregation tile sizes t, as the general assumption is that larger tile sizes reduce the uncertainty accordingly. Additionally, we compare this behavior to the standard error of the mean (SE), which represents the conventional approach for estimating uncertainty in aggregated data, e.g. in the LoD. The SE for a tile size t is given by

SE(t) = σ/√N = σ/t, with N = t²,

where σ denotes the standard deviation of the individual displacement estimates within the tile.

Standard deviation estimated using the standard error of the mean (SE) and ELSA for the three segments A, B, and C of the El Capitan dataset, as a function of the tiling size t.

Standard deviation estimated using the standard error of the mean (SE) and ELSA for the three segments D, E, and F of the Säntis dataset, as a function of the tiling size t.
The results show that the standard deviation σd computed by ELSA is consistently higher than that computed using the SE for any given aggregation tile size. This trend persists as the tile size increases, whereas for uncorrelated measurements, one would expect σd to decrease proportionally to 1/t.
Furthermore, as demonstrated by ref. [15], spatial aggregation generally reduces the uncertainty of displacement estimates. However, this reduction is less pronounced than expected based on the typical SE estimate and does not tend toward 0 as the simplified variance propagation would suggest. On the contrary, in many cases it reaches a plateau, after which further averaging is meaningless. Our uncertainty estimates can thus be used to better analyze up to which point it is meaningful to average the displacements, and after which point the averaging merely reduces resolution without increasing sensitivity. For our particular datasets, we identify a suitable breaking point at tile size t = 20, which corresponds to averaging approximately 400 points. For t > 20, the improvements in σd are less than 10 %, whereas the resolution continues to degrade quadratically.
The differences between the curves related to different segments underscore the importance of estimating σd locally. For example, the σd of segment F of the Säntis dataset is substantially higher than that of the other segments. One contributing factor is the larger distance between the instrument and the rock face, which scales the influence of various effects, such as refraction or registration errors. Additionally, the scene contains significantly more vegetation than segments D or E, further increasing uncertainty due to increased surface roughness and, therefore, reducing sensitivity. The uncertainty represented by the SE, or LoD95 respectively, also accounts for these differences, as indicated by the different initial uncertainties (without averaging). However, it fails to capture deviations with lower spatial frequency (e.g. due to spatial correlations over larger distances), as it relies only on the limited neighborhood size defined by the cylinder radius used for the M3C2 computation. In contrast, ELSA leverages the entire segment to compute the uncertainty and can capture and reflect such deviations.
The overall offset between our uncertainty estimate σd and the SE could theoretically be corrected by incorporating the standard deviation of registration σreg as an additional uncertainty term, as included in LoD95 (see Eq. (1)). However, this formulation is based on the assumption that σreg is constant across the entire scene, which does not align with reality. This is well demonstrated in our results, as the offsets between the curves differ between segments; a constant σreg cannot adequately account for this. Furthermore, even with σreg applied and potentially reducing the offset between σd and the SE-based uncertainty, the approach would not address the different slopes of the curves observed across varying aggregation tile sizes, especially for smaller tile sizes (i.e. less averaging). Therefore, the proposed ELSA method can provide better uncertainty estimates and, consequently, improved sensitivity estimates.
4.2 Validation of MDB derived from ELSA
In the previous section, we demonstrated a notable difference between the common way and the proposed way (ELSA) of computing the empirical standard deviation σd of averaged displacement estimates and argued for the relevance of these differences. In this section, we verify the validity of the σd estimates obtained via the proposed ELSA method, and of the subsequent estimation of the MDB as the sensitivity metric, by utilizing synthetically simulated ground truth displacements.
To evaluate the correctness of the computed σd and subsequently the MDB, we simulate displacement estimates for two cases: Case 1, in which no displacement is present (the null hypothesis holds), and Case 2, in which a displacement with a magnitude equal to the computed MDB is added (the alternative hypothesis holds).
In order to analyze the correctness of the MDB calculated in this way, we analyze all computed aggregated (averaged) displacements (i.e. pixels in the range raster D after averaging) and decide whether they correspond to the simulated displacements (alternative hypothesis HA) or not (null hypothesis H0). Comparing these decisions with the simulated ground truth yields the empirical significance level and type II error rate.
Validation of significance level and type II error, based on the simulated displacement of size MDB (α = 0.05 and β = 0.2) for segment A, El Capitan.
| Metric | Nominal | Empirical |
|---|---|---|
| Significance level α | 0.05 | 0.056 |
| Type II error β | 0.2 | 0.195 |
The results from the simulation-based validation show that the empirically calculated values of α = 0.056 and β = 0.195 fall close to the nominal values of 0.05 and 0.2, respectively. This indicates that the MDB calculated using the proposed method correctly reflects the minimal detectable displacement for the chosen type I and type II error probabilities, making it a suitable metric for assessing the sensitivity of the implemented deformation monitoring approach.
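The validation logic can be reproduced in a simplified, purely synthetic form: draw displacement estimates under both hypotheses and compare the empirical error rates with the nominal α and β. This is a sketch of the principle, not the study's actual simulation on the El Capitan segment:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
alpha, beta, sigma_d = 0.05, 0.20, 1.0
z = NormalDist().inv_cdf
crit = z(1 - alpha / 2) * sigma_d                  # two-tailed decision threshold
mdb = (z(1 - alpha / 2) + z(1 - beta)) * sigma_d   # Eq. (2)

# Case 1: no displacement -> empirical alpha is the share of |d| > crit.
# Case 2: displacement of size MDB -> empirical beta is the share of
# tests that fail to flag the displacement.
d0 = rng.normal(0.0, sigma_d, 100_000)
d1 = rng.normal(mdb, sigma_d, 100_000)
alpha_emp = np.mean(np.abs(d0) > crit)
beta_emp = np.mean(np.abs(d1) <= crit)
```

With a correctly estimated σd, the empirical rates converge to the nominal ones, mirroring the agreement reported in the table above.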
4.3 Sensitivity improvement by additional preprocessing
Although σd calculated using ELSA represents the true uncertainty of the averaged displacements better than commonly used values (as demonstrated in the previous two sections), it still falls short of representing the full scope of the displacement uncertainties present in real-world data. This is primarily because we use the standard deviation as a metric, which does not account for systematic effects such as a constant bias and may underestimate uncertainty where the distribution of values deviates significantly from normality (e.g. heavy tails). When analyzing the specific segments of our real-world datasets, we observed such systematic effects in the displacement estimates. In order to make the MDB based on ELSA a better representative of the sensitivity, and in order to improve the sensitivity of our specific displacement estimation workflow, we identified three preprocessing steps designed to address these issues, mitigate their detrimental impact on the uncertainty estimate, and subsequently improve sensitivity. They are implemented as follows:
(i) Image-correlation-based alignment: By visual inspection of extracted segments, we observed small misalignment errors in the lateral direction between the target and reference point cloud. We assume this is primarily caused by inaccuracies in the registration of the point clouds. To address this, we implemented a local alignment of the point clouds based on the intensity information. For that, an intensity image of each individual segment is computed following the method outlined in Section 2.2.1, but with intensity replacing the range value. The resulting intensity image is then used in a frequency-based cross-correlation image alignment method, following the technique proposed by ref. [41]. The estimated subpixel shift is then applied in the respective angular directions to the corresponding segment of the target raster.
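The core of the alignment step can be sketched with a basic FFT cross-correlation. This simplified version estimates integer-pixel shifts only, whereas ref. [41] refines the correlation peak to subpixel accuracy; the function name and convention are illustrative:

```python
import numpy as np

def integer_shift(ref, tgt):
    """Estimate the translation between two intensity images via the peak of
    their FFT-based circular cross-correlation. Returns the (row, col) shift
    to apply to tgt to align it with ref (integer pixels only)."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Peak positions beyond half the image size encode negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))
```

The estimated shift would then be converted to angular offsets (via the grid step Δφ) and applied to the target segment.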
(ii) Edge removal Even when the two point clouds are well-aligned, the range differences at the surface edge tend to be significantly larger than the overall range differences in D. This is likely due to the effect of mixed pixels at surface edges, which commonly result in inaccurate range values. Consequently, we propose removing edge points from the range raster Mref and Mtarget.
We remove the edge points by first generating a binary edge mask from the range raster Mref, as we define the edges as regions characterized by rapid changes in local surface topography. To extract these changes, we first subtract the 2D linear trend from raster Mref. Next, we compute the local gradients and convert them into local slopes following the approach described by ref. [42]. An edge mask is then generated by thresholding the slopes (we chose a threshold of 75°), classifying all values above this threshold as edges. To refine the mask, a morphological closing operation is applied, involving dilation followed by erosion of the binary edge mask raster. Finally, the edge mask is applied to the range rasters Mref and Mtarget to exclude edge-influenced pixels. Since this step removes high-gradient (edge) pixels regardless of their LoS value, any genuine displacement in the non-edged (flat) areas is untouched.
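The edge-mask extraction of step (ii) can be sketched as follows, assuming the range raster is a NumPy array; the 75° threshold and the dilation-then-erosion closing follow the description above, while `cell_size` is a hypothetical stand-in for the raster resolution:

```python
import numpy as np
from scipy import ndimage

def edge_mask_from_range(M_ref, cell_size=1.0, slope_threshold_deg=75.0):
    """Binary edge mask from a range raster: detrend with a 2D plane,
    convert local gradients to slope angles, threshold, and apply a
    morphological closing (dilation followed by erosion)."""
    rows, cols = np.indices(M_ref.shape)
    # Subtract the 2D linear trend (least-squares plane fit).
    A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(M_ref.size)])
    coeffs, *_ = np.linalg.lstsq(A, M_ref.ravel(), rcond=None)
    detrended = M_ref - (A @ coeffs).reshape(M_ref.shape)
    # Local gradients converted to slope angles in degrees.
    gy, gx = np.gradient(detrended, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    mask = slope_deg > slope_threshold_deg
    # Closing fills small gaps so edge regions are removed as a whole.
    return ndimage.binary_closing(mask, structure=np.ones((3, 3)))

# Synthetic check: a vertical step produces an edge at the step columns.
M = np.zeros((20, 20)); M[:, 10:] = 10.0
mask = edge_mask_from_range(M)
```

The resulting mask is then used to exclude edge pixels from both range rasters, e.g. `np.where(mask, np.nan, M_ref)`.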
(iii) Planar trend removal We also apply a planar trend removal to the range differences D. This process involves estimating a 2D trend by fitting a planar surface to the range differences. The corresponding value of the fitted surface is then interpolated for each Di,j and subtracted from the range differences, effectively removing the surface trend.
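Step (iii) amounts to a least-squares plane fit and subtraction; a minimal sketch, tolerating NaN pixels left over from the edge removal:

```python
import numpy as np

def remove_planar_trend(D):
    """Fit a plane a*i + b*j + c to the range differences D (NaNs allowed
    for removed edge pixels) and subtract it, removing trend and bias."""
    i, j = np.indices(D.shape)
    valid = ~np.isnan(D)
    A = np.column_stack([i[valid], j[valid], np.ones(valid.sum())])
    coeffs, *_ = np.linalg.lstsq(A, D[valid], rcond=None)
    plane = coeffs[0] * i + coeffs[1] * j + coeffs[2]
    return D - plane

# Synthetic check: a pure planar trend plus bias is removed completely.
i, j = np.indices((15, 15))
D_demo = 0.02 * i - 0.01 * j + 0.03
residual = remove_planar_trend(D_demo)  # ~0 everywhere
```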
The described preprocessing steps should ultimately reduce the uncertainty σ d and simultaneously improve the representativeness of the related MDB estimate. We demonstrate the impact of these steps on segment A of the El Capitan dataset. Their effect on the values of the range difference raster D is shown in Figure 6, while Figure 7 highlights the corresponding uncertainty estimates of the aggregated displacement σ d .

Results of the preprocessing steps applied to improve the bias and variability of the raw range differences (a). In the first step, the target dataset is aligned by intensity-based image correlation using the intensity map (c) from the TLS, resulting in range differences (b) with significantly reduced bias and variability. In the second step, edge artifacts identified from a local slope map computed from the range map (f) are removed, eliminating large outliers caused by mixed pixels (d). The final step of removing a planar trend (e) reduces any remaining bias and further reduces the variability.

Uncertainty estimates using ELSA with different preprocessing steps applied for segment A of the El Capitan dataset. The standard error (of the mean) is shown for comparison, together with the results of the proposed data-driven approach after each additional preprocessing step (intensity-based image correlation alignment, removal of edge effects, and planar trend removal) applied to further reduce the variability.
The raw range differences without any preprocessing (Figure 6a) exhibit a strong bias, with a mean of μ = −0.034 m. The standard deviation of the displacement is σ = 0.073 m. The application of the first preprocessing step (i) results in a range difference raster D, as shown in Figure 6b. This step corresponds to the lateral adjustment of the point cloud segment and significantly reduces both bias and variability.
The next preprocessing step applies the edge removal. Using the extracted edge mask, we eliminate all pixels from the range difference raster D where the corresponding slopes exceed the threshold. As shown in Figure 6d, this step removes areas with larger range differences. The effect on the σ d in Figure 7 is, therefore, most evident for cases with no aggregation or very small aggregation sizes.
The final preprocessing step is the planar trend removal, with the results displayed in Figure 6e. This step eliminates any remaining planar trend and bias, further reducing uncertainty. The cause of this trend could, for example, be residual effects from registration in the line-of-sight (LoS) direction, which were not specifically accounted for in the lateral alignment of the first preprocessing step. Although the observed changes in this case were minor, they are still reflected in the slight reduction of σ d in Figure 7. It should be noted that the application of this preprocessing step could partially absorb and remove the impact of deformations on the computed displacement values in the LoS direction, corrupting the implemented monitoring workflow. Hence, this last preprocessing step should only be applied in the cases for which it is known that the expected area under deformation is small compared to the chosen point cloud segment sizes.
Examining the resulting σ d in Figure 7, we can assess the uncertainty before and after each preprocessing step and the corresponding effect when the displacement estimates are averaged. The corresponding uncertainties for the tile size t = 20 are listed in Table 2, along with the respective MDB computed by Eq. (2) with α = 5 % and β = 20 %.
Estimated uncertainty σ d and the corresponding Minimal Detectable Bias (MDB) using ELSA for different preprocessing steps applied to the raw data (tile size t = 20, α = 0.05, β = 0.2). The last row reports the standard error (SE) estimate and corresponding MDB for comparison.
| Applied preprocessing | σ d [m] | MDB [m] |
|---|---|---|
| No preprocessing | 0.026 | 0.073 |
| Alignment (Al) | 0.008 | 0.022 |
| Al + Edge Removed (ER) | 0.006 | 0.017 |
| Al + ER + Plane Removed (PR) | 0.005 | 0.014 |
| SE | 0.004 | 0.011 |
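The MDB values in Table 2 can be reproduced from σ d if Eq. (2) takes the usual two-sided form MDB = (z_{1−α/2} + z_{1−β}) · σ d — an assumption on our part, but one that matches the tabulated values to rounding:

```python
from scipy.stats import norm

def mdb(sigma_d, alpha=0.05, beta=0.2):
    """Minimal Detectable Bias for a two-sided test with type I error
    probability alpha and type II error probability beta (assumed
    Baarda-type form of Eq. (2); reproduces Table 2 to rounding)."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)) * sigma_d

print(round(mdb(0.026), 3))  # → 0.073 (no preprocessing)
print(round(mdb(0.005), 3))  # → 0.014 (all preprocessing steps)
```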
The estimated averaged displacement uncertainty without preprocessing is σd,w/o = 0.026 m, while with all preprocessing applied it is reduced to σd,w/ = 0.005 m. Consequently, the MDBs calculated by Eq. (2) are MDBw/o = 0.073 m and, with all preprocessing steps applied, MDBw/ = 0.014 m, resulting in an 80 % reduction of uncertainty and a corresponding increase in sensitivity. The most significant improvement is achieved by the image-correlation-based alignment, accounting for nearly 70 % of the total reduction with all preprocessing steps applied.
The investigated preprocessing steps demonstrated their potential for significantly improving the monitoring sensitivity. Based on this preliminary evidence, we have a first indication that the achievable sensitivity of detecting LoS displacements with long-range TLS in geomonitoring can approach approximately 1 cm. This finding will be further explored in the following section using simulated displacements for this specific point cloud segment, while future studies will need to investigate the generalizability of these indications.
4.4 Significant displacement detection using critical value based on ELSA
After assessing the proposed method to estimate σ d by ELSA, we now evaluate how these uncertainties affect displacement classification. For this purpose, we compare classification results based on σ d obtained using the standard error (SE), our ELSA approach, and ELSA with additional preprocessing steps. We simulate three displacement scenarios with increasing magnitudes (0.01 m, 0.02 m, and 0.1 m) and add these to the real displacement estimates of segment A of the El Capitan dataset.
Figure 8 summarizes the results, where each row corresponds to one of the three displacement magnitudes, and each column shows the classification outcome for each method used to estimate σ d . The red dashed line outlines the patch in which the (true) displacement was introduced. Pixels classified as not showing significant displacement are shown in grey, whereas pixels classified as significantly displaced are shown in blue.

Classification results of the displacements based on the uncertainty estimate based on three methods (columns) across three simulated displacement sizes (rows). The first column represents the classification based on the estimation computed by the standard error (SE), the second is our method ELSA, and the third is ELSA with the additional preprocessing steps. The outline of the simulated displacement patch is indicated by the red dashed line.
First, we examine the case where the uncertainty is estimated using the SE approach. Using the standard deviation derived from the SE method for significance testing leads to nearly the entire patch being classified as significantly deformed across all simulated displacements, resulting in a large number of false positives. Overall, these results demonstrate that using the SE to estimate uncertainty leads to poorer classification performance due to its unrealistically small estimated measurement uncertainty σ d , which is in accordance with the data presented in Figures 4 and 5.
When the uncertainty σ d is estimated using ELSA without preprocessing (middle column), we observe a significant improvement in the classification results, with a substantial reduction of false positive classifications outside the simulated displacement area. The displacements of size 0.01 m and 0.02 m are not classified as significant, while the displaced patch of 0.1 m is classified as significant. This aligns well with the respective σ d and the resulting MDB for the case of no preprocessing in Table 2. The artifacts, appearing as falsely detected displacements outside the region of simulated deformations (consistent blue regions on the left-hand side), are caused by outliers, primarily located at the edges of the rock surface.
When we combine the uncertainty estimation of ELSA with the previously discussed preprocessing steps, we are capable of correctly classifying the larger displacements almost entirely, while substantially suppressing false positive detections.
To evaluate the classification performance, we computed standard performance metrics for binary classification – recall, precision, accuracy, and F1 score – based on the pixel-wise comparison with the simulated displacement ground truth [43]. The results are presented in Table 3. Recall measures the ability to classify actual displacements correctly; a higher recall corresponds to fewer missed detections. When the classification is based on the uncertainty estimated with the standard error (SE) or with our method (ELSA), both can exhibit good recall. However, relying solely on the recall of SE is misleading, as the classification based on SE also labels a large portion of non-displaced areas as significantly displaced, as previously observed. The precision metric reflects the proportion of correctly detected displacements among all locations classified as significantly displaced, with higher precision indicating fewer false positives. The results based on the SE method demonstrate very low precision (0.02–0.06), whereas only the ELSA method with preprocessing, for displacements of at least 1 cm, achieves substantially higher precision (0.39–0.74), reaching levels that may be practically useful. Accuracy provides an overall measure of correct classifications, where ELSA outperforms the classification based on the SE. To conclude, these results demonstrate that σ d computed using ELSA provides a more suitable basis for calculating the critical value and for the subsequent hypothesis testing aimed at detecting significant deformations within the monitored scene.
Performance metrics for the classification of significant displacement for three displacement sizes (0.01, 0.02, and 0.1 m), based on the method used to compute the uncertainty for the classification.
| Displ. size [m] | Method | Recall | Precision | Accuracy | F1 score |
|---|---|---|---|---|---|
| 0.01 | SE | 0.59 | 0.04 | 0.13 | 0.07 |
| | ELSA w/o preproc. | 0.00 | 0.00 | 0.77 | 0.00 |
| | ELSA w/ preproc. | 0.79 | 0.74 | 0.97 | 0.76 |
| 0.02 | SE | 0.26 | 0.02 | 0.11 | 0.03 |
| | ELSA w/o preproc. | 0.00 | 0.00 | 0.77 | 0.00 |
| | ELSA w/ preproc. | 0.99 | 0.74 | 0.98 | 0.85 |
| 0.1 | SE | 0.99 | 0.06 | 0.16 | 0.12 |
| | ELSA w/o preproc. | 0.81 | 0.22 | 0.82 | 0.34 |
| | ELSA w/ preproc. | 1.00 | 0.63 | 0.97 | 0.78 |
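The metrics in Table 3 follow directly from the pixel-wise confusion counts; a minimal sketch comparing a binary classification raster against the simulated ground-truth patch:

```python
import numpy as np

def classification_metrics(predicted, truth):
    """Pixel-wise recall, precision, accuracy, and F1 score for binary
    displacement classification against simulated ground truth."""
    predicted = np.asarray(predicted, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(predicted & truth)    # correctly detected displacement
    fp = np.sum(predicted & ~truth)   # false alarm
    fn = np.sum(~predicted & truth)   # missed detection
    tn = np.sum(~predicted & ~truth)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    accuracy = (tp + tn) / predicted.size
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, accuracy, f1

# Synthetic check: perfect detection of a 3x3 patch plus one false positive.
truth = np.zeros((10, 10), dtype=bool); truth[2:5, 2:5] = True
pred = truth.copy(); pred[0, 0] = True
recall, precision, accuracy, f1 = classification_metrics(pred, truth)
```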
4.5 Spatial assessment of MDB using ELSA
Expanding on the previous evaluation, we aim to investigate the spatial extent of the classified displacements and to gain a further indication of the achievable sensitivity of LoS displacement estimates using the proposed workflow.
To achieve this, we use the same dataset and setup as in Section 4.4. We iteratively simulate displacements of the predefined patch (outlined in Figure 8) with magnitudes increasing from 0.001 m to 0.1 m in steps of 0.001 m. After each iteration, we classify each pixel as significantly deformed or not based on its displacement. Then, we check whether any pixel within the patch is newly correctly classified as significantly displaced compared to the previous iteration. If so, the respective magnitude of the simulated displacement is stored for that pixel location. Hence, in the end, we obtain a heat map-type visualization of the smallest displacement detected as significant at each pixel location, which indicates the smallest deformation that we could have detected given a critical value (α of 5 %) as a threshold. Additionally, we evaluate the case without any simulated displacement to identify pixels that are incorrectly classified as significantly displaced, labeling them as outliers.
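The iterative procedure can be sketched as follows, assuming a per-pixel displacement raster, a boolean patch mask, and a critical value derived from σ d (all input names are illustrative):

```python
import numpy as np

def min_detectable_map(displacement, patch, critical_value,
                       magnitudes=np.arange(0.001, 0.101, 0.001)):
    """Per-pixel smallest simulated displacement classified as significant.
    `displacement` is the estimated LoS displacement raster, `patch` the
    boolean mask where displacement is simulated, `critical_value` the
    hypothesis-test threshold derived from sigma_d."""
    # Pixels already significant with no simulated displacement: outliers.
    outliers = np.abs(displacement) > critical_value
    min_mag = np.full(displacement.shape, np.nan)
    for m in magnitudes:
        significant = np.abs(displacement + m * patch) > critical_value
        # Record the magnitude for patch pixels detected for the first time.
        newly = significant & patch & ~outliers & np.isnan(min_mag)
        min_mag[newly] = m
    return min_mag, outliers

# Synthetic check: one outlier pixel and a 2x2 patch on a quiet raster.
disp = np.zeros((5, 5)); disp[0, 0] = 0.2
patch = np.zeros((5, 5), dtype=bool); patch[2:4, 2:4] = True
min_mag, outliers = min_detectable_map(disp, patch, critical_value=0.0505)
```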
The results of this evaluation are presented in Figure 9, where each pixel is colored according to the smallest displacement magnitude that leads to its classification as significantly deformed, given it is not an outlier (grey). Again, the results are presented for three cases: the classification using the critical value based on SE and based on ELSA, with and without preprocessing. The results for the SE reveal a high rate of outliers, with nearly the entire patch being classified as significantly displaced even in the absence of any simulated displacement. This again confirms that the critical value based on SE is overly optimistic (too small) and can lead to a large type I error rate. In contrast, our method starts classifying displacements in the range of approximately 0.06 m–0.08 m as significant, which is in accordance with the results presented in the previous section for the case of analyzing the displacement estimates without preprocessing. When the additional preprocessing steps are applied, the results show a further improvement in sensitivity, with significant displacements being detected at much smaller magnitudes, ranging between 0.01 m and 0.02 m. This highlights the substantial enhancement in the classification of small-scale displacements achieved by preprocessing a local point cloud segment using the proposed workflow. Finally, this indicates that centimeter-level displacement detection is achievable for long-range TLS geomonitoring in the case of LoS displacement monitoring using the proposed monitoring setup and workflow.

Results of the classification using uncertainty estimates by (a) the standard error, (b) our data-driven ELSA (tile size t = 20), and (c) ELSA with the additional preprocessing steps. Per pixel, the minimal simulated displacement that led to classification as significantly displaced is shown. Grey marks areas that were already classified as significantly displaced when no deformation was simulated. The red dashed line outlines the simulated displacement patch.
5 Discussion
This work set out to improve the sensitivity analysis of TLS-based deformation monitoring by replacing the classic LoD formulation with an empirical, locally adaptive estimate of the displacement uncertainty σ d .
One of the major limitations of the standard LoD (Eq. (1)) is its use of a single, scene-wide registration term σreg. Introducing a spatially varying σreg(x) for each location x would, in principle, eliminate this shortcoming and could yield LoD values comparable to those produced by ELSA. In that sense, ELSA could also be interpreted as a practical, data-driven method to estimate this σreg(x), assuming no other additional effect influences the displacement estimates. Developing a fully analytical LoD with spatially varying σreg(x) remains a task for future work.
Although the experiments relied on LoS range differences, ELSA is agnostic to the choice of displacement estimator. For the example of M3C2, the magnitude of the displacement vector can be treated as the averaged displacement estimate, and neighboring cylinders within a predefined segment can play the role of the tiles used by ELSA. Computing the empirical variance across cylinders in a predefined local segment of the point cloud yields a locally specific σ d that feeds into the same MDB expression (Eq. (2)). ELSA can also support the selection of an optimal M3C2 cylinder diameter by identifying the point at which further averaging no longer reduces σ d and instead only diminishes spatial resolution, as seen in Figures 4 and 5.
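The core ELSA estimate itself can be sketched for the rasterized LoS case: split the segment's range-difference raster into t × t tiles, average each tile, and take the empirical standard deviation across the tile means. This is a simplified reading of the method as used here; the paper's exact sampling scheme may differ in detail:

```python
import numpy as np

def elsa_sigma_d(D, t=20):
    """Empirical uncertainty of the tile-averaged displacement: split the
    range-difference raster D into t x t tiles, average each tile, and
    take the standard deviation across tile means (sketch of ELSA)."""
    rows, cols = (D.shape[0] // t) * t, (D.shape[1] // t) * t
    tiles = D[:rows, :cols].reshape(rows // t, t, cols // t, t)
    tile_means = np.nanmean(tiles, axis=(1, 3))  # one mean per tile
    return float(np.nanstd(tile_means, ddof=1))

# Synthetic check on a 40x40 raster with tile means [1, 1, 0, 0]:
D = np.zeros((40, 40)); D[:20, :] = 1.0
sigma_d = elsa_sigma_d(D, t=20)  # sample std of [1, 1, 0, 0]
```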
ELSA requires no prior knowledge of instrument noise, surface roughness, or atmospheric effects; all such influences are folded into the local variance. The price for this generality is that individual contributions cannot be isolated. If the individual contributing factors are to be studied, error propagation approaches, as proposed by ref. [44], are still required.
6 Conclusions
In this study, we propose a new method for quantifying the uncertainty of spatially aggregated (averaged) TLS-based displacement estimates in geomonitoring by empirically locally sampling such aggregated values, hence termed Empirical Local Sampling of Aggregated values (ELSA). This method is directly based on empirical analysis of on-site obtained and averaged displacement estimates and is therefore adaptable to various scenes, processing pipelines, and displacement estimation algorithms, offering a versatile approach for estimating the uncertainty and subsequent sensitivity.
A key distinction of our approach is its ability to account for effects that systematically impact neighboring displacement estimates at lower spatial frequencies and cause spatial correlations, which are not considered in traditional uncertainty estimates such as the LoD95 that assume uncorrelated measurements. We validated the correctness of the uncertainty estimates based on ELSA, and the impact of using such estimates for the computation of the Minimal Detectable Bias (MDB) as a measure of achievable deformation monitoring sensitivity, through simulation.
We demonstrated the efficacy of ELSA on two distinct datasets. We used a specific displacement estimation workflow deriving LoS displacement estimates from locally averaged differences of rasterized and aligned range measurements after point cloud registration. We further improved both our uncertainty estimates and the achievable sensitivity by reducing the displacement estimate uncertainty through workflow-specific preprocessing steps applied locally to point cloud segments. These steps notably improved sensitivity, reducing the uncertainty of the LoS displacement estimates by up to 80 %. The MDB values calculated based on ELSA for the TLS point cloud segments subjected to these preprocessing steps hint towards achievable cm-level sensitivity in long-range geomonitoring. However, assessing the generalizability of these indications across different monitored scenes, instruments, and data processing pipelines necessitates further investigation.
Our future research will focus on extending the method to multi-viewpoint setups and different displacement estimation algorithms, especially exploring its application to the estimation of dense 3D displacement vector fields, like the ones provided by the F2S3 method [5], as well as on extending the proposed approach to also consider the temporal dimension.
Acknowledgments
The authors thank Skye Corbett, Brian Collins, and Autumn Helfrich for providing the ground-based lidar data set of El Capitan cliff in Yosemite National Park which was utilized in this study. These data are available at https://doi.org/10.5066/P1YT5RK3. We thank the Swiss Academy of Sciences for financially supporting this research through the Swiss Geodetic Commission, and Helena Laasch for her discussion and input on the manuscript.
- Research ethics: Not applicable.
- Informed consent: Not applicable.
- Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
- Use of Large Language Models, AI and Machine Learning Tools: Improve readability and language.
- Conflict of interest: All other authors state no conflict of interest.
- Research funding: None declared.
- Data availability: The data that support the findings of this study are openly available in U.S. Geological Survey Data Release at https://doi.org/10.5066/P1YT5RK3.
References
1. Jaboyedoff, M, Oppikofer, T, Abellán, A, Derron, MH, Loye, A, Metzger, R, et al. Use of LIDAR in landslide investigations: a review. Nat Hazards 2012;61:5–28. https://doi.org/10.1007/s11069-010-9634-2.
2. Abellán, A, Calvet, J, Vilaplana, JM, Blanchard, J. Detection and spatial prediction of rockfalls by means of terrestrial laser scanner monitoring. Geomorphology 2010;119:162–71. https://doi.org/10.1016/j.geomorph.2010.03.016.
3. Bremer, M, Sass, O. Combining airborne and terrestrial laser scanning for quantifying erosion and deposition by a debris flow event. Geomorphology 2012;138:49–60. https://doi.org/10.1016/j.geomorph.2011.08.024.
4. Lague, D, Brodu, N, Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: application to the Rangitikei canyon (N-Z). ISPRS J Photogrammetry Remote Sens 2013;82:10–26. https://doi.org/10.1016/j.isprsjprs.2013.04.009.
5. Gojcic, Z, Zhou, C, Wieser, A. F2S3: robustified determination of 3D displacement vector fields using deep learning. J Appl Geodesy 2020;14:177–89. https://doi.org/10.1515/jag-2019-0044.
6. Kromer, RA, Jean Hutchinson, D, Lato, MJ, Gauthier, D, Edwards, T. Identifying rock slope failure precursors using LiDAR for transportation corridor hazard management. Eng Geol 2015;195:93–103. https://doi.org/10.1016/j.enggeo.2015.05.012.
7. Kuschnerus, M, Lindenbergh, R, Lodder, Q, Brand, E, Vos, S. Detecting anthropogenic volume changes in cross sections of a sandy beach with permanent laser scanning. Int Arch Photogram Rem Sens Spatial Inf Sci 2022;XLIII-B2-2022:1055–61. https://doi.org/10.5194/isprs-archives-XLIII-B2-2022-1055-2022.
8. Even-Tzur, G. More on sensitivity of a geodetic monitoring network. J Appl Geodesy 2010;4. https://doi.org/10.1515/jag.2010.006.
9. Heck, B. Statistische Ausreisserkriterien zur Kontrolle geodätischer Beobachtungen. In: Conzett, R, Matthias, H, Schmid, H, editors. Ingenieurvermessung ‘80. Beiträge zum VIII. Int. Kurs für Ingenieurvermessung, Beitrag B 10. Dümmler; 1980.
10. Niemeier, W. Principal component analysis and geodetic networks – some basic considerations. In: Borre, K, Welsch, WM, editors. Survey control networks. Proceedings of the meeting of study group 5B. Denmark: Aalborg University Centre; 1982. International Federation of Surveyors (FIG).
11. Niemeier, W. Deformationsanalyse. In: Pelzer, H, editor. Geodätische Netze in Landes- und Ingenieurvermessung II. Stuttgart: Konrad Wittwer; 1985:559–623 pp.
12. Pelzer, H. Zur Analyse geodätischer Deformationsmessungen. Dtsch Geodatische Komm Bayer Akad Wiss C 1971;164.
13. Brasington, J, Rumsby, BT, McVey, RA. Monitoring and modelling morphological change in a braided gravel-bed river using high resolution GPS-based survey. Earth Surf Process Landf 2000;25:973–90. https://doi.org/10.1002/1096-9837(200008)25:9<973::AID-ESP111>3.0.CO;2-Y.
14. Lane, SN, Westaway, RM, Murray Hicks, D. Estimation of erosion and deposition volumes in a large, gravel-bed, braided river using synoptic remote sensing. Earth Surf Process Landf 2003;28:249–71. https://doi.org/10.1002/esp.483.
15. Abellán, A, Jaboyedoff, M, Oppikofer, T, Vilaplana, JM. Detection of millimetric deformation using a terrestrial laser scanner: experiment and application to a rockfall event. Nat Hazards Earth Syst Sci 2009;9:365–72. https://doi.org/10.5194/nhess-9-365-2009.
16. Kromer, R, Abellán, A, Hutchinson, D, Lato, M, Edwards, T, Jaboyedoff, M. A 4D filtering and calibration technique for small-scale point cloud change detection with a terrestrial laser scanner. Remote Sens 2015;7:13029–52. https://doi.org/10.3390/rs71013029.
17. Jurek, T, Kuhlmann, H, Holst, C. Impact of spatial correlations on the surface estimation based on terrestrial laser scanning. J Appl Geodesy 2017;11:143–55. https://doi.org/10.1515/jag-2017-0006.
18. Schmitz, B, Kuhlmann, H, Holst, C. Investigating the resolution capability of terrestrial laser scanners and its impact on the effective number of measurements. ISPRS J Photogrammetry Remote Sens 2020;159:41–52. https://doi.org/10.1016/j.isprsjprs.2019.11.002.
19. Schmitz, B, Kuhlmann, H, Holst, C. Towards the empirical determination of correlations in terrestrial laser scanner range observations and the comparison of the correlation structure of different scanners. ISPRS J Photogrammetry Remote Sens 2021;182:228–41. https://doi.org/10.1016/j.isprsjprs.2021.10.012.
20. Harmening, C, Neuner, H. Using structural risk minimization to determine the optimal complexity of b-spline surfaces for modelling correlated point cloud data. In: Novák, P, Crespi, M, Sneeuw, N, Sansò, F, editors. IX Hotine-Marussi symposium on mathematical geodesy. Cham: Springer International Publishing; 2021:165–74 pp. https://doi.org/10.1007/1345_2019_88.
21. Holst, C, Artz, T, Kuhlmann, H. Biased and unbiased estimates based on laser scans of surfaces with unknown deformations. J Appl Geodesy 2014;8. https://doi.org/10.1515/jag-2014-0006.
22. Kermarrec, G, Paffenholz, JA, Alkhatib, H. How significant are differences obtained by neglecting correlations when testing for deformation: a real case study using bootstrapping with terrestrial laser scanner observations approximated by B-spline surfaces. Sensors 2019;19:3640. https://doi.org/10.3390/s19173640.
23. Kermarrec, G, Kargoll, B, Alkhatib, H. Deformation analysis using B-spline surface with correlated terrestrial laser scanner observations – a bridge under load. Remote Sens 2020;12:829. https://doi.org/10.3390/rs12050829.
24. Zhao, X, Kermarrec, G, Kargoll, B, Alkhatib, H, Neumann, I. Influence of the simplified stochastic model of TLS measurements on geometry-based deformation analysis. J Appl Geodesy 2019;13:199–214. https://doi.org/10.1515/jag-2019-0002.
25. Friedli, E. Point cloud registration and mitigation of refraction effects for geomonitoring using long-range terrestrial laser scanning [Ph.D. thesis]. ETH Zurich; 2020.
26. Hogg, RV, McKean, JW, Craig, AT. Introduction to mathematical statistics, 8th ed. Boston: Pearson; 2019.
27. Yang, Y, Schwieger, V. Patch-based M3C2: towards lower-uncertainty and higher-resolution deformation analysis of 3D point clouds. Int J Appl Earth Obs Geoinf 2023;125:103535. https://doi.org/10.1016/j.jag.2023.103535.
28. Zahs, V, Winiwarter, L, Anders, K, Williams, JG, Rutzinger, M, Höfle, B. Correspondence-driven plane-based M3C2 for lower uncertainty in 3D topographic change quantification. ISPRS J Photogrammetry Remote Sens 2022;183:541–59. https://doi.org/10.1016/j.isprsjprs.2021.11.018.
29. Kuschnerus, M, Lindenbergh, R, Vos, S, Hanssen, R. Statistically assessing vertical change on a sandy beach from permanent laser scanning time series. ISPRS Open J Photogramm Remote Sens 2024;11:100055. https://doi.org/10.1016/j.ophoto.2023.100055.
30. Baarda, W. A testing procedure for use in geodetic networks, volume 2 of Publications on Geodesy. Delft, NL: Netherlands Geodetic Commission; 1968. https://doi.org/10.54419/t8w4sg.
31. Imparato, D. GNSS-based receiver autonomous integrity monitoring for aircraft navigation [Ph.D. thesis]. Delft University of Technology; 2016.
32. Lehmann, R. Improved critical values for extreme normalized and studentized residuals in Gauss–Markov models. J Geod 2012;86:1137–46. https://doi.org/10.1007/s00190-012-0569-0.
33. Teunissen, PJG. Testing theory, 3rd ed. Delft: TU Delft OPEN Publishing; 2024.
34. Francisco Rofatto, V, Matsuoka, MT, Klein, I, Veronez, MR, Bonimani, ML, Lehmann, R. A half-century of Baarda’s concept of reliability: a review, new perspectives, and applications. Surv Rev 2020;52:261–77. https://doi.org/10.1080/00396265.2018.1548118.
35. Hassouna, A. Sample size calculation. Cham: Springer International Publishing; 2023:339–419 pp. https://doi.org/10.1007/978-3-031-20758-7_4.
36. Fan, L, Smethurst, JA, Atkinson, PM, Powrie, W. Error in target-based georeferencing and registration in terrestrial laser scanning. Comput Geosci 2015;83:54–64. https://doi.org/10.1016/j.cageo.2015.06.021.
37. Lin, Y, Wang, C, Zhai, D, Li, W, Li, J. Toward better boundary preserved supervoxel segmentation for 3D point clouds. ISPRS J Photogrammetry Remote Sens 2018;143:39–47. https://doi.org/10.1016/j.isprsjprs.2018.05.004.
38. Corbett, SC, Collins, BD, Helfrich, AL. Ground-based lidar data of the southeast face of El Capitan. California: Yosemite National Park; 2025.
39. Schmid, L, Medic, T, Frey, O, Wieser, A. Target-based georeferencing of terrestrial radar images using TLS point clouds and multi-modal corner reflectors in geomonitoring applications. ISPRS Open J Photogramm Remote Sens 2024;13:100074. https://doi.org/10.1016/j.ophoto.2024.100074.
40. Friedli, E, Presl, R, Wieser, A. Influence of atmospheric refraction on terrestrial laser scanning at long range. In: 4th joint international symposium on deformation monitoring (JISDM), 15–17 May 2019. Athens, Greece; 2019.
41. Guizar-Sicairos, M, Thurman, ST, Fienup, JR. Efficient subpixel image registration algorithms. Opt Lett 2008;33:156. https://doi.org/10.1364/OL.33.000156.
42. Tang, J, Pilesjö, P. Estimating slope from raster data: a test of eight different algorithms in flat, undulating and steep terrain. In: River basin management 2011. Riverside, California, USA; 2011:143–54 pp. https://doi.org/10.2495/RM110131.
43. Bishop, CM. Pattern recognition and machine learning. Information science and statistics. New York: Springer; 2006.
44. Winiwarter, L, Anders, K, Höfle, B. M3C2-EP: pushing the limits of 3D topographic point cloud change detection by error propagation. ISPRS J Photogrammetry Remote Sens 2021;178:240–58. https://doi.org/10.1016/j.isprsjprs.2021.06.011.
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.