Abstract
The results of deformation analysis are crucial for the safety of engineering infrastructure and people’s lives. A key element of this analysis is the identification of stable points of the monitoring network, which constitute the reference for the subsequently calculated displacements. This paper proposes a new method of stability analysis aimed at identifying stable groups of reference points in a displacement monitoring network. It is based on coordinate transformation and involves searching for the optimal set of transformation parameters, identified with the optimal point in the space of transformation parameters. A hybrid algorithm, which combines two optimization algorithms – Hooke–Jeeves and Simulated Annealing – is used to search for the solution. Two variants of the objective function were tested as elements of the algorithm. Multiple solutions (groups of congruent reference points) can be detected if they exist. A simulated 2D network with two congruent point groups was used as an example to illustrate the performance of the proposed algorithm. The proposed hybrid algorithm appeared to outperform the individual Hooke–Jeeves and Simulated Annealing algorithms.
1 Introduction
The Earth’s surface deforms for many reasons, natural or induced by human activity. Whatever the cause, the changes can affect technical infrastructure and cultural heritage or, in some cases, pose a danger to human life. These are the reasons for monitoring deformations: to evaluate the safety level and predict potential hazards. Water dams [1], [2], [3], bridges [4], [5], industrial structures [6], tall buildings [7], and cultural heritage sites [8], [9] are the most common objects of deformation monitoring. Deformation monitoring can also be applied to better understand natural processes like Earth’s crust movements [10], landslides [11], or glacier changes [12], especially when connected with the safety of engineering structures [13], [14].
A wide variety of measuring tools and methods can be applied for deformation monitoring. Some of them are commonly classified as geotechnical (structural) and include sensors such as accelerometers, extensometers, inclinometers, strainmeters, and others [15], [16]. These tools register physical quantities resulting from deformations. They usually provide information about local changes and are often applied in automatic control systems monitoring the object’s health in real time.
However, there are situations where absolute displacements are necessary. In such cases, geodetic methods can be applied. Geodetic measurements allow for the registration of the geometrical state of the measuring network, which consists of control points on the object and reference points that are expected to be stable. Measurement of the network is repeated periodically. The measurement of the network completed at a specific time is referred to as the measurement epoch. The object deformations can be obtained by comparison of two or more epochs. The displacements of the object points can be considered absolute only if a part of the reference points remains stable between the two analysed epochs.
Although the network design assumes that the locations of the reference points are free of deformations, this assumption is often verified negatively in real situations. Such a situation is more likely when a long period is considered. Therefore, verifying the reference points’ stability is a crucial step preceding the calculation of displacements. Many methods of stability analysis have been developed [17]. Identification of the stable reference points can be conducted together with determining the displacements, e.g., [18], or separately, as a procedure preceding the calculation of displacements. Some identification algorithms use observation values of the analysed epochs [18], [19], [20], and others are based on coordinates, which usually come from an independent preliminary adjustment of individual epochs. In the latter case, the precision of the coordinates is usually available, and the coordinate transformation procedure is used for stability analysis [21], [22], [23].
Regardless of the applied identification algorithm, the problem is not simple, and the identification result can be equivocal. This may occur when the network covers a wide, particularly geologically differentiated, area [24], [25]. An additional source of difficulty may be the varied directional character of the displacement causes. As a result, more than one group of reference points can be identified, each mutually stable within the group but unstable with respect to the other groups. The commonly applied identification algorithms usually do not deal properly with such cases.
This article presents a new algorithm that is based on coordinate transformation and is able to detect various groups of mutually stable points. It is a hybrid algorithm, and it uses optimization methods to search for the solution. The proposed algorithm is described in Section 2.2.3. The key element that enables the application of optimization algorithms is the objective function. Two variants of the objective function are proposed in this article. Both are described in Section 2.1, and both are based on weight functions originally applied in robust estimation.
The proposed algorithm is applicable to vertical (1D), horizontal (2D), or spatial (3D) networks, but since vertical (1D) displacement measurements are comparatively simple, its application is most reasonable for 2D or 3D networks.
The detailed description of the proposed algorithm is presented in Section 2. Section 3 shows the results of the numerical test for the simulated 2D object, which was conducted to verify the algorithm’s effectiveness. The discussion of the results and the conclusions are presented in Section 4.
2 Algorithm of stability analysis
The measurement results of each epoch are the subject of a preliminary adjustment. This process verifies the consistency of the internal measurement data and delivers the coordinates of the object points along with their accuracy assessment. A local network reference (e.g., a free-network condition) is usually applied in the preliminary adjustment.
The results of the preliminary adjustment can be used for the deformation analysis. In this approach, the coordinate differences of the control points represent their displacements. This is true only when the coordinates of the two analysed epochs refer to the same stable reference system. It can be achieved by coordinate transformation using stable reference points as the transformation tie points. Coordinate transformation can be defined by the following equation:

$$\mathbf{b}_i = \mathbf{R}\,\mathbf{a}_i + \mathbf{t} \tag{1}$$
where:

a_i – position vector of the point in the starting system of coordinates,

b_i – position vector of the point in the final system,

t – translation vector,

R – rotation matrix, applied only in 2D and 3D transformations.
Assuming that vector a_i in equation (1) corresponds with the coordinates of point i for the first epoch, and vector c_i with the coordinates of the same point for the second epoch, the residual vector r_i (2) can be determined for each point:

$$\mathbf{r}_i = \mathbf{R}\,\mathbf{a}_i + \mathbf{t} - \mathbf{c}_i \tag{2}$$
The residual vector r_i is an indicator of the control points’ stability. If there exists a group of control points that maintain mutual consistency in both considered epochs, it is possible to find such transformation parameters that the residuals r_i for these points will not exceed values resulting from the measurement precision. The final task of the reference system identification is to find such transformation parameters, represented by R and t, that the residual vectors for the most numerous group of potentially stable reference points will not exceed the assumed threshold:

$$\|\mathbf{r}_i\| \le \varepsilon_i \tag{3}$$
where ||r_i|| is the L2 (Euclidean) norm of the residual vector. As the point coordinates for both epochs come from the preliminary adjustment, the positional precision can be used to formulate the threshold value:

$$\varepsilon_i = m\,\sigma_{XY_i} \tag{4}$$

where σ_XY_i denotes the positional precision of point i combined from the adjustments of both epochs.
The factor m expresses the relation between the precision characteristics resulting from the preliminary adjustments of both epochs and the threshold connected with the expectation of the points’ stability. In the model case, when the group of stable points is not affected by any displacements, the value of m is often taken as 2; for example, with a positional precision of σ_XY = 5 mm, residual norms up to 10 mm would then still be considered consistent with stability. In practical cases, especially when the analysed epochs are separated by a long period of time or when the terrain is not stable over a large area, the stability criterion must, in addition to the network measurement precision, also take into account other influences, such as micro-movements of the control points. The value of m then has to be increased. Such a case is described by García-Asenjo et al. in [14].
Knowing the final transformation parameters, for which the residuals on the reference points are within the acceptable range, we can easily calculate the displacements of all control points as transformation residuals.
There are several ways to obtain the desired transformation parameters. The first one is the commonly known congruency test [26], [27], [28]. Other methods that can be listed are numerical control of object adherence [29], Iterative Weighted Similarity Transformation (IWST) [5], [30], [31], or a robust algorithm proposed by Hu et al. [32], using the Mahalanobis distance.
The method proposed in this article belongs to another category, which includes methods using optimization algorithms. They are based on the assumption that the transformation parameters create a certain space of solutions (search space). The dimension of such a space depends on the transformation type: one for 1D transformation, three for 2D transformation, or six for 3D transformation. Each point of this space corresponds to a set of transformation parameters and, consequently, to the set of residuals on all considered tie points. In [23], two optimization algorithms were proposed – Simulated Annealing and the Hooke–Jeeves algorithm. Both algorithms used a new objective function that considered the values of the transformation residuals and the number of congruent points.
Another issue that must be taken into account is the potential existence of various mutually stable groups of control points. It can appear when the network covers a large area with a differentiated geological structure. Such cases are the subject of [24] and are especially challenging for identification procedures. Selecting the dominating, most numerous group can be incorrect in specific situations. In the author’s opinion, only an expert can make the right decision, taking into account aspects beyond the geometry and precision of the network.
The desired identification algorithm should, therefore, detect all mutually stable subgroups, along with the transformation residuals for each subgroup. Such an approach was applied in [23] and is continued in this study.
2.1 Objective function
In the case of coordinate transformation, an objective function delivers a measure of the fit of the congruent points, which is represented by the transformation residuals. The definition of the objective function is a complex question. The measure of ‘fit’ must take into account not only the set of transformation residual values but also ensure that the residuals of the unstable points (the largest ones) do not affect the measure of fit of the congruent points.
In [23], the author proposed an objective function based on the L1 norm of the coordinate transformation residuals. If a group of congruent points is detected, some points with the maximal residuals are excluded from the L1 norm. The function appeared to be efficient in detecting groups of congruent points, but it has two flaws. Selecting the maximal residuals requires ordering the points by the length of the residual vector, which increases the numerical cost – especially when the function is evaluated many thousands of times, as is the case with optimization methods. Furthermore, the rejection of a large residual vector from the objective function causes a sudden change in the objective function value, which hinders the operation of some optimization procedures.
These disadvantages prompted new research aimed at finding an improved objective function formulation. Due to the specificity of the problem, functions applied in robust estimation were taken into consideration. Robust estimation is a method of network adjustment that assumes the existence of outlying observations. The estimation process is conducted iteratively, and the basic element of the adjustment procedure is a weight function that modifies the observations’ weights depending on their residuals. If an observation receives a large residual, its weight is consequently decreased, and it does not influence the estimation results in the next iteration. There are many robust estimation methods, such as Huber’s [33], Hampel’s [34], the Danish method [35], Kadaj’s [36], and many others. They differ in the weight function definition, the effectiveness of parameter estimation, and outlier elimination.
Applying the robust approach to deformation analysis is not new. Apart from observation adjustment [36], [37], [38], robust functions were used for establishing the deformation model [3], [39], [40].
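To illustrate the reweighting idea described above, the sketch below shows a minimal, generic iteratively reweighted least squares loop in Python, using the classical Huber weight as an example; the matrix names and the fixed iteration count are illustrative, not taken from any of the cited implementations.

```python
import numpy as np

def irls(A, y, f, iters=10):
    # Iteratively reweighted least squares: after each adjustment, every
    # observation's weight is modified according to its residual; here the
    # classical Huber weight is used (full weight up to f, then f/|v|).
    w = np.ones(len(y))
    for _ in range(iters):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted LS solution
        v = y - A @ x                                  # observation residuals
        av = np.abs(v)
        w = np.where(av <= f, 1.0, f / np.maximum(av, 1e-12))
    return x, v, w
```

An outlying observation accumulates a large residual, its weight shrinks, and after a few iterations it effectively stops influencing the estimated parameters.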
Two robust functions have been applied in this research. The first one is the classical Huber weight function. It can be expressed by formula (7):

$$w_H(x) = \begin{cases} 1, & |x| \le f \\ \dfrac{f}{|x|}, & |x| > f \end{cases} \tag{7}$$
where x is the residual, and f originally stands for an observation’s standard deviation; in this study, it is connected with the preliminary stability assessment. The Huber weight function is commonly used in many applications. It is quite simple, but it is composed of two elementary functions, which requires checking the argument value to choose the corresponding one.
Equation (8) describes the Kadaj weight function [36], where k_x originally denotes the standard deviation of x, and in this study, it is connected with a preliminary assessment of stability. Although the form of the function is more complex (it contains a power function), it is uniform over the whole range of argument values.
The objective function is expected to indicate the places in the search space that correspond to the objective, which, in most formulations, is associated with the minimal value of the function.
The selected robust weight functions originally have a maximum at x = 0. To compose an objective function whose minimum is associated with the objective, auxiliary functions H(x) and K(x) are formulated by reversing the original robust weight functions, as shown in equations (9) and (10):

$$H(x) = 1 - w_H(x) \tag{9}$$

$$K(x) = 1 - w_K(x) \tag{10}$$
The distribution of both auxiliary functions is shown in Figure 1.

Figure 1: Distribution of auxiliary functions for f = 1 and k_x = 2.
Assuming r_i is the transformation residual vector for point i, the corresponding objective function for the Huber weight function can be formulated as:

$$F_H = \sum_{i=1}^{n} H\left(\|\mathbf{r}_i\|\right) \tag{11}$$
The detailed analysis showed that the Kadaj function (10) reaches its asymptote very fast (already for x/k = 9), which makes the objective function nearly flat far from the optimum and gives the search procedure little guidance there. Therefore, a component based on the L2 norm of the residuals, scaled by a small factor c, is added to the objective function (12):

$$F_K = \sum_{i=1}^{n} K\left(\|\mathbf{r}_i\|\right) + c \sum_{i=1}^{n} \|\mathbf{r}_i\|^2 \tag{12}$$

The value of the c parameter is selected empirically. It should be as small as possible: large enough to distinguish various solutions far from the optimum, yet not disrupting the main objective function distribution.
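To make both variants concrete, the sketch below implements the auxiliary functions and the objective functions (11) and (12) in Python. The Huber weight follows the classical definition (7); since the exact form of the Kadaj weight (8) is not reproduced above, an exponential stand-in with a similar bell shape is used purely for illustration, and all function names are illustrative.

```python
import numpy as np

def w_huber(x, f):
    # Classical Huber weight (7): full weight up to f, then decaying as f/|x|.
    ax = np.abs(x)
    return np.where(ax <= f, 1.0, f / np.maximum(ax, 1e-12))

def w_kadaj_like(x, k):
    # Placeholder for the Kadaj weight (8): a smooth bell function that
    # quickly approaches its asymptote for large |x|/k -- illustration only.
    return np.exp(-(x / k) ** 2)

def F_H(residual_norms, f):
    # Objective (11): a congruent point (small ||r_i||) contributes ~0 and an
    # unstable point ~1, so F_H roughly counts the non-congruent points.
    return np.sum(1.0 - w_huber(residual_norms, f))

def F_K(residual_norms, k, c=1.0):
    # Objective (12): reversed weight plus a small squared-norm component
    # (c in 1/m^2) that keeps the function informative far from the optimum.
    return np.sum(1.0 - w_kadaj_like(residual_norms, k)) + c * np.sum(residual_norms ** 2)
```

For a candidate set of transformation parameters, residual_norms is obtained as np.linalg.norm(r, axis=1), where r holds one 2D residual vector (2) per tie point.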
2.2 Optimization methods
Optimization methods are applied in many areas of science and technology. They allow for finding the best solution when the problem has a mathematical model defined by a certain number of parameters. The parameters create the space of solutions. Each point of this space is described by the objective function value, indicating the level of reaching the objective. A wide variety of algorithms have been invented to look for the solution. Their effectiveness mainly depends on the number of parameters and the objective function distribution. The rapid development in computational technology spurred the development of optimization, resulting in new algorithms, often inspired by natural processes like evolutionary rules [41], [42] or the behavior of living organisms [43].
Considering the number of parameters, the coordinate transformation problem is not very complex. The maximum number of parameters, in the case of 3D transformation, does not exceed seven. This allows for the application of algorithms that lose their effectiveness when the number of parameters increases. The objective function distribution is difficult to predict: it can have one or more minima. This suggests choosing algorithms of a universal character.
Two algorithms were applied in [23] for the identification of stable points: the Hooke–Jeeves and Simulated Annealing algorithms. Although Simulated Annealing appeared to be more effective in solving the analysed problem, the Hooke–Jeeves algorithm also showed some advantages. Both algorithms are described in Sections 2.2.1 and 2.2.2. A combination of both algorithms, using the main advantages of each, is proposed in Section 2.2.3.
2.2.1 Hooke–Jeeves algorithm
The Hooke–Jeeves (HJ) local search algorithm, proposed in 1961 [44], is an efficient way of solving optimization problems. HJ is referred to as a pattern search algorithm, and it is fully deterministic. It is simple and relatively fast-converging. Because no objective function gradient is necessary for the algorithm to work, it is convenient for cases when the objective function has no analytical form and is obtained from empirical data.
The following parameters must be defined to apply the HJ method to a particular problem:
d – orthogonal basis of n linearly independent vectors,

τ_0 – initial length of the search step, dependent on the search space as well as the objective function distribution,

γ – search step decreasing factor in subsequent iterations,

τ_end – minimal step length constituting the criterion for ending the iterative process,

x_0 – the starting point of the procedure.
A more detailed description of the algorithm can be found in [23] or [45]. The HJ method is commonly used to solve optimization problems in various fields of engineering. Identification of the water dam parameters [46] or aircraft parameter estimation [47] can be given as examples. Applications for the problems of geodetic measurements are not numerous, but 3D transformation problems [48], identifying reference points in displacement monitoring networks [23], or second order design of geodetic networks [45] can be quoted as examples.
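A compact Python sketch of the HJ scheme with the parameters listed above is given below (the orthogonal basis d is taken as the unit coordinate basis). It is a simplified variant for illustration, not the exact implementation used in the study.

```python
import numpy as np

def explore(F, x, tau):
    # Exploratory move: probe +tau and -tau along each axis, keeping
    # any coordinate change that lowers the objective.
    x = x.copy()
    fx = F(x)
    for j in range(len(x)):
        for step in (tau[j], -2.0 * tau[j]):   # try x_j + tau, then x_j - tau
            x[j] += step
            f_try = F(x)
            if f_try < fx:
                fx = f_try
                break                          # keep the improving move
        else:
            x[j] += tau[j]                     # no improvement: restore x_j
    return x, fx

def hooke_jeeves(F, x0, tau0, gamma=0.5, tau_end=1e-6):
    # Pattern search: alternate exploratory and pattern (extrapolation)
    # moves; shrink the step by gamma whenever exploration fails.
    base = np.asarray(x0, dtype=float).copy()
    f_base = F(base)
    tau = np.asarray(tau0, dtype=float).copy()
    while tau.max() > tau_end:
        x_new, f_new = explore(F, base, tau)
        if f_new < f_base:
            # Pattern move: jump along the successful direction, re-explore.
            x_pat, f_pat = explore(F, 2.0 * x_new - base, tau)
            base, f_base = (x_pat, f_pat) if f_pat < f_new else (x_new, f_new)
        else:
            tau *= gamma
    return base, f_base
```

For the 2D case of Section 3, x0 would be a randomly generated starting vector (dX, dY, dα) and tau0 the vector of initial step lengths, one per transformation parameter.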
2.2.2 Simulated annealing
The algorithm was first proposed in the work of Metropolis et al. [49], but only the development of computational technology made it possible to apply the algorithm to solve real problems. The work of Kirkpatrick et al. [50] was another important milestone. Simulated annealing (SA) belongs to the metaheuristic algorithms. Its main advantage is the ability to find the global minimum when local minima exist. The SA algorithm is also useful when the solution has to meet complex criteria.
These features have caused the SA method to be applied to a wide variety of problems from various areas of technology and science. Solving systems of multiple linear equations [41], structural health monitoring [51], bearing fault diagnosis [52], or UAV path planning [53] are only selected examples from a large number of others. The algorithm is also used for the solution of various problems in the area of geodetic measurements, like processing of geodetic measurement results [54], design of measurement networks [43], [45], [55], [56], deformation analysis [23], or coordinate transformation [57]. A thorough description of SA is provided in [58].
The basic idea of the SA algorithm is an iterative search through the search space, looking for the best solution reflected by the optimal (usually minimal) value of the objective function. The new, potentially better solution is generated randomly using a normal distribution centered at the current solution. The search scope, corresponding to the temperature, decreases with subsequent iterations. Assuming that x_i is the current solution and Δx_i is a randomly generated increment leading to the potential new solution, relation (13) describes the way of choosing the new solution:

$$x_{i+1} = \begin{cases} x_i + \Delta x_i, & \text{if } F(x_i + \Delta x_i) < F(x_i) \quad \text{(case 1)} \\ x_i + \Delta x_i \text{ with probability } p, \text{ else } x_i, & \text{if } F(x_i + \Delta x_i) \ge F(x_i) \quad \text{(case 2)} \end{cases} \tag{13}$$
The new solution is generally the one corresponding to the lower value of the objective function (case 1). The algorithm allows, however, the possibility of choosing a worse solution (case 2). This is useful for extremely complex objective function distributions with many local minima, when the global minimum is the only aim of the process. The probability p of choosing the worse solution is determined with the Boltzmann distribution. In simplified cases, a small constant value, e.g., p = 0.001 or p = 0, can be assumed.
Application of the SA method to the stability analysis requires defining a few key elements.
2.2.2.1 Definition of the objective function
The objective function evaluates the current solution, allowing the approach to the solution. In this work, the objective function is described in Section 2.1.
2.2.2.2 Cooling scheme
The cooling scheme defines the initial temperature and the formula for determining the decrease in temperature in subsequent iterations. A separate factor t is assumed to be the temperature in this application, with t_0 = 1 taken as its initial value. The change of t in subsequent iterations is described by formula (14):

$$t_{i+1} = \beta\, t_i \tag{14}$$
where i denotes the iteration number, and β is the so-called cooling factor. Its value is selected depending on the nature of the optimization problem.
2.2.2.3 Molecule movement model
Two elements define the molecule movement model. The first is the way of generating a new solution. As is commonly accepted, a new solution is generated in the way described by formula (15):

$$x_{i+1} = x_i + \Delta x_i \tag{15}$$
In this study, the particular transformation parameters constitute the vector x. The incremental vector Δx_i is generated randomly, using the normal distribution N(0, σ_i), where σ_i is the vector of standard deviations of the particular transformation parameters in the i-th iteration. Formula (16) describes the value of the standard deviation in iteration i:

$$\sigma_i = \sigma_0\, t_i \tag{16}$$

The initial values of the standard deviations σ_0 are selected depending on the nature of the problem, and they reflect the assumed range of variability of the corresponding parameters.
The acceptance criterion for a new solution is the second element of the molecule movement model. In this study, formula (13) was applied with p = 0, which means there is no possibility of accepting a worse solution.
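The elements above can be assembled into a minimal SA sketch in Python; the formula numbers in the comments refer to the text, and the linear coupling of σ_i to the temperature follows the assumption stated at (16).

```python
import numpy as np

def simulated_annealing(F, x0, sigma0, beta=0.999, t_end=1e-4, p=0.0):
    rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    fx = F(x)
    sigma0 = np.asarray(sigma0, dtype=float)
    t = 1.0                                      # initial temperature t_0 = 1
    while t > t_end:                             # termination criterion (17)
        x_try = x + rng.normal(0.0, sigma0 * t)  # proposal (15) with spread (16)
        f_try = F(x_try)
        # Acceptance rule (13): accept improvements; accept a worse point
        # only with the small fixed probability p (p = 0 in this study).
        if f_try < fx or rng.random() < p:
            x, fx = x_try, f_try
        t *= beta                                # cooling scheme (14)
    return x, fx
```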
2.2.2.4 Criterion for ending the iterative process
The condition for ending the iterative process can be formulated based on:
the final temperature,
the number of iterations,
the acceptable value of the objective function.
In this study, the temperature threshold t_end is applied (17):

$$t_i \le t_{end} \tag{17}$$
The SA algorithm is able to find the global solution when local solutions exist. However, in this study, the local minima corresponding to the subgroups are also of interest. The ability of the SA algorithm to find local minima besides the global minimum can be controlled by the cooling speed. Slowing down the cooling process (cooling factor β closer to 1) makes the procedure more likely to find the global minimum. With faster cooling, the procedure will more likely return solutions (local minima) corresponding to the smaller subgroups of mutually stable control points.
Selecting the optimal value of the cooling factor is therefore a challenging task: it should allow detecting the real subgroups of stable control points while minimizing the junk solutions that do not correspond to acceptable subgroups.
2.2.3 Hybrid algorithm
Optimization methods vary in terms of the algorithms used, which are dedicated to specific types of problems. This also applies to the two above-described algorithms. SA requires a large number of calculations. It is able to find the optimal (global) solution with a complex distribution of the objective function and when local minima are present. On the other hand, it is not very effective when the objective function distribution is regular, with one evident minimum. The second algorithm (HJ) is considered fast, but it is likely to return a local minimum as the final solution.
In the case of stable point identification with the objective function described in Section 2.1, multiple local minima are possible. They can correspond to separate subgroups of points that are mutually stable within a group but are not stable in relation to the points of another group. Also, single unstable points that do not belong to any group can create local minima in the search space of transformation parameters. Such situations can appear in real cases [24], [25].
The existence of multiple subgroups can be problematic for some traditional algorithms based on the assumption that there exists one group of stable points containing the majority of the reference network. An example concerning the commonly known congruency test was presented in [23]. The algorithms proposed in the works of Neitzel [20], [59] or Lehmann and Lösler [60] are more efficient. They can identify even very small subgroups of congruent points, but they use a combinatorial approach, which leads to an exponential increase in computation cost with the number of points. In consequence, the combinatorial methods can be applied only to networks that are not very large. In the case of optimization algorithms, the computation cost also depends on the network size, but this dependence is only linear.
As only one solution (group of congruent reference points) can be found in a single activation of the hybrid procedure, it must be activated a number of times to properly identify all the potential, internally congruent groups of reference points. This number depends on the existence of smaller subgroups, which are identified with lower probability. Based on a series of experiments, 500–1,000 trials can be recommended.
The research in [23] shows that although the minima corresponding to the particular groups are evident, the distribution of the objective function in the vicinity of a minimum is more complex and may contain local minima. As a result, the HJ algorithm delivers quite a large number of solutions corresponding to small subsets of the larger groups of mutually stable points or to unstable points. The SA algorithm performs better, but its procedure of selecting the next step is far from optimal at the initial stage of the search, when the way to the closest minimum is obvious. Another problem connected with the SA algorithm is its tendency to find global or dominating minima with a lower objective function value. In the case of this study, where all possible subgroups of stable points should be found, this is not an advantage.
That was the reason for proposing the new hybrid algorithm, which is a combination of the two previously described algorithms. The idea of using hybrid algorithms is not new. Twardochleb et al. [61] proposed a combination of the Monte Carlo and HJ methods to solve economic problems. Mustafa [62] applied a combination of two metaheuristic algorithms – Particle Swarm Optimization and SA – to the maximum partition problem. Rios-Coelho et al. [63] applied a combination of the Particle Collision Algorithm with HJ to nonlinear systems of chemical equilibrium. These and many other examples show that a combination of appropriately selected elementary algorithms can increase the efficiency of the problem solution.
In this study, the combination of the HJ and SA algorithms was used to establish a new hybrid algorithm. HJ is the main method. It is used to find the first (closest) minimum of the objective function, which is then a starting point for the SA algorithm. The SA algorithm plays a supporting role and was modified for application within the hybrid algorithm. It starts at the minimum point found by HJ, and its function is to test whether the minimum is not a subminimum. The modification concerns the termination criteria: the algorithm finishes when the temperature reaches the assumed threshold (minimum confirmed) or when a point with a lower objective function value than the current minimum is found. In the second case, the point found becomes the starting point from which the subsequent run of HJ is started. As the SA algorithm is not intended to find the final solution, its cooling rate can be faster, which shortens the whole procedure. The flowchart of the modified version of the SA algorithm is shown in Figure 2. The hybrid procedure finishes when SA is not able to find a new minimum and thus confirms the minimum found by HJ. The detailed flowchart of the proposed hybrid algorithm is shown in Figure 3.

Figure 2: Flowchart of the modified SA algorithm.

Figure 3: Flowchart of the hybrid algorithm.
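Combining the two earlier sketches gives the skeleton of this scheme in Python; hooke_jeeves is reused from the sketch in Section 2.2.1, and the code is again an illustration of Figures 2 and 3 rather than the study’s exact implementation.

```python
import numpy as np

rng = np.random.default_rng()

def sa_probe(F, x_min, f_min, sigma0, beta=0.999, t_end=1e-4):
    # Modified SA (Figure 2): stop either when the temperature reaches t_end
    # (minimum confirmed, return None) or as soon as a point with a lower
    # objective value than the current minimum is found.
    sigma0 = np.asarray(sigma0, dtype=float)
    t = 1.0
    while t > t_end:
        x_try = x_min + rng.normal(0.0, sigma0 * t)
        if F(x_try) < f_min:
            return x_try              # escaped: HJ will restart from here
        t *= beta
    return None                       # minimum confirmed

def hybrid(F, x0, tau0, sigma0):
    # Hybrid scheme (Figure 3): alternate HJ descent and SA probing until
    # the SA probe can no longer improve on the HJ minimum.
    x, fx = hooke_jeeves(F, x0, tau0)
    while True:
        better = sa_probe(F, x, fx, sigma0)
        if better is None:
            return x, fx
        x, fx = hooke_jeeves(F, better, tau0)
```

To collect all subgroups, the procedure is run repeatedly from random starting points (500–1,000 trials, as recommended above), and the distinct confirmed minima are tallied.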
The proposed hybrid algorithm has two main advantages. The first one is the shorter time of operation, which arises from the fact that the faster HJ method is applied to seek the solution, and a faster cooling rate is applied for the SA algorithm. Another advantage resulting from the features of the HJ algorithm is a higher probability of detecting the non-dominating subgroups of stable points that are also an object of interest.
As multiple minima can appear in the objective function distribution, it is necessary to set a strategy for distinguishing the local minima corresponding to the sought groups of stable points from the others, connected with smaller groups or single unstable points. Two parameters are of key importance in adjusting the hybrid algorithm to a particular task.
The first one is the objective function parameter connected with the expected congruency of the stable points (k for F_K or f for F_H). The first estimate of this parameter comes from a preliminary assessment of stability. The experiments show that k = mσ_XY or, correspondingly, f = 0.5mσ_XY for m = 2 are the recommended initial values. As mentioned in Section 2, in many cases it is impossible to assess the stability level using the preliminary adjustment, and the objective function parameter must then be adjusted empirically by gradually increasing the parameter m. There is no strictly formulated limit for m, but one must remember that the bigger the m value, the worse the final precision of the displacements.
Another parameter that strongly influences the hybrid algorithm’s performance is the cooling factor of the SA algorithm, which determines the ability to skip the local minimum found by the HJ algorithm. The effectiveness of the SA procedure depends, however, on the number, location, and stability level of the stable points. As their number and location are unknown, the cooling factor must be adjusted experimentally. A reasonable range of cooling factor values is 0.999–0.9999. In justified cases, the cooling can be even slower.
3 Numerical example
The proposed hybrid algorithm was tested on a numerical example: a simulated two-epoch 2D test network containing 20 points. The points’ placement is shown in Figure 4. Table 1 contains the point coordinates generated for both epochs. The network contains two subgroups of internally stable points. Points creating a particular subgroup keep mutual stability within the group but are not stable in relation to the other points. Eleven points are unstable in relation to both subgroups of stable points as well as to one another. The movements assumed for the groups are listed in Table 2. The coordinates were perturbed with normally distributed true errors with a standard deviation of σ_XY = 5 mm, which is equivalent to disturbing each of the two epochs with errors of 3.5 mm standard deviation. The 2D transformation parameters from epoch 1 to epoch 2 were assumed as the solution vector elements. With the assumption of a constant scale, the solution vector x consists of three elements: x_1 = dX, x_2 = dY, x_3 = dα, where dX and dY denote the incremental shift of the object’s barycenter in the X and Y directions and dα denotes the rotation change between epochs 1 and 2.
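The mapping from a candidate solution (dX, dY, dα) to the residual vectors (2) can be sketched as follows; rotating about the epoch-1 barycenter is an assumption of this sketch, consistent with the barycentric parametrization above, and the angle is converted from gons (400 gon = full circle).

```python
import numpy as np

def residuals_2d(xy1, xy2, dX, dY, dalpha_gon):
    # Residual vectors r_i (2) for candidate parameters (dX, dY, dalpha):
    # rotate epoch-1 coordinates about their barycenter, apply the shift,
    # and compare with the epoch-2 coordinates.
    a = dalpha_gon * np.pi / 200.0           # gon -> radians
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    b = xy1.mean(axis=0)                     # barycenter of epoch 1
    return (xy1 - b) @ R.T + b + np.array([dX, dY]) - xy2

# Example: evaluate F_K of Section 2.1 for the group-1 movement of Table 2,
# with k = m * sigma_XY = 0.010 m (m = 2, sigma_XY = 5 mm, see Section 2.2.3):
# r = residuals_2d(xy_epoch1, xy_epoch2, -0.053, -0.062, -0.0065)
# value = F_K(np.linalg.norm(r, axis=1), k=0.010)
```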

Figure 4: Test network points placement.
Table 1: Coordinates of the points of the test network for both epochs.
| Nr | Epoch 1 X [m] | Epoch 1 Y [m] | Epoch 2 X [m] | Epoch 2 Y [m] | ΔX [m] | ΔY [m] |
|---|---|---|---|---|---|---|
| 1 | 832.011 | 1,627.014 | 831.940 | 1,626.952 | −0.071 | −0.062 |
| 2 | 682.656 | 1,701.691 | 682.588 | 1,701.638 | −0.068 | −0.053 |
| 3 | 890.686 | 1,771.035 | 890.629 | 1,770.962 | −0.057 | −0.072 |
| 4 | 687.990 | 1,829.710 | 687.939 | 1,829.665 | −0.052 | −0.044 |
| 5 | 821.343 | 1,893.719 | 821.307 | 1,893.653 | −0.036 | −0.066 |
| 6 | 922.691 | 1,952.394 | 922.649 | 1,952.451 | −0.042 | 0.057 |
| 7 | 746.665 | 1,995.067 | 746.691 | 1,995.171 | 0.025 | 0.104 |
| 8 | 1,093.382 | 2,005.735 | 1,093.479 | 2,005.689 | 0.098 | −0.046 |
| 9 | 890.686 | 2,112.417 | 890.634 | 2,112.376 | −0.052 | −0.041 |
| 10 | 778.670 | 2,149.756 | 778.672 | 2,149.826 | 0.002 | 0.070 |
| 11 | 1,002.702 | 2,144.422 | 1,002.754 | 2,144.421 | 0.052 | −0.001 |
| 12 | 832.011 | 2,229.767 | 831.957 | 2,229.856 | −0.054 | 0.089 |
| 13 | 997.368 | 2,251.104 | 997.410 | 2,251.057 | 0.042 | −0.046 |
| 14 | 1,082.714 | 2,288.442 | 1,082.656 | 2,288.480 | −0.058 | 0.038 |
| 15 | 928.025 | 2,331.115 | 927.970 | 2,331.155 | −0.055 | 0.040 |
| 16 | 1,040.041 | 2,379.122 | 1,039.982 | 2,379.155 | −0.059 | 0.033 |
| 17 | 1,184.062 | 2,491.138 | 1,184.141 | 2,491.174 | 0.079 | 0.036 |
| 18 | 992.034 | 2,496.472 | 991.981 | 2,496.504 | −0.053 | 0.032 |
| 19 | 784.004 | 2,448.465 | 784.040 | 2,448.511 | 0.036 | 0.046 |
| 20 | 1,034.707 | 2,629.825 | 1,034.764 | 2,629.758 | 0.057 | −0.067 |
Table 2: The movements simulated for the groups of stable points.
| Group | dX [m] | dY [m] | Rotation dα [g] |
|---|---|---|---|
| 1, 2, 3, 4, 5 | −0.053 | −0.062 | −0.0065 |
| 14, 15, 16, 18 | −0.049 | 0.033 | −0.0019 |
Figure 5 shows the distribution of the F_K objective function (a) and the F_H objective function (b). A rotation angle α = −0.0040 g was assumed for the pictures. For better legibility, the values of both functions are inverted (the top of a hill in the picture corresponds to a minimum of the function).

Figure 5: Resulting objective function. (a) F_K, (b) F_H.
The experiment was aimed at checking whether the proposed algorithm is able to effectively identify both subgroups of stable points. It consisted of performing n = 1,000 trials for three algorithms: HJ, SA, and the new hybrid algorithm. The results are shown in Tables 3, 4, and 5. Each algorithm was tested using both analysed objective functions – F_K and F_H. For the F_K objective function (12), c = 1.0 m⁻² was set. This choice ensures that the L2 norm component does not significantly affect the value of F_K when the components of r_i are expressed in meters. The frequency of finding each solution, as well as the corresponding objective function value, is listed. As the same precision was assumed for all points, the common threshold value ε_i = 0.008 m was applied for all points and all tested algorithms.
Table 3: The results of identification for the HJ algorithm.
| Group | F_K Obj. f. | F_K % Hits | F_H Obj. f. | F_H % Hits | Result |
|---|---|---|---|---|---|
| 14, 15, 16, 18 | 16.438 | 38.6 | 15.003 | 42.3 | A |
| 1, 2, 3, 4, 5 | 15.687 | 28.1 | 14.329 | 20.5 | A |
| 4, 9 | – | – | 16.413 | 6.1 | |
| 11 | 19.199 | 6.4 | 17.887 | 4.3 | |
| 10 | 18.701 | 8.1 | 17.424 | 3.5 | |
| 1, 2, 3, 4 | – | – | 14.733 | 3.1 | B |
| 6 | – | – | 17.497 | 2.8 | |
| 1, 2 | – | – | 16.220 | 2.2 | B |
| 13 | 19.209 | 1.7 | 17.681 | 2.0 | |
| 14 | 18.826 | 1.1 | 17.481 | 1.9 | |
| 16 | 19.357 | 0.4 | 17.419 | 1.7 | |
| 17 | 18.833 | 5.9 | 17.830 | 1.6 | |
| 14, 16 | – | – | 16.462 | 1.5 | B |
| 8 | 19.065 | 1.1 | 17.907 | 1.0 | |
| 12 | 19.712 | 0.2 | 17.723 | 0.9 | |
| 18 | – | – | 17.563 | 0.8 | |
| 7 | – | – | 17.718 | 0.7 | |
| 9, 20 | 18.264 | 1.8 | 17.037 | 0.7 | |
| 2 | – | – | 17.911 | 0.6 | |
| 19 | 19.056 | 4.2 | 17.588 | 0.5 | |
| 1, 2, 3 | – | – | 15.434 | 0.4 | B |
| 5 | – | – | 17.621 | 0.3 | |
| 16, 18 | – | – | 16.333 | 0.2 | |
| 20 | – | – | 18.080 | 0.2 | |
| 15, 16 | – | – | 16.265 | 0.1 | B |
| 1, 3, 4, 5 | – | – | 15.028 | 0.1 | B |
| 10, 16 | 18.870 | 0.3 | – | – | |
| 9 | 18.972 | 0.2 | – | – | |
| – | 18.914 | 1.9 | – | – | |
Table 4: The results of identification for the SA algorithm.
| Group | F_K Obj. f. | F_K % Hits | F_H Obj. f. | F_H % Hits |
|---|---|---|---|---|
| 1, 2, 3, 4, 5 | 15.687 | 78.0 | 14.329 | 82.1 |
| 14, 15, 16, 18 | 16.438 | 22.0 | 15.003 | 17.8 |
| 9 | – | – | 18.209 | 0.1 |
Table 5: The results of identification for the hybrid algorithm.
| Group | F_K Obj. f. | F_K % Hits | F_H Obj. f. | F_H % Hits |
|---|---|---|---|---|
| 1, 2, 3, 4, 5 | 15.687 | 54.0 | 14.329 | 44.5 |
| 14, 15, 16, 18 | 16.438 | 46.0 | 15.003 | 55.5 |
3.1 Hooke–Jeeves algorithm
The HJ algorithm is fully deterministic. However, the solution depends on the starting point, which is generated randomly. As a result, various local minima can be detected. The following parameters were taken for the identification procedure:
The initial extent of the search area used for the random generation of the starting point was assumed as 0.15 m for dX/dY and 0.015 g for dα. The identification results for the two objective function variants are summarized in Table 3. The expected results (one of the assumed groups) are marked by A in the Result column. Results that are subsets of the correct results are marked by B. The rest (not marked) are solutions considered incorrect due to the small number of points and larger values of the objective function.
3.2 Simulated annealing
Although the SA algorithm is dedicated to the cases where multiple local minima exist, the starting point was generated randomly, as in the case of the HJ algorithm. The following parameters were taken for the identification procedure:
The identification results of the SA algorithm are summarized in Table 4.
3.3 Hybrid algorithm
The parameters for the HJ component were the same as those used for HJ as a stand-alone algorithm. Similarly, the same search space extent parameters as in the stand-alone SA runs were applied.
The value of the cooling factor was selected experimentally. For F_K, a cooling factor β = 0.9995 had to be set for the proper identification of both subgroups. For F_H, β = 0.999 appeared to be enough.
The identification results of the hybrid algorithm are summarized in Table 5.
As can be seen, both variants of the objective function based on robust estimation weight functions can efficiently indicate the places in the search space that correspond to groups of stable reference points. The F_K objective function entails a higher computational cost, but its distribution is more regular. The F_H objective function produces more local minima, which can be seen in Figure 5b, and leads to numerous solutions when using the HJ algorithm.
The test confirmed that the HJ algorithm is sensitive to an irregular objective function distribution, which is especially visible in the case of the F_H objective function. Although many of the erroneous solutions can be filtered out using quite simple criteria, it must be concluded that the HJ algorithm is not advisable as a stand-alone method.
The SA algorithm performs much better, but at the expense of a longer computational time. Even with the cooling factor β = 0.9999, the wrong solutions were not fully eliminated in the case of the F_H objective function. In the case of F_K, a cooling factor β = 0.9997 appeared to be enough to eliminate the wrong solutions.
The hybrid algorithm fulfilled the expectations, delivering only the correct solutions. Due to the different distributions of the applied objective functions in the search space, different parameters of the SA component appeared necessary for the proper identification of the stable reference points. In the case of F_K, slower cooling (β = 0.9995) was necessary. In the case of the F_H objective function, faster cooling of SA (β = 0.999) was sufficient to get the proper solution.
Although the computation cost is not a crucial element of the assessment of the analysed algorithms, a comparison of the processing times for all algorithms and both objective functions gives valuable information about the effectiveness of the particular identification procedures. Figure 6 shows the approximate processing times of a 1,000-trial analysis for the three algorithms.

Figure 6: Comparison of processing times for a 1,000-trial analysis.
Another issue that should be discussed in the context of the identification approach proposed in this study is the probability of detection of the particular groups of stable points. When appropriate parameters of the objective functions are selected, particular groups of stable points are associated with corresponding minima in the search space, as can be seen in Figure 5. The values of the objective function in these minima depend on the number of points creating the group and the level of their internal congruency. The most desired result of the identification procedure is selecting all the groups with a possibly balanced probability. Leaving aside the HJ algorithm, it can easily be seen that the SA algorithm significantly prefers the dominating solution (group 1, 2, 3, 4, 5) due to its lower objective function value. This arises from the features of the SA algorithm, which is rather dedicated to finding the global minimum. In the case of the hybrid algorithm, both groups are detected with a more balanced probability. As mentioned before, such a result is more desirable due to the expert’s role in making the final decision.
4 Summary and conclusions
Proper identification of the stable reference points in a monitoring network is a complex problem, especially in cases when the monitoring network is large and the identification result is equivocal. The presented research is a continuation of the research described in [23], and it utilizes the idea of searching the solution space with optimization algorithms. The newly proposed objective function formulation using robust weight functions appeared to be an efficient way to detect subgroups of congruent points. Two variants of the objective function, based on Kadaj’s and Huber’s robust weight functions, were applied. Both of them showed sufficient efficiency in detecting subgroups of congruent reference points. The F_K objective function, based on Kadaj’s weight function, although more numerically costly, in some cases delivers a cleaner solution. A definitive conclusion, however, requires extended research on more numerous and diversified examples.
It also appeared that hybridization of the algorithm was the right solution. The hybrid algorithm, combining the HJ and SA procedures, selected, similarly to SA, only the proper groups of stable points, but it was significantly faster. Another advantage of the hybrid algorithm is its higher sensitivity to minority subgroups, which should also be considered as a potential final reference system for the deformations.
Although in the presented test the identification result was limited to the assumed groups of stable points, it can be expected that in real cases the identification result will not always be clear and obvious, even with the application of the SA or hybrid algorithm. For this reason, a filtering procedure must be applied to process the raw result. The development of the optimal filtering algorithm needs further research.
Research ethics: Not applicable.

Informed consent: Not applicable.

Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

Use of Large Language Models, AI and Machine Learning Tools: Grammarly, a language writing assistant, was used to improve language.

Conflict of interest: The author states no conflict of interest.

Research funding: None declared.

Data availability: All data used during the study appear in the submitted article. Details of the code applied are available from the corresponding author upon reasonable request.
References
1. Chrzanowski, A, Szostak, A, Steeves, R. Reliability and efficiency of dam deformation monitoring schemes. In: Proceedings of the 2011 annual conference of the Canadian Dam Association (CDA/ACB). Fredericton, NB, Canada: Canadian Dam Association (CDA); 2011.
2. Zaczek-Peplinska, J, Kowalska, M. Application of non-contact geodetic measurement techniques in dam monitoring. Arch Civ Eng 2022;68:49–70.
3. Konakoglu, B. Analysis of dam deformation with robust weight functions. Geod Vestn 2020;64:198–213. https://doi.org/10.15292/geodetski-vestnik.2020.02.198-213.
4. Erdélyi, J, Kopáčik, A, Kyrinovič, P. Spatial data analysis for deformation monitoring of bridge structures. Appl Sci 2020;10:8731. https://doi.org/10.3390/app10238731.
5. Sušić, Z, Batilović, M, Durović, R, Marković, M, Vujinović, M. Deformation analysis of Ratkov Laz bridge using Pelzer and IWST method. Geod Vestn 2023;67:487–504. https://doi.org/10.15292/geodetski-vestnik.2023.04.487-504.
6. Zheng, K, Zhang, Y, Zhao, C, Liu, L. Rotary kiln cylinder deformation measurement and feature extraction based on EMD method. Eng Lett 2015;23:4.
7. Madimarova, G, Suleimenova, D, Pentayev, T, Khalykov, Y, Baydauletova, G, Tumazhanova, S, et al. The geodetic monitoring of deformations of a high-rise building using ground-based laser scanning technology. J Appl Eng Sci 2022;20:1083–92. https://doi.org/10.5937/jaes0-37001.
8. Markiewicz, J, Tobiasz, A, Kot, P, Muradov, M, Shaw, A, Al-Shamma’a, A. Review of surveying devices for structural health monitoring of cultural heritage buildings. In: 2019 12th international conference on developments in eSystems engineering (DeSE), Kazan, Russia. IEEE; 2019:597–601 pp. https://doi.org/10.1109/DeSE.2019.00113.
9. Prochazka, J, Záleský, J, Jiřikovský, T, Salak, J. Long-term stability monitoring in the Prague Castle area. Acta Geodyn Geomater 2010;7:411–29.
10. Cacoń, S, Švábenský, O, Kontny, B, Weigel, J, Jamroz, O, Ćmielewski, K, et al. Deformation analysis of the upper part of the Earth crust in the Snieznik Massif (Polish and Czech sides between 1993 and 2003). Acta Geodyn Geomater 2004;1:135.
11. Karsznia, K, Tarnowska, A. Proposition of an integrated geodetic monitoring system in the areas at risk of landslides. Chall Mod Technol 2013;4:33–40.
12. Xu, C, Li, Z, Wang, F, Mu, J. Spatio-temporal changes of mass balance in the ablation area of the Muz Taw glacier, Sawir Mountains, from multi-temporal terrestrial geodetic surveys. Remote Sens 2021;13:1465. https://doi.org/10.3390/rs13081465.
13. Di Stefano, F, Cabrelles, M, García-Asenjo, L, Lerma, JL, Malinverni, ES, Baselga, S, et al. Evaluation of long-range mobile mapping system (MMS) and close-range photogrammetry for deformation monitoring. A case study of Cortes de Pallás in Valencia (Spain). Appl Sci 2020;10:6831. https://doi.org/10.3390/app10196831.
14. García-Asenjo, L, Martínez, L, Baselga, S, Garrigues, P, Luján, R. Design, establishment, analysis, and quality control of a high-precision reference frame in Cortes de Pallás (Spain). Appl Geomatics 2023;15:359–70. https://doi.org/10.1007/s12518-022-00481-9.
15. Moore, JR, Gischig, V, Button, E, Loew, S. Rockslide deformation monitoring with fiber optic strain sensors. Nat Hazards Earth Syst Sci 2010;10:191–201. https://doi.org/10.5194/nhess-10-191-2010.
16. Maghsoudi, A, Kalantari, B. Monitoring instrumentation in underground structures. Open J Civ Eng 2014;4:135–46. https://doi.org/10.4236/ojce.2014.42012.
17. Prószyński, W, Kwaśniak, M. Podstawy geodezyjnego wyznaczania przemieszczeń: Pojęcia i elementy metodyki [Fundamentals of geodetic determination of displacements: concepts and elements of methodology] (in Polish). Warsaw, Poland: Warsaw University of Technology Press; 2015.
18. Amiri-Simkooei, AR, Alaei-Tabatabaei, SM, Zangeneh-Nejad, F, Voosoghi, B. Stability analysis of deformation monitoring network points using simultaneous observation adjustment of two epochs. J Surv Eng 2016;143:04016020. https://doi.org/10.1061/(ASCE)SU.1943-5428.0000195.
19. Baselga, S, García-Asenjo, L, Garrigues, P. Deformation monitoring and the maximum number of stable points method. Measurement 2015;70:27–35. https://doi.org/10.1016/j.measurement.2015.03.034.
20. Neitzel, F. Die Methode der maximalen Untergruppe (MSS) und ihre Anwendung in der Kongruenzuntersuchung geodätischer Netze [The method of the maximum subgroup (MSS) and its application in the congruence analysis of geodetic networks] (in German). ZFV – Z Geod Geoinf Landmanagement 2005;130:82–91.
21. Velsink, H. On the deformation analysis of point fields. J Geod 2015;89:1071–87. https://doi.org/10.1007/s00190-015-0835-z.
22. Kamiński, W, Nowel, K. Local variance factors in deformation analysis of non-homogenous monitoring networks. Surv Rev 2013;45:44–50. https://doi.org/10.1179/1752270612y.0000000019.
23. Odziemczyk, W. Application of optimization algorithms for identification of reference points in a monitoring network. Sensors 2021;21:1739. https://doi.org/10.3390/s21051739.
24. Prószyński, W. Problem of partitioned bases in monitoring vertical displacements for elongated structures. Geod Cartogr 2010;59:53–67. https://doi.org/10.2478/v10277-012-0001-1.
25. Wujanz, D, Avian, M, Krueger, D, Neitzel, F. Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring. Earth Surf Dyn 2018;6:303–17. https://doi.org/10.5194/esurf-6-303-2018.
26. Van Mierlo, J. A testing procedure for analysing geodetic deformation measurements. In: Proceedings of the 2nd international symposium on deformation measurements by geodetic methods. Stuttgart/Bonn, Germany: Konrad Wittwer; 1978.
27. Niemeier, W. Zur Kongruenz mehrfach beobachteter geodätischer Netze [On the congruence of repeatedly observed geodetic networks] (in German). Hannover, Germany: University of Hannover, Fachrichtung Vermessungswesen; 1979.
28. Denli, HH, Deniz, R. Global congruency test methods for GPS networks. J Surv Eng 2003;129:95–8. https://doi.org/10.1061/(ASCE)0733-9453(2003)129:3(95).
29. Adamczewski, Z. Algorytm numerycznej kontroli przylegania obiektów [An algorithm for numerical control of object adherence] (in Polish). Geod Kartografia 1979;27:195–200.
30. Chen, YQ. Analysis of deformation surveys – a generalized method. Fredericton, Canada: University of New Brunswick, Department of Surveying Engineering; 1983. Technical Report No. 94.
31. OGl, O, Ipadeola, AO, Shittu, OG, Ojegbile, BM. Application of iterative weighted similarity transformation (IWST) deformation detection method using coordinate differences from different observational campaigns. J Phys Sci Innov 2016;8:68–84.
32. Hu, Y, Fang, X, Zeng, W, Kutterer, H, Li, D. Statistical robust estimation of spatial symmetric transformations based on Mahalanobis distance. IEEE Trans Geosci Remote Sens 2024;62:5801710. https://doi.org/10.1109/tgrs.2024.3431689.
33. Huber, PJ. Robust estimation of a location parameter. Ann Math Statist 1964;35:73–101. https://doi.org/10.1214/aoms/1177703732.
34. Hampel, FR, Ronchetti, EM, Rousseeuw, PJ, Stahel, WA. Robust statistics: the approach based on influence functions. New York: Wiley; 1986.
35. Berberan, A. Outlier detection and heterogeneous observations – a simulation case study. Aust J Geod Photogramm Surv 1992;56:49–61.
36. Kadaj, R. Wyrównanie z obserwacjami odstającymi [Adjustment with outliers] (in Polish). Przegląd Geod 1978;8:252–3.
37. Nowel, K, Kamiński, W. Robust estimation of deformation from observation differences for free control networks. J Geod 2014;88:749–64. https://doi.org/10.1007/s00190-014-0719-7.
38. Třasák, P, Štroner, M. Outlier detection efficiency in the high precision geodetic network adjustment. Acta Geod Geophys 2014;49:161–75. https://doi.org/10.1007/s40328-014-0045-9.
39. Muszyński, Z. Zastosowanie metod estymacji odpornej do geodezyjnego opisu deformacji obiektu budowlanego [Application of robust estimation methods to the geodetic description of deformation of a building structure] (in Polish). Acta Sci Pol Geod Descr Terrarum 2008;7:3–14.
40. Caspary, W, Borutta, H. Robust estimation in deformation models. Surv Rev 1987;29:29–45. https://doi.org/10.1179/sre.1987.29.223.29.
41. Islam, MS, Khatoon, MT, Siddiquee, KNA, Yong, WH, Huda, MH. Performance analysis of simulated annealing and genetic algorithm on systems of linear equations. F1000Research 2021;10:1297. https://doi.org/10.12688/f1000research.73581.1.
42. Batilović, M, Durović, R, Sušić, Z, Kanović, Ž, Cekić, Z. Robust estimation of deformation from observation differences using some evolutionary optimisation algorithms. Sensors 2021;22:159. https://doi.org/10.3390/s22010159.
43. Jia, F, Lichti, D. A comparison of simulated annealing, genetic algorithm and particle swarm optimization in optimal first-order design of indoor TLS networks. ISPRS Ann Photogramm Remote Sens Spatial Inf Sci 2017;IV-2/W4:75–82. https://doi.org/10.5194/isprs-annals-iv-2-w4-75-2017.
44. Hooke, R, Jeeves, TA. “Direct search” solution of numerical and statistical problems. J ACM 1961;8:212–29. https://doi.org/10.1145/321062.321069.
45. Odziemczyk, W. Comparison of selected reliability optimization methods in application to the second order design of geodetic network. J Appl Geod 2024;18:223–36. https://doi.org/10.1515/jag-2023-0024.
46. Xiong, F, Wei, B, Xu, F. Identification of arch dam mechanical parameters based on sensitivity analysis and Hooke–Jeeves algorithm optimization. Structures 2022;46:88–98. https://doi.org/10.1016/j.istruc.2022.10.052.
47. Pal, T, Kaushik, M. Aircraft parameter estimation using a novel hybrid Luus–Jaakola/Hooke–Jeeves neural-network based optimization technique. Proc Inst Mech Eng, Part G 2023;237:2196–208. https://doi.org/10.1177/09544100221140980.
48. Zeng, H, Yi, Q, Wu, Y. Iterative approach of 3D datum transformation with a non-isotropic weight. Acta Geod Geophys 2016;51:557–70. https://doi.org/10.1007/s40328-015-0144-2.
49. Metropolis, N, Rosenbluth, AW, Rosenbluth, MN, Teller, AH, Teller, E. Equation of state calculations by fast computing machines. J Chem Phys 1953;21:1087–92. https://doi.org/10.1063/1.1699114.
50. Kirkpatrick, S, Gelatt, CD Jr, Vecchi, MP. Optimization by simulated annealing. Science 1983;220:671–80. https://doi.org/10.1126/science.220.4598.671.
51. Ghannadi, P, Kourehli, SS, Mirjalili, S. A review of the application of the simulated annealing algorithm in structural health monitoring (1995–2021). Frat Ed Integrità Strutt 2023;17:51–76. https://doi.org/10.3221/igf-esis.64.04.
52. He, F, Ye, Q. A bearing fault diagnosis method based on wavelet packet transform and convolutional neural network optimized by simulated annealing algorithm. Sensors 2022;22:1410. https://doi.org/10.3390/s22041410.
53. Huo, L, Zhu, J, Wu, G, Li, Z. A novel simulated annealing based strategy for balanced UAV task assignment and path planning. Sensors 2020;20:4769. https://doi.org/10.3390/s20174769.
54. Baselga, S, Klein, I, Suraci, SS, Castro de Oliveira, L, Matsuoka, MT, Rofatto, VF. Performance comparison of least squares, iterative and global L1 norm minimization and exhaustive search methods for outlier detection in leveling networks. Acta Geodyn Geomater 2020;17:425–38. https://doi.org/10.13168/AGG.2020.0031.
55. Berné, JL, Baselga, S. First-order design of geodetic networks using the simulated annealing method. J Geod 2004;78:47–54. https://doi.org/10.1007/s00190-003-0365-y.
56. Baselga, S. Second order design of geodetic networks by the simulated annealing method. J Surv Eng 2011;137:167–73. https://doi.org/10.1061/(ASCE)SU.1943-5428.0000053.
57. Odziemczyk, W. Application of simulated annealing algorithm for 3D coordinate transformation problem solution. Open Geosci 2020;12:491–502. https://doi.org/10.1515/geo-2020-0038.
58. Van Laarhoven, PJM, Aarts, EHL. Simulated annealing: theory and applications. Dordrecht: Springer; 1987. https://doi.org/10.1007/978-94-015-7744-1.
59. Neitzel, F. Identifizierung konsistenter Datengruppen am Beispiel der Kongruenzuntersuchung geodätischer Netze [Identification of consistent data groups using the example of congruence analysis of geodetic networks] (in German). Munich: DGK; 2004.
60. Lehmann, R, Lösler, M. Congruence analysis of geodetic networks – hypothesis tests versus model selection by information criteria. J Appl Geod 2017;11:271–83. https://doi.org/10.1515/jag-2016-0049.
61. Twardochleb, M, Król, T, Włoch, P, Kuka, B. Effectiveness of hybrid optimization methods in solving test problems and practical issues. Informatyka Ekonomiczna 2013;4:279–89.
62. Mustafa, W. Hybridization of particle swarm optimization and simulated annealing for maximum partition problem. J Math Comput Sci 2021;11:2058–74.
63. Rios-Coelho, AC, Sacco, WF, Henderson, N. A Metropolis algorithm combined with Hooke–Jeeves local search method applied to global optimization. Appl Math Comput 2010;217:843–53. https://doi.org/10.1016/j.amc.2010.06.027.
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.