Article Open Access

Improvement of orbit prediction accuracy using extreme gradient boosting and principal component analysis

  • Min Zhai, Zongbo Huyan, Yuanyuan Hu, Yu Jiang, and Hengnian Li
Published/Copyright: June 22, 2022

Abstract

High-accuracy orbit prediction plays a crucial role in several aerospace applications, such as satellite navigation, orbital maneuvering, and space situational awareness. Conventional methods of orbit prediction are usually based on dynamic models with clear mathematical expressions. However, the coefficients of perturbation forces and the relevant features of satellites are approximate values, which induces errors during the process of orbit prediction. In this study, a new orbit prediction model based on principal component analysis (PCA) and the extreme gradient boosting (XGBoost) model is proposed to improve the accuracy of orbit prediction by learning from historical data in a simulated environment. First, a series of experiments is conducted to determine the appropriate combination of features to be used in the subsequent machine learning (ML) process. Then, the PCA and XGBoost models are used to find incremental corrections to orbit predictions made with dynamic models. The results reveal that the designed framework based on the PCA and XGBoost models can effectively improve the orbit prediction accuracy in most cases. More importantly, the proposed model has excellent generalization capability for different satellites, which means that a model learned from one satellite can be used on another new satellite without learning from the historical data of the target satellite. Overall, it has been shown that the proposed ML model can be a supplement to dynamic models for improving orbit prediction accuracy.

1 Introduction

Orbit prediction is one of the foundations of space technology. It plays an important role in many applications such as satellite navigation, satellite control, collision avoidance, and landing area determination (Wen et al. 2020, Zeng et al. 2021). Current orbit prediction by numerical integration is usually based on accurate dynamic models: the higher the accuracy of the dynamic models, the better the orbit prediction.

However, dynamic models have their limitations. For example, atmospheric drag is one of the major perturbations for low earth orbit (LEO) satellites, while the existing atmospheric density models are not accurate enough. Therefore, different approaches to improve the orbit prediction accuracy have been studied. Levit and Marshall (2011) proposed a high-precision numerical propagator based on the two-line element (TLE) catalog to improve the orbit prediction accuracy. Chen et al. (2017) introduced an error analysis method using historical orbital data and revealed the periodic characteristics of the orbit prediction error. Goh et al. (2016) proposed a preprocessed orbit parameter method to minimize the orbit propagation and attitude determination errors. Wang et al. (2017) used a real-time orbit correction method to reduce the impact of earth rotation parameters on GNSS ultra-rapid orbit prediction, which improves the accuracy of ultra-rapid orbit prediction by at least 50%. Sang et al. (2017) presented a two-step TLE-based method in which the numerical orbits are first fitted into a TLE set and correction functions are then applied to improve the position accuracy, balancing accuracy, computing efficiency, and memory.

It has been demonstrated that machine learning (ML) methods have great potential in learning from large amounts of data and have been used in many different fields. ML methods have also shown great promise for a wide range of applications in the aerospace field, such as collision avoidance (Uriot et al. 2022) and spacecraft guidance, dynamics, and control (Izzo et al. 2019). Perez and Bevilacqua (2015) presented a novel approach based on neural networks to calibrate atmospheric density models; their tests indicate that the neural networks produce density estimates with less error than the three empirical models studied, and that the method holds potential for future onboard implementation on real spacecraft. Sharma and Cutler (2015) presented a learning approach based on distribution regression and transfer learning to determine an orbit; their tests show that the proposed ML approach is superior to conventional methods such as the extended Kalman filter. Peng and Bai (2018) proposed a new orbit prediction method based on a supervised ML method, the support vector machine, which significantly improves the orbit prediction accuracy. Gao and Hauser (2019) proposed a data-driven framework for nonlinear optimal control using the k-nearest neighbors method to determine initial guesses for new problems with the help of precomputed solutions to similar problems. Peng and Bai (2019) systematically investigated three recently developed ML approaches to improving orbit prediction accuracy, including support vector machines, artificial neural networks, and Gaussian processes, in a simulation environment. Cheng et al. (2019) proposed a real-time optimal control approach using an interactive deep reinforcement learning (DRL) algorithm to solve the Moon fuel-optimal landing problem. Jung et al. (2021) proposed a recurrent neural network model to predict re-entry trajectories of uncontrolled space objects. Cheng et al. (2021) demonstrated an adaptive neural network control approach that achieves accurate and robust control of nonlinear systems with unknown dynamics. Jiang et al. (2020) applied DRL with deep neural networks to plan the path of hopping rovers over irregular asteroid surfaces.

It has been noticed that training an ML model on a large amount of high-accuracy satellite positions alone, without dynamic models, cannot reach the accuracy of current dynamic models for orbit prediction. However, if an ML model can find the underlying pattern of orbit prediction errors, it can be applied to correct the orbit predictions calculated by dynamic models. In this article, a new ML method is proposed to find incremental corrections to orbits predicted with inaccurate dynamic models. Several experiments in simulated environments have been conducted to validate the performance of this ML method. The contributions of this article are summarized as follows:

  1. Different combinations of features that may contribute to the orbit prediction errors have been investigated. The most suitable combination of features is found. It helps to establish a more accurate and efficient ML model. A factor is introduced to determine whether the features selected are suitable for orbit prediction correction.

  2. After tests and comparisons, the XGBoost method shows the best performance among several ML methods.

  3. Principal component analysis (PCA) method is introduced to reduce the noise and redundant information of the dataset. Experimental results show that the XGBoost model combined with PCA (called PCA–XGBoost model for simplicity) has better performance than the XGBoost model only.

  4. Generalization capabilities of the PCA–XGBoost model on satellites with different semi-major axes and different right ascensions of the ascending node (RAAN) are rigorously investigated. Results show that the model has great potential in generalizing to different new satellites.

The remaining parts of this article are organized as follows. Section 2 presents the related work, including an introduction to the simulation environment and brief introductions to the PCA method, the XGBoost method, and the hyperparameter optimization method; the design of the learning process and the selection of learning and target variables are discussed in detail. Section 3 presents the analysis and comparison of the XGBoost and PCA–XGBoost models. Section 4 summarizes the conclusions and suggests future research directions.

2 Simulation environment and ML approach for improving orbit prediction accuracy

In this section, the proposed model, PCA–XGBoost, is presented. First, detailed information on the simulation environment and the workflow of the proposed model is introduced. Then, the relevant ML methods are briefly reviewed. The construction of the learning dataset is also introduced, and the evaluation metrics of the models are presented.

2.1 Simulation environment

The workflow in the simulation environment for the ML method is shown in Figure 1. The whole workflow can be divided into three parts. In the first part, which is called “Data Collection,” the following states of satellites are simulated: the true orbit, the observation data, the estimated orbit, and the predicted orbit. The second part presents the preprocessing of data for the ML method. In the third part, the structure of the ML method is presented.

Figure 1

The framework of the proposed PCA–XGBoost model. "True Orbit" is generated using the true dynamic model; "Predicted Orbit" and "Estimated Orbit" are generated using the assumed dynamic model. "Estimated Orbit" is determined from the observation data using the assumed dynamic model, and "Predicted Orbit" is propagated from "Estimated Orbit." The proposed model tries to learn the underlying relationship between the relative prediction errors and the true prediction errors, and its output is the correction of the future predicted orbit.

Parameters of the true dynamic model and the assumed dynamic model are summarized in Table 1. JGM3 (Tapley 1994) is chosen as the gravity field of the earth, truncated at degree and order 40 × 40 for the true dynamic model and 20 × 20 for the assumed dynamic model. For the true dynamic model, the gravity of the Sun, all the planets, Pluto, and the Moon is taken into consideration; for the assumed dynamic model, only the gravity of the Sun, the Moon, and Jupiter is considered. DE430 is used as the ephemeris (Folkner et al. 2014). Atmosphere density is calculated with the NRLMSISE-00 model proposed by Picone et al. (2002) for both dynamic models. The drag coefficient C_d and the single-parameter reflection coefficient C_r are assumed to be constant. These models are implemented using the orbit calculation and parameter estimation software developed by the State Key Laboratory of Astronautic Dynamics.

Table 1

Parameters of true dynamic model and assumed dynamic model used for estimated orbit and predicted orbit

Parameters | True model | Assumed model
Earth shape | WGS84 | WGS84
Harmonic gravity field | 40 × 40 | 20 × 20
Third-body perturbation | Sun, planets, Pluto, the Moon | Sun, Jupiter, the Moon
Atmosphere model | NRLMSISE-00 | NRLMSISE-00

Range (ρ), azimuth (α), and elevation (η) measurements from three ground-based stations, whose parameters are shown in Table 2, are simulated. When a satellite is 3° above the horizon, it is considered observable and measurements are generated. The observation errors follow normal distributions with zero biases; the noise levels of the azimuth, elevation, and range are listed in Table 2. For the estimation process, the least squares method is used to estimate the orbit at the starting epoch of the given track. Taking epoch t_i as an example, the measurement data of the past 12 h (from t_i − 12 h to t_i) are used, and the orbit X_i at epoch t_i is estimated. Since an LEO satellite is simulated in this article, the orbit X_i and the drag coefficient C_{d,i} at epoch t_i are estimated using the assumed dynamic model. Once the estimation process has converged, the orbit is propagated to future epochs with the same assumed dynamic model to generate the predicted orbit. The maximum prediction duration is chosen to be Δt = 7 days. Parameters of the simulated satellite are presented in Table 3; this satellite is used to generate the data for training the ML model.

Table 2

Parameters of three radar stations

Station | Eglin, FL | Clear, AK | Kaena Point, HI
Latitude (deg) | 30.57 | 64.29 | 21.57
Longitude (deg) | 86.21 | 149.19 | 158.27
Altitude (m) | 34.7 | 213.3 | 300.2
Maximum range (km) | 13,210 | 4,910 | 6,380
Feasible elevation (deg) | 1–90 | 1–90 | 1–90
σ_ρ (m) | 32.1 | 62.5 | 92.5
σ_α (deg) | 0.0154 | 0.0791 | 0.0224
σ_η (deg) | 0.0147 | 0.024 | 0.0139
Table 3

Parameters of simulated satellite

Parameter name Parameter value
Semi-major axis a (km) 6783.34
Eccentricity e 0.006793
Inclination i (deg) 51.6393
Argument of perigee ω (deg) 14.5438
RAAN Ω (deg) 262.6471
Mean anomaly (deg) 345.5909
Area-to-mass ratio 0.05
Drag coefficient C d 2.2
Reflection coefficient C r 1.25

2.2 Review of related ML method

2.2.1 XGBoost model

XGBoost is a scalable ML system of tree boosting proposed by Chen and Guestrin (2016). It is an important decision-tree-based ensemble ML algorithm with classification and regression tree (CART; Trendowicz and Jeffery 2014) as the base learner, which can describe the complex nonlinear relationship between input and output data with low overfitting risk. XGBoost has been successfully applied in many tasks, such as the classification of COVID-19 patient data (Dong et al. 2021), flash flood risk assessment (Ma et al. 2021), and prediction and analysis of train arrival delay (Shi et al. 2021). The objective function of the model can be described as follows:

(1) $L(\Theta) = l(\Theta) + \Omega(\Theta)$,

where $L$ represents the objective function, $\Theta$ represents the parameters of the training model, $l$ is the training loss function used to evaluate the prediction of the model, and $\Omega(\Theta)$ represents the regularization term that controls the complexity of the model. $l(\Theta)$ can be written as follows:

(2) $l(\Theta) = \sum_{i=1}^{n} l(y_i, \hat{y}_i)$,

where $n$ represents the size of the data, $y_i$ represents the true value, and $\hat{y}_i$ represents the predicted value of the model. In the ensemble learning model, $\hat{y}_i$ is calculated as follows:

(3) $\hat{y}_i = \sum_{j=1}^{J} f_j(X_i)$,

where $X_i$ is the vector of input learning variables, $J$ is the number of trees used in the ensemble learning model, and $f_j$ is the individual regression model.

The second term of Eq. (1) is usually called the regularization term, which can be written as follows:

(4) $\Omega(\Theta) = \sum_{j=1}^{J} \Omega(f_j)$,

(5) $\Omega(f) = \gamma T + \frac{1}{2} \lambda \|\omega\|^2$,

where $\gamma$ is the complexity of each leaf, $T$ is the number of leaves in a decision tree, $\lambda$ is the parameter scaling the penalty, and $\omega$ is the weight vector of the leaves.

During the boosting process, the prediction $\hat{y}_i^k$ of the $i$th sample at the $k$th iteration is calculated as follows:

(6) $\hat{y}_i^k = \hat{y}_i^{k-1} + \eta f_k(x_i)$,

where $\eta$ is called the step-size or shrinkage, also known as the learning rate; it is a hyperparameter of the XGBoost model. Hence, the objective function is written as follows:

(7) $L^k = \sum_{i=1}^{n} l(y_i, \hat{y}_i^{k-1} + f_k(x_i)) + \Omega(f_k)$.

To simplify the objective function, the XGBoost algorithm proceeds in the following steps:

  1. Use a second-order Taylor expansion to approximate the objective function, so that Eq. (7) can be written as follows:

    (8) $L^k = \sum_{i=1}^{n} \left[ l(y_i, \hat{y}_i^{k-1}) + g_k(x_i) f_k(x_i) + \frac{1}{2} h_k(x_i) f_k(x_i)^2 \right] + \Omega(f_k)$,

    where $g_k$ and $h_k$ are the first and second derivatives of the loss $l(y_i, \hat{y}_i^{k-1})$, defined as follows:

    (9) $g_k(x_i) = \frac{\partial l(y_i, \hat{y}_i^{k-1})}{\partial \hat{y}_i^{k-1}}$,

    (10) $h_k(x_i) = \frac{\partial^2 l(y_i, \hat{y}_i^{k-1})}{\partial (\hat{y}_i^{k-1})^2}$.

  2. Since the term $l(y_i, \hat{y}_i^{k-1})$ does not depend on $f_k(x_i)$, it can be dropped; replacing the last term of Eq. (8) by Eq. (5), Eq. (8) becomes:

    (11) $L^k = \sum_{i=1}^{n} \left[ g_k(x_i) f_k(x_i) + \frac{1}{2} h_k(x_i) f_k(x_i)^2 \right] + \gamma T + \frac{1}{2} \lambda \|\omega\|^2$.

  3. Let $I_j^k$ denote the set of training samples $x_i$ falling into leaf $j$, on which $f_k$ takes the constant weight $\omega_j^k$; then Eq. (11) can be rewritten as a sum over leaves:

    (12) $L^k = \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j^k} g_k(x_i) \right) \omega_j^k + \frac{1}{2} \left( \sum_{i \in I_j^k} h_k(x_i) \right) (\omega_j^k)^2 \right] + \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} (\omega_j^k)^2$.

    The sums of $g_k(x_i)$ and $h_k(x_i)$ over each leaf can be abbreviated as follows:

    (13) $G_j^k = \sum_{i \in I_j^k} g_k(x_i), \quad H_j^k = \sum_{i \in I_j^k} h_k(x_i)$.

    Then Eq. (12) can be written as follows:

    (14) $L^k = \sum_{j=1}^{T} \left[ G_j^k \omega_j^k + \frac{1}{2} (H_j^k + \lambda) (\omega_j^k)^2 \right] + \gamma T$.

  4. Taking the derivative of Eq. (14) with respect to $\omega_j^k$ for each leaf and setting it to zero, the best weight $\omega_j^k$ is obtained:

    (15) $\omega_j^k = -\frac{G_j^k}{H_j^k + \lambda}$.

  5. The corresponding optimal value can be calculated by:

    (16) $\tilde{L}^k = -\frac{1}{2} \sum_{j=1}^{T} \frac{(G_j^k)^2}{H_j^k + \lambda} + \gamma T$.
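For concreteness, the following numpy sketch evaluates Eqs. (13)–(16): given per-sample gradients and Hessians and a leaf assignment, it computes the optimal leaf weights and the corresponding objective value. The squared-error loss and the hard-coded leaf assignment are illustrative assumptions; in XGBoost the assignment is produced by the tree-splitting search.

```python
# Minimal sketch of Eqs. (13)-(16): optimal leaf weights and objective value.
import numpy as np

def leaf_weights_and_objective(g, h, leaf_idx, lam=1.0, gamma=0.0):
    """g, h: per-sample gradients/Hessians; leaf_idx: leaf id of each sample."""
    T = leaf_idx.max() + 1
    G = np.array([g[leaf_idx == j].sum() for j in range(T)])  # Eq. (13)
    H = np.array([h[leaf_idx == j].sum() for j in range(T)])  # Eq. (13)
    w = -G / (H + lam)                                        # Eq. (15)
    obj = -0.5 * np.sum(G ** 2 / (H + lam)) + gamma * T       # Eq. (16)
    return w, obj

# Squared-error loss l = (y - y_hat)^2, so g = 2*(y_hat - y), h = 2 (Eqs. (9)-(10)).
y, y_hat = np.array([1.0, 2.0, 3.0, 4.0]), np.zeros(4)
g, h = 2.0 * (y_hat - y), 2.0 * np.ones(4)
w, obj = leaf_weights_and_objective(g, h, np.array([0, 0, 1, 1]))
print(w, obj)  # each weight pulls its leaf toward the mean residual, shrunk by lambda
```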

2.2.2 Principal component analysis

PCA (Levada 2020) is a multi-variable analysis method used in applications such as feature selection, lossy data compression, and dimension reduction, which can reduce the number of variables and the redundant information of the original dataset. PCA is implemented on the basis of a database, and its idea is to transform the original set of parameters into a new set of lower dimension that preserves the intrinsic information of the original data.

Assume the original dataset has the following form:

(17) $X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}$,

where m represents the size of the dataset and n represents the number of features of the dataset.

PCA for the dataset proceeds in the following steps:

  1. Evaluate a centered dataset as follows:

    (18) $X_{\text{new}} = \begin{bmatrix} x_{11}-\bar{x}_1 & x_{12}-\bar{x}_1 & \cdots & x_{1n}-\bar{x}_1 \\ x_{21}-\bar{x}_2 & x_{22}-\bar{x}_2 & \cdots & x_{2n}-\bar{x}_2 \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1}-\bar{x}_m & x_{m2}-\bar{x}_m & \cdots & x_{mn}-\bar{x}_m \end{bmatrix}$,

    where $\bar{x}_j = \frac{1}{n} \sum_{i=1}^{n} x_{ji}$ ($j = 1, 2, 3, \ldots, m$).

  2. Calculate the covariance matrix $C = X_{\text{new}} X_{\text{new}}^{T}$.

  3. Calculate the eigenvectors $e_i$ ($i = 1, 2, \ldots, n$) and corresponding eigenvalues $\lambda_i$ ($i = 1, 2, \ldots, n$) of the matrix $C$ as in Eq. (19):

    (19) $C e_i = \lambda_i e_i$,

    and rank the eigenvalues in descending order $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ together with the corresponding eigenvectors $\{e_1, e_2, \ldots, e_n\}$.

  4. Select the smallest d that satisfies Eq. (20):

    (20) $\frac{\sum_{i=1}^{d} \lambda_i}{\sum_{i=1}^{n} \lambda_i} \geq t$,

    where t is the threshold, usually set to 95%.

  5. Generate the new dataset X′ using the selected d principal components:

    (21) $X' = X [e_1, e_2, e_3, \ldots, e_d]$.

As can be seen, the dimension of the transformed dataset X′ is d. The new dataset X′ is also used as the input of the proposed PCA–XGBoost model.
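As an illustration, the following numpy sketch implements steps (1)–(5), assuming the conventional column-wise (per-feature) centering and the 95% threshold of Eq. (20); it is a minimal reimplementation for clarity, not the authors' code.

```python
# Minimal PCA sketch following steps (1)-(5).
import numpy as np

def pca_reduce(X, t=0.95):
    X_new = X - X.mean(axis=0)              # step 1: center the data
    C = X_new.T @ X_new / (len(X) - 1)      # step 2: covariance matrix
    lam, E = np.linalg.eigh(C)              # step 3: eigenvalues/eigenvectors
    order = np.argsort(lam)[::-1]           #         sort in descending order
    lam, E = lam[order], E[:, order]
    ratio = np.cumsum(lam) / lam.sum()
    d = int(np.searchsorted(ratio, t)) + 1  # step 4: smallest d with ratio >= t
    return X_new @ E[:, :d]                 # step 5: project onto d components

X_prime = pca_reduce(np.random.rand(100, 10))
print(X_prime.shape)  # (100, d) -- the reduced input for the PCA-XGBoost model
```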

2.2.3 Hyperparameter optimization

Optimized hyperparameters can improve the learning ability and prediction accuracy of the model. Two methods (Qu et al. 2021), cross-validation and grid search, are used to optimize the key hyperparameters.

Cross-validation is a statistical method used to assess the performance of the regressor. The basic idea is to partition the original data into a training set and a validation set: the training set is used to train the regressor, and the validation set is then used to verify the accuracy of the trained model. In general, the original data are divided into N groups; each group is used once for validation while the remaining N−1 groups form the training set. In this way, N models are obtained, and the average validation accuracy of the N models is used as the regressor's performance index. This method effectively avoids overfitting and underfitting, and the resulting performance estimate is more reliable.
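A minimal sketch of this N-group procedure (here N = 5), assuming scikit-learn's cross_val_score with an XGBoost regressor and placeholder data:

```python
# Five-fold cross-validation: the mean score over the validation folds is the
# performance index of the regressor.
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = np.random.rand(200, 8), np.random.rand(200)  # placeholder data
scores = cross_val_score(XGBRegressor(n_estimators=100), X, y,
                         cv=5, scoring="r2")
print(scores.mean())  # average R^2 over the 5 validation folds
```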

Grid search is another optimization method used in this article. The key hyperparameters of the XGBoost model are summarized as follows:

  1. the number of estimators used in the XGBoost model (n_estimators);

  2. the maximum depth of the tree (max_depth);

  3. the rate at which the model updates its weights, i.e., the learning rate (learning_rate);

  4. the minimum number of samples used in each leaf (min_child_weight);

  5. the ratio of samples drawn from the total training samples for each update (subsample);

  6. the ratio of features selected from the total training features for each update (colsample_bytree);

  7. regularization parameters to prevent overfitting (reg_alpha, reg_lambda);

  8. complexity control to prevent overfitting (gamma).

It is necessary to set the value ranges for the hyperparameters to be optimized; the model is then trained by traversing the different combinations of hyperparameters. Each combination corresponds to a model, and the model error is calculated. By comparing the errors under different combinations, the hyperparameters that meet the prediction requirements are selected and the parameter settings of the model are determined.
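The sketch below illustrates such a search over a few of the hyperparameters listed above, combined with five-fold cross-validation; the value ranges and data are illustrative assumptions, not the grid actually used in this article.

```python
# Grid search with five-fold cross-validation over an illustrative grid.
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

X, y = np.random.rand(200, 8), np.random.rand(200)  # placeholder data
param_grid = {
    "n_estimators": [100, 500],     # number of trees
    "max_depth": [3, 4, 5],         # maximum tree depth
    "learning_rate": [0.05, 0.1],   # shrinkage eta in Eq. (6)
    "min_child_weight": [5, 7],     # minimum samples (weight) per leaf
}
search = GridSearchCV(XGBRegressor(subsample=0.5, colsample_bytree=0.6),
                      param_grid, cv=5, scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_)  # combination with the lowest cross-validated RMSE
```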

2.3 Construction of the learning dataset

Before discussing the construction of the learning dataset used in this article, some notations used throughout this article are introduced for simplicity. The symbol X(t) represents the orbit state at epoch t. The superscript of X(t) denotes the coordinate frame used to express X(t): the classical orbit elements (COE), the earth-centered inertial frame (ECI), the radial-transverse-normal frame (RTN), and the earth-centered fixed frame (ECF). X^COE(t) = [a, e, i, Ω, ω, M]^T; X^ECI(t) = [X, Y, Z, V_X, V_Y, V_Z]^T (X, Y, and Z are the components of position; V_X, V_Y, and V_Z are the components of velocity). X^ECF(t) and X^RTN(t) are similar to X^ECI(t), differing only in the coordinate frame. X_True(t) represents the true orbit at epoch t, X_Est(t) represents the estimated orbit at epoch t, and X_Pre(t_i, t_j) denotes the predicted orbit at epoch t_i based on the estimated orbit X_Est(t_j) (t_i > t_j). e(t) represents the true orbit prediction error, calculated as e(t_i) = X_True(t_i) − X_Pre(t_i, t_j). ξ(t) represents the relative orbit prediction error, calculated as ξ(t_i) = X_Est(t_i) − X_Pre(t_i, t_j).

The following learning variables may contain information contributing to orbit prediction errors:

  1. Prediction duration Δt = t_i − t_j (t_i ≥ t_j);

  2. Relative prediction error ξ(t_j);

  3. Estimated orbit X_Est(t_j);

  4. Estimated drag coefficient C_d(t_j), which is important for LEO satellites;

  5. Predicted orbit X_Pre(t_i, t_j).

The relative prediction error ξ(t_j) is expressed in the COE and RTN frames. The estimated orbit X_Est(t_j) and predicted orbit X_Pre(t_i, t_j) are expressed in all four coordinate frames mentioned earlier. The following experiments are conducted to determine the most suitable learning variables for the ML orbit prediction improvement problem.

The target variable is the true orbit prediction error e(t_i), expressed in the RTN frame: e(t_i) = [e_x, e_y, e_z, e_vx, e_vy, e_vz]^T. Since the target variable has six elements, six ML models are trained separately, one for each element. During the collection of the dataset, each estimated orbit is propagated from epoch t_j to epoch t_i, and the target variable is then calculated. The prediction duration should satisfy Δt ≤ Δt_max. As discussed earlier, Δt_max = 7 days, and the whole dataset is used to train the ML model.
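To make the construction concrete, the sketch below assembles the learning variables listed above into feature vectors and fits one regressor per error component; every array is a random placeholder standing in for quantities produced by the simulation of Section 2.1.

```python
# One feature vector per (t_j, t_i) pair; six XGBoost models, one per component.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 500                               # number of simulated (t_j, t_i) pairs
dt = rng.uniform(0.0, 7.0, (n, 1))    # prediction duration (days)
xi_coe = rng.normal(size=(n, 6))      # relative prediction error, COE frame
xi_rtn = rng.normal(size=(n, 6))      # relative prediction error, RTN frame
x_est_coe = rng.normal(size=(n, 6))   # estimated orbit, COE frame
cd = rng.normal(2.2, 0.1, (n, 1))     # estimated drag coefficient
e_rtn = rng.normal(size=(n, 6))       # target: true prediction error (RTN)

X_train = np.hstack([dt, xi_coe, xi_rtn, x_est_coe, cd])
models = [XGBRegressor(n_estimators=100).fit(X_train, e_rtn[:, k])
          for k in range(6)]          # one model per error component
e_ml = np.column_stack([m.predict(X_train) for m in models])  # learned correction
```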

As illustrated in Figure 2, the learning result of the ML model is denoted as e_ML. In the ideal case, e_ML is expected to be equal to the target variable e(t_i, t_j). However, affected by various factors, e_ML will hardly be exactly equal to the target variable. An indicator e_res(t_i, t_j) is defined as follows:

(22) $e_{\text{res}}(t_i, t_j) = e_{\text{True}}(t_i, t_j) - e_{\text{ML}}(t_i, t_j)$.

e_res(t_i, t_j) represents how close the output of the ML model is to the target variable e_True(t_i, t_j), so its statistical properties are used to evaluate the performance of the proposed ML model.

Figure 2

Illustration of learning and target variables.

2.4 Model evaluation metrics

To evaluate the performance of the proposed model, the following three indicators are mainly used:

(23) $\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i^{\mathrm{True}} - y_i^{\mathrm{Pre}})^2}$,

(24) $P = 100\% \times \frac{\sum_{i=1}^{n} |y_i^{\mathrm{True}} - y_i^{\mathrm{Pre}}|}{\sum_{i=1}^{n} |y_i^{\mathrm{True}}|}$,

(25) $R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i^{\mathrm{True}} - y_i^{\mathrm{Pre}})^2}{\sum_{i=1}^{n} (y_i^{\mathrm{True}} - \bar{y}^{\mathrm{True}})^2}$,

where n is the size of the dataset, y_i^True is the true value of the data point, y_i^Pre is the predicted value of the data point, and ȳ^True is the mean of the true values.

The variable R 2 reveals the strength of the relationship between dependent and independent variables. The range of R 2 is usually (0, 1). When the variable R 2 approaches 1, the relationship between dependent and independent variables becomes stronger and the performance of the regression model becomes better. If the value of the variable R 2 is negative, the dependent and independent variables have weak or no relationship.

The indicators P and RMSE evaluate the prediction accuracy between the true value and the output of the ML model; both reach 0 when the predicted value is identical to the true value. The variable P directly indicates the percentage of the residual error e_res(t_i, t_j) with respect to the true error of the testing data. Lower values of P and RMSE therefore indicate better model performance.
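The three indicators translate directly into numpy as below (a sketch; the absolute-value form of Eq. (24) is assumed, consistent with P being a percentage of the true error):

```python
# Eqs. (23)-(25) for one error component.
import numpy as np

def rmse(y_true, y_pre):
    return np.sqrt(np.mean((y_true - y_pre) ** 2))                          # Eq. (23)

def p_indicator(y_true, y_pre):
    return 100.0 * np.sum(np.abs(y_true - y_pre)) / np.sum(np.abs(y_true))  # Eq. (24)

def r_squared(y_true, y_pre):
    ss_res = np.sum((y_true - y_pre) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot                                            # Eq. (25)
```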

3 Experimental results and discussion

In this section, the proposed PCA–XGBoost model is evaluated on a simulated satellite. First, the XGBoost model is applied to choose the most appropriate combination of features. Then, based on the chosen combination of features, the PCA–XGBoost model is trained. Results demonstrate that the PCA–XGBoost model can greatly improve the orbit prediction accuracy and that it outperforms the XGBoost model alone. The generalization capacity to future epochs and to different satellites is discussed afterwards.

In this article, all the ML methods, including the XGBoost method, the PCA method, and the grid search method, are implemented in Python 3.8. The learning variables are scaled using the minimum and maximum values of each variable. The hyperparameters of the XGBoost model are determined by the grid search method and five-fold cross-validation. Dataset processing is implemented using the pandas and numpy packages.
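A minimal end-to-end sketch of this setup is given below, with scikit-learn's MinMaxScaler and PCA standing in for the scaling and PCA steps, and the Table 4 hyperparameters (tuned for the e_x component) plugged into the XGBoost regressor; the training arrays are placeholders as in Section 2.3.

```python
# Min-max scaling -> PCA -> XGBoost, one pipeline per error component.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from xgboost import XGBRegressor

def make_pca_xgb():
    return make_pipeline(
        MinMaxScaler(),          # scale each learning variable by its min/max
        PCA(n_components=0.95),  # keep components explaining 95% of variance
        XGBRegressor(n_estimators=5000, max_depth=4, min_child_weight=7,
                     learning_rate=0.05, gamma=0, subsample=0.5,
                     colsample_bytree=0.6, reg_alpha=0.1, reg_lambda=0.8),
    )

rng = np.random.default_rng(0)
X_train, e_rtn = rng.normal(size=(500, 20)), rng.normal(size=(500, 6))
pipes = [make_pca_xgb().fit(X_train, e_rtn[:, k]) for k in range(6)]
corrections = np.column_stack([p.predict(X_train) for p in pipes])
```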

3.1 Results on different combinations of features

As discussed in Section 2.3, different combinations of features may affect the orbit prediction accuracy. Experiments based on the XGBoost model have been conducted to choose the most suitable combination of features. e_x is taken as an example, and the optimized hyperparameters are summarized in Table 4. R² is used as a statistical indicator for evaluating the relationship between the dependent and independent variables: if R² is negative, the features are not suitable for the target variable; the higher the R², the better. Results of the overall experiments are presented in Table 5.

Table 4

Optimized parameters of the XGBoost model for e x component

Parameter name Parameter value
n_estimators 5,000
max_depth 4
min_child_weight 7
learning_rate 0.05
gamma 0
subsample 0.5
colsample_bytree 0.6
reg_alpha 0.1
reg_lambda 0.8
Table 5

Experimental design and performance of XGBoost model for orbit prediction

No. | Parameters under consideration | R² for the training dataset (e_x, e_y, e_z, e_vx, e_vy, e_vz) | R² for the testing dataset (e_x, e_y, e_z, e_vx, e_vy, e_vz)
1 | Δt + ξ^COE + ξ^RTN | 0.601, 0.576, 0.239, 0.571, 0.532, −1.330 | 0.341, 0.609, 0.321, 0.071, 0.061, 0.048
2 | C_d + ξ^COE + ξ^RTN | 0.572, 0.436, 0.136, 0.601, 0.573, −1.118 | 0.443, 0.626, 0.410, 0.093, 0.103, 0.053
3 | Δt + C_d + ξ^COE + ξ^RTN | 0.134, 0.606, 0.102, 0.613, 0.607, 0.274 | 0.664, 0.848, 0.359, 0.885, 0.762, 0.563
4 | C_d + X_Est^COE + ξ^COE + ξ^RTN | 0.903, 0.912, 0.879, 0.901, 0.883, 0.941 | 0.806, 0.816, 0.659, 0.803, 0.756, 0.616
5 | Δt + C_d + X_Est^COE + ξ^COE + ξ^RTN | 0.989, 0.994, 0.949, 0.956, 0.943, 0.976 | 0.863, 0.872, 0.731, 0.870, 0.808, 0.695
6 | Δt + C_d + X_Est^ECI + ξ^COE + ξ^RTN | 0.828, 0.710, 0.843, 0.817, 0.788, 0.375 | 0.745, 0.826, 0.666, 0.827, 0.770, 0.689
7 | Δt + C_d + X_Est^ECF + ξ^COE + ξ^RTN | 0.801, 0.453, 0.601, 0.630, 0.716, 0.294 | 0.714, 0.756, 0.610, 0.774, 0.704, 0.601
8 | Δt + C_d + X_Est^RTN + ξ^COE + ξ^RTN | −1.180, 0.287, 0.912, 0.539, −1.31, −1.23 | 0.569, 0.814, 0.152, 0.833, 0.526, 0.237
9 | Δt + C_d + X_Est^RTN + X_Est^COE + ξ^COE + ξ^RTN | 0.989, 0.315, −1.252, 0.641, 0.329, −1.440 | 0.623, 0.823, 0.428, 0.853, 0.699, 0.409
10 | Δt + C_d + X_Est^ECI + X_Est^COE + ξ^COE + ξ^RTN | 0.791, 0.681, 0.591, 0.665, 0.842, 0.645 | 0.811, 0.751, 0.640, 0.765, 0.788, 0.735
11 | Δt + C_d + X_Est^ECF + X_Est^COE + ξ^COE + ξ^RTN | 0.527, 0.386, 0.512, 0.537, 0.685, 0.435 | 0.783, 0.758, 0.136, 0.816, 0.740, 0.686
12 | Δt + C_d + X_Est^ECF + X_Est^RTN + ξ^COE + ξ^RTN | 0.342, 0.113, 0.573, 0.437, 0.339, 0.269 | 0.512, 0.648, 0.413, 0.006, 0.173, 0.103
13 | Δt + C_d + X_Est^ECI + X_Est^RTN + ξ^COE + ξ^RTN | 0.827, 0.510, 0.644, 0.616, 0.788, 0.751 | 0.745, 0.826, 0.665, 0.827, 0.769, 0.689
14 | Δt + C_d + X_Est^RTN + X_Est^COE + X_Pre^RTN + X_Pre^COE | −1.84, −1.32, −1.21, −5.31, −1.08, −1.15 | −0.024, −0.013, −0.36, −0.0028, −0.219, −0.168
15 | Δt + C_d + X_Est^RTN + X_Est^ECI + X_Pre^RTN + X_Pre^ECI | −3.02, −1.94, −2.12, −6.31, −1.84, −1.41 | −0.103, −0.067, −0.33, −0.0248, −0.216, −0.336
16 | Δt + C_d + X_Est^RTN + X_Est^ECF + X_Pre^RTN + X_Pre^ECF | −2.44, −1.36, −2.08, −4.76, −1.77, −1.24 | −0.069, −0.053, −0.45, −0.0139, −0.318, −0.254
17 | Δt + C_d + X_Est^ECI + X_Est^COE + X_Pre^ECI + X_Pre^COE | −2.86, −1.51, −1.09, −5.99, −1.32, −1.31 | −0.032, −0.023, −0.36, −0.0034, −0.161, −0.172
18 | Δt + C_d + X_Est^ECF + X_Est^COE + X_Pre^ECF + X_Pre^COE | −3.95, −1.72, −0.79, −8.22, −2.36, −1.75 | −0.32, −0.26, −1.03, −0.065, −0.243, −0.732

From Table 5, it is found that the best results for both the training and testing datasets are obtained in Run 5. This means that the most suitable input parameters are Δt, C_d, X_Est^COE, ξ^COE, and ξ^RTN. The relative prediction error ξ should be an input parameter for all six elements, and Δt and C_d are also two important factors for orbit prediction of LEO satellites. As for the estimated orbit X_Est, X_Est^COE is the most suitable representation. For Runs 14–18, the R² values are all negative, which means that X_Pre expressed in different coordinate frames is not suitable for the orbit prediction problem. If the number of input parameters is too small, the model performs worse; it is also found that adding more input parameters does not necessarily improve performance. In the following study, the parameters considered in Run 5 are used to test the performance of the proposed XGBoost and PCA–XGBoost models in generalizing to future epochs and to other satellites.

3.2 Results of XGBoost and PCA–XGBoost model

Parameters considered in Run 5 are used to validate the proposed XGBoost and PCA–XGBoost models. The best performance of all six components on the training dataset is presented in Table 6; for each of the six elements, P and RMSE are the minimum and R² the maximum among the experiments. The results of each component are five-fold cross-validation results.

Table 6

Results of XGBoost model and PCA–XGBoost model for the training dataset

Model | Component | R² | P (%) | RMSE
XGBoost | e_x | 0.9895 | 8.34 | 6.27 (m)
XGBoost | e_y | 0.9943 | 5.70 | 372.61 (m)
XGBoost | e_z | 0.9493 | 11.45 | 4.72 (m)
XGBoost | e_vx | 0.9562 | 10.45 | 0.76 (m/s)
XGBoost | e_vy | 0.9426 | 9.40 | 7.15 × 10⁻³ (m/s)
XGBoost | e_vz | 0.9764 | 12.29 | 6.28 × 10⁻³ (m/s)
PCA–XGBoost | e_x | 0.9954 | 6.47 | 4.16 (m)
PCA–XGBoost | e_y | 0.9995 | 2.29 | 241.04 (m)
PCA–XGBoost | e_z | 0.9928 | 6.96 | 2.46 (m)
PCA–XGBoost | e_vx | 0.9993 | 2.94 | 0.3516 (m/s)
PCA–XGBoost | e_vy | 0.9990 | 3.08 | 1.98 × 10⁻³ (m/s)
PCA–XGBoost | e_vz | 0.9922 | 7.24 | 3.61 × 10⁻³ (m/s)

The indicator P is used to compare the XGBoost model and the PCA–XGBoost model. The comparison results for the training dataset are shown in Figure 3. While the XGBoost model performs well, the proposed PCA–XGBoost model is better: for the six elements, it improves on the XGBoost model by 1.86, 3.39, 4.49, 7.51, 6.32, and 5.04%, respectively. Especially for the component e_y, the result of PCA–XGBoost (P = 2.29%) is better than that obtained by Peng and Bai (2018), whose P is 9.6% for the same simulated satellite. The main reason is that PCA reduces the redundant information of the original dataset while preserving its intrinsic information.

Figure 3

Comparison of XGBoost and the proposed PCA–XGBoost model for six components in the training dataset (training on weeks 1–3).

Detailed performance of the PCA–XGBoost model is shown in Figure 4. It shows that the PCA–XGBoost model works for all six components of the training dataset. The mean values of the residuals e_res have all been reduced to almost zero, and the standard deviations have also been significantly reduced.

Figure 4

Performance of the PCA–XGBoost model on the training dataset. (a) P = 6.47%, (b) P = 2.29%, (c) P = 6.96%, (d) P = 2.94%, (e) P = 3.08%, and (f) P = 7.24%.

3.3 Generalization results to future epochs

In this section, generalization results to future epochs are shown. The training data are the historical orbit prediction data of the first 3 weeks (weeks 1–3), and the testing data are those of the following week (week 4). Experimental results for the testing dataset are presented in Table 7, and the comparison between the two models is shown in Figure 5. For the six elements, the PCA–XGBoost model improves on the XGBoost model by 6.95, 5.88, 5.74, 11.17, 7.09, and 5.50%, respectively, on the testing dataset.

Table 7

Results of XGBoost model and PCA–XGBoost model for the testing dataset

Model | Component | R² | P (%) | RMSE
XGBoost | e_x | 0.8628 | 36.81 | 46.53 (m)
XGBoost | e_y | 0.8724 | 20.83 | 9449.95 (m)
XGBoost | e_z | 0.7312 | 51.15 | 17.69 (m)
XGBoost | e_vx | 0.8702 | 21.49 | 10.87 (m/s)
XGBoost | e_vy | 0.8077 | 40.46 | 5.02 × 10⁻² (m/s)
XGBoost | e_vz | 0.6948 | 56.95 | 3.32 × 10⁻² (m/s)
PCA–XGBoost | e_x | 0.9051 | 29.86 | 38.69 (m)
PCA–XGBoost | e_y | 0.9422 | 14.95 | 6356.68 (m)
PCA–XGBoost | e_z | 0.7424 | 46.41 | 17.32 (m)
PCA–XGBoost | e_vx | 0.9595 | 10.32 | 6.07 (m/s)
PCA–XGBoost | e_vy | 0.8452 | 36.38 | 4.50 × 10⁻² (m/s)
PCA–XGBoost | e_vz | 0.7236 | 51.46 | 3.16 × 10⁻² (m/s)
Figure 5

Comparison of XGBoost and PCA–XGBoost model for six components in the testing dataset (training on weeks 1–3 and testing on week 4).

Detailed performance of the proposed PCA–XGBoost model is shown in Figure 6. The horizontal axis shows the prediction duration Δt, and the vertical axis shows the original, ML-predicted, and residual orbit prediction errors, respectively. The testing data are grouped by day in standard boxplots, where the central marker is the median value. The indicator P is given under each panel. From Figure 6 and Table 7, it can be concluded that:

  1. For e_x, e_y, e_vx, and e_vy, both the mean value and the standard deviation have been significantly reduced, although the values of P are larger than the corresponding values for the training data in Figure 3. This is reasonable because the testing dataset differs from the training dataset: after propagation, it may contain information that is not included in the training data.

  2. For e_z and e_vz, the performance is not as good as for the other four components, with P of 46.41 and 51.46%, respectively. Still, Figure 6 shows that the biases and standard deviations have been reduced to a certain extent. The reason for this might be that the cross-track motion has been modeled accurately enough that the simulated orbit prediction errors are mainly noise that cannot be removed.

Figure 6

Performance of the PCA–XGBoost model on the testing dataset. (a) P = 29.86%, (b) P = 14.95%, (c) P = 46.41%, (d) P = 10.32%, (e) P = 36.38%, and (f) P = 51.46%.

Theoretically, the indicator P can be zero, which would mean that the proposed ML model captures all the errors. In reality, P can be close to zero but not exactly zero, due to random noise in the orbit determination process and the intrinsic limitations of ML methods. It is confirmed that the trained PCA–XGBoost model not only learns well on the training dataset but also generalizes well to future epochs for all six components.

3.4 Generalization results to other satellites

Different satellites with ΔRAAN varying from 0 to 45° in steps of 5° are simulated. The PCA–XGBoost model is trained on the original satellite using the training dataset of weeks 1–3 and tested on the testing dataset of week 4 of the new satellites.

Results for the new satellites with different RAAN are shown in Figure 7. If the value of P is larger than 100%, generalization to the new satellite fails because the orbit prediction errors have not been reduced at all. From Figure 7, it can be concluded that the proposed model shows great generalization capacity to new satellites for almost all six components, which suggests that the model might have learned patterns common to LEO satellites. For e_y and e_vx, generalization results are excellent for all ΔRAAN, with P under 20% in both cases, meaning that almost 80% of the prediction errors have been corrected by the trained ML model on each new satellite. The results for e_x and e_vz are less impressive, with P under 60% and 85%, respectively. The results for e_z and e_vy are the worst but still show feasible generalization capabilities for ΔRAAN from about 0 to 30° and 0 to 20°, respectively.

Figure 7

Generalization performance of the PCA–XGBoost model trained on the original satellite and tested on satellites with different RAAN.

Different satellites with the semi-major axis offset Δa varying from 0 to 100 km in steps of 10 km are simulated. The proposed model is trained on the original satellite using the training dataset of weeks 1–3 and tested on the testing dataset of week 4 of the new satellites. The indicator P is chosen to evaluate the performance of the model on the new satellites.

Results for the new satellites are shown in Figure 8. Similar to the RAAN case, the model shows good generalization capability to new satellites for all six elements. Especially for e_y and e_vx, generalization results are excellent for all Δa, with P under 20% in both cases, meaning that almost 80% of the prediction errors have been corrected by the trained ML model on each new satellite for these two components. For e_z and e_vz, generalization results are still impressive, with P under 80%. For e_x, generalization results are good except at Δa = 60 km, which may be caused by various factors, such as the randomness of the dataset in the region around Δa = 60 km. As for e_vy, generalization results are the worst among the six elements but still acceptable for Δa from about 0 to 60 km.

Figure 8

Generalization performance of the PCA–XGBoost model trained on the original satellite and tested on satellites with different semi-major axes.

4 Conclusion

In this article, an effective ML model combining XGBoost with PCA has been proposed, and a series of experiments has been conducted to validate the reliability and effectiveness of this PCA–XGBoost model. The most suitable learning variables have been selected for each of the six orbit elements. Experimental results show that the most suitable learning variables are the prediction duration Δt, the estimated drag coefficient C_d, the estimated orbit X_Est expressed in the COE frame, and the relative prediction error ξ expressed in the COE and RTN frames. The experimental results also demonstrate that the trained PCA–XGBoost model can greatly improve the orbit prediction accuracy on the training dataset: for all six elements, P of the PCA–XGBoost model is under 8%, which means that almost 92% of the prediction errors have been corrected. Generalization capabilities of the trained PCA–XGBoost model have also been investigated. Generalization to future epochs is demonstrated to be good for all six components. Furthermore, the generalization capability to other satellites is studied, where the proposed model is trained on one satellite and tested on other new satellites. Results reveal that the proposed model can be generalized to a relatively wide range of nearby satellites that have not been used in the training process.

Further research is suggested to apply the established framework to real data, where the PCA–XGBoost model is trained on one satellite with a large amount of data and applied to several satellites with small amounts of data. Besides, since the proposed model is data-driven, it is necessary to find an efficient way to train the PCA–XGBoost model when applying it to a new satellite in practice.

  1. Funding information: The authors state no funding involved.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: Yu Jiang, who is the co-author of this article, is a current Editorial Board member of Open Astronomy. This fact did not affect the peer-review process. The authors declare no other conflict of interest.

References

Chen T, Guestrin C. 2016. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. 10.1145/2939672.2939785.

Chen L, Bai XZ, Liang YG, Li KB. 2017. Orbital error analysis based on historical data. Singapore: Springer. 10.1007/978-981-10-2963-9_3.

Cheng L, Wang Z, Jiang F. 2019. Real-time control for fuel-optimal moon landing based on an interactive deep reinforcement learning algorithm. Astrodynamics. 3(4):12. 10.1007/s42064-018-0052-2.

Cheng L, Wang Z, Jiang F, Li J. 2021. Adaptive neural network control of nonlinear systems with unknown dynamics. Adv Space Res. 67(3):1114–1123. 10.1016/j.asr.2020.10.052.

Dong C, Qiao Y, Shang C, Liao X, Yuan X, Cheng Q, et al. 2021. Non-contact screening system for COVID-19 based on XGBoost and logistic regression. Comput Biol Med. 141:105003. 10.1016/j.compbiomed.2021.105003.

Folkner WM, Williams JG, Boggs DH, Park RS, Kuchynka P. 2014. The planetary and lunar ephemerides DE430 and DE431. Interplanetary Network Progress Report. 196(1):42–196.

Gao T, Hauser K. 2019. A data-driven indirect method for nonlinear optimal control. Astrodynamics. 3(4):15. 10.1007/s42064-019-0051-3.

Goh ST, Chia JW, Chin ST, Low KS, Lim LS. 2016. A pre-processed orbital parameters approach for improving CubeSat orbit propagator and attitude determination. Trans Japan Soc Aeronaut Space Sci. 59(5):278–286. 10.2322/tjsass.59.278.

Izzo D, Märtens M, Pan B. 2019. A survey on artificial intelligence trends in spacecraft guidance dynamics and control. Astrodynamics. 3(4):13. 10.1007/s42064-018-0053-6.

Jiang J, Zeng X, Guzzetti D, You Y. 2020. Path planning for asteroid hopping rovers with pre-trained deep reinforcement learning architectures. Acta Astronaut. 171:265–279. 10.1016/j.actaastro.2020.03.007.

Jung O, Seong J, Jung Y, Bang H. 2021. Recurrent neural network model to predict re-entry trajectories of uncontrolled space objects. Adv Space Res. 68(6):2515–2529. 10.1016/j.asr.2021.04.041.

Levada A. 2020. Parametric PCA for unsupervised metric learning. Pattern Recognit Lett. 135:425–430. 10.1016/j.patrec.2020.05.011.

Levit C, Marshall W. 2011. Improved orbit predictions using two-line elements. Adv Space Res. 47(7):1107–1115. 10.1016/j.asr.2010.10.017.

Ma M, Zhao G, He B, Li Q, Dong H, Wang S, et al. 2021. XGBoost-based method for flash flood risk assessment. J Hydrol. 598:126382. 10.1016/j.jhydrol.2021.126382.

Peng H, Bai X. 2018. Improving orbit prediction accuracy through supervised machine learning. Adv Space Res. 61(10):2628–2646. 10.1016/j.asr.2018.03.001.

Peng H, Bai X. 2019. Comparative evaluation of three machine learning algorithms on improving orbit prediction accuracy. Astrodynamics. 3(4):19. 10.1007/s42064-018-0055-4.

Perez D, Bevilacqua R. 2015. Neural network based calibration of atmospheric density models. Acta Astronaut. 110:58–76. 10.1016/j.actaastro.2014.12.018.

Picone JM, Hedin AE, Drob DP, Aikin AC. 2002. NRLMSISE-00 empirical model of the atmosphere: Statistical comparisons and scientific issues. J Geophys Res Space Phys. 107(A12):SIA 15-1–SIA 15-16. 10.1029/2002JA009430.

Qu Z, Xu J, Wang Z, Chi R, Liu H. 2021. Prediction of electricity generation from a combined cycle power plant based on a stacking ensemble and its hyperparameter optimization with a grid-search method. Energy. 227:120309. 10.1016/j.energy.2021.120309.

Sang J, Li B, Chen J, Zhang P, Ning J. 2017. Analytical representations of precise orbit predictions for earth orbiting space objects. Adv Space Res. 59(2):698–714. 10.1016/j.asr.2016.10.031.

Sharma S, Cutler JW. 2015. Robust orbit determination and classification: A learning theoretic approach. Interplanetary Network Progress Report. 203(2):127–130.

Shi R, Xu X, Li J, Li Y. 2021. Prediction and analysis of train arrival delay based on XGBoost and Bayesian optimization. Appl Soft Comput. 109:107538. 10.1016/j.asoc.2021.107538.

Tapley BD. 1994. The JGM-3 gravity model. Annales Geophys. 12(1):C192.

Trendowicz A, Jeffery R. 2014. Classification and regression trees. In: Software project effort estimation. Cham: Springer. 10.1007/978-3-319-03629-8_10.

Uriot T, Izzo D, Simões L, Abay R, Merz K. 2022. Spacecraft collision avoidance challenge: Design and results of a machine learning competition. Astrodynamics. 6:121–140. 10.1007/s42064-021-0101-5.

Wang Q, Hu C, Xu T, Chang G, Hernández Moraleda A. 2017. Impacts of earth rotation parameters on GNSS ultra-rapid orbit prediction: Derivation and real-time correction. Adv Space Res. 60(12):2855–2870. 10.1016/j.asr.2017.09.022.

Wen T, Zeng X, Circi C, Gao Y. 2020. Hop reachable domain on irregularly shaped asteroids. J Guidance Control Dynam. 43(7):1269–1283. 10.2514/1.G004682.

Zeng X, Wen T, Yu Y, Circi C. 2021. Potential hop reachable domain over surfaces of small bodies. Aerospace Sci Technol. 112:106600. 10.1016/j.ast.2021.106600.

Received: 2022-02-27
Revised: 2022-05-14
Accepted: 2022-06-15
Published Online: 2022-06-22

© 2022 Min Zhai et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
