
Principal Component Analysis based on data characteristics for dimensionality reduction of ECG recordings in arrhythmia classification

  • Agnieszka Wosiak
Published/Copyright: September 17, 2019

Abstract

Due to the growing prevalence of heart diseases, computer-aided support for their diagnosis is of great importance. One of the most common heart diseases is cardiac arrhythmia. It is usually diagnosed by measuring heart activity with an electrocardiograph (ECG) and collecting the data as multidimensional medical datasets. However, their storage, analysis and knowledge extraction become highly complex issues. Feature reduction not only saves storage and computing resources, but primarily makes the process of data interpretation more comprehensible. In this paper the new igPCA (in-group Principal Component Analysis) method for feature reduction is proposed. We assume that the set of attributes can be split into subgroups of similar characteristics and then subjected to principal component analysis. The presented method transforms the feature space into a lower dimension and gives insight into the intrinsic structure of the data. The method has been verified by experiments on a dataset of ECG recordings. The obtained effects have been evaluated with regard to the number of retained features and the classification accuracy of arrhythmia types. Experimental results showed the advantage of the presented method over the base PCA approach.

1 Introduction

The progress in health technologies and the growing capabilities of diagnostic equipment make the process of medical analysis and diagnosis highly challenging due to large and multidimensional datasets. Automated knowledge extraction from massive data and medical inference based on data analysis pose highly complex problems. This is mainly due to the limitations imposed by the performance of computer systems, but also because of the methodological problems inherent in multidimensional data analysis, often referred to as "the curse of dimensionality" [1, 2, 3].

Reduction of large datasets can be performed by reducing the number of analyzed parameters (dimensions) or by decreasing the number of analyzed cases. The dimensionality reduction can be carried out through statistical methods, primarily Principal Component Analysis (PCA) [4] or by using feature selection techniques [5, 6]. Dataset cardinality reduction can be achieved by sampling, grouping or instance selection methods [7].

In this research, we propose a modification of the PCA method called igPCA (in-group Principal Component Analysis). It introduces a preprocessing phase that arranges related features into groups of similar distribution. We compare the performance of the considered algorithm in arrhythmia classification with the accuracy attained for the original set of features and for a dataset transformed by standard PCA. We applied our method to reduce data derived from ECG signals in order to improve storage and the inference process in the arrhythmia classification problem. In the research we use the reference "ARRHYTHMIA" dataset, derived from the UCI repository [8]; however, the proposed method can also be applied to real datasets of similar structure.

The remainder of this paper is organized as follows. Section 2 (Background) describes the problem of feature cardinality reduction in terms of intrinsic data characteristics and reviews feature selection techniques based on data characteristics. Section 3 (Method overview) presents the proposed procedure for in-group principal component analysis. Section 4 describes the medical problem of arrhythmia and a dataset resulting from ECG recordings. In Section 5 (Experimental results and discussion) we describe the conducted studies, introduce the data characteristics and discuss the results. Finally, in Section 6 (Conclusions) we summarize our research and outline further work.

2 Background

2.1 Problem statement

The development of computer technologies and their wide use in medicine have led to a significant increase in medical data repositories. The process of extracting information essential for a given treatment from these huge datasets becomes more and more complex and requires analysis of different types of data [9]. Even though collecting more data may contribute to a comprehensive diagnosis, it requires more resources related to storage and processing and increases computing costs [10, 11]. For this reason, a wide range of methods for data complexity reduction are considered. They follow two basic approaches: instance reduction and feature reduction. The difference between them is shown in Figure 1 [7].

Figure 1 Data reduction approaches

In the literature, the problem of data multidimensionality is often referred to as "the curse of dimensionality" [1]. There is no doubt that a larger number of parameters describing cases may lead to more comprehensive analysis. However, the multidimensional nature of data may hinder, and sometimes even prevent, their proper interpretation, and classical approaches to the analysis of such data may not be sufficient [12]. Moreover, it is not very likely that all variables are independent, and collinearity may lead to instability in the solution space and, as a consequence, inconsistent results. Hence, to eliminate the negative impact of multidimensionality on data analysis, key attributes should be identified and the dimensionality reduced in such a way as not to lose the knowledge that could be obtained from the data. Experts continue to play the leading role in selecting the parameters necessary to make a diagnosis, but more and more often the process of discovering medical knowledge is supported by IT techniques.

With medical datasets there is usually no need to reduce the number of cases. However, if data streams derived, for example, from ECG or EEG are considered [13], correct (normal) values or the most common values can be disregarded. The problem of a large number of features describing medical cases (referred to as "multidimensionality of data") is much more common. Multidimensional data are characterized by a very large number of parameters, far exceeding the number of cases.

Two main approaches for dimensionality reduction can be distinguished:

  • feature extraction (feature transformation),

  • feature selection.

The process of feature selection (FS) identifies relevant attributes based on the dataset's characteristics and removes the majority of irrelevant or redundant parameters [14]. The selected subset of features should still reflect the relations and characteristics of the entire set of instances [15, 16]. The goal of feature selection is to build a subset of m features F_S = {x_i1, x_i2, ..., x_im} from the original set of n attributes F = {x_1, x_2, ..., x_n}, with m < n, that optimizes an objective function J. It can be expressed by (1).

(1) (x_1, x_2, \ldots, x_n) \rightarrow (x_{i_1}, x_{i_2}, \ldots, x_{i_m}), \qquad \{x_{i_1}, x_{i_2}, \ldots, x_{i_m}\} = \arg\max_{m,\, i_m} \left[ J(x_{i_1}, x_{i_2}, \ldots, x_{i_m}) \right]
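The objective in (1) is typically optimized heuristically rather than exhaustively. A minimal sketch of greedy forward selection, assuming an arbitrary scoring function J supplied by the caller (all names are illustrative, not from the paper):

```python
import numpy as np

def forward_select(X, y, score_fn, max_features):
    """Greedy forward selection: repeatedly add the feature that most
    improves score_fn(X[:, subset], y); stop when nothing improves."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        # evaluate J on each candidate extension of the current subset
        score, j = max((score_fn(X[:, selected + [j]], y), j) for j in remaining)
        if score <= best_score:      # no candidate improves J: stop
            break
        best_score, selected = score, selected + [j]
        remaining.remove(j)
    return selected, best_score
```

Greedy selection is not guaranteed to find the global optimum of J, but it avoids the combinatorial explosion of evaluating all subsets.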

The application of space transformation techniques introduces another possibility for dimensionality reduction. The result is an entirely new set of features of smaller cardinality, obtained by combining the original attributes. The goal of feature extraction is to find, for a feature vector x ∈ R^n, a mapping y = g(x) : R^n → R^m with m < n, such that the transformed feature vector y ∈ R^m preserves (most of) the information or structure present in R^n. It can be expressed by (2).

(2) (x_1, x_2, \ldots, x_n) \rightarrow (y_{i_1}, y_{i_2}, \ldots, y_{i_m}), \qquad (y_{i_1}, y_{i_2}, \ldots, y_{i_m})^T = g\left( (x_1, x_2, \ldots, x_n)^T \right)

Numerous approaches of space transformation were proposed to meet different criteria and specific requirements of domain applications. Nonetheless, the most common approaches are based on the linear methods and include factor analysis and principal component analysis (PCA) [17, 18, 19].

Principal component analysis is a standard multivariate data analysis technique. It reduces the number of space dimensions while preserving the dataset's variation and intrinsic dependencies [20]. It explains the correlations between variables using a smaller set of linear combinations of these variables, referred to as the principal components. Its use for dimensionality reduction stems from the fact that the total variability of a dataset consisting of m variables can often be retained by a smaller set of k derived variables, constituted by linear combinations of the primary variables; i.e. k principal components carry almost as much information as the original m variables. Consequently, the procedure involves building the first few components that hold the majority of a dataset's variation, instead of investigating thousands of original variables. Furthermore, principal component analysis may also benefit datasets with an average or low number of features [21].
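As a minimal illustration of this idea, PCA can be computed from the eigendecomposition of the covariance matrix of the centered data; a numpy-only sketch with illustrative names:

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its leading principal components and report the
    fraction of total variance each retained component explains."""
    Xc = X - X.mean(axis=0)                    # center each feature
    cov = np.cov(Xc, rowvar=False)             # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # largest variance first
    components = eigvecs[:, order[:n_components]]
    explained_ratio = eigvals[order] / eigvals.sum()
    return Xc @ components, explained_ratio[:n_components]
```

For nearly collinear features, the first component captures almost all of the variance, which is exactly the situation that makes PCA attractive for correlated ECG-derived parameters.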

Feature space reduction is usually performed without considering group structure. However, exploiting the underlying group characteristics may bring additional benefits, making use of structural information about the features and discovering meaningful subsets of features. Moreover, in some domain applications parameters tend to repeat distribution models, which makes incorporating group structure into the feature selection process even more well-founded.

2.2 Related works

Feature selection methods have been an active field of study for decades. However, most of them select features at the individual feature level, whereas considering group structure may contribute to better performance of the subsequent analysis.

Nonetheless, there are many interesting investigations based on group feature selection and the Lasso approach [22]. Yuan and Lin in [23] proposed the group Lasso model, which selects grouped variables and improves prediction in regression problems. In [24] Meier, van de Geer and Bühlmann extended the group Lasso to logistic regression models, which are suitable for high-dimensional data. Jacob, Obozinski and Vert in turn extended the group Lasso to sparsity patterns that are unions of overlapping groups and help recover sparse connected patterns in a graph [25].

Li et al. in [26] perform parameter selection at the group and individual feature levels simultaneously with streaming features. Their GFSSF algorithm identifies relevant features from important groups and selects variables with sparsity at both the group and individual feature levels.

In [27] Murkute and Borkar presented a feature selection method that operates at the group level. Their objective was to perform feature selection within and between groups of features in order to select discriminative features and remove redundant ones, yielding an optimal subset. The method, called EGVS (efficient group variable selection), comprises two stages: within-group variable selection, which selects discriminative features within a group (each feature is evaluated individually), and between-group selection, in which all the features are re-evaluated to remove redundancy.

All the presented studies confirmed the efficiency of grouping features for data reduction. However, the authors focused on feature selection, whereas there is still a lack of solutions based on transformation techniques, which is the subject of the proposed method.

3 Method overview

The proposed in-group feature extraction method (igPCA) is based on principal component analysis that incorporates the diversity in distribution of various parameters. The steps are as follows:

  1. The process starts with data preparation, which aims at adjusting the original dataset to the needs of the analysis.

  2. Feature grouping based on statistical analysis of distribution is carried out and groups of similar characteristics are distinguished:

    1. The first group consists of binary features (B).

    2. The rest of the features are divided into five groups based on the skewness of their frequency distributions:

      1. a highly positive distribution (1),

      2. a moderately positive distribution (2),

      3. a symmetric distribution (3),

      4. a moderately negative distribution (4),

      5. a highly negative distribution (5).

    Differences in skewness for distributions are illustrated in Figure 2.

    The process of splitting starts with designating a center of each group, i.e. the feature whose distribution is:

    1. the most positively skewed,

    2. the most negatively skewed,

    3. the closest to being symmetrical,

    4. of skewness value in the middle between the most positively skewed one and 0,

    5. of skewness value in the middle between the most negatively skewed one and 0.

    The remaining features are assigned to the group with the closest center in terms of skewness.

  3. Principal component analysis is performed for each of the six separate groups of features, resulting in six sets of new features:

    1. FSB - for binary features,

    2. FS1 - for highly positive distributed features,

    3. FS2 - for moderately positive distributed features,

    4. FS3 - for symmetrically distributed features,

    5. FS4 - for moderately negative distributed features,

    6. FS5 - for highly negative distributed features.

  4. The final set of new features is the union of these sets, denoted as (3):

(3) F_{igPCA} = FS_B \cup FS_1 \cup FS_2 \cup FS_3 \cup FS_4 \cup FS_5
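The steps above can be sketched compactly as follows, assuming standardized, non-constant, non-binary features; the skewness computation, the grouping-by-closest-center rule and all helper names are an illustrative reconstruction, not the paper's implementation:

```python
import numpy as np

def skewness(col):
    """Sample skewness of one feature column."""
    d = col - col.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

def ig_pca(X, centers, n_components=3):
    """Assign each feature to the group whose center skewness is closest,
    run PCA independently inside each group, and concatenate (union)
    the resulting principal-component scores."""
    sk = np.array([skewness(X[:, j]) for j in range(X.shape[1])])
    groups = np.argmin(np.abs(sk[:, None] - np.asarray(centers)[None, :]), axis=1)
    parts = []
    for g in np.unique(groups):
        Xg = X[:, groups == g]
        Xg = Xg - Xg.mean(axis=0)                 # center the group
        U, S, Vt = np.linalg.svd(Xg, full_matrices=False)
        k = min(n_components, Vt.shape[0])        # keep up to k components
        parts.append(Xg @ Vt[:k].T)               # group-wise PC scores
    return np.hstack(parts)                       # F_igPCA as a union
```

Running PCA per group keeps components interpretable: each retained component is a combination of features that share a distribution shape, rather than an arbitrary mixture of all attributes.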
Figure 2 Skewness of frequency distribution

4 Data description

Due to the growing prevalence of heart diseases, computer-aided support for their diagnosis is of great importance. One of the most common heart diseases is cardiac arrhythmia. It refers to a medical condition in which the heart beats irregularly, which may be harmless or even life threatening. Therefore correct identification of arrhythmia is essential for further medical treatment [28].

Cardiac arrhythmia is usually diagnosed by measuring heart activity with an electrocardiograph (ECG) and then analyzing the recorded data. Parameter values come in the form of ECG waveforms and can be used along with other information on the patient, including age and medical history. The spectral characteristics and time-domain features of the ECG may be combined to improve arrhythmia classification [29, 30]. However, it may be difficult for medical staff to find dependencies and irregularities in complex and long ECG recordings. Therefore, new solutions for automating arrhythmia diagnosis are considered.

The dataset for the experimental studies derives from the UCI Machine Learning Repository [8] and is referenced as the "ARRHYTHMIA" dataset. It contains medical recordings of 452 patients. Each sample is described by 279 attributes. They include four parameters describing general patient data: age, sex, height and weight. The remaining features relate to ECG recordings, e.g. the average width (in msec) of the linear Q wave, the average width (in msec) of the linear R wave, the average width (in msec) of the linear S wave, the number of intrinsic deflections and the existence of a ragged R wave. Every observation is described by a label designating one of 16 classes. The first class corresponds to a normal ECG recording with no arrhythmia. Recordings labeled 2–15 are classified as different types of cardiac arrhythmia, whereas class 16 refers to unlabeled patients. The details of the dataset were also introduced in [31].

5 Experimental results and discussion

The aim of the experiments was to examine the performance of the proposed in-group Principal Component Analysis for space reduction of ECG data to support its classification. The experiments were conducted on the reference "ARRHYTHMIA" dataset described in Section 4.

The investigation included the following stages:

  1. Data preparation with exclusion of parameters with uniform values (or parameters for which the number of non-zero values was below 10).

  2. Separating features with binary values and putting them together in one group (B).

  3. Grouping the remaining features into 5 groups of similar distribution according to the methodology introduced in Section 3.

  4. Classification based on all features using kNN with k = 5.

  5. Standardization of features and performing PCA on all the features.

  6. Choosing the number of principal components to be retained by the explained variance ratio and a scree plot.

  7. Classification based on the retained number of principal components using kNN with k = 5.

  8. Performing PCA and carrying out step 6 for each of the six groups.

  9. Classification based on the set of features resulting from the union of principal components derived from step 8.

  10. Comparing the results of classification.

All steps of the experimental procedure were implemented in Python language [32]. The key stages and their results are described in the following subsections.

5.1 Data preparation

The original "ARRHYTHMIA" dataset contained parameters with uniform values (usually zeros) or attributes for which the number of rows with non-zero values was below 10 (< 2% of all recordings). These features were removed, and the final number of attributes subjected to further analysis equaled 192.
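This filtering step can be sketched as follows; the helper name and thresholds are illustrative, mirroring the description above:

```python
import numpy as np

def drop_sparse_features(X, min_nonzero=10):
    """Remove features that are uniform (constant) or have fewer than
    min_nonzero non-zero values; return the filtered matrix and the
    indices of the columns that were kept."""
    nonzero = (X != 0).sum(axis=0)            # non-zero count per feature
    uniform = np.all(X == X[0, :], axis=0)    # constant columns
    keep = ~uniform & (nonzero >= min_nonzero)
    return X[:, keep], np.flatnonzero(keep)
```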

5.2 Grouping features

The whole set of features was divided into 6 groups, as described in Table 1. The first column gives the group name, the second the number of features assigned to that group, and the third and fourth columns contain the type of distribution and the range of skewness values. The last column of the original table shows a frequency distribution plot of the center feature of each group, as indicated in the method description (Section 3).

Table 1

Characteristics of separated groups

Group name  No of features included  Type of distribution  Skewness range
Group B     14                       binary                not applicable
Group 1     42                       highly positive       2.5001 – 15.3435
Group 2     39                       moderately positive   0.4110 – 2.4916
Group 3     46                       close to symmetrical  -0.3375 – 0.3932
Group 4     32                       moderately negative   -2.5776 – -0.4483
Group 5     19                       highly negative       -10.2626 – -2.6664

5.3 Principal component analysis

The key element of principal component analysis is determining the number of principal components to be kept for further analysis. Different stopping criteria can be applied [33]. In our approach a scree plot [34] was used. Moreover, according to the Kaiser rule, all components with eigenvalues below 1.0 were dropped [35]. As a result we decided to retain:

  1. 13 principal components after PCA performed on the whole dataset (Figure 3),

  2. no principal components for binary features (Group B),

  3. 3 principal components for every other group (see Figures 4–8).
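The Kaiser rule can be applied programmatically by counting eigenvalues of the correlation matrix (equivalent to the covariance matrix of standardized features) that exceed 1.0; a numpy-only sketch with illustrative names:

```python
import numpy as np

def kaiser_count(X):
    """Number of components retained by the Kaiser rule: eigenvalues of
    the features' correlation matrix that exceed 1.0."""
    R = np.corrcoef(X, rowvar=False)          # correlation matrix
    eigvals = np.linalg.eigvalsh(R)[::-1]     # descending eigenvalues
    return int((eigvals > 1.0).sum()), eigvals
```

Since the eigenvalues of a correlation matrix sum to the number of features, a component with eigenvalue above 1.0 explains more variance than any single standardized feature, which is the rationale behind the rule.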

Figure 3 Scree plot of PCA

Figure 4 Scree plot of igPCA in the Group 1

Figure 5 Scree plot of igPCA in the Group 2

Figure 6 Scree plot of igPCA in the Group 3

Figure 7 Scree plot of igPCA in the Group 4

Figure 8 Scree plot of igPCA in the Group 5

5.4 Classification results

The accuracy of classification based on the set of features resulting from the union of principal components, as proposed in our igPCA method described in Section 3, was compared with the classification accuracies obtained for:

  1. the original set of features,

  2. the dataset described by the principal components of PCA performed on the original dataset.

The experiments have been conducted using 10-fold cross-validation.
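A minimal sketch of this evaluation protocol, i.e. n-fold cross-validation of a 5-NN classifier; numpy-only, Euclidean distance, all names illustrative rather than the exact experimental code:

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_te, k=5):
    """Majority vote among the k nearest training samples (Euclidean)."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_tr[rows]).argmax() for rows in nearest])

def cross_val_accuracy(X, y, k=5, folds=10, seed=0):
    """Mean k-NN accuracy over a shuffled n-fold split."""
    idx = np.random.default_rng(seed).permutation(len(y))
    accs = []
    for fold in np.array_split(idx, folds):
        train = np.setdiff1d(idx, fold)                # held-out complement
        pred = knn_predict(X[train], y[train], X[fold], k)
        accs.append(float((pred == y[fold]).mean()))
    return float(np.mean(accs))
```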

To verify the experimental results, the statistical Mann–Whitney U test was applied. The outcomes were regarded as statistically significant if p-value < 0.05 was observed.
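The U statistic underlying the test can be computed from rank sums; a numpy-only sketch with mid-ranks for ties (in practice scipy.stats.mannwhitneyu also provides the p-value):

```python
import numpy as np

def mann_whitney_u(a, b):
    """Mann-Whitney U statistics for two samples; ties get mid-ranks."""
    combined = np.concatenate([a, b])
    order = combined.argsort()
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    for v in np.unique(combined):          # average ranks over tied values
        tie = combined == v
        ranks[tie] = ranks[tie].mean()
    u_a = ranks[:len(a)].sum() - len(a) * (len(a) + 1) / 2
    return u_a, len(a) * len(b) - u_a
```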

The results of the comparison are presented in Table 2. The first column gives the dataset, the second the number of features for the original dataset or of principal components for the transformed datasets, and the third the average accuracy of ten repeated classification processes.

Table 2

Classification accuracies

Dataset No of attributes Accuracy ± SD p-value
Original 192 0.72 ± 0.10 N/A
After PCA 12 0.75 ± 0.07 < 0.001
After igPCA 10 0.79 ± 0.10 < 0.001

The last column of Table 2 shows the statistical significance of differences compared to the original dataset. One can notice that the feature transformation process significantly improves classification regardless of the PCA variant. However, it must be emphasized that igPCA keeps fewer principal components than PCA and allows additional conclusions to be drawn, pointing out the importance of different groups of attributes. In the case of arrhythmia, the binary features appeared to have no impact on the final structure of principal components. Moreover, it is also noticeable that the remaining groups contribute equally to the final results.

6 Conclusions

In this paper, we evaluated the applicability of in-group principal component analysis to improve the handling of ECG recordings and their analysis for arrhythmia classification.

Cardiac arrhythmia is one of the most common heart diseases. Its diagnosis requires measuring heart activity with an electrocardiograph (ECG) and collecting the data as multidimensional medical datasets. Their storage, analysis and knowledge extraction become highly complex issues. Feature reduction not only saves storage and computing resources, but primarily makes the process of data interpretation more comprehensible.

We proposed the new igPCA (in-group Principal Component Analysis) method for feature reduction. We assumed that the set of attributes can be split into subgroups of similar characteristics and then subjected to principal component analysis. The method has been verified by experiments on a dataset of ECG recordings. The obtained effects have been evaluated with regard to the number of retained features and the accuracy of classification of arrhythmia types. The experimental results, confirmed by statistical verification, showed the advantage of the presented approach over base principal component analysis. igPCA outperformed the base PCA approach in terms of classification accuracy and the number of principal components required for the analysis. Moreover, it reveals insight into the intrinsic data structure.

Each medical study has its own design, measurement characteristics and various assumptions about data structure. Therefore, there is no universal statistical method that deals with all datasets, and new investigations should be performed. Further studies will involve other medical problems, e.g. the juvenile growth restriction disorder in children, where the problem of feature extraction was encountered [6]. They will also focus on investigating the impact of the amount of missing data on the validity of imputation techniques. Other methods for dealing with grouped feature selection will also be considered.

References

[1] Bellman R.E., Adaptive control processes: a guided tour, Princeton University Press, 2015.

[2] Chen L., Curse of dimensionality, In: Liu L., Özsu M.T. (Eds.), Encyclopedia of Database Systems, Springer US, 2009. DOI: 10.1007/978-0-387-39940-9_133

[3] Keogh E., Mueen A., Curse of dimensionality, In: Sammut C., Webb G.I. (Eds.), Encyclopedia of Machine Learning, Springer Science & Business Media, 2011. DOI: 10.1007/978-0-387-30164-8_192

[4] Abdi H., Williams L.J., Principal component analysis, Wiley Interdisciplinary Reviews: Computational Statistics, 2010, 2(4), 433-459. DOI: 10.1002/wics.101

[5] Wosiak A., Zakrzewska D., Feature Selection for Classification Incorporating Less Meaningful Attributes in Medical Diagnostics, In: M. Ganzha, L. Maciaszek, M. Paprzycki (Eds.), Proceedings of the 2014 Federated Conference on Computer Science and Information Systems (7-10 September 2014, Warsaw, Poland), Annals of Computer Science and Information Systems, 2014, 235-240. DOI: 10.15439/2014F296

[6] Wosiak A., Zakrzewska D., Integrating Correlation-Based Feature Selection and Clustering for Improved Cardiovascular Disease Diagnosis, Complexity, 2018. DOI: 10.1155/2018/2520706

[7] Byczkowska-Lipińska L., Wosiak A., Instance Selection Techniques in Reduction of Data Streams Derived from Medical Devices, Przegląd Elektrotechniczny, 2017, 93, 115-118. DOI: 10.15199/48.2017.12.29

[8] Lichman M., UCI Machine Learning Repository, 2017, http://archive.ics.uci.edu/ml

[9] Wojciechowski A., Staniucha R., Mouth features extraction for emotion classification, In: M. Ganzha, L. Maciaszek, M. Paprzycki (Eds.), Proceedings of the 2016 Federated Conference on Computer Science and Information Systems (11-14 September 2016, Gdańsk, Poland), Annals of Computer Science and Information Systems, 2016, 1685-1692. DOI: 10.15439/2016F390

[10] Liu G., Kong L., Gopalakrishnan V., A Partitioning Based Adaptive Method for Robust Removal of Irrelevant Features from High-dimensional Biomedical Datasets, In: AMIA Summits on Translational Science Proceedings, American Medical Informatics Association, 2012, 52-61.

[11] Pyle D., Data preparation for data mining, Morgan Kaufmann, 1999.

[12] Donoho D.L., High-dimensional data analysis: The curses and blessings of dimensionality, AMS Math Challenges Lecture, 2000, 1-33.

[13] Szajerman D., Napieralski P., Lecointe J.P., Joint analysis of simultaneous EEG and eye tracking data for video images, COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, 2018, 37(5), 1870-1884. DOI: 10.1108/COMPEL-07-2018-0281

[14] Hall M.A., Correlation-based feature selection for machine learning, The University of Waikato, 1999.

[15] Guyon I., Elisseeff A., An introduction to variable and feature selection, Journal of Machine Learning Research, 2003, 3, 1157-1182.

[16] Chandrashekar G., Sahin F., A survey on feature selection methods, Computers & Electrical Engineering, 2014, 40(1), 16-28. DOI: 10.1016/j.compeleceng.2013.11.024

[17] Kim J.O., Mueller C.W., Factor analysis: Statistical methods and practical issues, Sage, 1978. DOI: 10.4135/9781412984256

[18] Dunteman G.H., Principal components analysis, Sage, 1989. DOI: 10.4135/9781412985475

[19] Smith L.I., A tutorial on principal components analysis, Cornell University, USA, 2002.

[20] Groth D., Hartmann S., Klie S., Selbig J., Principal components analysis, Computational Toxicology: Volume II, 2013, 527-547. DOI: 10.1007/978-1-62703-059-5_22

[21] Bressan M., Vitria J., On the selection and classification of independent features, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(10), 1312-1317. DOI: 10.1109/TPAMI.2003.1233904

[22] Tibshirani R., Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society: Series B (Methodological), 1996, 58(1), 267-288. DOI: 10.1111/j.2517-6161.1996.tb02080.x

[23] Yuan M., Lin Y., Model selection and estimation in regression with grouped variables, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2006, 68(1), 49-67. DOI: 10.1111/j.1467-9868.2005.00532.x

[24] Meier L., Van De Geer S., Bühlmann P., The group lasso for logistic regression, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2008, 70(1), 53-71. DOI: 10.1111/j.1467-9868.2007.00627.x

[25] Jacob L., Obozinski G., Vert J.P., Group lasso with overlap and graph lasso, In: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, 2009, 433-440. DOI: 10.1145/1553374.1553431

[26] Li H., Wu X., Li Z., Ding W., Group feature selection with streaming features, In: IEEE 13th International Conference on Data Mining, 2013, 1109-1114. DOI: 10.1109/ICDM.2013.137

[27] Murkute N., Borkar P., Effective Method of Feature Selection on Features Possessing Group Structure, International Journal of Computer Science and Information Technologies, 2016, 7(3), 1111-1115.

[28] Gupta V., Srinivasan S., Kudli S.S., Prediction and Classification of Cardiac Arrhythmia, Stanford University, 2014.

[29] Lipinski P., Yatsymirskyy M., Efficient 1D and 2D Daubechies wavelet transforms with application to signal processing, In: International Conference on Adaptive and Natural Computing Algorithms, Springer, Berlin, Heidelberg, 2007, 391-398. DOI: 10.1007/978-3-540-71629-7_44

[30] Clifford G.D., Azuaje F., McSharry P., ECG statistics, noise, artifacts, and missing data, Advanced methods and tools for ECG data analysis, 2006, 55-100.

[31] Guvenir H.A., Acar B., Demiroz G., Cekin A., A supervised machine learning algorithm for arrhythmia analysis, Computers in Cardiology, IEEE, 1997, 433-436. DOI: 10.1109/CIC.1997.647926

[32] van Rossum G., Python tutorial, Technical Report CS-R9526, Centrum voor Wiskunde en Informatica (CWI), Amsterdam, 1995.

[33] Jackson D.A., Stopping rules in principal components analysis: a comparison of heuristical and statistical approaches, Ecology, 1993, 74(8), 2204-2214. DOI: 10.2307/1939574

[34] Cattell R.B., The Scree Test For The Number Of Factors, Multivariate Behavioral Research, 1966, 1(2), 245-276. DOI: 10.1207/s15327906mbr0102_10

[35] Raîche G., Walls T.A., Magis D., Riopel M., Blais J.G., Non-graphical solutions for Cattell's scree test, Methodology, 2013, 9(1), 23-29. DOI: 10.1027/1614-2241/a000051

Received: 2019-06-17
Accepted: 2019-07-02
Published Online: 2019-09-17

© 2019 A. Wosiak, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.

  39. Dynamics and Wear Analysis of Hydraulic Turbines in Solid-liquid Two-phase Flow
  40. On Numerical Solution Of The Time Fractional Advection-Diffusion Equation Involving Atangana-Baleanu-Caputo Derivative
  41. New Complex Solutions to the Nonlinear Electrical Transmission Line Model
  42. The effects of quantum spectrum of 4 + n-dimensional water around a DNA on pure water in four dimensional universe
  43. Quantum Phase Estimation Algorithm for Finding Polynomial Roots
  44. Vibration Equation of Fractional Order Describing Viscoelasticity and Viscous Inertia
  45. The Errors Recognition and Compensation for the Numerical Control Machine Tools Based on Laser Testing Technology
  46. Evaluation and Decision Making of Organization Quality Specific Immunity Based on MGDM-IPLAO Method
  47. Key Frame Extraction of Multi-Resolution Remote Sensing Images Under Quality Constraint
  48. Influences of Contact Force towards Dressing Contiguous Sense of Linen Clothing
  49. Modeling and optimization of urban rail transit scheduling with adaptive fruit fly optimization algorithm
  50. The pseudo-limit problem existing in electromagnetic radiation transmission and its mathematical physics principle analysis
  51. Chaos synchronization of fractional–order discrete–time systems with different dimensions using two scaling matrices
  52. Stress Characteristics and Overload Failure Analysis of Cemented Sand and Gravel Dam in Naheng Reservoir
  53. A Big Data Analysis Method Based on Modified Collaborative Filtering Recommendation Algorithms
  54. Semi-supervised Classification Based Mixed Sampling for Imbalanced Data
  55. The Influence of Trading Volume, Market Trend, and Monetary Policy on Characteristics of the Chinese Stock Exchange: An Econophysics Perspective
  56. Estimation of sand water content using GPR combined time-frequency analysis in the Ordos Basin, China
  57. Special Issue Applications of Nonlinear Dynamics
  58. Discrete approximate iterative method for fuzzy investment portfolio based on transaction cost threshold constraint
  59. Multi-objective performance optimization of ORC cycle based on improved ant colony algorithm
  60. Information retrieval algorithm of industrial cluster based on vector space
  61. Parametric model updating with frequency and MAC combined objective function of port crane structure based on operational modal analysis
  62. Evacuation simulation of different flow ratios in low-density state
  63. A pointer location algorithm for computer visionbased automatic reading recognition of pointer gauges
  64. A cloud computing separation model based on information flow
  65. Optimizing model and algorithm for railway freight loading problem
  66. Denoising data acquisition algorithm for array pixelated CdZnTe nuclear detector
  67. Radiation effects of nuclear physics rays on hepatoma cells
  68. Special issue: XXVth Symposium on Electromagnetic Phenomena in Nonlinear Circuits (EPNC2018)
  69. A study on numerical integration methods for rendering atmospheric scattering phenomenon
  70. Wave propagation time optimization for geodesic distances calculation using the Heat Method
  71. Analysis of electricity generation efficiency in photovoltaic building systems made of HIT-IBC cells for multi-family residential buildings
  72. A structural quality evaluation model for three-dimensional simulations
  73. WiFi Electromagnetic Field Modelling for Indoor Localization
  74. Modeling Human Pupil Dilation to Decouple the Pupillary Light Reflex
  75. Principal Component Analysis based on data characteristics for dimensionality reduction of ECG recordings in arrhythmia classification
  76. Blinking Extraction in Eye gaze System for Stereoscopy Movies
  77. Optimization of screen-space directional occlusion algorithms
  78. Heuristic based real-time hybrid rendering with the use of rasterization and ray tracing method
  79. Review of muscle modelling methods from the point of view of motion biomechanics with particular emphasis on the shoulder
  80. The use of segmented-shifted grain-oriented sheets in magnetic circuits of small AC motors
  81. High Temperature Permanent Magnet Synchronous Machine Analysis of Thermal Field
  82. Inverse approach for concentrated winding surface permanent magnet synchronous machines noiseless design
  83. An enameled wire with a semi-conductive layer: A solution for a better distibution of the voltage stresses in motor windings
  84. High temperature machines: topologies and preliminary design
  85. Aging monitoring of electrical machines using winding high frequency equivalent circuits
  86. Design of inorganic coils for high temperature electrical machines
  87. A New Concept for Deeper Integration of Converters and Drives in Electrical Machines: Simulation and Experimental Investigations
  88. Special Issue on Energetic Materials and Processes
  89. Investigations into the mechanisms of electrohydrodynamic instability in free surface electrospinning
  90. Effect of Pressure Distribution on the Energy Dissipation of Lap Joints under Equal Pre-tension Force
  91. Research on microstructure and forming mechanism of TiC/1Cr12Ni3Mo2V composite based on laser solid forming
  92. Crystallization of Nano-TiO2 Films based on Glass Fiber Fabric Substrate and Its Impact on Catalytic Performance
  93. Effect of Adding Rare Earth Elements Er and Gd on the Corrosion Residual Strength of Magnesium Alloy
  94. Closed-die Forging Technology and Numerical Simulation of Aluminum Alloy Connecting Rod
  95. Numerical Simulation and Experimental Research on Material Parameters Solution and Shape Control of Sandwich Panels with Aluminum Honeycomb
  96. Research and Analysis of the Effect of Heat Treatment on Damping Properties of Ductile Iron
  97. Effect of austenitising heat treatment on microstructure and properties of a nitrogen bearing martensitic stainless steel
  98. Special Issue on Fundamental Physics of Thermal Transports and Energy Conversions
  99. Numerical simulation of welding distortions in large structures with a simplified engineering approach
  100. Investigation on the effect of electrode tip on formation of metal droplets and temperature profile in a vibrating electrode electroslag remelting process
  101. Effect of North Wall Materials on the Thermal Environment in Chinese Solar Greenhouse (Part A: Experimental Researches)
  102. Three-dimensional optimal design of a cooled turbine considering the coolant-requirement change
  103. Theoretical analysis of particle size re-distribution due to Ostwald ripening in the fuel cell catalyst layer
  104. Effect of phase change materials on heat dissipation of a multiple heat source system
  105. Wetting properties and performance of modified composite collectors in a membrane-based wet electrostatic precipitator
  106. Implementation of the Semi Empirical Kinetic Soot Model Within Chemistry Tabulation Framework for Efficient Emissions Predictions in Diesel Engines
  107. Comparison and analyses of two thermal performance evaluation models for a public building
  108. A Novel Evaluation Method For Particle Deposition Measurement
  109. Effect of the two-phase hybrid mode of effervescent atomizer on the atomization characteristics
  110. Erratum
  111. Integrability analysis of the partial differential equation describing the classical bond-pricing model of mathematical finance
  112. Erratum to: Energy converting layers for thin-film flexible photovoltaic structures
Downloaded on 14.9.2025 from https://www.degruyterbrill.com/document/doi/10.1515/phys-2019-0050/html
Scroll to top button