
Gas Turbine Engine Gas-path Fault Diagnosis Based on Improved SBELM Architecture

  • Feng Lu, Jipeng Jiang and Jinquan Huang
Published/Copyright: November 8, 2018

Abstract

Various model-based methods are widely used for aircraft engine fault diagnosis, and all of them depend on an accurate engine model. However, it is difficult to obtain a general engine model with high accuracy because of individual engine differences, lifecycle performance deterioration, and modeling uncertainty. Recently, data-driven diagnostic approaches for aircraft engines have become more popular with the development of machine learning technologies. However, existing data-driven methods for engine fault diagnosis tend to ignore the sparsity and uncertainty of experimental data, which makes fast diagnosis of multiple fault patterns hard to achieve. This paper presents a novel data-driven diagnostic approach using the Sparse Bayesian Extreme Learning Machine (SBELM) for engine fault diagnosis. This methodology achieves fast fault diagnosis without relying on an engine model. To enhance the reliability of fast fault diagnosis and enlarge the number of detectable faults, an SBELM-based multi-output classifier framework is designed. The reduced sparse topology of the ELM is presented and extended from a single classifier to a multi-output classifier for fault diagnosis. The effects of noise and measurement uncertainty are taken into consideration. Simulation results show that the SBELM-based multi-output classifier for engine fault diagnosis is superior to existing data-driven methods with regard to accuracy and computational effort.
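To make concrete the decision rule such a multi-output framework implies, the short sketch below is an illustration under stated assumptions, not code from the paper: it takes m trained binary SBELM sub-classifiers, each returning a sigmoid posterior probability for one fault pattern, and reports the most probable pattern. The names diagnose and classifiers are hypothetical.

import numpy as np

def diagnose(x, classifiers):
    # classifiers: list of m callables, one per fault pattern, each mapping
    # a measurement vector x to a sigmoid posterior probability in [0, 1].
    probs = np.array([clf(x) for clf in classifiers])
    return int(np.argmax(probs))  # index of the most probable fault pattern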

PACS: 47.85.Gj

Funding statement: This paper is supported by the National Natural Science Foundation of China (No. 61304113), the Jiangsu Province Natural Science Foundation (No. BK20130802), and the China Outstanding Postdoctoral Science Foundation (No. 2015T80552). [Correction added after ahead-of-print publication on 20 September 2016: the funding number of the sponsor National Natural Science Foundation was updated from No. 61304133 to No. 61304113.]

Nomenclature

x: training data vector

t: expected output vector

t̂: predicted output

N: number of training data

M: number of hidden neurons

o: input weight vector

r: input bias vector

H: hidden-layer output vector

H⁺: Moore-Penrose generalized inverse of H

A, B: diagonal matrices

SW: flow parameter

SE: efficiency parameter

LP: low pressure

HP: high pressure

NL: low-pressure compressor speed

NH: high-pressure compressor speed

T22: fan outlet temperature

T3: compressor outlet temperature

P3: compressor outlet pressure

P43: turbine inlet pressure

T6: turbine outlet temperature

P6: turbine outlet pressure

NHcor: corrected high-pressure rotor speed

Greek letters

α: hyper-parameter of the prior distribution

β: output weight

β̂: approximated Gaussian mean

σ(·): sigmoid function

Σ⁻¹: approximated Gaussian covariance

Θ: randomly generated hidden-layer parameters

Subscripts

F: fan

C: compressor

H: high-pressure turbine

L: low-pressure turbine

Acknowledgements

The authors wish to thank all of their teammates and the anonymous reviewers for their constructive comments and great help during the writing process, which improved the manuscript significantly.


Appendix

The improved SBELM algorithm for multi-classification. (A minimal NumPy sketch of this procedure is given after the listing.)

  1. Initialization:

    Set the initial number of hidden nodes M (it must be large enough).

    Generate the input weights randomly and compute the hidden-layer output H = [h(Θ; x_1), …, h(Θ; x_N)]^T // shared by all m classifiers

    While 1 to m // m binary classifiers

    w = 0_{M×1}, α = 10^{−5}·1_{M×1} // set up parameters and hyper-parameters

  2. Step 1: Estimation of the output weights w

    1. Σ = 0_{M×M}; // Hessian matrix

    2. g = 0_{M×1}; // gradient

    3. While i = 1 to N // sequentially compute the mapping of every input x_i to h(x_i) with the random ELM hidden weights

      1. Σ ← Σ + y_i(1 − y_i) h(x_i)^T h(x_i)

      2. g ← g + (−1)(t_i − y_i) h(x_i)^T

        End while

    4. Σ ← Σ + diag(α)

    5. g ← g + α∙w // ∙ denotes the element-wise product

    6. Compute the inverse of Σ

    7. Find the step size λ with a line-search method.

    8. w ← w − λΣ^{−1}g

    9. If norm(g) is below a pre-defined gradient tolerance, go to Step 2;

      otherwise, repeat Step 1.

  3. Step 2: Estimation of the hyper-parameters α

    For every α_k

    1. α_k ← (1 − α_k·Σ^{−1}_{kk}) / w_k^2 // Σ^{−1}_{kk} is the kth diagonal element of Σ^{−1}

      End for

  4. Step 3: Sparsity mechanism

    For every α_k

    1. If α_k > a predefined maximum

      Prune α_k, h_k(x), w_k; M ← M − 1

      Retain only the remaining hidden neurons and output weights

      End for

  5. If the absolute difference between two successive values of log α_k is below a given tolerance for every k, stop; otherwise, repeat Step 1 to Step 3.

End while and obtain the improved SBELM multi-classification architecture.
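Below is a minimal NumPy sketch of the listed procedure for a single binary classifier, under stated simplifications: the line search of Step 1.7 is replaced by a full Newton step (λ = 1), and small numerical guards are added to the α update. All function and parameter names (sbelm_train, sbelm_predict, prune_max, and so on) are illustrative choices, not identifiers from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sbelm_train(X, t, M=100, alpha_init=1e-5, prune_max=1e9,
                grad_tol=1e-3, log_alpha_tol=1e-3, max_outer=50):
    # Train one binary SBELM classifier (Steps 1-3 of the appendix).
    # X: (N, d) training inputs; t: (N,) binary targets in {0, 1}.
    N, d = X.shape
    rng = np.random.default_rng(0)
    # Randomly generated hidden parameters Theta: input weights o, biases r.
    o = rng.uniform(-1.0, 1.0, size=(d, M))
    r = rng.uniform(-1.0, 1.0, size=M)
    H = sigmoid(X @ o + r)          # (N, M) hidden-layer outputs, fixed
    keep = np.arange(M)             # indices of surviving hidden neurons
    w = np.zeros(M)                 # output weights
    alpha = np.full(M, alpha_init)  # hyper-parameters
    for _ in range(max_outer):
        Hk = H[:, keep]
        # Step 1: Laplace/Newton estimation of the output weights w.
        for _ in range(100):
            y = sigmoid(Hk @ w)
            S = (Hk * (y * (1.0 - y))[:, None]).T @ Hk + np.diag(alpha)
            g = -Hk.T @ (t - y) + alpha * w
            w = w - np.linalg.solve(S, g)   # full Newton step, lambda = 1
            if np.linalg.norm(g) < grad_tol:
                break
        # Step 2: alpha_k <- (1 - alpha_k * Sinv_kk) / w_k^2, with guards.
        Sinv_kk = np.diag(np.linalg.inv(S))
        old_log_alpha = np.log(alpha)
        alpha = np.abs(1.0 - alpha * Sinv_kk) / np.maximum(w ** 2, 1e-12)
        alpha = np.maximum(alpha, 1e-12)    # keep logs finite
        # Step 3: prune neurons whose alpha exceeds the predefined maximum.
        mask = alpha < prune_max
        keep, w = keep[mask], w[mask]
        alpha, old_log_alpha = alpha[mask], old_log_alpha[mask]
        # Stop when log(alpha) has converged for all surviving weights.
        if keep.size == 0 or np.all(np.abs(np.log(alpha) - old_log_alpha)
                                    < log_alpha_tol):
            break
    return o, r, keep, w

def sbelm_predict(X, o, r, keep, w):
    # Posterior class probability for new inputs, using the pruned network.
    return sigmoid(sigmoid(X @ o + r)[:, keep] @ w)

Running sbelm_train once per fault pattern and taking the most probable of the stacked sbelm_predict outputs reproduces the outer while-loop over the m classifiers; with a fixed random seed the hidden parameters o and r are identical across calls, so the hidden layer is shared among the m sub-classifiers as in the listing above.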

Received: 2016-07-21
Accepted: 2016-09-01
Published Online: 2018-11-08
Published in Print: 2018-12-19

© 2018 Walter de Gruyter GmbH, Berlin/Boston
