Abstract
In the ever-evolving landscape of deep learning (DL), the transformer model has emerged as a formidable neural network architecture, gaining significant traction in neuroimaging-based classification and regression tasks. This paper presents an extensive examination of the transformer’s application in neuroimaging, surveying recent literature to elucidate its current status and research advancement. Commencing with an exposition of the fundamental principles and structures of the transformer model and its variants, this review navigates through the methodologies and experimental findings pertaining to their use in neuroimage classification and regression tasks. We highlight the transformer model’s prowess in neuroimaging, showcasing its exceptional performance in classification tasks and its burgeoning potential in regression tasks. Concluding with an assessment of prevailing challenges and future trajectories, this paper offers insights into prospective research directions. By elucidating the current landscape and envisaging future trends, this review enhances comprehension of the transformer’s role in neuroimaging tasks, furnishing valuable guidance for further inquiry.
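As a brief orientation to the mechanism underlying every architecture discussed in this review, the sketch below implements single-head scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V (Vaswani et al. 2017); the PyTorch framing, tensor shapes, and token counts are illustrative assumptions rather than code from any surveyed study.

```python
# Illustrative sketch only: single-head scaled dot-product attention,
# the core operation of the transformer models surveyed in this review.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V (Vaswani et al. 2017)."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5   # (B, n_q, n_k) similarity scores
    weights = F.softmax(scores, dim=-1)             # attention weights over the keys
    return weights @ V                              # weighted sum of value vectors

# e.g. 197 image-patch tokens of width 256 attending to themselves (self-attention)
x = torch.randn(2, 197, 256)
out = scaled_dot_product_attention(x, x, x)         # -> shape (2, 197, 256)
```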
- Research ethics: Not applicable.
- Author contributions: Xinyu Zhu: Contributed to the literature search and writing of the main manuscript. Lan Lin: Contributed to the revision of the manuscript. Shen Sun, Yutong Wu and Xiangge Ma: Contributed to the idea of the manuscript. The authors have accepted responsibility for the entire content of this manuscript and approved its submission.
- Use of Large Language Models, AI and Machine Learning Tools: None declared.
- Conflict of interest: The authors state no conflict of interest.
- Research funding: None declared.
- Data availability: Not applicable.
Appendix
An overview of transformer-based models for the classification of dementia.
| Research | Model | Task | Dataset | Modalities | Subject Information | Accuracy (%) |
|---|---|---|---|---|---|---|
| Duan et al. (2023) | Aux-ViT | AD spectrum classification | ADNI | sMRI GM | CN 376, AD 64, Total 440 | 89.58 |
| Hoang et al. (2023) | Vision transformer | MCI to AD conversion prediction | ADNI | sMRI | sMCI 340, pMCI 258, Total 598 | 83.27 |
| Hu et al. (2023a) | Conv-Swinformer | AD spectrum classification | ADNI | sMRI | CN 970, MCI 1412, AD 508, Total 2,890 | 93.56 |
| Hu et al. (2023b) | VGG-TSwinformer | MCI to AD conversion prediction | ADNI | sMRI | sMCI 154, pMCI 121, Total 275 | 77.2 |
| Huang and Li (2023) | RST | AD spectrum classification | ADNI, AIBL | sMRI | CN 1451, AD 584, Total 2035 | 99.59 |
| Jun et al. (2023) | Medical transformer | AD spectrum classification | IXI, Cam-CAN, ABIDE | sMRI | CN 433, MCI 748, AD 359, Total 1,540 | 83.47 |
| Kadri et al. (2021) | CrossViT | AD spectrum classification | ADNI, OASIS | sMRI | CN 450, MCI 570, AD 730, Total 1750 | 99 |
| Kadri et al. (2022) | Vision transformer | AD spectrum classification | ADNI, OASIS | sMRI, PET | CN 610, MCI 670, AD 690, Total 1970 | 96 |
| Khatri and Kwon (2023) | SSL-ViT | MCI to AD conversion prediction | ADNI | PET | sMCI 245, pMCI 224, Total 469 | 92.31 |
| Li et al. (2021) | Transformer | AD spectrum classification | ADNI | sMRI, SNP | CN 193, AD 161, sMCI 207, pMCI 130, Total 691 | 91.43 |
| Li et al. (2022a) | Trans-ResNet | AD spectrum classification | UKB, AIBL, ADNI | sMRI | CN 37442, AD 276, Total 37,718 | 93.85 |
| Li et al. (2022b) | CoT-ResNet-18, CCS-ResNet-50 | AD spectrum classification | ADNI | sMRI | CN 116, MCI 187, AD 200, Total 503 | 97.9 |
| Liu et al. (2023a) | Multi-modal mixing transformer | AD spectrum classification | ADNI, AIBL | sMRI, Clinical data | CN 839, AD 359, Total 1,198 | 99.4 |
| Liu et al. (2023b) | TriFormer | AD spectrum classification | ADNI | sMRI, Clinical data | CN 343, AD 271, sMCI 217, pMCI 194, Total 1,025 | 84.1 |
| Sarraf et al. (2023) | OViTAD | AD spectrum classification | ADNI | rs-fMRI, sMRI | CN 207, MCI 906, AD 631, Total 1744 | 99.0 |
| Sun et al. (2021) | Residual network | AD spectrum classification | ADNI | sMRI | CN 255, MCI 205, AD 55, Total 515 | 97.1 |
| Wang et al. (2022) | IGnet | AD spectrum classification | ADNI | sMRI, SNP | CN 205, AD 174, Total 379 | 83.78 |
| Zhao et al. (2023a) | IDA-Net | AD spectrum classification | ADNI, AIBL | sMRI | CN 1282, AD 498, sMCI 724, pMCI 309, Total 2,813 | 92.7 |
| Zheng et al. (2022) | Transformer | MCI to AD conversion prediction | ADNI | sMRI | sMCI 104, pMCI 145, Total 249 | 83.3 |
| Zuo et al. (2022) | ATAT | AD spectrum classification | ADNI | fMRI | CN 86, SMC 82, EMCI 86, LMCI 76, Total 330 | 87.5 |
| Zuo et al. (2023a) | DAGAE | AD spectrum classification | ADNI | fMRI | CN 75, LMCI 75, Total 150 | 85.33 |
| Zuo et al. (2023b) | CT-GAN | AD spectrum classification | ADNI | fMRI, DTI | CN 84, EMCI 80, LMCI 41, AD 63, Total 268 | 90.24 |
| Zuo et al. (2023c) | BSFL | AD spectrum classification | ADNI | DTI, fMRI | CN 82, SMC 82, EMCI 82, LMCI 76, Total 322 | 95.57 |
CN, control normal; MCI, mild cognitive impairment; AD, Alzheimer’s disease; sMCI, stable mild cognitive impairment; pMCI, progressive mild cognitive impairment; SMC, significant memory concern; EMCI, early mild cognitive impairment; LMCI, late mild cognitive impairment; ADNI, Alzheimer’s disease neuroimaging initiative; AIBL, Australian imaging, biomarker and lifestyle; UKB, UK biobank; IXI, information extraction from images; Cam-CAN, Cambridge centre for ageing and neuroscience; ABIDE, autism brain imaging data exchange; OASIS, open access series of imaging studies.
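To make the pipelines summarized in the table concrete, the sketch below assembles the patch-embedding, transformer-encoder, and classification-head stages that most ViT-style models listed above share; the layer sizes, the single-slice 2D input, and the three-way CN/MCI/AD head are illustrative assumptions, not a reproduction of any listed method.

```python
# Minimal sketch (not taken from any surveyed paper): a ViT-style classifier for
# 2D sMRI slices, showing the patch-embed -> transformer encoder -> [CLS] head
# pipeline. All hyperparameters and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class SliceViT(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, depth=6, heads=8, n_classes=3):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Non-overlapping patch embedding implemented as a strided convolution
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)    # e.g. CN / MCI / AD logits

    def forward(self, x):                        # x: (B, 1, H, W) grey-matter slice
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        return self.head(self.encoder(tokens)[:, 0])              # prediction from [CLS]

logits = SliceViT()(torch.randn(2, 1, 224, 224))  # -> shape (2, 3)
```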
References
Abe, S. (2010). Feature selection and extraction. In: Abe, S. (Ed.). Support vector machines for pattern classification. Springer, London, pp. 331–341, https://doi.org/10.1007/978-1-84996-098-4_7.
Adlard, P.A., Tran, B.A., Finkelstein, D.I., Desmond, P.M., Johnston, L.A., Bush, A.I., and Egan, G.F. (2014). A review of β-amyloid neuroimaging in Alzheimer’s disease. Front. Neurosci. 8: 327, https://doi.org/10.3389/fnins.2014.00327.
Alharthi, A.G. and Alzahrani, S.M. (2023). Do it the transformer way: a comprehensive review of brain and vision transformers for autism spectrum disorder diagnosis and classification. Comput. Biol. Med. 167: 107667, https://doi.org/10.1016/j.compbiomed.2023.107667.
Aramadaka, S., Mannam, R., Sankara Narayanan, R., Bansal, A., Yanamaladoddi, V.R., Sarvepalli, S.S., and Vemula, S.L. (2023). Neuroimaging in Alzheimer’s disease for early diagnosis: a comprehensive review. Cureus 15: e38544, https://doi.org/10.7759/cureus.38544.
Ba, J., Kiros, J.R., and Hinton, G.E. (2016). Layer normalization. arXiv.org, abs/1607.06450, https://doi.org/10.48550/arXiv.1607.06450.
Bannadabhavi, A., Lee, S., Deng, W., Ying, R., and Li, X. (2023). Community-aware transformer for autism prediction in fMRI connectome. Lect. Notes Comput. Sci. 14227: 287–297, https://doi.org/10.1007/978-3-031-43993-3_28.
Beheshti, I., Mishra, S., Sone, D., Khanna, P., and Matsuda, H. (2020). T1-weighted MRI-driven brain age estimation in Alzheimer’s disease and Parkinson’s disease. Aging Dis. 11: 618–628, https://doi.org/10.14336/ad.2019.0617.
Bengio, Y. (2013). Deep learning of representations: looking forward. Lect. Notes Comput. Sci. 7978: 1–37, https://doi.org/10.1007/978-3-642-39593-2_1.
Bi, Y., Abrol, A., Fu, Z., and Calhoun, V. (2023). MultiViT: multimodal vision transformer for schizophrenia prediction using structural MRI and functional network connectivity data. In: 2023 IEEE 20th international symposium on biomedical imaging, ISBI, pp. 1–5, https://doi.org/10.1109/ISBI53787.2023.10230385.
Brauwers, G. and Frasincar, F. (2023). A general survey on attention mechanisms in deep learning. IEEE Trans. Knowl. Data Eng. 35: 3279–3298, https://doi.org/10.1109/tkde.2021.3126456.
Brickman, A.M., Zahodne, L.B., Guzman, V.A., Narkhede, A., Meier, I.B., Griffith, E.Y., Provenzano, F.A., Schupf, N., Manly, J.J., Stern, Y., et al. (2015). Reconsidering harbingers of dementia: progression of parietal lobe white matter hyperintensities predicts Alzheimer’s disease incidence. Neurobiol. Aging 36: 27–32, https://doi.org/10.1016/j.neurobiolaging.2014.07.019.
Brown, A., Salo, S.K., and Savage, G. (2023). Frontal variant Alzheimer’s disease: a systematic narrative synthesis. Cortex 166: 121–153, https://doi.org/10.1016/j.cortex.2023.05.007.
Cai, H., Gao, Y., and Liu, M. (2023). Graph transformer geometric learning of brain networks using multimodal MR images for brain age estimation. IEEE Trans. Med. Imaging 42: 456–466, https://doi.org/10.1109/tmi.2022.3222093.
Carey, G., Görmezoğlu, M., de Jong, J.J.A., Hofman, P.A.M., Backes, W.H., Dujardin, K., and Leentjens, A.F.G. (2021). Neuroimaging of anxiety in Parkinson’s disease: a systematic review. Mov. Disord. 36: 327–339, https://doi.org/10.1002/mds.28404.
Child, R., Gray, S., Radford, A., and Sutskever, I. (2019). Generating long sequences with sparse transformers. arXiv.org, abs/1904.10509, https://doi.org/10.48550/arXiv.1904.10509.
Cole, J.H. (2020). Multimodality neuroimaging brain-age in UK biobank: relationship to biomedical, lifestyle, and cognitive factors. Neurobiol. Aging 92: 34–42, https://doi.org/10.1016/j.neurobiolaging.2020.03.014.
Deng, J., Dong, W., Socher, R., Li, L.J., Kai, L., and Li, F.F. (2009). ImageNet: a large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255, https://doi.org/10.1109/CVPR.2009.5206848.
Devlin, J., Chang, M.W., Lee, K., and Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In: 2019 conference of the North American chapter of the association for computational linguistics: human language technologies (NAACL HLT 2019), Vol. 1, pp. 4171–4186.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. (2020). An image is worth 16x16 words: transformers for image recognition at scale. ICLR 2021 - 9th International Conference on Learning Representations, 2021.
Duan, Y., Wang, R., and Li, Y. (2023). Aux-ViT: classification of Alzheimer’s disease from MRI based on vision transformer with auxiliary branch. In: 2023 5th international conference on communications, information system and computer engineering (CISCE), pp. 382–386, https://doi.org/10.1109/CISCE58541.2023.10142358.
Fedus, W., Zoph, B., and Shazeer, N. (2022). Switch transformers: scaling to trillion parameter models with simple and efficient sparsity. J. Mach. Learn. Res. 23.
Franke, K. and Gaser, C. (2019). Ten years of brainAGE as a neuroimaging biomarker of brain aging: what insights have we gained? Front. Neurol. 10: 789, https://doi.org/10.3389/fneur.2019.00789.
Frisoni, G.B., Altomare, D., Thal, D.R., Ribaldi, F., van der Kant, R., Ossenkoppele, R., Blennow, K., Cummings, J., van Duijn, C., Nilsson, P.M., et al. (2022). The probabilistic model of Alzheimer disease: the amyloid hypothesis revised. Nat. Rev. Neurosci. 23: 53–66, https://doi.org/10.1038/s41583-021-00533-w.
Gale, S.A., Acar, D., and Daffner, K.R. (2018). Dementia. Am. J. Med. 131: 1161–1169, https://doi.org/10.1016/j.amjmed.2018.01.022.
Gao, X., Cai, H., and Liu, M. (2023). A hybrid multi-scale attention convolution and aging transformer network for Alzheimer’s disease diagnosis. IEEE J. Biomed. Health Inform. 27: 3292–3301, https://doi.org/10.1109/jbhi.2023.3270937.
Han, K., Wang, Y., Chen, H., Chen, X., Guo, J., Liu, Z., Tang, Y., Xiao, A., Xu, C., Xu, Y., et al. (2023). A survey on vision transformer. IEEE Trans. Pattern Anal. Mach. Intell. 45: 87–110, https://doi.org/10.1109/tpami.2022.3152247.
He, K., Zhang, X., Ren, S., and Sun, J. (2016a). Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp. 770–778, https://doi.org/10.1109/CVPR.2016.90.
He, K., Zhang, X., Ren, S., and Sun, J. (2016b). Identity mappings in deep residual networks. Computer Vision – ECCV 2016, PT IV 9908: 630–645, https://doi.org/10.1007/978-3-319-46493-0_38.
He, S., Feng, Y., Grant, P.E., and Ou, Y. (2022a). Deep relation learning for regression and its application to brain age estimation. IEEE Trans. Med. Imaging 41: 2304–2317, https://doi.org/10.1109/tmi.2022.3161739.
He, S., Grant, P.E., and Ou, Y. (2022b). Global-local transformer for brain age estimation. IEEE Trans. Med. Imaging 41: 213–224, https://doi.org/10.1109/tmi.2021.3108910.
He, K., Gan, C., Li, Z., Rekik, I., Yin, Z., Ji, W., Gao, Y., Wang, Q., Zhang, J., and Shen, D. (2023). Transformers in medical image analysis. Intell. Med. 3: 59–78, https://doi.org/10.1016/j.imed.2022.07.002.
Hoang, G.M., Kim, U.H., and Kim, J.G. (2023). Vision transformers for the prediction of mild cognitive impairment to Alzheimer’s disease progression using mid-sagittal sMRI. Front. Aging Neurosci. 15: 1102869, https://doi.org/10.3389/fnagi.2023.1102869.
Hu, Z., Li, Y., Wang, Z., Zhang, S., and Hou, W. (2023a). Conv-Swinformer: integration of CNN and shift window attention for Alzheimer’s disease classification. Comput. Biol. Med. 164: 107304, https://doi.org/10.1016/j.compbiomed.2023.107304.
Hu, Z., Wang, Z., Jin, Y., and Hou, W. (2023b). VGG-TSwinformer: transformer-based deep learning model for early Alzheimer’s disease prediction. Comput. Methods Programs Biomed. 229: 107291, https://doi.org/10.1016/j.cmpb.2022.107291.
Huang, Y. and Li, W. (2023). Resizer Swin transformer-based classification using sMRI for Alzheimer’s disease. Appl. Sci. 13: 9310, https://doi.org/10.3390/app13169310.
Huang, G., Liu, Z., Maaten, L.V.D., and Weinberger, K.Q. (2017). Densely connected convolutional networks. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), pp. 2261–2269, https://doi.org/10.1109/CVPR.2017.243.
Jack, C.R.Jr., Bernstein, M.A., Fox, N.C., Thompson, P., Alexander, G., Harvey, D., Borowski, B., Britson, P.J., Whitwell, J.L., Ward, C., et al. (2008). The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imaging 27: 685–691, https://doi.org/10.1002/jmri.21049.
Jirsaraie, R.J., Gorelik, A.J., Gatavins, M.M., Engemann, D.A., Bogdan, R., Barch, D.M., and Sotiras, A. (2023). A systematic review of multimodal brain age studies: uncovering a divergence between model accuracy and utility. Patterns 4: 100712, https://doi.org/10.1016/j.patter.2023.100712.
Jo, T., Nho, K., and Saykin, A.J. (2019). Deep learning in Alzheimer’s disease: diagnostic classification and prognostic prediction using neuroimaging data. Front. Aging Neurosci. 11, https://doi.org/10.3389/fnagi.2019.00220.
Jun, E., Jeong, S., Heo, D.W., and Suk, H.I. (2023). Medical transformer: universal encoder for 3-D brain MRI analysis. IEEE Trans. Neural Netw. Learn. Syst.: 1–11, https://doi.org/10.1109/tnnls.2023.3308712.
Kadri, R., Bouaziz, B., Tmar, M., and Gargouri, F. (2021). CrossViT wide residual squeeze-and-excitation network for Alzheimer’s disease classification with self attention ProGAN data augmentation. Int. J. Hybrid. Intell. Syst. 17: 163–177, https://doi.org/10.3233/his-220002.
Kadri, R., Bouaziz, B., Tmar, M., and Gargouri, F. (2022). Multimodal deep learning based on the combination of EfficientNetV2 and ViT for Alzheimer’s disease early diagnosis enhanced by SAGAN data augmentation. IJCISIM 14: 313–325.
Kadri, R., Bouaziz, B., Tmar, M., and Gargouri, F. (2023). Efficient multimodel method based on transformers and CoAtNet for Alzheimer’s diagnosis. Digit. Signal Process. 143: 104229, https://doi.org/10.1016/j.dsp.2023.104229.
Kang, W., Lin, L., Zhang, B., Shen, X., and Wu, S. (2021). Multi-model and multi-slice ensemble learning architecture based on 2D convolutional neural networks for Alzheimer’s disease diagnosis. Comput. Biol. Med. 136: 104678, https://doi.org/10.1016/j.compbiomed.2021.104678.
Kang, W., Lin, L., Sun, S., and Wu, S. (2023). Three-round learning strategy based on 3D deep convolutional GANs for Alzheimer’s disease staging. Sci. Rep. 13: 5750, https://doi.org/10.1038/s41598-023-33055-9.
Ketonen, L.M. (1998). Neuroimaging of the aging brain. Neurol. Clin. 16: 581–598, https://doi.org/10.1016/s0733-8619(05)70082-7.
Khan, S., Naseer, M., Hayat, M., Zamir, S.W., Khan, F.S., and Shah, M. (2022). Transformers in vision: a survey. ACM Comput. Surv. 54, https://doi.org/10.1145/3505244.
Khatri, U. and Kwon, G.R. (2023). Explainable vision transformer with self-supervised learning to predict Alzheimer’s disease progression using 18F-FDG PET. Bioengineering 10, https://doi.org/10.3390/bioengineering10101225.
Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2017). ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2: 1097–1105.
Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE 86: 2278–2324, https://doi.org/10.1109/5.726791.
Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N.M., and Chen, Z. (2020). GShard: scaling giant models with conditional computation and automatic sharding. ICLR 2021 - 9th International Conference on Learning Representations, 2021.
Li, Q., Cai, W., Wang, X., Zhou, Y., Feng, D.D., and Chen, M. (2014). Medical image classification with convolutional neural network. In: 2014 13th international conference on control automation robotics & vision (ICARCV), pp. 844–848, https://doi.org/10.1109/ICARCV.2014.7064414.
Li, Y., Liu, Y., Wang, T., and Lei, B. (2021). A method for predicting Alzheimer’s disease based on the fusion of single nucleotide polymorphisms and magnetic resonance feature extraction. Lect. Notes Comput. Sci. 13050: 105–115, https://doi.org/10.1007/978-3-030-89847-2_10.
Li, C., Cui, Y., Luo, N., Liu, Y., Bourgeat, P., Fripp, J., and Jiang, T. (2022a). Trans-ResNet: integrating transformers and CNNs for Alzheimer’s disease classification. In: 2022 IEEE 19th international symposium on biomedical imaging (ISBI), pp. 1–5, https://doi.org/10.1109/ISBI52829.2022.9761549.
Li, C., Wang, Q., Liu, X., and Hu, B. (2022b). An attention-based CoT-ResNet with channel shuffle mechanism for classification of Alzheimer’s disease levels. Front. Aging Neurosci. 14: 930584, https://doi.org/10.3389/fnagi.2022.930584.
Li, J., Chen, J., Tang, Y., Wang, C., Landman, B.A., and Zhou, S.K. (2023). Transforming medical imaging with transformers? A comparative review of key properties, current progresses, and future perspectives. Med. Image Anal. 85: 102762, https://doi.org/10.1016/j.media.2023.102762.
Lim, B.Y., Lai, K.W., Haiskin, K., Kulathilake, K., Ong, Z.C., Hum, Y.C., Dhanalakshmi, S., Wu, X., and Zuo, X. (2022). Deep learning model for prediction of progressive mild cognitive impairment to Alzheimer’s disease using structural MRI. Front. Aging Neurosci. 14: 876202, https://doi.org/10.3389/fnagi.2022.876202.
Lin, E., Lin, C.H., and Lane, H.Y. (2021). Deep learning with neuroimaging and genomics in Alzheimer’s disease. Int. J. Mol. Sci. 22, https://doi.org/10.3390/ijms22157911.
Lin, T., Wang, Y., Liu, X., and Qiu, X. (2022). A survey of transformers. AI Open 3: 111–132, https://doi.org/10.1016/j.aiopen.2022.10.001.
Littlejohns, T.J., Holliday, J., Gibson, L.M., Garratt, S., Oesingmann, N., Alfaro-Almagro, F., Bell, J.D., Boultwood, C., Collins, R., Conroy, M.C., et al. (2020). The UK biobank imaging enhancement of 100,000 participants: rationale, data collection, management and future directions. Nat. Commun. 11: 2624, https://doi.org/10.1038/s41467-020-15948-9.
Liu, P.J., Saleh, M., Pot, E., Goodrich, B., Sepassi, R., Kaiser, L., and Shazeer, N. (2018). Generating Wikipedia by summarizing long sequences. 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings, 2018.
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021). Swin transformer: hierarchical vision transformer using shifted windows. In: 2021 IEEE/CVF international conference on computer vision (ICCV 2021), pp. 9992–10002, https://doi.org/10.1109/ICCV48922.2021.00986.
Liu, L., Liu, S., Zhang, L., To, X.V., Nasrallah, F., and Chandra, S.S. (2023a). Cascaded multi-modal mixing transformers for Alzheimer’s disease classification with incomplete data. Neuroimage 277: 120267, https://doi.org/10.1016/j.neuroimage.2023.120267.
Liu, L., Lyu, J., Liu, S., Tang, X., Chandra, S.S., and Nasrallah, F.A. (2023b). TriFormer: a multi-modal transformer framework for mild cognitive impairment conversion prediction. In: 2023 IEEE 20th international symposium on biomedical imaging, ISBI, pp. 1–4, https://doi.org/10.1109/ISBI53787.2023.10230709.
Liu, L., Sun, S., Kang, W., Wu, S., and Lin, L. (2024). A review of neuroimaging-based data-driven approach for Alzheimer’s disease heterogeneity analysis. Rev. Neurosci. 35: 121–139, https://doi.org/10.1515/revneuro-2023-0033.
Miao, S., Xu, Q., Li, W., Yang, C., Sheng, B., Liu, F., Teame, T., and Yu, X. (2023). MMTFN: multi-modal multi-scale transformer fusion network for Alzheimer’s disease diagnosis. Int. J. Imaging Syst. Technol. 34: e22970, https://doi.org/10.1002/ima.22970.
Mu, Y., Zhao, H., Guo, J., and Li, H. (2022). MSRT: multi-scale spatial regularization transformer for multi-label classification in calcaneus radiograph. In: 2022 IEEE 19th international symposium on biomedical imaging (ISBI), pp. 1–4, https://doi.org/10.1109/ISBI52829.2022.9761435.
Nestor, S.M., Rupsingh, R., Borrie, M., Smith, M., Accomazzi, V., Wells, J.L., Fogarty, J., and Bartha, R. (2008). Ventricular enlargement as a possible measure of Alzheimer’s disease progression validated using the Alzheimer’s disease neuroimaging initiative database. Brain 131: 2443–2454, https://doi.org/10.1093/brain/awn146.
Nyberg, L. (2017). Neuroimaging in aging: brain maintenance. F1000Res 6: 1215, https://doi.org/10.12688/f1000research.11419.1.
Ollier, W., Sprosen, T., and Peakman, T. (2005). UK biobank: from concept to reality. Pharmacogenomics 6: 639–646, https://doi.org/10.2217/14622416.6.6.639.
Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, L., Shazeer, N.M., Ku, A., and Tran, D. (2018). Image transformer. In: International conference on machine learning, Vol. 80.
Parvaiz, A., Khalid, M.A., Zafar, R., Ameer, H., Ali, M., and Fraz, M.M. (2023). Vision transformers in medical computer vision – a contemplative retrospection. Eng. Appl. Artif. Intell. 122: 106126, https://doi.org/10.1016/j.engappai.2023.106126.
Peng, H., Gong, W., Beckmann, C.F., Vedaldi, A., and Smith, S.M. (2021). Accurate brain age prediction with lightweight deep neural networks. Med. Image Anal. 68: 101871, https://doi.org/10.1016/j.media.2020.101871.
Peng, B., Alcaide, E., Anthony, Q.G., Albalak, A., Arcadinho, S., Biderman, S., Cao, H., Cheng, X., Chung, M., Grella, M., et al. (2023). RWKV: reinventing RNNs for the transformer era. In: Findings of the association for computational linguistics: EMNLP 2023, pp. 14048–14077, https://doi.org/10.18653/v1/2023.findings-emnlp.936.
Qiu, J., Ma, H., Levy, O., Yih, S., Wang, S., and Tang, J. (2019). Blockwise self-attention for long document understanding. In: Findings of the association for computational linguistics: EMNLP 2020, pp. 2555–2565, https://doi.org/10.18653/v1/2020.findings-emnlp.232.
Qodrati, Z., Taji, S.M., Ghaemi, A., Danyali, H., Kazemi, K., and Ghaemi, A. (2023). Brain age estimation with twin vision transformer using hippocampus information applicable to Alzheimer dementia diagnosis. In: 2023 13th international conference on computer and knowledge engineering (ICCKE), pp. 585–589, https://doi.org/10.1109/ICCKE60553.2023.10326248.
Rae, J.W., Potapenko, A., Jayakumar, S.M., Hillier, C., and Lillicrap, T.P. (2020). Compressive transformers for long-range sequence modelling. In: 8th international conference on learning representations, ICLR 2020.
Rao, Y.L., Ganaraja, B., Murlimanju, B.V., Joy, T., Krishnamurthy, A., and Agrawal, A. (2022). Hippocampus and its involvement in Alzheimer’s disease: a review. 3 Biotech 12: 55, https://doi.org/10.1007/s13205-022-03123-4.
Risacher, S.L. and Saykin, A.J. (2019). Neuroimaging in aging and neurologic diseases. Handb. Clin. Neurol. 167: 191–227, https://doi.org/10.1016/b978-0-12-804766-8.00012-1.
Rispoli, V., Schreglmann, S.R., and Bhatia, K.P. (2018). Neuroimaging advances in Parkinson’s disease. Curr. Opin. Neurol. 31: 415–424, https://doi.org/10.1097/wco.0000000000000584.
Roy, A., Saffar, M., Vaswani, A., and Grangier, D. (2020). Efficient content-based sparse attention with routing transformers. Trans. Assoc. Comput. Linguist. 9: 53–68, https://doi.org/10.1162/tacl_a_00353.
Sarraf, S., Sarraf, A., DeSouza, D.D., Anderson, J.A.E., and Kabia, M. (2023). OViTAD: optimized vision transformer to predict various stages of Alzheimer’s disease using resting-state fMRI and structural MRI data. Brain Sci. 13: 260, https://doi.org/10.3390/brainsci13020260.
Schmidhuber, J. (2015). Deep learning in neural networks: an overview. Neural Networks 61: 85–117, https://doi.org/10.1016/j.neunet.2014.09.003.
Shafiq, M. and Gu, Z.Q. (2022). Deep residual learning for image recognition: a survey. Appl. Sci. 12, https://doi.org/10.3390/app12188972.
Simonyan, K. and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015.
Smith, C.D., Malcein, M., Meurer, K., Schmitt, F.A., Markesbery, W.R., and Pettigrew, L.C. (1999). MRI temporal lobe volume measures and neuropsychologic function in Alzheimer’s disease. J. Neuroimaging 9: 2–9, https://doi.org/10.1111/jon1999912.
Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., Downey, P., Elliott, P., Green, J., Landray, M., et al. (2015). UK biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 12: e1001779, https://doi.org/10.1371/journal.pmed.1001779.
Sun, H., Wang, A., Wang, W., and Liu, C. (2021). An improved deep residual network prediction model for the early diagnosis of Alzheimer’s disease. Sensors 21, https://doi.org/10.3390/s21124182.
Sun, Y., Dong, L., Huang, S., Ma, S., Xia, Y., Xue, J., Wang, J., and Wei, F. (2023). Retentive network: a successor to transformer for large language models. arXiv.org, abs/2307.08621, https://doi.org/10.48550/arXiv.2307.08621.
Tay, Y., Dehghani, M., Bahri, D., and Metzler, D. (2022). Efficient transformers: a survey. ACM Comput. Surv. 55: 1–28, https://doi.org/10.1145/3530811.
Varanasi, L.V.S.K.B.K. and Dasari, C.M. (2022). PsychNet: explainable deep neural networks for psychiatric disorders and mental illness. In: 2022 IEEE 6th conference on information and communication technology (CICT), pp. 1–6, https://doi.org/10.1109/CICT56698.2022.9997832.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. In: Advances in neural information processing systems 30 (NIPS 2017), pp. 30.
Wang, A., Chen, H., Lin, Z., Pu, H., and Ding, G. (2024). RepViT: revisiting mobile CNN from ViT perspective. In: 2024 IEEE/CVF conference on computer vision and pattern recognition, pp. 15909–15920, https://doi.org/10.1109/CVPR52733.2024.01506.
Wang, J.X., Li, Y., Li, X., and Lu, Z.H. (2022). Alzheimer’s disease classification through imaging genetic data with IGnet. Front. Neurosci. 16: 846638, https://doi.org/10.3389/fnins.2022.846638.
Wood, D.A., Kafiabadi, S., Busaidi, A.A., Guilhem, E., Montvila, A., Lynch, J., Townend, M., Agarwal, S., Mazumder, A., Barker, G.J., et al. (2022). Accurate brain-age models for routine clinical MRI examinations. Neuroimage 249: 118871, https://doi.org/10.1016/j.neuroimage.2022.118871.
Wu, Y., Gao, H., Zhang, C., Ma, X., Zhu, X., Wu, S., and Lin, L. (2024). Machine learning and deep learning approaches in lifespan brain age prediction: a comprehensive review. Tomography 10: 1238–1262, https://doi.org/10.3390/tomography10080093.
Xie, Y., Zhang, W., Li, C., Lin, S., Qu, Y., and Zhang, Y. (2014). Discriminative object tracking via sparse representation and online dictionary learning. IEEE Trans. Cybern. 44: 539–553, https://doi.org/10.1109/tcyb.2013.2259230.
Xin, J., Wang, A., Guo, R., Liu, W., and Tang, X. (2023). CNN and swin-transformer based efficient model for Alzheimer’s disease diagnosis with sMRI. Biomed. Signal Process. 86: 105189, https://doi.org/10.1016/j.bspc.2023.105189.
Xu, W., Xu, Y., Chang, T., and Tu, Z. (2021). Co-scale conv-attentional image transformers. In: 2021 IEEE/CVF international conference on computer vision (ICCV 2021), pp. 9961–9970, https://doi.org/10.1109/ICCV48922.2021.00983.
Xu, X., Lin, L., Sun, S., and Wu, S. (2023). A review of the application of three-dimensional convolutional neural networks for the diagnosis of Alzheimer’s disease using neuroimaging. Rev. Neurosci. 34: 649–670, https://doi.org/10.1515/revneuro-2022-0122.
Yang, Z. and Liu, Z. (2020). The risk prediction of Alzheimer’s disease based on the deep learning model of brain 18F-FDG positron emission tomography. Saudi J. Biol. Sci. 27: 659–665, https://doi.org/10.1016/j.sjbs.2019.12.004.
Zeng, H., Shan, X., Feng, Y., and Wen, Y. (2023). MSAANet: multi-scale axial attention network for medical image segmentation. In: 2023 IEEE international conference on multimedia and expo, ICME, pp. 2291–2296, https://doi.org/10.1109/ICME55011.2023.00391.
Zhang, Q.L. and Yang, Y. (2021). ResT: an efficient transformer for visual recognition. In: Advances in neural information processing systems 34 (NeurIPS 2021), pp. 30.
Zhang, L., Wang, M., Liu, M., and Zhang, D. (2020). A survey on deep learning for neuroimaging-based brain disorder analysis. Front. Neurosci. 14: 779, https://doi.org/10.3389/fnins.2020.00779.
Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2021). Understanding deep learning (still) requires rethinking generalization. Commun. ACM 64: 107–115, https://doi.org/10.1145/3446776.
Zhao, Q., Huang, G., Xu, P., Chen, Z., Li, W., Yuan, X., Zhong, G., Pun, C.M., and Huang, Z. (2023a). IDA-Net: inheritable deformable attention network of structural MRI for Alzheimer’s disease diagnosis. Biomed. Signal Process. 84: 104787, https://doi.org/10.1016/j.bspc.2023.104787.
Zhao, Y., Yuan, X., Yuan, Y., Deng, S., and Quan, J. (2023b). Relation extraction: advancements through deep learning and entity-related features. Soc. Netw. Anal. Min. 13: 92, https://doi.org/10.1007/s13278-023-01095-8.
Zhao, Z., Chuah, J.H., Lai, K.W., Chow, C.O., Gochoo, M., Dhanalakshmi, S., Wang, N., Bao, W., and Wu, X. (2023c). Conventional machine learning and deep learning in Alzheimer’s disease diagnosis using neuroimaging: a review. Front. Comput. Neurosci. 17: 1038636, https://doi.org/10.3389/fncom.2023.1038636.
Zheng, G., Zhang, Y., Zhao, Z., Wang, Y., Liu, X., Shang, Y., Cong, Z., Dimitriadis, S.I., Yao, Z., and Hu, B. (2022). A transformer-based multi-features fusion model for prediction of conversion in mild cognitive impairment. Methods 204: 241–248, https://doi.org/10.1016/j.ymeth.2022.04.015.
Zheng, W., Liu, H., Li, Z., Li, K., Wang, Y., Hu, B., Dong, Q., and Wang, Z. (2023). Classification of Alzheimer’s disease based on hippocampal multivariate morphometry statistics. CNS Neurosci. Ther. 29: 2457–2468, https://doi.org/10.1111/cns.14189.
Zhu, C., Ping, W., Xiao, C., Shoeybi, M., Goldstein, T., Anandkumar, A., and Catanzaro, B. (2021). Long-short transformer: efficient transformers for language and vision. Adv. Neural Inf. Process. Syst. 21: 17723–17736.
Zuo, Q., Lu, L., Wang, L., Zuo, J., and Ouyang, T. (2022). Constructing brain functional network by adversarial temporal-spatial aligned transformer for early AD analysis. Front. Neurosci. 16: 1087176, https://doi.org/10.3389/fnins.2022.1087176.
Zuo, Q., Hu, J., Zhang, Y., Pan, J., Jing, C., Chen, X., Meng, X., and Hong, J. (2023a). Brain functional network generation using distribution-regularized adversarial graph autoencoder with transformer for dementia diagnosis. Comput. Model. Eng. Sci. 137: 2129–2147, https://doi.org/10.32604/cmes.2023.028732.
Zuo, Q., Shen, Y., Zhong, N., Chen, C.L.P., Lei, B., and Wang, S. (2023b). Alzheimer’s disease prediction via brain structural-functional deep fusing network. IEEE Trans. Neural Syst. Rehabil. Eng. 31: 4601–4612, https://doi.org/10.1109/tnsre.2023.3333952.
Zuo, Q., Zhong, N., Pan, Y., Wu, H., Lei, B., and Wang, S. (2023c). Brain structure-function fusing representation learning using adversarial decomposed-VAE for analyzing MCI. IEEE Trans. Neural Syst. Rehabil. Eng. 31: 4017–4028, https://doi.org/10.1109/tnsre.2023.3323432.