The deal.II library, version 9.7
Daniel Arndt, Wolfgang Bangerth, Maximilian Bergbauer, Bruno Blais, Marc Fehling, Rene Gassmöller, Timo Heister, Luca Heltai, Martin Kronbichler, Matthias Maier, Peter Munch, Sam Scheuerman, Bruno Turcksin, Siarhei Uzunbajakau, David Wells, and Michał Wichrowski
Abstract
This paper provides an overview of the new features of the finite element library deal.II, version 9.7.
- Research ethics: Not applicable.
- Informed consent: Not applicable.
- Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
- Use of Large Language Models, AI and Machine Learning Tools: None declared.
- Conflict of interest: The authors state no conflict of interest.
- Research funding: deal.II and its developers are financially supported through a variety of funding sources. D. Arndt and B. Turcksin: Research sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy and supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Next-Generation Scientific Software Technologies program, under contract number DE-AC05-00OR22725. W. Bangerth was partially supported by the National Science Foundation under awards OAC-1835673, EAR-1925595, and OAC-2410847. W. Bangerth, T. Heister, and R. Gassmöller were partially supported by the Computational Infrastructure for Geodynamics initiative (CIG), through the National Science Foundation (NSF) under Award No. EAR-2149126 via The University of California, Davis. M. Bergbauer was supported by the German Research Foundation (DFG) under the project “High-Performance Cut Discontinuous Galerkin Methods for Flow Problems and Surface-Coupled Multiphysics Problems” Grant Agreement No. 456365667. B. Blais was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the RGPIN-2020-04510 Discovery Grant and the MMIAOW Canada Research Level 2 in Computer-Assisted Design and Scale-up of Alternative Energy Vectors for Sustainable Chemical Processes. M. Fehling was partially supported by the ERC-CZ grant LL2105 CONTACT, funded by the Czech Ministry of Education, Youth and Sports. He was also partially supported by the Charles University Research Centre Program No. UNCE/24/SCI/005. R. Gassmöller was also partially supported by NSF Awards EAR-1925677 and EAR-2054605. T. Heister was also partially supported by NSF Awards OAC-2015848, EAR-1925575, and OAC-2410848. M. Kronbichler was partially supported by the German Federal Ministry of Research, Technology and Space, project “PDExa: Optimized software methods for solving partial differential equations on exascale supercomputers”, grant agreement No. 16ME0637. L. Heltai is a member of Gruppo Nazionale per il Calcolo Scientifico (GNCS) of Istituto Nazionale di Alta Matematica (INdAM). He was also partially supported by the Italian Ministry of University and Research (MUR), under the grant MUR PRIN 2022 No. 2022WKWZA8 “Immersed methods for multiscale and multiphysics problems (IMMEDIATE)”, and acknowledges the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Pisa, CUP I57G22000700001. M. Kronbichler and L. Heltai were partially supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (call HORIZON-EUROHPC-JU-2023-COE-03, grant agreement No. 101172493 “dealii-X: an Exascale Framework for Digital Twins of the Human Body”). M. Maier was partially supported by NSF Award DMS-2045636 and by the Air Force Office of Scientific Research under grant/contract number FA9550-23-1-0007. D. Wells was supported by NSF Award OAC-1931516. Charles University is acknowledged for providing computing time on the Sněhurka cluster. This research used, in part, resources of the Palmetto Cluster at Clemson University under National Science Foundation awards MRI 1228312, II NEW 1405767, MRI 1725573, and MRI 2018069. The views expressed in this article do not necessarily represent the views of NSF or the United States government.
- Data availability: Not applicable.