Article Open Access

Quantum mechanics and human dynamics

  • Paul L.A. Popelier
Published/Copyright: September 24, 2025

Abstract

The story of the development of quantum mechanics is briefly told, from the perspective of human dynamics, and in the spirit of the sobering truth behind it. Three long-term contributions from the author’s lab are succinctly presented: (i) FFLUX, a machine-learnt potential for the molecular dynamics of peptides in water, thereby overhauling the architecture of classical force fields; (ii) REG, an unbiased and minimal method to compute chemical insight, thereby reducing the vexing gap between it and modern wavefunctions; (iii) AIBL, a gas-phase-based predictor of pKa values in solution, including for tautomerisable and multiprotic compounds of more than 50 atoms, and even able to correct experiment.

Introduction

Purpose of this article

The full story of the development of quantum mechanics has been told many times, sometimes vividly. 1 The story is awkward, starting with an unexpected twist, and inevitably entangled with human nature and human dynamics. Repeating this story here would not do justice to its full complexity and sheer length. Instead, this article will highlight a few lesser-known facts and do so in an undisguised and candid manner. Subsequently, a short but poignant digression explains where things can go wrong when quantum mechanics treads beyond its familiar territory, as it does in cosmology and high-energy physics. We then move on to the safer grounds of quantum chemistry in order to concisely set the scene for three sustained contributions made by the author’s group: FFLUX, AIBL and REG. All three will be explained in a balancing act of brevity and enticement, deliberately leaving out most details such that the overall message is preserved.

The mission of the International Year of Quantum Science and Technology (IYQ) instructs participants to be honest. The mission statement recognises that “many things have been and will be said about quantum science and technology that are untrue or misleading. All people proceed from learning simple, sometimes incorrect stories to deeper, more nuanced ones.” The author strove to keep all this in mind while writing this article. He stayed away from the red-carpet language we read and hear too often. Too much excellent and important work has been done, over many decades if not centuries, by largely silent or even self-effacing contributors who were often pioneers. This phenomenon is a product of human dynamics. But what is meant by human dynamics? In this article an answer will be provided, as the author discovered it himself, both from personal experience and by reading about other scientists.

Human dynamics expresses itself in a number of phenomena appearing in how science is done. The first example is the one just mentioned, which, among other things, produces forgotten pioneers. A large number of scientists do not end up with the credit they are due, for various reasons: their modesty, lack of interest in public recognition, lack of confidence, low self-esteem, not fitting in the red-carpet world, self-isolation by focusing on scientific topics that are (not yet) popular, or simply being out to have fun solving problems, regardless of who understands or appreciates their solutions. But human dynamics can also literally mean travelling to a far destination such as the United States of America, for example in 1925, at a crucial time in the development of quantum mechanics, which was taking place not there but in Europe. This unfortunate travel decision by Born, in an internetless world with ocean steamers, weakened the position of Göttingen (where he was then based) in the swift development of quantum mechanics at that time. Human dynamics can also refer to groupthink, the excesses of which can lead to the abysmal situation in string theory, where no decision is ever taken to abandon it altogether in spite of its massive, stubborn and widely acknowledged flaws. The structure of careers and funding, another product of human dynamics, is partially to blame for this sad state of affairs. Human dynamics is also at work when strong personalities dominate the direction in which a research field progresses, or the way to think about it. If sustained, it brings about power structures that decide who publishes in fancy journals and who does not. Next, the naming of an invention, discovery, equation, law or concept is also subject to human dynamics. Stigler’s law of eponymy 2 claims that no scientific discovery is named after its original discoverer. Associated with this law is a long list of examples, such as Pythagoras’s theorem, which was known to Babylonian mathematicians. Ironically, even Stigler’s law was not actually attributable to Stigler himself. Human dynamics also simply refers to tensions and even rivalries between individual scientists or groups thereof. Finally, human dynamics also involves intellectual laziness, causing scientists to be unaware of developments whose importance they do not (yet?) realise. Such inertia has afflicted science (and mathematics) on a number of occasions.

Finally, one should not forget what a clear-headed European thinker, such as Penrose, thinks about quantum mechanics. He believes that saying that quantum mechanics is incomplete is polite; according to him, the truth is that quantum mechanics is wrong. 3 Physics Nobel prize winner ‘t Hooft also claims that quantum mechanics is wrong in its current form. Of course, as an intellectual creation it is formidable and can boast many successes, both in solid state physics and by underpinning the “whole of chemistry”, hinting at Dirac’s adage 4 of 1929. This practical success cannot be taken away by the remaining conceptual confusion surrounding quantum mechanics, in particular in connection with the measurement problem (the collapse of the wave function) where Penrose’s main gripe lies.

In summary, the full story of quantum mechanics is neither linear nor one-dimensional; this article seeks to convey a flavour of this characterisation.

The origin of quantum mechanics

It is about a hundred years ago that modern quantum mechanics was born. In fact, it was a twin birth, because modern quantum mechanics came in two versions. Both were proposed within a few months of each other, in rapid succession compared to the slower development of the old quantum theory, which had ruled between 1900 and 1925. As a bunch of heuristic corrections to classical mechanics, the old quantum theory was ready to be replaced by something consistent and more powerful, which worked for the right reasons.

The first version is called matrix mechanics, the germs of which were proposed by Heisenberg in his mid twenties. However, it was the technically versed but less famous Kramers who had laid 5 the groundwork for it. The second, slightly younger, version is named wave mechanics and was introduced by Schrödinger while in his late thirties. Although this version was more of a single-handed product, it was not developed in total isolation, as can be deduced from two facts. First, before Schrödinger thought of his priceless equation, he had been spurred to look for such an equation by Debije (also known as Debye), then living outside his native Netherlands. Indeed, back then, there was no predictive mathematical framework inspired by, and corresponding to, the wave-like nature of particulate matter that de Broglie had introduced. Secondly, immediately after Schrödinger had obtained his equation, he sought feedback from the mathematician Weyl. Fascinatingly, Schrödinger initially feared that his equation would not work, fatally so, a fact that is little known. The reason for this (misplaced) fatalism was that he could not imagine how proper vibration frequencies could appear without boundary conditions, unlike in the case of the partial differential equation governing pressure in the classical mechanics of an elastic body.

The beginnings of quantum mechanics were humble and actually rooted in solving a practical problem of commercial value. The manufacturing industry wondered how the efficiency of its light bulbs could be improved, which led to the set-up of the well-known black-body radiation experiment. Many a student may have wondered where this quite bizarre experiment actually comes from, as undergraduate textbooks are typically silent on the motivation behind it all. This conundrum is not solved by artificial intelligence (AI) (as readily offered alongside a Google search), which may wrongly inform those same inquisitive students that “black-body radiation was studied because it led to the development of quantum mechanics”. Not so: no one had an inkling that the whole new science of quantum mechanics would emerge from researching such a mundane problem. A properly informed human knows, unlike AI, that quantum mechanics was stumbled upon during the rather boring fitting of curves measured in connection with black-body radiation, a problem studied for practical and even commercial reasons.

Indeed, Planck, who unwittingly originated quantum mechanics, was one of several researchers hunting for the mathematical expression that best fits the spectral energy density of the black-body as a function of its temperature and emitted frequency. It took the otherwise conservative Planck a leap of imagination to divide energy into discrete increments in order to discover the best fit. More precisely, Planck assumed that radiation energy is emitted not continuously but in discrete packets called quanta. Although he saw this hypothesis as a mere mathematical device, it soon became clear that, right here, nature revealed its fundamentally different character at very small scales; the radically new concept was quantisation.

Ironically, Planck was advised not to study physics because of the then prevailing opinion that physics was complete. This misleading mindset was surely inspired by the spectacular success of classical mechanics and thermodynamics. Alas, misplaced hubris failed to recognise that 19th century physics explained only part of the world. Nature forced completely new thinking upon the scientific enterprise, and this paradigm shift heralded an era of spectacular success.

The doctrine of quantum mechanics

Quantum mechanics as we know it today, and as used in solid-state physics, materials science and computational chemistry, was consolidated by the ever so forceful Bohr, who was also portrayed as a fuzzy thinker, as expressed both in his writing and his speaking. As a dominant figure, he persuaded the Carlsberg brewery to fund a new building, originally called the Institute for Theoretical Physics of the University of Copenhagen. Actually, there were three geographical hotspots of quantum mechanics: Göttingen, Munich and Copenhagen. The latter won out, due to human dynamics, one can argue, including visiting the United States of America at the wrong time, as the Göttingen-based academic Born found out. Secondly, it is again human dynamics that should illuminate why Munich ended up as an underrated hotspot acting as a theatre backstage of “mere” mentoring. How is it that Sommerfeld, while in Munich, supervised seven researchers, both doctorally and postdoctorally, who all went on to win a Nobel prize, while he himself failed to win one, in spite of being nominated 84 times? There also existed researchers, such as Einstein, Schrödinger and Dirac, who remained largely unaffiliated with these hotspots, yet managed to contribute immensely. Of course, being in the right place at the right time, in a different sense, helped.

It was the so-called Copenhagen interpretation of quantum mechanics that eventually gave the impression that quantum mechanics was a closed and completely understood subject. This impression turned out to be false, as one could expect from the unfortunate combination of Bohr’s fuzziness and forcefulness. After all, is he not the one who signed it off? The community ignored any healthy and ultimately helpful criticism, such as that 6 of Hermann. She exposed a fundamental misunderstanding about von Neumann’s proof on the impossibility of embedding quantum mechanics in a hidden variable theory. She showed that the theorem was only true under the specific assumptions made by von Neumann. Her work 7 was overlooked or dismissed for decades until brave and lucid minds such as Bell discovered the flaw and, independently of Hermann, published 8 the same criticism about three decades later, in 1966. One can only imagine how quantum mechanics would look today had its pioneers taken Hermann’s work seriously when it was published in 1935. Bell’s work led to Bell’s inequality, a falsifiable cornerstone of quantum mechanics with far-reaching implications, even philosophical ones. That this inequality is violated was experimentally verified, ultimately by closing any interpretational loopholes. This pivotal result not only vindicated quantum mechanics but also sharpened the contours of its bizarre, counterintuitive character. Curiously, as one of the (last) bastions of major and healthy advances in physics, it does not appear to feature in cosmology. A reader of popular science would intuitively expect Bell’s theorem to heavily shape the regrettably wild speculations that routinely feature in this research field. Finally, we should mention that another such bastion is the experimental confirmation 9 of the Brout-Englert-Higgs boson, also simply known as the Higgs boson. Here too, human dynamics must surely be at work, but one can only speculate which type: does the last incoming scientist have the last word? Networking and international appeal? The community’s laziness?

The Schrödinger equation revisited

The triumph of the Schrödinger equation lies in the fact that it contains only fundamental physical constants (e.g. the speed of light, the mass of the electron, Planck’s constant) while nevertheless being able to predict the whole of chemistry. It does so without invoking any chemical parameters, but may need an inordinate amount of computer time. In fact, the challenge of predicting chemical phenomena is quite similar to that of predicting the weather from the Navier–Stokes equations, which again do not contain any “weather parameters”. After all, both these equations and the Schrödinger equation are information generators; they necessitate the art of finding the most accurate prediction within the least amount of computation time. These compact but powerful equations stimulate the third leg of science: computation and simulation. Whereas the first two legs, theory and experiment, delivered insight into a large array of natural phenomena, it has been clear for a while that precious key equations are not enough to make actual predictions. It is quite an achievement to extract Newton’s equations of motion from a large set of observations, but it is another thing to then use them to predict configurations of our solar system. Even with only three celestial bodies, analytical techniques start faltering, and it is computer simulation that comes to the rescue.

The main challenge for this leg of science is to make the mapping between simulation results and experimental observation exact and perfect. Unfortunately for the field of molecular simulation, we are still a long way off this ambitious goal. As we shall see below, we still cannot predict more than one property of liquid water simultaneously, even to within, say, 50 % of the respective experimental values. Moreover, we have to remain vigilant and critically examine whether any agreement is fortuitous. It is important to be convinced that agreement between experiment and simulation has been obtained for the right reasons. Only then can computational science become a source of reliable information, and can we allow it to subsume older sciences such as chemistry, physics, geology or biology as special applications.

Quite soon after the Schrödinger equation had proved that it successfully predicted and fully explained the spectrum of a single hydrogen atom, it had to prove its prowess in doing the same for something as simple, yet surprisingly complicated, as helium. This was looked into 10 by Heisenberg, who was initially agitated by the Schrödinger equation as a competitor to his own version of quantum mechanics. However, he managed to embrace this equation as a computationally superior alternative almost as soon as it came out of the oven in 1926. Later work 11 by Hylleraas was very much motivated by what he called an “extremely important question, if the numerical calculation according to wave mechanics also leads to exact results for many-electron problems”. The outcome of his pioneering work was promising towards this grand goal. Such tests have continued to be carried out, 12 even very recently, involving increasingly elaborate calculations but with the aspiration of constructing ever more compact wavefunctions. Moving away for a moment from theoretical and computational chemistry towards quantum electrodynamics, even more extreme agreement between experiment and quantum mechanics was reached for the anomalous magnetic moment of the electron. However, the path 13 of this spectacular agreement (of currently 12 digits) was scandalously tortuous, with manipulation as well as genuine error upon error.

In its full form, the Schrödinger equation describes the nuclei that appear in molecules also in terms of a wavefunction. This description is counterintuitive to more than the proverbial 99.99 % of chemists who think of nuclei as points in space, without extension or probabilistic character. The Born-Oppenheimer approximation gives them a license to think like this: molecules are a framework of nuclear points surrounded by a swarm of wavefunction-like electrons. Almost half a century ago Woolley published a paper 14 entitled “Must a molecule have shape?” where he argues that molecular structure is not an intrinsic property. He states that molecular-beam gas-phase experiments probe molecular stationary states that cannot be understood in terms of molecular structures.

Quantum mechanics going astray

The present article sits firmly within computational chemistry as will soon become clear. However, as it has been commissioned in the widest context of quantum mechanics’ role and influence, it is appropriate to slip in a word about how science can go astray, or perhaps even disappear altogether. This sad phenomenon occurs when any experimental verification (i) is absent, or (ii) fails the pivotal attribute of repeatability. The first situation is the case in the field of high-energy physics and the second in cosmology, two fields where quantum mechanics also plays a role.

First, in cosmology we cannot re-run the experiment that is the creation of the Universe, or the scenario of whatever happened at its beginning, if there even is one. Similarly, the grand challenge of the Origin of Life also suffers from the impossibility of experimental repetition. Instead, one has to work like a detective, who is actually in the same position: he (yes, the detective is male this time) cannot re-run the murder. He cannot repeat the murder under different conditions each time, as if it were a typical scientific experiment. Yet, scientific experiments are designed in exactly that way; they exaggerate and control a very small piece of repeatable reality, and then draw cause-effect relationships from the observations. A detective, by contrast, has to work with the observable consequences of a single, poorly understood event from the past. The type of reasoning needed to proceed here is called abduction (or abductive inference). 15 It is the last fortress of human thought that current AI has failed to conquer. Only so-called general AI could begin to master abduction; any attempts at this human-like capability have hitherto nose-dived.

Recently, it is often said that cosmology is in a crisis, a situation exacerbated by an increasing number of incoming experimental observations, each firmer than the last; the upshot is that they deviate from predictions made by standard theory, ever louder and clearer as time progresses. It is crucial to take these deviations seriously and resolve them by a minimal and substantiated theory. It is futile and dangerous to ignore these deviations and to fantasise about the character of the Universe when it was supposedly very small, an ever-varying number of billions of years ago. A fine example of a researcher who focuses hard, with his graduate students, on proposing a new theory that endeavours to explain these deviations is Turok. Over and above working in this correct way, and thereby maintaining a proven scientific tradition, he has a strong sense of minimality. Turok constructs 16 his explanation and theory in a grounded way while using the smallest number of assumptions. This is at variance with the “style” of Hawking, whose ever-changing and unfalsifiable speculations 17 appear far-fetched and non-minimal. This is the same Hawking who stated that physics would be more interesting if the Brout-Englert-Higgs boson had not been found.

Secondly, in theoretical high-energy physics, the state-of-the-art is also dire, that is, it also finds itself in a crisis. The imposing goal of this piece of physics is to unify general relativity[1] and quantum mechanics into a so-called “theory of everything”. The latter name is highly regrettable and even arrogant: it reveals a stunning ignorance about the Universe, and all that is in it, including how it works. Can such a theory of everything explain why the vast majority of humans has ten fingers, for example? Or predict when and how the Russo-Ukrainian war will finish?

Let us never forget that the ultimate arbiter of a theory or a model is experiment. No matter how minimal or elegant this theory, if its predictions do not “make proper contact with experiment”, it is all wrong. By “proper contact” is meant that the theory generates at least one falsifiable prediction, but ideally a number of them, which collectively lead to excellent (or at least promising) agreement between theory and experiment. The hard lesson that such contact is often lacking has yet to be learnt in fundamental physics, for example, where string theory is increasingly seen as failing to be that theory of everything. Note that this failure is far beyond the ultimate veracity of experimental values. In other words, even when better experimental technology offers theoreticians a superior experimental value, string theory would still be wrong. String theory, which was presented as the only game in town for too many years, has not lived up to its repeated promises. This demise was clear already two decades ago. 18 This disconcerting conclusion must have been reached, perhaps even earlier, by perspicacious researchers who are honest with themselves. If free from human dynamics, they can express the view that even updated versions of string theory do not tackle old, well-known problems with it, but actually introduce new problems. Disturbingly, string theory has not been verified by experiment. However, string theorists will remind everyone that the word “yet” should be added here. Worse, string theory has not made even a single experimental prediction. This is why it is apt to designate it as “not even wrong” and not even science. This is what happens if a theory (if it is one in the first place, as some argue) is allowed to develop in an experimental vacuum, as it were. The theory expands in an unbridled way, without any direction from, or judgement by, experiment. No wonder that the YouTuber Hossenfelder is becoming increasingly frantic and increasingly spews venom. This is where quantum mechanics, and the broader science that it connects to, go wrong, in the author’s opinion.

Quantum chemistry

An area where quantum mechanics did not go astray this badly is the study of matter at the atomic scale, i.e. in quantum chemistry and solid-state physics. However, here too one should stay watchful about whether a calculation or prediction works for the right reasons, as illustrated in the next section. Very soon after the introduction of the Schrödinger equation, its immediate application to molecular hydrogen again survived this stringent test. Suddenly, the mystery of the stability and formation of (nonpolar) chemical bonds, which had persisted up to that point in 1927, disappeared. This article is not the place to review the history of quantum chemistry but, in continuation of this article’s undisguised style, it will just mention a quick “the-emperor-is-naked” truth about density functional theory (DFT).

The original dream behind DFT was for it to offer a much faster, and a more intuitive, computational framework to predict properties of matter at roughly ambient conditions. On the one hand, DFT’s superior speed would be guaranteed by the three-dimensional nature of the electron density, as opposed to the horribly high dimensionality of wavefunction space. On the other hand, DFT’s superior intuition would be guaranteed by the more tangible, real-space character of the electron density. Moreover, the latter is measurable and observable. The same cannot be said of a wavefunction, whether describing one electron or many electrons. Yet, ironically, it is those wavefunctions that came to the rescue of DFT, thrice actually. The first time was to make the kinetic energy accurate enough at last, compared with true functionals of the electron density, such as some power of the electron density or of the magnitude of the electron density’s gradient. This development is often described as the birth of DFT, most likely because it at last made DFT sufficiently accurate. However, in reality, this decision caused the partial death of DFT because it moved DFT away from its original wavefunctionless tenet. The second time a wavefunction contamination improved DFT’s accuracy, and thereby further killed off DFT’s main tenet, was the introduction of hybrid functionals. Here, wavefunctions came to the aid of calculating the exchange energy more accurately, as achieved by something as basic as Hartree–Fock theory. The third time was when parameters crept into the functionals and were fitted to experimental data for a set of guiding molecules. This decision also undermined the original, first-principles character of DFT.

In summary, quantum mechanics has its firm origins in embracing an experimental anomaly. From this work emerged a novel paradigm that has withstood extensive testing, each time vindicated as correct. Yet, quantum mechanics’ deep nature is not understood. Quite a few scientists are content with just using it. However, there are also no-nonsense thinkers, such as Penrose, who rightly refuse to dismiss the “observer problem” as a red herring. In the end it is dangerous to surrender to something that is not understood and leave it at that. The history of science offers several examples where allowing mysteries to remain unresolved ultimately leads to a dead end, classical mechanics being a famous example. Quantum mechanics cannot be an exception. Thus, it is best to indeed bother with what quantum mechanics actually means. Whereas ionic liquids, high-throughput screening, systems biology, genomics, buckminsterfullerene, graphene, room-temperature superconductivity and biofuels have come and gone (at least in terms of their hype, while a valuable rump remains), quantum mechanics has stayed, and will surely do so over the next 100 years, but hopefully in a more approachable form.

A critical look at quantum chemistry in action: examples of remaining challenges

Predicting the properties of condensed-matter water

There is still no theory that can predict even half a dozen properties of liquid water within a generous 50 % deviation from experiment. Now, one should not make the naïve theoretician’s mistake of taking experimental values as immutable quantities. Indeed, they too are sometimes disputed, and are known to change over time. For example, whereas Kell listed 19 the thermal expansion coefficient α of liquid water as 2.76 × 10−4 K−1 (at 300 K) in 1967, by 1975 its value 20 had become 2.5712 × 10−4 K−1 (at 298 K), or 7 % smaller. However, this update pales into insignificance compared with a prediction that is almost an order of magnitude too large, found after some browsing through the literature. Indeed, work 21 published in 2016 reports exactly such a shocking discrepancy. A deep neural-network potential was trained on energies and forces from reference DFT calculations for a broad range of condensed water configurations using the functionals RPBE 22 and BLYP. 23 The prediction for α was 9.2 times too large for RPBE, and “only” 3.8 times too large for BLYP. The main point of that work, published in a high-impact journal, is that the introduction of van der Waals (vdW) energies comes to the rescue. This correction massively improves the BLYP prediction but still overpredicts by a factor of 1.7, that is, a stubborn deviation from experiment of 69 %. The vdW-corrected α value for RPBE is 44 % too high. In summary, the vdW correction pushes α in the right direction, that is, toward experiment, but still leaves behind large errors, more so for BLYP than for RPBE. Still, this is not proof that it is vdW energies that matter, since changing other terms could also move the result in the right direction.
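The factor-versus-percentage bookkeeping above is easy to get wrong, so a few lines of Python (using only the values quoted in this paragraph) make it explicit:

```python
# Thermal expansion coefficient of liquid water, values quoted above
alpha_1967 = 2.76e-4    # K^-1, Kell (1967), at 300 K
alpha_1975 = 2.5712e-4  # K^-1, revised value (1975), at 298 K

# Relative change between the two experimental determinations (~6.8 %,
# i.e. roughly the 7 % quoted in the text)
change_pct = 100.0 * (alpha_1967 - alpha_1975) / alpha_1967

# An overprediction factor f corresponds to a deviation of 100*(f - 1) %;
# e.g. the vdW-corrected BLYP factor of ~1.7 maps onto the ~69 % deviation
# mentioned above
def deviation_pct(factor):
    return 100.0 * (factor - 1.0)
```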

How did the authors carry out this admittedly helpful, but not definitive, vdW correction? As with many high-impact articles, this crucial point is buried in their sizeable supporting information file. The DFT-D3 method 24 was chosen to save the day, as discussed quantitatively in the previous paragraph. The next paragraph will discuss how this method makes qualitative improvements. Yet, peering into their supporting information reveals that the C6 and the C8 coefficients for the OH, OO and HH interactions are virtually identical between the two functionals RPBE and BLYP. The dramatically positive impact that the addition of a form of vdW energy has on this thermodynamic property (and several others) hinges on only two parameters: sr,6 and s8. The former is involved in a damping function of a particular shape. The use of damping functions has been criticised. 25 Even without accepting this criticism, one faces the fact that an “elaborate property predictor” is critically dependent on the presence of only two parameters and the very values they take. Presumably, playing with other parameters could achieve the same result.
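To see how few knobs are actually involved, the two-parameter dependence can be sketched as follows. This is a minimal sketch of the D3 zero-damping form as the author reads it, with illustrative placeholder arguments rather than the published parameter values; only sr,6 (which shifts where the r−6 term switches on) and s8 (which scales the r−8 term) are refitted per functional:

```python
def f_damp(r, r0, sr, alpha):
    """Zero-damping function: ~0 at short range, ~1 at long range."""
    return 1.0 / (1.0 + 6.0 * (r / (sr * r0)) ** (-alpha))

def e_disp_pair(r, c6, c8, r0, sr6, s8):
    """Pairwise dispersion energy in the D3 zero-damping form.
    The C6, C8 and r0 pair coefficients are essentially identical
    between functionals (as the supporting information discussed
    above shows); the functional dependence enters only via sr6
    and s8."""
    e6 = -c6 / r**6 * f_damp(r, r0, sr6, 14)       # sr,6 fitted per functional
    e8 = -s8 * c8 / r**8 * f_damp(r, r0, 1.0, 16)  # s8 fitted per functional
    return e6 + e8
```

Written out like this, it becomes plain why the quality of the water predictions hinges on just two numbers and the very values they take.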

Including vdW interactions in this way also yields a qualitative improvement. This time the property is the existence (or not) of a temperature where the density of liquid water is a maximum, and thus larger than that of ice. Experimentally we know that such a temperature exists: at about 4 °C (at typical atmospheric pressures). Both “bare” functionals (i.e. without vdW correction) give rise to simulations that lack this maximum altogether. This is a spectacular failure, and indeed of a qualitative nature. Again, one can worry about how exactly the introduction of those magic vdW terms manages to repair this disastrous prediction. It then turns out that the magnitude of the vdW interaction between O and H atoms is only a meagre 0.5 kBT, or about 1 kJ/mol, at room temperature. This is a relatively small amount of energy, to be seen in the context of many other energy contributions, each associated with further multi-kJ/mol errors due to their inevitably approximate nature. Thus, it is not compelling to accept that a tiny amount of energy, of one type and without any error bar given, suddenly makes everything fine.
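The “0.5 kBT” estimate above is quick to verify from the exact SI values of the Boltzmann and Avogadro constants:

```python
# Verify that 0.5 kB*T at room temperature is indeed ~1 kJ/mol
kB = 1.380649e-23   # J/K, Boltzmann constant (exact SI value)
NA = 6.02214076e23  # 1/mol, Avogadro constant (exact SI value)
T = 298.0           # K, room temperature

half_kT = 0.5 * kB * T * NA / 1000.0  # in kJ/mol
# half_kT comes out at ~1.24 kJ/mol, i.e. "about 1 kJ/mol" as stated above
```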

Let us put these dramatic findings into perspective. The predictor of liquid water properties involves a whole molecular dynamics simulation, which is an intricate business in the first place: there are oodles of decisions to take. In turn, the atomistic potential used here involves energies and forces that are predicted by a convoluted neural network, not to speak of several other decisions and concepts invoked by DFT itself. Against this overwhelming background, it is the mere presence of two parameters that suddenly makes liquid water look like liquid water. Molecular simulation scientists sometimes joke about whether they would trust drinking their own simulated liquid water. In this case, only in the presence of sr,6 and s8 does this water start being safe enough to drink.

In summary, it is an art to predict properties of liquid water (i) truly from first principles, without experimental input, (ii) for the right reasons, and (iii) in a minimal way. The state of the art is still some way off this goal, but our machine-learning-based force field FFLUX offers a better chance of getting this right by its very architecture.

Proteins

One half of the 2024 Nobel Prize for chemistry went to the prediction of protein structure from a given amino acid sequence. This long-awaited achievement was made possible by the design of a suitable AI algorithm, called AlphaFold, which operates on a large databank of crystal structures of many thousands of proteins. While the winners decisively beat older algorithms to reach this goal, AlphaFold has a fundamental weakness: it is a huge fit based on known data. It is tempting to regard this fit as lacking much physical insight.

This weakness is catastrophic when tackling intrinsically disordered proteins (IDPs). This type of protein refuses to be crystallised and hence cannot be studied experimentally in atomistic detail. These proteins were once thought to be rare anomalies but now turn out to be the cornerstone of a new 26 paradigm of Life itself, if one can push it that far. Life simply needs the flexibility that IDPs offer. Moreover, crystallisable proteins are only a subset of all proteins. The fact that crystallised proteins constitute the vast majority of studied proteins gives the wrong impression that they represent the full “molecular machinery” that Life depends on. Put bluntly, this paradigm is dated. Many important diseases such as Alzheimer’s cannot be understood without understanding IDPs. This lack of understanding causes the lack of any meaningful progress in combating this cruel disease.[2] It is clear that AlphaFold is inadequate and limiting as a research tool within the new paradigm of Life that imposes itself upon us. AlphaFold’s strategy is actually a dead end when confined[3] to protein crystallography.

Instead, we propose our FFLUX method, which is based on quantum mechanics and machine learning. In other words, FFLUX is truly predictive, even in the case of IDPs. FFLUX trains the quantum atoms themselves instead of the protein structures. These trained atoms are then knowledgeable about how to interact with other atoms inside a molecular dynamics simulation, which then in turn predicts the structure and dynamics of IDPs in aqueous solution. This is how physics can be put into protein folding.

Another large field of research in connection with proteins is that of enzymatic catalysis. It turns out that, as this text is written, computational enzyme design is extremely difficult, often producing enzymes with low catalytic activity. A recent viewpoint 27 offers suggestions as to how to improve the situation but not from the point of view of advanced quantum chemistry, as advocated by FFLUX.

Three contributions from the author’s lab

On the background of Quantum Chemical Topology (QCT)

The atom is a key concept in chemistry. Hence, one needs a trustworthy way of defining it, and indeed calculating its properties, as it exists within a molecule, or more broadly, in a system of atoms (excluding extreme conditions). The co-workers of Bader, and Bader himself, have provided the community with a solution for this problem, one that has gone from strength to strength since its inception 28 more than half a century ago. This approach is referred to as a Quantum Theory of Atoms in Molecules (QTAIM), 29 , 30 , 31 , 32 which has been explained 31 , 32 , 33 , 34 and reviewed 35 , 36 , 37 several times before. The essence of this approach is to use the language of dynamical systems (attractor, separatrix, critical point, gradient path, …) in order to understand molecules, their aggregates and condensed matter.[4]

Originally, this beautiful, powerful and universal piece of mathematics was only applied to the electron density and to its Laplacian, two quantum mechanical functions of great import, both in real 3D space. Over time, more functions were analysed in this distinct way such that the author found it useful, in response to human dynamics, to bundle them under the name Quantum Chemical Topology (QCT).[5] For example, the electron localisation function 39 proved a wealth of information after its first quantum topological analysis, 40 leading to decades of fruitful follow-up research. The keyword of QCT united the factions that had contributed to what is best considered as a single QCT approach, which is then logically distinct from alternative, non-QCT approaches. This distinction applies to population analyses, energy decomposition analyses, and any other tool of interpretational quantum chemistry. In 2001 we showed 41 how our six-dimensional volume integration over two quantum topological atoms freed QTAIM from the constraint of the virial theorem, because this achievement enabled the potential energy to be computed independently of that theorem. Hence, a QCT-based energy partitioning 42 could henceforth operate on a molecular geometry that is not a stationary point. Shortly thereafter, Blanco et al. 43 proposed a version of topological energy partitioning called Interacting Quantum Atoms (IQA), which we extended 44 to be used in conjunction with Density Functional Theory.

Perhaps even more time is needed before “the theoretical and computational chemistry establishment” (another construction of human dynamics) is au fait with the QCT perspective, for it to hopefully appreciate it too, at that long-awaited point. More details on QCT can be found elsewhere 38 , 45 , 46 including the fact that there is a sweet spot 47 for the selection of quadrature grids for integration over atomic volumes.

Contribution 1: FFLUX or designing a force field that sees the electrons

FFLUX (formerly known as QCTFF) is the longest running project in the group. So far, it has involved more than 20 group members, and even more MChem, MSc and summer students. Several other, very different projects have run simultaneously. However, only FFLUX became the flagship project, which continuously progressed over the best part of 25 years, if one starts with its early days of investigating the electrostatic interaction. 48 , 49 , 50 The motivation of FFLUX was clear from the outset: replace classical force fields such as AMBER or CHARMM with something more reliable. Our strategy was always to start from scratch and to overhaul the architecture of such classical force fields. Critical and honest reading of the classical force field literature reveals that they are unable to make proper contact with experiment.

This alarming state of affairs was spelled out as early as 2003, when versions of AMBER, CHARMM, GROMOS and OPLS supplied a cacophony 51 of molecular dynamics results that disagree among themselves. More alarmingly, their predictions deviate, even qualitatively, from NMR, Raman and infrared experiments, for something as simple as trialanine in aqueous solution. The status quo is equally disconcerting 52 for structural ensembles of IDPs. This is why the FFLUX research program started ab ovo, and proposed a force field that is much closer to the underlying quantum mechanics. The key to do so is to directly link the parametric values of the force field to atomic properties that can also directly and transparently be linked to (reduced) density matrices (of which the electron density itself is a special case) and the energies that correspond to these density matrices, including the kinetic energy 53 and the electron correlation energy. 54 In doing so, it abandons design principles that have proven to stifle progress over decades. Indeed, instead of observing a gradient of irreversible progress, one witnesses an unresolved going forwards and backwards: energy terms are added and then taken away again a few years later, and endless reparameterisations make one gain some agreement with experiment here while losing some there.

Reviewing the more than 80 papers behind FFLUX is best done by means of a book, such as a volume in the Springer series “Lecture Notes in Chemistry”, which is planned over the next few years as FFLUX nears completion. Such a platform will be more instructive than a review because the former offers a much-needed didactic dimension. It is high time this research effort reached a wider audience, which would benefit from a breath of fresh air and a future-proof alternative. In this article, it is sounder to highlight FFLUX’s characteristics and design decisions, which would otherwise drown in details and milestones that are better understood from the original publications.

The following points are siphoned off and numbered in order to clarify FFLUX to the now large machine learning community in computational chemistry, with the anticipation that FFLUX will start resonating with it:

  1. All atomic properties (charge, energy, dipole moment, …) come from one single formula, which is an integration over atomic volume(s) of a quantum density. This decision is crucial to guarantee consistency in the processing of atomic information. Put differently: if the force field, which is based on a haystack of decisions, makes a prediction that does not agree well with experiment then the culprit needle will not be hiding in this part of the haystack.

  2. FFLUX started out with a thorough, complete and systematic treatment of electrostatics. A proper understanding of the convergence behaviour 55 of the multipolar expansion, combined with multipolar Ewald summation, 56 enables perfect electrostatics, or at least clear monitoring of its performance. Getting electrostatics right is a rock to build the force field on; in order to make future-proof progress, at least one part has to be robust and trustworthy.

  3. Polarisation is handled without polarisabilities. 57 FFLUX is interested in the result of the polarisation process, not the process itself. This result is expressed in the values that atomic charges or multipole moments take on in the presence of a particular arrangement of surrounding atoms.

  4. Perturbation theory is never used. At long-range it defines polarisabilities and dispersion coefficients but neither appear in FFLUX. Dispersion energy is subsumed in the dynamic electron correlation energy, which FFLUX captures. Fundamentally, FFLUX does so, not only between atoms that are in different molecules, but also within the same molecule. Moreover, FFLUX has access to this type of energy even within an atom. As a result, FFLUX does not depend on any external ad hoc, add-on dispersion corrections, versions of which became popular from the mid-2000s. The full energy balance is always accounted for by all the types of atomic energies 42 , 43 (kinetic energy as subsumed in steric energy, 58 electrostatic, exchange and correlation energies) offered by the quantum topological partitioning. Finally, at short-range, perturbation theory becomes ill-defined and using it is a poor strategy from a computational point of view. It is better to proceed with the supermolecular approach, which handles Pauli’s antisymmetrisation impeccably. Moreover, this approach automatically delivers electron densities that are compatible with the idea of a self-consistent field. Perturbation theory reaches this mandatory property only after iterating towards it, which is time-consuming in a molecular dynamics simulation. Never mind the polarisation catastrophe, the physical breakdown of a model that is suppressed by a mathematical patch-up called damping functions.

  5. Machine learning steps in to acquire and then predict how an atomic property changes as its environment changes. Originally, and long before it became the hype that it is today, we invoked 59 machine learning to simplify the treatment of polarisation while using distributed polarisabilities. 57 , 60 Although starting off with neural networks we soon discovered that the completely unrelated method of Gaussian Process Regression 61 is more accurate than neural nets. The price one pays for this performance is a higher computational cost but this cost is automatically mitigated by Moore’s Law. We pioneered 62 the use of Gaussian Process Regression in the area of atomic potential design.

  6. It is useful to be aware of four key aspects of our particular use of machine learning. They are, not in any particular order:

    1. Our work is well embedded in the field of intermolecular potentials, and more broadly, in that of quantum chemistry. This situation should be contrasted with more cavalier uses of machine learning.

    2. FFLUX is unique in learning already-partitioned atomic information. Alternative machine-learnt potentials use machine learning itself to partition a quantum system, whereas we let the machine learning operate on already obtained, physically meaningful atomic properties. Those alternative methods typically return impressively small prediction errors for the whole molecule trained on. However, a serious problem is that “rogue atoms” (e.g. a hydrogen with the energy of an oxygen) may emerge because the machine learning has no clue how to create meaningful atoms.

    3. We are creative in the use and modification of machine learning methods. For example, we were the first to use the grey-wolf algorithm 63 (and our improved version of it) in the search for optimal hyperparameters. Moreover, for the first time we proposed a solution 64 for the breakdown of the principle of the maximisation of the likelihood function as a guide to improve model performance.

    4. We made a breakthrough in an old problem: how to make molecular flexibility compatible with atomic multipole moments. Before our introduction of machine learning, atomic multipole moments were only used for rigid molecules while the first and second derivatives 65 of their intermolecular potentials required Euler angles. This restriction also hampered the field of polymorphism prediction of molecular crystals. FFLUX is able to contribute to this field 66 for any type of molecule.

  7. FFLUX treats bonded and non-bonded atoms in exactly the same way. This is a consequence of the way QCT sees condensed matter: “atoms are atoms are atoms” wherever they are. There is no special treatment for solutes as opposed to solvents; all solvation is explicit with each atom interacting with any other. Another consequence of this principle is that intramolecular phenomena or effects (such as hydrogen bonding) are treated in the same way as intermolecular ones.

  8. The architecture of FFLUX is highly modular. This characteristic is an advantage because each module can be independently improved, which is not the case with classical force fields, where the improvement of one energy term can spoil the overall performance. Such spoiling is only possible because of error compensation: in classical force fields the energy terms do not represent what it says on the tin; they hold each other up like a house of cards.

  9. Our in-house software (for example 67 , 68 ) is being streamlined, automated and documented in an ongoing process of development, in preparation for release as a one-click package. Most packages that generate machine-learnt potentials stay in the lab of origin, their developers aware that a public release invites a whole new level of scrutiny and vulnerability.

  10. Chemical insight accompanies the geometry optimisations or molecular dynamics trajectories executed by FFLUX. This element is precious because classical force fields fail to provide such a narrative. The honest reason is that their energy terms are not necessarily what they say they are. In contrast, FFLUX works with IQA energies, which are well-connected to chemical insight, as extracted from modern wavefunctions. The next section shows how to obtain this chemical insight rigorously, by computation and with few assumptions, if any.
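Several of the points above (notably point 5) rest on Gaussian Process Regression. As a flavour of what such a model does, here is a generic, minimal GPR posterior-mean predictor in NumPy. The two-dimensional “environment descriptors” and the atomic-charge targets are invented for illustration only; FFLUX’s actual descriptors, kernels and training protocol are described in the original publications:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel between two sets of feature vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gpr_predict(X_train, y_train, X_test, lengthscale=1.0, variance=1.0, noise=1e-8):
    """Posterior mean of a zero-mean Gaussian process conditioned on training data."""
    K = rbf_kernel(X_train, X_train, lengthscale, variance)
    K += noise * np.eye(len(X_train))      # jitter for numerical stability
    k_star = rbf_kernel(X_test, X_train, lengthscale, variance)
    alpha = np.linalg.solve(K, y_train)    # (K + noise*I)^{-1} y
    return k_star @ alpha

# Toy example: hypothetical 2-D environment descriptors -> atomic charge (in e).
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_train = np.array([-0.8, -0.6, -0.7, -0.5])   # made-up atomic charges
y_pred = gpr_predict(X_train, y_train, np.array([[0.5, 0.5]]))
```

With near-zero noise the model interpolates the training data exactly, which is one reason Gaussian Process Regression can be more accurate than a neural network of comparable size on small, clean training sets.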

Contribution 2: the Relative Energy Gradient (REG) method or how to compute chemical insight

Again, it is better to discuss only the bigger picture of the REG method, 69 which was proposed in 2017. By not reporting the technical details here, there is a chance that REG’s universality and power may be clearer and shine on all who can benefit from it.

When chemists explain a phenomenon, such as the stability of a molecular complex or a rotation barrier, they often do the same thing, which boils down to pointing out which atoms are responsible for this phenomenon and how they are responsible for it. The “how” part of this comprehensive question typically involves one (or more) energy types; they can be written down precisely, in terms of quantum mechanical density matrices. There are four of these energy types and they are: (i) steric, (ii) electrostatic (attractive or repulsive), (iii) pure exchange (a degree of covalent bonding), or (iv) (dynamic electron) correlation (“dispersion”, often sloppily called “van der Waals”) (which, for clarity, excludes the exchange energy with which it is often lumped together).

Is such an analysis safe? Is it unique? How does it connect to the underlying quantum reality, if at all? This type of analysis is carried out all the time but the outcome is often contentious by contradicting other analyses. For example, even a simple phenomenon, such as the origin of the rotation barrier in ethane, is already controversial. Is the barrier due to steric repulsion between two hydrogens? Or is it due to hyperconjugation stabilising the staggered conformation most such that the eclipsed conformation becomes unfavourable and thereby effectively creates an energy barrier?

The first thing one needs, inevitably, is an energy partitioning method. There is a plethora of these methods, going back all the way to the 1970s. An unusually critical review 70 reveals that they are all problematic at some point, even after having been modified to remedy a particular issue. Although published in 2015, this review does not mention IQA. This is a missed opportunity because IQA does not suffer from the typical problems that non-IQA energy partitionings suffer from. The vast majority of REG studies operate on IQA energies. However, REG is not tied to IQA, as shown by a recent study, 71 which used Natural Bond Orbitals (besides IQA) to determine the factors accounting for reaction selectivity. The IQA energy decomposition scheme is additive, which means that all intra-atomic and interatomic energy contributions sum to the total, original energy of the partitioned system (except for an acceptable “recovery error” due to the numerical nature of the atomic volume integration). Now, how does REG work? This is best explained with a metaphor, the full account 72 of which has been published elsewhere.
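As an aside, the additivity just mentioned amounts to simple bookkeeping; a sketch for a water-like molecule with atoms O, H1 and H2, in which every number is invented purely to illustrate the arithmetic:

```python
# IQA additivity: intra-atomic (self) energies plus interatomic interaction
# energies recover the total energy, up to a small numerical "recovery error"
# caused by the quadrature of the atomic volume integrations. Invented values.
E_intra = {"O": -75.20, "H1": -0.35, "H2": -0.35}                       # hartree
E_inter = {("O", "H1"): -0.28, ("O", "H2"): -0.28, ("H1", "H2"): 0.02}  # hartree

E_reconstructed = sum(E_intra.values()) + sum(E_inter.values())
E_total = -76.44              # invented total energy of the partitioned system
recovery_error = abs(E_reconstructed - E_total)   # ideally a tiny fraction
```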

We bring in the metaphor of a football game and its analysis, in particular, that of a goal. This goal corresponds to the chemical phenomenon we are aiming to understand. Each player corresponds to an atom, such that all 22 players constitute the molecule (or the system that has been partitioned). It is a human habit, or even desire, both in football and in chemistry, to explain a phenomenon (a goal or a rotation barrier, for example) in terms of a few entities (players or atoms). We are aware that all entities contribute all the time, at least in principle. However, it makes sense to tease out which ones contribute the most. This is exactly what REG does.

Metaphorically speaking, REG cannot do this by looking at a single snapshot of the match but rather needs a video clip. Returning to chemistry, REG requires the availability of a relevant sequence of molecular geometries, centred around the phenomenon under study, and each geometry endowed with all atomic energies. This sequence embodies a change in the total system, and is governed by a control coordinate. An example of a control coordinate is a dihedral angle in the case of ethane’s rotation barrier, or the O⋯O distance in the water dimer. But how can the REG method point out which atom is largely responsible for the phenomenon at hand, which occurs in the whole system?

How REG works can, essentially, be described right here, in this short paragraph. First, REG calculates the gradient of the energy of the whole system as it varies as a function of a change in the control coordinate. Then REG calculates the gradient of each atom’s own energy contribution. Finally, REG calculates the ratio of each atomic gradient over the system’s gradient. The larger the absolute value of this ratio, the more important the role of the corresponding atom in explaining the overall chemical phenomenon.

Several important details have been skipped in the previous paragraph, motivated by a desire to show the wood rather than the trees. REG delivers a ranking of the importance of atoms in terms of how well their energetic behaviour resembles that of the total system. For example, if an atom’s energy increases, say, five times more than that of the total system, then surely this atom contributes much to the total system’s behaviour. Returning to the football metaphor, this means that the key action of one (or a few) player(s) can be linked to the scoring of that goal. It is he (or they) who can be associated with the overall game’s “phenomenon” of goal scoring.
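The gradient-ratio recipe can be sketched numerically in a few lines. In the published REG method the slopes are obtained by linear regression over segments of the energy profile, but the ratio-of-gradients idea is the same; the energy terms and data below are invented for illustration:

```python
import numpy as np

def reg_values(control, e_total, e_terms):
    """Ratio of each term's energy gradient (vs the control coordinate) to the
    gradient of the total energy; slopes come from least-squares linear fits.
    `e_terms` maps a label (e.g. an atom or IQA term) to its energy profile."""
    grad_total = np.polyfit(control, e_total, 1)[0]
    return {label: np.polyfit(control, e, 1)[0] / grad_total
            for label, e in e_terms.items()}

# Invented example: term "A" drives the total energy change, "B" opposes it.
s = np.linspace(0.0, 1.0, 11)    # control coordinate (e.g. a dihedral angle)
e_A = 2.0 * s                    # kJ/mol, made up
e_B = -0.5 * s                   # kJ/mol, made up
reg = reg_values(s, e_A + e_B, {"A": e_A, "B": e_B})
# reg["A"] tracks the total (large positive ratio); reg["B"] opposes it (negative)
```

Ranking the terms by the absolute value of their REG value then singles out “A” as the dominant contributor to the overall change.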

The REG method has been applied in many case studies, each revisiting well-known chemical phenomena or questions, often debated. Amongst these are: the gauche effect, 73 the anomeric effect, 74 the torsional barrier in biphenyl, 72 the mechanism of an enzymatic reaction, 75 halogen-alkane nucleophilic substitution (SN 2) reactions, 76 an advanced re-examination of the secondary interaction hypothesis in DNA base pairs, 77 and finally halogen bonding, 78 also in the presence of electron correlation, 79 to name a few. REG quantified the degree of covalency in a hydrogen bond, solved the old controversy of whether biphenyl’s planar energy barrier is due to steric repulsion between its ortho-hydrogens, demystified so-called “through-space” interactions in enzymatic active sites, explained how a molecular crystal is held together in the curious absence of hydrogen bonds, and computed a pharmacophore from scratch in a way that is fully compatible with the underlying quantum mechanics.

Contribution 3: Ab Initio Bond Length (AIBL) or how to correct a pKa experiment

Here again we will avoid technical details; we will be brief, even more so than in the previous section, in an attempt to reach the punchlines faster. However, the next paragraph makes a small detour in order to set the scene better. Some light background reading on the AIBL method can be found in an article 80 in the popular science magazine Research Outreach.

The roots of the AIBL method lie in the construction of a Quantitative Structure Activity Relationship (QSAR) using the idea of the so-called Bond Critical Point (BCP) space. 81 A critical point is one where the gradient of the electron density vanishes. For our current purpose, a BCP is a point in space at which quantum mechanical properties (e.g. the electron density, its Laplacian, the ellipticity) are evaluated. From a practical and heuristic point of view BCP properties can characterise a bond, at least to some extent. Each BCP can then be plotted in a hyperdimensional space called BCP space, where each point compactly characterises a bond. The installation of a straightforward Euclidean distance in this space enabled the fast calculation of molecular similarity. In 2001, we showed 82 that this approach was more efficient and effective than that of the then familiar and popular Carbo similarity index and its variants. However, the most important discovery 81 was that this distance in BCP space made contact with experiment. In particular, the author found a very strong correlation between the experimentally determined pKa and the distance in BCP space for a set of para-substituted benzoic acids. Actually, this major result was achieved through the sigma-constant that appears in Hammett’s equation. With BCP space now firmly anchored in physical organic chemistry, the idea of BCP space proved to be successful in many other case studies including that 83 of base-promoted hydrolysis rate constants for a set of 40 esters. This method soon deserved 82 a name, which settled on Quantum Topological Molecular Similarity (QTMS), especially because it was able to point out the active centre, that is, that part of the molecule that is responsible for the QSAR.
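The distance calculation at the heart of this similarity measure is elementary; a sketch, with a hypothetical three-dimensional BCP space spanned by the density, its Laplacian and the ellipticity (all numbers invented):

```python
import numpy as np

# Each bond becomes one point in "BCP space"; here the axes are (rho, del2rho, eps)
# evaluated at the bond critical point, in atomic units. Values are invented.
bcp_bond_mol1 = np.array([0.34, -1.10, 0.02])   # e.g. a C-O bond in molecule 1
bcp_bond_mol2 = np.array([0.31, -0.95, 0.05])   # the matching bond in molecule 2

# Euclidean distance in BCP space: the smaller it is, the more similar the bonds.
similarity_distance = np.linalg.norm(bcp_bond_mol1 - bcp_bond_mol2)
```

In practice the different BCP properties would need consistent scaling before distances across properties are compared; the sketch omits this.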

Eventually it was established that, for a set of molecules that have structural or chemical commonality, there exists a novel type of linear free energy relationship (LFER). This LFER connects a single “active” bond length (i.e. in an ab initio equilibrium geometry in the gas phase) with an experimental pKa value in aqueous solution. This powerful observation led to this pKa prediction method 84 being coined AIBLHiCoS (Ab Initio Bond Length High Correlation Subsets), later shortened to just AIBL. One just needs to geometry-optimise, in the gas phase, a set of molecules, while getting away with modest levels of theory. The proposed method can isolate erroneous experiments, and operate in non-aqueous solutions and at different temperatures. Moreover, the existence of active fragments has been demonstrated in a variety of sizeable biomolecules for which the pKa is successfully predicted.
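The LFER itself is a one-descriptor straight line; a sketch with invented bond lengths and pKa values for a hypothetical congeneric series of acids (AIBL’s actual subsets, levels of theory and regressions are detailed in the cited papers):

```python
import numpy as np

# Hypothetical congeneric series: gas-phase "active" bond length (Angstrom)
# versus experimental aqueous pKa. All numbers are invented for illustration.
r_active = np.array([0.9690, 0.9695, 0.9701, 0.9708, 0.9716])
pKa      = np.array([4.50, 4.21, 3.90, 3.52, 3.10])

slope, intercept = np.polyfit(r_active, pKa, 1)   # the linear free energy relationship
r2 = np.corrcoef(r_active, pKa)[0, 1] ** 2        # quality of the high-correlation subset

def predict_pka(bond_length):
    """Predict the aqueous pKa of a new series member from one gas-phase bond length."""
    return slope * bond_length + intercept
```

Once the subset’s straight line is calibrated, predicting the pKa of a new member costs nothing more than one gas-phase geometry optimisation.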

In subsequent work 85 we presented solutions to some of the more complex challenges in pKa prediction, namely that of (i) multiprotic compounds, (ii) macroscopic values for compounds that tautomerise, and (iii) compounds with more than 50 atoms. More spectacularly, we re-measured 85 a pKa that AIBL identified as wrong. Lo and behold: the new experimental value agreed much better with the computational prediction than before. This is a theoretical or computational chemist’s dream. In particular, we identified that the literature values for drug compounds celecoxib, glimepiride and glipizide were inaccurate. Our newly measured experimental values matched our initial predictions to within 0.26 pKa units, whereas previous values were found to deviate by up to 1.68 pKa units.

The accurate prediction of aqueous pKa values for tautomerisable compounds is a formidable task, even for the most established in silico tools. Empirical approaches often fall short due to a lack of pre-existing knowledge of dominant tautomeric forms. In fairly recent work 86 AIBL predicted pKa values for herbicide/therapeutic derivatives of 1,3-cyclohexanedione and 1,3-cyclopentanedione to within just 0.24 units. Our AIBL model uses a single ab initio bond length from one protonation state. Moreover, AIBL is as accurate as other more complex regression approaches using more input features, and even outperforms the popular computer program Marvin.

There is other elegant work 87 that predicts pKa within the QCT context based on electron localisation-delocalisation matrices, but currently the landscape of pKa predictors is dominated by very fast but very crude methods. This status quo sets a paradigm that industry has accepted and maintains with disappointing inertia, even if better results can be achieved with computer power already existing in that same industry. Gas-phase ab initio optimisations are routine and cheap, on a CPU-minute scale. Over the years AIBL has shown an incredible robustness, in terms of theory and experiment always converging. Whenever there was a disagreement between theory and experiment, it was because AIBL focused on the wrong conformation, or identified a new LFER with a tighter molecular skeleton, or the experiment was simply wrong. In all case studies, things always fell into place. Finally, it is useful to think of AIBL in the context of a pick-up sticks game such as Mikado. Each stick corresponds to a high-correlation subset, that is, a congeneric set of molecules housing a bond whose (gas-phase) length magically correlates with pKa in solution. Chemical space then falls apart into a large bundle of pick-up sticks, each pointing in an appropriate direction in the plotting space of pKa versus bond length. It is fair to say that AIBL is strong and reliable: it makes genuine predictions, works without exception, and most likely works for the right reason, being based on a minimal number of descriptors.

Conclusions

Quantum mechanics is an edifice of science, far beyond short-lived fads. It definitely deserves a whole year dedicated to it, and probably another year again, after another hundred years’ time. The compact story told here is not the usual, tired one of a few stars walking on their allocated red carpets. Instead, this story seeks to highlight the human dynamics behind the development of quantum mechanics. As a piece of science, it has delivered tremendous progress in many areas, both fundamental and applied. Yet, it continues to be plagued by a persistent feeling of being very hard to understand, or being “incomplete” (and even wrong according to some brave but well-informed voices) when pushed to its conceptual limits.

Finally, three quantum-based contributions from the author’s lab are discussed in the hope that the wood will now be clearer than the trees. The first contribution consists of FFLUX, a machine-learnt potential aimed at peptide (and ultimately protein) simulation in aqueous solution. The quantum mechanical character of FFLUX is clear: it uses physically and chemically well-defined quantum atoms, gleaned from high-level wavefunctions of small systems, with proven transferability, in order to make experimentally testable predictions on the structure and dynamics of peptides in water. The same is true for the second contribution, the REG method. REG captures fundamental types of energy (electrostatic, exchange, kinetic and electron correlation) calculated for well-defined quantum atoms. By an ingenious and minimal ranking procedure, REG can explain any chemical phenomenon through a direct link to quantum mechanics. The third contribution, called AIBL, links the simple but very powerful quantum mechanical property of an equilibrium bond length (occurring in a given functional group, at 0 K and for a molecule in the gas phase) with its pKa in solution, at any ambient temperature.


Corresponding author: Paul L.A. Popelier, Department of Chemistry, The University of Manchester, Oxford Road, Manchester, M13 9PL, Great Britain, e-mail:
Article note: A collection of invited papers to celebrate the UN’s proclamation of 2025 as the International Year of Quantum Science and Technology.

Award Identifier / Grant number: EP/X024393/1

Acknowledgments

The author is grateful to the European Research Council (ERC) for the award of an Advanced Grant underwritten by the UKRI-funded Frontier Research grant EP/X024393/1. He is grateful to his research group, past and present, for having furthered knowledge over the last quarter of a century and more.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: Paul Popelier wrote the manuscript in full.

  4. Use of Large Language Models, AI and Machine Learning Tools: Not at all.

  5. Conflict of interest: None.

  6. Research funding: European Research Council (ERC) Advanced Grant underwritten by the UKRI-funded Frontier Research grant EP/X024393/1.

  7. Data availability: Not applicable.

References

1. Jones, S. The Quantum Ten. A Story of Passion, Tragedy, Ambition and Science; Thomas Allen Publishers: Toronto, Canada, 2008.

2. Gieryn, T. F. Science and Social Structure: A Festschrift for Robert K. Merton; NY Academy of Sciences: New York, 1980; p. 147.

3. Jaimungal, C., 2024. https://www.youtube.com/watch?v=sGm505TFMbU (accessed 2025-08-22).

4. Dirac, P. A. M. Proc. Roy. Soc. London 1929, A123, 714. https://doi.org/10.1098/rspa.1929.0094.

5. Baggott, J. The Quantum Story. A History in 40 Moments; Oxford Univ. Press: Oxford, Great Britain, 2013.

6. Hermann, G. Naturwissenschaften 1935, 23, 718. https://doi.org/10.1007/bf01491142.

7. Mermin, N. D.; Schack, R. Found. Phys. 2018, 48, 1007. https://doi.org/10.1007/s10701-018-0197-5.

8. Bell, J. S. Rev. Mod. Phys. 1966, 38, 447. https://doi.org/10.1103/revmodphys.38.447.

9. Aad, G.; Abajyan, T.; Abbott, B.; et al. (ATLAS Collaboration). Phys. Lett. B 2012, 716, 1.

10. Heisenberg, W. Z. Phys. 1926, 39, 499. https://doi.org/10.1007/bf01322090.

11. Hylleraas, E. A. Z. Phys. 1929, 54, 347. https://doi.org/10.1007/bf01375457.

12. Turbiner, A. V.; Vieyra, J. C. L.; del Valle, J. C.; Nader, D. J. Int. J. Quant. Chem. 2022, 122, e26879. https://doi.org/10.1002/qua.26879.

13. Consa, O. arXiv preprint arXiv:2110.02078, 2021.

14. Woolley, R. G. J. Am. Chem. Soc. 1978, 100, 1073. https://doi.org/10.1021/ja00472a009.

15. Larson, E. J. The Myth of Artificial Intelligence. Why Computers Can’t Think the Way We Do; Belknap Press of Harvard University Press: Cambridge, Massachusetts, USA, 2021. https://doi.org/10.4159/9780674259935.

16. Turok, N. https://indico.ph.ed.ac.uk/event/277/contributions/3103/attachments/1605/2467/TriangularTurokSlides.pdf (accessed 2025-08-24).

17. Hertog, T. Het Ontstaan van de Tijd; Uitgeverij Lannoo: Tielt, Flanders, 2023.

18. Woit, P. Not Even Wrong; Vintage Books: London, Great Britain, 2006.

19. Kell, G. S. J. Chem. Eng. Data 1967, 12, 66. https://doi.org/10.1021/je60032a018.

20. Kell, G. S. J. Chem. Eng. Data 1975, 20, 97. https://doi.org/10.1021/je60064a005.

21. Morawietz, T.; Singraber, A.; Dellago, C.; Behler, J. Proc. Natl. Acad. Sci. 2016, 113, 8368. https://doi.org/10.1073/pnas.1602375113.

22. Hammer, B.; Hansen, L. B.; Norskov, J. K. Phys. Rev. B 1999, 59, 7413. https://doi.org/10.1103/PhysRevB.59.7413.

23. Lee, C.; Yang, W.; Parr, R. G. Phys. Rev. B 1988, 37, 785. https://doi.org/10.1103/PhysRevB.37.785.

24. Grimme, S.; Antony, J.; Ehrlich, S.; Krieg, H. J. Chem. Phys. 2010, 132, 154104. https://doi.org/10.1063/1.3382344.

25. Popelier, P. L. A. J. Mol. Model. 2022, 28, 276. https://doi.org/10.1007/s00894-022-05188-7.

26. Ball, P. How Life Works; Picador: Great Britain, 2023. https://doi.org/10.7208/chicago/9780226826691.001.0001.

27. Hossack, E. J.; Hardy, F. J.; Green, A. P. ACS Catal. 2023, 13, 12436. https://doi.org/10.1021/acscatal.3c02746.

28. Bader, R. F. W.; Beddall, P. M. J. Chem. Phys. 1972, 56, 3320. https://doi.org/10.1063/1.1677699.

29. Popelier, P. L. A.; Aicken, F. M.; O’Brien, S. E. Chemical Modelling: Applications and Theory. In Royal Society of Chemistry Specialist Periodical Report; Hinchliffe, A., Ed.; RSC: Cambridge, Great Britain, Vol. 1, 2000; p. 143.

30. Bader, R. F. W.; Nguyen-Dang, T. T. Adv. Quant. Chem. 1981, 14, 63.

31. Bader, R. F. W. Atoms in Molecules. A Quantum Theory; Oxford Univ. Press: Oxford, Great Britain, 1990. https://doi.org/10.1093/oso/9780198551683.001.0001.

32. Popelier, P. L. A. Atoms in Molecules. An Introduction; Pearson Education: London, Great Britain, 2000.

33. Martín Pendás, A.; Contreras-Garcia, J. Topological Approaches to the Chemical Bond; Springer, 2023. https://doi.org/10.1007/978-3-031-13666-5.

34. Popelier, P. L. A. The Quantum Theory of Atoms in Molecules. In The Nature of the Chemical Bond Revisited; Frenking, G., Shaik, S., Eds.; Wiley VCH, Chapter 8, 2014; p. 271.

35. Matta, C.; Boyd, R. The Quantum Theory of Atoms in Molecules. From Solid State to DNA and Drug Design; Wiley VCH: Weinheim, 2007. https://doi.org/10.1002/9783527610709.

36. Bader, R. F. W. Chem. Rev. 1991, 91, 893. https://doi.org/10.1021/cr00005a013.

37. Koch, D.; Pavanello, M.; Shao, X.; Ihara, M.; Ayers, P. W.; Matta, C. F.; Jenkins, S.; Manzhos, S. Chem. Rev. 2024, 124, 12661. https://doi.org/10.1021/acs.chemrev.4c00297.

38. Popelier, P. L. A. On Quantum Chemical Topology. In Challenges and Advances in Computational Chemistry and Physics Dedicated to “Applications of Topological Methods in Molecular Chemistry”; Chauvin, R., Lepetit, C., Alikhani, E., Silvi, B., Eds.; Springer: Switzerland, 2016; p. 23.

39. Becke, A. D.; Edgecombe, K. E. J. Chem. Phys. 1990, 92, 5397. https://doi.org/10.1063/1.458517.

40. Silvi, B.; Savin, A. Nature 1994, 371, 683. https://doi.org/10.1038/371683a0.

41. Popelier, P. L. A.; Kosov, D. S. J. Chem. Phys. 2001, 114, 6539. https://doi.org/10.1063/1.1356013.

42. Darley, M. G.; Popelier, P. L. A. J. Phys. Chem. A 2008, 112, 12954. https://doi.org/10.1021/jp803271w.

43. Blanco, M. A.; Martín Pendás, A.; Francisco, E. J. Chem. Theory Comput. 2005, 1, 1096. https://doi.org/10.1021/ct0501093.

44. Maxwell, P.; Martín Pendás, A.; Popelier, P. L. A. Phys. Chem. Chem. Phys. 2016, 18, 20986. https://doi.org/10.1039/C5CP07021J.

45. Popelier, P. L. A. Quantum Chemical Topology. In The Chemical Bond – 100 Years Old and Getting Stronger; Mingos, M., Ed.; Springer: Switzerland, 2016; p. 71. https://doi.org/10.1007/430_2015_197.

46. Popelier, P. L. A. On Topological Atoms and Bonds. In Intermolecular Interactions in Molecular Crystals; Novoa, J., Ed.; RSC Cambridge: Great Britain, 2018; p. 147. https://doi.org/10.1039/BK9781782621737-00147.

47. Aicken, F. M.; Popelier, P. L. A. Can. J. Chem. 2000, 78, 415. https://doi.org/10.1139/v00-026.

48. Joubert, L.; Popelier, P. L. A. Phys. Chem. Chem. Phys. 2002, 4, 4353. https://doi.org/10.1039/b204485d.

49. Popelier, P. L. A.; Stone, A. J.; Wales, D. J. Faraday Discuss. 1994, 97, 243. https://doi.org/10.1039/fd9949700243.

50. Kosov, D. S.; Popelier, P. L. A. J. Chem. Phys. 2000, 113, 3969. https://doi.org/10.1063/1.1288384.

51. Mu, Y.; Kosov, D. S.; Stock, G. J. Phys. Chem. B 2003, 107, 5064. https://doi.org/10.1021/jp022445a.

52. Rauscher, S.; Gapsys, V.; Gajda, M. J.; Zweckstetter, M.; de Groot, B. L.; Grubmüller, H. J. Chem. Theor. Comput. 2015, 11, 5513. https://doi.org/10.1021/acs.jctc.5b00736.

53. Fletcher, T. L.; Kandathil, S. M.; Popelier, P. L. A. Theor. Chem. Acc. 2014, 133, 1499. https://doi.org/10.1007/s00214-014-1499-0.

54. Silva, A. F.; Duarte, L. J.; Popelier, P. L. A. Struct. Chem. 2020, 31, 507. https://doi.org/10.1007/s11224-020-01495-y.

55. Yuan, Y.; Mills, M. J. L.; Popelier, P. L. A. J. Comput. Chem. 2014, 35, 343. https://doi.org/10.1002/jcc.23469.

56. Symons, B. C. B.; Popelier, P. L. A. J. Chem. Phys. 2022, 156, 244107. https://doi.org/10.1063/5.0095581.

57. In het Panhuis, M.; Popelier, P. L. A.; Munn, R. W.; Angyan, J. G. J. Chem. Phys. 2001, 114, 7951. https://doi.org/10.1063/1.1361247.

58. Wilson, A. L.; Popelier, P. L. A. J. Phys. Chem. A 2016, 120, 9647. https://doi.org/10.1021/acs.jpca.6b10295.

59. Houlding, S.; Liem, S. Y.; Popelier, P. L. A. Int. J. Quant. Chem. 2007, 107, 2817. https://doi.org/10.1002/qua.21507.

60. Stone, A. J. The Theory of Intermolecular Forces, 2nd ed., Vol. 32; Clarendon Press: Oxford, 2013. https://doi.org/10.1093/acprof:oso/9780199672394.001.0001.

61. Rasmussen, C. E.; Williams, C. K. I. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, USA, 2006. https://doi.org/10.7551/mitpress/3206.001.0001.

62. Handley, C. M.; Popelier, P. L. A. J. Chem. Theor. Comput. 2009, 5, 1474. https://doi.org/10.1021/ct800468h.

63. Isamura, B. K.; Popelier, P. L. A. Artif. Intell. Chem. 2023, 1, 100021. https://doi.org/10.1016/j.aichem.2023.100021.

64. Isamura, B. K.; Popelier, P. L. A. AIP Adv. 2023, 13, 095202. https://doi.org/10.1063/5.0151033.

65. Popelier, P. L. A.; Stone, A. J. Mol. Phys. 1994, 82, 411. https://doi.org/10.1080/00268979400100314.

66. Brown, M. L.; Skelton, J. M.; Popelier, P. L. A. J. Phys. Chem. A 2023, 127, 1702. https://doi.org/10.1021/acs.jpca.2c06566.

67. Manchev, Y. T.; Burn, M. J. ichor: Computational Chemistry Data Management Library for Machine Learning Force Field Development, 4.0.3 ed.; GitHub: Manchester, UK, 2024. https://doi.org/10.26434/chemrxiv-2024-f8h7n.

68. Burn, M. J.; Popelier, P. L. A. Digit. Discov. 2023, 2, 152. https://doi.org/10.1039/d2dd00082b.

69. Thacker, J. C. R.; Popelier, P. L. A. Theor. Chem. Acc. 2017, 136, 86. https://doi.org/10.1007/s00214-017-2113-z.

70. Phipps, M. J. S.; Fox, T.; Tautermann, C. S.; Skylaris, C.-K. Chem. Soc. Rev. 2015, 44, 3177. https://doi.org/10.1039/c4cs00375f.

71. Cador, A.; Morell, C.; Tognetti, V.; Joubert, L.; Popelier, P. L. A. ChemPhysChem 2024, e202400163. https://doi.org/10.1002/cphc.202400163.

72. Popelier, P. L. A.; Maxwell, P. I.; Thacker, J. C. R.; Alkorta, I. Theor. Chem. Acc. 2019, 138, 12. https://doi.org/10.1007/s00214-018-2383-0.

73. Thacker, J. C. R.; Popelier, P. L. A. J. Phys. Chem. A 2018, 122, 1439–1450. https://doi.org/10.1021/acs.jpca.7b11881.

74. Khan, D.; Duarte, L. J.; Popelier, P. L. A. Molecules 2022, 27, 5003. https://doi.org/10.3390/molecules27155003.

75. Thacker, J. C. R.; Vincent, M. A.; Popelier, P. L. A. Chem. Eur. J. 2018, 14, 11200. https://doi.org/10.1002/chem.201802035.

76. Alkorta, I.; Thacker, J. C. R.; Popelier, P. L. A. J. Comput. Chem. 2018, 39, 546. https://doi.org/10.1002/jcc.25098.

77. Backhouse, O. J.; Thacker, J. C. R.; Popelier, P. L. A. ChemPhysChem 2019, 20, 555. https://doi.org/10.1002/cphc.201801180.

78. Orangi, N.; Eskandari, K.; Thacker, J. C. R.; Popelier, P. L. A. ChemPhysChem 2019, 20, 1922. https://doi.org/10.1002/cphc.201900250.

79. Alkorta, I.; Silva, A. F.; Popelier, P. L. A. Molecules 2020, 25, 2674. https://doi.org/10.3390/molecules25112674.

80. Popelier, P. L. A. Res. Outreach 2019, 109, 90. https://doi.org/10.32907/ro-109-9093.

81. Popelier, P. L. A. J. Phys. Chem. A 1999, 103, 2883. https://doi.org/10.1021/jp984735q.

82. O’Brien, S. E.; Popelier, P. L. A. J. Chem. Inf. Comput. Sci. 2001, 41, 764. https://doi.org/10.1021/ci0004661.

83. Chaudry, U. A.; Popelier, P. L. A. J. Phys. Chem. A 2003, 107, 4578. https://doi.org/10.1021/jp034272a.

84. Anstöter, C.; Caine, B. A.; Popelier, P. L. A. J. Chem. Inf. Model. 2016, 56, 471–483. https://doi.org/10.1021/acs.jcim.5b00580.

85. Caine, B. A.; Bronzato, M.; Popelier, P. L. A. Chem. Sci. 2019, 10, 6368. https://doi.org/10.1039/C9SC01818B.

86. Caine, B. A.; Bronzato, M.; Fraser, T.; Kidley, N.; Dardonville, C.; Popelier, P. L. A. Commun. Chem. 2020, 3, 21. https://doi.org/10.1038/s42004-020-0264-7.

87. Matta, C. F.; Ayers, P. W.; Cook, R. Electron Localization-Delocalization Matrices; Springer: Switzerland, 2024. https://doi.org/10.1007/978-3-031-51434-0.

Received: 2025-04-30
Accepted: 2025-08-31
Published Online: 2025-09-24
Published in Print: 2025-11-25

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
