Ab Initio, Ab Initi-ish
on modeling molecular systems from their quantum mechanical origins
I guess this series has mostly turned into a learn-as-I-go theoretical chemistry primer. I do hope it is as enjoyable to read as it has been to write and learn. I do apologize for the delay; I know all 2 of you reading this have been at the edge of your seats waiting to hear about ab initio modeling methods. Unfortunately, I took a family beach vacation and was preoccupied with a simpler sort of wave mechanics.
Note: please forgive the janky sub- and superscripts used outside of the equations; until Substack supports inline LaTeX, I must make do :,(
First, A Slight Complication
A potential energy surface, as you’ll recall from our last discussion, describes the potential energy of a molecule as a function of its atomic coordinates. Now, something we’ve hand waved thus far is the nontrivial fact that we are able to model a molecular system as such an arrangement at all. After all, when we talk about atomic coordinates, we are talking about the spatial position of the atomic nuclei alone, and these are certainly not the only energetic contributors to a system…what about the electrons?
It’s time to graduate from classical mechanical approximations and address this herd of very small elephants in the room. To do so, we’ll need to rewind a bit.
A Brief, On the Origins of Quantum Mechanics
The 1920s, as you may or may not know, were a hot time for particle physics. In 1923, French physicist Louis de Broglie hypothesized that all particles must exhibit wave-particle duality. On the back of this hypothesis, Erwin Schrödinger’s 1926 wave mechanics adapted the mathematics of known heat and wave equations to describe the behavior of matter waves.
In Schrödinger’s wave mechanics, the Schrödinger equation:
tells us how the wave function Ψ of a matter wave evolves over time. One of the more insightful ways I’ve learnt to think about such an equation is as one which models probability flow, much in the same way different sorts of mechanical waves model energy flow via pressure or particle displacement.
As made evident by its form, the Schrödinger equation maps a relation between the static and dynamic properties of the wave function - its Hamiltonian-dependent and time derivative-dependent components, respectively - through the use of complex numbers. The use of the imaginary number i here is critical: it ensures the resulting time evolution operator is unitary, which preserves the normalization of the wave function and ensures that the wave function’s total probability |Ψ|² (as established by the Born rule) remains constant over time. If it didn’t, the solution would depend on real exponentials rather than complex oscillations, causing the probability amplitude to grow or decay exponentially over time and suggesting physically impossible particle behavior (think: regions of infinite likelihood or particles vanishing).
Schrödinger’s use of complex numbers here, as an aside, is really interesting because it marked one of the first times in physics history that complex numbers were treated not just as a convenience of algebra, to be omitted in solutions, but instead as a critical contributor to the physical meaning of the equation's solution set.
Wait, what are the complex numbers doing here exactly?
Well, we can observe that for energy eigenstates of form:
of the wave function which solves the time-independent Schrödinger equation:
the evolution about the complex plane over time manifests as a phase rotation. This is what we mean when we talk about a complex oscillation.
The time-independent Schrödinger equation for a multiple particle system represents a matter wave as a standing wave in configuration space: one which oscillates in time but whose peak amplitude profile does not move in space. Instead, it yields a rotation in the complex plane for each spatial component of the wave function at a rate proportional to its energy. Such a phase rotation preserves the wave’s oscillatory form and permits the expected effects of wave interference.
Stepping back now, we can see with more confidence that the time-independent Schrödinger equation tells us what stationary states Ψ and their energies E are possible, while the time-dependent Schrödinger equation tells us how the wave moves through its state space over time. General quantum states are superpositions of these stationary states, each evolving with its own phase factor.
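Since Substack can’t render LaTeX, maybe it will forgive a little Python instead. Here is a minimal numerical sketch of this idea (the two energies and equal weights are made-up illustrative values, with ħ = 1): each stationary-state amplitude only rotates in the complex plane, the total probability stays pinned at 1, and the interference between the two phase factors makes an observable probability oscillate.

```python
import numpy as np

# Two made-up stationary states with energies E1, E2 (hbar = 1) in an
# equal-weight superposition. Each amplitude evolves only by a phase
# factor e^{-iEt}; the norm is conserved while interference oscillates.
E1, E2 = 1.0, 2.5
c1 = c2 = 1 / np.sqrt(2)
t = np.linspace(0, 20, 500)
a1 = c1 * np.exp(-1j * E1 * t)              # phase rotation of state 1
a2 = c2 * np.exp(-1j * E2 * t)              # phase rotation of state 2
norm = np.abs(a1)**2 + np.abs(a2)**2        # total probability: stays at 1
p_plus = np.abs((a1 + a2) / np.sqrt(2))**2  # interference at frequency E2 - E1
```

The probability p_plus of finding the system in the symmetric combination of the two states swings between 0 and 1 at frequency E₂ − E₁, even though each stationary state’s own |Ψ|² never changes.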
Now, let’s talk a little more about the wave function Ψ. The wave function encompasses all necessary information about our system’s quantum state (position, momentum, energy, spin). For a system of interacting particles, the wave function is a function of 3N + 1 coordinates: 3N spatial coordinates and time. Where spin is considered, spin coordinates are also included for each particle.
Though a theoretically critical equation, we can now see that the Schrödinger equation of a multiple particle system is a many body problem, as the wave function requires three spatial coordinates for every particle’s position. Analytical solutions to such an equation are consequently intractable beyond very simple cases such as a particle in a box.
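To make “simple cases” concrete, here is a quick sketch (assuming ħ = m = 1 and a box of length 1) checking the analytic particle-in-a-box energies Eₙ = n²π²/2 against a brute-force finite-difference diagonalization:

```python
import numpy as np

# Particle in a box: one of the few analytically solvable cases.
# Compare E_n = n^2 * pi^2 / 2 (hbar = m = L = 1) with a numerical
# diagonalization of -1/2 d^2/dx^2 with psi = 0 at the walls.
L, n_grid = 1.0, 1000
dx = L / (n_grid + 1)
H = (np.diag(np.full(n_grid, 1.0)) - np.diag(np.full(n_grid - 1, 0.5), 1)
     - np.diag(np.full(n_grid - 1, 0.5), -1)) / dx**2
numeric = np.linalg.eigvalsh(H)[:3]                     # lowest 3 levels
analytic = np.array([n**2 * np.pi**2 / 2 for n in (1, 2, 3)])
print(numeric, analytic)
```

The two agree to a few parts per million; for anything much richer than a box, though, no closed form exists and we must approximate.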
The Born-Oppenheimer Approximation
Fortunately for us, hot on the heels of Schrödinger were German physicist Max Born and his collaborator J. Robert Oppenheimer with an important observation: because the mass difference between an electron and an atomic nucleus is so significant, from the point of view of an electron an atomic nucleus would appear stationary. Relatedly, from the point of view of an atomic nucleus, the positions of the surrounding electrons would appear to update instantaneously.
By applying these observations to the Schrödinger equation, Born and Oppenheimer showed it possible to separately solve its nuclear and electronic components.
The Born-Oppenheimer approximation for the electronic Schrödinger equation, given by:
yields wave functions and energies for a system’s electrons which are parametrically dependent on the system’s nuclear positions (that’s what the semicolon indicates).
Holding the nuclear positions fixed in various arrangements, R, this equation can be systematically solved to map out Eₑ(R), which is our system’s potential energy surface!
Exciting, right? Well, yes, but let’s not get ahead of ourselves. Systematically solving the electronic Schrödinger equation is still a many body problem, and it’s worth discussing how it can be handled within the Born-Oppenheimer framework.
The Hartree Method
In 1927, the same year the Born-Oppenheimer approximation was published, Douglas Hartree developed the Hartree method - a strategic guess and check approach to solving a system’s electronic Schrödinger equation.
The core assumption enabling the Hartree method is that a system’s electronic wave function can be represented as a product of one-electron wave functions - also known as atomic orbitals - as follows:
where the system has N electrons. This product is, predictably, called the Hartree Product. Such an assumption implies that, in this model, the system’s electrons do not interact. But never fear - we’ll make sure to pretend that they do!
The Hartree method begins by guessing an initial orbital, ψᵢ⁰, for each electron and deriving an initial mean field potential Vᵢ⁰(r) from them. It is this mean field potential that does the heavy lifting: at any given point, it approximates the potential energy the electron would experience in an equivalent (read: realistic) system of interacting electrons.
How So?
By evaluating the average electrostatic field created by all other electrons. The mean field is given in the original Hartree method as follows:
This represents the classical Coulombic repulsion experienced by our chosen electron from all other electrons.
This potential is represented in the Hartree Hamiltonian:
Which is solved for in the single electron Hartree equation:
You’ll notice this takes a familiar form, mirroring the time-independent Schrödinger equation. The single-electron Hartree equation is then solved for each electron i, yielding a new orbital ψᵢ(n+1) and energy εᵢ(n+1), from which a new potential Vᵢ(n+1) is derived. This process is repeated until the potential V(r) and the energies εᵢ converge, that is:
This convergence signifies that the resulting field is self-consistent. Self-consistency here implies that the effective potential is consistent with the electronic charge density, and the single-electron wave functions and energies are stable.
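The guess-and-check loop is easier to see in code than in prose. Below is a toy self-consistent field iteration in Python - not the real Hartree equations, but the same fixed-point structure. Everything here is an illustrative assumption: two hypothetical “electrons” in a 1D harmonic well, interacting through an invented contact mean field V(x) = g·ρ(x), iterated with density mixing until nothing changes.

```python
import numpy as np

# Toy SCF loop (illustrative assumptions throughout: 1D grid, harmonic
# external potential, contact mean field g * rho, hbar = m = 1).
n, L, g = 200, 10.0, 1.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
v_ext = 0.5 * x**2                        # the "nuclear" external potential

# kinetic operator -1/2 d^2/dx^2 by finite differences
T = (np.diag(np.full(n, 1.0)) - np.diag(np.full(n - 1, 0.5), 1)
     - np.diag(np.full(n - 1, 0.5), -1)) / dx**2

rho = np.exp(-x**2)
rho *= 2.0 / (rho.sum() * dx)             # initial guess: 2 electrons' worth
for it in range(500):
    H = T + np.diag(v_ext + g * rho)      # Hamiltonian with current mean field
    eps, psi = np.linalg.eigh(H)
    phi = psi[:, 0] / np.sqrt(dx)         # normalized lowest orbital
    rho_new = 2.0 * phi**2                # both electrons in that orbital
    if np.max(np.abs(rho_new - rho)) < 1e-8:
        break                             # self-consistency reached
    rho = 0.5 * rho + 0.5 * rho_new       # mix densities to aid convergence
print(it, eps[0])
```

In this toy the loop self-consists in a few dozen iterations, and the mean-field repulsion pushes the converged lowest orbital energy above the bare oscillator’s 0.5.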
The Hartree-Fock Method
Now, there is an issue with the original Hartree method. The Hartree product, as presented, violates the Pauli exclusion principle. The Pauli exclusion principle, as you’ll probably recall from high school chemistry class, states that two fermions (such as electrons) cannot occupy the same quantum state. There is nothing inherent in the Hartree product’s form which prevents this from happening!
The Pauli exclusion principle is enforced mathematically through a relation known as antisymmetry. Antisymmetry says that a wave function must change sign upon the exchange of the coordinates (spatial and spin) of any two electrons. Enforcing antisymmetry ensures that the probability of finding any two electrons in the exact same state will always be zero. Thus, if any two orbitals in the Hartree product are identical for two distinct electrons, the system’s electronic wave function must evaluate to zero everywhere.
How can we accomplish this mathematically?
Taking the simplest form of the Hartree product, a system of two particles:
We need to establish anti-symmetry as follows:
To do so, we can take an antisymmetrized linear combination of the Hartree products:
Represented in matrix form as:
The 1/sqrt(2) here is a normalization factor. This result is antisymmetric, and the wave function evaluates to zero for any pair of electrons in the same quantum state. Nice!
We can generalize this result to any number of electrons by writing it as a Slater determinant of the following form:
This determinant evaluates to zero if its columns (or rows) are linearly dependent - in particular, if any two columns (or rows) are identical. This satisfies the Pauli exclusion principle: two identical columns (or rows) mean, physically, that two electrons are trying to occupy the same spin-orbital, in which case the system’s wave function vanishes everywhere.
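We can verify both properties numerically. A small sketch (the two orbitals below are arbitrary illustrative functions, not solutions of any real system): swapping the two electrons’ coordinates flips the sign of the determinant, and putting both electrons in the same orbital makes it vanish.

```python
import numpy as np
from math import factorial

def orbital_a(x):
    return np.exp(-x**2)          # an arbitrary "orbital" for illustration

def orbital_b(x):
    return x * np.exp(-x**2)      # a second, distinct one

def slater(orbitals, coords):
    """Evaluate the Slater determinant for electrons at the given coordinates."""
    n = len(coords)
    matrix = np.array([[phi(x) for phi in orbitals] for x in coords])
    return np.linalg.det(matrix) / np.sqrt(factorial(n))

psi_12 = slater([orbital_a, orbital_b], [0.3, 0.7])
psi_21 = slater([orbital_a, orbital_b], [0.7, 0.3])    # electrons swapped
psi_same = slater([orbital_a, orbital_a], [0.3, 0.7])  # same orbital twice
print(psi_12, psi_21, psi_same)
```

Note psi_21 = −psi_12 (antisymmetry) while psi_same = 0 (exclusion), exactly as the determinant algebra promises.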
This determinant can be shorthanded in bra-ket notation (also called Dirac notation) as follows:
Incorporating antisymmetry into Hartree’s method was the work of Vladimir Fock, published in 1930; the determinant form itself is named for John Slater, who introduced it in 1929.
Beyond introducing antisymmetry, the Slater determinant introduces indistinguishability. In a system of indistinguishable electrons, positions can be swapped at random without impacting system state, while the same is not true of systems of distinguishable electrons. This leads to significant statistical effects which impact total energy evaluations of the system. Indistinguishability enables the inclusion of a key energetic contributor to the system’s potential: the exchange operator. This operator accounts for the energy difference between systems of distinguishable and indistinguishable electrons.
The single-electron Hartree-Fock equation:
Replaces the Hamiltonian given in the original single-electron Hartree equation with the Fock operator, which has the form:
Which adds a mean field accounting for both the Coulombic and exchange effects to the familiar Hartree Hamiltonian:
where the Coulombic potential J is given by the familiar looking:
and the exchange potential K is given as:
Solving the eigenvalue equation:
similar to how we did in the original Hartree method yields a new potential Vᵢ(n+1), orbital ψᵢ(n+1), and energy εᵢ(n+1), and the process is repeated until convergence is achieved.
It is important to note that though the original Hartree and Hartree-Fock methods appear quite similar, how we can interpret the eigenvalues, εᵢ, at the point of convergence is slightly different: the original Hartree method allows us only to interpret them as orbital energies while the Hartree-Fock method, through what is known as Koopmans’ theorem, enables us to interpret them as ionization energies as well - a key experimental observable. For the sake of article length, we won’t walk through the derivation of Koopmans’ theorem here, but it’s pretty interesting if you’re looking to dive deeper.
Whew! Ok, that was a lot of (albeit really interesting) math to dissect. As you can hopefully now join me in appreciating, finding even approximate solutions to the electronic wave equation is no small task.
A Return to the Nuclear Schrödinger Equation
As a reminder, solving the electronic wave equation for a system in various states is what allows us to map out Eₑ(R), our system’s potential energy surface.
Once we know Eₑ(R), we can solve the nuclear Schrödinger equation:
Solving the nuclear Schrödinger equation describes the probability distribution of the nuclear positions within the system, allowing us to determine the vibrational, rotational, and translational states of the nuclei. As we discussed in detail in Potential Energy Surfaces, such a surface allows us to predict our system’s molecular dynamics and many of its chemical and physical properties. How epic!
The Rest of the Electronic Effects
Fortunately or unfortunately for you (depending on your interest level thus far) things get more complicated from here.
For many complex molecular systems, it is necessary to understand their electronic structure with more nuance than the Hartree-Fock method can provide.
The Hartree-Fock method, as a mean field method, accounts fully for the exchange effects between same spin electrons through its inclusion of direct Coulombic and exchange terms. It does not, however, account for what we call correlation effects.
For reasons we will discuss, this means we need a different and hopefully better approximation of the electronic Schrödinger equation.
The Hohenberg-Kohn Theorems
Let’s fast forward to the 1960s, when a pair of physicists by the names of Pierre Hohenberg and Walter Kohn discovered a very interesting property of molecular systems.
In their 1964 paper, “Inhomogeneous Electron Gas”, Hohenberg and Kohn showed in a proof by contradiction that the ground state electron density of a system uniquely determines its external potential (up to a constant). The ground state here refers to the system’s lowest-energy state - its static, temperature-independent energy minimum. The ground state electron density, ρ₀(r), refers to the total probability of finding any of the system's electrons within a small volume around point r, irrespective of the positions of the other electrons. The external potential refers to the set of non-electronic forces (i.e., the Coulombic potential created by the atomic nuclei) that act on the electron cloud.
Taken together, the implication of this proof is that by knowing only the ground state electron density of a system, there exists a way to resolve the exact atomic environment that created it. Impressive!
Hohenberg and Kohn’s work further shows that all properties of the ground state can be expressed as functionals - functions which take functions as input and return scalars - of the ground state electron density.
This tells us that, given only the electron density, we can find much more than just the atomic arrangement; we can in fact find all quantum properties of the ground state.
Now, it is important to note that what Hohenberg and Kohn have constructed here is an existence proof - it says “we guarantee this mapping exists!” but does not explain how to evaluate these functionals themselves. This is nonetheless an important result, because the electron density of the system depends only on a single position vector r having three spatial coordinates irrespective of the number of electrons in the system. This majorly reduces the complexity of our problem from being one whose dimensionality scales with 3N to one that is fixed.
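To put numbers on that reduction, consider a crude sketch (the grid of g = 10 points per spatial axis is a made-up illustrative resolution): a wave function on such a grid needs g^(3N) values, while the density always needs g³, no matter how many electrons there are.

```python
g = 10  # hypothetical number of grid points per spatial axis
# storage needed for the full wave function at various electron counts N
wavefunction_values = {N: g ** (3 * N) for N in (1, 2, 5, 10)}
density_values = g ** 3  # fixed: the density lives in plain 3D space
print(wavefunction_values, density_values)
```

Already at N = 10 electrons the wave function would need 10³⁰ grid values - hopeless - while the density still fits in a thousand.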
The second major insight in the work of Hohenberg and Kohn comes in the form of their second theorem, which establishes the variational principle for a system’s electron density. Essentially, Hohenberg and Kohn show that when the energy obtained from the ground state energy functional:
is minimized, it will be exactly the system’s ground state energy.
Hohenberg and Kohn show this by proving that the energy obtained from this functional given any trial density represents an upper bound to the true ground state energy, yielding exactly the ground state energy only when the trial density given is the exact ground state density.
While theoretically impactful, this set of theorems introduced a frustratingly circular problem. Although they tell you the implications of knowing a system’s electron density, they do not resolve the problem of actually finding this electron density. Finding the electron density would require minimizing the energy functional, but to minimize the energy functional we need to know the system’s wave function! What do we do?
The Kohn-Sham Equation
A year later, in 1965, Kohn tackled this circularity problem with collaborator Lu Jeu Sham. In doing so, they developed the foundational set of equations for a modeling approach known as Kohn-Sham Density Functional Theory (KS-DFT). KS-DFT solves the same problem as Hartree-Fock - approximating the electronic Schrödinger equation for a many body system - but with a greater ability to yield true-to-reality results.
How so?
Well, for the Hartree-Fock method, recall that we use a mean field to describe the potential experienced by a given electron, which accounts for both Coulombic and exchange effects. What I haven’t told you yet is that these mean field effects are not the only ones electrons experience - there are also what we call correlation effects! Correlation effects arise from instantaneous interactions between electrons that occur beyond the mean field - van der Waals forces are a good example.
Kohn and Sham devised their approach in a way that incorporates these effects. They first imagined a fictitious version of a molecular system wherein the electrons do not interact, devising a Schrödinger-like single electron equation - the Kohn-Sham equation, of course - for such a system:
Then, they showed that the orbitals ψᵢ in this system could be carefully constructed to yield the exact same electron density as its interacting counterpart:
This is significant because we know, thanks to the Hohenberg-Kohn theorems, that the electron density exactly determines all other properties. Because our real and fictitious systems share an electron density, we can find a solution without dealing with our real Schrödinger equation at all! Instead, we can work with a self-consistent approach iterating upon the electron density of the system. The iterative approach here is similar to the one used in Hartree-Fock, but rather than building a mean field directly from the orbitals, we derive the electron density from the initial orbital guess and use it to calculate new potential and orbital values, repeating until both converge.
So…what’s the catch?
If this sounds too good to be true, it’s because it kind of is. The Kohn-Sham equation is indeed incredibly valuable, but tucked away inside it is a little secret. You see, the Vₑff, or the effective potential, that we saw in the Kohn-Sham equation is actually a summation of three separate potentials:
The first two are no big deal. The vₑₓₜ is the external potential - we know that the electron density determines this value thanks to the first Hohenberg-Kohn theorem. The vₕ is the Hartree potential - that’s just the mean field for electron-electron repulsion that we constructed in the Hartree-Fock method. The third term is not so simple.
The Exchange-Correlation Situation
The vₓ꜀ term is called the exchange-correlation potential. It is the term to which Kohn and Sham’s ingenuity is owed, because it translates the many-body exchange and correlation effects into a single-particle potential.
This potential is the functional derivative of the exchange-correlation energy with respect to the electron density:
Eₓ꜀[ρ] is called the exchange-correlation energy, a functional of the electron density describing, of course, the energy contribution of the exchange-correlation effects to the system.
Now, this exchange correlation energy can be expressed as follows:
Where the integral represents the electrostatic interaction between the electron density and its associated exchange-correlation hole, hₓ꜀.
What on earth is the exchange correlation hole?
The exchange-correlation “hole” is not a literal hole - it's a function describing how the presence of an electron at position r₁ affects the probability of finding any other electron at position r₂. Put more precisely, the exchange-correlation hole hₓ꜀(r₁,r₂) represents the difference between:
The conditional probability of finding an electron at r₂ given that there's an electron at r₁
The unconditioned probability of finding an electron at r₂ (which is just the density ρ(r₂))
The “hole” is so called for the probability depletion centered around r₁'s position - the probability of finding another electron exactly at r₁ is strongly reduced due to both Pauli exclusion (for same-spin electrons) and Coulombic repulsion (for all electrons).
Now, an important mathematical property is that this hole integrates to exactly -1 electron over all space:
This makes intuitive sense: the presence of one electron can be thought of as excluding exactly one electron's worth of density from the rest of the system.
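We can check this sum rule numerically for the one hole we do know in closed form: the exchange-only hole of the uniform electron gas, hₓ(u) = −(9n/2)·(j₁(k_F·u)/(k_F·u))², with j₁ the first spherical Bessel function. The closed form is standard for the uniform gas; the grid, cutoff, and the choice k_F = 1 below are my own crude assumptions.

```python
import numpy as np

# Radial grid for the inter-electron distance u (units where k_F = 1).
u = np.linspace(1e-3, 800.0, 400_000)
j1 = (np.sin(u) - u * np.cos(u)) / u**2       # spherical Bessel function j1
n_density = 1.0 / (3 * np.pi**2)              # uniform gas density at k_F = 1
h_x = -(9 * n_density / 2) * (j1 / u)**2      # exchange hole of the uniform gas
# integrate 4*pi*u^2 * h_x(u) du over all space
integral = np.sum(4 * np.pi * u**2 * h_x) * (u[1] - u[0])
print(integral)   # close to -1: one electron's worth of depleted density
```

The slow 1/u² tail of the integrand is why the cutoff must be generous; even so, the numerical integral lands within a percent of the exact −1.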
You may, like I was, be hoping that this is a neat and predictable function - perhaps something decreasing smoothly and isotropically away from r₁. Unfortunately, the exact hole is far more complex in nature. It depends on the overall electronic structure of the particular molecular system, can be highly anisotropic, and has different length scales for exchange and correlation effects. In some systems, it can even be oscillatory!
The exchange-correlation functional’s difficult-to-describe form has been responsible for a massive amount of theoretical work in recent years. The search for increasingly accurate approximations to the exchange-correlation hole has led to hundreds of proposed functionals, each with its own strengths and weaknesses for different classes of molecular systems.
Approximating the Exchange Correlation Functional
With the exchange correlation hole being the central mathematical challenge in modeling quantum many-body effects, there are two primary approaches to approximating the exchange-correlation functional:
Direct hole-based approximations, which explicitly model the exchange-correlation hole itself and then integrate it to obtain the corresponding functional.
Implicit functional approximations, which approximate the exchange-correlation functional directly, avoiding explicit construction of the hole while ensuring that the resulting functional implicitly corresponds to a physically reasonable hole that satisfies known mathematical constraints.
The Local Density Approximation (LDA)
The earliest devised of such approximations was the LDA, a direct hole-based approximation introduced in Kohn and Sham’s 1965 paper “Self-Consistent Equations Including Exchange and Correlation Effects”. The exchange-correlation energetics in LDA are modeled from the uniform electron gas, a case wherein the electrons are assumed to be spread at a homogeneous, constant density. This simplifies calculations enough to be tractable.
The LDA exchange correlation functional is expressed as:
And can be represented as a sum of functionals separately modeling the exchange and correlation energies:
which allows the two types of effects to be handled separately - an unsurprising approach given how much harder correlation is to treat than exchange.
LDA Exchange Functional
The exact form of the exchange energy functional can be derived from the uniform electron gas as follows:
Which corresponds to the uniform electron gas’ exchange hole as follows:
where:
is the Fermi wavevector - the radius of the Fermi sphere in momentum space, representing the maximum momentum that electrons can have at absolute zero temperature. The Fermi sphere is an approximation of a Fermi surface, and is useful in characterizing what are known as degenerate fermion systems, one such case being the uniform electron gas. The Fermi surface emerges from the system’s electronic structure and its shape is related to the electronic energy bands, which are determined by the electronic potential energy surfaces from the Born-Oppenheimer approximation. Beyond our immediate discussion, Fermi surfaces, Fermi spheres, and Fermi wavevectors come up often in modeling problems within condensed matter physics, and I highly recommend a deeper dive if it sparks your interest.
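As a concrete sketch of what evaluating the LDA exchange functional actually involves, here is the textbook formula Eₓ = −(3/4)(3/π)^(1/3) ∫ρ^(4/3) d³r applied to an illustrative density - the hydrogen 1s density ρ(r) = e^(−2r)/π in atomic units. This is the spin-unpolarized formula and a toy evaluation of my own choosing, not a full DFT calculation.

```python
import numpy as np

r = np.linspace(1e-6, 30.0, 300_000)               # radial grid (atomic units)
dr = r[1] - r[0]
rho = np.exp(-2 * r) / np.pi                       # hydrogen 1s density
C_x = (3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)   # LDA exchange constant
# radial integral of 4*pi*r^2 * rho^(4/3)
E_x = -C_x * np.sum(4 * np.pi * r**2 * rho ** (4.0 / 3.0)) * dr
print(E_x)   # about -0.213 hartree
```

For comparison, the exact exchange energy of this one-electron density is −5/16 ≈ −0.3125 hartree, so plain unpolarized LDA undershoots it noticeably - a small taste of why the search for better functionals never stopped.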
LDA Correlation Functional
The LDA correlation energy functional was not addressed in the original Kohn-Sham equation, owing to its difficulty. Proposals for its form came later on, the first by Eugene Wigner in his 1934 paper “On the Interaction of Electrons in Metals”, utilizing Rayleigh-Schrödinger perturbation theory, a method for modeling small perturbations to a system with a fully resolved quantum state without re-evaluating the full Schrödinger equation. Then, in 1976, physicists Gunnarsson and Lundqvist developed parameterizations for calculating both exchange and correlation effects in solids based on many-body perturbation theory, a generalization of Rayleigh-Schrödinger accounting for perturbations caused by many interacting particles.
Then, in 1980, physicists Ceperley and Alder performed quantum Monte Carlo simulations of the uniform electron gas which established the basis for modern parameterizations of the correlation energy functional used in LDA. Various such parameterizations exist, the most common being the Vosko-Wilk-Nusair (VWN) and Perdew-Wang (PW92) formulations. Each formulation provides ε(ρ) for different density values, which is then used in the correlation energy functional of form:
The Generalized Gradient Approximation (GGA)
GGA is an approximation method adapted from LDA by incorporating a density gradient-dependent enhancement factor, F(s). This approach is considered a semi-local one: the enhancement factor modulates the local exchange energy based on how quickly the density is changing in space, allowing the functional to distinguish between different molecular environments that might have the same local density but different density gradients. GGA, unlike LDA, modifies the energy functional itself rather than approximating the exchange-correlation hole directly.
Several iterations upon GGA were proposed throughout the 80s and 90s, with Axel Becke’s B88 being among the first, followed by Lee, Yang, and Parr’s LYP, which was derived from empirical measurements of the helium atom’s correlation energy rather than from the uniform electron gas.
In 1996, a seminal work by Perdew, Burke, and Ernzerhof introduced PBE, a version of GGA simplified from an earlier proposal by Perdew and Wang. Unlike earlier approaches, PBE is free of empirical parameters while still satisfying key mathematical constraints: having the correct uniform gas limit, exhibiting appropriate scaling behavior, following the sum rules for the exchange-correlation hole, and adhering to the Lieb-Oxford bound. PBE has become among the most popular modern approaches, owing to its comparative simplicity.
The general form of the exchange correlation functional in PBE is given as:
As in LDA, the exchange correlation energy functional can be represented as a sum of its exchange and correlation components.
GGA Exchange Functional
Then, the exchange component looks as follows:
With ε(ρ) being the exchange energy density of the uniform electron gas as given in LDA, and our enhancement factor F(s) being:
Where κ = 0.804 and μ ≈ 0.21951 are dimensionless constants satisfying two of the discussed constraints: κ satisfying the Lieb-Oxford bound and μ recovering the second order gradient expansion for the exchange energy in the limit of a slowly varying electron gas. We then have s given by:
This reduced gradient s, upon which our enhancement factor depends, measures the rate of change of the density normalized to the scale of the local Fermi wavelength.
When s is small, the density varies slowly relative to the local Fermi wavelength. When s is large, it varies rapidly. At s = 0, the system locally resembles the uniform electron gas. Regions where s is near 1 are typically bonding regions, while those where s ≥ 2 are typically regions far from the atomic nuclei - think density tails, atomic shells, or bond interfaces.
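The enhancement factor itself is simple enough to evaluate directly. A quick sketch using the constants quoted above (the sample s values are arbitrary illustrative picks):

```python
kappa, mu = 0.804, 0.21951  # PBE constants

def F(s):
    """PBE exchange enhancement factor: 1 + kappa - kappa / (1 + mu*s^2/kappa)."""
    return 1 + kappa - kappa / (1 + mu * s**2 / kappa)

for s in (0.0, 1.0, 2.0, 10.0):
    print(s, F(s))
```

F(0) = 1 recovers the LDA limit exactly, F grows with s (boosting exchange where the density varies rapidly), and it saturates below 1 + κ = 1.804, which is how PBE respects the Lieb-Oxford bound.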
GGA Correlation Functional
The correlation component is given by:
where ε(ρ) is the LDA correlation energy per particle and H(t, rₛ, ζ) is a function incorporating corrections from another reduced gradient, t, with rₛ being the Wigner-Seitz radius and ζ the relative spin polarization.
Interpreting this physically, the Wigner-Seitz radius specifies how compressed the electrons are locally; in regions where electrons are compressed together, correlation effects become stronger. The relative spin polarization meanwhile accounts for differences between same-spin and opposite-spin electron interactions - as same-spin electrons are kept apart by the Pauli exclusion principle, correlation primarily affects opposite-spin electrons.
Now, it is the exchange correlation functional’s sensitivity to key regions of rapidly changing density, accounted for by gradient approximations in both the exchange and correlation terms, that makes this approach far more accurate than LDA for many systems. For example, GGA does a better job of modeling biomolecules and water molecules due to its ability to characterize the energetic contributions of hydrogen bonds. In semiconductors and insulators, GGA outperforms LDA in predicting band gaps, lattice constants, and surface energies. In magnetic materials, GGA better predicts magnetic moments.
And Finally, A Wrap Up
LDA and GGA are certainly not the only active approaches to DFT, but I would exhaust my word count trying to detail them all. It is safe to say that this is an area of active research work and innovation. There remain several open questions in DFT as a whole. If you are curious, as I am, I encourage you to work your way through this article, wherein seven such questions are elaborated upon. I will present them here without discussion, in case any pique your interest:
Must adiabatic connection curves be convex?
What is the tightest Lieb-Oxford bound for exchange?
Can we find the semiclassical expansion of exchange-correlation?
Can the Hartree-Fock kinetic energy ever exceed the true kinetic energy?
What makes a density accurate?
Can we input densities into trial wave functions?
Are densities of particles related to densities of states?
If you’d be interested in discussing any of these problems and the current state of the art with me, please reach out. They are theoretically dense but incredibly interesting points for consideration and I’d love a fellow curious friend to dissect them with.
As always, I caveat my writing with the context that I am not (yet) an expert in this domain, only a curious armchair researcher. If there are points for theoretical or mathematical correction and clarification, please don’t hesitate to reach out. I’d love to chat!
Some Resources
I had the pleasure of reading and watching an immense number of pieces while preparing this writeup, covering a lot more ground than originally anticipated. I will link out to them here in case you’d like to do your own digging!
A highly palatable intro to the role of complex numbers in the Schrödinger equation:
Born and Oppenheimer’s “On the Quantum Theory of Molecules”: https://fisica.ufpr.br/mossanek/etc/born-oppenheimer_en.pdf
Hartree’s “The Wave Mechanics of an Atom with a Non-Coulomb Central Field. Part II. Some Results and Discussion”: https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/abs/wave-mechanics-of-an-atom-with-a-noncoulomb-central-field-part-ii-some-results-and-discussion/5916E7A0DEC0A051B435688BE2ACD57E
Hohenberg and Kohn’s “Inhomogeneous Electron Gas”: https://journals.aps.org/pr/pdf/10.1103/PhysRev.136.B864
Kohn and Sham’s “Self-Consistent Equations Including Exchange and Correlation Effects”: https://journals.aps.org/pr/pdf/10.1103/PhysRev.140.A1133
Rayleigh-Schrödinger Perturbation Theory: https://www.chemistry.tcd.ie/assets/pdf/ss/DAMB/DAMB%20SS/PERTURBATION%20THEORY.pdf
Wigner’s “On the Interaction of Electrons in Metals”: https://journals.aps.org/pr/abstract/10.1103/PhysRev.46.1002
Degenerate Fermion Systems: https://web.mit.edu/8.322/Spring%202007/notes/DFSCropped.pdf
An introduction to Density Functional Theory: https://www.imperial.ac.uk/media/imperial-college/research-centres-and-groups/computational-materials-science/teaching/DFT_NATO.pdf
The amazing professor Chris Kramer’s Density Functional Theory Series:
An introduction to the Hohenberg-Kohn theorems for chemists: https://people.chem.ucsb.edu/metiu/horia/OldFiles/115C/KH_Ch4.pdf
Perdew, Burke, and Ernzerhof’s “Generalized Gradient Approximation Made Simple”: https://dft.uci.edu/pubs/PBE97.pdf
And many, many more.
My dear friends, thank you so much for reading along. Stay curious, and see you next time!
~ Erika :)

