March 9, 2025

Phase Transitions & The Ising Model

Authors: Ryan Peters*
* Core Contributor; Correspondence to ryanirl@icloud.com;

I recently made a small foray into statistical mechanics while trying to understand the critical brain hypothesis [1] [2] [3], including what exactly it means for the brain to operate near a second-order phase transition, and why the observation that the size and lifetime distributions of neuronal avalanches are power-law distributed offers experimental support for criticality. This led me down a pretty deep rabbit hole of ideas in statistical mechanics such as the scaling hypothesis, mean-field theory, and the renormalization group, some of which I will explore in this post.

This post on phase transitions and the Ising model is going to be one of a couple of posts I hope to make on topics at the intersection of physics, neuroscience, and machine learning. These topics include (but are not necessarily limited to) free energy and entropy, Hopfield networks, Boltzmann machines, and the critical brain hypothesis. The discussion of the Ising model in this post will give a smooth transition into a future post on Hopfield networks, which are energy-based models of associative memory built on top of the Ising model. That said, these posts are currently in competition with a bunch of mechanistic interpretability posts I also want to get up soon.

To give a high-level overview of this post, we will first cover some background on what a phase transition is and how transitions are classified, then introduce critical exponents, the scaling hypothesis, and the concept of universality observed between different models. Next, we will discuss the Ising model, a model of magnetism at the scale of the magnetic dipole moment of an electron, in a varying number of dimensions. We will discuss what it means to solve a thermodynamic problem, namely computing the free energy and its various derivatives, then solve the Ising model in zero and one dimensions. Finally, we will cover the mean-field approximation of the Ising model in $d$ dimensions, talk about when this approximation does and does not hold, and provide a very interesting link between mean-field theory and variational inference in Bayesian statistics.

Phase Transitions

A phase transition is an event that occurs when small changes in a control parameter (such as temperature) lead to large, and sometimes discontinuous, changes in the macroscopic properties of a system in the thermodynamic limit. These macroscopic properties are often referred to as order parameters (such as magnetization, see Figure 2). Using the Ehrenfest classification, phase transitions are further classified by the system's free energy, where an $N$-th-order phase transition involves a discontinuity in the $N$-th partial derivative of the free energy with respect to the order parameter. Throughout this post, however, we will only distinguish between discontinuous (first-order) and continuous (second-order and above) phase transitions. This distinction can be seen more clearly in Figure 1 below.

Figure 1: Comparison of first-order and second-order phase transitions in the Ising model. Left panel: First-order phase transition showing magnetization $M$ as a function of external field $h$ at $T < T_c$. The magnetization exhibits a discontinuous jump from $M = -1$ to $M = 1$ at the critical point $h = 0$ (dotted vertical line), demonstrating the discontinuity characteristic of first-order phase transitions. Right panel: Second-order phase transition depicting the absolute magnetization $|M|$ versus reduced temperature $T/T_c$ at zero external field. The magnetization follows the power law $|M| \sim (1 - T/T_c)^\beta$ for $T < T_c$ and continuously approaches zero at $T_c$ (dashed vertical line), with critical exponent $\beta = 0.5$ from the mean-field approximation.

Phase transitions occur at a critical point given by a coexistence curve (a sort of hyperplane) that separates two distinct phases of matter, such as water and ice. When crossing this boundary, the system's macroscopic properties change abruptly, whether through a first- or second-order phase transition. As the system size grows larger, approaching the thermodynamic limit, the properties on each side of this boundary become more sharply defined. This behavior is explained through renormalization group theory, which is a bit beyond the scope of this post, but I will refer to it once or twice more.

Throughout this post, we will use magnetism as the pedagogical example. At a microscopic level, magnetic materials derive their properties from the alignment of magnetic dipole moments. These moments arise primarily from the intrinsic spin of an electron, creating tiny magnetic fields that can interact with neighboring atoms. In ferromagnetic materials (such as iron), these dipole moments can align, producing a permanent magnet. When a sufficient fraction of these magnetic moments align, they produce a net magnetic field. Here, we refer to magnetic materials as ferromagnets, and non-magnetic materials as paramagnets.

To give a concrete example of a phase transition that's relevant within the context of this blog post, let's consider the ferromagnet-paramagnet phase transition. The magnetization curve shown in Figure 2 illustrates the temperature dependence of the order parameter $M$ in a ferromagnetic system. For temperatures below the critical temperature $T_c$ (called the Curie temperature), the system exhibits spontaneous magnetization, represented by the two symmetric branches of the curve. These branches emerge from a bifurcation at $T_c$ and grow in magnitude as temperature decreases, following the power law $M \propto (T_c - T)^\beta$, where $\beta$ is a critical exponent (something we will discuss later). The dashed line at $M = 0$ for $T < T_c$ represents an unstable state, while the continuation above $T_c$ represents the paramagnetic phase, where thermal fluctuations overcome the ordering tendency of the magnetic interactions, resulting in the loss of magnetization.

Figure 2: Phase diagram of the Ising model showing magnetization ($M$) as a function of reduced temperature ($T/T_c$).

The first phase transition occurs as we vary the temperature with no external magnetic field present. Consider a permanent magnet at room temperature (such as iron) where the magnetic moments align themselves spontaneously in the same direction, creating a net magnetic field. As we heat this magnet above $T_c$, it undergoes a dramatic change where the thermal energy becomes strong enough to overcome the alignment of these magnetic moments, and the material transitions into a paramagnetic phase where the magnetic moments point in random directions, resulting in no net magnetic field. This is a continuous or second-order phase transition, where the magnetization gradually decreases to zero as we approach the critical temperature (Figure 1 above).

The second type of transition becomes apparent when we apply an external magnetic field at a fixed temperature below $T_c$. Starting with a strong field in one direction, the magnetic moments align with this field. As we decrease the field strength to zero and then increase it in the opposite direction, the magnetization shows a sudden jump or discontinuity as it switches direction, characteristic of a first-order phase transition. It is also worth noting that because all of the spins are aligned in one direction, there is a phenomenon called hysteresis, a sort of lag that occurs because the spins resist flipping even after the external field flips. In this sense, the system's response depends on its history, and hysteresis reflects the system's preference to maintain alignment among its magnetic moments even as the external field reverses.

Critical Exponents and Universality

One interesting property of continuous phase transitions is that they exhibit a characteristic power-law behavior in various thermodynamic quantities as the system approaches some critical point. These power laws are characterized by critical exponents, which quantify how physical observables scale with the distance from the critical point. For a magnetic system at temperature $T$ near its critical temperature $T_c$, we define the reduced temperature $t = (T - T_c)/T_c$ as a measure of this distance.

Table 1: Primary critical exponents and their associated scaling relations. Here, $t$ denotes the reduced temperature, $h$ the external field, and $d$ the spatial dimension.

This critical behavior is described by a set of critical exponents, each associated with a specific thermodynamic quantity. For example, the magnetization $M$, below $T_c$, scales as $M \sim |t|^\beta$. Similarly, the magnetic susceptibility $\chi$ diverges as $\chi \sim |t|^{-\gamma}$. These power laws characterize the singular behavior near the critical point, and can be observed across a wide range of (surprisingly different) physical systems. A larger set of the primary critical exponents and their scaling relations are presented in Table 1 above, and a more graphical plot can be seen in Figure 3 below.

Figure 3: Critical behavior of thermodynamic quantities in the 2D Ising model as functions of the reduced temperature $t = (T - T_c)/T_c$. Left panel: The magnetization $M \sim |t|^\beta$ for $t < 0$ with critical exponent $\beta = 0.125$ ($= 1/8$), showing how the order parameter continuously approaches zero as the critical temperature is approached from below. Center panel: The correlation length $\xi \sim |t|^{-\nu}$ with critical exponent $\nu = 1.0$, diverging at the critical point, which indicates the emergence of long-range correlations. Right panel: The magnetic susceptibility $\chi \sim |t|^{-\gamma}$ with critical exponent $\gamma = 1.75$ ($= 7/4$), demonstrating the divergence of fluctuations near the critical point. These precise critical exponents are characteristic of the 2D Ising universality class and illustrate how different physical observables exhibit power-law behavior near the critical point, with exponents that are related through scaling relations.
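To make the power-law scaling concrete, here is a small illustrative sketch (all parameter values are my own choices, not from any experiment): since $M \sim |t|^\beta$ is a straight line in log-log coordinates, the critical exponent can be read off as the slope of a log-log fit.

```python
import numpy as np

# Illustrative sketch: generate synthetic magnetization data following
# M ~ |t|^beta with the mean-field exponent beta = 0.5, then recover the
# exponent from the slope of a log-log fit.
beta_true = 0.5
t = -np.logspace(-4, -1, 50)              # reduced temperatures below T_c
M = np.abs(t) ** beta_true                # exact power law, no noise

# A power law appears as a straight line in log-log coordinates,
# so the fitted slope is the critical exponent.
slope, intercept = np.polyfit(np.log(np.abs(t)), np.log(M), 1)
```

This log-log fitting procedure is essentially how exponents are estimated from real (noisy) data as well, though careful fits restrict to the asymptotic region close to $t = 0$.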

The concept of universality as it relates to critical phenomena states that systems with distinct microscopic interactions, characterized by their (potentially very different) Hamiltonians, can exhibit identical critical exponents. This universal behavior can be understood through the role of fluctuations and the divergence of the correlation length $\xi \sim |t|^{-\nu}$ near the critical point (more on correlation functions in the next section). When $\xi$ becomes much larger than any microscopic length scale, fluctuations on all length scales become increasingly important, and the individual microscopic interactions become irrelevant to the critical behavior. These large fluctuations wash out the microscopic details, leaving only the fundamental symmetries and dimensionality to determine the critical behavior. Systems that share these basic properties form a universality class, characterized by their identical critical exponents.

To give a concrete example, the liquid-gas critical point and the ferromagnetic phase transition in the three-dimensional Ising model have the same critical exponents, and are thus in the same universality class. This is despite the (large) differences in their microscopic physics, where one involves molecular interactions in a fluid and the other involves magnetic moments on a lattice.

Correlation Functions

To understand critical behavior, it's going to be useful to have an idea of what form the spatial correlation function takes. Consider a magnetic system where $s(\mathbf{r})$ represents the local spin at position $\mathbf{r}$. The two-point correlation function is defined as

$$G(\mathbf{r}_1, \mathbf{r}_2) = \langle s(\mathbf{r}_1) s(\mathbf{r}_2) \rangle - \langle s(\mathbf{r}_1) \rangle \langle s(\mathbf{r}_2) \rangle$$

This function measures the degree to which fluctuations at different points in space are correlated, after subtracting out the effects of the average magnetization. For translationally invariant systems, it depends only on the relative displacement $\mathbf{r} = \mathbf{r}_2 - \mathbf{r}_1$, allowing us to rewrite $G(\mathbf{r}_1, \mathbf{r}_2)$ as $G(\mathbf{r})$. Away from criticality, this correlation function exhibits exponential decay

$$G(\mathbf{r}) \sim e^{-r/\xi}$$

To understand why this is, consider a system with local interactions. The correlations between distant points must be built up through a chain of intermediate interactions, and each “step” in this chain introduces a multiplicative factor. The accumulated effect over a distance $r$ naturally leads to an exponential decay, with $\xi$ characterizing the typical distance over which correlations persist. However, what is very important to note is that at the critical point, the correlation function transitions to a power-law decay

$$G(\mathbf{r}) \sim \frac{1}{r^{d-2+\eta}}$$

where $d$ is the spatial dimension and $\eta$ is the anomalous dimension. The anomalous dimension $\eta$ is a sort of correction term that represents how the spatial correlations deviate, due to critical fluctuations, from what would be expected in a simple non-interacting or mean-field system. This transition from exponential to power-law behavior reflects the emergence of scale invariance at criticality, as the divergence of $\xi$ eliminates the characteristic length scale that gave us the exponential decay.
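The difference between these two decay forms is easy to see numerically. Here is a minimal sketch comparing them, using an arbitrary finite $\xi$ for the off-critical case and the 2D Ising value $\eta = 1/4$ for the critical case: the exponential is negligible beyond a few correlation lengths, while the power law still retains substantial correlations at large $r$.

```python
import numpy as np

# Compare the two decay forms of G(r): exponential away from criticality
# versus power-law at the critical point.
r = np.linspace(1.0, 50.0, 200)
xi = 5.0                       # arbitrary finite correlation length (off-critical)
d, eta = 2, 0.25               # 2D Ising: d = 2, eta = 1/4

G_exp = np.exp(-r / xi)                 # off-critical: exponential decay
G_pow = r ** -(d - 2 + eta)             # critical: power-law decay (r^{-1/4} here)

# At r = 50 the exponential has decayed to e^{-10}, while the power
# law is still of order 50^{-1/4} ~ 0.4.
ratio = G_pow[-1] / G_exp[-1]
```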

Figure 4: Visualization of the concept of correlation length. The figure depicts a 2D plane in 3D space with randomly oriented spins (represented by arrows) distributed throughout. Within a circular region of diameter $\xi$, all spins are aligned in the same direction, while outside this region, spin orientations are random and uncorrelated. The correlation length $\xi$ defines the characteristic spatial scale over which the system maintains order. Spins separated by distances less than $\xi$ tend to be correlated, while those separated by distances greater than $\xi$ tend to behave independently. As the temperature approaches the critical point $T_c$, this correlation length diverges according to $\xi \sim |t|^{-\nu}$, leading to long-range order and scale-invariant behavior characteristic of second-order phase transitions.

This is very interesting because it implies that systems undergoing a second-order phase transition have scale-free dynamics at the critical point. Within the context of a future post on the critical brain hypothesis, this is of interest because it explains why observing scale-free dynamics, for example in the size and lifetime distributions of neuronal avalanches, provides evidence for the brain operating near a second-order phase transition between ordered and disordered states. I'll post more on this another time, but it also adds intuition for why the brain might want to self-organize or fine-tune near the critical point. That is, at the critical point there is a divergence in the correlation length, allowing for optimal message passing between pairs of neurons.

The complete theoretical understanding of this power-law behavior emerges from the renormalization group analysis. Furthermore, the full derivation of the correlation function forms can be followed from the Ornstein-Zernike theory of correlation functions, which is slightly beyond the scope of this post. The Berlinsky [4] book has a good discussion of this in Chapter 13.

The Scaling Hypothesis

It turns out that the various critical exponents introduced in the previous section are not all independent. The scaling hypothesis, one of the fundamental principles in the theory of critical phenomena, asserts that only two of the critical exponents are independent, and that the remaining exponents can be determined through a set of scaling relations. The most fundamental of these relations are the Rushbrooke and Widom scaling laws

$$\alpha + 2\beta + \gamma = 2 \qquad \gamma = \beta(\delta - 1)$$

Further scaling relations can be derived by considering the role of the spatial dimension $d$ and the anomalous dimension $\eta$. These relationships, known as the hyperscaling relations, take the form

$$\begin{align*} \alpha &= 2 - d\nu \\[0.25em] \beta &= \frac{\nu}{2}(d - 2 + \eta) \\[0.25em] \gamma &= \nu(2 - \eta) \\[0.25em] \delta &= \frac{d + 2 - \eta}{d - 2 + \eta} \end{align*}$$
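These relations can be checked directly. The 2D Ising exponents are known exactly ($\alpha = 0$, $\beta = 1/8$, $\gamma = 7/4$, $\delta = 15$, $\nu = 1$, $\eta = 1/4$), and plugging them in satisfies every relation above:

```python
# Verify the Rushbrooke, Widom, and hyperscaling relations using the
# exactly known 2D Ising critical exponents.
d = 2
alpha, beta, gamma = 0.0, 1 / 8, 7 / 4
delta, nu, eta = 15.0, 1.0, 1 / 4

rushbrooke = alpha + 2 * beta + gamma              # should equal 2
widom = gamma - beta * (delta - 1)                 # should equal 0
hyper_alpha = 2 - d * nu                           # should equal alpha
hyper_beta = (nu / 2) * (d - 2 + eta)              # should equal beta
hyper_gamma = nu * (2 - eta)                       # should equal gamma
hyper_delta = (d + 2 - eta) / (d - 2 + eta)        # should equal delta
```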

The existence of these scaling relations reveals a sort of simplicity underlying critical phenomena. Despite the complexity of microscopic interactions, systems near criticality can be characterized by just a few independent parameters. This reduction in complexity suggests that the behavior at critical points is not dependent on the specific details of the microscopic interactions, providing further support to the concept of universality discussed previously.

While I didn’t provide a full derivation of the scaling relationships, a thorough and pedagogical treatment can be found in Kopietz’s “Introduction to the Renormalization Group,” particularly in sections 1.2 and 1.3.

The Ising Model

The Ising model, often referred to as the Drosophila of statistical mechanics, is an abstraction of the microscopic properties of magnetism through discrete magnetic dipole moments (spins) that can either point up ($+1$) or down ($-1$). More formally, the Ising model is defined on a $d$-dimensional lattice $\Lambda$ where at each site $i \in \Lambda$ there is a discretized abstraction of the spin $\sigma_i \in \{+1, -1\}$. When the spins align we have a net magnetic field. When the spins do not align, they cancel each other out, resulting in no net magnetization. Most important to our study of the Ising model in this post is the existence (or lack thereof, depending on the dimensionality of the system) of first- and second-order phase transitions in the order parameter (magnetization) as we vary various control parameters (e.g., the strength of the external field or the temperature, respectively).

Figure 5: Left: Random spin configuration in a 2D Ising model lattice at high temperature, exhibiting disorder with no clear pattern of aligned spins. Right: Low-temperature configuration showing two distinct magnetic domains separated by a well-defined domain wall. The left domain consists predominantly of -1 spins (light), while the right domain contains mostly +1 spins (dark), illustrating the spontaneous symmetry breaking and phase separation characteristic of the ferromagnetic phase below the critical temperature.

In the ferromagnetic phase below the critical temperature $T_c$, the system spontaneously breaks symmetry as spins collectively align, forming clusters of uniform magnetization called magnetic domains. In real systems, these domains form due to competing interactions, boundary effects, and non-equilibrium dynamics. The boundaries between oppositely aligned domains are called domain walls (see Figure 5 above or the cover for this post), which have a characteristic width and energy cost (an interesting area of research). As temperature increases toward $T_c$, thermal fluctuations cause these domain walls to become increasingly irregular and fluid. At the critical point, the correlation length diverges, meaning that fluctuations occur at all length scales, which dissolves the distinct domains as the system transitions to the paramagnetic phase.
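This ordered/disordered picture is easy to reproduce numerically. Below is a minimal Metropolis Monte Carlo sketch of the 2D Ising model (with $J = 1$, $h = 0$, $k_B = 1$; the lattice size and sweep counts are arbitrary choices, not tuned values): well below $T_c \approx 2.269$ the lattice stays strongly magnetized, while well above it the magnetization decays toward zero.

```python
import numpy as np

# Minimal Metropolis sketch of the 2D Ising model (J = 1, h = 0, k_B = 1).
rng = np.random.default_rng(0)

def sweep(spins, T):
    """One Metropolis sweep: attempt n*n single-spin flips."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        # Energy change from flipping spin (i, j), with periodic neighbors.
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def magnetization(T, n=16, sweeps=200):
    spins = np.ones((n, n))        # ordered start
    for _ in range(sweeps):
        sweep(spins, T)
    return abs(spins.mean())

m_cold = magnetization(T=1.0)      # deep in the ordered phase
m_hot = magnetization(T=5.0)       # deep in the disordered phase
```

This single-flip scheme is the textbook algorithm; near $T_c$ itself it suffers from critical slowing down, which is exactly where cluster algorithms (e.g., Wolff) are preferred.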

Solving a Thermodynamic System

Before we go any further, it might be worth noting what the goals are, and what exactly it means to solve a system like the Ising model. First, we are interested in computing the mean magnetization of the spins $\langle \sigma \rangle$, which is computed as the ensemble (thermodynamic) average over the possible microstates of the system

$$\langle \sigma \rangle = \sum_s \sigma_s \, p(\sigma_s)$$

where $\sigma_s$ denotes the possible microstates of the system, and $p(\sigma_s)$ are the Boltzmann probabilities. These probabilities depend on the energy of each microstate, which is given by the Hamiltonian $\mathcal{H}$ of the system. The Hamiltonian encodes all interactions and external influences in the system, and evaluating it for a particular microstate $s$ gives us the energy $E(s)$.

Following the principles of statistical mechanics, lower energy states are assigned higher probabilities than higher energy states according to the Boltzmann distribution

$$p(s) = \frac{1}{Z} e^{-\beta E(s)}$$

where $\beta = \frac{1}{k_B T}$ is the thermodynamic beta, $k_B$ is the Boltzmann constant, and $Z$ is the partition function (a normalization factor to ensure the probabilities sum to one). We are working in the canonical ensemble, which describes a system at thermodynamic equilibrium with a reservoir (heat bath), and so the canonical partition function is given by

$$Z = \sum_{s} e^{-\beta E(s)}$$

The Helmholtz free energy of the system is related to the partition function as

$$F = \langle E \rangle - TS = -T \log(Z)$$

From the free energy, we can find most of what we are interested in. That is, the mean magnetization is related to the free energy as $\langle \sigma \rangle = -\partial F / \partial h$, and the magnetic susceptibility is $\chi = \partial \langle \sigma \rangle / \partial h$, which describes how much the magnetization changes with the external field $h$. So in some sense, we are mainly interested in the free energy of the system and its various derivatives, which we can get by computing the partition function. Thus, solving a thermodynamic system like this roughly boils down to computing the partition function $Z$.

The Ising Hamiltonian

In true statistical mechanics fashion (I say this as someone who is not a physicist), the first thing we will want to do is define the total energy of any state of the system, the Hamiltonian. To do so, notice that there are only two net forces on the system. Given any dipole moment $\sigma_i$, the forces acting on it are the spins of its neighbors and that of any external magnetic field. First, we can model the neighboring interactions with the following term:

$$-\frac{1}{2} \sum_{\langle i, j \rangle} J_{ij} \sigma_i \sigma_j$$

where $\sigma_i \in \{-1, 1\}$ is the spin of the $i$-th dipole, $J_{ij}$ is the coupling strength between the $i$-th and $j$-th dipoles, and the $\langle i, j \rangle$ notation means we sum over all interacting pairs of sites. Notice that because of the symmetry with $i$ and $j$ in the $\langle i, j \rangle$ notation, we also have a factor of one-half to account for the double counting of terms.

Next, we have the interaction of an external field $h$ on each dipole. This interaction can be written as

$$-\sum_i h_{i} \sigma_i$$

where $h_i$ is the coupling of the external field to the $i$-th dipole. Summing these two terms together gives us the Hamiltonian of the Ising model

$$H = -\frac{1}{2} \sum_{\langle i, j \rangle} J_{ij} \sigma_i \sigma_j - \sum_i h_{i} \sigma_i$$

Here, we will make the traditional simplifying assumptions that $J_{ij}$ and $h_i$ are uniform across all dipoles, and allow the one-half factor to be absorbed into $J$. This gives us the following simplified Hamiltonian, which we will work with for the rest of this blog post

$$H = -J \sum_{\langle i, j \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i$$

Next, we will try to solve this system (compute the partition function $Z$) in a varying number of dimensions.
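Everything in this framework can be verified by brute force for a small system. Here is a quick sketch (the chain length, temperature, and field values are arbitrary) that enumerates all $2^N$ states of a tiny periodic 1D chain, builds $Z$, and checks the identity $\langle M \rangle = -\partial F / \partial h$ (for the total magnetization $M = \sum_i \sigma_i$) with a numerical derivative.

```python
import itertools
import numpy as np

# Brute-force check on a tiny periodic Ising chain (N = 3, J = 1, k_B = 1):
# enumerate all 2^N microstates, build Z and F, and verify <M> = -dF/dh.
N, J, T = 3, 1.0, 2.0
beta = 1.0 / T

def energy(spins, h):
    return (-J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
            - h * sum(spins))

def partition(h):
    return sum(np.exp(-beta * energy(s, h))
               for s in itertools.product([-1, 1], repeat=N))

def free_energy(h):
    return -T * np.log(partition(h))

def mean_total_magnetization(h):
    Z = partition(h)
    return sum(sum(s) * np.exp(-beta * energy(s, h))
               for s in itertools.product([-1, 1], repeat=N)) / Z

h0, eps = 0.5, 1e-6
dF_dh = (free_energy(h0 + eps) - free_energy(h0 - eps)) / (2 * eps)
```

Exhaustive enumeration obviously does not scale (the state space doubles with every spin), which is exactly why the exact solutions and approximations below are worth the effort.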

Exact Solution in Zero-Dimensions

In zero dimensions there is only a single spin, and the interaction term in the Ising Hamiltonian vanishes, meaning we are left with only the interaction between the spin and the external field $h$. Therefore, in zero dimensions our new Hamiltonian becomes

$$H = -h \sigma$$

As previously mentioned, we must compute the partition function. Given there is only one spin with two possible microstates ($\{-1, +1\}$), the partition function can be trivially computed as

$$\begin{align*} Z &= \sum_{s = \pm 1} e^{-\beta E(s)} \\ &= e^{\beta h} + e^{-\beta h} \\ &= 2 \cosh(\beta h) \end{align*}$$

From this partition function, we can compute most of what we are interested in. First, the free energy is given by

$$\begin{align*} F &= -T \log(Z) \\ &= -T \log(2 \cosh(\beta h)) \end{align*}$$

The mean magnetization can be computed as the thermodynamic mean or expectation of the spin(s), which results in

$$\begin{align*} \langle \sigma \rangle &= \sum_{s = \pm 1} P(s) \, s \\ &= \frac{1}{Z} \sum_{s = \pm 1} e^{-\beta E(s)} \, s \\ &= \frac{1}{Z} \big[ e^{\beta h} - e^{-\beta h} \big] \\ &= \frac{1}{Z} \big[ 2 \sinh(\beta h) \big] \\ &= \tanh(\beta h) \end{align*}$$

This is bounded strictly between $-1$ and $1$, dependent on the inverse temperature $\beta = \frac{1}{T}$ (assuming $k_B = 1$) and the external field $h$. Figure 6 below visualizes the mean magnetization at different temperatures. Given any non-zero external field, the average magnetization of the spin always takes the sign of the external magnetic field. At exactly zero external field, the mean magnetization is always zero (except for the behavior at absolute zero, which we won't worry about).
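As a quick sanity check, we can recover $\langle \sigma \rangle = \tanh(\beta h)$ by enumerating the two microstates directly (the values of $\beta$ and $h$ below are arbitrary):

```python
import numpy as np

# Enumerate the two microstates s = -1, +1 of the zero-dimensional model
# (H = -h*s, k_B = 1) and compare the Boltzmann average against tanh(beta*h).
def mean_spin(beta, h):
    weights = {s: np.exp(-beta * (-h * s)) for s in (-1, +1)}
    Z = sum(weights.values())                  # partition function
    return sum(s * w for s, w in weights.items()) / Z

beta, h = 1.5, 0.7
m = mean_spin(beta, h)
```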

Figure 6: Zero-dimensional Ising model results showing the paramagnetic behavior of an isolated spin. Left panel: Magnetization $M$ as a function of temperature $T$ for various external field strengths. At zero field ($h = 0$), the magnetization is precisely zero at all temperatures due to the equal probability of spin up and down states. As external field strength increases ($h = 0.1, 0.5, 1.0$), the system exhibits induced magnetization that monotonically decreases with temperature. Right panel: Magnetization as a function of external field $h$ at different temperatures. The curves demonstrate the characteristic S-shaped response typical of paramagnetism, with steeper slopes at lower temperatures indicating higher magnetic susceptibility. As temperature increases, thermal fluctuations increasingly compete with the aligning effect of the external field, resulting in a more gradual approach to saturation.

The magnetic susceptibility $\chi$, which measures the system's response to small changes in the external field, can be derived from the average magnetization as

$$\chi = \frac{\partial \langle \sigma \rangle}{\partial h} = \frac{\partial}{\partial h} \tanh(\beta h) = \beta \, \text{sech}^2(\beta h)$$

This means that while the zero-dimensional Ising model responds to external fields, it cannot exhibit a phase transition. The susceptibility remains analytic for all temperatures and fields, showing no divergence that would signal a phase transition.
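The closed form for $\chi$ is easy to check against a numerical derivative of $\tanh(\beta h)$ (the values of $\beta$ and $h$ below are arbitrary):

```python
import numpy as np

# Compare chi = beta * sech^2(beta*h) with a central finite difference
# of the magnetization <sigma> = tanh(beta*h) with respect to h.
beta, h, eps = 1.2, 0.4, 1e-6
chi_analytic = beta / np.cosh(beta * h) ** 2
chi_numeric = (np.tanh(beta * (h + eps)) - np.tanh(beta * (h - eps))) / (2 * eps)
```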

Exact Solution in One-Dimension

In one dimension, the Ising Hamiltonian is given by

$$H = -J \sum_{i} \sigma_i \sigma_{i+1} - h \sum_i \sigma_i$$

We define the periodic boundary condition $\sigma_{N+1} = \sigma_1$, converting the chain into a sort of circular linked list. In the thermodynamic limit where $N \to \infty$, this shouldn't matter too much, and it turns out that this assumption affords us a couple of nice simplifications that make the math easier later on.

Before computing the partition function, let us first perform a change of variables

$$\begin{align*} H &= -J \sum_{i} \sigma_i \sigma_{i+1} - h \sum_i \sigma_i \\ &= \sum_{i} \Big( -J \sigma_i \sigma_{i+1} - \frac{h}{2} (\sigma_i + \sigma_{i+1}) \Big) \\ &= \sum_{i} E(\sigma_i, \sigma_{i+1}) \end{align*}$$

where $E(\sigma_i, \sigma_{i+1}) = -J \sigma_i \sigma_{i+1} - \frac{h}{2} (\sigma_i + \sigma_{i+1})$ is the energy of the bond between lattice sites $i$ and $i+1$. Notice the factor of $\frac{1}{2}$ that snuck in due to the double counting of the external field energy.
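This rewriting is easy to confirm numerically: summing the bond energies around a random periodic configuration reproduces the original Hamiltonian exactly, since each spin appears in exactly two bonds and the halves of the field term recombine (the configuration and parameter values below are arbitrary).

```python
import numpy as np

# Check that H = sum_i E(sigma_i, sigma_{i+1}), with
# E(a, b) = -J*a*b - (h/2)*(a + b), matches the original periodic
# 1D Hamiltonian for a random spin configuration.
rng = np.random.default_rng(1)
N, J, h = 8, 1.0, 0.3
s = rng.choice([-1, 1], size=N)

H_direct = (-J * sum(s[i] * s[(i + 1) % N] for i in range(N))
            - h * s.sum())
H_bonds = sum(-J * s[i] * s[(i + 1) % N] - (h / 2) * (s[i] + s[(i + 1) % N])
              for i in range(N))
```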

Again, the partition function takes on the following form

$$Z = \sum_{\{s\}} e^{-\beta E(\{s\})}$$

where $\{s\}$ ranges over the $2^N$ possible states of the system. Notice that because we have interacting terms, this partition function is already much harder to solve than the zero-dimensional case. In my studies of this form, I came across two solutions: a combinatorial one, and one that involves a transfer matrix. I much prefer the latter, and so we will follow along with that solution. To do so, we will introduce a couple of notational tricks that allow us to rewrite the partition function as a product of matrices (the transfer matrix), which will result in a big simplification. First, let's expand and simplify the partition function a bit.

$$\begin{align*} Z &= \sum_{s_1} \sum_{s_2} \cdots \sum_{s_N} e^{-\beta \sum_{i} E(\sigma_i, \sigma_{i+1})} \\ &= \sum_{s_1} \sum_{s_2} \cdots \sum_{s_N} e^{-\beta E(\sigma_1, \sigma_{2}) - \beta E(\sigma_2, \sigma_{3}) - \ldots - \beta E(\sigma_N, \sigma_{1})} \\ &= \sum_{s_1} \sum_{s_2} \cdots \sum_{s_N} e^{-\beta E(\sigma_1, \sigma_{2})} e^{-\beta E(\sigma_2, \sigma_{3})} \cdots e^{-\beta E(\sigma_N, \sigma_{1})} \\ &= \sum_{s_1} \sum_{s_2} \cdots \sum_{s_N} T(\sigma_1, \sigma_2) T(\sigma_2, \sigma_3) \cdots T(\sigma_N, \sigma_1) \end{align*}$$

where $T(\sigma_i, \sigma_j) = e^{-\beta E(\sigma_i, \sigma_j)}$. Next, notice that $T(\sigma_1, \sigma_2)$ can only take on four possible values: $T(\uparrow, \uparrow)$, $T(\uparrow, \downarrow)$, $T(\downarrow, \uparrow)$, and $T(\downarrow, \downarrow)$. Thus, we can represent the function $T(\sigma_i, \sigma_j)$ with the following two-by-two (transfer) matrix

$$\mathbf{T} = \begin{bmatrix} T(\uparrow, \uparrow) & T(\uparrow, \downarrow) \\ T(\downarrow, \uparrow) & T(\downarrow, \downarrow) \end{bmatrix} = \begin{bmatrix} e^{\beta(J+h)} & e^{-\beta J} \\ e^{-\beta J} & e^{\beta(J-h)} \end{bmatrix}$$

where the value of $T(\sigma_i, \sigma_j)$ is given by an element of $\mathbf{T}$, namely $\mathbf{T}_{\sigma_i, \sigma_j}$. Now, the key property that makes this technique so successful results from the following identity

$$\sum_{\sigma_j} \mathbf{T}_{\sigma_i, \sigma_j} \mathbf{T}_{\sigma_j, \sigma_k} = (\mathbf{T} * \mathbf{T})_{\sigma_i, \sigma_k}$$

which is just the definition of matrix multiplication. From this, we can simplify the partition function as

$$\begin{align*} Z &= \sum_{s_1} \sum_{s_2} \cdots \sum_{s_N} T(\sigma_1, \sigma_2) T(\sigma_2, \sigma_3) \cdots T(\sigma_N, \sigma_1) \\ &= \sum_{s_1} \sum_{s_3} \cdots \sum_{s_N} (\mathbf{T} * \mathbf{T})_{\sigma_1, \sigma_3} \cdots T(\sigma_N, \sigma_1) \\ &= \sum_{s_1} \sum_{s_4} \cdots \sum_{s_N} (\mathbf{T} * \mathbf{T} * \mathbf{T})_{\sigma_1, \sigma_4} \cdots T(\sigma_N, \sigma_1) \\ &= \sum_{s_1} (\mathbf{T}^N)_{\sigma_1, \sigma_1} \end{align*}$$

Because we are now only summing over $\sigma_1$, the only possible states left are $(\uparrow, \uparrow)$ and $(\downarrow, \downarrow)$, which correspond to the diagonal values of $\mathbf{T}^N$, whose sum is the trace.

$$Z = \text{Tr}[\mathbf{T}^N]$$

The fact that we have a periodic boundary condition is really what enables the partition function $Z$ to be written as the trace of the $N$-th power of $\mathbf{T}$. This is a huge simplification, and it has a closed-form solution after an eigenvalue analysis.
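The trace formula is straightforward to verify numerically against the brute-force sum over all $2^N$ configurations (the chain length, temperature, and field values below are arbitrary):

```python
import itertools
import numpy as np

# Compare Z = Tr[T^N] with the brute-force partition function for a small
# periodic chain (N = 6, J = 1, h = 0.3, temperature 1.5, k_B = 1).
N, J, h = 6, 1.0, 0.3
beta = 1.0 / 1.5

# Transfer matrix in the (up, down) basis.
Tm = np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
               [np.exp(-beta * J), np.exp(beta * (J - h))]])
Z_transfer = np.trace(np.linalg.matrix_power(Tm, N))

# Direct sum over all 2^N spin configurations.
Z_brute = sum(
    np.exp(-beta * (-J * sum(s[i] * s[(i + 1) % N] for i in range(N))
                    - h * sum(s)))
    for s in itertools.product([-1, 1], repeat=N))
```

The brute-force sum costs $O(2^N)$ while the transfer-matrix product costs only $O(\log N)$ matrix multiplications, which is the whole point of the trick.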

As a reminder, recall that the trace of a matrix is equal to the sum of its eigenvalues. Furthermore, because $\mathbf{T}$ is real and symmetric, so is $\mathbf{T}^N$, and therefore

$$\text{Tr}[\mathbf{T}^N] = \lambda_{+}^N + \lambda_{-}^N$$

Here, we define the eigenvalues as λ+\lambda_{+} and λ\lambda_{-}, and assume that λ+λ\lambda_{+} \ge \lambda_{-}, then solve for them analytically. Also, note that because of the power of NN that we only care about the larger of the two eigenvalues since in the thermodynamic limit (NN \to \infty) we have

Z=λ+N+λN=λ+N(1+λNλ+N)λ+NZ = \lambda_{+}^N + \lambda_{-}^N = \lambda_{+}^N (1 + \frac{\lambda_{-}^N}{\lambda_{+}^N}) \approx \lambda_{+}^N

Setting up the characteristic equation and solving for λ+\lambda_{+} and λ\lambda_{-} gives

det(TλI)=(eβ(J+h)λ)(eβ(Jh)λ)e2βJ=λeβJ2cosh(βh)+2sinh(2βJ)+λ2\begin{align*} \det (\mathbf{T} - \lambda I) &= (e^{\beta (J + h)} - \lambda) (e^{\beta (J - h)} - \lambda) - e^{-2\beta J} \\ &= -\lambda e^{\beta J} 2 \cosh(\beta h) + 2 \sinh(2\beta J) + \lambda^2 \end{align*}

Whose solutions (after setting to zero and solving for lambda) are

λ±=12(2eβJcosh(βh)±4e2βJcosh2(βh)8sinh(2βJ))=eβJcosh(βh)±e2βJcosh2(βh)2sinh(2βJ)=eβJcosh(βh)±e2βJcosh2(βh)(e2βJe2βJ)=eβJcosh(βh)±e2βJ(sinh2(βh)+1)e2βJ+e2βJ=eβJcosh(βh)±e2βJsinh2(βh)+e2βJ=eβJ[cosh(βh)±sinh2(βh)+e4βJ]\begin{align*} \lambda_{\pm} &= \frac{1}{2} \big( 2 e^{\beta J} \cosh(\beta h) \pm \sqrt{4 e^{2\beta J} \cosh^2 (\beta h) - 8 \sinh(2\beta J)} \big) \\[0.5em] &= e^{\beta J} \cosh(\beta h) \pm \sqrt{e^{2\beta J} \cosh^2 (\beta h) - 2 \sinh(2\beta J)} \\[0.5em] &= e^{\beta J} \cosh(\beta h) \pm \sqrt{e^{2\beta J} \cosh^2 (\beta h) - (e^{2\beta J} - e^{-2\beta J})} \\[0.5em] &= e^{\beta J} \cosh(\beta h) \pm \sqrt{e^{2\beta J}(\sinh^2(\beta h) + 1) - e^{2\beta J} + e^{-2\beta J}} \\[0.5em] &= e^{\beta J} \cosh(\beta h) \pm \sqrt{e^{2\beta J}\sinh^2(\beta h) + e^{-2\beta J}} \\[0.5em] &= e^{\beta J} \big[ \cosh(\beta h) \pm \sqrt{\sinh^2(\beta h) + e^{-4\beta J}} \big] \end{align*}

Now for the free energy: we want a quantity that does not grow with N, so we define the free energy per spin f = F/N. This gives us

\begin{align*} f &= -T N^{-1} \log(Z) \\ &= -T N^{-1} \log(\lambda_{+}^N + \lambda_{-}^N) \end{align*}

Finally, taking the thermodynamic limit we get the final free-energy per spin as

\begin{align*} f &= -T N^{-1} \log(\lambda_{+}^N) \\ &= -T \log(\lambda_{+}) \end{align*}

Which evaluates to

f = -T \log\left(e^{\beta J} \left[ \cosh(\beta h) + \sqrt{\sinh^2(\beta h) + e^{-4\beta J}} \right]\right)

where only the larger eigenvalue λ_+ survives the thermodynamic limit. After some tedious algebra, the magnetization per spin M = ⟨σ⟩ can be computed from the free energy as M = −∂f/∂h, which gives

M = \frac{\sinh(\beta h)}{\sqrt{\sinh^2(\beta h) + e^{-4 \beta J}}}

At zero external field (h = 0), we see that M = 0 for all T > 0. This absence of spontaneous magnetization indicates that the one-dimensional Ising model also cannot sustain long-range order at any non-zero temperature. The magnetic susceptibility χ = ∂M/∂h (a second derivative of the free energy) can be computed as

\chi = \frac{\partial M}{\partial h} = \frac{\beta e^{-4\beta J} \cosh(\beta h)}{\left(\sinh^2(\beta h) + e^{-4 \beta J}\right)^{3/2}}

which at zero field reduces to χ = βe^{2βJ}. This susceptibility remains finite for all non-zero temperatures, confirming the absence of a phase transition in the one-dimensional Ising model.
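As a quick sanity check on this claim, we can differentiate the exact magnetization numerically at h = 0 and watch the zero-field susceptibility grow as T drops without ever diverging at T > 0. A minimal sketch (J = 1 and the temperature list are illustrative choices):

```python
import numpy as np

def magnetization(h, beta, J):
    # Exact 1D result from the text: M = sinh(bh) / sqrt(sinh^2(bh) + e^{-4 b J})
    return np.sinh(beta * h) / np.sqrt(np.sinh(beta * h) ** 2 + np.exp(-4 * beta * J))

J, eps = 1.0, 1e-6
temps = [0.5, 1.0, 2.0, 4.0]
chis = []
for T in temps:
    beta = 1.0 / T
    # Central finite difference for chi = dM/dh evaluated at h = 0.
    chi = (magnetization(eps, beta, J) - magnetization(-eps, beta, J)) / (2 * eps)
    chis.append(chi)
    print(T, chi)  # grows like beta * exp(2 beta J) as T -> 0, but stays finite
```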

Figure 7: Magnetization behavior of the one-dimensional Ising model as a function of temperature for various external field strengths. At zero external field (h = 0), the magnetization remains exactly zero at all non-zero temperatures, demonstrating the absence of spontaneous magnetization in the 1D Ising model. This confirms the mathematical result that no phase transition occurs in the one-dimensional case, as the system cannot sustain long-range order at any finite temperature. As external fields of increasing strength (h = 0.1, 0.5, 1.0) are applied, the system exhibits induced magnetization that asymptotically approaches zero at high temperatures as thermal fluctuations overcome the influence of both the external field and the nearest-neighbor interactions. The results are similar to the zero-dimensional case.

This lack of a phase transition in the one-dimensional Ising model is a result of the fact that thermal fluctuations at any non-zero temperature are sufficient to destroy long-range order. That is, creating a domain wall in one dimension costs a finite amount of energy (2J), but the entropy gain from placing this domain wall anywhere along the chain grows logarithmically with system size. At any non-zero temperature, the entropic contribution to the free energy eventually dominates, favoring a disordered state. This competition between energy and entropy explains why higher dimensions are necessary for the existence of phase transitions at finite temperatures.

Mean Field Theory for d-Dimensions

For the two-dimensional Ising model in the absence of an external field, there is a solution by Onsager [5], but it’s a bit too complicated to give the full derivation here. Instead, we will discuss an approximation of the Hamiltonian that will allow us to solve the Ising model in arbitrary dimensions, namely the mean field approximation. Unfortunately, this approximation leads to quite poor results for low dimensions, and actually breaks down right at the critical point which we’ll talk about later.

The goal of this approximation is to decouple the interacting terms between neighboring spins by replacing them with a single interaction with the mean field. In doing so, we convert the many-body problem (with many interacting spins) into a one-body problem where each spin only interacts with the mean field. But this lends itself to a circular argument (called self-consistency) in which the dynamics of the new mean-field system rely on the microscopic states, while the microscopic states now rely on the mean field (rather than on the individual terms). We will see this dependence appear later on.

Given the Hamiltonian of the Ising model in arbitrary d-dimensions (d ≥ 0)

H = -J \sum_{\langle i, j \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i

and the magnetization of the system M = ⟨σ⟩, we can express the interacting spins as

\begin{align*} \sigma_i \sigma_j &= (\sigma_i - M + M) (\sigma_j - M + M) \\ &= M^2 + M(\sigma_i - M) + M(\sigma_j - M) + (\sigma_i - M) (\sigma_j - M) \\ &\approx M^2 + M \sigma_i - M^2 + M \sigma_j - M^2 \\ &= -M^2 + M (\sigma_i + \sigma_j) \end{align*}

In the mean field approximation, we neglect the correlation term (σ_i − M)(σ_j − M) because it represents fluctuations from the mean at different sites, which we assume to be uncorrelated and of higher order. This gives us the following new mean-field Hamiltonian H_mf

H_{\text{mf}} = -J \sum_{\langle i, j \rangle} \left[ -M^2 + M (\sigma_i + \sigma_j) \right] - h \sum_i \sigma_i

In d dimensions, each dipole on the lattice has z = 2d neighbors, so the sum over nearest-neighbor pairs contains Nz/2 terms. Given this, the first term in the Hamiltonian becomes

-J \sum_{\langle i, j \rangle} (-M^2) = \frac{1}{2} J N z M^2

And additionally, the second term becomes

-J \sum_{\langle i, j \rangle} M (\sigma_i + \sigma_j) = -J z M \sum_i \sigma_i

Then, our mean-field Hamiltonian becomes

\begin{align*} H_{\text{mf}} &= \frac{1}{2} J N z M^2 - (J z M + h) \sum_i \sigma_i \\ &= \frac{1}{2} J N z M^2 - H_{\text{eff}} \sum_i \sigma_i \end{align*}

where the effective field H_eff = JzM + h is the total magnetic field felt by each spin. In other words, we have removed the pairwise spin-spin interactions and replaced them with the effective field H_eff, which consists of the external field h and the mean field JzM. Because all of the pairwise spins have been decoupled, the partition function becomes trivial to compute.

\begin{align*} Z_{\text{mf}} &= \sum_{\{\sigma\}} \exp\big[-\beta H_{\text{mf}}(\{\sigma\}) \big] \\ &= \sum_{\{\sigma\}} \exp\big[-\beta (\tfrac{1}{2} J N z M^2 - H_{\text{eff}} \sum_i \sigma_i) \big] \\ &= \exp\big[-\beta \tfrac{1}{2} J N z M^2\big] \sum_{\{\sigma\}} \exp\big[\beta H_{\text{eff}} \sum_i \sigma_i\big] \\ &= \exp\big[-\beta \tfrac{1}{2} J N z M^2\big] \sum_{\{\sigma\}} \prod_{i=1}^N \exp\big[\beta H_{\text{eff}}\sigma_i\big] \\ &= \exp\big[-\beta \tfrac{1}{2} J N z M^2\big] \prod_{i=1}^N \sum_{\sigma_i} \exp\big[\beta H_{\text{eff}}\sigma_i\big] \\ &= \exp\big[-\beta \tfrac{1}{2} J N z M^2\big] (2 \cosh(\beta H_{\text{eff}}))^N \\ &= 2^N \exp\big[-\beta \tfrac{1}{2} J N z M^2\big] \cosh^N(\beta H_{\text{eff}}) \end{align*}

Notice that the swap of the sum and product above is possible because the spins are decoupled: \sum_{\{\sigma\}} \prod_{i=1}^N g(\sigma_i) = \prod_{i=1}^N \sum_{\sigma_i} g(\sigma_i) for any single-spin function g. Also note that the product came from the fact that e^{\sum_i \sigma_i} = \prod_i e^{\sigma_i}, which is easy to see if you expand out the sum.

Then, the free energy per spin f_mf is

\begin{align*} f_{\text{mf}} &= -T N^{-1} \log(Z_{\text{mf}}) \\ &= \frac{1}{2} J z M^2 - T \log(2 \cosh(\beta H_{\text{eff}})) \end{align*}

Remember that β = 1/T because we set k_B = 1, and thus the factors of T and β cancel in the first term of the free energy. The behavior of the free energy for varying temperatures can be seen in Figure 8.

Figure 8: The characteristic shapes of the mean-field free energy at different temperatures. Above T_c, the free energy has a single minimum at M = 0, reflecting the stability of the paramagnetic phase. At T_c, this minimum becomes very flat, indicating the critical point where susceptibility diverges. Below T_c, the free energy develops a characteristic double-well structure with minima at finite ±M, corresponding to the two possible directions of spontaneous magnetization. The central maximum at M = 0 represents the unstable state.

Now we can compute the magnetization as M = −∂f_mf/∂h, which gives the self-consistency equation

M = \tanh(\beta (J z M + h))

To repeat what was said before, this is called the self-consistency equation because the mean field relies on the microscopic states, but the microscopic states now also rely on the mean field. That is, M is a function of itself!
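The self-consistency equation has no closed-form solution for M, but it is easy to solve by fixed-point iteration. A minimal sketch (assuming k_B = 1 and z = 4, i.e. a square lattice; m0 > 0 selects the positive branch below T_c):

```python
import numpy as np

def solve_magnetization(T, J=1.0, z=4, h=0.0, m0=0.9, iters=2000):
    # Fixed-point iteration of M = tanh((J z M + h) / T), with k_B = 1.
    m = m0
    for _ in range(iters):
        m = np.tanh((J * z * m + h) / T)
    return m

Tc = 4.0  # J z / k_B with J = 1, z = 4

m_below = solve_magnetization(T=0.5 * Tc)  # well below Tc: M close to 1
m_above = solve_magnetization(T=2.0 * Tc)  # above Tc: M collapses to 0
print(m_below, m_above)
```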

Figure 9: Graphical solutions to the self-consistency equation M = tanh(βJzM) for the d-dimensional Ising model in zero external field at different temperatures. The dashed line represents y = M, while the colored curves show y = tanh(βJzM) at three distinct temperatures relative to the critical temperature T_c. Intersection points (black circles) indicate solutions to the self-consistency equation. Left panel (blue curve): for T > T_c, only a single solution exists at M = 0, corresponding to the paramagnetic phase with no spontaneous magnetization. Middle panel (green curve): at T = T_c, the slopes of the two curves match at M = 0, marking the critical point where the system transitions between ordered and disordered phases. Right panel (red curve): for T < T_c, three solutions emerge: an unstable solution at M = 0 and two stable solutions at finite ±M, reflecting the ferromagnetic phase with spontaneous symmetry breaking. This graphical representation illustrates the emergence of non-zero spontaneous magnetization below T_c as predicted by mean field theory.

To understand the phase transition in this mean field approximation, let us first determine the critical temperature by analyzing the self-consistency equation. At zero external field (h = 0), we can examine when a non-zero magnetization first appears as we lower the temperature. For small M, we can expand the hyperbolic tangent using its Taylor series

\tanh(x) = x - \frac{1}{3}x^3 + O(x^5)

Truncating at the cubic term and substituting x = βJzM gives

M = \beta JzM - \frac{1}{3}(\beta Jz)^3 M^3 + O(M^5)

M \left[1 - \beta Jz + \frac{1}{3}(\beta Jz)^3 M^2\right] = 0

This equation has two types of solutions: M = 0 (the paramagnetic phase) and solutions where the term in square brackets vanishes (the ferromagnetic phase). The critical temperature occurs when the coefficient of the linear term vanishes, and therefore

1 - \beta_c Jz = 0 \quad \Rightarrow \quad T_c = \frac{Jz}{k_B}

Above T_c, only M = 0 is stable. Below T_c, non-zero solutions emerge, leading to the onset of spontaneous magnetization. More on the validity of these results in the next section.

Critical Exponents of the MFT Solution

Next, let’s derive some of the critical exponents to analyze how various quantities scale near T_c. First, consider the magnetization below T_c. As we did before, define the reduced temperature t = (T − T_c)/T_c, then notice that near T_c we have

\beta Jz = \frac{Jz}{k_B T} = \frac{T_c}{T}

Substituting back into our expansion and rearranging gives

M \left[1 - \beta Jz + \frac{1}{3}(\beta Jz)^3 M^2\right] = 0

M \left[1 - \frac{T_c}{T} + \frac{1}{3}\left(\frac{T_c}{T}\right)^3 M^2\right] = 0

M \left[\frac{T - T_c}{T} + \frac{1}{3}\left(\frac{T_c}{T}\right)^3 M^2\right] = 0

which has an uninteresting solution at M = 0 and another (more interesting) solution when the bracketed term vanishes, giving

\frac{T - T_c}{T} + \frac{1}{3}\left(\frac{T_c}{T}\right)^3 M^2 = 0

\frac{1}{3}\left(\frac{T_c}{T}\right)^3 M^2 = \frac{T_c - T}{T}

M^2 = 3 \left(\frac{T_c - T}{T}\right) \left(\frac{T}{T_c}\right)^3

M^2 = 3|t|(1+t)^2 \quad (t < 0)

M = \pm (1+t)\sqrt{3|t|}

which for |t| ≪ 1 gives M ≈ ±√(3|t|), i.e. M ∼ |t|^{1/2}, with critical exponent β = 1/2 (note that t < 0 below T_c, hence the absolute value). For the magnetic susceptibility, we can do something similar by differentiating the self-consistency equation with respect to h, resulting in

\frac{\partial M}{\partial h} = \beta\left(1 - \tanh^2[\beta(JzM + h)]\right)\left(Jz\frac{\partial M}{\partial h} + 1\right)

\chi = \beta\left(1 - \tanh^2[\beta(JzM + h)]\right)(Jz \chi + 1)

With no external field (h = 0) and T → T_c^+, where M = 0, we get roughly the following

\chi = \beta(1 - 0)(Jz \chi + 1) \quad \Rightarrow \quad \chi = \frac{\beta}{1-\beta Jz} \sim \frac{\beta_c}{t}

This gives χ ∝ |t|^{−γ} with γ = 1. I skipped a couple of steps, but the full derivation, along with derivations of some other critical exponents, can be found in Franz Utermohlen’s notes [6] on the topic, or in the Berlinsky [4] or Kopietz [7] books.
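We can also confirm the exponent β = 1/2 numerically by solving the self-consistency equation just below T_c and fitting the slope of log M against log |t|. A sketch (choosing Jz = 1 so that T_c = 1 in units with k_B = 1):

```python
import numpy as np

def solve_m(T, Jz=1.0, iters=50_000, m0=0.5):
    # Fixed-point iteration of the zero-field self-consistency M = tanh(Jz M / T).
    # Many iterations are needed: convergence slows down close to Tc.
    m = m0
    for _ in range(iters):
        m = np.tanh(Jz * m / T)
    return m

Tc = 1.0  # Jz / k_B with Jz = 1
ts = np.array([1e-3, 3e-3, 1e-2, 3e-2])           # reduced distances below Tc
ms = np.array([solve_m(Tc * (1 - t)) for t in ts])
slope = np.polyfit(np.log(ts), np.log(ms), 1)[0]
print(slope)  # close to the mean-field value beta = 1/2
```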

Figure 10: Critical behavior in mean-field theory (MFT) as a function of reduced temperature t = (T − T_c)/T_c. Left panel: magnetization with critical exponent β = 0.5. Right panel: magnetic susceptibility exhibiting power-law divergence with critical exponent γ = 1.0.

Remember that MFT gives us an approximate solution to the true Hamiltonian, so how good are these results? It turns out that in high dimensions they are quite good (exact, even), but they fail in lower dimensions. Namely, above something called the upper critical dimension (d_c = 4 for the mean-field Ising model), fluctuations from the mean become irrelevant and these exponents are exact. Below d_c, fluctuations modify the critical behavior in a way that mean field theory fails to capture.

For example, at zero external field (h = 0), this equation predicts a second-order phase transition at a critical temperature T_c = Jz/k_B for all dimensions. The spontaneous magnetization below this temperature follows M ∝ (T_c − T)^{1/2}, with the critical exponent β = 1/2 being independent of dimension. However, this contradicts the exact solutions in one and two dimensions. In one dimension, the transfer matrix method yields the exact partition function, showing that the free energy per site is analytic for all finite temperatures. The correlation length, given by ξ = −1/ln[tanh(βJ)], remains finite for all T > 0, demonstrating the absence of long-range order at any non-zero temperature.

Furthermore, in the two-dimensional case, Onsager’s exact solution shows that while a phase transition does exist, it differs significantly from the mean field prediction. The critical temperature in the exact solution (k_B T_c/J ≈ 2.27) is lower than the mean field prediction of k_B T_c/J = 4. More importantly, the magnetization near the critical point follows M ∝ (T_c − T)^{1/8}, with β = 1/8, rather than the mean field value of 1/2. We will talk further about these discrepancies in a couple of sections.

The Variational Principle

Next, I want to have a quick word about the theoretical basis for mean field theory, why it works, and present a quick relationship to variational inference. Recall that our goal in MFT is to simplify a many-body problem with interacting degrees of freedom into a more tractable one-body problem. In doing so, the (usually intractable) pairwise interactions are replaced by a single interaction between each degree of freedom and an effective “mean field.” In this sense, the first step in MFT involves replacing the original Hamiltonian H, which encodes the full many-body interactions, with an approximate Hamiltonian. For consistency with standard nomenclature, we call this approximate form the trial Hamiltonian H_T. Now, let Z be the partition function of H in the canonical ensemble, and define the free energy F as the familiar

F = -T\log(Z)

For a complicated H with many interacting degrees of freedom, computing the partition function Z, and thus the free energy F, is typically intractable. Here’s where the variational principle comes in. We introduce the trial Hamiltonian H_T, which ideally captures the essential features of H while being much easier to handle. This trial Hamiltonian has its own partition function Z_T and free energy F_T, defined analogously. The probability distribution associated with the trial Hamiltonian is

P_T(\{s\}) = \frac{e^{-\beta H_T(\{s\})}}{Z_T}

Now, we can derive the relationship between Z and Z_T as

\begin{align*} Z &= \sum_{\{s\}} e^{-\beta H(\{s\})} \\ &= \sum_{\{s\}} e^{-\beta H_T(\{s\})} \frac{e^{-\beta H(\{s\})}}{e^{-\beta H_T(\{s\})}} \\ &= \sum_{\{s\}} e^{-\beta H_T(\{s\})} e^{-\beta(H(\{s\}) - H_T(\{s\}))} \\ &= \sum_{\{s\}} P_T(\{s\}) Z_T e^{-\beta(H(\{s\}) - H_T(\{s\}))} \\ &= Z_T \sum_{\{s\}} P_T(\{s\}) e^{-\beta(H - H_T)} \end{align*}

This allows us to rewrite the true free energy in a clever way. Substituting back into the expression for F, we find that

\begin{align*} F &= -T\log(Z) \\ &= -T\log\left(Z_T \sum_{\{s\}} P_T(\{s\}) e^{-\beta (H - H_T)}\right) \\ &= -T\log(Z_T) - T\log\left(\sum_{\{s\}} P_T(\{s\}) e^{-\beta (H - H_T)}\right) \\ &= F_T - T\log\left(\sum_{\{s\}} P_T(\{s\}) e^{-\beta (H - H_T)}\right) \\ &= F_T - T\log\left(\langle e^{-\beta (H - H_T)} \rangle_T\right) \end{align*}

where ⟨·⟩_T is the expectation taken over the trial distribution P_T. We can simplify this further using Jensen’s inequality: since the logarithm is concave, log(⟨X⟩) ≥ ⟨log(X)⟩, and therefore

\begin{align*} F &= F_T - T\log\left(\langle e^{-\beta (H - H_T)} \rangle_T\right) \\ &\leq F_T - T \langle \log(e^{-\beta (H - H_T)}) \rangle_T \\ &= F_T + T\beta \langle H - H_T \rangle_T \\ &= F_T + \langle H - H_T \rangle_T \end{align*}

This bound is known as the Bogoliubov inequality, and its right-hand side is called the variational free energy

F_{var} \triangleq F_T + \langle H - H_T \rangle_T

This equation has a really nice interpretation. The first term, F_T, is the free energy of the trial Hamiltonian. The second term, ⟨H − H_T⟩_T, measures how much the trial Hamiltonian H_T deviates from the true Hamiltonian H, averaged over the trial distribution. Together they provide an upper bound on the true free energy F. The optimal trial Hamiltonian is the one that minimizes this upper bound, bringing the approximation as close as possible to the true free energy.

Figure 11: Image from Gregory Gundersen’s blog post on VI and the ELBO [8]. Schematic representation of the variational approach to mean field theory. The blob represents a family of trial distributions Q parameterized by the effective field. The optimization process starts with an initial trial distribution q^{(0)}(Z) ∈ Q and iteratively minimizes the KL divergence between the current approximation q^{(t)}(Z) and the true Boltzmann distribution p(Z|X). This process is equivalent to minimizing the variational free energy F_var = F_T + ⟨H − H_T⟩_T, which provides an upper bound on the true free energy F according to the Bogoliubov inequality. The optimal distribution q*(Z) corresponds to the self-consistency equation M = tanh(β(JzM + h)) in the Ising model, where the effective field parameter minimizes the difference between the trial Hamiltonian H_T and the true many-body Hamiltonian H.

The reason this is nice is that we have essentially converted the mean-field problem into a variational inference problem, where the closer H_T is to H, the closer the variational free energy gets to the real free energy of the problem at hand. That is… if we parameterize H_T with a set of weights, then optimizing over these weights is equivalent to making F_var as small as possible… which causes H_T to become a better approximation of H… and thus F_var approaches F. And et voilà, it’s literally just variational inference. In fact, the Bogoliubov inequality is a sort of negative ELBO.

To see this briefly in play, let’s consider the generic Ising Hamiltonian again

H = -J \sum_{\langle i, j \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i

Again, in the MFT approximation we want to remove the interacting terms and absorb them into an effective field, which we’ll now call b.

H_T = -b \sum_{i} \sigma_i

Now, notice that b is our parameter, or weight, that we will optimize over. That is, we want to find the effective field b that best approximates the true free energy F under the MFT assumption. Plugging this trial Hamiltonian into the variational free energy, taking the derivative with respect to b, and setting it to zero reveals

b = h + zJ \tanh(\beta b)

which is just our self-consistency equation! Well, after substituting in the connection between the effective field and the magnetization, M = tanh(βb), that is. Doing so gives

M = \tanh(\beta (JzM + h))

Super cool, isn’t it! We have now derived the self-consistency equation in two ways (albeit in more depth the first time).
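To see this variational picture in action numerically, we can minimize the per-spin variational free energy over b directly and check that the minimizer lands on the self-consistency condition. A minimal sketch (assuming the trial Hamiltonian H_T = −b Σ_i σ_i, k_B = 1, and illustrative values J = 1, z = 4, h = 0.1, T = 2, which puts us below T_c = Jz = 4):

```python
import numpy as np

J, z, h, T = 1.0, 4, 0.1, 2.0   # T below Tc = Jz = 4 (units with k_B = 1)
beta = 1.0 / T

def f_var(b):
    # Variational free energy per spin for the trial Hamiltonian H_T = -b * sum_i sigma_i.
    m = np.tanh(beta * b)                      # <sigma>_T under the trial ensemble
    f_T = -T * np.log(2 * np.cosh(beta * b))   # trial free energy per spin
    # <H - H_T>_T per spin: -(Jz/2) m^2 - h m + b m
    return f_T - (J * z / 2) * m**2 - h * m + b * m

# Minimize over b with a dense grid search (numpy only, no optimizer needed).
bs = np.linspace(0.01, 20.0, 200_001)
b_opt = bs[np.argmin(f_var(bs))]
print(b_opt, h + z * J * np.tanh(beta * b_opt))  # the two should agree
```

The positive search range simply selects the up-magnetized branch favored by h > 0.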

The Upper Critical Dimension

Recall that in MFT we dropped the second-order correlations between fluctuations, represented by terms of the form (σ_i − M)(σ_j − M). While this approximation captures qualitative features of phase transitions, it fails to accurately predict critical exponents below a certain spatial dimension, known as the upper critical dimension d_c. To determine d_c, we can check the validity of MFT by examining the ratio of fluctuations to the square of the order parameter

\frac{\langle (\delta M)^2 \rangle}{\langle M \rangle^2} \ll 1

This inequality must hold for mean field theory to be a good approximation. Near the critical point, we can express this ratio in terms of known quantities. The numerator, representing fluctuations about the mean, is related to the magnetic susceptibility ∂M/∂h, while the denominator scales as the square of the magnetization. Using our mean field results

\begin{align*} \langle M \rangle &\sim |t|^{\beta} \sim |t|^{1/2}, \quad t < 0 \\ \frac{\partial M}{\partial h} &\sim |t|^{-\gamma} \sim |t|^{-1} \end{align*}

Here, t = (T − T_c)/T_c is the reduced temperature, and β = 1/2 and γ = 1 are the mean field critical exponents.

These mean field exponents are modified by fluctuations. In general, fluctuations are controlled by the correlation length ξ, which determines the size of correlated regions. The Ginzburg criterion states that mean field theory breaks down when fluctuations within a correlated volume become comparable to the square of the order parameter.

The correlation volume scales as ξ^d, where d is the spatial dimension. The magnitude of fluctuations within this volume can be derived from the two-point correlation function G(r) = ⟨σ_0 σ_r⟩ − ⟨σ_0⟩⟨σ_r⟩, which scales as r^{−(d−2)} at the critical point. Integrating over the correlation volume gives a typical fluctuation size of ξ^{−(d−2)}. Therefore

\frac{\langle (\delta M)^2 \rangle}{\langle M \rangle^2} \sim \frac{\xi^{-(d-2)}}{|t|}

Using the mean field scaling relation ξ ∼ |t|^{−ν} ∼ |t|^{−1/2}, we obtain

\frac{\langle (\delta M)^2 \rangle}{\langle M \rangle^2} \sim \frac{|t|^{(d-2)/2}}{|t|} \sim |t|^{(d-4)/2}
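The dimension dependence of this ratio is easy to see numerically; a tiny sketch evaluating |t|^{(d−4)/2} as t → 0 for a few dimensions:

```python
import numpy as np

t = np.array([1e-1, 1e-2, 1e-3, 1e-4])  # reduced temperatures approaching 0

# The Ginzburg ratio scales as |t|^{(d-4)/2}; its fate as t -> 0 depends on d.
ratios = {d: t ** ((d - 4) / 2) for d in [2, 3, 4, 5]}

for d, r in ratios.items():
    print(d, r)  # d < 4: blows up; d = 4: constant; d > 4: vanishes
```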

For mean field theory to become exact as t → 0 (approaching the critical point), this ratio must vanish, requiring

(d-4)/2 > 0

This yields d_c = 4 as the upper critical dimension for the Ising model with short-range interactions. For d > 4, fluctuations become irrelevant near the critical point, and mean field theory provides exact critical exponents. At d = 4, we find logarithmic corrections to mean field behavior, where

\xi \sim |t|^{-1/2}(\ln|t|)^{1/4}

\langle M \rangle \sim |t|^{1/2}(\ln|t|)^{1/4}

For d < 4, the mean field description breaks down near the critical point, and we must use more sophisticated methods, like the renormalization group, to obtain the correct critical exponents.

The physical intuition behind this dimensional dependence is that in higher dimensions each spin has more neighbors (z = 2d), making the mean field approximation more accurate as dimensionality increases. When d > 4, this effect is strong enough to suppress critical fluctuations entirely.

Specifically, MFT fails at the critical point because of the growing magnitude of fluctuations. As we approach T_c, the correlation length ξ diverges, meaning that spins become increasingly correlated over large distances. Mean field theory assumes that each spin feels only the average effect of its neighbors, effectively treating fluctuations as small and uncorrelated. However, near T_c, fluctuations occur on all length scales up to ξ, and these fluctuations are highly correlated.

In lower dimensions (d < 4), these correlated fluctuations become particularly important, as the collective behavior of many spins moving together can no longer be approximated by independent spins responding to an average field. This collective behavior leads to different critical exponents than those predicted by mean field theory, requiring more sophisticated theoretical approaches.

Final Words

A bit of a longer post I guess, but I found this topic really interesting, and it’s been pretty widely studied, so there is a lot of depth that can be explored. This post is basically just a dump of everything that I have been studying along this axis for the past couple of months. In fact, I could probably write another 20 pages on renormalization group methods and their super interesting relationships to deep learning [9] [10] [11]. There have even been some people who have tried to take similar approaches to studying criticality [12], which I think is an interesting research direction. But I will leave this for another time. There is just too much to write about, and too many things that I want to look into. Most likely the next post up will be on mechanistic interpretability, but we’ll see.

References

  1. Beggs, J. M. (2022). The Cortex and the Critical Point: Understanding the Power of Emergence.
  2. Beggs, J. M. (2022). Addressing skepticism of the critical brain hypothesis. Link
  3. Tian, Y., Tan, Z., Hou, H., Li, G., Cheng, A., Qiu, Y., Weng, K., Chen, C., Sun, P. (2022). Theoretical foundations of studying criticality in the brain. Link
  4. Berlinsky, A. J., Harris, A. B. (2019). Statistical Mechanics: An Introductory Graduate Course.
  5. Onsager, L. (1944). Crystal Statistics. I. A Two-Dimensional Model with an Order-Disorder Transition. Link
  6. Utermohlen, F. (2018). Mean Field Theory Solution of the Ising Model. Link
  7. Kopietz, P., Bartosch, L., Schütz, F. (2010). Introduction to the Functional Renormalization Group.
  8. Gundersen, G. (2021). The ELBO in Variational Inference. Link
  9. Mehta, P., Schwab, D. J. (2014). An exact mapping between the Variational Renormalization Group and Deep Learning. Link
  10. Bény, C. (2013). Deep learning and the renormalization group. Link
  11. Rojefferson (2019). Deep Learning and the Renormalization Group. Link
  12. Meshulam, L., Bialek, W. (2024). Statistical mechanics for networks of real neurons. Link
  13. Ising, E. (1925). Beitrag zur Theorie des Ferromagnetismus. Link
  14. Baxter, R. J. (2016). Exactly solved models in statistical mechanics.
  15. Sakthivadivel, D. A. R. (2022). Magnetisation and Mean Field Theory in the Ising Model. Link

Figures

  1. Custom. First- versus second-order phase transitions. Link.
  2. Custom. Magnetism phase diagram. Link.
  3. Custom. Critical exponents for magnetism, correlation length, and susceptibility. Link.
  4. From the Physics Stack Exchange. Illustration of the correlation length. Link.
  5. Custom. Illustration of the 2D Ising model on a lattice. Link.
  6. Custom. Zero-dimensional Ising solution. Link.
  7. Custom. One-dimensional Ising solution. Link.
  8. Custom. Free energy for the mean-field Ising model at various temperatures. Link.
  9. Custom. Visualization of the self-consistency equation at various temperatures. Link.
  10. Custom. Critical exponents for the mean-field Ising. Link.
  11. From Gregory Gundersen’s blog post on variational inference. Link.

Tables

  1. Custom. Various critical exponents.