
# Riemann Hypothesis: Past and Future

The Riemann hypothesis is a longstanding mathematical conjecture, first formulated by Bernhard Riemann in 1859, that has gained renown in part because it was chosen as one of the Clay Mathematics Institute's Millennium Prize Problems, with a \$1 million bounty attached. The most common formulation of the conjecture may appear esoteric: it states that the nontrivial zeroes of a certain meromorphic function, the Riemann zeta function, all have real part $1/2$. In this exposition, I'll provide a rough picture of what the terms in this conjecture mean and why the Riemann hypothesis is so significant.

Some parts of the essay are technical and go into a level of detail that laypeople might be uncomfortable with. If you don't understand some parts, you can skip ahead: I'll try to summarize these parts at the beginning and the end so that readers who are unfamiliar with the subject details can still follow along. 3Blue1Brown has an excellent video that serves as a visual aid for understanding the Riemann zeta function.

## Counting prime numbers

When Riemann wrote his 1859 paper, he focused on the problem of counting the number of primes less than a given natural number. A prime number is a natural number greater than 1 that is divisible only by 1 and itself. The first few prime numbers are $2, 3, 5, 7, \ldots$, and because the integers have the property of unique factorization, we can think of primes as the multiplicative building blocks of the natural numbers: every natural number greater than 1 can be written in one and only one way as a product of primes. (This is also the reason that 1 is not considered prime, as doing so would violate unique factorization.)

The first result concerning the distribution of primes dates back to Euclid, who proved around 300 BCE that there are infinitely many prime numbers, using a simple argument: if there were only finitely many primes, we could take the product of all of them and add 1. The resulting number would not be divisible by any prime on the list, and yet it would necessarily be divisible by some prime, due to the property of unique factorization.
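Euclid's argument is concrete enough to run. In this small sketch (the finite "list of all primes" is of course just the hypothetical premise of the contradiction), taking the product of the first six primes and adding 1 produces a number whose prime factors all lie outside the list:

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]   # suppose, for contradiction, these were all the primes
n = prod(primes) + 1            # 30030 + 1 = 30031

# n leaves remainder 1 when divided by every prime on the list...
assert all(n % p == 1 for p in primes)

# ...yet unique factorization guarantees n has some prime factor,
# which therefore cannot be on the list.
factor = next(d for d in range(2, n + 1) if n % d == 0)
print(factor, factor in primes)  # 59 False
```

Here the "missing" prime is 59, since $30031 = 59 \times 509$.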

With this question settled, mathematicians turned their attention to the problem of counting the number of primes less than or equal to a given natural number $N$, a quantity often denoted by $\pi(N)$. Gauss noticed that the density of prime numbers around a natural number $N$ is very well approximated by $1/\ln(N)$, and so the prime number conjecture was born, which stated that the ratio

$$\frac{\pi(N)}{N/\ln(N)}$$

would approach 1 in the limit as $N \to \infty$.
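Gauss's observation is easy to check numerically. The sketch below (the helper name `count_primes` is mine, not standard) sieves the primes up to $10^6$ and evaluates the ratio above:

```python
from math import log

def count_primes(n):
    """Compute pi(n) with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

N = 10**6
pi_N = count_primes(N)            # pi(10^6) = 78498
ratio = pi_N / (N / log(N))
print(pi_N, ratio)                # ratio ~ 1.08, converging to 1 very slowly
```

The convergence is slow, which is part of why sharper error estimates matter so much.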

The prime number conjecture stood for a long time. Chebyshev proved that if the limit existed, it had to equal 1, and also that the ratio was bounded between constants $C_1 \lt 1$ and $C_2 \gt 1$; but these results were much weaker than showing the ratio actually converges to 1. After decades of failed attempts to settle the problem with classical methods, Riemann's 1859 paper provided a new direction for research by linking the prime counting problem to the nascent field of complex analysis.

## Meromorphic functions

Suppose that you have a pool with sources of water feeding into it and sinks draining water away. In this scenario, there are two approaches for estimating how much water is being fed into the pool on net every second: You can either measure the individual sources and sinks, figure out each of their contributions and then sum them up; or, you can take a global perspective and simply measure the water level in the pool. If water is an incompressible liquid, in the sense that its density remains constant, then the two approaches must yield the same answer.

Complex differentiable functions behave like the water in the pool, but in how much they rotate rather than how much they flow outward; in other words, they are like irrotational vector fields. The case of a differentiable function from the complex numbers to the complex numbers is similar to a pool with no sources and sinks: if we take any closed curve in the complex plane and look at how much the function “rotates” around that curve, we always find an answer of zero, just as we would when looking at the change of the water level in a pool without sources or sinks. Meromorphic functions are an appropriate generalization of this concept that allows the pool to have some sources or sinks, which are referred to as poles in complex analysis. Near such poles, these functions behave the same way that $1/s^k$ behaves near $s=0$ for some positive integer $k$.
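For readers who would like to see the pool analogy in action, here is a small numerical sketch (the function name `circulation` is mine): integrating a complex differentiable function around a closed curve gives zero, while a pole acts as a source contributing $2\pi i$:

```python
import cmath
from math import pi

def circulation(f, n=4096):
    """Numerically integrate f along the unit circle z = exp(i*theta)."""
    total = 0j
    for k in range(n):
        z = cmath.exp(2j * pi * k / n)
        dz = 1j * z * (2 * pi / n)  # derivative of the parametrization
        total += f(z) * dz
    return total

# Differentiable everywhere (no poles): the integral vanishes,
# like a pool with no sources or sinks.
print(abs(circulation(cmath.exp)))       # ~ 0
# A pole at s = 0 acts as a source: the integral is 2*pi*i.
print(circulation(lambda z: 1 / z))      # ~ 6.2832j
```

The second result is the "source strength" of the simple pole at the origin.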

What does this have to do with prime numbers? This was Riemann’s genius: his idea was to define a complex differentiable function that would both have the property outlined above and would also allow us, by looking at its total rotation over some curve, to infer something about the distribution of prime numbers. Stated abstractly, it’s not at all clear this strategy can be made to work. But it can.

Readers who don’t know what complex differentiable functions are can think of them as objects to which the water analogy in this section applies; the rest of the essay uses that analogy to count prime numbers.

## The Riemann zeta function

The function Riemann ultimately defined is the Riemann zeta function. Without going into too much detail, the function turns out to be exactly what we want, and Riemann succeeded in using it to turn the question about the distribution of prime numbers into a question about the location of the complex zeroes of this function. How well we can count the prime numbers turns out to hinge on the real parts of these zeroes.

For those familiar with signal processing, the idea is essentially to do Fourier analysis on the prime counting function $\pi(x)$ to decompose it into individual waves, with the zeroes of the Riemann zeta function corresponding to the wave components that make up the prime counting function in aggregate.

The zeta function is easy to define on complex numbers $s$ whose real part is greater than 1. It’s defined as

$$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}=1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\cdots$$

Euler proved that this function has a deep connection to the prime numbers: by the unique factorization property mentioned previously, we can express this sum as a product

$$\zeta(s)=\prod_{p}\left(1+\frac{1}{p^{s}}+\frac{1}{p^{2s}}+\frac{1}{p^{3s}}+\cdots\right).$$

If it’s unclear how that works, I recommend expanding out the first few terms of the product. This product can equivalently be expressed as

$$\zeta(s)=\prod_{p}\left(1-\frac{1}{p^{s}}\right)^{-1},$$

where the infinite product runs over all prime numbers. This is simply using the geometric series sum identity that $1+x+x^2+\ldots=(1-x)^{-1}$ for $|x| \lt 1$.
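As a numerical sanity check (the truncation points below are chosen arbitrarily), we can compare the series and the Euler product at $s=2$, where Euler famously showed $\zeta(2)=\pi^2/6$:

```python
from math import pi

target = pi**2 / 6  # zeta(2), by Euler's solution of the Basel problem

# Truncated Dirichlet series.
series = sum(1 / n**2 for n in range(1, 100_001))

# Truncated Euler product over the primes below 1000 (trial division).
primes = [p for p in range(2, 1000)
          if all(p % d for d in range(2, int(p**0.5) + 1))]
product = 1.0
for p in primes:
    product *= 1 / (1 - 1 / p**2)

print(series, product, target)  # all agree to about three decimal places
```

Both truncations converge to the same value, which is the content of Euler's identity.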

What Riemann really wanted was a complex differentiable function with an obvious connection to the prime numbers, so he exploited the Euler product by focusing on its logarithmic derivative. This turns out to give us exactly what we want: we obtain

$$-\frac{\zeta'(s)}{\zeta(s)}=\sum_{n=1}^{\infty}\frac{\Lambda(n)}{n^{s}},$$

where $\Lambda(n)$ is the von Mangoldt function, defined so that it’s equal to $\ln{p}$ if $n=p^k$ for some prime $p$ and positive integer $k$, and equal to $0$ otherwise. We can think of this function as counting primes and prime powers with an appropriate weight; this particular weighting is what falls out naturally from the logarithmic derivative of the Euler product. Roughly speaking, the von Mangoldt function counts primes with a weight that is proportional to their number of digits.

In his 1859 paper, Riemann’s essential contribution was the proof that our definition of $\zeta(s)$—which only makes sense for $s$ having real part greater than 1—can be extended to the whole complex plane, becoming a meromorphic function whose only pole is at $s=1$. Moreover, the resulting function satisfies a functional equation that makes its zeroes symmetric about the half-line, the line in the complex plane consisting of numbers whose real part is $1/2$, with the exception of the trivial zeroes at the negative even integers $s=-2,\ -4,\ -6,\ \ldots$

## The exact formula

Using an identity known as Perron’s formula, this is all enough to extract an exact formula for the sum $\psi(x)=\Lambda(1)+\Lambda(2)+\Lambda(3)+\cdots+\Lambda(n)$, where $n$ is the largest natural number not exceeding $x$. As stated in the previous section, this sum is simply counting the prime numbers with "weights" such that the weight of each prime is proportional to its number of digits. In other words, larger primes are "heavier", though only proportionally to their number of digits. This weighting turns out to be the correct choice when we're using the Riemann zeta function, and we can always undo the weights later using a technique called partial summation.
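To make the weighting concrete, here is a brute-force sketch that computes $\Lambda(n)$ directly from the definition and sums it into $\psi(x)$; the ratio $\psi(x)/x$ is already close to 1 for modest $x$, as the prime number theorem predicts:

```python
from math import log

def von_mangoldt(n):
    """Lambda(n) = ln(p) if n is a power of a prime p, else 0."""
    if n < 2:
        return 0.0
    p = n  # smallest prime factor (defaults to n itself if n is prime)
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            p = d
            break
    m = n
    while m % p == 0:
        m //= p
    return log(p) if m == 1 else 0.0  # m == 1 exactly when n = p^k

x = 10_000
psi = sum(von_mangoldt(n) for n in range(2, x + 1))
print(psi / x)  # close to 1
```

Only prime powers contribute, and each prime $p$ contributes $\ln p$ once for every power $p, p^2, p^3, \ldots$ up to $x$.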

The idea is that Perron’s formula expresses this sum as the “rotation” of

$$-\frac{\zeta'(s)}{\zeta(s)}\cdot\frac{x^{s}}{s}$$

over a particular curve, and we’re able to look at the individual “sources” of this rotation. These come from the pole of $\zeta(s)$ at $s=1$ and the zeroes of $\zeta(s)$, which consist of the trivial zeroes listed above and the nontrivial zeroes in the so-called critical strip $0\lt \mathfrak{R}\left(s\right)\lt 1$. The exact formula we obtain as a result is

$$\psi(x)=x-\sum_{\rho}\frac{x^{\rho}}{\rho}-\ln(2\pi),$$

where $\rho$ ranges over all the zeroes, trivial and nontrivial, of the Riemann zeta function. Riemann succeeded in obtaining this expression, and the prime number theorem—which is equivalent to the statement that $\frac{\psi\left(x\right)}{x}\rightarrow1$ as $x\rightarrow\infty$—was proven in 1896 by Hadamard and de la Vallée Poussin using Riemann’s approach. The essential ingredient in the proof was an argument that the Riemann zeta function has no zeroes with real part equal to 1.

Even without understanding the above exposition, it’s easy to see why the zeroes of the Riemann zeta function occupy such a central position in analytic number theory, just by looking at the above formula. The zeroes are precisely the sources and sinks from the prior pool analogy that drive the distribution of the prime numbers we observe. Understanding their location and distribution is essential to better understanding the distribution of the prime numbers.

And since the zeroes of the Riemann zeta function in the critical strip are symmetric about the half-line, and smaller real parts mean the approximation $\psi\left(x\right)\approx x$ has a smaller error, our best hope is that all the nontrivial zeroes have real part $1/2$. This is the Riemann hypothesis, and it’s equivalent to saying that the error of the approximation $\psi\left(x\right)\approx x$ is “as small as possible”. The approximation $\psi(x) \approx x$ is in fact equivalent to the approximation

$$\pi(N)\approx\mathrm{Li}(N)=\int_{2}^{N}\frac{dt}{\ln t},$$

where $\textrm{Li}$ is the offset logarithmic integral. This approximation captures the idea, first noticed by Gauss, that the density of prime numbers around $N$ is roughly $1/\ln(N)$. The integral is just adding these densities up from $2$ to $N$.
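As a numerical sketch (the step count for the integral is chosen arbitrarily), we can compare $\pi(N)$ with the offset logarithmic integral directly, approximating the integral with the trapezoid rule:

```python
from math import log

N = 10**6

# pi(N) via a sieve of Eratosthenes.
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
pi_N = sum(sieve)

# Li(N) = integral of dt/ln(t) from 2 to N, by the trapezoid rule.
steps = 2_000_000
h = (N - 2) / steps
li = 0.5 * (1 / log(2) + 1 / log(N))
li += sum(1 / log(2 + k * h) for k in range(1, steps))
li *= h

print(pi_N, li)  # 78498 vs ~78627: a relative error under 0.2%
```

The absolute error of roughly $129$ out of $78498$ primes shows how sharp Gauss's density guess already is at this scale.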

As we can easily go from estimates of $\psi(x)$ to estimates of $\pi(x)$ using partial summation, this estimate of $\psi(x)$ also implies strong asymptotic bounds on the prime counting function $\pi(x)$.

In short, the closer the Riemann hypothesis is to being true, the better we can approximate prime numbers by assuming that they are "randomly distributed" given the constraint on their density imposed by the prime number theorem. In general, the Riemann hypothesis justifies the use of certain heuristics in analytic number theory that are based on assuming some object or collection of objects behaves "randomly" and then guessing what its behavior must be like using probabilistic tools.

## Should we expect it to hold?

Littlewood is famous for saying that there was “no imaginable reason” that the Riemann hypothesis should be true. One way to motivate his skepticism is to turn the above randomness analogy for the Riemann hypothesis against itself: Seeing as the prime numbers are such a structured and rigid collection of numbers, and are not at all random or arbitrary, why should we expect them to be well-approximated by heuristics that assume they behave randomly?

And on the computational front, while an enormous number of nontrivial zeroes have been computed on the half-line and none have been found away from it, the computational evidence is in fact weaker than it might first seem. Current computational ability is limited, and the zeroes of the Riemann zeta function have some distributional properties that appear to depend on the notoriously slowly growing function $\log \log T$, where $T$ is the absolute value of the imaginary part of a zero. This is another way of saying that it’s possible our computations don’t yet extend to the “typical regime” in which $T$ is astronomically large.

However, there are theoretical reasons to suspect the hypothesis is indeed true. While counterexamples have been found in other domains with “similar” zeta functions, none of those functions admit an Euler product, which is an essential feature of the Riemann zeta function. Moreover, the Riemann hypothesis is equivalent to other statements that we have good reason to believe are true. For example, notice that

$$\frac{1}{\zeta(s)}=\sum_{n=1}^{\infty}\frac{\mu(n)}{n^{s}},$$

where $\mu(n)$ is the Möbius function, defined to be $1$ for square-free integers with an even number of prime factors, $-1$ for square-free integers with an odd number of prime factors, and $0$ otherwise. And the exact same Perron’s formula argument, assuming the Riemann hypothesis, now gives us the asymptotic

$$M(N)=\mu(1)+\mu(2)+\cdots+\mu(N)=O\left(N^{1/2+\epsilon}\right)$$

for all $\epsilon\gt 0$. In fact, this is equivalent to the Riemann hypothesis.

If the asymptotic seems familiar from statistics, that’s because this is the same bound we would get on the sum of $\mu(k)$ if we assumed it was basically a coin flip. The central limit theorem would then give us this same asymptotic with probability 1. Therefore, the Riemann hypothesis is equivalent to the idea that whether a square-free integer has an even or an odd number of prime factors is essentially random, which is a concrete instance of the "randomness principle" mentioned previously. That said, while such probabilistic arguments often work in analytic number theory, and this one might seem particularly convincing, there are many examples for which they do not. One ought to take them with a grain of salt.
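We can watch this "coin flip" behavior directly. The sketch below (the helper name `mobius_table` is mine) sieves the Möbius function, sums it into the Mertens function $M(N)$, and compares the result to the $\sqrt{N}$ scale of a random walk:

```python
def mobius_table(n):
    """mu(k) for k = 0..n via an Eratosthenes-style sieve."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False
            for m in range(p, n + 1, p):
                mu[m] *= -1          # each prime factor flips the sign
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0            # p^2 | m means m is not square-free
    return mu

N = 100_000
mu = mobius_table(N)
M = sum(mu[1 : N + 1])
print(M, M / N**0.5)  # |M(N)| stays well below the sqrt(N) scale
```

A sum of $10^5$ genuinely random $\pm 1$ coin flips would typically land within a few multiples of $\sqrt{N} \approx 316$ of zero, and $M(N)$ behaves similarly here.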

Overall, the consensus in the field is that the Riemann hypothesis is likely true, but there’s a non-negligible chance that it is false. I put the chance that it is true at 90%, which agrees with the current community forecast on this question.

Notably, some have claimed to have proven the Riemann hypothesis, though none of these claims are yet generally accepted.

## When is it likely to be settled?

Current methods seem incapable of dealing with the Riemann hypothesis. This is mostly due to our weak understanding of the behavior of the Riemann zeta function to the left of $s=1$, which is where the original infinite series definition no longer holds. We understand $\zeta(s)$ quite well if the real part of $s$ is less than $0$ or greater than $1$, but in the "critical strip" lying in the middle our understanding is quite poor. The difficulty of the problem of proving the Riemann hypothesis is due to the behavior of $\zeta(s)$ within this critical strip.

What is known about zero-free regions of the function mostly comes from leveraging the behavior to the right of $s=1$ to infer something about behavior a little bit to the left. However, these methods have been insufficient to prove a result as strong as the Riemann hypothesis. Settling it would almost certainly require a new kind of approach to the problem rather than a clever application of existing methods. And so there is no particular reason to expect the hypothesis to be proven or disproven anytime soon. In addition, if the Riemann hypothesis is false, there’s good evidence that the counterexample must be astronomically large, and so not amenable to computer search.

The Riemann hypothesis was conjectured in 1859 and has stood open for 162 years. Over those years, the number of people working on the problem and the number of tools at their disposal have grown. One idea, therefore, is to pick some plausible growth rate of the effort being spent on the problem—such as $2 \%$ per year—and calculate the equivalent total effort that has been expended so far, which amounts to around 50 times the effort spent in 2021 alone. Laplace’s rule then gives us an exponential distribution with mean arrival time $\beta=50 \, \textrm{years}$. Our median forecast for the year in which the hypothesis will be resolved should then be $2021+50 \ln(2) \approx 2056$, where I ignore the growth rate after 2021 since I believe it’s dominated by $1/\beta$.

An alternative approach is to look at the base rate of Millennium Prize problems being solved in general. Aside from the Poincaré conjecture, solved by Grigoriy Perelman (with the prize awarded in 2010), the remaining six problems have all stood for 21 years since their designation by the Clay Mathematics Institute. Using Laplace's rule gives a rate of $(1+1)/(21 \times 7 + 2) = 2/149 \approx 1.34 \%$ per year per problem. If we use $149/2 = 74.5$ as the $\beta$ parameter of the exponential distribution, we now get a median time of $2021 + 74.5 \ln(2) \approx 2073$ for the Riemann hypothesis being settled. It's important that we only count the time after 2000, because otherwise the estimate would suffer from selection bias: the Millennium Prize problems were selected precisely because they remained unsettled as of the year 2000.
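The arithmetic behind both estimates is simple enough to spell out (a sketch; the 50-year effort multiplier and the problem counts are the assumptions stated above):

```python
from math import log

# Method 1: effort-weighted clock. Past work is assumed equivalent to
# beta = 50 years of 2021-level effort; the median of an exponential
# distribution with mean beta is beta * ln(2).
median_effort = 2021 + 50 * log(2)

# Method 2: base rate for Millennium Prize problems via Laplace's rule:
# 1 success in 21 * 7 = 147 problem-years.
rate = (1 + 1) / (21 * 7 + 2)          # ~1.34% per problem-year
median_base = 2021 + (1 / rate) * log(2)

print(round(median_effort), round(median_base))  # 2056 2073
```

The two methods bracket the community's median forecast, as noted below.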

The Metaculus community offers a median forecast of $2067$ on this question, which falls between the estimates given by these two methods.

## How badly could the hypothesis fail?

Even if the Riemann hypothesis in its strong form is false, the question about the error of the prime number theorem approximation would remain open. Therefore, we may want to know the answer to the question: If the Riemann hypothesis is false, how bad can the approximation $\psi(x) \approx x$ be?

More formally, the following could be a question of particular interest: define the set $S$ to consist of all positive real numbers $r$ such that the asymptotic $M\left(N\right)=O(N^{r+\epsilon})$ holds for all $\epsilon\gt 0$, where $M(N)$ is the function defined earlier. Since $S$ is a set of real numbers bounded from below, it must have an infimum. What will we know about $\inf{S}$ in 100 years?

The Riemann hypothesis asserts that $\inf{S}=1/2$, the best result we can hope for. However, if this is false, the possibility that $1/2 \lt \inf S \lt 1$ remains open. Such a result would be helpful for analytic number theorists even if it’s weaker than the full Riemann hypothesis. Currently our best upper bound on this infimum is $1$ and the best lower bound is $1/2$, since our knowledge of the locations of the zeroes of the Riemann zeta function is not sharp enough to tell us anything stronger. Metaculus questions for the upper and lower bound respectively are available below.

## Conclusion

The Riemann hypothesis is one of the most important open problems in analytic number theory because the zeroes of the Riemann zeta function are similar to hidden sources and sinks that govern the behavior of many mathematical objects, including prime numbers. While there are strong reasons both theoretical and computational to expect it to hold, it’s far from certain that the hypothesis is true. And though it has remained open for a long time, we can take some encouragement from base rates, which indicate a significant chance it will be resolved before the end of the century.
