Arick Shao 邵崇哲

1 + 2 + 3 + 4 + 5 + ... = WTF?

A while back, I discovered Numberphile, a YouTube channel containing mathematics-related videos. The videos feature various mathematicians and scientists giving short presentations on topics related to numbers, and to mathematics in general, in a style intended for a non-mathematical audience. In general, Numberphile is a fantastic channel, and it tackles in very accessible ways many interesting concepts that one would not encounter in any standard math curriculum. Anyone who is interested in mathematics or is just curious (and has some time to waste) should check it out.

My first encounter with Numberphile, however, was less stellar. A few months ago, I was asked by a student in the real analysis course I was teaching about a video, ASTOUNDING: 1 + 2 + 3 + 4 + 5 + ... = -1/12, on Numberphile. In the video, physicists Tony Padilla and Ed Copeland claimed that the infinite sum \( 1 + 2 + 3 + \dots \) has the value \( -1/12 \) and proceeded to offer a "proof". We had just discussed convergent, absolutely convergent, and divergent series in my course, and I was asked to reconcile the result of the video with the theory of series we had covered in class.

Unsurprisingly, upon my first viewing of the video, I had several strong objections. I found it to be misleading and even a bit dishonest; needless to say, I was not a fan. I was also not alone in my misgivings, as many others in the wider mathematical community had voiced complaints about the video. By the time I was shown the video, it had already gone somewhat viral, aided by an article by Phil Plait in Slate.

A disclaimer: I am, of course, well aware of the fact that there is some absolutely fascinating (and also rigorous) mathematical content within this \( -1/12 \) business; for buzzwords, see analytic continuation and Ramanujan summation. Given its "thinking-outside-the-box" quality and its apparent relevance in physics (of which I am unfortunately nowhere close to being knowledgeable), I do think it is a great topic for a non-technical exposition video, in particular for a site such as Numberphile. Any objections I have lie entirely with the specifics of the video itself.

Since the "ASTOUNDING" video was posted, the scientists involved have responded by posting a follow-up video containing a more detailed discussion of the matter (though I still find much of it problematic). More recently, Edward Frenkel took a stab at this subject in his own Numberphile video. Frenkel also defended the original video, commending it for its attempt at exposing this topic to the wider world and for the dialogue that the video subsequently generated. While he is absolutely correct that this episode generated plenty of constructive dialogue, unfortunately, I doubt these subsequent discussions had the same viral effect as the original video.

So, Why Complain?

Ok, so I've mentioned that \(1 + 2 + 3 + \dots = -1/12\) can in fact be made into a mathematically rigorous result in a relevant and interesting way. So, then, what could be objectionable about a video that presented this to the world?

The first objection concerns the main statement claimed by the video without qualification: that the value of the infinite sum \(1 + 2 + 3 + \dots\) is, unequivocally, \(-1/12\). What exactly do we mean by the infinite sum \(1 + 2 + 3 + \dots\) having any particular value? How do we even define this sum in the first place? Throughout the video, there was no discussion whatsoever of this question, only the bold claim that the value, however it is defined, is \(-1/12\). This is sensationalism, and the result is a misleading presentation.

Those who are acquainted with divergent series and alternative definitions of infinite summation know that the question "what does one mean exactly by convergence?" is absolutely central to any discussion in this direction. The fundamental idea here is that although one most readily associates \(1 + 2 + 3 + \dots\) with either "infinity" or "no value at all", there are in fact creative but meaningful ways to associate this sum with a finite value. By failing to even acknowledge this point, the video unfortunately disregarded much of the underlying philosophy and the richness in the mathematics. While their follow-up video is more nuanced and discusses analytic continuation to an extent, it still suffers from some of the same issues.

On a related note, the title of the video itself is also a bit unfortunate for the same reasons. "ASTOUNDING: 1 + 2 + 3 + 4 + 5 + ... = -1/12" is a title worthy of Upworthy, and it neatly encapsulates the sensationalism mentioned above. (On the other hand, the flashy title did its job and likely contributed to the notoriety of the video.) I do not think this is a recurring issue with Numberphile, though, as the other video titles seem interesting without being similarly obnoxious.

The other objection is the "proof" of the result shown in the video (and also in parts of the follow-up video), which is in many ways flat out wrong and adds to the misleading nature of the video. While it is perfectly reasonable to omit details, especially those of a technical nature, the points that are left in certainly need to be correct. Determining which details should be included and how those ideas should be expressed is often a difficult process, especially since one must also at the same time maintain the interest of the audience. However, regardless of the difficulties involved, making assertions that are misleading or wrong is simply academically irresponsible.

Now, despite the lack of rigor and correctness, I would not necessarily advocate cutting out the "proof" shown in the video, which does have historical significance and can offer a heuristic impression of the problem (though as is, there are enough things wrong with the argument to render it unconvincing). Being too fixated on a completely rigorous proof and on the technical background that would entail would probably be counterproductive. Instead, I would merely insist on honesty: what is presented in the video is not a proof at all (and hence should not be referred to as a proof), but is instead an informal heuristic argument. If one presents such an informal argument, one should indicate the parts that are nonrigorous, heuristic, or do not work exactly as shown.

Consider a curious audience member who watches the video (which is great!) but lacks background knowledge in analysis. Upon seeing and hearing academic authority figures produce this argument, this person would very likely gain the impression of having seen a definitive and incontrovertible proof of the fact that \(1 + 2 + 3 + \dots = -1/12\). Personally, I find this to be quite damaging. While it's great that many enjoyed the video, it is unfortunate that many have left with a very mistaken impression of the nature of \(1 + 2 + 3 + \dots = -1/12 \).

In summary, I think that the video would have been fantastic and very educational had the following been done:

  1. The video was more nuanced in its main statement (i.e., "Can we somehow, through cleverness and creativity, make meaningful sense of \(1 + 2 + 3 + \dots\) as a finite number? Why, yes we can, and the answer that we find is \(-1/12\)!").
  2. The video was honest about the nature of its "proof" (i.e., "This is not a proof by any means, but only a first heuristic argument, and several steps you will see cannot actually be done as stated.").

The truth is, this is without a doubt an astounding, mind-blowing result, and it is definitely something that should be communicated and celebrated. However, in communicating this, one should not resort to sensationalistic and misleading statements.

So now I have made a number of criticisms of this video that deserve further substantiation. To expound these objections, we will have to build a bit of background on infinite sums. The point is not to get into all the details (which would require far more space and time than I have), but to provide a basic sense of what methods have been developed and of why there are multiple methods in the first place. From this process, one should hopefully see how the presentation of the "ASTOUNDING" video falls short.

What Are Infinite Sums Exactly?

In the standard college curriculum—either in calculus or later in an analysis course—one generally deals with infinite sums as follows. For a series $$ x_1 + x_2 + x_3 + x_4 + x_5 + \dots \text{,} $$ one looks at the partial sums, that is, sums of finitely many terms of the series: $$ s_1 = x_1 \text{,} \qquad s_2 = x_1 + x_2 \text{,} \qquad s_3 = x_1 + x_2 + x_3 \text{,} \qquad \dots \text{,} $$ and so on. Now, we can repeat this process for as many terms as we want without any trouble. We say that the above series converges to some value \( L \) if the partial sums \( s_n \) become as close as we want to \( L \) as \( n \) becomes sufficiently large. In calculus terminology, this means that $$ \lim_{ n \rightarrow \infty } s_n = L \text{.} $$

For example, consider the geometric series $$ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots \text{.} $$ Putting our heads down and computing, we find the partial sums: $$ s_1 = \frac{1}{2} \text{,} \qquad s_2 = \frac{1}{2} + \frac{1}{4} = \frac{3}{4} \text{,} \qquad s_3 = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} = \frac{7}{8} \text{,} \qquad \dots \text{,} \qquad s_n = 1 - 2^{-n} \text{.} $$ Observe that by adding enough terms, our sum will become as close to \(1\) as we would like. Thus, by the standard calculus definition of infinite sums, we have that $$ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 1 \text{.} $$
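The calculus definition is easy to experiment with numerically. The following Python sketch (the helper name `partial_sums` is mine, purely for illustration) computes the partial sums of the geometric series above and checks that they approach \(1\):

```python
# Partial sums of the geometric series 1/2 + 1/4 + 1/8 + ...
# A minimal sketch of the calculus definition: s_n should approach 1.

def partial_sums(terms):
    """Yield the running partial sums s_1, s_2, ... of a sequence of terms."""
    total = 0.0
    for x in terms:
        total += x
        yield total

geometric = [2.0 ** -(k + 1) for k in range(50)]  # 1/2, 1/4, 1/8, ...
sums = list(partial_sums(geometric))

print(sums[0], sums[1], sums[2])    # 0.5 0.75 0.875, matching s_n = 1 - 2^{-n}
print(abs(sums[-1] - 1.0) < 1e-12)  # True: the partial sums approach 1
```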

If we apply this definition to our infinite sum in question, \(1 + 2 + 3 + \dots\), then it utterly fails to have a finite value. Indeed, the corresponding partial sums "accelerate toward infinity" and do not tend toward any particular finite number, including \( -1/12 \). Thus, if we accept this definition of infinite summation, then \(1 + 2 + 3 + \dots\) is certainly not, by any means whatsoever, \( -1/12 \). In this sense, the only possible reasonable value to attach to this infinite sum is \( +\infty \).

This brings us back to the point made above that was ignored in the video: in order to associate any finite value such as \(-1/12\) to the sum \( 1 + 2 + 3 + \dots \), we must reexamine and redefine what we mean by infinite sums. In other words, we are moving the goalposts by changing the meaning of infinite sums taking a finite value! Any presentation that does not candidly address this fact is already misleading the audience, not to mention missing the heart of the matter. Moreover, along these same lines, another important question was ignored: when (if at all) is it sensible to adopt some new way to define infinite sums?

Below, we explore various ways that have been devised, often rather creatively, to make sense of infinite sums. We also discuss the strengths and drawbacks associated with each method.

Absolute Convergence

Let us begin with what is probably the least debatable notion: absolute convergence. We say that an infinite sum \( x_1 + x_2 + x_3 + \dots \) converges absolutely if this sum converges in the above calculus sense, and if the sum of the absolute values of the terms, \( |x_1| + |x_2| + |x_3| + \dots \), also has a finite value in the same sense (though in this case, we don't care what the value is, only that it has one). While the definition itself is a bit technical, and its details are beyond the scope of this writing, what is important are the properties enjoyed by absolutely convergent infinite sums.

For example, for such an absolutely convergent infinite sum, $$ x_1 + x_2 + x_3 + \dots = L \text{,} $$ we have the following properties:

  1. Like for finite sums, if we remove the first term from the summation, that is, if we consider the infinite sum \( x_2 + x_3 + x_4 + \dots \), then its value is what we should expect: $$ x_2 + x_3 + x_4 + \dots = L - x_1 \text{.}$$
  2. Like for finite sums, if we change the order of summation, then the result does not change. For instance, if we add instead \(x_2\), then \(x_7\), then \(x_5\), then \(x_{13}\), and so on through all the \(x_n\)'s, then at the end of the (infinitely long) day, we will still get the same answer, \(L\).

The geometric series that we mentioned before, $$ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 1 \text{,} $$ is in fact absolutely convergent (indeed, the sum of the absolute values of the terms is exactly the same as the original sum). One can easily check that if we drop the first term, we obtain $$ \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \dots = \frac{1}{2} = 1 - \frac{1}{2} \text{.} $$ Similarly, if we rearrange the terms, we will never change the resulting value.

In short, absolutely convergent series generally share the same properties that one finds for finite sums. This provides convincing evidence for absolute convergence being an appropriate notion of infinite summation. (For those with more mathematical background: absolutely convergent series can in fact also be directly connected to Lebesgue integration theory.)
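Property (2) can at least be illustrated numerically, with the obvious caveat that any computer experiment only ever shuffles finitely many terms (where the sum is trivially preserved); the real content of the theorem is that this persists in the infinite limit for absolutely convergent series. A quick Python sketch:

```python
import math
import random

# Shuffle (a truncation of) the absolutely convergent geometric series
# 1/2 + 1/4 + 1/8 + ...; the rearranged sum agrees with the original.
terms = [2.0 ** -(k + 1) for k in range(60)]
shuffled = terms[:]
random.shuffle(shuffled)

original = math.fsum(terms)      # fsum: exactly rounded floating-point sum
rearranged = math.fsum(shuffled)

print(original == rearranged)       # True: rearranging changes nothing
print(abs(original - 1.0) < 1e-12)  # True: the value is (nearly) 1
```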

Back to the Calculus Definition

Let us now return to the standard calculus definition, which is actually a weaker concept than absolute convergence. For instance, one can show that the infinite sum $$ 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots $$ converges in the standard calculus sense (to \( \ln 2 \)!), but not absolutely. In particular, the harmonic series $$ 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \dots = | 1 | + \left| - \frac{1}{2} \right| + \left| \frac{1}{3} \right| + \left| - \frac{1}{4} \right| + \left| \frac{1}{5} \right| + \dots $$ fails to converge to a finite number (in the calculus sense), as it eventually becomes larger than any positive number. Thus, we say that the alternating series above is conditionally convergent.

But, why would we even care to highlight conditionally convergent series? To answer this, we go back to the two properties enjoyed by absolutely convergent sums. It is not too difficult to see that property (1) still holds for conditionally convergent series; as expected, if one drops the first term in the series, then the result differs by exactly this term that is dropped.

What is very interesting, however, is that property (2) need not hold. If one rearranges the terms of a conditionally convergent series, then the result need not be the same. Furthermore, one often proves the following incredible fact in an undergraduate analysis class: if an infinite sum is conditionally convergent (but not absolutely convergent), then one can find rearrangements of its terms that sum to any value. Therefore, by rearranging the terms in $$ 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots \text{,} $$ we can make this sum take any value we want (for example, \(-1/12\))!
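This rearrangement theorem can be demonstrated concretely with a greedy algorithm: add positive terms \(1, 1/3, 1/5, \dots\) while the running total is at or below the target, and negative terms \(-1/2, -1/4, \dots\) while it is above. A Python sketch (the target \(-1/12\) is chosen purely for fun, in keeping with the theme of this post; the function name is mine):

```python
# Riemann rearrangement in action: greedily reorder the terms of the
# alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... so that the
# partial sums chase a chosen target value.

def rearranged_partial_sum(target, steps):
    pos = 1  # next odd denominator for positive terms 1, 1/3, 1/5, ...
    neg = 2  # next even denominator for negative terms -1/2, -1/4, ...
    total = 0.0
    for _ in range(steps):
        if total <= target:
            total += 1.0 / pos  # overshoot upward with a positive term
            pos += 2
        else:
            total -= 1.0 / neg  # overshoot downward with a negative term
            neg += 2
    return total

# The rearranged partial sums settle near the target, here -1/12.
print(rearranged_partial_sum(-1.0 / 12.0, 200000))
```

The error after each "crossing" of the target is bounded by the size of the last term used, which shrinks to zero, so the rearranged series genuinely converges to the target.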

So, what can we immediately take away from this? If we deal with the standard calculus definition of sums, we have to be firmly aware that we may not be able to rearrange the terms in the sum without changing the result. Indeed, if we were to toy around with a conditionally convergent series such as the above, and we were to start rearranging terms in the summation without a care in the world, we would likely produce a bunch of garbage. Another point that should be made is that there is no "free lunch". While the calculus definition of infinite sums is more "powerful" than absolute convergence, in the sense that we can attach finite values to more sums, there is a price to be paid: we lose the ability to rearrange terms freely, which is a "natural" property one usually associates with addition.

Let's Get Crazy!

We now turn our attention to a more interesting infinite sum, $$ 1 - 1 + 1 - 1 + 1 - 1 + \dots \text{,} $$ known as Grandi's series. Going straight to the calculus definition, we start computing the partial sums: $$ 1 = 1 \text{,} \qquad 1 - 1 = 0 \text{,} \qquad 1 - 1 + 1 = 1 \text{,} \qquad 1 - 1 + 1 - 1 = 0 \text{,} \qquad 1 - 1 + 1 - 1 + 1 = 1 \text{,} $$ and so on. In particular, the partial sums of this series oscillate between \(1\) and \(0\), and hence do not tend toward any single number. Thus, in the standard calculus sense, Grandi's series does not converge to any finite value, neither absolutely nor conditionally.

Within the "ASTOUNDING" video, the presenters examined the series \( 1 - 1 + 1 - 1 + 1 - 1 + \dots \), noted that it does not tend to any number in the above sense, and then proceeded to attach the value \( 1/2 \) to it. While the video does not go into details, which is completely fair, they again made the mistake of stating that there are "proofs" of this "fact" without mentioning that one is actually changing the definition of infinite sums altogether in order to obtain \( 1/2 \)! Again, this is a misrepresentation of what is going on.

Now, how do we broaden our concept of infinite sums so that Grandi's series will take a meaningful finite value? The basic intuition is that Grandi's series is a bit too "bumpy" for it to converge in the calculus sense; indeed, the partial sums jump discretely between \(1\) and \(0\). Thus, the idea is that we want to somehow "smooth" the partial sums. One common way to do this is through a process known as Cesàro averaging (this is described in another Numberphile video featuring James Grime, which explains this quite nicely in an elementary manner).

Consider again the partial sums of Grandi's series, which we label as $$ s_1 = 1 \text{,} \qquad s_2 = 0 \text{,} \qquad s_3 = 1 \text{,} \qquad s_4 = 0 \text{,} \qquad \dots \text{.} $$ But, instead of considering what happens to these \(s_n\)'s as \(n\) becomes very big, as we did in the calculus definition, we instead look at the averages of these \(s_n\)'s: $$ a_1 = \frac{1}{1} ( s_1 ) = 1 \text{,} \qquad a_2 = \frac{1}{2} ( s_1 + s_2 ) = \frac{1}{2} \text{,} \qquad a_3 = \frac{1}{3} ( s_1 + s_2 + s_3 ) = \frac{2}{3} \text{,} \\ a_4 = \frac{1}{4} ( s_1 + s_2 + s_3 + s_4 ) = \frac{1}{2} \text{,} \qquad a_5 = \frac{1}{5} ( s_1 + s_2 + s_3 + s_4 + s_5 ) = \frac{3}{5} \text{,} $$ and so on. In contrast to the \(s_n\)'s, the \(a_n\)'s in fact do tend toward a number: \(1/2\)!
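The Cesàro averages above are straightforward to compute. A small Python sketch reproduces the numbers \(1, 1/2, 2/3, \dots\) and shows the averages settling at \(1/2\):

```python
# Cesàro averaging of Grandi's series 1 - 1 + 1 - 1 + ...: the partial
# sums s_n oscillate between 1 and 0, but their running averages a_n
# tend to 1/2.

def cesaro_averages(terms):
    partial, running = 0.0, 0.0
    averages = []
    for n, x in enumerate(terms, start=1):
        partial += x                  # s_n
        running += partial            # s_1 + ... + s_n
        averages.append(running / n)  # a_n
    return averages

grandi = [(-1) ** k for k in range(100000)]  # 1, -1, 1, -1, ...
a = cesaro_averages(grandi)
print(a[0], a[1], a[2])         # 1.0 0.5 0.666..., as computed above
print(abs(a[-1] - 0.5) < 1e-4)  # True: the averages settle at 1/2
```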

One interesting fact, which is a standard exercise often found in an undergraduate analysis class, is that any infinite sum that takes a value in the standard calculus sense will also take the same value in the Cesàro-averaged sense. [1] Thus, this Cesàro summation is a strictly more powerful definition, in the sense that one can associate values to more infinite sums using the Cesàro method than with the calculus definition. However, as we mentioned earlier, there is no "free lunch"; there is a price to be paid for this bit of extra power. To enlarge our family of convergent infinite sums, we must give up some properties that one would naturally associate with summations.

Consider Grandi's series again, and let us insert a bunch of zeroes throughout the sum. For example, one particularly organized way to do this is as follows: $$ 1 + 0 - 1 + 1 + 0 - 1 + 1 + 0 - 1 + 1 + 0 - 1 + \dots \text{.} $$ Now, since we only added zeroes, if life were reasonable, this should certainly take the same (Cesàro-averaged) value as Grandi's series, \(1/2\). But, let us compute the partial sums, $$ s_1 = 1 \text{,} \qquad s_2 = 1 \text{,} \qquad s_3 = 0 \text{,} \qquad s_4 = 1 \text{,} \qquad s_5 = 1 \text{,} \qquad s_6 = 0 \text{,} \qquad \dots \text{,} $$ and the Cesàro averages, $$ a_1 = \frac{1}{1} (1) = 1 \text{,} \qquad a_2 = \frac{1}{2} (1 + 1) = 1 \text{,} \qquad a_3 = \frac{1}{3} (1 + 1 + 0) = \frac{2}{3} \text{,} \qquad a_4 = \frac{1}{4} (1 + 1 + 0 + 1) = \frac{3}{4} \text{,} \\ a_5 = \frac{1}{5} (1 + 1 + 0 + 1 + 1) = \frac{4}{5} \text{,} \qquad a_6 = \frac{1}{6} (1 + 1 + 0 + 1 + 1 + 0) = \frac{2}{3} \text{,} \qquad \dots \text{.} $$

What happens now to the averages? If we were to continue the above computations indefinitely, we would see that the \(a_n\)'s no longer tend toward \(1/2\), but instead to \(2/3\). Thus, this new series, obtained from Grandi's series by adding only zeroes, converges, in the Cesàro sense, to a different value \(2/3\).
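One can check this numerically as well; in the following Python sketch, the zero-padded pattern \(1, 0, -1\) repeated yields Cesàro averages tending to \(2/3\) rather than \(1/2\):

```python
# Inserting zeroes into Grandi's series shifts its Cesàro value: the
# pattern 1, 0, -1 repeated has averages tending to 2/3, not 1/2.

def cesaro_average(terms):
    """Return a_N, the average of the partial sums s_1, ..., s_N."""
    partial, running = 0.0, 0.0
    for x in terms:
        partial += x        # current partial sum s_n
        running += partial  # s_1 + ... + s_n
    return running / len(terms)

padded = [1, 0, -1] * 100000  # 1 + 0 - 1 + 1 + 0 - 1 + ...
print(abs(cesaro_average(padded) - 2.0 / 3.0) < 1e-9)  # True
```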

This phenomenon can never happen with infinite sums in the calculus sense, and at first glance, this does seem quite bizarre. However, this is an artifact of this averaging process, which is tremendously sensitive to the order and placement of the terms of the summation. One could then question whether Cesàro sums, which have this somewhat "unnatural" property, are still a viable definition of infinite sums. This question is philosophical in nature, and the answer could depend on the specific application at hand. For example, such averages can successfully describe certain behaviors of Fourier series, hence Cesàro summations could perhaps be viewed as the "correct" notion of infinite sum in this setting. In other instances, however, what one considers as "correct" or "reasonable" could be very different.

At one point, the argument of the "ASTOUNDING" video considered the following sum of two series: $$ ( 1 + 2 + 3 + 4 + 5 + \dots ) - 4 ( 1 + 2 + 3 + 4 + 5 + \dots ) = 1 - 2 + 3 - 4 + 5 - \dots \text{.} $$ However, to obtain the desired series on the right-hand side, they had to insert extra zero terms into the second series on the left-hand side in order for the terms to align favorably. As we saw for Cesàro sums, this cannot be done without possibly ruining the answer. This is one way in which the "proof" in the "ASTOUNDING" video is wrong.

Going back now to the series in the title, \( 1 + 2 + 3 + \dots \), if we again compute the partial sums and their averages, as before, we would see that the \( a_n \)'s again tend toward infinity. Thus, even with this more powerful Cesàro summation, we still cannot attach a natural finite value to \( 1 + 2 + 3 + \dots \), not \( -1/12 \) nor anything else. To do this, we will have to be even more creative.

Let's Get Even Crazier!

Let us now focus our efforts on the main series of interest. Consider first the infinite sums, $$ \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \dots \text{,} $$ where \(s\) is some real power. If you have some calculus and/or analysis background, you may recall that this series converges to a finite value (in the standard calculus sense) if and only if \( s \gt 1 \). [1]
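For instance, at \(s = 2\) this series famously converges to \(\pi^2/6\) (the Basel problem), which is easy to check numerically. A quick Python sketch:

```python
import math

# Partial sums of 1/1^s + 1/2^s + 1/3^s + ... for s = 2; the tail after
# N terms has size about 1/N, so a million terms gives roughly 6 digits.
def zeta_partial_sum(s, n_terms):
    return sum(1.0 / n ** s for n in range(1, n_terms + 1))

print(abs(zeta_partial_sum(2, 10 ** 6) - math.pi ** 2 / 6) < 1e-5)  # True
```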

Moreover, if you have some familiarity with complex numbers, then you can get a bit wild and replace \( s \) above by a complex number: $$ \frac{1}{1^z} + \frac{1}{2^z} + \frac{1}{3^z} + \dots \text{.} $$ Although this sum is now complex-valued, one can still discuss the convergence properties of this infinite sum as before. Similar to its real-valued counterparts, this complex-valued series can be shown to converge if and only if the real part of \( z \) is greater than \( 1 \).

Consider now the complex-valued function $$ \zeta (z) = \frac{1}{1^z} + \frac{1}{2^z} + \frac{1}{3^z} + \dots \text{,} $$ defined for all complex \(z\) for which the above makes sense (in the usual calculus definition), namely, when \( \operatorname{Re} z \gt 1 \). (This is, in fact, the very famous Riemann zeta function!) One nice fact about this function \( \zeta \) is that it is (complex-)analytic. Without getting into technical details, being analytic roughly means that \( \zeta \) can be described using power/Taylor series, like those one would see in a first-year calculus course. Those familiar with analytic functions would know that this is a very exclusive family of functions with extremely nice properties. (A more specific statement of "niceness" would be that the derivatives of an analytic function at a single point completely determine the behavior of the function around that point.)

Now, the kicker is that there is one, and exactly one, way to extend \( \zeta \) so that it is an analytic function on all the complex numbers, except for \( z = 1 \). (Again, we lack the space to discuss this in detail here, but this analytic continuation is a fascinating topic worth looking into.) Given the "niceness" and the exclusivity of analytic functions, it seems natural to define \( \zeta \) to be this analytically extended function, with \( \zeta (z) \) now well-defined for every complex \( z \) except for \( 1 \).

Recall that for \( \operatorname{Re} z \gt 1 \), we had precisely that $$ \zeta (z) = \frac{1}{1^z} + \frac{1}{2^z} + \frac{1}{3^z} + \dots \text{.} $$ However, if we are convinced of the "naturalness" of analytic functions and extensions, then we can take a leap of faith and interpret \( \zeta (z) \) to be the infinite sum $$ \frac{1}{1^z} + \frac{1}{2^z} + \frac{1}{3^z} + \dots \text{,} $$ for any complex \( z \neq 1 \)! In particular, if we take \(z = -1\), then by this interpretation, $$ \zeta (-1) "=" \frac{1}{1^{-1}} + \frac{1}{2^{-1}} + \frac{1}{3^{-1}} + \dots = 1 + 2 + 3 + \dots \text{,} $$ that is, the infinite sum we are interested in!

With a bit of clever computing, one can see that \( \zeta (-1) = -1/12 \). Therefore, if we believe in the "naturalness" of this analytic extension, then we could possibly associate \( -1/12 \) as a natural value for \( 1 + 2 + 3 + \dots \). With this new definition via analytic continuation, we could now make sense of \( 1 + 2 + 3 + \dots \) taking a finite value! On one hand, this seems to be a very powerful process, succeeding with \( 1 + 2 + 3 + \dots \) when all previous methods have failed. Unfortunately, again there is no free lunch, and we have to give up something to gain this additional power.
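The "clever computing" can even be carried out explicitly. One route (among several) is a globally convergent series for \(\zeta\) due to Hasse, valid for all \(s \neq 1\); at negative integer \(s\), the inner sums are finite differences of a polynomial and vanish for large \(n\), so exact rational arithmetic recovers \(\zeta(-1) = -1/12\) in finitely many steps. A Python sketch:

```python
from fractions import Fraction
from math import comb

# Hasse's globally convergent series for the Riemann zeta function:
#   zeta(s) = 1/(1 - 2^(1-s)) * sum_{n>=0} 2^(-(n+1))
#                 * sum_{k=0}^{n} (-1)^k * C(n, k) * (k+1)^(-s),
# valid for all s != 1.  For negative integer s, the inner sum is an
# n-th finite difference of a polynomial in k, so it vanishes for all
# large n, and the computation below is exact.
def zeta_hasse(s, n_max=30):
    total = Fraction(0)
    for n in range(n_max + 1):
        inner = sum((-1) ** k * comb(n, k) * Fraction(k + 1) ** (-s)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - Fraction(2) ** (1 - s))

print(zeta_hasse(-1))  # -1/12
print(zeta_hasse(0))   # -1/2
```

Note that the same short computation also produces \(\zeta(0) = -1/2\), the value that appears in the \(1 + 1 + 1 + \dots\) discussion.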

Consider now the series $$ 2 + 3 + 4 + 5 + \dots \text{,} $$ namely, the preceding series with the first term removed. If the universe is just, then the value that this takes should be one less than \( 1 + 2 + 3 + \dots \), i.e., \( -1/12 - 1 = -13/12 \). However, one reasonable way to construct the above series via analytic continuation is to define (hat tip to Terence Tao's blog) $$ 2 + 3 + 4 + 5 + \dots "=" ( 1 + 2 + 3 + \dots ) + ( 1 + 1 + 1 + \dots ) = \zeta (-1) + \zeta (0) \text{.} $$ (This particular construction is related to the general theory of Dirichlet series.) Evaluating \(\zeta\) at \(-1\) and \(0\) then yields [2] $$ 2 + 3 + 4 + 5 + \dots "=" -\frac{1}{12} - \frac{1}{2} = -\frac{7}{12} \neq -\frac{13}{12} \text{.} $$ Uh oh... Again, if we want to take this more liberal interpretation of infinite sums, we will have to give up on yet another piece of intuition about summations that we hold dear.

In the follow-up to the "ASTOUNDING" video, the presenters do mention this analytic continuation. However, they fail to properly note the leap of faith one takes in defining and in interpreting \( \zeta (-1) \) as the value of the sum \( 1 + 2 + 3 + \dots \). Again, this is an instance of moving the goalposts. Moreover, by not mentioning this more clearly, the video leads the audience to overlook the mathematical brilliance involved in formulating this new definition! (On the other hand, the follow-up video does mention that there exist other methods with which one can associate \(-1/12\) to \(1 + 2 + 3 + \dots\), which is important evidence that there is some deep meaning behind \(-1/12\).)

The supplementary video also similarly flubs the proofs involving analytic continuation. For example, at one point, they make use of the geometric sum formula, $$ 1 + x + x^2 + x^3 + x^4 + \dots = \frac{1}{1 - x} \text{,} $$ which only holds in the usual sense when \( |x| \lt 1 \) (the video actually stated \( x \lt 1 \), which is very, very wrong). However, they then apply this formula at \( x = -1 \) to obtain $$ 1 - 1 + 1 - 1 + 1 - 1 + \dots = (-1)^0 + (-1)^1 + (-1)^2 + \dots = \frac{1}{2} \text{,} $$ which is certainly invalid in the standard sense. This is in fact the "creative" step, which is another instance of the analytic continuation discussed before. In particular, this involves a redefinition of infinite sums that was unacknowledged at that point in the video.
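One honest way to phrase this step is Abel summation: use the geometric formula only where it is valid, \(|x| \lt 1\), and then let \(x\) approach \(1\) from below. The Python sketch below (a numerical illustration, not a proof) shows the sums \(1 - x + x^2 - \dots\) matching \(1/(1+x)\), which tends to \(1/2\) as \(x \to 1^-\):

```python
# Abel-summation sketch for Grandi's series: inside |x| < 1 the geometric
# formula legitimately gives 1 - x + x^2 - x^3 + ... = 1/(1 + x), and the
# limit of 1/(1 + x) as x -> 1 from below is 1/2.
def alternating_geometric(x, n_terms=100000):
    return sum((-x) ** n for n in range(n_terms))

# The two columns agree, and both creep toward 1/2 as x nears 1.
for x in (0.9, 0.99, 0.999):
    print(round(alternating_geometric(x), 6), round(1.0 / (1.0 + x), 6))
```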

One way that one could possibly interpret \(-1/12\) is, vaguely, as what would be found if one were to "navigate around" the "blowing up" of the sum \(1 + 2 + 3 + \dots\) to infinity. With this heuristic, we can perhaps think of the "blowing up" as \( \zeta (z) \) blowing up at \(z = 1\), and we can then view the analytic continuation as navigating along the complex plane around this singularity at \(z = 1\). This prescription is, of course, rather vague, but it does suggest how this "crazy" definition of \(1 + 2 + 3 + \dots\) may be plausible in some settings. That this theory does seem to have relevance in physics makes it that much more fascinating.

Whether this is an appropriate notion of infinite summation depends, of course, on the specific context. For example, in the "ASTOUNDING" video, the interviewer, Brady Haran (who is also the creator of Numberphile), asks whether one would reach \(-1/12\) if one were to punch in \( 1 + 2 + 3 + \dots \) into a calculator and continue indefinitely. The answer is clearly "no". One explanation, in terms of what was discussed here, would be that this calculator example subscribes to the standard calculus definition and interpretation of infinite sums. Indeed, typing \( 1 + 2 + 3 + \dots \) into the calculator would effectively compute all the partial sums of this series. Therefore, this "calculator model" would be one instance where the standard calculus definition of infinite sums would be the appropriate notion, and not the analytic continuation definition, which carries an inherently different interpretation.


The discussion above is certainly far too lengthy for an expository video, in particular one for non-mathematical audiences. (It is, unfortunately, also too brief to cover all the important points with a sufficient amount of care.) At the end of the day, though, the point of all this is that these strange values for divergent series come from different definitions for infinite sums. Moreover, these different definitions have different pros and cons, as well as different interpretations.

Hence, I would argue that the "ASTOUNDING" video, which obscures these facts by asserting "the answer to \(1 + 2 + 3 + \dots\) is \(-1/12\)" without qualification and by passing erroneous arguments as proofs, does a disservice to the overall discussion. Furthermore, I feel that by even informally acknowledging these points, one could actually enhance the presentation, as it adds much richness to the topic and opens the door for further discussions and explorations.

Anyone who has ever attempted to teach mathematics at any level knows that this is an extremely difficult endeavor. How does one communicate mathematics effectively, and in an interesting and clear manner, without getting bogged down in details and turning off the audience along the way? At the same time, as educators and communicators, we also have another major responsibility: what we pass on to the audience should be neither false nor misleading. While fulfilling all these requirements in tandem poses a daunting challenge, these responsibilities should not be compromised.

[1] Thanks to Josh Green for noting the mistakes, which are now corrected.

[2] The last couple of sentences were modified to further clarify how the analytic continuation is done, as the previous writing was overly vague about this. Thanks to Jacob Tsimerman for pointing this out.
