Arick Shao 邵崇哲

# Why Prove Things?

This is a question held by many as they first come into contact with rigorous mathematics. Why complicate something that may already be intuitive and tangible by adding all these formalisms? Is this nothing more than an academic exercise?

A concrete instance of this is the simple proof in the previous article, [What Do Mathematicians Do?]. The mathematical statement is that the sum of two even integers is even, and the short proof is reproduced below:

Suppose $$x$$ and $$y$$ are even integers. Then, by the definition of evenness, there are integers $$x^\prime$$ and $$y^\prime$$ such that $$x = 2 x^\prime$$ and $$y = 2 y^\prime$$. As a result, $$x + y = 2 x^\prime + 2 y^\prime = 2 (x^\prime + y^\prime)$$. Since $$x + y$$ is $$2$$ times an integer, $$x + y$$ must itself be even.

The conclusion that adding two even numbers yields an even number should already be quite clear without any formal reasoning. In this case, the proof may likely take away from rather than enhance clarity. So, why do we insist that students of mathematics, in particular at the undergraduate level, prove such "obvious" statements?

### What Is a Proof?

Before addressing the value of proofs, we should first ask what exactly a mathematical proof is. A proof is, in essence, an argument in favor of some mathematical statement being true, given some assumptions and axioms. In the preceding example, the mathematical statement is that the sum of two even integers is even, and the assumptions and axioms include the definitions of integers and integer addition.

Now, the arguments in proofs are quite different from those seen elsewhere in debates or essays. In a mathematical proof, the argument is carried out using formal logic (for example, sentential and predicate logic). While this is reflective of the logical reasoning one likely uses in everyday life, what distinguishes formal logic is that it contains very precise rules on what steps of reasoning are permitted. This is quite analogous to, say, the game of chess and its unambiguous rules on which moves are allowed or forbidden. In particular, the proof in the previous example has this feel, with one advancing from the givens ($$x$$ and $$y$$ being even integers) to the conclusion ($$x + y$$ being an even integer) through a series of allowable moves representing clear logical steps.

However, note that the example proof was written in prose that is more indicative of informal logic, and not in the precise symbols of formal logic. While this argument could in fact be expressed completely formally, doing so would result in excessively lengthy writing. For more complex mathematical statements, a fully formal proof on paper would surely be too long to be practical. Consequently, the mathematical proofs one generally encounters are, in practice, informal descriptions of arguments that could be made using formal logic. The key, and somewhat subjective, point is that the informal argument from the above example contains enough information to convince a trained mathematician that a fully formal argument from the assumptions to the conclusion could indeed be made.
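Nowadays, such fully formal arguments can even be checked by a computer. The following is one way the evenness proof above might look as a machine-checked proof (a sketch in the Lean 4 proof assistant, assuming the Mathlib library; the theorem and variable names are our own):

```lean
-- Formal counterpart of the prose proof: if x = 2a and y = 2b,
-- then x + y = 2(a + b), so the sum of two even integers is even.
theorem even_add_even (x y : Int)
    (hx : ∃ a, x = 2 * a) (hy : ∃ b, y = 2 * b) :
    ∃ c, x + y = 2 * c := by
  obtain ⟨a, ha⟩ := hx  -- unpack the witness a with x = 2a
  obtain ⟨b, hb⟩ := hy  -- unpack the witness b with y = 2b
  exact ⟨a + b, by rw [ha, hb]; ring⟩  -- x + y = 2a + 2b = 2(a + b)
```

Each tactic line corresponds to one "allowable move" of formal logic; the proof assistant rejects any step that does not follow from the rules.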

### The Value of Mathematical Truth

The fundamental reason for insisting on proofs goes back to the proof's foundational role—it is how "truth" is determined in mathematics. This is perhaps best explained through an analogous scientific notion: the role of empirical observations in determining "truth" in science. Indeed, one values in particular scientific theories that have been confirmed repeatedly through experiments and observations. In turn, by passing these tests of scientific rigor, a theory gains a sense of reliability and confidence in its likely correctness. (This confidence is far more than philosophical; after all, trust in various scientific theories has led to cutting-edge medical advancements and to humans landing on the moon.)

In mathematics, the situation is similar, with proofs serving as a litmus test for mathematical facts. Like in science, one values mathematical statements that have been verified through a formal proof, as these statements have gained a sense of reliability and confidence. Similarly, advancements in rigorous mathematics have driven many modern developments in science and engineering.

It is also instructive to contrast the mathematical and scientific processes. In science, while repeated empirical observations and experiments can make a theory more likely to be true, the finding will always be associated with a likelihood. No number of observations would ever be able to completely establish that a scientific theory is entirely true without any doubt.

On the other hand, a formal proof of a mathematical statement would allow one to conclude in principle that the statement is precisely true (if we exclude, for the sake of discussion, the possibility of human errors in the proof itself). Whereas the scientific process can only affect the likelihood that a theory holds, formal mathematical reasoning leads to the definitive mathematical truth of a statement. Thus, in mathematics, one can make much more precise statements than one could in science (or philosophy, etc.).

Of course, there is a price to be paid for this breathtaking precision inherent in mathematics, in that this precision also limits the things one can study mathematically. In particular, since proofs can only involve abstract objects for which formal logical reasoning applies, mathematics itself is also restricted to these same objects. While mathematics is often used to model aspects of the real world, it is, in this sense, separated from the real world that science directly studies.

One of the great hallmarks of the scientific revolution that transformed the world is the rigorization of the scientific process. A similar "mathematical revolution" also occurred over a century ago that fully rigorized mathematics. In both disciplines, the adoption of rigor and the resulting reliability have played tremendously important roles in forging the modern world.

### At the Edge of Intuition

While the preceding discussion was rather abstract and philosophical, there are in fact many concrete ways in which mathematical proofs are useful. Going back to the initial example, the statement that the sum of even numbers is even is simple enough that one can be intuitively convinced of its correctness without the formal proof. However, what happens if a topic is complex enough that the intuition is not yet clear, or if one's initial intuition turns out to be misleading?

To get at this further, let us consider a well-known example, the Monty Hall problem. The set-up is a game show, in which a contestant plays the following game:

• There are three closed doors, behind one of which is a brand new car, and behind the remaining two are goats. The goal of the contestant is to guess the door containing the car.
• The contestant begins by choosing one of the three doors.
• The host then responds by opening one of the doors the contestant did not choose. The host knows beforehand what is behind each door, so the door opened by the host will always reveal a goat.
• The contestant then has the choice to either remain with the door she originally picked, or to switch to the remaining unopened door.
• The contestant either wins or loses, depending on whether her final choice of door contains the car or a goat.

What, then, is the best strategy to maximize the contestant's probability of winning? (The assumption, of course, is that the contestant prefers the car over the goat.)

Following one's initial intuition, it might seem that it does not matter whether the contestant remains with the original door or switches. Since the contestant does not know where the car is, she can seemingly only make a blind guess between the two unopened doors, with a 50% chance of success. However, this answer would in fact be incorrect—the optimal strategy is to always switch doors!

The explanation behind this is as follows. If the contestant does not switch doors, then she wins exactly when her initial guess is correct, which occurs with probability $$1/3$$. On the other hand, if the contestant switches doors, then she wins exactly when her initial guess is incorrect, which occurs with probability $$2/3$$: in that case, her door and the host's opened door account for both goats, so the remaining door must hide the car. The difficulty in grasping the correct solution mainly arises because the initial intuition often leads one astray.
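When intuition and analysis disagree, one can also put the question to an experiment. The following Monte Carlo simulation of the game makes the $$1/3$$ versus $$2/3$$ split concrete (a minimal sketch in Python; the function name `play` and the trial count are our own choices):

```python
import random

def play(switch: bool, trials: int = 100_000, seed: int = 0) -> float:
    """Return the empirical win rate of the stay/switch strategy."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's initial choice
        # Host opens a goat door that the contestant did not pick.
        # (When pick == car, the host has two options; taking the
        # first one does not change either strategy's win rate.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one door that is neither picked nor opened.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials
```

Running `play(switch=False)` yields a win rate near $$1/3$$, while `play(switch=True)` yields a win rate near $$2/3$$, in agreement with the analysis above.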

For such tricky and confusing situations, how does one determine that a given solution is reliable? In mathematics, this is where rigorous proofs come in, as this provides a means to check that a deduction of a solution is correct. By demanding rigor, one gains a safeguard against faulty deductions, which can happen when intuition is misleading. For instance, in the Monty Hall problem, one can model this using probability theory and then formally prove that switching doors leads to a higher chance of winning.

(The previous discussion assumes the standard interpretation of the Monty Hall game. There are, in fact, alternative interpretations that lead to different answers. In other words, the assumptions also matter!)

(There is also a certain bit of irony in trumpeting the Monty Hall problem, since after the solution was initially published, a number of mathematicians with Ph.D.'s wrote in to complain that the solution was wrong!)

More generally, we demand the same level of rigor for even the most basic arguments, such as the evenness of a sum of even numbers. While this is to an extent pedagogical, it does demonstrate that all of the mathematics we rely on today lies on solid foundations.

### Proofs in Science and Engineering

Finally, we conclude with some notable examples of proofs used alongside science and engineering.

Theoretical physicists have devised many mathematical models to describe physical phenomena; highly notable examples include Newtonian mechanics, quantum mechanics, and relativity. One can ask what could be mathematically proved from these models; this then leads to educated guesses for what may be observed in the real world. As a recent example, the Big Bang theory was suggested by mathematical models of relativity. On the flip side, if the conclusions proved from the mathematical models are not compatible with real-world observations, then this may suggest that the physical theory itself is erroneous. Similarly, mathematical modeling of real-world phenomena has become pervasive throughout chemistry, biology, economics, and finance.

Next, suppose you are modeling some real-world phenomenon on a computer (for instance, simulating fluid flow), and suppose the computations involve solving a differential equation. Of course, a machine generally cannot compute the exact solution of the differential equation; it can only produce approximations of these solutions. How could you be sure that these approximations remain close to the actual solution, and that your computational model actually reflects the real-world phenomenon you are studying? In many cases, one can rigorously prove that an approximate solution from some given algorithm will remain close to the actual solution. Moreover, this often comes with quantitative information: "at worst, the approximation can differ from the real solution by a certain amount." Such proofs help to build confidence toward the computational schemes that we rely on to simulate and study aspects of the real world.
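As a toy illustration of such a quantitative guarantee, consider the forward Euler method applied to the equation $$y' = y$$ with $$y(0) = 1$$, whose exact solution is $$e^t$$. The method is provably first-order accurate: halving the step size roughly halves the error, and this can be observed numerically (a minimal sketch in Python; the function name `euler` is our own):

```python
import math

def euler(f, y0, t0, t1, n):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0, via n Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)  # one explicit Euler step
        t += h
    return y

# Errors against the exact value y(1) = e, with 100 and 200 steps.
err_100 = abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, 100) - math.e)
err_200 = abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, 200) - math.e)
```

Here `err_100 / err_200` comes out close to 2, matching the proved first-order error bound for this method.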

This idea of proofs bestowing reliability has also taken root in computer science. In some critical situations—think nuclear plants or space missions—it is imperative that a computer program runs correctly as intended. In such cases, when repeatedly testing a program is not exhaustive enough, one can opt instead to prove that the program will always run correctly. The basic idea is to model both the program and the programming language itself mathematically, so that the program becomes a series of formal commands, each of which has a well-defined effect on the state of a theoretical machine. One then proves within this model that at the end of the program, the hypothetical machine will be in the desired state.
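As a tiny taste of this style of verification, one can state a program's specification as a theorem and have a proof assistant check it for every possible input, not just the inputs one happens to test (a sketch in Lean 4; the function and theorem names are our own):

```lean
-- A small "program": the maximum of two natural numbers.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- Its specification: the result is at least as large as each input.
-- These theorems hold for *every* pair of inputs, not just tested ones.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

Real verified systems apply the same idea at vastly larger scale, with the program and its semantics modeled inside the proof assistant.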

For example, in the early 1990s, Intel released a microprocessor containing a subtle bug that resulted in some incorrect computations involving floating point division. While this was not noticeable to most users, such errors can be disastrous for the scientific and mathematical research communities, whose members can unknowingly churn out wrong results without making any mistakes themselves. Aside from the public relations fallout, this was also an expensive mistake that cost Intel approximately \$475 million (USD). Since then, researchers have in some instances preemptively dealt with potential bugs in processors by crafting proofs that a given unit will run correctly.