Wednesday, September 23, 2015

Sam Harris, Max Tegmark, and mathematical ontology

This afternoon I saw that Sam Harris was in conversation on his podcast with MIT cosmologist Max Tegmark, famous for his book Our Mathematical Universe, in which he argued that, if we really get down to the fundamental nature of reality, our universe is mathematics. When I saw it, I sighed audibly and decided that this episode of Waking Up probably would not be worth listening to, since I generally find so much metaphysical speculation an absolute academic sinkhole, especially when it's that bizarre.

No such luck for me, though. Despite being desperately busy at the moment, I was cajoled by my friend Pete into listening (at least to the first third--I've made it through half so far) and, if I thought it worthwhile, into writing a little something about mathematical ontology. I mean, I had to. He wrote me an email including words in all caps to this effect, and maybe he was right. I was stunned by what I heard, and hopefully I can make some small contribution to their discussion here.

Before getting started, let me credentialize a little, though in a way I doubt will prove odious. I have a doctorate in (abstract) mathematics, an undergraduate degree in physics, and I have thought about these topics for getting on fifteen or twenty years. I am not a mathematical ontologist (someone who fusses about with trying to figure out what constitutes the existence of mathematical objects), and I'm only a "philosopher" insomuch as other people keep accusing me of being one. I also wrote a book a couple of years ago dealing with some of what they talked about, and notably, I've never found "the unreasonable effectiveness of mathematics" the least bit mysterious and am thoroughly confused about why people are so easily mystified by the question.

Since I only listened to the first half of their talk, I'll keep my commentary limited to the relevant bits, which stretch roughly from 19 minutes into the podcast until about the 37-minute mark.

Why should we trust mathematics?

Harris asks this question to open the discussion about mathematics, and his take is that our intuitions are often very misleading about reality, citing quantum mechanics and relativity theory. If our intuitions are so misleading, he asks, why should we be so willing to trust mathematics epistemologically? He states clearly that he understands the pragmatic reason--it works.

In response, Tegmark talks about how physics is able to make predictions that can be tested and proven correct, and so in a way, he is suggesting that empiricism (data, which means asking reality for feedback) gives us a firm epistemological foundation to work from. I don't disagree. Where I do diverge from these two, though, is that I think mathematics is far more empirical than most people realize.

Perhaps it is because my doctorate was done in combinatorics (as opposed to some useful branch of mathematics), but I see math, at its very basis, as being about counting. Certainly, counting is where math began, a fact that seems to account for why it took so long for the number zero to be invented. The thing is, counting is inherently empirical.

If I think I have five things, once I have a definition for five, I can count them. One, two, three, four, five--I do have five, okay. It turns out that I can separate those five things into two piles, one of two things and one of three. I can see them. There's two. There's three. And then I can put them together, and there's five. I can do it like an experiment: five balls, five trees, five sticks, five rocks, five people, five birds. Every single time, I can separate those five into two groups of two and three (or one and four, or, a bit more abstractly, five and zero), and I see 2+3=5. This isn't some abstract effort. It's naming sizes of collections and then looking at them, counting them, in reality.

I can make predictions too. I can come up with all these names for numbers, which ultimately come down to "add one more this many times," and then I can make predictions about them. I can imagine I have ten grapefruits over here and another twenty grapefruits over there, and I can do the math, play with the abstracted things we call numbers, and predict that if I combine my piles, I'll have thirty grapefruits. Then I can combine them, and, as predicted, I'll have thirty grapefruits. I don't think anyone calls this level of effectiveness "unreasonable."
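To make this concrete, here's a minimal sketch of the "arithmetic as experiment" idea in Python (my illustration, not anything from the podcast): the prediction comes from the abstraction, and the check comes from counting actual items.

    # Predict the combined count with the abstraction (addition),
    # then verify by counting the physically combined pile.
    pile_here = ["grapefruit"] * 10
    pile_there = ["grapefruit"] * 20

    predicted = 10 + 20                  # the abstract prediction
    combined = pile_here + pile_there    # combining the piles
    observed = len(combined)             # counting what we actually have

    print(predicted, observed, predicted == observed)   # 30 30 True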

Notice that this is exactly the epistemic basis that Tegmark asserts, apparently to Harris's satisfaction, gives physics its legs, and I don't think it's one any but the most hardened skeptic would doubt. Math, at least where it comes to counting and basic arithmetic operations, has empirical foundations--indeed, that's how the math was invented in the first place.

Tegmark remarked slightly before this part of the conversation that he thinks a scientific theory should be taken seriously even if it includes unfalsifiable elements, so long as it has other parts that are testable, and he implies that our trust in a theory increases with the number of tested and confirmed cases. He mentions black holes and general relativity, the anomalous precession of Mercury's perihelion, and so on. We can't know what's going on inside a black hole by observation, but we can predict it using a theory that works so well in so many other places that we should probably at least listen to what general relativity has to say about your fate if you fell into such a region of extreme gravity.

So too in mathematics. Suppose I only have access to a few thousand things that I can count, as many ancient number systems seem to indicate was often the case (see Chinese, which uses ten thousand, wan, and then one hundred million (ten thousand ten thousands), yi, as its basic big numbers, which makes saying big numbers in Chinese pretty inconvenient sometimes). If that's the case, I'll quickly run out of practical ways to keep testing my theory of numbers. Sure, I can define a number called a million and add it to another number called one hundred thousand and come up with 1,100,000, and even if that is (practically) unfalsifiable, we can still trust it is correct because every other pair of numbers we've added in the same way has worked out.

That example may seem silly, but there are really impractical numbers. Take a number that has a trillion digits, for example--or one whose number of digits is itself described by a number with a trillion digits. What does such a number count? Nothing real, without getting really abstruse (the number of atoms in the observable universe is estimated to be an 81-digit or 82-digit number). Now take another number of similar size. Add them together, or multiply them, or raise them to powers of one another, say that many times over. Somewhere in here, we get beyond anything like falsifiability in any realistic sense, and yet we know we can trust the numbers will come out right if we have a machine that can do the arithmetic because we trust the theory. Really big numbers, then, are like the insides of black holes, and we trust the mathematical structures because they work in literally every little case we can possibly check.
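To illustrate with something concrete (my own example again, not one from the podcast): any system with arbitrary-precision integers will cheerfully do arithmetic on numbers that could never count anything, and we trust the answers for exactly the reason above.

    # Arithmetic on numbers far too large to count anything physical.
    # Python integers have arbitrary precision, so this runs as written.
    a = 7 ** 1_000_000        # roughly 845,000 decimal digits
    b = 3 ** 1_000_000        # roughly 477,000 decimal digits
    c = a * b                 # nothing in the universe for this to count

    # bit_length() sidesteps converting these monsters to decimal strings
    print(a.bit_length(), b.bit_length(), c.bit_length())

    # the familiar rules still hold out here, exactly as the theory promises
    print(c // a == b and c // b == a)   # True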

But proofs

But aren't these mathematical facts proven, and so we don't have to trust the theory? Well, yes, they are. The point, though, is that we could trust the theory even without the proofs, although in doing so, we'd introduce an element of uncertainty and find a gap in which we can argue about epistemic warrant and other things that philosophers of science like to argue about--sometimes for good reasons.

The proofs aren't irrelevant, though; they're very important. They are not so important, however, that we get to commit the philosopher's greatest error of forgetting the world for his abstractions. Math is, at bottom, empirical and then abstracted from there. It is not the other way around. Let me explain. 

Mysticism and Platonism

It's easy to argue that my last book, Dot, Dot, Dot, is largely a treatise on why people shouldn't be Platonists. I'll avoid rehashing too much of it here.

Harris wonders at the primacy of mathematics, or its "unreasonable effectiveness," and asks Tegmark for his take. Tegmark notes that we have to ask what mathematics is and correctly notes that if we ask lots of different people, we'll get lots of different answers. He then says most mathematicians would say mathematics is a set of "structures" that are "to be discovered," giving examples like numbers: 1, 2, 3, and so on, and 2+2=4. I assume that by "to be discovered," he means by logic, as opposed to empirically as I just discussed. In so doing, he echoes Ian Stewart's remark (quoted in Dot, Dot, Dot with a reference) that most mathematicians hold some kind of "unexamined blend" of two takes on mathematical ontology: Platonism and formalism.

I explained this in Dot, Dot, Dot in considerable detail. The general thrust of my explanation is that mathematics is a kind of philosophy that performs logic on certain axioms which are, in many cases, very "self-evident" statements. Indeed, as I just argued, they are ones that can be (and were originally) derived empirically and then made into abstractions. Once a set of mathematical (or other philosophical) axioms is determined, though, and the type of logic being used is given, the combination of the two produces a structure, to borrow Tegmark's word, called an axiomatic system.

An axiomatic system is a collection of statements together with their truth values under a specific kind of logic, all standing in relation to the set of axioms that underlie them. And this is why mathematics seems so discovered. The truths, falsehoods, and undecidables of every axiomatic system--all abstract objects, which means ideas--are determined in total in the very instant the axioms and logic that define the system are chosen. Finding them out is discovering them, as if they exist in an imaginary landscape defined by the axioms and logic themselves. It's exactly like turning over the cards in the children's board game Candyland, as I argued in the book. Once the cards are shuffled (axioms and logic are chosen), the game (entire axiomatic system, or "mathematics") is determined. It's just a matter of going through it and discovering what happens (though harder, more interesting, and more useful by far).
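To put that a little more formally (my gloss, not wording from the podcast): fix a set of axioms A and a logic with derivability relation ⊢, and the whole body of theorems is already determined as the deductive closure of A,

    \mathrm{Thm}(A) \;=\; \{\, \varphi : A \vdash \varphi \,\}

with the falsehoods and undecidables fixed by the same choice. Proving things is the business of finding out which statements landed where.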

The axioms, though--those we invent, sort of. We certainly invent some of them, but I'll come back to that. The ones we didn't invent are merely "not invented" because they're empirical. Numbers like one and two and three, even zero, fundamental definitions for the way addition works, and so on, are either self-evident axioms or direct consequences of others that are either self-evident or abstracted variants on ones that really are self-evident.

Some aren't so clear, though, like the Axiom of Infinity, which implies that at least one infinite set exists. That, I'd say, we invented. And we can choose to use it (standard mathematics, and some weird others) or not (finitist and ultrafinitist mathematics). Once we have infinity, we have to wonder about choice across infinite sets (Axiom of Choice), and we can choose to accept it or reject it. In each case, we get a different axiomatic system, a different mathematics.
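For the curious, here are standard ZF-style statements of the two axioms (standard formulations, not quotes from the podcast); neither is something you'd ever arrive at just by counting piles of rocks.

    \textbf{Infinity:}\quad \exists I\,\bigl(\varnothing \in I \;\wedge\; \forall x\,(x \in I \rightarrow x \cup \{x\} \in I)\bigr)

    \textbf{Choice:}\quad \forall \mathcal{F}\,\bigl(\varnothing \notin \mathcal{F} \rightarrow \exists f\;\forall S \in \mathcal{F}\; f(S) \in S\bigr)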

So when Harris says that "mathematics is a landscape of possible discovery that exceeds our current understanding--and may, in fact, always exceed it," yes, and yes, necessarily.

This is the case without even remarking upon Gödel's famous incompleteness theorems, results that show that the kinds of axiomatic systems we usually associate with mathematics cannot simultaneously be complete (all statements have determinable truth values) and consistent (no contradictions). Because there are infinitely many numbers (or indefinitely many, to satisfy the finitists out there), there are infinitely many theorems, and we'll only ever state a finite number of them. Harris alludes to this directly by implying that we'll always only know a finite number of primes while also knowing the cardinality of the set of primes to be infinite--and so there's always another theorem lurking out there: "p_newly_realized is a prime number" (although "n is an integer" would work too, for any big enough integer n, because there are infinitely many such theorems and there isn't time to think them all up).
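Euclid's old argument (not something raised in the podcast) makes the point concrete: however many primes we have gotten around to stating, the axioms already guarantee another theorem of the form "p is a prime number" out there waiting.

    \text{Given primes } p_1, \ldots, p_n, \text{ let } N = p_1 p_2 \cdots p_n + 1. \text{ No } p_i \text{ divides } N,
    \text{ so any prime factor of } N \text{ is a prime missing from the list.}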

The fundamental mystery isn't mysterious

Harris goes on to raise the point again about the "fundamental mystery: why should mathematics be so useful for describing the physical world and making predictions?"

Tegmark responds that you'll get a lot of answers depending upon who you ask. He says some people (who are not like me) will say there's no mystery: "math is useful, go away," they'll say. I don't think there's a mystery, but pragmatism isn't my reason. He says others are Platonists, and so on, going to the extreme case of himself where he answers that it's because the world is mathematics. Bah. Metaphysical speculation.

So here is why mathematics is reasonably effective, and why we should be surprised if it weren't. Mathematics, at the beginning of its efforts, is about counting things. That effort is inherently empirical, as I argued, so we are linking mathematics to the world from its very basic beginnings and then abstracting via logic from there. All of the math we have ever built started with counting and added layers of abstraction from that concrete basis. I suppose it didn't have to be this way, in some grand sense of the phrase, but really, it did. Why would we have expended energy developing mathematics that didn't apply to the universe we find ourselves in? Even now, many of us wrinkle our noses at mathematicians who are too enamored with that kind of endeavor, even when there are sufficient resources to fund it.

There's, as Harris alluded to, an infinite landscape of mathematics that could have been, but we built the mathematics that is rooted in our experience of reality instead of something else. We could define addition or multiplication or even numbers differently, and for some abstract purposes, mathematicians sometimes do. We don't do much with that, though, because if we used those axioms for "basic" mathematics, we'd get answers that diverge from our experience.

Perhaps the most famous and obvious example of this fact goes all the way back to Euclid, more than two millennia ago. While laying out the foundational axioms (postulates) of geometry, he included the parallel postulate (usually stated via Playfair's Axiom now: in a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point). This is an axiom, a self-evident truth, of planar geometry--often called Euclidean geometry--but it is not true of spherical or hyperbolic geometry, both of which are important in cosmology. Those two are different geometrical systems.
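A concrete way to see the difference (a standard textbook fact, not something from the conversation): the angle sum of a triangle comes out differently under each choice of axioms.

    \text{Euclidean: } \alpha + \beta + \gamma = \pi \qquad
    \text{Spherical: } \alpha + \beta + \gamma > \pi \qquad
    \text{Hyperbolic: } \alpha + \beta + \gamma < \pi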

What we see is that when we change the axioms, we get completely different mathematics, and there are lots of possible choices, though the vast majority of them are bad. The entirety of my argument for the reasonableness of the effectiveness of mathematics is that we chose to keep and develop those axioms that are useful to the real world or seemingly logical extensions of those instead of any number of others--and we didn't have to.

So why is mathematics unreasonably effective? Because out of all of the many possible ways we could have built math (infinitely many, really), we built the one that applies to our world by starting with self-evident axioms and building upward and outward from there.

Is all math unreasonably effective?

No, not even within the confines of the mathematics we have developed. It isn't clear, for example, that infinity is terribly useful. Finitists claim that all relevant mathematics can be done without it, and they seem to have made a strong case for that claim. There are also frequent articles being published arguing that infinity is where physical models break down. So should we accept the Axiom of Infinity or not? Is infinity, with all its corollaries, unreasonably effective mathematics? Probably not. Maybe--but probably not.

Let's say it is, though. Let's say that the Axiom of Infinity is surprisingly effective for something. That will bring us to the Axiom of Choice pretty quickly. Is it unreasonably effective? Well, there's a reason there has been so much controversy in mathematics surrounding Choice. On the one hand, it seems desperately arbitrary to reject Choice, even on infinite sets, but on the other, accepting it yields the Banach-Tarski Paradox, in which a single solid object can be decomposed into five pieces and reassembled into two exact copies of the original object (apparently implying that, in some sense, 1=2, or that two is just another form of one). So, is mathematics on one side or the other of Choice unreasonably effective? Who knows?
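Stated a bit more carefully (this is the standard statement of the theorem, not a claim made in the podcast):

    \text{A solid ball } B \subset \mathbb{R}^3 \text{ can be partitioned into finitely many pieces (five suffice) that can be}
    \text{ reassembled, using only rotations and translations, into two balls, each congruent to } B.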

This is what I know, though: if the universe provided some solid empirical reason (to root Harris's question about epistemology) to accept the Axiom of Infinity and the Axiom of Choice, say some weird quantum effect showed that the Banach-Tarski Paradox isn't only not paradoxical but is part of how nature works at sufficiently small scales within certain energy ranges, we'd accept them both and declare the mathematics that results "unreasonably effective." On the other hand, if nature showed us good reasons to reject them both as bad axioms, we'd accept their negations and declare the mathematics that results "unreasonably effective."

Quelle "unreasonably effective."

Is the universe made of math?

Who knows? This is pure metaphysical speculation, but if I were invited to speculate, I'd say it isn't. Metaphysical speculation of this kind is probably always more likely to be wrong than right. Still, this case is probably worse, and Harris catches Tegmark at it with exactly the pertinent question.

Harris pushes Tegmark on whether or not language can be said to do the same thing as mathematics--characterize the universe at a fundamental level, since the universe is describable in language (this being a big part of Tegmark's case--electrons, etc., are identifiable with a set of numbers, and that's that). I think Harris busts the whole thing there: yes, the same claim applies to language, because math is just a subset of language (everything in math can be expressed in language, and most of "math" is shorthand).

Tegmark, of course, dissents. He says that math is inherently more powerful than language, but he gives himself away (and is wrong) in the moment where he admits that human languages are "notoriously vague." In the same breath, he also admits that the power of mathematics is that it is a very precise kind of language. But why is mathematics so precise?

It is precise because of something few readers will believe: math is precise because it is simple. Math is reality stripped of everything complicated about reality. Math is a kind of philosophy in which we use robustly self-evident axioms, or ones that seem to follow logically from them, and in which pretty much everybody agrees on those axioms, at least for the "basics." It's simplified but empirically based axioms and cold, cruel logic.

A conversation worth having

Now, Tegmark makes some remarks near the end of this segment that let me come full circle when he discusses his "optimistic view" of a mathematical universe. He says something that implied to me that eventually we'll run out of better data, but we'll always have math, and imagination, to continue to push the boundaries of our knowledge. This is an interesting conversation to have.

When I first met Vic Stenger, I asked him this very question: What happens when we get to a point where there's no practical way, or maybe even no physical way, to extract more relevant data from the universe? Say, for instance, we physically cannot, for whatever reasons, pry another decimal place out of our measurements. Now suppose we have two competing models that would be resolved two or four or twenty or one hundred decimal places further down than we can get. How do we choose between them?

Sadly, Vic told me he didn't think that would happen. He didn't think there would be limits on how much information we could pry out of reality, but even if there aren't physical limits (I suspect there are, given Heisenberg and the Planck scale), there certainly are practical ones. What if the particle collider needed to answer the question requires more energy than the total output of our sun for a century, for instance? Even a tiny fraction of that much energy is unlikely to be worth the effort.

Tegmark is right, though: there, at that limit, if nowhere before, math and our imagination, along with the other elements that lead us to accept physical models, become the defining criteria for choosing one model over another. Working out what those criteria are constitutes an excellent pursuit for the philosophy of science, and I think the question is fundamentally very hard because it asks how we will define scientific epistemology in a domain in which empiricism can't rule the epistemic roost.
