Me, mathematics, and physics

At Harvard I took both math and physics courses all the way through, but I knew which was my subject. In mathematics, any principle you learned was simply true: you could apply it anywhere you liked and you’d never reach a falsehood. In physics, that wasn’t enough. You had to have some kind of deeper understanding to see that this principle was appropriate here and that one could be applied there. It made me uneasy.

Forty years on, what kind of mathematician have I become? One whose pride is that he doesn’t apply principles blindly but is guided by deeper understanding! Indeed, not long ago I published an essay on “Inverse Yogiisms,” all about mathematicians’ habit of following rigorous logic to misleading, though mathematically valid, conclusions.

Which brings me to two thoughts. One is that maybe I could have been a pretty good physicist after all. The other is that as a physicist, maybe I would have been a little less distinctive.

[3 February 2019]

What might finally make people stop believing

We were listening to Christmas carols today and they reminded me disturbingly that the Christ child was a baby boy, not a girl, destined to grow up to be a man, not a woman.

All my life I have marveled that no amount of absurdity in religious doctrine seems to dent people’s faith. But listening to these carols, for the first time, I had a glimmering of what might finally bring Christianity down. Cathedrals of nonsense and impossibility? — no problem. Traditional sex roles? This is a problem.

[20 December 2018]

What happened to P vs. NP?

The great discovery of computer science in the 1970s was the distinction between P and NP. All problems in P were equivalently tractable, and all NP-complete problems (assuming P≠NP) were equivalently intractable. And this came with a prediction seemingly beyond doubt: as computers grew faster in the future, the gulf between the two classes would widen.

What ever happened? Forty years on, P vs. NP is famous as a mathematical challenge, a million-dollar Clay problem, and a reference point for all kinds of theoretical analyses; but though computers got millions of times faster, the gulf seems to have narrowed! Nowadays a long list of NP-complete problems are largely tractable in practice, including traveling salesman, maximum cut, edge coloring, frequency assignment, maximum clique, 3SAT, set cover, vertex coloring, knapsack, and integer programming. And only theorists nowadays regard all NP-complete problems as equivalent.

The explanation seems to have two parts. One is that NP-completeness is defined by worst-case analysis, and for many problems the instances arising in practice turn out to lie far from the worst case, especially once they are softened up with good heuristics or randomization. 3SAT is one example, now a workhorse of practical computing, and even more striking is that the simplex algorithm is used all the time for linear programming even though its worst case is exponential for a problem solvable in polynomial time. The other is that NP-completeness is defined in terms of exact optima, whereas if one relaxes the goal by a fraction, what used to be exponential may become polynomial. By such means a discrete problem can often be approximately solved, even in the worst case, by continuous algorithms such as linear or semidefinite programming. An example is max cut.
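Max cut illustrates the second point in miniature. Here is a small sketch of my own (the function name and the example graph are invented for illustration, nothing from the essay): finding the exact maximum cut is NP-hard, but a simple local search, running in polynomial time, is guaranteed a cut containing at least half the edges.

# Toy illustration: exact max cut is NP-hard, but accepting a cut within
# a factor of the optimum changes the picture.  This local search moves
# any vertex whose switch enlarges the cut; the cut value can only grow,
# so it terminates, and at the end at least half of each vertex's edges
# are cut, hence at least half of all edges.

def local_search_max_cut(n, edges):
    side = [0] * n                       # side[v] in {0, 1}
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cut_now = sum(1 for a, b in edges
                          if v in (a, b) and side[a] != side[b])
            cut_after = sum(1 for a, b in edges
                            if v in (a, b) and side[a] == side[b])
            if cut_after > cut_now:      # switching v enlarges the cut
                side[v] = 1 - side[v]
                improved = True
    cut_value = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut_value

# A 5-cycle: the maximum cut has 4 edges; the guarantee alone promises at least 3.
print(local_search_max_cut(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))

For max cut the semidefinite relaxation alluded to above does much better than a factor of one half, guaranteeing after rounding roughly 0.878 of the optimum (the Goemans–Williamson bound).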

Nobody seems to talk much about this great surprise at the epicenter of computer science. This puzzles me.

[9 December 2018]

Advice for an applied math talk

1.  Begin with a thorough discussion of your Table of Contents. This will help the audience follow your presentation.

2.  Aim for about three slides per minute. For example, 72 slides is good for a 25-minute talk.

3.  Report error estimates and computer timings to at least six digits of accuracy. For example, it’s more precise to say you got an error of 3.22005e-8 in 5.52723 seconds than 3.2e-8 in 5.5 seconds.

4.  Present numbers, not plots. Plots leave a vague and fuzzy impression.

5.  Be sure to give details of the proofs. A proof without details isn’t rigorous.

6.  Aim the laser at the screen and wave it around the part of the slide you’re talking about. This helps the audience pay attention.

7.  As your time slot nears its end, speed up if necessary. Don’t risk failing to cover all your material.

[6 December 2018]

Two views of mathematical functions

The 19th century view was that a function was analytic. Of course, some functions were less smooth than that, but analyticity was the starting assumption.

The 20th century view is that a function is continuous. Of course, some functions are smoother than that, but continuity is the starting assumption. (Have you ever seen a merely continuous function? I think there is one main example: a random Brownian path.)

The 20th century view has utterly taken over among pure mathematicians. One can understand this on the basis that analytic functions are too easy to offer research challenges. Unfortunately, the same view has been copied by the numerical analysts, who ought to have been guided by what applications look like. One sees this in their edifice of results expressed in the language of Sobolev spaces. The point of Sobolev spaces is to make precise distinctions between, say, a function with one derivative and a function with one-and-a-half derivatives. Nineteenth-century mathematicians did not spend their time on such distinctions.
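For concreteness, here is one standard way such fractional distinctions are made precise, via the Fourier coefficients of a 2π-periodic function (a textbook definition, not something from the paragraph above):

\[
\|f\|_{H^s}^2 \;=\; \sum_{k=-\infty}^{\infty} (1+k^2)^s\,|\hat f_k|^2 ,
\]

so that $f \in H^1$ means one derivative in the mean-square sense and $f \in H^{3/2}$ means one-and-a-half.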

And so it is that numerical analysts prove technical, rigorous theorems about the convergence of algorithms whose convergence rates are nowhere near optimal for most applied problems. Meanwhile pure mathematics continues its drift away from science.

[1 December 2018]

Flypaper and the Faraday cage

For the first time in my life this summer, I had to buy flypaper. The fruit flies in our kitchen in Lyon were annoying. It became fascinating to watch them dart around seemingly at random, and occasionally light on the sticky paper.

Suppose they flew truly along Brownian paths (in fact it’s not so simple), and suppose I had hung a ring of flypapers (in fact I hung just one). Then this would have been the same mathematics as the Faraday cage! For electrostatic potentials are governed by Laplace’s equation, and so are the hitting probabilities of Brownian paths.
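Spelled out (a standard fact, not stated explicitly in the note above): the probability $u(x)$ that a Brownian path started at $x$ reaches the banana before touching a flypaper satisfies

\[
\Delta u = 0 \ \text{between the obstacles}, \qquad u = 1 \ \text{on the banana}, \qquad u = 0 \ \text{on the flypapers},
\]

which is exactly the boundary-value problem satisfied by an electrostatic potential.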

Our analysis of the Faraday cage translates as follows to fruit flies getting through a ring of flypapers to a banana. (a) As long as there are good-sized gaps between the papers, the number of flies getting through will not be negligible (this was Feynman’s error). (b) If the papers themselves are made narrower, even down to a scale of millimeters, the effectiveness of the cage diminishes only slightly (in proportion to an inverse-logarithm of the radius). This is one of the paradoxes of Brownian paths, going back to Berg and Purcell in 1977.
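Point (b) is easy to poke at numerically. Below is a rough Monte Carlo sketch of my own devising; every name and parameter (banana_fraction, eps, r_ring, and so on) is invented for illustration, and the discrete random walk is only a crude stand-in for true Brownian motion. Shrinking the flypaper radius eps should reduce the fraction of walkers reaching the banana only slowly, roughly like an inverse logarithm.

import numpy as np

def banana_fraction(n_papers=12, eps=0.04, n_walkers=1000, step=0.01,
                    r_banana=0.1, r_ring=1.0, r_start=1.5, r_far=3.0,
                    max_steps=60_000, seed=0):
    """Fraction of random walkers reaching the central 'banana' before
    sticking to one of n_papers absorbing 'flypapers' of radius eps
    spaced around a circle of radius r_ring.  Walkers start on a circle
    of radius r_start; wandering past r_far (or running out of steps)
    counts as never reaching the banana."""
    rng = np.random.default_rng(seed)
    ang = 2 * np.pi * np.arange(n_papers) / n_papers
    papers = r_ring * np.column_stack([np.cos(ang), np.sin(ang)])

    theta = rng.uniform(0, 2 * np.pi, n_walkers)
    pos = r_start * np.column_stack([np.cos(theta), np.sin(theta)])
    active = np.ones(n_walkers, dtype=bool)
    reached = np.zeros(n_walkers, dtype=bool)

    for _ in range(max_steps):
        if not active.any():
            break
        pos[active] += step * rng.standard_normal((active.sum(), 2))
        r = np.hypot(pos[:, 0], pos[:, 1])
        # distance from each walker to the nearest flypaper centre
        d = np.min(np.hypot(pos[:, 0, None] - papers[:, 0],
                            pos[:, 1, None] - papers[:, 1]), axis=1)
        hit = active & (r <= r_banana)
        gone = active & ((d <= eps) | (r >= r_far))
        reached |= hit
        active &= ~(hit | gone)
    return reached.mean()

# Halving eps repeatedly should shrink the hit fraction only gently, in line
# with the inverse-logarithm behaviour described in (b).  (The step size
# should stay well below eps for the walk to "see" the papers at all.)
for eps in (0.08, 0.04, 0.02):
    print(eps, banana_fraction(eps=eps))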

[24 November 2018]

Fake News

In this awful Trump era we’ve learned that whether people consider something to be true or not may not depend very much on, well, whether it is true or not. Deeper than truth, we have learned, lies allegiance.

But this is not our first disobedience! Wise societies have long accepted that while it is appropriate to reason from facts in other areas, rationality becomes boorish and simplistic when the subject is religion. In that department, one must respect a deeper source of knowledge called “faith”. So, long before the current era, we’ve all had plenty of practice with the intellectual dishonesty needed for a world of fake news. Learning to shut off our minds when discussing religion was our original sin as rational citizens, the tasting of the tree of disknowledge.

[28 November 2018]