The great discovery of computer science in the 1970s was the distinction between P and NP. All problems in P were equivalently tractable, and all NP-complete problems (assuming P≠NP) were equivalently intractable. And this came with a prediction that seemed beyond doubt: as computers grew faster, the gulf between the two classes would only widen.
Whatever happened? Forty years on, P vs. NP is famous as a mathematical challenge, a million-dollar Clay problem, and a reference point for all kinds of theoretical analyses; but though computers have become millions of times faster, the gulf seems to have narrowed! Nowadays a long list of NP-complete problems are largely tractable in practice, including traveling salesman, maximum cut, edge coloring, frequency assignment, maximum clique, 3SAT, set cover, vertex coloring, knapsack, and integer programming. And only theorists nowadays regard all NP-complete problems as equivalent.
The explanation seems to have two parts. One is that NP-completeness is defined by worst-case analysis, and for many problems the instances arising in practice rarely fall in that worst-case zone, especially once they are softened up with good heuristics or randomization. 3SAT is one example, its solvers now a workhorse of practical computing, and even more striking is that the simplex algorithm is used all the time for linear programming even though it is, in the worst case, an exponential algorithm for a polynomial problem. The other is that NP-completeness is defined by exact optima, whereas if one relaxes the goal by a fraction, what used to be exponential may become polynomial. By such means a discrete problem can often be approximately solved, even in the worst case, via continuous algorithms such as linear or semidefinite programming. An example is max cut, sketched below.
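To make the max cut example concrete, here is a minimal Python sketch of the relaxation idea in the spirit of Goemans and Williamson: solve a semidefinite relaxation of the discrete problem, then round the continuous solution with a random hyperplane. It assumes numpy and cvxpy (with its bundled solver) are installed; the small random graph is purely illustrative.

    # Sketch: SDP relaxation of max cut plus random-hyperplane rounding.
    # Assumes numpy and cvxpy are available; the graph is a toy example.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)

    # Random weighted graph on n vertices (symmetric, zero diagonal).
    n = 12
    W = rng.random((n, n))
    W = np.triu(W, 1)
    W = W + W.T

    # SDP relaxation: maximize (1/4) * sum_ij W_ij (1 - X_ij)
    # over symmetric PSD matrices X with unit diagonal.
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    prob = cp.Problem(objective, [cp.diag(X) == 1])
    prob.solve()

    # Factor X = V V^T (clipping tiny negative eigenvalues from roundoff),
    # then round: the sign of V r for a random vector r splits the vertices.
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs * np.sqrt(np.clip(vals, 0, None))
    side = np.sign(V @ rng.standard_normal(n))

    cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n)
              if side[i] != side[j])
    print(f"SDP upper bound: {prob.value:.3f}, rounded cut: {cut:.3f}")

The relaxation and the rounding are polynomial-time computations, and the expected value of the rounded cut is within a constant factor (about 0.878) of the optimum; giving up that last fraction is exactly what buys the escape from exponential time.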
Nobody seems to talk much about this great surprise at the epicenter of computer science. This puzzles me.
[9 December 2018]