Julia is a programming language widely loved because it is fast — maybe ten times faster than MATLAB and Python. No surprise that people are excited, eh?
Well, yes, it is a surprise, in fact a paradox, an oddity of our times. Computers have been speeding up exponentially for a long time, and according to the Top500 list, each 15 years brings another factor of 10,000. Today’s top machines are rated at 10^16 floating-point operations per second, whereas fifteen years ago it was 10^12 and thirty years ago 10^8. So Julia’s speed is equivalent to just three or four years of progress up the Moore’s Law curve. Why is this exciting?
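The arithmetic behind that "three or four years" figure can be checked in a few lines of Python (a sketch of the essay's numbers, not code from it):

```python
import math

# Top500 trend assumed in the text: a factor of 10^4 every 15 years,
# so one factor of 10 takes 15/4 years.
years_per_factor_10 = 15 / math.log10(10**4)  # = 3.75 years

# Julia's claimed speed advantage over MATLAB/Python.
speedup = 10

# Equivalent time spent riding the Moore's Law curve.
equivalent_years = years_per_factor_10 * math.log10(speedup)
print(equivalent_years)  # 3.75 -- i.e. "three or four years" of progress
```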
I think the explanation is that these supercomputer benchmarks have lost contact with the machines most of us actually use. My machine runs at around ten gigaflops, a million times slower than the Top500 champion, like a high-end computer of THIRTY YEARS AGO. To get much faster, I’d have to learn a new way of computing. The top machines have millions of processors—“cores”—and it takes special methods to exploit them. Most scientists don’t bother.
The politically correct view is that in the end we will all learn these special methods, but I don’t believe it. I think the machines will do more of the adapting than we do: that ways will be found to let us program in the old ways and still benefit, even if imperfectly, from massive parallelism. The human brain isn’t optimal either, but it manages to use a billion processors.
[9 February 2015]