I agree with Dietrich Epp: it's a combination of several things that make GHC fast.
First and foremost, Haskell is very high-level. This enables the compiler to perform aggressive optimisations without breaking your code.
Think about SQL. When I write a `SELECT` statement, it might look like an imperative loop, but it isn't. It might look like it scans every row in the table to find the ones that match the specified conditions, but actually the "compiler" (the DB engine) could be doing an index lookup instead, which has completely different performance characteristics. Because SQL is so high-level, the "compiler" can substitute totally different algorithms, transparently apply multiple processors or I/O channels or entire servers, and more.
I think of Haskell as being the same. You might think you just asked Haskell to map the input list to a second list, filter the second list into a third list, and then count how many items resulted. But you didn't see GHC apply stream-fusion rewrite rules behind the scenes, transforming the entire thing into a single tight machine code loop that does the whole job in a single pass over the data with no allocation — the kind of thing that would be tedious, error-prone and non-maintainable to write by hand. That's only really possible because of the lack of low-level details in the code.
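To make that concrete, here's a minimal sketch of the kind of pipeline I mean (the function name and the numbers are mine, purely for illustration):

```haskell
-- This *looks* like it allocates two intermediate lists
-- (one from map, one from filter) before counting.
countBig :: [Int] -> Int
countBig xs = length (filter (> 100) (map (* 3) xs))
```

Compiled with optimisations, GHC's foldr/build fusion rules can collapse the whole chain into one loop over `xs` that never materialises the intermediate lists (`map`, `filter` and `length` in the standard library all participate in fusion).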
Another way to look at it might be… why shouldn't Haskell be fast? What does it do that should make it slow?
You say it sounds slow to "create functions on the fly at run-time", but if you look very carefully, you don't actually do that. It might look like you do, but you don't. If you write `(+5)`, that's hard-coded into your source code; it cannot change at run-time, so it's not really a dynamic function. Even curried functions are really just saving parameters into a data block. All the executable code exists at compile-time; there is no run-time interpretation (unlike some other languages that have an `eval` function).
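A quick sketch of what I mean (the names here are mine, just for illustration):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- Partial application generates no new code at run-time.
-- GHC builds a small closure pairing the compile-time code
-- for 'add' with the captured argument 5.
addFive :: Int -> Int
addFive = add 5

main :: IO ()
main = print (addFive 37)  -- 42
```

The closure is just a data block: a code pointer plus the saved argument. Nothing gets interpreted or compiled while the program runs.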
Think about Pascal. It's old and hardly anybody uses it any more, but nobody would complain that Pascal is slow. There are plenty of things to dislike about it, but slowness is not really one of them. Haskell isn't doing that much that's different to Pascal, other than having garbage collection rather than manual memory management. And immutable data allows several optimisations in the GC engine (which lazy evaluation then complicates somewhat).
I think the thing is that Haskell looks advanced and sophisticated and high-level, and everybody thinks "oh wow, this is really powerful, it must be amazingly slow!" But it isn't. Or at least, it isn't in the way you'd expect. Yes, it's got an amazing type system. But you know what? That all happens at compile-time. By run-time, it's gone. Yes, it allows you to construct complicated ADTs with a line of code. But you know what? An ADT is just a plain ordinary C `struct`. Nothing more.
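For example (a toy type of my own, purely illustrative):

```haskell
-- At run-time a value of this type is just a constructor tag
-- plus its fields: essentially a tagged C struct/union.
data Shape
  = Circle Double          -- radius
  | Rect   Double Double   -- width, height

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```

Pattern matching on it compiles down to inspecting the tag and reading the fields, much like a `switch` over a tagged union in C.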
The real killer is lazy evaluation. When you get the strictness / laziness of your code right, you can write stupidly fast code that is still elegant and beautiful. But if you get this stuff wrong, your program goes thousands of times slower, and it's really non-obvious why this is happening.
For example, I wrote a trivial little program to count how many times each byte appears in a file. For a 25KB input file, the program took 20 minutes to run and swallowed 6 gigabytes of RAM! That's absurd!! But then I realized what the problem was, added a single bang-pattern, and the run-time dropped to 0.02 seconds.
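I won't reproduce that program here, but the same failure mode shows up in something as small as a running sum (a sketch, not my original code):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Lazy accumulator: builds a chain of (acc + x) thunks as long
-- as the list before anything is evaluated. Slow, memory-hungry,
-- and it can blow the stack on a big input.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs

-- One bang-pattern forces the accumulator at each step, and the
-- whole thing becomes a tight, constant-space loop.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs
```

(With optimisations on, GHC's strictness analyser will often rescue the lazy version by itself, but it's unwise to rely on that.)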
This is where Haskell goes unexpectedly slowly. And it sure takes a while to get used to it. But over time, it gets easier to write really fast code.
What makes Haskell so fast? Purity. Static types. Laziness. But above all, being sufficiently high-level that the compiler can radically change the implementation without breaking your code's expectations.
But I guess that's just my opinion...