It seems like all my Python friends are playing around with Haskell these days, so it's been on my mind a lot. Since I'm a language guy, Haskell has always fascinated me. I wrote my first article on Haskell in 2006 (Everything Your Professor Failed to Tell You About Functional Programming), and I think it's a beautiful, interesting, awe-inspiring language. However, I've never really achieved full mastery of it. Since it's been on my mind so much lately, I thought I'd share some of my thoughts:
Haskell is well known for being very concise. The functionality / lines of code ratio is very good. However, if I compare Haskell code to Python code, one thing is clear to me. Haskell squeezes in a lot more function calls per line of code than Python does. We have this idea in programming that the number of lines of code that a programmer can write in a given time period is relatively constant regardless of language. However, in my experience, I can write lines of code in Python, Ruby, and Java much more quickly than I can in Haskell. Part of this is because you can squeeze a heck of a lot more function calls into a single line in Haskell.
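To make that density point concrete, here's a small sketch of my own (not from any benchmark): a one-line Haskell word-frequency function that chains five function calls in a single line, where Python would likely spread a loop or a Counter over several.

```haskell
import Data.List (group, sort)

-- Five calls (words, sort, group, map, and the tuple-building lambda)
-- packed into one line of pipeline.
wordCounts :: String -> [(String, Int)]
wordCounts = map (\ws -> (head ws, length ws)) . group . sort . words
```

For example, `wordCounts "a b a"` gives `[("a",2),("b",1)]` — one line of code, but a lot of thinking per line.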
It's pretty common to compare languages based on how many lines of code it takes to implement a particular algorithm. I think it'd be really interesting to compare languages based on how many symbols (such as words, operators, punctuation, significant whitespace, etc.) it takes to implement a particular algorithm. It'd also be interesting to compare languages based on their symbol / line ratio. Haskell and APL have very high symbol / line ratios. Assembly has a pretty low symbol / line ratio.
Some languages are easier for newbies to understand than other languages. Python and Java are easy to understand even if you don't know them. APL and Forth are impossible to understand if you don't know them. I think there are a lot of things that factor into whether a language is easy for newbies to understand. For instance, how close is the language to English? COBOL tries to be like English, so it's easy for newbies to understand. How many unusual symbols are used? APL uses a lot, so it's hard for newbies to understand. Does it follow normal, mathematical, infix notation? Forth doesn't, so it's difficult for newbies to understand. It can be difficult for newbies to understand Haskell code due to the liberal sprinkling of things like $, ., >>=, etc.
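For readers who haven't met those symbols, here's a tiny sketch of what $, ., and >>= do; the function names (doubledSum, shout, halve, quartered) are mine, invented just for illustration.

```haskell
import Data.Char (toUpper)

-- $ is low-precedence function application: f $ x  ==  f x,
-- which saves a pair of parentheses.
doubledSum :: Int
doubledSum = (* 2) $ sum [1, 2, 3]

-- . composes functions right to left.
shout :: String -> String
shout = (++ "!") . map toUpper

-- >>= ("bind") feeds one monadic result into the next computation.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

quartered :: Maybe Int
quartered = Just 12 >>= halve >>= halve
</test>
```

None of that is hard once you know it — `doubledSum` is 12, `shout "hi"` is "HI!", `quartered` is `Just 3` — but the symbols give a newbie nothing to pronounce or google.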
There's also something to be said for languages that tend to use overly concise names. Consider the name creat() in C--it hurts my brain not to put an e at the end of it! Slightly more verbose names can be very helpful for newbies. If I read some Python code, and I see "threading", I have a decent idea what that is about. If I read some Haskell code, and I see "TVar", I have no clue what that is about. TVars are so incredibly interesting, but I certainly wish they had a more newbie-friendly name!
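For what it's worth, TVar is short for "transactional variable", part of GHC's software transactional memory (STM) library: a mutable cell that can only be read or written inside an atomic transaction. A minimal sketch (the function name incrementTwice is my own):

```haskell
import Control.Concurrent.STM

-- Each `atomically` block runs as an indivisible transaction, so
-- concurrent threads can never observe a half-finished update.
incrementTwice :: IO Int
incrementTwice = do
  counter <- newTVarIO (0 :: Int)
  atomically $ modifyTVar' counter (+ 1)
  atomically $ modifyTVar' counter (+ 1)
  readTVarIO counter
```

Imagine how much easier that would be to stumble into if it were spelled something like "TransactionalVar".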
Haskell has fascinated me for years. However, as soon as someone mentions Category Theory to me, my eyes start to glaze over, because when it comes to programming, I am a linguist, not a mathematician. I remember Larry Wall said that he approached Perl from a linguistics point of view, and that he really liked the fact that Perl was a little messy, contextual, redundant, etc. After all, so is English!
When I learn a new language, the questions that come to mind are: what's the syntax, and what does that syntax do? Only after I truly appreciate the syntax and semantics can I appreciate the underlying model. For instance, I've always liked Python because of its syntax, but it took years for me to realize how cool Python was because of its "everything is dicts and functions, and you can hack stuff all you want" nature.
The last thing I want to cover is composability. Some languages and language features are more composable than others. For instance, it's really easy with structs, objects, algebraic data types, etc. to write something like a.b.c.d. This means an instance of A has a reference to an instance of B, which has a reference to an instance of C, etc. It's trivial to connect an object graph in this way. Functions are also very composable. In Haskell, (a . b . c . d) x basically means a(b(c(d(x)))). However, monads aren't quite as trivially composable. It has taken a non-trivial amount of time for my buddy John Chee to try to explain liftM and monad transformers to me. It'll be interesting to see how all that stuff turns out.
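Here's a small sketch of the contrast, with hypothetical names of my own (pipeline, bumped, tick): plain functions compose with a single operator, while monadic code needs helpers like liftM and lift to cross the layer boundaries.

```haskell
import Control.Monad (liftM)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

-- Plain functions compose with no ceremony at all:
pipeline :: Int -> Int
pipeline = negate . (* 2) . (+ 1)

-- Monadic values don't: liftM "lifts" a pure function into a monad...
bumped :: Maybe Int
bumped = liftM (+ 1) (Just 2)

-- ...and a monad transformer like StateT stacks one monad (State) on
-- top of another (IO); `lift` hoists the inner IO action up the stack.
tick :: StateT Int IO Int
tick = do
  n <- get
  lift (putStrLn ("tick " ++ show n))
  put (n + 1)
  return n
```

So `pipeline 3` is -8 and `bumped` is `Just 3`, but to run `tick` you need `evalStateT tick 0` — the plumbing shows through in a way that `a . b . c . d` never does.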
Of course, Haskell isn't the only language to have to worry about composability. I remember seeing a talk by one of the Scala compiler developers who said that the biggest source of bugs he had seen was from people mixing features of Scala together in ways that he had never considered before. Compiling each feature one at a time to work on the JVM is easy--getting all the features to play nicely with one another when they're used at the same time in unexpected ways is quite a bit harder.
So as I watch all of my friends learn Haskell, these are the things that have been on my mind. It's a fascinating language, so I always enjoy reading other people's perspectives on it!
By the way, thanks go to John Chee and Neuman Vong for patiently answering all my Haskell questions.