Thursday, February 15, 2007

Haskell: Syntax

Learning Haskell is hard. Those who disagree with this basic premise should stop reading now; they're obviously far more talented and intelligent than I am. However, wrapping your mind around the concepts is just one part of the difficulty. I believe that the syntax itself contributes to the difficulty of reading and understanding Haskell. I think some parts of Haskell's syntax are meant to be clever and flexible; however, that same flexibility makes them difficult to decipher.

Unfortunately, I fear the only ones who are going to read this post are the ones most likely to disagree with it, but I'm going to try anyway ;)

Function application

I think "f(a, b)" more clearly portrays function application than "f a b" does. Consider "map myFunc a". It'd be a lot visually clearer which function was getting called and which function was being passed if it were written "map(myFunc, a)".

The syntax for currying

I think the currying syntax is terrible. Consider "unwords . map showVal". This is a curry because map takes two arguments. Now consider "unwords . spam showVal". Is this a curry? Who knows? Unless you understand the type signature for "spam", you don't know.

I think it's important for the syntax of a language to help you decipher code even if you aren't familiar with all the libraries being used. For instance, Haskell does this by forcing you to use upper case names for types, and I think this was an improvement over C. I think you should have to *work* to make a curry happen, and it should be obvious to the reader what is happening. Perhaps "unwords . curry(map, showVal)" would be clearer.
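To make the ambiguity concrete, here's a minimal sketch of one way the types could shake out (showVal and the type I give spam are invented for illustration; the point is that nothing at the call site tells you any of this):

showVal :: Int -> String
showVal = show

-- If spam happens to have this type, then "spam showVal" is a partial
-- application: it still needs a list before it produces the [String]
-- that unwords wants.
spam :: (Int -> String) -> [Int] -> [String]
spam f xs = map f xs

-- The composition only typechecks because "spam showVal" is itself a
-- function ([Int] -> String here), but you have to know spam's type to
-- see that.
render :: [Int] -> String
render = unwords . spam showVal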

Function type declarations

Similarly, the syntax "f :: a -> b -> c" places the burden of currying on you when it's often the case that you aren't even thinking about currying. It'd be a lot clearer to just say "f :: (a, b) -> c". This doesn't stop you from using currying; it just frees you from having to think about it when you're not actually using it yet.

If you actually are writing a function that returns another function, say that! For instance, "f :: (a) -> ((b) -> c)". Now the reader will know that you plan on actually returning a function without the issue of currying getting in the way.
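For what it's worth, the Prelude already ships curry and uncurry for converting between the two spellings; here's a minimal sketch (add2 and add2' are made-up names):

-- The uncurried spelling the post is asking for.
add2 :: (Int, Int) -> Int
add2 (a, b) = a + b

-- The idiomatic curried spelling.
add2' :: Int -> Int -> Int
add2' a b = a + b

-- curry and uncurry (both in the Prelude) convert between the forms.
five :: Int
five = curry add2 2 3            -- apply the tuple version one argument at a time

alsoFive :: Int
alsoFive = uncurry add2' (2, 3)  -- apply the curried version to a pair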

Point free style

Point free style can be elegant. For instance, consider "h = f . g". This says that "h" is "f of g", and it's more elegant than "h x = (f . g) x".

However, consider "main = getArgs >>= putStrLn . (!! 0)". When point free style is used in the middle of a larger expression, perhaps with a curry (or in this case a section), a ">>=", and function composition, things can get out of control. I think it's often clearer to use an explicit argument so that the reader doesn't have to figure out where the invisible argument is being consumed. Even though the following is one line longer, it's much easier to read:
main = do args <- getArgs
          putStrLn (args !! 0)
Point free style is even more inscrutable if no type declaration is given. In such cases, the reader is forced to hunt down the right number and type of arguments.

The $ operator

"$" is function application. Consider "putStrLn $ args !! 0". This is the same as "putStrLn (args !! 0)", but you don't need the parenthesis. Aside from the fact that you're left wondering which has higher precedence, "$" or "!!", this is an improvement. However, if you have a very long line with a mix of "." and "$", things can get confusing. Worse, you end up reading the code from right to left. It seems strange to have to look at the operators in order to figure out whether to read from left to right or right to left. Sometimes, you're left reading from the inside outward. It's trivial to write a function application operator that has its arguments in reverse, and I think it improves readability. After all, that's exactly how UNIX pipes work, for instance, "cat a | b | c". Furthermore, that's how ">>=" works for monads. My earlier comment about excessive use of point free style still stands, though. If you get too many "invisible" variables, it may be better to use a "let" or a "where".

Record member access

I think C's "a.b" is clearer than "b a", especially when you have multiple levels. Consider, "a.b.c" vs. "(c . b) a".

Mixing monads

For new programmers, OOP can be mildly difficult to understand, but it's powerful. Monads are even harder to understand, and they're even more powerful. If you consider "obj.getHouse().getDoor().ringDoorbell()", playing with multiple objects at the same time seems to have linear mental complexity. However, playing with multiple monads at the same time seems to have exponential mental complexity, especially since you can only use the "do" syntax with one monad at a time. If you mix the State monad (for state) with an Either (for error handling), a Maybe (for NULL encapsulation), IO, etc., things get tricky. Hand waving a bit, if it were trivial to mix and match monads, there wouldn't be a need to put IORef in the library--the user could just mix IO and State on the fly.
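For the curious, here's a rough sketch of what that kind of mixing looks like with the mtl library's monad transformers (the App alias and every name in it are mine, and ExceptT is the modern name of the error transformer; this is illustrative, not the only way to stack these effects):

import Control.Monad.State (StateT, get, put, evalStateT)
import Control.Monad.Except (ExceptT, runExceptT, throwError)
import Control.Monad.IO.Class (liftIO)

-- State + error handling + IO, stacked by hand.
type App a = StateT Int (ExceptT String IO) a

step :: App ()
step = do
  n <- get
  if n > 2
    then throwError "counter overflowed"
    else do
      liftIO (putStrLn ("count = " ++ show n))
      put (n + 1)

main :: IO ()
main = do
  result <- runExceptT (evalStateT (step >> step >> step >> step) 0)
  print result   -- Left "counter overflowed"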

Conclusion

In general, I really like Haskell, so please have *some* mercy on me when you flame me to death for stating these opinions ;)

37 comments:

Anonymous said...

Shannon,

I'm learning Haskell at the moment too, and I have (or had) the same hangups on the syntax you did. In particular, the way that the function composition operator requires that pipelines of functions be read right to left throws me for a loop...

... so I "fixed" it.

(-->) = flip (.)

-- same as
-- pipe = h . g . f
pipe = f --> g --> h

Still don't need to mention how many parameters f, g, h, or pipe take -- but -- I can read it more easily.

I've also found that I can't read haskell source very well without having GHCi open so I can query about the types of things.

Philip

Neil Mitchell said...

Learning Haskell is hard, but some of the things you mention in your post aren't examples of being hard, they are examples of being different to C. Function application is one of them, I think.

Currying is really powerful, and very nice to have available so freely - it lets you think in higher order functions much more easily. It's very different, especially if you are used to thinking in a value world, but well worth it once your brain "wraps itself" to the appropriate level.

The use of . and $ is excessive - they are great once you get used to them, but hard for beginners to master. Personally I write code for me using $ and ., but if I am ever writing code for a more generalised audience (a talk, paper or mailing list post) I take care to avoid too much . and $.

Clean doesn't require a -> b -> c in type signatures; it's something you get used to, but not necessarily something that is essential. I guess one argument for it is that everything is first class.

Records aren't great in Haskell, everyone knows it but no one knows what would be better.

Monads are hard, that's just life I guess.

Haskell isn't a beginner language, but I think that starting from a clean slate, with no C background, you'd find C equally confusing.

Michi said...

I'll try not to flame you all too mercilessly, or for that matter, at all; however, I -do- have a few comments.

Please bear in mind that I was a hopelessly entrenched mathematician before I started learning Haskell. This "taints" my POV quite a lot.

1) I very much like the fact that the Haskell syntax is so streamlined; that we don't need to sprinkle parens and other stuff, just to get the compiler to comprehend what I'm writing. In the right corners of mathematics, this is precisely what happens too: I'm used to seeing fghx meaning f(g(h(x))) and similar things, so it's a good kind of adjustment.

2) Either you or I seem seriously confused as to the difference between currying and sections. Almost every single time you write 'curry', I find myself translating it to 'section', in order to understand what you are saying.

Currying, as I understand it, is using the isomorphism
Hom(A×B,C) = Hom(A,Hom(B,C))
and thus giving you a way to go between functions
f::(a,b) -> c
and functions
f::a -> (b -> c)

Sections, on the other hand, are the partial evaluation things: how, by supplying something of type a, we go from
f::a -> (b -> c)
to
f x::b -> c

Since we'll want
f::a -> (b -> c)
rather than
f::(a -> b) -> c
for precisely this reason: that it makes partial evaluation or sections nice to read (in algebraic language: it lets functions act from the left and not from the right), it only makes sense to make this the default, and require brackets for the other way around.

Now, the $ operator is neat precisely because it lets you state at exactly which points you want a non-standard associativity, without having to chase the code for where to drop your parens. Once this kind of local consideration gets solidified, it gets less scary.

All this said, I probably should not have read the post to begin with; I find reading Haskell extremely close, syntactically, to reading mathematics, and thus have enjoyed learning the language immensely.

albert bulawa said...

Is there any reason to learn Haskell in the first place? I have been looking at multiple languages but see not a single one which would be better for everyday use than Java. Yes, this despised Java. Of course, most of those have some strong points over Java, but Java has one strength that kills all the others: code reuse.

Cale Gibbard said...

You're right that lots of people who will probably disagree with you will end up reading this. (Thank Google blog search :) )

I agree with you in that these are things which take some time getting used to, but I disagree that they should be changed. :) It took me at least a few months before I really felt like I could do much by writing Haskell code, and about a year before it was comfortable. But I learned a lot along the way, and came to like lots of the things which were tricky at first. :)

The trick to working with points-free code is learning how to operate at the level of functions rather than thinking about the data that those functions are going to manipulate so much. In most languages, even in Haskell quite a lot of the time, you're writing functions by thinking about transforming data. Points-free programming is about writing functions by transforming other functions. That's part of the essence of functional programming, and one of the reasons why we like the composition operator so much, because it emphasizes that we're really working with functions.

You had a question about unwords . spam showVal -- what might the type of spam look like? Does it take more than one parameter? Well, in fact, we do know from this code, assuming that it typechecks, that spam takes two parameters, and gives a list of strings. How do we know that? Well, let's reason it out. Function composition, (.), takes a function of type (b -> c), and a function of type (a -> b), and results in a function of type (a -> c).
(.) :: (b -> c) -> (a -> b) -> (a -> c)
We know that
unwords :: [String] -> String
and it occurs as the first parameter of (.), so that means b = [String], and c = String.
We also know that
spam showVal :: a -> b
or, substituting in what we know,
spam showVal :: a -> [String]
So if showVal :: t, we have that
spam :: t -> (a -> [String])
which we can also write without the parens as
spam :: t -> a -> [String]

That is, it's a function of 2 parameters, returning a list of Strings.

Being really familiar with the language, I knew this right away, that sort of reasoning just happens automatically. A better breakdown of what really happened in my head goes something like:

- spam showVal is a parameter to (.)
- so it's a function
- so spam takes (at least) 2 parameters, one more than it's already been applied to.

Regarding
main = getArgs >>= putStrLn . (!! 0)
(By the way (!! 0) is called head.)
Apart from your translation into do-notation (which is perfectly acceptable) or flipping the order of composition, I find that this sort of situation is also made much clearer by using the backward form of (>>=), called (=<<):
main = putStrLn . head =<< getArgs

("Print the head of the list computed by running getArgs")

Another option is fmap/liftM, with either direction of bind:
main = putStrLn =<< fmap head getArgs
main = fmap head getArgs >>= putStrLn
These keep the data conceptually flowing in the same direction through the expression.

Regarding mixing (.) and ($), it helps to remember that (.) binds as strongly as possible to its parameters (only function application has precedence over it), whereas ($) binds as weakly as possible. So if I write some expression like:

f . g a b . h c $ x . y

it means:

(f . ((g a) b) . (h c)) $ (x . y)

The functions are applied to their parameters, then the results are composed, and the function on the left of the ($) is applied to the one on the right. It's rather common to have a function composition on the left of ($), rarer to have one on the right, but it happens.

One small thing you might have noticed from the above: I parenthesized g a b as ((g a) b). This is indeed what happens. Every function in Haskell, in a sense, is a function of just one parameter. It's just that some return other functions. :) Mindbending at first, but very useful when it comes time to use higher-order functions like map and fold -- often the function which you'll pass to them is obtained by partially applying one of your existing functions.
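A tiny illustration of that last point, using only the Prelude (add is a made-up name):

add :: Int -> Int -> Int
add x y = x + y

-- "add 1" is a partial application: a function still waiting for its
-- second argument, which is exactly the shape map wants.
bumped :: [Int]
bumped = map (add 1) [1, 2, 3]   -- [2, 3, 4]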

Regarding mixing monads, well, we do have a fairly effective tactic for that in the form of monad transformers. Unfortunately, most people are using monad transformers in a kind of sloppy way which doesn't really buy them all the benefits it could. I wrote an article about the right way to use them.

Even when used properly, monad transformers still have their downsides, the largest of which is that you have to write them separately, and there may be no way or more than one way to transformer-ise some monads. You do have to be aware that what you're doing when you mix monads like that is something that's really not commonly done in any other programming language: you're taking two small domain-specific languages and asking for them to be mixed. They could have very different effects, and very different control mechanisms in them, so it's not so easy.

Monad transformers are basically giving you a tool for quickly constructing small custom programming languages, embedded in Haskell.

So it's a hard and interesting problem with obvious benefits to solving it, which is exactly the territory Haskell was invented for. We're seeing some initial solutions to it, and even some design patterns surrounding them, which suggest there are probably better ways (or missing Haskell features!).

Sorry if this comment's a little long, you gave me a lot to chew on! If you haven't yet, come check out the IRC channel, #haskell on Freenode. There are lots of friendly people there and beginner questions are always welcome.

ray said...

It seems at least a few of your complaints stem from not being able to remember the precedence of operators. I only say that because I have that problem myself, which I currently deal with in the worst possible way (spamming parentheses everywhere). I've got to disagree with you about the function application and function currying syntax, though. Not to flame or anything, of course - I'm still learning myself, and these things are really mostly opinion anyway. :)

Shannon -jj Behrens said...

> Currying is really powerful

I agree completely! I only dislike the *syntax* that Haskell uses for currying.

Shannon -jj Behrens said...

> Either you or I seem seriously confused as to the difference between currying and sections.

From http://www.haskell.org/tutorial/functions.html:

Since infix operators are really just functions, it makes sense to be able to partially apply them as well. In Haskell the partial application of an infix operator is called a section.
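Concretely, sections of an infix operator look like this (Prelude only):

halve :: Double -> Double
halve = (/ 2)     -- fixes the right operand: halve 10 == 5.0

invert :: Double -> Double
invert = (1 /)    -- fixes the left operand: invert 4 == 0.25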

Currying == partial application, no?

Shannon -jj Behrens said...

> Is there any reason to learn Haskell in the first place? I have been looking at multiple languages but see not a single one which would be better for everyday use than Java. Yes, this despised Java. Of course, most of those have some strong points over Java, but Java has one strength that kills all the others: code reuse.

I'm always looking for more powerful languages--ones in which I can do more by saying less. I have a couple exercises that I code in every language I learn. I have found that Python lets me get more stuff done in less code and *with less head scratching* than any other language. Code reuse works just as well in other languages ;)

Shannon -jj Behrens said...

> You had a question about unwords . spam showVal...

Obviously, you're quicker than I am ;) Wow, what a lot of reasoning that required! I think, in fact, you've proved my point! What would happen if we didn't know the signature of unwords?

Shannon -jj Behrens said...

Cale Gibbard:

That was an awesome comment. Thanks!

Shannon -jj Behrens said...

Ok, I like thinking of all Haskell functions as taking one argument and possibly returning another function. However, I really wish there was a way to see in the syntax whether applying one more argument was going to give me a normal value vs. a function. I have this strange fear that I'm going to pass one variable instead of two and go around playing with a function when I think I have an int. ;) I just wish I could easily see in my editor how many parameters were left unapplied. I also think it's useful to know if a function is *specifically* returning a function vs. simply being partially applied.

falcon said...

The examples of '.' and '$' are spot on! F# has a little more sensible syntax with its "|>" pipeline operator.

Shannon -jj Behrens said...

> F# has a little more sensible syntax

That's so funny. When I defined a pipeline operator in my own code, I used "|>" too!

Cale Gibbard said...

Currying and partial application go hand in hand. It's a somewhat subtle distinction.

Currying is the process of treating functions of multiple parameters as if they only have one, and return a function which takes the rest.

Partial application is where you take something which is a curried function, and apply it to some, but not all of its parameters.

Regarding unwords, all that we'd lose is that we wouldn't know that spam was producing a list of strings. It would just have to produce something suitable for unwords to work with. (And yeah, notice that the reasoning I first provided was a lot more slow and methodical than the reasoning I really used at first, which is a faster way to look at it.)

One reason you don't have to worry *too* much about under-applying functions, and ending up passing around functions when you think you're passing around other values is that the type checker is absolutely going to tell you exactly where you've gone wrong.

While most of the IDEs for Haskell are not yet clever enough to point this out in realtime, putting red underlines beneath the type errors as you make them, there are a few which already are -- Visual Haskell being one of them. (Too bad it requires proprietary software and doesn't have really good screenshots on its website.) Don't worry, there are free software equivalents to this which are coming. The APIs which Visual Haskell uses to interoperate with GHC are free software, and there are some groups working on getting this capability into free IDEs.

Check out shim, which is one of them. :)

Cale Gibbard said...

Oh, by the way, sorry that I wasn't around when you popped up on IRC. I was taking a nap.

Shannon -jj Behrens said...

> One reason you don't have to worry *too* much about under-applying functions, and ending up passing around functions when you think you're passing around other values is that the type checker is absolutely going to tell you exactly where you've gone wrong.

Yep.

> Check out shim, which is one of them. :)

Ah, interesting. I'm enough of a Vim nut to add Vim support, but not quite enough of a Haskell nut to find enough time among all my other projects ;) *sigh*

> Oh, by the way, sorry that I wasn't around when you popped up on IRC. I was taking a nap.

Thanks again for all your comments. You've been quite helpful!

scruzia said...

Hi, JJ! I agree with you mostly about currying/sectioning, but I'd seek a middle ground were I to change the syntax. IMHO, the word "curry" is too heavyweight, and the total absence of a clue (i.e., current Haskell syntax) is too lightweight. I kinda like the notion of using "_" or comma somehow.

Haskell's ($) was easy for me to get used to; I liked it right from the start.

I totally agree with JJ about points-free notation. I loathe it except in very very simple or very very uniform usages. IMHO the overuse of points-free notation in the Haskell community is one of the worst aspects of that community. I believe that in most cases it's a matter of choosing to write code in a keystroke-saving way instead of writing code to be understood. A mathematician's comment above underscores this unfortunate attitude:

"I very much like the fact that the Haskell syntax is so streamlined; that we don't need to sprinkle parens and other stuff, just to get the compiler to comprehend what I'm writing."


THAT's a WRONG-HEADED way to look at it!!! You should be putting in the extra parentheses to get the HUMAN READER of your program to understand what you're writing. Write your code as if you're teaching another programmer how your code solves the given problem, instead of thinking of it as instructions directing the machine to do the operation that you have in mind.

Ahem. To continue...

... those have some strong points over Java, but Java has one strength that kills all the others: code reuse.


Can you say "Scala"?

Thanks to Cale for defending points-free -- nice explanation, but I still find it abhorrent and anti-helpful when people write much code that way.

I do think that operator precedence in Haskell is one of the harder things to get used to for beginners. Here's a feature I'd like to see in a Haskell IDE: visually parenthesize code (upon request, or by default for novices) by putting the background of the innermost parts in one shade, and holding the next outer parts together with a slightly lighter shade, fading out to match the background color at the outermost level. Do this in particular with the unfamiliar operators -- the ones that do not exist in the more popular languages.

Shannon -jj Behrens said...

> Hi, JJ! I agree with you mostly about currying/sectioning...I totally agree with JJ about points-free notation

It's nice to hear I'm not alone.

> THAT's a WRONG-HEADED way to look at it!!! You should be putting in the extra parentheses to get the HUMAN READER of your program to understand what you're writing. Write your code as if you're teaching another programmer how your code solves the given problem, instead of thinking of it as instructions directing the machine to do the operation that you have in mind.

I couldn't agree more! As a mathematician, you can afford to scrutinize some funny symbols. As a programmer, I'm faced with training new engineers how to maintain literally hundreds of thousands of lines of code. I care about readability even more than productivity, hence my post.

I always pretend I have someone looking over my shoulder reading when I'm programming. I think it helps.

Anonymous said...

If you don't like the syntax then don't use it. There are *plenty* of languages that have the f(a,b) nonsense.

And it isn't clearer. If you come from a C/C++/Java background maybe, but if you come from e.g. smalltalk the f(a,b) syntax looks alien.

Anonymous said...

I always hate when people say "I think language X needs these changes" and then list changes that basically take away everything that makes the language powerful.

There are thousands and thousands of languages out there, no reason to make every one of them hideous like C/C++/Java.

And the way currying works is wonderful. Haskell syntax is just beautiful. I can't imagine someone choosing yuck(x,y) over it.

Matth said...

I think you've struck to the heart of a lot of serious problems with Haskell, and I'm glad someone had the guts to say this stuff.

Background: I've done a lot of mathematics and theoretical computer science, and I really love haskell for its theoretical elegance and for what it's taught me about category theory.

BUT

I've also done a lot of software development, and I really find it hard to justify haskell for everyday work for reasons very similar to yours.

As I see it, the problem is a combination of these factors (forgive me if I generalise a little)

- Haskell gives you a massive amount of flexibility in the syntax - in particular the flexibility to shoot yourself in the foot and make code overly dense, cryptic and unreadable. I share some of your bugbears, and will add the following: overuse of short, cryptic variable names, and overuse of custom operators which end up looking like line noise due to haskell's lexical parsing rules

- Haskell is mainly used by very smart people who value conciseness and theoretical elegance over practical readability, conventions and consistency

- Those developing haskell are often really interested in type systems and developing new awesomely clever extensions to Hindley-Milner (don't get me wrong, I find these things incredibly cool too, in a theoretical way - but still). The problem with this is that type system constraints have an excessive influence on the syntax, semantics and the programming conventions of the language - whereas to be truly useful for software development, it should be the other way round.

- Monads are Just Not Good Enough as an abstraction. They're very opaque and they don't compose well. You've hit the nail on the head with this - it's not that beginners are too stupid to grok monad transformers, it's that monad transformers aren't the right answer for structuring large programs. Most people are too in awe of their cleverness to really be blunt about this, but it is a big flaw that Haskell needs to overcome. No I don't have an answer myself, but more people should be looking for one.

Perhaps I've been excessively harsh there, but I say it out of love - I really want to see haskell do well!

Michi said...

scruzia, jj: The reason I talked about getting the compiler to understand what I'm writing is that I already am convinced that anyone decently up to speed reading my code will know what it means and does from just the amount of parenthesizing and syntactic help I put in there. Any more syntactic hints required will end up being in order to get it to go through the compiler as valid code, and Haskell has a very comfortable balance when it comes to that.

However, as I stated in my previous comment, and as might be blindingly obvious from the rest of my internet-based writings, I am writing this from the point of view of a mathematician who does programming and software design as well; not a programmer with possibly a slight interest in mathematics. This probably is a very important distinction, for all the subtlety it carries, but it is most definitely at the core of why I ended up in the Haskell community.

Shannon -jj Behrens said...

> I think you've struck to the heart of a lot of serious problems with Haskell, and I'm glad someone had the guts to say this stuff.

Helpful comments. Thanks!

Shannon -jj Behrens said...

It's interesting to note that Erlang isn't too far away from Haskell, and it follows many of my syntax suggestions. For instance:

* Function application is "lookup(Key, nil)".

* When you export a function, you show in the syntax how many arguments there are: "fac/1".

* Most function definitions don't look like curries, but you can create a function that behaves like a curry "Adder = fun(N) -> fun(X) -> X + N end end."

Remember, in general, I really like Haskell's semantics. I'm just looking for ways of offering those same semantics using syntax that is *intrinsically* more informative to someone who might not be familiar with all of the libraries involved.

Imam Tashdid ul Alam said...

hi JJ,

I agree with some...

(.) and ($) are really really annoying for human readers.

but also, the record issue was right on.

what we really need is a very, very friendly IDE. cheers.

Darrin Eden said...

Maybe a new syntax front end: Liskell?

Steve Downey said...

currying != partial application

They're opposite sides of the same coin, though. Currying is the property that a function of several arguments can be treated as a composition of several functions of one argument. Partial application is using that property to turn an N argument function into an N-1 argument function with the first argument bound.

So, if you have
> f :: a -> b -> c

Is that a function that takes an a and a b and returns a c? Or is it a function that takes an a and returns a function that takes a b and returns a c?

The answer is Yes. Because it's both, depending on how you choose to use it. At least notionally, the first argument is consumed, a function returned, which consumes the second argument, which returns the result. But that's an as-if model; it certainly doesn't have to be implemented that way.

Still, if it really makes no sense to ever allow partial application, you could write

> g :: (a,b) -> c

That's a function that takes a tuple (pair) of an a and a b and returns a c.

Anonymous said...

I wanted to learn Haskell a while ago, but I found that the syntax is unreadable and kind of gave up.

Shannon -jj Behrens said...

> Maybe a new syntax front end: Liskell?

My brain just exploded.

Shannon -jj Behrens said...

> currying != partial application

Thank you for your correction, but I think my point stands. My point is that the syntax for currying is unnecessarily burdensome. I *know* I can partially apply a function; I shouldn't have to write syntax that reminds me of that every time I write a function.

Anyway, I think (the more broad) partial application is more interesting than (the more specific) currying. Currying suggests you must partially apply the arguments in order, whereas I think it's sometimes useful to apply them out of order, a la Python's "partial" function.

I feel like someone somewhere along the line said, "currying is such a cool idea we should bake it into the syntax." I disagree. Tail recursion is really cool, but it's not baked into the syntax (fortunately).

Miles said...

I think the real problem with pervasive partial application is that it disallows the possibility of variadic functions and the possibility of functions with different type signatures sharing the same name, both of which are seriously useful features. So's pervasive partial application, of course: it's a difficult choice.

I too am a mathematician, albeit one in a different field to Michi: I don't have much problem reading Haskell's space-means-application convention (though I find its operator precedence very confusing, and as for do-notation, ugh!), but I'd be happier with parens for application, and space for composition, so f g h (x) means f(g(h(x))). But this is probably a minority opinion :-)

duschendestroyer said...

like the parentheses in lisp (which i find horrible) the haskell syntax is what makes the language. i love it as lisp developers like their parentheses. for me it makes everything so much easier.

Robert said...

I suggest that you avail yourself of liftM, liftM2 etc and mapM and foldM whenever possible. They're especially useful for Maybe (or when you use Monad m => ... to represent that the receiving function chooses whether to get a Maybe or an exception or an Either String), since do and >>= don't represent the natural way of thinking about these for me. Naturally, for I/O do is the better option.

Anonymous said...

As a Haskell newbie, I find the concise syntax enhances both readability and productivity. And a major bonus over Scheme.

Shannon -jj Behrens said...

Agreed. However, that isn't to say that I think Haskell syntax is perfect.

Anonymous said...

A few points:

* You usually shouldn't have to think about currying, at all, except in specific circumstances. It is only when you deal with /pairs/ (a,b) as a single argument in a function that expects "two" arguments that currying is an issue. Partial function application deals with the "regular" f :: a -> b -> c case transparently. f a b = (f $ a) $ b

* Function application ($) is a big deal, especially when used with function composition, and partial function application (at least when the partial function is "naturally" typed the way you want). This lets you do things like:

(^2) . (+2) . (sqrt) $ 4

The $ is the "prompt" that takes you from a points free style to a pointed style. If you can handle a command line interface, you can write in this style. Sometimes the type inference engine isn't strong enough to infer a type unless you use an explicit argument, so knowing this style is helpful.

* Functors are important, for similar reasons. We can use the <$> syntax instead of fmap, to write things like:

(^2) . (+2) . sqrt <$> [1,2,3]

* Anybody trying to tell you that there is a substantive difference between software development and mathematics is trying to sell you something. Software development is about finding the normal forms for the data structures and computations you wish to encode, and organizing the code in ways that don't obscure the normal form. This is first year OO textbook stuff. On the other hand, mathematicians have been doing exactly that for about 10,000 years, since the rise of the Indus Valley Civilization. We know a thing or two about finding minimal expressive normal forms, and organizing them. Don't patronize us.

In fact, it is rather amazing that the alphabet of "funny symbols" is so small. Part of learning to do mathematics is to learn to ignore the funny symbols when it isn't necessary to understand them, and look at what they are connecting. That is far more informative.

For example: what's important about the expression

"shirts_in_factory_a + shirts_in_factory_b"?

There is only one sensible way to combine these quantities. Clearly, this will be the expression that does it. Now, if you really need to dig deeper, you are welcome to (and hopefully capable of) digging into the semantics of (+) to verify it's doing what we expect (obviously, that should be unnecessary in the case of addition).

Remember that the typical software development cycle spends far more time in the maintenance phase than the writing phase. Expressing minimal normal forms makes "refactoring" trivial, especially with the help of the type system (which will basically act as a checklist of tasks to work through until the job is done).

* "What would happen if we didn't know the signature of unwords?"

You type ":t unwords" into GHCi.

You are acting as if this is somehow causing complexity. You would have the same problem in any other language. You are either composing functions of the right types with each other, or you have a bug. (Or you are writing in an imperative-OO style, instead of functional-OO, which is just bad for your sanity. Humans can only keep about 7 things in short term memory. You basically have 7 registers to do your mental computations with. There's no point filling them with the values of variables when you could be thinking about their types instead)