I don't know why they call C++ strict and Haskell lazy, it's the wrong way round.
It's easy to write a Haskell program that inputs a list and starts doing work on it before the user has even finished typing it in. This is the default in Haskell. It's eager to get stuff done. The norm in C++ is to wait until the user has finished entering the input before doing any work. Haskell is great if you want to compose a sequence of operations on lists: you don't have to wait for the entire operation on the first list to finish before starting work on the next, because Haskell just can't wait to start evaluating the final result.
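The composition point can be sketched outside Haskell too. Here's a minimal Python analogue (Python's generator pipelines are consumer-driven in much the same way; the pipeline below is my illustration, not from the original post). It composes a filter and a map over an infinite input, and the consumer pulls only what it needs:

```python
from itertools import count, islice

# A pipeline over an infinite "list": nothing runs until a consumer
# (islice below) demands elements from the final stage.
evens = (n for n in count(1) if n % 2 == 0)  # like: filter even [1..]
doubled = (n * 2 for n in evens)             # like: map (*2)

# Only as much of the infinite input is examined as the consumer asks for;
# the second stage never waits for the first stage to "finish".
first_three = list(islice(doubled, 3))
print(first_three)  # → [4, 8, 12]
```

From the consumer's point of view the pipeline looks eager: results start arriving immediately, even though the input is unbounded.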
The Haskell laziness page (http://en.wikibooks.org/wiki/Haskell/Laziness) discusses how thunks are used to implement laziness. But really a thunk is a mechanism that allows Haskell to be eager. Rather than try to evaluate code that could cause a computation to block, Haskell puts that stuff in a thunk so it can get on with evaluating the part that's going to be most productive from the point of view of any consumers further down the evaluation chain.
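Stripped to its essentials, a thunk is just a delayed computation that is evaluated at most once and then memoized. A minimal sketch in Python (the `Thunk` class and the evaluation counter are mine, purely for illustration, not how GHC actually represents thunks):

```python
class Thunk:
    """A delayed computation, forced at most once, then memoized."""
    def __init__(self, compute):
        self._compute = compute  # zero-argument function, not yet run
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._compute()
            self._done = True
            self._compute = None  # drop the closure once evaluated
        return self._value

evaluations = 0
def expensive():
    global evaluations
    evaluations += 1
    return 42

t = Thunk(expensive)
before = evaluations            # nothing has run yet: the work sits in the thunk
a, b = t.force(), t.force()     # the consumer drives evaluation; second force is free
after = evaluations
print(before, a, b, after)      # → 0 42 42 1
```

Until a consumer calls `force`, the computation is pure potential; once forced, the result is cached, so work that might block or loop is postponed until (and unless) someone actually needs it.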
It's all a matter of perspective. If you're a consumer of output from a Haskell program it looks eager but if a Haskell program is the consumer of your input it looks lazy.
(I was motivated to write this because of the tweet: https://twitter.com/CompSciFact/status/572750815800238081 )
Update: Repost with different permissions.
"According to Larry Wall, the original author of the Perl programming language, there are three great virtues of a programmer; Laziness, Impatience and Hubris
"1. Laziness: The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful and document what you wrote so you don't have to answer so many questions about it.
"2. Impatience: The anger you feel when the computer is being lazy. This makes you write programs that don't just react to your needs, but actually anticipate them. Or at least pretend to.
"3. Hubris: The quality that makes you write (and maintain) programs that other people won't want to say bad things about." (Mar 4, 2015)
The good old argument/result duality :) (Mar 4, 2015)
Laziness is just a term that is used because most people are too lazy to learn what non-strict means. (Mar 11, 2015)
Indeed you don't... :D (Mar 15, 2015)
I think we have misleading and confusing terminology. And that's pretty lazy of us. And there is a problem that knowledge and education about these techniques, technologies and tools is not widespread. Also pretty lazy of us.
;-) (Mar 15, 2015)
There is a technical sense in which what you say is true: in sequent-calculi based presentations of call-by-value versus call-by-name evaluation order (keywords: mu/mutilde, duality of computation, L), one remarks that the co-terms (evaluation contexts) of call-by-name languages are linear, that is they force their hole exactly once. To get a strict evaluation strategy, one adds the mutilde construction that corresponds to (let x = e1 in E[_]); this one delays the co-term, in the sense that it shifts the control of evaluation from the context to the term (e1, which must reduce to a value before E continues). So by-name contexts are linear (which is close to the view of "consumer strictness" you have here), and by-value contexts may be delayed or dropped (if the term part loops forever without returning control). (Jun 13, 2017)