Go has a pretty sensible concurrency model. I don't completely agree that "Concurrency makes parallelism (and scaling and everything else) easy." because pure deterministic parallel programming models make it even easier, but still, Go is at the level of Erlang here and that's a good thing. Also it's nice to see the correct terminology being used and promoted!

(another minor nit: people often seem to say that asynchronous message-passing is "based on CSP", but CSP has synchronous message passing, which is a pretty fundamental difference)
I do keep waiting for the message-passing concurrency nuts to notice what the MPI folks figured out more than a decade ago: it's incredibly hard to scale to hundreds or thousands of CPUs with explicit message sends and receives. But for dealing with asynchrony it's fantastic (even if it means that folks keep re-inventing monads without realizing it).
The main thing Go lacks compared to Erlang, though, is error handling when something goes wrong. The primitives Go offers for cleanup in that case are much weaker than their Erlang counterparts.

I also agree that concurrency is not about speedups, at least not in the Erlang case. The goal is to get a decent speedup, not the best possible one. For the latter you may want some kind of deterministic parallelism, so the program is still deterministic but runs faster.
As someone who's written a few thousand lines of concurrent Go, I strongly disagree that its concurrency model is sensible.
+Robert Harper First, I find the synchronous channel model very error-prone. Deadlocks are common because you have to know which threads are writing to which channels, and you need to make sure all the reads and writes happen in the correct order. Channel buffering doesn't help; it just makes the deadlocks harder to diagnose because they happen less predictably.

Second, channels themselves are very slow, which means that any application with performance requirements will need to use mutexes instead. In my case, as I've continued to optimize the application, the code has begun to look an awful lot like C++ with all the semicolons removed.

Third, Go has no equivalent of type parameters or templates, so it's not practical to implement higher-level constructs like mvars. Most Go code therefore cannot be directly reused, and ends up being copy-pasted all over the place.

Fourth, the Go runtime tries to auto-manage OS threads instead of just using a fixed number like GHC's runtime. This interacts poorly with other aspects of the language, like the FFI, and can cause unbounded growth of OS threads. In one case, I noticed that my application was running slowly, and upon investigation discovered the runtime had spawned ~30,000 POSIX threads to service barely ~100,000 Go threads.
You can add an element of asynchronicity to CSP by adding "buffered channels" which can be implemented just as a sequence of processes that read from their input and write to their output. (Without further extensions, such as the ability to check for a waiting message, this doesn't allow arbitrary backlogs though.)
+John Millikin ah, so I misunderstood and thought channels were asynchronous, thanks for the clarification. With synchronous (or fixed-size buffered) channels I'm not terribly surprised it's hard to program with them.
It sounds like what you really want is ConcurrentML, which had everything that Go has, only better, decades ago.
+Charles Stewart , just started reading the Bawden paper. Really interesting! Anything else I should read after, that's happened since '93? Google scholar shows only a handful of citations. (sorry to hijack the thread)
It's great to see concurrency orientation spread to modern high-performance languages, and I expect to see great distributed computing frameworks emerge around it. It's a great thing on the server side, making Go the most interesting language since Erlang. But I wonder a lot about how we will handle SIMD parallelism (GPUs/CUDA/CL) and how it could fit into general concurrency-oriented languages in the future. The GPU is the most capable core for a lot of tasks, and is adding more (SIMD) processors very quickly. Big-data statistical crunching would be one obvious use. But when you try to use the same language on the client (phones, obviously!) as well, low-latency garbage collection and the ability to utilize the GPU end up being a big deal, because the new clients are basically mobile signal processors: convolutions, 2D FFTs, etc.