That's just the thing, it's not using a ton of memory or CPU. Under normal operation (at a concurrency of about 5000), it uses around 50-70MB of memory and 7-10% CPU. No big deal. When I increase the concurrency, it jumps up to those numbers for just a second, then drops to 22MB of memory and 0% CPU. It's seemingly not doing anything.
As I mentioned, this is a task runner. It's not queuing any one function over and over again (that would be silly); it queues many different functions during its operation. To repro, I have the actual task just call `process.nextTick(taskCallback)`, so the task itself does absolutely nothing. All the functions surrounding the task are things like timing functions, wrappers to make sure everything is asynchronous, and whatever the async library does to run tasks.
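Roughly, the repro has this shape (a simplified, self-contained sketch -- a hand-rolled queue stands in for the async library's queue, and all names here are illustrative, not my actual code):

```javascript
'use strict';

// Run `total` no-op tasks with at most `concurrency` in flight at once.
// Each "task" does nothing: it just defers its callback by one tick,
// so only the queue/wrapper machinery itself is being exercised.
function runTasks(total, concurrency, done) {
  let started = 0;
  let finished = 0;

  function taskDone() {
    finished++;
    if (finished === total) return done();
    next(); // a slot freed up; try to start more tasks
  }

  function next() {
    // Keep the in-flight window (started - finished) at the concurrency cap.
    while (started < total && started - finished < concurrency) {
      started++;
      process.nextTick(taskDone); // the task body is literally nothing
    }
  }

  next();
}

runTasks(100000, 5000, () => console.log('all tasks completed'));
```

With a plain queue like this, everything drains to completion; in my app, the equivalent run stalls after the concurrency spike.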
Editing memory limits did not help... which makes sense when you look at the actual memory usage.
In case it matters, the app does use multiple threads -- one as the master, and a configurable number of worker threads. I can reproduce this easily with a single worker thread. There is/was some IPC happening between the threads, which is what I originally suspected. However, I did two things to rule that out.
1) When I look at what is actually executing, I can see that the attempted IPC messages do get through, and tasks do get started in the worker thread. The callbacks of those tasks are never called, though.
2) I rewrote the IPC bits to use a named pipe instead of an IPC channel -- one named pipe per thread, to be more exact. This exhibited exactly the same behavior: it got stuck at the same point in execution.