Explaining Web Audio's new Audio Worker

For my #webaudio peeps:

Root66 just asked on the Web Audio IRC channel (irc.freenode.net ##webaudio):
> cwilso: could you explain AudioWorkerNode (http://webaudio.github.io/web-audio-api/#dfn-audioworker) in a nutshell and why I should be super excited?

Why, yes, yes I could!  :)

First, I need to note that THIS HAS NOT BEEN IMPLEMENTED YET, AND IS STILL BEING SPEC'ED.  So don't try using it yet.  :)

I've long been a detractor of ScriptProcessorNode - the main reason being that ScriptProcessor's code runs in the main thread.  Remember, Web Audio processing is happening in a separate thread from layout, rendering, JS, and all the other stuff that is going on.  On multi-core host machines (i.e. nearly all modern machines), this means the system may be able to let audio processing run unaffected by lots of stuff going on in the main thread - meaning, it won't have to glitch the audio just because your visual rendering took too long.

In addition to the main thread potentially being busy, it's important to note this means the audio processing chain has to hop over a thread boundary - twice! - in the normal course of events.  When a ScriptProcessor's output is needed, the audio thread has to call asynchronously into the main thread, and that call will take some time to come back.  Since the audio thread can't just block and wait for the response, ScriptProcessors insert latency - namely, they buffer up some amount of input data, then initiate the asynchronous call across threads - and then have to insert MORE latency to get the response back.  Even an ideal host machine - like a powerful OSX machine - will likely have issues with a buffer smaller than 512 samples - so using a ScriptProcessor to do ANYTHING will insert at least 2*512 samples (=23ms at 44.1kHz) of latency.  In practice, you usually have to use a bigger buffer size (the default is 1024).  So you'll be putting nearly 50ms of latency into your audio chain, which is going to be pretty noticeable.
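To put numbers on that, here's the arithmetic as a quick sketch (just math, not part of the Web Audio API - the function name is mine):

```javascript
// A ScriptProcessorNode buffers one full buffer on the way into the main
// thread and one more on the way back out, so the minimum latency it adds
// is two buffers' worth of samples.
function scriptProcessorLatencyMs(bufferSize, sampleRate) {
  return (2 * bufferSize / sampleRate) * 1000;
}

console.log(scriptProcessorLatencyMs(512, 44100));   // ~23ms best case
console.log(scriptProcessorLatencyMs(1024, 44100));  // ~46ms with the default buffer
```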

By contrast: an Audio Worker is a Web Worker that runs _in the audio thread_.  This means that 1) it won't be sharing the thread with your layout, rendering and other JS, and 2) there's no thread hop in the audio chain, so it can be called synchronously - and with zero additional latency!

If that wasn't exciting enough, I managed to design in a system by which you can add your own AudioParams to your Audio Worker nodes, so you can easily replicate native nodes.  That, in the end, is the goal here: making it so you can implement JS processing nodes in your own code that are first-class citizens (in true Extensible Web Manifesto form)!
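For a taste of what this might look like - remember, this is a sketch based on the current draft, so the method names, the event shape, and my hypothetical "gain-worker.js" script could all change before anything ships:

```js
// Main thread - NOT implemented in any browser yet; draft API only.
var node = context.createAudioWorker("gain-worker.js", 1, 1);
node.addParameter("myGain", 1.0);  // exposes node.myGain as a real AudioParam
source.connect(node);
node.connect(context.destination);

// gain-worker.js - runs in the audio thread, called synchronously with no
// thread hop; parameter values arrive as per-sample arrays, like native nodes:
onaudioprocess = function (e) {
  var input = e.inputs[0], output = e.outputs[0];
  var gain = e.parameters.myGain;
  for (var channel = 0; channel < input.length; channel++)
    for (var i = 0; i < input[channel].length; i++)
      output[channel][i] = input[channel][i] * gain[i];
};
```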