Progress (on the Kinect side)! Stereo Kinect (v2) capture (two machines) and playback in WebGL: 50 frames at 4 fps in a browser (limited by one of the acquisition machines; 30 fps is possible with proper hardware). This would scale up in number of Kinects, if you can time-align the scans...
Link to the interactive demo (orbit controls in the browser): https://1f52485fc06319984e8ac9b7ea8e4ebb594cc297.googledrive.com/host/0B9tqZ1GqmRT3UzJ1bHhiaE5TbEU/orbit.html
Each "frame" is two PNG images: depth and color (examples included).
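One way to keep a depth frame intact through an ordinary 8-bit-per-channel PNG is to split the 16-bit Kinect depth value across two of the color channels. This is just a sketch of that idea (numpy, assuming depth in millimetres and a low-byte/high-byte split into R and G), not necessarily the exact packing used here:

```python
import numpy as np

def pack_depth_rgba(depth_mm: np.ndarray) -> np.ndarray:
    """Pack 16-bit depth into the R and G channels of an RGBA image,
    so it survives 8-bit-per-channel (lossless) PNG encoding."""
    depth = depth_mm.astype(np.uint16)
    rgba = np.zeros((*depth.shape, 4), dtype=np.uint8)
    rgba[..., 0] = depth & 0xFF          # low byte  -> R
    rgba[..., 1] = (depth >> 8) & 0xFF   # high byte -> G
    rgba[..., 3] = 255                   # fully opaque alpha
    return rgba

def unpack_depth_rgba(rgba: np.ndarray) -> np.ndarray:
    """Inverse: reassemble 16-bit depth from R (low) and G (high)."""
    return rgba[..., 0].astype(np.uint16) | (rgba[..., 1].astype(np.uint16) << 8)
```

The same unpacking arithmetic can then be mirrored in a GLSL fragment shader when the PNG is sampled as a texture.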
I'm ready for color in Unity on Tango... I'm learning how to store and transfer the depth quickly (as a PNG here, where the RGBA channels store depth for Three.js to draw with GLSL/WebGL shaders). Next I need to convert the depth scans/point clouds into a depth image like this, with interpolation to fill in the blanks, I think (perhaps guided by edge detection on the color image?).
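For filling in the blanks, one simple option is to repeatedly fill each missing pixel with the mean of its valid 4-neighbours, which grows valid depth into the holes. A rough numpy sketch of that idea (assuming 0 marks a missing depth sample); a real pipeline might instead use a proper inpainting routine, possibly masked by color-image edges:

```python
import numpy as np

def fill_depth_holes(depth: np.ndarray, max_iters: int = 16) -> np.ndarray:
    """Iteratively fill zero-valued (missing) depth pixels with the
    mean of their valid 4-neighbours -- a crude inpainting pass."""
    d = depth.astype(np.float32)
    for _ in range(max_iters):
        holes = d == 0
        if not holes.any():
            break
        # Shift the image in the four cardinal directions, zero-padded.
        up    = np.pad(d, ((0, 1), (0, 0)))[1:, :]
        down  = np.pad(d, ((1, 0), (0, 0)))[:-1, :]
        left  = np.pad(d, ((0, 0), (0, 1)))[:, 1:]
        right = np.pad(d, ((0, 0), (1, 0)))[:, :-1]
        stack = np.stack([up, down, left, right])
        valid = stack > 0
        counts = valid.sum(axis=0)
        sums = np.where(valid, stack, 0).sum(axis=0)
        fill = np.divide(sums, counts,
                         out=np.zeros_like(sums), where=counts > 0)
        # Only overwrite holes that have at least one valid neighbour.
        d = np.where(holes & (counts > 0), fill, d)
    return d.astype(depth.dtype)
```

Each pass grows the filled region by one pixel, so a handful of iterations closes small dropouts while leaving large empty regions (e.g. beyond the sensor range) mostly untouched.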