Footage from danceroom spectroscopy development workshops at Bristol’s Arnolfini on 18 Mar 2012.
Some groundwork and serious thinking over the last day puts me in a position where I can state what the problems are and some ways they might be overcome.
Missing Verts on Edges
Where meshes from 2 or more cameras pointed at the same object do not quite match up, you get a gap in the surface of the mesh. This is especially prominent at certain angles and on rounded corners. The missing faces can be seen clearly in Fig 1.
As you can see in Fig 2, at some angles I'm getting triangles for the same area on the surface of the object from both cameras, and doing a simple dot(cameraAngle, faceNormal) to cull faces that belong to the 'less dominant' camera just does not cut it. It needs more involved contention checking to discover which triangles are in contention and then choose the most dominant one.
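For reference, here is a minimal sketch of the naive dominance test described above (my own reconstruction, not the project's actual code): for a face seen by two cameras, keep the copy whose normal points most directly back at its camera.

```python
# Naive per-face dominance test: score each camera by dot(viewDir, normal)
# and pick the camera that sees the face most head-on. Vectors are plain
# (x, y, z) tuples here.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dominant_camera(face_normal, camera_dirs):
    """Return the index of the camera whose view direction is most
    opposed to the face normal (i.e. sees the face most head-on)."""
    scores = [dot(d, face_normal) for d in camera_dirs]
    return scores.index(min(scores))  # most negative = most head-on
```

As the post says, this alone is not enough: it picks a winner per face in isolation, without discovering which triangles from different meshes actually cover the same patch of surface.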
OK, so I need to build a list of verts/faces at the edge of each camera mesh and match it against the other camera meshes to see if I can fill in the blanks. I also need to determine whether any face is in contention with a face from another mesh. Leaving the edges aside for the moment, let's look at triangle contention.
Number of Verts
I could brute-force each vert/face in the mesh to check whether it is in contention. The resolution of the reduced depth image is 80*60; at this resolution I get a point count of 4,941 and a triangle count of 9,882 per camera.
However, there are to be 3 cameras per portal for Me and My Shadow, so a brute-force iteration checking each triangle against every other triangle comes to 439,427,835 possible checks. Clearly this is too much.
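For the curious, that figure is just the number of unordered pairs across the three meshes:

```python
# 3 cameras at 9,882 triangles each, with every unordered pair of
# triangles compared once: n * (n - 1) / 2 pairwise checks.
triangles = 3 * 9882                      # 29,646 triangles in total
pairs = triangles * (triangles - 1) // 2  # unordered pairs
print(pairs)                              # 439,427,835
```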
So maybe I could assign each vert to a cell, which would reduce the amount of triangle checking involved considerably.
I could pour all the vertices from all the cameras into a point cloud and use a volume of cells. A quick measurement showed me that I could expect values ranging from [-4,-4,-4] to [+4,+4,+4] once all the points have been converted to skeletal space, so I created a 3-dimensional array of int [80,80,80], giving each cell 0.1 cubed of the skeletal space. I hooked this up and took a quick tally of the maximum number of verts I could expect in a single cell; it came out at about 100 (when an object was quite close to the camera). So if each cell had an array of 100 possible verts, that is 80*80*80*100 = 51,200,000 slots, which is over 200MB just in position data! This method was clearly out unless I could reduce the number of verts in a single cell, and even then there are 80*80*80 = 512,000 cells to iterate over. It's better, but still way too large an iteration to run in realtime.
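To make the cell-binning idea concrete, here is a hypothetical sketch of the mapping described above (the names and clamping behaviour are my assumptions, not the project's code): skeletal-space coordinates in [-4, +4] bucketed into an 80x80x80 grid of 0.1-sized cells, plus the worst-case memory tally.

```python
# Skeletal space spans [-4, +4] on each axis, divided into 80 cells,
# so each cell covers 0.1 units per axis.
CELLS = 80
SPAN_MIN, SPAN_MAX = -4.0, 4.0
CELL_SIZE = (SPAN_MAX - SPAN_MIN) / CELLS  # 0.1

def cell_index(x, y, z):
    """Map a skeletal-space point to its (i, j, k) cell."""
    def axis(v):
        i = int((v - SPAN_MIN) / CELL_SIZE)
        return min(max(i, 0), CELLS - 1)  # clamp boundary points into range
    return (axis(x), axis(y), axis(z))

# Worst-case memory for a dense grid with room for 100 vert indices
# per cell, at 4 bytes per int index:
slots = CELLS ** 3 * 100   # 51,200,000 slots
bytes_needed = slots * 4   # 204,800,000 bytes: the "over 200MB" above
```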
I could use a quad tree to reduce the memory footprint; however, I really don't want the garbage collector running every frame, and the setup/tear-down would be a big cost.
Reducing the Number of Verts
So the only option I have is to reduce the number of verts per camera mesh. I don't want to just subdivide and average the values from the depth image again; I have done that enough already to make it fit over the network connections we will be employing for each portal. Instead I would like to build a mesh optimization algorithm that reduces the overall vert/triangle count but maintains detail.
As I walk the depth data I will look for transitions between verts that are linear (or close to it, within a given margin) and remove the verts along the path, say in steps of 4. I will have to build up a list of pointers to the replaced verts as I go so that the next row can join up. I think that with 2 passes on each mesh I should have them down to hundreds of faces per camera, a small enough set to do a brute-force duplicate face removal and edge merge on. I will post a detailed description of if/how this worked in my next entry.
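As a rough illustration of the idea, here is a minimal sketch of the near-linear run removal on a single row of depth values. This is my own reconstruction under stated assumptions (the tolerance, step size, and function name are all hypothetical), not the algorithm the next entry will describe.

```python
# Walk one row of depth values and drop verts that lie on (or near) the
# straight line between two endpoints `step` verts apart.
def decimate_row(depths, tolerance=0.01, step=4):
    """Return the indices of the verts to keep: endpoints of near-linear
    runs survive, interior verts on the line are dropped, and any vert
    that deviates by more than `tolerance` forces a finer sampling."""
    keep = [0]
    i = 0
    while i + step < len(depths):
        a, b = depths[i], depths[i + step]
        # check every interior vert against linear interpolation a -> b
        linear = all(
            abs(depths[i + t] - (a + (b - a) * t / step)) <= tolerance
            for t in range(1, step)
        )
        if linear:
            i += step  # drop the interior verts, jump to the endpoint
        else:
            i += 1     # detail here: advance one vert and keep it
        keep.append(i)
    keep.extend(range(i + 1, len(depths)))  # keep any leftover tail verts
    return keep
```

A flat row collapses to one kept vert every `step` positions, while a bump in the depth data keeps its neighbourhood densely sampled. The real version would also have to record pointers from dropped verts to their survivors so the next row's triangles can re-join, as the post notes.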
So, the London stint of the MADE residencies is over. There were a fair few technical challenges to overcome in this short time, most notably streaming Kinect meshes between 4 machines simultaneously and in real time.
I was not 100% happy with the result, but I know that I had sacrificed frame rate for stability for the process demo on the last day. I think I could have pushed it quite a lot further, and given a little more time could have got them streaming at maybe half frame rate (12fps). Other than that the application worked well. I still have a fair number of visual bugs to sort out, but as we were only paying lip service to the aesthetics of the work, I only care about those for my own personal pride.
Here is some video of all 4 portals set up side by side.
You can see more video over at the project blog here
Thanks to the people of the National Theatre for hosting us; you were all very pleasant company. A special thanks to our dancers Sasha, Nick and Amina: we could have done none of this without your patient participation.
Check out the video interview with Dave that was featured in Physics World, where he talks about the Danceroom Spectroscopy project, here.
If you have an IOP subscription you can read the full article here.