Over the summer, I was looking through some old components and discovered my old Xbox Kinect (the second-generation model). I had been using it for motion-capture and depth-sensing experiments for years at that point, but it had never truly shined as the star of any one project. I had recently started teaching myself some three.js, and I had a craving for making cool 3D stuff happen in-browser. You know… things that you could share with friends, family, and the world that would elicit the occasional “wow”. I got to thinking… what if the Kinect could somehow display its depth information in-browser… in three.js space?

Well, I was about to get my answer. For starters, one of three.js’s official examples shows how to display Xbox Kinect data in-browser. This, at the very least, gave me hope. Then I stumbled upon Kinectron: an actual program designed to stream live Xbox Kinect data over a network! Setting it up was not hard at all: there’s simply a “server” program that acts as… well… the server, and a client-side API for accessing the data. Now, all I needed was a way to get the Kinect data into a three.js instance…
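To give a sense of how little client code this takes, here is a rough sketch of connecting to a Kinectron server and subscribing to its depth stream. The method names (`makeConnection`, `startDepth`) are from my reading of the Kinectron client API, and the server address is a placeholder — check both against your own setup:

```javascript
// Hypothetical address of the machine running the Kinectron server app —
// replace with the IP the server window actually reports.
const KINECTRON_HOST = "192.168.1.10";

// Start the depth stream, calling onDepthFrame for each incoming frame.
// Returns the client, or null if the Kinectron client library isn't loaded
// (it normally arrives via a <script> tag on the page).
function startDepthStream(onDepthFrame) {
  if (typeof Kinectron === "undefined") {
    return null; // not running in a page with the Kinectron client
  }
  const kinectron = new Kinectron(KINECTRON_HOST);
  kinectron.makeConnection();         // open the peer connection to the server
  kinectron.startDepth(onDepthFrame); // begin receiving live depth frames
  return kinectron;
}
```

In the browser, `onDepthFrame` then fires for every depth frame that arrives over the network, ready to be handed off to whatever is drawing the scene.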

So I googled that, too! I found Three-Kinectron (by the same people), which did exactly that: it’s a JavaScript library that lets a three.js instance access live Kinectron data! Now all I had to do was combine the three: three.js, Kinectron, and Three-Kinectron. I created a page based on the Three-Kinectron examples that connected to the Kinectron client, pulled its data, and displayed it in 3D space! When I pulled depth information, things got exciting. I could see a live 3D representation of my surroundings on screen, in Google Chrome! How cool is that?
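The core of that depth-to-3D step is just pinhole-camera math: each depth pixel (u, v) with a depth reading d back-projects to an XYZ point. Here is a minimal sketch with made-up Kinect-v2-ish camera parameters (illustrative only — real intrinsics come from the sensor’s calibration), producing a flat position array of the sort you could feed into a three.js point-cloud geometry; Three-Kinectron handles this plumbing for you:

```javascript
// Rough Kinect v2 depth-camera parameters — assumed values for illustration.
const WIDTH = 512, HEIGHT = 424;       // depth frame resolution
const FX = 365.5, FY = 365.5;          // focal lengths in pixels (assumed)
const CX = WIDTH / 2, CY = HEIGHT / 2; // principal point (assumed centered)

// Back-project a depth frame (Uint16Array of millimeters, row-major)
// into a flat [x0, y0, z0, x1, y1, z1, ...] array in meters,
// skipping pixels with no depth reading (0).
function depthToPositions(depth) {
  const positions = [];
  for (let v = 0; v < HEIGHT; v++) {
    for (let u = 0; u < WIDTH; u++) {
      const d = depth[v * WIDTH + u];
      if (d === 0) continue;               // sensor had no reading here
      const z = d / 1000;                  // mm -> m
      positions.push(((u - CX) / FX) * z,  // x
                     -((v - CY) / FY) * z, // y (flip so +y points up)
                     -z);                  // z (scene sits in front of the camera)
    }
  }
  return new Float32Array(positions);
}
```

In three.js, an array like this becomes the position attribute of a `THREE.Points` geometry — one glowing dot per depth pixel, which is exactly the live-room effect described above.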

The last step was to create a WebVR instance in three.js to display the Kinectron data in. A WebVR instance is just what it sounds like: virtual reality in-browser. It can be viewed on a desktop by navigating with a mouse, but it is predictably cooler to view in a VR headset. I started the stream, did some port forwarding so that my phone could reach it, and pulled out my Samsung Gear VR to view the live Kinect stream. And voilà! I was standing inside a virtual representation of the room.

In the future, I could see a setup like this being used for virtual telepresence. Think about it… if two of these sensors were used (one for each person) and streamed over a VPN to a shared three.js scene in-browser, each person could stand inside a live 3D capture of the other. Indeed, even now, if I had two Kinect sensors on hand, I could create two Kinectron streams in one three.js scene: one from each Kinect! I find it fascinating that such a complex notion can be achieved with simple items that many people already have in their homes. Maybe someday this could all be streamlined, and we could stand face to face with a live, high-quality 3D representation of a friend across the country.
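For reference, the WebVR setup itself was only a couple of lines on an existing scene. This sketch uses the WebVR-era three.js API as I remember it (the `WEBVR` helper shipped with the three.js examples; newer releases replaced WebVR with WebXR, so the exact calls depend on your three.js version):

```javascript
// Enable VR rendering on an existing THREE.WebGLRenderer (WebVR-era API).
// Guarded so the sketch is a no-op outside a page that loads three.js
// and the examples' WEBVR helper script.
function enableVR(renderer) {
  if (typeof WEBVR === "undefined") {
    return false; // three.js VR helper not loaded on this page
  }
  renderer.vr.enabled = true;                               // render per-eye views
  document.body.appendChild(WEBVR.createButton(renderer)); // adds the "ENTER VR" button
  return true;
}
```

In that API you also swap `requestAnimationFrame` for `renderer.setAnimationLoop(render)`, so the headset drives the frame timing instead of the browser tab.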
