Creating the CGI Music Video for AdhesiveWombat’s “Storm Crusher”

Recently, I had the opportunity to produce a CGI/visual effects-based music video for AdhesiveWombat, an electronic music creator on SoundCloud, YouTube and the like. If you’re into tech, you may even know him as the producer of the “theme song” for Linus Tech Tips’ “WAN Show”. Well, that “theme song”, AKA “Storm Crusher”, got an animated VFX music video this summer! Using a suite of 3D software and compositing tools, I put together a VFX adventure for this track with the help of some live action footage, photorealistic raytracing, and particle simulations! If you haven’t seen it yet, check it out!

AdhesiveWombat’s “Storm Crusher”

This video involved many production and post-production processes, but I’d like to highlight a few. For this video to work, you’ll notice that the music needed to “drive” the animation in some parts. I used a script to convert certain frequencies from the soundtrack (typically between 0 and 4 kHz) into usable animation data that could be manicured, amplified, and turned into pulsations, movement, and force field data for particle animations.
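For the curious, here’s a minimal sketch of that idea (not my exact production script): pull the energy of one frequency band out of the track, one value per video frame, ready to map onto animation channels. The filename, band edges, and 16-bit WAV assumption are placeholders.

```python
# Sketch: per-frame energy of a frequency band in a WAV file, for driving
# animation. Assumes a 16-bit WAV; filename and band edges are placeholders.
import wave
import numpy as np

FPS = 24                    # animation frame rate
LOW_HZ, HIGH_HZ = 0, 4000   # the band described above

with wave.open("storm_crusher.wav", "rb") as wf:
    rate = wf.getframerate()
    samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    if wf.getnchannels() == 2:
        samples = samples[::2]              # keep the left channel

hop = rate // FPS                           # audio samples per video frame
values = []
for start in range(0, len(samples) - hop, hop):
    window = samples[start:start + hop].astype(np.float64)
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / rate)
    values.append(spectrum[(freqs >= LOW_HZ) & (freqs <= HIGH_HZ)].mean())

values = np.array(values)
values /= max(values.max(), 1e-9)           # normalize to 0..1 for keyframing
np.savetxt("band_energy_per_frame.txt", values)
```

From there, the values can be amplified, smoothed, and mapped onto scale, displacement, or force field strength.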

Here, you can see the pulsating orb and its particles from the beginning, alongside a graph visualization displaying several parameters driven by the sound from the music. You’ll also notice the music track along the top of the viewport… super useful when you want to match the movements up just right! Here’s another example of the orb, along with its graph view (showing several channels of displacement data driven by sound):

To complete this video, I needed to use “projection mapping”. That is, I needed to partially reconstruct some environments in 3D and project imagery onto them, to ensure that the 3D objects in the scene would match the lighting conditions under which the video was shot. Here’s an example of partially re-projecting a building onto a rough box model:

This allows for so many possibilities. Water, for example (of which there is a lot in this video), has a refractive index (a measure of how much light bends as it passes through the material). This means that imagery needs to be present behind the object so that there is something to “see” through the water when the light gets bent through it. Using this technique, I can recreate real-world lighting conditions (with the help of an HDRI backdrop).

Here, you can see how this is accomplished… a mesh’s geometry is “projected” from the tracked camera’s point of view, and the imagery in that scene is applied. This allows me to make actual “chunks” of the parking lot rip apart!
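In Blender, this sort of camera projection can be set up with the UV Project modifier. Here’s a minimal sketch of that setup (the object and camera names are placeholders for the rough proxy geometry and the tracked camera):

```python
# Sketch: camera projection mapping via Blender's UV Project modifier.
# Object names are placeholders.
import bpy

obj = bpy.data.objects["ParkingLotChunk"]   # rough proxy geometry
cam = bpy.data.objects["TrackedCamera"]     # camera solved from the footage

mod = obj.modifiers.new(name="Projection", type='UV_PROJECT')
mod.uv_layer = "UVMap"
mod.projectors[0].object = cam              # project UVs from the camera's view
mod.aspect_x = 16.0                         # match the footage aspect ratio
mod.aspect_y = 9.0
```

With a frame of the footage applied as the object’s texture, the proxy then “wears” the real imagery, and can be shattered or displaced freely.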

Finally, I’d like to discuss those clouds. The clouds were produced procedurally using dynamic noise mixed with a black/white 3D mask to “guide” the clouds to where they needed to appear (the mask was animated to “steer” the clouds to different areas). You can see how the clouds only appear in a small area, although the mesh is much larger:

These clouds used volumetric materials, meaning that what you see is a subset of the mesh’s volume, rather than its surface. Raytracing render engines are needed to accomplish this, since the paths the light takes as it bounces through the clouds need to be calculated by the computer. Using this technique, you can also get some really crisp volumetric shadows from other objects in the scene, like the one above. When that chunk of dirt comes out of the field, the crisp shadow edges on the volumetrics certainly help “sell” the effect.
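For reference, here’s a rough sketch of that kind of cloud material in Blender’s shading nodes. A gradient texture stands in here for my animated black/white mask, which was its own painted, animated texture in the real scene:

```python
# Sketch: a procedural volumetric cloud material — noise density multiplied
# by a mask so clouds appear only where the mask is white.
import bpy

mat = bpy.data.materials.new("Clouds")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

noise = nodes.new("ShaderNodeTexNoise")       # dynamic noise for cloud shapes
mask = nodes.new("ShaderNodeTexGradient")     # stand-in for the animated mask
mix = nodes.new("ShaderNodeMath")
mix.operation = 'MULTIPLY'                    # the mask "steers" the density
vol = nodes.new("ShaderNodeVolumePrincipled")
out = nodes.new("ShaderNodeOutputMaterial")

links.new(noise.outputs["Fac"], mix.inputs[0])
links.new(mask.outputs["Fac"], mix.inputs[1])
links.new(mix.outputs["Value"], vol.inputs["Density"])
links.new(vol.outputs["Volume"], out.inputs["Volume"])
```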

Overall, this was such a blast! I’m glad everyone seems to like the results. Super, super special thanks to AdhesiveWombat for letting me give his already legendary music a visual counterpart. Please go follow him:

AdhesiveWombat’s Socials:

New Flight Digital’s Socials:

Want to get in touch? Visit newflightdigital.com!

How Can 3D Special Effects Drive Interest in your Products?

Ever see those fancy, flashy product commercials? You know the ones… the ones with the complex camera movements, colorful backdrops, and a hero shot of the main product? Sometimes the product is floating through the air… sometimes it’s a slow motion shot of the product spinning. They’re almost always produced and funded by large corporate marketing firms for countless products, from hamburgers to facial cleanser. The animations are attractive, sleek… and most importantly, they attract customers by making them feel a certain way about the product. As a small business owner, you might feel that such animations are completely beyond you. After all, your finances are focused on keeping your business running and serving your customers. But marketing is a huge factor in business success… are these types of product shots really out of reach?

The answer used to be ‘yes’, but not anymore. With advancements in 3D technology, the cost of 3D special effects has become manageable for the average small business owner, and they can do a lot for you and your sales. Take this graphic for example:

This animated demo was produced by us in-house, and has everything you might see in a flashy corporate commercial. 3D effects such as fire, smoke, bubbles, water, and sparks can add a uniquely emotional element to your product, and instantly associate it with a certain feeling. But why is creating a “feeling” so crucial, anyway? Coined by P&G in 2005, the “First Moment of Truth” (or “FMOT”) is the first 3 to 7 seconds after a customer encounters a product for the first time. In this situation, there are three steps:

  1. Customer encounters advertising
  2. Customer goes to product location
  3. Customer locates product

A customer will not take steps 2 and 3 unless they are impressed in step 1, and most customers are “low involvement” decision makers. This doesn’t mean that they don’t care about their purchases, but it does mean that they rely more on feeling and emotion than facts or research when purchasing a consumable product. This reliance is a heuristic: a “short-cut” customers take in decision making that saves time and resources in a busy life. The best way to attract a customer is to deliver that emotional punch right at the beginning: in your marketing. And 3D special effects are a great way to deliver a deep, visceral emotional message.

Here’s another image we produced: a still frame from an animation. Humans have used fire since prehistory, so it invokes a deep, almost “evolutionary” reaction. “Elemental” 3D effects are great for that reason.

Whatever your product is, now you can use the same tactics that the big dogs use in their product images. We’ve provided cost-effective effects like the ones featured here for companies just like yours, and we work with you to achieve the perfect results. Want to know what we can do for you? Fill out this quick form! We’ll get back to you with a free quote for your project. It couldn’t hurt to ask, right?

Creating a Virtual Telepresence with the Xbox Kinect and Three.js (An Idea)

Over the summer, I was looking through some old components and discovered my old Xbox Kinect (the second one). I had been using it for motion capture and depth sensing experiments dating back years at this point, but it had never truly shined as the star of any one specific use. I had recently started teaching myself some three.js coding, and I had a craving for making cool 3D stuff happen in-browser. You know… things that you could share with friends, family and the world that would elicit the occasional “wow”. I got to thinking… what if the Kinect could somehow display its depth information in-browser… in three.js space?

Well, I was about to get my answer. For starters, one of three.js’s official examples shows how to display Xbox Kinect data in-browser. This, at the very least, gave me hope. Then I stumbled upon Kinectron: an actual program designed to stream live Xbox Kinect data over a network! Setting this up was not hard at all; there was simply a “server” program that acts as the… well… the server, and a client-side API for accessing the data. Now, all I needed was a way to get the Kinect data into a three.js instance…

So I googled that, too! I found Three-Kinectron (by the same people), which did exactly that: it’s a Javascript library that lets a three.js instance access live Kinectron data! Now, all I had to do was combine the three: three.js, Kinectron, and Three-Kinectron. I created a page based on the Three-Kinectron examples that referenced and pulled data from the Kinectron client and displayed it in 3D space! When I pulled depth information, things became exciting. I could see a live 3D representation of my surroundings on screen, in Google Chrome! How cool is that?

The last step I took was to create a WebVR instance in three.js to display the Kinectron data in. A WebVR instance is just what it sounds like: virtual reality in-browser. It can be viewed on a desktop by navigating with a mouse, but it is predictably cooler to view in a VR headset. I started the stream, did some port forwarding so that my phone could access it, and pulled out my Samsung GearVR to view the live Kinect stream. And voila! I was standing inside a virtual representation of the room. In the future, I could see a setup like this being used for virtual telepresence. Think about it… if two of these sensors were used (one for each person) and streamed via VPN to a central three.js instance in-browser, each person could see the other. Even now, if I had two Kinect sensors on hand, I could create two Kinectron instances in three.js: one from each Kinect! I find it interesting that such a complex notion can be achieved with simple items that many people already have in their homes. Maybe someday, this could all be streamlined, and we could stand face to face with a live, high quality 3D representation of a friend across the country.

Designing a Website with Total Creative Control: Building the Improved ‘New Flight’ Website

Whether you own a business, need to promote yourself, or run a nonprofit, owning a website that suffers from bad web design is a terrible disadvantage. The first New Flight Digital website was designed in Adobe Muse, a now-discontinued WYSIWYG (what you see is what you get) website designer. I knew there were many disadvantages to using such a program (less creative freedom, lack of direct control over content, etc.), but like so many others, I simply lacked the know-how to launch into building my own website from scratch. Until I started googling… and googling… and googling…

I knew a revamped New Flight website would need to showcase some of our products, and the many eye catching 3D graphics and effects we have produced. It would need to be big, bold and colorful, like the company. I started looking to other similar (albeit larger-name) players in the VFX industry for inspiration. Digital Domain, for example, has a front page with pleasing motion graphics, large hero-shots of their work, and a unified color scheme.

Digital Domain’s website includes attractive features and large imagery

With next to no idea where to begin, I started with an open source HTML template and opened it in Adobe Dreamweaver. Quick Google searches were my go-to as I learned the ins and outs of HTML and CSS. But I soon realized that the things I wanted to display on the website could not be accomplished with HTML alone. You see, websites like the one I had in mind are the result of the interplay between HTML and CSS files, Javascript, PHP, and other elements. The latter two are scripts: files that are intended to execute in order to accomplish something. In order to execute PHP, I needed to set up a web server, since PHP runs on the server itself. Try to view a “.php” file without hosting it on a web server, and it won’t do anything, because there’s no server to run it. Javascript, on the other hand, executes “client side”, meaning it executes in the viewer’s browser, and not on the server.

  • PHP - Server-side code (script)
  • Javascript - Client-side code (script)
  • HTML - What web pages are ultimately made of
  • CSS - Cascading Style Sheets; how web pages know what to look like

After setting up a local testing environment with XAMPP, I quickly mocked up some design ideas. Any files I edited and saved would automatically update on the web server, since I was editing them in place. This was extremely useful for up-to-the-moment views of the website as I worked. I knew I needed some 3D elements on the website. After all, that’s what we do. Why not have some cool 3D objects for the user to interact with? For this, after some googling, I used three.js, a 3D Javascript library. Think of it like a set of files that allows you to create 3D scenes in Javascript. I dug in, and spent months working on a front page 3D scene, using Blender to create assets.

Three.js is an incredible library behind some impressive web design

But there was so much more! I needed a contact form, a downloads page, a blog slider, and other elements. For each of them, I used either PHP or Javascript to deliver the information to the viewer. If a script needed direct access to server elements, I used PHP, since it runs on the server and therefore can access the server’s files. If something needed to be rendered by the client, I used Javascript, since it runs on the user’s computer.

The front page of http://www.newflightdigital.com

In the coming months, I hope to share details of how I accomplished certain aspects of my website. All it took was googling… and patience. If you have those two things, you can create an entire website without costly alternatives, and have complete control over how it operates, what it does, and everything in between. Leave some comments below about what element of the new website you would like explained, or shoot an email to contact@newflightdigital.com.

Artistic Hard Surface Modelling: Creating Realistic Looking, Fictional Technical Art (Blender and Photoshop)

I got inspired a few weeks ago. You may have heard of a CG artist by the name of “Beeple”. His real name is Mike Winkelmann, and… well… he’s been creating one scene in 3D every day for the last 13 years! Seriously. He uses Cinema 4D primarily, and he puts out a ton of awesome work. You may even be familiar with some of it (www.beeple-crap.com; check him out). I found him by doing a Google reverse-image search of this image:

Image courtesy: Mike Winkelmann, of course.
http://www.beeple-crap.com

This is what started me on my journey down the trail of Beeple. This image inspired me to explore what was possible in the world of technical hard surface 3D modelling and rendering. There were quite a few things I was impressed with; primarily, the level of detail in this image was astounding. It felt very… believable. Beeple used common current-era materials to craft something completely new. Shipping containers, a bicycle, some cans, a cardboard box or two, and an air conditioning unit… these things are recognizable to the everyday layman. So when you see them stacked up into a huge post-apocalyptic Amazon truck, well, it feels more real than if the vehicle had been made up of elements that we could not recognize. Beeple has a lot of these types of images: images so familiar and matter-of-fact looking that they might as well be real. These are my 9 absolute favorites out of his 4,338 daily renders:

This is the kind of stuff that makes me stay up at night writing blog posts and frantically modelling increasingly complex technical nonsense. The technical detail… the accuracy… the style. It all made me want to be a technical designer or engineer all of a sudden. I realized I could learn a lot from this guy. And so, I began my first adventure into more complex hard surface 3D modelling. In Blender, I began box modelling with a subdivision surface modifier on. What does this do? Well, it allowed me to get clean, smooth surfaces and edges in my technical renders. Instead of harsh, sharp edges, I could make my objects as smooth as I wanted with much less effort. Take a look:

Modeling a Technical Piece with Subdivision Surface Modifiers

This is important because, if you’ve noticed, nothing in the world is absolutely 100% sharp-edged. There is always a slight bevel or softness to an edge. The subdivision surface modifier divided and smoothed out my boxy geometry as I modeled. To get sharper edges, I would add a supporting edge loop and bring it closer to the exterior edge. As I did so, the exterior edge would become sharper and sharper. The result? Well, I was able to model some pretty interesting hard surfaces. Inspired by the first image I showed you, I decided to model a “tank” of sorts built out of a shipping container, something that might be driven in the future, perhaps. I decided to have it stranded out at a docking station, and I mentally built a story around that. I thought about where every item would go, and why it would be there. I used some particle simulations to add some snow to the shot, and took a TON of time lighting and texturing. The result? Have a look:
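If you want to try this yourself, here’s a minimal Blender Python sketch of the setup (Blender 3.x API). Edge creases stand in here for the supporting edge loops I actually added by hand, since they sharpen edges under a subdivision surface in a similar, scriptable way:

```python
# Sketch: a cube under a Subdivision Surface modifier, with edge creases as a
# scriptable stand-in for hand-placed supporting edge loops (Blender 3.x API).
import bpy

bpy.ops.mesh.primitive_cube_add(size=2)
obj = bpy.context.active_object

subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 2              # viewport subdivisions
subsurf.render_levels = 3       # render-time subdivisions

for edge in obj.data.edges:
    edge.crease = 0.8           # closer to 1.0 = sharper edge under subsurf
```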

I liked where this was going. I went WAY overkill with this. It took about 3 weeks of on-and-off free time to model all of the little details. Most of them aren’t even visible. For example, did you notice this flag bracket? Or these other details?

No? Well, the flag bracket was hidden from view, and those other details were hard to see. But I still modeled them. I modeled everything. There is even a fully furnished computer control area in the interior.

Next, I went on and did a project inspired by some of Beeple’s caterpillar images (see the gallery above). After around 2 weeks of work, I had these:

I added the human element using the open source “MakeHuman” program that I found out there on the great World Wide Web (it’s great, isn’t it?). Just like the previous image, a TON of post-processing effects were done in Photoshop. Even the depth of field was done in PS using a depth pass! That’s right. See that black and white image in the Instagram gallery? I used it to define the blurriness of the image at any given point using the “Lens Blur” effect in Photoshop. Just add the depth pass as an image channel, select it in the effect dialog box, and voila! You have depth of field… in post production! I also used the depth pass as a mask for smoke effects, light flares, etc. Very useful trick.
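The same trick can be approximated outside of Photoshop, too. Here’s a tiny Python/Pillow sketch of the idea (the file names are placeholders): blend a sharp copy and a blurred copy of the render, using the depth pass as the mask:

```python
# Sketch: depth-of-field in post — blend sharp and blurred copies of a render
# using a depth pass (white = far) as the mask. File names are placeholders.
from PIL import Image, ImageFilter

render = Image.open("render.png").convert("RGB")
depth = Image.open("depth_pass.png").convert("L")   # white = far away

blurred = render.filter(ImageFilter.GaussianBlur(radius=8))
# Far (white) pixels take the blurred copy; near (black) pixels stay sharp
result = Image.composite(blurred, render, depth)
result.save("depth_of_field.png")
```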

The last hard surface modelling project I’ve taken on is something… different. It is not finished yet, but I decided to model a device of some sort. I kind of went crazy on the details here. It was inspired by my pocket multi-meter sitting on my desk as I write this (a multi-meter measures a bunch of electricity stuff). I thought… what if we had a device that could warp space and time? A modulator… but for the fabric of reality (I know I’m crazy). So I came up with an idea: the “Localized Axial Spacetime Modulator”. It would use these spatula-looking antennae to warp space and time. I did a ton of background research on what this would entail so that I could make this thing look plausible. Also, just like Beeple, I added a bunch of real world elements. Sockets and materials that science-oriented people would recognize add realism to the model.

Remember… adding stuff that actually exists in the real world kind of convinces your audience that they’re looking at something real (this circuit doesn’t work by the way).

And now… the grand finale. This is the texturing I have so far: the screen of the device and the lower labels. Every part, every word and every pixel of this image has a purpose, a meaning and a reason for being there. I’m learning more and more that THAT is how you create convincing worlds: you have a why, a where, a when, a who… and fifty reasons WHY those things are the way they are.

Notice the warnings and stuff on the device, too. I got pretty creative. Some of it is common sense. The device is pretty similar to a signal generator or oscilloscope.

So… that’s all for now! If you’ve made it this far, congratulations! And Beeple, if you read this, you rock man! Go check out Beeple at www.beeple-crap.com, and head over to my site at www.newflightdigital.com to hop on my mailing list. I only send stuff out once a month on average, so you won’t get spammed. Or, shoot me an email at contact@newflightdigital.com.

Creating a Futuristic Concept Scene: Speed Designing with Photoshop and Blender

Last night, I created a set of simple concept images in roughly 4 hours of on-and-off work. My goal was to quickly develop a partial continuity between two images, establishing a sort of implied production workflow. In other words, I wanted to first create a hand drawn digital image of the concept using a pressure sensitive pen tablet. Then, using this drawing as a guide, I wanted to speed model and texture a 3D environment inspired by the drawing, thereby establishing a workflow continuity commonly seen in the industry: from hand drawn concept image to rendered concept image.

I began in Photoshop, where I set up a 16×9-inch, 500 ppi document with no guidelines. I set the pen size to roughly 25 pixels for fine control of the design. I decided to go for a neofuturistic, bleak image of an astronaut of some sort traversing a somewhat barren area. I wanted to combine elements of the past into the image by featuring a draped cloth, which would cover the lower half of the figure’s visible physique. I began to draw with smoothing at around 50%. I first did a complete sketch of the man’s face from the inner muscle groups outward, even though most of this would later be covered by other elements, such as his helmet. I even included facial hair. The man’s thin face and the shape of his jaw helped dictate where the different elements of the helmet would go, and what shape they would take.

The first defining strokes I made staked out the area that would later make up the visor part of the helmet. I defined the opening that the man would be looking out of, as well as the collar piece and other essential components of the structure. Once I had pieced the entire structure together, I began shading the image according to the geometric contours using various pen widths and shapes until I was satisfied. I blocked out elements of the man’s bare face using white shapes with various opacities to define the surface of the helmet.

To go from 2D to 3D, I first sculpted the man’s bare face using the sculpting tools in Blender. I then began forming the geometric segments of the helmet around the face according to their required positions. I frequently adjusted the design and shape of the pieces on the fly as I went. When I was ready, I used Blender’s texture paint features to literally draw on color and damage marks, and applied these image textures to the model pieces in the shader node editor. I used glossy shaders for the most part, and reused the hand drawn textures as displacement maps to give depth to the material.

To achieve the cloth look, I created a subdivided plane with a hole in the middle around the figure. I “pinned” the vertices at the center so that they would not move, and then ran a cloth simulation using wind and turbulence forces to achieve a billowing cloak model. I textured this using a combination of an opaque velvet shader and a transparent shader in a mix shader, with the factor set to a stretched brick texture. This gave me long strands of “cloth”, which looked like a woven burlap material up close.
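Here’s a rough sketch of that cloth setup in Blender’s Python API, minus the hole around the figure (the pinned vertex indices are illustrative; in the actual scene I pinned the ring of vertices around the figure):

```python
# Sketch: a subdivided plane with a pinned vertex group, a cloth simulation,
# and wind/turbulence forces. Vertex indices are illustrative.
import bpy

bpy.ops.mesh.primitive_grid_add(x_subdivisions=30, y_subdivisions=30, size=4)
plane = bpy.context.active_object

pin = plane.vertex_groups.new(name="Pinned")
pin.add([464, 465, 494, 495], 1.0, 'REPLACE')   # hold a few central vertices

cloth = plane.modifiers.new(name="Cloth", type='CLOTH')
cloth.settings.vertex_group_mass = "Pinned"     # pinned vertices stay put

bpy.ops.object.effector_add(type='WIND')
bpy.context.active_object.field.strength = 500  # billow the cloth
bpy.ops.object.effector_add(type='TURBULENCE')
bpy.context.active_object.field.strength = 200  # break up the motion
```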

I then ran a particle system with wind and a collision object to simulate snow, and rendered this separately. I pulled everything into Photoshop, applied a lookup table, touched up highlights and shadows using the brush tool, and composited dust and smoke into the shot before exporting the final image. The resulting image is comparable to the original sketch, albeit vastly different in proportion.

This exercise has given me some insight as to how concepts tend to change from conception to execution, and how one might go about adapting reference images for use in 3D environments. To see more of my work, go to www.newflightdigital.com. Feel free to reach out to me at contact@newflightdigital.com.

Home Automation Scripting with the Raspberry Pi: Update #1

Over the past year, I have experimented with different methods of home automation, from scripts that automatically render my projects for me, to light control. In this time, I have been using an old eMachines desktop computer with a failing hard disk and limited RAM. Well, this holiday season, the thing finally gave out on me, but luckily I had a spare Raspberry Pi to fill the void. I transferred the necessary scripts that I had written, as well as the necessary programs that I had found online and compiled.

The current system has my Pi mounted to a wall on one end of my room with a status monitor, connected to a very long USB cable which snakes around the perimeter of my room to the opposite wall. There, the USB cable is connected to a USB hub, which connects to a USB-DMX dongle (to control RGB lights) and a USB relay module that controls a tower light and buzzer set for special alert cases. The whole thing is controlled by a script that checks a Google Firebase database every 5 seconds for changes (a sketch of that polling loop follows the list below). I have created an Android app using MIT App Inventor which communicates with the same Firebase database, thereby communicating with the Raspberry Pi over the internet. With this setup, I can change the color of my room lights, tell the Raspberry Pi to say a specific phrase, and more. To date, my home automation system is equipped with the following abilities:

  • A briefing program, which uses Linux’s “espeak” text-to-speech terminal program to address me and give me updates on the status of various processes. For this program, the script first gathers various statistics by loading up various web pages, saving them into text documents, and parsing them. For example, the briefing program provides the weather, temperature, and the number of followers on each of my social media accounts, as well as more personalized data such as the number of files rendered overnight.
  • A continuous social media checker, which alerts me when I gain or lose a follower on Instagram by telling me verbally and by turning the LEDs in my room either green or red.
  • Full LED lighting control via a USB-DMX interface, and a strip of RGB LEDs connected to a DMX decoder.
  • A facial recognition program that uses a webcam connected to the Pi, complete with a full login and logout system, which can recognize a person entering the room and identify them, setting the room state to their preference.
  • A custom phrase repeater, which gets its phrase from the Android app.
  • A custom alarm clock which will read input from my app, setting an alarm for the appropriate time and waking up the room occupant at the appropriate time using voice features.
  • And much more!

On the left, the wall mounted black box is the Pi. The screen above it will soon display statistics, current system state information, and security data.
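As promised, here’s a minimal sketch of that polling loop, assuming the Firebase Realtime Database REST API (the URL and field names are placeholders, not my actual schema):

```python
# Sketch: poll a Firebase Realtime Database every 5 seconds and react to
# changes. The URL and field names are hypothetical placeholders.
import subprocess
import time
import requests

DB_URL = "https://example-home.firebaseio.com/state.json"   # placeholder
last_state = None

while True:
    try:
        state = requests.get(DB_URL, timeout=5).json()
    except requests.RequestException:
        state = last_state                  # keep the old state on network errors
    if state != last_state:
        if state and state.get("phrase"):
            # espeak is the Linux text-to-speech program mentioned above
            subprocess.run(["espeak", state["phrase"]])
        # ...dispatch DMX colors, relay states, etc. here...
        last_state = state
    time.sleep(5)                           # check every 5 seconds
```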

As I mentioned above, the two main complex connections involved in this setup are the USB relay module and the USB-DMX interface. I managed to find open source command line programs for Linux online to control both of them. Just pure luck! Below are images of both interfaces.

A USB cable from the hub connected to the Pi connects to the USB-DMX interface, which plugs into this DMX decoder, which then receives power from the left and splits the signal into the appropriate RGB channels for the lights.
A USB cable from the hub connects to this relay module, which receives power from the left. According to the signals received from the Pi, the relay module will open or close the relays, thereby turning on or off the different components on the tower light.
This is the tower light, which includes a buzzer and 3 lights.

Above, you can see a screenshot from the light controller portion of my app, and some of the lighting options.

This has been a very, very brief overview of the system. Parts of it, such as the facial recognition system and the relay control, took many days to set up and get working properly. If you would like to know more, leave a comment requesting a future blog topic!

Motion Capture for Music Videos: Update #1

Over the course of the past two days, I have had the pleasure of working with the Shadow Motion Capture System to create motion capture data for characters, which will be featured in a music video. The music video takes place in a swamp, for which we are creating some creepy practical effects using a diorama. As such, the “people” performing the music will be swamp animals, created with animation and mocap data. 

Practical Effects created with a Swamp Diorama
On set with the “Gods of Fire” during a mocap session

Without giving too much away, a frog will be singing the song, while other animals such as raccoons and lizards play the rest of the instruments. The mocap suit requires several sensors to be placed over the shoulder blades and on the chest, which often come loose during rock-out sessions. This was our biggest challenge, but we overcame it before long once we determined how best to fasten the sensors. The mocap suit transmits a 5 GHz Wi-Fi signal back to a laptop running the latest Shadow software, allowing us to record the data. To begin, we needed to have the actor strike a “T” pose, with his legs together and arms out like a T. This allowed the sensors to calibrate and determine their orientation in that pose. Once the software knew the orientation in that pose, it could determine the orientation in any other pose.

We experienced small problems with the device, but nothing too significant. For example, the device comes with foot pads that go in the user’s shoes. These pads sense pressure and, therefore, determine whether the user is in the air or not (during jumps). Since our talent was rocking out, he would often perform mini jumps in enthusiastic dance. These jumps were not enough to “trip” the pressure sensors, so the system did not know the user had left the ground for a moment. As a result, the mocap figure preview on our computers would slowly slip downward as the sensors de-calibrated. The solution, of course, was to cut the jumps and replace them with other enthusiastic movements. Below, you can see our entire broadcast from yesterday when we recorded the mocap data:

All in all, the mocap session was a success. I exported each session as a BioVision Hierarchy (.bvh) file, and imported them into my favorite 3D program, Blender. To make the 3D models of the animals move, I used a plugin called MakeWalk, from http://www.makehumancommunity.com. The plugin takes BVH data and retargets it to a separate rig. To start, I produced a BVH rig for the animals using MakeHuman. The rig is one that MakeWalk can understand (since they are sister applications). I then edited the rig to fit the curvature of each animal, and applied it to the model using automatic weights. This means that each “bone” in the rig affected a part of the model in a way that the application determined automatically. I then edited the automatic weights to better reflect each animal’s anatomy (for example, the upper arm bone should move the whole upper arm, etc.). After this, I retargeted the rig to a .bvh file recorded on set, and voila! The rig and model moved and acted just as our talent had on set. The last step was to apply “corrective smoothing” to the model to correct some bumps that inevitably form as a result of the mocap warping.
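For reference, here’s a minimal Blender Python sketch of the import-and-bind steps (the file path and object names are placeholders; the retargeting itself happened through MakeWalk’s own interface):

```python
# Sketch: import a BVH take and bind a mesh to an armature with automatic
# weights. File path and object names are placeholders.
import bpy

# Import the motion capture take recorded on set
bpy.ops.import_anim.bvh(filepath="/takes/session_01.bvh")

# Parent the animal mesh to its rig with automatic weights
mesh = bpy.data.objects["FrogBody"]
rig = bpy.data.objects["FrogRig"]
mesh.select_set(True)
rig.select_set(True)
bpy.context.view_layer.objects.active = rig     # the armature must be active
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```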

As I work on the mocap data and refine it, I will be posting more updates. Stay tuned for more… including a singing frog! That will require facial motion capture. I will be using a technique that I invented myself, and I will describe it in a future post.

If you have any questions, don’t hesitate to reach out to me at contact@newflightdigital.com. 

Photogrammetry with the Xbox Kinect: VFX Applications

The Xbox Kinect has long been seen as a possible tool for 3D artists and animators like myself, contemplated both for motion capture (mocap) and for photogrammetric scanning (3D reconstruction of complex environments). I took the liberty of testing a variety of Kinect techniques. What works? What doesn’t?

We will start with the bad: the Kinect suffers outdoors. Plagued by infrared interference from sunlight, the Kinect cannot see past a few feet outdoors on a sunny day. We tested the Kinect pointed out the back of a moving vehicle, planning to use it as a tool to reconstruct a neighborhood in 3D, but because the bumpy, dark asphalt surface in front of it easily absorbed infrared light, the Kinect could not even reconstruct the road for this test. Its range was reduced to only a few feet. When stationary, the Kinect’s range extended to around 15 feet when pointed at a free standing object. We also tested the Kinect as a tool for scanning the facades of houses. I used a program called Brekel PointCloud to capture a pointcloud sequence of a house as I moved around it. The software captured a sequence of 3D mesh files, which were converted into an .obj sequence and manually reconstructed in Blender. This gave us mixed, partial results:
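The “manual reconstruction” amounted to importing each .obj and keyframing its visibility so that only one frame’s mesh is visible at a time. A rough sketch, assuming Blender 3.3+ and placeholder file paths:

```python
# Sketch: rebuild a captured .obj sequence in Blender by importing each file
# and keyframing its visibility. Paths are placeholders; assumes Blender 3.3+.
import glob
import bpy

for frame, path in enumerate(sorted(glob.glob("/scans/house_*.obj")), start=1):
    bpy.ops.wm.obj_import(filepath=path)
    obj = bpy.context.selected_objects[0]
    # Show this mesh only on its own frame
    for f, hide in ((frame - 1, True), (frame, False), (frame + 1, True)):
        obj.hide_render = hide
        obj.hide_viewport = hide
        obj.keyframe_insert("hide_render", frame=f)
        obj.keyframe_insert("hide_viewport", frame=f)
```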

Brekel PointCloud does provide a unique opportunity, however. Using the program, one can create 3D sequences, exported either as Alembic (.abc) or .obj sequences. Let’s suppose, as a VFX artist, you wanted actors to interact with a 3D flood of water, created in post-production with a fluid simulation. With a Kinect, this should, in theory, be easier, as the actors could be captured in 3D by the Kinect, allowing the animated 3D mesh to be used as an obstacle object in the fluid simulation. In our tests, the Alembic files created by Brekel did not work as collision objects in Blender’s fluid simulation, but I will update as we think of new ideas in this area.
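For anyone who wants to experiment: in Blender’s current (Mantaflow) fluid system, marking a captured mesh as an obstacle looks roughly like this (the object name is a placeholder):

```python
# Sketch: use an imported mesh as a fluid obstacle (effector) in Blender's
# Mantaflow fluid system. The object name is a placeholder.
import bpy

obstacle = bpy.data.objects["KinectActorMesh"]
fluid = obstacle.modifiers.new(name="Fluid", type='FLUID')
fluid.fluid_type = 'EFFECTOR'                       # obstacle, not fluid
fluid.effector_settings.effector_type = 'COLLISION'
```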

Moving on to another Kinect program known as Kinect Fusion, the prospects of the Kinect as a stationary photogrammetry device become slightly better. In the video below, observe our efforts in the area. The Kinect is capable of producing a dense, high-poly (though low quality) 3D mesh of the environment:

This brings a similar idea to mind. If animations of 3D objects captured with the Kinect cannot be used in fluid simulations, perhaps static ones can. This idea checks out, although we have not completed a full test. In theory, one could use the mesh output from Kinect Fusion as a collision object in a fluid simulation and save a lot of time modelling the room. In the fast-paced, often rushed schedule of a 3D artist, this could save time and money. I will study this application further.

Outside of the realm of photogrammetry, the Kinect works well as a medium-quality motion capture device. Using Brekel ProBody, I was able to produce convincing .bvh files, which I imported into Blender:

I will elaborate on Motion Capture with the Kinect in a future blog post.