Creating the CGI Music Video for AdhesiveWombat’s “Storm Crusher”

Recently, I had the opportunity to produce a CGI/visual effects-based music video for AdhesiveWombat, an electronic music creator on SoundCloud, YouTube, and the like. If you’re into tech, you may even know him as the producer of the “theme song” for Linus Tech Tips’ “WAN Show”. Well, that “theme song”, AKA “Storm Crusher”, got an animated VFX music video this summer! Using a suite of 3D software and compositing tools, I put together a VFX adventure for this track with the help of some live action footage, photorealistic raytracing, and particle simulations! If you haven’t seen it yet, check it out!

AdhesiveWombat’s “Storm Crusher”

This video involved many production and post-production processes, but I’d like to highlight a few. For this video to work, you’ll notice that the music needed to “drive” the animation in some parts. I used a script to convert certain frequencies from the soundtrack (typically between 0 and 4 kHz) into usable animation data that could be manicured, amplified, and turned into pulsations, movement, and force field data for particle animations.
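
I won’t reproduce the full production script here, but the core idea is simple enough to sketch in Python. This minimal version (the filename and frame rate are placeholders, and it assumes 16-bit PCM WAV audio) measures the energy in the 0–4 kHz band for each animation frame:

```python
# A minimal sketch (not the production script) of turning the energy in a
# frequency band of a WAV file into one animation value per frame.
import wave
import numpy as np

FPS = 24                  # animation frame rate (assumed)
BAND = (0, 4000)          # frequency band of interest, in Hz

with wave.open("storm_crusher.wav", "rb") as wav:   # hypothetical filename
    rate = wav.getframerate()
    channels = wav.getnchannels()
    raw = wav.readframes(wav.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16)    # assumes 16-bit PCM
    samples = samples.reshape(-1, channels).mean(axis=1)  # mix down to mono

chunk_size = rate // FPS                            # audio samples per frame
values = []
for start in range(0, len(samples) - chunk_size, chunk_size):
    chunk = samples[start:start + chunk_size]
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / rate)
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    values.append(spectrum[in_band].sum())          # band energy this frame

values = np.array(values)
values /= values.max()    # normalize to 0..1, ready to amplify and keyframe
```

From there, the per-frame values can be keyframed onto whatever needs to pulse: object scale, displacement strength, or the power of a force field.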

Here, you can see the pulsating orb and its particles from the beginning, alongside a graph visualization displaying several parameters driven by the sound from the music. You’ll also notice the music track along the top of the viewport… super useful when you want to match the movements up just right! Here’s another example of the orb, along with its graph view (showing several channels of displacement data driven by sound):

To complete this video, I needed to use “projection mapping”. That is, I needed to partially reconstruct some environments in 3D and project imagery onto them, ensuring that the 3D objects in the scene would conform to the lighting conditions under which the footage was shot. Here’s an example of partially re-projecting a building onto a rough box model:

This allows for so many possibilities. Water, for example (of which there is a lot in this video), has a refractive index (a measure of how much light bends when it passes through a material). This means that imagery needs to be present behind the object so that there is something to “see” through the water when the light gets bent through it. Using this technique, I can recreate real world lighting conditions (with the help of an HDRI backdrop).
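
The water material itself is nothing exotic. Here’s a minimal sketch using Blender’s Python API (bpy); 1.33 is the real-world refractive index of water, and the node names are standard Cycles shader nodes:

```python
# A minimal sketch, assuming bpy and Cycles: a simple water material. Its
# refraction only "works" if there is projected geometry behind the water
# for the bent light rays to hit.
import bpy

mat = bpy.data.materials.new(name="Water")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

glass = nodes.new("ShaderNodeBsdfGlass")
glass.inputs["IOR"].default_value = 1.33        # refractive index of water
glass.inputs["Roughness"].default_value = 0.02  # nearly smooth surface

output = nodes.new("ShaderNodeOutputMaterial")
links.new(glass.outputs["BSDF"], output.inputs["Surface"])
```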

Here, you can see how the projection is accomplished… a mesh’s geometry is “projected” from the tracked camera’s point of view, and the footage from that shot is applied as a texture. This allows me to make actual “chunks” of the parking lot rip apart!
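
In script form, the projection setup looks roughly like this (a hedged bpy sketch; the object and camera names are placeholders for the proxy mesh and the solved tracking camera, and the proxy needs a UV map for the projection to land on):

```python
# A minimal sketch, assuming bpy: project the tracked camera's view onto a
# rough proxy mesh so the filmed plate "sticks" to the 3D geometry.
import bpy

proxy = bpy.data.objects["ParkingLotProxy"]   # hypothetical proxy mesh
camera = bpy.data.objects["TrackedCamera"]    # hypothetical solved camera

mod = proxy.modifiers.new(name="Projection", type='UV_PROJECT')
mod.projectors[0].object = camera             # project from the camera's POV
mod.aspect_x = 16.0                           # match the plate's aspect ratio
mod.aspect_y = 9.0

# The plate footage is then assigned as an image texture on the proxy's
# material, sampled through the projected UVs.
```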

Finally, I’d like to discuss those clouds. The clouds were produced procedurally using dynamic noise mixed with a black/white 3D mask to “guide” the clouds to where they needed to appear (the mask was animated to “steer” the clouds to different areas). You can see how the clouds only appear in a small area, although the mesh is much larger:

These clouds used volumetric materials, meaning that what you see is a subset of the mesh’s volume, rather than its surface. Raytracing render engines are needed to accomplish this, since the light bounces (the paths that the light takes) through the clouds need to be calculated by the computer. Using this technique, you can also get some really crisp volumetric shadows from other objects in the scene, like in the shot above. When that chunk of dirt comes out of the field, the crisp shadow edges on the volumetrics certainly help “sell” the effect.
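
In bpy terms, the cloud shader boils down to something like this (a sketch, not the exact production node tree; for brevity, an image texture stands in for the animated black/white 3D mask):

```python
# A minimal sketch, assuming bpy and Cycles: cloud density = noise * mask,
# fed into a Principled Volume shader so only the masked region gets clouds.
import bpy

mat = bpy.data.materials.new(name="Clouds")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

noise = nodes.new("ShaderNodeTexNoise")         # the dynamic cloud "shape"
noise.inputs["Scale"].default_value = 2.0

mask = nodes.new("ShaderNodeTexImage")          # black/white guide mask
# mask.image = bpy.data.images["CloudMask"]     # hypothetical mask image

multiply = nodes.new("ShaderNodeMath")          # clouds appear only where
multiply.operation = 'MULTIPLY'                 # noise AND mask overlap
links.new(noise.outputs["Fac"], multiply.inputs[0])
links.new(mask.outputs["Color"], multiply.inputs[1])

volume = nodes.new("ShaderNodeVolumePrincipled")
links.new(multiply.outputs["Value"], volume.inputs["Density"])

output = nodes.new("ShaderNodeOutputMaterial")
links.new(volume.outputs["Volume"], output.inputs["Volume"])
```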

Overall, this was such a blast! I’m glad everyone seems to like the results. Super, super special thanks to AdhesiveWombat for letting me give his already legendary music a visual counterpart. Please go follow him:

AdhesiveWombat’s Socials:

New Flight Digital’s Socials:

Want to get in touch? Visit newflightdigital.com!

3D Environment/Product Visualization for Business: Helping your Customers Visualize

When it comes to small businesses that offer specific, custom products and services, few things can help seal the deal as well as 3D visualization. Your customers need to be able to see what they’re purchasing, and with quantum leaps in 3D rendering technology, business owners can now afford to give their customers a sneak peek of their custom orders before they’re completed. Selling the experience of your product is no easy task, but with tangible, realistic representations of your products, customers are more likely to commit to a purchase. Sometimes, seeing something makes it more real, and can tip the scales in favor of a purchase.

Architects can utilize 3D imagery to demonstrate their capabilities to potential clients

For complex projects, providing a view of a product from multiple angles can also foster a deeper understanding among potential clients, possibly attracting clients from market segments that would otherwise be out of reach. Let’s suppose you offer lighting setups for concerts and parties. There are entire market segments that do not understand how lighting works or what options are available to them. With 3D visualizations, you would be able to show the customer different lighting setups, fixtures, and methods, and foster an understanding that would otherwise be missing. You can turn entire market segments into customers by simply showing, not telling.

With New Flight Digital, you’re never far from achieving your vision for digital media. In an increasingly competitive economy, 3D visualizations can give you the edge needed to succeed. Apply for a free quote on our website here, or simply join our mailing list here. Bring your business to the next level today!

Artistic Hard Surface Modelling: Creating Realistic Looking, Fictional Technical Art (Blender and Photoshop)

I got inspired a few weeks ago. You may have heard of a CG artist by the name of “Beeple”. His real name is Mike Winkelmann, and… well… he’s been creating one scene in 3D every day for the last 13 years! Seriously. He uses Cinema 4D primarily, and he puts out a ton of awesome work. You may even be familiar with some of it (www.beeple-crap.com; check him out). I found him by doing a Google reverse-image search of this image:

Image courtesy: Mike Winkelmann, of course.
http://www.beeple-crap.com

This is what started me on my journey down the trail of Beeple. This image inspired me to explore what was possible in the world of technical hard surface 3D modelling and rendering. There were quite a few things I was impressed with; primarily, the level of detail in this image was astounding. It felt very… believable. Beeple used common current-era materials to craft something completely new. Shipping containers, a bicycle, some cans, a cardboard box or two, and an air conditioning unit… these things are recognizable to the everyday layman. So when you see them stacked up into a huge post-apocalyptic Amazon truck, well, it feels more real than if the vehicle had been made up of elements that we could not recognize. Beeple has a lot of these types of images: images so familiar and matter-of-fact that they might as well be real. These are my 9 absolute favorites out of his 4,338 daily renders:

This is the kind of stuff that makes me stay up at night writing blog posts and frantically modelling increasingly complex technical nonsense. The technical detail… the accuracy… the style. It all made me want to be a technical designer or engineer all of a sudden. I realized I could learn a lot from this guy. And so, I began my first adventure into more complex hard surface 3D modelling. In Blender, I began box modelling with a subdivision surface modifier on. What does this do? Well, it let me get clean, smooth surfaces and edges in my technical renders. Instead of harsh, sharp edges, I could make my objects as smooth as I wanted with much less effort. Take a look:

Modeling a Technical Piece with Subdivision Surface Modifiers
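
For the script-minded, here’s roughly the same setup in bpy (a sketch; in practice I added the supporting edge loops by hand in edit mode, so edge creasing here is just a scripted stand-in with a similar sharpening effect):

```python
# A minimal sketch, assuming bpy: add a Subdivision Surface modifier to the
# active object, then crease edges marked sharp so they survive the smoothing.
import bpy

obj = bpy.context.active_object
subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 2            # viewport subdivisions
subsurf.render_levels = 3     # render-time subdivisions

# Creasing pulls the subdivided surface toward an edge, much like adding a
# tight supporting edge loop next to it. (In Blender 4.x, crease moved to a
# "crease_edge" mesh attribute; this is the pre-4.0 API.)
for edge in obj.data.edges:
    if edge.use_edge_sharp:
        edge.crease = 1.0
```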

This is important because, as you may have noticed, nothing in the world is absolutely 100% sharp-edged. There is always a slight bevel or softness to an edge. The subdivision surface modifier divided and smoothed out my boxy geometry as I modeled. To get sharper edges, I would add a lateral edge loop and bring it closer to the exterior edge. As I did so, the exterior edge would become sharper and sharper. The result? Well, I was able to model some pretty interesting hard surfaces. Inspired by the first image I showed you, I decided to model a “tank” of sorts built out of a shipping container, something that might be driven in the future, perhaps. I decided to have it stranded out at a docking station, and I mentally built a story around that. I thought about where every item would go, and why it would be there. I used some particle simulations to add some snow to the shot, and took a TON of time lighting and texturing. The result? Have a look:

I liked where this was going. I went WAY overkill with this. It took about 3 weeks of on-and-off free time to model all of the little details. Most of them aren’t even visible. For example, did you notice this flag bracket? Or these other details?

No? Well, the flag bracket was hidden from view, and those other details were hard to see. But I still modeled them. I modeled everything. There is even a fully furnished computer control area in the interior.

Next, I went on and did a project inspired by some of Beeple’s caterpillar images (see the gallery above). After around 2 weeks of work, I had these:

I added the human element using the open source “MakeHuman” program that I found out there on the great World Wide Web (it’s great, isn’t it?). Just like the previous image, a TON of post-processing effects were done in Photoshop. Even the depth of field was done in PS using a depth pass! That’s right. See that black and white image in the Instagram gallery? I used that to define the blurriness of the image at any given point using the “Lens Blur” effect in Photoshop. Just add the depth pass as an image channel, select it in the effect dialog box, and voila! You have an automatic depth of field effect… in post production! I also used the depth pass as a mask for smoke effects, light flares, etc. Very useful trick.
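
Photoshop’s Lens Blur did the real work here, but if you’d like to see the principle in code, here’s a minimal Python sketch using Pillow and NumPy (the filenames are placeholders): the grayscale depth pass decides how much blur each pixel receives.

```python
# A minimal sketch of depth-driven blur: pre-blur the render at a few
# strengths, then blend between them per pixel according to the depth pass.
import numpy as np
from PIL import Image, ImageFilter

image = Image.open("render.png").convert("RGB")        # hypothetical render
depth = np.asarray(Image.open("depth.png").convert("L")) / 255.0

# Blur levels from sharp (radius 0) to very soft (radius 10).
radii = (0, 2, 5, 10)
stack = np.stack([
    np.asarray(image.filter(ImageFilter.GaussianBlur(r)), dtype=np.float32)
    for r in radii
])

# Map depth 0..1 onto the stack and interpolate between neighboring levels.
idx = depth * (len(radii) - 1)
lo = np.floor(idx).astype(int)
hi = np.clip(lo + 1, 0, len(radii) - 1)
t = (idx - lo)[..., None]

rows, cols = np.indices(depth.shape)
out = stack[lo, rows, cols] * (1 - t) + stack[hi, rows, cols] * t
Image.fromarray(out.astype(np.uint8)).save("render_dof.png")
```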

The last hard surface modelling project I’ve taken on is something… different. It is not finished yet, but I decided to model a device of some sort. I kind of went crazy on the details here. It was inspired by my pocket multimeter sitting on my desk as I write this (a multimeter measures voltage, current, resistance, and the like). I thought… what if we had a device that could warp space and time? A modulator… but for the fabric of reality (I know I’m crazy). So I came up with an idea: the “Localized Axial Spacetime Modulator”. It would use these spatula-looking antennae to warp space and time. I did a ton of background research on what this would entail so that I could make this thing look plausible. Also, just like Beeple, I added a bunch of real world elements. Sockets and materials that science-oriented people would recognize add realism to the model.

Remember… adding stuff that actually exists in the real world kind of convinces your audience that they’re looking at something real (this circuit doesn’t work by the way).

And now… the grand finale. This is the texturing I have so far: the screen of the device and the lower labels. Every part, every word, and every pixel of this image has a purpose, a meaning, and a reason for being there. I’m learning more and more that THAT is how you create convincing worlds: you have a why, a where, a when, a who… and fifty reasons WHY those things are the way they are.

Notice the warnings and stuff on the device, too. I got pretty creative. Some of it is common sense. The device is pretty similar to a signal generator or oscilloscope.

So… that’s all for now! If you’ve made it this far, congratulations! And Beeple, if you read this, you rock man! Go check out Beeple at www.beeple-crap.com, and head over to my site at www.newflightdigital.com to hop on my mailing list. I only send stuff out once a month on average, so you won’t get spammed. Or, shoot me an email at contact@newflightdigital.com.

Creating a Futuristic Concept Scene: Speed Designing with Photoshop and Blender

Last night, I created a set of simple concept images in roughly 4 hours of on-and-off work. My goal was to quickly develop a partial continuity between two images, establishing a sort of implied workflow. In other words, I wanted to first create a hand drawn digital image of the concept using a pressure sensitive pen tablet. Then, using this drawing as a guide, I wanted to speed model and texture a 3D environment inspired by the drawing, thereby establishing a workflow continuity commonly seen in the industry: from hand drawn concept image to rendered concept image.

I began in Photoshop, where I set up a 16×9-inch, 500 ppi document with no guidelines. I set the pen size to roughly 25 pixels for fine control of the design. I decided to go for a neofuturistic, bleak image of an astronaut of some sort traversing a somewhat barren area. I wanted to combine elements of the past into the image by featuring a draped cloth, which would cover the lower half of the figure’s visible physique. I began to draw with smoothing at around 50%. I first did a complete sketch of the man’s face from the inner muscle groups outward, even though most of this would later be covered by other elements, such as his helmet. I even included facial hair. The man’s thin face and the shape of his jaw helped dictate where the different elements of the helmet would go, and what shape they would take.

The first defining strokes I made staked out the area that would later make up the visor part of the helmet. I defined the opening that the man would be looking out of, as well as the collar piece and other essential components of the structure. Once I had pieced the entire structure together, I began shading the image according to the geometric contours using various pen widths and shapes until I was satisfied. I blocked out elements of the man’s bare face using white shapes with various opacities to define the surface of the helmet.

To go from 2D to 3D, I first sculpted the man’s bare face using the sculpting tools in Blender. I then began forming the geometric segments of the helmet around the face according to their required positions, frequently adjusting the design and shape of the pieces on the fly. When I was ready, I used Blender’s texture paint features to literally draw on color and damage marks, and applied these image textures to the model pieces in the shader node editor. I used glossy shaders for the most part, and reused the hand drawn textures as displacement maps to give depth to the material.
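
The material setup, sketched in bpy (node names are standard Blender shader nodes; the image name is a placeholder for the hand-painted texture):

```python
# A minimal sketch, assuming bpy: the hand-painted texture colors a glossy
# shader and doubles as a height map for displacement.
import bpy

mat = bpy.data.materials.new(name="HelmetPaint")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

paint = nodes.new("ShaderNodeTexImage")
# paint.image = bpy.data.images["helmet_paint"]  # hypothetical painted texture

glossy = nodes.new("ShaderNodeBsdfGlossy")
links.new(paint.outputs["Color"], glossy.inputs["Color"])

disp = nodes.new("ShaderNodeDisplacement")       # reuse the paint as height
disp.inputs["Scale"].default_value = 0.05
links.new(paint.outputs["Color"], disp.inputs["Height"])

output = nodes.new("ShaderNodeOutputMaterial")
links.new(glossy.outputs["BSDF"], output.inputs["Surface"])
links.new(disp.outputs["Displacement"], output.inputs["Displacement"])
```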

To achieve the cloth look, I created a subdivided plane with a hole in the middle around the figure. I “pinned” the vertices at the center so that they would not move, and then ran a cloth simulation using wind and turbulence forces to achieve a billowing cloak model. I textured this using a combination of an opaque velvet shader and a transparent shader in a mix shader, with the factor set to a stretched brick texture. This gave me long strands of “cloth”, which looked like a woven burlap material up close.
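
The simulation setup, sketched in bpy (object and vertex group names are placeholders; assigning the pin group to the cloth’s “mass” vertex group is what keeps those vertices locked in place):

```python
# A minimal sketch, assuming bpy: a cloth simulation on a subdivided plane,
# with pinned center vertices and wind/turbulence fields to billow the rest.
import bpy

cloak = bpy.data.objects["CloakPlane"]        # hypothetical subdivided plane
pins = cloak.vertex_groups.new(name="Pins")
# pins.add([...], 1.0, 'REPLACE')             # weight the center vertices 1.0

cloth = cloak.modifiers.new(name="Cloth", type='CLOTH')
cloth.settings.vertex_group_mass = "Pins"     # pinned vertices do not move

# Empty objects with force fields push the unpinned cloth around.
bpy.ops.object.effector_add(type='WIND')
bpy.ops.object.effector_add(type='TURBULENCE')
```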

I then ran a particle system with wind and a collision object to simulate snow, and rendered this separately. I pulled everything into Photoshop, applied a lookup table, touched up highlights and shadows using the brush tool, composited dust and smoke into the shot, and exported the final image. The resulting image is comparable to the original sketch, albeit vastly different in proportion.
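
For reference, the snow pass boils down to something like this in bpy (a sketch; the emitter and ground object names are placeholders):

```python
# A minimal sketch, assuming bpy: a plane above the scene emits "snow"
# particles, a wind field drifts them, and the ground catches them.
import bpy

emitter = bpy.data.objects["SnowEmitter"]     # hypothetical emitter plane
mod = emitter.modifiers.new(name="Snow", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.count = 5000                         # number of snowflakes
settings.lifetime = 250                       # frames each flake lives
settings.normal_factor = 0.0                  # no launch speed; gravity does it

ground = bpy.data.objects["Ground"]           # hypothetical floor object
ground.modifiers.new(name="Collision", type='COLLISION')

bpy.ops.object.effector_add(type='WIND')      # sideways drift for the flakes
```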

This exercise has given me some insight as to how concepts tend to change from conception to execution, and how one might go about adapting reference images for use in 3D environments. To see more of my work, go to www.newflightdigital.com. Feel free to reach out to me at contact@newflightdigital.com.