In addition to being interested in filmmaking, animation, visual effects and the arts, I also happen to like writing songs. In fact, I’d love to make a musical one day since musicals combine so many of the things I’m interested in such as storytelling, art, design, animation, and music. It’s hard for me to imagine much else that would be as fun, or as creative.
With that in mind, I decided to make an animated music video of a song I wrote titled Days Go By. While a music video is not quite a musical, it is not that different either. You can think of it as a musical on a smaller scale, and, after all, what works small should work big.
A wise person once said that the journey is the reward and so I’m looking forward to this creative journey. In addition, the best way to get better at something is to do it. So, besides the fun of the journey, I’m looking forward to increasing my knowledge about 3D animation, rendering and music along the way.
Blog and Vlog
I also decided it would be a good idea to document my progress on the video by posting reports about it right here on this blog as well as videos on my YouTube channel. This would give me a chance to show others what I am making and hopefully foster a lively discussion about 3D animation and the creative process. I hope you will learn something from me (or even get some inspiration). In turn, I would appreciate any opportunity to learn from you.
Comments are, of course, welcome and encouraged both on the blog and on the YouTube channel. If you’d like to be informed about updates as soon as they happen, please subscribe to the YouTube channel or send me a message through the Contact area of this blog and request to be added to my email list. I look forward to taking this journey together.
About The Song
I wrote the song Days Go By about a year ago, with the intention of it being used in a musical from the get-go, and recorded it in my home studio. The song has sort of a poppy feel, with touches of jazz and rock.
The DAW that I use currently is Avid Pro Tools 12. I’ve used other recording applications in the past such as Cubase and am interested in Studio One, but I am used to using Pro Tools. It also happens to be the industry standard.
The workstation that I use to record with is an HP Z840 which is just an awesome machine and one of the world’s most powerful workstations. If you are interested, you can read my review of it here.
The audio interface I used to record Days Go By was a Focusrite Scarlett 2i2. However, I’ve since replaced it with a Behringer U-Phoria because the Focusrite stopped working when I brought it to Florida last winter (maybe I banged it somewhere along the way). I must say, though, I’m liking the Behringer. It has MIDAS preamps, MIDI ports on the back and very sturdy construction, and it sells for a very reasonable price.
My main keyboard is a Korg SG-1D which I bought some time ago. It’s got 88 nicely weighted keys, and that’s where I usually write my songs. For guitars I have a red Univox Les Paul copy that was built in Japan in the 70s and a Fender dreadnought-style acoustic. The Univox has a nice tone to it and the Fender’s pretty good too.
I might go further into how I recorded Days Go By in a future post. In the meantime you might be wondering how you can hear the song, since it has already been recorded and mixed.
Here’s the thing: you can’t. Not because I don’t want you to hear it. In fact, I’d really like to play it for you. However, I’d like to save it for when the video is finished. That’s the one last thing that I would like to reveal after all the work is done. Maybe I’ll change my mind and play it before I finish the video. Then again, maybe I’ll wait.
The Mind Machine is an animated web series that I made not so long ago with Adobe After Effects. Below is the story of how this cult-classic-to-be was made, as well as some thoughts about independent animation production. There is also a set of images from the series, including pictures of the main characters. At the end of the story is a link to a playlist where you can watch all sixteen exciting episodes on YouTube for free. You are free to skip over the article and watch it at any time if you like (the link is at the bottom), but I suggest you read the story first. I leave that up to you. You can even binge watch if you want to, and see all sixteen parts in one sitting. Or you can watch one part a day for sixteen days (they’re about five minutes each).
The Making of The Mind Machine
Around the late 1990s, I was working as a freelance animator, compositor and motion graphics artist in New York City, mostly at a creative production company called Curious Pictures located near Astor Place in Greenwich Village.
It was an exciting time. The whole field of computer animation and visual effects was still in its infancy, yet every day the tools (software and hardware) were getting more sophisticated. After Effects had blown everyone’s mind a few years earlier with the incredible amount of creative flexibility it offered. It was also the time that I was starting to really get into 3D animation, something I find endlessly fascinating and which forms a large part of what I do today. It was a fun time at Curious Pictures, a period of my life that I look back upon fondly.
A decade before (in the late eighties and early nineties), the graphic arts industry had been revolutionized by digital technology, and now, about ten years later, the film, animation and video industries had undergone a similar transformation. Suddenly artists, filmmakers and animators had incredible tools at their fingertips that only a few years before were beyond their grasp. We could even afford to have them at home.
Slow Speeds Abound
Still, with all these advancements in technology, connections to the internet were slow at this time and consisted mainly of dial-up modems over copper wire (thank God we don’t have to hear that annoying dial-up sound anymore). Even with the slow speeds, if you were a photographer, illustrator or a painter, you could manage to put your stuff up on the web for the world to see. However, if you were a filmmaker, animator or video creator, it was another story. Connections were just too darn slow to attempt to stream video, unless your video was the size of a postage stamp. Even at that size, however, you still had to wait a long time for it to load on a 28k or 56k modem. It would be years before we could even imagine anything approaching today’s speedy connections, with 100 Mbps running over fiber optic cable into your home and the ability to stream high resolution motion pictures on the fly.
Most of the work that was done at Curious was made for the television screen. If you were serious about your independent creative work, however, you could transfer it to 35mm film to send around to film festivals. Those were basically your two alternatives.
It was around that time we heard about Flash (the web technology eventually acquired by Adobe). With it, you could make a form of animation that could be sent over copper wire modems and it wouldn’t take eons to go through. The principle that made this possible was that you weren’t actually sending video over the wire but small amounts of vector information.
For those not familiar with the difference between raster and vector, think of it like the difference between Adobe Photoshop and Adobe Illustrator. A Photoshop file is a bitmap, or a collection of pixels that form an image, while an image created in Illustrator is composed of shapes, or more precisely, mathematical descriptions of shapes. For example, a circle in a Photoshop file is a grid of pixels that looks like a circle, while a circle in Illustrator is the mathematical description of a circle, with values for its position, the size of its radius, and so on.
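To make the difference concrete, here’s a rough sketch in Python (with made-up data structures, not any actual Photoshop or Illustrator file format) of how much data each representation actually stores:

```python
# The "vector" circle: just a mathematical description, a few numbers.
vector_circle = {"cx": 100, "cy": 100, "radius": 50}

# The "raster" circle: every pixel stored explicitly in a 200x200 grid.
size = 200
raster_circle = [
    [1 if (x - 100) ** 2 + (y - 100) ** 2 <= 50 ** 2 else 0
     for x in range(size)]
    for y in range(size)
]

# The vector version is three values; the raster version is 40,000 pixels.
print(len(vector_circle))  # 3
print(size * size)         # 40000
```

Same circle on screen, wildly different amounts of data to send down the wire, which is the whole trick that made Flash animation practical over a modem.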
A frame of video is made up of pixels, and there are 30 of those frames a second. That is a lot of data to push through a wire, and it’s why video is so big. Of course there are ways to bring that size down by using compression, and early attempts to display video on the web were heavily compressed (and it looked like it, too). Compression has gotten a lot better through the years, though, and is still an important factor in online streaming.
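To put some rough numbers on it (these figures are illustrative, not anything from an actual spec), even a small uncompressed video frame adds up fast next to a 56k modem:

```python
# A modest 320x240 frame with 24-bit color (3 bytes per pixel), 30 fps.
width, height = 320, 240
bytes_per_pixel = 3
fps = 30

bytes_per_frame = width * height * bytes_per_pixel  # 230,400 bytes per frame
bits_per_second = bytes_per_frame * 8 * fps         # 55,296,000 bits per second

modem_bps = 56_000  # a 56k modem, on its very best day

# Uncompressed, the stream needs roughly 987 times the modem's capacity.
print(bits_per_second / modem_bps)
```

No wonder web video back then was postage-stamp sized and heavily compressed.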
Flash, however, offered an alternative for animators. Rather than pushing all that pixel data through the net, it just sent the vector data, which is a mere fraction of the size. Your computer then used that information to recreate the animation on your end. With Flash, the speed of the connection was no longer a problem, and as long as you worked in a certain style (a vector style, that is), animators could create work with Flash and share it with the entire world, even over slow modems.
Hey, Let’s Make a Movie
I, along with many others, became excited by the possibilities that Flash offered. If you were an animator, suddenly your sole possibility of distribution did not have to be a cable network or a film festival. The internet now also held the promise of a distribution platform, the only caveat was that it had to be Flash animation. If you were around at that time, you may remember a few Flash web series from this era.
Therefore, I, along with one of my friends, decided that we would create a story, animate it with Flash and put it on the internet for the world to enjoy. Our plan for our first project was to tell a picaresque adventure story in parts, each about five minutes long.
I had never written a script before, and so, with characteristic optimism, I set about doing so. At first I came up with a time travel story that had Nazis as the bad guys, which I liked, but my friend thought that the Nazis might be a little too edgy. While I like the story that became The Mind Machine, I still would have liked to do the original story. What’s wrong with edgy?
In any case, we started discussing other possibilities for the story and eventually decided that the theme of mind control, or something along those lines, would be interesting. So I went back to the drawing board and wrote a 117 page script called The Mind Machine.
My ideas about screenwriting and my approach to it have developed since then. Nonetheless, looking back at it now, it’s a pretty good yarn for a first script.
How Many Voices?
My friend had worked with a voice over artist named Ted Blumberg on a CD-ROM project, and so we called him up to see if he was interested in doing some of the voice acting for the movie. At first, we planned to have Ted do one or two of the leading male characters (perhaps Louis Jameson and Professor Bergman), but to my surprise he said “I’ll do all of them”. I thought to myself that it wouldn’t hurt to let him record as many of the voices as he wanted, but that we would end up only using him for a couple of them, since they would probably sound too similar. A few days after meeting him we went into a recording studio in downtown Manhattan.
Not only did Blumberg record all the male voices (except for a couple of news reporters at the end, which I ended up doing myself), but he did a great job on every one of them. They all sounded very different, with their own unique character and inflection. I challenge you to listen and find out for yourself.
Flash Not So Cool for Characters
After we finished recording Ted in the studio, we started animating the story in Flash. To my dismay, however, I shortly realized that Flash was not a good tool for character animation. As a matter of fact, it was downright clunky. I had been used to using much more advanced applications such as After Effects and other 3D packages which offer the user sophisticated features such as sub-pixel placement, velocity curves, opacity values, alpha channels, masks and so on.
Flash had none of those kinds of features, and the way it handled animation was not very user friendly. Its approach to character creation, to put it bluntly, was practically nonexistent (it was, however, a useful tool for creating interactive websites). I still somehow managed to put together a few minutes of animation, though I felt like the application was working against me rather than being a helpful partner in creativity (the way software should be). At the time I was also getting more into 3D animation, and the whole notion of having to continue working with Flash was something I was not looking forward to.
So, after some consideration, I gave up on The Mind Machine and tucked it away on a drive and a few backup CDs somewhere and kind of forgot about it. Flash was just too much of a pain in the neck. It might be good for building interactive websites (though HTML5 has changed that), but not for animating characters. The notion of sending vector data over the internet for animation was a neat idea, but having to create character animation in Flash was just too frustrating to deal with.
A few months later, September 11th dawned on New York and the two towers at the tip of Manhattan crumbled to the ground. It had a chilling effect on the city (and the rest of the country). Suddenly the whole happy and exuberant nineties seemed like a memory.
The next several years saw me working on a bunch of other projects; I worked less at Curious and spent more time working at an advertising agency. I was still doing a lot of After Effects, but 3D animation became more central to what I did. The Mind Machine was something I didn’t think too much about, since I didn’t want to work in Flash and my friend had moved away. In addition, the internet was getting faster, and because of this, Flash animation was becoming obsolete. In fact, today it is just about dead, and interactive graphic websites are being done in HTML5.
Then one day, about five years later, I came across the voice recordings we made of Ted in the studio and I listened to them. After not hearing them for so long, I was able to listen with fresh ears. Granted, there are things that I might have written differently now, but on the whole I was entertained throughout and thought it sounded pretty good. I began to think of all the time I spent writing it and going into a recording studio to record the voices. It seemed like a shame to let all that work go to waste. Plus, Ted had done such a great job voicing the characters. And so, over the next few days, I slowly decided that I would try to resurrect the project in some form.
I didn’t want to do it in Flash however. I didn’t even really feel like opening that program. “I know,” I thought, “I’ll do the whole thing with 3D software”. Those of you who do 3D animation might chuckle when considering how much work that is for one person. Still, I thought I was up to the task. Plus, I love doing 3D animation, so it would be fun. What could be better than that?
Another friend of mine, however, convinced me not to do it in 3D. According to him, it would be too overwhelming for anything less than an army of people. In addition, there was the render time and all the other stuff that goes along with 3D. Maybe my friend was right. Maybe it was foolish. So I decided not to take that road. However I still would have liked to make it in 3D.
After considering my options, I decided to make the movie in After Effects, a program that I had enjoyed using for many years. However, though it was a lot better than Flash, After Effects was also not really designed to do character animation. Today, I might use Adobe Character Animator or Anime Studio Pro for 2D character work, but back then neither had really been developed yet. After Effects had its benefits, but it also was difficult. Incidentally, 3D programs have much better tools for character work, such as pose morphs, bones, and so on.
However, After Effects was what I went with for the project. Though it was challenging, it had its good points, and I developed some interesting techniques and ways of working along the way. For example, I worked out an interesting system for animating the different phonemes (or the way the shape of your mouth changes while speaking). A lot of 2D computer animation uses replacement animation for the different phonemes. In this technique, the mouth and lips immediately switch from one shape to another, say, from an “a” shape to an “o” shape. The technique I worked out for the lips and mouth, however, uses splines to interpolate from one shape to another over several frames. This resulted in a much smoother look and feel for a character’s mouth while speaking.
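To show the idea, here is a minimal sketch in Python (not the actual After Effects setup; the point data and frame counts are invented, and I’ve used a linear blend for brevity where a spline would ease in and out) comparing replacement animation with interpolated mouth shapes:

```python
# Two mouth shapes as lists of (x, y) points (invented data).
shape_a = [(0.0, 0.0), (10.0, 2.0), (20.0, 0.0)]    # an "a" mouth
shape_o = [(2.0, -3.0), (10.0, 5.0), (18.0, -3.0)]  # an "o" mouth

def replacement(frame, switch_frame=5):
    # Replacement animation: snap instantly from one shape to the next.
    return shape_a if frame < switch_frame else shape_o

def interpolated(frame, total_frames=10):
    # Interpolated animation: blend each point gradually over the
    # transition, so the mouth morphs instead of popping.
    t = min(max(frame / total_frames, 0.0), 1.0)
    return [
        (ax + (ox - ax) * t, ay + (oy - ay) * t)
        for (ax, ay), (ox, oy) in zip(shape_a, shape_o)
    ]

# At the midpoint, the interpolated mouth is halfway between the shapes.
print(interpolated(5))  # [(1.0, -1.5), (10.0, 3.5), (19.0, -1.5)]
```

With replacement animation there is no in-between frame at all; with interpolation, every frame of the transition gets its own blended shape, which is what gives the smoother result.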
Another technique I worked out was a way to handle head turns. Like replacement animation for mouth shapes, in a lot of 2D animation, head rotations often abruptly switch from one position to another. For The Mind Machine, I figured out a way to smoothly rotate a head over an arbitrary number of frames. The result was that head rotations look very smooth and appear surprisingly lifelike. In fact, they almost look like they could have been done in 3D.
Both of these techniques I developed over a period of time contributed a lot to the look and feel of the animation in The Mind Machine. Without interpolating splines for the phonemes and the smooth rotation of the head turns, I would have been much less satisfied with the results.
Before I started production, I cut a lot of dialogue from the audio. The original script ran around ninety minutes, but I cut about thirty minutes from it for a final running time of around sixty minutes. I did so not only so there would be less work for me to do, but because, upon final analysis, there was excess dialogue that wasn’t really needed.
Around this time I also recorded the voice of Julie, the female lead, performed by Marci Heit, a New York based talent. Marci did a great job with the character.
A Long Job
Making the movie took a long time. Not only because doing animation is a slow and detailed undertaking by its nature, but because I worked on it as my schedule permitted, sitting at my desk in my apartment often late at night or on the weekend. I tried to keep the design and the animation as simple as I could, although I was concerned how it would hold up for a one hour movie. In the end, I think the look and feel of the picture worked out pretty well and was effective for what it is.
The original idea was to make a web series of around sixteen parts, but while I was making it I began to think it would be better to present it as a single, unbroken movie, and that’s how I decided to do it.
After the artwork and animation were finished, I wrote the music and scored the film in Cubase (although I subsequently switched to Pro Tools and have been using Studio One lately). The music was an entire project in itself, and one that I found very fun. I used a popular sample library for the orchestra, and while I write music and songs, I had never composed orchestral music before, so it was a refreshing change and quite interesting. It made me want to do more of that kind of work. The theme song, however, I arranged for a jazz band, with a violin and clarinet alternating on the main melody. Music is a big part of what I am interested in, and one day I would like to write a musical.
Sound effects were then added and mixed with the score, and the whole picture was edited and finished in Adobe Premiere. It was my first attempt at really cutting anything in Premiere, and there were hundreds, if not over a thousand, cuts on the timeline. I’m pleased that Premiere Pro has really broken through lately and achieved the popularity it enjoys today.
I finally finished The Mind Machine about three years after I had resurrected it. The year was about 2008 or 2009.
I sent it in to the New York Film Festival, a renowned international film festival where movies with multi-million dollar budgets pine to be accepted. Here I was with a little movie that started out as a Flash project that was created with a budget that was next to nothing (besides the investment of my time). It wasn’t accepted. No surprise there.
Film festivals receive so many entries and only take a fraction of them. In addition, some festivals, I found, seemed more interested in the entrance fee than the films themselves. For these reasons, I didn’t really bother sending it to others. Besides, with the web today and the ability to stream to anywhere in the world, I question the whole necessity of film festivals in the first place.
I sent it to a distributor in New York that distributed some independent animated films. For a couple days I waited while they considered it. They eventually declined it since they said it wasn’t up their alley, but had nice words to say and wished me luck.
I wasn’t disappointed since I recognized that The Mind Machine is hard to pigeonhole into a certain genre. Even though it’s animation, the story is a picaresque thriller. It’s also not really meant to be funny (though it has funny moments). On top of that, some of it is scary or even macabre (like the fact that the Hellers are murderers, or that Lars was kidnapped and brainwashed). Basically, The Mind Machine wasn’t really aimed at kids, though many people think that animation means made for kids.
I didn’t try to find a distributor after that. Truth be told, what I like to do is create stuff: artwork, movies, paintings, music, stories, etc. I’d rather spend my time making more creations.
Although the movie was finished, I didn’t really know what to do with it so I just let it sit on my hard drive for a couple of years. During that time, the speed of the internet had improved so much that it had become a viable video delivery platform. The year was around 2011 or 2012 and broadband connections had sprouted up all in homes across the nation. Practically nobody was using dial up anymore.
The Mind Machine was originally conceived to be on the web (albeit as a Flash animation), and in the end I realized that is where it should be. I had come full circle. Not too long ago I divided it into the sixteen parts originally conceived for the Flash animation and uploaded it to YouTube, where you can watch it today.
I’m glad that I made The Mind Machine since, among other things, it was a very good learning experience. I wrote a script, learned a lot about editing, scored the music and did the sound effects. These are all valuable skills that will no doubt come in handy in the future.
Now you know about some of the things that went into the making of The Mind Machine; hopefully this has enlightened you in some way.
It’s an exciting time to be a filmmaker. Whether you create animation or shoot actors with a camera, not only do you have access to powerful tools that were traditionally out of the reach of most independent creators not all that long ago, but the internet has become fast enough to allow you to distribute your films around the world, on demand, for basically next to nothing. It’s really mind-boggling if you think about it.
By the way, I’m just putting the finishing touches on a new script that I’m excited to realize. I am not sure how exactly I will do it but stay tuned for more details.
And now, here is the link to watch all sixteen parts of The Mind Machine:
CMotion, MAXON’s CINEMA 4D cyclical motion generator, was introduced in R14. While it’s often thought of as a way to generate walk cycles on rigged characters, many are unaware of its power to generate repetitive motion of all kinds; it can be used for motion graphics, machines, and many other kinds of projects.
Some time ago, around the release of R14, I gave a talk in New York at the After Effects user group. In addition to showing how to use CMotion to animate the motion of a butterfly, I wanted to tell the audience about Cineware, which had just been introduced. For those of you who are unaware, Cineware is a live bridge between After Effects and CINEMA 4D. With it, you can import CINEMA 4D files into After Effects without rendering them out first. There’s a lot more you can do with Cineware, such as extracting and converting C4D cameras and lights into After Effects cameras and lights, as well as creating multi-pass layers for compositing on the fly.
C4D’s external compositing tag is also a very helpful tool; it allows you to attach elements in After Effects to coordinates generated by objects in C4D. Cineware can be useful for this as well, allowing you to pull in 3D positional data, which is converted into nulls in the comp.
Thus I designed my talk to cover three things. The first was how to use CMotion to easily animate the motion of a butterfly. Next, how to use Cineware to bring the 3D butterfly into After Effects and extract other 3D data such as the cameras (and lights). Finally, I used Cineware to extract positional data generated by two external compositing tags placed at the tips of the butterfly’s wings, onto which I affixed Trapcode Particular emitters to generate magic dust.
After the talk, I came back to my studio and captured the talk in a movie. I was intending to release it somewhere, but somehow it got tucked away into a folder and over the course of time I kind of forgot it was there. About a week ago I was poking around my drive and found the folder, so I decided to upload it to my YouTube Channel. I hope you find it useful. It was made in R14, but all of the lessons are still just as pertinent.
After this story was published, it soon became apparent that the rumors about the possible acquisition of The Foundry by Adobe were unfounded. The Foundry instead received a majority investment from England’s HG Capital. Well, it was an interesting rumor anyway, one that was wildly speculated on at the time. In that regard, I’ll leave up my thoughts about what the implications of such a merger would have been. It might be a fun read. – Joe.
I recently heard that Adobe may potentially acquire The Foundry, the London-based company whose applications include NUKE, a respected compositing program used in the VFX industry, and NUKE Studio which includes editorial and finishing (in addition to compositing and VFX). In addition to NUKE, The Foundry also makes other ambitious products such as MARI, a 3D painting and texturing solution as well as MODO, a 3D modeling and animation program. There is also KATANA, HIERO and other programs in The Foundry’s toolbox.
Suffice it to say that this is big news; if it happens, it could change the dynamics of the industry. How? It’s hard to say, but below are a few thoughts. Keep in mind that I haven’t officially confirmed this report and have only read about it online.
On one hand, a merger between Adobe and The Foundry makes sense; on the other hand, it brings up questions. First, there is the question of overlap. While After Effects, Adobe’s own compositing and VFX application, does some of the things that NUKE can do, each of them has relative strengths. One might say that After Effects is better at handling typography and graphics (as well as integrating better with the rest of the Creative Suite). It’s also timeline based.
NUKE, on the other hand, is node based (whether you prefer a layered, timeline-based interface or a node-based approach is up to the user). Its 3D compositing capabilities, in my opinion, are more robust, and it handles things like projection or camera mapping more effectively than After Effects.
There are more benefits to each program; however, it’s probably safe to say that, due to its features, After Effects has found its place among motion graphics artists, broadcast designers and independent filmmakers, while NUKE has carved out its niche among high end, VFX-heavy feature film and television projects.
Therefore, while there is some overlap between the two programs, they do serve different markets. Thus, if Adobe does acquire The Foundry, I could see the logic in keeping both programs alive and part of the Creative Cloud. Motion Graphics artists and broadcast design projects would gravitate towards After Effects while high-end VFX for television and film could be done in NUKE.
NUKE Studio, on the other hand, brings up other questions. NUKE Studio is a software suite that adds editing and finishing to NUKE. Adobe, of course, has worked hard to develop Premiere into a robust NLE that has been embraced by many professional editors. For finishing, Adobe has SpeedGrade, which it acquired a few years ago. NUKE Studio is a relatively new product which was released about a year ago, and I am not sure how much penetration into the market it has had yet. Whether Adobe would maintain two editing and finishing solutions or merge them into one (such as folding SpeedGrade’s Lumetri tools into Premiere) remains to be seen, though I doubt they would keep both.
MARI is a product that is used by professional 3D artists to paint complex texture maps for their creations, whether they be dinosaurs, aliens or mechanical monsters. I don’t see any reason why this program wouldn’t continue to live as a contributing member of the Creative Cloud.
What About 3D?
While Adobe has a wide variety of applications for a variety of media, it lacks a true 3D program. Once, a long time ago, it made a program called Adobe Dimensions which was, frankly, pretty bad and was summarily discontinued. After Effects, Photoshop and Illustrator all have some 3D(ish) capabilities, but, of course, none of them approaches what is possible with a real 3D application, since none of them is one.
This void has effectively been filled by MAXON’s CINEMA 4D which is widely used by Adobe users. CINEMA 4D is a highly capable 3D application which works great with Adobe applications such as After Effects, Photoshop and Illustrator. For the record, I am not only a satisfied CINEMA 4D user, but in the interest of full disclosure, I also travel to trade shows to demo the program as well as give talks about how to use it. CINEMA 4D has been enthusiastically embraced by motion graphics and visual effects artists, not only in North America but around the world.
Though not everyone realizes it, a version of CINEMA 4D called C4D Lite comes with the Creative Cloud, thanks to a working relationship between MAXON and Adobe, and a live bridge between the two programs called CINEWARE has been developed that allows After Effects users to import native C4D files without rendering them first. However, C4D Lite is not listed among the apps in CC; to use it, you launch it from within After Effects. Adobe and MAXON remain separate and autonomous companies.
The Foundry has a 3D application called MODO which, I suppose, would be acquired along with NUKE and MARI. While it would probably become available as part of a subscription to the Creative Cloud, it isn’t as mature a product as C4D, nor as popular. MODO is a compelling program, but C4D has features that are very useful to motion graphics artists, such as the MoGraph cloner toolset, a built-in motion tracker, quality rendering and solid character animation chops. It is thanks to features such as these that C4D has become so popular in the past few years.
It will be interesting to see what might come out of this tentative acquisition of The Foundry by Adobe and whether they will be able to successfully integrate The Foundry’s product line into the Creative Cloud in an elegant fashion without causing confusion and overlapping product lines. I guess we’ll have to wait and see. If you have any thoughts about this potentially important development, feel free to add them in the comments section below.
Mocha Pro is a highly regarded application for visual effects that is used by leading studios around the world. It’s been used in the creation of such blockbuster motion pictures as The Hobbit, Black Swan, The Amazing Spider-Man, Harry Potter and more. In February 2013, Imagineer Systems was honored by the Academy of Motion Picture Arts and Sciences with a Scientific and Technical Award for mocha’s innovations, which have led to its widespread adoption in the VFX industry.
The fact that it won this prestigious award should be an indication that mocha Pro is an important and compelling piece of software with impressive capabilities, especially in the areas of rotoscoping and planar tracking.
Rotoscoping, as you may be aware, is the process of isolating objects in a scene over a series of frames. For example, you may wish to isolate a building in a scene to change its color. Or you might want to cut out a vehicle in order to lay it onto another background plate. Whatever the case might be, there are a million and one reasons to roto something and it is a common task in many large productions.
However, rotoscoping can be an extremely fussy and tedious process without the right tools. Programs such as After Effects have built-in tools for rotoscoping objects, but they fall far short of the roto tools found in mocha Pro.
What makes mocha so powerful is that it is a planar tracker as opposed to a point tracker. Planar tracking tracks the movement of planes in your scene, whether those planes are moving in two dimensions (horizontally and vertically) or in three dimensions with perspective.
This is incredibly useful in a wide variety of shots where you might like to add images to television screens and computer monitors, or create the kind of slick-looking graphical interfaces first made popular in Steven Spielberg’s 2002 film Minority Report and seen in many other movies such as Iron Man. The list of uses for planar tracking goes on, from adding billboards and signs to walls, advertisements to the side of a bus, or a logo to the front of a book.
However, planar tracking is also very useful in rotoscoping. They are complementary. For example, you can track a plane (such as the surface of a wall), and then create roto shapes for, let’s say, the paintings on that wall. Next you can link the roto shapes to the planar track which causes them to automatically move in conjunction with the plane. If the linked roto masks happen to drift a bit during the shot, it can easily be fixed with a few judicious keyframes here and there.
By linking roto masks to a planar track, the process of rotoscoping becomes much easier than trying to rotoscope objects manually frame by frame. Of course this saves you a lot of time and frustration. If you have never used mocha before and are accustomed to doing manual roto work, once you do it this way, you’ll never look back.
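The linking idea above can be sketched in a few lines of Python. This is an illustrative toy, not mocha’s actual implementation: the per-frame track is simplified to a translation plus a uniform scale, and all the names here are invented for the example.

```python
# Toy sketch of "link shape to track": the roto shape is drawn once on a
# reference frame, then each frame's tracked transform moves every vertex.
# (Hypothetical example; a real planar track carries a full 2D transform.)

def link_shape_to_track(shape, track):
    """shape: list of (x, y) vertices drawn on the reference frame.
    track: per-frame dicts with the plane's translation and scale.
    Returns the shape's vertex positions for every frame."""
    frames = []
    for t in track:
        moved = [(x * t["scale"] + t["dx"], y * t["scale"] + t["dy"])
                 for (x, y) in shape]
        frames.append(moved)
    return frames

painting = [(0, 0), (40, 0), (40, 30), (0, 30)]     # roto shape at frame 0
track = [{"dx": 0, "dy": 0, "scale": 1.0},          # frame 0: identity
         {"dx": 5, "dy": 2, "scale": 1.5}]          # frame 1: plane moved/grew
print(link_shape_to_track(painting, track)[1][2])   # -> (65.0, 47.0)
```

Because the shape is only authored once, fixing drift means adjusting a handful of transform keyframes rather than re-drawing the mask on every frame, which is exactly why the linked workflow is so much faster than manual roto.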
As mentioned before, what makes mocha so powerful is that, unlike After Effects, which has a point tracker (that is, it tracks a single point in your shot, or two points if you are tracking rotation), mocha is a planar tracker. What that means is that mocha Pro tracks and analyzes an area, or pattern, of pixels and derives a plane from it.
If the plane is mostly moving in two dimensions you can tell the software to limit the track to translation, scale, rotation and shear. For more complicated tracks including three dimensional movements, you can add perspective.
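Mathematically, a planar track of this kind boils down to a 3x3 homography per frame. The Python sketch below is my own illustration, not mocha code: with the bottom row of the matrix fixed at [0, 0, 1], the transform covers translation, scale, rotation and shear; letting the bottom row vary adds perspective.

```python
# Illustrative sketch: mapping 2D points through a 3x3 planar homography.

def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography given as nested lists."""
    x, y = point
    # Homogeneous multiply: [x', y', w'] = H * [x, y, 1]
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xp / w, yp / w)  # divide out w to get back to 2D

# Affine case -- bottom row [0, 0, 1]: translation, scale, rotation, shear.
# Here, a simple translate by (10, 5):
affine = [[1, 0, 10],
          [0, 1, 5],
          [0, 0, 1]]
print(apply_homography(affine, (2, 3)))  # -> (12.0, 8.0)

# Perspective case -- a non-zero bottom-row term makes the divisor w depend
# on position, so points farther along x are compressed, as on a receding wall:
persp = [[1, 0, 0],
         [0, 1, 0],
         [0.001, 0, 1]]
print(apply_homography(persp, (100, 50)))
```

Limiting a track to translation, scale, rotation and shear simply constrains which entries of this matrix are allowed to change, which is why restricted tracks are more stable on mostly-2D motion.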
One thing I like about tracking in mocha Pro is that you can change the tracking data as you go along. For example, let’s say you are tracking the floor in a scene and at some point the part of the ground you are tracking moves out of frame. No need to start over; simply move the area you are tracking to another suitable position on the ground and continue.
When you are done with tracking and rotoscoping, you can export the tracking data or the shape data from mocha Pro into a variety of different programs where it can be used. These programs include Adobe After Effects, NUKE, Flame, Quantel, Fusion and Adobe Premiere Pro.
In the case of After Effects, if you export the shape data you can simply paste it onto a layer by choosing Paste mocha mask in the Edit menu. The result is an identical AE mask on the layer with keys on every frame that cause it to perform precisely as it did in mocha Pro.
If you’re not after the mask, but rather the tracking data, you can export it from mocha Pro and paste it onto a layer in After Effects. This results in position, scale and rotation keyframes that cause the layer to move in the same way as the tracked object.
If you’d rather render out an image sequence or QuickTime movie to be used as a matte in your compositing program, you can do that too, and include things like motion blur and per-vertex feathering.
Other compelling features
Another feature I really like in mocha Pro is the Remove Tool. This handy tool is incredibly useful for rig and wire removal as well as removing things like imperfections, blemishes, microphones or even entire objects or actors. By creating a garbage matte around an object you want to remove, mocha will analyze the background and magically remove it from the scene. If there is not enough clean background in the shot for it to work its magic, you can provide your own clean plate for the purpose. This is an important and useful feature which can save you many hours of work.
mocha Pro can also analyze a scene and solve for a 3D camera, which it can export to applications such as After Effects, CINEMA 4D, NUKE and Maya. The camera solver is a useful feature, though these days programs such as After Effects, C4D and NUKE have capable motion trackers built in. If you don’t use those packages, however, mocha Pro’s camera solver should be useful to you.
Lens distortion can sometimes present a problem when working with a scene shot with a wide lens which has caused a bulging effect in the image. The lens tab in mocha Pro has tools that can rectify lens distortion and straighten things out. You can also track a plane with the lens distortion as is and export a distortion map for later use in compositing. Imagineer also provides a free lens effect plugin for use in After Effects.
New in Version 4
Now let’s examine some of the innovative features that are new in version 4. mocha Pro 4 has a new stereoscopic 3D workflow which allows users to analyze the differences between the left and right camera streams and solve for the disparity. This can then be applied to tracking, camera solving, the object remove module and image stabilization. In other words, mocha Pro’s planar tracker can do multi-stream image analysis that automatically detects and keeps track of the difference between the two eyes, allowing tracking and roto tasks to be done on uncorrected footage. The outcome is that far less manual keyframing is needed both for solving tracks and for creating roto shapes on stereoscopic footage.
In the past, typical stereoscopic production workflows required correcting the footage first, fixing the vertical alignment and other differences between the right and left eyes, before doing any tracking or roto work.
While mocha has always played nicely with After Effects, version 4 now also allows for the pasting of roto masks directly into Premiere Pro timelines. This is useful to isolate elements, or to selectively blur or color correct areas in the shot. Now editors as well as visual effects artists can enjoy the benefits of mocha. Version 4 also improves Quicktime and MPEG support.
Advanced VFX facilities that utilize Python scripting can now integrate mocha 4 more deeply into their pipelines thanks to newly added support for Python scripting. Examples might include integrating mocha Pro 4 with an asset management system, or allowing technical directors to automate mocha Pro tasks.
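As a rough sketch of what such an integration might look like, here is a hypothetical Python driver. To be clear, the function names and the submission callable below are stand-ins I made up, not Imagineer’s actual scripting API; the point is the pipeline shape: a technical director’s script walks shots pulled from an asset manager and queues tracking jobs without anyone opening the UI.

```python
# Hypothetical pipeline sketch -- the job-submission call is a stub, NOT
# mocha Pro's real Python API. It shows the automation pattern only.

def queue_mocha_jobs(shots, submit):
    """shots: dicts from an asset management system; submit: callable that
    would hand a job to mocha Pro's scripting interface (stubbed here)."""
    queued = []
    for shot in shots:
        job = {
            "clip": shot["plate"],                   # source footage path
            "export": shot["plate"] + ".track.txt",  # tracking-data output
        }
        submit(job)        # in a real pipeline, this kicks off the track
        queued.append(job)
    return queued

# Example run with a do-nothing submitter standing in for the real call:
jobs = queue_mocha_jobs(
    [{"plate": "/shots/sc01/plate.mov"}, {"plate": "/shots/sc02/plate.mov"}],
    submit=lambda job: None,
)
print(len(jobs))  # -> 2
```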
Also new in mocha Pro 4 is an improved user interface as well as high resolution retina display support.
If you are a freelance VFX artist who might not need all the features offered by mocha Pro, Imagineer Systems has released a version of mocha just for you called mocha Plus.
As you may be aware, every version of After Effects comes with a free version of mocha called mocha AE CC, which allows for planar tracking and rotoscoping inside of After Effects. While this might do for some projects, mocha AE CC lacks many features, such as the advanced modules for object removal and 3D tracking. In addition, it only works with After Effects CC, so you won’t be able to export the tracking and shape data to another program such as NUKE.
mocha Plus 4 is a nice option for those who want more features than mocha AE CC, yet don’t need everything that mocha Pro has to offer. Like Pro, it has powerful planar tracking and roto tools as well as professional VFX modules such as the 3D camera solver, lens correction tools and support for copying and pasting mocha roto masks directly into Premiere Pro. It also offers increased export options.
If you don’t already own it, mocha Pro 4 (and mocha Plus) will no doubt prove to be an important program in your VFX pipeline and occupy a place in the “go to” category of software in your toolbox. Any project that requires tracking and roto work will benefit from its advanced features and could save you many hours of tedious work. If you are currently using version 3, it is a worthy upgrade.
If you use After Effects, you can upgrade to mocha Plus or mocha Pro. More information and pricing can be found on Imagineer Systems’ website: http://www.imagineersystems.com/