Friday, October 27, 2006

Shout Out to All Them CGI Artists

I was just wondering whether CGI (Computer-Generated Imagery) will replace actors one day. With the level of technology rising and artists in different fields getting better at their skills, we will surely reach a day when we fire actors and replace them with CG in commercials and feature films. I know this might sound weird, but that is where things are heading. The latest software, including Autodesk's innovations and investment in packages like Autodesk Maya, Autodesk 3ds Max and Autodesk MotionBuilder, proves to be at the cutting edge of technology. Other highly respected vendors like Softimage, Maxon, Luxology and Pixologic (the creators of XSI, Cinema 4D, Modo and ZBrush respectively), plus many more, were also at SIGGRAPH to showcase the next trend in technology, and all of them are pointing in one direction: character enhancement, rendering and modeling improvements. These vendors have since released the latest versions of the software they market, making good on their SIGGRAPH promises, and all have seen great improvement, with some completely rewritten from the ground up. I will be posting the latest reviews from top professionals in the industry soon, and I will admit that CGI is getting very good even if it doesn't replace real actors (...I know when I mentioned replacing actors, many of you were thinking how outrageous that statement was, but hey, that's just how fast CGI is advancing through the film industry).
Mfawa Alfred Onen a.k.a Muffy

Monday, December 26, 2005

3ds max 8 Review By Todd Sheridan Perry

The MAXScript debugger, OpenEXR support, and XRef enhancements are just a few of the many upgrades in 3ds Max 8:

Here we are again, at the time when a new version of software is released, when production companies want to know whether the new features are worth the trouble of upgrading. Is it stable? Is it backward compatible? Will plug-ins have to be upgraded? Will it blow up my network in the middle of a project? And in this case, with Autodesk's 3ds Max 8, will it make it to version 9, or are we going to be looking at 3D Studio Maya v1? This last question is something that both Max users AND Maya users should be thinking about – and one that hopefully Autodesk will adequately answer in the next few months.
But…let's avoid the future for a moment, so that we can focus on the here and now…and look at the newest developments in Max 8. I’m just going to step through the New Enhancement docs and give you my insights – which I know you are hanging on the edge of your seat for – being the egotistical digital supervisor that I am.
Scene Management

Since the beginning of digital animation, companies have had to track enormous amounts of data. Models, textures, animation, lighting scenes, and rendered elements all have to be managed, along with updated versions, while backing up previous versions. Usually it has been up to the company to develop an internal system of databases and tools for the artists to use so that the project is maintained somewhat automatically. Alienbrain is one system that has permeated some film and game studios, ranging in price from $700-$2,200 per seat depending on what kind of seat you are looking for (artist, developer, manager), with another 25% subscription plan for maintenance.
And Autodesk has its own system called Vault, which ships with AutoCAD Mechanical ($4,100), AutoCAD Electrical ($5,200) and the Inventor Series ($5,100). These may sound like rather pricey investments to small shops, but if you weigh the cost against hiring programmers to put together your own version, or worse, not having any kind of tracking and versioning system at all, the cost is quite minimal.



3ds Max 8 benefits from access to the Autodesk Vault server, a data management solution hosted from a secure, centralized location. 3ds Max artists will be able to find, reference and reuse their 3D content in game, visual effects and design visualization projects.

Max 8 has incorporated an asset tracking hook that will talk to the Vault server (or an Alienbrain server). In very basic terms, the Vault acts as a librarian. You find the book (scene file) you want and check it out. The librarian gives you a copy of the book. You bring the book home (your local drive, which matches the structure of the Vault server – if set up correctly), and you work with it. When you are finished, you bring the book back and check it in, where it is then available for others to check out. When the librarian puts your book back on the shelf, she actually takes the previous copy of the book and puts it somewhere else for safekeeping, just in case someone needs to read it later, and then puts the book you just gave her in its place. This versions up the book while retaining copies of the older versions – without changing the name of the book, I might add, so it works quite well with XRefs. If you happen to have the book checked out and someone else wishes to check it out, the librarian will tell them the book is checked out, but they may read a copy without the capability to change anything. This safeguards against two artists working on the same asset and overwriting each other's work. Alienbrain has a plug-in that integrates into Max, but with the latest release, you are supposed to be able to interface with multiple asset tracking systems – even, presumably, your own.

OPINION: I cannot emphasize enough how important asset tracking is to a production pipeline. Whether you have five artists or 500, you will seriously cripple your productivity if you choose to avoid it. I'm glad that Autodesk has pushed its integration into Max. However, I have only had a chance to use Max 8 on its own, and I use neither Alienbrain nor Vault, so I cannot honestly comment on the ease and utilization of the tools. The concept, however, is solid and is being used in many pipelines. The next step? Getting artists to abide by the rules and procedures so that the concept works.
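If the librarian analogy feels abstract, here is a minimal sketch of the check-out/check-in/versioning idea in plain Python. This is hypothetical illustration only – the class, methods and asset names are made up and have nothing to do with the actual Vault or Alienbrain APIs.

```python
# Minimal sketch of check-out/check-in versioning, in the spirit of the
# "librarian" analogy above. Hypothetical code -- not the Vault/Alienbrain API.

class AssetVault:
    def __init__(self):
        self.versions = {}      # asset name -> list of stored versions
        self.checked_out = {}   # asset name -> artist who holds the lock

    def check_out(self, asset, artist):
        """Hand the artist a working copy; lock the asset against other writers."""
        if asset in self.checked_out:
            # Someone else holds the lock: give a read-only copy instead.
            return {"data": self.versions[asset][-1], "read_only": True}
        self.checked_out[asset] = artist
        latest = self.versions.get(asset, [None])[-1]
        return {"data": latest, "read_only": False}

    def check_in(self, asset, artist, new_data):
        """Shelve the new version; keep the old ones under the same name."""
        if self.checked_out.get(asset) != artist:
            raise PermissionError(f"{artist} does not hold the lock on {asset}")
        self.versions.setdefault(asset, []).append(new_data)
        del self.checked_out[asset]


vault = AssetVault()
vault.versions["velociraptor.max"] = ["v1 mesh"]
copy = vault.check_out("velociraptor.max", "modeler_a")   # exclusive copy
peek = vault.check_out("velociraptor.max", "lighter_b")   # read-only copy
vault.check_in("velociraptor.max", "modeler_a", "v2 mesh")
print(vault.versions["velociraptor.max"])                 # ['v1 mesh', 'v2 mesh']
```

The point is the same as the librarian's: one writer at a time, older versions kept on the shelf, and the asset's name never changes.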


XRef Enhancements

The XRef features were made more robust. The new interface is a bit more elegant than in previous versions. Camera and light XRefs have received some love. Nested XRefs are indicated through a hierarchy list, which makes them noticeably easier to read. For the beginners in the audience, XRefs were created (I think Softimage used them first – I could be wrong) to allow animators and lighters down the line to reference a model or scene rather than merge it into the scene. This adds flexibility so that a model could be altered, or its maps or materials changed, and the changes would propagate down the chain so that the lighters would be lighting the latest and greatest character. In XReffing complete scenes (as opposed to specific objects in scenes), Max 8 has provided an option called "overlay". This brings the scene in as an XRef, but breaks the connection during the session. The overlay option avoids creating cyclical dependency loops when fileA references fileB, which also references fileA (see the sketch after this section). This is a great idea…in theory. Real-life production often finds ways to break references. With each iteration, however, the XRef concept is becoming more stable.

OPINION: I generally like the new methodology incorporated into Max 8. You can make choices about XReffing or merging modifier stacks for objects in a particular scene. But it would be more helpful if you could choose AFTER you've referenced the object. Let's say you referenced an object and its modifier stack (which ostensibly collapses the stack), and you realize that you need to make an alteration to the Bend modifier on the object. It would be great to change the modifier style in order to recover the modifier stack. Currently, you would have to re-XRef the object with the "merge" parameter selected for the modifier action. Also, there is still no support for XReffing animation – although the plan is to have more robust support for XReffed animation controllers in version 9. This is a serious bummer for me for items that quite often should not be changed and are being used by many different artists. Cameras are a good example. In a visual effects pipeline, more often than not, a camera is tracked in something like Boujou or SynthEyes and exported to the 3D (and now, more than ever, the 2D) program. That camera is based on the live action camera, which, unless there is a reshoot, will not change. Now the animator who will be animating the ubiquitous Velociraptor model needs that camera; so does the lighting TD in order to render the model, and the FX guy who is making the tornado that is the life-long nemesis of the Velociraptor – both of which must be present in every effects movie for it to be taken seriously. If we could XRef the camera file into each of those scenes, things would be peachy – which, by the way, we CAN do. BUT…if the camera DOES change, currently the new animation will not propagate into the files that are referencing it, which simply opens doors for errors between elements.

OPINION: The moral? Still use XRefs with caution and skepticism. Perform many tests with different types of uses for them before committing to that process in production.
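To picture the cyclical-dependency problem that the overlay option sidesteps, here is a toy dependency-graph sketch. It is not how Max resolves XRefs internally – the file names and the "treat a repeat visit as an overlay" rule are just illustration of the fileA → fileB → fileA situation described above.

```python
# Hypothetical sketch of why an "overlay" option matters: resolving nested
# XRefs naively can loop forever when fileA references fileB and fileB
# references fileA. Not Max internals -- just the dependency-graph idea.

references = {
    "fileA.max": ["fileB.max"],
    "fileB.max": ["fileA.max"],   # cyclical dependency
    "fileC.max": [],
}

def resolve(scene, stack=None):
    """Return a load order, skipping any reference that would close a loop."""
    stack = stack or []
    if scene in stack:
        # Treat the repeat visit like an overlay: acknowledge it, don't recurse.
        print(f"overlay: {scene} is already on the load path {stack}")
        return []
    order = []
    for child in references.get(scene, []):
        order += resolve(child, stack + [scene])
    return order + [scene]

print(resolve("fileA.max"))   # ['fileB.max', 'fileA.max'], no infinite loop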

Scene States

The addition of Scene States provides a way to save the Max scene in its current state and then restore it back to that state after you have changed it. In the past, a lighting TD would make a bunch of different lighting passes for one scene – a beauty pass, a rim light pass, an ambient occlusion pass, a shadow pass, a pass with the shiny sphere, a pass with ONLY the reflection of the shiny sphere in the glossy checkerboard. Each of these passes would have a corresponding scene file. Not such a bad thing, if the lighter were not dependent on the fickle animator before him in the pipeline who didn't quite like his last animation. So, the shiny sphere is now rolling a little further than before. The TD sighs and proceeds to assign the new animation to ALL 30 passes that he just rendered – either that or take the latest animation and set up the same settings for all the passes. This wouldn't be much of a problem if XReffing animation worked.

Along comes Scene States. With a little ingenuity and planning, the TD can save most if not all of his passes in the SAME SCENE: camera, lighting, object and layer properties, as well as material assignments and environment parameters. Now, when the indecisive animator submits a new animation, that animation can be assigned to the object (or, if you are using point caches, not much has to be done at all), and then all the different passes that you, as a kickass TD, have diligently placed into your scene can be rendered again with just one click of a button.

OPINION: Scene States used with the new Batch Rendering feature (discussed in just a bit) should be mandatory in every studio. You cannot measure the amount of time it saves, nor count the number of render errors you will avoid. When you begin to render pass after pass after pass for a single shot, it becomes an incredible burden to keep track of it all. In previous versions of Max, I would use a couple of Blur scripts called The Onion and Render Elements (not to be confused with Max's internal render elements), which together would provide a similar service. Autodesk seems to have taken the hint and brought it another step further by adding the ability to save camera and light positions as well as properties, individual object properties, and seemingly endless iterations of different materials.
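Conceptually, a scene state is just a named snapshot of the settings that differ from pass to pass. The sketch below shows that idea in plain Python – the dictionary keys and state names are made up for illustration; this is not the Max Scene States API.

```python
# A minimal sketch of the Scene States idea: snapshot the handful of
# settings that differ per pass, then restore them before each render.
# Plain Python, not the actual Max Scene States API.

scene = {
    "lights":    ["key", "fill", "rim"],
    "materials": {"sphere": "chrome"},
    "hidden":    [],
}

scene_states = {}

def save_state(name):
    # Copy the containers so later edits don't bleed into the saved state.
    scene_states[name] = {
        "lights": list(scene["lights"]),
        "materials": dict(scene["materials"]),
        "hidden": list(scene["hidden"]),
    }

def restore_state(name):
    scene.update(scene_states[name])

save_state("beauty")
scene["lights"] = ["rim"]
scene["materials"]["sphere"] = "matte_white"
save_state("rim_pass")

restore_state("beauty")      # back to key/fill/rim and the chrome sphere
print(scene["lights"], scene["materials"])
```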
MAXScript Debugger
For you scriptheads out there, Autodesk has some love for you as well. As a token of their gratitude for all the powerful scripts the user base has created, they have offered you a debugger so that you can analyze your script’s variables and threads.
OPINION: I'm afraid that I can't comment too much on this feature – not because I don't write scripts, but because you need something that is actually practical to analyze. It's not like checking dynamics by making a ball and a plane and adding gravity and collision…proving that gravity, even in CG, works. It just doesn't make sense to write a little script just to test the debugger. From reading comments from others, a potential downside is that the debugger opens up access to scripts, including encrypted ones. This has caused some consternation among developers. I cannot say whether this has been rectified by the release date, but it is something to think about if you are developing scripts that will be distributed among the Max community.

OpenEXR Support

To be a real and honest contender in the visual effects world, there are certain compatibility issues you have to abide by. Cineon is one format that you must be able to incorporate. The other, which is fairly new to the game, is the OpenEXR file format, which was developed by Industrial Light & Magic and released as open source for the rest of the world to utilize. It's a format that can accommodate full- or half-float images as well as 8-bit integer. It supports RGBA with the option to premultiply the alpha. AND you can choose between a number of lossless compression algorithms. With all the options, the goal is to keep the color depth deep and the file size small. Additionally, you have the ability to assign metadata into the file, but more importantly, you can utilize EXR's additional channels for all the usual suspects: Z-depth, coverage, velocity, UVs, etc. You may ask why you should use EXR as opposed to, say, RPF files. Outside of making yourself look like you are on the cutting edge by using ILM's format, you can take advantage of the compression algorithms – which you cannot use in RPFs, leaving you with unnecessarily large files.

Autodesk evidently licensed the original plug-in developed by Splutterfish, who provided it as a free download for previous versions of Max. It's now available within Max; I'm sure ILM pushed this a bit, since Max is used extensively in the digimatte department at the Bay Area facility.

OVERVIEW: Glad to see it.
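To show roughly why "deep color, small files" works, here is a back-of-the-envelope sketch comparing buffer sizes for a 32-bit float channel, a 16-bit half channel, and the half channel after lossless compression. It uses numpy and zlib only, writes no real .exr file, and the smooth test ramp compresses better than typical render output would – treat the numbers as illustration, not a benchmark.

```python
# Back-of-the-envelope sketch of why OpenEXR's half float plus lossless
# compression keeps files small while staying "deep". This only compares raw
# buffer sizes with numpy and zlib; it does not write a real .exr file, and
# real render output compresses less well than this smooth gradient.
import zlib
import numpy as np

h, w = 1080, 1920
channel = np.linspace(0.0, 4.0, h * w).reshape(h, w)   # a smooth HDR-ish ramp

full_float = channel.astype(np.float32).tobytes()   # 32-bit float channel
half_float = channel.astype(np.float16).tobytes()   # 16-bit "half" channel

print(len(full_float))                  # ~8.3 MB for one 1080p channel
print(len(half_float))                  # half of that
print(len(zlib.compress(half_float)))   # smaller still after lossless compression
```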

Modeling

Hair and Fur:
You know, the topic of Hair and Fur is a whole different article unto itself. It was introduced for subscribers in Max 7.5 and fully integrated into Max 8. It's based on the solution provided by Shave and a Haircut, developed by Joe Alter. This program has proven itself for at least five years in productions like X-Men 2 and the upcoming King Kong. The basis for CG hair is to control thousands of hairs by driving them with a minimal number of well-placed guide hairs. The software then fills the space in between the control hairs with additional, interpolated hair. When you adjust the control hairs, the style of the whole hair system adjusts. With previous hair systems, like Shag:Hair for Max and Maya's own internal hair system, you basically had to set up your control hairs and style them by adjusting the control vertices of the splines. This approach is available in Max 8's Hair, but in my personal opinion, it's time consuming and tedious. Fortunately, Shave and a Haircut comes with its own styling interface to alleviate the tedium.
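The guide-hair idea is easy to picture with a toy example: style a few guides, and the in-between hairs are interpolated from them. The sketch below is purely illustrative – linear blending between two hand-placed polylines – and is not the Shave and a Haircut or Max Hair solver.

```python
# Toy illustration of the core hair idea above: a few guide hairs are styled
# by hand, and the space in between is filled with interpolated hairs.
# Hypothetical sketch, not the Shave and a Haircut / Max Hair implementation.
import numpy as np

# Two guide hairs, each a polyline of 5 points (root to tip).
guide_a = np.array([[0, 0, 0], [0, 1, 0.1], [0, 2, 0.3], [0, 3, 0.6], [0, 4, 1.0]])
guide_b = np.array([[2, 0, 0], [2, 1, 0.0], [2, 2, 0.1], [2, 3, 0.2], [2, 4, 0.3]])

def fill_hairs(a, b, count):
    """Blend two guide curves point-by-point to make `count` in-between hairs."""
    hairs = []
    for i in range(1, count + 1):
        t = i / (count + 1)               # 0..1 across the gap between guides
        hairs.append((1.0 - t) * a + t * b)
    return hairs

in_betweens = fill_hairs(guide_a, guide_b, count=3)
print(in_betweens[1])   # the middle hair sits halfway between the guides
```

When you comb a guide, every interpolated hair that depends on it follows – which is why grooming a handful of guides restyles the whole head.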

In the styling window, you can groom the hair with strokes of your mouse, pulling the hair around like a comb. You can clump it, curl it, flare it, scale it – all with a very interactive and responsive interface. Once you're done combing the 'do, you exit back out to Max with your changes intact. The combing tools have a shallow and short learning curve, and more than likely you can pound out four or five styles in the same time it takes to style one head of hair with the old tweak-the-spline methodology.
Once you have your style down, there are plenty of tools to adjust the look. Maps can control color, placement, length, density and thickness. Sliders control the frizz and kink. Furthermore, the actual hair material is controlled in the modifier rather than the material editor, and provides control over specular color along with randomness of each. You can even set mutant hairs, which are stray, odd colored hairs that occur as people age. For those people who love their own hair shaders, the hair can be converted to geometry and you can apply any shader you wish. The drawback to this is obvious – thousands or even millions of hairs can add up to a LOT of geometry and all that that implies. You can also use Mental Ray prims, which are internal to MR and will generate the hairs at render time.
And what good would hair be if it didn’t have dynamics? You’d basically have a fur helmet, and that’s no good unless you are animating Fred Flintstone and Barney Rubble at the Loyal Order of Water Buffalo Picnic. So, MaxHair provides reaction to momentum, gravity, wind, and collision to other objects.
The hair renders as a render effect into a special G-buffer after the primary render is complete. So, basically, your geometry renders, then the hair renders, and it composites on top of the geo. This appears to work fine with scanline, Mental Ray and Brazil. But all the render engines fail with the hair when it comes to using additional passes like Z-depth, coverage, velocity, etc. If you attempt to use Render Elements with scanline or Mental Ray (Brazil doesn't support them anyway), the hair does not show up in the extra passes. If you try to embed the extra info into RPF or EXR files, only the Z shows the hair. You can get hair data in most of the channels (minus velocity) if you use the MR prims instead of the buffer, but you can't utilize the Render Elements to save out separate files. You can also get the extra data if you render the hair as geometry – but again, you had better have a heap of RAM (remember, Windows can only utilize 2GB of RAM – maybe a little more if you engage the 3GB switch) and some free time. For those crazy compositing guys, an image with the object but not the hair only contributes to additional work and ultimately ends in tears.

OPINION: Good stuff. Lots of control. It needs a way to pass the data into the extra channels while still using the efficiency of the buffer, rather than resorting to geometry-based renders. A word to beginning 3D artists: just because these tools make creating hair easier, it does not mean that creating hair is easy. It's demanding and tedious, and you need to be part mathematician, part hairdresser. A lot of people in the industry are hair people, and only hair people. So, the moral of the story? Don't assume that you can suddenly make fabulous hair because there are fabulous tools. Come to think of it, this is the case with all the disciplines in the digital world – so apply this advice to everything I'm talking about.
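For readers unfamiliar with what "the hair composites on top of the geo" means in practice, it is essentially a standard straight-alpha over operation. The numpy sketch below uses made-up pixel values and is not the Max render-effect code – it only shows the compositing math.

```python
# What "the hair renders into a buffer and composites on top of the geo"
# amounts to: a standard straight-alpha "over" operation. Toy numpy sketch,
# not the Max render-effect code.
import numpy as np

h, w = 4, 4
geo_rgb  = np.full((h, w, 3), 0.2)            # grey geometry pass
hair_rgb = np.zeros((h, w, 3))
hair_a   = np.zeros((h, w, 1))
hair_rgb[1:3, 1:3] = [0.4, 0.25, 0.1]          # a brown patch of hair...
hair_a[1:3, 1:3] = 0.8                         # ...that is 80% opaque

# Straight-alpha over: result = hair * alpha + (1 - alpha) * background.
comp = hair_rgb * hair_a + (1.0 - hair_a) * geo_rgb
print(comp[2, 2], comp[0, 0])   # hair-covered pixel vs. bare geometry pixel
```

Which is also exactly why the compositors cry when the extra channels carry only the geometry: their Z-depth and velocity data no longer match the image they were handed.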

Cloth

Man, another huge topic that should be examined in its own article. Like Hair, Cloth was introduced into Max through the subscription program in the point release of Max 7. The technology was inherited from Stitch. It's quite a lovely little cloth program and far superior to the other option, Reactor, with two components to it: Garment Maker and Cloth.

Garment Maker is the tool that provides the method of creating the clothing. You lay the clothing out like a seamstress lays out her patterns before cutting the cloth. Splines define the patterns – the front of the shirt, the back, the sleeves, the collar. The splines are converted to a garment, which is composed of a surface of irregular triangles called a Delaunay mesh. This seems to have become known as the best mesh structure for simulating cloth. I can't say whether that is the case, but it seems to work well, despite how ugly the mesh might look.
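If you want to see what an "irregular triangle" Delaunay mesh looks like on a flat panel, SciPy can produce one from a scatter of points in a couple of lines. This is a rough sketch of the concept only – the jittered rectangle standing in for a shirt panel is invented, and this is not what Garment Maker runs internally.

```python
# The "Delaunay mesh" Garment Maker builds from pattern splines, sketched
# with SciPy: scatter points inside a panel outline and triangulate them.
# Concept illustration only -- not Garment Maker's internals.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# A crude rectangular "front of the shirt" panel: a grid of points, jittered
# so the triangulation comes out irregular rather than perfectly regular.
xs, ys = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 2, 10))
points = np.column_stack([xs.ravel(), ys.ravel()])
points += rng.normal(0, 0.02, points.shape)

tri = Delaunay(points)
print(len(points), "vertices,", len(tri.simplices), "triangles")
```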
Once the clothing is set up, you place the patterns and pull them together around the character through seams. It's kind of like threading a string through the pieces of cloth and pulling it taut. The second part of Cloth is setting up the dynamic properties of the cloth with a wide variety of parameters, such as bend, shear, density, thickness, air resistance, and collision parameters – simply a multitude of sliders and dials to tweak. But there are plenty of presets to help you get on your way.

OPINION: I've been using cloth since the Stitch days, and I've always been able to get decent results. I quite prefer it over such heavy hitters as Syflex, and certainly over Reactor. The simulation engine is fast and responsive, and the amount of control gives you plenty to work with. Once the cloth dynamics are solved, you can easily add a Point Cache 2 modifier (thanks to John Burnett) and remove the dynamics calculation – which, by the way, will increase your file size exponentially. With a bit of practice, a knowledgeable technical director/animator can churn out cloth simulations quickly without much pain.

Editable Poly Enhancements
For a number of Max versions now, Editable Polys have been the modeler's de facto choice when preparing a model for MeshSmoothing or SubD surfaces – especially for character modeling. With each version, the tools have become more and more robust, supplemented further by additional tools like PolyBoost. In this latest version, bridging has been extended to edges and now provides an interactive panel to adjust the parameters of the bridge. Chamfers have gained the ability to be made open, leaving a hole in the mesh where the chamfer happens. Clean Remove will remove edges, which is nothing new, but now, with the Ctrl key, you can also remove the verts that are left with only two edges, allowing you to create single straight edges. Ring and Edge Loops can trace down the loop a step at a time rather than automatically tracing the entire loop in one fell swoop. Edge Connect gives you a panel to adjust pinching and sliding of the newly created edges. And, finally, some new toggles have been added to the Subdivision Surface rollout so you can control the visibility of the cage when the object is SubDivved – a benefit for getting the cage out of the way when you want to adjust the mesh based on the isoline.
OPINION: Some nice little tools which feel like they will contribute to some time savings. A lot of little time savers add up to a lot of saved time. So I embrace the additions. I would still give up 20 Venti Mochas to make sure that I have PolyBoost on my system.
Skin Improvements

Numerous advancements have been added to the Skin modifier, the most important of which is, in my opinion, the Weight Tool dialog, which adds quick access for adjusting vertex weights. Not only do you have tools to assign common weight values, but you can store additional weight values so you can paste them onto other vertices. The dialog provides feedback as well, giving you the names of the currently selected bones and their current weights. The combination of the interactive tools and the data feedback saves a heap of button clicking.

OPINION: Riggers who are used to some of the tools in Maya should be happy with these additions.
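For anyone who has never rigged, the Weight Tool workflow boils down to bookkeeping like the sketch below: read a vertex's bone weights, normalize or tweak them, and paste them onto other vertices. The vertex IDs, bone names and functions are invented for illustration – this is not the Skin modifier API.

```python
# The Weight Tool workflow described above -- read the selected vertex's
# bone weights, tweak or copy them, paste them onto other vertices -- is
# essentially bookkeeping like this. Hypothetical sketch, not the Skin API.

skin_weights = {
    # vertex id -> {bone name: weight}
    101: {"upperarm": 0.7, "forearm": 0.3},
    102: {"upperarm": 0.9, "forearm": 0.2},   # sums to 1.1 -- needs normalizing
}

def normalize(weights):
    total = sum(weights.values())
    return {bone: w / total for bone, w in weights.items()}

def copy_weights(src, dst):
    """Paste one vertex's (normalized) weights onto another vertex."""
    skin_weights[dst] = normalize(skin_weights[src])

skin_weights[102] = normalize(skin_weights[102])
copy_weights(101, 103)                      # new vertex 103 gets 0.7 / 0.3
print(skin_weights[102], skin_weights[103])
```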
UVW Enhancements

Any tools you can get to alleviate the drastic tedium of UV mapping (second only to rotoscoping, IMHO) can only be seen as beneficial. Max 8 has added an internal dialog for exporting a UV map for editing in your favorite paint program (which should be Adobe Photoshop – if not, you probably shouldn't be working as a digital artist). They have expanded the Relax tool to give more flexibility when trying to remove distortions. A default checkerboard pattern has been added for troubleshooting stretching and bad UVs. Select Overlapped Faces gives you access to problematic faces that might be hard to see or select by simply using your selection marquee. Show Edge Distortion is a great little visualization tool that color-codes how much a particular edge is distorted relative to its length in the actual model. But…by far the most important advance is Pelt Mapping.

A pelt is a term referring to the skin of an animal with the fur still on it – which will probably upset PETA, but I'm guessing that PETA activists are too busy releasing non-indigenous test monkeys into the wild to bother themselves with protesting unsuspecting digital artists. I could replace the term pelt with "flay", but I suspect that would be worse. In terms of procedure, the methodology is the same. You slice the creature up the belly, under the arms and around the wrists by selecting the edges in the model; then, in the pelt dialog, you stretch out the skin flat, pulling outward radially with springs, so that the UVs spread out with minimal distortion. In the case of a human character, you would probably slice down the back so that the primary mapping would be the front. Some have made the claim that this newer form of mapping takes tasks that took days and compresses them into hours. I haven't found out who originally innovated this technique, but it has been available for Max before this through DeepUV and Texture Layers, and it has made its rounds in XSI and a couple of standalone UV tools.

OPINION: As one who stays as far away from UV mapping as humanly possible, I'd have to say that the pelt mapping system may actually get me to consider mapping a character…once. In speaking with artists who actually do this stuff, and whom I actually trust to give me an honest opinion, they absolutely love the new UV tools and would probably die happy knowing that they can use Pelt Mapping.
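The "stretch the skin flat with springs" step is, at heart, a relaxation solve: the cut boundary is pulled outward and pinned, and the interior UVs settle toward an even spread. The drastically simplified 2D sketch below (a pinned square frame and a Jacobi-style averaging loop) is an assumption-laden stand-in, not the Max pelt solver.

```python
# A drastically simplified picture of pelt mapping's "pull the skin flat
# with springs" step: boundary UVs are pinned to the stretcher frame and
# interior UVs relax toward the average of their neighbors.
# Not the Max solver -- just the general idea.
import numpy as np

n = 5                                         # a tiny 5x5 UV grid
uv = np.random.default_rng(2).uniform(0.3, 0.7, (n, n, 2))   # crumpled interior
xs = np.linspace(0.0, 1.0, n)

# Pin the outer ring of UVs to the "stretcher frame".
uv[0, :]  = np.column_stack([xs, np.zeros(n)])   # top edge at v = 0
uv[-1, :] = np.column_stack([xs, np.ones(n)])    # bottom edge at v = 1
uv[:, 0]  = np.column_stack([np.zeros(n), xs])   # left edge at u = 0
uv[:, -1] = np.column_stack([np.ones(n), xs])    # right edge at u = 1

for _ in range(200):                       # Jacobi-style relaxation
    interior = 0.25 * (uv[:-2, 1:-1] + uv[2:, 1:-1] + uv[1:-1, :-2] + uv[1:-1, 2:])
    uv[1:-1, 1:-1] = interior              # boundary stays pinned

print(np.round(uv[2, 2], 3))               # center vertex relaxes toward (0.5, 0.5)
```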

Sweep Modifier
On to more mundane tools…the Sweep modifier has been added as a caffeinated loft tool. You can sweep shapes along a spline to create curved railings, an exhaust pipe, or perhaps a Jack in the Box curly fry – all of which you could do with the loft tool. The largest difference, in my opinion, is that if the sweep path turns back on itself, the model will automatically Boolean itself to create a contiguous surface – a major time saver. Another benefit is that a number of shapes for sweeping are built into the tool, alleviating the need to create additional splines (even though similar shapes are now available in the Extended Splines).
Material
RealWorld Mapping

When I started working with RenderMan after a few years in the "lowly" 3D Studio MAX world, I found that everyone was using texture maps with a square aspect ratio, no matter what the shape of the mapped object was. It was puzzling. I kept thinking that was silly, because you were losing fidelity in the direction the map was being squeezed. Then, lo and behold, I found out that Max was doing the SAME thing no matter what size image you fed it. At least RenderMan let you KNOW that your image was not square. Max just went on its merry way and let you think it wasn't screwing with your art.

In Max 8, you can now use non-normalized or "real-world" mapping, which not only maintains the aspect ratio of your map but lets you designate a specific size for the map. This is not the size in pixels, but rather the size in world units. So, for example, if you have a tileable map of a strip of wallpaper, you can tell Max that the map should be 16" wide and 84" high (a very plausible size for a strip of wallpaper). When Max puts the map onto the very complex "wall" model, the map will be 16" wide and will tile along the wall at 16" intervals.
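The arithmetic behind this is simple enough to write down. The sketch below just divides world-space position by the map's declared physical size to get a tiling UV – an illustration of the idea, not Max's actual UV generator.

```python
# The arithmetic behind "real-world" mapping: instead of stretching one copy
# of the map across the whole face, UVs come from world-space size divided by
# the map's declared size, so the texture tiles at a fixed physical interval.
# Simple illustration, not Max's UV generator.

map_width_in  = 16.0    # the wallpaper strip is declared 16" wide
map_height_in = 84.0    # ...and 84" high

def real_world_uv(x_in, y_in):
    """World-space position on the wall (inches) -> tiling UV coordinate."""
    return x_in / map_width_in, y_in / map_height_in

print(real_world_uv(8.0, 42.0))    # (0.5, 0.5) -- middle of the first tile
print(real_world_uv(40.0, 84.0))   # (2.5, 1.0) -- third tile across, one tile up
```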
OPINION: This may not sound like a big thing, but I think it's one of those "big things come in little packages" situations. The more people use it, the more prevalent it will become. You might say that the more people use it…the more people will use it – at the risk of sounding like Will Rogers or Yogi Berra, depending on whether you think I'm a wise cowboy or a dumb-as-rocks baseball legend.
Animation

Biped
Character Studio has some new features as well – some we've been demanding since Winsor McCay needed them for Gertie, some we didn't even know we needed. Bipeds are now equipped with "twisty bones" in all of the limbs. This is to replicate the twisting action that occurs between the radius and the ulna in the forearm when you twist your wrist. Previously, scripts and expressions could be written to create this effect, or else you would end up with sausage links at the wrist. This twisting action can also be used to alleviate pinching around the shoulders, armpits and the taint.
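In number form, the twisty-bone idea is just spreading one rotation across a chain instead of dumping it on a single bone. The sketch below uses an invented 80-degree wrist twist and a simple linear falloff – purely illustrative, not the Character Studio solver.

```python
# What "twisty bones" buy you, in number form: instead of the whole wrist
# twist landing on one forearm bone (the "sausage link"), the twist is
# distributed along a chain of bones. Illustrative only; hypothetical values.

def distribute_twist(total_twist_deg, twist_bone_count):
    """Spread a wrist twist linearly from the elbow (0%) to the wrist (100%)."""
    return [total_twist_deg * i / (twist_bone_count - 1)
            for i in range(twist_bone_count)]

print(distribute_twist(80.0, 4))   # [0.0, 26.7, 53.3, 80.0] across the forearm
```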

In earlier versions of CS, you could set bending so that if you rotated a spine bone, the rest of the spine would rotate as well. This functionality has been boosted with axis restraints, child inheritance, smooth interpolation for the local X of the first and last links, and buttons to zero out the rotations. The bend features can be applied to numerous chains, including the spine, neck, and tail (each of which has gained more links in this latest version).
IK blends are now allowed to propagate through animation layers in the biped. Previously, if you added an adjustment layer to the animation, the IK blends (which allow you to lock hands and feet, causing the chain to move from FK to IK) would break, causing much time-consuming rework. An amazing benefit on TOP of this amazing benefit is that a biped animation can be retargeted to skeletons of different proportions (within reason), while the hands and feet remain firmly planted in the same place as in the original animation.

And now, FINALLY, and with only a cursory reference in the New Features guide – Character Studio animation can be controlled with Euler curves in the Workbench and Curve Editor. Animators have been nearly up in arms, having been oppressed by the TCB animation controller for almost a decade. Softimage, Maya, Houdini…all have had curve-based animation since their inception. This advance in CS is big stuff and will open doors for many disgruntled animators.

OPINION: Character Studio has been around for a while, and I have issues with numerous parts of it – one of which is a slow development cycle. I ultimately feel that a custom character rig is going to be able to outrun a biped any day of the week. BUT, that being said, the biped has been getting better and better. It is a terrific tool for quickly laying out and rigging characters, and it provides a service to studios that simply do not have the resources and time to develop custom rigs. In fact, it has been indispensable in the production of hundreds of game titles, television shows (most notoriously the Dancing Baby on Ally McBeal), and films. This is not something that should be taken lightly. A hopeful wish for the next iteration would be built-in spline-type controllers to provide a bit more of a handle to grab while animating, as opposed to grabbing the actual bone.

New Motion Capture Formats

I've been working with motion capture data for years now, usually delivered in the CSM format. It's an easy and clean format, easily accommodated within Character Studio; however, after being applied to the skeleton, the motion is surprisingly inaccurate. This is not usually that big an issue, because in a vacuum the motion seems intact. On further examination, and in comparison to the original recorded motion, it appears that the CS solver plays a little fast and loose with the data. Because of these issues (and, I suspect, a lot of prodding from the motion capture studios), it looks like we now have a couple more file formats to use – HTR and TRC, both developed by Motion Analysis Corporation.
The HTR (Hierarchical Translation-Rotation) format is a beefier version of the older BioVision format (BVH). Within the file, there is information about the actual skeleton, including the original translation and rotation values of the different segments of the body. The stored motion data is grouped by segment, and all data is read for one segment before the next set of motion data is applied to the following segment. From the underlying math, you will get a more accurate solution, because you are receiving explicit and unfiltered position and rotation data that does not need to go through an additional solver. The Motion Analysis TRC format contains world position data for the numerous markers located on the performer during the capture session. A skeleton is created and adjusted based on the marker data, and the motion is then mapped to this skeleton. This is exactly the same structure as the CSM format, and in fact, Max has conversion utilities for changing TRC to CSM format to make the data more accessible to the biped.

OPINION: I haven't really had a chance to utilize the HTR or TRC formats, but I would just as soon lean toward the accuracy of HTR. To be completely honest, I'm not sure what the advantage of TRC is, outside of accommodating a previously unsupported format. If Character Studio still has to solve for the biped animation using the same engine it uses for the CSM data, then there is still going to be noticeably loose motion – unless, of course, the solver has been updated.
Load/Save Animation
A new loader/saver has been added to save out your scene animation to an external file, which can then be loaded and remapped into other scenes. This replaces the Merge Animation… command and is faster and less heavy. Originally, Merge Animation seemed to actually be a MAXScript that temporarily merged in the file you were trying to parse the animation from. Most of the time, this meant that while you were selecting objects to update the animation from, you had a duplicate scene within your scene. This process always made me a little wary, and it was not always as dependable as it needed to be.

The XAF format saves the animation information into a delimited ASCII file. To retrieve the data, you load the file into your Max scene. The animated objects show up in the dialog window, and you drag and drop the object name onto the name of the object in the scene (or you can have it automatically link the animation by matching the name or a similar name). The mapping information (which animation will be applied to which object) can also be saved and retrieved – good for when you are going through numerous shots in which you are transferring the animation from one object to another, differently named object over and over.

OPINION: It's fast and seemingly dependable, with lots of parameters and toggles to control how the data is wrangled. But even more important, in my eyes, is that the data can be accessed and modified through tools written in Perl, Python, and databases. Through pattern searching and tokenizing, the data can be manipulated and resaved on a global scale, tying the studio and its tools together. I cannot presume to give a list of tools that might be written, but the simple fact that the data is accessible broadens Max's usability in larger pipelines. This is the same kind of benefit that makes RIB files for RenderMan, or Maya's option to save as binary OR ASCII, so pervasive in the industry.
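To make the "pattern searching and tokenizing" point concrete: once animation lives in a plain text file, global fixes become a one-line regex pass. The snippet below uses a made-up stand-in for the file contents (the real XAF layout is not reproduced here, and the node names are invented) – the point is the workflow, not the format.

```python
# Once animation data lives in a plain ASCII file, global fixes become a
# regex pass. The "saved_animation" text below is a made-up stand-in, NOT
# the real XAF syntax; it only illustrates pattern-based retargeting.
import re

saved_animation = """\
node "raptor_arm_L" key 0 pos 0.0 1.0 0.0
node "raptor_arm_L" key 10 pos 0.5 1.2 0.0
node "raptor_leg_L" key 0 pos 0.0 0.0 0.0
"""

# Retarget every left-side node to its right-side counterpart in one pass.
retargeted = re.sub(r'(node "raptor_\w+)_L"', r'\1_R"', saved_animation)
print(retargeted)
```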

Motion Mixer Support for Non-Biped Objects

Motion Mixer was (and still is) a part of the Character Studio system. Through it, you could blend different biped animations or, even more specifically, different biped animation PARTS – the animation of the hand, for instance. Very complex animations can be developed quickly with the same approach you would take to non-linear editing or sound design. You set up layers of animations and determine the transition parameters between them. You can easily re-time the animations as well, which will smoothly interpolate the frames so that you don't get stepping in animation that has been slowed down.

Max 8 takes this technology and applies it to animation outside of the biped. Blending animations is easier and more efficient. It could be done in previous versions of Max if you knew how to use the List controller, but the approach was pretty cumbersome. Even though you can use this for any non-biped objects, it's really geared toward custom animation rigs, including quadrupeds, insects, or octopi. The mixer will import the XAF files described above, possibly along with an XMM file to guide the mixer on how to map the animation. These clips can then be manipulated and mixed together to form new animations.

OPINION: The Motion Mixer takes a little getting used to, but it's worth the time and effort. I'm speaking from the functionality of the mixer when it was exclusive to the biped, however. I didn't have a spider rig to really put the new mixer functionality through the wringer. So, take this with a little grain of salt – just a little one, though.

Rendering

Batch Render

As mentioned above in relation to the Scene States, the Batch Render function in Max 8 provides a way to render out many different passes from one Max scene. By utilizing the information in the Scene States, along with custom output paths, custom frame ranges, and the ability to choose between multiple cameras, you can build a list of tons of different passes and objects to render…and send them all to the queue for rendering.

OPINION: At the risk of repeating myself, this is an invaluable tool for lighting and FX TDs. When things change in shots (which almost NEVER happens…not), re-rendering elements becomes more of a chore than setting up the shot to begin with. By saving the state of the scene and saving the output paths for each element, you are reducing the chance of error significantly. An animation changes? You make the change to one file, go to the Batch Render dialog, toggle the passes you need to render, then click Render. No passes with the wrong lights. No passes where you forgot to hide the troll creature. No passes where the trilobite doesn't have a matte material. You can even access the render presets that you have saved from the main render dialog window. All the little details are stored in the Scene States and the Batch Render. And that's what computers are for – keeping track of the details so that you can focus on the creative stuff.
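The Batch Render idea in miniature: each entry pairs a saved scene state with its own camera, frame range and output path, and the whole list is submitted in one go. The state names, camera and paths below are invented for illustration; this is not the Max Batch Render API.

```python
# The Batch Render idea in miniature: each entry pairs a saved Scene State
# with its own camera, frame range, and output path, and the whole list is
# submitted in one go. Hypothetical sketch, not the Max Batch Render API.

batch = [
    {"state": "beauty",    "camera": "shotCam", "frames": (1, 120), "out": "beauty.####.exr"},
    {"state": "rim_pass",  "camera": "shotCam", "frames": (1, 120), "out": "rim.####.exr"},
    {"state": "occlusion", "camera": "shotCam", "frames": (1, 120), "out": "ao.####.exr"},
]

def render_batch(entries):
    for entry in entries:
        # In a real pipeline this would restore the scene state and submit
        # the job to the farm; here we just report what would happen.
        start, end = entry["frames"]
        print(f"restore '{entry['state']}', render {entry['camera']} "
              f"frames {start}-{end} -> {entry['out']}")

render_batch(batch)
```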

Mental ray

Mental Ray has been kept up to date, and access to it has expanded. Max users are no longer limited to a specific number of Mental Ray nodes per Max license; you can now fill your render farm with Mental Ray. A small step for mankind, a huge step for digital artists.
You can also take advantage of Mental Ray satellites to assign render buckets to eight other machines, beefing up your ability to churn out quick test renders or, for print, distributing that 8K frame to many machines. It's a good tool; RenderMan has had it for many moons, and VRay has a nice distributed rendering system too. Keep in mind that you should basically use these to speed up single-frame renders – like test renders. Once you start rendering animations, and the frame count exceeds the number of machines you have, you lose the speed advantage. Always good to have, though.

OPINION: Mental Ray is tried and true. It's been around since Aristotle, and its maturity gives it a lot of credence. It's used throughout the industry. If you have the power and the time, you can make some beautiful pictures. So, as long as it's part of Max, use it. And since they expanded the licenses, there is no reason not to use it. The "Mental Ray is too expensive for my little studio" argument doesn't fly anymore.
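A quick bit of arithmetic shows why the speed advantage evaporates on long sequences. The machine count and per-frame time below are made-up numbers, and the comparison ignores network and bucket overhead (which works against the split), so treat it as a rough sketch of the trade-off, nothing more.

```python
# Why distributed buckets help single frames but not long sequences: with an
# 8-machine pool, splitting one frame across all eight is only a win until
# you have at least eight frames to hand out whole. Rough arithmetic only;
# hypothetical numbers, and overhead is ignored (it favors whole frames).

machines = 8
minutes_per_frame = 40.0

def wall_clock(frame_count, split_frames):
    if split_frames:
        # All machines cooperate on one frame at a time.
        return frame_count * (minutes_per_frame / machines)
    # One whole frame per machine, in waves.
    waves = -(-frame_count // machines)      # ceiling division
    return waves * minutes_per_frame

print(wall_clock(1, True), wall_clock(1, False))     # 5.0 vs 40.0 -- split wins
print(wall_clock(64, True), wall_clock(64, False))   # 320.0 vs 320.0 -- a wash
```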
Oh! You can also render Mental Ray calculations to textures. So, if you need an ambient occlusion pass, you can render the AO to a texture and remap it onto the scene or the object. Free AO for however many times you want to render the scene – at least until you change the position of an object. For rigid models like airplanes, starfighters, asteroids, and other things that you animate with a pos/rot controller, rendering to textures can be a lifesaver. I come back to ambient occlusion because it is so prevalent in the industry --- but if you can “bake” the ambient occlusion into the ship with a texture, it’ll save you gobs of render time for the shots where the ship is flying around evading raining lava blobs.
OVERVIEW: I think I'm gonna like this new version of Max. It has a bunch of new tools – and "new" means that they are newly integral to Max. Lots of efficiency tools. If you are new to 3D and can afford to get your paws on Max, it's a good primer, but it's still powerful enough to keep up with the best of them – even though the best of them often scoff at Max. Don't let 'em fool you: there are very few arguments that put any production-worthy 3D program light years ahead of any other. And I'm talking production-worthy, so I don't want any Poser or Bryce people talking about how they made a feature film on their dad's Dell.
For productions that are already using Max 7, you know the drill. Wait a little while before upgrading the studio – at least wait for a point upgrade. And for God's sake, don't upgrade in the middle of a project – but you know that. For those of you who are considering Max 8 for production, go ahead and get Max 7. It has proved itself stable in many productions, and you can start tailoring your pipeline to accommodate it. Then upgrade at the point release.

Todd Sheridan Perry is co-owner of Max Ink Cafe and Max Ink Productions. As VFX Supervisor and Technical Director, Todd's experience ranges from video games to film to television. His recent credits include LOTR: The Two Towers, The Chronicles of Riddick and numerous commercial spots. He is currently wrapping up the 3D supervision for The Triangle on the SciFi Channel. You can reach him at http://www.maxinkcafe.com, http://www.toddsheridanperry.com and http://www.maxinkproductions.com