I recently saw an expert modeller/rigger model and rig a fairly realistic character in a single hour while hammering home the essentials every animator should know. I later saw the same modeller rig one leg of a character for inverse kinematics, explaining other fundamentals, in under ten minutes. I heard him after preparing this page, which either points to the way the Golden Rule works, or to how often one needs to rewrite/update a web page once it's posted. This page aspires to be everything the beginner needs to see and hear to become prolific with LIGHTWAVE 5.6 or INSPIRE -- like an eight-hour CD that could be watched a few hundred times, such as the www.tv3d.com CD selling for $30 (I cannot vouch for its quality, not having seen it). It is still pretty rough, but there are a number of tips here that did not make it into the Manuals, and they may help one better exploit other tutorials http://members.shaw.ca/lightwavetutorials/Main_Menu.htm and feel fairly confident that if a feature looks new, it probably is. Other tutorials may be found at web sites like www.cgtalk.com's Lightwave chat area, and some of this info may be useful there, though much will be skewed toward Lightwave 7.5 .
But first, a FAQ:
What are your qualifications?
Who am I? I studied computer animation a little at college (UCLA), I did a fair amount of trial-and-error with INSPIRE, and the only other academic qualification I have is curiosity about the world and aesthetics. I once co-wrote some cartoons for "Liquid TV." I feel I was "called," and you will find numerous references to spirituality here. This web page has taught me that the world could get college-educated for a little more than $50 each, and that whatever barriers seem to stand in the way of this are not my doing. The Golden Rule is something to actively experiment with: be manipulative, and see how the world tries to manipulate you; be compassionate, and see how much caring surrounds you. (Try not to do both at the same time; that corrupts the results.) This implies that the world is largely subjective, but that can be fun too. Be helpful, and see if it comes back.
What is in here?
INSPIRE will let you animate humans as well as teddy bears, but if an elbow looks not-quite-right, your options may seem more limited than they are. Put some parent bones between it and the joint position, add an anchor bone or falloff bone, tweak a limited bone setting, or move and re-rest the bone. Current tutorials about these functions tend to focus on newer toolsets, though some helpful LW 5.6 tutorials may be found at http://members.shaw.ca/lightwavetutorials/Main_Menu.htm; sadly, the Manual doesn't go into great detail. Because INSPIRE is a dongleless version of LIGHTWAVE 5.6, it is made available to students for as little as $30, and those making the leap from a program like Bryce are likely to be impressed. INSPIRE has many advantages over programs like "Animation Master," though Hash has inverse kinematics. For instance, one may import INSPIRE scenes and objects into Lightwave 7.5. And if you don't go on a burden binge, making excruciatingly difficult characters or scenes, you may get enjoyable results. Another nice thing about INSPIRE is that it can take "dxf" and "lwo" object files from the Internet, which one may then animate, texture and modify.
Grid snap a square; using the pen tool, size it in the window using "a," copy it, and rotate it; make a box this way, then "merge" to make the box one object. It's pretty important to understand that merge tells a group of polygons that they are one object. Leaving an object unmerged is sometimes done in INSPIRE to save having to keep object components separated under different file names, for easier selecting when modifying. Save as. INSPIRE has a form of grid-snap that is always on, though the increments change in magnitude as you zoom in or out.
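If it helps to think of "merge" in terms of what it must do to the data, here is a rough sketch in Python (the names and logic are my own guess at the idea, not Newtek's actual implementation): points sitting on the same spot are collapsed into one shared point, and every polygon is re-pointed at the survivors.

```python
def merge_points(points, polygons, tolerance=1e-6):
    """points: list of (x, y, z) tuples; polygons: lists of point indices.
    Collapse near-coincident points so separate squares become one box."""
    merged = []          # surviving unique points
    remap = {}           # old point index -> new point index
    for i, p in enumerate(points):
        for j, q in enumerate(merged):
            if all(abs(a - b) <= tolerance for a, b in zip(p, q)):
                remap[i] = j     # duplicate: reuse the survivor
                break
        else:
            remap[i] = len(merged)
            merged.append(p)
    new_polys = [[remap[i] for i in poly] for poly in polygons]
    return merged, new_polys

# Two squares sharing an edge, stored with duplicated corner points:
pts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # square A
       (1, 0, 0), (2, 0, 0), (2, 1, 0), (1, 1, 0)]   # square B repeats two
polys = [[0, 1, 2, 3], [4, 5, 6, 7]]
mpts, mpolys = merge_points(pts, polys)
print(len(pts), "->", len(mpts))   # 8 -> 6
```

After the merge the two squares genuinely share their common edge, which is what lets smoothing and deformation treat them as one surface.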
Make a ball. Now select one point. (Did you click "Points" at the bottom of the screen?) Okay, select one point and then deselect the one behind it by clicking on it. Where was I? Oh yes, click Move, and try to move the point one increment, the tiniest one you can. That's a pretty huge "grid snap" increment. Now ZOOM IN, and do it again. Better press "a" to autofit the object to the screen again. Now you know.
Open a professional model like a fish. Use "," and "." with "Alt" and the cursor/mouse to move the view, and "a" to position the fish; use Display "Hide Unselected" after RMB "lasso select" to select certain polygons and texture them, using the Objects panel, "Surface." Do you have to have the polygons selected to change the name of a texture? To change its color? Hide Selected/Hide Unselected is used A LOT when models become fairly dense, because slower machines get bogged down on high-detail objects unless the parts not being worked on are hidden. Hide Selected is also very good for selecting points, like the inside of a mouth, that are hard to find otherwise. RMB "lasso" deselect the ones you don't want. (This deselecting gets stale after the first few thousand times, making you use Hide more often.)
If you're like me, you're going to have a lot of fun with just the lasso. It's important to know that when you go over an area after you've selected it, that de-selects wherever you touch the cursor, AND that if you want to ADD to the area, you just press "shift" when you select or lasso.
Use the arrow keys to subdivide the "Box" object before "making" it. This is also described in the book. Click on "box," then drag a square in one view screen, then go down to another and make it a box, but before you press "make," press the up, down, left and right arrow keys. (You will need to do this with the mouse cursor repositioned in a couple of different views for all six sides to subdivide. The cursor doesn't do anything itself, but resting it in a viewscreen tells the arrows to create divisions along that set of axes.) Ta-dah! Make it have many polygons. Time to point drag a head. Use "shift" selecting. Use Magnet to make things round, or try Vortex to do this. "Rotate" is very good for things like jaw movement, once one learns that the axis of rotation is WHEREVER the mouse cursor is placed.
(Vortex is a goofy tool, it's one of the few "ball" manipulators that can work with the cursor outside the circle. The center crosshair has the strongest rotation effect, falling off to the circle, but the center of rotation -- the mouse cursor -- can be anywhere. It's used for opening closed eyelids.)
This is actually a very effective way to model objects like heads, which need upwards of 30 lines of detail. You may want to try it again, but using an image to work from. Circle all of the contours and creases before you start, just to get an idea where they should be. Don't work on the eyes; they tend to use some odd techniques. I have to tell you, in my opinion, if you don't give a darn about modelling but just want to do some animating, much of this material can be skipped, because it may not help you very much. You may need to know how detail is preserved in creases with extra points, how seams usually indicate an unfinished/deliberately-seamed model, and how to select areas you want to deform using the lasso and "Vortex" and "Pole 2." There are animators who almost never model or "rig" characters, but a lack of models and complex rigs in INSPIRE might lead one to think otherwise. You do NOT have to go there.
Make a ball and do this -- increasing the number of segments -- with the Options menu. Do point dragging to make a horse or other animal leg, using RMB lasso-select and Move/Scale/Rotate. By the way, CTRL will make the circle uniform when you start it.
In the above image, twelve cubes each have a ball shape and a cube shape in which a cube side has been subdivided and magnet-dragged to resemble a ball. This was a test for a silly quest: some way of doing ram-and-blend modelling with a "chamfer" function, which is not included with INSPIRE, though I believe it now exists for LIGHTWAVE. Where the ball and cube-ball overlap there is a dramatic seam in every case. (In a few cases the ball was omitted.) This is because the LIGHTWAVE rendering engine has an algorithm which stops smoothing assignment when an edge is met that is not related to the adjacent points (more or less). Boolean operations should remove this seam, but Boolean operations tend to double the number of points at a ram-and-blend joint, causing their own seam artifacts, unrelated to ram-and-blend seams. One way to get rid of the seam is to put one of the two welded objects into a "background" layer, subdivide it many times, and then magnet the foreground object to get it as close as possible to matching it, then "BGConform" the selected area that needs to be identical. Practice makes perfect. The other way to eliminate the seam is simple, but the home-student may not think of it: weld the seam. This can be a semi-automatic weld using "BGConform" or a point-by-point job. I have noticed that if only half the points of a seam are welded, this will sometimes be enough, at least for still work. There are other measures, like feather-edged texture bandages and motion-blurring bones, that can reduce seams, but all that is needed for ram-and-blend is a few welds.
Before having fun with the weld tool, let's look at smooth-shifting. This is one very popular tool, used for modelling from scratch without seams or welding. With a little practice, it gets some tremendous results. [Another must-know by now -- to select more than one polygon at a time, one can select a few using the RMB (right mouse button) lasso, then pick up the remainder using "shift" LMB (left mouse button). Shift-LMB allows one to add polygons to a selection after the LMB or RMB has been released.]
Make a ball.
Let's make a tree out of a ball using "smooth shift." Select the top polygons from the top half of a ball, and go to Multiply: Smooth Shift. Change the default of "1" to "0" (or backspace). Everyone does this. It is surprising to some that "1" is used as a default. Press "enter," but before you hit another button, go to Move and move the polygons. This is similar to "extrude," except that if you "extrude" the branch, there would be a polygon hidden inside the branch back at its "shoulder." Smooth-shift is pretty neat; you can select many polygons and shift them all together. This works well for bevelling. (The "bevel" button really only works for one polygon at a time. "Smooth-shift" is generally used instead.) Use point select to rotate and adjust points that are partway along the branch. Working with (selecting) more than one polygon (four or more) when smooth-shifting ensures adequate detail -- experts like Larry Schultz seem to think so.
Make another ball, and have a look at the north pole of it, where the little triangles all converge. Kind of nasty. One way to make the messy area smaller is to smooth shift all of those polygons smaller gradually, leaving a much tinier area for ripples and such.
"Hide" most of the object, then select the part you want with RMB lasso, then "Unhide" and continue. Practice that a dozen times.
According to Larry Schultz, who teaches modelling -- www.splinegod.com -- the trick with "smooth shifting" is to previsualize the shape of the points that you want to have as a result, and then smooth shift one or more polygons enough times, and along such a path, that that shape emerges. One way to practice this is to put a "cel" on the TV monitor and draw a shape on it, but we can use the "background" function in INSPIRE with somebody else's object, or start practicing with balls. Try smooth-shifting a circle to make a ball by having one layer as a background (pressing the bottom diagonal flag section of another layer that has had an object loaded into it by opening just that layer and loading the object...). Did you notice that if you hit "make" before creating a ball's volume, you get a circle? Ain't life grand? But www.splinegod.com does NOT endorse the tool set found in INSPIRE 3D -- at least, not if one is trying to "point-drag" effects similar to the NURBS-like "spin-quad" "edge loops" being used by many of today's modellers. "Edge loops" is not a button but a philosophy, one that results in more character and better animation. He advocates getting the LIGHTWAVE tool set and using it, and not wasting one's time on primitive tools. So you know.
Smooth-shifting is great, but to get better rounding, first adjust the points of the polygons to be shifted to be as round as possible. Start with a subdivided box instead of one with only six polygon sides. Make a cactus by just smooth shifting two adjacent polygons near the middle of a box (subdivided to be four polygons by four polygons), without point-dragging to make the polygons resemble a pentagon (circle). Icky.
Now, try it after selecting four points (not the four on the opposite side of the cactus; lasso deselect those) and "stretch"ing them or otherwise getting a more circular/octagonal shape -- then smooth shift a couple of times. Press <tab>. Ta-dah! Knowing this makes SUCH a difference. (By the way, does it irk you that the cactus doesn't look more cylindrical? You can gradually "stretch" the remaining polygons in the top view by selecting the bottom half and deselecting polygons after each stretch, though this is a little counter-intuitive at first, OR open up the "Pole 2" tool, stretch the sphere to be the same size as a cylindrical shape over the cactus ball, and position the cursor over the center and enlarge it -- voila!)
One hesitates to bring up everything that can go wrong, but there are a few things that can happen that are completely fixable, and everybody working knows about them and how to fix them. One of these is the creation of extra copies of polygons on top of one another; another is polygons being flipped; another is part of a polygon lying on top of another; another is a twisted polygon; and there is the creation of two-point polygons, making little toothpicks on the surface of your object. Here are some pictures. Most of these are visible in Modeller, where they are fixed. An object with an "unmerged" or "ununified" look will appear to have charcoal streaks across it. "Align, unify, merge" applied to the whole object is a pretty good way to get rid of spare polygons. Sometimes a whole object will be "flipped," such as after a mirror operation, and it will look normal, but will not render when loaded into Layout, except possibly as a silhouette. This is easily checked in Modeller, because the "normals" arrows will all point inward, instead of outward in the usual porcupine look. Fix it by "flipping" it. The partial polygon is a small patch of the charcoal effect, and is usually fixed by deleting the smaller of the two polygons. Twisted polygons should be "tripled," and will generally relax if they are "smoothed." "Smoothing" fixes a lot of problems resulting from facial gestures, morphing, vortex, taper, pole, etc. Finally, little toothpicks are sometimes caused by "welding" or "BGConform," and are fixed by going to "Statistics" in Polygons mode and clicking "+" to select all 2-point (vertex) polygons. "Cut" them, resave, and you're done.
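That last fix -- hunting down the 1- and 2-point "toothpick" polygons -- amounts to a simple filter over the polygon list. Here is a Python sketch of what Statistics "+" plus "Cut" accomplish together (my own illustration, not how Modeller does it internally):

```python
def cull_degenerate(polygons):
    """Split a polygon list into keepers and 'toothpicks': any polygon
    with fewer than three unique vertices can't render as a surface."""
    kept, cut = [], []
    for poly in polygons:
        unique = set(poly)   # a repeated index also degenerates the polygon
        (kept if len(unique) >= 3 else cut).append(poly)
    return kept, cut

# One good quad, a 2-point polygon, a triangle with a welded (repeated)
# point, and a good triangle:
polys = [[0, 1, 2, 3], [4, 5], [1, 2, 2], [6, 7, 8]]
kept, cut = cull_degenerate(polys)
print(len(cut))   # 2 degenerate polygons found
```

Note that a triangle whose points were welded down to two positions is caught the same way, which matches the observation that welding and "BGConform" are the usual culprits.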
So far, my feeling is that www.newtek.com deserves our help. So, if I were to approach anyone to get "free" LIGHTWAVE, it would be Newtek. If you cannot afford INSPIRE or have little CG knowledge, offer them 10% of any CG-based income for five years from the date they give you LIGHTWAVE, in exchange for the LOAN of LIGHTWAVE. I like a loan, because it spares Newtek a taxable event, but I do not yet know their feeling on this. If you make money, Newtek makes money. Let's make Newtek some more money!
Can you make this deal with total strangers? Sometimes, someone makes a goof in calculations, and discovers they want nothing to do with animation. That might be your investor, or someone who just likes your children's animation special using LIGHTWAVE Discover Edition. If you discover that you do not want to do animation after all, your "investor" is not going to be too happy. Just one tip: try to make your dream God's dream too. The Godly life seems to be one of ignoring what the public idea of things is and doing things as if the aliens have been watching us -- buying "green" products, voting issues and not personalities, having only one loyalty, sharing the surplus with the needy, bringing about a better world. I've enjoyed courses better when the instructor expected students to pitch projects to the whole class. If you aren't in the class environment, pitch to friends. Don't make any "claims," but outline the story and effects, and see how they sound.
The way some web publishing companies have begun was by contacting non-profit organizations to see if they wanted assistance, or producing "spec" pages for businesses. If business improves, then the animation should be bought outright, otherwise, the commercial runs once on cable. Or how about some titles for a show that's already on the air? One has to be a stickler for quality, even though there may be no apparent pay. Some dispute this method, in favor of other approaches like creating a short, or a short that is based on a feature project; there doesn't appear to be a set path. Most schools encourage some community effort like a PSA commercial, whether or not an organized charity gets behind the effort. Feel good about it.
The two figure objects above are from "Cyberware" and the INSPIRE CD respectively. The mermaid head was found on www.3dcafe.com and welded to the figure, as described elsewhere on this page. The teeth, crown and clock were made from primitives.
Welding actually requires understanding a bunch of tools at once, so it gets lost in the "no man's land" of modelling instruction. Does it come up? All the time. Make a box; use point-select and select two points, go to Tools, and "Weld." Welding automatically merges the two points. Make two boxes. Put one near the other, and select pairs of adjacent points, welding each pair. Ta-dah! If you haven't done much modelling, you may accidentally weld four or more points at once, destroying a shape. Fortunately, there is the "Undo" button. Actually, this happens to professional modellers all the time. With LIGHTWAVE/INSPIRE, the trick is to get a "clear view" by selecting all areas of the model that will not be welded, and then going to the Display panel and "Hide Selected." Suddenly, one can see to work. When done welding, press "Save." "Save" will save the complete model, whether or not one presses "Unhide." You should notice a weird edge to the new box as you rotate it -- that's the side trapped inside the new box. It should be deleted.
To be completely honest, there are fewer opportunities to "group weld" than one would expect, but here is the technique: make two boxes, one beside the other; using point select, select four vertices from one box where the two boxes will be joined. Press "Copy," then open another layer and "Paste." Press the lower half of the layer box, making this layer a "background" (do you know about shift-LMB to open several background or foreground layers at once?), and re-open the original two-boxes layer as a foreground. They must be foreground/background for this to work. Select the corresponding four points of the other box using "point select." Go to the Tools Custom menu, and click on "BGConform." Click -- they have moved, one to the other. Two things need to be done to finish the job. Go to "Tools" and "merge" the new object; you may need to de-select the points that you just used so that all the points will be merged and not just the four. A shortcut key that is used a lot is "|" (shift-\). It deselects everything, making everything the current object/polygons/points. NOW it is welded. Remove the trapped polygon by selecting and cutting it.
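As far as I can tell, BGConform's behavior is to snap each selected foreground point onto the nearest background point, which is why the four copied vertices make perfect targets. A guess at the logic, in Python (the function and variable names are mine, purely for illustration):

```python
def bg_conform(fg_points, selected, bg_points):
    """Move each selected foreground point onto the nearest background
    point. fg_points/bg_points: lists of (x, y, z); selected: indices."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = list(fg_points)
    for i in selected:
        out[i] = min(bg_points, key=lambda q: dist2(out[i], q))
    return out

# Foreground point 1 sits near a background point and snaps to it;
# unselected points are left alone:
fg = [(0.0, 0, 0), (1.1, 0, 0), (2.0, 0, 0)]
bg = [(1.0, 0, 0), (5.0, 0, 0)]
print(bg_conform(fg, [1], bg))
```

Once the conformed points sit exactly on top of their targets, a plain "merge" fuses them, which is the step the text says finishes the job.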
There is some redundancy in modelling, depending on what you are doing. I had a little project with a chain link that underscored this. You can start with a "toroid" using the "toroid" button on the Objects "custom" menu, for instance. Play with the "radius" amount until you get a shape you like when you press "OK." Cut the ones you don't like. In points mode, grab half of the chain and move it. It looks oblong this way, though. So what do you do? The experienced modeller has lived through this so many times, their fingers may press the buttons without knowing it. Grabbing the larger pairs of rings, one moves them to the center of the link, and we're done. But there are several ways of doing this. You could "cut" the toroid's smaller half and "mirror" the large half, but this can be tricky. In polygon mode, start over with the toroid, but this time (the toroid should have an even number of sections by default) lasso "select" the big half and go to the "Multiply" menu. We're going to "smooth shift" it. Press "Sm Shift," and it will bring up a requester that has a "1m" shift already entered. I wondered why this was, since usually "0" or backspace is used. I guess this is so that one doesn't do it accidentally and wind up with extra hidden polygons? Anyway, pressing smooth shift and entering "0" and then enter is only half the work; now go to the "Modify" menu, and "Move" that half of the link. You will see that extra polygons (probably 16) have been added to the link! Modellers use "smooth shift" a lot.
If you accidentally smooth shift and do not also move the polygons a little, you will need to press "Align" and "Unify" to remove the extras you create. Smooth-shifting creates polygons along the edges where unselected polygons meet selected polygons. Edges between selected polygons don't create polygons, so you can smooth-shift a whole lump of polygons, and should.
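That rule -- side walls only where a selected polygon borders an unselected one -- can be written down. This Python sketch (my own approximation of the idea, not Newtek's code) finds the edges where smooth-shift would build its side polygons, treating an edge with no neighbor at all as a wall too:

```python
from collections import defaultdict

def shift_wall_edges(polygons, selected):
    """Edges of selected polygons that border an unselected polygon, or
    border nothing: where smooth-shift builds side walls. Shared edges
    between two *selected* polygons get none, which is why shifting a
    whole lump of polygons together works."""
    owners = defaultdict(list)
    for pi, poly in enumerate(polygons):
        for k in range(len(poly)):
            edge = frozenset((poly[k], poly[(k + 1) % len(poly)]))
            owners[edge].append(pi)
    walls = []
    for edge, ps in owners.items():
        flags = [p in selected for p in ps]
        if any(flags) and (not all(flags) or len(ps) == 1):
            walls.append(tuple(sorted(edge)))
    return sorted(walls)

# Two quads sharing edge 1-2:
polys = [[0, 1, 2, 3], [1, 4, 5, 2]]
print(len(shift_wall_edges(polys, {0})))      # shift one quad: 4 walls
print(len(shift_wall_edges(polys, {0, 1})))   # shift both: shared edge 1-2
                                              # makes no wall
```

The same bookkeeping explains the "extra polygons (probably 16)" in the chain-link example: one wall per boundary edge of the selected half.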
Smooth shift is not the only tool used by modellers, because certain INSPIRE tools are being partially superseded by new ones like "spin-quads" and "sub-d patches." But it's not bad. Some of these, like "knife," can be partially jerry-rigged using a Boolean operation with a single square in another layer. More about that later. When www.splinegod.com's Larry Schultz spoke to a Lightwave User Group, he broke the stages of modelling proficiency into two levels: form modelling, getting the basic shape by any means necessary, and "flow" modelling, the low-polygon-count modelling of the best games, with a spare few points capturing the essential features. These models tend to deform smoothly and predictably, with curves corresponding to major muscle groups. "Spin quads" is a feature which is difficult to emulate with INSPIRE, because it automatically deletes two adjacent triangles, then redraws them in the same square space, but in the opposite orientation. Using "Hide Unselected" after selecting a triangle pair, then "p" ("Make Polygon") after selecting new triangle points, deleting the originals, then "Unhide" and "merge," would be one INSPIRE method.
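The spin-quad operation itself is just a diagonal flip: the two triangles' shared edge is removed, and a new edge is drawn between their two free corners. A Python sketch of that idea (my own simplification; it ignores winding order, which the real tool preserves):

```python
def spin_quad(tri_a, tri_b):
    """Given two triangles (lists of point indices) sharing one edge,
    return the pair redrawn with the quad's diagonal running the other
    way -- the two former apexes become the new shared edge."""
    shared = set(tri_a) & set(tri_b)
    assert len(shared) == 2, "triangles must share exactly one edge"
    a = next(v for v in tri_a if v not in shared)   # apex of tri_a
    b = next(v for v in tri_b if v not in shared)   # apex of tri_b
    s1, s2 = sorted(shared)
    return [a, b, s1], [a, b, s2]   # new diagonal runs a-b

# Quad 0-1-3-2 split along diagonal 1-2; spinning moves it to 0-3:
print(spin_quad([0, 1, 2], [1, 3, 2]))
```

This is exactly what the manual INSPIRE method above accomplishes with "p" and delete, just done in one step.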
One "weakness" of INSPIRE and the earlier LIGHTWAVE 5.6 is that a majority of the tools will result in a "boxy" look. For instance, if there is a flat part of a forehead, hitting "smooth" may make it even flatter. No one would even be aware of this tendency, since it is based on mathematical averages, except that there are competing toolsets which have a hard time creating a flat look and tend to produce a rounded look, and this lends itself to cartooniness. One should be aware of how to fix flatness: using "magnet" to add roundness, smooth-shifting groups of polygons (rather than single polygons), and altering point positions to give rounder starting-point edges; much roundness can also be added by using tools like "Dragnet," "Taper" and "Pole2," though each of them takes a little practice. "Dragnet," for instance, requires positioning the cursor directly on a point for it to work, and the "hot spot," which roughly previews the resulting rounder shape, only appears briefly when adjusting with the RMB.
There are other extreme measures for adding roundness, but they will probably rarely be used, like: using "morphing" between extremes of roundness to "Save Transform" a desired result (using the counter in a rendered animation), or placing a highly subdivided ball in another layer for "BGConform."
The "tab" key gives a pretty strong smoothing sub-D effect, but a smaller effect can be achieved by going to Multiply: Subdivide, and selecting Metanurb. The default will give a halfway sub-D effect. REAL Subdivision Surfacing tends to be graded in a few steps: full body would be 1, above the waist 2, 3 for a head shot.
Texture the tree you made earlier. You may want to assign different textures to different areas, or "jitter" it a little in Modeller first. You only get rounded edges by making very small smooth-shifts, as at nostril holes or book edges. Try "rendering" it without using "triple" first: save one version untripled, then triple it and save that version under a different name. Load each into Layout, and render them. Did it have any weird parts? Sometimes a mistake will happen when smooth-shifting, and several polygons will be hidden one inside another. Have you tried texturing with images yet?
Do you really need to know modelling to become a successful animator? HA!!! Although INSPIRE does not provide a worthwhile head, they are available from other sources, like www.turbosquid.com or www.3dcafe.com . I have been working on a better head for this web site at this link . What is a worthwhile head? When you count the features of a face -- all the dimples and furrows and corners -- if you allow at least three points for every feature, you soon realize a decent face is going to be at least fifty or sixty polygons wide! Many novices learn this step the hard way and wonder why a nostril wing never quite looks right or some gesture isn't coming out the right way. Allow for enough points to provide the necessary curves for details. Learn it now -- notice the features!
If you do NOT want to learn modelling, the next step to learning animation is probably going to be learning how to deform the model realistically for morphing. This will be the work of Hide Selected/Unselected to carefully select areas like eyelids, and then "Vortex" or Rotate or Magnet to deform them. Often, you may need to switch to point mode to get deformations that favor features like creases, rather than edges. I have found that "Vortex" can be a very rewarding tool, but it is a little less rational than Magnet, since the strength of the rotation is strongest at the center; to open a closed eyelid (which some prefer as the way to model a head, with a partially open mouth), one needs to position the cross-hairs on the lash, but the cursor at the corner of the eyelid.
Another exercise you may want to try early on if your interest is primarily animation: animating a hinged (or bones-rigged) character on a skateboard.
Bones are considered indispensable in animation at this time, but rigging them together is made more complex by additional features, like falloff, rest length, multiply strength and limited region. In LIGHTWAVE, there are even more controls. Frustration with "rigging" runs very high, and this is unfortunate because on a feature film, two people will be responsible for "rigging" for twenty or more animators. It possibly should not be taught at all, as it constitutes a detour few will encounter. At least one professional animation school teaches animation with very little discussion of rigging.
If your area of expertise will be animation, you may want to improve your ability to keyframe using bones, with minimal attention to rigging. There is a section on character rigging below in order to improve one's knowledge, but my understanding of rigging and current animation practices is pretty limited, so I could not say how much emphasis to give keyframing IK over INSPIRE's forward kinematics.
It should probably be mentioned here that although INSPIRE includes a very handy "avi" format, most artists are going to record individual frames as "tga"s or "tif"s. These can later be loaded into either "Bink" from www.smacker.com, or INSPIRE, using the "Image" "Load sequence" button, to be recorded as "avi"s as quickly as one second per frame. This won't mean anything for simple tests, but when render times start becoming minutes, it's comforting to know one can always turn the computer off and resume at the last frame. INSPIRE may also be used for creating JPEGs from TGAs if you do not have a strong paint box program. Its JPEG format includes some loss, so it is not recommended for the initial rendering.
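Resuming at the last frame is just a matter of seeing what is already on disk. A small Python sketch, assuming frames named shot_0001.tga and so on (the naming convention is my example, not something INSPIRE enforces):

```python
import os
import re
import tempfile

def next_frame(render_dir, prefix="shot_"):
    """Scan a render directory for numbered TGA frames and report the
    frame number to resume from (1 if nothing has been rendered yet)."""
    pat = re.compile(re.escape(prefix) + r"(\d+)\.tga$")
    done = [int(m.group(1)) for f in os.listdir(render_dir)
            if (m := pat.match(f))]
    return max(done) + 1 if done else 1

# Fake a render that was interrupted after frame 7:
d = tempfile.mkdtemp()
for n in (1, 2, 3, 7):
    open(os.path.join(d, "shot_%04d.tga" % n), "w").close()
print(next_frame(d))   # 8
```

Using max rather than counting files means a gap in the sequence (frames 4-6 above) does not cause already-rendered frames to be redone, though the gap itself would still need filling.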
But you will probably learn a lot more from the exercise that is taught in every beginning animation class -- animating a bouncing ball. This may be done with three or more bones in a ball object, or by making morphs of ball extremes and animating the morphing ball. Learning bones is useful for two reasons: the bones-morphed ball is not that far from the bones-morphed cartoon cheek, and a button called "Save Transformed" may be used to save the deformation created using bones as a morph target object. The very first thing you learn is that it's much easier to draw. The lesson of the bouncing ball generally teaches two things: many drawings near the top lend a sense of weight, and having successive frames of the ball overlap slightly looks much smoother. If you don't have overlapping image elements, your work may look amateurish.
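The "many drawings near the top" lesson falls straight out of the physics: sample a parabolic arc at even frame intervals, and the per-frame travel shrinks near the apex, so the drawings naturally bunch up there. A quick Python demonstration (the apex height and timing here are arbitrary example values):

```python
def ball_height(t, apex_time=0.5, apex=1.0):
    """Height of a tossed ball at time t in [0, 1]: a simple parabola
    that starts and ends at height 0, peaking at `apex`."""
    g = apex / apex_time ** 2
    return apex - g * (t - apex_time) ** 2

# Sample 11 evenly spaced frames and measure how far the ball moves
# between each pair of frames:
frames = [ball_height(f / 10) for f in range(11)]
steps = [abs(b - a) for a, b in zip(frames, frames[1:])]
# The steps shrink toward the apex and grow again on the way down --
# even time sampling gives "many drawings near the top" for free.
print(["%.2f" % s for s in steps])
```

The complementary trick, successive frames overlapping slightly, is a constraint on these step sizes: each step should be smaller than the ball's diameter.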
Making morph poses for gestures and lip-synching and such can lead to some "illegal" polygons, which may often be fixed by selecting only the polygons that look peculiar in the full shaded view and applying "smooth." This is very common practice. "Smooth" averages the point positions, and as often as not corrects the little discrepancies, like inside-out twisted polygons, that result when contorting faces for morphing. "Smooth" can be applied at 100% for up to 9,999 iterations; smooth can also be applied with a negative value. The only shortcoming of smooth is that it will not tend to "round" inherently jagged shapes, so if one wants smoothness without "subdivision," one will need to "pound down" corners using "magnet," or select the corners and smooth them. Another trick for giving overall roundness is to apply "magnet" or "Pole 2" as a limited-region ball from inside the shape. For greatest uniformity, you may want to use "stretch" to squash an area to a flat plane before doing the "magnet" ball-pull.
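"Smooth" as described -- averaging point positions, with a strength and an iteration count -- can be sketched in a few lines. This is my rough model of the tool, not its actual algorithm:

```python
def smooth(points, neighbors, iterations=1, strength=1.0):
    """Pull each point toward the average of its neighbors. points:
    list of (x, y, z); neighbors: dict of point index -> neighbor
    indices. A negative strength pushes points away from the average,
    exaggerating features instead of relaxing them."""
    pts = [tuple(p) for p in points]
    for _ in range(iterations):
        new = []
        for i, p in enumerate(pts):
            ns = neighbors[i]
            avg = tuple(sum(pts[j][k] for j in ns) / len(ns)
                        for k in range(3))
            new.append(tuple(p[k] + strength * (avg[k] - p[k])
                             for k in range(3)))
        pts = new
    return pts

# A jagged spike between two flat neighbors relaxes toward flat:
pts = [(0, 0, 0), (1, 5, 0), (2, 0, 0)]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
print(smooth(pts, nbrs, iterations=1, strength=0.5)[1])   # (1.0, 2.5, 0.0)
```

Averaging like this also shows why "smooth" flattens flat regions further rather than rounding them: the average of coplanar neighbors stays in the plane.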
Sometimes, part of one face will look great with a part of a different face, and this can get dicey. Sometimes, one will "Multiply/Subdivide" the "Background" object once or twice, and use "Tools/Custom/BGConform" with the desired change "selected." If the background target isn't subdivided, some points may be welded together, causing 2-vertex polygons which render badly. These can be removed by going to "Display/Statistics" and clicking on "+" which highlights them, and then deleting them with "Cut." But the extra step of subdividing should make for fewer 2 point polygons. Sometimes magnetting areas to overlap better before using "BGConform" helps a lot.
A hand using the smooth-shift technique. There sometimes seem to be lots of little illogical tricks when learning modelling, like smooth-shifting one side of a box three times, straight ahead, then making each of the four sides into individual fingers. Then going back to the other side of the box, and smooth-shifting a few times for a palm. Surface and save. Smooth shifting the top half of a ball can give you more subdivisions and save you having to start over with a better ball primitive.
See how the position of the cursor defines the axis of rotation, and the point from which to enlarge (so that a tree grows up from the roots instead of down to them). Try stretching from different points on an object.
Use "Hide" and "Invert" and "]" and shift| for detail work on the head. Save as.
Do a smooth subdivision. Use the "smooth" tool to smooth it further. If you want to have a smooth subdivision next to an unsubdivided section, little jaggies can result. What beginners do, and it's okay, is to subdivide everything at once. Using <tab>, which has to be activated in Polygon mode, and then going to point dragging mode and playing with details that way gives good results.
Do a card Boolean "knife"; MERGE before cutting away the two card polygons. Wanna see what happens when they aren't merged? The purpose of the knife? Usually one can select half of an object and "Smooth-shift" it a short distance, adding extra points which allow smoothness and detail. When one cannot, a "knifing" operation will add those points. "Smooth Subdivide" adds many, many points, but is the next choice.
While we're doing Booleans and texturing, let's make a photo-real eye, shall we? Subtract one ball from another, leaving a concave lens-shaped impression where the pupil would be, and assign this the surface name "pupil" by selecting those polygons (using "Hide" or deselecting mistakes). You should know how to lower-diagonal background-select and shift-select multiple layers by now. Looking good? The pupil is assigned "pupil," and the rest of the eyeball, which can be selected by hitting the top "Invert" in the Display panel, is called "eyeballwhite." Go back to the ball used to cut the Boolean pupil, copy and paste it, and shape it to be the lens. Assign the object the surface name "lensclear" or some such thing; "clear" is a good reminder when you're looking at 20-150 surfaces. IT'S IMPORTANT TO ASSIGN SPECIFIC LONG SURFACE NAMES -- it's a good habit. But we're not done yet; add a black disk or squashed ball to the very back of the pupil (or color that section of the pupil black), assigned the surface name "black." The names can be changed in Layout, but the polygon selection has to occur in Modeller. Have a look at the eye in Layout. Change the attributes if you like, but you will need to press the Objects panel "Save All Objects" button and resave the Scene file in order to keep the changes, and if you want to keep your original settings as well, you will need to rename every surface and save the object with a different name as well as "Save All Objects."
Something else to be aware of: if you want to place a texture and the center matters, as when you have two arms or two eyes floating in space above a character and you want to cylindrically map them, Modeller is a good place to find the centers, though this may also be done with a Null, or the numbers may be taken from a Ref Object. If you have a globe map and a ball, but the ball is not centered at 0,0,0, the map will need to be adjusted.
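Finding that center is just a bounding-box average; here is a small sketch (point data invented for illustration) of the number you would type in as the texture center for an off-origin eye or arm:

```python
# A hedged sketch: computing an object's bounding-box center, the value
# you'd use as the texture center when cylindrically mapping an
# off-origin part. The point data is made up for illustration.

def bbox_center(points):
    lo = [min(p[k] for p in points) for k in range(3)]
    hi = [max(p[k] for p in points) for k in range(3)]
    return [(lo[k] + hi[k]) / 2 for k in range(3)]

# An eyeball floating at roughly (0.5, 1.6, 0.1) in space:
eye = [(0.4, 1.5, 0.0), (0.6, 1.5, 0.0), (0.5, 1.7, 0.2), (0.5, 1.6, 0.2)]
center = bbox_center(eye)
print(center)
```

A globe map on a ball centered anywhere but this point will visibly slide off the poles, which is the symptom the paragraph describes.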
Do a two-ball seamed object as: two objects; a Booleaned object; a BGConformed welded object (where one set of neighboring points is selected, copied, and pasted to a background layer, then used as background for BGConform) with smooth shading. (Try zooming and seeing what the lower-left cursor window reads when the cursor is moved slightly; this is due to grid snapping.) Welding the two sets of points is actually done by copying the newly "conformed" ball to the other ball and pressing "Merge." Voila!
Create half of a heart using primitives. Copy. Do a group weld using "BGConform." Add polygons where necessary, and "merge." Now "smooth" this... "Smoothing" operations are great when one gets "in trouble" with a model. More experienced modellers select trouble areas with creases and "smooth" them without a second thought sometimes.
Make an eye shape and extrude it, select the two end polygons and remove them, Boolean this with a ball, now weld this shape to a ball, remove half of the ball, and mirror it.
Smooth-shift the eye shape in reference to itself to give an eyeball/skin lip surface; do not MERGE it yet. Merge after texturing if you can. As you get into more complicated modelling, a model will sometimes not look right unless one uses "Align" as well; this is especially the case with "borrowed" models that may need extra corrections where welds are missing, etc. I have twice seen teachers share this important lesson: to make an eye, one needs to smooth-shift a circle BACK, then SMALLER and BACK, then SLIGHTLY SMALLER and FORWARD A FEW MORE TIMES. Suddenly a ball shape has seamlessly appeared out of a face.
Was that hard? You should try it again. According to at least one animation modeller, it is a "must-know." Do install the new eye shape, get its details using smooth-shifting and point dragging, and then mirror the half head and merge it. ONE OTHER IMPORTANT THING: according to www.splinegod.com 's Larry Schultz, start with a CLOSED eyelid, and then point-drag it open for morph poses, because this is easier to do. I have also heard it is better to model mouths partially/mostly open.
At some point, you may find yourself with an immense model that starts slowing down the interface; use "Hide" to work only on the part you need to, and to speed up display "refresh" substantially.
Make back-ups often; there is no "history" button, though undo is a true friend. In LIGHTWAVE, many artists set undo to "200," but since we're limited to about 8, we should make backups and expect to make 100 or so versions of an important model.
What "layers" are used for: "Shift Save As." Let's make a mouth for a model that doesn't have one. Here is one approach. Import the turtle head, or any ball-shaped head that has a hinged lower jaw. Copy the head to another layer; select, smooth, and scale the top half of it, and color it pink. Select and omit a lower mouth/chin area of the original layer. Do precision welding as needed. (Like you did to make the eye.) The above turtle did not originally have an openable mouth. I've used and seen used the same technique in a variety of situations, though it is a kind of quick fix. After making a face without a back of head, I panicked and "mirrored" the face to create a "janus," then used "smooth" with many, many iterations and smoothed and magnetted away the face from the rear of the head. It worked! Use it. By the way, the above model is not of my creation. It was found at www.3dcafe.com .
By the way, one can set "smooth" to perform 100 iterations as easily as 1. "Smooth" can be very useful when having problems with illegal polygons. Just select the troubled area and smooth it, perhaps only 50%; it will bring all the coordinates within a numerical average of one another, generally returning one or two weirdoes to better positions.
In the above model, I changed all of the surfaces once the model was loaded into Layout. It is normal to have two copies of Layout running simultaneously in Windows, with one of the copies used exclusively for texturing. The three most important elements of the Surface Requester are the Surface Color/Texture, Specular Level/Glossiness, and the Bump Map texture/values. Practice helps with these, and with the others. INSPIRE only allows three textures to be combined one on the other, but obviously one can composite these using INSPIRE or a paint box. I recently bought a professionally made seamless tiling texture from www.textureworld.com and it was very useful, but a little strong, so I made it 90% opaque and had a surface color underneath it. I also used this texture with only 10% or so opacity as a specularity map. Specularity, per se, is a compromise textural component that doesn't really occur in nature. In nature, everything is reflective, but some surfaces that are waxy have greater scattering than others, resulting in a dull sheen reflection of a light source. Observation reveals, however, that these waxy surfaces are still mirrors. Anyway, the dull sheen would be greater on the hair texture than in its nooks and crannies, so this was used for specularity. I also used the same texture, with some paint-boxing to increase contrast, as a bump map. If the bump map requester is given a setting of 200% and no anti-aliasing, it will create quite a relief according to the lighting. I made some tires this way out of a black/white pattern and was highly impressed.
In the turtle example above, the same texture map of Verdi Pompeii Marble was used as texture and specularity texture, but since the veins were expected to have less shininess than the dark green areas, and since they had a brighter value, the "Negative" box in the texture image requester was checked, making them less specular. A low glossiness setting is the look of Hollywood pancake make-up and comes in handy often. One other thing to say about Bump maps is that it pays to paint-box them from scratch. A black line with some blurring makes an excellent bump map for planks. Reflection requires the Ray Tracing button in the Render panel to be checked to be effective. Reflectivity can be a little startling, but if the shot allows it and rendering times do not double, it may be worth it. Transparent objects are a ton of fun, though interleaving semi-transparent objects like smoke puffs or petticoats tend to artifact at seams. Another use of transparency is as a hole-maker for eyes inside morphing heads, or for special objects like Null objects that can be very useful during trial renders but must be transparent at render time -- remember to use the Advanced Options panel of the Surfaces requester to make the edges completely transparent. "Luminosity" is the surface component of the greatest bravura, since it is so rare in nature. One would "map" luminosity for things like fire or lava. "Diffused" surfaces are an interesting component because this sounds like the subtle difference between a titanium white and a lacquered white, one essentially contourless, the other with heavy contouring, but if diffused drops below 100% the surface color drops dramatically toward black. This is also the button used when a front-projected texture falling on a card for shadow casting produces a brighter or darker effect than the surrounding front-projected image.
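The "black line plus blur" plank recipe can be sketched as pixel math; this pure-Python fragment (sizes and the box blur are assumptions, not a paint-box recreation) shows why blurring turns a hard seam into a ramped relief:

```python
# Sketch of the plank bump map described above: dark seam lines every
# plank_width pixels in an 8-bit grayscale row, softened with a small
# box blur so the relief ramps instead of steps. Values are illustrative.

def plank_row(width=64, plank_width=16, blur_radius=2):
    row = [255] * width
    for x in range(0, width, plank_width):
        row[x] = 0                      # hard black seam line
    blurred = []
    for x in range(width):
        lo, hi = max(0, x - blur_radius), min(width, x + blur_radius + 1)
        blurred.append(sum(row[lo:hi]) // (hi - lo))
    return blurred

row = plank_row()
# seams stay darkest; pixels beside them pick up intermediate grays
print(row[:6])
```

A bump renderer reads those intermediate grays as slopes, which is what makes the blurred line look like a routed groove rather than a paper-thin crack.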
How to make a mouth (the real way): select a row of polygons; smooth-shift them away from the face; then smooth-shift them smaller and very narrow and slightly further from the face; now, straight back, to make a pair of lips (the fun thing is that smooth-shift is not making polygons inside, so this is possible); the professional modeller then makes a mouth sack by making another half-dozen smooth shifts while stretching and moving and rotating the selected polygons. There are two subtle things to mention: for more control, have more smooth shifts in the area of the lips, and for better morphing, model the mouth relatively open. Also, pay careful attention to the corners of the mouth, since these will get the most crowded and be prone to artifacts. The upper lip will tend to angle up, and the lower lip will tend to be double the thickness.
One way to diffuse this density is the "four point triangle," a tutorial at the www.newtek.com website for beginning LIGHTWAVE users. Essentially, two rows of "quads" become a single row by having a triangular polygon with three points on one side. I've seen something similar used by Larry Schultz for creating forehead furrows. You may find another dozen tutorials specifically for INSPIRE 3D at www.epicsoftware.com/3dinteractive/ .
Tape a photo to the bottom of the monitor; you have just doubled the accuracy of your modelling. Most modellers use reference art extensively. A used encyclopedia will pay dividends, or get in the habit of using the picture search engines.
There is another way of making a mouth. You draw the mouth, then break it up into "quad" shape polygons by selecting four points clockwise, and pressing "p" or finding the "make polygon" button. Instead of "tweaking" the lips made by smooth shifting, one can make them this way. One then selects the points of the polygons wanted and "copies" them, and pastes them in another layer. Then one moves them, and/or stretches them, and copies and pastes them back, followed by the "quad" select and make polygon "p" thing all over again. Larry Schultz would say this is INSANE, and having seen him model using "spin quads," I am inclined to agree with him.
At the opposite end, for efficiency, is "subdivision surfacing," also known as "MetaNurbs." Make a cube. Press "tab." (In Polygon mode, though once "tabbed" one may switch back and forth to Point Mode, and should.) Try this again with a heavily subdivided (faceted) cube. Big difference. Sub-D or subsurfacing is much loved right now.
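As a rough intuition for why "tab" rounds a cube: subdivision smoothing repeatedly cuts corners. Here is a 2D stand-in, Chaikin corner-cutting, which is not LightWave's actual surface algorithm but shows the same square-toward-circle behavior:

```python
# Chaikin corner-cutting, a 2D analogue of subdivision surfacing:
# each edge of a closed polygon is replaced by two points at its
# 1/4 and 3/4 marks, so sharp corners round off with every pass.

def chaikin(poly):
    out = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return out

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
rounded = chaikin(chaikin(square))   # two subdivision passes
print(len(rounded))                   # 4 -> 8 -> 16 points
```

This is also why a heavily subdivided cube barely changes under "tab": its corners are already shallow, so each cut removes very little.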
Let's try to make a coffee cup with conventional tools and with MetaNurbs. A ball with a magnet-burrowed interior looks good, but needs a handle. The handle may take some time to do. With welding and scaling near the natural "chamfer" aka "fairing" join where the handle meets the cup, it should look good. Enter "tab" MetaNurbs. Same ball-primitive cup, though fewer subdivision/radial segments will be needed. "Smooth-shift" two polygons of the cup where the handle connects, maneuver them to meet, and weld them together. Then omit the polygon trapped inside the handle, and MERGE, just to be safe. Now "tab" and adjust. This object will not be visible in INSPIRE until it is "frozen," and then it should be "tripled." Freezing is also used with another function; can anybody tell me what that is? (Spline curves)
The only trick with MetaNurbs: if the object requires lots of detail, the detail will require more points, and the MetaNurb version may not look very different. Fruit is going to be easy, but a sculpture, carving or face may not benefit much. MetaNurbs are also called "subdivision surfacing" or "Sub-D."
MetaNurbing is funnest when one does point adjustments with "tab" already pressed. How about making a cactus? Actually, that gets into some areas where a lot of software hits snags, including "Sub-D." "Sub-D" really works best with "quads," and goes crazy with hidden polygons inside an object or polygons that have more than four points. Otherwise, it's a lot of fun. To make a cactus, either do some fancy welding, or resort to another tool made by the MetaNurb people, "MetaBalls." There are two different buttons in Tools: "Add Metaball" gives one a Metaball, which can be copied and pasted many times; then "Metaball" "freezes" the group as one shape. This shape may then be copied and pasted to another layer. The balls themselves are not objects; they are an intermediate shape to help get smooth multi-rounded shapes without resorting to welding. Also, they may be saved in this form, as object galleries of gourds, ears, cacti, etc., for cutting and pasting.
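The blending behavior of metaballs can be sketched as a summed field with a threshold; the falloff formula below is an invented illustration, not Inspire's actual function:

```python
# Sketch of the metaball idea: each ball contributes a field that falls
# off with distance, the fields sum, and the "frozen" surface sits
# wherever the total crosses a threshold. 1/(1 + d^2) is illustrative.

def field(p, balls):
    total = 0.0
    for (bx, by, bz), strength in balls:
        d2 = (p[0]-bx)**2 + (p[1]-by)**2 + (p[2]-bz)**2
        total += strength / (1.0 + d2)
    return total

balls = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0)]
threshold = 0.5
# Midway between two nearby balls, the fields overlap and the point is
# "inside" -- which is why metaballs blend into one smooth shape:
mid = field((0.75, 0.0, 0.0), balls)
far = field((5.0, 0.0, 0.0), balls)
print(mid > threshold, far > threshold)
```

This overlap is the whole appeal: two cactus lobes merge into one smooth skin without any welding.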
Another powerful use of "Sub-D" is making morph heads which can be adjusted with just a few points, BEFORE they are "frozen." Some prefer to work with the frozen finished model and use buttons like "vortex" to make smiles and such (being careful to watch the lower left hand display to see what dimensions are being altered). Remember when we gave a turtle head a mouth? Use either that character or another character with a mouth that we have made, and create some "face shapes." Not "merging" things like hinged jaws is great, because it isn't very hard to select them by selecting one or two polygons and pressing "|" to select the rest of the polygons for that object. Try the "phonemes" first, rotating the jaw from its hinge point, or using vortex to make an "Oh" shape.
We haven't made a nose yet, have we? The "wings" of the nose tend to slow me down. Those take a little time. Okay, I confess, go to another tutorial page to learn noses. Beats me.
Another area of modelling that I have not much explored, though some swear by it -- spline curves. I first heard about "spline cages" from Jon Carroll, who preferred to model with them. According to the 1078-page LIGHTWAVE 7 Manual, esteemed as the best way to learn LIGHTWAVE by many, there are a number of advantages to working with a cylindrical spline cage. One which stands out for me is that the spline cage behaves like a metanurb object when one tugs on a point, but many of the polygons are not four-sided, which is a requirement of MetaNurbs and some other modelling plug-ins. As a matter of fact, the curves seemed slightly more "spliney" than MetaNurbs, bending in before pulling out. The geometry of these spline cages is supposed to be easier to "patch," and otherwise modify, than other methods. I cannot find much documentation for it, though. When I discovered it in the Objects: Custom: Primitives menu, you could have knocked me over with a feather. Nevertheless, I do not know what to do with it for now. For doing smooth modelling, it is good to know that a spline curve's "frozen" points can be extruded cross-sectionally. I should add that INSPIRE may be used to save objects larger than 200 points, made with LIGHTWAVE 7 DEMO, by cutting them into 200-point sections (using cut, exporting in LW5, then pasting or Undo). These can then be pasted together in INSPIRE. Pretty keen, if "dxf"s aren't available.
"Rail clone" is probably the closest thing to an automatic cross-sectional spline modeller in INSPIRE, by loading reference artwork onto cards and animating Nulls across the object surface. This is one of those things to learn by using the LIGHTWAVE Discovery Edition, and not sweat the "workaround."
Metaballs can be used for pretty good ears, but if the ears are going to be welded to a pre-metanurb morphing object, they would probably be done using the "BGConform" group weld or point-by-point. BGConform group-welding after freezing is pretty straightforward, though if the object is going to morph, the same points should be used every time, and should be copied to another layer.
Magnet is very useful for making morph changes. Another similar tool is "Dragnet." I've also heard of "smooth scale" and "vortex" being useful for making morphs. When one makes a phoneme, one is tempted to stop once the lips have hit their shape. Try to make mouth movements from the jaw first, because if the lips are open but the chin does not move, it is not as dramatic. (Unless you want your characters to look this way -- there is a place for everything.) Open your own mouth as wide as it will go. That is a pretty big mouth; many mouths in nature are big. Try moving the jaw sideways for a drawl, add some cheek distortion and squash and stretch to your poses -- they may be more fun to watch this way.
By the way, make a ball. Now, select the top of the ball using lasso, and using stretch with the cursor at one edge of the selected area, flatten the top to a flat side. Now do this with five other sides, so that only one remains round. How would you make one of the sides rounded? Undo? That's cheating. You know the geometry for the ball is right there where you left it. Try magnet. Select one of the sides and create a circle shape using the RMB over the middle of it, and pull this out. It takes some practice, but sometimes, the best way to make a weirdly shaped object look round is exactly this trick. Pole 2 does something very similar. If you flatten a section and then magnet it, you can intuitively and accurately add rounding. The only problem with this method is that the object will often need to be rotated off its original center to place the magnet optimally, and sometimes an object will have 1,001 related object files suddenly out of position. A center-object Null anchor that can be loaded into the object before doing this can be useful.
Let's say one created a great face shape library, and then someone requested that the eyes should be bigger and the forehead a little narrower, then what? Two answers: full employment; and there IS a way to change a group of models using INSPIRE. One loads the first model into Layout, and puts Bones deformers into the Model by "Drawing" them carefully, this is described in detail below. The resulting bones may be used to shrink, stretch, enlarge or sag the model. Best of all, if the deformation is too extreme, it can be "keyframed" to frame sixty, with the "reset" undeformed keyframe at frame 0, and a somewhat deformed pose at a frame in-between can be found. The "Save Transformed" button will save this object as a new object. The "Save Transformed" button may also be used to select between morphs. And in this context, it may be used to deform a library of face shapes in minutes instead of hours. It is also used with "displacement map" deformation to create an object that may be further deformed, for instance.
The "Layout" approach actually duplicates many Modelling functions. "Bones-size" with "Save Transformed" is actually the same as "Pole 2," except that the textures will morph along.
One last item to mention about morphs -- animation creates circumstances where you may want to add the teeth and eyes to the model very carefully in order to make them all morph together. (They can also be separate, but this adds dozens of steps to the "rigging" process in some cases.) Either have the eye as a ball in a sack or behind a clear eye window of the face. INSPIRE users will tend to hold back on the number of morphs they use because of the 40 pose morph limit, whereas LIGHTWAVE users would use as many as the Scene called for. Characters with hinged jaws like snakes or donkeys could have "Bones" used for the lower jaw, while the eyes and facial expressions could morph in INSPIRE. If the teeth are Bonesed, they can be Booleaned or just pasted; if they are morphed, they should be modelled out of the mouth interior surface. This is seen most in shows like "Max Steele."
When modelling morphs, one will sometimes select one polygon of the skin, then "]" to get all the skin, then "Hide" the skin, then select the teeth and eyes, and cut and paste them to another layer. In this way, one can adjust the teeth after one gets the lips right, but sometimes one will accidentally copy and paste the eyes on top of another copy. (One doesn't need to cut-and-paste an object dissected this way. One may "shift-select" two layers at once to combine them as one object when one saves the object under its new name.) If one accidentally copies the eyes on top of themselves, this will alter the point count, the resulting model will not morph, and the result will probably have some sort of artifact. This kind of mistake, if it happens, is usually easily fixed by "merging." Sometimes the object may also need "Align" if it has undergone many alterations. How to tell when "aligning" is needed? Generally if the object renders badly, but the surfaces are all double-sided, and merging has no effect; plus, it helps to see a "before" and "after." In the image below, the door needed to be "aligned," and the alarm clock needed to be "merged."
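The point-count rule behind this failure can be stated as a one-line check (the helper name is made up for illustration): a morph target with a different vertex count than its base simply cannot morph.

```python
# Sketch of the rule the paragraph describes: a morph target must have
# the same point count (and ordering) as the base object, so eyes
# accidentally pasted on top of themselves break the morph.

def morph_compatible(base_points, target_points):
    return len(base_points) == len(target_points)

base = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
good_target = [(0, 0, 0.2), (1, 0.1, 0), (0, 1.1, 0)]
bad_target = good_target + [(0, 0, 0.2)]   # duplicated geometry pasted on top

print(morph_compatible(base, good_target), morph_compatible(base, bad_target))
```

"Merging" fixes the bad case precisely because it removes the duplicated points and restores the original count.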
Blinking can be tricky if eye sacks are not used because the chances of planes overlapping may be heightened. If only one blink is used with the eyes looking straight ahead, this pose can be placed at the end of the morph chain most safely.
By the way: eyes. This brings up an issue related to morphing -- texturing. INSPIRE HAS FIENDISHLY SOPHISTICATED TEXTURING! What I mean by this is that one may create a morphing head, and the texturing of the first head will follow all of the morph targets, so that you do not need to animate the pores moving around, or try to adjust the placement of the lips from frame to frame. The lips will automatically follow the morphing. So make some detailed textures for your wonderful geometry and impress your clients!
MORPH WITH DETAILED TEXTURES AS SOON AS YOU CAN TO MAKE SOME PRETTY INCREDIBLE ANIMATION!!!
This capability of INSPIRE/LIGHTWAVE 5.6 is also used for fairly complicated texturing assignments, like texturing short hair. Using the example of a hanging rope, how to texture it? Let us say you are given this bent rope object. You load it into Modeller, un-bend it (bend, vortex, twist, etc.), and save the unbent version. You load the unbent version as an object, and the original as its morph destination or target. Texture the straight rope using a cylindrical rope texture, and program the "morph" into the Object panel's deformation requester. Ta-dah! As the rope bends, the texture will follow along. Morphed texture for rope. The pros used to do this a lot (before UV mapping). I do not know why it was one of the last things I learned, but it was. Check the morph before using this trick, though, because some morphs, like mirroring or 90-plus-degree rotation, tend to get zany. You may need to include a few "in-between" morph objects. I can imagine it being used to texture the inside of a mouth or the lining of a jacket, among other things.
This method may also be adapted to converting "reference object" interactive placement textures to planar or cylindrical maps that may be loaded without using "Load from Scene" for the object. One may begin with a scalp object, for instance, and create a y-axis flattened version of it. Adjust a reference object to "nail" the texture, give or take a few curls, then morph it to the full-screen flat version for rendering as a map. (Having a full-screen card shape exactly fitting the screen also comes in handy for emulating "Offset" paint-boxing to generate seamless textures.) The other way of converting a "Reference Object" texture to a surface that can be saved with the object is to copy the scale and size numbers to the texture, but rotation may pose a problem.
Another way to reliably flatten textures for the best planar projection, using this approach, would seem to be to back the camera away 1 kilometer or so, and zoom in past 100. The display may artifact a little, but the rendering looks clean. This technique may be used to obtain x-axis and z-axis planar projections without morphing the objects using stretch. Something I think Larry Schultz mentioned was how this may also be useful when creating "alpha" mattes, where a black z-axis alpha-matte textured object could be photographed from a side view to create its x-axis negative match.
"Falloff" is an obscure little button for things like planar-texturing a jeep, so that the label on one side isn't printed on the other side reversed; it is also useful for more realistic texture effects, since dust will tend to collect near walls, etc., but it doesn't have a spherical setting.
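A sketch of what the falloff percentage amounts to, assuming a simple linear drop (the exact curve in the software may differ):

```python
# Sketch of texture "Falloff": opacity drops with distance from the
# texture center along an axis, so a decal prints on one side of the
# jeep and fades out before reaching the other. Numbers are illustrative.

def falloff_opacity(z, center_z, falloff_per_meter):
    """falloff_per_meter: e.g. 1.0 means 100% loss per meter."""
    return max(0.0, 1.0 - abs(z - center_z) * falloff_per_meter)

# Decal centered on the near side (z = -1), 100% falloff per meter:
near_side = falloff_opacity(-1.0, -1.0, 1.0)   # full strength
far_side = falloff_opacity(1.0, -1.0, 1.0)     # faded out completely
print(near_side, far_side)
```

The same ramp, pointed at a wall, is what fakes dust collecting near it.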
Back to the Modeller: Another use of layers should be mentioned here. Although I like to make morphs in separate layers, I generally want to keep the model all together in one file until the last moment. At that time, the eighteen parts of the ant will all be removed from one another, and each will have its own pivot point reset at the center position, and each object will be saved separately. With Bones-rigged models, there may only be a few separate objects for one main reason: bones deformation complicates other forms of animation.
And by the way, Layers get used for: BGConform, Boolean operations including stenciling, morphing eyes, copying a problem group of polygons to another layer for smoothing (which may not work) which are then pasted back and merged, keeping a backup object handy, pre-Layout positioning multiple objects, "smooth scaling" one object to better match another (background layer), and to save a few keystrokes.
How does one have realistically animated eyes in INSPIRE?! Without a lot of character examples in the Manuals or existing tutorials, one might waste quite a few hours on this. The solution is: to model them so that they are outside of the body, then put bones in them (using Limited Range with the same number for Min and Max settings) [described below], and then reposition them. I actually think I may have seen one or two examples of characters that were tree-form with eyes floating in space in a tutorial, but here is another example.
To confirm my suspicion that this was how it was done, I put together a crude boxy experiment, and this is what one has to do sometimes, whether the function is ancient or the newest plug-in, it seems. In the first picture, we see the way the object was modelled, with the eye in the middle of the head. The eyes COULD be in the middle of the body, and SHOULD probably be in front of the head floating in space, it doesn't matter, just so long as there are no other points that will be touched by the "limited region" balloon when it is activated, and the eyes are correctly parented to the head bone. A single small bone is drawn and positioned at the center of the eye, (or possibly two bones, for better rotation) and the "limited region" balloon is sized by trial-and-error in the Bones Panel ("p" or activated through the Objects Skeleton button). The minimum and maximum numbers should be the same! When the bone is "rested" described below, one can then move it to where the face skin is, and since the eye balloon's points do not touch the head skin, the eye may be positioned where it belongs -- see the second picture.
The single eye bone is parented to the head bone, so that when the head turns, the eyes move along. Since the polygons of the eye are controlled by the single small bone inside it, turning the eye bone turns the eye -- see the third image. This is apparently the way to do eyes in Lightwave. In the test above, the object was "flipped" inside-out for bones positioning, then the surface was toggled to "Double-sided" when the bones were finished, but another solution is to make the object a wireframe or partial polygon using the Scene Editor panel.
The other day, I opened up the Scene file for "Characters" in the "Inspire3D" directory, and I was frankly surprised at how bones were being used in three out of four examples. Just two bones: one holding the shape, and another one twisting the top half of the object. The top bone's direction didn't seem to matter; only that its center pivot be accurate. What was most surprising -- they looked GREAT! Try animating with these Scene files, and see if your animation skills with bones don't improve by being happier with fewer bones. Bones can still be used to animate curtains blowing and water rippling and bottoms jiggling, using more of them (described below), but a lot of the popular effects use only two or three. Two bones, even using the strongest "falloff" setting in INSPIRE, will typically completely overlap one another's influence. That's because "falloff" is an infinite property, though it seems to be limited by INSPIRE's calculation algorithm. It's a tricky tool because of that. A bone in a shoulder can distort a hand or chin.
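One way to see why two bones' influence always overlaps: with inverse-power falloff, every bone contributes some weight everywhere. The formula below is an assumed illustration, not the shipping code; the exponent plays the role of the falloff setting.

```python
# Sketch of bone influence with inverse-power falloff: weights never
# reach zero, they are just normalized, which is why a shoulder bone
# can still tug on a hand. The formula and names are illustrative.

def bone_weights(point, bones, power=2):
    raw = []
    for b in bones:
        d2 = sum((point[k] - b[k]) ** 2 for k in range(3))
        raw.append(1.0 / (d2 ** (power / 2) + 1e-9))
    total = sum(raw)
    return [w / total for w in raw]

shoulder, hand = (0.0, 1.5, 0.0), (0.8, 0.6, 0.0)
# A point at the wrist, much nearer the hand bone:
wrist = (0.7, 0.7, 0.0)
w_shoulder, w_hand = bone_weights(wrist, [shoulder, hand], power=2)
print(w_hand)  # dominates, but the shoulder weight never drops to zero
```

Raising the exponent sharpens the dominance but still never zeroes the distant bone, matching the "infinite property" described above.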
Seasoned professionals know a few additional mid-sized bones can do the work of dozens. In any case, one may always use the "Scene Editor" to toggle the color of a bone to make it more prominent or differentiate "breathing" from "spine," and also to make the "hold shape" invisible if it never actually "moves." (The two approaches to bones-rigging or just "rigging" are described further below.) Rather than suffer long waits for the display to refresh, the pro usually works with a low polygon proxy that may have holes cut at knee and elbow joints, etc. for easier bone access. Although the "partial polygon" display option in the Scene Editor is very effective when placing bones, for actual animation, the proxy method can save on accidentally rotating the hand behind the back, etc.
According to one expert, one should make bodies "treeform," with the arms out but with the joints slightly bent, to assume some of the shape they will take at their extreme target. The fingers wind up in one plane and the limbs in another plane, shortening workflow, according to Schultz.
Although the toolset of INSPIRE is complete using the buttons we've just gone over, there would be a lot of work doing things like making a tapering tower unless one had buttons like "array" for repeating a shape many times, or "taper" for decreasing size along one or two axes. "Path Clone" combined with reference art loaded onto a card object can be used to make templates that can be loaded in INSPIRE. "Save Transformed" may also be used to make negative-scale mirror objects. Like "Lazy Points" in Layout, they may not come up often, but it is a comfort that they are there. Array doesn't get used all that much, but it saves so much time that you remember it -- ropes, chain links, naval mines, fences, branches, etc. would take a mite longer without array. "Taper" I keep finding uses for, and it is one of the tools where one needs to look down at the lower-left corner of the view screen and read the numerical data. This data is also very important to look at when doing a "Stretch" that you want to have undistorted, making sure the x and z numbers are roughly the same. The numerical tool can also help you when modelling the odd object -- a screw or CAD-type object. When I needed to make a flat hypnotic spiral recently, I started with a circle, selected all of the points, and then reduce-stretched them 3% in x and z. (Remember that you have to place the mouse cursor exactly at the center for the stretch to be centered.) Then I deselected another point and did this again, reduce-stretching the remaining points, repeating this for the rest of the points until I had a weird-looking circle. I then copied the points to another layer and reduce-stretched this copy. (The first time, I bungled it a half dozen ways -- ah, experience.) In Points mode, one can copy only points from one layer to another, which is not much used, except for work like this. I then selected the points of the two spiral curves to make a spiral section as a single polygon.
This was then copied, reduce-stretched by eye, and pasted/welded a dozen times for a rotating spiral background.
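The shrink-and-deselect trick above can be sketched in a few lines of Python -- a toy stand-in for Modeler's reduce-stretch, using the 3% factor from the text; the function name and point layout are my own invention:

```python
import math

def spiral_from_circle(sides=32, shrink=0.97):
    """Turn a circle of points into a flat spiral by repeatedly
    shrinking the still-selected points 3% toward the center,
    then 'deselecting' one more point each pass."""
    pts = [(math.cos(2 * math.pi * i / sides),
            math.sin(2 * math.pi * i / sides)) for i in range(sides)]
    for deselected in range(sides):
        # reduce-stretch everything still selected
        for i in range(deselected, sides):
            x, z = pts[i]
            pts[i] = (x * shrink, z * shrink)
    return pts

spiral = spiral_from_circle()
radii = [math.hypot(x, z) for x, z in spiral]
```

Point i ends up shrunk i+1 times, so the radius decreases steadily around the circle -- which is exactly what makes it a spiral instead of a smaller circle.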
A tool like "Pole 2" can come in handy adjusting the breasts of a character to fit a swimsuit, with greater uniformity than "Magnet" alone, but the tools are somewhat redundant. One can pull in the temples slightly by having a long cigar-shaped Pole 2, or by pushing with Magnet twice. Another tool that can be used for fine-tuning is "Smooth Scale," though it is less interactive in INSPIRE than in the LIGHTWAVE version. "Pole 1" and "Pole 2" are popular with modellers who model both sides of a character at once. If you make a cigar-shaped right-mouse-button sphere covering both nostrils or both temples of a character, and centered, you can adjust those areas with a rounded effect, instead of the dramatic effect of "Stretch" used with highlighted polygons. This is the "symmetrical" approach.
Incidentally, "smooth-shift" can become pretty interesting when one starts using Pole2 or Magnet, instead of just Move or Rotate.
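For readers who like to see the arithmetic, here is a toy version of what a Magnet/Pole-style soft edit does: points inside a falloff ball move with the tool, scaled by a weight that fades smoothly to zero at the edge. The cosine falloff curve is an assumption for illustration, not INSPIRE's documented behavior:

```python
import math

def magnet_move(points, center, radius, offset):
    """Soft-selection move: each point shifts by `offset`, scaled by a
    weight of 1 at the center of the ball, fading to 0 at its edge."""
    ox, oy, oz = offset
    out = []
    for (x, y, z) in points:
        d = math.dist((x, y, z), center)
        if d >= radius:
            out.append((x, y, z))  # outside the ball: untouched
            continue
        w = 0.5 * (1 + math.cos(math.pi * d / radius))  # smooth falloff
        out.append((x + w * ox, y + w * oy, z + w * oz))
    return out

pts = [(0, 0, 0), (0.5, 0, 0), (2, 0, 0)]
moved = magnet_move(pts, center=(0, 0, 0), radius=1.0, offset=(0, 1, 0))
```

The point at the center moves the full distance, the point halfway out moves halfway, and the point outside the ball stays put -- the "rounded effect" instead of the hard edge a straight Stretch on selected polygons produces.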
Big, momentous observation/lesson: one reason a program can be popular is the philosophy taught with it. Do LIGHTWAVE objects look boxier? Possibly. I was being sloppy and not using reference art, and I was allowing detailed models to get "lumpy" as I worked on them. The experts, I learned, tend to keep their projects low-polygon for a reason, just as they tended to go to NURBS-like tools -- both stay rounder more easily. One way to keep roundness is to extrude instead of smooth-shift, deliberately littering the inside of your model with cross-section polygons; you can go back to these later and "smooth" them. "Smoothing" a circle can be done by eye, or using "Smooth," though Smooth uses a mathematical averaging method that will "flatten" contours as well. One nice thing about "Magnet" is that one can flatten an area using "Pole" or "Stretch" (with lasso), then create a perfectly spherical area by putting a Magnet RMB ball over the flat area and pulling or pushing it. LIGHTWAVE 7 has some special tools that are supposed to simplify keeping roundness, like "spin quads." The rest of us just have to be careful.
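The "mathematical averaging" behind tools like Smooth can be sketched as plain Laplacian smoothing: each pass moves every interior point toward the average of its neighbors. This is a generic sketch, not INSPIRE's actual code, but it shows why a sharp bump both rounds off and flattens under repeated smoothing:

```python
def smooth_pass(heights):
    """One averaging pass over a 1-D contour: every interior point
    becomes the average of its two neighbors."""
    out = heights[:]
    for i in range(1, len(heights) - 1):
        out[i] = (heights[i - 1] + heights[i + 1]) / 2.0
    return out

profile = [0, 0, 1, 0, 0]   # a sharp bump on an otherwise flat contour
for _ in range(3):
    profile = smooth_pass(profile)
```

After a few passes the bump is much shorter than it started -- the averaging that rounds a jagged edge is the same averaging that erodes a contour you wanted to keep.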
Can you make a model a day, every day, for a year? How about three? If you can make three models a day for a year, that is over 1,000 models. Would you be somebody worth hiring? Many models have not been done much, and some have been: flashlights, lanterns, dogs, microphones, stereoscopes, sextants, electric fans, hobby horses, sewing machines, keys, coins, etc. The models WORTH making? How do you model compassion, or a sturdy awareness of the power of joy? What is an equivalent for the unexpected power of steadfastly noticing good? One may solid-model great inventions and other metaphors, and try to model the flow of economic quantities, but the human potential is not quite finite.
One can experiment with different kinds of texture combinations, allusion in modelling, implications of repetition and contrast, and trying to effect "Old Master" touches in one's Scenes. A little color in a bright spot, brass on velvet, the repetition of a shape in an elbow, chin and brow...
One little warning about modelling: face it, we're playing with dolls here.
In many cases, one may get closer to the "look" one is after by loading an object image into a paint box, according to Kursad Karatis. Play with it there and save a lot of time. If the current model has no hope of being a final model, know why continuing with it is still a good idea. This is even more important when working with "royalty free" geometry, because the geometry cannot be sold or shared, so any pleasant surprises will have to be kept to oneself.
Before discussing the complicated art of correctly "rigging" a character using INSPIRE, it should be pointed out that the hours spent on learning rigging ARE COMPLETELY UNRELATED TO ANIMATING. Animation is posing, with deft timing, to convey emotion and physical dimensions. Melodramatic exaggeration is the art of wringing hands with pleading eyes on one's knees, and most of us probably know how to do it unconsciously, possibly as a result of watching thousands of hours of cartoons. One can probably lay out most of an animated sequence's "keyframes" with a digital camera and/or pencil, and then "brute" the CG skeleton to match. YES, YOU DO NOT NEED TO KNOW HOW TO ACTUALLY OPERATE THE COMPUTER TO ANIMATE. I would recommend cultivating the POSITIVE kinds of expressions over the negative, and reserving poses dealing with sadness or worry for OTHER characters, though the "easier" character is probably the villain, because villains only care about themselves. The hero worries only for a moment, before thinking about how to end a worrisome situation. A little brow-furrowing later, an idea is tried. But is it willpower, or do they relax in order to think? Here, one begins to include one's philosophy of life.
Hands and faces are where the eyes tend to go first, so a digital camera can convey a world of emotion, but there are plenty of animations of fruit, pencils, peppers and other assorted items that prove that body motion can be laden with language. Fortunately, most of this knowledge is not arcane, it can be found in animation books going back 100 years. A moment of rest punctuated by a pinky wiggle, that may take a little looking to discover, but that is common enough too. The same principles of contrast that combine elephants and mice and shivering characters and fireplaces, apply to animation.
As many books point out, SILHOUETTE is the art of staging pantomime for greatest effect. It isn't even taught by drawing, some students learn it with paper cut-outs. Drawings are based on silhouette, if one looks at the edges of a given object. Careful silhouetting will cultivate repetition, curves, angles, efficiency of pose and acting. The eye will be kept in the image, and the key ideas will be conveyed.
Animating a bouncing ball is probably a good exercise to try with bones, both because one learns the "parent" needs to be at the center, with children above and below, and because one learns to have fewer keyframes for acceleration than deceleration, etc.
In terms of manipulating cycles and masses of flesh, one probably wants to stay away from cycles. The main thing that goes wrong with cycles is that one cycle will be 45 frames long, but one or two bones will have a 60th frame or neglect to have "Repeat" activated in the Graph Editor. On the plus side, if the learning curve is looking steep, it sure makes the animator look good for a (short) while. The above paragraphs are repeated below, for emphasis.
The use of collision, fluid dynamics and particles is more prevalent than ever, and many of the complicated tasks in INSPIRE, like rigging eyes to move in a bones-rigged head, can now be handled better and faster. Plus, some LIGHTWAVE riggers are experimenting with one-size-fits-all skeletons. In "Shrek," I do not think the donkey walks the same way twice in the whole picture, so take "systemization" with a grain of salt, but it's nice to be able to build on where one starts. Still, if your demo or commercial needs an industry-standard polish, INSPIRE is an under-$450 place to get it.
Use the Data Overlays often when recording tests. There are two powerful reasons: making a hundred corrections to a Scene can be common if you like backups in principle and habit, so having the Scene number in the shot can be a real help; and the frame counter to the right can point out points of collision in objects, or a morph pose that may need to be thrown out or "conformed." In retrospect, I should probably rename sequences more often, especially when a camera angle is changed.
Forward-kinematic animation means that if one moves the hand, it will hang in space, because it is ordinarily parented to the wrist, not the other way around. To move the hand, one moves the wrist, and this is actually how one is expected to animate ordinarily. Having the hand make the rest of the arm follow is called "inverse kinematics," and it is not in INSPIRE, but experienced animators might not miss it much. It is probably very VERY useful for rotoscoped animation/motion-control work. According to Larry Schultz, IK is used primarily below the waist, and where needed.
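The forward-kinematics idea is easy to see in miniature: each joint angle is relative to its parent, and the hand's position simply falls out of the chain -- you never place the hand directly. A two-link planar arm in Python, with example lengths and angles of my own choosing:

```python
import math

def fk_chain(lengths, angles_deg):
    """Forward kinematics in 2-D: walk down the chain, accumulating
    each child's rotation on top of its parent's, and return the
    position of the chain's tip (the 'hand')."""
    x = y = 0.0
    total = 0.0
    for length, angle in zip(lengths, angles_deg):
        total += math.radians(angle)   # child rotations accumulate
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# Shoulder rotated 90 degrees up, elbow bent a further 90 degrees:
# the upper arm points straight up, the forearm folds back along -x.
hand = fk_chain([1.0, 1.0], [90, 90])
```

Inverse kinematics is this computation run backwards -- given where the hand should be, solve for the joint angles -- which is why it is the harder feature to provide.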
If you are just getting started in animation, you may want to have a puppet-like hand object parented to a forearm object and so on, instead of having a "treeform" figure model that is "rigged" with bones. When one discusses "parenting" or "IK" or other animation terms, though, usually one is referring to a skeleton of bones with a full figure. Using hand objects with ball-joint rigging like robots use looks best with "shadow mapping" set in the Lights panel for all lights. This approach is also popular among LIGHTWAVE 7 animators, believe it or not, though they use a method in which polygonal sections of one seamless model are saved in different "layers" and then are parented to their respective bones. The end-result is an intuitive animation method, just the same.
Turn "Ray Tracing" off and switch to "Shadow mapping." Ray tracing will tend to render ball joint planes with some form of artifact, and shadow mapping will prevent these and keep the animator happy with shorter render times until they move up to bones animation and modelling face-shapes. Shadow mapping is used more often than I can believe; it has the telltale effect that the interiors of mouths will be fully lit as if no head were blocking the light. It also tends to lend a "computery" Phong look that may be desirable for dramatic effect, such as calling attention to scene elements.
The hinged character is first made as a single model in one object file, also called a "workspace." Then each object component -- lower leg minus ankle, forearm, etc. -- is copied to an empty layer and repositioned so that its likely pivot point is at the center of the workspace. It is also handy to have a Null Object, like a tiny box, that can be loaded to this separate position, in case it gets lost. Save the file to a new object name using "Save As." The fun part is that when one REASSEMBLES the different objects as one file in Layout, one can load the complete object in first, probably with a different color or as a wireframe, and then position the separate objects directly over it. The wireframe will seem to be penetrated by the model. I did this when assembling an ant. It is so much faster than trying to fake it. If cloth animation becomes more widely used and de-bugged, this may be what we use for all kinds of projects. For now, mostly robots and ants and such.
The great thing about hinged animation is how much you can learn. You can pose to your heart's content, experiment with timing and expression, and condition yourself not to move a hand before moving an elbow. You can experiment with deliberate jerkiness or very loose splines. You can make a "cycle" in the Graph Editor using "Repeat" -- and, more importantly, by making sure the slope of the curve at the end of the cycle lines up with the slope at the beginning.
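That slope-matching rule can even be checked numerically. A hedged sketch, assuming we can sample a channel as a function of frame number: a cycle seam is clean when both the value and the one-frame slope at the last frame match those at the first frame.

```python
import math

def cycle_seam_ok(channel, length, tol=1e-6):
    """True if a cycle of `length` frames loops cleanly: the value AND
    the slope at the end must match those at the beginning."""
    v0, v1 = channel(0), channel(length)
    # one-frame finite differences approximate the in/out slopes
    slope_in = channel(1) - channel(0)
    slope_out = channel(length) - channel(length - 1)
    return abs(v1 - v0) < tol and abs(slope_out - slope_in) < tol

good = lambda f: math.sin(2 * math.pi * f / 45)  # a clean 45-frame cycle
bad = lambda f: f * 0.1                          # drifts, never loops
```

The "good" channel closes its 45-frame loop with matching value and slope; the "bad" one is the classic broken cycle -- a stray 60th-frame key, in effect -- that pops every time it repeats.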
A good thing to remember when you switch over to bones deformers, which will not have any "seams," is that if the hands, forearms, and such have bones positions/settings that isolate each element, a very similar result should be produced.
One of the coolest things I discovered, all of three years into using the program (how did I miss THAT?), was that one may use the RMB in the Graph Editor to slide keyframes around!
For one interesting class I wanted to take, the teacher had the class prove its mettle by animating a chicken laying an egg, which then rolled down a ramp -- modelling a crude chicken, animating it, and the egg. I think one learns more with a bouncing ball, personally, if one is trying to nail a "squash and stretch" style. There are actually two "right" ways of doing a squash-and-stretch bouncing ball. First, there is the REAL lesson of the ball: if the different successive ball positions overlap, even slightly, the animation appears much smoother. The comet-tail distortions are fun too. One can get these by using "bones," or by using "morph targets." Personally, I find assignments like these deliberately manipulative: the teacher has a good idea how hard they are. Morphing requires knowing the MTSE method, which takes a little while, but looks great. Use more morph targets for more subtlety. The Bones method is a lot faster, once you've done it a dozen times, and uses only one model, but the controls can be excruciatingly frustrating at first, compared to sketching. It is so easy to draw -- why should it take forever to animate!!
Another great thing to learn from the lesson above is how to have more keyframes where the ball slows, near its path's apex, with as few as one where the ball squashes.
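Plain projectile math backs this up: under gravity, the frame-to-frame displacement is smallest near the apex and largest just before impact, which is exactly where the overlap trick earns its keep. A sketch at an assumed 30 fps, with launch speed chosen so the arc lasts one second:

```python
def height(t, v0=4.9, g=9.8):
    """Height of a ball tossed straight up at v0 m/s: apex at
    t = v0/g = 0.5 s, back at the ground at t = 1.0 s."""
    return v0 * t - 0.5 * g * t * t

fps = 30
frames = [height(i / fps) for i in range(31)]  # one second of motion
# how far the ball travels between consecutive frames
steps = [abs(frames[i + 1] - frames[i]) for i in range(30)]
```

The smallest steps cluster around frame 15 (the apex), where the ball practically hangs -- so the animator can afford dense keys there -- while the last step before impact is many times larger, which is why successive ball positions need to overlap or stretch to read smoothly.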
There are seven things to know about INSPIRE Bones:
1) Planning for bones. This tends to fall into SIX areas: first, parenting, and whether all bones should be parented to a single bone, and whether or not this should be located at the center of gravity; second, redundant child bones, which may happen accidentally, or which may serve a purpose like adding breathing to a chest; third, the actual animation of the object -- location of joints, whether they will turn uniformly or no two turn the same way (a workaround for gimbal-lock correction), etc.; fourth, "anchor" child bones, which help an object "hold its shape," since a chest bone's area of influence may otherwise cause the chin to move more slowly than the nose when turning a neck bone. This fourth use of bones is almost obsolete with the inclusion of "weight maps" in LIGHTWAVE, but it is part of the price we pay for working with under-$500 commercial software. Fifth: eyeball bones, and distorting geometry to work better. Sixth: falloff bones.

All bones usually should be parented to a single bone; this allows all of these bones to be parented to another bone, which may do things like counter-rotate around another for a galloping motion. After the first drawn bone, every other bone will be a "Draw Child Bone." Fortunately, one may draw many child bones for a single bone, as with finger bones. Having the upper part of the torso bounce in the opposite direction of a pair of marching legs is one of the dozens of things one can try this way. (One or two tutorial scene files in INSPIRE do not use parented bones, but I do not know why.) By the way, finger-bones rigging may be used to create "Save Transformed" morph objects, which may animate faster than individual finger-bone animation.

Forward-kinematic animation does not require very much modelling expertise to be impressive. One needs to pick the best place for putting the main parent bone, using "Draw Bone."
Then, one needs to use "Draw Child bone" to draw the remaining bones, using the up and down arrow buttons to cycle through the bones. The only "trick" to setting bones seems to be to do all the necessary adjustments to the positions of the bones by drawing them in one plane, but adjusting them in all three, as peculiar as that may seem, and TRY TO NOT REST THE BONES UNTIL ALL ARE IN PLACE (possibly not ever).
One may also go back and forth over the same bone area with multiple children. This is like doing a finger child of a palm bone, except that one of the fingers is turned back toward the palm, followed by still another across the palm, like the original. This kind of redundant bone is seen for things like having a spare bone in the chest area for breathing. If these extra back-and-forth bones were not used, every time the chest bone moved or expanded, the shoulder bone would need to shrink or move in the opposite direction -- quelle drag! If breathing is not needed, the bones may be made inactive and invisible. Are there any other uses for these "spare children?" Are you kidding? Having extra joint bones may be used for adjusting the same skeleton to a variety of different character shapes, and thus re-using the same animation. And what else? ONLY THE MOST IMPORTANT USES OF BONES!!! Spare child bones make excellent squash-and-stretch, jiggle-and-sway, follow-through and over-shoot bones!
Positioning joints seems to take practice, and I cannot speak from much experience, though I have noticed that a little difference in positioning can make a pretty big difference. A related problem is how to animate bending correctly: the wrist only moves in two axes, karate-chop and slapping; the elbow spins and bends forward; and the shoulder lifts and waves the forearm. The elbow spinning may require two added measures -- extra polygons, and possibly an extra parented bone between the elbow and wrist. A cartoon arm can animate like a garden hose -- don't pay too much attention to nature if you don't have to. The main problems with "photo-real" joints seem to come when a joint is unexpectedly pinched. Pinching may occur because the area behind a bone's pivot point turns in the opposite direction, and the area being pinched is not adequately controlled by an "anchor" bone. The easiest way to counteract this would probably be to have a bone with 0% strength (or inactive), with a length the same as the falloff/limited-range area behind the pivot point of the pinching bone, be the real pivot bone, with a special name and color. This is widely known as a "falloff" bone. When I bones-rigged the skeleton object, I noticed the knee area deformed badly. You might want to try to learn bones using a skeleton object. And what about those bones that seem to completely lose rotational sense after they are "drawn"? They are important, because rotation is the main way of animating bones. One of the red-highlighted sections below is a tutorial on drawing extra bones at every joint in order to preserve rotational orientation, so that moving the mouse to the right makes the bone turn to the right. Worth knowing and DOING! It is probably worthwhile to draw, rotate and re-rest a few tiny bones at every joint; these can also come in handy later on for "mousetion capture" or generic rigging.
A tutorial below is about "Limited Region" bone-rigging, which is sometimes better around joints. Since parent bones do not have to be "active" to have an effect, one may have two bones activated but invisible, with key joints only visible/selectable -- which keeps the learning curve for bones animation pretty curvy.
Anchor bones. One main problem newbies encounter, besides rushing into bones and getting practically no results, is that after they draw a fair skeleton and correctly activate the bones (described below), one little area, or several, will animate very strangely. When a leg is moved, the heel of the foot will stretch like a giant claw, or when the head turns, the chin will not rotate along. This is because the bone that SHOULD be controlling that section of points is only partially affecting it, while some other bone, like a body bone or even a hand bone, is affecting that section of points as well. The pro's tend to: first, go to the Bones control panel and make sure "1/Distance^16" is the entered falloff setting, because "falloff" is the culprit; second, adjust the "Rest Length" of the affecting bone (Rest Length appears where "Zoom" is when the Camera is active); and third, and possibly more common, draw some "anchor" child bones to increase the area of the bone's influence. All of the bones need to be set to Rotate, and they need to be reset, and ideally, they should all be inactive (ctrl-r) before drawing extra child bones. (More about this below.) Extra bones also keep sections of points from being affected by correctly rigged limbs; for instance, to keep an arm from pushing in a chest area, add some bones along the outer chest, parented to the spine. Another "anchor"-like use of bones would be a pair of upper-leg child bones in the buttocks, so that moving the leg rotates the buttocks slightly.
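The falloff setting is doing real arithmetic behind the scenes: every bone pulls on every point, weighted by 1/distance^power and then normalized. This toy calculation (the distances and bone names are invented for illustration) shows why a steep power like 16 lets the nearest bone win almost completely, while a gentle power lets a distant chest or hand bone keep tugging on a chin -- the stray-claw problem described above:

```python
def bone_weights(point_to_bone_dists, power):
    """Normalized influence of each bone on one point, using an
    inverse-distance-to-a-power falloff."""
    raw = [1.0 / (d ** power) for d in point_to_bone_dists]
    total = sum(raw)
    return [w / total for w in raw]

# A chin point: 0.2 units from the head bone, 0.6 from the chest bone.
soft = bone_weights([0.2, 0.6], power=2)   # head gets only ~90%
hard = bone_weights([0.2, 0.6], power=16)  # head gets essentially all
```

With power 2 the chest bone still owns about a tenth of the chin, so turning the neck leaves the chin lagging; with power 16 the head bone's share is effectively 100%, which is why "1/Distance^16" is the recommended setting.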
There is another "workaround" used for animating eyeballs in the head, that might be used for the head, hands, arms, legs or chest. The model for the body has the eyes and chest areas as separated parts of the body, so that the arms and legs are located in different areas from the trunk and head. Using "limited region" with the same value for minimum and maximum falloff, a balloon is created around the different elements. The bones are still correctly parented, but the model's original configuration of eyes hanging in space, etc. is lost. This is because the bone in each eye is set with a very high falloff and has no effect on the head at all, though it is still a child of it. I have only used this technique for eyes, it is described in some detail below. It requires drawing child bones from the parent bone to the distant child, and then inactivating those bones (and making them invisible in the Scene Editor) and moving the limited region bones. Applying this method to arms and legs can reduce chest collapsing or leg shrinking; applying this method to extra heads and hands for morphing purposes is quite a bit trickier. In the Graph Editor, one animates the additional hands to shrink to 0% scale in one frame, using "linear" as the curve type.
When I first heard about drawing an elbow bone for a forearm outside of the arm, it went right over my head. Try to put bones into a sheet of paper to animate turning the corner of a page, and you will see the problem. The area of influence for the bone behind its start/pivot point rotates in the opposite direction of the bone, causing crumpling. The solution for animating arms is to have small inactive "falloff" bones as parents to the bones, and to animate through those falloff parents, so that the area that would rotate in the opposite direction now rotates correctly. A pretty infuriating thing to leave out of a Manual. It may also be worth building an object with a five-inch wrist, or with the hand removed from the body, so that a limited-region child bone may be used with very little falloff, moved to the correct position. This results in an effect pretty similar to cutting a robot object apart and using the pivot points to animate it.
2) Laying down bones in three planes. The INSPIRE "Draw Bone" and "Draw Child Bone" buttons are unintentionally deceiving. They should be labelled "Draw Bone to Zero-plane." The red, green and blue cards below are where INSPIRE wants to draw its bones, one for each viewport -- FRONT, TOP and SIDE.
To make laying bones quite a bit easier, reposition the object in Modeller to put the zero planes where you expect the bones to be drawn, having the end of a transition bone fall on the intersection with another plane, where you want to draw in a different Viewport direction. This allows drawing the spine in one plane (SIDE view), all the shoulder joints in another plane (TOP view), and the front pair of legs in the third plane (FRONT view). The bones won't keep getting magnetically distorted/tugged to a zero plane you can't properly see in an orthogonal view this way. The rear legs of the stegosaur will still be awkward, but we'll take care of them in the next section. When you make these switches from one plane to another, check your work by visiting all three viewports, and don't try to get fancy yet, please.
Yes, this zero-plane method is a crutch, and not a very good crutch, when one considers that the resulting bones will probably need to be adjusted AND will have distorted rotational sense. More about this in #3 below.
(Three "solutions" have been to either use "un-parented" bones as with the monster scene in INSPIRE, or to have all of the bones oriented in one plane, pointed down z. In this second method, although the skeleton resembles a porcupine, the H/P/B rotational sense is not lost to parenting! Nearly all of the bones using the z-axis method have to be moved and re-rested, which is described in #3 below. This can be adapted to the third method, where drawn bones are combined with zig-zagged same-axis parent-bones, creating two (re-rested) bones on one common axis like the z-axis, for every major bone. In this method, all but the porcupine bones are made invisible in the Scene editor, though many invisible bones will be activated. A variation on this is to have some "indicator" bones visible. This may seem to be a lot of trouble, just to obtain a skeleton with intuitive rotation, but it's not.)
Laying bones goes much faster if one "toggles" the "T" button in the "Scene Editor" (opened by pressing the "Scene Editor" button) to change the model to a "partial polygon" view. Also, remember how "," and "." can adjust the size of the view screen. And remember to press "F9" if the refresh gets slowed down. Another thing to make "F9" even faster: use the "Unseen by Rays" and "Unseen by Camera" boxes in the Objects Appearance Options panel to leave slow-rendering objects out of test renders. They will still "refresh" unless removed at the Scene Editor panel, but watch your renders become 90% faster.
One other thing: DON'T ANIMATE YET!!! Try to keep your Rig_scene002.lws separate from your Run_scene032.lws's. This is because adding and adjusting bones after the animation has begun can be tricky, and that is why there is a #5 below about doing exactly that...
And what about the bones that do NOT lie in the three planes? If you try to draw them by switching from one view to another, the bones will automatically place the end point on one of the zero planes. But there is a way to work with this. Don't worry, be happy.
3) Laying and adjusting bones that do not fit the zero planes. Remember, the pro's seem to mostly ignore that there are zero-planes, probably because, Leonardo da Vinci sketches aside, there are no zero-planes going through us. Instead, they place the bones for the optimal fit and distortion. It is important to make bones fit closely, because without LIGHTWAVE's length/128^ falloff or weight mapping, a hand put on a hip may make the hip suddenly become narrower. The professional approach is to lay down the bones exactly where the model would have them, adding control bones to hold shape; this is done by adjusting the bones after they're drawn, using "Move," "Rotate" and "Rest Length." The fewer bones we draw, the faster the display refresh and rendering.
Where you draw a bone is where its area of influence is, except that the area may become longer by increasing the "Rest Length" (clicking on "Bones" makes the "Rest Length" button appear where "Pivot Point" was). Pressing "p" opens the Bones panel when a Bone is selected; otherwise, it's accessed through the Object panel's Object Skeleton button. If you "rotate" the bone, that will actually DISTORT the object shape UNLESS you tell the bone to take up its rest position in the new spot by pressing "r." To be thorough, you should press "r," then "shift-r," and then "ctrl-r." This almost accomplishes the same thing as redrawing the bone in the new position.
Another thing worth remembering: THE CAMERA MAY BE TARGETED. Targeting the camera, and moving it back and forth while adjusting bones, makes for faster, more precise positioning. Although bones must be drawn in front, side or top views, they can and probably should be repositioned for re-resting in the Camera View. This especially comes in handy when placing small bones in the eyeballs -- remember to position the eyeballs outside the body or inside the head, for later positioning inside the eye sockets.
Because of the way "Draw Child Bone" works, one may want to draw all of the bones at once. This actually sort of works with things like horse legs, which may be moved from the zero plane to the shoulder socket, by moving the parent leg bone, and then pressing "r,""shift-r" and "ctrl-r."
If the animal does not have a majority of its bones in one or two planes, then every bone is going to have to be adjusted by hand-tailoring, and as long as you're going to all this trouble, you may as well correct the rotation. Three "solutions" to the rotation problem have been to use "un-parented" bones, as with the monster scene in INSPIRE; or to have all of the bones oriented in one plane, pointed down z. In this second method, although the skeleton resembles a porcupine, the H/P/B rotational sense is not lost to parenting! Nearly all of the (child) bones using the z-axis method have to be moved and re-rested, using "r"/"shift-r"/"ctrl-r". This can be adapted to a third method, where re-resting is combined with checking the rotation of every joint bone right after it is drawn. Trial-and-error seems to make better sense than my explanations. Re-resting saves on having multiple bones at every joint -- the bone is rotated until its "sense" is consistent with the majority of the other bones. Having a few bones at each joint may help if the arm swings around to another orientation, as in running (something called "gimbal lock" is associated with 90-degree changes), and may also come in handy for "real-time" keyframing (described below). In this method, nearly all but the properly-oriented zig-zag bones are "invisibleized" in the Scene Editor (a good idea WHENEVER using bones, since only the visible bones can be selected), and the control bones may be active or inactive (parent bones still affect their children, even when inactive). This may seem to be a lot of trouble, just to obtain a skeleton with intuitive animation and rotation; try to make the best of it. Two days later -- patience will pay with this approach!
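The "gimbal lock" mentioned above can be demonstrated numerically. Assuming, for illustration, that rotations are applied as heading about y, then pitch about x, then bank about z: once pitch sits at 90 degrees, heading and bank turn the model about the same axis, so one control goes dead.

```python
import math

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def hpb(h, p, b):
    """Compose heading/pitch/bank as y, then x, then z rotations."""
    return matmul(rot_y(h), matmul(rot_x(p), rot_z(b)))

q = math.pi / 2
# With pitch locked at 90 degrees, +heading and -bank give the SAME matrix:
m1 = hpb(0.1, q, 0.0)
m2 = hpb(0.0, q, -0.1)
```

The two matrices agree to floating-point precision: two different sliders now produce one motion, which is why riggers add re-rested helper bones at joints that swing through 90 degrees.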
So, you ask, how about if I create a little square or L-shaped group of one parent and three child bones at the zero origin, and then parent it to a bone I've drawn? Two answers: first, the program will put the box at the base of the bone, instead of at the end, so draw a tiny bone at the end of the parent first. And second, the square or L-shape should probably be drawn in the plane perpendicular to the direction you will be drawing the bone. But hey, at least one bone per set should have pretty good orientation, to keep rotation intuitive. All the others can be kept invisible.
The trick with adding bones to the eyebrows is to select a head bone to parent each one to, and then to "Draw Child Bone" them, despite the position they wind up being yanked to. Rotate and move them, then press "r," and "shift-r" and "ctrl-r."
Bones drawn in a zero-plane may also be "re-parented" -- such as leg bones re-parented to a drawn hip-socket position. Generally, when this has worked for me, the old and new parent bones have been pointed in the same direction. It also seems to help for both old and new parents to be the same length, since the initial rest length is stored for the child bone. Otherwise, it may appear that a bone is being parented midway to the center of the new parent, or floating in space.
4) "Resting" bones. By pressing "r," then "shift-r," the bones have their "cloud of influence" moved to the position the bone has been adjusted to, and then the function is de-activated. This accidentally "activates" the bone, so "ctrl-r" is also done.
5) Adding bones after "resting." LIGHTWAVE has a feature where a Rig_scene.lws can be hidden in an animation scene, by using negative frame numbers. Since this isn't an option for us, we have to adjust or add bones before or after the animation that we've already produced.
Novices should probably get used to this routine: hitting "Reset" on each bone while cycling through all the bones using the UP and DOWN arrow keys; THEN hitting "ctrl-r" to de-activate each bone, again cycling with the UP and DOWN arrow keys; and finally hitting "p" and cycling through all of the Bones with the UP and DOWN arrow keys.
THEN, when you add bones and/or adjust them, there won't be any hanky panky from activated bones corrupting the results.
(IF THE BOUNDING BOX DOES NOT COMPLETELY COVER THE DISPLAYED OBJECT, SOME BONE IS STILL ACTIVE.)
Wherever you have to re-position a bone, hit "r," then "shift-r" then "ctrl-r" afterward.
My experience with bones has been "a world of hurt," so I cannot recommend to beginners a lot of "short-cuts" like de-activating a few bones, adding a new bone as child, and re-activating the old ones. It is also possible to draw a child bone while the parent is active, as long as you draw it exactly where you want it. It IS possible, but if you are not completely clear that "r"/"shift-r"/"ctrl-r" is different from "ctrl-r," I really hesitate to recommend that path. When you deselect all of the bones, make a backup save of the scene, just in case.
6) NOW, YOU MAY TEST-ANIMATE. It is recommended one animate with a strong falloff -- inverse distance^16 is the strongest offered in INSPIRE. "Activating" bones happens when "ctrl-r" is pushed -- it is a toggle. ("r" unfortunately toggles activation on as well, like "ctrl-r," which some have found confusing.) Do not use "r" once you have begun animating! Use Ctrl-r to activate the bones you want to use. It is a common practice to make some bones invisible when the rig is finished, and to color-key bones to make them easier to select.
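The value of a strong falloff can be sketched numerically. Assuming falloff behaves like an inverse power of distance (my simplification -- I can't vouch for INSPIRE's exact formula), a high power makes the nearest bone dominate each vertex:

```python
# Rough sketch: per-vertex bone influence modeled as 1/distance^n, normalized.
def weights(vertex, bones, power):
    """Normalized influence of each bone position on one vertex (1D)."""
    raw = []
    for b in bones:
        d = max(abs(vertex - b), 1e-6)  # clamp distance to avoid divide-by-zero
        raw.append(1.0 / d ** power)
    total = sum(raw)
    return [w / total for w in raw]

bones = [0.0, 1.0]   # two bones one unit apart
v = 0.25             # a vertex closer to the first bone

soft = weights(v, bones, 2)    # gentle falloff: both bones pull noticeably
hard = weights(v, bones, 16)   # strong falloff: the nearest bone dominates

print(round(soft[0], 3))   # nearest bone gets 0.9 of the pull
print(hard[0] > 0.9999)    # True: the nearest bone all but owns the vertex
```

This is why strong falloff keeps animation intuitive: distant bones barely tug on flesh that isn't theirs.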
It may help to do the exercises in the INSPIRE Manual where a tube is bones-rigged and fine-tuned, before trying to rig a figure for distortion-free animation-test calisthenics.
A test-animation is generally tried before embarking on "real" animation. Can the arms extend in front of the chest, can the hand touch the opposite shoulder, can the knee be raised? Watching for tugging or depressions takes a little experience. If a distortion shows up, it is probably easier in the long run to go back to #5 and add a set of shape-holding bones. Adjusting strength for one bone also works, but beware inviting a domino-effect of other bones being affected, or a walk cycle with a limp.
When I have to remove a bunch of bones, I typically start with a finger bone or other end bone, open the bones panel by typing "p," clear it using the "clear bone" button and Enter, then use the arrow key or scroll upward to remove the parent. This tends to produce fewer surprises when removing bones.
ONE DOESN'T KNOW WHERE THE CLOUDS OF INFLUENCE ARE, THOUGH EXPERIMENT AND EXPERIENCE SUGGEST THAT THE DEFAULT MAXIMUM RANGE IN THE "LIMITED REGION" REQUESTER MAY BE CLOSE. (In practice, one seems to get a better animation result with a figure that is about 50 feet tall, but this tends to lead to problems importing inventory props without enlarging them in Modeller first.) So, one draws the bones to fit, and animates to see where they are, and then adjusts. If inferring the influence areas really gets on your nerves, you may want to switch to the Limited Region method described below in red. A range of a few centimeters is useful.
Generally, the Camera and Light views will serve better than the orthogonal (FRONT, TOP, SIDE) for adjusting character bones. Proxy models having one tenth the detail of final models are almost always used until the final renders are being made. Proxy objects refresh much faster and render much faster during testing.
One other thing about bones: SOME ANIMATORS PUT BONES IN EVERYTHING. If floorboards can bounce under the weight of a heavy dancer, then the floorboards get bones; if a hill could use a little more height, the hill gets a bone; flames get bones to keep them interesting; objects that could "go either way" like tennis rackets and golf clubs also get them. Animated details can lend punctuation; subtle effects can emphasize strength, weight or speed.
Patience! Start with easy projects. Why? This operation has a fair number of variables. You may be mystified why the bones keep showing up in crazy wrong places, for instance, until you realize that you accidentally clicked on an object other than the one you wanted to attach bones to. And make Scene back-ups often!!
One of the hardest lessons to learn for me was how to force myself to MAKE GOOD PROXY OBJECTS.
This is another one of those sentences in the Manual that need one's full attention. If you want the characters you make to be animate-able in real time, you're going to have to make proxy object versions of them with 1,000 or fewer polygons. A really fast PIV chipset can help too, but for those of us with PIII's and II's, this step can make an incredible difference in quality of life! For multiple characters, you may want to keep the total below 1,000, as the extra polygons will first show up in a frame rate loss, and delay after mouse-adjustment.
The proxy on the left is not much of an example of a proxy, but it illustrates one of the three techniques for creating low polygon models: start with a subdivided box (using the arrow buttons before pressing "make"), and then use "BGConform" with the model you are making a proxy for in the background. Another method of making a low polygon-count model is to try the "Reduce Poly" button, but with a 20,000 polygon model, this will not get the polygon count down to the 1,000 polygon target. The preferred method is the one used for the mermaid character: load the object into another layer, make it a background, and then magnet primitives like spheres and boxes to get a fairly accurate proxy, and finish the job with BGConform. Hidden polygons, seams, and all the other stuff you never are allowed to do: GO FOR IT!! The only tedious part of the process is re-surfacing sections of the proxy to match the original, especially things like eyes that will animate.
You may opt to cut away polygons from near joint areas in order to be able to pick bones using your mouse or bitpad. If you have used the Scene Editor to make all non-joint bones invisible, just mouse-selecting the elbow area should bring up that bone, confirmed in the bottom requester window.
The Scene Editor may also be set to "partial polygons." Some animators will use the Bones requester window that appears at the bottom of the display when a bone is selected (that's what it's there for) to scroll to the Bone they want, which will be prominently named "LEFT ELBOW," etc. This is not a bad way to animate; another similar way is to keep the Scene Editor panel open, and have it to one side of the screen. The left-right scroll button at the bottom of the editor screen makes it pretty convenient; just click on the Bone, and it becomes highlighted, and you will have key bones color-coded anyway.
Because the display refreshes in real time, the animation will be to some degree watchable, and fairly fast to render using Quickrender settings. You may need to name your lights according to their settings (222%) and then change all but one or two to 0% for quickrender. Don't forget "Bink" from http://www.smacker.com/ for high quality compression and editing options.
What about "Limited Region?" I have played with "Limited Region" and found it to be very useful for tyros because I liked that the limited region balloon indicated precisely where the bone's influence leaves off. A good approach seems to be to place limited region bones as if they were ball joints. When I mix limited region and normal bones, I try to keep the parent and child either parallel or distant/inactive. "Limited Region" bones need to have shape-holding child bones, just as with conventional bones.
"Limited Region" bones are mentioned in the Manual. With this system, one can make groups of clouds that will fit exactly to the shape of an object to fairly tight tolerances. This means that there should be less "pinching" and no problems with feet, thighs or hands stretching. The "Limited Region" box of the Bones panel (opened easily with "p" when Bones are activated) applies a balloon over the bone that corresponds to that area exactly, but ONLY when the Minimum and Maximum range numbers are (nearly) the same. This use of "minimum" and "maximum" may seem a little confusing since fog and light also use similar terms, but ignore this similarity. The advantage of being able to place and re-place balloons is that the other bones systems seem to have falloff reaching to 1 meter, and this is less intuitive. The disadvantages of this method: TOO MUCH TYPING. Where two bone balloons overlap, the system seems to default to the same flexing system used without limited region, but it can also introduce a kind of wrinkling (unless there is a 20% buffer between minimum and maximum ranges). I have also experimented with having non-limited region bones between them, with strengths in the 500% range. Although one can vary bone strength -- usually one shouldn't (100% is fine). Also, the bones can be any size, but try to keep them one size (small). This may keep the learning curve nice and shallow and the process fairly intuitive. Another shortcoming of this method is that any area that doesn't have a bone's influence will not move at all, but that's most of the learning curve. If you want to "go crazy," play with making the "Minimum Range" a smaller number than the maximum for areas with greater flexing, or having overlapping LR "Clouds." Limited Region is very useful for placing bones in eyes in duplicated skeletons. The same techniques of using redundant bones for keeping rotation orientation intuitive apply to Limited Region bones.
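The Minimum/Maximum Range behavior can be sketched as a simple weight function. (The linear ramp in the buffer zone is my guess, not documented INSPIRE behavior -- the point is just the shape: full influence inside the minimum range, nothing past the maximum.)

```python
# Sketch of a limited-region influence "balloon" in one dimension.
def limited_region_weight(distance, min_range, max_range):
    if distance <= min_range:
        return 1.0   # inside the minimum range: full influence
    if distance >= max_range:
        return 0.0   # past the maximum range: no influence at all
    # Assumed linear falloff in the buffer zone between the two ranges.
    return (max_range - distance) / (max_range - min_range)

print(limited_region_weight(0.02, 0.05, 0.05))           # inside: 1.0
print(limited_region_weight(0.10, 0.05, 0.05))           # outside: 0.0
print(round(limited_region_weight(0.075, 0.05, 0.10), 3))  # buffer zone: 0.5
```

With minimum and maximum (nearly) equal, the balloon has a hard edge -- exactly the "fits to tight tolerances" behavior described above; widening the gap gives the softer buffer that avoids wrinkling.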
The limited region approach can give a result somewhat similar to "ball joint" animation, by having limited region "balls" positioned for child bones near elbows and elbow-like structures like knuckles and knees.
It may seem a little slow to open up the cow object and add bones to it, but getting bones down early is a lifestyle I can only dream about -- I was making some pretty heinous errors when Larry Schultz shared his knowledge. Be productive!
One little weird thing: you don't need to move the object at all to have it walk across the scene. The bounding box can sit in the center, while the skeleton walks in and out of frame. Weird, huh?
Another trick one can do for morphing proxies is to create boxes with mouth or eye positions that will morph. At this point, though, one is probably using a spoon to dig a tunnel -- consider asking Newtek to help you out with LW7. If you're like me, you will typically animate the characters separately, and then do another version with all the characters, making eyes meet, etc. One last trick on animating: use point lighting at under 100% values to get useful indications of position; above and forward of the object seems good. If you're hitting "F9," this trick may not be necessary. (If you don't want to clear your lights, rename them with their values and set their values to 0.)
In terms of "industry standard" animation, the above limited region figure rig will probably be adequate for a wide range of poses and sequences. The shortcoming of working with "rotoscope" art like the images below is that one may focus on the knees and ankles and neglect the rotating shoulders and shifting hips. The advantage of working from stills like the Muybridge sample below (and one can find a fair number of Muybridge images on the web) is that they can help one nail the first few walk cycles. (Truth be told, cycles are not used much in animation.) Something that beginning animators have only had for the past fifteen years or so, except at the better animation schools, is access to a way of studying ANY motion sequence frame-by-frame. It's called a VCR!!! USE IT!! One can study walks, runs, famous 3D films and cartoons, all "frame-by-frame."
A teddy bear makes a fun practice object for INSPIRE: fast to make, cute as all get-out, and one will quickly recognize the differences between a hinged teddy and a bones treeform teddy.
It is also a common practice to have only one axis available for elbow bending, while two or three are available for neck bending, comparable to nature. Having highlighted rotation buttons can be a great time-saver.
Melodramatic exaggeration is the art of wringing hands with pleading eyes on one's knees, and most of us probably know how to do it unconsciously, as a result of watching thousands of hours of cartoons. One can probably lay out most of an animated sequence's "keyframes" with a digital camera, and then "brute" the CG skeleton to match. YES, YOU DO NOT NEED TO KNOW HOW TO ACTUALLY OPERATE THE COMPUTER TO ANIMATE. I would recommend cultivating the POSITIVE kinds of expressions over the negative, and reserving poses dealing with sadness or worry for OTHER characters, though the "easier" character is probably the villain, because they only care about themselves. The hero worries only for a moment, before thinking about how to end a worrisome situation. A little brow furrowing later, an idea is tried. But is it willpower, or do they relax in order to think? Here, one begins to include one's philosophy of life.
Hands and faces are where the eyes tend to go first, so a digital camera can convey a world of emotion, but there are plenty of animations of fruit, pencils, peppers and other assorted items that prove that body motion can be laden with language. Fortunately, most of this knowledge is not arcane, it can be found in animation books going back 100 years. A moment of rest punctuated by a pinky wiggle, that may take a little looking to discover, but that is common enough too. The same principles of contrast that combine elephants and mice and shivering characters and fireplaces, apply to animation.
As many books point out, SILHOUETTE is the art of staging pantomime for greatest effect. It isn't even taught by drawing, some students learn it with paper cut-outs. Drawings are based on silhouette, if one looks at the edges of a given object. Careful silhouetting will cultivate repetition, curves, angles, efficiency of pose and acting. The eye will be kept in the image, and the key ideas will be conveyed.
Animating a bouncing ball is probably a good exercise to try with bones, both because one learns the "parent" needs to be at the center, with children above and below, and because one learns to have fewer keyframes for acceleration than deceleration, etc.
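Here is a little arithmetic sketch of why the bouncing ball needs that uneven key spacing: sampling simple gravity at 24 fps, each frame's drop is bigger than the last, which evenly spaced, evenly valued keyframes would flatten into a lie. (The drop height and gravity value are just illustrative.)

```python
# Sketch: height of a dropped ball sampled once per frame at 24 fps.
G = 9.8   # gravitational acceleration, m/s^2
H0 = 2.0  # drop height in meters

def height(t):
    return max(H0 - 0.5 * G * t * t, 0.0)

frames = [i / 24.0 for i in range(8)]          # the first 8 frames
heights = [height(t) for t in frames]
steps = [heights[i] - heights[i + 1] for i in range(len(heights) - 1)]

# Each successive frame drops farther than the last -- that widening gap
# is what hand-placed keyframes (and their spline slopes) have to capture.
print([round(s, 3) for s in steps])
```

The same arithmetic run in reverse is the deceleration on the way up, which is why the top of the arc earns the extra keyframes.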
In terms of manipulating cycles and masses of flesh, one probably wants to stay away from cycles. The main thing that goes wrong with cycles is that one cycle will be 45 frames long, but one or two bones will have a 60th frame or neglect to have "Repeat" activated in the Graph Editor. On the plus side, if the learning curve is looking steep, it sure makes the animator look good for a (short) while.
Back when computers like the ones most people own on their desktops were being used to make "Babylon 5" and "Max Steel," the way to keep refresh times reasonable was to animate using skeletons alone, keeping the object completely invisible, according to Larry Schultz. To see the poses? "F9" Did you forget about "F9" already? When lighting, F9 is sometimes every third keystroke for superlow or "L" renders. Bones animation can be a pretty far cry from manipulating hinged geometry like robots -- all the calculations sometimes turn the interface to mud, even when one isolates the character in its own Scene for later transferring using "Load from Scene." IF YOU AREN'T USING 1,000 POLYGON PROXY "STAND-INS" BY NOW FOR ANIMATING, GIVE IT TIME,... . It also sometimes helps to have a light used as a second camera, targeted at a Null, with the light moved back and forth to get a better idea of what is going on.
Incidentally, it is apparently very common to divide up Scenes with many characters, and then recombine them later with "Load from Scene," and there are even a couple of ways of saving quickrender or Preview tests of the other characters and loading them as textured cards or backgrounds for interacting.
Skeleton bones are usually combined with morphing. Rarely, the morphing intentionally affects how the skeleton operates, but I have seen this done. Usually, a morphed head includes the body in an INSPIRE animation. Heads may be parented to a bones-rigged Null in LIGHTWAVE now.
"Null to disappear" is what I rename the Null that I add before the morph object. The second object to be morphed and all subsequent morph objects will be parented to this Null. I do not know what the correct approach is, but I numerically size the Null to 0 0 0 and then move what is left far off screen. This seems to work. Is this the correct way of doing it? BEATS ME! (For one thing, to speed up refresh times, the morph targets should be made invisible in the Scene Editor.) The rest of the procedure is mostly straightforward: check the MTSE box for the first object. Load the morphs in the most workable order. Be sure all the morphs are parented to the disappearing Null. Morph the first object to the second, and enter any random keyframes in the morph envelope graph; then change the object requester to the second object, and enter the third object in the target requester, but do not enter info in the graph nor check the MTSE box. The remaining morphs may be loaded in a matter of minutes, if not seconds.
When one is creating morph objects, at first, one probably wants to test them using Layout as soon as possible, because some will either be illegal or just not animate very well. One "fix" for an illegal morph (wrong number of points or a casserole result) may be had by trying to morph to that pose from another acceptable pose using Modeller's "BGConform." Points giving undesirable distortions may sometimes be corrected with "smooth."
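Under the hood, a morph is just per-point linear interpolation, which also shows why an "illegal" morph is one whose point count (or point order) no longer matches the base object. A sketch, my own pseudo-version rather than INSPIRE code:

```python
# Sketch of a morph: every base point slides straight toward its
# same-numbered target point by the envelope amount (0.0 to 1.0).
def morph(base, target, amount):
    if len(base) != len(target):
        raise ValueError("illegal morph: point counts differ")
    return [
        tuple(b + amount * (t - b) for b, t in zip(bp, tp))
        for bp, tp in zip(base, target)
    ]

base   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]

print(morph(base, target, 0.5))  # halfway: y = 0.5 on both points

# A target with the wrong point count raises, much as the program
# refuses a mismatched morph target.
try:
    morph(base, [(0.0, 0.0, 0.0)], 0.5)
except ValueError as e:
    print(e)
```

The straight-line travel of each point is also why a badly built target gives a "casserole" in mid-morph: the in-between positions are nobody's sculpture.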
If one uses the technique of designing the object in Modeller with eyes and other "accessory" elements floating in space on the zero planes, for later limited region boning and parenting and moving to their new positions where they belong, using strange center positions and other extreme strategies like always having the eyes forward (that I am guilty of recommending) should be completely unnecessary.
Adjusting pivot points for bones-rigging objects should not be a big deal as long as all bones have one parent, which may be parented to another bone, which may counter-rotate around another for galloping, etc.
INSPIRE does not UV-map, but intricate textures do NOT wander around when morphing, because INSPIRE has a kind of "vertex mapping."
(The goofy little Mr. Twisty character above was made by smooth-shifting a cross section of a pink twist-tie and rotating and moving it, then performing some subdivisions. There was an actual "sculpture," but it's either somewhere on the floor or in the vicinity of the desk. It might be fun to animate a wire bird or two, and maybe some action sequences. The art of posed animation isn't that far from the world of modelling, sometimes. I have heard that very adept animators all work with "stand-ins" of one kind or another when laying down bones keyframes.)
(I have also noticed the interface become sluggish when working with displacement mapped sections of cloth or waves, or many many bones, or when working with multiple morph targets where the polygon count is fairly high. One may press the Objects tab to open its panel, and it will highlight, but the panel will be trapped behind a half-refreshed image. The solution here is to press the Objects tab again to de-highlight it, and wait for the display to refresh. Speeding display refreshing up is helped by converting the objects to invisible or partial wireframe realtime display in the Scene Editor, as described elsewhere on this page, and in the manual. This makes a big difference. Once the display is refreshed, THEN press the Objects tab panel again, and it will open normally. Many animators do not bother with the Scene Editor approach, simply switching to low-polygon "proxies" made with "Reduce Polygons" or heavily-cut object files, until final rendering. MAKE SURE that these proxies have the most current textures by saving each finished texture and then loading them to the proxies using "Save All Objects," this will allow you to swap objects at the very last moment before beginning rendering.)
Bones may be used for a lot of effects like hair or fabric animation, for making trees wiggle in the wind (with cycles of different lengths...), etc, though some of this is also done with "displacement mapping" as is described in the flag tutorial, and below. For heavy distortions, bones do not always have to be only rotated, they may also be sized or moved. Moving is sometimes used for added shoulder roundness in cat characters, by positioning the bone in the character's body, then shifting it out as a first pose. Bone sizing can be fun to watch.
If you want to have a sinusoidal motion like a cat's tail or octopus tentacle, you will need to animate one bone going through the full cycle twice. Create many little keyframes. This can then be saved as a motion file and loaded to all of the child bones, negative-shifted, and trimmed, and the bones set to "Repeat" in the Graph Editor. The first frame will be incorrect, and will need to be changed, but this seems to be a good trick and fun to look at.
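The tail trick boils down to one sine curve copied down the chain with a time shift. A sketch with made-up numbers (the 30-frame cycle, 30-degree swing, and 5-frame shift are illustrative only, not anything INSPIRE prescribes):

```python
import math

# Sketch: one bone's rotation runs through a sine cycle, and each child
# bone plays the SAME motion delayed in time -- which is what copying the
# motion file down the chain with a negative shift accomplishes.
CYCLE = 30           # frames per full swing
AMPLITUDE = 30.0     # degrees of swing
SHIFT = 5            # frames of delay per bone down the chain

def rotation(frame, bone_index):
    phase = (frame - bone_index * SHIFT) / CYCLE * 2.0 * math.pi
    return AMPLITUDE * math.sin(phase)

# Bone 1 repeats bone 0's curve, just SHIFT frames later.
print(round(rotation(10, 0), 3))
print(round(rotation(15, 1), 3))  # same value, delayed by 5 frames
```

The delay travelling down the chain is what reads as a wave rippling from the base of the tail to the tip.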
If you have a long chain of bones and "size" them as you go, you can get effects like a garden hose spray that can droop like the real thing. It's not particles, but it's not too time-consuming. I would probably use image sequence loops on several coaxial cylinders, rather than particles. (Particles with a large pixel size and some motion blur would probably look great too.)
One nice thing about morph targets is that one can export them to other software packages or into LIGHTWAVE 7. Making morph targets? There are two things to know: most morph artists seem to work with very spare models, which are "Metanurbed" (tab key) and "frozen" AFTER the modeller moves the jaw/eyes/whatever. The other thing to know: great spare head models can take a while to make. Consider the variables: nostril wings, the fold around the corners of the mouth, the upper lip detail, the eyes,... Fortunately, a character can be imperfect and still be morphed fairly interestingly, but you may need to morph differently. The morph artist with a strong 300 point head can lasso point-select and use "rotate" from the jaw point and get a lot of morphs quickly, with a very natural look; the artist with a more detailed starting morph object will need to either lasso overlapping groups of points several times, or use some of the "group" point/polygon tools like bend, vortex or twist for the same effect. Both approaches are valid depending on your needs.
Another thing to mention about morphing, and animation in general -- exxxagggerate! That word was on a sign an animator I know once kept on his desk. Open your mouth as wide as it will go -- that's pretty big. A CARTOON mouth is going to go quite a bit wider, isn't it? Try rotating the jaw over to the side for a Southern "drawl." Don't just animate lips -- morph the chin and jawline, cheeks and brows, have the whole head distort! Just because some animators don't use tongues doesn't mean you can't. Dust off your copy of the Preston Blair book and model some of those poses of the emotions, not just the phonemes. Have your character on its knees begging, try animating a "take." It's fun to animate a figure skating mouse doing some impossible moves, but watch some "Pokemon" and notice the way they play with passion in scripting. (To have a lot of fun with morphing, use a strong character model; one by Todd Grimes may be found at http://www.evil-plan.com/ , though it is for study purposes only. I had to import it into LIGHTWAVE 7 DEMO and then cut it up and export it as Lightwave 5 to import it into INSPIRE.) And remember to have cheeks rotate, expand and contract to match jaw movement,...
And remember that the envelope may be adjusted using the RMB in the Graph Editor. : )
Morphing results in some pretty gorgeous animation, but there is a trend to create bones-rigged heads. Why on earth go to the trouble? Well, to reuse the heads you make in INSPIRE for other characters, you will need to create a Layout Scene file that applies distortions like "Bend" "Size" and others to all of the morph targets equally. This can probably be done -- maybe that's one reason those tools are in Layout -- but the alternative is to rig one head with plenty of redundant bones and use LIGHTWAVE 7 for block editing.
Rigging a head with SOME bones is probably a good idea anyway, because eyebrow motions seem to look better, but if the morph targets are made well, they will include good chin and jaw movement, and some eyebrow motion and possibly furrowing. Since the texture utility will move the eyebrow texture around to match the morphing, extra adjustment shouldn't be needed.
Head bones-rigging is also one of the instances where one may want to "break" the rule about drawing bones to NOT be adjusted afterwards. If one has extraneous bone parents in a head model, one should be able to de-activate and un-rest those bones, to adjust the relative positions of the jaw and other areas. This will mean that the new facial skeleton will fit another head, and only "Replace Object" will need to be clicked. The only "problem" with head morphing in INSPIRE is that without the ability to block "copy all descendants" and paste them, the effort to bones rig phonemes and emotion poses will lead to a lot of typing. That head skeleton library might be better loaded into someone else's copy of LIGHTWAVE 7.
One possible reason few of us have seen what a model looks like before it's rigged -- one generally does not show these images to clients. According to Todd Grimes, clients seeing treeform or other pre-rigged models diverts attention from issues that are relevant to completing and selling the product, and he discourages others from doing this. Another reason why there may not be very many pre-rig model images like the one above could be that now eyes and hair may be rigged a variety of ways, using nulls and dynamics. Incidentally, position the eyes one above the other along the center line for faster bone placement, or along another zero-plane.
Currently, animators have a number of powerful tools to streamline productivity. One of these is the ability to load a gallery of keyframe (bones) poses, and then copy-and-paste favorite poses, along with all of their descendants, to destinations along the timeline of the animation. That way, one could load a gallery consisting of waltzing, walking, running, jumping, pleading, shrieking, crying, writing, reading, singing, etc., with 1,000 blank frames before or after it, and do a few cut-and-pastes. These can then be tweaked to suit the particular scene. INSPIRE users do not have this option, since Bones may only be edited one at a time. INSPIRE is VERY powerful, but productivity may be better served by the newer software like LIGHTWAVE. X-ray may be used instead of partial-polygons, global coordinates may be used instead of extra joint bones, infinite morph poses, "sliders," more realistic shadow detail, hair, cloth, etc.
Hair? I wouldn't waste my time on it, get something rough and keep moving. True, one could "build" hair as subdivided polygons and then use displacement map "turbulence," for certain scenes, but one wonders when it would be worth the trouble. The hair below is the old reliable "dogbear" from http://www.textureworld.com/ . If a client needed industry-standard, one hopes there would be enough pay to move up to Lightwave. The discussion on morphing textures above bears revisiting, since a lot can be done with a good "dogbear," but don't let it slow you down. There are a number of books available on pricing animation for clients. Although I've sold nothing at this point, there are apparently others who have, ranging from a few hundred dollars for a day's work, where hourly or "up front" pricing might have scared a customer away, to practices like being asked to be available for a project during a set month, which necessitates a large up-front fee.
If you think the audience will be lowered by your project, then what? It may yet get made, but it's going to need better jokes and some kind of exaltation. If you get to a place where nothing is happening, you may want to silence your thinking for a while. Are you experiencing a simile of arguing on the cell phone while driving? Quieting your thought has a lot of benefits; you get to listen to angel ideas, for instance.
So, that's character animation with INSPIRE in a nutshell, as far as I understand it, the rest is pretty much technique. I wouldn't be surprised if effectors can be used for animation more, since so many deforming options are programmed into Layout, but I haven't seen any examples.
Would you rather "Filmbox" your animation? That's two of us. http://www.kaydara.com/ "Filmbox" is a generic cross-platform motion-capture system, though one may find a variety of other motion capture utilities by performing a google search. Motion capture may be performed with http://www.newtek.com/ "Aura," according to some tutorials. Learn from the sock puppet, listen to what it has to say. The sock puppet is wise, and according to many, well-connected.
And by the way, did you realize that "Auto-adjust" makes it possible to record adjustments to movement and rotations in real time?
When you "play" the arrow controls, you can still adjust whatever object is currently selected. This will lay down x, y, and z information using the "Auto-adjust" feature. Pretty incredible. You can use the right mouse button to record a bouncing ball's up and down motion, and then parent the ball to a Null you add, and adjust the x and z movement in real-time the same way using the Null! This method can be repeated with additional Nulls for rotation and squash and stretch. Most objects will play pretty slow, but if not, hold down the left or right arrow buttons to slow down the display for more control. Will this method work for bones? Of course -- but you won't be able to use the Null trick. Solution -- use extra bones at each joint (the same technique used to keep rotational orientation intuitive)!
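Conceptually, that real-time recording is nothing more than sampling whatever the mouse supplies once per frame into keyframes. A sketch, with a stand-in function playing the part of live input (the `record` helper is my own illustration, not an INSPIRE feature):

```python
# Sketch of "Auto-adjust"-style recording: while the timeline plays,
# whatever value the input currently supplies is written straight into
# a keyframe for that frame.
def record(channel_input, frame_count):
    """Sample an input callable once per frame into a (frame, value) list."""
    return [(frame, channel_input(frame)) for frame in range(frame_count)]

# Stand-in for live mouse input: a ball's up-and-down bounce, keyed
# every frame over a 20-frame cycle.
keys = record(lambda f: abs((f % 20) - 10) / 10.0, 40)

print(keys[0])   # (0, 1.0) -- at the top of the bounce
print(keys[10])  # (10, 0.0) -- on the floor
```

Layering one recorded channel per pass -- Y on the ball, then X and Z on a parent Null -- is exactly the Null-stacking trick described above.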
LIGHTWAVE animation is currently streamlining inventory so that one cuts and pastes poses instead of posing, and provides ranges of coordinated flexing and fluid-dynamics driven animation, which make for realistic jiggle. Audiences seem willing to watch both kinds though. Fluid-dynamics was on its way 15 years ago when WAVEFRONT shared the work of an intern with guests to their California site -- a rolling jello cube. Jelly-like motions are still being "bruted" in large degree, so feel free to play with these effects in INSPIRE. In LIGHTWAVE, they may be handled with off-center animated rotating Nulls or fully dynamic fluid bodies or chained balls inside cloth, possibly rigged to balance others. Beats me. Rigging for bounciness is pretty straightforward: put child bones in fleshy areas like upper arms, cheeks, breasts, tummy, butt, tail, thighs and chins. Although it can be fully-automated, it is also an aspect of animation known as "overshoot." The character has hit its destination, but these parts keep going past it.
According to animation director and HR websites that give their gospel truth view of demo reels, if one is going to have an "industry standard" animation feel, it is going to have "anticipation," "follow-through," and "overshoot" and not computer-induced jerkiness (the beginning and end of a cycle have to have the same slope) or an overly spliney feel. Anticipation and follow-through take planning, reducing jerkiness and splineiness can often be handled by looking at the different channels of the Graph Editor. If the line suddenly jerks, that's gotta be a clue. This technique is also very useful for making cycles. With the advent of dynamics-based animation, animation directors may also be looking for more dynamics-based rigging.
Follow through, anticipation, punctuation, rhythm, overshoot, settling, composition -- some of these are covered pretty well in the Preston Blair books. A common criticism of 3D animation is that gravity is ill-handled. Having a character with a challenging obstacle like a tall ladder or heavy trash can is one way to learn by playing.
My first couple of shots had two characters, which can be a lot of fun. As I once overheard at a famous guerilla studio: get the characters out front right away and show them interacting.
Although I share low resolution renders, rarely with sound, the large majority of reels I see are 640 x 480, with music, and the character reels generally have lip synch.
Cycles are created by copying the "zero" keyframe to the end frame of your sequence, or to the end of the interval you have in mind. Use the "Create Key" button at the bottom of the INSPIRE GUI screen; it will open a requester for entering the final frame. Toggle the "End Behavior" requester to "Repeat." If one is animating the hip/knee/ankle of a walk, all the cycles should have the same interval length. If something more random is desired, like a vibrating car, then the intervals can each be different. Objects that are cycling have the advantage that they don't suddenly become rock solid should the animation linger a second or two too long; their disadvantage -- cycles generally aren't used in feature animation. If the animation uses bones, each bone will have to be set to "repeat," and not just the object. Again, the Graph Editor will confirm whether the start and end points of the cycle match. Scroll the menu down to rotation and have a look. If the value is the same, but the slope is down at the end and up at the beginning of the cycle, a keyframe dot will need to be created either below and after the start frame or above and before the end frame, so that, drawn out, the cycle becomes a wave and not a zig-zag. I love "auto-key" but some animators do not; I also like to make back-ups often -- coincidence?
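The wave-versus-zig-zag check the Graph Editor shows visually can be sketched in a few lines of ordinary Python. The helper names here are my own invention, not anything in INSPIRE -- this is only the arithmetic behind eyeballing the curve:

```python
def endpoint_slopes(keys):
    """Estimate the slope leaving the first key and entering the last key,
    from their neighbouring keyframes. keys is a list of (frame, value)."""
    (f0, v0), (f1, v1) = keys[0], keys[1]
    (fa, va), (fb, vb) = keys[-2], keys[-1]
    return (v1 - v0) / (f1 - f0), (vb - va) / (fb - fa)

def cycle_is_smooth(keys, tol=1e-6):
    """A cycle loops cleanly when the first and last values match AND the
    slope into the end matches the slope out of the start -- otherwise the
    repeat point draws a zig-zag instead of a wave."""
    start_slope, end_slope = endpoint_slopes(keys)
    values_match = abs(keys[0][1] - keys[-1][1]) < tol
    slopes_match = abs(start_slope - end_slope) < tol
    return values_match and slopes_match

# Values match at 0.0, but the motion slopes up out of the start and down
# into the end: a zig-zag at the repeat point.
zigzag = [(0, 0.0), (15, 1.0), (30, 0.0)]

# Adding a key below/after the start and its mirror before the end makes
# both endpoint slopes equal, so the repeat point draws a wave.
wave = [(0, 0.0), (10, 1.0), (20, 0.0), (30, -1.0), (40, 0.0)]
```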
The "new" approach to animation using bones is to transition from one kind of cycle to another using advanced software, such as from a walk to a run to a standstill. The old INSPIRE approach would have been to shoot the cycle in one pass up to the transition, then copy and paste a keyframe set, shift the later keyframes, and brute-keyframe the end of the sequence. The Scale and Shift buttons in the Graph Editor are pretty nifty. But is it any wonder so many animated films seem to be shot from the waist up?
How does a cartoon character lift a telephone to their face when their arms do not reach their face? How does a duck chew without teeth? You fake it. Object dissolve, animated texture maps -- you name it.
In some cases, one will have a locked-down soundtrack; in others, one will not. A non-locked set of phonemes can cycle according to a typical pattern of speech, and may be dubbed approximately for testing or possibly foreign distribution. I find the envelope morph graph editor MTSE system for INSPIRE liberating, though the 40-morph-pose limit has me over a barrel. See MTSE and Lip Synch in the Glossary. Setting lip synch is another reason to make at least one wireframe render with the Data Overlay, to check synch.
Stuff going through the floor? The Scene folder has a way of converting objects to partial polygons or wireframes. This is one of two great ways to check for inter-penetration; the other is to position a "camera light" of 0 intensity parented near the foot, at the plane of the floor, or just below it. There is a column of little icons in the spreadsheet of the Scene Editor. If you click on them with the left mouse button (LMB), they toggle from full textures to boxes to wireframes to surfaces only and back to textures. THIS IS WORTH KNOWING BECAUSE THIS IS ALSO ONE OF THE FEW WAYS TO CUT DISPLAY REFRESH TIMES WITHOUT DISMANTLING THE SCENE OR REPLACING OBJECTS WITH PROXIES. Proxies are still a good idea -- see what the shot is going to look like and keep its integrity high. (Since this may be related to hard drive or RAM room, clear off what you can to a CD-R...) Another trick one can do, in order to speed up rendering, is to make a model of just the object's feet (and just the terrain) by selecting the rest of the body and using "Cut," keeping the same bones rig, then using "Replace Object." "Replace Object" is one of LIGHTWAVE's strongest tools, and speeds workflow dramatically. Incidentally, if you want a character running over rough terrain, animate a Null going over what you believe makes a decent path, then save the ".mot" file in the Graph Editor, and load this file to a single point for "Path Clone." Then, use that point to extrude a landscape, magnet craters, and pole2 boulders into the surface. The important detail will be the hidden "mot" file landscape, which will match the animation of a character running (more likely a Null, as you will learn...)
Animate a character doing a walk cycle. Truth be known, walk cycles don't come up that often except in gaming. Create or use an inventory environment: a swimming pool, a garden maze, a library, a playground, what have you. Now, add a Null to this scene file, and animate the Null moving through the environment. I like to animate the slight up-and-down and other motions using the parent bones. Try the "Align to Path" button in the Graph Editor, and parent your walking character to it. Half the time, I find that the character is walking in the opposite direction of its heading. This can be planned for, or you can just parent another Null between the character and the "aligned" Null -- and turn it 180 degrees; this is a good idea anyway, because "aligned to path" Nulls aren't going to allow for things like skidding, hanging in mid-air, or bouncing.
Camera technique and lighting theory are discussed in some detail in the Tipathon. I like a high intensity setting of 333% plus a falloff that reaches zero within double the distance of the character from the light. Movie lights are typically very bright, and they are positioned close to the actors in order to get better contouring. Contouring like this is only possible when the light is pretty close, due to the "inverse square law." Since Hollywood doesn't put lights in front of the camera, I usually keep mine out of the way too. Try it and see if you like it.
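The "inverse square law" mentioned above is easy to see with a toy calculation -- plain Python, nothing LightWave-specific, and the numbers are only for illustration:

```python
def illumination(intensity, distance):
    """Relative brightness received from a point light with physical
    falloff: intensity divided by the square of the distance."""
    return intensity / (distance ** 2)

# The same 333% light, near and far from the face:
near = illumination(333.0, 1.0)  # one unit away
far = illumination(333.0, 2.0)   # twice as far away

# Doubling the distance quarters the light, which is why a close light
# contours a face so much more strongly than a distant one.
```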
"Shaders" are discussed somewhat in the Glossary . "Shaders" are another word for texture mapping, UV mapping or surfacing. The best texturing program is sold by http://www.pixar.com/ and is called "Renderman." It included things like procedural texture "innards," realistic "radiosity" and "caustics" years before LIGHTWAVE would have them. It also is able to shorten render times by adding subtle changes, instead of rendering from scratch, which is an interesting approach. INSPIRE is somewhat limited in its ability to render a face with a cloudy semi-transparency, but, truth-be-known, when I look at my hand it rarely looks cloudy. Do shaders get credit for art direction? Let's hope not.
On the subject of color theory: make a bunch of cards, and give each one a different surface. Have some gray, some black, some yellow, pink, etc. This does two things: it gives you an inventory of cards that you can turn into a forest on the fly, by texturing one or more with a transparency map, for instance, and using them as background planes. But try just keeping them as flat cards, and load them into a Scene file. Some color theory is intuitive. Use a wide-screen aspect ratio, and make some renders that feel balanced to you. If I say that cone vision is insensitive in low light but detailed and colorful, and that rod vision surrounds it, more light-sensitive but coarser and nearly colorless, tending to carry the darker peripheral depth information, will you keep textured, detailed color in the center of your shot and have dark peripheral shapes for perspective? No, that gets really boring. Is there a mathematical harmonious relationship between certain colors, just as certain musical notes are multiples? Yes, and some colorists have a special knack for seeing the dilution or brightness of colors as they are framed, and finding where one could become a balancing complement, whereas I have yet to really sit down with a color wheel and make friends with it. I can say that I save texturing one or two objects for last, since any one color seems able to pull an image into balance, short of a pinpoint in a forest. Generally, movies make backgrounds darker by 75%, a trick of the Old Masters, and use other Old Master tricks like backlight and lines of interest that keep the eye moving in a circle. INSPIRE seems to have enough texturing capability for me.
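One corner of the color wheel can be made friends with mathematically: a balancing complement is just the same color with its hue rotated halfway around the wheel. A small sketch using Python's standard colorsys module (the function name is mine):

```python
import colorsys

def complement(r, g, b):
    """Rotate the hue 180 degrees, keeping saturation and value, to get
    the colour opposite on the colour wheel (channels in 0.0-1.0)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)

# The complement of pure red is cyan -- the pairing painters reach for
# when one colour needs to pull a frame into balance.
cyan = complement(1.0, 0.0, 0.0)
```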
While on the subject of color theory; the other day somebody published two images of a test render using LIGHTWAVE and one of the leading render engines. I couldn't tell any difference. INSPIRE is missing some new features like radiosity and volumetric texturing clear through an object, but it's pretty neato when you give it decent lighting, specularity and bump maps, perky texture images and the anti-aliasing it deserves.
To recap from the texturing section under the turtle image, above:
It is normal to have two copies of Layout running simultaneously in Windows, with one of the copies used exclusively for texturing, and Modeller for changing textures of polygons, as needed. Partly, this is because INSPIRE is pre-UV mapping. The three most important elements of the Surface Requester are the Surface Color/Texture, Specular Level/Glossiness and the Bump Map texture/values. Practice helps with these, and with the others. INSPIRE only allows three textures to be combined one on the other, but obviously one can composite these using INSPIRE or a paint box. I recently bought a professionally made seamless tiling texture from www.textureworld.com and it was very useful, but a little strong, so I made it 90% opaque and had a surface color underneath it. I also used this texture with only 10% or so opacity as a specularity map. Specularity, per se, is a compromise textural component that doesn't really occur in nature. In nature, everything is reflective, but some surfaces that are waxy have greater scattering than others, resulting in a dull sheen reflection of a light source. Observation reveals, however, that these waxy surfaces are still mirrors. Anyway, the dull sheen would be greater on the hair texture than in its nooks and crannies, so this was used for specularity. I also used the same texture, with some paint boxing to increase contrast, as a bump map. If the bump map requester is given a setting of 200% and no anti-aliasing, it will create quite a relief according to the lighting. I made some tires this way out of a black/white pattern and was highly impressed.
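Making the strong texture "90% opaque ... with a surface color underneath" is just per-channel blending. A sketch of the arithmetic in plain Python (the pixel values are invented for illustration; INSPIRE does this inside the Surface requester):

```python
def layer(texture, base, opacity):
    """Blend a texture pixel over a base colour at a given opacity,
    channel by channel (values 0-255, opacity 0.0-1.0)."""
    return tuple(round(opacity * t + (1 - opacity) * b)
                 for t, b in zip(texture, base))

# A too-strong texture pixel knocked back to 90% over a brown base colour:
softened = layer((200, 180, 150), (90, 60, 30), 0.9)
```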
I recently noticed a dirty metal grating texture that used solid black for the bump map, but had little bright metal patches at the bottom of each indentation. Using two different images for the bump and texture maps gave a pretty effect. An oil smear I noticed was probably a fractal texture applied over it. Some texture artists avoid the fractal button, because they say it gives a typical look. Allowing for little patches of light or other ambient effects like light falloff is probably a good idea for a texture artist.
In the turtle example above, the same texture map of Verdi Pompeii Marble was used as texture and specularity texture, but since the veins were expected to have less shininess than the dark green areas, and since they had a brighter value, the "Negative" box in the texture image requester was checked, making them less specular. A low glossiness setting is the look of Hollywood pancake make-up and comes in handy often. One other thing to say about bump maps is that it pays to paint-box them from scratch. A black line with some blurring makes an excellent bump map for planks. Reflection requires the Ray Tracing button in the Render panel to be checked to be effective. Reflectivity can be a little startling, but if the shot allows it and rendering times do not double, it may be worth it. Transparent objects are a ton of fun, though interleaving semi-transparent objects like smoke puffs or petticoats tend to artifact at seams. Another use of transparency is as a hole maker for eyes inside morphing heads, or for special objects like Null objects that can be very useful during trial renders but must be transparent at render time -- remember to use the Advanced Options panel of the Surfaces requester to make the edges completely transparent. "Luminosity" is the surface component of the greatest bravura, since it is so rare in nature. One would "map" luminosity for things like fire or lava. "Diffused" surfaces are an interesting component because this sounds like the subtle difference between a titanium white and a lacquered white, one essentially contourless, the other with heavy contouring, but if Diffused drops below 100% the surface color drops dramatically toward black. This is also the button used when a front-projected texture falling on a card for shadow casting produces a brighter or darker effect than the surrounding front-projected image.
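The "black line with some blurring" advice for plank bump maps amounts to a one-dimensional box blur: the blur turns a hard edge into a ramp, and the renderer reads the ramp as a rounded groove. A toy sketch in plain Python (my own helper, not a paint-box feature):

```python
def box_blur(row, radius=1):
    """One-dimensional box blur over a row of grayscale values -- the
    'some blurring' that softens a hard black line."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# A white row with one black line: after blurring, the hard edge becomes
# a ramp, which a bump map renders as a rounded groove between planks.
row = [255, 255, 255, 0, 255, 255, 255]
groove = box_blur(row)
```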
One other recommendation: have your objects sized to match nature. This will make replacing objects in layout more convenient, and it will make surface-trading much more manageable.
If you visit a professional texture company like http://www.textureworld.com/ , www.bitbrush.com/textures.html , or http://www.shaders.org/ , you can learn a lot about texturing. Why texture an extruded vine object, when you can texture a long narrow subdivided card with a seamless texture of a vine, and add Bones to it to move it around a tree, etc.? One may also notice that a "specularity" map will be easier to obtain from an image of a patch of red flowers by filtering the whole image red, making it negative, and using this image as a filter to hold back the red flowers, since usually only the greenery will have specularity. Remarkably, one may do most of this in INSPIRE, but use a paint box if you have one. Again, the way to work on textures, even while working on them in a paint box, is to continually check your work in Layout. Sometimes a wood texture will be its own bump map, sometimes not.
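The red-flower trick above boils down to inverting one channel so red areas go dark and green areas stay bright. A sketch of the idea in plain Python (the pixel values are invented for illustration):

```python
def specularity_mask(pixels):
    """Invert the red channel so red flowers go dark (low specularity)
    while green leaves stay bright (shiny).
    pixels is a list of (r, g, b) tuples with values 0-255."""
    return [255 - r for r, g, b in pixels]

# One red flower pixel and one green leaf pixel:
flowers_and_leaves = [(250, 40, 30), (30, 200, 40)]
mask = specularity_mask(flowers_and_leaves)
# The flower ends up nearly black in the mask, the leaf nearly white,
# so only the greenery picks up shininess.
```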
The above cloth texture was made by scanning a knit sweater, and then using INSPIRE to render a tiled image with the tiling seams moved to the middle of the image. This is half of the work involved in creating seamless textures; then "clone" and other editing functions are used to eliminate the cross-shaped joins, and the image is re-saved. Since the edges came from a tiled version and represent the middle of the original image, they will already seamlessly tile. Once a Scene file is created to do this first step, it can be re-used whenever seamless textures are needed. I admit that the image above has its shortcomings, but it can also come in handy. Do you know what the average seamless texture looks like?
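The offset render INSPIRE is doing here is the classic 50% wrap-around shift. A tiny stand-in in plain Python shows why the edges of the shifted image already tile -- they used to be the middle of the original:

```python
def roll_half(image):
    """Shift an image by half its width and height with wraparound,
    moving the tiling seams to the middle where they can be cloned out.
    image is a list of rows of pixel values."""
    h, w = len(image), len(image[0])
    return [[image[(y + h // 2) % h][(x + w // 2) % w] for x in range(w)]
            for y in range(h)]

# A 4x4 'image': after rolling, the old outer edges meet in the centre
# as a cross-shaped seam, and the new outer edges tile seamlessly.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
rolled = roll_half(img)
```

Rolling twice gets the original back, which is a handy sanity check that nothing was cropped.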
Below, we have the "Weavebrush" seamless texture included with INSPIRE used on the left, and the "sweaterbw" used on the right. The texture was used for image, specularity and bump for both. A red version of the "sweaterbw" texture was used, 70% opaque, with a dark red surface beneath. It can make for better socks, sweaters, steering wheels, swim trunks, scarves, camera bodies, etc. The "weavebrush" would still be better for large repetitions/areas like blankets or curtains, since it is uniform.
Are there any tools to use when shooting textures? Yes, probably (this is just a guess), since you ask, but who is going to carry a card covered with 3m scotchlite reflector material around to put behind objects in order to get transparency maps from them? You? Oh, never mind. Polaroid filtering will give specularity-less surfaces depending on how it is oriented. A good thyristor flash will give bump maps with hot foregrounds and dark backgrounds, if used fairly close, but may need a card reflector so that there isn't "falloff" in the image corners.
Interactive Texture Placement is a technique that one can come to slowly, if at all. Some swear by it, but it belongs to the era of non-UV mapping. UV mapping essentially pins texture coordinates to an object's vertices. INSPIRE's method of assigning the texture to vertices means it will work with either bones deformers or with morphed models. With morphing, the object morph targets after the first object will use the texture positions of the first model, morphing them. UV is unnecessary with INSPIRE, but if you encounter UV-mapping texture people, they will sing its praises in a cappella harmony.
Essentially, one applies the texture using a "Reference Null" parented to the object, and then moves/sizes/rotates the Null to adjust the texture. Remember to save your objects with a new name, and save your surfaces with a new name (possibly rename the Ref Null by using "Save Object" in the Objects panel for convenience), then remember to press the "Save All Objects" button in the Objects panel, and finally, save the Scene itself. These are good habits to keep if one doesn't want the ref-object effect to disappear later. I also suspect that this will mean that one may only be able to "Load from Scene" the character, and not pull it from other inventory, UNLESS one copies the data from the lower-left-corner data display of the Ref Null's position/rotation/scale and inputs these to the texture directly, omitting the Null. Is it worth the effort to have a cylindrical map that is at the same angle as the character's neck, with the perfect sizing? This feature apparently doesn't work with cubic mapping.
The www.cyberware.com figure object was downloaded from www.3dcafe.com . It was created using high-tech scanning machinery from a life model. CAD objects generally do not lend themselves to low-polygon morphing techniques, but in this case, a low-polygon head model was placed on top of the CAD original object, and the original face was colored green. Then, the low-poly head was carefully magnetted to be slightly larger than the green CAD head object underneath it. Getting rid of the green in this way led to a pretty keen result. This technique is used by the pros, and it not only may be used for getting lower polygon counts, it can be used with models where the polygon curves will tend to match natural contours and animate better.
I have used this figure for a handful of tests, it is also a good generic human model to build from. A variation is to create a generic quadruped, or a generic toolkit bones-rig of creature parts bones. In this way, one might be able to animate an octopus or snake with the same creature file, by omitting all of the undesired bones sets. Generally, though, this fits in the "Go ahead and buy Lightwave" pile of workarounds.
The lashes were polygons selected from the eye lid area. The eyebrow was texture/alpha, diffusion and negative/bump mapped from a celebrity photo, to a section of the forehead assigned the skin-color "eyebrow." Incidentally, eyebrows move a fair amount, so it's worth remembering this when modelling for morphing, and possibly texture mapping the left and right eyebrows with Reference objects. Erasing around an eyebrow probably gives a pretty accurate bump map.
See the swimsuit above? I tried this a variety of ways. Given the retrospect of knowing how to use "unwrap," I would probably re-do the texture using a skin texture that includes the swimsuit as a brighter bump map level. If cleavage were needed, I would probably copy the decolletage and smooth it down in Modeller, then paste it and perhaps Boolean a neckline.
Larry Schultz of www.splinegod.com has shared that polygonal surfacing methods used in programs like LIGHTWAVE may be much faster and more efficient to use for a number of applications than UV mapping. Does he advocate cylindrical mapping? Actually, he has used "planar" projection with "alpha matting." In this technique, wherever the texture projected on an object, like a face on a ball, creates smearing in streaks and lines, an alpha black/white image (the box next to the texture in the texture map requester...) is used to omit the texture from that smeared area. The area below will be the base color of the surface unless another map is used. For "Max Steele," other planar maps were then used perpendicular to and behind the front map, with a matching pattern where smearing was occurring. I have never seen this done, but it may be covered in books, and can probably be partially "automated" by photographing the object in Layout with distant lights, etc. Since INSPIRE includes vertex-mapping for morphing automatically, applying the first map to subsequent morph targets, UV mapping does not appear necessary.
www.realviz.com has some interesting products, which at the very least provide for photo-realistic 3D backgrounds.
Another little by-the-way: if, after your animation renders, you get a render screen that once closed, disappears -- try the RIGHT mouse button on the Windows button for INSPIRE, pushing "Maximize."
Displacement-mapped (located in the Objects panel) waves can be made by using the "procedural" equation-based textures that call up a requester that includes "velocity" in three axes. Additional procedural textures may be loaded to INSPIRE if purchased from third party vendors like http://www.shaders.org/ . The "fractal" texture will serve for waves, though the LIGHTWAVE "crumple" texture has been lauded for this. The increment for an effective wave animation is very small, a few centimeters in each of the x, y, and z axes. A fire effect can be achieved with a fractal texture as both transparency and luminosity with velocity in y and z across a flat card or animating card. The INSPIRE fire tutorial Scene object actually includes some fairly realistic rising embers that are worth copying and re-using as a utility file. I've seen other software, Maya, do fire in a different way with multiple overlapping textured bubbles, but this method causes artifacting with INSPIRE when last I looked.
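The few-centimetre velocities above can be pictured with a toy height function: two drifting sine waves standing in for the fractal texture, their phase creeping a little each frame. The names and numbers here are mine, not INSPIRE's:

```python
import math

def wave_height(x, z, t, amplitude=0.03, wavelength=2.0, velocity=0.05):
    """Displacement in y for a point on a water card: two crossed sine
    waves whose phase drifts with time, mimicking a procedural texture
    with a small velocity in each axis. Units are metres, so the 0.03
    amplitude echoes the few-centimetre advice above."""
    k = 2 * math.pi / wavelength  # spatial frequency of the ripples
    return amplitude * (math.sin(k * (x + velocity * t))
                        + math.sin(k * (z - velocity * t))) / 2
```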
Image sequences make great textures, so be open to them. Cycling a texture can be pretty powerful too. If one has a perfect loop, one can then have looping ocean waves with a rotating logo, etc. Animated bump maps anyone? A tire-tread map can cycle while the wheel remains still. A distant object like a covered wagon can have animated props jiggle back and forth that do not have geometry.
Incredibly lame discovery 10,034: one can type "." and "," in Layout to adjust orthogonal (top, side) views, just like in Modeller. Pretty nice, when you think about it.
If one wants to create non-UV "transitions" such as between neck stubble and a beard, this may take a few zones of polygons to do, but the result seems to be worth it. The preferred method is to texture-map this kind of detail, though. The "Unwrap" utility, which creates a wireframe rendering as a "cylindrical" or "spherical" rectangular image, is invaluable when doing this kind of texture work, and I recently downloaded it and loaded it into INSPIRE, so I now also know INSPIRE can use it. I found a link to "Unwrap" at www.flay.com . When you "unzip" the file provided at the download site, just extract it to the "Inspire3d" directory, "Plug-ins" folder, "Modeller" sub-folder. (Where you put the plug-in apparently IS important.) THEN close Modeller, if you have it open already, then open Modeller, go to the "Objects" section, "Prefs" pulldown menu, and click on "Add Plugin." "Unwrap" should be there, ready and waiting. Click on it. Now, open the "Tools" section, "Custom" pulldown menu, and "Unwrap" should be there. Click on it when you have a model loaded, and it will render an "iff" image to the directory you request. You may have to use INSPIRE to convert the "iff" format images to tga's or jpeg's, because I noticed "Adobe Photo Deluxe" doesn't "see" the format. Just open an empty scene file in Layout, load the image into the Effects: Compositing: Background Image requester, and render one frame. According to LIGHTWAVE CHARACTER ANIMATION, cylindrical mapping is less of a headache when working with character textures. I recently needed to see the mesh for a model where all I needed was the face area, so I cut away most of the other polygons and re-saved the model for later "unwrap" imaging. This is probably also useful when creating a full skin map that includes the face, by loading the Unwrap image result into a paint box.
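What a cylindrical projection like "Unwrap" is doing can be written down in two lines: the angle around the vertical axis becomes u, and the height along it becomes v. A sketch of the mapping (my own function, not the plug-in's code):

```python
import math

def cylindrical_uv(x, y, z, y_min=0.0, y_max=1.0):
    """Map a 3D point to (u, v) the way a cylindrical projection
    flattens a head: u comes from the angle around the y axis,
    v from the height between y_min and y_max."""
    u = (math.atan2(x, z) / (2 * math.pi)) % 1.0  # 0..1 around the axis
    v = (y - y_min) / (y_max - y_min)             # 0..1 bottom to top
    return u, v
```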
Time to learn how to Alpha map?
Observation is important to modelling, and also texturing. A coffee cup may have a slight grain that gives it a homey look. I use "sand" textures for both bump-mapping and a semi-transparent tooth. Wood that isn't heavily lacquered will need a specular map of its grain to hold back grain lines' shininess. Skin has a similar non-oiliness that a specular map can make a convincing relief with. For close-ups, the same pore map can be used as a bump map. In my limited experience, drifting textures do not always grab the attention. If one comes across an accidental texture like a good brick or asphalt or lollipop or sweater SAVE IT as its own surface either in the Inspire directory or another central place.
Here is a quick tip for a blurrier cyclorama-like backdrop than the default, when using Windows. Go into the Effects panel Backdrop and Fog requester, and select the Gradient Backdrop box. Click on "Zenith Color." This will open a Windows-style color requester. Click on one of the gray squares, then go to the gradient at the rightmost of the requester and pick a near-white value. Click on "OK." Now do this with the "Sky Color" button, and select a middle gray value, but before you press "OK," press "Add to Custom Colors," notice that one of the Custom Colors boxes at the lower left of the requester now has your color in it, then press "OK." Now the fun part: press "Ground Color" and go straight to the color saved by the previous operation and click on that box, selecting that color, and press "OK;" now press "Nadir Color" and press the same box, but go to the gradient spectrum on the far right of the requester and select a much darker value, then press "OK." The result should be a cyclorama gray background. If you like it, save it as a Scene file and re-use it. I once rotated the camera, so that the backdrop would be vertical, but these days, I would just load an image to a ball.
To have waves that "repeat," try attaching all the objects/camera/lights to a single Null, and have that Null rotate at a great distance around another twice-offset Null. The procedural will be "World Coordinate" without any velocity while the subdivided card moves like a merry-go-round (bones on the subdivided card can add varied stretching). A similar effect also works with fire, though fire being more 2D there are other options, like an offset rotating donut fire image map.
Things like rain and smoke can be done in INSPIRE, and these are actually some of the more dramatic sample Scene files, but it's one of the tasks ignored by the book, and it is being done much better with Lightwave 7.5. Email me what you want to do and I'll see if I can export you a 5.6 scene file. Make a friend. INSPIRE can manage raindrops rolling off a round dome roof, but not flat roofs, and excellent point snow using displacement maps can fall through branches, literally. Lightwave's particle dynamics are pretty strong now, though outside-vendor plug-ins are being used for rigid dynamics like colliding asteroids. Besides, with "Max Steele" and other shows, many of these effects were generally done with images or image sequences on cards. The above trick with waves and fire is used with multiple cards of smoke, which are alpha or transparency mapped, often cloned, and may begin and end with scaling and object dissolve. Some of these kinds of images I've come across on web sites like http://www.3dcafe.com/ and http://www.turbosquid.com/ . Since I've used this trick a fair share already, it's worth mentioning that a lot of these images and image sequences can be solid-modelled and rendered still or animated in INSPIRE. Snowflakes can be made quickly using "Array," for instance, and "alpha"/"transparency" mattes can be rendered, instead of having to be paint-boxed.
Presumably, one way to get a "spray" of points in INSPIRE, as from a garden hose, would be to animate a mess of points moving in a kind of letter "O" shape, and then to "clip map" the return of the spray to its origin, so that it would be concealed. Clip map is sometimes used to put water in a container or to have air in a rowboat instead of water. The points can morph larger, but it would be easier to use parent-child bones of increasing size. Several morph targets would probably need to be used to rotate the points 360 degrees. Another way of achieving this would be to have an "O" shaped object of points (made by creating a primitive and then copying and pasting its points in another layer and saving them) rotating across some effectors of increasing size. The clip map would need to have a "world coordinate" or opposite rotation.
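The letter-"O" of spray points, with its return leg hidden, can be pictured as a ring of points plus a crude stand-in for the clip map. All the names here are hypothetical -- INSPIRE's clip map works on an image, not a function:

```python
import math

def spray_ring(n, radius=1.0):
    """Points spaced evenly around a circle: the letter-'O' path the
    spray points travel around."""
    pts = []
    for i in range(n):
        a = 2 * math.pi * i / n
        pts.append((radius * math.cos(a), radius * math.sin(a)))
    return pts

def visible(point):
    """Crude stand-in for the clip map: conceal the bottom (return)
    half of the loop so only the outgoing spray is seen."""
    return point[1] >= 0

ring = spray_ring(8)
shown = [p for p in ring if visible(p)]
```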
Since "targeting" is supposed to be key to making some kinds of smoke effects work -- like when an explosion card is animated on a plane and the card must point directly at the camera or it will look "foreshortened" -- here is one way. A light must be used, but do not activate "target." Instead, "brute" the targeting by doing it in "Light View," adjusting a Null which the Light is parented to and aiming it at the camera.
If one has a digital camera, I suppose it will pay for itself in texturing geometry, instead of having to model and/or paint box it. The card system has a number of special applications that apply to refresh times, if not rendering. For instance, when animating a chain recently, a subdivided card was modelled, and textured with a single chain link with two half links on either side. The card was actually two perpendicular cards crossing along the middle, and the chain map was shifted 50% for the perpendicular mate. Overkill, perhaps, but the result looked good from a distance, it refreshed quickly, and it could be animated without a lot of geometry.
Scene files are very powerful tools, and one thing an animator will learn once they begin rendering their work is that there are a number of "Utility" Scene files that they may create, though many of these files will be obsolete with LIGHTWAVE (which has two complete plug-ins for depth of field). This entry was excerpted from the "Glossary" page of this website, from "Utility Scene." Here are some suggestions of Scene files that effects animators, directors and producers should have on hand. A light box Scene of 100 point lights, or a single "spinning light" going 720 degrees per frame (with Repeat set in the Graph Editor). How about a utility scene for creating a negative of an image sequence? I recently used one of those when making an animated iris-in effect, another utility scene. A seamless-texture-making card and camera position. A color-correction utility Scene using an image sequence of a spectrum. A sinusoidal animating bones chain for animating one octopus arm, that can be loaded to all eight. How about a "master skeleton" Scene that includes dozens of rotational gestures for cutting and pasting? The gallery of poses actually comes before or after the Scene frames that will be rendered. A single card positioned to emulate a backdrop, but capable of being anti-alias blurred and color filtered or dimmed. A good ocean animation, see waves. A good cycle of smoke rising. Another cycle of smoke rising with smoke puffs in somewhat overlapping positions relative to another smoke scene, for compositing a denser smoke, since sometimes smoke-ball overlaps will artifact as seams. Large numbers of effectors simulating a roof, etc. Gravity emulation scenes. Another useful utility scene adds a "copyright" to every image sequence while letterboxing it, or "blows up" a low-resolution image sequence, possibly with soft filtering or other anti-aliasing/filtering.
A "sky dome" is an easy thing to make, once a good sky image can be found or paint-boxed; the texture is usually a 100% luminous surface, but how about one with a transparency map for an adjustable horizon falloff effect, or a second or third dome with distant clouds or multiple invisible lights, or a texture-morphing pair of domes? A "depth of field" Null camera scene is a good idea, with counter-rotating Nulls. A null point object, and a Null anchor with x, y and z markers for re-centering objects. A "Save Transformed" modelling Scene will have bones where they would likely be used for adjusting the proportions of a head, torso, etc. Many textures are much better if they include actual objects: ivy, bricks, etc. Other good utility scenes are the ones included with INSPIRE: texture "samplers," snow, outer space stars, and lightning. Some textures, like metals, are better tested with multiple lights. The INSPIRE scenes are also useful for learning what one cannot Load-from-Scene; for instance, parenting a 0% intensity light to the face of a character in order to use "Light View" to pose the best eye/facial animation while also animating the character's gross motions. Another related but different idea: have a web page showing all of your models with their correct names, as surfaced, using "data overlay."
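The adjustable horizon falloff above is essentially a vertical transparency gradient painted onto the dome's transparency map. A minimal sketch of one plausible falloff curve (the function name and parameters are my own illustration of the idea, not LightWave settings):

```python
def horizon_alpha(height, horizon=0.0, falloff=0.2):
    """Transparency (1.0 fully transparent .. 0.0 opaque) by dome height.

    Below `horizon` the dome is fully transparent; above
    `horizon + falloff` it is fully opaque sky; in between it blends
    linearly, giving an adjustable soft horizon.
    """
    if height <= horizon:
        return 1.0
    if height >= horizon + falloff:
        return 0.0
    return 1.0 - (height - horizon) / falloff

horizon_alpha(-0.1)  # below the horizon: fully transparent
horizon_alpha(0.5)   # overhead: fully opaque sky
```

Raising `falloff` widens the blend band; sampling this curve down an image's rows would produce the transparency map itself.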
The other column in the Scene editor is for managing large numbers of bones by color-highlighting key ones. This method may also be used for managing workflow, assigning each worker a different color of item to be responsible for.
Two other tools for speedier results are the limited region utility and super low resolution rendering. Super low resolution rendering is usually all that is necessary for lighting tests, though higher resolution imaging may be necessary for texturing and animation purposes. Fortunately, one may press "l" to create a draggable window box that can be shrunk to any size and positioned anywhere in the frame, so only that region is rendered. It is a feature common to all high-end 3D CGI packages, like SOFTIMAGE, so apparently someone out there uses it.
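The two tools compound, since render time is roughly proportional to pixel count: a quarter-resolution test touches 1/16 of the pixels, and limiting the region to half the width and half the height cuts that by another factor of four. A quick sketch of the arithmetic (the proportionality is a rule of thumb, not a LightWave guarantee):

```python
def relative_pixels(res_scale=1.0, region_w=1.0, region_h=1.0):
    """Fraction of full-frame pixels rendered, assuming render cost
    scales with pixel count: (resolution scale)^2 times region area."""
    return (res_scale ** 2) * region_w * region_h

relative_pixels(0.25)            # quarter resolution -> 1/16 of the pixels
relative_pixels(0.25, 0.5, 0.5)  # plus a half-size limited region -> 1/64
```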
Do charity. Bring heaven to others to the degree you are able. See whether the results point to a higher power for you. Act on your observations.