GLOSSARY CONTINUED

The Glossary has been divided into three sections: 1 - F, G - O, and P - Z.

Game Development

Glossary

Graph Editor

Grid snap

Group weld

Groups

Hair

Hands

HDTV

Heads

Hidden Stuff

Hide select

Hinging

Hollywood look

Housekeeping

Hustling

Illustrator files

Image sequence

Interactive texture placement

Invention/Inventory

Inverse kinematics

J

JPG

Keystrokes

Knife

Layers

Licensing

Lighting

LIGHTWAVE

Limitations

Limited region rendering

Linear light

Lip-synch

Live action combination

Make poly

Matching

Max

Maya

Merge

Meridian

Metal

Modelling

Morphing

Motion capture

Mouse in Modeller

Mov

Move pivot

MPEG

MTSE

Multi-tasking

Music

Negatives

Non-profit

NTSC

Null objects

Nurbs

Object list

Optical effects

Optimizing objects

Orthographic rendering

Over 100% values

(End of Glossary Page 2 of 3)

G

Game development. I am told that freeware BLENDER is now a serious game-development platform. This is no reflection on any software platform; I miss Galaga, that was my speed. The great majority of game development uses 3D Studio MAX, in case you did not know. www.gamasutra.com is a respected link on this topic.

Glossary: do you want to know the worst way to learn about a subject? Assemble key data in a completely arbitrary way, like by alphabetizing it. I apologize that I resorted to this solution.

Graph Editor: very powerful when doing crowds of snowflakes or making "fixes." It is the only way to create a "cycle" (Repeat), and the way Effector-ed objects are targeted. It can take practice to learn how to delete a sequence of keyframes and then "copy/paste" all of the different move/rotation/scale frames, so take heart, it's worthwhile. The graph will often show where a sequence of keyframes may have gotten too linear or too spliney. Using "Scale," one can have a sky dome follow camera movement, but not as much, and then, by adding a shift to that value, have it lag. Similarly, by scaling the number of frames, one may pixilate a motion, or prepare it for display on "two's."
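Here is a minimal Python sketch (my own illustration, not an INSPIRE tool) of those two Graph Editor tricks: scaling and delaying a camera channel so a sky dome lags behind the camera, and holding values so a motion reads "on two's." The numbers are hypothetical.

    def scaled_and_lagged(camera_heading, scale=0.5, lag=3):
        """Scale a per-frame camera channel and delay it a few frames:
        the 'follow, but not as much, and lag' trick."""
        delayed = [camera_heading[0]] * lag + camera_heading[:len(camera_heading) - lag]
        return [h * scale for h in delayed]

    def on_twos(values):
        """Hold every even frame's value through the following odd frame."""
        return [values[i - (i % 2)] for i in range(len(values))]

    heading = [i * 2.0 for i in range(12)]      # hypothetical per-frame headings
    print(scaled_and_lagged(heading))
    print(on_twos(heading))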

Graphics Cards: see Video Cards.

Grid Snap: one of the coolest and perhaps least-appreciated aspects of INSPIRE is that it has built-in "grid-snapping." That means that when a point is adjusted, it will tend to fall into a multiple of ten, so that when one mirrors or copies and pastes, overlapping two points may place them directly on one another, ready for "cementing" using "merge." This tool is also great when used with keystroke "j" which drags selected points to the cursor position.
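A minimal Python sketch of what snapping does to a point, assuming a grid step of ten units; INSPIRE handles this internally, this just shows the arithmetic.

    def snap(point, grid=10.0):
        # Round each coordinate to the nearest multiple of the grid step.
        return tuple(round(c / grid) * grid for c in point)

    print(snap((12.3, -4.9, 101.7)))   # (10.0, 0.0, 100.0)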

Group-weld: emulated with "BGConform" in the Custom menu of Tools. You want to weld a group of points together without selecting them all one-at-a-time? Copy the "target" points to another layer, click the bottom square that makes that layer a "background," then go back to the original object and select only the points you want to weld the targets to. Press "BGConform," which will "presto" move them to the position of the targets. Now, all that is needed is to hit "merge" for the whole object.

Groups: this feature, currently offered in LIGHTWAVE, is useful when surfacing polygons, but has to be emulated for INSPIRE. It can best be emulated by copying the surfaces one especially creates to their own object file, then deleting that section from the original object and saving each separately. The extra work pays off, because these elements can be "reworked" later without a lot of tedium. One needs to remember to "merge" the points when they are rejoined, and use the "Unify" button to make sure, or rendering will look "seamy." See "Junk Poly" below for edge-grouping options.

H

Hair: see Fur.

Hands: Use "Pen" to make a silhouette of a hand out of points. Remember to have points wherever there is a knuckle, palm-crease or joint, but try to keep the points to a minimum. Switch to polygon, Extrude the polygon you just made in Multiply, and Triple the profile polygons. Now hit the Tab button to see the Metanurb version. Rotate the Thumb a little in Modify -- remember that where you place the mouse cursor is where the pivot will occur -- and voila! Freeze, then triple the whole object. Another way to do the same thing: Create a box, select one polygon, smooth shift with a 0 offset, translate the polygon; repeat these steps three times. This gives you a rectangular box with 18 sides; select a row of polygons to be fingers, and one-at-a-time, apply "smooth shift" then translate AND slightly scale each finger individually. You can repeat the smooth shift/translate/scale for each finger to give each one knuckles. Experts make sure the knuckle is slightly further down the finger than the joint crease.

HDTV: According to the Alt Systems site, www.rfx.com , a 704 x 480 format is being used for HDTV, though 1280 x 720 and 1920 x 1080 would probably be preferable. A group of Scene files can be created in a short time for converting image sequences up or down in pixel number as needed. see DVD/HDTV.
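A minimal sketch in Python, assuming the Pillow imaging library and a folder of numbered TGA frames (the folder names here are hypothetical); it resizes a rendered sequence to one of the pixel formats above so it can be recomposited or recompressed at the new size.

    import glob, os
    from PIL import Image

    def resize_sequence(src_dir, dst_dir, size=(1280, 720)):
        # Resize every frame in src_dir and write it to dst_dir.
        os.makedirs(dst_dir, exist_ok=True)
        for path in sorted(glob.glob(os.path.join(src_dir, "*.tga"))):
            frame = Image.open(path)
            frame.resize(size, Image.LANCZOS).save(
                os.path.join(dst_dir, os.path.basename(path)))

    resize_sequence("render_640x480", "render_1280x720", (1280, 720))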

Heads: some very good tutorials are floating around to make excellent realistic heads in about an hour using LIGHTWAVE, with a majority of the same tools available in INSPIRE.

Hidden stuff: these are petty annoyances you may want to have on a list (or not): "a" in Modeller to reset the view; "Save All Objects" when changing object surfaces; saving new surfaces under new names in Layout, and renaming the object as well in many circumstances; creating a "Disappearing Null" sized to 0 for all target metamorphosis objects (but not the starting object); not having Image Sequence titles with numbers at the end; "Shift" for selecting additional polygons after releasing the left mouse button and for using "layers" more effectively; and "|" twice in Modeller to unselect points or polygons.

Hide Selected: use it, love it. This modelling function is not only useful for any kind of complicated detail work, it is also great for speeding up "refresh" speeds when working on large slow-rendering objects. Select what isn't being worked on, hide the selection, and notice how pivoting speeds up.

Hinging: although "motion capture" is becoming more prevalent, many characters lend themselves to good ol' hinging, and not just robots: furry critters, people in suits (besides armor) or long-sleeved shirts, or wrinkles or feathers... Hinging also makes for better "Bones" animation, because the bones of one object will not affect another object. The only problem with ball-joint hinging is the seams created. See patch. Here are some thoughts: apply "Surf Blur" to socket section surfaces by selecting these surfaces when modelling, use the "Stahlberg Patch" method of applying feathered transparent surfaces near joints, or add motion blur. Motion blur is the weakest of the bunch, because the blurred object will probably be transparent where it blurs, even with a localized bone shrinking and growing in two frames on a "Repeat" setting in the Graph Editor. Much though I enjoy the Stahlberg patch, it seems to take a while to set up and is intended for morphing more than hinging. This leaves "SurfBlur," which requires careful texturing, but seems to work. One may even "move" the texture around by having two identical forearms with different positions for the "SurfBlur" surfaced polygons morphing their textures, depending on where the camera is positioned. One can make a really ugly movie pretty fast with hinging.

The Hollywood look: see "Lighting."

Housekeeping: see Folders. There are many little kinds of "housekeeping" that apply to the use of the Layout Module. One needs to rename objects whenever a surface is changed, and then press the "Save All Objects" button and resave the Scene. The different surface will be specific to that Scene, until it is saved with a different name. One should "back-up" Scene files frequently as animation changes; this is especially advisable if one uses "auto-key" to speed workflow. Surfaces need to have unique names, as do objects, no matter how long. When a Scene begins having long refresh times, one should change the majority of objects to partial surfacing or partial wireframes in the Scene Editor by toggling the texture box of each object. Especially successful textures should be resaved with different names. Nulls, Reference Nulls and Lights should be named according to their function. Important bones need to be color-coded in the Scene Editor by toggling the color box for each bone -- only a dozen bones are responsible for a majority of animation. Camera motion files should be saved out using the Graph Editor, as necessary for backing up. Renders slow animation, so resort to Super Low Resolution renders as soon as possible, when the allure and glamour of the tool begin to be manageable for one's ego. Experienced animators work in wireframe often. "Load from Scene" gets used often for chores like creating turntable Model showpieces or adding copyright info to a Scene, or adding an area light; have these "Utility" scenes in convenient folders. Avoid branching folders; for some reason, either Windows or INSPIRE doesn't seem to like them. Put all surfaces in one surfaces file. A MAYA animator advises getting in the habit of using "_" underscore instead of spaces, because UNIX does not "see" spaces. And lastly, keep Image Sequence renders -- the only renders you should be making, since "Bink" from www.smacker.com can very quickly convert them to "avi's" -- to five-character names or less, with no numbers at the end.
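A minimal Python sketch of the underscore habit mentioned above (my own illustration; the folder name is hypothetical): it renames files in a folder so spaces become underscores, which UNIX render nodes and scripts can "see."

    import os

    def underscore_names(folder):
        # Replace spaces with underscores in every file name in the folder.
        for name in os.listdir(folder):
            fixed = name.replace(" ", "_")
            if fixed != name:
                os.rename(os.path.join(folder, name), os.path.join(folder, fixed))

    underscore_names("Objects")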

Hustling. Shirley Clarke made a point of teaching everyone who attended her UCLA Design Class what "hustling" was. I can't plagiarize her because I never got to take the class, but it's a good idea to explore. Hustling. So far, my take on "hustling" is that we bring abundance to someone who seems to need abundance of a certain kind or cooperation, and then they reward us by sharing. If you were an animator with a mountain of unused model inventory, what would be music to your ears? If you were a shop owner dreading the production of advertising for TV, what would help ease your woes? If you are a programmer for a TV channel facing advertisers without anything interesting for the coming month, what would be a welcome surprise? When you get ten seconds of a distinctive look for a kind of story that has insights and truth, whether it takes you ten hours or ten days, when you bring forward that love, that love is going to be reflected, so you're ready for it. This web page is a kind of spiritual hustling -- giving comes back. You let others know what seems to be needed. You circulate with others who are involved in what you do. I am told that the real work is obedience to the higher presence, listening for leading, and then obeying.

I

Illustrator files: The usefulness of the "ai" utility is pretty major. In some programs, like Alias|Wavefront MAYA, there is a utility for putting a reference piece of art, like a profile of a horse, into the Modeller. Cool? You BET it's cool. Using a piece of reference art work for animals, vehicles, props, etc. makes for faster modelling and fewer afterthoughts on photo-real or naturalistic objects. "Corel Draw" and other high-end paint programs will often have "AI" converters/exporters. I have not yet had success converting from jpegs or other art to "AI" using "Adobe Photo Deluxe." "Adobe Photo Deluxe" claims to be able to read "ai" files, and export them, and can convert images to "EPS" format, which is supposed to be related, but rather than froth about what might be, I'll stop there. The closest "workaround" for applying this approach to INSPIRE: first, enter Layout, and texture map your drawing of a rhino or whatever to a card (add a reference object and use interactive placement to get this done in a hurry). Now add a Null object, size it fairly small, and position it on the profile of your object. With auto key "on" in the Options panel, advance to the next frame or go a few frames, and reposition the Null object. Keep animating the Null as it moves around the profile of the reference art. The result will be a splined keyframe motion file in the shape of the object. Go to the Graph Editor and SAVE the motion file to a name. Lastly, create a single point object with Pen (or point, or by making a card and deleting all but one point), and go to the Multiply panel and click on "Path Clone." It will request a file; give it the motion file you just made with the animated profile Null. (You may need to hit "a" (auto-fit) or "," to see the final result.) If you want a polygon from the points, select them all in clockwise order, and then press "p" or Make Poly in Tools. This profile template can either be used directly to create an object using cross-sectional modelling or smooth-shifting, OR it can now become the "ai" reference art for modelling a "primitive" with the profile in the background.

Image Sequence: POWERFUL. Useful for animated textures or lights. Invaluable as an editing tool. PLUS, able to do little chores like converting a sequence rendered every 8 frames into an avi by simply loading it into a Scene file for converting files (using the Images panel "Load Sequence" button and Effects panel) -- one of the only things "Bink" from www.smacker.com has trouble with. Rotoscope footage or reference art may be viewed textured to a card in OpenGL, for adjusting model keyframes or Bones deformation and "Save Transformed." I sometimes use a conversion Scene file that automatically adds a copyright to the bottom of every frame. There may be a problem with Sequences longer than 300 frames, which will need to be cut up into 300 frame versions. See Opticals, Render times, Editing, etc. For other editing activity, see BBMPEG. The "Div-X" player is available at many websites.
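A minimal Python sketch, assuming a folder of numbered frames (names hypothetical): it copies a long render into sub-folders of 300 frames each, one way to deal with the sequence-length problem mentioned above.

    import glob, os, shutil

    def split_sequence(src_dir, chunk=300):
        # Copy frames into numbered sub-folders, 300 frames per folder.
        frames = sorted(glob.glob(os.path.join(src_dir, "*.tga")))
        for i, path in enumerate(frames):
            part_dir = "%s_part%02d" % (src_dir, i // chunk + 1)
            os.makedirs(part_dir, exist_ok=True)
            shutil.copy2(path, part_dir)

    split_sequence("walk_cycle_render")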

Interactive placement of textures: see Textures, see painting objects.

Invention/Inventory: this is when you start creating objects like an "origin" object or a scene with an animated/MTSE countdown gauge that you load where you need it, such as to synch-up with a metamorphosis by giving it the same Morph Envelope. A "fade-in" object for compositing, and other "on the job" do-hickeys like a virtual ruler to load from a ruler scene, if it helps you remember to keep objects in scale. INSPIRE gives you some good examples of inventory scenes; for instance, the texture scenes composed of cubes that can be used to select surfaces for smoke or water. Increase the size of that Scene exponentially to meet your needs. "Fire" and most effects objects need to be kept in separate Scenes for using with the "Load from Scene" button in the Objects Panel. Replace the "sparks" object with morphing smoke puffs and save this as another scene, possibly scaling its velocity as required. These can be saved in their own folder, or as "effects_fire_puffs001.lws". The "_" is a convention of "UNIX" that many adhere to, because UNIX doesn't "see" spaces. A convention that I have begun using is to lump all of my "inventory" scenes under the heading of "utility," thus: "utility_effects_fire_puffs001.lws." Why? Because my first inventory scenes were actually optical effects like adding an "iris-in" to the beginning of a short, or a Copyright notice under a letter-box frame Scene file, or a file for re-assembling 300 image-long image sequences as "avi's," or "blowing up" a low res render to conform with other renders. Utility files. Other useful inventory: depth of field camera, "area lights" of different intensities, rain with splashes, sky-domes, smoke, a ruler, generic front projection, favorite backgrounds/backdrops, effector arrangements, and subtitling or scrolling titles. The line is crossed when geometry becomes the star: much though I loved the effect of electrons with long bright tails whizzing around a nucleus.

Inverse Kinematics: It took me forever to realize IK is used primarily for Bones animation. You move a hand bone, and the rest of the arm bones follow. No kidding? Sounds like fun. INSPIRE animators simply have to get into the habit of moving the parents before the children. One place where IK becomes useful is in orienting eyes. The emulation for eyes is to create long "pole" objects that poke out from the pupils and show exactly where each eye is looking: they can either be parented to the eyes, or be transparent objects positioned only in the Wireframe mode. But MOST eye animation is probably going to be saving the "mot" file for one eye's movement, and then loading this to the other eye or eye-Null. As powerful as IK animation is, it takes a backseat to "Expressions" animation tools in LIGHTWAVE which animate groups of bones with a single control. Most naturalistic animation is FK.

J

"J": one of the great lost keystrokes. In modeller, "j" will snap a point or points to wherever you've placed the cursor. If you select a group of points, the last selected point will overlap the cursor's position, and the other points will be brought along keeping their relative shape. It would not be as powerful as it is if INSPIRE didn't also have grid-snapping. See Group Weld.

JPEG: the JPEG image format that INSPIRE uses is what is called a 100% JPEG. JPEG is one of the few image formats that includes adjustable "loss" or "flattening." JPEGs are not recommended for animation, because if one uses a paint program like "Photo Deluxe," it may accidentally add grain and flatten the image, so that the "fixed" frames flicker badly. The preferred format for retouchable frames at this time is the TGA; the TIFF is recommended if one will be working with various programs or platforms. You should save animations as IMAGES and not as AVIs in most circumstances. See Converting IMAGEs to AVIs. Because the JPEGs created in INSPIRE are 100%, as opposed to 75% or 50%, they may be slower to load up on web pages than most JPEGs; loading these JPEGs into another program like Windows Paint or Photo Deluxe and re-saving them will often knock down their file size. Windows Front Page Express will offer to save TGAs as GIFs, and many other JPEG-conversion utilities are available as affordable downloads or freeware.
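A minimal sketch, assuming the Pillow imaging library and hypothetical file names: re-saving a 100% JPEG at a more web-friendly quality setting knocks the file size down as described above.

    from PIL import Image

    img = Image.open("frame001.jpg")
    img.save("frame001_web.jpg", "JPEG", quality=75)   # smaller, lossier copy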

"Junk Poly." I have read about this twice. If one makes a polygon using the points from every polygon edge one wants to keep grouped together, and then "smooth shifts" it away from the chosen polygons, one is left with a very weird looking mess of polygon, but it is useful for preserving the grouping of those polygon edges attached to it. One can save this object as another file in order to select those edges again, by "merging" the affected points. ("Merging" is often the first thing taught to classes, because it can play a pivotal role in modelling and rendering. One should periodically "merge" an object. It hurts nothing.) The next-best thing would be to copy those polygons to another object, but when modelling, one may make little changes often. The new polygon can be used for smooth shifting or scaliog that group of polygons, since the Junk Poly" and its buddies will be much easier to grab, and where they go, the attached polygon edges follow. It is a good method for "edge" grouping. I do not have a lot of experience with using this method, but there may be some tutorials available about it.

K

Keystrokes: there are two kinds of keystrokes: those which provide functions not available as mouseable buttons, and so-called "hot buttons" or short-cut keys. I am not fond of hidden buttons, is anyone? Hidden buttons in Layout: "Alt" to adjust non-camera views, "p" to bring up the current highlighted item's panel; "r" to reset a bone's position, essential; up and down arrow keys to cycle through bones. Hidden buttons in Modeller: "tab" key to activate metanurb mode; "shift" key to select points after releasing the left mouse button AND to have more than one layer open; "a" to auto-fit the object; "j" yanks selected points to wherever the cursor is placed; the right and left arrow keys adjust the divisions for box and cone objects. They are in the Manual, but it cannot hurt to list them all here too. Hot keys: the theory is that if you can hit the right "short cut"/macro keystroke commands, you will be able to assemble and adjust a sequence much faster than by mousing-around. If one is training to be employed by a production company or service bureau, the employer may be looking for one to be using these methods, as well as having "good habits" like naming object surfaces aptly or keeping objects in the right scale (which makes texturing and animation much easier down the line). If I were an employer, and I saw a modeller hitting shift F, y, t and Enter while piloting the mouse to and fro, I might feel I was getting my money's worth. In Metanurb modelling, the "smooth shift" command sometimes plays a pretty key role in modelling. Pressing "q" or "]" or "shift |" or "p" in Modeller can make things move a lot faster, so if one were an employer, this could be music to the ears. Not using the shift key while modelling might be disappointing. In Layout, a handful of macros are plainly marked, and some, but not all, are listed in a menu by pressing "F1." Of these, I think "r" is pretty important, because without "r" Bones cannot be adjusted effectively. In Modeller, a majority of the buttons are plainly marked, but there isn't a master menu. So, do like I did, and press every key on the keyboard. ; ) Or read the books cover to cover looking for keystrokes. It turns out that "z" can come in handy for clearing layers while polygons are on the clipboard/buffer.

Knife: this tool can be indispensable, but it did not make the transition to INSPIRE 3D. Let's say you are making a cartoony arm out of a cylinder; if you "subdivide," you wind up with hundreds of polygons in the forearm when you only really need them in the elbows. The "knife" allows you to subdivide around the elbow and keep the polygon count down. To emulate it? One "workaround" is to go ahead and subdivide, then use "Reduce Poly" on the areas that aren't the elbows. A closer "emulation" is to create a single polygon using "box" in another layer, and then position it as you would a "knife." Use Boolean "add" (follow the directions for Booleaning, having foreground object and background object overlapping for cutting, then pressing the Boolean selection), then "Merge" in the Tools panel, and then remove the "cutter" polygons by selecting them, then "Cut."

L

Layers: Layers can be useful when making a dozen heads and riffling back-and-forth between them. Use the SHIFT key to open several levels at once. SAVING with several layers open saves all of them at once. Although INSPIRE is limited to four layers, one "workaround" for this limitation is to have several copies of INSPIRE running at the same time, which I've found is possible with Pentium III computers with 256M RAM. Complicated metamorphic objects like heads are made easier to render by keeping tongue and glasses and eyes as separate objects, and the Layers feature can help coordinate the disassembly and re-assembly while you check the metamorphosis in a sample Layout file. In practice, some LIGHTWAVE users use Layers in a manner similar to "clustering" in MAYA, keeping different textures discrete by keeping each in its own discrete layer, of which LIGHTWAVE has hundreds. A single object can thus have dozens of layers. In INSPIRE, this capability is limited to four layers, so it behooves us to get the texture right the first time and not try to dig a well with a spoon when we design the object. We are more likely to break apart a morphing object into separate objects and then save and load one object's morph envelope to its other constituents where texture mapping requires polygon-by-polygon attention. Also, groups of object element points may be copied to other layers and shifted and re-pasted or "knifed," during MetaNurb modelling. see "Make Poly"

Licensing. This is where you and a dozen of your friends who don't own LIGHTWAVE all render your scenes, but only you do the modelling and animating. That's the LIGHTWAVE way, ain't life grand? (If I understand it correctly...) The love comes back. In the course of this project, a relative stranger offered to share a render farm of 60 machines.

Lighting: As is mentioned in the "Tipathon" section, the "look" of many live action productions is very well thought-out. Shadows can be made blue by filtering light, and blue shadows are used often to indicate things like dark interiors. This means that lighting technicians on movie sets deliberately put blue filters on the lights. One thing that puzzled me for ages was why lights were put so close to the actors -- it turns out that this too is deliberate. Light "falls off" closest to its source according to the "rule of squares." (Lightwave 7 has an "inverse square falloff" setting.) For this to work for LIGHTWAVE requires light settings often above 200%. Theater lights seem to lack this fall-off because they tend to be set up in a proscenium; sunlight appears to lack it because the sun is some 93 million miles away -- but both have it. The Great Masters, as well as the great cinematographers, play with light fall-off for more dramatic contours. Adding fall-off on top of roundness reads as a stronger sense of contouring. "Fall-off" lighting can be achieved by using the "Spotlight" light setting with both an "Intensity" falloff and a "Spot Soft Edge Angle" falloff that is at least half of the angle of the light. I like an angle of 30 degrees, with a fall-off of 20 degrees or so. Set the Intensity falloff to fit the set you are using, and then raise the light's overall intensity for the most realistic effect. (Light fall-off can also be emulated with Effects panel black fog, but only relative to the camera.) The intensity one foot away from the light should be sixteen times the intensity four feet away for a naturalistic look. This can be "automated" with a pair of texture mapped cards using color bars with gray scales and saved as a Scene file. The "Spot Soft Edge Angle" is largely psychological for movie lights, which have reflectors, lenses and "barn doors," but their mechanical design allows for contouring and/or prevents areas where two spotlight cone edges overlap, calling attention to the lights. All real light is inverse square light automatically. To be perfectly honest, I think I like the look of inverse-cubed light a little more -- CGI can force contouring, isn't that nice? "Barn doors" may be positioned using a map applied to the Transparency requester of a card's surface; this may even be assigned to a "Reference Object" for interactive placement of the barn door's shadow and the tightness of its blurred edge, which can be controlled by adjusting anti-aliasing or size. Actual cards may also be used as barn doors, by checking "Unseen by Camera" but NOT "Unseen by Rays," which will make barn door cards something every live action gaffer can only dream of, invisible but in front of the face of the character. The barn door cards are visible in the refresh display, but do not render. see Texture Mapping
Daylight "radiosity" occurs when sunlight meets a light surface that scatters some of the light, because the scattering follows inverse square diminution. A perfect mirror will preserve the low-fall-off intensity of the sunlight. So, a radiosity-esque rainforest scene would have distant sunlight, plus inverse square spotlight sources peppered wherever objects are positioned, and/or an ambient setting. The NEW THING is to pepper a scene with spotlights of various colors that match the relative reflectivity of illuminated areas and the main light sources; these multi-light arrangements render much faster than true radiosity.
The Great Masters tended to like the background to be darker than the foreground, and so do cinematographers; or one can surface one's background objects to be darker. "Technicolor" would "paint everything" from costumes to make-up to buildings. "Backlighting" was once a popular technique for separating contours. Light falls on the shoulders and tops of heads and provides a lovely "rim" effect. The only drawback of this approach is that since the background will be two "zones" or "stops" darker than the foreground, and the lights will obey the rule of squares for contoured heads, this will leave little room for variation. Whether a shot is set in a garage or by the side of a swimming pool, it will look like a "sound stage." So, in a nutshell, that's the "Hollywood" look or "style sheet." INSPIRE allows one to use it or not. For information about using "Image Projection," see "Front Projection." see Linear Light
The "area" light is associated with stage lighting and early color motion pictures. According to www.leestranahan.com there may be a way to control all 100 point sources at once through a plug-in. Although many lighting "Masters" have experimented with so-called "slit" light, most "cat walks" and "proscenium" lights are true spot lights and would not provide a slit effect. A "Linear" light is a slit light. Linear and area lights make pleasant "available"/"practical" lighting, and they can be used cosmetically to emphasize lines perpendicular to lines/creases that one wants to fill-in/erase. As with other aesthetics of contrast, going from flat distant light or high key to contoury light may boost its impact. It may be worth mentioning that long shots in movies do not tend to have very careful lighting -- it is often impractical and apparently largely unnoticed. Close-ups get the extra attention. Moving lights are another way of boosting depth cues.
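A minimal Python sketch of the "rule of squares" arithmetic above (my own illustration): relative brightness falls with the square of the distance, which is why a light one foot from a face reads sixteen times brighter than the same light four feet away, and why close lights need intensities well above 100%.

    def relative_brightness(distance, reference=1.0):
        # Inverse-square falloff relative to the brightness at the reference distance.
        return (reference / distance) ** 2

    for feet in (1, 2, 4, 8):
        print(feet, "ft:", relative_brightness(feet))   # 1.0, 0.25, 0.0625, 0.015625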

LIGHTWAVE: In case it has been overlooked, INSPIRE is in very large part equivalent to LIGHTWAVE 5.6. The shortcomings are the number of MTSE morphs, the lack of morphing textures and of some key plug-ins, the 640x480 automated render ceiling, and a few tools that usually can be emulated without much effort. The difference between INSPIRE and LIGHTWAVE 7 is ENORMOUS (though one could load INSPIRE with a few hundred dollars' worth of plug-ins to exceed any comparable package within a thousand dollars, short of LIGHTWAVE itself). "Moving up" to LIGHTWAVE 7 at the date I am updating this entry costs a fairly modest amount, in the upper hundreds. INSPIRE users need to contact NEWTEK customer service directly to "move up" to LIGHTWAVE; INSPIRE was sold through electronics chains and online, as well as through dealers. Generally, one would upgrade by contacting the nearest dealer, listed at www.newtek.com . Although I have yet to work with a dealer, they seem to be very obliging from what I hear.

Limitation: I don't like making prisons for myself or others. Golden Rule: if I want to be freed, visit the prisoner and teach them to be free and bring freedom to others. Yes, one can learn greater precision by forgetting that there are "shift" or "Undo" commands, just as one can develop greater articulation by talking with rocks in one's mouth for a day or so. Either way, a higher positive mindset seems to be more powerful than a fashionable training method. As for INSPIRE's limitations, you will find a list of them in the Newbies section. Two displacement plug-ins per object, three textures per surface (paint box or composite), a 300-frame image sequence per texture, 300 bones per object (saw the object), the notorious morphing limit of 40 morph objects (object dissolving between two objects for more) and several missing tools like "knife" (Boolean card), pointscloneplus (Array) and depth of field (offset spinning camera, repeating 2 frames). Although there appears to be a point where the layout module may "seize" with slower computers, this can be avoided by waiting for the screen to refresh before pressing any more buttons.

Limited Region Rendering: When I am asked to "rate" the INSPIRE package, this is one of the dozen-plus high-end tools that should be mentioned. You press the letter "l" and then do mouse-dragging of the edges of the box -- LMB and RMB -- and then press "render" (or F9, see F1 for list of short cuts) and voila! The good scenes may have a couple dozen objects, so this tool can come in handy pretty quick. Ctrl "l" to remove the box. This also has another possible application when making corrections on long sequences. The rendering time is much shorter, so if you didn't realize the hand went through the table, etc., you can re-render only that area, alpha matte the limited region shape, and have the limited region show through. I suppose one should do an exposure of a background or fog to get the alpha matte without paint-boxing.

Linear light: as demonstrated in a www.newtekpro.com article by Patrik Beck, linear and area lights can be emulated by about ten to one hundred point lights. Setting the first point light and parenting it to a Null is crucial, before cloning it 100 times and repositioning it. A good setting is a 6% intensity with a normal fall-off of about 10 meters, with greater intensity for the linear point lights. Neglecting to use fall-off largely defeats the purpose of an area light, contour with control. Adding a box around the lights can also contribute to their control, and the box can then be removed from the shot by checking the "Unseen by" boxes at the bottom of the Objects panel "Appearance Options" menu. When positioning the light, I go back and forth between Super Low Resolution renders, and the Camera and Top views. A variation of the 64 to 100 point light box is the "Chinese lantern" with 100 point lights arranged in a hemisphere. The rationale of these lights is that they mimic light situations found in nature, but with specific effects. Some of these effects are unnecessary for computer animation, and some are valid. For instance, the "hot spot" of the area light is completely unnecessary for computer animation, since a point light is invisible to the camera until it is made visible, and can be placed directly in front of the actor. I know of at least one leading cinematographer who preferred exactly this lighting. Light boxes are often used for product photography with the lens in the middle of the light box, because they give glossy surfaces a nondescript reflection and very soft detailed shadows, and have an "Old Master" skylight feel when the light is much larger than the subject. Similarly, in dark scenes, the area light should favor contours with less harshness than a point light. Whereas area lighting seems to soften the limb edges of a model, a linear light will have this effect along the axis of the light only. A chin and neck may appear soft, while the sides of the face are more pronounced, due to the angularity/specularity of the typical point sources. Jawlines and cleavage are pronounced, and eye and hair highlights become vertical lines -- all-in-all, not bad. A completely unrelated side-effect of these devices is that glare can be controlled, though glare can also be controlled by introducing textured cards into a scene, and making them unseen by the camera, but leaving them seen by rays/ray-tracing reflections. The art of "glinting" is significant if one wants to introduce a specular highlight or background detail that tricks the eye into seeing slightly more depth by compensating the exposure level of the opposing eye, but this kind of element is usually reserved for calendar art, rather than animated sequences. Animation may use pivoting and any number of visual cues rather than invisible cards and lightboxes. Far more important is that the overall visual effect be grounded in story relevance. "... so shall thy strength be." Does the setting have a typical or accurate lighting? Is unearthly lighting called for? Who is a source of light in the scene? According to Lee Stranahan of www.leestranahan.com , a plug-in has been developed for adjusting 100 lights at one time with LIGHTWAVE.
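A minimal Python sketch of the emulation above (my illustration; the one-meter panel size is a hypothetical choice): it prints the offsets for a 10 x 10 grid of weak point lights for an "area" light, or a single row for a "linear" light, each of which would then be cloned and parented to a Null in Layout.

    def area_light_offsets(width=1.0, height=1.0, rows=10, cols=10):
        # Evenly spaced positions for a flat panel of point lights.
        step_x, step_y = width / (cols - 1), height / (rows - 1)
        return [(-width / 2 + c * step_x, -height / 2 + r * step_y, 0.0)
                for r in range(rows) for c in range(cols)]

    def linear_light_offsets(length=1.0, count=10):
        # Evenly spaced positions along a single "slit."
        step = length / (count - 1)
        return [(-length / 2 + i * step, 0.0, 0.0) for i in range(count)]

    for offset in area_light_offsets():
        print(offset)          # 100 positions, each light set to roughly 6% intensity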

Lip-synch. Do you want to feel like emperor of the galaxy? Lip synch a short sequence if you've never done it before. Metanurb modelling is the way to get mouth poses fast, to my way of thinking. "Hide select" to add teeth and tongue objects or have them move as separate objects and don't include them in the metamorphing objects. Where metamorphic objects get in trouble is when they have crossing planes. Three discrete objects will morph fine, but if the tongue touches the teeth or enters their z-axis range, distortions may make the animation unusable. Breaking the objects apart into separate files and copying the "Morph Envelope" by saving and loading it is no big deal. Actually reading the tracks may not require another program, though I bought a program that had "Steinberg WaveLab Lite" from www.steinberg.net included free with it. Windows 98's Sound Recorder (& Player) has a built-in speed changer that can speed-up or slow-down a file 100%; if I understand correctly, one can slow the track down for reading, using the built-in counter, possibly making tiny changes too, then either go back to the original with a calculator, or speed-up the sound file. To speed up the process dramatically, use a TABLE that converts time and frame numbers as you jot the phonemes down (a small sketch of this conversion follows this entry). Here is the WordPerfect template I use: this is a 500-frame version, twenty pages long -- Tables. Since the track is digital, there shouldn't be any loss changing speeds. When I do this, I use a headphone and extension cable. Transcribing the various vowels and consonants teaches you that a "b" is a "p" and an "m" as well, but that one can always make a more specific "phoneme" in Modeller like an "s" with "o" lips or "j" with "r," or a pronounced "v" or "m." I also make notes when a phoneme ends and when it has a "peak," like for a long "r," and when there is an abrupt one-frame switch. For certain characters -- those with intense facial texturing AND demanding texture maps -- the more manageable strategy for animating "face shapes" may be to rely on Bones for animation, since this seems to be the only manageable way to have textures follow the deforming geometry accurately. Key frames for groups of bones are supposed to be editable in LIGHTWAVE 7 using plug-ins, but I have not heard of this being as manageable in older programs like INSPIRE. The trouble of doing this, when LIGHTWAVE automates much of it, doesn't really need to be dwelled on. For the remaining less-intense characters, the next step, if one lacks a lip-synch plug-in: load the objects into MT/SE in the Layout Objects Panel, and add the frame numbers for the morphs. I like to work out what the percentages will be for frame numbers before turning on the computer. It also probably pays to do this in a separate Scene file, with the objects parented to a Null, and then using the "Load from Scene" button in the Objects Panel, load the lip synch object when the other elements are ready. Have the body in one file and the head in another. Also, lip-synch can take a few attempts to get just right and the test scene file will render fast, for a "Bink" avi from www.smacker.com wav-mix. The maximum for loading MTSE morph objects is 40 objects, though I have also read in the INSPIRE Manual where it is only sixteen objects.
Using duplicates of phonemes like "Eh," "Aa" (short a), "Mm/Bb/P" and "Ch/J/Sh/D/T/S/Z," both ordinary morphs and single-frame unsplined transitions between identical or similar phoneme morph objects in the morph MTSE list produce very realistic animation, if one wants to go all-out. I try to have every mouth position have its own unique head, because it just seems to look better. Give the character a drawl, a squinty "v," a nosey "oh," and such. The "SAVE TRANSFORMED" button in Layout can also be used to create intermediate poses like an "Ee" between an "Ah" and a "Ch." Here is a 40 mouth-pose MTSE I have been trying: Ch/Th/R/Aa/M/Ah/N/Ch/L/Eh/F/R/Oh/M/L/Ah/Ch/Oh/R/Eh/Th/L/Eh/N/Aa/F/ Th/M/Eh/Ch/F/M/N/R/L/Oh/Aa/Ch/M/R. With an arrangement like this, you might also want to add other brow gestures and blinking using bones, hinged eyebrows, compositing, judicious object dissolve/replacement, or our old friend the Stahlberg patch. With the patch, you have the top half of the head as a separate object, morphing to complement the mouth gestures, with a third object covering the inevitable seams generated when two planes intersect but points are not shared. The third object's only purpose is to use a semi-transparent mask to conceal the seams. I haven't tried this, but it figures. If there weren't a 40 pose limit on morphs in INSPIRE, this is an MTSE I would probably use for lip-synch: Oo/Ch/Eh/R/Ch-oo/Oh/Oo/Ch-oo/Aa/M/Ah/N/Ch/L/Eh/F/R/Oo/Oh/M/ L/Ah/Ch/Oh/R/Eh/Th/L/Eh/N/Aa/F/Th/M/Eh/N/Th/Ch/grin/M/ Eh/Ch/F/M/N/R/L/Oo/Aa/Ch/M/R . Though, given the chance, it might be the starting point for a much longer MTSE for varying facial expressions along with lip synch. Fortunately, some facial expressions are reproduced with bones as well as they might be point-dragged, since many cheek and brow movements can be very effectively "bonesed," and morphing has little effect on Bones. Otherwise, the better INSPIRE route would be to morph straight-ahead, and start another Scene or object file at the frame the 40th morph object was reached. The only artifacting I have noticed from this is that an object dissolve in one frame will cause a charcoal coplanar effect on the dissolving frame with ray-tracing.
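A minimal Python sketch of the time-and-frame table idea mentioned above (my illustration; the slowdown factor and phoneme times are hypothetical): it converts the time at which a phoneme is heard on a half-speed playback back into a frame number at 30 fps.

    FPS = 30.0

    def frame_at(seconds_heard, slowdown=2.0):
        # Time read off a slowed-down track maps back to the real frame number.
        return round(seconds_heard / slowdown * FPS)

    for phoneme, t in [("M", 1.0), ("Ah", 1.4), ("Ch", 2.1)]:
        print(phoneme, "-> frame", frame_at(t))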

Live action combination. If a client provides you with sequences of five minutes of video, what do you do? Chances are, it will be in a digital video format very similar to "avi" that can be easily converted to an avi, and then to TIFF's, using a program such as -- ta-dah! -- "Bink" from www.smacker.com (or "Vidget" from www.newtek.com , makers of "Video Toaster"). If not yet in digital format, it will need to be "captured" using a "video card" that includes input jacks like the ATI "All In Wonder," or otherwise converted to a digital format, like by connecting a VCR to a mini DV camera. A mini-DV owner tells me that the Canon camera can output either "avi's" or NTSC digital. INSPIRE can be a very effective compositing tool, or this work can be left to video editors. See "Object List" for one approach or some of the many tutorials. The "Load Sequence" button in the Images Panel does the lion's share of the job by making it possible to load cycles or sequences from a variety of formats into backgrounds, foregrounds, front projection and textures. See "Tracking." LIGHTWAVE 6.5 includes automation to make the fine-tuning of color and highlight elements, and last-minute adjustments, more efficient. This approach can be somewhat emulated using INSPIRE. A multi-level editing package like Premiere or Aura is recommended. See Compositing.

M

Make poly. IMPORTANT KEYSTROKE: "p." The cross-sectional modelling feature in LIGHTWAVE does not seem to be available in INSPIRE, but this feature in Tools is. It allows one to take a series of points from polygonal cross-sections, and polygon-by-polygon connect them as polygons. Many modellers use "p" extensively. This method can also cover minor mistakes or be used to generate partially transparent object-joint "patches." Select a number of points in a circular path and then hit "Make Poly." "p" probably gets used the most when bridging or expanding face areas like mouths or eyes. One selects a group or row of POINTS, copies them to another layer, moves them and pastes them AGAIN. Two rows of identical points, waiting to be made into polygons by being selected in clockwise order, one polygon at a time, each followed by "p," and finally pasted into the original object, and "merged" in Polygon mode with everything/nothing selected. The key thing to know to do is to copy the first row of points and move them and then copy them again, and to do this in a separate layer. see "Knife." A knifing Boolean may also work, but this doesn't always work for metanurb surfaces. Lastly, "p" can be used with things like hair textures or moles on skin, to map these or create non-stenciled textures that will morph like texture geometry with many more polygons.

Matching: see Tracking.

Max: 3D Studio MAX is the pre-eminent game development software at this time. I do not know why. MAX is used for the great majority of games, and I do not go there. Its more recent upgrades are supposed to make it a more viable theatrical program.

Maya. Learning INSPIRE can make the process of learning a program like Maya easier, BUT do not ask me whether or not to home-study LIGHTWAVE or work in a school setting on another system like MAYA. If one has the money for MAYA, one may want to hold a little in reserve for LIGHTWAVE or INSPIRE, just in case, because LIGHTWAVE is a complete solution for producing demo reels and such and doesn't expire. I do not advocate debt, so I encourage students to get community college instruction, which then allows one to home-study for less than $1,000, usually on a one-year license, after which I believe the software stops working. If you must obtain software using debt, use credit cards and not student loan debt or SBA lending, which are not releasable by bankruptcy. MAYA is pricey if you don't have cash, but student lending is a national embarrassment. MAYA's claim-to-fame was its NURBs modelling system, and it includes numerous editing and special tools like simplified object welding; in its upgrades it has added "point and click" particle and dynamics animation. MAYA also has realtime animation cycling for its perspective view, as memory permits. MAYA is present at many studios, as SOFTIMAGE/XSI once was/is. There are tutorials for Maya at www.3dcafe.com and also at www.highend3d.com . Some recommend printing out these tutorials to make following them easier. see Educational Software. MAYA is a $4500 (starting, without cloth, see Cloth) CGI animation solution.

Merge: do there seem to be polygons that should be connected but aren't? Usually this will be visible as a line on the object at just that edge. If you select a side of a "box" you make and cut it, then paste it right back where it was before, it will not behave the same as if you hadn't. If a side of a cube can be removed without the rest of the sides following, it needs to be "merged." "Merging" is also the step performed on models that makes them one seamless surface, instead of many separate ones, when you LMB a polygon or two and then hit "|" (Weld does this too). In Nurbing, one sometimes gets a mess around a new section. Chances are, there is a polygon between the adjoining polygons, perpendicular to them, that needs to be removed, followed by "Merging." If you're new to modelling, you might accidentally use "extrude" more than "smooth shift" because you don't fully realize that "bevel" and "extrude" are generally used for SINGLE polygons. Welding points is done automatically by Merge. If two adjoining faces will not fit without using "weld," then use this two-points-at-a-time tool (weld) for those faces, or consider a "Boolean."

Meridian: Pagan metaphysics is highly theoretical (In John 12:20, Greeks' mystery rite is ((telepathically?)) described without secrecy, with certain consequences). Some hypothesize that since the Creator of the mental universe is eternal, the experience of time will have certain uniformities, such as that catastrophic events will occur in one's life at key "harmonics," and that to expand our experience, we must move all of our acts to accord with a more pure and distant halfway point. That's one approach. I have trouble with mystery rites. Jesus didn't seem to go near this stuff, except to identify with the eternal, pre-existing Abraham. Jesus subscribed to the rabbinical idea of Father and I try to be a WWJD'er and a Christian Scientist. If you press others to experiment mentally and do not set them toward detours/trials/temptation, you too will be guided. See others as you would like to be seen. Nudge others spiritually higher and see what happens.

Metal: it took a lot of tweaking of the fractal noise image using Adobe and multiple levels (adding a transparent yellow cast to the fractal noise and then adding highlights over it, making a brighter version for reflection "mapping," etc.) and then using the spherical ray tracing option, but even then, the best results were achieved using several light sources. Metal seems to look better with lots of lights. Metal seems to be better-controlled by removing all of these settings and textures save for the reflection setting itself, and then adding a "cyclorama" object with a spherical map of location details or fractal noise. Enveloped lights aimed at or parented to the cyclorama can make metallic parts visible only when they normally would be illuminated. see Reflections. I believe reflections may also be animated image sequences. This is one way to reduce the "glowing" phenomenon when shiny objects are in the dark. The cyclorama can be a double-sided disk with a few sections removed, or something sophisticated like a single-sided hemisphere, with its equator normal to the axis of the lens and parented to the camera. Having a surface color under a reflective setting, and mapping a bright gold spherical map to an object or dome or setting seems to help keep the expected hue.

Metanurb: see NURBs.

Modelling: in life, people take a few salient points, construct an argument, and the world is better. It happens every day. I've watched two modellers, one working from drawing points and using the "p" key after selecting them clockwise, and another, using LW7, doing something similar, but drawing splines. When one sees a dozen points become a head, it tends to go in one eye and out the other, because it is simply too much of a departure from common sense to take seriously. But this is modelling. For us newbies, here is an alternative: make a sphere, and using "Subdivide smooth," add so many polygons that it is beginning to slow down the refresh of the display. (Don't worry, using "Hide" and hiding the back half of the ball, you should be able to get the refresh times back up... or just hit "Undo" once). Now, get your favorite character or any good profile from the TV Guide, and paste it to the monitor as "Reference art." (Not a lot of profiles out there, are there?) Use primarily "Magnet" to push and pull points, and the "Alt," "," and "." keys to adjust the screen position. When you're done, select the areas where you did very little detail work and go to the Tools Custom List's "Reduce Polygon" button, and reduce polygons with a tolerance of "1" or so. If a few disappear, try "triple." Voila! Sloppy, but you will look like a master modeller -- for a while. Ideally, a few points stay as few as possible, which figures. One surprise I took away from a recent www.siggraph.org convention was the recurring presence of clay models for heads, bodies and castles. Can you force yourself to make three models every day? If you can, at the end of a year, you will have over 1,000 models. And some sites, like www.turbosquid.com will pay one for unused inventory, at a 50% royalty, though you may list your model-rich URL as part of your object listing. www.okino.com is the web site of a texture and geometry translating software called "PolyTrans" which I have not used, but which sounds attractive for going from LIGHTWAVE to packages like SOFTIMAGE.

Morphing: see MTSE. The "old style" of morph was between two very similar objects with the same point number. The Tools plug-in "BGConform" is amazing, and takes whatever is in the foreground and makes it fit the shape of the background. Morphing a 2D object involves picking groups of image areas and animating them to other areas. This can be "bruted" using Bones embedded in a subdivided card object, and then animating these to fit one or two mid-point images, rather than animating all the way from one to the other, though that will give the better effect.

Motion Capture: Mo-cap on large budget operations appears to use special cameras and velcro-Scotchlite (bicycle reflector Scotch tape) ping pong balls for video which is automatically triangulated using 120 fps video cameras. www.foundation-i.com offers motion capture stages on a rental basis using the industry-standard "vicon" equipment www.vicon.com . I have a bad feeling about recommending INSPIRE for motion capture. see Rotoscope. Getting effective rotoscoping footage is going to be best done under a stadium grandstand or in a large parking lot that allows one front, side and top views -- from a long enough distance to be "orthoscopic" using telephoto lenses. INSPIRE allows one to load an Image Sequence onto a card as a texture and have this visible in Open GL. A trick used by some animators is to load three views onto three cards at once for each of the x, y and z planes. Can it be bruted? I was embarrassed to learn how straightforward it is (though I have not yet done this). The fastidious might want to work with a skeleton puppet taped to the front of a shirt, since joint centers will be visible. see Reference art. One either keyframes a Null to follow its respective joint (saving the movement as a motion file in the Graph Editor for the Null), and then loads the motion file for the pivot point of each object, then "brutes" the elbow to line up the arm's wrist to the hand's pivot point, and so on; OR one animates only rotation position for the Nulls, applying the same motion file system, being careful to start with the parents and work methodically through the children. It sort of blows my mind, but there it is. Hats off to INSPIRE. Ideally, one can animate for ball-socket hinging, and then load the Scene file into LIGHTWAVE 7 for Motion Designer cloth dynamics, or one can substitute box primitives and Boolean join them all for "skelegons." Motion capture examples typically are fight scenes and crowds; but according to director David Silverman, the audience is looking at hands and faces most of the time. Faces are only beginning to be "Mo-capped," though I think there are glove products for the hands. "Shrek" did not use any "motion capture," according to the studio's production supervisors. When revising this section, I had a look around me at the sort of finger-play going on in a restaurant -- eating utensils, newspaper reading, hair-teasing, napkins, itching, cup tilting, pointing, purse-foraging. I had been ready to write that there were primarily two grip poses and a relaxed hand position -- oh well.

Mouse in Modeller: are you fully aware that the position of the mouse on a position in one of the three views determines where a modifying effect will emanate from? This is one of the subtleties a first-time user can overlook. Have a tree and an apple as two objects and want to size the tree larger without changing the apple's position relative to it? Use "shift" to open both layers, select the tree, click on "size" and position the cursor at the stem of the apple, THEN press the left mouse button and drag the tree larger. This same technique works with ALL of the Modifiers: scale, stretch, rotate (pivot), twist, etc. Another thing to mention: graphics tablets make texturing go more quickly.

Mov: so you accidentally saved your animation as an "mov" instead of as a sequence of images? "Bink" from www.smacker.com can convert "mov" to "avi" or sequences. INSPIRE requires a "fix" for Quicktime, available free through http://sunflower.singnet.com.sg/~teddytan/index.htm or from Newtek. Quicktime is a highly compressed (Sorenson) format, similar to MPEG-4, so one may want to consider re-rendering instead. See Quicktime.

Move pivot: Housekeeping: Move Pivot may appear to be malfunctioning when it moves the object as well as the pivot; this is just the way it keeps the keyframing straight.

MPEG-4: this compression format is built into INSPIRE 3D and makes it possible to use it for other applications like "Bink" as well! If your internet service provider limits the size of e-mail files, consider sending an animation in a few pieces or a link to a web page with your ".mpg" as a downloadable file. I recently downloaded an upgrade of Internet Explorer, and half the movie formats DISAPPEARED!! That's how it looks,... it could be another program's doing, but I doubt it. Windows 2000 includes a "streaming" format called ".asf," which you may have a friend help you with, if you don't have 2000. MPEG-1 is an older, related format (used for Video CD), with fairly wide tolerances. MPEG-2 is the professional format used to create DVD's, though a separate authoring program is used to get the MPEG into DVD format, with DVD chapters, etc., AND DVD's are usually compressed to fewer colors in order to run on slower machines. See DVD.

MTSE: Multiple Target Single Envelope morphing saves a lot of time and effort, but take it one step at a time. The first object needs to have a different Null from all subsequent objects, which generally get parented to a Null and sized to zero numerically, or they will appear in renders too. (Or have I just not learned to make objects "Unseen" yet?) Be careful when cloning poses to change their settings. Accidentally omitting a morph will end the chain early. Forgetting to check the MTSE button will end the morphing after one object. Walk through the process slowly, and make a super-low-res test as early as possible. I have found that one can load over fifty objects, but I have had distortions appear beyond the maximum 4,000% setting. See Lip-synch. This function will not work well with inter-penetrating polygons, so for better teeth and gums and tongues, they will need to be loaded into separate object files (using layers) and parented to the face, with the motion files from the face saved and loaded into their motion graph. (Some studios seem to alternatively use a shallow back-of-mouth with low-polygon teeth.) This is much faster than animating them, for obvious reasons. Another comparable activity is duplicating a set of bones and separating a key object like a boot or gloved hand into its own object file. This is an IMPORTANT trick to know if you want to combine an MTSE glove with a bones body, and have the glove follow the wrist as it should. Rename the scene, then "replace" the object with the object file of just the glove, which you should have given its identical position in Modeller. Or put another way: the glove object will need to be saved in the same position it normally occupies, but without the rest of the body; then the body object will need to be replaced in the Layout module (fully boned) with just the glove, and then the MTSE will need to be loaded. The glove will then follow the arm's motion while morphing. Morphing can be used for many objects: two eyes can be given a series of morph pose targets as a single object. Otherwise, the way to animate eyes is to animate one, save its motion file and then load it onto its mate, with a parented null adjusting triangulation and transparent ten-foot-pole polygons attached to the pupils confirming position. Smoke can be morphed to dissolve or to go from fiery to black as part of an inventory scene, and can then be "Load from Scene'd" to another Scene, I think. An MTSE Scene only allows one two-texture morph, but don't forget that those textures can have 3-axis-moving fractal patterns, object-dissolving and image sequences in their textures. See Fire. One has to allow that morphing is from point position to point position, irrespective of rotation. A 360-degree twisted wheelbarrow is going to morph without any motion, unless four or five morph targets are added in between. Shoulders are currently being morphed for better hinging than bones usually provide, especially since clothing objects and other design elements allow it.

Multi-tasking: character textures need to be loaded in Layout; one can save a Scene with a new name, render it, and then go back to the previous version and continue to work on it; characters can be replaced with improved, renamed versions (and the only program I know of that does NOT multi-task well is my CD-burner); one can even "Load from Scene" objects and animation as one works on them, and perform "compression" using "Bink" or INSPIRE3D at the same time. It's hard not to be grateful, and gratitude is powerful.

Music: one of those little details. This comes up sometimes, so here goes: among those more neglected and abused than animators are musicians. Portfolio films may be submitted to contests with existing recordings by obtaining permission to use the music from the publisher; ordinarily this is called "synchronization rights" or something similar, and it is one of the few ways musicians can profit from their work years later. A band can "cover" an old hit for something like a flat fee, but to add the hit to video, other than a music video (though I could be wrong about this), or to change any lyrics, special permission/pay must be arranged. One way to do a favor for a band or musician is to let them compose some music specifically for your short, so that, to them, the short becomes a "video" for their composition. There are apparently some hundreds, if not thousands, of music aficionados who actively seek out movie soundtrack music. You may want to pay for the copyright for the musician. Ideally, you will pay them for their work as you would like to be paid, and see if that good energy comes back. Composers actively seek out animators and can be found on search engines, film-school bulletin boards (the wood kind), and with a little networking. You will probably find them at www.mp3.com as well as on their individual web sites; downloading music for synchronization is actually different from sharing and copying music -- although "demo reels" get cut a lot of slack and possibly qualify for seminar-like, copyright-obeying evaluation purposes -- and it is a good idea to contact the artist for permission. I have contacted dozens of artists when downloading their work on the Internet, and by and large, they are grateful and obliging. Public domain currently kicks in at something like 100 years, so do not jump on a song thinking it is out of copyright just because it seems old. So-called "cheat" music is useless.

N

Negatives: the three which stand out are negative-sized textures, negative keyframes and negative lights. Negative/positive transparency maps have been discussed. Negatively sized textures will appear "flipped," which can be very handy for cubic or planar mapping where a character will appear to rotate. Negative keyframes are useful for long splined actions that one doesn't want to start or end abruptly, like swimming or canoeing. They are also mentioned with effectors like hair and gravity. Negative lights are a uniquely CG idea, since aiming a difference in intensity at a set of points is completely reasonable in CG. A simpler solution is probably going to be compositing or front-projecting an image sequence to achieve silhouetting. A negative light may be useful for projecting shadows near objects. In a similar vein, values over 100% are very important to a number of tools, like surfaces. Did you know that a panel with a transparency value over 100% will brighten the surface behind it? Over-values are important to lighting with fall-off, in order to imitate natural lighting effects and lend greater contour, and to texture effects like bump maps and reflectivity, where an insignificant effect may be made more pronounced.

Non-profit: if you find yourself making a sizeable income from computer graphics, you may want to try doing work for non-profits to "work off" the tax burden as time permits. Any entity that has a training or educational program can probably use your contribution, from ASE to ESL to Zero Emissions Vehicles. The National Association of Broadcasters has good information about deductible efforts, and another active group is "Women In Film."

NTSC: See Video tape.

Null objects: I am experimenting with adding a transparent Null Object in Modeller to files where I may have to move the object out of its original center, and I don't want to refind the center. It's just a few lines, and it gets put at the center of an arm object, for instance. The alternatives are "Undo"ing back to that position, or finding it again by playing with it. I am told other animators do this, and also add these points to keep pivot points visible.

(Meta)NURBS is activated by toggling the "TAB" key. NURBS produces smooth models, and smooth models have a lot of advantages: they look familiar -- like elephants, saucers, lamps, etc. -- they deform predictably with Bones, they have contour, and so on. NURBS means non-uniform rational B-spline, and it roughly translates to: if you tug on a point, a spline curve is created for a drop shape instead of a little pyramid. Smoothing and smooth-subdivision will have a very similar effect, but it will be bounded by the nearest points. From what I have read, NURBS typically use three or four points to create a curve, so a change to one point will affect several other curves, AND the bounding area will be three or more rows away. The "stegosaurus" object really shows off the difference in detail possible when pressing the TAB button to turn MetaNURBS on and off in Modeller. There is probably a lot to be learned from that little model. NURBing, being non-uniform, can actually be a little tricky to learn. Consider using "Smooth Subdivide" under "Subdivide" in Multiply. Alternate it with "Smooth" in "Tools" for a classic smooth-subdivided object effect. "Metanurb subdivide" is fairly aggressive, but when performed on an object that has been smooth- or faceted-subdivided first, the results are less extreme, and closer to uniform smoothing. In this way, one can make an otherwise complicated object like a face or heart from a subdivided cube. The purpose and strength of NURBS is that they create smooth organic shapes from relatively little point-tugging, instead of requiring a lot of point-tugging (or getting frustrated and squash-flattening a group of polygons and then "magnetting" a ball shape from that plane, or using Pole2 similarly) to achieve smooth organic results. The key seems to be to start with as few points as possible, and to place them very, very carefully before making a first subdivision. One other technique I've used, to make hands, is to draw a profile with "Pen," "Extrude" it, triple the template polygon while leaving the extruded polygons untripled, turn on "TAB," and then point-drag, freeze and triple. One of NURBS's strengths is that since it models large organic blobs with very few points, one can create morphable object copies with a minimum of point-tugging. Since it uses as many as four points in its calculations for each curved edge (and roughly four edges for each surface), NURBS modelling is not very predictable. It takes practice to get used to making cubes and then smooth-shifting (0 offset) and moving a polygon here and there, all the while reminding yourself with the TAB key that the cube is actually an almost perfect sphere. Some kinds of "chamfering" will only be possible after the model has been "frozen." The object will look like a MetaNURB, not a "true" NURB, until such curves and "fairings" are adjusted by using "Pole2" or "Taper2" or by smooth-subdividing selected groups of polygons. One then tries to remind oneself that some true NURBS modellers require extensive editing that amounts to the same time to get roughly the same look, and that no matter what program was used to make a model, it is converted to polygons before being rendered. NURBing may also look very odd if certain things occur: polygons that radiate from a single edge like two adjacent slices of cake, "flipped" normals, ANY polygon with more than four points, buried polygons left on the inside of a tube or template area, or duplicate non-"merged" points or areas.
I have met some animators who want nothing to do with the NURBS learning curve, and they may have the last laugh, since I have heard that gaming rarely uses any splines, and "subdivision" modelling may be the next wave -- and "subdividing" IS MetaNURBS (but with a twist: the degree of NURBing can be adjusted). You may have more fun using "Smooth Shift" properly or using the "Magnet" like a sculpting tool. See Smooth Shift. The only other NURBS software I have used relied on frequently editing point groups, since a point on one curve is coincidentally a point on three other overlapping curves on one axis and four other overlapping curves on another axis (U/V mapping is very similar to an x/y plane spherically mapped to a ball). Another function frequently performed in that software was to "optimize" the model so that points would be fairly evenly spread across it. I have encountered two kinds of tutorials for NURBS modelling for LIGHTWAVE. The first is the smooth-shifted shape that starts with a box, then copies/pastes points and "Make Poly"s one polygon at a time, drags rows of points and "knifes" them (being careful to knife in a way that preserves parallel polygons), and "branches" smooth-shifted polygons for details/fingers/nostrils. The other method is the rough shape that is point-dragged to a certain point, after which the whole object is subdivided and the point-dragging resumes. Some "tripling" is allowed, but NURBS triangles seem to reduce smoothness. In both methods, one hits "TAB" frequently or does most modelling in the NURBS display mode. The point of using NURBS is to achieve smooth surface quality. I like that INSPIRE's NURBS modelling premise does not include "hidden" operations like concealed edges -- what you see is what you get, even if it is based on unfamiliar algorithms. I am not sure that MetaNURBS uses FOUR points for its curve calculations, though it seems to make boxes into balls, which is consistent. The leading NURBS modeller is called MAYA and includes many other tools that may not appear to be in INSPIRE, but actually can be emulated: see Group-Weld; see Point-walking.
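
To make the "three or four points per curve" idea concrete, here is a small, purely illustrative sketch of a uniform cubic B-spline segment -- a simplified, non-rational cousin of what MetaNURBS appears to do, not INSPIRE's actual formula. Each segment blends FOUR control points, which is exactly why tugging one point reshapes several neighbouring curve segments at once:

# Illustrative only: one segment of a uniform cubic B-spline.
# This is a plain (non-rational, uniform) B-spline, not MetaNURBS itself.
def bspline_segment(p0, p1, p2, p3, t):
    """Point on the segment controlled by p0..p3, with t between 0 and 1."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Four control points roughly in a row; nudging p2 would bend this segment
# AND the neighbouring segments that also use p2 in their four-point blend.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 0.0)]
for i in range(5):
    print(bspline_segment(*pts, i / 4.0))

None of this is needed to use the TAB key, of course; it is just the reason a single dragged point has such a broad, smooth effect.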

O

"Object List" is a facility for things like running water or other object cycles. Best of all, one can add bones! The trick is to have the first line of the MS Notepad .txt file:

#LW Object Replacement List
0
c:\inspire\frank01.lwo
3
c:\inspire\frank02.lwo

...and so on.

The text file can be named anything and the objects don't need to be in any particular directory. If you haven't worked in animation, the idea of typing out the different line numbers and files may seem odiously tedious, even with "copy" commands and the likelihood that only the last digit of the object will need changing and that cycles will be block-copied and pasted. How long does it take to type 300 numbers? Ten minutes? Your time will be well-rewarded, as it is when lip-synching. As is mentioned elsewhere in the Glossary, one can do very classy things with Object List and "Save Transformed" and commands like metamorphosis.
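
If typing the list really does become tedious, a scripting language can crank it out. Here is a minimal sketch in Python -- the output file name, object paths and cycle length are only examples, and the file grammar is simply the one shown above:

# Minimal sketch (names, paths and cycle length are assumptions):
# write an "#LW Object Replacement List" that alternates two objects
# every few frames, so 300 numbers never have to be typed by hand.
objects = [r"c:\inspire\frank01.lwo", r"c:\inspire\frank02.lwo"]
frames_per_object = 3
total_frames = 300

with open("replace_list.txt", "w") as f:
    f.write("#LW Object Replacement List\n")
    for frame in range(0, total_frames, frames_per_object):
        current = objects[(frame // frames_per_object) % len(objects)]
        f.write("{}\n{}\n".format(frame, current))

The resulting .txt file is then pointed to from Layout exactly as described above.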

Optical Effects: creating a box in only one window of the Modeller results in a single square polygon which may be assigned a surface. One may have several different "slabs," each one a "Sequence," using the Image Panel "Load Sequence" button. These may become "Dissolves," "Wipes" and "Flying Screens," and they may be used for subsequent effects shots and/or transparency/alpha matting or Compositing. They may also be used to "bump up" anything from super-8 films/GIFs/INSPIRE films to Cineon format, using LIGHTWAVE. Fade-ins and -outs and such may be better managed by having a finished sequence loaded onto a card or as the background in the Effects Tab Compositing panel. See "Compositing" and "Depth of Field" for a good description of some "opticals." They can even be used to convert a sequence rendered "every other frame" into a normal .avi! This can be especially useful when creating a "demo" of an animation with sound. The Image Sequence loader in INSPIRE will play a sequence of images and skip frames that are missing, so that if one renders every eighth frame, an .avi using that image sequence will show frame #1 for seven frames, then frame #8 on the eighth frame. (Dissolves tend to make long sequences feel shorter, and a "soft cut" dissolve of 8 frames or less has a distinct feel.) If one has a short part of the sequence fully animated on "ones," this will all be included -- pretty cool. See Editing. Colored lighting can filter image sequences, and masking images may be applied to cards given a transparency value over 100%, so that those image areas will be increased in brightness. Similarly, "alpha" cards may be used with anti-aliasing values to blur background cards and leave foreground areas sharp.
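
The every-eighth-frame behavior can also be imitated outside of INSPIRE by duplicating the rendered frames to fill the gaps, so that any ordinary AVI builder sees a complete, held-on-eights sequence. A rough sketch, with the file names, numbering and step being assumptions for illustration:

# Rough sketch (file names, numbering and step are assumptions): fill the
# gaps in an every-8th-frame render by copying each rendered frame forward
# until the next rendered frame, leaving any frames animated "on ones" alone.
import os
import shutil

step, first, last = 8, 1, 97            # rendered frames 1, 9, 17, ..., 97
name = "frame{:04d}.tga".format

for start in range(first, last + 1, step):
    src = name(start)
    if not os.path.exists(src):
        continue                         # leave genuine gaps untouched
    for frame in range(start + 1, min(start + step, last + 1)):
        if not os.path.exists(name(frame)):   # do not clobber real frames
            shutil.copyfile(src, name(frame))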

Optimizing objects: the INSPIRE Modeller Tools Custom Plug-in menu includes a command called "Reduce Polygons" that can be highly effective. Optimizing objects is an excellent way to produce "dummies" that can then be used for animating keyframes with fast refresh times. High-quality design is worth seeing and worth taking note of -- a fillet here, a chamfer there. I haven't experimented with performing Boolean "knife" operations on objects before optimization (see Knife), but it stands to reason this might help the process a little. Also, consider selecting polygons that you want to keep, such as key crease positions, and then inverting the selection in "Display" so that they will be deselected, then optimizing. After optimizing, one may want to "Triple" polygons with more than four points. The "+" button in Statistics allows automatic selecting of only non-quad polygons.

Orthographic rendering: there are two ways to do this: a complicated one involving moving barn-doored distant lights and motion blur, and the obviously easier solution of moving the camera far away and using a high zoom factor. The resulting "flat" shots have some uses: intercutting with contoured shots for dimensional contrast; dramatically, for a stream-of-consciousness engineer's point of view or an inattentive character's perspective, since depth is largely inferred; or possibly to parody the shots in "Jaws" and other films.
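
A little arithmetic shows how quickly the far-camera trick flattens perspective. Perspective scales apparent size by one over distance, so the foreshortening across a subject can be measured as the ratio between its near and far sides; the numbers below are purely illustrative:

# Illustrative arithmetic, not INSPIRE settings: how backing the camera up
# (and zooming in to compensate) approaches an orthographic look.
subject_depth = 2.0                      # the subject is 2 units deep
for camera_distance in (5.0, 50.0, 500.0):
    near = camera_distance - subject_depth / 2
    far = camera_distance + subject_depth / 2
    # 1.0 would be perfectly flat; larger values mean visible foreshortening.
    print(camera_distance, round(far / near, 4))

At 5 units the near side renders 50% larger than the far side; at 500 units the difference is about 0.4%, which reads as orthographic.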

Over 100% values: bump maps can be set higher than 100%, as can Diffuse settings, reflectivity and transparency in the Surfaces panel. A 200%-transparency surface will cause objects behind it to appear brighter. Dissolving between two transparent panels can make it possible to control imagery like a paint box. According to an article at www.newtekpro.com by Patrik Beck, having transparent cards move across a metallic logo is one way to animate attractive "glints." Lighting often needs values over 100% because the mathematical relationship of brightness to proximity, the "rule of squares" (the inverse-square law), requires very bright lights. On pre-1990 movie sets, it was common to have lights bright enough to cook food a matter of feet from an actor. (Film stocks are somewhat faster now.) Streaking particles can also have over-100% values, which can make for fun sparks and such.
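
As a quick sketch of why falloff pushes light values past 100%: with inverse-square falloff, the intensity that actually reaches a surface drops with the square of distance, so the nominal panel value has to be set far higher than 100% to light anything that is not right next to the lamp. The numbers below are made up, and this is the generic inverse-square relationship rather than INSPIRE's exact falloff control:

# Illustrative only: an "800%" light with inverse-square falloff.
nominal_percent = 800.0        # the value typed into the light's panel (assumed)
falloff_distance = 1.0         # distance at which the full value applies (assumed)

for distance in (1.0, 2.0, 4.0, 8.0):
    received = nominal_percent * (falloff_distance / distance) ** 2
    print("at distance", distance, "about", round(received, 1), "% arrives")

At a distance of 4 units, the 800% light behaves like a 50% light, which is why over-values are needed to imitate natural lighting and lend greater contour.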

CONTINUE TO GLOSSARY P - Z

www.inspirejoy.50megs.com

c. Scott Lee Tygett 2001