GLOSSARY

The Glossary has been divided into three sections: 1 - F , G - O and P - Z . If you use your browser's "Back" button for the following pages, one hopes you have no problems navigating between them.

This little list is about how to "brute" certain popular effects. You may spot some glaring errors (this page has been my "buggiest"), which an e-mail from a good soul like you could correct. I apologize for earlier versions of this Glossary, which included some inaccuracies and gaps. Hopefully, it will evolve into a useful page for INSPIRE and LIGHTWAVE users alike.

QUICK REFERENCE INDEX (Actual Entries Just Below...)

1080i

300 image limit

3D

3D Textures

7

A

Adobe Plug-ins

AI

Allusion

Alpha

Animatic

Anti-aliasing

Area lighting

Array

Aspect Ratio

Aura

AVI

Ball

BBMPEG

Bevel

BG Conform

Blur

Bones

Boolean

Brute

CAD

Camera technique

Cel shading

Clip mapping

Cloth

Color correction

Collision-detection

Compositing

Composition

Config.sys

Converting Images to AVIs

Counters

Cross sectional modelling

Cubic mapping

Curves

Cycles

Cycloramas

Demo reels

Depth of field

Disks

Displacement mapping

Dissolves

Domes

DVD/HDTV

Dynamics

Editing

Educational software

Extra camera

Extrude

Eyes

F1

Fabric

Faces

Falloff

Fill hole

Film quality

Fire

Flipping textures

Fog

Folders

Frame-by-frame

Fresnel

Front projection

Fur

(End of Glossary page 1 of 3)

1

1080i: if you ever want to render a short animation in a high-resolution mode like 1920 x 1080 or 720 x 486, which lack automatic frame advance (only available up to 640 x 480), press <F1> in Layout for a list of the "hot" keys you will probably want to use to shorten the time spent chomping at the mouse: <F9> for rendering, <Esc> to cancel, and the Right Arrow key for frame advance.

300 image limit: if one uses INSPIRE for compositing, there are a couple of limits that may become issues, like the limits on the number of bones, morph targets ("metamorphoses"), or images in an image sequence. I have noticed that if one is blowing up a 320x240 image sequence to 640x480, there seems to be no problem copying 1000 frames, so apparently this limit may only apply to textures. The limit on bones is managed by sawing characters up, usually around the tummy, or by using bones more sparingly -- in other words, switching to "Limited Range" and using falloff with intensities in the thousands. See "Skelegons." The limit on morph targets is hard to live with, no argument there. One solution is to focus on mouth extremes that will provide intermediate values, like the oo-shaped "sh" in "shoe." Another is to resort to two sets of 40 face shapes and to dissolve between a foreground and background image sequence in a utility compositing Scene file. Which brings us to the limit of 300 images per image sequence. This limit is set in the "config" files, and I do not know whether it is one of the limits that can be changed by opening the Scene file in a text editor. Tricking Windows into doing this requires opening the "Settings" tab in the "Start" toolbar, selecting "Folder Options," removing the association between INSPIRE and "lws," and then renaming the lws file to be adjusted with a ".txt" suffix. I am not sure this CAN be adjusted. Another workaround is to break up, say, a 900-image sequence into three 300-image sequences with different file names. This will take as little as a second per frame. Just be sure to set the "Frame Offset" in the Image panel's "Load Image Sequence" requester for the second and third files, and again, be sure to give them different names. In this way, when one loads the 900-image sequence into the Scene file it is being used in, one loads three files: the first with no offset, the second with a "-300" offset, and the third with a "-600" offset. This seems to work.
So you know.
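
The split-into-three-sequences workaround above amounts to copying frames under new names so each sub-sequence stays under the limit. A minimal sketch in Python, run outside INSPIRE; the directory layout, base name and ".tga" suffix are hypothetical:

```python
import os
import shutil

def split_sequence(src_dir, base, total, chunk, dst_dir):
    """Copy frames base0001.tga .. baseNNNN.tga into chunked sub-sequences
    (baseA0001..., baseB0001..., ...) so each stays under the 300-image limit."""
    labels = "ABCDEFGH"
    for i in range(total):
        group = i // chunk           # which sub-sequence this frame falls in
        local = i % chunk + 1        # frame number within that sub-sequence, 1-based
        src = os.path.join(src_dir, "%s%04d.tga" % (base, i + 1))
        dst = os.path.join(dst_dir, "%s%s%04d.tga" % (base, labels[group], local))
        shutil.copyfile(src, dst)
```

For a 900-frame sequence, `split_sequence(d, "shot", 900, 300, d)` produces shotA0001-shotA0300, shotB0001-shotB0300 and shotC0001-shotC0300, ready to be loaded with the offsets described above.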

"3-D": one way to produce red/blue 3-D directly is to parent the camera to a Null and parent a set of filtered mirrors and/or filtered transparent cards to the same Null, or to change all of the colors of the lights if there are very few. A gray palette is recommended: first convert the image sequences to gray in the Effects panel, Image Processing menu, near the top, at the Color Saturation requester, then render the sequence as another sequence. Check out the 3-D 360 plug-in gallery page for a discussion of 3-D aesthetics. A better format for 3-D is green/violet, but the glasses aren't yet common. To get a professional-quality red/blue or green/blue result, use a separate "Opticals" Scene file, then apply the Math Filter plug-in in the Image Processing panel of the Effects button. [Opening the Scene file did not automatically load the filter pack, so they had to be "re-okayed," FYI.] Red was produced with the Blue channel at 100%, Replace box unchecked, and Green at 100% with the Replace box checked. Blue was created with Red and Green both at 100% and both Replace Pixel Color boxes checked, being careful to "enter" and "Okay" the changes. Since LIGHTWAVE 6.5 seems to include anaglyph 3D, one may want to ask a favor instead.
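
The channel arithmetic behind an anaglyph can be sketched per pixel: the red channel comes from the left-eye render and green/blue from the right-eye render, with each eye first reduced to gray as the entry recommends. A toy Python sketch (this is not INSPIRE's own Math Filter, just the idea behind it):

```python
def to_gray(rgb):
    """Reduce an (r, g, b) 0-255 pixel to gray using standard luminance
    weights (which sum to 1.0)."""
    r, g, b = rgb
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))

def anaglyph_pixel(left_rgb, right_rgb):
    """Red from the left-eye image; green and blue from the right eye."""
    gl, gr = to_gray(left_rgb), to_gray(right_rgb)
    return (gl, gr, gr)
```

A white pixel seen only by the left eye comes out pure red, one seen only by the right eye comes out cyan, which is why the glasses separate the two views.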

3D Textures: at this point, the same as "procedural textures," which I think refers to their being equations. A true 3D texture cube file would be enormous, close to one gig. Instead of circles painted on a background, a procedural is a block of Swiss cheese: it may have "seams," but if it is sized correctly they will not occur on the object. INSPIRE is apparently able to load third-party 3D "shaders," making it even more powerful. Unfortunately, this is as far as I've gotten on the learning curve on this one. To make truly 3-D "smoke," one would need a LOT of particles with the velocity of the texture matched to their motion, if I have it right. Ordinarily smoke is projected onto, or passes through, a plane to give a volume-y effect. I have heard of some programs applying fog-like density to objects, but I am unclear whether this was a standard fog density effect composited (double-exposed) onto a semi-transparent object or something more volume-related. "3D textures" usually refers to these concepts. A "true" 3D texture would probably be a "Displacement map" added in the Objects panel, which requires a very heavily subdivided, polygon-dense displacee. Use "Save Transformed" to keep your results.
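
The "equations, not stored images" point can be illustrated with a toy procedural: every (x, y, z) point in space gets a value from a formula, so there is no texture file to store and no seam to hide. A hypothetical sketch in Python (not any actual INSPIRE shader):

```python
import math

def toy_marble(x, y, z, scale=1.0):
    """A toy 3D procedural texture: maps any point in space to a value
    in 0..1 using an equation, so nothing is stored anywhere -- contrast
    a true 3D image cube, which would be close to a gig."""
    return 0.5 + 0.5 * math.sin(scale * (x + y + z))
```

Real procedurals (fractal noise, turbulence, etc.) use fancier formulas, but the principle is the same: evaluate an equation at the surface point instead of looking up a pixel.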

7: in case you haven't already heard, LIGHTWAVE 7 has a Demo CD that includes a 1078 page Manual. The program will import dxf and obj object files, though it will only "save" tiny under-200-point objects. Images are watermarked; no scene saving. Here's your chance to come up to speed on UV-mapping, etc.

A

"a": this is the key that fits the Modeller view to your object, very useful if you pan and zoom all over the place (normal). See "Keystrokes."

Adobe Plug-ins: see "Paint Box." My jaw dropped when I realized that the humble INSPIRE3D could run some of the Adobe Photo Deluxe filter package. Photo Deluxe is a FREEBIE with many brands of printer or scanner. Ask around; if you do not own it, you may learn that a friend has a spare copy they are not using. The only "work" involved in getting this powerful package to work with INSPIRE is to open the "Photoshop Filters" panel by selecting that button in the Image Filter plug-in requester of the "Image Processing" tab of the Effects panel, pressing the Options button, and changing the directory. The directory will have to point to where your copy of Photo Deluxe has been loaded, such as "Program Files" on the C drive, using the "Change Directory" button. You may also need to reload the Adobe program if you have it configured weirdly. Then press the "Process Directory" button, and welcome to a few more filters. The book points out that certain functions will not work, and some of these are heartbreakers, because the "scratch removal" tool, for instance, can work wonders when removing non-tripled goofs. My other recommendation on this tool (besides activating it) is to save it for loading finished sequences. That is, after you have done a "straight" render of a sequence (you will have to save it as individual images in INSPIRE, not as an "avi" -- pretty easy to undo with "Bink" from www.smacker.com), texture map or front project it on a box or card, or "Background Composite" it in Layout, and re-render it. This also allows you to play with anti-aliased texture map blur (see "Depth of Field") or halfway-dissolving a brush stroke or sponge effect by having two front-projected slabs one in front of the other. Compositing/double-exposing the Adobe filters this way also makes possible some nifty smoke, shower door, blur, hair, metallic, etc. results by creating duplicates of scenes without background elements, or by using the Alpha method.
see "Optical Printing." The scene file you create for these effects can be re-used dozens of times.

ai: Adobe Illustrator file format, see "Illustrator."

Allusion: this is where the character is in the distance, in silhouette, and then is only seen in its upper body, while other characters react to it. Some animated films ease into the "look" they establish, letting the audience get accustomed to the rules. Von Sternberg relished allusion, and believed Chaplin was a high practitioner of the art. Allusion is also a way to build intimacy by contrast, where the self-talk of the audience is asking, "What are you about?" And then the answer: "So THAT'S you." (The more innocent the better, though this also gets into the hypnotic nature of story, a sorry crutch.)

"Alpha" matting: unsimple only in that one has to either "Object-dissolve" or make surfaces transparent (with edges set to transparent) that one doesn't want to show, and then save the Alpha Scene separately if one wants to make later changes. Redundant rendering can gobble up 90% of a scene's rendering time for slow-moving scenes, so putting those objects in a separate background or foreground image is worth doing for animation of any length or complexity. Very fast action scenes can combine low-resolution backgrounds with higher-resolution running/driving/skating objects, directly using "Load Sequence" or as a separate step, by saving the foreground elements and alpha matte images separately for later compositing. Professionals seem to do the latter, in case some small flaw is later discovered, necessitating re-rendering the whole scene. (!) For alpha matte textures, see "Surfacing." It also bears mentioning that TV commercial and visual effects "houses" will output all sorts of "compositing" elements in order to be able to make last-minute changes for clients using one of the compositing tools like AURA, CHALICE, DIGITAL FUSION, LIGHTWAVE, etc. This is not recommended for INSPIRE productions, given that moving up to LIGHTWAVE 7.0b will typically cost a fraction of the budget, and LIGHTWAVE is easily able to automate these extra "mattes."

Animatic: a storyboard shot to video, since a non-moving storyboard just BEGS to be moved. Storyboarding is fun, but having the tools to turn the storyboard very quickly into an animated short that can be checked for pacing, etc., is very useful. I think I would be more inclined to rough out a short using very blocky "dummy" objects and holding every 100th frame for 100 frames than to scan in drawings and map them onto cards for a literal "animatic" effect. Getting INSPIRE to emulate a video animatic is pretty straightforward -- one can start with a scan of the whole storyboard mapped on a card/slab and move the camera across it with unsplined camera moves, and/or have more than one card with the storyboard for object dissolves and motion blur. One can also break the storyboard up and get fancy with multiple cards. Sound may be added to these low-resolution tests using "Bink" from www.smacker.com . The advantages: sometimes one will think up "business" to add to the shot as one goes along, or notice that the music or dialogue begs for a different pacing; and advertisers make very polished animatics, which they show to prospective clients.

Anti-aliasing: since INSPIRE does not record large frames with automatic frame advance for animation, it behooves artists to use anti-aliasing with the 640 x 480 size (a much-recognized "standard digital" format, though 720x540 is overtaking it), but the better bargain is to move up to a full LIGHTWAVE license ASAP. Or make a few friends at a user's group or a community college where LIGHTWAVE is used, and ask them to use the "Load Sequence" command in LIGHTWAVE and then load the sequence as a background, to "bump up" one's INSPIRE TGA or other image files to a larger-format AVI or MPEG. On one occasion, an instructor insisted a student re-render in class, where a render farm was being set up. This does not manage the problem of lacking the time or resources to include anti-aliasing. A compositing file with the image sequence loaded onto a texture-mapped card, and then the "anti-aliasing strength" of that card's map raised, should yield a controlled blur effect with little fuss. "Math Blur," "Surf Blur" or the anti-alias softening filter may also be applied, or a controlled, very soft motion blur camera blur may be added (see "Depth of Field"). Another solution is to hustle. See "Hustling." Another way to hide a lack of anti-aliasing is to use backgrounds that have been rendered separately and ARE anti-aliased, adding them as an Effects panel background. Finally, some styles of camera work and animation lend themselves to a lack of anti-aliasing or motion blur -- traditional animation tends to have a great deal of "overlap" resulting in smoothness, and solid or mottled colors and jagged contours probably need less smoothing; but that is probably a pretty bass-ackwards way of setting project priorities. Anti-aliasing is also a powerful tool in the Surfaces panel, where raising the anti-aliasing strength of a bump map will round out a texture.

Area Lighting: an array of five or more point-source lights, rapidly moving or spinning as a plane, emulates the look of an "area" light setting in LIGHTWAVE, which in turn emulates the look of "radiosity" (the illumination of objects by light reflected off other objects), which emulates the photographer's popular light source -- the white board reflector. In order to get numerous lights without numerous shadows, one parents the lights to a Null, displaces them from the Null, and has the Null rotate while the lights are targeted at another Null or object. The rate of spin is high, about 720 degrees PER FRAME (with "repeat" engaged in the Motion Graph), and for this effect to work the render will need anti-aliasing and motion blur both engaged; it has its limitations. Other ways of getting fuzzy shadows: shadow mapping, and shadow "cards" parented to their objects -- a tip shared by Richard Morton Richard@iqdigital.com . I think one reason this was recommended over using 100 point sources was that it might be a faster render, but I haven't tested this. Use "Falloff" with multiple sources to get an "Old Mastery" feel with soft shadows. See "Linear Lights."
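
The spinning-ring rig above is just evenly spaced lights on a circle whose phase advances every frame; motion blur then smears them into one soft source. A sketch of the geometry in Python (the 720-degrees-per-frame figure is from the entry; the function name and everything else are hypothetical):

```python
import math

def ring_lights(n, radius, frame, degrees_per_frame=720.0):
    """Positions of n point lights evenly spaced on a ring (in the XZ
    plane around a Null at the origin) that spins degrees_per_frame
    each frame; spin plus motion blur fakes an area light."""
    positions = []
    for i in range(n):
        ang = math.radians(360.0 * i / n + degrees_per_frame * frame)
        positions.append((radius * math.cos(ang), 0.0, radius * math.sin(ang)))
    return positions
```

Each light stays at the same distance from the Null, so only the rotation keyframes (and "repeat" in the Motion Graph) are needed to drive the whole array.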

Array: although "PointClones+" will cause one's modelling session to end, this handy tool in the Multiply panel will approximate many of the applications of "PointClones+," and not just for making better chandeliers. A tree can become a forest; a hair, a mane; a particle "cloud" sphere, a really big cloud. See "Particles." I also use it to make rope, bolts, naval mines, power towers, etc.

Aspect ratio: see "Film quality." During the heyday of film, there were over a dozen aspect ratios, partly because international films would sometimes be subtitled in three languages. Now that DVD and HDTV have re-introduced multilingual programming, I am inclined to let "Star Trek's" aspect ratio be mine. Whatever that is. 720 x 486? Whatever. (For INSPIRE, 640 x 432 using the Custom requester; 16:9 becomes 640 x 360.) There is a related trouble-shooting issue with DVD publishing that, I apologize, is beyond me, where a pixel can have a "squeeze" so that a 480 x 480 shot becomes a 640 x 480 shot. Other websites, like www.vcdhelper.com , may be able to help.
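
The frame sizes in parentheses are just the width scaled by the aspect ratio. A one-line helper (hypothetical, and for square pixels only -- the DVD "pixel squeeze" mentioned above is exactly the case where this assumption breaks):

```python
def frame_height(width, aspect_w, aspect_h):
    """Height of a square-pixel frame with the given width and aspect
    ratio, e.g. 16:9 at 640 wide."""
    return round(width * aspect_h / aspect_w)
```

frame_height(640, 16, 9) gives the 640 x 360 figure quoted above, and frame_height(640, 4, 3) gives the familiar 640 x 480.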

Aura: Newtek came out with a product that does compositing similar to other products typically used with LIGHTWAVE or MAYA. It can be used for "motion capture" and special effects work combining handheld camera footage with solid modelling and particle effects; it generates x/y "mot" files, and it is supposed to dovetail very well with Adobe Premiere. Other programs that do this to one degree or another: ADOBE PREMIERE, INFERNO, CHALICE, DIGITAL FUSION, etc. The Newtek product is called "Aura," and it can also be used for things like cel animation; it has some interface elements for LIGHTWAVE so that little animations can be combined with live-action elements "on the fly." It's supposed to be very powerful. www.newtek.com . See "Compositing."

AVI: the Microsoft "AVI" format is an uncompressed format for displaying animations that is hard to show on machines that are not at least capable of playing DVDs. See "DVD/HDTV." [With my computer, the image becomes a series of vertical strips; I've discovered that if I move the view to the right or left so it is cut off by the monitor, so that only a third shows, I can see what the image will look like.] It is recommended that for most animations, one save one's renders as image sequences. Image sequences can be stopped at any time, unlike "avi" animations. TGAs are a recommended format, because they are modestly compressed but lossless, and many graphics packages recognize them. TIFs are more popular, but are uncompressed. Converting a TGA or TIF sequence to an "AVI" is achieved by loading the sequence into a separate INSPIRE Scene file using the Images panel, and then loading the name of the sequence into the "Effects" panel background image requester. This gets used for a lot of different duties. See "Compositing." Converting an "AVI" to another format is not necessary with an image sequence; a sequence may be saved to a variety of formats in the same way one converts to an "AVI." A pop-up window will ask what sort of compression one wants to use, as INSPIRE comes with several popular compression formats. Windows also includes some: Windows 2000 comes with MPEG-4v3, for instance, which is very blurry but favored for the web. A popular freeware conversion utility, "Bink," from www.smacker.com , is able to convert an image sequence to an "AVI," compressed "Quicktime," or high-quality proprietary compressed "EXE" without using an INSPIRE Scene file. Compressed formats like "Bink," "Sorenson," "Quicktime," "MPG" and "MPEG-2" generally will bring the color palette down from millions to less than 500 colors, blur detail, and try to remove frames by comparing similar frames and interpolating the lost information.
MPEG-2 does relatively less compression than some other formats, and it is used for creating DVDs. MPEG-2 programs may be purchased from a number of sources, including www.newtek.com , which markets the very affordable "Vidget." A competing AVI-to-MPEG-2 converter is called "BBMPEG," and another is "DivX." The "Bink" converter is very popular with animators who own somewhat slower computers. Using "Bink" to compress an AVI will make it viewable without any jerkiness (unless the computer's memory is full; freeing up 100 megs can do wonders). Converting an AVI to an MPEG-4 in "Bink" is done by pressing "Convert a File" and then going from avi to avi. The pop-up box will ask you what sort of compression to use, and then you pick what you want. Afterward, the new avi will need to be renamed in Windows (by clicking on it), giving it the "mpg" or "asf" suffix. And how about changing from "AVI" back to an image sequence of TGAs? Although INSPIRE doesn't offer this utility, "Bink" from www.smacker.com does, free. Finally, be sure to update to the most current "Media Player" and DirectX utilities as they become available. "Media Player" is sometimes bundled with "Internet Explorer," which is bundled with many CDs for services like AOL or NetZero. LIGHTWAVE 7 is able to use "avi"s as if they were image sequences for textures, compositing, etc.

B

Ball: the INSPIRE "Ball" object is fine for most purposes, but if you need a "real" high-poly ball, go to Options in the Object tab and change the "radial segments." This will create a high-res sphere with good detail. The CTRL key keeps the ball from stretching as it's enlarged. A plug-in for numeric balls is also available from http://sunflower.singnet.com.sg/~teddytan/index.htm . You can also adjust the divisions of box objects when they are made, using the right and left arrow keys.

BBMPEG: see "AVI."

Bevel: in INSPIRE, to get what one commonly refers to as a bevel, one "smooth shifts." The command "Bevel" in INSPIRE is a powerful command that is best dwelled on using the book. Bevelling is recommended to reduce acute angles and lend a more realistic feel, and also for metanurb modelling. Unlike "smooth shift," it is usually used on one polygon at a time. In order to bevel a row of polygons, one needs to "smooth shift" them, possibly moving them in different directions to get a desired result. A very powerful plug-in called "Vertibevel" is sold by www.skstudios.com . Bevelling is used more than one realizes: "smoothing" can remove details unless extra polygons "contain" the smoothing, which is calculated between adjacent polygons. A cube will lose its corners unless a narrow facet is added along the edges. This is the fine detail work that distinguishes experienced modellers, according to Richard Morton and others. A fillet here, a bevel there.

"BG Conform": you would think one of the most powerful tools in Modeller would get a little more notice in the tutorials. Do you lust over the "group weld" plug-ins of other software? INSPIRE has "group weld," using BG Conform. Select one group of points, copy and paste it to another layer as a background, select the points to be welded, "BG Conform," and "merge" the result. BUT THAT'S NOT ALL, because this is also a command that will allow you to rework parts of a model, like making a cheek look more rounded, without resorting to Boolean operations. Create a shape from a sphere and use it as a background for some of the points of an object; I've used it with "Points2Polys" and polygons as well. This function is also used for creating morph targets, and is very useful when emulating the spherical-map-creating "Unwrap" plug-in with a series of morph targets converting a sphere to a card. It may even be able to "fix" a morph target that the computer doesn't like, though usually some modelling goof will be responsible.

Bit rate: Cineon is a rendering format that can produce millions more colors than 24-bit formats like TIFs. Cineon, RGB, RLA and SGI formats all allow for 64-bit depths. (Cineons, like other formats, may be any size, though certain standards are used. A standard Cineon resolution is 2048 x 931, also called "2K" for its 2048-pixel width; 2048 x 1336 is a still format, and 2048 x 1151 is "convertible" to HDTV's 1920 x 1080. A "4K" frame is 4096 x 3112.) According to Jon Carroll, INSPIRE and LIGHTWAVE versions before 6.0 do not have any 64-bit rendering, so apparently Cineons exist in a non-64-bit standard. Informative web sites like Alt Systems have detailed charts www.rfx.com/alt/res.html listing the standard formats, such as 3656 x 3112 for lens-squeezed CinemaScope frames. In 64 bits one sees subtle gradations in sky without the large bands barely visible in 24-bit rendering, and shadow and low-key dark areas do not tend to "break up." Action scenes will not easily give up their shortcomings, but slow scenes where the eye is allowed to dwell and adjust reveal some lack of gradation. It is probably possible to "PhotoShop" 64-bit quality, or to emulate 64-bit gray-scale values in INSPIRE. Two thoughts along these lines: first, increase light sources enough to cause their "banding" to shift, effectively "anti-aliasing" the banding, and then double-expose the two versions using film or 64-bit re-rendering. If one is using lights with an "inverse square" falloff, to make a 50% banding offset the percentage change in the light would need to be 70.71%; to make 66% and 33% bands, 81.24% and 57.45%. For constant lights, use 50%, and 33% and 66%, respectively. Secondly, one may light all dark Scenes as "day for night." A Scene set in a dark forest would thus have high-quality shadow detail, with film underexposure (or 64-bit re-rendering) providing the darkness. Similarly, a daylight Scene could have a "day for night" pass for its shadows, plus two high-ambient-setting (or alpha-masked?) 50%-shifted passes for its midtones. A LIGHTWAVE 7 user can then composite these in a short Scene file one can write. Since most monitors are 32-bit, how can one see the difference anyway?
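
The 70.71%, 81.24% and 57.45% figures above are just square roots of the target fractions, following the entry's inverse-square model, in which illumination is taken to scale with the square of the light setting. A quick check in Python:

```python
import math

def light_setting_pct(fraction):
    """Light setting (percent) that yields the target illumination
    fraction, assuming illumination scales with the square of the
    setting (the entry's 'inverse square' model)."""
    return round(100.0 * math.sqrt(fraction), 2)
```

light_setting_pct(0.5), light_setting_pct(0.66) and light_setting_pct(0.33) reproduce the three percentages quoted in the entry.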

Blur: for final renders, consider turning motion blur back on. It gobbles up render time, but it is such a high-quality, uncomputer-looking effect. To add focus blur to a shot, see "Depth of Field." Basically, focus blur is added by rotating the camera very slightly but rapidly, and letting motion blur's multiple images provide the effect. Much longer render times result, and focus blur is not often used by many animation studios at this time. Motion blur is automatic, while focus blur would seem to imply a myopic character or story context, like a couple on a date or a child's point of view. Getting the most out of action-oriented motion blur would probably mean alternating a foreshortened shot with lots of overlap and motion blur turned off, with a sudden move to a shot in profile of two racers, for instance, the background a streaky mess. Also, one might consider having the characters engage in some sort of "disorienting" discussion -- "Yeah, today I finally kicked coffee...". Since INSPIRE's motion blur breaks down into multiple images, you may want to try the Adobe motion blur filter and render the blurred background separately for later compositing of horizontal blurs. See "Compositing"; see "Adobe Plug-ins." Blur may also be added to a composited scene by adjusting the "anti-aliasing strength" of image textures on different planes. Carrying this a step further, one can alpha matte these planes. Also, the "Surf Blur" filter that is loaded from the "Effects" panel has gotten me out of trouble from time to time. It is worth mentioning that the "Soft Focus" blur seems to help 320x240 blow up to 640x480 very nicely.

Bones: INSPIRE's Bones function is just a magnet deformer; do not get in trouble by thinking it is an equivalent of hinging. There are two ways I know of to make it more serviceable: one is to use lots and lots of bones, wherever unwanted magneting is occurring and wherever one wants details to move, and then adjust the strength in the Bones panel (type "p" with a bone highlighted) if unwanted object areas are still affected; the other is to get your hands dirty with all four variables in the "Limited Range" menu -- above 100% for bone strength (often starting with 1000%), the shortest falloff (16), and Maximum Range balloons that go no further than necessary. The falloff and percentage are doing all of the work. I am not sure what the limited range limits do, but if the falloff is high and the bone's strength is high, then the balloon will indicate where the bone's effect disappears, and that's easier to manage than competing bones with overlapping weak zones. Lots of bones seems easier, but not if they overlap all over the place. See "Workarounds." Some housekeeping: the "Ctrl r" key. For the longest time I went in circles on this one, so I will chalk that up to a blind spot. Just be aware that unless you use this keystroke, it is near impossible to control bones, because they'll be trapped in an origin plane. "Activating" a bone isn't enough; it needs the "Ctrl r" (using "r" will rest them, but "Ctrl r" is a toggle). Bones deformation requires the bone to be as close to its object's area of influence as possible, and the bone will have no effect once "rested" there, unless it is moved. (Other useful keystrokes: the up and down arrows, and "p" for the Bones panel.) Combined with the "Add Child" or "Draw Child" buttons, the "r" and "Ctrl r" keys can be used to achieve some tight tolerances of deformation by multiple child bones for the surfaces surrounding each major bone.
IMPORTANT POINT: the "Limited Range" balloons are highly deceptive and indicate next to nothing UNLESS the strength is very high (over 1,000%) and the falloff is 16-ish. I learned much more from positioning a sea of bones across a subdivided card than I ever did from the limited range balloons. See "Skelegons." BUT HERE IS ANOTHER POINT WORTH SHARING: IN THE "REAL WORLD," EVERYBODY IS WEIGHT-MAPPING. This is similar to displacement mapping or texture mapping, except the "weight" map indicates very precisely where a bone's influence will fall off. To learn this skill, get the LIGHTWAVE 7 demo CD from www.newtek.com or move up to LIGHTWAVE. ANOTHER POINT: once you have a skeleton of many bones in INSPIRE, if you are like me, you will lose interest in trying to find the right bone in a forest of bones. Although coloring bones is very effective when "rigging" the character, go back to the Scene Editor and make most of these bones INVISIBLE when you animate. The joints you have assigned red, keep those visible, and as few others as you feel help you. (I have considered making "proxy" bones to show the main shape of a character, which would be visible but inactivated.) If the character's design allows for animation without bones -- the character is not wearing a leotard or continuous texture -- I heartily recommend using old-fashioned hinging, which can also be combined with bones to good effect, since bones of different objects do not affect one another's volumes. That way, the bones can be used strictly for shifting bulges, which is what they are often best at. Another trick is to remove these hingeable areas to an identical copy of the Bones Scene file, but a different object, by replacing the object after removing the hingeable parts in the Modeller. Then one can tweak these accessories without worrying whether an elbow joint will get wrecked when adjusting a sleeve.
To get a better idea of what bones do, I recommend setting up a practice Scene file with a heavily subdivided cube, card or ball, and moving some bones around in it. Add bones and try them in different configurations -- many bones as well as just a few -- and see what "Limited Range" really does; it's pretty peculiar. "Messiah" is a blasphemous name for a gun/snack food/tool/car/program; I won't be talking about it here. Often, a character's design will lend itself to some modest cuts/seams, like a wide collar near the shoulders. On another occasion, there was no problem with legs at all, though this may have resulted from a fairly angular object. "Treeform" is the term for having a character's arms straight out and its legs slightly apart as a starting point; according to the experts, you want there to be some bending in the joints of the treeform or it will not work as well. Something I plan to try is to use many small bones surrounded by child bones along the skin surface. It might work. See "Skelegons." Between multiple child bones and careful placement, one should get some really nice effects. See "Weight Mapping." A common tool is a Scene file for rigging characters that has the character doing a motion, like a ballet dancer, that will call attention to bones pinching or unwanted overlaps. This Scene file is also helpful because the speediest workflow seems to come from toggling the different strengths, rest lengths and positions while looking at the problem pose frame.
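
The "magnet deformer" idea can be caricatured in a few lines: each bone pulls on a vertex with a weight that falls off steeply with distance, and the weights from all bones are normalized. A toy sketch in Python (this is NOT NewTek's actual deformation math, just an illustration of why a falloff of 16 with very high strengths confines each bone so sharply):

```python
def bone_weights(vertex, bones, strength=1000.0, falloff=16):
    """Toy magnet-deformer weights: each bone's pull on a vertex falls
    off as 1/distance**falloff, then all weights are normalized to sum
    to 1. `bones` is a list of (x, y, z) bone positions."""
    raw = []
    for bx, by, bz in bones:
        d = ((vertex[0] - bx) ** 2 + (vertex[1] - by) ** 2
             + (vertex[2] - bz) ** 2) ** 0.5
        raw.append(strength / max(d, 1e-6) ** falloff)
    total = sum(raw)
    return [w / total for w in raw]
```

With a falloff exponent of 16, the nearest bone dominates almost totally, so a "sea of bones" across a subdivided card gives sharply localized control -- which matches the entry's experience that falloff and percentage do all the work.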

Boolean: very powerful, especially combined with "Array" or other commands to create things like golf balls, windows and snowflakes. See Knife. Used with a single polygon card, Boolean can become one of your favorite modelling tools, adding seams or edges where you want them instead of where they happen to fall. Selecting a polygon and using "Add Points" is also effective, but the "Boolean" approach seems faster. It can emulate other tools as well. When one wants the look of "chamfering," or polygons radiating from high-articulation areas like eyes or mouths, one may consider "disk-knifing" with a subdivided disk or sphere instead of a card. Or how about a "fill hole" operation? It is probably simpler to "Select" the points of a lost polygonal area, "Copy" them, and then use "Make Poly" to create and fill a hole, but it could also be emulated by pulling out the hole's surrounding points, "Booleaning" a card using "Add," and deleting the extra polygons. Boolean can also be used with NURBS: "Adding" is discouraged because non-quad polygons will not convert to curves, and there will probably be a pesky hidden polygon or two to be deleted, but tripling only the non-quad polygons can result in some very workable models. One trick to do this: go to "Display" "Statistics" and press the "+" button beside ">4 Vertices," and all of the non-quad polygons will be selected. Now triple them.

Brute: when one animates the paths of hundreds of individual snowflakes, instead of using a new tool to accomplish this, that is bruting, or brute-forcing. Incidentally, there is a Snow scene in INSPIRE that is very instructive. I believe it moves two overlapping masses of snowflakes/points at two different speeds, with two different displacement maps.

C

CAD: although Alias-derived programs like MAYA and Rhino and "dxf" programs are better-known for use with CAD/CAM and Computer Numerical Control mills, LIGHTWAVE is used by NASA. And if you like to noodle on ideas for new advanced materials or a telescope mirror that can be used for nuclear fusion,...

Camera technique: one of these is a pain, the other is kind of fun. The pain is that if one uses a Null object as a parent for the camera (or lights), certain tricky camera motions will be much easier, but moving the Null instead of the camera becomes a "learning curve" issue and is unintuitive. The easy one is that the book mentions one may use additional lights, using "Light View" to "store" favorite shots, as long as the lights have zero intensity. Best of all, using the "Rename Light" button, one can list "Fm 230 Close Reaction A, lens 2.282" or "Dolly position 2 of 3." If one uses a spotlight-angle-to-lens-angle converter, one can even change lens settings, since zoom may be "enveloped," though this is done automatically when keyframing zooming. When the Scene is fairly littered with lights, one can use the "Scene Editor" to hide them. The nice thing about this is that one doesn't have to chomp at the bit during initial animation when the creative juices are pumping, and when animation changes may lead to camera changes anyway. I would not be surprised if sophisticated users parent lights and cameras to the same Nulls, though this leads to Nulls being everywhere, and they cannot be easily made invisible. A common complaint about camera technique is that it either looks spliney or jerky. "Splineyness" is often related either to having motion on only two of three axes, drawing attention to the hinging, or to poor keyframing leading to whiplashing. One can adjust spline tension, but that makes the learning curve for spline tensioning that much longer. The quick-and-dirty solution is to A) reduce the number of keyframes, then B) go to keyframes with whiplashing or jerkiness and try copying the keyframes near those poses to other frame number positions. I also sometimes visit the Graph Editor, which may show where certain splines are getting in trouble, to make slight adjustments.
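Since zoom can be enveloped, a spotlight-angle-to-lens-angle converter is just trigonometry. A sketch, assuming a pinhole camera and a 36mm horizontal aperture (INSPIRE's internal aperture and zoom-factor scaling may well differ, so treat the constants as placeholders):

```python
import math

APERTURE_MM = 36.0  # assumed horizontal frame width; adjust for your camera model

def fov_to_focal(fov_deg, aperture=APERTURE_MM):
    """Focal length whose horizontal field of view matches a spotlight
    cone angle of fov_deg degrees."""
    return aperture / (2 * math.tan(math.radians(fov_deg) / 2))

def focal_to_fov(focal_mm, aperture=APERTURE_MM):
    """Inverse conversion: horizontal field of view for a focal length."""
    return math.degrees(2 * math.atan(aperture / (2 * focal_mm)))
```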
Now, none of these is what I would call camera TECHNIQUE; they're just instructions the operator needs to know. The real technique of camerawork is whether a shot needs and/or deserves balance/dimensionality/tension/density, and how the ballet of color and content will play out alongside the prejudice of the point-of-view rule that dictates every shot is a point-of-view of the motivating character. (Over-the-shoulder shots imply the shoulder-owner's POV.) The POV rule is present in the vast majority of movies, and one may study how problematic camera situations like party scenes are handled by studying movies. Presumably, the movie takes an aesthetic stand consistent with the Golden Rule story structure: the POV of the character reflects the character's perceptions. Rain or sun, extras or a bare street, music or none, short or long focus, color harmony or bland hues -- depending on the consciousness of the character.

Cel shading: try www.celshader.com . Also try fog for a composite-able inkline image sequence.

Clip-mapping: the experts say one does a lot of this for effectsy work. Think of it as glorified transparency surfacing. So, it's a good thing. Clip maps make holes -- in a pinch, one can add windows to a rocket or space station with a clip map. They only make holes in the object they are textured onto -- cylindrical, planar, etc. A canoe that would be full of water by virtue of being below the plane of the water could have a clip map the shape of the canoe planar-texture-mapped onto the water, creating a convenient hole. If the canoe is going to animate, though, either the water is going to need to be parented to the canoe or the clip map is going to need to be an image sequence of some kind. The book suggests adding a transparency mask image and adjusting other surface attributes -- if needed. One can make a clip map image sequence with a luminous card Boolean-stenciled to an object and photographed for front projection using the same camera settings, or for planar projection by placing the camera very high or far away with a long lens setting to "orthographicize" the result. An alternative is to "Boolean" one object into another to create either a transparent stencil or a void where "clipping" is needed.

Cloth: cloth is a very new feature for animation. LIGHTWAVE currently supports cloth, so an INSPIRE user should be able to animate a little puppet with joints -- without bones, since bones are unnecessary where cloth will cover -- and produce all the necessary keyframes ahead of adding a costume and giving the Scene file to a LIGHTWAVEr (or moving up...). It may be necessary to include some blank keyframes at the beginning of a sequence where the cloth object is positioned over the "treeform" character and "hangs." Even more exciting is the combination of "cloth" with "dynamics," where balls and bulges roll around on constraints inside a suit like a mouse body suit, reacting to skeletally-driven movements and gravity. "Stuart Little's" MAYA dynamics were reportedly all cloth-driven, though this implies cloth on cloth, among other things. I was surprised to learn that some limited cloth dynamics MAY be available to INSPIRE users, since it began to be implemented with LIGHTWAVE 5.6. www.dstorm.co.jp/DS_E/index-e.html . The cloth will probably not allow multiple collisions, but may perhaps be "bruted" to additional collision effects using "Save Transformed," if one is determined to emulate dynamics-driven character animation. Another item to mention about cloth in INSPIRE: displacement mapping with velocity doesn't just work for flames, flags and ocean waves. Fractal displacement with velocity and falloff should be able to ripple a skirt or coattail, along with bones.

Collision-detection: this feature may be emulated to some degree using Motion plug-in "Effectors" and can be used for effects like having a dress metamorph around a dancer without going through her legs (to have the knees stand out, the Displacement Effector would be used), or having rain particles roll down a rain gutter, in which the rain particles are parented to a descending gravity Null. Motion Effectors are pretty different from "Displacement map" Effectors, which cause an object to deform, and are set up differently. In INSPIRE, all effectors are spherical areas, so a non-spherical repelling or attracting object needs to be combined from several spherical effectors. In order to have a bone-deformed object moving an effector, it will probably need to be bruted, because the utility for attaching an effector to a bone does not appear until later versions of LIGHTWAVE. This sort of brute-forcing gets out of hand when there are plug-ins and other programs that can do this with the press of a few buttons. This is another area where buying a plug-in or moving up to LIGHTWAVE is going to pay for itself. Still, effectors are pretty cool. See Dynamics. I believe the LIGHTWAVE 7 DEMO includes the Motion Designer plug-in for object dynamics. The principle is that every x,y,z numbered point will be examined in its relation to a set of points where it may not go.
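The spherical-effector idea reduces to a point-versus-sphere test. A minimal repelling-effector sketch (my own simplification, not the plug-in's actual math; several spheres together approximate a non-spherical repeller, as described above):

```python
import math

def apply_effectors(point, effectors):
    """Push a point out of any spherical 'effector' it falls inside.
    'effectors' is a list of ((cx, cy, cz), radius) pairs."""
    x, y, z = point
    for (cx, cy, cz), radius in effectors:
        dx, dy, dz = x - cx, y - cy, z - cz
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if 0 < d < radius:
            # Scale the offset so the point lands on the sphere surface.
            scale = radius / d
            x, y, z = cx + dx * scale, cy + dy * scale, cz + dz * scale
    return (x, y, z)
```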

Color correction: the shortest route to color correction may be to load a single color card into the Image panel, and then make the card a foreground object in the Compositing panel of the Effects panel. An almost-black card can have a major effect. The Math Filter described in "3-D" in this Glossary may be used for masking effects, creating RGB color separations for printing, and perhaps artistic touches. This can also be bruted by having an Image Sequence texture-mapped to a card, with a colored Distant light.

Compositing: this was practically the last feature of INSPIRE3D that I oops-discovered when going through the pages of the book (besides using "r" with Bones and interactive texture placement). And one of the most powerful!!! It all revolves around the "Load Sequence" button in the Image Panel. Want to add titles? Add fade-ins and dissolves to finished shots? Convert from one resolution to another: bumping "down" to get an avi that displays real-time, or bumping up low-res animations for a rough cut? Create a letter-boxed or water-marked version of your new short? (Incidentally, I have had trouble saving watermarked Scenes, just FYI.) Prepare your project as an "lws" that can quickly be rendered to widescreen DVD? Get more realistic hair, textures and smoke? Cut rendering time? Import flames or waves for more accurate backgrounds? Once you "Load Sequence" your prior renders, you can apply them to a single-polygon slab (aka card) as a texture map and have that "Object Dissolve," or Envelope a Light to darken the slab, or subdivide the slab and metamorphose the texture map to suit your fancy. "Bink" from www.smacker.com and NewTek's "Vidget" both dismantle "avi's" to their native "tiff's" or assemble "tiff's" into "avi's." "Load Sequence" requires a group of still images consecutively numbered -- one really should not render to "avi," as a rule, if this can be done quickly and easily in a compositing "lws." By integrating "Alpha" renders (and their negatives), although I haven't gotten to this much, you may have texture maps of stunning complexity, blurry smoke, and combinations of elements for a single shot that might be cumbersome to animate all together, or which may be low-priority enough to save until later. Consider adding "anti-aliasing" of a textured card for blur. When one becomes involved in LIGHTWAVE more intensively, many effects tend to be rendered at the fullest, cleanest possible resolution, with effects like fire, mirages, etc. all added in compositing.
In a scene with explosions, the same scene could be rendered twice -- once normally lit, and then lit only by the explosions -- and the two would be combined. Backgrounds and foregrounds rendered separately cut rendering times and allow focus effects. "Compositing" is also a term used in game development. See Game development. One can also use INSPIRE3D for editing using Compositing. See Editing. More realistic or artistic surfaces can be emulated using compositing, where two images are loaded one on top of another, with one of the two renders semi-transparent. So-called "skin shaders" are typically at least three layers deep, with one or two oil bump maps and a skin map. See Surfaces. I have to mention that users of high-end packages like CHALICE, DIGITAL FUSION, AFTER EFFECTS, AURA and others are currently using LIGHTWAVE 7 to composite effects much more aggressively. Every object, shadow, highlight or effect element will be rendered separately, and these image sequence elements will then be integrated using the aforementioned programs. This is more common for a TV commercial, where a product must be made to appear in its best light, and sponsors will pay handsomely for the option of being able to make last-minute changes. It is also useful for live-action compositing. An example mentioned at a Los Angeles LIGHTWAVE Users' Group meeting by Dave Girard: a model of a race car was animated fixed in place, and the information provided by the compositing program from a handheld camera shot (AURA can output LIGHTWAVE "mot" files, I think) was used to create camera animation for the composited shot.
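The two-pass explosion trick amounts to adding the explosion-only lighting pass on top of the normally lit pass. A NumPy sketch, assuming the renders are float arrays scaled to the 0..1 range (the function name and `gain` knob are my own):

```python
import numpy as np

def additive_composite(base, light_pass, gain=1.0):
    """Add an 'explosion-only' lighting pass over the normally lit
    render, clipping to the legal 0..1 range so highlights saturate
    instead of wrapping."""
    return np.clip(base + gain * light_pass, 0.0, 1.0)
```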

Composition: see "style sheet." After one adds all of the elements the writer wants for authenticity and to contribute to or contrast with the content, and the colorist has had their say, what's left? Less than you may expect, and much more. The majority of cinematographers apply the "Point of View" rule referred to by Hitchcock and others. Most shots are "Point of View" shots of characters, or "Over the shoulder" shots, which are the next best thing. Third-person views are restricted to crowded areas like courtrooms, lobbies or traffic. So, the shot is set. Happy lighting for a sad scene? Worth a try; could be disturbing, like when Jill Clayburgh throws up in "An Unmarried Woman." Metaphysically, I seem to notice the sun come out when my spirits lift, but then, you either buy that or not. See "Lighting." The next steps are impact and problem solving. If there are pointy objects like palm fronds (or picture corners), they need to be turned so that they lead back into the action, or they need to be removed. Angular items and repetitive items tend to steal the attention, goes the theory -- oh, and naked folks. In my personal experience, stereoscopic images are most powerful when they have more than a few areas of interest, like the proverbial glass half off the edge of the fireplace mantle. For impact, we assemble all of the cues we know of for a certain effect like frigid cold -- bluish tinge, frost, chattering teeth, foggy breath, tight posture, cold wind, closed eyes, monotonal voice, blowing on hands. If we want to accent the experience further, we may add some depth cues like a pivoting light or object, or camera movement, or foreshortening, or careful lighting. And if we want to push even further, we put a scene with someone rubbing noses with a lover and sipping hot cocoa just before it. That last element is contrast, the theory of "homeostasis," and it is used A LOT. It's properly "storyboarding," but we list it here.
Confinement before expanse, yellow before blue, tiny character and elephant, dark before bright, quiet before loud, flat shot before deep shot, frightened before grateful, willful before intuitive. The audience adjusts for the one, and has the adjustment blown out of whack by the next. This is all optional. Camera movement can be jerky or smooth, especially if a particular show or genre is being parodied.

Config.sys: one thing LIGHTWAVE users find themselves doing is writing short-cut keys to shorten their workload. These are loaded in Modeller using the <F1> key. In Layout, "keyboard mapping" is not available for INSPIRE users, though this is the sort of thing that MAY be able to be handled by copying the "config.sys" file and editing it using MS Notepad or another text editor. Tricking Windows into doing this requires opening the "Settings" tab in the "Start" Toolbar, selecting "Folder Options," removing the connection between INSPIRE and "lws," and then renaming the lws file to be adjusted with a ".txt" suffix. At one time, the "Station X" LIGHTWAVE 6.5 config.sys was posted at www.flay.com , and it included a number of short-cut key assignments.

Converting Images to AVI's: to do this, one doesn't even need "Bink" from www.smacker.com ; just remember to always save one's animations as separate images. Use the "Load Sequence" button in the now-legendary "Images" panel of Layout. "Load Sequence" in INSPIRE3D does not see "avi's," only sequences of images. Go to the Effects panel and then "Compositing," and make the Background your image sequence. NOW render the scene as an "avi." It should sail through fairly quickly, though sequences under 500 frames seem to take less time. Another advantage of this method is that one can render every other frame, and INSPIRE3D will know to shoot every frame twice, or three times, etc. One can show a client one's general idea, and then fill the animation in, one sequence at a time. The best formats for image saving are supposed to be Cineon, RGB, RLA or SGI in LIGHTWAVE 6, because they should record more color gradation, but these images will be unretouchable in most paint programs. If your sequence needs to be retouched, you will need to convert the images to another format, or retouch using LIGHTWAVE. Use the same "Load Sequence" method above, but render as IMAGES instead of an AVI, and select one of the commonly editable but "flatter" formats like PhotoShop, BMP, TGA and TIF. TIF is believed to be the most broadly accessible format for editing at this time, but TGA is still preferred for its smaller size.
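The render-every-other-frame trick is simple arithmetic: each output frame falls back to the nearest earlier rendered frame, which then gets shot twice (or three times, etc.). A sketch (`source_frame` is my name for it, not an INSPIRE function):

```python
def source_frame(output_frame, step=2):
    """When only every 'step'-th frame was rendered, map an output frame
    number to the rendered frame that stands in for it."""
    return (output_frame // step) * step
```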

Counters: creating a counter is kind of fun. It takes maybe an hour to animate the three disks to tick off 1,000 frames using "Repeat," or the metamorphing numbers to do their magic. Your counter animation can then be rendered and put on cards and "composited," or kept solid for "Load from Scene" for somewhat faster loading. The resulting numbers can be nearly transparent, or be object-dissolved to appear intermittently. In theory, this counter has the advantage of being removable (using negative masking) if the rendered scene is useable "as-is."
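For a three-disk counter, the rotation of each numbered disk per frame is plain digit arithmetic, assuming ten digits evenly spaced around each disk (36 degrees apart) and discrete ticks rather than continuous rolling:

```python
def counter_rotations(frame, digits=3, degrees_per_digit=36.0):
    """Rotation in degrees for each disk of a frame counter, ordered
    ones, tens, hundreds, ..."""
    return [((frame // 10 ** i) % 10) * degrees_per_digit for i in range(digits)]
```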

Cross-sectional Modelling: see Reference Art. This can be brute-forced using INSPIRE's "Make Poly" button. If one has Adobe Illustrator point files, it can be done very precisely. If one can find an Adobe Illustrator exporter, it should be worth the effort, since the polygon profiles can also be imported into the Modeller as reference art for other uses. The Objects panel "Illustrator Import" plug-in can also be used to make high-quality logo animations or architectural applications, or superior bump or displacement maps. Cross-sectional modelling is good to have in mind when making a hand or object, just to remember that one can often draw and extrude, or "array" key elements without a lot of point dragging. A good one-minute metanurb hand can be made by drawing, extruding, tripling and turning on metanurb. Usually, however, cross-sectional modelling by bruting all the polygons is kind of slow. Not that slow is wrong for good models; the best facial modeller I have met worked methodically, starting from the mouth, and didn't use cross-sectional techniques -- one point at a time, pressing the "p" button and occasionally stretching some polygons and splitting them with a Boolean "knife" (see Knife). See Reference Art for a description of how to keyframe a Null to follow a profile textured on a card in OpenGL and use the "Path Clone" Modeller function to create points that match. There are many non-destructive ways of getting cross-sectional profiles of objects: building up tape along a card with a hole, painting the object with baby powder and then latex or silicone caulk (and then probably jute/straw/steel wool), pantographs, an L-square rigged for tracing, a mouse with a pencil on an L-square taped to it, wire pressed along the profile of a hard object so hard that it "work-hardens" (like a double-bent coat hanger), aluminum foil "casts" (tricky), ... whatever.

Cubic mapping: Really "3 sided mapping" since the selected texture will be planar mapped three times 90 degrees in X, Y and Z. With amorphous textures, the effect is very nice. I don't know how they do it, but it's great. For artwork like signs, cubic mapping will produce three sides with "flipped" artwork. The fix here is to jump to Modeller and press "Get" and select the opposing faces, give them a different surface name, and then rename the object, return to Layout, replace the object, and copy the surface to those faces, and input negative texture sizes to "flip" the artwork. Resave the surface, "Save all objects" and save the scene.

Curves: This little section in the Tools menu is there, though it might get missed. It can be a redundant toolset if one is using MetaNurbs often or subdivided shapes and Bones with "Save Transformed." Make a polygon using "Pen" or Points (which requires hitting "enter" or the Right Mouse Button RMB), then convert it to a pretty curvy shape using "Make." This is one of those tools like "Extrude" and "MetaNurb" (tab) that requires extra steps; you need to extrude the original jagged polygon by carefully selecting it (pressing the "|" button twice when you miss and starting over) with "Cut" and then "Freeze" the curved version to make it visible in the main window. Great for banisters I suppose.

Cycles: use the "Graph Editor," the requester button that says "Stop" also says "repeat" and "reset." It's the "End behavior" button, and it's very useful in gaming or for making cycled actions. To have a bicyclist stop pedaling after a while and get off the bike, one can have an "Object Dissolve" to a clone of the cyclist, or much more simply, copy and paste the cycle motions up until the cyclist changes his leg movements, since you will only need to adjust a few object variables. This also allows you to adjust the speed of cycling using the "scale" commands. You will probably also need the end and start keyframe positions to have their spline settings "linear."

Cyclorama: where you texture map a large, nicely rendered, anti-aliased scene or image or sky or night to a dome and light it with a 180 degree spot or point light. It has many applications, like getting more realistic reflections or substantially dropping render times. See Metal. One "trick" which I stumbled across is really no trick: if you have a background that is a dome, it is probably going to be spherically texture mapped, and what do you have with spherical texture maps? A seam that has to be hidden. The same technique used for tiled textures is used for making a dome texture map. The image is put into Adobe PhotoDeluxe, or a similar paint program, and the "clone" tool may be used to copy information from opposite edges. This goes faster if one uses the "layers" accessible from the right mouse button, putting a copy underneath or above to clone. LIGHTWAVE has some features that make generating effective spherical maps even more straightforward.
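The clone-tool seam fix can also be scripted. Here is a grayscale NumPy sketch of the standard blend-with-a-rolled-copy trick -- not what PhotoDeluxe does internally, just the same idea automated:

```python
import numpy as np

def make_tileable(img):
    """Blend an image with a half-width rolled copy of itself, weighted
    so the original shows in the middle and the rolled copy at the left
    and right edges. The two edge columns then come from adjacent source
    columns, so the texture wraps without a visible seam."""
    h, w = img.shape[:2]
    rolled = np.roll(img, w // 2, axis=1)
    x = np.arange(w)
    # Triangle weight: 0.0 at both edges, 1.0-ish at the center column.
    weight = 1.0 - np.abs(x - (w - 1) / 2) / ((w - 1) / 2)
    weight = weight.reshape(1, w)
    if img.ndim == 3:
        weight = weight[..., None]
    return weight * img + (1.0 - weight) * rolled
```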

D

Demo Reels: the other day, a friend from Church told me about how, when she was in advertising, the ad agency would put together a campaign for a prospective client with someone's demo reel, and get the work. About a week before, a cable show put out a "call" for pilot proposals, and also requested demo reels as proof the applicant could complete the work. Both definitions of "demo reel" seem to lean toward "short film." Several books have specific recommendations for demo reels, like contributing pictures to contests in order to tell when your work is acceptable to peers. We all know animators; if you want my opinion, mail your reel to Scott Tygett, 12600 Killion Street, Valley Village, CA 91607-1535. One or two books are just about obtaining CG work. I found some nice tips at www.3dcafe.com in their employment section, stuff that you would hope and expect -- like that a funny reel is good, lip-synch is great, etc. I read on another site -- www.vfxpro.com -- that you should watch for things like "anticipation" and "follow-through." I then looked at my "tentative" reel -- some was there, and it made me want to build on it. I would just add that at a few studios your website/demo reel will be viewed with the morning coffee by a group of animators. You may want to do a little research to have a unique offering -- do a car ad for a Stanley Steamer, animate a snatch of poetry, a bonsai tree trying to escape, a Gulliver update, the latest info on the Higgs boson, coffee mugs debating the Internet. Story is very desirable. The demo reel should be over in about three minutes, probably followed by a title credit.
Obvious items: very short color bars or none at the start (like they should adjust their screen for a 3 minute short?), a seven-second maximum start title with your name, catchy music, avoid repeating work, avoid using others' objects or, worse, animation, and include a "shot sheet"; label the video side, front and sleeve with name, address, phone and web page. Keep a couple of tapes in your briefcase or car. I have heard that viewers expect a shot sheet that spells out everything, even if you did everything. If you would like to see a good example of a demo reel, call NewTek at (800) 847-6111 or ask around for a copy of the LIGHTWAVE demo reel, or ask friends for theirs. I have also read that funniness, lip-synch, particles, and covering a lot of different techniques will serve you well. There are many reasons for an INSPIRE user to make a demo reel, even if it may not get one work using LIGHTWAVE. Some public schools have LW7, but may want to see a reel to qualify you first. Other ways to get seen: submitting films to SIGGRAPH's Film Festival in March www.siggraph.org or some of the other festivals.

Depth of field: a plug-in that can be used with INSPIRE may be available. (A "Surf Blur" card may also be used, with artifacting.) A way of getting blur behind a foreground object is to first render the background alone as a sequence of images like "tga"s. This background is then entered using the "Load Sequence" button in the Images Panel, and a special Scene file is used which has a card identical to the 640 x 480 render size that gets planar texture mapped with the sequence. Adjusting "anti-aliasing strength" for the composited card or cards adjusts the blur. A much more involved process is to use motion blur. To get a limited "rack focus" effect: have the Camera targeted at a Null beyond the Image Sequence card; that Null will be rotating with a slight offset around a parent Null 720 degrees per frame, with "Repeat" activated in the Graph Editor. In the Camera panel, anti-aliasing and motion blur are activated with a 100% blur length; the best effect is obtained with a high setting. This adds a variable blur, adjusted via the offset. Expect about thirty seconds per frame at the high setting to blur the entire sequence. See "Spinning light." Save the new sequence as images also, since the result will be composited with the foreground in INSPIRE. With wide offsets, it may be necessary to have a second "pass" with negative or y-axis offsets between the main and rotating Null -- in order to hide the multiple-image break-down; the two passes are joined using Compositing and front projection with transparency. Since the images load and render rapidly, a second pass may be worthwhile, as well as a soft filter. The final image sequence is loaded into the Images Panel and again, at the Background requester of the Compositing tab of the Effects Panel. It is the Hollywood look to have mild f4 blur, but it is obviously worth a second thought, as some animation studios almost never use it.
Another way of emulating blur, where a foreground/background separation is not available, is to use a "spinning light" camera approach (see Spinning light), moving the camera about an axis 720 degrees per frame with a second counter-rotating (-720) Null offset from the center axis, then switching "Repeat" on in the Motion Graph editor for both Nulls (careful not to lose that keyframe). The camera will need to be targeted at a Null as well, or the blur effect will be stronger closer to the camera. Use medium-to-high anti-aliasing and 100% blur length. It tends to fringe and gobbles up rendering time, but hey, you asked. For a smoother effect, put in negatives of x-offsets or y-offsets and composite the two image sequences by having one transparent and front-projected on a card, and the other a Background. Another workaround is to use a spinning offset mirror or lens between the camera and subject; lenses seem to produce better results.
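To see why 720 degrees per frame works, consider where the offset camera sits at sample times within one frame's shutter: two full turns spread the motion-blur samples evenly around a circle, which the renderer smears into a disc. A sketch (function name and sample count are my own, not INSPIRE's internals):

```python
import math

def blur_samples(frame, offset, samples=8, degrees_per_frame=720.0):
    """Camera positions sampled across one frame's shutter when the
    camera orbits a null at 'degrees_per_frame', at radius 'offset'."""
    pts = []
    for i in range(samples):
        t = frame + i / samples  # time within the shutter interval
        a = math.radians(degrees_per_frame * t)
        pts.append((offset * math.cos(a), offset * math.sin(a)))
    return pts
```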

Disk: the INSPIRE "Disk" default may be made more detailed by changing the radial segments in the Object:Options tab. For the vertical segments, use the left and right arrow keys. If you do not want the ball to accidentally stretch, use the CTRL key. see Ball. It is pretty important to get into the habit of doing this, since it helps for things like bones rigging for arms, and neither "knife" emulation nor subdivision are as effective.

Displacement mapping: Displacement mapping can extrapolate relief from images to create waving flags and other effects. These are detailed in the tutorial disk and "all.pdi" file for INSPIRE. Displacement maps can be procedural animations like ripples or "image sequences," producing effects like a flag waving, and these can even be "enveloped" so that water can come to a boil, as pointed out by Dan Ablan of www.danablan.com who has written several excellent books. The map may be an image sequence of an object made specifically for the relief, like a wedge object moving behind a powerboat, which will result in a wake behind the boat when displacement-mapped. Usually, the lazy animator -- me -- uses a fractal "procedural" which is a 3D texture formula and clicks on the "velocity" requester, and then gives the texture very small increments in x, y and z. Voila!! Displacement maps have a lot of applications -- dancing skirts, blowing curtains, wave animation, offbeat face animation, etc. Clip map or transparency maps are sometimes used with these displacement-mapped surfaces, for instance, to keep choppy lake water out of a rowboat. Hair simulation can use displacement mapping, as can flowing robes, etc.
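The "procedural with velocity" trick is just a displacement shape translated over time. Here is a one-axis sine-wave stand-in for the fractal texture (the real procedural is noise, not a sine, but the velocity term works the same way; all names and defaults are illustrative):

```python
import math

def flag_displacement(x, t, amplitude=0.1, wavelength=1.0, velocity=0.5):
    """Z-displacement of a flag point at position x and time t: a fixed
    wave shape slid along x by 'velocity' units per second."""
    return amplitude * math.sin(2 * math.pi * (x - velocity * t) / wavelength)
```

Because the texture only translates, the pattern at (x, t) reappears at (x + velocity, t + 1) -- which is exactly why tiny per-frame increments in x, y and z make the surface appear to flow.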

Dissolves: see Optical Effects.

Dome: see cyclorama.

DVD/HDTV: VHS tape is far more affordable for the animator who wants work seen. Setting INSPIRE to create high resolution images is not complicated, but the resulting image will not be automatically advanced to the next frame. A leading format for HDTV is 1920 x 1080, but there is no consumer means of recording or playing it. DVD widescreen is an NTSC widescreen standard, 720 x 486, or roughly 3:2, but actually, the DVD standard allows for any format from 1.33:1 to 2.35:1 and beyond. The dominant format for the DVD-competitor "VCD" is -- ta-dah! -- 352 x 240, converted to MPEG-1 (VCD) or MPEG-2. There are actually three steps involved in creating a VCD. Although MPEG-1 is supposed to be digital, it still maxes out most machines, so the MPEG is knocked down to as few as 256 colors. That's right; MPEG is practically a 256-color medium, from what I hear. I do not know if this color-flattening can be done before the "avi" is converted to an MPEG; I count those as two discrete steps, though. The third step is that MPEG-1 needs to be "conformed" by Nero/EasyCDCreator5/Premiere/etc. "authoring/burning" software to be playable as a VCD. The "avi's" or "mpg's" we produce apparently need a little more futzing before they can be played in those $200 DVD players that have DVD-R/CD-R/VCD/CD-RW printed on the front of the machine. One is apparently much better off buying DVD/VCD players or Sony or Philips DVD players than working with Panasonic or RCA machines that don't mention "VCD," for playing these MPEG-1 VCD's, according to www.vcdhelper.com . Make your MPEG-1 conversion from "avi" using Div-X or other avi conversion software, format it for VCD burning using "Nero" or other VCD software, burn it, and good luck. As mentioned, I have read that the VCD resolution is in the area of 352 x 240, though I believe the MPEG-1 standard theoretically allows for greater resolution; okay on the TV, but forgettable on the computer monitor.
Another standard, closer to MPEG-2 and the DVD resolution, is called Super VCD or S-VCD: 480 x 480 (NTSC) with a 1.5:1 squeezed pixel, details eluding me. But it seems to be supported by fewer machines at this time. Too bad, because it's DVD-image quality for the masses. "VCD" at 352 x 240 may sound pretty rinky-dink, but when I read about that being the format, I did not own a video card. Once you have rendered some 320 x 240 motion-blurred Scenes and seen them transferred to VHS, you just smile that the rendering will take one fourth as long. A Bink "exe" may be the low-cost S-VCD equivalent for budding entrepreneur artists at this time, though it can only be played on computers. ("Bink" -- the VCD you play on your computer! How's that for a slogan?) According to the folks at www.smacker.com , the carefully-filtered 256-color "Bink" compressed file looks better, with more detail, than the acme of the MPEG family, MPEG-2, which is used for S-VCD. It has the advantage that it is usually playable with a mouse-click, same as "mpg's" (MPEG-4) and "avi's" that have relatively small images. I have made 720 x 486 Bink "exe's" frame-by-frame, as well as 640 x 360 (HDTV 16:9 aspect ratio) "exe's," and enjoyed the result. I have noticed that the best performance on a non-DVD machine comes when one has at least 100 MB free on one's hard drive. MPEG-4 v.3 is the latest version of MPEG-4 and apparently "streams," but MPEG-4 looks awfully gloppy and blurred beside a "Bink" or even a VCD file. MPEG-2 is supposedly slightly superior to MPEG-1 and convertible to the fairly persnickety DVD using "DVD-authoring" software. When the MPEG-2 file is compressed further and has the other "menu" DVD files added to it, the result is used to make a "glass master" for pressing DVD's. (Emulating the "LOOK" of widescreen DVD in INSPIRE with automatic frame advance means entering 640 x 432 as the "Custom Size;" for "16 by 9," the "Custom Size" becomes 640 x 360. 
Some animators are "letter-boxing" their animations by rendering to ".tga" files, and then re-rendering the Scene as an "Image Sequence" front-projected on a 16-by-9 card with a black background. The professional formats for 16:9 HDTV are DVCPro HD and HDCAM, according to Jon Carroll. See Video Tape. See Optical Effects.) At this point, I am finding the Bink "exe" format meets most of my workprint needs, though it only fits one "reel" (approx. 20 minutes at 640 x 480) to a CD. Technical information about DVD may also be found at www.dvddemystified.com or www.cdrinfo.com : CD's hold 700 Meg while DVD's hold about 4 Gig; both R-media use organic dyes that last 50-300 years; and DVD's use a higher-frequency laser with smaller pits, very close to the frequency used for CD-RW's. The $3,000 price range for "pressing" 1,000 DVD's endorses what one intuitively knows before wandering into this morass -- let's worry about that when we get to it. Since DVD-R is intended for DVD players, the correct software is provided with the recording unit, and I have read that DVD-R media is available from Apple for approximately $10 per disk; but these DVD's are like CD-R's in one respect: a quick three-minute suntan destroys their data. The great advantages of DVD-R are probably its ability to take the place of DLT tape for mastering "replicated" DVD's and to provide workprints for screenings. DVD-R and SVCD could lead to video stores making or renting SVCD "protection" copies (50-cent charge for not rewinding or accidentally erasing the SVCD), or to "subscription clubs" of family-oriented or "underground" videos, until true streaming arrives. We shall see -- the stone the builders rejected...
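
The "Custom Size" figures above are just aspect-ratio arithmetic: divide the render width by the target frame proportions and round. A quick Python check (the 640 width is the automatic-frame-advance limit mentioned above):

```python
def letterbox_height(width, aspect):
    """Render height that reproduces a wider frame shape at a given width."""
    return round(width / aspect)

# the 720 x 486 NTSC frame shape at 640 wide:
assert letterbox_height(640, 720 / 486) == 432
# the 16:9 HDTV frame shape at 640 wide:
assert letterbox_height(640, 16 / 9) == 360
```

This ignores non-square-pixel subtleties; it only matches the frame's proportions.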

Dynamics: some other programs like LIGHTWAVE, especially particle programs, may have buttons that emulate wind, gravity, rain, and a host of other motion variables. Traditionally, animators who emulated these effects were honored for planning and executing them. Ho-hum -- now they're a button. Some may be bruted with INSPIRE. For instance, a hillside may be made "rigid" by covering it with twenty Effectors that follow the contour of the hill (or one hill the shape of one Effector), and then the "Effectee" is parented to a Null that accelerates straight down according to gravity. It will not be mathematically accurate the way LIGHTWAVE and MAYA dynamics systems are, but with a little rotation and some keyframing, it will serve. If one gets very clever, dynamic particle systems may be emulated using funnels made of Effectors and off-axis rotating Null parents of "effectees" -- for "bouncing," among other things. Who knew? See collision detection. These motion "Effectors" are not to be confused with Displacement "Effectors," which are entered differently. Two identical arrays of Displacement Effectors might be able to produce cloth effects, for instance, if one hundred Effectors in the shape of a box had a subdivided sheet collide with them, and then another box sandwiched the sheet between the two box shapes. To accomplish this with motion Effectors, all of the points of the shirt would need to be separate polygons parented to one another somehow. This is not "dynamics," though. The example of the shirt hanging or following the motion of the wearer -- that is dynamics. The LIGHTWAVE 7 Demo has some very advanced dynamics -- have a look at it. A shirt that can be worn can be worn upside-down, emulating the effect of a jiggling belly. Dynamics is having a major effect on character animation. "Brute" objects like chains inside a creature can help emulate muscular or soft-tissue shape distortions. see Smoke see Fire see Particles. 
Soft-dynamics plug-ins used with LIGHTWAVE may in some cases work with INSPIRE; "Motion Designer" is a soft-dynamics plug-in, while "Polk" and "Impact" are hard-dynamics (dominoes falling) plug-ins.
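
The gravity-parented Null described above can be keyframed by hand from the free-fall formula y = y0 - ½gt². A sketch, assuming 30 fps and meter units (both assumptions for illustration, not INSPIRE settings):

```python
def gravity_keyframes(y0, frames, fps=30.0, g=9.81):
    """(frame, y) pairs for a falling Null: y = y0 - 0.5 * g * t^2."""
    keys = []
    for f in range(frames + 1):
        t = f / fps
        keys.append((f, y0 - 0.5 * g * t * t))
    return keys

keys = gravity_keyframes(y0=2.0, frames=30)
# after one second (frame 30) the Null has fallen 0.5 * 9.81 = 4.905 m
```

Pasting a handful of these values as keyframes, with linear interpolation between them, gives a believable accelerating fall without a dynamics engine.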

E

Editing: a great program for linking "avi's" is "Bink" from www.smacker.com . All one needs to do is rename the different "avi" files into a sequence -- Shot1.avi, Shot2.avi, and so on -- and "convert" the file from avi to avi. Yep, avi to avi. It will automatically sense the sequence and ask if you want to join the files. If one starts by rendering to "TIF" files, omitting the unwanted end frames (or just the last desired frame, interrupting the sequence) will create that part of the edit. Removing beginning frames from an image sequence can be achieved in INSPIRE by entering the desired start frame at the "Offset" requester in the Images panel. The new "avi" will be correctly numbered. Presto! This "offsetting" can do all of the work of editing together "texture" image sequences loaded one upon another. And there you have it. I have read (somewhere) that one's image sequences need to be 300 frames or shorter, so one needs to re-render them with an "offset" in the Image Sequence requester in order to "re-assemble" them under a different name. The 300 "limit" may only apply to textures, though, as I have run much longer avi "blow up" utility files through the backdrop compositor. Moving image-sequence-textured cards or dissolving them out of frame is probably prudent, since they may hang in position, frozen on their last frame. Another handy technique: one can add a frame counter to a sequence that was judged too good to sully with one. One can even "stretch" a sequence by having a second card with a looped/offset sequence, so that cutting can "overlap" slightly. One can have dozens of front-projected cards, for a kind of INSPIRE editing bay -- be sure to save this Scene File for future use! The average time to render a frame is comparable with professional editing equipment -- about a second per frame. 
One can even add "wipes" and other effects by sandwiching the black/white wipe animation with the front projected texture as an additional texture. See Optical Effects. I was somewhat surprised to learn that at some studios LIGHTWAVE artists typically do all of their own post-production compositing, even though programs like "Digital Fusion" may be available.
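
The Bink renaming step above can be scripted. A hedged sketch -- the shot filenames here are hypothetical, and the function only plans the renames (you would pass each pair to os.rename to perform them):

```python
def sequence_plan(filenames, base="Shot"):
    """Map shot files, in the order given, onto Shot1.avi, Shot2.avi, ...
    so Bink's avi-to-avi convert will detect them as one sequence."""
    return [(name, f"{base}{i}.avi")
            for i, name in enumerate(filenames, start=1)]

# hypothetical shot files, listed in cutting order:
plan = sequence_plan(["opening.avi", "chase.avi", "ending.avi"])
# -> [('opening.avi', 'Shot1.avi'), ('chase.avi', 'Shot2.avi'),
#     ('ending.avi', 'Shot3.avi')]
```

Planning first and renaming second makes it easy to inspect the cut order before touching any files.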

Educational Software: INSPIRE is not "educational" software. Educational software comes in a variety of formats; often it "expires" by becoming hampered after a year or so, and/or may not be used for "commercial" purposes. Often, a person will be required to send some proof that they are a full-time student, necessitating at least registering as a full-time student at a local community college. In rare cases, as with earlier versions of LIGHTWAVE, there will be an educational version that will NOT expire. Sometimes the maker will even offer commercial upgrading, so that a graduating student can convert for less than the cost of a full license. There IS commercial-upgrade pricing available for INSPIRE users; purchasers buy directly through Newtek customer service, www.newtek.com . Currently, the LIGHTWAVE 7 demo CD makes educational software somewhat unnecessary.

Extra Camera: This is also mentioned in the book: INSPIRE can emulate another camera's point of view (for checking posing, etc.) if a LIGHT is created with 0% intensity and the LIGHT VIEW button is activated. In practice, the animation of most characters is best done by separating them into their own scene file, and then using the "Load from Scene" button in the Objects panel when most of the animation is ready, since it renders/refreshes/animates faster. Use the same techniques as for aiming the camera: place a Null in an optimum position, with the Light targeted at it. I use these extra "cameras" mostly for eyes, but also to check penetrating objects.

Extrude: great tool, but you may need to know that it is one of the more counter-intuitive tools at first, because it implies extra steps of merging, deleting and merging, and is usually applied to one polygon at a time. It leaves polygons inside of an object, which creates very strange-looking MetaNurbs when the "Tab" button is pressed. "Smooth Shift" (once a "0" value is entered) would seem to be preferable, but there are circumstances where one wants to litter the inside of an object with extra polygons. A vase shape, for instance, could have the proportions of its neck and base changed by adjusting the size of the interior polygons. Usually, when a non-LIGHTWAVE modeller talks about extruding, they mean "Smooth Shifting." The interior polygons need to be removed using "cut," and then the whole object is selected and the "Merge" button is pushed, and then "Tab" and "Freeze" (and triple). Merge sees where two polygons seem to share points and, if they are duplicated, eliminates one set. In class instruction, a box is made from scratch, and until "merge" is pressed, the polygons are separate objects; they then become a single object. Anyway, that's the lowdown on "extrude."
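
What "Merge" does to duplicated points can be sketched as coordinate deduplication: any two points within a tolerance of the same position collapse to one. An illustrative Python version (not NewTek's actual algorithm):

```python
def merge_points(points, tolerance=1e-4):
    """Collapse points that share a position, the way Modeller's 'Merge'
    removes duplicated points where polygons meet. Returns the unique
    points plus a remap table from old index to new index."""
    seen = {}
    unique = []
    remap = []
    for p in points:
        key = tuple(round(c / tolerance) for c in p)
        if key not in seen:
            seen[key] = len(unique)
            unique.append(p)
        remap.append(seen[key])
    return unique, remap

corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
box_points = corners * 3   # 24 points, as if several quads each carried copies
unique, remap = merge_points(box_points)
# -> 8 unique points, like a box whose six separate quads became one object
```

The remap table is what lets the polygons be rewritten to reference the surviving points.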

Eyes: Parent a light to the face, give it 0% intensity, and aim it to get both eyes. Use "Light View" to look at just the eyes/face, and animate them. Sometimes when you move a character around a Scene, you discover opportunities for eye "business," so this can come in handy. If the head will use Bones animation, improvisation suffers, because methods of keeping the eyes in the head when the neck cranes are found in LIGHTWAVE, but not INSPIRE. LIGHTWAVE apparently gets used for "pre-viz" work, so go ahead and ignore Bones while you get some balljoint element poses. To get parented eyes to follow along with Bones motions, they will need to be "Save Transformed," and then, in a separate Scene File, the head will be switched with the eyes after the Bones animation is "locked down." That Scene file -- with the head Bones attached to the morphing/boned eyes -- is then loaded back into the animation of the head using "Load from Scene" in the Objects panel. This is the obvious way to make eyes follow along with facial/head/neck Bones distortions. Fortunately, there are three alternatives: texture map the eyes and animate the texture using a reference Null; use Bones that affect more than one object at once, namely the Object plug-ins like "Bend" and "Effectors," which I cannot yet recommend; or last, morph the eyes and bone the rest. This last one means that for a donkey, there will be a dozen donkeys with the mouth identical, but a dozen different eyes. At this point, you definitely think of moving to LIGHTWAVE 7. The audience is looking at the face/eyes and the hands, according to David Silverman and others, so perhaps one should put some attention there.

F

F1: the Function Key <F1> pops up a handy menu of the shortcut keys in Layout and Modeller.

Fabric: see cloth.

Faces: My feeling on faces is that the kind of style Will Vinton or Nick Park provides is delightful, and if it exploits characteristics of "bones" and IK like the characters in "Toy Story," so much the better. Have a nose move from one side of the face to the other, as in the first season of "The Simpsons," and enjoy. If you want to go for the mostly realistic look, that's fine too. I am still grappling with convincing faces, though it looks like REFERENCE ART plays a big role. The PUZZLE: metamorphosis is gorgeous but still requires identical point numbering to work. So, bones and point-dragging are used, and the cartoony look is often left alone. Object lists or Object Dissolve will give a look more in keeping with a puppet style, or one can do Boolean operations and be fairly careful to use the same objects, possibly painting intersected objects with transparent edgeless surfaces, and throwing out the metamorphic objects that don't work... (or trying to fix them with "BG Conform"). Complicated objects with intersecting planes, like mouths, get broken up into as many as a half-dozen different morphing objects, all loaded into the same scene file, with one morph envelope saved and loaded among them. There is a trend to use "bones" for eyebrows and even some cheek and chin movement, but many animation studios continue to use dozens of "face shapes." Getting realistic face-shapes seems to take a little work. At this point, to work within INSPIRE's limits, I am inclined to design faces with relatively high eyebrows, and then bring them down and protrude the brow using bones "rested" near the higher position and keyframed down. Learning to do a "16-point weld" of an eye object to a subdivided sphere square has made my results much stronger. See "eyes." 
INSPIRE starts to get a little exasperating for improvising gestural animation, so it is probably worthwhile to use eyebrows, eyelids, eyes and such parented together as balljoint-animated objects, at least partially, as an alternative to or in preparation for later rigging. Otherwise, there will have to be at least one duplicated set of bones for the eyes/tongue/teeth/etc., and improvisation goes out the window.

Falloff: Well, three applications of this are worth knowing -- one for lighting, and two for textures. The one for lighting is too important to try to encapsulate here. See lighting. Suffice it to say, Hollywood lights are close to the actors for more reasons than just slow film. A common use of "falloff" in the Texture panel is to form a join between two painted polygons, assuming one is not using a better system like cylindrical or spherical mapping, for whatever reason. This method may also be applied to transparency-mapping one section of an object, so that somebody's feet standing in a fog bank might become invisible for effect. Lastly, when projection- or cubic-mapping labels onto boxes, etc., assigning a falloff will isolate the texture to one side, preventing it from appearing reversed on the opposite side of the object. The last tip courtesy of Dave Girard.

"Fill hole:" this function exists in other programs, just as some programs have a "split polygon" tool, and it is fairly easily emulated. This is discussed in the "Boolean" entry as well. Select the points around the "hole," copy them, paste them into another layer, "select" them clockwise or counter-clockwise, and use the "Make Polygon" button. Copy this polygon into the original object, and "merge." Another "workaround" is to smooth-shift (0 offset, then moved) the surrounding polygons, Boolean-"Add" a card polygon, then trim away the excess.
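
The clockwise-or-counter-clockwise step above can be checked numerically with the shoelace formula: a positive signed area means the point loop runs counter-clockwise, so a wrongly-wound loop just gets reversed before "Make Polygon." A sketch, assuming the hole's boundary has been projected to planar (x, y) points:

```python
def signed_area(loop):
    """Shoelace formula on an (x, y) point loop: positive means
    counter-clockwise winding, negative means clockwise."""
    a = 0.0
    n = len(loop)
    for i in range(n):
        x1, y1 = loop[i]
        x2, y2 = loop[(i + 1) % n]
        a += x1 * y2 - x2 * y1
    return a / 2.0

def wind_ccw(loop):
    """Return the loop in counter-clockwise order, ready for 'Make Polygon'."""
    return loop if signed_area(loop) > 0 else list(reversed(loop))
```

The winding determines which way the new polygon's normal faces, which is why the selection order matters in Modeller.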

Film quality: see Bit rate. I know of one animator who advises "going to film" in order to have a better impact at the "real" festivals. His impression was that "video" festivals had a poorer sample of films, even though most festivals do "pre-screening" using VHS tapes. I think he is also a little curious. 35mm will look better, and 35mm anamorphic will look even better than that (and dual 70mm 3D...). Is there a difference between planning one's film for a small versus large screen? This was SUCH a big issue when video was introduced. It also was a topic decades before when CinemaScope arrived. There is a theoretical angle of view that is most comfortable; as you sit at your monitor/laptop, you are probably AT that angle. If you hold a protractor to your face and measure it, you may find that angle: where calendars are placed near the refrigerator door, where paintings cover the wall opposite the TV area, the Colosseum, etc. Good for TV, but as a medium it conveys details pretty badly. Daytime drama deliberately breaks the "point of view" rule (see camera). Should one angle the camera up in most film shots because the audience in a theater will most likely be looking up most of the time? When your film is screened once for free at the lab where you have it processed, yes, you will be looking up at it. And you may notice a slight vertical bias in older movies, but I wouldn't go farther than that. Some years ago, IMAX posited two hypotheses: large image shows have a spiritual "belief" quality to them, and for 3D, having the edges of the IMAX 3D screen at the periphery helped the illusion. Big pictures have to be scanned, they are not at the ideal angle of view, and some movies seem to exploit this by playing with the ideal angle of view, having it take up one part of the screen or another, playing hockey with the attention. On the small screen, fifteen feet become fifteen inches. 
The truth is that there are two different movies there, and you can either compromise or create two different movies -- which, with character animation, is a viable option. But festivals still preview on VHS.

Fire: see Waves. According to the tutorials I have seen, a single card with a fractal procedural texture (not an image) moving up and through the card should create a convincing fire effect. The displacement-mapped "cave fire" scene file in INSPIRE's inventory also has a lot to offer: rippling and scaling and overlapping translucency, with three light sources and some luminous object maps. Richard Morton of www.iqdigital.com advises "compositing" the effects in 2D programs, possibly with 3D matte objects. Some particle-system programs like MAYA seem to map a procedural fire image to a bubble that moves, diminishes in size, and dissolves out. Their realistic fire, therefore, comes as the result of hundreds of bubbles of fire. Remarkably few fire cycles or images are available on tutorial sites, which leads me to think that "procedural" fractal-based fire is the rule rather than the exception. On Dan Rowsby's site www.rowsby.com , however, he says that cards with explosions are the rule. The INSPIRE cave fire object also includes some 80 objects just for spark effects. Parent these to a Null early on, in order to be able to transfer the file into other scenes. See inventory.

Flipping textures: enter negative numbers for Texture Size.

Fog: I think this is covered in the book, but if you want a spotlight effect, modelling a cone solid and making it luminous or transparent is supposed to be fairly compelling. See Volumetric Lighting. Match the angle of the spotlight to the shape of the cone for greater effect, and add "falloff" for even more. Be sure to activate the Show Fog button in the Options panel. I have seen some terrific "smoke" done by INSPIRE3D users. See "Smoke." A projected image sequence can create a very realistic volumetric effect, which can be enhanced by having competing image sequences planar- and conically-projected. Dan Ablan notes that since lights are z-axis oriented when installed, they maintain this orientation. I seem to have had some good results using two large mirrors with the fog setting, to get fog in directions other than perpendicular to the camera axis, BUT, as you might guess, the fog "bank" was curved. If a fog "bank" is needed, it can be emulated by resorting to a "Spinning Light"-type trick: a screw-shaped ("slinky") toroid or other shape (with Repeat activated, if possible) that spins or otherwise moves rapidly, so that solid and semi-solid edges result. The effect breaks down and requires some attention -- see Spinning Light -- but it can be less artifacty than several transparent shells one within another. The more I consider this problem, the more I am inclined to resort to compositing multiple fog levels. A "bank" that hangs around the knees could be created using a "matte" for a less foggy image sequence above the knees. See Volumetrics, and seek out tutorials on this. I saw one very good tutorial recently where the "bank" was both procedurally (3D fractal) mapped and motion-blurred.

Folders: at some point, one has to decide whether to keep projects complete in one folder, or separated according to Scene Files/Envelopes, Objects and Images/Surfaces. The Project Folder approach has the advantage that one can use a CD-R to bring one's files to a render farm or a friend. Some combination of the approaches is probably the best solution: initially saving projects and their recycled components separately, and then copying them, using Windows briefcases, to master directories. LIGHTWAVE animator Jon Carroll recommends some file utilities that can do this work much more efficiently.

Frame-by-frame: if one is willing to wait an hour for a render of a single important frame, then one is probably able to do brute-force "Photoshopping" for certain effects taking less than a minute. Leave your options open.

Fresnel: I have heard this term applied to the peculiarly waxy surface of water, which at certain camera angles appears completely transparent. This phenomenon also occurs with other polarizing surfaces like sand or concrete or rock or glass, so it gets a little attention. A long-cut "brute" emulation might be to create a sort of venetian-blind surface made up of many (hundreds of) blades and fit this to the surfaces being "fresnelled." The goal is sometimes the dimensional look of angular grains in brick, which can be emulated in other ways, such as a second pass with additional lights and a different specular texture map loaded one over the other as a front-projection Compositing file. Or by adding geometry. A fresnel shader was at one time a free offering from www.shaders.org and may still be available.
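
For the record, the Fresnel effect is commonly approximated in shaders with Schlick's formula: reflectance rises from a small base value when viewing straight on to nearly 100% at grazing angles, which is exactly why water looks transparent from directly above and mirror-like toward the horizon. A minimal sketch (the 0.02 base reflectance for water is a typical figure, not an INSPIRE setting):

```python
def schlick(cos_theta, r0=0.02):
    """Schlick's approximation of Fresnel reflectance. r0 is the
    reflectance at normal incidence (~0.02 for water); cos_theta is the
    dot product of the view direction and the surface normal."""
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# looking straight down: almost all light transmitted (transparent water)
# grazing angle: almost all light reflected (mirror-like water)
```

Driving a transparency envelope or gradient by something like this curve is what a dedicated fresnel shader does.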

Front projection: see Tracking. There are two complementary methods of playing with front projection. One is to either "squash" an image or put black above and below it to make it square-shaped, since the light "Image Projection" requester converts all images to squares. In "Photo Deluxe," all one needs to do is change the "canvas" size to have the same number for height as width. Then, load the image into the "Lights" panel "Projection Image," and position the spotlight to have the identical position as the camera -- using a separate "Light View" with another light may help. The other is the REAL front projection utility, which is a texture option alongside the "planar" and "cubic" options. The light-projection method has one advantage, of course: one can pan and zoom the camera around the scene and plant a few lights, though the effect may not be as graceful. Yee-haw!
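
The "make it square" step is only arithmetic: the canvas side is the larger of the two dimensions, and the offsets center the original frame with black filling the rest. A sketch of the math any paint program performs when you change the canvas size:

```python
def square_canvas(width, height):
    """Square canvas side length and the (x, y) offset that centers a
    width x height image on it, with black padding the remainder."""
    side = max(width, height)
    return side, ((side - width) // 2, (side - height) // 2)

side, (ox, oy) = square_canvas(720, 486)
# -> side 720, offset (0, 117): 117 black rows above and 117 below
```

The same numbers tell you where the image will land when the projection light stretches the square back out.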

Fur: The leading plug-in programs are Worley's "Sasquatch" and "Shave" from Joe Alter. Lee Stranahan of www.learninglightwave.com recommends "Shave" for animation needing hair dynamics, but some of these programs are not INSPIRE-compatible. The short answer on hair: use a photo-real texture map and keep the character moving. Some of the best hair has been "clip-mapped" onto a curved card using a trick described at www.menithings.com . Cartoony grass looks incredible with a program like "Sasquatch." I am not a photo-realist, but hair techniques can be amusing, and good hair and "make-up" sure can sell a character. One of the more important things to realize about hair is that just as ocean waves can be displacement-mapped, individual hairs and waves of hair can also be displacement-mapped, with "velocity" imitating wind. Some really great hair probably comes from shooting a roll of film of a wig, sitting at the paint box, and getting a decent spherical map of some actual hair. When we LOOK at hair, we see that many women wear their hair up, so that a fairly simple surface like a bump map and a hair-texture specular map may convey much of the hair object. I have only done a couple of experiments with overlapping spheres of specular maps for differing depths of hair. Having onion-skin objects with specular or transparency positive-negative maps is dicey depending on the thickness mapped, but the hairs are more dimensional. Another trick is to have three or more scaled-up heads, and "BG Conform" a group of subdivided lines, one layer of points at a time. Hair feathering seems to apply to Sassoon short styles as well as long tresses. It has to be there. Fortunately, it CAN be bump-mapped to some degree. INSPIRE has about a dozen "workarounds" for doing hair. For instance, there is a tool in Multiply that can reproduce a hair over a sphere or plane -- Array. 
One may use "Points to poly" to create single points and then extrude them; copy, paste and vortex or jitter them, and surface them. Breaking hair into a number of objects and using a woody texture for most of it is something I have done myself and seen on TV. In the TV version, a few hairs at a "part" were animated objects; this is also useful for animals. The "Reference Object" method of interactive placement can really come in handy for this. See Texture Mapping. One can also use the "Unwrap" plug-in mentioned elsewhere here, and add polygonal markers using "Pen" and "Make Poly." Another characteristic of hair is that it seems to capture light from many angles, reflecting light with its oily sheen. Rather than reflection-map, however, I would be inclined to add additional lights and rely on a specularity map. A "Bump Map" of hair can have a similar effect on highlights; a bump map combined with a matching texture seems stronger. Hairs can always be modelled to one degree or another -- point extrusions, double-sided polygons or more. According to my understanding of some presentations at SIGGRAPH, hair is increasingly being accomplished using hair geometry: actual polygons or particles. Dander or downy fur, or fur in profile, is where I tend to throw my hands up. One can trust the context of the shot (a teddy bear head) and create a Boolean-subtracted semi-transparent shell scaled larger than the main surface -- see "Smoke" -- and trust the audience to see fibers where there may be few or none. A variation on this that I've tried is to create a transparent duplicate of the object and displace it behind the original (and/or z-axis squash it), with the transparent surface then "SurfBlur"red in the Image and Surface panels. The blur tends to bleed forward somewhat, resulting in blurred object edges but sharp interiors. Since it is unique to one surface, it can yield pleasing results. Possibly worth compositing a non-blurred object over; see Compositing, see surfacing. 
If the effect is important, one can composite image sequences projected on a card and anti-aliased to increase blur, for just the object and its surface against a black background. Foreground elements may need to be specially matted or composited to prevent down from crossing an eyelid. INSPIRE's texture interface is extremely rational, and only a decision to move up to a more powerful program should cause one to search elsewhere for desired results. As I pet my sister's dog, I am struck by the similarity of his fur to a good bump map. It should also be mentioned that a majority of hair techniques involve adding chains of bones to the hair in bundles (as in "Shrek") or sheets, and animating them for whipping/follow-through/etc. There are also plug-ins like "Sasquatch" from www.worley.com and "Shave and a Haircut" from www.joealter.com .

Continue to Glossary G - O

http://www.inspirejoy.50megs.com/

c. Scott Lee Tygett 2001