Sunday, June 7, 2009

1.29 Diagnostic modes

 
mental ray supports a number of diagnostic modes that help visualize and optimize the rendering process. They modify the output image to include grid lines or dot patterns that indicate coordinate spaces, sampling densities, or photon densities. These images allow simple detection of insufficient or excessive sampling densities, and help to tune parameters such as the number of photons or the sampling and contrast limits.

Grid Mode: This mode renders a grid on top of all objects in the scene, in object, camera, or world space. It is useful to get an idea of the scene scale and to enable rough estimates of distances and areas.
 
Photon Density Mode: This mode shows a false color rendering of the photon density on all materials. This is useful when tuning the number of photons to trace in a scene, and to select the optimum accuracy settings for estimation of global illumination or caustics.
 
It also works well in combination with the Grid Mode described above.

Samples Mode: This mode shows how spatial supersamples were placed in the rendered image, by producing a grayscale image signifying sample density. This is useful when tuning the level and the contrast threshold for spatial supersampling.
 
Diagnostic modes are enabled with the -diagnostic option on the command line, or the diagnostic statement in the options block in the scene description file.
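As a hedged sketch of what these statements might look like in the options block (the keyword spellings and arguments are assumptions and may differ between mental ray versions):

diagnostic grid object 1.0        # grid lines in object space, spaced 1 unit apart
diagnostic photon density 100.0   # false-color photon density, top of the color scale
diagnostic samples                # grayscale image of the sample density

On the command line, the equivalent settings would be passed with the -diagnostic option.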

1.28 Global Illumination in Participating Media

 
Global illumination in volumes can be used to simulate, for example:
 Color bleeding from a colored wall onto gray smoke.
 Multiple light scattering in clouds or other participating media.
 
Global illumination in participating media is computed much the same way as global illumination on surfaces, except that volume shaders and volume photon shaders are needed. To change the number of photons used to compute the local intensity of global illumination in volumes, specify a photonvol accuracy (and optionally a radius) in the options (similar to volume caustics).
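For example (the numbers are only illustrative):

photonvol accuracy 100 1.0   # use up to 100 photons within a radius of 1 scene unit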

1.27 Volume Caustics

 
Volume caustics are caused by light that has been specularly reflected or refracted by one or more surfaces and is then scattered by a participating medium in a volume. Examples are:
 Sunlight refracted by a wavy water surface and then scattered by little silt particles in the water.
 Car fog lamps: light emitted by a bulb filament, reflected by a parabolic reflector, and scattered by fog.
 
In order to create volume caustics, the same light sources, material shaders, photon shaders, and caustic tags as for caustics are needed. But in addition, volume shaders and volume photon shaders are needed. For example:
material "volsurf" opaque # material for surfaces of volume
"transmat" ()
shadow "transmat" ()
photon "transmat_photon" ()
volume "parti_volume" (
"scatter" 0.05 0.05 0.05,
"extinction" 0.05,
"lights" ["arealight-i"]
)
photonvol "parti_volume_photon" (
"scatter" 0.05 0.05 0.05,
"extinction" 0.05
)
end material
 
 
Photons that get scattered multiple times in the volume are stored in a volume photon map. During rendering, volume shaders can call the function mi_compute_volume_irradiance to get irradiance from the photons stored in the volume photon map.
 
In order to fine-tune the volume caustic, it is possible to change the number of photons that is used to compute the indirect light in the volume caustic. This is done with a photonvol accuracy statement in the options. The default is 30 photons and a radius that depends on the scene extent.

1.26 Final Gathering

 
For diffuse scenes, final gathering can improve the quality of the global illumination solution. Without final gathering, the global illumination on a diffuse surface is computed by estimating the photon density (and energy) near that point.
 
With final gathering, many new rays are sent out to sample the hemisphere above the point to determine the incident illumination. Some of these rays hit diffuse surfaces, and the global illumination at those points is then computed by the material shaders at these points, using illumination from the globillum photon map (if available) and other material properties.
 
Other rays hit specular surfaces and do not contribute to the final gather color (since that type of light transport is a secondary caustic).
 
Tracing many rays (each with a photon map lookup) is very time-consuming, so it is only done when necessary; in most cases, interpolation and extrapolation from previous nearby final gathers is sufficient. Final gathering is useful in scenes with slow variation in the indirect illumination, for example purely diffuse scenes.
 
For such scenes, final gathering eliminates photon map artifacts such as low-frequency noise and dark corners. With final gathering, fewer photons are needed in the globillum photon map and a lower globillum accuracy is sufficient, since each final gather averages over many values of indirect illumination. Final gathering is off by default, but can be turned on in the options. To change the number of rays shot in each final gather (and optionally the maximum distance at which a final gathering result can be used for interpolation and the minimum distance at which it must be used), specify a finalgather accuracy in the options.
 
 For example,
finalgather accuracy 1000 1.5 0.25
 
 
Increasing the number of rays reduces noise in scenes with complex illumination and geometry; the default number of rays is 1000. The default maximum distance depends on the scene extent; decreasing it will reduce noise but increase render time. The default minimum distance is 10% of the maximum distance.
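Putting this together, the relevant statements in the options block might read as follows (assuming the switch is spelled finalgather on; the values are only illustrative):

finalgather on
finalgather accuracy 1000 1.5 0.25   # rays per final gather, max distance, min distance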

1.25.3 Fine-tuning Global Illumination

 
 To change the number of photons used to compute the local intensity of global illumination, specify a globillum accuracy (and optionally a maximum radius) in the options. For example,
 
globillum accuracy 300 2.0
 
 
The default number is 500; larger numbers make the global illumination smoother but increase render time. The default radius depends on the scene extent.

1.25.2 Objects

 
By default, all objects can participate in global illumination computations. This is necessary for simulation of real global illumination. However, sometimes one might just be interested in color bleeding from one object to another, and the rest of the scene does not need to participate in the global illumination simulation. To simulate global illumination more efficiently in such cases, objects can be flagged such that the photons are only emitted towards certain objects and stored only on selected objects.
 
Objects are then divided into globillum-casting (globillum 1 flag) and globillum-receiving (globillum 2 flag), or both (globillum 3 flag), or neither (globillum 0 flag). For example, color bleeding from a red diffuse table onto a diffuse white wall requires a globillum-casting table and a globillum-receiving wall. Objects can also be flagged globillum off, which means that globillum photons will not hit them at all (the objects will be "invisible" to globillum photons), or flagged globillum on which is the same as globillum 3.
 
The globillum mode is an object attribute. Photons are emitted only in the direction of globillum-casting objects, and only stored on globillum-receiving objects. To use this optimization requires that the default object globillum flag (specified in the options) is set to something different than 3 (which is the default value, enabling all objects to cast and receive global illumination).
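By analogy with the caustics example later in this text, the relevant statements might look like the following sketch (object names and tags are hypothetical, and the object definitions are abbreviated):

# in the options: enable global illumination, and let objects neither
# cast nor receive it unless flagged explicitly
globillum on
globillum 0

# the table casts indirect light, the wall receives it
object "table" globillum 1 visible trace tag 21
object "wall" globillum 2 visible trace tag 22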

1.25.1 Light Sources

 
Each light source that should emit global illumination should have an energy statement (just as for caustics). Each light source can also optionally have a globillum photons statement to specify how many photons should be emitted (similar to caustic photons for caustics). The default value is 100000 globillum photons. For example:
 
light "globillum-light" "physical_light" (
"color" 700.0 700.0 700.0
)
origin 20.0 30.0 -40.0
energy 700 700 700
globillum photons 100000
end light
 
 

1.25 Global Illumination

 
Global illumination is the simulation of all light interreflection effects in a scene (except caustics). This includes effects such as color bleeding: if a red table is next to a white wall, the white wall gets a slightly pink tint. This effect is not possible with ordinary ray tracing algorithms.
 
But if the pink tint is lacking in an image, the image looks fake, even though it might be hard to point out precisely why. Global illumination effects are subtle but add realism to a scene. Simulation of global illumination has at least two distinct uses:
 Physically accurate simulation of the illumination in an environment. For example, the light distribution inside an office building.
 Visually pleasing lighting effects for applications in the entertainment industry. Here physical accuracy is not the most important aspect; the images just have to look believable.
 
The computation of global illumination requires photon tracing, just like computation of caustics. In fact, the same photon material shaders can be used. Since caustics are treated separately in mental ray, the global illumination simulation does not include caustics. So if all light interreflections should be simulated, both global illumination and caustics must be enabled.
 
The photons stored during global illumination simulation are stored in a separate photon map, the global illumination photon map. When the material shader calls mi_compute_irradiance, the irradiance from both the caustics photon map and the global illumination photon map is computed. To turn global illumination on, specify globillum on in the options or give the command-line option -globillum on.

1.24.5 Fine-tuning Caustics

 
 The number of photons used to estimate the caustic brightness can be changed with the global option caustic accuracy. The accuracy controls how many photons are considered during rendering. The default is 100; larger numbers make the caustic smoother.
 
There is also an optional radius parameter. The radius controls the maximum distance at which mental ray considers photons. For example, to specify that at most 200 photons should be used to compute the caustic brightness, and that only photons within 1 scene unit away should be used, specify:
 

caustic accuracy 200 1.0

 
in the options. (Similar accuracy parameters are available for global illumination, for volume caustics and global illumination in participating media, and for final gathering.) Accuracy parameters can be used to select two fundamentally different sampling policies; in the following, N denotes the accuracy's number of photons and R its maximum radius.
 
If R is relatively large (as is the default) the overall limiting factor becomes N and R only catches runaway situations with very few photons. Since darker areas have fewer photons than brighter areas, the effective radius within which the N photons are found is larger in dark areas. The effect is that low intensity areas will have less detail than high intensity areas.
 
Also, increasing the number of photons in the scene will result in the effective radius becoming smaller, so if N is not adjusted to compensate the same amount of noise will be seen, only on a smaller scale. The other policy is to select R small, on the scale of the detail the user wishes to see. N is then kept high, or even set to 0 (unlimited). In this case, a constant radius is examined which results in the scale of the detail remaining constant between light and dark areas.
 
Increasing the number of photons in the map will have the effect of reducing the error noise. In practice, one will often use a combination of the two, with a small radius to get detail in dark areas, and N set at a moderate value to speed up rendering. Irrespective of the chosen policy, a large effective radius gives less noise, but a more blurred result. To decrease the noise without blurring detail, it is necessary to increase the number of photons in the photon map.
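As a hedged illustration, the two policies could be expressed as follows (the values are arbitrary):

caustic accuracy 500 10.0   # policy 1: limited by N=500, R only as a safety bound
caustic accuracy 0 0.25     # policy 2: fixed small radius, N=0 means unlimited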
 
It is very instructive to explore the effect of setting these options with the aid of the diagnostic photon option since the false color image it generates shows the difference in estimated density more clearly. For fast previewing of caustics it can be useful to use N=20.

1.24.4 Shader Functions

 
There are two functions that are especially important to writers of photon shaders. The shader interface function mi_choose_scatter_type chooses a scatter type for a photon based on the probabilities for diffuse, glossy, and specular reflection and refraction.
 
It can also choose absorption, that is, that the photon should be traced no further. The function also ensures a correct energy level in the scene by altering the reflection coefficients according to the scatter choice.
 
During regular ray tracing, material shaders of caustic-receiving objects should call the mi_compute_irradiance function to "pick up" the illumination caused by photons reaching the object during preprocessing.

1.24.3.2 Physically Plausible Material Shaders

 
A pair of material shaders that emphasize physical accuracy are dgs_material and dgs_material_photon. They simulate three types of reflection: diffuse (Lambert's cosine law), glossy (isotropic or anisotropic), and specular (mirror), as well as the corresponding types of refraction and translucency, and any combination of these.
 
Therefore, they can simulate mirrors, glossy paint or plastic, anisotropic glossy materials such as brushed metal, diffuse materials such as paper, translucent materials such as frosted glass, and any combination of these. Each material declaration using dgs_material has to have a photon "dgs_material_photon" () statement. These two shaders should be used together for physical accuracy.
An example is:
 
material "mirror" opaque # ideal mirror material
"dgs_material" (
"specular" 1.0 1.0 1.0,
"lights" ["arealight-i"]
)
shadow "dgs_material" ()
photon "dgs_material_photon" ()
end material
 
Another pair of physically based shaders is dielectric_material and dielectric_material_photon. For further details on dgs_material, dielectric_material, and their photon shaders, see the documentation of the Physics Shader Library.
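A material definition for glass might then look like the following sketch; the parameter names ("col", "ior", "lights") are assumptions about the dielectric_material declaration and should be checked against the Physics Shader Library documentation:

material "glass" # dielectric material (sketch)
"dielectric_material" (
"col" 1.0 1.0 1.0, # transmission color (assumed parameter name)
"ior" 1.5, # index of refraction of glass (assumed parameter name)
"lights" ["arealight-i"]
)
shadow "dielectric_material" ()
photon "dielectric_material_photon" ()
end material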

1.24.3.1 Softimage Material Shaders

 
The Softimage material shader can be used with caustics (even though it is not physically correct). This allows the user to have a creative attitude towards realism. The Softimage material shader computes two types of reflection: specular and Lambertian diffuse.
 
The specular reflection of a light source is modeled by Phong's reflection model, while specular reflection of light from other parts of the environment is modeled by mirror reflection. For the Softimage material, the photon material shader is called soft_material_photon.
 
Each material declaration in the scene has to have a photon "soft_material_photon" () statement. It is possible to use other photon material shaders with soft_material, but this is not recommended as their parameters may be different or have different meanings.
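A minimal sketch of such a material declaration (shader parameters omitted, so the declared defaults would apply; the material name is hypothetical):

material "soft1"
"soft_material" ()
photon "soft_material_photon" ()
end material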
 
Using these Softimage material shaders makes it possible to design a scene without caustics, and then add the caustics as the "final touch" without the whole image changing drastically and without having to redesign all materials in the scene.

1.24.3 Material Shaders and Photon Shaders for Caustics

 
mental ray comes with three material shaders that support caustics (and global illumination): soft_material, dgs_material, and dielectric_material. Their photon shader equivalents are soft_material_photon, dgs_material_photon, and dielectric_material_photon. When defining a material it is necessary to specify both the regular material shader and the photon shader. Most often, however, the photon shader can inherit the parameter settings from the regular material shader. In addition to these six shaders, users can write new material and photon material shaders.

1.24.2 Objects

 
By default, all objects can cast and receive caustics, that is, photons are emitted in all directions from a point light source (and with all possible origins for a directional light source). For some scenes, this is fine; for example, if a point light source is surrounded by specular surfaces.
 
But for some scenes, it is very inefficient; for example, a point light far away from a single small specular object. To generate caustics more efficiently in such scenes, objects can be flagged such that the photons are only emitted towards certain objects and stored only on selected objects. Objects are then divided into caustic-casting (caustic 1 flag) and caustic-receiving (caustic 2 flag), or both (caustic 3 flag), or neither (caustic 0 flag).
 
For example, caustics on the bottom of a swimming pool require a caustic-casting water surface and a caustic-receiving pool bottom. Objects can also be flagged caustic off, which means that caustic photons will not hit them at all (the objects will be "invisible" to caustic photons), or flagged caustic on which is the same as caustic 3.
 
The caustic mode is an object attribute. Photons are emitted only in the direction of caustic-casting objects, and only stored on caustic-receiving objects. To use this optimization requires that the default object caustic flag (specified in the options) is set to something different than 3 (which is the default value, enabling all objects to cast and receive caustics). For example, the options can contain
 
caustic on
caustic 0
 
The definition of a caustic-casting object can, for example, begin as

object "revol4" caustic 1 visible shadow trace tag 13

The material of a caustic-casting object has to be mainly specular (little or no diffuse reflection), and for Softimage materials, the sum of reflection and transparency has to be close to or larger than 1. For caustics generated by refraction, the index of refraction has to be different from 1. For example, the index of refraction for water is 1.33, for glass 1.5 to 1.7, and for diamond 2.42.

1.24.1 Light Sources

 
Photons are emitted from the light sources in the scene. It is necessary to attach some extra information to each light source to control the energy being distributed into the scene (and optionally the number of photons emitted). To generate caustics from a particular light source, one must specify the energy emitted by the light source. This is given by the energy keyword. The energy is the flux distributed by the light source; it is distributed into the scene by the photons, each of which carries a fraction of the light source energy. If the energy is zero (the default), no photons will be emitted. Another important factor is the number of photons to be generated by this light source.
 
This can optionally be specified using the caustic photons keyword (10000 photons is the default). This will be the number of photons stored in the photon map and thus a good indication of the quality of the generated caustics. It is also a direct indication of the memory usage which will be proportional to the number of photons in the photon map. For quick, low-quality caustics, caustic photons 10000 is adequate, for medium quality 100000 is typically needed, and for highly accurate effects, caustic photons 1000000 can be necessary.
It is also possible to specify a second integer, which is the maximum number of photons to be emitted from the light source.
 
By default there is no upper limit (indicated by the value 0), in which case emission will continue until the specified number of photons have been stored. Notice that the emitted number of photons and the preprocessing time are most often proportional to the number of photons generated in the photon map. For most light sources, the distribution of energy using photons will give the natural inverse square falloff of the energy. This might be an unwanted effect since some light shaders implement a linear fall-off. It can be avoided by using the exponent keyword. If the exponent is p, the fall-off is 1/r^p, where r is the distance to the light source.
 
Exponent values between 1 and 2 make the indirect light less dependent on distance. Exponents of less than 1 are not advisable, as they often give visible noise. Exponent 2 is the default. The following is an example of a light that uses the soft point light shader and is capable of generating caustics:

light "caustic-light1" "soft_point" (
"color" 1.0 1.0 0.95,
"shadow" on,
"factor" 0.6,
"atten" on,
"start" 16.0,
"stop" 200.0
)
origin 20.0 30.0 -40.0
energy 700 700 700
caustic photons 100000
exponent 1.5
end light

An example of a light source which uses the inverse square fall-off to compute illumination (and the default 10000 photons) is:

light "point1" "physical_light" (
"color" 700.0 700.0 700.0
)
origin 20.0 30.0 -40.0
energy 700 700 700
end light

It is important to note the difference between color and energy. color is the power of the direct illumination, while energy will be the power of the caustic. It is therefore possible to tune the brightness of caustics to make them more or less visible.
 
If area light source information, such as a rectangle statement, is added to the light source definition, both the direct and global illumination will be emitted from an area light source. This tends to make caustics more fuzzy. To emphasize caustics, the energy of the light sources can be higher than their colors (that determine the direct illumination).
 
If, for whatever reason, the user wants the sources of caustics to be at different positions than the sources of direct illumination, this is possible too. It might also be that a single light source is sufficient for the caustics, while several light sources are needed to fine-tune the direct illumination.

1.24 Caustics

 
Caustics are light patterns that are created when light from a light source illuminates a diffuse surface via one or more specular reflections or transmissions. Examples are:
 The light patterns created on the bottom of a swimming pool as light is refracted by the water surface and reflected by the diffuse pool bottom.
 
 Light being focused by a glass of water onto a diffuse table cloth.
 The light emanating from the headlights of a car: the light is emitted by the filament of a light bulb, reflected by a parabolic mirror reflector (thereby being focused in the forward direction), and reflected by the diffuse road surface.
 
Caustics cannot be simulated efficiently using standard ray tracing since predicting the potential specular paths to a light source from any given surface is a difficult (and in many situations impossible) task. To overcome this problem mental ray uses a photon map. The photon map is generated in a preprocessing step in which photons are emitted from the light sources and traced through the scene using photon tracing.
 
The emission of photons is controlled using either one of the standard photon emitters for point lights, spot lights, directional lights, and area lights, or by using a user defined photon emitting shader. A photon leaving the light source can be reflected or transmitted specularly by objects.
 
The photon is traced through the scene until it either hits a diffuse surface or until it has been reflected or transmitted a maximum number of times as indicated by the photon trace depth. When a caustic photon hits a diffuse object it is stored in a caustic photon map and not traced any further.
 
To control the behavior of photons as they hit objects in the scene, it is necessary to attach photon material shaders to these objects. Photon material shaders are similar to normal material shaders with the main difference being that they trace the light in the opposite direction.
 
Also, a photon shader distributes energy (flux) instead of collecting a color (radiance). Another important difference is the fact that photon material shaders do not need to send light rays to sample the contribution from the light sources in the scene.
 
In order to use the photon shaders, it is necessary to include the physics.mi file, which contains the declarations of all the physics-based material shaders and photon shaders, or the softimage.mi file, which contains the Softimage material shader and photon shader. To turn caustics on, specify caustic on in the options or use the command-line option -caustic on.
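A hedged sketch of the corresponding scene-file fragments (the $include form shown here is an assumption about the include syntax):

$include "physics.mi" # declarations of the physics-based shaders

options "opt"
caustic on # enable caustic photon tracing
end options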

1.23 Memory-mapped Textures

 
mental ray supports memory mapping of textures in UNIX environments. Memory mapping means that the texture is not loaded into memory, but is accessed directly from disk when a shader accesses it. There is no special keyword or option for this; if a texture is memory-mappable, mental ray will recognize it and memory-map it automatically. Only the map image file format (extension .map) can be mapped. See the Output Shaders chapter for a list of supported file formats.
Note that memory mapping is based on the concept that the image data on disk does not require decoding or data type conversion, but is available in the exact format that mental ray uses internally during rendering.
Normally mental ray will attempt to auto-convert image data formats; for example if a color image file is given in a scalar texture construct, mental ray will silently convert the color pixels to scalars as the texture is read in. Most data types are auto-converted to most other data types. This does not work for memory-mapped textures.
 
Memory mapping requires several preparation steps:
 
 The texture must be converted to .map format using a utility like mental images' imf_copy (see the example after this list). The scene file must be changed to reference this texture. Note that mental ray recognizes .map textures even if they have an extension other than .map; this can be exploited by continuing to use the old file name with the "wrong" extension.
 Memory-mapped textures are automatically considered local by mental ray, as if the local keyword had been used in the scene file. This means that if the scene is rendered on multiple hosts, each will try to access the given path instead of transferring the texture across the network, which would defeat memory mapping. The given path must be valid on every host participating in the render.
 The texture should not be on an NFS-mounted file system (one that is imported across the network from another host). Although this simplifies the requirement that the texture must exist on all hosts, the necessary network transfers reduce the effectiveness and can easily make memory-mapping slower than regular textures.
 Memory-mapping works best if there are extremely large textures containing many tens of megabytes that are sampled infrequently, because then most of the large texture file is never loaded into memory. If the textures and the scene are so large that they do not fit into physical memory, loading a texture is equivalent to loading the file into memory, decompressing it, and copying it out to swap.
 (The swap is a disk partition that acts as a low-speed extension of the physical memory that exists as RAM chips in the computer). From then on, accessing a texture means accessing the swap. Memory mapping eliminates the read-decompress-write step and accesses the texture from the file system instead of from swap. This has the side effect that less swap space is needed. If the texture and scene are not large and fit into memory, and if the texture is accessed frequently, memory-mapped textures are slower than regular textures because the swap would not have been used.
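For illustration, a texture might be converted on the command line with

imf_copy mytexture.pic mytexture.map

and then referenced in the .mi scene file as a local texture (the file and texture names are hypothetical):

local color texture "tex_0" "mytexture.map"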

1.22 Contours

 
Contour lines can be an important visual cue to distinguish objects and accentuate their forms and spatial relationship. Contour lines are especially useful for cartoon animation production. Contours can be placed at discontinuities of depth or surface orientation, between different materials, or where the color contrast is high. The contour lines are anti-aliased, and there can be several levels of contours created by reflection or seen through semitransparent materials.
 
The contours can be different for each material, and some materials can have no contours at all. The color and thickness of the contours can depend on geometry, position, illumination, material, frame number, and various other parameters. The resulting image may be output as a pure contour image, a contour image composited onto the regular image (in raster form in any of the supported formats), or as a PostScript file. It is not possible to render contours in a scene with motion blur.
 
Contour shaders are called while the normal color image is created. Contours are computed using information stored by a contour store shader. The contour store shader is called once for each intersection of a ray with a material. The positions of contours are determined by a contour contrast shader. It compares the two sets of information for a pair of points, and decides whether there should be a contour between the points. The color and thickness of the contours are determined by contour shaders.
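A rough sketch of how these shaders are typically wired up follows; the shader names and parameters are assumptions based on the standard contour shader library and may differ:

# in the options block: what to store per intersection, and where contours appear
contour store "contour_store_function" ()
contour contrast "contour_contrast_function_levels" ()

# in a material: the color and width of this material's contours
material "toon"
"soft_material" ()
contour "contour_shader_simple" ("color" 0.0 0.0 0.0 1.0, "width" 0.5)
end material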

1.21 Output Shaders

 
 mental ray can generate more than one type of image. There are up to five main frame buffers: for RGBA, depth, normal vectors, motion vectors, and labels. The depth, normal vector, motion vector, and label frame buffers store the Z coordinate, the normal vector, the motion vector, and the label of the frontmost object at each sample of the image. If multiple samples are taken for a pixel, the frame buffer value for that pixel may be either any one sample value, or a blend of all samples.
 
The number and type of frame buffers to be rendered is controlled by output statements. Output statements specify what is to be done with each frame buffer. If a frame buffer is not listed by any output statement, it is not rendered (except for RGBA, which always exists).
 
There are two types of output statements, those specifying output shaders and those specifying files to write. There are also up to eight user-defined frame buffers that can be defined with any data type, using a frame buffer statement in the options block.
 
Output shaders are user-written functions, linked at runtime, that have access to every pixel in all available frame buffers after rendering. They can be used to perform operations like post-filtering or compositing. Files to write are specified with data type, file format, and file name. If the data type is omitted, a default data type is used that is assumed to be the "best" type for the given image format.
 
The data type implies the frame buffer type. There are special file formats for depth, vector, and label files, in addition to a variety of standard color file formats. By listing the appropriate number and type of output statements, it is possible to write multiple files. For example, both a filtered file and the unfiltered version can be written to separate files by listing three output statements: one to write the unfiltered image, one that runs an output shader that does the filtering, and finally another one to write the filtered image. Output statements are executed in sequence.
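As a sketch, the filtered/unfiltered example just mentioned could be expressed with three output statements in the camera definition (the output shader name my_filter is hypothetical):

output "+rgba" "pic" "unfiltered.pic" # write the raw rendered image
output "+rgba" "my_filter" () # run a user-written output shader that filters the image
output "+rgba" "pic" "filtered.pic" # write the filtered result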
 
The following file formats are supported:
 
 
Each of these file formats implies a particular default data type (the first entry in the column "Supported data types"); for example, "pic" implies 8-bit RGBA, and "zt" implies Z. The default data type may be overridden by explicitly specifying another data type, such as a 16-bit type, in the output statement, as long as it is supported and appears in the above table. mental ray will adjust its frame buffer list to compute the requested types. For example, the standard RGBA frame buffer stores 8 bits per component by default, but if any output statement references a 16-bit type, the RGBA frame buffer also switches to 16 bits.
 
The available data types are:
 
 
The difference between "vta" and "vts", and between n and m, is significant only when automatic conversions are done. The file contents are identical except for the magic number in the file header. The floating-point RGBA data type "rgba fp" allows color and alpha values outside the normal range (0; : : : 1), and no dithering is applied even if explicitly enabled.
 
In contrast, any conversion to the 8-bit or 16-bit formats will clamp values outside this interval. Note that dithering reduces the effectiveness of RLE compression. All mental images file formats contain a header followed by simple uncompressed image data, pixel by pixel beginning in the lower left corner.
 
Each pixel consists of one to four 8-bit, 16-bit, or 32-bit component values, in RGBA, XYZ, or UV order. The header consists of a magic number byte identifying the format, a null byte, width and height as unsigned shorts, and two unused null bytes reserved for future use. All shorts, integers, and floats are big-endian (most significant byte first). mental ray can combine samples within a pixel in different ways. The combination of existing samples can also pad the frame buffers to "bridge" unsampled pixels. Interpolation of colors, depths, normals, and motion vectors means that they are averaged, while interpolation of the labels means that the maximum label is used (taking the average label is not a good idea).
 
Interpolation of depths only takes the average of non-infinite depths, and interpolation of normals and motion vectors only takes the average of vectors different from the null vector. Interpolation is turned on by writing a "+" at the beginning of the output type and turned off by writing a "-" there.
 
For example, to interpolate the depth samples, write "+z" in the output statement. If interpolation is turned off for a frame buffer, the last sample value (color, normal, motion vector, or label) within each pixel is stored, and pixels without samples get a copy from one of the neighbor pixels.
 
Interpolation off for depth images is an exception: rather than using the last sample depth, the minimum depth is used; this can be useful for compositing. Interpolation is on by default for color frame buffers (including alpha and intensity frame buffers) and off by default for depth, normal, motion vector, and label frame buffers.

1.20 Color Calculations

 
All colors in mental ray are given in the RGBA color space and all internal calculations are performed in RGBA. The alpha channel is used to determine transparency; 0 means fully transparent and 1 means fully opaque. mental ray uses premultiplied colors, which means that the R, G, and B components are scaled by A and may not exceed A.
 
Optionally, RGBA colors may be stored in the output image in non-premultiplied form to increase the precision of highly transparent pixels, but internally mental ray and all shaders work with premultiplied colors.
 
Premultiplication is used to simplify compositing operations. Internally, colors are not restricted to the unit cube in RGB space. As a final step before output, colors are clipped using one of two methods. By default, the red, green and blue values are simply truncated.
 
Optionally, colors may be clipped using a desaturation method which maintains intensity (if possible), but shifts the hue towards the white axis of the cube. Desaturation color clipping may be selected with either the -desaturate on option on the command line or desaturate on in the options block in the scene. The alpha channel is always truncated.

1.19 OpenGL Acceleration

 
By default, mental ray uses a scanline rendering algorithm for primary rays when no lens shaders are present that modify the ray direction. This algorithm can cope with both static and motion blurred scenes. In addition, mental ray supports OpenGL hardware to further accelerate scanline rendering of static scenes (without motion blurring). If the client host provides OpenGL acceleration with sufficient resolution, mental ray can use this to generate acceleration data that is subsequently used during regular rendering.
 
This combines the speed of a hardware OpenGL accelerator with the full shading capabilities of mental ray, and is particularly effective for scenes with high polygon counts. Shadow maps can also be rendered with OpenGL acceleration. This is particularly effective since shadow maps need no shading.
 
Most of the computation involved is then performed by the OpenGL system, which greatly improves performance. However, the accuracy of OpenGL acceleration is generally lower than that of the standard software scanline rendering algorithm.

1.18 Sampling Algorithms

 
Jittering, motion blurring, area light sources, and the depth-of-field lens shader are based on multiple sampling, varying the sample locations in time, 2D, or 3D space. mental ray offers a proprietary implementation of the Quasi-Monte Carlo method for achieving these variations. Sample locations in time, 2D, and 3D space are deterministically chosen on fixed points that ensure optimal coverage of the sample space.
 
The algorithm is similar to fixed-raster algorithms, but avoids the regular lattice appearance of such algorithms. The resulting images are identical if the scene is re-rendered with the same options due to the deterministic nature of the algorithm. Quasi-Monte Carlo methods can be succinctly described as strictly deterministic sampling methods. Determinism enters in two ways, namely, by working with deterministic points rather than random samples and by the availability of deterministic error bounds ([Niederreiter 92]).

1.17 Motion Blur

 
There are two different motion blur algorithms. One is completely general and computes motion blur of highlights, textures, shadows, reflections, refractions, transparency, and intersecting objects. The other algorithm is much faster, but cannot handle reflections and refractions (and shadows have to be approximated with shadow maps). However, motion blur of highlights, textures, transparency, and intersecting objects still work with the faster algorithm.
 
The faster algorithm is used for scanline samples (first-generation non-raytraced). The movement of objects is specified by associating linear motion vectors with polygon vertices and surface control points. These vectors give the direction and distance that the vertex or control point moves during one time unit.
 
If a motion vector is not specified, the vertex is assumed to be stationary. Motion blurring computations may be expensive, but note that these computations are only done for those polygons in a scene which include motion information.
 
A shutter speed may be given for the camera with the -shutter option on the command line or shutter in the options statement, with the default speed of zero turning motion blurring off. The shutter opens instantaneously at time zero and closes after the shutter speed time has elapsed.
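For example, in the options (the value is only illustrative):

shutter 0.5 # shutter open from time 0 to 0.5; a value of 0 disables motion blur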

1.16 Animation

 
The input file consists of a list of frames, each of which is complete and self-contained. Animation is accomplished by specifying geometry, light sources and materials which change incrementally from one frame to the next.

1.15 Depth of Field

 
Depth of field is an effect that simulates a plane of maximum sharpness, and blurs objects closer or more distant than this plane. There are two methods for implementing depth of field: a lens shader can be used that takes multiple samples using different paths to reach the same point on the focus plane to interpolate the depth effect; or a combination of a volume shader and an output shader that collect depth information during rendering and then apply a blurring filter as a postprocessing step over the finished image using this depth information. Both methods are supported by standard shaders supplied with mental ray.
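A hedged sketch of the lens shader method, assuming the physical_lens_dof shader from the Physics Shader Library with parameters "plane" (Z coordinate of the focus plane) and "radius" (lens radius); both names are assumptions:

camera "cam"
output "+rgba" "pic" "dof.pic"
focal 50.0
aperture 44.0
aspect 1.333
resolution 720 486
lens "physical_lens_dof" ("plane" -100.0, "radius" 0.5)
end camera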

1.14 Lens Effects

 
Lens effects are distortions of the rendered image achieved by changing the light path through the camera lens. Because lens effects are applied to first-generation rays, there is no loss of quality that would be unavoidable if the distortion were applied in a post-processing stage after rendering. Lens effects are introduced by specifying one or more lens shaders in the camera statement. If no lens shaders are present, the standard pinhole camera is used. Each lens shader is called with two state variables that specify the ray origin and the ray direction.
 
The lens shader calculates a new origin and a new direction, and casts an eye ray using the mi_trace_eye function. The first lens shader always gets the position of the pinhole camera. All following lens shaders get the origin and direction that the previous lens shader used when calling mi_trace_eye. Lens shaders imply ray tracing. If lens shaders change the origin or direction of a ray, they only work correctly if scanline rendering is turned off. If scanline is turned on, a warning is printed.
 
In that case, the lens shaders must not change the origin or direction of a ray. Lens shaders have no effect on the trace depth limit; eye rays are not counted towards the ray trace depth.

1.13 The Camera

 
The camera is fixed at the origin, looking down the negative Z axis, with up being the positive Y axis. To view a scene from a given position and orientation, the scene must be transformed such that the camera is at this standard location. By default, the camera is a pin-hole perspective camera for which the focal length, aperture and aspect ratio may be specified in either the camera construct of the input file or on the command line of mental ray. Optionally, lens effects such as depth of field can be achieved by specifying one or more lens shaders.

1.12 User-Defined Shaders

 
In addition to standard shaders, user-defined shaders written in standard C or C++ can be precompiled and linked at runtime, or can be both compiled and linked at runtime. User-defined shaders can be used in place of any standard shader, redefining materials, textures, lights, environments, volumes, displacements etc. mental ray can link in user-defined shaders in either object, source, or dynamic shared object (DSO or DLL) form. Every user-defined shader must be declared before it can be used.
 
A declaration is a statement that names the shader, and lists the name and type of all its parameters. Declarations may appear in the .mi file, but are typically stored in an external file included at run time. Note that all code and link statements must precede the first declaration in the .mi file.
 
Available parameter types are boolean, integer, scalar, string, color (RGBA), vector, transform (4x4 matrix), scalar texture, color texture, vector texture, and light. In addition to these primitive types, compound types may be built using struct and array declarations. Structs may be nested but arrays may not. An instance of a shader can be created by creating a material, texture, light etc. that names a declared shader and associates a parameter list with values with it.
 
Any parameter name that appeared in the declaration can be omitted or listed in any order, followed by a value that depends on the parameter type. Omitted parameters default to 0. Scalars accept floating point numbers, vectors accept one to three floating point numbers, and textures accept a texture name. After a material, texture, light etc. has been created, it can be used. Materials are used by giving their names in object geometry statements, and textures and lights are used by naming them as parameter values in other shaders, typically material shaders.
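A minimal sketch of a declaration and a matching instance follows; the shader name and parameters are hypothetical, and the exact declaration syntax differs between mental ray versions:

declare "my_marble" ( # hypothetical user-defined texture shader
color "base_color",
scalar "turbulence"
)

color texture "marble1" "my_marble" (
"base_color" 0.9 0.85 0.8,
"turbulence" 2.5
)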
 
When the C function that implements a user-defined shader is called, it receives three pointers: one points to the result, one to global state, and one to a data structure that contains the parameter values. mental ray stores the parameter values in that data structure using a layout that corresponds exactly to the layout a C compiler would create, so that the C function can access parameters simply by dereferencing the pointer and accessing the data structure members by name. For this, it is necessary that a struct declaration is available in C syntax that corresponds exactly to the declaration in .mi syntax. For details on user-defined shaders, refer to the "Writing Shaders" chapter.

1.11.2 Elliptical Projection Filter Lookup

 
This method was implemented in mental ray in order to provide very high quality texture filtering, far superior to the pyramid filtering explained above. It eliminates most if not all of the aliasing in high texture frequencies. When using checkerboard textures mapped onto a rectangle, for example, there is much less blurring at the horizon where the texture compression is severe. With mip-mapping as explained above, the blurring at such extreme compressions is sometimes still visible.
 
 The main cause for the excessively blurry-looking images using mip-maps is the approximation of the pixel projection area by a square. With elliptical filtering a circle around the current sampling location is projected to texture space and will give either a circle or an ellipse as a projection shape. Instead of approximating this curve by simple shapes like squares, a direct convolution (averaging) of all texels which are inside the ellipse area is done.
 
Averaging all texels in this area can take quite long, so mental ray uses pyramids of prefiltered textures to accelerate this. There are various parameters explained below which control modification of ellipse shape and level selection in the pyramid. The most difficult part when elliptical projections are used is that a screen to texture space transformation matrix has to be provided. This matrix is used in the filtering code to transform the circle around the current sampling location to texture space. mental ray provides two helper functions for constructing this matrix when UV texture coordinates are available; see mi_texture_filter_project in the Writing Shaders chapter.
 
If those are not available and (for example) direct cylinder projective mappings are used, it is much easier to calculate this matrix. The following filtering algorithm is applied: first, a circle around the current sampling location is transformed to an ellipse using the provided transformation matrix. Then the eccentricity of the ellipse is calculated (major radius divided by minor radius). If the eccentricity is larger than a specified maximum, the minor radius is adjusted (made larger) to make sure that this eccentricity maximum always holds. The reason for this enlargement is that the direct convolution is done in the pyramid level based on the minor axis length of the ellipse.
 
There is another parameter which specifies the maximum allowed number of texels the minor radius may cover. If that number is exceeded in the finest level (zero), a higher level is used. In the second level, for example, the minor radius has half the size, etc. Enlarging the minor radius when the eccentricity maximum is exceeded basically means that we are going up in the pyramid. So, for very long thin ellipses, mental ray makes them "fatter" and uses a higher level in the pyramid.
 
Referring to the checkerboard-mapped plane example above, the circle is projected to very large thin ellipses near the horizon, covering thousands of texels, and using the technique above mental ray just makes a few texture lookups in the higher pyramid levels. There is another parameter which modifies the size of the circle to be projected; usually the radius is 0.5. Making it larger introduces more blurring, making it smaller gives more aliasing.
 
The projection helper functions expect another parameter which is the maximum offset from the central sampling location to the two other points which have to be selected. The other two points should be inside the pixel, but since mental ray is using the current intersection primitive (the triangle) also for these points to determine the UV texture coordinates, a smaller value than 0.5 (pixel corners) is appropriate since mental ray might hit the triangle plane outside the triangle area. Usually 0.3 gives good results. When the UV coordinates are calculated using cylinder projections, it is possible to obtain the UV coordinates much faster and also much more accurately.

1.11.1 Pyramid Filtering

 
This method can be used very easily with existing .mi files; it is only necessary to add a "filter scale" modifier to the texture load statements in the scene file. Here is an example:

local filter 0.8 color texture "tex_0" "mytexture.map"

The basic idea behind pyramid filtering is that when a pixel rectangle (the current sampling location) is projected into texture space, mental ray has to calculate the (weighted) average of all texture pixels (texels) inside this area and return it for the texture lookup.
 
 Using the average of the pixels, high frequencies which cause aliasing are eliminated. To speed up this averaging, the compression value is calculated for the current pixel location which is the inverse size of the pixel in texture space. For example, if the pixel has a projected size of four texels in texture space, then one texel is compressed to 1/4 in the focal plane (severe compression gives those aliasing artifacts). It is very costly to project a rectangle to texture space, so usually the quadrilateral in texture space is approximated by a square and the length of one side is used for the compression value.
 
The compression value is used as an index into the image pyramid, and since this value has a fractional part, the two levels that the value falls between are looked up using bilinear interpolation at each level, followed by a linear interpolation of the two colors from the level lookups. (mental ray also uses bilinear texture interpolation when no filtering is applied.) Just specifying "filter scale color texture" is not sufficient for an exact projection of the pixel to texture space.
 
The texture shader modifies the UV texture coordinates (either from specified texture surfaces or generated by cylinder projections) according to remapping shader parameters etc. In mi_lookup_color_texture, mental ray only has the UV texture coordinates, and it is almost impossible to project the pixel corners to texture space since it is not known how to obtain additional UV coordinates or how to remap them. The remapping is done before mi_lookup_color_texture is called. mental ray's implementation of pyramid mapping therefore adds a vector to the current intersection point in object space and transforms this point into raster space.
 
The length of the offset vector is calculated by dividing the object extent by the texture resolution (the larger value of width and height is used). This approach assumes that texture space corresponds to object space (that is, if the object size is one object unit, the texture fully covers it).
 
If a texture shader applies texture replications, the filter value should be set to the replication count or larger to adjust for this effect. The compression value is calculated as the distance between the raster position mentioned above and the current raster position (state->raster_x, state->raster_y). Since this cannot always attain satisfying results, mental ray allows multiplication by a "user scaling" value, the scale value in the filter statement. Using this value, it is possible to reduce blurring (scale < 1) or increase blurring (scale > 1).
 
For example, if the texture is replicated 10 times, which makes it appear smaller in raster space and hence requires more blurring, the filter constant should be multiplied by 10. Since texture projections are handled by shaders and not by the mental ray core, this cannot be done automatically. Pyramid filtering also works when reflection or refraction is used, but mathematical correctness cannot be guaranteed since mental ray cannot take reflection or refraction paths into account, for the same reason.
 

1.11 Texture Filtering

 
mental ray provides two methods for texture filtering: a fast filtered texture lookup using image pyramids (which are similar to mip-maps but have no restrictions on texture size), and a high-quality filtering method using elliptical projections. Both methods operate on image pyramids.
 
There is a .map image format defined by mental ray that supports filtering. When standard image files (such as .pic) are used for filtered texture lookups (both methods), the pyramid must be created by mental ray when the image is accessed. For high-resolution images this can take a long time (sometimes up to a minute), so it is highly recommended to create this image pyramid "offline" with mental images' imf_copy utility. When called with the -p option on the command line, it down-filters the source texture image file and writes out the filtered images in memory-mapped image format. If such a file is read with a local filter color texture statement in the .mi scene file, the pyramid is read almost instantaneously.
 
Also, it is recommended to make local copies of the texture files on the machines in order to speed up access. When machines with different byte order are used in the network, there is a performance penalty when using only one version of the pyramid .map file (it has to be byte swapped), so it is recommended to generate the .map file in the native byte order on the respective machines.
 
The prefiltered .map file containing the pyramid can also be used for standard nonfiltered texture lookups (using a simple local color texture statement); in this case only the first (finest) level of the image pyramid is used. Now the two methods in detail:
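For example, the pyramid could be generated offline with

imf_copy -p bigtexture.pic bigtexture.map

and the scene file would then reference it with a filtered local texture (the file and texture names are hypothetical):

local filter color texture "tex_0" "bigtexture.map"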

1.10 Texture, Bump, Displacement, and Reflection Mapping

 
mental ray supports texture, bump, displacement, and reflection mapping, all of which may be derived from an image file or procedurally defined using user-supplied functions. The following table lists the file formats accepted by mental ray:
 
In the table any combination of comma separated values determines a valid format subtype. For example, the SOFTIMAGE image format will be read when data type is 8 bits per component with or without alpha either RLE compressed or uncompressed. The actual image format is determined by searching the file content, not just by checking the filename extension.
 
Typical image types like black/white, grayscale, colormapped and truecolor images, optionally compressed, are supported. Some of them could be used to supply additional alpha channel information (number of components > 3).
 
The collection covers most common platform independent formats like TIFF and JFIF/JPEG, special UNIX (PPM) or Windows bitmap (BMP) types and well known application formats. The mental images formats, normally created by mental ray itself, are mainly available to exchange data not storable with other formats. The other way to define any sort of map is supplying user functions, which are linked to mental ray at run time without user intervention.
 
The function may require parameters which could specify, for example, the turbulence of a procedural marble texture. Frequently, a function is used to apply texture coordinate transformations such as scaling, cropping, and repetitions. Such a function would have a sub-texture argument that refers to the actual image file texture.
 
A user-defined material shader is not restricted to the above applications for textures. It is free to evaluate any texture and any number of textures for a given point, and use the result for any purpose. In the parameter list of the standard material shaders, a list of texture maps may be given in addition to, for example, a literal RGB value for the diffuse component of a material. The color of the diffuse component will then vary across a surface.
 
To shade a given point on a surface, the coordinates in texture space are first determined for the point. The diffuse color used for shading calculations is then the value of the texture map at these coordinates. The SOFTIMAGE-compatible material shader uses a different approach; it accepts a single list of textures, with parameters attached to each texture that control the way the texture is applied to ambient, diffuse, and other parameters. The shader interface is extremely flexible and permits user-defined shaders to use either of these approaches, or completely different formats.
 
The remainder of this section describes the standard shader parameters only. The standard material shaders support texture mapping for all standard material parameters except the index of refraction. Shinyness, transparency, refraction transparency, and reflectivity are scalar values and may be mapped by a scalar map. Bump maps require a vector map. For all other parameters, a color map is appropriate. SOFTIMAGE texture shaders derive all types of maps from color textures.
 
Determining the texture coordinates of a point on a surface to be shaded requires defining a mapping from points in camera space to points in texture space. Such a mapping is itself referred to as a texture space for the surface. Multiple texture spaces may be specified for a surface. If the geometry is a polygon, a texture space is created by associating texture vertices with the geometric vertices. If the geometry is a free-form surface, a texture space is created by associating a texture surface with the surface. A texture surface is a free-form surface which defines the mapping from the natural surface parameter space to texture space.
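 
A minimal sketch of a polygon with an explicit texture space in the .mi format (assuming the usual layout where the first four vectors are positions and the next four are texture coordinates):
 
object "square" visible
    group
        -1. -1. 0.   1. -1. 0.   1. 1. 0.   -1. 1. 0.   # positions
         0.  0. 0.   1.  0. 0.   1. 1. 0.    0. 1. 0.   # texture vertices
        v 0 t 4
        v 1 t 5
        v 2 t 6
        v 3 t 7
        p "desk" 0 1 2 3
    end group
end object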
 
Texture maps, and therefore texture spaces and texture vertices, may be one, two, or three dimensional. Pyramid textures are a variant of mip-map textures. When loading a texture that is flagged with the filter keyword, mental ray builds a hierarchy of different-resolution texture images that allows elliptical filtering of texture samples. Without filtering, distant textures would be point-sampled at widely separated locations, missing the texture areas between the samples, which causes texture aliasing. Texture filtering attempts to project the screen pixel onto the texture, which results in an elliptic area on the texture. Pyramid textures allow sampling this ellipse very efficiently, taking every texture pixel inside the ellipse into account without sampling each one individually.
 
Pyramid textures are not restricted to square or power-of-two resolutions, and work with any RGB or RGBA picture file format. The shader can either rely on mental ray's texture projection or specify its own. Filter blurriness can be adjusted per texture. A procedural texture is free to use the texture space in any way it wants, but texture files are always defined to have unit size and to be repeated through all of texture space. That is, the lower-left corner of the file maps to (0.0, 0.0) in texture space, and again to (1.0, 0.0), (2.0, 0.0), and so on; the lower-right corner maps to (1.0, 0.0), (2.0, 0.0), ..., and the upper right to (1.0, 1.0), (2.0, 2.0), .... Just as a texture map can vary a parameter such as the diffuse color over every point on a surface, a bump map can be associated with a material, perturbing the normal at every point on a surface which uses the material.
 
This will affect the shading, though not the geometry, giving the illusion of a pattern being embossed on the surface. Bump maps, like texture maps, require a texture space. In addition, bump maps require a pair of basis vectors to define the coordinate system in which the normal is displaced. A bump map defines a scalar x and a scalar y displacement over the texture space. These components are used together with the respective basis vectors in order to calculate a perturbed surface normal.
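 
Schematically (this shows the general idea rather than the exact internal formula), with basis vectors X and Y and bump displacements b_x and b_y looked up from the map, the perturbed normal is
 
    N' = normalize(N + b_x * X + b_y * Y)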
 
The basis vectors are automatically defined for free-form surfaces in a way which conforms to the texture space. For polygons, the basis vectors must be explicitly given along with the texture coordinates for every polygon vertex. A displacement map is a scalar map which is used to displace a free-form surface or a polygon at each point in the direction of the local normal. Like texture, bump and reflection maps, a displacement map may be either a file or a user-defined function, or a combination of the two.
 
The surface must be triangulated fine enough to reveal the details of the displacement map. In general, the triangles must be smaller than the smallest feature of the displacement map which is to be resolved. Displacement mapped polygons are at first triangulated as ordinary polygons. The initial triangulation is then further subdivided according to the specified approximation criteria. The parametric technique subdivides each triangle a given number of times. All the other techniques take the displacement into account.
 
The length criterion, for example, limits the size of the edges of the triangles of the displaced polygons and ensures that at least all features of this size are resolved. As the displaced surface is not known analytically, the distance criterion compares the displacements of the vertices of a triangle with each other. The criterion is fulfilled only if they differ by less than the given threshold. Subdivision is finest in areas where the displacement changes.
 
The angle criterion limits the angle under which two triangles meet in an edge contained in the triangulation. Subdivision stops as soon as the given criterion or combination of them is satisfied or the maximum subdivision level is reached. This does not preclude the possibility that at an even finer scale new details may show up which would again violate the approximation criteria.
 
For displacement mapped free-form surfaces approximation techniques can be specified either on the underlying geometric surface or for the surface resulting from the displacement. Previously only the former method existed. Users can still use it exactly the same way as before. However, it does not take into account variations in curvature imparted to the surface as a result of displacement mapping. If one wants to control the approximation from the geometric surface probably the most suitable technique for use with displacement mapping on free-from surfaces is the view dependent uniform spatial subdivision technique, which allows specification of triangle size in raster space.
 
An alternative is to place special curves on the surface which follow the contours or isolines of the displacement map, thus creating flexibility in the surface tessellation at those points where it is most needed for displacement. This would also facilitate the approximation of the displacement map by the new adaptive triangulation method. In addition to or even instead of specifying the subdivision criteria for the base surface they can be given for the displaced surface itself.
 
This approximation statement works exactly the same way as for polygons, i.e. an initial tessellation is subdivided until the criteria on the displaced surface are met.
 
The final type of map which may be associated with a material is an environment map. This is a color-mapped virtual sphere of infinite radius which surrounds any object referencing the given material. "Environment map" is actually something of a misnomer since this sphere is also seen by refracted rays; the environment seen by first-generation (primary) rays can also be specified but is part of the camera, not of any particular material.
 
In general, if a ray does not intersect any objects, or if casting such a ray would exceed the trace depth, the ray is considered to strike the sphere of the environment map of the last material visited, or the camera environment map in the case of first-generation rays that did not hit any material. The environment map always covers the entire sphere exactly once. The sphere may be rotated but, because it is of infinite radius, translations and scalings have no effect. User-defined environment shaders can be written, for example one that defines a six-sided cube or other types of environment mapping.

1.9 Shadow Maps

 
1.9 Shadow Maps
 
Shadow mapping is a technique that generates fast approximate shadows. It can be used for fast previewing of models or as an alternative to the more accurate (but also more costly) ray tracing based approach in scenes where accurate shadows are not required. Shadow maps are particularly efficient when a scene is rendered several times without changes in the shadows (for example an animation where only the camera is moving). A shadow map is a fast depth buffer rendering of the model as seen from a light source.
 
This means that each pixel in a shadow map contains information about the distance to the nearest object in the model in a particular direction from the light source. This information can be used to compute shadows without using shadow rays to test for occluding objects.
 
The shadow computation is based only on the depth information available in the shadow maps. For fast previewing of scenes, shadow maps can be used in combination with scanline rendering to produce fast approximate renderings with shadows, without using any ray tracing.
 
Two different kinds of shadows can be produced with shadow maps: sharp and soft (blurry) shadows. Sharp shadows are very fast, and depending on the resolution of the shadow map they will approximate the result produced with simple ray tracing. Soft shadows are produced by distributing one or more samples in a region of the shadow map. This technique produces soft shadows everywhere; it is not as accurate as the ray tracing based approach for computing soft shadows, but it is much faster.
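 
A sketch of how shadow maps are typically enabled in a .mi scene (statement names as recalled from the scene format; the light shader name, resolution, and softness values are assumptions):
 
options "opt"
    shadow on
    shadowmap on
end options
 
light "key"
    "mi_soft_light" ("mode" 1, "color" 1 1 1)   # shader name assumed
    origin 0 8 8
    shadowmap
    shadowmap resolution 512
    shadowmap softness 0.02    # nonzero softness produces soft (blurry) shadows
    shadowmap samples 8
end light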

1.8 Area Light Sources

 
1.8 Area Light Sources
 
The main purpose of area light sources is to generate more realistic lighting, resulting in soft shadows. This is achieved by using one of four primitives (rectangles, discs, spheres, and cylinders) as light sources with nonzero area. This means that a point on an object may be illuminated by only a part of a light source. Area light sources are based on similar principles as motion blurring, and, like motion blur, may reduce rendering speed.
 
Area light sources are specified in the .mi file by naming a primitive in a standard light definition. Any of the standard spot and point lights can be turned into an area light source. The orientation of the disc and rectangle primitives may be chosen independently of the light direction of spot and directional light sources. Any type of light shading function can be used.
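 
For instance, a point light might be turned into a rectangular area light by naming a rectangle primitive in its definition (a sketch; the shader name is assumed and the optional sampling arguments are omitted):
 
light "area1"
    "mi_soft_light" ("mode" 1, "color" 1 1 1)
    origin 0 4 0
    rectangle 2 0 0  0 0 2     # two edge vectors spanning the rectangle
end light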

1.7 Light Sources

 
1.7 Light Sources
 
A light source illuminates the objects in a scene. Light sources in mental ray are programmable and consist of a light source name, a named light shader function, and an optional origin and direction (exactly one of the two must be present). All light shaders also accept shader parameters that depend on the shader. All standard shaders require a light color parameter. The lights available to a scene are defined outside materials and referenced by name inside materials. Only those lights which a material references will illuminate surfaces which use that material.
 
The shading function for light sources may be either a user written function linked at run time, or it may be one of the standard functions. There is one standard shader for SOFTIMAGE compatibility, and one for Wavefront compatibility. The shading functions for all SOFTIMAGE shaders accept a boolean parameter shadow that turns shadow casting on or off for that light source, and a floating point factor that is the shadow factor. The shadow factor controls penetration of opaque objects.
 
The mi soft light shader has a mode parameter that selects an infinite (directional) light (mode 0), a point light (mode 1), or a spot light (mode 2). The infinite shader is a directional light source requiring a direction in the input file. The shading function requires only the shadow and factor parameters. A point light source requires an origin in the input file. The shading function accepts, in addition to the color, shadow, and factor parameters, a boolean atten that turns distance attenuation on or off, and two scalar parameters start and end that specify the range over which the attenuation falls off if atten is true. The spot light mode requires only an origin in the input file.
 
The spot direction is considered directional attenuation, and is given as a user parameter. The shading function takes the same parameters as the point light mode, plus two cone angles, cone and spread, that specify the angle of the inner solid cone and the outer falloff cone, respectively. The spot casts a cone of light with a softened edge where the intensity falls off to zero between the cone and spread angles. The mi wave light shader accepts color and dir (direction) arguments. Shadow casting cannot be turned on and off on a per-light-source basis with Wavefront light sources, and the shading function accepts no shadow factor. There are two types of attenuation: distance and angle.
 
Distance attenuation is turned on by either one of the two boolean flags dist linear or dist inverse. In the linear case, the fading range is controlled by dist start and dist end; in the inverse-power case, the attenuation is proportional to the distance from the illuminated point raised to the dist power argument. Wavefront angle attenuation is turned on by either one of the two boolean flags angle linear or angle cosine. In the linear case, the light falls off between the angles specified by the angle inner and angle outer arguments; in the cosine case, the light falls off proportionally to the cosine raised to the angle power argument. Angle attenuation implements spotlights. The spot light direction is the illumination direction argument, dir.
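 
As a concrete sketch of the mi soft light point mode described above (parameter names follow the text; the registered shader name, boolean syntax, and numeric values are assumptions):
 
light "point1"
    "mi_soft_light" (
        "mode" 1,            # point light
        "color" 1. 1. 1.,
        "shadow" on,
        "factor" 0.75,       # shadow factor: light penetration through occluders
        "atten" on,
        "start" 10.,
        "end" 100.           # attenuation falls off between start and end
    )
    origin 0. 5. 0.
end light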

1.6 Materials

 
1.6 Materials
 
A material determines the response of a surface to illumination. Materials in mental ray consist of a material name and one mandatory and four optional shaders, each of which can be a standard shader or a user-provided C function:
 The first function is the material shader itself. It may not be omitted. The material shader determines the color of a point on an object, based on its parameters which may include object colors, textures, lists of light sources, and other arbitrary parameters.
  An optional displacement shader can be named that displaces a free-form surface at each point in the direction of the local surface normal. Displacement maps affect the triangles resulting from the tessellation of free-form surfaces and polygonal meshes.
 An optional shadow shader determines the way shadow rays pass through the object. This can be used for calculating colored shadows.
 An optional volume shader controls rays passing through the inside of the object. This is functionally equivalent to atmosphere calculations, but takes place inside objects, not outside.
 An optional environment shader provides an environment map for non-raytraced reflections.
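 
A skeleton naming all five slots might look as follows (the quoted shader names are placeholders; parameter lists are omitted):
 
material "glassy"
    "my_material" ()           # material shader (mandatory)
    displace "my_displace" ()  # optional displacement shader
    shadow "my_shadow" ()      # optional shadow shader
    volume "my_volume" ()      # optional volume shader
    environment "my_env" ()    # optional environment shader
end material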
 
The shading function may be either a user-written function linked at run time, or one of the standard functions. All standard shaders use certain standard parameters that are described here. Parameters can be named in any order. Parameters can also be omitted; default values will be provided by mental ray. Note that the following standard parameters only apply to the standard shaders; a user-written shader is completely free to define these or other parameters.
 
The index of refraction controls the bending of light as it passes through a transparent object. Although actually dependent on the ratio of indices between the transparent material being entered and that being left, in practice one may say that the higher the index of refraction, the more the light is bent. Typical values are 1.0 for air, 1.33 for water and 1.5 for glass.
 
The shinyness material parameter effectively controls the size of highlights on a surface. It is also known as the specular exponent. The higher the value, the smaller the highlight. The dissolve parameter controls the fading transparency of a material independent of refractive effects. This is more accurately described as a blending operation between the surface and whatever lies beyond. If dissolve is 0.0, the surface is completely opaque. A value of 0.5 would cause an equal blend of the surface and the background.
 
A value of 1.0 would result in an invisible surface. This parameter is used by the Wavefront-compatible shaders only. The transparency parameter controls the refractive transparency of a material. Unlike dissolve, this parameter has a physically correct interpretation. The range is, as for dissolve, from 0.0 for opaque to 1.0 for a completely transparent surface. The interpretation of transparency is left entirely to the material shader. The reflect parameter controls the reflectivity of a material. If reflect is 0.0, no reflections will be visible on a surface.
 
A perfect mirror would have a reflect of 1.0. This parameter is used by the SOFTIMAGE-compatible shader only. The ambient component approximates the color of a light source which illuminates a surface from all directions without attenuation or shadowing. The diffuse component is the color of the surface which is dependent on its angle to a light source but independent of the position of the viewer. A piece of felt is an example of a material with only a diffuse component. The specular component is the color of the surface which is dependent both on the position of the light source and the position of the viewer. It is the color of highlights on the surface.
 
The transmit component (transmission filter) is a color which filters light refracted through an object. A piece of glass which imparts a green tint to the objects seen through it would have a green transmit component. This parameter is used by the Wavefront shader only. Finally, the shade component (shadow filter) is a color which filters light as it passes through a transparent object which casts a shadow. A blue glass ball would have a blue shade component. This parameter is also used by the Wavefront shader only. These parameters have been referred to as standard because they are each required by at least one of the standard shaders. There is one material shader that supports SOFTIMAGE compatibility and one that supports Wavefront compatibility. Additional shaders compatible with Alias lighting models become available with the Alias translator module of mental ray.

1.1 Parallelism

 
1.1 Parallelism
 mental ray has been designed to take full advantage of parallel hardware. On multiprocessor machines that provide the necessary facilities, it automatically exploits thread parallelism, where multiple threads of execution access shared memory. No user intervention is required to take advantage of this type of parallelism.
 
mental ray is also capable of exploiting thread and process level parallelism where multiple threads or processes cooperate in the rendering of a single image but do not share memory. This is done using a distributed shared database that provides demand-driven, transparent sharing of database items on multiple systems. This allows parallel execution across a network of computers, and on multiprocessor machines which do not support thread parallelism.
 
A queue of tasks to be executed in parallel is generated by subdividing the screen space. Each task consists of a rectangular portion of the screen to be rendered. A rendering process, whether on the machine where mental ray was started or on some remote host, requests tasks from this queue and renders the corresponding portion of the image. Faster processes will request and complete more tasks than slower processes during the course of rendering an image, thus balancing the load. The same task-based adaptive load distribution is also used for a variety of other parallel computations in mental ray, such as tessellation of free-form surfaces. mental ray keeps track of the actual distribution to ensure that related tasks, even if they are part of different computations, are performed on the same host, to make optimal use of the distributed shared database with a minimum of network traffic.
 
The host that reads or translates the scene, or runs a client application such as front-end application software that mental ray is integrated in, is called the client host. The client host is responsible for connecting to all other hosts, called server hosts. A server host may also act as client host if an independent copy of mental ray is used by another user; systems do not become unavailable for other jobs if used as servers. However, running a mental ray server on a host may significantly degrade the performance of independent interactive application programs such as modelers on that host.
 
The list of hosts to connect to is stored in the .rayhosts file. The first existing file of .ray2hosts, .rayhosts, $HOME/.ray2hosts, $HOME/.rayhosts is used as the .rayhosts file. Each line contains a hostname with an optional colon-separated port number of the service to connect to, and an optional whitespace-separated parameter list that is passed to the host to supply additional command line parameters. Only the following parameters are supported here: -threads, -c compiler, -c flags, -c linker, and -ld libs. See the chapter on Command Line Options for a description of these parameters. The first line that literally matches the name of the host the client runs on is ignored; this allows all hosts on the network to share a single .rayhosts file, each ignoring the first reference to itself. Only clients ever access the host list. If the -hosts option is given to mental ray, the .rayhosts file is ignored, and the hosts are taken from the command line. In this case, no hosts are ignored. The library version of mental ray may get its host list directly from the application.
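 
An example .rayhosts file following these rules (hostnames, the port number, and thread counts are placeholders):
 
render01
render02:7004 -threads 4
render03 -threads 2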

1.5 Atmospheres and Volumes

 
1.5 Atmospheres and Volumes
 
The medium which surrounds all objects in a scene is known as the atmosphere. This is normally a transparent material with a refractive index of 1.0. A procedural atmosphere can be specified by naming a volume shader that determines the attenuation of light as it travels along a ray of a given length through the atmosphere. As with all other types of shaders, a user-written shader can be used in place of the standard volume shader. This capability can be used, for example, to create procedurally defined fog.
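 
In the scene file this is typically done by naming a volume shader in the camera (a sketch; the fog shader "my_fog" and its parameters are hypothetical):
 
camera "cam1"
    volume "my_fog" (          # hypothetical atmosphere shader
        "color" 0.8 0.8 0.9,
        "density" 0.05
    )
    focal 50.
    aperture 44.724
    aspect 1.333
    resolution 720 486
end camera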

1.4 Special Points and Curves

 
1.4 Special Points and Curves
 
Special points and curves force the triangulation of a free-form surface to include specific features. A special point is given in the parameter space of the surface and will be included as a corresponding vertex in the triangulation. A special curve is similar to a trimming curve but does not cause portions of the surface to be trimmed. Rather, the curve is included as a polyline in the triangulation of the surface. Special curves are useful for introducing flexibility in the triangulation along specific features. For example, if letters were to be embossed on a planar surface using displacement mapping, a series of contour curves around the letters could be created with special curves.

1.3 Edge Merging and Adjacency Detection

 
1.3 Edge Merging and Adjacency Detection
 
Surfaces are generally approximated independently of each other, and small cracks may be visible between them, especially if the approximation is coarse. It may be desirable to use a smaller tolerance for the trimming curves than for the surfaces themselves. If an object is well-modeled, if surfaces meet smoothly along their trimming curves, and if the curves are approximated to a high accuracy, the gaps between surfaces become invisible. The ideal solution, however, is to triangulate surfaces consistently along shared edges.
 
mental ray provides the connect construct for specifying connectivity between surfaces. The two surfaces are named, along with the two trimming curves and the parameter ranges along which they meet. Along such a connection the surfaces will be triangulated consistently, resulting in a seamless join. If the system generating the input for mental ray cannot determine such connectivity, adjacency detection may be used to discover connectivity automatically. One may give a merge epsilon within a geometry group, which will cause all surfaces in that group to be examined.
 
If any two surfaces approach each other along a trimming curve (or the surface boundary, if the surface is not trimmed) to within the given epsilon, they will be considered adjacent and an appropriate connection will be generated. Essential to the fast and correct determination of adjacency is the gathering of surfaces into appropriate groups. Obviously, the door of a car should not be considered connected to the body no matter how close the two surfaces approach each other. Moreover, the larger the groups, the more time will be required for adjacency detection.

1.2 Free-Form Surfaces

 
1.2 Free-Form Surfaces
 
mental ray supports free-form curves and surfaces in non-uniform rational B-spline (NURB), Bézier, Taylor (monomial), cardinal or basis matrix form. Any of these forms may be rational and may be of degree up to twenty-one. Surfaces may be trimmed. Internally, free-form surfaces are triangulated (approximated) before rendering. A variety of approximation techniques is available, including uniform and regular parametric, uniform spatial, curvature dependent, and combined methods.
 
The uniform parametric technique (referred to in the input file as parametric) subdivides the surface at equal intervals in parameter space. The input file specifies a factor which is multiplied by the surface degree to obtain the number of subdivisions in each parametric direction per patch. The regular parametric technique (regular parametric in the input file) is a simpler variant of the previous technique. It subdivides the surface at equal intervals in parameter space; the number of subdivisions per surface is directly specified in the input file. The uniform spatial technique (spatial in the input file) subdivides the surface at equal intervals in camera space (in the mi1 format) or in object space (in the mi2 format), or, rather, the intervals will never exceed the given upper bound. Optionally, this bound may be specified in raster space (in units of pixel diagonals) rather than camera or object space.
 
If, for example, one wanted to approximate a surface with sub-pixel size triangles, one could use the uniform spatial approximation technique with a raster space accuracy of 0.5 pixel diagonals. Note that the apparent size of a subdivided portion of a surface is computed as if the surface were parallel to the screen. Thus, the triangulation does not become coarser towards the edge of the object's silhouette. This has the advantage that the object will be well approximated even if seen in a mirror from a different angle, but such a definition can also result in an overly fine triangulation. View-dependent subdivision means that objects that are instanced more than once must be triangulated in multiple ways. A tradeoff between the additional memory required to store multiple objects and the reduced total number of instanced triangles must be evaluated to achieve optimal speed. Camera dependency works best if it is used for objects that are not instanced too many times.
 
The curvature dependent technique (known as curvature in the input file) subdivides a surface until two approximation criteria are satisfied simultaneously. The first is an upper bound on the maximum distance, in the space in which the object is defined, between the actual surface and its polygonal approximation (known as the distance tolerance). The second is an upper bound on the maximum angle (in degrees) between any two normals on a subdivided portion of the surface (known as the angle tolerance). Note that the first criterion is scale dependent while the second is scale independent. That is, one must know the size of the object in order to choose a suitable tolerance in the first case but not in the second. In spite of this apparent advantage of the angle criterion over the distance criterion, the angle criterion has the undesirable property of resolving small discontinuities ad infinitum, whereas the distance criterion will not resolve features whose scale is below the given tolerance. Either criterion can be disabled by setting the corresponding tolerance to zero. The distance criterion may optionally be given in raster space, again in units of pixel diagonals.
 
It is also possible to use an approximation technique which combines the bounds of the spatial technique and the curvature dependent technique. Both the uniform spatial and curvature dependent approximation techniques use a recursive subdivision process that can also be controlled by two additional parameters, specifying the minimum and maximum number of recursion levels. The subdivision can be forced to proceed at least as far as the given minimum level, and refinement can be halted at the maximum level.
 
All subdivisions of a free-form surface, apart from the regular parametric technique and the Delaunay technique2:1, begin at the patch level. If, for example, a plane is modeled with ten by ten patches, it will be approximated by at least two hundred triangles, although two triangles might be adequate. If mental ray seems to be producing a large number of triangles in spite of a low approximation accuracy, this is often due to the selected patch subdivision algorithm.
 
The curvature dependent approximation technique with the distance tolerance given in raster space and the angle tolerance set to zero has proved to be the most useful technique for high quality rendering.
 
For a quick rendering to examine materials or object positions, the uniform parametric technique may be used with a factor of zero.
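 
As rough sketches (the approximation statement grammar is reproduced from memory and should be checked against the scene format specification), the two recommendations above might read:
 
# quick preview: uniform parametric approximation with a factor of zero
approximate surface parametric 0. 0. "surf1"
 
# high quality: curvature dependent, distance tolerance in raster space,
# angle tolerance disabled by setting it to zero
approximate surface view curvature 0.5 0. 0 5 "surf1"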
 
 Free-form curves (trimming curves) may also be approximated by any of the above described methods using a technique and tolerances which are distinct from those of the surface which the curve trims. The definitions are essentially the same if one considers a curve segment to correspond to a surface patch. An important difference is that the uniform spatial, curvature dependent, and combined approximation techniques will coalesce curve segments if possible. A straight line consisting of one hundred co-linear segments may be approximated by a single line segment.
 
Functionality
 mental ray offers all the features traditionally expected of photorealistic rendering, together with functionality not found in most rendering software. The following sections describe parallelism, free-form surface input geometry, edge merging and adjacency detection, and various input entities such as materials, texture mapping and light sources, as well as global illumination features such as caustics.

Saturday, June 6, 2009

Introduction to mental ray part - 2
 

Only those elements of the scene that change from one frame to the next need to be redefined. This feature allows mental ray to optimize scene tessellation, preparation, acceleration data structure management, and network transfers, taking advantage of the time coherency of the animation.

The functionality of mental ray may be extended through runtime linking of user-supplied C or C++ subroutines, called shaders. This feature can be used to create geometric elements at runtime of the renderer, procedural textures, including bump and displacement maps, materials, atmosphere and other volume rendering effects, environments, camera lenses, and light sources. The user has access to a convenient environment of supporting functions and macros for use in writing shaders. The parameters of a user-provided shader can be freely chosen with name and type; user-defined shaders are not restricted to a list of predefined parameters. Available parameter types include integers, scalars, vectors, colors, textures, light sources, arrays, and nested structures. When a user-defined shader is called, mental ray will provide parameter values according to standard C calling conventions.
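 
For example, a user shader mixing several of these parameter types might be declared to mental ray roughly like this (the shader "my_marble" is hypothetical; the declaration grammar is from memory):
 
declare shader
    color "my_marble" (
        color  "base",
        color  "vein",
        scalar "turbulence",
        vector "scale",
        color texture "image",   # sub-texture argument
        array light "lights"
    )
    version 1
end declare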

The built-in material shaders provide a rich variety of parameters for describing material properties, including ambient color, diffuse color, specular color, transmission and shadow colors, a specular exponent, reflectivity and transparency coefficients, and an index of refraction. These parameters are interpreted by the shader specified for the material. All material parameters except the index of refraction may be mapped with one or more textures. Color textures include opacity information, and if multiple textures are applied to a single parameter they are composited. In addition, one or more bump, displacement, and/or reflection maps may be associated with a material.

Light passing through the space surrounding objects, as well as light passing through solid objects, is modified according to volume shaders, which allow the creation of effects such as fog, non-homogeneous transparency effects, and visible caustic beams. In addition to standard material environment maps, a global environment map can be specified that provides a solid background for rays leaving the scene.

mental ray can generate a variety of output formats, including common picture file formats and special-purpose formats for depth maps and label channels. Alpha channels and both 8 and 16 bits per component are supported, as well as a 32-bit floating-point component mode. User-supplied functions can be applied to the rendered image before it is written to disk.

Contour lines can be placed at discontinuities of depth or surface orientation, between different materials, or where the color contrast is high. The contour lines are anti-aliased, and there can be several levels of contours created by reflection or seen through semitransparent materials. The contours can be different for each material (and some materials can have no contours at all). The color and thickness of the contours can depend on geometry, position, illumination, material, frame number, and various other parameters. The resulting image may be output as a pure contour image, a contour image composited onto the regular image (in raster form in any of the supported formats), or as a PostScript file.

Phenomena consist of one or more cooperating shaders or shader trees (actually, shader DAGs; a DAG is a directed acyclic graph). A phenomenon consists of an "interface node" that looks exactly like a regular shader to the outside, and in fact may be a regular shader, but generally it will contain a link to a shader DAG. mental ray takes care of integrating all aspects of the phenomenon into the scene, which may include the introduction or modification of geometry, introduction of lenses, environments, and compile options, and other shaders and parameters.

The Phenomenon concept is conceived to unify, by packaging and hiding complexity, all those seemingly disparate approaches, techniques, and tricks, most notably (but not limited to) the concept of a shader, which are characteristic of today's state of the art in high-end 3D animation and digital special effects production. The aim is to provide a comprehensive, coherent, and consistent foundation for the reproduction of all visual phenomena by means of rendering. The Phenomenon concept provides the missing framework for the completion of the definition of a scene for the purpose of rendering in a unified manner.
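 
A minimal sketch of such an interface node in the .mi format (all names are made up, and the exact declaration grammar should be checked against the manual):
 
declare phenomenon
    color "glass_phen" (color "tint", scalar "ior")
    shader "mtl" "my_glass_material" (   # hypothetical inner shader of the DAG
        "tint" = interface "tint",
        "ior"  = interface "ior"
    )
    root = "mtl"
    version 1
end declare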

This book describes versions 2.0 and 2.1 of mental ray. Features that are available only in mental ray 2.1 but not in mental ray 2.0 are marked with "2:1".