
Monday, 24 November 2014

Illumination and Shading - computer graphics unit 8 material


Illumination and Shading


Illumination models are also frequently called lighting models or shading models. For example, some shading models invoke an illumination model for every pixel in the image, whereas others invoke an illumination model for only some pixels, and shade the remaining pixels by interpolation. Consequently, many of the illumination and shading models traditionally used in computer graphics include a multitude of kludges, "hacks," and simplifications that have no firm grounding in theory, but that work well in practice.


Modeling refraction, reflection, and shadows requires additional computation that is very similar to, and often is integrated with, hidden-surface elimination. Indeed, these effects occur because some of the "hidden surfaces" are not really hidden at all.


16.1 ILLUMINATION MODELS


16.1.1 Ambient Light


An illumination model can be expressed by an illumination equation in variables associated with the point on the object being shaded. The simplest model assumes that each object is self-luminous, glowing with its own intrinsic intensity. The illumination equation that expresses this simple model is


                         I = k_i,


where I is the resulting intensity and the coefficient k_i is the object's intrinsic intensity.


Because this equation does not depend on the position of the point being shaded, we can evaluate it once for each object. The process of evaluating the illumination equation at one or more points on an object is often referred to as lighting the object.


Now imagine, instead of self-luminosity, that there is a diffuse, nondirectional source of light, the product of multiple reflections of light from the many surfaces present in the environment. This is known as ambient light. If we assume that ambient light impinges equally on all surfaces from all directions, then our illumination equation becomes


I = I_a k_a


I_a is the intensity of the ambient light, assumed to be constant for all objects. The amount of ambient light reflected from an object's surface is determined by k_a, the ambient-reflection coefficient, which ranges from 0 to 1. The ambient-reflection coefficient is a material property. Like some of the other properties, the ambient-reflection coefficient is an empirical convenience and does not correspond directly to any physical property of real materials.
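As a minimal sketch (illustrative names, intensities assumed to lie in [0, 1]), the ambient term is just a product:

def ambient(Ia, ka):
    # I = Ia * ka: ambient light intensity scaled by the material's
    # ambient-reflection coefficient (0 = reflects nothing, 1 = reflects everything)
    return Ia * ka

print(ambient(0.3, 0.5))   # 0.15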


16.1.2 Diffuse Reflection:


Although objects illuminated by ambient light are more or less brightly lit in direct proportion to the ambient intensity, they are still uniformly illuminated across their surfaces. Now consider illuminating an object by a point light source, whose rays emanate uniformly in all directions from a single point. The object's brightness varies from one part to another, depending on the direction of and distance to the light source.


Lambertian reflection:


Dull, matte surfaces, such as chalk, exhibit diffuse reflection, also known as Lambertian reflection. These surfaces appear equally bright from all viewing angles because they reflect light with equal intensity in all directions.



For a given surface, the brightness depends only on the angle θ between the direction L to the light source and the surface normal N of Fig. 16.1. The diffuse term is I_p k_d cos θ = I_p k_d (N · L), where I_p is the point light source's intensity, k_d is the material's diffuse-reflection coefficient (a constant between 0 and 1), and N and L are normalized.
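A minimal sketch of this diffuse term, assuming unit-length N and L given as 3-tuples (function names are illustrative):

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(Ip, kd, N, L):
    # Lambertian term I = Ip * kd * (N . L); max() ignores light arriving from behind
    return Ip * kd * max(0.0, dot(N, L))

# Surface facing straight up, light 45 degrees off the normal:
print(diffuse(1.0, 0.8, (0, 0, 1), (0, 0.7071, 0.7071)))   # about 0.57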





 

Light-source attenuation:


If the projections of two parallel surfaces of identical material, lit from the eye, overlap in an image, Eq. (16.5) will not distinguish where one surface leaves off and the other begins, no matter how different their distances from the light source are. To distinguish them, we introduce a light-source attenuation factor, f_att, yielding

I = I_a k_a + f_att I_p k_d (N · L).

A common choice is f_att = min(1 / (c1 + c2 d_L + c3 d_L^2), 1), where d_L is the distance from the light source and c1, c2, and c3 are user-defined constants.
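A small sketch of the attenuation factor under the inverse-quadratic form above; the constants c1, c2, c3 are illustrative choices:

def attenuation(dL, c1=1.0, c2=0.0, c3=0.05):
    # f_att = min(1 / (c1 + c2*dL + c3*dL**2), 1), clamped so that very close
    # lights do not brighten a surface beyond the source intensity
    return min(1.0 / (c1 + c2 * dL + c3 * dL * dL), 1.0)

# Two otherwise identical surfaces at different distances now shade differently:
print(attenuation(2.0), attenuation(10.0))   # about 0.83 and 0.17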

 

 



Colored lights and surfaces:


Colored lights and surfaces are commonly treated by writing separate equations for each component of the color model. We represent an object's diffuse color by one value of O_d for each component.


For example, the triple (O_dR, O_dG, O_dB) defines an object's diffuse red, green, and blue components in the RGB color system.



Rather than restrict ourselves to a particular color model, we explicitly indicate those terms in an illumination equation that are wavelength-dependent by subscripting them with λ. Thus, the equation becomes

I_λ = I_aλ k_a O_dλ + f_att I_pλ k_d O_dλ (N · L).




 16.1.3 Atmospheric Attenuation


To simulate the atmospheric attenuation from the object to the viewer, many systems provide depth cueing. In this technique, which originated with vector-graphics hardware, more distant objects are rendered with lower intensity than are closer ones.


The scale factors determine the blending of the original intensity with that of a depth-cue color, I_dcλ. The goal is to modify a previously computed I_λ to yield the depth-cued value I'_λ that is displayed. Given z_o, the object's z coordinate, a scale factor s_o is derived that is used to interpolate between I_λ and I_dcλ, to determine

I'_λ = s_o I_λ + (1 - s_o) I_dcλ.
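A hedged sketch of the blend: the scale factor here is a simple linear ramp between front and back depth-cue planes, which is one common choice rather than the only one, and all names are illustrative.

def depth_cue(I, Idc, z, z_front, z_back, s_back=0.2):
    # s0 ramps from s_back at the back plane to 1.0 at the front plane
    t = (z - z_back) / (z_front - z_back)
    s0 = s_back + max(0.0, min(1.0, t)) * (1.0 - s_back)
    # blend the computed intensity toward the depth-cue color
    return s0 * I + (1.0 - s0) * Idc

print(depth_cue(0.9, 0.1, z=-90.0, z_front=-10.0, z_back=-100.0))   # about 0.33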





16.1.4 Specular Reflection


Specular reflection can be observed on any shiny surface. Illuminate an apple with a bright white light: the highlight is caused by specular reflection, whereas the light reflected from the rest of the apple is the result of diffuse reflection.



For a perfect mirror, light is reflected only in the direction of reflection R, which is L mirrored about N. Thus, a viewer can see specularly reflected light from a mirror only when the angle α in Fig. 16.8 is zero; α is the angle between R and the direction V to the viewpoint.


The Phong illumination model:


Phong Bui-Tuong [BUIT75] developed a popular illumination model for nonperfect reflectors, such as the apple. It assumes that maximum specular reflectance occurs when α is zero and falls off sharply as α increases. This rapid falloff is approximated by cos^n α, where n is the material's specular-reflection exponent.


The amount of incident light specularly reflected depends on the angle of incidence θ. If W(θ) is the fraction of specularly reflected light, then Phong's model is

I_λ = I_aλ k_a O_dλ + f_att I_pλ [k_d O_dλ cos θ + W(θ) cos^n α].




If the direction of reflection R and the viewpoint direction V are normalized, then cos α = R · V. In addition, W(θ) is typically set to a constant k_s, the material's specular-reflection coefficient, which ranges between 0 and 1. The value of k_s is selected experimentally to produce aesthetically pleasing results.
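A sketch of the specular term with W(θ) replaced by the constant k_s, assuming R and V are unit vectors (names are illustrative):

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(Ip, ks, R, V, n):
    # ks * (R . V)**n: the exponent n controls how sharply the highlight falls off
    return Ip * ks * max(0.0, dot(R, V)) ** n

V = (0, 0.26, 0.966)                    # viewer slightly off the mirror direction
print(phong_specular(1.0, 0.6, (0, 0, 1), V, n=50))   # tight highlight, about 0.11
print(phong_specular(1.0, 0.6, (0, 0, 1), V, n=5))    # broad highlight, about 0.50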





Calculating the reflection vector:


Calculating R requires mirroring L about N. As shown in Fig. 16.11, this can be accomplished with some simple geometry. Since N and L are normalized, the projection of L onto N is N cos θ. Note that R = N cos θ + S, where |S| is sin θ. But, by vector subtraction and congruent triangles, S is just N cos θ - L. Therefore, R = 2 N cos θ - L. Substituting N · L for cos θ and R · V for cos α yields

I_λ = I_aλ k_a O_dλ + f_att I_pλ [k_d O_dλ (N · L) + k_s ((2 N (N · L) - L) · V)^n].
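A short sketch of this construction, with N and L assumed normalized and N pointing away from the surface:

def reflect(N, L):
    # R = 2*N*(N . L) - L, i.e., L mirrored about N
    n_dot_l = sum(n * l for n, l in zip(N, L))
    return tuple(2.0 * n * n_dot_l - l for n, l in zip(N, L))

# Light arriving 45 degrees off a vertical normal leaves 45 degrees on the other side:
print(reflect((0, 0, 1), (0.7071, 0, 0.7071)))   # about (-0.7071, 0, 0.7071)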




The halfway vector:


An alternative formulation of Phong's illumination model uses the halfway vector H, so called because its direction is halfway between the directions of the light source and the viewer, as shown in Fig. 16.12: H = (L + V) / |L + V|. H is also known as the direction of maximum highlights, and the specular term then uses (N · H)^n in place of (R · V)^n.



16.1.6 Multiple Light Sources


If there are m light sources, then the terms for each light source are summed:

I_λ = I_aλ k_a O_dλ + Σ_{1≤i≤m} f_att_i I_pλi [k_d O_dλ (N · L_i) + k_s (R_i · V)^n].



The summation harbors a new possibility for error, in that I_λ can now exceed the maximum displayable pixel value.
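A sketch combining the terms above for m lights, per color component; each light is a dict of illustrative fields, and the final clamp is one simple way to keep the sum displayable:

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(N, L):
    d = dot(N, L)
    return tuple(2.0 * n * d - l for n, l in zip(N, L))

def shade(Ia, ka, kd, ks, n, N, V, lights):
    # lights: list of {"Ip": intensity, "L": unit direction to light, "fatt": attenuation}
    I = Ia * ka
    for lt in lights:
        L = lt["L"]
        R = reflect(N, L)
        I += lt["fatt"] * lt["Ip"] * (kd * max(0.0, dot(N, L))
                                      + ks * max(0.0, dot(R, V)) ** n)
    return min(I, 1.0)   # simple clamp; rescaling all pixel values is another option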


16.4 SHADOWS


Visible-surface algorithms determine which surfaces can be seen from the viewpoint; shadow algorithms determine which surfaces can be "seen" from the light source. Thus, visible-surface algorithms and shadow algorithms are essentially the same.


Note that areas in the shadow of all point light sources are still illuminated by the ambient light. Although computing shadows requires computing visibility from the light source, as we have pointed out, it is also possible to generate “fake" shadows without performing any visibility tests.


16.4.1 Scan-Line Generation of Shadows


One of the oldest methods for generating shadows is to augment a scan-line algorithm to interleave shadow and visible-surface processing [APPE68; BOUK70b]. Using the light source as a center of projection, the edges of polygons that might potentially cast shadows are projected onto the polygons intersecting the current scan line. When the scan crosses one of these shadow edges, the colors of the image pixels are modified accordingly.


A brute-force implementation of this algorithm must compute all n(n-1) projections of every polygon on every other polygon. Bouknight and Kelley instead use a clever preprocessing step in which all polygons are projected onto a sphere surrounding the light source, with the light source as center of projection.


 While the scan-line algorithm’s regular scan keeps track of which regular polygon edges are being crossed, a separate, parallel shadow scan keeps track of which shadowing polygon projection edges are crossed, and thus which shadowing polygon projections the shadow scan is currently "in".




16.4.2 A Two-Pass Object-Precision Shadow Algorithm


Atherton, Weller, and Greenberg have developed an algorithm that performs shadow determination before visible-surface determination [ATHE78]. They process the object description by using the same algorithm twice, once for the viewpoint, and once for the light source. 








 The lit polygons are transformed back into the modeling coordinates and are merged with a copy of the original database as surface-detail polygons, creating a viewpoint-independent merged database, shown in Fig. 16.29. Note that the implementation illustrated in Fig. 16.28 performs the same transformations on both databases before merging them.


16.4.3 Shadow Volumes


Crow [CROW77a] describes how to generate shadows by creating for each object a shadow volume that the object blocks from the light source. A shadow volume is defined by the light source and an object and is bounded by a set of invisible shadow polygons. As shown in Fig. 16.30, there is one quadrilateral shadow polygon for each silhouette edge of the object relative to the light source. Three sides of a shadow polygon are defined by a silhouette edge of the object and the two lines emanating from the light source through that edge's endpoints.




16.4.4 A Two-Pass z-Buffer Shadow Algorithm


Williams [WILL78] developed a shadow-generation method based on two passes through a z-buffer algorithm, one for the viewer and one for the light source. His algorithm, unlike the two-pass algorithm of Section 16.4.2, determines whether a surface is shadowed by using image-precision calculations.


Whenever a pixel is determined to be visible, its object-precision coordinates in the observer's view (x_o, y_o, z_o) are transformed into coordinates in the light source's view (x'_o, y'_o, z'_o). The transformed coordinates x'_o and y'_o are used to select the value z_L in the light source's z-buffer; if z'_o is farther from the light than z_L, something blocks the light, and the pixel is shaded as being in shadow. In analogy to texture mapping, we can think of the light's z-buffer as a shadow map.
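A hedged sketch of that lookup, assuming the visible point has already been transformed and projected into the light's view; the bias term is a common practical addition to avoid surfaces shadowing themselves, not part of the original formulation.

def in_shadow(x_l, y_l, z_l, shadow_map, bias=1e-3):
    # shadow_map[y][x] holds z_L, the depth of the closest surface the light sees
    zL = shadow_map[y_l][x_l]
    # if the point lies farther from the light than z_L, something blocks it
    return z_l > zL + bias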


16.4.5 Global Illumination Shadow Algorithms


Ray-tracing and radiosity algorithms have been used to generate some of the most impressive pictures of shadows in complex environments. Simple ray tracing has been used to model shadows from point light sources, whereas more advanced versions allow extended light sources.


16.5 TRANSPARENCY


Much as surfaces can have specular and diffuse reflection, those that transmit light can be transparent or translucent. We can usually see clearly through transparent materials, such as glass, although in general the rays are refracted (bent). Diffuse transmission occurs through translucent materials, such as frosted glass. Rays passing through translucent materials are jumbled by surface or internal irregularities, and thus objects seen through translucent materials are blurred.


16.5.1 Nonrefractive Transparency


The simplest approach to modeling transparency ignores refraction, so light rays are not bent as they pass through the surface. Thus, whatever is visible on the line of sight through a transparent surface is also geometrically located on that line of sight. Although refractionless transparency is not realistic, it is often a useful approximation.


Two different methods have been commonly used to approximate the way in which the colors of two objects are combined when one object is seen through the other. We shall refer to these as interpolated and filtered transparency.


Interpolated transparency:


Interpolated transparency determines the shade of a pixel in the intersection of two polygons' projections by linearly interpolating the individual shades calculated for the two polygons:

I_λ = (1 - k_t1) I_λ1 + k_t1 I_λ2.




The transmission coefficient k_t1 measures the transparency of polygon 1, and ranges between 0 and 1. When k_t1 is 0, the polygon is opaque and transmits no light; when k_t1 is 1, the polygon is perfectly transparent and contributes nothing to the intensity I_λ. The value 1 - k_t1 is called the polygon's opacity.



For a more realistic effect, we can interpolate only the ambient and diffuse components of polygon 1 with the full shade of polygon 2, and then add in polygon 1's specular component.


Another approach, often called screen-door transparency, literally implements a mesh by rendering only some of the pixels associated with a transparent object's projection. The low-order bits of a pixel's (x, y) address are used to index into a transparency bit mask. If the indexed bit is 1, then the pixel is written; otherwise, it is suppressed, and the next closest polygon at that pixel is visible.
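A tiny sketch of the idea with an illustrative 4 x 4 mask giving roughly 50% coverage:

MASK = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]

def write_pixel(x, y):
    # low-order bits of the pixel address index the transparency bit mask;
    # 1 means draw the transparent polygon's pixel, 0 means let what is behind show
    return MASK[y & 3][x & 3] == 1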


Filtered transparency:


Filtered transparency treats a polygon as a transparent filter that selectively passes different wavelengths; it can be modeled by

I_λ = I_λ1 + k_t1 O_tλ I_λ2,



where O_tλ is polygon 1's transparency color. A colored filter may be modeled by choosing a different value of O_tλ for each λ (but see Section 16.9). In either interpolated or filtered transparency, if additional transparent polygons are in front of these polygons, then the calculation is invoked recursively for polygons in back-to-front order, each time using the previously computed I_λ as I_λ2.
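A sketch of both blending rules, applied per color component; I1 is the transparent polygon's own shade, I2 the shade of whatever lies behind it (names are illustrative):

def interpolated(I1, I2, kt1):
    # I = (1 - kt1) * I1 + kt1 * I2
    return (1.0 - kt1) * I1 + kt1 * I2

def filtered(I1, I2, kt1, Ot1):
    # I = I1 + kt1 * Ot1 * I2, with Ot1 the polygon's transparency color
    return I1 + kt1 * Ot1 * I2

# With several transparent surfaces, apply the chosen rule back to front,
# feeding each result back in as the new I2 for the next surface in front.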


Implementing transparency:


Several visible-surface algorithms can be readily adapted to incorporate transparency, including scan-line and list-priority algorithms. In list-priority algorithms, the color of a pixel about to be covered by a transparent polygon is read back and used in the illumination model while the polygon is being scan converted.


Most z-buffer-based systems support screen-door transparency because it allows transparent objects to be intermingled with opaque objects and to be drawn in any order. Unfortunately, the z-buffer does not store the information needed to determine which transparent polygons are in front of the opaque polygon, or even the polygons' relative order.


Mammen [MAMM89] describes how to render transparent objects properly in back-to-front order in a z-buffer-based system through the use of multiple rendering passes and additional memory. First, all the opaque objects are rendered using a conventional z-buffer. Then, transparent objects are processed into a separate set of buffers that contain, for each pixel, a transparency value and a flag bit, in addition to the pixel's color and z value. Each flag bit is initialized to off and each z value is set to the closest possible value. Information for flagged pixels is then blended with that in the original frame buffer and z-buffer. A flagged pixel's transparency z value replaces that in the opaque z-buffer, and the flag bit is reset. This process is repeated to render successively closer objects at each pixel.


Kay and Greenberg [KAY79b] have implemented a useful approximation to the increased attenuation that occurs near the silhouette edges of thin curved surfaces, where light passes through more material. They define k_t in terms of a nonlinear function of the z component of the surface normal after perspective transformation.



One such function is k_t = k_tmin + (k_tmax - k_tmin) [1 - (1 - z_N)^m], where k_tmin and k_tmax are the object's minimum and maximum transparencies, z_N is the z component of the normalized surface normal at the point for which k_t is being computed, and m is a power factor (typically 2 or 3). A higher m models a thinner surface. This new value of k_t may be used as k_t1 in either interpolated or filtered transparency.


16.5.2 Refractive Transparency


Refractive transparency is significantly more difficult to model than nonrefractive transparency, because the geometrical and optical lines of sight are different. If refraction is considered in Fig. 16.35, object A is visible through the transparent object along the line of sight shown; if refraction is ignored, object B is visible. The relationship between the angle of incidence θ_i and the angle of refraction θ_t is given by Snell's law,

sin θ_i / sin θ_t = η_tλ / η_iλ,



where η_iλ and η_tλ are the indices of refraction of the two materials. A material's index of refraction is the ratio of the speed of light in a vacuum to the speed of light in the material. The index of refraction's wavelength dependence is evident in many instances of refraction as dispersion.




Total internal reflection:






When light passes from one medium into another whose index of refraction is lower, the angle θ_t of the transmitted ray is greater than the angle θ_i. If θ_i becomes sufficiently large, then θ_t exceeds 90° and the ray is reflected from the interface between the media, rather than being transmitted. This phenomenon is known as total internal reflection, and the smallest θ_i at which it occurs is called the critical angle.
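A sketch of computing the transmitted direction from Snell's law with unit vectors: I points toward the surface, N away from it, and eta is the ratio η_i / η_t. A None result signals total internal reflection. Names and the example values are illustrative.

import math

def refract(I, N, eta):
    cos_i = -sum(i * n for i, n in zip(I, N))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                               # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n for i, n in zip(I, N))

# Glass to air (eta about 1.5): a 45-degree ray exceeds the ~41.8-degree critical angle
print(refract((0.7071, 0, -0.7071), (0, 0, 1), 1.5))   # None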




16.10 IMPROVING THE CAMERA MODEL


We have modeled the image produced by a pinhole camera: each object, regardless of its position in the environment, is projected sharply and without distortion in the image. Real cameras (and eyes) have lenses that introduce a variety of distortion and focusing effects.


Depth of field: Objects appear to be more or less in focus depending on their distance from the lens, an effect known as depth of field.


A real lens's focusing effect causes light rays that would not pass through the pinhole to strike the lens and to converge to form the image. These rays see slightly different views of the scene.


Motion blur. Motion blur is the streaked or blurred appearance that moving objects have because a camera’s shutter is open for a finite amount of time.


16.11 GLOBAL ILLUMINATION ALGORITHMS


An illumination model computes the color at a point in terms of light directly emitted by light sources and of light that reaches the point after reflection from and transmission through its own and other surfaces. This indirectly reflected and transmitted light is often called global illumination. In contrast, local illumination is light that comes directly from the light sources to the point being shaded. View-independent approaches, such as radiosity algorithms, model all of an environment's interactions with the light sources first in a view-independent stage, and then compute one or more images for the desired viewpoints using conventional visible-surface and interpolative shading algorithms.




View—dependent algorithms are well-suited for handling specular phenomena that are highly dependent on the viewer‘s position, but may perform extra work when modeling diffuse phenomena that change little over large areas of an image, or between images made from different viewpoints. On the other hand, view—independent algorithms model diffuse phenomena efficiently, but require overwhelming amounts of storage to capture enough information about specular phenomena.


16.12 RECURSIVE RAY TRACING


This simple algorithm determined the color of a pixel at the closest intersection of an eye ray with an object. To calculate shadows, we fire an additional ray from the point of intersection to each of the light sources. This is shown for a single light source in the figure.



The figure above is reproduced from Appel's paper, the first published on ray tracing for computer graphics. If one of these shadow rays intersects any object along the way, then the object is in shadow at that point and the shading algorithm ignores the contribution of the shadow ray's light source.


Figure 16.52 shows two pictures that Appel rendered with this algorithm, using a pen plotter. He simulated a halftone pattern by placing a different size "+" at each pixel in the grid, depending on the pixel's intensity. 



            Each of these reflection and refraction rays may, in turn, recursively spawn shadow, reflection, and refraction rays, as shown in Fig. 16.54. 
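A compact, runnable sketch of this recursion for a sphere-only grayscale scene; refraction rays are omitted for brevity, and the helper names, constants, and toy scene are illustrative rather than taken from the text.

import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def norm(a):
    length = math.sqrt(dot(a, a))
    return scale(a, 1.0 / length)

def hit_sphere(origin, d, center, radius):
    # smallest positive t where origin + t*d meets the sphere (d must be unit length)
    oc = sub(origin, center)
    b = 2.0 * dot(oc, d)
    cc = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * cc
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def closest_hit(origin, d, spheres):
    best = None
    for s in spheres:
        t = hit_sphere(origin, d, s["center"], s["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, s)
    return best

def trace(origin, d, spheres, light_pos, depth=0, max_depth=3):
    hit = closest_hit(origin, d, spheres)
    if hit is None:
        return 0.0                                    # background
    t, s = hit
    p = add(origin, scale(d, t))
    N = norm(sub(p, s["center"]))
    L = norm(sub(light_pos, p))
    color = 0.1 * s["kd"]                             # ambient term
    # shadow ray toward the light (ignores blockers beyond the light; fine for this toy scene)
    if closest_hit(p, L, spheres) is None:
        color += s["kd"] * max(0.0, dot(N, L))        # diffuse term
    if depth < max_depth and s["ks"] > 0:             # spawn a reflection ray recursively
        R = sub(d, scale(N, 2.0 * dot(d, N)))
        color += s["ks"] * trace(p, R, spheres, light_pos, depth + 1)
    return min(color, 1.0)

spheres = [{"center": (0, 0, -3), "radius": 1.0, "kd": 0.7, "ks": 0.3},
           {"center": (0, -101, -3), "radius": 100.0, "kd": 0.8, "ks": 0.0}]
print(trace((0, 0, 0), norm((0, -0.2, -1)), spheres, light_pos=(5, 5, 0)))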



16.12.1 Efficiency Considerations for Recursive Ray Tracing


The general efficiency techniques are even more important here than in visible-surface ray tracing, for several reasons. Each ray-surface intersection spawns one shadow ray for each of the m light sources; thus, there are potentially m(2^n - 1) shadow rays for a ray tree of depth n. To make matters worse, since rays can come from any direction, traditional efficiency ploys, such as clipping objects to the view volume and culling back-facing surfaces relative to the eye, cannot be used in recursive ray tracing. Objects that would otherwise be invisible, including back faces, may be reflected from or refracted through visible surfaces.


Item buffers: One way of speeding up ray tracing is simply not to use it at all when determining those objects directly visible to the eye. Weghorst, Hooper, and Greenberg describe how to create an item buffer by applying a less costly visible-surface algorithm, such as the z-buffer algorithm, to the scene, using the same viewing specification.


Reflection maps: Tracing rays can be avoided in other situations, too. Hall shows how to combine ray tracing with reflection maps. The basic idea is to do less work for the secondary rays than for the primary rays. Those objects that are not directly visible in an image are divided into two groups on the basis of an estimation of their indirect visibility. Hall also points out that reflected and refracted images are often extremely distorted. Therefore, good results may be achieved by intersecting the reflection and refraction rays with object definitions less detailed than those used for the eye rays.


Adaptive tree-depth control: Although ray tracing is often used to depict highly specular objects, most of an image's area is usually not filled with such objects. Consequently, a high recursion level often results in unnecessary processing for large parts of the picture. Hall introduced the use of adaptive tree-depth control, in which a ray is not cast if its contribution to the pixel's intensity is estimated to be below some preset threshold. This is accomplished by approximating a ray's maximum contribution by calculating its intensity with the assumption that the ray's child rays have intensities of 1. This allows the ray's contribution to its parent to be estimated. As the ray tree is built, the maximum contribution of a ray is multiplied by those of its ancestors to derive the ray's maximum contribution to the pixel.
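A small sketch of this pruning test; the threshold value and names are illustrative, and cast_ray stands in for whatever function actually traces the secondary ray.

THRESHOLD = 0.01     # cutoff on a ray's maximum possible contribution to the pixel

def secondary_contribution(weight, ancestor_weight, cast_ray):
    # Bound the ray's contribution by assuming its own children return at most 1;
    # if even that bound is below the threshold, skip casting the ray entirely.
    max_contribution = weight * ancestor_weight
    if max_contribution < THRESHOLD:
        return 0.0
    return weight * cast_ray(max_contribution)   # pass the bound down as the new ancestor weight

# A reflection ray with ks = 0.2, spawned under ancestors whose weights multiply to 0.04:
print(secondary_contribution(0.2, 0.04, lambda w: 1.0))   # 0.0: pruned, it cannot matter visibly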


Light buffers: Haines and Greenberg have introduced the notion of a light buffer to increase the speed with which shadow rays are processed. A light buffer is a cube centered about a light source and aligned with the world-coordinate axes, as shown in Fig. 16.57(a). Each side is tiled with a regular grid of squares, and each square is associated with a depth-sorted list of surfaces that can be seen through it from the light. The lists are filled by scan converting the objects in the scene onto each face of the light buffer, with the center of projection at the light.



Ray classification: The spatial-partitioning approaches make it possible to determine which objects lie in a given region of 3D space. Arvo and Kirk have extended this concept to partition rays by the objects that they intersect, a technique called ray classification. A ray may be specified by its position in 5D ray space, determined by the 3D position of its origin in space and its 2D direction in spherical coordinates. A point in ray space defines a single ray, whereas a finite subset of ray space defines a family of rays, or beam. Ray classification adaptively partitions ray space into subsets, each of which is associated with a list of objects that it contains (i.e., that one of its rays could intersect). To determine the candidate objects that may be intersected by any ray, we need only to retrieve the list of objects associated with the subset of ray space in which the ray resides.



16.12.2 A Better Illumination Model


Hall has developed a model in which the specular light expressions are scaled by a wavelength-dependent Fresnel reflectivity term. An additional term for transmitted local light is added to take into account the contribution of transmitted light directly emitted by the light sources, and is also scaled by the Fresnel transmissivity terms. The global reflected and refracted rays further take into account the transmittance of the medium through which they travel.


16.12.3 Area-Sampling Variations


One of conventional ray tracing's biggest drawbacks is that this technique point-samples the image on a regular grid. Whitted suggested that unweighted area sampling could be accomplished by replacing each linear eye ray with a pyramid defined by the eye and the four corners of a pixel. These pyramids would be intersected with the objects in the environment, and sections of them would be recursively refracted and reflected by the objects that they intersect.


Cone tracing: Cone tracing, developed by Amanatides, generalizes the linear rays into cones. One cone is fired from the eye through each pixel, with an angle wide enough to encompass the pixel. The cone is intersected with objects in its path by calculating approximate fractional blockage values for a small set of those objects closest to the cone's origin. Refraction and reflection cones are determined from the optical laws of spherical mirrors and lenses as a function of the surface curvature of the object intersected and the area of intersection. The effects of scattering on reflection and refraction are simulated by further broadening the angles of the new reflection and refraction cones.


Beam tracing: Beam tracing, introduced by Heckbert and Hanrahan, is an object-precision algorithm for polygonal environments that traces pyramidal beams, rather than linear rays. The viewing pyramid's beam is intersected with each polygon in the environment, in front-to-back sorted order relative to the pyramid's apex. If a polygon is intersected, and therefore visible, it must be subtracted from the pyramid using a clipping algorithm. For each visible polygon fragment, two polyhedral pyramids are spawned, one each for reflection and refraction. The algorithm proceeds recursively, with termination criteria similar to those used in ray tracing. Beam tracing takes advantage of coherence to provide impressive speedups over conventional ray tracing at the expense of a more complex algorithm, limited object geometry, and incorrectly modeled refraction.


Pencil tracing: Shinya, Takahashi, and Naito have implemented an approach called pencil tracing that solves some of the problems of cone tracing and beam tracing. A pencil is a bundle of rays consisting of a central axial ray, surrounded by a set of nearby paraxial rays. Each paraxial ray is represented by a 4D vector that represents its relationship to the axial ray. Two dimensions express the paraxial ray's intersection with a plane perpendicular to the axial ray; the other two dimensions express the paraxial ray's direction. In many cases, only an axial ray and solid angle suffice to represent a pencil. If pencils of sufficiently small solid angle are used, then reflection and refraction can be approximated well by a linear transformation expressed as a 4 × 4 matrix. They have developed error-estimation techniques for determining an appropriate solid angle for a pencil. Conventional rays must be traced where a pencil would intersect the edge of an object.


16.12.4 Distributed Ray Tracing


The previous approaches perform area sampling by casting solid beams rather than infinitesimal rays. In contrast, distributed ray tracing, developed by Cook, Porter, and Carpenter, is based on a stochastic approach to supersampling that trades off the objectionable artifacts of aliasing for the less offensive artifacts of noise. The word distributed in this technique's name refers to the fact that rays are stochastically distributed to sample the quantities that produce effects such as blurred specular reflection from rough surfaces. The basic concepts have also been applied to other algorithms besides ray tracing. A Poisson distribution is obtained by displacing by a small random distance the position of each element of a regularly spaced sample grid; this technique is called jittering. In sampling the 2D image plane, each sample in a regular grid is jittered by two uncorrelated random quantities, one each for x and y, both generated with a sufficiently small variance that the samples do not overlap (Fig. 16.59). Figure 16.60 shows a minimum-distance Poisson distribution and a jittered regular distribution.
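A small sketch of jittering the subpixel sample positions for one pixel; the grid size is illustrative.

import random

def jittered_samples(n):
    # n*n subpixel samples: each cell of a regular grid gets one sample, displaced by
    # a uniform random offset within its own cell so neighboring samples never overlap
    cell = 1.0 / n
    return [((i + random.random()) * cell, (j + random.random()) * cell)
            for j in range(n) for i in range(n)]

print(jittered_samples(4))   # 16 stochastically placed sample positions for one pixel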


   











16.12.5 Ray Tracing from the Light Sources


One serious problem with ray tracing is caused by tracing all rays from the eye. Shadow rays are cast only to direct sources of light, which are treated separately by the algorithm. Therefore, the effects of indirect reflected and refracted light sources, such as mirrors and lenses, are not reproduced properly: light rays bouncing off a mirror do not cast shadows, and the shadows of transparent objects do not show evidence of refraction, since shadow rays are cast in a straight line toward the light source. It might seem that we would need only to run a conventional ray tracer "backward" from the light sources to the eye to achieve these effects. This concept has been called backward ray tracing, to indicate that it runs in the reverse direction from regular ray tracing, but it is also known as forward ray tracing, to stress that it follows the actual path from the lights to the eye. We call it ray tracing from the light sources to avoid confusion.


This method creates a new problem of its own, since an insufficient number of the rays traced from the light sources would ever strike the image plane, let alone pass through the focusing lens or pinhole. Instead, ray tracing from the light sources can be used to supplement the lighting information obtained by regular ray tracing. Heckbert and Hanrahan suggest an elaboration of their beam-tracing shadow method (Section 16.12.3) to accomplish this. If a light's beam tree is traced recursively, successive levels of the tree below the first level represent indirectly illuminated polygon fragments. Adding these to the database as surface-detail polygons allows indirect specular illumination to be modeled. Arvo has implemented a ray tracer that uses a preprocessing step in which rays from each light source are sent into the environment. Each ray is assigned an initial quota of energy, some of which is deposited at each intersection it makes with a diffusely reflecting object. He compensates for the relative sparseness of ray intersections by mapping each surface to a regular rectangular grid of counters that accumulate the deposited energy. Each ray's contribution is bilinearly partitioned among the four counters that bound the grid box in which the ray hits. A conventional ray-tracing pass is then made, in which the first pass's interpolated contributions at each intersection are used, along with the intensities of the visible light sources, to compute the diffuse reflection. Unfortunately, if a light ray strikes an object on the invisible side of a silhouette edge as seen from the eye, the ray can affect the shading on the visible side. Note that both of these approaches to ray tracing from the light sources use purely specular reflectivity geometry to propagate rays in both directions.


16.14 THE RENDERING PIPELINE


16.14.1 Local Illumination Pipelines


Z-buffer and Gouraud shading: Perhaps the most straightforward modification to the pipeline occurs in a system that uses the z-buffer visible-surface algorithm to render Gouraud-shaded polygons. The z-buffer algorithm has the advantage that primitives may be presented to it in any order. Therefore, as before, primitives are obtained by traversing the database, and are transformed by the modeling transformation into the WC system.



Primitives may have associated surface normals that were specified when the model was built. The normals defined with the objects may represent the true surface geometry, or may specify user-defined surface blending effects, rather than just being the averages of the normals of shared faces in the polygonal mesh approximation.


Our next step is to cull primitives that fall entirely outside of the window and to perform back-face culling. This trivial-reject phase is typically performed now because we want to eliminate unneeded processing in the lighting step that follows. Because we are using Gouraud shading, the illumination equation is evaluated at each vertex, which preserves the correct angle and distance from each light to the surface. If vertex normals were not provided with the object, they may be computed immediately before lighting the vertices. Culling and lighting are often performed in a lighting coordinate system that is a rigid-body transformation of WC.



Next, objects are transformed to NPC by the viewing transformation and clipped to the view volume. Division by W is performed, and objects are mapped to the viewport. At this point, the clipped primitive is submitted to the z-buffer algorithm, which performs rasterization, interleaving scan conversion with the interpolation needed to compute the z value and color-intensity values for each pixel. If a pixel is determined to be visible, its color-intensity values may be further modified by depth cueing. Although this pipeline may seem straightforward, there are many new issues that must be dealt with to provide an efficient and correct implementation.
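A hedged sketch of the per-span interpolation performed during rasterization for Gouraud shading; perspective-correct interpolation is ignored, and put_pixel stands in for whatever routine performs the z-buffer test (and depth cueing).

def shade_span(x_left, x_right, z_left, z_right, i_left, i_right, put_pixel):
    # Linearly interpolate depth and intensity across one span of a scan line.
    for x in range(x_left, x_right + 1):
        t = 0.0 if x_right == x_left else (x - x_left) / (x_right - x_left)
        z = z_left + t * (z_right - z_left)
        i = i_left + t * (i_right - i_left)
        put_pixel(x, z, i)

# Phong shading would instead interpolate the vertex normals across the span
# and evaluate the illumination equation at every pixel.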


Z-buffer and Phong shading: This simple pipeline must be modified to accommodate Phong shading. Because Phong shading interpolates surface normals, rather than intensities, the vertices cannot be lit early in the pipeline. Instead, each object must be clipped, transformed by the viewing transformation, and passed to the z-buffer algorithm. Finally, lighting is performed with the interpolated surface normals that are derived during scan conversion. Thus, each point and its normal must be mapped back into a coordinate system that is isometric to WC to evaluate the illumination equation.



List-priority algorithm and Phong shading: When a list-priority algorithm is used, the primitives obtained from traversal and processed by the modeling transformation are inserted into a separate database, such as a BSP tree, as part of preliminary visible-surface determination. The application program and the graphics package may each keep separate databases; here we see that rendering can require yet another database. Since polygons may be split in this case, correct shading information must be determined for the newly created vertices. The rendering database can then be traversed to return primitives in correct back-to-front order. The overhead of building this database can be amortized over the creation of multiple pictures. Primitives extracted from the rendering database are clipped and normalized, and are presented to the remaining stages of the pipeline.




16.14.2 Global Illumination Pipelines


Incorporating global-illumination effects requires information about the geometric relationships between the object being rendered and the other objects in the world. One approach is to calculate the needed information from a specific viewpoint in advance of scan conversion and to store it in tables; this eliminates the need to access the full database representation of other objects while processing the current object. In the case of shadows, preprocessing the environment to add surface-detail polygon shadows is another way to allow the use of an otherwise conventional pipeline.


Radiosity: Diffuse radiosity algorithms offer an interesting example of how to take advantage of the conventional pipeline to achieve global-illumination effects. These algorithms process objects and assign to them a set of view-independent vertex intensities. The objects may then be presented to a modified version of the z-buffer and Gouraud-shading pipeline that eliminates the lighting stage.



Ray tracing: Ray tracing has the simplest pipeline, because the objects visible at each pixel and their illumination are determined entirely in WC. Once objects have been obtained from the database and transformed by the modeling transformation, they are loaded into the ray tracer's WC database, which is typically implemented using the techniques of Sections 15.10.2 and 16.12.1 to support efficient ray-intersection calculations.




16.14.3 Designing Flexible Renderers


Several design approaches have been suggested to increase the ease with which illumination and shading algorithms may be implemented and used.


Modularization: A straightforward approach is to modularize the illumination and shading model in a part of the rendering system that is often known as its shader. If there is a standard mechanism for passing parameters to shaders, different shaders can be used in the same system, and the decision about which shader to call can be made at run time, based on the object being rendered. For example, a system may perform scan conversion using a scan-line algorithm and add the results to a linked list of spans; each span contains information about its endpoints, including their x and z values.


Special languages: In contrast to providing extensibility at the level of the programming language in which the system is built, it is possible to design special languages that are better suited to specific graphics tasks. Cook has designed a special-purpose language in which a shader is built as a tree expression called a shade tree. Each node takes parameters from its children and produces parameters for its parent. The parameters are the terms of the illumination equation. Some nodes, such as diffuse, specular, or square root, are built into the language. Others can be defined by the user and dynamically loaded when needed. All nodes can access information about the lights. A shade tree thus describes a particular shading process and is associated with one or more objects through use of a separate modeling language. Different objects may have different shade trees.
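A toy sketch of the idea in Python; the node set and the "copper_like" tree are made up for illustration and are not Cook's actual shade-tree language.

class Node:
    def __init__(self, fn, *children):
        self.fn, self.children = fn, children
    def eval(self, ctx):
        # each node evaluates its children, then combines their results
        return self.fn(ctx, *[c.eval(ctx) for c in self.children])

leaf = lambda name: Node(lambda ctx: ctx[name])          # reads a parameter from the context
diffuse  = Node(lambda ctx, kd: kd * max(0.0, ctx["NdotL"]), leaf("kd"))
specular = Node(lambda ctx, ks: ks * max(0.0, ctx["RdotV"]) ** ctx["n"], leaf("ks"))
copper_like = Node(lambda ctx, d, s: d + s, diffuse, specular)

print(copper_like.eval({"kd": 0.7, "ks": 0.3, "n": 10, "NdotL": 0.8, "RdotV": 0.9}))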


                         


The flexibility of shade trees and pixel-stream editors may be combined by designing a rendering system that allows its users to write their own shaders in a special programming language and to associate them with selected objects. This approach is taken in the RenderMan Interface. It defines a set of key places in the rendering process at which user-defined or system-defined shaders can be called.


Visible-surface determination is done with a subpixel z-buffer whose subpixel centers are jittered to accomplish stochastic sampling. The closest micropolygon covering a subpixel center is visible at that subpixel. To avoid the need to store micropolygon meshes and subpixel z and intensity values for the entire image, the image is divided into rectangular partitions, into which each object is sorted by the upper-left corner of its extent. The partitions are then processed left to right, top to bottom. The resulting subobjects or micropolygons are placed in the partitions that they intersect.


16.14.4 Progressive Refinement:


Instead of attempting to render a final version of a picture all at once, we can first render the picture coarsely and then progressively refine it, to improve it. For example, a first image might have no antialiasing, simpler object models, and simpler shading. While the user views an image, idle cycles may be spent improving its quality. If there is a metric by which to determine what to do next, then refinement can occur adaptively. Bergman, Fuchs, Grant, and Spach have developed such a system that uses a variety of heuristics to determine how it should spend its time. For example, a polygon is Gouraud-shaded, rather than constant-shaded, only if the range of its vertex intensities exceeds a threshold. Ray-tracing and radiosity algorithms are both amenable to progressive refinement.
