**by William Shoaff with lots of help**

- Ray tracing handles global reflections and refractions (transmissions), hidden surface removal, shadows, and other special effects
- We want to consider
- How to build a ray tracer
- How to make it efficient
- How to improve the image quality with special effects

- Whitted presented the first general recursive ray tracing model in 1980 (others had used some of the ideas before)
- Ray tracing follows specularly reflected and refracted rays through a scene
- The rays are infinitely thin, so aliasing is a problem
- Ray tracing can be used as a basic technique for volume rendering
- Ray tracing is recursive; secondary rays are followed recursively from primary rays

- The basic algorithm fires an *eye ray* through the center of a pixel to find the intersection with the closest object
- A (specular) *reflected ray* is followed from the intersection
- A (specular) *refracted ray* is followed from the intersection
- To calculate shadows, fire a *shadow ray* (or *shadow feeler*) from the intersection point to each light source

- Reflection and refraction are handled by *secondary rays* from the point of intersection
- The (specular) reflection ray is given by

  *R* = *I* - 2(*N* · *I*)*N*

  where *N* and *I* are the unit surface normal and unit *incident* vector hitting the surface
- Note that *I* is first an eye ray; then for secondary rays it can be a reflected or refracted ray; its role changes as rays are cast from ray-object intersection points
- The (specular) refraction (transmission) ray is given by

  *T* = (η_{i}/η_{t})*I* + [(η_{i}/η_{t})cos θ_{i} - cos θ_{t}]*N*

  where η_{i} and η_{t} are the indices of refraction in the incident and transmitted media (e.g. air to water), cos θ_{i} = -*N* · *I*, and θ_{t} is the angle between the transmitted ray and the negative surface normal
- We can calculate cos θ_{t} as

  cos θ_{t} = [1 - (η_{i}/η_{t})^{2}(1 - (*N* · *I*)^{2})]^{1/2}

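In code, the two secondary-ray directions can be sketched as follows (a minimal sketch in plain Python; the small vector helpers and the convention that the incident vector points toward the surface are assumptions made here):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def reflect(I, N):
    """Specular reflection R = I - 2 (N . I) N, with I the unit incident
    direction (pointing toward the surface) and N the unit normal."""
    return sub(I, scale(N, 2.0 * dot(N, I)))

def refract(I, N, eta_i, eta_t):
    """Specular refraction via Snell's law; returns None on total internal
    reflection.  eta_i and eta_t are the indices of refraction of the
    incident and transmitted media."""
    eta = eta_i / eta_t
    cos_i = -dot(N, I)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:                      # total internal reflection
        return None
    cos_t = math.sqrt(k)
    # T = eta*I + (eta*cos_i - cos_t) N
    return tuple(eta * i + (eta * cos_i - cos_t) * n for i, n in zip(I, N))
```

With matched media (η_{i} = η_{t}) the refracted ray continues straight through, which is a convenient sanity check.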
- Each reflection and refraction ray may spawn its own shadow, reflection, and refraction rays, recursively generating a *ray tree*
- The ray tree can be built to any depth, depending on
- The ray does not hit an object
- The amount of work (time) we are willing to spend
- The amount of contributed illumination from the rays

- Shadow rays will be ignored for the moment

- The ray tree is evaluated bottom-up with each node's intensity a function of its children's intensity
- At each intersection point

  *I* = *I*_{local} + *k*_{rg}*I*_{r} + *k*_{tg}*I*_{t}

  where
  - *I* is the intensity at the intersection
  - *I*_{local} is the local intensity computed from an illumination model
  - *k*_{rg} is the global reflectivity (between 0 and 1)
  - *I*_{r} is the reflected ray's returned intensity
  - *k*_{tg} is the global transmissivity (between 0 and 1)
  - *I*_{t} is the transmitted ray's returned intensity

- The local intensity *I*_{local} can be computed from any illumination model, e.g.

  *I*_{local} = *k*_{a}*I*_{a} + *I*_{light}[*k*_{d}(*N* · *L*) + *k*_{s}(*R* · *V*)^{n}]

  (assuming the light reflects light to the viewer); *N*, *L*, *R*, and *V* are vectors
- Many of the terms in *I*_{local} depend on wavelength: *I*_{a}, *I*_{light}, *k*_{a}, *k*_{d}, *k*_{s}, *k*_{rg}, *k*_{tg}; they can have different values for each component of the intensity vector
- Distance attenuation factors may be necessary (especially with the transmitted term)
- Only specular reflected and transmitted rays are considered
- Diffuse reflection and the spread of specular reflection are considered only in the local intensity term

```
raytrace()
    select the eye position E;
    select the window on the view plane;
    for each scanline j
        for each pixel i in the scanline
            eyeray = ray from E through the center of pixel (i, j);
            pixel[i][j] = trace(eyeray, 1);
```

```
color trace(ray, level)
    if level > maxlevel, return black;
    intersect ray with all objects, finding the closest hit
        object obj, point P, and normal N;
    if there is no hit, return the background color;
    return shade(level, weight, ray, obj, N, P);
```

```
color shade(level, weight, ray, obj, N, P)
begin
    color = obj.k_a * I_a;
    for each light do
        fire a shadow ray from P toward the light;
        if the light is not blocked, add its diffuse
            and specular terms to color;
    if level < maxlevel then
        color = color + obj.k_rg * trace(reflected ray, level+1);
        color = color + obj.k_tg * trace(refracted ray, level+1);
    return color;
end
```
- A basic ray tracer intersects the eye ray with every object in the scene
- The nearest object hit spawns secondary rays that are also intersected with every object in the scene (except the current one)
- It has been estimated that 95% of the time in ray tracing is spent on intersection tests
- Quadric surfaces are often used in ray tracing because ray-quadric surface intersections are easy to compute (via quadratic equation)
- Spheres (a quadric surface) are often used as a *bounding volume*
- Ray-polygon intersections are easy (but expensive for a fine polygon mesh)
- Ray-box intersections are also fairly easy, and (generalized) boxes make good bounding volumes
- Intersection tests for bicubic patches, fractals, surfaces of revolutions, and other objects have been developed
- Intersection tests are independent of the rest of the ray tracing program so new objects can be added once the intersection code has been written

- Ray-sphere intersection calculations are easy
- Let the sphere be given by

(*x*-*a*)^{2}+ (*y*-*b*)^{2}+ (*z*-*c*)^{2}=*r*^{2}

where (*a*, *b*, *c*) is the center of the sphere and *r* is its radius
- Let the ray be given by

  *x* = *x*_{o} + *x*_{d}*t*,  *y* = *y*_{o} + *y*_{d}*t*,  *z* = *z*_{o} + *z*_{d}*t*,  *t* > 0

  where (*x*_{o}, *y*_{o}, *z*_{o}) is the ray's origin and (*x*_{d}, *y*_{d}, *z*_{d}) is the unit direction vector for the ray

- Plugging the parametric line into the sphere equation yields the quadratic equation

*At*^{2}+*Bt*+*C*= 0

where

*A*=*x*_{d}^{2}+*y*_{d}^{2}+*z*_{d}^{2},

*B*= 2[*x*_{d}(*x*_{o}-*a*) +*y*_{d}(*y*_{o}-*b*) +*z*_{d}(*z*_{o}-*c*)],

and

*C*= (*x*_{o}-*a*)^{2}+ (*y*_{o}-*b*)^{2}+ (*z*_{o}-*c*)^{2}-*r*^{2}

- If *B*^{2} - 4*AC* < 0, the ray and sphere do not intersect
- If *B*^{2} - 4*AC* = 0, the ray grazes the sphere
- If *B*^{2} - 4*AC* > 0, the smallest positive *t* corresponds to the intersection

- If the intersection is (*x*_{i}, *y*_{i}, *z*_{i}), then the surface normal at the intersection is

  *N* = ((*x*_{i} - *a*)/*r*, (*y*_{i} - *b*)/*r*, (*z*_{i} - *c*)/*r*)
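The quadratic test translates directly to code (a sketch in Python; the name `ray_sphere` and returning `None` on a miss are choices made here):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Algebraic ray-sphere test.  Returns the smallest positive t, or
    None if the ray misses the sphere.  `direction` need not be unit
    length since A is computed explicitly."""
    xo, yo, zo = origin
    xd, yd, zd = direction
    a, b, c = center
    A = xd * xd + yd * yd + zd * zd
    B = 2.0 * (xd * (xo - a) + yd * (yo - b) + zd * (zo - c))
    C = (xo - a) ** 2 + (yo - b) ** 2 + (zo - c) ** 2 - radius * radius
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None                              # ray misses the sphere
    root = math.sqrt(disc)
    t0 = (-B - root) / (2.0 * A)
    t1 = (-B + root) / (2.0 * A)
    for t in (t0, t1):                           # smallest positive root wins
        if t > 0.0:
            return t
    return None                                  # sphere is behind the ray
```

A ray fired from (0, 0, -5) along +*z* at a unit sphere centered at the origin should hit at *t* = 4.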

- A more efficient solution can be determined geometrically; let *O* be the ray origin, *C* the sphere center, and *D* the unit ray direction
- Find if the ray's origin is outside the sphere, that is, if

  *v*^{2} = (*C* - *O*) · (*C* - *O*) > *r*^{2}

- Find the closest approach of the ray to the sphere's center

  *t*_{ca} = (*C* - *O*) · *D*

  (this assumes *D* is a unit vector)
- If the ray is outside and points away from the sphere it misses the sphere; that is, if *v*^{2} > *r*^{2} and *t*_{ca} < 0, the sphere is missed

- If the origin is inside the sphere, or it is outside and *t*_{ca} ≥ 0, find the ``squared distance'' from *t*_{ca} to the sphere's surface
- This is the half chord distance squared
- If the ray misses the sphere, it can be negative

  *t*_{hc}^{2} = *r*^{2} - (*v*^{2} - *t*_{ca}^{2})

- If *t*_{hc}^{2} < 0, the ray misses the sphere
- If the ray's origin is outside the sphere, then *t* = *t*_{ca} - *t*_{hc}, and if it is inside, *t* = *t*_{ca} + *t*_{hc}
- The code can be written as

```
v2 = (C - O) · (C - O);
t_ca = (C - O) · D;
if (v2 > r^2 and t_ca < 0) then no intersection;
t2_hc = r^2 - v2 + t_ca^2;
if (t2_hc < 0) then no intersection;
t = (v2 > r^2) ? t_ca - sqrt(t2_hc) : t_ca + sqrt(t2_hc);
```
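The same test written from the geometric quantities (a Python sketch; `direction` is assumed to be a unit vector, as the derivation requires):

```python
import math

def ray_sphere_geometric(origin, direction, center, radius):
    """Geometric ray-sphere test.  Returns the parameter t of the hit,
    or None on a miss; `direction` must be a unit vector."""
    oc = tuple(c - o for c, o in zip(center, origin))
    v2 = sum(x * x for x in oc)                      # squared origin-center distance
    t_ca = sum(d * x for d, x in zip(direction, oc)) # closest approach along the ray
    outside = v2 > radius * radius
    if outside and t_ca < 0.0:
        return None                                  # outside and pointing away
    t_hc2 = radius * radius - v2 + t_ca * t_ca       # half-chord distance squared
    if t_hc2 < 0.0:
        return None                                  # closest approach misses
    t_hc = math.sqrt(t_hc2)
    return t_ca - t_hc if outside else t_ca + t_hc
```

The payoff is early exits: most rays are rejected by the `t_ca < 0` or `t_hc2 < 0` test before any square root is taken.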

- Assume the polygon is described by vertices *V*_{0}, ..., *V*_{n-1}
- Let (*x*_{i}, *y*_{i}, *z*_{i}) be the coordinates of vertex *V*_{i}
- The normal to the polygon is computed as the cross product

  *N* = (*V*_{1} - *V*_{0}) × (*V*_{2} - *V*_{0})

- If *P* is in the plane of the polygon, then

  *N* · *P* + *d* = 0

- Let the ray be given by *R*(*t*) = *O* + *t**D*
- The intersection of the plane and the ray is at parameter value

  *t* = -(*N* · *O* + *d*)/(*N* · *D*)

  where *d* is the plane's constant term
- This calculation requires 12 floating point operations and 3 tests
- If *N* · *D* = 0, the ray is parallel to the plane and the intersection is rejected
- If *t* ≤ 0, the intersection is behind the ray's origin and is rejected
- If a closer intersection has already been found, the intersection is rejected

- If the intersection is not rejected, determine if it is inside the polygon
- The solution presented here is based on triangles, and works for *convex* polygons
- A polygon with *n* vertices can be broken up into *n*-2 triangles
- Given point *P* we can write

  *P* = *V*_{0} + α(*V*_{1} - *V*_{0}) + β(*V*_{2} - *V*_{0})

- Point *P* is inside the triangle if

  α ≥ 0,  β ≥ 0,  and  α + β ≤ 1

- The above equation for *P* is a system of three equations in the two unknowns α and β
- To reduce the system, project it onto either the *xy*, *xz*, or *yz* plane
- If the polygon is perpendicular to one of these planes, its projection is a line (or if it is nearly perpendicular, its projection has little area)
- To make the projection as large as possible, find the dominant axis
of the normal vector and project onto the plane perpendicular to this
axis

- Let *i*_{0} be the index of the dominant axis of the normal, and let *i*_{1} and *i*_{2} be the two indices different from *i*_{0}
- Let (*u*, *v*) denote coordinates in the plane defined by axes *i*_{1} and *i*_{2}
- The projected coordinates of *P*, *V*_{0}, *V*_{1}, *V*_{2} are their *i*_{1} and *i*_{2} components
- And the equations for α and β reduce to two equations in two unknowns

  *u* = α*u*_{1} + β*u*_{2},  *v* = α*v*_{1} + β*v*_{2}

  where (*u*, *v*), (*u*_{1}, *v*_{1}), (*u*_{2}, *v*_{2}) are the projections of *P* - *V*_{0}, *V*_{1} - *V*_{0}, *V*_{2} - *V*_{0}
- Which has solution

  α = (*uv*_{2} - *vu*_{2})/(*u*_{1}*v*_{2} - *u*_{2}*v*_{1}),  β = (*u*_{1}*v* - *uv*_{1})/(*u*_{1}*v*_{2} - *u*_{2}*v*_{1})

- Here's a more general algorithm
- Reduce dimensionality - project polygon and intersection point orthographically onto coordinate plane which yields largest projection
- Translate polygon so intersection point is at origin
- Draw a ray from the origin along the positive *u* axis
- Count how many times it intersects the polygon
- An even number of intersections means outside; an odd number means inside

```
Polygon-Inside-Outside-Test
    for i = 0 to N-1
        project V_i onto the dominant plane,
            creating a list of vertices (u_i, v_i);
    translate the intersection point to the origin;
    call the new points (u'_i, v'_i);
    Number-Cross = 0;
    if v'_0 < 0 then Sign-Hold = -1 else Sign-Hold = +1;
    for each edge, compare the sign of v' at its endpoints;
        when the signs differ, test whether the edge crosses
            the positive u axis and, if so, increment Number-Cross;
    if Number-Cross is odd, the point is inside;
```
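The crossing count can be sketched in Python; the vertices here are assumed already projected onto the dominant plane and translated so the intersection point is at the origin:

```python
def crossings_test(verts):
    """Return True if the origin is inside the polygon given by the 2D
    vertex list `verts` (pairs (u, v) after projection and translation).
    An edge is counted when it straddles the u axis and crosses it on
    the positive side; an odd count means inside."""
    inside = False
    n = len(verts)
    for i in range(n):
        u0, v0 = verts[i]
        u1, v1 = verts[(i + 1) % n]
        if (v0 < 0) != (v1 < 0):                      # edge straddles v = 0
            u_cross = u0 + (u1 - u0) * (-v0) / (v1 - v0)
            if u_cross > 0:                           # crossing on positive u axis
                inside = not inside
    return inside
```

A unit square centered on the origin tests inside; the same square translated away from the origin tests outside.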

- Boxes are useful for bounding volumes (they usually fit the objects tighter)
- Hierarchies of boxes (boxes inside of boxes) are often used to partition space
- Generalized boxes are formed from three pairs of parallel planes
- Here we consider upright rectangular boxes
- The ray-box intersection is just the Liang-Barsky clipping algorithm in three dimensions
- For each of *x*, *y*, and *z* in turn, find the *near* and *far* parameter values where intersections occur
- The largest near and the smallest far parameter values are retained

- Set *t*_{near} = -∞ and *t*_{far} = +∞
- Let (*x*_{d}, *y*_{d}, *z*_{d}) be the ray direction vector, and (*x*_{b1}, *y*_{b1}, *z*_{b1}) and (*x*_{b2}, *y*_{b2}, *z*_{b2}) opposite corners defining the box
- If *x*_{d} = 0, the ray is parallel to the *x* planes, so if *x*_{o} < *x*_{b1} or *x*_{o} > *x*_{b2} there is no intersection
- Otherwise, the two intersections with the *x* planes of the box are *t*_{1} = (*x*_{b1} - *x*_{o})/*x*_{d} and *t*_{2} = (*x*_{b2} - *x*_{o})/*x*_{d}
- If *t*_{1} > *t*_{2}, swap them
- If *t*_{1} > *t*_{near}, then *t*_{near} = *t*_{1}
- If *t*_{2} < *t*_{far}, then *t*_{far} = *t*_{2}
- If *t*_{near} > *t*_{far}, then the box is missed
- If *t*_{far} < 0, the box is behind the ray, so it is missed
- Repeat for the *y* and *z* pairs of parallel planes
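The per-axis steps above translate directly (Python; returning `None` for a miss and the `(t_near, t_far)` pair on a hit are choices made here):

```python
def ray_box(origin, direction, bmin, bmax):
    """Liang-Barsky style slab test against an axis-aligned box given by
    corners bmin and bmax.  Returns (t_near, t_far) or None on a miss."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, bmin, bmax):
        if d == 0.0:
            if o < lo or o > hi:              # parallel to the slab and outside it
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1               # swap so t1 is the near value
            t_near = max(t_near, t1)
            t_far = min(t_far, t2)
            if t_near > t_far:                # slab intervals do not overlap: miss
                return None
            if t_far < 0.0:                   # box is entirely behind the ray
                return None
    return t_near, t_far
```

A ray from (0, 0, -5) along +*z* into the unit cube centered at the origin enters at *t* = 4 and exits at *t* = 6.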

- Whether an intersection point is in shadow or not is determined by casting a ray to the light source
- If the *shadow feeler* intersects an object before the light, then the point is in shadow and the light's contribution is decreased (or set to 0)
- If there are *n* light sources then a total of *n*+2 rays must be spawned from the nearest intersection (a reflected ray, a refracted ray, and *n* shadow feelers)
- A *light buffer* can be used to accelerate the shadow testing
- Light buffers exploit the fact that a point can be determined to be in shadow without finding the closest occluding object
- For each light source, six faces of direction cells surrounding the light hold pointers to potentially blocking objects

- The light buffers are constructed as a pre-processing step, before ray tracing begins
- The candidate lists are created by projecting each object onto the six faces (of a unit cube) surrounding the light
- If a direction cell is partially or totally covered by the projected object it is added to the candidate list
- For polygon objects this can be performed efficiently by a modified scan-line algorithm to the projected edges
- Once all lists are created, they are sorted into ascending order according to depth from the light

- Observations that simplify the candidate lists:
- All polygons which face away from the light and are part of opaque solids can be culled
- Any list containing exactly one polygon can be deleted (a polygon cannot occlude itself)
- If a projection completely covers a direction cell, the candidate list can be marked as *fully-occluded* at the polygon's depth, and all polygons of greater depth can be eliminated

- To determine if a point on a given surface is in shadow:
- Check the surface orientation with respect to the light source
- If it faces away, the polygon is in shadow
- Otherwise, retrieve the list of potentially occluding objects from the light buffer using the direction of the shadow feeler
- The objects on the list are tested for intersection in order of
increasing depth until
- An occlusion is found (point in shadow)
- An object is found whose depth exceeds the point being tested (point not in shadow)

- Ray tracing can be very inefficient
- About twelve years passed between the first published paper to use ray tracing (1968) and the time ray tracing started to be heavily used
- This delay was primarily due to the cost (in time) required to ray trace scenes

- Three basic approaches to making ray tracing more efficient are:
- To compute fewer intersections -- adaptive depth control and adaptive sampling
- To compute intersections faster -- bounding volume, light buffers, space subdivision
- To use generalized rays -- beams, cones, pencils

- Adaptive depth control allows the depth of the ray tree to change, getting deeper when the returned intensity is large and shallower when the returned intensity is small
- Bounding volumes can be used to quickly discard objects that cannot be intersected by a ray
- Beam, cone, and pencil tracing use generalized rays

- The percentage of a scene that consists of highly reflective and transparent surfaces is, in general, small
- It is inefficient to trace all rays to a maximum depth
- The ray tree can be pruned when little intensity will result from the pruned subtree
- Rays are attenuated as they pass through a scene
- Adaptive depth control uses properties of the materials the ray hits to determine this attenuation
- Reflected rays are attenuated by the global reflection coefficient *k*_{rg}
- Refracted rays are attenuated by the global refraction coefficient *k*_{tg}
- Hall and Greenberg report that adaptive depth control resulted in an average ray tree depth of 1.71 in their example scenes, where the maximum depth was set at 15

- Bounding volumes, or extents, or enclosures are the most basic tool for ray tracing acceleration
- A bounding volume encloses a given object (or objects)
- It should be easier to compute the ray-bounding volume intersection than the ray-object intersection
- Whitted initially used spheres as bounding volumes since they are the simplest shapes to test for intersection
- Using simple bounding volumes substitutes simple intersection checks for more costly ones, but does not decrease the number of intersections
- The simplicity of the intersection test is not the only criterion; the *void* area of the bounding volume should also be considered

- The cost associated with an object and its bounding volume can be defined as

  cost = *nB* + *mI*

  where
  - *n* is the number of rays tested against the bounding volume
  - *B* is the cost of each bounding volume test
  - *m* is the number of rays that hit the bounding volume (*m* ≤ *n*)
  - *I* is the cost of intersecting the objects within the bounding volume

- Assume *n* and *I* are fixed; that is, the number of rays fired *n* depends on the number of pixels, and the cost of intersecting objects *I* depends on the scene, neither of which we can change
- We want to select bounding volumes making *B* and *m* small

- Reducing *B* generally increases *m*
- For example, *B* can be reduced by using a sphere, but this loosens the fit of the bounding volume, causing more intersections (increasing *m*)
- On the other hand, a tighter fit decreases *m* but increases *B*
- Spheres, cylinders, axis-aligned (upright) boxes, and oriented boxes are typically used as bounding volumes
- Each bounding volume can be given a complexity rating (spheres = low, oriented boxes = high)
- The tightness of fit can be measured by the projected void area

- Hierarchical bounding volumes can attain a theoretical time complexity that is logarithmic in the number of objects
- Rubin and Whitted used bounding boxes to create a hierarchy of bounding volumes
- A bounding volume hierarchy is assumed to be a tree or directed acyclic graph (DAG), with an arbitrary branching factor
- Thus bounding volumes may enclose any number of other bounding volumes
- Leaf nodes are primitive objects (spheres, polygons, boxes, etc)

- Setting up the hierarchy may require human involvement
- The code below can be used to test bounding volume hierarchies for intersections with rays

```
bounding-volume-hierarchy-intersect(ray, node)
Ray ray;
Node node;
    if (node is a leaf)
        intersect ray with the object stored at the leaf;
    else if (ray intersects node's bounding volume)
        for each child of node
            bounding-volume-hierarchy-intersect(ray, child);
```
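The recursive walk can be sketched with a toy 1-D stand-in for the bounding volumes (the `Node` layout and the interval test are illustrative assumptions; a real tracer would substitute ray-sphere or ray-box tests):

```python
class Node:
    """A hierarchy node: a bound, optional children, and for leaves a
    primitive payload (all names are hypothetical)."""
    def __init__(self, bound, children=None, primitive=None):
        self.bound = bound              # (lo, hi) interval standing in for a volume
        self.children = children or []
        self.primitive = primitive

def hits_bound(ray_t, bound):
    """Toy 'ray vs. bounding volume' test: is the 1-D ray parameter
    inside the node's interval?"""
    lo, hi = bound
    return lo <= ray_t <= hi

def bvh_intersect(ray_t, node, hits):
    """If the ray misses the node's volume, the entire subtree is
    pruned; leaves hand off to the primitive's own intersection test
    (here just collected into `hits`)."""
    if not hits_bound(ray_t, node.bound):
        return
    if node.primitive is not None:
        hits.append(node.primitive)
    for child in node.children:
        bvh_intersect(ray_t, child, hits)
```

The point of the structure is the early return: a miss at an inner node skips every descendant object.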

- Kay and Kajiya give desirable properties of hierarchical schemes
- Any subtree should contain objects that are near each other; the deeper the subtree, the nearer the objects
- The volume of each node should be minimal
- The sum of the volume of all bounding volumes should be minimal
- The tree construction should concentrate on the nodes nearer the root; this makes pruning more efficient
- The time spent constructing the hierarchy should more than pay for itself in the time saved rendering the image

- Kay and Kajiya also introduced the concept of *parallel slabs* to approximate the convex hull of an object
- The slabs better approximate the shape of the object; they have less void area
- At least three independent sets of parallel slabs are used to enclose a 3 dimensional object
- Bounding boxes use three sets of slabs; each set is parallel to one of the coordinate axes
- Slabs with arbitrary orientation are characterized by a normal vector (*A*, *B*, *C*) and a displacement *D* (from the origin)

*Ax*+*By*+*Cz*+*D*= 0

- Kay and Kajiya use a predetermined set of normals to save computation costs and storage
- As more normals are added to the set, the void area is reduced, but more storage and computation are required
- For an object, the slab bounding volume is determined by a (sub)set of the normals and corresponding pairs of displacements
- For polygons, the displacements are easy to find; for each normal *N*_{i}, compute

  *d*_{j} = *N*_{i} · *V*_{j}

  where *V*_{j} is the *j*th vertex of the polygon
- Then the pair of displacements is

  *d*_{near} = min_{j} *d*_{j},  *d*_{far} = max_{j} *d*_{j}
- Spheres and other surfaces are more difficult to handle
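For the polygon case, the displacement pair is just the extreme projections of the vertices onto the slab normal; a short Python sketch:

```python
def slab_displacements(normal, vertices):
    """For one slab normal N_i, project every polygon vertex onto the
    normal; the min and max projections are the pair of displacements
    bounding the polygon in that slab direction."""
    d = [sum(n * c for n, c in zip(normal, v)) for v in vertices]
    return min(d), max(d)
```

For the *x*-axis normal (1, 0, 0) this degenerates to the polygon's *x* extent, which is why axis-aligned boxes are the three-normal special case.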

- Bounding volume techniques select volumes based on a given set of objects
- Spatial subdivision selects sets of objects based on given volumes
- Spatial subdivision divides space into nonoverlapping regions and considers the objects in those regions
- Rather than check a ray against all objects or set of bounding volumes, we check whether the region in which the ray is traveling contains objects
- If the region is empty, the ray is followed into the next region
- If the region contains objects, the ray is tested for intersection with those objects

- The preprocessing step of subdividing space eliminates two major overheads associated with ray tracing
- Rays do not need to be checked for intersection with all objects; only the closest object in the first region pierced by the ray needs to be found
- Ray-object intersections are checked only for objects near the path of the ray

- Rendering time is nearly constant (largely independent of the scene complexity) for spatial subdivision techniques

- All spatial subdivision approaches preprocess space setting up auxiliary data structures that contain information about objects occupying regions of space
- Rays are traced through this data structure
- Octrees establish a hierarchical data structure of nonoverlapping cubes in space
- Binary space partitioning trees use an indexing scheme for branches that differs from octrees
- Space can be partitioned into uniformly sized regions, or the regions can differ in size
- Nonuniform spatial subdivision discretizes space into regions of varying size to conform to features of the environment
- More subdivisions can be performed in densely populated regions
- Fewer (or larger) regions cover space that has few or no objects

- Glassner first applied octrees to ray tracing
- Octrees are hierarchical data structures that can be efficiently indexed into regions of space
- Rectangular volumes are recursively subdivided into 8 sub-octants until at the leaf (or voxel) some simplicity criterion is met
- The parent node, labeled 1, encloses the ``world''
- The voxels (leaf nodes) are marked as *empty*, *full*, or *mixed*

- Each (non-empty) voxel contains
- A name (numeric identifier),
- A subdivision flag,
- A list of objects whose surfaces penetrate that volume

- Each ray that pierces the voxel is tested for intersection against
the objects in this list
- Given a point (*x*, *y*, *z*), we can find the node it belongs to using the code below

```
findnode(x, y, z)
    node = 1;
    while (node is subdivided)
        node = the child octant of node containing (x, y, z);
    return node;
```
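A sketch of the descent in Python (the dictionary node layout and the bit-per-axis octant numbering are assumptions made here, not Glassner's exact scheme):

```python
def make_children(lo, hi):
    """Split a box node into its eight octants.  Octant index uses one
    bit per axis: bit i set means the upper half along axis i."""
    mid = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
    kids = []
    for index in range(8):
        clo = tuple(mid[i] if index >> i & 1 else lo[i] for i in range(3))
        chi = tuple(hi[i] if index >> i & 1 else mid[i] for i in range(3))
        kids.append({"lo": clo, "hi": chi, "children": []})
    return kids

def find_node(root, p):
    """Descend from the root to the leaf voxel containing point p,
    choosing the child octant by comparing p to the node's midpoint."""
    node = root
    while node["children"]:
        mid = tuple((a + b) / 2.0 for a, b in zip(node["lo"], node["hi"]))
        index = sum(1 << i for i in range(3) if p[i] >= mid[i])
        node = node["children"][index]
    return node
```

Each level of the descent discards seven eighths of the remaining volume, which is where the efficient indexing comes from.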

- Given the list of objects in the ``world'' here's how Glassner decides to
subdivide a given node
- To create a child of a parent, for each object that passes through the child's volume, add the object to the child's list
- We can determine if an object passes through a volume by intersecting the object with each of the 6 planes bounding the voxel
- If any intersection point lies within the square of the face, the object is placed on the child's list
- An additional test for proper containment of the object within the voxel is made by testing whether a single point on the object is inside the voxel
- Once the list of objects is constructed, if the list has too many objects, the child node is recursively subdivided

- The intersection of rays with the environment using octrees can be performed by the code below
- We must also find a point in the next voxel if the ray hits no objects in the current voxel
- The exit point of the ray and the current voxel can be calculated using the standard ray-box intersection test
- The next point can be found by moving a small distance from this exit point
- The length of the shortest voxel edge in the entire octree can be
used for this purpose, call this length
*minlen* - If the exit point is interior to a face, move the exit point
directly away from this face by
*minlen*/2 - If the exit point is along an edge or vertex, move the exit
point by
*minlen*/2 for each face containing the exit point

- There are potential pitfalls that must be avoided when implementing spatial subdivision
- Two of them arise from the fact that an object may intersect several voxels
- The first is repeated ray-object intersection tests between the same ray and object
- As the ray passes from voxel 1 to voxel 2, it finds object *A* in the object list of each
- One way to avoid repeated intersection tests is to use a *mailbox*
- Each object has a mailbox; each ray is tagged by a unique id
- When an object is tested for intersection, the results of the test and the ray tag are stored in the object's mailbox
- Before testing an object, the tag stored in its mailbox is compared against that of the current ray; if they match, the object has been previously tested and the intersection result can be retrieved without recalculation
- After testing object
*A*in voxel 1, it can be tested in voxel 2 using the mailbox

- The second pitfall is more dangerous -- it can lead to wrong results
- Notice object *B* is on the object list for voxel 3 and *B* is intersected by the ray
- If the affirmative intersection test with object *B* causes the ray walking to stop at voxel 3, the closer object will not be found; an intersection should be accepted only if it lies within the current voxel

- Fujimoto *et al.* introduced the idea of subdividing space into uniformly sized voxels
- They call this organization of space SEADS (Spatially Enumerated Auxiliary Data Structure)
- The subdivision is independent of the structure of the environment
- The uniform subdivision is a disadvantage in that it takes more space and uses more time subdividing areas that may be simple
- However, rays can be traversed through the voxels very efficiently by a 3D-DDA calculation
- This speed of voxel space traversal can offset the disadvantage of the uniform space subdivision
- A point is indexed into a node directly; e.g., on a grid of unit cubes, the point (*x*, *y*, *z*) belongs to the node with index (⌊*x*⌋, ⌊*y*⌋, ⌊*z*⌋)
- Here all voxels pierced by the ray must be identified (not as in the pixel DDA, where only the ``nearest'' pixel is found)
- First, we consider the extension of the 2D-DDA that identifies all pixels pierced by a line, not just those closest to the ideal line

- Let *O* + *tD* denote the ray, with origin (*x*_{o}, *y*_{o}) and direction (*x*_{d}, *y*_{d})
- Let *s*_{1} = *y*_{d}/*x*_{d} denote the slope of the ray, and for simplicity assume 0 ≤ *s*_{1} ≤ 1
- The pixels are identified by their lower left hand corners, and the ray starts in pixel (⌊*x*_{o}⌋, ⌊*y*_{o}⌋)
- Depending on the position of the ray's origin and the slope *s*_{1}, any of the right, diagonal, or up pixel can be pierced next by the ray
- Let *e* denote the ``error'' in *y* at the left hand edge of the current pixel; that is, when the pixel's left edge is at *x*, *e* = *y*_{o} + *s*_{1}(*x* - *x*_{o})
- If *e* + *s*_{1} < *y* + 1, the right pixel is pierced next
- If *e* + *s*_{1} = *y* + 1, the diagonal pixel is pierced next
- If *e* + *s*_{1} > *y* + 1, the up pixel is pierced next
- Every time through the loop of the 2D-DDA algorithm, the right or diagonal pixel will be identified, but a special test must be made for the up pixel

```
2d-dda-extended(x_o, y_o, x_d, y_d)
    x = floor(x_o);
    y = floor(y_o);
    s_1 = y_d/x_d;
    e = y_o + s_1*(x - x_o);      /* y at the left edge of pixel (x, y) */
    do
        pixel (x, y) is pierced;
        e = e + s_1;              /* y at the right edge */
        x = x + 1;
        if (e >= y + 1) then
            y = y + 1;
            if (e > y) then pixel (x - 1, y) is pierced;   /* the up pixel */
    while the ray remains in the grid;
```
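A runnable version of the loop (Python; the grid bounds `nx`, `ny` and the in-grid termination test are additions made here for self-containment):

```python
import math

def dda_pixels(xo, yo, xd, yd, nx, ny):
    """All pixels pierced by a ray with slope 0 <= yd/xd <= 1 on an
    nx-by-ny grid: step right each iteration, and emit the extra 'up'
    pixel when the ray crosses a horizontal boundary strictly inside a
    column (a crossing exactly at a corner is the diagonal case)."""
    s1 = yd / xd
    x, y = math.floor(xo), math.floor(yo)
    e = yo + s1 * (x - xo)          # y value at the left edge of column x
    pierced = []
    while x < nx and y < ny:
        pierced.append((x, y))
        e += s1                     # y value at the right edge of column x
        x += 1
        if e >= y + 1:              # crossed the top edge of the pixel
            y += 1
            if e > y and y < ny:    # strictly above the corner: the up pixel
                pierced.append((x - 1, y))
    return pierced
```

For a ray from (0.5, 0.9) with slope 0.5 on a 3 × 3 grid, the extra test catches the up pixels (0, 1) and (2, 2) that a plain pixel DDA would skip.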

- The 3D-DDA uses two synchronized 2D-DDAs working in mutually perpendicular planes
- Assume (*x*_{o}, *y*_{o}, *z*_{o}) is the ray's origin and (*x*_{d}, *y*_{d}, *z*_{d}) is its direction
- Let *s*_{1} = *y*_{d}/*x*_{d} denote the *xy* slope and *s*_{2} = *z*_{d}/*x*_{d} the *xz* slope; assume both are between 0 and 1
- A 2D-DDA (extended so every pixel hit by the ray is identified) is run in the *xy* plane and the *xz* plane simultaneously; together they identify each voxel pierced

- We've seen BSP trees as a data structure that can be correctly walked from any view to determine visible surfaces
- Here they are used to partition space into regions where rays will be traced
- BSP trees use a different indexing scheme into spatial regions than octrees
- BSP tree nodes are constructed with explicit pointers to their two children; this increases storage but leads to faster traversal than nonuniform octrees
- In the standard implementation for ray tracing,
space is partitioned by planes parallel to the
*x*,*y*and*z*axes

- Bounding volumes have exactly one path to each object (they form a tree)
- Nonuniform spatial subdivision (octrees) results in a DAG;
there is one path to any leaf volume (circles in the graphs),
but leaf volumes can contain many objects and
some objects (squares in the graphs) can belong to many leaf volumes
- Octrees are good for scenes whose occupancy density varies
- Traversal of octrees tends to be slower than BSP trees and SEADS because the trees tend to be unbalanced
- BSP trees tend to have smaller depth than octrees because the tree is balanced

- Uniform spatial subdivision (SEADS) results in a bipartite graph
- Volumes are indexed by direct index calculation, not a path through other volumes
- SEADS has fast traversal but enormous memory costs

- A generalized ray considers an entire family of rays bundled as beams, cones, or pencils
- Some sacrifice is required to use each of these types of generalized
ray
- The types of primitive objects may need to be restricted
- The computation of ``exact'' intersections may need to be abandoned

- The use of generalized rays leads to advantages such as
- Increased efficiency by exploiting coherence
- Effective antialiasing
- Additional optical effects

- A right circular cone of rays can be defined by an *apex*, *center line*, and *spread angle* γ
- The primitive objects are restricted to spheres, planes, and polygons
- A cone intersects a sphere if

  *d* cos γ - *D* sin γ ≤ *R*

  where
  - γ is the spread angle of the cone
  - *R* is the radius of the sphere
  - *d* is the distance between the center of the sphere and the closest approach on the cone's axis
  - *D* is the distance from the cone origin to the closest approach on the cone axis

- For antialiasing, the intersection calculation needs to detect how much of the cone is blocked by the object
- For reflection and refraction, the new center line is computed as the reflected or refracted center line of the incident cone
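A sketch of the overlap test as reconstructed here (the left side of the inequality is the signed distance from the sphere's center to the cone's boundary, so this is an illustration of the geometric idea rather than the paper's exact formulation):

```python
import math

def cone_hits_sphere(gamma, R, d, D):
    """True if a sphere of radius R overlaps a cone of spread angle
    gamma; d and D locate the sphere center relative to the cone axis
    as in the text.  The left side is the signed distance from the
    center to the cone's boundary line (negative means inside)."""
    return d * math.cos(gamma) - D * math.sin(gamma) <= R
```

A small sphere sitting on the axis of a wide cone is always hit; the same sphere far off the axis of a narrow cone is not.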

- Heckbert and Hanrahan exploit coherence by observing that neighbors of a particular ray tend to follow the same path
- They do this by recursively applying a version of the Weiler-Atherton hidden surface removal algorithm
- The basic steps in the Weiler-Atherton algorithm are
- A preliminary rough depth sort of polygons based on the nearest *z* value
- The first polygon in the sorted list is used to clip the remainder of the list into new lists of inside and outside polygons
- Removal of any polygons on the inside list behind the current (front) polygon
- If any polygons remain on the inside list, a recursive call to subdivide the region again with the front polygon and the inside list
- Only polygons can be handled by this algorithm

- The initial beam is the viewing frustum
- A *beam tree* is built from intersections of this beam with the environment
- A beam may intersect many surfaces, so each node in the tree contains a list of surfaces intersected by the beam
- New beams are generated from beam-object intersections by clipping the incident beam and defining a new virtual eye point
- ``Pencils'' of rays have also been suggested
- A pencil of rays is a collection of rays parallel to and near an axial ray

- A similar approach to cone, beam and pencil tracing exploits ray coherence without introducing new geometrical entities (cones, beams, pencils)
- The idea is to use the path (and ray tree) of the previous ray to construct the ray tree for the current ray
- As the current tree is constructed, information from the corresponding branch of the previous tree is used to predict the next object hit
- New intervening objects must be detected

- A ``safety zone'' is constructed around each ray
- If the current ray does not exit the safety zone of the previous ray and intersects the same object as the previous ray, then it cannot hit any intervening object
- Thus, two basic tests are made: (1) does the ray hit the same object as the previous ray; (2) does the ray leave the previous ray's safety zone
- Alternatively, we can identify the potential blockers (in a ``cache'')
- A cache miss occurs when the current ray misses the previously hit object or hits a potential blocker

- Cook, Porter, and Carpenter introduced distributed ray tracing in 1984
- Distributed ray tracing is a Monte Carlo technique that stochastically distributes the direction of rays
- It supersamples the image, firing more than one ray per pixel
- Sampling the reflected ray according to a specular distribution function produces gloss (blurred reflection)
- Sampling the transmitted ray produces translucency (blurred transparency)
- Sampling the solid angle of the light source produces penumbras (soft shadows)
- Sampling the camera lens area produces depth of field (focusing)
- Sampling in time produces motion blur

- Sharp reflections, refraction, shadows ... can be considered aliasing artifacts and distributed ray tracing is an antialiasing approach
- Suppose 16 eye rays are fired through a pixel; consider the pixel as composed of 16 subpixels
- A sample point is placed in the middle of each subpixel, and then noise is added to the *x* and *y* locations independently
- Color intensities returned from each sample can be simply averaged (box filtered), or weighted averages can be used
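The subpixel jittering can be sketched as follows (Python; the uniform jitter within each subpixel and the seeded generator are choices made here, not prescribed by the text):

```python
import random

def jittered_samples(n=4, seed=0):
    """One sample per subpixel of an n x n grid over a unit pixel:
    start at the subpixel center and add independent noise in x and y,
    here uniform over the subpixel so samples never leave their cell."""
    rng = random.Random(seed)
    step = 1.0 / n
    samples = []
    for j in range(n):
        for i in range(n):
            x = (i + 0.5) * step + rng.uniform(-0.5, 0.5) * step
            y = (j + 0.5) * step + rng.uniform(-0.5, 0.5) * step
            samples.append((x, y))
    return samples

def box_filter(colors):
    """Plain average of the returned sample intensities."""
    return sum(colors) / len(colors)
```

Keeping one sample per cell (stratification) is what distinguishes jittering from purely random sampling: the noise replaces aliasing without clumping the samples.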

- The goal in shading (not just in ray tracing) is to evaluate the intensity *I* of the reflected light at a point on the surface
- The intensity is an integral, over the sphere of directions around the surface point, of an illumination function *L* and a reflection function *R*

  *I*(φ_{r}, θ_{r}) = ∫∫ *L*(φ_{i}, θ_{i}) *R*(φ_{i}, θ_{i}, φ_{r}, θ_{r}) dφ_{i} dθ_{i}

- The reflection function *R* describes the surface and the illumination function *L* describes the light in the environment
- Distributed ray tracing uses Monte Carlo integration techniques to approximate this integral

- The spatial dimensions are divided into grids, and the center of each subpixel is jittered
- To jitter in a nonspatial dimension, randomly created ``prototype'' patterns in screen space are associated with sample points within a certain range
- The exact location is then determined by jittering
- For example, when sampling in time to produce motion blur, the frame time is divided into slices and a slice of time is randomly assigned to each sample point
- To assign times in a pixel with a 4 × 4 grid of sample points, the integers 1 to 16 can be randomly distributed among the subpixels

- The sample in the *i*th row and *j*th column would yield a prototype time

  *t*_{ij} = (*P*_{ij} - 1)*T*/16

  where *P*_{ij} is the value in the prototype pattern and *T* is the frame time (divided into 16 slices)
- A random jitter δ, with 0 ≤ δ ≤ *T*/16, is added to *t*_{ij} to obtain the actual time; e.g., the sample in the upper left subpixel would have a time *t*_{11} = (*P*_{11} - 1)*T*/16 + δ
- Sometimes the samples need to be weighted; for example, we may want to weight the reflected samples according to a specular distribution function
- *Importance sampling* is used to distribute the samples; the sample points are distributed so that the chance of a location being sampled is proportional to the filter at that location
- The filter is divided into regions of equal weight (equal area under the filter); each region corresponds to one sample point at the center of the region

- Usually the filter is known ahead of time, so the centers and jitter magnitudes can be precomputed and stored in a lookup table
- For example, for the reflection ray each surface has an associated lookup table for a presampled reflection function
- Given the angle between the surface normal and the incident ray, the lookup table gives the range of reflection angles plus a jitter magnitude
- For depth of field, the focal point is determined from the center of the pixel
- A lens is ``placed at the pixel'' and a point on the lens is found from a jittered lens-location lookup table
- The ray is then traced to the first hit
- The effect of depth of field is controlled by the diameter
of the lens,
*F*/*n*where*F*is the focal length and*n*is the*f*-stop

- Determine the spatial screen location of the ray by jittering
- Determine the time for the ray and move the objects and ``camera'' accordingly
- Determine the focal point by following the ray from the center of the lens through the screen location to the distance of the focal distance
- Determine the visible point for this ray using ray-object intersection techniques
- Trace a reflection ray whose direction is determined by jittering a set of directions distributed according to a specular reflection function (use a lookup table of directions)
- Trace a transmitted ray if the object is transparent; the direction is determined by jittering a set of directions distributed according to a specular transmission function (use a lookup table of directions)
- Trace the shadow rays; the location of the target on the light is determined by its distribution function

- Whitted used an *adaptive supersampling* algorithm to antialias ray traced images
- The technique fires rays at the pixel corners as well as the center
- If all five rays return about the same color, they can be averaged to produce the pixel's color
- If they return different colors, we subdivide the pixel into smaller regions and treat each subpixel just as we did the entire pixel
- The number of subdivisions can be stopped at some maximum level
- This technique is easy, not too slow, and often works well to antialias a ray traced image
- Objects too small may still be missed by all rays
- If objects are arranged in a regular pattern and some are missed, this causes visible effects
- In animation, objects may appear and disappear as they are hit and missed by rays
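The five-ray test and recursive subdivision can be sketched as follows (Python; `shoot(x, y)` is a hypothetical callback that fires a ray and returns a scalar intensity, and the tolerance `tol` is an assumed threshold for ``about the same color''):

```python
def adaptive_sample(shoot, x0, y0, x1, y1, max_level, level=0, tol=0.1):
    """Whitted-style adaptive supersampling of the square pixel region
    [x0,x1] x [y0,y1]: sample the four corners and the center; if they
    disagree by more than tol, subdivide into quadrants and recurse,
    stopping at max_level."""
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    rays = [shoot(x0, y0), shoot(x1, y0), shoot(x0, y1), shoot(x1, y1),
            shoot(xm, ym)]                       # four corners and the center
    if level >= max_level or max(rays) - min(rays) <= tol:
        return sum(rays) / 5.0                   # close enough: just average
    quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
             (x0, ym, xm, y1), (xm, ym, x1, y1)]
    return sum(adaptive_sample(shoot, *q, max_level, level + 1, tol)
               for q in quads) / 4.0
```

A flat region costs five rays; rays concentrate only where the corner samples disagree, such as along an edge crossing the pixel.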

- Backward ray tracing follows rays from the light source instead of following rays from the eye
- Since most rays from the light will never reach the viewer, this can be wasteful, but it can also lead to more realistic images by including diffuse effects