Lecture 6: ray tracing 1

soohark

So my intuition is that the same applies to shadow mapping and to checking for light occlusions in ray tracing. Does that sound correct? What about shadow volumes?

mmp

For shadow mapping--"almost but not quite". In traditional shadow mapping, you're computing a depth buffer at a fixed set of sample locations and then projecting points being shaded into that buffer; those projected locations don't in general match up with the original sample locations, which can lead to errors.

There's an interesting generalization of the z-buffer--the irregular z-buffer (see the Wikipedia article, which links to the research papers)--that does z-buffer rasterization with irregular sample distributions. Those samples can be chosen to match the shaded points in order to do more accurate shadows with rasterization...

Chris

Is the transparent object supposed to be a sphere or a disk? From my understanding, glass spheres normally flip images, so if this were a sphere the checkered floor should appear on top of the glass and the yellow ball on the left.

wmonroe

It would depend on the index of refraction. Glass has a higher index of refraction than air, so indeed a solid glass sphere would create an upside-down image. A sphere of lower index of refraction than the surrounding medium, e.g. a bubble of air in water, would make the fisheye distortion that's the salient feature of this image.

However, this particular image seems to be more consistent with a thin, hollow glass sphere. Suppose the picture is taken in air, and the interior of the sphere is also air, with the sphere itself being a few millimeters of glass. Then for rays which pass through the air-filled interior, the angle of incidence on the interior interface will be slightly greater than the angle of refraction from the exterior interface. When the ray is again bent as it enters the air of the interior, it will be directed slightly outward from the original trajectory, giving the same effect as that of a lower-RI sphere.

For parts of the sphere in which rays only pass through the glass, we would expect to find an upside-down refracted image. If the image is physically accurate, though, Fresnel's equations say that at such extreme angles the rays would mostly be reflected, not refracted, so again we would see the checkered floor on the bottom half.
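The thin-shell argument above can be checked numerically with Snell's law. For a straight segment crossing two concentric spherical interfaces, the perpendicular distance from the center (the impact parameter) is constant, so $R\sin\theta$ at the outer radius equals $r\sin\theta'$ at the inner radius. A sketch, with arbitrarily chosen radii, index, and incidence angle (these values are illustrative, not taken from the slide):

```python
import math

def interior_angle(theta1_deg, R=1.0, r=0.98, n_glass=1.5):
    """Trace one ray through a thin glass shell (outer radius R, inner radius r).

    Returns (incidence angle at the outer surface, angle of the ray in the
    air-filled interior), both relative to the local surface normal.
    """
    t1 = math.radians(theta1_deg)           # incidence at outer surface (air -> glass)
    t2 = math.asin(math.sin(t1) / n_glass)  # refraction into the glass (Snell)
    # Along the straight segment inside the glass the impact parameter is
    # constant, so R * sin(t2) = r * sin(t3) at the inner surface.
    t3 = math.asin(R / r * math.sin(t2))    # incidence at inner surface (glass -> air)
    t4 = math.asin(n_glass * math.sin(t3))  # refraction into the interior air
    return t1, t4

t1, t4 = interior_angle(30.0)
# t4 > t1: the interior ray makes a slightly *larger* angle with the local
# normal than the original incidence, i.e. it is bent slightly outward --
# the same qualitative effect as a solid sphere of lower RI than its
# surroundings, as described above.
```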

soohark

I'm a bit confused here. How are we calculating the areas? I.e., $a_0$ should be negative if $p$ is on the other side of the $p_1 p_2$ edge, right?

eye

We talked about finding the signed area of triangles using the vectors that comprise two of its sides in the lecture on 2D rasterization. In that case the area would be negative, if I understand the question correctly. --> http://candela.stanford.edu/index.php/lecture/rast2d/slide_022
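In code, the signed area is just half the 2D cross product of two edge vectors; the sign flips with winding order, which is exactly why $a_0$ goes negative when $p$ crosses the $p_1 p_2$ edge. A minimal sketch (not the course's reference implementation):

```python
def signed_area(a, b, c):
    """Signed area of triangle (a, b, c): positive for counterclockwise winding."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

p0, p1, p2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

# A point inside the triangle: every sub-area has the same sign as the
# triangle's own area.
inside = (0.25, 0.25)
a0 = signed_area(inside, p1, p2)       # sub-area "opposite" p0

# A point on the far side of the p1-p2 edge: that sub-area goes negative.
outside = (1.0, 1.0)
a0_out = signed_area(outside, p1, p2)
```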

mgao12

It is definitely true. According to the last bullet, we will use the values of $b_1, b_2, b_3$ to determine whether the ray hits the inside of the triangle.

clemire

Geometrically, I don't think this plane equation makes sense unless you recall that $b_0 + b_1 + b_2 = 1$. Then, you can re-write $p = b_0 p_0 + b_1 p_1 + b_2 p_2$ as $p = p_0 + b_1 (p_1 - p_0) + b_2 (p_2 - p_0)$, where $p_1 - p_0$ and $p_2 - p_0$ are vectors, and our final equation gives us a point displaced by two vectors.
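The algebra is easy to check numerically: with $b_0 = 1 - b_1 - b_2$, the weighted-sum-of-points form and the point-plus-two-vectors form give the same point. A quick sketch with an arbitrary triangle and arbitrary coefficients:

```python
# Arbitrary triangle vertices in 3D and barycentric coefficients b1, b2.
p0 = (0.0, 0.0, 0.0)
p1 = (2.0, 0.0, 1.0)
p2 = (0.0, 3.0, 1.0)
b1, b2 = 0.25, 0.5
b0 = 1.0 - b1 - b2   # the constraint that makes the first form describe a plane

# Form 1: p = b0*p0 + b1*p1 + b2*p2  (weighted sum of *points*)
form1 = tuple(b0 * x0 + b1 * x1 + b2 * x2 for x0, x1, x2 in zip(p0, p1, p2))

# Form 2: p = p0 + b1*(p1 - p0) + b2*(p2 - p0)  (a point displaced by two *vectors*)
form2 = tuple(x0 + b1 * (x1 - x0) + b2 * (x2 - x0) for x0, x1, x2 in zip(p0, p1, p2))
```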

Tianye

Another way to think of it is that we have a fixed coordinate system in the space, therefore we can treat each point as a vector from the origin to that point.

clemire

This is true... I think I just personally prefer to express the plane equation in terms of two vectors, since the plane itself is two-dimensional. In the equation you mentioned, you also need to add the restriction that $b_0 + b_1 + b_2$ sum to one in order for it to define a plane. With the equation I wrote, $b_1$ and $b_2$ can be any value, since $b_0$ was removed from the equation and is implicitly $1 - b_1 - b_2$.

bmild

clemire's rearrangement of the equation also has the advantage that it works in Point/Vector classes where points can't be added to one another (like pbrt, if I remember correctly) since doing so can mess up the homogeneous w coordinates.

wmonroe

Another student asked in class why $b$ is off by one ulp, and we concluded it was due to accumulation of error in the several steps of the somewhat complicated calculation. To be more precise, the error is introduced in the dot product operation. Since $\mathbf{o} = (0, 0)$, subtracting $\mathbf{c}$ from $\mathbf{o}$ gives the correct result exactly, and multiplication by 2 is guaranteed to be exact unless the distance is so large as to overflow ($\gtrsim 1.7\times 10^{38}$).

The remaining calculations contain the inexact part; they are (where $\vec{\mathbf{m}} = 2(\mathbf{o} - \mathbf{c})$, and assuming a 2-dimensional calculation):

$$b = m_x \cdot d_x + m_y \cdot d_y$$

Each multiplication is guaranteed by IEEE 754 to produce a result within half an ulp of the correct answer. The addition must be within half an ulp of the actual sum of the two prior results; however, since each of these may be off by half an ulp, the total accumulated error is only guaranteed to be within 1.5 ulps.
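This bound can be probed empirically with Python's `math.ulp` and exact rational arithmetic. A sketch in double precision, restricted to same-sign terms so no cancellation occurs (the inputs here are random illustrative values, not the slide's actual ray data):

```python
import math
import random
from fractions import Fraction

def dot2_ulp_error(mx, my, dx, dy):
    """Error of the rounded two-term dot product, in ulps of its own result."""
    approx = mx * dx + my * dy   # three rounded operations: two *, one +
    exact = Fraction(mx) * Fraction(dx) + Fraction(my) * Fraction(dy)
    return abs(Fraction(approx) - exact) / Fraction(math.ulp(approx))

random.seed(0)
worst = max(
    dot2_ulp_error(*(random.uniform(1.0, 2.0) for _ in range(4)))
    for _ in range(10000)
)
# With all terms positive, each product is no larger than the sum, so the
# accumulated error stays within the 1.5-ulp bound derived above.
```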

Chris

How well would distributed ray tracing (using multiple rays per pixel sample) deal with this issue? What I'm imagining is that different rays coming in from slightly different directions would intersect at different spots and also have slightly different amounts of error in precision, so when you average all the samples together it might get rid of some of these black spots or at least make them less noticeable.

mmp

That definitely helps (and in practice, a lot of the really small errors that remain from numerical stuff just don't matter for this reason.)

In this case, you'd probably get a non-noisy image, but it would probably be substantially darker than it should be--e.g. if 1/2 of the rays gave incorrect "black" color values and 1/2 were correct, you'd end up with an image half as bright as it should have been...

clemire

Something Matt mentioned in lecture but didn't put on this slide is the performance implication of having holes in the model we are ray tracing. Meshes that aren't watertight mean we can't always do efficient object culling, and we could end up bouncing rays around objects that, given the position of the camera, should never have interacted with any camera rays in the first place. This could even mean that we unnecessarily load large models from disk!

Tianye

Also, when we try to accelerate ray tracing by first intersecting the ray with bounding boxes, this sneaking problem can cause spurious traversals of boxes the ray should never have reached, which hurts performance.

mgao12

I am still somewhat confused about why there is such a sneaking problem. Since the two triangles share the same edge, the ray can hit either the upper one or the lower one. I can understand that the floating-point error makes the outcome nondeterministic to us, but it is weird to have the ray miss both of them. Is it the case that $b$ equals exactly zero?

clemire

They do share the same edge, but they don't share the same plane, so the ray doesn't intersect both triangles in the same plane, either. In 2D rasterization we test the same point against each edge equation, but in ray tracing we don't! At least, not necessarily. This might be part of the reason...
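One way to see the asymmetry is to run a standard ray–triangle test on two triangles that share an edge. Here is a Möller–Trumbore sketch (not the course's actual code): each triangle is tested in its own coordinate frame with its own rounded intermediates, so for a ray aimed very near the shared edge there is no guarantee the two "which side of the edge" answers are consistent, and both tests can reject.

```python
def ray_triangle(o, d, v0, v1, v2, eps=1e-12):
    """Moller-Trumbore ray/triangle intersection; returns t, or None on a miss."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    tvec = sub(o, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv
    # u + v = 1 is the shared edge: both triangles compute this boundary from
    # different rounded intermediates, so a ray on it can fail both tests.
    if v < 0.0 or u + v > 1.0:
        return None
    return dot(e2, qvec) * inv

# Two triangles sharing the edge (1,0,0)-(0,1,0), forming a unit square.
tri_a = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tri_b = ((1.0, 1.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0))
o, d = (0.3, 0.3, 1.0), (0.0, 0.0, -1.0)   # aims comfortably inside tri_a
```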