I'd seen this object rendered in quite a few other places, but didn't realize it was essentially a benchmark for testing an algorithm's quality. What are some other standard testing models besides the teapot, bunny, and torus? I've definitely seen the problem talked about in class, where papers selectively choose situations where their algorithm works well, yet would obviously perform very poorly on models they left out.

This comment was marked helpful 0 times.

Seems like the dragon, armadillo, and Buddha are also pretty common.

This comment was marked helpful 0 times.

Dragon et al are arguably the 2000s equivalent of the 1990s sphereflake.

This comment was marked helpful 0 times.

I suppose in the case when t* = t_min or t* = t_max, it would make sense to test both the left and right children, since it is possible that a primitive intersects the splitting plane exactly at the corner.

This comment was marked helpful 0 times.

Indeed! kd-trees tend to have more edge cases like this one, which is one of the reasons that BVHs (which have fewer of them) are used more in practice these days.

This comment was marked helpful 0 times.

What is $C_{trav}$? Or what does "traversing a cell" mean here?

I think we do not test any triangles until we reach a leaf node. So if the current node has left and right children, we should only test the ray against the two children's bounding boxes; no triangle tests happen while traversing this cell.

This comment was marked helpful 0 times.

I think Ctrav is the cost of testing intersections of the two children's bounding boxes. Cost(node) is defined recursively.

This comment was marked helpful 1 times.

These numbers really surprised me. The 1:80 makes intuitive sense, but I really am amazed that you'd be able to test whether a ray collides with a triangle for only 1.5x more than the cost of traversing a node in the tree.

This comment was marked helpful 0 times.

I agree that 1.5x feels surprisingly low. If you compare, say, the core code in BBox::IntersectP() and Triangle::Intersect() in pbrt, it feels more like 3x-5x (to me at least). Highly optimized ray tracers often skip things like computing the dPdu/dPdv partial derivatives or evaluating alpha-mask textures, at which point the core math for the two operations isn't ridiculously far off.

This comment was marked helpful 0 times.

It was a bit of a surprise to me, at first, to see that the surface area of a shape is more important than its volume in determining the likelihood of an intersection. If anyone else had any cognitive trouble with this, it may help to think of the following example: consider a sphere and an extremely thin sheet of paper, both of which have the same volume. If you put the (arbitrarily large and thin) sheet of paper into a volume of space you wanted to ray trace, you could bend and crumple the sheet so that almost any ray would intersect it, whereas the sphere would obviously intersect a much smaller number of rays. This thought experiment helped me to see the importance of surface area more intuitively.

This comment was marked helpful 0 times.

I'm actually still a bit confused by this. Is Crofton's theorem equation on this slide just an approximation? Say you had a very thin piece of paper and folded it into an accordion shape so it looks kind of like a cube. This folded piece of paper has a much greater surface area than the cube that would occupy the same space. However, the number of rays that hit the object is smaller, since rays can be shot through the space between the folded sides of the paper (does this make sense?).

This comment was marked helpful 1 times.

Crofton's theorem as stated on this slide applies to convex bodies only (e.g. a bounding box or sphere, but not an accordion). Since we are primarily concerned with the probability that a ray hits a bounding box, the surface area heuristic works well. Sorry that the slide wasn't clear about the restrictions.

This comment was marked helpful 2 times.

To address a confusion mentioned in lecture: the surface area of the two regions increases or decreases linearly with the position of the split, and the triangle count is constant except for discontinuous jumps at the beginning and end of each primitive's bounding box. If we employ surface area and triangle count as heuristics for the probability of traversal and cost of intersection tests, respectively, then the overall cost function is piecewise linear. A piecewise linear function can only have a local minimum at one of its discontinuities, hence the assertion that the only candidate splits are at the beginning and end of primitives.

I'm curious whether there's a proof of this in the more general case, where the cost function is actually defined recursively (instead of assuming it to be proportional to the number of triangles). It seems to me that in certain situations, arbitrary boundaries might exist independent of primitives, across which shifting a split slightly could cause the structure of the optimal kd-tree to change simply by virtue of the changed size of the child nodes. I couldn't think of an example of this, though.

This comment was marked helpful 0 times.

What are the blue boxes in this slide and the previous one used for?

I suppose these are the tight bounding boxes for the objects in those spaces, and should be used to calculate the probabilities of hitting (maybe $S_a$ and $S_b$ are their surface areas). However, when we really test the intersections later during traversal, we are testing the ray against the larger bounding boxes, which are formed by the boundary and the split plane. So I think $S_a$ and $S_b$ should be the surface areas of the larger bounding boxes.

This comment was marked helpful 1 times.

You're right--this is a bug in the slide. :-( You are correct that the blue area should be the entire side. Thanks!

This comment was marked helpful 0 times.

If a ray is being traced not from the camera but as a recursively reflected/refracted ray, would a BVH tree still force this ray to start its search from the top of the hierarchy, even if it originates from within the hierarchy instead of outside it (as would be the case for camera rays)? This seems kind of inefficient and I was wondering if there would be ways to determine a more optimal starting point in the hierarchy for this type of ray.

This comment was marked helpful 0 times.

Hmm... The last time I came across this, it was in the context of doing collision and kinematics computations for animation purposes. Are there any negatives to using the same data structure both to compute animations and to perform ray tracing?

This comment was marked helpful 0 times.

I wouldn't be surprised if the same structures that are useful for accelerating primitive-primitive collision are useful for ray-primitive collision as well. I'm not sure about animation--are these sorts of primitive subdivision structures well-suited to dynamic scenes?

This comment was marked helpful 0 times.

Dynamic scenes make things a little tricky; it's essentially the same issue we had with motion blur rasterization, where the bounding box for an object over a time interval may be much larger than its bounding box when it's still. Thus, if you build acceleration structures using these full bounding boxes, you may end up with an inefficient accelerator that performs many intersection tests against moving objects even though the ray never actually comes close to them.

The optional reading for this lecture has a paper on acceleration structures for moving stuff, FWIW.

This comment was marked helpful 0 times.

How would one construct a BVH tree with an object that is moving through the scene? For example if you had a sphere moving from point p1 to point p2 from time t0 to t1, how would we construct its bounding box? Would you use a method similar to what we did in assignment 1 where there would be multiple bounding boxes for each interval along the way?

This comment was marked helpful 0 times.

Just a reminder: one good thing about the grid accelerator is that, as we use DDA or another algorithm to trace the ray, the cells are processed from near to far along the ray, so once we find an intersection within the current cell, we can safely stop without testing the primitives in farther cells.

This comment was marked helpful 0 times.