Lecture 17: Bidirectional Light Transport

wmonroe

This schematic seems to be showing a pre-order traversal of the tree--is there a reason why pbrt [sec. A.8, p. 1033] implements a post-order traversal (calling the process function only after recursing on both children)? For $k$-nearest-neighbor lookups and radius searches I don't think this would cause any harm, but in general I would think it might be useful to give the callback function information about the current node as soon as possible.

mmp

Ah, these slides on photon map lookup are fairly borked. tl;dr I think that the traversal order in PBR is the right one.

Specifically, the implementation there does indeed go depth first. The upshot is that the very closest photon is the first one checked (and then things proceed generally near-to-far). While there'd be no harm in visiting nodes when they're first encountered, visiting in generally near-to-far order means that once the desired number of photons has been found and the search radius has shrunk to the distance to the farthest of the found photons, big sections of the tree can be quickly culled away from needing any further traversal...
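
Roughly, that near-to-far lookup looks something like the sketch below (this is not pbrt's actual code; the node layout, field names, and the `FoundPhoton` heap are just assumptions for illustration). The key points are that the child on the query point's side of the splitting plane is visited first, the max-heap of the k best photons shrinks the search radius as soon as k candidates are in hand, and the far child is only entered if the splitting plane is still within that radius.

```cpp
#include <queue>
#include <vector>

struct Photon { float p[3]; /* position; power and direction omitted */ };

struct KdNode {
    Photon photon;   // photon stored at this node
    int axis;        // splitting axis: 0, 1, or 2
    int left = -1;   // child with coordinates <= this node's (assumed layout)
    int right = -1;  // child with coordinates > this node's
};

struct FoundPhoton {
    float dist2;
    const Photon *photon;
    bool operator<(const FoundPhoton &o) const { return dist2 < o.dist2; } // max-heap on dist2
};

// Visit the subtree rooted at nodeIndex, keeping the k photons nearest to p.
// maxDist2 starts at the caller's initial (squared) search radius and shrinks.
void Lookup(const std::vector<KdNode> &nodes, int nodeIndex, const float p[3],
            int k, float &maxDist2, std::priority_queue<FoundPhoton> &heap) {
    if (nodeIndex < 0) return;
    const KdNode &node = nodes[nodeIndex];

    // Signed distance from the query point to this node's splitting plane.
    float planeDist = p[node.axis] - node.photon.p[node.axis];

    // Descend first into the child on the same side as the query point,
    // so photons are encountered roughly near-to-far.
    int nearChild = planeDist <= 0 ? node.left : node.right;
    int farChild  = planeDist <= 0 ? node.right : node.left;
    Lookup(nodes, nearChild, p, k, maxDist2, heap);

    // Consider this node's photon.
    float d2 = 0;
    for (int i = 0; i < 3; ++i) {
        float d = p[i] - node.photon.p[i];
        d2 += d * d;
    }
    if (d2 < maxDist2) {
        heap.push({d2, &node.photon});
        if ((int)heap.size() > k) heap.pop();
        // Once k photons are in hand, shrink the search radius to the farthest
        // of them; this is what lets big sections of the tree be culled.
        if ((int)heap.size() == k) maxDist2 = heap.top().dist2;
    }

    // Only cross the splitting plane if it's still within the search radius.
    if (planeDist * planeDist < maxDist2)
        Lookup(nodes, farChild, p, k, maxDist2, heap);
}
```

(pbrt itself hands the node to the process callback only after recursing on both children, as noted above; this sketch does it between the two children instead, but either way the far subtree is usually skipped because the radius has already shrunk by the time that test is reached.)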

wmonroe

I imagine the shading points for progressive photon mapping would be stored in a kd-tree, analogously to the photons in standard photon mapping? Looping through a linear-format framebuffer for each splat to find relevant shading points seems like it would be a performance disaster.

mmp

Yes--a spatial data structure is definitely in order!

The approach that many folks seem to have settled on is to store the visible points in a 3D grid; then each photon just needs a quick grid lookup to find the points it may contribute to.

Check out the LuxRender implementation for details.

A few things to note from that one:

  1. They're able to have a fairly memory-efficient grid, even with small voxels, via a hash table--they hash grid coordinates (x,y,z) and then only store data for the voxels that are occupied.

  2. Points are stored by expanding their bounding boxes by the maximum photon search radius and then adding them to all of the voxels that they overlap. Then an incoming photon just needs to be hashed to a single voxel to find all of the candidate hit points (see the sketch after this list). Since there will in general be many more photons than visible points, it's a bit more efficient to do the multi-voxel work once per visible point and keep the per-photon work to a single hash lookup...
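
Something like the following sketch captures both of those points (assumed names and layout, not LuxRender's actual code): only occupied voxels get hash-table entries, each visible point is inserted into every voxel its radius-expanded bounds overlap, and an incoming photon hashes to exactly one voxel to get its candidate points.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct VisiblePoint { float p[3]; float radius; /* BSDF, throughput, etc. omitted */ };

struct HashedGrid {
    float voxelSize;  // typically on the order of the maximum search radius
    // Only occupied voxels have entries, so the grid stays memory-efficient
    // even with small voxels.
    std::unordered_map<uint64_t, std::vector<const VisiblePoint *>> voxels;

    // Hash integer voxel coordinates into one key (the usual large primes).
    // Distinct voxels may collide; that only adds extra candidates, which the
    // caller's distance test filters out anyway.
    static uint64_t Hash(int x, int y, int z) {
        return ((uint64_t)x * 73856093u) ^ ((uint64_t)y * 19349663u) ^
               ((uint64_t)z * 83492791u);
    }
    int Coord(float v) const { return (int)std::floor(v / voxelSize); }

    // Insert a visible point into every voxel overlapped by its bounds
    // expanded by its search radius.
    void Add(const VisiblePoint *vp) {
        int lo[3], hi[3];
        for (int i = 0; i < 3; ++i) {
            lo[i] = Coord(vp->p[i] - vp->radius);
            hi[i] = Coord(vp->p[i] + vp->radius);
        }
        for (int x = lo[0]; x <= hi[0]; ++x)
            for (int y = lo[1]; y <= hi[1]; ++y)
                for (int z = lo[2]; z <= hi[2]; ++z)
                    voxels[Hash(x, y, z)].push_back(vp);
    }

    // A photon only needs the single voxel containing it; the caller still
    // checks each candidate's actual distance against its radius.
    const std::vector<const VisiblePoint *> *Candidates(const float photonP[3]) const {
        auto it = voxels.find(Hash(Coord(photonP[0]), Coord(photonP[1]), Coord(photonP[2])));
        return it == voxels.end() ? nullptr : &it->second;
    }
};
```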

Chris

Gathering photons along edge boundaries between walls can produce incorrect results/banding, but this can be fixed with a filtering function or by using a disk instead of a sphere to gather the photons.
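
One way to realize the disk-shaped gather is to reject photons that lie too far off the tangent plane at the shading point, so photons deposited on an adjacent perpendicular wall don't leak into the estimate. A minimal sketch (the 10%-of-radius slab thickness is just an illustrative choice, not a standard value):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Returns true if the photon at photonP should contribute to the density
// estimate at shading point p with surface normal n and gather radius r.
bool AcceptPhotonDisk(const Vec3 &p, const Vec3 &n, float r, const Vec3 &photonP) {
    Vec3 d = Sub(photonP, p);
    float offPlane = Dot(d, n);   // distance off the tangent plane at p
    float dist2 = Dot(d, d);
    // Inside the gather radius, and within a thin slab around the tangent
    // plane -- effectively a disk rather than a sphere.
    return dist2 <= r * r && std::fabs(offPlane) <= 0.1f * r;
}
```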