What's Going On
Chris commented on slide_042 of Ray Tracing Accel ()

If a ray is being traced not from the camera but as a recursively reflected/refracted ray, would a BVH tree still force this ray to start its search from the top of the hierarchy, even if it originates from within the hierarchy instead of outside it (as would be the case for camera rays)? This seems kind of inefficient and I was wondering if there would be ways to determine a more optimal starting point in the hierarchy for this type of ray.


Chris commented on slide_025 of Low Discrepancy Sampling ()

If you use the Halton method to find a set of samples for each pixel, won't each pixel wind up with the same arrangement of points (since the Halton sequence always spits out the same array of numbers for a given base)? On the one hand you could use different bases, like for one pixel do $\Phi_2$ and $\Phi_3$ and for another pixel do $\Phi_3$ and $\Phi_5$...but there are only so many combinations like that you can do without going into super high bases (which, as the next slide shows, gives bad results). So how would you get around this problem?


Chris commented on slide_073 of Bidirectional Light Transport ()

Gathering photons near edge boundaries between walls can produce incorrect results (banding), but this can be fixed by using a filtering function or by using a disk instead of a sphere to gather the photons.


Chris commented on slide_044 of Ray Tracing Accel ()

How would one construct a BVH tree with an object that is moving through the scene? For example if you had a sphere moving from point p1 to point p2 from time t0 to t1, how would we construct its bounding box? Would you use a method similar to what we did in assignment 1 where there would be multiple bounding boxes for each interval along the way?


mmp commented on slide_069 of Bidirectional Light Transport ()

Yes--a spatial data structure is definitely in order!

The approach that many folks seem to have settled on is to store the visible points in a 3D grid; then each photon just needs to use the grid to find the points that it may be applicable to.

Check out the Lux render implementation for details:

A few things to note from that one:

  1. They're able to have a fairly memory-efficient grid, even with small voxels, via a hash table--they hash grid coordinates (x,y,z) and then only store data for the voxels that are occupied.

  2. Points are stored by expanding their bounding boxes by the maximum photon search radius and then adding them to all of the voxels that they overlap. Then, given an incoming photon, the photon just needs to be hashed to a single voxel for it to find all of the candidate hit points. Since there will in general be many more photons than visible points, this approach is a bit more efficient...
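For concreteness, here's a minimal sketch of that hashed-grid scheme (the names and constants are illustrative, not the actual Lux render code):

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct VisiblePoint { float pos[3]; /* plus BSDF info, radius, etc. */ };

    struct HashGrid {
        float voxelSize;
        std::unordered_map<uint64_t, std::vector<const VisiblePoint *>> voxels;

        // Hash integer voxel coordinates; only occupied voxels get entries.
        static uint64_t Hash(int x, int y, int z) {
            return (uint64_t(x) * 73856093u) ^ (uint64_t(y) * 19349663u) ^
                   (uint64_t(z) * 83492791u);
        }

        int Coord(float v) const { return int(std::floor(v / voxelSize)); }

        // Insert a point into every voxel overlapped by its bounds,
        // expanded by the maximum photon search radius.
        void Insert(const VisiblePoint *vp, float maxRadius) {
            int lo[3], hi[3];
            for (int i = 0; i < 3; ++i) {
                lo[i] = Coord(vp->pos[i] - maxRadius);
                hi[i] = Coord(vp->pos[i] + maxRadius);
            }
            for (int x = lo[0]; x <= hi[0]; ++x)
                for (int y = lo[1]; y <= hi[1]; ++y)
                    for (int z = lo[2]; z <= hi[2]; ++z)
                        voxels[Hash(x, y, z)].push_back(vp);
        }

        // An incoming photon hashes to a single voxel to find all of the
        // candidate visible points.
        const std::vector<const VisiblePoint *> *Lookup(const float p[3]) const {
            auto it = voxels.find(Hash(Coord(p[0]), Coord(p[1]), Coord(p[2])));
            return it == voxels.end() ? nullptr : &it->second;
        }
    };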


mmp commented on slide_052 of Bidirectional Light Transport ()

Ah, these slides on photon map lookup are fairly borked. tl;dr I think that the traversal order in PBR is the right one.

Specifically, the implementation there does indeed go depth first. The upshot of this is that the very closest photon is the first one checked (and then things proceed generally near-to-far). While there'd be no harm in visiting nodes when they're first encountered, visiting in generally near-to-far order means that once the desired number of photons is found and the search radius is reduced to the distance to the farthest of the found photons, big sections of the tree can then be quickly culled away from needing any further traversal...
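A sketch of that near-to-far traversal with culling (illustrative only--pbrt's actual code differs in the details):

    #include <queue>

    struct Photon { float pos[3]; };

    struct KdNode {
        Photon photon;
        int splitAxis;            // 0, 1, or 2
        KdNode *below, *above;    // children on each side of the split plane
    };

    // Max-heap of squared distances: the top is the farthest of the photons
    // found so far, i.e. the current search radius.
    using FoundHeap = std::priority_queue<float>;

    void Lookup(const KdNode *node, const float q[3], int k, FoundHeap &found) {
        if (!node) return;

        int axis = node->splitAxis;
        float planeDist = q[axis] - node->photon.pos[axis];

        // Recurse into the child containing the query point first, so
        // photons are encountered roughly near-to-far.
        const KdNode *nearChild = (planeDist < 0) ? node->below : node->above;
        const KdNode *farChild  = (planeDist < 0) ? node->above : node->below;
        Lookup(nearChild, q, k, found);

        // Consider this node's photon.
        float d2 = 0;
        for (int i = 0; i < 3; ++i) {
            float d = q[i] - node->photon.pos[i];
            d2 += d * d;
        }
        if ((int)found.size() < k) found.push(d2);
        else if (d2 < found.top()) { found.pop(); found.push(d2); }

        // Cull the far subtree unless the splitting plane lies within the
        // current search radius.
        if ((int)found.size() < k || planeDist * planeDist < found.top())
            Lookup(farChild, q, k, found);
    }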


And just to make this all more complicated, radiance actually increases when passing through the boundary from a medium with a lower index of refraction to one with a higher index of refraction! (Essentially, differential cones of rays are squeezed down into smaller differential cones; this cancels out when going the other way.)

This doesn't violate energy conservation, but it does complicate the analysis of the operator norms. See Chapter 4 of Eric Veach's thesis for more info...
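In symbols: for light crossing a smooth boundary from index of refraction $\eta_i$ to $\eta_t$, the transmitted radiance is scaled as

$$ L_t = \frac{\eta_t^2}{\eta_i^2} (1 - F_r) L_i $$

where $F_r$ is the Fresnel reflectance--so radiance is amplified by $(\eta_t/\eta_i)^2$ when entering the denser medium, and that factor is exactly undone on the way back out.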


mmp liked wmonroe's comment on slide_004 of Global Illumination and Path Tracing ()

wmonroe commented on slide_014 of Global Illumination and Path Tracing ()

This math is a bit over my head, but I'll hazard a guess as to how conservation of energy might lead to a proof that this series converges: the space of possible distributions of radiance throughout a scene is a normed vector space, with the norm being related to the integral of all radiances across the entire scene. Conservation of energy says that neither reflection nor light propagation can increase this integral; mathematically, this means the light transport operator has norm $\|K\| \leq 1$.

According to Wikipedia, the Neumann series of an operator with norm less than 1 over a complete vector space always converges. In a real scene, some radiance is necessarily absorbed and turned into heat (at the very least, the camera itself is absorbing light!), which gives us the strict inequality we need. I'm not sure about the second requirement, though--I've never understood what it means in practice for a vector space to be complete.
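For concreteness: with $L_e$ the emitted radiance and $K$ the light transport operator, the series in question is

$$ L = L_e + K L_e + K^2 L_e + \cdots = \sum_{n=0}^{\infty} K^n L_e = (I - K)^{-1} L_e $$

which converges whenever $\|K\| < 1$.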


wmonroe commented on slide_004 of Global Illumination and Path Tracing ()

My understanding is that Russian roulette tries to solve this problem by estimating the contribution of each path and discarding it with high probability if the contribution is low. This figures out the decrease in radiance you mention by taking into account the number of bounces and the BSDF of the surface at each bounce in the estimation of a path's potential contribution. Doing this on a per-sample basis rather than globally across the whole scene makes sense because some parts of the scene will require more bounces than others (look e.g. at the lamp at the top of the picture--it requires four bounces to even appear transparent, whereas the rest of the scene looks reasonably good even with two).
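A minimal sketch of per-path Russian roulette, for concreteness (illustrative, not any particular renderer's code; the clamp value is an arbitrary choice):

    #include <algorithm>
    #include <random>

    // Illustrative Russian roulette: `throughput` is the product of BSDF
    // values and cosines accumulated along the path so far, so dim paths
    // are terminated with high probability.
    bool RussianRoulette(float &throughput, std::mt19937 &rng) {
        std::uniform_real_distribution<float> u01(0.f, 1.f);
        // Survival probability tracks the path's remaining contribution
        // (clamped so even bright paths have some chance of terminating).
        float q = std::min(throughput, 0.95f);
        if (u01(rng) >= q)
            return false;    // terminate the path
        throughput /= q;     // reweight survivors to keep the estimator unbiased
        return true;
    }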


wmonroe commented on slide_069 of Bidirectional Light Transport ()

I imagine the shading points for progressive photon mapping would be stored in a kd-tree, analogously to the photons in standard photon mapping? Looping through a linear-format framebuffer for each splat to find relevant shading points seems like it would be a performance disaster.


wmonroe commented on slide_052 of Bidirectional Light Transport ()

This schematic seems to be showing a pre-order traversal of the tree--is there a reason why pbrt [sec. A.8, p. 1033] implements a post-order traversal (calling the process function only after recursing on both children)? For this $k$-nearest-neighbor lookup and radius searches I don't think this would cause any harm, but in general I would think that it might be useful to give the callback function info about the current node as soon as possible.


wmonroe commented on slide_025 of Sampling and Reconstruction ()

I was curious how the problems shown in the graph here were defined. It was interesting to find out that this graph was the result of experiments with human subjects! As Mitchell and Netravali put it in their paper: "The response of human viewers to various spatial effects of filters is not yet a well-understood science and is largely subjective in nature."


wmonroe commented on slide_046 of Radiometry and Cameras ()

It looks like people have done these in previous years' final projects (one example is here). Bokeh could be added simply by modifying the shape of the aperture (in Assignment 3 we assumed a circular aperture). Sending R, G, and B at slightly different angles would be a reasonable approximation; real chromatic aberration would create smoothly varying fringes for continuous spectra, which would require modeling the entire visible part of the spectrum instead of just the R/G/B channels.
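As a hypothetical example of the aperture-shape idea, one could sample lens points uniformly over a hexagon instead of a disk (the function name and decomposition here are made up for illustration):

    #include <algorithm>
    #include <cmath>
    #include <random>

    // Sample a point uniformly inside a regular hexagonal aperture by
    // picking one of its six equal-area triangles, then sampling
    // uniformly within that triangle via folded barycentrics.
    void SampleHexAperture(std::mt19937 &rng, float radius,
                           float *lensU, float *lensV) {
        std::uniform_real_distribution<float> u01(0.f, 1.f);
        const float kTwoPi = 6.28318530718f;

        // Choose one of the six triangles of the hexagonal fan.
        int tri = std::min(5, int(u01(rng) * 6));
        float a0 = kTwoPi * tri / 6.f, a1 = kTwoPi * (tri + 1) / 6.f;

        // Uniform point in the triangle (0,0), v0, v1.
        float u = u01(rng), v = u01(rng);
        if (u + v > 1.f) { u = 1.f - u; v = 1.f - v; }
        *lensU = radius * (u * std::cos(a0) + v * std::cos(a1));
        *lensV = radius * (u * std::sin(a0) + v * std::sin(a1));
    }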

The link above goes into detail about lens flare. The most prominent effect can be created by modeling the realistic lens as a Fresnel reflector, with both reflection and transmission (we only allowed transmission). This gives the series of circles/hexagons that one normally notices in an image with lens flare. The student who did this project also added an element that I probably would not have thought to include: if the light is bright enough, you can see the diffraction of the light around the aperture in addition to internal reflections in the lens apparatus. This produces an interesting starburst effect that doesn't appear with reflection alone.


So with each bounce the radiance of each photon should decrease, which means there should be an optimal number of bounces beyond which adding more doesn't noticeably increase the quality of the image. Is finding this limit just a matter of trial and error and eyeballing it, or can we predict the optimal number of bounces ahead of time?


Yes, that's right.


atrytko commented on slide_031 of Monte Carlo Integration ()

The phrase "decreases linearly with sample size" was confusing to me. After checking with the teaching staff, I thought I'd post here to clarify that this means 'is inversely proportional'.
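For concreteness, for the basic Monte Carlo estimator $F_N = {1\over N}\sum_{i=1}^N {f(X_i)\over p(X_i)}$,

$$ \mathrm{Var}[F_N] = \frac{1}{N}\,\mathrm{Var}\!\left[\frac{f(X)}{p(X)}\right] $$

so the variance is inversely proportional to $N$ (and the standard error falls off as $1/\sqrt{N}$).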


atrytko commented on slide_018 of Global Illumination and Path Tracing ()

In class, we were told that there should be a $ {1\over N_{samples}} $ term in front of the estimator, right?


jingpu commented on slide_046 of Direct Illumination ()

Another interesting thing here is that the windows of the car, which are presumably glossier than the metal, have much less noise.


jingpu commented on slide_046 of Direct Illumination ()

Even though the top surface and the front surface of the car are the same glossy material, the front surface has much less noise. I think it is because the material has a relatively large probability density around the reflection angle.


jingpu commented on slide_049 of Radiometry and Cameras ()

The $\cos\theta'$ term here is introduced by the change of variable of integration. It took me some time to figure it out.
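For reference, the change of variables from solid angle to area at the point $p'$ is

$$ d\omega = \frac{\cos\theta' \, dA'}{\|p' - p\|^2} $$

where $\theta'$ is measured from the surface normal at $p'$--that's where the extra cosine comes from.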


mmp commented on slide_038 of Monte Carlo Integration ()

Good catch--thanks. Will fix in the master slides.


brianjo commented on slide_038 of Monte Carlo Integration ()

Small technical note: the right hand side of the first line of the proof should read $E[f(X_i)/p(X_i)]$.


bmild commented on slide_053 of Radiometry and Cameras ()

What is d, particularly for the current assignment? In the paper they indicate that it is the distance to the exit pupil, but I couldn't figure out what that was supposed to be either. (I'm trying to use this to figure out the appropriate weight for GenerateRay to return.)

EDIT: Now I see that d is arbitrary since we are integrating over area, which scales with d^2, so the effect of scaling d will be canceled out by the dA' differential in the integral.


mmp commented on slide_004 of Low Discrepancy Sampling ()

Yes, there are definitely opportunities along those lines! In a few lectures, we'll use luminance (which, as a photometric quantity, accounts for the eye's greater sensitivity to green) to decide when to stop doing more work in global illumination algorithms.
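As a hypothetical sketch of that kind of stopping test (the function names and tolerance are made up; the weights are the standard Rec. 709 luminance coefficients):

    #include <cmath>

    // Weight channels by the eye's sensitivity rather than treating
    // R, G, and B equally; green dominates the luminance.
    float Luminance(float r, float g, float b) {
        return 0.2126f * r + 0.7152f * g + 0.0722f * b;
    }

    // Stop refining a pixel once the luminance change between successive
    // estimates drops below a perceptual tolerance.
    bool Converged(float prevLuminance, float currLuminance,
                   float tol = 0.002f) {
        return std::fabs(currLuminance - prevLuminance) < tol;
    }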

Related, some researchers have tried to take advantage of other characteristics of the human visual system to do more efficient rendering--e.g. this paper takes advantage of high frequency textures to find places where lower-quality lighting calculations can be done without human observers noticing...


Chris commented on slide_004 of Low Discrepancy Sampling ()

Since human (and presumably monkey) cones are more sensitive to green than they are to red and blue, would a sort of color-based sampling be of any use? Like for instance if a certain area of the scene is returning a lot of green hits, you could adaptively have the program send more rays out into that area to make sure it's as accurate as possible (since we'd notice inaccuracies a lot more) and vice versa for areas where red/blue dominate?


Chris commented on slide_041 of Low Discrepancy Sampling ()

Apparently the apostrophe at the end of Sobol's name indicates the Russian letter ь, known as the "soft sign." It indicates that the preceding consonant should be palatalized (softened). So Sobol is pronounced with a "soft l".

...The things you learn at 3am.


mmp commented on slide_019 of Sampling and Reconstruction ()

Yes, that's correct.


clemire commented on slide_040 of Sampling and Reconstruction ()

I could see this effect as actually being desirable in some cases, either from an artistic perspective, or when you need to enhance edge contrast to make an image easier to make out. Definitely doesn't look realistic, though.


clemire commented on slide_019 of Sampling and Reconstruction ()

Sanity check: the sinc function only introduces ringing when there are frequency components of the unfiltered function greater than the imposed bandwidth, right?


mmp commented on slide_018 of Monte Carlo Integration ()

That's good feedback. In retrospect I think that what you're suggesting--basically starting with the derivation we used later in the lecture--would make the ideas clearer. (The students next year can benefit from this!)


mmp commented on slide_024 of Radiometry and Cameras ()

The cosine factor is still present regardless of how the surface is reflecting light (mirror, diffuse, etc). It turns out that it's there to account for the reduction in energy arriving at the surface from the light in the first place as the surface's orientation changes w.r.t. the incident light.
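In symbols: the differential irradiance arriving at the surface from a cone of directions $d\omega$ is

$$ dE = L_i \cos\theta \, d\omega $$

and that $\cos\theta$ is the factor in question--it's a statement about the light arriving, not about how the surface reflects it.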

Hopefully this will be clear after Tuesday's lecture next week. :-)


mmp liked Tianye's comment on slide_020 of Radiometry and Cameras ()

Tianye commented on slide_020 of Radiometry and Cameras ()

Remember that $d\cos\theta = -\sin\theta \, d\theta$.

Therefore, $\int_0^{\pi}\sin\theta \, d\theta = -\int_1^{-1}d\cos\theta = \int_{-1}^1 d\cos\theta$.

The integration in terms of $\phi$ remains unchanged.


Tianye commented on slide_018 of Monte Carlo Integration ()

I find the slide a little bit confusing, especially with the different occurrences of $f(x)$. It was helpful for me to think of it this way:

our goal is to calculate $$ I=\int f(x)\,dx $$ We can rewrite this as $$ I=\int {f(x)\over p(x)}p(x)\,dx $$ which is $E\left[{f(x)\over p(x)}\right]$.

By the derivation on the next slide, we can estimate this with ${1\over N}\sum {f(x_i)\over p(x_i)}$, where $x_i \sim p(x)$.
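A small self-contained example of that estimator (the particular $f$ and $p$ here are made up for illustration):

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Estimate I = integral of x^2 over [0,1] (exactly 1/3) by importance
    // sampling with p(x) = 2x, drawing x_i ~ p via CDF inversion: x = sqrt(u).
    int main() {
        std::mt19937 rng(0);
        std::uniform_real_distribution<double> u01(0.0, 1.0);

        const int N = 100000;
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
            double x = std::sqrt(u01(rng));  // x_i ~ p(x) = 2x
            double f = x * x;                // integrand f(x_i)
            double p = 2.0 * x;              // pdf p(x_i)
            sum += f / p;                    // accumulate f(x_i)/p(x_i)
        }
        // The estimate converges to 1/3; variance is low here because
        // p is roughly proportional to f.
        std::printf("estimate = %f\n", sum / N);
        return 0;
    }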


clemire commented on slide_024 of Radiometry and Cameras ()

I think in this case we would use the cosine of the angle between the outgoing ray in the direction we are computing and the ray reflected across the surface from the incident ray.


clemire commented on slide_020 of Radiometry and Cameras ()

Is there an error here? How did $\sin\theta \, d\theta \, d\phi$ turn into $d\cos\theta \, d\phi$?


clemire liked brianjo's comment on slide_010 of Radiometry and Cameras ()

mmp commented on slide_023 of Monte Carlo Integration ()

V is just a function that is 1 if the two points are mutually visible and 0 if there is another object occluding the line segment between them.
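In symbols:

$$ V(p, p') = \begin{cases} 1 & \text{if } p \text{ and } p' \text{ are mutually visible} \\ 0 & \text{otherwise} \end{cases} $$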

The BRDF comes in starting next lecture.


Tianye commented on slide_023 of Monte Carlo Integration ()

Sorry, I forgot what V(p,p') was... Is it the BRDF?


Tianye commented on slide_010 of Radiometry and Cameras ()

This illustration from CS148 might help understanding. We basically just need the angle between the ray and the surface normal.



soohark commented on slide_036 of Sampling and Reconstruction ()

So, are we choosing r_min to be min(cell_width, cell_height) here?


mmp commented on slide_053 of Radiometry and Cameras ()

Zach kindly points out an error in this slide:

I think

d^2 = cos^2(theta) / ||p' - p ||^2

should be:

||p' - p ||^2 = d^2 / cos^2(theta)

or more simply:

||p' - p || = d / cos(theta)


Tianye commented on slide_024 of Radiometry and Cameras ()

There are a lot of models describing the relationship between $L_i$ and $L_o$, which can be put into different categories, including the BRDF (modeling reflection), the BTDF (modeling transmission), and the BSSRDF (modeling subsurface scattering, where light exits at a different point than it enters). There are models as simple as the Phong reflection model (where the diffuse reflection of a material is characterized by a constant: $L_o = k_d L_i \cos\theta_i$) and ones that are far more complicated. It seems we will have several lectures to discuss these models...


mgao12 commented on slide_024 of Radiometry and Cameras ()

I wonder what the relationship between $L_i$ and $L_o$ is here in the reflection case. I think it may depend on the properties of the surface. For a mirror, the strongest direction of reflection certainly follows the law of reflection. However, a diffuse surface may produce uniform radiance in all directions. So does the "cos" relation still hold here?


mmp commented on slide_017 of High-Performance Ray Tracing ()

It's mostly an issue of ensuring consistent results across different execution schedules. In other words, one definitely could implement hardware that provided atomic operations for non-associative operations (like floating-point addition), but then a program using them could generate different results across different runs (which is an undesirable property, in most cases.)

For example, if some memory location had a value A and then two threads did atomic floating-point addition of values B and C, respectively, to A, one execution schedule could cause a series of operations ordered like:

A = (A + B) + C

and another schedule could cause:

A = (A + C) + B

which may give a different final value for A with floating-point arithmetic. For rendering, we may not care about this, but I think the general philosophy is that most application areas do care about this kind of consistency.

FWIW in core/parallel.h in the PBRT code there's an atomic floating-point add built on top of atomic compare and exchange, which everyone does support for floats (since it's all just bits in that respect...)