Since human (and presumably monkey) cones are more sensitive to green than they are to red and blue, would a sort of color-based sampling be of any use? Like for instance if a certain area of the scene is returning a lot of green hits, you could adaptively have the program send more rays out into that area to make sure it's as accurate as possible (since we'd notice inaccuracies a lot more) and vice versa for areas where red/blue dominate?
Yes, there are definitely opportunities along those lines! In a few lectures, we'll use luminance (which, as a photometric quantity, accounts for the eye's greater sensitivity to green) to decide when to stop doing more work in global illumination algorithms.
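To make the luminance idea concrete, here is a minimal sketch (not from the lecture itself) of using relative luminance as a convergence test. The Rec. 709 weights below reflect the eye's greater sensitivity to green; the tolerance value is an arbitrary example, not a recommended setting.

```python
def luminance(r, g, b):
    """Relative luminance of a linear-RGB sample (Rec. 709 weights).

    The large green weight encodes the eye's greater sensitivity to green.
    """
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def converged(prev_rgb, new_rgb, tol=0.01):
    """Stop refining a pixel once its luminance change drops below tol."""
    return abs(luminance(*new_rgb) - luminance(*prev_rgb)) < tol
```

A renderer could call `converged` between batches of samples and stop early for pixels whose perceived brightness has stabilized, spending the saved rays elsewhere.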
Relatedly, some researchers have tried to take advantage of other characteristics of the human visual system to do more efficient rendering. For example, this paper takes advantage of high-frequency textures to find places where lower-quality lighting calculations can be done without human observers noticing...
If you use the Halton method to find a set of samples for each pixel, won't each pixel wind up with the same arrangement of points (since the Halton sequence always spits out the same array of numbers for a given base)? On the one hand you could use different bases, like for one pixel do Phi2 and Phi3 and for another pixel do Phi3 and Phi5...but there are only so many combinations like that you can do without going into super high bases (which as the next slide shows gives bad results). So how would you get around this problem?
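The determinism you describe is easy to see in code: the radical inverse for a fixed base always produces the same values. One common workaround (my suggestion here, not something stated on the slides) is to keep the same low-base Halton points everywhere and apply a random per-pixel toroidal shift (a Cranley-Patterson rotation), which decorrelates pixels while preserving the points' even spacing.

```python
import random

def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i
    across the radix point. Deterministic for a given (i, base)."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton_2d(n):
    """First n points of the (base-2, base-3) Halton sequence --
    identical for every pixel that calls this."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(n)]

def rotated_samples(n, rng):
    """Same Halton points, shifted by a random per-pixel offset mod 1
    (Cranley-Patterson rotation), so pixels no longer share a pattern."""
    dx, dy = rng.random(), rng.random()
    return [((x + dx) % 1.0, (y + dy) % 1.0) for x, y in halton_2d(n)]
```

Other standard options along the same lines are scrambled/permuted Halton variants, which randomize the digit order per pixel instead of shifting the points.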
Apparently the apostrophe at the end of Sobol's name indicates the Russian letter Ь ь, known as the "soft sign." It indicates that the previous consonant should be palatalized (softened). So Sobol' is pronounced with a "soft l".
...The things you learn at 3am.