So, I want to present my MS thesis in a better way. My adviser came up with such a beautiful idea that I wasn’t smart enough to fully understand and appreciate until now. It’s artistic and beautiful.

So, what’s wrong with PCSS (Percentage-Closer Soft Shadows)? Blocker estimation. Blockers are estimated by a blocker search over a custom-defined search region. That custom-defined search size is arbitrary and doesn’t really make sense: it pulls in wrong texels from the shadow map and counts them as blockers, which results in a wrong average blocker distance, and that eventually produces wrong visibility.
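To make the problem concrete, here is a minimal CPU-side sketch of the standard PCSS blocker search (pure Python for illustration; the function name, the square window, and the list-based shadow map are my own assumptions, not the thesis code). Note how `radius` is a user-chosen knob: pick it too large and unrelated texels get counted as blockers, skewing the average.

```python
def average_blocker_depth(shadow_map, x, y, radius, d_receiver):
    # shadow_map: 2D list of stored light-space depths.
    # (x, y): the receiver's texel; radius: the user-chosen search
    # radius PCSS requires.  A texel whose stored depth is closer to
    # the light than the receiver is counted as a blocker.
    blockers = []
    for j in range(max(0, y - radius), min(len(shadow_map), y + radius + 1)):
        for i in range(max(0, x - radius), min(len(shadow_map[0]), x + radius + 1)):
            if shadow_map[j][i] < d_receiver:
                blockers.append(shadow_map[j][i])
    # No blockers found means the receiver is fully lit.
    return sum(blockers) / len(blockers) if blockers else None
```

The averaged depth then drives the penumbra-size estimate, so any wrong texel swept up by the window corrupts everything downstream.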

So, we fixed the blocker estimation by directly looking up the blocker’s depth from the corresponding texel’s value. That gives the explicit blocker depth, and with it we get a penumbra size proportional to the blocker depth. Here’s the nice part: the penumbra size can be matched to the texel size of a mipmap layer. That means two things. One, a large penumbra can be filtered with a handful of large texels, which takes fewer samples than using small texels, and fewer samples means less computation cost. Two, since the required texel size is proportional to the blocker depth, we can find the mipmap layer that matches the proportion between the explicit blocker distance and the texel size. Then that mipmap layer is the one to filter to get the visibility, and the computation cost of looking up samples is constant.
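A rough sketch of that mapping in Python (the similar-triangles penumbra estimate is the standard PCSS one; the function names, the rounding policy, and the clamping are my own assumptions, not the thesis implementation):

```python
import math

def penumbra_width(d_receiver, d_blocker, light_size):
    # Standard PCSS similar-triangles estimate:
    #   w = (d_receiver - d_blocker) / d_blocker * light_size
    # so the penumbra grows with the receiver-blocker separation.
    return (d_receiver - d_blocker) / d_blocker * light_size

def mip_level(penumbra, base_texel_size, num_levels):
    # Each mip level doubles the texel footprint, so the level whose
    # texel size matches the penumbra is log2(penumbra / texel_size).
    level = math.log2(max(penumbra / base_texel_size, 1.0))
    return min(max(round(level), 0), num_levels - 1)
```

Because the filter always touches the same fixed number of texels at the chosen level, a wide penumbra costs no more lookups than a narrow one.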