Posts from the “Shading” Category

GLSL: Two Shader Programs in Three Viewports


Steps to render the sunrise above in three viewports instead of one, to compare three components of the sunrise – Rayleigh only, Mie only, and Rayleigh plus Mie:

  • In the fragment shaders of the skydome and the area light, declare three outputs
  • On the CPU, configure three textures for a framebuffer, each of which is rendered from a color attachment [1]. The three color attachments take the values of the three outputs from the fragment shaders [2]
  • In the framebuffer, render the two fragment shaders of the skydome and the area light to the three textures
  • Render the three textures to three viewports on the screen [2]


[1] Fragment Shader Output Buffers.

[2] Render To Texture.

Real-Time Global Illumination

Path tracing is too expensive; radiosity seems like the right way.

Questions for the next steps:

  • Is there a way to transform the geometry representation into a different form, so that light hits the geometry in a more computationally efficient way to get the energy on that surface?
    • Transform the surface to a different space
    • Get the energy to that surface


  • Do a thorough evaluation of global illumination
  • Understand radiosity
  • Understand geometry representation
    • Check “Geometric Algebra for Computer Science” to find a transformed geometric representation in a different space than 3D space, so that the energy (light) received by the geometry stays invariant or commutative in both spaces, yet is more computationally efficient in the “different space”. If such a transformed geometric representation can be found, computer graphics would not need the expensive path tracer for global illumination

Write GLSL Code Fast on the CPU

Since the GPU doesn’t check for compilation errors while you write GLSL shaders, it’s slow to find compilation errors purely by looking at the code. I found that we can write the GLSL code on the CPU with the aid of the CPU compiler’s error checking, then move it to the GPU to run. P.S. when writing on the CPU, the CPU side should have a copy of the GLSL built-in functions and types, such as vec2, normalize, etc.

Light Bleeding of Soft Shadow Mip-Maps

Light bleeding happens in the umbra area where the mipmap layers are filtered right away, so that the inconsistent filter sizes break the accumulation law of blocked light in the filter areas – not all the blocked light adds up to 1.

But the thing is, if we can prevent the fully blocked fragments from being filtered, we would fix light bleeding. Idea: check whether the fragment is fully blocked in the layers blocking any light. If all the layers block the fragment, don’t filter.

The idea works in theory, but when I implemented it, light bleeding didn’t seem fixed :_(

Space Transformation between Atmospheric Points During Ray Marching


Based on the definition of the coordinate system of a ray-marching point in the atmosphere from a previous article, this article talks about how to transform the space from one point to another during ray marching.

For a given point p, whose coordinate in its own space is (0, r + h): starting from p, if we move p to p_prime along a direction with vertical angle theta (in p’s space), we get p_prime in p’s space. Then how do we get p_prime’s coordinate and the direction in p_prime’s space?

  • 1) Compute p_prime’s coordinate in p’s space. Given p (x, y), the direction (x_d, y_d) = (cos(theta), sin(theta)), and the distance d to move to p_prime, we get p_prime’s coordinate in p’s space by: (x_prime, y_prime) = p + d * direction = (x, y) + d * (x_d, y_d) = (x + d * x_d, y + d * y_d).
  • 2) Transform p_prime’s coordinate from p’s space to p_prime’s space. Get the distance from p_prime in p’s space to the origin: distance_to_origin = sqrt(x_prime^2 + y_prime^2). Then p_prime’s coordinate in its own space is: (x_prime_own, y_prime_own) = (0, distance_to_origin).
  • 3) Compute the direction in p_prime’s space. To get the direction in p_prime’s space, we just need to get theta_prime as shown in the figure. theta_prime in p’s space can be obtained by theta_prime = acos(dot_product((x_d, y_d), normalize((x_prime, y_prime)))) – note that the position vector must be normalized before the dot product.
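The three steps above can be sketched in Python (a minimal sketch of the 2D math; the function and variable names are mine):

```python
import math

def step_to_p_prime(p, theta, d):
    """Steps 1)-3): march from p by distance d along vertical angle theta,
    returning p_prime's coordinate and direction angle in p_prime's space."""
    x, y = p
    x_d, y_d = math.cos(theta), math.sin(theta)   # direction in p's space
    x_p, y_p = x + d * x_d, y + d * y_d           # 1) p_prime in p's space
    dist = math.hypot(x_p, y_p)                   # 2) distance to the origin
    p_prime_own = (0.0, dist)                     # 2) p_prime in its own space
    # 3) angle between the march direction and the normalized position vector
    cos_tp = (x_d * x_p + y_d * y_p) / dist
    theta_prime = math.acos(max(-1.0, min(1.0, cos_tp)))
    return p_prime_own, theta_prime
```

Marching straight up keeps theta_prime at 0, while marching horizontally from (0, 1) by 1 gives a 45-degree deviation, as expected from the geometry.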

So that’s the basic idea. In practice (i.e. in a GLSL shader), the input is p represented by a normalized altitude and a normalized vertical angle. The output should be p_prime with its normalized altitude and normalized vertical angle in its own space. To do this, we need to:

  • 4) Compute p’s coordinate and the direction in p’s space, then
  • 5) Follow the above three steps to get p_prime’s coordinate and the direction in p_prime’s space.
  • 6) Compute p_prime’s normalized altitude and the normalized vertical angle with the result from the previous step.

Now let’s do 4) and 6).

  • 4) Compute p’s coordinate and the direction in p’s space.
    • Compute the earth’s relative radius from the ratio of the earth radius to the outer atmospheric depth and the normalized outer atmospheric depth: ratio * 1.0 = 106 (= 6360/(6420-6360); see Simulating the Colors of the Sky).
    • Compute p’s coordinate (x, y) = (0, 106 + normalized-altitude).
    • Compute theta (domain is [0, PI]) with the normalized vertical angle (domain is [0, 1]): theta = normalized-vertical-angle * PI.
  • 6) Compute p_prime’s normalized altitude and the normalized vertical angle.
    • p_prime’s normalized altitude = y_prime_own (i.e. from 2)) – earth’s relative radius = y_prime_own – 106.
    • normalized vertical angle = theta_prime / PI (NOTE: theta_prime is calculated from 3)).

In conclusion, the steps for computing p_prime’s normalized altitude and normalized vertical angle in its own space with p’s normalized altitude and normalized vertical angle are:

4), 1), 2), 3), 6).
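The whole pipeline – compute p’s coordinate and direction, march, and convert back to normalized parameters – can be sketched in Python (a self-contained sketch; the names are mine, and the relative earth radius follows the ratio above):

```python
import math

R_EARTH = 6360.0 / (6420.0 - 6360.0)  # relative earth radius, = 106

def march_normalized(norm_alt, norm_angle, d):
    """From p's normalized altitude / vertical angle, march distance d and
    return p_prime's normalized altitude / vertical angle in its own space."""
    theta = norm_angle * math.pi                 # angle domain [0, 1] -> [0, pi]
    x, y = 0.0, R_EARTH + norm_alt               # p in its own space
    x_d, y_d = math.cos(theta), math.sin(theta)
    x_p, y_p = x + d * x_d, y + d * y_d          # p_prime in p's space
    dist = math.hypot(x_p, y_p)                  # distance to the origin
    # angle between the march direction and the new (normalized) vertical axis
    cos_tp = (x_d * x_p + y_d * y_p) / dist
    theta_prime = math.acos(max(-1.0, min(1.0, cos_tp)))
    return dist - R_EARTH, theta_prime / math.pi
```

For example, marching straight up (normalized angle 0.5) by 0.5 raises the normalized altitude by 0.5 and leaves the ray aligned with the new vertical axis.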

Simulate the Altitude and Vertical Angle of Eye Rays in the Outer Atmosphere with a Sky Dome

The sky dome in practice is a hemisphere (left, in orange), and the outer atmosphere is part of a sphere (right, in orange):


#1 In practice, what we have is on the left: a sky dome holding vertices of the sky and the position of the eye.

#2 What we want from #1 is the altitude and vertical angle ‘theta’ of the eye as shown on the right. So, how do we get the right from the left?

#3 The simple answer to #2 is taking the y-axis value of the eye position in the sky dome space as the altitude, and taking the vertical angle of the eye ray going from the eye position (in the sky dome’s space) to a sky vertex. This would result in the following graph as the colors of the sky, and raise a problem:


When the eye is located at the sky dome center, with a given field of view, i.e. fov, the fragment color is obtained by sampling the center of the fragment. This gives detailed sky color when the eye is looking towards the apex of the sky dome, and coarse sky color when looking towards the center of the sky dome, because the ellipse-shaped sky is sampled evenly by same-sized fragments. The color detail of the sky decreases as the eye direction goes from the apex to the center. The same applies when the eye is located above the sky dome center.

One solution to enrich the color details close to the center of the sky dome is to take multiple samples for the fragments close to the center of the sky dome, with the number of samples per fragment proportional to the angle between the eye ray and the vertical axis, i.e. the Y axis. However, this extra sampling is optional, as the quality of the sky color can be good enough with the regular fragment sampling in GLSL.

#4 With the discussion in #3, we can get the altitude and vertical angle of an eye ray by:

  • #4.1 Transforming the eye position in world space to the sky-dome space
    • #4.1.1 Compute the matrix transforming world-space point to the sky-dome space.
    • #4.1.2 Multiply the eye position by the matrix to get the eye position in the sky-dome space.
  • #4.2 Getting the eye ray in sky-dome space for a sky fragment
    • #4.2.1 Subtract the eye position (in sky-dome space) from the position of the sky fragment (in sky-dome space)
    • #4.2.2 Normalize the eye ray
  • #4.3 Getting the altitude of the eye ray: the distance between the eye position and the origin of the sky-dome space (in sky-dome space)
  • #4.4 Getting the vertical angle of the eye ray: the arccosine of the dot product of the eye ray (normalized, in sky-dome space) with the vertical axis (eye position in sky-dome space – origin of sky-dome space; normalized, in sky-dome space).
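Steps #4.1–#4.4 can be sketched as plain 3D vector math in Python (the function names are mine, and I assume the eye sits above the dome center so the vertical axis is well defined):

```python
import math

def eye_ray_params(eye_world, frag_world, dome_center, dome_radius):
    """Altitude and vertical angle of an eye ray, per #4.1-#4.4."""
    # #4.1: world -> sky-dome space, a uniform translate + scale
    def to_dome(p):
        return tuple((pc - cc) / dome_radius for pc, cc in zip(p, dome_center))
    eye = to_dome(eye_world)
    frag = to_dome(frag_world)
    # #4.2: eye ray from the eye to the sky fragment, normalized
    ray = tuple(f - e for f, e in zip(frag, eye))
    n = math.sqrt(sum(c * c for c in ray))
    ray = tuple(c / n for c in ray)
    # #4.3: altitude = distance from the eye to the dome origin
    alt = math.sqrt(sum(c * c for c in eye))
    # #4.4: vertical angle = acos(ray . up), up = normalized eye position
    # (assumes the eye is above the dome center, so alt > 0)
    up = tuple(c / alt for c in eye)
    cos_v = sum(r * u for r, u in zip(ray, up))
    return alt, math.acos(max(-1.0, min(1.0, cos_v)))
```

An eye ray pointing straight at the apex gives a vertical angle of 0, and one pointing below the horizon gives an angle greater than pi/2.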

So far, the only problem left is computing the matrix as discussed in #4.1.1. The known facts are:

  • Eye position in world space: X_e (x_e, y_e, z_e)
  • Sky dome’s position in world space: X_s (x_s, y_s, z_s)
  • Sky dome’s radius in world space: r

Then the eye position in the sky-dome space is: (X_e – X_s) / r. The sky-dome space shares the same axis directions as the world space. The result can be achieved by multiplying a scaling matrix, S, and a translation matrix, T: ST, where:

  • T = [1, 0, 0, -x_s; 0, 1, 0, -y_s; 0, 0, 1, -z_s; 0, 0, 0, 1]
  • S = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; 0, 0, 0, 1]
  • Hence, ST = [1/r, 0, 0, -x_s/r; 0, 1/r, 0, -y_s/r; 0, 0, 1/r, -z_s/r; 0, 0, 0, 1]

T and S in this case are row-major. With GLSL the matrices should be converted to column-major, which is:

  • T = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; -x_s, -y_s, -z_s, 1]
  • S = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; 0, 0, 0, 1]
  • ST = [1/r, 0, 0, 0; 0, 1/r, 0, 0; 0, 0, 1/r, 0; -x_s/r, -y_s/r, -z_s/r, 1]
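A quick Python check of the matrices (with hypothetical numbers for the dome center and radius) verifies that ST maps X_e to (X_e – X_s)/r, and shows the column-by-column flattening that GLSL’s mat4 expects:

```python
# Hypothetical numbers: dome center (2, 3, 4), radius 5.
x_s, y_s, z_s, r = 2.0, 3.0, 4.0, 5.0

T = [[1, 0, 0, -x_s], [0, 1, 0, -y_s], [0, 0, 1, -z_s], [0, 0, 0, 1]]
S = [[1 / r, 0, 0, 0], [0, 1 / r, 0, 0], [0, 0, 1 / r, 0], [0, 0, 0, 1]]

def mat_mul(a, b):
    """4x4 row-major matrix product a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, p):
    """Apply a row-major 4x4 matrix to a 3D point (w = 1)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

ST = mat_mul(S, T)
# Flatten column by column: the order GLSL's mat4 constructor expects.
st_col_major = [ST[i][j] for j in range(4) for i in range(4)]
```

The last four entries of the flattened array hold the translation column -x_s/r, -y_s/r, -z_s/r, 1, matching the column-major layout above.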

Altitude of Ray Marching Point for Optical Depth Integration in Atmospheric Rendering

The context of this article is based on the parameter calculation mentioned in the sky rendering article where most concepts stem from O’Neil’s atmospheric rendering article.


Optical depth is an integration of the out-scattering along a ray, which starts at a given altitude h_0 with its direction deviated from the vertical by angle theta. The integration is done by sampling the ray, whose starting point is p_0 and whose end point p_1 is the intersection of the ray with the outer atmosphere, i.e. the sky dome. For every sampling point p_i, the altitude needs to be estimated. That is the topic of this article.

Assume the coordinate system is anchored at the starting point p_0; we can set up the initial coordinate system by placing p_0 right above the earth, so the coordinate of p_0 is (0, r + h_0). For any sampling point p_i, the coordinate is p_0 + i * ray_marching_interval * ray_direction, which produces (x, y). With (x, y), the altitude of p_i is sqrt(x*x + y*y) – r.

Note that the domain of the altitude in this context is [0, 1], so the earth radius needs to be scaled by the ratio of the earth radius to the atmospheric thickness. According to the Simulating the Colors of the Sky blog, the ratio is 6360/(6420-6360) = 106, so the earth radius used in this context is 106 * 1.0 = 106, where 1.0 is the maximum value of the altitude domain.
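The altitude computation above fits in a few lines of Python (a minimal sketch; the scaled radius follows the ratio above, and the names are mine):

```python
import math

R = 6360.0 / (6420.0 - 6360.0)  # scaled earth radius, = 106

def sample_altitude(h_0, theta, i, interval):
    """Altitude of the i-th ray-marching sample, starting from
    p_0 = (0, R + h_0) and marching along vertical angle theta."""
    x = i * interval * math.cos(theta)
    y = (R + h_0) + i * interval * math.sin(theta)
    return math.hypot(x, y) - R
```

Sample 0 returns h_0 itself, and a vertical ray (theta = pi/2) simply adds the marched distance to the altitude.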

GLSL Sky Shading on Blender

The simplest way is using a sky dome, i.e. a sphere or hemisphere, and coloring it by linearly interpolating two gradient colors according to the fragment’s height, i.e. its z value. The gradient colors consist of the apex color and the center color.

Here’s the result:


Instead of linear interpolation based on height, using the eye angle above sea level (a) is closer to reality: apex * sin(a) + center * cos(a):
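As a toy Python sketch of the blend (per channel; a is the eye angle above sea level, and the names are mine):

```python
import math

def sky_color(apex, center, a):
    """Per-channel blend: apex * sin(a) + center * cos(a),
    where a is the eye angle above sea level in [0, pi/2]."""
    return tuple(ap * math.sin(a) + ce * math.cos(a)
                 for ap, ce in zip(apex, center))
```

At a = 0 the result is the center color, and at a = pi/2 it is the apex color, with a smooth blend in between.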


Although blending with the sine and cosine of the eye angle looks nice, it doesn’t take into account the fact that light is scattered based on the distance it travels from the outer atmosphere to the eye.

So far the vertex blending trick has been working well. But when it comes to light scattering, the theory gets a bit complex. The discretized light transport process goes as follows:

  • The light along the camera ray is integrated by accumulating the in-scattered light at each point p
  • At each point p, calculate the in-scattered light by accounting for the light scattered away on the way from the sun to point p, and from point p to the camera. The phase function is used to attenuate the scattered light towards the camera at the very beginning. Here’s the in-scattering equation:

Image Credit: Sean O’Neil, “Accurate Atmospheric Scattering”, GPU Gems 2, which includes the phase function F(theta, g) and the out-scattering function t(P_aP_b, lambda)

However, I don’t think it’s right to attenuate the entire light by the phase function. The phase function should only be used to attenuate the light when the light direction deviates from the direction to the camera:


Evaluating the two out-scattering integrals and the one in-scattering integral per sample point during rendering is expensive. The amount of evaluation for every camera ray is: N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). Considering the wavelengths of the three color channels (i.e. red, green, blue) and the two kinds of scattering (Rayleigh and Mie), the computation per camera ray is 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). To make the rendering result accurate, the values of the Ns have to go high.

One way to save the scattering evaluation per camera ray is pre-computing the optical depth (the integral part of the out-scattering) and the atmospheric density at a specific height, and storing them in a 2D lookup table whose dimensions, altitude and angle towards the sun, respectively define the starting point and direction of a ray. Each ray reflected in the 2D lookup table starts from a sample point in the atmosphere and exits at a sky vertex, if the ray doesn’t hit any objects.

With the pre-computed 2D lookup table, the optical depths of all rays from the sample points to the sun (pp_c) can be found directly in the table. The optical depths from the sample points to the camera, if the camera is in the atmosphere, need two steps to be obtained:

  • If the ray from the camera through the sample point doesn’t intersect the ground, the optical depth between the camera and the sample point is the optical depth from the camera towards the sky vertex minus that from the sample point towards the sky vertex.
  • Otherwise, if the ray from the camera through the sample point intersects the ground, the optical depth between the camera and the sample point is the optical depth from the sample point towards the sky vertex minus that from the camera towards the sky vertex.
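The subtraction trick can be sanity-checked with a toy exponential atmosphere in Python, where the vertical optical depth to the sky has a closed form (the scale height and units here are hypothetical, not the ones used in the shader):

```python
import math

H = 0.25   # scale height (hypothetical; altitude domain [0, 1])
TOP = 1.0  # top of the atmosphere

def depth_to_sky(h):
    """Optical depth of a vertical ray from altitude h to the atmosphere top,
    for density exp(-h/H): the closed form H * (exp(-h/H) - exp(-TOP/H))."""
    return H * (math.exp(-h / H) - math.exp(-TOP / H))

def depth_between(h_cam, h_sample):
    """Optical depth of the segment between camera and sample (upward ray,
    no ground hit): the difference of the two 'table lookups'."""
    return depth_to_sky(h_cam) - depth_to_sky(h_sample)
```

The difference of the two lookups matches a direct numerical integration of the density over the segment, which is exactly why the table only needs to store depths to the sky.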

Now to implement the shader, we need to pre-calculate the 2D lookup table (stored in a texture) and pass the texture to the shader. Given that using a texture in Blender needs a few manual configuration steps, which I’m trying to avoid, is there a way of squeezing the values in the 2D table into a math equation that takes the altitude and angle and returns the optical depth? Sean O’Neil plotted the curves of the 2D lookup table and found curves fitting the values. That way, the computation per camera ray is reduced from 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}) to 2 * 3 * N_{in-scattering} * (1 + 1). That’s a good approach, but I can’t use his result since I modified the in-scattering equation a little bit. So I’ll plot the values in the 2D lookup table using the revised equation and see if there’s a curve that fits the values.

Using Matlab, the fitted surface turns out to be a polynomial in the vertical angle and altitude. The final result looks like the following:

