Posts from the “Blender” Category

Interactively Update Uniforms of Custom GLSL Shader in Blender Game

An easy way to interactively update uniform variables of a custom GLSL shader is to use game properties as the values of the uniforms, and to update the game properties from keyboard input, which triggers the update of the game frame. For example, to accumulate a floating-point uniform `verticalSunDirection_cameraSpace` in the fragment shader, the steps go as follows:

  • Add a game property `verticalSunDirection` to the object the shader is attached to.

add-game-property

  • Assign the uniform variable `verticalSunDirection_cameraSpace` from the game property `verticalSunDirection`. For debugging purposes, we print the updated value of `verticalSunDirection` when assigning it to the uniform at line 619:

assign-gameproperty-to-uniform

If you are curious about the try-except block, please refer to [1] for an explanation in the “Logic” part of the “Simple endless scroll” section =)
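As a concrete illustration, here is a minimal sketch of such a controller, assuming the fragment shader declares `uniform float verticalSunDirection_cameraSpace`; the shader strings below are placeholders, not the post’s actual shader:

```python
import bge

VertexShader = """
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""

FragmentShader = """
uniform float verticalSunDirection_cameraSpace;
void main() {
    // Placeholder: visualize the uniform as a gray level.
    gl_FragColor = vec4(vec3(verticalSunDirection_cameraSpace), 1.0);
}
"""

cont = bge.logic.getCurrentController()
own = cont.owner

for mesh in own.meshes:
    for mat in mesh.materials:
        shader = mat.getShader()
        if shader is None:
            continue
        if not shader.isValid():
            shader.setSource(VertexShader, FragmentShader, True)
        try:
            # Feed the game property into the uniform; print it for debugging.
            shader.setUniform1f("verticalSunDirection_cameraSpace",
                                own["verticalSunDirection"])
            print(own["verticalSunDirection"])
        except Exception:
            pass  # see [1] for the reasoning behind the try-except guard
```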

  • Add a keyboard sensor that adds a fixed value of 0.1 to the game property `verticalSunDirection` while the game is running (the game is started with the `p` key). Here we use the up arrow key; a Python alternative is sketched after the test steps below:
keyboard-sensor
  • Test the updated value of game property `verticalSunDirection`:
    • Run the game by key `p`.
    • Hit the up arrow key, then the game console prints the updated `verticalSunDirection`:

game-console
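The same increment can be done in the Python controller instead of a property actuator. A hypothetical sketch, assuming the keyboard sensor is named “Keyboard” and is wired to this controller:

```python
import bge

cont = bge.logic.getCurrentController()
own = cont.owner
keyboard = cont.sensors["Keyboard"]  # assumed sensor name

if keyboard.positive:
    # Accumulate the game property; the shader picks it up next frame.
    own["verticalSunDirection"] += 0.1
    print(own["verticalSunDirection"])
```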

References

[1] Scrolling Textures with GLSL. https://whatjaysaid.wordpress.com/2015/03/09/bge-scrolling-textures-with-glsl

Altitude of Ray Marching Point for Optical Depth Integration in Atmospheric Rendering

This article builds on the parameter calculation mentioned in the sky rendering article, where most concepts stem from O’Neil’s atmospheric rendering article.

fullsizerender-7

Optical depth is the integral of the out-scattering along a ray that starts at a given altitude h_0 and whose direction deviates from the vertical by an angle theta. The integral is evaluated by sampling the ray from its starting point p_0 to its end point p_1, the intersection of the ray with the outer atmosphere, i.e. the sky dome. For every sample point p_i, the altitude needs to be estimated; that is the topic of this article.

Anchoring the coordinate system at the starting point p_0, we can place p_0 directly above the earth’s center, so the coordinate of p_0 is (0, r + h_0). For any sample point p_i, the coordinate is p_0 + i * ray_marching_interval * ray_direction, which yields some (x, y). With (x, y), the altitude of p_i is sqrt(x*x + y*y) − r.

Note that the domain of the altitude in this context is [0, 1], so the earth radius needs to be scaled by the ratio of the earth radius to the atmospheric thickness. According to the Simulating the Colors of the Sky blog, the ratio is 6360/(6420 − 6360) = 106. So the earth radius used in this context is 106 * 1.0 = 106, where 1.0 is the maximum value of the altitude domain.
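A small Python sketch of the altitude estimate, in the same normalized units:

```python
import math

EARTH_RADIUS = 6360.0 / (6420.0 - 6360.0)  # = 106 in normalized units

def sample_altitude(h0, theta, i, interval):
    """Altitude of the i-th ray-marching sample.

    h0: starting altitude in [0, 1]
    theta: deviation of the ray from the vertical, in radians
    interval: ray-marching step length
    """
    # p_0 sits directly above the earth's center at (0, r + h_0).
    x0, y0 = 0.0, EARTH_RADIUS + h0
    # Unit ray direction deviated from the vertical (+y axis) by theta.
    dx, dy = math.sin(theta), math.cos(theta)
    x = x0 + i * interval * dx
    y = y0 + i * interval * dy
    return math.hypot(x, y) - EARTH_RADIUS
```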

GLSL Sky Shading on Blender

The simplest way is to use a sky dome, i.e. a sphere or hemisphere, and color it by linearly interpolating two gradient colors according to the fragment’s height, i.e. its z coordinate. The gradient colors consist of the apex color and the center color.
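As an illustration, here is the blend written in Python (in the actual shader the same math runs per fragment in GLSL); `height` is assumed to be the fragment’s z coordinate normalized to [0, 1]:

```python
def sky_color(height, apex, center):
    # Linear interpolation between the center color (height 0)
    # and the apex color (height 1), per channel.
    return tuple(c + (a - c) * height for a, c in zip(apex, center))

# Example: deep blue at the apex, pale blue at the horizon.
print(sky_color(0.5, apex=(0.1, 0.2, 0.8), center=(0.7, 0.8, 0.9)))
```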

Here’s the result:

sky

Instead of linear interpolation based on height, using the eye angle above the sea level (a) is closer to reality: apex * sin(a) + center * cos(a):

sky-refine
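The angle-based variant, again as an illustrative Python sketch with the angle `a` in radians:

```python
import math

def sky_color_by_angle(a, apex, center):
    # At a = 0 (looking at the horizon) this returns the center color;
    # at a = pi/2 (looking straight up) it returns the apex color.
    return tuple(ap * math.sin(a) + ce * math.cos(a)
                 for ap, ce in zip(apex, center))
```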

Although blending with the sine and cosine of the eye angle makes a nice gradient, it doesn’t take into account the fact that light is scattered based on the distance it travels from the outer atmosphere to the eye.

So far the vertex blending trick has been working well. But when it comes to light scattering, the theory gets a bit more complex. The discretized light transport process goes as follows:

fullsizerender-2
  • The light along the camera ray is integrated by evaluating the in-scattered light at each sample point p.
  • At each point p, calculate the in-scattered light by accounting for the light scattered away on the way from the sun to point p, and from point p to the camera. The phase function is used to attenuate the light scattered towards the camera at the very beginning. Here’s the in-scattering equation:
0257equ01

Image credit: Sean O’Neil, “Accurate Atmospheric Scattering”, GPU Gems 2, which also defines the phase function F(theta, g) and the out-scattering function t(P_aP_b, lambda).

However.. I don’t think it’s right to attenuate the entire light by the phase function. The phase function should only be used to attenuate the light when the light direction deviates from the direction to the camera:

fullsizerender-3

Evaluating the two out-scattering integrals and the one in-scattering integral per sample point during rendering is expensive. The number of integrand evaluations for every camera ray is N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). Considering the wavelengths of the three color channels (i.e. red, green, blue) and the two kinds of scattering (Rayleigh and Mie), the computation per camera ray is 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). To make the rendering result accurate, the values of the Ns have to be high; for example, with 10 in-scattering samples and 50 samples for each out-scattering integral, that is already 2 * 3 * 10 * (50 + 50) = 6,000 evaluations per camera ray.

One way to save scattering evaluations per camera ray is pre-computing the optical depth (the integral part of the out-scattering) and the atmospheric density at a given height, and storing them in a 2D lookup table whose dimensions are altitude and angle towards the sun, which respectively define the starting point and direction of a ray. Each ray reflected in the 2D lookup table starts from a sample point in the atmosphere and exits at the sky vertex, provided the ray doesn’t hit any objects.
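A sketch of pre-computing such a table follows; the scale height and the table resolutions are illustrative assumptions, not values from the post:

```python
import math

# Normalized units: atmospheric thickness is 1.0, hence the earth radius 106.
EARTH_RADIUS = 106.0
ATMOS_RADIUS = EARTH_RADIUS + 1.0
SCALE_HEIGHT = 0.25  # assumed density scale height
N_ALT, N_ANGLE, N_STEPS = 64, 64, 50

def distance_to_sky(x, y, dx, dy):
    # Distance along the unit ray (dx, dy) from (x, y) to the
    # outer-atmosphere circle, i.e. to the sky vertex.
    b = x * dx + y * dy
    c = x * x + y * y - ATMOS_RADIUS ** 2
    return -b + math.sqrt(b * b - c)

def optical_depth(h0, theta):
    # Ray starts at altitude h0, directly above the earth's center,
    # deviated from the vertical by theta.
    x, y = 0.0, EARTH_RADIUS + h0
    dx, dy = math.sin(theta), math.cos(theta)
    step = distance_to_sky(x, y, dx, dy) / N_STEPS
    depth = 0.0
    for i in range(N_STEPS):
        sx = x + (i + 0.5) * step * dx
        sy = y + (i + 0.5) * step * dy
        h = math.hypot(sx, sy) - EARTH_RADIUS
        depth += math.exp(-h / SCALE_HEIGHT) * step
    return depth

# Sweep only the upper hemisphere; rays that would hit the ground are
# excluded here for simplicity.
table = [[optical_depth(a / (N_ALT - 1), t / (N_ANGLE - 1) * math.pi / 2)
          for t in range(N_ANGLE)]
         for a in range(N_ALT)]
```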

With the pre-computed 2D lookup table, the optical depth of any ray from a sample point to the sun (pp_c) can be looked up directly. The optical depth from a sample point to the camera, if the camera is inside the atmosphere, takes two steps to obtain (a small sketch in code follows the list):

  • If the camera ray towards the sample point doesn’t intersect the ground, the optical depth between the camera and the sample point is the optical depth from the camera to the sky vertex minus the optical depth from the sample point to the sky vertex.
  • Otherwise, if the camera ray towards the sample point intersects the ground, the optical depth between the camera and the sample point is the optical depth from the sample point to the sky vertex minus the optical depth from the camera to the sky vertex.
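In code, the two cases amount to a subtraction with the operands swapped; both inputs come from the 2D lookup table along the shared line through the camera and the sample point:

```python
def depth_camera_to_sample(depth_camera_to_sky, depth_sample_to_sky,
                           ray_hits_ground):
    if not ray_hits_ground:
        # The camera lies behind the sample point on the way to the sky vertex.
        return depth_camera_to_sky - depth_sample_to_sky
    # Looking downward: the table is read along the reversed ray, so the
    # roles of the two depths swap.
    return depth_sample_to_sky - depth_camera_to_sky
```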

Now, to implement the shader, we need to pre-calculate the 2D lookup table (stored in a texture) and pass the texture to the shader. Given that using a texture in Blender needs some manual configuration, which I’m trying to avoid, is there a way of squeezing the values in the 2D table into a math equation that takes the altitude and angle and returns the optical depth? Sean O’Neil plotted the curves of the 2D lookup table and found curves fitting the values. That way, the computation per camera ray is reduced from 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}) to 2 * 3 * N_{in-scattering} * (1 + 1). That’s a good approach, but I can’t use his result since I modified the in-scattering equation a little. So I’ll plot the values in the 2D lookup table using the revised equation and see if there’s a curve that fits them.

Using MATLAB, the fitted surface turns out to be a polynomial in the vertical angle and the altitude. The final result looks like the following:

sunset-glsl



Newbies’ Guide to Setting Up Custom GLSL in Blender Game

The purpose of this post is to document how to write GLSL code in Blender. I’m using Blender 2.77a (Windows, 64-bit) for this walkthrough.

The first step of setting up custom GLSL in Blender is choosing Blender Game as the engine, because custom GLSL shaders are only supported by Blender Game. How do you select Blender Game as the engine?

  • Select Blender Game in the info header:
BlenderInfoHeader

If you are not sure where the info header is, or you can’t find it, please refer to this short answer:

http://blender.stackexchange.com/questions/8384/how-can-i-reset-my-menus

Then, attach a sensor and a controller to a SELECTED object. Note that you have to select the object so Blender knows which object the sensor and the controller belong to! The following steps are for newbies:

  • Select an object: right-click an object. The object’s bounds should be highlighted by orange when it’s selected:
BlenderSelectedObject
  • Show the Logic Editor, where we can set up the sensor and controller. The Logic Editor can be found like this:
BlenderFindLogicEditor

Your fresh Logic Editor should look like this:

BlenderLogicEditor
  • Attach a sensor to the selected object: select “Add Sensor” -> “Always”.
  • Attach a controller to the selected object: select “Add Controller” -> “Python”, and set the controller’s script field to the name of the file in the Text Editor.
  • Connect the sensor to the controller: drag from the socket on the right side of the sensor to the socket on the left side of the controller. Now your Logic Editor should look like this:
BlenderLogicEditorSensorController

After attaching the sensor and controller to an object, set up the GLSL code in the Python script. Try the Python example on this wiki page first:

https://en.wikibooks.org/wiki/GLSL_Programming/Blender/Minimal_Shader
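A sketch in the spirit of the wiki’s minimal shader (not a verbatim copy), with the fragment color already changed to blue as mentioned below:

```python
import bge

VertexShader = """
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""

FragmentShader = """
void main() {
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);  // blue instead of red
}
"""

cont = bge.logic.getCurrentController()
own = cont.owner

# Compile and apply the shader once per material of the selected object.
for mesh in own.meshes:
    for mat in mesh.materials:
        shader = mat.getShader()
        if shader is not None and not shader.isValid():
            shader.setSource(VertexShader, FragmentShader, True)
```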

Render the scene! In the 3D view, hit ‘p’. The scene should be totally red. I changed the color to blue as the red scene is so scary.. Hit ‘ESC’ to exit the rendering loop.

BlenderFirstScene

Open Blender from Terminal

To launch Blender directly from the terminal, add a “blender” alias to the macOS environment file .profile. That way, the console outputs log info when the Python scripts execute, including error details.
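For reference, the alias line in ~/.profile looks roughly like this; the path is an assumption and depends on where Blender is installed:

```sh
# Adjust the path to wherever blender.app actually lives.
alias blender=/Applications/blender.app/Contents/MacOS/blender
```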

Reference: https://www.blender.org/manual/render/workflows/command_line.html#mac-osx