### Posts from the “Non-Photorealistic Rendering” Category

How do we calculate the color of a stroke's pixels given each pixel's position on the canvas?

From experience with real painting, strokes have an inherent randomness determined by the paper and the painting material. On top of this basic randomization, the appearance varies in the density and size of the randomization as a result of different painting forces driven by the painter's emotion.

To simulate aesthetic-driven strokes connected to our heart / emotion / mind, we could use a basic randomization as the raw model of the stroke, then modulate the density and size of that basic randomization to reflect emotion:

• Stroke equation: randomization equation with configurable density and size
• Randomization equation: noise function with evenly distributed density and size
• Configure density: apply a density equation to the noise function
• Configure size: apply a size equation to the noise function

Stroke equation:

Stroke = density equation * size equation * basic randomization
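As a minimal sketch, the stroke equation can be evaluated per pixel. The hash-based noise and the linear density/size equations below are placeholder assumptions standing in for a real noise function and the actual density/size equations:

```cpp
#include <cmath>

// Basic randomization: a cheap hash-based noise in [0, 1].
// (A hypothetical stand-in for Perlin noise; any noise function works.)
float basicNoise(int x, int y) {
    unsigned int h = static_cast<unsigned int>(x) * 374761393u
                   + static_cast<unsigned int>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFF) / 65535.0f;
}

// Stroke = density equation * size equation * basic randomization.
// `force` in [0, 1] stands in for the painter's drawing force: a strong
// force raises both the density and size factors (linear forms are mine).
float strokePixel(int x, int y, float force) {
    float density = 0.5f + 0.5f * force;   // hypothetical density equation
    float size    = 0.5f + 0.5f * force;   // hypothetical size equation
    return density * size * basicNoise(x, y);
}
```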

An illustration of what the stroke equation produces: in the following image, the top stroke is drawn with strong force, hence corresponds to high density and large size. The middle and bottom strokes are drawn with light force, so they are close to the look produced by the basic randomization with identity density and size. Lastly, happy Thursday!

Nov 16, 2017. Edited on Feb 15, 2018.

The appearance of a stroke is determined by the rough surface of the paper, which catches different parts of the petal and makes them stick to the paper. With an evenly distributed drawing force, the amount of petal stuck to the surface is stochastic. With a varying force during drawing, the amount of petal stuck to the surface is determined by the force, the stochastic amount of petal, and the roughness of the paper surface.

PetalAmount(x, y)
  = f(PaperSurfaceRoughness(x, y), DrawingForce(x, y), EasinessOfDetachingPetalToPaperSurface(x, y))
  = PaperSurfaceRoughness(x, y) * DrawingForce(x, y) * EasinessOfDetachingPetalToPaperSurface(x, y)

• PetalAmount(x, y) is the amount of petal stuck to pixel (x, y) within the area of a stroke
• PaperSurfaceRoughness(x, y) is the roughness of the paper surface at pixel (x, y) within the area of a stroke
• DrawingForce(x, y) is the force applied by the user's hand at pixel (x, y) within the area of a stroke
• EasinessOfDetachingPetalToPaperSurface(x, y) represents how easily the petal detaches onto the paper surface while drawing at pixel (x, y) within the area of a stroke
• NOTE: Perlin noise could be used for all of the above functions

Figure 1: a C++ program simulating hand-drawn strokes. Figure 2: reference of hand-drawn strokes, from the movie "The Little Prince".
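The petal-amount model above can be sketched per pixel. The seeded value-noise stand-in below is a placeholder for the Perlin noise the note suggests, and all function names besides the three factors are mine:

```cpp
// Each factor is modeled with a simple hash-noise stand-in in [0, 1];
// the post notes that Perlin noise could be used for all of them.
float noise2D(float x, float y, unsigned int seed) {
    unsigned int h = static_cast<unsigned int>(x * 127.0f)
                   ^ (static_cast<unsigned int>(y * 311.0f) << 8) ^ seed;
    h = (h ^ (h >> 15)) * 2246822519u;
    return (h & 0xFFFF) / 65535.0f;
}

// PetalAmount(x, y) = roughness * force * detachability, all in [0, 1].
float petalAmount(float x, float y) {
    float roughness     = noise2D(x, y, 1u); // PaperSurfaceRoughness
    float force         = noise2D(x, y, 2u); // DrawingForce
    float detachability = noise2D(x, y, 3u); // EasinessOfDetachingPetal...
    return roughness * force * detachability;
}
```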

So, it’s possible to “draw” crappy strokes by programming! Figure 1 is generated by a C++ program I wrote to simulate the strokes in Figure 2.

The idea is simple: define the size of a stroke with a width and a height, then randomly generate the starting point and direction of the stroke within that size. Finally, draw the stroke into an image by rasterizing the line. While drawing the stroke, jitter the pixel to be rasterized, and draw more pixels stretching toward the sides of the jittered pixel with a random width. The intensities of these pixels are randomized.

Figure 1 is generated by drawing 128 strokes of size 400×50 in an image of size 800×600.
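The drawing step can be sketched as follows, a simplified take on the idea above for one horizontal-ish stroke (the function name, jitter ranges, and intensity range are my assumptions, not the original program's values):

```cpp
#include <vector>
#include <random>

// Rasterize one horizontal-ish stroke into an 8-bit grayscale image:
// jitter each rasterized pixel vertically, stretch sideways with a random
// half-width, and randomize the intensities.
void drawStroke(std::vector<unsigned char>& img, int imgW, int imgH,
                int x0, int y0, int length, std::mt19937& rng) {
    std::uniform_int_distribution<int> jitter(-2, 2);   // vertical jitter
    std::uniform_int_distribution<int> halfWidth(1, 4); // sideways stretch
    std::uniform_int_distribution<int> shade(64, 255);  // random intensity
    for (int i = 0; i < length; ++i) {
        int x  = x0 + i;
        int yc = y0 + jitter(rng);                // jittered center pixel
        int hw = halfWidth(rng);
        for (int y = yc - hw; y <= yc + hw; ++y)  // stretch toward the sides
            if (x >= 0 && x < imgW && y >= 0 && y < imgH)
                img[y * imgW + x] = static_cast<unsigned char>(shade(rng));
    }
}
```

Calling this 128 times with randomized starting points would reproduce the setup described above.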

The simplest way to render a sky is using a sky dome, i.e. a sphere or hemisphere, and coloring it by linearly interpolating two gradient colors according to the fragment's height, i.e. its z coordinate. The two gradient colors are the apex color and the center color.

Here's the result: Instead of linear interpolation based on height, using the eye angle above sea level (a) is closer to reality: apex * sin(a) + center * cos(a). Although blending with the sine and cosine of the eye angle looks nice, it doesn't take into account the fact that light is scattered based on the distance it travels from the outer atmosphere to the eye.
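The sin/cos blend can be sketched per fragment; the `Color` struct and function name here are mine:

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Blend the apex and center colors by the eye angle `a` above sea level
// (radians): apex * sin(a) + center * cos(a). At the horizon (a = 0) the
// result is the center color; straight up (a = pi/2) it is the apex color.
Color skyColor(Color apex, Color center, float a) {
    float s = std::sin(a), c = std::cos(a);
    return { apex.r * s + center.r * c,
             apex.g * s + center.g * c,
             apex.b * s + center.b * c };
}
```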

So far the vertex blending trick has been working well. But when it comes to light scattering, the theory gets a bit more complex. The discretized light transport process goes as follows:

• The light along the camera ray is integrated by evaluating the in-scattering light at each point p
• At each point p, calculate the in-scattering light by accounting for the light scattered away on the way from the sun to point p, and from point p to the camera. The phase function is used to attenuate the scattered light towards the camera at the very beginning. Here's the in-scattering equation: Image credit: Sean O'Neil, "Accurate Atmospheric Scattering", GPU Gems 2, which includes the phase function F(theta, g) and the out-scattering function t(P_aP_b, lambda).

However, I don't think it's right to attenuate the entire light by the phase function; the phase function should only attenuate light when the light direction deviates from the direction to the camera.

Evaluating the two out-scattering integrals and the one in-scattering integral per sample point during rendering is expensive. The number of evaluations for every camera ray is N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). Considering the wavelengths of the three color channels (red, green, blue) and the two scattering types (Rayleigh and Mie), the computation per camera ray is 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}). To make the rendering accurate, the values of the Ns have to be high.
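The evaluation count can be mirrored in code. The skeleton below only counts evaluations rather than doing the physics, to make the nesting of the integrals explicit (the function name is mine):

```cpp
// Per-ray cost skeleton: each in-scattering sample point p pays for one
// out-scattering integral toward the camera (p -> p_c) and one toward the
// sun (p -> p_a); 2 scattering types * 3 wavelengths multiply the total.
long evaluationsPerRay(int nIn, int nOutPc, int nOutPa) {
    long evals = 0;
    for (int i = 0; i < nIn; ++i) {                // in-scattering samples p
        for (int j = 0; j < nOutPc; ++j) ++evals;  // out-scattering p -> p_c
        for (int k = 0; k < nOutPa; ++k) ++evals;  // out-scattering p -> p_a
    }
    return 2L * 3L * evals;  // Rayleigh + Mie, times R/G/B
}
```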

One way to save the scattering evaluation per camera ray is pre-computing and storing the optical depth (the integral part of the out-scattering) and the atmospheric density at a given height in a 2D lookup table whose dimensions, altitude and angle towards the sun, respectively define the starting point and direction of a ray. Each ray represented in the 2D lookup table starts from a sample point in the atmosphere and exits at the sky vertex, provided the ray doesn't hit any objects.

With the pre-computed 2D lookup table, the optical depths of all rays from a sample point to the sun (pp_c) can be found directly in the table. The optical depths from the sample points to the camera, if the camera is in the atmosphere, take two cases to obtain:

• If the camera ray towards the sample point doesn't intersect the ground, the optical depth between the camera and the sample point is the optical depth from the camera to the sky vertex minus the optical depth from the sample point to the sky vertex.
• Otherwise, if the camera ray towards the sample point intersects the ground, the optical depth between the camera and the sample point is the optical depth from the sample point to the sky vertex minus the optical depth from the camera to the sky vertex.
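The two cases can be sketched as below; `tableOpticalDepth` is a hypothetical stand-in for the real 2D lookup table, and its falloff formula is purely illustrative:

```cpp
#include <cmath>

// Stand-in for the 2D lookup table indexed by (altitude, angle); the real
// table would hold pre-integrated optical depths to the sky vertex.
float tableOpticalDepth(float altitude, float angle) {
    return std::exp(-altitude) * (1.0f + angle);  // placeholder falloff
}

// Optical depth between the camera and a sample point via two lookups,
// following the two cases above.
float cameraToSampleDepth(float camAlt, float camAngle,
                          float sampleAlt, float sampleAngle,
                          bool rayHitsGround) {
    float camToSky    = tableOpticalDepth(camAlt, camAngle);
    float sampleToSky = tableOpticalDepth(sampleAlt, sampleAngle);
    // Ray misses the ground: camera-to-sky minus sample-to-sky.
    // Ray hits the ground: the order of the subtraction flips.
    return rayHitsGround ? sampleToSky - camToSky : camToSky - sampleToSky;
}
```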

Now, to implement the shader, we need to pre-calculate the 2D lookup table (stored in a texture) and pass the texture to the shader. Given that using a texture in Blender needs some manual configuration, which I'm trying to avoid, is there a way of squeezing the values in the 2D table into a math equation that takes the altitude and angle and returns the optical depth? Sean O'Neil plotted the curves of the 2D lookup table and found curves fitting the values. That way, the computation per camera ray is reduced from 2 * 3 * N_{in-scattering} * (N_{out-scattering_pp_c} + N_{out-scattering_pp_a}) to 2 * 3 * N_{in-scattering} * (1 + 1). That's a good approach, but I can't reuse his result since I modified the in-scattering equation a little bit. So I'll plot the values in the 2D lookup table using the revised equation and see if there's a curve that fits the values.

Using MATLAB, the fitted surface turns out to be a polynomial in the vertical angle and the altitude. The final result looks like the following: