
Render Pipelines

Perhaps the biggest difference between taichi.js and the Python taichi library is that taichi.js includes support for writing rendering pipelines composed of vertex and fragment shaders. Writing rendering code in taichi.js is slightly more involved than writing general-purpose compute kernels, and it does require some understanding of rasterization-based rendering in general. However, you will find that the simple programming model of taichi.js makes these concepts extremely easy to work with and reason about.

On this page, we will go through the basics of creating a render pipeline by rendering a spinning 3D cube. The full source code of this example can be found here; alternatively, an interactive version that you can play with is available here.

Vertex Buffer and Index Buffer

In rasterization-based render pipelines, the geometry to be rendered is always represented as a collection of triangles. In our case, the cube we are trying to render contains 6 faces, each consisting of 2 triangles, giving 12 triangles in total. One way to represent these triangles would be to list all of their vertex positions in a field, known as a "vertex buffer":

let vertices = ti.field(ti.types.vector(ti.f32, 3), 36);
await vertices.fromArray([
    [0, 0, 0], // 1st triangle, 1st vertex
    [0, 0, 1], // 1st triangle, 2nd vertex
    [0, 1, 0], // 1st triangle, 3rd vertex
    [0, 0, 1], // 2nd triangle, 1st vertex
    [0, 1, 1], // 2nd triangle, 2nd vertex
    [0, 1, 0], // 2nd triangle, 3rd vertex
    ...
]);

Even though there are 12 triangles to be rendered and thus 36 vertices to be declared, the cube itself only has 8 vertices, each of which is shared among multiple triangles. For this reason, it is more efficient to store the position of each unique vertex only once in a field, and use a separate field to specify which vertices make up each of the 12 triangles:

let vertices = ti.field(ti.types.vector(ti.f32, 3), 8);
let indices = ti.field(ti.types.vector(ti.i32, 3), 12);

await vertices.fromArray([
    [0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
    [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1],
]);
await indices.fromArray([
    [0, 1, 2], [1, 3, 2], [4, 5, 6], [5, 7, 6],
    [0, 2, 4], [2, 6, 4], [1, 3, 5], [3, 7, 5],
    [0, 1, 4], [1, 5, 4], [2, 3, 6], [3, 7, 6],
]);

ti.addToKernelScope({ vertices, indices })

The second buffer is known as an "index buffer", as it contains indices into the vertex buffer field. In 3D rendering, vertex buffers are almost always used together with an index buffer, although using a vertex buffer on its own is still allowed.

In our cube example, the 3D model is simple enough that we can hard-code its vertex and index buffers by hand. However, for rendering complex scenes, you will almost always populate these buffers by importing data from a 3D file format. taichi.js provides built-in utilities that help you import models in the GLTF format. Details can be found here.

Render Target

Before drawing anything, we need access to a canvas to draw onto. Assuming that a canvas named result_canvas exists in the HTML, the following lines of code create a ti.CanvasTexture object, which represents a texture that can be rendered onto by a taichi.js render pipeline.

let htmlCanvas = document.getElementById('result_canvas');
let renderTarget = ti.canvasTexture(htmlCanvas);
ti.addToKernelScope({ renderTarget })

Depth Buffer

In 3D rendering, an important piece of data is the depth buffer. This is an image used by the render pipeline to ensure that objects closer to the camera are drawn over those that are farther away. The depth buffer should have the same dimensions as the render target:

let depthBuffer = ti.depthTexture(renderTarget.dimensions);
ti.addToKernelScope({ depthBuffer });

Vertex and Fragment Shaders

We have now created all the necessary data needed to render our cube, and can start defining the actual render pipeline. Here's the complete code for this pipeline:

let render = ti.kernel((t) => {
    let center = [0.5, 0.5, 0.5];
    let eye = center + [ti.sin(t), 0.5, ti.cos(t)] * 2;
    let view = ti.lookAt(eye, center, [0.0, 1.0, 0.0]);

    let aspectRatio = renderTarget.dimensions[0] / renderTarget.dimensions[1];
    let projection = ti.perspective(/* fov */ 45.0, aspectRatio, /* near */ 0.1, /* far */ 100);

    let viewProjection = projection.matmul(view);

    ti.clearColor(renderTarget, [0.1, 0.2, 0.3, 1]);
    ti.useDepth(depthBuffer);

    for (let v of ti.inputVertices(vertices, indices)) {
        let pos = viewProjection.matmul(v.concat([1.0]));
        ti.outputPosition(pos);
        ti.outputVertex(v);
    }
    for (let f of ti.inputFragments()) {
        let color = f.concat([1.0]);
        ti.outputColor(renderTarget, color);
    }
});

The render pipeline takes as input an argument t, which has type ti.f32 by default. This argument represents "time", and is passed from the CPU to control the rotation of the cube. Specifically, the time is used to compute the eye variable, which specifies the position from which we are observing the scene. Our pipeline will render the scene from the perspective of this position. The center variable specifies the center of the cube, and the expression center + [ti.sin(t), 0.5, ti.cos(t)] * 2 makes the camera rotate around the cube's center. From the eye and center variables, we create a view matrix, which transforms a 3D location from "world space" to "view space". The view matrix and the lookAt() function are common computer graphics constructs. If you are unfamiliar with these concepts, you may read more about them here and here.
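As a quick sanity check of that expression: at t = 0, [ti.sin(t), 0.5, ti.cos(t)] * 2 evaluates to [0, 1, 2], so the eye sits at [0.5, 1.5, 2.5]. As t increases, the eye traces a circle of radius 2 around the cube's center at a constant height, always looking back at the center.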

The next matrix we compute is the projection matrix, which transforms a position from "view space" to "clip space". This is computed using a fixed field-of-view angle, the aspect ratio of the render target, and fixed near and far distances. The projection matrix is then multiplied with the view matrix to obtain the viewProjection matrix, which is used later within the vertex shader. Notice that the view, projection, and viewProjection matrices are all computed in the sequential section of this kernel. This means that these computations are done in a single WebGPU thread, which prepares the matrices before the rendering starts.
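In other words, multiplying a point by viewProjection is the same as applying the view matrix first and the projection matrix second. A minimal sketch of this equivalence inside the kernel (worldPos is just an illustrative name, not part of the example above):

let worldPos = [0.5, 0.5, 0.5, 1.0]; // a world-space point in homogeneous coordinates
let clipA = projection.matmul(view.matmul(worldPos));
let clipB = viewProjection.matmul(worldPos); // same result, but only one matrix-vector multiply per vertex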

After the matrices are computed, we have a ti.clearColor call, which specifies the color that the render target is cleared to. This will be the background color of our canvas. Then, we have a ti.useDepth call, which declares the depth buffer to be used for depth testing.

The core of the render pipeline is expressed in two for-loops. The first loop iterates over ti.inputVertices(vertices, indices) and represents the vertex shader. The second loop iterates over ti.inputFragments(), representing the fragment shader.

In the vertex shader, the type of the loop variable v is the same as the element type of the vertex buffer. In our case, the vertex buffer vertices has element type ti.types.vector(ti.f32, 3), so v is also of this type. Since v represents the world-space position of each vertex in the 3D mesh, we transform it using the viewProjection matrix into clip space, and output the result using the ti.outputPosition(..) call. The vertex shader also passes the v variable to ti.outputVertex(..), which asks the GPU to interpolate v across each triangle and pass the interpolated values into the fragment shader.

In the fragment shader, the loop variable is f. This variable has the same type as the value output by the vertex shader. In our case, the vertex shader outputs v, which has type ti.types.vector(ti.f32, 3), so f will also have this type. For each fragment, the value of f is the interpolated value of v across the 3 vertices of the triangle that contains the fragment. Since v represents the 3D position of each vertex, f represents the 3D position of each fragment. We interpret this position as an RGB color value and output it using ti.outputColor(..). As a result, the x value of the fragment's location determines how red it is painted, and the y and z values determine how green and blue it is.
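For example, a fragment lying halfway along the cube edge between the vertices at [0, 0, 0] and [1, 0, 0] receives the interpolated value f = [0.5, 0, 0] and is shaded a medium red, whereas fragments near the [1, 1, 1] corner appear close to white and those near the [0, 0, 0] corner appear close to black.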

We invoke the pipeline every frame with an increasing value of t:

let i = 0;
async function frame() {
    await render(i * 0.03);
    i = i + 1;
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

which renders the rotating colored cube we wanted.

Structured Vertices and Interpolants

In our simple example, the vertex buffer stores only the positions of vertices, so the vertex buffer is simply a field of 3D vectors. However, in more sophisticated rendering pipelines, we often need to store other pieces of data in the vertex buffer, such as normals and texture coordinates. In taichi.js, this can be achieved by using a field whose element type is a struct type. For example, the following code declares a vertex buffer storing positions and normals:

let vertexType = ti.types.struct({
    position: ti.types.vector(ti.f32, 3),
    normal: ti.types.vector(ti.f32, 3),
});
let vertices = ti.field(vertexType, vertexCount);
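Such a field can then be filled from JavaScript in the same way as before. The sketch below assumes that fromArray accepts plain JavaScript objects whose keys match the struct members:

await vertices.fromArray([
    { position: [0, 0, 0], normal: [0, 0, -1] }, // illustrative values only
    { position: [0, 0, 1], normal: [0, 0, 1] },
    ...
]);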

Similarly, taichi.js also allows you to use structs as interpolated values between the vertex and fragment shaders:

for (let v of ti.inputVertices(...)) {
    ...
    ti.outputVertex({
        position: ...,
        normal: ...,
    });
}
for (let f of ti.inputFragments()) {
    f.position...
    f.normal...
}