Tuesday, July 1, 2014

Terrain Rendering

Terrain rendering can be done in many ways depending on the scope, scale, and desired visual quality. My ultimate goal is to render an entire planet, with rich detail from any viewpoint and the ability to zoom in/out and see the curvature of the planet. But as a first step, let's just render a "normal" terrain, using a height-map and simple polygon reduction.

Height-map definition

A height-map is simply an image that stores elevation information. The basic form is a grey-scale image where the luminance at each pixel represents elevation. But this gives only 8-bit precision, which is not enough to represent a highly detailed terrain (in a grey-scale image, all color components share the same 8-bit value). A more popular way is to use all 4 channels of the image to reach 32-bit precision. But I determined that 24 bits are enough for the kind of detail I want, so I went for a height-map with 3 channels' worth of information: basically, each channel is responsible for a different spatial magnitude, and combining all three gives a very smooth terrain.

In my case, Red displaces in units of 100 (low-frequency displacement), Green in units of 20 (medium frequency), and Blue in units of 5 (high frequency) - these are arbitrary values, completely tweakable. For example, the following height-map, composed of the following channels, produces the terrain below:

Red channel

Green channel

Blue channel (uniform noise)
Final height-map
Corresponding terrain, 1 vertex per height-map pixel
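In code, combining the three channels is just a weighted sum. A minimal sketch, assuming 8-bit channels (the constant names here are mine, not from the engine):

```cpp
#include <cstdint>

// Per-channel displacement units (arbitrary, tweak to taste)
static const float kRedUnit   = 100.0f; // low frequency
static const float kGreenUnit = 20.0f;  // medium frequency
static const float kBlueUnit  = 5.0f;   // high frequency

// Combine the three 8-bit channels of a height-map pixel into one elevation.
// Each channel is normalized to [0, 1] and scaled by its displacement unit.
float HeightFromPixel(uint8_t r, uint8_t g, uint8_t b)
{
    return (r / 255.0f) * kRedUnit
         + (g / 255.0f) * kGreenUnit
         + (b / 255.0f) * kBlueUnit;
}
```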
Polygon complexity

The above 512x512 height-map has 262144 pixels and produces a mesh with as many vertices - over half a million triangles. For a 1024x1024 height-map, you exceed 2 million triangles. At this point polygon reduction seems like a must to keep the terrain complexity under control.
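For reference, a regular grid of N x N vertices yields (N-1)^2 quads, i.e. 2*(N-1)^2 triangles; a quick helper to do the math:

```cpp
#include <cstdint>

// Triangle count for a regular height-map grid of N x N vertices:
// (N-1)^2 quads, two triangles per quad.
uint64_t GridTriangleCount(uint64_t n)
{
    return 2 * (n - 1) * (n - 1);
}
// GridTriangleCount(512)  -> 522242
// GridTriangleCount(1024) -> 2093058
```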

The best practice is to do this reduction at run-time, where pieces of the terrain dynamically change resolution depending on the distance from the viewer. The part of the terrain closest to the viewer needs the highest definition; the farthest parts don't. Generally speaking, adapting mesh complexity to the view distance (or to any factor, such as screen-space area) is called Level of Detail (LOD) determination.

Currently, I don't need to do real-time LOD. For the kind of game I'm making, statically reducing the terrain complexity, combined with dynamic visibility determination using a Quadtree and Frustum culling, gives good results.

Polygon reduction using a Quadtree

A Quadtree is ideal for representing spatial information. In the case of terrain simplification, it can be used to "approximate" the terrain, and then we can create a mesh based on its structure.

In order to build the Quadtree, we need a criterion to decide whether to subdivide a node. In my case, I chose the "height variation" of the vertices within a node: if all the vertices are at approximately the same height (within a certain threshold), I assume they can be approximated as a single flat patch; otherwise, I keep subdividing until the height variation is low enough.
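This criterion can be sketched as follows, assuming the elevations are available as a row-major array and a node covers a square region starting at (x, y); the names are illustrative:

```cpp
#include <vector>
#include <algorithm>

// Decide whether a quadtree node should be subdivided: compute the height
// variation (max - min) over the node's region and compare it to a threshold.
bool ShouldSubdivide(const std::vector<float>& heights, int mapWidth,
                     int x, int y, int size, float threshold)
{
    float minH = heights[y * mapWidth + x];
    float maxH = minH;
    for (int j = y; j < y + size; ++j)
        for (int i = x; i < x + size; ++i)
        {
            float h = heights[j * mapWidth + i];
            minH = std::min(minH, h);
            maxH = std::max(maxH, h);
        }
    return (maxH - minH) > threshold;
}
```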

The following is an example of how a terrain is approximated using a Quadtree, with different height variation thresholds:

Threshold = 50: not enough detail

Threshold = 10: too much detail

Threshold = 20: looks about right
Terrain tessellation using a Quadtree

Now that we have our Quadtree representation, we must create a mesh. As a first step, let's assume all the vertices have zero height to keep it simple, build our mesh, and then apply elevation as a last step.

Now it seems like all we have to do is add vertices at the corners of the quadtree nodes and connect them, and we have a terrain, right? Not so simple, dude. The main challenge is addressing the vertices in a predictable way, so that we can build triangles out of them. Basically, before you can make a triangle out of vertices x, y, z, you need to know that x, y, z are a valid combination. A good way to do this is to only allow vertices at pre-determined locations on the quadtree nodes. For example, here are 9 valid positions for a vertex on a node:

This makes triangulation trivial: we know which triangles to create based on which vertices are present.

Preparing the Quadtree

The problem now is that it's not always possible to have these nice, predictable vertex positions on every node, especially between nodes with very different subdivision levels. When two neighboring nodes have a subdivision difference greater than 1, the 9 positions are not enough; you need extra vertices to correctly represent the structure. Things just got out of hand again: without pre-determined vertex positions, you can't easily determine the triangles. The following picture illustrates the problem:

Irregular vertex combination on a node
The good thing is that we don't have to solve this problem; it's simpler to create a situation where it doesn't exist. By making sure two neighboring nodes can't have a subdivision difference greater than 1, those nice 9 vertex spots will always be enough to translate the quadtree into a terrain.

So let's iterate through the quadtree and determine the neighbors of each node. If a node has more than 2 neighbors on any side, we subdivide it. We repeat this process until no node needs subdivision. And tada, this yields the following quadtree (on the right), where two neighboring nodes can't have a subdivision difference greater than 1. On the left is the old quadtree.

Quad-tree before "normalization"

After normalization, neighboring nodes differ by 1 subdivision level at most
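To illustrate the normalization pass, here is a self-contained sketch using a toy representation (not the engine's actual quadtree): leaves are stored as squares in finest-cell units, and any leaf with a side neighbor smaller than half its size gets split, until the tree is stable:

```cpp
#include <vector>
#include <cstddef>

// A leaf node as an axis-aligned square, in units of the finest cell.
struct Leaf { int x, y, size; };

// Two leaves are neighbors if they share part of an edge (corner contact
// alone doesn't count, since the overlap tests below are strict).
static bool AreNeighbors(const Leaf& a, const Leaf& b)
{
    bool touchX = (a.x + a.size == b.x) || (b.x + b.size == a.x);
    bool touchY = (a.y + a.size == b.y) || (b.y + b.size == a.y);
    bool overlapX = (a.x < b.x + b.size) && (b.x < a.x + a.size);
    bool overlapY = (a.y < b.y + b.size) && (b.y < a.y + a.size);
    return (touchX && overlapY) || (touchY && overlapX);
}

// Keep splitting any leaf that has a neighbor smaller than half its size
// (i.e. a subdivision difference greater than 1), until none remains.
void Normalize(std::vector<Leaf>& leaves)
{
    bool changed = true;
    while (changed)
    {
        changed = false;
        for (size_t i = 0; i < leaves.size(); ++i)
        {
            bool split = false;
            for (size_t j = 0; j < leaves.size() && !split; ++j)
                if (i != j && AreNeighbors(leaves[i], leaves[j])
                    && leaves[j].size * 2 < leaves[i].size)
                    split = true;
            if (split)
            {
                Leaf n = leaves[i];
                int h = n.size / 2;
                leaves[i] = { n.x, n.y, h };
                leaves.push_back({ n.x + h, n.y, h });
                leaves.push_back({ n.x, n.y + h, h });
                leaves.push_back({ n.x + h, n.y + h, h });
                changed = true;
            }
        }
    }
}
```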

Building the mesh

Now we have a tessellation-friendly quadtree, where each node references at most 9 vertices, and we can start adding the vertices to the terrain. Of course, we only add vertices to leaf nodes, and we make sure there are no duplicates: if a vertex is shared between 2 nodes, both simply point to the same vertex instance. The unique vertex instances are accumulated in the terrain and will later end up in a VertexBuffer. After all vertices have been added, it's time to build an IndexBuffer containing the triangle information. It's easy to add triangles based on the vertex locations in each node: if a node has only corner vertices, we make 2 triangles out of it. Otherwise, if it has at least one edge vertex, we add a center vertex and create triangles based on the presence of each edge vertex. Here is how the terrain looks at this point:
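The triangle rule above can be sketched as a simple count (an illustration, not the engine's actual code); each flag says whether that edge's midpoint vertex is present:

```cpp
// Number of triangles generated for a leaf node, given which of its four
// edge midpoint vertices are present. With corners only, the node is two
// triangles; otherwise a center vertex is added and the node becomes a
// closed fan: one triangle per perimeter vertex (4 corners + edge vertices).
int NodeTriangleCount(bool top, bool right, bool bottom, bool left)
{
    int edgeVertices = (top ? 1 : 0) + (right ? 1 : 0)
                     + (bottom ? 1 : 0) + (left ? 1 : 0);
    if (edgeVertices == 0)
        return 2;             // two triangles from the four corners
    return 4 + edgeVertices;  // fan around the center vertex
}
```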

Elevating the vertices

The last step is to elevate the vertices based on the height-map data. The important thing is to keep track of the parent nodes of each vertex, which can be done at the tessellation stage. Then we sample the pixels corresponding to each leaf node and determine an average height for that node. Finally, for each vertex, we iterate through its parent nodes and determine a final average height. For a better approximation, this average should be weighted: parent nodes that cover more height-map pixels should have a higher impact on the result. This is how the terrain looks after elevation:
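The weighted average can be sketched like this (the names are illustrative, not from the engine):

```cpp
#include <vector>

// One parent node's contribution to a vertex's elevation.
struct NodeSample
{
    float averageHeight; // average height of the pixels under the node
    float pixelCount;    // number of height-map pixels the node covers
};

// Final vertex elevation: average of the parent nodes' heights, weighted
// by how many height-map pixels each node covers.
float WeightedElevation(const std::vector<NodeSample>& parents)
{
    float sum = 0.0f, weight = 0.0f;
    for (const NodeSample& n : parents)
    {
        sum += n.averageHeight * n.pixelCount;
        weight += n.pixelCount;
    }
    return (weight > 0.0f) ? sum / weight : 0.0f;
}
```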

Final result

After texturing the terrain and adding some fog, here is how my in-game environment looks now:

Screenshot from Starports: Space Defense

Tuesday, June 3, 2014


This is a tutorial on how to use the FreeType library to render TTF text in OpenGL.
You can find an example with source code here.
Please note that the sample uses my personal engine to a certain degree. I included it as a static library so you can compile the sample for yourself.

Bitmap fonts Vs. TTF fonts

Getting consistent text quality across multiple platforms with different screen resolutions can be a pain in the ass. I recently switched to TTF text rendering instead of good old bitmap fonts. With bitmap fonts, stretching was inevitable: I used to pick the largest common font size across multiple resolutions and have it scaled down on small screens. To avoid stretching, I could have maintained font bitmaps of different sizes, one for each resolution, but I didn't bother because it makes switching fonts more painful (every font change means regenerating all the bitmaps). Plus, you never really know the resolutions on mobile devices, especially on Android, making it virtually impossible to avoid text stretching with bitmap fonts. Agreed, it's not the end of the world when the text is stretched a bit, thanks to OpenGL's filtering, but if you're shooting for that pixel-perfect, crisp look, bitmap fonts simply don't cut it.

Another, more critical reason is localization: it's also a pain to maintain different bitmaps for each locale.

With TTF rendering, it's possible to render beautiful, anti-aliased text at run-time, at any font size, and for a wide range of locales if you pick your TTF font carefully.

Preparing the Atlas texture

It's always a good idea to group textures into atlases to minimize draw calls, even more so when rendering text - you really don't want a separate texture and draw call for each character! The main idea is to retrieve the character data generated by FreeType and append it to a texture, while making sure no character is appended twice.

The next bit of code produces the following atlas for the string "Hello World!". I put in way more comments than I usually do, just to make things clear.

256x256 Atlas for "Hello World!"

 std::string text = "Hello World!";
 Size atlasSize(256, 256); // atlas size in pixels
 u32 fontSize = 80; // font size in pixels
 u32 textureDataSize = atlasSize.Width*atlasSize.Height*2; // store 2 bytes per pixel, alpha & luminance
 u8* pTextureData = snew u8[textureDataSize];
 memset(pTextureData, 0, textureDataSize);

 // Initialize FreeType and load SCRIPTBL.TTF
 FT_Library ft;
 if(FT_Init_FreeType(&ft))
      SHOOT_ASSERT(false, "FT_Init_FreeType() failed");
 FT_Face face;
 if(FT_New_Face(ft, "data/common/SCRIPTBL.TTF", 0, &face))
      SHOOT_ASSERT(false, "FT_New_Face() failed");
 FT_Set_Pixel_Sizes(face, 0, fontSize);

 //! CharData - Needed to build the vertex buffer later on.
 struct CharData
 {
      Vector2 UVMin; // the upper left UV of a character
      Vector2 UVMax; // the lower right UV of a character
      Vector2 vSize; // the size of a character in pixels
      Vector3 vOffset; // the character offset from the baseline in pixels
      Vector3 vAdvance; // offset to the next character, in 1/64-pixel units (divide by 64 to get pixels)
 };

 std::map<char, CharData> charMap;
 Point curOffset;
 s32 curMaxY = 0;
 Size padding(1, 1); // add 1 pixel of padding between characters, to avoid bleeding
 for(u32 i=0; i<text.length(); ++i)
 {
      char c = text.at(i);
      if(charMap.find(c) != charMap.end())
           continue; // character was already added
      if(FT_Load_Char(face, c, FT_LOAD_RENDER))
           SHOOT_ASSERT(false, "FT_Load_Char() failed");

      // FT_GlyphSlot is the most important structure provided by FreeType;
      // it contains all the character metric data we need
      FT_GlyphSlot g = face->glyph;
      if(curOffset.X + g->bitmap.width > atlasSize.Width) // no more room in the horizontal direction
      {
           if(curOffset.Y + curMaxY + padding.Height + g->bitmap.rows > atlasSize.Height)
           { // no more room in the vertical direction
                SHOOT_WARNING(false, "TextTTF: Texture too small");
           }
           // reset the horizontal offset to 0 and add the height of the tallest character to the vertical offset
           curOffset.X = 0;
           curOffset.Y += (curMaxY + padding.Height);
           curMaxY = 0;
      }

      // Copy the character bitmap data into our OpenGL texture data
      for(int y=0; y<g->bitmap.rows; ++y)
           for(int x=0; x<g->bitmap.width; ++x)
           {
                u8* pixel = &pTextureData[2*(atlasSize.Width*(curOffset.Y + y) + (curOffset.X + x))];
                // store the same value for both Alpha and Luminance
                pixel[0] = pixel[1] = g->bitmap.buffer[g->bitmap.width*y + x];
           }

      // Record character data, needed to build the vertex buffer later on
      CharData charData =
      {
           Vector2::Create(f32(curOffset.X)/atlasSize.Width, f32(curOffset.Y)/atlasSize.Height),
           Vector2::Create(f32(curOffset.X+g->bitmap.width)/atlasSize.Width, f32(curOffset.Y+g->bitmap.rows)/atlasSize.Height),
           Vector2::Create(f32(g->bitmap.width), f32(g->bitmap.rows)),
           Vector3::Create(f32(g->bitmap_left), -f32(g->bitmap_top), 0.0f),
           Vector3::Create(f32(g->advance.x >> 6), f32(g->advance.y >> 6), 0.0f)
      };
      charMap[c] = charData;
      curOffset.X += (g->bitmap.width + padding.Width);
      curMaxY = Math::Max(curMaxY, s32(g->bitmap.rows));
 }

Uploading the Atlas to OpenGL

Here is how to create an Alpha/Luminance OpenGL texture and upload the Atlas data to it:

 glGenTextures(1, &m_GLTextureID);
 glBindTexture(GL_TEXTURE_2D, m_GLTextureID);
 // set non-mipmapped filters, otherwise the texture is incomplete
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, atlasSize.Width, atlasSize.Height, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, pTextureData);

Preparing the Vertex Buffer

At this point the hard part is done. Now it's just a matter of using the character data collected during atlas generation, which contains all the information needed to position the text vertices and give them the right UV coordinates. Here is how I do it:

 // Build the vertices
 u32 numVertices = text.size()*6; // 6 vertices (2 triangles) per character
 Vertex3D* pVertices = snew Vertex3D[numVertices];
 Vector3 vCharacterPos = Vector3::Create(100.0f, 100.0f, 0.0f);
 for(u32 i=0; i<text.length(); ++i)
 {
      CharData& d = charMap[text.at(i)];
      pVertices[i*6+0].UV = Vector2::Create(d.UVMin.X, d.UVMin.Y);
      pVertices[i*6+1].UV = Vector2::Create(d.UVMax.X, d.UVMin.Y);
      pVertices[i*6+2].UV = Vector2::Create(d.UVMax.X, d.UVMax.Y);
      pVertices[i*6+3].UV = Vector2::Create(d.UVMin.X, d.UVMax.Y);
      pVertices[i*6+4].UV = Vector2::Create(d.UVMin.X, d.UVMin.Y);
      pVertices[i*6+5].UV = Vector2::Create(d.UVMax.X, d.UVMax.Y);
      pVertices[i*6+0].Pos = vCharacterPos + d.vOffset + Vector3::Zero; // Top Left
      pVertices[i*6+1].Pos = vCharacterPos + d.vOffset + Vector3::Create(d.vSize.X, 0.0f, 0.0f); // Top Right
      pVertices[i*6+2].Pos = vCharacterPos + d.vOffset + Vector3::Create(d.vSize.X, d.vSize.Y, 0.0f); // Bottom Right
      pVertices[i*6+3].Pos = vCharacterPos + d.vOffset + Vector3::Create(0.0f, d.vSize.Y, 0.0f); // Bottom Left
      pVertices[i*6+4].Pos = vCharacterPos + d.vOffset + Vector3::Zero; // Top Left
      pVertices[i*6+5].Pos = vCharacterPos + d.vOffset + Vector3::Create(d.vSize.X, d.vSize.Y, 0.0f); // Bottom Right
      vCharacterPos += d.vAdvance;
 }

This is pretty much it! There are other important aspects to text rendering that I might cover in a future post. If you've ever done 2D rendering using a 3D API like OpenGL, you know about the infamous pixel-to-texel alignment problem: if you want ultra-crisp text at any resolution, you have to make sure each pixel on the screen samples exactly one texel of your atlas. If any blending happens, such that a pixel becomes the average of several texels, the text will look blurred. I have a very simple remedy for this: just prevent any scaling from happening by always setting the View Matrix to identity, keeping the World Matrix's scaling components at (1, 1, 1), and, most importantly, making sure the vertex positions are always integers. Hopefully more explanation on this in a future post!
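The vertex-snapping part of that remedy is tiny; a sketch:

```cpp
#include <cmath>

// Snap a vertex coordinate to the nearest integer so that each screen
// pixel samples exactly one texel of the atlas (assumes no view/world scaling).
float SnapToPixel(float v)
{
    return std::floor(v + 0.5f);
}
```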
