Edge Smoothing

By Plump Helmet Studios


To begin, let's admit that these textures are of potato quality. Let's just get that out there. I know. But they were thrown together quickly to test a concept: spreading one large texture across a number of smaller quads in order to reduce the repetitive look of a 2D tiled terrain. In the picture below you can make out the size of a quad, and each texture covers 256 of these in a 16×16 grid.

Sans smoothing

It works well. The tiling only really becomes evident when you zoom right out, and in reality that won't happen in the game; rarely do you have so much terrain of the same type anyway. Soil may have some rich soil mixed in, perhaps some gravel and stone, and a bit of marshy bog as well, not to mention the trees, plants, and demonic obelisks. Maybe.

The overall terrain is built in layers. In this example there are three: water, sand, and soil. Each layer has its own mesh, and the meshes are generated in that order and drawn in that order too. When viewed with an orthographic camera, they meld into a single, unified terrain.

Below you can see the sand layer selected in the Unity editor.

Sand layer proper

With a tiled approach, where each 64-pixel block represents a terrain type and is rendered as a quad, the visuals become very blocky, and it doesn't look good. So we need to blend these layers into each other. I attempted a few different techniques to achieve this, but none of them worked without sacrificing the layered approach (at least for me).

So to blend the terrain layers, each layer checks for tiles at its edge, and where it finds a dissimilar tile, it creates a new quad with vertex-level opacity. For example, if the current tile is water and the tile to the north is sand, the water layer creates a new quad to the north (tile x,y+1) with four vertices: the "southern" two (x,y+1 and x+1,y+1) with an opacity of 1 and the "northern" two (x,y+2 and x+1,y+2) with an opacity of 0. These quads are part of the same mesh as the rest of the tiles, and therefore benefit from the same UV mapping and tiling.

Doing this gives us a border of faded quads. Because the terrain layers are rendered in order, the faded quads of the layers underneath are drawn over, while the faded quads of the layers above remain visible.

Sand layer + smoothing

But how did we get to this result?

The first step is generating the map layers, but the real magic happens in CalculateEdges.

// Iterate over every map tile, starting at bottom left,
// and going left to right, bottom to top.
for (var y = 0; y < mapHeight; y++)
{
    for (var x = 0; x < mapWidth; x++)
    {
        if (_mapTiles[x, y].TerrainType != _type) continue;

        _meshData.AddVertex(x, y, 0);
        _meshData.AddVertex(x, y + 1, 0);
        _meshData.AddVertex(x + 1, y + 1, 0);
        _meshData.AddVertex(x + 1, y, 0);
        _meshData.AddQuadTriangles();
        _meshData.AddQuadColors();
        _meshData.AddUV(CalculateUV(x, y));
        _meshData.AddUV(CalculateUV(x, y + 1));
        _meshData.AddUV(CalculateUV(x + 1, y + 1));
        _meshData.AddUV(CalculateUV(x + 1, y));

        CalculateEdges(x, y);
    }
}
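The _meshData helper this code relies on isn't shown in the post. Below is a rough, self-contained sketch of what it might look like, with the method names taken from the calls above; the real class presumably stores UnityEngine Vector3/Color/Vector2 values, which are replaced here by plain tuples so the sketch stands alone.

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the MeshData helper the code above calls into.
public class MeshData
{
    public readonly List<(float x, float y, float z)> Vertices = new List<(float, float, float)>();
    public readonly List<int> Triangles = new List<int>();
    public readonly List<(float r, float g, float b, float a)> Colors = new List<(float, float, float, float)>();
    public readonly List<(float u, float v)> UVs = new List<(float, float)>();

    public void AddVertex(float x, float y, float z) => Vertices.Add((x, y, z));

    // Two triangles over the four most recently added vertices, wound to match
    // the bottom-left, top-left, top-right, bottom-right order used above.
    public void AddQuadTriangles()
    {
        int i = Vertices.Count - 4;
        Triangles.AddRange(new[] { i, i + 1, i + 2, i, i + 2, i + 3 });
    }

    // Four fully opaque white vertex colors, one per corner of the quad.
    public void AddQuadColors()
    {
        for (int n = 0; n < 4; n++) AddColor(1, 1, 1, 1);
    }

    public void AddColor(float r, float g, float b, float a) => Colors.Add((r, g, b, a));

    public void AddUV((float u, float v) uv) => UVs.Add(uv);
}
```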

In CalculateEdges we iterate over all directions, cardinal (N,E,S,W) as well as ordinal (NE,SE,SW,NW), checking whether the tile in each direction is a different type than the current tile; where it is, we create a faded quad in that direction.

// Iterate over all directions, check whether an edge tile should
// be generated for this direction, and if so, generate one.
List<Direction> list = new List<Direction>();
Array directions = Enum.GetValues(typeof(Direction));
foreach(Direction direction in (Direction[])directions)
{
    switch (direction)
    {
        case Direction.North:
            if (y + 1 < mapHeight && _mapTiles[x, y + 1].TerrainType != _type)
                list.Add(Direction.North);
            break;

        case Direction.NorthEast:
            if (y + 1 < mapHeight && x + 1 < mapWidth && _mapTiles[x + 1, y + 1].TerrainType != _type)
                list.Add(Direction.NorthEast);
            break;

        case Direction.East:
            if (x + 1 < mapWidth && _mapTiles[x + 1, y].TerrainType != _type)
                list.Add(Direction.East);
            break;

        case Direction.SouthEast:
            if (y - 1 >= 0 && x + 1 < mapWidth && _mapTiles[x + 1, y - 1].TerrainType != _type)
                list.Add(Direction.SouthEast);
            break;

        case Direction.South:
            if (y - 1 >= 0 && _mapTiles[x, y - 1].TerrainType != _type)
                list.Add(Direction.South);
            break;

        case Direction.SouthWest:
            if (y - 1 >= 0 && x - 1 >= 0 && _mapTiles[x - 1, y - 1].TerrainType != _type)
                list.Add(Direction.SouthWest);
            break;

        case Direction.West:
            if (x - 1 >= 0 && _mapTiles[x - 1, y].TerrainType != _type)
                list.Add(Direction.West);
            break;

        case Direction.NorthWest:
            if (y + 1 < mapHeight && x - 1 >= 0 && _mapTiles[x - 1, y + 1].TerrainType != _type)
                list.Add(Direction.NorthWest);
            break;
    }
}
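Each case above follows the same pattern: offset the coordinate, bounds-check it, and compare terrain types. As an aside, the same checks can be driven from a direction-to-offset table. This is a sketch rather than the post's actual code, with the tile grid reduced to plain terrain-type ids and its own copy of the Direction enum so it stands alone:

```csharp
using System.Collections.Generic;

enum Direction { North, NorthEast, East, SouthEast, South, SouthWest, West, NorthWest }

static class EdgeScan
{
    // One (ox, oy) offset per direction, matching the eight cases above.
    static readonly (Direction dir, int ox, int oy)[] Offsets =
    {
        (Direction.North,      0,  1), (Direction.NorthEast,  1,  1),
        (Direction.East,       1,  0), (Direction.SouthEast,  1, -1),
        (Direction.South,      0, -1), (Direction.SouthWest, -1, -1),
        (Direction.West,      -1,  0), (Direction.NorthWest, -1,  1),
    };

    // Returns every direction whose neighbouring tile is in bounds and has a
    // terrain type different from `type`. `tiles[x, y]` holds terrain-type ids.
    public static List<Direction> FindEdgeDirections(int[,] tiles, int x, int y, int type)
    {
        var list = new List<Direction>();
        int mapWidth = tiles.GetLength(0), mapHeight = tiles.GetLength(1);
        foreach (var (dir, ox, oy) in Offsets)
        {
            int nx = x + ox, ny = y + oy;
            if (nx >= 0 && nx < mapWidth && ny >= 0 && ny < mapHeight && tiles[nx, ny] != type)
                list.Add(dir);
        }
        return list;
    }
}
```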

Finally, once we have a list of directions where an edge tile is needed, we generate them.

list.ForEach(direction => GenerateEdge(x, y, direction));

The GenerateEdge method makes a copy of the tile's current x,y coordinates and, depending on the direction it needs to generate an edge for, adds or subtracts 1 from the copied x and/or y values. Rather importantly, it also defines which vertices are opaque and which are not.

int dx = x;
int dy = y;

// Calculate relative position as well as the vertex alpha.
switch (direction)
{
    case Direction.North:
        dy += 1;
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 1);
        break;

    case Direction.NorthEast:
        dx += 1;
        dy += 1;
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        break;

    case Direction.East:
        dx += 1;
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        break;

    case Direction.SouthEast:
        dx += 1;
        dy -= 1;
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        break;

    case Direction.South:
        dy -= 1;
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 0);
        break;

    case Direction.SouthWest:
        dx -= 1;
        dy -= 1;
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 0);
        break;

    case Direction.West:
        dx -= 1;
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 1);
        _meshData.AddColor(1, 1, 1, 1);
        break;

    case Direction.NorthWest:
        dx -= 1;
        dy += 1;
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 0);
        _meshData.AddColor(1, 1, 1, 1);
        break;
}
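As an observation (not from the post): all eight hand-written alpha patterns follow a single rule. An edge-quad vertex is opaque exactly when it coincides with a corner of the source tile at x,y. A sketch that derives the four alphas from that rule, using the same vertex order as the code:

```csharp
static class EdgeAlpha
{
    // A vertex of the edge quad at (dx, dy) gets alpha 1 iff it lies on the
    // source tile spanning [x, x+1] x [y, y+1], i.e. it is a shared corner.
    // Vertex order matches the code: (dx,dy), (dx,dy+1), (dx+1,dy+1), (dx+1,dy).
    public static float[] EdgeQuadAlphas(int x, int y, int dx, int dy)
    {
        var verts = new[] { (dx, dy), (dx, dy + 1), (dx + 1, dy + 1), (dx + 1, dy) };
        var alphas = new float[4];
        for (int i = 0; i < 4; i++)
        {
            var (vx, vy) = verts[i];
            bool shared = vx >= x && vx <= x + 1 && vy >= y && vy <= y + 1;
            alphas[i] = shared ? 1f : 0f;
        }
        return alphas;
    }
}
```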

Then, like before, it's just a case of defining the quad.

// Add vertex, triangle, and UV data
_meshData.AddVertex(dx, dy, 0);
_meshData.AddVertex(dx, dy + 1, 0);
_meshData.AddVertex(dx + 1, dy + 1, 0);
_meshData.AddVertex(dx + 1, dy, 0);
_meshData.AddQuadTriangles();
_meshData.AddUV(CalculateUV(dx, dy));
_meshData.AddUV(CalculateUV(dx, dy + 1));
_meshData.AddUV(CalculateUV(dx + 1, dy + 1));
_meshData.AddUV(CalculateUV(dx + 1, dy));
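Both quad-building snippets call CalculateUV, which the post doesn't show. One plausible implementation (an assumption, not the actual code) maps tile coordinates straight into UV space so the texture repeats every 16 tiles. Returning raw x/16 rather than (x % 16)/16 keeps the UVs monotonic across a quad that straddles a texture boundary, leaving the texture's Repeat wrap mode to handle values above 1:

```csharp
static class TerrainUV
{
    const float TilesPerTexture = 16f;

    // Hypothetical CalculateUV: tile coordinate -> UV, one texture per 16x16 tiles.
    // The real method presumably returns a UnityEngine.Vector2.
    public static (float u, float v) CalculateUV(int x, int y)
    {
        return (x / TilesPerTexture, y / TilesPerTexture);
    }
}
```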

The last part, which is essential, is the shader. We add a fixed4 color : COLOR field to both appdata and v2f, and when it comes time to shade each pixel, we multiply the sampled texel by the interpolated vertex color: return tex2D(_MainTex, i.uv) * i.color.

Shader "2D/VertexBlend"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" "Queue"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                fixed4 color : COLOR;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
                fixed4 color : COLOR;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                o.color = v.color;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return tex2D(_MainTex, i.uv) * i.color;
            }
            ENDCG
        }
    }
}
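For completeness, here is a sketch of how the accumulated mesh data might be pushed into a Unity mesh. The field names on _meshData are assumptions, since the post doesn't show this step; the important detail is mesh.colors, which carries the per-vertex alpha the shader reads.

```csharp
// Sketch (assumed field names on _meshData); this step is not shown in the post.
var mesh = new Mesh();
mesh.vertices = _meshData.Vertices.ToArray();
mesh.triangles = _meshData.Triangles.ToArray();
mesh.uv = _meshData.UVs.ToArray();
mesh.colors = _meshData.Colors.ToArray(); // per-vertex RGBA; alpha drives the fade
GetComponent<MeshFilter>().mesh = mesh;
GetComponent<MeshRenderer>().material = new Material(Shader.Find("2D/VertexBlend"));
```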

It's not a complex operation. In essence it's actually quite simple, but the simplest solution is almost always the best, and in this case, once you understand it, it's hard to see how it could ever have seemed so tricky.

Thanks for reading, and why not follow us on Twitter @plump_helmet for more on game development & our unannounced title?