Thursday, October 12, 2017

Kotlin Game Programming

Introduction

I love to learn new languages as each one is a treasure trove of new ideas and ways of thinking that I can carry forward to new projects. When porting Android Java code to Kotlin, I started to see a few possible patterns emerge that appeared incredibly useful for solving problems I've encountered in game engine development. I set off to explore these patterns using LWJGL, and possibly find a new language for OpenGL development in the process. The spoiler is that I feel hamstrung without constexpr and the ability to pass by value, but there are still plenty of useful patterns Kotlin made obvious.

As I write this, I'm still actively exploring the language over in my lwjgl_test repository on GitHub. Feel free to use this as a starting point or just as a reference.

The Data Class

One of my inspirations for this project was the existence of a data class in Kotlin. I was hoping for a pass-by-value type much like C#'s struct, but that is not the case: data classes are still reference types allocated on the heap. It means that in cases like a scenegraph node or a general model transform, you will probably write your logic more like a C math API than a C++ one (that is, you will tend to take in a value to write out to rather than having inlined const functions).
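For example, assigning a data class instance copies the reference, not the value, and copy() pays for a fresh heap allocation. The Vec3 class here is a hypothetical stand-in, not anything from my project:

```kotlin
// Hypothetical data class to illustrate reference semantics.
data class Vec3(var x: Float = 0f, var y: Float = 0f, var z: Float = 0f)

fun main() {
    val a = Vec3(1f, 2f, 3f)
    val b = a            // copies the reference, not the value
    b.x = 99f
    println(a.x)         // prints 99.0: a and b are the same object
    val c = a.copy()     // copy() allocates a new Vec3 on the heap
    c.x = 0f
    println(a.x)         // prints 99.0: a is unaffected, but we paid an allocation
}
```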

You can see the side effects in my Transform class: I end up having to take in a Vector3f to write into, which avoids runtime heap allocations and provides hooks for "dirty" flags in the future.


class Transform {
    private val position = Vector3f()
    // ...

    fun setPosition(position: Vector3f) {
        this.position.set(position)
    }

    fun getPosition(position: Vector3f) {
        position.set(this.position)
    }

    // ...
}
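The caller's side of this pattern might look like the sketch below. The Vector3f here is a minimal stand-in for JOML's class, just enough to run; the point is that one scratch vector allocated up front can be reused every frame:

```kotlin
// Minimal stand-in for JOML's Vector3f, just enough to show the pattern.
class Vector3f(var x: Float = 0f, var y: Float = 0f, var z: Float = 0f) {
    fun set(other: Vector3f): Vector3f { x = other.x; y = other.y; z = other.z; return this }
}

class Transform {
    private val position = Vector3f()
    fun setPosition(position: Vector3f) { this.position.set(position) }
    fun getPosition(out: Vector3f) { out.set(position) }
}

fun main() {
    val transform = Transform()
    transform.setPosition(Vector3f(1f, 2f, 3f))

    // Allocate one scratch vector once and reuse it every frame,
    // rather than allocating a fresh Vector3f per getPosition call.
    val scratch = Vector3f()
    transform.getPosition(scratch)
    println("${scratch.x}, ${scratch.y}, ${scratch.z}")  // 1.0, 2.0, 3.0
}
```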

OpenGL State Management

A large portion of my time in engine development is typically focused on OpenGL state management. If you're unfamiliar with the OpenGL model, the OpenGL system is effectively a state machine. It's common to abstract this state machine with an additional software layer that prevents redundant state-setting calls and ensures the state is correct for the next operation you wish to execute.

My rendering code is what I'm most proud of for this project. As of right now, my renderModel function lives in my shader logic and looks like this:

        fun renderModel(model: HalfEdgeModel, material: Material) {
            glUniform4f(modelAmbientColorUniform, material.ambient.x, material.ambient.y, material.ambient.z, 1f)

            model.use {
                MemoryStack.stackPush().use {
                    val nativeMatrix = it.mallocFloat(16)
                    val modelMatrix = Matrix4f()
                    model.transform.getWorldMatrix(modelMatrix)
                    modelMatrix.get(nativeMatrix)

                    GL20.glUniformMatrix4fv(modelUniform, false, nativeMatrix)
                }

                loadPositions(positionAttribute)
                loadNormals(normalAttribute)
                drawElements()
            }
        }

The functions loadPositions(), loadNormals(), and drawElements() are only available inside model.use. Two features of Kotlin make this possible. First, functions in Kotlin may take lambdas as parameters. If the last parameter to a function is a lambda, you may close the argument list and write an open curly brace to implement that lambda; if the only argument is a lambda, you may omit the argument list entirely. Second, these lambdas may have a receiver object. This means the lambda syntactically appears to be a member of another class and can access members of that class in the lambda body. You can read more about this in Kotlin's documentation on function literals with receivers.
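Here's a minimal, self-contained illustration of both features, using a hypothetical Canvas class rather than the actual engine code:

```kotlin
// Hypothetical class whose members are only reachable inside the lambda.
class Canvas {
    val lines = mutableListOf<String>()
    fun drawLine(s: String) { lines.add(s) }
}

// The parameter type Canvas.() -> Unit means the lambda runs with a
// Canvas as its implicit receiver (`this`).
fun canvas(block: Canvas.() -> Unit): Canvas {
    val c = Canvas()
    c.block()
    return c
}

fun main() {
    // Trailing-lambda syntax: the argument list is omitted entirely.
    val c = canvas {
        drawLine("a")   // resolves to this.drawLine, no explicit receiver needed
        drawLine("b")
    }
    println(c.lines)    // [a, b]
}
```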

This is where Kotlin shines. My implementation of use() looks like this:

    fun use(callback: HalfEdgeModel.ActiveModel.() -> Unit) {
        glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject)
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject)
        activeModel.callback()
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)
        glBindBuffer(GL_ARRAY_BUFFER, 0)
    }

The line activeModel.callback() references a private member in HalfEdgeModel of the type ActiveModel. All the rendering commands are implemented in this class:

    inner class ActiveModel {
        fun loadPositions(positionAttributeLocation: Int) {
            GL20.glVertexAttribPointer(positionAttributeLocation, 3, GL11.GL_FLOAT, false, Vertex.VERTEX_SIZE, 0)
        }

        fun loadNormals(normalAttributeLocation: Int) {
            GL20.glVertexAttribPointer(normalAttributeLocation, 3, GL11.GL_FLOAT, true, Vertex.VERTEX_SIZE, Vertex.VECTOR_3_SIZE.toLong())
        }

        fun drawElements() {
            GL11.glDrawElements(GL11.GL_TRIANGLES, edges.size, GL11.GL_UNSIGNED_SHORT, 0)
        }
    }

What I love about this is that you cannot attempt to access the attribute buffer until it's bound; doing so is a compile-time error. This is achievable in C++, but you would end up with either a stack-allocated object that maintains the binding via RAII or a callback that passes in an "ActiveModel".

DSL-Like Syntax

This is the first language I've used that puts any focus on DSL-like syntax. Historically I've been very wary of such constructs, especially Ruby with its ability to practically redefine the entire language. Kotlin has won me over to the concept by tying it to a static, strict type system, giving me a configuration file that lives right in my source tree. Consider my procedural model definition DSL:

        val halfEdgeGround = halfEdgeModel {
            vertex {
                position = Vector3f(-1f, 0f, -1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            vertex {
                position = Vector3f(-1f, 0f, 1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            vertex {
                position = Vector3f(1f, 0f, 1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            vertex {
                position = Vector3f(1f, 0f, -1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            face(0, 1, 2)
            face(0, 2, 3)
        }

I'm actively playing with various ways to structure these DSLs, particularly when to use the callback syntax versus plain assignment. Moving forward, I'm using this anywhere I would otherwise reach for the builder pattern or even a config file.

As a side note, HalfEdgeModel (like most classes I write a DSL for) is actually implemented with a Builder behind the scenes:


fun halfEdgeModel(cb: HalfEdgeModel.Builder.() -> Unit): HalfEdgeModel {
    val builder = HalfEdgeModel.Builder()
    builder.cb()
    return builder.build()
}

If you're new to this pattern, I recommend reading Kotlin's documentation; it does a far better job of explaining the implementation than I can summarize in this post.
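As a rough sketch of how such a builder could be wired up (the names mirror the DSL above, but this is a pared-down stand-in, not the real HalfEdgeModel.Builder):

```kotlin
// Stand-in vector type for the sketch.
class Vector3f(val x: Float, val y: Float, val z: Float)

// Each vertex { } block configures one of these.
class VertexBuilder {
    var position: Vector3f? = null
    var normal: Vector3f? = null
}

class ModelBuilder {
    val vertices = mutableListOf<VertexBuilder>()
    val faces = mutableListOf<Triple<Int, Int, Int>>()

    // Nested DSL entry point: runs the lambda with a fresh VertexBuilder receiver.
    fun vertex(cb: VertexBuilder.() -> Unit) {
        vertices.add(VertexBuilder().apply(cb))
    }

    fun face(a: Int, b: Int, c: Int) {
        faces.add(Triple(a, b, c))
    }
}

fun model(cb: ModelBuilder.() -> Unit): ModelBuilder = ModelBuilder().apply(cb)

fun main() {
    val m = model {
        vertex { position = Vector3f(0f, 0f, 0f) }
        vertex { position = Vector3f(1f, 0f, 0f) }
        vertex { position = Vector3f(0f, 0f, 1f) }
        face(0, 1, 2)
    }
    println("${m.vertices.size} vertices, ${m.faces.size} face")  // 3 vertices, 1 face
}
```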

Conclusions

Kotlin gets me close to an ideal language for writing hobby OpenGL programs. The lack of pass-by-value types and of constexpr still makes the code more verbose than I'd like, but DSLs and lambda blocks make some complex code an absolute joy to write.

Monday, January 23, 2017

Making Waves at Global Game Jam 2017

I could not have been more excited when Global Game Jam 2017's theme was announced. In college, I was fascinated with simulating soft-body objects. I'd done some experiments with Lagrangian fluid simulations a few years back, but nothing had inspired me to work this long and this fast.

I joined a small team calling ourselves "Scyllier than Charybdis" to work on the game "Poseidon and the Argonauts." The game itself is pretty simple: you try to push various pieces of flotsam into a number of boats, earning points for the number of boats you destroy (versus boats that escape), with a bonus for minimizing the number of waves you generate.


What is a Wave?

Early in the concepting, we knew that we wanted a game that showed off some of the key features of waves. Our early concepts focused on the additive properties of waves: if you align the crests of two waves, they'll add together, as will the troughs.
Blue is sin(x) and red is sin(2x); green is the sum of these functions. You can see the waves get higher when the crests align.

Additionally, if you align a crest with a trough, you'll get a straight line.

Blue is sin(x) and red is sin(x+pi); green is the sum of these functions. You can see the waves perfectly cancel each other out.
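The superposition in those figures is just pointwise addition, which is easy to sanity-check:

```kotlin
import kotlin.math.PI
import kotlin.math.abs
import kotlin.math.sin

fun main() {
    // Crests aligned: the two unit waves sum to double the amplitude.
    println(sin(PI / 2) + sin(PI / 2))  // 2.0

    // Offset by pi: sin(x) + sin(x + pi) cancels at every point.
    for (i in 0..10) {
        val x = i * 0.5
        check(abs(sin(x) + sin(x + PI)) < 1e-9)
    }
    println("cancellation holds")
}
```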

My first instinct was to discretize the play area. You can get really nice-looking waves by pushing a vertex down (or pulling it up), then averaging each vertex with its neighbors every frame. Once you discretize it like this, you could eventually move to more accurate physical models, but that seemed unlikely to happen during a game jam.
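A one-dimensional sketch of that averaging idea might look like this (the real thing would run over a 2D grid, once per frame):

```kotlin
// One relaxation step: each height becomes the average of itself and
// its two neighbors (wrapping at the edges for simplicity).
fun relax(heights: FloatArray): FloatArray {
    val out = FloatArray(heights.size)
    for (i in heights.indices) {
        val left = heights[(i - 1 + heights.size) % heights.size]
        val right = heights[(i + 1) % heights.size]
        out[i] = (left + heights[i] + right) / 3f
    }
    return out
}

fun main() {
    // Pull one vertex up and watch the disturbance spread and flatten.
    var h = FloatArray(8)
    h[3] = 1f
    repeat(3) { h = relax(h) }
    println(h.joinToString { "%.3f".format(it) })
}
```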

Another idea we had was to make our game 2D and have circular wave sprites expand from your finger. I wanted to go slightly more realistic, so I moved to the idea of a number of "wave generators": tiny functions that produce a height given an x and y position in a 2D field. Another team member came up with a solution that, given an x and y position as well as a wave start time, would produce a circular height map whose shape was defined via a Unity AnimationCurve. It would decay over time and looked excellent.
AnimationCurve defining the shape of the wave rather than an Animation as intended.

All of these wave generators get summed at a point to produce an absolute height. This was especially useful because we didn't have to nail down a play-area size or viewport ahead of time. As we evolved the game concept over 48 hours, we could keep the same set of functions and move them around without sacrificing performance.
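A wave generator along these lines might be sketched as below; the ring shape here uses sin and an exponential decay as stand-ins for the AnimationCurve the actual jam code used:

```kotlin
import kotlin.math.exp
import kotlin.math.sin
import kotlin.math.sqrt

// Hypothetical generator: a ripple expanding from (cx, cz), decaying over time.
class WaveGenerator(val cx: Float, val cz: Float, val startTime: Float) {
    fun heightAt(x: Float, z: Float, time: Float): Float {
        val age = time - startTime
        if (age < 0f) return 0f            // wave hasn't started yet
        val dist = sqrt((x - cx) * (x - cx) + (z - cz) * (z - cz))
        // Ring travels outward at unit speed; amplitude decays with age.
        return sin(dist - age) * exp(-0.5f * age)
    }
}

// The ocean height at any point is just the sum over all generators,
// so there is no fixed play-area size to commit to.
fun oceanHeight(generators: List<WaveGenerator>, x: Float, z: Float, time: Float): Float =
    generators.sumOf { it.heightAt(x, z, time).toDouble() }.toFloat()

fun main() {
    val gens = listOf(WaveGenerator(0f, 0f, 0f), WaveGenerator(5f, 0f, 0f))
    println(oceanHeight(gens, 2f, 0f, time = 1f))
}
```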

How do Waves Move Things?

Eventually we decided to focus on using waves to push objects around, while maintaining the features mentioned above. To do this, we had to model the physics of how a wave moves objects. A wave doesn't actually push an object; it simply lifts it up and down. The object effectively enters a free fall down the face of the wave, with drag from the water slowing it down.
Gravity pulls the object down.
Object pushed forward due to the wave surface.
Free fall is slowed due to drag.
To model this, I decided to calculate the surface normal of the ocean field at the point where an object was in contact with the ocean. To do this, I simply sampled the height three times around each point and used the cross product to figure out which way is up.

When I remove the vertical component of this normal, I get the direction the object is pushed, scaled by how steep the wave face is. I can then apply a force found by multiplying this by a per-object scalar (to simulate heavier or lighter objects, or objects that are more or less streamlined).
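That normal calculation can be sketched as below, with a simple planar slope standing in for the real height field:

```kotlin
import kotlin.math.sqrt

// Stand-in height field: a plane rising in +x, so the push should point toward -x.
fun height(x: Float, z: Float): Float = 0.5f * x

// Sample the height at three nearby points and cross the two tangents
// to recover the (normalized) surface normal.
fun surfaceNormal(x: Float, z: Float, eps: Float = 0.01f): FloatArray {
    val h0 = height(x, z)
    val hx = height(x + eps, z)
    val hz = height(x, z + eps)
    val tx = floatArrayOf(eps, hx - h0, 0f)   // tangent along +x
    val tz = floatArrayOf(0f, hz - h0, eps)   // tangent along +z
    // cross(tz, tx) points up for a y-up height field.
    val n = floatArrayOf(
        tz[1] * tx[2] - tz[2] * tx[1],
        tz[2] * tx[0] - tz[0] * tx[2],
        tz[0] * tx[1] - tz[1] * tx[0],
    )
    val len = sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2])
    return floatArrayOf(n[0] / len, n[1] / len, n[2] / len)
}

fun main() {
    val n = surfaceNormal(0f, 0f)
    // Dropping the y component leaves the horizontal push direction,
    // whose magnitude grows with the steepness of the wave face.
    println("push = (${n[0]}, ${n[2]})")
}
```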

The best part about this model is that, if you don't set up the wave just right, an object will ride up the front of the wave and down the back side. This interaction is observable in the real world, and it produces incredibly realistic-looking wave behavior with little work.

Rendering Waves

My first attempt at rendering waves was to create a plane in the world. Each vertex is evaluated once per frame, sampling its position in the height field; we even get the surface normal from our movement calculations! This turned out to scale incredibly poorly, especially when running on a phone. One team member had a Nexus 5, so this wasn't going to fly.

After the usual optimization steps in Unity (using native arrays instead of List and switching from foreach with an enumerator to indexed for loops), I decided to change the geometry to minimize the number of vertices in the scene. I generate the vertices in viewport space, ensuring an even distribution across the screen. For each vertex, I use Unity's ViewportPointToRay function to generate a world-space ray, then intersect it with the water plane. After this projection, I sample the height field to move the point up or down. This let me halve the number of vertices while maintaining the same (or even improved) level of graphical fidelity.
Note that the water mesh in the editor is evenly spaced across the display, with the only protrusion being from the ripple.
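The projection step reduces to a ray-plane intersection; here's a small sketch with a hand-rolled Ray type standing in for the ray Unity's ViewportPointToRay returns:

```kotlin
// Minimal ray: origin (ox, oy, oz) and direction (dx, dy, dz).
data class Ray(val ox: Float, val oy: Float, val oz: Float,
               val dx: Float, val dy: Float, val dz: Float)

// Returns the (x, z) where the ray crosses the water plane y = 0,
// or null if the ray never reaches it (the "raycast misses" case).
fun intersectWaterPlane(ray: Ray): Pair<Float, Float>? {
    if (ray.dy >= 0f) return null          // pointing up or parallel: miss
    val t = -ray.oy / ray.dy               // solve oy + t * dy = 0
    return Pair(ray.ox + t * ray.dx, ray.oz + t * ray.dz)
}

fun main() {
    // Camera at (0, 10, 0) looking down and forward.
    val hit = intersectWaterPlane(Ray(0f, 10f, 0f, 0f, -1f, 1f))
    println(hit)  // (0.0, 10.0)
}
```

Once the hit point is known, the height field sample simply displaces it vertically.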
For the main menu, we decided that viewing the horizon was important. You might worry that this method wouldn't hold up without a ground plane to intersect. If the raycast missed, I simply projected the vertex to the far plane and let the heightmap logic pull it down. Visually it actually worked pretty well (although waves appear broken if you touch too far back).

What is Missing

Due to the way I was performing mesh generation, the UV coordinates of the mesh were locked to screen space. Not only did this make a water texture stand still when the camera moved, but the perspective shearing was undone by the vertices being placed from viewport space. I opted to remove the texture entirely, although I could've tied it to the world-space position of the vertex in the x,z plane.

The water has just basic lighting on it. I would've liked to change the color based on the height of the wave and how "up" the surface normal was, which could've been done with a simple shader. Since we also computed the radius of the waves, we could've spread "foam" particles around the wave to simulate the waves breaking.

What Would I Change

I mentioned before a model where you'd simply move vertices up or down and let them normalize out to simulate waves. I really would've liked to experiment with this; it would've let waves crash around islands and let me render trails behind the ships without coming up with a new equation to factor into the height field. Most importantly, it would've easily let a user drag their finger to generate waves.

Watching people play the game, I saw that everyone wants to drag their finger. I would've liked to support this, which could've been done by either changing the water model (as mentioned above), dropping multiple "wave points" (which I capped at 16 for performance and code-simplicity reasons), or generating a model where I track the start and end points of the touch. I could simulate the magnitude of the wave mathematically as a capsule, with the radius changing based on the time the point was alive.

Conclusion

The model used in the game for waves was incredibly simple and ran well on all the devices we had available. Despite not being terribly accurate, it turned out very well. Check out our source code and final APK from our Jam page:
http://globalgamejam.org/2017/games/poseidon-and-argonauts
