Thursday, October 12, 2017

Kotlin Game Programming

Introduction

I love to learn new languages as each one is a treasure trove of new ideas and ways of thinking that I can carry forward to new projects. When porting Android Java code to Kotlin, I started to see a few possible patterns emerge that appeared incredibly useful for solving problems I've encountered in game engine development. I set off to explore these patterns using LWJGL, and possibly find a new language for OpenGL development in the process. The spoiler is that I feel hamstrung without constexpr and the ability to pass by value, but there are still plenty of useful patterns Kotlin made obvious.

As I write this, I'm still actively exploring the language over in my lwjgl_test repository on GitHub. Feel free to use this as a starting point or just as a reference.

The Data Class

One of my inspirations for this project was the existence of a data class in Kotlin. I was hoping for a pass-by-value type much like C#'s struct, but that is not the case. This means that in cases like a scenegraph node or general model transform, you will probably write your logic more like a C math API than a C++ one (that is, you will tend to take in a value to write out to rather than having inlined const functions).
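
To make the distinction concrete, here's a minimal sketch (Vec3 is a stand-in type I made up for this post, not code from my project) showing that a data class is still a reference type:

```kotlin
// Hypothetical stand-in type; Kotlin data classes are reference types.
data class Vec3(var x: Float, var y: Float, var z: Float)

fun demo(): Pair<Float, Float> {
    val a = Vec3(1f, 0f, 0f)
    val b = a          // copies the reference, not the value
    val c = a.copy()   // an explicit shallow copy allocates a new object
    b.x = 5f           // mutates the same object `a` points to
    return a.x to c.x  // a sees the mutation; c does not
}
```

This is exactly why C#-style struct semantics don't carry over: you get shared mutation by default and a heap allocation for every explicit copy.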

You can see the side effects in my Transform class: I end up taking in a Vector3f to write into, which avoids runtime heap allocations and provides hooks for "dirty" flags in the future.


class Transform {
    private val position = Vector3f()
    // ...

    fun setPosition(position: Vector3f) {
        this.position.set(position)
    }

    fun getPosition(position: Vector3f) {
        position.set(this.position)
    }

    // ...
}

OpenGL State Management

A large portion of my time in engine development is typically focused on OpenGL state management. If you're unfamiliar with the OpenGL model, your OpenGL system is effectively a state machine. It's common to abstract this state machine with an additional software layer that prevents redundant state-set calls and ensures the state is correct before the next operation you wish to execute.
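
As a hedged sketch of what such a layer does (the names here are mine, and the actual GL call is injected as a lambda so the caching logic stands on its own):

```kotlin
// Minimal sketch of a redundant-state filter. `bind` stands in for a real
// GL call such as glBindBuffer; the cache only forwards when the value
// bound to a target actually changes.
class BindingCache(private val bind: (target: Int, buffer: Int) -> Unit) {
    private val current = HashMap<Int, Int>()

    fun bindBuffer(target: Int, buffer: Int) {
        if (current[target] == buffer) return  // redundant set: skip the GL call
        current[target] = buffer
        bind(target, buffer)
    }
}
```

A real layer tracks far more than buffer bindings (programs, textures, blend state, and so on), but the shape is the same: shadow the state machine in software and only cross the driver boundary on change.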

My rendering code is what I'm most proud of for this project. As of right now, my renderModel function lives in my shader logic and looks like this:

        fun renderModel(model: HalfEdgeModel, material: Material) {
            glUniform4f(modelAmbientColorUniform, material.ambient.x, material.ambient.y, material.ambient.z, 1f)

            model.use {
                MemoryStack.stackPush().use {
                    val nativeMatrix = it.mallocFloat(16)
                    val modelMatrix = Matrix4f()
                    model.transform.getWorldMatrix(modelMatrix)
                    modelMatrix.get(nativeMatrix)

                    GL20.glUniformMatrix4fv(modelUniform, false, nativeMatrix)
                }

                loadPositions(positionAttribute)
                loadNormals(normalAttribute)
                drawElements()
            }
        }

The functions loadPositions(), loadNormals(), and drawElements() are only available inside model.use. Two features of Kotlin make this possible. First, functions in Kotlin may take lambdas as parameters. If the last parameter of a function is a lambda, you may close the function's argument list and write an open curly brace to implement that lambda. On top of that, if the only argument is a lambda, you may omit the argument list entirely. Second, these lambdas may have a receiver object. This means the lambda syntactically appears to be a member of another class and can access members of that class in the lambda body. You can read more about this in the Kotlin documentation on function literals with receivers.
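
A stripped-down illustration of both features together (all names here are invented for this example, not from my engine):

```kotlin
class Resource {
    private var bound = false

    // Members of Active are only reachable inside use { ... }.
    inner class Active {
        fun draw(): String {
            require(bound)  // inner class can see the outer binding state
            return "drawn"
        }
    }

    // The last (and only) parameter is a lambda with receiver `Active`,
    // so callers write resource.use { draw() } with no argument list.
    fun <T> use(body: Active.() -> T): T {
        bound = true
        val result = Active().body()
        bound = false
        return result
    }
}
```

Calling `Resource().use { draw() }` works, while calling `draw()` outside the lambda doesn't compile, because `draw` is a member of `Active` and no `Active` receiver is in scope.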

This is where Kotlin shines. My implementation of use() looks like this:

    fun use(callback: HalfEdgeModel.ActiveModel.() -> Unit) {
        glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject)
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject)
        activeModel.callback()
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)
        glBindBuffer(GL_ARRAY_BUFFER, 0)
    }

The line activeModel.callback() references a private member in HalfEdgeModel of the type ActiveModel. All the rendering commands are implemented in this class:

    inner class ActiveModel {
        fun loadPositions(positionAttributeLocation: Int) {
            GL20.glVertexAttribPointer(positionAttributeLocation, 3, GL11.GL_FLOAT, false, Vertex.VERTEX_SIZE, 0)
        }

        fun loadNormals(normalAttributeLocation: Int) {
            GL20.glVertexAttribPointer(normalAttributeLocation, 3, GL11.GL_FLOAT, true, Vertex.VERTEX_SIZE, Vertex.VECTOR_3_SIZE.toLong())
        }

        fun drawElements() {
            GL11.glDrawElements(GL11.GL_TRIANGLES, edges.size, GL11.GL_UNSIGNED_SHORT, 0)
        }
    }

What I love about this is that you cannot access the attribute buffer until it's bound; attempting to do so is a compile-time error. This is achievable in C++, but you would end up with either a stack-allocated object that maintains the binding via RAII or a callback that receives an "ActiveModel".

DSL Like Syntax

This is the first language I've used that puts any focus on DSL-like syntax. Historically I've been very wary of such constructs, especially in Ruby with its ability to practically redefine the entire language. Kotlin has won me over to the concept by tying it to a static and strict type system, giving me a configuration file that lives right in my source tree. Consider my procedural model definition DSL:

        val halfEdgeGround = halfEdgeModel {
            vertex {
                position = Vector3f(-1f, 0f, -1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            vertex {
                position = Vector3f(-1f, 0f, 1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            vertex {
                position = Vector3f(1f, 0f, 1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            vertex {
                position = Vector3f(1f, 0f, -1f)
                normal = Vector3f(0f, 1f, 0f)
            }
            face(0, 1, 2)
            face(0, 2, 3)
        }

I'm actively playing with various ways to structure these DSLs, particularly when to use the callback syntax versus the equals sign. Moving forward, I'm using this anywhere I would otherwise reach for the builder pattern or even a config file.

As a side note, HalfEdgeModel (and most classes I write a DSL for) are actually implemented as a Builder behind the scenes:


fun halfEdgeModel(cb: HalfEdgeModel.Builder.() -> Unit): HalfEdgeModel {
    val builder = HalfEdgeModel.Builder()
    builder.cb()
    return builder.build()
}

If you're new to this pattern, I recommend reading Kotlin's documentation, which does a far better job of explaining the implementation than I can summarize in this post.
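
That said, a simplified sketch of the builder shape behind a DSL like the one above might look like this (the class and field names are invented for illustration; my real Builder differs):

```kotlin
// Illustrative sketch of a type-safe builder behind a model DSL.
class Model(val vertexCount: Int, val faces: List<Triple<Int, Int, Int>>)

class ModelBuilder {
    class VertexBuilder {
        var position = floatArrayOf(0f, 0f, 0f)
        var normal = floatArrayOf(0f, 1f, 0f)
    }

    private val vertices = mutableListOf<VertexBuilder>()
    private val faces = mutableListOf<Triple<Int, Int, Int>>()

    // vertex { ... } runs its lambda with a VertexBuilder receiver,
    // so `position = ...` inside the block assigns to that vertex.
    fun vertex(cb: VertexBuilder.() -> Unit) {
        vertices += VertexBuilder().apply(cb)
    }

    fun face(a: Int, b: Int, c: Int) {
        faces += Triple(a, b, c)
    }

    fun build() = Model(vertices.size, faces)
}

fun model(cb: ModelBuilder.() -> Unit): Model = ModelBuilder().apply(cb).build()
```

Each nesting level of the DSL is just another receiver type, which is what keeps the whole thing statically checked.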

Conclusions

Kotlin gets me close to an ideal language for writing hobby OpenGL programs. The lack of pass by value types and the lack of constexpr make the code still more verbose than I'd like, but DSLs and lambda blocks make some complex code an absolute joy to write.

Monday, January 23, 2017

Making Waves at Global Game Jam 2017

I could not have been more excited when the Global Game Jam 2017's theme was announced. In college, I was fascinated with simulating soft body objects. I'd done some experiments with Lagrangian fluid simulations a few years back, but nothing that I was inspired to work on this long and this fast.

I joined a small team calling ourselves "Scyllier than Charybdis" to work on the game "Poseidon and the Argonauts." The game itself is pretty simple: you try to push various pieces of flotsam into a number of boats, earning points for the boats you destroy (versus boats that escape) and a bonus for minimizing the number of waves you generate.


What is a Wave?

Early in the concepting, we knew that we wanted a game that showed off some of the key features of waves. Our early concepts focused on the additive property of waves: if you align the crests of two waves, they add together, as do the troughs.
Blue is sin(x) and red is sin(2x). Green is the sum of these functions.
You can see the waves get higher when the crests align.
Additionally, if you align the crest and trough, you'll get a straight line.
Blue is sin(x) and red is sin(x+pi). Green is the sum of these functions.
You can see the waves perfectly cancel each other out.

My first instinct was to discretize the play area. You can get really nice-looking waves by pushing a vertex down (or pulling it up), then averaging each vertex with its neighbors every frame. Once you discretize it like this, you could eventually move to more accurate physical models, but that seemed unlikely to happen during a game jam.
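
The relaxation step is simple enough to sketch; here's a hypothetical 1D version (each sample moves toward the average of itself and its neighbors, clamping at the edges):

```kotlin
// One relaxation step over a discretized 1D height field. Disturbances
// spread outward each frame, which reads as a traveling wave.
fun relax(heights: FloatArray): FloatArray =
    FloatArray(heights.size) { i ->
        val left = heights[maxOf(i - 1, 0)]
        val right = heights[minOf(i + 1, heights.lastIndex)]
        (left + heights[i] + right) / 3f
    }
```

The 2D version is the same idea with four (or eight) neighbors per vertex, which is where the per-frame cost starts to add up.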

Another idea we had was to make our game 2D and have circular wave sprites expand from your finger. I wanted to go slightly more realistic, so I moved to the idea of a number of "wave generators." These were tiny functions that produce a height given an x and y position in a 2D field. Another team member came up with a compatible solution that, given an x and y position as well as a wave start time, would produce a circular height map whose shape was defined via a Unity AnimationCurve. It would decay over time and looked excellent.
AnimationCurve defining the shape of the wave rather than an Animation as intended.
All of these wave generators get summed at a point to produce an absolute height. This was especially useful as we didn't have to pin down a play-area size or viewport ahead of time. As we evolved the game concept over 48 hours, we could keep the same set of functions and move them about without sacrificing performance.
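
A sketch of the idea, with a plain cosine falloff standing in for the AnimationCurve shape (everything here is illustrative, not our jam code):

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.hypot

// A wave generator maps an (x, y) sample point and the current time to a height.
typealias WaveGenerator = (x: Float, y: Float, t: Float) -> Float

// A circular ripple: the crest expands at `speed` from (cx, cy) and decays
// with age. The cosine stands in for the curve that shaped our real wave.
fun ripple(cx: Float, cy: Float, speed: Float,
           amplitude: Float, startTime: Float): WaveGenerator =
    { x, y, t ->
        val age = t - startTime
        if (age < 0f) 0f else {
            val dist = hypot(x - cx, y - cy)
            val phase = dist - speed * age
            amplitude * cos(phase * PI.toFloat()) / (1f + age)  // decay over time
        }
    }

// The ocean height at a point is just the sum of every live generator,
// so crests reinforce and a crest meeting a trough cancels out.
fun heightAt(generators: List<WaveGenerator>, x: Float, y: Float, t: Float): Float =
    generators.fold(0f) { acc, g -> acc + g(x, y, t) }
```

Because `heightAt` is a pure function of position and time, there's no grid to size ahead of time: you can evaluate it wherever the current viewport happens to be.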

How do Waves Move Things?

Eventually we decided to focus on using waves to push objects around while maintaining the features mentioned above. To do this, we had to model the physics of how a wave moves objects. A wave doesn't actually push an object; it simply lifts it up and down. The object effectively enters a free fall down the face of the wave, with drag from the water slowing it down.
Gravity pulls the object down.
Object pushed forward due to the wave surface.
Free fall is slowed due to drag.
To model this, I decided to calculate the surface normal of the ocean field at the point where an object contacts the ocean. To do this, I simply sampled the height at three points around the object's position and used the cross product to figure out which way is up.

When I drop the vertical component of this normal, I get the direction the object should move, scaled by how steep the wave is. I can then apply a force found by multiplying this by some scalar defined per object (to simulate heavier or lighter objects, or objects that are more or less streamlined).
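
The two steps above might look something like this sketch (the `height` function stands in for the summed generator field; the names and epsilon are mine):

```kotlin
import kotlin.math.sqrt

// Estimate the ocean surface normal at (x, z) from three height samples,
// then keep only its horizontal part as the push direction. On a flat
// surface the result is zero; the steeper the wave, the stronger the push.
fun pushForce(height: (Float, Float) -> Float,
              x: Float, z: Float, eps: Float = 0.01f): Pair<Float, Float> {
    val h0 = height(x, z)
    // Two tangent vectors along +x and +z.
    val tx = floatArrayOf(eps, height(x + eps, z) - h0, 0f)
    val tz = floatArrayOf(0f, height(x, z + eps) - h0, eps)
    // Cross product tz × tx points "up" off the surface.
    val nx = tz[1] * tx[2] - tz[2] * tx[1]
    val ny = tz[2] * tx[0] - tz[0] * tx[2]
    val nz = tz[0] * tx[1] - tz[1] * tx[0]
    val len = sqrt(nx * nx + ny * ny + nz * nz)
    // Normalize, then drop the vertical component: the leftover horizontal
    // part points downhill, scaled by how tilted the surface is.
    return (nx / len) to (nz / len)
}
```

The downhill direction falls out of the math for free, which is what makes objects ride up the front of a wave and back down the other side.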

The best part about this model is that, if you don't set up the wave just right, an object will ride up the front of the wave and down the back side. This interaction is observable in the real world, and it produces incredibly realistic-looking wave behavior with little work.

Rendering Waves

My first attempt at rendering waves was to create a plane in the world. Each vertex would be evaluated once a frame, sampling its position in the height field. We even get the surface normal from our movement calculations! This turned out to scale incredibly poorly, especially when running on a phone. One team member had a Nexus 5, so this wasn't going to fly.

After the normal optimization steps in Unity (using native arrays over List and switching from foreach with an enumerator to indexed for loops), I decided to change the geometry to minimize the number of vertices in the scene. I generate the vertices in viewport space, ensuring an even distribution across the screen. For each vertex, I use Unity's ViewportPointToRay function to generate a world-space ray, then intersect it with the water plane. After this projection, I sample the height field to move the point up or down. This let me halve the number of vertices and maintain the same (or even improved) level of graphical fidelity.
Note that the water mesh in the editor is evenly spaced across the display, with the only protrusion being from the ripple.
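
The projection step reduces to a ray/plane intersection; here's an illustrative sketch with the water plane at y = 0 (the origin and direction stand in for what ViewportPointToRay would hand back, and I assume the camera sits above the water):

```kotlin
// Intersect a camera ray with the water plane y = 0. Arrays are (x, y, z).
// Returns null when the ray never reaches the plane (looking at the horizon
// or up), which is the "raycast misses" case handled in the main menu.
fun intersectWaterPlane(origin: FloatArray, dir: FloatArray): FloatArray? {
    if (dir[1] >= 0f) return null            // parallel to or away from the water
    val t = -origin[1] / dir[1]              // solve origin.y + t * dir.y = 0
    return floatArrayOf(origin[0] + t * dir[0], 0f, origin[2] + t * dir[2])
}
```

After this projection, each vertex's world-space x,z is fixed and only its height needs to be sampled per frame.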
In the main menu, we decided that viewing the horizon was important. You might worry that this method wouldn't hold up without a ground plane to intersect. If the raycast misses, I simply project the vertex to the far plane and let the heightmap logic pull it down. Visually it actually worked pretty well (although waves appear broken if you touch too far back).

What is Missing

Due to the way I was performing mesh generation, the UV coordinates of the mesh were locked to screen space. Not only did this make a water texture stand still when the camera moved, but the perspective shearing was undone by the vertices being placed from viewport space. I opted to remove the texture entirely, although I could've tied it to the world-space position of the vertex in the x,z plane.

The water has just basic lighting on it. I would've liked to change the color based on the height of the wave and how "up" the surface normal was, which could've been done with a simple shader. Since we also computed the radius of the waves, we could've spread "foam" particles around the wave to simulate the waves breaking.

What Would I Change

I mentioned before a model where you'd simply move vertices up or down and let them normalize out to simulate waves. I really would've liked to experiment with this: it would've let waves crash around islands and let me render trails behind the ships without having to come up with a new equation to factor into the height field. Most importantly, it would've easily let a user drag their finger to generate waves.

Watching people play the game, everyone wants to drag their finger. I would've liked to support this, which could've been done by either changing the water model (as mentioned above), dropping multiple "wave points" (which I capped at 16 for performance and code-simplicity reasons), or generating a model where I track the start and end points of the touch. I could simulate the magnitude of the wave mathematically as a capsule, with the radius changing based on the time the point was alive.
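
The capsule idea reduces to a distance-to-segment test; here's a hypothetical sketch (the falloff shape and names are mine, not something we shipped):

```kotlin
import kotlin.math.abs
import kotlin.math.hypot

// Distance from point (px, py) to the segment from (ax, ay) to (bx, by) —
// the touch's start and end points.
fun distanceToSegment(px: Float, py: Float,
                      ax: Float, ay: Float,
                      bx: Float, by: Float): Float {
    val abx = bx - ax
    val aby = by - ay
    val lenSq = abx * abx + aby * aby
    // Project the point onto the segment, clamping to its endpoints.
    val t = if (lenSq == 0f) 0f
            else (((px - ax) * abx + (py - ay) * aby) / lenSq).coerceIn(0f, 1f)
    return hypot(px - (ax + t * abx), py - (ay + t * aby))
}

// The wave height peaks where the distance to the segment matches the
// expanding radius, with a simple linear falloff around the crest.
fun capsuleWaveHeight(dist: Float, radius: Float, amplitude: Float): Float {
    val d = abs(dist - radius)   // offset from the expanding crest
    return if (d > 1f) 0f else amplitude * (1f - d)
}
```

Growing `radius` with the age of the touch gives a crest that travels outward along the whole stroke rather than from a single point.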

Conclusion

The model used in the game for waves was incredibly simple and ran well on all the devices we had available. Despite not being incredibly accurate, it turned out very well. Check out our source code and final APK from our jam page:
http://globalgamejam.org/2017/games/poseidon-and-argonauts

Saturday, May 2, 2015

Raspberry Pi "Arcade Machine"

Like any normal person who happens across a pair of broken PS3 arcade sticks and a Raspberry PI 2, I decided that it was time to create my own arcade machine.

To preface this, I am a software engineer at Sphero and I work on toy robots. Despite what that may imply, I have minimal hardware experience. So rather than a tutorial on how to make your own PI arcade, I decided to document my attempt, along with the mistakes and backtracks I went through. The end result is that a) it works and b) I didn't follow any single tutorial, nor is my joystick quite like any of the others!

I started out with a Street Fighter fight stick. I was told that the board was "burnt out." After a short attempt to utilize it as a PS3 controller, I immediately severed the connections to the PS3 controller board and ensured I kept the wiring order with medical tape. I will never use medical tape again!
I then decided to wire the joystick directly into the GPIO ports on the PI. At this point, I was experimenting with my original PI as the PI 2 was busy being an amateur web server. I came up with this wiring diagram:
After a long period of tinning cables and plugging them into a breadboard, it was time to hook it up to the PI! There were two key problems:
1) I didn't know how to do anything with GPIO ports.
2) I didn't have any way to plug anything into the GPIO ports.

I eventually found a GPIO based joystick project and adapted to their wiring diagram (it turns out that some of the pins are special, so you shouldn't use them for general I/O).

As far as wiring it up, after a bit of a panic I discovered that I still had a single floppy disk ribbon cable buried deep in my closet. That led me to this horrible tangle:
And to top it all off, I could only get the joystick to register down and right! It took quite a bit of time fiddling with the software before my trusty multimeter told me that those directions weren't being registered on the stick at all. It's a good thing I have two!

At this point, I started realizing that it's actually quite complicated to deal with joysticks under Linux. I started searching for a simpler solution. I discovered an Adafruit project, and immediately forked it and reconfigured the pins for my own use.

I also needed to start thinking of a more permanent solution to my final wiring. I didn't want to waste a bunch of my nicer breadboard wires by loosely plugging them into a floppy disk ribbon. I also managed to ruin a HDD ribbon cable trying to cut it up for use (which is why I just dealt with the twist you see in the floppy ribbon). I found a nice GPIO breakout cable from Sparkfun. This has the additional bonus of reordering the GPIO pins to be in numerical order! But, since the traces jump all around, and I can't fit the ribbon cable on the proper side of the GPIO pins, I had to disconnect my PI2 from the internet and use it instead (it's a good thing it isn't still running this blog!).

The Raspberry PI 2 jump actually halted me in a more significant way: I no longer had a built in RCA video out jack on the board! After a brief bout of disappointment, I grabbed a camcorder adapter from Best Buy. This mostly worked... mostly, but the important thing to note is that the red and yellow wires are actually swapped!

So here you have it, I'm now working with the Sparkfun breakout board and a suitable camcorder RCA adapter. Bonus: I no longer need the Jambox to hear the games! I just need the Jambox to jam.
(My fork is available here, it's just a few pin/key changes)

At this point, I have an issue: there is a USB cable going out of the back of the controller, but it went straight into a non-functional PS3 board. Additionally, I had a separate cable going into my PI. After a few minutes with the multimeter and a soldering iron, I had this awesome charging solution:

Next came cleaning everything up. Luckily for me, there is a little door in the back to stash the USB cable. Lacking any sort of precision cutting tool, I took my Ikea cordless drill and made what amounts to the ugliest giant hole to route cables through.

After a lot of haphazard drilling and jamming components where they barely fit, I can close everything off and turn it on!
The final hardware setup is this:
1) there is a Raspberry PI shoved very haphazardly into the top left corner of the arcade stick.
2) there is a second lower level in the stick where I wedged the breakout board.
3) in the PI's USB ports I have a Wifi dongle, BLE dongle (I do work for Sphero!), and a cable snaking out to a USB Micro adapter.
4) I have a camcorder 1/8" to RCA adapter, this means that the yellow and red wires are swapped! I considered running an HDMI cable, but if I can't connect it to an old-school CRT television, what's the point of even doing this?
5) At the end of the USB adapter, I typically keep a USB hub. An interesting side effect: if I don't have a USB power adapter handy, I can actually plug the PI's power into this hub and it will comically "power itself" (there is an external AC adapter I also keep with this hub).

Additional hardware "fixes" I want to make:
1) I'd like to wire in a battery to make it more portable.
2) I want to use the hole in the top to mount the PI externally. This would mean that I could use an HDMI cable when I was forced to use a modern TV, and I'd have access to the one unused USB port!

I will dedicate another post to my PI software setup (I currently need a keyboard to type "startx" and to press "esc" - so I'm not comfortable posting it yet). But what you need to know is that I use:
2) Adafruit Retrogame - which I set to run automatically

Friday, January 23, 2015

Ollie and your Android Activity Lifecycle

Welcome to my second tutorial on connecting your Ollie to your own custom Android application. Last week, I alluded to needing to properly handle the Ollie application lifecycle. I'll finally shed some light on what I meant!
If you have not yet paired Ollie with your phone, I recommend you read my last article. I will not cover setting up your project or application manifest here.

Now that I've verified that I can connect to an Ollie, I want to do so in a manner that I can eventually ship in a game. To do this, I have to perform the following tasks:
  1. I want to wrap connecting to Ollie so I just say "findRobot()" and everything will kick off.
  2. I want to handle the user backgrounding my Activity. When this happens, I will let go of my Ollie connection so another app can connect in my place.
  3. I want to be notified when Ollie disconnects on its own accord, and let my app reconnect in a reasonable way.

Application Architecture

A quick overview of why I'm writing this article. I wanted to create a game similar to "Boppit" where the user performs a series of actions dictated by a possibly malevolent robot overlord. If you fail to perform this series of instructions, you will be horribly punished with a lower score! I did this as part of "Hack Friday" at Sphero where I decided to vet our new public SDK against apps other than our demo applications and production driving app. In the process, I discovered that there were no internally developed instructions and took it upon myself to fill the gap whilst developing a fun game prototype.

To handle the connection lifecycle for this game, I've created a class "BoppitRobotProvider" to handle robots coming online and offline, as well as echoing important application lifecycle events down to the DiscoveryAgent. It depends on an "IRobotManager" interface, for which I wrote a DiscoveryAgentLE-backed version named "OllieRobotManager" and a fake implementation for testing named "FakeRobotManager." I will only walk you through "OllieRobotManager" here. The unit tests I'll hide for now, as I don't want to write an article on Unit Testing in Android (in fact, I'm also taking this as an opportunity to develop my own TDD skillset, so it would be far less interesting than other internet resources on the subject).

Now that my disclaimers are out of the way, let's describe the basic interfaces!


public interface IRobotManager {
 void addRobotConnectionHandler(IRobotConnectionHandler robotConnectionHandler);

 void startDiscovery();
 void stopDiscovery();

 void disconnectAllRobots();

 public interface IRobotConnectionHandler {
  void robotConnected(IRobot robot);
  void robotDisconnected(IRobot robot);
 }
}

As I mentioned above, I moved DiscoveryAgentLE behind an interface (this interface!) so I could mock the basic lifecycle events. A quick description of what's happening:

  • addRobotConnectionHandler will add an interface to handle connecting and disconnecting of robots
    • QuickNote: I should add a "removeRobotConnectionHandler" function, but I haven't used it yet so I won't write it. Never write code you don't use, and delete it when you find dead code!
  • startDiscovery will look for robots
  • stopDiscovery will stop looking
  • disconnectAllRobots will disconnect everyone currently connected
  • IRobotConnectionHandler is used for notifying listeners of connection events. We only care about:
    • robotConnected - when a robot connects
    • robotDisconnected - when that robot disconnects
One thing you might notice: I'm passing around something called an "IRobot." This is just an empty interface at the current point in time. I wrote it during unit testing, and will grow it or remove it as necessary.

Activity Lifecycle

Now that we have an interface to target, let's handle the application lifecycle first! Hopefully the IRobotManager will make the underlying logic much more straightforward to you. Let's start writing the "BoppitRobotProvider"!

Let's start by saying that we'll handle IRobotConnectionHandler events, as we want to know when the robot comes online and goes offline.


public class BoppitRobotProvider implements IRobotManager.IRobotConnectionHandler

If you were to build now, you'd get errors because you haven't fulfilled the interface. If it really bugs you, you can throw in an empty implementation for now. We'll also store some variables here for use later:


 private IRobotManager mRobotManager;
 private IRobot mRobot;

 private Vector<IRobotConnectionHandler> mRobotConnectionHandlers = new Vector<>();

mRobotManager is self-explanatory: this is the manager we're going to use. mRobot is the robot we'll eventually have, and mRobotConnectionHandlers is a list of IRobotConnectionHandlers we expose so the eventual Boppit application can respond to only the most basic connection events!

We'll need a constructor:


 public BoppitRobotProvider(IRobotManager robotManager) {
  mRobotManager = robotManager;
  mRobotManager.addRobotConnectionHandler(this);
 }

And a few properties:


 public IRobot getRobot() {
  return mRobot;
 }

 public void addConnectionHandler(IRobotConnectionHandler robotConnectionHandler) {
  mRobotConnectionHandlers.add(robotConnectionHandler);
 }

 public void removeConnectionHandler(IRobotConnectionHandler robotConnectionHandler) {
  mRobotConnectionHandlers.remove(robotConnectionHandler);
 }

That's the housekeeping out of the way; now for the real meat and potatoes of this class!

So, what's the first thing we want to do in our app? Find a robot, of course! Your first instinct would probably be to throw connection logic into the onCreate of your activity. You will almost inevitably move it somewhere else, whether because you wait for streaming assets to load or you make Ollie an extra optional step. For this reason, rather than naming it something like "handleOnCreate," I'll give this method the more generic name "findRobot."


 public void findRobot() {
  mRobotManager.startDiscovery();
 }

That was easy!

Now we want to handle the most important part of the application lifecycle. You should never maintain a connection to Ollie in the background. The most important reason is that Android may decide to terminate your Activity at any time, and without warning, when it's not visible! The other reason is that you want to be a good citizen: it's rather inconsiderate to steal the Ollie connection all to yourself when another developer may be reading through this very tutorial and banging their head against the table while their Ollie fails to connect! I'd recommend this page for more information on the Android Activity lifecycle.


 public void handleOnPause() {
  mRobotManager.stopDiscovery();
  mRobotManager.disconnectAllRobots();
 }

The goal is simply to call this method when onPause is invoked. In onResume, you'd attempt to reconnect with "findRobot" if desired. Or perhaps you'd load another fragment first? As I said, you very rarely end up calling findRobot in onCreate in your final app. Another minor detail: the order of the function calls matters a little. I call stopDiscovery() first so a robot connection can't come in after calling disconnectAllRobots(); on the backend this is all happily threaded.

So now, let's handle a robot connecting.


 @Override
 public void robotConnected(IRobot robot) {
  if (mRobot == null) {
   mRobot = robot;
   for (IRobotConnectionHandler connectionHandler : mRobotConnectionHandlers) {
    connectionHandler.robotConnected(robot);
   }
   mRobotManager.stopDiscovery();
  }
 }

The idea behind this is that you'll have one robot you care about. If you already have a robot, ignore the new one! Once you actually get a robot, you stop looking and play with your happy Ollie. The SDK acts like this by default for now, but there are no guarantees about the future (and it's still recommended that you stop discovery).

Now, one last detail and you've reached the end of handling the basic Activity lifecycle.


 @Override
 public void robotDisconnected(IRobot robot) {
  if (robot == mRobot) {
   mRobot = null;
   for (IRobotConnectionHandler connectionHandler : mRobotConnectionHandlers) {
    connectionHandler.robotDisconnected(robot);
   }
  }
 }

This simply lets go of a robot we've connected to and informs the application. We don't have to clear the robot anywhere else, as "disconnectAllRobots" will raise this message (as you'll see soon).
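
To see the whole flow in one place, here's how the provider's lifecycle plays out against a fake manager, much as my FakeRobotManager does in the unit tests (sketched in Kotlin for brevity; the fake and its method names are mine, while the provider mirrors the Java above):

```kotlin
// Self-contained sketch of the provider lifecycle against a fake manager.
interface IRobot

interface IRobotConnectionHandler {
    fun robotConnected(robot: IRobot)
    fun robotDisconnected(robot: IRobot)
}

// Stand-in for the real IRobotManager implementation.
class FakeRobotManager {
    var discovering = false
        private set
    private val connected = mutableListOf<IRobot>()
    var handler: IRobotConnectionHandler? = null

    fun startDiscovery() { discovering = true }
    fun stopDiscovery() { discovering = false }
    fun simulateConnection(robot: IRobot) {
        connected += robot
        handler?.robotConnected(robot)
    }
    fun disconnectAllRobots() {
        connected.toList().forEach { r ->
            connected.remove(r)
            handler?.robotDisconnected(r)  // disconnect raises the same event
        }
    }
}

class RobotProvider(private val manager: FakeRobotManager) : IRobotConnectionHandler {
    var robot: IRobot? = null
        private set

    init { manager.handler = this }

    fun findRobot() = manager.startDiscovery()

    fun handleOnPause() {
        manager.stopDiscovery()          // stop first so nothing new connects
        manager.disconnectAllRobots()
    }

    override fun robotConnected(robot: IRobot) {
        if (this.robot == null) {
            this.robot = robot
            manager.stopDiscovery()      // got one: stop looking
        }
    }

    override fun robotDisconnected(robot: IRobot) {
        if (robot === this.robot) this.robot = null
    }
}
```

Note that handleOnPause never touches mRobot directly: stopping discovery and sleeping everyone causes the disconnect callback to fire, which clears the robot on its own.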

Now, a quick interface. This is identical to the one in IRobotManager, but I prefer that users of this class not have to know about IRobotManager.


 public interface IRobotConnectionHandler {
  void robotConnected(IRobot robot);
  void robotDisconnected(IRobot robot);
 }

And you're ready to actually hook it up into DiscoveryAgentLE!

Discovery Agent

Until now, you've written no SDK code. This is because the SDK itself is incredibly generic. It lets you connect to Ollie and Sphero, and it lets you connect one to many robots of any single type. All this generalization is a bit complicated if you just start slinging it around your Activity all willy nilly!

So, to start things off, it's time to create our OllieRobotManager:


public class OllieRobotManager implements IRobotManager, RobotChangedStateListener

You may remember IRobotManager from before, it represents the basic operations you'll perform on a DiscoveryAgentLE. The RobotChangedStateListener you may remember from my last tutorial. This is how we know what's happening with all these robots flying around the room!

Now let's get the basic private fields out of the way. I always hate it when other tutorials skip these!


 private static final String LOG_TAG = "OllieRobotManager";

 private Context mContext;
 private DiscoveryAgent mDiscoveryAgent;
 private RobotWrapper mRobot;

 private Vector<IRobotConnectionHandler> mRobotConnectionHandlers = new Vector<>();

A quick roundup of all these crazy variables I just dumped on your head:

  • LOG_TAG is just the first parameter I pass to the Android Logging subsystem. I dislike seeing any sort of magic numbers in my code, even string literals!
  • mContext is the application context, it's necessary to start the DiscoveryAgent (as well as virtually anything else in Android).
  • mRobot is the robot I'm getting. You'll see later on that this implements IRobot, and pretty much just provides you with a ConvenienceRobot. I highly recommend that your final application does not downcast IRobot (this and any other form of reflection should generally be discouraged in any code), but I wrote this tutorial the moment I had the basic Android lifecycle under control!
  • mRobotConnectionHandlers is, once again, our friendly neighborhood vector of callbacks.
Whew, that was a mouth... err... keyboard full! Let's get to acquiring our DiscoveryAgent:


 public OllieRobotManager(Context context) {
  mContext = context;
  mDiscoveryAgent = DiscoveryAgentLE.getInstance();
  mDiscoveryAgent.addRobotStateListener(this);
 }

As you can see, we're opting for a DiscoveryAgentLE. This is the DiscoveryAgent to use for Ollie. We're also choosing to add ourselves as a state listener, but I won't get to that implementation until the end of this tutorial (you know, save the best for last and all that jazz).

There's pretty much no reason for me to even post this, other than to save you from tearing your hair out for a second when you hit build and see nothing but errors:


 @Override
 public void addRobotConnectionHandler(IRobotConnectionHandler robotConnectionHandler) {
  mRobotConnectionHandlers.add(robotConnectionHandler);
 }

As you can guess, we need to register our event handlers.

So our next task is to start discovery.


 @Override
 public void startDiscovery() {
  try {
   mDiscoveryAgent.startDiscovery(mContext);
  } catch (DiscoveryException e) {
   Log.e(LOG_TAG, "Failed to start discovery!");
   e.printStackTrace();
  }
 }

I would like to handle this error better, but even the example code in the SDK does this! The most I could really do is add a boolean to the return type if it fails to start discovery, but that's a task for future me. Future me doesn't like past me...
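If future me ever does get around to it, the change could look something like this self-contained sketch (Agent and DiscoveryException here are stand-ins I made up, not the SDK's types):

```java
// Sketch: surface the failure to the caller instead of swallowing it.
// Agent and DiscoveryException are illustrative stand-ins for the SDK types.
public class DiscoveryStarter {
    public static class DiscoveryException extends Exception {}

    public interface Agent {
        void startDiscovery() throws DiscoveryException;
    }

    // Returns true if discovery started, false if the SDK threw.
    public static boolean tryStartDiscovery(Agent agent) {
        try {
            agent.startDiscovery();
            return true;
        } catch (DiscoveryException e) {
            // The real manager would Log.e(LOG_TAG, ...) here.
            return false;
        }
    }
}
```

Callers can then branch on the return value instead of silently carrying on.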

This next method is self-explanatory:


 @Override
 public void stopDiscovery() {
  mDiscoveryAgent.stopDiscovery();
 }

Simply stop discovery when we want to stop discovery.

This next method is pretty important. The SDK doesn't have a "disconnect everybody" call at the moment, but you really should perform this task to maintain good citizen status amongst your users.


 @Override
 public void disconnectAllRobots() {
  for (Robot robot: mDiscoveryAgent.getConnectedRobots()) {
   robot.sleep();
  }
 }

As you can see, we go through and ensure that everyone's disconnected. I do this for my own self-assurance that, even if one robot slips through and connects without me taking note, everyone is put to sleep when my app exits (you can connect to more than one robot).

The other item of note is my choice of sleep() over disconnect(). disconnect() will terminate your connection to the robot, but the robot will be left on (and colored magenta) afterwards. To your users, this will look like an error. If you were to call sleep() then disconnect(), you'd still be left in the disconnected-but-on state due to the way threading and message handling works. Do not fret: when the robot goes to sleep, you'll get a disconnection message nonetheless!

Now time for the actual connection! Be warned, this is quite the mouthful (keyboard full?):


 @Override
 public void changedState(Robot robot, RobotChangedStateNotificationType robotChangedStateNotificationType) {
  switch (robotChangedStateNotificationType) {
   case Online:
    mRobot = new RobotWrapper(new Ollie(robot));
    notifyRobotConnected(mRobot);
    break;

   case Disconnected:
    notifyRobotDisconnected(mRobot);
    mRobot = null;
    break;
   }
  }

 private void notifyRobotConnected(RobotWrapper robot) {
  for(IRobotConnectionHandler connectionHandler: mRobotConnectionHandlers) {
   connectionHandler.robotConnected(robot);
  }
 }

 private void notifyRobotDisconnected(RobotWrapper robot) {
  for(IRobotConnectionHandler connectionHandler: mRobotConnectionHandlers) {
   connectionHandler.robotDisconnected(robot);
  }
 }

Despite being a wall of text, I'm sure you can easily tease apart the meaning. I'll start at the bottom as that's the easiest: notifyRobotConnected and notifyRobotDisconnected simply tell all our listeners what's going on!

So, now let's jump up into changedState. As you may recall, this method is invoked whenever the state of a connecting robot changes. There are many more states that can be reported (and if you want to make a fancy connection screen, you'll care about all of them), but Online and Disconnected are the two most important.

The Online case fires after we've successfully connected to a robot; it tells us that we have a new buddy we can start talking to! I quickly wrap it up in something that conforms to the IRobot interface (an empty interface for now), cache it, and tell the world about my shiny new present!

Disconnected is raised whenever a robot disconnects. We forget about the robot we've been maintaining a connection to, and tell the world about our tragic loss.

And that's it! You can connect to a robot! Well, outside of my simple RobotWrapper:


 public class RobotWrapper implements IRobot {
  private final ConvenienceRobot mRobot;

  public RobotWrapper(ConvenienceRobot robot) {
   mRobot = robot;
  }

  public ConvenienceRobot getRobot() {
   return mRobot;
  }
 }

And I suppose I should actually show you where to drop this into an activity...

Activity

And now for the grand finale! You can shove this right into your Activity from the last tutorial. A quick breakdown of what I'll show you:
  • in onResume we'll connect to a robot. I know I told you that you'll rarely do this by the end of your project, but this is a lowly tutorial!
  • in onPause, we'll kick off the pausing logic we wrote previously.
  • when a robot connects, we'll turn it green. This is simply because green is the best color.
  • when a robot disconnects AND we're not paused, we'll find a new one!
Let's start by creating the variables I first mentioned:


 private IRobot mRobot;
 private BoppitRobotProvider mRobotProvider;

 private boolean mPaused;

And starting to work in onCreate:


 @Override
 protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.activity_connection);

  mRobotProvider = new BoppitRobotProvider(new OllieRobotManager(this));
  mRobotProvider.addConnectionHandler(...

But I'll hold off on actually handling the robot connection for the moment.

Our onResume is simple:


 @Override
 protected void onResume() {
  super.onResume();
  mPaused = false;

  IRobot robot = mRobotProvider.getRobot();
  if (robot != null) {
   setRobot(robot);
  }
  else {
   mRobotProvider.findRobot();
  }
 }

I check to see if we have a robot already (we shouldn't in this app, but imagine if we were able to share this BoppitRobotProvider between activities...), then set it! Otherwise, it's time to start looking for one. Also, we must remember to store the fact that we're no longer paused.
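The branch boils down to a tiny decision function. Here's a self-contained sketch of that logic (the names are hypothetical, not from the SDK):

```java
// Illustrative sketch of the onResume branch above: reuse a robot the
// provider still holds, otherwise start discovery.
public class ResumeDecision {
    public enum Action { USE_EXISTING, FIND_ROBOT }

    // robotFromProvider stands in for mRobotProvider.getRobot().
    public static Action onResume(Object robotFromProvider) {
        return (robotFromProvider != null) ? Action.USE_EXISTING : Action.FIND_ROBOT;
    }
}
```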

This brings us to onPause. There isn't much to be done here:


 @Override
 protected void onPause() {
  super.onPause();
  mPaused = true;

  mRobotProvider.handleOnPause();
 }

As you can see, we just store the fact that we are paused and call into all that fancy code we wrote earlier.

Of course, you're probably wondering why we're storing whether or not we're paused, as well as what I hid behind the "..." above. So, let's finish adding our connection handler!


  mRobotProvider.addConnectionHandler(new BoppitRobotProvider.IRobotConnectionHandler() {
   @Override
   public void robotConnected(IRobot robot) {
    setRobot(robot);
   }

   @Override
   public void robotDisconnected(IRobot robot) {
    setRobot(null);

    if (!mPaused) {
     mRobotProvider.findRobot();
    }
   }
  });

robotConnected is relatively straightforward, we just set the robot when we get one.

robotDisconnected is where the fun is. We forget about our robot, and try to find a new one. But here's where it gets tricky: if you remember back to the onPause handler we wrote, we simply put all our robots to sleep when the Activity pauses. Some time after that, the robot goes to sleep and we get a callback saying that it's gone. If we're paused, we don't want to try to connect again (the SDK will let us until the activity is actually destroyed). So we store a flag to remind ourselves that we're no longer interested in connecting.
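Stripped of the Android plumbing, the guard looks like this (a minimal sketch with made-up names; the counter stands in for findRobot() calls):

```java
// Self-contained sketch of the "don't reconnect while paused" guard.
public class ReconnectGuard {
    private boolean paused = false;
    private int discoveryStarts = 0;

    public void onPause()  { paused = true; }
    public void onResume() { paused = false; }

    // The disconnect callback may arrive after onPause; only search again
    // if the Activity is still in the foreground.
    public void onRobotDisconnected() {
        if (!paused) {
            discoveryStarts++;  // stand-in for mRobotProvider.findRobot()
        }
    }

    public int getDiscoveryStarts() { return discoveryStarts; }
}
```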

And that's everything!

Ok, fine, I'll also turn the robot green.


 private void setRobot(IRobot robot) {
  mRobot = robot;
  if (mRobot != null) {
   ConvenienceRobot convenienceRobot = ((OllieRobotManager.RobotWrapper)robot).getRobot();
   convenienceRobot.setLed(0.f, 1.f, 0.f);
  }
 }

As I stated before, the IRobot interface still needs to be fleshed out or replaced. We'll see where my unit tests take me as I get ready to make a game.

Happy hacking, and may the source be with you!

Friday, January 16, 2015

Getting Started with Ollie and Android

Welcome internet denizens! I am an engineer on the Ollie project, and an avid supporter of the maker movement. I've decided to allocate some of my time to ensuring that you can all jump on the Bluetooth LE robot revolution! In particular, I plan on making a project entirely in our public SDK to work out any idiosyncrasies people on the outside might run into. In the process, I want to ensure that the public can follow in my footsteps.

Internally, we can only work on so much software. We don't even scratch the surface of what's possible with our robots! For that, I hope to see the creative juices of the internet kick into action. Will you make Ollie jump in excitement when you walk into your house? Maybe he's the handlebars of a motorcycle, or even the flight yoke of an airplane.

All great exploits have a humble beginning. To get started, I'd recommend acquiring the following packages:

For this tutorial, I will walk you through the basic steps necessary to connect to a robot. It does not handle any Android lifecycle events, reconnecting, or any other topics you may want before shipping your first Ollie project. If this post gets a good reception, I'll continue to update with my progress through a game.

To start out, let's create a project in Android Studio. I'll give mine a cool name like "GettingStarted".
Then I'll say I'm targeting Android 4.4. Talking to members of the SDK team, 4.3 should be supported and 4.2 may work. All the devices I own are on 4.4, and it's what I test on internally most often, so I'm choosing that to minimize any odd/esoteric issues with earlier BLE stacks.
For now I'm going to just use a "Blank Activity." Whatever you choose is up to your final app!
Let's choose a default name like "GettingStartedActivity."
The file we'll be most interested in is our fancy new "GettingStartedActivity.java."

But before we write anything, we need to bring in "RobotLibrary.jar" to communicate with Ollie. To do this, right click on "app" and select "Open Module Settings."

Then click the "+" in the upper left and select "Import .JAR or .AAR Package."

Locate the "RobotLibrary.jar" that you downloaded before, and click "Finish."

Now you should have the "RobotLibrary" module in Android Studio. The last thing we want to do is make our project depend on it. For those of you just starting out, this means that our project needs RobotLibrary to work (which, if you plan on driving Ollie, it does). Select "app" under "Modules" on the left. Then select the "Dependencies" tab and click the "+" on the bottom to setup a "Module dependency."

Choose ":RobotLibrary" and you should be ready to get to the fun part!

Now I'll cover the basics on how to connect to an Ollie. I'd like to stress that I won't cover handling the application lifecycle. I'll include some notes at the end, and leave it as an exercise to the user until I'm bitten by the literary bug again.

To get started, let's put a "DiscoveryAgent" and "ConvenienceRobot" into our main activity. The former will find a robot for us, and the latter lets us send commands to it.



 ...
 import com.orbotix.ConvenienceRobot;
 import com.orbotix.common.DiscoveryAgent;

 public class GettingStartedActivity extends ActionBarActivity {
  private DiscoveryAgent _discoveryAgent;
  private ConvenienceRobot _robot;
 ...

Now I'm going to create two helper classes. The first will just listen for discovery events; we'll simply log that we're seeing robots. If we were connecting to Sphero, we'd use this to connect to the robot. Fortunately for us, Ollie will automatically connect when you get close enough!



 private DiscoveryAgentEventListener _discoveryAgentEventListener = new DiscoveryAgentEventListener() {
  @Override
  public void handleRobotsAvailable(List<Robot> robots) {
   // for LE robots, we connect automagically
   Log.i("Connecting", "Found " + robots);
  }
 };

This next class is pretty important: it will actually connect to Ollie and change his color to green. I'll cover "stopDiscovery()" in a second, so don't worry too much.


 private RobotChangedStateListener _robotStateListener = new RobotChangedStateListener() {
  @Override
  public void changedState(Robot robot, RobotChangedStateNotificationType robotChangedStateNotificationType) {
   switch (robotChangedStateNotificationType) {
    case Online:
     Log.i("Connecting", robot + " Online!");
     _robot = new Ollie(robot);
     stopDiscovery();

     _robot.setLed(0.f, 1.f, 0.f);
     break;
   }
  }
 };

Once we register this listener, it will call back with every state change. For now we just care about "Online." You can also listen for things like "Disconnected," "Offline," "Connecting," and "FailedConnect" to more robustly handle the various connection states. Once a robot comes online, we set its LED to green with the setLed call. The parameters are red, green, and blue, and valid inputs range from 0 to 1. So setLed(0, 1, 0) will make Ollie glow green. If we chose (1, 1, 1) it would be white, and (0, 0, 0) would be off (not a very exciting test).
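Since setLed expects each channel in the 0-to-1 range, a small clamping helper (hypothetical, not part of the SDK) can protect against computed colors drifting out of range:

```java
// Hypothetical helper: clamp a channel value into the [0, 1] range that
// setLed expects, so animated or computed colors can't go out of bounds.
public class LedColor {
    public static float clamp01(float v) {
        return Math.max(0.f, Math.min(1.f, v));
    }
}
```

Then setLed(LedColor.clamp01(r), LedColor.clamp01(g), LedColor.clamp01(b)) is always safe to call.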

Now, let's actually connect to the robot. We'll add the following code to both start looking for a robot and to stop.


 private void startDiscovery() {
  _discoveryAgent = DiscoveryAgentLE.getInstance();
  _discoveryAgent.addDiscoveryListener(_discoveryAgentEventListener);
  _discoveryAgent.addRobotStateListener(_robotStateListener);
  try {
   _discoveryAgent.startDiscovery(this);
  } catch (DiscoveryException e) {
   Log.e("Connecting", "Failed to start discovery because " + e);
   e.printStackTrace();
  }
 }

 private void stopDiscovery() {
  _discoveryAgent.stopDiscovery();
  _discoveryAgent.removeDiscoveryListener(_discoveryAgentEventListener);
  _discoveryAgent.removeRobotStateListener(_robotStateListener);
  _discoveryAgent = null;
 }

Some important lines to note: we call DiscoveryAgentLE.getInstance() to look for Ollie; we could change this to "DiscoveryAgentClassic" to look for Sphero. Additionally, we're registering those helper classes we made earlier with addDiscoveryListener and addRobotStateListener. Without these calls, we wouldn't get any notifications.

Once we have everything set up, we call startDiscovery. One last note: it is ideal to remove all the listeners you registered before you shut down the app, but you won't get the disconnect callback if you unregister the _discoveryAgentEventListener. In your final code, you'd want to call _discoveryAgent.stopDiscovery() when a robot connects, and call the remove functions when your app is done with Ollie.

We now have all the code to connect to a robot, so let's connect! Add "startDiscovery()" to onCreate like so:


 @Override
 protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.activity_getting_started);

  startDiscovery();
 }

You're almost ready to hit run! We just need to do one more thing. In your "AndroidManifest.xml," you need to tell Android that we want to use BluetoothLE.


 <uses-feature android:name="android.hardware.bluetooth_le" android:required="true"/>
 <uses-permission android:name="android.permission.BLUETOOTH"/>
 <uses-permission android:name="android.permission.BLUETOOTH_ADMIN"/>

Of course, your users may start asking why you need all these permissions, so let's break them down real fast:
  • "android.hardware.bluetooth_le" simply lets us use BLE rather than Bluetooth Classic. This is how Ollie communicates with your device.
  • "android.permission.BLUETOOTH" is pretty self-explanatory. This lets us use Bluetooth.
  • "android.permission.BLUETOOTH_ADMIN" sometimes scares people. This simply says that we'll discover and pair with devices. Many Bluetooth devices like keyboards and speakers are paired with by the OS outside of our app. This is not the case with Ollie! He'll sit quietly for months looking for a friend to play with before your app comes by and wakes him up.

Now you should be up and running with an Ollie. Some things to try on your own:
  • Disconnect from Ollie when you enter the background
  • Support reconnecting to Ollie when he becomes disconnected (you can force a disconnect from the robot by plugging him in)
  • Try changing the colors a bit
  • Get him driving
If you have the entire Git repo pulled down, I recommend checking out the DriveSample for examples on how to use the SDK.

Happy Hacking!

Edit:
One more permission is required; the app will sometimes crash if you forget to add it.


<uses-permission android:name="android.permission.INTERNET"/>

This is for collecting stats, which can be used to improve the SDK as well as the general Ollie experience.
