The Great Gif Exchange Revived, Week 1

I’ve done a valiant job of avoiding getting anything done for a long time. But I’ve caught the bug again and got several important things finished and working, including:

  • Unified logging – purged std::cout from the Galavant library in favor of the awesome PLog. This allows me to pipe all output through Unreal when using GalavantUnreal, which is nice
  • Added a very simple testing framework, Catch, which makes writing and testing new things easier
  • Finally, finally got my build system in order. Using Jam has been a bit challenging due to lack of documentation. It’s now all sorted out and builds are quick
  • Hierarchical Task Networks are now fully hooked up and functional. This is demonstrated in this week’s gif. It’s still heavy on boilerplate, so I’ve still got work ahead of me cutting that down
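
The unified-logging idea can be sketched in plain C++. This is just an illustration of the pattern, not Galavant’s actual code; the real implementation uses PLog, and all the names here are hypothetical:

```cpp
#include <functional>
#include <iostream>
#include <string>

// Sketch of the unified-logging pattern (illustrative only; Galavant's real
// implementation uses PLog). All output funnels through one logger whose
// sink is swappable, so a standalone build can print to stdout while
// GalavantUnreal redirects everything into Unreal's log.
struct Logger
{
	using Sink = std::function<void(const std::string&)>;

	// Default sink: plain stdout, like the old scattered std::cout calls,
	// but now replaceable in one place.
	Sink sink = [](const std::string& message) { std::cout << message << '\n'; };

	void Log(const std::string& message)
	{
		if (sink)
			sink(message);
	}
};
```

Swapping the sink is all it takes to pipe output through Unreal (or capture it in tests) instead of writing to the console.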

It has been a very long time since I posted a gif or exchanged one with my colleague, but I think I’m going to try to start that up again. Here’s the .gif:

This .gif shows twenty agents moving into four groups. Each agent was assigned a Goal to find a “bus stop” and go to it. They then formulated a plan using the hierarchical task network system to locate the position of the nearest bus stop and travel to it.
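
The plan-formulation step can be sketched as a tiny HTN-style decomposition. The task names here are hypothetical, not Galavant’s actual classes; a real planner would also check methods against world state:

```cpp
#include <map>
#include <string>
#include <vector>

// Toy HTN decomposition: compound tasks expand via a method table until only
// primitive tasks remain, and the resulting flat list is the agent's plan.
using Methods = std::map<std::string, std::vector<std::string>>;

void Decompose(const std::string& task, const Methods& methods,
               std::vector<std::string>& plan)
{
	auto foundMethod = methods.find(task);
	if (foundMethod == methods.end())
	{
		// No decomposition known: treat as a primitive task to execute
		plan.push_back(task);
		return;
	}

	for (const std::string& subtask : foundMethod->second)
		Decompose(subtask, methods, plan);
}
```

For the bus-stop goal, a method table like `{"GoToBusStop": ["FindNearestBusStop", "MoveTo"]}` would decompose into exactly the two-step plan the agents in the gif execute.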

This is a very simple example. In the future, agents will have much more complicated plans which can accomplish complex goals.

I’m not entirely sure what I’ll work on next. I’ve been itching to work on the world, but I can continue making progress on the AI behavior/gameplay side without it.

I have been having doubts about whether or not I should continue developing Galavant. I’m going to continue working on it while I do some soul-searching on whether I want to spend any more of my life on this project.

An Update on Galavant’s AI

I made it my goal to finish the design of Galavant’s AI system by the end of the month, but I failed to do so. I’m going to continue working on the design through November.

I’ve been investigating Goal Oriented Action Planning, Hierarchical Task Networks, and The Sims’ AI. All three of these systems solve a similar problem to what I’m trying to solve with Galavant. Right now, I’m thinking Galavant’s design will be a combination of the three, though I can’t say for certain yet what that combination will look like.

This is the foundation of Galavant’s gameplay, so I don’t feel too guilty about taking my time with the design. Once I decide what I’m going to do and implement it, things should greatly speed up and start becoming interesting, so sit tight!

The Great Gif Exchange, Week 15

I didn’t get a whole lot done in the last couple of weeks, but I did finally get something I should’ve had from the start: the player controller!

The player character/controller was much more frustrating to add than I thought it would be. This tutorial was very helpful.


I fixed some problems with the chunk system that you can see in the .gif (overlapping chunks, non-grid-aligned placement), but despite this, I still get gaps and tears between voxels.

I’m thinking I might discard the voxel system and instead use traditional height-map terrain. Terrain modification was never part of the design for Galavant, and neither was fully three-dimensional terrain (cliffs and overhangs aren’t essential to the vision, although I’ll have to do something special if I want caves in the game).

Building will still be a part of the game, though I am not sure what tech I will use to facilitate that (whether it be voxels, tile-based, model-based, or the really awesome Dreams type approach).


I’ve made it my goal to finish the design for my macro AI system by the end of October 2016. It’s the most critical part of Galavant, so it’s very important that I get it right.

Galavant has a very ambitious goal: simulate hundreds (or maybe thousands) of agents continuously, making its world a truly dynamic place. The entirety of Galavant’s gameplay is based on emergent behaviors that result from this. Agents will start wars, cannibalize each other, spread plagues, build cities, and fall in love. The player is just another person in the chaotic world (though there will be weights etc. to make agents involve the player in some way).

I’ve been looking at methods like Goal-Oriented Action Planning and Hierarchical Task Networks to accomplish this task. I plan on posting my technical design documents at the end of the month and beginning to write the code in November.

Once the macro AI system is in, things are really going to pick up. I’m excited to finally be reaching this point in development.

In case you weren’t aware, I’ve been streaming development of Galavant. As of writing this post, I’ve logged forty-two hours. My stream tends to be pretty quiet, but I enjoy having some company during long days of coding. It also helps keep me productive, because if I’m streaming live, I can’t just watch YouTube all day.



The Great Gif Exchange, Week 13

Work has been really busy, so I’ve been burnt out on working on Galavant. Thankfully, the game at work is now live, so things are much slower.

It’s very surreal to have worked on and released my first professional, actual-game-industry game. It was a console port, so we didn’t build it from the ground up. Total development time was something like nine or ten months, not counting the continual post-launch maintenance. I was part of a team that started with only three programmers and expanded to seven at peak development.

Anyways, here’s my simple .gif for week 13 (I skipped many weeks due to crunch at work):

This is demonstrating my new Entity Component System, which was designed to replace the Object Component System. It’s much more flexible in terms of managing memory and requires less boilerplate.

The entities in the .gif have only a single component attached to them. The component handles actor creation and movement, which currently just has them fly around randomly.
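
Here’s a minimal sketch of the entity component system idea (the names are hypothetical, not Galavant’s actual types): entities are plain IDs, each component type lives in its own pool, and a system iterates only over the data it needs, which is where the memory flexibility and reduced boilerplate come from.

```cpp
#include <cstdint>
#include <unordered_map>

// Entities are just IDs; they own no data or behavior themselves.
using Entity = std::uint32_t;

struct Position { float x = 0.f, y = 0.f; };
struct Velocity { float x = 0.f, y = 0.f; };

// A system stores component pools and updates every entity that has the
// components it cares about (like the single movement component in the gif).
struct MovementSystem
{
	std::unordered_map<Entity, Position> positions;
	std::unordered_map<Entity, Velocity> velocities;

	void Update(float deltaSeconds)
	{
		for (auto& [entity, velocity] : velocities)
		{
			auto foundPosition = positions.find(entity);
			if (foundPosition == positions.end())
				continue; // Entity has no position; nothing to move
			foundPosition->second.x += velocity.x * deltaSeconds;
			foundPosition->second.y += velocity.y * deltaSeconds;
		}
	}
};
```

Adding a new behavior means adding a component pool and a system, rather than touching a monolithic object hierarchy.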

The Great Gif Exchange, Week 7 (+SURPRISE)

Here’s my .gif for week 7:

I got the ProceduralMeshComponent from the wiki to work. I didn’t quite figure out how to get reliable collisions (weird things, like chunks spawned in the editor not lining up).

I will have to redo my texturing scheme, because Marching Cubes doesn’t work with just UV coordinates. I’m thinking I need to use triplanar texturing, so I’m going to have to figure out how to do that.
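
For reference, the core of triplanar texturing is blending three axis-aligned texture projections by how much the surface normal faces each axis. A minimal weight computation might look like the following; this is a CPU-side sketch with hypothetical names, and the real thing would live in a material or shader, which would also compute the three sets of planar UVs from world position:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Compute how strongly to blend the X-, Y-, and Z-projected texture samples.
// The more the normal points along an axis, the more that axis's planar
// projection contributes; the weights are normalized so they sum to 1.
Vec3 TriplanarWeights(Vec3 normal)
{
	Vec3 weights{std::fabs(normal.x), std::fabs(normal.y), std::fabs(normal.z)};
	float sum = weights.x + weights.y + weights.z;
	return {weights.x / sum, weights.y / sum, weights.z / sum};
}
```

A flat, upward-facing surface gets weight 1 on the Y (top-down) projection, while a 45-degree slope blends two projections evenly, which is what hides the UV seams Marching Cubes creates.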

I couldn’t for the life of me get movement to work how I wanted. I really need to sink some time into that so I can actually have something game-y.

BONUS! World Tree Design Doc

Here are my cut-and-pasted notes on how I’m going to design the world representation for AI:

For the large scale AI, everything is in a tree. Something like this:

  • world
    • region
      • town
        • building
          • person
            • merchant
              • item
                • food
                  • bread
                    • steal
                    • buy
                    • pay 10 money
                    • trade
                • building materials
              • job
                • money
                  • get money
            • cannibalism
              • food
        • building
          • person
            • blacksmith
              • item
              • job
              • employ
            • cannibalism
              • food

Every node has a difficulty associated with it, which can be relative to the agent searching the tree (e.g. thieves are open to stealing, cannibals don’t care about eating people, and species tend to do things in certain ways).

Representing the entire problem like this makes it so much simpler: just fill in all the nodes, plug in A*, build actions from the path, and you’re done (obviously, this is still going to be fucking hard).

Agents could think about the future by having surplus as an additional heuristic on nodes, so they’d buy more food than they immediately need instead of getting hungry all the time.

Anyways, I’m super excited about this, because it breaks down the problem really well. It’ll take work, but I want to make it so that voxel data isn’t required for low-LOD AI simulation: the AI would only use the tree for their positions and actions. This would be huge in terms of memory and performance savings. However, converting from low LOD to high LOD when the player gets near could be difficult (e.g. if an AI “built” something in low LOD, it may not have been possible to build that thing in high LOD).