bitbrain gamedev. devops. software design.

How to automatically publish your game to itch.io

This tutorial explains how you can easily and automatically deploy your game to itch.io.

Motivation

Why should I even upload my games automatically?

This is a good question. The simple answer: during a gamejam you can fully concentrate on building your game. Especially when the deadline comes closer, you simply commit and push your changes to trigger an automatic deployment of your game to itch.io:

  1. make a code or asset change locally
  2. commit your change via Git and push it to Github
  3. TravisCI automatically picks up your change and builds your game
  4. TravisCI automatically pushes the new build to itch.io

This tutorial shows you how to do that!

Prerequisites

For this tutorial, we use the following technologies:

  • Github to version control our game code
  • TravisCI as a build agent to build our game
  • itch.io to host our game
  • Butler to upload our game builds to itch.io

Setup Github repository

If not already done, create a Github repository to host your source code:

[screenshot: creating a new Github repository]

Setup itch.io

Before we can start uploading our first game, we need to create an itch.io game project:

[screenshot: creating a new itch.io game project]

After your project is created, head over to your account settings to generate a new API key. This key is required so other services such as TravisCI are able to communicate with itch.io.

[screenshot: generating a new API key]

Prepare TravisCI deployment

Once the repository exists and itch.io is set up, we need to prepare TravisCI. This consists of the following steps:

  1. create deployment script
  2. commit and push .travis.yml
  3. prepare TravisCI project
  4. setup BUTLER_API_KEY

Create deployment script

This script takes your build artifacts and pushes them to itch.io. Create a new file called deploy.sh:

#!/bin/bash

set -e
set -o pipefail

if [[ -z "${BUTLER_API_KEY}" ]]; then
  echo "Unable to deploy! No BUTLER_API_KEY environment variable specified!"
  exit 1
fi

prepare_butler() {
    echo "Preparing butler..."
    download_if_not_exist http://dl.itch.ovh/butler/linux-amd64/head/butler butler
    chmod +x butler
}

prepare_and_push() {
    echo "Push $3 build to itch.io..."
    ./butler push "$2" "$1:$3"
}

download_if_not_exist() {
    # -o writes the download to the given file name
    if [ ! -f "$2" ]; then
        curl -L -o "$2" "$1"
    fi
}


project="bitbrain/mygame"
artifact="mygame.jar"
platform="windows-linux-mac"

prepare_butler

prepare_and_push "$project" "$artifact" "$platform"

echo "Done."
exit 0

This script first checks whether the environment variable BUTLER_API_KEY is defined. This variable can be set up within Travis and is required by itch.io to authenticate your game upload. Afterwards we define a bunch of helper functions, download the latest version of butler and upload the game with it. Please ensure you configure the correct project.

Setup .travis.yml

This file is required by TravisCI to understand how to build your game. For example, you can set up a Java environment (for Java games) or an Objective-C environment (for Unity games). TravisCI ensures that this environment is available and builds your game:

language: android
jdk:
  - openjdk8

android:
  components:
    # The BuildTools version used by your project
    - build-tools-26.0.2

    # The SDK version used to compile your project
    - android-26

script: 
  echo "this is my game" > mygame.jar

after_script:
  chmod +x deploy.sh && ./deploy.sh

Feel free to create a different .yml for Java, C++ or even Android! Read more about that in the official docs.
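For instance, a minimal configuration for a plain Java/Maven game might look roughly like this (a sketch; the JDK version and build command are assumptions and depend on your project):

```yaml
language: java
jdk:
  - openjdk8

# Build the game; replace with your actual build command
script:
  - mvn package -DskipTests

# Deploy the freshly built artifact to itch.io
after_script:
  - chmod +x deploy.sh && ./deploy.sh
```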

Prepare TravisCI project

Now we have to configure our TravisCI project. Head over to https://travis-ci.org, authenticate with your Github account and you should be able to import your Github project from there. Once imported, head over to the settings to configure environment variables:

[screenshots: head over to the TravisCI settings and add BUTLER_API_KEY as an environment variable]

Run the build

Congratulations! You successfully set up the pipeline. Let’s run the build to see how your game gets published automatically:

Preparing butler...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 19.6M  100 19.6M    0     0  31.6M      0 --:--:-- --:--:-- --:--:-- 31.6M
Push windows-linux-mac build to itch.io...
• For channel `windows-linux-mac`: pushing first build
• Pushing 16 B (1 files, 0 dirs, 0 symlinks)
✓ Added 16 B fresh data
✓ 86 B patch (no savings)
• Build is now processing, should be up in a bit.
Use the `butler status bitbrain/mygame:windows-linux-mac` for more information.
Done.

Your latest game version is now available on itch.io:

[screenshot: the uploaded game on itch.io]

Continuous Delivery with Travis and Github

I have released a new library on Maven Central. Let me explain how I did that.

Some years ago I started working on a project called braingdx. It is a gamejam framework based on libgdx, fully written in Java. At some point I decided to make the artifact available to a broader audience. As a result, I needed a deployment flow to automatically upload the .jar files to an artifact repository of my choice.

The silly approach

I just wanted to publish artifacts, but not on each commit. Instead, I decided to go for a multi-branch configuration like this:

[diagram: multi-branch git flow with master and deploy branches]

We have a master branch that we commit to. When we decide to release an artifact, we manually (locally) merge into deploy and push the changes; Travis CI will pick up the build, thanks to the configured .travis.yml file:

language: java

services:
  - docker

jdk:
  - openjdk7

cache:
    directories:
        - $HOME/.m2

branches:
  only:
  - deploy

env:
  global:
    - COMMIT=${TRAVIS_COMMIT::8}

before_install:
  - export CH_VERSION=$(docker run -v $(pwd):/chime bitbrain/chime:latest CHANGELOG.md version)
  - export CH_TEXT=$(docker run -v $(pwd):/chime bitbrain/chime:latest CHANGELOG.md text)
  - mvn versions:set -DnewVersion=$CH_VERSION
  - chmod +x deployment/deploy.sh

install:
  - ./deployment/deploy.sh

The configuration file has a before_install section. In there we use a tool written by me called chime. We run this tool as a Docker container to extract version and changelog information from the provided CHANGELOG.md. For example, given a file like this:

# Version 1.1

This is version 1.1 description.

* some patchnotes
* more patchnotes

# Version 1.0

This is version 1.0 description.

* some patchnotes
* more patchnotes

The resulting environment variables would look like this (after before_install execution):

> echo $CH_VERSION
Version 1.1
> echo $CH_TEXT
This is version 1.1 description.\n* some patchnotes\n* more patchnotes

Using this approach we can define versions within a CHANGELOG.md file, and the build automatically picks up the latest version from that file. We temporarily update the version of the library via mvn versions:set, using the version extracted from the changelog file.
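chime does this extraction for you; purely as an illustration, a similar version lookup could be sketched in plain shell (the function name latest_version is mine, and this assumes the newest `# Version` heading sits at the top of the file):

```shell
#!/bin/bash
# Extract "1.1" from the first "# Version 1.1" heading in a changelog file.
# Assumes the newest version is listed first, as in the example above.
latest_version() {
  grep -m1 '^# Version' "$1" | sed 's/^# Version[[:space:]]*//'
}
```

The result can then be compared against the latest git tag to decide whether a deployment is needed.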

Afterwards we run a deploy.sh script during the install stage:

mvn deploy \
    -DskipTests \
    --settings deployment/settings.xml

I configured a custom Nexus inside my settings.xml to push my artifacts to. Users of my library would then need to add the repository via a repository statement in their configuration.
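For reference, such a repository statement in a consumer’s pom.xml would look roughly like this (the id and URL are placeholders, not my actual Nexus):

```xml
<repositories>
  <repository>
    <id>custom-nexus</id>
    <url>https://nexus.example.com/repository/maven-releases/</url>
  </repository>
</repositories>
```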

Silly approach, lots of problems

The approach worked fine; however, it was not as refined as I had hoped:

  • my custom Nexus caused SSL Certificate issues on some Windows and Mac machines when trying to download dependencies
  • it is truly cumbersome to manually switch between master and deploy locally and merge all the time
  • the current multi-branch approach causes lots of merge commits (if we are not able to fast-forward)
  • sometimes you would forget to switch from deploy back to master locally and suddenly you would be committing on the wrong branch
  • not easy to have a mapping from version to commit history (missing tagging functionality)
  • Travis only builds the deploy branch, not the master branch. We never truly compile each commit on master nor run any tests

After putting some thought into it I came up with a much better, more lightweight approach.

One branch to rule them all

I eventually decided to get rid of the deploy branch after all. All commits should go to master and should be tested and/or deployed on Travis. I would keep the CHANGELOG.md version extraction and add additional checks to avoid deploying the same version twice:

[diagram: single-branch flow on master]

The new flow is executed whenever a new commit is pushed onto master:

  1. checkout from SCM
  2. extract version and changelog from CHANGELOG.md
  3. Sign artifacts - this is required in order to push artifacts to Maven Central
  4. Install/Deployment
    • verify whether the latest git tag differs from the extracted version; run the deployment when the version extracted from CHANGELOG.md is newer
    • when the versions do not differ, the version has already been deployed; instead, run unit tests and generate code coverage reports
  5. Upload additional artifacts
    • happens in the after_install stage
    • when the new version differs from the latest tag, create a new Github Release, which automatically creates a new tag with the current version
    • upload the latest Javadoc to Github

In the rest of this article I will explain how some of these steps work in detail.

Sign your Artifacts

In order to upload your artifacts to Maven Central you are required to sign them with a GPG signature. I recommend reading this tutorial to learn how to do that.

In the tutorial the author explains that we want to encrypt our codesigning.asc file to prevent strangers from stealing it. We do that by installing and using the travis CLI:

gem install travis
travis login
travis encrypt-file codesigning.asc

When running this I discovered that Travis would fail the build:

bad decrypt
gpg: invalid radix64 character AE skipped
gpg: invalid radix64 character 13 skipped
gpg: invalid radix64 character F5 skipped
gpg: invalid radix64 character BE skipped
gpg: invalid radix64 character C5 skipped
gpg: invalid radix64 character AF skipped
gpg: invalid radix64 character C8 skipped
gpg: invalid radix64 character 14 skipped
gpg: invalid radix64 character 82 skipped
gpg: invalid radix64 character DF skipped
...

What is going on?! I followed the tutorial step by step, and it simply did not want to work for me. After hours of desperation and crying on the floor I found something on Github: apparently, on my Windows 10 machine the travis encrypt-file operation is broken and produces a corrupted encryption. WOW! Thanks for that. How did I fix it? A little bit of Docker 🐳 for the win. Let’s create a Dockerfile:

FROM ubuntu
RUN apt-get update && apt-get install -y ruby ruby-dev gcc g++ make && gem install travis
VOLUME /test
WORKDIR /test
COPY codesigning.asc /test/codesigning.asc
ENV GITHUB_TOKEN=""
CMD ["bash", "-c", "travis login --github-token $GITHUB_TOKEN && travis encrypt-file codesigning.asc && cat codesigning.asc.enc"]

And then:

# Build our image
docker build -t encrypt-asc .
# Encrypt the file and capture the result
docker run --name encrypt-asc -e GITHUB_TOKEN=xxx encrypt-asc > codesigning.asc.enc
# Clean up the dirty mess!
docker rm -f encrypt-asc

After committing the codesigning.asc.enc file Travis was able to decrypt the GPG private key which is required to sign the artifacts.

Check whether the version has changed

In order to check if the version has changed I did the following during the before_install stage:

export LATEST_TAG=$(git describe --abbrev=0 --tags)

After that we can deploy or just run the tests, depending on the version:

if [ "$LATEST_TAG" != "$CH_VERSION" ]; then
    echo "Latest deployed version=$LATEST_TAG not equal new version=$CH_VERSION. Deploying..."
    mvn deploy \
        -Psign \
        --settings deployment/settings.xml
else
    echo "Skipping release! $LATEST_TAG already released to Nexus! Running tests..."
    mvn clean test -T4
fi

Pushing new release to Github

In order to push the new release automatically to Github, we do the following in the after_install stage:

cd $HOME
git config --global user.email "sirlancelbot@gmail.com"
git config --global user.name "Sir Lancelbot"
git clone --quiet --branch=master https://${GITHUB_TOKEN}@github.com/bitbrain/braingdx

# Replacing line endings in body
body=$(sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' <(echo "$CH_TEXT"))
json='{"tag_name":"'$CH_VERSION'","target_commitish":"'$TRAVIS_BRANCH'","name":"Version '$CH_VERSION'","body":"'$body'","draft":false,"prerelease":false}'

curl -X POST \
-u bitbrain:$GITHUB_TOKEN \
-d "$json" https://api.github.com/repos/bitbrain/braingdx/releases

This ensures that the latest release gets pushed (including the latest changelog content from CHANGELOG.md), and Github will automatically create a tag for us. Next time we run the pipeline it won’t deploy again, since the tag has been updated.

Uploading Javadoc to Github pages

Uploading Javadoc to Github pages is a little bit more tricky. I wanted to have the following requirements fulfilled:

  • each version is persisted in Github pages, e.g. /docs/1.0.0
  • the latest docs should be available via /docs/latest

# Create temporary directory
mkdir -p $HOME/docs
cd $HOME/braingdx
mvn versions:set -DskipTests -DnewVersion=$CH_VERSION -T4 && mvn javadoc:javadoc -DskipTests -T4
cd $HOME/docs
# Copy generated Javadocs into a temporary directory
cp -r $HOME/braingdx/core/target/site/apidocs/* $HOME/docs

# Cleanup
rm -rf $HOME/braingdx/*
cd $HOME/braingdx

# Checkout Jekyll branch and create new folder with new version
git checkout gh-pages
mkdir -p $HOME/braingdx/docs/$CH_VERSION
cp -r $HOME/docs/* $HOME/braingdx/docs/$CH_VERSION

# Copy also into "latest" docs
rm -rf $HOME/braingdx/docs/latest
mkdir -p $HOME/braingdx/docs/latest
cp -r $HOME/docs/* $HOME/braingdx/docs/latest

# Add everything and push!
git add -f *
git commit -m "Travis build $TRAVIS_BUILD_NUMBER - update Javadoc"
git push -fq origin gh-pages && echo "Successfully deployed Javadoc to /docs"

An example of the generated page can be seen on the braingdx Github pages.

Conclusion

The new flow gives me:

  • a single branch
  • every commit tested in Travis
  • deployments controlled via CHANGELOG.md
  • Github releases and tags created automatically
  • Javadoc generated automatically
  • on release, artifacts signed and pushed to Maven Central

Do you have feedback? Make sure to follow me @bitbrain_ on Twitter and @bitbrain on Github.

Level Generation in Mindmazer

This article shows how I generated levels for my libgdx game and how I applied seeding to generate them.

Today I want to talk about my project mindmazer. In this simple 2D puzzle game the player has to remember a certain path to progress to the next stage. When starting this project I had to decide whether to give the player a static list of predefined levels. After some time I decided against it and went for a procedural generation approach. In this article I am going to explain how these levels are generated.

The Level

A typical in-game level looks like this:

[screenshot: a simple mindmazer level]

You will notice that the shapes are quite simple and the path is easy to remember. The further you progress in the game, the more complex the levels become:

[screenshot: a complex mindmazer level]

How do we generate those levels? Well, first of all I am using libgdx for all my games. This Java library allows me to draw things on the screen and provides a framework to run my game. Unfortunately, this library does not give me an “out-of-the-box” level generator. Thus, I had to write an algorithm myself.

Biom Data

Let us first describe how a level should be defined. I did not want a “random” algorithm which just appends more cells in random directions. The result would be a randomly formed snake over which I wouldn’t have any control. I at least wanted control over various level aspects. Each level is composed of multiple parts, so-called biomes. A biom is defined in Java code. Let us take a look at a typical L-Shape:

x
x
x x

This L-Shape can be represented by a byte array:

byte[] L_SHAPE = {
   1, 0,
   1, 0,
   1, 1, 2
};

You might notice that there is an extra entry in the array:

byte numberOfColumns = L_SHAPE[L_SHAPE.length - 1];

Basically, we are telling our level generator to always treat the last entry of a byte array as the number of columns in this specific biom. This is all the input we need. We now have full control over which parts get used to compose a level.

Biom Conversion

We need to convert the input data (byte arrays) into a format the algorithm understands. This format is a so-called Biom class with the following properties:

  • byte[][] data the biom as a 2-dimensional byte array (without metadata)
  • int startX the x index on the biom where the player could possibly start
  • int startY the y index on the biom where the player could possibly start
  • int endX the x index on the biom where the player could leave
  • int endY the y index on the biom where the player could leave
  • int length the number of cells inside a biom
  • int width the width of a biom (number of cells)
  • int height the height of a biom (number of cells)

We then call a so-called BiomFactory which creates a Biom object for us:

Biom biom = biomFactory.create(L_SHAPE);

If you are interested in how this factory works internally, check out the code on Github.
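To make the convention concrete, here is a simplified sketch (the class and field names are my own, not the actual mindmazer code) of how such a factory could derive the grid plus the width, height and length properties from a flat byte array and its trailing column count:

```java
// Simplified sketch of a biom factory; the real implementation lives
// in the mindmazer repository on Github.
public class SimpleBiom {

  public final byte[][] data;
  public final int width, height, length;

  public SimpleBiom(byte[] flat) {
    // The last entry encodes the number of columns (the metadata convention).
    int columns = flat[flat.length - 1];
    int rows = (flat.length - 1) / columns;
    data = new byte[rows][columns];
    int cells = 0;
    for (int i = 0; i < flat.length - 1; i++) {
      data[i / columns][i % columns] = flat[i];
      if (flat[i] == 1) {
        cells++; // only filled cells count towards the path length
      }
    }
    width = columns;
    height = rows;
    length = cells;
  }

  public static void main(String[] args) {
    byte[] lShape = {
        1, 0,
        1, 0,
        1, 1, 2
    };
    SimpleBiom biom = new SimpleBiom(lShape);
    System.out.println(biom.width + "x" + biom.height + ", length=" + biom.length); // 2x3, length=4
  }
}
```

The real Biom additionally carries start/end coordinates, which can be derived from the grid in a similar fashion.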

Why I still use Java for gamedev

Yes, I use Java for gamedev, but why? Read it here.

There is a technology running on billions of devices every single day: Java. Many developers (beardy hipsters) claim this technology belongs to the past; game engines like Unity and Unreal Engine are the future! Still, there is one man who hasn’t lost hope. Every single day he is building real games, not written in C# or Javascript. Not even Python or scientific C++. He is just using Java. And this person is me.

[screenshot: an Unreal Engine scene]

This picture isn’t showing my living room. This is Unreal Engine. Playable in VR. With 60fps at 4K resolution! Awesome, right?

The simple alternative

Unfortunately I didn’t create this. But hey, I can show you something amazing I did back then, in Java:

[screenshot: one of my Java games]

Okay, I get you. Why am I doing this to myself? Why not just use a game engine like everyone else? Using any game engine of my choice would have so many advantages compared to this ancient crap I am doing:

  • complete 2D/3D editors for modeling
  • inbuilt physics, lighting and rainbow machine 🌈
  • amazing graphics, shader editors
  • almost no programming skillz required! Just drag and drop all your stuff!
  • and much more…

Well, the answer is quite simple: I like to remind myself every day where I started. My first game was written in Delphi Pascal. Back then I didn’t even know much about OOP or design patterns. The only thing I cared about was moving pixels on the screen. It was truly inspiring, and that hasn’t changed, even eight years later.

Getting started with gamedev

Many games have been completed by now, and the majority of those games are written in “pure” Java. Not exactly pure Java (I am using libraries to access OpenGL), but generally I can say that most games are 100% written by hand, without an actual game engine or predefined templating.

How do I do this? To make a game in Java you basically need three things:

  1. Java skills
  2. Basic understanding of math (ideally vectors, functions and matrices)
  3. Know how to set/move pixels on the screen and how to handle input

Points 1 and 2 are straightforward: if you basically know how Java works, you can write programs which do stuff for you:

public class HelloWorld {
  public static void main(String[] args) {
    if (args.length > 0 && args[0].equals("hey")) {
      System.out.println("Hello World!");
    } else {
      System.out.println("You didn't greet me! WOW!");
    }
  }
}

In addition you need to know math. For example you could write an immutable vector in Java with basic addition and subtraction functionality:

public class Vector {

  private final float x, y;

  public Vector(float x, float y) {
    this.x = x;
    this.y = y;
  }

  public Vector plus(Vector other) {
    return new Vector(x + other.x, y + other.y);
  }

  public Vector minus(Vector other) {
    return new Vector(x - other.x, y - other.y);
  }
}

You can do that with many different concepts, and in the end you have a small library implementing the math for your game. For example, turning a player around or calculating the distance between two game objects can be done simply by using vectors.
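To illustrate, here is a sketch (the class is renamed to Vector2 so it does not clash with the snippet above) that adds a length operation and derives the distance between two objects from it:

```java
// Sketch: extending the immutable vector with a length operation.
public class Vector2 {

  private final float x, y;

  public Vector2(float x, float y) {
    this.x = x;
    this.y = y;
  }

  public Vector2 minus(Vector2 other) {
    return new Vector2(x - other.x, y - other.y);
  }

  public float length() {
    return (float) Math.sqrt(x * x + y * y);
  }

  // The distance between two points is just the length of their difference.
  public static float distance(Vector2 a, Vector2 b) {
    return a.minus(b).length();
  }

  public static void main(String[] args) {
    // distance between a player at (0, 0) and an enemy at (3, 4)
    System.out.println(Vector2.distance(new Vector2(0, 0), new Vector2(3, 4))); // 5.0
  }
}
```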

Giving birth to your game

What about moving pixels on the screen? Java already has libraries like Swing or JavaFX, and I could have easily used them to write games. It seems crazy to go down the Java path for game development, but I am not completely stupid: these GUI libraries rely on software rendering by default, which means the CPU does all the work. All gamers have powerful graphics cards, and it would be absolutely mental not to use them. So we need a better solution if we want to make a highly performant 3D shooter or the next WoW.

Miguel, why not use an Engine then?

Beeep, wrong question! Why bother with a complex game engine when you can just include a small library which provides all those features for you?

  • complete 2D/3D editors for modeling
  • inbuilt physics, lighting and rainbow machine 🌈
  • amazing graphics, shaders
  • and much more…

Sounds familiar? Let me introduce you to libgdx:

[image: libgdx logo]

Yes, this library lets me do all this fancy game engine stuff, but in Java! Everything is rendered on my GTX 93849 Extreme Ultra HD Power 5000 graphics card (without an FPS limit, an initial empty game gets ~15000fps, whoops). Nowadays it takes me 30 minutes to create a simple 3D Snake game in Java. In under 200 lines of code. Without touching any game engine. You don’t believe me? Then you have to wait for an upcoming blog article.

Why libgdx?

This library allows me to combine it with my Java knowledge to write games the same way you would write a small Swing application. Furthermore, it is cross-platform compatible, so the Java game runs on Windows, MacOS, Linux, Android and even iOS!

Moreover, I really love Java. During the past years I have worked with several programming languages, frameworks and libraries; however, Java was always the easiest and simplest to work with. For example, C# is great as well. It even has features like delegates and properties which I really miss in Java. C++ is also great to work with: it compiles natively and has very low overhead compared to Java. A simple SDL game written in C++ (proc-gen, without assets) can be around ~40kb, while the same game in Java easily reaches 8MB in size (all required .jar libraries need to get packed into the fat jar). That’s roughly 200 times the size. Still, we’re living in 2017 and file size doesn’t matter anymore (mostly). In terms of resource management Java doesn’t do as well as C++, but as I said, our computers are monster machines; we just don’t care anymore.

Conservative and old-fashioned?

Do I enjoy Java too much? Probably yes. Could I save a lot of time when writing larger games by using an actual game engine? Maybe. Do I want to learn various game engines by heart? Definitely. I am not saying that I blindly refuse to do something different from Java. Regardless, it’s the most fun way to do what I love.

Do what you want, independent of technology, library or game engine. It doesn’t matter how you do it. It doesn’t matter how long it takes to get there. The only thing which matters is that you are doing what you love.

- Miguel

Terminal Setup on MacOS

You always wanted to know how I set up my terminal? That's what this post is about.

[screenshot: my terminal setup]

Today I would like to share with you my current terminal setup. It basically consists of the following parts:

The Terminal

The default terminal app on MacOS is definitely lacking in functionality. As xanderdunn has written on Medium, iTerm2 has huge advantages compared to the default terminal:

It’s a lot more customizable. For example, I can tell it to shut up and never show me warning dialogues when I’m closing a tab or quitting the app when there’s a running process. Customizability ends up being pretty important for serious developers who are always in the terminal.

Apart from customisation, iTerm2 allows me to split tabs vertically or horizontally natively (different from tmux, where it is implemented virtually). With tmux we could achieve similar behaviour; however, it would take much longer to set up, and even then it would still not be natively implemented. As a Mac user I have always strived for a fast and easy setup: when I change my machine I do not want to spend weeks customising my terminal. iTerm2 has lots of inbuilt features which could only be added to the default terminal by using plugins.

The Shell

I have used many shells, such as fish (and terminal multiplexers such as tmux). After all this time I eventually stuck with zsh, for several reasons:

  • amazing community
  • lots of customizations
  • syntax highlighting
  • lots of available plugins
  • inline auto-suggestions
  • paginated completion

To list just some of its great features.
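As an illustration, features like syntax highlighting and auto-suggestions come from community plugins that get sourced in ~/.zshrc; a sketch, assuming the plugins were cloned into ~/.zsh (install paths differ per setup):

```shell
# ~/.zshrc (sketch; the plugin install locations below are assumptions)
source ~/.zsh/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
source ~/.zsh/zsh-autosuggestions/zsh-autosuggestions.zsh

# enable paginated, menu-style completion
zstyle ':completion:*' menu select
autoload -Uz compinit && compinit
```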

The Theme

When I started programming I never thought about customising my terminal or my favourite IDE. After many years of trying different things I came to like dark themes. It’s 2017 and there are thousands of different themes out there. My favourite is still Monokai, probably best known from the legendary RAM-eating machine called Atom. Monokai has vibrant colours which help me to see things, even at night. It’s just preference, but I really love vibrant colours (and Tron).

I use two different things for the appearance: the Monokai Soda theme for colours and the sorin zsh theme for the terminal layout. Note that I made the colours of the Soda theme slightly brighter to increase the contrast.

Conclusion

I hope you like my setup. Happy coding!