Data compression - it’s a daunting subject. It is completely fundamental to our lives though very few of us actually understand it. We all use compression software such as 7-zip or gzip to compress our data, often compressing text files to 30% of their original size or less.

Compression is used heavily in games. Images, textures, geometry, and video are the most common compressed assets. In games, compression plays a critical role in ensuring that content can load quickly or fit on storage devices, and that your game state can be serialized into individual UDP packets for your network code in multiplayer games. Sophisticated data compression is also highly important in systems like our multi-user Unity scene collaboration tool, Scene Fusion, where it helps us enable critical features such as shared terrain editing and efficient network usage. It sounds like magic, but it is not as intimidating as it appears on the surface.

The most basic tenet of compressing data is: Don’t store bits of data unless you absolutely need them.

Developers have been performing the simplest form of compression, bit-packing, for decades now. Bit-packing is a simple concept: Use as few bits as possible to store a piece of data. When done well, it can significantly reduce your data size.

I’m going to start with a simple exercise to demonstrate the basic concept of compression by bit-packing. First, we need some data to pack. Here is a sample dataset that we want to compress, along with the minimum number of bits each value actually needs:

| Value (hex) | Bits needed |
| --- | --- |
| 0x0000FDEE | 16 |
| 0x000000A1 | 8 |
| 0x0000EF2F | 16 |
| 0x02A3120E | 26 |
| 0x00001147 | 13 |
| 0x000000F1 | 8 |
| 0x00000907 | 12 |
| 0x000013F7 | 13 |
| 0x000003A3 | 10 |
| 0x00000407 | 11 |
| 0x00000067 | 7 |
| 0x0000F25A | 16 |
| 0x0041FB7E | 23 |
| 0x075B1C78 | 27 |
| 0x0CCC20D6 | 28 |

Each value above is stored using a 32-bit unsigned integer, so the 15 values consume 60 bytes of space. The first principle of bit-packing is to assess whether packing will gain you anything at all. In the dataset above, the representation shows long strings of leading 0 bits on many of the values.

In this data sample, 73.3% of the values (11 of 15) can be stored using 16 bits or fewer. If the order of the values didn’t matter, you could just split the list into two: one that contains all the 16-bit values and another with the 32-bit values. However, if you do want to preserve order (and I want to write a longer blog post) you must adopt a different strategy.
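As a quick sanity check of that split, here is a short sketch in Python (rather than the post’s C#, purely for convenience) that counts how many of the sample values fit in 16 bits:

```
# The 15 sample values from the dataset above.
values = [
    0x0000FDEE, 0x000000A1, 0x0000EF2F, 0x02A3120E, 0x00001147,
    0x000000F1, 0x00000907, 0x000013F7, 0x000003A3, 0x00000407,
    0x00000067, 0x0000F25A, 0x0041FB7E, 0x075B1C78, 0x0CCC20D6,
]

# int.bit_length() gives the minimum number of bits needed for each value.
small = [v for v in values if v.bit_length() <= 16]
print(len(small), "of", len(values))                  # 11 of 15
print(round(100 * len(small) / len(values), 1), "%")  # 73.3 %
```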

The first thing we will need is a method to read and write values as a string of bits. I have implemented BitStreamReader and BitStreamWriter classes in C# as simple example implementations. Here is the code for reading and writing bits to streams:

```
using System.IO;

namespace BitPackingExamples
{
    class BitStreamWriter
    {
        private int bitsInBuffer = 0;
        private byte[] buffer = new byte[1];

        public BitStreamWriter()
        {
            // clear the buffer
            buffer[0] = 0x00;
        }

        // Writes the numBits lowest bits of inputBits, least-significant bit first.
        public int WriteBits(uint inputBits, int numBits, Stream outputStream)
        {
            if (numBits > 32) numBits = 32;
            int bitsWritten = 0;
            for (int i = 0; i < numBits; i++)
            {
                uint bit = (inputBits >> i) & 0x00000001;
                WriteToBuffer(bit, outputStream);
                bitsWritten++;
            }
            return bitsWritten;
        }

        // Flushes any partially filled byte to the stream.
        public void WriteFlush(Stream outputStream)
        {
            if (bitsInBuffer != 0)
            {
                outputStream.Write(buffer, 0, 1);
            }
            buffer[0] = 0;
            bitsInBuffer = 0;
        }

        private void WriteToBuffer(uint bit, Stream outputStream)
        {
            if (bitsInBuffer == 8) WriteFlush(outputStream);
            if (bit != 0) // only write 1 bits; the buffer is initialized to 0
                buffer[0] |= (byte)(bit << bitsInBuffer);
            bitsInBuffer++;
        }
    }

    class BitStreamReader
    {
        private int bitsInBuffer = 0;
        private bool endOfStream = false;
        private byte[] buffer = new byte[1];

        // Reads numBits bits, least-significant bit first, mirroring WriteBits.
        public uint ReadBits(int numBits, Stream inputStream)
        {
            uint bits = 0;
            for (int i = 0; i < numBits; ++i)
            {
                bits |= (ReadBit(inputStream) << i);
                // we have used up all the data!
                if (bitsInBuffer == 0 && endOfStream) break;
            }
            return bits;
        }

        private int FillBuffer(Stream inputStream)
        {
            if (inputStream.Read(buffer, 0, 1) == 1)
            {
                bitsInBuffer = 8;
                return 0;
            }
            endOfStream = true; // Read returned no bytes: the stream is exhausted
            return -1;
        }

        private uint ReadBit(Stream inputStream)
        {
            if (bitsInBuffer == 0)
            {
                FillBuffer(inputStream);
            }
            if (bitsInBuffer > 0)
            {
                uint value = (uint)(buffer[0] >> (8 - bitsInBuffer)) & 1;
                bitsInBuffer--;
                return value;
            }
            return 0;
        }
    }
}
```

These are pretty straightforward: You create a stream of some kind, be it a FileStream or MemoryStream, and then use these classes to read or write individual bits. This gives us the tools we need for the next step of packing: writing bit-sized headers and more compact forms of the given values.
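If you want to experiment with packing rules before committing to an implementation, the same LSB-first scheme can be mirrored in a few lines of Python. This is a rough sketch with hypothetical names (`BitWriter` and `BitReader` are my own, not part of the post’s code), but it round-trips bits the same way the C# classes do:

```
class BitWriter:
    """Accumulates bits LSB-first into a bytearray, mirroring BitStreamWriter."""
    def __init__(self):
        self.out = bytearray()
        self.cur = 0    # partially filled byte
        self.count = 0  # bits used in cur

    def write_bits(self, value, num_bits):
        for i in range(num_bits):
            self.cur |= ((value >> i) & 1) << self.count
            self.count += 1
            if self.count == 8:
                self.out.append(self.cur)
                self.cur = 0
                self.count = 0

    def flush(self):
        if self.count:
            self.out.append(self.cur)
            self.cur = 0
            self.count = 0
        return bytes(self.out)


class BitReader:
    """Reads bits back LSB-first, mirroring BitStreamReader."""
    def __init__(self, data):
        self.data = data
        self.pos = 0  # absolute bit position

    def read_bits(self, num_bits):
        bits = 0
        for i in range(num_bits):
            byte = self.data[self.pos // 8]
            bits |= ((byte >> (self.pos % 8)) & 1) << i
            self.pos += 1
        return bits


# round trip: a 16-bit value followed by a 4-bit value
w = BitWriter()
w.write_bits(0xFDEE, 16)
w.write_bits(0xA, 4)
data = w.flush()
r = BitReader(data)
print(hex(r.read_bits(16)), hex(r.read_bits(4)))  # 0xfdee 0xa
```

As in the C# version, bits are written least-significant first, so a partial final byte is padded with zero bits on flush.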

The first, and simplest, bit-pack is to adopt a bit-wise format with a 1-bit header followed by a known number of bits representing the value. The bit header works as follows: If it is set (1), then the value following it is encoded using 16 bits. If it is unset (0), the value following it is encoded using 32 bits. This means we are actually expanding each value that needs more than 16 bits from 32 to 33 bits, but we are simultaneously shrinking every value that fits in 16 bits by 15 bits! In our dataset, this encoding gives us 4*33 + 11*17 = 319 bits, which rounds up to 40 bytes, a reduction in size of 33%!

```
// here are the values we want to compress
static uint[] values =
{
    0x0000FDEE,
    0x000000A1,
    0x0000EF2F,
    0x02A3120E,
    0x00001147,
    0x000000F1,
    0x00000907,
    0x000013F7,
    0x000003A3,
    0x00000407,
    0x00000067,
    0x0000F25A,
    0x0041FB7E,
    0x075B1C78,
    0x0CCC20D6
};

// returns the minimum number of bits needed to represent v (at least 1)
static uint CountBits(uint v)
{
    uint count = 1;
    while ((v >>= 1) != 0) count++;
    return count;
}

// simple 1-bit header for a single division
static void Example1_SimplePack()
{
    const int MAX_BITS = 32;
    const int MIN_BITS = 16;
    MemoryStream packedData = new MemoryStream();
    BitStreamWriter bitWriter = new BitStreamWriter();
    // bit packing the values:
    // bit header 1, then value bits = 16; bit header 0, then value bits = 32
    foreach (uint v in values)
    {
        uint size = CountBits(v);
        // if there are 16 bits or fewer, write only 16+1 bits
        if (size <= MIN_BITS)
        {
            bitWriter.WriteBits(1, 1, packedData);
            bitWriter.WriteBits(v, MIN_BITS, packedData);
        }
        else
        {
            bitWriter.WriteBits(0, 1, packedData);
            bitWriter.WriteBits(v, MAX_BITS, packedData);
        }
    }
    bitWriter.WriteFlush(packedData);
    Console.WriteLine("Original size = {0}, packed size = {1}", values.Length * sizeof(uint), packedData.Length);

    // rewind our stream so we can read it back
    packedData.Seek(0, SeekOrigin.Begin);
    BitStreamReader bitReader = new BitStreamReader();
    uint[] output = new uint[values.Length];
    for (int i = 0; i < output.Length; ++i)
    {
        uint header = bitReader.ReadBits(1, packedData);
        switch (header)
        {
            case 1:
                output[i] = bitReader.ReadBits(MIN_BITS, packedData);
                break;
            case 0:
                output[i] = bitReader.ReadBits(MAX_BITS, packedData);
                break;
        }
        if (output[i] == values[i]) Console.WriteLine("Position {0} matches!", i);
    }
}
```

Not bad for such a small change. In many cases, a single-bit header like this is all you need; just make sure to choose a good division point. You are adding 1 bit to every value, which means you must save more than 2 bits on half of your values (or the equivalent) in order to get any compression.
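To double-check the size arithmetic for this scheme on our dataset, here is the calculation in Python (a sketch, not the post’s code):

```
# The 15 sample values from the dataset.
values = [
    0x0000FDEE, 0x000000A1, 0x0000EF2F, 0x02A3120E, 0x00001147,
    0x000000F1, 0x00000907, 0x000013F7, 0x000003A3, 0x00000407,
    0x00000067, 0x0000F25A, 0x0041FB7E, 0x075B1C78, 0x0CCC20D6,
]

# 1-bit header: 1 + 16 bits for values that fit in 16 bits, 1 + 32 bits otherwise.
total_bits = sum(1 + (16 if v.bit_length() <= 16 else 32) for v in values)
total_bytes = (total_bits + 7) // 8  # round up to whole bytes
print(total_bits, total_bytes)       # 319 40
```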

So what is going on here? Consider how the first two values are laid out in memory. Before packing, 0x0000FDEE and 0x000000A1 each occupy a full 32 bits, most of which are leading zeros. After packing, each becomes a single 1-bit header followed by 16 bits of value, 17 bits apiece. In the same 8 bytes of memory, you now have 30 more bits in which to pack the subsequent values. In this case, the next (16-bit) value would also be compacted into this same space, along with the first bits of the 32-bit value that follows.

If you look at the data supplied, you’ll notice that you can actually be more aggressive. The largest value only uses 28 bits. If I knew that I would never see values larger than 28 bits, I could just set my two encoded value types at 16 bits and 28 bits. Doing so on this dataset makes the compressed data 38 bytes long – we just saved another 2 bytes. But if the values were ever to change and include a 29- to 32-bit number, we would encode the wrong value.
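The 16/28 variant is easy to verify the same way (again a Python sketch over the same sample values):

```
values = [
    0x0000FDEE, 0x000000A1, 0x0000EF2F, 0x02A3120E, 0x00001147,
    0x000000F1, 0x00000907, 0x000013F7, 0x000003A3, 0x00000407,
    0x00000067, 0x0000F25A, 0x0041FB7E, 0x075B1C78, 0x0CCC20D6,
]

# 1-bit header: 1 + 16 bits for small values, 1 + 28 bits for everything else.
total_bits = sum(1 + (16 if v.bit_length() <= 16 else 28) for v in values)
total_bytes = (total_bits + 7) // 8
print(total_bits, total_bytes)  # 303 38
```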

Now, let’s suppose that I still want my bit-packing algorithm to support full 32-bit values, but also to compress the largest values in the current dataset. For that, we can use a more complex bit-header scheme with a variable-length header: 1 bit long for our most common value size, and 2 bits long for our less common ones. Here is how it works:

- For your smallest value size, encode a ‘1’ bit

- For your middle value size, encode a ‘0’ bit followed by a ‘1’ bit

- For your largest value size, encode a ‘0’ bit followed by a ‘0’ bit

And that’s that. The first bit in this example acts as an escape: if it is unset, it signals that another header bit follows.

If we look at our input data and do a bit of trial and error (or some non-obvious math), we can set three levels: 13 bits, 16 bits, and 32 bits. Our most common length is 13 bits or fewer, which occurs more than half the time (8 of the 15 values). This pack also leads to a compressed size of 38 bytes, but now it supports full 32-bit values as well!
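Here is the size arithmetic for the three-level scheme (a Python sketch of the calculation only):

```
values = [
    0x0000FDEE, 0x000000A1, 0x0000EF2F, 0x02A3120E, 0x00001147,
    0x000000F1, 0x00000907, 0x000013F7, 0x000003A3, 0x00000407,
    0x00000067, 0x0000F25A, 0x0041FB7E, 0x075B1C78, 0x0CCC20D6,
]

def encoded_bits(v):
    """Header + value bits under the 13/16/32 three-level scheme."""
    n = v.bit_length()
    if n <= 13:
        return 1 + 13  # header '1'
    if n <= 16:
        return 2 + 16  # header '0' then '1'
    return 2 + 32      # header '0' then '0'

total_bits = sum(encoded_bits(v) for v in values)
total_bytes = (total_bits + 7) // 8
print(total_bits, total_bytes)  # 302 38
```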

```
// variable-length headers: 1 bit for the most common size, 2 bits otherwise
static void Example2_2Divisions()
{
    MemoryStream packedData = new MemoryStream();
    BitStreamWriter bitWriter = new BitStreamWriter();
    const int MAX_BITS = 32;
    const int MED_BITS = 16;
    const int MIN_BITS = 13;
    // let's pack them into a stream!
    foreach (uint v in values)
    {
        uint size = CountBits(v);
        if (size <= MED_BITS)
        {
            if (size <= MIN_BITS)
            {
                // header '1', then 13 value bits
                bitWriter.WriteBits(1, 1, packedData);
                bitWriter.WriteBits(v, MIN_BITS, packedData);
            }
            else
            {
                // header '0' then '1' (written LSB-first as the value 2), then 16 value bits
                bitWriter.WriteBits(2, 2, packedData);
                bitWriter.WriteBits(v, MED_BITS, packedData);
            }
        }
        else
        {
            // header '0' then '0', then 32 value bits
            bitWriter.WriteBits(0, 2, packedData);
            bitWriter.WriteBits(v, MAX_BITS, packedData);
        }
    }
    bitWriter.WriteFlush(packedData);
    Console.WriteLine("Original size = {0}, packed size = {1}", values.Length * sizeof(uint), packedData.Length);

    // rewind our stream so we can read it back
    packedData.Seek(0, SeekOrigin.Begin);
    BitStreamReader bitReader = new BitStreamReader();
    uint[] output = new uint[values.Length];
    for (int i = 0; i < output.Length; ++i)
    {
        uint header = bitReader.ReadBits(1, packedData);
        switch (header)
        {
            case 1:
                output[i] = bitReader.ReadBits(MIN_BITS, packedData);
                break;
            // if the first bit is 0, a second header bit selects the size
            case 0:
                uint secondBit = bitReader.ReadBits(1, packedData);
                if (secondBit == 0)
                    output[i] = bitReader.ReadBits(MAX_BITS, packedData);
                else
                    output[i] = bitReader.ReadBits(MED_BITS, packedData);
                break;
        }
        if (output[i] == values[i]) Console.WriteLine("Position {0} matches!", i);
    }
}
```

Now, in some cases you may think: what about just encoding two numbers, the first being a bit count and the second being the value? That works too, and the logic turns out to be simpler: you always write a fixed-size 5-bit header storing a number ‘n’, followed by ‘n’ bits containing the value. In our example data there are 234 bits of significant data; adding 5 bits per value brings the total to 309 bits, which requires 39 bytes of storage. In this case there isn’t a gain over the previous scheme, but this particular scheme is by far the most flexible, giving you an automatic big win should you end up needing to store many more values that use 11 bits or fewer.
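Checking the explicit-header arithmetic the same way (Python sketch):

```
values = [
    0x0000FDEE, 0x000000A1, 0x0000EF2F, 0x02A3120E, 0x00001147,
    0x000000F1, 0x00000907, 0x000013F7, 0x000003A3, 0x00000407,
    0x00000067, 0x0000F25A, 0x0041FB7E, 0x075B1C78, 0x0CCC20D6,
]

# 5-bit length header followed by exactly bit_length(v) value bits per value
payload_bits = sum(v.bit_length() for v in values)
total_bits = payload_bits + 5 * len(values)
total_bytes = (total_bits + 7) // 8
print(payload_bits, total_bits, total_bytes)  # 234 309 39
```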

```
// We could just explicitly write the number of bits prior to each value: each header is 5 bits,
// but then the value is perfectly packed every time.
// Note: 5 bits can only express lengths up to 31, which is fine for this dataset (max 28 bits);
// to support full 32-bit lengths you could store (n - 1) in the header instead.
static void Example3_ExplicitHeaders()
{
    MemoryStream packedData = new MemoryStream();
    BitStreamWriter bitWriter = new BitStreamWriter();
    // packing
    foreach (uint v in values)
    {
        uint header = CountBits(v);
        bitWriter.WriteBits(header, 5, packedData);
        bitWriter.WriteBits(v, (int)header, packedData);
    }
    bitWriter.WriteFlush(packedData);
    Console.WriteLine("Original size = {0}, packed size = {1}", values.Length * sizeof(uint), packedData.Length);

    // rewind our stream so we can read it back
    packedData.Seek(0, SeekOrigin.Begin);
    BitStreamReader bitReader = new BitStreamReader();
    uint[] output = new uint[values.Length];
    for (int i = 0; i < output.Length; ++i)
    {
        uint header = bitReader.ReadBits(5, packedData);
        output[i] = bitReader.ReadBits((int)header, packedData);
        if (output[i] == values[i]) Console.WriteLine("Position {0} matches!", i);
    }
}
```

One more improvement can be made, which I encourage you to try yourself: with this technique, the most significant bit of each value is always 1, so it doesn’t actually need to be stored. You can make it implicit and encode only the remaining bits after the 5-bit length header. The result is that you can compress the example dataset down to 37 bytes (actually, 36.75).
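The top bit of every nonzero value is 1, so dropping it saves exactly one bit per value. Checking the arithmetic (Python sketch; note a value of 0 would need special handling under this rule):

```
values = [
    0x0000FDEE, 0x000000A1, 0x0000EF2F, 0x02A3120E, 0x00001147,
    0x000000F1, 0x00000907, 0x000013F7, 0x000003A3, 0x00000407,
    0x00000067, 0x0000F25A, 0x0041FB7E, 0x075B1C78, 0x0CCC20D6,
]

# 5-bit length header + (bit_length - 1) value bits; the top bit is implicit.
total_bits = sum(5 + v.bit_length() - 1 for v in values)
print(total_bits, total_bits / 8, (total_bits + 7) // 8)  # 294 36.75 37
```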

That covers basic bit-packing. In general, you will have to tailor your packing rules to your data in order to maximize compression. The full source and project files for this blog post are on GitHub at https://github.com/ksexamples/bitpacking101.

For my next blog entry, what if I told you it was possible to pack 1 bit of data in less than 1 bit of memory? To some extent, that is what we have done here, but it can be much more extreme by using entropy coding. In my next blog, I will cover using data modelling and entropy coding to build a more generic compressor that compacts the data even more.

To learn more about what we are doing to help developers build better games, faster - check out our multi-user scene collaboration tool for Unity, Scene Fusion.