n0m n0m… n0m was a 250 point forensics challenge I wrote for the BSides Canberra CTF. The event was great fun and this challenge was solved by 9% of the actively playing teams. Before we start looking at the solution, you can have a crack at most of the challenges from the CTF, linked here at the Ionize GitHub repo.

Let’s have a look and see what the challenge description was.

Okay, so we have a PNG file where every double null byte 00 00 has been replaced with a single null byte 00. So, can we simply go back and replace every 00 with 00 00? No, we can’t, because what if the original file had a single null somewhere? Changing that to a double null would once again break the file. So how can we tell what was originally a single null, and what was originally a double null?
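A quick sketch makes the problem concrete (the hex bytes below are just an illustrative header fragment, not the challenge file):

```python
# The corruption: every non-overlapping 00 00 pair collapses to a single 00.
original = bytes.fromhex("89504e470d0a1a0a0000000d49484452")
corrupted = original.replace(b"\x00\x00", b"\x00")

# The naive reversal doubles EVERY null, including bytes that were a lone
# 00 in the original, so it does not restore the file.
naive = corrupted.replace(b"\x00", b"\x00\x00")
print(len(original), len(corrupted), len(naive))
print(naive == original)  # False
```

Here the original run of three nulls becomes two after corruption, and naively doubling turns it into four, so simple substitution can never recover the file.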

Breaking it down

The technique for solving this problem lies in the specifications of the PNG format. A short outline of the format can be found on the W3C website, and they also host a more in-depth outline here. I will provide a brief overview here anyway. A PNG file is made up of a series of “chunks”. For a PNG to be valid (according to the specification) there must be an IHDR (header) chunk directly after the PNG file signature of 89 50 4E 47 0D 0A 1A 0A. There must be at least one, but possibly more, IDAT (data) chunks throughout the file, which hold information such as pixel colors. Finally there must be one IEND chunk at the end of the file. There are several ancillary chunks such as tRNS (transparency), gAMA (gamma) and more, but these may or may not be in the file depending on requirements.

Each chunk has four components: a 4 byte length field, followed by a 4 byte chunk description, followed by a variable amount of data, followed by a 4 byte CRC checksum. The length field only counts the data; it does not include the lengths of the description, the CRC, or its own 4 bytes.
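The chunk layout above can be sketched as a minimal walker over a well-formed PNG (this is a generic illustration, not the challenge script):

```python
import struct
import zlib

SIGNATURE = bytes.fromhex("89504e470d0a1a0a")

def walk_chunks(png: bytes):
    """Yield (type, data, crc_ok) for each chunk in a well-formed PNG."""
    assert png[:8] == SIGNATURE
    pos = 8
    while pos < len(png):
        # 4-byte big-endian length counts only the data bytes.
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        (crc,) = struct.unpack(">I", png[pos + 8 + length:pos + 12 + length])
        # The CRC covers the chunk type plus data, not the length field.
        yield ctype, data, crc == zlib.crc32(ctype + data)
        pos += 12 + length
```

On an intact file this walks IHDR through IEND and confirms every checksum; on the corrupted challenge file it would immediately derail, which is exactly the problem the rest of the write-up solves.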

Every time a double null has been replaced by a single one, the chunk shrinks by 1 byte compared to what is recorded in its size field. With this knowledge, we should be able to go through and calculate how many null replacements have occurred in each section. Once we know this, we can try different permutations of replacing single nulls with double nulls, and then calculate the CRC for the section to find out if we guessed correctly. There are still a few catches we’ll have to deal with. What if the CRC has been altered? What if the length field has been altered? What if there are too many nulls and permutations to brute force? As with most CTF problems it’s a case of cracking on and chasing down the rabbit hole to see if it pans out, solving these hurdles as we go.

Scripting up the solution


To start with we want to manually inspect the file, and look at what we’re dealing with.

We can see the first line is the file signature of 89 50 4E 47 0D 0A 1A 0A, which is fine and unmodified. Next should be the 4 bytes of the IHDR size field, followed by the description “IHDR”. Next is data of the length indicated by the size, followed by a CRC checksum. In reality we see the size field is 00 00 0D, not 00 00 00 0D like we’d expect. We’ve found our first replacement! This makes sense, as any chunk with a data length under 65,536 (a size of the form 0x0000XXXX) will have its size field prefixed by a double null.
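We can reproduce that observation by corrupting a minimal header fragment the same way the challenge does (a hypothetical reconstruction, not the actual file):

```python
# Signature plus the IHDR size field and description, then the corruption.
SIGNATURE = bytes.fromhex("89504e470d0a1a0a")
header = SIGNATURE + bytes.fromhex("0000000d") + b"IHDR"
corrupted = header.replace(b"\x00\x00", b"\x00")

# The signature contains no null pairs and survives; the size field's
# leading 00 00 pair collapses, leaving the 00 00 0D seen in the hex dump.
print(corrupted[8:].hex(" "))  # 00 00 0d 49 48 44 52
```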

So, since we want to script this, let’s make some statements about how we can figure different sections out, especially in terms of what is expected vs the reality.

  • The “description” (from now on referred to as ‘desc’) chunks will never contain nulls, hence they will always be unmodified. These make good anchor points for our script.
  • The “size” chunk will start 4 bytes before the ‘desc’ chunk, once we fix what we can assume to be an always-present replacement of its leading double null bytes.
  • The “size” chunk will always describe what the data size was meant to be, but not necessarily what it is in reality.

The CRC definition is actually a little trickier. If we simply assume the file is well formed, it would be possible to say (Position of Desc) + 4 + (Size of Chunk) = (Start of Chunk CRC). If a replacement has occurred, though, this would land us beyond the true start of the CRC bytes and in fact be invalid! We need to instead define the CRC as the position of the next description block, minus 4 bytes for that chunk’s size, minus 4 bytes again to get to the start of the CRC.
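That backwards walk is simple enough to sketch (the function name is mine, not the original script’s):

```python
def crc_bytes(data: bytes, next_desc_pos: int) -> bytes:
    """Recover a chunk's 4 CRC bytes by anchoring on the NEXT desc block.

    Walking forward from the (possibly shrunken) data is unreliable, so we
    step back 4 bytes over the next chunk's size field, then 4 more over
    the CRC itself.
    """
    return data[next_desc_pos - 8:next_desc_pos - 4]
```

Because desc blocks never contain nulls and size fields have already been repaired by the time this runs, the offsets on the right-hand side of the CRC are trustworthy even when the data to its left is short.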

Let’s begin our script. Manually reviewing the file we can see we have the following block types: IHDR, sRGB, pHYs, iTXt, IDAT and IEND. We want to read in the file and start manipulating the byte stream.

Since getting the size the correct number of bytes is critical for determining what the CRC of the previous section is, let’s fix all the sizes up first.

This goes through and finds the desc chunks, inserting a 00 byte at the start of each size field. The IDAT section does this 28 times after manually checking how many there were (hacky I know – it’s a CTF solution, give me a break :D). The IEND block is a slight edge case: because it doesn’t have any data, its size is actually meant to be 00 00 00 00, so it requires two replacements. At this point, we should have reliable positions for the size, desc, and crc chunks, and be able to count how many bytes actually sit between the desc and crc positions.
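The original snippet isn’t reproduced here, but a hedged reconstruction of that size-fixing pass might look like the following. It assumes every size field is of the form 0x0000XXXX (chunks under 64 KiB) so exactly one leading-pair byte is missing per chunk, two for IEND, and that the desc strings never occur by coincidence inside data:

```python
def find_all(data: bytes, needle: bytes):
    """Yield every position of needle in data."""
    pos = data.find(needle)
    while pos != -1:
        yield pos
        pos = data.find(needle, pos + 1)

def fix_sizes(data: bytearray) -> bytearray:
    descs = (b"IHDR", b"sRGB", b"pHYs", b"iTXt", b"IDAT", b"IEND")
    hits = sorted((pos, d) for d in descs for pos in find_all(data, d))
    # Insert from the back so earlier positions stay valid as bytes shift.
    for pos, desc in reversed(hits):
        if desc == b"IEND":
            # Size should be 00 00 00 00: two pairs collapsed, two bytes lost.
            data[pos - 2:pos - 2] = b"\x00\x00"
        else:
            # Size 0x0000XXXX lost its leading pair: one byte missing.
            data[pos - 3:pos - 3] = b"\x00"
    return data
```

Inserting back-to-front replaces the write-up’s 28 manual IDAT fixes with one loop, but the effect is the same: every size field regains its full 4 bytes.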

Fixing up the Data

So let’s see a large chunk of the script to fix up the data for reference, then I’ll break it down piece by piece.

The first piece to explain is this section.

The whole script effectively treats the bytes in the original picture as a zero-indexed array, chopping and changing before writing out to the solved file. This snippet finds the position of IHDR, and prints out the expected size of the chunk. By taking into account the given chunk size, the current position, and the extra bytes of the description block itself, the CRC, and the size block of the next chunk, we get the position where the next desc block should be within the array. We compare that against the actual position of the next desc block, and from this determine the number of missing bytes.
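That expected-versus-actual comparison can be sketched like this (names are mine; the original snippet works inline rather than as a function):

```python
def missing_bytes(data: bytes, desc_pos: int, next_desc: bytes) -> int:
    """How many nulls went missing from the chunk whose desc starts at desc_pos."""
    # The (already repaired) 4-byte size field sits just before the desc.
    size = int.from_bytes(data[desc_pos - 4:desc_pos], "big")
    # desc (4) + declared data size + CRC (4) + next chunk's size field (4)
    expected_next_desc = desc_pos + 4 + size + 4 + 4
    actual_next_desc = data.find(next_desc, desc_pos)
    return expected_next_desc - actual_next_desc
```

If the data were intact the two positions would coincide and the result would be 0; each collapsed null pair pushes the actual desc one byte earlier than expected.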

We can see with this, there are 3 bytes missing in the IHDR data section of the file.

Now we need to figure out how to fix the section, and importantly confirm when we have arrived at the correct solution. We examine the following snippet.

The data section (and, through a silly bug which doesn’t matter, the description block) is pulled out into the ihdr_data variable. Again using the next section’s description block as an anchor point, we pull the CRC out into the ihdr_crc variable. Using a regex, the null_positions function builds a list of all the null positions in the data, used later for trying different permutations of null replacements. All this information (list of relevant bytes, list of null positions, number of missing bytes, target CRC) is passed into the fix function.
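A sketch of that extraction step (variable names and slicing are assumptions based on the description above):

```python
import re

def null_positions(chunk: bytes) -> list:
    """Positions of every 00 byte, via regex as in the write-up."""
    return [m.start() for m in re.finditer(b"\x00", chunk)]

# With ihdr_pos and srgb_pos as the anchor positions of the desc blocks,
# the slices would look something like:
#   ihdr_data = data[ihdr_pos:srgb_pos - 8]      # desc + data, CRC excluded
#   ihdr_crc  = data[srgb_pos - 8:srgb_pos - 4]  # 4 CRC bytes before next size
```

Note that because the PNG CRC is computed over the chunk type plus the data, having the description block end up inside ihdr_data is actually exactly what the CRC check needs.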

The Core Fixing Function

Some debug information is printed out, but the main line is perm = itertools.permutations(null_positions, missing). This amazing function shown here takes in a list and yields tuples containing every non-repeating permutation of the length given in the second argument. So given we’re passing in [4, 6, 7, 12, 13] as the null positions in the IHDR section, and specifying 3 bytes are missing, it will return the following
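We can see the shape of that output with a quick check:

```python
import itertools

# The call from the write-up, on the IHDR null positions.
perms = list(itertools.permutations([4, 6, 7, 12, 13], 3))
print(len(perms))   # 60
print(perms[:3])    # [(4, 6, 7), (4, 6, 12), (4, 6, 13)]
```

Incidentally, since inserting nulls at a given set of positions is order-independent, itertools.combinations would cut the 60 candidates down to 10, but at this scale it makes no difference.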

Even for this small number of nulls and missing bytes, we get 60 possible permutations, calculated by the formula null_positions! / (null_positions − missing)!. For each of these permutations we insert an extra null byte at each position and calculate the CRC for that candidate stream of bytes. If it matches our target CRC we return the data as “fixed”, otherwise we move on to the next possible stream of bytes. Assuming we have done everything correctly this is an exhaustive search, so it should always return the fixed data.
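A hedged reconstruction of the fix function described above (exact names and structure differ from the original script):

```python
import itertools
import zlib

def fix(chunk: bytes, null_positions: list, missing: int, target_crc: int):
    """Try re-doubling `missing` of the candidate nulls until the CRC matches.

    chunk holds the desc block plus data, which is exactly what the PNG
    CRC is computed over. Returns the repaired bytes, or None if no
    candidate matches (meaning one of our assumptions was wrong).
    """
    for perm in itertools.permutations(null_positions, missing):
        candidate = bytearray(chunk)
        # Insert from the highest position down so lower offsets stay valid.
        for pos in sorted(perm, reverse=True):
            candidate[pos:pos] = b"\x00"
        if zlib.crc32(bytes(candidate)) == target_crc:
            return bytes(candidate)
    return None
```

The CRC acts as the oracle here: with only a handful of candidates per chunk, a 32-bit checksum match is overwhelmingly likely to mean we rebuilt the original bytes.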

Once this fixed stream is obtained, we put it back into our slice of bytes, replacing the broken section, and move onto the next.

Exponential growth is horrible – and factorials grow faster!

So at this point we go through and check out each section, fixing as we go. Overall we’re pretty lucky with the number of bytes missing, usually only 1 or 2, and at those times there are only 4-5 null positions to check. That is, until the IDAT section from hell.

Well, we have a total of 587 potential null positions, and a total of 4 missing bytes. Only 117,518,010,480 possible permutations to try! Honestly, it may be solvable, but I didn’t want to hang around for a few hours to check. On a hunch I decided to stop the “fixing” there and attempt to write out the bytes we had solved so far.
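As a sanity check on that count:

```python
import math

# Ordered choices of 4 insertion positions out of 587 candidates.
print(math.perm(587, 4))  # 117518010480
```

Since insertion order doesn’t matter, combinations would divide that by 4! = 24, down to roughly 4.9 billion candidates, which is still far too many CRC checks to sit through for one chunk.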

Woohoo! Turns out that block which was hard to solve was just below where the flag was in the picture. It’s still corrupted but we don’t care since we can make out the flag.


Here is the complete solution script.