A while ago I was thinking about compact GPU-friendly vertex data formats to use for normals and other unit vectors (tangents, bitangents etc).

Among the most interesting are DEC3N, which packs 10 bits per component (-511..511), and the simpler approach of quantizing each component to the (-127..127) range and stuffing the vector into UBYTE4 or UBYTE4N. Both let you store a unit vector in a DWORD, but the precision is rather bad: the quantization can make the normals visibly different from the original floating point versions.
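For reference, the naive byte scheme is just a scale-and-round per component. A minimal sketch (the function names are mine, for illustration):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Naive UBYTE-style quantization: map each component of a unit vector
// to a signed byte in -127..127.
static void QuantizeNormal(const Vec3& n, int8_t out[3])
{
    out[0] = (int8_t)std::lround(std::clamp(n.x, -1.0f, 1.0f) * 127.0f);
    out[1] = (int8_t)std::lround(std::clamp(n.y, -1.0f, 1.0f) * 127.0f);
    out[2] = (int8_t)std::lround(std::clamp(n.z, -1.0f, 1.0f) * 127.0f);
}

// Dequantize by dividing by 127. The result is generally not exactly
// unit length, and the direction error is what shows up visually.
static Vec3 DequantizeNormal(const int8_t in[3])
{
    return { in[0] / 127.0f, in[1] / 127.0f, in[2] / 127.0f };
}
```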

So it occurred to me that using only this “shell” of integers for unit vectors isn’t making good use of the representable values. You already know your vector is, well, unit length, so all you care about is the direction. And there are far more directions expressible as **n-length** vectors than as **unit** vectors in a 3-byte tuple. Assuming that adding a normalize() to your decoding step is cheap, we can exploit this!
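The decode side is then trivial: treat whatever three bytes were stored as an arbitrary-length integer vector and renormalize. In a shader this is a single normalize() on the fetched attribute; a C++ sketch of the same thing:

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Decode three stored bytes: interpret them as an integer vector of any
// length and normalize. Every nonzero integer triple is a valid direction,
// not just the ones lying near the surface of the radius-127 sphere.
static Vec3 DecodeNormal(const int8_t v[3])
{
    float x = v[0], y = v[1], z = v[2];
    float len = std::sqrt(x * x + y * y + z * z);
    return { x / len, y / len, z / len };
}
```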


I hacked up some code and made a compressor that takes a three-float vector, uses a 3D DDA to scan the “3-byte space” for useful candidates, finds the best of them, and spits out a 3-byte vector for you. The code is downright horrible, and I think I lifted the DDA code from the Interweb, but the idea should be clear enough to run with. The compressor has worked well on any game data I’ve thrown at it, giving very low error compared to the original normal.
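The DDA scanner itself isn’t shown here, but the idea can be sketched with a much simpler search of my own (not the post’s actual code): scale the input normal by every integer length from 1 to 127, round to integers, and keep the candidate whose normalized direction is closest to the original. Since the scale-127 candidate is exactly the naive quantization, this can never do worse than it:

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Simplified compressor: instead of a 3D DDA scan, try rounding n*s for
// every integer scale s in 1..127 and keep the candidate whose direction
// (after normalization) best matches the input. Assumes n is (near) unit
// length so each rounded component fits in a signed byte.
static void CompressNormal(const Vec3& n, int8_t out[3])
{
    float bestDot = -2.0f;
    out[0] = out[1] = out[2] = 0;
    for (int s = 1; s <= 127; ++s)
    {
        int ix = (int)std::lround(n.x * s);
        int iy = (int)std::lround(n.y * s);
        int iz = (int)std::lround(n.z * s);
        float len = std::sqrt((float)(ix * ix + iy * iy + iz * iz));
        if (len == 0.0f)
            continue;
        // Cosine of the angle between the candidate direction and n.
        float d = (ix * n.x + iy * n.y + iz * n.z) / len;
        if (d > bestDot)
        {
            bestDot = d;
            out[0] = (int8_t)ix;
            out[1] = (int8_t)iy;
            out[2] = (int8_t)iz;
        }
    }
}
```

For example, a normal pointing along (1, 2, 3) can be stored as a small integer triple in exactly that ratio, which decodes to the input direction with essentially zero error, while the naive -127..127 rounding cannot hit it exactly.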

I intend to write a follow-up post later with some error numbers and improved code. If anyone can find ways to improve it, please comment!