So you can store a 3D world in a sparse voxel octree, but you've taken up
huge amounts of RAM in doing so.
Jon Olick says you can get it down to about a bit and a half per voxel
using some compression techniques… any ideas as to how this could be done?
Why not ask Olick himself?
How do I get a message to him? I wonder if he even has the time…
I guess I was implying he's online and you may be able to find a way to
reach him. For example, a comment on a voxel post on his blog may get an answer.
Tell him to make time :lol:
According to him,
his storage is 8 bytes per voxel. The 1.15 bits figure he mentions is
positional data only, in comparison to triangle storage. In his storage
structure, 1 byte is allocated for the child mask: an octree node has 8
children (one per octant), and each child gets one bit in the mask. The
leaf voxels themselves store nothing positional, and a full octree with N
leaves has roughly N/8 + N/64 + … = N/7 interior nodes, each carrying an
8-bit mask, so the positional cost is 8 × (N/7) / N = 8/7 ≈ 1.143 bits
per voxel. He probably
just rounded that up to 1.15. His encoding is similar to how PNG files
are encoded before they are compressed with deflate. Stochastic data
doesn't compress very well, so if you filter the data to minimize
entropy, compression algorithms like deflate will do a much better job.
In his case, he stores relative values instead of absolute values
(exactly what PNG does).
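To make the filtering idea concrete, here's a small sketch (the sample data and filter choice are mine, not Olick's): a PNG-style "Sub" filter that stores each byte as the difference from the previous one, then deflates both streams to compare.

```python
import math
import zlib

# Smoothly varying data, like a gradient scanline or neighbouring voxel attributes.
raw = bytes(int(128 + 127 * math.sin(i / 50)) for i in range(4096))

# PNG-style "Sub" filter: keep the first byte, then store differences mod 256.
filtered = bytes([raw[0]] + [(raw[i] - raw[i - 1]) % 256 for i in range(1, len(raw))])

def unfilter(f):
    # Inverse transform, so the filtering itself is lossless.
    out = bytearray([f[0]])
    for b in f[1:]:
        out.append((out[-1] + b) % 256)
    return bytes(out)

assert unfilter(filtered) == raw

packed_raw = zlib.compress(raw, 9)
packed_filtered = zlib.compress(filtered, 9)
print(len(packed_raw), len(packed_filtered))  # the filtered stream deflates far better
```

The raw bytes span the whole 0–255 range, while the filtered bytes cluster in a handful of values near 0 (and near 255 for the small negative steps), which is exactly the entropy reduction deflate needs.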
Thanks for that info, so it is similar to PNG. I can't really do voxels
though, because I know how to save a PNG file but I don't know how to code
that kind of encoding for voxels.
I've actually moved to displacement mapping (see my other post)
and it's going quite well, and with that all I literally do is save
the PNG file and the displacement map is compressed.
So far I got my voxel model down to about 1.45 bytes of positional data, and I'm
not sure at all how I would compress colours in the state they're in,
so I think I'm much better off with displacement maps; then I would
get pretty nice compression, even without knowing much about compression.
What do you think, TheNut… would displacement-map models compress detail
better than a voxel model?
It would work fine for simple terrain, but anything with interior geometry would
be problematic. It offers the best compression for what it does, but
without interiors it's not much of a voxel format ;) With my limited
knowledge, I would stick with the classical approach of storing and
loading spatial trees. You also get the added bonus of storing and
loading LOD levels, which will be important for dealing with memory
intensive voxel data.
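To put a number on the storage cost of that classical approach, here's a sketch (my own, under the assumption that each interior node stores a 1-byte child mask and leaf voxels are just bits in their parent's mask):

```python
def pack_child_mask(children_present):
    # 8 booleans, one per octant, packed into the node's 1-byte mask.
    mask = 0
    for i, present in enumerate(children_present):
        if present:
            mask |= 1 << i
    return mask

def positional_bits_per_voxel(depth):
    # Full octree: 1 + 8 + 64 + ... + 8^(depth-1) interior nodes over 8^depth leaves.
    leaves = 8 ** depth
    interior = (8 ** depth - 1) // 7
    return interior * 8 / leaves  # 8 mask bits per interior node

print(pack_child_mask([True] * 8))          # 255: a completely full node
for d in (2, 4, 6):
    print(d, positional_bits_per_voxel(d))  # approaches 8/7 ≈ 1.143 bits per voxel
```

Each level of the tree is also a ready-made LOD: stopping the descent early gives you a coarser version of the model for free.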
This is a revival of the old post.
So I finally read what you said, TheNut, and I finally understand!! (sort of)
So you store the voxels as neighbouring bits from each other, and it
comes to about 8 bits a voxel… how do you compress further than that?
I sorta gather entropy is when the data turns to complete noise.
Oh I get it! Entropy encoding is just Huffman encoding. Can I ask
another question? Could you use entropy encoding for the colours as well
as the neighbour bits? And how well would it do… there's no way you'd
get 8:1 compression like Jon Olick says… or can you?!?
And by filter, do you mean make all the similar colours the same colour?
That would be a good image filter to go into Huffman code, is that what you mean?
You must use entropy encoding for everything, because colors and normals
eat much more space than the child bits (not neighbour bits). Also, I suggest
using some type of arithmetic coding instead of Huffman: Huffman can only
compress down to an integer number of bits per symbol, while arithmetic
coding can go further, to fractions of a bit.
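A sketch of why that matters (the numbers are illustrative, not from Olick): Shannon entropy is the lower bound in bits per symbol, and arithmetic coding can approach it, while Huffman can never spend less than one whole bit per symbol.

```python
import math

def shannon_entropy(probs):
    # Lower bound on bits per symbol for a memoryless source.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A skewed two-symbol source, e.g. "occupied" bits in a mostly empty octree level.
h = shannon_entropy([0.9, 0.1])
print(f"entropy:  {h:.3f} bits/symbol")  # ~0.469; arithmetic coding gets close to this
print("Huffman:  1.000 bits/symbol")     # stuck at a whole bit, more than double
```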
As Olick wrote, store colors as a difference from the parent, with some
quantization. The quantization step is essentially lossy filtering.
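A sketch of what that could look like (the step size and rounding are my guesses, not Olick's exact scheme): each child colour is stored as a coarsely quantized difference from its parent's colour, so the values to encode are small and repetitive.

```python
def encode_child_color(parent, child, step=8):
    # Quantized difference from the parent; this is the lossy filtering step.
    return tuple(round((c - p) / step) for p, c in zip(parent, child))

def decode_child_color(parent, delta, step=8):
    # Reconstruct, clamping each channel to the valid 0..255 range.
    return tuple(max(0, min(255, p + d * step)) for p, d in zip(parent, delta))

parent = (120, 96, 200)
child = (131, 90, 197)
delta = encode_child_color(parent, child)   # small values that entropy-code well
approx = decode_child_color(parent, delta)
print(delta, approx)  # reconstruction error is at most step / 2 per channel
```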
everyone seems to know more than me!!! ahhhh!!!