Is there a way to insert data into the middle of a large file? Like writing at a byte offset and having everything after it shift across?
Generally no. There is no insert operation at the file-system level; you have to move all the data after the insertion position yourself.
Well, in theory it's possible to shift file data cheaply by an integer
number of sectors (512 bytes, or file-system dependent), but I don't
think there is a public API for that, or even an internal system function.
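To illustrate the manual approach described above, here is a minimal Python sketch (not from the thread): it "inserts" by rewriting everything after the insertion point. For a genuinely large file you would copy the tail in fixed-size chunks, working backwards from the end, rather than reading it all into memory at once.

```python
import os

def insert_bytes(path, offset, data):
    """Insert data at offset by rewriting the tail of the file.

    The OS has no insert primitive, so everything after the insert
    point must be read and written back shifted. Fine for a demo;
    large files need a chunked, back-to-front copy instead.
    """
    with open(path, "r+b") as f:
        f.seek(offset)
        tail = f.read()      # everything after the insert point
        f.seek(offset)
        f.write(data)        # the new bytes
        f.write(tail)        # the shifted tail

# quick demo
with open("demo.bin", "wb") as f:
    f.write(b"HELLOWORLD")
insert_bytes("demo.bin", 5, b"-")
with open("demo.bin", "rb") as f:
    print(f.read())  # b'HELLO-WORLD'
os.remove("demo.bin")
```

The cost is proportional to the amount of data after the insertion point, which is exactly why inserting near the start of a huge file is expensive.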
That's a damn shame, because I'm thinking about making a sparse voxel octree
streaming program, and I can't seem to build the SVO without
insertion. Could I have each node as a separate file? What would
the consequences of that be?
Some file systems, such as NTFS, support sparse files. Clusters you have
never touched will not be written to disk. This way, you could open a
file, seek to position 1 GB, write a byte, and the file will only take up
a few kilobytes of disk space (for the allocated cluster containing that
byte and some other metadata). However, chances are that when copying
that file the copy will actually be 1 GB, because a read from an untouched
cluster does not fail but simply returns all zeroes, which are then
written to the copy.
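The seek-past-the-end trick looks like this in Python (a sketch, with platform caveats: on Linux file systems like ext4 the hole is sparse automatically, while on NTFS a file must be explicitly marked sparse via `FSCTL_SET_SPARSE` or the clusters get allocated):

```python
import os

path = "sparse.bin"
with open(path, "wb") as f:
    f.seek(1 << 30)    # jump to the 1 GiB mark without writing anything
    f.write(b"\x01")   # logical file size becomes 1 GiB + 1 byte

print(os.path.getsize(path))  # 1073741825 -- the logical size

# On file systems that store the hole sparsely, the blocks actually
# allocated on disk are far fewer; st_blocks counts 512-byte units.
print(os.stat(path).st_blocks * 512)  # typically a few KB, not 1 GiB

os.remove(path)
```

This is also why a naive copy inflates the file: the copier reads the hole as zeroes and writes them out for real, unless it uses a sparse-aware copy (e.g. `cp --sparse=always`).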
But why do the contents of the file have to be linear? Can't you just
write the added data to the end of the file, and keep an in-memory
remapping structure so you know where each chunk is located in the file?
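The append-and-remap idea above can be sketched as follows (a hypothetical `ChunkStore`, not anything from the thread): every "insert" is really an append, and an in-memory index maps each chunk ID to its offset and length.

```python
import io

class ChunkStore:
    """Append-only chunk file with an in-memory offset table.

    'Inserting' a chunk never moves existing data: the chunk is
    appended at the end and the index records where it landed.
    """
    def __init__(self, f):
        self.f = f
        self.index = {}  # chunk_id -> (offset, length)

    def put(self, chunk_id, data):
        self.f.seek(0, 2)  # seek to end of file
        self.index[chunk_id] = (self.f.tell(), len(data))
        self.f.write(data)

    def get(self, chunk_id):
        offset, length = self.index[chunk_id]
        self.f.seek(offset)
        return self.f.read(length)

# demo on an in-memory buffer; on disk it would be open(path, "r+b")
store = ChunkStore(io.BytesIO())
store.put("root", b"node0")
store.put("child", b"node1")
print(store.get("root"))  # b'node0'
```

The trade-off is that the index itself costs memory (or must be persisted), which is exactly the pointer overhead the next post objects to.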
It's because it's a sparse voxel octree. I don't want to waste integers on
pointers; I want to use one bit per node for whether it's contained or uncontained,
which means everything has to pan out perfectly in the file.
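For context, the bit-per-child layout being described usually looks like a one-byte occupancy mask per node, one bit per octant (a common SVO encoding; the helper below is a hypothetical illustration, not the poster's code):

```python
def child_mask(children_present):
    """Pack which of the 8 octants are occupied into one byte.

    children_present: iterable of octant indices in 0..7.
    Bit i set means child i exists.
    """
    mask = 0
    for i in children_present:
        mask |= 1 << i
    return mask

mask = child_mask([0, 3, 7])
print(bin(mask))              # 0b10001001
print(bool(mask & (1 << 3)))  # True: octant 3 is occupied
```

The catch, as the thread notes, is that a pure bitmask only works if the children's positions in the file can be derived implicitly; once nodes move around, you need explicit offsets again.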
The way around this is to have a different file per node, but I was
wondering, would that slow streaming down at all? (It would affect disk
space usage for sure… but that's only a secondary problem.)
Nah, it doesn't work; I'll need integer pointers.