EEEEEEEEEE LL SSSSSSS
E LL S S
E LL SS
EEEEEEEE LL SS
E LL S S
EEEEEEEEEE LLLLLLLL SSSSSS
E nd - L ess - S pace
Basic idea: store data in the names of files instead of in the files themselves.
If anyone has a valid use case for this thing, PLEASE let me know.
Buckets are directories. Bucket names are hashed to a fixed size that fits within the maximum filename length on most filesystems.
Values are directories containing files that encode the data in their names. These are called 'ValueBuckets'. Because filenames are not infinitely long, the data has to be split into multiple chunks. Each chunk gets prefixed with its position in the list of chunks. The resulting filenames look like this:
filename := base64enc(varintenc(index)) + base64enc(dataChunk)
Writing into the file structure:
- Encode the data with base64
- Encode the next chunk index as a varint, then base64
- Fill the rest of the name with encoded data
- Create a file with this name
- While more data is left, repeat
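The write steps above can be sketched in Go. `encodeNames` is a hypothetical helper, not the library's API, and the split point between index and data is not specified here; this sketch assumes a padded base64 index prefix, which stays a fixed 4 characters for small indices.

```go
package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// encodeNames sketches the write path: base64-encode the whole value once,
// then split the encoded string across filenames, each prefixed with its
// chunk index as a base64-encoded varint. maxName bounds the characters per
// name; real limits depend on the filesystem.
func encodeNames(data []byte, maxName int) []string {
	encoded := base64.RawURLEncoding.EncodeToString(data)
	var names []string
	for i := 0; len(encoded) > 0; i++ {
		idx := binary.AppendUvarint(nil, uint64(i))
		// Assumption: padded base64 keeps the prefix a fixed 4 characters
		// for indices below 2^14, so a reader knows where the data starts.
		prefix := base64.URLEncoding.EncodeToString(idx)
		n := maxName - len(prefix)
		if n > len(encoded) {
			n = len(encoded)
		}
		names = append(names, prefix+encoded[:n])
		encoded = encoded[n:]
	}
	return names
}

func main() {
	for _, name := range encodeNames([]byte("hello, endless space"), 16) {
		fmt.Println(name)
	}
}
```

The URL-safe base64 alphabet is used so that '/' can never appear in a filename.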
For reconstruction:
- Split each filename into index and data chunk
- Decode the index into an int
- Save the chunk into a slice at that index
- Repeat for all files
- Concatenate all chunks
- Decode the concatenated chunks
- Return the data
You define your bucket with a list of bucket names: ["myBigBucket", "mySmallerBucket", "myValueBucket"]. After opening the bucket with an ELS instance, you can Write to and Read from it. (Read is a bit wonky if your buffer is too small; better to use ReadValue and be done with it. Read is more of a joke implementation in this absolute joke of a library.)
- Name -> Bucket of Values or Names
- Value -> Collection of Files as described above
- Arrays -> Bucket with a sentinel entry that marks it as an array
Writing:
- Get all paths through the JSON to its values. Every path gets its own ELS bucket
- Write the values into these buckets
Reading:
- Split the schema into paths and open the bucket for each path
- Read the buckets
- Merge the paths back into one document
- Return the merged JSON
Arrays are very wonky. Everything in this repo is. What did you expect?