BrickDB


created by Austin Poor

A small basic proof-of-concept database written in Rust. I wouldn't recommend using this in production.

Ideas for the Future

  • Sharding
  • Multi-node clusters (via Raft?)
  • Schema validation

Notes

  • Update the storage::level::Level::new implementation to format the path using the path argument as a parent directory path
  • The LSMTree should be able to move data between levels (e.g. memtable -> level-1 SSTable) without downtime, if possible. Can it create an SSTable from the memtable, write it to disk, and then clear the memtable? Should there be a frozen memtable that can be read from but not written to while the on-disk operations are in progress?
  • As above, should this (maybe) also work for SSTables? Or is it unnecessary, since they're read-only?
    • Maybe they don't need it, but compaction should mark which table IDs it is working on. Compaction can start on level 1 (say), and while it runs, the level-1 tables can still be read from; once the new level-2 table is ready, the level-1 tables can be removed. But since a new level-1 table might have been added in the meantime (it probably shouldn't have been, but could have been), the compaction process should remember exactly which tables it included, rather than assuming "all tables in level 1".
  • Compress data written to disk with snappy compression? (try snap)
  • There was an error with the snappy compression that I didn't really look into deeply. Go back and figure it out!
  • Update todo!()s in tests (and the commented-out tests)
  • Add the ability to encrypt data on disk (aes with ring?)
  • The bloom crate hasn't had any updates in the past 7 years. Consider switching to a different implementation or writing one myself.
  • Add metadata for the storage::lsm::LSMTree so it can be read back in.
  • Create a separate reader/writer implementation for reading/writing data. It could simplify async writing of BSON data, compression (share encoders/decoders?), and encryption.
  • Implement the WAL
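The frozen-memtable idea above can be sketched roughly like this. This is a minimal illustration, not brickdb's actual types: the `MemTable` struct, its fields, and the `freeze`/`clear_frozen` names are all hypothetical. Writes go to the active table; `freeze` swaps the active table into a read-only slot so a flush can write it to disk while reads still see both.

```rust
use std::collections::BTreeMap;

/// Hypothetical two-stage memtable: writes go to `active`, while a
/// `frozen` snapshot stays readable during the flush to an SSTable.
struct MemTable {
    active: BTreeMap<String, String>,
    frozen: Option<BTreeMap<String, String>>,
}

impl MemTable {
    fn new() -> Self {
        MemTable { active: BTreeMap::new(), frozen: None }
    }

    fn insert(&mut self, key: String, value: String) {
        self.active.insert(key, value);
    }

    /// Reads check the active table first, then the frozen snapshot.
    fn get(&self, key: &str) -> Option<&String> {
        self.active
            .get(key)
            .or_else(|| self.frozen.as_ref().and_then(|f| f.get(key)))
    }

    /// Swap the active table into the frozen slot. The caller would then
    /// write the frozen table to disk as an SSTable and, once that
    /// succeeds, call `clear_frozen`.
    fn freeze(&mut self) {
        self.frozen = Some(std::mem::take(&mut self.active));
    }

    fn clear_frozen(&mut self) {
        self.frozen = None;
    }
}
```

One nice property of this shape: the swap itself is cheap (`std::mem::take` is a pointer move, not a copy), so writes are only blocked for the instant of the swap, not for the duration of the disk write.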

Database Disk Structure

...

To-Do

  • Define on-disk structure for LSM
  • Implement WAL
  • Add compression when reading/writing from/to disk
  • Add encryption when reading/writing from/to disk
  • Add indexes
