A Rust-based implementation of Advent of Code puzzles, written as a learning exercise for familiarising myself with Rust.
I've pre-set some things to automate or reduce the friction of the day-to-day workflow by leveraging the great Rust ecosystem - for testing, benchmarking, and otherwise evaluating the performance of the Rust solutions.
This includes the command just work, which is passed a particular day and part and is equivalent to running all of the following in a row, stopping if one fails, then re-starting the flow after changes (a manual equivalent is sketched after the list).
cargo check
cargo nextest run
clippy-tracing --action check
cargo clippy
cargo bench
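Run by hand, that gate would look something like the chained command below (the && chaining stops at the first failure); just work automates this and uses the supplied day and part to target the relevant package's tests and benches:

cargo check && cargo nextest run && clippy-tracing --action check && cargo clippy && cargo bench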
Alternatively you can make use of the awesome tool bacon, a background Rust code checker.
It's designed for minimal interaction so that you can just let it run, alongside your editor, and be notified of warnings, errors, or test failures in your Rust code.
cargo install cargo-nextest cargo-generate cargo-watch aoc-cli bacon
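Once installed, you typically just leave bacon running in a spare terminal next to your editor, started with one of its jobs, for example (check, clippy and test are bacon's default job names):

bacon clippy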
Just is a command runner that provides a lightweight interface to predefined and dynamically configured commands for testing, templatised bootstrapping, benchmarking, downloading AoC inputs and puzzle markdown, etc.
brew install just
just create <day_number>
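For reference, a create recipe like this is usually a thin wrapper around cargo-generate and aoc-cli; the template path and file layout below are illustrative placeholders rather than this repo's exact justfile contents:

cargo generate --path ./daily-template --name day-03
aoc download --day 3 --input-file day-03/input.txt --puzzle-file day-03/puzzle.md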
Criterion is the de facto benchmarking crate, but I wanted to compare it against Divan. Divan has a simpler API and provides a neat approach to benchmarking generic functions and measuring allocations (not yet tried) - see Divan over Criterion.
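As a rough sketch of what the Divan side looks like (the bench name, solver body, and input path are placeholders, not this repo's actual code):

```rust
fn main() {
    // Collects and runs every function annotated with #[divan::bench].
    divan::main();
}

#[divan::bench]
fn part1() -> usize {
    // Placeholder solver: black_box stops the compiler optimising the input away.
    let input = divan::black_box(include_str!("../input.txt"));
    input.lines().count()
}
```

A file like this lives under benches/ and needs harness = false for its bench target in Cargo.toml (just as Criterion does), after which cargo bench picks it up.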
cargo-nextest is "a next-generation test runner for Rust projects". Basically that means it uses an interesting execution model that can be great for projects with a lot of tests, as it is essentially 2-4x faster than standard cargo test - see the Nextest Benchmarks.
The only drawback is that it doesn't run doctests yet - basically that means you also need to run cargo test --doc.
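For reference, a doctest is just the fenced example inside a /// doc comment, so it is only exercised by cargo test --doc; a toy sketch (the add function and the aoc crate name are made up for illustration):

````rust
/// Adds two numbers.
///
/// ```
/// assert_eq!(aoc::add(2, 2), 4);
/// ```
pub fn add(a: u64, b: u64) -> u64 {
    a + b
}
````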
cargo install cargo-nextest