Description
What is the problem your feature solves, or the need it fulfills?
I was exploring the LinkedList implementation in the pingora-lru crate and wondered whether using Criterion.rs could make the benchmarks more reliable. Right now, the linked list benchmarks (and possibly those in other crates as well) rely on ad-hoc timing, which can be noisy and hard to reproduce, especially for operations like iterating over lists. Adopting Criterion would give built-in warm-up, statistical analysis (outlier detection, confidence intervals), and clear “before vs. after” regression reports.
Describe the solution you'd like
I’d like to suggest migrating the existing benchmarks in pingora-lru to use Criterion.rs. If this seems like a valuable change, I’d be happy to contribute the initial implementation myself!
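To make the idea concrete, here is a minimal sketch of what one migrated benchmark could look like. The Criterion scaffolding itself is standard, but the LinkedList method names (`with_capacity`, `push_head`, `iter`) and the module path are my guesses at the pingora-lru API and would need to be adjusted to match the actual crate:

```rust
// benches/lru_linked_list.rs (hypothetical file name)
//
// Cargo.toml would need `criterion` under [dev-dependencies] and a
// [[bench]] entry for this file with `harness = false`.
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use pingora_lru::linked_list::LinkedList; // module path assumed

fn bench_linked_list_iter(c: &mut Criterion) {
    // Setup runs once, outside the measured closure.
    // NOTE: with_capacity/push_head are assumed names; adjust to the real API.
    let mut list = LinkedList::with_capacity(1024);
    for i in 0..1024u64 {
        list.push_head(i);
    }

    c.bench_function("linked_list iterate 1024", |b| {
        // Criterion handles warm-up, sampling, and statistical analysis here.
        b.iter(|| black_box(list.iter().count()))
    });
}

criterion_group!(benches, bench_linked_list_iter);
criterion_main!(benches);
```

With this in place, `cargo bench` produces per-benchmark estimates with confidence intervals, and subsequent runs are compared against the saved baseline, which is the “before vs. after” reporting mentioned above.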
Describe alternatives you've considered
- iai-callgrind, a benchmarking library that measures instruction counts through Valgrind’s Callgrind instead of wall-clock time, which makes its results very stable even on noisy machines (rough sketch below). It’s a great tool, but Criterion provides statistically driven wall-clock benchmarks with confidence intervals, and it’s the tool I’m more familiar with.
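For comparison, here is a rough sketch of the iai-callgrind equivalent, under the same assumed LinkedList API. It requires Valgrind to be installed, and the attribute-macro style below matches recent iai-callgrind releases as I understand them, so the exact version and macro syntax should be double-checked:

```rust
// benches/lru_linked_list_iai.rs (hypothetical file name)
// Needs `iai-callgrind` under [dev-dependencies], `harness = false`,
// and Valgrind available on the machine running the benches.
use iai_callgrind::{library_benchmark, library_benchmark_group, main};
use pingora_lru::linked_list::LinkedList; // module path assumed
use std::hint::black_box;

// NOTE: with_capacity/push_head are assumed names; adjust to the real API.
fn build_list(n: u64) -> LinkedList {
    let mut list = LinkedList::with_capacity(n as usize);
    for i in 0..n {
        list.push_head(i);
    }
    list
}

#[library_benchmark]
fn iterate_1024() -> usize {
    // Callgrind counts instructions, so list construction is measured too
    // unless it is moved into a separate setup step.
    let list = build_list(1024);
    black_box(list.iter().count())
}

library_benchmark_group!(name = linked_list; benchmarks = iterate_1024);
main!(library_benchmark_groups = linked_list);
```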
Additional context
I’m suggesting Criterion specifically because I’ve used it before and find its workflow effective, but I think both Criterion and iai-callgrind are strong options depending on the goals.