Otter is designed to provide an excellent developer experience while maintaining high performance. It aims to address the shortcomings of its predecessors and incorporates design principles from high-performance libraries in other languages (such as Caffeine).
Performance-wise, Otter provides:
- High hit rates across all workload types via adaptive W-TinyLFU
- Excellent throughput under high contention on most workload types
- Among the lowest memory overheads across all cache capacities
- Automatic configuration of internal data structures based on contention/parallelism and workload patterns
Otter also provides a highly configurable caching API, enabling any combination of these optional features:
- Size-based eviction when a maximum is exceeded
- Time-based expiration of entries, measured since last access or last write
- Automatic loading of entries into the cache
- Asynchronous refreshing of an entry when the first stale request for it occurs
- Accumulation of cache access statistics
For more details, see our user's guide and browse the API docs for the latest release.
Otter requires Go version 1.24 or above.
Otter v1:
go get -u github.com/maypok86/otter
Otter v2:
go get -u github.com/maypok86/otter/v2
See the release notes for details of the changes.
Note that Otter only supports the two most recent minor versions of Go.
Otter follows semantic versioning for the documented public API on stable releases; v2 is the latest stable major version.
Otter uses a plain Options struct for cache configuration. Check out otter.Options for more details.
Note that all features are optional. You can create a cache that acts as a simple hash table wrapper, with near-zero memory overhead for unused features, thanks to node code generation.
API Usage Example
package main

import (
	"context"
	"time"

	"github.com/maypok86/otter/v2"
	"github.com/maypok86/otter/v2/stats"
)

func main() {
	ctx := context.Background()

	// Create a statistics counter to track cache operations.
	counter := stats.NewCounter()

	// Configure the cache with:
	// - Capacity: 10,000 entries
	// - 1-second expiration after last access
	// - 500ms refresh interval after writes
	// - Stats collection enabled
	cache := otter.Must(&otter.Options[string, string]{
		MaximumSize:       10_000,
		ExpiryCalculator:  otter.ExpiryAccessing[string, string](time.Second),           // Reset timer on reads/writes
		RefreshCalculator: otter.RefreshWriting[string, string](500 * time.Millisecond), // Refresh after writes
		StatsRecorder:     counter,                                                      // Attach stats collector
	})

	// Phase 1: Test basic expiration
	// ------------------------------
	cache.Set("key", "value") // Add initial value

	// Wait for expiration (1 second).
	time.Sleep(time.Second)

	// Verify the entry expired.
	if _, ok := cache.GetIfPresent("key"); ok {
		panic("key shouldn't be found") // Should be expired
	}

	// Phase 2: Test cache stampede protection
	// ---------------------------------------
	loader := func(ctx context.Context, key string) (string, error) {
		time.Sleep(200 * time.Millisecond) // Simulate slow load
		return "value1", nil               // Return new value
	}

	// Concurrent Gets deduplicate loader calls.
	value, err := cache.Get(ctx, "key", otter.LoaderFunc[string, string](loader))
	if err != nil {
		panic(err)
	}
	if value != "value1" {
		panic("incorrect value") // Should get newly loaded value
	}

	// Phase 3: Test background refresh
	// --------------------------------
	time.Sleep(500 * time.Millisecond) // Wait until refresh needed

	// New loader that returns an updated value.
	loader = func(ctx context.Context, key string) (string, error) {
		time.Sleep(100 * time.Millisecond) // Simulate refresh
		return "value2", nil               // Return refreshed value
	}

	// This triggers an async refresh but returns the current value.
	value, err = cache.Get(ctx, "key", otter.LoaderFunc[string, string](loader))
	if err != nil {
		panic(err)
	}
	if value != "value1" { // Should get old value while refreshing
		panic("loader shouldn't be called during Get")
	}

	// Wait for the refresh to complete.
	time.Sleep(110 * time.Millisecond)

	// Verify the refreshed value.
	v, ok := cache.GetIfPresent("key")
	if !ok {
		panic("key should be found") // Should still be cached
	}
	if v != "value2" { // Should now have refreshed value
		panic("refresh should be completed")
	}
}
You can find more usage examples here.
The benchmark code can be found here.
Throughput benchmarks are a Go port of the Caffeine benchmarks. This microbenchmark compares cache throughput under a Zipf distribution, which exposes various inefficiencies in cache implementations.
You can find results here.
The hit ratio simulator tests caches on various traces:
- Synthetic (Zipf distribution)
- Traditional (widely known and used in various projects and papers)
You can find results here.
This benchmark quantifies the additional memory consumption across varying cache capacities.
You can find results here.
Below is a list of known projects that use Otter:
- Grafana: The open and composable observability and data visualization platform.
- Centrifugo: Scalable real-time messaging server in a language-agnostic way.
- FrankenPHP: The modern PHP app server.
- Unkey: Open source API management platform.
Otter is based on the following papers:
- BP-Wrapper: A Framework Making Any Replacement Algorithms (Almost) Lock Contention Free
- TinyLFU: A Highly Efficient Cache Admission Policy
- Adaptive Software Cache Management
- Denial of Service via Algorithmic Complexity Attack
- Hashed and Hierarchical Timing Wheels
- A large scale analysis of hundreds of in-memory cache clusters at Twitter
Contributions are always welcome. Before submitting a new PR, please open an issue first so community members can discuss it. For more information, please see the contribution guidelines.
Additionally, you might find existing open issues which can help with improvements.
This project follows a standard code of conduct so that you can understand what actions will and will not be tolerated.
This project is Apache 2.0 licensed, as found in the LICENSE file.