Open
Description
Basic benchmarks show that a point-in-polygon API call completes in under 1 millisecond.
We don't fully understand what the performance is like:
- under heavy load
- on a cold start vs. when the Linux filesystem cache has paged in all/most of the DB
- single core vs. multi-core
- when it hits the max QPS for a machine
- with a small DB vs. a large DB
- at various levels of 'max shard complexity' (a tunable config value).
This ticket is to figure out how to generate benchmarks that measure more than vanity metrics — e.g. latency percentiles under realistic load rather than best-case averages.
Ideally we would automate this process so we can track performance over time as new features are added.
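As a starting point, reporting latency percentiles (p50/p99) instead of a single average avoids the vanity-metric problem. The sketch below is a minimal micro-benchmark harness; the ray-casting function is a local stand-in for the real API call (the actual endpoint, DB, and shard configuration are assumptions not covered here), but the timing/percentile scaffolding is what an automated run would reuse:

```python
import random
import statistics
import time

def point_in_polygon(x, y, poly):
    """Ray-casting test: a local stand-in for the real point-in-polygon API call."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def bench(fn, n=10_000):
    """Time n calls and return per-call latencies in milliseconds."""
    poly = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square; real runs would use production shards
    latencies = []
    for _ in range(n):
        x, y = random.random() * 2, random.random() * 2
        t0 = time.perf_counter()
        fn(x, y, poly)
        latencies.append((time.perf_counter() - t0) * 1000)
    return latencies

lats = bench(point_in_polygon)
cuts = statistics.quantiles(lats, n=100)   # 99 percentile cut points
p50, p99 = cuts[49], cuts[98]
print(f"p50={p50:.4f}ms p99={p99:.4f}ms over {len(lats)} calls")
```

Re-running this harness in CI with varying concurrency, DB size, and max-shard-complexity settings, and recording the percentiles per commit, would give the over-time view described above.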