fix: uniswap unable to handle more than 10 tickers #797
Conversation
// Chunk chunks a slice into batches of chunkSize.
// example: {1,2,3,4,5}, chunkSize = 2 -> {1,2}, {3,4}, {5}
func Chunk[T any](input []T, chunkSize int) [][]T {
Taken from sauron. There is a Chunk function in the slices package for Go 1.23, but using it would require us to bump the Go version in our go.mod, which would affect anyone using our code.
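For reference, a minimal sketch of what such a generic Chunk helper could look like; the exact body merged in this PR may differ, and the guard against a non-positive chunkSize is an assumption added for safety.

// Chunk splits input into batches of at most chunkSize elements.
// example: {1,2,3,4,5}, chunkSize = 2 -> {1,2}, {3,4}, {5}
func Chunk[T any](input []T, chunkSize int) [][]T {
	// Assumed behavior: return nil for empty input or a non-positive chunk size.
	if chunkSize <= 0 || len(input) == 0 {
		return nil
	}
	chunks := make([][]T, 0, (len(input)+chunkSize-1)/chunkSize)
	for chunkSize < len(input) {
		chunks = append(chunks, input[:chunkSize])
		input = input[chunkSize:]
	}
	return append(chunks, input)
}

Once the module targets Go 1.23 or later, the standard library's slices.Chunk (which returns an iterator rather than a slice of slices) could replace this helper.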
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #797      +/-   ##
==========================================
+ Coverage   57.34%   58.04%   +0.70%
==========================================
  Files         213      214       +1
  Lines       14823    14844      +21
==========================================
+ Hits         8500     8616     +116
+ Misses       5709     5602     -107
- Partials      614      626      +12

☔ View full report in Codecov by Sentry.
Left some comments without approval since this is still marked as draft. Feel free to request review whenever it's ready to go, or if you want additional feedback.
@@ -100,9 +100,9 @@ func (c *GoEthereumClientImpl) BatchCallContext(ctx context.Context, calls []rpc
 	if err = c.client.BatchCallContext(ctx, calls); err != nil {
 		c.apiMetrics.AddRPCStatusCode(c.api.Name, c.redactedURL, metrics.RPCCodeError)
-		return
+		return err
I assume this was done for clarity, since named return values can be confusing. Consider removing the name entirely, i.e., use error instead of (err error) as the return type.
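For illustration, a minimal sketch of the unnamed-return form; the parameter type is assumed from the truncated hunk header, and the rest of the method body is omitted.

// Sketch only: the calls parameter type is assumed to be go-ethereum's rpc.BatchElem.
func (c *GoEthereumClientImpl) BatchCallContext(ctx context.Context, calls []rpc.BatchElem) error {
	// With a plain error return, err is declared locally and a bare
	// `return` is no longer possible, so the intent is explicit.
	if err := c.client.BatchCallContext(ctx, calls); err != nil {
		c.apiMetrics.AddRPCStatusCode(c.api.Name, c.redactedURL, metrics.RPCCodeError)
		return err
	}
	return nil
}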
	tickers,
	providertypes.NewErrorWithCode(err, providertypes.ErrorAPIGeneral),
)
for _, chunk := range batchChunks {
We should probably just parallelize this. It isn't too much more complicated, and this is one of the most common ways for latency to get out of control.
eg, egCtx := errgroup.WithContext(ctx)
for _, chunk := range batchChunks {
	chunk := chunk
	eg.Go(func() error {
		// Batch call to the EVM.
		if err := u.client.BatchCallContext(egCtx, chunk); err != nil {
			u.logger.Debug(
				"failed to batch call to ethereum network for all tickers",
				zap.Error(err),
			)
			return err
		}
		return nil
	})
}

if err := eg.Wait(); err != nil {
	return types.NewPriceResponseWithErr(
		tickers,
		providertypes.NewErrorWithCode(err, providertypes.ErrorAPIGeneral),
	)
}
If you do this, you should probably set a limit of something like 5 for the error group.
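With golang.org/x/sync/errgroup, the limit can be set before launching the goroutines; eg.Go then blocks whenever that many calls are already in flight, so the chunk loop throttles itself. Sketch, with 5 taken from the suggestion above:

eg, egCtx := errgroup.WithContext(ctx)
// Cap concurrency so a large market map cannot open an unbounded
// number of in-flight RPC batch calls at once.
eg.SetLimit(5)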
providers/providertest/provider.go
@@ -111,7 +111,7 @@ func (c *Config) Validate() error {
 func DefaultProviderTestConfig() Config {
 	return Config{
 		TestDuration: 1 * time.Minute,
-		PollInterval: 5 * time.Second,
+		PollInterval: 20 * time.Second,
Any justification for changing the poll interval?
Reverted this; it was accidentally committed while I was experimenting.
@@ -158,7 +158,7 @@ func (o *TestingOracle) RunMarketMap(ctx context.Context, mm mmtypes.MarketMap,
 	if len(prices) != expectedNumPrices {
 		return nil, fmt.Errorf("expected %d prices, got %d", expectedNumPrices, len(prices))
 	}
-	o.Logger.Info("provider prices", zap.Any("prices", prices))
Is this being removed because it was making the logs too messy?
yep
@@ -15,6 +15,9 @@ func FilterMarketMapToProviders(mm mmtypes.MarketMap) map[string]mmtypes.MarketM
 	for _, market := range mm.Markets {
 		// check each provider config
 		for _, pc := range market.ProviderConfigs {
+			// remove normalizations to isolate markets
+			pc.NormalizeByPair = nil
Is this just for testing, or why is it needed?
This is so that we can actually test the markets in isolation. For example, if I want to isolate all of the Uniswap markets, they are all normalized by ETH/USD, which requires a bunch of other feeds to exist. For this kind of testing, we only want to answer "does Uniswap return a price for this expected pool?", so we simply remove the normalize-by pairs.
Pre-approving, pending @zrbecker's suggestion of parallelizing the Uniswap batch calls.
Closes CON-1849
The Uniswap provider has been found to support only 10 tickers per batch; any larger number causes the entire batch fetch to fail.
From investigating geth, nodes are configured with a batchItemLimit, but the default values seem large enough for anything we would realistically do.
Regardless, from running the test I added and varying the batchSize of our requests, all sizes greater than 10 fail (any size of 10 or fewer works) across a variety of endpoints.
As written, the batch calls are serial, but we could parallelize them. Until we hit realistic performance boundaries, I don't think we should prioritize that (and complicate the code).
Leaving the test and test file in for demonstration; I will remove them before moving out of draft. The test file is generated from dYdX mainnet, filtered to only Uniswap markets.
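For context, a rough sketch of the serial chunk-and-batch flow described above, using names that appear in the snippets earlier in this conversation; batchElems and maxBatchSize are assumed names, with maxBatchSize standing in for the observed 10-ticker limit.

// maxBatchSize is an assumed constant for the observed per-batch limit.
const maxBatchSize = 10

// Split the prepared batch elements into chunks and issue them serially.
batchChunks := Chunk(batchElems, maxBatchSize)
for _, chunk := range batchChunks {
	// Batch call to the EVM; the whole response fails on the first error.
	if err := u.client.BatchCallContext(ctx, chunk); err != nil {
		u.logger.Debug(
			"failed to batch call to ethereum network for all tickers",
			zap.Error(err),
		)
		return types.NewPriceResponseWithErr(
			tickers,
			providertypes.NewErrorWithCode(err, providertypes.ErrorAPIGeneral),
		)
	}
}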