Summary
We see in SQS that a lot of time is spent in TickToSqrtPrice. It's potentially the case that a large amount of time is spent in the protorev / tx flow as well.
This is an easily cacheable function. I suggest we try adding a TwoQueue LRU cache from https://pkg.go.dev/github.com/hashicorp/golang-lru/v2#TwoQueueCache with a size of ~50000 entries. This makes sense to me to add into SQS for sure. In the one recent sync benchmark I have over 100 blocks this doesn't feel like a big win, however I am curious what happens in v24. The memory overhead of each entry is 3*key + 2*value. A value here is a BigDec ~= 152 bytes, a key is 8 bytes, so 50k entries is about 16.5MB.

We should write both:

A microbench
Eventually check on mainnet
If the microbench is sped up, I suspect this is a good idea, as most user-flow txs should touch the same small number of ticks. The TwoQueue design helps those entries stay in cache rather than getting burst-evicted.
To see if this sped things up, we should also check mainnet. I am certain a small local cache will speed things up due to protorev's repeated re-computation of these values right now; the same goes for SQS. Though both of those have algorithmic improvements available to reduce this, so it's not as compelling to add this cache into the state machine for their use cases, hence my lack of clarity on whether this should be done here vs in SQS.
Problem Definition
No response
Proposed Feature
Use https://pkg.go.dev/github.com/hashicorp/golang-lru/v2#TwoQueueCache with a size of ~50000 entries to cache TickToSqrtPrice calls.
Write a benchmark to see if this speeds things up single-threaded. The cache needs to be concurrency safe for SQS and tx parallelization.
Test on mainnet sync and SQS.