This means that if you call rateLimitedFunc 150 times and only 100 calls fit in the time frame, the remaining 50 calls are postponed and executed later to respect the given rate limits.
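To make the "postpone" behavior concrete, here is a minimal sketch of a queue-based limiter (a hypothetical implementation for illustration, not the package's actual code): calls beyond the limit are queued and replayed as slots free up, so nothing is dropped.

```javascript
// Hypothetical sketch: rateLimit(limit, intervalMs, fn) -> wrapped fn.
// The real package's signature may differ.
function rateLimit(limit, intervalMs, fn) {
  const queue = [];
  let inFlight = 0; // slots used in the current window

  function drain() {
    while (inFlight < limit && queue.length > 0) {
      inFlight++;
      const { args, resolve } = queue.shift();
      resolve(fn(...args));
      // Release the slot after the interval passes, then retry the queue.
      setTimeout(() => { inFlight--; drain(); }, intervalMs);
    }
  }

  return (...args) =>
    new Promise((resolve) => {
      queue.push({ args, resolve });
      drain();
    });
}
```

With `limit = 100` and `intervalMs = 60000`, the 150-call example above runs 100 calls immediately and defers the other 50 until slots open up.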
It might make sense to have the ability to choose an overflow management strategy, such as:
Postpone and keep all (the current behavior)
Postpone and drop new - keep up to X (configurable) of the oldest requests
Postpone and drop old - keep up to X (configurable) of the newest requests
Backpressure? (reactive streams)
Fail - throw an exception/reject the promise immediately if the rate limit is hit.
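The drop strategies above boil down to how a bounded queue admits a new request. A rough sketch, assuming a strategy option and a configurable maximum queue size (neither exists in the package today; all names here are invented):

```javascript
// Hypothetical enqueue policy for a bounded pending-call queue.
// Returns true if the request was accepted into the queue.
function enqueue(queue, item, { strategy = 'keep-all', maxSize = Infinity } = {}) {
  switch (strategy) {
    case 'keep-all':
      queue.push(item); // unbounded: postpone everything
      return true;
    case 'drop-new': // keep up to maxSize oldest requests
      if (queue.length >= maxSize) return false; // reject the newcomer
      queue.push(item);
      return true;
    case 'drop-old': // keep up to maxSize newest requests
      queue.push(item);
      while (queue.length > maxSize) queue.shift(); // evict the oldest
      return true;
    case 'fail':
      // A non-empty queue stands in for "limit already hit" here;
      // a real limiter would check whether a slot is free.
      if (queue.length > 0) throw new Error('Rate limit exceeded');
      queue.push(item);
      return true;
  }
}
```

Backpressure is the odd one out: instead of the limiter deciding what to drop, the caller is slowed down (e.g. by awaiting a promise that resolves only when the queue has room), as in reactive streams.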
We've built this package around the idea that we have several functions that work with the same rate-limited resource. We wrap these functions with the rateLimit module and work with the wrapped functions from then on. If some third-party code tries to overcome the limits by calling those functions too frequently, it should wait for a while.
Imagine we have a microservice that processes crypto payments with its own queue and uses some third-party rate-limited API to actually interact with a blockchain. The rate limit in this case might be 100 requests per minute. So it's fine to wait several additional minutes and execute the transaction after that, but dropping the tx is not acceptable.
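A toy simulation (not the package's code) contrasting the two extremes for this payment scenario, 150 transactions against a 100-per-window limit:

```javascript
// strategy: 'keep-all' postpones overflow into later windows;
// 'fail' rejects whatever doesn't fit into the first window.
function simulate(strategy, total, limitPerWindow) {
  let executed = 0, dropped = 0, windows = 0;
  let remaining = total;
  while (remaining > 0) {
    const batch = Math.min(remaining, limitPerWindow);
    executed += batch;
    remaining -= batch;
    windows++;
    if (strategy === 'fail') {
      dropped = remaining; // everything past the limit is rejected
      remaining = 0;
    }
  }
  return { executed, dropped, windows };
}
```

With keep-all, all 150 transactions execute across two windows; with fail, 50 payments would be rejected, which is exactly what this use case cannot tolerate.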
This is a use case for the "Postpone and keep all" strategy. Could you please give some use cases for the other strategies? It's easier to think about a new feature with particular examples.
Inspired/related: https://doc.akka.io/docs/akka/2.5/stream/stream-rate.html#buffers-in-akka-streams