Is there an existing issue that is already proposing this?
I have searched the existing issues
Is your feature request related to a problem? Please describe it
I propose adding an advanced set of options to provide a more flexible and powerful rate limiting mechanism. This would allow different rate limits based on different keys (e.g. IP address, customer ID). It would also introduce a credit-based system that temporarily allows exceeding the base rate limit, enabling 'bursty' workloads while controlling the potential cost.
Here are the proposed features:
Requests per Interval: Allow rate limits to be specified per arbitrary time intervals, not just per minute.
Credits and Their Duration: Introduce a credit-based system where each key can be assigned a certain amount of credits. Each credit would allow the key to make requests at a higher rate for a certain duration.
Multiple Limit Configurations: Allow multiple limit configurations, each tied to a different key. This would allow for both platform-wide and key-specific limits.
Customizable Responses: Provide a way to customize the response based on the type of rate limit that was exceeded.
Here's how the configuration for these features might look:
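A minimal sketch of such a configuration, assuming the proposed option names (limitGroups, getKey, responseMessage, limits, upperLimit, rateIntervalInMs, creditsInMinutes, creditsWindowInMinutes, isUnlimited); none of these exist in the current @nestjs/throttler API, and the concrete numbers and the x-customer-id header are only illustrative:

```typescript
import { Module } from '@nestjs/common';
import { ThrottlerModule } from '@nestjs/throttler';

// All options below (limitGroups and the fields inside it) are hypothetical and
// only illustrate the proposal; they are NOT part of today's @nestjs/throttler.
@Module({
  imports: [
    ThrottlerModule.forRoot({
      limitGroups: [
        {
          // Per-IP limits: the key is the IP address of the request.
          getKey: (req) => req.ip,
          responseMessage: 'Too many requests from this IP address, please slow down.',
          limits: [
            // Base tier: up to 100 requests per minute, usable indefinitely.
            { upperLimit: 100, rateIntervalInMs: 60000, isUnlimited: true },
            // Burst tier: up to 1800 requests per minute, paid for with credit
            // minutes that refresh every 360 minutes (6 hours); 60 is illustrative.
            { upperLimit: 1800, rateIntervalInMs: 60000, creditsInMinutes: 60, creditsWindowInMinutes: 360 },
            // Blocked tier: no further requests until the monthly window (43200 minutes) resets.
            { upperLimit: 0, rateIntervalInMs: 60000, creditsInMinutes: 0, creditsWindowInMinutes: 43200 },
          ],
        },
        {
          // Per-customer limits: the key is the customer ID from the request headers.
          getKey: (req) => req.headers['x-customer-id'],
          responseMessage: 'Customer rate limit exceeded, please try again later.',
          limits: [{ upperLimit: 5000, rateIntervalInMs: 60000, isUnlimited: true }],
        },
        {
          // Platform-wide limit: every request contributes to the same count.
          getKey: () => 'platform',
          responseMessage: 'The platform is under heavy load, please try again later.',
          limits: [{ upperLimit: 100000, rateIntervalInMs: 60000, isUnlimited: true }],
        },
      ],
    } as any), // cast because the current typings do not know these options
  ],
})
export class AppModule {}
```

In this configuration: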
The first limit group applies to each IP address individually. The getKey function returns the IP address of the request.
The second limit group applies to each customer individually. The getKey function returns the customer ID from the request headers.
The third limit group applies platform-wide. The getKey function always returns the string 'platform', so all requests contribute to the same count.
Each limit group has a responseMessage that will be returned in the response when a user exceeds a rate limit. This provides a clear explanation to the user of why their request was throttled.
Here is a breakdown for the first example (the per-IP group). Each object inside the limits array is a different rate limit tier. The parameters are:
upperLimit: This is the maximum number of requests that can be made within the rateIntervalInMs. If null, it means there is no upper limit.
rateIntervalInMs: This is the time period (in milliseconds) in which the number of requests is counted.
creditsInMinutes: This is the amount of time (in minutes) that a requester can spend within this tier's range, up to its upper limit. When the requester exceeds the upper limit of the lower tier, they start consuming their credits. Once the credits are used up, they are moved to the next, more restrictive tier.
creditsWindowInMinutes: This is the duration (in minutes) after which the credits refresh.
Here are some examples of how this works:
If an IP address makes 100 requests in a minute, they remain in the first tier and don't use any credits, as this tier is set with "isUnlimited": true.
If the same IP address then makes 1100 requests in the next minute, they exceed the upper limit of the first tier (100 requests per minute) and move to the second tier, where they start using their credits to make requests at a higher rate (up to 1800 requests per minute). For the next 360 minutes (6 hours), every minute within the credits window in which they make between 101 and 1800 requests reduces the remaining credits for that tier.
In the third tier, they are not allowed to make any more requests (0 credits). They will stay in this tier until their credits are reset (e.g., at the start of the next credits window, in this case a month, or 43200 minutes).
This mechanism allows requesters to temporarily exceed the base rate limit when needed, while also ensuring that they don't overwhelm the server with a high number of requests for an extended period of time.
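To make the mechanics concrete, here is a rough sketch of how a guard or storage layer might pick the applicable tier and decide whether a request passes. The types and the decide function are hypothetical illustrations of the proposal, not existing throttler code; credit bookkeeping per elapsed minute and window resets are left out.

```typescript
// Hypothetical types and logic illustrating the proposed credit mechanism;
// nothing here exists in @nestjs/throttler today.
interface LimitTier {
  upperLimit: number | null;       // max requests per rateIntervalInMs, null = no cap
  rateIntervalInMs: number;        // window in which requests are counted
  isUnlimited?: boolean;           // tier never consumes credits
  creditsInMinutes?: number;       // minutes a key may spend in this tier
  creditsWindowInMinutes?: number; // how often those credit minutes refresh
}

interface KeyState {
  requestsThisInterval: number; // requests already counted in the current interval
  creditsLeft: number[];        // remaining credit minutes, one entry per tier
}

// Decide whether one more request from this key should pass.
// Tiers are ordered from the base tier to the most restrictive one.
function decide(tiers: LimitTier[], state: KeyState): 'allow' | 'block' {
  state.requestsThisInterval++;

  for (let i = 0; i < tiers.length; i++) {
    const tier = tiers[i];
    const withinTier =
      tier.upperLimit === null || state.requestsThisInterval <= tier.upperLimit;

    if (!withinTier) continue;            // over this tier's cap, try the next (burst) tier

    if (tier.isUnlimited) return 'allow'; // base tier: no credits consumed

    if (state.creditsLeft[i] > 0) return 'allow'; // burst tier: allowed while credits remain
    // credits exhausted: fall through to the next, more restrictive tier
  }

  return 'block'; // no tier can absorb the request
}
```

In the walkthrough above, the 101st request of the second minute would fall through to the burst tier and be allowed as long as credit minutes remain for it; once they are gone, the request falls past the remaining tiers and is blocked.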
I believe these features would greatly enhance the flexibility and power of the @nestjs/throttler library. I would appreciate your thoughts on this proposal.
Describe the solution you'd like
Desired Solution:
Extend @nestjs/throttler to:
Allow rate limits per arbitrary time intervals, not just per minute.
Implement a credit-based system for temporary limit exceeding.
Enable multiple limit configurations for different keys, supporting both platform-wide and key-specific limits.
Provide customizable responses for rate limit exceeding based on the type of limit.
Potential Drawbacks:
Added complexity in configuration and understanding of the throttler library.
Increased codebase complexity, potential bugs, and need for extensive testing.
Risk of server flooding if the credit-based system is misused or exploited.
Teachability, documentation, adoption, migration strategy
Clear and comprehensive documentation, including explanations of new concepts and step-by-step guides, will help users understand and implement the advanced rate limiting features.
Adoption and Migration Strategy:
The new features should be backwards-compatible, allowing existing users to opt in as needed. For example, the basic mode could be kept as-is, with the new limitGroups option added as an optional extra rather than replacing the main option, as sketched below.
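A sketch of that opt-in shape, assuming the v4-style ttl/limit options as the existing basic mode; limitGroups remains a proposed, not-yet-existing option:

```typescript
import { ThrottlerModule } from '@nestjs/throttler';

// Existing basic mode keeps working unchanged:
ThrottlerModule.forRoot({ ttl: 60, limit: 10 });

// Advanced behaviour would be opt-in via an extra, hypothetical limitGroups option:
ThrottlerModule.forRoot({
  ttl: 60,
  limit: 10,
  limitGroups: [/* advanced per-key tiers as sketched earlier */],
} as any); // cast: current typings do not include limitGroups
```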
Visual aids like flowcharts or diagrams explaining the rate limiting process would also be beneficial for understanding and adoption.
What is the motivation / use case for changing the behavior?
The motivation behind these changes is to provide users with more flexible and powerful rate limiting options, catering to diverse use cases and traffic patterns.
The current rate limiting offers fixed request limits per interval, which may not be ideal for all scenarios. For instance, certain operations might need to temporarily allow higher request rates, such as batch operations or data syncing tasks (especially with elastic cloud solutions, where auto-scaling makes throttling more about cost control than protection of the platform).
The proposed changes will allow rate limiting to be more adaptable to the dynamic nature of web traffic. The credit-based system can handle bursts of high traffic, while the flexible intervals and multiple limit configurations can cater to different types of requests or users.
Use cases include:
User-based limiting: Different users or roles can have different limits.
Platform-wide limiting: To protect the overall system from high traffic.
Handling traffic bursts: The credit system allows temporary exceeding of limits.
Currently we hit a problem with the throttler because of how the infrastructure is set up... Your proposal is great, and I was thinking of something similar as well, but that was a year and a half ago, really sad.
I'm sure I read this at the time it was created, but it has since flown under my radar with other things in life going on. This seems like an incredible proposal, and it could even be fun to take on implementing it. I will see if I can set aside some time in the near future to work on test suites and an implementation of this.