Returns false for the first limit times it is called within a predefined RATE_LIMITER_DURATION, and true if it has been called more than limit times within a given RATE_LIMITER_DURATION.
This builtin provides a flexible way to implement rate limiting on any operation. It needs to be connected to a Redis server: you can set one up yourself, or use a managed solution such as AWS ElastiCache (managed Redis).
Because it uses shared state in Redis, this function is safe to use across multiple PDPs: as long as they are connected to the same Redis server, the results will be consistent for a given limit across all PDPs.
In the example, we use it to rate-limit requests made to our AWS API Gateway endpoint.
key - a string used as a key to distinguish between multiple rate limiters
limit - a number representing the upper limit of calls for the given key
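To illustrate the semantics described above, here is a minimal in-memory sketch of a fixed-window rate limiter with the same behavior: false for the first limit calls per window, true afterwards. This is an illustrative approximation only, not the builtin's actual implementation; the real builtin keeps its counters in Redis so that multiple PDPs share state, and the names used here (FixedWindowRateLimiter, is_rate_limited) are hypothetical.

```python
import time


class FixedWindowRateLimiter:
    """In-memory sketch of fixed-window rate limiting.

    A plain dict stands in for Redis here, so this version is only
    correct within a single process.
    """

    def __init__(self, duration_seconds):
        # Plays the role of RATE_LIMITER_DURATION: the length of each window.
        self.duration = duration_seconds
        self.windows = {}  # key -> (window_start, call_count)

    def is_rate_limited(self, key, limit):
        """Return False for the first `limit` calls in a window, True after."""
        now = time.monotonic()
        start, count = self.windows.get(key, (now, 0))
        if now - start >= self.duration:
            # The window has expired: start a new one with a fresh counter.
            start, count = now, 0
        count += 1
        self.windows[key] = (start, count)
        return count > limit


limiter = FixedWindowRateLimiter(duration_seconds=60)
# The first 3 calls for "client-1" pass; calls 4 and 5 are rate-limited.
results = [limiter.is_rate_limited("client-1", 3) for _ in range(5)]
print(results)  # [False, False, False, True, True]
```

Each key gets its own counter, which matches the parameter description: separate keys are rate-limited independently.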
After setting up Redis, here's an example of the extra environment variables to be added to the PDP. Note the extra RATE_LIMITER_* variables in the docker run command:
docker pull buildsecurity/api-gw-pdp

docker run \
  -e RATE_LIMITER_REDIS_ENDPOINT=<your Redis endpoint> \
  -e RATE_LIMITER_REDIS_PASSWORD=<your Redis password, if you've set one> \
  -e RATE_LIMITER_DURATION=<the duration basis for rate-limiting> \
  -p 8181:8181 \
  --name pdp \
  buildsecurity/api-gw-pdp