Extract rate limiter as its own module, with middleware on top

Romain Prieto 2014-08-22 13:51:33 +10:00
parent 8bdf7a6511
commit 34ac08d932
9 changed files with 393 additions and 245 deletions

README.md

@@ -1,121 +1,158 @@
-# connect-rate-limiter
+# redis-rate-limiter
-Rate-limit your `Node.js` API, backed by Redis.
+Rate-limit any operation, backed by Redis.
-- easy to configure
-- can set limits for different routes
-- tested under heavy load for race conditions
+- Inspired by [ratelimiter](https://www.npmjs.org/package/ratelimiter)
+- But uses a fixed-window algorithm
+- Great performance (>10000 checks/sec on local redis)
+- No race conditions
+[![NPM](http://img.shields.io/npm/v/redis-rate-limiter.svg?style=flat)](https://npmjs.org/package/redis-rate-limiter)
+[![License](http://img.shields.io/npm/l/redis-rate-limiter.svg?style=flat)](https://github.com/TabDigital/redis-rate-limiter)
+Very easy to plug into `Express` or `Restify` to rate limit your `Node.js` API.
+[![Build Status](http://img.shields.io/travis/TabDigital/redis-rate-limiter.svg?style=flat)](http://travis-ci.org/TabDigital/redis-rate-limiter)
+[![Dependencies](http://img.shields.io/david/TabDigital/redis-rate-limiter.svg?style=flat)](https://david-dm.org/TabDigital/redis-rate-limiter)
+[![Dev dependencies](http://img.shields.io/david/dev/TabDigital/redis-rate-limiter.svg?style=flat)](https://david-dm.org/TabDigital/redis-rate-limiter)
 ## Usage
-The simplest example is
+Step 1: create a Redis connection
-```coffee
-RateLimiter = require 'connect-rate-limiter'
-limiter = new RateLimiter(redis: 'redis://localhost:6379')
-server.use limiter.middleware(key: 'ip', rate: '10 req/second')
+```js
+var redis = require('redis');
+var client = redis.createClient(6379, 'localhost', {enable_offline_queue: false});
 ```
-That's it!
-No one will be able to make more than 10 requests per second from a given IP.
+Step 2: create your rate limiter
-## Events
-You can listen to the `accepted` event to apply extra logic, for example adding custom headers.
-```coffee
-limiter.on 'accepted', (rate, req, res) ->
-  res.headers('X-RateLimit-Window', rate.window) # 60 = 1 minute
-  res.headers('X-RateLimit-Total', rate.total) # 100 = 100 req/minute
-  res.headers('X-RateLimit-Current', rate.current) # 35 = 35 out of 100
+```js
+var rateLimiter = require('redis-rate-limiter');
+var limit = rateLimiter.create({
+  redis: client,
+  key: function(x) { return x.id },
+  rate: '100/minute'
+});
 ```
-By default, rate-limited requests get terminated with a status code of `429`.
-You can listen to the `rejected` event to override this behaviour.
-If you attach an event handler, you **must** terminate the request yourself.
+And go
-```coffee
-limiter.on 'rejected', (rate, req, res, next) ->
-  res.send 429, 'Too many requests'
-  # or for example
-  next new restify.RequestThrottledError('Too many requests')
+```js
+limit(request, function(err, rate) {
+  if (err) {
+    console.warn('Rate limiting not available');
+  } else {
+    console.log('Rate window: ' + rate.window); // 60
+    console.log('Rate limit: ' + rate.limit); // 100
+    console.log('Rate current: ' + rate.current); // 74
+    if (rate.over) {
+      console.error('Over the limit!');
+    }
+  }
+});
 ```
-Finally, if Redis is not available the middleware won't apply any rate-limiting.
-You can catch that event for logging purposes.
+## Options
-```coffee
-limiter.on 'unavailable', (err) ->
-  console.log 'Failed to rate limit', err
+### `redis`
+A pre-created Redis client.
+Make sure offline queueing is disabled.
+```js
+var client = redis.createClient(6379, 'localhost', {
+  enable_offline_queue: false
+});
 ```
-# Rate-limiting key
+### `key`
-Rate-limiting is applied per user - which are identified with a unique key.
+The key is how requests are grouped for rate-limiting.
+Typically, this would be a user ID, a type of operation...
 There are several helpers built-in:
-```coffee
-# identify users by IP
+```js
+// identify users by IP
 key: 'ip'
-# identify users by their IP network (255.255.255.0 mask)
+// identify users by their IP network (255.255.255.0 mask)
 key: 'ip/32'
-# identify users by the X-Forwarded-For header
-# careful: this is just an HTTP header and can easily be spoofed
+// identify users by the X-Forwarded-For header
+// careful: this is just an HTTP header and can easily be spoofed
 key: 'x-forwarded-for'
 ```
-You can also specify a custom function to extract the key from the request.
+You can also specify any custom function:
-```coffee
-# use your own custom function
-key: (req) -> req.body.account.number
+```js
+// rate-limit each user separately
+key: function(x) { return x.user.id; }
+// rate limit per user and operation type
+key: function(x) { return x.user.id + ':' + x.operation; }
+// rate limit everyone in the same bucket
+key: function(x) { return 'single-bucket'; }
 ```
-# Request rate
+### `window`
-The rate is made of two components.
+This is the duration over which rate-limiting is applied, in seconds.
-```coffee
-limit: 100 # 100 requests
-window: 60 # per minute
+```js
+// rate limit per minute
+window: 60
+// rate limit per hour
+window: 3600
 ```
-You can also use a shorthand notation using the `rate` property.
+Note that this is **not a rolling window**.
+If you specify `10 requests / minute`, a user would be able
+to execute 10 requests at `00:59` and another 10 at `01:01`.
+Then they won't be able to make another request until `02:00`.
-```coffee
-rate: '10 req/second'
-rate: '200 req/minute'
-rate: '5000 req/hour'
+### `limit`
+This is the total number of requests a unique `key` can make during the `window`.
+```js
+limit: 100
 ```
-# Multiple limiters
+### `rate`
-You can combine several rate-limiters, either on the entire server or at the route level.
+Rate is a shorthand notation to combine `limit` and `window`.
-```coffee
-# rate limit the whole server to 10/sec from any IP
-server.use limiter.middleware(key: 'ip', rate: '10 req/second')
-# but you also can't create more than 1 user/min from a given IP
-server.post '/api/users',
-  limiter.middleware(key: 'ip', rate: '5 req/minute'),
-  controller.create
+```js
+rate: '10/second'
+rate: '100/minute'
+rate: '1000/hour'
 ```
-You can also apply several limiters with different criteria.
-They will be executed in series, as a logical `AND`.
+Or the even shorter
-```coffee
-# no more than 100 requests per IP
-# and no more than 10 requests per account
-server.use limiter.middleware(key: uniqueIp, rate: '100 req/second')
-server.use limiter.middleware(key: accountNumber, rate: '50 req/minute')
+```js
+rate: '10/s'
+rate: '100/m'
+rate: '100/h'
 ```
+*Note:* the rate is parsed ahead of time, so this notation doesn't affect performance.
+## HTTP middleware
+This package contains a pre-built middleware,
+which takes the same options
+```js
+var rateLimiter = require('redis-rate-limiter');
+var middleware = rateLimiter.middleware({
+  redis: client,
+  key: 'ip',
+  rate: '100/minute'
+});
+server.use(middleware);
+```
+It rejects any rate-limited requests with a status code of `HTTP 429`,
+and an empty body.
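The bundled middleware always ends rejected requests itself with that empty `429`. If you need different rejection behaviour (extra headers, a body, logging), a minimal sketch is to build your own middleware on top of `create`; this is not part of the package, and the header names below are illustrative only:

```js
var redis = require('redis');
var rateLimiter = require('redis-rate-limiter');

var client = redis.createClient(6379, 'localhost', {enable_offline_queue: false});
var limit = rateLimiter.create({redis: client, key: 'ip', rate: '100/minute'});

// drop-in alternative to rateLimiter.middleware() with a custom rejection
function customRateLimit(req, res, next) {
  limit(req, function(err, rate) {
    if (err) return next(); // Redis unavailable: fail open, like the bundled middleware
    res.setHeader('X-RateLimit-Limit', rate.limit);     // illustrative header names
    res.setHeader('X-RateLimit-Current', rate.current);
    if (rate.over) {
      res.statusCode = 429;
      res.end('Too many requests');
    } else {
      next();
    }
  });
}

server.use(customRateLimit); // assumes an Express/Connect `server`
```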

lib/index.js

@@ -1,52 +1,3 @@
-var redis = require('redis');
-var util = require('util');
-var url = require('url');
-var EventEmitter = require('events').EventEmitter;
-var options = require('./options');
-function RateLimiter(config) {
-  var cnx = url.parse(config.redis);
-  this.client = redis.createClient(cnx.port, cnx.hostname, {enable_offline_queue: false});
-  this.client.on('error', this.emit.bind(this, 'unavailable'));
-}
-util.inherits(RateLimiter, EventEmitter);
-module.exports = RateLimiter;
-RateLimiter.prototype.middleware = function(opts) {
-  var self = this;
-  opts = options.canonical(opts);
-  return function(req, res, next) {
-    var key = opts.key(req);
-    var tempKey = 'ratelimittemp:' + key;
-    var realKey = 'ratelimit:' + key;
-    self.client
-      .multi()
-      .setex(tempKey, opts.window, 0)
-      .renamenx(tempKey, realKey)
-      .incr(realKey)
-      .exec(function(err, results) {
-        if(err) {
-          self.emit('unavailable', err);
-          next();
-        } else {
-          var rate = {
-            current: results[2],
-            limit: opts.limit,
-            window: opts.window
-          };
-          if (rate.current <= rate.limit) {
-            self.emit('accepted', rate, req, res, next);
-            next();
-          } else {
-            if (self.listeners('rejected').length === 0) {
-              res.writeHead(429);
-              res.end();
-            } else {
-              self.emit('rejected', rate, req, res, next);
-            }
-          }
-        }
-      });
-  };
-};
+exports.create = require('./rate-limiter');
+exports.middleware = require('./middleware');

lib/middleware.js (new file)

@@ -0,0 +1,19 @@
+var rateLimiter = require('./rate-limiter');
+module.exports = function(opts) {
+  var limiter = rateLimiter(opts);
+  return function(req, res, next) {
+    limiter(req, function(err, rate) {
+      if (err) {
+        next();
+      } else {
+        if (rate.current > rate.limit) {
+          res.writeHead(429);
+          res.end();
+        } else {
+          next();
+        }
+      }
+    });
+  };
+};
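Because the middleware is just a standard `function(req, res, next)`, it can also be mounted on a single route instead of the whole server. A small sketch, assuming an existing Redis `client`, an Express `app`, and a `createUser` handler (all hypothetical names):

```js
var rateLimiter = require('redis-rate-limiter');

// tighter limit on account creation only; values are illustrative
var createUserLimit = rateLimiter.middleware({
  redis: client,
  key: 'ip',
  rate: '5/minute'
});

app.post('/api/users', createUserLimit, createUser);
```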

lib/options.js

@@ -6,6 +6,10 @@ exports.canonical = function(opts) {
   var canon = {};
+  // Redis connection
+  assert.equal(typeof opts.redis, 'object', 'Invalid redis client');
+  canon.redis = opts.redis;
   // Key function
   if (typeof opts.key === 'function') canon.key = opts.key;
   if (typeof opts.key === 'string') canon.key = keyShorthands[opts.key];
@@ -13,7 +17,7 @@ exports.canonical = function(opts) {
   // Rate shorthand
   if (opts.rate) {
     assert.equal(typeof opts.rate, 'string', 'Invalid rate: ' + opts.rate);
-    var match = opts.rate.match(/^(\d+) req\/([a-z]+)$/);
+    var match = opts.rate.match(/^(\d+)\s*\/\s*([a-z]+)$/);
     assert.ok(match, 'Invalid rate: ' + opts.rate);
     canon.limit = parseInt(match[1], 10);
     canon.window = moment.duration(1, match[2]) / 1000;
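For reference, a rough sketch of what the canonical options look like once the shorthand is expanded; the numbers follow the option tests further down, but treat the exact object shape as an assumption:

```js
var options = require('./options');

var canon = options.canonical({
  redis: client,       // assumed pre-created client, passed through as canon.redis
  key: 'ip',           // expanded to one of the built-in key functions
  rate: '100/minute'   // parsed ahead of time
});

// canon.limit  === 100
// canon.window === 60   (seconds)
// typeof canon.key === 'function'
```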

lib/rate-limiter.js (new file)

@@ -0,0 +1,27 @@
+var options = require('./options');
+module.exports = function(opts) {
+  opts = options.canonical(opts);
+  return function(request, callback) {
+    var key = opts.key(request);
+    var tempKey = 'ratelimittemp:' + key;
+    var realKey = 'ratelimit:' + key;
+    opts.redis.multi()
+      .setex(tempKey, opts.window, 0)
+      .renamenx(tempKey, realKey)
+      .incr(realKey)
+      .exec(function(err, results) {
+        if(err) {
+          callback(err);
+        } else {
+          var current = results[2];
+          callback(null, {
+            current: current,
+            limit: opts.limit,
+            window: opts.window,
+            over: (current > opts.limit)
+          });
+        }
+      });
+  };
+};
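The three queued Redis commands are what make the fixed window safe under concurrent requests; an annotated restatement of the sequence above (the comments are an informal reading, not text from the commit):

```js
opts.redis.multi()                   // commands are queued and run as one atomic block
  .setex(tempKey, opts.window, 0)    // throwaway counter carrying the window TTL
  .renamenx(tempKey, realKey)        // becomes the real counter only if none exists yet,
                                     // so the TTL is set exactly once per window;
                                     // if the rename fails, the temp key simply expires
  .incr(realKey)                     // count this request; the INCR reply is rate.current
  .exec(callback);
```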

package.json

@@ -1,7 +1,7 @@
 {
-  "name": "connect-rate-limiter",
+  "name": "redis-rate-limiter",
   "version": "0.0.0",
-  "description": "Rate-limit middleware, backed by Redis",
+  "description": "Rate-limit any operation, backed by Redis",
   "author": "Tabcorp Digital Team",
   "license": "MIT",
   "main": "lib/index.js",
@@ -10,14 +10,14 @@
   },
   "dependencies": {
     "ip": "~0.3.1",
-    "moment": "~2.8.1",
-    "redis": "~0.12.1"
+    "moment": "~2.8.1"
   },
   "devDependencies": {
     "async": "~0.9.0",
-    "connect": "~2.25.7",
+    "express": "~4.8.5",
     "lodash": "~2.4.1",
     "mocha": "~1.21.4",
+    "redis": "~0.12.1",
     "should": "~4.0.4",
     "supertest": "~0.13.0"
   }


@@ -1,44 +1,54 @@
 var _ = require('lodash');
 var async = require('async');
 var should = require('should');
-var connect = require('connect');
+var redis = require('redis');
+var express = require('express');
 var supertest = require('supertest');
-var RateLimiter = require('../lib/index');
+var middleware = require('../lib/middleware');
-describe('Rate-limit middleware', function() {
+describe('Middleware', function() {
   this.slow(5000);
   this.timeout(5000);
+  var client = null;
   var limiter = null;
   before(function(done) {
-    limiter = new RateLimiter({redis: 'redis://localhost:6379'});
-    limiter.client.on('connect', done);
-  });
-  beforeEach(function(done) {
-    limiter.client.del('ratelimit:127.0.0.1', done);
+    client = redis.createClient(6379, 'localhost', {enable_offline_queue: false});
+    client.on('ready', done);
   });
   describe('IP throttling', function() {
-    it('works under the limit', function(done) {
-      var server = connect();
-      server.use(limiter.middleware({key: 'ip', rate: '10 req/second'}));
-      server.use(fastResponse);
-      var reqs = requests(server, '/test', 9);
+    before(function() {
+      limiter = middleware({
+        redis: client,
+        key: 'ip',
+        rate: '10/second'
+      });
+    });
+    beforeEach(function(done) {
+      client.del('ratelimit:127.0.0.1', done);
+    });
+    it('passes through under the limit', function(done) {
+      var server = express();
+      server.use(limiter);
+      server.use(okResponse);
+      var reqs = requests(server, 9, '/test');
       async.parallel(reqs, function(err, data) {
         withStatus(data, 200).should.have.length(9);
         done();
       });
     });
-    it('fails over the limit', function(done) {
-      var server = connect();
-      server.use(limiter.middleware({key: 'ip', rate: '10 req/second'}));
-      server.use(fastResponse);
-      var reqs = requests(server, '/test', 12);
+    it('returns HTTP 429 over the limit', function(done) {
+      var server = express();
+      server.use(limiter);
+      server.use(okResponse);
+      var reqs = requests(server, 12, '/test');
       async.parallel(reqs, function(err, data) {
         withStatus(data, 200).should.have.length(10);
         withStatus(data, 429).should.have.length(2);
@@ -46,16 +56,16 @@ describe('Rate-limit middleware', function() {
       });
     });
-    it('can go under / over / under', function(done) {
-      var server = connect();
-      server.use(limiter.middleware({key: 'ip', rate: '10 req/second'}));
-      server.use(fastResponse);
+    it('works across several rate-limit windows', function(done) {
+      var server = express();
+      server.use(limiter);
+      server.use(okResponse);
       async.series([
-        function(next) { async.parallel(requests(server, '/test', 9), next); },
-        function(next) { setTimeout(next, 1100); },
-        function(next) { async.parallel(requests(server, '/test', 12), next); },
-        function(next) { setTimeout(next, 1100); },
-        function(next) { async.parallel(requests(server, '/test', 9), next); }
+        parallelRequests(server, 9, '/test'),
+        wait(1100),
+        parallelRequests(server, 12, '/test'),
+        wait(1100),
+        parallelRequests(server, 9, '/test')
       ], function(err, data) {
         withStatus(data[0], 200).should.have.length(9);
         withStatus(data[2], 200).should.have.length(10);
@@ -69,66 +79,65 @@ describe('Rate-limit middleware', function() {
+  describe('Custom key throttling', function() {
+    before(function() {
+      limiter = middleware({
+        redis: client,
+        key: function(req) { return req.query.user; },
+        rate: '10/second'
+      });
+    });
+    beforeEach(function(done) {
+      async.series([
+        client.del.bind(client, 'ratelimit:a'),
+        client.del.bind(client, 'ratelimit:b'),
+        client.del.bind(client, 'ratelimit:c')
+      ], done);
+    });
+    it('uses a different bucket for each custom key (user)', function(done) {
+      var server = express();
+      server.use(limiter);
+      server.use(okResponse);
+      var reqs = _.flatten([
+        requests(server, 5, '/test?user=a'),
+        requests(server, 12, '/test?user=b'),
+        requests(server, 10, '/test?user=c')
+      ]);
+      async.parallel(reqs, function(err, data) {
+        withStatus(data, 200).should.have.length(25);
+        withStatus(data, 429).should.have.length(2);
+        withStatus(data, 429)[0].url.should.eql('/test?user=b');
+        withStatus(data, 429)[1].url.should.eql('/test?user=b');
+        done();
+      });
+    });
+  });
+});
-// describe 'Account throttling', ->
-//
-//   it 'concurrent requests (different accounts)', (done) ->
-//     server.use authToken
-//     server.use restify.throttle(username: true, burst: 2, rate: 0)
-//     server.get '/test', slowResponse
-//     reqs = [
-//       (next) -> request(server).get('/test?username=bob').end(next)
-//       (next) -> request(server).get('/test?username=jane').end(next)
-//       (next) -> request(server).get('/test?username=john').end(next)
-//     ]
-//     async.parallel reqs, (err, data) ->
-//       withStatus(data, 200).should.have.length 3
-//       done()
-//
-//   it 'concurrent requests (under the limit)', (done) ->
-//     server.use authToken
-//     server.use restify.throttle(username: true, burst: 3, rate: 0)
-//     server.get '/test', slowResponse
-//     reqs = [
-//       (next) -> request(server).get('/test').end(next)
-//       (next) -> request(server).get('/test').end(next)
-//     ]
-//     async.parallel reqs, (err, data) ->
-//       withStatus(data, 200).should.have.length 2
-//       done()
-//
-//   it 'concurrent requests (over the limit)', (done) ->
-//     server.use authToken
-//     server.use restify.throttle(username: true, burst: 2, rate: 0)
-//     server.get '/test', slowResponse
-//     reqs = [
-//       (next) -> request(server).get('/test?username=bob').end(next)
-//       (next) -> request(server).get('/test?username=bob').end(next)
-//       (next) -> request(server).get('/test?username=bob').end(next)
-//     ]
-//     async.parallel reqs, (err, data) ->
-//       withStatus(data, 200).should.have.length 2
-//       withStatus(data, 429).should.have.length 1
-//       done()
-function request(server, url) {
-  return function(next) {
-    supertest(server).get('/test').end(next);
-  };
-}
-function requests(server, url, count) {
+function requests(server, count, url) {
   return _.times(count, function() {
-    return request(server, url);
+    return function(next) {
+      supertest(server).get(url).end(next);
+    };
   });
 }
-function fastResponse(req, res, next) {
+function parallelRequests(server, count, url) {
+  return function(next) {
+    async.parallel(requests(server, count, url), next);
+  };
+}
+function wait(millis) {
+  return function(next) {
+    setTimeout(next, 1100);
+  };
+}
+function okResponse(req, res, next) {
   res.writeHead(200);
   res.end('ok');
 }
@@ -136,10 +145,10 @@ function fastResponse(req, res, next) {
 function withStatus(data, code) {
   var pretty = data.map(function(d) {
     return {
+      url: d.req.path,
       statusCode: d.res.statusCode,
       body: d.res.body
     }
   });
-  // console.log('pretty', pretty)
   return _.filter(pretty, {statusCode: code});
 }


@@ -7,6 +7,7 @@ describe('Options', function() {
   it('can specify a function', function() {
     var opts = options.canonical({
+      redis: {},
       key: function(req) { return req.id; },
       limit: 10,
       window: 60
@@ -18,6 +19,7 @@ describe('Options', function() {
   it('can be the full client IP', function() {
     var opts = options.canonical({
+      redis: {},
       key: 'ip',
       limit: 10,
       window: 60
@@ -29,6 +31,7 @@ describe('Options', function() {
   it('can be the client IP/32 mask', function() {
     var opts = options.canonical({
+      redis: {},
       key: 'ip/32',
       limit: 10,
       window: 60
@@ -41,6 +44,7 @@ describe('Options', function() {
   it('fails for invalid keys', function() {
     (function() {
       var opts = options.canonical({
+        redis: {},
        key: 'something',
        limit: 10,
        window: 60
@@ -54,6 +58,7 @@ describe('Options', function() {
   it('should accept numeric values in seconds', function() {
     var opts = options.canonical({
+      redis: {},
       key: 'ip',
       limit: 10, // 10 requests
       window: 60 // per 60 seconds
@@ -66,45 +71,34 @@ describe('Options', function() {
   describe('rate shorthand notation', function() {
-    it('X req/second', function() {
+    function assertRate(rate, limit, window) {
       var opts = options.canonical({
+        redis: {},
        key: 'ip',
-        rate: '10 req/second'
+        rate: rate
      });
-      opts.limit.should.eql(10);
-      opts.window.should.eql(1);
+      opts.limit.should.eql(limit, 'Wrong limit for rate ' + rate);
+      opts.window.should.eql(window, 'Wrong window for rate ' + rate);
+    }
+    it('can use the full unit name (x/second)', function() {
+      assertRate('10/second', 10, 1);
+      assertRate('100/minute', 100, 60);
+      assertRate('1000/hour', 1000, 3600);
+      assertRate('5000/day', 5000, 86400);
     });
-    it('X req/minute', function() {
-      var opts = options.canonical({
-        key: 'ip',
-        rate: '20 req/minute'
-      });
-      opts.limit.should.eql(20);
-      opts.window.should.eql(60);
-    });
-    it('X req/hour', function() {
-      var opts = options.canonical({
-        key: 'ip',
-        rate: '1000 req/hour'
-      });
-      opts.limit.should.eql(1000);
-      opts.window.should.eql(3600);
-    });
-    it('X req/day', function() {
-      var opts = options.canonical({
-        key: 'ip',
-        rate: '5000 req/day'
-      });
-      opts.limit.should.eql(5000);
-      opts.window.should.eql(86400);
+    it('can use the short unit name (x/s)', function() {
+      assertRate('10/s', 10, 1);
+      assertRate('100/m', 100, 60);
+      assertRate('1000/h', 1000, 3600);
+      assertRate('5000/d', 5000, 86400);
     });
     it('has to be a valid rate', function() {
       (function() {
         var opts = options.canonical({
+          redis: {},
          key: 'ip',
          rate: '50 things'
        });

test/rate-limiter.spec.js (new file)

@@ -0,0 +1,107 @@
+var _ = require('lodash');
+var async = require('async');
+var should = require('should');
+var redis = require('redis');
+var rateLimiter = require('../lib/rate-limiter');
+describe('Rate-limiter', function() {
+  this.slow(5000);
+  this.timeout(5000);
+  var client = null;
+  before(function(done) {
+    client = redis.createClient(6379, 'localhost', {enable_offline_queue: false});
+    client.on('ready', done);
+  });
+  beforeEach(function(done) {
+    async.series([
+      client.del.bind(client, 'ratelimit:a'),
+      client.del.bind(client, 'ratelimit:b'),
+      client.del.bind(client, 'ratelimit:c')
+    ], done);
+  });
+  it('calls back with the rate data', function(done) {
+    var limiter = createLimiter('10/second');
+    var reqs = request(limiter, 5, {id: 'a'});
+    async.parallel(reqs, function(err, rates) {
+      _.pluck(rates, 'current').should.eql([1, 2, 3, 4, 5]);
+      _.each(rates, function(r) {
+        r.limit.should.eql(10);
+        r.window.should.eql(1);
+        r.over.should.eql(false);
+      });
+      done();
+    });
+  });
+  it('sets the over flag when above the limit', function(done) {
+    var limiter = createLimiter('10/second');
+    var reqs = request(limiter, 15, {id: 'a'});
+    async.parallel(reqs, function(err, rates) {
+      _.each(rates, function(r, index) {
+        rates[index].over.should.eql(index >= 10);
+      });
+      done();
+    });
+  });
+  it('can handle a lot of requests', function(done) {
+    var limiter = createLimiter('1000/second');
+    var reqs = request(limiter, 1200, {id: 'a'});
+    async.parallel(reqs, function(err, rates) {
+      rates[999].should.have.property('over', false);
+      rates[1000].should.have.property('over', true);
+      done();
+    });
+  });
+  it('resets after the window', function(done) {
+    var limiter = createLimiter('10/second');
+    async.series([
+      requestParallel(limiter, 15, {id: 'a'}),
+      wait(1100),
+      requestParallel(limiter, 15, {id: 'a'})
+    ], function(err, data) {
+      _.each(data[0], function(rate, index) {
+        rate.should.have.property('over', index > 9);
+      });
+      _.each(data[2], function(rate, index) {
+        rate.should.have.property('over', index > 9);
+      });
+      done();
+    });
+  });
+  function createLimiter(rate) {
+    return rateLimiter({
+      redis: client,
+      key: function(x) { return x.id },
+      rate: rate
+    });
+  }
+  function request(limiter, count, data) {
+    return _.times(count, function() {
+      return function(next) {
+        limiter(data, next);
+      };
+    });
+  }
+  function requestParallel(limiter, count, data) {
+    return function(next) {
+      async.parallel(request(limiter, count, data), next);
+    };
+  }
+  function wait(millis) {
+    return function(next) {
+      setTimeout(next, 1100);
+    };
+  }
+});