Building an API Rate limiter using Go

In today's article we're going to be taking a quick look at how to implement rate limiters for APIs.

What are rate limiters?

They're basically a way to stop a client from spamming requests to an API: once a client exceeds its allowance, it enters a cool-down window and has to wait before it can make requests again.

This tutorial takes a simple, hands-on approach to rate limiting an API.

Getting Started

To get started simply initialize a new directory called rate_limiter and run

go mod init rate_limiter

Then inside the directory create two files, main.go and limiter/limiter.go; these will contain the code we'll dive into in the next step.
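One thing to note: the golang.org/x/time/rate package we'll use below isn't part of the standard library, so you'll also want to pull it into the module:

go get golang.org/x/time/rate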

We'll start off by creating a global rate limiter (not user specific) and then modify our code accordingly. We'll use the Token Bucket algorithm: you have a bucket of tries (e.g. 3 tries), and the bucket is refilled by 1 try every n seconds (we'll get to adjust this in the implementation). For example, if the bucket is refilled by 1 try every second and I use all 3 tries within the first second, I'll have to wait for the bucket to refill before I can access the API again.
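If it helps to see the idea in code first, here's a rough standalone sketch of what a token bucket boils down to; the struct and its numbers are purely for illustration, since golang.org/x/time/rate will handle all of this for us:

package main

import (
    "fmt"
    "time"
)

// bucket is a toy token bucket, just to illustrate the idea;
// golang.org/x/time/rate implements a production-ready version for us.
type bucket struct {
    tokens     float64   // tries currently available
    capacity   float64   // maximum tries, e.g. 3
    refillRate float64   // tries added per second, e.g. 1
    lastRefill time.Time // when the bucket was last topped up
}

// allow refills the bucket based on elapsed time, then spends one try
// if any are available.
func (b *bucket) allow() bool {
    now := time.Now()
    b.tokens += now.Sub(b.lastRefill).Seconds() * b.refillRate
    if b.tokens > b.capacity {
        b.tokens = b.capacity
    }
    b.lastRefill = now

    if b.tokens >= 1 {
        b.tokens--
        return true
    }
    return false
}

func main() {
    b := &bucket{tokens: 3, capacity: 3, refillRate: 1, lastRefill: time.Now()}
    for i := 1; i <= 4; i++ {
        // The first 3 calls succeed immediately; the 4th is denied
        // until enough time passes for a refill.
        fmt.Println("request", i, "allowed:", b.allow())
    }
}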

First we'll implement the limiter function, which will act as middleware in our server. We'll use the golang.org/x/time/rate package, which implements the token bucket algorithm and provides the functions we need to start the implementation!

Inside limiter.go

package limiter

import (
    "net/http"

    "golang.org/x/time/rate"
)

// rateLimiter is shared by every request: 1 token refilled per second,
// with a burst (bucket size) of 2.
var rateLimiter = rate.NewLimiter(1, 2)

// Limit is middleware that rejects requests once the bucket is empty.
// It's exported (capital L) so main can call it.
func Limit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !rateLimiter.Allow() {
            http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}

What we did here is initialize an http.HandlerFunc that we'll use as our middleware. But first we define rateLimiter from the golang.org/x/time/rate package. The first argument is the refill rate, i.e. how many tries are added to the bucket per second (here 1 try per second), and the second argument is the burst size, the maximum number of tries the bucket can hold, which in our case is 2. Inside the handler we simply check whether the client has any tries left in the bucket; that's what Allow does, returning true if a try is available and false otherwise. If there are none, we return a 429 Too Many Requests error.
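As a small aside, if it feels more natural to think "one try every n seconds" rather than "n tries per second", the package also provides rate.Every, which converts an interval into a rate.Limit. The declaration below should behave the same as ours (it just additionally needs the "time" import):

// Equivalent to rate.NewLimiter(1, 2): one token added every second,
// with a burst of 2.
var rateLimiter = rate.NewLimiter(rate.Every(time.Second), 2)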

In main.go we will have the following code

package main

import (
    "log"
    "net/http"
    "rate_limiter/limiter"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", helloWorldHandler)

    log.Println("listening on port 3000")
    // Wrap the mux with our rate limiting middleware.
    log.Fatal(http.ListenAndServe(":3000", limiter.Limit(mux)))
}

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("Hello World!"))
}

This is a simple server running on port 3000; we wrap the mux (the request handler) with our limiter middleware.

Running a curl command once yields Hello World!. Running it repeatedly, however, the first two requests print Hello World! and the third comes back as a Too Many Requests response, as follows:

~|⇒ curl -i localhost:3000
HTTP/1.1 200 OK
Date: Fri, 01 Apr 2022 20:11:39 GMT
Content-Length: 12
Content-Type: text/plain; charset=utf-8

Hello World!%
~|⇒ curl -i localhost:3000
HTTP/1.1 200 OK
Date: Fri, 01 Apr 2022 20:11:39 GMT
Content-Length: 12
Content-Type: text/plain; charset=utf-8

Hello World!%
~|⇒ curl -i localhost:3000
HTTP/1.1 429 Too Many Requests
Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff
Date: Fri, 01 Apr 2022 20:11:40 GMT
Content-Length: 18

Too Many Requests

Rate limiting by user

So far we've rate limited the API for everyone trying to access it. What if we could do it per user? We need a way to track users, for example by IP address, and give each one a rate limiter of their own. The simplest approach is to store them in a map where every IP maps to that user's limiter. To keep the map from growing bigger than necessary, we can also clean up and remove users that no longer access the API. Let's get to implementing this.

We'll update our limiter.go with the following code

package limiter

import (
    "net"
    "net/http"
    "sync"

    "golang.org/x/time/rate"
)

// clients maps each client IP to its own limiter.
var clients = make(map[string]*rate.Limiter)
var mu sync.RWMutex

// getClient returns the limiter for the given IP, creating one if needed.
func getClient(ip string) *rate.Limiter {
    // Fast path: most requests only need the shared read lock.
    mu.RLock()
    limiter, exists := clients[ip]
    mu.RUnlock()
    if exists {
        return limiter
    }

    mu.Lock()
    defer mu.Unlock()
    // Re-check: another goroutine may have created the limiter while we
    // were waiting for the write lock.
    if limiter, ok := clients[ip]; ok {
        return limiter
    }
    limiter = rate.NewLimiter(1, 2)
    clients[ip] = limiter
    return limiter
}

// Limit is the middleware that rate limits each client by IP.
func Limit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ip, _, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil {
            http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
            return
        }
        limiter := getClient(ip)
        if !limiter.Allow() {
            http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}

What we did here is obtain the user's IP address in our handler and call getClient, which checks the map of IPs to limiters: if the client is a new entry it creates a limiter, adds it, and returns it; otherwise it returns the existing client's limiter. Inside getClient we use a sync.RWMutex to control the concurrency issues that can occur when multiple concurrent requests access the same shared map. An RWMutex hands out shared locks for reads and an exclusive lock for writes, so only one goroutine can write at a time but several can read simultaneously. Note the re-check after taking the write lock: two requests from the same IP could both miss on the read path, and without the re-check the second would overwrite the first's limiter. For more info about concurrency and mutex locking check out my blog here.
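If you haven't come across sync.RWMutex before, here's a tiny standalone sketch of the "many readers, one writer" idea, separate from the limiter itself:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.RWMutex
    hits := 0

    var wg sync.WaitGroup
    // Several readers can hold the read lock at the same time.
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.RLock()
            fmt.Println("read hits =", hits)
            mu.RUnlock()
        }()
    }

    // A writer needs the exclusive lock, which waits until no readers
    // (or other writers) hold the mutex.
    mu.Lock()
    hits++
    mu.Unlock()

    wg.Wait()
}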

This change returns the same responses as before, but now each user gets a limiter of their own when accessing the API. The last thing left is cleaning up users that no longer access the API, to free up that memory. A good solution is to record a last-seen timestamp for every client; if it exceeds, for example, 2 minutes, we remove that client from the map. This can be done by a separate goroutine that runs in the background and cleans the map so it doesn't keep growing forever.

Our limiter.go should now look like the following

package limiter

import (
    "net"
    "net/http"
    "sync"
    "time"

    "golang.org/x/time/rate"
)

// client holds a per-IP limiter and the time of the last request,
// so idle clients can be cleaned up later.
type client struct {
    limiter  *rate.Limiter
    lastSeen time.Time
}

var clients = make(map[string]*client)
var mu sync.Mutex

// getClient returns the limiter for the given IP, creating one if needed,
// and records when the client was last seen.
func getClient(ip string) *rate.Limiter {
    // Every call updates lastSeen, which is a write, so we take the
    // exclusive lock for the whole lookup.
    mu.Lock()
    defer mu.Unlock()

    c, exists := clients[ip]
    if !exists {
        c = &client{limiter: rate.NewLimiter(1, 2)}
        clients[ip] = c
    }
    c.lastSeen = time.Now()
    return c.limiter
}

// CleanupUsers periodically removes clients that haven't been seen for
// more than 2 minutes so the map doesn't grow forever.
func CleanupUsers() {
    for {
        time.Sleep(time.Minute)

        mu.Lock()
        for ip, c := range clients {
            if time.Since(c.lastSeen) > 2*time.Minute {
                delete(clients, ip)
            }
        }
        mu.Unlock()
    }
}

// Limit is the middleware that rate limits each client by IP.
func Limit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ip, _, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil {
            http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
            return
        }
        if !getClient(ip).Allow() {
            http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}

Finally, start the cleanup goroutine in main.go, before the server starts listening:

go limiter.CleanupUsers()

What we did is add a client type holding a limiter and a last-seen timestamp, which we update on every visit from that client. We also added CleanupUsers, which runs on a separate goroutine and cleans up the map by removing users that haven't used the API for more than 2 minutes, and we invoke it when the server starts. Since every lookup now also updates the last-seen timestamp (a write), getClient takes the exclusive lock for the whole operation, so a plain sync.Mutex is enough here.
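For completeness, main.go should now look roughly like this, identical to before except that the cleanup goroutine is started before the server:

package main

import (
    "log"
    "net/http"
    "rate_limiter/limiter"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", helloWorldHandler)

    // Clean up idle clients in the background so the map doesn't grow forever.
    go limiter.CleanupUsers()

    log.Println("listening on port 3000")
    log.Fatal(http.ListenAndServe(":3000", limiter.Limit(mux)))
}

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("Hello World!"))
}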

With that, we've implemented a user-specific rate limiter that stops each user from spamming the API. Before I conclude the article I would love to thank Alex Edwards and his amazing website full of insightful articles that helped me write this article. Here's a reference to his article on the same topic. That was it for this article, and see you in the next one!
