Surprised? It’s counter-intuitive that Redis, a cache usually introduced to improve performance, can actually slow an application down. I used to think that a cache should always be fast, until I found that my APIs using Redis had several hundred milliseconds of latency. Today I would like to show you how I found and fixed that performance bottleneck.
How Redis executes commands
You should be aware that Redis is single-threaded, so similarly to Node.js, we want mostly non-blocking operations, or at least operations that execute in a small amount of time. We don’t want to block the event loop, which could have a significant impact on overall performance. Redis is generally a very fast in-memory cache; however, if you don’t pay attention to the time complexity of its commands, you can unknowingly slow down even the simplest requests.
Finding slow commands
If you are familiar with the slow query log from PostgreSQL or MySQL, I have good news: Redis offers the same kind of functionality. In redis-cli you can get your last N slowest commands by executing SLOWLOG GET N, for example SLOWLOG GET 10.
Output explained by the SLOWLOG documentation:
1) A unique progressive identifier for every slow log entry
2) The unix timestamp at which the logged command was processed
3) The amount of time needed for its execution, in microseconds (in the example the MGET command execution took ~13ms)
4) The command and its arguments
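The duration field is reported in microseconds, which is easy to misread as milliseconds. A small Python sketch of working with one entry (the values and the tuple layout mirror the fields above, but the concrete numbers are invented for illustration):

```python
# One slowlog entry, shaped like the four fields redis-cli prints:
# (id, unix timestamp, duration in microseconds, command with arguments).
# The concrete values below are made up for illustration.
entry = (14, 1693000000, 12836, ["MGET", "key1", "key2", "key3"])

entry_id, when, duration_us, command = entry
duration_ms = duration_us / 1000  # 12836 µs is roughly 12.8 ms
```

Dividing by 1000 (not one million) gives milliseconds, which is usually the unit you compare against your API latency budget.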
KEYS pattern command
Let’s say we build a newsfeed feature and implement fan-out on write: when a user posts something, the post is pushed to all of their friends and stored in Redis to improve performance. Additionally, we don’t want to put all the updates from a user’s friends into one list; we would like separate keys, so we end up with keys like
users:USER_ID:friends:FRIEND_ID. It might be tempting to use the
KEYS users:USER_ID:friends:* command to get all keys related to our user’s newsfeed.
If we stored just a few thousand keys, that wouldn’t make a significant difference; the app would still perform fine. The problem starts when we have several million keys: since the
KEYS command has O(N) time complexity, it has to make a full scan of all the keys on every newsfeed API request to find the given pattern.
That behavior hurts Redis performance by blocking the event loop, so other commands that would otherwise be very fast are now executed with large latencies, especially when the app is under heavy load.
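To see why this hurts, here is a pure-Python sketch (not Redis itself; the key names and counts are invented) that mimics what KEYS does internally: the pattern is tested against every key in the keyspace, so the cost grows with the total number of keys, not with the number of matches.

```python
import fnmatch

# Toy keyspace standing in for Redis: one newsfeed key per (user, friend)
# pair, plus many unrelated keys. All names and sizes are illustrative.
keyspace = {f"users:{u}:friends:{f}": "post" for u in range(100) for f in range(10)}
keyspace.update({f"session:{i}": "data" for i in range(100_000)})

def keys(pattern):
    """Mimics KEYS: a full scan of the entire keyspace, O(N) in total keys."""
    scanned = 0
    matches = []
    for key in keyspace:
        scanned += 1
        if fnmatch.fnmatch(key, pattern):
            matches.append(key)
    return matches, scanned

matches, scanned = keys("users:42:friends:*")
# Only 10 keys match the pattern, yet every single key was examined,
# and while that scan runs, single-threaded Redis can serve nothing else.
```

In the sketch a request that needs 10 keys inspects all 101,000 of them; in real Redis the same full scan happens inside the event loop on every call.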
First of all, it is rarely a good idea to base application logic on finding keys in Redis by pattern. A much better solution is to store the list of friends under a separate key (or retrieve it from elsewhere) and then fetch only the keys we are interested in, without scanning through all the others.
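Sketched in plain Python (dicts stand in for Redis, and the users:USER_ID:friends set name is my own choice, not something from the article): keep an explicit set of friend ids, build the exact key names from it, and fetch only those, the way SMEMBERS followed by MGET would.

```python
# Dicts stand in for Redis; all key names here are illustrative.
friends = {"users:42:friends": {"7", "13", "99"}}  # maintained on friendship changes
feeds = {f"users:42:friends:{f}": f"posts of {f}" for f in ("7", "13", "99")}
feeds.update({f"session:{i}": "noise" for i in range(1_000)})  # unrelated keys

def newsfeed(user_id):
    # Like SMEMBERS users:USER_ID:friends -> the exact friend ids we need
    friend_ids = friends[f"users:{user_id}:friends"]
    # Like MGET on the exact keys: cost is O(friends), not O(total keys)
    wanted = [f"users:{user_id}:friends:{f}" for f in friend_ids]
    return [feeds[k] for k in wanted]

posts = newsfeed(42)
```

The lookup now touches three keys regardless of how many unrelated keys live in the database, so no full scan is ever needed.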
If we are stuck with this design, there is another option that doesn’t block Redis while iterating through the keys: the
SCAN command. It returns results in pages, together with a cursor (the first returned value) that you pass to the next call; the iteration is complete when the server returns a cursor of 0.
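The cursor contract can be sketched in pure Python (this is not a real client, and real Redis cursors are opaque tokens rather than the plain offsets used here; the page size and key names are invented): each call does a small bounded amount of work and hands control back, so between calls Redis stays free to serve other commands.

```python
import fnmatch

# Illustrative keyspace: 25 newsfeed keys mixed with 25 unrelated ones.
all_keys = [f"users:42:friends:{i}" for i in range(25)] + [f"other:{i}" for i in range(25)]

def scan(cursor, pattern, count=10):
    """Mimics SCAN cursor MATCH pattern COUNT count: one small page per call.
    Real Redis cursors are opaque; a list offset is used here for simplicity."""
    page = all_keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(all_keys):
        next_cursor = 0  # a returned cursor of 0 means the iteration is done
    return next_cursor, [k for k in page if fnmatch.fnmatch(k, pattern)]

cursor, found = 0, []
while True:
    cursor, page = scan(cursor, "users:42:friends:*")
    found.extend(page)
    if cursor == 0:
        break
```

The total work is still proportional to the keyspace, but it is spread across many short calls instead of one long blocking one, which is exactly the trade-off SCAN makes.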