
Load Testing Redis Solutions (Integration Tests)

f27d edited this page Jul 8, 2018 · 2 revisions

The purpose of basic load testing is to establish the following:

  • Failure threshold - how much load the system can take before failures begin, up to total failure
  • Load response - how the system's response times change as a function of the load on it
  • Consistency of the user experience (e.g. mean vs. 99th percentile response times)
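The gap between the mean and the high percentiles is what the tests below are really probing. As an illustrative sketch (with made-up latency numbers, not the measured ones), a handful of slow outliers can leave the typical user's experience untouched while dragging the mean and 99th percentile up sharply:

```python
# Hypothetical latency sample (ms): mostly fast responses plus a few slow outliers.
import statistics

latencies = [50] * 97 + [2000, 3000, 4000]

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

mean = statistics.mean(latencies)   # pulled up by the three outliers
p90 = percentile(latencies, 90)     # the typical user never sees them
p99 = percentile(latencies, 99)     # the unlucky 1% sees them in full
print(mean, p90, p99)
```

Here the mean (138.5ms) is nearly three times the 90th percentile (50ms), which is exactly the shape of distribution a caching layer with occasional slow misses produces.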

Using our free LoadImpact account, we ran 50 virtual users concurrently for 2 minutes to see how the response times vary.

Our tests hit four URLs (/, /4, /23, /2) and record the max of the four per-URL aggregates.
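The actual runs used LoadImpact's hosted virtual users, but the test loop can be sketched in plain Python. Everything here is an assumption for illustration: the `fetch` callable stands in for an HTTP GET against the API, and the aggregation mirrors the "max of the four per-URL aggregates" described above.

```python
# Sketch only: the real load was generated by LoadImpact, not this script.
# `fetch` is a placeholder for an HTTP GET against the API under test.
import concurrent.futures
import time

PATHS = ["/", "/4", "/23", "/2"]

def time_request(fetch, path):
    """Time a single call to fetch(path), in milliseconds."""
    start = time.perf_counter()
    fetch(path)
    return (time.perf_counter() - start) * 1000.0

def run_load(fetch, virtual_users=50, iterations=1):
    """Each virtual user hits all four paths; return the per-path mean
    latencies and the max of those aggregates."""
    samples = {p: [] for p in PATHS}

    def one_user():
        for _ in range(iterations):
            for p in PATHS:
                # list.append is atomic in CPython, so this is thread-safe
                samples[p].append(time_request(fetch, p))

    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as ex:
        futures = [ex.submit(one_user) for _ in range(virtual_users)]
        for f in futures:
            f.result()

    aggregates = {p: sum(v) / len(v) for p, v in samples.items()}
    return aggregates, max(aggregates.values())
```

A real run would pass something like `fetch = lambda p: requests.get(BASE_URL + p)` and a larger `iterations` to fill the 2-minute window.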

Our first test was against the system pre-Redis (i.e. direct Postgres calls).

Test A) 3rd July - vs. Postgres (with caching at the API level), no Redis.

  • Mean: 305ms
  • 99th percentile: 4310ms
  • 95th percentile: 248ms
  • 90th percentile: 176ms

Test B) 5th July - vs. Redis running on RedisLabs (outside GCP, hosted on AWS). The extra lag comes from traffic leaving Google's network and returning.

  • Mean: 115ms
  • 99th percentile: 454ms
  • 95th percentile: 279ms
  • 90th percentile: 233ms

Test C) 8th July - vs. Redis running on Google Cloud (a product called Memorystore). Since it sits on Google's network, the API calls should be faster.

  • Mean: 43ms
  • 99th percentile: 439ms
  • 95th percentile: 220ms
  • 90th percentile: 86ms

Interestingly, the overall results rank C > B > A, yet for the 90th-95th percentile of users A beats B. The reason is a trade-off: A has slow initial calls (hitting Postgres) but then serves subsequent requests from its API-level cache, whereas B goes to an external Redis, so every request carries the latency of the round trip.
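A toy model makes the trade-off concrete. The numbers below are illustrative, not the measured ones: system A pays a large one-off cost per key before its cache warms, while system B pays a fixed network round trip on every request. The mean then favours B, but the 90th percentile favours A, matching the pattern above.

```python
# Illustrative model only - db_ms, cache_ms, and network_ms are made-up numbers.

def simulate_a(requests, db_ms=8000.0, cache_ms=5.0):
    """System A: first request per key goes to Postgres, later ones hit
    the API-level cache."""
    seen = set()
    out = []
    for key in requests:
        out.append(cache_ms if key in seen else db_ms)
        seen.add(key)
    return out

def simulate_b(requests, network_ms=250.0):
    """System B: every request pays a fixed round trip to external Redis."""
    return [network_ms for _ in requests]

# 100 requests spread over the four test URLs.
requests = ["/"] * 25 + ["/4"] * 25 + ["/23"] * 25 + ["/2"] * 25
a = sorted(simulate_a(requests))
b = sorted(simulate_b(requests))

mean_a = sum(a) / len(a)      # dominated by the four slow cache misses
mean_b = sum(b) / len(b)
p90_a = a[int(0.9 * len(a))]  # most users hit the warm cache
p90_b = b[int(0.9 * len(b))]
print(mean_a, mean_b, p90_a, p90_b)
```

With these numbers A's mean is worse than B's, yet A's 90th percentile is far better, which is exactly the crossover the load tests observed.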

Our inspection of Redis (on RedisLabs) showed it fulfils a request in under 10ms, so the remaining time is almost certainly network latency.
