Let’s try Performance Testing!

Mushaffa Huda
5 min read · Jun 6, 2021


Performance Testing refers to a software testing process used to test the speed, response time, stability, reliability, scalability, and resource usage of a software application under a particular workload.

Huh? Why do we have to do it? Is it that important?

Well, the main purpose of performance testing is to identify and eliminate the performance bottlenecks in the software, thus enabling smoother deployment and eliminating stressful errors during real user testing.

Types of Performance Testing

Performance testing is mainly focused on these three aspects:

  • Speed — Determines whether the application responds quickly
  • Scalability — Determines the maximum user load that the application can handle.
  • Stability — Determines if the application is stable under varying loads.

There are numerous types of performance testing, but in this article, we will do a deep dive into load testing and stress testing and try them out on a real application :).

Let's go!

Performance Test

For this experiment, we will run both a load test and a stress test against a document search engine application built by me and a group of friends for PPL 2021.

The application consists of a backend app built with Django and a frontend app built with React. For this experiment, we will run the tests against the backend application.

document search application

Load Testing

Load testing checks the application’s ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.

For load testing, we will use Locust.

Locust is an open-source load testing tool written in Python. It is robust, easy to use, and pretty much satisfies our requirements for testing our application.

You can install it easily with the command pip install locust

Our application is running on localhost port 8000, i.e. http://localhost:8000

To use Locust, we first create a locustfile.py and define our test specification there: the HTTP request type, headers, and so on. Here is the locustfile that we are going to use for this experiment.

locustfile.py
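
(The embedded gist is not reproduced here, so below is a minimal sketch of what such a locustfile might look like. The user class name, endpoint paths, query, and credentials are illustrative assumptions rather than the exact file we used; the /auth/login path simply mirrors the endpoint targeted by the stress test later in this article.)

from locust import HttpUser, task, between

class DocumentSearchUser(HttpUser):
    # wait 1-3 seconds between tasks to mimic real user think time
    wait_time = between(1, 3)

    @task
    def login(self):
        # hypothetical login endpoint with placeholder credentials
        self.client.post("/auth/login", json={
            "username": "testuser",
            "password": "testpassword",
        })

    @task(3)
    def search(self):
        # hypothetical search endpoint, weighted 3x heavier than login
        self.client.get("/search", params={"q": "annual report"})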

Next, we use the locust command to start a load test against our server. We run the following command in the same directory as our locustfile.py:

python3 -m locust --host http://localhost:8000 --users 100 --spawn-rate 10

In this instance, I use 100 users with a spawn rate of 10. With this setting, we can watch the load grow gradually as users are spawned.

We can follow the ongoing test by opening Locust’s web UI at localhost:8089
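
As a side note, Locust can also be driven from a Python script instead of the CLI, which is handy for CI pipelines. The sketch below is based on Locust’s documented Environment API and reuses the DocumentSearchUser class from the sketch above; details may vary slightly between Locust versions.

import gevent
from locust.env import Environment
from locust.stats import stats_printer, stats_history

from locustfile import DocumentSearchUser  # the user class sketched earlier

# set up a local runner pointed at the same host as the CLI example
env = Environment(user_classes=[DocumentSearchUser], host="http://localhost:8000")
runner = env.create_local_runner()

# periodically print aggregated stats and keep a stats history
gevent.spawn(stats_printer(env.stats))
gevent.spawn(stats_history, env.runner)

# ramp up to 100 users at a spawn rate of 10 users/second
runner.start(100, spawn_rate=10)

# stop the test after 60 seconds and wait for the runner to finish
gevent.spawn_later(60, runner.quit)
runner.greenlet.join()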

By the time we reached 1,000 requests, our application was handling 48 requests/second with a 2% failure rate.

By the time we reached 5,000 requests, the application’s RPS (requests/second) did not fluctuate much, but the failure rate went up to around 4%.

In the end, by the time we reached 10,000 requests, the RPS still did not fluctuate much, and the failure rate actually went down to around 3%.

I stopped the test once we went over 10,000 requests, since the data we had gathered was sufficient.

Here is the graph of the application’s response time in milliseconds.

Application response time

As you can see in the graph, the average response time remained relatively stable throughout the test. There were no sudden irregular fluctuations, so it is safe to assume that our application handles requests and responds gracefully.

RPS (requests/second)

From the second graph above, we can also tell that the RPS our application handles hovers steadily around 40–50, with no sudden fluctuations. Combined with the fact that we are testing with 100 concurrent users, this is proof that the application can handle a decent number of users at the same time.

Stress Testing

Stress testing means testing an application under extreme workloads to see how it handles high traffic or heavy data processing. The objective is to identify the breaking point of the application.

For stress testing, we are going to use a Node.js-based benchmarking tool called AutoCannon.

AutoCannon is an HTTP/1.1 benchmarking tool written in node with support for HTTP pipelining and HTTPS.

To install AutoCannon, you can use the command:

npm i autocannon -g

Now, to run AutoCannon against our application, use the command below:

autocannon -c 100 -d 5 -p 10 http://localhost:8000/auth/login

The options are as follows:

  • -c: The number of concurrent connections to use. Default: 10
  • -d: The number of seconds to run autocannon. Default: 10
  • -p: The number of pipelined requests to use. Default: 1
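
Because the rest of our tooling in this project is Python, one small convenience is to wrap the autocannon call in a script so the same stress test can be repeated against several endpoints. This is just an illustrative sketch; the endpoint list is made up for the example and it only reuses the flags shown above.

import subprocess

# hypothetical list of endpoints to stress test one after another
ENDPOINTS = [
    "http://localhost:8000/auth/login",
    "http://localhost:8000/search",
]

for url in ENDPOINTS:
    print(f"=== Stress testing {url} ===")
    # same settings as above: 100 connections, 5 seconds, 10 pipelined requests
    subprocess.run(
        ["autocannon", "-c", "100", "-d", "5", "-p", "10", url],
        check=True,
    )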

By running this command against our backend server on localhost port 8000, we get the following results.

From the test results, we can conclude that our application handled around 2,000 requests with 100 concurrent connections during the 5-second stress test. This should be sufficient for the expected usage of this application and provides us with the necessary metrics for further testing.

Afterthoughts

In software engineering, performance testing is necessary before shipping any software product. It ensures customer satisfaction and protects the investment against product failure. The cost of performance testing is usually more than made up for by improved customer satisfaction, loyalty, and retention.

So the next time you want to deploy and release your application, don’t forget to run performance testing beforehand 😄

Until next time! Cheers.
