There is always a debate about how important speed is to the software industry. People say, and I quote, "if the customer/user is not complaining, there is no issue and we are fine."
I will try to discuss this here from a different angle:
what is the hidden cost you pay when your site/service is slow, and how are you losing even though the customer is not complaining?
They added a 5-second delay to each page load time. Notable facts they found:
The one-second delay resulted in a 4.9% drop in the number of articles a visitor read
The three-second delay resulted in a 7.9% drop
Visitors read less whenever delays occurred
Effect on Sales: 79% of customers who report dissatisfaction with website performance are less likely to buy from that same site again.
Speed Affects Revenue: if your site makes $100,000/month, a one-second improvement in page speed brings an extra $7,000/month.
Customer loyalty can also be affected by the speed of the site/service.
To sum up, there will always be a cost for slowness, as described in this article. Revenue, satisfaction, and loyalty are the price you pay (or, more accurately, what you lose) when you neglect the speed of your site/service. Yes, your customer may not complain, but that is not proof that he/she is satisfied with your service.
When evaluating a performance test report, most of the time we look only at the response time, and specifically the average response time.
But if you take a deeper look, the performance test report provides much more information.
In this article I will use one of JMeter's basic reports, the "Summary Report", as an example to explain what I mean.
The focus of this article will be on the following terms/values:
Min Response Time
Max Response Time
Standard Deviation:
The standard deviation is a measure of how the response time is spread out around the mean. Simply put, the smaller the standard deviation, the more consistent the response time.
The "Logout" transaction, having the lowest standard deviation (0.7), shows that its response times are more consistent than those of the other two.
The standard deviation in your test tells you whether the response time of a particular transaction is consistent throughout the test. The smaller the standard deviation, the more consistent the transaction's response time, and the more confident you can be about that particular page/request.
The Min is the shortest time taken by any sample for a specific label. If we look at the Min value for Label 1, then out of 20 samples, the shortest response time any sample had was 584 milliseconds.
The Max is the longest time taken by any sample for a specific label. If we look at the Max value for Label 1, then out of 20 samples, the longest response time any sample had was 2867 milliseconds.
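As a quick illustration of how these values relate, here is a small Python sketch computed over a hypothetical set of 20 sample times (the numbers are made up, not from a real report; JMeter's standard deviation is the population form, which `statistics.pstdev` matches):

```python
from statistics import mean, pstdev

# Hypothetical response times in milliseconds for one label (20 samples);
# illustrative values only, not taken from a real test report.
samples = [584, 612, 700, 655, 890, 1020, 743, 802, 611, 950,
           1210, 876, 640, 598, 1430, 720, 805, 990, 2867, 760]

print("Min:", min(samples))                    # 584 ms, the shortest sample
print("Max:", max(samples))                    # 2867 ms, the longest sample
print("Average:", round(mean(samples), 1))
print("Std. Dev:", round(pstdev(samples), 1))  # smaller = more consistent
```

Note how a single slow outlier (the 2867 ms sample) barely moves the average but inflates the standard deviation, which is exactly why it is worth reading more than the average column.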
The performance testing process is not complex, but it has a lot of things to keep an eye on. In this article I will focus on the factors that I personally think will affect your test results. I will try to demonstrate the effect of most of them, and I will add references where they exist.
Think Time :
By definition, think time is the time between the completion of one request and the start of the next request.
So we can say it is a kind of delay. Obviously, a long think time means long delays and less pressure on the system under test, while little or no think time means more pressure on the system under test.
Short / no think time = more pressure
Long think time = less pressure
The question then becomes: how do you determine the suitable think time for your system? You can find out how long users spend on your system's pages from an analytics tool like Google Analytics.
From the above screenshot we can roughly determine how long a user spends on each page during a session (the value shown is the average across all users in a specific date range).
What you can do is randomize a value between lower and upper think time bounds, so that each request / each user gets a different think time.
The closer the values you use are to real life, the more realistic your results will be.
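A minimal sketch of that randomization in Python (the 2-8 second bounds are an assumption for illustration; in JMeter, a Uniform Random Timer gives a similar effect):

```python
import random

def think_time(lower: float, upper: float) -> float:
    """Return a random think time in seconds between the given bounds.

    The bounds here are assumptions; derive real ones from an analytics
    tool such as Google Analytics (average time on page).
    """
    return random.uniform(lower, upper)

# Each request of each virtual user gets a different pause:
delays = [think_time(2.0, 8.0) for _ in range(5)]
print([round(d, 1) for d in delays])
```

In a real script you would `time.sleep()` (or let the tool's timer wait) for the returned value between requests.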
Generating Users :
In most performance testing tools, if not all of them, you have at least two options for user generation:
Constant load: all users start hitting the server at the same moment the test starts.
Step-up load / ramp-up: a new user/thread is introduced every specified amount of time.
In most cases the ramp-up approach is the better one, unless you want to test a specific scenario, because hitting the system under test with all users at the same moment is not ideal and often not realistic. It will badly affect the measured response times if the system is not designed to sustain that kind of hit.
There is no ideal number for the step duration, so it can be tweaked during the test run, or you can try to get this information from analytics, the same way we described for think time above.
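To make the two options concrete, here is a small Python sketch of when each user would start under a linear ramp-up (similar in spirit to JMeter's Thread Group ramp-up period), with the constant load as the degenerate case:

```python
def ramp_up_schedule(total_users: int, ramp_up_seconds: float) -> list:
    """Start time in seconds of each virtual user under a linear ramp-up."""
    step = ramp_up_seconds / total_users
    return [round(i * step, 2) for i in range(total_users)]

# 10 users over 60 seconds: one new user every 6 seconds.
print(ramp_up_schedule(10, 60))
# [0.0, 6.0, 12.0, 18.0, 24.0, 30.0, 36.0, 42.0, 48.0, 54.0]

# A ramp-up of 0 is effectively a constant load: everyone starts at once.
print(ramp_up_schedule(10, 0))  # all start times are 0.0
```

Tweaking the step duration is just a matter of changing the ramp-up period relative to the user count.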
Test Data :
The data used during the performance test run is important: the closer the data is to real life, the more accurate the results you will get.
Also, avoid using the same data for all generated users (user credentials, search keywords, etc.); this eliminates the chance that caching will skew the test results.
Use unique data for each generated user, and make sure you have enough data to last the whole test run.
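A sketch of handing each virtual user its own data row, assuming a hypothetical pool of credentials and keywords (in JMeter this role is usually played by a CSV Data Set Config element):

```python
import itertools

# Hypothetical pool: 100 unique credentials plus a few search keywords.
credentials = [(f"user{i:03d}", f"pass{i:03d}") for i in range(1, 101)]
keywords = itertools.cycle(["shoes", "laptop", "coffee", "headphones"])
cred_iter = iter(credentials)

def next_user_data():
    """Give the next virtual user its own row; fail loudly if data runs out."""
    try:
        username, password = next(cred_iter)
    except StopIteration:
        raise RuntimeError("Not enough test data for the configured user count")
    return username, password, next(keywords)

print(next_user_data())  # ('user001', 'pass001', 'shoes')
```

Failing loudly when the pool is exhausted is deliberate: silently recycling data mid-run reintroduces the caching problem described above.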
Latency is the time from sending out the request until the first byte of the response is received; it is also called Time to First Byte (TTFB).
You will always have latency in your test unless you are testing in an ideal test environment.
But you can reduce the latency by placing your load-generator machines in the region closest to your hosted application.
You should initiate your test from the same region your real users will access the application from, or close to it. This will lead to more accurate test results.
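To illustrate what Time to First Byte actually measures, here is a rough Python sketch using a raw socket (the host name is a placeholder; real load testing tools report this value for you):

```python
import socket
import time

def time_to_first_byte(host: str, port: int = 80, path: str = "/") -> float:
    """Seconds from sending a request until the first response byte arrives."""
    with socket.create_connection((host, port), timeout=10) as sock:
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {host}\r\nConnection: close\r\n\r\n")
        start = time.perf_counter()
        sock.sendall(request.encode())
        sock.recv(1)  # blocks until the first byte of the response
        return time.perf_counter() - start

# Example (needs network access; 'example.com' is a placeholder):
# print(f"TTFB: {time_to_first_byte('example.com') * 1000:.0f} ms")
```

Run this once from a machine near the application and once from the region your users are in, and the difference you see is exactly the latency component discussed above.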
It is normal for most test executions to be initiated from a single machine/server when the number of generated users is not large.
But it is recommended to distribute load generation among different machines/servers even when the number of users is not that large.
This helps spread the generation load and avoids security restrictions triggered by a high request frequency from a single host.
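A tiny sketch of splitting a user count across load generators (the host names are hypothetical; with JMeter, distributed runs are started against remote hosts via `jmeter -n -t plan.jmx -R host1,host2,...`):

```python
def split_users(total_users: int, generators: list) -> dict:
    """Divide virtual users as evenly as possible across generator hosts."""
    base, extra = divmod(total_users, len(generators))
    return {host: base + (1 if i < extra else 0)
            for i, host in enumerate(generators)}

hosts = ["loadgen-1", "loadgen-2", "loadgen-3"]  # hypothetical host names
print(split_users(100, hosts))
# {'loadgen-1': 34, 'loadgen-2': 33, 'loadgen-3': 33}
```

Spreading even a modest user count like this also spreads the source IPs, which is what sidesteps per-host rate limiting.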
Have you faced problems related to the factors mentioned above? How did you manage them? Please share your tips, experience, comments, and questions to further enrich this discussion.
I think this is one of the questions you may hear, or may want an answer to yourself: how often should we run performance tests, what should be tested, and how do we decide whether the performance is good or bad?
I can't say I have an absolute answer to all of these questions, but I think I have an answer.
Let's start with "How often should you run performance testing?"
Before we ask how often, let's first ask why.
In my opinion, you plan, design, and execute a performance test run for one of the following reasons:
– Set a performance baseline for a running system.
– Compare performance between an old (legacy) system and a new system.
– Detect performance enhancements / degradations between different versions of a software or hot-fixes (patches).
Since we are asking how often, it means we have already executed performance testing before, and may have a test suite ready to execute whenever it is needed.
Here is how often you should run performance testing, in my opinion of course 🙂
When you introduce, modify, or enhance code that may affect the currently running software.
When you modify the current environment infrastructure, or change configuration(s) that may affect the system's performance.
To simulate a load that happened in production, in order to identify the cause of a production incident related to performance.
Before every peak season, mostly for e-commerce websites (e.g. "Black Friday"), to make sure that everything works as expected.
What do you think? If you have other ideas or real-life scenarios, leave them in the comments section.