The performance test process is not complex, but there are many things to keep an eye on. In this article I will focus on the factors that I believe affect your test results. I will try to demonstrate the effect of most of them and add references where they exist.
Think Time :
By definition, think time is the time between the completion of one request and the start of the next.
In other words, it is a kind of delay: a long think time means long delays and less pressure on the system under test, while little or no think time means more pressure.
Short / no think time = more pressure
Long / too long think time = less / no pressure
The question is how to determine a suitable think time for your system. You can find out how long users spend on your pages from an analytics tool like “Google Analytics”.
From the screenshot above we can roughly determine how long a user spends on each page during a session (the value is the average across all users within a specific date range).
You can randomize a value between lower and upper think time bounds so that each request / user gets a different think time.
The closer your values are to real life, the more realistic your results will be.
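As a minimal sketch of this idea, the helper below picks a random think time between lower and upper bounds and sleeps after each request. The 2–8 second bounds are placeholder values; in practice you would take them from your analytics data.

```python
import random
import time

def think_time(lower=2.0, upper=8.0):
    """Pick a random think time (seconds) between the lower and upper bounds."""
    return random.uniform(lower, upper)

def run_step(request, lower=2.0, upper=8.0):
    """Execute one request, then pause for a randomized think time."""
    response = request()
    time.sleep(think_time(lower, upper))
    return response
```

Each virtual user calling `run_step` in a loop will then pause for a different, realistic amount of time between requests instead of hammering the server back to back.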
Generating Users :
In most performance testing tools, if not all of them, you have at least two options for user generation:
- Constant load : all users start hitting the server at the same moment the test starts.
- Step-up load / ramp-up : a new user/thread is introduced every specified amount of time.
In most cases ramp-up user generation is the better approach, unless you want to test a specific scenario, because hitting the system under test with all users at the same moment is not ideal and is often unrealistic. It will badly affect response times if the system is not designed to sustain that kind of spike.
There is no ideal number for the step duration, so it can be tweaked during the test run, or you can derive it from analytics the same way we described for think time above.
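A ramp-up can be sketched with plain threads: start a batch of new virtual users, wait for the step duration, and repeat until the target count is reached. The step duration and batch size below are arbitrary examples, not recommendations.

```python
import threading
import time

def ramp_up(user_scenario, total_users, step_duration, users_per_step=1):
    """Start `users_per_step` new user threads every `step_duration` seconds
    until `total_users` threads are running, then wait for them to finish."""
    threads = []
    started = 0
    while started < total_users:
        for _ in range(min(users_per_step, total_users - started)):
            t = threading.Thread(target=user_scenario, args=(started,))
            t.start()
            threads.append(t)
            started += 1
        if started < total_users:
            time.sleep(step_duration)  # wait before introducing the next batch
    for t in threads:
        t.join()
```

Setting `step_duration` to zero degenerates into the constant-load case, which is a handy way to compare both strategies with the same harness.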
Test Data :
The data used during a performance test run is important: the closer it is to real-life data, the more accurate your results will be.
Also, avoid using the same data for all generated users (user credentials, search keywords, etc.); otherwise caching may skew the test results.
Use unique data for each generated user and make sure you have enough data for the whole test run.
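One simple way to guarantee unique per-user data is to generate it up front and feed it to the tool as a CSV file. The usernames, domain, and keyword list below are invented for illustration only.

```python
import csv
import io

def build_user_data(num_users):
    """Generate a unique credentials row plus a varied search keyword
    for each virtual user (all values here are made-up examples)."""
    keywords = ["laptop", "phone", "headphones", "monitor", "keyboard"]
    rows = []
    for i in range(num_users):
        rows.append({
            "username": f"vuser_{i:05d}@example.com",  # unique per user
            "password": f"Pass!{i:05d}",
            "keyword": keywords[i % len(keywords)],     # rotate search terms
        })
    return rows

def to_csv(rows):
    """Serialize the rows into CSV text, the format most tools read test data from."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["username", "password", "keyword"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Generating slightly more rows than the planned user count leaves headroom for retries, so the data pool is never exhausted mid-run.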
Latency :
Latency is the time from sending out a request until the first byte of the response is received; it is also called Time to First Byte (TTFB).
You will always have latency in your test unless you are testing in an ideal test environment.
But you can reduce the latency by placing your load-generation machines in the region closest to your hosted application.
You should initiate your test from the same region as, or one close to, the region your real users will access the application from. This will lead to more accurate test results.
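To make the TTFB definition concrete, here is a minimal sketch that measures it against a throwaway loopback server; the server, its 0.2-second artificial delay, and the raw HTTP exchange are all stand-ins for a real application.

```python
import socket
import threading
import time

def slow_server(port_holder, ready, delay=0.2):
    """Tiny TCP server that waits `delay` seconds before sending its first
    byte, so the client sees a measurable time to first byte."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    conn.recv(1024)          # read the incoming "request"
    time.sleep(delay)        # simulated server-side processing time
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nhello")
    conn.close()
    srv.close()

def measure_ttfb(host, port):
    """Send a request and time how long until the first response byte arrives."""
    with socket.create_connection((host, port)) as s:
        start = time.perf_counter()
        s.sendall(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n")
        s.recv(1)            # blocks until the first byte of the response
        return time.perf_counter() - start
```

Run against a real deployment from different regions, the same measurement makes the geography effect visible: the farther the generator is from the application, the larger the TTFB floor.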
Distributing the Load :
It is normal for most test executions to be initiated from a single machine / server if the number of generated users is not large.
But it is recommended to distribute load generation among different machines / servers even if the number of users is not that large.
This spreads the work across the generator machines and avoids security restrictions triggered by a high hit frequency from the same host.
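Splitting the users across generators can be as simple as a round-robin assignment; the hostnames below are hypothetical examples.

```python
def distribute_users(total_users, generators):
    """Split `total_users` virtual users across load-generator hosts as
    evenly as possible, so no single host sends all the traffic."""
    plan = {host: 0 for host in generators}
    for i in range(total_users):
        plan[generators[i % len(generators)]] += 1  # round-robin assignment
    return plan

# Example: 1000 users over three (hypothetical) generator hosts
plan = distribute_users(1000, ["gen-eu-1", "gen-eu-2", "gen-eu-3"])
```

Each host then runs only its share of the user threads, so no single source IP stands out to rate limiters or firewalls.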
Have you faced problems related to the factors mentioned above? How did you manage them? Please share your tips, experience, comments, and questions to further enrich this discussion.