Don’t do this ✋🏻, but the important question is: why?
In this article, and maybe a series of articles, I will cover some not-recommended practices that I have noticed, or have done myself, in the past.
1) Don’t execute tests over VPN
VPN stands for Virtual Private Network; the general idea is to establish a secure connection between you and the destination through a VPN server.
A VPN will always add latency, because your data must be routed through the VPN server before reaching the destination web server.
That extra latency inflates response times, so your test will show results that will not match what actual users experience.
As an example, a JMeter Thread Group with 5 threads is executed against “demo.testfire.net”. The test is run twice, and the results are shown below.
The response-time difference shown in the tables above is caused by VPN latency, and this difference will vary each time you execute the test.
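To make the effect concrete, here is a minimal Python sketch of the extra hop a VPN introduces. This is a simulation, not a real network measurement: all latency values are illustrative assumptions, chosen only to show how the round trip grows when every packet detours through a VPN server.

```python
# Simulated one-way latencies in milliseconds.
# These numbers are illustrative assumptions, not measurements of any real network.
CLIENT_TO_SERVER_MS = 30     # direct route: client -> web server
CLIENT_TO_VPN_MS = 25        # route: client -> VPN server
VPN_TO_SERVER_MS = 35        # route: VPN server -> web server
SERVER_PROCESSING_MS = 120   # time the server needs to build the response

def direct_response_time_ms() -> int:
    # Round trip: client -> server -> client, plus server processing.
    return 2 * CLIENT_TO_SERVER_MS + SERVER_PROCESSING_MS

def vpn_response_time_ms() -> int:
    # Round trip: client -> VPN -> server -> VPN -> client, plus processing.
    return 2 * (CLIENT_TO_VPN_MS + VPN_TO_SERVER_MS) + SERVER_PROCESSING_MS

if __name__ == "__main__":
    direct = direct_response_time_ms()
    via_vpn = vpn_response_time_ms()
    print(f"direct route : {direct} ms")
    print(f"via VPN      : {via_vpn} ms")
    print(f"VPN overhead : {via_vpn - direct} ms")
```

The server processing time is identical in both cases; only the network path changed, yet the measured response time is noticeably higher. That overhead belongs to your network setup, not to the application under test.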
2) Don’t execute tests from the same machine/server hosting the application you want to test
This is not a sophisticated point, but I have seen this practice a lot.
People sometimes execute the test from the same machine/server that hosts the application, sometimes due to connectivity constraints, and sometimes to test an application hosted locally.
Keep in mind that the server and the performance testing tool are sharing the same machine resources, and this leads nowhere.
You will not be able to generate the desired number of users, and the application's performance is itself degraded by the resources consumed by the tool.
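A back-of-the-envelope capacity model makes the problem visible. The numbers below (core count, CPU cost per request, the tool's CPU share) are illustrative assumptions, not measurements, but the arithmetic shows how co-hosting the load generator directly cuts the throughput the application can achieve:

```python
# Illustrative capacity model: every CPU cycle the load tool burns is a cycle
# the application cannot use. All figures below are assumptions for the sketch.
CORES = 4                    # total CPU cores on the shared machine
CPU_MS_PER_REQUEST = 20      # CPU time the application needs per request

def max_throughput_rps(cores_available: float) -> float:
    """Requests/second the app can serve with the given CPU capacity."""
    return cores_available * 1000 / CPU_MS_PER_REQUEST

# Dedicated host: the application gets every core.
dedicated_rps = max_throughput_rps(CORES)

# Co-hosted: assume the load generator consumes half the CPU under load.
TOOL_CPU_SHARE = 0.5
co_hosted_rps = max_throughput_rps(CORES * (1 - TOOL_CPU_SHARE))

if __name__ == "__main__":
    print(f"dedicated host: {dedicated_rps:.0f} req/s")
    print(f"shared host   : {co_hosted_rps:.0f} req/s")
```

Whatever the real numbers are on your hardware, the test would report the shared-host figure as if it were the application's limit, so the bottleneck you "find" is the test setup itself.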
3) Don’t execute tests against virtualized endpoints or stubs
As early testing and shift-left testing are adopted by a wider range of teams and organizations, teams implement stubs or virtualized endpoints to replace the actual integrations that arrive later in the testing cycle, in order to speed things up.
From a performance testing point of view, those endpoints can speed up the scripting process as well, but they are not qualified targets for performance tests.
Test results will not be representative of the actual integrations, so running load tests against them is not that meaningful.
We have to wait for the actual endpoints in order to get meaningful, representative results.
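A small illustration of why stub results mislead: a stub usually returns a canned payload almost instantly and with almost no variance, while a real integration has databases, downstream calls, and occasional slow outliers behind it. The latency samples below are made-up numbers, used only to show how differently the percentiles come out:

```python
import statistics

# Made-up response-time samples in milliseconds (illustrative, not measured).
# A stub typically answers in near-constant, near-zero time:
stub_samples = [5] * 100

# A real integration shows real work and real tail latency:
real_samples = [80, 95, 110, 120, 140, 160, 210, 90, 105, 450]

def p95(samples: list) -> int:
    """Simple 95th-percentile estimate: value at the 95% rank position."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

if __name__ == "__main__":
    print(f"stub p95 : {p95(stub_samples)} ms")
    print(f"real p95 : {p95(real_samples)} ms")
    print(f"real mean: {statistics.mean(real_samples):.0f} ms")
```

Any SLA conclusion drawn from the stub's flat distribution says nothing about the real endpoint's tail, which is usually exactly what a performance test is supposed to expose.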
That’s it for today 🙂, I hope you found it useful.
Please share your tips, experience, comments, and questions for further enriching this topic of discussion.