Web performance testing – an overview (presentation)

During 2021 I presented this deck several times, to different crowds in different regions.

In this presentation I tried to knock on the different doors related to performance testing without digging too deep, giving the audience the freedom to explore and decide which part is most important for them to look into.

And I think it is time now to publish it, for two reasons:
1. To force myself not to use it or present it again 🙂
2. It is an overview in a simple form that can be easily consumed.

I hope you find it useful; I really enjoyed presenting it this year.

Please share your tips, experience, comments, and questions for further enriching this topic of discussion.

Forecasting the number of users in performance testing

How many users should we apply in our performance tests?

In a previous article we discussed some ways to simplify the process of setting a system SLA. So let's say we now have our SLA, or simply the response time we hope to operate under; what about the number of users?

The simple answer is that it should be a given requirement, but sometimes we don't have one, or we simply don't know.

The following ideas may help:

– If the site is up and running, you can get the user numbers and distributions (percentage of users per function or page) through analytics tools like Google Analytics.

– If you are selling a product to a company or organization, you can forecast based on the number of employees; it can be a percentage of them, or all of them as the highest possible load.

– Adding to the previous point, if the system is license based, you can forecast from the number of sold licenses, combined with the maximum number of users per license.

– If the site/service is completely new and you are in the launching process, you can derive the numbers from market research and the sales forecast for the first 3 to 6 months, and adapt your tests and infrastructure as needed during the first year after launch.
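The license-based idea above can be sketched in a few lines. This is a minimal example with purely hypothetical numbers; the licenses sold, users per license, and the assumed concurrency ratio are all placeholders you would replace with your own figures:

```python
def forecast_concurrent_users(licenses_sold, max_users_per_license,
                              concurrency_ratio=0.1):
    """Estimate peak concurrent users for a license-based system.

    concurrency_ratio is the assumed share of licensed users who are
    active at the same time (10% is a common rule of thumb, not a rule).
    """
    total_users = licenses_sold * max_users_per_license
    return round(total_users * concurrency_ratio)

# Example: 200 licenses sold, up to 25 users each, 10% concurrently active.
print(forecast_concurrent_users(200, 25))  # 500
```

Treat the result as a starting point for the test design, not a hard requirement; real concurrency ratios vary a lot by domain.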

The number of users is an important factor in performance testing activities. That is why it is important to set it carefully, and as close as possible to the real-world scenario, to get accurate results and to have confidence in your running system.


Running performance tests from the cloud: why?

Is it necessary? Why? At what cost, and with what limitations? All of these questions are valid and make total sense.

Let me first define what is meant by running performance tests from the cloud. In its simplest form, it means running performance test scripts from rented virtual machine(s) provided by a cloud service provider (like AWS or Azure), or from a testing platform which offers cloud-based runs (like BlazeMeter, Micro Focus, ...).

Is it necessary? The answer is yes in some situations, like the following:

  • Generating a huge number of users, which requires highly equipped machines that would cost a fortune to build locally.
  • Simulating different regions and countries when your test run requires it; multiple cloud platforms provide this through server locations around the world.
  • Reliability and consistency of the machines used; most cloud platforms guarantee more than 90% availability for their services.

Why? As mentioned above, in some cases massive resources are required, and these are more easily provided by cloud platforms than by building your own load generators on-premises.

At what cost? You are billed at an hourly rate according to the selected machine and its hardware. Sometimes it is cheaper to rent than to build your own lab; but in some cases, depending on usage and frequency, it is cheaper to build your own lab.
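To decide between renting and building, a simple break-even calculation helps. A rough sketch with hypothetical figures; the lab cost and hourly rate below are placeholders, not real quotes:

```python
def breakeven_hours(lab_cost, hourly_cloud_rate):
    """Hours of cloud usage at which renting costs as much as building
    your own on-premise load-generation lab."""
    return lab_cost / hourly_cloud_rate

# Example: a $12,000 on-premise lab vs. $4/hour for rented cloud
# load generators (both numbers are hypothetical).
print(breakeven_hours(12_000, 4.0))  # 3000.0
```

If your planned yearly test hours stay well under the break-even figure, renting is likely the cheaper option; if they exceed it, building your own lab starts to pay off.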

Limitations? If your application or software is not available online, or is only accessible through an internal organization/company network, cloud testing will not be the best fit.

There are also cases where you have resources you can use instead of renting, for example using company desktops to generate the users instead of creating cloud load generators.

The final reason is cost: if you do not have the budget to rent cloud machines, using your company's assets is the best way to complete the task.

To sum up, running performance tests from the cloud is not a must or a trend; it is a go-to solution when you need it. It saves time, configuration hassle, and sometimes money, but it is not the magic solution for every performance run.


Performance Tuning – A Team Effort

Days ago, I was discussing the idea of performance tuning with a number of fellow testers, and the discussion headed mainly towards responsibilities rather than methodology.

The main question was who is responsible for the performance tuning?

I think we have to define performance tuning first: performance tuning is the improvement of system performance, either to solve a performance problem or to achieve a goal.

Back to the main point about who is responsible for the performance tuning, it is a team responsibility.

In a tuning process, you will need different expertise to achieve the tuning goal.

You will need the following people:  

Architects, to review the system design and suggest which parts could be replaced or enhanced to improve system performance.

Performance testers, to design and execute tests that find and point out where the performance bottlenecks are.

Developers, to assess the code and suggest which methods or blocks of code can be improved to help resolve the performance bottlenecks.

Management, to set priorities and clearly state which parts of the system are most vital to the customer and the business, so you know where to start.

With the collaboration of all of the people mentioned, you can run a productive performance tuning effort, achieve the best results, and, in my opinion, avoid rework.


Availability Testing | what , how and why?

A while ago, a colleague of mine asked me about availability testing, and my first answer was: do you mean a soak/endurance test? But I was wrong; both tests have something in common, yet they are totally different in their objectives.

What is Availability Testing?

As a general idea, availability is a measure of how often the application is available for use. More specifically, availability is a percentage calculation based on how often the application is actually available to handle service requests when compared to the total, planned, available runtime.
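The definition above boils down to a one-line formula. A minimal sketch (the uptime figures in the example are hypothetical):

```python
def availability_percent(actual_uptime_hours, planned_uptime_hours):
    """Availability = time actually serving requests / planned runtime,
    expressed as a percentage."""
    return 100.0 * actual_uptime_hours / planned_uptime_hours

# Example: 718 hours actually up out of a planned 720-hour month
# (i.e. 2 hours of downtime).
print(round(availability_percent(718, 720), 2))  # 99.72
```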

So the idea here is to run tests for a longer period of time and collect failures, logs, and any other metrics that represent system availability.

But there is one more thing to consider: how long it takes the system to switch between active and backup servers, whether application or database servers, and, more importantly, what the actual system downtime is.

How do you run an availability test?

  1. Design a test which can run for a longer period with a moderate number of users; the number of users is not a key factor here, as we are not going to collect performance metrics.
  2. Then take down one of your working servers, in this case your active/primary server, whether application or database, depending on the target of your test. You should start receiving errors in your tool; here you can start to count the number of failures and how long it takes your system to move to the secondary/backup node.
  3. Once your system is up again, note all of the errors and the time it took your system to work normally again.
  4. You can repeat the operation to switch back from the backup to the primary server(s).
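The failure-counting part of steps 2 and 3 can be sketched as a polling loop. The probe function is injected so the loop can run without a real server; in an actual test it would be an HTTP request against your application (all names here are hypothetical):

```python
import time

def count_failover_failures(probe, interval_s=0.0, max_polls=1000):
    """Poll `probe` until the system recovers after going down.

    probe() should return True while the system answers and False while
    it is down. Returns (failed_polls, approx_downtime_seconds).
    """
    failures = 0
    went_down = False
    for _ in range(max_polls):
        if probe():
            if went_down:                      # system is back up
                return failures, failures * interval_s
        else:
            failures += 1
            went_down = True
        time.sleep(interval_s)
    return failures, failures * interval_s     # never recovered

# Example with a scripted probe: up, then down for three polls, then up.
responses = iter([True, False, False, False, True])
print(count_failover_failures(lambda: next(responses)))  # (3, 0.0)
```

With a real endpoint you would set `interval_s` to a few seconds, so `failures * interval_s` approximates the observed downtime.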

Why do we do Availability Testing?

The target here is to measure and collect data in case of application/database failure, and to make sure your application setup is properly configured, with a downtime small enough not to badly affect your customers in case of unplanned failures.


Don’t commit to a tool; start with a POC

Which performance testing tool are you going to use in your next project? Each of us has a favorite or go-to tool, one that is always on your mind.

But this is not always the case; we may have limitations that affect which tool we should use. I will list some of them below:

  1. Corporate decision: some corporations/organizations prefer not to use an open source tool, and some have already invested heavily in one tool and are not going to use anything else.
  2. Financial decision: quite the opposite of the previous point; we don't have a budget, so we are going to use an open source tool.
  3. Technical constraints: this is the core of this blog post. We are fine with using anything, but which tool is more suitable for our project, an open source tool or a paid one?

So why commit to a tool that may not fit later in the project, or waste time trying to make it work?

Let's do a POC (proof of concept): try the basic application functionality like login, register, …. to make sure we don't have a limitation, and if the current tool is not working properly we can switch to another one.

Sometimes the limitations are more complex than basic functionality, for example when the application protocol is not supported by the current tool. Some of the following protocols are complex enough that dedicated modules for them are only available in specific performance testing tools:

  • Citrix
  • Oracle
  • Siebel
  • SAP

In summary, don't commit to a tool at the beginning of a performance test project unless you are 100% sure it is going to work. Take your time and run your own POC to make sure you don't, and will not, have a limitation or unsolvable complexity during your project.


Unusual performance tests for unusual situations

Not all performance test types are equally famous; some types are used less often than others.

That doesn't mean, of course, that they are not important; they are, and in some cases very important, but unusual tests need unusual situations.

I am going to focus on the following performance test types; we will discuss them in detail and I will try to give an example of each.

  • Endurance Testing
  • Spike Testing
  • Volume Testing

Endurance Testing: the least unusual type. In this test we execute performance runs for longer periods of time (8, 12, or 24 hours) to test system availability and also to make sure we don't have issues like memory leaks.

The execution time should be based on the system operating time; a hosted web application which is available 24/7 is not necessarily operating 24/7.

Business/service operating time is the duration the web application is actually functioning, not merely available; this varies according to the business domain.


Example: an online delivery website accepting orders from 10 AM to 10 PM is not functioning 24 hours a day.

Spike Testing: some people confuse this test with load testing, but the two are different in both design and impact.

In a spike test, the application/service faces an unusual hit of users for a specific period of time, after which we return to the normal application load.


Example: an e-commerce application promoting a 1-hour exceptional discount/offer should expect a user spike for that hour, after which traffic returns to the normal user load or slightly higher.
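A spike profile like the one described can be expressed as a simple step function of target users per hour. A sketch with hypothetical numbers (100 baseline users and a 1,000-user spike are placeholders):

```python
def spike_profile(baseline, spike, total_hours, spike_start, spike_hours):
    """Return the target number of concurrent users for each hour of the
    test: a flat baseline with a temporary spike."""
    profile = []
    for hour in range(total_hours):
        in_spike = spike_start <= hour < spike_start + spike_hours
        profile.append(spike if in_spike else baseline)
    return profile

# A 6-hour run with a 1-hour promotional spike starting at hour 2.
print(spike_profile(100, 1000, 6, 2, 1))  # [100, 100, 1000, 100, 100, 100]
```

Most load tools let you feed a profile like this in as a stepped thread/user schedule, so the script mirrors the promotion window you expect in production.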

Volume Testing: the idea here is to perform your load testing with different database sizes (volumes), to be sure that system performance and behavior are not affected by the expected increases in system size.

Most of the time this type of test is needed when your system deals with or stores a large amount of data, and we suspect a big increase in system size over a short time period.

Example: a governmental application which allows users to submit large-volume data (e.g. scanned documents, official documents) is likely to accumulate a large database in a small period of time and reach a massive data size in the near future.


The hidden cost of slowness

There is always a debate about how important speed is to the software industry. They say, and I quote, "if the customer/user is not complaining, there is no issue and we are fine".

I will try here to discuss this in a different way.

What is the hidden cost you pay when your site/service is slow, and how are you losing even though the customer is not complaining?

Speed-Revenue dilemma

“The speed of the site negatively impacts a user’s session depth, no matter how small the delay… The data suggests, both in terms of user experience and financial impact, that there are clear and highly valued benefits in making the site even faster.”

Users get even more impatient when it comes to website speed. Want proof? Have a look at the Financial Times case study:

They added artificial delays to each page load. Notable facts they found:

  • A one-second delay resulted in a 4.9% drop in the number of articles a visitor read.
  • A three-second delay resulted in a 7.9% drop.
  • Visitors read less when delays occurred.
  • Effect on sales: 79% of customers who report dissatisfaction with website performance are less likely to buy from that same site again.
  • Speed affects revenue: if your site makes $100,000/month, a one-second improvement in page speed brings in $7,000/month.
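The last bullet works out to a 7% uplift. A minimal sketch of that arithmetic; the 7% figure is the industry estimate quoted above, not a guarantee:

```python
def speedup_revenue_gain(monthly_revenue, uplift=0.07):
    """Estimated extra monthly revenue from a one-second page-speed
    improvement, using the 7% figure quoted in the statistics above."""
    return round(monthly_revenue * uplift, 2)

# Example from the bullet list: a $100,000/month site.
print(speedup_revenue_gain(100_000))  # 7000.0
```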

Speed-Satisfaction dilemma

Customer loyalty can also be affected by site/service speed 👇

[Chart: customer loyalty statistics]

To sum up, there will always be a cost for slowness, as described in this article. Revenue, satisfaction, and loyalty are the price you pay, or more accurately, lose, when you neglect the speed of your site/service. Yes, your customer may not complain, but that is not proof that they are satisfied with your service.

Sources :

https://www.websitebuilderexpert.com/building-websites/website-load-time-statistics/


How to set your performance testing acceptance criteria

This is always a question for people doing performance testing (as a general term) for the first time, and also for those who don't have specific performance requirements.

  • What numbers do we compare to?
  • What does the current response time mean?
  • Is it good or bad?
  • How do we set acceptance criteria?

We used to hear all of the above questions at the beginning of any performance testing project, or even in a discussion about a future need for testing.

I will list here some ideas that will help you determine and simplify the process of setting test acceptance criteria.

  • Check websites/services working in the same domain: gather information about their services' response times and compare them to your current response times; if you are way higher than them, you have to plan enhancements for your currently operating service(s).
    *Start with your local competitors, as you are both operating in the same market.
  • Some organizations/sites publish a yearly report about web performance in general, categorized by business domain; this will help you get an overview of the response time trend and at least have numbers that you don't want to exceed.
    *The full article link can be found in the end of this article.
Industry                      | United States | United Kingdom | Germany  | Japan
Automotive                    | 9.5 sec       | 12.3 sec       | 11.0 sec | 10.3 sec
Business & Industrial Markets | 8.7 sec       | 8.3 sec        | 8.2 sec  | 8.1 sec
Classifieds & Local           | 7.9 sec       | 8.3 sec        | 7.0 sec  | 8.3 sec
Finance                       | 8.3 sec       | 8.0 sec        | 8.6 sec  | 7.6 sec
Media & Entertainment         | 9.0 sec       | 8.8 sec        | 7.6 sec  | 8.4 sec
Retail                        | 9.8 sec       | 10.3 sec       | 10.3 sec | 8.3 sec
Technology                    | 11.3 sec      | 10.6 sec       | 8.8 sec  | 10.0 sec
Travel                        | 10.1 sec      | 10.9 sec       | 7.1 sec  | 8.2 sec
While the average of the values in the table is 8.66 sec, the recommendation for 2018 is to be under 3 seconds.
  • If you are doing a revamp or replacement of an old system/service, try to achieve at least the same performance as the old system (in case performance wasn't the reason for the revamp 🙂), and then you can plan for 20-30% better performance than the old system. Of course you can aim for a higher performance achievement, but it should be specific, so you don't waste a lot of time chasing an unclear goal.

To summarize, it is OK if you don't have specific performance requirements; you can set your requirements based on how you are operating compared to others. Having an initial goal is a good step to start planning your performance enhancements, and those goals will surely become more ambitious over time.

Sources :


What a performance test report says about your system

When evaluating a performance test report, most of the time we look at the response time, specifically the average response time.

But if you take a deeper look, the performance test report gives you much more information.

In this article I will use one of JMeter's basic reports, the "Summary Report", as an example to explain what I mean.

The focus in this article will be on the following terms/values

  • Standard Deviation
  • Min Response Time
  • Max Response Time

Standard Deviation :

The Standard Deviation is a measure of how far the response times are spread out around the mean. Simply put, the smaller the Standard Deviation, the more consistent the response time.

Transaction Name | RT (I1) | RT (I2) | RT (I3) | RT (I4) | RT (I5) | Avg | SD  | 90th %ile
Login            | 4       | 6       | 3       | 4       | 8       | 5   | 2   | 6
Search           | 3       | 2       | 15      | 1       | 4       | 5   | 5.7 | 4
Logout           | 5       | 5       | 6       | 4       | 5       | 5   | 0.7 | 5
The "Logout" transaction has the lowest Standard Deviation (0.7), which shows its response times are more consistent than those of the other two.

The Standard Deviation in your test tells you whether the response time of a particular transaction is consistent throughout the test or not. The smaller the Standard Deviation, the more consistent the transaction response time, and the more confident you can be about a particular page/request.
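The Avg and (sample) Standard Deviation values in the table above can be reproduced with Python's statistics module; the 90th %ile column is left as reported:

```python
from statistics import mean, stdev

# Response-time samples from the table above (five iterations each).
samples = {
    "Login":  [4, 6, 3, 4, 8],
    "Search": [3, 2, 15, 1, 4],
    "Logout": [5, 5, 6, 4, 5],
}

for name, rts in samples.items():
    print(name, mean(rts), round(stdev(rts), 1))
# Login 5 2.0
# Search 5 5.7
# Logout 5 0.7
```

Note that `stdev` computes the sample standard deviation (dividing by n-1), which is what matches the SD column here; `pstdev` would give the population version.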

Min. Response Time:

The shortest time taken by any sample for a specific label. If we look at the Min value for Label 1 in the source article's example report, then out of 20 samples the shortest response time any single sample had was 584 milliseconds.

Max. Response Time:

The longest time taken by any sample for a specific label. If we look at the Max value for Label 1 in the same example, then out of 20 samples the longest response time any single sample had was 2867 milliseconds.

Sources :

https://www.perfmatrix.com/standard-deviation-in-performance-testing/
http://www.testingjournals.com/understand-summary-report-jmeter/
