API Security Testing With Postman and OWASP Zap

Most of the content around API testing covers functional testing or, more recently, API test automation. So what about security testing?

We’re going to use Postman and consume our existing collections.

The idea is to route the Postman requests through OWASP ZAP so we can start automated pen-testing.

Sometimes we don’t have a proper API definition file that can be imported into OWASP ZAP, so this is an easy workaround.

Step 1:

Open OWASP ZAP, go to the application settings, and look for “Local Proxy” as shown in the following screenshot.

On Windows: Go to “Tools” -> “Options”

On macOS: Go to “ZAP” -> “Preferences”

In our case the local proxy is on port 8081. Remember this number because we will use it very soon.

*The port number may be different on your machine; use the number displayed in your settings in the next step.

Step 2:

Open Postman, go to the application settings, and open the “Proxy” tab as shown in the following screenshot.

On Windows: Go to “File” -> “Settings”

On macOS: Go to “Postman” -> “Preferences”

Select “Add custom proxy configuration” and fill in the following values:

  • Proxy Server: localhost
  • Port: 8081 (the port taken from the OWASP ZAP settings in step 1)
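If you want to sanity-check the proxy chain outside Postman, any HTTP client pointed at the same proxy will show up in ZAP’s Sites tree too. Here is a minimal Java 11+ sketch; the port and target URL are assumptions to adapt to your setup:

```java
import java.net.InetSocketAddress;
import java.net.ProxySelector;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class ZapProxyClient {
    // Build an HTTP client that routes every request through the ZAP local proxy,
    // exactly like the Postman proxy settings above.
    static HttpClient viaZap(int zapPort) {
        return HttpClient.newBuilder()
                .proxy(ProxySelector.of(new InetSocketAddress("localhost", zapPort)))
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = viaZap(8081); // use the port from your ZAP settings
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://demo.example/api/users")) // hypothetical endpoint
                .GET()
                .build();
        // client.send(request, ...) would now be recorded by ZAP.
        System.out.println("Proxy configured: " + client.proxy().isPresent());
    }
}
```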

Step 3:

In Postman, start sending API requests from the desired API collection, as shown in the following Postman example.

All the API calls you just made from Postman should now appear in the OWASP ZAP Sites list, as in the following screenshot.

*Don’t forget to return the Postman proxy settings to their previous/default values after you finish.

Step 4:

It is time to start the scan in OWASP ZAP.

Right-click on the main directory, select “Attack”, then select “Active Scan”.

The scan will start, and you should notice some findings under the “Alerts” section, as in the following screenshot.

Indeed, you can do more than an active scan in OWASP ZAP; that may be another post in the future to dig deeper into ZAP :)

The API used in this demo is called VAmPI; it is a deliberately vulnerable API, and you can find it at the following GitHub link:


I hope you find this useful; I really enjoyed the time I spent trying it.

Please share your tips, experience, comments, and questions for further enriching this topic of discussion.

Performance Testing Background Noise – What, Why and How?

The main goal for any performance test engineer is to build the most realistic simulation possible, because this is what leads to accurate test results, and that is what we need from this kind of test.

What is background noise?

The idea is to record and replay some user behaviors that are not meant to be measured, or are not that important, just to generate some noise; this simulates real-life user behavior in any system.

You may be very focused on “Login”, “Register”, or other key user actions, but what happens in real life is different.

While you as a user try to log in or register, other users are opening the “contact us” page, the “site map”, subscribing to the newsletter, and maybe visiting the “about us” page.

I know those are the last functions you want to measure, and you don’t have to; you just want to play them in the background, or, more accurately, in parallel with your designed tests.


What you need is to record some user behaviors as described above and run them in parallel with your current performance test.

You don’t have to assign a large number of users to these tests; you can assign a percentage of your original load, 5% or 10% for example, just enough to let the noise affect the system.
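Sizing the noise as a slice of the main load is simple arithmetic; a sketch, where the 10% figure is just the example from this article and the helper name is mine:

```java
public class NoiseLoad {
    // Extra "background noise" users as a fraction of the main load.
    // At least one noise user is always assigned so the noise actually runs.
    static int noiseUsers(int mainUsers, double noiseFraction) {
        return Math.max(1, (int) Math.round(mainUsers * noiseFraction));
    }

    public static void main(String[] args) {
        System.out.println(noiseUsers(500, 0.10)); // 50 noise users for a 500-user test
        System.out.println(noiseUsers(500, 0.05)); // 25 noise users
    }
}
```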

As you can see in the above JMeter script, one more Thread Group is added with different transactions. This Thread Group will be executed in parallel with the original script to create the noise; alternatively, you can run it as a separate test from another machine in parallel with the original script.

As mentioned before, it is not important to measure those extra transactions’ response times; the job here is simply to send more requests.

In the end, we are always searching for ways to make our test results as representative as possible, because this increases confidence in the system’s reliability and makes our customers happy.

Please share your tips, experience, comments, and questions for further enriching this topic of discussion.

Forecasting the Number of Users in Performance Testing

How many users should be applied in our performance tests?

In a previous article we discussed some ways to simplify the process of setting a system SLA. So, assuming we now have our SLA, or simply the response time we hope to operate under, what about the number of users?

The simple answer is that it should be a given requirement, but sometimes we don’t have one, or we simply don’t know.

The following are some ideas that may help :

– If the site is up and running, you can get user numbers and distributions (percentage of users per function or page) through analytics tools like Google Analytics.

– If you are selling a product to a company or organization, you can forecast based on the number of employees; it can be a percentage of them, or all of them as the highest possible load.

– Adding to the previous point, if the system is license based, you can forecast from the number of licenses sold and the maximum number of users per license.

– If the site/service is completely new and you’re in the launch process, you can base the numbers on market research and the sales forecast for the first 3 to 6 months, and adapt your tests and infrastructure as needed during the first year after launch.
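The license-based and headcount-based ideas above boil down to simple arithmetic; a sketch with hypothetical numbers:

```java
public class UserForecast {
    // Upper bound when the system is license based:
    // every sold license fully used by its maximum allowed users.
    static int fromLicenses(int licensesSold, int maxUsersPerLicense) {
        return licensesSold * maxUsersPerLicense;
    }

    // Headcount-based forecast: a fraction of employees active at peak.
    static int fromEmployees(int employees, double activeFraction) {
        return (int) Math.round(employees * activeFraction);
    }

    public static void main(String[] args) {
        System.out.println(fromLicenses(120, 5));      // 600 users at the highest possible load
        System.out.println(fromEmployees(2000, 0.25)); // 500 users if a quarter are active
    }
}
```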

The number of users is an important factor in performance testing activities, which is why it is important to set it carefully and as close as possible to the real-world scenario, so you get accurate results and confidence in your running system.

Please share your tips, experience, comments, and questions for further enriching this topic of discussion.

Availability Testing | What, How and Why?

A while ago, a colleague of mine asked me about availability testing, and my first answer was: do you mean a soak/endurance test? But I was wrong; the two tests have something in common, but they are totally different in their objectives.

What is Availability Testing?

As a general idea, availability is a measure of how often the application is available for use. More specifically, availability is a percentage calculation based on how often the application is actually available to handle service requests when compared to the total, planned, available runtime.
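That percentage calculation can be expressed directly in code; the runtime figures below are made-up examples:

```java
public class Availability {
    // Availability % = actual available time / total planned runtime * 100
    static double percent(double plannedMinutes, double downtimeMinutes) {
        return (plannedMinutes - downtimeMinutes) / plannedMinutes * 100.0;
    }

    public static void main(String[] args) {
        // 30 days of planned runtime (43,200 minutes) with about 43 minutes of downtime
        System.out.printf("%.2f%%%n", percent(43_200, 43.2)); // "three nines" territory
    }
}
```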

So the idea here is to run tests for a longer period of time and collect failures, logs, and any other metrics that represent the system’s availability.

But there is one more thing to consider: how long it takes the system to switch between active and backup servers, whether application or database servers, and, more importantly, what the system’s actual downtime is.

How to run an availability test?

  1. Design a test that can run for a longer period with a moderate number of users; the number of users is not a key factor here, as we are not going to collect performance metrics.
  2. Now it is time to take down one of your working servers, in this case your active/primary server, whether application or database, depending on the target of your test. You should start receiving errors in your tool, and here you can start counting the number of failures and how long it takes your system to move to the secondary/backup node.
  3. Once your system is up again, note all of the errors and the time it takes your system to work normally again.
  4. You can repeat the operation to switch back from the backup to the primary server(s).

Why do we do Availability Testing?

The target here is to measure and collect data in case of an application/database failure, and to make sure your application setup is properly configured, with a reasonable downtime that will not badly affect your customers in case of unplanned failures or outages.

Please share your tips, experience, comments, and questions for further enriching this topic of discussion.

How to Set Your Performance Testing Acceptance Criteria

This is always a question for people doing performance testing (as a general term) for the first time, and also for those who don’t have specific performance requirements.

  • What numbers do we compare to?
  • What does the current response time mean?
  • Is it good or bad?
  • How do we set acceptance criteria?

We used to hear all of the above questions at the beginning of any performance testing project, or even in a discussion about a future need for testing.

I will list here some ideas that will help you determine and simplify the process of setting test acceptance criteria.

  • Check websites/services operating in the same domain: gather information about their services’ response times and compare them to your current response times; if you’re way higher than they are, you have to plan enhancements for your current operating service(s).
    *Start with your local competitors, as you are both operating in the same market.
  • Some organizations/sites publish a yearly report about web performance in general, categorized by business domain. This will help you get an overview of response time trends and at least have numbers that you don’t want to exceed anyway.
    *The full article link can be found at the end of this article.
Industry                      | United States | United Kingdom | Germany  | Japan
Automotive                    | 9.5 sec       | 12.3 sec       | 11.0 sec | 10.3 sec
Business & Industrial Markets | 8.7 sec       | 8.3 sec        | 8.2 sec  | 8.1 sec
Classifieds & Local           | 7.9 sec       | 8.3 sec        | 7.0 sec  | 8.3 sec
Finance                       | 8.3 sec       | 8.0 sec        | 8.6 sec  | 7.6 sec
Media & Entertainment         | 9.0 sec       | 8.8 sec        | 7.6 sec  | 8.4 sec
Retail                        | 9.8 sec       | 10.3 sec       | 10.3 sec | 8.3 sec
Technology                    | 11.3 sec      | 10.6 sec       | 8.8 sec  | 10.0 sec
Travel                        | 10.1 sec      | 10.9 sec       | 7.1 sec  | 8.2 sec
While the values in the table average around 9 seconds, the recommendation for 2018 is to be under 3 seconds.
  • If you are doing a revamp or replacement of an old system/service, try to achieve at least the same performance as the old system (in case performance wasn’t the reason for the revamp :) ), and then you can plan for 20-30% better performance than the old system. Of course you can aim for an even higher target, but it should be specific, so you don’t waste a lot of time chasing an unclear goal.

To summarize, it is OK if you don’t have specific performance requirements; you can set your requirements based on how you’re operating compared to others, and having an initial goal is a good step to start planning your performance enhancements. Those goals will surely become more ambitious over time.

Sources:

Please share your tips, experience, comments, and questions for further enriching this topic of discussion.

How to add security checks to your manual / automation test suite

Let me tell you that you can have basic to moderate security checks in your manual test suite by extending your test cases in two areas:

  • Input validation
  • Authentication

Most test suites, if not all of them, already cover the areas mentioned above, but mostly with just basic checks; for example, if a field accepts numbers, we try characters and alphanumerics. What I am suggesting here is to test every input field against major web app vulnerabilities like XSS and SQL Injection.

The same goes for authentication: instead of only trying different combinations of right/wrong usernames and passwords, you can extend your tests against a major web app vulnerability like SQL Injection.

XSS (Cross-Site Scripting): XSS attacks enable attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy.

We’re going to use demo.testfire.net as a safe place to practice on.

Example:

The field we’re going to test here is the Search field. I am assuming we have already executed test cases with characters, numbers, alphanumerics, and very long character/number input to validate the input boundaries.

Let’s try an XSS payload as input. The input will be “<ScRipT>alert("XSS");</ScRipT>”

According to the above screenshot, it seems that the web app under test is vulnerable to XSS attacks.

Example XSS payloads:

  • </script><script>alert(1)</script>
  • <IMG SRC=jAVasCrIPt:alert('XSS')>
  • <iframe %00 src="&Tab;javascript:prompt(1)&Tab;"%00>
  • <form><isindex formaction="javascript&colon;confirm(1)">
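When automating these checks, a rough heuristic is to see whether the payload comes back unencoded in the response body. A sketch, where the class and method names are mine, and a reflection alone is an indicator, not proof of exploitability:

```java
import java.util.List;

public class XssProbe {
    // A couple of the payloads listed above.
    static final List<String> PAYLOADS = List.of(
            "<ScRipT>alert(\"XSS\");</ScRipT>",
            "</script><script>alert(1)</script>");

    // True when the raw payload is echoed back without HTML encoding.
    static boolean reflectedUnencoded(String responseBody, String payload) {
        return responseBody.contains(payload);
    }

    public static void main(String[] args) {
        String vulnerable = "<p>You searched for <ScRipT>alert(\"XSS\");</ScRipT></p>";
        String encoded = "<p>You searched for &lt;ScRipT&gt;alert(&quot;XSS&quot;);&lt;/ScRipT&gt;</p>";
        System.out.println(reflectedUnencoded(vulnerable, PAYLOADS.get(0))); // true
        System.out.println(reflectedUnencoded(encoded, PAYLOADS.get(0)));    // false
    }
}
```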

You can find an XSS payload list at the following URL:

SQL Injection : SQL injection is a code injection technique, used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution.

The element we’re going to test here is the login form. I am assuming we have already executed test cases with valid and invalid usernames and passwords, and with very long character/number input to validate the input boundaries.

Let’s try a SQL Injection payload in the “username” field and any characters in the password field.

username value will be = ‘ ‘
password value will be = test

Let’s try a more advanced input, as below:

username value will be = ' UNION SELECT sum(columnname) from tablename --
password value will be = test

According to the above screenshot, it seems that the web app under test is vulnerable to SQL Injection attacks.

You can find a SQL Injection payload list at the following URL:

Conclusion:

By adding more test cases to your existing test suite, you can help discover security vulnerabilities in the system under test without needing to learn a new tool. Of course it will increase the test execution time, but the benefit is catching these issues as soon as testing starts.

Of course this is not a replacement for proper security testing of a web app; the idea is to cover at least some basic security checks within the normal testing process.

Please share your tips, experience, comments, and questions for further enriching this topic of discussion.

Performance Test Script Validation – Why & How?

The main goal for any performance test script is simply to work. But is this enough? I mean, is it enough that your script has no errors?

Having no errors does not mean your script is working flawlessly; you may get a 200 response code while the functionality is not actually working, and in that case all your results are incorrect.

The process of checking if you receive the correct response is called Validation.

In this article I will demonstrate the validation process using JMeter, one of the most widely used performance testing tools.

How to use validations in JMeter

In the JMeter context menu we have a whole section called Assertions, as in the image below.

As you can see above, there are lots of assertions available, but we will focus on one in particular, called the “Response Assertion”.

Response Assertion

Before we dig deeper, let’s look at an example showing that the response code alone doesn’t mean the script is working correctly.


Our script should do the following:

1- Open “demo.testfire.net”

2- Open the Login page

3- Log in with the credentials (admin/admin)

In the following 2 screenshots we will show that a successful response code doesn’t mean the scenario went well.

The do-login request has a 200 response code.

According to the above screenshot, the login should have completed successfully and the user should already be logged in.

But actually the login didn’t happen, so this step of our scenario is not a successful one.

The reason this step fails is that I disabled the “HTTP Cookie Manager”, which in most cases is required in login scenarios.

Let’s use the Response Assertion we mentioned earlier to validate our scenario. But before we do, let’s enable the Cookie Manager so we can choose which text to use in our validation step.

Now we have a successful login, so I think we can use the “Sign off” text as our assertion, because the sign-off link is not displayed unless the user is logged in.

I added a Response Assertion as a child of the do-login request, set it to check the text response, and put “Sign off” as the text to search for in the response.
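In plain code, the check this Response Assertion performs is roughly the following (the names are mine; JMeter does this for you):

```java
public class LoginCheck {
    // A 200 status alone is not enough: the page must also contain the
    // "Sign off" link, which only appears for a logged-in user.
    static boolean loginSucceeded(int statusCode, String responseBody) {
        return statusCode == 200 && responseBody.contains("Sign off");
    }

    public static void main(String[] args) {
        System.out.println(loginSucceeded(200, "<a href=\"logout.jsp\">Sign off</a>")); // true
        System.out.println(loginSucceeded(200, "<h1>Online Banking Login</h1>"));       // false
    }
}
```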

I will do a trial with the Cookie Manager on, then re-run the test with the Cookie Manager off to check that our validation (assertion) is working.

When I execute the test with the Cookie Manager disabled, we now have a failed request even though we get a 200 response code, as shown in the following image.

The text assertion is not the only assertion we can use, but I think it is the most commonly used one. It helps you validate from the script side that your test is doing what it should, gives you accurate results, and gives you a view of how your script and the system under test are behaving.

*The JMX used in this article is uploaded here , feel free to use.

How to write data from a JMeter response to a CSV file

I thought about this when I wanted to execute a data preparation script to generate some system IDs and use them in another script. But how can I get a certain value from the response and write it to a file, a CSV file specifically?

In this post I will tell you how I did it :)

Let’s do it this way: we will use the random article function on the Wikipedia website and write the article name to a CSV file, so every time the random article is triggered, JMeter will write the new article name to the CSV file.

The following will be added:

  • Thread Group
  • HTTP Sampler as shown below
  • View results tree
  • Regular Expression Extractor as a child of the HTTP Sampler
  • BeanShell PostProcessor as a child of the HTTP Sampler

Every time “https://en.wikipedia.org/wiki/Special:Random” is requested, the value in the title will be written to the CSV file.

The Regular Expression Extractor configuration will be as shown below:
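In plain Java, the extraction the Regular Expression Extractor performs looks roughly like this; the pattern is an assumption about the page’s <title> markup, so adjust it to what you see in the actual response:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TitleExtractor {
    // Hypothetical pattern: capture the article name from the page title.
    static final Pattern TITLE = Pattern.compile("<title>(.+?) - Wikipedia</title>");

    static String articleName(String html) {
        Matcher m = TITLE.matcher(html);
        return m.find() ? m.group(1) : null; // group(1) maps to the extractor's "$1$" template
    }

    public static void main(String[] args) {
        System.out.println(articleName("<title>Ada Lovelace - Wikipedia</title>")); // Ada Lovelace
    }
}
```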

One step remains: writing the “Article_Name” parameter value to a CSV file.

The BeanShell PostProcessor code will be as follows:

artname = vars.get("Article_Name");
f = new FileOutputStream("Results.csv", true); // true = append mode
p = new PrintStream(f);
p.println(artname); // write the article name as a new line
p.close();

The last step is to run the script with more than one iteration. Let’s execute it with 3 iterations and then check the CSV file contents.

Hope you find this article useful :)

Increase the number of users generated from your local machine

The number of users generated from your local machine depends on your machine’s specs, but what if you could generate more users with a few tricks?

1. Don’t use listeners

Listeners consume a lot of memory to display information and do the necessary calculations; the fewer listeners you use, the more memory is available to generate users.

Use the Simple Data Writer to store all of your run data; after the execution is finished, you can get all the information you need from the .jtl file created.

*We have an article explaining how to use the Simple Data Writer; see the following link


2. Increase Java Heap Size Limit

We’re going to change the amount of memory reserved for JMeter by default to a larger size, which allows JMeter to generate more users, but you should not set the heap size to more than 80% of your total system memory.

Here is the default value in Apache JMeter 5; the default value is 256m.


You can set a new value no larger than 80% of your system memory; JMeter will not launch if you set a heap size much higher than what is possible.

*You can change the heap size value by editing the jmeter.bat file and searching for “set HEAP” in Notepad or any other editor.
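For reference, the edited line might look like the following; the exact default differs between JMeter versions, and the 4 GB figure is only an example for a machine with plenty of RAM:

```shell
# In the jmeter startup script (jmeter.bat on Windows uses "set HEAP=..." instead).
# Keep -Xmx below ~80% of physical memory so the OS keeps enough headroom.
HEAP="-Xms1g -Xmx4g"
```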

3. Use x64 JDK and Windows

With x64 Windows and an x64 JDK you can consume more memory and get better memory management, which will help you generate more users than before.

*Even with all the above tricks, your machine cannot exceed a certain boundary set by its hardware, and unfortunately we cannot calculate precisely how many users a machine with specific specs can generate; it also differs from test to test. But these tricks help you get the most out of your machine before thinking about adding a new machine to generate more, or moving to the cloud.

Stop JMeter test run when reaching a specific number of requests

You want to execute a JMeter performance test but you don’t want to exceed a specific number of requests. Is it possible?

Yes it is :)

What if we could get the value of the current iteration and add a condition to stop the test when it reaches a specific value? This can be done as follows.

Create a basic JMeter test plan with the essential samplers and listeners

The test plan will have the following:

  1. Thread Group
  2. Http Sampler
  3. If controller
  4. Test Action Sampler
  5. Summary Report Listener

The Thread Group run settings will be as follows:

The Thread Group settings are just an example; use whatever is suitable for your test.

We will put the If Controller, with a Test Action as its child, at the beginning of the test, just under the Thread Group.

Everytime you call ${__counter(,)}ย you got the current iteration number.
in the above example we need to set a maximum of 300 requests (iterations).

“${__counter(,)}” >= 301

So why i put the condition operator to be >=301 because in case of concurrency it will be easier to check for number of iteration after you already finished your 300 iterations , give it a try if you put 300 the test will be stopped at the iteration number 299.

The Test Action will be a child of the If Controller.

When the condition is true, the Test Action will stop the test.
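The If Controller condition boils down to this comparison (a sketch; JMeter evaluates it for you on every iteration):

```java
public class StopAtN {
    // Stop once the counter has passed maxRequests, mirroring the
    // "${__counter(,)}" >= 301 condition for a 300-request cap.
    static boolean shouldStop(int counterValue, int maxRequests) {
        return counterValue >= maxRequests + 1;
    }

    public static void main(String[] args) {
        System.out.println(shouldStop(300, 300)); // false: the 300th request still runs
        System.out.println(shouldStop(301, 300)); // true: the Test Action stops the test
    }
}
```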

Then you have to add the HTTP request; in the example we used “https://en.wikipedia.org/wiki/Main_Page”

At the end, add a Summary Report to check the number of requests, and run the test.


With the If Controller and Test Action you can limit your test execution to a maximum number of requests while executing a duration-based test with multiple threads.