
Performance Testing :: Module 2 :: Lesson 3 :: Topic 3.2 Distribution

This article is a part of the Self-paced Learning Series for the Course: First-Hand Experience of eureQa.

Please refer to the link for more details on the Course.

Once we have our flows and KPIs identified and defined, we decide how to replicate or mimic the real-time behavior in the testing environment. For this, we need to gather the traffic and behavior metrics.

1. Max Concurrency - What is the peak traffic the AUT received over time, and at which peak did the system break? 500 users? 1500? 5000?
 
2. Sustained Testing or Real-time Testing - When we execute the tests to capture performance metrics, should we run at peak concurrency throughout, i.e., 500 users accessing the site every second for an hour? Or should we gradually increase to peak concurrency, the way daily traffic is usually heavier in working hours than at night? For example, we can start with 30% of the traffic for 10 minutes, scale up to 50% for another 10 minutes, then 70% for 20 minutes, and finally 100% for 20 minutes. This is ramp-up and ramp-down. If we need a peak concurrency of 500 for an hour, we can ramp up with 200 users (as a base) for 20 minutes, then scale up to 500 for the remaining 40 minutes. We can also ramp down: for example, 500 users for 30 minutes, then in the last 10 minutes slowly come down to 200 users and finally to 0.
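The ramp-up/ramp-down idea above can be sketched as a simple phase schedule. This is a minimal, tool-agnostic illustration; the phase durations and user counts are the example numbers from the text, not eureQa configuration:

```python
# Hypothetical sketch of a ramp-up/ramp-down schedule.
# Phases are (duration_in_minutes, concurrent_users).

def build_schedule(phases):
    """Expand (minutes, users) phases into a per-minute user count."""
    per_minute = []
    for minutes, users in phases:
        per_minute.extend([users] * minutes)
    return per_minute

# 500 peak concurrency over 1 hour: ramp up from a 200-user base,
# hold at peak, then ramp down to 200 and finally drain to 0.
schedule = build_schedule([
    (20, 200),   # ramp-up base: 200 users for 20 minutes
    (30, 500),   # peak: 500 users for 30 minutes
    (5, 200),    # ramp-down: back to 200 users
    (5, 0),      # drain to 0
])

print(len(schedule))   # 60 minutes covered
print(max(schedule))   # peak of 500 users
```

In a real load tool, these phases map onto its ramping settings; the point here is only that the schedule is a small, explicit data structure you derive from observed traffic.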

3. % distribution of workflows - If you have 5 workflows, it is possible that 3 of the 5 carry more traffic, so you might want to mimic these real-time traffic trends in the tests as well.

4. % distribution of Users - Similarly for the users as well, if need be.
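The percentage distributions in points 3 and 4 amount to splitting the user pool by observed traffic share. A minimal sketch, where the workflow names and percentages are illustrative assumptions rather than measured values:

```python
# Illustrative sketch: split a user pool across workflows by
# observed traffic share. Names and weights are made-up examples.

def distribute(total_users, weights):
    """Split total_users across workflows by percentage weights."""
    counts = {name: int(total_users * pct / 100) for name, pct in weights.items()}
    # Assign any rounding remainder to the heaviest workflow.
    remainder = total_users - sum(counts.values())
    heaviest = max(weights, key=weights.get)
    counts[heaviest] += remainder
    return counts

weights = {"Search": 40, "AddToCart": 25, "Checkout": 15, "Login": 15, "Profile": 5}
counts = distribute(500, weights)
print(counts)  # e.g. Search gets 200 of the 500 users
```

The same split applies whether you are distributing workflow executions or distinct user accounts.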

5. In real-time testing, another way of mimicking real-time behavior is to model drop-off at each step. Taking the "Add to Cart" example: some users open the site and search for a product but don't move beyond that and exit. A few others move to the next step and select the product but don't add it to the cart. And the last few actually complete the workflow and add the product to the cart. So at each usage step, the users' behavior is captured as a percentage, i.e., what % of users actually move to the next step and what % drop off at that step. We can design the tests at this level as well.
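The step-wise drop-off described above is a funnel: of the users who reach each step, only some fraction continues. A small sketch with illustrative continue rates (not real measurements):

```python
# Sketch of the "Add to Cart" drop-off funnel. The continue rates
# (60% and 50%) are assumed percentages for illustration only.

def funnel(entering_users, continue_rates):
    """Return how many users reach each step, given per-step continue rates."""
    reached = [entering_users]
    for rate in continue_rates:
        reached.append(int(reached[-1] * rate))
    return reached

# 1000 users search; 60% of those select a product;
# 50% of those actually add it to the cart.
steps = funnel(1000, [0.60, 0.50])
print(steps)  # users reaching: search, select product, add to cart
```

Designing tests at this level means each virtual user is assigned an exit point according to these percentages, rather than every user completing the full workflow.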

6. Think Times: Think times simulate the human behavior of pausing between interactions with a website. Think times occur between requests in a web performance test and between test iterations in a load test scenario. Using think times in a load test helps create more accurate load simulations. In the "Add to Cart" example, a human can pause at various points while staying on pages already rendered, e.g., deciding which product to search for or select, or which product criteria to choose. When we capture the manual execution time of the business workflow and record the maximum execution time, we do include think time. In automation, however, we predefine the elements to interact with and the data to use, so the automation execution time is generally less than the manual execution time. To cover this gap, we add additional wait time to meet the maximum manual time expectation. We have the logic defined for this; we will explain it better in the design.
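The gap-covering idea above reduces to a simple calculation: pad each automated iteration with enough wait time to match the slowest observed manual run. A minimal sketch with made-up timings (the actual padding logic is defined later in the design):

```python
# Sketch: pad automation runs with think time so they match the
# maximum manual execution time. Timings are illustrative examples.

def think_time_padding(max_manual_seconds, automation_seconds):
    """Extra wait needed so an automated run takes as long as the
    slowest manual run; never negative."""
    return max(0, max_manual_seconds - automation_seconds)

# Manual run took up to 90 s including thinking; automation finishes in 35 s.
print(think_time_padding(90, 35))  # pad with 55 s of think time
```

In practice the padding is usually spread across the pause points in the workflow (after search, after product selection, and so on) rather than added as one block.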

Once we have these metrics, we build the distribution of the tests and decide how the tests are to be configured to meet the above expectations. We will also deduce the number of times each business workflow will be executed, and how many tests in total we should expect if executed for an hour or any defined time slot.
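Deducing the expected execution count can be sketched as follows, assuming each virtual user loops its workflow for the whole slot and each iteration includes the think-time padding. The numbers are illustrative assumptions:

```python
# Rough sketch of deriving expected execution counts for a time slot.
# Assumes each user repeats the workflow back-to-back; numbers are
# illustrative, not measured values.

def expected_executions(slot_seconds, users, iteration_seconds):
    """Total runs = users * full iterations each user fits in the slot."""
    return users * (slot_seconds // iteration_seconds)

# 200 users on a workflow whose padded iteration takes 90 seconds:
print(expected_executions(3600, 200, 90))  # expected runs in one hour
```

Summing this figure over all workflows (with their per-workflow user counts from the % distribution) gives the total test volume to expect for the slot.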



