Performance Testing :: Module 2 :: Lesson 3 :: Topic 3.1 :: Business Scenarios, Metrics, and Data
This article is part of the Self-paced Learning Series for the course: First-Hand Experience of eureQa. Please refer to the link for more details on the course.
Identifying the scenarios based on page traffic and slow performance areas
Once these are identified, we need to understand which action(s) should be considered as the criteria for performance capture and analysis.
In general, we usually want to understand how long a load takes. It can be a page load, pop-up load, image, video, or lazy load; the most common case is a page load.
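To make "how long a load takes" concrete, here is a minimal sketch, outside of eureQa, that measures a full page load with Selenium and the browser's Navigation Timing API. The URL and the use of Chrome are assumptions for illustration only.

```python
# Minimal sketch (not eureQa): measuring a full page load with Selenium and the
# browser's Navigation Timing API. Assumes Chrome and a matching driver.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.example.com")   # placeholder URL

# loadEventEnd - navigationStart = time from navigation start to the 'load' event.
load_ms = driver.execute_script(
    "var t = window.performance.timing; return t.loadEventEnd - t.navigationStart;"
)
print(f"Page load took {load_ms} ms")
driver.quit()
```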
In a business scenario, for example order placement on Amazon, we have a basic workflow, "Add product to Cart". This flow has the following steps:
1. Launch
2. Search for product
3. Select the product
4. Select the product criteria (like colour, size...)
5. Add to Cart
In the above steps, 1, 2, 3, and 5 have actions that trigger a page load: a new page is rendered. Step 4 is just a selection; there is no page rendering. So, first we should identify the action steps, and then determine which of those action steps result in a page/element render, load, reload, or refresh. It is also possible that a page refreshes when a pop-up is handled, so opening or closing a pop-up can also cause a page to reload, and there too we can capture performance metrics.
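As a rough way to see which action steps actually cause a page render, the sketch below (Selenium-based, purely illustrative, and not part of eureQa) checks whether an element from the previous page becomes stale after the action; the URL and locators are assumptions.

```python
# Minimal sketch (Selenium, not eureQa): detect whether an action step caused a
# page render by checking if an element from the previous page became stale.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()
driver.get("https://www.example.com")                       # placeholder URL

marker = driver.find_element(By.TAG_NAME, "body")           # element on the current page
driver.find_element(By.LINK_TEXT, "More information...").click()  # the action step

try:
    # If the old <body> goes stale, the action rendered a new page.
    WebDriverWait(driver, 10).until(EC.staleness_of(marker))
    print("Page render detected -> candidate for performance capture")
except TimeoutException:
    print("No page render -> likely a selection-only step")
driver.quit()
```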
Once the action steps are identified, we need to decide at which of these action steps we actually need to capture the performance metrics. It can be all of them or just a few. A step at which we (or the user) decide to capture the metrics is what we call a "Usage Step". We will use this term throughout the course, and most of the logic revolves around it.
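One possible way to record which steps of the workflow above are usage steps is a simple step list with flags, as in the sketch below; the field names are invented for illustration and are not eureQa terminology beyond "usage step" itself.

```python
# Minimal sketch: flagging which steps of the "Add product to Cart" workflow
# are usage steps. Field names are illustrative, not eureQa terminology.
workflow = [
    {"step": "Launch",                      "action": True,  "usage_step": True},
    {"step": "Search for product",          "action": True,  "usage_step": True},
    {"step": "Select the product",          "action": True,  "usage_step": False},
    {"step": "Select the product criteria", "action": False, "usage_step": False},
    {"step": "Add to Cart",                 "action": True,  "usage_step": True},
]

# Performance metrics are captured only at the usage steps.
for s in workflow:
    if s["usage_step"]:
        print(f"Capture metrics at: {s['step']}")
```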
How do we determine which of the action steps are usage steps, and at which points should we capture the performance metrics? There are two things here: what triggered the load (Action load) and what the outcome of that trigger is (Outcome load). To explain this, let us take the example of searching for a product or adding it to the cart. If we click on the "Search" element, the action load is how long the action itself took to perform, i.e. the click. We use a "Click" command, so what is the duration of the click command?
And what is the outcome load? After the click action (here, Search), how long does it take for the search results to show up on the newly rendered page? That is the outcome load. It is possible that the actual search results show up quickly but other elements on the page take more time, so we also need to be clear about which result/outcome we are looking for.
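A small Selenium sketch can show the split; it is illustrative only, with a hypothetical store URL and locators, and is not tied to how eureQa measures these timings internally. The duration of the click command is the action load, and the wait for the expected result element is the outcome load.

```python
# Minimal sketch (Selenium, illustrative only): splitting a usage step into
# "action load" (duration of the click command) and "outcome load" (time until
# the expected result appears). URL and locators are hypothetical.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://shop.example.com")                       # hypothetical store

search_button = driver.find_element(By.ID, "search-btn")     # hypothetical locator

start = time.perf_counter()
search_button.click()                                        # the action
after_click = time.perf_counter()
action_load = after_click - start                            # Action load

# Outcome load: how long until the result we care about is actually visible.
WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, ".search-results"))
)
outcome_load = time.perf_counter() - after_click             # Outcome load

print(f"Action load:  {action_load:.3f} s")
print(f"Outcome load: {outcome_load:.3f} s")
driver.quit()
```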
The usual term for these measurements is KPI (Key Performance Indicators). Some common KPIs are:
Response Time: How long does it take to send a request and receive a response?
Wait Time: How long does it take to receive the first byte once a request is sent?
Peak Response Time: What is the longest amount of time it takes to fulfill a request?
Error Rate: What percentage of requests result in errors in comparison to all sent requests?
CPU Utilization: How much time does the CPU need to process requests?
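For a sense of how a few of these KPIs are computed from raw measurements, here is a minimal Python sketch using the requests library against a placeholder endpoint; CPU utilization is left out because it has to be collected on the server side rather than from the test client.

```python
# Minimal sketch: computing a few of the KPIs above from raw request timings
# against a placeholder endpoint. CPU utilization is omitted; it has to be
# collected on the server itself, not from the test client.
import time
import requests

URL = "https://shop.example.com/search?q=shoes"   # hypothetical endpoint
samples, errors = [], 0

for _ in range(20):
    start = time.perf_counter()
    resp = requests.get(URL, timeout=30)
    samples.append(time.perf_counter() - start)   # response time for this request
    if resp.status_code >= 400:
        errors += 1
    # resp.elapsed (time until response headers arrive) approximates the wait time.

avg_response = sum(samples) / len(samples)        # Response Time (average)
peak_response = max(samples)                      # Peak Response Time
error_rate = errors / len(samples) * 100          # Error Rate (%)

print(f"Average response time: {avg_response:.3f} s")
print(f"Peak response time:    {peak_response:.3f} s")
print(f"Error rate:            {error_rate:.1f} %")
```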
Based on different requirements, performance criteria will be defined. In general, if it is a page load, then we consider "Action load" + "Outcome load" together.
I hope this has provided some clarity in understanding what the actual requirement for your performance tests is.
Once we have defined criteria, we also need to agree on certain expectations for the tests and on the metrics for comparison (the baseline).
1. Test Data -- How does the test data impact the outcome of the tests (pass or fail)? Is the data volatile? Will there be a data refresh, and what percentage of the test data should we expect to change? If multiple users are involved, do we need to reuse the same accounts or create new ones? It is important to test on an environment as close to Production as possible to minimize test data issues.
2. Threshold times - We need metrics on the maximum acceptable time to execute a business workflow, and on the backend execution time for each flow or step (mainly the usage steps). We need these to mimic the real-time behavior of an actual user.
3. Success rate expectation - A 100% passed test run is rare when we are using real-time data, so we usually agree on an acceptable pass percentage for the tests we execute, 95% for example, depending on the data, the flows, changes to the AUT, and so on. A short sketch after this list shows how threshold times and a pass-rate expectation can be checked together.
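As a rough illustration of points 2 and 3, the sketch below checks a set of made-up workflow run durations against a maximum threshold and compares the resulting pass rate with an agreed 95% expectation; all values are invented for the example.

```python
# Minimal sketch: checking workflow run durations against a threshold (point 2)
# and the resulting pass rate against an agreed expectation (point 3).
# All numbers below are made-up example values.
MAX_WORKFLOW_SECONDS = 12.0    # agreed threshold for the "Add product to Cart" flow
EXPECTED_PASS_RATE = 95.0      # agreed minimum pass percentage

run_durations = [9.8, 10.4, 11.9, 13.2, 10.1, 9.5, 11.6, 10.9, 11.2, 10.0]

passed = sum(1 for d in run_durations if d <= MAX_WORKFLOW_SECONDS)
pass_rate = passed / len(run_durations) * 100

print(f"Pass rate: {pass_rate:.1f}% (expected >= {EXPECTED_PASS_RATE}%)")
if pass_rate < EXPECTED_PASS_RATE:
    print("Below the agreed success rate: investigate data, flows, or AUT changes.")
```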