Auto re-execution of Test Run failures
A common practice in test automation is to re-execute a test run when it fails. While tools are getting smarter at keeping test flakiness to a minimum, it cannot be said that flakiness is eliminated completely. Worksoft SaaS also applies AI to ensure tests are not flaky, but in the end, a certain percentage of flakiness is to be expected.
Worksoft SaaS applies AI/ML logic to determine why a test run has failed. If it finds that a test failed for any of the following reasons, it re-schedules the test automatically:
- Worksoft SaaS flakiness: The system determines that some of the commands did not perform the actions they were expected to perform
- Application flakiness: The application did not load properly for any reason
- System Aborted Tests: The test was aborted for reasons such as the session being killed, an Unreachable Browser Exception, a Browser Startup Failure, etc.
There can be a delay of up to 5 minutes between the test run completing and its re-scheduling. A test is re-scheduled a maximum of 2 times.
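The sketch below summarizes this behavior conceptually. It is an illustration only, not the actual Worksoft SaaS implementation; the reason names and the helper function are assumptions made for the example.

```python
# Conceptual sketch of the auto re-run decision described above.
# Not Worksoft SaaS's internal logic; FLAKY_REASONS and should_reschedule
# are assumed names used only for illustration.

FLAKY_REASONS = {
    "worksoft_saas_flakiness",   # a command did not perform the expected action
    "application_flakiness",     # the application did not load properly
    "system_aborted",            # killed session, unreachable browser, startup failure, etc.
}

MAX_AUTO_RERUNS = 2  # a test is re-scheduled at most 2 times


def should_reschedule(failure_reason: str, reruns_so_far: int) -> bool:
    """Return True if the failed test run qualifies for an automatic re-run."""
    return failure_reason in FLAKY_REASONS and reruns_so_far < MAX_AUTO_RERUNS
```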
You can override this default behavior by setting the parameter "MaxTimesToRerunAllFlakyTests" to 0 while scheduling the Test Cycle. This parameter accepts the values 0, 1, and 2. To learn more about this parameter, see the QaCONNECT REST API Docs page. At this time, the default setting cannot be changed at the Project or Domain level.
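As an illustration, the sketch below shows how this parameter might be included in a Test Cycle scheduling request. The endpoint path, payload field names (other than "MaxTimesToRerunAllFlakyTests"), and authentication details are assumptions; refer to the QaCONNECT REST API Docs for the actual request format.

```python
import requests

# Hypothetical request: the host, endpoint, headers, and payload shape are
# assumptions; consult the QaCONNECT REST API Docs for the actual call.
BASE_URL = "https://<your-worksoft-saas-host>/qaconnect"  # placeholder host

payload = {
    "testCycleName": "Nightly Regression",      # illustrative field name
    "MaxTimesToRerunAllFlakyTests": 0,          # 0 disables auto re-runs; allowed values: 0, 1, 2
}

response = requests.post(
    f"{BASE_URL}/testcycles/schedule",          # hypothetical endpoint path
    json=payload,
    headers={"Authorization": "Bearer <api-token>"},
)
response.raise_for_status()
```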
You can find the test runs scheduled by AI/ML using either of the following methods.
Checking for the icon beside the RunId in the Test Runs Home screen:
For any test run that is scheduled by AI/ML, this icon shows up beside the RunId. When you click on this icon, Worksoft SaaS brings up a popover with two grids. The top grid shows details of the Original Test that was scheduled by the User as part of the Test Cycle, and the bottom grid shows the Test Runs that were scheduled by the AI/ML. The bottom grid shows a maximum of 2 rows, depending on the RunId that you choose, and the keyword 'current' is shown beside the RunId that you chose.
Using the Search option in Test Runs Home:
You can search in the Test Runs Home using the value ‘Worksoft AI/ML’ for the ‘Scheduling Source’ and ‘Scheduling By’ Search Criteria.
Test Cycles Home Screen:
You can find a snapshot of how many Test Runs were re-scheduled by visiting this screen. When you hover over any of the counts (status icons) shown in the last field of the Test Cycle in context, a small popover appears with the following counts:
Total Executions: Total number of tests executed as part of the Test Cycle
Distinct Executions (Original): Total number of tests scheduled as part of the Test Cycle. When there are no failed test runs in the Test Cycle, or you have overridden the parameter "MaxTimesToRerunAllFlakyTests" with a value of 0, this count matches the previous one, i.e. "Total Executions". This value comes in handy when there are failed tests in the Test Cycle and AI/ML has re-scheduled them based on the RCF assigned, as it gives the exact number of tests you scheduled as part of the Test Cycle (a worked illustration of how these counts relate is shown below).
Re-executions (Manual): These are the test runs that you re-scheduled and tagged to the Test Cycle.
Re-executions (Worksoft AI/ML): These are the test runs that were re-scheduled by AI/ML based on the RCF assigned.
You can review the exact Test Runs by clicking on the counts, which redirects you to the Test Runs Home screen with filtered results.
Note: The counts shown on mouseover differ for each status icon based on the state of the executions tagged to the Test Cycle.
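As a worked illustration of how these counts relate, the sketch below assumes that "Total Executions" is the sum of the original executions and both kinds of re-executions (an inference from the descriptions above, using invented sample numbers).

```python
# Invented sample numbers for a Test Cycle, used only to illustrate
# how the counts described above are assumed to relate.
distinct_executions_original = 50   # tests you scheduled in the Test Cycle
reexecutions_manual = 3             # re-runs you scheduled and tagged yourself
reexecutions_ai_ml = 4              # re-runs scheduled by Worksoft AI/ML

total_executions = (
    distinct_executions_original + reexecutions_manual + reexecutions_ai_ml
)
print(total_executions)  # 57; with no failures or MaxTimesToRerunAllFlakyTests=0, this equals 50
```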
Re-executions Information:
Clicking on the blue ribbon shown at the top-right of the Actual Window in the Test Cycles Home screen gives you detailed information about the Start Time, End Time, Duration, and consumed Capacity of the Re-executions triggered by Worksoft AI/ML and by Users, respectively, as shown below.
An icon shown next to the Test Cycle Identifier in the Test Cycles Home screen indicates the status of the auto re-executions of the failed tests.
The icon appears in three different colors, representing three states of the auto re-executions of the flaky tests.
- Orange: A blinking orange icon indicates that the flaky tests auto re-run by Worksoft AI/ML are in the "Pending" state.
- Yellow: A blinking yellow icon indicates that the flaky tests auto re-run by Worksoft AI/ML are in the "In-Progress" state.
- Green: A green icon indicates that the flaky tests auto re-run by Worksoft AI/ML are in the "Completed" state.
Flaky Test Auto Reruns by Worksoft AI/ML Report:
You can find a snapshot of the Test Runs re-executed due to Test Run failures by viewing this report. When you generate the Test Cycle Level Test Outcomes Report, this new report is generated and shown next to the 'Transactions Detail Report' and 'Concurrency Usage Report', as shown below.
This report gives you detailed information about the counts of the re-executed test runs and the consumed capacity.