
Auto assignment of "Root Cause for Failure" by Worksoft SaaS Machine Learning

Based on the history of label assignments you have made to failed tests in the application, Worksoft SaaS now assigns "Root Cause for Failure" labels, referred to as RCF throughout this article, automatically to similar failed tests using AI/ML, reducing the effort of manually debugging failed tests and assigning labels to them. Worksoft SaaS assigns an RCF label to a failed test run when all of the following conditions are met (a conceptual sketch follows the list):

  1. The test run has failed, i.e. one or more instructions in the test run failed;
  2. The test run was executed in the context of a Test Cycle; and
  3. A similar test run failure was seen in the same Project at least once in the past, and you assigned an RCF label to that test run.
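
Conceptually, these conditions combine as a simple logical AND. The sketch below restates them in Python; the attribute names are hypothetical illustrations and are not part of any Worksoft SaaS API.

```python
def eligible_for_auto_rcf(run) -> bool:
    """Toy restatement of the three conditions above. Attribute names
    are hypothetical, not a Worksoft SaaS API."""
    return (
        run.failed_instruction_count >= 1      # 1. the test run failed
        and run.test_cycle is not None         # 2. run executed within a Test Cycle
        and run.has_labeled_similar_failure    # 3. a similar failure in the same
    )                                          #    Project already has an RCF label
```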

You may notice that not all failed test runs have an RCF assigned by the system. This typically happens when the failure is new to the Project and/or the confidence factor is less than 50%.

This default behavior can be overridden using the parameter "AutoTriageFailedRunsIndicator" in the scheduling call. The value defaults to TRUE; by passing FALSE, you can stop Worksoft SaaS from assigning RCF labels. To learn more about this parameter, refer to the QaCONNECT REST API Docs page. At this time, it is not possible to switch off the feature at the project or account level.
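
As a rough illustration (not a verbatim QaCONNECT call), the snippet below shows how the parameter might be passed in a scheduling request. The host, endpoint path, and other payload fields are placeholders; only "AutoTriageFailedRunsIndicator" is the documented parameter. Consult the QaCONNECT REST API Docs for the actual call signature.

```python
import requests

# Placeholder host and endpoint -- consult the QaCONNECT REST API Docs
# for the real scheduling call. Only AutoTriageFailedRunsIndicator is
# the documented parameter here.
BASE_URL = "https://your-worksoft-saas-host.example.com"

payload = {
    # ... your usual scheduling parameters go here ...
    # Defaults to TRUE; pass FALSE to stop Worksoft SaaS from
    # auto-assigning Root Cause for Failure labels.
    "AutoTriageFailedRunsIndicator": "FALSE",
}

response = requests.post(f"{BASE_URL}/qaconnect/scheduling", json=payload)
response.raise_for_status()
```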

For tests that have a label assigned, you can quickly identify whether the RCF was assigned by the system or by a user, based on the icon that appears beside the label in the Labels popover. The different icons, and when they appear, are listed below.

- Assigned by AI/ML

- Assigned by AI/ML and liked by User

- Assigned by User

- Assigned by AI/ML but disliked by User

- Assigned by AI/ML and unassigned by the User; the User could also have disliked the match

Clicking the icon brings up a popover similar to the one shown below. The information and actions available in the popover are described in the following sections.

Match Accuracy:

Match Accuracy is the confidence with which the label is assigned to the test run. The system assigns the label even when the value is less than 100%. This percentage is arrived at by the system after analysing and assessing the history of failed tests and the labels assigned in the Project. The higher the value, the more likely it is that the assigned label is correct.

The accuracy (or confidence factor) with which the AI/ML algorithm can assign the correct RCF label depends solely on the past test run failures in the Project and the labels assigned to those failed test runs. You can improve the accuracy by diligently assigning labels to failed test runs and by rating the auto-assigned labels (refer to the next section for further details).

The Worksoft AI/ML algorithm uses this data to train itself to identify new failures and to improve its accuracy.
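
As a heavily simplified illustration of how a history-based confidence factor can behave (this is not Worksoft's actual algorithm), consider picking the most frequent label among similar past failures and using its share of that history as the Match Accuracy, suppressing the assignment below the 50% threshold mentioned earlier:

```python
from collections import Counter

def toy_match_accuracy(past_labels: list[str]) -> tuple[str, float] | None:
    """Toy sketch, NOT Worksoft's algorithm: pick the most frequent RCF
    label among similar past failures and use its share of the history
    as a confidence factor."""
    if not past_labels:
        return None  # failure is new to the Project: no basis to assign
    label, count = Counter(past_labels).most_common(1)[0]
    confidence = count / len(past_labels)
    # Mirror the article's behavior: no RCF is shown when the
    # confidence factor is below 50%.
    return (label, confidence) if confidence >= 0.5 else None

# Example: 3 of 4 similar past failures were labeled "Environment Issue".
print(toy_match_accuracy(["Environment Issue"] * 3 + ["Script Defect"]))
# -> ('Environment Issue', 0.75)
```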

Rate It:

There are two icons available in this section: Like & Dislike (or Upvote & Downvote). Your use of these icons is captured by the system and influences the auto assignment of RCF labels in future test cycles.

Like (Upvote)
Liking (or not liking) an RCF assignment doesn't have much impact on future test cycles when the accuracy percentage is 100%, but it plays a major role when the accuracy is less than 100%. When a label assigned with less than 100% accuracy is liked (upvoted), it increases the chances of the same label being assigned for similar failures in the future, thereby reducing the effort you need to put into debugging test run failures.

Dislike (Downvote)
You are strongly encouraged to dislike (downvote) when the system-assigned RCF is wrong. If, in addition to using this icon, you assign the correct RCF, the AI/ML algorithm will learn from this data and improve the chances of picking the right label in future test cycles.

When you dislike (downvote) the RCF assignment by clicking the Downvote icon, an additional popover is displayed with multiple options to choose from. The options that appear are listed below.

  • This root cause is not correct. No root cause for failure was assigned to previous runs of this test and hence no basis exists for the current auto-assigned root cause.
  • This root cause is not correct. There have been more than one point of failure for the run and the root cause has been assigned based on the non-critical point(s) of failure.
  • This root cause is not correct. No historical data exists for the new point of failure. However, the system assigned a root cause based on a different point of failure for past executions of the same test.
  • This root cause is not correct only for this instance because I believe this is a rare unexpected failure. For any failures during future executions, I’d like the auto-assignments to be done based on past data.
  • Even though the root cause for failure auto-assigned to this test is correct based on the same being used for test runs thus far, I want to deviate from this root cause for failure which was manually assigned in the past that trained the auto-assignment by the system.

The reason chosen while disliking the RCF assignment can be seen when you revisit the popover: a red triangle appears on the "Disliked" field, and you can mouse over it to see the reason.


You are encouraged to go through these options and become conversant with them. It is very important to choose the right option when disliking an assignment, so the AI/ML algorithm can correct itself and assign the right RCF label in the future.

Assigned:

The table/grid provides details on who assigned the RCF to the test run and when. By default, you see one column in the grid whenever an RCF is assigned by the AI/ML feature of Worksoft SaaS. Depending on whether a user liked/disliked the assignment or assigned/unassigned a different RCF, up to two additional columns are shown: if you liked or disliked the assignment, a second column appears the next time you open the popover; if you also unassigned the label and assigned a different RCF, a third column is shown with the details.


Test Run Used for Matching:

While reviewing the assigned RCF, you may wonder on what basis the label was assigned. That information is shown in this section: you see not only the Run Id of the matching test run, but also when it was executed and which Test Cycle it belongs to.

The Root Cause for Failure label(s) assigned to failed tests are reflected in the Worksoft SaaS Analytics Reports, giving you a detailed picture of the failures linked to a test cycle.

You can also filter test runs based on the Match Accuracy percentage, which is available in the Test Runs Advanced Search Criteria.

