
Commands for Test Execution Defect Triage

Worksoft SaaS provides different ways of tagging failures. One of them is the Root Cause Failure label, which lets you categorize failures under broad headings such as "Application Defect" and "Test Data". To help users capture additional context, Worksoft SaaS has introduced a new feature called "Annotations" for test runs.

You can add an annotation either through the Worksoft SaaS application (at the test run level or by using a command) or through the QaCONNECT Service. In the SaaS application, you can add annotations on the "Execution Results" page (and, in the future, on the "Test Runs Home" page and the "Home" page of a Run Definition and/or Scenario).
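If you use the QaCONNECT Service route, the call is an ordinary authenticated REST request. The sketch below is illustrative only: the endpoint path, header names, and payload fields are assumptions made for this example and are not taken from the QaCONNECT documentation, so consult the service reference for the actual resource names and authentication scheme.

    # Minimal sketch of adding an annotation to a test run through a REST call.
    # NOTE: the endpoint path, header name, and payload keys below are
    # assumptions made for illustration; check the QaCONNECT Service
    # documentation for the real contract.
    import requests

    QACONNECT_BASE_URL = "https://<your-worksoft-saas-host>/qaconnect/v1"  # placeholder host
    API_KEY = "<your-api-key>"  # placeholder credential

    def set_annotation_for_test_run(test_run_id: str, annotation: str) -> None:
        """Attach an annotation to the given test run (hypothetical endpoint)."""
        response = requests.post(
            f"{QACONNECT_BASE_URL}/test-runs/{test_run_id}/annotations",  # assumed path
            headers={"Authorization": f"Bearer {API_KEY}"},               # assumed auth scheme
            json={"annotation": annotation},                              # assumed payload shape
            timeout=30,
        )
        response.raise_for_status()

    if __name__ == "__main__":
        set_annotation_for_test_run("12345", "Maps to manual test case TC-101")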

Here are some use cases where you can benefit from these commands:

  • Assign annotations dynamically at the beginning of a test's execution so that the context of the test is accurately recorded. For example, if your automated test maps to more than one manual test case or manual test case scenario, the test can assign the appropriate annotations at run time depending on its scope.

    In the above business scenario, you can add the annotation in your test automation by using the command "setAnnotationForTestRun". If you anticipate errors, you can use an Exception Block in your test automation to assign a Root Cause Failure and add annotations that provide context on the failure, as illustrated in the sketch after this list.
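The dynamic-assignment and Exception Block ideas above come down to two pieces of logic: pick the annotation from the test's run-time scope, and, on an expected failure, record a Root Cause Failure together with an explanatory annotation. The sketch below shows that control flow in plain Python; the scope names, annotation texts, and record_* helpers are placeholders, and in an actual Worksoft SaaS test the recording would be done with the setAnnotationForTestRun command and the Root Cause Failure assignment inside the Exception Block.

    # Illustration of the decision logic only; in a real Worksoft SaaS test the
    # annotation would be assigned with the setAnnotationForTestRun command and
    # the Root Cause Failure would be set inside an Exception Block.
    # All scope names, annotation texts, and helper functions here are hypothetical.

    SCOPE_TO_ANNOTATION = {
        "checkout_happy_path": "Covers manual test case TC-101",
        "checkout_declined_card": "Covers manual test cases TC-102 and TC-103",
    }

    def record_annotation(annotation: str) -> None:
        # Placeholder for the setAnnotationForTestRun step.
        print(f"setAnnotationForTestRun -> {annotation}")

    def record_root_cause(label: str, annotation: str) -> None:
        # Placeholder for the Exception Block that assigns a Root Cause Failure
        # label and an explanatory annotation.
        print(f"Root Cause Failure: {label}; annotation: {annotation}")

    def execute_test_steps(scope: str) -> None:
        # Stand-in for the real automated steps.
        pass

    def run_test(scope: str) -> None:
        # Assign the annotation at the start of the run so the test's context
        # is captured even if a later step fails.
        record_annotation(SCOPE_TO_ANNOTATION[scope])
        try:
            execute_test_steps(scope)
        except Exception as exc:
            record_root_cause("Application Defect", f"Step failed in scope '{scope}': {exc}")
            raise

    if __name__ == "__main__":
        run_test("checkout_happy_path")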

The table below lists the commands in ascending alphabetical order within each category, along with a brief description of the purpose behind each command. By clicking the hyperlinked command name in the table, you can review more detailed information about the command, including the syntax to follow when using it.

Command: setAnnotationForTestRun
Purpose: Creates an annotation on a specific "test run" that is within the scope of the automated test in execution. This command comes in handy when you do NOT want to predefine annotations upfront but want them created dynamically, based on the negative outcomes that tests experience at run time.
