
Use Cases for Test Executions through QaCONNECT - Priority/Dependency

Below are a few use cases that you can implement using this feature:

  • Use Case 1: Test Executions dependency for Regression Tests with Smoke Tests
  • You have made changes to your application and want to run sanity (smoke) tests first, so that any failures can be addressed before the regression tests are triggered.

    To simplify this process, in Worksoft SaaS you can set a dependency so that the regression tests are triggered only when the sanity tests complete successfully.

    {
      "testCycleIdentifier": "Batch Runs-Aug 2019",
      "runDefinitionTestingContextKeys": [
        {
          "id": "1ITA2HC",
          "status": "WIP",
          "batchLabel": "Smoke Test Suite"
        },
        {
          "id": "3YTA4HC",
          "status": "WIP",
          "batchLabel": "Regression Test Suite"
        },
        {
          "batchConfig": [
            {
              "batchLabel": "Regression Test Suite",
              "triggerConditionsForTheBatch": [
                {
                  "testCycleIdentifier": "Batch Runs-Aug 2019",
                  "batchLabel": "Smoke Test Suite",
                  "runDefinitionTestingContextKeys": "1ITA2HC",
                  "testExecutionStatusList": ["01"]
                }
              ]
            }
          ]
        }
      ]
    }

    As you can see in the above example, the test runs tagged with the 'Smoke Test Suite' label in the 'runDefinitionTestingContextKeys' JSON, i.e., the runs scheduled under the test cycle identifier 'Batch Runs-Aug 2019', are triggered first. Only when the test execution status of all those runs moves to '01' (Completed w/o Failures), as specified, are the test runs tagged with the 'Regression Test Suite' label triggered. This is how you can schedule the batch runs on the Smoke and Regression test beds accordingly.
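
    If you want to submit this payload from your own scripts, the sketch below shows one way to post it to a QaCONNECT REST endpoint using Python's 'requests' library. The endpoint path and authentication details are placeholders, not taken from the QaCONNECT documentation; use the actual values from the docs page linked at the end of this article.

      # Minimal sketch, assuming the dependency payload shown above is saved to a
      # file and that QaCONNECT accepts it over HTTPS as JSON. The URL and headers
      # below are placeholders, NOT the documented endpoint.
      import json
      import requests

      with open("dependency_payload.json") as f:
          payload = json.load(f)  # the JSON example shown above

      response = requests.post(
          "https://<your-worksoft-saas-host>/<qaconnect-scheduling-endpoint>",  # placeholder
          headers={"Content-Type": "application/json"},  # add your API credentials here
          json=payload,
          timeout=60,
      )
      response.raise_for_status()
      print(response.status_code, response.text)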

  • Use Case 2: Scheduling the Test Runs in batches and generating the Test Cycle Level Test Outcome Reports
  • For example, you need to perform functional/load testing of your application and schedule the test runs in a timely fashion. Once the test runs are completed, you can generate the Test Cycle Level Test Outcomes Report(s) to analyze the failures that occurred during this phase.

    In this case, you can trigger the test runs in batches and set a dependency so that the test cycle test outcomes report is generated only when the test runs have moved to a logically completed state (any of the states can be specified according to your needs).

    {
      "testCycleIdentifier": "Batch Runs-Aug 2019",
      "runDefinitionTestingContextKeys": [
        {
          "id": "1ITA2HC",
          "status": "WIP",
          "batchLabel": "RD for Batch Runs"
        },
        {
          "id": "3YTA4HC",
          "status": "WIP",
          "batchLabel": "RD for TestCycle Level Test Outcomes/Clipboard Report Generation"
        },
        {
          "batchConfig": [
            {
              "batchLabel": "RD for TestCycle Level Test Outcomes/Clipboard Report Generation",
              "triggerConditionsForTheBatch": [
                {
                  "testCycleIdentifier": "Batch Runs-Aug 2019",
                  "batchLabel": "RD for Batch Runs",
                  "runDefinitionTestingContextKeys": "1ITA2HC",
                  "testExecutionStatusList": ["01"]
                }
              ]
            }
          ]
        }
      ]
    }

    As you can see in the above example, the Run Definition for the test cycle level test outcomes/test cycle clipboard report generation is triggered only after the execution of the batch runs has completed. This saves time, because you do not have to watch the batch runs and trigger the test runs for test cycle level test outcomes/clipboard report generation yourself.
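
    Conceptually, a trigger condition like the one above holds back the dependent batch until every run in the referenced batch has reached one of the listed statuses. The Python sketch below only restates that rule for illustration; it is not Worksoft SaaS code and makes no claim about how the platform tracks run statuses internally.

      # Illustration only: the rule a 'triggerConditionsForTheBatch' entry expresses.
      def batch_can_start(trigger_conditions, run_statuses):
          """run_statuses maps (testCycleIdentifier, batchLabel, runId) -> status code."""
          for condition in trigger_conditions:
              allowed = set(condition["testExecutionStatusList"])
              prerequisite = [
                  status
                  for (cycle, label, _run_id), status in run_statuses.items()
                  if cycle == condition["testCycleIdentifier"]
                  and label == condition["batchLabel"]
              ]
              # Every run in the prerequisite batch must be in an allowed status.
              if not prerequisite or not all(s in allowed for s in prerequisite):
                  return False
          return True

      # The report-generation batch starts only after the batch runs reach '01'.
      statuses = {("Batch Runs-Aug 2019", "RD for Batch Runs", "run-1"): "01"}
      conditions = [{
          "testCycleIdentifier": "Batch Runs-Aug 2019",
          "batchLabel": "RD for Batch Runs",
          "runDefinitionTestingContextKeys": "1ITA2HC",
          "testExecutionStatusList": ["01"],
      }]
      print(batch_can_start(conditions, statuses))  # True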

  • Use Case 3: Scheduling of Tests on the Predominant Browser and then performing Cross-Browser Testing
  • This case is useful once your automation test suite is ready: you can schedule the test runs on a single browser first to check for failures, rather than scheduling them on all testing platforms, which improves your productivity and conserves your subscription capacity.

    You schedule the test runs on the predominant browser and set a dependency so that the test runs on the other browsers are triggered only when the runs scheduled on the predominant browser have moved to a logically completed state (e.g., Completed w/o Failures).

    {
      "testCycleIdentifier": "Batch Runs-Aug 2019",
      "runDefinitionTestingContextKeys": [
        {
          "id": "1ITA2HC",
          "status": "WIP",
          "batchLabel": "Chrome Browser Testing"
        },
        {
          "id": "3YTA4HC",
          "status": "WIP",
          "batchLabel": "Cross Browser Testing"
        },
        {
          "batchConfig": [
            {
              "batchLabel": "Cross Browser Testing",
              "triggerConditionsForTheBatch": [
                {
                  "testCycleIdentifier": "Batch Runs-Aug 2019",
                  "batchLabel": "Chrome Browser Testing",
                  "runDefinitionTestingContextKeys": "1ITA2HC",
                  "testExecutionStatusList": ["01", "02"]
                }
              ]
            }
          ]
        }
      ]
    }

    The runs tagged with the batch label 'Chrome Browser Testing' are triggered first. Only when their test execution status moves to either '01' or '02', which refer to 'Completed w/o Failures' and 'Completed w/ Failures' respectively, are the test runs tagged with 'Cross Browser Testing' picked up. Until then, they remain in the 'Preprocessing Completed' state.
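
    If you build these trigger conditions programmatically, note that the status codes are individual strings ("01", "02"), not a single comma-separated value. The helper below is a hedged sketch of constructing the cross-browser batch configuration; the function and constant names are illustrative, not part of the QaCONNECT API.

      # Sketch of building the cross-browser batch configuration in code.
      # Status codes as described in the text; names below are illustrative only.
      COMPLETED_WO_FAILURES = "01"  # Completed w/o Failures
      COMPLETED_W_FAILURES = "02"   # Completed w/ Failures

      def trigger_after(test_cycle, prerequisite_batch, prerequisite_keys, statuses):
          """Build a 'triggerConditionsForTheBatch' entry for one prerequisite batch."""
          return [{
              "testCycleIdentifier": test_cycle,
              "batchLabel": prerequisite_batch,
              "runDefinitionTestingContextKeys": prerequisite_keys,
              "testExecutionStatusList": list(statuses),
          }]

      cross_browser_config = {
          "batchLabel": "Cross Browser Testing",
          "triggerConditionsForTheBatch": trigger_after(
              "Batch Runs-Aug 2019",
              "Chrome Browser Testing",
              "1ITA2HC",
              [COMPLETED_WO_FAILURES, COMPLETED_W_FAILURES],
          ),
      }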

  • Use Case 4: Triggering of the CTC Runs and generating the Test Cycle Level Test Outcomes Report
  • The CTC (Continuous Test Cycle) test runs are scheduled first. Once they move to a logically completed state, a Run Definition with the Batch Results Service is triggered, which produces an output data file listing the testing contexts of the failed test runs. Those test runs are then triggered again with the required changes, and finally the Test Cycle Level Test Outcomes Report is generated.

    {
      "testCycleIdentifier": "Batch Runs-Aug 2019",
      "runDefinitionTestingContextKeys": [
        {
          "id": "1ITA2HC",
          "status": "WIP",
          "batchLabel": "CTC Test Runs"
        },
        {
          "id": "3YTA4HC",
          "status": "WIP",
          "batchLabel": "Batch Results Service for Failed Runs Info"
        },
        {
          "batchConfig": [
            {
              "batchLabel": "Batch Results Service for Failed Runs Info",
              "triggerConditionsForTheBatch": [
                {
                  "testCycleIdentifier": "Batch Runs-Aug 2019",
                  "batchLabel": "CTC Test Runs",
                  "runDefinitionTestingContextKeys": "1ITA2HC",
                  "testExecutionStatusList": ["02"]
                }
              ]
            }
          ]
        }
      ]
    }

    In this use case, you check for the failures that occurred among the test runs tagged to a test cycle. Once you get the list of failed testing contexts, you trigger the same service again, and finally you schedule the test run that generates the test cycle level test outcomes report. Here, by using the 'delayPreprocessingIndicator' parameter, the scheduling of the second batch waits until the first batch has completed; the output data file with the failed test run ids is then taken as input and the second batch is triggered.
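
    As a rough sketch of the re-run step, the function below builds a payload of this shape from a list of failed testing context keys. It assumes the keys have already been extracted from the Batch Results Service output data file (the file's format is not shown here), and the batch labels, the report Run Definition key, and the way multiple keys are listed inside the trigger condition are all illustrative assumptions to verify against the QaCONNECT documentation.

      # Hedged sketch: re-scheduling failed runs and gating the report on them.
      # ASSUMPTIONS: 'failed_context_keys' was read from the Batch Results Service
      # output data file beforehand; how multiple keys are expressed inside the
      # trigger condition (comma-separated below) should be confirmed in the docs.
      def build_rerun_payload(test_cycle, failed_context_keys, report_rd_key):
          entries = [
              {"id": key, "status": "WIP", "batchLabel": "Failed CTC Re-Runs"}
              for key in failed_context_keys
          ]
          entries.append(
              {"id": report_rd_key, "status": "WIP", "batchLabel": "Test Cycle Level Report"}
          )
          entries.append({"batchConfig": [{
              "batchLabel": "Test Cycle Level Report",
              "triggerConditionsForTheBatch": [{
                  "testCycleIdentifier": test_cycle,
                  "batchLabel": "Failed CTC Re-Runs",
                  "runDefinitionTestingContextKeys": ",".join(failed_context_keys),
                  "testExecutionStatusList": ["01"],
              }],
          }]})
          return {
              "testCycleIdentifier": test_cycle,
              "runDefinitionTestingContextKeys": entries,
          }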

You can refer to the docs page for more details at https://www.web.worksoft.cloud/docs/#/100

