Test Runs Home page

The Worksoft SaaS Test Runs Home screen provides information about executed Test Runs. It helps you understand the execution status of each Test Run, along with its Testing Context, Key Events, and Outcomes, and it offers actions for executing Test Runs using other Testing Contexts and for assigning Labels, Annotations, and Issues.


The purpose of each column, and of the icons within it, is described below.

Actions

In this column, you can perform the following actions:

  • Run Again: Clicking this icon re-executes the same Test Run, generating a new Run Id.

  • Abort/Cancel: Clicking this icon stops the execution of a Test Run that is currently in the Queued, Scheduled, Preprocessing In-Progress, Preprocessing Completed, or In-Progress state.

  • More Actions: Clicking the More Actions menu opens a dropdown with options that allow you to execute the Test Run using other Testing Contexts and to view/assign Labels, Annotations, and Issues for the Test Run.


    1. View & Run Other Testing Contexts: When you select this option, all the Testing Contexts available for the Test Run are shown in a new tab.

      By default, the local Testing Context is available for all Test Runs. This allows you to execute the Test Run locally from the Test Runs Home rather than navigating to the Test Scripts and selecting the specific Run Definition/Scenario. Also note that local executions performed from the Test Runs Home run against the data bindings/file bindings/UDVs available in the previous Run's data.

    2. View & Assign Root Cause(s) of Failure: On selecting this option, you can view the labels already assigned to the Test Run, or assign labels based on the failures that occurred within it.


    3. View & Assign Annotations: On selecting this option, you can view the existing annotations created for that specific Test Run, or create a new annotation.
      To know more about Annotations, click here.


    4. View & Manage Issues: Under this option, multiple sub-links allow you to Create/Link/Delink/View Issues for the Test Run.
      To know more about this feature, click here.

If the Test Run is in error mode, you cannot perform the "Run Again/Abort" actions, and the icon appears in a disabled state as shown below. When the Test Run is in Error Mode or in the "Inactive" state, or when QaSCRIBE is not installed on your machine, local executions cannot be performed, and the corresponding 'Play' button and radio button are disabled in the Testing Context grid.
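To make these enablement rules concrete, here is a minimal TypeScript sketch. The status names come from this article; the `TestRunStatus` type and the `availableActions` helper are illustrative assumptions, not part of the Worksoft SaaS product or its API.

```typescript
// Statuses named in this article; an illustrative union, not a product API.
type TestRunStatus =
  | "Queued" | "Scheduled" | "Preprocessing In-Progress"
  | "Preprocessing Completed" | "In-Progress"
  | "Completed" | "Error" | "Inactive";

// Abort/Cancel applies only while a run is queued or in flight.
const ABORTABLE: ReadonlySet<TestRunStatus> = new Set<TestRunStatus>([
  "Queued", "Scheduled", "Preprocessing In-Progress",
  "Preprocessing Completed", "In-Progress",
]);

// Hypothetical helper mirroring the enablement rules described above.
function availableActions(status: TestRunStatus, qaScribeInstalled: boolean) {
  const inErrorMode = status === "Error";
  return {
    runAgain: !inErrorMode,                        // disabled in error mode
    abort: !inErrorMode && ABORTABLE.has(status),  // only queued/in-flight runs
    // Local execution needs QaSCRIBE and a run that is neither errored nor inactive.
    runLocally: qaScribeInstalled && !inErrorMode && status !== "Inactive",
  };
}
```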

Test Run

The "Test Run" column has been enhanced to provide richer detail more clearly.

A visual cue is now provided when a test run has 'user notes' or when a 'description' exists for the executed run definition or test scenario. Clicking on the icon shows the user notes, the entity description, or both, depending on which data is available.



If a test run is triggered for re-execution, whether by Worksoft SaaS ML or by a person user, or if a test run itself triggers the re-execution of other test runs, a visual cue is provided in the bottom right corner of each cell in the 'Test Run' column. Clicking on this visual cue opens the detail in an overlay. The overlay clearly shows the original test run and all re-executions, along with additional information about who triggered each re-execution, when, and the root cause(s) of failure, if any, that were assigned to each test run in the re-execution path.









If a test run is blocked from execution because the batch dependencies it depends on have not been met, or if such dependencies existed but have since been cleared, a visual cue is now available right next to the test run id printed in the bottom left corner of each cell under the 'Test Run' column. The cue indicates either that dependencies exist and the test is blocked from execution, or that the dependencies have been cleared. Clicking on that visual cue shows additional detail about the batch dependencies in an overlay.





If you want to open the executed run definition or test scenario in view mode in a new browser tab, you can now click on the 'open in a new tab' icon in the top right corner of each cell in the 'Test Run' column.

Testing Context

This column provides the following information:

  • Test Cycle assigned to the particular Test Run
  • Environment
  • Device Type
  • OS
  • Browser
  • Screenshot Capture
  • Video Capture

For more information, click on the Info Icon as shown below.

This shows who scheduled the run, as well as the cloud, device settings, and advanced settings for that specific Test Run.

Status

The 'Status' column lists the current or final state of a test run's execution. No new statuses have been added in this release. You can click on the 'View Progress' or 'View Results' hyperlinked text to see the test run's execution progress or its final result in a new browser tab.



In addition, this release allows you to directly open the asset viewer for each of the enabled and available asset types like screenshots, video, web driver log, console log and network log.
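As an illustration of the rule that governs these viewer icons, here is a small TypeScript sketch. The `AssetType` union and the `canOpenAssetViewer` helper are assumptions made for clarity, not Worksoft SaaS APIs.

```typescript
// Asset types named in this article; an illustrative union, not a product API.
type AssetType =
  | "screenshots" | "video" | "web driver log"
  | "console log" | "network log";

// A viewer icon is actionable only when the asset type was enabled for the
// run and the corresponding asset is actually available.
function canOpenAssetViewer(
  enabled: ReadonlySet<AssetType>,
  available: ReadonlySet<AssetType>,
  asset: AssetType,
): boolean {
  return enabled.has(asset) && available.has(asset);
}
```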



For screenshot capture, you will see a blue camera icon with either a green dot or a red dot inside. A green dot indicates that screenshot capture was enabled on the test run for "all test instructions". A red dot indicates that screenshot capture was enabled for "only the failed test instructions".
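The mapping from the capture setting to the dot color is simple; the sketch below states it explicitly (the `CaptureMode` type is an illustrative assumption, not a product type).

```typescript
// The two capture settings described above; illustrative names.
type CaptureMode = "all test instructions" | "only the failed test instructions";

// Dot color inside the blue camera icon for a given capture setting.
function cameraDotColor(mode: CaptureMode): "green" | "red" {
  return mode === "all test instructions" ? "green" : "red";
}
```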




Key Events

This column provides information about:

  • Test Run Scheduled Time
  • Execution Start Time
  • Execution End Time
  • Duration

Outcomes

The 'Outcomes' column provides visibility, in a bar graph format, into the number of scenarios, test scripts, and test instructions that passed, failed, were skipped, or are pending execution.



Clicking the blue caret in the top right corner shows richer detail.
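To picture the data behind the bar graph, here is a minimal TypeScript model. The interface and field names are illustrative assumptions, not Worksoft SaaS types.

```typescript
// Per-category tallies shown in the Outcomes bar graph.
interface OutcomeCounts {
  passed: number;
  failed: number;
  skipped: number;
  pending: number; // still awaiting execution
}

// One set of tallies for each level the column reports on.
interface TestRunOutcomes {
  scenarios: OutcomeCounts;
  testScripts: OutcomeCounts;
  testInstructions: OutcomeCounts;
}
```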


Failure Triage

Visibility is provided into the progress of the troubleshooting/triage of failed tests. The all-new 'Failure Triage' column is full of rich detail that was not available before. You will be able to see details about the first point of failure, the assignment (or lack thereof) of the root cause(s) of failure, and the linkage to issues in the issue store.

  • First Point of Failure detail:

    In this section, you will see up to three icons: a 'camera' icon, an 'error description' icon that looks like the letter 'i', and a 'test instructions' icon.

    

    If screenshot capture was enabled for the test and a failure occurred for which a screenshot was captured, you will see a 'camera' icon. When you click on this icon, the screenshot viewer opens in an overlay and shows the screenshot that corresponds to the first point of failure.
    

    If the first failed test instruction has an error description captured (which, incidentally, shows up as mouse-over text on the red 'x in a circle' icon on the test execution detail screen), a grey 'i' icon appears; clicking it shows the 'error description' that corresponds to the failure point.
    
    

    When at least one test instruction has failed in the test run, you will see a grey 'test instructions' icon. When you click on this icon, the test run execution results detail opens in a new browser tab and automatically expands the test instruction, within the appropriate test script and scenario, that corresponds to the first point of failure.

    


  • Test Run level Reason for System Abort, Abort, or Cancellation:

    This section is printed only when additional detail is available at the test run level explaining why a test was system aborted, aborted, or cancelled. When this section appears, you can click on the 'view' hyperlinked text to see the detail.

    

  • Assignment of Root Cause of Failure:

    In this section, if no root causes of failure have been assigned yet, you will see an 'assign' link, which when clicked opens an overlay screen that allows you to assign root cause(s) of failure. If root cause(s) of failure are already assigned, you will see the total count of root causes assigned, the subset of the total assigned by Worksoft SaaS ML, and the subset assigned by person users. If the 'Do not show on report(s)' label is tagged to a test run, either by Worksoft SaaS ML or by a person user, you will also see a visual cue (a red file icon with an 'x') indicating whether it was assigned by Worksoft SaaS ML or by a person user.

    Please note that the total count and the subtotal counts ignore the 'Do not show on report(s)' label assignment. In other words, even if the 'Do not show on report(s)' label is assigned, the total count and the subtotals exclude it. This is done to give end users clear visibility into whether or not a real root cause of failure is assigned to the test run. The appearance of the red file icon additionally gives the end user visibility into the tagging of the test run with the 'Do not show on report(s)' label.

    You can click on the total count or the individual sub-counts to see the list of root causes assigned to the test run. If a new root cause of failure is assigned, or changes are made, on the overlay, the changes are reflected in the main test runs listing when you close the overlay.

    For those root cause(s) of failure assigned by Worksoft SaaS ML, you will also see a visual cue indicating whether feedback has been provided on the assignment and, if provided, what the feedback was. If no feedback has been provided for some or all of the ML-assigned root causes, you will see a '?' icon right next to the count. If the user liked some or all of the ML-assigned root cause(s) of failure, you will see a green thumbs-up icon next to the count. If the user disliked some or all of them, you will see a red thumbs-down icon next to the count. Depending on the feedback state of the ML-assigned root causes of failure, you may see just the '?' icon, the green thumbs-up icon, or the red thumbs-down icon, or a combination of them. (The counting and feedback-cue rules are sketched after the examples below.)

    Please find below screenshots for a few example situations.

    Example 1: No root causes of failure have been assigned

    


    Example 2: A total of two root causes of failure assigned - one by Worksoft SaaS ML - the other by a person user



    Example 3: A total of two root causes of failure assigned - one by Worksoft SaaS ML - the other by a person user - 'Do not show on report(s)' label is also tagged by ML - no feedback was provided on the ML assigned root cause of failure

    


    Example 4: One root cause of failure assigned by Worksoft SaaS ML - A person user "liked" the assignment done by ML

    


    Example 5: One root cause of failure assigned by Worksoft SaaS ML - A person user "disliked" the assignment done by ML and manually assigned another root cause of failure



    Example 6: Two root causes of failure assigned - One ML assigned on which no feedback was provided yet - A person user assigned an additional root cause of failure - ML also assigned the 'Do not show on report(s)' label
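As promised above, here is a TypeScript sketch of the counting and feedback-cue rules. The types and helpers are illustrative assumptions, not Worksoft SaaS code; the same feedback-icon logic also applies to ML-linked issues in the next section.

```typescript
// Illustrative types; not part of the Worksoft SaaS product.
type Assigner = "ML" | "person";
type Feedback = "none" | "liked" | "disliked";

interface RootCauseAssignment {
  label: string;
  assignedBy: Assigner;
  feedback: Feedback; // meaningful only for ML assignments
}

const HIDDEN_LABEL = "Do not show on report(s)";

// Totals and subtotals exclude the 'Do not show on report(s)' label, so the
// counts reflect only real root causes of failure.
function rootCauseCounts(assignments: RootCauseAssignment[]) {
  const real = assignments.filter(a => a.label !== HIDDEN_LABEL);
  return {
    total: real.length,
    byML: real.filter(a => a.assignedBy === "ML").length,
    byPerson: real.filter(a => a.assignedBy === "person").length,
    // The red file icon appears whenever the hidden label is tagged at all.
    showRedFileIcon: assignments.some(a => a.label === HIDDEN_LABEL),
  };
}

// Feedback cues shown next to the ML sub-count.
function feedbackIcons(assignments: RootCauseAssignment[]) {
  const ml = assignments.filter(
    a => a.assignedBy === "ML" && a.label !== HIDDEN_LABEL);
  return {
    question: ml.some(a => a.feedback === "none"),       // '?' icon
    thumbsUp: ml.some(a => a.feedback === "liked"),      // green thumbs-up
    thumbsDown: ml.some(a => a.feedback === "disliked"), // red thumbs-down
  };
}
```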




  • Linkage to Issues in the Issue Store:

    In this section, if no issues from the issue store are linked yet, you will see 'assign' and 'create' hyperlinks, which when clicked open an overlay screen that allows you to either create a new issue in the local issue store or link an already existing issue in the issue store to the specific test run in context. If issues from the issue store are already linked, you will see the total count of issues linked, the subset of the total linked by Worksoft SaaS ML, and the subset linked by person users.

    You can click on the total count or the individual sub-counts to see the list of linked issues in an overlay that opens on top of the main test runs listing.

    For those issues linked by Worksoft SaaS ML, you will also see a visual cue indicating whether feedback has been provided on the linkage and, if provided, what the feedback was. If no feedback has been provided for some or all of the ML-linked issues, you will see a '?' icon right next to the count. If the user liked some or all of the ML-linked issues, you will see a green thumbs-up icon next to the count. If the user disliked some or all of them, you will see a red thumbs-down icon next to the count. Depending on the feedback state of the ML-linked issues, you may see just the '?' icon, the green thumbs-up icon, or the red thumbs-down icon, or a combination of them.

    Please find below screenshots for a few example situations.

    Example 1: No issues have been linked



    Example 2: A total of two issues have been linked - one by Worksoft SaaS ML - the other by a person user



    Example 3: One issue was linked by Worksoft SaaS ML - A person user "liked" the assignment done by ML

    


    Example 4: One issue linked by Worksoft SaaS ML - A person user "disliked" the linkage done by ML



Clicking on the magnifying glass icon gives you access to a rich list of search criteria, organized into sections (accordions), that you can use to find the test runs you are looking for.

Several of these accordions offer powerful search criteria that help you find test runs based on failure triage activities, such as assigning or correcting root cause(s) of failure, linking issues or correcting issue linkages, training the ML engine by liking or disliking the autonomous troubleshooting done by ML, and annotation activity.
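As a rough illustration of how such failure-triage criteria might combine in a single search, here is a TypeScript sketch; the field names are assumptions made for this example, not the product's actual filter names.

```typescript
// Hypothetical shape of a failure-triage search; all field names illustrative.
interface FailureTriageSearch {
  rootCauseAssignedBy?: "ML" | "person";      // who assigned root cause(s)
  mlFeedback?: "none" | "liked" | "disliked"; // feedback on ML's work
  hasLinkedIssues?: boolean;
  hasAnnotations?: boolean;
}

// Example: runs where ML assigned a root cause that nobody has reviewed yet.
const unreviewedMlTriage: FailureTriageSearch = {
  rootCauseAssignedBy: "ML",
  mlFeedback: "none",
};
```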






