
Release Notes - 23 Aug 2021

In this release, the search and listing pages for the "test runs" and the "issue store" modules have been significantly enhanced. We are confident these enhancements will greatly improve your productivity.

Here are the new features available to you in this release:
  • Test Runs module home (search and listing page) completely overhauled for efficiency

    Here are some of the key improvements:

    • Visibility is provided into the progress of the troubleshooting/triage of failed tests. The all-new 'failure triage' column is full of rich detail that was not available until now. You will be able to see details about the first point of failure, the assignment (or lack thereof) of the root cause(s) of failure, and the linkage to issues in the store.

      • First Point of Failure detail:

        In this section, you will see up to three icons: a 'camera' icon, an 'error description' icon that looks like the letter 'i', and a 'test instructions' icon.

        

        If the screenshot capture was enabled for the test and a failure occurs for which a screenshot was captured, you will see a 'camera' icon. When you click on this icon, the screenshot viewer will open in an overlay and show you the screenshot that corresponds to the first point of failure.
        

        If the first failed test instruction has an error description captured (which also appears as mouse-over text on the red 'x' in a circle icon on the test execution detail screen), the grey 'i' icon will appear, and clicking it will show you the 'error description' that corresponds to the failure point.
        
        

        When at least one test instruction has failed in the test run, you will see a grey 'test instructions' icon. When you click on this icon, the test run execution results detail will open in a new browser tab and automatically expand the test instruction, within the appropriate test script and scenario, that corresponds to the first point of failure.

        


      • Test Run level Reason for System Abort, Abort or Cancellation

        This section is printed only when additional detail is available at the test run level explaining why a test run was system aborted, aborted, or cancelled. When this section appears, you can click on the 'view' hyperlinked text to see the detail.

        

      • Assignment of Root Cause of Failure:

        In this section, if no root causes of failure have been assigned yet, you will see an 'assign' link which, when clicked, opens an overlay screen that allows you to assign root cause(s) of failure. If root cause(s) of failure are already assigned, you will see the total count of root causes assigned, the subset of the total assigned by Worksoft SaaS ML, and the subset assigned by person users. If the 'Do not show on report(s)' label is tagged to a test run, either by Worksoft SaaS ML or by a person user, you will also see a visual cue (a red file icon with an 'x') indicating whether it was assigned by Worksoft SaaS ML or by a person user.

        Please note that the total count and the subtotal counts ignore the 'Do not show on report(s)' label assignment. In other words, even if the 'Do not show on report(s)' label is assigned, the total count and the subtotals will not include it. This is done to give end users clear visibility into whether or not a real root cause of failure is assigned to the test run. The appearance of the red file icon additionally gives the end user visibility into the tagging of the test run with the 'Do not show on report(s)' label.

        You can click on the total count or the individual sub-counts to see the list of root causes assigned to the test run. If, on the overlay, a new root cause of failure is assigned or changes are made, the changes will be reflected in the main test runs listing when you close the overlay.

        For those root cause(s) of failure assigned by Worksoft SaaS ML, you will also see a visual cue indicating whether feedback has been provided on the assignment and, if provided, what the feedback was. If no feedback has been provided for some or all of the ML assigned root causes, you will see a '?' icon right next to the count. If the user liked some or all of the ML assigned root cause(s) of failure, you will see a green thumbs up icon next to the count. If the user disliked some or all of the ML assigned root cause(s) of failure, you will see a red thumbs down icon next to the count. Depending on the feedback given on the ML assigned root causes of failure, you may see just the '?' icon, the green thumbs up icon, the red thumbs down icon, or a combination of these.
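The counting and feedback-icon rules described above can be illustrated with a small sketch. The data model below (the `RootCause` structure and its field names) is a simplification made up for this example, not the actual Worksoft SaaS schema:

```python
# Illustrative sketch of the root-cause counting and feedback-icon rules.
# The RootCause structure and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

DO_NOT_SHOW_LABEL = "Do not show on report(s)"

@dataclass
class RootCause:
    name: str
    assigned_by_ml: bool            # True if assigned by Worksoft SaaS ML
    feedback: Optional[str] = None  # None, "liked", or "disliked" (ML only)

def triage_summary(root_causes):
    # The 'Do not show on report(s)' label is excluded from every count,
    # so the totals reflect only real root causes of failure.
    real = [rc for rc in root_causes if rc.name != DO_NOT_SHOW_LABEL]
    ml = [rc for rc in real if rc.assigned_by_ml]
    # Feedback icons: '?' when an ML assignment lacks feedback,
    # thumbs up / thumbs down when one was liked / disliked.
    icons = set()
    for rc in ml:
        if rc.feedback is None:
            icons.add("?")
        elif rc.feedback == "liked":
            icons.add("thumbs-up")
        elif rc.feedback == "disliked":
            icons.add("thumbs-down")
    return {
        "total": len(real),
        "ml": len(ml),
        "user": len(real) - len(ml),
        "icons": sorted(icons),
    }

causes = [
    RootCause("Environment issue", assigned_by_ml=True, feedback="liked"),
    RootCause("Script defect", assigned_by_ml=False),
    RootCause(DO_NOT_SHOW_LABEL, assigned_by_ml=True),
]
summary = triage_summary(causes)
# The label is ignored: total is 2 (1 ML + 1 user), with a thumbs-up icon.
```

This mirrors Example 2 below: two real root causes (one ML, one user) counted, while the 'Do not show on report(s)' label only drives the red file icon, never the counts.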

        Please find below screenshots for a few example situations.

        Example 1: No root causes of failure have been assigned

        


        Example 2: A total of two root causes of failure assigned - one by Worksoft SaaS ML - the other by a person user



        Example 3: A total of two root causes of failure assigned - one by Worksoft SaaS ML - the other by a person user - 'Do not show on report(s)' label is also tagged by ML - no feedback was provided on the ML assigned root cause of failure

        


        Example 4: One root cause of failure assigned by Worksoft SaaS ML - A person user "liked" the assignment done by ML

        


        Example 5: One root cause of failure assigned by Worksoft SaaS ML - A person user "disliked" the assignment done by ML and manually assigned the correct root cause of failure



        Example 6: Two root causes of failure assigned - One ML assigned on which no feedback was provided yet - A person user assigned an additional root cause of failure - ML also assigned the 'Do not show on report(s)' label




      • Linkage to Issues in the Issue Store:

        In this section, if no issues from the issue store have been linked yet, you will see 'assign' and 'create' hyperlinks which, when clicked, open an overlay screen that allows you to either create a new issue in the local issue store or link an already existing issue in the issue store to the specific test run in context. If issues in the issue store are already linked, you will see the total count of issues linked, the subset of the total linked by Worksoft SaaS ML, and the subset linked by person users.

        You can click on the total count or the individual sub-counts to see the list of issues linked in an overlay that opens on top of the main test runs listing.

        For those issues linked by Worksoft SaaS ML, you will also see a visual cue indicating whether feedback has been provided on the linkage and, if provided, what the feedback was. If no feedback has been provided for some or all of the ML linked issues, you will see a '?' icon right next to the count. If the user liked some or all of the ML linked issues, you will see a green thumbs up icon next to the count. If the user disliked some or all of the ML linked issues, you will see a red thumbs down icon next to the count. Depending on the feedback given on the ML linked issues, you may see just the '?' icon, the green thumbs up icon, the red thumbs down icon, or a combination of these.

        Please find below screenshots for a few example situations.

        Example 1: No issues have been linked



        Example 2: A total of two issues have been linked - one by Worksoft SaaS ML - the other by a person user



        Example 3: One issue was linked by Worksoft SaaS ML - A person user "liked" the assignment done by ML

        


        Example 4: One issue linked by Worksoft SaaS ML - A person user "disliked" the linkage done by ML



      • The new 'Outcome' column provides visibility into the number of scenarios, test scripts, and test instructions that passed, failed, got skipped, or are pending execution, in a bar graph format.



        Clicking the blue caret on the top right hand side will show you richer detail.






      • The new 'Status' column lists the current or the final state of a test run's execution. No new statuses have been added in this release. You can click on the 'View Progress' or 'View Results' hyperlinked text to see the test run execution progress or the final result in a new browser tab.



        In addition, this release allows you to directly open the asset viewer for each of the enabled and available asset types like screenshots, video, web driver log, console log and network log.



        For screenshot capture, you will see a blue camera icon with either a green dot or a red dot inside. The green dot indicates that screenshot capture was enabled on the test run for "all test instructions". The red dot indicates that screenshot capture was enabled on the test run for "only the failed test instructions".




      • The "Test Run" column has been enhanced to provide richer detail more clearly.

        A visual cue is now provided when a test run has 'user notes' or when a 'description' exists for the executed run definition or the test scenario. Clicking on the icon will show the user notes, the entity description, or both, depending on the data available.



        If a test run is triggered for re-execution by Worksoft SaaS ML or by a person user, or if a test run triggers the re-execution of other test runs, a visual cue is provided in the bottom right hand corner of each cell in the 'Test Run' column. Clicking on this visual cue opens the detail in an overlay. The overlay clearly shows the original test run and all re-executions along with additional information about who triggered the re-execution, when, and the root cause(s) of failure, if any, that were assigned to each test run in the re-execution path.









        If a test run is blocked from execution because the batch dependencies on which its execution depends have not been met, OR if the dependencies existed but have since been cleared, a visual cue is now available right next to the test run id printed in the bottom left corner of each cell under the 'Test Run' column. It indicates either that the dependencies exist and the test is blocked from execution, or that the dependencies existed but are now cleared. Clicking on that visual cue shows additional detail about the batch dependencies in an overlay.





        If you want to open the executed run definition or the test scenario in view mode in a new browser tab, you can now click on the 'open in a new tab' icon in the top right hand corner of each cell in the 'Test Runs' column.


      • Clicking on the magnifying glass icon gives you access to a rich list of search criteria organized into sections (accordions) that you can use to search for and find the test runs you are looking for.

        Several of these accordions offer powerful search criteria that will help you find test runs based on failure triage activities such as assigning or correcting root cause(s) of failure, linking issues or correcting issue linkages, training the ML engine by liking or disliking the autonomous troubleshooting done by ML, and annotations activity.


























    • Failure Triage in Bulk of Other Test Runs based on Triage completed for a specific Test Run

      Until now, you could assign root cause(s) of failure or issue linkages in bulk to multiple test runs in a test cycle using the Worksoft SaaS QaCONNECT REST API, based on the root cause(s) of failure you assigned and the issue linkages you made to a specific test run as part of triaging its failures. With this release, you will be able to accomplish the same using the Worksoft SaaS application.
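For teams still using the API route, the call shape might look roughly like the sketch below. The endpoint path, payload field names, and auth scheme here are illustrative assumptions only; consult the QaCONNECT REST API documentation for the actual contract.

```python
# Sketch of bulk failure triage via a REST API. The endpoint, payload
# fields, and auth header below are hypothetical placeholders.
import json
import urllib.request

BASE_URL = "https://<your-worksoft-saas-host>/qaconnect/v1"  # placeholder

def build_bulk_triage_payload(source_run_id, target_run_ids,
                              copy_root_causes=True, copy_issue_links=True):
    """Assemble a request body that copies the triage done on one
    'source' test run to a list of 'target' test runs."""
    return {
        "sourceTestRunId": source_run_id,
        "targetTestRunIds": list(target_run_ids),
        "copyRootCausesOfFailure": copy_root_causes,
        "copyIssueLinkages": copy_issue_links,
    }

def apply_bulk_triage(api_key, payload):
    # Hypothetical endpoint; shown only to illustrate the call shape.
    req = urllib.request.Request(
        f"{BASE_URL}/test-runs/bulk-triage",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Copy only the root cause(s) of failure, not the issue linkages:
payload = build_bulk_triage_payload("RUN-1001", ["RUN-1002", "RUN-1003"],
                                    copy_issue_links=False)
```

The in-app workflow described next achieves the same result without writing any code.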

      To initiate this powerful but easy to use workflow, you have two options:

      • Open the 'Actions' menu for the test run for which you completed the failure triage (or ML performed autonomous troubleshooting/triage for you). You will see one or three options at the bottom of the actions menu under the section "Failure Triage in Bulk of Other Test Runs". If you finished the issue linkage OR the root cause of failure assignment but not both, you will only see one amongst the first two options. If you finished both issue linkage and root cause of failure assignment, you will see all three options.


      • Click on the 3 horizontal dots within the 'Failure Triage' section in the header of the test run execution detail screen for the test run for which you completed the failure triage (or ML performed autonomous troubleshooting/triage for you). As with the first option, you will see one or three options at the bottom of the menu under the section "Failure Triage in Bulk of Other Test Runs", depending on whether you finished only the issue linkage, only the root cause of failure assignment, or both.


      Once you initiate the bulk assignment workflow, you will see a couple of changes on the test runs home screen.
      • To the right of the 'My Selections' tab, you will see the context printed for your chosen bulk action. To the right of this context, you will see a white right arrow on a blue circle.


      • If you click on the white arrow on the blue circle icon, an overlay opens that shows on the left hand side the test run that you selected as the "source" for your bulk updates/corrections of root causes of failure and/or issue linkages for other test runs in the same (or another) test cycle.


      • In addition, you will also notice that the 'Actions' column shows a shopping cart style icon that allows you to hand-pick the list of test runs, in the same or a different test cycle as the source test run, that you want to define as 'targets' for your bulk action. This can be any number of test runs that you think would benefit from the bulk action. Clicking on the green shopping cart icon will add the test run to your 'My Selections' list (tab). As you add more test runs to your 'My Selections' list, the numerical counter will increment and show the total number of test runs chosen as the target list for your bulk updates from the source test run.

      • You can switch back and forth between the 'Runs List' tab and the 'My Selections' tab of your Test Runs. When you are on the 'My Selections' tab, you can remove any test run individually from your 'My Selections' list by clicking on the red shopping cart icon in its row, OR, if you prefer, click on the red shopping cart icon in the column header to remove "all" test runs at once from your 'My Selections' list.


      • When you are on the 'Runs List' tab you can also click on the icon in the column header of the actions column to switch between the 'Selection' mode and the 'Actions menu' mode for the test runs. These icons act like a toggle switch. Each subsequent click toggles the switch to the other mode.

      • To gather all the target test runs on which you want to perform the bulk action, you can perform a series of searches, refining the search criteria each time. You will not lose the test runs you already added to your 'My Selections' tab unless you cause the page to reload; as long as you stay on the test runs page and perform searches, you will NOT lose your 'My Selections' list. Once you have all the target test runs in your 'My Selections' list, click on the white right arrow on the blue circle icon (described previously in this section) and, in the overlay, choose the appropriate options for the bulk operation for both the root cause(s) of failure and the issue linkage. You can choose one option for the root cause(s) of failure and a different option for the issue linkages, if you prefer. Then click 'Apply'.

      • Once you click 'Apply', Worksoft SaaS will bulk apply the root cause(s) and/or issue linkages from your "source" test run to each of the "target" test runs you selected in your 'My Selections' tab. Once the bulk operation completes successfully, your 'My Selections' list will be automatically emptied and the 'Runs List' tab will automatically show only your target test runs. This gives you an opportunity to verify (if you choose to do so) that the bulk update did what you expected.
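The bulk-apply semantics above can be summarized in a small sketch: triage data from the "source" run is copied to every run in 'My Selections', after which the selection list is emptied. The data model here is a deliberate simplification, not the actual Worksoft SaaS implementation:

```python
# Simplified model of the 'Apply' step: copy the source run's triage
# to each target run, then empty the 'My Selections' list.

def bulk_apply(source, targets, copy_root_causes=True, copy_issue_links=True):
    """source and each target are dicts with 'root_causes' and
    'issue_links' lists; targets plays the role of 'My Selections'."""
    for run in targets:
        if copy_root_causes:
            run["root_causes"] = list(source["root_causes"])
        if copy_issue_links:
            run["issue_links"] = list(source["issue_links"])
    applied = list(targets)
    targets.clear()   # 'My Selections' is emptied automatically
    return applied    # shown on the 'Runs List' tab for verification

source = {"root_causes": ["Environment issue"], "issue_links": ["ISS-7"]}
selections = [{"root_causes": [], "issue_links": []},
              {"root_causes": [], "issue_links": []}]
applied = bulk_apply(source, selections, copy_issue_links=False)
# selections is now empty; each applied run carries the source root cause.
```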


    • The Issue Store module home (search and listing page) and associated functionality have been significantly enhanced. Many productivity-enhancing features are included in this release.

      To access the 'Issue Store', click on the 'Issue Store' menu item in the hamburger menu.



      The Issue Store module home lists the issues in your project using a 'lazy loading' (also called incremental loading) approach, wherein more records get loaded as you scroll down the listing.
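The lazy loading approach can be sketched as follows: instead of loading the whole list up front, each scroll-to-bottom event fetches the next page of issues. The page size and fetch function below are illustrative assumptions, not the actual implementation:

```python
# Sketch of incremental (lazy) loading: one page of issues is fetched
# per request, driven by the user scrolling to the bottom of the list.

PAGE_SIZE = 25  # illustrative page size

def fetch_issues_page(all_issues, offset, limit=PAGE_SIZE):
    """Stand-in for a server call returning one page of issues."""
    return all_issues[offset:offset + limit]

def lazy_issue_loader(all_issues):
    """Yield one page per request; the UI asks for the next page only
    when the user scrolls to the bottom of the listing."""
    offset = 0
    while offset < len(all_issues):
        page = fetch_issues_page(all_issues, offset)
        offset += len(page)
        yield page

issues = [f"ISSUE-{n}" for n in range(1, 61)]   # 60 issues in the store
loader = lazy_issue_loader(issues)
first_page = next(loader)    # loaded on initial render
second_page = next(loader)   # loaded after the first scroll-to-bottom
```

Only the pages the user actually scrolls to are ever requested, which keeps the initial render fast even for large issue stores.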



      Clicking on the magnifying glass icon gives you access to a rich list of search criteria organized into sections (accordions) that you can use to search for and find the issues you are looking for.



    Share your ideas – Help us improve Worksoft SaaS!

    We could not have gotten Worksoft SaaS to where it is without some brilliant ideas from all of you. Keep the suggestions and ideas for improvement flowing!

    To submit an idea or vote on some of the features requested by other customers, click here


