
Accessing the "Details" for a Test Execution

When a test does not produce the outcome you expect, you typically want to find the root cause of the unexpected outcomes (failures). Worksoft SaaS offers numerous views that let you analyze the execution flow and figure out the root cause(s) of the failures.

By default, the test execution results are presented in a "list view" wherein: 
  • the test instruction level detail is grouped by test script that contains it
  • the test scripts are grouped by the data-driven iterations (if any) of a combination of test scripts
  • the data-driven iterations (or, if they don't exist, the scripts themselves) are grouped by the test scenario of which the test scripts are a part
  • the test scenarios are grouped by the run definition that contains the scenarios. 
You can also view the execution detail by watching the video or the screenshots captured for the test run, if such capture was enabled for the test at the time of its scheduling/triggering. If you're technical, you can even look at the WebDriver (Selenium) log.

The "details" section of the test execution results page has many tools that allow you to rapidly view and analyze the test execution and perform debugging and triage of failures if any.

This article will help you understand the various actions and features available to you within the "details" section of the test execution results page. Please note that some of the actions/features are available when you are viewing the results of a test executed locally on your laptop/desktop, while others are available in the context of cloud-based test execution. Many of the actions/features, however, are available in the context of both local and cloud-based executions.

You can assign Labels to a test run directly from the Test Runs - Home screen. Instead of opening a test run, navigating to its execution screen, and then assigning a label, you can do it from the Home screen itself.

When no labels are tagged to a particular test run, the Label icon on the home screen for that test run appears grayed out. If you want to assign a label, click on the Label icon and select the label to be assigned. Once a label is assigned to a test run, the Labels icon switches to its enabled state.



You can also Create/Manage Labels: after creating a Label from the Test Runs - Home screen, that label can be assigned to the test run as well.
This section has a few buttons that help you perform specific actions. These buttons are placed at the top right portion of the section.


After you have completed a test run, you may want to execute it again, for example to verify that the test is built correctly, or because you have debugged the test and would like to confirm your understanding/assumptions. You can do that by using the "Run Again" button.

                                                                  

When you click on the "Run Again" button, the test run executes in the same testing context in which it was executed earlier.


When troubleshooting failures, you may want to quickly check the test scripts that failed. Rather than having to click through each of the test scripts, you can use the button placed in this section to do it in a single click.

 

Clicking on the button presents two options in a pulldown. If only the Scenario(s) are showing, you can expand all failed test scripts. If the Scenario(s) are already expanded, you can collapse all of them in a single click.

Pause while test execution is in progress (applicable for QaSCRIBE executions)

If you would like to review the results while the execution is in progress, or you want to perform some actions that are not part of the test script before continuing with further execution, you can use this to pause the execution.

The Pause button can be seen in the "Execute & Debug" tab. A paused execution can be continued from where it was paused without having to restart the test.

                                                         

If for some reason the application tab gets closed, either while the test script is executing or while it is paused, the test will be aborted.

Adjusting execution speed (applicable for QaSCRIBE executions)

You can adjust the speed at which the test execution happens. In other words, you can specify the time lag to be added between instructions while the test is executing.

You can do this while the test is executing only if the setting "Allow overrides of play speed from QaSCRIBE context" is checked. At any time during the execution, you can slow down or speed up the execution by dragging the control either way.


                                                         


The test details are presented in an accordion fashion. Information is shown at three levels/entities:
  • Scenario(s)
  • Data file &
  • Test Scripts



If the test run uses a Scenario that has a predecessor, a successor, or both, this is indicated on the Scenario accordion/row. Whenever one of them is used, an icon appears at the far left (before the name of the Scenario), indicating that the Scenario was added to the Test Run because it is a predecessor or successor of the Main Scenario.



Mouse over the icon to see why the Scenario is there: it is mentioned whether it is a predecessor or a successor, and which main Scenario it is related to.

Viewing the Test Scripts in a new tab

You can open a test script in a new tab by simply clicking on the icon available next to the number of the script. You can then work on the test script in that tab and run executions in the other tab simultaneously.


Statistics presented at each entity

You can find the following statistics on the accordions, i.e. the number of
  • Test Scripts with passed vs failed at the Scenario level
  • Instructions with passed vs failed at Test Script level
If any test scripts or instructions failed, the information is presented in terms of failures, i.e. "x of y failed" (failures are shown with a red icon).

If all test scripts and instructions passed, the information is presented in terms of passes, i.e. "x of y passed" (shown with a green icon).

If some instructions passed and others were skipped, "x" will be less than "y", but the icon is still green, indicating that some of the instructions were skipped.
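For example (the numbers here are illustrative only): a Test Script showing "2 of 40 failed" with a red icon has 2 failed instructions out of 40, "40 of 40 passed" with a green icon means every instruction passed, and "35 of 40 passed" with a green icon and no failures indicates that the remaining 5 instructions were skipped.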

                                                                   

The execution of test scripts can also be skipped, depending on the selection in the bind data section (click here for the help article).


You can find how long a Scenario or Test Script has executed by checking the timings provided at each level. While the test execution is in progress, the timings reflect the duration for which each entity has executed, represented in the format mm:ss.mi (minutes:seconds.milliseconds).
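For example, a timing of 02:15.430 shown against a Scenario would indicate 2 minutes, 15 seconds and 430 milliseconds of execution (an illustrative value).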

                                                                   

Viewing Execution log (applicable for QaSCRIBE executions)

While the test run is "In Progress", you can see the progress in the tab in which the application is opened. At the same time, you can also check the log of instructions being executed at each test script level. An icon at the far right on the Test Script accordion provides you access to this information.

Mouse over the icon, and you will see a pull-down with the execution log. The sequence of the command in the test script, the command name, the target and the value, along with the activity performed, can be seen in the log. This information can come in handy while debugging a test.
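For illustration only (the commands, locators and activity text below are hypothetical and simply follow the command/target/value pattern of QaSCRIBE test scripts), a couple of entries in the execution log might look like:

3 | type         | id=username      | jane.doe | Typed the value into the element located by id=username
4 | clickAndWait | css=button.login |          | Clicked the element and waited for the page to load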



This execution log also has a Search bar that lets you search within the log. For example, you may search for any command or value in the search bar, and the matching results are shown in the same execution log. A Copy to clipboard option is also available, which makes it easier to copy the execution log.


You can see how data is used across test scripts that are parameterized and attached to data files by observing how the iterations are presented on the screen. Every iteration starts with the data file name and ends with an accordion marked "End". Test Scripts that are part of the iteration are shown between the two accordions, along with a graphical representation of a loop.
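As a sketch (the data file name, scripts and labels below are hypothetical, and the exact on-screen labels may differ), a data file with two data rows bound to two parameterized Test Scripts would be presented as two iterations, each bracketed by its own pair of accordions:

Customers data file (iteration 1)
    TS-Login
    TS-CreateOrder
End
Customers data file (iteration 2)
    TS-Login
    TS-CreateOrder
End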

                                         


In addition, information about the data used in the iteration is shown.


While the information presented on the page helps you identify iterations and the number of iterations, you may also want to check the actual data row used for the iteration. You can access this information by clicking on the icon placed at the far right on the data file accordion row.

 

When you click on the icon, a popup shows up with the data from the data file. The data row is shown in a table structure, with the first column containing the attribute names and the second column containing the actual values from the data file. For ease of reviewing the data, you can Transpose the data row.
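For illustration (hypothetical attributes and values), the popup might present a data row as:

FirstName | Jane
LastName  | Doe
Country   | Canada

Transposing the row flips this layout, so the attribute names become column headers with the values in a single row beneath them, which is often easier to scan for wide rows.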



If a data filter is used in the test run, you can see the information pertaining to the data filter in the popup. The information is presented in the format "Row# x of y filtered rows from z rows", where
x - stands for the row number used from the filtered data set
y - stands for the number of rows the filter selected from the data set
z - stands for the total number of rows in the data file
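For example, "Row# 2 of 5 filtered rows from 20 rows" would mean the iteration used the 2nd of the 5 rows selected by the filter, out of the 20 rows present in the data file (the numbers are illustrative).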



In addition to this information, you also have the provision to navigate to the filter definition. Below the above line and beside the filter icon, the name of the data filter can be seen. If you click on the name, you will be taken to the data filter definition in a new tab.

Viewing Instruction Details

When you click on the Test Script accordion/row, information about the instructions is presented. In addition to the most obvious information presented on the grid, i.e. the Sequence of the command, Command, Target, Value, Duration & Result, there is a lot more detail available in the grid.



This information includes locator usage, variable usage, error information, and screenshot detail.


This field is quite straightforward in the information it displays. The number shown in the field/cell is the execution sequence of the instruction within the Test Script it is part of.


 
When you mouse over the sequence number, you will see another number, which is the global sequence number. The global sequence number, as the name indicates, is the execution sequence of the instruction with respect to the entire test.
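For example (illustrative numbers), an instruction shown with sequence 5 within its Test Script might display a global sequence number of 37 on mouseover, meaning it was the 37th instruction executed across the entire test.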


This field displays the command that was executed in that row of the instruction grid. In addition to that, an icon is shown in certain cases.



The icon is placed to the right of the cell and shows up when a variable or user-defined variable is used in either the Target or the Value field of the instruction (this icon is also available at the test script level when variables are used in any of the instructions in the test script).
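As an illustration (the command and variable name are hypothetical, and the ${...} notation assumes the Selenium-IDE-style syntax followed by QaSCRIBE commands), an instruction that uses a stored variable in its Value field might read:

type | id=username | ${loginUser}

Because ${loginUser} appears in the Value field, the variable icon is shown against this instruction and, rolled up, against the test script that contains it.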


If you have used a Local variable or User-Defined variable in your test and would like to view the value assigned to the variable at the different instructions where it is used, you can do so by accessing the icon available at the corner of the command that holds the variable; this opens the Variable Inspector popup.

Clicking on the icon displays a popover with the variable information.
At the instruction level, it lists the variables used at that particular instruction.
At the test script level, it lists all the variables (in alphabetical order) used in the test script.
At the test scenario level, it lists all the variables (in alphabetical order) used in the test scenario.
At the test run level, it lists all the variables (in alphabetical order) used in the test run.



You can look for the variable you are interested in by scrolling to it or by using the Search option. The search filters the results on both the variable name and the variable value. You can also sort the variables using the icon placed beside the Search text area.



When the variable is displayed, click on it and the value assigned to the variable is shown. It is also indicated whether the variable is used in the Target or the Value field. To close the text area, click on the variable again.
  • The dropdown is populated with the instruction sequence of the instruction that you accessed. If you are viewing the values at the test script level, it displays the first instruction seq # where the variable is used. You can check the variable value at a specific instruction by selecting the seq# in the dropdown.
  • First, Previous, Next & Last. These buttons can be seen below the dropdown. You can navigate back and forth using these buttons to view the value assigned to the variable at different instructions.
In the Variable Inspector popup, you can also view the DataSet in JSON format. You can view this for the commands wherever the DataSet is called. It also shows the total number of rows present in the DataSet.



Below is the JSON format:
[{
  "attributeName": "value",
  "attributeName": "value"
}, {
  "attributeName": "value",
  "attributeName": "value"
}, {
  "attributeName": "value",
  "attributeName": "value"
}]

All the variables used in the execution can be viewed at once in the Variable Inspector, which is also available at the Run level. It is available for both cloud and local executions; however, variable values cannot be edited/modified in cloud executions. In local executions, you can edit/modify variable values by pausing the execution.


Note: The results filtered for the search keyword are based on the variable and its value.

Edit Variables in Variable Inspector (local executions only)

You can edit the value of a variable during the execution: pause the execution and edit the value. After the editing is done, click on the Save button and the new value will be reflected wherever the variable is next called in the execution.



You can also copy the Original Value to the New Value tab by clicking on the icon below, and then modify the New Value as per your requirement. The New Value then becomes the Original Value for the same variable when it is next called in the execution.
                                                                                     
Note: Original Value cannot be edited.

Target

The Target field displays the locator information for most commands. For some commands, it shows a value or text.

For the commands that use a locator in the Target field and that have multiple locators, the cell displays
  • the locator that identified the element, when the element was found successfully
  • the first locator in the instruction, when the element could not be found
In addition to the locator information, an icon is shown when one or more of the locators failed to find the element. When you mouse over the icon, a popover is shown with all the locators in the target of the instruction.



The locators are shown in the same sequence in which the system attempted them to find the element. A locator that failed to find the element is shown in red, and the one that successfully identified the element is shown in green. In addition, how many times each locator was tried and for how long is mentioned as well.

It is quite possible that each locator is tried multiple times when the page loads slowly or the system fails to locate the element. This information is very useful when you look to optimize your test script. Also, please note that the locator popup icon will not be shown when an element has multiple locators that all find the element, but none of them matches the given value in the 'Value' field.
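For illustration (the locators and timings below are hypothetical), the popover for an instruction whose element was eventually found by its third locator might list:

id=submit-btn                    failed (tried 3 times over ~12 s)  - shown in red
css=button.submit                failed (tried 3 times over ~12 s)  - shown in red
xpath=//button[@type='submit']   found the element                  - shown in green

The first two locators never matched, which may prompt you to re-sequence or remove them, subject to the caution below.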

You can use this information either to sequence your locators differently or to remove obsolete locators that no longer find an element. When deciding to remove a locator, you may want to be a little cautious, as the locator could be used in a different workflow or in a different testing context.

At times you may see a lock icon in the cell. This icon shows up when encrypted text is used in your execution. A toggle option is also provided to view the encrypted text as plain text. If you use that toggle, the encrypted text is shown as plain text.

Note
You will be able to see the toggle on the execution screen only when the Admin of the Project has checked the "Display clear-text (decrypted) values for Test Execution Results in Worksoft SaaS and downloaded reports (PDF, Excel)" option in Domain Info.




You can view the run-time value of the "Value" used in the instruction in the corresponding cell. If a variable is used in the Value field, you can see the actual value by using the Variable Inspector icon placed in the Command field.



At times you only see three dots in the cell. This happens when the text is large and cannot fit in the space provided. This generally happens for "echo" commands. To see the value, click on the icon placed to the right of the cell. This brings up a popover with the actual text/value.


You can see the actual time it took for the command to complete in the Duration field. The information presented in the cell is in the format mm:ss:mi (minutes:seconds:milliseconds). The value shown in the field includes
  • the execution time of the command &
  • the time taken to capture the screenshot (where applicable)
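For example (illustrative numbers), a Duration of 00:03:250 could break down into roughly 2.9 seconds of command execution plus about 0.35 seconds spent capturing the screenshot.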



There are different icons that indicate different statuses. The color of the icon indicates the result of the step/instruction:
  • A green icon indicates that the step was successful
  • A red icon indicates that the step failed
  • A grey icon indicates that the step was skipped
In addition to the icon indicating the result of the instruction, the screenshot icon is shown (wherever applicable). Clicking on the icon opens the Asset Viewer with the specific screenshot loaded.



When there is some issue taking the screenshot, a red icon is shown to indicate that no screenshot is available.


If you have a large locator in the Target or a large text in the Value, you can view them in a Premium Editor on the Execution Results screen. For the echo command, the Variable Inspector is still shown at the Value field, but the Premium Editor icon is shown for all other commands wherever applicable.

Example: 
If you have an assertVisible command in the script, you will not see the Premium Editor icon in the Value column, as this command accepts only a Target.





While debugging & troubleshooting a failed test run, you may need to navigate through the failures (there can be more than one) to find the root cause of the failure. You can perform these actions via the tooltip shown when you mouse over the failure icon at the Scenario, Test Script or Instruction level. The icons provided allow you to navigate to the First, Previous, Next & Last failure, in that order. One or more icons may be greyed out depending on where (at which instruction/script/scenario) you are positioned.

@Scenario


@Script


@Instruction



