
Creating Consolidated Output from Clipboards at the Test Run level

To better understand the concept of test run level clipboards and the consolidation of information from those clipboards into a single test cycle level output, a specific business case from eCommerce is illustrated below as an example. Although the example is drawn from eCommerce, the ability to consolidate information from run level clipboards can be just as beneficial to customers in other verticals and domains.

Let us assume that you want automated tests to verify that the product detail pages (PDPs, as they are usually called) show the correct content and pricing for every product in the catalog that changed as part of a release deployed to a specific environment (production or staging). Depending on the size of your product catalog, the number of products (or the styles and colors of those products) may run into the tens of thousands. You would have created a single run definition containing the test scripts that perform all the content and pricing checks you want done. Using the powerful data-driven testing support that Worksoft SaaS offers, you would most likely run that same run definition hundreds or thousands of times, each time feeding in a different list of products, so that checks on different products run in parallel and the time window needed to finish all the checks is as small as possible. But in this scenario, you or your business users typically expect a single report that consolidates all the variances found by all the automated tests; they would not want to deal with as many mini-reports as there were test runs within a single test cycle.

Now, to accomplish this, Worksoft SaaS offers three powerful commands and a QaCONNECT web service that make it easy to give your business users (or other stakeholders) a single consolidated report that meets their expectations, and to do so rapidly and without much effort.

To make use of this feature, the first thing to do is to use the command 'ddPersistDatasetToTestCycleClipboard' within the test script(s) that are part of the run definition performing the checks on your PDPs, to write the variances found (or any other information; it does not have to be errors only) to what we call a test run level 'clipboard' (a form of temporary storage). Assuming you execute hundreds or thousands of test runs of the same run definition, each time feeding in a different set of products, an equal number of test run clipboards will have accumulated information that is of interest to you and your business.
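
To illustrate the concept only (this is not Worksoft SaaS test script syntax), the sketch below models in plain Python what a test run level clipboard conceptually accumulates across many runs of the same run definition; all class, field, and dataset names here are illustrative assumptions.

    # Conceptual illustration only; NOT Worksoft SaaS script syntax.
    # Each test run gets its own clipboard, and the run's scripts persist
    # rows into it (analogous in spirit to ddPersistDatasetToTestCycleClipboard).
    from dataclasses import dataclass, field

    @dataclass
    class RunClipboard:
        run_id: str
        rows: list = field(default_factory=list)

        def persist_dataset_row(self, dataset: str, row: dict) -> None:
            # Every persisted row is tagged with the dataset it belongs to.
            self.rows.append({"dataset": dataset, **row})

    # One test cycle, many runs of the same run definition; each run checks a
    # different slice of the catalog and writes only the variances it found.
    cycle_clipboards = []
    for run_id, skus in [("run-001", ["SKU-1", "SKU-2"]), ("run-002", ["SKU-3"])]:
        clipboard = RunClipboard(run_id)
        for sku in skus:
            clipboard.persist_dataset_row(
                "PricingVariances", {"sku": sku, "expected": 19.99, "actual": 21.99}
            )
        cycle_clipboards.append(clipboard)

    print(f"{len(cycle_clipboards)} test run clipboards awaiting consolidation")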

The next obvious step is to get a single consolidated output that accumulates the information from all the hundreds or thousands of test run level clipboards. Worksoft SaaS lets you produce the consolidated output either as a 'data file' or as an 'analytical report'. The former appears automatically within your project's 'Data Files' module, whereas the latter appears as a read-only 'report' within your project's 'Analytics' module in the Worksoft SaaS application. If you choose to get the consolidated output as a 'Data File', you have the advantage that the contents of that data file can be used to drive other test runs involving other run definitions in your project. However, be aware that if you choose the data file option, Worksoft SaaS limits the size of the data file to a maximum of 5000 rows. If the consolidated information from all the run level clipboards within a test cycle does not exceed 5000 rows, the consolidation process creates a data file containing 100% of the information from those clipboards. If the consolidation produces more than 5000 rows, only the top 5000 rows are written into the data file. No such restriction exists if you choose the 'analytical report' as the output format.

You may now ask, "How then can I get the consolidated output from the information that is in all my test run level clipboards?" Very easy! Use one of these two ways:

  1. Create another run definition that contains test script(s) that make use of either the command 'ddCreateDataFileFromTestCycleClipboards' or the command 'ddCreateAnalyticsReportFileFromTestCycleClipboards'. The first command creates a data file; the second creates an analytical report.

  2. Call the web service within the Worksoft SaaS QaCONNECT REST API (https://www.eureqatest.com/.../executions/create-consolidated-output-from-clipboards); a rough example of such a call is sketched right after this list.
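
As a rough sketch of option 2, the call below shows how a request to that service might look from Python using the 'requests' library. The full endpoint path (truncated above), the authentication header, and the 'testCycleIdentifier' key name are assumptions made for illustration; only 'consolidatedOutputType', 'dataConsolidationOptionTypeCode', and 'reportOutputCompilationTypeCode' are parameters discussed later in this article. Please consult the QaCONNECT REST API docs for the actual contract.

    # Hedged sketch of calling the QaCONNECT consolidation service.
    # Endpoint path, auth header, and 'testCycleIdentifier' key are assumptions.
    import requests

    QACONNECT_ENDPOINT = (
        "https://www.eureqatest.com/<your-api-path>/executions/"
        "create-consolidated-output-from-clipboards"
    )

    payload = {
        "testCycleIdentifier": "TC-PDP-CHECKS-31072018",   # hypothetical key name
        "consolidatedOutputType": "R",
        "dataConsolidationOptionTypeCode": "01",
        "reportOutputCompilationTypeCode": "02",           # see naming conventions below
    }

    response = requests.post(
        QACONNECT_ENDPOINT,
        json=payload,
        headers={"Authorization": "Bearer <your-api-token>"},  # auth scheme is an assumption
        timeout=60,
    )
    response.raise_for_status()
    print(response.status_code, response.text)
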
So far in this article, the business case described above and the steps to accomplish it may have given you the impression that each run can write only one dataset to its test run level clipboard. That is not the case.

Each run can write more than one dataset of information into its test run level clipboard, and the consolidation commands or the web service will consolidate the information belonging to each dataset separately into its own data file or report. So, if each test run writes two different datasets' worth of information into its test run clipboard, and you execute that run definition 1000 times, then the consolidation command or service will produce two data files or analytical reports: the first containing the consolidation of the first dataset from the 1000 clipboards, and the second containing the consolidation of the second dataset from the 1000 clipboards. It gets even better. You don't have to generate the consolidated output for all datasets at once unless you want to. You can make two separate calls to the web service mentioned above, each time asking for the consolidation of a different data definition that your dataset belongs to, as sketched below. The same support also exists within the consolidation commands that can be used within your test scripts.
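
Continuing the hedged sketch above, two separate service calls might each target a different data definition. The 'dataDefinitionName' key below is purely a placeholder; the article does not name the actual parameter, so treat it (and 'testCycleIdentifier') as assumptions to be checked against the REST API docs.

    # Hypothetical illustration: request consolidation of one data definition at a time.
    # 'dataDefinitionName' and 'testCycleIdentifier' are placeholder key names.
    import requests

    QACONNECT_ENDPOINT = (
        "https://www.eureqatest.com/<your-api-path>/executions/"
        "create-consolidated-output-from-clipboards"
    )

    for data_definition in ["PricingVariances", "ContentVariances"]:
        payload = {
            "testCycleIdentifier": "TC-PDP-CHECKS-31072018",
            "consolidatedOutputType": "R",
            "dataConsolidationOptionTypeCode": "01",
            "reportOutputCompilationTypeCode": "01",
            "dataDefinitionName": data_definition,
        }
        response = requests.post(
            QACONNECT_ENDPOINT,
            json=payload,
            headers={"Authorization": "Bearer <your-api-token>"},
            timeout=60,
        )
        response.raise_for_status()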

These test cycle clipboard reports can also be generated per Run Definition. Because a single Run Definition may use multiple Data Definitions, persisting the datasets may produce a correspondingly large number of Data Files. Instead, you can now have a single workbook (consolidated report) generated, with the required sheets mapped to the multiple Data Definitions.

To do this, add a few additional parameters to the JSON object you send in the request to the service. You can also specify the sequence in which the sheets should appear in the workbook, and the report will be generated according to those preferences.

For example, if you want the Order Summary sheet to appear after the sheets for the placed orders, provide the sheet name and the position in which it should appear, and the report will be generated in that sequence.
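
As a hedged illustration of what such a payload might look like, the fragment below adds a hypothetical sheet-ordering structure alongside the parameters documented in the naming conventions that follow; the 'sheetSequence' key and its shape are assumptions, not confirmed parameter names, so verify them against the REST API docs.

    # Hypothetical payload fragment: 'sheetSequence' and its shape are assumptions,
    # shown only to illustrate how a preferred sheet order might be expressed.
    payload = {
        "consolidatedOutputType": "R",
        "dataConsolidationOptionTypeCode": "02",   # see naming conventions below
        "reportOutputCompilationTypeCode": "02",   # see naming conventions below
        "sheetSequence": [                         # placeholder key name
            {"sheetName": "PlacedOrders", "position": 1},
            {"sheetName": "OrderSummary", "position": 2},
        ],
    }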

Below are the naming conventions for the reports:

1) If the report is generated with the following parameters:
   "consolidatedOutputType": "R",
   "dataConsolidationOptionTypeCode": "01",
   "reportOutputCompilationTypeCode": "01",
   then multiple workbooks are generated (one per Data Definition), each named "TestCycleIdentifier_DDName_Timestamp".

2) If the report is generated with the following parameters:
   "consolidatedOutputType": "R",
   "dataConsolidationOptionTypeCode": "01",
   "reportOutputCompilationTypeCode": "02",
   then a single workbook is generated, named "TestCycleIdentifier_Timestamp".

3) If the report is generated with the following parameters:
   "consolidatedOutputType": "R",
   "dataConsolidationOptionTypeCode": "02",
   "reportOutputCompilationTypeCode": "01",
   then multiple workbooks are generated (one per Run Definition and Data Definition combination), each named "TestCycleIdentifier_RDName_DDName_Timestamp".

4) If the report is generated with the following parameters:
   "consolidatedOutputType": "R",
   "dataConsolidationOptionTypeCode": "02",
   "reportOutputCompilationTypeCode": "02",
   then multiple workbooks are generated (one per Run Definition), each named "TestCycleIdentifier_RDName_Timestamp".

Here "Timestamp" is a unique identifier that is of the format "ddmmyyhhmmssSSS" [example: "31072018061327021" that represents the 31st day of the 7th month of the year 2018 at 06 hrs 13 minutes 27 seconds and 21 milliseconds].

Please refer to the REST API Docs for more information on the mandatory and optional parameters in the JSON object.
