
Example Application - Create a Baseline of Non-Data-Driven Tests [A precursor for you to start your Data-Driven Journey]

Let's assume that the application you have been tasked to test and certify ("AUT," as we call it, short and sweet, for "Application Under Test") is a Health Insurance Premium & Penalty Calculator for Obama Care (the more popular name for the "Affordable Care Act"). Click here to open this calculator (AUT).

In this application you input the annual "salary" of the individual seeking insurance under the law, his/her "age", and the "zip code" where the individual resides, and the calculator outputs the "premium" the individual will have to pay to procure insurance under the law and the "penalty" the individual is expected to pay the government if he/she remains uninsured. In reality there are many more inputs to the calculator, but for the sole purpose of explaining the data-driven features in Worksoft SaaS, the scope of the calculator is limited to 3 inputs and 2 expected outputs.
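To make that scope concrete, one row of test coverage pairs the 3 inputs with the 2 expected outputs. The sketch below models such a row in Python; the field names and the sample values are hypothetical placeholders for illustration, not real ACA figures.

```python
from dataclasses import dataclass

@dataclass
class TestRow:
    """One row of test coverage: 3 inputs and 2 expected outputs."""
    salary: float            # annual salary of the individual
    age: int                 # age of the individual
    zip_code: str            # zip code where the individual resides
    expected_premium: float  # premium the calculator should output
    expected_penalty: float  # penalty the calculator should output

# A hypothetical example row (placeholder values, not real ACA figures)
row = TestRow(salary=45000.0, age=32, zip_code="75001",
              expected_premium=310.0, expected_penalty=95.0)
print(row)
```

A smoke-test data set is then simply a short list of such rows, and a full-regression data set a much longer one.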


Let's also assume that you have been tasked to test/certify the calculator coded by a developer who vouched that the deliverable is of the highest quality and requires no testing by anyone else before it goes live into Production ☺.

You decide to do the “Smoke-Test” certification by using 7 rows of test inputs (of salary, age and zip code) with the expected output of premium and penalty as depicted in the table below.

If your Smoke Test passes (without you finding any smoke), you decide to do a "Full-Regression" certification by using 1000 rows of test inputs (of salary, age and zip code) with the expected output of premium and penalty. Let's assume that you have also created, using Microsoft Excel (or Google Sheets), a table that contains the 1000 rows of data you want to use as your test coverage for your "Full Regression" test of the calculator.
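Whether it holds 7 rows or 1000, such a table is typically exported as a CSV file, and the same loading logic serves both the smoke and the full-regression data sets. The sketch below parses a tiny in-memory stand-in for such an export; the column headers and values are assumptions for illustration, and a real spreadsheet export may use different headers.

```python
import csv
import io

# A tiny in-memory stand-in for a CSV exported from Excel/Google Sheets.
# Column headers and values are hypothetical.
SMOKE_CSV = """salary,age,zip_code,expected_premium,expected_penalty
45000,32,75001,310.00,95.00
52000,45,10001,420.50,120.00
38000,27,94105,275.25,80.00
"""

def load_test_rows(csv_text):
    """Parse a CSV export into a list of typed test-row dicts."""
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            "salary": float(rec["salary"]),
            "age": int(rec["age"]),
            "zip_code": rec["zip_code"],
            "expected_premium": float(rec["expected_premium"]),
            "expected_penalty": float(rec["expected_penalty"]),
        })
    return rows

smoke_rows = load_test_rows(SMOKE_CSV)
print(len(smoke_rows))  # → 3
```

Pointing the same loader at a 1000-row export would yield the full-regression data set with no change to the parsing code.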

The steps outlined in the article "Process to make your Tests 'Data-Driven'" will be explained in detail in the subsequent articles by using the example application/automation context described in this article (above).

Get ready to "test-drive" Worksoft SaaS Data-Driven Testing!!


Creating your Test Scripts using QaSCRIBE


For the example app discussed in the context of the document (Health Insurance Premium & Penalty Calculator), the Test Script needs to cover the following steps:

•   Opening the Website
•   Entering the Necessary Test Inputs into the Calculator
•   Triggering the Calculator to process the calculation
•   Once the results are presented on the Results page, validating that the ‘actual’ results match the expected results

You can use QaSCRIBE to create the atomic test scripts that cover the steps described above.  The screenshot below shows 4 atomic test scripts that were created to cover the test case scope:
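Conceptually, those four atomic scripts map onto four steps of a single test run. The sketch below mimics that flow in plain Python against a stub calculator; the function names, the stub's arithmetic, and the data values are all hypothetical stand-ins for the recorded QaSCRIBE scripts, shown only to make the step boundaries concrete.

```python
def open_website():
    # Stand-in for the atomic script that navigates to the calculator.
    return {"page": "calculator", "inputs": {}}

def enter_inputs(session, salary, age, zip_code):
    # Stand-in for the script that types the three test inputs.
    session["inputs"] = {"salary": salary, "age": age, "zip_code": zip_code}

def trigger_calculation(session):
    # Stand-in for the script that triggers the calculation.
    # The arithmetic is a made-up stub, not the real ACA formula.
    i = session["inputs"]
    return {"premium": i["salary"] / 200 + i["age"] * 2,
            "penalty": i["salary"] / 500}

def validate_results(actual, expected_premium, expected_penalty):
    # Stand-in for the script that asserts actual vs expected results.
    return (actual["premium"] == expected_premium
            and actual["penalty"] == expected_penalty)

session = open_website()
enter_inputs(session, salary=45000, age=32, zip_code="75001")
actual = trigger_calculation(session)
print(validate_results(actual, expected_premium=289.0, expected_penalty=90.0))
```

Keeping each step as its own atomic script is what later lets the data-entry and validation steps be swapped from hard-coded values to parameterized ones without touching navigation.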


Since QaSCRIBE captures your interactions with the AUT (the calculator) while you run the test case with one data row of inputs and verify/assert the correctness of the expected results (premium & penalty), the Worksoft SaaS Test Script(s) that get created contain hard-coded values for the test inputs (salary, age, zip code) and expected outputs (premium & penalty).

We call these scripts “Non-Parameterized” Test Scripts because they have tightly-coupled (or hard-coded) data.

Creating & Validating your Test Scenarios 

Once you have your (Non-Parameterized) Test Scripts ready, you should upload them into Worksoft SaaS (if you have not done so already) and assemble them into Test Scenario(s) and “Validate” each of those Scenarios for each of the Testing Context(s) of interest to you.

This process will ensure that you have the necessary foundation ready for you to proceed with running your automated tests (Scenarios or Run Definitions) using the Data-Driven Testing Feature within Worksoft SaaS.

While Worksoft SaaS lets you assemble and validate Test Scenarios directly in a Data-Driven mode, we recommend that you assemble and validate Test Scenarios in a non-data-driven mode first, before you proceed with executions in the data-driven mode. Following this best practice will save you significant debugging time later.

Assembling Test Scenario – Using ‘Non-Parameterized’ Test Scripts

If you have not yet parameterized any of the Test Scripts in your project (parameterization is described later in this document), then when you start creating a Scenario, on the "Assembly" tab of the Scenario Creation wizard, under the first column (captioned "Pick") of the "Script Library" section, you will see only one icon (with mouseover text "Pick") that you can click to select a particular script to be assembled into the scenario.



Once you have assembled the Test Scripts and sequenced them in the right order, click "Save & Validate", which will take you to the "Specify Testing Context" tab. (The "Bind Data" tab is bypassed because the Scenario's assembly contains no Parameterized Scripts.)


You should 'validate' the Scenario for all Testing Contexts that you are required to certify your AUT against. You can do this by clicking the "Start Afresh" button, which takes you to the "Testing Contexts" tab, where you can add/edit and select a different Testing Context and "Execute" validation of the same Scenario for that next Testing Context. (Please note that "Run Again" runs the Scenario validation for the same Testing Context, whereas "Start Afresh" lets you choose a different Testing Context.)

Once you are satisfied with the Validation of your Scenario against all Testing Contexts of interest, you are ready to proceed to the next step of starting to make your tests 'Data-driven'.
