
Analytics: User Level


User Level reports help you understand the behavior and responsiveness of the application under test (AUT) based on test executions across different browsers and devices. The reports provide insight into execution details along several parameters, such as test cycle identifier, application module, browser/OS, and root cause of failure.

The User Level reports are categorized as follows:
  • Quality
  • Planning
  • Subscription Usage
  • Performance (available only with certain subscription models)
Quality

The "Quality" reports give you an in-depth understanding of the Test Runs executed. The Quality level reports are granular and provide detailed information on the test runs scheduled in batches/builds, by projects at a particular time period, by executions at the scenario or by product. The Quality level is broken into the following reports:

  • Test Cycle Level Test Outcomes
  • Test Cycle Outcomes and Triage Comparison Reports
  • Test Cycle Clipboard Consolidation Reports
  • Project Level Test Outcomes by Time Period

Test Cycle Level Test Outcomes  

This report will provide rich analytics about the Quality Assurance initiatives up and down the chain of command.

Using this report, you can gain a greater understanding of how your quality assurance efforts are operating and to what extent they are actually succeeding.

Specifically, this report will help you visualize how test outcomes vary by application release/iteration across numerous dimensions, such as project environment, product flavour, browser/operating system platform, etc.

These analytics can be accessed in various ways, across numerous dimensions:

  • At Product Environment, Product Flavour and User levels
  • At a Test Cycle Level By Run Definition and By Application Module
  • By Browser and Operating System Platforms
    • Is my application working equally well across all desktop and mobile browsers/operating systems?
    • Are some customers experiencing issues with a subset of the application functionality on specific browsers/platforms?
  • By Application Module and By Run Definition
In addition to the PDF reports mentioned above, you also get two Excel reports and one PDF report to the right of the Test Cycle Name:
  • Transactions Detail Reports. This Excel workbook has multiple sheets that provide detailed information at various levels. You can configure which sheets to hide or show in "Project Settings", and you can also control the data included by using the corresponding QaCONNECT service (a request sketch follows this list).
  • Concurrency Usage Report. This provides a quick view of the concurrency utilization while the test cycle executed.
  • Consolidated Triage Report. This provides a quick analysis of the issues in the test cycle. It is a very handy report for checking the execution status of the test cycle, as it lets you see whether there are any new failures and whether existing application bugs have been resolved or continue to show up.
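
The QaCONNECT service mentioned above is specific to your Worksoft SaaS domain, so the snippet below is only a minimal sketch of what such a call might look like: the endpoint path, authentication scheme, and field names are assumptions for illustration, not the documented API. Refer to the QaCONNECT service documentation for the actual contract.

    import requests

    # Minimal sketch only -- the endpoint path, auth header, and payload
    # fields below are assumptions; take the real contract from the
    # QaCONNECT service documentation for your domain.
    BASE_URL = "https://your-worksoft-saas-host/qaconnect"  # assumed host/path
    API_KEY = "your-api-key"                                # assumed auth scheme

    def configure_transactions_report(test_cycle_id, visible_sheets):
        """Illustrative call to choose which sheets appear in the
        Transactions Detail Reports for a given test cycle."""
        response = requests.post(
            f"{BASE_URL}/testcycles/{test_cycle_id}/transactions-report-config",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"visibleSheets": visible_sheets},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    configure_transactions_report("TC-1042", ["Summary", "Failures"])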

Test Cycle Outcomes and Triage Comparison Reports  

When you are executing test cycles for every deployment, you may want to quickly check how they compare with each other. This report helps you compare two test cycle outcomes, providing quick insight into any new failures in the latest test cycle and thereby allowing you to debug and troubleshoot them.

Test Cycle Clipboard Consolidation Reports

 
This is an Excel report that allows you to collate additional user information from test runs. For example, you may want to know what data was used in a specific Run Definition/Test Case, or fetch some information from the AUT and review it quickly. In such instances, you can persist data to a temporary clipboard area during execution and then, after the test cycle completes, generate this report for quick access to this additional data.

Project Level Test Outcomes by Time Period

 
This report will provide rich analytics about the Quality Assurance initiatives up and down the chain of command.

Using this report, you can gain a greater understanding of how your quality assurance efforts are operating and to what extent they are actually succeeding.

Specifically, this report will help you visualize how test outcomes vary by application release/iteration across numerous dimensions, such as project environment, browser/operating system platform, testing purpose, etc.

These analytics can be accessed in various ways, across numerous dimensions:
  • At Company (Account) and Project levels
  • At a Test Cycle level, using the Test Cycle Identifier (however your builds are identified)
  • For a specific Calendar Time Period
  • For a specific Testing Purpose (Smoke, Regression, etc.)
    • Am I seeing more application breakages in functionality tied to my 'Smoke Test' or 'Full Regression'?
    • Are new releases/sprints/builds breaking the application functionality tied to Smoke or Full Regression?
    • Are there any patterns I can glean that the transactional data does not help me see?
  • By Browser and Operating System Platforms
    • Is my application working equally well across all desktop and mobile browsers/operating systems?
    • Are some customers experiencing issues with a subset of the application functionality on specific browsers/platforms?

Planning

These analytics will help you plan your test executions. With artifacts mapped to modules and functional requirements, test runs can be organized based on capacity and execution priority.

  • Traceability and Automation Activity
  • Capacity Planning

Traceability and Automation Activity


This report will help you discover how well your tests cover all product modules (application components) or product features (functional requirements, use cases, user stories, etc.).

It gives you these details in terms of both Script Coverage and Scenario Coverage.
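
As a rough illustration of what a coverage figure means here (the data shapes and numbers below are assumptions, not an export format of the report), coverage can be read as the share of modules that have at least one mapped script or scenario:

    # Sketch: coverage as the share of modules with at least one
    # mapped test artifact. Module names and counts are made up.
    scripts_per_module = {"Login": 12, "Checkout": 8, "Reports": 0, "Admin": 3}

    covered = [m for m, n in scripts_per_module.items() if n > 0]
    coverage_pct = 100.0 * len(covered) / len(scripts_per_module)
    print(f"Script coverage: {coverage_pct:.0f}% "
          f"({len(covered)}/{len(scripts_per_module)} modules)")
    # -> Script coverage: 75% (3/4 modules)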

Capacity Planning


This Excel report provides a clear picture of the capacity (number of hours) that you need to execute all run definitions (across all testing contexts and AUT environments) within your project.

This report will help you efficiently and rapidly plan future executions of your automated tests across a subset of, or all of, your AUT environments and/or testing contexts.

The report is built on analytics of your organization's capacity usage within your project.
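
To make the capacity figure concrete, it can be approximated by summing the expected duration of each run definition over every environment it must run in. The durations and names below are assumptions for illustration; the report itself derives such figures from your project's execution history:

    # Rough capacity estimate: run-definition durations (minutes)
    # summed over all target AUT environments. Figures are made up.
    run_definitions = {"Smoke": 45, "Checkout Regression": 180, "Reports Regression": 120}
    environments = ["QA", "Staging"]

    total_minutes = sum(run_definitions.values()) * len(environments)
    print(f"Estimated capacity needed: {total_minutes} minutes "
          f"(~{total_minutes / 60:.1f} hours)")
    # -> Estimated capacity needed: 690 minutes (~11.5 hours)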

Subscription Usage

Reports under this section give you visibility into how your subscription is being used.

  • Capacity Utilization
  • Concurrency Usage

Capacity Utilization

This report will help you with insights into how you are utilizing the 'capacity' from your Worksoft SaaS Subscription across projects and users within your company.

'Capacity' means the maximum number of minutes of automated test executions per day ('Run Definitions' as well as 'Test Scenarios') that you can perform within your domain.
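
As a worked example of the arithmetic (every figure below is an assumption, not a value from your subscription): a nightly cycle of 40 scenarios averaging 12 minutes each consumes 480 of a 3,000-minute daily capacity, i.e. 16%:

    # Illustrative capacity-utilization arithmetic; all figures assumed.
    daily_capacity_minutes = 3000   # assumed daily capacity for the domain
    scenarios_per_cycle = 40
    avg_scenario_minutes = 12

    used = scenarios_per_cycle * avg_scenario_minutes
    print(f"Cycle consumes {used} min = "
          f"{100.0 * used / daily_capacity_minutes:.0f}% of daily capacity")
    # -> Cycle consumes 480 min = 16% of daily capacity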

This report will help you gain answers to the following types of questions:

  • Are you executing more tests in specific time slots of the day, or on specific days of the week or month, than in others, causing your test cycles to stretch across long time windows? Would it be better to break up your test cycles differently so that you utilize the full value of your subscription's capacity? For example, you could schedule tests for one browser/operating system at a specific time of day (or day of the week or month) and tests for another browser/operating system in a different time window.
  • Would your tests experience better outcomes if they ran in a different time window than they do today? Flakiness can vary because of jobs running on the servers where your AUT is hosted (say, a backup job), or because of the number of manual users accessing the AUT at the same time as the automation, etc.

Concurrency Usage

This report will help you with insights into how you are utilizing the 'concurrency' from your Worksoft SaaS Subscription, over a time period, across projects and users within your company.

'Concurrency' allows you to execute tests in parallel to reduce the overall time it takes to finish all your tests.
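
A back-of-the-envelope calculation shows the effect (all numbers below are assumptions): 60 tests averaging 10 minutes each take about 600 minutes sequentially, but finish in roughly 120 minutes with a concurrency of 5:

    import math

    # Effect of concurrency on wall-clock time; figures are made up
    # and assume tests of roughly equal duration.
    num_tests = 60
    avg_minutes = 10
    concurrency = 5

    sequential = num_tests * avg_minutes
    parallel = math.ceil(num_tests / concurrency) * avg_minutes
    print(f"Sequential: {sequential} min; "
          f"with concurrency {concurrency}: ~{parallel} min")
    # -> Sequential: 600 min; with concurrency 5: ~120 min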

This report will help you gain answers to the following types of questions:

  • Are you scheduling all your automated tests at once, causing many of them to be queued because adequate concurrency is unavailable? Would it be better to distribute your test execution schedules more evenly?
  • If you are already optimally using your current concurrency, is it time for you to make changes to your Worksoft SaaS subscription to get higher levels of concurrency?
Performance

If you have subscribed to performance testing, you will see an additional accordion called Performance. Click here to learn about Worksoft SaaS Application Performance Testing Analytics.
