Automation Analytics


Test Results / All Test Scenarios

The report shows the test results of all test scenarios, including repeated executions of the same test scenario. This execution status report displays the combined test results achieved through automation testing. It displays results in three statuses: (1) Pass, (2) Not Executed, and (3) Fail.

Hovering over each segment shows the Status it represents, the Count of test scenarios with that particular Status, and the Percent it holds out of the total executed test scenarios.
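
As an illustration only (not product code), the split could be derived along the following lines; the scenario names and records are hypothetical:

    # Minimal sketch: count execution statuses, repeats included.
    from collections import Counter

    # Hypothetical execution records: (scenario, status)
    executions = [
        ("Login", "Pass"), ("Login", "Fail"), ("Checkout", "Pass"),
        ("Search", "Not Executed"), ("Checkout", "Pass"),
    ]

    counts = Counter(status for _, status in executions)
    total = sum(counts.values())
    for status, count in counts.items():
        print(f"{status}: {count} ({count / total:.0%})")
    # Pass: 3 (60%)
    # Fail: 1 (20%)
    # Not Executed: 1 (20%)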

Execution Results / All Execution Results

There are two Views of this report: (1) Platform and (2) Latest.

(1) Results by Platform

If you select the Platform View, the report shows the test results of test scenarios grouped by the Platforms they run on.

It is a bar chart. When you hover over a bar, it displays the total count and percentage of test scenarios for each execution status. The count also includes repeated runs of the same test scenarios.

The report indicates how many test scenarios were executed on each Platform, e.g. Google Chrome, Firefox, etc., and what their results are.

On the graph, the X-axis represents Platforms and the Y-axis represents Count. Hovering the mouse over a bar displays Status, Count, and Percent details.
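
A minimal sketch of this grouping, using hypothetical platform names and records (not product code):

    # Minimal sketch: group execution results by platform, repeats included.
    from collections import defaultdict, Counter

    # Hypothetical execution records: (platform, status)
    executions = [
        ("Google Chrome", "Pass"), ("Google Chrome", "Fail"),
        ("Firefox", "Pass"), ("Firefox", "Fail"), ("Firefox", "Fail"),
    ]

    by_platform = defaultdict(Counter)
    for platform, status in executions:
        by_platform[platform][status] += 1

    for platform, counts in by_platform.items():
        total = sum(counts.values())
        for status, n in counts.items():
            print(f"{platform} / {status}: {n} ({n / total:.0%})")
    # Google Chrome / Pass: 1 (50%)
    # Google Chrome / Fail: 1 (50%)
    # Firefox / Pass: 1 (33%)
    # Firefox / Fail: 2 (67%)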



(2) Latest Results

If you select the Latest View, the report shows the latest test results/executions of all unique test scenarios.

If a test scenario is executed more than once, only its latest execution is considered for inclusion in the report.

That is, the last execution and its result, Passed or Failed, is picked for the report even when test scenarios are run multiple times, so repeated runs do not create duplicate counts of scenarios.
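
A minimal sketch of this de-duplication, using hypothetical records (not product code):

    # Minimal sketch: keep only the latest execution of each scenario.

    # Hypothetical records: (scenario, run_number, status)
    executions = [
        ("Login", 1, "Fail"), ("Search", 1, "Pass"),
        ("Checkout", 1, "Fail"), ("Login", 2, "Pass"),
    ]

    latest = {}
    for scenario, run, status in sorted(executions, key=lambda e: e[1]):
        latest[scenario] = status  # a later run overwrites an earlier one

    print(latest)  # {'Login': 'Pass', 'Search': 'Pass', 'Checkout': 'Fail'}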

On the graph, the X-axis represents Counts and the Y-axis represents Statuses. Hovering the mouse over a bar displays Status, Count, and Percent details.

Execution Trends

For a selected period, the line chart shows the trend of test scenario executions in terms of the execution results “Pass” and “Fail”. In other words, it shows how many test scenarios passed or failed during that period.



Select the Period from: Daily, Weekly, Monthly, or Yearly.

You can choose which legends, i.e. the “Pass” and/or “Fail” Statuses, appear on the graph by selecting the respective check boxes.

On the graph, the X-axis represents Period and the Y-axis represents Count. Hovering the mouse over an intersection displays the Count and the Date the test was run.

The cumulative count for each status is displayed to the right of the graph.
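
A minimal sketch of how such a trend could be bucketed by period (daily here), with hypothetical dates and records (not product code):

    # Minimal sketch: bucket execution results by day to build trend lines.
    from collections import defaultdict
    from datetime import date

    # Hypothetical records: (run_date, status)
    executions = [
        (date(2024, 5, 1), "Pass"), (date(2024, 5, 1), "Fail"),
        (date(2024, 5, 2), "Pass"), (date(2024, 5, 2), "Pass"),
    ]

    trend = defaultdict(lambda: {"Pass": 0, "Fail": 0})
    for day, status in executions:
        trend[day][status] += 1

    for day in sorted(trend):
        print(day, trend[day])
    # 2024-05-01 {'Pass': 1, 'Fail': 1}
    # 2024-05-02 {'Pass': 2, 'Fail': 0}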


You can zoom in on the chart view for a particular period by dragging the mouse pointer to select an area on the chart.

This gives you a closer look at the period you selected on the chart above.



Click the Lens icon to get back to the default view of the chart.


Grid View

Clicking any of the intersection points changes the chart view to a grid view.

You can see Test Runs on the left and Test Scenarios on the right.

To drill down to test cases, click on a test scenario as shown below.



1 Status Filter for test scenarios.

2 Expand a Test Scenario by clicking on it.

3 The test cases under it are displayed. You can see the execution time of each test case and the execution time of the test scenario, i.e. the total execution time of all its test cases.

4 Sort the test cases by Duration or Execution Order.

5 Select the status of the test cases you want to view. Select "All" to view test cases with any status.

6 Click on the  icon to view Error Information (in case of an error). The following example shows error information.


 


7 Click on  to view the other test scenarios that are affected by the test case.

8 Click on  to return to the chart view.


Execution Trends: Build

The graph shows the execution trends for each build. Each build covers a set of test runs, and each test run includes different test scenarios.



The status is calculated cumulatively for each Build. If a test scenario is not included in the latest test run of the build, its most recent past execution status is considered in the cumulative calculation on the graph.

The test runs will show the results of executed test scenarios.

The following table displays an example of Execution Trend in a Build.









Test Run / Scenarios    1      2      3      4      5     Test Run wise Count
                                                          Pass      Fail
1                       F      F      P      F      F     1         4
2                       P      F      (P)*   F      F     1         3
3                       (P)    P      F      F      F     1         3
4                       (P)    (P)    (F)    P      F     1         1

Build wise Count
Pass                    1      1      0      1      0     3
Fail                    0      0      1      0      1     2



* Scenarios in parentheses are commented out in the automation script to avoid their repeated execution in the current test run; the status shown is carried forward from their most recent past execution.
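
A minimal sketch of the build-wise calculation, reproducing the Build wise Count row of the table above (not product code; it assumes each run records one status per scenario, with None marking a commented-out scenario whose last result carries forward):

    # Rows = test runs 1-4, columns = scenarios 1-5; None = commented out.
    runs = [
        ["F", "F", "P", "F", "F"],
        ["P", "F", None, "F", "F"],
        [None, "P", "F", "F", "F"],
        [None, None, None, "P", "F"],
    ]

    def build_wise_status(runs):
        """Latest known status per scenario, carrying a result forward
        when the scenario was skipped (commented out) in later runs."""
        latest = [None] * len(runs[0])
        for run in runs:
            for i, status in enumerate(run):
                if status is not None:
                    latest[i] = status
        return latest

    latest = build_wise_status(runs)
    print(latest)                      # ['P', 'P', 'F', 'P', 'F']
    print("Pass:", latest.count("P"))  # Pass: 3
    print("Fail:", latest.count("F"))  # Fail: 2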


Drill down to view the details of a status. For example, to check the test run details of the “Pass” status, click on the corresponding point on the graph.

The test runs are listed in descending sequence – the latest run is listed at the top.

The test scenarios under the test run are listed on the right. Apart from the selected test status (Pass/Fail), “All” test scenarios can also be viewed.



Performance Bottleneck

This is a bar chart that displays which test scenarios are responsible for hindering the performance of the test automation process.

It helps the Analyst –

  • analyze the automation test results.
  • compare the actual execution time with the standard execution time and identify test scenarios that take longer than average to execute (see the sketch after this list).
  • isolate the test scenarios responsible for the Performance Bottleneck and pinpoint the root cause.
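
A minimal sketch of one such comparison - flagging scenarios that take longer than the average - with hypothetical scenario names and durations (not product code):

    # Minimal sketch: flag scenarios whose execution time exceeds the average.
    durations = {            # hypothetical scenario -> duration in seconds
        "Login": 12.0,
        "Search": 9.0,
        "Checkout": 41.0,
        "Logout": 4.0,
    }

    average = sum(durations.values()) / len(durations)
    bottlenecks = {name: d for name, d in durations.items() if d > average}

    print(f"Average: {average:.1f}s")   # Average: 16.5s
    print("Bottlenecks:", bottlenecks)  # Bottlenecks: {'Checkout': 41.0}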

On the chart, the X-axis represents Test Scenarios and the Y-axis represents Duration.

Clicking any of the bars changes the chart view to a grid view with the details of that particular test scenario.


Grid View

You can see Test Runs on the left and Test Scenarios on the right.


1 Expanding a test scenario

2 Sorting of test cases

3 Filter test cases by Status: Pass, Fail, All

4 Total execution time of the test scenario

5 Execution time of the test case

Most Failed Test Scenarios

The report shows the Test Scenarios that failed most often during execution.


Fail Rate = Number of times the scenario failed / Number of times the scenario was executed
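
For example (illustrative figures), a test scenario that failed 3 times across 5 executions would have a Fail Rate of 3/5 = 0.6, i.e. 60%.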

Fail Rate and Test Scenarios are clickable; clicking them displays –

  • Test Runs the Test Scenario is associated with.
  • Test Cases covered under the Test Scenario.

1 Click the  icon to view Error Information (in case of an error).

2 Click  to return to the chart view.


The rest of the legends are the same as described above.

Errors Graph

The graph is generated as per the grouping done for the errors above.

By default, all errors remain under Uncategorized Errors, which is the system default Group. Users can then add more Groups as and when required.

Both node types - Group and Error - are clickable. The larger bubble indicates a Group, while the smaller bubble indicates an Error within the Group.

Clicking the Group node displays Errors by Error Group with details of –

  • Error: The error you can see on expanding the Error Information symbol of a failed Test Case.
  • Affected Scenarios: The number of Scenarios affected by this error. The count is clickable and opens the Test Scenarios by Error report. Each test scenario record there is clickable in turn, showing the Test Runs the scenario is associated with and the list of Test Cases covered under it (see the sketch after this list).
  • Occurrences: The number of times the error occurred.
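
A minimal sketch of how Occurrences and Affected Scenarios could be derived, using hypothetical error messages and scenario names (not product code):

    # Minimal sketch: Occurrences counts every time an error appeared;
    # Affected Scenarios counts the distinct scenarios hit by it.
    from collections import defaultdict

    # Hypothetical failure records: (error_message, scenario)
    failures = [
        ("ElementNotFound", "Login"),
        ("ElementNotFound", "Login"),
        ("ElementNotFound", "Checkout"),
        ("Timeout", "Search"),
    ]

    occurrences = defaultdict(int)
    affected = defaultdict(set)
    for error, scenario in failures:
        occurrences[error] += 1
        affected[error].add(scenario)

    for error in occurrences:
        print(error, "occurrences:", occurrences[error],
              "affected scenarios:", len(affected[error]))
    # ElementNotFound occurrences: 3 affected scenarios: 2
    # Timeout occurrences: 1 affected scenarios: 1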


Clicking the Error node displays the same details as above, with that particular error's row highlighted in the list.

Top Errors

Click the  icon at the far right corner of the Errors graph to view the Top Errors report.

It has the following columns:

  • Group: Shows the initials of the Group name.
  • Error: The error you can see on expanding the Error Information symbol of a failed Test Case.
  • Affected Scenarios: The number of Scenarios affected by this error. The count is clickable and opens the Test Scenarios by Error report. Each test scenario record there is clickable in turn, showing the Test Runs the scenario is associated with and the list of Test Cases covered under it.
  • Occurrences: The number of times the error occurred.

Configure Error with AI

Read more about AI for Grouping Errors.

QQBot: Bits of Wisdom

QQBot will alert you as it makes new discoveries - bringing the power of Big Data Analytics, drill-down, and Actionable Intelligence to your fingertips.



Click the robot-like figure at the top to open the Bits of Wisdom alert panel on the right.

The calculated alerts give complete and precise insights with informational graphs.


Performance Configuration

Refer to Manage Projects for more information about Performance Configuration.


Filters

Users can set Filters to define the criteria for generating the different Automation Analytics reports.

To set the filter criteria, click the Filter icon floating at the bottom right corner.




Select values for the following parameters:

  • User: The logged-in user who generates the API key and uploads the test result file. As you type in the search box, only the most relevant values are shown to pick from.
  • Platform: Each test scenario has a Platform associated with it when the test result file is uploaded. If no platform is associated, the default platform is used. As you type in the search box, only the most relevant values are shown to pick from.
  • Component: Each test scenario has a Component assigned to it when the test result is uploaded.
  • Version: The version of the test scenario included in the uploaded test result file.
  • Result: When the test result file is uploaded, each test scenario has a “Pass”, “Fail”, or “Not Executed” result.
  • Duration: Each test scenario has an associated duration when the test result file is uploaded. The filter works according to the parameters set under Settings.


Click Apply to apply the selected filters and generate the graphs accordingly.

Refresh

The Refresh button is provided to refresh the graphs on the page without reloading the entire web page.