If you make use of the exception tracking capabilities of the Analytics service, the Exception Timeline offers aggregated metrics that indicate the overall quality trends of your application.
While the Exceptions report allows you to interact with individual reported exceptions and address them appropriately, the Exception Timeline report allows you to look at the collected exception data as a measurement of the overall quality of your software. The report offers three metrics of quality based on the collected exception data:
- Total number of exceptions collected
- Average number of exceptions reported from an individual session
- Number of previously unseen exceptions reported per day
Each of these metrics can be used to indicate whether the overall quality of the software is improving as the application matures. Note that an overall understanding of how exceptions are handled and processed by the Analytics service is beneficial in understanding what each metric may mean for your specific software. For more on exceptions, see the section on exceptions.
Please note that the exact interpretation of the Exception Timeline metrics is highly dependent on how the application collects exceptions. For instance, if the instrumentation of the software only reports exceptions when the software crashes, then the metrics have a strong correlation with actual quality. If exceptions are also reported for conditions that do not directly affect the user experience, then the correlation with actual quality is less straightforward. For recommendations on tracking exceptions in your software, please see the section on planning exception handling.
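To make the difference concrete, the sketch below contrasts the two instrumentation styles. The `AnalyticsClient` class and its `track_exception()` call are hypothetical stand-ins, not the actual SDK; only the distinction between fatal and non-fatal reporting is taken from the text above.

```python
# Hypothetical instrumentation sketch. AnalyticsClient and
# track_exception() are assumptions, not a real SDK API.

class AnalyticsClient:
    def __init__(self):
        self.reported = []

    def track_exception(self, exc, fatal):
        # Record the exception type and whether it was fatal.
        self.reported.append((type(exc).__name__, fatal))

analytics = AnalyticsClient()

def risky_operation():
    raise ValueError("bad input")

# Style 1: report only crash-level exceptions. The resulting metrics
# correlate strongly with user-visible quality.
try:
    risky_operation()
except ValueError as exc:
    analytics.track_exception(exc, fatal=True)

# Style 2: also report handled, non-fatal conditions. Totals rise,
# but the correlation with user-visible quality weakens.
try:
    int("not a number")
except ValueError as exc:
    analytics.track_exception(exc, fatal=False)

print(analytics.reported)  # [('ValueError', True), ('ValueError', False)]
```

With style 2, the same application reports twice as many occurrences even though the user saw only one failure, which is why the interpretation of the timeline depends on the instrumentation choice.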
A simple count of how many individual exceptions are collected from your application can indicate the overall quality experience of your end users. See the screenshot below for a visualization of this metric:
While the raw number of exception occurrences may serve as an overall quality measurement, the Average Exception Occurrences metric focuses on the experience of an average use of the application and is calculated relative to the actual number of usages of your software. This metric will therefore not suffer from, for example, an increase in overall users resulting in a proportional increase in total exception occurrences. As such, this metric is closer to measuring the software quality experienced by an individual user. See the screenshot below for a visualization of this metric:
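The calculation behind this metric can be sketched as total occurrences divided by session count. The session records below are invented sample data; the field names are assumptions, not the service's actual schema.

```python
# Sketch of the Average Exception Occurrences idea: total exception
# occurrences divided by the number of sessions. Sample data and field
# names are assumptions.

sessions = [
    {"id": "s1", "exceptions": 2},
    {"id": "s2", "exceptions": 0},
    {"id": "s3", "exceptions": 1},
]

total = sum(s["exceptions"] for s in sessions)
average = total / len(sessions)

print(total)    # 3
print(average)  # 1.0

# Doubling the user base with the same per-session behaviour doubles
# the total but leaves the average unchanged:
doubled = sessions * 2
assert sum(s["exceptions"] for s in doubled) / len(doubled) == average
```

The final assertion illustrates the point made above: the total occurrence count scales with usage, while the per-session average stays stable.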
As mentioned in the section on exceptions, the collected exceptions are grouped into exception items based on similarity criteria, both to create an easier overview and to make it easier to determine the reach of an exception. The New Exception Items metric indicates the rate at which new exception items are created by the software. This metric can indicate whether the software becomes more or less error prone as it matures over time, and, by filtering on e.g. specific releases, you can understand the impact of specific versions on quality. See below for an example screenshot of this metric:
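The grouping and the new-items rate can be sketched as follows. The similarity key used here (exception type plus top stack frame) is an assumption for illustration; the service's actual grouping criteria are described in the section on exceptions.

```python
# Sketch of grouping collected exceptions into "exception items" and
# counting previously unseen items per day. The similarity key and the
# sample events are assumptions.

from collections import defaultdict

events = [  # assumed sorted by day
    {"day": "2024-05-01", "type": "NullRef", "frame": "Cart.checkout"},
    {"day": "2024-05-01", "type": "NullRef", "frame": "Cart.checkout"},
    {"day": "2024-05-02", "type": "Timeout", "frame": "Api.fetch"},
    {"day": "2024-05-02", "type": "NullRef", "frame": "Cart.checkout"},
]

seen = set()
new_items_per_day = defaultdict(int)

for ev in events:
    key = (ev["type"], ev["frame"])  # hypothetical similarity criteria
    if key not in seen:              # first occurrence => new item
        seen.add(key)
        new_items_per_day[ev["day"]] += 1

print(dict(new_items_per_day))  # {'2024-05-01': 1, '2024-05-02': 1}
```

Note that repeat occurrences of an already-seen item do not count again; only the first sighting of each item contributes to the daily rate, which is what makes the metric a signal of newly introduced defects rather than of overall volume.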
Understanding exceptions, and how to best track them within your application, is crucial to using these metrics as quality indicators. Please refer to the sections on exceptions in general and on planning exception handling to maximize the value of these metrics.