A planning guide for product managers and software team leads.
It is our experience that the best results with Analytics are achieved when two key individuals in your organization are identified at the start of the initial analytics project. Depending on the size of your organization, the roles may carry different names:
- Product manager: Often placed organizationally between the development department and the marketing/sales department. Decides or influences the direction of new product development activities. This person will be responsible for planning what to measure and why, and for analyzing the data received.
- Team lead/lead programmer: The person with primary responsibility for a given software application, who knows “all” about the application at source code level. This person will be responsible for integrating the Analytics software into the application.
This guide is written for these two individuals. Following the recommendations will help your organization achieve results faster and limit the amount of initial experimentation. Best results are achieved when the two individuals decide on and coordinate the necessary tasks together.
This is not a programming guide. All technical information is available at www.telerik.com/analytics/resources/documentation
Any organization with a software development department knows that software development can be a costly affair. You need R&D managers, team leads, system architects, software developers, graphical designers and testers. Often you also need technical writers for documentation, translators, support staff, product managers and product marketing managers.
Every time you decide to extend your software applications with new functionality, you say yes to increased costs: lots of tasks for lots of employees. The more the applications grow, the more the total costs will grow.
Whether the business is producing a more than healthy profit or is striving to survive, a set of relevant questions will always include: What should we do next with our software products? Should we do anything? Should we retire functionality or complete applications? Should we improve usability? Should we add new areas of functionality? Should we fix errors? Should we develop a new product and extend our market?
At Telerik we have developed software for more than 15 years. We know it is just about as easy and costly to develop software that is useful and in demand as it is to develop useless software that nobody wants. It is all about what you decide to do. So how do you decide what to do next?
We cannot answer this question, but we can help you collect indisputable data about your application users and their usage patterns. Based on this information you will have a much better understanding of your software products, your market and your customers. With this information readily available many hard decisions will be a lot easier to make, and you’ll be able to make better judgements on a daily basis leading to improvements that over time will have a substantial positive impact on your business.
We recommend two very simple initial activities that each shouldn’t take more than a few hours.
Take a walk around your organization. Ask selected people what they would like to know about your customers and their usage of your products. Ask them which data and answers could help them the most in their job position. Key Performance Indicators (KPIs) will probably be a relevant starting point for a dialog.
Follow the instructions in the article For Developers.
If you have very little initial time to spend on application analytics, our basic recommendation is to simply include our Analytics monitor in your application: call Start when the application is started and Stop when the application is stopped. It should take less than an hour to get the integration done. You will then receive valuable information without having to worry about local caching of data, the frequency of data transmissions, periods without server access, etc.; the default monitor settings take care of it all. This is all that is required to ship your next release with basic application analytics.
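As a sketch, a minimal integration might look as follows in C#. Note that AnalyticsMonitorFactory, the interface name and the product key argument are assumptions based on the .NET monitor; verify the exact names and signatures in the documentation for your platform.

```csharp
// Minimal integration sketch. Method and type names are assumed from the
// .NET monitor; verify them in the Analytics documentation for your platform.
public class App
{
    private IAnalyticsMonitor monitor;

    public void OnStartup()
    {
        // "your-product-key" is a placeholder for the key from your Analytics account.
        monitor = AnalyticsMonitorFactory.Create("your-product-key");
        monitor.Start();  // begins a session; default settings handle caching and transmission
    }

    public void OnShutdown()
    {
        monitor.Stop();   // ends the session and flushes any pending data
    }
}
```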
If you have any troubles please see the Troubleshooting guide.
After a few hours of work we believe you will see that the implementation effort required for analytics is very low. From our customers we also know that it takes far more time to plan what to measure than to implement the actual measurements.
It is important to understand that the insight gained through collecting usage data will itself reveal areas that are relevant to examine further. Working with application analytics is therefore an iterative process: plan what to measure, analyze the data received, decide based on facts, and plan new measurements for the next software release. In other words, you cannot possibly cover everything worth measuring in your initial integration of Analytics.
Key aspects that influence the extent of your initial planning
- Your software development cycle
- Analytics purpose
The length of your software release cycles influences the amount of time spent on planning, defining and implementing measuring points, as you can only add measuring points before each release. A lengthy software deployment process stresses this aspect further, as it may take a long time before your software is deployed and you start receiving data.
On the other hand, if you deploy your software through automatic software update mechanisms and frequently release minor updates, you can shorten the iterations and basically add measurements with limited planning from release to release.
No matter what your situation is, it is a lot better to have some data collection in place than none. Even if you are only a few weeks from code freeze, you can still get a basic implementation of Analytics in place with very little effort.
From your dialog with your colleagues you should decide what the purpose of the initial integration is. The following list of generic purposes may serve as overall inspiration:
- Utilize your development resources at an optimum: Measure what is used and what is (nearly) not used
- Increase the quality of your product offerings: Collect errors and measure performance (time)
- Increase your understanding of your customers: Collect data about e.g. location, versions, hardware (most achieved automatically through basic integration)
- Optimize your pricing models and profit: Analyze key aspects that influence or could influence your pricing model
- Ensure you detect changing trends in your market: Simply watch collected data over longer periods of time
It is important to understand privacy issues and decide how to inform the users that a given software application is using application analytics software.
It is our recommendation that your communication with your users stresses the fact that you are collecting usage data and error reports to improve your product offerings. Thus, in the end the data collection will help you provide better software for your customers. Most users today acknowledge this fact and understand that as long as private information isn’t collected there is no harm caused.
Analytics can be configured to accommodate a number of different scenarios, ranging from applications used only internally by employees, where there is essentially no restriction on what data can be collected without user consent, to globally distributed software ending up in geographies with different legislation and with different types of users, from trial users to enterprise customers.
For example, you can decide whether or not to store IP addresses, whether to use anonymous cookies, and whether to use your own custom means of identification.
When you can answer the following questions, you are ready to do the final planning:
- In which release of your software will Analytics initially be integrated?
- What are the major initial analytics purposes?
- How will you handle privacy issues?
The remainder of this document addresses aspects that are relevant to many but not all organizations. Pick what you can use.
Over time you will add many measuring points to your applications. Some measuring points will be removed, but many will exist for several years. Different employees with different job roles will access the data collected. New employees will get involved. Experienced employees will move on to new responsibilities. To avoid information loss and misinterpretations of data we recommend that you define a naming strategy and maintain a record of the measuring points defined.
When you define measuring points, use names that are intuitive to most employees. Don't use terms that are internal to the software programmers, unless the specific measuring points serve purposes for the software developers only. Strive for short, precise names that are meaningful to most employees.
TIP #1: Keep a record of all your measuring points e.g. in an Excel sheet. Use a separate sheet for each main module or purpose of your application. For each measuring point register:
- Measurement name: The name that the programmer uses when integrating the measuring point in the source code and the name that will appear when you analyze data through the Analytics web interface. Names can be changed at any time, but strive to get them right the first time.
- Measurement type: Analytics offers a number of different data types, such as Feature Usage, Feature Timing, Feature Value, Flow & Goals, Exception, etc. The programmer will typically know which type is best to choose. The type also determines where you find the results in the Analytics web client.
- Application area: Divide your application into a number of main areas. The areas will often be related to different areas of the UI or different main functionalities.
- Application element: To avoid misunderstandings, refer to a separate document with annotated screenshots from your application.
- Measurement purpose: What is the main purpose of the measurement? Usability improvements, price model optimizations, most used/least used overview, etc.
- Measurement interpretation: How should the results be interpreted? Are there related measuring points?
- Integration code reviewed: Has the implementation of the measuring point been reviewed?
- Data reception verified: Has data reception been verified before release of the application?
The record of all the measuring points serves as a specification for the programmers who will implement the measuring points, and it is also valuable documentation when you start to analyze the data received. The team lead and product manager should both review the specification.
If you have different types of end users, it is important to be able to analyze usage data for each user segment separately. Such segments could be trial users and various categories of paying users, users running a given sub-brand of your product, or users with administrator rights as opposed to standard user rights. Ask your marketing and sales departments what matters to them.
When you initialize your software application (or just once while your application is running), you should register such user segments with one or more calls to the monitor method TrackFeature. The registered segments then appear as a feature-usage graph in the Analytics web client.
For all relevant segments you should “tag” the usage in this way. A user can be a member of multiple segments; simply add one method call per relevant segment. Call the method only once during application initialization for each relevant segment.
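As a sketch, segment registration could look like this (the segment names are purely illustrative; verify the exact TrackFeature signature in the API documentation):

```csharp
// Register user segments once per running application instance.
// The segment names below are illustrative examples only.
monitor.TrackFeature("Segment.TrialUser");           // licensing status
monitor.TrackFeature("Segment.AdministratorRights"); // user rights level
monitor.TrackFeature("Segment.SubBrandX");           // sub-brand of the product
```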
TIP #2: To improve data and enable advanced analysis of usage data, you should register any kind of information that can be used to divide your users into different categories. Use the method TrackFeature to register such categories. Register the information once per running application instance.
With advanced filtering you can e.g. view specific data only from users that are members of a given segment, such as “paying customers running a corporate license”. You could e.g. get a view showing how many computer displays the corporate license users have and compare it with the non-segmented distribution. We have done that for users and customers of our product EQATEC Profiler (see diagrams below).
[Diagrams: “Number of computer displays for corporate license customers” and “Number of computer displays for all customers”]
We can see that 67% of all EQATEC Profiler corporate customers have two computer displays, as opposed to 39% of all EQATEC Profiler users (please note that the coloring of the groups is not the same; blue and orange differ). This example implies that developing better support for customers with two screens might be relevant, since the major revenue stream comes from corporate customers.
If you know the identity of your user you may choose to register this information. However, be aware that by doing this you could easily end up breaking privacy laws unless you implement a process of obtaining your end user's consent. If your users are internal employees of your company, you do not need to ask for consent according to most legislation. Read our Privacy Concerns article or consult your legal advisors for further information.
You can register your end user identification through the following method:
// The user ID can be any string, such as a serial number, username or hardware ID
monitor.SetInstallationId("<user ID>");
If you know the end-user identity you can proactively contact the end-user if you, for example, observe that your software isn’t functioning as expected at specific installations. Furthermore, with such registration in place you can at a later time augment the data collected by applying extra information from e.g. your CRM-system.
- TIP #3: Don’t register the identity of your end-users unless you are absolutely certain you don’t break privacy legislation and you have specific reasons for doing so. In most cases the value of the information is exactly the same whether it is anonymous or not.
You can monitor anything you wish, but you should of course always prioritize your efforts. Even though your individual circumstances define what is specifically relevant for your company, there is one aspect which is generally relevant. It is highly valuable information to know which parts of your software application are being used and which parts are not (or nearly not) used by your end users. It sounds simple and so it is, but measurements will reveal that your expectations may not fit with real life behavior.
What does “used” mean? Let's take a very simple example from Microsoft Word. If I right-click in a document and move the mouse down to “Bullets”, Word shows a submenu of the available bullet styles.
I used the right-click feature and I have viewed some of the available options. Does that count as using this part of the application, or did I merely view what was available?
When I select the square bullet, I have used the “bullet functionality”. If I only view what is available, you could say I have used the “view formatting options” functionality. It can be relevant to track both aspects, but in the majority of cases you want to monitor activation of substantial functionality only.
- TIP #4: Make sure you are precise about what usage you are monitoring. Viewing a feature can be a relevant aspect to track for usability improvement, but using a feature is most often what you want to monitor.
Depending on the size of your application you should prioritize what usage you want to measure, but make sure to cover the whole application rather than going into detail in specific areas only. For example, if your application has an extensive number of settings, you could choose to register whenever a setting from one of, say, five main areas is changed rather than registering every single setting.
TIP #5: To monitor what is most and least used in your application, instrument your code with TrackFeature calls. Make sure you monitor all main aspects of your application. It is preferable to monitor usage where the user is applying functionality rather than just viewing options. If the same functionality can be activated through multiple actions (keyboard shortcuts, menu item selection, button bar clicks, etc.), make sure to differentiate with individual measuring points; in Microsoft Word, for example, the measuring points could be MiniButtonBar.Print, Menu.Print and ShortCut.Print.
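Following the Word example in the tip, the same Print functionality reached through three activation paths could be instrumented like this (a sketch; the handler and Print method names are hypothetical):

```csharp
// One measuring point per activation path for the same functionality.
// Handler names are hypothetical; TrackFeature is the monitor method above.
void OnButtonBarPrintClick() { monitor.TrackFeature("MiniButtonBar.Print"); Print(); }
void OnMenuPrintSelected()   { monitor.TrackFeature("Menu.Print");          Print(); }
void OnCtrlPShortcut()       { monitor.TrackFeature("ShortCut.Print");      Print(); }
```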
Over time you’ll be inspired from what you learn through your measurements and you’ll know how to extend the measurements. When you are new to application analytics it can be a little difficult to get started without some inspiration. This section lists a number of different examples taken from our customers that we hope can help you get started.
TIP #6: Releasing a new application for the first time. Integrate the monitor once you start to run field-tests and/or release beta editions.
- As a minimum initialize the monitor by calling Start when your application is launched and Stop when it is shut down. This will give you a lot of basic information which is highly valuable compared to nothing, especially when you are new in the market and probably know very little about your users.
- With a very limited extra effort add exception tracking. This will make it easy for you to correct errors and get quantitative measurements on the quality of your application. See the best practices document on exception handling for extra information.
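As a sketch, manually tracked exceptions could look like this (TrackException is an assumed method name for the Exception data type mentioned earlier; see the exception-handling document for the exact API):

```csharp
try
{
    SaveDocument();  // hypothetical operation that may fail in the field
}
catch (Exception ex)
{
    // Report the exception to Analytics, then handle it as you normally would.
    monitor.TrackException(ex, "Error while saving document");
    ShowErrorDialog(ex);  // hypothetical error handling
}
```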
TIP #7: Changing to new technology. If you have been in the market for many years, then at some point the technology you are using might fall behind the efficiency offered by newer development technologies. Before you start to re-implement everything, ask yourself one crucial question: how much are the different parts of your existing application being used, and by whom? Several of our customers have saved up to 25% on re-implementation projects by analyzing the usage of their existing applications.
- Use Feature Usage measurements to get insights into usage frequencies for all larger functionalities in your application.
- Use Feature Timing to get insights into how much time your users are spending in different application modules.
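As a sketch, a Feature Timing measurement could be taken with paired start/stop calls (TrackFeatureStart and TrackFeatureStop are assumed method names; verify them in the API documentation):

```csharp
// Measure how much time the user spends in the reporting module.
monitor.TrackFeatureStart("Module.Reporting"); // assumed API name
RunReportingModule();                          // hypothetical; the user works here
monitor.TrackFeatureStop("Module.Reporting");  // elapsed time is reported as Feature Timing
```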
- TIP #8: Review your price models. How did you define your current price model? Through analysis of available data? Gut feelings? Could extra insights into your total user base give you hints towards optimizations and increased profit? Read more in our article on Price Model Optimization.
- TIP #9: Measure errors and performance. Automatic collection of exceptions will often reveal problems that occur outside your office, that nobody cares to tell you about, and that never occur in the lab. To learn how you can improve the quality of your software while reducing support costs and end-user nuisance, please read our article on Exception Handling.
- TIP #10: Improve usability. The usability of your applications matters. The mobile industry is leading the game and setting end-user expectations high. Improving usability is easier said than done; read more in our article on Application Usability.
- TIP #11: Ask questions. Try to ask people in your organization what they would like to know about your customers, the market you are in and how your software is used. Don’t ask them what to measure, but rather what they would like to know. Based on their answers you can judge what to measure to provide the answers. Ask people at all levels of your organization. You’ll be surprised to find that many will have valuable input. Top management is nearly without any doubt interested, and if you can contact them, they will most likely have very relevant questions that you can provide the answers for.
- TIP #12: Ask Telerik. Why not call us? We'll happily do our best to guide you.