Enable AI-Powered Insights with Built-in AI Client
This tutorial shows how to enable and configure AI-powered insights using the built-in support for popular LLM providers such as Azure OpenAI, OpenAI, Azure AI Foundry, and Ollama. With this functionality enabled, end users can run predefined or custom prompts against the data behind the currently previewed report and receive responses from the LLM.
If you use a Telerik Report Server instead of a standalone Telerik Reporting REST service, refer to the Report Server article AI-Powered Features Settings.
Prerequisites
To follow the steps from this tutorial, you must have:
- A running application that hosts a Telerik Reporting REST service.
- A report viewer connected to that REST service.
- An active subscription (or a local runtime) for an LLM provider with API access. The providers supported out of the box are Azure OpenAI, Azure AI Foundry, OpenAI, and Ollama.
You can also connect to LLM providers that are not supported out of the box. To do this, create a custom Telerik.Reporting.AI.IClient implementation to integrate the provider into Reporting and enable the AI-powered insights functionality. For more details, refer to the article Enable AI-Powered Insights with Custom AI Client.
Using AI-Powered Insights with a REST service
To enable AI-powered insights, follow these steps:
1. Install exactly one of the following NuGet packages, depending on the LLM provider you use (an example install command follows the list):
   - Telerik.Reporting.AI.Microsoft.Extensions.AzureAIInference for Azure AI Foundry
   - Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI for Azure OpenAI resources
   - Telerik.Reporting.AI.Microsoft.Extensions.OpenAI for OpenAI
   - Telerik.Reporting.AI.Microsoft.Extensions.Ollama for Ollama
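For example, to target Azure OpenAI, you can add the corresponding package from the command line. This sketch assumes the Telerik NuGet feed is already configured as a package source in your environment:

# Adds the Azure OpenAI integration package to the current project
# (package name taken from the list above; requires the Telerik NuGet feed)
dotnet add package Telerik.Reporting.AI.Microsoft.Extensions.AzureOpenAI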
2. Add the AIClient element to the report engine configuration in your application's configuration file. This element lets you specify the AI model, endpoint, and authentication credentials. The following examples demonstrate a basic Azure OpenAI configuration in JSON (appsettings.json for .NET applications) and in XML (app.config/web.config for .NET Framework applications):
{
  "telerikReporting": {
    "AIClient": {
      "friendlyName": "MicrosoftExtensionsAzureOpenAI",
      "model": "gpt-4o-mini",
      "endpoint": "https://ai-explorations.openai.azure.com/",
      "credential": "YOUR_API_KEY"
    }
  }
}
<Telerik.Reporting>
  <AIClient
    friendlyName="MicrosoftExtensionsAzureOpenAI"
    model="gpt-4o-mini"
    endpoint="https://ai-explorations.openai.azure.com/"
    credential="YOUR_API_KEY">
  </AIClient>
</Telerik.Reporting>
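Avoid committing real API keys to source control. As one option, in a .NET application whose REST service passes the application's IConfiguration to the reporting engine (an assumption about your setup), you can supply the credential through the Secret Manager during development. The key path below mirrors the JSON structure shown above:

# One-time initialization of user secrets for the project
dotnet user-secrets init
# Stores the API key outside the repository; the key path follows
# the telerikReporting > AIClient > credential hierarchy
dotnet user-secrets set "telerikReporting:AIClient:credential" "YOUR_API_KEY"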
If you haven't configured the report engine before, check the article Report Engine Configuration Overview to get familiar with the topic.
The friendlyName attribute identifies the LLM provider to the report engine. Each provider has specific configuration requirements:
- Azure OpenAI: Use MicrosoftExtensionsAzureOpenAI. Requires model, endpoint, and credential.
- Azure AI Foundry: Use MicrosoftExtensionsAzureAIInference. Requires model, endpoint, and credential.
- OpenAI: Use MicrosoftExtensionsOpenAI. Requires only model and credential (uses the default OpenAI API endpoint).
- Ollama: Use MicrosoftExtensionsOllama. Requires only model and endpoint (no authentication needed for local deployments); see the sample configuration after this list.
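For example, a JSON configuration for a local Ollama instance might look like the following. The model name llama3 is a placeholder you should replace with a model you have pulled locally, and http://localhost:11434 is Ollama's default endpoint (adjust it if your instance listens elsewhere):

{
  "telerikReporting": {
    "AIClient": {
      "friendlyName": "MicrosoftExtensionsOllama",
      "model": "llama3",
      "endpoint": "http://localhost:11434"
    }
  }
}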