Rendering Large Reports with 300,000+ Records
Environment
| Product | Reporting |
| --- | --- |
Description
Rendering reports with large datasets, such as more than 300,000 records, is resource-intensive and can fail, especially in containerized environments such as Kubernetes pods. When rendering such reports, the pod may restart after hitting its resource limits, for example through an out-of-memory kill or CPU throttling. To avoid these failures, it is essential to understand the minimum resource requirements, allocate resources accordingly, and follow best practices in report design.
Solution
To render large reports effectively, follow these steps:
- Ensure Sufficient Resources
  - Allocate a dual-core processor and at least 2 GB of RAM for basic report processing.
  - For reports with hundreds of thousands of records, increase the memory and CPU allocations based on the report's complexity and export format, as shown in the container example below.
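When the reporting service runs in a plain Docker container, you can reserve CPU and memory directly on the container with the standard `docker run` resource flags. This is a minimal sketch; the image name and port mapping are placeholders, not part of any specific product:

```bash
# Reserve two CPU cores and 4 GB of RAM for the rendering container.
# "my-registry/reporting-service:latest" is a placeholder for your own image.
docker run --cpus="2" --memory="4g" -p 8080:8080 my-registry/reporting-service:latest
```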
- Optimize Resource Allocation in Pods
  - Review the container orchestration settings and increase the memory and CPU limits for the pod that runs the reporting microservice.
  - Avoid resource throttling by setting appropriate requests and limits in Kubernetes or another container orchestration platform, as in the manifest below.
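In Kubernetes, requests and limits are declared per container in the pod spec. The following manifest is a minimal sketch; the deployment name, image, and the exact CPU and memory figures are assumptions to adapt to your own workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reporting-service          # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reporting-service
  template:
    metadata:
      labels:
        app: reporting-service
    spec:
      containers:
        - name: reporting-service
          image: my-registry/reporting-service:latest   # placeholder image
          resources:
            requests:
              cpu: "2"             # reserve two cores for report rendering
              memory: "4Gi"
            limits:
              cpu: "4"             # headroom before CPU throttling starts
              memory: "8Gi"        # exceeding this triggers an OOM kill and a pod restart
```

Keeping the memory limit above the request gives occasional large renders headroom without permanently reserving that memory on the node.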
- Follow Best Practices for Report Design
  - Limit the data processed and displayed in a single report.
  - Use filtering and aggregation to reduce the dataset size.
  - If needed, split the data into smaller batches, render each batch as a separate report, and combine the reports in a Report Book, as in the sketch after this list.
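The batching itself is independent of the reporting product. Below is a minimal Python sketch; the `batch_records` helper, the `render_report` call, and the 50,000-record batch size are illustrative assumptions, not product APIs or defaults:

```python
from typing import Iterator, Sequence

def batch_records(records: Sequence[dict], batch_size: int = 50_000) -> Iterator[Sequence[dict]]:
    """Yield consecutive slices of the dataset, each small enough to render on its own."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

# Render each batch as a separate report, then combine the results in a Report Book:
# reports = [render_report(chunk) for chunk in batch_records(all_records)]
# render_report() is a hypothetical stand-in for your product's rendering call.
```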
By allocating sufficient resources and following these optimization practices, you can reduce pod restarts and improve report rendering reliability.