Available for: UI for ASP.NET MVC | UI for ASP.NET AJAX | UI for Blazor | UI for WPF | UI for WinForms | UI for Silverlight | UI for Xamarin | UI for WinUI | UI for ASP.NET Core | UI for .NET MAUI


ExpandableMemoryStream

ExpandableMemoryStream is a specialized in-memory stream built to handle demanding PDF workloads that involve large data volumes or concurrent operations. It provides a scalable alternative to traditional memory buffers by managing data growth in a controlled, efficient way. Instead of relying on a single expanding array, it uses an internal structure that minimizes memory churn and maintains stable performance even under heavy or unpredictable load.

Why a Segmented Approach

Large PDF generation often needs a temporary buffer. A normal contiguous array may reallocate and copy data multiple times as it expands, increasing CPU work, peak memory, and pressure on the Large Object Heap (LOH). Avoiding large contiguous allocations lowers fragmentation, reduces garbage collection pauses, and scales better when size is unpredictable or workloads are bursty.
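
As a rough sketch of the difference (the chunk size and iteration count below are arbitrary assumptions, not measured values): appending many small chunks of unknown total size forces a contiguous buffer to reallocate and copy repeatedly, while a segmented buffer only adds new blocks.

byte[] chunk = new byte[64 * 1024];

// Contiguous buffer: when capacity is exceeded, the backing array is reallocated
// and all existing bytes are copied; large arrays end up on the LOH.
using (MemoryStream contiguous = new MemoryStream())
{
    for (int i = 0; i < 2000; i++)
    {
        contiguous.Write(chunk, 0, chunk.Length);
    }
}

// Segmented buffer: growth allocates one new fixed-size block at a time,
// so bytes that are already written are never copied.
using (ExpandableMemoryStream segmented = new ExpandableMemoryStream())
{
    for (int i = 0; i < 2000; i++)
    {
        segmented.Write(chunk, 0, chunk.Length);
    }
}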

How It Works

Data lives in equal-sized blocks held in order. When more space is required, a single new block is allocated and the earlier blocks stay untouched. A position maps to a (block index, offset) pair. Growing exposes cleared bytes ready for writing. Shrinking lowers only the visible length and retains the blocks, so later growth can reuse already allocated memory without new large allocations.
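
For illustration, a linear position can be mapped onto the block structure with simple integer arithmetic. The block size below matches the default segment size used later in this article; the variable names are illustrative, not the component's actual internals.

// Illustrative only: mapping a linear stream position to (block index, offset),
// assuming equal-sized blocks. Real field names and internals may differ.
const int blockSize = 1_000_000;  // default segment size
long position = 2_500_000;

int blockIndex = (int)(position / blockSize);   // 2
int blockOffset = (int)(position % blockSize);  // 500,000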

When to Use

Use it when you need to:

  • Build or merge large PDFs fully in memory before saving.
  • Combine many pieces where the final size is unknown.
  • Run multiple document builds in parallel and want steady, predictable allocations.
  • Seek and rewrite parts of the buffered content without triggering array growth copies (see the sketch after this list).
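
As a minimal sketch of the last scenario, in-place rewriting uses the standard Stream API; the offsets and payload sizes below are illustrative assumptions.

// Reserve a region, write the body, then seek back and overwrite the reserved
// region once its final content is known. No growth copy happens on rewrite.
using (ExpandableMemoryStream stream = new ExpandableMemoryStream())
{
    byte[] placeholder = new byte[256];   // reserved region, filled in later
    byte[] body = new byte[4096];         // main content

    stream.Write(placeholder, 0, placeholder.Length);
    stream.Write(body, 0, body.Length);

    byte[] finalHeader = new byte[256];
    stream.Position = 0;                  // seek back to the reserved region
    stream.Write(finalHeader, 0, finalHeader.Length);

    stream.Position = 0;                  // reset before handing off to a consumer
}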

Example

The following example shows two common ways to load a large PDF document into memory before further processing. The first approach constructs the stream directly from a byte array and passes an explicit segment size (bufferSize). The second approach creates an empty instance and copies a file stream into it. The constructor's second parameter (bufferSize) is optional and defaults to 1,000,000 bytes (1 MB). You can omit it unless you want a different segment size.

PdfFormatProvider pdfFormatProvider = new PdfFormatProvider();
RadFixedDocument radFixedDocument1;
RadFixedDocument radFixedDocument2;

string inputPath = "large.pdf";

// Method 1: Load from byte array (explicit bufferSize provided; could be omitted because 1,000,000 is the default)
byte[] byteArray = File.ReadAllBytes(inputPath);

using (ExpandableMemoryStream expandableMemoryStream = new ExpandableMemoryStream(byteArray, 1_000_000))
{
    radFixedDocument1 = pdfFormatProvider.Import(expandableMemoryStream, TimeSpan.FromSeconds(120));

    // ... manipulate radFixedDocument1 ...
}

// Method 2: Load by copying from FileStream (will use default buffer size when not specified)
using (ExpandableMemoryStream expandableMemoryStream = new ExpandableMemoryStream())
{
    using (FileStream fileStream = File.OpenRead(inputPath))
    {
        fileStream.CopyTo(expandableMemoryStream);
    }

    expandableMemoryStream.Position = 0; // Reset before import
    radFixedDocument2 = pdfFormatProvider.Import(expandableMemoryStream, TimeSpan.FromSeconds(120));

    // ... manipulate radFixedDocument2 ...
}

In both cases, the segmented internal structure avoids reallocating a single large contiguous buffer, which improves performance and memory stability for very large PDF files.
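
If the processed document then needs to be saved, the export can go through another ExpandableMemoryStream in the same way. The output path and the Export overload with a timeout are assumptions that mirror the Import calls above.

// A possible continuation of the example: export the processed document to an
// in-memory stream, then write it to disk. Paths and the timeout are assumptions.
string outputPath = "large-processed.pdf";

using (ExpandableMemoryStream outputStream = new ExpandableMemoryStream())
{
    pdfFormatProvider.Export(radFixedDocument1, outputStream, TimeSpan.FromSeconds(120));

    outputStream.Position = 0; // Reset before copying
    using (FileStream fileStream = File.Create(outputPath))
    {
        outputStream.CopyTo(fileStream);
    }
}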
