
Performance and Zero-Allocation

Logging sits on the hot path of every request. A single logger.LogInformation(...) call that allocates a few strings and a dictionary can, under high throughput, generate enough GC pressure to cause visible latency spikes. Pragmatic.Logging provides zero-allocation formatting, object pooling, queue-based background processing, rate limiting, and batching — all configurable through presets that match your deployment scenario.


The standard string.Format or interpolated string approach allocates a new string on every log call, even when the log level is disabled. Pragmatic.Logging eliminates these allocations through span-based formatting, stack-allocated buffers, and ArrayPool rentals.

Small messages (up to 1,024 characters) are formatted into a stackalloc buffer, avoiding heap allocation entirely. Larger messages fall back to ArrayPool<char>.Shared so the rented array is reused across calls.

// The runtime chooses stack or pool based on length
StackAllocatedBuffer.WithBuffer(256, (Span<char> buffer) =>
{
    // Format directly into the span -- no heap allocation
    var written = FormatOrderMessage(buffer, orderId, total);
    WriteToOutput(buffer[..written]);
});

The threshold is controlled by the constant StackAllocatedBuffer.StackAllocThreshold (1,024 chars). A generic overload accepts an ISpanCallback<TResult, TState> for operations that need to return a value from the span operation.
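The stack-or-pool decision can be illustrated with plain BCL APIs. This is a sketch of the pattern, not the library's internals; `FormatWithBuffer` and its threshold constant are hypothetical names, and it returns a `string` only so the result is easy to inspect (a real zero-allocation path would write from the span directly).

```csharp
using System;
using System.Buffers;

static class BufferSketch
{
    // Mirrors the documented 1,024-char threshold (illustrative constant)
    private const int StackAllocThreshold = 1024;

    public static string FormatWithBuffer(int length, int value)
    {
        if (length <= StackAllocThreshold)
        {
            // Small request: the buffer lives on the stack, no heap allocation
            Span<char> buffer = stackalloc char[length];
            value.TryFormat(buffer, out var written);
            return new string(buffer[..written]);
        }

        // Large request: rent a reusable array from the shared pool
        var rented = ArrayPool<char>.Shared.Rent(length);
        try
        {
            value.TryFormat(rented, out var written);
            return new string(rented, 0, written);
        }
        finally
        {
            ArrayPool<char>.Shared.Return(rented);
        }
    }
}
```

Renting from `ArrayPool<char>.Shared` may return an array larger than requested; only the written prefix is meaningful.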

The ZeroAllocMessageFormatter is the core formatting engine. It uses a ThreadLocal<StringBuilder> pool and ArrayPool<object?> for parameter arrays, minimizing allocations on every call.

// Fast path for 1-3 parameters (the most common case)
var message = ZeroAllocMessageFormatter.FormatFast(
    "Order {OrderId} placed by {CustomerId}".AsSpan(),
    orderId,
    customerId);

// Span-based formatting with zero heap allocation for small outputs
Span<char> buffer = stackalloc char[256];
var success = ZeroAllocMessageFormatter.TryFormat(
    "User {UserId} logged in at {Timestamp}".AsSpan(),
    new object[] { "john.doe", DateTime.UtcNow },
    buffer,
    out var written);

For hot paths where even the parameter array allocation matters, rent from the pool:

var paramArray = ZeroAllocMessageFormatter.RentParameterArray(2);
try
{
    paramArray[0] = orderId;
    paramArray[1] = amount;
    var message = ZeroAllocMessageFormatter.Format(
        "Processing order {OrderId} for {Amount}".AsSpan(), paramArray);
}
finally
{
    ZeroAllocMessageFormatter.ReturnParameterArray(paramArray);
}

The formatter has specialized handlers for common types that call TryFormat directly on the Span<char> destination, bypassing ToString() entirely.

| Type | Allocation | Method |
| --- | --- | --- |
| int, long, float, double, decimal | Zero | TryFormat(Span<char>) |
| DateTime, DateTimeOffset | Zero | TryFormat with ISO 8601 |
| TimeSpan, Guid | Zero | TryFormat |
| bool | Zero | Literal "true" / "false" copy |
| string | Zero | AsSpan().CopyTo() |
| ISpanFormattable | Zero | Interface TryFormat |
| Other | 1 allocation | ToString() fallback |
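The dispatch described in the table can be sketched with standard BCL interfaces. This is an illustrative reimplementation, not the library's formatter; the `ValueFormatter` name is hypothetical.

```csharp
using System;

static class ValueFormatter
{
    // Span-capable types write directly into the destination;
    // everything else pays one allocation via ToString().
    public static bool TryFormatValue(object? value, Span<char> destination, out int written)
    {
        switch (value)
        {
            case null:
                written = 0;
                return true;
            case bool b:
                // Literal copy, no allocation
                var literal = b ? "true" : "false";
                if (literal.AsSpan().TryCopyTo(destination)) { written = literal.Length; return true; }
                written = 0; return false;
            case string s:
                // AsSpan().CopyTo() -- the string already exists, nothing new is allocated
                if (s.AsSpan().TryCopyTo(destination)) { written = s.Length; return true; }
                written = 0; return false;
            case ISpanFormattable f:
                // Covers int, long, double, DateTime, Guid, TimeSpan, ...
                return f.TryFormat(destination, out written, default, null);
            default:
                // Fallback: one allocation for the ToString() result
                var text = value.ToString() ?? string.Empty;
                if (text.AsSpan().TryCopyTo(destination)) { written = text.Length; return true; }
                written = 0; return false;
        }
    }
}
```

The `ISpanFormattable` case is what lets numeric and temporal types format with zero allocations: they write digits straight into the caller's span.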

Every provider performs an IsEnabled check with [MethodImpl(AggressiveInlining)] before any formatting work begins. If the log level is below the configured minimum (or filtered by category), the call returns immediately with zero work done.

[MethodImpl(MethodImplOptions.AggressiveInlining)]
public bool IsEnabled(string categoryName, LogLevel logLevel)
{
    if (logLevel == LogLevel.None)
        return false;

    // Category-specific level check (fast dictionary lookup)
    if (_configuration.CategoryLevels.TryGetValue(categoryName, out var categoryLevel))
        return logLevel >= categoryLevel;

    // Wildcard pattern fallback
    // ...
    return logLevel >= _configuration.MinimumLevel;
}

The source generator recognizes [LoggerMethod(UseZeroAllocation = true)] and generates a logging method that uses ZeroAllocMessageFormatter and pooled parameter arrays instead of the standard LoggerMessage delegate pattern. This gives you the ergonomics of high-level logging with the performance of manual span manipulation.


Allocating and discarding objects on every log call creates GC pressure. Pragmatic.Logging pools the most frequently used objects.

The StringBuilderPool maintains a thread-safe pool of StringBuilder instances with automatic capacity management. When a builder is returned, its capacity is reset if it grew beyond 4 KB, preventing memory bloat from occasional large messages.

var sb = StringBuilderPool.Rent();
try
{
    sb.Append('[');
    sb.Append(timestamp);
    sb.Append("] ");
    sb.Append(message);
    WriteOutput(sb.ToString());
}
finally
{
    StringBuilderPool.Return(sb);
}
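The trim-on-return behavior can be sketched in a few lines. This is an illustrative pool, not the library's `StringBuilderPool`; the class and constant names are hypothetical.

```csharp
using System.Collections.Concurrent;
using System.Text;

sealed class PooledStringBuilders
{
    // The 4 KB retention limit described above (illustrative constant)
    private const int MaxRetainedCapacity = 4096;

    private readonly ConcurrentBag<StringBuilder> _pool = new();

    public StringBuilder Rent()
        => _pool.TryTake(out var sb) ? sb : new StringBuilder(256);

    public void Return(StringBuilder sb)
    {
        sb.Clear();
        // One huge message must not pin a huge buffer in the pool forever
        if (sb.Capacity > MaxRetainedCapacity)
            sb.Capacity = MaxRetainedCapacity;
        _pool.Add(sb);
    }
}
```

Shrinking happens only on return, so the common case (small messages) pays nothing for the check beyond a comparison.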

The DefaultObjectPool<T> is a generic pool implementing IObjectPool<T> with a configurable maximum size and a factory delegate. It is used internally for LogEntry objects, formatter state, and serialization buffers.

var pool = new DefaultObjectPool<LogEntry>(
    factory: () => new LogEntry(),
    maxSize: 256);

var entry = pool.Get();
try
{
    entry.Category = "OrderService";
    entry.Message = "Order placed";
    entry.LogLevel = LogLevel.Information;
    provider.WriteLog(entry);
}
finally
{
    pool.Return(entry);
}

The PropertyPool manages reusable Dictionary<string, object?> instances for structured log properties, avoiding the allocation of a new dictionary on every log call that includes structured data.
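A minimal sketch of such a dictionary pool, assuming a Rent/Return surface like the other pools (the `PropertyDictionaryPool` name is hypothetical, not the library's type). The important detail is clearing on return, so one log call's properties never leak into the next.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

sealed class PropertyDictionaryPool
{
    private readonly ConcurrentBag<Dictionary<string, object?>> _pool = new();

    public Dictionary<string, object?> Rent()
        => _pool.TryTake(out var d) ? d : new Dictionary<string, object?>();

    public void Return(Dictionary<string, object?> d)
    {
        d.Clear();   // callers always receive an empty dictionary
        _pool.Add(d);
    }
}
```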


Pragmatic.Logging ships with four performance presets that configure all the knobs at once. Each preset is available through PragmaticLoggingOptions.ConfigurationPreset.

| Preset | Min Level | Zero-Alloc | Batching | Background | Rate Limiting | Privacy | Use Case |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Development | Debug | Off | Off | Off | Off | Minimal | Local debugging |
| Production | Information | On | On (100/5s) | On | Optional | Standard | Typical web API |
| HighPerformance | Warning | On | On (200/10s) | On | On | Off | High-throughput services |
| Compliance | Information | On | On (50/2s) | On | Off | Aggressive | Regulated industries |
// Via appsettings.json
{
  "PragmaticLogging": {
    "ConfigurationPreset": "Production"
  }
}

// Via code
services.AddPragmaticLogging(logging =>
{
    logging.AddConsole(config =>
    {
        config.Performance.EnableBatching = true;
        config.Performance.BatchSize = 100;
        config.Performance.FlushInterval = TimeSpan.FromSeconds(5);
        config.Performance.UseZeroAllocation = true;
        config.Performance.MaxQueueSize = 10000;
        config.Performance.OverflowStrategy = QueueOverflowStrategy.DropOldest;
    });
});

Under sustained load, a noisy log category can produce thousands of duplicate messages per second, overwhelming both the logging pipeline and downstream aggregators. The HighPerformanceRateLimiter throttles log output using one of three strategies without blocking the caller.

| Strategy | Algorithm | Behavior |
| --- | --- | --- |
| TokenBucket | Token bucket | Allows bursts up to bucket size, then rate-limits. Best for most logging scenarios. |
| SlidingWindow | Sliding window | Precise rate over a moving window. Best for strict compliance requirements. |
| FixedWindow | Fixed window | Simple counter reset at window boundary. Fastest, but allows edge bursts. |
// Apply rate limiting via filter expressions
services.AddPragmaticLogging(null, globalConfig =>
{
    // Limit error logs to 10 per minute (token bucket)
    globalConfig.AddFilter(ctx => ctx.RateLimitTokenBucket(10, TimeSpan.FromMinutes(1)));
}, logging =>
{
    logging.AddConsole();
});
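The token-bucket algorithm itself is simple enough to sketch directly. This is not the library's HighPerformanceRateLimiter; it is a minimal, clock-injected illustration of the strategy in the table above.

```csharp
using System;

sealed class TokenBucket
{
    private readonly double _capacity;
    private readonly double _refillPerSecond;
    private double _tokens;
    private DateTime _lastRefill;

    public TokenBucket(double capacity, double refillPerSecond, DateTime now)
    {
        _capacity = capacity;
        _refillPerSecond = refillPerSecond;
        _tokens = capacity;       // start full: an initial burst is allowed
        _lastRefill = now;
    }

    // Each log message spends one token; refill happens lazily on access.
    public bool TryConsume(DateTime now)
    {
        var elapsed = (now - _lastRefill).TotalSeconds;
        _tokens = Math.Min(_capacity, _tokens + elapsed * _refillPerSecond);
        _lastRefill = now;

        if (_tokens < 1)
            return false;          // rate-limited: the message is dropped
        _tokens -= 1;
        return true;
    }
}
```

Injecting the clock (`now`) keeps the sketch deterministic and testable; a production limiter would read a monotonic timestamp internally.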

The LoggingRateLimitPresets class provides tuned configurations for common scenarios.

| Preset | Strategy | Max Messages | Window | Description |
| --- | --- | --- | --- | --- |
| ErrorLogs | TokenBucket | 10 | 1 min | Prevents error spam |
| WarningLogs | TokenBucket | 50 | 1 min | Moderate throttle |
| DebugLogs | SlidingWindow | 100 | 10 sec | Strict debug limiting |
| HealthCheckLogs | FixedWindow | 1 | 5 min | One per interval |
{
  "PragmaticLogging": {
    "RateLimiting": {
      "Enabled": true,
      "MaxMessagesPerSecond": 1000,
      "BurstSize": 100,
      "Strategy": "TokenBucket"
    }
  }
}

Synchronous writes to disk or network block the calling thread. Pragmatic.Logging uses queue-based async processing to decouple log production from log output.

When PerformanceConfiguration.EnableBatching is true, log entries are enqueued into a bounded channel instead of being written synchronously. A background consumer thread (running at BelowNormal priority by default) drains the queue and writes entries to the provider in batches.

Caller thread                 Background thread
      |                             |
      |-- Enqueue(entry) --------->|
      |   (non-blocking)            |
      |                      [Wait for batch/timer]
      |                             |
      |                      [WriteLogCore(batch)]
      |                             |
      |                      [Flush to disk/network]

When the queue fills up faster than the background thread can drain it, the QueueOverflowStrategy determines what happens.

| Strategy | Behavior | Data Loss | Blocking |
| --- | --- | --- | --- |
| DropOldest | Remove oldest entries to make room | Yes (oldest) | No |
| DropNewest | Reject new entries when full | Yes (newest) | No |
| Block | Block the caller until space is available | No | Yes |
| Expand | Dynamically grow the queue | No | No |

For production, DropOldest is recommended — it prevents caller blocking while preserving the most recent (and usually most relevant) entries. Use Block only in compliance-critical scenarios where every entry must be persisted.
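These semantics map naturally onto System.Threading.Channels' BoundedChannelFullMode, which is a common way to implement such a queue. The sketch below is illustrative plumbing, not the library's code; the `QueueOverflowStrategySketch` enum stands in for the library's QueueOverflowStrategy.

```csharp
using System.Threading.Channels;

enum QueueOverflowStrategySketch { DropOldest, DropNewest, Block, Expand }

static class LogQueueSketch
{
    public static Channel<string> Create(int maxQueueSize, QueueOverflowStrategySketch strategy)
    {
        // Expand has no bounded equivalent: grow without limit instead
        if (strategy == QueueOverflowStrategySketch.Expand)
            return Channel.CreateUnbounded<string>();

        var fullMode = strategy switch
        {
            QueueOverflowStrategySketch.DropOldest => BoundedChannelFullMode.DropOldest, // evict oldest entry
            QueueOverflowStrategySketch.DropNewest => BoundedChannelFullMode.DropWrite,  // reject the new entry
            _                                      => BoundedChannelFullMode.Wait,       // block the caller
        };

        return Channel.CreateBounded<string>(new BoundedChannelOptions(maxQueueSize)
        {
            FullMode = fullMode,
            SingleReader = true,   // one background consumer drains the queue
        });
    }
}
```

With DropOldest, `TryWrite` always succeeds on a full channel by evicting the oldest pending entry, which is exactly the non-blocking, keep-the-newest behavior recommended above.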

config.Performance = new PerformanceConfiguration
{
    EnableBatching = true,
    BatchSize = 100,                           // Entries per write batch
    FlushInterval = TimeSpan.FromSeconds(5),   // Timer-based flush
    MaxQueueSize = 10000,                      // Bounded queue capacity
    OverflowStrategy = QueueOverflowStrategy.DropOldest,
    UseZeroAllocation = true,
    BackgroundThreadPriority = ThreadPriority.BelowNormal
};

Individual writes to disk are expensive due to system call overhead. Batching amortizes this cost by collecting multiple log entries and writing them in a single I/O operation.

The SimpleBatchingProvider and BatchingProvider base classes accumulate entries until either the BatchSize threshold or the FlushTimeout is reached, whichever comes first. The batch is then written atomically (or as a single buffered write, depending on the provider).

var batchingOptions = new BatchingOptions
{
    BatchSize = 100,                           // Flush every 100 entries
    FlushTimeout = TimeSpan.FromSeconds(1),    // Or every 1 second
    UseBackgroundProcessing = true,            // Async drain
    MaxQueueSize = 10000                       // Bounded capacity
};

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| BatchSize | int | 100 | Maximum entries per batch |
| FlushTimeout | TimeSpan | 1 s | Maximum wait before flushing |
| UseBackgroundProcessing | bool | true | Use dedicated background thread |
| MaxQueueSize | int | 10000 | Maximum queued entries |
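The size-or-timeout accumulation can be sketched as a drain loop over a channel reader. This is illustrative, not the library's BatchingProvider: flush when the batch is full, or when the flush timeout elapses with a partial batch pending.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;
using System.Threading.Tasks;

static class BatchDrain
{
    public static async Task DrainAsync(
        ChannelReader<string> reader,
        int batchSize,
        TimeSpan flushTimeout,
        Action<IReadOnlyList<string>> writeBatch)
    {
        var batch = new List<string>(batchSize);
        while (await reader.WaitToReadAsync())
        {
            var deadline = Task.Delay(flushTimeout);
            while (batch.Count < batchSize)
            {
                // Drain whatever is immediately available
                if (reader.TryRead(out var entry)) { batch.Add(entry); continue; }

                // Nothing buffered: wait for more data or the flush deadline
                var more = reader.WaitToReadAsync().AsTask();
                if (await Task.WhenAny(more, deadline) == deadline) break; // timeout: flush partial batch
                if (!await more) break;                                    // channel completed
            }

            if (batch.Count > 0)
            {
                writeBatch(batch);   // one I/O operation for the whole batch
                batch.Clear();
            }
        }
    }
}
```

Whichever fires first, batch size or timeout, triggers a flush, so a quiet period never delays a pending entry longer than the timeout.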

Larger batches reduce I/O overhead but increase memory usage and latency before entries appear in the output. Smaller batches reduce latency but increase system call frequency.

| Scenario | Recommended Batch Size | Flush Interval |
| --- | --- | --- |
| Web API (moderate traffic) | 50-100 | 1-2 seconds |
| High-throughput service | 200-500 | 5-10 seconds |
| Real-time streaming | 10-20 | 100 ms |
| Compliance (every entry matters) | 25-50 | 500 ms |

These options are bound from the PragmaticLogging:Performance section.

{
  "PragmaticLogging": {
    "Performance": {
      "BufferSize": 1000,
      "FlushThreshold": 100,
      "EnableZeroAllocation": true,
      "UseBackgroundProcessing": true,
      "BackgroundQueueSize": 10000
    }
  }
}

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| BufferSize | int | 1000 | Internal buffer capacity |
| FlushThreshold | int | 100 | Entries before auto-flush |
| EnableZeroAllocation | bool | true | Use span-based formatting |
| UseBackgroundProcessing | bool | true | Async queue processing |
| BackgroundQueueSize | int | 10000 | Background queue capacity |

Each provider has its own PerformanceConfiguration instance for fine-grained control.

var config = new PerformanceConfiguration
{
    EnableBatching = true,
    BatchSize = 100,
    FlushInterval = TimeSpan.FromSeconds(5),
    MaxQueueSize = 10000,
    OverflowStrategy = QueueOverflowStrategy.DropOldest,
    UseZeroAllocation = true,
    BackgroundThreadPriority = ThreadPriority.BelowNormal
};

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| EnableBatching | bool | true | Enable batch processing |
| BatchSize | int | 100 | Entries per batch |
| FlushInterval | TimeSpan | 5 s | Timer-based flush |
| MaxQueueSize | int | 10000 | Bounded queue size |
| OverflowStrategy | QueueOverflowStrategy | DropOldest | Queue-full behavior |
| UseZeroAllocation | bool | true | Zero-alloc formatting |
| BackgroundThreadPriority | ThreadPriority | BelowNormal | Background thread priority |
{
  "PragmaticLogging": {
    "RateLimiting": {
      "Enabled": true,
      "MaxMessagesPerSecond": 1000,
      "BurstSize": 100,
      "Strategy": "TokenBucket"
    }
  }
}

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| Enabled | bool | false | Enable rate limiting |
| MaxMessagesPerSecond | int | 1000 | Steady-state rate |
| BurstSize | int | 100 | Allowed burst above the rate |
| Strategy | string | "TokenBucket" | TokenBucket, FixedWindow, SlidingWindow |

The PragmaticNullProvider with its benchmark presets (ForBenchmarking, ForStructuredBenchmarking, ForContextBenchmarking, ForBatchingBenchmarking, ForProductionBenchmarking) provides controlled environments for measuring the overhead of each feature in isolation. Pair these with the BenchmarkDotNet harnesses in benchmarks/Pragmatic.Logging.Benchmarks/ to quantify the impact of enabling or disabling zero-allocation formatting, batching, context enrichment, or privacy redaction on your specific workload.
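A minimal harness along those lines, using only public BenchmarkDotNet and Microsoft.Extensions.Logging APIs. The logger wiring is illustrative: it uses the abstractions package's NullLogger; swap in the PragmaticNullProvider presets in your own project.

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;

[MemoryDiagnoser]   // reports allocated bytes per operation alongside timings
public class FormattingBenchmarks
{
    // NullLogger discards everything, isolating the formatting cost.
    private readonly ILogger _logger = NullLogger.Instance;
    private readonly int _orderId = 42;

    [Benchmark(Baseline = true)]
    public void Interpolated()
        => _logger.LogInformation($"Order {_orderId} placed"); // allocates even when disabled

    [Benchmark]
    public void Structured()
        => _logger.LogInformation("Order {OrderId} placed", _orderId);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<FormattingBenchmarks>();
}
```

The `[MemoryDiagnoser]` column is the one to watch: a genuinely zero-allocation path should report 0 B per op for the enabled-and-formatted case, not just the disabled fast path.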