Log Providers

Pragmatic.Logging ships with a set of built-in log providers that cover the most common output targets. Each provider extends PragmaticLoggerProviderBase, inheriting automatic metrics collection, health checks, context enrichment, privacy redaction, and configuration hot-reload. You pick the providers you need and compose them through the fluent PragmaticLoggingBuilder.


All providers are registered through the AddPragmaticLogging extension method on IServiceCollection. The callback receives a PragmaticLoggingBuilder that exposes a fluent API for adding providers.

services.AddPragmaticLogging(logging => logging
    .AddConsole(o => o.UseColors = true)
    .AddFile("logs/app-{Date}.log", o => o.RetentionDays = 30)
    .AddJson("logs/structured.json"));

1. Console — PragmaticConsoleProvider

Terminal output is the first thing developers reach for when debugging a running application. The Console provider writes colored, structured output to stdout with automatic detection of ANSI color support, Windows Console API fallback, and plain-text mode for CI/CD environments.

  • Automatic terminal capability detection (ANSI, Windows Console API, no-color).
  • Custom ColorScheme per log level (Information, Warning, Error, Critical).
  • Structured property rendering inline with the message.
  • Category name truncation for compact output.
  • Thread-safe writes with a shared StringBuilder pool.
logging.AddConsole(PragmaticConsoleConfiguration.ForAdvancedConsole());

// Or with inline options
logging.AddConsole(config =>
{
    config.MinimumLevel = LogLevel.Debug;
    config.Formatting.TimestampFormat = "HH:mm:ss.fff";
    config.Formatting.UseUtcTimestamp = false;
    config.IncludeStructuredProperties = true;
});
Option                  Type     Default        Description
UseColors               bool     true           Enable colored output
IncludeStructuredData   bool     true           Render structured properties inline
OutputFormat            string   "Structured"   Output format style
[14:32:15.123] [INFO] OrderService: Order placed successfully {OrderId: 42, CustomerId: "C-100"}
[14:32:15.456] [WARN] PaymentService: Payment retry initiated {Attempt: 2, Amount: 99.50}
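
The detection order described above (ANSI, then Windows Console API, then plain text) can be sketched as a pure function. This is an illustrative sketch only — the `ConsoleColorMode` enum, the NO_COLOR opt-out, and the TERM check are assumptions, not the provider's actual implementation:

```csharp
using System;

enum ConsoleColorMode { Ansi, WindowsConsoleApi, None }

static class ColorDetection
{
    // Illustrative detection order: explicit opt-out first, then redirected
    // output (CI/CD pipes), then terminal capabilities, then the legacy API.
    public static ConsoleColorMode Detect(
        bool outputRedirected, string? noColorEnv, bool isWindows, string? termEnv)
    {
        // The NO_COLOR convention or a redirected stream disables color entirely.
        if (!string.IsNullOrEmpty(noColorEnv) || outputRedirected)
            return ConsoleColorMode.None;

        // A meaningful TERM value usually implies an ANSI-capable terminal.
        if (!string.IsNullOrEmpty(termEnv) && termEnv != "dumb")
            return ConsoleColorMode.Ansi;

        // Fall back to the legacy Windows Console API where available.
        return isWindows ? ConsoleColorMode.WindowsConsoleApi : ConsoleColorMode.None;
    }
}
```

Making the inputs explicit parameters (rather than reading the environment directly) keeps the decision testable without touching a real terminal.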

2. File — PragmaticFileProvider

Production applications need durable, rotated log files that do not consume unbounded disk space. The File provider uses channel-based async I/O with lock-free concurrent writes, automatic rolling by size and time, retention policies, and exponential backoff retry on write failures.

  • Async channel-based processing with bounded queues and back-pressure.
  • Template-based file naming: app-{Date}.log produces app-2025-03-22.log.
  • Rolling intervals: hourly, daily, weekly, monthly, yearly.
  • Size-based rolling with configurable thresholds.
  • Old file compression and retention.
  • 8 KB buffered streams for optimal disk throughput.
logging.AddFile("logs/app-{Date}.log");

// With detailed options
logging.AddFile("logs/app-{Date}.log", config =>
{
    config.MinimumLevel = LogLevel.Information;
    config.Performance.EnableBatching = true;
    config.Performance.BatchSize = 100;
    config.Performance.FlushInterval = TimeSpan.FromSeconds(5);
    config.Performance.MaxQueueSize = 50000;
    config.Formatting.TimestampFormat = "yyyy-MM-dd HH:mm:ss.fff";
    config.Formatting.IncludeExceptionDetails = true;
});
Option             Type     Default            Description
BasePath           string   "./logs"           Root directory for log files
FileNamePattern    string   "app-{Date}.log"   Template with {Date} placeholder
MaxFileSizeMB      int      100                Maximum size before rolling
CompressOldFiles   bool     true               GZip compress rolled files
RetentionDays      int      30                 Days to keep old files
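
The channel-based pattern the provider describes — many producers enqueue, one background consumer drains with back-pressure — can be sketched with System.Threading.Channels. This is a simplified standalone sketch, not the provider's actual code (it omits batching, rolling, and retry):

```csharp
using System;
using System.IO;
using System.Threading.Channels;
using System.Threading.Tasks;

sealed class ChannelFileWriter : IAsyncDisposable
{
    private readonly Channel<string> _channel;
    private readonly Task _consumer;

    public ChannelFileWriter(string path, int capacity = 50_000)
    {
        // Bounded channel: when the queue is full, producers wait
        // (back-pressure) instead of growing memory without limit.
        _channel = Channel.CreateBounded<string>(new BoundedChannelOptions(capacity)
        {
            FullMode = BoundedChannelFullMode.Wait,
            SingleReader = true
        });

        _consumer = Task.Run(async () =>
        {
            // A single consumer owns the stream, so writes need no locking.
            await using var writer = new StreamWriter(path, append: true);
            await foreach (var line in _channel.Reader.ReadAllAsync())
                await writer.WriteLineAsync(line);
        });
    }

    public ValueTask WriteAsync(string line) => _channel.Writer.WriteAsync(line);

    public async ValueTask DisposeAsync()
    {
        _channel.Writer.Complete();   // Stop accepting; drain what is queued.
        await _consumer;
    }
}
```

Completing the writer before awaiting the consumer guarantees queued entries are flushed on shutdown rather than dropped.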

3. JSON — PragmaticJsonProvider

Log aggregation systems such as the ELK Stack, Fluentd, or Azure Monitor consume structured JSON. The JSON provider outputs each log entry as a JSON object with @timestamp, @level, @logger, @message, and @properties fields, using Utf8JsonWriter for high-performance serialization.

  • High-performance Utf8JsonWriter serialization.
  • Automatic file rolling by size and date.
  • Pretty-print mode for development readability.
  • Complex object serialization with System.Text.Json.
  • Both console and file output targets.
logging.AddJson("logs/structured.json", PragmaticJsonConfiguration.ForHighPerformanceJsonFile());

// Pretty-printed for development
logging.AddJson("logs/dev.json", PragmaticJsonConfiguration.ForPrettyJson());
Property                  Type   Default   Description
PrettyPrint               bool   false     Indent JSON output
SerializeComplexObjects   bool   true      Serialize nested objects as JSON
SkipValidation            bool   true      Skip JSON writer validation for speed
AutoFlush                 bool   false     Flush after each entry
MaxFileSizeBytes          long   500 MB    File size before rolling
RollByDate                bool   true      Roll files daily
{"@timestamp":"2025-03-22T10:30:00.000Z","@level":"INFO","@logger":"OrderService","@message":"Order processed","@properties":{"OrderId":42,"Total":199.99}}
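
A single entry in the shape shown above can be produced directly with Utf8JsonWriter. The following is a minimal standalone sketch of the format, not the provider's serializer; the helper name and timestamp format are assumptions:

```csharp
using System;
using System.IO;
using System.Text;
using System.Text.Json;

static string WriteEntry(DateTime timestampUtc, string level, string logger,
    string message, Action<Utf8JsonWriter> writeProperties)
{
    using var buffer = new MemoryStream();

    // SkipValidation mirrors the provider's documented default for raw speed.
    using (var json = new Utf8JsonWriter(buffer,
        new JsonWriterOptions { SkipValidation = true }))
    {
        json.WriteStartObject();
        json.WriteString("@timestamp", timestampUtc.ToString("yyyy-MM-ddTHH:mm:ss.fffZ"));
        json.WriteString("@level", level);
        json.WriteString("@logger", logger);
        json.WriteString("@message", message);
        json.WriteStartObject("@properties");
        writeProperties(json);   // Caller supplies the structured properties.
        json.WriteEndObject();
        json.WriteEndObject();
    }   // Disposing the writer flushes the buffered UTF-8 bytes.

    return Encoding.UTF8.GetString(buffer.ToArray());
}
```

Writing UTF-8 bytes directly skips the intermediate `string` allocation that a `JsonSerializer.Serialize`-to-string approach would incur per entry.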

4. Enhanced JSON — PragmaticEnhancedJsonProvider

High-throughput systems that forward logs to streaming pipelines need NDJSON (newline-delimited JSON) with async buffering and atomic file writes. The Enhanced JSON provider extends the JSON provider with NDJSON format, async buffering via ConcurrentQueue, timer-based flush, and optional atomic writes using temp-file-then-rename.

  • NDJSON output (one JSON object per line) for streaming compatibility.
  • Async buffering with configurable flush intervals and batch sizes.
  • Atomic writes for crash-safe file operations.
  • Flattened properties for better log search indexing.
  • Sequential or timestamp-based roll strategies.
  • Hourly rolling option in addition to daily.
logging.AddNdjsonAsync("logs/events.ndjson");

// High-performance file output
logging.AddNdjsonAsync("logs/events.ndjson", PragmaticEnhancedJsonConfiguration.ForHighPerformanceFile());

// Real-time streaming with minimal buffering
logging.AddNdjsonAsync("logs/stream.ndjson", PragmaticEnhancedJsonConfiguration.ForRealTimeStreaming());
Property               Type     Default       Description
EnableNDJSON           bool     true          Use NDJSON format
EnableAsyncBuffering   bool     true          Queue entries for async processing
EnableAtomicWrites     bool     true          Use temp file + rename
FlattenProperties      bool     true          Flatten properties to root level
AsyncFlushIntervalMs   int      2000          Timer-based flush interval
MaxAsyncQueueSize      int      1000          Trigger immediate flush threshold
RollStrategy           string   "timestamp"   "timestamp" or "sequential"
RollByHour             bool     false         Roll files every hour
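
The temp-file-then-rename technique is simple to sketch on its own: write the result to a temporary file in the same directory, then rename it over the target, so a reader never observes a half-written file. This is an illustrative sketch (the `.tmp` suffix and helper name are assumptions, and copying the whole file per batch trades efficiency for clarity):

```csharp
using System.IO;

static void AtomicAppendBatch(string path, string[] lines)
{
    // The temp file must be on the same volume as the target, or the
    // final rename stops being atomic.
    var tmp = path + ".tmp";

    // Build the complete new contents in the temp file.
    if (File.Exists(path))
        File.Copy(path, tmp, overwrite: true);
    File.AppendAllLines(tmp, lines);

    // Rename is atomic on the same volume: readers see either the old
    // complete file or the new complete file, never a partial write.
    File.Move(tmp, path, overwrite: true);
}
```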

5. Memory — PragmaticMemoryProvider

Unit tests and diagnostic tools need to inspect log entries programmatically without file I/O. The Memory provider stores entries in a thread-safe ConcurrentQueue with auto-truncation, predicate-based search, and memory usage estimation.

  • Thread-safe circular buffer with configurable capacity.
  • Search by log level, category, message text, or custom predicate.
  • Auto-truncation when capacity is exceeded (removes oldest half).
  • Memory usage estimation and entry age tracking.
  • Clear() for test isolation between test methods.
logging.AddProvider<PragmaticMemoryProvider>(_ =>
    new PragmaticMemoryProvider("Memory", PragmaticMemoryConfiguration.ForTesting()));
Property       Type   Default   Description
MaxEntries     int    10000     Maximum stored entries (max 1M)
AutoTruncate   bool   true      Auto-remove oldest when full
var memoryProvider = serviceProvider.GetRequiredService<PragmaticMemoryProvider>();

// All entries at a specific level
var errors = memoryProvider.GetLogEntries(LogLevel.Error);

// Search by message text
var paymentLogs = memoryProvider.GetLogEntriesContaining("payment");

// Custom predicate
var recentWarnings = memoryProvider.GetLogEntries(e =>
    e.LogLevel == LogLevel.Warning && e.Timestamp > DateTime.UtcNow.AddMinutes(-5));

// Most recent N entries
var latest = memoryProvider.GetLatestLogEntries(50);

// Assert in tests
memoryProvider.HasLogEntryContaining("Order created").Should().BeTrue();
Preset                    Min Level     MaxEntries   Context   Use case
ForMemory()               Trace         10,000       All       General debugging
ForHighCapacityMemory()   Trace         100,000      All       Long-running diagnostics
ForTesting()              Information   1,000        Off       Unit tests
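
The auto-truncation rule described above ("removes oldest half" when capacity is exceeded) can be sketched with a ConcurrentQueue. This is an illustrative standalone version, not the provider's code:

```csharp
using System.Collections.Concurrent;

sealed class TruncatingBuffer<T>
{
    private readonly ConcurrentQueue<T> _entries = new();
    private readonly int _maxEntries;

    public TruncatingBuffer(int maxEntries) => _maxEntries = maxEntries;

    public int Count => _entries.Count;

    public void Add(T entry)
    {
        _entries.Enqueue(entry);

        // When over capacity, drop the oldest half in one pass, so the
        // truncation cost is paid rarely rather than on every Add.
        if (_entries.Count > _maxEntries)
        {
            var toRemove = _maxEntries / 2;
            for (var i = 0; i < toRemove; i++)
                _entries.TryDequeue(out _);
        }
    }
}
```

Removing half at a time (instead of one entry per add) amortizes the dequeue work and keeps the hot logging path cheap.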

6. Debug — PragmaticDebugProvider

When running under a debugger, developers need log output in the Output window without file or console overhead. The Debug provider writes to System.Diagnostics.Debug.WriteLine, automatically adapting its health status based on whether a debugger is attached.

  • Output to the Visual Studio / Rider Debug Output window.
  • Category filtering to focus on specific components.
  • Thread ID and timestamp injection.
  • Health status: Healthy when debugger is attached, Warning in release builds.
logging.AddProvider<PragmaticDebugProvider>(_ =>
    new PragmaticDebugProvider("Debug", PragmaticDebugConfiguration.ForDebug()));

// Focused debugging on a single service
logging.AddProvider<PragmaticDebugProvider>(_ =>
    new PragmaticDebugProvider("Debug", PragmaticDebugConfiguration.ForFocusedDebug("OrderService")));
Property            Type     Default   Description
IncludeTimestamp    bool     true      Show timestamp prefix
IncludeThreadInfo   bool     true      Show [T01] thread ID
IncludeCategory     bool     true      Show logger category
CategoryFilter      string   ""        Show only matching categories

Preset                    Min Level     Thread Info   Use case
ForDebug()                Trace         Yes           General debugging
ForFocusedDebug(filter)   Debug         Yes           Single component
ForLightweightDebug()     Information   No            Minimal output
ForPerformanceDebug()     Trace         Yes           Microsecond timestamps

7. Null — PragmaticNullProvider

Benchmarks need to measure the overhead of the logging pipeline itself, without any I/O. The Null provider discards every message with a single Interlocked.Increment, providing the absolute baseline for performance measurement.

logging.AddProvider<PragmaticNullProvider>(_ =>
    new PragmaticNullProvider("Null", PragmaticNullConfiguration.ForBenchmarking()));
Preset                        Purpose
ForBenchmarking()             Pure pipeline overhead measurement
ForStructuredBenchmarking()   Measure structured property serialization cost
ForContextBenchmarking()      Measure context enrichment cost
ForBatchingBenchmarking()     Measure batching overhead
ForProductionBenchmarking()   Simulate production configuration
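
The entire write path being a single atomic increment is what makes this provider a useful baseline. A minimal illustrative sketch of that idea (class and member names are assumptions):

```csharp
using System.Threading;

sealed class NullSink
{
    private long _discardedCount;

    public long DiscardedCount => Interlocked.Read(ref _discardedCount);

    // The whole "write": count the entry atomically, drop everything else.
    // Any measured overhead therefore belongs to the pipeline, not the sink.
    public void Write(string _) => Interlocked.Increment(ref _discardedCount);
}
```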

8. Windows Event Log — PragmaticWindowsEventLogProvider

Windows services and enterprise applications often must write to the Windows Event Log for centralized monitoring via SCOM, Splunk, or Windows Event Forwarding. This provider maps log levels to EventLogEntryType, supports auto-registration of the Event Source, and truncates messages to the Event Log limit of 31,839 characters.

  • Automatic EventLogEntryType mapping (Error, Warning, Information).
  • Auto-registration of Event Source (requires admin privileges).
  • Custom Event ID mapping per log level.
  • Structured data serialization into the Event Log message body.
  • Graceful degradation on non-Windows platforms.
  • [SupportedOSPlatform("windows")] annotation for trimming safety.
logging.AddWindowsEventLog("MyApplication");

// Custom configuration
logging.AddWindowsEventLog(new PragmaticWindowsEventLogConfiguration
{
    SourceName = "OrderService",
    LogName = "Application",
    IncludeStructuredData = true,
    AutoRegisterSource = true,
    MaxMessageLength = 30000
});
Preset                 Log Name      Structured Data   Use case
Application(name)      Application   Yes               Standard app logging
Security(name)         Security      Yes               Security audit
WindowsService(name)   Application   Yes               Windows services with custom Event IDs
HighVolume(name)       Application   No                High-throughput (smaller messages)
Development(name)      Application   Yes (indented)    Debug-friendly output
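
Truncation to the Event Log limit can be sketched as a small helper; this is illustrative only, and the ellipsis marker is an assumption rather than the provider's actual behavior:

```csharp
static string TruncateForEventLog(string message, int maxLength = 31_839)
{
    // The Windows Event Log rejects strings longer than 31,839 characters,
    // so oversized messages are cut with a visible marker instead of
    // failing the write outright.
    const string marker = "…[truncated]";
    return message.Length <= maxLength
        ? message
        : message.Substring(0, maxLength - marker.Length) + marker;
}
```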

Provider Comparison

Provider            Class                              Output Target     Async   Batching   Best For
Console             PragmaticConsoleProvider           stdout            No      No         Development, debugging
File                PragmaticFileProvider              Disk files        Yes     Yes        Production log files
JSON                PragmaticJsonProvider              Console / File    No      Yes        Log aggregation (ELK, Fluentd)
Enhanced JSON       PragmaticEnhancedJsonProvider      File / Console    Yes     Yes        High-throughput NDJSON streaming
Memory              PragmaticMemoryProvider            In-memory         No      No         Testing, diagnostics
Debug               PragmaticDebugProvider             Debugger output   No      No         IDE debugging
Null                PragmaticNullProvider              Nowhere           No      No         Benchmarking
Windows Event Log   PragmaticWindowsEventLogProvider   Event Log         No      No         Windows services, enterprise

Custom Providers

When the built-in providers do not cover your output target, you can implement IPragmaticLoggerProvider or, more conveniently, extend PragmaticLoggerProviderBase. The base class handles configuration management, metrics, health checks, context enrichment, and privacy redaction — you only implement WriteLogCore.

public sealed class SlackAlertProvider : PragmaticLoggerProviderBase
{
    private readonly HttpClient _httpClient;
    private readonly string _webhookUrl;

    public SlackAlertProvider(string webhookUrl, IPragmaticProviderConfiguration configuration)
        : base("Slack", configuration)
    {
        _webhookUrl = webhookUrl;
        _httpClient = new HttpClient();
    }

    protected override void WriteLogCore(LogEntry logEntry)
    {
        // Only send alerts for Error and Critical
        if (logEntry.LogLevel < LogLevel.Error)
            return;

        var payload = new { text = $"[{logEntry.LogLevel}] {logEntry.Category}: {logEntry.Message}" };
        var json = JsonSerializer.Serialize(payload);
        var content = new StringContent(json, Encoding.UTF8, "application/json");

        // Fire-and-forget for alerts (errors tracked by base class metrics)
        _ = _httpClient.PostAsync(_webhookUrl, content);
    }

    protected override void DisposeCore()
    {
        _httpClient.Dispose();
    }
}

Register the custom provider through the builder:

logging.AddProvider<SlackAlertProvider>(_ =>
    new SlackAlertProvider("https://hooks.slack.com/services/...",
        new PragmaticProviderConfiguration { MinimumLevel = LogLevel.Error }));

The IPragmaticLoggerProvider interface also exposes GetMetrics() and CheckHealth() so your custom provider participates in the same observability infrastructure as the built-in providers.