Ever wondered how logging frameworks like Serilog work under the hood? Or maybe you need a custom logging solution for your project? In this guide, we'll build our own logging framework from scratch. You'll learn about log levels, structured logging, and how to send logs to different places (called sinks).
But before we dive into code, let's think about why logging matters and what makes a good logging system. Understanding these fundamentals will help you design better logging solutions for your applications.
Why Do We Need Logging?
Imagine you're trying to debug a problem in production. Your application is behaving strangely, but you can't attach a debugger. Or maybe you need to understand how users are interacting with your app. This is where logging becomes invaluable.
Logging serves several important purposes:
- Debugging - Track down bugs and understand what's happening in your code
- Monitoring - Keep an eye on application health and performance
- Auditing - Record important events for compliance or analysis
- Support - Help users troubleshoot issues with detailed information
The key insight is that logging should add minimal overhead and must never compromise your application's reliability. If logging fails, your app should keep running. This is why we design logging frameworks to be robust and non-blocking.
The Anatomy of a Logging Framework
A good logging framework has several key components working together:
// This is what our logger will look like to users
public interface ILogger
{
    void Log(LogLevel level, string message);
    bool IsEnabled(LogLevel level);
}
This interface represents the main API that developers will use. It keeps things simple - just log a message at a certain level, and check if that level is enabled.
// This decides where logs go (console, file, database, etc.)
public interface ILogSink
{
    void Write(string message);
}
The sink pattern is powerful because it separates "what to log" from "where to send it". Your application logic doesn't need to know whether logs go to a file, database, or cloud service - it just calls the logger.
This separation of concerns makes logging frameworks extremely flexible. You can easily add new destinations without changing existing code.
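To make this concrete, here's the simplest possible sink, one we'll reuse later in this guide:

// A sink that writes every message to the console
public class ConsoleSink : ILogSink
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }
}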
Log Levels: Controlling Information Flow
One of the most important concepts in logging is levels. Without them, you'd be overwhelmed by too much information or miss critical errors. Think of log levels like a volume control for your application's voice.
public enum LogLevel
{
    Debug = 0,    // Detailed info for developers
    Info = 1,     // General information
    Warning = 2,  // Something might be wrong
    Error = 3,    // Something is definitely wrong
    Critical = 4  // The application is broken
}
Each level serves a different purpose:
- Debug - Detailed information only useful during development. Things like "User clicked button X" or "Variable value is 42"
- Info - General information about application flow. "Application started", "User logged in", "Order processed"
- Warning - Potential problems that don't stop the application. "Disk space running low", "Network timeout occurred"
- Error - Actual problems that need attention. "Database connection failed", "File not found"
- Critical - System-breaking issues. "Out of memory", "Application cannot start"
The beauty of levels is filtering. In development, you might show everything from Debug up. In production, you might only show Info and above. This gives you the right amount of information for each environment.
Our First Logger: Understanding the Basics
Let's start with the simplest possible logger. This will help us understand the core concepts before adding complexity. We'll build a console logger that writes to the terminal.
public class ConsoleLogger : ILogger
{
    private readonly LogLevel _minLevel;

    public ConsoleLogger(LogLevel minLevel = LogLevel.Info)
    {
        _minLevel = minLevel;
    }
We store the minimum log level as a private field. This controls what messages get logged - anything below this level gets ignored. The default is Info, which means Debug messages won't show up.
public void Log(LogLevel level, string message)
{
    if (!IsEnabled(level)) return;

    var timestamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss");
    var levelText = level.ToString().ToUpper();
    var formatted = $"{timestamp} [{levelText}] {message}";
    Console.WriteLine(formatted);
}
The Log method first checks if the message should be logged. If not, it returns early - this is important for performance. Then it formats the message with a timestamp and level indicator. The format "2025-09-22 14:30:15 [INFO] User logged in" is readable and contains all essential information.
public bool IsEnabled(LogLevel level)
{
    return level >= _minLevel;
}
IsEnabled uses a simple comparison. Since our enum values increase (Debug=0, Info=1, etc.), higher numbers mean more severe levels. This makes the comparison work correctly.
This basic logger demonstrates the core logging pattern: check if logging is enabled, format the message, and send it somewhere. Everything else we build will follow this same structure.
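A side benefit of exposing IsEnabled: callers can skip building expensive messages when the level is filtered out. A quick sketch (BuildDiagnosticReport is a hypothetical costly call):

// Only pay for the diagnostic dump if Debug will actually be logged
if (logger.IsEnabled(LogLevel.Debug))
{
    logger.Log(LogLevel.Debug, $"Cache state: {BuildDiagnosticReport()}");
}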
Multiple Sinks: Logging to Different Places
Real applications need to send logs to multiple destinations. During development, you might want console output. In production, you need files for persistence and maybe a monitoring service. Multiple sinks solve this elegantly.
The key insight is that one log message can go to many places simultaneously. This is more flexible than having different loggers for different destinations.
public class MultiSinkLogger : ILogger
{
    private readonly ILogSink[] _sinks;
    private readonly LogLevel _minLevel;

    public MultiSinkLogger(LogLevel minLevel, params ILogSink[] sinks)
    {
        _sinks = sinks;
        _minLevel = minLevel;
    }
We store an array of sinks and the minimum level. The params keyword makes it easy to pass multiple sinks when creating the logger.
public void Log(LogLevel level, string message)
{
    if (!IsEnabled(level)) return;

    var formatted = FormatMessage(level, message);
    foreach (var sink in _sinks)
    {
        try
        {
            sink.Write(formatted);
        }
        catch
        {
            // Don't let logging failures crash the app
        }
    }
}
The Log method formats the message once, then sends it to every sink. The try-catch is crucial - if one sink fails (network issue, disk full), the others still work and your app doesn't crash. This defensive programming is essential in logging frameworks.
public bool IsEnabled(LogLevel level) => level >= _minLevel;

// protected virtual so subclasses can customize the format (we'll use this later)
protected virtual string FormatMessage(LogLevel level, string message)
{
    var timestamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss");
    return $"{timestamp} [{level.ToString().ToUpper()}] {message}";
}
We extract formatting into a separate method. This keeps the Log method focused on its main job: deciding what to log and where to send it. Making FormatMessage protected virtual also lets subclasses change the format, which we'll take advantage of later.
Multiple sinks give you incredible flexibility. You can log to console during development, file for production, and a monitoring service for alerts - all from the same logging calls in your code.
File Sink: Persistent Logging
Console logging is great for development, but production applications need persistent logs. Files provide a simple way to store logs that survive application restarts and can be analyzed later.
However, file logging introduces challenges: thread safety, file rotation, and performance. Our basic implementation will handle the essentials.
public class FileSink : ILogSink
{
    private readonly string _filePath;

    public FileSink(string filePath)
    {
        _filePath = filePath;

        // Ensure the directory exists (GetDirectoryName is empty for bare file names)
        var directory = Path.GetDirectoryName(filePath);
        if (!string.IsNullOrEmpty(directory))
            Directory.CreateDirectory(directory);
    }
The constructor ensures the directory exists before we try to write. This prevents exceptions if the log directory doesn't exist yet.
public void Write(string message)
{
    // Using File.AppendAllText for simplicity
    // In production, you'd want better performance
    File.AppendAllText(_filePath, message + Environment.NewLine);
}
File.AppendAllText opens the file, appends the message, and closes it. This is simple but not optimal for high-frequency logging. In production systems, you'd keep the file open and use buffered writing.
The key insight with file logging is that it provides persistence. Your logs survive application crashes, server restarts, and can be analyzed by other tools. This makes files one of the most important logging destinations.
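As a sketch of what "keep the file open and use buffered writing" might look like, here's a variant built on StreamWriter with a lock for thread safety. This is a minimal sketch, not a production-hardened implementation:

public class BufferedFileSink : ILogSink, IDisposable
{
    private readonly StreamWriter _writer;
    private readonly object _lock = new object();

    public BufferedFileSink(string filePath)
    {
        var directory = Path.GetDirectoryName(filePath);
        if (!string.IsNullOrEmpty(directory))
            Directory.CreateDirectory(directory);

        // Keep the file open; the writer buffers until Flush is called
        _writer = new StreamWriter(filePath, append: true) { AutoFlush = false };
    }

    public void Write(string message)
    {
        lock (_lock)
        {
            _writer.WriteLine(message);
        }
    }

    public void Dispose()
    {
        lock (_lock)
        {
            _writer.Flush();
            _writer.Dispose();
        }
    }
}

The trade-off: buffered messages can be lost if the process dies before a flush, which is why real frameworks flush periodically or immediately on error-level messages.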
Using Our Logger
Let's see how easy it is to use our logging framework.
class Program
{
    static void Main()
    {
        // Create sinks
        var consoleSink = new ConsoleSink();
        var fileSink = new FileSink("logs/app.log");

        // Create logger with both sinks
        var logger = new MultiSinkLogger(LogLevel.Info, consoleSink, fileSink);

        // Log some messages
        logger.Log(LogLevel.Info, "Application started");
        logger.Log(LogLevel.Debug, "This debug message won't show");
        logger.Log(LogLevel.Warning, "Something seems off...");
        logger.Log(LogLevel.Error, "Oops! Something went wrong");
    }
}
See how clean this is? We create our sinks, pass them to the logger, and start logging. The debug message gets filtered out because we set the minimum level to Info.
Structured Logging: Logs as Data
Traditional logging treats messages as strings: "User 123 logged in at 2025-09-22". But what if you want to search for all logins by user 123? Or analyze login patterns over time? This is where structured logging shines.
Structured logging attaches key-value data to messages. Instead of a string, you get searchable, analyzable data. This is incredibly powerful for monitoring and debugging.
public interface ILogger
{
    void Log(LogLevel level, string message);
    void Log(LogLevel level, string message, Dictionary<string, object> properties);
    bool IsEnabled(LogLevel level);
}
We add an overload that accepts properties. This keeps backward compatibility while enabling structured logging.
// Update our logger to handle properties
public void Log(LogLevel level, string message, Dictionary<string, object> properties)
{
    if (!IsEnabled(level)) return;

    var formatted = FormatMessage(level, message, properties);
    foreach (var sink in _sinks)
    {
        try
        {
            sink.Write(formatted);
        }
        catch
        {
            // Same rule as before: a failing sink must not crash the app
        }
    }
}
The structured logging method follows the same pattern: check if enabled, format, send to sinks.
protected virtual string FormatMessage(LogLevel level, string message,
    Dictionary<string, object> properties = null)
{
    var timestamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss");
    var baseMessage = $"{timestamp} [{level.ToString().ToUpper()}] {message}";

    if (properties != null && properties.Count > 0)
    {
        var props = string.Join(", ", properties.Select(p => $"{p.Key}={p.Value}"));
        return $"{baseMessage} | {props}";
    }
    return baseMessage;
}
We format properties as key=value pairs separated by commas. This creates messages like: "2025-09-22 14:30:15 [INFO] User logged in | UserId=123, LoginTime=2025-09-22T14:30:15"
Now you can search logs by UserId, analyze response times, or build dashboards. Structured logging transforms logs from text to data you can actually use.
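Here's what the call site looks like with properties:

// Produces output like the example above
logger.Log(LogLevel.Info, "User logged in", new Dictionary<string, object>
{
    ["UserId"] = 123,
    ["LoginTime"] = DateTime.Now
});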
Logger Factory: Managing Loggers
As your application grows, you'll need loggers in different parts of your code. Creating them manually everywhere leads to inconsistency and maintenance headaches. A factory solves this by centralizing logger creation.
The factory pattern is perfect here because it ensures all loggers use the same configuration while allowing different categories for organization.
public class LoggerFactory
{
    private readonly ILogSink[] _sinks;
    private readonly LogLevel _minLevel;

    public LoggerFactory(LogLevel minLevel, params ILogSink[] sinks)
    {
        _sinks = sinks;
        _minLevel = minLevel;
    }
The factory stores the global configuration: which sinks to use and the minimum log level. All loggers created by this factory will share these settings.
public ILogger CreateLogger(string category)
{
    return new CategorizedLogger(category, _sinks, _minLevel);
}
CreateLogger takes a category name and returns a configured logger. The category helps you identify which part of your application logged each message.
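Many factories also offer a generic convenience overload that derives the category from a type name. It's an optional addition, but it keeps category names in sync with your class names automatically:

public ILogger CreateLogger<T>()
{
    // e.g. CreateLogger<UserService>() produces the category "UserService"
    return CreateLogger(typeof(T).Name);
}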
public class CategorizedLogger : MultiSinkLogger
{
    private readonly string _category;

    public CategorizedLogger(string category, ILogSink[] sinks, LogLevel minLevel)
        : base(minLevel, sinks)
    {
        _category = category;
    }
CategorizedLogger inherits from MultiSinkLogger, so it gets all the sink functionality. It just adds category information.
protected override string FormatMessage(LogLevel level, string message,
    Dictionary<string, object> properties = null)
{
    var timestamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss");
    var baseMessage = $"{timestamp} [{level.ToString().ToUpper()}] [{_category}] {message}";

    if (properties != null && properties.Count > 0)
    {
        var props = string.Join(", ", properties.Select(p => $"{p.Key}={p.Value}"));
        return $"{baseMessage} | {props}";
    }
    return baseMessage;
}
The categorized logger overrides FormatMessage (which is why we made it protected virtual in MultiSinkLogger) to include the category. Now messages look like: "2025-09-22 14:30:15 [INFO] [UserService] User logged in"
Categories make it easy to filter logs by component. You can quickly see messages from UserService, OrderService, or any other part of your application. This organization becomes invaluable as your codebase grows.
Async Logging: Performance Matters
Logging can be slow. Writing to files, sending over networks, or calling databases all take time. If your logging blocks the main thread, it slows down your entire application. This is especially critical for web applications where response time matters.
Async logging solves this by moving I/O operations off the main thread. Your application continues processing while logs are written in the background.
public interface ILogSink
{
    void Write(string message);
    Task WriteAsync(string message);
}
We add an async version of Write. Sinks can implement both synchronous and asynchronous writing.
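Note that adding a member to the interface means our existing sinks must implement it too. For a sink whose writes are already fast, like the console, one simple approach is to do the work synchronously and return a completed task:

public class ConsoleSink : ILogSink
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }

    public Task WriteAsync(string message)
    {
        // Console writes are cheap; no real async work needed
        Write(message);
        return Task.CompletedTask;
    }
}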
public class AsyncFileSink : ILogSink
{
    private readonly string _filePath;

    public AsyncFileSink(string filePath)
    {
        _filePath = filePath;

        var directory = Path.GetDirectoryName(filePath);
        if (!string.IsNullOrEmpty(directory))
            Directory.CreateDirectory(directory);
    }
The async file sink is similar to the regular file sink but uses async file operations.
public void Write(string message)
{
    WriteAsync(message).GetAwaiter().GetResult();
}

public async Task WriteAsync(string message)
{
    await File.AppendAllTextAsync(_filePath, message + Environment.NewLine);
}
The synchronous Write method calls the async version and blocks until it completes. This provides backward compatibility, though be aware that blocking on async code like this can deadlock in environments with a synchronization context (UI frameworks, classic ASP.NET); it's safe in console apps and ASP.NET Core. The async version uses File.AppendAllTextAsync for non-blocking file operations.
// Update our logger to use async when available
public async Task LogAsync(LogLevel level, string message,
    Dictionary<string, object> properties = null)
{
    if (!IsEnabled(level)) return;

    var formatted = FormatMessage(level, message, properties);
    var tasks = _sinks.Select(async sink =>
    {
        try
        {
            await sink.WriteAsync(formatted);
        }
        catch
        {
            // Still don't crash on logging failures
        }
    });
    await Task.WhenAll(tasks);
}
LogAsync creates tasks for each sink and waits for all of them to complete. This ensures all sinks get the message, but the main thread isn't blocked waiting for I/O.
Async logging is crucial for performance. It prevents logging from becoming a bottleneck in your application, especially under high load.
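LogAsync still makes the caller await the sink I/O. Production frameworks usually go a step further and decouple logging entirely: messages go onto an in-memory queue and a background worker drains it. Here's a minimal sketch using System.Threading.Channels, assuming an unbounded queue and a simple drain-on-dispose shutdown; a hardened version would bound the queue and handle flush timeouts:

public class BackgroundSink : ILogSink, IDisposable
{
    private readonly Channel<string> _queue = Channel.CreateUnbounded<string>();
    private readonly ILogSink _inner;
    private readonly Task _worker;

    public BackgroundSink(ILogSink inner)
    {
        _inner = inner;
        // Drain the queue on a background task
        _worker = Task.Run(DrainAsync);
    }

    public void Write(string message) => _queue.Writer.TryWrite(message);

    public Task WriteAsync(string message)
    {
        _queue.Writer.TryWrite(message);
        return Task.CompletedTask;
    }

    private async Task DrainAsync()
    {
        await foreach (var message in _queue.Reader.ReadAllAsync())
        {
            try { await _inner.WriteAsync(message); }
            catch { /* never crash on logging failures */ }
        }
    }

    public void Dispose()
    {
        _queue.Writer.Complete(); // Stop accepting new messages
        _worker.Wait();           // Let the worker finish the backlog
    }
}

Wrapping any sink this way makes logging effectively fire-and-forget: callers return immediately and the I/O happens on the worker task.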
Exception Logging: Capturing Errors
Exceptions are your application's way of telling you something went wrong. But catching an exception isn't enough - you need to log it so you can debug later. Good exception logging captures not just the error message, but all the context needed to understand and fix the problem.
The key is to log exceptions at the right level (usually Error or Critical) and include structured data that makes debugging easier.
public void Log(LogLevel level, Exception exception, string message)
{
    var properties = new Dictionary<string, object>
    {
        ["ExceptionType"] = exception.GetType().Name,
        ["ExceptionMessage"] = exception.Message,
        ["StackTrace"] = exception.StackTrace
    };
    Log(level, message, properties);
}
This method extracts the essential information from an exception: its type, message, and stack trace. The stack trace is crucial because it shows exactly where the error occurred in your code.
Exception logging should be comprehensive but not overwhelming. Include what you need to diagnose issues without cluttering your logs with unnecessary details.
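One optional refinement, not shown in the method above: inner exceptions often hold the root cause, so you might add them to the same properties dictionary before the final Log call:

// Inside the exception overload, before calling Log(level, message, properties)
if (exception.InnerException != null)
{
    properties["InnerExceptionType"] = exception.InnerException.GetType().Name;
    properties["InnerExceptionMessage"] = exception.InnerException.Message;
}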
A JSON Sink
For structured logging, JSON is a natural output format: most log analysis tools can ingest it directly.
public class JsonSink : ILogSink, IDisposable
{
    private readonly TextWriter _writer;

    public JsonSink(string filePath)
    {
        var directory = Path.GetDirectoryName(filePath);
        if (!string.IsNullOrEmpty(directory))
            Directory.CreateDirectory(directory);

        // Keep the writer open for the sink's lifetime; dispose it on shutdown
        _writer = File.AppendText(filePath);
    }

    public void Write(string message)
    {
        WriteAsync(message).GetAwaiter().GetResult();
    }

    public async Task WriteAsync(string message)
    {
        // For now, just write the message as-is
        // In a real implementation, you'd parse and format as JSON
        await _writer.WriteLineAsync(message);
        await _writer.FlushAsync();
    }

    public void Dispose() => _writer.Dispose();
}
This sink could be enhanced to properly format each log entry as JSON with timestamp, level, message, and properties.
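As a sketch of that enhancement, here's one way to produce a JSON line per entry using System.Text.Json. JsonLogFormatter is a hypothetical helper; the logger would call it instead of FormatMessage when targeting this sink:

public static class JsonLogFormatter
{
    public static string Format(LogLevel level, string message,
        Dictionary<string, object> properties = null)
    {
        // Flatten timestamp, level, message, and properties into one JSON object
        var entry = new Dictionary<string, object>
        {
            ["Timestamp"] = DateTime.Now.ToString("o"),
            ["Level"] = level.ToString(),
            ["Message"] = message
        };
        if (properties != null)
        {
            foreach (var pair in properties)
                entry[pair.Key] = pair.Value;
        }
        return JsonSerializer.Serialize(entry);
    }
}

Each entry then becomes a single JSON line that log analysis tools can index directly.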
Putting It All Together: A Complete Logging System
Now that we've built all the components, let's see how they work together in a real application. This example shows a web application with multiple services, each with its own logger, sending logs to multiple destinations.
The beauty of this setup is that you configure logging once at startup, then use it consistently throughout your application.
// Setup logging when the app starts
var consoleSink = new ConsoleSink();
var fileSink = new AsyncFileSink("logs/app.log");
var jsonSink = new JsonSink("logs/app.json");

var factory = new LoggerFactory(LogLevel.Info, consoleSink, fileSink, jsonSink);
We create multiple sinks for different purposes: console for development, file for persistence, and JSON for analysis tools. The factory ensures all loggers use the same configuration.
// Create loggers for different parts of the app
var userLogger = factory.CreateLogger("UserService");
var orderLogger = factory.CreateLogger("OrderService");
Each service gets its own logger with a descriptive category. This makes it easy to filter logs by component when debugging.
// Use them throughout your app
userLogger.Log(LogLevel.Info, "User logged in", new Dictionary<string, object>
{
    ["UserId"] = 123,
    ["LoginTime"] = DateTime.Now
});

try
{
    // Some business logic
    ProcessOrder(456);
}
catch (Exception ex)
{
    orderLogger.Log(LogLevel.Error, ex, "Failed to process order");
}
This shows typical usage: structured logging for normal events and exception logging for errors. Each log message goes to all three sinks simultaneously.
This setup gives you comprehensive observability. You can see what's happening in real-time (console), have persistent records (files), and analyze trends (JSON). The categorized loggers help you understand which part of your application generated each message.
Best Practices: Logging Wisely
Good logging is about balance. Too little logging and you can't debug issues. Too much and you waste resources and create noise. Here are the principles that guide good logging:
- Use appropriate log levels - Debug for development, Info for normal operations, Warning for potential issues, Error for actual problems, Critical for system failures.
- Include useful context - User IDs, request IDs, and operation timings help you understand what was happening when something went wrong.
- Don't log sensitive data - Passwords, credit cards, and personal information should never appear in logs.
- Make logging async - Don't let logging slow down your application, especially under high load.
- Test your logging setup - Ensure logs are being written where you expect and that failures don't crash your app (the in-memory sink sketched below makes this easy).
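Here's that sketch: a sink that captures messages in memory so tests can assert on them. A minimal sketch; InMemorySink is our own name, and the assertion style depends on your test framework.

public class InMemorySink : ILogSink
{
    public List<string> Messages { get; } = new List<string>();

    public void Write(string message) => Messages.Add(message);

    public Task WriteAsync(string message)
    {
        Messages.Add(message);
        return Task.CompletedTask;
    }
}

// In a test:
var sink = new InMemorySink();
var logger = new MultiSinkLogger(LogLevel.Debug, sink);
logger.Log(LogLevel.Info, "hello");
// Assert that sink.Messages has exactly one entry ending in "hello"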
Summary: Building Robust Logging Systems
We've built a complete logging framework from scratch, but the real value is understanding the patterns and principles that make logging effective. Let's review what we've learned:
- Separation of Concerns - Interfaces and sinks separate "what to log" from "where to send it"
- Log Levels - Control information flow based on severity and environment
- Structured Logging - Transform logs from text to searchable, analyzable data
- Async Operations - Prevent logging from becoming a performance bottleneck
- Categories - Organize logs by component for easier debugging
- Defensive Programming - Logging failures should never crash your application
Start simple with console logging, then add file persistence, structured data, and async operations as your needs grow. The framework we've built is a solid foundation; before relying on it in production, you'd want to harden the areas we deliberately simplified, such as buffered writing, file rotation, and flushing on shutdown.
Remember, logging is infrastructure. Invest time in getting it right early, and it'll pay dividends when you need to debug issues or monitor your application's health. Your future self will thank you!