Logging

Golang Slog: A Practical Guide to Structured Logging in Go.

Written by Laura Clayton · Verified by Alex Ioannides · 24 min read · Updated Apr 3, 2026

TL;DR (QUICK ANSWER)

Go’s log/slog package brings structured logging into the standard library. It lets you log consistent key-value data, attach context like request IDs, and output logs in JSON or text formats. For most Go services, it’s a solid default: simple to adopt, flexible through handlers, and easy to integrate with observability tools. It won’t replace high-performance loggers in extreme cases, but it’s more than enough for typical production workloads.

Go 1.21 introduced log/slog, adding structured logging directly to the standard library. Before this, teams relied on log.Printf or third-party libraries to produce logs that could be parsed and analyzed at scale.

The problem with log.Printf isn’t that it’s broken. It’s that it produces unstructured output. Once your application grows, those logs become difficult to search, filter, or connect across requests and services.

slog solves this by making structured logging the default. Instead of embedding data inside strings, you log key-value pairs that tools can process reliably. You also get log levels, context support, and a flexible handler system for formatting and routing logs.

This guide focuses on how to use slog in real applications. You’ll see how it works, how to implement it correctly, and what to watch out for in production.

Key takeaways

  • slog is Go’s standard structured logging package, introduced in Go 1.21
  • It replaces string-based logging with key-value pairs that are easier to search and analyze
  • Handlers control how logs are formatted and where they are sent
  • Contextual logging (request IDs, user IDs, trace IDs) is essential for production use
  • slog is flexible and extensible, but not a full observability solution on its own
  • It’s fast enough for most services, but not the best choice for ultra-low latency systems 

What is slog in Go?

log/slog is Go’s structured logging package, introduced in Go 1.21. It gives a standard way to write logs with consistent fields, log levels, and pluggable output formats.

It’s not a full logging framework or observability platform. It just gives you the building blocks: how logs are created, structured, and emitted. 

What you do with those logs (storage, querying, alerting) is handled by your observability stack.

Core concepts

slog is intentionally small. Most of what you need comes down to four pieces:

  • Logger: the main entry point for writing logs
  • Handler: controls how logs are formatted and where they go
  • Record: the internal representation of a log entry
  • Attr: a key-value pair attached to a log

In practice, you mostly work with the logger and pass key-value pairs directly:

logger.Info("user login", "user_id", 1234, "method", "oauth")

Why slog exists

Before slog, Go developers had two options:

  • Use log.Printf and manually format strings
  • Use third-party libraries like Zap, Logrus, or Zerolog

The first approach doesn’t scale. Logs become hard to query and inconsistent across services.

The second works well, but introduces fragmentation. Each library has its own API, conventions, and tradeoffs.

slog standardizes structured logging in the Go ecosystem. It decreases dependency overhead and gives teams a consistent foundation across projects.

Design philosophy

slog is minimal by design. It doesn’t try to enforce a schema, manage log storage, or provide built-in analytics. Instead, it focuses on structured, leveled logging, composability through handlers, and compatibility with external tools.

Compared to other libraries:

  • Zap / Zerolog: More optimized for performance and high-throughput systems
  • Logrus: Easier to use but slower and less consistent
  • slog: Balanced, standard, and flexible

For most teams starting a new Go service today, slog is a reasonable default.

How structured logging works in slog

Structured logging means logs are written as data, not just text.

Instead of embedding everything in a string:

log.Printf("user %d failed login", userID)

You log fields explicitly:

logger.Error("login failed", "user_id", userID, "status", 401)

Key-value logging

Every slog log entry includes:

  • A message (msg)
  • A level (INFO, ERROR, etc.)
  • A timestamp
  • Optional key-value pairs (attributes)

Those attributes are what make logs useful in production.

You can:

  • Filter by user_id
  • Group by status
  • Search by service or request_id

Attributes vs. message strings

The message should describe what happened. Attributes should describe context.

Bad:

logger.Error("user 123 failed login with status 401")

Better:

logger.Error("login failed", "user_id", 123, "status", 401)

This keeps logs readable and queryable.

Log levels

slog supports standard log levels:

  • Debug
  • Info
  • Warn
  • Error

Use them intentionally:

  • Info: normal operations
  • Warn: unexpected but recoverable
  • Error: failures that need attention

Avoid using Error for everything. It makes alerting useless.

Output formats: text vs JSON

Handlers control how logs are formatted.

Text (good for development):

time=... level=INFO msg="login failed" user_id=123 status=401

JSON (better for production):

{
  "time": "...",
  "level": "ERROR",
  "msg": "login failed",
  "user_id": 123,
  "status": 401
}

JSON logs are easier to parse and integrate with tools like Loki, Datadog, or Elasticsearch.

Quick comparison

Type               | Example
Unstructured       | user 123 failed login
Structured (slog)  | msg="login failed" user_id=123 status=401

Structured logging is what makes logs usable at scale. Without it, logs are just strings. With it, they become searchable data.

Getting started with slog (quick but correct)

slog is easy to set up, but a few early decisions matter. The handler, log level, and structure you choose at the start will affect how usable your logs are later.

Create a basic logger

At minimum, you need a handler and a logger:

package main

import (
	"log/slog"
	"os"
)

func main() {
	handler := slog.NewTextHandler(os.Stdout, nil)
	logger := slog.New(handler)
	logger.Info("slog initialized")
}

This writes human-readable logs to stdout, which is good for local development.

Choose the right handler

slog separates logging from formatting through handlers.

Use:

  • TextHandler: readable logs for local development
  • JSONHandler: structured logs for production

Example switching to JSON:

handler := slog.NewJSONHandler(os.Stdout, nil)
logger := slog.New(handler)

If your logs are going to a log aggregator, JSON should be your default.

Set log levels early

By default, all levels are enabled. In production, you usually want to filter out debug logs.

opts := &slog.HandlerOptions{
	Level: slog.LevelInfo,
}
handler := slog.NewJSONHandler(os.Stdout, opts)
logger := slog.New(handler)
This keeps Info, Warn, and Error, and drops Debug.

Set this once at startup. Don’t scatter level logic across your codebase.

Use structured attributes from the start

Make sure not to fall back to string formatting.

Good:

logger.Info("user login", "user_id", 1234, "method", "oauth")

Bad:

logger.Info(fmt.Sprintf("user %d logged in via %s", 1234, "oauth"))

If you mix structured and unstructured logs, you lose most of the benefits.

Use With for shared fields

If multiple logs share the same context, attach it once:

authLogger := logger.With("component", "auth")

authLogger.Info("login attempt", "user_id", 123)
authLogger.Error("login failed", "user_id", 123)

This keeps logs consistent and avoids repetition.

Avoid the global logger (when possible)

You can set a default logger:

slog.SetDefault(logger)

But for larger applications, passing a *slog.Logger explicitly is better. It makes dependencies clear and avoids hidden state.

Quick checklist

Before moving on, you should have:

  • A JSON handler for production
  • Log levels set at startup
  • Structured attributes used everywhere
  • Shared context applied with With
  • A consistent logger passed through your app

This is enough to get started without creating problems you’ll have to fix later.

Logging with context in real applications

Context-aware logging is where slog starts to become genuinely useful in production. Instead of writing isolated log lines, you attach request-scoped data like request IDs, user IDs, and trace IDs so logs can be tied back to a specific action or failure. slog supports context-aware methods such as InfoContext and ErrorContext, and its package docs explicitly cover contexts as part of the API.

Why context matters

A plain error log tells you that something failed, while a contextual log tells you what failed, for whom, and in which request.

That difference matters when:

  • Multiple requests are being processed at the same time
  • One user action triggers work across several services
  • You need to connect logs to traces or downstream analytics

Structured logging is valuable because logs can be parsed, filtered, searched, and analyzed reliably. Adding request-level context makes that much more useful in real systems.

Use context.Context with the …Context logging methods

slog has methods like InfoContext, WarnContext, and ErrorContext. These let handlers access the current context.Context, which is useful when your logging setup pulls trace data, request metadata, or other request-scoped values from context. The official Go docs and blog both describe context support as part of the package design.

logger.InfoContext(ctx, "processing request", "path", r.URL.Path)

A key point here: the context is passed to the logging call. It’s not automatically turned into log fields by slog on its own. If you want request IDs or user IDs in output, you still need to add them as attributes directly or use a handler/middleware pattern that does it for you. 

That follows from how slog separates log records from handler behavior.

Attach request-scoped fields once

If a request ID or user ID should appear on many log lines, don’t repeat it in every call. Create a derived logger with With and reuse it.

reqLogger := logger.With(
	"request_id", reqID,
	"user_id", userID,
)

reqLogger.Info("request started")
reqLogger.Error("database query failed", "err", err)

This pattern matches the package design; attributes can be attached to a logger and reused across multiple records. The Go docs call out With as a way to avoid repeating common attributes.

HTTP middleware example

Middleware is a good place to attach request-level metadata once and pass a prepared logger down the stack.

type loggerKey struct{}

func LoggingMiddleware(next http.Handler, baseLogger *slog.Logger) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		reqID := r.Header.Get("X-Request-ID")
		if reqID == "" {
			reqID = "generated-id"
		}
		logger := baseLogger.With(
			"request_id", reqID,
			"method", r.Method,
			"path", r.URL.Path,
		)
		ctx := context.WithValue(r.Context(), loggerKey{}, logger)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

Then inside a handler:

func HandleLogin(w http.ResponseWriter, r *http.Request) {
	logger := r.Context().Value(loggerKey{}).(*slog.Logger)
	logger.InfoContext(r.Context(), "login attempt")
}

This keeps request metadata consistent without forcing every handler to rebuild the logger.

Service layer example

The same idea applies outside HTTP handlers. Pass a logger into your service or derive one with shared fields at the boundary.

type AuthService struct {
	logger *slog.Logger
}

func (s *AuthService) Login(ctx context.Context, userID string) error {
	logger := s.logger.With("user_id", userID)
	logger.InfoContext(ctx, "starting login flow")
	// business logic here
	logger.InfoContext(ctx, "login succeeded")
	return nil
}

Doing this is usually better than relying on a package-global logger. It keeps dependencies visible and makes testing easier.

Add trace IDs when you have tracing

slog is not an observability platform by itself, but it fits well alongside tracing and metrics. The official package docs describe contexts and handlers as extension points, and the Go 1.21 release notes position slog as structured logging that integrates with popular log analysis tools and services.

In practice, that means if your application already has tracing, you can extract the trace ID from context and add it as a field:

logger.InfoContext(ctx, "calling payment service", "trace_id", traceID)

That makes it much easier to move from a failing request in your logs to the corresponding trace in your tracing system.

Logger injection patterns that work well

For most production Go services, these patterns are the safest:

  • Create a base logger at startup
  • Derive loggers with With for shared fields
  • Pass *slog.Logger explicitly into services and components
  • Use …Context logging methods when request context matters

That keeps your logs consistent and your code easy to reason about.

Contextual logging is one of the biggest reasons to use slog properly. Context turns logs from standalone events into something you can actually trace and debug.

Handlers explained: how slog actually works

Handlers are where slog does its real work. The logger creates log records, but the handler decides what happens to them.

If you understand handlers, you understand how to control formatting, filtering, and routing.

What a handler does

Every log call creates a record with:

  • Timestamp
  • Level
  • Message
  • Attributes (key-value pairs)

That record is passed to a handler.

The handler decides:

  • Whether to log it
  • How to format it (text or JSON)
  • Where to send it (stdout, file, external system)

Built-in handlers

slog includes two main handlers:

  • TextHandler: readable logs for development
  • JSONHandler: structured logs for production

Example:

handler := slog.NewJSONHandler(os.Stdout, nil)
logger := slog.New(handler)

logger.Info("user login", "user_id", 123)

That’s enough for most applications.

Handler options (filtering and formatting)

You can control behavior with HandlerOptions.

opts := &slog.HandlerOptions{
	Level:     slog.LevelWarn,
	AddSource: true,
}
handler := slog.NewTextHandler(os.Stdout, opts)

This filters out logs below Warn and optionally includes source file info.

Redacting or modifying fields

Handlers can modify attributes before they’re written.

This is useful for removing sensitive data and standardizing field names.

opts := &slog.HandlerOptions{
	ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
		if a.Key == "password" {
			return slog.String("password", "[REDACTED]")
		}
		return a
	},
}

This runs on every log entry.

Custom handlers (when you actually need them)

You only need a custom handler if:

  • You want to send logs somewhere specific
  • You need advanced filtering or routing
  • You’re integrating with a custom pipeline

Minimal example:

type MyHandler struct{}

func (h *MyHandler) Enabled(ctx context.Context, level slog.Level) bool {
	return true
}

func (h *MyHandler) Handle(ctx context.Context, r slog.Record) error {
	// custom logic here
	return nil
}

func (h *MyHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return h
}

func (h *MyHandler) WithGroup(name string) slog.Handler {
	return h
}

In practice, most teams don’t need this immediately.

Sync vs. async logging (important tradeoff)

By default, handlers are synchronous.

That means every log call writes immediately, and logging can add latency in high-throughput systems.

If logging becomes a bottleneck, you can:

  • Buffer logs
  • Wrap the handler in an async layer

But this introduces tradeoffs:

  • Possible log loss under load
  • More complexity

For most services, synchronous logging is fine.

Key takeaway

  • Logger = creates records
  • Handler = decides what happens to them

If your logs are wrong, noisy, or missing fields, the fix is almost always in your handler setup.

Performance characteristics of slog

Logging can become a bottleneck if you’re not careful. slog is designed to be efficient, but like any logging system, its impact depends on how you use it.

You’re not trying to make logging “free.” The goal is to make it predictable and fast enough for real workloads.

Is slog fast enough?

For most applications: yes.

slog is not the fastest logger in the Go ecosystem, but it’s efficient enough for typical services:

  • APIs
  • background workers
  • SaaS backends
  • internal tools

If logging is not in your hot path, you’re unlikely to notice any difference.

Where overhead comes from

Logging has three main costs:

  • Allocations: building log records and attributes
  • Serialization: formatting logs (especially JSON)
  • I/O: writing logs to stdout or external systems

slog keeps these relatively low, but they’re not zero.

When slog can become a bottleneck

You’ll start to feel it if:

  • You log inside tight loops
  • You emit logs at very high frequency
  • You attach large or complex objects to logs
  • You use slow handlers or heavy formatting

Example to avoid:

for _, item := range items {
	logger.Info("processing item", "item", item) // too much logging
}
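A lighter pattern is to log only the failures inside the loop and emit one summary line for the whole batch. A sketch, where `process` is a hypothetical worker function:

```go
package main

import (
	"log/slog"
	"os"
)

// process is a hypothetical per-item worker for this example.
func process(item string) error { return nil }

func main() {
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
	items := []string{"a", "b", "c"} // stand-in batch

	failed := 0
	for _, item := range items {
		if err := process(item); err != nil {
			failed++
			// Log failures individually; skip the happy path.
			logger.Error("item failed", "item", item, "err", err)
		}
	}
	// One summary line instead of one line per iteration.
	logger.Info("batch processed", "total", len(items), "failed", failed)
}
```

This keeps log volume proportional to problems, not to throughput.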

Use LogValuer to defer expensive work

If computing a value is expensive, don’t do it unless the log will actually be written.

slog supports this via LogValuer.

type expensiveValue struct{}

func (e expensiveValue) LogValue() slog.Value {
	// computeSomething only runs if this record is actually emitted
	return slog.StringValue(computeSomething())
}

logger.Debug("debug info", "data", expensiveValue{})

JSON vs. text performance

  • TextHandler: faster, easier to read
  • JSONHandler: slightly slower, but required for most production setups

The difference is usually negligible unless you’re logging at very high volume.

Synchronous vs. asynchronous logging

By default, logging is synchronous: each log call blocks until written. This is simple and safe, but adds latency.

If needed, you can:

  • Buffer logs
  • Process them asynchronously

Tradeoffs:

  • Faster request handling
  • Possible log loss under pressure

Only do this if you actually hit performance limits.

Practical performance dos and don’ts

Do                               | Don’t
Log at appropriate levels        | Log inside hot loops
Keep log payloads small          | Log large structs or full responses
Reuse loggers with With          | Compute expensive values unconditionally
Disable debug logs in production | Treat logs like tracing or metrics

Reality check

If you’re debating between slog and ultra-optimized libraries like Zap or Zerolog:

  • choose Zap/Zerolog for extreme throughput or latency-sensitive systems
  • choose slog for clarity, consistency, and standardization

For most teams, slog hits the right balance.

slog vs. other Go logging libraries

slog doesn’t exist in a vacuum. Go teams have been using libraries like Zap, Logrus, and Zerolog for years. Each solves structured logging in a slightly different way.

The question isn’t “which is best.” It’s “which fits your use case.”

Tool    | Best for                         | Tradeoff
slog    | Standard, flexible default       | Not the fastest
Zap     | High-performance production apps | More complex API
Zerolog | Ultra-low overhead logging       | Less beginner-friendly
Logrus  | Simplicity and legacy projects   | Slower, less consistent

slog vs. Zap

  • Zap strengths: very fast (low allocations), production-proven at scale, strong ecosystem
  • Zap tradeoffs: more complex API (especially non-sugared), steeper learning curve
  • Zap best use cases: logging on a hot path, every microsecond counts, or you already use Zap across your system
  • slog strengths: standard library, simpler API, good balance of performance and usability
  • slog tradeoffs: not as fast as Zap, less mature ecosystem
  • slog best use cases: most APIs and backend services, teams that want a simpler, standard approach

slog vs. Zerolog

  • Zerolog strengths: near zero-allocation logging, very fast JSON output, compact logs
  • Zerolog tradeoffs: fluent API can be harder to read, less conventional style
  • Zerolog best use cases: extremely high log throughput, performance-critical pipelines
  • slog strengths: standard library, readable API, balanced performance
  • slog tradeoffs: not as fast as Zerolog, less optimized for extreme throughput
  • slog best use cases: readability and maintainability matter more than raw speed, most typical backend services

slog vs. Logrus

  • Logrus strengths: easy to use, widely adopted in older projects
  • Logrus tradeoffs: slower due to reflection, inconsistent usage patterns, largely considered legacy
  • Logrus best use cases: maintaining or supporting legacy projects
  • slog strengths: standard library, consistent structured logging, modern API
  • slog tradeoffs: smaller ecosystem, less optimized for extreme performance
  • slog best use cases: new projects, teams that want a clean, standardized approach

If you’re starting fresh, there’s little reason to choose Logrus over slog.

The real advantage of slog

The biggest benefit is standardization.

With slog:

  • No external dependency required
  • Consistent API across projects
  • Easier onboarding for new developers
  • Long-term support from the Go ecosystem

When to choose slog

Use slog if:

  • You’re starting a new Go service
  • You want structured logging without extra dependencies
  • You value clarity and consistency
  • Your performance requirements are normal (not extreme)

When not to

Stick with Zap or Zerolog if:

  • Logging is a measurable performance bottleneck
  • You already have deep tooling built around them
  • You need maximum throughput

For most teams, the decision is simple: If you’re not solving a performance problem, slog is a solid default.

Migrating to slog from existing loggers

You don’t need to rewrite your entire codebase to adopt slog. Most teams can migrate gradually, replacing logging where it makes sense without breaking existing behavior.

Start with an audit

Before changing anything, look at how logging is currently used:

  • Global loggers or injected dependencies
  • Structured vs string-based logs
  • Log levels and naming conventions
  • Output format (JSON vs text)
  • Integrations with log pipelines

You’ll be able to avoid surprises later.

Introduce slog alongside your current logger

You don’t need a big switch.

Start by using slog in:

  • New services
  • New packages
  • Isolated components

Example:

slog.Info("worker started", "component", "jobs")

This can run alongside your existing logger without conflict.

Match your current output format

If your system expects JSON logs, configure slog the same way:

handler := slog.NewJSONHandler(os.Stdout, nil)
logger := slog.New(handler)
slog.SetDefault(logger)

Logs will be consistent while you transition.

Replace usage incrementally

Move from old patterns to slog step by step.

From log.Printf:

// before
log.Printf("user %d logged in", userID)
// after
slog.Info("user login", "user_id", userID)

From Logrus:

// before
logrus.WithField("user", userID).Info("login")
// after
slog.Info("login", "user", userID)

From Zap:

// before
logger.Info("login", zap.String("user", userID))
// after
slog.Info("login", "user", userID)

Wrap slog if needed

If your codebase uses a custom logging wrapper, update that wrapper to use slog internally. This avoids touching every file at once.

Pass loggers explicitly

Instead of relying on globals, pass *slog.Logger into services:

type Service struct {
	logger *slog.Logger
}

Migration will be cleaner and more predictable.

Watch for common migration issues

Migration is a good chance to fix these:

  • Mixing structured and unstructured logs
  • Inconsistent key names (user_id vs userId)
  • Incorrect log level usage
  • Missing context (request IDs, etc.)

You don’t need a big-bang rewrite

Most teams succeed with this approach:

  • Introduce slog
  • Standardize new code
  • Gradually replace old logging

No downtime, no risky refactors.

Key takeaway

Treat migration as an opportunity to clean up logging, not just swap APIs. If you improve structure and consistency during the transition, you’ll get far more value than just switching to slog.

Using slog for observability and production logging

In production, logs are not just for debugging. They are part of your observability setup, alongside metrics and traces. If your logs are inconsistent or missing context, they become hard to use when something goes wrong.

slog helps by making logs structured by default. But to get real value, you need to log in a way that supports search, filtering, and correlation.

Structure logs for querying

Logs should be easy to filter and group.

Instead of relying on message text, use consistent fields:

  • request_id
  • user_id
  • service
  • endpoint
  • status
  • error

Example:

logger.Error("login failed",
	"request_id", reqID,
	"user_id", userID,
	"status", 401,
)

This lets you:

  • Find all failures for a user
  • Filter by endpoint
  • Group errors across services

Use consistent field naming

Pick a naming convention and stick to it.

  • Use snake_case or camelCase, not both
  • Use the same key across all services
  • Avoid synonyms (user_id vs uid)

Consistency is what makes logs usable at scale.

Add context that actually helps

Focus on fields that help you trace behavior:

  • Request IDs for tracking a single request
  • User IDs for debugging user-specific issues
  • Service or component names for source identification

Avoid adding everything “just in case.” More data does not always mean better logs.

Combine logs with metrics and traces

Logs alone are not enough for full observability.

Use them together:

  • Logs: What happened
  • Metrics: How often it happens
  • Traces: Where time is spent

Example flow:

  • Alert triggers (metrics)
  • You check logs for errors
  • You follow the trace to find the root cause

slog fits into this by producing structured logs that tools can correlate with traces and metrics.

Make logs useful for monitoring tools

Most observability tools expect structured input.

To make logs work well:

  • Use JSON format in production
  • Include consistent fields across services
  • Keep values simple and searchable

Logs written with slog can be ingested directly by tools like Loki, Datadog, or Elasticsearch.

Control log volume

Too many logs create noise and increase costs.

To manage this:

  • Use appropriate log levels
  • Avoid logging every request in high-traffic endpoints
  • Reduce duplication

Bad:

logger.Info("request received")
logger.Info("processing request")
logger.Info("request finished")

Better:

logger.Info("request handled", "status", 200)

Keep logs small and focused

Large payloads slow down systems and make logs harder to read.

Avoid:

  • Logging full request/response bodies
  • Logging large structs
  • Dumping debug data in production

Instead:

  • Log identifiers and key fields
  • Add details only when needed

If your logs are structured and consistent, you can search them, filter them, and actually debug with them.

Best practices for slog in production

Good logging is mostly about discipline. slog gives you the structure, but how you use it determines whether your logs are helpful or noisy.

Use log levels intentionally

  • Use Info for normal operations
  • Use Warn for unexpected but recoverable issues
  • Use Error for failures that need attention
  • Avoid Debug in production unless actively investigating

If everything is an error, nothing is.

Keep field names consistent

  • Use the same keys across your codebase
  • Stick to one format (e.g. snake_case)
  • Avoid synonyms for the same concept

Consistency is what makes logs searchable.

Add context once, not everywhere

  • Use With to attach shared fields
  • Avoid repeating the same attributes in every log call
  • Keep request-level data consistent across logs

This cuts noise and improves readability.

Avoid noisy logs

  • Do not log every step of a request
  • Avoid duplicate log lines
  • Focus on meaningful events

Too much logging makes real issues harder to spot.

Keep logs small

  • Log identifiers, not full objects
  • Avoid dumping large payloads
  • Keep values simple and readable

Large logs slow down systems and increase storage costs.

Redact sensitive data

  • Never log passwords, tokens, or secrets
  • Remove or mask sensitive fields
  • Use handler-level filtering if needed

Logs often live longer than you expect.

Use JSON in production

  • Prefer JSONHandler for structured logs
  • Make logs easy for tools to parse
  • Avoid relying on text parsing

This makes integration with observability tools much easier.

Control log volume

  • Adjust log levels by environment
  • Avoid logging in hot paths
  • Consider sampling for high-frequency events

Logging should not become your bottleneck.

Key takeaway

Treat logging as part of your system design, not an afterthought.

If your logs are consistent, structured, and intentional, they will save you time during incidents.

Common mistakes and pitfalls

Even with slog, it’s easy to end up with logs that are noisy, inconsistent, or hard to use. Most issues come from how logging is used, not the tool itself.

Logging in hot loops

  • Logging inside tight loops can quickly become a performance issue
  • It floods your logs and adds unnecessary overhead
  • It makes it harder to find meaningful events

If something runs frequently, log summaries or errors instead of every iteration.

Logging large objects

  • Dumping full structs or payloads creates huge log entries
  • It slows down serialization and increases storage costs
  • It makes logs harder to read and search

Log key fields instead of entire objects.

Mixing structured and unstructured logs

  • Combining key-value logs with formatted strings breaks consistency
  • It makes logs harder to query in observability tools
  • It defeats the purpose of structured logging

Pick one approach and stick to it. With slog, that means key-value pairs.

Inconsistent field naming

  • Using different keys for the same concept (user_id, userId, uid)
  • Makes filtering unreliable
  • Creates confusion across services

Define a naming standard and follow it everywhere.

Treating logs like print statements

  • Logging everything “just in case” creates noise
  • Important events get buried
  • Debugging becomes harder, not easier

Logs should answer questions, not create more of them.

Ignoring log levels

  • Using the wrong level makes alerting unreliable
  • Overusing Error reduces its meaning
  • Not using Debug properly limits troubleshooting

Be deliberate with levels.

Forgetting downstream consumers

  • Logs are not just for developers reading stdout
  • They are consumed by tools, dashboards, and alerts
  • Poor structure makes them harder to use downstream

Always think about how logs will be queried and used.

Key takeaway

Most logging problems are self-inflicted. If your logs are inconsistent, noisy, or hard to search, the fix is usually in how you write them, not in switching tools.

When not to use slog

slog is a solid default for most Go services, but it’s not always the best fit. There are cases where other approaches make more sense.

Ultra-low latency systems

  • Every microsecond matters
  • Logging overhead needs to be as close to zero as possible
  • Even small allocations or serialization costs add up

In these cases, tools like Zap or Zerolog are a better fit.

Extremely high log throughput

  • Services emitting thousands of logs per second
  • Heavy reliance on structured logging in hot paths
  • Tight performance budgets

slog can handle high volume, but it’s not optimized for extreme throughput.

Teams that need opinionated tooling

  • Some teams want built-in conventions and patterns
  • Predefined logging formats and pipelines
  • Tight integration with specific platforms

slog is intentionally minimal. You have to define your own structure and standards.

No observability pipeline

  • Logs are only viewed locally or in raw text
  • No log aggregation or querying tools in place
  • No need for structured search

In this case, structured logging adds complexity without much benefit.

Existing mature logging setup

  • Deep integration with Zap, Zerolog, or another system
  • Established tooling, dashboards, and workflows
  • No clear benefit from switching

Switching just for the sake of it is rarely worth it.

Key takeaway

slog is a strong foundation, not a universal solution. If your needs are typical, it’s a great default. If your requirements are extreme or highly specialized, other tools may fit better.

Final thoughts: how to adopt slog successfully

Treat slog as a foundation, not a complete solution. The real value comes from how consistently your team uses it. Define clear logging standards early, keep your structure consistent, and refine your approach over time as your system and needs evolve.

Frequently asked questions

What is slog?
slog is Go’s structured logging package, introduced in Go 1.21. It allows developers to write logs using key-value pairs, making them easier to search, filter, and analyze.

Does slog replace log.Printf?
Not exactly. The log package still exists, but slog is the recommended option for structured, production-ready logging.

Should I use slog instead of Zap or Logrus?
Use slog if you want a standard library solution with a simple API. Use Zap or Zerolog if you need maximum performance or already rely on them.

Is slog fast enough for production?
Yes, for most applications. It may not match the performance of specialized libraries, but it is efficient enough for typical services.

How does slog handle structured logging?
Through key-value pairs (attributes) attached to log messages. These fields are included in the output and can be parsed by logging tools.

Can slog output JSON?
Yes. Use slog.NewJSONHandler to output logs in JSON format, which is recommended for production environments.

How do I add request IDs or user IDs to logs?
Attach them as attributes using With or include them in each log call. You can also pass context and extract values where needed.

Can I write a custom handler?
Yes. Implement the slog.Handler interface to customize how logs are processed, filtered, or sent to external systems.

Can I migrate to slog incrementally?
Yes. Most teams adopt slog incrementally, replacing logging in new or updated parts of the codebase.

What are best practices for production logging with slog?
Use consistent fields like request_id, user_id, service, and status. Keep logs structured, concise, and aligned with how you query them in your observability tools.
