Your Node.js application logs are critical for debugging production issues, monitoring performance, and maintaining system health. But if your logging library becomes a bottleneck, it defeats the purpose.
Pino solves this with a fundamentally different approach to logging:
- 5x faster than Winston with minimal CPU overhead
- Structured JSON output that's immediately machine-readable
- Asynchronous by design to prevent event loop blocking
- Production-ready with built-in security and performance optimizations
This guide covers everything from basic setup to advanced production patterns, including integration with observability platforms for comprehensive monitoring.
What Makes Pino Different
Pino is built around a simple principle: logging should never slow down your application. Unlike traditional loggers that perform expensive formatting operations in the main thread, Pino takes a minimalist approach.
Core Design Principles:
- JSON-first: Every log is structured data, not formatted text
- Async transports: Heavy operations happen in worker threads
- Minimal serialization: Only essential data transformation
- Zero-cost abstractions: Logs below threshold level have no performance impact
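To see these principles in action, here is a minimal sketch (buildExpensiveDump is a hypothetical placeholder for costly work): calls below the threshold return almost immediately, and logger.isLevelEnabled, a standard Pino method, lets you skip building expensive payloads entirely.

const pino = require('pino')
const logger = pino() // default level: 'info'

// Below the threshold: this call is a near no-op
logger.debug('Not emitted at the default info level')

// Guard genuinely expensive payload construction explicitly
if (logger.isLevelEnabled('debug')) {
  logger.debug({ dump: buildExpensiveDump() }, 'State dump')
}

function buildExpensiveDump() {
  return { ts: Date.now() } // placeholder for costly work
}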
Performance Comparison
Here's how Pino compares to popular alternatives in real-world scenarios:
Library | Logs/Second | CPU Usage | Memory Overhead |
---|---|---|---|
Pino | 50,000+ | 2-4% | ~45MB |
Winston | ~10,000 | 10-15% | ~180MB |
Bunyan | ~15,000 | 8-12% | ~150MB |
These numbers translate to concrete benefits:
- Faster response times under high load
- Lower infrastructure costs through reduced resource usage
- Better application stability with minimal logging overhead
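If you want to sanity-check throughput numbers like these on your own hardware, a rough harness along these lines will do (illustrative only; the iteration count and /dev/null destination are arbitrary choices, and with sync: false you are measuring enqueue rate rather than disk throughput):

const pino = require('pino')

// Write to /dev/null so disk speed doesn't dominate the measurement
const logger = pino(pino.destination({ dest: '/dev/null', sync: false }))

const N = 100000
const start = process.hrtime.bigint()
for (let i = 0; i < N; i++) {
  logger.info({ i }, 'benchmark message')
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6
console.log(`~${Math.round(N / (elapsedMs / 1000))} logs/second`)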
Installation and Basic Setup
Install Pino with npm:
npm install pino
For development with readable output:
npm install --save-dev pino-pretty
First Logger Implementation
To get started, import and initialize Pino. The following code creates a basic logger instance that outputs to the console.
const pino = require('pino')
const logger = pino()
logger.info('Application started')
logger.error('Database connection failed')
Default output is structured JSON:
{"level":30,"time":1690747200000,"pid":12345,"hostname":"server-01","msg":"Application started"}
{"level":50,"time":1690747201000,"pid":12345,"hostname":"server-01","msg":"Database connection failed"}
Each log includes:
- level: Numeric severity (30=info, 50=error)
- time: Timestamp in milliseconds since the epoch
- pid: Process ID for multi-process debugging
- hostname: Server identification
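If you post-process these raw NDJSON lines yourself, Pino's exported level mapping converts the numeric level back to its label (pino.levels is part of the public API):

const pino = require('pino')

const line = '{"level":30,"time":1690747200000,"msg":"Application started"}'
const entry = JSON.parse(line)

// pino.levels.labels maps numbers to names; .values maps names to numbers
console.log(pino.levels.labels[entry.level]) // "info"
console.log(pino.levels.values.error) // 50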
Environment-Adaptive Configuration
In a real-world application, your logging needs will differ between development, testing, and production. The following configuration creates a logger that adapts based on the NODE_ENV environment variable, a standard Node.js convention for specifying the application's current environment.
This setup uses a transport, a component that processes and outputs logs. For development, it uses pino-pretty to format logs for readability; in production, it defaults to raw JSON for efficient processing.
const pino = require('pino')
function createLogger() {
const isDevelopment = process.env.NODE_ENV === 'development'
const isTest = process.env.NODE_ENV === 'test'
return pino({
level: process.env.LOG_LEVEL || (isDevelopment ? 'debug' : 'info'),
// Pretty output for development
transport: isDevelopment ? {
target: 'pino-pretty',
options: {
colorize: true,
ignore: 'pid,hostname',
translateTime: 'yyyy-mm-dd HH:MM:ss'
}
} : undefined,
// Disable in tests unless explicitly needed
enabled: !isTest,
// Add application context
base: {
env: process.env.NODE_ENV,
version: process.env.APP_VERSION
}
})
}
module.exports = createLogger()
Log Levels and Strategic Usage
Pino uses numeric levels that correspond to severity. Understanding when to use each level is crucial for effective monitoring and debugging.
Standard Levels
Level | Numeric | Purpose | Example Use Cases |
---|---|---|---|
fatal | 60 | Application crash imminent | Database connection lost, critical service down |
error | 50 | Errors requiring investigation | API failures, validation errors, exceptions |
warn | 40 | Potential issues | Deprecated API usage, resource limits approaching |
info | 30 | Significant application events | User authentication, service starts, key operations |
debug | 20 | Detailed debugging information | Function entry/exit, variable states |
trace | 10 | Very detailed execution flow | Loop iterations, deep debugging |
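The table covers Pino's built-in levels. If your conventions call for more granularity, Pino also accepts a customLevels option (a documented feature; the http name and the value 35 below are example choices, not defaults):

const pino = require('pino')

const logger = pino({
  customLevels: { http: 35 } // sits between info (30) and warn (40)
})

logger.http({ route: '/health' }, 'Request handled')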
Level Configuration Strategy
Set a logging threshold to control verbosity. In this example, level is set to 'info', so only info, warn, error, and fatal logs are processed; trace and debug messages are ignored, saving resources.
const logger = pino({ level: 'info' })
// These won't appear in logs (below threshold)
logger.trace('Entering user validation function')
logger.debug('Checking user permissions')
// These will appear (at or above threshold)
logger.info({ userId: 123 }, 'User login successful')
logger.error({ err: error }, 'Payment processing failed')
Runtime Level Changes
For production debugging, you can change the log level dynamically without restarting your application. This example creates an Express.js endpoint that lets operators adjust the log level on the fly; in production, protect such an endpoint with authentication.
const express = require('express')
const logger = require('./logger')
const app = express()
app.use(express.json()) // required so req.body is parsed
app.post('/admin/log-level', (req, res) => {
const { level } = req.body
const validLevels = ['trace', 'debug', 'info', 'warn', 'error', 'fatal']
if (!validLevels.includes(level)) {
return res.status(400).json({ error: 'Invalid log level' })
}
logger.level = level
logger.info({ newLevel: level }, 'Log level changed')
res.json({ message: `Log level changed to ${level}` })
})
Structured Logging for Observability
Structured logging makes your logs immediately useful for monitoring, alerting, and debugging. Instead of parsing text strings, you get queryable data.
Basic Structured Patterns
Structured logging involves passing an object as the first argument to your log function. This approach is more powerful than simple string interpolation because it creates machine-readable data that can be easily queried and analyzed.
const logger = pino()
// Traditional string interpolation (avoid this)
logger.info(`User ${userId} completed order ${orderId} for ${amount}`)
// Structured approach (better)
logger.info({
userId: 'usr_123',
orderId: 'ord_456',
amount: 99.99,
currency: 'USD',
paymentMethod: 'credit_card'
}, 'Order completed successfully')
Error Logging with Context
When logging errors, always include relevant context. This example wraps a payment processing function in a try...catch block. If an error occurs, the logger captures not only the error itself (err: error) but also contextual data like the orderId and userId, which is invaluable for debugging.
async function processPayment(orderId, userId) {
try {
const result = await paymentService.charge(orderId)
logger.info({
orderId,
userId,
paymentId: result.id,
amount: result.amount,
duration: result.processingTime
}, 'Payment processed successfully')
return result
} catch (error) {
logger.error({
err: error,
orderId,
userId,
operation: 'payment_processing',
paymentProvider: 'stripe'
}, 'Payment processing failed')
throw error
}
}
Performance Monitoring Through Logs
You can use structured logs to monitor application performance. This function measures the duration of a database query using process.hrtime.bigint() for high-precision timing. The resulting duration is logged along with other metadata, allowing you to track and categorize query performance over time.
async function performDatabaseQuery(query) {
const startTime = process.hrtime.bigint()
try {
const result = await db.query(query)
const duration = Number(process.hrtime.bigint() - startTime) / 1000000 // Convert to ms
logger.info({
operation: 'database_query',
table: query.table,
duration: Math.round(duration * 100) / 100,
recordCount: result.length,
performanceCategory: categorizePerformance(duration)
}, 'Database query completed')
return result
} catch (error) {
const duration = Number(process.hrtime.bigint() - startTime) / 1000000
logger.error({
err: error,
operation: 'database_query',
table: query.table,
duration: Math.round(duration * 100) / 100
}, 'Database query failed')
throw error
}
}
function categorizePerformance(ms) {
if (ms < 100) return 'fast'
if (ms < 500) return 'normal'
if (ms < 1000) return 'slow'
return 'critical'
}
Child Loggers for Context Management
Child loggers inherit parent configuration while adding contextual information. This creates consistent context across related operations without repetitive logging.
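At its simplest, a child logger is a single method call; every field passed to child() is merged into each log line the child emits:

const pino = require('pino')

const parent = pino({ level: 'info' })
const child = parent.child({ service: 'billing' })

// The service field appears automatically on every child log line
child.info({ invoiceId: 'inv_789' }, 'Invoice created')
// => {"level":30,...,"service":"billing","invoiceId":"inv_789","msg":"Invoice created"}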
Request-Scoped Logging
In web applications, it's useful to correlate all logs generated during a single API request. This Express.js middleware creates a child logger for each incoming request. The child logger automatically includes a unique requestId and other request details in every log message, making it easy to trace the entire lifecycle of a request.
const express = require('express')
const { v4: uuidv4 } = require('uuid')
const logger = require('./logger')
const app = express()
// Create request-scoped logger middleware
app.use((req, res, next) => {
const requestId = req.headers['x-request-id'] || uuidv4()
req.log = logger.child({
requestId,
method: req.method,
path: req.path,
userAgent: req.headers['user-agent'],
ip: req.ip
})
req.log.info('Request started')
next()
})
// Route handlers automatically have contextual logging
app.get('/users/:id', async (req, res) => {
const { id } = req.params
req.log.debug({ userId: id }, 'Fetching user data')
try {
const user = await getUserById(id)
req.log.info({ userId: id, fetchDuration: 45 }, 'User retrieved')
res.json(user)
} catch (error) {
req.log.error({ err: error, userId: id }, 'User fetch failed')
res.status(500).json({ error: 'Internal server error' })
}
})
Service-Specific Loggers
For better organization, you can create child loggers for different parts of your application, such as authentication, database interactions, or payment processing. This helps you filter and analyze logs from specific services more easily.
// logger.js
const pino = require('pino')
const baseLogger = pino()
module.exports = {
auth: baseLogger.child({ service: 'auth' }),
database: baseLogger.child({ service: 'database' }),
payment: baseLogger.child({ service: 'payment' }),
notification: baseLogger.child({ service: 'notification' })
}
Now, your authentication service can use its dedicated logger:
// auth-service.js
const { auth: logger } = require('./logger')
class AuthService {
async authenticateUser(email, password) {
logger.info({ email }, 'Authentication attempt')
try {
const user = await this.validateCredentials(email, password)
logger.info({ userId: user.id, email }, 'Authentication successful')
return this.generateToken(user)
} catch (error) {
logger.warn({ email, reason: error.message }, 'Authentication failed')
throw new AuthenticationError('Invalid credentials')
}
}
}
Custom Serializers for Security and Performance
Serializers transform objects before logging, giving you control over what data appears in logs. This is essential for security, performance, and consistency.
Security-Focused Serializers
A serializer is a function that transforms an object before it is logged. This is crucial for security, as it allows you to redact or mask sensitive information like passwords, API keys, and personally identifiable information (PII) before it ever reaches your logs. This example defines serializers for the user, request, and err keys to ensure sensitive data is properly handled.
const logger = pino({
serializers: {
user: (user) => ({
id: user.id,
username: user.username,
email: user.email ? maskEmail(user.email) : undefined,
role: user.role,
// Never log password, apiKey, tokens, etc.
lastLogin: user.lastLogin
}),
request: (req) => {
// Copy headers too, so the deletes below don't mutate the live request
const safe = { ...req, headers: { ...req.headers } }
if (safe.headers) {
// Remove sensitive headers
delete safe.headers.authorization
delete safe.headers['x-api-key']
delete safe.headers.cookie
}
return safe
},
err: (err) => {
// message and stack are non-enumerable on Error objects, so copy them
// explicitly; the key is "err" to match the { err } convention used here
const safeError = { message: err.message, stack: err.stack, ...err }
// Sanitize error messages that might contain sensitive data
if (safeError.message) {
safeError.message = safeError.message
.replace(/password=\w+/gi, 'password=***')
.replace(/token=[\w-]+/gi, 'token=***')
}
return safeError
}
}
})
function maskEmail(email) {
const [local, domain] = email.split('@')
return `${local.slice(0, 2)}***@${domain}`
}
Global Data Redaction
Pino also provides a built-in redaction feature for a simpler way to remove sensitive data. You can specify a list of paths (object keys) to automatically strip from your logs, a more convenient alternative to writing custom serializers for common redaction needs.
const logger = pino({
redact: {
paths: [
'password',
'token',
'apiKey',
'creditCard.number',
'ssn',
'*.password',
'*.token',
'req.headers.authorization',
'req.headers.cookie'
],
remove: true // Completely remove these fields
}
})
// This will automatically remove sensitive fields
logger.info({
user: {
id: 123,
email: 'user@example.com',
password: 'secret123', // This won't appear in logs
apiKey: 'sk_live_abc123' // This won't appear either
}
}, 'User data processed')
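If you would rather keep the field visible but mask its value, redact also accepts a censor string instead of remove (part of Pino's documented redaction options):

const pino = require('pino')

const logger = pino({
  redact: {
    paths: ['password', '*.token'],
    censor: '[REDACTED]' // keep the key, replace the value
  }
})

logger.info({ user: { id: 123, token: 'abc' } }, 'Masked, not removed')
// => ..."user":{"id":123,"token":"[REDACTED]"}...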
HTTP Request Logging
Effective HTTP logging provides insights into API performance, user behavior, and system issues.
Express.js Integration
For web applications, pino-http is an essential middleware that automates logging of incoming requests and outgoing responses. It provides rich, contextual information about each HTTP transaction with minimal configuration.
npm install pino-http
This example demonstrates how to use pino-http with Express.js, including custom log levels based on HTTP status codes and a custom success message.
const express = require('express')
const pinoHttp = require('pino-http')
const logger = require('./logger')
const app = express()
app.use(pinoHttp({
logger,
// Custom log levels based on response status
customLogLevel: (req, res, err) => {
if (res.statusCode >= 400 && res.statusCode < 500) return 'warn'
if (res.statusCode >= 500 || err) return 'error'
if (res.statusCode >= 300 && res.statusCode < 400) return 'silent'
return 'info'
},
// Custom success message with timing (pino-http passes the elapsed
// time in milliseconds as the third argument)
customSuccessMessage: (req, res, responseTime) => {
return `${req.method} ${req.url} completed in ${responseTime}ms`
}
}))
app.get('/api/users/:id', async (req, res) => {
// req.log is automatically available with request context
req.log.info({ userId: req.params.id }, 'Processing user request')
try {
const user = await getUserById(req.params.id)
res.json(user)
} catch (error) {
req.log.error({ err: error }, 'User retrieval failed')
res.status(500).json({ error: 'Internal server error' })
}
})
Performance-Optimized HTTP Logging
For high-traffic APIs where every millisecond counts, you can tune pino-http to reduce its overhead. This example skips logging for common, low-value endpoints like health checks and static assets, and uses minimal serializers so only the most essential request and response data is logged.
const pinoHttp = require('pino-http')
const httpLogger = pinoHttp({
logger: pino({ level: 'info' }),
// Skip logging for health checks and static assets
autoLogging: {
ignore: (req) => {
return req.url === '/health' ||
req.url.startsWith('/static/') ||
req.url.match(/\.(css|js|png|jpg|ico)$/)
}
},
// Minimal serialization for performance
serializers: {
req: (req) => ({
method: req.method,
url: req.url,
id: req.id
}),
res: (res) => ({
statusCode: res.statusCode
})
}
})
app.use(httpLogger)
Production Configuration Best Practices
Production logging requires careful configuration for performance, security, and reliability.
Environment-Specific Setup
This function demonstrates a robust, environment-aware logger configuration. It combines several best practices:
- Environment-based levels: 'debug' for development, 'info' for production
- Custom formatters: standardizes the appearance of log levels and adds application context like env and version
- Conditional transports: pino-pretty in development for readability, standard JSON output in production for efficiency
- Production redaction: automatically removes sensitive data in production environments
function createProductionLogger() {
const isDevelopment = process.env.NODE_ENV === 'development'
const isProduction = process.env.NODE_ENV === 'production'
const config = {
level: process.env.LOG_LEVEL || (isDevelopment ? 'debug' : 'info'),
formatters: {
level: (label) => ({ level: label.toUpperCase() }),
bindings: (bindings) => ({
pid: bindings.pid,
hostname: bindings.hostname,
env: process.env.NODE_ENV,
version: process.env.APP_VERSION
})
}
}
if (isDevelopment) {
config.transport = {
target: 'pino-pretty',
options: { colorize: true, translateTime: 'yyyy-mm-dd HH:MM:ss' }
}
} else if (isProduction) {
// Production optimizations
config.redact = {
paths: ['password', 'token', 'apiKey', '*.password', '*.token'],
remove: true
}
}
return pino(config)
}
module.exports = createProductionLogger()
High-Performance Async Logging
For maximum performance, especially in high-throughput applications, you can use sonic-boom, the specialized high-speed file writer that Pino uses internally. By setting sync: false, you enable fully asynchronous logging: log messages are written to a buffer and flushed to the file system in the background, minimizing I/O wait time on your application's main thread.
const pino = require('pino')
const SonicBoom = require('sonic-boom')
// High-performance file destination
const dest = new SonicBoom({
dest: '/var/log/app/application.log',
sync: false, // Async writes
append: true,
mkdir: true
})
const logger = pino({
level: 'info',
timestamp: pino.stdTimeFunctions.epochTime, // Faster timestamps
}, dest)
// Graceful shutdown handling: flushSync drains the buffer before exit
process.on('SIGTERM', () => {
dest.flushSync()
dest.end()
process.exit(0)
})
Error Handling and Crash Safety
It is critical that your application logs fatal errors right before it crashes. This code sets up global handlers for uncaughtException and unhandledRejection. It uses pino.final (available up to Pino v6; removed in v7, where the default destination already writes synchronously) to create a special logger that performs a synchronous, blocking write. This ensures the final, critical log message is written before the process exits, which is essential for post-mortem debugging.
const logger = pino()
// Handle uncaught exceptions
process.on('uncaughtException', (error) => {
const finalLogger = pino.final(logger)
finalLogger.fatal({ err: error }, 'Uncaught exception')
process.exit(1)
})
// Handle unhandled promise rejections
process.on('unhandledRejection', (reason, promise) => {
const finalLogger = pino.final(logger)
finalLogger.fatal({ reason }, 'Unhandled promise rejection')
process.exit(1)
})
// Graceful shutdown
process.on('SIGTERM', () => {
logger.info('Received SIGTERM, shutting down gracefully')
pino.final(logger, (err, finalLogger) => {
if (err) finalLogger.error(err, 'Shutdown error')
finalLogger.info('Application shutdown complete')
process.exit(0)
})
})
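If you are on Pino v7 or later, where pino.final no longer exists, a minimal equivalent is to route crash-time logs through a synchronous destination so the final write completes before the process exits:

const pino = require('pino')

// sync: true makes writes blocking; use this only for the crash path,
// not for your main application logger
const crashLogger = pino(pino.destination({ sync: true }))

process.on('uncaughtException', (error) => {
  crashLogger.fatal({ err: error }, 'Uncaught exception')
  process.exit(1)
})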
Integration with SigNoz for Complete Observability
SigNoz provides a unified platform for logs, metrics, and traces, making it ideal for comprehensive Node.js application monitoring with Pino.
Why SigNoz with Pino
SigNoz offers several advantages for Pino integration:
- Unified Dashboard: View logs alongside traces and metrics
- OpenTelemetry Native: Seamless integration with modern observability standards
- SQL-like Querying: Query structured log data with familiar syntax
- Real-time Monitoring: Live log streaming and alerting
- Open Source: Cost-effective alternative to commercial solutions
Setup with OpenTelemetry Transport
To send logs to SigNoz, use pino-opentelemetry-transport. This transport converts Pino log entries into the OpenTelemetry format and sends them to your SigNoz instance.
npm install pino-opentelemetry-transport
This configuration sets up a multi-transport logger. It sends info-level logs to SigNoz in every environment; in development, it additionally streams debug-level logs to the console in a human-readable format using pino-pretty.
const pino = require('pino')
const logger = pino({
transport: {
targets: [
{
target: 'pino-opentelemetry-transport',
options: {
resourceAttributes: {
'service.name': 'nodejs-api',
'service.version': process.env.APP_VERSION || '1.0.0',
'deployment.environment': process.env.NODE_ENV || 'development'
}
},
level: 'info'
},
// Keep console output for development
...(process.env.NODE_ENV === 'development' ? [{
target: 'pino-pretty',
level: 'debug',
options: { colorize: true }
}] : [])
]
}
})
module.exports = logger
Environment Configuration for SigNoz
To connect your application to SigNoz, you need to set a few environment variables. These variables tell the OpenTelemetry transport where to send the logs and what metadata to include.
- OTEL_EXPORTER_OTLP_LOGS_ENDPOINT: The URL of your SigNoz log ingestion endpoint.
- OTEL_EXPORTER_OTLP_HEADERS: Your SigNoz access token for authentication (SigNoz Cloud only).
- OTEL_RESOURCE_ATTRIBUTES: Metadata that identifies your service, such as its name and version.
# For SigNoz Cloud
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest.{region}.signoz.cloud:443/v1/logs"
export OTEL_EXPORTER_OTLP_HEADERS="signoz-access-token=YOUR_ACCESS_TOKEN"
export OTEL_RESOURCE_ATTRIBUTES="service.name=nodejs-api,service.version=1.0.0"
# For self-hosted SigNoz
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="http://your-signoz-instance:4318/v1/logs"
export OTEL_RESOURCE_ATTRIBUTES="service.name=nodejs-api,service.version=1.0.0"
Complete Application Example
This example ties everything together, showing a complete Express.js application that uses pino-http for automated request logging and sends all logs to SigNoz. It demonstrates how to add rich, structured context to your logs, including performance metrics like fetchDuration and business-relevant data like userEmail and cacheHit.
// app.js
const express = require('express')
const pinoHttp = require('pino-http')
const logger = require('./logger')
const app = express()
// Add request logging middleware
app.use(pinoHttp({ logger }))
app.get('/api/users/:id', async (req, res) => {
const { id } = req.params
const startTime = Date.now()
req.log.info({ userId: id }, 'Fetching user data')
try {
const user = await getUserById(id)
const duration = Date.now() - startTime
req.log.info({
userId: id,
userEmail: user.email,
fetchDuration: duration,
cacheHit: user.fromCache
}, 'User data retrieved')
res.json(user)
} catch (error) {
const duration = Date.now() - startTime
req.log.error({
err: error,
userId: id,
duration,
operation: 'user_fetch'
}, 'Failed to retrieve user data')
res.status(500).json({ error: 'Internal server error' })
}
})
app.listen(3000, () => {
logger.info({
port: 3000,
env: process.env.NODE_ENV,
version: process.env.APP_VERSION
}, 'Server started successfully')
})
Get Started with SigNoz
You can choose between various deployment options in SigNoz. The easiest way to get started is SigNoz Cloud; we offer a 30-day free trial account with access to all features.
Those who have data privacy concerns and can't send their data outside their infrastructure can sign up for either the enterprise self-hosted or BYOC offering.
Those who have the expertise to manage SigNoz themselves, or just want to start with a free self-hosted option, can use our community edition.
Hope we answered all your questions regarding Pino logging in Node.js. If you have more questions, feel free to use the SigNoz AI chatbot, or join our Slack community.
Common Issues and Troubleshooting
Logs Stop Writing Suddenly
Problem: Pino stops logging without errors, requiring application restart.
Solution: This can happen if the underlying log destination (like a file) encounters an error. To prevent this, always attach an error handler to your destination stream. This example also ensures the log directory exists before attempting to write to it, preventing a common source of errors.
const pino = require('pino')
const fs = require('fs')
const path = require('path')
// Ensure log directory exists
const logDir = path.dirname('./logs/app.log')
if (!fs.existsSync(logDir)) {
fs.mkdirSync(logDir, { recursive: true })
}
const dest = pino.destination({
dest: './logs/app.log',
sync: false,
mkdir: true
})
// Handle destination errors
dest.on('error', (err) => {
console.error('Log destination error:', err)
// Implement fallback logging mechanism
})
const logger = pino(dest)
Memory Leaks Under High Load
Problem: Memory usage increases continuously under high logging volume.
Solution: This is often caused by backpressure, where logs are generated faster than they can be written. SonicBoom provides fine-grained control over buffering to manage this. By tuning minLength (how much data is buffered before a write is attempted) and maxWrite (the maximum amount written at once), you can optimize for your specific workload. The drain event fires when the buffer has been successfully flushed.
const SonicBoom = require('sonic-boom')
const dest = new SonicBoom({
dest: './app.log',
sync: false,
minLength: 4096, // Buffer before writing
maxWrite: 16384 // Maximum write size
})
// Monitor backpressure
dest.on('drain', () => {
console.log('Log buffer drained')
})
const logger = pino(dest)
// Graceful shutdown: flushSync drains the buffer synchronously before exit
process.on('SIGTERM', () => {
dest.flushSync()
dest.end()
})
Transport Configuration Issues
Problem: Complex transport setups fail silently.
Solution: When creating a logger with multiple or conditional transports, wrap the initialization in a try...catch block. If the configuration fails for any reason (e.g., a missing dependency or incorrect options), you can catch the error and fall back to a simple, reliable logger. This ensures that your application can still start and log critical errors, even if your primary logging setup is broken.
function createSafeLogger() {
try {
return pino({
transport: process.env.NODE_ENV === 'development'
? { target: 'pino-pretty', options: { colorize: true } }
: { target: 'pino/file', options: { destination: './logs/app.log' } }
})
} catch (error) {
console.error('Logger creation failed, falling back to console:', error)
return pino() // plain stdout logger with no transport dependencies
}
}
Migration from Other Loggers
From Winston to Pino
Winston and Pino have different philosophies, but migrating is straightforward. This example shows how to replicate a common Winston setup, logging to two different files based on level, using Pino's multi-transport configuration.
// Winston pattern
const winston = require('winston')
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
transports: [
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' })
]
})
// Pino equivalent
const pino = require('pino')
const logger = pino({
level: 'info',
transport: {
targets: [
{ target: 'pino/file', options: { destination: './combined.log' }, level: 'info' },
{ target: 'pino/file', options: { destination: './error.log' }, level: 'error' }
]
}
})
Migration Strategy
- Replace logger creation with Pino initialization
- Convert string interpolation to structured logging (note the argument-order difference shown after this list)
- Update error logging to use Pino's error serializers
- Test performance impact and adjust configuration
- Update monitoring configurations to handle new log format
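One gotcha worth spelling out for step 2: Winston and Pino take their arguments in opposite order. Winston expects the message first and metadata second, while Pino expects the merging object first, then the message:

// Winston: message first, metadata second
logger.info('Order completed', { orderId: 'ord_456', amount: 99.99 })

// Pino: merging object first, then the message
logger.info({ orderId: 'ord_456', amount: 99.99 }, 'Order completed')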
Key Takeaways
Pino represents a fundamental shift in Node.js logging philosophy, prioritizing performance without sacrificing the structured data needed for modern observability.
Performance Benefits:
- 5x faster than traditional loggers with minimal CPU overhead
- Asynchronous architecture prevents event loop blocking
- Memory-efficient design maintains stability under load
Production Readiness:
- Structured JSON output enables powerful querying and analysis
- Child loggers provide excellent context management
- Built-in security features protect sensitive data
Modern Integration:
- Native OpenTelemetry support enables trace correlation
- Seamless integration with observability platforms like SigNoz
- Extensive ecosystem of transports and plugins
For new projects, start with Pino from day one using the patterns demonstrated in this guide. For existing applications, the performance benefits justify gradual migration, starting with new features and high-traffic endpoints.
As Node.js applications continue to scale, Pino's combination of speed, structure, and extensibility makes it the ideal logging solution for performance-critical applications.
Frequently Asked Questions
What is Pino package?
Pino is a high-performance Node.js logging library optimized for speed and structured output. It produces JSON logs by default, uses asynchronous I/O, and focuses on minimal overhead to prevent logging from becoming an application bottleneck.
What is the common log format in Pino?
Pino uses NDJSON (newline-delimited JSON) by default. Each log entry includes standard fields like level, time, pid, hostname, and msg, making logs immediately machine-readable for automated processing and analysis.
What is Pino pretty?
pino-pretty is a development tool that transforms Pino's JSON output into human-readable, colorized text. Use it during development for easier log reading, but avoid it in production, where it adds overhead and defeats Pino's performance advantages.
What are the benefits of Pino logger?
Key benefits include: 5x faster performance than Winston, minimal CPU and memory overhead, structured JSON logging by default, child logger support for context management, built-in security features with data redaction, and seamless integration with modern observability platforms.
Can Pino handle asynchronous logging?
Yes, Pino excels at asynchronous logging through worker threads and non-blocking I/O operations. This prevents logging from blocking the main event loop, maintaining application performance even under high logging volume.
Is it possible to change the log level in Pino dynamically?
Yes, you can change Pino's log level at runtime using logger.level = 'newLevel'. This is valuable in production for temporarily increasing verbosity to debug issues without restarting the application.
What is a child logger?
A child logger inherits parent configuration while adding consistent context fields to every log message. Perfect for request-scoped logging, service-specific logging, or adding persistent context like user IDs or request IDs.
What is Pino transport?
Pino transport handles where and how logs are processed and output. Transports can send logs to files, external services, or format them for different purposes. Pino 7+ uses worker threads for transports to minimize performance impact.
What is Pino HTTP?
pino-http is Express.js-compatible middleware that automatically logs HTTP requests and responses. It provides detailed request/response logging with minimal setup and integrates with Pino's performance optimizations.
What are the three main log levels in Pino?
While Pino supports six levels, the three most commonly used are info for general application information and significant events, warn for potential issues requiring attention, and error for actual failures requiring investigation.