Day 43 of 50 Days of Python: Logging & Debugging Best Practices
Part of Week 7: Python in Production
Welcome back to the series! Today we’ll cover logging and debugging techniques, along with best practices for handling and resolving errors in production code.
What We’ll Cover
→ Why using logging over print() is best practice.
→ Tips for reducing log noise and protecting sensitive data.
→ Structured logging with structlog and FastAPI middleware.
Key Concepts
→ Level: The severity of a message, from DEBUG through INFO, WARNING, ERROR, and CRITICAL (see the sketch after this list).
→ Handler: Where log records are sent (console, file, network, etc.).
→ Formatter: What each log line looks like.
→ Structured: Emitting logs in a machine-parseable format such as JSON.
→ Correlation ID: A unique ID attached to a request or job so you can link all of its log lines together.
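To see levels in action, here is a minimal sketch: the configured level acts as a threshold, and anything below it is silently dropped.

import logging

logging.basicConfig(level=logging.WARNING)  # threshold: WARNING and above
logging.debug("ignored")       # DEBUG < WARNING, suppressed
logging.info("also ignored")   # INFO < WARNING, suppressed
logging.warning("shown")       # meets the threshold, emitted
logging.error("shown too")     # above the threshold, emitted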
Hands‑On: Configure Logging
# logging_config.py
import logging.config
import os

os.makedirs("logs", exist_ok=True)  # RotatingFileHandler won't create the directory itself

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {
            "format": "%(asctime)s %(levelname)s [%(name)s] %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "default",
        },
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "filename": "logs/app.log",
            "maxBytes": 10_000_000,  # rotate at ~10 MB
            "backupCount": 5,  # keep 5 rotated files
            "formatter": "default",
        },
    },
    "root": {"level": "INFO", "handlers": ["console", "file"]},
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger(__name__)
logger.info("Logging ready...")
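Once dictConfig has run, any other module can grab its own named logger and automatically inherit the root handlers. A minimal sketch, where app/db.py and insert_rows are illustrative names:

# app/db.py (illustrative)
import logging

logger = logging.getLogger(__name__)  # e.g. "app.db"; inherits the root console + file handlers

def insert_rows(rows):
    logger.info("Inserting %d rows", len(rows))  # lazy %-formatting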
Structured Logging with structlog
import logging
import uuid

import structlog

logging.basicConfig(format="%(message)s", level="INFO")
structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
    processors=[
        structlog.contextvars.merge_contextvars,  # pull in bound context (e.g. corr_id)
        structlog.processors.TimeStamper(fmt="iso"),  # add an ISO-8601 timestamp
        structlog.processors.JSONRenderer(),  # render each event as one JSON line
    ],
)
log = structlog.get_logger()
structlog.contextvars.bind_contextvars(corr_id=str(uuid.uuid4()))  # attach a correlation ID
log.info("started", service="calihouse-api")
Output
{"event":"started","service":"calihouse-api","corr_id":"b3f6...","timestamp":"2025-07-02T09:15:12.345Z"}
FastAPI Middleware for Correlation IDs
from fastapi import FastAPI, Request
import uuid

import structlog

app = FastAPI()
log = structlog.get_logger()

@app.middleware("http")
async def add_correlation_id(request: Request, call_next):
    cid = request.headers.get("x-correlation-id", str(uuid.uuid4()))
    structlog.contextvars.clear_contextvars()  # avoid leaking context between requests
    structlog.contextvars.bind_contextvars(correlation_id=cid)
    response = await call_next(request)
    response.headers["x-correlation-id"] = cid  # echo the ID back to the caller
    return response
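With the middleware in place, route handlers never pass the ID around; anything logged through structlog picks it up from the bound context (via the merge_contextvars processor configured above). A quick sketch, with /health as an illustrative route:

@app.get("/health")
async def health():
    log.info("health_check")  # the rendered JSON event includes correlation_id
    return {"status": "ok"}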
Debugging with pdb and VS Code
# drop into the pdb REPL at this exact point in your code
import pdb; pdb.set_trace()

# or attach VS Code remotely with debugpy
import debugpy
debugpy.listen(("0.0.0.0", 5678))  # start a debug server on port 5678
debugpy.wait_for_client()  # block until the IDE attaches
Then → VS Code > Run & Debug > Python: Remote Attach.
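Since Python 3.7 you can also use the built-in breakpoint(), which drops into pdb by default and respects the PYTHONBREAKPOINT environment variable. A small sketch, with price_per_sqft as an illustrative function:

def price_per_sqft(price: float, sqft: float) -> float:
    breakpoint()  # pauses here; set PYTHONBREAKPOINT=0 to disable in production
    return price / sqft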
logging.info() is something you’ll use at almost every step. For example, to record how many rows you’ve inserted, capture the metric and log it:

rows = df.count()
logging.info("Rows inserted: %s", rows)

Note the lazy %-style formatting: the message string is only built if the record is actually emitted, so calls below the configured level cost almost nothing. That control is the real reason to prefer logging over print(): every log record carries a level, a timestamp, and a logger name, and handlers decide where it goes (console, rotating file, aggregator) without any code changes. print() offers none of that, so debug output either floods production or has to be deleted by hand.
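A related noise-reduction trick from the intro: because libraries log through named loggers, you can quiet a chatty dependency by raising its level, with urllib3 here purely as an example:

import logging

# keep your own logs at INFO, but only show WARNING and above from urllib3
logging.getLogger("urllib3").setLevel(logging.WARNING)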
TL;DR
Configure logs with dictConfig, rotate files, and set root level to INFO.
Structure logs via structlog → JSON for painless searching.
Trace requests by injecting correlation IDs in middleware.
Debug quickly with pdb, ipdb, or VS Code remote attach.
Next Up: Day 44 - Writing Unit and Integration Tests in Python.
I’ll explain the reasoning behind unit and integration tests, and help you understand how they bolster your CI/CD pipelines.
See you for the next one and as always… Happy coding!