Dealing with Handlers that Block

Today we’re going to talk about a common issue that can slow down your code: handlers that block. If you’ve ever used Python’s logging module for debugging or error reporting, you may have noticed that some handlers (like SMTPHandler) can take a long time to complete, because sending an email depends on network conditions outside of our control. This is where we need a solution, and luckily, there is one!

The problem with handlers like SMTPHandler is that they block the thread you’re logging from, which can cause real performance trouble in web applications and other latency-sensitive scenarios. To avoid this, use a two-part approach. First, attach only QueueHandlers to loggers accessed from performance-critical threads: these simply write each record to a queue, which can be sized to a large enough capacity (or initialized with no upper bound). The second part of the solution is QueueListener, the counterpart to QueueHandler, created specifically for this purpose: it runs in its own thread, pulls records off the queue, and dispatches them to the slow handlers, keeping the blocking I/O out of your critical threads.
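A minimal sketch of that two-part setup might look like this (the logger name and the StreamHandler standing in for a genuinely slow handler such as SMTPHandler are illustrative choices, not requirements):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

# Unbounded queue: the logging thread never blocks on a full queue
log_queue = queue.Queue(-1)

# The performance-critical thread's logger gets only a QueueHandler,
# so each logging call just enqueues the record and returns at once
logger = logging.getLogger('app')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(QueueHandler(log_queue))

# The potentially slow handler lives behind the listener, which
# consumes the queue on its own internal thread
slow_handler = logging.StreamHandler()  # stand-in for e.g. SMTPHandler
listener = QueueListener(log_queue, slow_handler)
listener.start()

logger.info('This call only enqueues the record; no slow I/O here.')

# Drain the queue and join the listener thread before the program exits
listener.stop()
```

Note the `listener.stop()` call at shutdown: it flushes any queued records through the handlers and joins the listener thread, so no messages are lost when the process exits.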

Let’s take a look at an example script built on a closely related buffering idea: it uses MemoryHandler to hold records in memory and only deliver them to a (potentially slow) target handler when an error actually occurs:

# Import necessary modules
import logging
from logging.handlers import MemoryHandler
import sys

# Set up logger
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())

# Decorator factory: buffer log records and only deliver them to the
# target handler if a record at flush_level (or above) is logged
def log_if_errors(target_handler=None, flush_level=None, capacity=None):
    # Set default values if not provided
    if target_handler is None:
        target_handler = logging.StreamHandler()
    if flush_level is None:
        flush_level = logging.ERROR
    if capacity is None:
        capacity = 100

    def decorator(fn):
        # Wrapper handles logging setup and teardown around the call
        def wrapper(*args, **kwargs):
            # Create MemoryHandler with specified capacity and flush level
            handler = MemoryHandler(capacity, flushLevel=flush_level,
                                    target=target_handler)
            # Add handler to logger
            logger.addHandler(handler)
            try:
                # Call original function with provided arguments
                return fn(*args, **kwargs)
            except Exception:
                # Log exception and re-raise it; logging at ERROR level
                # triggers a flush of the buffered records to the target
                logger.exception('Call failed')
                raise
            finally:
                # Bypass MemoryHandler.flush() so any still-buffered,
                # non-error records are discarded, then detach the handler
                super(MemoryHandler, handler).flush()
                logger.removeHandler(handler)
        return wrapper
    return decorator

# Define function to write to stderr
def write_line(s):
    sys.stderr.write('%s\n' % s)

# Decorate foo(); buffered records reach the target handler only
# if an ERROR (or higher) record is logged during the call
@log_if_errors(target_handler=logging.StreamHandler(sys.stdout))
def foo():
    # Log at DEBUG level; this is buffered, not emitted immediately
    logger.debug("About to log at DEBUG.")
    ...

In this example, we first define a function called `log_if_errors()`, which takes three optional arguments: target_handler (defaulting to logging.StreamHandler()), flush_level (defaulting to logging.ERROR), and capacity (defaulting to 100 records). It returns a decorator whose wrapper attaches a MemoryHandler to the logger, calls the original function, logs any exception that escapes (which, being at ERROR level, flushes the buffer to the target), and finally discards any remaining buffered records and removes the handler from the logger.

We also define a `write_line()` helper for writing messages directly to stderr (useful when you want to see diagnostic output in real time). Finally, we apply the `log_if_errors()` decorator to the `foo()` function. Records logged inside `foo()` are buffered in memory rather than sent immediately, so the target handler, which may be slow, is only touched when a record at the flush level or above actually arrives.
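To see that buffering behaviour in isolation, here is a small self-contained sketch; the `ListHandler` class and logger name are illustrative inventions used only so we can inspect what gets delivered:

```python
import logging
from logging.handlers import MemoryHandler

records = []

class ListHandler(logging.Handler):
    # Collects delivered messages so we can check what reached the target
    def emit(self, record):
        records.append(record.getMessage())

target = ListHandler()
buffered = MemoryHandler(capacity=100, flushLevel=logging.ERROR,
                         target=target)

demo = logging.getLogger('demo')
demo.setLevel(logging.DEBUG)
demo.propagate = False
demo.addHandler(buffered)

demo.debug('buffered, not yet delivered')
assert records == []          # nothing has reached the target yet

demo.error('boom')            # ERROR >= flushLevel triggers a flush
assert records == ['buffered, not yet delivered', 'boom']
```

The DEBUG record sits in the buffer, invisible to the target, until the ERROR record arrives and flushes everything through in order.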

By attaching only QueueHandlers to loggers used in performance-critical threads, and letting a QueueListener (or a buffering MemoryHandler) do the slow work elsewhere, we can ensure that log messages are processed efficiently without blocking the threads that matter.
