Python logging is confusing.
```python
CRITICAL = 50
FATAL = CRITICAL
ERROR = 40
WARNING = 30
WARN = WARNING
INFO = 20
DEBUG = 10
NOTSET = 0
```
To pass a message through, a log message must satisfy the following relationship:

`record.levelno >= logger.getEffectiveLevel()`
So:
- A logger set to `DEBUG` would see `DEBUG`, `INFO`, `WARNING`, ...
- LOWER levels are relatively less important for the user to see.
- HIGHER levels are relatively more important for the user to see.
- Consider the levels like a "high water mark", indicating the nearby lake is nearing overflow at WARNING (30) and a flood is imminent at CRITICAL (50).
`isEnabledFor` performs this check. It works like the following:

- If `logger.disabled`, return immediately.
- Otherwise:
  - Check `manager.disable >= level` (usually not the case)
  - Check `level >= logger.getEffectiveLevel()`
- `logger.getEffectiveLevel()`:
  a. Find the first logger in `[self, self.parent, self.parent.parent, ...]` that has a `level` set.
  b. Return the level found in (a) or fall back to `NOTSET`.
For performance reasons, the per-level `isEnabledFor` information is cached in a dictionary, `logger._cache[level]`.
This cache is locked using a module-level `threading.RLock`.
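Roughly, the combined check looks like the following sketch (a paraphrase of the logic above, with the caching and locking omitted; not the actual CPython implementation):

```python
import logging


def effective_level(logger: logging.Logger) -> int:
    # Walk [self, self.parent, ...] until a logger with an explicit level is found.
    while logger:
        if logger.level != logging.NOTSET:
            return logger.level
        logger = logger.parent
    return logging.NOTSET


def is_enabled_for(logger: logging.Logger, level: int) -> bool:
    if logger.disabled:
        return False
    if logger.manager.disable >= level:
        return False
    return level >= effective_level(logger)
```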
>>> logging.getLevelName(10)
'DEBUG'
But the same function reverses it, too!
>>> logging.getLevelName("DEBUG")
10
So using something like the following would make it consistent:
```python
import logging
from typing import Union


def validate_log_level(level: Union[str, int]) -> int:
    """
    Return a logging level integer for level comparison.

    Parameters
    ----------
    level : str or int
        The logging level string or integer value.

    Returns
    -------
    log_level : int
        The integral log level.

    Raises
    ------
    ValueError
        If the logging level is invalid.
    """
    if isinstance(level, int):
        return level
    if isinstance(level, str):
        levelno = logging.getLevelName(level)
        if isinstance(levelno, int):
            return levelno
    raise ValueError("Invalid logging level (use e.g., DEBUG or 6)")
```
Consistent numerical values:
>>> validate_log_level(10)
10
>>> validate_log_level("DEBUG")
10
>>> logging.getLevelName(validate_log_level(10))
'DEBUG'
Never heard of the `logging.Manager`? Me neither.
Normally, it appears there is just one `Manager` instance.
It contains the following information:

- The root logger node (`Logger.root`)
- A method to disable logging globally below a certain level.
- Whether a no-handler warning has been emitted (no handler found and no fallback)
- `loggerDict` - a mapping of `{logger_name: Logger}`
- `loggerClass` - the default `Logger` class to create on `getLogger`
- `logRecordFactory` - the factory used to create records
While the manager takes care of this, the public API is `logging.disable(level)`.
This also forces a cache clearing, so that loggers have to find their effective levels again.

`logging.getLogger` is a shortcut for `Logger.manager.getLogger`!
... with the exception of the root logger, which is returned directly from `logging.root`.
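A quick illustration of the module-level kill switch (a minimal sketch; the logger name is arbitrary):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("demo")

log.info("visible")            # emitted as usual

logging.disable(logging.INFO)  # sets manager.disable = 20 and clears the per-logger caches
log.info("suppressed")         # manager.disable >= INFO, so this is dropped
log.warning("still visible")   # WARNING (30) > 20, so this still gets through

logging.disable(logging.NOTSET)  # back to normal
```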
Some things to note:

- You can override the default Logger class by doing `logging.setLoggerClass(...)` (see the sketch after this list).
- Custom `Logger` classes must be subclasses of `Logger`.
- `manager.setLoggerClass` takes precedence over `logging.setLoggerClass` - they are two separate state variables!
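A minimal sketch of swapping in a custom class via `logging.setLoggerClass` (the class and its `quack` method are made up for illustration):

```python
import logging


class DuckLogger(logging.Logger):
    """Hypothetical Logger subclass with an extra convenience method."""

    def quack(self, msg, *args, **kwargs):
        self.warning("QUACK: " + msg, *args, **kwargs)


logging.setLoggerClass(DuckLogger)

# Only loggers created *after* this point use the new class.
log = logging.getLogger("pond")
log.quack("something happened")
```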
Internal API for managing parent/child logger relationships. Quoting the source code:
Get a logger with the specified name (channel name), creating it if it doesn't yet exist. This name is a dot-separated hierarchical name, such as "a", "a.b", "a.b.c" or similar.
If a PlaceHolder existed for the specified name [i.e. the logger didn't exist but a child of it did], replace it with the created logger and fix up the parent/child references which pointed to the placeholder to now point to the logger.
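A small demonstration of that placeholder fix-up, assuming a fresh interpreter (the comments reflect current CPython behavior):

```python
import logging

child = logging.getLogger("a.b")
print(child.parent.name)   # 'root' -- "a" exists only as a PlaceHolder so far

parent = logging.getLogger("a")
print(child.parent.name)   # 'a' -- the placeholder was replaced and the parent reference fixed up
```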
`LogRecord` ignores unknown kwargs, so where do `extra`s get added as attributes?
Extras get patched in by way of `record.__dict__[key] = value`.
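For example, values passed via the `extra=` keyword end up as plain attributes on the record, which a format string can then reference (the `user` key here is just an example):

```python
import logging

logging.basicConfig(format="%(levelname)s %(user)s %(message)s")
log = logging.getLogger("audit")

# "user" becomes record.user via record.__dict__["user"] = "alice"
log.warning("login failed", extra={"user": "alice"})
```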
`LoggerAdapter` is a simple way to add contextual information onto a logger.
The adapter itself, however, is not 100% API-compatible with the `Logger` class.
It should be used when you have multiple contexts (objects, etc.) utilizing the same logger instance, but you want the generated logger records to be easily differentiable.
Only the following classes implement the `Filterer` interface:

- `Handler`
- `Logger`

Filters should either be callable or have a callable attribute `.filter()`.
Either should take one positional argument: the `LogRecord`.
```python
class MyFilter:
    def filter(self, record):
        return True


def filter(record):
    return True
```

`MyFilter()` and `filter()` are both acceptable (and equivalent in this simple example).
Filters are applied uniformly for either handlers or loggers. Conceptually:

```python
should_handle = all(filter(record) for filter in logger.filters)
should_handler_0_emit_record = all(filter(record) for filter in logger.handlers[0].filters)
```
A filter applied to a handler takes in a record.
If any filter returns `False`, the final message will not be emitted.
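Here is a small sketch with both styles attached to a logger (the filter names and rules are arbitrary):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("filtered")


class NoSecretsFilter:
    """Class-based filter: drop records whose message mentions 'secret'."""

    def filter(self, record):
        return "secret" not in record.getMessage()


def only_important(record):
    """Callable filter: drop DEBUG records."""
    return record.levelno > logging.DEBUG


log.addFilter(NoSecretsFilter())
log.addFilter(only_important)

log.info("hello")            # passes both filters
log.info("my secret token")  # dropped by NoSecretsFilter
log.debug("noisy detail")    # dropped by only_important
```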
The parent logger of `logging.getLogger("a.b")` is `logging.getLogger("a")`.
The basic rule is that if `logger.propagate = False`, the log message will not propagate to `logger.parent`.
There's more to it than this, though.
A `False`-returning filter in `logger.filters` or setting `logger.disabled` will stop records from reaching `callHandlers` and thus avoid propagation entirely.
Handler-level filters have no effect on propagation.
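A short sketch of the propagation rule (logger names are arbitrary):

```python
import logging

logging.basicConfig(level=logging.DEBUG)   # the only handler lives on the root logger

child = logging.getLogger("app.db")

child.warning("reaches the root handler")  # propagates up to the root logger's handler

child.propagate = False
child.warning("not propagated")            # zero handlers are found, so only the
                                           # "last resort" handler (stderr) sees this one
```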
Unset log levels are configured as `logging.NOTSET = 0`.
logger.debug("Message")
- Ensure
logger.isEnabledFor(DEBUG)
- Ensure
logger._log()
- Create a
LogRecord
by way oflogger.makeRecord()
- This is where exception information and stack information is determined.
- Create a
logger.handle(record)
- Stop if
logger.disabled
- Stop if
not logger.filter(record)
- Stop if
logger.callHandlers(record)
- For each handler in
logger.handlers
- If the handler level is
<=
therecord.levelno
, pass the message to the handler:handler.handle(record)
- If the handler level is
- Repeat (a) for each parent/ancestor logger if
.propagate
- If zero handlers were found in this step, use the "last resort" logger. The log level number requirement still applies for the last resort logger.
- For each handler in
For each handler that passed step (4)(a)(1):
- Stop if
not handler.filter(record)
- All filters must return
True
- All filters must return
- Emit the record by way of:
- Acquire
threading.RLock
dedicated to the handler (handler.lock
). handler.emit(record)
- Release the lock.
- Acquire
For each emitted record above in handler.emit(record)
:
- Call
handler.format(record)
to get the message - Send message out to target (this is custom depending on the handler implementation itself)
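To make steps (5) and (6) concrete, here is a minimal custom handler sketch (the `ListHandler` name is made up); `emit` just formats the record and appends it to a list:

```python
import logging


class ListHandler(logging.Handler):
    """Collects formatted messages in memory instead of writing them anywhere."""

    def __init__(self, level=logging.NOTSET):
        super().__init__(level)
        self.messages = []

    def emit(self, record):
        # handler.format(record) applies the attached Formatter (or a bare default one)
        self.messages.append(self.format(record))


log = logging.getLogger("captured")
handler = ListHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
log.addHandler(handler)

log.warning("kept in memory")
print(handler.messages)   # ['WARNING:kept in memory']
```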
The adapter is a thin wrapper around its public `logger` attribute.

Assuming:

```python
extra = {"a": "b"}
adapter = logging.LoggerAdapter(logging.getLogger("logger"), extra=extra)
```

`adapter.debug("Test")` goes through `adapter.log(DEBUG, "Test")`, i.e. `adapter.log(level, msg, *args, **kwargs)`:

1. Ensure `adapter.logger.isEnabledFor(DEBUG)`
2. Call `adapter.process("Test", kwargs)`
   a. Attach `kwargs["extra"] = self.extra`
3. Pass on the message and args to the original logger instance: `self.logger.log(level, msg, *args, **kwargs)` (see the usage sketch below).
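Putting that together, a sketch that surfaces the adapter's `extra` in the output (the `%(a)s` key in the format string must match the `extra` dict):

```python
import logging

logging.basicConfig(format="%(levelname)s [%(a)s] %(message)s", level=logging.DEBUG)

extra = {"a": "b"}
adapter = logging.LoggerAdapter(logging.getLogger("logger"), extra=extra)

adapter.debug("Test")   # prints: DEBUG [b] Test
```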
If no other handlers are found in a log message chain, a "last resort" handler (`logging.lastResort`) is used.
This is by default configured at the `WARNING` level.
The log level number requirement still applies for the last resort handler.
`logger.makeRecord` takes care of creating a record.
Under the hood, log records are created through a factory, `logging.getLogRecordFactory()`, which can be set with `logging.setLogRecordFactory(factory)`.
This factory (as of the time of writing) should have arguments:
```python
_logRecordFactory(
    name,
    level,
    filename,
    line_number,
    msg,
    args,
    exception_info,
    function,
    stack_info,
)
```
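The usual pattern for customizing record creation is to wrap the existing factory rather than replace it outright (the `request_id` attribute below is just an example):

```python
import logging

old_factory = logging.getLogRecordFactory()


def record_factory(*args, **kwargs):
    # Delegate to the original factory, then bolt an extra attribute onto the record.
    record = old_factory(*args, **kwargs)
    record.request_id = "unknown"   # hypothetical attribute for illustration
    return record


logging.setLogRecordFactory(record_factory)
```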
There's an `atexit` hook which will look through every handler ever defined (well, weakrefs of handlers, stashed in `logging._handlerList`) and call the following:

- If the weakref is invalid, stop.
- Call the following, ignoring `OSError`/`ValueError`:
  - `handler.acquire()`
  - `handler.flush()`
  - `handler.close()`
- And finally, `handler.release()`
Helper functions for using the root logger are there, but you really shouldn't be using them (in my opinion).
There is at least one upside to using them: an application without logging setup may still see the messages, due to a check for handlers on the root logger in these helpers.
- `logging.info(...)` -> `basicConfig` + `root.info(...)`
- `logging.debug(...)` -> `basicConfig` + `root.debug(...)`
- `logging.warning(...)` -> `basicConfig` + `root.warning(...)`
- `logging.error(...)` -> `basicConfig` + `root.error(...)`
- `logging.exception(...)` -> `basicConfig` + `root.exception(...)`
- `logging.critical(...)` -> `basicConfig` + `root.critical(...)`
Warnings can be captured in the logging system by way of `logging.captureWarnings(True)`.

This (surprisingly, to me) monkey-patches `warnings.showwarning` to redirect it to `logging._showwarning`. This formats the warning and routes it to `logging.getLogger("py.warnings").warning()`.
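A quick sketch of what that looks like end to end:

```python
import logging
import warnings

logging.basicConfig(level=logging.WARNING)
logging.captureWarnings(True)

warnings.warn("something looks off")   # a UserWarning
# Instead of being written to stderr by the warnings machinery, the warning is
# logged by the "py.warnings" logger at WARNING level.
```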
This is used pretty much everywhere for simple configuration of logging.
As someone clearly interested in the Python logging system, you should read through the docstring once to see what it's actually doing.
Do basic configuration for the logging system.

This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to `True`. It is a convenience method intended for use by simple scripts to do one-shot configuration of the logging package.

The default behaviour is to create a StreamHandler which writes to sys.stderr, set a formatter using the BASIC_FORMAT format string, and add the handler to the root logger.

A number of optional keyword arguments may be specified, which can alter the default behaviour.
Also, you can:

- Specify `filename`/`filemode` to quickly create a `FileHandler`
- Configure formatting by way of `format`/`datefmt`/`style` and `encoding`
- Configure the root logger level (`level`)
- Redirect the default `stderr` stream (`stream`)
- Pass in already-created handlers (`handlers=[...]`)
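For reference, a typical one-shot setup (the format string and commented-out filename are just examples):

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    # filename="app.log",  # hypothetical path; writes to a file instead of stderr
)

logging.getLogger(__name__).info("configured")
```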
TODO