Logging Cookbook

Author

Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found useful in the past.

Using logging in multiple modules

Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is true for references to the same object; additionally, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module:

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module:

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this:

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to 'some_function'
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Logging from multiple threads

Logging from multiple threads requires no special effort. The following example shows logging from the main (initial) thread and another thread:

    import logging
    import threading
    import time

    def worker(arg):
        while not arg['stop']:
            logging.debug('Hi from myfunc')
            time.sleep(0.5)

    def main():
        logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
        info = {'stop': False}
        thread = threading.Thread(target=worker, args=(info,))
        thread.start()
        while True:
            try:
                logging.debug('Hello from main')
                time.sleep(0.75)
            except KeyboardInterrupt:
                info['stop'] = True
                break
        thread.join()

    if __name__ == '__main__':
        main()

When run, the output should look something like this:

         0 Thread-1 Hi from myfunc
         3 MainThread Hello from main
       505 Thread-1 Hi from myfunc
       755 MainThread Hello from main
      1007 Thread-1 Hi from myfunc
      1507 MainThread Hello from main
      1508 Thread-1 Hi from myfunc
      2010 Thread-1 Hi from myfunc
      2258 MainThread Hello from main
      2512 Thread-1 Hi from myfunc
      3009 MainThread Hello from main
      3013 Thread-1 Hi from myfunc
      3515 Thread-1 Hi from myfunc
      3761 MainThread Hello from main
      4017 Thread-1 Hi from myfunc
      4513 MainThread Hello from main
      4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach works for more threads than shown here, of course.

Multiple handlers and formatters

Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers. The logging calls in the application code will remain unchanged. Here is a slight modification to the previous simple module-based configuration example:

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh.

The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: unlike the print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and remain dormant until you need them again. At that time, the only change that needs to happen is to modify the severity level of the logger and/or handler to debug.
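For instance, here is a minimal sketch of that dormant-debugging workflow (the logger name and messages are illustrative):

    import logging

    logger = logging.getLogger('simple_example')
    logger.addHandler(logging.StreamHandler())

    # In normal operation, keep the threshold above DEBUG ...
    logger.setLevel(logging.INFO)
    logger.debug('intermediate value: %s', 42)   # dormant - nothing is emitted

    # ... and lower it again when you need to debug.
    logger.setLevel(logging.DEBUG)
    logger.debug('intermediate value: %s', 42)   # now emitted to the console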

Logging to multiple destinations

Let's say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let's also assume that the file should contain timestamps, but the console messages should not. Here's how you can achieve this:

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/tmp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:
    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see:

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like:

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations.

This example uses console and file handlers, but you can use any number and combination of handlers you choose.

Configuration server example

Here is an example of a module using the logging configuration server:

    import logging
    import logging.config
    import time
    import os

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warn message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')

Dealing with handlers that block

Sometimes you have to get your logging handlers to do their work without blocking the thread you're logging from. This is common in web applications, though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer's control (for example, a poorly performing mail or network infrastructure). But almost any network-based handler can block: even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to their size. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code. If you are a library developer who has performance-critical threads in their code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code.

The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it's passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing.

The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit.

An example of the use of these two classes follows (imports omitted):

    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

which, when run, will produce:

    MainThread: Look out!

Changed in version 3.5: Prior to Python 3.5, the QueueListener always passed every message received from the queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing a keyword argument respect_handler_level=True to the listener's constructor. When this is done, the listener compares the level of each message with the handler's level, and only passes a message to a handler if it's appropriate to do so.
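For instance, a minimal sketch (assuming Python 3.5 or later; handler choices and file names are illustrative) in which a DEBUG record reaches the file handler but not an ERROR-level console handler:

    import logging
    import queue
    from logging.handlers import QueueHandler, QueueListener

    que = queue.Queue(-1)
    console = logging.StreamHandler()
    console.setLevel(logging.ERROR)           # console should only see errors
    filelog = logging.FileHandler('app.log')
    filelog.setLevel(logging.DEBUG)
    # With respect_handler_level=True, the listener compares each record's
    # level with each handler's level before dispatching to it.
    listener = QueueListener(que, console, filelog, respect_handler_level=True)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(QueueHandler(que))
    listener.start()
    root.debug('goes to app.log, but not to the console')
    listener.stop()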

Sending and receiving logging events across a network

Let's say you want to send logging events across a network, and handle them at the receiving end. A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end:

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:
    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example:

    import pickle
    import logging
    import logging.handlers
    import socketserver
    import struct


    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = True

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is printed on the console; on the server side, you should see something like:

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If these affect you, you can use an alternative serialization scheme by overriding the makePickle() method and implementing your alternative there, as well as adapting the above script to use your alternative serialization.
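For illustration only (this is not part of the example above), a SocketHandler subclass might serialize records as length-prefixed JSON rather than pickle; the receiver's unPickle() would then need a matching json.loads() based replacement:

    import json
    import logging.handlers
    import struct

    class JSONSocketHandler(logging.handlers.SocketHandler):
        def makePickle(self, record):
            # Same 4-byte big-endian length prefix as the pickle version,
            # but the payload is UTF-8 encoded JSON.
            d = dict(record.__dict__)
            d['msg'] = record.getMessage()   # merge args into the message
            d['args'] = None
            d['exc_info'] = None             # tracebacks aren't JSON-serializable
            data = json.dumps(d).encode('utf-8')
            return struct.pack('>L', len(data)) + data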

Adding contextual information to your logging output

Sometimes you want logging output to contain contextual information in addition to the parameters passed to the logging call. For example, in a networked application, it may be desirable to log client-specific information in the log (e.g. remote client's username, or IP address). Although you could use the extra parameter to achieve this, it's not always convenient to pass the information in this way. While it might be tempting to create Logger instances on a per-connection basis, this is not a good idea because these instances are not garbage collected. While this is not a problem in practice, when the number of Logger instances is dependent on the level of granularity you want to use in logging an application, it could be hard to manage if the number of Logger instances becomes effectively unbounded.

Using LoggerAdapters to impart contextual information

An easy way in which you can pass contextual information to be output along with logging event information is to use the LoggerAdapter class. This class is designed to look like a Logger, so that you can call debug(), info(), warning(), error(), exception(), critical() and log(). These methods have the same signatures as their counterparts in Logger, so you can use the two types of instances interchangeably.

When you create an instance of LoggerAdapter, you pass it a Logger instance and a dict-like object which contains your contextual information. When you call one of the logging methods on an instance of LoggerAdapter, it delegates the call to the underlying instance of Logger passed to its constructor, and arranges to pass the contextual information in the delegated call. Here's a snippet from the code of LoggerAdapter:

    def debug(self, msg, /, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The process() method of LoggerAdapter is where the contextual information is added to the logging output. It's passed the message and keyword arguments of the logging call, and it passes back (potentially) modified versions of these to use in the call to the underlying logger. The default implementation of this method leaves the message alone, but inserts an 'extra' key in the keyword arguments whose value is the dict-like object passed to the constructor. Of course, if you had passed an 'extra' keyword argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are merged into the LogRecord instance's __dict__, allowing you to use customized strings with your Formatter instances which know about the keys of the dict-like object. If you need a different method, e.g. if you want to prepend or append the contextual information to the message string, you just need to subclass LoggerAdapter and override process() to do what you need. Here is a simple example:

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this:

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of some_conn_id prepended to the log messages.

Using objects other than dicts to pass contextual information

You don't need to pass an actual dict to a LoggerAdapter - you could pass an instance of a class which implements __getitem__ and __iter__ so that it looks like a dict to logging. This would be useful if you want to generate values dynamically (whereas the values in a dict would be constant).
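A minimal sketch of that idea (the RuntimeContext class and its cwd key are invented for illustration): each time a message is logged, the value is computed afresh rather than frozen at adapter creation time.

    import logging
    import os

    class RuntimeContext:
        # Looks like a dict to logging, but computes values lazily.
        def __getitem__(self, name):
            if name == 'cwd':
                return os.getcwd()      # evaluated at log time, not at setup
            raise KeyError(name)

        def __iter__(self):
            return iter(['cwd'])

    logging.basicConfig(format='%(cwd)s %(message)s')
    logger = logging.getLogger(__name__)
    adapter = logging.LoggerAdapter(logger, RuntimeContext())
    adapter.warning('Hello')   # '%(cwd)s' shows the *current* working directory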

Using Filters to impart contextual information

You can also add contextual information to log output using a user-defined Filter. Filter instances are allowed to modify the LogRecords passed to them, including adding additional attributes which can then be output using a suitable format string, or if needed a custom Formatter.

For example, in a web application, the request being processed (or at least, the interesting parts of it) can be stored in a threadlocal (threading.local) variable, and then accessed from a Filter to add, say, information from the request - say, the remote IP address and remote user's username - to the LogRecord, using the attribute names 'ip' and 'user' as in the LoggerAdapter example above. In that case, the same format string can be used to get similar output to that shown above. Here's an example script:

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like:

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters

Logging to a single file from multiple processes

Although logging is thread-safe, and logging to a single file from multiple threads in a single process is supported, logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python. If you need to log to a single file from multiple processes, one way of doing this is to have all the processes log to a SocketHandler, and have a separate process which implements a socket server which reads from the socket and logs to file. (If you prefer, you can dedicate one thread in one of the existing processes to perform this function.) The preceding section on sending and receiving logging events across a network documents this approach in more detail and includes a working socket receiver which can be used as a starting point for you to adapt in your own applications.

You could also write your own handler which uses the Lock class from the multiprocessing module to serialize access to the file from your processes. The existing FileHandler and subclasses do not make use of multiprocessing at present, though they may do so in the future. Note that at present, the multiprocessing module does not provide working lock functionality on all platforms (see https://bugs.python.org/issue3770).
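A minimal sketch of such a handler (illustrative only, and subject to the platform caveat above): the lock has to be created before the worker processes are started, so that they all share the same lock object.

    import logging
    import multiprocessing

    class LockedFileHandler(logging.FileHandler):
        # Serializes emit() across processes via a shared multiprocessing.Lock.
        def __init__(self, filename, lock, mode='a', encoding=None):
            super().__init__(filename, mode, encoding)
            self._mp_lock = lock

        def emit(self, record):
            with self._mp_lock:       # only one process writes at a time
                super().emit(record)

    lock = multiprocessing.Lock()     # create before forking/spawning workers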

Alternatively, you can use a Queue and a QueueHandler to send all logging events to one of the processes in your multi-process application. The following example script demonstrates how you can do this; in the example a separate listener process listens for events sent by other processes and logs them according to its own logging configuration. Although the example only demonstrates one way of doing it (for example, you may want to use a listener thread rather than a separate listener process - the implementation would be analogous), it does allow for completely different logging configurations for the listener and the other processes in your application, and can be used as the basis for code meeting your own specific requirements:

    # You'll need these imports in your own code
    import logging
    import logging.handlers
    import multiprocessing

    # Next two import lines for this demo only
    from random import choice, random
    import time

    #
    # Because you'll want to define the logging configurations for listener and workers, the
    # listener and worker process functions take a configurer parameter which is a callable
    # for configuring logging for that process. These functions are also passed the queue,
    # which they use for communication.
    #
    # In practice, you can configure the listener however you want, but note that in this
    # simple example, the listener does not apply level or filter logic to received records.
    # In practice, you would probably want to do this logic in the worker processes, to avoid
    # sending events which would be filtered out between processes.
    #
    # The size of the rotated files is made small so you can see the results easily.
    def listener_configurer():
        root = logging.getLogger()
        h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
        f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
        h.setFormatter(f)
        root.addHandler(h)

    # This is the listener process top-level loop: wait for logging events
    # (LogRecords) on the queue and handle them, quit when you get a None for a
    # LogRecord.
    def listener_process(queue, configurer):
        configurer()
        while True:
            try:
                record = queue.get()
                if record is None:  # We send this as a sentinel to tell the listener to quit.
                    break
                logger = logging.getLogger(record.name)
                logger.handle(record)  # No level or filter logic applied - just do it!
            except Exception:
                import sys, traceback
                print('Whoops! Problem:', file=sys.stderr)
                traceback.print_exc(file=sys.stderr)

    # Arrays used for random selections in this demo
    LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL]

    LOGGERS = ['a.b.c', 'd.e.f']

    MESSAGES = [
        'Random message #1',
        'Random message #2',
        'Random message #3',
    ]

    # The worker configuration is done at the start of the worker process run.
    # Note that on Windows you can't rely on fork semantics, so each process
    # will run the logging configuration code when it starts.
    def worker_configurer(queue):
        h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
        root = logging.getLogger()
        root.addHandler(h)
        # send all messages, for demo; no other level or filter logic applied.
        root.setLevel(logging.DEBUG)

    # This is the worker process top-level loop, which just logs ten events with
    # random intervening delays before terminating.
    # The print messages are just so you know it's doing something!
    def worker_process(queue, configurer):
        configurer(queue)
        name = multiprocessing.current_process().name
        print('Worker started: %s' % name)
        for i in range(10):
            time.sleep(random())
            logger = logging.getLogger(choice(LOGGERS))
            level = choice(LEVELS)
            message = choice(MESSAGES)
            logger.log(level, message)
        print('Worker finished: %s' % name)

    # Here's where the demo gets orchestrated. Create the queue, create and start
    # the listener, create ten workers and start them, wait for them to finish,
    # then send a None to the queue to tell the listener to finish.
    def main():
        queue = multiprocessing.Queue(-1)
        listener = multiprocessing.Process(target=listener_process,
                                           args=(queue, listener_configurer))
        listener.start()
        workers = []
        for i in range(10):
            worker = multiprocessing.Process(target=worker_process,
                                             args=(queue, worker_configurer))
            workers.append(worker)
            worker.start()
        for w in workers:
            w.join()
        queue.put_nowait(None)
        listener.join()

    if __name__ == '__main__':
        main()

A variant of the above script keeps the logging in the main process, but uses a separate thread:

    import logging
    import logging.config
    import logging.handlers
    from multiprocessing import Process, Queue
    import random
    import threading
    import time

    def logger_thread(q):
        while True:
            record = q.get()
            if record is None:
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)


    def worker_process(q):
        qh = logging.handlers.QueueHandler(q)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(qh)
        levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                  logging.CRITICAL]
        loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                   'spam', 'spam.ham', 'spam.ham.eggs']
        for i in range(100):
            lvl = random.choice(levels)
            logger = logging.getLogger(random.choice(loggers))
            logger.log(lvl, 'Message no. %d', i)


    if __name__ == '__main__':
        q = Queue()
        d = {
            'version': 1,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO',
                },
                'file': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'foofile': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-foo.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'errors': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-errors.log',
                    'mode': 'w',
                    'level': 'ERROR',
                    'formatter': 'detailed',
                },
            },
            'loggers': {
                'foo': {
                    'handlers': ['foofile']
                }
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['console', 'file', 'errors']
            },
        }
        workers = []
        for i in range(5):
            wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
            workers.append(wp)
            wp.start()
        logging.config.dictConfig(d)
        lp = threading.Thread(target=logger_thread, args=(q,))
        lp.start()
        # At this point, the main process could do some useful work of its own
        # Once it's done that, it can wait for the workers to terminate...
        for wp in workers:
            wp.join()
        # And now tell the logging thread to finish up, too
        q.put(None)
        lp.join()

This variant shows how you can e.g. apply configuration for particular loggers - e.g. the foo logger has a special handler which stores all events in the foo subsystem in a file mplog-foo.log. This will be used by the logging machinery in the main process (even though the logging events are generated in the worker processes) to direct the messages to the appropriate destinations.

Using concurrent.futures.ProcessPoolExecutor

If you want to use concurrent.futures.ProcessPoolExecutor to start your worker processes, you need to create the queue slightly differently. Instead of

    queue = multiprocessing.Queue(-1)

you should use

    queue = multiprocessing.Manager().Queue(-1)  # also works with the examples above

and you can then replace the worker creation from this:

    workers = []
    for i in range(10):
        worker = multiprocessing.Process(target=worker_process,
                                         args=(queue, worker_configurer))
        workers.append(worker)
        worker.start()
    for w in workers:
        w.join()

to this (remembering to first import concurrent.futures):

    with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:
        for i in range(10):
            executor.submit(worker_process, queue, worker_configurer)

Using file rotation

Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler:

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
        LOG_FILENAME, maxBytes=20, backupCount=5)
    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)
    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the application:

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .6 file is erased.

Obviously this example sets the log length much too small as an extreme example. You would want to set maxBytes to an appropriate value.

Use of alternative formatting styles

When logging was added to the Python standard library, the only way of formatting messages with variable content was to use the %-formatting method. Since then, Python has gained two new formatting approaches: string.Template (added in Python 2.4) and str.format() (added in Python 2.6).

Logging (as of 3.2) provides improved support for these two additional formatting styles. The Formatter class has been enhanced to take an additional, optional keyword parameter named style. This defaults to '%', but other possible values are '{' and '$', which correspond to the other two formatting styles. Backwards compatibility is maintained by default (as you would expect), but by explicitly specifying a style parameter, you get the ability to specify format strings which work with str.format() or string.Template. Here's an example console session to show the possibilities:

    >>> import logging
    >>> root = logging.getLogger()
    >>> root.setLevel(logging.DEBUG)
    >>> handler = logging.StreamHandler()
    >>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
    ...                        style='{')
    >>> handler.setFormatter(bf)
    >>> root.addHandler(handler)
    >>> logger = logging.getLogger('foo.bar')
    >>> logger.debug('This is a DEBUG message')
    2010-10-28 15:11:55,341 foo.bar DEBUG    This is a DEBUG message
    >>> logger.critical('This is a CRITICAL message')
    2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message
    >>> df = logging.Formatter('$asctime $name ${levelname} $message',
    ...                        style='$')
    >>> handler.setFormatter(df)
    >>> logger.debug('This is a DEBUG message')
    2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message
    >>> logger.critical('This is a CRITICAL message')
    2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message
    >>>

Note that the formatting of logging messages for final output to logs is completely independent of how an individual logging message is constructed. That can still use %-formatting, as shown here:

    >>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')
    2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message
    >>>

Logging calls (logger.debug(), logger.info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the actual logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings.

There is, however, a way that you can use {}- and $- formatting to construct your individual log messages. Recall that for a message you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes:

    class BraceMessage:
        def __init__(self, fmt, /, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            return self.fmt.format(*self.args, **self.kwargs)

    class DollarMessage:
        def __init__(self, fmt, /, **kwargs):
            self.fmt = fmt
            self.kwargs = kwargs

        def __str__(self):
            from string import Template
            return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual "message" part which appears in the formatted log output in place of "%(message)s" or "{message}" or "$message". It's a little unwieldy to use the class names whenever you want to log something, but it's quite palatable if you use an alias such as __ (double underscore, not to be confused with _, the single underscore used as a synonym/alias for gettext.gettext() or its brethren).

The above classes are not included in Python, though they’re easy enough to copy and paste into your own code. They can be used as follows (assuming that they’re declared in a module called wherever):

    >>> from wherever import BraceMessage as __
    >>> print(__('Message with {0} {name}', 2, name='placeholders'))
    Message with 2 placeholders
    >>> class Point: pass
    ...
    >>> p = Point()
    >>> p.x = 0.5
    >>> p.y = 0.5
    >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})',
    ...          point=p))
    Message with coordinates: (0.50, 0.50)
    >>> from wherever import DollarMessage as __
    >>> print(__('Message with $num $what', num=2, what='placeholders'))
    Message with 2 placeholders
    >>>

While the above examples use print() to show how the formatting works, you would of course use logger.debug() or similar to actually log using this approach.

One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That’s because the __ notation is just syntax sugar for a constructor call to one of the XXXMessage classes.

If you prefer, you can use a LoggerAdapter to achieve a similar effect to the above, as in the following example:

    import logging

    class Message:
        def __init__(self, fmt, args):
            self.fmt = fmt
            self.args = args

        def __str__(self):
            return self.fmt.format(*self.args)

    class StyleAdapter(logging.LoggerAdapter):
        def __init__(self, logger, extra=None):
            super().__init__(logger, extra or {})

        def log(self, level, msg, /, *args, **kwargs):
            if self.isEnabledFor(level):
                msg, kwargs = self.process(msg, kwargs)
                self.logger._log(level, Message(msg, args), (), **kwargs)

    logger = StyleAdapter(logging.getLogger(__name__))

    def main():
        logger.debug('Hello, {}', 'world!')

    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        main()

The above script should log the message Hello, world! when run with Python 3.2 or later.

Customizing LogRecord

Every logging event is represented by a LogRecord instance. When an event is logged and not filtered out by a logger’s level, a LogRecord is created, populated with information about the event and then passed to the handlers for that logger (and its ancestors, up to and including the logger where further propagation up the hierarchy is disabled). Before Python 3.2, there were only two places where this creation was done:

  • Logger.makeRecord(), which is called in the normal process of logging an event. This invoked LogRecord directly to create an instance.

  • makeLogRecord(), which is called with a dictionary containing attributes to be added to the LogRecord. This is typically invoked when a suitable dictionary has been received over the network (e.g. in pickle form via a SocketHandler, or in JSON form via an HTTPHandler).

This has usually meant that if you need to do anything special with a LogRecord, you’ve had to do one of the following.

  • Create your own Logger subclass, which overrides Logger.makeRecord(), and set it using setLoggerClass() before any loggers that you care about are instantiated.

  • Add a Filter to a logger or handler, which does the necessary special manipulation you need when its filter() method is called.

The first approach would be a little unwieldy in the scenario where (say) several different libraries wanted to do different things. Each would attempt to set its own Logger subclass, and the one which did this last would win.

The second approach works reasonably well for many cases, but does not allow you to e.g. use a specialized subclass of LogRecord. Library developers can set a suitable filter on their loggers, but they would have to remember to do this every time they introduced a new logger (which they would do simply by adding new packages or modules and doing

    logger = logging.getLogger(__name__)

at module level). It's probably one too many things to think about. Developers could also add the filter to a NullHandler attached to their top-level logger, but this would not be invoked if an application developer attached a handler to a lower-level library logger - so output from that handler would not reflect the intentions of the library developer.

In Python 3.2 and later, LogRecord creation is done through a factory, which you can specify. The factory is just a callable you can set with setLogRecordFactory(), and interrogate with getLogRecordFactory(). The factory is invoked with the same signature as the LogRecord constructor, as LogRecord is the default setting for the factory.

This approach allows a custom factory to control all aspects of LogRecord creation. For example, you could return a subclass, or just add some additional attributes to the record once created, using a pattern similar to this:

    old_factory = logging.getLogRecordFactory()

    def record_factory(*args, **kwargs):
        record = old_factory(*args, **kwargs)
        record.custom_attribute = 0xdecafbad
        return record

    logging.setLogRecordFactory(record_factory)

This pattern allows different libraries to chain factories together, and as long as they don’t overwrite each other’s attributes or unintentionally overwrite the attributes provided as standard, there should be no surprises. However, it should be borne in mind that each link in the chain adds run-time overhead to all logging operations, and the technique should only be used when the use of a Filter does not provide the desired result.

Subclassing QueueHandler - a ZeroMQ example

You can use a QueueHandler subclass to send messages to other kinds of queues, for example a ZeroMQ 'publish' socket. In the example below, the socket is created separately and passed to the handler (as its 'queue'):

    import zmq   # using pyzmq, the Python binding for ZeroMQ
    import json  # for serializing records portably

    ctx = zmq.Context()
    sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
    sock.bind('tcp://*:5556')        # or wherever

    class ZeroMQSocketHandler(QueueHandler):
        def enqueue(self, record):
            self.queue.send_json(record.__dict__)

    handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the data needed by the handler to create the socket:

    class ZeroMQSocketHandler(QueueHandler):
        def __init__(self, uri, socktype=zmq.PUB, ctx=None):
            self.ctx = ctx or zmq.Context()
            socket = zmq.Socket(self.ctx, socktype)
            socket.bind(uri)
            super().__init__(socket)

        def enqueue(self, record):
            self.queue.send_json(record.__dict__)

        def close(self):
            self.queue.close()

Subclassing QueueListener - a ZeroMQ example

You can also subclass QueueListener to get messages from other kinds of queues, for example a ZeroMQ ‘subscribe’ socket. Here’s an example:

    class ZeroMQSocketListener(QueueListener):
        def __init__(self, uri, /, *handlers, **kwargs):
            self.ctx = kwargs.get('ctx') or zmq.Context()
            socket = zmq.Socket(self.ctx, zmq.SUB)
            socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything
            socket.connect(uri)
            super().__init__(socket, *handlers, **kwargs)

        def dequeue(self):
            msg = self.queue.recv_json()
            return logging.makeLogRecord(msg)

See also

Module logging

API reference for the logging module.

Module logging.config

Configuration API for the logging module.

Module logging.handlers

Useful handlers included with the logging module.

A basic logging tutorial

A more advanced logging tutorial

An example dictionary-based configuration

Below is an example of a logging configuration dictionary - it’s taken from the documentation on the Django project. This dictionary is passed to dictConfig() to put the configuration into effect:

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level':'DEBUG',
                'class':'django.utils.log.NullHandler',
            },
            'console':{
                'level':'DEBUG',
                'class':'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers':['null'],
                'propagate': True,
                'level':'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the relevant section of the Django documentation.

Using a rotator and namer to customize log rotation processing

An example of how you can define a namer and rotator is given in the following snippet, which shows zlib-based compression of the log file:

    def namer(name):
        return name + ".gz"

    def rotator(source, dest):
        with open(source, "rb") as sf:
            data = sf.read()
            compressed = zlib.compress(data, 9)
            with open(dest, "wb") as df:
                df.write(compressed)
        os.remove(source)

    rh = logging.handlers.RotatingFileHandler(...)
    rh.rotator = rotator
    rh.namer = namer

These are not “true” .gz files, as they are bare compressed data, with no “container” such as you’d find in an actual gzip file. This snippet is just for illustration purposes.
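If you want rotated files that standard tools such as zcat can read, one option (a sketch, not part of the original snippet) is to write a real gzip container using the gzip module instead of bare zlib data:

    import gzip
    import os
    import shutil

    def rotator(source, dest):
        # gzip.open writes a proper gzip container, unlike raw zlib output.
        with open(source, 'rb') as sf, gzip.open(dest, 'wb') as df:
            shutil.copyfileobj(sf, df)
        os.remove(source)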

A more elaborate multiprocessing example

The following working example shows how logging can be used with multiprocessing using configuration files. The configurations are fairly simple, but serve to illustrate how more complex ones could be implemented in a real multiprocessing scenario.

In the example, the main process spawns a listener process and some worker processes. Each of the main process, the listener and the workers have three separate configurations (the workers all share the same configuration). We can see logging in the main process, how the workers log to a QueueHandler and how the listener implements a QueueListener and a more complex logging configuration, and arranges to dispatch events received via the queue to the handlers specified in the configuration. Note that these configurations are purely illustrative, but you should be able to adapt this example to your own scenario.

Here’s the script - the docstrings and the comments hopefully explain how it works:

    import logging
    import logging.config
    import logging.handlers
    from multiprocessing import Process, Queue, Event, current_process
    import os
    import random
    import time

    class MyHandler:
        """
        A simple handler for logging events. It runs in the listener process and
        dispatches events to loggers based on the name in the received record,
        which then get dispatched, by the logging system, to the handlers
        configured for those loggers.
        """

        def handle(self, record):
            if record.name == "root":
                logger = logging.getLogger()
            else:
                logger = logging.getLogger(record.name)

            if logger.isEnabledFor(record.levelno):
                # The process name is transformed just to show that it's the listener
                # doing the logging to files and console
                record.processName = '%s (for %s)' % (current_process().name, record.processName)
                logger.handle(record)

    def listener_process(q, stop_event, config):
        """
        This could be done in the main process, but is just done in a separate
        process for illustrative purposes.

        This initialises logging according to the specified configuration,
        starts the listener and waits for the main process to signal completion
        via the event. The listener is then stopped, and the process exits.
        """
        logging.config.dictConfig(config)
        listener = logging.handlers.QueueListener(q, MyHandler())
        listener.start()
        if os.name == 'posix':
            # On POSIX, the setup logger will have been configured in the
            # parent process, but should have been disabled following the
            # dictConfig call.
            # On Windows, since fork isn't used, the setup logger won't
            # exist in the child, so it would be created and the message
            # would appear - hence the "if posix" clause.
            logger = logging.getLogger('setup')
            logger.critical('Should not appear, because of disabled logger ...')
        stop_event.wait()
        listener.stop()

    def worker_process(config):
        """
        A number of these are spawned for the purpose of illustration. In
        practice, they could be a heterogeneous bunch of processes rather than
        ones which are identical to each other.

        This initialises logging according to the specified configuration,
        and logs a hundred messages with random levels to randomly selected
        loggers.

        A small sleep is added to allow other processes a chance to run. This
        is not strictly needed, but it mixes the output from the different
        processes a bit more than if it's left out.
        """
        logging.config.dictConfig(config)
        levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                  logging.CRITICAL]
        loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                   'spam', 'spam.ham', 'spam.ham.eggs']
        if os.name == 'posix':
            # On POSIX, the setup logger will have been configured in the
            # parent process, but should have been disabled following the
            # dictConfig call.
            # On Windows, since fork isn't used, the setup logger won't
            # exist in the child, so it would be created and the message
            # would appear - hence the "if posix" clause.
            logger = logging.getLogger('setup')
            logger.critical('Should not appear, because of disabled logger ...')
        for i in range(100):
            lvl = random.choice(levels)
            logger = logging.getLogger(random.choice(loggers))
            logger.log(lvl, 'Message no. %d', i)
            time.sleep(0.01)

    def main():
        q = Queue()
        # The main process gets a simple configuration which prints to the console.
        config_initial = {
            'version': 1,
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO'
                }
            },
            'root': {
                'handlers': ['console'],
                'level': 'DEBUG'
            }
        }
        # The worker process configuration is just a QueueHandler attached to the
        # root logger, which allows all messages to be sent to the queue.
        # We disable existing loggers to disable the "setup" logger used in the
        # parent process. This is needed on POSIX because the logger will
        # be there in the child following a fork().
        config_worker = {
            'version': 1,
            'disable_existing_loggers': True,
            'handlers': {
                'queue': {
                    'class': 'logging.handlers.QueueHandler',
                    'queue': q
                }
            },
            'root': {
                'handlers': ['queue'],
                'level': 'DEBUG'
            }
        }
        # The listener process configuration shows that the full flexibility of
        # logging configuration is available to dispatch events to handlers however
        # you want.
        # We disable existing loggers to disable the "setup" logger used in the
        # parent process. This is needed on POSIX because the logger will
        # be there in the child following a fork().
        config_listener = {
            'version': 1,
            'disable_existing_loggers': True,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                },
                'simple': {
                    'class': 'logging.Formatter',
                    'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'formatter': 'simple',
                    'level': 'INFO'
                },
                'file': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog.log',
                    'mode': 'w',
                    'formatter': 'detailed'
                },
                'foofile': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-foo.log',
                    'mode': 'w',
                    'formatter': 'detailed'
                },
                'errors': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-errors.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                    'level': 'ERROR'
                }
            },
            'loggers': {
                'foo': {
                    'handlers': ['foofile']
                }
            },
            'root': {
                'handlers': ['console', 'file', 'errors'],
                'level': 'DEBUG'
            }
        }
        # Log some initial events, just to show that logging in the parent works
        # normally.
        logging.config.dictConfig(config_initial)
        logger = logging.getLogger('setup')
        logger.info('About to create workers ...')
        workers = []
        for i in range(5):
            wp = Process(target=worker_process, name='worker %d' % (i + 1),
                         args=(config_worker,))
            workers.append(wp)
            wp.start()
            logger.info('Started worker: %s', wp.name)
        logger.info('About to create listener ...')
        stop_event = Event()
        lp = Process(target=listener_process, name='listener',
                     args=(q, stop_event, config_listener))
        lp.start()
        logger.info('Started listener')
        # We now hang around for the workers to finish their work.
        for wp in workers:
            wp.join()
        # Workers all done, listening can now stop.
        # Logging in the parent still works normally.
        logger.info('Telling listener to stop ...')
        stop_event.set()
        lp.join()
        logger.info('All done.')

    if __name__ == '__main__':
        main()

Inserting a BOM into messages sent to a SysLogHandler

RFC 5424 requires that a Unicode message be sent to a syslog daemon as a set of bytes which have the following structure: an optional pure-ASCII component, followed by a UTF-8 Byte Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the relevant section of the specification.)

In Python 3.1, code was added to SysLogHandler to insert a BOM into the message, but unfortunately, it was implemented incorrectly, with the BOM appearing at the beginning of the message and hence not allowing any pure-ASCII component to appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed from Python 3.2.4 and later. However, it is not being replaced, and if you want to produce RFC 5424-compliant messages which include a BOM, an optional pure-ASCII sequence before it and arbitrary Unicode after it, encoded using UTF-8, then you need to do the following:

  1. Attach a Formatter instance to your SysLogHandler instance, with a format string such as:

    'ASCII section\ufeffUnicode section'

    The Unicode code point U+FEFF, when encoded using UTF-8, will be encoded as a UTF-8 BOM — the byte-string b'\xef\xbb\xbf'.

  2. Replace the ASCII section with whatever placeholders you like, but make sure that the data that appears in there after substitution is always ASCII (that way, it will remain unchanged after UTF-8 encoding).

  3. Replace the Unicode section with whatever placeholders you like; if the data which appears there after substitution contains characters outside the ASCII range, that’s fine — it will be encoded using UTF-8.

The formatted message will be encoded using UTF-8 encoding by SysLogHandler. If you follow the above rules, you should be able to produce RFC 5424-compliant messages. If you don’t, logging may not complain, but your messages will not be RFC 5424-compliant, and your syslog daemon may complain.
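Putting those rules together, a minimal sketch might look like the following (the syslog address and the choice of %(name)s as the ASCII section are assumptions for illustration; they are not mandated by the rules above):

    import logging
    from logging.handlers import SysLogHandler

    handler = SysLogHandler(address='/dev/log')   # address is platform-dependent
    # Everything before U+FEFF must format to pure ASCII (we assume logger
    # names are ASCII here); everything after it may be arbitrary Unicode.
    formatter = logging.Formatter('%(name)s \ufeff%(message)s')
    handler.setFormatter(formatter)
    logger = logging.getLogger('myapp')
    logger.addHandler(handler)
    logger.warning('température élevée')   # non-ASCII is fine after the BOM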

Implementing structured logging

Although most logging messages are intended for reading by humans, and thus not readily machine-parseable, there might be circumstances where you want to output messages in a structured format which is capable of being parsed by a program (without needing complex regular expressions to parse the log message). This is straightforward to achieve using the logging package. There are a number of ways in which this could be achieved, but the following is a simple approach which uses JSON to serialise the event in a machine-parseable manner:

    import json
    import logging

    class StructuredMessage:
        def __init__(self, message, /, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional, to improve readability

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints:

    message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of Python used.
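If you need a stable ordering (say, for tests), one option is to have __str__ sort the keys when serialising:

    return '%s >>> %s' % (self.message, json.dumps(self.kwargs, sort_keys=True))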

If you need more specialised processing, you can use a custom JSON encoder, as in the following complete example:

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super().default(o)

    class StructuredMessage:
        def __init__(self, message, /, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage   # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints:

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of Python used.

Customizing handlers with dictConfig()

There are times when you want to customize logging handlers in particular ways, and if you use dictConfig() you may be able to do this without subclassing. As an example, consider that you may want to set the ownership of a log file. On POSIX, this is easily done using shutil.chown(), but the file handlers in the stdlib don’t offer built-in support. You can customize handler creation using a plain function such as:

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to dictConfig(), that a logging handler be created by calling this function:

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the pulse user and group, just for the purposes of illustration. Putting it together into a working script, chowntest.py:

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as root:

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that’s where shutil.chown() makes an appearance. This approach should work with any Python version that supports dictConfig() - namely, Python 2.7, 3.2 or later. With pre-3.3 versions, you would need to implement the actual ownership change using e.g. os.chown().

In practice, the handler-creating function may be in a utility module somewhere in your project. Instead of the line in the configuration:

    '()': owned_file_handler,

you could use e.g.:

    '()': 'ext://project.util.owned_file_handler',

where project.util can be replaced with the actual name of the package where the function resides. In the above working script, using 'ext://__main__.owned_file_handler' should work. Here, the actual callable is resolved by dictConfig() from the ext:// specification.

This example hopefully also points the way to how you could implement other types of file change - e.g. setting specific POSIX permission bits - in the same way, using os.chmod().
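
For instance, a creator function which sets permission bits might look like the following sketch (the function name and the 0o600 default are illustrative, not part of any stdlib API):

    import logging
    import os

    def permissioned_file_handler(filename, mode='a', encoding=None,
                                  permissions=0o600):
        # Hypothetical helper: make sure the file exists, then restrict its
        # POSIX permission bits before a regular FileHandler opens it.
        if not os.path.exists(filename):
            open(filename, 'a').close()
        os.chmod(filename, permissions)
        return logging.FileHandler(filename, mode, encoding)

You could then reference this function from the '()' key of a handler's configuration, exactly as with owned_file_handler above.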

Of course, the approach could also be extended to types of handler other than a FileHandler - for example, one of the rotating file handlers, or a different type of handler altogether.

Using particular formatting styles throughout your application

In Python 3.2, the Formatter gained a style keyword parameter which, while defaulting to % for backward compatibility, allowed the specification of { or $ to support the formatting approaches supported by str.format() and string.Template. Note that this governs the formatting of logging messages for final output to logs, and is completely orthogonal to how an individual logging message is constructed.
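
To make that orthogonality concrete, here is a minimal sketch: the handler's output template uses {}-style placeholders via the style parameter, while the logging call itself still merges its arguments using %-formatting:

    import logging

    h = logging.StreamHandler()
    # style='{' only affects the *output* template below; it does not change
    # how a message is merged with its arguments.
    h.setFormatter(logging.Formatter('{asctime} {levelname} {message}', style='{'))
    root = logging.getLogger()
    root.addHandler(h)
    root.warning('Hello, %s', 'world')  # argument merging is still %-style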

Logging calls (debug(), info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. This cannot be changed while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings.

There have been suggestions to associate format styles with specific loggers, but that approach also runs into backward compatibility problems because any existing code could be using a given logger name and using %-formatting.

For logging to work interoperably between any third-party libraries and your code, decisions about formatting need to be made at the level of the individual logging call. This opens up a couple of ways in which alternative formatting styles can be accommodated.

Using LogRecord factories

In Python 3.2, along with the Formatter changes mentioned above, the logging package gained the ability to allow users to set their own LogRecord subclasses, using the setLogRecordFactory() function. You can use this to set your own subclass of LogRecord, which does the Right Thing by overriding the getMessage() method. The base class implementation of this method is where the msg % args formatting happens, and where you can substitute your alternate formatting; however, you should be careful to support all formatting styles and allow %-formatting as the default, to ensure interoperability with other code. Care should also be taken to call str(self.msg), just as the base implementation does.
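
Here is a minimal sketch of the idea; the class name and the TypeError-based fallback are assumptions for illustration, and a production version would need to handle more edge cases (such as mapping-style args):

    import logging

    class FlexibleRecord(logging.LogRecord):
        def getMessage(self):
            msg = str(self.msg)  # call str(), as the base implementation does
            if self.args:
                try:
                    msg = msg % self.args         # %-formatting stays the default
                except TypeError:
                    msg = msg.format(*self.args)  # fall back to {}-style
            return msg

    logging.setLogRecordFactory(FlexibleRecord)
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info('%s still works', 'percent-style')
    logging.info('{} works too', 'brace-style')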

Refer to the reference documentation on setLogRecordFactory() and LogRecord for more information.

Using custom message objects

There is another, perhaps simpler way that you can use {}- and $- formatting to construct your individual log messages. You may recall (from Using arbitrary objects as messages) that when logging you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes:

    class BraceMessage:
        def __init__(self, fmt, /, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            return self.fmt.format(*self.args, **self.kwargs)

    class DollarMessage:
        def __init__(self, fmt, /, **kwargs):
            self.fmt = fmt
            self.kwargs = kwargs

        def __str__(self):
            from string import Template
            return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual “message” part which appears in the formatted log output in place of “%(message)s” or “{message}” or “$message”. If you find it a little unwieldy to use the class names whenever you want to log something, you can make it more palatable if you use an alias such as M or _ for the message (or perhaps __, if you are using _ for localization).

Examples of this approach are given below. Firstly, formatting with str.format():

    >>> __ = BraceMessage
    >>> print(__('Message with {0} {1}', 2, 'placeholders'))
    Message with 2 placeholders
    >>> class Point: pass
    ...
    >>> p = Point()
    >>> p.x = 0.5
    >>> p.y = 0.5
    >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p))
    Message with coordinates: (0.50, 0.50)

Secondly, formatting with string.Template:

    >>> __ = DollarMessage
    >>> print(__('Message with $num $what', num=2, what='placeholders'))
    Message with 2 placeholders
    >>>

One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That's because the __ notation is just syntactic sugar for a constructor call to one of the XXXMessage classes shown above.

Configuring filters with dictConfig()

You can configure filters using dictConfig(), though it might not be obvious at first glance how to do it (hence this recipe). Since Filter is the only filter class included in the standard library, and it is unlikely to cater to many requirements (it’s only there as a base class), you will typically need to define your own Filter subclass with an overridden filter() method. To do this, specify the () key in the configuration dictionary for the filter, specifying a callable which will be used to create the filter (a class is the most obvious, but you can provide any callable which returns a Filter instance). Here is a complete example:

    import logging
    import logging.config
    import sys

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            if self.param is None:
                allow = True
            else:
                allow = self.param not in record.msg
            if allow:
                record.msg = 'changed: ' + record.msg
            return allow

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {
                '()': MyFilter,
                'param': 'noshow',
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'filters': ['myfilter']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.debug('hello')
        logging.debug('hello - noshow')

This example shows how you can pass configuration data to the callable which constructs the instance, in the form of keyword parameters. When run, the above script will print:

    changed: hello

which shows that the filter is working as configured.

A couple of extra points to note:

  • If you can't refer to the callable directly in the configuration (e.g. if it lives in a different module, and you can't import it directly where the configuration dictionary is), you can use the form ext://... as described in Access to external objects. For example, you could have used the text 'ext://__main__.MyFilter' instead of MyFilter in the above example.

  • As well as for filters, this technique can also be used to configure custom handlers and formatters. See User-defined objects for more information on how logging supports using user-defined objects in its configuration, and see the other cookbook recipe Customizing handlers with dictConfig() above.

Customized exception formatting

There might be times when you want to do customized exception formatting - for argument’s sake, let’s say you want exactly one line per logged event, even when exception information is present. You can do this with a custom formatter class, as shown in the following example:

    import logging

    class OneLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            """
            Format an exception so that it prints on a single line.
            """
            result = super().formatException(exc_info)
            return repr(result)  # or format into one line however you want to

        def format(self, record):
            s = super().format(record)
            if record.exc_text:
                s = s.replace('\n', '') + '|'
            return s

    def configure_logging():
        fh = logging.FileHandler('output.txt', 'w')
        f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
                                      '%d/%m/%Y %H:%M:%S')
        fh.setFormatter(f)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(fh)

    def main():
        configure_logging()
        logging.info('Sample message')
        try:
            x = 1 / 0
        except ZeroDivisionError as e:
            logging.exception('ZeroDivisionError: %s', e)

    if __name__ == '__main__':
        main()

When run, this produces a file with exactly two lines:

    28/01/2015 07:21:23|INFO|Sample message|
    28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n  File "logtest7.py", line 30, in main\n    x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|

While the above treatment is simplistic, it points the way to how exception information can be formatted to your liking. The traceback module may be helpful for more specialized needs.
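
For instance, a variant of the formatter above might use the traceback module instead of repr() to flatten the exception into a pipe-delimited single line; this is only a sketch, and the class name is illustrative:

    import logging
    import traceback

    class PipeDelimitedExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            # Flatten the traceback, joining its lines with '|' rather than
            # wrapping the whole thing in repr().
            lines = traceback.format_exception(*exc_info)
            return '|'.join(line.rstrip('\n').replace('\n', '|') for line in lines)

You would still override format() as shown earlier to remove the newline which the base class inserts before the exception text.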

Speaking logging messages

There might be situations when it is desirable to have logging messages rendered in an audible rather than a visible format. This is easy to do if you have text-to-speech (TTS) functionality available in your system, even if it doesn't have a Python binding. Most TTS systems have a command line program you can run, and this can be invoked from a handler using subprocess. It's assumed here that TTS command line programs won't expect to interact with users or take a long time to complete, that the frequency of logged messages will not be so high as to swamp the user with messages, and that it's acceptable to have the messages spoken one at a time rather than concurrently. The example implementation below waits for one message to be spoken before the next is processed, and this might cause other handlers to be kept waiting. Here is a short example showing the approach, which assumes that the espeak TTS package is available:

    import logging
    import subprocess
    import sys

    class TTSHandler(logging.Handler):
        def emit(self, record):
            msg = self.format(record)
            # Speak slowly in a female English voice
            cmd = ['espeak', '-s150', '-ven+f3', msg]
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            # wait for the program to finish
            p.communicate()

    def configure_logging():
        h = TTSHandler()
        root = logging.getLogger()
        root.addHandler(h)
        # the default formatter just returns the message
        root.setLevel(logging.DEBUG)

    def main():
        logging.info('Hello')
        logging.debug('Goodbye')

    if __name__ == '__main__':
        configure_logging()
        sys.exit(main())

When run, this script should say “Hello” and then “Goodbye” in a female voice.

The above approach can, of course, be adapted to other TTS systems and even other systems altogether which can process messages via external programs run from a command line.

Buffering logging messages and outputting them conditionally

There might be situations where you want to log messages in a temporary area and only output them if a certain condition occurs. For example, you may want to start logging debug events in a function, and if the function completes without errors, you don't want to clutter the log with the collected debug information, but if there is an error, you want all the debug information to be output as well as the error.

Here is an example which shows how you could do this using a decorator for your functions where you want logging to behave this way. It makes use of logging.handlers.MemoryHandler, which allows buffering of logged events until some condition occurs, at which point the buffered events are flushed - passed to another handler (the target handler) for processing. By default, the MemoryHandler is flushed when its buffer gets filled up or an event whose level is greater than or equal to a specified threshold is seen. You can use this recipe with a more specialised subclass of MemoryHandler if you want custom flushing behavior.

The example script has a simple function, foo, which just cycles through all the logging levels, writing to sys.stderr to say at which level it's about to log, and then actually logging a message at that level. You can pass a parameter to foo which, if true, will log at the ERROR and CRITICAL levels - otherwise, it only logs at DEBUG, INFO and WARNING levels.

The script just arranges to decorate foo with a decorator which will do the conditional logging that's required. The decorator takes a logger as a parameter and attaches a memory handler for the duration of the call to the decorated function. The decorator can additionally be parameterised with a target handler, a level at which flushing should occur, and a capacity for the buffer (number of records buffered). These default to a StreamHandler which writes to sys.stderr, logging.ERROR and 100 respectively.

Here's the script:

    import logging
    from logging.handlers import MemoryHandler
    import sys

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())

    def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
        if target_handler is None:
            target_handler = logging.StreamHandler()
        if flush_level is None:
            flush_level = logging.ERROR
        if capacity is None:
            capacity = 100
        handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

        def decorator(fn):
            def wrapper(*args, **kwargs):
                logger.addHandler(handler)
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logger.exception('call failed')
                    raise
                finally:
                    super(MemoryHandler, handler).flush()
                    logger.removeHandler(handler)
            return wrapper

        return decorator

    def write_line(s):
        sys.stderr.write('%s\n' % s)

    def foo(fail=False):
        write_line('about to log at DEBUG ...')
        logger.debug('Actually logged at DEBUG')
        write_line('about to log at INFO ...')
        logger.info('Actually logged at INFO')
        write_line('about to log at WARNING ...')
        logger.warning('Actually logged at WARNING')
        if fail:
            write_line('about to log at ERROR ...')
            logger.error('Actually logged at ERROR')
            write_line('about to log at CRITICAL ...')
            logger.critical('Actually logged at CRITICAL')
        return fail

    decorated_foo = log_if_errors(logger)(foo)

    if __name__ == '__main__':
        logger.setLevel(logging.DEBUG)
        write_line('Calling undecorated foo with False')
        assert not foo(False)
        write_line('Calling undecorated foo with True')
        assert foo(True)
        write_line('Calling decorated foo with False')
        assert not decorated_foo(False)
        write_line('Calling decorated foo with True')
        assert decorated_foo(True)

When this script is run, the following output should be observed:

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose severity is ERROR or greater, but in that case, any previous events at lower severities are also logged.

You can of course use the conventional means of decoration:

    @log_if_errors(logger)
    def foo(fail=False):
        ...

Formatting times using UTC (GMT) via configuration

Sometimes you want to format times using UTC, which can be done using a class such as UTCFormatter, shown below:

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and you can then use the UTCFormatter in your code instead of Formatter. If you want to do that via configuration, you can use the dictConfig() API with an approach illustrated by the following complete example:

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like:

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each handler.

Using a context manager for selective logging

There are times when it would be useful to temporarily change the logging configuration and revert it back after doing something. For this, a context manager is the most obvious way of saving and restoring the logging context. Here is a simple example of such a context manager, which allows you to optionally change the logging level and add a logging handler purely in the scope of the context manager:

    import logging
    import sys

    class LoggingContext:
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => don't swallow exceptions

If you specify a level value, the logger's level is set to that value in the scope of the with block covered by the context manager. If you specify a handler, it is added to the logger on entry to the block and removed on exit from it. You can also ask the manager to close the handler for you on block exit - you could do this if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the above:

    if __name__ == '__main__':
        logger = logging.getLogger('foo')
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        logger.info('1. This should appear just once on stderr.')
        logger.debug('2. This should not appear.')
        with LoggingContext(logger, level=logging.DEBUG):
            logger.debug('3. This should appear once on stderr.')
        logger.debug('4. This should not appear.')
        h = logging.StreamHandler(sys.stdout)
        with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
            logger.debug('5. This should appear twice - once on stderr and once on stdout.')
        logger.info('6. This should appear just once on stderr.')
        logger.debug('7. This should not appear.')

We initially set the logger's level to INFO, so message #1 appears and message #2 doesn't. We then change the level to DEBUG temporarily in the following with block, and so message #3 appears. After the block exits, the logger's level is restored to INFO and so message #4 doesn't appear. In the next with block, we set the level to DEBUG again but also add a handler writing to sys.stdout. Thus, message #5 appears twice on the console (once via stderr and once via stdout). After the with statement's completion, the status is as it was before, so message #6 appears (like message #1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows:

    $ python logctx.py
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

If we run it again, but pipe stderr to /dev/null, we see the following, which is the only message written to stdout:

    $ python logctx.py 2>/dev/null
    5. This should appear twice - once on stderr and once on stdout.

Once again, but piping stdout to /dev/null, we get:

    $ python logctx.py >/dev/null
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

In this case, the message #5 printed to stdout doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to attach logging filters temporarily, as sketched below. Note that the above code works in Python 2 as well as Python 3.
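
FilterContext below is a sketch of such a variant (the class name is illustrative); it temporarily attaches a filter to a logger and removes it again on exit:

    import logging

    class FilterContext:
        def __init__(self, logger, log_filter):
            self.logger = logger
            self.log_filter = log_filter

        def __enter__(self):
            self.logger.addFilter(self.log_filter)
            return self.logger

        def __exit__(self, et, ev, tb):
            self.logger.removeFilter(self.log_filter)
            # implicit return of None => don't swallow exceptions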

A CLI application starter template

Here’s an example which shows how you can:

  • Use a logging level based on command-line arguments

  • Dispatch to multiple subcommands in separate files, all logging at the same level in a consistent way

  • Make use of simple, minimal configuration

Suppose we have a command-line application whose job is to stop, start or restart some services. This could be organised for the purposes of illustration as a file app.py that is the main script for the application, with individual commands implemented in start.py, stop.py and restart.py. Suppose further that we want to control the verbosity of the application via a command-line argument, defaulting to logging.INFO. Here’s one way that app.py could be written:

    import argparse
    import importlib
    import logging
    import os
    import sys

    def main(args=None):
        scriptname = os.path.basename(__file__)
        parser = argparse.ArgumentParser(scriptname)
        levels = ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL')
        parser.add_argument('--log-level', default='INFO', choices=levels)
        subparsers = parser.add_subparsers(dest='command',
                                           help='Available commands:')
        start_cmd = subparsers.add_parser('start', help='Start a service')
        start_cmd.add_argument('name', metavar='NAME',
                               help='Name of service to start')
        stop_cmd = subparsers.add_parser('stop',
                                         help='Stop one or more services')
        stop_cmd.add_argument('names', metavar='NAME', nargs='+',
                              help='Name of service to stop')
        restart_cmd = subparsers.add_parser('restart',
                                            help='Restart one or more services')
        restart_cmd.add_argument('names', metavar='NAME', nargs='+',
                                 help='Name of service to restart')
        options = parser.parse_args(args)
        # the code to dispatch commands could all be in this file. For the purposes
        # of illustration only, we implement each command in a separate module.
        try:
            mod = importlib.import_module(options.command)
            cmd = getattr(mod, 'command')
        except (ImportError, AttributeError):
            print('Unable to find the code for command \'%s\'' % options.command)
            return 1
        # Could get fancy here and load configuration from file or dictionary
        logging.basicConfig(level=options.log_level,
                            format='%(levelname)s %(name)s %(message)s')
        cmd(options)

    if __name__ == '__main__':
        sys.exit(main())

And the start, stop and restart commands can be implemented in separate modules, like so for starting:

    # start.py
    import logging

    logger = logging.getLogger(__name__)

    def command(options):
        logger.debug('About to start %s', options.name)
        # actually do the command processing here ...
        logger.info('Started the \'%s\' service.', options.name)

and thus for stopping:

    # stop.py
    import logging

    logger = logging.getLogger(__name__)

    def command(options):
        n = len(options.names)
        if n == 1:
            plural = ''
            services = '\'%s\'' % options.names[0]
        else:
            plural = 's'
            services = ', '.join('\'%s\'' % name for name in options.names)
            i = services.rfind(', ')
            services = services[:i] + ' and ' + services[i + 2:]
        logger.debug('About to stop %s', services)
        # actually do the command processing here ...
        logger.info('Stopped the %s service%s.', services, plural)

and similarly for restarting:

    # restart.py
    import logging

    logger = logging.getLogger(__name__)

    def command(options):
        n = len(options.names)
        if n == 1:
            plural = ''
            services = '\'%s\'' % options.names[0]
        else:
            plural = 's'
            services = ', '.join('\'%s\'' % name for name in options.names)
            i = services.rfind(', ')
            services = services[:i] + ' and ' + services[i + 2:]
        logger.debug('About to restart %s', services)
        # actually do the command processing here ...
        logger.info('Restarted the %s service%s.', services, plural)

If we run this application with the default log level, we get output like this:

    $ python app.py start foo
    INFO start Started the 'foo' service.
    $ python app.py stop foo bar
    INFO stop Stopped the 'foo' and 'bar' services.
    $ python app.py restart foo bar baz
    INFO restart Restarted the 'foo', 'bar' and 'baz' services.

The first word is the logging level, and the second word is the module or package name of the place where the event was logged.

If we change the logging level, then we can change the information sent to the log. For example, if we want more information:

    $ python app.py --log-level DEBUG start foo
    DEBUG start About to start foo
    INFO start Started the 'foo' service.
    $ python app.py --log-level DEBUG stop foo bar
    DEBUG stop About to stop 'foo' and 'bar'
    INFO stop Stopped the 'foo' and 'bar' services.
    $ python app.py --log-level DEBUG restart foo bar baz
    DEBUG restart About to restart 'foo', 'bar' and 'baz'
    INFO restart Restarted the 'foo', 'bar' and 'baz' services.

And if we want less:

    $ python app.py --log-level WARNING start foo
    $ python app.py --log-level WARNING stop foo bar
    $ python app.py --log-level WARNING restart foo bar baz

In this case, the commands don’t print anything to the console, since nothing at WARNING level or above is logged by them.

A Qt GUI for logging

A question that comes up from time to time is about how to log to a GUI application. The Qt framework is a popular cross-platform UI framework with Python bindings using PySide2 or PyQt5 libraries.

The following example shows how to log to a Qt GUI. This introduces a simple QtHandler class which takes a callable, which should be a slot in the main thread that does GUI updates. A worker thread is also created to show how you can log to the GUI from both the UI itself (via a button for manual logging) as well as a worker thread doing work in the background (here, just logging messages at random levels with random short delays in between).

The worker thread is implemented using Qt’s QThread class rather than the threading module, as there are circumstances where one has to use QThread, which offers better integration with other Qt components.

The code should work with recent releases of either PySide2 or PyQt5. You should be able to adapt the approach to earlier versions of Qt. Please refer to the comments in the code snippet for more detailed information.

    import logging
    import random
    import sys
    import time

    # Deal with minor differences between PySide2 and PyQt5
    try:
        from PySide2 import QtCore, QtGui, QtWidgets
        Signal = QtCore.Signal
        Slot = QtCore.Slot
    except ImportError:
        from PyQt5 import QtCore, QtGui, QtWidgets
        Signal = QtCore.pyqtSignal
        Slot = QtCore.pyqtSlot

    logger = logging.getLogger(__name__)

    #
    # Signals need to be contained in a QObject or subclass in order to be correctly
    # initialized.
    #
    class Signaller(QtCore.QObject):
        signal = Signal(str, logging.LogRecord)

    #
    # Output to a Qt GUI is only supposed to happen on the main thread. So, this
    # handler is designed to take a slot function which is set up to run in the main
    # thread. In this example, the function takes a string argument which is a
    # formatted log message, and the log record which generated it. The formatted
    # string is just a convenience - you could format a string for output any way
    # you like in the slot function itself.
    #
    # You specify the slot function to do whatever GUI updates you want. The handler
    # doesn't know or care about specific UI elements.
    #
    class QtHandler(logging.Handler):
        def __init__(self, slotfunc, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.signaller = Signaller()
            self.signaller.signal.connect(slotfunc)

        def emit(self, record):
            s = self.format(record)
            self.signaller.signal.emit(s, record)

    #
    # This example uses QThreads, which means that the threads at the Python level
    # are named something like "Dummy-1". The function below gets the Qt name of the
    # current thread.
    #
    def ctname():
        return QtCore.QThread.currentThread().objectName()

    #
    # Used to generate random levels for logging.
    #
    LEVELS = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL)

    #
    # This worker class represents work that is done in a thread separate to the
    # main thread. The way the thread is kicked off to do work is via a button press
    # that connects to a slot in the worker.
    #
    # Because the default threadName value in the LogRecord isn't much use, we add
    # a qThreadName which contains the QThread name as computed above, and pass that
    # value in an "extra" dictionary which is used to update the LogRecord with the
    # QThread name.
    #
    # This example worker just outputs messages sequentially, interspersed with
    # random delays of the order of a few seconds.
    #
    class Worker(QtCore.QObject):
        @Slot()
        def start(self):
            extra = {'qThreadName': ctname()}
            logger.debug('Started work', extra=extra)
            i = 1
            # Let the thread run until interrupted. This allows reasonably clean
            # thread termination.
            while not QtCore.QThread.currentThread().isInterruptionRequested():
                delay = 0.5 + random.random() * 2
                time.sleep(delay)
                level = random.choice(LEVELS)
                logger.log(level, 'Message after delay of %3.1f: %d', delay, i,
                           extra=extra)
                i += 1

    #
    # Implement a simple UI for this cookbook example. This contains:
    #
    # * A read-only text edit window which holds formatted log messages
    # * A button to start work and log stuff in a separate thread
    # * A button to log something from the main thread
    # * A button to clear the log window
    #
    class Window(QtWidgets.QWidget):

        COLORS = {
            logging.DEBUG: 'black',
            logging.INFO: 'blue',
            logging.WARNING: 'orange',
            logging.ERROR: 'red',
            logging.CRITICAL: 'purple',
        }

        def __init__(self, app):
            super().__init__()
            self.app = app
            self.textedit = te = QtWidgets.QPlainTextEdit(self)
            # Set whatever the default monospace font is for the platform
            f = QtGui.QFont('nosuchfont')
            f.setStyleHint(f.Monospace)
            te.setFont(f)
            te.setReadOnly(True)
            PB = QtWidgets.QPushButton
            self.work_button = PB('Start background work', self)
            self.log_button = PB('Log a message at a random level', self)
            self.clear_button = PB('Clear log window', self)
            self.handler = h = QtHandler(self.update_status)
            # Remember to use qThreadName rather than threadName in the format string.
            fs = '%(asctime)s %(qThreadName)-12s %(levelname)-8s %(message)s'
            formatter = logging.Formatter(fs)
            h.setFormatter(formatter)
            logger.addHandler(h)
            # Set up to terminate the QThread when we exit
            app.aboutToQuit.connect(self.force_quit)

            # Lay out all the widgets
            layout = QtWidgets.QVBoxLayout(self)
            layout.addWidget(te)
            layout.addWidget(self.work_button)
            layout.addWidget(self.log_button)
            layout.addWidget(self.clear_button)
            self.setFixedSize(900, 400)

            # Connect the non-worker slots and signals
            self.log_button.clicked.connect(self.manual_update)
            self.clear_button.clicked.connect(self.clear_display)

            # Start a new worker thread and connect the slots for the worker
            self.start_thread()
            self.work_button.clicked.connect(self.worker.start)
            # Once started, the button should be disabled
            self.work_button.clicked.connect(lambda: self.work_button.setEnabled(False))

        def start_thread(self):
            self.worker = Worker()
            self.worker_thread = QtCore.QThread()
            self.worker.setObjectName('Worker')
            self.worker_thread.setObjectName('WorkerThread')  # for qThreadName
            self.worker.moveToThread(self.worker_thread)
            # This will start an event loop in the worker thread
            self.worker_thread.start()

        def kill_thread(self):
            # Just tell the worker to stop, then tell it to quit and wait for that
            # to happen
            self.worker_thread.requestInterruption()
            if self.worker_thread.isRunning():
                self.worker_thread.quit()
                self.worker_thread.wait()
            else:
                print('worker has already exited.')

        def force_quit(self):
            # For use when the window is closed
            if self.worker_thread.isRunning():
                self.kill_thread()

        # The functions below update the UI and run in the main thread because
        # that's where the slots are set up

        @Slot(str, logging.LogRecord)
        def update_status(self, status, record):
            color = self.COLORS.get(record.levelno, 'black')
            s = '<pre><font color="%s">%s</font></pre>' % (color, status)
            self.textedit.appendHtml(s)

        @Slot()
        def manual_update(self):
            # This function uses the formatted message passed in, but also uses
            # information from the record to format the message in an appropriate
            # color according to its severity (level).
            level = random.choice(LEVELS)
            extra = {'qThreadName': ctname()}
            logger.log(level, 'Manually logged!', extra=extra)

        @Slot()
        def clear_display(self):
            self.textedit.clear()

    def main():
        QtCore.QThread.currentThread().setObjectName('MainThread')
        logging.getLogger().setLevel(logging.DEBUG)
        app = QtWidgets.QApplication(sys.argv)
        example = Window(app)
        example.show()
        sys.exit(app.exec_())

    if __name__ == '__main__':
        main()