Description
From: Mike Kazantsev (Mantis #48)
Description: Dispatching collectd.Values from a Python plugin introduces a memory leak in the collectd daemon.
I've tried using heapy (part of the guppy project) to trace the leak inside the interpreter ("sys.stderr.write(str(hpy().heap()) + '\n\n')"), but it shows no increase in gc-tracked object count or size. So either the Python interpreter isn't responsible for the leak, or the interpreter itself is leaking, which is unlikely since pure-Python daemons on the same machines do not leak.
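For reference, a minimal sketch of how that heap dump was taken from inside the plugin; only the hpy().heap() call is from the report, the guppy import and the callback name are my assumptions:

import sys
import collectd
from guppy import hpy  # heapy is part of the guppy package (assumed installed)

def read(data=None):
    # Print gc-tracked heap statistics on every read interval; the totals
    # reported here stay flat even while the daemon's RSS keeps growing.
    sys.stderr.write(str(hpy().heap()) + '\n\n')

collectd.register_read(read)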
Steps to reproduce:

test.conf:
Interval 1

LoadPlugin logfile
<Plugin logfile>
    LogLevel info
    File stderr
    Timestamp true
    PrintSeverity true
</Plugin>

LoadPlugin csv
<Plugin csv>
    DataDir stdout
    StoreRates false
</Plugin>

<LoadPlugin python>
    Globals true
</LoadPlugin>

<Plugin python>
    ModulePath "/etc/collectd"
    Encoding "utf-8"
    LogTraces true
    Interactive false
    Import "testplugin"
</Plugin>
/etc/collectd/testplugin.py:
import collectd

def read(data=None):
    # Dispatch 500 values per read interval.
    for i in xrange(500):
        collectd.Values( type_instance='test', type='memory',
            plugin='test', plugin_instance='test' ).dispatch(values=[1], time=0)

collectd.register_read(read)
Commandline:
collectd -f -C test.conf >/dev/null &
watch ps u -p $(jobs -p)
This should show an RSS increase of about 100 KiB/s. Increasing the interval or decreasing the per-call dispatch rate only slows the leak; it doesn't eliminate it. Calling .dispatch repeatedly on the same Values object also doesn't help (see the sketch below).
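For completeness, a sketch of that reuse variant; the exact code isn't in the report, so this is only an illustration of the setup described above:

import collectd

# A single Values object created once and reused for every dispatch.
vl = collectd.Values( type_instance='test', type='memory',
    plugin='test', plugin_instance='test' )

def read(data=None):
    # Same dispatch rate as testplugin.py, but no new Values objects are
    # created per call; RSS still grows at roughly the same rate.
    for i in xrange(500):
        vl.dispatch(values=[1], time=0)

collectd.register_read(read)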
Tested with 5.0.1 and the latest Git code, with the same results.
Additional information: real-world memory usage graph of collectd with Python plugins: http://fraggod.net/tmp/collectd_rss.png (I had to introduce regular restarts there to mitigate the leak, but that's a poor workaround).