Collectd 4.10 #1
Closed
Conversation
Change-Id: I9899b98517fe0c239bffcf7a75681560029aa2ba
karcaw pushed a commit to karcaw/collectd that referenced this pull request on Dec 15, 2014
mfournier pushed a commit that referenced this pull request on Mar 3, 2015
ceph: a couple of small details
pyr pushed a commit that referenced this pull request on Apr 18, 2016
write_riemann: avoid deadlocks, rate limit log messages.
rubenk added a commit that referenced this pull request on Jul 24, 2016
Fixes the following warning on Solaris:

```
|c_avl_tree_t *c_avl_create (int (*compare) (const void *, const void *));
|              ^                line 54, utils_avltree.h
|        included in line 34, utils_vl_lookup.c
|
|  obj->by_type_tree = c_avl_create ((void *) strcmp);
|                                    ^       line 567, utils_vl_lookup.c
E_ARG_INCOMPATIBLE_WITH_ARG_L, argument #1 is incompatible with prototype:
    prototype: pointer to function(pointer to const void, pointer to const void) returning int : "src/daemon/utils_avltree.h", line 54
    argument : pointer to void
```

I'll look into writing a generic function to compare avl keys so we don't need to do all the casting.
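The cast-free approach hinted at in that last sentence could look roughly like the sketch below. It is only an illustration: the wrapper name `compare_string_keys` is made up, and only the `c_avl_create()` prototype is taken from the warning quoted above.

```c
/* Hypothetical sketch: a comparison callback with the exact prototype that
 * c_avl_create() expects, so no function-pointer cast is needed.  The name
 * compare_string_keys is invented for illustration. */
#include <string.h>

static int compare_string_keys (const void *a, const void *b)
{
  /* The tree stores C strings as keys, so forward to strcmp() with properly
   * typed pointers instead of casting strcmp itself. */
  return strcmp ((const char *) a, (const char *) b);
}

/* The call site could then read:
 *   obj->by_type_tree = c_avl_create (compare_string_keys);
 */
```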
vincentbernat added a commit to vincentbernat/collectd that referenced this pull request on Aug 19, 2016
When using a configuration like this:

```
LoadPlugin curl_json
<Plugin curl_json>
  <URL "http://api.facebook.com/method/fql.query?format=json&query=select+fan_count+from+page+where+page_id=280922605253831">
    Instance "facebook"
    <Key "*/fan_count">
      Type "gauge"
    </Key>
  </URL>
</Plugin>
```

collectd segfaults in `src/curl_json.c` in `cj_cb_number`:

```
>>> bt
#0  __strcmp_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcmp-sse2-unaligned.S:31
#1  0x0000000000414da2 in search (t=0x2304a80, t=0x2304a80, key=0x1) at ../../../src/daemon/utils_avltree.c:119
#2  c_avl_get (t=0x2304a80, key=key@entry=0x1, value=value@entry=0x7f01184f2ab8) at ../../../src/daemon/utils_avltree.c:599
#3  0x000000000040cbd1 in plugin_get_ds (name=0x1 <error: Cannot access memory at address 0x1>) at ../../../src/daemon/plugin.c:2559
#4  0x00007f011f93b7ef in cj_get_type (key=0x23324f0) at ../../src/curl_json.c:147
#5  cj_cb_number (ctx=0x231f2f0, number=<optimized out>, number_len=2) at ../../src/curl_json.c:244
#6  0x00007f011f4b7874 in yajl_do_parse (hand=0x7f01100008e0, jsonText=jsonText@entry=0x234f990 "{\"error_code\":12,\"error_msg\":\"REST API is deprecated for versions v2.1 and higher (12)\",\"request_args\":[{\"key\":\"method\",\"value\":\"fql.query\"},{\"key\":\"format\",\"value\":\"json\"},{\"key\":\"query\",\"value\":\"sel"..., jsonTextLen=jsonTextLen@entry=257) at /tmp/buildd/yajl-2.1.0/src/yajl_parser.c:307
```

In fact, `db->state[db->depth].key` is not a key (it's a tree). This is correctly detected at the beginning, but despite that, `cj_cb_inc_array_index()` is still called (without effect, since we are not in an array) and `key` is set again to `db->state[db->depth].key`, which is still not a key. In a previous call to `cj_cb_map_key()`, where the corresponding value is set, `(c_avl_get (tree, CJ_ANY, (void *) &value) == 0)` is true but `(CJ_IS_KEY((cj_key_t*)value))` is false. Therefore, this confirms that it is a tree which is stored in this node, not a key. This is a tentative fix that ignores the value if we are expecting a map but didn't get one and are not in an inhomogeneous array.
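For illustration, the guard described above could take roughly the following shape inside `cj_cb_number`. This is only a sketch based on the identifiers quoted in the analysis (`cj_key_t`, `CJ_IS_KEY`, `db->state[db->depth].key`), not the actual patch, and the surrounding control flow is simplified.

```c
/* Sketch only (not the actual patch): bail out early when the state entry
 * holds a configuration subtree rather than a cj_key_t.  Type and macro names
 * are the ones quoted in the analysis above; everything else is simplified. */
static int cj_cb_number (void *ctx, const char *number, size_t number_len)
{
  cj_t *db = (cj_t *) ctx;
  cj_key_t *key = db->state[db->depth].key;

  /* We expected a map here but got a plain value: "key" actually points at a
   * tree, so ignore the value instead of handing a bogus pointer down to
   * plugin_get_ds() via cj_get_type(). */
  if ((key == NULL) || !CJ_IS_KEY (key))
    return 1; /* non-zero tells YAJL to keep parsing */

  /* ... handle the number using "key" as before ... */
  return 1;
}
```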
pmauduit pushed a commit to pmauduit/collectd that referenced this pull request on Jan 30, 2017
…ugin_support Enable support for the notify_plugin package, which depends on libesmtp-dev. Testing without an explicit binary dependency. This may require additional modification.
mrunge pushed a commit that referenced this pull request on Sep 18, 2019
connectivity plugin initial commit
hnez added a commit to hnez/collectd that referenced this pull request on Jul 19, 2022
ChangeLog: collectd: Use one write thread per write plugin

The previous write thread design used a single queue with a single read head from which one of the write threads would de-queue an element and would then sequentially call each registered write callback. This meant that all write plugins would have to cooperate in order to not drop values. If, for example, all write threads are stalled by the same write plugin's callback function not returning, the queue will start to fill up until elements start to be dropped, even though there are other plugins that could still make progress. In addition to that, all write callbacks currently have to be designed to be reentrant, which increases complexity.

This new design uses a single linked-list write queue with one read head per output plugin. Each output plugin is serviced in a dedicated write thread. Elements are freed based on a reference count, which is shown in the ASCII art below:

```
      +- Thread #1 Head       +- Thread #2 Head             +- Tail
      v                       v                             v
+--+  +--+  +--+  +--+  +--+  +--+  +--+  +--+  +--+  +--+
| 0|->| 1|->| 1|->| 1|->| 1|->| 2|->| 2|->| 2|->| 2|->| 2|->X
+--+  +--+  +--+  +--+  +--+  +--+  +--+  +--+  +--+  +--+
 ^
 +- to be free()d
```

The changes introduced by this commit have some side-effects:

- The WriteThreads config option no longer exists, as a strict 1:1 ratio of write plugins and write threads is used.

- The data flow has changed. The previous data flow was:

  ```
  (From one of the ReadThreads)
  plugin_dispatch_{values,multivalue}()
    plugin_dispatch_metric_family()
      enqueue_metric_family()
        write_queue_enqueue() -----{Queue}----+
                                              |
  (In one of the WriteThreads threads)        |
  plugin_write_thread()                       |
  ^- plugin_write_dequeue() <-----------------+
     plugin_dispatch_metric_internal()
     ^- fc_process_chain(pre_cache_chain)
        fc_process_chain(fc_process_chain)
          fc_bit_write_invoke()
            plugin_write(NULL) / plugin_write(plugin_name)
              plugin callback()
  ```

  The data flow now is:

  ```
  (From one of the ReadThreads)
  plugin_dispatch_{values,multivalue}()
    plugin_dispatch_metric_family()
      plugin_dispatch_metric_internal()
      ^- fc_process_chain(pre_cache_chain)
         fc_process_chain(post_cache_chain)
           fc_bit_write_invoke()
             plugin_write(NULL) / plugin_write(plugin_name)
               write_queue_enqueue() -----{Queue}----+
                                                     |
  (In one of the WriteThreads threads)               |
  plugin_write_thread() <----------------------------+
    plugin callback()
  ```

  One result of this change is that the behaviour of plugin_write has changed from running the plugin callback immediately and in the same thread to always enqueueing the value and de-queueing it in the dedicated thread.

- The behaviour of the WriteQueueLimitHigh and WriteQueueLimitLow options has changed. The queue will be capped to a length of LimitHigh by dropping random queue elements between the queue end and LimitLow. Setting LimitLow to a reasonably large value ensures that fast write plugins do not lose values, even in the vicinity of a slow plugin. The diagram below shows the random element selected for removal (###) in Step 1 and the queue with the element removed in Step 2.

  ```
  Step 1:
  +- Thread #1 Head     | +- Thread #2 Head             +- Tail
  v   |                 | v                             v
  +--+| +--+  ####  +--+| +--+  +--+  +--+  +--+  +--+
  | 1|->| 1|-># 1#->| 1|->| 2|->| 2|->| 2|->| 2|->| 2|->X
  +--+| +--+  ####  +--+| +--+  +--+  +--+  +--+  +--+
      |                 |
      |
      LimitHigh         |
                        LimitLow

  Step 2:
  | +- Thread #1 Head +- Thread #2 Head             +- Tail
  | v               | v                             v
  | +--+  +--+  +--+| +--+  +--+  +--+  +--+  +--+
  | | 1|->| 1|->| 1|->| 2|->| 2|->| 2|->| 2|->| 2|->X
  | +--+  +--+  +--+| +--+  +--+  +--+  +--+  +--+
  |                 |
  |
  LimitHigh         |
                    LimitLow
  ```

Signed-off-by: Leonard Göhrs <l.goehrs@pengutronix.de>
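To make the reference-counting scheme in the first diagram concrete, here is a minimal, self-contained sketch. The structure and function names (`wq_node_t`, `wq_advance`) are invented for illustration and are not the actual collectd symbols, which differ in detail.

```c
/* Minimal sketch of the reference-counted queue nodes shown above. */
#include <stdlib.h>

typedef struct wq_node_s {
  struct wq_node_s *next;
  int refcount; /* number of write threads that have not yet consumed this
                   node; new nodes start at the number of write plugins */
  /* metric payload would live here */
} wq_node_t;

/* Called by a write thread once its callback has consumed "node": drop this
 * thread's reference and free the node when no read head needs it any more. */
static wq_node_t *wq_advance (wq_node_t *node)
{
  wq_node_t *next = node->next;
  if (--node->refcount == 0)
    free (node); /* corresponds to the "to be free()d" element in the diagram */
  return next;   /* becomes this thread's new read head */
}
```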
hnez added a commit to hnez/collectd that referenced this pull request on Jul 19, 2022
ChangeLog: collectd: Use one write thread per write plugin
hnez added a commit to hnez/collectd that referenced this pull request on Sep 20, 2022
ChangeLog: collectd: Use one write thread per write plugin
mrunge pushed a commit that referenced this pull request on Feb 27, 2023
ChangeLog: collectd: Use one write thread per write plugin
This pull request was closed.
Even better than memory leaks: Memory corruption.