performance degradation on big memcache instances · Issue #85 · kohana/cache · GitHub

performance degradation on big memcache instances #85


Open
Genda1ph opened this issue Mar 2, 2016 · 3 comments

Comments

@Genda1ph
Genda1ph commented Mar 2, 2016

Hello.
We're using a fairly big memcached instance for our application - 4 GB (8 threads, large pages enabled) - which apparently causes the whole application to slow down significantly. I see a gradual rise of the load average from 2-5 to 20-30 once cache usage passes the 512 MB mark. If we purge the cache, the LA drops back to normal within a couple of minutes.
Things got a little better after we switched from a TCP socket to a UNIX socket. Right now we've limited this memcached instance to 512 MB, and it looks like the LA did rise a bit after cache usage passed the 256 MB mark, but only insignificantly.
Get/set rates are around 400/160-175 req/s, which shouldn't be too much - this is memcached, after all.

Where should we start looking into this issue?

We use Kohana 3.3.2 running on Debian 7 with PHP 5.4.36 and memcached 1.4.13 (via the memcache driver).
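For anyone debugging a similar setup: eviction pressure can be watched through memcached's plain-text stats protocol. Here is a minimal PHP sketch (not from this report; the `evictions`/`get_hits` field names are the standard memcached stats fields, and the connection part is only shown in a comment):

```php
<?php
// Parse the response of memcached's "stats" command into an array.
// Each data line looks like "STAT evictions 1234", terminated by "END".
function parse_memcached_stats(string $raw): array
{
    $stats = [];
    foreach (preg_split('/\r?\n/', $raw) as $line) {
        if (preg_match('/^STAT (\S+) (\S+)$/', trim($line), $m)) {
            $stats[$m[1]] = $m[2];
        }
    }
    return $stats;
}

// Usage against a live server (TCP shown; for a UNIX socket use a
// "unix:///path/to/memcached.sock" address instead):
//   $fp = stream_socket_client('tcp://127.0.0.1:11211', $errno, $errstr, 2);
//   fwrite($fp, "stats\r\n");
//   ...read lines until "END", then feed the buffer to parse_memcached_stats()...
```

Polling `evictions` over time (rather than reading it once) shows the eviction rate, which is the number the discussion below turns on.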

@enov
Contributor
enov commented Mar 8, 2016

Thanks for reporting this @Genda1ph. I'm not sure what to reply, though. You probably need to use a profiling tool to see where the bottleneck in the Kohana code is, if any.

@Genda1ph
Author
Genda1ph commented Mar 8, 2016

The thing is, profiling doesn't help due to the sheer number of requests. I tried attaching strace to FPM, and no waits seem to happen.

After some more research I can say that the cache plugin is not the culprit - memcached is. Moreover, the issue lies not with the plugin itself but, most likely, with memcached and its LRU mechanism: it appears that the larger a slab is, the more expensive eviction in that slab is (probably memcached has to loop through the whole slab looking for the oldest timestamp).

I'm not that knowledgeable about how FPM works, but here's my hypothesis: while PHP is waiting for something to be evicted from memcached, it parks the request's thread and works on another request, which leads to a huge TTFB and a rise in LA (which, under the hood, is the run-queue length). So when I reduced the instance size to 128 MB and tweaked the chunk sizes a bit, the LRU lookup became inexpensive, even though the number of evictions doubled or tripled (~100 evictions/sec, up from 30-50).
I suggest mentioning this somewhere in the docs and closing the issue, since our TTFB dropped from ~2s to ~600ms.
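The cost model described in this comment can be sketched as a linear scan over a slab's items for the oldest last-access timestamp, which makes each eviction O(items per slab). This is an illustration of the hypothesis only, not code from memcached (memcached actually maintains per-slab-class LRU lists, so this models the suspected worst case):

```php
<?php
// Hypothetical model of an expensive eviction: one full pass over every
// item in a slab to find the oldest last-access timestamp. The cost grows
// linearly with slab size, matching the observation that large slabs make
// evictions slow while small slabs keep them cheap even at higher rates.
function find_eviction_candidate(array $slab): ?string
{
    $oldestKey  = null;
    $oldestTime = PHP_INT_MAX;
    foreach ($slab as $key => $lastAccess) { // touches every item
        if ($lastAccess < $oldestTime) {
            $oldestTime = $lastAccess;
            $oldestKey  = $key;
        }
    }
    return $oldestKey;
}
```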

@tomazov
tomazov commented Apr 2, 2020

Use $this->cache = Cache::instance(); in place of $this->cache = new Memcache();

public function __construct($id = NULL, $group = NULL){
    $this->cache = Cache::instance($group);
    $this->id = $id;
}

and fix

public function set($tables, $sql, $result, $lifetime){
    ...
    $this->cache->set($hash, $this->condense($result), $lifetime);
    ...
}
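A self-contained sketch of the pattern in the snippet above: key the cached result by a hash of the SQL text and store it through the cache object. The `CacheStub` class here is a hypothetical stand-in for the driver returned by Kohana's `Cache::instance()`, so the example runs without Kohana installed:

```php
<?php
// Stand-in for a Kohana cache driver (hypothetical, for illustration only).
class CacheStub
{
    private array $store = [];

    public function set(string $key, $value, int $lifetime): void
    {
        $this->store[$key] = $value; // lifetime ignored in this stub
    }

    public function get(string $key)
    {
        return $this->store[$key] ?? null;
    }
}

// Cache a query result under a key derived from the SQL text, mirroring
// the $hash usage in the snippet above.
function cache_query_result(CacheStub $cache, string $sql, array $result, int $lifetime = 60): string
{
    $hash = sha1($sql); // stable key for this query text
    $cache->set($hash, $result, $lifetime);
    return $hash;
}
```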
