This repository was archived by the owner on Mar 26, 2020. It is now read-only.
glustershd memory keeps increasing while creating PVCs #1467
Open
@PrasadDesala

Description

glusterfs (glustershd) memory increased from 74 MB to 6.8 GB while creating 200 PVCs.

Before:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1150 root 20 0 3637200 74560 3320 S 0.0 0.2 0:01.52 glusterfs

After 200 PVCs are created:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1150 root 20 0 101.0g 6.8g 3388 S 94.1 21.6 17:43.07 glusterfs
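
To track the growth continuously rather than with just before/after snapshots, something like the sketch below can be left running against the glustershd process (PID 1150 in the output above). This is only a minimal monitoring sketch; the default PID and poll interval are placeholders and should be adjusted to the actual setup.

// rssmon.go - poll VmRSS of a process from /proc/<pid>/status and print it
// periodically, so the growth from ~74 MB to ~6.8 GB can be followed over the
// whole PVC-creation run instead of only at the start and end.
package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"
	"time"
)

// vmRSS returns the VmRSS value (e.g. "74560 kB") for the given PID.
func vmRSS(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/status", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "VmRSS:") {
			return strings.TrimSpace(strings.TrimPrefix(s.Text(), "VmRSS:")), nil
		}
	}
	return "", fmt.Errorf("VmRSS not found for pid %d", pid)
}

func main() {
	// Placeholder defaults: PID 1150 is the glusterfs process from the top
	// output above; override both flags for a different setup.
	pid := flag.Int("pid", 1150, "glustershd (glusterfs) process id")
	every := flag.Duration("every", 30*time.Second, "poll interval")
	flag.Parse()

	for {
		rss, err := vmRSS(*pid)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s VmRSS=%s\n", time.Now().Format(time.RFC3339), rss)
		time.Sleep(*every)
	}
}

Logging VmRSS with timestamps makes it easier to see whether the growth is roughly linear in the number of PVCs or happens in bursts.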

Below are a few other observations:

  1. For a few of the volumes, the brick port shows as -1:
    Volume : pvc-9480160e-1279-11e9-a7a2-5254001ae311
    +--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+------+
    | BRICK ID | HOST | PATH | ONLINE | PORT | PID |
    +--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+------+
    | b7a95b9b-17da-4220-a38d-2d23eb75c83a | gluster-kube3-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-9480160e-1279-11e9-a7a2-5254001ae311/subvol1/brick1/brick | true | 40635 | 3612 |
    | 133011b8-1825-4b6e-87e1-d7bed7332f55 | gluster-kube1-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-9480160e-1279-11e9-a7a2-5254001ae311/subvol1/brick2/brick | true | -1 | 3041 |
    | ebfb7837-8657-46c9-aad9-449b6a1ba6bf | gluster-kube2-0.glusterd2.gcs | /var/run/glusterd2/bricks/pvc-9480160e-1279-11e9-a7a2-5254001ae311/subvol1/brick3/brick | true | 45864 | 3146 |
    +--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+------+
  2. I am seeing the following messages logged continuously in the glustershd logs:
    [2019-01-07 13:14:14.157784] W [MSGID: 101012] [common-utils.c:3186:gf_get_reserved_ports] 36-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info [No such file or directory]
    [2019-01-07 13:14:14.157840] W [MSGID: 101081] [common-utils.c:3226:gf_process_reserved_ports] 36-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may consume reserved port
    [2019-01-07 13:14:14.160159] W [MSGID: 101012] [common-utils.c:3186:gf_get_reserved_ports] 36-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info [No such file or directory]
    [2019-01-07 13:14:14.160213] W [MSGID: 101081] [common-utils.c:3226:gf_process_reserved_ports] 36-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may consume reserved port
    [2019-01-07 13:14:14.183845] I [socket.c:811:__socket_shutdown] 36-pvc-93515db8-1279-11e9-a7a2-5254001ae311-replicate-0-client-1: intentional socket shutdown(7073)
    [2019-01-07 13:14:14.183946] E [MSGID: 101191] [event-epoll.c:759:event_dispatch_epoll_worker] 36-epoll: Failed to dispatch handler
  3. The following logs are continuously logged in the glusterd2 logs (a sketch of the portmap lookup involved follows this list):
    time="2019-01-07 13:15:28.484617" level=info msg="client connected" address="10.233.64.8:47178" server=sunrpc source="[server.go:148:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
    time="2019-01-07 13:15:28.485340" level=error msg="registry.SearchByBrickPath() failed for brick" brick=/var/run/glusterd2/bricks/pvc-9480160e-1279-11e9-a7a2-5254001ae311/subvol1/brick2/brick error="SearchByBrickPath: port for brick /var/run/glusterd2/bricks/pvc-9480160e-1279-11e9-a7a2-5254001ae311/subvol1/brick2/brick not found" source="[rpc_prog.go:104:pmap.(*GfPortmap).PortByBrick]"

Observed behavior

glusterfs (glustershd) memory increased from 74 MB to 6.8 GB after 200 PVCs were created. The messages shown above also keep getting logged continuously.

Expected/desired behavior

glusterfs should not consume that much memory.

Details on how to reproduce (minimal and precise)

  1. Create a 3-node GCS setup using vagrant.
  2. Create 200 PVCs (a generator sketch follows this list) and keep monitoring glusterfs resource consumption while they are being created.
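
For step 2, the sketch below prints 200 PersistentVolumeClaim manifests to stdout so they can be piped into kubectl apply -f -. The claim-name prefix, requested size, and storageClassName are assumptions and need to match the storage class actually created by the GCS deployment.

// genpvcs.go - emit 200 PVC manifests for `kubectl apply -f -`.
package main

import "fmt"

func main() {
	// Assumed values: 1Gi per claim and a storage class named
	// "glusterfs-csi"; adjust both to the actual GCS setup.
	const manifest = `---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gcs-pvc-%d
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-csi
`
	for i := 1; i <= 200; i++ {
		fmt.Printf(manifest, i)
	}
}

Running go run genpvcs.go | kubectl apply -f - while the VmRSS monitor above polls the glustershd process reproduces the workload described here.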

Information about the environment:

  • Glusterd2 version used (e.g. v4.1.0 or master): v6.0-dev.99.git0839909
  • Operating system used: CentOS 7.6
  • Glusterd2 compiled from sources, as a package (rpm/deb), or container:
  • Using External ETCD: (yes/no, if yes ETCD version): Yes, 3.3.8
  • If container, which container image:
  • Using kubernetes, openshift, or direct install:
  • If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: kubernetes
