
Memory issue [dunglas/mercure:v0.21.2] #1136

@klems

Description

Hello,

I have an ongoing issue with a Mercure hub deployment.
You might be able to give some insight, as I am having trouble tracking down the "real" root cause.
I am not sure whether it is related to the Mercure hub itself or to the infrastructure configuration.

Basically, we are deploying a Docker image on a COS VM instance on GCP.
We are expecting roughly 3000 clients.

On the Mercure side (1727 is the PID of the Caddy process; the number of file descriptors should be roughly equal to the number of expected clients):

login@server ~ $ sudo ls -1 /proc/1727/fd | wc -l
115476
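
For what it's worth, those descriptors can be broken down further, since the raw count mixes sockets with everything else: the TCP table of the process's own network namespace is readable from the host through /proc, which gives a count per TCP state. This is only a minimal sketch, assuming awk/sort/uniq are available on the COS host (e.g. via toolbox); 1727 is the same PID as above:

# Count only the socket descriptors among the open fds of the Caddy process.
sudo ls -l /proc/1727/fd | grep -c socket

# Break the TCP connections down by state, using the TCP table of the process's
# network namespace (column 4 of /proc/<pid>/net/tcp is the state in hex:
# 01 = ESTABLISHED, 06 = TIME_WAIT, 08 = CLOSE_WAIT, 0A = LISTEN, ...).
# Add /proc/1727/net/tcp6 for IPv6 connections.
sudo awk 'NR > 1 {print $4}' /proc/1727/net/tcp | sort | uniq -c | sort -rn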

The connections are never closed on the Mercure side, and memory usage keeps growing until Docker restarts the process:

[Screenshot: container memory usage climbing steadily until the process is restarted]

I did set up a number of timeout / infrastructure settings on the GCP side, mainly:

  • google_compute_target_http(s)_proxy:
    quic_override = "DISABLE"
    http_keep_alive_timeout_sec = 1200

  • on my frontend/backend services: timeout = 3600s

  • Some timeouts are set on the Mercure hub side as well:

docker run -d --restart=unless-stopped --name xxx-mercure-hub -p 80:80 \
  --log-opt max-size=50m --log-opt max-file=3 \
  -e GOMEMLIMIT="12000MiB" \
  -e SERVER_NAME=":80" \
  -e CORS_ORIGINS="https://xxx.com" \
  -e PUBLISH_ORIGINS="https://xxx.com" \
  -e READ_TIMEOUT="30s" \
  -e WRITE_TIMEOUT="60s" \
  -e IDLE_TIMEOUT="60s" \
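
To double-check that these variables actually end up in the running Caddy configuration (I am assuming here that the Caddyfile bundled in the image interpolates them), the effective per-server timeouts can be read back from Caddy's admin API, which by default listens on localhost:2019 inside the container. A sketch, assuming wget is available in the image:

# Dump the effective HTTP server configuration from Caddy's admin API;
# read_timeout, write_timeout and idle_timeout should appear here if applied.
docker exec xxx-mercure-hub wget -qO- http://localhost:2019/config/apps/http/servers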

It looks like when a client connection is closed and a second connection is initiated, the previous connection is kept alive for too long on the hub side, so the overall number of connections keeps piling up...
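
If it helps narrowing this down: Caddy, if I am not mistaken, exposes Go's pprof handlers on the same admin endpoint, so a goroutine count and a heap profile taken while the fd count is high should show whether one subscriber goroutine is left behind per dead connection. Again only a sketch, assuming the admin endpoint is reachable inside the container and wget is present:

# Total number of goroutines; a figure close to the fd count would confirm
# that subscriber goroutines are never released.
docker exec xxx-mercure-hub wget -qO- 'http://localhost:2019/debug/pprof/goroutine?debug=1' | head -n 1

# Heap profile, to be inspected on a workstation with `go tool pprof heap.pprof`.
docker exec xxx-mercure-hub wget -qO- http://localhost:2019/debug/pprof/heap > heap.pprof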

Of course, the deployment works fine in every other environment; we only hit this issue in production, under real user load.

Any help appreciated.

Regards,
