Hasura using more CPU over time #2565
Comments
Out of curiosity, is memory increasing as well?
I think the CPU usage increase is due to the garbage collector doing more work as time passes. This may be hinting at a possible memory leak, or just at the need to move some live data out of the garbage collector. What's the experiment about? Does this happen in other "normal" circumstances?
@JordanCheney also, out of curiosity, would you mind adding the env var GHCRTS=-I0 and seeing if it makes any difference?
This flag disables the idle garbage collection, which may reduce the CPU load when the server is not doing other work.
@lorenzo I added the new env flag; I'll report back results in a few hours when it's finished. The experiment is intentionally not doing anything besides running the containers (no requests to Hasura). I don't have any normal runs of beta3 to compare to, since this behavior got flagged and the new version never made it to real testing.
@lorenzo It looks like disabling the idle garbage collector fixed the behavior. What impact will this change have on system performance?
@JordanCheney thanks for reporting. I've seen this crop up before; see e.g. PostgREST/postgrest#565. It should be perfectly safe to disable idle GC, as @lorenzo helpfully suggested (thanks!). The idea behind idle GC is that if the program is doing nothing, we might as well perform a GC. With idle GC disabled you might find that, for example, the next request to the server after an idle period has higher average latency (because a GC might happen to be triggered at that point). It really shouldn't matter much for a production service. I've also run into this in another service I worked on. We're going to look a bit closer at this and will update the ticket when the fix is merged.
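(Editorial aside, not part of the original exchange: one way to check whether idle GC really is what drives the CPU curve is to ask the GHC runtime for statistics alongside the flag. The sketch below assumes the hasura/graphql-engine image honours additional flags in GHCRTS just as it honours -I0; the service name and image tag are illustrative.)

```yaml
# Sketch only: disable idle GC and emit one line of GC statistics per
# collection, so garbage-collector activity shows up in `docker logs`.
version: "3"
services:
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.3   # tag assumed for illustration
    environment:
      # -I0 turns off the idle garbage collector; -S writes per-GC
      # statistics to stderr.
      - GHCRTS=-I0 -S
```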
@bartjuh4 @lorenzo suggested adding GHCRTS=-I0 (see the comment above).
Thanks, one (silly) question: how did you add that flag to a Docker instance?
@bartjuh4 If you're running the container via docker-compose, add GHCRTS=-I0 under the environment: section of your container configuration; when starting it from the command line, you can pass it with docker run -e GHCRTS=-I0. Other mechanisms of running containers have other ways to control the environment.
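In docker-compose terms, that answer amounts to something like the snippet below; only the GHCRTS entry comes from the thread, while the service name and image tag are assumptions for illustration.

```yaml
services:
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.3
    environment:
      # Disable GHC's idle garbage collection inside the Hasura container.
      - GHCRTS=-I0
```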
This seems to resolve the issue locally (and has worked in the past), but it's not clear what exactly is going on here (in particular, why this should resolve what looks like a memory leak). It certainly seems like a GHC issue of some sort. Closes #2565
Survey of possibly relevant GHC issues: #2581 (comment)
Hello,
I recently upgraded to Hasura 1.0.0-beta.3 and I've noticed that CPU use in the Docker container generally increases over time, even if no requests are being made to the Hasura server.
Here are the logs produced by Hasura (mostly showing nothing is happening):
Here is a screenshot of a grafana dashboard showing the CPU use over 6 hours:
Here is a minimal docker-compose script that can recreate the test environment and the monitoring tools.
hasura_test.zip
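The attached archive is not reproduced here. As a rough sketch of what such a setup typically looks like (image tags, credentials, ports, and the choice of cAdvisor as the metrics exporter are assumptions, not the contents of hasura_test.zip):

```yaml
# Hypothetical reproduction environment: Hasura idling against Postgres,
# with a container-metrics exporter feeding the Grafana dashboard shown above.
version: "3"
services:
  postgres:
    image: postgres:11
    environment:
      - POSTGRES_PASSWORD=postgrespassword
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.3
    depends_on:
      - postgres
    ports:
      - "8080:8080"
    environment:
      - HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:postgrespassword@postgres:5432/postgres
  cadvisor:
    image: google/cadvisor
    ports:
      - "8081:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
```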