Expand test scenarios #296
As a first step I'll pick up the node Helm chart to implement tests for it, since it is the most used and complex chart in the repo. The initial iteration should cover the most used configuration options and the pieces of functionality that have the most impact on the runtime. I came up with the following. Test the following node modes:
Test scenarios:
I found Bitnami's testing guide comprehensive, covering the details of possible strategies and tools for testing Helm charts. Bitnami's Helm charts repo uses three tools to run the tests:
Each test scenario may require one or more tools to verify the requirements are met.
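As an illustration, a template-level check with the helm-unittest plugin (one common choice for this kind of verification; the template path and value keys below are hypothetical for this chart) might look like:

```yaml
# tests/service_test.yaml — hypothetical template path and value keys
suite: service rendering
templates:
  - templates/service.yaml
tests:
  - it: exposes the configured RPC port
    set:
      service.ports.rpc: 8545
    asserts:
      - equal:
          path: spec.ports[0].port
          value: 8545
```

Template-level tests like this run in seconds with no cluster, which makes them a good fit for per-PR checks alongside the heavier deployment tests.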
Covered a few scenarios in #304:
Each Helm chart is now configured to run a smoke test on each PR. A chart is deployed to a local K8s cluster with default or close-to-default parameters. While smoke tests are good at catching basic errors, they don't cover real-world usage scenarios.
To prevent bugs from ending up in a release, more comprehensive end-to-end tests should be added to the CI pipeline. These tests should not repeat or replace the tests of the application itself. Rather, they should capture Helm-chart-specific errors such as improperly exposed ports, failures to attach PVCs, bugs in the init scripts, etc.
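A minimal sketch of such an end-to-end CI job (assuming a kind-based throwaway cluster; the chart path, release name, values file, and port are hypothetical) could be:

```yaml
# Hypothetical CI job: deploy the chart and verify chart-level wiring
e2e-test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: helm/kind-action@v1   # spins up a local kind cluster
    - name: Install chart with a real-world values file
      run: helm install node ./charts/node -f charts/node/ci/e2e-values.yaml --wait --timeout 5m
    - name: Verify the PVC is bound and the service port answers
      run: |
        kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc \
          -l app.kubernetes.io/instance=node --timeout=2m
        kubectl run probe --rm -i --restart=Never --image=busybox -- \
          nc -z node.default.svc.cluster.local 8545
```

The point of the last step is that it exercises exactly the chart-level concerns listed above (PVC attachment, port exposure) without retesting the application's own behavior.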
The values.yaml file should also be extended with a more comprehensive configuration reflecting how the chart is used in the wild, potentially with multiple values.yaml files to cover different use-cases.
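One way to keep several use-case values files honest is to lint and render the chart against each of them in CI. A sketch (the directory layout and file names are hypothetical):

```yaml
# Hypothetical CI step: validate every use-case values file
- name: Lint and render all values files
  run: |
    for v in charts/node/values/*.yaml; do
      helm lint charts/node -f "$v"
      helm template node charts/node -f "$v" > /dev/null
    done
```

This catches values files that drift out of sync with the templates, even before any cluster-based test runs.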