
Provide various test fixes and improvements #89

Merged: 1 commit merged into master on Mar 14, 2018
Conversation

@seav (Contributor) commented Mar 9, 2018

Proposed changes in this pull request

Why:

  • Problem 1: Deleting a resource can fail intermittently: the step that clicks the "Yes, delete this resource" button sometimes runs while the confirmation modal is still transitioning, so the button is not yet clickable. See the screenshot below:

    (Screenshot: screenshot-exception-09672e381a)

  • Problem 2: test_project_user_cannot_update_resource logs in as the data collector user to upload a dummy resource, switches to the project user to try to update it, then switches back to the data collector to delete it. Since we now have dummy resource fixtures, we no longer need to log in as the data collector to upload and later delete the dummy resource.

  • Problem 3: Batch 3 no longer has any upload tests, which results in spurious Travis build failures for commits to master.

What:

  • For problem 1: Add a step to wait until the "Yes, delete this resource" button is clickable.
  • For problem 2: Update test_project_user_cannot_update_resource to use one of the dummy resource fixtures.
  • For problem 3: Add a dummy no-op test with the Pytest marker upload.
  • Update the package version to 0.6.3.
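The fix for problem 1 follows the standard explicit-wait pattern: poll until the element is interactable instead of clicking immediately. A minimal self-contained sketch of that pattern (this is not the project's actual test code; the helper name, predicate, and timings are illustrative):

```python
import time

def wait_until(predicate, timeout=10.0, poll=0.5):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    Returns True on success; raises TimeoutError otherwise. This mirrors
    what an explicit wait does before clicking a button that may still be
    mid-transition.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll)
    raise TimeoutError("condition not met within {} seconds".format(timeout))
```

In a Selenium suite the same idea is typically expressed as `WebDriverWait(driver, 10).until(EC.element_to_be_clickable(locator))` before calling `.click()`.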

When should this PR be merged

Soon.

Risks

None foreseen.

Follow-up actions

  • Release as v0.6.3.
  • Update cadasta-platform repo to match.

Checklist (for reviewing)

General

  • Is this PR explained thoroughly? All code changes must be accounted for in the PR description.
  • Is the PR labeled correctly? It should have the migration label if a new migration is added.
  • Is the risk level assessment sufficient? The risks section should contain all risks that might be introduced with the PR and which actions we need to take to mitigate these risks. Possible risks are database migrations, new libraries that need to be installed or changes to deployment scripts.

Functionality

  • Are all requirements met? Compare implemented functionality with the requirements specification.
  • Does the UI work as expected? There should be no Javascript errors in the console; all resources should load. There should be no unexpected errors. Deliberately try to break the feature to find out if there are corner cases that are not handled.

Code

  • Do you fully understand the introduced changes to the code? If not ask for clarification, it might uncover ways to solve a problem in a more elegant and efficient way.
  • Does the PR introduce any inefficient database requests? Use the debug server to check for duplicate requests.
  • Are all necessary strings marked for translation? All strings that are exposed to users via the UI must be marked for translation.

Tests

  • Are there sufficient test cases? Ensure that all components are tested individually; models, forms, and serializers should be tested in isolation even if a test for a view covers these components.
  • If this is a bug fix, are tests for the issue in place? There must be a test case for the bug to ensure the issue won’t regress. Make sure that the tests break without the new code to fix the issue.
  • If this is a new feature or a significant change to an existing feature, has the manual testing spreadsheet been updated with instructions for manual testing?

Security

  • Confirm this PR doesn't commit any keys, passwords, tokens, usernames, or other secrets.
  • Are all UI and API inputs run through forms or serializers?
  • Are all external inputs validated and sanitized appropriately?
  • Does all branching logic have a default case?
  • Does this solution handle outliers and edge cases gracefully?
  • Are all external communications secured and restricted to SSL?

Documentation

  • Are changes to the UI documented in the platform docs? If this PR introduces new platform site functionality or changes existing ones, the changes must be documented in the Cadasta Platform Documentation.
  • Are changes to the API documented in the API docs? If this PR introduces new API functionality or changes existing ones, the changes must be documented in the API docs.
  • Are reusable components documented? If this PR introduces components that are relevant to other developers (for instance a mixin for a view or a generic form) they should be documented in the Wiki.

- Make resources deletion/reversion step more robust
- Update test_project_user_cannot_update_resource to use fixtures
- Add dummy test to appease Travis-BrowserStack builds
@seav (Contributor, Author) commented Mar 9, 2018

I've tested this branch on Travis with a PR build and a master-push build and both have passed:
https://travis-ci.org/Cadasta/cadasta-platform/builds/351549568
https://travis-ci.org/Cadasta/cadasta-platform/builds/351535803

@oliverroick (Member) left a comment

Problem 3: Batch 3 now doesn't have any upload tests which would result in spurious Travis build failures for commits to master.

Can you elaborate a bit more on the cause of this problem? I don't understand why we have to add a no-op test to fix it.

@seav (Contributor, Author) commented Mar 13, 2018

Can you elaborate a bit more on the cause of this problem? I don't understand why we have to add a no-op test to fix it.

When running tests on BrowserStack we divide each batch into 2 sub-batches: one for non-upload tests to be run using BrowserStack and the other for upload tests to be run locally just on Travis. Since Batch 3 does not have any upload tests, the 2nd sub-batch will be flagged as a failure because no tests ran. See the log snippet below:

============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.4.2, py-1.5.2, pluggy-0.6.0 -- /home/travis/build/Cadasta/cadasta-platform/.tox/py35-functional-batch3/bin/python
cachedir: .pytest_cache
Django settings: config.settings.travis (from environment variable)
rootdir: /home/travis/build/Cadasta/cadasta-platform, inifile:
plugins: django-3.1.2, cov-2.5.1, celery-4.1.0
collecting ... collected 172 items
============================= 172 tests deselected =============================
======================== 172 deselected in 0.36 seconds ========================
Functional tests failed

One solution is to move a set of upload tests (say, from Batch 4) to Batch 3. The alternative, which I chose, is to create the no-op test. I think this keeps the organization of the batches simpler: Batch 3 is now dedicated to Records tests and Batch 4 to Resources tests.
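The no-op test described above can be as small as a single marked function. A hedged sketch (the actual test name in the PR may differ; `upload` is the marker named in the PR description):

```python
import pytest

# Dummy test carrying the `upload` marker so that the upload sub-batch of
# Batch 3 selects at least one test. Without it, `pytest -m upload` deselects
# everything and exits with "no tests ran", which Travis treats as a failure.
@pytest.mark.upload
def test_dummy_upload_noop():
    pass
```

The two sub-batches would then be selected with `pytest -m upload` (local Travis run) and `pytest -m "not upload"` (BrowserStack run).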

@oliverroick oliverroick merged commit 06d0911 into master Mar 14, 2018
@oliverroick oliverroick deleted the various-fixes branch March 14, 2018 13:19