Multiple inputs / overriding inputs #112
base: master
Conversation
I think the outward-facing config interface is pretty sound and good, but I think there's a lot of refinement left for the implementation in the workflow. Refinement that would benefit us as authors and anyone else reading the workflow. I would not want to promulgate this implementation onwards to other workflows/pathogens without further refinement first, so I think it's worth iterating on here and now rather than later.
Snakefile (outdated diff)
```python
# Address may include the '{segment}' wildcard
formatted_address = address.format(segment=segment) \
    if (segment and '{segment}' in address) \
    else address

# addresses may be a remote filepath or a local file
if address.startswith('s3://'):
    # path defines the location of the downloaded file (i.e. produced by a download rule)
    path = f"data/{name}/metadata.tsv" \
        if is_metadata \
        else f"data/{name}/sequences_{segment}.fasta"
elif address.lower().startswith(r'http[s]:\/\/'):
    raise InvalidConfigError("Workflow cannot yet handle HTTP[S] inputs")
else:
    path = formatted_address

return {
    'name': name,
    'path': path,
    'merge_arg': f"{name}={path}",
    'address': formatted_address
}
```
The awkward `path` vs. `address` distinction and handling here (and having to reference the output paths of distant rules) would go away if we used something like Snakemake's remote files support (e.g. as in ncov).
Yup, it would be interesting to see how you imagine remote files support being used more widely than just ncov.
As a first step, I think we can extract ncov's `remote_files.smk` and maintain it as a vendored file used (`include`-ed) across our workflows.

Looking at the above code snippet from this workflow again, I noticed a bug: `elif address.lower().startswith(r'http[s]:\/\/'):` looks like an attempt at a regex pattern to me, but `str.startswith()` takes a fixed string (or a tuple of them), so this branch will never match a real HTTP[S] address.
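A minimal sketch of a fix, assuming the intent is simply to detect literal `http://` / `https://` prefixes (the helper name here is hypothetical, not from the workflow):

```python
# str.startswith() does literal prefix matching, not regex matching; it
# accepts a tuple of fixed strings, so no regex is needed to catch both
# http and https.
def is_http_address(address: str) -> bool:
    return address.lower().startswith(("http://", "https://"))

# The original pattern-like string never matches a real URL:
assert not "https://example.org/data.tsv".startswith(r'http[s]:\/\/')

assert is_http_address("HTTPS://example.org/data.tsv")
assert not is_http_address("s3://bucket/data.tsv")
```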
I think it'll be less work now and in the future to start with re-using our existing work for remote files support than it would be to maintain new ad-hoc code like the above.
> As a first step...

Sounds good - such a PR / issue would be a good place to discuss any concerns with vendoring and the current ncov implementation. While the code introduced here is new, it's no more "ad-hoc" than the previous implementation with `S3_SRC` and `LOCAL_INGEST`. It doesn't seem unreasonable to implement remote file support before spreading the `config.inputs` + `config.additional_inputs` interface across repos if you'd like to prioritise this.

> I noticed a bug

Thanks. Fixed.
Force-pushed from 51888c9 to 1a66e34.
By having all phylogenetic workflows start from two lists of inputs (`config.inputs`, `config.additional_inputs`) we enable a broad range of uses with a consistent interface.

1. Using local ingest files is trivial (see added docs) and doesn't need a bunch of special-cased logic that is prone to falling out of date (as it had indeed done).
2. Adding extra / private data follows a similar pattern, with an additional config list being used so that we are explicit that the new data is additional, and so that we enforce an ordering which is needed for predictable `augur merge` behaviour. The canonical data can be removed / replaced via step (1) if needed.

I considered adding additional data after the subtype-filtering step, which would avoid the need to add subtype in the metadata but would require encoding this in the config overlay. I felt the chosen way was simpler and more powerful.

For sequences the structure is more complex than for metadata, because the influenza genome is segmented and we wish to allow users to provide additional data for only some segments (see the docstring for `_parse_config_input`). For non-segmented pathogens the simpler structure used here for metadata could also be used for sequences.

This workflow uses an old version of the CI workflow, <https://github.com/nextstrain/.github/blob/v0/.github/workflows/pathogen-repo-ci.yaml#L233-L240>, which copies `example_data`. We could upgrade to the latest version and use a config overlay to swap out the canonical inputs with the example data.

Note that one side effect of the current implementation is that merged inputs use the same filepath irrespective of the workflow. For instance, both gisaid & h5n1-cattle-outbreak use the intermediate path `results/metadata_merged.tsv`, which means it's not possible to maintain concurrent runs of both analyses if both use merged inputs. Using separate analysis directories, e.g. #103, will help avoid this shortcoming.
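To illustrate the interface described above, a hypothetical config overlay might look like the following (the key names `metadata` / `sequences` and the file paths are assumptions for illustration, not a verbatim excerpt from the workflow docs; the `{segment}` placeholder matches the one handled in the diff above):

```yaml
# Canonical inputs, which a config overlay may replace entirely:
inputs:
  - name: gisaid
    metadata: s3://example-bucket/metadata.tsv.zst
    sequences: s3://example-bucket/{segment}/sequences.fasta.zst

# Extra / private data; listed separately so it is explicit that this data
# is additional, and so its position after the canonical inputs gives a
# predictable ordering for `augur merge`:
additional_inputs:
  - name: private
    metadata: data/private/metadata.tsv
    sequences: data/private/sequences_{segment}.fasta
```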
Force-pushed from 1a66e34 to fde2126.
This is a refactored version of #106 but on top of the current `master` branch. Feedback indicated the ideas in that PR were ready for review and merge.
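The ordering requirement behind `config.inputs` + `config.additional_inputs` can be sketched with a toy merge (a deliberate simplification, assuming that for records sharing an ID, fields from later inputs take precedence; `merge_metadata` is a hypothetical helper, not part of augur):

```python
# Toy illustration of order-dependent metadata merging: inputs are processed
# in list order, and for a given strain ID, fields from later inputs
# overwrite fields from earlier ones.
def merge_metadata(*inputs: dict) -> dict:
    merged: dict = {}
    for table in inputs:
        for strain, row in table.items():
            merged.setdefault(strain, {}).update(row)
    return merged

canonical = {"strain-1": {"subtype": "h5n1", "region": "north-america"}}
private = {"strain-1": {"region": "texas"}}

# Because `private` is listed after `canonical`, its values win on collision,
# which is why additional_inputs must come after the canonical inputs:
assert merge_metadata(canonical, private)["strain-1"]["region"] == "texas"
assert merge_metadata(canonical, private)["strain-1"]["subtype"] == "h5n1"
```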