
Multiple inputs / overriding inputs #112

Open · jameshadfield wants to merge 1 commit into master from james/refactor-inputs-mk2

Conversation

@jameshadfield (Member) commented Dec 16, 2024

This is a refactored version of #106 but on top of the current master branch. Feedback indicated the ideas in that PR were ready for review and merge.

By having all phylogenetic workflows start from two lists of inputs (config.inputs, config.additional_inputs) we enable a broad range of uses with a consistent interface.

  1. Using local ingest files is trivial (see added docs) and doesn't need a bunch of special-cased logic that is prone to falling out of date (as it had indeed done).
  2. Adding extra / private data follows a similar pattern, with an additional config list being used so that we are explicit that the new data is additional, and to enforce an ordering which is needed for predictable augur merge behaviour. The canonical data can be removed / replaced via step (1) if needed. (A sketch of the resulting config shape follows below.)
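
For illustration, a minimal sketch of what these two lists might look like in a config file. The top-level keys (`inputs`, `additional_inputs`) and the `name` field come from this PR; the `metadata`/`sequences` field names, the paths, and the per-segment sequences shape are assumptions inferred from the surrounding code and discussion, not a confirmed schema:

```yaml
# Hypothetical config sketch -- field names and paths are illustrative, not taken from this PR.
inputs:
  - name: gisaid
    metadata: s3://example-bucket/metadata.tsv.zst
    sequences: s3://example-bucket/{segment}/sequences.fasta.zst   # '{segment}' wildcard allowed per the parsing code

additional_inputs:
  - name: private
    metadata: private_data/metadata.tsv          # local file; should include subtype (see the note below)
    sequences:
      ha: private_data/sequences_ha.fasta        # extra data may cover only some segments
      na: private_data/sequences_na.fasta
```

Listing the canonical inputs first and the additional inputs after is what gives `augur merge` the predictable ordering described above.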

I considered adding additional data after the subtype-filtering step, which would avoid the need to include subtype in the metadata but would require encoding this in the config overlay. I felt the chosen way was simpler and more powerful.

Note that this workflow uses an old version of the CI workflow, https://github.com/nextstrain/.github/blob/v0/.github/workflows/pathogen-repo-ci.yaml#L233-L240 which copies example_data. We could upgrade to the latest version and use a config overlay to swap out the canonical inputs with the example data.
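
As a sketch of that idea (the overlay mechanism, file names, and paths here are assumptions, not part of this PR), a CI config overlay could simply redefine the canonical inputs to point at the local example data:

```yaml
# Hypothetical CI overlay -- replaces the canonical inputs with local example data.
inputs:
  - name: example
    metadata: example_data/metadata.tsv
    sequences: example_data/sequences_{segment}.fasta
```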

Note that one of the side effects of the current implementation is that merged inputs use the same filepath irrespective of the workflow. For instance, both gisaid & h5n1-cattle-outbreak use the intermediate path results/metadata_merged.tsv, which means it's not possible to maintain concurrent runs of both those analyses if both were to use merged inputs. Using separate analysis directories, e.g. #103, will help avoid this shortcoming.

@tsibley (Member) left a comment

I think the outward-facing config interface is pretty sound and good, but there's a lot of refinement left for the implementation in the workflow, refinement that would benefit us as authors and anyone else reading it. I would not want to promulgate this implementation onwards to other workflows/pathogens without further refinement first, so I think it's worth iterating on it here and now rather than later.

Snakefile (outdated), review comment on lines 85 to 106:
# Address may include the '{segment}' wildcard
formatted_address = address.format(segment=segment) \
if (segment and '{segment}' in address) \
else address

# addresses may be a remote filepath or a local file
if address.startswith('s3://'):
# path defines the location of the downloaded file (i.e. produced by a download rule)
path = f"data/{name}/metadata.tsv" \
if is_metadata \
else f"data/{name}/sequences_{segment}.fasta"
elif address.lower().startswith(r'http[s]:\/\/'):
raise InvalidConfigError("Workflow cannot yet handle HTTP[S] inputs")
else:
path = formatted_address

return {
'name': name,
'path': path,
'merge_arg': f"{name}={path}",
'address': formatted_address
}
Member commented:

The awkward path vs. address distinction and handling here (and having to reference the output paths of distant rules) would go away if we used something like Snakemake's remote files support (e.g. as in ncov).
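
For context, a rough sketch of the pre-Snakemake-8 remote-file approach (roughly the style that ncov's remote_files.smk wraps). The helper name and behaviour below are assumptions for illustration, not code from ncov or from this PR:

```python
# Sketch only: classic Snakemake remote providers (removed in Snakemake 8+ in favour of storage plugins).
from snakemake.remote.S3 import RemoteProvider as S3RemoteProvider
from snakemake.remote.HTTP import RemoteProvider as HTTPRemoteProvider

S3 = S3RemoteProvider()
HTTP = HTTPRemoteProvider()

def path_or_url(address):
    """Return something usable as a rule input, whether `address` is local,
    s3://, or http(s)://.  (Hypothetical helper, not the ncov implementation.)"""
    if address.startswith("s3://"):
        return S3.remote(address[len("s3://"):], keep_local=True)
    if address.startswith(("http://", "https://")):
        return HTTP.remote(address.split("://", 1)[1], keep_local=True)
    return address
```

With something along these lines, rules could take the result directly as inputs, and the separate download rules plus the path/address bookkeeping above would largely disappear.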

@jameshadfield (Member Author) commented:

Yup, it would be interesting to see how you imagine remote files being used more widely than just in ncov.

Member commented:

As a first step, I think we can extract ncov's remote_files.smk and maintain it as a vendored file used (include-ed) across our workflows.

Looking at the above code snippet from this workflow again, I noticed a bug:

    elif address.lower().startswith(r'http[s]:\/\/'):

That looks like an attempt at a regex pattern to me, but str.startswith() takes a fixed string, so that branch can never match an actual http(s):// address.
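
For what it's worth, a minimal sketch of one possible fix (the actual fix applied in the PR isn't shown here): str.startswith() accepts a tuple of literal prefixes, so no regex is needed. InvalidConfigError is the exception already used in the snippet above.

```python
# One possible fix (sketch): check literal prefixes instead of a regex-like string.
if address.startswith(('http://', 'https://')):
    raise InvalidConfigError("Workflow cannot yet handle HTTP[S] inputs")
```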

Member commented:

I think it'll be less work now and in the future to start with re-using our existing work for remote files support than it would be to maintain new ad-hoc code like the above.

@jameshadfield (Member Author) commented:

> As a first step...

Sounds good: such a PR / issue would be a good place to discuss any concerns with vendoring and the current ncov implementation. While the code introduced here is new, it's no more "ad-hoc" than the previous implementation with S3_SRC and LOCAL_INGEST. It doesn't seem unreasonable to implement remote-file support before spreading the config.inputs + config.additional_inputs interface across repos, if you'd like to prioritise this.

> I noticed a bug

Thanks. Fixed.

@jameshadfield force-pushed the james/refactor-inputs-mk2 branch 2 times, most recently from 51888c9 to 1a66e34 on January 7, 2025 at 00:36
By having all phylogenetic workflows start from two lists of inputs
(`config.inputs`, `config.additional_inputs`) we enable a broad range of
uses with a consistent interface.

1. Using local ingest files is trivial (see added docs) and doesn't need
   a bunch of special-cased logic that is prone to falling out of date
   (as it had indeed done)
2. Adding extra / private data follows a similar pattern, with an
   additional config list being used so that we are explicit that the
   new data is additional and enforce an ordering which is needed for
   predictable `augur merge` behaviour. The canonical data can be
   removed / replaced via step (1) if needed.

I considered adding additional data after the subtype-filtering step,
which would avoid the need to add subtype in the metadata but requires
encoding this in the config overlay. I felt the chosen way was simpler
and more powerful.

When considering sequences, the structure is more complex than for metadata
because the influenza genome is segmented and we wish to allow users to
provide additional data for only some segments (see docstring for
`_parse_config_input`). For non-segmented pathogens the simpler
structure used here for metadata could also be used for sequences.

This workflow uses an old version of the CI workflow,
<https://github.com/nextstrain/.github/blob/v0/.github/workflows/pathogen-repo-ci.yaml#L233-L240>
which copies `example_data`. We could upgrade to the latest version
and use a config overlay to swap out the canonical inputs with the
example data.

Note that one of the side effects of the current implementation is that
merged inputs use the same filepath irrespective of the workflow. For
instance, both gisaid & h5n1-cattle-outbreak use the intermediate path
`results/metadata_merged.tsv`, which means it's not possible to maintain
runs of both those analyses concurrently if both were to use merged
inputs. Using separate analysis directories, e.g.
#103, will help avoid this
shortcoming.
@jameshadfield force-pushed the james/refactor-inputs-mk2 branch from 1a66e34 to fde2126 on January 7, 2025 at 21:27