wip! workflows as programs prototyping #55

Draft · wants to merge 2 commits into main (changes shown from 1 commit)
20 changes: 17 additions & 3 deletions ingest/Snakefile
@@ -2,9 +2,17 @@
This is the main ingest Snakefile that orchestrates the full ingest workflow
and defines its default outputs.
"""
# Utility functions shared across all workflows.
include: "../shared/functions.smk"


# Use default configuration values. Extend with Snakemake's --configfile/--config options.
configfile: os.path.join(workflow.basedir, "defaults/config.yaml")

# Use custom configuration from analysis directory (i.e. working dir), if any.
if os.path.exists("config.yaml"):
configfile: "config.yaml"
Comment on lines +9 to +14
Member

The behaviour of multiple configfiles is somewhat documented, but I think it will prove a tripping hazard for people.

Multiple configfiles (as per this section of code, and/or via additional --configfiles) will overwrite scalars and lists; dicts will be merged, but nested dicts won't be; there's no way to remove a dictionary key as far as I can find.

This has implications for how we structure configs, specifically around how we encode flags to skip rules or skip a particular argument.

Member Author

Nod, for sure, but "multiple config files" is already our established approach, so this is making that more accessible/standard but isn't changing approaches.

We could change approaches, though, by combining the configs ourselves outside of Snakemake's machinery. I didn't think there was appetite for that, but it's very possible.

dicts will be merged, but nested dicts won't be; there's no way to remove a dictionary key as far as I can find.

Hmm. In my recollection (and testing right now), nested dicts are merged? That is, the config merge is deep.

The merge is additive though, so yes, there is no way to remove keys.
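
For illustration, the merge behaviour we're describing looks roughly like this (a minimal sketch in plain Python, not Snakemake's actual implementation; the config values are made up):

```python
# Sketch of Snakemake's deep, additive config merge: nested dicts are merged
# recursively; scalars and lists are replaced wholesale; keys are never removed.
def merge_configs(base: dict, overlay: dict) -> dict:
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merge_configs(base[key], value)   # recurse into nested dicts
        else:
            base[key] = value                 # overwrite scalars and lists
    return base

base = {"refine": {"coalescent": "opt", "clock_rate": 0.004}, "segments": ["ha", "na"]}
overlay = {"refine": {"coalescent": "skyline"}, "segments": ["ha"]}

assert merge_configs(base, overlay) == {
    "refine": {"coalescent": "skyline", "clock_rate": 0.004},  # deep merge kept clock_rate
    "segments": ["ha"],  # the list was replaced, not concatenated
}
```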

Member

nested dicts are merged

Oh yeah, I wasn't formatting my YAML correctly. That they are merged will be very helpful for us.

Member (@jameshadfield, Nov 20, 2024)

"multiple config files" is already our established approach, so this is making that more accessible/standard but isn't changing approaches.

I can see the point of view that this isn't a workflows-as-programs thing but rather a pathogen-repo-{guide,template} thing. Similar to my comment in a recent meeting about merging in additional data, I think these concepts should be thought of as intertwined rather than isolated in order to make workflows-as-programs as successful as possible.

Our current use of multiple config files is largely just adding in specific steps; e.g. I think the examples in measles are a good summary of how we use them more generally:

  • Measles automated builds use an additional config to add in a deploy rule and the associated config params
  • Measles CI uses an additional config to add a rule which copies example data and thus avoids downloading the entire dataset.

I think it's rarer¹ to use additional configs to change parameters of builds, or to subset build targets etc. If we want to recommend this approach -- an approach that I think we should use and workflows-as-programs seems to lead us towards anyway -- then it's not going to play nicely with our current config structures. Some examples:

  • Avian-flu uses a nested dict to define the builds to run (subtype x time window). So we can't override this with a config file without having some approach to encoding "skip this" / "no-op". (Mentioned here as well.) Edit: actually in its current form it's a dict of lists, so there is a way to overwrite, albeit a little clunky.
  • Seasonal flu (at least, the public builds) uses an array of segments and a dictionary of builds (builds here being h1n1pdm_2y, h3n2_2y etc). So config overlays aren't going to be able to target a subset of those builds.
  • RSV uses two lists to encode the builds & resolutions which will play nicely with config overrides but is predicated on all builds being run for all resolutions which isn't always the case.
  • (Somewhat unrelated, but...) Dengue hardcodes serotypes/genes so that'll need lifting into config to make the most of workflows-as-programs.

cc @joverlee521

¹ The most complex workflows are seasonal-flu and ncov, so maybe those utilise the style of config merging I'm talking about here. I don't think seasonal-flu does at all, but maybe? It'd be great to have examples in Nextstrain where we do make use of config merging to replace / subset config parameters.

Contributor

"multiple config files" is already our established approach, so this is making that more accessible/standard but isn't changing approaches.

...

I think it's rarer¹ to use additional configs to change parameters of builds, or to subset build targets etc. If we want to recommend this approach -- an approach that I think we should use and workflows-as-programs seems to lead us towards anyway -- then it's not going to play nicely with our current config structures. Some examples:

(avian-flu, seasonal-flu, RSV examples elided)

I think it's totally fair to draw a line, somewhere (not exactly sure where), and say the pathogen repos on one side of it are "legacy" and will continue to be maintained the way they have been historically, and will not use these new patterns, while pathogen repos on the other side of the line are "current" and will be kept reasonably up-to-date and consistent with our best practices. (We can also decide, down the road, to invest the effort into converting "legacy" repos into "current" repos, if and when that feels like it has a positive return-on-effort.)

Note: not currently taking a strong stance on where that line is 😁

* (Somewhat unrelated, but...) Dengue [hardcodes](https://github.com/nextstrain/dengue/blob/6f9144f7767c0cac4755c60cd5c069a5d2c4ed6f/phylogenetic/Snakefile#L3-L4) serotypes/genes so that'll need lifting into config to make the most of workflows-as-programs.

Note that measles and yellow fever do the same thing.

Member (@jameshadfield, Nov 20, 2024)

An example of this is the WNV/washington-state config, which layers on top of the default config.

Thanks for the example! That's a great example of a simple config structure (i.e. a single dataset target, no wildcards in the workflow) where merging works nicely. It will be interesting to see how that develops as WNV expands (e.g. to have NA & global targets) - does it go the mpox route of one config (or one config overlay) per target, or does it tend towards a more complicated config structure. It's specifically the more complicated structures I was focusing on above.

(It also uses an approach I think we've moved away from, namely

refine:
  treetime_params: --coalescent opt --date-inference marginal ...

which makes it straightforward to remove/add options/params as you're in charge of the entire string. As a thought exercise, if the base config instead specified

refine:
  coalescent: opt

and the overlay config wanted to skip that argument entirely how would it do it?)

Member Author

As a thought exercise, if the base config instead specified

refine:
  coalescent: opt

and the overlay config wanted to skip that argument entirely how would it do it?)

I can't for the life of me find where I wrote about this recentlyish, but I'd suggested previously:

refine:
  coalescent: ~   # or: null

That's a great example of a simple config structure (i.e. a single dataset target, no wildcards in the workflow) where merging works nicely. It will be interesting to see how that develops as WNV expands (e.g. to have NA & global targets) - does it go the mpox route of one config (or one config overlay) per target, or does it tend towards a more complicated config structure.

The niceness of simple config structures is why, in Slack recently, I suggested we consider "small multiples" of config (optionally with tooling to allow derived configs/inheritance) rather than a large complex cross-product config.

Member Author

I can't for the life of me find…

oh, it was a freakin' Google Docs comment.

Member Author

dicts will be merged, but nested dicts won't be

P.S. This behaviour changed in Snakemake 6.11.0 (Nov 2021). Before that nested dicts were not merged. (Yep, it was another potentially-breaking change in a minor version release.)

Member

but I'd suggested previously:

Yeah, saw that. I think something like that's the right move, and the corresponding snakemake logic needs to ensure that it drops the --coalescent argument name as well as the null value. (We can probably use a helpful helper function here.)
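
A sketch of what such a helper might look like (hypothetical; neither the name nor this exact usage is in the PR):

```python
# Hypothetical helper: render an optional command-line argument, dropping both
# the flag and its value when the config value is null (None once YAML-parsed).
def optional_arg(flag, value):
    return f"{flag} {value}" if value is not None else ""

# e.g. in a rule's params block:
#     coalescent = optional_arg("--coalescent", config["refine"].get("coalescent"))
# so an overlay setting `coalescent: ~` drops the argument entirely rather than
# producing "--coalescent None" in the shell command.
```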

I prototyped a similar approach for nextstrain/avian-flu#104 but in the context of defining what builds to run (for gisaid there are 48 by default, so it's nice to (a) not enumerate them all and (b) be able to subset them for a particular analysis). One of the prototypes looked like this:

builds:
  h5nx:
    all-time:
      - ha
      - na
    2y: ∅
  h5n1: ∅
  h7n9: ∅
  h9n2: ∅

But I didn't like the necessity of listing out all the builds you don't want to run. I switched to a list-based interface which is much nicer in this respect. (There were other reasons for switching too, namely the hierarchy here isn't right and to make it right results in very verbose dictionaries.)


# Use default configuration values. Override with Snakemake's --configfile/--config options.
configfile: "defaults/config.yaml"

# This is the default rule that Snakemake will run when there are no specified targets.
# The default output of the ingest workflow is usually the curated metadata and sequences.
@@ -42,4 +50,10 @@ include: "rules/nextclade.smk"
if "custom_rules" in config:
for rule_file in config["custom_rules"]:

include: rule_file
# Relative custom rule paths in the config are relative to the analysis
# directory (i.e. the current working directory, or workdir, usually
# given by --directory), but the "include" directive treats relative
# paths as relative to the workflow (e.g. workflow.current_basedir).
# Convert to an absolute path based on the analysis/current directory
# to avoid this mismatch of expectations.
include: os.path.join(os.getcwd(), rule_file)
12 changes: 7 additions & 5 deletions ingest/defaults/config.yaml
@@ -37,9 +37,10 @@ curate:
# For the Nextstrain team, this is currently
# 'https://raw.githubusercontent.com/nextstrain/ncov-ingest/master/source-data/gisaid_geoLocationRules.tsv'
geolocation_rules_url: "https://raw.githubusercontent.com/nextstrain/ncov-ingest/master/source-data/gisaid_geoLocationRules.tsv"
# The path to the local geolocation rules within the pathogen repo
# The path should be relative to the ingest directory.
local_geolocation_rules: "defaults/geolocation_rules.tsv"
# The path to the local geolocation rules for this pathogen.
# The path should be relative to the working directory (e.g. --directory).
# If the path doesn't exist in the working directory, the file in the workflow's defaults/ directory is used instead (if it exists).
Comment on lines +40 to +42
Contributor

Slightly concerned that users will be confused by the term "working directory", especially those who are not familiar with Snakemake. (I had originally made everything relative to the ingest directory so that users had a concrete reference point.)

Member Author

For user reference, we can say "your analysis directory", perhaps with an explanatory parenthetical like "(Snakemake's working directory, --directory, often your current directory)". This seems like a tripping hazard we can address various ways.

Contributor

This seems like a tripping hazard we can address various ways.

Totally! Just want to make sure we address it as paths are often the most confusing thing...

Seems like the working directory can be different things depending on how you run the workflow.

  1. native snakemake = the current directory or provided --directory.
  2. nextstrain build <directory> = the provided <directory> (can be complicated by additional Snakemake --directory option)
  3. nextstrain run <pathogen-name> <workflow-name> <analysis-directory> = provided <analysis-directory>

I do like that (3) is the most straightforward, so I'm all for moving in that direction.

Member

Following this up after work in avian-flu and discussions in slack, I think we'll need words to describe the following 4 locations (even if some are only for developers). This is assuming we go with the slight variant to (3) which allows things like nextstrain run avian-flu phylogenetic/h5n1-cattle-outbreak my-analysis.

  1. "pathogen-name" - maybe obvious, but this may include version identifiers etc
  2. the workflow name - "ingest", "phylogenetic", "phylogenetic/gisaid", "phylogenetic/clade-i" etc. workflow.basedir in snakemake.
  3. the analysis directory. os.getcwd() in snakemake.
  4. The "base" workflow directory if there are (sub-)workflows which use a common snakefile. In avian-flu this'd be avian-flu/Snakefile (or maybe one day avian-flu/phylogenetic/Snakefile). This should only be relevant for workflow developers.
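
For concreteness, a sketch of how (2)–(4) surface inside a running snakefile (the names on the left are just illustrative; Snakemake injects the workflow object):

```python
workflow_dir = workflow.basedir          # (2) directory of the entrypoint Snakefile
analysis_dir = os.getcwd()               # (3) the working analysis directory (--directory)
shared_dir = workflow.current_basedir    # (4) directory of the currently-included .smk
                                         #     file, e.g. a shared base snakefile
```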

Member Author

  1. The pathogen spec I'm working from in Nextstrain CLI is <name>[@<version>[=<url>]]. A URL is only accepted by nextstrain setup, as a way to override the default assumption that the name refers to a Nextstrain pathogen repo and/or that the version refers to a Git rev in the same.

  2. I want to stress that I think it's important to conceptually treat/talk about these mostly as names (like you've done!) and not paths, even if they're used internally as paths initially (and likely indefinitely). Doing so makes it more sensible to do things like alias (or "redirect") old names to new names (e.g. via entries in nextstrain-pathogen.yaml, or symlinks) to ease use or preserve backwards compatibility.

  3. I'd call this the "working analysis directory" in full, shortening to "analysis directory" (for Nextstrain CLI) or "working directory" (for Snakemake development).

  4. I think avian-flu/Snakefile (or avian-flu/phylogenetic/Snakefile) would ideally be avian-flu/phylogenetic/rules/base.smk (or core.smk or something similar). That is, we'd stick to the convention of only using the Snakefile filename for supported workflow entrypoints. I think this will reduce confusion.

local_geolocation_rules: "geolocation_rules.tsv"
# List of field names to change where the key is the original field name and the value is the new field name
# The original field names should match the ncbi_datasets_fields provided above.
# This is the first step in the pipeline, so any references to field names in the configs below should use the new field names
@@ -91,8 +92,9 @@ curate:
# Name to use for the generated abbreviated authors field
abbr_authors_field: "abbr_authors"
# Path to the manual annotations file
# The path should be relative to the ingest directory
annotations: "defaults/annotations.tsv"
# The path should be relative to the working directory (e.g. --directory).
# If the path doesn't exist in the working directory, the file in the workflow's defaults/ directory is used instead (if it exists).
annotations: "annotations.tsv"
# The ID field in the metadata to use to merge the manual annotations
annotations_id: "accession"
# The ID field in the metadata to use as the sequence id in the output FASTA file
18 changes: 9 additions & 9 deletions ingest/rules/curate.smk
@@ -31,7 +31,7 @@ rule fetch_general_geolocation_rules:
rule concat_geolocation_rules:
input:
general_geolocation_rules="data/general-geolocation-rules.tsv",
local_geolocation_rules=config["curate"]["local_geolocation_rules"],
local_geolocation_rules=resolve_config_path(config["curate"]["local_geolocation_rules"]),
output:
all_geolocation_rules="data/all-geolocation-rules.tsv",
shell:
@@ -59,7 +59,7 @@ rule curate:
sequences_ndjson="data/ncbi.ndjson",
# Change the geolocation_rules input path if you are removing the above two rules
all_geolocation_rules="data/all-geolocation-rules.tsv",
annotations=config["curate"]["annotations"],
annotations=resolve_config_path(config["curate"]["annotations"]),
output:
metadata="data/all_metadata.tsv",
sequences="results/sequences.fasta",
@@ -86,28 +86,28 @@ rule curate:
shell:
"""
(cat {input.sequences_ndjson} \
| ./vendored/transform-field-names \
| {workflow.basedir}/vendored/transform-field-names \
--field-map {params.field_map} \
| augur curate normalize-strings \
| ./vendored/transform-strain-names \
| {workflow.basedir}/vendored/transform-strain-names \
--strain-regex {params.strain_regex} \
--backup-fields {params.strain_backup_fields} \
| augur curate format-dates \
--date-fields {params.date_fields} \
--expected-date-formats {params.expected_date_formats} \
| ./vendored/transform-genbank-location \
| {workflow.basedir}/vendored/transform-genbank-location \
| augur curate titlecase \
--titlecase-fields {params.titlecase_fields} \
--articles {params.articles} \
--abbreviations {params.abbreviations} \
| ./vendored/transform-authors \
| {workflow.basedir}/vendored/transform-authors \
--authors-field {params.authors_field} \
--default-value {params.authors_default_value} \
--abbr-authors-field {params.abbr_authors_field} \
| ./vendored/apply-geolocation-rules \
| {workflow.basedir}/vendored/apply-geolocation-rules \
--geolocation-rules {input.all_geolocation_rules} \
| ./bin/parse-measles-genotype-names.py --genotype-field {params.genotype_field} \
| ./vendored/merge-user-metadata \
| {workflow.basedir}/bin/parse-measles-genotype-names.py --genotype-field {params.genotype_field} \
| {workflow.basedir}/vendored/merge-user-metadata \
--annotations {input.annotations} \
--id-field {params.annotations_id} \
| augur curate passthru \
9 changes: 7 additions & 2 deletions nextclade/Snakefile
@@ -1,4 +1,9 @@
configfile: "defaults/config.yaml"
include: "../shared/functions.smk"

configfile: os.path.join(workflow.basedir, "defaults/config.yaml")

if os.path.exists("config.yaml"):
configfile: "config.yaml"

rule all:
input:
@@ -13,7 +18,7 @@ include: "rules/export.smk"
if "custom_rules" in config:
for rule_file in config["custom_rules"]:

include: rule_file
include: os.path.join(os.getcwd(), rule_file)

rule clean:
"""Removing directories: {params}"""
14 changes: 7 additions & 7 deletions nextclade/defaults/config.yaml
@@ -1,12 +1,12 @@
strain_id_field: "accession"
files:
exclude: "defaults/dropped_strains.txt"
include: "defaults/include_strains.txt"
reference_N450: "defaults/measles_reference_N450.gb"
reference_N450_fasta: "defaults/measles_reference_N450.fasta"
clades: "defaults/clades.tsv"
colors: "defaults/colors.tsv"
auspice_config: "defaults/auspice_config.json"
exclude: "dropped_strains.txt"
include: "include_strains.txt"
reference_N450: "measles_reference_N450.gb"
reference_N450_fasta: "measles_reference_N450.fasta"
clades: "clades.tsv"
colors: "colors.tsv"
auspice_config: "auspice_config.json"
align_and_extract_N450:
min_length: 400
min_seed_cover: 0.01
6 changes: 3 additions & 3 deletions nextclade/rules/annotate_phylogeny.smk
@@ -14,7 +14,7 @@ rule ancestral:
node_data = "results/nt_muts.json"
params:
inference = config["ancestral"]["inference"],
reference_fasta = config["files"]["reference_N450_fasta"]
reference_fasta = resolve_config_path(config["files"]["reference_N450_fasta"])
shell:
"""
augur ancestral \
@@ -30,7 +30,7 @@ rule translate:
input:
tree = "results/tree.nwk",
node_data = "results/nt_muts.json",
reference = config["files"]["reference_N450"]
reference = resolve_config_path(config["files"]["reference_N450"])
output:
node_data = "results/aa_muts.json"
shell:
@@ -47,7 +47,7 @@ rule clades:
tree = "results/tree.nwk",
nt_muts = "results/nt_muts.json",
aa_muts = "results/aa_muts.json",
clade_defs = config["files"]["clades"]
clade_defs = resolve_config_path(config["files"]["clades"])
output:
clades = "results/clades.json"
shell:
4 changes: 2 additions & 2 deletions nextclade/rules/export.smk
@@ -14,8 +14,8 @@ rule export:
clades = "results/clades.json",
nt_muts = "results/nt_muts.json",
aa_muts = "results/aa_muts.json",
colors = config["files"]["colors"],
auspice_config = config["files"]["auspice_config"]
colors = resolve_config_path(config["files"]["colors"]),
auspice_config = resolve_config_path(config["files"]["auspice_config"])
output:
auspice_json = "auspice/measles.json"
params:
6 changes: 3 additions & 3 deletions nextclade/rules/prepare_sequences.smk
@@ -34,7 +34,7 @@ rule decompress:
rule align_and_extract_N450:
input:
sequences = "data/sequences.fasta",
reference = config["files"]["reference_N450_fasta"]
reference = resolve_config_path(config["files"]["reference_N450_fasta"])
output:
sequences = "results/sequences_N450.fasta"
params:
@@ -57,8 +57,8 @@ rule filter:
input:
sequences = "results/sequences_N450.fasta",
metadata = "data/metadata.tsv",
exclude = config["files"]["exclude"],
include = config["files"]["include"]
exclude = resolve_config_path(config["files"]["exclude"]),
include = resolve_config_path(config["files"]["include"])
output:
sequences = "results/aligned.fasta"
params:
9 changes: 7 additions & 2 deletions phylogenetic/Snakefile
@@ -1,6 +1,11 @@
genes = ['N450', 'genome']

configfile: "defaults/config.yaml"
include: "../shared/functions.smk"

configfile: os.path.join(workflow.basedir, "defaults/config.yaml")

if os.path.exists("config.yaml"):
configfile: "config.yaml"

rule all:
input:
@@ -17,7 +22,7 @@ include: "rules/export.smk"
if "custom_rules" in config:
for rule_file in config["custom_rules"]:

include: rule_file
include: os.path.join(os.getcwd(), rule_file)

rule clean:
"""Removing directories: {params}"""
20 changes: 10 additions & 10 deletions phylogenetic/defaults/config.yaml
@@ -1,15 +1,15 @@
strain_id_field: "accession"
files:
exclude: "defaults/dropped_strains.txt"
include_genome: "defaults/include_strains_genome.txt"
include_N450: "defaults/include_strains_N450.txt"
reference: "defaults/measles_reference.gb"
reference_N450: "defaults/measles_reference_N450.gb"
reference_N450_fasta: "defaults/measles_reference_N450.fasta"
colors: "defaults/colors.tsv"
auspice_config: "defaults/auspice_config.json"
auspice_config_N450: "defaults/auspice_config_N450.json"
description: "defaults/description.md"
exclude: "dropped_strains.txt"
include_genome: "include_strains_genome.txt"
include_N450: "include_strains_N450.txt"
reference: "measles_reference.gb"
reference_N450: "measles_reference_N450.gb"
reference_N450_fasta: "measles_reference_N450.fasta"
colors: "colors.tsv"
auspice_config: "auspice_config.json"
auspice_config_N450: "auspice_config_N450.json"
description: "description.md"
Comment on lines +3 to +12
Contributor

Slightly concerned that this change may confuse external users who want to trace the default files. It's not immediately clear that these are pulling from "defaults/".

Member Author

So the ingest/defaults/config.yaml comments on files' locations, but this phylogenetic/defaults/config.yaml did not, so I also did not bother. But we very easily could if we think it'll be a tripping hazard.

If we're terribly concerned and don't think a comment is enough, we can also leave the defaults/ prefix in the default config. It's handled for backwards-compat.

# Special-case defaults/… for backwards compatibility with older
# configs. We could achieve the same behaviour with a symlink
# (defaults/defaults → .) but that seems less clear.
if path.startswith("defaults/"):
defaults_path = os.path.join(workflow.basedir, path)

Contributor

I think the additional comments should be enough.

we can also leave the defaults/ prefix in the default config.

I thought the defaults/ prefix was removed so that the config will automatically use input files in the analysis directory? If we keep the defaults/ prefix, users must override the config param or put their files in a defaults/ directory within their analysis directory, right?

Member

I thought the defaults/ prefix was removed so that the config will automatically use input files in the analysis directory?

That's my understanding as well.

The search order is:

  1. Filename in current analysis directory (if it exists)
  2. Filename in workflow's defaults/ directory. (If the config-specified filename has "defaults/" then that's ignored for this step.)

By not using defaults/ in the config-specified filename it makes it possible to swap out the workflow file (TSV etc) file with one of the same name in the current analysis directory.
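
To make that search order concrete with this PR's resolve_config_path (assuming a hypothetical analysis directory that contains a customised annotations.tsv but no geolocation_rules.tsv):

```python
resolve_config_path("annotations.tsv")
# -> "annotations.tsv": found in the analysis directory, so it wins (step 1)

resolve_config_path("geolocation_rules.tsv")
# -> os.path.join(workflow.basedir, "defaults", "geolocation_rules.tsv"):
#    not in the analysis directory, so the workflow's defaults/ copy is used (step 2)
```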

Member Author

I thought the defaults/ prefix was removed so that the config will automatically use input files in the analysis directory?

Sorry, yes, you're both correct about the code in this PR as-is and what I said. The bit I forgot to say as I was rushing out the door (ahh! 🤦) was that we could strip the defaults/ prefix off when searching for files in the analysis directory, thus letting us keep the defaults/ prefix in the default config. Given we're already special-casing it for backwards-compat, we could special-case it the other way for analysis-directory lookup instead.

(This is actually the first pattern I tinkered with over a month ago, but I thought it was more confusing than a simple unchanging path + search order.)

Member Author

There may be more than one, of course.

Hmm? There's only one config object at run time, and that's what we'd write out.

Member

If I run multiple invocations of the workflow with different overlays, or invocations of different workflows, each will write sets of files into the analysis directory, and these sets may not overlap or may partially overlap. To use an avian-flu example, I may have an analysis directory with both gisaid & cattle-outbreak datasets.

Member Author (@tsibley, Dec 11, 2024)

Ah, I see. Multiple invocations (whether same workflow or not) writing to the same working analysis directory seemed to me like something we didn't need to support (in terms of keeping things separate): there's always the chance of overwriting files in that case (often overwriting is desired/expected) and confusion around what's output from which invocation. Do you think we definitely do need to support that sort of use case? If so, the implications of that would seem to extend far beyond the proposed results/config.yaml discussed here.

Member

Do you think we definitely do need to support that sort of use case?

No - but it's something to keep in mind as the structure is built out, because what we're building out does support it and I think it'll be a common use case. For instance, if we could include some identifier based on the invocation within the filename we wouldn't be simply overwriting results/config.yaml each time. This follows on from my comment above, I think we should also write out information beyond the config, e.g. pathogen version used, runtime version, invocation command etc. You could shoehorn that into the written-out config, or create a directory for each workflow invocation with files detailing all that information; I'd think logs & benchmarks may better live here as well. Obviously we don't have to do all this now, but thinking about the structure now seems like the right move.
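
As a sketch of the kind of per-invocation record being suggested here (the field names and layout are hypothetical, not part of this PR):

```python
import sys, time

# Hypothetical provenance written per invocation, e.g. under results/runs/<timestamp>/,
# rather than overwriting a single results/config.yaml; `config` is Snakemake's
# merged runtime config object.
run_info = {
    "config": config,
    "pathogen_version": "measles@<version>",        # placeholder
    "runtime_version": "nextstrain-cli <version>",  # placeholder
    "invocation": sys.argv,                         # the command line as run
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S%z"),
}
```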

Member Author

For instance, if we could include some identifier based on the invocation within the filename we wouldn't be simply overwriting results/config.yaml each time.

Right, of course, but we're still overwriting all/most of the other files in results/ each time. I don't see the harm in matching that behaviour with results/config.yaml for now.

I think we should also write out information beyond the config, e.g. pathogen version used, runtime version, invocation command etc. You could shoehorn that into the written-out config, or create a directory for each workflow invocation with files detailing all that information; I'd think logs & benchmarks may better live here as well. Obviously we don't have to do all this now, but thinking about the structure now seems like the right move.

A timestamped directory to contain all this individual-run information makes the most sense to me and is the most conventional… but I don't think we need to worry about this structure right now. I think it's too early to implement; we should focus on the core workflows-as-programs changes first before building on them while they're still forming. I also think there's some consideration to take on what outputs get differentiated between invocations (e.g. protected from overwrite) and what outputs don't, and why that split is useful vs. the default situation of "nothing's protected" or "everything's protected".

filter:
group_by: "country year"
sequences_per_group: 20
2 changes: 1 addition & 1 deletion phylogenetic/rules/annotate_phylogeny.smk
@@ -28,7 +28,7 @@ rule translate:
input:
tree = "results/{gene}/tree.nwk",
node_data = "results/{gene}/nt_muts.json",
reference = lambda wildcard: config["files"]["reference" if wildcard.gene == "genome" else f"reference_{wildcard.gene}"]
reference = lambda wildcard: resolve_config_path(config["files"]["reference" if wildcard.gene == "genome" else f"reference_{wildcard.gene}"])
output:
node_data = "results/{gene}/aa_muts.json"
shell:
6 changes: 3 additions & 3 deletions phylogenetic/rules/export.smk
@@ -13,9 +13,9 @@ rule export:
branch_lengths = "results/{gene}/branch_lengths.json",
nt_muts = "results/{gene}/nt_muts.json",
aa_muts = "results/{gene}/aa_muts.json",
colors = config["files"]["colors"],
auspice_config = lambda wildcard: config["files"]["auspice_config" if wildcard.gene == "genome" else f"auspice_config_{wildcard.gene}"],
description=config["files"]["description"]
colors = resolve_config_path(config["files"]["colors"]),
auspice_config = lambda wildcard: resolve_config_path(config["files"]["auspice_config" if wildcard.gene == "genome" else f"auspice_config_{wildcard.gene}"]),
description=resolve_config_path(config["files"]["description"])
output:
auspice_json = "auspice/measles_{gene}.json"
params:
6 changes: 3 additions & 3 deletions phylogenetic/rules/prepare_sequences.smk
@@ -42,8 +42,8 @@ rule filter:
input:
sequences = "data/sequences.fasta",
metadata = "data/metadata.tsv",
exclude = config["files"]["exclude"],
include = config["files"]["include_genome"]
exclude = resolve_config_path(config["files"]["exclude"]),
include = resolve_config_path(config["files"]["include_genome"])
output:
sequences = "results/genome/filtered.fasta"
params:
@@ -74,7 +74,7 @@ rule align:
"""
input:
sequences = "results/genome/filtered.fasta",
reference = config["files"]["reference"]
reference = resolve_config_path(config["files"]["reference"])
output:
alignment = "results/genome/aligned.fasta"
shell:
6 changes: 3 additions & 3 deletions phylogenetic/rules/prepare_sequences_N450.smk
@@ -7,7 +7,7 @@ See Augur's usage docs for these commands for more details.
rule align_and_extract_N450:
input:
sequences = "data/sequences.fasta",
reference = config["files"]["reference_N450_fasta"]
reference = resolve_config_path(config["files"]["reference_N450_fasta"])
output:
sequences = "results/N450/sequences.fasta"
params:
@@ -34,8 +34,8 @@ rule filter_N450:
input:
sequences = "results/N450/sequences.fasta",
metadata = "data/metadata.tsv",
exclude = config["files"]["exclude"],
include = config["files"]["include_N450"]
exclude = resolve_config_path(config["files"]["exclude"]),
include = resolve_config_path(config["files"]["include_N450"])
output:
sequences = "results/N450/aligned.fasta"
params:
6 changes: 6 additions & 0 deletions shared/README.md
@@ -0,0 +1,6 @@
# Shared

> **Warning**
> Please be aware of the multiple workflows that will be affected when editing files in this directory!

This directory holds files that are shared across multiple workflows.
31 changes: 31 additions & 0 deletions shared/functions.smk
@@ -0,0 +1,31 @@
import os.path

def resolve_config_path(path):
"""
Resolve a relative *path* given in a configuration value.

Resolves *path* as relative to the workflow's ``defaults/`` directory (i.e.
``os.path.join(workflow.basedir, "defaults", path)``) if it doesn't exist
in the workflow's analysis directory (i.e. the current working
directory, or workdir, usually given by ``--directory`` (``-d``)).

This behaviour allows a default configuration value to point to a default
auxiliary file while also letting the file used be overridden either by
setting an alternate file path in the configuration or by creating a file
with the conventional name in the workflow's analysis directory.
"""
global workflow

if not os.path.exists(path):
# Special-case defaults/… for backwards compatibility with older
# configs. We could achieve the same behaviour with a symlink
# (defaults/defaults → .) but that seems less clear.
if path.startswith("defaults/"):
defaults_path = os.path.join(workflow.basedir, path)
else:
defaults_path = os.path.join(workflow.basedir, "defaults", path)

if os.path.exists(defaults_path):
return defaults_path

return path