Make one-off index jobs use batching for large tasks #96

Open
anjackson opened this issue Apr 29, 2022 · 0 comments

Things like one-off CDX/Solr indexing jobs work okay, but if it's necessary to index a large amount of content, they will fail. If a large input is passed in, the code should break the input into batches of e.g. 1000 input files (with the batch size as a CLI option), and then run each batch in turn.
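
Something like this rough sketch of the batching loop, where `run_index_job` is just a placeholder for whatever the one-off job currently does with the full input, and `batch_size` would map onto the new CLI option:

```python
from itertools import islice

def batched(items, batch_size):
    """Yield successive lists of at most batch_size items from an iterable."""
    it = iter(items)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def run_in_batches(input_files, batch_size=1000):
    """Run the one-off indexing job once per batch rather than on the whole input."""
    for i, batch in enumerate(batched(input_files, batch_size), start=1):
        print(f"Running batch {i} ({len(batch)} files)...")
        run_index_job(batch)  # hypothetical: stands in for the existing CDX/Solr job
```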

Hmm, the issue here is that this is quite brittle: if there's a failure partway through, you have to go back to the start. Alternatively, the code could simply refuse to process more than 1000 inputs, and force the script user to use `split` to break up the task.

Additionally, the tools could use a convention of writing the summary results out to `<input-file>.out.jsonl` and not running if that file is already present. That would make rerunning a set of batch jobs pretty easy to manage.
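
A rough sketch of that convention, with `index_files` standing in for the real indexing call:

```python
import json
from pathlib import Path

def process_input(input_file):
    """Skip if <input-file>.out.jsonl exists; otherwise index and write the summary."""
    out_path = Path(str(input_file) + ".out.jsonl")
    if out_path.exists():
        print(f"Skipping {input_file}: {out_path} already present")
        return
    results = index_files(input_file)  # hypothetical: the real indexing call
    with out_path.open("w") as f:
        for record in results:
            f.write(json.dumps(record) + "\n")
```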

These two ideas could be brought together: if the `<input-file>` is large, the script generates splits like `<input-file>.split_1`, and stores completion in `<input-file>.split_1.out.jsonl`.
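
Roughly like this, assuming a one-path-per-line input file and reusing the `process_input` sketch above:

```python
from pathlib import Path

def split_input(input_file, batch_size=1000):
    """Write <input-file>.split_N pieces of batch_size lines each and return their paths."""
    lines = Path(input_file).read_text().splitlines(keepends=True)
    splits = []
    for n, start in enumerate(range(0, len(lines), batch_size), start=1):
        split_path = Path(f"{input_file}.split_{n}")
        if not split_path.exists():
            split_path.write_text("".join(lines[start:start + batch_size]))
        splits.append(split_path)
    return splits

def run_all(input_file, batch_size=1000):
    """Process each split, skipping any whose .out.jsonl marker already exists."""
    for split_path in split_input(input_file, batch_size):
        if Path(str(split_path) + ".out.jsonl").exists():
            continue  # this split completed on a previous run
        process_input(split_path)  # as in the marker-file sketch above
```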

Or, perhaps more simply, generate a `<input-file>.dbm` (using the `dbm` built-in module) and use that to keep track? Or maybe even just a `.jsonl` that gets rewritten after each batch.
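
A minimal sketch of the `dbm` variant, again with `index_files` as a placeholder for the real work:

```python
import dbm

def run_with_dbm(input_file, input_paths):
    """Record each completed input in <input-file>.dbm so re-runs skip finished work."""
    with dbm.open(f"{input_file}.dbm", "c") as done:
        for path in input_paths:
            key = str(path).encode("utf-8")
            if key in done:
                continue  # already processed in an earlier run
            index_files(path)  # hypothetical: the real indexing call
            done[key] = b"done"
```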
