Things like one-off CDX/Solr indexing jobs work okay, but if it's necessary to index a large amount of content, they will fail. If a large input is passed in, the code should break the input into batches of e.g. 1000 input files (with the batch size as a CLI option). The code should then run each batch in turn...
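As a minimal sketch of the batching idea (a hypothetical `run_job()` stands in for the actual CDX/Solr indexing call, and the input file is assumed to list one path per line), it could look something like this:

```python
# Minimal sketch of the batching idea; run_job() is a hypothetical stand-in
# for the actual CDX/Solr indexing call.
import argparse

def run_job(batch):
    # Placeholder: submit/process one batch of input files.
    print(f"Indexing {len(batch)} files...")

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("input_list", help="File listing one input path per line")
    parser.add_argument("--batch-size", type=int, default=1000,
                        help="Maximum number of input files per batch")
    args = parser.parse_args()

    with open(args.input_list) as f:
        inputs = [line.strip() for line in f if line.strip()]

    # Break the input into batches and run each batch in turn.
    for i in range(0, len(inputs), args.batch_size):
        run_job(inputs[i:i + args.batch_size])

if __name__ == "__main__":
    main()
```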
Hmm, the issue here is that this is quite brittle: if there's a failure, you have to go back to the start. The code could simply refuse to process more than 1000 inputs, and force the script user to use `split` to break up the task.
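That guard could be as simple as the following sketch (the limit and the error message are illustrative):

```python
# Sketch of the "refuse and point the user at split" approach.
import sys

MAX_INPUTS = 1000

def check_input_size(inputs):
    if len(inputs) > MAX_INPUTS:
        sys.exit(
            f"Refusing to process {len(inputs)} inputs (limit is {MAX_INPUTS}). "
            f"Split the list first, e.g. 'split -l {MAX_INPUTS} <input-file>', "
            "and submit each chunk separately."
        )
```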
Additionally, the tools could use a convention of writing the summary results out to `<input-file>.out.jsonl` and not running if that file is already present. That would make rerunning a set of batch jobs pretty easy to manage.
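For example (where `process_batch()` is just a stand-in for the real indexing job, assumed to return an iterable of summary records):

```python
# Sketch of the "skip if <input-file>.out.jsonl already exists" convention.
import json
import os

def run_with_marker(input_file, process_batch):
    out_file = input_file + ".out.jsonl"
    if os.path.exists(out_file):
        print(f"Skipping {input_file}: {out_file} already exists.")
        return
    # process_batch() is assumed to return an iterable of summary records.
    results = process_batch(input_file)
    with open(out_file, "w") as f:
        for record in results:
            f.write(json.dumps(record) + "\n")
```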
These two ideas could be brought together: if `<input-file>` is large, the script generates splits like `<input-file>.split_1` and stores completion in `<input-file>.split_1.out.jsonl`.
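Roughly like this (the chunk size and helper names are only illustrative):

```python
# Sketch of combining the two ideas: break a large input list into
# <input-file>.split_N chunks and treat <input-file>.split_N.out.jsonl
# as the completion marker for each chunk.
import os

def write_splits(input_file, batch_size=1000):
    """Write numbered chunk files next to the input and return their paths."""
    with open(input_file) as f:
        lines = [line for line in f if line.strip()]
    splits = []
    for n, start in enumerate(range(0, len(lines), batch_size), start=1):
        split_path = f"{input_file}.split_{n}"
        with open(split_path, "w") as out:
            out.writelines(lines[start:start + batch_size])
        splits.append(split_path)
    return splits

def pending_splits(input_file, batch_size=1000):
    """Yield split files that don't yet have a .out.jsonl completion marker."""
    for split_path in write_splits(input_file, batch_size):
        if not os.path.exists(split_path + ".out.jsonl"):
            yield split_path
```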
Or, perhaps more simply, generate a `<input-file>.dbm` (using the `dbm` built-in module) and use that to keep track? Or maybe even just a `.jsonl` file that gets replaced after each batch.
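For example (the key scheme is illustrative, and the exact on-disk filename depends on which `dbm` backend is available):

```python
# Sketch of tracking batch completion in <input-file>.dbm via the
# standard-library dbm module.
import dbm

def run_batches(input_file, batches, run_batch):
    """Run each batch in turn, skipping ones already recorded as done."""
    with dbm.open(input_file + ".dbm", "c") as done:
        for n, batch in enumerate(batches, start=1):
            key = f"batch_{n}"
            if key in done:
                continue  # already completed on a previous run
            run_batch(batch)
            done[key] = "done"
```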