Replies: 2 comments 1 reply
-
Hey @jjcovert, thank you for the suggestion! I agree that resuming would be a great feature to have. I'll see what I can do (I'll create an issue for this discussion).
-
Hey @jjcovert, I've got good news and bad news! Bad news: the resume functionality is still not implemented. Good news: I have implemented retries on 500 errors (and the rest of the 5xx range), so you might be able to get away with v2.2.9. For a full description of the retry behaviour, see #187. This version also includes speed and memory optimisations for export mode (details in #185). Please let me know how it goes if you decide to try it. Alternatively, you can wait for v3.0.1, where I'm planning to implement the resume function, but I can't promise it will land soon: I'm somewhat stuck on the changes for v3.0.0, as they involve a considerable rewrite of the CLI.
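For reference, the retry-on-5xx behaviour described above can be sketched roughly like this. This is a generic Python illustration with hypothetical names (`with_retry`, `ServerError`), not slackdump's actual Go implementation; see #187 for the real behaviour:

```python
import time


class ServerError(Exception):
    """Stand-in for an HTTP error response from the Slack API."""

    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status


def with_retry(fn, max_attempts=5, base_delay=0.1):
    """Call fn, retrying on 5xx errors with exponential backoff.

    Non-5xx errors are re-raised immediately; the last 5xx error is
    re-raised once max_attempts is exhausted.
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ServerError as e:
            if not (500 <= e.status <= 599) or attempt == max_attempts:
                raise
            time.sleep(delay)
            delay *= 2  # back off a little more each time
```

The key design point is that only server-side (5xx) failures are treated as transient; anything else fails fast so a genuine client error isn't hammered in a loop.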
-
After 17GB of export, my slackdump process died with a 500 response from Slack. I'd love to be able to start again where it left off. Since it was a 500 error, hopefully the problem was transient on their end (and not due to bad data etc.).
Being able to resume where it died would be huge for anyone trying to get a complete workspace export. It could be as simple as enumerating every task into a file on process start and removing lines from the file as tasks complete. On startup, if that file already exists, the process can pick up with its first remaining line as the next task.
Thoughts?
edit: Thinking about it a bit more - an "if this file already exists, skip it" check before exporting a channel/thread/file might help speed things up dramatically and accomplish almost the same thing. Hmm