Automatically(ish) generate a folder with json files for every link tweeted by @FreeCooperUnion, including title, description, article text, image urls, dominant image colors, keywords, and related articles.
- sign up for an api key at http://embed.ly
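Once you have a key, it's worth sanity-checking it against the extract endpoint (the same one the find-and-replace steps below point at) before doing anything in bulk:

```
# one test call; swap in your real key — the response should be a single
# json object with title, description, content, images, keywords, and
# related fields
curl 'http://api.embed.ly/1/extract?key=YourKeyHere&url=http://example.com'
```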
- download the twitter archive for @freecooperunion
- import tweets.csv to google drive
- sort by expanded_urls column
- paste expanded_urls into sublime text
- find and replace commas with line breaks (the expanded_urls column packs multiple urls into a single cell, separated by commas)
- select all text and run edit > permute lines > unique to eliminate duplicate urls (the whole spreadsheet-and-sublime dance can also be scripted; see the sketch below)
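A rough one-liner equivalent of the last five steps, assuming csvkit is installed for the csv parsing (urls-clean.txt is just a scratch filename):

```
# pull the expanded_urls column, strip csv quoting, split the
# comma-separated urls onto their own lines, and dedupe
csvcut -c expanded_urls tweets.csv | tail -n +2 | tr -d '"' | tr ',' '\n' | grep '^http' | sort -u > urls-clean.txt
```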
- find and replace http:// with http://api.embed.ly/1/extract?key=YourKeyHere&url=http://
- find and replace https:// with http://api.embed.ly/1/extract?key=YourKeyHere&url=https://
- save the file as urls.txt in a folder called /fcu-wget (or skip the manual replacements entirely; see the sed sketch below)
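The two find-and-replaces amount to prefixing every line with the extract endpoint, which sed can do in one pass. A sketch (swap in your real key; urls that themselves contain & or # really ought to be percent-encoded before going into the url parameter):

```
mkdir -p fcu-wget
# prepend the endpoint to every url; \& keeps the ampersand literal
sed 's|^|http://api.embed.ly/1/extract?key=YourKeyHere\&url=|' urls-clean.txt > fcu-wget/urls.txt
```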
- open terminal and cd to /fcu-wget
- run $ wget -i urls.txt
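If embed.ly starts rate-limiting you partway through the list, wget can pause between requests (one second is a guess at a polite delay; tune as needed):

```
# fetch every extract url, waiting a second between requests
wget --wait=1 -i urls.txt
```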
- batch clean up filenames using automator's rename function
- batch append .json to filenames using automator (or do both filename steps in the shell; see below)
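A non-Automator version of both filename steps. This assumes wget named the downloads something like extract?key=...&url=... (check what you actually got and adjust the glob):

```
cd fcu-wget
# swap the url junk in each filename for underscores, then tack on .json
for f in extract*; do
  mv -- "$f" "$(printf '%s' "$f" | sed 's/[^A-Za-z0-9._-]/_/g').json"
done
cd ..
```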
- concatenate the downloaded json files into concat.json, then copy the main array "[ ]" out of concat.json into links.json (the jq sketch below does both at once)
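If jq is installed, the concat-and-copy step collapses into one command: the -s (slurp) flag reads every per-link response object and wraps them all in a single top-level array, which is exactly what links.json needs (this assumes each downloaded file holds one json object, which is what the extract endpoint returns):

```
# gather all the per-link objects into one array
jq -s '.' fcu-wget/*.json > links.json
```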
- download handlebars.js
- open images.html (which imports links.json, loops through the images for each link, and outputs a list of urls) in firefox, then use the downthemall extension to download all linked images into an /images folder (a browserless version is sketched below)
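If you'd rather skip the browser round-trip, the same image harvest can be scripted. A sketch, assuming each link object carries an images array of {url: ...} objects, as embed.ly extract responses do:

```
mkdir -p images
# pull every image url out of every link object (the []? skips links
# with no images), then fetch them all into /images
jq -r '.[].images[]?.url' links.json > image-urls.txt
wget -i image-urls.txt -P images
```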
- open the /images folder in Time Lapse Assembler and export at max quality and 60fps
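No Time Lapse Assembler handy? ffmpeg can assemble the same movie (a sketch; it assumes the downloaded images are jpegs, and images of mixed sizes will need a scale filter first):

```
# 60fps slideshow of every image in the folder
ffmpeg -framerate 60 -pattern_type glob -i 'images/*.jpg' -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```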