diff --git a/README.md b/README.md
index 0ed6c47..c28f12e 100644
--- a/README.md
+++ b/README.md
@@ -33,8 +33,10 @@ about them, just click on the script name.
 
 This program _requires_ the `webregautoin` helper program.
 
-## Documentation
-Basic documentation is provided in the [wiki](https://github.com/ewang2002/webreg_scraper/wiki).
+## Setup
+To run this project, feel free to explore the individual scripts or crates above; a setup guide is provided for each.
+
+If you want to get an Ubuntu environment ready with everything needed to run this project, you can run the setup script in the [`setup`](https://github.com/ewang2002/webreg_scraper/tree/master/setup) folder. More information is provided there.
 
 ## License
 Everything in this repository is licensed under the MIT license.
\ No newline at end of file
diff --git a/setup/README.md b/setup/README.md
new file mode 100644
index 0000000..2542633
--- /dev/null
+++ b/setup/README.md
@@ -0,0 +1,25 @@
+# Setup Environment
+You can use the `setup.sh` script to install all of the files and dependencies needed to run this project. At this time, the script has been tested and works on Ubuntu 23.10.
+
+> [!WARNING]
+> Other Linux distributions have not been tested. You may need to adjust the script to get it working as expected.
+
+I recommend creating a [DigitalOcean droplet](https://www.digitalocean.com/); the cheapest plan will suffice, and students with the GitHub Student Developer Pack are eligible for [$200 in DigitalOcean credits for 1 year](https://education.github.com/pack/offers).
+
+---
+
+> [!NOTE]
+> Before beginning, I recommend running this script as a non-root user with sudo access.
+>
+> If you run this script as `root` and later plan on managing the scraper from a non-root user, you'll need to complete steps 3-5 again for that user.
+
+To start, copy both `setup.sh` and `nginx.conf` to the directory where you want all of the project files to be stored (e.g., your home directory, `~`). Then, update your system if needed (e.g., using `apt-get update`). Afterwards, run `sudo bash setup.sh`. This script will:
+1. Set the timezone to Pacific Time.
+2. Install all dependencies needed for Puppeteer to work.
+3. Install `nvm` and the LTS version of `node.js`.
+    - Install `pm2` and `typescript` globally for the current account.
+4. Download the latest release of the WebReg scraper along with the `authmanager` executable.
+5. Clone the repository, then extract and compile the `notifier` and `webregautoin` scripts.
+6. Install `nginx` and replace the default configuration file with the one provided here.
+
+In other words, this setup script will get your environment ready to run the scraper and the login script, and it will make the scraper's API available through `nginx`. You do not need to install anything else for this script to work.
\ No newline at end of file
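For reference, a minimal sketch of the workflow that the new `setup/README.md` describes, assuming the files are copied into the home directory of a sudo-capable user on the droplet; the host name `your-droplet` and the use of `scp`/`ssh` are illustrative and not part of the repository:

```bash
# Copy the setup files from your local machine to the server (hypothetical host).
scp setup/setup.sh setup/nginx.conf user@your-droplet:~

# On the server: refresh package lists, then run the script with root privileges.
ssh user@your-droplet
sudo apt-get update
sudo bash setup.sh
```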
diff --git a/setup/nginx.conf b/setup/nginx.conf
new file mode 100644
index 0000000..241663c
--- /dev/null
+++ b/setup/nginx.conf
@@ -0,0 +1,28 @@
+user www-data;
+worker_processes auto;
+pid /run/nginx.pid;
+error_log /var/log/nginx/error.log;
+include /etc/nginx/modules-enabled/*.conf;
+
+events {
+    worker_connections 768;
+}
+
+http {
+    sendfile on;
+    tcp_nopush on;
+    types_hash_max_size 2048;
+    include /etc/nginx/mime.types;
+    default_type application/octet-stream;
+    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
+    ssl_prefer_server_ciphers on;
+    access_log /var/log/nginx/access.log;
+    gzip on;
+    server {
+        listen 80;
+
+        location /ucsd/ {
+            proxy_pass http://localhost:3000/;
+        }
+    }
+}
\ No newline at end of file
diff --git a/setup/setup.sh b/setup/setup.sh
new file mode 100644
index 0000000..1bb7992
--- /dev/null
+++ b/setup/setup.sh
@@ -0,0 +1,66 @@
+#!/bin/bash
+
+if [ "$(id -u)" -ne 0 ]; then
+    echo "Please run w/ sudo." >&2
+    exit 1
+fi
+
+# Set the timezone to Pacific Time
+timedatectl set-timezone America/Los_Angeles
+
+# Install Puppeteer dependencies
+# See https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md#running-puppeteer-on-wsl-windows-subsystem-for-linux
+apt install -y libgtk-3-dev libnotify-dev libnss3 libxss1 libasound2
+
+# Install jq for JSON parsing (we need this when getting the latest release tag for the scraper)
+apt install -y jq
+
+# Install nvm and the LTS version of node.js, then load nvm into this shell
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
+export NVM_DIR="$HOME/.nvm"
+[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+nvm install --lts
+
+# Install pm2 and typescript
+npm install -g pm2
+npm install -g typescript
+
+# Get the latest scraper release tag
+tag=$(curl --silent "https://api.github.com/repos/ewang2002/webreg_scraper/releases" | jq -r "first | .tag_name")
+
+# Set up webreg_scraper
+mkdir scraper
+cd scraper
+wget https://github.com/ewang2002/webreg_scraper/releases/download/$tag/authmanager-x86_64-unknown-linux-gnu.tar.gz
+tar -xvzf authmanager-x86_64-unknown-linux-gnu.tar.gz
+wget https://github.com/ewang2002/webreg_scraper/releases/download/$tag/webreg-x86_64-unknown-linux-gnu-auth.tar.gz
+tar -xvzf webreg-x86_64-unknown-linux-gnu-auth.tar.gz
+rm *.tar.gz
+cd ..
+
+# Set up the login & notifier scripts
+git clone https://github.com/ewang2002/webreg_scraper
+mv webreg_scraper/scripts/notifierbot notifier
+mv webreg_scraper/scripts/webregautoin login
+rm -rf webreg_scraper
+
+# Set up the notifier
+cd notifier
+npm i
+npm run compile
+cd ..
+
+# Set up the login script
+cd login
+npm i
+npm run compile
+cd ..
+
+# Install nginx and replace the default configuration with the one provided here
+apt install -y nginx
+rm -f /etc/nginx/nginx.conf
+mv nginx.conf /etc/nginx
+
+# All done.
+echo "All done. Make sure to configure the notifier bot, login script, and the scraper."
+exit 0
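Once the script finishes, a minimal sketch of how you might verify the `nginx` reverse proxy defined in `setup/nginx.conf`. This assumes the scraper's API is already running and listening on port 3000, which is what the `proxy_pass` directive expects; the exact endpoint paths depend on the scraper itself:

```bash
# Validate the new configuration and make sure nginx picks it up.
sudo nginx -t
sudo systemctl restart nginx

# Requests to /ucsd/ on port 80 should be forwarded to http://localhost:3000/.
curl -i http://localhost/ucsd/
```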