This is a demo Ruby on Jets project, meant to be used as a sample stack during dev-team coaching/training sessions 🤓
- AWS account, free tier is more than enough
- JRE 7+, in order to run a local isolated DynamoDB instance
- Terraform, only if deploying from local workstation
- Docker, only if deploying from local Jenkins container
As of now, Jets depends on Ruby ~> 2.5, so make sure you use a patch release within that series (2.5.3 in my case).
A few things need to be done as a one-time activity...
- If not present, you need to download DynamoDB Local, to be used while running locally.
- Extract the downloaded zip file into the db folder. Keep track of the folder name created while extracting, you might need it below 😬... (unless the name is dynamodb_local_latest 👍).
Now, to create the tables, just run:
$ jets movies_api:db:reset
If needed, you can create some dummy data by running:
$ jets movies_api:db:seed
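Under the hood, seeding a DynamoDB-backed app mostly amounts to writing a handful of dummy items. Purely as an illustration (the seed_movies helper and the attribute names are assumptions, not the project's actual seed code):

```ruby
require "securerandom"

# Illustrative only: builds a few dummy movie items.
# A real seed task would write each of these into the table,
# e.g. via Aws::DynamoDB::Client#put_item pointed at the local
# instance (DynamoDB Local listens on http://localhost:8000 by default).
def seed_movies
  [
    { "id" => SecureRandom.uuid, "title" => "The Matrix",    "year" => 1999 },
    { "id" => SecureRandom.uuid, "title" => "Spirited Away", "year" => 2001 },
  ]
end
```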
The project leverages foreman in order to manage the different processes (web server and dynamodb).
If the folder created while unzipping DynamoDB is named something other than dynamodb_local_latest 🔝, you will need to update ./Procfile. Also update it if, for some reason, you decided to extract/run it from somewhere else on your computer.
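For reference, a minimal Procfile for this setup might look like the sketch below; the exact java flags and process names are assumptions (taken from the standard DynamoDB Local invocation), so defer to the file in the repo:

```text
dynamodb: java -Djava.library.path=./db/dynamodb_local_latest/DynamoDBLocal_lib -jar ./db/dynamodb_local_latest/DynamoDBLocal.jar -sharedDb -dbPath ./db
web: bundle exec jets server
```

If you extracted the zip elsewhere, the two ./db/dynamodb_local_latest paths are what you would change.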
In order to start the local instance, just run:
$ bin/start
The very first time, you might need to give the script executable permissions:
$ chmod +x bin/start
The first thing we need to do is create an AWS IAM user with the proper policy configuration, so all the resources can be created upon deploying the solution. This can be done through the AWS console, or from the terminal leveraging the aws-cli. One way or the other, just follow the official doc: Minimal Deploy IAM Policy.
If not done yet in the AWS console, go to IAM > Users > [created user] > Security Credentials tab, and create an access key pair... they will be used in a little bit.
Depending on how you want to deploy, follow one of the two options below:
The first thing we need to do is configure the project to use the access keys we created above 🔝. There are several ways to do so, but if this is your first time, I would suggest following aws configure.
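The interactive session looks roughly like this (values shown are placeholders; pick the region you actually want to deploy to):

```text
$ aws configure
AWS Access Key ID [None]: AKIA................
AWS Secret Access Key [None]: ....................
Default region name [None]: us-east-1
Default output format [None]: json
```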
We are now ready to deploy, and for this, there is a convenient rake task that will take care of everything 💪:
$ jets movies_api:aws_deploy
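In case you are curious what such a task looks like, here is a hedged sketch; the deploy_command helper and the task body are illustrative assumptions, not the project's real task (which may do more than shell out to the jets CLI):

```ruby
require "rake"
extend Rake::DSL

# Hypothetical helper: builds the shell command the task would run.
# JETS_ENV selects the environment jets deploys to.
def deploy_command(stage = "dev")
  "JETS_ENV=#{stage} jets deploy"
end

namespace :movies_api do
  desc "Deploy the movies API to AWS (sketch)"
  task :aws_deploy do
    sh deploy_command # shells out to the jets CLI
  end
end
```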
The first time you deploy, you might be prompted with the question 'Is it okay to send your gem data to Lambdagems? (Y/n)?'. The answer is up to you, and it will be remembered for future deployments. If you ever want to switch that decision back and forth, you can use the JETS_AGREE env var, setting it to either 'yes' or 'no'. You can learn more about Lambda Gems in the official docs.
You can do so by running:
$ jets movies_api:aws_destroy
$ docker image build -t jenkins-docker -f Dockerfile.ci .
Spin up a container from the image we just built:
$ docker container run -d --name jenkinsci \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 jenkins-docker
From the logs, grab the initial admin password. By running docker logs jenkinsci, you should find something like this:
*************************************************************
*************************************************************
*************************************************************
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
123456789abcdef6a7280dcc03cd4d1d
Then, go to http://localhost:8080
in order to complete the setup. When prompted, go with the option to install the suggested plugins.
From now on, you can just start and stop the container by running:
# start
$ docker container start jenkinsci
# stop
$ docker container stop jenkinsci
Two extra plugins need to be installed, which can be done under Manage Jenkins > Manage Plugins. Look for and install:
Let's go ahead and create the user credentials to be used for deployment. This can be done under the Credentials section. Go ahead and add a new item with:
- ID: jets_iam_user
- Access Key ID & Secret Access Key: keys generated above 🔝
Create a new job, and while giving it a name of your choice, select the Pipeline type. In the configuration page, scroll down to the Pipeline section, and set it up with:
- Definition: 'Pipeline script from SCM'
- SCM: 'Git'
- Repository URL: link to your own clone
That's pretty much it, you can now run a build and see your code deployed to AWS. The very last stage of the pipeline destroys the created resources, and it will pause for your input on whether to proceed. You can choose not to destroy them, play around, and once you are done, run a new build and this time proceed with the destroy.
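The repo's Jenkinsfile is the source of truth for the pipeline; purely as an illustration of the flow just described, a declarative sketch could look like this (assumptions: the aws() binding comes from the CloudBees AWS Credentials plugin, and the stage layout is mine, not the project's):

```groovy
pipeline {
  agent any

  stages {
    stage('Deploy') {
      steps {
        // 'jets_iam_user' is the credentials ID created above
        withCredentials([aws(credentialsId: 'jets_iam_user')]) {
          sh 'jets movies_api:aws_deploy'
        }
      }
    }
    stage('Destroy') {
      steps {
        // pauses the build and waits for your confirmation
        input message: 'Destroy the AWS resources?'
        withCredentials([aws(credentialsId: 'jets_iam_user')]) {
          sh 'jets movies_api:aws_destroy'
        }
      }
    }
  }
}
```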
For a full CI/CD experience, the ideal scenario would be to have a webhook configured within GitHub, so Jenkins builds run automatically upon receiving code updates. For that, you would need your container to be publicly available; feel free to do so if that's your goal.