UPDATE [2024-07-17]: I have created another repo with similar functionality, but using AWS services only. Though it requires more experience with e.g. AWS VPC and Amazon Aurora's query editor, it should be more robust and scalable than the solution in this repo. Do take a look: gabrielkoo/self-learning-rag-it-support-slackbot.
The bot uses ChatGPT to answer based on your own FAQ database, while allowing users to submit new articles into it with a Slash Command, so that it can answer with the new knowledge immediately - the embedding dataset is updated on the fly in the cloud!
Read my dev.to article below to know more about why and how I created this solution!
I have also included a pricing estimate with a cost breakdown of running this solution (about US$0.009 per question as of April 2023 pricing).
A sample dataset is included in the `./sample_data` directory; it's built from Wikipedia pages on the Disney+ series "The Mandalorian".
So I can submit a new article to the bot:
And now the bot knows how to answer my question:
Since ChatGPT's API became available in March 2023, there has been great hype around building integrations on top of it. Two of these integrations are especially appealing to me:
- Combining embedding search with ChatGPT to build a FAQ engine - it's a form of Knowledge Base Question Answering (KBQA), combining:
  - natural language understanding (via a text embedding of the question)
  - information retrieval (via text embeddings of the articles, matched against the one for the question)
  - knowledge representation (via ChatGPT with the selected information)
- Connecting the AI with a programmable messaging platform like Slack
But so far, I have not seen any open-source project that:
- combines the two together
- provides an easy hosting method like AWS SAM, and lastly
- provides functionality to let the user submit extra knowledge into the embedding dataset.
The third point is very important to me: in this post-OpenAI era, you should no longer need an expensive data scientist to build a FAQ engine for you. Instead, you should let your users submit their own knowledge into the dataset, so that the AI can learn from their collective intelligence.
So I decided to build one myself.
The infrastructure is built with AWS SAM, and it consists of the following components:
- A Lambda function that handles the Slack API requests - made possible by the Function URL feature released in 2022, which saves us the trouble of setting up an API Gateway.
- An AWS S3 bucket to store the datafiles: a CSV file of the articles, and a CSV file of the document embeddings.
Yeah, that's it! With AWS SAM, things are simply so simple, and all of these are defined in `template.yml`.
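For reference, here is a minimal sketch of what such a template can look like - the resource names, handler path, and runtime below are illustrative assumptions, not necessarily what the repo's actual `template.yml` uses:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  OpenAIApiKey:
    Type: String
    NoEcho: true

Resources:
  # Lambda function exposed directly via a Function URL - no API Gateway needed
  SlackBotFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: function/
      Handler: lambda_function.lambda_handler
      Runtime: python3.9
      Timeout: 30
      FunctionUrlConfig:
        AuthType: NONE  # Slack requests are authenticated via the signing secret instead
      Environment:
        Variables:
          OPENAI_API_KEY: !Ref OpenAIApiKey
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref DataFileBucket

  # S3 bucket that stores articles.csv and document_embeddings.csv
  DataFileBucket:
    Type: AWS::S3::Bucket
```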
Sequence diagram for the Q&A flow:
```mermaid
sequenceDiagram
    participant User
    participant Slack
    box gray AWS SAM
    participant Lambda
    participant S3Bucket
    end
    participant OpenAI
    User->>Slack: Asks a question
    Slack->>Lambda: POST request with question
    Lambda->>S3Bucket: Fetch FAQ datafile and text embeddings
    S3Bucket->>Lambda: Returns data files
    Lambda->>OpenAI: 1) Create a text embedding of the question
    OpenAI->>Lambda: Returns text embedding of the question
    Lambda->>Lambda: 2) Match embeddings and find relevant FAQ articles
    Lambda->>OpenAI: 3) Feed question and relevant articles to ChatGPT
    OpenAI->>Lambda: Returns response
    Lambda->>Slack: Returns answer based on FAQ dataset
    Slack->>User: Replies with the answer
```
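In code, steps 1) to 3) boil down to something like the sketch below. This uses the openai v0.x SDK that was current when this project was written; the function name, prompt wording, and top-3 cutoff are illustrative assumptions, not the repo's actual code:

```python
import numpy as np
import openai
import pandas as pd

EMBEDDING_MODEL = "text-embedding-ada-002"
CHAT_MODEL = "gpt-3.5-turbo"


def answer_question(question: str, articles: pd.DataFrame, doc_embeddings: np.ndarray) -> str:
    # 1) Create a text embedding of the question
    result = openai.Embedding.create(model=EMBEDDING_MODEL, input=question)
    question_embedding = np.array(result["data"][0]["embedding"])

    # 2) Match embeddings: ada-002 vectors are unit length, so a dot product
    #    equals cosine similarity; pick the top few most relevant articles
    similarities = doc_embeddings @ question_embedding
    top_indices = similarities.argsort()[::-1][:3]
    context = "\n\n".join(articles.iloc[i]["content"] for i in top_indices)

    # 3) Feed the question and the relevant articles to ChatGPT
    completion = openai.ChatCompletion.create(
        model=CHAT_MODEL,
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion["choices"][0]["message"]["content"]
```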
Sequence diagram for the new training article submission flow:
```mermaid
sequenceDiagram
    participant User
    participant Slack
    box gray AWS SAM
    participant Lambda
    participant S3Bucket
    end
    participant OpenAI
    User->>Slack: /submit_train_article command
    Slack->>Lambda: POST request with open modal request
    Lambda->>Slack: Returns modal configuration
    Slack->>User: Shows modal with form fields
    User->>Slack: Fills in the form fields of the new article
    Slack->>Lambda: POST request with the article
    Lambda->>S3Bucket: Fetch FAQ datafile and text embeddings
    S3Bucket->>Lambda: Returns data files
    Lambda->>OpenAI: Compute text embedding for new article
    OpenAI->>Lambda: Returns text embedding for new article
    Lambda->>S3Bucket: Update FAQ CSV file and embeddings file
    S3Bucket->>Lambda: Confirm update
    Lambda->>Slack: Returns success message
    Slack->>User: Replies with success message
```
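The write path mirrors the Q&A path: embed the new article once and append it to both CSV files in S3. A rough sketch of the idea, again assuming the openai v0.x SDK - the bucket variable, object keys, and column layout follow the data layout described later in this README, but the function itself is illustrative:

```python
import io
import os

import boto3
import openai
import pandas as pd

s3 = boto3.client("s3")
BUCKET = os.environ["DATAFILE_S3_BUCKET"]


def add_article(title: str, heading: str, content: str) -> None:
    # Fetch the current FAQ datafile from S3
    obj = s3.get_object(Bucket=BUCKET, Key="data/articles.csv")
    articles = pd.read_csv(obj["Body"])

    # Compute the text embedding for the new article
    result = openai.Embedding.create(
        model="text-embedding-ada-002",
        input=f"{title} - {heading}: {content}",
    )
    embedding = result["data"][0]["embedding"]

    # Append the new row and write the datafile back, so the bot can use
    # the new knowledge on the very next question
    articles.loc[len(articles)] = [title, heading, content]
    buffer = io.StringIO()
    articles.to_csv(buffer, index=False)
    s3.put_object(Bucket=BUCKET, Key="data/articles.csv", Body=buffer.getvalue())
    # ...and update data/document_embeddings.csv with `embedding` the same way
```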
- Prepare a `.env` file at the root directory, according to the template `.env.example`.
- AWS SAM CLI - Install the SAM CLI
- Docker - Install Docker
- An OpenAI API Key
  - Get an OpenAI API Key and put it in the `OPENAI_API_KEY` environment variable.
  - Alternatively, you can also get one from Azure if you have access to the Azure OpenAI Service.
- A Slack App - Create a Slack App
The following scopes are required (configure in the "OAuth & Permissions" page > "Scopes" > "Bot Token Scopes"):

- `chat:write`
- `commands`
- `im:history`
- `im:write`

The following event subscriptions are required (you can't set these until the AWS SAM infrastructure has been deployed):

- `message.channels`
- `message.groups`
- `message.im`
- `message.mpim`

Enable "Allow users to send commands and messages from the messages tab" in the "App Home" settings.

Lastly, make sure to install the app to your workspace.
Prepare the following environment variables in the `.env` file:

- put the bot OAuth token as `SLACK_BOT_TOKEN`
- put the signing secret as `SLACK_SIGNING_SECRET`
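For example, the finished `.env` could look like this (all values below are placeholders):

```
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxx
SLACK_BOT_TOKEN=xoxb-1111111111-2222222222-xxxxxxxxxxxxxxxx
SLACK_SIGNING_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```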
- Set up your shell for AWS credentials. There are various ways of doing so, and you may refer to this documentation. For example, you may run `aws sso login --profile name-of-your-profile` if you have configured your AWS credentials with AWS Identity Center (formerly AWS SSO).
- Run the `./deploy.sh` script; it will provision everything for you.
After the deployment, you still need to manually upload the initial datafiles.
- Prepare a file at `./data/articles.csv`, with three columns: `(title, heading, content)`. Be sure to escape e.g. newline characters into `\n` in the `content` column. Then run:

  ```bash
  #!/bin/bash
  cd function
  export LOCAL_DATA_PATH=./
  python3 -c 'from embedding import *; prepare_document_embeddings()'
  ```

  A file should then be created at `./data/document_embeddings.csv`.
- Upload both files onto the S3 bucket that was created by the CloudFormation template, at the following paths:
  - `s3://$DATAFILE_S3_BUCKET/data/articles.csv`
  - `s3://$DATAFILE_S3_BUCKET/data/document_embeddings.csv`

  If you want to use the command line, you can run the following command:

  ```bash
  aws s3 cp ./function/data/ "s3://$DATAFILE_S3_BUCKET/data/" --recursive --exclude '*' --include '*.csv'
  ```

  That's it! If you want to be a bit lazy and start with my sample data, just run the following command instead:

  ```bash
  aws s3 cp ./sample_data/ "s3://$DATAFILE_S3_BUCKET/data/" --recursive --exclude '*' --include '*.csv'
  ```
- Go to the `Outputs` tab of the deployed CloudFormation stack (e.g. https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks), and copy the URL value of `FunctionUrlEndpoint`.
- Go back to the config page of your custom Slack App, paste it at "Event Subscriptions" > "Enable Events" > "Request URL", and verify it (see the note below on how the verification handshake works).
- Once done, you can go to Slack and try messaging your bot with a question that should be answerable with the help of your own FAQ dataset!
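A note on the "verify it" step above: when you save the Request URL, Slack sends a one-off `url_verification` event and expects the endpoint to echo back its `challenge` value. The repo's `lambda_function.py` presumably handles this as part of its event routing; a minimal illustrative sketch:

```python
import json


def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")

    # One-time handshake Slack performs when you save the Request URL
    if body.get("type") == "url_verification":
        return {"statusCode": 200, "body": body["challenge"]}

    # ...signature verification and the actual Q&A / command handling go here
    return {"statusCode": 200, "body": ""}
```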
In addition, you can also create a `/submit_train_article` Slack command so that your users can submit extra articles into the dataset themselves. The handlers are defined in the following methods of `lambda_function.py`: `handle_submit_train_article_command` and `handle_submit_train_article_submission` (a sketch of the command handler follows the setup steps below).
- In your Slack App's config, go to `Features` > `Slash Commands` > `Create New Command`.
- After the modal is opened, enter the following details:
  - Command: `/submit_train_article`
  - Request URL: Paste the value of `FunctionUrlEndpoint`
- Then click "Save".
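For reference, here is a rough sketch of what a handler like `handle_submit_train_article_command` typically does: read the `trigger_id` from Slack's form-encoded slash command payload and call the `views.open` Web API method to show the modal. The block structure below is an illustration of that pattern, not the repo's exact code:

```python
import json
import os
import urllib.parse
import urllib.request


def handle_submit_train_article_command(event):
    # Slash commands arrive form-encoded (with Function URLs the body may
    # also be base64-encoded; decoding is omitted here for brevity)
    params = dict(urllib.parse.parse_qsl(event["body"]))

    modal = {
        "type": "modal",
        "callback_id": "submit_train_article",
        "title": {"type": "plain_text", "text": "Submit an article"},
        "submit": {"type": "plain_text", "text": "Submit"},
        "blocks": [
            {
                "type": "input",
                "block_id": "title",
                "label": {"type": "plain_text", "text": "Title"},
                "element": {"type": "plain_text_input", "action_id": "value"},
            },
            # ...similar input blocks for "heading" and "content"
        ],
    }

    # Open the modal via Slack's views.open Web API method
    request = urllib.request.Request(
        "https://slack.com/api/views.open",
        data=json.dumps({"trigger_id": params["trigger_id"], "view": modal}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}",
        },
    )
    urllib.request.urlopen(request)
    return {"statusCode": 200, "body": ""}
```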
- Use AWS Systems Manager Parameter Store instead of plaintext Lambda environment variables - Tutorial Here
- Use API Gateway instead of Lambda Function URLs - AWS SAM Example
- Add a WAF to the API Gateway - Documentation
- Put the whole setup into an AWS VPC - Documentation
- Switch to AWS EFS for the datafiles - Documentation
- Preserve the user's message context with DynamoDB - Documentation
This project is based on the following projects: