A RAG application to chat on internal knowledge base

sourav-tw/spacechat


About the Project

This is a practical implementation of RAG (Retrieval-Augmented Generation): you upload a private knowledge base as documents, and the LLM answers questions based on that knowledge. It is essentially a chat application for asking questions about your own documents.
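The upload-then-answer flow described above can be sketched in a few lines. This is a toy illustration, not the application's actual code: the keyword-overlap retriever below stands in for the vector search (ChromaDB) the real application uses, and the prompt is then what would be sent to the LLM.

```python
def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank document chunks by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

# Hypothetical knowledge-base chunks for illustration only.
chunks = [
    "Expense reports must be filed within 30 days.",
    "The office wifi password rotates monthly.",
    "Annual leave requests go through the HR portal.",
]
question = "How do I file an expense report?"
prompt = build_prompt(question, retrieve(question, chunks))
```

Because the prompt is restricted to the retrieved context, the LLM's answer stays grounded in the private knowledge base rather than its general training data.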

Tools Used

The application uses the following tools and languages:

  • Python
  • Ollama
  • LlamaIndex
  • ChromaDB
  • Gradio
  • Langfuse

Setup instructions

Follow the steps below to run the application

Install and run Ollama

  1. Download Ollama from https://ollama.com and install it following the instructions
  2. Follow the instructions to enable the ollama CLI command
  3. Run the following command to pull llama2 7B locally
ollama pull llama2
  4. Once the model is pulled, run the following command to start the Ollama service
ollama serve

⚠️ This starts the service on Ollama's default port (11434). If you see an error, it means Ollama is already running.
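A quick way to confirm the service and the pulled model are working is to call Ollama's documented /api/generate HTTP endpoint on its default port. A minimal sketch using only the standard library (the model name llama2 matches the pull command above):

```python
import json
import urllib.request

# Ollama's HTTP API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama2", "Reply with one word: hello")

def ask_ollama() -> str:
    """Send the request; only works while `ollama serve` is running."""
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling ask_ollama() should return a short completion if the service is up; a connection error means Ollama is not running.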

Install and run with python virtual environment

  1. Set up a virtual environment using the command
python3 -m venv venv
  2. Activate the virtual environment using the command
source venv/bin/activate
  3. Change your IDE settings accordingly to use the created virtual environment
  4. Install the required dependencies using the command
pip install -r requirements.txt

Install and run with pipenv

  1. We use pipenv to manage dependencies and the virtual environment. Install pipenv using the following command
pip install pipenv
  2. Install the dependencies using the following command
pipenv install
  3. Change your IDE settings accordingly to use the created virtual environment

Set up the environment variables

  1. Create a .env file in the root of the project directory
  2. Refer to the .env.example file for the environment variables to be set in the .env file
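Based on the variables mentioned in this README, the .env might look like the following (placeholder values; copy the real keys from your Langfuse settings):

```
# Langfuse observability keys
LANGFUSE_SECRET_KEY=sk-<secret_key>
LANGFUSE_PUBLIC_KEY=pk-<public_key>
LANGFUSE_HOST=http://127.0.0.1:3000

# Name of the prompt created in the Langfuse UI
PROMPT_TEMPLATE=<prompt_name>
```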

Configure the observability using langfuse

  1. Run the following commands to start Langfuse locally
git clone https://github.com/langfuse/langfuse.git
cd langfuse
docker compose up
  2. Once it is up and running, you can access the observability UI at http://localhost:3000
  3. Create a user ID and password to sign in
  4. Create an API key from the settings page and copy the secret key and public key into the .env file as shown:
LANGFUSE_SECRET_KEY=sk-<secret_key>
LANGFUSE_PUBLIC_KEY=pk-<public_key>
LANGFUSE_HOST=http://127.0.0.1:3000
  5. Create a prompt (a sample is given under samples) from the UI and reference the prompt name in the PROMPT_TEMPLATE environment variable.
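Since the application fails in confusing ways if any of these variables are unset, a startup check along these lines can help. This is a hypothetical helper, not part of the repository; the demo values below are placeholders:

```python
import os

REQUIRED = ("LANGFUSE_SECRET_KEY", "LANGFUSE_PUBLIC_KEY", "LANGFUSE_HOST")

def load_langfuse_config() -> dict:
    """Read the Langfuse variables from the environment, failing fast if any are missing."""
    missing = [name for name in REQUIRED if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED}

# Demo with placeholder values (the real app would load these from .env):
os.environ.update({
    "LANGFUSE_SECRET_KEY": "sk-example",
    "LANGFUSE_PUBLIC_KEY": "pk-example",
    "LANGFUSE_HOST": "http://127.0.0.1:3000",
})
config = load_langfuse_config()
```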

Prepare knowledge base

  1. To prepare the knowledge base, create a docs folder in the root of the project directory
  2. Add PDF documents under the docs folder
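The two steps above can be sketched as a small helper that creates the docs folder if needed and lists the PDFs it contains. The helper and the handbook.pdf file in the demo are hypothetical, for illustration only:

```python
import tempfile
from pathlib import Path

def list_knowledge_base(root: str = ".") -> list[Path]:
    """Create the docs folder under root if needed and return the PDFs found in it."""
    docs = Path(root) / "docs"
    docs.mkdir(exist_ok=True)
    return sorted(docs.glob("*.pdf"))

# Demo in a temporary directory so nothing is written to the project root.
with tempfile.TemporaryDirectory() as tmp:
    docs_dir = Path(tmp) / "docs"
    docs_dir.mkdir()
    (docs_dir / "handbook.pdf").write_bytes(b"%PDF-1.4 stub")
    pdfs = list_knowledge_base(tmp)
```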

Run as UI

To run this from a terminal or command prompt, run the following commands

python api.py
python app.py

Run as API

  1. To run this as an API, run the following command
python api.py
  2. Access the application in your browser at http://localhost:5000.

Run as CLI

To run this from a terminal or command prompt, run the following command

python query_model.py
