
Commit: update readme
leftmove committed Apr 21, 2024
1 parent 16a44d2 commit 6d286db
Showing 8 changed files with 299 additions and 19 deletions.
11 changes: 11 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,11 @@
github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: wallstreetlocal
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
27 changes: 14 additions & 13 deletions .vscode/launch.json
@@ -1,14 +1,15 @@
-{
-    // Use IntelliSense to learn about possible attributes.
-    // Hover to view descriptions of existing attributes.
-    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
-    "version": "0.2.0",
-    "configurations": [
-        {
-            "name": "Python Debugger: Module",
-            "type": "debugpy",
-            "request": "launch",
-            "module": "test"
-        }
-    ]
-}
+{
+    // Use IntelliSense to learn about possible attributes.
+    // Hover to view descriptions of existing attributes.
+    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
+    "version": "0.2.0",
+    "configurations": [
+        {
+            "name": "Python Debugger: Module",
+            "type": "debugpy",
+            "request": "launch",
+            "cwd": "${workspaceFolder}/tests",
+            "module": "main"
+        }
+    ]
+}
213 changes: 213 additions & 0 deletions README.md
@@ -4,3 +4,216 @@
<p align="center">
<em>Cria, use Python to run LLMs with as little friction as possible.</em>
</p>

Cria is a library for programmatically running Large Language Models through Python. Cria is built so you need as little configuration as possible, even with more advanced features.

- **Easy**: No configuration is required out of the box. Defaults are built in, so getting started takes just five lines of code.
- **Concise**: Write less code to save time and avoid duplication.
- **Efficient**: Use advanced features with your own `ollama` instance, or a subprocess.

<!-- <p align="center">
<em>
Cria uses <a href="https://ollama.com/">ollama</a>.
</em>
</p> -->

## Guide

- [Quick Start](#quickstart)
- [Installation](#installation)
- [Windows](#windows)
- [Mac](#mac)
- [Linux](#linux)
- [Advanced Usage](#advanced-usage)
- [Custom Models](#custom-models)
- [Streams](#streams)
- [Closing](#closing)
- [Message History](#message-history)
- [Multiple Models and Parallel Conversations](#multiple-models-and-parallel-conversations)
- [Contributing](#contributing)
- [License](#license)

## Quickstart

Running Cria is easy; after installation, you need just five lines of code.

```python
import cria

ai = cria.Cria()

prompt = "Who is the CEO of OpenAI?"
for chunk in ai.chat(prompt):
    print(chunk, end="")  # The CEO of OpenAI is Sam Altman!

# Not required, but best practice.
ai.close()
```

By default, Cria runs `llama3:8b`. If `llama3:8b` is not installed on your machine, Cria will install it automatically.

**Important**: If the default model is not installed on your machine, downloading will take a while (`llama3:8b` is about 4.7GB).
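
If you would rather download the model ahead of time, you can pull it yourself with the `ollama` CLI before starting Cria (assuming `ollama` is already installed and on your `PATH`):

```
ollama pull llama3:8b
```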

## Installation

1. Cria uses [`ollama`](https://ollama.com/). To install it, follow the instructions for your platform below.

### Windows

[Download](https://ollama.com/download/windows)

### Mac

[Download](https://ollama.com/download/mac)

### Linux

```
curl -fsSL https://ollama.com/install.sh | sh
```

2. Install Cria with `pip`.

```
pip install cria
```

## Advanced Usage

### Custom Models

To run other LLM models, pass the model name in when you create your `ai` instance.

```python
import cria

ai = cria.Cria("llama2")

prompt = "Who is the CEO of OpenAI?"
for chunk in ai.chat(prompt):
    print(chunk, end="")  # The CEO of OpenAI is Sam Altman. He co-founded OpenAI in 2015 with...
```

You can find available models [here](https://ollama.com/library).

### Streams

Streams are used by default in Cria, but you can turn them off by passing `stream=False`.

```python
prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman!
```

### Closing

By default, models are closed when you exit the Python program, but closing them manually is a best practice.

```python
ai.close()
```
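
If you want the model closed even when an error interrupts your program, one option is a `try`/`finally` block; this is just a sketch using the methods shown above:

```python
import cria

ai = cria.Cria()

try:
    response = ai.chat("Who is the CEO of OpenAI?", stream=False)
    print(response)
finally:
    # Runs whether or not the chat raised an exception.
    ai.close()
```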

### Message History

Message history is automatically saved in Cria, so asking follow-up questions is easy.

```python
prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman.

prompt = "Tell me more about him."
response = ai.chat(prompt, stream=False)
print(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...
```

Clearing history is available as well.

```python
prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...

ai.clear()

prompt = "Tell me more about him."
response = ai.chat(prompt, stream=False)
print(response) # I apologize, but I don't have any information about "him" because the conversation just started...
```
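
Since history accumulates between calls, a small loop is all you need for a multi-turn conversation. This sketch uses only the `chat` and `close` methods shown above:

```python
import cria

ai = cria.Cria()

# Minimal REPL: Cria keeps the message history between turns.
while True:
    prompt = input("> ")
    if prompt in ("exit", "quit"):
        break
    for chunk in ai.chat(prompt):
        print(chunk, end="")
    print()

ai.close()
```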

### Multiple Models and Parallel Conversations

If you are running multiple models or parallel conversations, the `Model` class is also available. This is recommended for most use cases.

```python
import cria

ai = cria.Model()

prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman.
```

_All methods that apply to the `Cria` class also apply to `Model`._

Multiple models can be run through a `with` statement. This automatically closes them after use.

```python
import cria

prompt = "Who is the CEO of OpenAI?"

with cria.Model("llama3") as ai:
response = ai.chat(prompt, stream=False)
print(response) # OpenAI's CEO is Sam Altman, who also...

with cria.Model("llama2") as ai:
response = ai.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman.
```

Or they can be run traditionally.

```python
import cria


prompt = "Who is the CEO of OpenAI?"

llama3 = cria.Model("llama3")
response = llama3.chat(prompt, stream=False)
print(response) # OpenAI's CEO is Sam Altman, who also...

llama2 = cria.Model("llama2")
response = llama2.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman.

# Not required, but best practice.
llama3.close()
llama2.close()

```

Cria also has a `generate` method. Unlike `chat`, `generate` does not use the saved message history, so each prompt stands on its own.

```python
prompt = "Who is the CEO of OpenAI?"
for chunk in ai.generate(prompt):
    print(chunk, end="")  # The CEO of OpenAI (Open-source Artificial Intelligence) is Sam Altman.

prompt = "Tell me more about him."
response = ai.generate(prompt, stream=False)
print(response) # I apologize, but I think there may have been some confusion earlier. As this...
```

## Contributing

If you have a feature request, feel free to make an issue!

Contributions are highly appreciated.

## License

[MIT](./LICENSE.md)
30 changes: 29 additions & 1 deletion poetry.lock

Some generated files are not rendered by default.

3 changes: 2 additions & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "cria"
-version = "1.2"
+version = "1.3.0"
 authors = [{ name = "leftmove", email = "100anonyo@gmail.com" }]
 description = "Run AI locally with as little friction as possible"
 readme = "README.md"
@@ -26,6 +26,7 @@ readme = "README.md"
 [tool.poetry.dependencies]
 python = "^3.8"
 ollama = "^0.1.8"
+psutil = "^5.9.8"
 
 
 [build-system]
16 changes: 12 additions & 4 deletions src/cria.py
@@ -40,12 +40,18 @@ def check_models(model, silence_output):
         return
 
     if not silence_output:
-        print(f"LLM model not found, downloading '{model}'...")
+        print(f"LLM model not found, searching '{model}'...")
 
     try:
-        ollama.pull(model)
+        progress = ollama.pull(model, stream=True)
+        print(
+            f"LLM model {model} found, downloading... (this will probably take a while)"
+        )
+        if not silence_output:
+            for chunk in progress:
+                print(chunk)
+        print(f"'{model}' downloaded, starting processes.")
+        return model
     except Exception as e:
         print(e)
         raise ValueError(
@@ -100,7 +106,7 @@ def generate_stream(self, prompt):
         ai = ollama
 
         for chunk in ai.generate(model=model, prompt=prompt, stream=True):
-            content = chunk["message"]["content"]
+            content = chunk["response"]
             yield content
 
     def generate(
@@ -114,7 +120,9 @@ def generate(
         if stream:
             return self.generate_stream(prompt)
 
-        response = ai.generate(model=model, prompt=prompt, stream=False)
+        chunk = ai.generate(model=model, prompt=prompt, stream=False)
+        response = chunk["response"]
+
         return response
 
     def clear(self):
Empty file added tests/__init__.py
Empty file.
18 changes: 18 additions & 0 deletions tests/main.py
@@ -0,0 +1,18 @@
# Quick Start

import sys

sys.path.append("../")

from src import cria

ai = cria.Cria()

prompt = "Who is the CEO of OpenAI?"
for chunk in ai.generate(prompt):
    print(chunk, end="")
# OpenAI, a non-profit artificial intelligence research organization, does...

prompt = "Tell me more about him."
response = ai.generate(prompt, stream=False)
print(response)
