Merge branch 'main' into main
Signed-off-by: Ismael Faro Sertage <ismael.faro.sertage@gmail.com>
ismaelfaro authored Oct 18, 2024
2 parents 30419cc + fe011c0 commit f7efdf3
Showing 9 changed files with 118 additions and 110 deletions.
5 changes: 3 additions & 2 deletions .env.template
@@ -1,10 +1,11 @@
# LLMs (watsonx/ollama/openai/groq/bam)
LLM_PROVIDER=ollama
# LLM Provider (watsonx/ollama/openai/groq/bam)
LLM_BACKEND=ollama

## WatsonX
# WATSONX_API_KEY=
# WATSONX_PROJECT_ID=
# WATSONX_MODEL="meta-llama/llama-3-1-70b-instruct"
# WATSONX_REGION="us-south"

## Ollama
# OLLAMA_HOST=http://0.0.0.0:11434
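The renamed `LLM_BACKEND` variable is consumed in `src/helpers/llm.ts` (diffed further down). A minimal sketch of that lookup, assuming the `getEnv`/`parseEnv` helpers come from the framework's env internals (the real import sits above the hunk shown below):

```ts
import { z } from "zod";
// Assumed import path; the actual import in src/helpers/llm.ts is above the hunk shown below.
import { getEnv, parseEnv } from "bee-agent-framework/internals/env";

const Providers = { BAM: "bam", WATSONX: "watsonx", OLLAMA: "ollama", OPENAI: "openai", GROQ: "groq" } as const;

// LLM_BACKEND replaces the old LLM_PROVIDER name; Ollama stays the default backend.
const backend = parseEnv("LLM_BACKEND", z.nativeEnum(Providers), Providers.OLLAMA);
// WATSONX_REGION is the new WatsonX setting introduced by this template change.
const region = getEnv("WATSONX_REGION"); // e.g. "us-south"
```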
1 change: 0 additions & 1 deletion .gitignore
@@ -138,6 +138,5 @@ infra/bee-code-interpreter/*
!infra/bee-code-interpreter/k8s/bee-code-interpreter.yaml

# Code interpreter data
/tmp/observe/*
/tmp/code_interpreter_target/*
/tmp/code_interpreter_source/*
75 changes: 47 additions & 28 deletions README.md
@@ -1,14 +1,14 @@
# Bee Agent Framework Starter
# 🐝 Bee Agent Framework Starter

This starter template allows you to easily start working with the [Bee Agent Framework](https://github.com/i-am-bee/bee-agent-framework) in a second.
This starter template lets you quickly start working with the [Bee Agent Framework](https://github.com/i-am-bee/bee-agent-framework) in a second.

## Key Features
## Key Features

- Safely execute an arbitrary Python Code via [Bee Code Interpreter](https://github.com/i-am-bee/bee-code-interpreter).
- Get complete visibility into agents decisions using our MLFlow integration thanks to [Bee Observe](https://github.com/i-am-bee/bee-observe).
- Fully fledged TypeScript project setup with linting and formatting.
- 🔒 Safely execute an arbitrary Python Code via [Bee Code Interpreter](https://github.com/i-am-bee/bee-code-interpreter).
- 🔎 Get complete visibility into agents' decisions using our MLFlow integration thanks to [Bee Observe](https://github.com/i-am-bee/bee-observe).
- 🚀 Fully fledged TypeScript project setup with linting and formatting.

## Requeriments
## 📦 Requeriments

You need to have installed the next tools.

@@ -17,45 +17,64 @@ You need to have installed the next tools.
- Remote [watsonx](https://www.ibm.com/watsonx) or Local [ollama](https://ollama.com) LLM service
- LLM model [Granite](https://huggingface.co/ibm-granite) or [Llama 3.x](https://huggingface.co/meta-llama)

## Getting started
## 🛠️ Getting started

1. Clone the repository `git clone git@github.com:i-am-bee/bee-agent-framework-starter` or create your own repository from this one.
1. Clone this repository or [use it as a template](https://github.com/new?template_name=bee-agent-framework-starter&template_owner=i-am-bee).
2. Install dependencies `npm ci`.
3. Fill missing values in `.env`.
4. Run the agent `npm run start` (it runs the `./src/agent.ts` file).
3. Configure your project by filling in missing values in the `.env` file (default LLM provider is locally hosted `Ollama`).
4. Run the agent `npm run src/agent.ts`

## Infrastructure
To run an agent with a custom prompt, simply do this `npm run src/agent.ts <<< 'Hello Bee!'`

🧪 More examples can be found [here](https://github.com/i-am-bee/bee-agent-framework/blob/main/examples).

> [!TIP]
>
> To use Bee agent with [Python Code Interpreter](https://github.com/i-am-bee/bee-code-interpreter) refer to the [Code Interpreter](#code-interpreter) section.
> [!TIP]
>
> To use Bee agent with [Bee Observe](https://github.com/i-am-bee/bee-observe) refer to the [Observability](#observability) section.
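For orientation, here is a rough sketch of what the `./src/agent.ts` entrypoint invoked above might look like, pieced together from the imports visible in the `src/agent_observe.ts` diff further down; the memory class, the `getPrompt` signature, and the shape of the run result are assumptions rather than part of this commit:

```ts
import { BeeAgent } from "bee-agent-framework/agents/bee/agent";
import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory"; // assumed memory class
import { DuckDuckGoSearchTool } from "bee-agent-framework/tools/search/duckDuckGoSearch";
import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo";
import { getChatLLM } from "./helpers/llm.js";
import { getPrompt } from "./helpers/prompt.js";

// Provider comes from LLM_BACKEND in .env (Ollama by default).
const agent = new BeeAgent({
  llm: getChatLLM(),
  memory: new UnconstrainedMemory(),
  tools: [new DuckDuckGoSearchTool(), new OpenMeteoTool()],
});

// Assumed: getPrompt() returns whatever was piped in, e.g. `npm run src/agent.ts <<< 'Hello Bee!'`.
const prompt = getPrompt();
const response = await agent.run({ prompt });
console.info("Agent 🤖 :", response.result.text);
```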
## 🏗 Infrastructure

> [!NOTE]
>
> Docker distribution with support for compose is required, the following are supported:
> Docker distribution with support for _compose_ is required, the following are supported:
>
> - [Docker](https://www.docker.com/)
> - [Rancher](https://www.rancher.com/) - macOS users may want to use VZ instead of QEMU
> - [Podman](https://podman.io/) - requires [compose](https://podman-desktop.io/docs/compose/setting-up-compose) and **rootful machine** (if your current machine is rootless, please create a new one)
> - [Podman](https://podman.io/) - requires [compose](https://podman-desktop.io/docs/compose/setting-up-compose) and **rootful machine** (if your current machine is rootless, please create a new one, also ensure you have enabled Docker compatibility mode).
## 🔒Code interpreter

## Code interpreter
The [Bee Code Interpreter](https://github.com/i-am-bee/bee-code-interpreter) is a gRPC service that an agent uses to execute an arbitrary Python code safely.

### Instructions

1. Start all services related to Code Interpreter `npm run infra:start --profile=code_interpreter`
2. Add `CODE_INTERPRETER_URL=http://127.0.0.1:50051` to your `.env` (if `.env` does not exist, create one from `.env.template`).
3. Run the agent `npm run start:code_interpreter` (it runs the `./src/agent_code_interpreter.ts` file)
1. Start all services related to the [`Code Interpreter`](https://github.com/i-am-bee/bee-code-interpreter) `npm run infra:start --profile=code_interpreter`
2. Run the agent `npm run src/agent_code_interpreter.ts`

## Observability
> [!NOTE]
>
> Code Interpreter runs on `http://127.0.0.1:50051`.
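A sketch of how an agent might wire that service in, assuming the framework's `PythonTool` and `LocalPythonStorage` classes and option names from its published examples (they are not part of this diff); the two working directories match the paths ignored in `.gitignore` above:

```ts
import { BeeAgent } from "bee-agent-framework/agents/bee/agent";
import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory"; // assumed memory class
import { PythonTool } from "bee-agent-framework/tools/python/python"; // assumed import path
import { LocalPythonStorage } from "bee-agent-framework/tools/python/storage"; // assumed import path
import { getChatLLM } from "./helpers/llm.js";

const codeInterpreter = new PythonTool({
  codeInterpreter: { url: "http://127.0.0.1:50051" }, // address from the note above
  storage: new LocalPythonStorage({
    // Matches the directories ignored in .gitignore above.
    localWorkingDir: "./tmp/code_interpreter_source",
    interpreterWorkingDir: "./tmp/code_interpreter_target",
  }),
});

const agent = new BeeAgent({
  llm: getChatLLM(),
  memory: new UnconstrainedMemory(),
  tools: [codeInterpreter],
});
```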
Get full visibility of the agent's inner working via our observability stack.
## 🔎 Observability

- The [MLFlow](https://mlflow.org/) is used as UI for observability.
- The [Bee Observe](https://github.com/i-am-bee/bee-observe) is the main Open-source observability service for Bee Agent Framework.
- The [Bee Observe Connector](https://github.com/i-am-bee/bee-observe-connector) is the observability connector for Bee Agent Framework
Get complete visibility of the agent's inner workings via our observability stack.

Configuration (ENV variables) can be found [here](./infra/observe/.env.docker).
- The [MLFlow](https://mlflow.org/) is used as UI for observability.
- The [Bee Observe](https://github.com/i-am-bee/bee-observe) is the observability service (API) for gathering traces from [Bee Agent Framework](https://github.com/i-am-bee/bee-agent-framework).
- The [Bee Observe Connector](https://github.com/i-am-bee/bee-observe-connector) is the observability connector that sends traces from [Bee Agent Framework](https://github.com/i-am-bee/bee-agent-framework) to [Bee Observe](https://github.com/i-am-bee/bee-observe).

### Instructions

1. Start all services related to Observe `npm run infra:start --profile=observe`
2. Start the agent using the observe and MLFlow `npm run start:observe` (it runs the `./src/agent_observe.ts` file).
3. Run the `curl` command that retrieves data from Bee Observe and passes them to the `MLFlow` instance.
4. Access MLFlow web application [`http://localhost:8080/#/experiments/`](http://localhost:8080/#/experiments/)
1. Start all services related to [Bee Observe](https://github.com/i-am-bee/bee-observe) `npm run infra:start --profile=observe`
2. Run the agent `npm run src/agent_observe.ts`
3. Upload the final trace to the `MLFlow` (the agent will print instructions on how to do that).
4. See visualized trace in MLFlow web application [`http://127.0.0.1:8080/#/experiments/0`](http://localhost:8080/#/experiments/0)
- Credentials: (user: `admin`, password: `password`)

> [!TIP]
>
> Configuration file is [infra/observe/.env.docker](./infra/observe/.env.docker).
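If you prefer to pull the raw trace yourself rather than follow the agent's printed instructions, here is a minimal sketch of the call the starter's previous code assembled via curl; the base URL, header name, and API key are taken from the `src/agent_observe.ts` diff below, and the trace id is whatever your run reports:

```ts
// Values lifted from the src/agent_observe.ts diff below; the trace id is a placeholder.
const baseUrl = "http://127.0.0.1:4002";
const apiAuthKey = "testing-api-key";
const traceId = "<id printed by the agent run>";

const res = await fetch(`${baseUrl}/trace/${traceId}?include_tree=true&include_mlflow=true`, {
  headers: {
    "x-bee-authorization": apiAuthKey,
    "Content-Type": "application/json",
  },
});
console.log(JSON.stringify(await res.json(), null, 2));
```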
2 changes: 1 addition & 1 deletion docker-compose.yml
@@ -55,7 +55,7 @@ services:
profiles:
- all
- observe
image: bitnami/mlflow:2.14.1
image: bitnami/mlflow:2.16.2
ports:
- "8080:8080"
entrypoint:
6 changes: 2 additions & 4 deletions package.json
@@ -20,16 +20,14 @@
"url": "https://github.com/i-am-bee/bee-agent-framework-starter/issues"
},
"scripts": {
"start": "tsx ./src/agent.ts",
"start:basic": "tsx ./src/agent.ts",
"start:observe": "tsx ./src/agent_observe.ts",
"start:code_interpreter": "tsx ./src/agent_code_interpreter.ts",
"run": "npm exec tsx",
"ts:check": "tsc --noEmit --project tsconfig.json",
"build": "rimraf dist && tsc",
"lint": "eslint",
"lint:fix": "eslint --fix",
"format": "prettier --check .",
"format:fix": "prettier --write .",
"infra:pull": "docker compose --profile=${npm_config_profile:=all} pull",
"infra:start": "docker compose --profile=${npm_config_profile:=all} up --detach --wait",
"infra:stop": "docker compose --profile=${npm_config_profile:=all} down",
"infra:remove": "npm run infra:stop -- --volumes",
24 changes: 7 additions & 17 deletions src/agent_observe.ts
@@ -6,15 +6,9 @@ import { DuckDuckGoSearchTool } from "bee-agent-framework/tools/search/duckDuckG
import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo";
import * as process from "node:process";
import { createObserveConnector, ObserveError } from "bee-observe-connector";
import { beeObserveApiSetting } from "./helpers/observe.js";
import { dirname } from "node:path";
import { fileURLToPath } from "node:url";
import * as path from "node:path";
import { getChatLLM } from "./helpers/llm.js";
import { getPrompt } from "./helpers/prompt.js";

const __dirname = dirname(fileURLToPath(import.meta.url));

const llm = getChatLLM();

const agent = new BeeAgent({
@@ -40,22 +34,18 @@ try {
)
.middleware(
createObserveConnector({
api: beeObserveApiSetting,
api: {
baseUrl: "http://127.0.0.1:4002",
apiAuthKey: "testing-api-key",
},
cb: async (err, data) => {
if (err) {
console.error(`Agent 🤖 : `, ObserveError.ensure(err).explain());
} else {
const { id, response } = data?.result || {};
console.info(`Observe 🔎 : `, response?.text || "Invalid output");

// you can use `&include_mlflow_tree=true` as well to return all sent data to mlflow
console.info(`Observe 🔎`, data?.result?.response?.text || "Invalid result.");
console.info(
`Observe 🔎 : Call the Observe API via this curl command outside of this Interactive session and see the trace data in the "trace.json" file: \n\n`,
`curl -s "${beeObserveApiSetting.baseUrl}/trace/${id}?include_tree=true&include_mlflow=true" \\
\t-H "x-bee-authorization: ${beeObserveApiSetting.apiAuthKey}" \\
\t-H "Content-Type: application/json" \\
\t-o ${path.join(__dirname, "/../tmp/observe/trace.json")}`,
`\n`,
`Observe 🔎`,
`Trace has been created and will shortly be available at https://127.0.0.1:8080/#/experiments/0`,
);
}
},
111 changes: 58 additions & 53 deletions src/helpers/llm.ts
@@ -10,60 +10,65 @@ import { GroqChatLLM } from "bee-agent-framework/adapters/groq/chat";
import { Ollama } from "ollama";
import Groq from "groq-sdk";

export enum LLMProviders {
BAM = "bam",
WATSONX = "watsonx",
OLLAMA = "ollama",
OPENAI = "openai",
GROQ = "groq",
}
export const Providers = {
BAM: "bam",
WATSONX: "watsonx",
OLLAMA: "ollama",
OPENAI: "openai",
GROQ: "groq",
} as const;
type Provider = (typeof Providers)[keyof typeof Providers];

export const LLMFactories: Record<Provider, () => ChatLLM<ChatLLMOutput>> = {
[Providers.BAM]: () =>
BAMChatLLM.fromPreset(getEnv("GENAI_MODEL") || "meta-llama/llama-3-1-70b-instruct", {
client: new BAMSDK({
apiKey: getEnv("GENAI_API_KEY"),
}),
}),
[Providers.GROQ]: () =>
new GroqChatLLM({
modelId: getEnv("GROQ_MODEL") || "llama-3.1-70b-versatile",
client: new Groq({
apiKey: getEnv("GROQ_API_KEY"),
}),
}),
[Providers.OPENAI]: () =>
new OpenAIChatLLM({
modelId: getEnv("OPENAI_MODEL") || "gpt-4o",
parameters: {
temperature: 0,
max_tokens: 2048,
},
}),
[Providers.OLLAMA]: () =>
new OllamaChatLLM({
modelId: getEnv("OLLAMA_MODEL") || "llama3.1:8b",
parameters: {
temperature: 0,
repeat_penalty: 1,
num_predict: 2000,
},
client: new Ollama({
host: getEnv("OLLAMA_HOST"),
}),
}),
[Providers.WATSONX]: () =>
WatsonXChatLLM.fromPreset(getEnv("WATSONX_MODEL") || "meta-llama/llama-3-1-70b-instruct", {
apiKey: getEnv("WATSONX_API_KEY"),
projectId: getEnv("WATSONX_PROJECT_ID"),
region: getEnv("WATSONX_REGION"),
}),
};

export function getChatLLM(provider?: LLMProviders): ChatLLM<ChatLLMOutput> {
provider = provider || parseEnv("LLM_PROVIDER", z.nativeEnum(LLMProviders), LLMProviders.OLLAMA);
export function getChatLLM(provider?: Provider): ChatLLM<ChatLLMOutput> {
if (!provider) {
provider = parseEnv("LLM_BACKEND", z.nativeEnum(Providers), Providers.OLLAMA);
}

switch (provider) {
case LLMProviders.WATSONX:
return WatsonXChatLLM.fromPreset(
getEnv("WATSONX_MODEL") || "meta-llama/llama-3-1-70b-instruct",
{
apiKey: getEnv("WATSONX_API_KEY"),
projectId: getEnv("WATSONX_PROJECT_ID"),
},
);
case LLMProviders.BAM:
return BAMChatLLM.fromPreset(getEnv("GENAI_MODEL") || "meta-llama/llama-3-1-70b-instruct", {
client: new BAMSDK({
apiKey: getEnv("GENAI_API_KEY"),
}),
});
case LLMProviders.OPENAI:
return new OpenAIChatLLM({
modelId: getEnv("OPENAI_MODEL") || "gpt-4o",
parameters: {
temperature: 0,
max_tokens: 2048,
},
});
case LLMProviders.OLLAMA:
return new OllamaChatLLM({
modelId: getEnv("OLLAMA_MODEL") || "llama3.1:8b",
parameters: {
temperature: 0,
repeat_penalty: 1,
num_predict: 2000,
},
client: new Ollama({
host: getEnv("OLLAMA_HOST"),
}),
});
case LLMProviders.GROQ:
return new GroqChatLLM({
modelId: getEnv("GROQ_MODEL") || "llama-3.1-70b-versatile",
client: new Groq({
apiKey: getEnv("GROQ_API_KEY"),
}),
});
default:
throw new Error("No LLM provider has been defined (check out .env.example)!");
const factory = LLMFactories[provider];
if (!factory) {
throw new Error(`Provider "${provider}" not found.`);
}
return factory();
}
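A short usage sketch of the refactored helper: `getChatLLM()` resolves the provider from `LLM_BACKEND` (Ollama by default), while passing a `Providers` value pins it explicitly. Both exports come from the diff above; the calling file is hypothetical:

```ts
import { getChatLLM, Providers } from "./helpers/llm.js";

// Provider resolved from LLM_BACKEND, falling back to Ollama.
const llm = getChatLLM();

// Or bypass the environment variable and pin a provider from the factory map.
const watsonx = getChatLLM(Providers.WATSONX);
```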
4 changes: 0 additions & 4 deletions src/helpers/observe.ts

This file was deleted.

Empty file removed: tmp/observe/.gitkeep
