BAML is a language that helps you get structured data from LLMs, with the best DX possible. Works with all languages. Check out the promptfiddle.com playground
A curated list of blogs, videos, tutorials, code, tools, scripts, and anything useful to help you learn Azure Policy - by @jesseloudon
ChatGPT API Usage using LangChain, LlamaIndex, Guardrails, AutoGPT and more
PAIG (pronounced like "paige") is an open-source project designed to protect Generative AI (GenAI) applications by ensuring security, safety, and observability.
Framework for LLM evaluation, guardrails and security
LangEvals aggregates various language model evaluators into a single platform, providing a standard interface for a multitude of scores and LLM guardrails, for you to protect and benchmark your LLM models and pipelines.
Trustworthy question-answering AI plugin for chatbots in the social sector with advanced content performance analysis.
Building blocks for rapid development of GenAI applications
A RAG-based chatbot that incorporates a semantic cache and guardrails.
💂🏼 Build your Documentation AI with Nemo Guardrails
This repo hosts the Python SDK and related examples for AIMon, a proprietary, state-of-the-art system for detecting LLM quality issues such as hallucinations. It can be used for offline evals, continuous monitoring, or inline detection. We offer various model-quality metrics that are fast, reliable, and cost-effective.
We compared LangChain, Fixie, and Marvin
A Python library for evaluating guardrail models.
AI Tool RAG System: LlamaIndex-powered discovery engine for AI tools with Telegram bot interface using NeMo Guardrails
Free Tier policy for AWS usage
An e-commerce fashion assistant built with ChatGPT, Hugging Face, ltree, and pgvector.
A demo showcasing the capabilities of guardrails in LLMs.
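As a concrete illustration of what an output guardrail does, here is a minimal sketch that validates and redacts an LLM's reply before it reaches the user. The patterns and function names are illustrative only, not the API of the Guardrails library or any project listed here.

```python
# Minimal output-guardrail sketch: scan an LLM reply for disallowed
# patterns (here, SSN-like and 16-digit card-like numbers) and redact
# them, reporting whether anything was changed.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped numbers
    re.compile(r"\b\d{16}\b"),             # 16-digit card-shaped numbers
]


def apply_guardrail(llm_output: str) -> tuple[str, bool]:
    """Redact disallowed patterns; return (safe_text, was_modified)."""
    safe, modified = llm_output, False
    for pattern in BLOCKED_PATTERNS:
        safe, n = pattern.subn("[REDACTED]", safe)
        modified = modified or n > 0
    return safe, modified


text, changed = apply_guardrail("Your SSN is 123-45-6789.")
clean, untouched = apply_guardrail("The weather is sunny today.")
```

Production guardrails layer many such checks (PII, toxicity, topic restrictions, schema validation), but each one follows this same validate-or-repair shape.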
Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT
The Modelmetry Python SDK allows developers to easily integrate Modelmetry’s advanced guardrails and monitoring capabilities into their LLM-powered applications.