p-tuning
Here are 10 public repositories matching this topic...
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible. We have built a fine-tuning platform that makes it easy for researchers to get started with and use large models, and we welcome open-source enthusiasts to submit any meaningful PR!
Updated Dec 12, 2023 - Jupyter Notebook
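As a hedged illustration of the parameter-efficient methods this repo unifies, here is a minimal sketch of attaching a LoRA adapter with the Hugging Face peft library; the base model name and hyperparameters are illustrative assumptions, not the repo's actual configuration:

```python
# A minimal sketch (not this repo's actual code): attaching a LoRA adapter
# to a causal LM with the Hugging Face peft library. The model name and
# hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type="CAUSAL_LM",  # peft also accepts the TaskType enum
    r=8,                    # rank of the low-rank update matrices
    lora_alpha=16,          # scaling factor applied to the update
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights stay trainable
```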
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
Updated Nov 16, 2023 - Python
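The entry above describes P-Tuning v2, which injects trainable prompts at every transformer layer rather than only at the input. Here is a minimal sketch of that deep-prompt idea, using the Hugging Face peft library's prefix tuning as a closely related implementation; the base model and task are illustrative assumptions:

```python
# A minimal sketch of deep prompt tuning in the spirit of P-Tuning v2:
# trainable prefix vectors are prepended at every transformer layer.
# Uses peft's PrefixTuningConfig; the base model is an illustrative choice.
from transformers import AutoModelForSequenceClassification
from peft import PrefixTuningConfig, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

config = PrefixTuningConfig(
    task_type="SEQ_CLS",
    num_virtual_tokens=20,  # length of the per-layer prefix
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # prefixes are the only new trainables
```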
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
Updated Oct 6, 2022 - Python
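For context, P-Tuning as introduced in "GPT Understands, Too" learns continuous prompt embeddings through a small reparameterization network rather than tuning them directly. A minimal sketch, assuming the Hugging Face peft implementation (PromptEncoderConfig) and an illustrative base model:

```python
# A minimal sketch of P-Tuning ("GPT Understands, Too"): continuous prompt
# embeddings produced by a small encoder network instead of being tuned
# directly. Model name and sizes are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import PromptEncoderConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,    # number of trainable prompt tokens
    encoder_hidden_size=128,  # hidden size of the prompt encoder
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```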
This repository contains AI Bootcamp material consisting of a workflow for LLMs.
Updated Aug 20, 2024 - Jupyter Notebook
Code for the COLING 2022 paper "DPTDR: Deep Prompt Tuning for Dense Passage Retrieval".
Updated Aug 7, 2023 - Python
This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It also includes hands-on exercises complemented by tutorials, code snippets, and presentations to help researchers get started with the NeMo LLM Service and Guardrails.
Updated Mar 7, 2024 - Jupyter Notebook
Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks.
Updated Feb 15, 2024 - Python
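To make such a comparison concrete, here is a hedged sketch of comparing the trainable-parameter budgets of several PEFT methods on one base model; the library calls are from Hugging Face peft, and the model name and hyperparameters are assumptions rather than this repo's setup:

```python
# A hedged sketch: compare trainable-parameter counts of several PEFT
# adaptation methods on the same (illustrative) base model.
from transformers import AutoModelForCausalLM
from peft import (LoraConfig, PrefixTuningConfig,
                  PromptEncoderConfig, get_peft_model)

configs = {
    "LoRA": LoraConfig(task_type="CAUSAL_LM", r=8),
    "Prefix (P-Tuning v2-style)": PrefixTuningConfig(
        task_type="CAUSAL_LM", num_virtual_tokens=20),
    "P-Tuning": PromptEncoderConfig(
        task_type="CAUSAL_LM", num_virtual_tokens=20),
}

for name, cfg in configs.items():
    base = AutoModelForCausalLM.from_pretrained("gpt2")  # fresh copy per method
    peft_model = get_peft_model(base, cfg)
    trainable, total = peft_model.get_nb_trainable_parameters()
    print(f"{name}: {trainable:,} of {total:,} parameters trainable")
```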