Can it do LoRA SFT? #24
Comments
Hi, thanks for paying attention to this!! This repo is currently designed for full-parameter finetuning, but LoRA freezes most of the parameters. Since the two approaches conflict, this repo does not currently support LoRA.
This repo is based on the pipeline method, which lets you train your model with DP + PP (Megatron-LM is DP + PP + TP, the so-called 3D layout). This is faster and requires less memory than ZeRO-based methods when there are not that many GPUs (100+). You can train a 7B or 13B model on a server with 8 GPUs (24 GB each), which I believe many companies can afford.
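For context, here is a minimal sketch (not from this repo) of what "LoRA freezes most of the parameters" means in practice, using the Hugging Face `peft` library; the checkpoint name and adapter hyperparameters are placeholders, not anything gdGPT ships:

```python
# Minimal LoRA sketch, assuming the `transformers` and `peft` libraries.
# The checkpoint name below is a placeholder 7B model, not part of gdGPT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # placeholder checkpoint

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension of the adapter matrices
    lora_alpha=16,                         # scaling factor applied to the adapter output
    target_modules=["q_proj", "v_proj"],   # inject adapters into the attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
# Only the small adapter matrices remain trainable; the base weights are frozen.
# That is the opposite of the full-parameter pipeline training this repo implements.
model.print_trainable_parameters()
```

Since only a tiny fraction of the parameters stays trainable, LoRA removes most of the memory pressure that the repo's DP + PP layout is designed to handle, which is why the two approaches don't fit together here.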
Great job!
Limited by GPU resources, LoRA is the only practical way to fine-tune for most people. Can this repo do LoRA SFT?