
How does one convert alpaca Llama for use with this repo? #502

Answered by BetaDoggo
arijoon asked this question in Q&A

There is no way to convert the 4-bit GGML models without loss, because they use a different quantization method. You'll have to merge the LoRA into the base model using the alpaca-lora repo, then quantize the merged model to 4-bit using the GPTQ-for-LLaMa repo.
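
For reference, the merge step can also be done directly with the Hugging Face peft library, which alpaca-lora is built on. A minimal sketch, assuming an HF-format LLaMA base model and the tloen/alpaca-lora-7b adapter; the model IDs and output path are placeholders, swap in your own:

```python
# Minimal sketch of the LoRA merge step using peft.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE = "decapoda-research/llama-7b-hf"  # assumption: any HF-format LLaMA base
ADAPTER = "tloen/alpaca-lora-7b"        # assumption: the Alpaca LoRA adapter

base = LlamaForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER)

# Fold the LoRA deltas into the base weights, leaving a plain LLaMA
# checkpoint that GPTQ-for-LLaMa can quantize to 4-bit afterwards.
merged = model.merge_and_unload()
merged.save_pretrained("./alpaca-merged")

LlamaTokenizer.from_pretrained(BASE).save_pretrained("./alpaca-merged")
```

The ./alpaca-merged directory can then be fed to GPTQ-for-LLaMa's quantization script (llama.py with --wbits 4 at the time of writing; check that repo's README for the current invocation and calibration dataset options).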

Answer selected by arijoon