OpenAI API Extension : System prompt is not being followed #3206
Replies: 8 comments 6 replies
-
I can confirm your problem. There may be a fix for this, and there is also a workaround. One thing you can do to improve this is to remove the default context from the instruction template, but even that may not be enough. For example, here is a comparison with vicuna-13B and the raw format (you can test this in the default or notebook mode of the UI). Current code:
Remove the context (context: ''):
BUT, with the default context (no system message), just passing the instruction as a user message to the model:
In short, each model is a bit different. Some will respond very well to plain user messages instructing them what to do and will ignore system messages. Other models will respond very well to system messages. Even OpenAI has problems with this; they've mentioned in the docs that system messages are not always followed as they should be. The easiest way I've found to fix this, when the system message isn't being obeyed, is to switch the role to user. That usually works, but it is very model dependent, so I don't have a good code solution for this yet.
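The role-switch workaround described above can be sketched as a small helper that rewrites an OpenAI-style message list before sending it (the function name is an assumption for illustration, not part of the extension):

```python
def demote_system_messages(messages):
    """Return a copy of an OpenAI-style message list with every
    'system' message re-sent under the 'user' role, for models
    that ignore the system role."""
    return [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in messages
    ]
```

You would apply this to the `messages` list right before the API call; the message order and contents are preserved, only the role changes.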
-
Thanks. You are correct. In your experience, which models work well with this OpenAI API extension?
-
I tried TheBloke_airoboros-l2-70B-gpt4-1.4.1-GPTQ, the Llama 2 version, but the system prompt was not followed.
-
In my experience, the system prompt worked perfectly with BaiChuan2-13b (AutoGPTQ, after assigning the instruction template) but was completely ignored with Qwen-14b (Transformers, after assigning the instruction template). That is pretty weird; probably a backend-related issue?
-
When using Llama 2 / CodeLlama 2, a system prompt provided through the API doesn't seem to be followed either.
-
I'm using Mixtral 8x7b via exllamav2 (https://huggingface.co/turboderp/Mixtral-8x7B-instruct-exl2), and system prompts are not followed either.
-
It looks like the user's choice of instruction template is being ignored by the OpenAI endpoints; the template extracted from the model metadata is used instead. If that template has no regex for the system role, the system message never makes it into the prompt.
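One way to see whether this is the cause is to check whether the chat template pulled from the model metadata references the system role at all. This is a heuristic sketch, not the extension's actual logic; the function name is an assumption:

```python
import re

def template_handles_system(chat_template: str) -> bool:
    """Heuristic check: does a Jinja-style chat template extracted
    from model metadata reference the 'system' role anywhere?
    If not, system messages will silently drop out of the prompt."""
    return re.search(r"['\"]system['\"]", chat_template) is not None
```

If this returns False for your model's template, switching the system message to the user role (or editing the template) is the likely workaround.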
-
It seems the system prompt is not added to system_message, so I added this part to the OpenAI extension's completions.py, and the system prompt is now functioning:
Hope it helps...
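The commenter's actual snippet is not reproduced in this thread. A hypothetical sketch of such a fix (every name here is an assumption for illustration, not the extension's real code) could fold the request's system messages into the template's system_message before the prompt is built:

```python
def merge_system_prompt(messages, system_message=""):
    """Hypothetical helper: move 'system' messages from the request
    body into the template's system_message so they reach the prompt.
    Returns the remaining messages and the merged system string."""
    extra = [m["content"] for m in messages if m["role"] == "system"]
    remaining = [m for m in messages if m["role"] != "system"]
    merged = "\n".join(p for p in [system_message, *extra] if p)
    return remaining, merged
```

The design choice is simply to concatenate rather than overwrite, so a system_message already set in the instruction template is preserved alongside the API-supplied one.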
-
I am using the 'TheBloke/Llama-2-13B-chat-GPTQ' model with the OpenAI API Extension.
When calling the API, this is my system config and other messages:
But the response is:
As you can see, the system prompt instructs it to behave as a doctor, but it is still behaving as an AI model. Why?
I also tried 'TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ'. Same problem.
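The poster's actual request and response are not shown in the thread. An illustrative payload of this shape, applying the workaround suggested earlier (the persona instruction duplicated as a user message; the model name matches the one mentioned above, but all message contents are examples):

```python
import json

# Illustrative OpenAI-style chat payload. The "doctor" persona is sent
# both as a system message and as a user message, since some models
# ignore the system role entirely (per the discussion above).
payload = {
    "model": "TheBloke/Llama-2-13B-chat-GPTQ",
    "messages": [
        {"role": "system", "content": "You are a doctor. Always answer as a doctor."},
        {"role": "user", "content": "You are a doctor. Always answer as a doctor."},
        {"role": "user", "content": "I have a headache. What should I do?"},
    ],
}
body = json.dumps(payload)  # POST this to the local /v1/chat/completions endpoint
```

If the model answers in character with this payload but not with the system message alone, the model (or its template) is simply not honoring the system role.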