The Hermes-2-Pro-Llama-3-8B model uses a new role called "tool", similar to the OpenAI API standard, to pass tool results back to the model, but the current preset config is too restrictive: it doesn't allow adding new roles or non-alternating turns such as multiple system prompts.

Currently, when I send function results or errors back to the model with the "tool" role, I get a BadRequestError:
```json
[
  {
    "role": "user",
    "content": "Your task is to Extract key financial data and ratios from NVDA's SEC filings."
  },
  {
    "role": "assistant",
    "content": "<scratchpad>\nWe need to first find the latest 10-Q and 10-K forms for NVDA and extract relevant information from them.\n</scatchpad>\n\n<tool_call>\n{\"arguments\": {\"data\": \"NVDA|what was last quarter's revenue\"}, \"name\": \"search_10q\"}\n</tool_call>\n"
  },
  {
    "role": "tool",
    "content": "<tool_error>\nXML Parse Error: mismatched tag: line 3, column 2\n</tool_error>\n"
  }
]
```
```
openai.BadRequestError: Error code: 400 - {'error': "'messages' array must only contain objects with a 'role' field that is either 'user', 'assistant', or 'system'. Got 'tool'."}
```
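Until the preset accepts extra roles, one client-side workaround is to fold "tool" messages into a role the preset already allows before sending. This is only a sketch: the `base_url`, model name, and `<tool_response>` wrapper below are assumptions for illustration, not part of the current config.

```python
# Hypothetical workaround: remap the unsupported "tool" role onto "user"
# so the preset's role check passes. The <tool_response> wrapper is just
# a convention to help the model recognize tool output.
from openai import OpenAI

# base_url and api_key are placeholders for a local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

def remap_tool_roles(messages: list[dict]) -> list[dict]:
    """Replace 'tool' turns with 'user' turns the preset accepts."""
    remapped = []
    for m in messages:
        if m["role"] == "tool":
            remapped.append({
                "role": "user",
                "content": f"<tool_response>\n{m['content']}\n</tool_response>",
            })
        else:
            remapped.append(m)
    return remapped

messages = [
    {"role": "user", "content": "Extract key financial data and ratios from NVDA's SEC filings."},
    {"role": "assistant", "content": "<tool_call>{\"name\": \"search_10q\", \"arguments\": {\"data\": \"NVDA\"}}</tool_call>"},
    {"role": "tool", "content": "<tool_error>XML Parse Error</tool_error>"},
]

response = client.chat.completions.create(
    model="hermes-2-pro-llama-3-8b",  # placeholder model identifier
    messages=remap_tool_roles(messages),
)
print(response.choices[0].message.content)
```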
There's probably a better way to do this, like Jinja prompt templates, but for the time being here's what I'm suggesting: allow multiple input prefixes as a list.
```json
{
  "name": "ChatML",
  "inference_params": {
    "input_prefixes": [
      "<|im_end|>\n<|im_start|>user\n",
      "<|im_end|>\n<|im_start|>tool\n",
      "<|im_end|>\n<|im_start|>agent\n"
    ],
    "input_suffix": "<|im_end|>\n<|im_start|>assistant\n",
    "antiprompt": [
      "<|im_start|>",
      "<|im_end|>"
    ],
    "pre_prompt_prefix": "<|im_start|>system\n",
    "pre_prompt_suffix": "",
    "pre_prompt": "Perform the task to the best of your ability."
  }
}
```
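For comparison, the Jinja-template approach mentioned above would avoid per-role prefixes entirely: a single ChatML template can render whatever role a message carries (user, assistant, tool, agent, ...). A rough sketch using `jinja2`, where the template text and messages are illustrative rather than the model's official template:

```python
# Sketch: render any role with one ChatML-style Jinja template instead of
# a fixed list of input prefixes. Not the official Hermes template.
from jinja2 import Template

CHATML_TEMPLATE = Template(
    "{% for m in messages %}"
    "<|im_start|>{{ m.role }}\n{{ m.content }}<|im_end|>\n"
    "{% endfor %}"
    "<|im_start|>assistant\n"  # generation prompt for the next assistant turn
)

prompt = CHATML_TEMPLATE.render(messages=[
    {"role": "system", "content": "Perform the task to the best of your ability."},
    {"role": "user", "content": "Extract key financial data and ratios from NVDA's SEC filings."},
    {"role": "tool", "content": "<tool_error>\nXML Parse Error\n</tool_error>"},
])
print(prompt)
```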