WIP : ADD: Batch request (Non-streaming) for OpenAI Plugin #66
base: main
Conversation
@sjmonson @dagrayvid @npalaska

Thanks @thameem-abbas 👍 Initial tests with changes from this PR work well. I'll add more in the afternoon.
I still don't like this approach. It duplicates too much code; that's fine for a one-off, but for this to merge it would be preferable to have all request methods take a list of queries and return a list of Results (see later comment).
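A minimal sketch of that suggested interface, with a hypothetical `Result` record and helper names that are illustrative, not the repository's actual code:

```python
from dataclasses import dataclass


@dataclass
class Result:
    """Hypothetical per-query result record; the real repo defines its own."""
    user_id: int
    output: str


class ExamplePlugin:
    """Every request method takes a list of queries and returns a list of Results."""

    def request_http(self, queries: list[dict], user_id: int,
                     test_end_time: float = 0) -> list[Result]:
        # A single request is just a batch of one, so no parallel
        # request_batch_http() duplicating this logic is needed.
        return [self._send_one(q, user_id, test_end_time) for q in queries]

    def _send_one(self, query: dict, user_id: int,
                  test_end_time: float) -> Result:
        ...  # one HTTP round trip, elided
```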
```diff
@@ -177,6 +185,88 @@ def request_http(self, query: dict, user_id: int, test_end_time: float = 0):

         return result

+    def request_batch_http(self, queries, user_id, test_end_time: float = 0):
```
Changing the interface of request_func() based on calling args is bad practice. See above comment.
```diff
-    # if timeout passes, queue.Empty will be thrown
-    # User should continue to poll for inputs
```
Why was this comment removed?
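For context, the removed comment documents a standard polling pattern; a minimal sketch, assuming the worker reads from a plain `queue.Queue` (function and handler names here are illustrative):

```python
import queue
import threading


def poll_for_inputs(work_queue: queue.Queue, stop: threading.Event) -> None:
    """Drain the queue until asked to stop."""
    while not stop.is_set():
        try:
            item = work_queue.get(timeout=1)
        except queue.Empty:
            # If the timeout passes, queue.Empty is raised;
            # the caller should simply continue polling for inputs.
            continue
        process(item)  # hypothetical per-item handler
```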
```diff
     concurrency = load_options.get("concurrency")
     duration = load_options.get("duration")

     plugin_type = config.get("plugin")
     if plugin_type == "openai_plugin":
         plugin = openai_plugin.OpenAIPlugin(
-            config.get("plugin_options")
+            config.get("plugin_options"), batch_size
```
Again, this should be handled in a way that the plugin does not need to know the batch size in advance. However, if you must... just set `config["plugin_options"]["batch_size"] = batch_size` to avoid changing the interface.
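A sketch of that alternative, assuming `plugin_options` is a plain dict and reusing the names from the diff above (the `batch_size` key is an illustrative choice, not an existing option):

```python
# Thread batch_size through the existing plugin_options dict instead of
# widening the OpenAIPlugin constructor signature.
plugin_options = config.get("plugin_options", {})
plugin_options["batch_size"] = batch_size  # illustrative key name

plugin_type = config.get("plugin")
if plugin_type == "openai_plugin":
    # Constructor keeps its original one-argument interface.
    plugin = openai_plugin.OpenAIPlugin(plugin_options)
```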
Agreeing on change of condition. Co-authored-by: Samuel Monson <smonson@irbash.net>
Don't know what was running through my head when I wrote that. Thanks Co-authored-by: Samuel Monson <smonson@irbash.net>
Fix redundant annotation Co-authored-by: Samuel Monson <smonson@irbash.net>
Adds non-streaming batch request support to the OpenAI plugin.
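For illustration, a minimal sketch of one way to issue a non-streaming batch request against an OpenAI-compatible `/v1/completions` endpoint, which accepts a list of prompts in a single call; the endpoint URL, model name, and query schema below are assumptions, not this PR's actual implementation:

```python
import requests


def request_batch_http(queries: list[dict],
                       api_base: str = "http://localhost:8000") -> list[str]:
    # The legacy completions API accepts a list of prompts in one call,
    # which is what makes a non-streaming batch request possible.
    payload = {
        "model": "example-model",                 # assumed model name
        "prompt": [q["text"] for q in queries],   # assumed query schema
        "max_tokens": 64,
        "stream": False,  # non-streaming: one JSON response for the whole batch
    }
    resp = requests.post(f"{api_base}/v1/completions", json=payload, timeout=60)
    resp.raise_for_status()
    # One choice is returned per prompt, in order.
    return [choice["text"] for choice in resp.json()["choices"]]
```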