
WIP : ADD: Batch request (Non-streaming) for OpenAI Plugin #66

Open · wants to merge 7 commits into base: main
Conversation

thameem-abbas (Collaborator)

Adds non-streaming batch request support in the OpenAI plugin.
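
For context, a minimal sketch of the idea, assuming a /v1/completions-style endpoint that accepts a list of prompts (illustrative names only, not this PR's code): one non-streaming request carries the whole batch, and the response holds one choice per prompt.

```python
# Illustrative sketch only, not this PR's code. Assumes a
# /v1/completions-style endpoint that accepts a list of prompts.
import requests

def send_batch(host: str, model: str, prompts: list[str]) -> list[str]:
    payload = {
        "model": model,
        "prompt": prompts,  # a list of prompts batches them into one request
        "max_tokens": 64,
        "stream": False,    # non-streaming: the full batch returns in one body
    }
    resp = requests.post(f"{host}/v1/completions", json=payload, timeout=120)
    resp.raise_for_status()
    choices = resp.json()["choices"]
    # one choice per prompt; "index" maps each choice back to its prompt
    return [c["text"] for c in sorted(choices, key=lambda c: c["index"])]
```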

@thameem-abbas thameem-abbas changed the title ADD: Batch request for OpenAI Plugin ADD: Batch request (Non-streaming) for OpenAI Plugin Nov 4, 2024
thameem-abbas (Collaborator, Author)

@sjmonson @dagrayvid @npalaska
It would be great if someone could review the PR.

npalaska (Collaborator) commented Nov 4, 2024

Thanks @thameem-abbas 👍 Initial tests with changes from this PR work well. I'll add more in the afternoon.

Member

I still don't like this approach. It duplicates too much code; that is fine for a one-off, but to merge it would be preferable to have all request methods take a list of queries and return a list of Results (see the later comment).
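
Roughly, the suggested shape could look like this (hypothetical sketch, not code from this PR; `Result` stands in for the type defined in result.py):

```python
# Hypothetical sketch of the suggested interface; not code from this PR.
from dataclasses import dataclass

@dataclass
class Result:  # stand-in for the harness's result type (result.py)
    text: str

class PluginSketch:
    def request_http(self, queries: list[dict], user_id: int,
                     test_end_time: float = 0) -> list[Result]:
        """Every request method takes a list of queries and returns a
        list of Results; a single request is just a batch of size one."""
        return [self._one(q, user_id, test_end_time) for q in queries]

    def _one(self, query: dict, user_id: int, end: float) -> Result:
        ...  # the existing single-request logic lives here, unduplicated
```

A batch-capable plugin can override the list-taking method to issue one HTTP call for the whole list, while other plugins keep the per-query loop; callers never see two interfaces.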

@@ -177,6 +185,88 @@ def request_http(self, query: dict, user_id: int, test_end_time: float = 0):

return result

def request_batch_http(self, queries, user_id, test_end_time: float = 0):
Member

Changing the interface of request_func() based on calling args is bad practice. See above comment.
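
For illustration, the flagged anti-pattern amounts to this (hypothetical code, not the PR's):

```python
# Hypothetical illustration of the anti-pattern, not the PR's code:
# the return type silently changes with the argument type.
def request_func(query_or_queries, user_id: int):
    if isinstance(query_or_queries, list):
        return [do_one(q, user_id) for q in query_or_queries]  # -> list
    return do_one(query_or_queries, user_id)                   # -> single

def do_one(query: dict, user_id: int):
    ...  # placeholder for the single-request path

# Callers must remember what they passed in to know what comes back;
# a single list-in/list-out signature avoids the dual contract.
```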

Outdated review threads on plugins/openai_plugin.py (×3) and user.py: resolved.
Comment on lines -40 to -41
# if timeout passes, queue.Empty will be thrown
# User should continue to poll for inputs
Member

Why was this comment removed?
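
For reference, the removed lines documented a standard polling pattern, roughly this (hedged sketch, not the exact user.py code):

```python
# Sketch of the pattern the removed comment described; not the exact
# user.py code.
import queue
import time

def poll_for_inputs(q: queue.Queue, stop_time: float) -> None:
    while time.time() < stop_time:
        try:
            item = q.get(timeout=1)  # queue.Empty is raised if the timeout passes
        except queue.Empty:
            continue                 # the User should keep polling for inputs
        handle(item)

def handle(item) -> None:
    ...  # placeholder for the per-item work
```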

Outdated review threads on result.py and plugins/openai_plugin.py: resolved.
concurrency = load_options.get("concurrency")
duration = load_options.get("duration")

plugin_type = config.get("plugin")
if plugin_type == "openai_plugin":
    plugin = openai_plugin.OpenAIPlugin(
-       config.get("plugin_options")
+       config.get("plugin_options"), batch_size
Member
Again, this should be handled in a way that the plugin does not need to know the batch size in advance. However, if you must... just set config["plugin_options"]["batch_size"] = batch_size to avoid changing the interface.
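
Concretely, the suggestion reads as stashing the value inside the options dict the plugin already receives, something like this (sketch under that assumption):

```python
# Sketch of the suggestion; assumes plugin_options is the dict the
# plugin already receives, so the constructor signature stays untouched.
plugin_options = config.get("plugin_options")
plugin_options["batch_size"] = batch_size  # piggyback on the existing argument

plugin = openai_plugin.OpenAIPlugin(plugin_options)

# inside the plugin: self.batch_size = args.get("batch_size", 1)
```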

thameem-abbas and others added 5 commits November 4, 2024 12:38

- Agreeing on change of condition.
  Co-authored-by: Samuel Monson <smonson@irbash.net>
- Don't know what was running through my head when I wrote that. Thanks
  Co-authored-by: Samuel Monson <smonson@irbash.net>
- Fix redundant annotation
  Co-authored-by: Samuel Monson <smonson@irbash.net>
@thameem-abbas thameem-abbas changed the title ADD: Batch request (Non-streaming) for OpenAI Plugin WIP : ADD: Batch request (Non-streaming) for OpenAI Plugin Nov 11, 2024