
Function Calling: research and explore how we can use this #13

Open
fujitatomoya opened this issue Dec 16, 2023 · 2 comments
Assignees
Labels
enhancement New feature or request

Comments

@fujitatomoya
Owner

https://platform.openai.com/docs/guides/function-calling

@fujitatomoya fujitatomoya self-assigned this Dec 28, 2023
@fujitatomoya
Owner Author

More details can be found in the cookbook: https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models

import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

def run_conversation():
    # Step 1: send the conversation and available functions to the model
    messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=messages,
        tools=tools,
        tool_choice="auto",  # auto is default, but we'll be explicit
    )
    response_message = response.choices[0].message
    print(response_message)
    tool_calls = response_message.tool_calls
    # Step 2: check if the model wanted to call a function
    if tool_calls:
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        messages.append(response_message)  # extend conversation with assistant's reply
        # Step 4: send the info for each function call and function response to the model
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                location=function_args.get("location"),
                unit=function_args.get("unit"),
            )
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )  # extend conversation with function response
        second_response = client.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            messages=messages,
        )  # get a new response from the model where it can see the function response
        return second_response
print(run_conversation())

The output is:

tomoyafujita@~/DVT/samples/python/openai >python3 sample-4.py 
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_HlrO89gauCHd3tvBjgOEHLtO', function=Function(arguments='{"location": "San Francisco, CA", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_7yTldtXfqehOBSU5tubDyJFz', function=Function(arguments='{"location": "Tokyo, Japan", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_CNUN26YnlQJxlDDjyf6i7w8v', function=Function(arguments='{"location": "Paris, France", "unit": "celsius"}', name='get_current_weather'), type='function')])
ChatCompletion(id='chatcmpl-8icU3GrwzDtpEbbXv6HlWuDmj7m0v', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content='Currently, the weather in San Francisco is 72°C and partly cloudy, in Tokyo it is 10°C and clear, and in Paris it is 22°C and partly cloudy.', role='assistant', function_call=None, tool_calls=None), logprobs=None)], created=1705645055, model='gpt-3.5-turbo-1106', object='chat.completion', system_fingerprint='fp_c596c86df9', usage=CompletionUsage(completion_tokens=37, prompt_tokens=175, total_tokens=212))

@fujitatomoya
Owner Author

This API is useful for converting natural language into structured data (function arguments), and with plugin-style function calls it helps the assistant answer user questions more precisely. It would be nice to add more plugin APIs here to deal with user requests.

Currently, ros2ai executes a single command line on behalf of the user request, but that only covers specific requests such as "how many topics are available?" or "subscribe to this topic". It probably cannot manage "Check mynode status, is it healthy?", because that is a more general question a user is likely to ask, and there is no such command-line interface provided by ros2cli.
We could add more useful plugins here, e.g. get_node_status, and summarize the response into natural language back to the user.
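As a rough sketch, such a plugin could follow the same pattern as get_current_weather above: a Python function returning JSON, plus a matching tool schema entry. The function name, parameters, and canned data below are hypothetical illustrations, not an existing ros2cli or ros2ai API; a real implementation would query the ROS 2 graph (e.g. via rclpy) instead:

```python
import json

# Hypothetical plugin: in a real implementation this would query the
# ROS 2 graph (e.g. via rclpy) rather than return canned data.
def get_node_status(node_name):
    """Return a JSON summary of a node's liveness and connections."""
    known_nodes = {
        "/talker": {"alive": True, "publishers": 1, "subscribers": 0},
        "/listener": {"alive": True, "publishers": 0, "subscribers": 1},
    }
    status = known_nodes.get(node_name)
    if status is None:
        return json.dumps({"node": node_name, "alive": False, "error": "node not found"})
    return json.dumps({"node": node_name, **status})

# Tool schema in the same format as the get_current_weather entry above,
# ready to be appended to the tools list passed to the model.
get_node_status_tool = {
    "type": "function",
    "function": {
        "name": "get_node_status",
        "description": "Check whether a ROS 2 node is alive and summarize its publishers and subscribers",
        "parameters": {
            "type": "object",
            "properties": {
                "node_name": {
                    "type": "string",
                    "description": "Fully qualified node name, e.g. /talker",
                },
            },
            "required": ["node_name"],
        },
    },
}
```

The model would pick get_node_status when the user asks about node health, and the second chat completion call would turn the returned JSON back into a natural-language answer, exactly as in the weather example.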

@fujitatomoya fujitatomoya added the enhancement New feature or request label Sep 14, 2024