feat: Add system tool calls to the chat endpoint #1179
base: dev
Conversation
Walkthrough: This PR introduces a new utility function, `eval_tool_calls`, in `tools.py` to evaluate tool calls within chat sessions; the `chat` function in `chat.py` now routes completions through it instead of calling `litellm.acompletion` directly.
```python
model_response = await eval_tool_calls(
    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
)
```
The `eval_tool_calls` function is called with a set containing only `'system'` as tool types, which may incorrectly restrict tool evaluation. The set should include all relevant tool types being used.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
 model_response = await eval_tool_calls(
-    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
+    litellm.acompletion, {"system", "function"}, developer.id, **{**settings, **params}
 )
```
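For context, a minimal sketch of how a tool-type allow-set like this might gate which calls get executed server-side; the actual filter in `tools.py` may differ, and `select_tool_calls` is a hypothetical helper:

```python
def select_tool_calls(tool_calls, tool_types: set[str]) -> list:
    # Only tool calls whose type is in the allow-set are evaluated
    # server-side; everything else passes through untouched. With
    # {"system"} alone, "function"-type calls would never be evaluated.
    return [call for call in tool_calls if call.type in tool_types]
```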
```python
response: ModelResponse | CustomStreamWrapper | None = None
done = False
while not done:
    response: ModelResponse | CustomStreamWrapper = await func(**kwargs)
```
Potential memory leak in `eval_tool_calls()`: the `response` variable is reassigned in the loop, but previous responses are not properly cleaned up/closed when streaming.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
 response: ModelResponse | CustomStreamWrapper | None = None
 done = False
 while not done:
+    if response:
+        await response.close()
     response: ModelResponse | CustomStreamWrapper = await func(**kwargs)
```
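A minimal sketch of the cleanup pattern the suggestion is driving at: close each intermediate stream before the next iteration overwrites it, so only the final response stays open for the caller. It assumes the wrapper exposes an awaitable `close()`, as the suggestion above does; litellm's actual `CustomStreamWrapper` API may differ, and `_has_pending_tool_calls` is a hypothetical predicate standing in for the PR's real loop condition:

```python
async def completion_loop(func, **kwargs):
    # Sketch only: release the previous streaming response before
    # reassigning, so intermediate streams are cleaned up but the final
    # response is returned still open.
    response = None
    done = False
    while not done:
        if response is not None and callable(getattr(response, "close", None)):
            await response.close()  # assumed awaitable close(), per the suggestion
        response = await func(**kwargs)
        done = not _has_pending_tool_calls(response)
    return response

def _has_pending_tool_calls(response) -> bool:
    # Hypothetical check mirroring the PR's loop condition.
    choices = getattr(response, "choices", None) or []
    return any(getattr(choice.message, "tool_calls", None) for choice in choices)
```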
```python
connection_pool = getattr(app.state, "postgres_pool", None)
tool_handler = partial(tool_handler, connection_pool=connection_pool)
arguments["developer_id"] = str(developer_id)
```
Type error in `call_tool()`: `developer_id` is converted to a string, but some query handlers expect the UUID type, causing potential runtime errors.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
arguments["developer_id"] = str(developer_id) | |
arguments["developer_id"] = developer_id if isinstance(developer_id, uuid.UUID) else uuid.UUID(str(developer_id)) |
CI Feedback 🧐 (feedback updated until commit 459aa61): A test triggered by this PR failed. Here is an AI-generated analysis of the failure:
Force-pushed from 459aa61 to ea7b342 (compare).
```python
model_response = await eval_tool_calls(
    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
)
```
The `eval_tool_calls` function is called with an incorrect argument: passing a set `{'system'}` instead of a list of system messages, which will likely cause runtime errors.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
 model_response = await eval_tool_calls(
-    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
+    litellm.acompletion, ["system"], developer.id, **{**settings, **params}
 )
```
```python
tool_args = json.loads(tool.function.arguments)
tool_response = await call_tool(developer_id, tool_name, tool_args)
```
Missing error handling for invalid JSON in `tool.function.arguments`. If JSON parsing fails, it will raise an unhandled exception.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
-tool_args = json.loads(tool.function.arguments)
+tool_name = tool.function.name
+try:
+    tool_args = json.loads(tool.function.arguments)
+except json.JSONDecodeError:
+    raise ValueError(f"Invalid JSON in tool arguments: {tool.function.arguments}")
 tool_response = await call_tool(developer_id, tool_name, tool_args)
```
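Raising works, but an alternative worth weighing is returning the parse error as the tool result so the model can recover in-conversation instead of the request aborting. A hedged sketch, reusing `call_tool` from the PR; `safe_tool_call` and the error-payload shape are assumptions:

```python
import json

async def safe_tool_call(developer_id, tool):
    # Instead of aborting the request on malformed arguments, surface the
    # error back to the model as the tool's result.
    try:
        tool_args = json.loads(tool.function.arguments)
    except json.JSONDecodeError as exc:
        return {"error": f"Invalid JSON in tool arguments: {exc}"}
    return await call_tool(developer_id, tool.function.name, tool_args)
```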
EntelligenceAI PR Summary
The recent update introduces a new utility function, `eval_tool_calls`, in `tools.py` to enhance the evaluation of tool calls within chat sessions. The `chat` function in `chat.py` has been updated to utilize this utility, replacing direct calls to `litellm.acompletion`. This change aims to improve code modularity and streamline the integration of tool responses in chat sessions.
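To make that flow concrete, here is a hedged sketch of what such a utility could look like, not the actual implementation in `tools.py`: call the model, execute any tool calls whose type is in the allow-set via `call_tool`, feed the results back as tool messages, and repeat until the model stops requesting tools. The names `func`, `tool_types`, and `developer_id` mirror the call sites quoted above; the tool-message shape follows the common OpenAI convention and is an assumption:

```python
import json

async def eval_tool_calls(func, tool_types: set[str], developer_id, **kwargs):
    # Sketch only: loop until the model returns a response with no
    # evaluable tool calls, executing each matching call server-side.
    messages = kwargs.setdefault("messages", [])
    while True:
        response = await func(**kwargs)
        tool_calls = response.choices[0].message.tool_calls or []
        pending = [call for call in tool_calls if call.type in tool_types]
        if not pending:
            return response  # nothing left to evaluate; hand back to the caller
        for tool in pending:
            result = await call_tool(  # the PR's dispatcher, shown earlier
                developer_id, tool.function.name, json.loads(tool.function.arguments)
            )
            # Feed each tool result back so the next completion can use it.
            messages.append({
                "role": "tool",
                "tool_call_id": tool.id,
                "content": json.dumps(result, default=str),
            })
```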