
feat: Add system tool calls to the chat endpoint #1179

Draft · whiterabbit1983 wants to merge 1 commit into dev from f/chat-system-tool-calls

Conversation

@whiterabbit1983 (Contributor) commented Feb 20, 2025


EntelligenceAI PR Summary

The recent update introduces a new utility function, `eval_tool_calls`, in `tools.py` to enhance the evaluation of tool calls within chat sessions. The `chat` function in `chat.py` has been updated to use this utility, replacing direct calls to `litellm.acompletion`. This change aims to improve code modularity and streamline the integration of tool responses in chat sessions.


Changes

| File(s) | Summary |
| --- | --- |
| `agents-api/agents_api/routers/sessions/chat.py` | Updated the `chat` function to integrate the `eval_tool_calls` utility, replacing direct `litellm.acompletion` calls for dynamic tool-call processing. |
| `agents-api/agents_api/routers/utils/tools.py` | Added the `eval_tool_calls` function to manage tool-call evaluation, supporting operations like create, update, and search for agents, users, sessions, and tasks. |
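
Based on the call sites quoted in the review comments below, `eval_tool_calls` appears to wrap a completion function and execute tool calls of the given types itself. The following is a minimal sketch of that shape; the loop structure, message handling, and the `call_tool` stub are assumptions inferred from the diff, not the PR's actual code:

```python
import json
from typing import Any, Awaitable, Callable
from uuid import UUID


async def call_tool(developer_id: UUID, name: str, arguments: dict) -> Any:
    """Stub: dispatch a system tool call (create/update/search for agents,
    users, sessions, and tasks). The real dispatcher lives in tools.py."""
    raise NotImplementedError


async def eval_tool_calls(
    func: Callable[..., Awaitable[Any]],  # e.g. litellm.acompletion
    tool_types: set[str],                 # tool-call types handled here, e.g. {"system"}
    developer_id: UUID,
    **kwargs: Any,
) -> Any:
    """Call `func` in a loop, executing matching tool calls and feeding the
    results back as `tool` messages until the model returns a plain response."""
    messages = list(kwargs.pop("messages", []))
    while True:
        response = await func(messages=messages, **kwargs)
        message = response.choices[0].message
        pending = [t for t in (message.tool_calls or []) if t.type in tool_types]
        if not pending:
            return response
        messages.append(message)  # keep the assistant turn that requested the tools
        for tool in pending:
            result = await call_tool(
                developer_id, tool.function.name, json.loads(tool.function.arguments)
            )
            messages.append(
                {"role": "tool", "tool_call_id": tool.id, "content": json.dumps(result)}
            )
```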

Comment on lines +206 to +208

```python
model_response = await eval_tool_calls(
    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
)
```

The eval_tool_calls function is called with a set containing only 'system' as tool types, which may incorrectly restrict tool evaluation. The set should include all relevant tool types being used.

📝 Committable Code Suggestion

```diff
 model_response = await eval_tool_calls(
-    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
+    litellm.acompletion, {"system", "function"}, developer.id, **{**settings, **params}
 )
```

Comment on lines +231 to +232

```python
response: ModelResponse | CustomStreamWrapper | None = None
done = False
while not done:
    response: ModelResponse | CustomStreamWrapper = await func(**kwargs)
```

Potential memory leak in eval_tool_calls() - the response variable is reassigned in the loop but previous responses are not properly cleaned up/closed when streaming

📝 Committable Code Suggestion

```diff
 response: ModelResponse | CustomStreamWrapper | None = None
 done = False
 while not done:
+    if response:
+        await response.close()
     response: ModelResponse | CustomStreamWrapper = await func(**kwargs)
```
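
As a general pattern for the cleanup being requested here (not specific to litellm's `CustomStreamWrapper`, whose close semantics this PR does not show), `contextlib.aclosing` from the standard library guarantees an async generator is closed even when the consumer exits early:

```python
import asyncio
from contextlib import aclosing  # Python 3.10+


async def fake_stream():
    """Stand-in for a streaming model response (an async generator)."""
    for i in range(3):
        yield i


async def main() -> None:
    # aclosing() runs the generator's cleanup even if we leave the loop early,
    # which is the hygiene the reviewer is asking for when `response` is
    # reassigned on every iteration of the retry loop.
    async with aclosing(fake_stream()) as stream:
        async for chunk in stream:
            if chunk == 1:
                break  # early exit still triggers aclose()


asyncio.run(main())
```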


```python
connection_pool = getattr(app.state, "postgres_pool", None)
tool_handler = partial(tool_handler, connection_pool=connection_pool)
arguments["developer_id"] = str(developer_id)
```

Type error in call_tool() - developer_id is converted to string but some query handlers expect UUID type, causing potential runtime errors

📝 Committable Code Suggestion

```diff
-arguments["developer_id"] = str(developer_id)
+arguments["developer_id"] = developer_id if isinstance(developer_id, uuid.UUID) else uuid.UUID(str(developer_id))
```

qodo-merge-pro-for-open-source bot commented Feb 20, 2025

CI Feedback 🧐

(Feedback updated until commit 459aa61)

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Test

Failed stage: Run tests [❌]

Failure summary:

The action failed due to a circular import dependency in the Python code:

  • File chat.py tries to import eval_tool_calls from tools.py
  • While tools.py tries to import chat from chat.py
  • This creates a circular dependency where each module depends on the other, causing the Python
    interpreter to fail with "ImportError: cannot import name 'chat' from partially initialized module"

  • Relevant error logs:

```text
1:  ##[group]Operating System
2:  Ubuntu
...

1401:  File "/home/runner/work/julep/julep/agents-api/agents_api/routers/__init__.py", line 25, in <module>
1402:  from .sessions import router as sessions_router
1403:  File "/home/runner/work/julep/julep/agents-api/agents_api/routers/sessions/__init__.py", line 3, in <module>
1404:  from .chat import chat
1405:  File "/home/runner/work/julep/julep/agents-api/agents_api/routers/sessions/chat.py", line 27, in <module>
1406:  from ..utils.tools import eval_tool_calls
1407:  File "/home/runner/work/julep/julep/agents-api/agents_api/routers/utils/tools.py", line 49, in <module>
1408:  from ...routers.sessions.chat import chat
1409:  ImportError: cannot import name 'chat' from partially initialized module 'agents_api.routers.sessions.chat' (most likely due to a circular import) (/home/runner/work/julep/julep/agents-api/agents_api/routers/sessions/chat.py)
1410:  ##[error]Process completed with exit code 1.
```
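
A common way to break a cycle like this is to defer one of the imports to call time, so `tools.py` no longer imports `chat.py` while modules are still initializing. A sketch under the assumption that `chat` is only needed inside a function body (illustrative only, not the actual patch):

```python
# agents_api/routers/utils/tools.py (illustrative only)


async def run_chat(*args, **kwargs):
    # Imported lazily to break the chat.py <-> tools.py cycle shown in the
    # traceback above; by the time this runs, both modules are fully loaded.
    from ...routers.sessions.chat import chat

    return await chat(*args, **kwargs)
```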
    

@whiterabbit1983 force-pushed the f/chat-system-tool-calls branch from 459aa61 to ea7b342 on February 21, 2025 at 13:20
Comment on lines +206 to +208

```python
model_response = await eval_tool_calls(
    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
)
```

The `eval_tool_calls` function is called with an incorrect argument: a set `{"system"}` is passed instead of a list of system messages, which will likely cause runtime errors.

📝 Committable Code Suggestion

```diff
 model_response = await eval_tool_calls(
-    litellm.acompletion, {"system"}, developer.id, **{**settings, **params}
+    litellm.acompletion, ["system"], developer.id, **{**settings, **params}
 )
```

Comment on lines +245 to +246

```python
tool_args = json.loads(tool.function.arguments)
tool_response = await call_tool(developer_id, tool_name, tool_args)
```

Missing error handling for invalid JSON in `tool.function.arguments`: if JSON parsing fails, it will raise an unhandled exception.

📝 Committable Code Suggestion

```diff
-tool_args = json.loads(tool.function.arguments)
+tool_name = tool.function.name
+try:
+    tool_args = json.loads(tool.function.arguments)
+except json.JSONDecodeError:
+    raise ValueError(f"Invalid JSON in tool arguments: {tool.function.arguments}")
 tool_response = await call_tool(developer_id, tool_name, tool_args)
```
