[Bug]: GroqException - list index out of range #8710

Open

ChenghaoMou opened this issue Feb 21, 2025 · 1 comment

Labels
bug Something isn't working

Comments

@ChenghaoMou

What happened?

Not sure if this is a Groq problem or a litellm problem. Here is a code snippet to reproduce the issue:

import os

import litellm

os.environ["LITELLM_LOG"] = "DEBUG"


async def main():
    response = await litellm.acompletion(
        model="groq/llama-3.3-70b-specdec",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {
                "role": "user",
                "content": [{"type": "text", "text": "Test"}],
            },
        ],
        stream=True,
        stream_options={"include_usage": True},
    )
    async for chunk in response:
        print(chunk)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

Turning off include_usage makes it work. From the traceback, it looks like litellm is using the databricks streaming utilities to parse the Groq stream. A working variant is sketched below.
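
For reference, a minimal variant of the repro above with stream_options dropped (the workaround described here). This is a sketch assuming GROQ_API_KEY is set in the environment:

import asyncio

import litellm


async def main():
    # Same request as the repro, but without stream_options={"include_usage": True};
    # per the report above, dropping include_usage avoids the crash.
    response = await litellm.acompletion(
        model="groq/llama-3.3-70b-specdec",
        messages=[{"role": "user", "content": "Test"}],
        stream=True,
    )
    async for chunk in response:
        print(chunk)


asyncio.run(main())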

Relevant log output

uv run --with "litellm==1.61.7"  temp.py

ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content='It', role='assistant', function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' looks', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' like', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' you', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content="'re", role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' just', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' testing', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' the', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' waters', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content='.', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' Is', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' there', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' something', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' I', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' can', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' help', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' you', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' with', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' or', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' would', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' you', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' like', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' to', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' start', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' a', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' conversation', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content='?', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' I', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content="'m", role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' here', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' to', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' assist', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' you', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' with', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' any', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' questions', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' or', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' topics', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' you', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content="'d", role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' like', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' to', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' discuss', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content='.', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145122, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason='stop', index=0, delta=Delta(provider_specific_fields=None, content=None, role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b4e476ff-5fde-456e-9b86-0549c2652542', created=1740145123, model='llama-3.3-70b-specdec', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=None, role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], stream_options={'include_usage': True}, usage=Usage(completion_tokens=44, prompt_tokens=6, total_tokens=50, completion_tokens_details=None, prompt_tokens_details=None))

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.


Provider List: https://docs.litellm.ai/docs/providers

Traceback (most recent call last):
  File "/Users/chenghao/Developer/ai-worker/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/streaming_handler.py", line 1548, in __anext__
    async for chunk in self.completion_stream:
  File "/Users/chenghao/Developer/ai-worker/.venv/lib/python3.12/site-packages/litellm/llms/databricks/streaming_utils.py", line 143, in __anext__
    return self.chunk_parser(chunk=json_chunk)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chenghao/Developer/ai-worker/.venv/lib/python3.12/site-packages/litellm/llms/databricks/streaming_utils.py", line 28, in chunk_parser
    if processed_chunk.choices[0].delta.content is not None:  # type: ignore
       ~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/chenghao/Developer/ai-worker/temp.py", line 28, in <module>
    asyncio.run(main())
  File "/Users/chenghao/.local/share/uv/python/cpython-3.12.6-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/Users/chenghao/.local/share/uv/python/cpython-3.12.6-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/chenghao/.local/share/uv/python/cpython-3.12.6-macos-aarch64-none/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/chenghao/Developer/ai-worker/temp.py", line 21, in main
    async for chunk in response:
  File "/Users/chenghao/Developer/ai-worker/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/streaming_handler.py", line 1703, in __anext__
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/Users/chenghao/Developer/ai-worker/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2207, in exception_type
    raise e  # it's already mapped
    ^^^^^^^
  File "/Users/chenghao/Developer/ai-worker/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 463, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: APIConnectionError: GroqException - list index out of range
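
The traceback points at chunk_parser in litellm/llms/databricks/streaming_utils.py indexing choices[0] unconditionally. When stream_options={"include_usage": True} is set, OpenAI-compatible streaming APIs (Groq included) send one final usage-only chunk whose choices list is empty, which matches the IndexError here. Below is a minimal sketch of a defensive accessor over the raw chunk dict; this is a hypothetical illustration of the guard, not the actual upstream patch:

from typing import Optional


def safe_delta_content(chunk: dict) -> Optional[str]:
    """Read delta.content from a raw streaming chunk, tolerating the
    usage-only final chunk whose 'choices' list is empty."""
    choices = chunk.get("choices") or []
    if not choices:
        # Usage-only chunk emitted when include_usage is set; no delta here.
        return None
    return choices[0].get("delta", {}).get("content")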

Are you a ML Ops Team?

No

What LiteLLM version are you on?

1.61.7

Twitter / LinkedIn details

No response

@ChenghaoMou ChenghaoMou added the bug Something isn't working label Feb 21, 2025
@wwwillchen

I'm getting a similar exception to the one above.

I don't think this is a Groq error: calling Groq directly (using the OpenAI client, as sketched after the log below) with the exact same input works, while the same request through LiteLLM throws this error.

Weirdly though, the streaming response prior to the error looks fine (i.e. the content returned before the error is the same as when calling Groq directly).

Thanks in advance!

23:24:34 - LiteLLM Proxy:ERROR: proxy_server.py:2995 - litellm.proxy.proxy_server.async_data_generator(): Exception occured - litellm.APIConnectionError: APIConnectionError: GroqException - list index out of range

Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/streaming_handler.py", line 1545, in __anext__
    async for chunk in self.completion_stream:
    ...<50 lines>...
        return processed_chunk
  File "/usr/lib/python3.13/site-packages/litellm/llms/databricks/streaming_utils.py", line 143, in __anext__
    return self.chunk_parser(chunk=json_chunk)
           ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/llms/databricks/streaming_utils.py", line 28, in chunk_parser
    if processed_chunk.choices[0].delta.content is not None:  # type: ignore
       ~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/litellm/proxy/proxy_server.py", line 2974, in async_data_generator
    async for chunk in response:
    ...<14 lines>...
            yield f"data: {str(e)}\n\n"
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/streaming_handler.py", line 1700, in __anext__
    raise exception_type(
          ~~~~~~~~~~~~~~^
        model=self.model,
        ^^^^^^^^^^^^^^^^^
    ...<3 lines>...
        extra_kwargs={},
        ^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2195, in exception_type
    raise e  # it's already mapped
    ^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 461, in exception_type
    raise APIConnectionError(
    ...<7 lines>...
    )
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: APIConnectionError: GroqException - list index out of range
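
For completeness, a sketch of the direct call described above, using the OpenAI client against Groq's OpenAI-compatible endpoint. The GROQ_API_KEY environment variable and the exact messages are assumptions about the commenter's setup:

import os

from openai import OpenAI

# Groq exposes an OpenAI-compatible API at this base URL.
client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",
)

stream = client.chat.completions.create(
    model="llama-3.3-70b-specdec",
    messages=[{"role": "user", "content": "Test"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    # The final chunk carries the usage payload and an empty choices list.
    print(chunk)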
