
[Bug]: model: fireworks_ai/accounts/fireworks/models/deepseek-r1 gives 400 error in version v1.61.11 #8699

Open
numenbit opened this issue Feb 21, 2025 · 0 comments
Labels
bug Something isn't working

What happened?

Testing chat with this existing model gives the 400 error below. This was working in version main-v1.59.8. My proxy config is:

  - model_name: deepseek-r1
    litellm_params:
      model: fireworks_ai/accounts/fireworks/models/deepseek-r1
      api_key: secret

Did anything change that might cause this? I don't see DeepSeek listed as a supported model for Fireworks, though I believe it was there before. Can I just configure this as an OpenAI-compatible model?
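
For reference, a minimal sketch of the OpenAI-compatible route mentioned above, assuming LiteLLM's `openai/` prefix plus an explicit `api_base` is the way to hit Fireworks' OpenAI-compatible endpoint (the base URL and model-name placement here are assumptions, not a confirmed workaround):

  # assumption: the openai/ prefix routes through the generic OpenAI-compatible handler
  - model_name: deepseek-r1-openai-compat
    litellm_params:
      model: openai/accounts/fireworks/models/deepseek-r1   # model name as Fireworks expects it (assumed)
      api_base: https://api.fireworks.ai/inference/v1        # Fireworks' OpenAI-compatible base URL (assumed)
      api_key: secret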

Relevant log output

litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek-r1
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Are you a ML Ops Team?

No

What LiteLLM version are you on ?

v1.61.11

Twitter / LinkedIn details

No response

@numenbit numenbit added the bug Something isn't working label Feb 21, 2025