Update `InferenceClient` docstring to reflect that `token=False` is no longer accepted #2853
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@abidlabs in practice, the HF Inference API still supports passing `token=False`
but the HF Inference API requires a token, right? (or if not, it will very soon)
I noticed this in our internal logic for `token=False`. And yes, with the new inference providers, even HF Inference requires a token now.
Hey! Sorry, it took me some time to get back to this PR. I've pushed a new commit so that passing `token=False` raises an error. This is a breaking change of a broken behavior, so it's OK to have IMO. Previously, passing `token=False` resulted in the token being sent anyway, which is the exact opposite of the user's intention. As a reminder, authentication is now required for all HF Inference calls, so it no longer makes sense to pass `token=False`.
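For readers following along, the behavior change described above can be sketched roughly like this. This is a hypothetical, simplified stand-in for the token-resolution logic (the helper name `resolve_token` and the exact error message are illustrative, not the actual `huggingface_hub` internals):

```python
from typing import Optional, Union


def resolve_token(token: Union[str, bool, None]) -> Optional[str]:
    """Hypothetical sketch of the validation added in this PR.

    Before the fix, token=False was silently ignored and the locally
    saved token was still sent. After the fix, token=False raises.
    """
    if token is False:
        # Breaking change of a broken behavior: fail loudly instead of
        # sending credentials the caller explicitly tried to withhold.
        raise ValueError(
            "Cannot use `token=False`: authentication is now required "
            "for all HF Inference calls."
        )
    # None means "fall back to the locally saved token"; a string is an
    # explicit token passed by the caller.
    return token if isinstance(token, str) else None
```

The design choice here is that erroring out is preferable to the old behavior, where the parameter looked like an opt-out but had no effect.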
I also took the opportunity to remove some tests that were failing due to authentication issues. They were tests around `get_model_status` and `list_deployed_models`, which are deprecated anyway.
@hanouticelina @julien-c mind re-reviewing it?
thanks!
Just wondering -- is there still any way to avoid passing in the local HF token?