Problem in LLM-text with Ollama #31
Comments
That's because of the localhost URL you set in the WebUI settings; it has to match your server. For example, set it to http://localhost:1234/v1, or whatever address you personally have it set to.
You have it mostly correct, but for Ollama itself you have to change the port to 11434.
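For reference, here is a minimal sketch (not from the thread itself) of what the extension is ultimately doing: calling Ollama's OpenAI-compatible endpoint at http://localhost:11434/v1. The model name llama3.1 and the placeholder API key "ollama" are assumptions; Ollama ignores the key, but the field usually cannot be left empty.

```python
# Minimal sketch: talk to Ollama through its OpenAI-compatible API.
# Assumes Ollama is serving on the default port 11434 and that the
# "llama3.1" model has already been pulled; adjust both as needed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # http, port 11434 for Ollama
    api_key="ollama",                      # placeholder; Ollama ignores the key
)

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

# If the URL and port are right, the reply arrives under response.choices.
print(response.choices[0].message.content)
```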
PS: the llama icon in the system tray only means Ollama is running, not that a model is loaded. You need to load a model with a command.
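As a quick check (my own sketch, not part of the extension), Ollama's REST API lists the locally pulled models at /api/tags, so something like the following shows whether llama3.1 is available; pulling or loading is then done from the command line, e.g. `ollama run llama3.1`.

```python
# Rough check that Ollama is reachable and which models it has pulled.
# Assumes the default endpoint http://localhost:11434; /api/tags lists
# downloaded models (it does not guarantee one is loaded in memory).
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
for model in tags.get("models", []):
    print(model.get("name"))
```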
Thanks to your advice my problem is solved; it works perfectly. Many thanks to @LadyFlames and @xlinx!
Anytime!
Hello, I can't get LLM-text to work with Ollama. Could I have some explanation of how to configure the setup exactly, for example regarding the API key, etc.? I have Ollama running on my PC while A1111 is running, with Llama 3.1 loaded. The Civitai meta grabber works fine, and LLM-text also works when configured for OpenAI with an API key, but I can't get it to work with Ollama. In the LLM answer window I keep getting this message:
[Auto-LLM][Result][Missing LLM-Text]'choices'
Thank you very much in advance
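A rough way to see what the endpoint is actually returning (a sketch of my own, assuming the default Ollama port 11434): if the base URL or port set in the WebUI points at nothing, or at a route that is not OpenAI-compatible, the JSON that comes back has no "choices" key, which is what the [Missing LLM-Text]'choices' message is complaining about.

```python
# Diagnostic sketch: send the same kind of request the extension sends and
# dump the raw JSON. A correct setup returns a "choices" list; anything else
# explains the [Missing LLM-Text]'choices' error above.
# Assumes Ollama on the default port 11434 with llama3.1 pulled.
import json
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
data = resp.json()
print(json.dumps(data, indent=2))
print("has 'choices'?", "choices" in data)
```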