
Problem in LLM-text with Ollama #31

Open · marc2608 opened this issue Oct 23, 2024 · 6 comments

@marc2608

Hello, I can't get LLM-text to work with Ollama. Could I have some explanation of how to configure the setup exactly, for example regarding the API key, etc.? I have Ollama running on my PC while A1111 is running, with llama3.1 loaded. The Civitai meta grabber works fine, and LLM-text also works when configured for OpenAI with the API key, but I can't get it to work with Ollama. In the LLM answer window I keep getting this message:
[Auto-LLM][Result][Missing LLM-Text]'choices'
Thank you very much in advance.
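
For context, that error means the extension did not find a "choices" array in the JSON it got back; an OpenAI-compatible endpoint normally returns one. A minimal check from a command window (a sketch, assuming Ollama's OpenAI-compatible API on its default port 11434 and a model that is already pulled):

# ask the OpenAI-compatible endpoint for one completion; a healthy reply is JSON containing a "choices" list
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "hello"}]}'

If the URL or port configured in the extension points somewhere else, the reply has no "choices" key and the message above appears.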

@LadyFlames commented Oct 24, 2024

That's due to the localhost URL you set in the WebUI settings: it has to be in the form https://localhost:1234/v1, or whatever host and port you personally have it set to. Don't add anything after the /v1, otherwise you will get errors and warnings; in particular, don't append /chat/completions to the end.
Just keep it as https://localhost:(number)/v1, and make sure that port is not the same as the WebUI's port; then everything should work fine, as long as nothing is added after the /v1.
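
A quick way to confirm the base URL is in the right form is to ask the server for its model list (a sketch, assuming an OpenAI-compatible server such as Ollama or LM Studio; replace the port with whatever you actually configured):

# a correct base URL answers with a JSON list of models at <base URL>/models
curl http://localhost:1234/v1/models

If that request fails or times out, the extension's base URL or port is pointing at the wrong place.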

@marc2608 (Author)

OK, thank you. Could you tell me what I should change in this config, and where?
[Attached screenshot of the settings: Capture d'écran 2024-10-25 170558]

@LadyFlames commented Oct 25, 2024

You have it mostly correct, but for Ollama itself you have to change the port to 11434; that should make it work as intended.
As for the WebUI port, if you haven't already, I'd suggest running A1111 through Stability Matrix instead, since it makes changing settings much easier. The WebUI port itself cannot be 11434.
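
To confirm Ollama is actually listening on 11434, you can hit its native API from a command window (a sketch, assuming a default Ollama install):

# lists the locally available models; any JSON reply means the server is up on port 11434
curl http://localhost:11434/api/tags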

@xlinx (Owner) commented Oct 28, 2024

  • Make sure Ollama is running: open a command window and run it:

ollama run llama3.2

PS: a sheep icon in the system tray does not mean a model is loaded; you need to load a model with the command. Change the default llama3.1 to llama3.2 (depending on which model you load), the same as @LadyFlames says.

  • ollama is the app name
  • llama is the model name
  • a sheep icon in the sys-tray does not mean a model is loaded (the first time, you need to download and load a model)
  • run 'ollama run llama3.2' in cmd to make sure Ollama is running and a model is loaded (see the quick check below this list)
  • return to the Auto-LLM extension and click the Call LLM button to test
  • Ollama log -> C:\Users\XXX\AppData\Local\Ollama
  • Ollama model path -> C:\Users\XXX\.ollama\models
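
To see which models are downloaded and which one is currently loaded, the Ollama CLI has two handy commands (a sketch, assuming a reasonably recent Ollama version):

# models that have been pulled to disk
ollama list
# models currently loaded in memory; empty output means nothing is loaded yet
ollama ps

If ollama ps shows llama3.2 (or whichever model the extension is set to), the Call LLM test button should get a proper answer back.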


@marc2608 (Author) commented Oct 28, 2024

Thanks to your advice my problem is solved; it works perfectly. Many thanks to @LadyFlames and @xlinx!

@LadyFlames

anytime
