After resolving my LLM-as-assistant issue, I am now having issues using LLM-vision. I have the models suggested on the GitHub, as seen below, but every single one of them returns `[ERROR] Model does not support images. Please use a model that does. Error Data: n/a, Additional Data: n/a]`. Any idea as to why?
You need a model with vision support (marked with a YELLOW tag). You also need to download one additional file, the vision adapter, for a total of two files:

modelxxx
modelxxx (vision adapter)

Example: Eris_PrimeV4-Vision-32k-7B-GGUF-IQ-Imatrix
Models with vision capabilities are identified by the presence of a 'mmproj-' (MultiModal Projector) file in the repository, indicating that a vision adapter is available. Vision adapters enable language models to process images as input, enhancing their capabilities.