llux is an AI chatbot for the Matrix chat protocol. It uses local LLMs via Ollama for chat and image recognition, generates images via Diffusers (specifically FLUX.1), and performs text-to-speech through an OpenAI-compatible API (e.g. Kokoro FastAPI by remsky). Each user in a Matrix room can set a unique personality (or system prompt), and conversations are kept per user, per channel. Model switching is also supported if you have multiple models installed and configured.
You're welcome to try the bot out on We2.ee at #ai:we2.ee.
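Because the TTS backend is reached through an OpenAI-compatible API, any server speaking that API should work. A minimal sketch with the `openai` Python client, assuming a local Kokoro FastAPI server; the base URL, model, and voice names are placeholders to adapt to your setup:

```python
from openai import OpenAI

# Point the client at a local OpenAI-compatible TTS server.
# The URL, model, and voice below are assumptions; match your server's docs.
client = OpenAI(base_url="http://localhost:8880/v1", api_key="not-needed")

speech = client.audio.speech.create(
    model="kokoro",
    voice="af_bella",
    input="Hello from llux!",
)
speech.write_to_file("hello.mp3")
```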
## Install Ollama

You'll need Ollama to run local LLMs (text and multimodal). A quick install:

```sh
curl https://ollama.ai/install.sh | sh
```
Choose your preferred models. For base chat functionality, good options include `llama3.3` and `phi4`. For multimodal chat, you'll need a vision model; I recommend `llama3.2-vision`. This can be, but doesn't have to be, the same as your base chat model.

Pull your chosen model(s) with:

```sh
ollama pull <modelname>
```
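As a quick smoke test that Ollama and a pulled model are working, you can chat with it from Python via the `ollama` library (the model name here is just an example):

```python
import ollama

# Assumes the Ollama server is running and `ollama pull llama3.3` has completed.
response = ollama.chat(
    model="llama3.3",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```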
## Create a Python Environment (Recommended)

You can use either conda/mamba or venv:

```sh
# Using conda/mamba:
mamba create -n llux python=3.10
conda activate llux

# Or, using Python's built-in venv:
python3 -m venv venv
source venv/bin/activate
```
## Install Dependencies

Install all required Python libraries from `requirements.txt`:

```sh
pip install -r requirements.txt
```
This will install:

- `matrix-nio` for Matrix connectivity
- `diffusers` for image generation
- `ollama` for local LLMs
- `torch` for the underlying deep learning framework
- `pillow` for image manipulation
- `markdown`, `pyyaml`, etc.
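To confirm everything landed in the active environment, a quick import check helps; note that some module names differ from their pip package names:

```python
# Each import corresponds to an entry in requirements.txt.
import nio        # installed as matrix-nio
import diffusers
import ollama
import torch
import PIL        # installed as pillow
import markdown
import yaml       # installed as pyyaml

print("torch", torch.__version__)
```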
## Set Up Your Bot

- Create a Matrix account for your bot (on a server of your choice).
- Record the server, username, and password.
- Copy `config.yaml-example` to `config.yaml` (e.g., `cp config.yaml-example config.yaml`).
- In your new `config.yaml`, fill in the relevant fields (Matrix server, username, password, channels, admin usernames, etc.). Also configure the Ollama section for your model settings and the Diffusers section for image generation (model, device, steps, etc.).
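To give a rough idea of the shape, a hypothetical filled-in `config.yaml` might look like the sketch below; the authoritative key names live in `config.yaml-example`, so treat every name here as a placeholder:

```yaml
# Illustrative only; consult config.yaml-example for the real structure.
matrix:
  server: "https://example.org"
  username: "@llux:example.org"
  password: "change-me"
  channels:
    - "#ai:example.org"
  admins:
    - "@you:example.org"

ollama:
  model: "llama3.3"
  vision_model: "llama3.2-vision"

diffusers:
  model: "black-forest-labs/FLUX.1-schnell"
  device: "mps"    # see the note below for non-Apple hardware
  steps: 4
```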
Note: this bot was designed for macOS on Apple Silicon and has not been tested on Linux. It should work there, but it might require minor changes, particularly for image generation. At the very least, you will need to change `device` in `config.yaml` from `mps` to your torch device, e.g., `cuda`.
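If you're not sure which device string fits your hardware, a short probe settles it; this is a generic torch check, not llux-specific code:

```python
import torch

# Choose the best available torch device string for config.yaml.
if torch.backends.mps.is_available():    # Apple Silicon GPU
    device = "mps"
elif torch.cuda.is_available():          # NVIDIA GPU
    device = "cuda"
else:
    device = "cpu"

print("set device to:", device)
```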
## Run llux

```sh
python3 llux.py
```

If you're using a virtual environment, ensure it's activated first.
## Usage

- `.ai message` or `botname: message`
  Basic conversation or roleplay prompt. Reply with this prompt to an image attachment on Matrix to engage your multimodal/vision model and ask questions about the image.
- `.img prompt`
  Generate an image from the prompt (see the Diffusers sketch after this list).
- `.tts text`
  Convert the provided text to speech.
- `.x username message`
  Interact with another user's chat history (use that user's display name).
- `.persona personality`
  Set or change to a specific roleplaying personality.
- `.custom prompt`
  Override the default personality with a custom system prompt.
- `.reset`
  Clear your personal conversation history and revert to the preset personality.
- `.stock`
  Clear your personal conversation history without applying any system prompt.
- `.model modelname`
  - Omit `modelname` to show the current model and available options.
  - Include `modelname` to switch to that model.
- `.clear`
  Reset llux for everyone, clearing all stored conversations, deleting the image cache, and returning to the default settings.
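For context on what `.img` does under the hood, here is a standalone sketch of FLUX.1 image generation with the Diffusers `FluxPipeline`; it is not llux's actual code, and the checkpoint and step count are assumptions:

```python
import torch
from diffusers import FluxPipeline

# Load a FLUX.1 checkpoint; FLUX.1-schnell is a commonly used fast variant.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.to("mps")  # or "cuda" / "cpu", matching `device` in config.yaml

image = pipe(
    "a watercolor fox in a snowy forest",
    num_inference_steps=4,   # schnell is tuned for very few steps
    guidance_scale=0.0,      # schnell does not use guidance
).images[0]
image.save("fox.png")
```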
llux is based in part on ollamarama-matrix by h1ddenpr0cess20. For that reason it is covered by the same AGPL-3.0 license.