Developing extensions
Newelle extensions are simple python files that can extend Newelle capabilities in these ways:
- Adding new handlers (for LLM, TTS or STT)
- Adding new prompts
- Replacing codeblocks with custom GTK widgets or text to be sent to the LLM (for example, mathematical results)
Developing an extension does not require deep knowledge of the project codebase or the GTK framework.
Generally speaking, every extension is a Python file that contains a class that extends NewelleExtension. The extension file is placed in the same path as the project files, meaning that you can use any other function and class from the codebase.
Every Newelle extension must have:
- A name, which is displayed in the settings
- An ID, a string that uniquely identifies the extension. Please check that it is not already taken from here
You can set ID and name as attributes of your Newelle extension class.
For example, this is a valid extension (that does nothing):
```python
from .extensions import NewelleExtension

class MyCustomExtension(NewelleExtension):
    name = "Custom Extension"
    id = "customextension"
```
In every Newelle extension, you can use these variables:
```python
self.pip_path        # Path to the pip directory, useful if you want to install new python packages
self.extension_path  # Path for cache, where you can put temporary files (shared with other extensions)
self.settings        # GIO settings of the application
```
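For example, here is a minimal sketch that stores temporary data under the shared extension path (the save_cache helper and the file name are purely illustrative, not part of the Newelle API):

```python
import os
import json

from .extensions import NewelleExtension

class MyCustomExtension(NewelleExtension):
    name = "Custom Extension"
    id = "customextension"

    def save_cache(self, data: dict):
        # Hypothetical helper: write a cache file into the shared extension path
        cache_file = os.path.join(self.extension_path, "customextension-cache.json")
        with open(cache_file, "w") as f:
            json.dump(data, f)
```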
Since NewelleExtension is a subclass of the Handler class, you can manage extension settings and dependencies just as you would with normal handlers.
Very important: settings for added LLM/TTS/STT handlers must not be stored as extension settings. Set them as settings in their respective handlers!
Note: The install method for extensions is always called when the extension is installed!
Example of an extension using extra settings: Perchance Image Generator
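As a rough sketch, an extension can expose its own settings by overriding get_extra_settings, just like a handler does. The dictionary below mirrors the handler extra-settings format; the endpoint entry is only an illustration, not part of Newelle:

```python
class MyCustomExtension(NewelleExtension):
    name = "Custom Extension"
    id = "customextension"

    def get_extra_settings(self) -> list:
        # Same format as handler extra settings; "endpoint" is a made-up example key
        return [
            {
                "key": "endpoint",
                "title": "API Endpoint",
                "description": "Base URL used by this extension",
                "type": "entry",
                "default": "https://example.com",
            }
        ]
```

The value can then be read the same way handler settings are read, for example with self.get_setting("endpoint") (assuming the usual handler settings API).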
Extensions can add custom handlers using these methods:
- get_llm_handlers
- get_tts_handlers
- get_stt_handlers
For example, you can register a handler like this (assuming HyperbolicHandler is a handler defined in the same file):
```python
class MyCustomExtension(NewelleExtension):
    ...
    def get_llm_handlers(self) -> list[dict]:
        """
        Returns the list of LLM handlers

        Returns:
            list: list of LLM handlers in this format
            {
                "key": "key of the handler",
                "title": "title of the handler",
                "description": "description of the handler",
                "class": LLMHandler - The class of the handler,
            }
        """
        return [{
            "key": "hyperbolic",
            "title": _("Hyperbolic API"),
            "description": _("Hyperbolic API"),
            "class": HyperbolicHandler,
        }]
```
The handler will appear in the settings. The procedure is analogous for TTS and STT handlers.
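For instance, a TTS handler can be registered from get_tts_handlers using the same dictionary format (MyTTSHandler is a hypothetical handler class defined in the same file):

```python
class MyCustomExtension(NewelleExtension):
    ...
    def get_tts_handlers(self) -> list[dict]:
        # Same dictionary format as get_llm_handlers
        return [{
            "key": "mytts",
            "title": _("My TTS"),
            "description": _("Example TTS handler"),
            "class": MyTTSHandler,
        }]
```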
If the LLM generates a codeblock, for example:
```mmd
x -> y
y -> x
```
You can replace it with a custom GTK widget, or with some text that will be sent to the LLM. Let's say you want to replace every mmd codeblock: add mmd to the list returned by get_replace_codeblocks_langs:
```python
class MyCustomExtension(NewelleExtension):
    def get_replace_codeblocks_langs(self) -> list:
        return ["mmd"]
```
Now, if we want to replace that codeblock with a custom widget, we can override the get_gtk_widget method:
```python
from gi.repository import Gtk, GdkPixbuf

def get_gtk_widget(self, codeblock: str, lang: str) -> Gtk.Widget | None:
    """
    Returns the GTK widget to be shown in the chat, optional
    NOTE: it is run every time the message is loaded in chat

    Args:
        codeblock: str: text in the codeblock generated by the llm
        lang: str: language of the codeblock

    Returns:
        Gtk.Widget: widget to be shown in the chat or None if not provided
    """
    # Do what you need
    return Gtk.Image(...)
```
You can also replace the codeblock with the result of an operation that is sent to the LLM, by overriding get_answer:
```python
def get_answer(self, codeblock: str, lang: str) -> str | None:
    """
    Returns the answer to the codeblock

    Args:
        codeblock: str: text in the codeblock generated by the llm
        lang: str: language of the codeblock

    Returns:
        str: answer to the codeblock (will be given to the llm) or None if not provided
    """
    if lang == "calc":
        return "The result is: " + str(eval(codeblock))
    return None
```
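With this in place, if the model replies with a codeblock like the following (the expression is just an example), the extension evaluates it and the string "The result is: 4" is sent back to the LLM in place of the codeblock:

```calc
2 + 2
```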
Extensions can add custom prompts in order to make the LLM use their capabilities. To add a custom prompt, override the get_additional_prompts method:
```python
def get_additional_prompts(self) -> list:
    """
    Returns the list of additional prompts

    Returns:
        list: list of additional prompts in this format
        {
            "key": "key of the prompt",
            "setting_name": "name of the setting that gets toggled",
            "title": "Title of the prompt to be shown in settings",
            "description": "Description of the prompt to be shown in settings",
            "editable": bool, if the user can edit the prompt,
            "show_in_settings": bool, if the prompt should be shown in the settings,
            "default": bool, default value of the setting,
            "text": "Default text of the prompt"
        }
    """
    return [
        {
            "key": "mermaid",
            "setting_name": "mermaid",
            "title": "Show mermaid graphs",
            "description": "Allow the llm to show mermaid graphs",
            "editable": True,
            "show_in_settings": True,
            "default": True,
            "text": "You can use ```mmd\ngraph\n```\n to show a Mermaid graph"
        }
    ]
```
To get LSP working correctly while coding, you have to:
- Clone the Newelle repo: git clone https://github.com/qwersyk/Newelle
- Create your extension file in src/
- Write your extension there; the LSP should be able to recognize the right imports
You can check whether a python module is available using find_module:

```python
from .utility.pip import find_module

if find_module("numpy") is None:
    print("Module not found")
else:
    print("Module found")
```
To install a missing python module, override the install method of your extension and use install_module with self.pip_path:

```python
from .utility.pip import install_module

class MyExtension(NewelleExtension):
    ...
    def install(self):
        install_module("numpy", self.pip_path)
```
The full code for these examples is in this repository.
First of all, we create the base class and add the required metadata:
```python
from .extensions import NewelleExtension

class DDGExtension(NewelleExtension):
    name = "DuckDuckGo"
    id = "ddg"
```
After that, we just need to override the get_llm_handlers method. The DDGHandler class was already programmed here; the code for the LLMHandler must be in the same file.
```python
def get_llm_handlers(self) -> list[dict]:
    return [
        {
            "key": "ddg",
            "title": "DuckDuckGo",
            "description": "DuckDuckGo AI chat, private and fast",
            "class": DDGHandler
        }
    ]
```
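For context, DDGHandler is an ordinary Newelle LLM handler defined in the same file. The skeleton below is only a hypothetical sketch, not the real implementation: the import path and the generate_text signature are assumptions, so check the handler documentation and the Newelle source for the exact API.

```python
# Hypothetical sketch, not the actual DDGHandler implementation.
# Import path and method signature are assumptions; verify them against the Newelle source.
from .handlers.llm import LLMHandler

class DDGHandler(LLMHandler):
    key = "ddg"

    def generate_text(self, prompt, history=[], system_prompt=[]) -> str:
        # Call the DuckDuckGo AI chat endpoint here and return the generated reply
        ...
```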
We will now build an extension that replaces any generate-image codeblock with an image generated by pollinations.ai.
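In other words, when the LLM replies with a codeblock like the following (the prompt text is just an example), the extension will render the generated image in its place:

```generate-image
a red fox in a snowy forest, detailed, digital art, warm lighting
```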
- We create the base extension class
```python
from .extensions import NewelleExtension

class PollinationsExtension(NewelleExtension):
    name = "Pollinations Image Generator"
    id = "pollinationsimg"
```
- We override get_replace_codeblocks_langs in order to be able to replace generate-image codeblocks
```python
def get_replace_codeblocks_langs(self) -> list:
    return ["generate-image"]
```
- We override get_additional_prompts in order to add a prompt that tells the AI that it can generate images using those codeblocks
```python
def get_additional_prompts(self) -> list:
    return [
        {
            "key": "generate-image",
            "setting_name": "generate-image",
            "title": "Generate Image",
            "description": "Generate images using Pollinations AI",
            "editable": True,
            "show_in_settings": True,
            "default": True,
            "text": "You can generate images using: \n```generate-image\nprompt\n```\nUse detailed prompts, with words separated by commas",
        }
    ]
```
This makes it possible for the user to view and edit the prompt.
- Override the get_gtk_widget method to return the widget you want to replace the codeblock with.
```python
def get_gtk_widget(self, codeblock: str, lang: str) -> Gtk.Widget | None:
    from threading import Thread
    # Create the box that will be returned
    box = Gtk.Box()
    # Create a spinner while loading the image
    spinner = Gtk.Spinner(spinning=True)
    # Add the spinner to the box
    box.append(spinner)
    # Create the image widget that will replace the spinner
    image = Gtk.Image()
    image.set_size_request(400, 400)
    # Add the image to the box
    box.append(image)
    # Create the thread that will load the image in background
    thread = Thread(target=self.generate_image, args=(codeblock, image, spinner, box))
    # Start the thread
    thread.start()
    # Return the box
    return box
```
Then we add the necessary methods for image generation:
```python
def generate_image(self, codeblock, image: Gtk.Image, spinner: Gtk.Spinner, box: Gtk.Box):
    import urllib.request
    import urllib.parse
    # Create a pixbuf loader that will load the image
    pixbuf_loader = GdkPixbuf.PixbufLoader()
    pixbuf_loader.connect("area-prepared", self.on_area_prepared, spinner, image, box)
    # Generate the image and write it to the pixbuf loader
    try:
        url = "https://image.pollinations.ai/prompt/" + urllib.parse.quote(codeblock)
        with urllib.request.urlopen(url) as response:
            data = response.read()
            pixbuf_loader.write(data)
        pixbuf_loader.close()
    except Exception as e:
        print("Exception generating the image: " + str(e))

def on_area_prepared(self, loader: GdkPixbuf.PixbufLoader, spinner: Gtk.Spinner, image: Gtk.Image, box: Gtk.Box):
    # Runs when the image has been loaded: show the image (already added to the box) and remove the spinner
    image.set_from_pixbuf(loader.get_pixbuf())
    box.remove(spinner)
```
And this is the result:
Now we will create an extension that allows the LLM to get information from Arch Linux wiki pages. When the LLM needs information from the Arch Wiki, we want it to emit an arch-wiki codeblock containing the search query, as shown in the example below.
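For example (the query is only an illustration):

```arch-wiki
bluetooth
```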
- We create the base class and add the required metadata:
```python
class ArchWikiExtension(NewelleExtension):
    id = "archwiki"
    name = "Arch Wiki integration"
```
- We override get_replace_codeblocks_langs to be able to replace arch-wiki codeblocks
```python
def get_replace_codeblocks_langs(self) -> list:
    return ["arch-wiki"]
```
- We override get_additional_prompts and add a prompt to inform the LLM that it can query the Arch Wiki
```python
def get_additional_prompts(self) -> list:
    return [
        {
            "key": "archwiki",
            "setting_name": "archwiki",
            "title": "Arch Wiki",
            "description": "Enable Arch Wiki integration",
            "editable": True,
            "show_in_settings": True,
            "default": False,
            "text": "Use \n```arch-wiki\nterm\n```\nto search on Arch Wiki\nThen do not provide any other information. The user will give you the content of the page"
        }
    ]
```
- We override the get_answer method in order to replace the codeblock with the content of the Arch Wiki page
```python
def get_answer(self, codeblock: str, lang: str) -> str | None:
    import requests
    import markdownify
    # Search for pages similar to that query in the wiki
    r = requests.get("https://wiki.archlinux.org/api.php", params={"search": codeblock, "limit": 1, "format": "json", "action": "opensearch"})
    if r.status_code != 200:
        return "Error contacting Arch API"
    results = r.json()[1]
    if len(results) == 0:
        return "No Arch Wiki page found for this query"
    # Pick the first matching page
    page = results[0]
    # Pick the page name in order to get its content
    name = page.split("/")[-1]
    r = requests.get("https://wiki.archlinux.org/api.php", params={"action": "parse", "page": name, "format": "json"})
    if r.status_code != 200:
        return "Error contacting Arch API"
    # Convert the HTML to Markdown in order to make it more readable for the LLM
    html = r.json()["parse"]["text"]["*"]
    return markdownify.markdownify(html)
```
- As you may have noticed, we used a library that is not shipped with Newelle by default, so we override the install method to install it with pip when the extension is installed.
```python
from .utility.pip import install_module, find_module
from .extensions import NewelleExtension

class ArchWikiExtension(NewelleExtension):
    id = "archwiki"
    name = "Arch Wiki integration"

    def install(self):
        if find_module("markdownify", self.pip_path) is None:
            install_module("markdownify", self.pip_path)
```