diff --git a/docs/user-guide/apps/continue-01.png b/docs/user-guide/apps/continue-01.png
new file mode 100644
index 0000000..8f8f759
Binary files /dev/null and b/docs/user-guide/apps/continue-01.png differ
diff --git a/docs/user-guide/apps/continue.md b/docs/user-guide/apps/continue.md
new file mode 100644
index 0000000..127adcb
--- /dev/null
+++ b/docs/user-guide/apps/continue.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 11
+---
+
+# AI coding assistant: Continue
+
+[Continue](https://github.com/continuedev/continue) is the leading open-source AI code assistant.
+It is a copilot-like plugin for VSCode and JetBrains that provides custom autocomplete and chat experiences inside
+those IDEs. You can easily configure it to use Gaia nodes as LLM backends. In fact, you can choose different Gaia
+nodes for
+
+* the autocomplete model for coding tasks
+* the chat model for understanding and discussing code
+* the embedding model to provide chat context based on local files
+
+## Prerequisites
+
+You will need a Gaia node ready to provide LLM services through a public URL. You can
+
+* [run your own node](../../node-guide/quick-start.md)
+* [use a public node](../nodes.md)
+
+In this tutorial, we will use public nodes to power the Continue plugin.
+
+| Model type | API base URL | Model name |
+|-----|--------|-----|
+| Chat | https://gemma-2-27b.us.gaianet.network/v1/ | gemma-2-27b-it-Q5_K_M |
+| Embedding | https://gemma-2-27b.us.gaianet.network/v1/ | nomic-embed-text-v1.5.f16 |
+| Autocompletion | https://codestral-01-22b.us.gaianet.network/v1/ | Codestral-22B-v0.1-hf-Q5_K_M |
+
+> Note that Continue requires the API base URL to end with a `/`.
+
+## Install Continue
+
+[Open this link](https://marketplace.visualstudio.com/items?itemName=Continue.continue) to install the Continue IDE plugin.
+When you click the **Install** button on the web page, VSCode will open automatically. When you are
+asked to configure Continue, just click **Skip** and finish the installation without selecting a local model.
+
+## Configure Continue
+
+Click the gear icon on the toolbar to open the `config.json` file for the Continue plugin. The file is located
+at `$HOME/.continue/config.json` in your home directory.
+Change the `config.json` file as follows.
+It configures the Continue plugin to use different public Gaia nodes and models for
+chat, code autocomplete, and embeddings.
+
+```
+{
+  "models": [
+    {
+      "model": "gemma-2-27b-it-Q5_K_M",
+      "title": "LlamaEdge",
+      "apiBase": "https://gemma-2-27b.us.gaianet.network/v1/",
+      "provider": "openai"
+    }
+  ],
+  "tabAutocompleteModel": {
+    "title": "Autocomplete",
+    "apiBase": "https://codestral-01-22b.us.gaianet.network/v1/",
+    "model": "Codestral-22B-v0.1-hf-Q5_K_M",
+    "provider": "openai"
+  },
+  "embeddingsProvider": {
+    "provider": "openai",
+    "model": "nomic-embed-text-v1.5.f16",
+    "apiBase": "https://gemma-2-27b.us.gaianet.network/v1/"
+  },
+  "customCommands": [
+    {
+      "name": "test",
+      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
+      "description": "Write unit tests for highlighted code"
+    }
+  ],
+  "allowAnonymousTelemetry": true
+}
+```
+
+Save the `config.json` file and you are done!
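+
+Before you use the plugin, you can confirm that the configured endpoints are reachable. Below is a minimal
+sanity check using `curl` against the public chat node from the table above. It assumes the node speaks the
+OpenAI-compatible chat API, and that no API key is needed (the `config.json` above omits one as well).
+
+```
+# Send a one-message chat request; a JSON chat completion indicates the node is up
+curl -X POST https://gemma-2-27b.us.gaianet.network/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{"model": "gemma-2-27b-it-Q5_K_M", "messages": [{"role": "user", "content": "Hello"}]}'
+```
+
+If this returns a valid response, Continue should work with the same base URL and model name.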
+
+## Use the plugin
+
+The following screenshot shows how you can chat with an error message
+inside the IDE.
+
+![](continue-01.png)
+
diff --git a/docs/user-guide/apps/flowise-tool-01.png b/docs/user-guide/apps/flowise-tool-01.png
new file mode 100644
index 0000000..16a3d07
Binary files /dev/null and b/docs/user-guide/apps/flowise-tool-01.png differ
diff --git a/docs/user-guide/apps/flowise-tool-02.png b/docs/user-guide/apps/flowise-tool-02.png
new file mode 100644
index 0000000..98e64bf
Binary files /dev/null and b/docs/user-guide/apps/flowise-tool-02.png differ
diff --git a/docs/user-guide/apps/flowise-tool-03.png b/docs/user-guide/apps/flowise-tool-03.png
new file mode 100644
index 0000000..743e31e
Binary files /dev/null and b/docs/user-guide/apps/flowise-tool-03.png differ
diff --git a/docs/user-guide/apps/flowise-tool-04.png b/docs/user-guide/apps/flowise-tool-04.png
new file mode 100644
index 0000000..130784e
Binary files /dev/null and b/docs/user-guide/apps/flowise-tool-04.png differ
diff --git a/docs/user-guide/apps/flowise-tool-05.png b/docs/user-guide/apps/flowise-tool-05.png
new file mode 100644
index 0000000..bf55a70
Binary files /dev/null and b/docs/user-guide/apps/flowise-tool-05.png differ
diff --git a/docs/user-guide/apps/flowise-tool-06.png b/docs/user-guide/apps/flowise-tool-06.png
new file mode 100644
index 0000000..ada67a6
Binary files /dev/null and b/docs/user-guide/apps/flowise-tool-06.png differ
diff --git a/docs/user-guide/apps/flowiseai-tool-call.md b/docs/user-guide/apps/flowiseai-tool-call.md
new file mode 100644
index 0000000..a68b871
--- /dev/null
+++ b/docs/user-guide/apps/flowiseai-tool-call.md
@@ -0,0 +1,104 @@
+---
+sidebar_position: 12
+---
+
+# FlowiseAI tool call
+
+FlowiseAI is a low-code tool for developers to build customized LLM orchestration flows & AI agents.
+You can configure the FlowiseAI tool to use a Gaia node that supports [LLM tool calling](https://github.com/LlamaEdge/LlamaEdge/blob/main/api-server/ToolUse.md).
+
+## Prerequisites
+
+You will need a Gaia node ready to provide LLM services through a public URL.
+In this tutorial, you will need to [set up a public node with tool call support](https://github.com/GaiaNet-AI/node-configs/blob/main/mistral-0.3-7b-instruct-tool-call/README.md).
+
+## Start a FlowiseAI server
+
+Follow [the FlowiseAI guide](https://docs.flowiseai.com/getting-started) to install Flowise locally:
+
+```
+npm install -g flowise
+npx flowise start
+```
+
+Once it is running, open `http://localhost:3000` to check out the FlowiseAI tool.
+
+## Build a chatbot for real-time IP lookup
+
+Step 1: Create a new **Chatflow** from the UI.
+
+![](flowise-tool-01.png)
+
+Step 2: On the **Chatflow** canvas, add a node called **ChatLocalAI**.
+
+![](flowise-tool-02.png)
+
+Step 3: Configure the **ChatLocalAI** widget to use the Gaia node with tool call support you have created.
+
+* Base path: `https://YOUR-NODE-ID.us.gaianet.network/v1`
+* Model name: e.g., `Mistral-7B-Instruct-v0.3.Q5_K_M`
+
+Step 4: Add a node called **Custom Tool**.
+
+Create a function named `get_ip_address_geo_location`.
+The function requires a `string` parameter called `ip`.
+
+The **Tool description** field is the "prompt" that tells the LLM when to use this function. In this example,
+if the LLM detects that the user is asking about the city or country of an IP address, it will
+return a tool call response asking FlowiseAI to perform this function call first.
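+
+To see what the model actually does in this situation, you can call the node directly, outside of FlowiseAI.
+The sketch below assumes the tool-call node set up in the prerequisites (`YOUR-NODE-ID` is a placeholder) and
+uses the OpenAI-compatible `tools` field described in the LlamaEdge tool-use guide linked above.
+
+```
+# Ask a location question while advertising the get_ip_address_geo_location function
+curl -X POST https://YOUR-NODE-ID.us.gaianet.network/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "Mistral-7B-Instruct-v0.3.Q5_K_M",
+    "messages": [{"role": "user", "content": "Which city is 35.222.115.181 in?"}],
+    "tools": [{
+      "type": "function",
+      "function": {
+        "name": "get_ip_address_geo_location",
+        "description": "Get the city and country of an IP address",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "ip": {"type": "string", "description": "The IP address to look up"}
+          },
+          "required": ["ip"]
+        }
+      }
+    }]
+  }'
+```
+
+If tool calling works, the response contains a `tool_calls` entry naming this function with the extracted IP
+address as its argument, instead of a plain text answer.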
+
+![](flowise-tool-03.png)
+
+Now you can add JavaScript code for this function. It looks up the location of the input `ip` parameter.
+
+```
+const fetch = require("node-fetch")
+
+// ipwho.is returns geo location data for the given IP address
+const url = "http://ipwho.is/"+$ip
+
+try {
+  const response = await fetch(url)
+  const result = await response.text()
+  console.log(result)
+  // Return the raw JSON text to the agent as the tool call result
+  return result
+} catch(error) {
+  console.error(error)
+}
+```
+
+![](flowise-tool-04.png)
+
+Step 5: Add a node called **Buffer Memory** to the canvas.
+
+Step 6: Add a node called **Tool Agent**.
+
+Step 7: Connect the nodes.
+
+Connect the **Custom Tool** and **Buffer Memory** nodes to the appropriate connectors on the
+**Tool Agent** node. Connect the **ChatLocalAI** node to the **Custom Tool**.
+
+![](flowise-tool-05.png)
+
+Step 8: Save the **Chatflow**.
+
+## Give it a try
+
+From the FlowiseAI UI, you can open a chat window to chat with the chatbot you just created. Let's
+ask a question:
+
+```
+What's the location of this address 35.222.115.181
+```
+
+The LLM understands that the request is to find a location for an IP address, and sees that we have a function
+called `get_ip_address_geo_location` in tools, which has a description that matches this task.
+So, it responds with a JSON message that calls this function with
+the IP address it extracts from the user query.
+
+This tool call JSON message is NOT displayed to the user in the chatbot. Instead, the FlowiseAI
+**Custom Tool** node captures it and executes the JavaScript code associated with this tool call. The result of
+the tool call is then sent back to the LLM together with the original query
+(this is why we need the **Buffer Memory** node),
+and the LLM formulates a human-readable response to the original question.
+
+![](flowise-tool-06.png)
+
diff --git a/docs/user-guide/apps/flowiseai.md b/docs/user-guide/apps/flowiseai.md
index 711cced..7f26015 100644
--- a/docs/user-guide/apps/flowiseai.md
+++ b/docs/user-guide/apps/flowiseai.md
@@ -2,9 +2,23 @@
 sidebar_position: 6
 ---
 
-# FlowiseAI + GaiaNet
+# FlowiseAI RAG chat
 
-FlowiseAI is a low-code tool for developers to build customized LLM orchestration flows & AI agents. You can configure the Flowise tool using any Gaianet Node as the backend LLM API.
+FlowiseAI is a low-code tool for developers to build customized LLM orchestration flows & AI agents. You can configure the FlowiseAI tool to use Gaia nodes as LLM service providers.
+
+## Prerequisites
+
+You will need a Gaia node ready to provide LLM services through a public URL. You can
+
+* [run your own node](../../node-guide/quick-start.md)
+* [use a public node](../nodes.md)
+
+In this tutorial, we will use public nodes to power the FlowiseAI chatbot.
+
+| Model type | API base URL | Model name |
+|-----|--------|-----|
+| Chat | https://llama-3-8b.us.gaianet.network/v1 | Meta-Llama-3-8B-Instruct-Q5_K_M |
+| Embedding | https://llama-3-8b.us.gaianet.network/v1 | nomic-embed-text-v1.5.f16 |
 
 ## Start a FlowiseAI server
 
@@ -21,9 +35,7 @@ After running successfully, you can open http://localhost:3000 to check out the
 
 FlowiseAI allows you to visually set up all the workflow components for an AI agent. If you're new to FlowiseAI, it's recommended to start with a template. In fact, there are lots of templates around OpenAI in the Flowise marketplace. All we need to do is to replace the ChatOpenAI component with the ChatLocalAI component.
 
-Let's take the **Flowise Docs QnA** as an example. You can build a QnA chatbot based on your documents. In this example, we would like to chat with a set of documents in a GitHub repo. The default template was built with OpenAI and we will now change it to use an open-source LLM on a GaiaNet node. Of course, you must have access to a
-[GaiaNet node](https://github.com/GaiaNet-AI/gaianet-node/blob/main/README.md). I recommend running the
-[Llama-3-8b + monic-embed](https://github.com/GaiaNet-AI/node-configs/tree/main/llama-3-8b-instruct) models on your GaiaNet node.
+Let's take the **Flowise Docs QnA** as an example. You can build a QnA chatbot based on your documents. In this example, we would like to chat with a set of documents in a GitHub repo. The default template was built with OpenAI and we will now change it to use an open-source LLM on a Gaia node.
 
 ### Get the **Flowise Docs QnA** template
 
@@ -41,13 +53,10 @@ You will need to delete the ChatOpenAI component and click the + button to searc
 
 ![](flowise-03.png)
 
-Then, you will need to input the GaiaNet node base URL `https://node_id.us.gaianet.network/v1` and the model name. You can get the model via the following command line.
-
-```
-# Replace your node id here
-
-curl -X POST https://node_id.us.gaianet.network/v1/models
-```
+Then, you will need to input
+
+* the Gaia node base URL `https://llama-3-8b.us.gaianet.network/v1`
+* the model name `Meta-Llama-3-8B-Instruct-Q5_K_M`
 
 Next, connect the ChatLocalAI component with the field `Chat model` in the **Conversational Retrieval QA Chain** component.
 
@@ -55,8 +64,8 @@ Next, connect the ChatLocalAI component with the field `Chat model` in the **Con
 
 The default template uses the OpenAI Embeddings component to create embeddings for your documents. We need to replace the **OpenAI Embeddings** component with the **LocalAI Embeddings** component.
 
-* Use the GaiaNet node base URL `https://node_id.us.gaianet.network/v1` in the Base Path field.
-* Input the model name in the Model Name field.
+* Use the Gaia node base URL `https://llama-3-8b.us.gaianet.network/v1` in the Base Path field.
+* Input the model name `nomic-embed-text-v1.5.f16` in the Model Name field.
 
 Next, connect the **LocalAI Embeddings** component with the field `embedding` in the **In-Memory Vector Store** component.
 
@@ -64,7 +73,7 @@ Next, connect the **LocalAI Embeddings** component with the field `embedding` in
 
 Then, let's configure the GitHub component to connect the chat application to our documents on GitHub. You will need to put your docs GitHub link into the **Repo Link** field. For example, you can put GaiaNet's docs link: `https://github.com/GaiaNet-AI/docs/tree/main/docs`.
 
-### Give it a try
+## Give it a try
 
 You can send a question like "How to install a GaiaNet node" after saving the current chatflow.
diff --git a/docs/user-guide/apps/gpt-planner.md b/docs/user-guide/apps/gpt-planner.md
index a6fa889..3624edc 100644
--- a/docs/user-guide/apps/gpt-planner.md
+++ b/docs/user-guide/apps/gpt-planner.md
@@ -29,7 +29,7 @@ In this tutorial, we will use a public node.
 
 First, [load the notebook in Colab](https://colab.research.google.com/github/mshumer/gpt-prompt-engineer/blob/main/gpt_planner.ipynb).
 
-Edit the code to create an OpenAIT client. We will pass in the `base_url` here.
+Edit the code to create an OpenAI client. We will pass in the `base_url` here.
 
 ```
 client = openai.OpenAI(base_url="https://llama-3-8b.us.gaianet.network/v1", api_key=OPENAI_API_KEY)
 ```
diff --git a/docs/user-guide/apps/obsidian.md b/docs/user-guide/apps/obsidian.md
index 85fc877..5834457 100644
--- a/docs/user-guide/apps/obsidian.md
+++ b/docs/user-guide/apps/obsidian.md
@@ -2,11 +2,7 @@
 sidebar_position: 8
 ---
 
-# Obsidian + GaiaNet
-
-
-
-## Using GaiaNet Node for Obsidian Local GPT Plugin
+# Obsidian
 
 Obsidian is a note-taking application that enables users to create, link, and visualize ideas directly on their devices. With Obsidian, you can seamlessly sync notes across devices, publish your work, and collaborate with others. The app is highly customizable, allowing users to enhance functionality through a wide range of plugins and themes. Its unique features include a graph view to visualize connections between notes, making it ideal for managing complex information and fostering creativity. Obsidian also emphasizes data privacy by storing notes locally.
 
@@ -16,29 +12,19 @@ A key feature of this plugin is that it supports a large number of open source L
 
 This guide explains how to set up and use the plugin with a GaiaNet node as an alternative to OpenAI or Ollama.
 
-
 ## Prerequisites
 
-Before setting up the Obsidian-local-gpt plugin, you need to have access to a GaiaNet node. You have two options:
-
-
-### Option 1: Use a Public GaiaNet Node
-
-
-For a quick start, you could use a [public GaiaNet node](https://docs.gaianet.ai/user-guide/nodes). For example, the following node provides access to the Llama-3-8b model, which is a balanced and fast LLM.
-
-https://llama-3-8b.us.gaianet.network/
-
-
-### Option 2: Start Your Own GaiaNet Node
-
-Running your own GaiaNet node provides maximum privacy and control over your data. It also lets you experiment with different finetuned models for different tasks. This option is ideal for users who prioritize data sovereignty and have the technical capacity to manage their own node.
-
-To set up your own GaiaNet node, follow the detailed instructions in the [GaiaNet Node Setup Guide](https://github.com/GaiaNet-AI/gaianet-node/blob/main/README.md). By default, this command will download and run a**_ Phi-3-mini-4k-instruct model_** to run as a part of GaiaNet node on your device.
+You will need a Gaia node ready to provide LLM services through a public URL. You can
 
-After setup, note down your node's URL, which you'll need for configuring the Obsidian plugin.
+* [run your own node](../../node-guide/quick-start.md)
+* [use a public node](../nodes.md)
 
+In this tutorial, we will use a public node.
 
+| Attribute | Value |
+|-----|--------|
+| API endpoint URL | https://llama-3-8b.us.gaianet.network/v1 |
+| Model Name | Meta-Llama-3-8B-Instruct-Q5_K_M |
 
 ## Obsidian-local-gpt Plugin Setup
diff --git a/docs/user-guide/apps/openwebui.md b/docs/user-guide/apps/openwebui.md
index c7f2d34..8dddc5e 100644
--- a/docs/user-guide/apps/openwebui.md
+++ b/docs/user-guide/apps/openwebui.md
@@ -7,6 +7,20 @@ sidebar_position: 3
 
 You can configure the Open WebUI framework, a self-hosted WebUI, using any GaiaNet node as the backend LLM API. That allows you to use your own or community GaiaNet nodes in any application built on Open WebUI.
 
+## Prerequisites
+
+You will need a Gaia node ready to provide LLM services through a public URL. You can
+
+* [run your own node](../../node-guide/quick-start.md)
+* [use a public node](../nodes.md)
+
+In this tutorial, we will use public nodes to power Open WebUI.
+
+| Model type | API base URL | Model name |
+|-----|--------|-----|
+| Chat | https://llama-3-8b.us.gaianet.network/v1 | Meta-Llama-3-8B-Instruct-Q5_K_M |
+| Embedding | https://llama-3-8b.us.gaianet.network/v1 | nomic-embed-text-v1.5.f16 |
+
 ## Start the Open WebUI on your machine
 
 After successfully starting the GaiaNet node, you can use `docker run` to start the Open WebUI.
 
@@ -14,7 +28,7 @@ After successfully starting the GaiaNet node, you can use `docker run` to start
 
 ```
 docker run -d -p 3000:8080 \
   -v open-webui:/app/backend/data \
-  -e OPENAI_API_BASE_URL="https://llama3.gaianet.network/v1" \
+  -e OPENAI_API_BASE_URL="https://llama-3-8b.us.gaianet.network/v1" \
   -e OPENAI_API_KEYS="gaianet" \
   --name open-webui \
   --restart always \
@@ -52,7 +66,7 @@ Open WebUI also offers a way to implment RAG application. Since the Gaianet node
 
 **Step 2:** Use GaiaNet node as the embedding API
 
-Click on **+** to uopload your documentations.
+Click on **+** to upload your documents.
 
 **Step 3:** Chat
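+
+Before uploading documents, you can verify that the node's embedding endpoint responds. This is a minimal
+sketch assuming the embedding model from the table above and the OpenAI-compatible `/v1/embeddings` route:
+
+```
+# Request an embedding vector for a short text from the Gaia node
+curl -X POST https://llama-3-8b.us.gaianet.network/v1/embeddings \
+  -H "Content-Type: application/json" \
+  -d '{"model": "nomic-embed-text-v1.5.f16", "input": ["Gaia nodes can serve embeddings for RAG"]}'
+```
+
+A response containing an `embedding` array confirms that Open WebUI can use this node to index your documents.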