Add tools support
Signed-off-by: Michael Yuan <michael@secondstate.io>
juntao committed Jul 22, 2024
1 parent 3adba7e commit 804b6f4
Showing 3 changed files with 181 additions and 14 deletions.
81 changes: 81 additions & 0 deletions docs/user-guide/apps/continue.md
@@ -0,0 +1,81 @@
---
sidebar_position: 11
---

# AI coding assistant: Continue

[Continue](https://github.com/continuedev/continue) is the leading open-source AI code assistant.
It is a copilot-like plugin for VSCode and JetBrains that provides custom autocomplete and chat experiences inside
those IDEs. You can easily configure it to use Gaia nodes as LLM backends. In fact, you can choose different Gaia nodes for

* the autocomplete model for coding tasks
* the chat model for understanding and discussing code
* the embedding model to provide chat context based on local files

## Prerequisites

You will need a Gaia node ready to provide LLM services through a public URL. You can

* [run your own node](../../node-guide/quick-start.md)
* [use a public node](../nodes.md)

In this tutorial, we will use public nodes to power the Continue plugin.

| Model type | API base URL | Model name |
|-----|--------|-----|
| Chat | https://gemma-2-27b.us.gaianet.network/v1/ | gemma-2-27b-it-Q5_K_M |
| Embedding | https://gemma-2-27b.us.gaianet.network/v1/ | nomic-embed-text-v1.5.f16 |
| Autocompletion | https://codestral-01-22b.us.gaianet.network/v1/ | Codestral-22B-v0.1-hf-Q5_K_M |

> It is important to note that Continue requires the API endpoint to include a `/` at the end.
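
Before configuring the plugin, you can optionally verify that a node is reachable. The check below is a sketch that assumes the public chat node from the table above is still live; it calls the node's OpenAI-compatible `/v1/models` endpoint, and you can swap in your own node's URL.

```bash
# List the models served by the chat node (OpenAI-compatible endpoint).
# Replace the URL with your own node's base URL if you run one.
curl https://gemma-2-27b.us.gaianet.network/v1/models
```
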
## Install Continue

[Open this link](https://marketplace.visualstudio.com/items?itemName=Continue.continue) to install the Continue IDE plugin.
Clicking the **Install** button on the web page will open VSCode. When you are
asked to configure Continue, just click **Skip** and finish the installation without selecting a local model.

## Configure Continue

Click the gear icon on the Continue toolbar to load the plugin's `config.json` file, which is located
at `$HOME/.continue/config.json` in your home directory.
Change the file as follows to tell the Continue plugin to use public Gaia nodes.

```json
{
  "models": [
    {
      "model": "gemma-2-27b-it-Q5_K_M",
      "title": "LlamaEdge",
      "apiBase": "https://gemma-2-27b.us.gaianet.network/v1/",
      "provider": "openai"
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "allowAnonymousTelemetry": true,
  "embeddingsProvider": {
    "provider": "openai",
    "model": "nomic-embed-text-v1.5.f16",
    "apiBase": "https://gemma-2-27b.us.gaianet.network/v1/"
  }
}
```
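
The table above also lists a Codestral node for autocompletion, which the configuration above does not yet use. As a sketch (the `tabAutocompleteModel` field follows Continue's config schema, and the `title` value is an arbitrary label of our choosing), you could add a top-level entry alongside `models`:

```json
{
  "tabAutocompleteModel": {
    "title": "Codestral on Gaia",
    "provider": "openai",
    "model": "Codestral-22B-v0.1-hf-Q5_K_M",
    "apiBase": "https://codestral-01-22b.us.gaianet.network/v1/"
  }
}
```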

Save the `config.json` file and you are done!

## Use the plugin

The following screenshot shows how you can chat with an error message
inside the IDE.

![](continue-01.png)



77 changes: 77 additions & 0 deletions docs/user-guide/apps/flowiseai-tool-call.md
@@ -0,0 +1,77 @@
---
sidebar_position: 12
---

# FlowiseAI tool call

FlowiseAI is a low-code tool for developers to build customized LLM orchestration flows & AI agents. You can configure the FlowiseAI tool to use a Gaia node that supports LLM tool calling.

## Prerequisites

You will need a Gaia node ready to provide LLM services through a public URL.
In this tutorial, you will need to [set up a public node with tool call support](https://github.com/GaiaNet-AI/node-configs/blob/main/mistral-0.3-7b-instruct-tool-call/README.md).
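
Once the node is up, you can optionally confirm that it accepts the OpenAI-style `tools` parameter. The request below is a sketch: the subdomain is a placeholder for your own node's URL, and the model name is an assumption based on the linked node config, so check your node's `/v1/models` endpoint for the exact name.

```bash
# Ask the node for a tool call against a hypothetical weather function.
# Replace YOUR-NODE-ID with your own node's subdomain.
curl -X POST https://YOUR-NODE-ID.us.gaianet.network/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Mistral-7B-Instruct-v0.3-Q5_K_M",
    "messages": [{"role": "user", "content": "What is the weather in Singapore?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

A successful response should contain a `tool_calls` field rather than plain text content.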

## Start a FlowiseAI server

Follow [the FlowiseAI guide](https://docs.flowiseai.com/getting-started) to install Flowise locally.

```bash
npm install -g flowise
npx flowise start
```

Once Flowise is running, open http://localhost:3000 to check out the FlowiseAI tool.

## Build a documents QnA chatbot

FlowiseAI allows you to visually set up all the workflow components for an AI agent. If you're new to FlowiseAI, it's recommended to start from a template. In fact, there are lots of templates built around OpenAI in the Flowise marketplace. All we need to do is replace the ChatOpenAI component with the ChatLocalAI component.

Let's take the **Flowise Docs QnA** as an example. You can build a QnA chatbot based on your documents. In this example, we would like to chat with a set of documents in a GitHub repo. The default template was built with OpenAI and we will now change it to use an open-source LLM on a Gaia node.

### Get the **Flowise Docs QnA** template

![](flowise-01.png)

Click on Marketplaces in the left tab to browse all the templates. The **Flowise Docs QnA** template we will use is the first one.

![](flowise-02.png)

Then, click the **Use this template** button in the top left corner to open the visual editor.

### Connect the chat model API

Delete the ChatOpenAI component, click the + button to search for ChatLocalAI, and then drag the ChatLocalAI component onto the canvas.

![](flowise-03.png)

Then, you will need to input

* the Gaia node base URL `https://llama-3-8b.us.gaianet.network/v1`
* the model name `Meta-Llama-3-8B-Instruct-Q5_K_M`

Next, connect the ChatLocalAI component with the field `Chat model` in the **Conversational Retrieval QA Chain** component.

### Connect the embedding model API

The default template uses the OpenAI Embeddings component to create embeddings for your documents. We need to replace the **OpenAI Embeddings** component with the **LocalAI Embeddings** component.

* Use the Gaia node base URL `https://llama-3-8b.us.gaianet.network/v1` in the Base Path field.
* Input the model name `nomic-embed-text-v1.5.f16` in the Model Name field.

Next, connect the **LocalAI Embeddings** component with the field `embedding` in the **In-Memory Vector Store** component.
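
If you would like to test the embedding model outside of Flowise first, the node also exposes an OpenAI-compatible `/v1/embeddings` endpoint. This is a sketch assuming the same public node as above:

```bash
# Request an embedding vector for a single input string.
curl -X POST https://llama-3-8b.us.gaianet.network/v1/embeddings \
  -H 'Content-Type: application/json' \
  -d '{"model": "nomic-embed-text-v1.5.f16", "input": ["How to install a GaiaNet node"]}'
```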

### Set up your documents

Then, let's configure the GitHub component to connect the chat application to our documents on GitHub. You will need to put your docs' GitHub link into the **Repo Link** field. For example, you can put GaiaNet's docs link: `https://github.com/GaiaNet-AI/docs/tree/main/docs`.

## Give it a try

After saving the current chatflow, you can send a question like "How to install a GaiaNet node".

![](flowise-04.png)

You will get an answer based on the GaiaNet docs, which is more accurate than what the base model would produce on its own.

## More examples

There are lots of examples in the Flowise marketplace. To build a Flowise agent on top of Gaia, simply replace the **ChatOpenAI** and **OpenAI Embeddings** components with their LocalAI counterparts pointed at a Gaia node base URL.
37 changes: 23 additions & 14 deletions docs/user-guide/apps/flowiseai.md
@@ -2,9 +2,23 @@
sidebar_position: 6
---

# FlowiseAI RAG chat

FlowiseAI is a low-code tool for developers to build customized LLM orchestration flows & AI agents. You can configure the FlowiseAI tool to use Gaia nodes as LLM service providers.

## Prerequisites

You will need a Gaia node ready to provide LLM services through a public URL. You can

* [run your own node](../../node-guide/quick-start.md)
* [use a public node](../nodes.md)

In this tutorial, we will use a public node to power the Flowise chatbot.

| Model type | API base URL | Model name |
|-----|--------|-----|
| Chat | https://llama-3-8b.us.gaianet.network/v1 | Meta-Llama-3-8B-Instruct-Q5_K_M |
| Embedding | https://llama-3-8b.us.gaianet.network/v1 | nomic-embed-text-v1.5.f16 |
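
You can optionally send a one-shot chat completion to confirm the public node is live before wiring it into Flowise; this sketch assumes the chat node from the table above:

```bash
# A minimal OpenAI-compatible chat request against the public Gaia node.
curl -X POST https://llama-3-8b.us.gaianet.network/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "Meta-Llama-3-8B-Instruct-Q5_K_M", "messages": [{"role": "user", "content": "Hello"}]}'
```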

## Start a FlowiseAI server

@@ -21,9 +35,7 @@ After running successfully, you can open http://localhost:3000 to check out the

FlowiseAI allows you to visually set up all the workflow components for an AI agent. If you're new to FlowiseAI, it's recommended to start from a template. In fact, there are lots of templates built around OpenAI in the Flowise marketplace. All we need to do is replace the ChatOpenAI component with the ChatLocalAI component.

Let's take the **Flowise Docs QnA** as an example. You can build a QnA chatbot based on your documents. In this example, we would like to chat with a set of documents in a GitHub repo. The default template was built with OpenAI and we will now change it to use an open-source LLM on a Gaia node.

### Get the **Flowise Docs QnA** template

@@ -41,30 +53,27 @@ You will need to delete the ChatOpenAI component and click the + button to searc

![](flowise-03.png)

Then, you will need to input

* the Gaia node base URL `https://llama-3-8b.us.gaianet.network/v1`
* the model name `Meta-Llama-3-8B-Instruct-Q5_K_M`

Next, connect the ChatLocalAI component with the field `Chat model` in the **Conversational Retrieval QA Chain** component.

### Connect the embedding model API

The default template uses the OpenAI Embeddings component to create embeddings for your documents. We need to replace the **OpenAI Embeddings** component with the **LocalAI Embeddings** component.

* Use the Gaia node base URL `https://llama-3-8b.us.gaianet.network/v1` in the Base Path field.
* Input the model name `nomic-embed-text-v1.5.f16` in the Model Name field.

Next, connect the **LocalAI Embeddings** component with the field `embedding` in the **In-Memory Vector Store** component.

### Set up your documents

Then, let's configure the GitHub component to connect the chat application to our documents on GitHub. You will need to put your docs' GitHub link into the **Repo Link** field. For example, you can put GaiaNet's docs link: `https://github.com/GaiaNet-AI/docs/tree/main/docs`.

## Give it a try

After saving the current chatflow, you can send a question like "How to install a GaiaNet node".
