From cab4bdf1cb368d69997fc03fd935f376873054d2 Mon Sep 17 00:00:00 2001 From: alabulei1 Date: Wed, 26 Feb 2025 01:23:53 +0800 Subject: [PATCH] use gaia.domains --- docs/agent-integrations/codegpt.md | 4 ++-- docs/agent-integrations/continue.md | 4 ++-- docs/agent-integrations/flowiseai-tool-call.md | 2 +- docs/agent-integrations/intro.md | 8 ++++---- docs/agent-integrations/llamaedgebook.md | 2 +- docs/agent-integrations/stockbot.md | 2 +- docs/agent-integrations/translation-agent.md | 10 +++++----- docs/getting-started/api-reference.md | 14 +++++++------- docs/getting-started/mynode.md | 2 +- docs/tutorial/coinbase.md | 3 ++- docs/tutorial/concepts.md | 2 +- docs/tutorial/tool-call.md | 2 +- docs/tutorial/translator-agent.md | 3 ++- .../creator-guide/knowledge/concepts.md | 2 +- versioned_docs/version-1.0.0/tutorial/coinbase.md | 8 ++++++-- versioned_docs/version-1.0.0/tutorial/tool-call.md | 9 ++++++--- .../version-1.0.0/tutorial/translator-agent.md | 3 ++- .../version-1.0.0/user-guide/api-reference.md | 14 +++++++------- .../version-1.0.0/user-guide/apps/agent-zero.md | 12 ++++++------ .../version-1.0.0/user-guide/apps/codegpt.md | 4 ++-- .../version-1.0.0/user-guide/apps/continue.md | 12 ++++++------ .../user-guide/apps/flowiseai-tool-call.md | 2 +- .../version-1.0.0/user-guide/apps/flowiseai.md | 8 ++++---- .../version-1.0.0/user-guide/apps/gpt-planner.md | 5 +++-- .../version-1.0.0/user-guide/apps/intro.md | 8 ++++---- .../version-1.0.0/user-guide/apps/llamacoder.md | 8 +++++--- .../version-1.0.0/user-guide/apps/llamaedgebook.md | 5 +++-- .../version-1.0.0/user-guide/apps/llamaparse.md | 8 +++++--- .../version-1.0.0/user-guide/apps/llamatutor.md | 4 +++- .../version-1.0.0/user-guide/apps/lobechat.md | 2 +- .../version-1.0.0/user-guide/apps/obsidian.md | 6 ++++-- .../version-1.0.0/user-guide/apps/openwebui.md | 8 +++++--- .../version-1.0.0/user-guide/apps/stockbot.md | 6 +++++- .../user-guide/apps/translation-agent.md | 9 +++++---- 
versioned_docs/version-1.0.0/user-guide/mynode.md | 2 +- 35 files changed, 115 insertions(+), 88 deletions(-) diff --git a/docs/agent-integrations/codegpt.md b/docs/agent-integrations/codegpt.md index de34c1b..ecefb02 100644 --- a/docs/agent-integrations/codegpt.md +++ b/docs/agent-integrations/codegpt.md @@ -17,7 +17,7 @@ In this tutorial, we will use the public CodeStral nodes to power the CodeGPT pl | Model type | API base URL | Model name | |-----|--------|-----| -| Chat | https://codestral.us.gaianet.network/v1/v1/ | codestral | +| Chat | https://coder.gaia.domains/v1/v1/ | codestral | > For some reason, CodeGPT requires the API endpoint to include an extra `v1/` at the end. @@ -42,7 +42,7 @@ Click the CODEGPT on the right sidebar and enter the settings page for CodeGPT. | Attribute | Value | |-----|--------| -| API endpoint URL | https://codestral.us.gaianet.network/v1/v1/ | +| API endpoint URL | https://coder.gaia.domains/v1/v1/ | | API Key | gaia | ![](codegpt-03.png) diff --git a/docs/agent-integrations/continue.md b/docs/agent-integrations/continue.md index 214179a..9a9e532 100644 --- a/docs/agent-integrations/continue.md +++ b/docs/agent-integrations/continue.md @@ -25,7 +25,7 @@ In this tutorial, we will use public nodes to power the Continue plugin. |-----|--------|-----| | Chat | https://llama8b.gaia.domains/v1/ | llama | | Embedding | https://llama8b.gaia.domains/v1/ | nomic | -| Autocompletion | https://codestral.us.gaianet.network/v1/ | codestral | +| Autocompletion | https://coder.gaia.domains/v1/ | codestral | > It is important to note that Continue requires the API endpoint to include a `/` at the end. @@ -55,7 +55,7 @@ chat, code autocomplete and embeddings. 
], "tabAutocompleteModel": { "title": "Autocomplete", - "apiBase": "https://codestral.us.gaianet.network/v1/", + "apiBase": "https://coder.gaia.domains/v1/", "model": "codestral", "provider": "openai" }, diff --git a/docs/agent-integrations/flowiseai-tool-call.md b/docs/agent-integrations/flowiseai-tool-call.md index 852e708..88b5dde 100644 --- a/docs/agent-integrations/flowiseai-tool-call.md +++ b/docs/agent-integrations/flowiseai-tool-call.md @@ -35,7 +35,7 @@ Step 2: On the **Chatflow** canvas, add a node called **ChatLocalAI**. Step 3: Configure the **ChatLocalAI** widget to use the Gaia node with tool call support you have created. -* Base path: `https://YOUR-NODE-ID.us.gaianet.network/v1` +* Base path: `https://YOUR-NODE-ID.gaia.domains/v1` * Model name: e.g., `Mistral-7B-Instruct-v0.3.Q5_K_M` Step 4: Add a node called **Custom Tool** diff --git a/docs/agent-integrations/intro.md b/docs/agent-integrations/intro.md index 1be313d..2099f81 100644 --- a/docs/agent-integrations/intro.md +++ b/docs/agent-integrations/intro.md @@ -26,13 +26,13 @@ Remember to append the `/v1` after the host name. You can find a list of public ``` import openai -client = openai.OpenAI(base_url="https://YOUR-NODE-ID.us.gaianet.network/v1", api_key="YOUR_API_KEY_GOES_HERE") +client = openai.OpenAI(base_url="https://YOUR-NODE-ID.gaia.domains/v1", api_key="YOUR_API_KEY_GOES_HERE") ``` -Alternatively, you could set an environment variables at the OS level. +Alternatively, you could set an environment variable at the OS level. ``` -export OPENAI_API_BASE=https://YOUR-NODE-ID.us.gaianet.network/v1 +export OPENAI_API_BASE=https://YOUR-NODE-ID.gaia.domains/v1 export OPENAI_API_KEY=YOUR_API_KEY_GOES_HERE ``` @@ -79,14 +79,14 @@ Create an OpenAI client with a custom base URL. Remember to append the `/v1` aft ```js const client = new OpenAI({ - baseURL: 'https://YOUR-NODE-ID.us.gaianet.network/v1', + baseURL: 'https://YOUR-NODE-ID.gaia.domains/v1', apiKey: 'YOUR_API_KEY_GOES_HERE' }); ``` Alternatively, you can set an environment variable using `dotenv` in Node. 
``` -process.env.OPENAI_API_BASE = 'https://YOUR-NODE-ID.us.gaianet.network/v1'; +process.env.OPENAI_API_BASE = 'https://YOUR-NODE-ID.gaia.domains/v1'; ``` Then, when you make API calls from the `client`, make sure that the `model` is set to the model name diff --git a/docs/agent-integrations/llamaedgebook.md b/docs/agent-integrations/llamaedgebook.md index 42432a0..4d58b05 100644 --- a/docs/agent-integrations/llamaedgebook.md +++ b/docs/agent-integrations/llamaedgebook.md @@ -32,7 +32,7 @@ export OPENAI_API_KEY="GAIANET" **Hint:** if you don't know the model name of the Gaia node, you can retrieve the model information using: ``` -curl -X POST https://0x57b00e4f3d040e28dc8aabdbe201212e5fb60ebc.us.gaianet.network/v1/models +curl -X POST https://0x57b00e4f3d040e28dc8aabdbe201212e5fb60ebc.gaia.domains/v1/models ``` Then, use the following command line to run the app. diff --git a/docs/agent-integrations/stockbot.md b/docs/agent-integrations/stockbot.md index dc5a5a8..fa23f48 100644 --- a/docs/agent-integrations/stockbot.md +++ b/docs/agent-integrations/stockbot.md @@ -20,7 +20,7 @@ In this tutorial, we will use a public Llama3 node with the function call suppor | Attribute | Value | |-----|--------| -| API endpoint URL | https://llamatool.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | | API KEY | gaia | diff --git a/docs/agent-integrations/translation-agent.md b/docs/agent-integrations/translation-agent.md index f899e9f..8da323e 100644 --- a/docs/agent-integrations/translation-agent.md +++ b/docs/agent-integrations/translation-agent.md @@ -13,10 +13,10 @@ You can run the Translation Agent on top of a public Gaia Node as a backend and ## Prepare the environment -Here, we will use the public Gaia node with gemma-2-27b model. `https://gemma.us.gaianet.network/`. +Here, we will use the public Gaia node with the Llama 3.1 8b model: `https://llama8b.gaia.domains/`. 
->As an alternative, you can also start a Gaia node locally on your device. Refer to [this guide](https://github.com/GaiaNet-AI/node-configs/tree/main/gemma-2-27b-it). +> As an alternative, you can also start a Gaia node locally on your device. Refer to [this guide](https://github.com/GaiaNet-AI/node-configs/tree/main/llama-3.1-8b-instruct). To get started, clone the Translation Agent that supports open source LLMs. @@ -28,12 +28,12 @@ cd translation-agent git checkout use_llamaedge ``` -Set environment variables and install necessary Python packages if needed. Replace the OPENAI_BASE_URL with `https://gemma.us.gaianet.network/` +Set environment variables and install necessary Python packages if needed. Replace the OPENAI_BASE_URL with `https://llama8b.gaia.domains/v1` ``` -export OPENAI_BASE_URL="https://gemma.us.gaianet.network/v1" +export OPENAI_BASE_URL="https://llama8b.gaia.domains/v1" export PYTHONPATH=${PWD}/src -export OPENAI_API_KEY="GAIANET" +export OPENAI_API_KEY="YOUR_API_KEY_GOES_HERE" pip install python-dotenv pip install openai tiktoken icecream langchain_text_splitters diff --git a/docs/getting-started/api-reference.md b/docs/getting-started/api-reference.md index 74ffb45..a96b949 100644 --- a/docs/getting-started/api-reference.md +++ b/docs/getting-started/api-reference.md @@ -9,7 +9,7 @@ sidebar_position: 10 Each Gaia node is an OpenAI compatible API server. You can build your application based on the Gaia node API. You can also replace OpenAI API configuration with the Gaia node API in other AI agent frameworks. -The base URL to send all API requests is `https://node_id.gaianet.network/v1`. +The base URL to send all API requests is `https://node_id.gaia.domains/v1`. :::note @@ -30,7 +30,7 @@ By default, the API responds with a full answer in the HTTP response. 
**Request** ``` -curl -X POST https://node_id.gaianet.network/v1/chat/completions \ +curl -X POST https://node_id.gaia.domains/v1/chat/completions \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer YOUR_API_KEY_GOES_HERE' \ @@ -50,7 +50,7 @@ Add `"stream":true` in your request to make the API send back partial responses **Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/chat/completions \ +curl -X POST https://node_id.gaia.domains/v1/chat/completions \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer YOUR_API_KEY_GOES_HERE' \ @@ -107,7 +107,7 @@ The `embeddings` endpoint computes embeddings for user queries or file chunks. **Request** ``` -curl -X POST https://node_id.gaianet.network/v1/embeddings \ +curl -X POST https://node_id.gaia.domains/v1/embeddings \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer YOUR_API_KEY_GOES_HERE' \ @@ -167,7 +167,7 @@ The `retrieve` endpoint can retrieve text from the node's vector collection base **Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/retrieve \ +curl -X POST https://node_id.gaia.domains/v1/retrieve \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer YOUR_API_KEY_GOES_HERE' \ @@ -212,7 +212,7 @@ The `models` endpoint provides the chat and embedding models available on the no **Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/models +curl -X POST https://node_id.gaia.domains/v1/models ``` **Response:** @@ -228,7 +228,7 @@ The `info` endpoint provides detailed information about the node. 
**Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/info +curl -X POST https://node_id.gaia.domains/v1/info ``` **Response:** diff --git a/docs/getting-started/mynode.md b/docs/getting-started/mynode.md index 838f998..a838f0c 100644 --- a/docs/getting-started/mynode.md +++ b/docs/getting-started/mynode.md @@ -9,7 +9,7 @@ web-based chatbot UI and an OpenAI compatible web service. Just load the node's Let's say the URL is as follows. ``` -https://0x1234...xyz.gaianet.network/ +https://0x1234...xyz.gaia.domains/ ``` > Please refer to the [agent apps](../agent-integrations/intro) section to see how to use the Gaia node API in your favorite agent frameworks or apps. diff --git a/docs/tutorial/coinbase.md b/docs/tutorial/coinbase.md index 28c48fe..5dc8621 100644 --- a/docs/tutorial/coinbase.md +++ b/docs/tutorial/coinbase.md @@ -37,8 +37,9 @@ export CDP_API_KEY_PRIVATE_KEY='-----BEGIN EC...END EC PRIVATE KEY-----\n' Edit the `chatbot.py` file to configure the agent to use the Gaia node above. ``` -llm = ChatOpenAI(model="llama", api_key="GAIA", base_url="https://llamatool.us.gaianet.network/v1") +llm = ChatOpenAI(model="llama", api_key="GAIA", base_url="https://llama8b.gaia.domains/v1") ``` +> You will need to get an API key from Gaia. Finally, run the agent using Python. diff --git a/docs/tutorial/concepts.md b/docs/tutorial/concepts.md index 406bf44..53de5b4 100644 --- a/docs/tutorial/concepts.md +++ b/docs/tutorial/concepts.md @@ -46,7 +46,7 @@ On a Gaia node, we will get a database snapshot with the embeddings to use at la ## Lifecycle of a user query on a knowledge-supplemented LLM -Next, let's learn the lifecycle of a user query on a knowledge-supplemented LLM. We will take [a Gaia Node with Gaia knowledge](https://knowledge.gaianet.network/chatbot-ui/index.html) as an example. +Next, let's learn the lifecycle of a user query on a knowledge-supplemented LLM. 
We will take [a Gaia Node with Gaia knowledge](https://gaia.gaia.domains/chatbot-ui/index.html) as an example. ![user-query-rag](https://github.com/GaiaNet-AI/docs/assets/45785633/c64b85ea-65f0-43d2-8ab3-78889d21c248) diff --git a/docs/tutorial/tool-call.md b/docs/tutorial/tool-call.md index a04e112..c906050 100644 --- a/docs/tutorial/tool-call.md +++ b/docs/tutorial/tool-call.md @@ -43,7 +43,7 @@ Set the environment variables for the API server and model name we just set up. ``` export OPENAI_MODEL_NAME="llama" -export OPENAI_BASE_URL= "https://llamatool.us.gaianet.network/v1" +export OPENAI_BASE_URL="https://llama8b.gaia.domains/v1" ``` Run the `main.py` application and bring up the command line chat interface. diff --git a/docs/tutorial/translator-agent.md b/docs/tutorial/translator-agent.md index d424700..199bbc1 100644 --- a/docs/tutorial/translator-agent.md +++ b/docs/tutorial/translator-agent.md @@ -29,7 +29,7 @@ Next, we will install a local Gaia node, which provides the backend API services curl -sSfL 'https://github.com/GaiaNet-AI/gaianet-node/releases/latest/download/install.sh' | bash ``` -You will also need the following configurations and prerequisites to run the agent app. If you are using a public Gaia node instead of your local node, replace the `http://localhost:8080` with `https://node_id.us.gaianet.network`. +You will also need the following configurations and prerequisites to run the agent app. If you are using a public Gaia node instead of your local node, replace the `http://localhost:8080` with `https://node_id.gaia.domains`. ``` export OPENAI_BASE_URL="http://localhost:8080/v1" @@ -39,6 +39,7 @@ export OPENAI_API_KEY="GAIANET" pip install python-dotenv pip install openai tiktoken icecream langchain_text_splitters ``` +> If you're using a Domain service, you will [need to get an API key from Gaia](../getting-started/authentication.md). 
## Demo 1: Running Translation Agents with Llama-3-8B diff --git a/versioned_docs/version-1.0.0/creator-guide/knowledge/concepts.md b/versioned_docs/version-1.0.0/creator-guide/knowledge/concepts.md index 99aadf0..ef9e798 100644 --- a/versioned_docs/version-1.0.0/creator-guide/knowledge/concepts.md +++ b/versioned_docs/version-1.0.0/creator-guide/knowledge/concepts.md @@ -49,7 +49,7 @@ On a Gaia node, we will get a database snapshot with the embeddings to use at la ## Lifecycle of a user query on a knowledge-supplemented LLM -Next, let's learn the lifecycle of a user query on a knowledge-supplemented LLM. We will take [a Gaia Node with Gaia knowledge](https://knowledge.gaianet.network/chatbot-ui/index.html) as an example. +Next, let's learn the lifecycle of a user query on a knowledge-supplemented LLM. We will take [a Gaia Node with Gaia knowledge](https://gaia.gaia.domains/chatbot-ui/index.html) as an example. ![user-query-rag](https://github.com/GaiaNet-AI/docs/assets/45785633/c64b85ea-65f0-43d2-8ab3-78889d21c248) diff --git a/versioned_docs/version-1.0.0/tutorial/coinbase.md b/versioned_docs/version-1.0.0/tutorial/coinbase.md index 8a19999..bd575ed 100644 --- a/versioned_docs/version-1.0.0/tutorial/coinbase.md +++ b/versioned_docs/version-1.0.0/tutorial/coinbase.md @@ -10,10 +10,12 @@ Or, you could simply use our public node. | Attribute | Value | |-----|--------| -| API endpoint URL | https://llamatool.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | | API KEY | gaia | +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). + ## Quickstart First, you need a [Coinbase Developer Platform account](https://www.coinbase.com/developer-platform) and then [create an API key](https://docs.cdp.coinbase.com/advanced-trade/docs/auth/#creating-api-keys). 
@@ -35,9 +37,11 @@ export CDP_API_KEY_PRIVATE_KEY='-----BEGIN EC...END EC PRIVATE KEY-----\n' Edit the `chatbot.py` file to configure the agent to use the Gaia node above. ``` -llm = ChatOpenAI(model="llama", api_key="GAIA", base_url="https://llamatool.us.gaianet.network/v1") +llm = ChatOpenAI(model="llama", api_key="GAIA", base_url="https://llama8b.gaia.domains/v1") ``` +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). + Finally, run the agent using Python. ``` diff --git a/versioned_docs/version-1.0.0/tutorial/tool-call.md b/versioned_docs/version-1.0.0/tutorial/tool-call.md index 2777af8..ee688df 100644 --- a/versioned_docs/version-1.0.0/tutorial/tool-call.md +++ b/versioned_docs/version-1.0.0/tutorial/tool-call.md @@ -19,14 +19,15 @@ You will need a Gaia node ready to provide LLM services through a public URL. Yo * [run your own node](../node-guide/quick-start.md). You will need to start a Gaia node for the [Llama-3-Groq model](https://github.com/GaiaNet-AI/node-configs/tree/main/llama-3-groq-8b-tool) or the [Mistral-7B-v0.3 Instruct model](https://github.com/GaiaNet-AI/node-configs/tree/main/mistral-0.3-7b-instruct-tool-call) . You can then use the node's API URL endpoint and model name in your tool call apps. * [use a public node](../user-guide/nodes.md) -In this tutorial, we will use a public Llama3 node with the function call support. +In this tutorial, we will use a public Llama3 domain with function call support. | Attribute | Value | |-----|--------| -| API endpoint URL | https://llamatool.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | | API KEY | gaia | +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). 
## Run the demo agent @@ -44,8 +45,10 @@ Set the environment variables for the API server and model name we just set up. ``` export OPENAI_MODEL_NAME="llama" -export OPENAI_BASE_URL= "https://llamatool.us.gaianet.network/v1" +export OPENAI_BASE_URL="https://llama8b.gaia.domains/v1" +export OPENAI_API_KEY="GAIANET" ``` +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). Run the `main.py` application and bring up the command line chat interface. diff --git a/versioned_docs/version-1.0.0/tutorial/translator-agent.md b/versioned_docs/version-1.0.0/tutorial/translator-agent.md index ed4cb1a..c5b5651 100644 --- a/versioned_docs/version-1.0.0/tutorial/translator-agent.md +++ b/versioned_docs/version-1.0.0/tutorial/translator-agent.md @@ -29,7 +29,7 @@ Next, we will install a local GaiaNet node, which provides the backend API servi curl -sSfL 'https://github.com/GaiaNet-AI/gaianet-node/releases/latest/download/install.sh' | bash ``` -You will also need the following configurations and prerequisites to run the agent app. If you are using a public GaiaNet node instead of your local node, replace the `http://localhost:8080` with `https://node_id.us.gaianet.network`. +You will also need the following configurations and prerequisites to run the agent app. If you are using a public GaiaNet node instead of your local node, replace the `http://localhost:8080` with `https://node_id.gaia.domains`. ``` export OPENAI_BASE_URL="http://localhost:8080/v1" @@ -39,6 +39,7 @@ export OPENAI_API_KEY="GAIANET" pip install python-dotenv pip install openai tiktoken icecream langchain_text_splitters ``` +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). 
## Demo 1: Running Translation Agents with Llama-3-8B diff --git a/versioned_docs/version-1.0.0/user-guide/api-reference.md b/versioned_docs/version-1.0.0/user-guide/api-reference.md index d297037..3d8babe 100644 --- a/versioned_docs/version-1.0.0/user-guide/api-reference.md +++ b/versioned_docs/version-1.0.0/user-guide/api-reference.md @@ -9,7 +9,7 @@ sidebar_position: 4 Each GaiaNet node is an OpenAI compatible API server. You can build your application based on the GaiaNet node API. You can also replace OpenAI API configuration with the GaiaNet node API in other AI agent frameworks. -The base URL to send all API requests is `https://node_id.gaianet.network/v1`. +The base URL to send all API requests is `https://node_id.gaia.domains/v1`. ## Endpoints @@ -24,7 +24,7 @@ By default, the API responds with a full answer in the HTTP response. **Request** ``` -curl -X POST https://node_id.gaianet.network/v1/chat/completions \ +curl -X POST https://node_id.gaia.domains/v1/chat/completions \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -d '{"messages":[{"role":"system", "content": "You are a helpful assistant."}, {"role":"user", "content": "What is the capital of France?"}], "model": "model_name"}' @@ -43,7 +43,7 @@ Add `"stream":true` in your request to make the API send back partial responses **Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/chat/completions \ +curl -X POST https://node_id.gaia.domains/v1/chat/completions \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -d '{"messages":[{"role":"system", "content": "You are a helpful assistant."}, {"role":"user", "content": "What is the capital of France?"}], "model": "model_name", "stream":true}' @@ -99,7 +99,7 @@ The `embeddings` endpoint computes embeddings for user queries or file chunks. 
**Request** ``` -curl -X POST https://node_id.gaianet.network/v1/embeddings \ +curl -X POST https://node_id.gaia.domains/v1/embeddings \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -d '{"model": "nomic-embed-text-v1.5.f16", "input":["Paris, city and capital of France, ..., for Paris has retained its importance as a centre for education and intellectual pursuits.", "Paris’s site at a crossroads ..., drawing to itself much of the talent and vitality of the provinces."]}' @@ -158,7 +158,7 @@ The `retrieve` endpoint can retrieve text from the node's vector collection base **Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/retrieve \ +curl -X POST https://node_id.gaia.domains/v1/retrieve \ -H 'accept:application/json' \ -H 'Content-Type: application/json' \ -d '{"messages":[{"role":"system", "content": "You are a helpful assistant."}, {"role":"user", "content": "What is the location of Paris?"}], "model":"nomic-embed-text-v1.5.f16"}' @@ -202,7 +202,7 @@ The `models` endpoint provides the chat and embedding models available on the no **Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/models +curl -X POST https://node_id.gaia.domains/v1/models ``` **Response:** @@ -218,7 +218,7 @@ The `info` endpoint provides detailed information about the node. 
**Request:** ``` -curl -X POST https://node_id.gaianet.network/v1/info +curl -X POST https://node_id.gaia.domains/v1/info ``` **Response:** diff --git a/versioned_docs/version-1.0.0/user-guide/apps/agent-zero.md b/versioned_docs/version-1.0.0/user-guide/apps/agent-zero.md index c0d15d9..99318d7 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/agent-zero.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/agent-zero.md @@ -18,8 +18,8 @@ In this tutorial, we will use the public [Llama-3.1-8B node](https://github.com/ | Model type | API base URL | Model name | |-----|--------|-----| -| Chat | https://llama.us.gaianet.network/v1/ | llama | -| Embedding | https://llama.us.gaianet.network/v1/ | nomic-embed | +| Chat | https://llama8b.gaia.domains/v1/ | llama | +| Embedding | https://llama8b.gaia.domains/v1/ | nomic-embed | **You will also need to make sure your Docker engine is running.** Because the Agent Zero framework will leverage Docker to execute the generated code. @@ -48,12 +48,12 @@ cp example.env .env You will need to configure the following items: -* `CHAT_MODEL_BASE_URL`: URL for the LLM API base URL. E.g., `https://llama.us.gaianet.network/v1/` +* `CHAT_MODEL_BASE_URL`: URL for the LLM API base URL. E.g., `https://llama8b.gaia.domains/v1/` * `CHAT_MODEL_NAME`: Name of the chat model to be used. E.g., `llama` -* `CHAT_API_KEY`: An API key to access the LLM services. You can enter several random characters here. E.g., `GAIA` -* `EMBEDDING_MODEL_BASE_URL`: URL for the embedding model API base URL. E.g., `https://llama.us.gaianet.network/v1/` +* `CHAT_API_KEY`: An API key to access the LLM services. If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). +* `EMBEDDING_MODEL_BASE_URL`: URL for the embedding model API base URL. E.g., `https://llama8b.gaia.domains/v1/` * `EMBEDDING_MODEL_NAME`: Name of the embedding model name. 
E.g., `nomic-embed` -* `EMBEDDING_API_KEY`: An API key to access the embedding services. You can enter several random characters here. E.g., `GAIA` +* `EMBEDDING_API_KEY`: An API key to access the embedding services. If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). ## Run the agent diff --git a/versioned_docs/version-1.0.0/user-guide/apps/codegpt.md b/versioned_docs/version-1.0.0/user-guide/apps/codegpt.md index 897c69b..15b387e 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/codegpt.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/codegpt.md @@ -17,7 +17,7 @@ In this tutorial, we will use the public CodeStral nodes to power the CodeGPT pl | Model type | API base URL | Model name | |-----|--------|-----| -| Chat | https://codestral.us.gaianet.network/v1/v1/ | codestral | +| Chat | https://coder.gaia.domains/v1/v1/ | codestral | > For some reason, CodeGPT requires the API endpoint to include an extra `v1/` at the end. @@ -42,7 +42,7 @@ Click the CODEGPT on the right sidebar and enter the settings page for CodeGPT. | Attribute | Value | |-----|--------| -| API endponit URL | https://codestral.us.gaianet.network/v1/v1/ | +| API endpoint URL | https://coder.gaia.domains/v1/v1/ | | API Key | gaia | ![](codegpt-03.png) diff --git a/versioned_docs/version-1.0.0/user-guide/apps/continue.md b/versioned_docs/version-1.0.0/user-guide/apps/continue.md index 19d5160..8451e95 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/continue.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/continue.md @@ -24,9 +24,9 @@ In this tutorial, we will use public nodes to power the Continue plugin. 
| Model type | API base URL | Model name | |-----|--------|-----| -| Chat | https://gemma.us.gaianet.network/v1/ | gemma | -| Embedding | https://gemma.us.gaianet.network/v1/ | nomic | -| Autocompletion | https://codestral.us.gaianet.network/v1/ | codestral | +| Chat | https://llama8b.gaia.domains/v1/ | llama | +| Embedding | https://llama8b.gaia.domains/v1/ | nomic | +| Autocompletion | https://coder.gaia.domains/v1/ | codestral | > It is important to note that Continue requires the API endpoint to include a `/` at the end. @@ -50,20 +50,20 @@ chat, code autocomplete and embeddings. { - "model": "gemma", + "model": "llama", "title": "LlamaEdge", - "apiBase": "https://gemma.us.gaianet.network/v1/", + "apiBase": "https://llama8b.gaia.domains/v1/", "provider": "openai" } ], "tabAutocompleteModel": { "title": "Autocomplete", - "apiBase": "https://codestral.us.gaianet.network/v1/", + "apiBase": "https://coder.gaia.domains/v1/", "model": "codestral", "provider": "openai" }, "embeddingsProvider": { "provider": "openai", "model": "nomic-embed", - "apiBase": "https://gemma.us.gaianet.network/v1/" + "apiBase": "https://llama8b.gaia.domains/v1/" }, "customCommands": [ { diff --git a/versioned_docs/version-1.0.0/user-guide/apps/flowiseai-tool-call.md b/versioned_docs/version-1.0.0/user-guide/apps/flowiseai-tool-call.md index a823884..8e1f58a 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/flowiseai-tool-call.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/flowiseai-tool-call.md @@ -35,7 +35,7 @@ Step 2: On the **Chatflow** canvas, add a node called **ChatLocalAI**. Step 3: Configure the **ChatLocalAI** widget to use the Gaia node with tool call support you have created. 
-* Base path: `https://YOUR-NODE-ID.us.gaianet.network/v1` +* Base path: `https://YOUR-NODE-ID.gaia.domains/v1` * Model name: e.g., `Mistral-7B-Instruct-v0.3.Q5_K_M` Step 4: Add a node called **Custom Tool** diff --git a/versioned_docs/version-1.0.0/user-guide/apps/flowiseai.md b/versioned_docs/version-1.0.0/user-guide/apps/flowiseai.md index 4c37059..9fc95ea 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/flowiseai.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/flowiseai.md @@ -17,8 +17,8 @@ In this tutorial, we will use public nodes to power the Continue plugin. | Model type | API base URL | Model name | |-----|--------|-----| -| Chat | https://llama.us.gaianet.network/v1 | llama | -| Embedding | https://llama.us.gaianet.network/v1 | nomic | +| Chat | https://llama8b.gaia.domains/v1 | llama | +| Embedding | https://llama8b.gaia.domains/v1 | nomic | ## Start a FlowiseAI server @@ -55,7 +55,7 @@ You will need to delete the ChatOpenAI component and click the + button to searc Then, you will need to input -* the Gaia node base URL `https://llama.us.gaianet.network/v1` +* the Gaia node base URL `https://llama8b.gaia.domains/v1` * the model name `llama` Next, connect the ChatLocalAI component with the field `Chat model` in the **Conversational Retrieval QA Chain** component. @@ -64,7 +64,7 @@ Next, connect the ChatLocalAI component with the field `Chat model` in the **Con The default template uses the OpenAI Embeddings component to create embeddings for your documents. We need to replace the **OpenAI Embeddings** component with the **LocalAI Embeddings** component. -* Use the Gaia node base URL `https://llama.us.gaianet.network/v1` in the Base Path field. +* Use the Gaia node base URL `https://llama8b.gaia.domains/v1` in the Base Path field. * Input the model name `nomic-embed-text-v1.5.f16` in the Model Name field. Next, connect the **LocalAI Embeddings** component with the field `embedding` in the **In-Memory Vector Store** component. 
diff --git a/versioned_docs/version-1.0.0/user-guide/apps/gpt-planner.md b/versioned_docs/version-1.0.0/user-guide/apps/gpt-planner.md index 959210c..e1b00a3 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/gpt-planner.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/gpt-planner.md @@ -22,7 +22,7 @@ In this tutorial, we will use a public node. | Attribute | Value | |-----|--------| -| API endpoint URL | https://llama.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | ## Run the agent @@ -32,8 +32,9 @@ First, [load the notebook in colab](https://colab.research.google.com/github/msh Edit the code to create an OpenAI client. We will pass in the `base_url` here. ``` -client = openai.OpenAI(base_url="https://llama.us.gaianet.network/v1", api_key=OPENAI_API_KEY) +client = openai.OpenAI(base_url="https://llama8b.gaia.domains/v1", api_key=OPENAI_API_KEY) ``` +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). Next, replace all the `gpt-4o-mini` model name with the `llama` model name in the code. Here is an example. diff --git a/versioned_docs/version-1.0.0/user-guide/apps/intro.md b/versioned_docs/version-1.0.0/user-guide/apps/intro.md index 1a328cd..6523997 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/intro.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/intro.md @@ -22,13 +22,13 @@ Remember to append the `/v1` after the host name. You can find a list of public ``` import openai -client = openai.OpenAI(base_url="https://YOUR-NODE-ID.us.gaianet.network/v1", api_key="") +client = openai.OpenAI(base_url="https://YOUR-NODE-ID.gaia.domains/v1", api_key="") ``` Alternatively, you could set an environment variable at the OS level. 
``` -export OPENAI_API_BASE=https://YOUR-NODE-ID.us.gaianet.network/v1 +export OPENAI_API_BASE=https://YOUR-NODE-ID.gaia.domains/v1 ``` Then, when you make API calls from the `client`, make sure that the `model` is set to the model name @@ -68,14 +68,14 @@ Create an OpenAI client with a custom base URL. Remember to append the `/v1` aft ``` const client = new OpenAI({ - baseURL: 'https://YOUR-NODE-ID.us.gaianet.network/v1', + baseURL: 'https://YOUR-NODE-ID.gaia.domains/v1', apiKey: '' // Leave this empty when using Gaia }); ``` Alternatively, you can set an environment variable using `dotenv` in Node. ``` -process.env.OPENAI_API_BASE = 'https://YOUR-NODE-ID.us.gaianet.network/v1'; +process.env.OPENAI_API_BASE = 'https://YOUR-NODE-ID.gaia.domains/v1'; ``` Then, when you make API calls from the `client`, make sure that the `model` is set to the model name diff --git a/versioned_docs/version-1.0.0/user-guide/apps/llamacoder.md b/versioned_docs/version-1.0.0/user-guide/apps/llamacoder.md index 26e9fac..b780e32 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/llamacoder.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/llamacoder.md @@ -17,7 +17,7 @@ In this tutorial, we will use a public Llama3 node. | Attribute | Value | |-----|--------| -| API endpoint URL | https://llama.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | | API KEY | gaia | @@ -42,12 +42,14 @@ You will need to configure three parameters here. * LLAMAEDGE_MODEL_NAME: Name of the model to be used. * LLAMAEDGE_API_KEY: API key for accessing the LLM services. +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). + For example, you can use the following `.env` setting. 
``` -LLAMAEDGE_BASE_URL=https://llama.us.gaianet.network/v1 +LLAMAEDGE_BASE_URL=https://llama8b.gaia.domains/v1 LLAMAEDGE_MODEL_NAME=llama -LLAMAEDGE_API_KEY=GaiaNet +LLAMAEDGE_API_KEY=Gaia-XYZ ``` Then, we will need to install the required dependencies. diff --git a/versioned_docs/version-1.0.0/user-guide/apps/llamaedgebook.md b/versioned_docs/version-1.0.0/user-guide/apps/llamaedgebook.md index 64ea3e6..3ee5867 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/llamaedgebook.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/llamaedgebook.md @@ -24,15 +24,16 @@ pip install -r requirements.txt Next, let's configure the GaiaNet node as the LLM backend. ``` -export OPENAI_BASE_URL="https://gemma.us.gaianet.network/v1" +export OPENAI_BASE_URL="https://llama8b.gaia.domains/v1" -export OPENAI_MODEL_NAME="gemma" +export OPENAI_MODEL_NAME="llama" export OPENAI_API_KEY="GAIANET" ``` +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). **Hint:** if you don't know the model name of the GaiaNet node, you can retrieve the model information using: ``` -curl -X POST https://0x57b00e4f3d040e28dc8aabdbe201212e5fb60ebc.us.gaianet.network/v1/models +curl https://0x57b00e4f3d040e28dc8aabdbe201212e5fb60ebc.gaia.domains/v1/models ``` Then, use the following command line to run the app. diff --git a/versioned_docs/version-1.0.0/user-guide/apps/llamaparse.md b/versioned_docs/version-1.0.0/user-guide/apps/llamaparse.md index 8fbc480..23b2f02 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/llamaparse.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/llamaparse.md @@ -19,8 +19,10 @@ In this tutorial, we will use public nodes to power the Continue plugin.
| Model type | API base URL | Model name | |-----|--------|-----| -| Chat | https://gemma.us.gaianet.network/v1 | gemma | -| Embedding | https://gemma.us.gaianet.network/v1 | nomic-embed | +| Chat | https://llama8b.gaia.domains/v1 | llama | +| Embedding | https://llama8b.gaia.domains/v1 | nomic | + +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). ## Steps @@ -58,7 +60,7 @@ nohup docker run -d -p 6333:6333 -p 6334:6334 \ Then, we will need to set up the LLM model settings. We can configure the model setting in the `.env` file. ``` -OPENAI_BASE_URL=https://gemma.us.gaianet.network/v1/ +OPENAI_BASE_URL=https://llama8b.gaia.domains/v1/ OPENAI_API_KEY=gaianet -LLAMAEDGE_CHAT_MODEL=gemma +LLAMAEDGE_CHAT_MODEL=llama LLAMAEDGE_EMBEDDING_MODEL=nomic diff --git a/versioned_docs/version-1.0.0/user-guide/apps/llamatutor.md b/versioned_docs/version-1.0.0/user-guide/apps/llamatutor.md index efc979c..7c48650 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/llamatutor.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/llamatutor.md @@ -21,10 +21,12 @@ In this tutorial, we will use a public Llama3 node. | Attribute | Value | |-----|--------| -| API endpoint URL | https://llama.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | | API KEY | gaia | +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md).
+ ## Run the agent First, we will need to get the source code of the forked LlamaTutor diff --git a/versioned_docs/version-1.0.0/user-guide/apps/lobechat.md b/versioned_docs/version-1.0.0/user-guide/apps/lobechat.md index f41c98a..e3a086e 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/lobechat.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/lobechat.md @@ -13,7 +13,7 @@ You can configure [LobeChat](https://lobehub.com/) to use a GaiaNet node as its Go to the [Language Model Setting page](https://chat-preview.lobehub.com/settings/modal?agent=&session=inbox&tab=llm) and choose OpenAI. 1. Enter a random string in the OpenAI API Key field. It does not matter what you enter here since we are going to ignore it on the backend. -2. Enter the GaiaNet node API base URL in the API Proxy Address field. For example, you can use `https://llama.gaianet.network/v1` here. +2. Enter the GaiaNet node API base URL in the API Proxy Address field. For example, you can use `https://llama8b.gaia.domains/v1` here. 3. Enable Use Client-Side Fetching Mode 4. Click on the Get Model List text and it will automatically fetch LLMs available on the GaiaNet node. Choose the chat model `llama` here. 5. Optional: click on the Check button to check the connection status. diff --git a/versioned_docs/version-1.0.0/user-guide/apps/obsidian.md b/versioned_docs/version-1.0.0/user-guide/apps/obsidian.md index 616ca48..4fee7a9 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/obsidian.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/obsidian.md @@ -23,9 +23,11 @@ In this tutorial, we will use a public node. | Attribute | Value | |-----|--------| -| API endpoint URL | https://llama.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). 
+ ## Obsidian-local-gpt Plugin Setup Make sure you have already installed the Obsidian app on your device. @@ -45,7 +47,7 @@ Then click “Enable”. 1. Go to the plugin settings. 2. Select "AI Provider" as "OpenAI compatible server". -3. Set the server URL. Use https://llama.us.gaianet.network/ if you are using a public GaiaNet node. Or, use http://localhost:8080/ if you are running a local GaiaNet node. +3. Set the server URL. Use https://llama8b.gaia.domains/ if you are using a public GaiaNet node. Or, use http://localhost:8080/ if you are running a local GaiaNet node. 4. Configure API key to GaiaNet. ![](obsidian-configure.png) diff --git a/versioned_docs/version-1.0.0/user-guide/apps/openwebui.md b/versioned_docs/version-1.0.0/user-guide/apps/openwebui.md index 83afef5..5bb354b 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/openwebui.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/openwebui.md @@ -18,8 +18,8 @@ In this tutorial, we will use public nodes to power the Continue plugin. | Model type | API base URL | Model name | |-----|--------|-----| -| Chat | https://llama.us.gaianet.network/v1 | llama | -| Embedding | https://llama.us.gaianet.network/v1 | nomic | +| Chat | https://llama8b.gaia.domains/v1 | llama | +| Embedding | https://llama8b.gaia.domains/v1 | nomic | ## Start the Open WebUI on your machine @@ -28,13 +28,15 @@ After successfully starting the GaiaNet node, you can use `docker run` to start ``` docker run -d -p 3000:8080 \ -v open-webui:/app/backend/data \ - -e OPENAI_API_BASE_URL="https://llama.us.gaianet.network/v1" \ + -e OPENAI_API_BASE_URL="https://llama8b.gaia.domains/v1" \ -e OPENAI_API_KEYS="gaianet" \ --name open-webui \ --restart always \ ghcr.io/open-webui/open-webui:main ``` +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). + Then, open `http://localhost:3000` in your browser and you will see the Open WebUI page. 
You can also configure your own node when the webUI is started. diff --git a/versioned_docs/version-1.0.0/user-guide/apps/stockbot.md b/versioned_docs/version-1.0.0/user-guide/apps/stockbot.md index c202d00..5b43e82 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/stockbot.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/stockbot.md @@ -20,10 +20,12 @@ In this tutorial, we will use a public Llama3 node with the function call suppor | Attribute | Value | |-----|--------| -| API endpoint URL | https://llamatool.us.gaianet.network/v1 | +| API endpoint URL | https://llama8b.gaia.domains/v1 | | Model Name | llama | | API KEY | gaia | +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). + ## Run the agent First, we will need to get the source code of the forked Stockbot. @@ -45,6 +47,8 @@ You will need to configure four parameters here. * LLAMAEDGE_MODEL_NAME: Name of the model to be used. * LLAMAEDGE_API_KEY: API key for accessing the LLM services. +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). + Then, we will need to install the required dependencies. ``` diff --git a/versioned_docs/version-1.0.0/user-guide/apps/translation-agent.md b/versioned_docs/version-1.0.0/user-guide/apps/translation-agent.md index 01411d5..421d4a4 100644 --- a/versioned_docs/version-1.0.0/user-guide/apps/translation-agent.md +++ b/versioned_docs/version-1.0.0/user-guide/apps/translation-agent.md @@ -16,10 +16,10 @@ You can run the Translation Agent on top of a public GaiaNet Node as a backend a ## Prepare the environment -Here, we will use the public GaiaNet node with gemma-2-27b model. `https://gemma.us.gaianet.network/`. +Here, we will use the public GaiaNet node with the Llama-3.1-8b model. `https://llama8b.gaia.domains/`. ->As an alternative, you can also start a GaiaNet node locally on your device.
Refer to [this guide](https://github.com/GaiaNet-AI/node-configs/tree/main/gemma-2-27b-it). +>As an alternative, you can also start a GaiaNet node locally on your device. Refer to [this guide](https://github.com/GaiaNet-AI/node-configs/tree/main/llama-3.1-8b-instruct). To get started, clone the Translation Agent that supports open source LLMs. ``` cd translation-agent git checkout use_llamaedge ``` -Set environment variables and install necessary Python packages if needed. Replace the OPENAI_BASE_URL with `https://gemma.us.gaianet.network/` +Set environment variables and install necessary Python packages if needed. Replace the OPENAI_BASE_URL with `https://llama8b.gaia.domains/v1` ``` -export OPENAI_BASE_URL="https://gemma.us.gaianet.network/v" +export OPENAI_BASE_URL="https://llama8b.gaia.domains/v1" export PYTHONPATH=${PWD}/src export OPENAI_API_KEY="GAIANET" pip install python-dotenv pip install openai tiktoken icecream langchain_text_splitters ``` +> If you're using a Domain service, not your own node, you will [need to get an API key from Gaia](../getting-started/authentication.md). ## Prepare your translation task diff --git a/versioned_docs/version-1.0.0/user-guide/mynode.md b/versioned_docs/version-1.0.0/user-guide/mynode.md index 6932955..7e97d0e 100644 --- a/versioned_docs/version-1.0.0/user-guide/mynode.md +++ b/versioned_docs/version-1.0.0/user-guide/mynode.md @@ -9,7 +9,7 @@ web-based chatbot UI and an OpenAI compatible web service. Just load the node's Let's say the URL is as follows. ``` -https://0x1234...xyz.gaianet.network/ +https://0x1234...xyz.gaia.domains/ ``` > Please refer to the [agent apps](apps/intro) section to see how to use the GaiaNet node API in your favorite agent frameworks or apps.
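With the migration done, every tutorial ends up pointing its agent at an OpenAI-compatible `https://<subdomain>.gaia.domains/v1` base URL, plus an API key when a hosted Domain service (rather than a self-hosted node) is used. A minimal sketch of that common setup — the `gaia_client_config` helper and the `GAIA_API_KEY` variable name are illustrative, not from these docs:

```python
import os

def gaia_client_config(subdomain: str) -> dict:
    """Assemble OpenAI-compatible client settings for a Gaia domain.

    Illustrative only: reads the (assumed) GAIA_API_KEY environment
    variable; self-hosted nodes accept an empty key.
    """
    return {
        "base_url": f"https://{subdomain}.gaia.domains/v1",
        "api_key": os.environ.get("GAIA_API_KEY", ""),
    }

# The dict unpacks straight into the OpenAI SDK:
#   client = openai.OpenAI(**gaia_client_config("llama8b"))
```

Centralizing the base-URL construction this way keeps the `/v1` suffix in exactly one place, which is the detail most of the tutorials above warn about.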