From 61c282ff63abcab1aaafc43676be69a9b18af14a Mon Sep 17 00:00:00 2001
From: SangeetaMishr <143380171+SangeetaMishr@users.noreply.github.com>
Date: Tue, 1 Oct 2024 12:44:39 +0530
Subject: [PATCH 1/2] Create Structured responses in GPT webhook functions.md

added new documentation on Structured responses in GPT webhook functions
---
 ...ured responses in GPT webhook functions.md | 58 +++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 docs/4. Integrations/Structured responses in GPT webhook functions.md

diff --git a/docs/4. Integrations/Structured responses in GPT webhook functions.md b/docs/4. Integrations/Structured responses in GPT webhook functions.md
new file mode 100644
index 000000000..b78a2632c
--- /dev/null
+++ b/docs/4. Integrations/Structured responses in GPT webhook functions.md
@@ -0,0 +1,58 @@
+> ### **5 minute read                                                                                                                         `Advanced`**
+
+Reliably get JSON responses from the parse_via_chat_gpt and parse_via_gpt_vision webhook functions.
+
+## Introduction
+
+If the webhook call type is POST/GET, we expect the response from external APIs to be JSON.
+We don't do this for the webhook type FUNCTION, since functions (especially LLMs) are not guaranteed to return valid JSON every time.
+Because of this we store the function's response as a string: even when a function returns proper JSON, we keep it as a string and never try to parse it.
+
+Recently OpenAI added a feature called [structured outputs](https://openai.com/index/introducing-structured-outputs-in-the-api/) that lets us include the expected response schema in the request itself, so we can get responses in a reliable JSON format.
+We leverage this and parse the response: if it is valid JSON, we add its key-value pairs to the results variable.
+
+### Using structured responses in webhooks
+
+We have to add an optional param called **response_format** to the existing **parse_via_gpt_vision**/**parse_via_chat_gpt** webhook body.
+
+The value of **response_format** is the same as what OpenAI expects in its API; you can read about it [here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
+
+In short, if the value is **{"type": "json_object"}** and the prompt contains the keyword **"json"**, the response will always be JSON. If the value is **{"type": "json_schema"}**, we also have to pass the schema as described in the [OpenAI docs](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
+
+In both cases we get valid JSON from OpenAI, and its key-value pairs are merged into the results variable.
+
+## Examples
+
+### **json_object**
+
+*(screenshot: webhook body with response_format)*
+
+And the corresponding webhook response we get will be
+
+*(screenshots: webhook response)*
+
+**WARNING:** If we use **json_object** as the response_format, the response will always be JSON, but there is no guarantee that the keys will stay the same across runs. For example, the key “Volunteer ID” in one run can become “VolunteerID” in the next, unless stated explicitly in the prompt.
+
+### **json_schema**
+
+For the same prompt above, the response_format would be something like
+
+*(screenshots: response_format with the schema, and the resulting webhook response)*
+
+The advantage of **json_schema** over **json_object** is that the JSON keys are always deterministic, exactly as specified in the schema.
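+
+For reference, here is a minimal sketch of what a **json_schema** value for **response_format** could look like, following the shape from the OpenAI docs linked above. The schema name **volunteer_details** and the **Volunteer ID**/**Volunteer Name** fields are illustrative assumptions, not the exact schema from the screenshots:
+
+```json
+{
+  "type": "json_schema",
+  "json_schema": {
+    "name": "volunteer_details",
+    "strict": true,
+    "schema": {
+      "type": "object",
+      "properties": {
+        "Volunteer ID": { "type": "string" },
+        "Volunteer Name": { "type": "string" }
+      },
+      "required": ["Volunteer ID", "Volunteer Name"],
+      "additionalProperties": false
+    }
+  }
+}
+```
+
+With **"strict": true**, the model is constrained to return exactly these keys, which is what makes the output deterministic across runs.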
+
+
+## Resources
+
+https://openai.com/index/introducing-structured-outputs-in-the-api/
+https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format

From 1d1b2a8a1f2376127f42f0f2ebb69e6a8fed237f Mon Sep 17 00:00:00 2001
From: SangeetaMishr <143380171+SangeetaMishr@users.noreply.github.com>
Date: Tue, 1 Oct 2024 14:00:01 +0530
Subject: [PATCH 2/2] Update Structured responses in GPT webhook functions.md

---
 ...tured responses in GPT webhook functions.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/4. Integrations/Structured responses in GPT webhook functions.md b/docs/4. Integrations/Structured responses in GPT webhook functions.md
index b78a2632c..48e3415c3 100644
--- a/docs/4. Integrations/Structured responses in GPT webhook functions.md
+++ b/docs/4. Integrations/Structured responses in GPT webhook functions.md
@@ -8,7 +8,7 @@ Reliably get JSON responses from the parse_via_chat_gpt and parse_via_gpt_vision
 ## Introduction
 
 If the webhook call type is POST/GET, we expect the response from external APIs to be JSON.
-We don't do this for the webhook type FUNCTION, since functions (especially LLMs) are not guaranteed to return valid JSON every time.
+We don't do this for the webhook type `FUNCTION`, since functions (especially LLMs) are not guaranteed to return valid JSON every time.
 Because of this we store the function's response as a string: even when a function returns proper JSON, we keep it as a string and never try to parse it.
 
 Recently OpenAI added a feature called [structured outputs](https://openai.com/index/introducing-structured-outputs-in-the-api/) that lets us include the expected response schema in the request itself, so we can get responses in a reliable JSON format.
@@ -17,17 +17,17 @@ We leverage this and parse the response: if it is valid JSON, we add its key-val
 
 ### Using structured responses in webhooks
 
-We have to add an optional param called **response_format** to the existing **parse_via_gpt_vision**/**parse_via_chat_gpt** webhook body.
+We have to add an optional param called `response_format` to the existing `parse_via_gpt_vision`/`parse_via_chat_gpt` webhook body.
 
-The value of **response_format** is the same as what OpenAI expects in its API; you can read about it [here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
+The value of `response_format` is the same as what OpenAI expects in its API; you can read about it [here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
 
-In short, if the value is **{"type": "json_object"}** and the prompt contains the keyword **"json"**, the response will always be JSON. If the value is **{"type": "json_schema"}**, we also have to pass the schema as described in the [OpenAI docs](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
+In short, if the value is `{"type": "json_object"}` and the prompt contains the keyword `json`, the response will always be JSON. If the value is `{"type": "json_schema"}`, we also have to pass the schema as described in the [OpenAI docs](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
 
 In both cases we get valid JSON from OpenAI, and its key-value pairs are merged into the results variable.
 ## Examples
 
-### **json_object**
+### json_object
 
 *(screenshot: webhook body with response_format)*
 
 And the corresponding webhook response we get will be
 
 *(screenshots: webhook response)*
 
-**WARNING:** If we use **json_object** as the response_format, the response will always be JSON, but there is no guarantee that the keys will stay the same across runs. For example, the key “Volunteer ID” in one run can become “VolunteerID” in the next, unless stated explicitly in the prompt.
+`WARNING:` If we use `json_object` as the `response_format`, the response will always be JSON, but there is no guarantee that the keys will stay the same across runs. For example, the key “Volunteer ID” in one run can become “VolunteerID” in the next, unless stated explicitly in the prompt.
 
-### **json_schema**
+### json_schema
 
-For the same prompt above, the response_format would be something like
+For the same prompt above, the `response_format` would be something like
 
 *(screenshots: response_format with the schema, and the resulting webhook response)*
 
-The advantage of **json_schema** over **json_object** is that the JSON keys are always deterministic, exactly as specified in the schema.
+The advantage of `json_schema` over `json_object` is that the JSON keys are always deterministic, exactly as specified in the schema.
 
 ## Resources