# spans.yaml
groups:
  - id: trace.gen_ai.client.common_request_attributes
    type: attribute_group
    stability: development
    brief: >
      Describes a GenAI operation span.
    attributes:
      - ref: gen_ai.request.model
        requirement_level:
          conditionally_required: If available.
        note: >
          The name of the GenAI model a request is being made to. If the model is supplied by a vendor,
          then the value must be the exact name of the model requested. If the model is a fine-tuned
          custom model, the value should have a more specific name than the base model that's been fine-tuned.
      - ref: gen_ai.operation.name
        requirement_level: required
      - ref: gen_ai.request.max_tokens
        requirement_level: recommended
      - ref: gen_ai.request.choice.count
        requirement_level:
          conditionally_required: if set in the request and not equal to 1
      - ref: gen_ai.request.temperature
        requirement_level: recommended
      - ref: gen_ai.request.top_p
        requirement_level: recommended
      - ref: gen_ai.request.stop_sequences
        requirement_level: recommended
      - ref: gen_ai.request.frequency_penalty
        requirement_level: recommended
      - ref: gen_ai.request.presence_penalty
        requirement_level: recommended
      - ref: gen_ai.request.seed
        requirement_level:
          conditionally_required: if applicable and the request includes a seed
      - ref: gen_ai.request.encoding_formats
        requirement_level: recommended
      - ref: server.address
        brief: GenAI server address.
        requirement_level: recommended
      - ref: server.port
        brief: GenAI server port.
        requirement_level:
          conditionally_required: If `server.address` is set.
      - ref: error.type
        requirement_level:
          conditionally_required: "if the operation ended in an error"
        note: |
          The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
          the canonical name of the exception that occurred, or another low-cardinality error identifier.
          Instrumentations SHOULD document the list of errors they report.
  - id: trace.gen_ai.client.common_attributes
    type: attribute_group
    stability: development
    brief: >
      Describes a GenAI operation span.
    extends: trace.gen_ai.client.common_request_attributes
    attributes:
      - ref: gen_ai.output.type
        requirement_level:
          conditionally_required: when applicable and the request includes an output format.
      - ref: gen_ai.response.id
        requirement_level: recommended
      - ref: gen_ai.response.model
        requirement_level: recommended
        note: >
          If available, the name of the GenAI model that provided the response. If the model is supplied by a vendor,
          then the value must be the exact name of the model actually used. If the model is a
          fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
      - ref: gen_ai.response.finish_reasons
        requirement_level: recommended
      - ref: gen_ai.usage.input_tokens
        requirement_level: recommended
      - ref: gen_ai.usage.output_tokens
        requirement_level: recommended
  - id: span.gen_ai.client
    type: span
    stability: development
    span_kind: client
    brief: >
      Describes a GenAI operation span.
    extends: trace.gen_ai.client.common_attributes
    attributes:
      - ref: gen_ai.system
        # TODO: Not adding to trace.gen_ai.client.common_attributes because of https://github.com/open-telemetry/build-tools/issues/192
        requirement_level: required
      - ref: gen_ai.request.top_k
        requirement_level: recommended
    events:
      - gen_ai.content.prompt
      - gen_ai.content.completion
  - id: attributes.gen_ai.openai_based
    extends: trace.gen_ai.client.common_attributes
    type: attribute_group
    stability: development
    brief: >
      Describes attributes that are common to OpenAI-based Generative AI services.
    attributes:
      - ref: gen_ai.output.type
        note: |
          This attribute SHOULD be set to the output type requested by the client:

          - `json` for structured outputs with defined or undefined schema
          - `image` for image output
          - `speech` for speech output
          - `text` for plain text output

          The attribute specifies the output modality and not the actual output format.
          For example, if an image is requested, the actual output could be a
          URL pointing to an image file.

          Additional output format details may be recorded in the future in the
          `gen_ai.output.{type}.*` attributes.
  - id: span.gen_ai.openai.client
    extends: attributes.gen_ai.openai_based
    stability: development
    span_kind: client
    type: span
    brief: >
      Describes an OpenAI operation span.
    attributes:
      - ref: gen_ai.request.model
        requirement_level: required
      - ref: gen_ai.openai.request.service_tier
        requirement_level:
          conditionally_required: if the request includes a service_tier and the value is not 'auto'
      - ref: gen_ai.openai.response.service_tier
        requirement_level:
          conditionally_required: if the response was received and includes a service_tier
      - ref: gen_ai.openai.response.system_fingerprint
        requirement_level: recommended
      - ref: gen_ai.usage.input_tokens
        brief: The number of tokens used in the prompt sent to OpenAI.
      - ref: gen_ai.usage.output_tokens
        brief: The number of tokens used in the completions from OpenAI.
  - id: span.gen_ai.az.ai.inference.client
    extends: attributes.gen_ai.openai_based
    stability: development
    type: span
    span_kind: client
    brief: >
      Describes Azure AI Inference span attributes.
    attributes:
      - ref: az.namespace
        note: >
          When the `az.namespace` attribute is populated, it MUST be set to `Microsoft.CognitiveServices` for all
          operations performed by Azure AI Inference clients.
        examples: ["Microsoft.CognitiveServices"]
      - ref: gen_ai.usage.input_tokens
        brief: >
          The number of prompt tokens as reported in the usage `prompt_tokens` property of the response.
      - ref: gen_ai.usage.output_tokens
        brief: >
          The number of completion tokens as reported in the usage `completion_tokens` property of the response.
      - ref: server.port
        requirement_level:
          conditionally_required: If not the default (443).
  - id: span.gen_ai.create_agent.client
    type: span
    stability: development
    span_kind: client
    brief: >
      Describes GenAI agent creation and is usually applicable when working
      with remote agent services.
    note: |
      The `gen_ai.operation.name` SHOULD be `create_agent`.
      The **span name** SHOULD be `create_agent {gen_ai.agent.name}`.
      Semantic conventions for individual GenAI systems and frameworks MAY specify a different span name format.
    extends: trace.gen_ai.client.common_request_attributes
    attributes:
      - ref: gen_ai.system
        requirement_level: required
      - ref: gen_ai.agent.id
        requirement_level:
          conditionally_required: if applicable.
      - ref: gen_ai.agent.name
        requirement_level:
          conditionally_required: If provided by the application.
      - ref: gen_ai.agent.description
        requirement_level:
          conditionally_required: If provided by the application.
      - ref: gen_ai.request.model
        requirement_level:
          conditionally_required: If provided by the application.
        note: >
          The name of the GenAI model a request is being made to. If the model is supplied by a vendor,
          then the value must be the exact name of the model requested. If the model is a fine-tuned
          custom model, the value should have a more specific name than the base model that's been fine-tuned.
      - ref: gen_ai.request.temperature
        requirement_level:
          conditionally_required: If provided by the application.
      - ref: gen_ai.request.top_p
        requirement_level:
          conditionally_required: If provided by the application.
  - id: span.gen_ai.invoke_agent.client
    type: span
    stability: experimental
    span_kind: client
    brief: >
      Describes GenAI agent invocation and is usually applicable when working
      with remote agent services.
    note: |
      The `gen_ai.operation.name` SHOULD be `invoke_agent`.
      The **span name** SHOULD be `invoke_agent {gen_ai.agent.name}` if `gen_ai.agent.name` is readily available.
      When `gen_ai.agent.name` is not available, it SHOULD be `invoke_agent`.
      Semantic conventions for individual GenAI systems and frameworks MAY specify a different span name format.
    extends: trace.gen_ai.client.common_attributes
    attributes:
      - ref: gen_ai.system
        requirement_level: required
      - ref: gen_ai.agent.id
        requirement_level:
          conditionally_required: if applicable.
      - ref: gen_ai.agent.name
        requirement_level:
          conditionally_required: when available
      - ref: gen_ai.agent.description
        requirement_level:
          conditionally_required: when available
  - id: span.gen_ai.execute_tool.internal
    type: span
    stability: development
    span_kind: internal
    brief: Describes a tool execution span.
    note: |
      `gen_ai.operation.name` SHOULD be `execute_tool`.
      Span name SHOULD be `execute_tool {gen_ai.tool.name}`.
      GenAI instrumentations that are able to instrument the tool execution call SHOULD do so.
      However, it's common for tools to be executed by the application code. In that case,
      application developers are encouraged to follow these semantic conventions for tools
      invoked by the application code.
    attributes:
      - ref: gen_ai.tool.name
        requirement_level: recommended
      - ref: gen_ai.tool.call.id
        requirement_level:
          recommended: if available
      - ref: error.type
        requirement_level:
          conditionally_required: "if the operation ended in an error"
        note: |
          The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
          the canonical name of the exception that occurred, or another low-cardinality error identifier.
          Instrumentations SHOULD document the list of errors they report.
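
# For orientation, the sketch below shows how a span described by this registry
# might look once emitted. It is a hypothetical illustration only (not part of
# the registry, and not consumed by the semconv tooling): the model names,
# token counts, and server address are made-up example values, and the set of
# attributes follows the `span.gen_ai.client` definition above.
#
#   Span name: "chat gpt-4"
#   Span kind: CLIENT
#   Attributes:
#     gen_ai.system: "openai"                 # required
#     gen_ai.operation.name: "chat"           # required
#     gen_ai.request.model: "gpt-4"           # conditionally required: If available.
#     gen_ai.request.max_tokens: 200          # recommended
#     gen_ai.request.temperature: 1.0         # recommended
#     server.address: "api.openai.com"        # recommended
#     gen_ai.response.model: "gpt-4-0613"     # recommended
#     gen_ai.usage.input_tokens: 47           # recommended
#     gen_ai.usage.output_tokens: 12          # recommended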