# Basic OpenTelemetry metrics example with custom error handler:

This example shows how to set up a custom error handler for self-diagnostics.

## Custom Error Handling:
A custom error handler is set up to capture and record errors using the `tracing` crate's `error!` macro. These errors are then exported to a collector using the `opentelemetry-appender-tracing` crate, which utilizes the OTLP log exporter over `HTTP/protobuf`. As a result, any errors generated by the configured OTLP metrics pipeline are funneled through this custom error handler for proper recording and export.

The example also demonstrates how to self-diagnose OpenTelemetry by enabling its internal logs. OpenTelemetry crates publish internal logs when the "internal-logs" feature is enabled, which it is by default. Internal logs are published as `tracing` events, and hence a `tracing` subscriber must be configured; without one, the logs are simply discarded.

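In code, the wiring typically looks like the minimal sketch below. It assumes a logger provider has already been built from an OTLP log exporter configured for HTTP/protobuf; the provider type name (`SdkLoggerProvider` here) and the exporter builder APIs vary between `opentelemetry_sdk` and `opentelemetry-otlp` releases, so treat this as an illustration rather than the example's exact code.

```rust
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_sdk::logs::SdkLoggerProvider;
use tracing_subscriber::prelude::*;

// Bridge `tracing` events (the custom handler's `error!` calls as well as
// OpenTelemetry's own internal logs) into the OTLP logs pipeline.
fn init_self_diagnostics(logger_provider: &SdkLoggerProvider) {
    let otel_log_layer = OpenTelemetryTracingBridge::new(logger_provider);
    tracing_subscriber::registry().with(otel_log_layer).init();
}
```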
## Filtering logs from external dependencies of OTLP Exporter:
The example configures a tracing `filter` to restrict logs from external crates (`hyper`, `tonic`, and `reqwest`) used by the OTLP Exporter to the `error` level. This helps prevent an infinite loop of log generation when these crates emit logs that are picked up by the tracing subscriber. This is only a workaround until https://github.com/open-telemetry/opentelemetry-rust/issues/761 is resolved.
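
A sketch of such a filter, built with `tracing-subscriber`'s `EnvFilter`, is shown below; the base `info` level and the exact directives are assumptions, not necessarily this example's configuration.

```rust
use tracing_subscriber::EnvFilter;

fn exporter_dependency_filter() -> EnvFilter {
    // Keep the application's own logs at `info`, but only let `error`-level
    // events through from the HTTP/gRPC crates the OTLP exporter depends on.
    EnvFilter::new("info")
        .add_directive("hyper=error".parse().unwrap())
        .add_directive("tonic=error".parse().unwrap())
        .add_directive("reqwest=error".parse().unwrap())
}
```

The returned filter would then be attached to the subscriber (or to a specific layer) via `with_filter`.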
## Ensure that the internally generated errors are logged only once:
By using a hashset to track seen errors, the custom error handler ensures that the same error is not logged multiple times. This is particularly useful for handling scenarios where continuous error logging might occur, such as when the OpenTelemetry collector is not running.
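
In outline, the deduplication can look like the sketch below. The names (`SEEN_ERRORS`, `log_error_once`) are illustrative rather than this example's actual code, and how the handler gets registered with the SDK depends on the `opentelemetry` version in use.

```rust
use std::collections::HashSet;
use std::sync::Mutex;
use tracing::error;

// Error messages that have already been reported once.
static SEEN_ERRORS: Mutex<Option<HashSet<String>>> = Mutex::new(None);

fn log_error_once(err: impl std::fmt::Display) {
    let msg = err.to_string();
    let mut guard = SEEN_ERRORS.lock().unwrap();
    let seen = guard.get_or_insert_with(HashSet::new);
    // Only the first occurrence of a given message is logged, so a missing
    // collector does not flood the logs on every export interval.
    if seen.insert(msg.clone()) {
        error!("OpenTelemetry error occurred: {msg}");
    }
}
```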
## Usage
### `docker-compose`

By default this runs against the `otel/opentelemetry-collector:latest` image, uses `reqwest-client` as the HTTP client, and uses HTTP as the transport.

```shell
docker-compose up
```

In another terminal, run the application with `cargo run`.
The docker-compose terminal will display logs, traces, and metrics.
Press Ctrl+C to stop the collector, and then tear it down:
```shell
docker-compose down
```

### Manual

If you don't want to use `docker-compose`, you can manually run the `otel/opentelemetry-collector` container and inspect the logs to see traces being transferred.

On Unix-based systems use:
```shell
# From the current directory, run `opentelemetry-collector`
docker run --rm -it -p 4318:4318 -v $(pwd):/cfg otel/opentelemetry-collector:latest --config=/cfg/otel-collector-config.yaml
```

On Windows use:
```shell
# From the current directory, run `opentelemetry-collector`
docker run --rm -it -p 4318:4318 -v "%cd%":/cfg otel/opentelemetry-collector:latest --config=/cfg/otel-collector-config.yaml
```

Run the app, which exports logs, metrics, and traces via OTLP to the collector:
```shell
cargo run
```

### Output:
- If the Docker instance for the collector is running, the error below should be logged into the container, and there won't be any logs from the `hyper`, `reqwest`, and `tonic` crates.

  ```
  otel-collector-1 | ObservedTimestamp: 2024-06-05 17:09:45.931951161 +0000 UTC
  otel-collector-1 | Timestamp: 1970-01-01 00:00:00 +0000 UTC
  otel-collector-1 | SeverityText: ERROR
  otel-collector-1 | SeverityNumber: Error(17)
  otel-collector-1 | Body: Str(OpenTelemetry metrics error occurred: Metrics error: Warning: Maximum data points for metric stream exceeded. Entry added to overflow. Subsequent overflows to same metric until next collect will not be logged.)
  ```

- If the collector's Docker instance is down, the SDK will keep trying to export metrics at regular intervals. To avoid a logging loop, internal errors such as 'Connection refused' are logged only once.
## Filtering logs to be sent to OpenTelemetry itself

If you use [OpenTelemetry Tracing Appender](../../opentelemetry-appender-tracing/README.md) to send `tracing` logs to OpenTelemetry, then enabling OpenTelemetry internal logs can also cause infinite, recursive logging. You can filter out all OpenTelemetry internal logs from being sent to the [OpenTelemetry Tracing Appender](../../opentelemetry-appender-tracing/README.md) using a filter, like the one sketched below.
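
For instance, the appender layer can be wrapped in a `filter_fn` that drops events originating from the OpenTelemetry crates themselves. This is a sketch that assumes internal logs use targets starting with `opentelemetry`; it is not necessarily this example's exact code.

```rust
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_sdk::logs::SdkLoggerProvider;
use tracing_subscriber::{filter, prelude::*};

fn init_with_recursion_guard(logger_provider: &SdkLoggerProvider) {
    // Drop events emitted by the OpenTelemetry crates before they reach the
    // appender layer; otherwise they would be re-exported and recurse.
    let otel_layer = OpenTelemetryTracingBridge::new(logger_provider)
        .with_filter(filter::filter_fn(|metadata| {
            !metadata.target().starts_with("opentelemetry")
        }));

    tracing_subscriber::registry().with(otel_layer).init();
}
```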