
Commit c1746c8 — Merge branch 'main' into set-resource-status

2 parents: eaa1ce3 + ad990d6

15 files changed: +445 −22 lines

examples/self-diagnostics/Cargo.toml (+19)

```toml
[package]
name = "self-diagnostics"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
publish = false

[dependencies]
opentelemetry = { path = "../../opentelemetry" }
opentelemetry_sdk = { path = "../../opentelemetry-sdk", features = ["rt-tokio"] }
opentelemetry-stdout = { path = "../../opentelemetry-stdout" }
opentelemetry-appender-tracing = { path = "../../opentelemetry-appender-tracing" }
tokio = { workspace = true, features = ["full"] }
tracing = { workspace = true, features = ["std"] }
tracing-core = { workspace = true }
tracing-subscriber = { version = "0.3.18", features = ["env-filter", "registry", "std"] }
opentelemetry-otlp = { path = "../../opentelemetry-otlp", features = ["http-proto", "reqwest-client", "logs"] }
once_cell = { version = "1.19.0" }
ctrlc = "3.4"
```

examples/self-diagnostics/Dockerfile (+6)

```dockerfile
# Rust 1.56+ is required for edition 2021.
FROM rust:1.75
COPY . /usr/src/self-diagnostics/
WORKDIR /usr/src/self-diagnostics/
RUN cargo build --release
RUN cargo install --path .
# The binary name matches the package name in Cargo.toml.
CMD ["/usr/local/cargo/bin/self-diagnostics"]
```

examples/self-diagnostics/README.md (+93)

# Basic OpenTelemetry metrics example with custom error handler

This example shows how to set up a custom error handler for self-diagnostics.

## Custom Error Handling

A custom error handler is set up to capture and record errors using the `tracing` crate's `error!` macro. These errors are then exported to a collector by the `opentelemetry-appender-tracing` crate, which uses the OTLP log exporter over HTTP/protobuf. As a result, any errors generated by the configured OTLP metrics pipeline are funneled through this custom error handler for recording and export.
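A minimal sketch of the registration (the full handler in `src/main.rs` also de-duplicates repeated errors):

```rust
use opentelemetry::global::{set_error_handler, Error as OtelError};
use tracing::error;

// Route every internal OpenTelemetry error through `tracing`, where the
// opentelemetry-appender-tracing bridge picks it up and exports it over OTLP.
fn custom_error_handler(err: OtelError) {
    error!("OpenTelemetry error occurred: {}", err);
}

fn main() {
    if let Err(err) = set_error_handler(custom_error_handler) {
        eprintln!("Failed to set custom error handler: {}", err);
    }
}
```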

## Filtering Logs from External Dependencies of the OTLP Exporter

The example configures a tracing filter that restricts logs from the external crates used by the OTLP exporter (`hyper`, `tonic`, and `reqwest`) to the `error` level. This prevents an infinite loop in which logs emitted by these crates are picked up by the tracing subscriber, exported, and thereby generate further logs.
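The corresponding filter setup from `src/main.rs` (the `build_filter` wrapper here is illustrative):

```rust
use tracing_subscriber::EnvFilter;

fn build_filter() -> EnvFilter {
    // Allow `info` and above by default, but restrict the crates used
    // internally by the OTLP exporter to `error` so their events are not
    // looped back through the exporter.
    EnvFilter::new("info")
        .add_directive("hyper=error".parse().unwrap())
        .add_directive("tonic=error".parse().unwrap())
        .add_directive("reqwest=error".parse().unwrap())
}
```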

## Ensure That Internally Generated Errors Are Logged Only Once

The custom error handler tracks the errors it has already seen in a `HashSet`, so the same error is never logged more than once. This is particularly useful when continuous error logging would otherwise occur, such as when the OpenTelemetry collector is not running.
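The de-duplication relies on `HashSet::insert`, which returns `false` when the value is already present (simplified from `src/main.rs`):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

struct ErrorState {
    seen_errors: Mutex<HashSet<String>>,
}

impl ErrorState {
    /// Returns true only the first time a given error message is seen,
    /// so the caller logs each distinct error exactly once.
    fn mark_as_seen(&self, err: &str) -> bool {
        self.seen_errors.lock().unwrap().insert(err.to_string())
    }
}
```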

## Usage

### `docker-compose`

By default the example runs against the `otel/opentelemetry-collector:latest` image, using `reqwest` as the HTTP client and HTTP as the transport:

```shell
docker-compose up
```

In another terminal, run the application with `cargo run`.

The docker-compose terminal will display the logs, traces, and metrics.

Press Ctrl+C to stop the collector, and then tear it down:

```shell
docker-compose down
```

### Manual

If you don't want to use `docker-compose`, you can manually run the `otel/opentelemetry-collector` container and inspect the logs to see the traces being transferred.

On Unix-based systems use:

```shell
# From the current directory, run `opentelemetry-collector`
docker run --rm -it -p 4318:4318 -v $(pwd):/cfg otel/opentelemetry-collector:latest --config=/cfg/otel-collector-config.yaml
```

On Windows use:

```shell
# From the current directory, run `opentelemetry-collector`
docker run --rm -it -p 4318:4318 -v "%cd%":/cfg otel/opentelemetry-collector:latest --config=/cfg/otel-collector-config.yaml
```

Run the app, which exports logs, metrics, and traces via OTLP to the collector:

```shell
cargo run
```

### Output

- If the collector's Docker instance is running, the error below should be logged in the container's output, with no logs from the `hyper`, `reqwest`, or `tonic` crates:

```
otel-collector-1  | 2024-06-05T17:09:46.926Z info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
otel-collector-1  | 2024-06-05T17:09:46.926Z info ResourceLog #0
otel-collector-1  | Resource SchemaURL:
otel-collector-1  | Resource attributes:
otel-collector-1  |      -> telemetry.sdk.name: Str(opentelemetry)
otel-collector-1  |      -> telemetry.sdk.version: Str(0.23.0)
otel-collector-1  |      -> telemetry.sdk.language: Str(rust)
otel-collector-1  |      -> service.name: Str(unknown_service)
otel-collector-1  | ScopeLogs #0
otel-collector-1  | ScopeLogs SchemaURL:
otel-collector-1  | InstrumentationScope opentelemetry-appender-tracing 0.4.0
otel-collector-1  | LogRecord #0
otel-collector-1  | ObservedTimestamp: 2024-06-05 17:09:45.931951161 +0000 UTC
otel-collector-1  | Timestamp: 1970-01-01 00:00:00 +0000 UTC
otel-collector-1  | SeverityText: ERROR
otel-collector-1  | SeverityNumber: Error(17)
otel-collector-1  | Body: Str(OpenTelemetry metrics error occurred: Metrics error: Warning: Maximum data points for metric stream exceeded. Entry added to overflow. Subsequent overflows to same metric until next collect will not be logged.)
otel-collector-1  | Attributes:
otel-collector-1  |      -> name: Str(event examples/self-diagnostics/src/main.rs:42)
otel-collector-1  | Trace ID:
otel-collector-1  | Span ID:
otel-collector-1  | Flags: 0
otel-collector-1  | {"kind": "exporter", "data_type": "logs", "name": "logging"}
```

- If the collector's Docker instance is down, the SDK keeps retrying the metrics upload at regular intervals. To avoid a logging loop, internal errors such as "Connection refused" are logged only once.

examples/self-diagnostics/docker-compose.yaml (+11)

```yaml
version: "2"
services:

  # Collector
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4318:4318" # OTLP HTTP receiver
```

examples/self-diagnostics/otel-collector-config.yaml (+27)

```yaml
# This is a configuration file for the OpenTelemetry Collector intended to be
# used in conjunction with the opentelemetry-otlp example.
#
# For more information about the OpenTelemetry Collector see:
# https://github.com/open-telemetry/opentelemetry-collector
#
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
```

examples/self-diagnostics/src/main.rs (+145)

```rust
use opentelemetry::global::{self, set_error_handler, Error as OtelError};
use opentelemetry::KeyValue;
use opentelemetry_appender_tracing::layer;
use opentelemetry_otlp::WithExportConfig;
use tracing_subscriber::prelude::*;
use tracing_subscriber::EnvFilter;

use std::error::Error;
use tracing::error;

use once_cell::sync::Lazy;
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

use ctrlc;
use std::sync::mpsc::channel;

struct ErrorState {
    seen_errors: Mutex<HashSet<String>>,
}

impl ErrorState {
    fn new() -> Self {
        ErrorState {
            seen_errors: Mutex::new(HashSet::new()),
        }
    }

    // Returns true only if the error has not been seen before.
    fn mark_as_seen(&self, err: &OtelError) -> bool {
        let mut seen_errors = self.seen_errors.lock().unwrap();
        seen_errors.insert(err.to_string())
    }
}

static GLOBAL_ERROR_STATE: Lazy<Arc<ErrorState>> = Lazy::new(|| Arc::new(ErrorState::new()));

fn custom_error_handler(err: OtelError) {
    if GLOBAL_ERROR_STATE.mark_as_seen(&err) {
        // Log errors that have not already been seen.
        match err {
            OtelError::Metric(err) => error!("OpenTelemetry metrics error occurred: {}", err),
            OtelError::Trace(err) => error!("OpenTelemetry trace error occurred: {}", err),
            OtelError::Log(err) => error!("OpenTelemetry log error occurred: {}", err),
            OtelError::Propagation(err) => {
                error!("OpenTelemetry propagation error occurred: {}", err)
            }
            OtelError::Other(err_msg) => error!("OpenTelemetry error occurred: {}", err_msg),
            _ => error!("OpenTelemetry error occurred: {:?}", err),
        }
    }
}

fn init_logger_provider() -> opentelemetry_sdk::logs::LoggerProvider {
    let provider = opentelemetry_otlp::new_pipeline()
        .logging()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .http()
                .with_endpoint("http://localhost:4318/v1/logs"),
        )
        .install_batch(opentelemetry_sdk::runtime::Tokio)
        .unwrap();

    // Add a tracing filter to filter events from crates used by opentelemetry-otlp.
    // The filter levels are set as follows:
    // - Allow `info` level and above by default.
    // - Restrict `hyper`, `tonic`, and `reqwest` to `error` level logs only.
    // This ensures events generated from these crates within the OTLP exporter are not looped back,
    // thus preventing infinite event generation.
    // Note: This will also drop events from these crates used outside the OTLP exporter.
    // For more details, see: https://github.com/open-telemetry/opentelemetry-rust/issues/761
    let filter = EnvFilter::new("info")
        .add_directive("hyper=error".parse().unwrap())
        .add_directive("tonic=error".parse().unwrap())
        .add_directive("reqwest=error".parse().unwrap());
    let cloned_provider = provider.clone();
    let layer = layer::OpenTelemetryTracingBridge::new(&cloned_provider);
    tracing_subscriber::registry()
        .with(filter)
        .with(layer)
        .init();
    provider
}

fn init_meter_provider() -> opentelemetry_sdk::metrics::SdkMeterProvider {
    let provider = opentelemetry_otlp::new_pipeline()
        .metrics(opentelemetry_sdk::runtime::Tokio)
        .with_period(std::time::Duration::from_secs(1))
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .http()
                .with_endpoint("http://localhost:4318/v1/metrics"),
        )
        .build()
        .unwrap();
    let cloned_provider = provider.clone();
    global::set_meter_provider(cloned_provider);
    provider
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
    // Set the custom error handler.
    if let Err(err) = set_error_handler(custom_error_handler) {
        eprintln!("Failed to set custom error handler: {}", err);
    }

    let logger_provider = init_logger_provider();

    // Initialize the MeterProvider with the OTLP exporter.
    let meter_provider = init_meter_provider();

    // Create a meter from the above MeterProvider.
    let meter = global::meter("example");
    // Create a Counter instrument.
    let counter = meter.u64_counter("my_counter").init();

    // Record measurements with unique key-value pairs to exceed the default
    // cardinality limit of 2000 and trigger the overflow error message.
    for i in 0..3000 {
        counter.add(
            10,
            &[KeyValue::new(
                format!("mykey{}", i),
                format!("myvalue{}", i),
            )],
        );
    }

    let (tx, rx) = channel();

    ctrlc::set_handler(move || tx.send(()).expect("Could not send signal on channel."))
        .expect("Error setting Ctrl-C handler");

    println!("Press Ctrl-C to stop...");
    rx.recv().expect("Could not receive from channel.");
    println!("Got Ctrl-C, shutting down and exiting.");

    // The MeterProvider is configured to export metrics every second; shutting it
    // down here flushes the metrics immediately instead of waiting for the
    // 1-second interval.
    meter_provider.shutdown()?;
    let _ = logger_provider.shutdown();
    Ok(())
}
```

opentelemetry-otlp/examples/basic-otlp-http/Cargo.toml (+11)

```diff
@@ -5,14 +5,25 @@ edition = "2021"
 license = "Apache-2.0"
 publish = false
 
+[features]
+default = ["reqwest"]
+reqwest = ["opentelemetry-otlp/reqwest-client"]
+hyper = ["dep:async-trait", "dep:http", "dep:hyper", "dep:opentelemetry-http", "dep:bytes"]
+
+
 [dependencies]
 once_cell = { workspace = true }
 opentelemetry = { path = "../../../opentelemetry" }
 opentelemetry_sdk = { path = "../../../opentelemetry-sdk", features = ["rt-tokio", "metrics", "logs"] }
+opentelemetry-http = { path = "../../../opentelemetry-http", optional = true }
 opentelemetry-otlp = { path = "../..", features = ["http-proto", "reqwest-client", "logs"] }
 opentelemetry-appender-tracing = { path = "../../../opentelemetry-appender-tracing", default-features = false}
 opentelemetry-semantic-conventions = { path = "../../../opentelemetry-semantic-conventions" }
 
+async-trait = { workspace = true, optional = true }
+bytes = { workspace = true, optional = true }
+http = { workspace = true, optional = true }
+hyper = { workspace = true, features = ["client"], optional = true }
 tokio = { workspace = true, features = ["full"] }
 tracing = { workspace = true, features = ["std"]}
 tracing-core = { workspace = true }
```

opentelemetry-otlp/examples/basic-otlp-http/README.md (+8)

````diff
@@ -52,6 +52,14 @@ Run the app which exports logs, metrics and traces via OTLP to the collector
 cargo run
 ```
 
+
+By default the app will use a `reqwest` client to send requests. A hyper 0.14 client can be used instead by enabling the `hyper` feature:
+
+```shell
+cargo run --no-default-features --features=hyper
+```
+
+
 ## View results
 
 You should be able to see something similar below with different time and ID in the same console that docker runs.
````

opentelemetry-otlp/examples/basic-otlp-http/src/hyperclient.rs (+45)

```rust
use async_trait::async_trait;
use bytes::Bytes;
use http::{Request, Response};
use hyper::{
    client::{connect::Connect, HttpConnector},
    Body, Client,
};
use opentelemetry_http::{HttpClient, HttpError, ResponseExt};

pub struct HyperClient<C> {
    inner: hyper::Client<C>,
}

impl Default for HyperClient<HttpConnector> {
    fn default() -> Self {
        Self {
            inner: Client::new(),
        }
    }
}

impl<C> std::fmt::Debug for HyperClient<C> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("HyperClient")
            .field("inner", &self.inner)
            .finish()
    }
}

#[async_trait]
impl<C: Connect + Clone + Send + Sync + 'static> HttpClient for HyperClient<C> {
    async fn send(&self, request: Request<Vec<u8>>) -> Result<Response<Bytes>, HttpError> {
        let request = request.map(Body::from);

        let (parts, body) = self
            .inner
            .request(request)
            .await?
            .error_for_status()?
            .into_parts();
        let body = hyper::body::to_bytes(body).await?;

        Ok(Response::from_parts(parts, body))
    }
}
```
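
For context, a sketch of how a custom client like this is typically handed to the exporter (not part of this diff; assumes `HttpExporterBuilder::with_http_client` from opentelemetry-otlp's HTTP feature set):

```rust
use opentelemetry_otlp::WithExportConfig;

// Hypothetical wiring: build an OTLP HTTP exporter that sends through the
// custom hyper-based client instead of the default reqwest client.
fn exporter_with_hyper_client() -> opentelemetry_otlp::HttpExporterBuilder {
    opentelemetry_otlp::new_exporter()
        .http()
        .with_http_client(HyperClient::default())
        .with_endpoint("http://localhost:4318")
}
```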
