Collector extension intermittently logs "HTTP Server closed:" #1696

Open
bhaskarbanerjee opened this issue Feb 7, 2025 · 1 comment
Labels: bug (Something isn't working)

@bhaskarbanerjee

Describe the bug
Collector extension intermittently logs "HTTP Server closed:"

Steps to reproduce

  1. Run the collector extension (version 0.10.0, as noted below).
  2. Configure the pipeline with the decouple processor and without the batch processor.
  3. Send traffic to the extension.
  4. The extension intermittently logs "HTTP Server closed:", as in the excerpt below:
{"level":"info","ts":1736808998.0036397,"caller":"service@v0.102.1/service.go:180","msg":"Starting otelcol-lambda...","Version":"v0.91.0","NumCPU":2}
2025-01-13T17:56:38.003-05:00
{"level":"info","ts":1736808998.003882,"caller":"extensions/extensions.go:34","msg":"Starting extensions..."}
2025-01-13T17:56:38.004-05:00
{"level":"info","ts":1736808998.004209,"caller":"otlpreceiver@v0.102.1/otlp.go:102","msg":"Starting GRPC server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4317"}
2025-01-13T17:56:38.017-05:00
{"level":"info","ts":1736808998.0170355,"caller":"otlpreceiver@v0.102.1/otlp.go:152","msg":"Starting HTTP server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4318"}
2025-01-13T17:56:38.017-05:00
{"level":"info","ts":1736808998.017302,"caller":"service@v0.102.1/service.go:206","msg":"Everything is ready. Begin running and processing data."}
2025-01-13T17:56:38.017-05:00
{"level":"warn","ts":1736808998.0173926,"caller":"localhostgate/featuregate.go:63","msg":"The default endpoints for all servers in components will change to use localhost instead of 0.0.0.0 in a future version. Use the feature gate to preview the new default.","feature gate ID":"component.UseLocalHostAsDefaultHost"}
2025-01-13T17:56:38.271-05:00
{"level":"info","ts":1736808998.2714095,"caller":"samplego:198","msg":"Running as a lambda"}
2025-01-13T17:56:38.272-05:00
EXTENSION Name: collector State: Ready Events: [INVOKE, SHUTDOWN]
2025-01-13T17:56:38.273-05:00
START RequestId: c5972d10-b467-5eba-ae6c-72ef2831dec4 Version: $LATEST
2025-01-13T17:56:38.273-05:00
{"level":"info","ts":1736808998.27388,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:82","msg":"started forwarding data","kind":"processor","name":"decouple","pipeline":"traces"}
2025-01-13T17:56:38.274-05:00
{"level":"info","ts":1736808998.2738986,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:82","msg":"started forwarding data","kind":"processor","name":"decouple","pipeline":"metrics"}
2025-01-13T17:56:38.274-05:00
{"level":"info","ts":1736809000.4192262,"caller":"client.go:138","msg":"Successfully made POST and got response"}
2025-01-13T17:56:40.447-05:00
{"level":"info","ts":1736809000.4471953,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:92","msg":"stopped forwarding data","kind":"processor","name":"decouple","pipeline":"metrics"}
2025-01-13T17:56:40.447-05:00
{"level":"info","ts":1736809000.447334,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:92","msg":"stopped forwarding data","kind":"processor","name":"decouple","pipeline":"traces"}
2025-01-13T17:56:40.447-05:00
END RequestId:abcdefgh
2025-01-13T17:56:40.447-05:00
REPORT RequestId: abcdefgh Duration: 2174.50 ms Billed Duration: 2769 ms Memory Size: 128 MB Max Memory Used: 83 MB Init Duration: 593.74 ms
2025-01-13T18:03:14.297-05:00
{"level":"info","ts":1736809394.2975132,"logger":"lifecycle.manager","msg":"Received SHUTDOWN event"}
2025-01-13T18:03:14.297-05:00
{"level":"info","ts":1736809394.2977474,"logger":"telemetryAPI.Listener","msg":"HTTP Server closed:","error":"http: Server closed"}
2025-01-13T18:03:14.298-05:00
{"level":"info","ts":1736809394.2979894,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:82","msg":"started forwarding data","kind":"processor","name":"decouple","pipeline":"traces"}
2025-01-13T18:03:14.298-05:00
{"level":"info","ts":1736809394.2980387,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:82","msg":"started forwarding data","kind":"processor","name":"decouple","pipeline":"metrics"}
2025-01-13T18:03:14.317-05:00
{"level":"info","ts":1736809394.2981458,"caller":"otelcol@v0.102.1/collector.go:331","msg":"Received shutdown request"}
2025-01-13T18:03:14.317-05:00
{"level":"info","ts":1736809394.3169343,"caller":"service@v0.102.1/service.go:243","msg":"Starting shutdown..."}
2025-01-13T18:03:14.317-05:00
{"level":"info","ts":1736809394.3172495,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:92","msg":"stopped forwarding data","kind":"processor","name":"decouple","pipeline":"traces"}
2025-01-13T18:03:14.317-05:00
{"level":"info","ts":1736809394.317292,"caller":"decoupleprocessor@v0.0.0-00010101000000-000000000000/processor.go:92","msg":"stopped forwarding data","kind":"processor","name":"decouple","pipeline":"metrics"}
2025-01-13T18:03:14.317-05:00
{"level":"info","ts":1736809394.3173075,"caller":"extensions/extensions.go:59","msg":"Stopping extensions..."}
2025-01-13T18:03:14.317-05:00
{"level":"info","ts":1736809394.3173738,"caller":"service@v0.102.1/service.go:257","msg":"Shutdown complete."}

What did you expect to see?
Is the collector really shut down? If so, I would have expected to see log lines like the following again before the next forward from the decouple processor:

{"level":"info","ts":1736808998.003882,"caller":"extensions/extensions.go:34","msg":"Starting extensions..."}
2025-01-13T17:56:38.004-05:00
{"level":"info","ts":1736808998.004209,"caller":"otlpreceiver@v0.102.1/otlp.go:102","msg":"Starting GRPC server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4317"}
2025-01-13T17:56:38.017-05:00
{"level":"info","ts":1736808998.0170355,"caller":"otlpreceiver@v0.102.1/otlp.go:152","msg":"Starting HTTP server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4318"}

What did you see instead?
I did not see any message indicating that the collector was starting again after the "HTTP Server closed:" message.

What version of the collector/language SDK did you use?
Version: 0.10.0

What language layer did you use?
This is happening irrespective of the language layer, and it is intermittent.

Additional context
It is confusing whether this is just an informational log message or an actual problem.

bhaskarbanerjee added the bug label on Feb 7, 2025
@serkan-ozal
Contributor

Hi @bhaskarbanerjee,

As far as I can see, there is about 6.5 minutes of inactivity before the collector shutdown, which suggests that the idle Lambda instance was being reclaimed by AWS Lambda. Since the OTel Lambda collector extension registers for the SHUTDOWN event so that it can flush any buffered data and then close itself, you should (most probably) see those log lines as part of a normal shutdown.
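
For illustration, here is a minimal Go sketch of that lifecycle (the extension name, listener address, and logging details are assumptions, not the actual opentelemetry-lambda source). It shows why the "HTTP Server closed:" line is expected: once the extension receives SHUTDOWN from the Lambda Extensions API and shuts down its internal HTTP listener, http.ListenAndServe returns http.ErrServerClosed, whose message is exactly "http: Server closed", and the extension simply logs that value.

// Hypothetical sketch of a Lambda extension that registers for SHUTDOWN,
// runs an internal HTTP listener, and closes it on shutdown. Not the real
// otel-lambda code; names and addresses are assumed.
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	api := os.Getenv("AWS_LAMBDA_RUNTIME_API")

	// Register for INVOKE and SHUTDOWN events via the Lambda Extensions API.
	body, _ := json.Marshal(map[string][]string{"events": {"INVOKE", "SHUTDOWN"}})
	req, _ := http.NewRequest(http.MethodPost,
		fmt.Sprintf("http://%s/2020-01-01/extension/register", api), bytes.NewReader(body))
	req.Header.Set("Lambda-Extension-Name", "collector") // assumed extension name
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	extensionID := resp.Header.Get("Lambda-Extension-Identifier")
	resp.Body.Close()

	// Internal HTTP server, standing in for the Telemetry API listener in the logs above.
	srv := &http.Server{Addr: "localhost:4325"} // assumed listener address
	serverClosed := make(chan struct{})
	go func() {
		defer close(serverClosed)
		// After a clean Shutdown, ListenAndServe returns http.ErrServerClosed,
		// which prints as "http: Server closed" - the value seen in the issue's log line.
		if err := srv.ListenAndServe(); errors.Is(err, http.ErrServerClosed) {
			log.Println("HTTP Server closed:", err)
		} else if err != nil {
			log.Println("HTTP server failed:", err)
		}
	}()

	// Block on the next event until INVOKE or SHUTDOWN arrives.
	for {
		next, _ := http.NewRequest(http.MethodGet,
			fmt.Sprintf("http://%s/2020-01-01/extension/event/next", api), nil)
		next.Header.Set("Lambda-Extension-Identifier", extensionID)
		evResp, err := http.DefaultClient.Do(next)
		if err != nil {
			log.Fatal(err)
		}
		var ev struct {
			EventType string `json:"eventType"`
		}
		json.NewDecoder(evResp.Body).Decode(&ev)
		evResp.Body.Close()

		if ev.EventType == "SHUTDOWN" {
			// Flush buffered telemetry, then close the listener; this is what
			// produces the "Received SHUTDOWN event" and "HTTP Server closed:" lines.
			srv.Shutdown(context.Background())
			<-serverClosed
			return
		}
	}
}

If the real extension behaves along these lines, the log line is informational: it is the listener reporting that it was closed as part of SHUTDOWN handling, not a failure.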
