I'm encountering an error when running inhooks on Kubernetes.
The error suggests a timeout when resolving the Redis hostname, but oddly, everything seems to work fine when I exec into the container and test.
Error Log
2025-02-11T23:55:18.251Z ERROR supervisor/processing.go:19 failed to move processing to ready {"flowID": "prometheus-watchdog", "sinkID": "prometheus-sink", "error": "failed to lrangeall: failed to lrange. queueKey: inhooks:inhooksdb:f:prometheus-watchdog:s:prometheus-sink:q:processing: dial tcp: lookup redis.inhooks.svc.cluster.local.: i/o timeout", "errorVerbose": "dial tcp: lookup redis.inhooks.svc.cluster.local.: i/o timeout\nfailed to lrange. queueKey: inhooks:inhooksdb:f:prometheus-watchdog:s:prometheus-sink:q:processing\ngithub.com/didil/inhooks/pkg/services.(*redisStore).LRangeAll\n\t/home/runner/work/inhooks/inhooks/pkg/services/redis_store.go:234\ngithub.com/didil/inhooks/pkg/services.(*processingRecoveryService).MoveProcessingToReady\n\t/home/runner/work/inhooks/inhooks/pkg/services/processing_recovery_service.go:45\ngithub.com/didil/inhooks/pkg/supervisor.(*Supervisor).HandleProcessingQueue\n\t/home/runner/work/inhooks/inhooks/pkg/supervisor/processing.go:17\ngithub.com/didil/inhooks/pkg/supervisor.(*Supervisor).Start.func1\n\t/home/runner/work/inhooks/inhooks/pkg/supervisor/supervisor.go:116\nruntime.goexit\n\t/opt/hostedtoolcache/go/1.23.5/x64/src/runtime/asm_amd64.s:1700\nfailed to lrangeall\ngithub.com/didil/inhooks/pkg/services.(*processingRecoveryService).MoveProcessingToReady\n\t/home/runner/work/inhooks/inhooks/pkg/services/processing_recovery_service.go:47\ngithub.com/didil/inhooks/pkg/supervisor.(*Supervisor).HandleProcessingQueue\n\t/home/runner/work/inhooks/inhooks/pkg/supervisor/processing.go:17\ngithub.com/didil/inhooks/pkg/supervisor.(*Supervisor).Start.func1\n\t/home/runner/work/inhooks/inhooks/pkg/supervisor/supervisor.go:116\nruntime.goexit\n\t/opt/hostedtoolcache/go/1.23.5/x64/src/runtime/asm_amd64.s:1700"}
github.com/didil/inhooks/pkg/supervisor.(*Supervisor).HandleProcessingQueue
/home/runner/work/inhooks/inhooks/pkg/supervisor/processing.go:19
github.com/didil/inhooks/pkg/supervisor.(*Supervisor).Start.func1
/home/runner/work/inhooks/inhooks/pkg/supervisor/supervisor.go:116
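The failing step in the log is just a hostname lookup going through Go's resolver. A minimal standalone sketch that exercises the same lookup (hostname copied from the error message; the 5-second timeout is an arbitrary choice, not necessarily what inhooks uses) looks like this:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Perform the same kind of lookup the error reports, via Go's default resolver.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := net.DefaultResolver.LookupHost(ctx, "redis.inhooks.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}
```

Running something like this in the same pod would show whether Go's own resolver behaves differently from the command-line checks described under Debugging Steps Taken.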
Configuration
Flow definition:
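Roughly along these lines (a simplified sketch assuming the flows/source/sinks layout from the inhooks README; the flow ID, source slug, and sink ID match the error log, while the source ID and sink URL are illustrative placeholders):

```yaml
flows:
  - id: prometheus-watchdog
    source:
      id: prometheus-source
      slug: prometheus-watchdog
      type: http
    sinks:
      - id: prometheus-sink
        type: http
        url: https://example.com/placeholder-target
```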
Environment variables:
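Schematically (variable name per my reading of the inhooks README, value pointing at the in-cluster service; an assumption rather than an exact copy of the deployed manifest):

```sh
REDIS_URL=redis://redis.inhooks.svc.cluster.local:6379
```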
Debugging Steps Taken
Connecting with nc from inside the container works fine. Strangely, the webhook itself works too - I can send a POST request to /api/v1/ingest/prometheus-watchdog, and it goes through.
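For reference, a check of this sort from inside the pod succeeds (pod name is a placeholder, and the exact flags may differ from what was actually run):

```sh
# Resolve and connect to the Valkey/Redis service from inside the inhooks pod
kubectl exec -it inhooks-pod -- nc -vz redis.inhooks.svc.cluster.local 6379
```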
I am using Valkey instead of Redis, but I don't think this should make a difference?
@mhamzahkhan thanks for reporting. I'm not familiar with Valkey and have only tested against Redis. This seems like a connectivity issue that would be difficult to reproduce outside of your specific Kubernetes setup.