Support controlling default routes via ansible-init/metadata #602

Draft

sjpb wants to merge 16 commits into feature/k3s-monitoring from feat/cloudinit-gateways-v3
Conversation

@sjpb (Collaborator) commented Mar 5, 2025

No description provided.

bertiethorpe and others added 8 commits March 5, 2025 15:19
…d is false (#601)

* fix security_group_id logic

* toggle secgroups without touching port security

* document no_security_groups flag
* add file deletion to cleanup play

* bump CI image

* add back in deleted OOD file and fix paths in /etc

* bump CI image
@sjpb force-pushed the feat/cloudinit-gateways-v3 branch from f785a44 to 6eac0e2 on March 5, 2025 17:05

sjpb commented Mar 6, 2025

Tested that pods work ok:

# daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test-node-daemonset
  labels:
    app: test-node
spec:
  selector:
    matchLabels:
      app: test-node
  template:
    metadata:
      labels:
        app: test-node
    spec:
      containers:
      - name: busybox
        image: busybox:latest
        command: ["sleep", "3600"]
        resources:
          requests:
            cpu: "100m"
            memory: "64Mi"
          limits:
            cpu: "200m"
            memory: "128Mi"

Then re-ran what didn't work when I tried this before:

kubectl apply -f daemonset.yaml
kubectl get pods -o wide
kubectl exec -it $POD_NAME -- sh
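(For reference, `$POD_NAME` above can be filled in from the DaemonSet's `app: test-node` label; this is a hypothetical helper, not part of the original test:)

```shell
# Pick the first pod carrying the DaemonSet's app=test-node label
# (adjust the label selector if the manifest changes)
POD_NAME=$(kubectl get pods -l app=test-node -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD_NAME" -- sh
```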


sjpb commented Mar 6, 2025

However, monitoring is failing with:

TASK [kube_prometheus_stack : Install kube-prometheus-stack on target Kubernetes cluster] ***************************************************************************************************************************************************
Thursday 06 March 2025  12:51:45 +0000 (0:00:06.238)       0:14:59.197 ******** 
fatal: [RL9-control]: FAILED! => {
    "changed": false,
    "command": "/bin/helm --version=59.1.0 --repo=https://prometheus-community.github.io/helm-charts upgrade -i --reset-values --wait --timeout 5m -f=/tmp/tmp1q6qtlkc.yml kube-prometheus-stack kube-prometheus-stack"
}

STDOUT:

Release "kube-prometheus-stack" does not exist. Installing it now.

STDERR:

Error: timed out waiting for the condition

MSG:

Failure when executing Helm command. Exited 1.
stdout: Release "kube-prometheus-stack" does not exist. Installing it now.

stderr: Error: timed out waiting for the condition
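A timeout like this usually means some chart pods never became Ready within the 5m `--wait` window. A first diagnostic step (assumption: kubectl is configured on the control node) is to check pod status and recent events:

```shell
# See which kube-prometheus-stack pods are stuck; the helm command above
# passes no -n flag, so check the default namespace first, then all
kubectl get pods -o wide
kubectl get events --sort-by=.lastTimestamp | tail -20
```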


sjpb commented Mar 6, 2025

Trying manually:

[root@RL9-control ~]# /bin/helm --version=59.1.0 --repo=https://prometheus-community.github.io/helm-charts upgrade -i --reset-values --wait --timeout 5m -f=/tmp/tmp1q6qtlkc.yml kube-prometheus-stack kube-prometheus-stack
Release "kube-prometheus-stack" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "kube-prometheus-stack-grafana-clusterrole" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "default": current value is "monitoring-system"
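The error says the existing Grafana ClusterRole is annotated as owned by a release in the `monitoring-system` namespace, while this install ran without `-n` and so targeted `default`. A likely fix (assumption: the release is meant to live in `monitoring-system`, as the annotation suggests) is to pass the namespace explicitly:

```shell
# Re-run the install in the namespace the existing resources belong to
# (monitoring-system, per the annotation in the error above)
/bin/helm upgrade -i kube-prometheus-stack kube-prometheus-stack \
  --repo https://prometheus-community.github.io/helm-charts \
  --version 59.1.0 --reset-values --wait --timeout 5m \
  --namespace monitoring-system \
  -f /tmp/tmp1q6qtlkc.yml
```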

This works, so going through Squid is OK:

[root@RL9-control ~]# curl -L https://prometheus-community.github.io/helm-charts
