TXT records created for aliases in AWS Route 53 have wrong record type prefix #2903
Also facing this issue. Thank you @seh for the report.

What I see is two guard records being produced: one with the same name as the A record, and one with a "cname-" prefix.

That's odd. Does your A record's name happen to begin with "a-", inducing false aliasing?
Version / args:

```yaml
containers:
- args:
  - --log-level=info
  - --namespace=mis-feature
  - --publish-host-ip
  - --aws-batch-change-size=20
  - --domain-filter=mis.example.com
  - --interval=2m
  - --policy=upsert-only
  - --provider=aws
  - --source=ingress
  - --source=service
  - --registry=txt
  - --txt-owner-id=use-feature
```

Redacted Kubernetes resources:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: unified-theatre
  annotations:
    external-dns.alpha.kubernetes.io/alias: "true"
    external-dns.alpha.kubernetes.io/hostname: us.example.com
    external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only
    external-dns.alpha.kubernetes.io/aws-weight: "255"
    external-dns.alpha.kubernetes.io/set-identifier: us-east-1
spec:
  type: ExternalName
  externalName: use.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: unified-region
  annotations:
    external-dns.alpha.kubernetes.io/alias: "true"
    external-dns.alpha.kubernetes.io/hostname: use.example.com
    external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only
```

Redacted Route 53 records:
I'm also seeing this. An additional problem is that when the Kubernetes resource is deleted, the TXT record with the "cname-" prefix is not deleted from Route 53. We have a zone with a large churn of resources, and this caused us to reach the limit on the number of records in the zone.
I have the same problem. TXT records with the "cname-" prefix are not deleted, and they cause an issue when I try to recreate Kubernetes resources.
We're seeing similar, but subtly different, behavior: external-dns tries to delete
+1, same problem in AWS. ExternalDNS created a lot of TXT entries in Route 53 that start with cname-{{name}}.local.
Having recently come across this issue, it appears part of the problem with the creation of erroneous
+1, experiencing the same issue: when creating A (alias) records, the TXT record uses the incorrect prefix ("cname-" instead of "a-").
Same here, and highly annoying. I'm having to delete Route 53 records on a daily basis for dozens of clusters in order for the controller to properly create all the relevant records and go healthy with "all records are up to date".
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

/remove-lifecycle stale
External-dns represents ALIAS records of type A to the planner as CNAME records. Problems with deletion would be separate bugs.
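A minimal Go sketch of the mismatch described in this thread. All names here are illustrative stand-ins, not ExternalDNS's actual API: the point is that the TXT ownership name is derived from the endpoint's *declared* record type, while the AWS provider later rewrites a CNAME that targets an ELB as an alias A record.

```go
package main

import (
	"fmt"
	"strings"
)

// Endpoint is a simplified stand-in for ExternalDNS's endpoint.Endpoint.
type Endpoint struct {
	DNSName    string
	Target     string
	RecordType string // "A", "CNAME", ...
}

// providerRecordType mimics the AWS provider's late decision: a CNAME whose
// target is an AWS load balancer is written to Route 53 as an alias A record.
func providerRecordType(e Endpoint) string {
	if e.RecordType == "CNAME" && strings.HasSuffix(e.Target, ".elb.amazonaws.com") {
		return "A"
	}
	return e.RecordType
}

// txtOwnershipName mimics the TXT registry: the prefix comes from the
// endpoint's declared type, not from the type the provider actually writes.
func txtOwnershipName(e Endpoint) string {
	return strings.ToLower(e.RecordType) + "-" + e.DNSName
}

func main() {
	e := Endpoint{
		DNSName:    "api.example.com",
		Target:     "abc123.us-east-1.elb.amazonaws.com",
		RecordType: "CNAME",
	}
	fmt.Println(providerRecordType(e)) // "A": what Route 53 ends up holding
	fmt.Println(txtOwnershipName(e))   // "cname-api.example.com": the stale prefix
}
```

Running this prints `A` followed by `cname-api.example.com`, which is exactly the primary-record/TXT-prefix disagreement the reports above describe.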
So, will there be any fix for this behaviour? I need to pin the image tag (0.11.1-debian-10-r27) because of this.
Any updates?
Experiencing this issue as well.
Default chart config (no overrides) looks like this (snippet) when describing the pod:

```yaml
spec:
  containers:
  - args:
    - --log-level=info
    - --log-format=text
    - --interval=1m
    - --source=service
    - --source=ingress
    - --policy=upsert-only
    - --registry=txt
    - --provider=aws
    env:
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: regional
    - name: AWS_DEFAULT_REGION
      value: us-east-1
    - name: AWS_REGION
      value: us-east-1
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::xxxxxxxxxxxx:role/xxxxx
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    image: registry.k8s.io/external-dns/external-dns:v0.15.0
    imagePullPolicy: IfNotPresent
...
```

All records are created and managed successfully through automation, but each Ingress spec like this:

```yaml
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /v1/...
        pathType: Prefix
        backend:
          service:
            name: microservice-api-svc
            port:
              number: 1234
```

results in the creation of these three records in Route 53:
I think this issue is the root cause of #3977 (at least in some cases). As an example, consider the three records from the previous comment.
Let's assume you have the domain filter set to
That is at least the behaviour we observe in our clusters. In my opinion, the
This is frustrating. After more than 12 hours of effort, I discovered that this issue has been open for ages. I tried hard to use an open-source, ready-to-use solution, but this doesn't allow a seamless experience. Now I have to write my own container to manage it smoothly. Time wasted.
What happened:
Using the "aws" provider to create DNS records for hostnames that point at AWS ELBs (such as for endpoints extracted from a Kubernetes Service or Ingress): since the hostnames don't parse as IP addresses, ExternalDNS considers the endpoints to warrant a record of type CNAME. As the target hostname discovered from the Ingress's status sits within a canonical hosted zone, ExternalDNS decides that the record should be an alias to the target ELB's DNS record. Later, when composing the changes to send to the Route 53 service, ExternalDNS changes its mind and decides to use an A record instead. At that point, ExternalDNS leaves the `endpoint.Endpoint`'s "RecordType" field's value as the original `endpoint.RecordTypeCNAME` ("CNAME").

That sets us up to create an A record for an `endpoint.Endpoint` that still represents a CNAME record. ExternalDNS then goes on to add the TXT ownership records to the change batch, and consults the `endpoint.Endpoint`'s "RecordType" field, finding it to be "CNAME". This leads to a TXT record prefix of "cname-" even though it should probably be "a-" instead, if the goal is to have the TXT records indicate which of several primary records they describe.

What you expected to happen:
ExternalDNS will create a TXT record with a prefix indicating the same primary record type that the TXT record describes. In this case, since the primary record type created in Route 53 turns out to be A, I expect the TXT record's prefix to be "a-" instead of "cname-."
How to reproduce it (as minimally and precisely as possible):
In a Kubernetes cluster running within AWS EC2, create a Service of type "LoadBalancer," and allow ExternalDNS to discover the endpoint and its target by using either the "service" or "ingress" source.
Inspect the Route 53 service to see that ExternalDNS creates a primary record of type A, as an alias to the target AWS-hosted load balancer. Note too that ExternalDNS creates a TXT record with a prefix of "cname-" instead of "a-."
Anything else we need to know?:
In order to align the record type mentioned by these primary and TXT records, we need to make the TXT registry portion of ExternalDNS aware of the late decision that the AWS provider makes to use an A record instead. I am not sure whether other providers make similar overriding decisions when composing changes.
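The alignment described above could be sketched as follows. This is a hypothetical illustration, not the actual ExternalDNS code: a `resolveRecordType` hook stands in for "ask the provider which type it will really write," and the registry derives the TXT prefix from that resolved type instead of the stale declared one.

```go
package main

import (
	"fmt"
	"strings"
)

// Endpoint is a simplified stand-in for ExternalDNS's endpoint.Endpoint.
type Endpoint struct {
	DNSName    string
	Target     string
	RecordType string
}

// resolveRecordType is a hypothetical hook representing the provider's late
// decision: the AWS provider turns a CNAME targeting an ELB into an alias A
// record, so that is the type the TXT registry should see.
func resolveRecordType(e Endpoint) string {
	if e.RecordType == "CNAME" && strings.HasSuffix(e.Target, ".elb.amazonaws.com") {
		return "A"
	}
	return e.RecordType
}

// txtOwnershipName builds the TXT name from the resolved type, so the prefix
// matches the primary record Route 53 actually ends up holding.
func txtOwnershipName(e Endpoint) string {
	return strings.ToLower(resolveRecordType(e)) + "-" + e.DNSName
}

func main() {
	e := Endpoint{
		DNSName:    "api.example.com",
		Target:     "abc123.us-east-1.elb.amazonaws.com",
		RecordType: "CNAME",
	}
	fmt.Println(txtOwnershipName(e)) // "a-api.example.com": prefix matches the alias A record
}
```

With the resolved type in play, the same endpoint that previously produced "cname-api.example.com" yields "a-api.example.com", matching what the reporter expects; whether other providers need similar hooks is, as noted above, an open question.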
Environment: