⚠️ Ignore this section if you are going the Calico Cloud route; refer to this section for the steps instead.
- Cluster-1 will act as the mgmt. cluster and we will enable MCM with cluster-2 being added as a managed cluster.
- Apply the operator and prometheus manifests from the repo:

  kubectl create -f https://downloads.tigera.io/ee/v3.18.0-2.0/manifests/tigera-operator.yaml
  kubectl create -f https://downloads.tigera.io/ee/v3.18.0-2.0/manifests/tigera-prometheus-operator.yaml
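To sanity-check before moving on, you can confirm the operator rolled out, assuming the default names created by the operator manifest above (deployment `tigera-operator` in the `tigera-operator` namespace):

```bash
# Wait for the tigera-operator deployment to finish rolling out
kubectl rollout status -n tigera-operator deployment tigera-operator
```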
- Get a pull secret: The official build uses the quay.io images, so you'll need a pull secret. This doc has options on how to generate one as per the official POC/testing process.

  ⚠️ Please make sure to delete pull secrets after testing, or if they are not being used by an active/paying customer or an active POC.
- Install the pull secret:

  kubectl create secret generic tigera-pull-secret --type=kubernetes.io/dockerconfigjson -n tigera-operator --from-file=.dockerconfigjson=<path/to/pull/secret>
- For the Prometheus operator, create the pull secret in the tigera-prometheus namespace and then patch the deployment:

  kubectl create secret generic tigera-pull-secret --type=kubernetes.io/dockerconfigjson -n tigera-prometheus --from-file=.dockerconfigjson=<path/to/pull/secret>
  kubectl patch deployment -n tigera-prometheus calico-prometheus-operator -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name": "tigera-pull-secret"}]}}}}'
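A quick way to confirm the patch took effect (this simply reads back the field set by the patch above):

```bash
# Should print the tigera-pull-secret reference from the pod template
kubectl get deployment calico-prometheus-operator -n tigera-prometheus \
  -o jsonpath='{.spec.template.spec.imagePullSecrets}'
```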
- Modify the `mgmtcluster-custom-resources-example.yaml` file as needed and apply it:

  kubectl create -f manifests/mgmtcluster-custom-resources-example.yaml
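If you just want to see the shape of what that file configures, below is a minimal sketch of an `Installation` resource for Calico Enterprise on EKS. The pod CIDR and encapsulation are assumptions for illustration, and the actual example file in the repo also defines the other components needed for a mgmt. cluster, so defer to the repo file itself:

```yaml
# Sketch only (not the repo file): a minimal Installation for Calico Enterprise on EKS.
# The CIDR below is an assumed example; it must not overlap with the other cluster's pod CIDR.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  variant: TigeraSecureEnterprise
  kubernetesProvider: EKS
  imagePullSecrets:
    - name: tigera-pull-secret
  cni:
    type: Calico
  calicoNetwork:
    ipPools:
      - cidr: 172.16.0.0/16        # example value, keep unique per cluster
        encapsulation: VXLAN
```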
- For `LogStorage`, install the EBS CSI driver (see the EBS on EKS docs reference). One quick way of doing this is to use your AWS access key ID and secret access key to create the `Secret` object for the EBS CSI controller.

  Export the vars:

  export AWS_ACCESS_KEY_ID=<my-access-key-id>
  export AWS_SECRET_ACCESS_KEY=<my-secret-access-key>

  Configure the aws-secret:

  kubectl create secret generic aws-secret --namespace kube-system --from-literal "key_id=${AWS_ACCESS_KEY_ID}" --from-literal "access_key=${AWS_SECRET_ACCESS_KEY}"

  Install the CSI driver:

  kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.39"
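You can check that the driver pods came up before applying the storage class (pod names vary a bit by driver release, so this just greps broadly):

```bash
kubectl get pods -n kube-system | grep ebs-csi
```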
- Apply the storage class:

kubectl apply -f - <<-EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF
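To verify it was created (LogStorage expects a storage class with this exact name):

```bash
kubectl get storageclass tigera-elasticsearch
```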
- Add worker nodes to the cluster using the values for the cluster name and region as used in the `eksctl-config-cluster-1.yaml` file:

  eksctl create nodegroup --cluster <cluster_name> --region <region> --node-type <node_type> --max-pods-per-node 100 --nodes 2 --nodes-max 3 --nodes-min 2
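For example, with placeholder values filled in (the cluster name, region, and instance type below are hypothetical, not taken from the repo config):

```bash
# All values are placeholders; use the ones from your eksctl config file
eksctl create nodegroup --cluster calico-mcm-cluster-1 --region us-east-1 \
  --node-type t3.xlarge --max-pods-per-node 100 --nodes 2 --nodes-max 3 --nodes-min 2
```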
- Once EKS has added the worker nodes to the cluster and the output of `kubectl get nodes` shows the nodes as available, monitor the progress of all the pods as well as the output of `kubectl get tigerastatus`, and once the `apiserver` status shows `Available`, proceed to the next step.
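One way to keep an eye on this is to watch the component status until `apiserver` reports Available:

```bash
# Re-runs every 2 seconds; Ctrl-C once apiserver shows Available
watch kubectl get tigerastatus
```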
- Install the Tigera license file.
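Assuming you have the license YAML saved locally, this is a single apply (the path below is a placeholder):

```bash
kubectl create -f <path/to/license.yaml>
```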
- Once the rest of the cluster comes up, create a `LoadBalancer` service for the `tigera-manager` pods so they can be accessed from your machine:

  kubectl create -f manifests/mgmt-cluster-lb.yaml
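If you want to see roughly what that manifest does without opening the repo, the sketch below captures the idea; the selector, service name, and port are assumptions based on the default `tigera-manager` deployment listening on 9443 in the `tigera-manager` namespace, so defer to `manifests/mgmt-cluster-lb.yaml` itself:

```yaml
# Sketch only: expose the tigera-manager UI through a cloud LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: tigera-manager-external    # hypothetical name
  namespace: tigera-manager
spec:
  type: LoadBalancer
  selector:
    k8s-app: tigera-manager
  ports:
    - port: 9443
      targetPort: 9443
      protocol: TCP
```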
- Configure user access to the manager UI with the docs here.
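As a rough sketch of the kind of commands those docs walk through (the `admin-user` name is a placeholder; `tigera-network-admin` is the full-access ClusterRole referenced in the Calico Enterprise docs):

```bash
# Placeholder service account with full manager UI access, plus a login token
kubectl create serviceaccount admin-user -n default
kubectl create clusterrolebinding admin-user-access \
  --clusterrole tigera-network-admin --serviceaccount default:admin-user
kubectl create token admin-user -n default --duration=24h
```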
- Create the mgmt cluster resources as per the docs here.

  An example of using another `LoadBalancer` svc to expose the MCM target port of 9449 for the managed cluster to access is at `manifests/mcm-svc-lb.yaml`:

  kubectl create -f manifests/mcm-svc-lb.yaml

  If you prefer to use a NodePort or Ingress type svc, you can, but that is outside the scope of this README. Refer to the docs above.
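For reference, the shape of that svc is roughly as follows; the service name, namespace, and selector are assumptions based on the MCM tunnel terminating on the `tigera-manager` pods at 9449, so defer to `manifests/mcm-svc-lb.yaml` itself:

```yaml
# Sketch only: expose the MCM tunnel endpoint that managed clusters connect to
apiVersion: v1
kind: Service
metadata:
  name: tigera-manager-mcm    # hypothetical name
  namespace: tigera-manager
spec:
  type: LoadBalancer
  selector:
    k8s-app: tigera-manager
  ports:
    - port: 9449
      targetPort: 9449
      protocol: TCP
```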
  The latest nightly build before the v3.18.0-2.0 release had an issue with creating the `ManagementCluster` CR as per the docs manifest. The workaround here is kept for historical purposes only. When you get to the step of applying the `ManagementCluster` CR, change the spec to have `.spec.tls.secretName` set to `tigera-management-cluster-connection`, like so (replace `.spec.address` with your relevant svc URL and port):

  export MGMT_ADDRESS=<address-of-mcm-svc>:<port>
kubectl apply -f - <<-EOF
apiVersion: operator.tigera.io/v1
kind: ManagementCluster
metadata:
  name: tigera-secure
spec:
  address: $MGMT_ADDRESS
  tls:
    secretName: tigera-management-cluster-connection
EOF
  Ensure that the `tigera-manager` and `tigera-linseed` pods restart, and that the GUI of the mgmt. cluster shows the `management-cluster` in the right drop-down when the GUI svc comes back.
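A simple way to watch for that restart:

```bash
# The tigera-manager pods should cycle shortly after the ManagementCluster CR is applied
kubectl get pods -n tigera-manager -w
```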
- On cluster-2, apply the operator and prometheus manifests from the repo:

  kubectl create -f https://downloads.tigera.io/ee/v3.18.0-2.0/manifests/tigera-operator.yaml
  kubectl create -f https://downloads.tigera.io/ee/v3.18.0-2.0/manifests/tigera-prometheus-operator.yaml
- Get a pull secret: The official build uses the quay.io images, so you'll need a pull secret. This doc has options on how to generate one as per the official POC/testing process.
- Install the pull secret:

  kubectl create secret generic tigera-pull-secret --type=kubernetes.io/dockerconfigjson -n tigera-operator --from-file=.dockerconfigjson=<path/to/pull/secret>
- For the Prometheus operator, create the pull secret in the tigera-prometheus namespace and then patch the deployment:

  kubectl create secret generic tigera-pull-secret --type=kubernetes.io/dockerconfigjson -n tigera-prometheus --from-file=.dockerconfigjson=<path/to/pull/secret>
  kubectl patch deployment -n tigera-prometheus calico-prometheus-operator -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name": "tigera-pull-secret"}]}}}}'
- Modify the `managedcluster-custom-resources-example.yaml` file as needed and apply it. In this case cluster-2 will be added to cluster-1 as a managed cluster, so we omit the components that are not needed in the resources file, but ensure the pod CIDR is unique in the `Installation` resource:

  kubectl create -f manifests/managedcluster-custom-resources-example.yaml
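After the apply, you can confirm the pod CIDR differs from cluster-1 by reading it back from the `Installation` resource on each cluster:

```bash
# Prints the configured pod CIDR(s); run against both clusters and compare
kubectl get installation default -o jsonpath='{.spec.calicoNetwork.ipPools[*].cidr}'
```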
- Add worker nodes to the cluster using the values for the cluster name and region as used in the `eksctl-config-cluster-1.yaml` file:

  eksctl create nodegroup --cluster <cluster_name> --region <region> --node-type <node_type> --max-pods-per-node 100 --nodes 2 --nodes-max 3 --nodes-min 2
- Once EKS has added the worker nodes to the cluster and the output of `kubectl get nodes` shows the nodes as available, monitor the progress of all the pods as well as the output of `kubectl get tigerastatus`, and once the `apiserver` status shows `Available`, proceed to the next step.
- Create the managed cluster resources as per the docs here.
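For orientation, the resource created on the management cluster in that step looks roughly like the sketch below; the name `cluster-2` is an assumption, and the docs also cover retrieving the generated connection manifest that then gets applied on the managed cluster:

```yaml
# Sketch only: registered on the mgmt. cluster; follow the linked docs for the full flow
apiVersion: projectcalico.org/v3
kind: ManagedCluster
metadata:
  name: cluster-2
```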
➡️ Module 4 - Setup VPC Peering
⬅️ Module 2 - Deploy the EKS Clusters