
Reproduce License prompt issue

Allan Roger Reid edited this page Mar 30, 2023 · 3 revisions

Install multipass

brew install --cask multipass
multipass version
multipass find
multipass launch --name control-plane-k3s --cpus 2 --mem 2048M --disk 5G focal

multipass list
multipass shell control-plane-k3s

On the control node, install k3s and validate

sudo apt update -y
sudo apt upgrade -y
curl -sfL https://get.k3s.io | sh -

sudo kubectl get node -o wide
NAME                STATUS   ROLES                  AGE     VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
control-plane-k3s   Ready    control-plane,master   2m39s   v1.25.4+k3s1   192.168.64.3   <none>        Ubuntu 20.04.5 LTS   5.4.0-135-generic   containerd://1.6.8-k3s1
cat /var/lib/rancher/k3s/server/node-token
K106b4311308a48f2df83fa53b857f9986bf059a9b314a97a8ca829e698a3e4ab35::server:57d46ab38d7cf4f625f6d44308a60dd7
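The node token has the form `<cluster-secret>::server:<password>` and is needed to join the workers. A sketch of capturing it on the control node (shown here with the example token value from the output above; on the control node itself you would use `TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)`):

```shell
# Example token value copied from the output above
TOKEN="K106b4311308a48f2df83fa53b857f9986bf059a9b314a97a8ca829e698a3e4ab35::server:57d46ab38d7cf4f625f6d44308a60dd7"
# Everything before the first "::" is the cluster secret
CLUSTER_SECRET="${TOKEN%%::*}"
echo "$CLUSTER_SECRET"
```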

Launch the worker VMs, then open a shell to each in a separate tab

multipass launch --name kind-worker --cpus 2 --mem 2048M --disk 5G focal
multipass launch --name kind-worker2 --cpus 2 --mem 2048M --disk 5G focal
multipass launch --name kind-worker3 --cpus 2 --mem 2048M --disk 5G focal
multipass launch --name kind-worker4 --cpus 2 --mem 2048M --disk 5G focal

multipass shell kind-worker
multipass shell kind-worker2
multipass shell kind-worker3
multipass shell kind-worker4
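The four launch commands above can also be generated in a loop. A sketch (the `echo` makes this a dry run that only prints each command; remove it to actually launch the VMs):

```shell
# Dry run: print one launch command per worker VM
for w in kind-worker kind-worker2 kind-worker3 kind-worker4; do
  echo multipass launch --name "$w" --cpus 2 --mem 2048M --disk 5G focal
done
```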

On each worker node, join the cluster using the control plane IP and node token from above

sudo apt update -y
sudo apt upgrade -y
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.64.50:6443 \
K3S_TOKEN=K106b4311308a48f2df83fa53b857f9986bf059a9b314a97a8ca829e698a3e4ab35::server:57d46ab38d7cf4f625f6d44308a60dd7 sh -
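The join command can be composed from the control plane URL and token as variables, which makes it easier to reuse across the four workers. A sketch using the example values from this page (the `echo` prints the command instead of running it):

```shell
# Control-plane URL and node token (example values from this page)
K3S_URL="https://192.168.64.50:6443"
K3S_TOKEN="K106b4311308a48f2df83fa53b857f9986bf059a9b314a97a8ca829e698a3e4ab35::server:57d46ab38d7cf4f625f6d44308a60dd7"
# Compose the agent join command; pipe it to sh on each worker to run it
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=${K3S_URL} K3S_TOKEN=${K3S_TOKEN} sh -"
echo "$JOIN_CMD"
```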

On all nodes, merge the kube config and set the kube context

mkdir -p $HOME/.kube
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/config_multipass_k3s
sudo kubectl config view --merge --flatten > ~/.kube/merged_kubeconfig
mv ~/.kube/merged_kubeconfig ~/.kube/config
sudo kubectl config rename-context default multipass
kubectl config use-context multipass
kubectl config get-contexts
kubectl get node -o wide

On the control plane, install Helm

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

On the control plane, add the MinIO Helm repo and clone the operator repository (for the operator 5.0.1 test scripts)

helm repo add minio https://operator.min.io/
export SCRIPT_DIR=$HOME/operator/testing

mkdir -p /home/ubuntu/operator
git clone https://github.com/minio/operator.git

On control plane, comment out all kind / docker functions

sudo vi /home/ubuntu/operator/testing/check-helm.sh

sudo apt install make -y

On the control plane, change the operator image in the minio-operator and console deployments to

quay.io/minio/operator:v5.0.1
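One way to apply the image change is with `kubectl set image`. A sketch — the container names (`minio-operator`, `console`) are assumptions, so confirm them first with `kubectl -n minio-operator get deploy -o wide`; the `echo` makes this a dry run that prints the commands instead of running them:

```shell
# Target image for both deployments
IMAGE="quay.io/minio/operator:v5.0.1"
# Dry run: print the patch commands (container names are assumed)
echo kubectl -n minio-operator set image deployment/minio-operator minio-operator="$IMAGE"
echo kubectl -n minio-operator set image deployment/console console="$IMAGE"
```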

On the control plane, apply a default storage class

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF

On control plane, run

$SCRIPT_DIR/check-helm.sh

On the control plane, expose the MinIO console so it can be reached from outside, and log in using the JWT:

kubectl -n minio-operator get secret console-sa-secret -o jsonpath="{.data.token}" | base64 -d
kubectl port-forward svc/console -n minio-operator 9090:9090 --address 0.0.0.0
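The `jsonpath` + `base64 -d` pipeline above works because the token field of the secret is stored base64-encoded. The same decoding round trip, shown with a dummy value instead of a real secret:

```shell
# Encode a dummy token the way Kubernetes stores secret data
ENCODED=$(printf 'my-sample-jwt' | base64)
# Decode it back, as the kubectl pipeline above does for the real token
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```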

On the local machine, navigate to the console at e.g. http://192.168.64.50:9090/ and register the tenant.

Note that the cluster is registered (see screenshot).

On the Performance tab, note that the tenant still prompts for a license (see screenshot).
