
Commit bda30c7
Author: Xiaolong He
Commit message: update cilium
1 parent 6f5699f commit bda30c7

File tree: 11 files changed, +645 −5 lines changed


README.md (+8 −3)
@@ -4,8 +4,13 @@
 This document mainly describes how to set up a Kubernetes test environment based on Vagrant/kubeadm.
 
-- ansible_k8s_ubuntu one-click deployment of a Kubernetes environment based on ansible
-- ansible_k8s_centos one-click deployment of a Kubernetes environment based on ansible
-- kubeadm_k8s a manual walkthrough of deploying a highly available Kubernetes environment
+- Kubernetes
+  - [ansible_k8s_ubuntu](./ansible_k8s_ubuntu/README.md) one-click deployment of a Kubernetes environment based on ansible
+  - [ansible_k8s_centos](./ansible_k8s_centos/README.md) one-click deployment of a Kubernetes environment based on ansible
+- Kubernetes Cilium
+  - [compiler-kernel](./compiler-kernel/README.md) upgrades the kernel to 5.9.8 on a centos/7.8 system
+  - [ansible_k8s_centos_cilium](./ansible_k8s_centos_cilium/README.md) one-click deployment of a Kubernetes environment based on ansible, using cilium
+- Kubernetes HA
+  - [kubeadm_k8s](./kubeadm_k8s/README.md) a manual walkthrough of deploying a highly available Kubernetes environment
 
 If you run into any problems while using this, feel free to contact me.

ansible_k8s_centos/README.md (−2)
@@ -41,8 +41,6 @@ kube-system kube-proxy-zjzcs 1/1 Running 0
 kube-system kube-scheduler-k8s-master 1/1 Running 0 18m 192.168.50.10 k8s-master <none> <none>
 ```
 
-
-
 ```bash
 # show the fdb entries
 bridge fdb show dev flannel.1

ansible_k8s_centos_cilium/README.md (new file, +63)

# Deploying Cilium, a BPF-based networking solution

> Time: 2020.11.12

Cilium requires Linux kernel 4.8.0 or later, so we build on the centos-kernel box (centos/7.8 with the kernel upgraded to 5.9.8, see compiler-kernel) and use Kubeadm to deploy Kubernetes.

> Note: ansible/vagrant need to be installed on the Mac first

## Step1. Prepare the test environment

After installing Virtualbox/Vagrant, you also need to install ansible: `brew install ansible`
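
For completeness, a minimal host-setup sketch assuming Homebrew on macOS (the cask names below are standard Homebrew casks, not something this repo specifies):

```bash
# Hedged sketch: install the host-side tooling with Homebrew.
brew install --cask virtualbox   # hypervisor the Vagrantfile targets
brew install --cask vagrant      # VM lifecycle management
brew install ansible             # provisioner invoked by Vagrant
```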

## Step2. Download the ansible code and start the Kubernetes cluster

```bash
git clone https://github.com/markthink/deploy_k8s.git
cd deploy_k8s/ansible_k8s_centos_cilium
vagrant up
```
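
Once `vagrant up` finishes, a quick sanity check with standard Vagrant/kubectl commands (not part of this repo) confirms the three VMs registered:

```bash
vagrant status                                        # k8s-master, node-1, node-2 should be "running"
vagrant ssh k8s-master -c "kubectl get nodes -o wide" # nodes stay NotReady until a CNI (Step3) is installed
```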

## Step3. Download helm and install cilium

```bash
wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
tar xvf helm-v3.4.1-linux-amd64.tar.gz
ls linux-amd64/

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.9.0 \
  --namespace kube-system \
  --set externalWorkloads.enabled=true \
  --set clustermesh.apiserver.tls.auto.method=cronJob
```

Command output:

```bash
[vagrant@k8s-master ~]$ helm repo add cilium https://helm.cilium.io/
"cilium" has been added to your repositories
[vagrant@k8s-master ~]$ helm install cilium cilium/cilium --version 1.9.0 \
> --namespace kube-system \
> --set externalWorkloads.enabled=true \
> --set clustermesh.apiserver.tls.auto.method=cronJob
NAME: cilium
LAST DEPLOYED: Fri Nov 13 12:34:24 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.9.0.

For any further help, visit https://docs.cilium.io/en/v1.9/gettinghelp
```

> By the way, the `cilium_net: Caught tx_queue_len zero misconfig` kernel message is harmless.
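
To confirm the release actually converged, a minimal verification sketch can be used (not part of this commit; `k8s-app=cilium` is the default label the chart sets on the agent DaemonSet):

```bash
kubectl -n kube-system get pods -l k8s-app=cilium -o wide
# Ask one agent pod for a one-line health summary.
CILIUM_POD=$(kubectl -n kube-system get pods -l k8s-app=cilium \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$CILIUM_POD" -- cilium status --brief
kubectl get nodes   # the nodes should now report Ready
```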

ansible_k8s_centos_cilium/Vagrantfile (new file, +39)

```ruby
IMAGE_NAME = "centos-kernel"
N = 2

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false

  config.vm.provider "virtualbox" do |v|
    v.memory = 8192
    v.cpus = 4
  end

  config.vm.define "k8s-master" do |master|
    master.vm.box = IMAGE_NAME
    master.vm.network "private_network", ip: "192.168.50.10"
    master.vm.hostname = "k8s-master"
    master.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
    master.vm.provision "ansible" do |ansible|
      ansible.playbook = "kubernetes-setup/master-playbook.yml"
      ansible.extra_vars = {
        node_ip: "192.168.50.10",
      }
    end
  end

  (1..N).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.box = IMAGE_NAME
      node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
      node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      node.vm.hostname = "node-#{i}"
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "kubernetes-setup/node-playbook.yml"
        ansible.extra_vars = {
          node_ip: "192.168.50.#{i + 10}",
        }
      end
    end
  end
end
```
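
Since the Vagrantfile defines one k8s-master plus N = 2 worker nodes, the machines can also be driven individually with the standard Vagrant CLI (illustrative only, not something the repo documents):

```bash
vagrant up k8s-master       # boot and provision just the control plane
vagrant up node-1 node-2    # then bring up the workers
vagrant provision node-1    # re-run the ansible node playbook on one VM
vagrant ssh k8s-master      # open a shell on the master
vagrant destroy -f          # tear the whole environment down
```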

join-command (new file, +1)

```bash
kubeadm join 192.168.50.10:6443 --token jqmmw9.jhcn2fcjksl3d07b --discovery-token-ca-cert-hash sha256:1be5bcde2161fc5385d4bc876a650f3389911e6cd8aa09e033bf124430d30f87
```
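
This file is written by the master playbook's "Generate join command" task below. kubeadm bootstrap tokens expire after 24 hours by default, so a stale copy can be regenerated on the master (standard kubeadm, not specific to this repo):

```bash
# Hedged sketch: regenerate the join command if the token has expired.
vagrant ssh k8s-master -c "sudo kubeadm token create --print-join-command"
```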

kubernetes-setup/master-playbook.yml (new file, +57)

```yaml
---
- hosts: all
  become: true
  # remote_user: vagrant
  vars:
    # ansible_become_pass: vagrant
    # iface: eth1
  tasks:
    - name: Read prek8s.sh file
      template:
        src: prek8s.sh
        dest: /tmp/prek8s.sh

    - name: Install docker-ce/kubelet/kubeadm/kubectl
      command: bash /tmp/prek8s.sh

    - name: Accept forward rules
      command: iptables -P FORWARD ACCEPT

    - name: Configure node ip
      lineinfile:
        path: /etc/sysconfig/kubelet
        line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
        create: yes

    - name: Restart kubelet
      service:
        name: kubelet
        daemon_reload: yes
        state: restarted

    # Work around "modprobe: FATAL: Module configs not found" on the 5.9.8 kernel
    - name: copy config
      command: cp /boot/config-3.10.0-1127.el7.x86_64 /boot/config-5.9.8

    - name: Initialize the Kubernetes cluster using kubeadm
      command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=10.244.0.0/16

    - name: Setup kubeconfig for vagrant user
      command: "{{ item }}"
      with_items:
        - mkdir -p /home/vagrant/.kube
        - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
        - chown vagrant:vagrant /home/vagrant/.kube/config

    - name: Generate join command
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Copy join command to local file
      become: false
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"

  handlers:
    - name: docker status
      service: name=docker state=started
```
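
The playbook is normally executed by Vagrant's ansible provisioner, but it can be sanity-checked from the host first (standard ansible-playbook options; illustrative only):

```bash
ansible-playbook kubernetes-setup/master-playbook.yml --syntax-check   # parse the YAML only
ansible-playbook kubernetes-setup/master-playbook.yml --list-tasks     # show the task order without running anything
```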

kubernetes-setup/node-playbook.yml (new file, +47)

```yaml
---
- hosts: all
  become: true
  # remote_user: vagrant
  vars:
    # ansible_become_pass: vagrant
  tasks:
    - name: Read prek8s.sh file
      template:
        src: prek8s.sh
        dest: /tmp/prek8s.sh

    - name: Install docker-ce/kubelet/kubeadm/kubectl
      command: bash /tmp/prek8s.sh

    - name: Accept forward rules
      command: iptables -P FORWARD ACCEPT

    - name: Accept forward rules
      command: iptables -P FORWARD ACCEPT

    - name: Configure node ip
      lineinfile:
        path: /etc/sysconfig/kubelet
        line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
        create: yes

    - name: Restart kubelet
      service:
        name: kubelet
        daemon_reload: yes
        state: restarted

    - name: Copy the join command to server location
      become: false
      copy: src=join-command dest=/tmp/join-command.sh mode=0777

    # Work around "modprobe: FATAL: Module configs not found" on the 5.9.8 kernel
    - name: copy config
      command: cp /boot/config-3.10.0-1127.el7.x86_64 /boot/config-5.9.8

    - name: Join the node to cluster
      command: sh /tmp/join-command.sh

  handlers:
    - name: docker status
      service: name=docker state=started
```
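
Both playbooks pin the kubelet to the VM's private_network address via --node-ip, so a quick way to confirm the override took effect is to check each node's reported InternalIP (a sketch using standard kubectl, not part of the commit):

```bash
kubectl get nodes -o wide   # INTERNAL-IP should show the 192.168.50.x addresses
kubectl get node node-1 \
  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'
```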

prek8s.sh (new file, +59)

```bash
#!/bin/bash
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
yum update -y && yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11
mkdir /etc/docker
# Set up the Docker daemon
cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

# Install the specified versions of kubelet/kubectl/kubeadm
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum autoremove -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# yum install -y kubelet-1.18.10-0 kubeadm-1.18.10-0 kubectl-1.18.10-0 --disableexcludes=kubernetes
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable the services at boot
systemctl daemon-reload && systemctl enable docker && systemctl restart docker
systemctl enable --now kubelet

# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

sed -i '/swap/d' /etc/fstab
swapoff -a

# Configure the system environment
echo "export LC_ALL=en_US.UTF-8" >> /etc/profile
source /etc/profile

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```
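
A few spot checks (standard Docker/util-linux tooling, not part of the script) that a node ends up in the state kubeadm's preflight expects:

```bash
docker info --format '{{.CgroupDriver}}'    # expect: systemd
sysctl net.bridge.bridge-nf-call-iptables   # expect: 1
swapon --show                               # expect: no output (swap is off)
getenforce                                  # expect: Permissive now, Disabled after reboot
```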
