Kubernetes 1.14.1 HA
Overview
This environment is deployed with kubeadm in high-availability mode, using keepalived + haproxy as the load balancer (LB) in front of the apiserver; see the official documentation for the reference topology.
Environment

| IP | ROLE | OS |
| --- | --- | --- |
| 192.168.1.51 | master1 | CentOS 7.6 |
| 192.168.1.52 | master2 | CentOS 7.6 |
| 192.168.1.53 | master3 | CentOS 7.6 |
| 192.168.1.61 | node1 | CentOS 7.6 |
| 192.168.1.62 | node2 | CentOS 7.6 |
| 192.168.1.63 | node3 | CentOS 7.6 |
| 192.168.1.71 | keepalived+haproxy | CentOS 7.6 |
| 192.168.1.72 | keepalived+haproxy | CentOS 7.6 |
| 192.168.1.73 | VIP | - |
Initialize the environment
The base environment is prepared with the following script.
```bash
cat > k8s-init.sh <<'INIT'
#!/bin/bash
set -e

# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable swap
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

# Disable SELinux
setenforce 0
sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config

# Kernel parameters
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p

# Load the ipvs modules
function module(){
cat << 'EOF' | tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
yum install ipset ipvsadm conntrack -y
}

# Install docker
function docker(){
yum -y install yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.3.ce -y
systemctl start docker && systemctl enable docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl restart docker
}

# Upgrade the kernel
function kernel(){
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y
grub2-mkconfig -o /boot/grub2/grub.cfg
}

# /etc/hosts entries
function hosts(){
cat << EOF | tee /etc/hosts
192.168.1.51 m1
192.168.1.52 m2
192.168.1.53 m3
192.168.1.61 n1
192.168.1.62 n2
192.168.1.63 n3
EOF
}

# Install ntpdate
# ntpdate cn.ntp.org.cn
yum install ntpdate -y

# Run the functions in order
module
docker
kernel
hosts
INIT
```
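One way to push the script to every machine and run it; a sketch that assumes passwordless root SSH to all hosts in the table above:

```bash
# Distribute and run the init script on every master and worker node
for host in 192.168.1.51 192.168.1.52 192.168.1.53 \
            192.168.1.61 192.168.1.62 192.168.1.63; do
  scp k8s-init.sh root@${host}:/tmp/
  ssh root@${host} "bash /tmp/k8s-init.sh"
done
```

A reboot is needed afterwards for the kernel installed by the `kernel` function to take effect.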
Configure keepalived + haproxy
Install keepalived and haproxy on nodes 71 and 72; node 71 will be the MASTER and node 72 the BACKUP.
```bash
yum install keepalived haproxy -y
```
Configure keepalived on node 71:
```bash
cat << EOF | tee /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
    }
    router_id kube_api
}

vrrp_script check_haproxy {
    # Health check for the local haproxy process
    script "killall -0 haproxy"
    interval 3
    weight 5
}

vrrp_instance haproxy-vip {
    # Use unicast instead of the default multicast
    unicast_src_ip 192.168.1.71
    unicast_peer {
        192.168.1.72
    }
    # Initial state
    state MASTER
    # Interface the VIP is bound to (adjust to your environment)
    interface ens33
    # Must match the BACKUP configuration
    virtual_router_id 51
    # Base priority; higher than the BACKUP's, but close enough that
    # the health-check weight can still swing the election
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # Virtual IP address
        192.168.1.73
    }
    track_script {
        check_haproxy
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
```
Configure keepalived on node 72:
```bash
cat << EOF | tee /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
    }
    router_id kube_api
}

vrrp_script check_haproxy {
    # Health check for the local haproxy process
    script "killall -0 haproxy"
    interval 3
    weight 5
}

vrrp_instance haproxy-vip {
    # Use unicast instead of the default multicast
    unicast_src_ip 192.168.1.72
    unicast_peer {
        192.168.1.71
    }
    # Initial state
    state BACKUP
    # Interface the VIP is bound to (adjust to your environment)
    interface ens33
    # Must match the MASTER configuration
    virtual_router_id 51
    # Base priority; lower than the MASTER's, but close enough that
    # the health-check weight can still swing the election
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # Virtual IP address
        192.168.1.73
    }
    track_script {
        check_haproxy
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
```
Verify that the keepalived VIP is active by inspecting the network interfaces on the MASTER node. The VIP does not show up in `ifconfig` output, so use `ip a` instead.
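A quick sanity check, assuming the ens33 interface from the configuration above (run from node 71):

```bash
# The VIP should be attached to ens33 on the current MASTER (node 71)
ip a show ens33 | grep 192.168.1.73

# On the BACKUP (node 72) the same command should print nothing yet
ssh 192.168.1.72 "ip a show ens33 | grep 192.168.1.73" || echo "VIP not on 72, as expected"
```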
Configure haproxy on nodes 71 and 72; the configuration is identical on both nodes.
```
# /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    mode tcp
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m

listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:admin
    stats hide-version
    stats admin if TRUE

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kube-master

backend kube-master
    balance roundrobin
    server m1 192.168.1.51:6443 check inter 2000 fall 2 rise 2 weight 1
    server m2 192.168.1.52:6443 check inter 2000 fall 2 rise 2 weight 1
    server m3 192.168.1.53:6443 check inter 2000 fall 2 rise 2 weight 1
```

Save this as /etc/haproxy/haproxy.cfg on both nodes, then enable and start the service:

```bash
systemctl enable haproxy
systemctl start haproxy
```
Browse to http://192.168.1.73:10080/status (user admin, password admin as configured above) to see the haproxy status page.
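The same check can be scripted; a small sketch using the stats credentials configured above:

```bash
# The stats page should answer through the VIP once keepalived and haproxy are up
curl -u admin:admin -s -o /dev/null -w "%{http_code}\n" http://192.168.1.73:10080/status

# haproxy should be listening on 6443 (apiserver frontend) and 10080 (stats)
ss -tlnp | grep -E ':(6443|10080)'
```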
Deploy the master nodes
Run the following on every node (masters and workers):
```bash
yum install kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
systemctl enable kubelet
```
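These packages come from the Kubernetes yum repository, which the init script above does not configure. A minimal repo file, assuming the Aliyun mirror (to match the imageRepository used below), might look like this:

```bash
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
```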
kubeadm init configuration file
```bash
cat <<EOF | tee kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
controlPlaneEndpoint: 192.168.1.73:6443
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
```
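Optionally, the control-plane images can be pre-pulled on each master using the same config file, so the init/join steps themselves are quicker; a small sketch:

```bash
# Pull the images referenced by kubeadm-init.yaml (Aliyun mirror) ahead of time
kubeadm config images pull --config kubeadm-init.yaml
```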
Run the init on master1:

```bash
kubeadm init --config kubeadm-init.yaml
```
After master1 has been initialized you will see output like the following:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.1.73:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.73:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce
```
Run on master1:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Copy the certificates to master2 and master3:
```bash
# Copy the shared certificates from master1 to master2 (m2) and master3 (m3);
# the host names come from the /etc/hosts entries created earlier
for ip in m2 m3; do
  ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $ip:~/.kube/config
done
```
Join master2 and master3 as control-plane nodes:
```bash
kubeadm join 192.168.1.73:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce \
    --experimental-control-plane
```
Install the Calico network plugin
```bash
wget https://docs.projectcalico.org/v3.7/manifests/calico.yaml

# Change the pod CIDR to match the podSubnet specified at cluster creation
cat calico.yaml
...
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

kubectl create -f calico.yaml

# Check the pods (pulling the images can take a long time;
# consider downloading them locally beforehand)
kubectl get po -n kube-system
```
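The CIDR edit can also be made non-interactively; a sketch that assumes the v3.7 manifest still ships with its default pool of 192.168.0.0/16:

```bash
sed -i 's#192.168.0.0/16#10.244.0.0/16#g' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
```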
Join the worker nodes to the cluster
```bash
kubeadm join 192.168.1.73:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce
```
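The bootstrap token printed by kubeadm init expires after 24 hours by default; if it has expired by the time a node joins, a fresh join command can be printed on master1:

```bash
kubeadm token create --print-join-command
```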
Test
```
[root@m1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
```
[root@m1 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m1     Ready    master   22h   v1.14.1
m2     Ready    master   22h   v1.14.1
m3     Ready    master   22h   v1.14.1
n1     Ready    <none>   22h   v1.14.1
n2     Ready    <none>   22h   v1.14.1
n3     Ready    <none>   22h   v1.14.1
```
```bash
kubectl create deployment nginx --image=nginx:1.14-alpine
kubectl create service nodeport nginx --tcp=80:80
```
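To confirm the deployment actually serves traffic, the assigned NodePort can be looked up and requested from any node IP; a sketch (the node IP is one of the workers from the table above):

```bash
# The NodePort is assigned by the cluster; read it back with jsonpath
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.1.61:${NODE_PORT}
```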
Author zhengwei
LastMod 2019-05-14