Upgrading Kubernetes from 1.13.3 to 1.13.4

Environment

| Type   | IP            | OS         |
| ------ | ------------- | ---------- |
| master | 192.168.1.221 | centos-7.6 |
| node1  | 192.168.1.222 | centos-7.6 |
| node2  | 192.168.1.223 | centos-7.6 |
| node3  | 192.168.1.224 | centos-7.6 |
| node4  | 192.168.1.225 | centos-7.6 |

Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i "s#SELINUX=enforcing#SELINUX=disabled#" /etc/selinux/config
Disable swap
swapoff -a && sysctl -w vm.swappiness=0
# Comment out the swap line in /etc/fstab
# UUID=7bff6243-324c-4587-b550-55dc34018ebf swap swap defaults 0 0
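
A quick check that swap is really off:

free -h    # the Swap line should show 0B total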
Install Ansible

Install Ansible 2.7 on the master; we will use Ansible to install Docker and Kubernetes on the nodes.

# Install the EPEL repository
yum install epel-release

# Install Ansible 2.7
yum install ansible

# Check the version
ansible --version
Ansible host configuration
# Disable host key checking so Ansible does not prompt on its first connection to each client
echo "host_key_checking = False" >>/etc/ansible/ansible.cfg

# Add the hosts inventory
vim /etc/ansible/hosts

[k8s-node]
192.168.1.222 ansible_ssh_user=root ansible_ssh_pass=123456 ansible_ssh_port=22
192.168.1.223 ansible_ssh_user=root ansible_ssh_pass=123456 ansible_ssh_port=22
192.168.1.224 ansible_ssh_user=root ansible_ssh_pass=123456 ansible_ssh_port=22
192.168.1.225 ansible_ssh_user=root ansible_ssh_pass=123456 ansible_ssh_port=22
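
With the inventory in place, a quick connectivity test (password-based SSH in Ansible relies on the sshpass package being installed on the master):

ansible k8s-node -m ping    # every node should answer "pong"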
Passwordless login
# Generate an SSH key pair on the master
ssh-keygen -t rsa

# Push the public key to the nodes with an Ansible playbook
cat > push-ssh.yaml << EOF
---
- hosts: k8s-node
  user: root
  tasks:
    - name: ssh-key-copy
      authorized_key: user=root key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
      tags:
        - sshkey
EOF

ansible-playbook push-ssh.yaml
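
After the play runs, SSH from the master to any node should no longer ask for a password:

ssh 192.168.1.222 hostname    # prints the node's hostname with no password prompt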

Install Docker

Official Docker installation documentation

# First install Docker on the master
yum -y install yum-utils 
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-<version> -y    # e.g. docker-ce-18.06.3.ce, matching the node playbook below
systemctl start docker && systemctl enable docker

# Check the Docker version
docker version

# Install Docker on the nodes
cat >push-docker.yaml<<EOF
---
- hosts: k8s-node
  remote_user: root
  tasks:
    - name: docker.repo
      copy:
        src: "/etc/yum.repos.d/docker-ce.repo"
        dest: "/etc/yum.repos.d/docker-ce.repo"
    - name: install yum-utils
      yum: name=yum-utils state=present
    - name: install docker-ce
      yum: name=docker-ce-18.06.3.ce state=present
    - name: start docker
      systemd: name=docker state=started enabled=true
EOF

ansible-playbook push-docker.yaml

Install kubeadm, kubelet, and kubectl

kubelet is the core component that runs on every node in the cluster and carries out operations such as starting pods and containers. kubeadm is the command-line tool that bootstraps a Kubernetes cluster and is used to initialize it. kubectl is the Kubernetes command-line tool: with kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.

# Deploy on the master: configure the kubernetes.repo source. The official repository is not reachable from mainland China, so we use the Aliyun yum mirror.
cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum makecache
# Install the pinned versions of kubelet, kubeadm, and kubectl on the master
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3

# Install on the nodes
cat >push-k8s.yaml<<EOF
---
- hosts: k8s-node
  remote_user: root
  tasks:
    - name: kubernetes.repo
      copy: 
        src: "/etc/yum.repos.d/kubernetes.repo"
        dest: "/etc/yum.repos.d/kubernetes.repo"
    - name: install kubeadm  
      yum: name=kubeadm-1.13.3 state=present
    - name: install kubelet
      yum: name=kubelet-1.13.3 state=present
    - name: install kubectl
      yum:  name=kubectl-1.13.3 state=present
    - name: enable kubelet
      systemd: name=kubelet enabled=true   # kubelet only starts cleanly after kubeadm init/join
EOF

ansible-playbook push-k8s.yaml
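
With the packages installed everywhere, a quick sanity check of the versions on the master (the flag spellings below are the ones used by the 1.13-era tooling):

kubeadm version -o short            # expect v1.13.3
kubectl version --client --short    # expect v1.13.3
kubelet --version                   # expect Kubernetes v1.13.3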

Parameter settings
# Run on both the master and the nodes
cat >/etc/sysctl.d/k8s.conf<<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
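
To confirm the settings took effect:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables    # both should print 1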
Add hosts entries
# vim /etc/hosts
192.168.1.221 k8s-m1
192.168.1.222 k8s-n1
192.168.1.223 k8s-n2
192.168.1.224 k8s-n3
192.168.1.225 k8s-n4

for i in k8s-n1 k8s-n2 k8s-n3 k8s-n4;do scp /etc/hosts $i:/etc/hosts;done
Load the IPVS modules

IPVS has been merged into the kernel mainline, so the prerequisite for enabling IPVS in kube-proxy is loading the following kernel modules. Run this script on all Kubernetes nodes:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Check the modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Install the management tools
yum install -y ipset ipvsadm conntrack
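
Loading the modules only makes IPVS available; kube-proxy still runs in iptables mode by default. A minimal sketch of switching it over once the cluster is up, assuming the standard kube-proxy ConfigMap layout in 1.13:

# Set mode: "ipvs" in the kube-proxy configuration
kubectl edit configmap kube-proxy -n kube-system

# Recreate the kube-proxy pods so they pick up the new mode
kubectl delete pod -n kube-system -l k8s-app=kube-proxy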
Upgrade the kernel
# Check the current kernel
uname -sr

# Enable the ELRepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

# List the available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

# Install (kernel-lt is the long-term-support branch)
yum --enablerepo=elrepo-kernel install kernel-lt -y

# vim /etc/default/grub: the first kernel entry on the GRUB menu will be used as the default
GRUB_DEFAULT=0

# Regenerate the GRUB configuration
grub2-mkconfig -o /boot/grub2/grub.cfg

# Reboot, then check the kernel version with uname -sr
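
Before that reboot, it is worth confirming which entry GRUB will boot by default (grubby ships with CentOS 7):

grubby --default-kernel    # should point at the newly installed kernel-lt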

Deploy the Master

kubeadm init \
    --apiserver-advertise-address=192.168.1.221 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.3 \
    --pod-network-cidr=10.244.0.0/16

# apiserver-advertise-address: set to the master's IP
# pod-network-cidr: the Pod network range
# image-repository: the image registry used for initialization; can be omitted if the images are already pulled locally
# kubernetes-version: pin to a fixed version
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.221:6443 --token 8vxxz2.w0mhlu2h2r2u7vj7 --discovery-token-ca-cert-hash sha256:1b4667e2bb76e723b1bd17293b2e0ebaf36ae75e7bf33bd7fcf7714c8440f027

The output above shows that the master initialized successfully. Be sure to record the kubeadm join line; it is required to add the worker nodes.
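
The nodes can be joined one at a time over SSH, or in one pass with the Ansible inventory set up earlier. A minimal sketch using the token from the output above (bootstrap tokens expire after 24 hours by default, so run this soon after init):

ansible k8s-node -m shell -a "kubeadm join 192.168.1.221:6443 --token 8vxxz2.w0mhlu2h2r2u7vj7 --discovery-token-ca-cert-hash sha256:1b4667e2bb76e723b1bd17293b2e0ebaf36ae75e7bf33bd7fcf7714c8440f027"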

Configure user access to the cluster
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status
[root@k8s-m1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

Deploy Calico

Calico creates and manages a flat layer-3 network (no overlay required), and each container gets a routable IP. Because traffic needs no encapsulation or decapsulation, the performance overhead is low, problems are easy to trace, and the network scales out easily.

Calico official website

wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f etcd.yaml

wget https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/calico.yaml
# Modify the pod CIDR
cat calico.yaml
...
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16" (创建集群时指定的pod_cidr)

kubectl create -f calico.yaml

# Check the pods (pulling the images can take a long time; you can pre-pull them locally)
kubectl get po -n kube-system

Upgrading from 1.13.3 to 1.13.4

Check the latest stable version
yum list --showduplicates kubeadm --disableexcludes=kubernetes

Check whether your cluster can be upgraded, and fetch the versions it can be upgraded to

kubeadm upgrade plan

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     5 x v1.13.3   v1.13.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.3   v1.13.4
Controller Manager   v1.13.3   v1.13.4
Scheduler            v1.13.3   v1.13.4
Kube Proxy           v1.13.3   v1.13.4
CoreDNS              1.2.6     1.2.6
Etcd                 3.2.24    3.2.24

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.13.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.13.4.

Check the container image versions

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Pull the images with a script

#!/bin/bash

IMAGES="kube-apiserver:v1.13.4 kube-controller-manager:v1.13.4 kube-scheduler:v1.13.4 kube-proxy:v1.13.4 pause:3.1 etcd:3.2.24 coredns:1.2.6"

REGISTRY="registry.aliyuncs.com/google_containers"

# Pull each image from the Aliyun mirror
for image in $IMAGES
do
    docker pull $REGISTRY/$image
done

# Retag to the k8s.gcr.io names that kubeadm expects
docker tag $REGISTRY/kube-apiserver:v1.13.4  k8s.gcr.io/kube-apiserver:v1.13.4
docker tag $REGISTRY/kube-controller-manager:v1.13.4  k8s.gcr.io/kube-controller-manager:v1.13.4
docker tag $REGISTRY/kube-scheduler:v1.13.4  k8s.gcr.io/kube-scheduler:v1.13.4
docker tag $REGISTRY/kube-proxy:v1.13.4  k8s.gcr.io/kube-proxy:v1.13.4
docker tag $REGISTRY/pause:3.1  k8s.gcr.io/pause:3.1
docker tag $REGISTRY/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
docker tag $REGISTRY/coredns:1.2.6  k8s.gcr.io/coredns:1.2.6
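
The script above is shown without a name; assuming it is saved as pull-images.sh, Ansible's script module can run it on every node so the images are pre-pulled there as well:

ansible k8s-node -m script -a "./pull-images.sh"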

Upgrade the packages on each node

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
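
This can be run on each machine by hand; for the worker nodes, a minimal sketch reusing the playbook pattern from the installation step (the file name upgrade-k8s.yaml is ours, and the versions are pinned to the upgrade target):

cat >upgrade-k8s.yaml<<EOF
---
- hosts: k8s-node
  remote_user: root
  tasks:
    - name: upgrade kubeadm
      yum: name=kubeadm-1.13.4 state=present
    - name: upgrade kubelet
      yum: name=kubelet-1.13.4 state=present
    - name: upgrade kubectl
      yum: name=kubectl-1.13.4 state=present
EOF

ansible-playbook upgrade-k8s.yaml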

Run the upgrade
kubeadm upgrade apply v1.13.4

[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.13.4"
[upgrade/versions] Cluster version: v1.13.4
[upgrade/versions] kubeadm version: v1.13.4
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.13.4"...
Static pod: kube-apiserver-k8s-m1 hash: 6e54af85e05431193aca79140f64502e
Static pod: kube-controller-manager-k8s-m1 hash: e7110ea15a2c15d67c45081425e26e12
Static pod: kube-scheduler-k8s-m1 hash: 6eef8f6010dfa7a3e63b41be20fede80
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests045679024"
[upgrade/staticpods] current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] current and new manifests of kube-scheduler are equal, skipping upgrade
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-m1" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Prepare each node for maintenance by marking it unschedulable and evicting its workloads

kubectl drain $NODE --ignore-daemonsets

Restart the kubelet process on every node

systemctl daemon-reload && systemctl restart kubelet

Verify that the new kubelet version is running on the node

systemctl status kubelet

Bring the node back online by marking it schedulable again

kubectl uncordon $NODE
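
Putting the per-node steps together, a minimal sketch that drains, upgrades, and uncordons the workers one at a time (it assumes the passwordless SSH set up earlier):

for NODE in k8s-n1 k8s-n2 k8s-n3 k8s-n4
do
    kubectl drain $NODE --ignore-daemonsets
    ssh $NODE "yum install -y kubelet-1.13.4 kubeadm-1.13.4 --disableexcludes=kubernetes && systemctl daemon-reload && systemctl restart kubelet"
    kubectl uncordon $NODE
done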

Verify that all nodes are available

[root@k8s-m1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
k8s-m1   Ready    master   4h46m   v1.13.4
k8s-n1   Ready    <none>   4h30m   v1.13.4
k8s-n2   Ready    <none>   4h30m   v1.13.4
k8s-n3   Ready    <none>   4h30m   v1.13.4
k8s-n4   Ready    <none>   4h30m   v1.13.4