One-Click Online Installation of a k8s Cluster on CentOS 7
By LiHaiYang | Kubernetes
[TOC]
One-click deployment of a k8s cluster
Hardware requirements
- Master node: 2C4G+
- Worker node: 2C4G+
For installation on CentOS 7.6, prepare three CentOS machines per the requirements above: one as the Master node and two as Worker nodes.
Here is the configuration of my nodes:
Hostname | IP | Specs |
---|---|---|
master | 192.168.1.21 | 8G 4C |
node1 | 192.168.1.22 | 4G 2C |
node2 | 192.168.1.31 | 4G 2C |
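If the hostnames are not set yet, you can assign them to match the table above; a minimal sketch (run the matching command on each machine, and optionally add all three entries to /etc/hosts for name resolution):
# on 192.168.1.21
hostnamectl set-hostname master
# on 192.168.1.22
hostnamectl set-hostname node1
# on 192.168.1.31
hostnamectl set-hostname node2
# optional: name resolution on every node
cat >> /etc/hosts << EOF
192.168.1.21 master
192.168.1.22 node1
192.168.1.31 node2
EOF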
Installation prerequisites
- First, make sure the VMs or hosts used for this walkthrough can reach the Internet.
- Next, install unzip and a few other basic command-line tools on all nodes:
[root@master kubeadm-ha-master]# yum -y install wget unzip vim net-tools git
- It is recommended to disable the CentOS firewall on all nodes:
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
- Also make sure every node can ping every other node and that port 22 is reachable between them; a quick check is sketched below.
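A minimal connectivity check you can run from the master node (a sketch; adjust the IP list to your environment):
for ip in 192.168.1.21 192.168.1.22 192.168.1.31; do
  # ICMP reachability
  ping -c 1 -W 1 $ip > /dev/null && echo "$ip ping OK" || echo "$ip ping FAILED"
  # TCP port 22 reachability (uses bash's /dev/tcp)
  timeout 2 bash -c "echo > /dev/tcp/$ip/22" 2> /dev/null && echo "$ip port 22 OK" || echo "$ip port 22 FAILED"
done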
Pre-deployment configuration (run on the master node)
- Download the deployment script. Downloading from GitHub can be slow; you can instead download the k8s one-click deployment script package elsewhere and upload it to the server once the download finishes.
# download from GitHub
[root@master]# git clone --depth 1 https://github.com/TimeBye/kubeadm-ha
# if you downloaded the zip instead, upload it to the server and run
[root@master]# unzip kubeadm-ha-master.zip
- Enter the directory of the downloaded script
[root@master ~]# cd kubeadm-ha-master/
[root@master kubeadm-ha-master]# ls
00-kernel.yml 05-load-balancer.yml 10-post.yml 25-cert-manager.yml 84-remove-worker.yml 91-upgrade-cluster.yml ansible offline
01-base.yml 06-etcd.yml 21-network-plugin.yml 31-docker-to-containerd.yml 85-remove-master.yml 92-certificates-renew.yml ansible.cfg README.md
02-container-engine.yml 07-kubernetes-certificates.yml 22-ingress-controller.yml 81-add-worker.yml 86-remove-etcd.yml 93-backup-cluster.yml docs roles
03-kubernetes-component.yml 08-kubernetes-master.yml 23-kubernetes-dashboard.yml 82-add-master.yml 87-remove-node.yml 94-restore-cluster.yml example Vagrantfile
04-chrony.yml 09-kubernetes-worker.yml 24-metrics-server.yml 83-add-etcd.yml 90-init-cluster.yml 99-reset-cluster.yml LICENSE
[root@master kubeadm-ha-master]#
- Install Ansible
[root@master kubeadm-ha-master]# cd ansible/
[root@master ansible]# ./install.sh
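Before moving on, you can confirm that Ansible was installed successfully (a sketch; the version printed depends on what install.sh ships):
[root@master ansible]# ansible --version
[root@master ansible]# cd ..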
- Edit the installation configuration file
Since my cluster is built as one master plus two worker nodes, we need to edit the example/hosts.s-master.ip.ini file.
# only the IPs and passwords need changing; everything else can stay at the defaults
[root@master]# cd kubeadm-ha-master/example
[root@master example]# vim hosts.s-master.ip.ini
; fill in the information for all nodes here
; field 1: the node's internal IP
; field 2: ansible_port, the node's sshd listening port
; field 3: ansible_user, the remote login username for the node
; field 4: ansible_ssh_pass, the remote login password for the node
[all]
192.168.1.21 ansible_port=22 ansible_user="root" ansible_ssh_pass="root"
192.168.1.22 ansible_port=22 ansible_user="root" ansible_ssh_pass="root"
192.168.1.31 ansible_port=22 ansible_user="root" ansible_ssh_pass="root"
; a single-master setup needs no load balancing; leave the lb group empty.
[lb]
; note: the etcd cluster must have an odd number of nodes (1, 3, 5, 7, ...)
[etcd]
192.168.1.21
192.168.1.22
192.168.1.31
[kube-master]
192.168.1.21
[kube-worker]
192.168.1.21
192.168.1.22
192.168.1.31
; reserved group, used later for adding master nodes
[new-master]
; reserved group, used later for adding worker nodes
[new-worker]
; reserved group, used later for adding etcd nodes
[new-etcd]
; reserved group, used later for removing the worker role
[del-worker]
; reserved group, used later for removing the master role
[del-master]
; reserved group, used later for removing the etcd role
[del-etcd]
; reserved group, used later for removing nodes
[del-node]
;-------------------------------------- basic configuration below ------------------------------------;
[all:vars]
; whether to skip physical resource verification on nodes; Master nodes require at least 2c2g, Worker nodes at least 2c4g
skip_verify_node=false
; kubernetes version
kube_version="1.21.4"
; container runtime, options: containerd, docker; default is containerd
container_manager="containerd"
; load balancer
; options: nginx, openresty, haproxy, envoy, and slb; nginx is the default
; for why a single-master cluster still puts the apiserver behind a load balancer, see this discussion: https://github.com/TimeBye/kubeadm-ha/issues/8
lb_mode="nginx"
; cluster apiserver IP when load balancing is used; setting the lb_kube_apiserver_ip variable enables the load balancer + keepalived
; lb_kube_apiserver_ip="192.168.56.15"
; cluster apiserver port when load balancing is used
lb_kube_apiserver_port="8443"
; subnet selection: the pod and service subnets must not overlap the server subnet;
; if they overlap, set the `kube_pod_subnet` and `kube_service_subnet` variables to choose the pod and service subnets, for example:
; if the server subnet is 10.0.0.1/8
; the pod subnet could be 192.168.0.0/18
; the service subnet could be 192.168.64.0/18
; if the server subnet is 172.16.0.1/12
; the pod subnet could be 10.244.0.0/18
; the service subnet could be 10.244.64.0/18
; if the server subnet is 192.168.0.1/16
; the pod subnet could be 10.244.0.0/18
; the service subnet could be 10.244.64.0/18
; cluster pod IP range, default mask /18, i.e. 16384 IPs
kube_pod_subnet="10.244.0.0/18"
; cluster service IP range
kube_service_subnet="10.244.64.0/18"
; pod subnet mask assigned to each node, default /24, i.e. 256 IPs; with these defaults the cluster can manage 16384/256 = 64 nodes
kube_network_node_prefix="24"
; maximum pods per node; tied to the pod subnet assigned to the node -- the IP count should exceed the pod count
; https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr
kube_max_pods="110"
; cluster network plugin, currently flannel and calico are supported
network_plugin="calico"
; if a server has separate system and data disks, point the following paths at a directory on the data disk.
; kubelet root directory
kubelet_root_dir="/var/lib/kubelet"
; docker container storage directory
docker_storage_dir="/var/lib/docker"
; containerd container storage directory
containerd_storage_dir="/var/lib/containerd"
; etcd data root directory
etcd_data_dir="/var/lib/etcd"
[root@master example]#
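Before running any playbook, it is worth confirming that Ansible can reach every node with these credentials, using Ansible's standard ping module (a sketch):
[root@master example]# cd ..
[root@master kubeadm-ha-master]# ansible -i example/hosts.s-master.ip.ini all -m ping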
- Upgrade the kernel
# before upgrading, it is recommended to replace the URL http://files.saas.hand-china.com/kernel/centos/kernel-lt-5.4.92-1.el7.elrepo.x86_64.rpm in roles/prepare/variables/defaults/main.yml with http://mirrors.aliyun.com/elrepo/kernel/el7/x86_64/RPMS/kernel-lt-5.4.92-1.el7.elrepo.x86_64.rpm
# likewise, replace http://files.saas.hand-china.com/kernel/centos/kernel-lt-devel-5.4.92-1.el7.elrepo.x86_64.rpm with http://mirrors.aliyun.com/elrepo/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.92-1.el7.elrepo.x86_64.rpm
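# both URLs share a common prefix, so one sed can do the swap (a sketch -- verify the paths in your copy of the repo before running):
[root@master kubeadm-ha-master]# sed -i 's#http://files.saas.hand-china.com/kernel/centos#http://mirrors.aliyun.com/elrepo/kernel/el7/x86_64/RPMS#g' roles/prepare/variables/defaults/main.yml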
[root@master kubeadm-ha-master]# ansible-playbook -i example/hosts.s-master.ip.ini 00-kernel.yml
PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [192.168.1.22]
ok: [192.168.1.31]
ok: [192.168.1.21]

... (variable checks and node verification omitted -- every assertion reported "All assertions passed" on all three nodes) ...

included: /root/kubeadm-ha-master/roles/prepare/base/tasks/common.yml for 192.168.1.21, 192.168.1.22, 192.168.1.31
ok: [192.168.1.21] => (item=sunrpc)
ok: [192.168.1.21] => (item=ip_vs)
ok: [192.168.1.21] => (item=ip_vs_rr)
ok: [192.168.1.21] => (item=ip_vs_sh)
ok: [192.168.1.21] => (item=ip_vs_wrr)
changed: [192.168.1.21] => (item=br_netfilter)
"stderr_lines": ["modprobe: FATAL: Module nf_conntrack_ipv4 not found."]
...ignoring

... (remaining base preparation tasks omitted) ...

included: /root/kubeadm-ha-master/roles/prepare/kernel/tasks/centos.yml for 192.168.1.21, 192.168.1.22, 192.168.1.31
ok: [192.168.1.22] => {
    "msg": "Node 192.168.1.22: kernel upgrade finished, please run reboot -f manually to restart this server."
}
ok: [192.168.1.21] => {
    "msg": "Node 192.168.1.21: kernel upgrade finished, please run reboot -f manually to restart this server."
}
ok: [192.168.1.31] => {
    "msg": "Node 192.168.1.31: kernel upgrade finished, please run reboot -f manually to restart this server."
}

PLAY RECAP *********************************************************************
192.168.1.21               : ok=59   changed=16   unreachable=0    failed=0    skipped=14   rescued=0    ignored=1
192.168.1.22               : ok=49   changed=15   unreachable=0    failed=0    skipped=13   rescued=0    ignored=1
192.168.1.31               : ok=49   changed=15   unreachable=0    failed=0    skipped=13   rescued=0    ignored=1
- After the kernel upgrade completes, reboot all nodes. Run on master, node1, and node2:
reboot
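Once the machines are back up, you can confirm each node is running the new kernel (a sketch; 5.4.92 matches the RPMs referenced above):
[root@master ~]# uname -r
5.4.92-1.el7.elrepo.x86_64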
Start deploying k8s
- Once all nodes have finished rebooting, run the deployment command:
[root@master kubeadm-ha-master]# ansible-playbook -i example/hosts.s-master.ip.ini 90-init-cluster.yml
PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [192.168.1.22]
ok: [192.168.1.31]
ok: [192.168.1.21]

... (verification and base preparation tasks omitted, same as the kernel playbook above) ...

included: /root/kubeadm-ha-master/roles/prepare/container-engine/tasks/containerd/main.yml for 192.168.1.21, 192.168.1.22, 192.168.1.31
included: /root/kubeadm-ha-master/roles/prepare/kubernetes/tasks/centos.yml for 192.168.1.21, 192.168.1.22, 192.168.1.31

PLAY [all] *********************************************************************
included: /root/kubeadm-ha-master/roles/load-balancer/tasks/nginx.yml for 192.168.1.21, 192.168.1.22, 192.168.1.31
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/nginx:1.19-alpine)
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/pause:3.4.1)
FAILED - RETRYING: polling, waiting for nginx to finish starting (12 retries left).
FAILED - RETRYING: polling, waiting for nginx to finish starting (11 retries left).

PLAY [etcd,new-etcd,kube-master,new-master] ************************************
included: /root/kubeadm-ha-master/roles/etcd/certificates/tasks/generate.yml for 192.168.1.21
included: /root/kubeadm-ha-master/roles/etcd/certificates/tasks/distribute.yml for 192.168.1.21, 192.168.1.22, 192.168.1.31
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/etcd:3.4.13-0)
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/pause:3.4.1)
FAILED - RETRYING: polling, waiting for etcd to finish starting (12 retries left).

PLAY [kube-master,new-master,kube-worker,new-worker] ***************************
included: /root/kubeadm-ha-master/roles/kube-certificates/tasks/common.yml for 192.168.1.21
included: /root/kubeadm-ha-master/roles/kube-certificates/tasks/distribute.yml for 192.168.1.21, 192.168.1.22, 192.168.1.31

PLAY [kube-master,new-master] **************************************************
included: /root/kubeadm-ha-master/roles/kube-master/tasks/kubeadm-config.yml for 192.168.1.21
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/kube-apiserver:v1.21.4)
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/kube-controller-manager:v1.21.4)
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/kube-scheduler:v1.21.4)
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/kube-proxy:v1.21.4)
changed: [192.168.1.21] => (item=registry.aliyuncs.com/kubeadm-ha/pause:3.4.1)

PLAY [kube-worker,new-worker] **************************************************
included: /root/kubeadm-ha-master/roles/kube-master/tasks/kubeadm-config.yml for 192.168.1.22, 192.168.1.31

PLAY [kube-master,kube-worker,new-master,new-worker] ***************************

PLAY [kube-master[0]] **********************************************************
included: /root/kubeadm-ha-master/roles/plugins/network-plugins/tasks/calico.yml for 192.168.1.21
FAILED - RETRYING: polling, waiting for calico to start (12 retries left).
included: /root/kubeadm-ha-master/roles/plugins/ingress-controller/tasks/nginx-ingress-controller.yml for 192.168.1.21
FAILED - RETRYING: polling, waiting for nginx-ingress-controller to start (24 retries left).
FAILED - RETRYING: polling, waiting for metrics-server to start (12 retries left).
FAILED - RETRYING: polling, waiting for kubernetes-dashboard to start (12 retries left).

PLAY RECAP *********************************************************************
192.168.1.21               : ok=263  changed=132  unreachable=0    failed=0    skipped=62   rescued=0    ignored=1
192.168.1.22               : ok=125  changed=50   unreachable=0    failed=0    skipped=50   rescued=0    ignored=1
192.168.1.31               : ok=125  changed=50   unreachable=0    failed=0    skipped=50   rescued=0    ignored=1
Check that the nodes are up
[root@master kubeadm-ha-master]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.1.21 Ready control-plane,etcd,master,worker 3m31s v1.21.4
192.168.1.22 Ready etcd,worker 3m3s v1.21.4
192.168.1.31 Ready etcd,worker 3m3s v1.21.4
[root@master kubeadm-ha-master]#
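You can also confirm that the system pods (calico, metrics-server, the dashboard, and so on) have come up; a sketch:
[root@master kubeadm-ha-master]# kubectl get pods --all-namespaces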
Cluster reset
- If the deployment failed and you want to reset the entire cluster (including its data), run the following playbook:
ansible-playbook -i example/hosts.s-master.ip.ini 99-reset-cluster.yml
Install Docker
- Since we need to pull images, install Docker on the server ahead of time. First, configure the Aliyun yum repository for Docker:
[root@master ~]# cat >/etc/yum.repos.d/docker.repo<<EOF
> [docker-ce-edge]
> name=Docker CE Edge - \$basearch
> baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/\$basearch/edge
> enabled=1
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
> EOF
[root@master ~]#
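As an aside, if you prefer Docker's stable channel over edge, the same Aliyun mirror ships a ready-made repo file you can add instead (a sketch; yum-config-manager comes from the yum-utils package):
[root@master ~]# yum -y install yum-utils
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo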
- Then install Docker with yum:
# install via yum
[root@master ~]# yum -y install docker-ce
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.bfsu.edu.cn
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.bfsu.edu.cn
docker-ce-edge | 3.5 kB 00:00:00
(1/2): docker-ce-edge/x86_64/updateinfo | 55 B 00:00:00
(2/2): docker-ce-edge/x86_64/primary_db | 50 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:19.03.13-3.el7 will be installed
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-19.03.13-3.el7.x86_64
--> Running transaction check
---> Package docker-ce-cli.x86_64 1:19.03.13-3.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================================================================================================
Installing:
docker-ce x86_64 3:19.03.13-3.el7 docker-ce-edge 24 M
Installing for dependencies:
docker-ce-cli x86_64 1:19.03.13-3.el7 docker-ce-edge 38 M
Transaction Summary
============================================================================================================================================================================================================================================
Install 1 Package (+1 Dependent package)
Total download size: 62 M
Installed size: 273 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/docker-ce-edge/packages/docker-ce-19.03.13-3.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for docker-ce-19.03.13-3.el7.x86_64.rpm is not installed
(1/2): docker-ce-19.03.13-3.el7.x86_64.rpm | 24 MB 00:00:02
(2/2): docker-ce-cli-19.03.13-3.el7.x86_64.rpm | 38 MB 00:00:06
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 10 MB/s | 62 MB 00:00:06
Retrieving key from https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Importing GPG key 0x621E9F35:
Userid : "Docker Release (CE rpm) <docker@docker.com>"
Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
From : https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:docker-ce-cli-19.03.13-3.el7.x86_64 1/2
Installing : 3:docker-ce-19.03.13-3.el7.x86_64 2/2
Verifying : 1:docker-ce-cli-19.03.13-3.el7.x86_64 1/2
Verifying : 3:docker-ce-19.03.13-3.el7.x86_64 2/2
Installed:
docker-ce.x86_64 3:19.03.13-3.el7
Dependency Installed:
docker-ce-cli.x86_64 1:19.03.13-3.el7
Complete!
# check the docker version
[root@master ~]# docker --version
Docker version 19.03.13, build 4484c46d9d
[root@master ~]#
# enable docker at boot and start it
[root@master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master ~]# systemctl start docker
[root@master ~]#
- Configure a Docker registry mirror
# note: use > rather than >> here -- appending to an existing daemon.json would produce invalid JSON
[root@master ~]# cat > /etc/docker/daemon.json << EOF
> {
> "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
> }
> EOF
[root@master ~]#
- Restart Docker
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
[root@master ~]#
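To confirm the mirror is active, you can check docker info (a sketch; the exact layout of the output varies across Docker versions):
[root@master ~]# docker info | grep -A 1 "Registry Mirrors"
 Registry Mirrors:
  https://b9pmyelo.mirror.aliyuncs.com/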
Install Kuboard (optional)
- Kuboard is a free graphical management tool for Kubernetes that aims to help users quickly get microservices running on Kubernetes.
- Kuboard documentation: https://kuboard.cn/
Installation
- Run on the master node:
[root@master ~]# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
- Check the running status; the output should look like the following. Note: pods in ContainerCreating just need a little more time.
[root@master ~]# kubectl get pods -n kuboard
NAME READY STATUS RESTARTS AGE
kuboard-agent-2-65bc84c86c-r7tc4 1/1 Running 2 28s
kuboard-agent-78d594567-cgfp4 1/1 Running 2 28s
kuboard-etcd-fh9rp 1/1 Running 0 67s
kuboard-etcd-nrtkr 1/1 Running 0 67s
kuboard-etcd-ader3 1/1 Running 0 67s
kuboard-v3-645bdffbf6-sbdxb 1/1 Running 0 67s
[root@master ~]#
Access
- The Kuboard Service is exposed via NodePort 30080; you can access Kuboard as follows.
# format
http://<IP of any worker node>:30080/
# for example, my access URL is
http://192.168.1.21:30080/
# username and password
admin
Kuboard123
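If you are unsure which NodePort was assigned in your environment, you can read it from the Service itself (a sketch):
[root@master ~]# kubectl get svc -n kuboard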