二进制安装k8s多master集群

一、集群实验规划

**master1:** 192.168.106.11

**master2:** 192.168.106.12

**master3:** 192.168.106.13

**node1:** 192.168.106.21

**node2:** 192.168.106.22

**node3:** 192.168.106.23

Pod网段:10.0.0.0/16

Service网段:10.255.0.0/16

| 角色 | IP | 主机名 | 安装的组件 |
| --- | --- | --- | --- |
| Master | 192.168.106.11 | master1 | apiserver、controller-manager、scheduler、etcd、docker、keepalived、nginx |
| Master | 192.168.106.12 | master2 | apiserver、controller-manager、scheduler、etcd、docker、keepalived、nginx |
| Master | 192.168.106.13 | master3 | apiserver、controller-manager、scheduler、etcd、docker |
| Node | 192.168.106.21 | node1 | kubelet、kube-proxy、docker、calico、coredns |
| Node | 192.168.106.22 | node2 | kubelet、kube-proxy、docker、calico、coredns |
| Node | 192.168.106.23 | node3 | kubelet、kube-proxy、docker、calico、coredns |
| VIP | 192.168.106.100 | - | - |


二、初始化集群安装环境

1、修改主机名设置固定ip

hostnamectl set-hostname master1
nmcli con add con-name eth0-static ifname eth0 type ethernet ipv4.method manual ipv4.addresses 192.168.106.11/24 ipv4.gateway 192.168.106.2 ipv4.dns 114.114.114.114 connection.autoconnect yes
nmcli con up eth0-static;nmcli con delete eth0

hostnamectl set-hostname master2
nmcli con add con-name eth0-static ifname eth0 type ethernet ipv4.method manual ipv4.addresses 192.168.106.12/24 ipv4.gateway 192.168.106.2 ipv4.dns 114.114.114.114 connection.autoconnect yes
nmcli con up eth0-static;nmcli con delete eth0

hostnamectl set-hostname master3
nmcli con add con-name eth0-static ifname eth0 type ethernet ipv4.method manual ipv4.addresses 192.168.106.13/24 ipv4.gateway 192.168.106.2 ipv4.dns 114.114.114.114 connection.autoconnect yes
nmcli con up eth0-static;nmcli con delete eth0

hostnamectl set-hostname node1
nmcli con add con-name eth0-static ifname eth0 type ethernet ipv4.method manual ipv4.addresses 192.168.106.21/24 ipv4.gateway 192.168.106.2 ipv4.dns 114.114.114.114 connection.autoconnect yes
nmcli con up eth0-static;nmcli con delete eth0

hostnamectl set-hostname node2
nmcli con add con-name eth0-static ifname eth0 type ethernet ipv4.method manual ipv4.addresses 192.168.106.22/24 ipv4.gateway 192.168.106.2 ipv4.dns 114.114.114.114 connection.autoconnect yes
nmcli con up eth0-static;nmcli con delete eth0

hostnamectl set-hostname node3
nmcli con add con-name eth0-static ifname eth0 type ethernet ipv4.method manual ipv4.addresses 192.168.106.23/24 ipv4.gateway 192.168.106.2 ipv4.dns 114.114.114.114 connection.autoconnect yes
nmcli con up eth0-static;nmcli con delete eth0

2、 配置主机hosts文件(在所有主机上)

cat > /etc/hosts<<END
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.106.11 master1
192.168.106.12 master2
192.168.106.13 master3
192.168.106.21 node1
192.168.106.22 node2
192.168.106.23 node3
END

3、配置主机间的免密登录(所有主机相同操作)

ssh-keygen 
ssh-copy-id master1
ssh-copy-id master2
ssh-copy-id master3
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3

4、关闭swap分区提升性能

为什么要关闭swap交换分区?

Swap是交换分区,如果机器内存不够,会使用swap分区,但是swap分区的性能较低,k8s设计的时候为了能提升性能,默认是不允许使用交换分区的。Kubeadm初始化的时候会检测swap是否关闭,如果没关闭,那就初始化失败。如果不想要关闭交换分区,安装k8s的时候可以指定--ignore-preflight-errors=Swap来解决。
#所有主机执行
swapoff -a
sed -i 's/\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/' /etc/fstab
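
关闭后可以用下面的命令做一个简单的验证(示例):Swap 一行应全为 0,swapon -s 应无输出。

free -m | grep -i swap
swapon -s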

5、 修改主机内核参数(所有主机上)

modprobe br_netfilter
echo 'modprobe br_netfilter' >> /etc/profile
cat > /etc/sysctl.d/k8s.conf<<END
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
END
sysctl -p /etc/sysctl.d/k8s.conf
chmod +x /etc/rc.d/rc.local
echo "sysctl -p /etc/sysctl.d/k8s.conf" >> /etc/rc.d/rc.local

6、 关闭防火墙与selinux(所有主机上)

systemctl disable firewalld --now
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
sed -i "s/^#ClientAliveInterval.*/ClientAliveInterval 600/" /etc/ssh/sshd_config
sed -i "s/^#ClientAliveCountMax.*/ClientAliveCountMax 10/" /etc/ssh/sshd_config
systemctl restart sshd

7、 配置repo源(所有主机上)

yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

8、配置时间同步(所有主机上)

yum install chrony -y
sed -i 's/^server.*//' /etc/chrony.conf
sed -i 's/# Please.*/server ntp.aliyun.com iburst/' /etc/chrony.conf
systemctl enable chronyd --now

9、 开启ipvs(所有主机上)

ipvs是什么?

ipvs (IP Virtual Server) 实现了传输层负载均衡,也就是我们常说的4层LAN交换,作为 Linux 内核的一部分。ipvs运行在主机上,在真实服务器集群前充当负载均衡器。ipvs可以将基于TCP和UDP的服务请求转发到真实服务器上,并使真实服务器的服务在单个 IP 地址上显示为虚拟服务。

ipvs和iptables对比分析

kube-proxy支持 iptables 和 ipvs 两种模式, 在kubernetes v1.8 中引入了 ipvs 模式,在 v1.9 中处于 beta 阶段,在 v1.11 中已经正式可用了。iptables 模式在 v1.1 中就添加支持了,从 v1.2 版本开始 iptables 就是 kube-proxy 默认的操作模式,ipvs 和 iptables 都是基于netfilter的,但是ipvs采用的是hash表,因此当service数量达到一定规模时,hash查表的速度优势就会显现出来,从而提高service的服务性能。那么 ipvs 模式和 iptables 模式之间有哪些差异呢?

1、ipvs 为大型集群提供了更好的可扩展性和性能

2、ipvs 支持比 iptables 更复杂的负载均衡算法(最小负载、最少连接、加权等等)

3、ipvs 支持服务器健康检查和连接重试等功能

cat > /etc/sysconfig/modules/ipvs.modules<<END 
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ \$? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
END
bash /etc/sysconfig/modules/ipvs.modules
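
执行后可以用 lsmod 确认 ipvs 相关内核模块是否加载成功(验证示例):

lsmod | grep -e ip_vs -e nf_conntrack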

10、 安装基础软件包(所有主机上)

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel  vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet ipvsadm bash-completion rsync

11、 安装iptables(备用 所有主机上)

yum install iptables-services -y
systemctl disable iptables --now
iptables -F

12、 安装docker(node节点上)

二进制安装k8s,控制节点不需要安装docker

yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
mkdir -p /etc/docker
mkdir -p /data/docker
IP=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'} | awk -F "." {'print $4'}`
cat > /etc/docker/daemon.json<<END
{
"data-root":"/data/docker",
"registry-mirrors": ["https://oym4jkot.mirror.aliyuncs.com"],
"insecure-registries":["registry.access.redhat.com","quay.io"],
"bip":"172.106.$IP.1/24",
"live-restore":true,
"exec-opts": ["native.cgroupdriver=systemd"]
}
END
systemctl enable docker --now && systemctl status docker

13、扩容硬盘资源(所有节点上)

pvcreate /dev/sdb
vgextend centos /dev/sdb
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root

三、搭建etcd集群(master节点上)

1、配置etcd工作目录

#创建配置文件和证书文件存放目录
mkdir -p /etc/etcd && mkdir -p /etc/etcd/ssl

2、安装证书签发工具cfssl

mkdir -p /data/work

#上传cfssl相关文件
[root@master1 work]# pwd
/data/work
[root@master1 work]# chmod +x *
[root@master1 work]# ls -l
total 18808
-rwxr-xr-x. 1 root root 6595195 Aug 7 15:36 cfssl-certinfo_linux-amd64
-rwxr-xr-x. 1 root root 2277873 Aug 7 15:36 cfssljson_linux-amd64
-rwxr-xr-x. 1 root root 10376657 Aug 7 15:36 cfssl_linux-amd64

[root@master1 work]# mv /data/work/cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master1 work]# mv /data/work/cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@master1 work]# mv /data/work/cfssljson_linux-amd64 /usr/local/bin/cfssljson

3、配置ca证书

#配置证书请求文件
cat > /data/work/ca-csr.json<<END
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}
],
"ca": {
"expiry": "87600h"
}
}
END

注:

CN:Common Name(公用名称),kube-apiserver 从证书中提取该字段作为请求的用户名 (User Name);浏览器使用该字段验证网站是否合法;对于 SSL 证书,一般为网站域名;而对于代码签名证书则为申请单位名称;而对于客户端证书则为证书申请者的姓名。

O:Organization(单位名称),kube-apiserver 从证书中提取该字段作为请求用户所属的组 (Group);对于 SSL 证书,一般为网站域名;而对于代码签名证书则为申请单位名称;而对于客户端单位证书则为证书申请者所在单位名称。

L 字段:所在城市

ST 字段:所在省份

C 字段:只能是国家字母缩写,如中国:CN

#生成ca证书文件
cat > /data/work/ca-config.json<<END
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
END


[root@master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
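
执行后当前目录下会生成 ca.pem、ca-key.pem、ca.csr 等文件,可以用前面安装的 cfssl-certinfo 查看证书字段是否符合预期(验证示例):

[root@master1 work]# ls ca*
[root@master1 work]# cfssl-certinfo -cert ca.pem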

4、生成etcd证书

#配置etcd的证书请求,hosts的ip改成自己etcd所在节点的ip,hosts上ip为etcd的ip(证书为etcd内部通信使用),包括vip:192.168.106.100,可以多预留出来几个ip,做扩容用
cat > /data/work/etcd-csr.json<<END
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.14",
"192.168.106.15",
"192.168.106.16",
"192.168.106.100"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}]
}
END


#生成证书
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

[root@master1 work]# ls etcd*.pem
etcd-key.pem etcd.pem

5、部署etcd集群

配置命令

[root@master1 etcd-v3.4.13-linux-amd64]# pwd
/data/work/etcd-v3.4.13-linux-amd64
[root@master1 etcd-v3.4.13-linux-amd64]# cp -p etcd* /usr/local/bin/


[root@master1 etcd-v3.4.13-linux-amd64]# scp -r /data/work/etcd-v3.4.13-linux-amd64/etcd* master2:/usr/local/bin/
[root@master1 etcd-v3.4.13-linux-amd64]# scp -r /data/work/etcd-v3.4.13-linux-amd64/etcd* master3:/usr/local/bin/

创建配置文件(所有master节点)

IP_LOCAL=`ip  a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}`
HOST_NAME_NUM=`cat /etc/hostname | awk -F "r" {'print $2'}`
mkdir -p /var/lib/etcd/default.etcd
cat > /etc/etcd/etcd.conf<<END
#[Member]
ETCD_NAME="etcd$HOST_NAME_NUM"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$IP_LOCAL:2380"
ETCD_LISTEN_CLIENT_URLS="https://$IP_LOCAL:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$IP_LOCAL:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$IP_LOCAL:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.106.11:2380,etcd2=https://192.168.106.12:2380,etcd3=https://192.168.106.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
END

注:

ETCD_NAME:节点名称,集群中唯一

ETCD_DATA_DIR:数据目录

ETCD_LISTEN_PEER_URLS:集群通信监听地址

ETCD_LISTEN_CLIENT_URLS:客户端访问监听地址

ETCD_INITIAL_ADVERTISE_PEER_URLS:集群通告地址

ETCD_ADVERTISE_CLIENT_URLS:客户端通告地址

ETCD_INITIAL_CLUSTER:集群节点地址

ETCD_INITIAL_CLUSTER_TOKEN:集群Token

ETCD_INITIAL_CLUSTER_STATE:加入集群的当前状态,new是新集群,existing表示加入已有集群

创建etcd启动文件(所有master节点)

cat > /usr/lib/systemd/system/etcd.service<<END
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
--cert-file=/etc/etcd/ssl/etcd.pem \\
--key-file=/etc/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-cert-file=/etc/etcd/ssl/etcd.pem \\
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END

#将证书放置到正确的位置
[root@master1 work]# cp ca*.pem /etc/etcd/ssl/
[root@master1 work]# cp etcd*.pem /etc/etcd/ssl/

将证书文件拷贝到其他master节点

[root@master1 etcd]# scp  /etc/etcd/ssl/*  master2:/etc/etcd/ssl/
[root@master1 etcd]# scp /etc/etcd/ssl/* master3:/etc/etcd/ssl/

启动etcd集群(所有master节点上)

#启动master1服务后要紧接着启动master2、master3上的etcd服务,否则服务会一直卡在那
systemctl daemon-reload
systemctl enable etcd --now && systemctl status etcd

检查etcd集群状态

[root@master1 ~]# export ETCDCTL_API=3
[root@master1 ~]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.106.11:2379,https://192.168.106.12:2379,https://192.168.106.13:2379 endpoint health
+-----------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.106.11:2379 | true | 23.300247ms | |
| https://192.168.106.13:2379 | true | 28.185769ms | |
| https://192.168.106.12:2379 | true | 30.731989ms | |
+-----------------------------+--------+-------------+-------+
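
除了 endpoint health,还可以查看成员列表与各节点的主从状态(验证示例,证书与 endpoints 参数同上):

[root@master1 ~]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.106.11:2379,https://192.168.106.12:2379,https://192.168.106.13:2379 member list
[root@master1 ~]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.106.11:2379,https://192.168.106.12:2379,https://192.168.106.13:2379 endpoint status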

四、安装kubernetes组件

1、下载二进制安装包

wget https://dl.k8s.io/v1.20.7/kubernetes-server-linux-amd64.tar.gz

2、解压文件拷贝命令

[root@master1 work]# tar xvf kubernetes-server-linux-amd64.tar.gz

[root@master1 work]# cd /data/work/kubernetes/server/bin/
[root@master1 bin]# cp kube-apiserver kubectl kube-scheduler kube-controller-manager /usr/local/bin/

#将二进制文件也拷贝到其他master节点
[root@master1 bin]# scp kube-apiserver kubectl kube-scheduler kube-controller-manager master2:/usr/local/bin/

[root@master1 bin]# scp kube-apiserver kubectl kube-scheduler kube-controller-manager master3:/usr/local/bin/

3、将部分二进制执行文件拷贝到node节点

[root@master1 bin]# scp kube-proxy kubelet  node1:/usr/local/bin/  

[root@master1 bin]# scp kube-proxy kubelet node2:/usr/local/bin/

[root@master1 bin]# scp kube-proxy kubelet node3:/usr/local/bin/

4、创建kubernetes目录(所有master节点上)

mkdir -p /etc/kubernetes/ 
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes

5、部署apiserver组件

启用TLS Bootstrapping 机制

Master apiserver启用TLS认证后,每个节点的 kubelet 组件都要使用由 apiserver 的 CA 签发的有效证书才能与 apiserver 通讯;当Node节点很多时,这种客户端证书的颁发需要大量工作,同样也会增加集群扩展的复杂度。

为了简化流程,Kubernetes引入了TLS bootstraping机制来自动颁发客户端证书,kubelet会以一个低权限用户自动向apiserver申请证书,kubelet的证书由apiserver动态签署。

Bootstrap 是很多系统中都存在的程序,比如 Linux 的bootstrap,bootstrap 一般都是作为预先配置在开启或者系统启动的时候加载,这可以用来生成一个指定环境。Kubernetes 的 kubelet 在启动时同样可以加载一个这样的配置文件,这个文件的内容类似如下形式:

apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

TLS bootstrapping 具体引导过程

1、TLS 作用
TLS 的作用就是对通讯加密,防止中间人窃听;同时如果证书不信任的话根本就无法与 apiserver 建立连接,更不用提有没有权限向apiserver请求指定内容。

2、RBAC 作用
当 TLS 解决了通讯问题后,那么权限问题就应由 RBAC 解决(可以使用其他权限模型,如 ABAC);RBAC 中规定了一个用户或者用户组(subject)具有请求哪些 api 的权限;在配合 TLS 加密的时候,实际上 apiserver 读取客户端证书的 CN 字段作为用户名,读取 O字段作为用户组.

以上说明:

第一,想要与 apiserver 通讯就必须采用由 apiserver CA 签发的证书,这样才能形成信任关系,建立 TLS 连接;

第二,可以通过证书的 CN、O 字段来提供 RBAC 所需的用户与用户组。

kubelet 首次启动流程

TLS bootstrapping 功能是让 kubelet 组件去 apiserver 申请证书,然后用于连接 apiserver;那么第一次启动时没有证书如何连接 apiserver ?

在apiserver 配置中指定了一个 token.csv 文件,该文件中是一个预设的用户配置;同时该用户的Token 和 由apiserver 的 CA签发的用户被写入了 kubelet 所使用的 bootstrap.kubeconfig 配置文件中;这样在首次请求时,kubelet 使用 bootstrap.kubeconfig 中被 apiserver CA 签发证书时信任的用户来与 apiserver 建立 TLS 通讯,使用 bootstrap.kubeconfig 中的用户 Token 来向 apiserver 声明自己的 RBAC 授权身份.

token.csv格式:

3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

token.csv里的token和被apiserver CA证书颁发机构信任的用户kubelet-bootstrap被写在bootstrap.kubeconfig文件里面。kubelet第一次启动的时候会加载bootstrap.kubeconfig文件,使用其中的用户kubelet-bootstrap与apiserver建立TLS通讯,并使用其中的token向apiserver声明自己的RBAC授权身份。

首次启动时,可能会遇到 kubelet 报 401 无权访问 apiserver 的错误;这是因为在默认情况下,kubelet 通过 bootstrap.kubeconfig 中的预设用户 Token 声明了自己的身份,然后创建 CSR 请求;但是不要忘记,这个用户在我们不做任何处理的情况下是没有任何权限的,包括创建 CSR 请求的权限;所以需要创建一个 ClusterRoleBinding,将预设用户 kubelet-bootstrap 与内置的 ClusterRole system:node-bootstrapper 绑定到一起,使其能够发起 CSR 请求。稍后安装kubelet的时候演示。

创建token.csv文件(拷贝到其他master节点上)

#在master1生成后拷贝到master2、master3
#格式:token,用户名,UID,用户组
cat > /etc/kubernetes/token.csv <<END
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
END
scp /etc/kubernetes/token.csv master2:/etc/kubernetes/
scp /etc/kubernetes/token.csv master3:/etc/kubernetes/

创建csr请求文件,替换为自己机器的IP

注: 如果 hosts 字段不为空则需要指定授权使用该证书的 IP 或域名列表。 由于该证书后续被 kubernetes master 集群使用,需要将master节点的IP都填上,同时还需要填写 service 网络的首个IP。(一般是 kube-apiserver 指定的 service-cluster-ip-range 网段的第一个IP,如 10.255.0.1)

cat > /data/work/kube-apiserver-csr.json <<END
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.21",
"192.168.106.22",
"192.168.106.23",
"192.168.106.100",
"10.255.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}
]
}
END

生成apiserver证书

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

拷贝证书到指定文件夹及拷贝到其他master节点

[root@master1 work]# cp ca*.pem /etc/kubernetes/ssl
[root@master1 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl

[root@master1 work]# scp /etc/kubernetes/ssl/* master2:/etc/kubernetes/ssl/

[root@master1 work]# scp /etc/kubernetes/ssl/* master3:/etc/kubernetes/ssl/

创建api-server的配置文件,替换成自己的ip(所有master节点上执行)

cat  > /etc/kubernetes/kube-apiserver.conf <<END
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--anonymous-auth=false \\
--bind-address=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}` \\
--secure-port=6443 \\
--advertise-address=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}` \\
--insecure-port=0 \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth \\
--service-cluster-ip-range=10.255.0.0/16 \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://192.168.106.11:2379,https://192.168.106.12:2379,https://192.168.106.13:2379 \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4"
END

#注:

--logtostderr:启用日志

--v:日志等级

--log-dir:日志目录

--etcd-servers:etcd集群地址

--bind-address:监听地址

--secure-port:https安全端口

--advertise-address:集群通告地址

--allow-privileged:启用授权

--service-cluster-ip-range:Service虚拟IP地址段

--enable-admission-plugins:准入控制模块

--authorization-mode:认证授权,启用RBAC授权和节点自管理

--enable-bootstrap-token-auth:启用TLS bootstrap机制

--token-auth-file:bootstrap token文件

--service-node-port-range:Service nodeport类型默认分配端口范围

--kubelet-client-xxx:apiserver访问kubelet客户端证书

--tls-xxx-file:apiserver https证书

--etcd-xxxfile:连接Etcd集群证书

--audit-log-xxx:审计日志

创建服务启动文件(所有master节点上都执行)

cat > /usr/lib/systemd/system/kube-apiserver.service <<END
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END

启动服务(所有master节点上执行)

systemctl daemon-reload && systemctl enable kube-apiserver --now && systemctl status kube-apiserver
#检查apiserver,401状态是正常的,还没做认证
[root@master1 ~]# curl --insecure https://192.168.106.11:6443/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {

},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401

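也可以带上 token.csv 里的 token 再请求一次做个简单测试(示例):认证能够通过,但由于 kubelet-bootstrap 用户此时还没有绑定任何 RBAC 权限,一般会返回 403 Forbidden 而不再是 401。

TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
curl --insecure -H "Authorization: Bearer $TOKEN" https://192.168.106.11:6443/api
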
6、部署kubectl组件

Kubectl是客户端工具,操作k8s资源的,如增删改查等。

Kubectl操作资源的时候,怎么知道要连接到哪个集群?这需要一个配置文件/etc/kubernetes/admin.conf,kubectl会根据这个文件的配置去访问k8s资源。/etc/kubernetes/admin.conf文件记录了要访问的k8s集群以及用到的证书。

可以设置一个环境变量KUBECONFIG

export KUBECONFIG=/etc/kubernetes/admin.conf

这样在执行kubectl时,就会自动加载KUBECONFIG指定的配置文件,来操作对应集群的k8s资源了

也可以按照下面方法,这个是在kubeadm初始化k8s的时候会告诉我们要用的一个方法

cp /etc/kubernetes/admin.conf /root/.kube/config

这样我们在执行kubectl,就会加载/root/.kube/config文件,去操作k8s资源了

如果设置了KUBECONFIG,那就会先找到KUBECONFIG去操作k8s,如果没有KUBECONFIG变量,那就会使用/root/.kube/config文件决定管理哪个k8s集群的资源
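
配置完成后,可以用下面的命令确认 kubectl 当前加载的是哪份配置、指向哪个集群(验证示例):

kubectl config current-context
kubectl config view --minify | grep server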

创建证书请求文件

cat > /data/work/admin-csr.json <<END
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "system:masters",
"OU": "system"
}
]
}
END

说明: 后续 kube-apiserver 使用 RBAC 对客户端(如 kubelet、kube-proxy、Pod)请求进行授权; kube-apiserver 预定义了一些 RBAC 使用的 RoleBindings,如 cluster-admin 将 Group system:masters 与 Role cluster-admin 绑定,该 Role 授予了调用kube-apiserver 的所有 API的权限; O指定该证书的 Group 为 system:masters,kubelet 使用该证书访问 kube-apiserver 时 ,由于证书被 CA 签名,所以认证通过,同时由于证书用户组为经过预授权的 system:masters,所以被授予访问所有 API 的权限;

注: 这个admin 证书,是将来生成管理员用的kube config 配置文件用的,现在我们一般建议使用RBAC 来对kubernetes 进行角色权限控制, kubernetes 将证书中的CN 字段 作为User, O 字段作为 Group; “O”: “system:masters”, 必须是system:masters,否则后面kubectl create clusterrolebinding报错。

证书的O配置为system:masters;在集群内部,cluster-admin这个clusterrolebinding将system:masters组和cluster-admin这个clusterrole绑定在一起
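
等下面的 kubeconfig 配置好之后,可以用这两条命令确认内置的 cluster-admin 绑定确实把 system:masters 组和 cluster-admin 这个 ClusterRole 绑定在一起(验证示例):

kubectl get clusterrolebinding cluster-admin -o wide
kubectl describe clusterrolebinding cluster-admin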

生成证书

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

拷贝证书到指定目录

[root@master1 work]# cp /data/work/admin*.pem /etc/kubernetes/ssl/

创建kubeconfig配置文件,比较重要

kubeconfig 为 kubectl 的配置文件,包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书(这里如果报错找不到kubeconfig路径,请手动复制到相应路径下,没有则忽略)

#设置集群参数
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube.config
Cluster "kubernetes" set.


#设置客户端认证参数
[root@master1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
User "admin" set.


#设置上下文参数
[root@master1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.


#设置当前上下文
[root@master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
Switched to context "kubernetes".


#创建目录拷贝配置文件
[root@master1 ~]# mkdir ~/.kube -p
[root@master1 ~]# cp /data/work/kube.config ~/.kube/config


#测试是否正常
[root@master1 work]# kubectl get pods
No resources found in default namespace.

[root@master1 work]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.106.11:6443

[root@master1 work]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
[root@master1 work]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 17h

授权kubernetes证书访问kubelet API的权限

[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created

同步到master2、master3节点

[root@master2 ~]# mkdir /root/.kube/
[root@master3 ~]# mkdir /root/.kube/

[root@master1 ~]# scp .kube/config master2:~/.kube/
[root@master1 ~]# scp .kube/config master3:~/.kube/

配置kubectl子命令补全(所有master节点上)

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile
echo "source '/root/.kube/completion.bash.inc'" >> /etc/bashrc

Kubectl官方备忘单:

https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/

7、部署kube-controller-manager组件

创建csr请求文件

hosts 列表包含所有 kube-controller-manager 节点 IP; CN 为 system:kube-controller-manager、O 为 system:kube-controller-manager,kubernetes 内置的 ClusterRoleBindings system:kube-controller-manager 赋予 kube-controller-manager 工作所需的权限

cat > /data/work/kube-controller-manager-csr.json <<END 
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.100"
],
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
END

生成证书

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

创建kube-controller-manager的kubeconfig

#设置集群参数
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.


#设置客户端认证参数
[root@master1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.


#设置上下文参数
[root@master1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.


#设置使用上下文
[root@master1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".

创建kube-controller-manager.conf配置文件(所有master节点)

cat > /etc/kubernetes/kube-controller-manager.conf <<END
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \\
--secure-port=10252 \\
--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=10.255.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.0.0.0/16 \\
--experimental-cluster-signing-duration=87600h \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--horizontal-pod-autoscaler-use-rest-clients=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--use-service-account-credentials=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
END

创建启动文件(所有master节点)

cat > /usr/lib/systemd/system/kube-controller-manager.service <<END
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
END

拷贝证书或配置文件

[root@master1 work]# cp /data/work/kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master1 work]# cp /data/work/kube-controller-manager*.pem /etc/kubernetes/ssl/

[root@master1 work]# scp /data/work/kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# scp /data/work/kube-controller-manager*.pem master3:/etc/kubernetes/ssl/


[root@master1 work]# scp /data/work/kube-controller-manager.kubeconfig master2:/etc/kubernetes/
[root@master1 work]# scp /data/work/kube-controller-manager.kubeconfig master3:/etc/kubernetes/

启动服务(所有master节点)

systemctl daemon-reload && systemctl enable kube-controller-manager --now && systemctl status kube-controller-manager

8、部署kube-scheduler组件

创建csr请求文件

注: hosts 列表包含所有 kube-scheduler 节点 IP; CN 为 system:kube-scheduler、O 为 system:kube-scheduler,kubernetes 内置的 ClusterRoleBindings system:kube-scheduler 将赋予 kube-scheduler 工作所需的权限。

cat > /data/work/kube-scheduler-csr.json <<END
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.100"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
END

生成证书

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

创建kube-scheduler的kubeconfig

#设置集群参数
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.


#设置客户端认证参数
[root@master1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.


#设置上下文参数
[root@master1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.


#设置当前上下文
[root@master1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".

创建配置文件kube-scheduler.conf(所有master节点上)

cat > /etc/kubernetes/kube-scheduler.conf <<END
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
END

创建服务启动文件(所有master节点上)

cat > /usr/lib/systemd/system/kube-scheduler.service <<END
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
END

拷贝证书和配置文件

[root@master1 work]# cp /data/work/kube-scheduler.kubeconfig /etc/kubernetes/
[root@master1 work]# scp /data/work/kube-scheduler.kubeconfig master2:/etc/kubernetes/
[root@master1 work]# scp /data/work/kube-scheduler.kubeconfig master3:/etc/kubernetes/

[root@master1 work]# cp /data/work/kube-scheduler*.pem /etc/kubernetes/ssl/
[root@master1 work]# scp /data/work/kube-scheduler*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# scp /data/work/kube-scheduler*.pem master3:/etc/kubernetes/ssl/

启动服务(所有master节点上)

systemctl daemon-reload && systemctl enable kube-scheduler --now && systemctl status  kube-scheduler

9、部署kubelet组件

kubelet: 每个Node节点上的kubelet定期就会调用API Server的REST接口报告自身状态,API Server接收这些信息后,将节点状态信息更新到etcd中。kubelet也通过API Server监听Pod信息,从而对Node机器上的POD进行管理,如创建、删除、更新Pod

kubelet安装在node节点上,但是生成的配置文件需要在master上生成,然后拷贝到node节点上

创建kubelet-bootstrap.kubeconfig

[root@master1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)


[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Cluster "kubernetes" set.


[root@master1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
User "kubelet-bootstrap" set.


[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Context "default" created.


[root@master1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Switched to context "default".


[root@master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

创建配置文件kubelet.json(所有node节点上)

"cgroupDriver": "systemd" 要和docker的驱动一致。

address替换为自己node1的IP地址。
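
可以先在 node 节点上确认 docker 当前使用的 cgroup 驱动确实是 systemd(检查示例):

docker info | grep -i "cgroup driver"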

mkdir /etc/kubernetes
mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
mkdir /etc/kubernetes/ssl
cat > /etc/kubernetes/kubelet.json <<END
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "`ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.255.0.2"]
}
END

创建服务启动文件(所有node节点上)

--hostname-override:显示名称,集群中唯一

--network-plugin:启用CNI

--kubeconfig:空路径,会自动生成,后面用于连接apiserver

--bootstrap-kubeconfig:首次启动向apiserver申请证书

--config:配置参数文件

--cert-dir:kubelet证书生成目录

--pod-infra-container-image:管理Pod网络容器的镜像

cat > /usr/lib/systemd/system/kubelet.service <<END
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--cert-dir=/etc/kubernetes/ssl \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet.json \\
--network-plugin=cni \\
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
END

拷贝证书配置文件

[root@master1 work]# scp /data/work/kubelet-bootstrap.kubeconfig node1:/etc/kubernetes/
[root@master1 work]# scp /data/work/kubelet-bootstrap.kubeconfig node2:/etc/kubernetes/
[root@master1 work]# scp /data/work/kubelet-bootstrap.kubeconfig node3:/etc/kubernetes/

[root@master1 work]# scp /data/work/ca.pem node1:/etc/kubernetes/ssl/
[root@master1 work]# scp /data/work/ca.pem node2:/etc/kubernetes/ssl/
[root@master1 work]# scp /data/work/ca.pem node3:/etc/kubernetes/ssl/

启动服务(所有node节点上)

systemctl daemon-reload && systemctl enable kubelet --now && systemctl status  kubelet

Approve一下bootstrap请求

[root@master1 work]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-DZFsB1G2IkzCkpAsoq5JN6oIj_8Tqkzk6DgvHexL_-0 14m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-YfF2Khx8Y5krUD45zuCdKy8OJGUvZgPKgPBk29HcDAY 15m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-pkQYsY7BvJnyEYZIERQBrXQYvx0jHBOXn2JGLCjxhq8 15m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending


[root@master1 work]# kubectl certificate approve node-csr-DZFsB1G2IkzCkpAsoq5JN6oIj_8Tqkzk6DgvHexL_-0
certificatesigningrequest.certificates.k8s.io/node-csr-DZFsB1G2IkzCkpAsoq5JN6oIj_8Tqkzk6DgvHexL_-0 approved
[root@master1 work]# kubectl certificate approve node-csr-YfF2Khx8Y5krUD45zuCdKy8OJGUvZgPKgPBk29HcDAY
certificatesigningrequest.certificates.k8s.io/node-csr-YfF2Khx8Y5krUD45zuCdKy8OJGUvZgPKgPBk29HcDAY approved
[root@master1 work]# kubectl certificate approve node-csr-pkQYsY7BvJnyEYZIERQBrXQYvx0jHBOXn2JGLCjxhq8
certificatesigningrequest.certificates.k8s.io/node-csr-pkQYsY7BvJnyEYZIERQBrXQYvx0jHBOXn2JGLCjxhq8 approved
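
批准之后可以确认节点是否已经注册到集群(验证示例;在部署 calico 网络插件之前,节点状态显示为 NotReady 属于正常现象):

[root@master1 work]# kubectl get csr
[root@master1 work]# kubectl get nodes -owide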

导入docker镜像

[root@node1 ~]# docker load -i pause-cordns.tar.gz 
[root@node2 ~]# docker load -i pause-cordns.tar.gz
[root@node3 ~]# docker load -i pause-cordns.tar.gz

10、部署kube-proxy组件

创建csr证书请求文件(仅在master1上)

cat > /data/work/kube-proxy-csr.json <<END
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}
]
}
END

生成证书

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

创建kube-proxy的kubeconfig

[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.


[root@master1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.


[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.


[root@master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".

创建kube-proxy配置文件(所有node节点上)

cat > /etc/kubernetes/kube-proxy.yaml <<END
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.106.0/24
healthzBindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`:10256
kind: KubeProxyConfiguration
metricsBindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`:10249
mode: "ipvs"
END

创建服务启动文件(所有node节点上)

mkdir -p /var/lib/kube-proxy
cat > /usr/lib/systemd/system/kube-proxy.service <<END
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END

拷贝配置文件

[root@master1 work]# scp kube-proxy.kubeconfig node1:/etc/kubernetes/
[root@master1 work]# scp kube-proxy.kubeconfig node2:/etc/kubernetes/
[root@master1 work]# scp kube-proxy.kubeconfig node3:/etc/kubernetes/

启动服务(所有node节点上)

systemctl daemon-reload && systemctl enable kube-proxy --now && systemctl status kube-proxy
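
kube-proxy 以 ipvs 模式运行后,可以在 node 节点上查看生成的 ipvs 转发规则(验证示例,此时至少应能看到 10.255.0.1:443 这条 kubernetes service 的规则):

ipvsadm -Ln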

11、部署calico组件

导入镜像包

[root@node1 ~]# docker load -i calico.tar.gz
[root@node2 ~]# docker load -i calico.tar.gz
[root@node3 ~]# docker load -i calico.tar.gz

应用calico yaml文件

[root@master1 ~]# kubectl apply -f calico.yaml 

12、部署coredns组件

导入镜像

[root@node1 ~]# docker load -i pause-cordns.tar.gz 
[root@node2 ~]# docker load -i pause-cordns.tar.gz
[root@node3 ~]# docker load -i pause-cordns.tar.gz

应用coredns文件

[root@master1 ~]# kubectl apply -f coredns.yaml 

查看集群的状态

[root@master1 ~]# kubectl get pods -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6949477b58-7r8tw 1/1 Running 0 11m 172.16.135.1 node3 <none> <none>
calico-node-98mm4 1/1 Running 0 11m 192.168.106.21 node1 <none> <none>
calico-node-cnzgt 1/1 Running 0 11m 192.168.106.23 node3 <none> <none>
calico-node-jcx5k 1/1 Running 0 11m 192.168.106.22 node2 <none> <none>
coredns-7bf4bd64bd-rhzbk 1/1 Running 0 3m56s 172.16.166.129 node1 <none> <none>

五、测试k8s集群部署tomcat服务

1、将镜像上传到node节点,导入镜像

root@eve-ng:~/script/binary_install_k8s# scp tomcat.tar.gz 192.168.106.21:~ 
root@eve-ng:~/script/binary_install_k8s# scp tomcat.tar.gz 192.168.106.22:~
root@eve-ng:~/script/binary_install_k8s# scp tomcat.tar.gz 192.168.106.23:~


[root@node1 ~]# docker load -i tomcat.tar.gz
[root@node2 ~]# docker load -i tomcat.tar.gz
[root@node3 ~]# docker load -i tomcat.tar.gz

2、编写tomcat.yaml应用文件

cat > /root/tomcat.yaml <<END
apiVersion: v1 #pod属于k8s核心组v1
kind: Pod #创建的是一个Pod资源
metadata: #元数据
  name: demo-pod #pod名字
  namespace: default #pod所属的名称空间
  labels:
    app: myapp #pod具有的标签
    env: dev #pod具有的标签
spec:
  containers: #定义一个容器,容器是对象列表,下面可以有多个name
  - name: tomcat-pod-java #容器的名字
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine #容器使用的镜像
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command: #command是一个列表,定义的时候下面的参数加横线
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
END
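
文中没有给出 tomcat-service.yaml 的内容,下面是一个与后面测试端口(NodePort 30080)相匹配的参考写法,仅作示例,selector 按上面 Pod 的标签假设:

cat > /root/tomcat-service.yaml <<END
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort          #通过NodePort方式暴露服务
  selector:
    app: myapp            #与demo-pod的标签匹配
    env: dev
  ports:
  - port: 8080            #Service端口
    targetPort: 8080      #容器端口
    nodePort: 30080       #节点端口,对应后面curl测试用的30080
END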

3、应用tomcat.yaml配置文件

[root@master1 ~]# kubectl apply -f tomcat.yaml 
pod/demo-pod created

[root@master1 ~]# kubectl apply -f tomcat-service.yaml
service/tomcat created

4、检查服务是否正常

[root@master1 ~]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-pod 2/2 Running 0 4m9s 172.16.104.1 node2 <none> <none>
[root@master1 ~]# curl 192.168.106.22:30080 -I
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 10 Aug 2022 13:06:56 GMT

六、验证coredns是否正常

[root@master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (36.152.44.96): 56 data bytes
64 bytes from 36.152.44.96: seq=0 ttl=55 time=3.048 ms
^C
--- www.baidu.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 3.048/3.048/3.048 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default.svc.cluster.local
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local

七、配置apiserver高可用

1、master1与master2节点安装keepalived与nginx

[root@master1 ~]# yum install keepalived nginx nginx-mod-stream -y
[root@master2 ~]# yum install keepalived nginx nginx-mod-stream -y

2、编辑master1与master2的配置文件

cat > /etc/nginx/nginx.conf <<'END'
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
worker_connections 1024;
}

# 四层负载均衡,为master节点的apiserver组件提供负载均衡
stream {

log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

access_log /var/log/nginx/k8s-access.log main;

upstream k8s-apiserver {
server 192.168.106.11:6443; # master1 APISERVER IP:PORT
server 192.168.106.12:6443; # master2 APISERVER IP:PORT
server 192.168.106.13:6443; # master3 APISERVER IP:PORT

}

server {
listen 16443; # 由于nginx与master节点复用,这个监听端口不能是6443,否则会冲突
proxy_pass k8s-apiserver;
}
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

server {
listen 80 default_server;
server_name _;

location / {
}
}
}
END

3、启动nginx服务

[root@master1 ~]# systemctl enable nginx --now && systemctl status nginx
[root@master2 ~]# systemctl enable nginx --now && systemctl status nginx

4、配置keepalived主备

主:

cat > /etc/keepalived/keepalived.conf<<END
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}

vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
state MASTER
interface eth0 # 修改为实际网卡名
virtual_router_id 51 # VRRP 路由 ID实例,每个实例是唯一的
priority 100 # 优先级,备服务器设置 90
advert_int 1 # 指定VRRP 心跳包通告间隔时间,默认1秒
authentication {
auth_type PASS
auth_pass 1111
}
# 虚拟IP
virtual_ipaddress {
192.168.106.100/24
}
track_script {
check_nginx
}
}
END

备:

cat > /etc/keepalived/keepalived.conf<<END
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}

vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51 # VRRP 路由 ID实例,每个实例是唯一的
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.106.100/24
}
track_script {
check_nginx
}
}
END

5、配置check_nginx脚本

cat > /etc/keepalived/check_nginx.sh <<END
#!/bin/bash
counter=\`netstat -tunpl | grep nginx | wc -l\`
if [ \$counter -eq 0 ]; then
service nginx start
sleep 2
counter=\`netstat -tunpl | grep nginx | wc -l\`
if [ \$counter -eq 0 ]; then
service keepalived stop
fi
fi
END
chmod +x /etc/keepalived/check_nginx.sh

6、启动keepalived服务

[root@master1 ~]# systemctl  enable keepalived --now && systemctl status keepalived
[root@master2 ~]# systemctl enable keepalived --now && systemctl status keepalived

7、测试vip自动迁移

[root@master1 ~]# systemctl stop nginx && systemctl mask nginx

[root@master2 ~]# ip a s | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.106.12/24 brd 192.168.106.255 scope global noprefixroute eth0
inet 192.168.106.100/24 scope global secondary eth0


[root@master1 ~]# systemctl unmask nginx && systemctl start nginx
[root@master1 ~]# ip a s | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.106.11/24 brd 192.168.106.255 scope global noprefixroute eth0
inet 192.168.106.100/24 scope global secondary eth0

8、更改配置文件中的ip为vip(所有node节点上)

grep -r 6443 /etc/kubernetes/ | awk -F ":" {'print $1'} | xargs sed -i  "s/192.168.106.11:6443/192.168.106.100:16443/" 

systemctl restart kubelet kube-proxy && systemctl status kubelet kube-proxy
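
修改并重启之后,可以确认 node 节点上的各个 kubeconfig 已经指向 VIP,并且节点状态依旧正常(验证示例):

#node节点上确认server地址已改为VIP
grep -r "192.168.106.100:16443" /etc/kubernetes/
#master节点上确认节点仍为Ready
kubectl get nodes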

八、binary_install_k8s_script

1、目录

[eve-ng binary_install_k8s(main)] $ tree
.
├── deploy_keepalived_nginx.sh
├── deploy_module_master.sh
├── deploy_module_node.sh
├── env.sh
├── set-cer.sh
├── site.sh
└── software_config
├── calico.yaml
├── cfssl
│   ├── cfssl-certinfo_linux-amd64
│   ├── cfssljson_linux-amd64
│   └── cfssl_linux-amd64
├── check_nginx.sh
├── coredns.yaml
├── nginx.conf
├── tomcat-service.yaml
└── tomcat.yaml

2 directories, 15 files

2、env.sh

[eve-ng binary_install_k8s(main)] $ cat env.sh 
#!/bin/bash
function check ()
{
if [ $? == 0 ]
then
echo -e "\x1b[32;1m $1====> SUCCESS \x1b[0m"
else
echo -e "\x1b[31;1m $1====> FAILE \x1b[0m"
exit 1
fi
}

cat > /etc/hosts<<END
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.106.11 master1
192.168.106.12 master2
192.168.106.13 master3
192.168.106.21 node1
192.168.106.22 node2
192.168.106.23 node3
END
check "配置主机hosts文件"

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm bash-completion rsync expect &>>/dev/null
check "安装基础软件包"

expect <<EOF &>>/dev/null
set timeout 10
spawn ssh-keygen
expect {
"(/root/.ssh/id_rsa):" { send "\n";exp_continue }
"(empty for no passphrase):" { send "\n";exp_continue }
"again:" { send "\n";exp_continue }
}
EOF
check "生成公私匙文件"

for i in master1 master2 master3 node1 node2 node3
do
expect <<EOF &>>/dev/null
set timeout 10
spawn ssh-copy-id $i
expect {
"(yes/no)?" { send "yes\n";exp_continue }
"password:" { send "1\n";exp_continue }
}
EOF
done
check "配置主机间的免密登录"

swapoff -a &>>/dev/null
sed -i 's/\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/' /etc/fstab &>>/dev/null
check "关闭swap分区"

modprobe br_netfilter &>>/dev/null
echo 'modprobe br_netfilter' >> /etc/profile
cat > /etc/sysctl.d/k8s.conf<<END
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
END
sysctl -p /etc/sysctl.d/k8s.conf &>>/dev/null
check "加载br_netfilter内核参数"

systemctl disable firewalld --now &>>/dev/null
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config &>>/dev/null
setenforce 0 &>>/dev/null
sed -i "s/^#ClientAliveInterval.*/ClientAliveInterval 600/" /etc/ssh/sshd_config &>>/dev/null
sed -i "s/^#ClientAliveCountMax.*/ClientAliveCountMax 10/" /etc/ssh/sshd_config &>>/dev/null
systemctl restart sshd &>>/dev/null
check "关闭selinux与firewalld"

yum install -y yum-utils &>>/dev/null
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo &>>/dev/null
check "配置docker repo源"

yum install chrony -y &>>/dev/null
sed -i 's/^server.*//' /etc/chrony.conf &>>/dev/null
sed -i 's/# Please.*/server ntp.aliyun.com iburst/' /etc/chrony.conf &>>/dev/null
systemctl enable chronyd --now &>>/dev/null
check "配置时间同步"

cat > /etc/sysconfig/modules/ipvs.modules<<END
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ \$? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
END
bash /etc/sysconfig/modules/ipvs.modules &>>/dev/null
if [ `lsmod | grep ip_vs | wc -l` == 0 ]
then
    false    #如果没有任何ip_vs模块加载成功,让下面的check判定为失败
fi &>>/dev/null
check "开启ipvs"


yum install iptables-services -y &>>/dev/null
systemctl disable iptables --now &>>/dev/null
iptables -F &>>/dev/null
check "安装iptables"

yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y &>>/dev/null
mkdir -p /etc/docker &>>/dev/null
mkdir -p /data/docker &>>/dev/null
IP=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'} | awk -F "." {'print $4'}` &>>/dev/null
cat > /etc/docker/daemon.json<<END
{
"data-root":"/data/docker",
"registry-mirrors": ["https://oym4jkot.mirror.aliyuncs.com"],
"insecure-registries":["registry.access.redhat.com","quay.io"],
"bip":"172.106.$IP.1/24",
"live-restore":true,
"exec-opts": ["native.cgroupdriver=systemd"]
}
END
systemctl enable docker --now &>>/dev/null && systemctl status docker &>>/dev/null
check "安装与配置docker"

pvcreate /dev/sdb &>>/dev/null
vgextend centos /dev/sdb &>>/dev/null
lvextend -l +100%FREE /dev/mapper/centos-root &>>/dev/null
xfs_growfs /dev/mapper/centos-root &>>/dev/null
check "进行根分区扩容"

#创建目录
if [ `hostname` == master1 ]
then
mkdir -p /etc/etcd/ssl
mkdir /data/work -p
mkdir -p /var/lib/etcd/default.etcd
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes
mkdir ~/.kube -p
fi

if [ `hostname` == master2 ]
then
mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes
mkdir ~/.kube -p
fi

if [ `hostname` == master3 ]
then
mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes
mkdir ~/.kube -p
fi

if [[ `hostname` == node? ]]
then
mkdir /etc/kubernetes/ssl -p
mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
mkdir -p /var/lib/kube-proxy
fi
check "创建组件目录"

echo "bash /etc/sysconfig/modules/ipvs.modules" >> /etc/rc.d/rc.local
chmod u+x /etc/rc.d/rc.local

3、set-cer.sh

[eve-ng binary_install_k8s(main)] $ cat set-cer.sh 
#!/bin/bash
PASSWD=Aa792548841..

function check ()
{
if [ $? == 0 ]
then
echo -e "\x1b[32;1m $1====> SUCCESS \x1b[0m"
else
echo -e "\x1b[31;1m $1====> FAILE \x1b[0m"
exit 1
fi
}

cd /data/work
wget https://dl.k8s.io/v1.20.7/kubernetes-server-linux-amd64.tar.gz &>>/dev/null
tar xvf kubernetes-server-linux-amd64.tar.gz &>>/dev/null
cd /data/work/kubernetes/server/bin/
cp kube-apiserver kubectl kube-scheduler kube-controller-manager /usr/local/bin/ &>>/dev/null
scp kube-apiserver kubectl kube-scheduler kube-controller-manager master2:/usr/local/bin/ &>>/dev/null
scp kube-apiserver kubectl kube-scheduler kube-controller-manager master3:/usr/local/bin/ &>>/dev/null
check "下载kubectl、kube-apiserver、kube-scheduler、kube-controller-manager二进制安装包"

scp kubelet kube-proxy node1:/usr/local/bin/ &>>/dev/null
scp kubelet kube-proxy node2:/usr/local/bin/ &>>/dev/null
scp kubelet kube-proxy node3:/usr/local/bin/ &>>/dev/null
check "拷贝kubelet、kube-proxy拷贝到node节点"

expect <<END &>>/dev/null
set time 30
spawn scp 192.168.88.88:/root/script/binary_install_k8s/software_config/cfssl/* /data/work
expect {
"*yes/no" { send "yes\r"; exp_continue }
"*password:" { send "Aa792548841..\r" }
}
expect eof
END
check "从主机上安装证书所需文件"

chmod +x /data/work/cfssl*
mv /data/work/cfssl_linux-amd64 /usr/local/bin/cfssl &>>/dev/null
mv /data/work/cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo &>>/dev/null
mv /data/work/cfssljson_linux-amd64 /usr/local/bin/cfssljson &>>/dev/null
check "配置cfssl命令"

cat > /data/work/ca-csr.json<<END
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}
],
"ca": {
"expiry": "87600h"
}
}
END
check "配置ca证书请求文件"

cd /data/work
cfssl gencert -initca ca-csr.json | cfssljson -bare ca &>>/dev/null
check "生成ca证书pem与key"

cat > /data/work/ca-config.json<<END
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
END
check "配置ca证书配置文件"


cat > /data/work/etcd-csr.json<<END
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.14",
"192.168.106.15",
"192.168.106.16",
"192.168.106.100"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}]
}
END
cd /data/work/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd &>>/dev/null
check "生成etcd证书"

cp /data/work/ca*.pem /etc/etcd/ssl/
cp /data/work/etcd*.pem /etc/etcd/ssl/
scp /etc/etcd/ssl/* master2:/etc/etcd/ssl/ &>>/dev/null
scp /etc/etcd/ssl/* master3:/etc/etcd/ssl/ &>>/dev/null
check "拷贝证书到指定位置、拷贝证书到master节点的指定位置"

cd /data/work/
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
check "创建token.csv文件"

cat > /data/work/kube-apiserver-csr.json <<END
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.21",
"192.168.106.22",
"192.168.106.23",
"192.168.106.100",
"10.255.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}
]
}
END
check "生成kube-apiserver的证书请求文件"

cd /data/work
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver &>>/dev/null
check "生成kube-apiserver证书"


cp /data/work/ca*.pem /etc/kubernetes/ssl &>>/dev/null
cp /data/work/kube-apiserver*.pem /etc/kubernetes/ssl &>>/dev/null
cp /data/work/token.csv /etc/kubernetes/
scp /etc/kubernetes/ssl/* master2:/etc/kubernetes/ssl/ &>>/dev/null
scp /etc/kubernetes/ssl/* master3:/etc/kubernetes/ssl/ &>>/dev/null
check "拷贝kube-apiserver证书到指定文件夹及拷贝到其他master节点"

scp /etc/kubernetes/token.csv master2:/etc/kubernetes/ &>>/dev/null
scp /etc/kubernetes/token.csv master3:/etc/kubernetes/ &>>/dev/null
check "创建token.csv拷贝到mster2与master3节点"

cat > /data/work/admin-csr.json <<END
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "system:masters",
"OU": "system"
}
]
}
END
check "创建kubectl证书请求文件"

cd /data/work
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin &>>/dev/null
check "创建kubectl证书文件"

cp /data/work/admin*.pem /etc/kubernetes/ssl/ &>>/dev/null
scp /etc/kubernetes/ssl/admin*.pem master2:/etc/kubernetes/ssl/ &>>/dev/null
scp /etc/kubernetes/ssl/admin*.pem master3:/etc/kubernetes/ssl/ &>>/dev/null
check "拷贝kubectl证书到指定文件夹及拷贝到其他master节点"

cd /data/work
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube.config &>>/dev/null
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config &>>/dev/null
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config &>>/dev/null
kubectl config use-context kubernetes --kubeconfig=kube.config &>>/dev/null
cp /data/work/kube.config ~/.kube/config
check "创建kubeconfig配置文件"



scp /root/.kube/config master2:~/.kube/
scp /root/.kube/config master3:~/.kube/
check "拷贝证书到master2与master3节点"

cat > /data/work/kube-controller-manager-csr.json <<END
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.100"
],
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
END
check "生成kube-controller-manager证书请求文件"

cd /data/work
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager &>>/dev/null
check "生成kube-controller-manager证书"

cd /data/work
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube-controller-manager.kubeconfig &>>/dev/null
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig &>>/dev/null
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig &>>/dev/null
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig &>>/dev/null
check "创建kube-controller-manager的kubeconfig"

cp /data/work/kube-controller-manager.kubeconfig /etc/kubernetes/ &>>/dev/null
cp /data/work/kube-controller-manager*.pem /etc/kubernetes/ssl/ &>>/dev/null
scp /data/work/kube-controller-manager*.pem master2:/etc/kubernetes/ssl/ &>>/dev/null
scp /data/work/kube-controller-manager*.pem master3:/etc/kubernetes/ssl/ &>>/dev/null
scp /data/work/kube-controller-manager.kubeconfig master2:/etc/kubernetes/ &>>/dev/null
scp /data/work/kube-controller-manager.kubeconfig master3:/etc/kubernetes/ &>>/dev/null
check "拷贝kube-controller-manager证书到指定文件夹及拷贝到其他master节点"

cat > /data/work/kube-scheduler-csr.json <<END
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"192.168.106.11",
"192.168.106.12",
"192.168.106.13",
"192.168.106.100"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
END
check "创建kube-scheduler证书请求文件"

cd /data/work
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler &>>/dev/null
check "生成kube-scheduler证书"

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube-scheduler.kubeconfig &>>/dev/null
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig &>>/dev/null
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig &>>/dev/null
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig &>>/dev/null
check "创建kube-scheduler的kubeconfig"

cp /data/work/kube-scheduler.kubeconfig /etc/kubernetes/ &>>/dev/null
scp /data/work/kube-scheduler.kubeconfig master2:/etc/kubernetes/ &>>/dev/null
scp /data/work/kube-scheduler.kubeconfig master3:/etc/kubernetes/ &>>/dev/null
cp /data/work/kube-scheduler*.pem /etc/kubernetes/ssl/ &>>/dev/null
scp /data/work/kube-scheduler*.pem master2:/etc/kubernetes/ssl/ &>>/dev/null
scp /data/work/kube-scheduler*.pem master3:/etc/kubernetes/ssl/ &>>/dev/null
check "拷贝kube-scheduler证书到指定文件夹及拷贝到其他master节点"



cd /data/work/
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kubelet-bootstrap.kubeconfig &>>/dev/null
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig &>>/dev/null
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig &>>/dev/null
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig &>>/dev/null
check "创建kubelet-bootstrap.kubeconfig"


scp /data/work/kubelet-bootstrap.kubeconfig node1:/etc/kubernetes/ &>>/dev/null
scp /data/work/kubelet-bootstrap.kubeconfig node2:/etc/kubernetes/ &>>/dev/null
scp /data/work/kubelet-bootstrap.kubeconfig node3:/etc/kubernetes/ &>>/dev/null
scp /data/work/ca.pem node1:/etc/kubernetes/ssl/ &>>/dev/null
scp /data/work/ca.pem node2:/etc/kubernetes/ssl/ &>>/dev/null
scp /data/work/ca.pem node3:/etc/kubernetes/ssl/ &>>/dev/null
check "向node节点上拷贝证书文件"

cat > /data/work/kube-proxy-csr.json <<END
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HeNan",
"L": "ZhengZhou",
"O": "k8s",
"OU": "system"
}
]
}
END
check "创建kube-proxy证书请求文件"

cd /data/work/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy &>>/dev/null
check "生成kube-proxy证书"

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.106.11:6443 --kubeconfig=kube-proxy.kubeconfig &>>/dev/null
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig &>>/dev/null
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig &>>/dev/null
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig &>>/dev/null
check "创建kube-proxy的kubeconfig"

scp /data/work/kube-proxy.kubeconfig node1:/etc/kubernetes/ &>>/dev/null
scp /data/work/kube-proxy.kubeconfig node2:/etc/kubernetes/ &>>/dev/null
scp /data/work/kube-proxy.kubeconfig node3:/etc/kubernetes/ &>>/dev/null
check "拷贝kube-proxy的kubeconfig"

4、deploy_module_master.sh

[eve-ng binary_install_k8s(main)] $ cat deploy_module_master.sh
#!/bin/bash
function check ()
{
if [ $? == 0 ]
then
echo -e "\x1b[32;1m $1====> SUCCESS \x1b[0m"
else
echo -e "\x1b[31;1m $1====> FAILED \x1b[0m"
exit 1
fi
}

#部署etcd服务(master)
function deploy_etcd ()
{

IP_LOCAL=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}`
HOST_NAME_NUM=`cat /etc/hostname | awk -F "r" {'print $2'}`
cat > /etc/etcd/etcd.conf<<END
#[Member]
ETCD_NAME="etcd$HOST_NAME_NUM"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$IP_LOCAL:2380"
ETCD_LISTEN_CLIENT_URLS="https://$IP_LOCAL:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$IP_LOCAL:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$IP_LOCAL:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.106.11:2380,etcd2=https://192.168.106.12:2380,etcd3=https://192.168.106.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
END
check "创建etcd配置文件"

cat > /usr/lib/systemd/system/etcd.service<<END
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
--cert-file=/etc/etcd/ssl/etcd.pem \\
--key-file=/etc/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-cert-file=/etc/etcd/ssl/etcd.pem \\
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END
check "创建etcd启动文件"

expect <<EOF &>>/dev/null
set timeout 10
spawn scp 192.168.88.88:/root/software/etcd-v3.4.13-linux-amd64.tar.gz .
expect {
"(yes/no)?" { send "yes\n";exp_continue }
"password:" { send "Aa792548841..\n";exp_continue }
}
EOF
tar -xf etcd-v3.4.13-linux-amd64.tar.gz
cd etcd-v3.4.13-linux-amd64
cp -p etcd* /usr/local/bin/
check "拷贝etcd文件"

systemctl daemon-reload
systemctl enable etcd --now
check "启动etcd服务"
}
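# 补充:三个节点的 etcd 都启动后,可用类似下面的命令检查集群健康(示例,按需手动执行):
# ETCDCTL_API=3 etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
#   --endpoints="https://192.168.106.11:2379,https://192.168.106.12:2379,https://192.168.106.13:2379" endpoint health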

#部署api-server(master)
function deploy_apiserver ()
{

cat > /etc/kubernetes/kube-apiserver.conf <<END
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--anonymous-auth=false \\
--bind-address=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}` \\
--secure-port=6443 \\
--advertise-address=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}` \\
--insecure-port=0 \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth \\
--service-cluster-ip-range=10.255.0.0/16 \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://192.168.106.11:2379,https://192.168.106.12:2379,https://192.168.106.13:2379 \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4"
END
check "创建kube-apiserver配置文件"

cat > /usr/lib/systemd/system/kube-apiserver.service <<END
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END
check "创建kube-apiserver启动文件"

systemctl daemon-reload && systemctl enable kube-apiserver --now && systemctl status kube-apiserver
check "启动api-server服务"
}

#部署kubectl组件(master)
function deploy_kubectl ()
{
if [ `hostname` == master1 ]
then
cd /data/work
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
else
echo 666
fi &>>/dev/null
check "授权kubebernetes证书访问kubelet-api的权限"

yum install -y bash-completion &>>/dev/null
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile
echo "source '/root/.kube/completion.bash.inc'" >> /etc/bashrc
check "配置kubectl子命令补全"
}

#部署kube-controller-manager组件(master)
function deploy_controller_manager ()
{
cat > /etc/kubernetes/kube-controller-manager.conf <<END
KUBE_CONTROLLER_MANAGER_OPTS=" \\
--secure-port=10253 \\
--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=10.255.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.0.0.0/16 \\
--experimental-cluster-signing-duration=87600h \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--horizontal-pod-autoscaler-use-rest-clients=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--use-service-account-credentials=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
END
check "创建kube-controller-manager.conf配置文件"

cat > /usr/lib/systemd/system/kube-controller-manager.service <<END
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
END
check "创建kube-controller-manager启动文件"

systemctl daemon-reload && systemctl enable kube-controller-manager --now && systemctl status kube-controller-manager
check "启动kube-controller-manager服务"
}

#部署kube-scheduler组件(master)
function deploy_scheduler ()
{
cat > /etc/kubernetes/kube-scheduler.conf <<END
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
END
check "创建配置文件kube-scheduler.conf"

cat > /usr/lib/systemd/system/kube-scheduler.service <<END
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
END
check "创建kube-scheduler的启动文件"

systemctl daemon-reload && systemctl enable kube-scheduler --now && systemctl status kube-scheduler &>>/dev/null
check "启动kube-scheduler服务"
}

#部署kubelet服务(node)
function deploy_kubelet ()
{
cat > /etc/kubernetes/kubelet.json <<END
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "`ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.255.0.2"]
}
END
check "创建配置文件kubelet.json"

cat > /usr/lib/systemd/system/kubelet.service <<END
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--cert-dir=/etc/kubernetes/ssl \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet.json \\
--network-plugin=cni \\
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
END
check "创建服务kubelet的启动文件"

systemctl daemon-reload && systemctl enable kubelet --now && systemctl status kubelet &>>/dev/null
check "启动服务kubelet"
}

#部署kube-proxy组件(node)
function deploy_proxy ()
{
expect <<EOF &>>/dev/null
set timeout 10
spawn scp 192.168.88.88:/root/software/pause-cordns.tar.gz .
expect {
"(yes/no)?" { send "yes\n";exp_continue }
"password:" { send "Aa792548841..\n";exp_continue }
}
EOF
docker load -i pause-cordns.tar.gz &>>/dev/null
check "导入docker镜像"

cat > /etc/kubernetes/kube-proxy.yaml <<END
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.106.0/24
healthzBindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`:10256
kind: KubeProxyConfiguration
metricsBindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`:10249
mode: "ipvs"
END
check "创建kube-proxy配置文件"


cat > /usr/lib/systemd/system/kube-proxy.service <<END
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END
check "创建kube-proxy服务启动文件"

systemctl daemon-reload && systemctl enable kube-proxy --now && systemctl status kube-proxy &>>/dev/null
check "启动kube-proxy服务"
}

#部署calico组件(node)
function deploy_calico ()
{
expect <<EOF &>>/dev/null
set timeout 10
spawn scp 192.168.88.88:/root/software/*.tar.gz .
expect {
"(yes/no)?" { send "yes\n";exp_continue }
"password:" { send "Aa792548841..\n";exp_continue }
}
EOF
docker load -i calico.tar.gz
docker load -i pause-cordns.tar.gz
check "导入docker calico镜像"
}
deploy_etcd
deploy_apiserver
deploy_kubectl
deploy_controller_manager
deploy_scheduler
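三台 master 都执行完 deploy_module_master.sh 后,可以在 master1 上做一次基本检查(示例命令,仅供参考):

systemctl is-active etcd kube-apiserver kube-controller-manager kube-scheduler   # 各组件应均为 active
kubectl cluster-info
kubectl get cs          # 1.20 中已标记废弃,但仍可快速查看 scheduler/controller-manager/etcd 状态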

5、deploy_module_node.sh

[eve-ng binary_install_k8s(main)] $ cat deploy_module_node.sh 
#!/bin/bash
function check ()
{
if [ $? == 0 ]
then
echo -e "\x1b[32;1m $1====> SUCCESS \x1b[0m"
else
echo -e "\x1b[31;1m $1====> FAILED \x1b[0m"
exit 1
fi
}

#部署etcd服务(master)
function deploy_etcd ()
{
mkdir -p /etc/etcd && mkdir -p /etc/etcd/ssl
check "创建ectd工作目录"

IP_LOCAL=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}`
HOST_NAME_NUM=`cat /etc/hostname | awk -F "r" {'print $2'}`
mkdir -p /var/lib/etcd/default.etcd
cat > /etc/etcd/etcd.conf<<END
#[Member]
ETCD_NAME="etcd$HOST_NAME_NUM"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$IP_LOCAL:2380"
ETCD_LISTEN_CLIENT_URLS="https://$IP_LOCAL:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$IP_LOCAL:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$IP_LOCAL:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.106.11:2380,etcd2=https://192.168.106.12:2380,etcd3=https://192.168.106.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
END
check "创建etcd配置文件"

cat > /usr/lib/systemd/system/etcd.service<<END
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
--cert-file=/etc/etcd/ssl/etcd.pem \\
--key-file=/etc/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-cert-file=/etc/etcd/ssl/etcd.pem \\
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END
check "创建etcd启动文件"

systemctl daemon-reload
systemctl enable etcd --now
check "启动etcd服务"
}

#部署api-server(master)
function deploy_apiserver ()
{
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes
check "创建kubernetes目录"

cat > /etc/kubernetes/kube-apiserver.conf <<END
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--anonymous-auth=false \\
--bind-address=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}` \\
--secure-port=6443 \\
--advertise-address=`ip a s| grep eth0 | grep inet | awk {'print $2'}|awk -F "/" {'print $1'}` \\
--insecure-port=0 \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth \\
--service-cluster-ip-range=10.255.0.0/16 \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://192.168.106.11:2379,https://192.168.106.12:2379,https://192.168.106.13:2379 \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4"
END
check "创建kube-apiserver配置文件"

cat > /usr/lib/systemd/system/kube-apiserver.service <<END
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END
check "创建kube-apiserver启动文件"

systemctl daemon-reload && systemctl enable kube-apiserver --now && systemctl status kube-apiserver
check "启动api-server服务"
}

#部署kubectl组件(master)
function deploy_kubectl ()
{
if [ `hostname` == master1 ]
then
cd /data/work
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
else
echo 666
fi &>>/dev/null
check "授权kubebernetes证书访问kubelet-api的权限"

yum install -y bash-completion &>>/dev/null
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile
echo "source '/root/.kube/completion.bash.inc'" >> /etc/bashrc
check "配置kubectl子命令补全"
}

#部署kube-controller-manager组件(master)
function deploy_controller_manager ()
{
cat > /etc/kubernetes/kube-controller-manager.conf <<END
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \\
--secure-port=10252 \\
--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=10.255.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.0.0.0/16 \\
--experimental-cluster-signing-duration=87600h \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--horizontal-pod-autoscaler-use-rest-clients=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--use-service-account-credentials=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
END
check "创建kube-controller-manager.conf配置文件"

cat > /usr/lib/systemd/system/kube-controller-manager.service <<END
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
END
check "创建kube-controller-manager启动文件"

systemctl daemon-reload && systemctl enable kube-controller-manager --now && systemctl status kube-controller-manager
check "启动kube-controller-manager服务"
}

#部署kube-scheduler组件(master)
function deploy_scheduler ()
{
cat > /etc/kubernetes/kube-scheduler.conf <<END
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
END
check "创建配置文件kube-scheduler.conf"

cat > /usr/lib/systemd/system/kube-scheduler.service <<END
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
END
check "创建kube-scheduler的启动文件"

systemctl daemon-reload && systemctl enable kube-scheduler --now && systemctl status kube-scheduler &>>/dev/null
check "启动kube-scheduler服务"
}

#部署kubelet服务(node)
function deploy_kubelet ()
{
wget https://dl.k8s.io/v1.20.7/kubernetes-server-linux-amd64.tar.gz &>>/dev/null
tar -xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
mv kube-proxy kubelet /usr/local/bin/
cat > /etc/kubernetes/kubelet.json <<END
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "`ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.255.0.2"]
}
END
check "创建配置文件kubelet.json"

cat > /usr/lib/systemd/system/kubelet.service <<END
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--cert-dir=/etc/kubernetes/ssl \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet.json \\
--network-plugin=cni \\
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
END
check "创建服务kubelet的启动文件"

systemctl daemon-reload && systemctl enable kubelet --now && systemctl status kubelet &>>/dev/null
check "启动服务kubelet"
}
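# 注意:kubelet 首次启动后会向 apiserver 提交 CSR,需要在 master1 上执行
#   kubectl get csr
#   kubectl certificate approve <csr名称>
# 批准之后节点才会注册进集群(site.sh 中已包含这一步)。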

#部署kube-proxy组件(node)
function deploy_proxy ()
{
expect <<EOF &>>/dev/null
set timeout 10
spawn scp 192.168.88.88:/root/software/pause-cordns.tar.gz .
expect {
"(yes/no)?" { send "yes\n";exp_continue }
"password:" { send "Aa792548841..\n";exp_continue }
}
EOF
docker load -i pause-cordns.tar.gz &>>/dev/null
check "导入docker镜像"

cat > /etc/kubernetes/kube-proxy.yaml <<END
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.106.0/24
healthzBindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`:10256
kind: KubeProxyConfiguration
metricsBindAddress: `ip a s | grep eth0 | grep inet | awk {'print $2'} | awk -F "/" {'print $1'}`:10249
mode: "ipvs"
END
check "创建kube-proxy配置文件"

mkdir -p /var/lib/kube-proxy
cat > /usr/lib/systemd/system/kube-proxy.service <<END
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
END
check "创建kube-proxy服务启动文件"

systemctl daemon-reload && systemctl enable kube-proxy --now && systemctl status kube-proxy &>>/dev/null
check "启动kube-proxy服务"
}

#部署calico组件(node)
function deploy_calico ()
{
expect <<EOF &>>/dev/null
set timeout 10
spawn scp 192.168.88.88:/root/software/*.tar.gz .
expect {
"(yes/no)?" { send "yes\n";exp_continue }
"password:" { send "Aa792548841..\n";exp_continue }
}
EOF
docker load -i calico.tar.gz
docker load -i pause-cordns.tar.gz
check "导入docker calico镜像"
}
deploy_kubelet
deploy_proxy
deploy_calico

grep -r 6443 /etc/kubernetes/ | awk -F ":" {'print $1'} | xargs sed -i "s/192.168.106.11:6443/192.168.106.100:16443/"
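上面这条命令把 node 节点上各 kubeconfig 中的 apiserver 地址,从 192.168.106.11:6443 统一替换为 VIP 的 192.168.106.100:16443(即后面 nginx 四层代理监听的端口)。替换后可以这样确认(示例):

grep -r "server:" /etc/kubernetes/*.kubeconfig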

6、deploy_keepalived_nginx.sh

[eve-ng binary_install_k8s(main)] $ cat deploy_keepalived_nginx.sh
#!/bin/bash
#仅master1与master2执行
function check ()
{
if [ $? == 0 ]
then
echo -e "\x1b[32;1m $1====> SUCCESS \x1b[0m"
else
echo -e "\x1b[31;1m $1====> FAILED \x1b[0m"
exit 1
fi
}
yum install keepalived nginx nginx-mod-stream -y &>>/dev/null
check "安装keepalived与nginx软件"

cat > /etc/nginx/nginx.conf <<END
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
worker_connections 1024;
}

# 四层负载均衡,为三台Master的apiserver组件提供负载均衡
stream {

log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

access_log /var/log/nginx/k8s-access.log main;

upstream k8s-apiserver {
server 192.168.106.11:6443; # master1 APISERVER IP:PORT
server 192.168.106.12:6443; # master2 APISERVER IP:PORT
server 192.168.106.13:6443; # master3 APISERVER IP:PORT

}

server {
listen 16443; # 由于nginx与master节点复用,这个监听端口不能是6443,否则会冲突
proxy_pass k8s-apiserver;
}
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

server {
listen 80 default_server;
server_name _;

location / {
}
}
}
END
check "编写nginx配置文件"

systemctl enable nginx --now && systemctl status nginx
check "启动nginx服务"

if [ `hostname` == master1 ]
then
cat > /etc/keepalived/keepalived.conf<<END
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}

vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
state MASTER
interface eth0 # 修改为实际网卡名
virtual_router_id 51 # VRRP 路由 ID实例,每个实例是唯一的
priority 100 # 优先级,备服务器设置 90
advert_int 1 # 指定VRRP 心跳包通告间隔时间,默认1秒
authentication {
auth_type PASS
auth_pass 1111
}
# 虚拟IP
virtual_ipaddress {
192.168.106.100/24
}
track_script {
check_nginx
}
}
END
else
cat > /etc/keepalived/keepalived.conf<<END
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}

vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51 # VRRP 路由 ID实例,每个实例是唯一的
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.106.100/24
}
track_script {
check_nginx
}
}
END
fi &>>/dev/null
check "修改keepalived配置文件"

cat > /etc/keepalived/check_nginx.sh <<END
#!/bin/bash
counter=\`netstat -tunpl | grep nginx | wc -l\`
if [ \$counter -eq 0 ]; then
service nginx start
sleep 2
counter=\`netstat -tunpl | grep nginx | wc -l\`
if [ \$counter -eq 0 ]; then
service keepalived stop
fi
fi
END
chmod +x /etc/keepalived/check_nginx.sh
check "配置check_nginx存活性检测"


systemctl enable keepalived --now && systemctl status keepalived
check "启动keepalived服务"

7、site.sh

[eve-ng binary_install_k8s(main)] $ cat site.sh 
#!/bin/bash
for i in 11 12 13 21 22 23
do
expect <<EOF &>>/dev/null
set timeout 10
spawn scp -r /root/script/binary_install_k8s 192.168.106.$i:~
expect {
"(yes/no)?" { send "yes\n";exp_continue }
"password:" { send "1\n";exp_continue }
}
EOF
done

for i in 11 12 13 21 22 23
do
sshpass -p1 ssh 192.168.106.$i "bash binary_install_k8s/env.sh"
done

sshpass -p1 ssh 192.168.106.11 "bash binary_install_k8s/set-cer.sh"

for i in 11 12 13
do
sshpass -p1 ssh 192.168.106.$i "bash binary_install_k8s/deploy_module_master.sh"
done

for i in 21 22 23
do
sshpass -p1 ssh 192.168.106.$i "bash binary_install_k8s/deploy_module_node.sh"
done
sshpass -p1 ssh 192.168.106.11 "kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap"
sshpass -p1 ssh 192.168.106.11 "for i in \$(kubectl get csr | grep node | awk '{print \$1}'); do kubectl certificate approve \$i; done"


sshpass -p1 ssh 192.168.106.11 "kubectl apply -f binary_install_k8s/software_config/coredns.yaml"
sshpass -p1 ssh 192.168.106.11 "kubectl apply -f binary_install_k8s/software_config/calico.yaml"
sshpass -p1 ssh 192.168.106.11 "bash binary_install_k8s/deploy_keepalived_nginx.sh"
sshpass -p1 ssh 192.168.106.12 "bash binary_install_k8s/deploy_keepalived_nginx.sh"

for i in 21 22 23
do
sshpass -p1 ssh 192.168.106.$i "systemctl restart kubelet kube-proxy && systemctl status kubelet kube-proxy"
done
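site.sh 全部执行完毕后,可以在 master1 上确认集群整体状态(示例命令,仅供参考):

kubectl get nodes -o wide                 # 三个 node 应为 Ready
kubectl get pods -n kube-system -o wide   # calico 与 coredns 的 Pod 应为 Running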

8、check_nginx.sh

[eve-ng binary_install_k8s(main)] $ cat  software_config/check_nginx.sh 
#!/bin/bash
#1、判断Nginx是否存活
counter=`ps -C nginx --no-header | wc -l`
if [ $counter -eq 0 ]; then
#2、如果不存活则尝试启动Nginx
service nginx start
sleep 2
#3、等待2秒后再次获取一次Nginx状态
counter=`ps -C nginx --no-header | wc -l`
#4、再次进行判断,如Nginx还不存活则停止Keepalived,让地址进行漂移
if [ $counter -eq 0 ]; then
service keepalived stop
fi
fi
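keepalived 会周期性调用该脚本:nginx 挂掉时先尝试拉起,若仍失败则停止 keepalived,让 VIP 漂移到备机。可以按下面的思路手动验证漂移(示例,仅供参考):

systemctl stop keepalived                     # 在 master1 上模拟故障
ip addr show eth0 | grep 192.168.106.100      # 在 master2 上确认 VIP 已接管
systemctl start keepalived                    # 验证完成后在 master1 上恢复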
