1. Role-based access control (RBAC)
Context:
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount in a specific namespace.
Task:
Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types:
Deployment
StatefulSet
DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
# Commands reconstructed from the task text and the describe output below;
# the original prompts were stripped during extraction.
kubectl create clusterrole deployment-clusterrole \
  --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create serviceaccount cicd-token -n app-team1
kubectl create rolebinding cicd-token-binding \
  --clusterrole=deployment-clusterrole \
  --serviceaccount=app-team1:cicd-token -n app-team1
kubectl describe rolebinding cicd-token-binding -n app-team1
Name:         cicd-token-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  deployment-clusterrole
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount  cicd-token  app-team1
2. Node maintenance: mark a node unschedulable
Search the official docs for "Safely Drain a Node".
Task:
Set the node ek8s-node-1 to unavailable, then reschedule all of the pods running on it.
# Commands reconstructed around the captured output (taken on a two-node
# practice cluster, master1/node1, rather than the exam's ek8s).
kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS      AGE     IP                NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-677cd97c8d-7xzrk   1/1     Running   0             2m19s   10.244.137.71     master1   <none>           <none>
calico-node-57lnd                          1/1     Running   2 (75m ago)   15h     192.168.100.131   node1     <none>           <none>
calico-node-zwmqq                          1/1     Running   2 (75m ago)   15h     192.168.100.130   master1   <none>           <none>
coredns-65c54cc984-lxgmv                   1/1     Running   2 (75m ago)   16h     10.244.137.69     master1   <none>           <none>
coredns-65c54cc984-sldrd                   1/1     Running   2 (75m ago)   16h     10.244.137.70     master1   <none>           <none>
etcd-master1                               1/1     Running   3 (75m ago)   16h     192.168.100.130   master1   <none>           <none>
kube-apiserver-master1                     1/1     Running   2 (75m ago)   16h     192.168.100.130   master1   <none>           <none>
kube-controller-manager-master1            1/1     Running   2 (75m ago)   16h     192.168.100.130   master1   <none>           <none>
kube-proxy-2vgjf                           1/1     Running   2 (75m ago)   15h     192.168.100.131   node1     <none>           <none>
kube-proxy-7cz5z                           1/1     Running   2 (75m ago)   16h     192.168.100.130   master1   <none>           <none>
kube-scheduler-master1                     1/1     Running   2 (75m ago)   16h     192.168.100.130   master1   <none>           <none>
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --ignore-daemonsets --delete-emptydir-data --force
3. Upgrading the Kubernetes version
Search the official docs for "kubeadm upgrade". The exam upgrades from 1.23 to 1.24; these practice notes use v1.23.2 as the example target version.
# Commands reconstructed where prompts were stripped during extraction.
kubectl get nodes
kubectl cordon master1
kubectl drain master1 --ignore-daemonsets
ssh master1                              # the capture has a typo, "matser01"
sudo -i
apt-get update                           # capture shows "apt-get upgrade"; the documented
apt-cache madison kubeadm                # flow is update, then install a pinned version
apt-get install kubeadm=1.23.2-00        # reconstructed
kubeadm version
kubeadm upgrade plan
kubeadm upgrade apply v1.23.2 --etcd-upgrade=false
apt-get install kubelet=1.23.2-00
kubelet --version
apt-get install kubectl=1.23.2-00
kubectl version
# the documented procedure also runs: systemctl daemon-reload && systemctl restart kubelet
exit
exit
kubectl uncordon master1
4. Etcd backup and restore
Before starting, confirm you are working as student@node-1.
export ETCDCTL_API=3
mkdir -p /srv/data
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /srv/data/etcd-snapshot.db
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot restore /srv/data/etcd-snapshot.db
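The bare `snapshot restore` above only unpacks the snapshot; in practice the restored data usually goes into a fresh data directory that the etcd static pod is then pointed at. A hedged sketch follows: the `/var/lib/etcd-restore` path and the manifest location are assumptions based on a default kubeadm layout, not taken from the capture, and the steps are wrapped in a function so nothing here runs against a live cluster.

```shell
# Hedged sketch; paths are assumptions from a default kubeadm install.
etcd_restore() {
  ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot restore /srv/data/etcd-snapshot.db \
    --data-dir=/var/lib/etcd-restore   # restore into an empty directory
  # point the etcd static pod (hostPath and --data-dir) at the restored copy;
  # kubelet re-creates the pod when the manifest changes:
  sed -i 's|/var/lib/etcd|/var/lib/etcd-restore|g' \
    /etc/kubernetes/manifests/etcd.yaml
}
```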
5. NetworkPolicy
kubectl create ns my-app
kubectl create ns echo
kubectl config use-context hk8s
kubectl get ns --show-labels
kubectl label ns echo project=echo
vim network_policy.yaml      # reconstructed; the original command was stripped
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: echo
    ports:
    - protocol: TCP
      port: 9000
kubectl apply -f network_policy.yaml
6. Layer-4 load balancing: Service
kubectl edit deploy front-end
# add a named port to the container spec:
        ports:
        - containerPort: 80
          name: http
kubectl get deploy front-end -o yaml
vim deploy_service.yaml      # reconstructed; the original command was stripped
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
    nodePort: 30080
kubectl apply -f deploy_service.yaml
7. Layer-7 load balancing: Ingress
kubectl config use-context k8s
vim ingress.yaml      # reconstructed; the filename is an assumption
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
kubectl apply -f ingress.yaml
8. Scaling pods up and down with a Deployment
kubectl scale --replicas=3 deployment/loadbalancer
9. Scheduling a pod to a specific node
vim pod-nodeselector.yaml      # reconstructed; the filename is an assumption
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - name: nginx
    image: nginx
kubectl apply -f pod-nodeselector.yaml
kubectl get pods nginx-kusc00401
NAME              READY   STATUS    RESTARTS   AGE
nginx-kusc00401   1/1     Running   0          3m48s
10. Checking the number of Ready nodes
# Commands reconstructed from the captured output (2, then 1).
kubectl get nodes | grep -w Ready | wc -l                      # nodes reporting Ready
2
kubectl describe nodes | grep Taints | grep -c NoSchedule      # nodes tainted NoSchedule
1
# 2 Ready nodes minus 1 tainted NoSchedule leaves 1; write that number into
# the answer file under /opt/KUSC00402 (the file name was not captured).
11. One pod with multiple containers
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
12. PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /srv/app-config
13. PersistentVolumeClaim
vim pvc.yaml      # reconstructed; the filenames are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
vim pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume
kubectl apply -f pvc.yaml -f pvc-pod.yaml   # reconstructed; the final captured command was stripped
14. Viewing pod logs
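No commands survived for this section in the capture. The usual shape of this task is to extract matching lines from a pod's logs into a file; the pod name `foo`, the pattern `RLIMIT_NOFILE`, and the output path below are illustrative assumptions, not taken from the source. Since no cluster is available here, the runnable part demonstrates the same grep-and-redirect pipeline on sample text:

```shell
# Real-cluster form (hypothetical names; substitute those from the task):
#   kubectl logs foo | grep "RLIMIT_NOFILE" > /opt/KUTR00101/foo
# Local demonstration of the same pipeline:
printf '%s\n' \
  'Mon Jan 1 INFO starting up' \
  'Mon Jan 1 ERROR RLIMIT_NOFILE soft limit reached' \
  > /tmp/sample-pod.log
grep 'RLIMIT_NOFILE' /tmp/sample-pod.log > /tmp/foo
cat /tmp/foo   # -> Mon Jan 1 ERROR RLIMIT_NOFILE soft limit reached
```

The redirect (`>`) matters on the exam: the graded artifact is the file, not the terminal output.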
15. Sidecar proxy
vim legacy-app.yaml      # reconstructed; the filename is an assumption
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox:1.28
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
kubectl get pods legacy-app -o yaml > sidecar.yaml
# Edit sidecar.yaml to add a second container sharing the log volume,
# then delete the pod and re-apply:
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
  namespace: default
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - "i=0; while true; do\n  echo \"$(date) INFO $i\" >> /var/log/legacy-app.log;\n  i=$((i+1));\n  sleep 1;\ndone\n"
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    name: count
    volumeMounts:
    - mountPath: /var/log
      name: varlog
  - args: [/bin/sh, -c, 'tail -n+1 -F /var/log/legacy-app.log']
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
    - mountPath: /var/log
      name: varlog
  volumes:
  - emptyDir: {}
    name: varlog
kubectl logs legacy-app -c busybox | wc -l   # reconstructed; the capture shows output 148
148
16. Checking pod CPU usage
kubectl top pods -l name=cpu-loader --sort-by=cpu -A
17. Cluster troubleshooting
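The capture ends here without any commands. In the common version of this task a worker node reports NotReady because kubelet is stopped on it; the node name `wk8s-node-0` and the kubelet diagnosis below are assumptions, not from the source. The steps are wrapped in a function so they read as one unit without executing against a live cluster:

```shell
# Hedged sketch; node name and the kubelet diagnosis are assumptions.
fix_notready_node() {
  kubectl get nodes                        # identify the NotReady node
  ssh wk8s-node-0 '
    sudo systemctl status kubelet          # confirm kubelet is stopped
    sudo systemctl enable --now kubelet    # start it and persist across reboots
  '
  kubectl get nodes                        # the node should return to Ready
}
```

`enable --now` matters: `start` alone would pass the immediate check but the fix would not survive a reboot, which graded environments may verify.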