
[kubernetes] Binary deployment of a k8s cluster: multi-master node load balancing and high availability (part 2)

↑↑↑↑ This post picks up where the previous one left off and continues the deployment ↑↑↑↑

We have already completed the deployment of a single master node, and now we need to complete the deployment of multiple master nodes and realize the high availability of the k8s cluster.

I. Complete the initialization operation of the master02 node

II. Building on the master01 node, complete the master02 node deployment

Step 1: Prepare the files needed for the master node

The ssl certificates for the etcd database, the kubernetes installation directory from the master01 node (binaries, the cluster bootstrap files used by the components to communicate with the apiserver, and the startup parameter configuration files), the cluster bootstrap file used by kubectl to communicate with the apiserver, and the systemd service files for each component.

  ##From the etcd directory only ssl/ is needed; transfer the whole kubernetes installation directory (binaries, certificates, startup parameter configuration files, cluster bootstrap files)
  [root@master01 opt]#ls
  etcd  k8s  kubernetes  rh
  [root@master01 opt]#scp -r kubernetes/ etcd/ master02:/opt/

  ##systemd service files for each component
  [root@master01 opt]#ls /usr/lib/systemd/system/kube*
  /usr/lib/systemd/system/kube-apiserver.service  /usr/lib/systemd/system/kube-controller-manager.service  /usr/lib/systemd/system/kube-scheduler.service
  [root@master01 opt]#scp /usr/lib/systemd/system/kube* master02:/usr/lib/systemd/system/

  ##cluster bootstrap file used by kubectl
  [root@master01 opt]#ls /root/.kube/
  cache  config
  [root@master01 opt]#scp -r /root/.kube/ master02:/root/
  root@master02's password:
  config

Keep only the necessary files on master02.
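
Before moving on, it is worth confirming on master02 that the copied layout matches master01. A minimal check, assuming the same /opt directory structure as on master01:

  #on master02
  ls /opt/etcd/ssl/
  ls /opt/kubernetes/bin/ /opt/kubernetes/cfg/ /opt/kubernetes/ssl/
  ls /root/.kube/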

Step 2: Modify the listening addresses in the apiserver, controller-manager, and scheduler startup parameter configuration files, as well as the apiserver advertise address.

  ---------- master02 Node Deployment ----------
  //Copy the certificate files, the configuration files of each master component, and the service management files from the master01 node to the master02 node
  scp -r /opt/etcd/ root@192.168.20.10:/opt/
  scp -r /opt/kubernetes/ root@192.168.20.10:/opt
  scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.20.10:/usr/lib/systemd/system/
  //Modify the IP addresses in the kube-apiserver configuration file
  vim /opt/kubernetes/cfg/kube-apiserver
  KUBE_APISERVER_OPTS="--logtostderr=true \
  --v=4 \
  --etcd-servers=https://192.168.20.15:2379,https://192.168.20.16:2379,https://192.168.20.17:2379 \
  --bind-address=192.168.20.10 \    #modify
  --secure-port=6443 \
  --advertise-address=192.168.20.10 \    #modify
  ......
  //Start the services on the master02 node and enable them to start at boot
  systemctl start kube-apiserver.service
  systemctl enable kube-apiserver.service
  systemctl start kube-controller-manager.service
  systemctl enable kube-controller-manager.service
  systemctl start kube-scheduler.service
  systemctl enable kube-scheduler.service
  //Check the status of the node nodes
  ln -s /opt/kubernetes/bin/* /usr/local/bin/
  kubectl get nodes
  kubectl get nodes -o wide    #-o wide: outputs additional information; for Pods, it shows the name of the Node each Pod runs on
  //At this point, the node status seen on master02 is only what is queried from etcd; the node nodes have not actually established a communication connection with master02 yet, so a VIP is needed to tie the node nodes to both master nodes

Step 3: Start apiserver, controller-manager, and scheduler in order, then validate them.
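
Once the three services are up on master02, a quick validation can be done with kubectl (kubectl get cs is deprecated in newer Kubernetes releases but still works for this check):

  #on master02
  kubectl get cs               #controller-manager, scheduler and the etcd members should all report Healthy
  kubectl get nodes -o wide    #the node list should match what master01 reports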

III. Deploying nginx as a load balancer

  ------------------------------ Load Balancing Deployment ------------------------------
  //Configure the load balancer cluster: nginx provides the load balancing, keepalived provides active/standby (hot standby) high availability
  ##### Operate on the lb01 and lb02 nodes #####
  //Configure nginx's official online yum repository as a local repo file
  cat > /etc/yum.repos.d/nginx.repo << 'EOF'
  [nginx]
  name=nginx repo
  baseurl=http://nginx.org/packages/centos/7/$basearch/
  gpgcheck=0
  EOF
  yum install nginx -y
  //Modify the nginx configuration file to set up a layer-4 reverse proxy for load balancing, pointing at the IPs of the two k8s master nodes on port 6443
  vim /etc/nginx/nginx.conf
  events {
      worker_connections 1024;
  }
  #add
  stream {
      log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
      access_log /var/log/nginx/k8s-access.log main;
      upstream k8s-apiserver {
          server 192.168.20.15:6443;
          server 192.168.20.10:6443;
      }
      server {
          listen 6443;
          proxy_pass k8s-apiserver;
      }
  }
  http {
  ......
  //Check the configuration file syntax
  nginx -t
  //Start the nginx service and check that it is listening on port 6443
  systemctl start nginx
  systemctl enable nginx
  netstat -natp | grep nginx

 

Note that the stream module is required for the four-layer proxy
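
The packages from nginx.org are normally built with the stream module compiled in; if you are unsure about your build, a quick check of the compile-time options:

  nginx -V 2>&1 | grep -o with-stream    #prints "with-stream" if the stream module is compiled in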

IV. Deploy the keepalived service to make the k8s cluster's load balancer highly available

  //Deploy the keepalived service
  yum install keepalived -y
  //Modify the keepalived configuration file
  vim /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived
  global_defs {
      # Notification e-mail recipients
      notification_email {
          acassen@firewall.loc
          failover@firewall.loc
          sysadmin@firewall.loc
      }
      # Sender address for notification e-mail
      notification_email_from Alexandre.Cassen@firewall.loc
      smtp_server 127.0.0.1
      smtp_connect_timeout 30
      router_id NGINX_MASTER    #NGINX_MASTER on the lb01 node, NGINX_BACKUP on the lb02 node
  }
  # Define a script that is executed periodically
  vrrp_script check_nginx {
      script "/etc/keepalived/check_nginx.sh"    #path of the script that checks whether nginx is alive
  }
  vrrp_instance VI_1 {
      state MASTER    #MASTER on the lb01 node, BACKUP on the lb02 node
      interface ens33    #name of the network interface, ens33
      virtual_router_id 51    #vrid, must be the same on both nodes
      priority 100    #100 on the lb01 node, 90 on the lb02 node
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          192.168.20.100/24    #the VIP
      }
      track_script {
          check_nginx    #the vrrp_script defined above
      }
  }
  //Method 1: create an nginx status check script
  vim /etc/keepalived/check_nginx.sh
  #!/bin/bash
  #egrep -cv "grep|$$" filters out the grep process itself and the current shell's PID ($$), counting the remaining nginx processes
  count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
  if [ "$count" -eq 0 ];then
      systemctl stop keepalived
  fi
  //Method 2: create an nginx status check script
  cat > /etc/keepalived/check_nginx.sh << 'EOF'
  #!/bin/bash
  killall -0 nginx &>/dev/null
  if [ $? -ne 0 ];then
      systemctl stop keepalived
  fi
  EOF
  chmod +x /etc/keepalived/check_nginx.sh
  //Start the keepalived service (be sure to start the nginx service first, then start keepalived)
  systemctl start keepalived
  systemctl enable keepalived
  ip a    #check whether the VIP has been created

 

The standby node (lb02) uses the master node's configuration file and changes only the values noted in the comments.
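
For reference, a minimal sketch of the only values that differ in /etc/keepalived/keepalived.conf on the lb02 node (everything else stays identical to lb01):

  router_id NGINX_BACKUP
  state BACKUP
  priority 90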

Verifying Failover

After the nginx and keepalived services are restarted on the master node, it takes the VIP back.
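
A minimal failover test, assuming lb01 currently holds the VIP and the check script above is in place:

  #on lb01: stop nginx; the check script stops keepalived and the VIP is released
  systemctl stop nginx
  ip addr show ens33    #the VIP should no longer be listed here
  #on lb02: the VIP should now be bound
  ip addr show ens33
  #on lb01: recover nginx first, then keepalived; with its higher priority, lb01 takes the VIP back
  systemctl start nginx
  systemctl start keepalived
  ip addr show ens33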

V. On the node nodes, change the server IP in the kubeconfig cluster bootstrap files (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig) to the VIP

  //On the node nodes, modify the bootstrap.kubeconfig, kubelet.kubeconfig and kube-proxy.kubeconfig files to use the VIP
  cd /opt/kubernetes/cfg/
  vim bootstrap.kubeconfig
  server: https://192.168.20.100:6443
  vim kubelet.kubeconfig
  server: https://192.168.20.100:6443
  vim kube-proxy.kubeconfig
  server: https://192.168.20.100:6443
  //Restart the kubelet and kube-proxy services
  systemctl restart kubelet.service
  systemctl restart kube-proxy.service

 

Similarly, node02 requires the same changes.
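
Instead of editing each file by hand, the same change can be made on node02 with a sed one-liner. A minimal sketch, assuming the kubeconfig files currently point at master01 (192.168.20.15) and the VIP is 192.168.20.100:

  cd /opt/kubernetes/cfg/
  sed -i 's#https://192.168.20.15:6443#https://192.168.20.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
  grep 'server:' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig    #confirm all three now point at the VIP
  systemctl restart kubelet.service kube-proxy.service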

VI. Point the cluster bootstrap configuration files on all master nodes at the local apiserver IP and port

  [root@master02 cfg]#ls
  kube-apiserver  kube-controller-manager  kube-controller-manager.kubeconfig  kube-scheduler  kube-scheduler.kubeconfig
  ##point server: in each kubeconfig at the local apiserver (on master02: server: https://192.168.20.10:6443)
  [root@master02 cfg]#vim kube-controller-manager.kubeconfig
  [root@master02 cfg]#vim kube-scheduler.kubeconfig
  [root@master02 cfg]#cd ~/.kube/
  [root@master02 .kube]#ls
  cache  config
  [root@master02 .kube]#vim config
  [root@master02 .kube]#ls /usr/lib/systemd/system/kube-*
  /usr/lib/systemd/system/kube-apiserver.service  /usr/lib/systemd/system/kube-controller-manager.service  /usr/lib/systemd/system/kube-scheduler.service
  [root@master02 .kube]#systemctl restart kube-apiserver.service
  [root@master02 .kube]#systemctl restart kube-controller-manager.service
  [root@master02 .kube]#systemctl restart kube-scheduler.service

 

At this point the k8s cluster has been deployed
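
A simple end-to-end check that node traffic is really flowing through the load balancer, assuming the access_log path configured in the nginx stream block above:

  #on lb01: the log entries should show node IPs being proxied to both apiservers ($upstream_addr)
  tail /var/log/nginx/k8s-access.log
  #on either master: the cluster should still respond normally
  kubectl get nodes -o wide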

VII. Installation of the dashboard

The dashboard is a web-based Kubernetes user interface. You can use the dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot containerized applications, and manage the cluster itself along with its accompanying resources. You can use the dashboard to get an overview of the applications running on the cluster, and to create or modify individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, and so on). For example, you can use the deployment wizard to scale a Deployment, initiate a rolling update, restart a Pod, or deploy a new application. The dashboard also provides information about the status of Kubernetes resources in the cluster and any errors that may have occurred.

  //Operate on the master01 node
  #Upload the recommended.yaml file to the /opt/k8s directory
  cd /opt/k8s
  vim recommended.yaml
  #By default the Dashboard can only be accessed from inside the cluster; change the Service type to NodePort to expose it externally:
  kind: Service
  apiVersion: v1
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
  spec:
    ports:
      - port: 443
        targetPort: 8443
        nodePort: 30001    #add
    type: NodePort    #add
    selector:
      k8s-app: kubernetes-dashboard
  kubectl apply -f recommended.yaml
  #Create a service account and bind it to the default cluster-admin administrator cluster role
  kubectl create serviceaccount dashboard-admin -n kube-system
  kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
  kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard using the token from the output above:
https://NodeIP:30001 
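
If the page does not load, a quick check that the dashboard Pod is running and the NodePort Service was created as expected:

  kubectl get pods,svc -n kubernetes-dashboard -o wide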

VIII. Summary: deploying a k8s cluster from binaries

1) Deploying etcd

  • Issue certificate and private key files using the cfssl tools
  • Unpack the etcd package to get the etcd and etcdctl binaries
  • Prepare the etcd cluster configuration file
  • Start the etcd process service and add all nodes to the etcd cluster
  etcd operations:
  #Check the health of the etcd cluster
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379,https://IP2:2379,https://IP3:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> -w table endpoint health
  #View etcd cluster status information
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379,https://IP2:2379,https://IP3:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> -w table endpoint status
  #View the list of etcd cluster members
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379,https://IP2:2379,https://IP3:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> -w table member list
  #Insert a key-value pair into etcd
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> put <KEY> '<VALUE>'
  #View the value of a key
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> get <KEY>
  #Delete a key
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> del <KEY>
  #Back up the etcd database
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> snapshot save <backup_file_path>
  #Restore the etcd database
  ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA_cert> --cert=<client_cert> --key=<client_key> snapshot restore <backup_file_path>
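
As a concrete example, a health check against the etcd cluster built in the previous part, assuming its certificates live under /opt/etcd/ssl/ (adjust the paths if yours differ):

  ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --endpoints="https://192.168.20.15:2379,https://192.168.20.16:2379,https://192.168.20.17:2379" \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  -w table endpoint health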

2) Deploying the master component

  • Issue certificate and private key files using the cfssl tools
  • Download the k8s server package to obtain the kube-apiserver, kube-controller-manager and kube-scheduler binaries
  • Prepare the bootstrap-token authentication file (token.csv) that kube-apiserver reads at startup
  • Prepare the startup parameter configuration files for the kube-apiserver, kube-controller-manager and kube-scheduler services
  • Prepare the kubeconfig cluster bootstrap configuration files for kube-controller-manager, kube-scheduler and kubectl (used to connect to and authenticate against kube-apiserver); see the sketch after this list
  • Start the kube-apiserver, kube-controller-manager and kube-scheduler services in order
  • Execute the kubectl get cs command to view the health status of the master component
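
The kubeconfig cluster bootstrap files mentioned above are generated with kubectl config. A minimal sketch for the kube-scheduler file, assuming the apiserver address and the certificate paths (ca.pem, kube-scheduler.pem, kube-scheduler-key.pem) match your environment:

  KUBE_APISERVER="https://192.168.20.15:6443"
  kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-scheduler.kubeconfig
  kubectl config set-credentials kube-scheduler \
    --client-certificate=./kube-scheduler.pem \
    --client-key=./kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig
  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig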

3) Deploying node components

  • Obtain the kubelet and kube-proxy binaries
  • Prepare the kubeconfig cluster bootstrap configuration files used by kubelet and kube-proxy (the bootstrap file is what kubelet uses to authenticate the first time it accesses the apiserver)
  • Prepare the startup parameter configuration files for the kubelet and kube-proxy services
  • Start the kubelet service; it sends a CSR request to the apiserver, and once the master approves the CSR the kubelet automatically obtains its certificate
  • Load the ipvs kernel modules (see the sketch after this list) and start the kube-proxy service
  • Install the CNI network plugin (flannel or calico) and CoreDNS
  • Execute the kubectl get nodes command to view the status of node nodes
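
The ipvs modules mentioned above can be loaded with a short loop. A minimal sketch (on older CentOS 7 kernels the conntrack module may be named nf_conntrack_ipv4 instead of nf_conntrack):

  for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
      modprobe $mod
  done
  lsmod | grep ip_vs    #confirm the modules are loaded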

4) Deploy multi-master high availability

  • Copy the master component binaries, certificates, private keys, startup parameter configuration files and kubeconfig cluster bootstrap configuration files, as well as the etcd certificate and private key files, to the new master node
  • Modify the listening and advertise addresses in the kube-apiserver, kube-controller-manager and kube-scheduler startup parameter configuration files, then restart the service processes in turn
  • Deploy the nginx/haproxy load balancer and keepalived high availability
  • Modify the server parameter in the kubeconfig cluster bootstrap configuration files of kubelet, kube-proxy and kubectl to point to the keepalived VIP address, then restart the kubelet and kube-proxy service processes
  • On the other master nodes, modify the server parameter in the kubeconfig cluster bootstrap configuration files of kube-controller-manager and kube-scheduler to point to their own local apiserver address, then restart the service processes