↑↑↑↑ This article picks up where the previous one left off and continues the deployment ↑↑↑↑
We have already completed the deployment of a single master node; now we need to deploy additional master nodes to achieve high availability for the k8s cluster.
I. Complete the initialization of the master02 node
II. Complete the master02 node deployment based on the master01 node
Step 1: Prepare the files needed for the master node
These are: the ssl certificates for the etcd database; the kubernetes installation directory from the master01 node (binaries, certificates, startup parameter configuration files, and the kubeconfig cluster bootstrap files the components use to communicate with the apiserver); the cluster bootstrap file kubectl uses to communicate with the apiserver; and the service unit files for each component managed by systemd.
##From the etcd directory only the ssl certificates are needed; transfer the whole kubernetes installation directory, which contains the binaries, certificates, startup parameter configuration files, and cluster bootstrap files

[root@master01 opt]# ls
etcd  k8s  kubernetes  rh
[root@master01 opt]# scp -r kubernetes/ etcd/ master02:/opt/

##systemd service unit files
[root@master01 opt]# ls /usr/lib/systemd/system/kube*
/usr/lib/systemd/system/kube-apiserver.service  /usr/lib/systemd/system/kube-controller-manager.service  /usr/lib/systemd/system/kube-scheduler.service
[root@master01 opt]# scp /usr/lib/systemd/system/kube* master02:/usr/lib/systemd/system/

##cluster bootstrap file used by kubectl
[root@master01 opt]# ls /root/.kube/
cache  config
[root@master01 opt]# scp -r /root/.kube/ master02:/root/
root@master02's password:
config

On master02, keep only the necessary files.
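A quick way to confirm on master02 that everything arrived; a minimal sketch, assuming the usual bin/cfg/ssl layout under /opt/kubernetes used in the previous part:
ls /opt/etcd/ssl /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl    #certificates, binaries, and configuration files
ls /usr/lib/systemd/system/kube-*.service                                       #the three service unit files
ls /root/.kube/config                                                            #the cluster bootstrap file used by kubectl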
Step 2: Modify the listening addresses in the apiserver, controller-manager, and scheduler startup parameter configuration files, as well as the apiserver advertise address.
---------- master02 Node Deployment ----------
//Copy the certificate files, the configuration files of each master component, and the service management files from the master01 node to the master02 node
scp -r /opt/etcd/ root@192.168.20.10:/opt/
scp -r /opt/kubernetes/ root@192.168.20.10:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.20.10:/usr/lib/systemd/system/
//Modify the IPs in the kube-apiserver startup parameter configuration file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.20.15:2379,https://192.168.20.16:2379,https://192.168.20.17:2379 \
--bind-address=192.168.20.10 \        #modify to the master02 IP
--secure-port=6443 \
--advertise-address=192.168.20.10 \   #modify to the master02 IP
......
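If you prefer not to edit the file by hand, the same two changes can be scripted; a minimal sketch, assuming master01's IP is 192.168.20.15 as suggested by the etcd-servers list above:
sed -i 's/--bind-address=192.168.20.15/--bind-address=192.168.20.10/' /opt/kubernetes/cfg/kube-apiserver
sed -i 's/--advertise-address=192.168.20.15/--advertise-address=192.168.20.10/' /opt/kubernetes/cfg/kube-apiserver
grep -E 'bind-address|advertise-address' /opt/kubernetes/cfg/kube-apiserver    #verify both lines now show 192.168.20.10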
//Start the services on the master02 node and enable them at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service

//View the node status
ln -s /opt/kubernetes/bin/* /usr/local/bin/

kubectl get nodes
kubectl get nodes -o wide    #-o wide outputs additional information; for Pods it also shows the name of the Node each Pod runs on

//At this point, the node status seen from master02 is only information queried from etcd; the nodes have not yet established a real communication connection with master02, so a VIP is needed to associate the nodes with both master nodes.
Step 3: Start the apiserver, controller-manager, and scheduler in sequence, and verify the results.
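A quick way to validate step 3 once all three services are up (the same commands also work on master01):
kubectl get cs       #scheduler, controller-manager, and the etcd members should all report Healthy
kubectl get nodes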
III. Deploying nginx as a load balancer
------------------------------ Load Balancing Deployment ------------------------------
//Configure the load balancer cluster as dual-machine hot standby with load balancing (nginx for load balancing, keepalived for the hot standby)

##### Operate on the lb01 and lb02 nodes #####

//Configure the official online yum repository for nginx (or configure a local nginx yum repository)
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF

yum install nginx -y
//Modify the nginx configuration file to set up a layer-4 reverse proxy with load balancing, pointing at the IPs of the two master nodes of the k8s cluster on port 6443
vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}

#add
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.20.15:6443;
        server 192.168.20.10:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
......
//Checking configuration file syntax
-
nginx -t
-
-
//Start the nginx service and see that it is listening.6443ports
-
systemctl start nginx
-
systemctl enable nginx
-
netstat -natp | grep nginx
Note that the stream module is required for the four-layer proxy
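Before relying on the stream block, it is worth confirming that the installed nginx was built with the stream module; the official nginx.org package is, so this is just a sanity check:
nginx -V 2>&1 | grep -o with-stream    #no output here means the stream{} block above will fail to load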
IV. Deploy the keepalived service to provide high availability for the k8s cluster load balancers
//Deploy the keepalived service
yum install keepalived -y

//Modify the keepalived configuration file
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    # recipient email addresses
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    # sender email address
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER        #NGINX_MASTER on the lb01 node, NGINX_BACKUP on the lb02 node
}

# add a script to be executed periodically
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"    #path of the script that checks whether nginx is alive
}

vrrp_instance VI_1 {
    state MASTER                  #MASTER on the lb01 node, BACKUP on the lb02 node
    interface ens33               #NIC name, ens33
    virtual_router_id 51          #vrid, must be the same on both nodes
    priority 100                  #100 on the lb01 node, 90 on the lb02 node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.100/24         #the designated VIP
    }
    track_script {
        check_nginx               #reference the vrrp_script defined above
    }
}
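The lb02 node uses the same file with only router_id, state, and priority changed; a minimal sketch of deriving it from the lb01 copy (assuming the hostname lb02 resolves):
scp /etc/keepalived/keepalived.conf lb02:/etc/keepalived/
#then on lb02:
sed -i 's/NGINX_MASTER/NGINX_BACKUP/; s/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf
grep -E 'router_id|state|priority' /etc/keepalived/keepalived.conf    #confirm the three changed fields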
//Method 1: Create the nginx status check script
vim /etc/keepalived/check_nginx.sh
#!/bin/bash
#egrep -cv "grep|$$" counts the matching processes while excluding the grep process itself and the current shell's PID ($$)
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

//Method 2: Create the nginx status check script (alternative)
cat > /etc/keepalived/check_nginx.sh << 'EOF'
#!/bin/bash
killall -0 nginx &>/dev/null
if [ $? -ne 0 ];then
    systemctl stop keepalived
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh
//Start the keepalived service (be sure to start the nginx service first, then keepalived)
systemctl start keepalived
systemctl enable keepalived
ip a    #check whether the VIP has been generated

The backup node (lb02) uses the master node's configuration file with the fields noted above changed.
Verifying failover:
If nginx goes down on the master node, the VIP drifts to the backup node; restarting the nginx and keepalived services on the master node will preempt the VIP back.
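A minimal failover test, assuming lb01 currently holds the VIP 192.168.20.100 on ens33 as configured above:
#on lb01: simulate a failure
systemctl stop nginx
ip a show ens33 | grep 192.168.20.100    #the VIP disappears from lb01 once the check script stops keepalived
#on lb02: the VIP should now be here
ip a show ens33 | grep 192.168.20.100
#on lb01: recover, then take the VIP back
systemctl start nginx
systemctl start keepalived
ip a show ens33 | grep 192.168.20.100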
V. On the node nodes, change the server address in the kubeconfig cluster bootstrap configuration files to the VIP

//Modify the bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig files on the node nodes so that server points to the VIP
cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
server: https://192.168.20.100:6443

vim kubelet.kubeconfig
server: https://192.168.20.100:6443

vim kube-proxy.kubeconfig
server: https://192.168.20.100:6443

//Restart the kubelet and kube-proxy services
systemctl restart kubelet.service
systemctl restart kube-proxy.service

node02 needs the same changes as well.
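Once kubelet and kube-proxy point at the VIP, their apiserver traffic flows through nginx on the lb nodes; a hedged way to confirm this on lb01, assuming the access_log path configured earlier:
tail /var/log/nginx/k8s-access.log    #requests from the node IPs should appear, with both master IPs showing up as the upstream address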
VI. Point the cluster bootstrap configuration files on every master node to the local apiserver IP and port
[root@master02 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler
[root@master02 cfg]# vim kube-controller-manager.kubeconfig
[root@master02 cfg]# vim kube-scheduler.kubeconfig
[root@master02 cfg]# cd ~/.kube/
[root@master02 .kube]# ls
cache  config
[root@master02 .kube]# vim config
[root@master02 .kube]# ls /usr/lib/systemd/system/kube-*
/usr/lib/systemd/system/kube-apiserver.service  /usr/lib/systemd/system/kube-controller-manager.service  /usr/lib/systemd/system/kube-scheduler.service
[root@master02 .kube]# systemctl restart kube-apiserver.service
[root@master02 .kube]# systemctl restart kube-controller-manager.service
[root@master02 .kube]# systemctl restart kube-scheduler.service
At this point, the multi-master k8s cluster deployment is complete.
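A quick sanity check on master02 after the restarts (file names as used above):
grep server: /opt/kubernetes/cfg/kube-controller-manager.kubeconfig /opt/kubernetes/cfg/kube-scheduler.kubeconfig ~/.kube/config    #all three should point to the local apiserver
kubectl get cs    #should report Healthy via the local apiserver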
VII. Installing the dashboard
The dashboard is a web-based Kubernetes user interface. You can use the dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot containerized applications, and manage the cluster itself together with its accompanying resources. You can use the dashboard to get an overview of the applications running on the cluster, and to create or modify individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, and so on). For example, you can use the deployment wizard to scale a Deployment, initiate a rolling update, restart a Pod, or deploy a new application. The dashboard also provides information about the status of Kubernetes resources in the cluster and any errors that may have occurred.
//Operate on the master01 node
#Upload the recommended.yaml file to the /opt/k8s directory
cd /opt/k8s
vim recommended.yaml
#By default the Dashboard can only be accessed from inside the cluster; change the Service type to NodePort to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     #add
  type: NodePort          #add
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
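Before creating the admin account, it can help to confirm that the dashboard pods and the NodePort Service are up:
kubectl get pods,svc -n kubernetes-dashboard -o wide    #wait until the pods are Running and note the 443:30001 NodePort mapping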
#Create a service account and bind it to the default cluster-admin administrator cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output:
https://NodeIP:30001
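If only the raw token is needed rather than the full describe output, a sketch that works on clusters like this one where ServiceAccount token secrets are auto-created:
kubectl -n kube-system get secret $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d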
VIII. Summary of deploying a k8s cluster from binaries
1) Deploying etcd
- Use the cfssl tool to issue the certificate and private key files
- Unpack the etcd package to get the etcd and etcdctl binaries
- Prepare the etcd cluster configuration file
- Start the etcd process service and add all nodes to the etcd cluster
etcd operations:
#Check the health of the etcd cluster
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379,https://IP2:2379,https://IP3:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> -wtable endpoint health

#View etcd cluster status information
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379,https://IP2:2379,https://IP3:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> -wtable endpoint status

#View the list of etcd cluster members
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379,https://IP2:2379,https://IP3:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> -wtable member list

#Insert a key-value pair into etcd
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> put <KEY> '<VALUE>'

#View the value of a key
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> get <KEY>

#Delete a key
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> del <KEY>

#Back up the etcd database
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> snapshot save <backup file path>

#Restore the etcd database
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://IP1:2379" --cacert=<CA certificate> --cert=<client certificate> --key=<client private key> snapshot restore <backup file path>
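For example, with the certificate layout used when etcd was deployed in the earlier part (assuming /opt/etcd/ssl/ca.pem, server.pem, and server-key.pem):
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://192.168.20.15:2379,https://192.168.20.16:2379,https://192.168.20.17:2379" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem -wtable endpoint health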
2) Deploying the master components
- Use the cfssl tool to issue the certificate and private key files
- Download the K8S binary package to get the kube-apiserver, kube-controller-manager, and kube-scheduler binaries
- Prepare the bootstrap-token authentication file (token.csv) that kube-apiserver reads at startup
- Prepare the process service startup parameter configuration files for kube-apiserver, kube-controller-manager, and kube-scheduler
- Prepare the kubeconfig cluster bootstrap configuration files used by kube-controller-manager, kube-scheduler, and kubectl (for connecting to and authenticating with kube-apiserver)
- Start the kube-apiserver, kube-controller-manager, and kube-scheduler process services in order
- Run kubectl get cs to view the health status of the master components
3) Deploying the node components
- Obtain the kubelet and kube-proxy binaries
- Prepare the kubeconfig cluster bootstrap configuration files used by kubelet and kube-proxy (bootstrap.kubeconfig is the file kubelet uses to authenticate the first time it accesses the apiserver)
- Prepare the process service startup parameter configuration files for kubelet and kube-proxy
- Start the kubelet process service; it sends a CSR request to the apiserver for certificate issuance, and once the master approves the CSR request the kubelet obtains its certificate
- Load the ipvs modules and start the kube-proxy process service
- Install the CNI network plugin (flannel or calico) and CoreDNS
- Run kubectl get nodes to view the status of the node nodes
4) Deploying multi-master high availability
- Copy the master component binaries, certificates, private keys, startup parameter configuration files, and kubeconfig cluster bootstrap configuration files, together with the etcd certificate and private key files, to the new master node
- Modify the listening and advertise addresses in the kube-apiserver, kube-controller-manager, and kube-scheduler startup parameter configuration files, then start the service processes in turn
- Deploy the nginx/haproxy load balancers and keepalived high availability
- Change the server parameter in the kubeconfig cluster bootstrap configuration files of kubelet, kube-proxy, and kubectl to point to the keepalived VIP address, then restart the kubelet and kube-proxy service processes
- Change the server parameter in the kubeconfig cluster bootstrap configuration files of kube-controller-manager and kube-scheduler on each master node to point to its own local apiserver address, then restart the service processes