
Table of Contents

    • Cluster Configuration
      • Configuration List
      • Cluster Planning
      • Cluster Network Planning
    • Environment Initialization
      • Host Configuration
    • Configuring a Highly Available ApiServer
      • Installing nginx
      • Installing Keepalived
    • Installation Scripts
      • Script Requiring a Proxy
      • Script Not Requiring a Proxy
      • Initializing master01
      • Configuring Shell Completion
      • Joining the Remaining Nodes
    • Verifying the Cluster

Cluster Configuration

Configuration List

  • OS: Ubuntu 20.04
  • Kubernetes: 1.29.1
  • Container runtime: containerd 1.7.11
  • OCI runtime: runc 1.1.10
  • CNI: cni-plugins 1.4.0

Cluster Planning

| IP              | Hostname | Specs                       |
|-----------------|----------|-----------------------------|
| 192.168.254.130 | master01 | 2 CPU, 4 GB RAM, 30 GB disk |
| 192.168.254.131 | master02 | 2 CPU, 4 GB RAM, 30 GB disk |
| 192.168.254.132 | node01   | 2 CPU, 4 GB RAM, 30 GB disk |

Cluster Network Planning

  • Pod network: 10.244.0.0/16
  • Service network: 10.96.0.0/12
  • Node network: 192.168.254.0/24
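Before proceeding, it is worth confirming that each host actually sits in the planned node network and that the VIP claimed later by keepalived (192.168.254.100) is still unused. A minimal check, assuming the interface name ens33 that the keepalived configuration below binds to:

ip -4 addr show ens33       # the host address should fall inside 192.168.254.0/24
ping -c 1 192.168.254.100   # should get no reply yet: the VIP must be free before keepalived claims it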

Environment Initialization

Host Configuration

ssh-keygen
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.254.131
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.254.132

# Add the nodes to /etc/hosts
cat << EOF >> /etc/hosts
192.168.254.130 master01
192.168.254.131 master02
192.168.254.132 node01
EOF
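A quick check that passwordless SSH and the new hosts entries both work; a minimal loop over the hostnames configured above:

# Each command should print the remote hostname without asking for a password
for h in master01 master02 node01; do
  ssh -o BatchMode=yes root@$h hostname
done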

Configuring a Highly Available ApiServer

Installing nginx

Run this on every master node.

apt install nginx -y
systemctl status nginx

# Edit the nginx configuration file
cat /etc/nginx/nginx.conf
user user;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

# Only the stream block below is added; everything else keeps its defaults
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.254.130:6443;    # master01 IP and port 6443
        server 192.168.254.131:6443;    # master02 IP and port 6443
    }
    server {
        listen 16443;                   # listen on 16443: nginx shares the host with a master, so 6443 is already taken
        proxy_pass k8s-apiserver;       # reverse-proxy to the upstream via proxy_pass
    }
}
...

# Restart the nginx service
systemctl restart nginx && systemctl enable nginx && systemctl status nginx

# Port check
# netstat -lntup | grep 16443
nc -l -p 16443
# nc: Address already in use  (expected: nginx already owns the port)
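At this point only the listener can be verified, because nothing answers on the upstream 6443 ports until kubeadm init has run. A small sanity check:

# Confirm nginx holds port 16443
ss -lntp | grep 16443

# Once the cluster is up, the apiserver should answer through the VIP:
# curl -k https://192.168.254.100:16443/healthz   # expected output: ok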

Installing Keepalived

Run this on every master node.

apt install keepalived -y

# Write the nginx health-check script
# (EOF is quoted so the backticks and $counter are not expanded while writing the file)
cat << 'EOF' > /etc/keepalived/nginx_check.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=`ps -C nginx --no-header | wc -l`
if [ $counter -eq 0 ]; then
    # 2. If not, try to start nginx (it was installed via apt, so start it through systemd)
    systemctl start nginx
    sleep 2
    # 3. After 2 seconds, check nginx again
    counter=`ps -C nginx --no-header | wc -l`
    # 4. If nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        killall keepalived
    fi
fi
EOF
chmod +x /etc/keepalived/nginx_check.sh

Update the keepalived configuration on master01:

cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   ## path of the nginx health-check script
    interval 2                                ## check interval
    weight -20                                ## subtract 20 from the priority when the check fails
}
vrrp_instance VI_1 {
    state MASTER              ## MASTER on the primary node, BACKUP on the standby
    interface ens33           ## interface to bind the VIP to; same interface as the host IP
    virtual_router_id 100     ## virtual router id; must match on both nodes
    priority 100              ## node priority, range 0-254; MASTER must be higher than BACKUP
    advert_int 1
    authentication {          ## authentication settings; must be identical on both nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx             ## run the nginx health check
    }
    virtual_ipaddress {
        192.168.254.100       ## the VIP; must be identical on both nodes (multiple entries allowed)
    }
}
EOF
systemctl restart keepalived && systemctl enable keepalived.service
ip a | grep 192.168.254.100

Update the keepalived configuration on master02:

cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   ## path of the nginx health-check script
    interval 2                                ## check interval
    weight -20                                ## subtract 20 from the priority when the check fails
}
vrrp_instance VI_1 {
    state BACKUP              ## MASTER on the primary node, BACKUP on the standby
    interface ens33           ## interface to bind the VIP to; same interface as the host IP
    virtual_router_id 100     ## virtual router id; must match on both nodes
    priority 90               ## node priority, range 0-254; MASTER must be higher than BACKUP
    advert_int 1
    authentication {          ## authentication settings; must be identical on both nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx             ## run the nginx health check
    }
    virtual_ipaddress {
        192.168.254.100       ## the VIP; must be identical on both nodes (multiple entries allowed)
    }
}
EOF
systemctl restart keepalived && systemctl enable keepalived.service
ip a | grep 192.168.254.100
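With keepalived running on both masters, the VIP should sit on master01, which has the higher priority. A quick failover test, assuming the configuration above:

# On master01: the VIP should be present
ip a | grep 192.168.254.100

# Simulate a failure: stop keepalived on master01
systemctl stop keepalived       # on master01
ip a | grep 192.168.254.100     # on master02: the VIP should now appear here

# Restore master01; with the default preemption it takes the VIP back
systemctl start keepalived      # on master01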

Installation Scripts

**Prerequisite:** the script pulls resources hosted outside China, so you need to set up a proxy first ==> [How to give a virtual machine a pleasant network environment](https://ai-feier.github.io/p/%E5%A6%82%E4%BD%95%E8%AE%A9%E8%99%9A%E6%8B%9F%E6%9C%BA%E6%8B%A5%E6%9C%89%E6%84%89%E5%BF%AB%E7%BD%91%E7%BB%9C%E7%8E%AF%E5%A2%83/)

You need:

  • a proxy for the virtual machine
  • a proxy for apt downloads

Script Requiring a Proxy

Run the following script on all nodes.

The script does the following:

  • synchronizes time
  • disables swap
  • enables the required kernel modules
  • installs ipvs and enables the related kernel parameters
  • installs containerd, runc, and the CNI plugins
  • switches the containerd sandbox image and cgroup driver, and configures registry mirrors
  • installs the latest kubelet, kubeadm, and kubectl from the v1.29 package repository

Note: first set the current node's hostname via export name=master01.

install.sh:

export name=master01   # change to this node's hostname; run it before the script and remove this line from install.sh itself

#!/bin/bash
hostnamectl set-hostname $name

# Switch to the Aliyun apt mirror
mv /etc/apt/sources.list /etc/apt/sources.list.bak
cat <<EOF > /etc/apt/sources.list
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update

# Time synchronization
timedatectl set-timezone Asia/Shanghai
# Install chrony and sync time over the network
apt install chrony -y && systemctl enable --now chronyd

# Disable swap
sudo swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab

# Install ipvs tooling
apt install -y ipset ipvsadm

# Kernel modules required by Kubernetes
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules now
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply the sysctl parameters
sudo sysctl --system

# Verify the configuration
# lsmod | grep br_netfilter
# lsmod | grep overlay
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# ipvs kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the ipvs modules now
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
# Confirm the ipvs modules are loaded
# lsmod | grep -e ip_vs -e nf_conntrack

# Install containerd
wget -c https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
# The archive unpacks into a bin/ directory that holds the containerd binaries
tar -xzvf containerd-1.7.11-linux-amd64.tar.gz
mv bin/* /usr/local/bin/
rm -rf bin

# Manage containerd with systemd
cat << EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now containerd
# systemctl status containerd

# Install runc
# runc is the low-level OCI runtime implementing the container lifecycle commands (init, run, create, ps, ...)
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 && \
install -m 755 runc.amd64 /usr/local/sbin/runc

# Install the CNI plugins
wget -c https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
# Following the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/

# Adjust the containerd configuration
# (by default containerd pulls images from the upstream Kubernetes registry)
# Create a directory for the containerd configuration file
mkdir -p /etc/containerd
# Dump the default configuration to a file
containerd config default | sudo tee /etc/containerd/config.toml
# Switch the sandbox (pause) image to the Aliyun mirror
sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Use the systemd cgroup driver
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Enable per-registry mirror configuration
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml

# Configure containerd registry mirrors
# Docker Hub mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
[host."https://reg-mirror.qiniu.com"]
  capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF
# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"
[host."https://k8s-gcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF
# Restart containerd
systemctl restart containerd
# systemctl status containerd

# Install kubeadm, kubelet, kubectl
# Dependencies
sudo systemctl restart containerd
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# Start kubelet now and on boot
systemctl enable --now kubelet

# Point crictl at the containerd socket
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
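Once the script has finished, a few spot checks confirm that everything landed where expected (version numbers per the configuration list above):

containerd --version        # expect 1.7.11
runc --version              # expect 1.1.10
ls /opt/cni/bin             # the CNI plugin binaries
kubeadm version -o short    # expect v1.29.x
kubelet --version
crictl info | head          # crictl should reach containerd through the configured socket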

Script Not Requiring a Proxy

Prerequisites:

Download the resource bundle I prepared, from any of:

  • CSDN resource – free

  • Alibaba Cloud OSS

  • GitLab

Resource list:

| Resource                            | Original URL |
|-------------------------------------|--------------|
| containerd 1.7.11                   | https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz |
| runc 1.1.10                         | https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 |
| cni-plugins 1.4.0                   | https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz |
| calico 3.27: tigera-operator.yaml   | https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml |
| calico 3.27: custom-resources.yaml  | https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml |

Download the resources:

wget -O k8s1.29.tar.gz https://blog-source-mkt.oss-cn-chengdu.aliyuncs.com/resources/k8s/kubeadm%20init/k8s1.29.tar.gz
tar xzvf k8s1.29.tar.gz
cd workdir
export name=master01   # change to this node's hostname
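Optionally sanity-check the bundle before running the script; the exact file names inside workdir/ are an assumption based on the resource list above:

ls   # run inside workdir/; expect roughly:
     #   containerd-1.7.11-linux-amd64.tar.gz, runc.amd64,
     #   cni-plugins-linux-amd64-v1.4.0.tgz,
     #   tigera-operator.yaml, custom-resources.yaml, install.sh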

Run the following script on all nodes.

The script does the following:

  • synchronizes time
  • disables swap
  • enables the required kernel modules
  • installs ipvs and enables the related kernel parameters
  • installs containerd, runc, and the CNI plugins (from the local bundle)
  • switches the containerd sandbox image and cgroup driver, and configures registry mirrors
  • installs the latest kubelet, kubeadm, and kubectl from the v1.29 package repository

Note: first set the current node's hostname via export name=master01.

install.sh:

#!/bin/bash
hostnamectl set-hostname $name

# Switch to the Aliyun apt mirror
mv /etc/apt/sources.list /etc/apt/sources.list.bak
cat <<EOF > /etc/apt/sources.list
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update

# Time synchronization
timedatectl set-timezone Asia/Shanghai
# Install chrony and sync time over the network
apt install chrony -y && systemctl enable --now chronyd

# Disable swap
sudo swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab

# Install ipvs tooling
apt install -y ipset ipvsadm

# Kernel modules required by Kubernetes
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules now
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply the sysctl parameters
sudo sysctl --system

# Verify the configuration
# lsmod | grep br_netfilter
# lsmod | grep overlay
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# ipvs kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the ipvs modules now
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
# Confirm the ipvs modules are loaded
# lsmod | grep -e ip_vs -e nf_conntrack

# Install containerd (archive already in the bundle)
# wget -c https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
# The archive unpacks into a bin/ directory that holds the containerd binaries
tar -xzvf containerd-1.7.11-linux-amd64.tar.gz
mv bin/* /usr/local/bin/
rm -rf bin

# Manage containerd with systemd
cat << EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now containerd
# systemctl status containerd

# Install runc (binary already in the bundle)
# runc is the low-level OCI runtime implementing the container lifecycle commands (init, run, create, ps, ...)
# curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc

# Install the CNI plugins (archive already in the bundle)
# wget -c https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
# Following the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/

# Adjust the containerd configuration
# (by default containerd pulls images from the upstream Kubernetes registry)
# Create a directory for the containerd configuration file
mkdir -p /etc/containerd
# Dump the default configuration to a file
containerd config default | sudo tee /etc/containerd/config.toml
# Switch the sandbox (pause) image to the Aliyun mirror
sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Use the systemd cgroup driver
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Enable per-registry mirror configuration
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml

# Configure containerd registry mirrors
# Docker Hub mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
[host."https://reg-mirror.qiniu.com"]
  capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF
# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"
[host."https://k8s-gcr.m.daocloud.io"]
  capabilities = ["pull", "resolve", "push"]
EOF
# Restart containerd
systemctl restart containerd
# systemctl status containerd

# Install kubeadm, kubelet, kubectl
# Dependencies
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# Start kubelet now and on boot
systemctl enable --now kubelet

# Point crictl at the containerd socket
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock

Make the script executable and run it:

chmod +x install.sh
./install.sh

Initializing master01

Export the environment variables:

export K8S_VERSION=1.29.1                  # k8s cluster version
export POD_CIDR=10.244.0.0/16              # pod subnet
export SERVICE_CIDR=10.96.0.0/12           # service subnet
export APISERVER_MASTER01=192.168.254.130  # master01 IP
export APISERVER_HA=192.168.254.100        # cluster VIP address
export APISERVER_HA_PORT=16443             # cluster VIP port
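Before initializing, a quick sanity check that the variables are set and the VIP from the keepalived section answers:

env | grep -E 'K8S_VERSION|POD_CIDR|SERVICE_CIDR|APISERVER'   # confirm the variables
ping -c 1 $APISERVER_HA                                       # the VIP should reply now
nc -zv $APISERVER_HA $APISERVER_HA_PORT                       # nginx should accept connections on 16443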

Initialize the cluster on your primary master node (again, inside workdir/):

# CLI-style init; with this approach kube-proxy must later be switched to ipvs mode by hand
# kubeadm init --apiserver-advertise-address=$APISERVER_MASTER01 --apiserver-bind-port=6443 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.29.1 --service-cidr=$SERVICE_CIDR --pod-network-cidr=$POD_CIDR --upload-certs

# kubeadm config print init-defaults > Kubernetes-cluster.yaml   # kubeadm's default configuration
cat << EOF > Kubernetes-cluster.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Replace with the primary master's IP; the etcd container binds to this address and fails if it does not exist on the host
  advertiseAddress: $APISERVER_MASTER01
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: $name   # node hostname
  taints: null
---
# controlPlaneEndpoint points at the highly available ApiServer
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:   # master node IPs
  - $APISERVER_HA
  - $APISERVER_MASTER01
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: "$APISERVER_HA:$APISERVER_HA_PORT"
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  # an external etcd cluster could be used here instead
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # mirror inside China
kind: ClusterConfiguration
kubernetesVersion: $K8S_VERSION
networking:
  dnsDomain: cluster.local
  # added: specify the pod subnet
  podSubnet: $POD_CIDR
  serviceSubnet: $SERVICE_CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # kube-proxy uses ipvs
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF

kubeadm init --config Kubernetes-cluster.yaml --upload-certs

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install calico
sed -i 's#cidr.*#cidr: '$POD_CIDR'#' custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
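The control plane and calico take a few minutes to come up; progress can be watched like this:

kubectl get nodes           # master01 turns Ready once calico is running
watch kubectl get pods -A   # wait for the calico-system pods to reach Running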

--upload-certs: uploads the control-plane certificates to the kubeadm-certs Secret.

In short: you will not need to copy the cluster certificates to the other master nodes afterwards.

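The join commands below embed the bootstrap token and certificate key printed by kubeadm init. Both expire (the token after 24 h, the certificate key after 2 h); if they have lapsed, new ones can be generated on master01 with standard kubeadm subcommands:

# Print a fresh worker join command (includes a new token)
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs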
Configuring Shell Completion

apt install bash-completion -y
cat << EOF >> ~/.profile
alias k='kubectl'
source <(kubectl completion bash)
complete -F __start_kubectl k
EOF
source ~/.profile

Joining the Remaining Nodes

master02:

kubeadm join 192.168.254.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c9f43be739919e1e03abaa3d0deae00bc2400f77dc7574e338dc6460be2eab6 \
    --control-plane --certificate-key 02feec260870e7145d69b65d0252f1067768c193d9e8c4aba31ed1b1fa7aaba8

node01:

kubeadm join 192.168.254.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6c9f43be739919e1e03abaa3d0deae00bc2400f77dc7574e338dc6460be2eab6
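master02 joined with --control-plane, so it has its own /etc/kubernetes/admin.conf; to use kubectl there as well, set up the kubeconfig the same way as on master01, then verify the membership:

# On master02
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes   # expect master01, master02, and node01 listed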

Verifying the Cluster

$ k get po -A
NAMESPACE         NAME                                       READY   STATUS              RESTARTS   AGE
calico-system     calico-kube-controllers-75f84bf8b4-96hht   0/1     ContainerCreating   0          6m19s
calico-system     calico-node-4cd7c                          0/1     PodInitializing     0          105s
calico-system     calico-node-7z22c                          0/1     PodInitializing     0          109s
calico-system     calico-node-pcq8m                          0/1     Running             0          6m19s
calico-system     calico-typha-65b78b8f8d-r2qjn              1/1     Running             0          100s
calico-system     calico-typha-65b78b8f8d-vv4ph              1/1     Running             0          6m19s
calico-system     csi-node-driver-bsd66                      0/2     ContainerCreating   0          105s
calico-system     csi-node-driver-h465x                      0/2     ContainerCreating   0          109s
calico-system     csi-node-driver-htqj2                      0/2     ContainerCreating   0          6m19s
kube-system       coredns-857d9ff4c9-nk4kx                   1/1     Running             0          6m40s
kube-system       coredns-857d9ff4c9-w6zff                   1/1     Running             0          6m40s
kube-system       etcd-master01                              1/1     Running             0          6m53s
kube-system       etcd-master02                              1/1     Running             0          97s
kube-system       kube-apiserver-master01                    1/1     Running             0          6m53s
kube-system       kube-apiserver-master02                    1/1     Running             0          98s
kube-system       kube-controller-manager-master01           1/1     Running             0          6m53s
kube-system       kube-controller-manager-master02           1/1     Running             0          97s
kube-system       kube-proxy-7mwpd                           1/1     Running             0          109s
kube-system       kube-proxy-gfcqb                           1/1     Running             0          6m40s
kube-system       kube-proxy-vkkm4                           1/1     Running             0          105s
kube-system       kube-scheduler-master01                    1/1     Running             0          6m53s
kube-system       kube-scheduler-master02                    1/1     Running             0          99s
tigera-operator   tigera-operator-55585899bf-xssq5           1/1     Running             0          6m40s
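Two more checks are worthwhile: that kube-proxy really runs in ipvs mode, and that the apiserver stays reachable through the VIP when one master fails. A sketch, assuming the setup above:

# ipvs virtual servers should be listed for the service IPs
ipvsadm -Ln | head

# kube-proxy reports its mode on its metrics port
curl -s http://127.0.0.1:10249/proxyMode    # expected output: ipvs

# HA smoke test: stop nginx and keepalived on master01;
# kubectl keeps working because admin.conf points at the VIP
systemctl stop nginx keepalived             # on master01
kubectl get nodes                           # on master02: still answers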

References:

  1. https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  2. https://ai-feier.github.io/p/keepalived-nginx%E5%AE%9E%E7%8E%B0%E9%AB%98%E5%8F%AF%E7%94%A8apiserver/
  3. https://blog.csdn.net/m0_51964671/article/details/135256571