Background

  • Machines: three hosts on linux/arm64
  • OS: CentOS 7.6

| Hostname | IP address     | Role        | Components                                                                         |
|----------|----------------|-------------|------------------------------------------------------------------------------------|
| Master01 | 192.168.100.21 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy |
| Node01   | 192.168.100.22 | worker node | kubelet, kube-proxy                                                                |
| Node02   | 192.168.100.23 | worker node | kubelet, kube-proxy                                                                |

Installation

Prepare the environment

Set the hostnames

hostnamectl set-hostname k8s-master01   # on Master01
hostnamectl set-hostname k8s-node01     # on Node01
hostnamectl set-hostname k8s-node02     # on Node02

Install required tools

yum install -y wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

Disable the firewall

systemctl disable --now firewalld

Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
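
A quick check that swap is really gone:

free -h
# The Swap row should read 0B everywhere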

Network configuration

# Option 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Option 2: stop NetworkManager from managing Calico's interfaces
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

Time synchronization

Set the time zone

mv /etc/localtime /etc/localtime.bak
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Configure time synchronization

# Server (the master)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 10.0.0.0/24
allow 192.168.0.0/16
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd
systemctl enable chronyd

# Clients (the nodes)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.100.21 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
allow 192.168.100.21 # the server's IP address
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd
systemctl enable chronyd

# Verify from a client
chronyc sources -v

Force an immediate manual sync

chronyc -a makestep
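
chronyc can also report how far the local clock currently is from the selected source:

chronyc tracking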

Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

Install ipvsadm

yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

Tune kernel parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system

Configure local /etc/hosts resolution on every node

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.21 k8s-master01
192.168.100.22 k8s-node01
192.168.100.23 k8s-node02
EOF

Generate certificates

1. Download the cfssl binaries

GitHub binary releases: https://github.com/cloudflare/cfssl/releases (no linux/arm64 build was available, so I compiled it myself)

git clone git@github.com:cloudflare/cfssl.git
cd cfssl
make
mv bin/* /usr/local/bin/
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
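
A quick check that the freshly built binary is on PATH and runnable:

cfssl version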

2. Create a directory for the certificate material

mkdir pki
cd pki
# Create the JSON files used below: admin-csr.json, ca-config.json,
# etcd-ca-csr.json, front-proxy-ca-csr.json, kubelet-csr.json, manager-csr.json,
# apiserver-csr.json, ca-csr.json, etcd-csr.json, front-proxy-client-csr.json,
# kube-proxy-csr.json and scheduler-csr.json. The CSR files all follow the shape
# shown below (CN/O vary per component), while ca-config.json holds the signing
# profiles. scheduler-csr.json, for example:
cat > scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

3. Define the bootstrap token

cd ..
mkdir bootstrap
cd bootstrap
cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet'."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
EOF

4. Configure CoreDNS

cd ..
mkdir coredns
cd coredns
cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

5. Configure metrics-server

cd ..
mkdir metrics-server
cd metrics-server
cat > metrics-server.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

6. Generate the etcd certificates

# Create the directory
mkdir /etc/etcd/ssl -p
# Generate the certificates
cd pki
# Generate the etcd CA plus the etcd certificate and key. If you expect to
# scale out later, add a few spare IPs to -hostname now.
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,192.168.100.21,192.168.100.22,192.168.100.23 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

With multiple master nodes, copy the certificates to each of them:

Master='k8s-master02 k8s-master03'
for NODE in $Master; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done

7. Generate the Kubernetes certificates

mkdir -p /etc/kubernetes/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# Generate a root-signed apiserver certificate. The extra IPs and names are
# reserved for adding nodes later. 10.96.0.1 is the first address of the
# service CIDR; 192.168.1.69 would be a high-availability VIP in a multi-master
# setup. Drop any entries you do not need.
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,127.0.0.1,192.168.100.21,192.168.100.22,192.168.100.23,192.168.1.69,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.61,192.168.1.62,192.168.1.63,192.168.1.64,192.168.1.65,192.168.1.66,192.168.1.67,192.168.1.68,192.168.1.75,10.0.0.40,10.0.0.41 \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

8. Generate the front-proxy (aggregation layer) certificates

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# This prints a warning that can be ignored
cfssl gencert \
  -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

9. Generate the controller-manager, scheduler, and admin certificates

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set the cluster entry. This cluster has a single master, so the kubeconfigs
# point straight at the apiserver; with multiple masters, use the VIP instead
# (e.g. https://192.168.1.69:8443).
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.21:6443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.21:6443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.21:6443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

10. Generate the kube-proxy certificate

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.21:6443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kube-proxy@kubernetes \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kube-proxy@kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

11. Create the ServiceAccount key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
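
To confirm the key was written correctly:

openssl rsa -in /etc/kubernetes/pki/sa.key -noout -text | head -n 1
# e.g. Private-Key: (2048 bit)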

12. Check the certificates

ls /etc/kubernetes/pki/
# There should be 26 files in total
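
To spot-check any of them, openssl prints the subject and validity window; the lifetime comes from the expiry set in ca-config.json:

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -subject -dates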

13. Copy the certificates to the other nodes

cd /etc/kubernetes/
for NODE in k8s-node01 k8s-node02; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

Install etcd

1. Download

GitHub binary releases: https://github.com/etcd-io/etcd/releases

https://github.com/etcd-io/etcd/releases/download/v3.4.22/etcd-v3.4.22-linux-arm64.tar.gz

2. Extract

tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

3. Check the version

etcdctl version

4. Configure etcd

# To use IPv6, replace the IPv4 addresses with IPv6 ones
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.100.21:2380'
listen-client-urls: 'https://192.168.100.21:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.100.21:2380'
advertise-client-urls: 'https://192.168.100.21:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.100.21:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

5. Create the systemd service

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

6. Link the certificates and start etcd

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
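
Once etcd is up, a health probe against the client endpoint (using the same certificates it was configured with) should come back healthy:

etcdctl --endpoints=https://192.168.100.21:2379 \
  --cacert=/etc/etcd/ssl/etcd-ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health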

Install containerd

1. Download the containerd binaries

GitHub releases: https://github.com/containerd/containerd/releases

wget https://github.com/containerd/containerd/releases/download/v1.7.0-beta.0/containerd-1.7.0-beta.0-linux-arm64.tar.gz

2. Download the CRI bundles (with and without CNI plugins)

wget https://github.com/containerd/containerd/releases/download/v1.7.0-beta.0/cri-containerd-1.7.0-beta.0-linux-arm64.tar.gz
wget https://github.com/containerd/containerd/releases/download/v1.7.0-beta.0/cri-containerd-cni-1.7.0-beta.0-linux-arm64.tar.gz

3. Download the standalone CNI plugins

GitHub releases: https://github.com/containernetworking/plugins/releases

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz

4. Download the crictl client binary

GitHub releases: https://github.com/kubernetes-sigs/cri-tools/releases

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-arm64.tar.gz

5. Extract to the target directories

# Create the directories needed by the CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin
# Extract the CNI plugins and the CRI bundle
tar xf cni-plugins-linux-arm64-v1.1.1.tgz -C /opt/cni/bin/
tar -xzf cri-containerd-cni-1.7.0-beta.0-linux-arm64.tar.gz -C /

6. Create the service unit file

cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

7. Configure the kernel modules containerd needs

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

8. Load the modules

systemctl restart systemd-modules-load.service
systemctl status systemd-modules-load.service

9. Configure the sysctl parameters containerd needs

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply the settings
sysctl --system

10. Create the containerd configuration file

# Generate the default configuration
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
# Switch the cgroup driver to systemd
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
# Point the sandbox (pause) image at a reachable mirror
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image

11. Start containerd and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd

12. Point crictl at the containerd runtime socket

# Extract
tar xf crictl-v1.25.0-linux-arm64.tar.gz -C /usr/bin/
# Write the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Test
systemctl restart containerd
crictl info
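
With the socket configured, a test pull exercises both the runtime and registry access (the image here is just an example):

crictl pull docker.io/library/busybox:1.28
crictl images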

Install Kubernetes

1. Download Kubernetes

Server binaries are linked from the changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md

wget https://dl.k8s.io/v1.25.3/kubernetes-server-linux-arm64.tar.gz

2. Extract the archive

tar -xf kubernetes-server-linux-arm64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

3. Check the version

kubelet --version

4. Create the required directories

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

5. Create the kube-apiserver service

Master node only.

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
    --v=2 \\
    --logtostderr=true \\
    --allow-privileged=true \\
    --bind-address=0.0.0.0 \\
    --secure-port=6443 \\
    --advertise-address=192.168.100.21 \\
    --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\
    --service-node-port-range=30000-32767 \\
    --etcd-servers=https://192.168.100.21:2379 \\
    --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
    --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
    --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
    --client-ca-file=/etc/kubernetes/pki/ca.pem \\
    --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
    --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
    --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
    --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
    --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
    --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
    --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
    --authorization-mode=Node,RBAC \\
    --enable-bootstrap-token-auth=true \\
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
    --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
    --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
    --requestheader-allowed-names=aggregator \\
    --requestheader-group-headers=X-Remote-Group \\
    --requestheader-extra-headers-prefix=X-Remote-Extra- \\
    --requestheader-username-headers=X-Remote-User \\
    --enable-aggregator-routing=true
# --feature-gates=IPv6DualStack=true
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start kube-apiserver

systemctl daemon-reload && systemctl enable --now kube-apiserver
# Check that it started cleanly
systemctl status kube-apiserver
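
Kubernetes binds /healthz, /livez and /readyz to the system:public-info-viewer role, so even an unauthenticated probe should answer once the apiserver is up:

curl -k https://192.168.100.21:6443/healthz
# ok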

6. Create the kube-controller-manager service

# Configure on all master nodes; the file is identical everywhere.
# 172.16.0.0/12 is the pod CIDR; change it to suit your own network.
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
    --v=2 \\
    --logtostderr=true \\
    --bind-address=127.0.0.1 \\
    --root-ca-file=/etc/kubernetes/pki/ca.pem \\
    --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
    --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
    --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
    --leader-elect=true \\
    --use-service-account-credentials=true \\
    --node-monitor-grace-period=40s \\
    --node-monitor-period=5s \\
    --pod-eviction-timeout=2m0s \\
    --controllers=*,bootstrapsigner,tokencleaner \\
    --allocate-node-cidrs=true \\
    --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\
    --cluster-cidr=172.16.0.0/12,fc00::/48 \\
    --node-cidr-mask-size-ipv4=24 \\
    --node-cidr-mask-size-ipv6=64 \\
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
# --feature-gates=IPv6DualStack=true

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Start the service

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

7. Create the kube-scheduler service

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
    --v=2 \\
    --logtostderr=true \\
    --bind-address=127.0.0.1 \\
    --leader-elect=true \\
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Start the service

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

8. Configure TLS bootstrapping

Configure on the master.

cd bootstrap
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.100.21:6443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
  --token=c8ad9c.2e4d610cf3e7426e \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes \
  --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
# The token is defined in bootstrap.secret.yaml; if you change it, change it there too
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

Check the cluster status

kubectl get cs

Create the bootstrap resources

kubectl create -f bootstrap.secret.yaml

9. Create the kubelet service

Create the service

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
# Configure the kubelet service on every node
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
    --node-labels=node.kubernetes.io/node=
# --feature-gates=IPv6DualStack=true
# --container-runtime=remote
# --runtime-request-timeout=15m
# --cgroup-driver=systemd

[Install]
WantedBy=multi-user.target
EOF

Create the configuration file

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Start the kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet

Check the cluster status

kubectl get node
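
If a node is missing here, the bootstrap CSRs are the first place to look; with the auto-approval ClusterRoleBindings from bootstrap.secret.yaml they should already show as Approved,Issued:

kubectl get csr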

10. Create the kube-proxy service

Send the kubeconfig to the other nodes

for NODE in k8s-node01 k8s-node02; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done

Create the service file

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
    --config=/etc/kubernetes/kube-proxy.yaml \\
    --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Create the configuration file

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00::/48
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

Start the service

systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
systemctl status kube-proxy
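
To verify kube-proxy really came up in ipvs mode, its metrics endpoint reports the active proxy mode, and ipvsadm should list virtual servers for the service addresses:

curl 127.0.0.1:10249/proxyMode
# ipvs
ipvsadm -Ln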

Install Calico

11.1 Fetch the manifest

curl https://projectcalico.docs.tigera.io/manifests/calico-typha.yaml -o calico.yaml
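
Before applying, it is worth checking that the IP pool matches the --cluster-cidr configured above (172.16.0.0/12 here); CALICO_IPV4POOL_CIDR is commented out in the upstream manifest and defaults to 192.168.0.0/16:

grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml
# Uncomment the variable and set it to the pod CIDR, e.g.:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "172.16.0.0/12"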

11.2 Apply the manifest

kubectl apply -f calico.yaml
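
First startup pulls several images, so it can take a few minutes; the rollout can be watched using the labels from the upstream manifest:

kubectl -n kube-system get pods -l k8s-app=calico-node -w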

11.3 Check the status

kubectl get pod -A

Install CoreDNS

Run on the master node only.

Get the manifest

https://github.com/coredns/deployment/

https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed

Check the configuration

cd coredns/
cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10

Install CoreDNS

kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Install Metrics Server

Install

# Apply the manifest prepared earlier (it already trusts the front-proxy CA generated above):
kubectl apply -f metrics-server.yaml
# Or use the upstream release:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

https://github.com/kubernetes-sigs/metrics-server

Check the status

kubectl top node

Install kubectl command completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
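
Optionally, the completion hook also works with a short alias (snippet from the kubectl documentation):

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc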

Verify the cluster

Deploy a test pod

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check it
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

Resolve the kubernetes service in the default namespace from the pod

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Test cross-namespace resolution

kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Every node must be able to reach the kubernetes service on 443 and the kube-dns service on 53.

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS        AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-59697b644f-zsj62   1/1     Running   2 (3h37m ago)   20h   172.17.58.194    k8s-node02     <none>           <none>
calico-node-8pn4f                          0/1     Running   2 (23s ago)     30h   192.168.100.21   k8s-master01   <none>           <none>
calico-node-l4fkz                          1/1     Running   7 (3h37m ago)   30h   192.168.100.23   k8s-node02     <none>           <none>
calico-node-mg92w                          1/1     Running   7 (3h37m ago)   30h   192.168.100.22   k8s-node01     <none>           <none>
calico-typha-6944f58589-qkm94              1/1     Running   1 (3h37m ago)   20h   192.168.100.22   k8s-node01     <none>           <none>
coredns-6795856f79-p7jg9                   1/1     Running   1 (6h2m ago)    11h   172.17.32.139    k8s-master01   <none>           <none>
metrics-server-c7d4c4dd5-skch4             1/1     Running   1 (6h2m ago)    18h   172.17.32.138    k8s-master01   <none>           <none>

# Enter busybox and ping a pod on another node
kubectl exec -it busybox -- sh
/ # ping 192.168.100.22
PING 192.168.100.22 (192.168.100.22): 56 data bytes
64 bytes from 192.168.100.22: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.100.22: seq=1 ttl=63 time=0.668 ms
# Connectivity here shows pods can talk across namespaces and across hosts

Create three replicas and confirm they are spread across different nodes

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Clean up
kubectl delete -f deployments.yaml

Install the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

cat > dashboard-user.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create the ServiceAccount

kubectl apply -f dashboard-user.yml

Change the dashboard service to NodePort

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort

Find the node port

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.37.26   <none>        443:30647/TCP   11h

Create a token

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IlVfX0ZyLVE5UnlMYk94QU9YZy1tdTJlbDNlVkNNa2hPQm9KcndlR1pSc3cifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY3NjEyNjAzLCJpYXQiOjE2Njc2MDkwMDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMjY1YjQ4NDYtNzg2Ni00OWYxLWFkMmYtNWE0NGU3MDkxNDgzIn19LCJuYmYiOjE2Njc2MDkwMDMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.Y-DG1hpb9mk06nhm6ZnIhFaPBj6AAnlzbg4ngPSiWfEpBOj4c_TVpBS7a9eJqDVisWczHerT5K_2cgzmIeLxUdDffIyU8UcijlSM8Df3PQMMTvbMCCpFZC8x9l6T7rRIhI8-xGL5eFBqt6YRf2xRTBKgpRsdqetX5_zZ7552wv5GhDnRXYo1BJ6IQuxWUi39du3mZdPJJSyvKZXTqx8GfEBX01zprNTIJ4DXUQPM7z4cqiKIFkt32KhyTK55GhuIyh9y3cRBtHVhcybRPQ27KnFTg1n4joGdrJP7Q7fxJsiKFmJFbR2K_ljqGQpss-v3QhJrI5SeowAaqebhCumfrg

Browse to a node IP on the port shown above:

https://192.168.100.21:30647

Install ingress

https://github.com/kubernetes/ingress-nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml

Local verification

kubectl get service ingress-nginx-controller --namespace=ingress-nginx
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.localdev.me/*=demo:80"
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
curl http://demo.localdev.me:8080

External verification

# Point DNS (or /etc/hosts) at the service IP
[root@k8s-master01 ~]# kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.98.13.147   <pending>     80:32522/TCP,443:31226/TCP   71m

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.98.13.147 www.demo.io
192.168.100.21 k8s-master01
192.168.100.22 k8s-node01
192.168.100.23 k8s-node02

kubectl create ingress demo --class=nginx \
  --rule="www.demo.io/*=demo:80"
curl http://www.demo.io/

Install the Helm package manager

The package manager for Kubernetes.

Pick the build for your platform from https://github.com/helm/helm/releases

wget https://get.helm.sh/helm-v3.10.1-linux-arm64.tar.gz
tar -zxvf helm-v3.10.1-linux-arm64.tar.gz
mv linux-arm64/helm /usr/local/bin/helm
# Check the version
helm version
version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.18.7"}
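
To confirm Helm can talk to the cluster, installing any small public chart works; the Bitnami repo and chart below are just an example, not part of this setup:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx
# Inspect and clean up
helm list
helm uninstall my-nginx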

Usage guide: https://helm.sh/docs/intro/using_helm/

Install Harbor

An enterprise-grade registry server for storing images.
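
The section stops at this outline. One common route is the official Harbor Helm chart; the exposure type, URL, and port below are illustrative and need adjusting for a real deployment:

helm repo add harbor https://helm.goharbor.io
helm repo update
# NodePort exposure; externalURL must match how clients will reach the registry
helm install harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set expose.type=nodePort \
  --set externalURL=https://192.168.100.21:30003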

Common errors

1. An image cannot be pulled, or no build exists for your platform

Build the image yourself from source and package it with Docker, or search Docker Hub to see whether someone has already published one.

2. exec /xxx: exec format error

You pulled an image for the wrong platform; pull the build matching yours, linux/arm64 in my case.

3. Exporting an image from Docker into containerd

docker save httpd > httpd.tar
ctr -n=k8s.io image import httpd.tar

4. crictl cannot find an imported image

containerd has namespaces; an image must be imported into the k8s.io namespace before Kubernetes can use it.
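
Listing both views makes the namespace behavior visible; the image only shows up to crictl after being imported into k8s.io:

ctr -n k8s.io images ls | grep httpd
crictl images | grep httpd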

5. Expired ingress admission webhook certificate

error: failed to create ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses"

# If the validating webhook keeps failing, it can be removed:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

References

Helm: https://helm.sh/docs
Harbor: https://goharbor.io/docs