Platform Installation

Introduction

Installing KubeSphere on Kubernetes

Installation Steps

  • Provision three pay-as-you-go machines for the experiment: 4-core/8 GB (master), 8-core/16 GB (node1), 8-core/16 GB (node2), all running CentOS 7.9
  • Install Docker
  • Install Kubernetes
  • Install the KubeSphere prerequisites
  • Install KubeSphere
Install Docker
sudo yum remove docker*
sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
systemctl enable docker --now
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://vgcihl1j.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Install Kubernetes

1. Basic environment

All machines must be able to reach each other over the private network.
Give each machine its own hostname; do not use localhost.

hostnamectl set-hostname k8s-master
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

2. Install kubelet, kubeadm, and kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
sudo systemctl enable --now kubelet
echo "172.31.0.2  k8s-master" >> /etc/hosts
Initialize the master node

1. Run the initialization on the master

kubeadm init \
--apiserver-advertise-address=172.31.0.2 \
--control-plane-endpoint=k8s-master \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

2. Record the key information
Save the log the master prints when initialization finishes.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token 9db9x5.gvopaqx44fck5irh \
    --discovery-token-ca-cert-hash sha256:ade53e08667d16ff2866118d15b2e384c1c1dd721afcb9340e13133f15571861 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 9db9x5.gvopaqx44fck5irh \
    --discovery-token-ca-cert-hash sha256:ade53e08667d16ff2866118d15b2e384c1c1dd721afcb9340e13133f15571861

3. Install the Calico network plugin

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

4. Join the worker nodes

On each worker node, run the join command printed in the master's initialization log, for example:
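A sketch of the worker join step, reusing the token and CA hash from the sample log above (your values will differ; if the token has expired, regenerate the command on the master with kubeadm token create --print-join-command):

kubeadm join k8s-master:6443 --token 9db9x5.gvopaqx44fck5irh \
    --discovery-token-ca-cert-hash sha256:ade53e08667d16ff2866118d15b2e384c1c1dd721afcb9340e13133f15571861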

Install the KubeSphere Prerequisites

1. NFS file system
Install nfs-server (on the master)

yum install -y nfs-utils
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
exportfs

Configure the nfs-client on the worker nodes

showmount -e 172.31.0.2
mkdir -p /nfs/data
mount -t nfs 172.31.0.2:/nfs/data /nfs/data

2. Configure the default storage

Configure a default StorageClass with dynamic provisioning; be sure to change the IP to your own master's IP.
Run this on the master.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.2
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.2
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Save the manifest above to a file and apply it (sc.yaml is just an example name):

kubectl apply -f sc.yaml

kubectl get sc

Test dynamic provisioning by creating a PVC. Unlike before, there is no need to create a PV first and then the PVC: with dynamic provisioning you create only the PVC, and a PV of the requested size is created automatically.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs-storage   # must match the name of the StorageClass created above

3. metrics-server
The cluster metrics monitoring component (collects node and pod resource usage).

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
        - args:
            - --cert-dir=/tmp
            - --kubelet-insecure-tls
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
            - containerPort: 4443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            periodSeconds: 10
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
        - emptyDir: {}
          name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
Install KubeSphere

1. Download the core files

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

2. Edit cluster-configuration
In cluster-configuration.yaml, specify the features you want to enable.
See "Enabling Pluggable Components" on the KubeSphere website for reference.
Here we only set basicAuth and metrics_server to false (metrics-server was already installed above),
and set the network ippool to calico; see the excerpt below.
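A sketch of the relevant fields in cluster-configuration.yaml (field paths follow the KubeSphere v3.1.1 layout; everything else stays at its default):

spec:
  common:
    es:
      basicAuth:
        enabled: false        # disabled
  metrics_server:
    enabled: false            # we installed metrics-server ourselves earlier
  network:
    ippool:
      type: calico            # enable the Calico IP pool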

3. Run the installation

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

If a pod stays un-started for a long time, inspect its details (including image pull events):
kubectl describe pod -n <namespace> <pod-name>

4. Check the installation progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Fix the "etcd monitoring certificate not found" problem

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

One-Command Multi-Node Installation of Kubernetes and KubeSphere

Prepare three servers

  • 4c8g (master)
  • 8c16g × 2 (worker)
  • CentOS 7.9
  • private-network connectivity between the machines
  • each machine has its own hostname
  • firewall ports 30000–32767 open (see the sketch after this list)
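A minimal sketch for opening the NodePort range with firewalld (assuming firewalld is the active firewall; on cloud servers the security group must allow the same range):

firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload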

Create the Cluster with KubeKey

1. Download KubeKey

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk

2. Create the cluster configuration file

./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1

3. Create the cluster

./kk create cluster -f config-sample.yaml

4. Check the progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Sample config-sample.yaml
The name field of each entry under hosts is the hostname set on that machine (see below); only hosts and roleGroups need to be changed, everything else can stay as-is.
hostnamectl set-hostname master
hostnamectl set-hostname node1

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.31.0.2, internalAddress: 172.31.0.2, user: root, password: aA6675732}
  - {name: node1, address: 172.31.0.3, internalAddress: 172.31.0.3, user: root, password: aA6675732}
  - {name: node2, address: 172.31.0.4, internalAddress: 172.31.0.4, user: root, password: aA6675732}
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Multi-Tenancy

Middleware Deployment

Deploy MySQL as a StatefulSet

1. In the Configuration Center of the project you created, create the configuration file

Sample configuration

[client]
default-character-set=utf8mb4

[mysql]
default-character-set=utf8mb4

[mysqld]
init_connect='SET collation_connection = utf8mb4_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve

2. Storage Management -> Volumes -> Create Volume

Create it; the basic information can be anything. Stateful applications mostly use single-node read/write (ReadWriteOnce).
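For reference, a minimal sketch of the PVC the console creates behind the scenes (the name and size are just examples):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc                 # example name
spec:
  accessModes:
    - ReadWriteOnce               # single-node read/write
  resources:
    requests:
      storage: 5Gi                # example size
  storageClassName: nfs-storage   # the default StorageClass configured earlier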

3. Application Workloads -> Workloads -> StatefulSets -> Create

Set the container image

Enter the image tag to pull from Docker Hub, specify resource limits (do not reserve resources), and use the default port.

Environment variables are the parameters the container is started with; refer to Docker's official start command (the reference command is below). Here we set the root password.
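In the Pod spec this becomes a container env entry; a sketch mirroring the -e MYSQL_ROOT_PASSWORD=root flag of the Docker reference command below:

env:
  - name: MYSQL_ROOT_PASSWORD   # the variable the mysql image reads at startup
    value: root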

Set up the volume mounts
Again following the mount flags of the Docker start command, select the volume and configuration file created in steps 1 and 2 and fill in the mount paths inside the container.
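A sketch of the resulting mount section (the volume and ConfigMap names are hypothetical stand-ins for the objects from steps 1 and 2; the paths mirror the -v flags of the Docker command below):

volumeMounts:
  - name: mysql-data              # volume from step 2
    mountPath: /var/lib/mysql
  - name: mysql-conf              # configuration from step 1
    mountPath: /etc/mysql/conf.d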

Once created, you can check the mounted files directly inside the container.

Docker start command for reference

docker run -p 3306:3306 --name mysql-01 \
-v /mydata/mysql/log:/var/log/mysql \
-v /mydata/mysql/data:/var/lib/mysql \
-v /mydata/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=root \
--restart=always \
-d mysql:5.7

Deploy the MySQL load-balancing network

By default the application is created with a ClusterIP Service, which is reachable only from inside the cluster.

Under Application Workloads -> Services, delete the default Service (be careful not to delete the StatefulSet).

Recreate Services choosing the access types described earlier: ClusterIP mode allows access only from inside the cluster, which keeps the application secure, while NodePort mode exposes it externally.

After that, MySQL can be reached from outside the cluster.

Even without a ClusterIP Service, a NodePort Service also comes with an in-cluster DNS name; a sketch of both Services follows.
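For reference, a sketch of the two Services (the names follow the mall-mysql example used below; the selector labels are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: mall-mysql                # in-cluster access only
  namespace: mall
spec:
  type: ClusterIP
  selector:
    app: mall-mysql               # assumed workload label
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mall-mysql-node           # external access; matches the DNS name used below
  namespace: mall
spec:
  type: NodePort
  selector:
    app: mall-mysql
  ports:
    - port: 3306
      targetPort: 3306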

In-cluster access can then be used like this:

mysql -uroot -h mall-mysql-node.mall -p

Deploy Redis & Set Up the Network

mkdir -p /mydata/redis/conf && vim /mydata/redis/conf/redis.conf

# contents of redis.conf:
appendonly yes
port 6379
bind 0.0.0.0

docker run -d -p 6379:6379 --restart=always \
-v /mydata/redis/conf/redis.conf:/etc/redis/redis.conf \
-v /mydata/redis-01/data:/data \
--name redis-01 redis:6.2.5 \
redis-server /etc/redis/redis.conf

Because Redis must be given a config file at startup, tick "Start command" when creating the container, following the Docker command above (a sketch follows).
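In the container spec, the console's "Start command" maps to command and args; a sketch mirroring the last line of the Docker command above:

command:
  - redis-server                # binary to run
args:
  - /etc/redis/redis.conf       # config file passed at startup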

This time, don't create the volume in advance; use a volume claim template at creation time instead, so that when the workload scales, each replica automatically gets its own dedicated volume (a sketch follows).
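A sketch of the StatefulSet volumeClaimTemplates the console generates (the name and size are examples):

volumeClaimTemplates:
  - metadata:
      name: redis-data                # example name; one PVC is stamped out per replica
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: nfs-storage   # the default StorageClass configured earlier
      resources:
        requests:
          storage: 1Gi                # example size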

Then create two Services as before: one for in-cluster access and one for external access.

Deploy Elasticsearch

1. Docker command for reference to start the ES container; after it starts, look at its default configuration

mkdir -p /mydata/es-01 && chmod 777 -R /mydata/es-01

docker run --restart=always -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-v es-config:/usr/share/elasticsearch/config \
-v /mydata/es-01/data:/usr/share/elasticsearch/data \
--name es-01 \
elasticsearch:7.13.4

Enter the Docker container, inspect the ES default configuration, and create jvm.options and elasticsearch.yml as configuration files (for example, as sketched below).
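A sketch for dumping the defaults out of the running container (these are the standard config paths inside the official image):

docker exec -it es-01 cat /usr/share/elasticsearch/config/elasticsearch.yml
docker exec -it es-01 cat /usr/share/elasticsearch/config/jvm.options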

Create the StatefulSet; following the Docker start command, enter the two ports and the two environment variables.

Mount the configuration. Because only the two config files are mounted (rather than the whole directory), each mount needs the full file path plus a subPath.

elasticsearch.yml is mounted the same way; a sketch follows.
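A sketch of file-level mounts using subPath, so the rest of the config directory in the image is not shadowed (the ConfigMap volume name es-conf is an example):

volumeMounts:
  - name: es-conf
    mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
    subPath: elasticsearch.yml    # mount only this key as a single file
  - name: es-conf
    mountPath: /usr/share/elasticsearch/config/jvm.options
    subPath: jvm.options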

App Store

Deploy RabbitMQ

App Repository (Helm)

This is to applications what Docker Hub is to Docker images.
In App Management, add an app repository (the CLI equivalent is sketched below).
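Adding a repository in the console registers a Helm chart repository; the equivalent on the command line looks like this (the bitnami repository URL is just a commonly used example):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update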

Deploy ZooKeeper from the App Store

With the extra repository added, the App Store offers many more choices.