Kubernetes 1.18.8 adaptation for UOS on Loongson (mips64le)

I. Adaptation environment

OS: UOS 20

CPU architecture: mips64le

Server vendor:

K8S version: v1.18.8

Docker version: docker-ce 19.03

II. Adaptation steps

1. Install docker

UOS has already been adapted to docker, so docker can be installed from the official UOS package repository. The version provided there is docker-ce 19.03; if you need a different version you must build and install it from source yourself. This document only covers installation from the official UOS repository:

apt-get install -y docker-ce

Note: on one OS build, docker failed to run after docker-ce was installed. UOS engineers confirmed this was a kernel bug; the fix has since been merged into the latest kernel.

2. Build from source

2.1 Install other dependency packages

yum install gcc make -y
yum install rsync jq -y

Install the Go toolchain:

wget -c https://golang.google.cn/dl/go1.14.6.linux-amd64.tar.gz -P /opt/
cd /opt/
tar -C /usr/local -xzf go1.14.6.linux-amd64.tar.gz
echo "export PATH=$PATH:/usr/local/go/bin" >> /etc/profile && source /etc/profile
echo "export GOPATH=/home/go" >> /etc/profile && source /etc/profile  # configure GOPATH
mkdir -p $GOPATH
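Note that the two `echo ... >> /etc/profile` lines above append duplicate entries every time they are re-run. A guarded variant is sketched below; it is demonstrated against a temporary file (`/tmp/profile-demo` is a stand-in, the real target is /etc/profile):

```shell
# Sketch: append a line to a profile file only if it is not already present.
# PROFILE is a temp file for demonstration; point it at /etc/profile on the
# real machine.
PROFILE=/tmp/profile-demo
: > "$PROFILE"
add_line() { grep -qxF "$1" "$PROFILE" || echo "$1" >> "$PROFILE"; }
add_line 'export PATH=$PATH:/usr/local/go/bin'
add_line 'export GOPATH=/home/go'
add_line 'export GOPATH=/home/go'   # re-running is a no-op
```

This keeps /etc/profile clean if the setup steps are repeated after a failed build.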
2.2 Download the source

Download whichever version you need; this document builds v1.18.8.

mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes -b v1.18.8
cd kubernetes

Given network speeds inside mainland China, the GitHub clone will most likely fail. As described in other posts, the GitHub kubernetes repository can be mirrored to Gitee; such a mirror already exists and can be cloned directly.

cd $GOPATH/src/k8s.io
git clone https://gitee.com/mirrors/Kubernetes.git -b v1.18.8
cd Kubernetes
2.3 Resources required for the build

2.3.1 mips64le base images

1) Check the kube-cross TAG:

root@b529f9ce0ca9:/go/src/k8s.io/kubernetes# cat ./build/build-image/cross/VERSION
v1.13.15-1

2) Check the debian_iptables_version:

root@b529f9ce0ca9:/go/src/k8s.io/kubernetes# egrep -Rn "debian_iptables_version=" .
./build/common.sh:98:local debian_iptables_version=v12.1.2
./build/dependencies.yaml:112:match: debian_iptables_version=

3) Check the debian_base_version:

root@b529f9ce0ca9:/go/src/k8s.io/kubernetes# egrep -Rn "debian_base_version=" .
./build/common.sh:97:local debian_base_version=v2.1.3
./build/dependencies.yaml:84:match: debian_base_version=

Images with these tags cannot currently be pulled from the official registries, so substitute images must be used instead, for example:

docker pull loongnixk8s/debian-iptables-mips64le:v12.1.0
docker pull loongnixk8s/debian-base-mips64le:v2.1.0
docker pull registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1
docker pull loongnixk8s/pause-mips64le:3.1
docker tag loongnixk8s/pause-mips64le:3.1 k8s.gcr.io/pause-mips64le:3.2
docker tag registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1 us.gcr.io/k8s-artifacts-prod/build-image/kube-cross:v1.13.15-1
docker tag loongnixk8s/debian-base-mips64le:v2.1.0 k8s.gcr.io/debian-base-mips64le:v2.1.3
docker tag loongnixk8s/debian-iptables-mips64le:v12.1.0 k8s.gcr.io/debian-iptables-mips64le:v12.1.2
docker rmi loongnixk8s/debian-iptables-mips64le:v12.1.0
docker rmi loongnixk8s/debian-base-mips64le:v2.1.0
docker rmi registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1
docker rmi loongnixk8s/pause-mips64le:3.1
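The same pull/tag/rmi chain can be written as a small data-driven loop, which is easier to audit and extend when more substitute images are needed. This is a sketch: with DRY_RUN=1 the docker commands are only printed; unset it on a machine with docker to actually run them.

```shell
# Sketch: drive docker pull/tag/rmi from a source->destination mapping table.
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
while read -r src dst; do
  run docker pull "$src"
  run docker tag "$src" "$dst"
  run docker rmi "$src"
done <<'EOF'
loongnixk8s/debian-iptables-mips64le:v12.1.0 k8s.gcr.io/debian-iptables-mips64le:v12.1.2
loongnixk8s/debian-base-mips64le:v2.1.0 k8s.gcr.io/debian-base-mips64le:v2.1.3
registry.aliyuncs.com/google_containers/kube-cross:v1.13.6-1 us.gcr.io/k8s-artifacts-prod/build-image/kube-cross:v1.13.15-1
loongnixk8s/pause-mips64le:3.1 k8s.gcr.io/pause-mips64le:3.2
EOF
```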

4) Modify the build scripts

Upstream K8S has not been adapted to the mips64le instruction-set architecture, so the build scripts do not support building images for it. The following scripts need to be modified.

vim hack/lib/version.sh

if [[ -z ${KUBE_GIT_TREE_STATE-} ]]; then
  # Check if the tree is dirty. default to dirty
  if git_status=$("${git[@]}" status --porcelain 2>/dev/null) && [[ -z ${git_status} ]]; then
    KUBE_GIT_TREE_STATE="clean"
  else
    KUBE_GIT_TREE_STATE="clean"  # changed from "dirty" to "clean"; otherwise binaries built from modified code carry a -dirty version suffix
  fi
fi
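The edit above can also be scripted. The sketch below runs against a stand-in file containing just the relevant line; on the real tree, point the sed at hack/lib/version.sh:

```shell
# Sketch: force KUBE_GIT_TREE_STATE to "clean" with sed instead of editing by
# hand. /tmp/version-demo.sh is a stand-in; the real target is
# hack/lib/version.sh.
target=/tmp/version-demo.sh
printf '%s\n' '      KUBE_GIT_TREE_STATE="dirty"' > "$target"
sed -i 's/KUBE_GIT_TREE_STATE="dirty"/KUBE_GIT_TREE_STATE="clean"/' "$target"
cat "$target"
```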

In addition, the vendor/github.com/google/cadvisor/fs/fs.go package uses a data type that is incompatible with the mips64le architecture: change buf.Dev to uint64(buf.Dev).

In hack/lib/golang.sh, add mips64le to KUBE_SUPPORTED_SERVER_PLATFORMS, KUBE_SUPPORTED_NODE_PLATFORMS, KUBE_SUPPORTED_CLIENT_PLATFORMS, and KUBE_SUPPORTED_TEST_PLATFORMS.
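A scripted sketch of that edit, demonstrated on a stand-in fragment (the array contents below are illustrative; in the real tree apply the same sed to each of the four KUBE_SUPPORTED_*_PLATFORMS arrays in hack/lib/golang.sh):

```shell
# Sketch: append linux/mips64le before the closing parenthesis of a
# supported-platforms array. /tmp/golang-demo.sh is a stand-in fragment.
cat > /tmp/golang-demo.sh <<'EOF'
readonly KUBE_SUPPORTED_SERVER_PLATFORMS=(
  linux/amd64
  linux/arm64
)
EOF
sed -i 's|^)$|  linux/mips64le\n)|' /tmp/golang-demo.sh
cat /tmp/golang-demo.sh
```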

5) Run the build command. One error will occur during the build; the offending data type must be cast explicitly (the fs.go change above).
KUBE_BASE_IMAGE_REGISTRY=k8s.gcr.io GOOS=linux GOARCH=mips64le KUBE_BUILD_PLATFORMS=linux/mips64le KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images GOFLAGS=-v GOGCFLAGS="-N -l" KUBE_BUILD_PULL_LATEST_IMAGES=false
2.3.2 Build the kubelet, kubeadm, and kubectl binaries

Run the following build commands:

docker run --rm -v /home/go/src/k8s.io/Kubernetes:/go/src/k8s.io/kubernetes -it us.gcr.io/k8s-artifacts-prod/build-image/kube-cross:v1.13.15-1 bash
cd /go/src/k8s.io/kubernetes
GOOS=linux GOARCH=mips64le KUBE_BUILD_PLATFORMS=linux/mips64le make all GOFLAGS=-v GOGCFLAGS="-N -l" WHAT=cmd/kubeadm
# then build kubectl and kubelet the same way
2.3.3 Use the built images

Upload the built images to the target server, load the kube-apiserver.tar image, and update the kube-apiserver image deployed in the environment; the other components are handled the same way.

# docker load -i kube-apiserver.tar
b1d170ccb364: Loading layer [==================================================>]  162.4MB/162.4MB

3. Installation and deployment

3.1 Install kubectl, kubelet, and kubeadm

Upload the binaries built above to the target server, move them into the /usr/bin directory, and set up the systemd unit and configuration files for kubelet.

[root@k8s-master ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

[root@k8s-master ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
3.2 Deploy the k8s cluster

3.2.1 Download the required images

Pull the following images in advance:

docker pull gebilaoyao/pause-mips64le:3.1
docker pull gebilaoyao/etcd-mips64le:3.3.11
docker pull gebilaoyao/flannel-mips64le:0.10.0
docker pull gebilaoyao/coredns-mips64le:v1.6.7
docker pull gebilaoyao/kube-scheduler-mips64le:v1.18.8
docker pull gebilaoyao/kube-apiserver-mips64le:v1.18.8
docker pull gebilaoyao/kube-controller-manager-mips64le:v1.18.8
docker pull gebilaoyao/kube-proxy-mips64le:v1.18.8

After downloading, retag the images to the exact versions the cluster expects; the expected versions can be seen in the logs during kubeadm init. Retag first, then proceed with cluster deployment.

3.2.2 Deployment

Deployment will eventually be handled by automated tooling; during this adaptation it was still done by hand on the command line, so the individual steps are not detailed here.

3.2.3 Install the network plugin (flannel)

Once the core components are deployed, kubectl get pod -A shows the coredns pod stuck in Pending because no network plugin is installed; kubectl get node likewise shows the node in NotReady state. The flannel network plugin can be installed with the YAML file below.

vim flannel-mips64le.yaml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-mips64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - mips64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: gebilaoyao/flannel-mips64le:v0.10.0
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: gebilaoyao/flannel-mips64le:v0.10.0
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "300m"
              memory: "500Mi"
            limits:
              cpu: "300m"
              memory: "500Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Run kubectl apply -f flannel-mips64le.yaml to create the pods. The flannel pod now runs normally, but coredns still does not, and the node is still NotReady. systemctl status kubelet shows the error no valid networks found in /etc/cni/net.d, together with an error that no suitable plugin can be found under /opt/cni/bin. This can be fixed as follows:

cd $GOPATH/src
git clone https://github.com/containernetworking/plugins.git
cd plugins
./build_linux.sh
cp bin/* /opt/cni/bin/

Adaptation results

The K8S core components now run successfully on a Loongson-CPU (mips64le) server running the UOS operating system.
