I. Objectives

  • Use DPVS to deploy a multi-active, highly available test environment that serves HTTP traffic based on OSPF/ECMP.

  • This deployment is for functional verification only; no performance validation is provided.

  • Configure two DPVS servers as a cluster and two REAL SERVERs to provide the actual HTTP service.

    Note: in the virtual environment, FRRouting is installed on a virtual server to simulate an OSPF-capable router.

II. Network Architecture


Roles and configuration:

CLIENT: accesses the services provided by the REAL SERVERs through the DPVS cluster.

CLIENT1 configuration:
- NIC: enp0s8
- IP address: 192.168.2.100/24
- Static route: `ip route add 192.168.0.0/16 via 192.168.2.254 dev enp0s8` (points all traffic for 192.168.0.0/16 at the router)

CLIENT2 configuration:
- NIC: enp0s8
- IP address: 192.168.2.101/24
- Static route: `ip route add 192.168.0.0/16 via 192.168.2.254 dev enp0s8` (points all traffic for 192.168.0.0/16 at the router)

ROUTER: provides the OSPF/ECMP routing function (simulated here with FRRouting).

- NIC 1: enp0s8, IP address 192.168.2.254/24
- NIC 2: enp0s9, IP address 192.168.3.254/24

DPVS: the DPVS SLB cluster.

DPVS1 configuration:
- NIC 1: enp0s8 -> dpdk0 / dpdk0.kni, IP addresses 192.168.3.1/24 and 192.168.0.1/24 (VIP)
- NIC 2: enp0s9 -> dpdk1 / dpdk1.kni, IP address 192.168.1.1/24

DPVS2 configuration:
- NIC 1: enp0s8 -> dpdk0 / dpdk0.kni, IP addresses 192.168.3.2/24 and 192.168.0.1/24 (VIP)
- NIC 2: enp0s9 -> dpdk1 / dpdk1.kni, IP address 192.168.1.2/24

REAL SERVER: provides the HTTP service.

- real server1: NIC enp0s8, IP address 192.168.1.100/24
- real server2: NIC enp0s8, IP address 192.168.1.101/24
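
The addressing above gives the following topology sketch:

```
        CLIENT1 192.168.2.100        CLIENT2 192.168.2.101
                    \                   /
                     ---- 192.168.2.0/24 ----
                                |
                     ROUTER (FRRouting)
                     enp0s8: 192.168.2.254
                     enp0s9: 192.168.3.254
                                |
                 -- 192.168.3.0/24 (OSPF/ECMP) --
                    /                   \
        DPVS1 dpdk0 192.168.3.1     DPVS2 dpdk0 192.168.3.2    VIP 192.168.0.1 (on both)
        DPVS1 dpdk1 192.168.1.1     DPVS2 dpdk1 192.168.1.2
                    \                   /
                     ---- 192.168.1.0/24 ----
                    /                   \
        RS1 192.168.1.100           RS2 192.168.1.101
```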

III. Deployment

dpvs1 is used as the example below; the dpvs2 configuration is similar.

1. ROUTER deployment

1.1 Install FRRouting

For CentOS 7, run the following commands:

```
FRRVER="frr-stable"   # FRR release stream, per the official FRR install docs
curl -O https://rpm.frrouting.org/repo/$FRRVER-repo-1-0.el7.noarch.rpm
sudo yum install ./$FRRVER*
# install FRR
sudo yum install frr frr-pythontools
```

For CentOS 8, run the following commands:

```
FRRVER="frr-stable"   # FRR release stream, per the official FRR install docs
curl -O https://rpm.frrouting.org/repo/$FRRVER-repo-1-0.el8.noarch.rpm
sudo yum install ./$FRRVER*
# install FRR
sudo yum install frr frr-pythontools
```
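
A quick way to confirm the installation succeeded before moving on:

```
# Verify the installed FRR packages and version
rpm -qa | grep frr
vtysh -c 'show version'
```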

1.2 Configure the FRR service

1.2.1 Enable the OSPF daemon

Before starting, enable IP forwarding on the Linux server:

```
# Append (do not overwrite) the forwarding switch, then apply it
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
```

Enable the following in /etc/frr/daemons:

ospfd=yes (for IPv4)

ospf6d=yes (for IPv6)
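
If you prefer to flip these flags non-interactively, a minimal sketch:

```
# Enable ospfd/ospf6d in place (FRR ships both as "no" by default)
sed -i -e 's/^ospfd=no/ospfd=yes/' -e 's/^ospf6d=no/ospf6d=yes/' /etc/frr/daemons
```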

```
cd /etc/frr
cat daemons
----------------------
# This file tells the frr package which daemons to start.
#
# Sample configurations for these daemons can be found in
# /usr/share/doc/frr/examples/.
#
# ATTENTION:
#
# When activating a daemon for the first time, a config file, even if it is
# empty, has to be present *and* be owned by the user and group "frr", else
# the daemon will not be started by /etc/init.d/frr. The permissions should
# be u=rw,g=r,o=.
# When using "vtysh" such a config file is also needed. It should be owned by
# group "frrvty" and set to ug=rw,o= though. Check /etc/pam.d/frr, too.
#
# The watchfrr, zebra and staticd daemons are always started.
#
bgpd=no
ospfd=yes
ospf6d=yes
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
sharpd=no
pbrd=no
bfdd=no
fabricd=no
vrrpd=no
pathd=no
```

1.2.2 Configure the OSPF service

```
cd /etc/frr
cat frr.conf
----------------------
frr version 8.1
frr defaults traditional
hostname router
log syslog informational
service integrated-vtysh-config
!
interface enp0s8
 ip address 192.168.2.254/24
exit
!
interface enp0s9
 ip address 192.168.3.254/24
 ip ospf 100 area 0
 ip ospf dead-interval 40
exit
!
router ospf 100
 ospf router-id 192.168.2.254
exit
!
```

1.3 Start the FRR service

```
service frr start
```

Use `ps -ef|grep frr` to check that the FRR processes (watchfrr, zebra, ospfd) have started:

```
ps -ef|grep frr
-----------------
root      3635     1  0 18:11 ?  00:00:01 /usr/lib/frr/watchfrr -d -F traditional zebra ospfd-100 ospfd-200 ospfd-300 staticd
frr       3657     1  0 18:11 ?  00:00:00 /usr/lib/frr/zebra -d -F traditional --daemon -A 127.0.0.1 -s 90000000
frr       3662     1  0 18:11 ?  00:00:02 /usr/lib/frr/ospfd -d -F traditional -n 100 --daemon -A 127.0.0.1
frr       3665     1  0 18:11 ?  00:00:00 /usr/lib/frr/ospfd -d -F traditional -n 200 --daemon -A 127.0.0.1
frr       3668     1  0 18:11 ?  00:00:00 /usr/lib/frr/ospfd -d -F traditional -n 300 --daemon -A 127.0.0.1
frr       3671     1  0 18:11 ?  00:00:00 /usr/lib/frr/staticd -d -F traditional --daemon -A 127.0.0.1
```

1.4 Common FRR commands

```
# View the OSPF neighbor list
vtysh
do show ip ospf neighbor
# View the routing table
show ip route
# Save the configuration
do write
do write mem
```
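
Once both dpvs servers are up (section 2), the key thing to verify on the ROUTER is that OSPF has installed an equal-cost route to the VIP network with one next hop per dpvs server; for example:

```
# Expect two equal-cost next hops for the VIP network:
# via 192.168.3.1 (dpvs1) and via 192.168.3.2 (dpvs2)
vtysh -c 'show ip route 192.168.0.0/24'
```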

2. DPVS deployment

2.1 Install FRRouting

2.1.1 For CentOS 7, run the following commands

```
FRRVER="frr-stable"   # FRR release stream, per the official FRR install docs
curl -O https://rpm.frrouting.org/repo/$FRRVER-repo-1-0.el7.noarch.rpm
sudo yum install ./$FRRVER*
# install FRR
sudo yum install frr frr-pythontools
```

For CentOS 8, run the following commands:

```
FRRVER="frr-stable"   # FRR release stream, per the official FRR install docs
curl -O https://rpm.frrouting.org/repo/$FRRVER-repo-1-0.el8.noarch.rpm
sudo yum install ./$FRRVER*
# install FRR
sudo yum install frr frr-pythontools
```

2.1.2 Configure the FRR service

2.1.2.1 Enable the OSPF daemon

Enable the following in /etc/frr/daemons:

ospfd=yes (for IPv4)

ospf6d=yes (for IPv6)

(The /etc/frr/daemons file contents are identical to those shown in section 1.2.1.)

2.1.2.2 Configure the OSPF service

```
cd /etc/frr
cat frr.conf
----------------------
frr version 8.2.2
frr defaults traditional
hostname localhost.localdomain
log syslog informational
no ip forwarding
no ipv6 forwarding
!
interface dpdk0.kni
 ip ospf hello-interval 10
 ip ospf dead-interval 40
exit
!
router ospf
 ospf router-id 192.168.3.1
 ! on dpvs2, use: ospf router-id 192.168.3.2
 log-adjacency-changes
 auto-cost reference-bandwidth 1000
 network 192.168.0.0/24 area 0
 network 192.168.3.0/24 area 0
exit
!
```

2.1.3 Start the FRR service

```
service frr start
```

Use `ps -ef|grep frr` to check that the FRR processes (watchfrr, zebra, ospfd) have started:

```
ps -ef|grep frr
-----------------
root      3635     1  0 18:11 ?  00:00:01 /usr/lib/frr/watchfrr -d -F traditional zebra ospfd-100 ospfd-200 ospfd-300 staticd
frr       3657     1  0 18:11 ?  00:00:00 /usr/lib/frr/zebra -d -F traditional --daemon -A 127.0.0.1 -s 90000000
frr       3662     1  0 18:11 ?  00:00:02 /usr/lib/frr/ospfd -d -F traditional -n 100 --daemon -A 127.0.0.1
frr       3665     1  0 18:11 ?  00:00:00 /usr/lib/frr/ospfd -d -F traditional -n 200 --daemon -A 127.0.0.1
frr       3668     1  0 18:11 ?  00:00:00 /usr/lib/frr/ospfd -d -F traditional -n 300 --daemon -A 127.0.0.1
frr       3671     1  0 18:11 ?  00:00:00 /usr/lib/frr/staticd -d -F traditional --daemon -A 127.0.0.1
```
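
Note that dpdk0.kni only exists after dpvs itself has started (section 2.2.4), so the OSPF adjacency on the dpvs side comes up only then. Once everything is running, it can be checked with:

```
# The ROUTER (192.168.3.254) should appear as a neighbor in Full state
vtysh -c 'show ip ospf neighbor'
```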

2.2 Deploy the dpvs service

2.2.1 Build the dpvs source

```
# Pull the dpvs and dpdk code
git clone http://10.0.171.10/networkService/l4slb-dpvs-1.9.git
cd l4slb-dpvs-1.9
git checkout feature_ospf
# Build dpdk 20.11.1
cd dpdk-stable-20.11.1
meson -Denable_kmods=true -Dexamples=l2fwd,l3fwd -Ddisable_drivers=net/af_xdp,event/dpaa,event/dpaa2 -Dprefix=`pwd`/dpdklib ./build
ninja -C build
ninja -C build install
export PKG_CONFIG_PATH=`pwd`/dpdklib/lib64/pkgconfig
# Build dpvs
cd ..
make -j8
# For a DEBUG build
make DEBUG=1 -j8
```

2.2.2 Initialize the DPDK runtime environment

```
# Configure hugepages
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
[ -d /mnt/huge ] || mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
# Load the uio kernel module
modprobe uio_pci_generic
# Load the rte_kni kernel module (carrier=on so the kni links come up)
insmod dpdk-stable-20.11.1/build/kernel/linux/kni/rte_kni.ko carrier=on
# Take the NICs down, then bind them to the uio driver for dpvs
ip link set enp0s8 down
ip link set enp0s9 down
dpdk-stable-20.11.1/usertools/dpdk-devbind.py -b uio_pci_generic enp0s8
dpdk-stable-20.11.1/usertools/dpdk-devbind.py -b uio_pci_generic enp0s9
```
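
Before starting dpvs, it is worth confirming that the hugepages and NIC bindings took effect; for example:

```
# 1024 hugepages should be reserved
grep -i huge /proc/meminfo
# Both NICs should be listed under "Network devices using DPDK-compatible driver"
dpdk-stable-20.11.1/usertools/dpdk-devbind.py --status-dev net
```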

2.2.3 Configure dpvs (/etc/dpvs.conf):

```
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! This is dpvs default configuration file.
!
! The attribute "<init>" denotes the configuration item at initialization stage. Item of
! this type is configured oneshoot and not reloadable. If invalid value configured in the
! file, dpvs would use its default value.
!
! Note that dpvs configuration file supports the following comment type:
!   * line comment: using '#" or '!'
!   * inline range comment: using '<' and '>', put comment in between
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

! global config
global_defs {
    log_level   INFO
    ! log_file          /var/log/dpvs.log
    ! log_async_mode    on
    ! pdump             off
}

! netif config
netif_defs {
    pktpool_size    65536
    pktpool_cache   256
    fdir_mode       perfect

    device dpdk0 {
        rx {
            queue_number        1
            descriptor_number   1024
            rss                 all
        }
        tx {
            queue_number        1
            descriptor_number   1024
        }
        mtu                     1500
        ! promisc_mode
        kni_name                dpdk0.kni
    }

    device dpdk1 {
        rx {
            queue_number        1
            descriptor_number   1024
            rss                 all
        }
        tx {
            queue_number        1
            descriptor_number   1024
        }
        mtu                     1500
        ! promisc_mode
        kni_name                dpdk1.kni
    }
}

! worker config (lcores)
! notes:
!   1. rx(tx) queue ids MUST start from 0 and continous
!   2. cpu ids and rx(tx) queue ids MUST be unique, repeated ids is forbidden
!   3. cpu ids identify dpvs workers only, and not correspond to physical cpu cores.
!      If you are to specify cpu cores on which to run dpvs, please use dpdk eal options,
!      such as "-c", "-l", "--lcores". Use "dpvs -- --help" for supported eal options.
worker_defs {
    worker cpu0 {
        type    master
        cpu_id  0
    }

    worker cpu1 {
        type    slave
        cpu_id  1
        port    dpdk0 {
            rx_queue_ids        0
            tx_queue_ids        0
            ! isol_rx_cpu_ids   9
            ! isol_rxq_ring_sz  1048576
        }
        port    dpdk1 {
            rx_queue_ids        0
            tx_queue_ids        0
            ! isol_rx_cpu_ids   9
            ! isol_rxq_ring_sz  1048576
        }
    }
}

! timer config
timer_defs {
    # cpu job loops to schedule dpdk timer management
    schedule_interval   500
}

! dpvs neighbor config
neigh_defs {
    unres_queue_length  128
    timeout             60
}

! dpvs ipset config
ipset_defs {
    ipset_hash_pool_size    131072
}

! dpvs ipv4 config
ipv4_defs {
    forwarding  off
    default_ttl 64
    fragment {
        bucket_number   4096
        bucket_entries  16
        max_entries     4096
        ttl             1
    }
}

! dpvs ipv6 config
ipv6_defs {
    disable     off
    forwarding  off
    route6 {
        method          hlist
        recycle_time    10
    }
}

! control plane config
ctrl_defs {
    lcore_msg {
        ring_size               4096
        sync_msg_timeout_us     20000
        priority_level          low
    }
    ipc_msg {
        unix_domain /var/run/dpvs_ctrl
    }
}

! ipvs config
ipvs_defs {
    conn {
        conn_pool_size      65536
        conn_pool_cache     256
        conn_init_timeout   3
        ! expire_quiescent_template
        ! fast_xmit_close
        ! <init> redirect   off
    }

    udp {
        ! defence_udp_drop
        uoa_mode        ipo
        uoa_max_trail   3
        timeout {
            normal  300
            last    3
        }
    }

    tcp {
        ! defence_tcp_drop
        timeout {
            none        2
            established 90
            syn_sent    3
            syn_recv    30
            fin_wait    7
            time_wait   7
            close       3
            close_wait  7
            last_ack    7
            listen      120
            synack      30
            last        2
        }
        synproxy {
            synack_options {
                mss     1452
                ttl     63
                sack
                ! wscale
                ! timestamp
            }
            close_client_window
            ! defer_rs_syn
            rs_syn_max_retry    3
            ack_storm_thresh    10
            max_ack_saved       3
            conn_reuse_state {
                close
                time_wait
                ! fin_wait
                ! close_wait
                ! last_ack
            }
        }
    }
}

! sa_pool config
sa_pool {
    pool_hash_size  16
    flow_enable     off
}
```

2.2.4 Start dpvs:

```
bin/dpvs
```
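
If startup succeeds, the dpdk ports should be visible to dpip and the kni interfaces should appear in the kernel; a quick sanity check:

```
# dpvs should stay running
ps -ef | grep dpvs
# dpdk0/dpdk1 should be listed
./tools/dpip/build/dpip link show
# dpdk0.kni/dpdk1.kni should be visible to the kernel
ip link show
```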

2.2.5 Initialize the dpvs service configuration and create an HTTP load-balancing service on port 8080:

```
VIP=192.168.0.1                # dpvs virtual IP (VIP)
OSPFIP=192.168.3.1             # dpvs OSPF heartbeat IP; on dpvs2, use 192.168.3.2
GATEWAY=192.168.3.254          # dpvs egress gateway (points to the ROUTER)
VIP_PREFIX=192.168.0.0         # VIP network prefix
LIP_PREFIX=192.168.1.0         # LOCAL IP network prefix
LIP=("192.168.1.1")            # LOCAL IP list; on dpvs2, use 192.168.1.2
SERVICE=$VIP:8080              # service to expose
# REAL SERVER list
RS_SERVER=("192.168.1.100:8080" "192.168.1.101:8080")

# Configure the IP addresses
./tools/dpip/build/dpip addr add $VIP/24 dev dpdk0
./tools/dpip/build/dpip addr add $OSPFIP/24 dev dpdk0
# Point the default route at the ROUTER
./tools/dpip/build/dpip route add default via $GATEWAY dev dpdk0
# Configure the kni interfaces' IPs and routes (matching the dpdk NIC configuration)
ifconfig dpdk0.kni $VIP netmask 255.255.255.0
ip addr add $OSPFIP/24 dev dpdk0.kni
ip route add default via $GATEWAY dev dpdk0.kni
ifconfig dpdk1.kni ${LIP[0]} netmask 255.255.255.0
# Add the route to the back-end network
./tools/dpip/build/dpip route add $LIP_PREFIX/24 dev dpdk1
# Add the service
./tools/ipvsadm/ipvsadm -A -t $SERVICE -s rr
# Add the RSs
for rs in ${RS_SERVER[*]}
do
    ./tools/ipvsadm/ipvsadm -a -t $SERVICE -r $rs -b
done
# Configure the DPVS local_ip used by the VIP listener
for lip in ${LIP[*]}
do
    ./tools/ipvsadm/ipvsadm -P -z $lip -t $SERVICE -F dpdk1
done
```
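
At this point the service table can be inspected; the expected output shows one rr service on 192.168.0.1:8080 with the two real servers in FullNAT mode:

```
# List services and real servers
./tools/ipvsadm/ipvsadm -L -n
# List the local IPs bound to the service (dpvs's ipvsadm extension)
./tools/ipvsadm/ipvsadm -G
```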

2.2.6 Deploy REAL SERVER health checks

```
#!/bin/bash
#
VIP=192.168.0.1                        # DPVS VIP
CPORT=8080                             # DPVS service port
RS=("192.168.1.100" "192.168.1.101")   # REAL SERVER list
declare -a RSSTATUS                    # holds each REAL SERVER's state
RW=("2" "1")                           # REAL SERVER weights
RPORT=8080                             # REAL SERVER HTTP port
RSURL="/index.html"                    # REAL SERVER probe URL path
TYPE=b                                 # FULLNAT mode
CHKLOOP=3                              # probe attempts before declaring failure
LOG=/var/log/ipvsmonitor.log           # log file path

# add a realserver
addrs() {
    ipvsadm -a -t $VIP:$CPORT -r $1:$RPORT -$TYPE -w $2
    [ $? -eq 0 ] && return 0 || return 1
}

# delete a realserver
delrs() {
    ipvsadm -d -t $VIP:$CPORT -r $1:$RPORT
    [ $? -eq 0 ] && return 0 || return 1
}

# probe a realserver's state over its HTTP interface
checkrs() {
    local I=1
    while [ $I -le $CHKLOOP ]; do
        if curl --connect-timeout 1 http://${1}:${RPORT}${RSURL} &> /dev/null; then
            return 0
        fi
        let I++
    done
    return 1
}

# initialize the state table
initstatus() {
    local I
    local COUNT=0
    for I in ${RS[*]}; do
        if ipvsadm -L -n | grep "$I:$RPORT" &> /dev/null; then
            RSSTATUS[$COUNT]=1
        else
            RSSTATUS[$COUNT]=0
        fi
        let COUNT++
    done
}

initstatus

# health-check loop
while :; do
    let COUNT=0
    for I in ${RS[*]}; do
        if checkrs $I; then
            if [ ${RSSTATUS[$COUNT]} -eq 0 ]; then
                addrs $I ${RW[$COUNT]}
                [ $? -eq 0 ] && RSSTATUS[$COUNT]=1 && echo "`date +'%F %H:%M:%S'`, $I is back." >> $LOG
            fi
        else
            if [ ${RSSTATUS[$COUNT]} -eq 1 ]; then
                delrs $I
                [ $? -eq 0 ] && RSSTATUS[$COUNT]=0 && echo "`date +'%F %H:%M:%S'`, $I is gone." >> $LOG
            fi
        fi
        let COUNT++
    done
    sleep 5
done
```
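
The script can be saved and left running in the background on each dpvs server; the file path below is only illustrative:

```
# Run the health-check loop in the background
chmod +x /usr/local/bin/ipvsmonitor.sh
nohup /usr/local/bin/ipvsmonitor.sh >/dev/null 2>&1 &
```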

3. REAL SERVER deployment

3.1 Deploy the nginx service

3.1.1 Build nginx

```
# Install the nginx service
wget https://nginx.org/download/nginx-1.22.1.tar.gz
tar xzvf nginx-1.22.1.tar.gz
cd nginx-1.22.1
./configure --prefix=/opt/nginx
make -j4
make install
```

3.1.2 Configure the nginx service

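What is needed here is a configuration that serves /index.html on port 8080, the RS port that the service definition and the health check expect. A minimal sketch, written via a heredoc into the conf path implied by the --prefix above (adjust to taste):

```
# Minimal config: listen on 8080 and serve the default html root
cat > /opt/nginx/conf/nginx.conf <<'EOF'
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    server {
        listen  8080;
        location / {
            root   html;
            index  index.html;
        }
    }
}
EOF
# Give each RS a distinguishable page so the rr balancing is visible from the client
echo "real server1" > /opt/nginx/html/index.html   # use "real server2" on the other RS
```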

3.1.3 Start the nginx service

```
cd /opt/nginx/
./sbin/nginx
```

IV. Testing and Verification

  1. Connectivity tests

    ```
    # From the client, check that the VIP responds to ping
    ping 192.168.0.1
    # From the dpvs servers, check that the clients respond to ping
    ping 192.168.2.100
    ping 192.168.2.101
    # From the client, check that TCP port 8080 is reachable
    telnet 192.168.0.1 8080
    ```
  2. HTTP request test

    ```
    # From the client, issue an HTTP request
    curl "http://192.168.0.1:8080/index.html"
    ```
  3. Simulated single-dpvs failure test

    Pick one dpvs at random, e.g. dpvs1, kill its dpvs process, and verify that the HTTP request test still succeeds, as in the sketch below.
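
    For example (the exact kill command is illustrative; the router needs up to the OSPF dead-interval, 40 s here, to withdraw the failed next hop):

    ```
    # On dpvs1: stop the dpvs process
    pkill dpvs
    # On the ROUTER: the VIP route should converge to a single next hop (dpvs2)
    vtysh -c 'show ip route 192.168.0.0/24'
    # On the client: requests should keep succeeding
    curl "http://192.168.0.1:8080/index.html"
    ```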
  4. Simulated recovery after a single-dpvs failure

    Restart the dpvs process that was killed above, then check the access logs on the real servers to verify that HTTP requests are served through this dpvs server again.
  5. Simulated single-realserver failure test

    Stop the nginx process on one of the real servers, e.g. real server1, and verify that the HTTP request test still succeeds; the health-check script should remove the failed real server from the service.
  6. Simulated recovery after a single-realserver failure

    Restart the stopped nginx process, then check the access logs on the real servers to verify that HTTP requests are balanced to this real server again; the health-check script should add it back automatically, which can be watched as shown below.
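
    For both real-server tests, the nginx access log is the easiest place to watch traffic move; its default path follows from the --prefix used above:

    ```
    # On each real server: watch requests arrive in real time
    tail -f /opt/nginx/logs/access.log
    ```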