Raven

This document describes how to install Raven and how to use it to enhance edge-to-edge and edge-to-cloud network connectivity in an edge cluster.

This guide assumes you already have an edge Kubernetes cluster whose nodes are spread across different physical regions, and that the Raven Controller Manager is deployed in that cluster; detailed information about the Raven Controller Manager can be found here.
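
As a quick sanity check (a minimal sketch, assuming the Raven Controller Manager was deployed into the kube-system namespace with Pod names containing raven-controller-manager), you can verify that its Pods are running before proceeding:

  $ kubectl get pod -n kube-system | grep raven-controller-manager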

1. Label nodes to distinguish network domains

As shown below, assume the edge cluster has five nodes spread across three different physical (network) regions, where the master node is the cloud node.

  $ kubectl get nodes -o wide
  NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP
  hhht-node1   Ready    <none>   20d   v1.16.2   10.48.115.9
  hhht-node2   Ready    <none>   20d   v1.16.2   10.48.115.10
  master       Ready    master   20d   v1.16.2   10.48.115.8
  wlcb-node1   Ready    <none>   20d   v1.16.2   10.48.115.11
  wlcb-node2   Ready    <none>   20d   v1.16.2   10.48.115.12

We manage the nodes in each physical (network) region with a dedicated Gateway CR, and we label each node to indicate which Gateway manages it.

With the following command, we label the nodes located in cn-huhehaote with gw-hhht, to indicate that they are managed by the gw-hhht Gateway CR.

  $ kubectl label nodes hhht-node1 hhht-node2 raven.openyurt.io/gateway=gw-hhht
  hhht-node1 labeled
  hhht-node2 labeled

Similarly, we label the cloud node with gw-cloud and the nodes located in cn-wulanchabu with gw-wlcb.

  $ kubectl label nodes master raven.openyurt.io/gateway=gw-cloud
  master labeled

  $ kubectl label nodes wlcb-node1 wlcb-node2 raven.openyurt.io/gateway=gw-wlcb
  wlcb-node1 labeled
  wlcb-node2 labeled
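
To confirm the node-to-gateway mapping, kubectl's standard -L flag prints the value of the given label as an extra column:

  $ kubectl get nodes -L raven.openyurt.io/gateway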

Install the Raven Agent

Run the following commands to install the latest version:

  git clone https://github.com/openyurtio/raven.git
  cd raven
  make deploy
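
Note that make deploy presumably applies the agent manifests through kubectl against your current kubeconfig context (an assumption about the repo's Makefile, not verified here), so it is worth confirming that the context points at the intended cluster first:

  $ kubectl config current-context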

Run the following command to check that the Raven Agent Pods are running successfully.

  $ kubectl get pod -n kube-system | grep raven-agent-ds
  raven-agent-ds-2jw47   1/1   Running   0   91s
  raven-agent-ds-bq8zc   1/1   Running   0   91s
  raven-agent-ds-cj7k4   1/1   Running   0   91s
  raven-agent-ds-p9fk9   1/1   Running   0   91s
  raven-agent-ds-rlb9q   1/1   Running   0   91s
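
The Pod names above suggest the agent runs as a DaemonSet, with one agent Pod per node (five here, matching the five nodes). Assuming the DaemonSet is named raven-agent-ds in kube-system, matching the Pod name prefix, you can confirm this directly:

  $ kubectl get daemonset raven-agent-ds -n kube-system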

2. How to use

2.1 Gateways

  • Create the Gateway CRs (underNAT indicates whether the endpoint node sits behind NAT; the cloud node is publicly reachable, so it is set to false)
  $ cat <<EOF | kubectl apply -f -
  apiVersion: raven.openyurt.io/v1alpha1
  kind: Gateway
  metadata:
    name: gw-hhht
  spec:
    endpoints:
      - nodeName: hhht-node1
        underNAT: true
      - nodeName: hhht-node2
        underNAT: true
  ---
  apiVersion: raven.openyurt.io/v1alpha1
  kind: Gateway
  metadata:
    name: gw-cloud
  spec:
    endpoints:
      - nodeName: master
        underNAT: false
  ---
  apiVersion: raven.openyurt.io/v1alpha1
  kind: Gateway
  metadata:
    name: gw-wlcb
  spec:
    endpoints:
      - nodeName: wlcb-node1
        underNAT: true
      - nodeName: wlcb-node2
        underNAT: true
  EOF
  • Check the status of each Gateway CR
  $ kubectl get gateways
  NAME       ACTIVEENDPOINT
  gw-cloud   master
  gw-hhht    hhht-node1
  gw-wlcb    wlcb-node1
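
For more detail, such as which endpoint was elected active and the Gateway's full status, kubectl describe works on the Gateway custom resource just as it does on any other object:

  $ kubectl describe gateway gw-hhht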

2.2 Test Pod-to-Pod connectivity across network domains

  • Create test Pods
  $ cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: fedora-1
  spec:
    nodeName: hhht-node2
    containers:
      - name: fedora
        image: njucjc/fedora:latest
        imagePullPolicy: Always
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: fedora-2
  spec:
    nodeName: wlcb-node2
    containers:
      - name: fedora
        image: njucjc/fedora:latest
        imagePullPolicy: Always
  EOF
  • Confirm that the test Pods are running
  $ kubectl get pod -o wide
  NAME       READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
  fedora-1   1/1     Running   0          46s   10.14.10.67   hhht-node2   <none>           <none>
  fedora-2   1/1     Running   0          46s   10.14.2.70    wlcb-node2   <none>           <none>
  • Test Pod connectivity across network domains
  $ kubectl exec -it fedora-1 -- bash
  [root@fedora-1]# ping 10.14.2.70 -c 4
  PING 10.14.2.70 (10.14.2.70) 56(84) bytes of data.
  64 bytes from 10.14.2.70: icmp_seq=1 ttl=60 time=32.2 ms
  64 bytes from 10.14.2.70: icmp_seq=2 ttl=60 time=32.2 ms
  64 bytes from 10.14.2.70: icmp_seq=3 ttl=60 time=32.0 ms
  64 bytes from 10.14.2.70: icmp_seq=4 ttl=60 time=32.1 ms

  --- 10.14.2.70 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3003ms
  rtt min/avg/max/mdev = 32.047/32.136/32.246/0.081 ms
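
Connectivity should hold in the reverse direction as well. As a final check, you can repeat the same test from fedora-2 toward fedora-1 (10.14.10.67, taken from the Pod listing above) and expect comparable results:

  $ kubectl exec -it fedora-2 -- bash
  [root@fedora-2]# ping 10.14.10.67 -c 4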