Deploying KubeEdge

Introduction

KubeEdge

KubeEdge is an open source system dedicated to edge scenarios. It extends container orchestration and device management to edge devices. Built on Kubernetes, KubeEdge provides the core infrastructure for networking, application deployment, and metadata synchronization between the cloud side and the edge side. KubeEdge supports MQTT and allows developers to write custom logic to enable communication with resource-constrained devices at the edge. KubeEdge consists of a cloud part and an edge part, both of which are open source.

https://kubeedge.io/

iSulad

iSulad is a lightweight container runtime daemon designed for IoT and cloud infrastructure. It is small, fast, and independent of hardware specifications and architecture, so it can be used widely in cloud, IoT, edge computing, and other scenarios.

https://gitee.com/openeuler/iSulad

Preparation

Component versions

Component    Version
OS           openEuler 21.09 (x86_64)
Kubernetes   1.20.2-4
iSulad       2.0.9-20210625.165022.git5a088d9c
KubeEdge     v1.8.0

Node planning

Node           Role          Components
9.63.252.224   Cloud side    k8s (master), iSulad, cloudcore
9.63.252.227   Edge side     iSulad, edgecore

Environment preparation

The following settings must be configured on both the cloud node and the edge node.

# Disable the firewall
$ systemctl stop firewalld
$ systemctl disable firewalld
# Disable SELinux
$ setenforce 0
# Network configuration: enable the required forwarding settings
$ cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
# Apply the rules
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
# Verify that they took effect
$ cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
$ cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
# Turn off system swap
$ swapoff -a
# Set the hostname
# Cloud side
$ hostnamectl set-hostname cloud.kubeedge
# Edge side
$ hostnamectl set-hostname edge.kubeedge
# Configure the hosts file
$ cat >> /etc/hosts << EOF
9.63.252.224 cloud.kubeedge
9.63.252.227 edge.kubeedge
EOF
# Synchronize the clock; any reachable NTP server will do
$ ntpdate cn.pool.ntp.org

Configuring iSulad

The following settings must be configured on both the cloud node and the edge node.

# Install iSulad
$ yum install -y iSulad
# Configure iSulad (only the modified fields are shown)
$ cat /etc/isulad/daemon.json
{
    "registry-mirrors": [
        "docker.io"
    ],
    "insecure-registries": [
        "k8s.gcr.io",
        "quay.io",
        "hub.oepkgs.net"
    ],
    "pod-sandbox-image": "k8s.gcr.io/pause:3.2",   # pause image setting
    "network-plugin": "cni",                        # leave empty to disable the CNI plugin (the two paths below are then ignored); after installing the CNI plugins, restart isulad
    "cni-bin-dir": "/opt/cni/bin",
    "cni-conf-dir": "/etc/cni/net.d"
}
# Configure a proxy only if the host cannot reach the Internet directly; otherwise skip this step
$ cat /usr/lib/systemd/system/isulad.service
[Service]
Type=notify
Environment="HTTP_PROXY=http://..."
Environment="HTTPS_PROXY=http://..."
# Restart iSulad and enable it at boot
$ systemctl daemon-reload && systemctl restart isulad
$ systemctl enable isulad
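
To confirm that iSulad has restarted cleanly and that the client can reach it, a quick check such as the following can help (the output is illustrative and varies by version):

# The daemon should report active, and the isula CLI should reach it over /var/run/isulad.sock
$ systemctl is-active isulad
active
$ isula version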

Installing the Kubernetes components

The Kubernetes components only need to be installed and deployed on the cloud node.

# cri-tools: command-line tools for CRI-compatible container runtimes
$ wget --no-check-certificate https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
$ tar zxvf crictl-v1.20.0-linux-amd64.tar.gz -C /usr/local/bin
# CNI network plugins
$ wget --no-check-certificate https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz
$ mkdir -p /opt/cni/bin
$ tar -zxvf cni-plugins-linux-amd64-v0.9.0.tgz -C /opt/cni/bin
# Run on the master node
$ yum install kubernetes-master kubernetes-kubeadm kubernetes-client kubernetes-kubelet
# Enable kubelet at boot and start it
$ systemctl enable kubelet --now
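
crictl talks to the dockershim socket by default; to make it inspect the same runtime that kubelet uses, it can be pointed at the iSulad socket used throughout this guide. A minimal sketch (the timeout value is conventional, not mandated by this guide):

# Point crictl at iSulad so that "crictl ps" / "crictl pods" query iSulad
$ cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/isulad.sock
image-endpoint: unix:///var/run/isulad.sock
timeout: 10
EOF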

Deploying the master node

The Kubernetes master is deployed on the cloud node only.

# Note: unset any proxy variables in the environment before running kubeadm init
$ unset `env | grep -iE "tps?_proxy" | cut -d= -f1`
$ env | grep proxy
# Initialize with kubeadm
$ kubeadm init --kubernetes-version v1.20.2 --pod-network-cidr=10.244.0.0/16 --upload-certs --cri-socket=/var/run/isulad.sock
# The Kubernetes component images are pulled from k8s.gcr.io by default; use --image-repository=xxx to point at a custom registry (e.g. to test your own k8s images)
# Note: the pod-network-cidr range must not overlap with the host network, otherwise pods will have no connectivity
# Run init first, then configure the network
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
...
# Run the commands from the output above
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# These commands copy admin.conf, the kubectl configuration file that kubeadm generated during init;
# it contains the credentials and other settings required to access the cluster.
# Reset the node if init went wrong and start over
$ kubeadm reset
# If you see "Unable to read config path /etc/kubernetes/manifests"
$ mkdir -p /etc/kubernetes/manifests
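
Before configuring the network, it is worth verifying that the control-plane pods came up against iSulad; a quick check might look like the following (coredns stays Pending until a CNI plugin is applied):

# List the control-plane pods and the containers iSulad is actually running
$ kubectl get pods -n kube-system -o wide
$ crictl ps    # uses /etc/crictl.yaml from the previous step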

Configuring the network

The Calico network plugin cannot run on edge nodes, so flannel is used instead; users have already filed an issue about this in the KubeEdge community.

Because the cloud side and the edge side are in different network environments and need different node affinities, two flannel configuration files are required.

# Download the flannel manifest
$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Prepare the cloud-side network configuration
$ cp kube-flannel.yml kube-flannel-cloud.yml
# Modify the cloud-side configuration
diff --git a/kube-flannel.yml b/kube-flannel.yml
index c7edaef..f3846b9 100644
--- a/kube-flannel.yml
+++ b/kube-flannel.yml
@@ -134,7 +134,7 @@ data:
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
-  name: kube-flannel-ds
+  name: kube-flannel-cloud-ds
   namespace: kube-system
   labels:
     tier: node
@@ -158,6 +158,8 @@ spec:
                 operator: In
                 values:
                 - linux
+              - key: node-role.kubernetes.io/agent
+                operator: DoesNotExist
       hostNetwork: true
       priorityClassName: system-node-critical
       tolerations:
# Prepare the edge-side network configuration
$ cp kube-flannel.yml kube-flannel-edge.yml
# Modify the edge-side configuration
diff --git a/kube-flannel.yml b/kube-flannel.yml
index c7edaef..66a5b5b 100644
--- a/kube-flannel.yml
+++ b/kube-flannel.yml
@@ -134,7 +134,7 @@ data:
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
-  name: kube-flannel-ds
+  name: kube-flannel-edge-ds
   namespace: kube-system
   labels:
     tier: node
@@ -158,6 +158,8 @@ spec:
                 operator: In
                 values:
                 - linux
+              - key: node-role.kubernetes.io/agent
+                operator: Exists
       hostNetwork: true
       priorityClassName: system-node-critical
       tolerations:
@@ -186,6 +188,7 @@ spec:
         args:
         - --ip-masq
         - --kube-subnet-mgr
+        - --kube-api-url=http://127.0.0.1:10550
         resources:
           requests:
             cpu: "100m"
# --kube-api-url above is the address that edgecore listens on at the edge
# Apply the flannel manifests
$ kubectl apply -f kube-flannel-cloud.yml
$ kubectl apply -f kube-flannel-edge.yml
# Check the node status
$ kubectl get node -A
NAME             STATUS   ROLES                  AGE     VERSION
cloud.kubeedge   Ready    control-plane,master   4h11m   v1.20.2

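After applying both manifests, you can check that each flannel DaemonSet targets the intended nodes (the pod names below will differ in your cluster):

# kube-flannel-cloud-ds should schedule on cloud nodes, kube-flannel-edge-ds on edge nodes only
$ kubectl -n kube-system get ds | grep flannel
$ kubectl -n kube-system get pods -l app=flannel -o wide
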
If the cluster was deployed with kubeadm, the kube-proxy DaemonSet will also be scheduled onto the edge node, but edgecore cannot coexist with kube-proxy. Modify the node affinity of the kube-proxy DaemonSet so that kube-proxy is never deployed on edge nodes.

$ kubectl edit ds kube-proxy -n kube-system
# Add the following to the pod template spec
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/agent
            operator: DoesNotExist
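
If you prefer a non-interactive change, the same node affinity can be applied with a one-off patch; this is a sketch equivalent to the edit above:

# Keep kube-proxy off KubeEdge agent (edge) nodes
$ kubectl -n kube-system patch daemonset kube-proxy --type merge -p \
  '{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/agent","operator":"DoesNotExist"}]}]}}}}}}}'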

iSulad configuration

On the edge side, KubeEdge's edgecore listens on port 10350, which conflicts with the iSulad websocket port, so edgecore cannot start on the edge node.

Workaround: change the websocket-server-listening-port field in the iSulad configuration (/etc/isulad/daemon.json) to 10351.

diff --git a/daemon.json b/daemon.json
index 3333590..336154e 100644
--- a/daemon.json
+++ b/daemon.json
@@ -31,6 +31,7 @@
         "hub.oepkgs.net"
     ],
     "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
+    "websocket-server-listening-port": 10351,
     "native.umask": "secure",
     "network-plugin": "cni",
     "cni-bin-dir": "/opt/cni/bin",

After modifying the configuration file, restart iSulad.
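
To confirm the conflict is resolved after the restart, you can check which process owns each port (a sketch; once everything is running, 10350 belongs to edgecore and 10351 to iSulad):

$ systemctl restart isulad
$ ss -tlnp | grep -E ':1035[01]'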

Deployment with keadm

To deploy the cluster with keadm, you only need to install the kubeedge-keadm RPM package on both the cloud node and the edge node.

$ yum install kubeedge-keadm

Deploying the cloud side

Initialize the cluster

# --advertise-address is the cloud-side IP
$ keadm init --advertise-address="9.63.252.224" --kubeedge-version=1.8.0
Kubernetes version verification passed, KubeEdge installation will start...
W0829 10:41:56.541877 420383 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0829 10:41:57.253214 420383 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0829 10:41:59.778672 420383 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0829 10:42:00.488838 420383 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
kubeedge-v1.8.0-linux-amd64.tar.gz checksum:
checksum_kubeedge-v1.8.0-linux-amd64.tar.gz.txt content:
[Run as service] start to download service file for cloudcore
[Run as service] success to download service file for cloudcore
kubeedge-v1.8.0-linux-amd64/
kubeedge-v1.8.0-linux-amd64/edge/
kubeedge-v1.8.0-linux-amd64/edge/edgecore
kubeedge-v1.8.0-linux-amd64/cloud/
kubeedge-v1.8.0-linux-amd64/cloud/csidriver/
kubeedge-v1.8.0-linux-amd64/cloud/csidriver/csidriver
kubeedge-v1.8.0-linux-amd64/cloud/admission/
kubeedge-v1.8.0-linux-amd64/cloud/admission/admission
kubeedge-v1.8.0-linux-amd64/cloud/cloudcore/
kubeedge-v1.8.0-linux-amd64/cloud/cloudcore/cloudcore
kubeedge-v1.8.0-linux-amd64/version
KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
CloudCore started

At this point cloudcore is running, but it is not managed by systemd, and dynamicController (the cloud-side counterpart of edgecore's list-watch feature) is not enabled, so the configuration needs to be modified.

Modify the configuration

# Edit /etc/kubeedge/config/cloudcore.yaml
# Enable the dynamic controller
diff --git a/cloudcore.yaml b/cloudcore.yaml
index 28235a9..313375c 100644
--- a/cloudcore.yaml
+++ b/cloudcore.yaml
@@ -1,7 +1,3 @@
 apiVersion: cloudcore.config.kubeedge.io/v1alpha1
 commonConfig:
   tunnelPort: 10350
@@ -67,7 +63,7 @@ modules:
     load:
       updateDeviceStatusWorkers: 1
   dynamicController:
-    enable: false
+    enable: true    # enable dynamicController to support edgecore's list-watch feature
   edgeController:
     buffer:
       configMapEvent: 1
@@ -119,5 +115,3 @@ modules:
     restTimeout: 60
   syncController:
     enable: true
# Manage cloudcore on the cloud side with systemd:
# copy cloudcore.service to /usr/lib/systemd/system
$ cp /etc/kubeedge/cloudcore.service /usr/lib/systemd/system
# Kill the currently running cloudcore process
$ pkill cloudcore
$ systemctl restart cloudcore
# Check whether cloudcore is running
$ systemctl status cloudcore
cloudcore.service
   Loaded: loaded (/usr/lib/systemd/system/cloudcore.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-08-29 10:50:14 CST; 4s ago
 Main PID: 424578 (cloudcore)
    Tasks: 36 (limit: 202272)
   Memory: 44.2M
      CPU: 112ms
   CGroup: /system.slice/cloudcore.service
           └─424578 /usr/local/bin/cloudcore
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.845792 424578 upstream.go:121] start upstream controller
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.846586 424578 downstream.go:870] Start downstream devicecontroller
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.849475 424578 downstream.go:566] start downstream controller
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.946110 424578 server.go:243] Ca and CaKey don't exist in local directory, and will read from the secret
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.951806 424578 server.go:288] CloudCoreCert and key don't exist in local directory, and will read from the >
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.959661 424578 signcerts.go:100] Succeed to creating token
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.959716 424578 server.go:44] start unix domain socket server
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.959973 424578 uds.go:71] listening on: //var/lib/kubeedge/kubeedge.sock
Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.966693 424578 server.go:64] Starting cloudhub websocket server
Aug 29 10:50:17 cloud.kubeedge cloudcore[424578]: I0829 10:50:17.847150 424578 upstream.go:63] Start upstream devicecontroller

At this point the cloud-side cloudcore is fully deployed; next, deploy edgecore on the edge side.
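
As a quick sanity check, cloudcore should now be listening on the CloudHub ports configured in cloudcore.yaml (10000 for the websocket endpoint and 10002 for the HTTPS endpoint by default):

$ ss -tlnp | grep cloudcore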

Deploying the edge side

Likewise, use keadm on the edge machine to join it to the cloud side.

Modify the iSulad configuration

# File: /etc/isulad/daemon.json
# Set "pod-sandbox-image"
"pod-sandbox-image": "kubeedge/pause:3.1",
# Set the websocket listening port
"websocket-server-listening-port": 10351,

Join the cloud side

# Get the token on the cloud side
$ keadm gettoken
28c25d3b137593f5bbfb776cf5b19866ab9727cab9e97964dd503f87cd52cbde.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzAyOTE4MTV9.aGUyCi9gdysVtMu0DQzrD5TcV_DcXob647YeqcOxKDA
# Run keadm join on the edge side to join the cloud
# --cloudcore-ipport is a mandatory parameter; 10000 is the default cloudcore port
$ keadm join --cloudcore-ipport=9.63.252.224:10000 --kubeedge-version=1.8.0 --token=28c25d3b137593f5bbfb776cf5b19866ab9727cab9e97964dd503f87cd52cbde.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzAyOTE4MTV9.aGUyCi9gdysVtMu0DQzrD5TcV_DcXob647YeqcOxKDA
Host has mosquit+ already installed and running. Hence skipping the installation steps !!!
kubeedge-v1.8.0-linux-amd64.tar.gz checksum:
checksum_kubeedge-v1.8.0-linux-amd64.tar.gz.txt content:
[Run as service] start to download service file for edgecore
[Run as service] success to download service file for edgecore
kubeedge-v1.8.0-linux-amd64/
kubeedge-v1.8.0-linux-amd64/edge/
kubeedge-v1.8.0-linux-amd64/edge/edgecore
kubeedge-v1.8.0-linux-amd64/cloud/
kubeedge-v1.8.0-linux-amd64/cloud/csidriver/
kubeedge-v1.8.0-linux-amd64/cloud/csidriver/csidriver
kubeedge-v1.8.0-linux-amd64/cloud/admission/
kubeedge-v1.8.0-linux-amd64/cloud/admission/admission
kubeedge-v1.8.0-linux-amd64/cloud/cloudcore/
kubeedge-v1.8.0-linux-amd64/cloud/cloudcore/cloudcore
kubeedge-v1.8.0-linux-amd64/version
KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -b

At this point edgecore has not actually started successfully, because the default configuration assumes Docker; its configuration file must be modified to work with iSulad.

Modify the configuration

# Manage edgecore on the edge side with systemd:
# copy edgecore.service to /usr/lib/systemd/system
$ cp /etc/kubeedge/edgecore.service /usr/lib/systemd/system
# Modify the edgecore configuration
$ vim /etc/kubeedge/config/edgecore.yaml
diff --git a/edgecore.yaml b/edgecore.yaml
index 165e24b..efbfd49 100644
--- a/edgecore.yaml
+++ b/edgecore.yaml
@@ -32,7 +32,7 @@ modules:
       server: 9.63.252.224:10000
       writeDeadline: 15
   edgeMesh:
-    enable: true
+    enable: false    # disable edgeMesh
     lbStrategy: RoundRobin
     listenInterface: docker0
     listenPort: 40001
@@ -73,10 +73,10 @@ modules:
     podSandboxImage: kubeedge/pause:3.1
     registerNode: true
     registerNodeNamespace: default
-    remoteImageEndpoint: unix:///var/run/dockershim.sock
-    remoteRuntimeEndpoint: unix:///var/run/dockershim.sock
+    remoteImageEndpoint: unix:///var/run/isulad.sock
+    remoteRuntimeEndpoint: unix:///var/run/isulad.sock
     runtimeRequestTimeout: 2
-    runtimeType: docker
+    runtimeType: remote    # iSulad uses the remote runtime type
     volumeStatsAggPeriod: 60000000000
   eventBus:
     enable: true
@@ -97,7 +97,7 @@ modules:
     enable: true
     metaServer:
       debug: false
-      enable: false
+      enable: true    # enable list-watch
     podStatusSyncInterval: 60
     remoteQueryTimeout: 60
   serviceBus:
# Kill the currently running edgecore process
$ pkill edgecore
# Restart edgecore
$ systemctl restart edgecore
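
Once edgecore has been restarted, a quick check on the edge node helps confirm that it stays up instead of crash-looping on the old Docker settings (output abbreviated):

# edgecore should be active; recent logs show it registering the node through CloudHub
$ systemctl status edgecore --no-pager
$ journalctl -u edgecore -b --no-pager | tail -n 20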

Check whether the edge node has joined the cloud

# Back on the cloud side, the edge node is now registered
$ kubectl get node -A
NAME             STATUS   ROLES                  AGE     VERSION
cloud.kubeedge   Ready    control-plane,master   19h     v1.20.2
edge.kubeedge    Ready    agent,edge             5m16s   v1.19.3-kubeedge-v1.8.0

The keadm-based KubeEdge deployment is now complete. Next, test dispatching a workload from the cloud side to the edge side.

Deploying an application

Deploy nginx

# KubeEdge provides an nginx sample manifest that can be used directly
$ kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml
deployment.apps/nginx-deployment created
# Check whether the pod was scheduled to the edge node
$ kubectl get pod -A -owide | grep nginx
default   nginx-deployment-77f96fbb65-fnp7n   1/1   Running   0   37s   10.244.2.4   edge.kubeedge   <none>   <none>
# The pod has been deployed to the edge node successfully

Test the application

# Verify that the application responds
# On the edge node, curl the nginx pod IP: 10.244.2.4
$ curl 10.244.2.4:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

This completes the full KubeEdge + iSulad deployment workflow.

Deployment with binaries

You can also deploy a KubeEdge cluster from binaries. Only two RPM packages are needed: cloudcore (cloud side) and edgecore (edge side).

Binary deployment of KubeEdge is intended for testing only; never use it in a production environment.

Deploying the cloud side

Log in to the cloud host.

Install the kubeedge-cloudcore RPM package

$ yum install kubeedge-cloudcore

Create the CRDs

$ kubectl apply -f /etc/kubeedge/crds/devices/devices_v1alpha2_device.yaml
$ kubectl apply -f /etc/kubeedge/crds/devices/devices_v1alpha2_devicemodel.yaml
$ kubectl apply -f /etc/kubeedge/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
$ kubectl apply -f /etc/kubeedge/crds/reliablesyncs/objectsync_v1alpha1.yaml

Prepare the configuration file

$ cloudcore --defaultconfig > /etc/kubeedge/config/cloudcore.yaml

Then modify the cloudcore configuration as described in the keadm deployment section.
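
For example, the dynamicController switch described in the keadm section can be flipped non-interactively; this sketch assumes the default layout produced by cloudcore --defaultconfig (edit the file by hand if the layout differs):

# Enable dynamicController so edgecore's list-watch (metaServer) feature works
$ sed -i '/dynamicController:/{n;s/enable: false/enable: true/}' /etc/kubeedge/config/cloudcore.yaml
$ grep -A1 'dynamicController:' /etc/kubeedge/config/cloudcore.yaml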

Run cloudcore

$ pkill cloudcore
$ systemctl start cloudcore

Deploying the edge side

Log in to the edge host.

Install the kubeedge-edgecore RPM package

$ yum install kubeedge-edgecore

Prepare the configuration file

$ edgecore --defaultconfig > /etc/kubeedge/config/edgecore.yaml

Then modify the edgecore configuration as described in the keadm deployment section.

# On the cloud side, read the token from the tokensecret secret
$ kubectl get secret -nkubeedge tokensecret -o=jsonpath='{.data.tokendata}' | base64 -d
1c4ff11289a14c59f2cbdbab726d1857262d5bda778ddf0de34dd59d125d3f69.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzE0ODM3MzN9.JY77nMVDHIKD9ipo03Y0mSbxief9qOvJ4yMNx1yZpp0
# Write the obtained token into the configuration file
sed -i -e "s|token: .*|token: ${token}|g" /etc/kubeedge/config/edgecore.yaml
# The token variable holds the value obtained in the previous step
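
Putting the two steps together: the token can be captured into a shell variable on the cloud host and then written into edgecore.yaml on the edge host (a sketch; copy the value across manually if the two hosts do not share a shell):

# On the cloud host: read the token into a variable
$ token=$(kubectl get secret -nkubeedge tokensecret -o=jsonpath='{.data.tokendata}' | base64 -d)
# On the edge host: write it into the edgecore configuration and verify
$ sed -i -e "s|token: .*|token: ${token}|g" /etc/kubeedge/config/edgecore.yaml
$ grep 'token:' /etc/kubeedge/config/edgecore.yaml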

Run edgecore

$ pkill edgecore
$ systemctl start edgecore

Appendix

kube-flannel-cloud.yml

# Used on: the cloud side
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-cloud-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
              - key: node-role.kubernetes.io/agent
                operator: DoesNotExist
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

kube-flannel-edge.yml

# Used on: the edge side
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-edge-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
              - key: node-role.kubernetes.io/agent
                operator: Exists
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --kube-api-url=http://127.0.0.1:10550
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

cloudcore.service

# Used on: the cloud side
# File: /usr/lib/systemd/system/cloudcore.service
[Unit]
Description=cloudcore.service
[Service]
Type=simple
ExecStart=/usr/local/bin/cloudcore
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

cloudcore.yaml

# Used on: the cloud side
# File: /etc/kubeedge/config/cloudcore.yaml
apiVersion: cloudcore.config.kubeedge.io/v1alpha1
commonConfig:
  tunnelPort: 10351
kind: CloudCore
kubeAPIConfig:
  burst: 200
  contentType: application/vnd.kubernetes.protobuf
  kubeConfig: /root/.kube/config
  master: ""
  qps: 100
modules:
  cloudHub:
    advertiseAddress:
    - 9.63.252.224
    dnsNames:
    - ""
    edgeCertSigningDuration: 365
    enable: true
    https:
      address: 0.0.0.0
      enable: true
      port: 10002
    keepaliveInterval: 30
    nodeLimit: 1000
    quic:
      address: 0.0.0.0
      enable: false
      maxIncomingStreams: 10000
      port: 10001
    tlsCAFile: /etc/kubeedge/ca/rootCA.crt
    tlsCAKeyFile: /etc/kubeedge/ca/rootCA.key
    tlsCertFile: /etc/kubeedge/certs/server.crt
    tlsPrivateKeyFile: /etc/kubeedge/certs/server.key
    tokenRefreshDuration: 12
    unixsocket:
      address: unix:///var/lib/kubeedge/kubeedge.sock
      enable: true
    websocket:
      address: 0.0.0.0
      enable: true
      port: 10000
      writeTimeout: 30
  cloudStream:
    enable: false
    streamPort: 10003
    tlsStreamCAFile: /etc/kubeedge/ca/streamCA.crt
    tlsStreamCertFile: /etc/kubeedge/certs/stream.crt
    tlsStreamPrivateKeyFile: /etc/kubeedge/certs/stream.key
    tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
    tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
    tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
    tunnelPort: 10004
  deviceController:
    buffer:
      deviceEvent: 1
      deviceModelEvent: 1
      updateDeviceStatus: 1024
    context:
      receiveModule: devicecontroller
      responseModule: cloudhub
      sendModule: cloudhub
    enable: true
    load:
      updateDeviceStatusWorkers: 1
  dynamicController:
    enable: true
  edgeController:
    buffer:
      configMapEvent: 1
      deletePod: 1024
      endpointsEvent: 1
      podEvent: 1
      queryConfigMap: 1024
      queryEndpoints: 1024
      queryNode: 1024
      queryPersistentVolume: 1024
      queryPersistentVolumeClaim: 1024
      querySecret: 1024
      queryService: 1024
      queryVolumeAttachment: 1024
      ruleEndpointsEvent: 1
      rulesEvent: 1
      secretEvent: 1
      serviceAccountToken: 1024
      serviceEvent: 1
      updateNode: 1024
      updateNodeStatus: 1024
      updatePodStatus: 1024
    context:
      receiveModule: edgecontroller
      responseModule: cloudhub
      sendModule: cloudhub
      sendRouterModule: router
    enable: true
    load:
      ServiceAccountTokenWorkers: 4
      UpdateRuleStatusWorkers: 4
      deletePodWorkers: 4
      queryConfigMapWorkers: 4
      queryEndpointsWorkers: 4
      queryNodeWorkers: 4
      queryPersistentVolumeClaimWorkers: 4
      queryPersistentVolumeWorkers: 4
      querySecretWorkers: 4
      queryServiceWorkers: 4
      queryVolumeAttachmentWorkers: 4
      updateNodeStatusWorkers: 1
      updateNodeWorkers: 4
      updatePodStatusWorkers: 1
    nodeUpdateFrequency: 10
  router:
    address: 0.0.0.0
    enable: false
    port: 9443
    restTimeout: 60
  syncController:
    enable: true

edgecore.service

# Used on: the edge side
# File: /etc/systemd/system/edgecore.service
[Unit]
Description=edgecore.service
[Service]
Type=simple
ExecStart=/usr/local/bin/edgecore
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

edgecore.yaml

# Used on: the edge side
# File: /etc/kubeedge/config/edgecore.yaml
apiVersion: edgecore.config.kubeedge.io/v1alpha1
database:
  aliasName: default
  dataSource: /var/lib/kubeedge/edgecore.db
  driverName: sqlite3
kind: EdgeCore
modules:
  dbTest:
    enable: false
  deviceTwin:
    enable: true
  edgeHub:
    enable: true
    heartbeat: 15
    httpServer: https://9.63.252.224:10002
    projectID: e632aba927ea4ac2b575ec1603d56f10
    quic:
      enable: false
      handshakeTimeout: 30
      readDeadline: 15
      server: 9.63.252.227:10001
      writeDeadline: 15
    rotateCertificates: true
    tlsCaFile: /etc/kubeedge/ca/rootCA.crt
    tlsCertFile: /etc/kubeedge/certs/server.crt
    tlsPrivateKeyFile: /etc/kubeedge/certs/server.key
    token: # fill in the token obtained from the cloud side
    websocket:
      enable: true
      handshakeTimeout: 30
      readDeadline: 15
      server: 9.63.252.224:10000
      writeDeadline: 15
  edgeMesh:
    enable: false
    lbStrategy: RoundRobin
    listenInterface: docker0
    listenPort: 40001
    subNet: 9.251.0.0/16
  edgeStream:
    enable: false
    handshakeTimeout: 30
    readDeadline: 15
    server: 9.63.252.224:10004
    tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
    tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
    tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
    writeDeadline: 15
  edged:
    cgroupDriver: cgroupfs
    cgroupRoot: ""
    cgroupsPerQOS: true
    clusterDNS: ""
    clusterDomain: ""
    cniBinDir: /opt/cni/bin
    cniCacheDirs: /var/lib/cni/cache
    cniConfDir: /etc/cni/net.d
    concurrentConsumers: 5
    devicePluginEnabled: false
    dockerAddress: unix:///var/run/docker.sock
    edgedMemoryCapacity: 7852396000
    enable: true
    enableMetrics: true
    gpuPluginEnabled: false
    hostnameOverride: edge.kubeedge
    imageGCHighThreshold: 80
    imageGCLowThreshold: 40
    imagePullProgressDeadline: 60
    maximumDeadContainersPerPod: 1
    networkPluginMTU: 1500
    nodeIP: 9.63.252.227
    nodeStatusUpdateFrequency: 10
    podSandboxImage: kubeedge/pause:3.1
    registerNode: true
    registerNodeNamespace: default
    remoteImageEndpoint: unix:///var/run/isulad.sock
    remoteRuntimeEndpoint: unix:///var/run/isulad.sock
    runtimeRequestTimeout: 2
    runtimeType: remote
    volumeStatsAggPeriod: 60000000000
  eventBus:
    enable: true
    eventBusTLS:
      enable: false
      tlsMqttCAFile: /etc/kubeedge/ca/rootCA.crt
      tlsMqttCertFile: /etc/kubeedge/certs/server.crt
      tlsMqttPrivateKeyFile: /etc/kubeedge/certs/server.key
    mqttMode: 2
    mqttQOS: 0
    mqttRetain: false
    mqttServerExternal: tcp://127.0.0.1:1883
    mqttServerInternal: tcp://127.0.0.1:1884
    mqttSessionQueueSize: 100
  metaManager:
    contextSendGroup: hub
    contextSendModule: websocket
    enable: true
    metaServer:
      debug: false
      enable: true
    podStatusSyncInterval: 60
    remoteQueryTimeout: 60
  serviceBus:
    enable: false

daemon.json

# Used on: the cloud side and the edge side
# File: /etc/isulad/daemon.json
{
    "group": "isula",
    "default-runtime": "lcr",
    "graph": "/var/lib/isulad",
    "state": "/var/run/isulad",
    "engine": "lcr",
    "log-level": "ERROR",
    "pidfile": "/var/run/isulad.pid",
    "log-opts": {
        "log-file-mode": "0600",
        "log-path": "/var/lib/isulad",
        "max-file": "1",
        "max-size": "30KB"
    },
    "log-driver": "stdout",
    "container-log": {
        "driver": "json-file"
    },
    "hook-spec": "/etc/default/isulad/hooks/default.json",
    "start-timeout": "2m",
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ],
    "registry-mirrors": [
        "docker.io"
    ],
    "insecure-registries": [
        "k8s.gcr.io",
        "quay.io",
        "hub.oepkgs.net"
    ],
    "pod-sandbox-image": "k8s.gcr.io/pause:3.2",   # on the edge side, set this to kubeedge/pause:3.1
    "websocket-server-listening-port": 10351,
    "native.umask": "secure",
    "network-plugin": "cni",
    "cni-bin-dir": "/opt/cni/bin",
    "cni-conf-dir": "/etc/cni/net.d",
    "image-layer-check": false,
    "use-decrypted-key": true,
    "insecure-skip-verify-enforce": false
}