YurtAppDaemon
Background
In edge scenarios, edge nodes in the same region are assigned to the same NodePool, and some system components, such as CoreDNS, typically need to be deployed at the NodePool level. When a NodePool is created, we want these system components to be created automatically, without any manual operations.
YurtAppDaemon ensures that all (or selected) NodePools run replicas from a Deployment or StatefulSet template. As NodePools are created, the corresponding Deployments or StatefulSets are added to the cluster, and their creation and updates are managed by the YurtAppDaemon controller.

Usage:
- Create test1 NodePool
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: test1
spec:
  selector:
    matchLabels:
      apps.openyurt.io/nodepool: test1
  type: Edge
EOF
```
- Create test2 NodePool
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: test2
spec:
  selector:
    matchLabels:
      apps.openyurt.io/nodepool: test2
  type: Edge
EOF
```
- Add nodes to the corresponding NodePool
```shell
kubectl label node cn-beijing.172.23.142.31 apps.openyurt.io/desired-nodepool=test1
kubectl label node cn-beijing.172.23.142.32 apps.openyurt.io/desired-nodepool=test1
kubectl label node cn-beijing.172.23.142.34 apps.openyurt.io/desired-nodepool=test2
kubectl label node cn-beijing.172.23.142.35 apps.openyurt.io/desired-nodepool=test2
```
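To confirm the nodes actually joined their pools, you can check as follows (a sketch; `apps.openyurt.io/nodepool` is the label the NodePool selectors above match on):

```shell
# List the pools and the nodes that joined each of them.
kubectl get nodepool
kubectl get node -l apps.openyurt.io/nodepool=test1
kubectl get node -l apps.openyurt.io/nodepool=test2
```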
- Create YurtAppDaemon
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppDaemon
metadata:
  name: daemon-1
  namespace: default
spec:
  selector:
    matchLabels:
      app: daemon-1
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: daemon-1
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: daemon-1
        template:
          metadata:
            labels:
              app: daemon-1
          spec:
            containers:
              - image: nginx:1.18.0
                imagePullPolicy: Always
                name: nginx
  nodepoolSelector:
    matchLabels:
      yurtappdaemon.openyurt.io/type: "nginx"
EOF
```
- Label test1 NodePool
```shell
kubectl label np test1 yurtappdaemon.openyurt.io/type=nginx

# Check the Deployment
kubectl get deployments.apps
# Check the Deployment nodeselector
# Check the Pod
```
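The check comments above can be carried out like this (a sketch; the Deployment name is generated by the controller, so look it up first and substitute it for the `<generated-name>` placeholder):

```shell
# Find the Deployment the controller generated for the test1 pool.
kubectl get deployments.apps
# Inspect the nodeSelector injected into its pod template.
kubectl get deployments.apps <generated-name> \
  -o jsonpath='{.spec.template.spec.nodeSelector}'
# Confirm the pods landed on test1 nodes.
kubectl get pod -l app=daemon-1 -o wide
```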
- Label test2 NodePool
```shell
kubectl label np test2 yurtappdaemon.openyurt.io/type=nginx

# Check the Deployment
kubectl get deployments.apps
# Check the Deployment nodeselector
# Check the Pod
```
- Update YurtAppDaemon
```shell
# Change yurtappdaemon workloadTemplate replicas to 2
# Change yurtappdaemon workloadTemplate image to nginx:1.19.0
# Check the Pod
```
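One way to apply the update described above is to edit the YurtAppDaemon in place (a sketch; the fields changed are exactly those defined in the daemon-1 manifest earlier):

```shell
# Open the YurtAppDaemon for editing and change
#   spec.workloadTemplate.deploymentTemplate.spec.replicas: 1 -> 2
#   spec.workloadTemplate.deploymentTemplate...containers[0].image:
#     nginx:1.18.0 -> nginx:1.19.0
kubectl edit yurtappdaemon daemon-1

# Watch the controller roll the change out to every per-pool Deployment.
kubectl get pod -l app=daemon-1 -o wide -w
```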
- Remove NodePool labels
```shell
# Remove the nodepool test1 label
kubectl label np test1 yurtappdaemon.openyurt.io/type-
# Check the Deployment
# Check the Pod

# Remove the nodepool test2 label
kubectl label np test2 yurtappdaemon.openyurt.io/type-
# Check the Deployment
# Check the Pod
```
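Once no NodePool carries the `yurtappdaemon.openyurt.io/type=nginx` label, the controller is expected to remove the workloads it generated; a sketch of the checks:

```shell
# The generated Deployments and their pods should be gone.
kubectl get deployments.apps
kubectl get pod -l app=daemon-1
```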
Example for deploying CoreDNS
Use YurtAppDaemon together with service topology to solve DNS resolution problems at the NodePool level.
- Create NodePool
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  selector:
    matchLabels:
      apps.openyurt.io/nodepool: hangzhou
  taints:
    - effect: NoSchedule
      key: node-role.openyurt.io/edge
  type: Edge
EOF
```
- Add label to NodePool
```shell
kubectl label np hangzhou yurtappdaemon.openyurt.io/type=coredns
```
- Deploy coredns
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppDaemon
metadata:
  name: coredns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          k8s-app: kube-dns
      spec:
        replicas: 2
        selector:
          matchLabels:
            k8s-app: kube-dns
        template:
          metadata:
            labels:
              k8s-app: kube-dns
          spec:
            volumes:
              - name: config-volume
                configMap:
                  name: coredns
                  items:
                    - key: Corefile
                      path: Corefile
            dnsPolicy: Default
            serviceAccount: coredns
            serviceAccountName: coredns
            containers:
              - args:
                  - -conf
                  - /etc/coredns/Corefile
                image: k8s.gcr.io/coredns:1.6.7
                imagePullPolicy: IfNotPresent
                name: coredns
                resources:
                  limits:
                    memory: 170Mi
                  requests:
                    cpu: 100m
                    memory: 70Mi
                securityContext:
                  allowPrivilegeEscalation: false
                  capabilities:
                    add:
                      - NET_BIND_SERVICE
                    drop:
                      - all
                  readOnlyRootFilesystem: true
                livenessProbe:
                  failureThreshold: 5
                  httpGet:
                    path: /health
                    port: 8080
                    scheme: HTTP
                  initialDelaySeconds: 60
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 5
                volumeMounts:
                  - mountPath: /etc/coredns
                    name: config-volume
                    readOnly: true
  nodepoolSelector:
    matchLabels:
      yurtappdaemon.openyurt.io/type: "coredns"
---
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
    openyurt.io/topologyKeys: openyurt.io/nodepool
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
spec:
  clusterIP: __kubernetes-coredns-ip__ # Replace with the kubernetes dns service ip
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
    - name: metrics
      port: 9153
      protocol: TCP
      targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
EOF
```
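To verify the result (a sketch; the `dns-test` pod name and `busybox:1.28` image are arbitrary illustrative choices, not part of the manifest above):

```shell
# One set of CoreDNS replicas should run inside the hangzhou pool, and the
# openyurt.io/topologyKeys annotation on the kube-dns Service keeps DNS
# traffic within the client's own NodePool.
kubectl get pod -n kube-system -l k8s-app=kube-dns -o wide

# Run a throwaway pod on a hangzhou node and resolve a cluster-internal name.
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it \
  -- nslookup kubernetes.default
```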
