v0.10 to v1.0

Follow the Regular Upgrading Process.

Upgrading Notable Changes

Introduced karmada-aggregated-apiserver component

In releases before v1.0.0, Karmada used a CRD to extend the Cluster API; starting with v1.0.0, it uses API Aggregation (AA) instead.

Based on the above change, perform the following operations during the upgrade:

Step 1: Stop karmada-apiserver

You can stop karmada-apiserver by scaling its replicas to 0.
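For example, assuming karmada-apiserver runs as a Deployment named `karmada-apiserver` in the `karmada-system` namespace of the host cluster (the default installation layout), a sketch of scaling it down:

```bash
# Scale karmada-apiserver down to zero replicas on the host cluster.
# Deployment name and namespace assume a default installation.
kubectl -n karmada-system scale deployment karmada-apiserver --replicas=0

# Confirm no karmada-apiserver pods remain before proceeding.
kubectl -n karmada-system get pods -l app=karmada-apiserver
```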

Step 2: Remove Cluster CRD from etcd

Remove the Cluster CRD directly from etcd by running the following command.

```bash
etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \
  --key="/etc/kubernetes/pki/etcd/karmada.key" \
  --cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \
  del /registry/apiextensions.k8s.io/customresourcedefinitions/clusters.cluster.karmada.io
```

Note: This command removes only the CRD object; all the Cluster custom resources are left untouched in etcd. That is why the CRD is deleted directly from etcd rather than through karmada-apiserver, which would cascade-delete all Cluster objects along with it.
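As a sanity check, you can confirm the Cluster objects are still present. The key prefix below is an assumption based on Kubernetes' default etcd storage layout for cluster-scoped custom resources:

```bash
# List the keys of the remaining Cluster objects; the CRD deletion above
# must not have touched them. Certificate flags are the same as before.
etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \
  --key="/etc/kubernetes/pki/etcd/karmada.key" \
  --cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \
  get /registry/cluster.karmada.io/clusters --prefix --keys-only
```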

Step 3: Prepare the certificate for the karmada-aggregated-apiserver

To avoid CA reuse and conflicts, create a dedicated CA signer and use it to sign a certificate that enables the aggregation layer.
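A minimal sketch of what such a signer could look like with openssl; the file names, subjects, and validity periods are illustrative, not values Karmada mandates:

```bash
# Create a dedicated front-proxy CA, kept separate from the cluster CA.
openssl genrsa -out front-proxy-ca.key 2048
openssl req -x509 -new -nodes -key front-proxy-ca.key \
  -subj "/CN=front-proxy-ca" -days 3650 -out front-proxy-ca.crt

# Issue a client certificate signed by that CA for the aggregation layer.
openssl genrsa -out front-proxy-client.key 2048
openssl req -new -key front-proxy-client.key \
  -subj "/CN=front-proxy-client" -out front-proxy-client.csr
openssl x509 -req -in front-proxy-client.csr \
  -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial \
  -days 3650 -out front-proxy-client.crt
```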

Update the karmada-cert-secret secret in the karmada-system namespace:

```diff
apiVersion: v1
kind: Secret
metadata:
  name: karmada-cert-secret
  namespace: karmada-system
type: Opaque
data:
  ...
+ front-proxy-ca.crt: |
+   {{front_proxy_ca_crt}}
+ front-proxy-client.crt: |
+   {{front_proxy_client_crt}}
+ front-proxy-client.key: |
+   {{front_proxy_client_key}}
```
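The `{{...}}` placeholders above stand for base64-encoded PEM data, which is what Kubernetes expects in a Secret's `data` fields. A quick illustration of the encoding (GNU `base64 -w0` disables line wrapping; on macOS plain `base64` already emits one line):

```bash
# Secret data values are standard, unwrapped base64.
printf '%s' 'example-pem-content' | base64 -w0
```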

Then update the karmada-apiserver deployment's container command:

```diff
- - --proxy-client-cert-file=/etc/kubernetes/pki/karmada.crt
- - --proxy-client-key-file=/etc/kubernetes/pki/karmada.key
+ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
+ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- - --requestheader-client-ca-file=/etc/kubernetes/pki/server-ca.crt
+ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
```

After the update, restore the replicas of the karmada-apiserver deployment.
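For instance, if the Deployment was scaled to zero in Step 1, a sketch of bringing it back (the original replica count is assumed to be 1 here):

```bash
# Restore karmada-apiserver and wait for the rollout to finish.
kubectl -n karmada-system scale deployment karmada-apiserver --replicas=1
kubectl -n karmada-system rollout status deployment karmada-apiserver
```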

Step 4: Deploy karmada-aggregated-apiserver

Deploy a karmada-aggregated-apiserver instance to your host cluster with the following manifests:


```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  selector:
    matchLabels:
      app: karmada-aggregated-apiserver
      apiserver: "true"
  replicas: 1
  template:
    metadata:
      labels:
        app: karmada-aggregated-apiserver
        apiserver: "true"
    spec:
      automountServiceAccountToken: false
      containers:
        - name: karmada-aggregated-apiserver
          image: swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:v1.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: k8s-certs
              mountPath: /etc/kubernetes/pki
              readOnly: true
            - name: kubeconfig
              subPath: kubeconfig
              mountPath: /etc/kubeconfig
          command:
            - /bin/karmada-aggregated-apiserver
            - --kubeconfig=/etc/kubeconfig
            - --authentication-kubeconfig=/etc/kubeconfig
            - --authorization-kubeconfig=/etc/kubeconfig
            - --karmada-config=/etc/kubeconfig
            - --etcd-servers=https://etcd-client.karmada-system.svc.cluster.local:2379
            - --etcd-cafile=/etc/kubernetes/pki/server-ca.crt
            - --etcd-certfile=/etc/kubernetes/pki/karmada.crt
            - --etcd-keyfile=/etc/kubernetes/pki/karmada.key
            - --tls-cert-file=/etc/kubernetes/pki/karmada.crt
            - --tls-private-key-file=/etc/kubernetes/pki/karmada.key
            - --audit-log-path=-
            - --feature-gates=APIPriorityAndFairness=false
            - --audit-log-maxage=0
            - --audit-log-maxbackup=0
          resources:
            requests:
              cpu: 100m
      volumes:
        - name: k8s-certs
          secret:
            secretName: karmada-cert-secret
        - name: kubeconfig
          secret:
            secretName: kubeconfig
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: karmada-aggregated-apiserver
```

Then, deploy the APIService to karmada-apiserver with the following manifests.


```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.cluster.karmada.io
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  insecureSkipTLSVerify: true
  group: cluster.karmada.io
  groupPriorityMinimum: 2000
  service:
    name: karmada-aggregated-apiserver
    namespace: karmada-system
  version: v1alpha1
  versionPriority: 10
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
spec:
  type: ExternalName
  externalName: karmada-aggregated-apiserver.karmada-system.svc.cluster.local
```
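Note that these objects belong to the Karmada control plane, not the host cluster, so apply them through the karmada-apiserver kubeconfig. The kubeconfig path and file name below are illustrative; adjust them to your installation:

```bash
# Apply against karmada-apiserver, not the host cluster's apiserver.
# /etc/karmada/karmada-apiserver.config is a conventional location for
# the Karmada kubeconfig; use wherever yours actually lives.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f apiservice.yaml
```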

Step 5: Check cluster status

If everything goes well, all your clusters appear just as they did before the upgrade.

```bash
kubectl get clusters
```

karmada-agent requires an extra impersonate verb

In order to proxy user requests, karmada-agent now requires an extra impersonate verb. Please check the ClusterRole configuration or apply the following manifest.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karmada-agent
rules:
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['*']
  - nonResourceURLs: ['*']
    verbs: ["get"]
```
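The manifest goes to each member cluster where karmada-agent runs; the context and file names below are placeholders:

```bash
# Update the karmada-agent ClusterRole on a member cluster.
kubectl --context <member-cluster> apply -f karmada-agent-clusterrole.yaml
```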

MCS feature now supports Kubernetes v1.21+

Since discovery.k8s.io/v1beta1 of EndpointSlice was deprecated in favor of discovery.k8s.io/v1 in Kubernetes v1.21, Karmada adopted this change in release v1.0.0. The MCS feature now requires member clusters to run Kubernetes v1.21 or later.
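A quick way to check whether a member cluster satisfies the requirement is to look for the v1 EndpointSlice API; the context name is a placeholder:

```bash
# Matches only on clusters serving discovery.k8s.io/v1 (Kubernetes v1.21+).
kubectl --context <member-cluster> api-versions | grep '^discovery.k8s.io/v1$'
```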