Prerequisites

Registry Credential

When building a function, you’ll need to push the function’s container image to a container registry such as Docker Hub or Quay.io. To do that, you first need to create a secret with your container registry credentials.

You can create this secret by filling in the REGISTRY_SERVER, REGISTRY_USER, and REGISTRY_PASSWORD fields and then running the following commands.

  REGISTRY_SERVER=https://index.docker.io/v1/
  REGISTRY_USER=<your_registry_user>
  REGISTRY_PASSWORD=<your_registry_password>
  kubectl create secret docker-registry push-secret \
      --docker-server=$REGISTRY_SERVER \
      --docker-username=$REGISTRY_USER \
      --docker-password=$REGISTRY_PASSWORD
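
You can verify that the secret was created with the expected type; a quick check (push-secret is the name used in the command above):

  kubectl get secret push-secret
  # The TYPE column should show kubernetes.io/dockerconfigjson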

Source Repository Credential

If your source code is in a private Git repository, you’ll need to create a secret containing the repository’s username and password:

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: git-repo-secret
    annotations:
      build.shipwright.io/referenced.secret: "true"
  type: kubernetes.io/basic-auth
  stringData:
    username: <cleartext username>
    password: <cleartext password>
  EOF
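
As a quick sanity check, you can confirm the secret’s type (git-repo-secret is the name from the manifest above):

  kubectl get secret git-repo-secret -o jsonpath='{.type}'
  # Expected output: kubernetes.io/basic-auth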

You can then reference this secret in the Function CR’s spec.build.srcRepo.credentials field:

  apiVersion: core.openfunction.io/v1beta1
  kind: Function
  metadata:
    name: function-sample
  spec:
    version: "v2.0.0"
    image: "openfunctiondev/sample-go-func:v1"
    imageCredentials:
      name: push-secret
    build:
      builder: openfunction/builder-go:latest
      env:
        FUNC_NAME: "HelloWorld"
        FUNC_CLEAR_SOURCE: "true"
      srcRepo:
        url: "https://github.com/OpenFunction/samples.git"
        sourceSubPath: "functions/knative/hello-world-go"
        revision: "main"
        credentials:
          name: git-repo-secret
    serving:
      template:
        containers:
          - name: function # DO NOT change this
            imagePullPolicy: IfNotPresent
      runtime: "knative"
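
As a rough sketch of how this manifest is used (assuming OpenFunction is installed and both secrets above exist in the same namespace; the file name below is arbitrary):

  # Apply the manifest above (saved here as function-sample.yaml; any file name works)
  kubectl apply -f function-sample.yaml
  # Watch the function until its build completes and serving becomes ready
  kubectl get functions.core.openfunction.io function-sample -w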

Kafka

Async functions can be triggered by events in message queues such as Kafka. Below you can find the steps to set up a Kafka cluster for demonstration purposes.

  1. Install strimzi-kafka-operator in the default namespace.

    helm repo add strimzi https://strimzi.io/charts/
    helm install kafka-operator -n default strimzi/strimzi-kafka-operator
  2. Run the following command to create a Kafka cluster and a Kafka topic in the default namespace. The Kafka and Zookeeper clusters created by this command use ephemeral storage (backed by emptyDir), which is suitable only for demonstration.

    Here we create a single-replica Kafka server named <kafka-server> and a single-replica topic named <kafka-topic> with 10 partitions. Replace the placeholders with your own names.

    cat <<EOF | kubectl apply -f -
    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: <kafka-server>
      namespace: default
    spec:
      kafka:
        version: 3.3.1
        replicas: 1
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        config:
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
          default.replication.factor: 1
          min.insync.replicas: 1
          inter.broker.protocol.version: "3.1"
        storage:
          type: ephemeral
      zookeeper:
        replicas: 1
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: <kafka-topic>
      namespace: default
      labels:
        strimzi.io/cluster: <kafka-server>
    spec:
      partitions: 10
      replicas: 1
      config:
        cleanup.policy: delete
        retention.ms: 7200000
        segment.bytes: 1073741824
    EOF
  3. Run the following command to check the Pod status and wait for the Kafka and Zookeeper Pods to be up and running (an alternative readiness check follows this list).

    $ kubectl get po
    NAME                                              READY   STATUS    RESTARTS   AGE
    <kafka-server>-entity-operator-568957ff84-nmtlw   3/3     Running   0          8m42s
    <kafka-server>-kafka-0                            1/1     Running   0          9m13s
    <kafka-server>-zookeeper-0                        1/1     Running   0          9m46s
    strimzi-cluster-operator-687fdd6f77-cwmgm         1/1     Running   0          11m

    Run the following command to view the metadata for the Kafka cluster.

    $ kafkacat -L -b <kafka-server>-kafka-brokers:9092
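
As an alternative to watching the Pods in step 3, you can check the Strimzi custom resources directly; a minimal sketch, assuming the default namespace and the placeholder names used above:

  kubectl get kafka,kafkatopic -n default
  # Wait until the Kafka cluster reports the Ready condition
  kubectl wait kafka/<kafka-server> --for=condition=Ready --timeout=300s -n default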

WasmEdge

OpenFunction now supports using WasmEdge as the workload runtime. Below you can find the steps to set up the WasmEdge workload runtime in a Kubernetes cluster (with containerd or CRI-O as the container runtime).

Run the following steps on all nodes of your cluster (or on the subset of nodes that will host the Wasm workloads).

Step 1: Installing WasmEdge

The easiest way to install WasmEdge is to run the following command. Your system should have git and wget installed.

  wget -qO- https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- -p /usr/local
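
If the installer finishes without errors, the wasmedge binary should be available under the /usr/local prefix chosen above; a quick check:

  wasmedge --version
  # Prints the installed WasmEdge version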

Step 2: Installing container runtimes

crun

The crun project has WasmEdge support baked in. For now, the easiest approach is to download the prebuilt binary and move it to /usr/local/bin/:

  wget https://github.com/OpenFunction/OpenFunction/releases/latest/download/crun-linux-amd64
  mv crun-linux-amd64 /usr/local/bin/crun

If the above approach does not work for you, please refer to the guide on how to build and install a crun binary with WasmEdge support.
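
You can also confirm that the downloaded binary runs and was built with the WasmEdge handler; in crun builds with Wasm support, the feature list printed by --version typically includes a +WASM:wasmedge entry:

  /usr/local/bin/crun --version
  # Look for "+WASM:wasmedge" in the feature line to confirm WasmEdge support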

Step 3: Setting up CRI runtimes

Option 1: containerd

You can follow this installation guide to install containerd and this setup guide to set up containerd for Kubernetes.

First, edit the containerd configuration file /etc/containerd/config.toml and add the following section to register the crun runtime. Make sure that BinaryName matches the path of your crun binary.

  # Add crun runtime here
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
    runtime_type = "io.containerd.runc.v2"
    pod_annotations = ["*.wasm.*", "wasm.*", "module.wasm.image/*", "*.module.wasm.image", "module.wasm.image/variant.*"]
    privileged_without_host_devices = false
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
      BinaryName = "/usr/local/bin/crun"

Next, restart the containerd service:

  sudo systemctl restart containerd
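
To double-check that containerd picked up the new runtime after the restart, you can inspect its CRI configuration; a quick check, assuming crictl is installed and pointed at containerd's socket:

  # Requires crictl configured for containerd's CRI socket
  sudo crictl info | grep -A 4 '"crun"'
  # The output should include a crun entry under the configured runtimes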

Option 2: CRI-O

You can follow this installation guide to install CRI-O and this setup guide to set up CRI-O for Kubernetes.

CRI-O uses the runc runtime by default, so we need to configure it to use crun instead. This is done by editing two configuration files.

First, create the file /etc/crio/crio.conf with the following content. It tells CRI-O to use crun by default.

  [crio.runtime]
  default_runtime = "crun"

The crun runtime is in turn defined in the /etc/crio/crio.conf.d/01-crio-runc.conf file.

  [crio.runtime.runtimes.runc]
  runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
  runtime_type = "oci"
  runtime_root = "/run/runc"
  # The above is the original content

  # Add crun runtime here
  [crio.runtime.runtimes.crun]
  runtime_path = "/usr/local/bin/crun"
  runtime_type = "oci"
  runtime_root = "/run/crun"

Next, restart CRI-O to apply the configuration changes.

  systemctl restart crio
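
To confirm the change took effect, check that the CRI-O service restarted cleanly and that the default runtime is now crun (a simple check against the files edited above):

  sudo systemctl status crio --no-pager
  # The [crio.runtime] section should show default_runtime = "crun"
  sudo grep -A 1 '\[crio.runtime\]' /etc/crio/crio.conf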