Faasm Kubernetes/Knative integration

Faasm is a runtime designed to be integrated into other serverless platforms. The recommended integration is with Knative.

All of Faasm's Kubernetes and Knative configuration can be found in the k8s directory, and the relevant parts of the Faasm CLI can be found in the Knative tasks.

These steps generally assume that you have kubectl and kn installed, and that both are able to connect to your cluster.

Cluster Set-up

Google Kubernetes Engine

To set up Faasm on GKE you can do the following:

  • Set up an account and the Cloud SDK (Ubuntu quick start)
  • Create a Kubernetes cluster with Istio enabled and version >=v1.15
  • Aim for >=4 nodes with more than one vCPU
  • Set up your local kubectl to connect to your cluster (click the "Connect" button in the web interface, or see the command-line sketch after this list)
  • Check things are working by running kubectl get nodes
  • Install Knative serving as described below
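
Most of the list above can also be done from the command line with the Cloud SDK. The sketch below is a rough guide only: the cluster name, zone, node count and machine type are illustrative, and enabling the Istio add-on is not shown here (use the web console or the GKE docs for that step).

    # Create the cluster (name, zone and sizes are examples only)
    gcloud container clusters create faasm-cluster \
        --zone europe-west1-b \
        --num-nodes 4 \
        --machine-type n1-standard-4

    # Point your local kubectl at the new cluster
    gcloud container clusters get-credentials faasm-cluster --zone europe-west1-b

    # Check things are working
    kubectl get nodes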

Bare metal

If you're deploying on a bare-metal cluster then you need to update the externalIPs field in the upload-service.yml file to match your k8s master node.
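
For reference, externalIPs is a standard field on a Kubernetes Service spec. A minimal excerpt might look like the following; the IP is a placeholder for your master node's address, and the rest of upload-service.yml stays as-is:

    # Excerpt only - the remainder of upload-service.yml is unchanged
    spec:
      externalIPs:
        - 10.0.0.1   # replace with your k8s master node's IP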

You also need to install Istio as described in the Knative docs.
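
However you install it, it's worth checking that Istio is healthy before installing Knative or Faasm. The namespace below is Istio's default:

    # Istio's control plane and ingress gateway pods should be Running
    kubectl get pods -n istio-system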

Installation

Knative

Faasm requires a minimal install of Knative serving. If your cluster doesn't already have Knative installed, you can run:

    # Install
    inv knative.install

    # Check
    kubectl get pods -n knative-serving

Faasm

You can then run the Faasm deploy (where --replicas sets the number of Faasm worker replicas):

    # Bare-metal/GKE
    inv knative.deploy --replicas=4

    # Local
    inv knative.deploy --local

This might take a couple of minutes depending on the underlying cluster.
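
To check on progress while the deploy runs, you can watch the pods in the faasm namespace and list the resulting Knative service. The namespace and service names here are taken from the troubleshooting section below and may differ if you've customised the deployment:

    # Watch the Faasm pods come up
    kubectl -n faasm get pods -w

    # List Knative services via the kn CLI
    kn service list -n faasm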

Config file

Once everything has started up, you can populate your ~/faasm/faasm.ini file to avoid typing in hostnames all the time. To do this, run:

    ./bin/knative_route.sh

This should print out something like:

    [Faasm]
    invoke_host = ... # Usually the IP of your master node
    invoke_port = ... # E.g. 31380
    upload_host = ... # IP of the upload service
    upload_port = ... # Usually 8002

You can then copy-paste this into ~/faasm/faasm.ini.
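
If you don't need to edit anything, you can also redirect the script's output straight into the file (a convenience shortcut, assuming the script prints only the config shown above):

    ./bin/knative_route.sh > ~/faasm/faasm.ini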

Uploading functions

Once you have configured your ~/faasm/faasm.ini file, you can use the Faasm CLI as normal, e.g.

    inv upload demo hello
    inv invoke demo hello

Flushing Redis

When workers die or are killed, you'll need to clear the queue:

    inv redis.clear-queue --knative

Uploading and running native functions

C++

For benchmarking we need to run the functions in a more "normal" serverless way (i.e. natively in a container). To build the relevant container:

    inv knative.build-native <user> <function>

This will use a parameterised Dockerfile to create a container that runs the given function natively. You can test locally with:

    # Build the container
    inv knative.build-native <user> <function> --nopush

    # Start the container
    inv knative.native-local <user> <function>

    # Submit a request
    inv invoke <user> <function> --native

Once you're happy, you can run the following from a machine with Knative access:

    inv knative.deploy-native <user> <function>
    inv invoke --native <user> <function>

Note: for anything that requires chaining, we must run it asynchronously so that things don't get clogged up. To do this:

    inv invoke --native --poll <user> <function>

Python

To run Python functions natively we use pyfaasm and a standard Flask-based Knative Python executor. This can be found at func/knative_native.py. We can build the container with:

    inv docker.build -c knative-native-python --push

To check things locally:

    inv knative.native-python-local
    inv invoke python hello --py

To deploy, from the machine with k8s access:

    inv knative.deploy-native-python

Troubleshooting

To look at the logs for the faasm containers:

    # Find the faasm-worker-xxx pod
    kubectl --namespace=faasm get pods

    # Tail the logs for a specific pod
    kubectl -n faasm logs -f faasm-worker-xxx user-container

    # Tail the logs for all containers in the deployment
    # You only need to specify the max-log-requests if you have more than 5 containers
    kubectl -n faasm logs -f -c user-container -l serving.knative.dev/service=faasm-worker --max-log-requests=<N_CONTAINERS>

    # Get all logs from the given deployment (add a very large --tail)
    kubectl -n faasm logs --tail=100000 -c user-container -l serving.knative.dev/service=faasm-worker --max-log-requests=10 > /tmp/out.log
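
If the logs alone don't explain a failure, describing the pod and listing recent events in the namespace usually helps (the pod name below is a placeholder, as above):

    # Show container statuses, restart reasons and scheduling information
    kubectl -n faasm describe pod faasm-worker-xxx

    # Recent events in the faasm namespace, newest last
    kubectl -n faasm get events --sort-by=.lastTimestamp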

Isolation and privileges

Faasm uses namespaces and cgroups to achieve isolation, so containers running Faasm workers need privileges that they don't otherwise have. The current solution to this is to run the containers in privileged mode. This may not be available on certain clusters, in which case you'll need to set the following environment vars:

    CGROUP_MODE=off
    NETNS_MODE=off

Redis-related set-up

There are a couple of tweaks required to handle running Redis, as detailed in the Redis admin docs.

First, you can turn off transparent huge pages (add transparent_hugepage=never to GRUB_CMDLINE_LINUX in /etc/default/grub and run sudo update-grub).
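
If you want to avoid a reboot, the same Redis guidance can be applied at runtime (note this does not persist across reboots):

    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled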

Then, if testing under very high throughput, you can set the following in /etc/sysctl.conf:

    # Connection-related
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_fin_timeout = 15
    net.core.somaxconn = 65535
    # Memory-related
    vm.overcommit_memory = 1
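
To apply these settings without rebooting, reload the file with sysctl:

    sudo sysctl -p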