Provisioners

Provisioners on tsuru are responsible for creating and scheduling units for applications and node-containers. Originally tsuru supported only one provisioner, called docker. This began changing with tsuru release 1.2, as support for docker swarm mode and Kubernetes as provisioners was added.

Provisioners are also responsible for knowing which nodes are available for the creation of units, registering new nodes and removing old nodes.

Provisioners are associated with pools, and tsuru uses pools to find out which provisioner is responsible for each application. A single tsuru installation can manage different pools with different provisioners at the same time.
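
For example, the provisioner can be chosen when a pool is created. A minimal sketch, assuming a tsuru client whose pool-add command supports the --provisioner flag (the pool names here are hypothetical):

    # pool handled by the kubernetes provisioner
    tsuru pool-add k8s-pool --provisioner kubernetes

    # pools created without an explicit provisioner use the default docker provisioner
    tsuru pool-add legacy-pool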

docker provisioner

This is the default and original provisioner for tsuru. It comes from a time when no other scheduler/orchestrator was available for Docker. Neither swarm nor kubernetes existed yet, so we had to create our own scheduler, which uses the docker-cluster library and is built into the docker provisioner.

The provisioner uses MongoDB to store metadata on existing nodes and containers on each node, and also to track images as they are created on each node. To accomplish this, tsuru talks directly to the Docker API on each node, which must be allowed to receive connections from the tsuru API using HTTP or HTTPS.

Tsuru relies on the default big-sibling node-container to monitor containers on each node and report back containers that are unavailable or that had their addresses changed by docker restarting them. The docker provisioner is then responsible for rescheduling such containers on new nodes.

There’s no need to register a cluster to use the docker provisioner; simply adding new nodes with the Docker API running on them is enough for tsuru to use them.
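
As a sketch, registering an existing machine that already exposes the Docker API could look like the following (the address and pool name are illustrative):

    tsuru node-add --register address=http://10.0.0.5:2375 pool=mypool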

Scheduling of units on nodes prioritizes high availability of application containers. To accomplish this, tsuru tries to create each new container on the node with the fewest containers from that application. If there are multiple nodes with no containers from the application being scheduled, tsuru will try to create new containers on nodes whose metadata differs from that of the nodes where the application’s containers already exist.

swarm provisioner

The swarm provisioner uses docker swarm mode, available in Docker 1.12.0 onward. Swarm itself is responsible for maintaining available nodes and containers, and tsuru doesn’t store anything in its internal storage.

To use the swarm provisioner, it’s first necessary to register a Swarm cluster in tsuru, which must point to a Docker API server that will behave as a Swarm manager. tsuru itself will make the docker swarm init API call if the cluster address is not a Swarm member yet.
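
A hedged example of registering such a cluster, assuming the cluster-add command of the tsuru client (the address, certificate paths and pool name are illustrative):

    tsuru cluster-add my-swarm swarm \
        --addr https://swarm-manager.example.com:2376 \
        --cacert ./ca.pem --clientcert ./cert.pem --clientkey ./key.pem \
        --pool swarm-pool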

Because not all operations are yet available through the swarm manager endpoint (namely commit and push operations), tsuru must still be able to connect directly to the docker endpoint of each node for those operations. Also, adding a new node to tsuru will call swarm join on that node.
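
For instance, adding a node to a pool managed by the swarm provisioner might look like this sketch (the address and pool name are illustrative); tsuru would then issue the swarm join call against it:

    tsuru node-add --register address=http://10.0.0.7:2375 pool=swarm-pool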

Scheduling and availability of containers is completely controlled by Swarm. For each tsuru application/process, tsuru will create a Swarm service called <application name>-<process name>. Adding and removing units simply updates the service.
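
As an illustration, for a hypothetical app named blog with a web process, adding units through tsuru translates into an update of the Swarm service blog-web:

    tsuru unit-add 2 -a blog -p web

    # on the Swarm manager, the service reflects the new unit count
    docker service ls --filter name=blog-web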

An overlay network is created for each application, and every service created for the application is connected to this same overlay network, allowing containers to communicate directly with each other.
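
On the Swarm manager, these per-application networks can be listed with standard Docker commands (the exact network names tsuru assigns are not shown here):

    docker network ls --filter driver=overlay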

Node containers, e.g. big-sibling, are also created as Swarm services with mode set to Global, which ensures they run on every node.

kubernetes provisioner

The kubernetes provisioner uses Kubernetes to manage nodes and containers; tsuru doesn’t store anything related to nodes and containers in its internal storage. It’s first necessary to register a Kubernetes cluster in tsuru, which must point to the Kubernetes API server.
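
A sketch of such a registration, again assuming the cluster-add command of the tsuru client (the address, certificate paths and pool name are illustrative):

    tsuru cluster-add my-kube kubernetes \
        --addr https://k8s-api.example.com:6443 \
        --cacert ./ca.pem --clientcert ./cert.pem --clientkey ./key.pem \
        --pool k8s-pool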

Scheduling is controlled exclusively by Kubernetes. For each application/process, tsuru will create a Deployment controller. Changes to the application, such as adding and removing units, are executed by updating the Deployment through the Kubernetes API, with a rolling update configured. Node containers are created using DaemonSets.
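
Assuming an app named blog with a web process, and assuming (for illustration) that the Deployment is named after the application and process, the created objects can be inspected with kubectl:

    kubectl get deployments
    kubectl get daemonsets

    # follow a rolling update triggered by adding/removing units
    kubectl rollout status deployment/blog-web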

A Service controller is also created for every Deployment; this allows direct communication between services without the need to go through a tsuru router.
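
The matching Service objects can be listed the same way; other applications in the cluster can then reach them through the cluster’s internal DNS (the name below is illustrative):

    kubectl get services

    # e.g. a pod could reach the hypothetical blog-web service at
    # blog-web.<namespace>.svc.cluster.local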

Adding new nodes is possible using the normal tsuru workflow described in adding new nodes. However, tsuru will only create a Node resource using the Kubernetes API and will assume that the new node already has a kubelet process running on it and that it’s accessible to the Kubernetes API server.
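
A sketch of registering such a node, assuming its kubelet is already running and reachable by the API server (the address and pool name are illustrative):

    tsuru node-add --register address=https://10.0.0.6 pool=k8s-pool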

Source: https://docs.tsuru.io/1.6/managing/provisioners.html